This was caused by a stale dmraid setup: device-mapper had claimed the drives even though they were blank/unused, so mdadm could not open them.
1. Check the table.
dmsetup table
ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172: 0 3905945600 linear 8:0 0
ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p3: 0 37124096 linear 253:0 284547072
ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p2: 0 283496448 linear 253:0 1050624
ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p1: 0 1048576 linear 253:0 2048
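Before removing anything, it helps to see how these mappings stack. A quick read-only check (the long ddf1 name is just copied from the table above): dmsetup ls --tree shows that the p1-p3 mappings sit on top of the parent device, and the parent's non-zero "Open count" in dmsetup info is what turns into the "busy" error below.

# dependency tree: the pN mappings are built on the parent ddf1 device
dmsetup ls --tree
# the parent's open count stays non-zero while the pN mappings still exist
dmsetup info ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172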
2. Delete the entry.
dmsetup remove ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172
device-mapper: remove ioctl on ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172 failed: Device or resource busy
Command failed
* Delete the partition mappings first (the entries ending in p3, p2, p1), then remove the parent device; the full sequence is sketched below.
After that mdadm will be happy.
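A minimal sketch of the whole sequence, using the ddf1 name from the table above. The /dev/md0 and /dev/sda names at the end are placeholders for whichever array and disk you are actually working with:

# partition mappings first, parent device last
dmsetup remove ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p3
dmsetup remove ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p2
dmsetup remove ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172p1
dmsetup remove ddf1_44656c6c202020201000006010281f0b3f5195b77cf86172
# with the mappings gone, the disk can be added to the array (placeholder names)
mdadm --manage /dev/md0 -a /dev/sda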
Here's another example, this time with LVM causing the issue:
[root@localhost ~]# mdadm --manage /dev/md1 -a /dev/sdb2
mdadm: Cannot open /dev/sdb2: Device or resource busy
[root@localhost ~]# dmsetup table
cl-swap: 0 4194304 linear 253:2 2048
[root@localhost ~]# dmsetup remove cl-swap
[root@localhost ~]# mdadm --manage /dev/md1 -a /dev/sdb2
mdadm: added /dev/sdb2
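A side note, not part of the transcript above: when it is not obvious which mapping is holding a partition, a couple of read-only checks narrow it down before you remove anything. /dev/sdb2 here is just the partition from the example:

# show the block-device stack: any dm entry hanging under sdb2 is holding it open
lsblk /dev/sdb2
# the same picture from the device-mapper side, as a dependency tree
dmsetup ls --tree
# the kernel also lists holders directly
ls /sys/block/sdb/sdb2/holders/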