cat /proc/mdstat
Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md124 : inactive sdj1[0](S)
1048512 blocks
Solution: we "run" the array
sudo mdadm --manage /dev/md124 --run
mdadm: started array /dev/md/0_0........
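To confirm the array really came up and see its state, a detail query on the same device works:
mdadm --detail /dev/md124   # shows whether it is clean or degraded and which members are active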
Is an mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS?
cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb4[0] sda4[1]
897500672 blocks super 1.2 2 near-copies [2/2] [UU]
[==========>..........] check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec
........
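If the check is hurting performance, a couple of common remedies (run as root, and using the md127 device from the output above) are to throttle it or cancel it outright:
# cap the background check/resync speed, in KB/s per device
echo 10000 > /proc/sys/dev/raid/speed_limit_max
# or stop the running check entirely; it can be re-run later
echo idle > /sys/block/md127/md/sync_action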
It is unfortunate that LXC's dir mode is completely insecure and exposes far too much information from the host. I wonder if there will eventually be a way to break into the host filesystem or another container's storage?
OpenVZ has better security:
[root@ev ~]# cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
/dev/simfs 843G 740G 61G........
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid10 sdc1[0] sdb1[2]
1953382400 blocks super 1.2 512K chunks 2 far-copies [2/1] [U_]
resync=PENDING
bitmap: 15/15 pages [60KB], 65536KB chunk
Solution force repai........
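The fix that usually gets a PENDING resync moving (the same one shown for the auto-read-only case further down) is to switch the array back to read-write as root, using the md127 device from above:
mdadm --readwrite /dev/md127   # clears auto-read-only so the pending resync can start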
myguy@devbox:~$ sudo mdadm -As
myguy@devbox:~$ cat /proc/mdstat |grep sdf
md125 : inactive sdf3[2](S)
sudo mdadm --manage /dev/md125 --run
mdadm: started /dev/md125
........
A great trick if you have a bunch of mdadm drives connected, are digging through them for backups/archives, and don't know what is where!
for md in $(grep -o '^md[0-9]*' /proc/mdstat); do mkdir -p /mnt/$md; mount /dev/$md /mnt/$md; done........
Remove the failed partition /dev/sde1
mdadm --manage /dev/md99 -r /dev/sde1
mdadm: hot removed /dev/sde1 from /dev/md99
Now add another drive back to replace it:
# mdadm --manage /dev/md99 -a /dev/sdf1
mdadm: added /dev/sdf1
A "cat /proc/mdstat" should show it resyncing if all is well.........
Here is the scenario: you or a client have a remote machine that was installed as a standard/default minimal CentOS 6.x machine on a single disk with LVM, for whatever reason. Many people do not know how to install to a RAID array, so this problem is common, and why reinstall if you don't need to? In some cases on a remote system you can't easily reinstall without physical or KVM access.
So in this case you add a second physical disk, or already ha........
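The usual first step for this kind of conversion, sketched here with placeholder device names (/dev/md0 and /dev/sdb1 are assumptions, not from the original post), is to create a degraded RAID 1 on the new disk and leave the original disk's slot empty:
# build a one-member RAID 1; data is copied onto it and the old disk is added in later
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1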
[3805108.257042] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257052] sd 0:0:0:0: [sda] Write Protect is off
[3805108.257054] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[3805108.257066] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[3805108.257083] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257090] sd 0:0:0:0: [sda] Write Protect is off........
Have you ever unplugged the wrong drive and then had to rebuild the entire array? It may not be a big deal in some ways but it does make your system vulnerable until the rebuild is done.
Many distros enable the "bitmap" feature by default; it keeps track of which parts of the array need to be resynced after a drive is temporarily removed, so only what has changed needs to be synced.
To enable bitmap to speed up rebuilds and sync........
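A sketch of how it can be turned on for an existing array (md0 here is just an example device name):
# add an internal write-intent bitmap so interrupted syncs only redo changed blocks
mdadm --grow --bitmap=internal /dev/md0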
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
1363020736 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 8.3% (113597440/1363020736) finish=276.2min speed=75366K/sec
........
mdadm --manage /dev/md3 --add /dev/sda1
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdd2[1] sdd1[2](S)
31270272 blocks
md3 : active raid1 sda1[2] sdb1[1] sdc1[3](F)
943730240 blocks [2/1] [_U]
[>....................]........
This array is a RAID 1 and in this case 1 of the 2 drives failed (a WD drive; I've found them to be the weakest and most unreliable of any brand, and they are easily damaged/DOA in shipping).
mdadm --manage /dev/md0 --add /dev/sdb1
The above assumes the array you want to add to is /dev/md0 and the device you are adding is /dev/sdb1.
*One thing to remember is to make sure the partition you are adding is the correct size for the array. You can also g........
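One quick way to compare sizes before adding (the surviving member is assumed to be /dev/sda1 here; the post doesn't name it):
# both should report the same size in bytes, or the new partition slightly larger
blockdev --getsize64 /dev/sda1
blockdev --getsize64 /dev/sdb1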
This doesn't seem to be widely known (maybe it's in some documentation that none of us read, though), but there's an easy way to check the integrity of any mdadm array:
sudo echo check > /sys/block/md0/md/sync_action
-bash: /sys/block/md0/md/sync_action: Permission denied
Plain sudo will not work here as written: the redirection is performed by your own non-root shell before sudo ever runs (and echo is a bash built-in anyway), so writing to the file this way only works from an actual root shell.
/sys/devices/virtu........
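If you'd rather not open a root shell, the usual workarounds hand the file write itself to a privileged process:
echo check | sudo tee /sys/block/md0/md/sync_action
# or
sudo sh -c 'echo check > /sys/block/md0/md/sync_action'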
This really made me nervous, but notice the mdstat says "check". This is because in Ubuntu there is a scheduled mdadm cron script that runs on Sundays at 00:57 and checks your entire array. This is a good practice because it catches gradual but unnoticed data corruption, which I never thought of.
As long as the check completes properly you have peace of mind knowing that your data integrity is assured and that your hard drives are functioning properly (I'........
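On Ubuntu/Debian the schedule normally lives in the mdadm package's cron job (the path below is from memory and may vary by release):
cat /etc/cron.d/mdadm   # shows when the packaged checkarray script gets triggered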
I separated the 2 drives in the RAID 1 array.
One is the old one, /dev/sda, which is out of date, while the other separated one, /dev/sdc, was mounted and used elsewhere and has more (newer) data on it.
I wonder how mdadm will handle this:
usb-storage: device scan complete
md: md127 stopped.
md: bind
md: md127: raid array is not clean -- starting background reconstruction
raid1: raid set md127 active with 1 out of 2 m........
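One way to see which copy mdadm considers more up to date is to compare the event counters in each member's superblock (the partition names below are assumptions; the post only names the whole drives /dev/sda and /dev/sdc):
mdadm --examine /dev/sda1 | grep -i events
mdadm --examine /dev/sdc1 | grep -i events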
Create New RAID 1 Array:
First set up your partitions (make sure they are exactly the same size)
In my example I have sda3 and sdb3 which are 500GB in size.
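If the second disk is still blank, one way to get identical partitions (using the example's sda/sdb, and assuming an MBR/msdos partition table) is to copy the layout across:
# replicate sda's partition table onto sdb (this overwrites whatever is on sdb)
sfdisk -d /dev/sda | sfdisk /dev/sdb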
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: array /dev/md2 started.
Check Status Of The Array
*Note I already have other arrays md0 and md1.
You can see below that md2 is syn........
If you have "(auto-read-only)" beside an array, I have no idea why that happens but it is easy to fix.
Just run "mdadm --readwrite /dev/md1" (replace md1 with the affected device) and it will begin to resync.
md1 : active (auto-read-only) raid1 sdb2[0] sda2[1]
19534976 blocks [2/2] [UU]
resync=PENDING
........
mdadm --assemble --scan
mdadm: /dev/md/diaghost05102010:2 has been started with 2 drives.
mdadm: /dev/md/diaghost05102010:1 has been started with 2 drives.
mdadm: /dev/md/diaghost05102010:0 has been started with 2 drives.
-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md125 : active raid1 sda1[0] sdb1[1]
14658185 blocks super 1.2........
Before we start: I take no responsibility for this. You should have a backup, and if you make a mistake during this process you could wipe out all of your data. So back up somewhere else before starting, as a precaution, or make sure it's data you can afford to lose.
The RAID 1 Setup (Hardware Wise)
I've already set up my 2 x 1TB (Seagate) drives with identical partitions; make sure your new hard drive (the empty one) is set up like your curr........