This article about migrating to a CentOS 7/8 mdadm RAID array covers a lot of ground, but I wanted to focus specifically on what newer versions of CentOS 7 require to boot from mdadm and what changes are necessary on CentOS 7.8+.
CentOS 7 / 8 mdadm RAID booting requirements
This assumes you are chrooting into an existing install or using it to get a new deployment ready. However, these steps can........
The cool thing here is that we only need one drive to create a RAID 10 or RAID 1 array: we simply tell the Linux mdadm utility that the other drive is "missing", and we can then add our original drive to the array after booting into the new RAID array.
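A minimal sketch of that process, with device names assumed (/dev/sdb1 being a partition on the new drive, /dev/sda1 on the original):
# create the array with the second member deliberately "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# after booting into the degraded array, add the original drive's partition
mdadm --add /dev/md0 /dev/sda1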
Step #1 Install the tools we need
yum -y install mdadm rsync
Step #2 Create your partitions on the drive that will become our RAID array
Here I assume it is /dev........
Done on CentOS 7.3. This is very important, as older guides make clear it used to be a lot easier and simpler! Hint: do not use grub2-install!
If you have trouble booting after this, check this CentOS mdadm RAID booting/fixing guide.
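Roughly, the pieces CentOS 7 needs done inside the chroot look like this (a sketch, assuming a BIOS install with the arrays already assembled, not an exact recipe):
# record the arrays so the initramfs can assemble them at boot
mdadm --detail --scan >> /etc/mdadm.conf
# rebuild the initramfs with mdraid support and mdadm.conf baked in
dracut --force --mdadmconf
# regenerate the grub2 config (see the hint above about grub2-install)
grub2-mkconfig -o /boot/grub2/grub.cfg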
One huge caveat if you are an old-school user or sysadmin who has avoided UEFI booting
The nor........
In short, the two drives in the array were /dev/sdd and /dev/sde. The kernel saw they were unplugged and took them down, as you can see below.
mdadm caught the first one being unplugged (/dev/sde) and disabled the missing drive. However, when the final drive that was part of the array was unplugged, it didn't notice at all. Instead it complains later about I/O errors on drives that the kernel knows no longer exist.
[45817.162728] ata4: exception........
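If you end up in this half-dead state, mdadm can fail and remove members whose device nodes have vanished using the "detached" keyword (the array name /dev/md0 here is just an example):
# fail and remove any member whose device node no longer exists
mdadm /dev/md0 --fail detached --remove detached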
1.) Replicate the number of partitions on your new drives.
gdisk /dev/sda
gdisk /dev/sdb
I created 3 partitions of the same size.
partition #1: +1G (/boot)
partition #2: +60G (swap)
partition #3: rest of it (/)
#note if you are using GPT/gdisk you need to create a separate partition at least 1MB in size (in my case I would add a 4th partition and mark it type ef02).........
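If you'd rather script the replication than retype it in gdisk, sgdisk can clone a GPT layout; a sketch (be very careful with the argument order, the -R argument is the TARGET that gets overwritten):
# copy /dev/sda's partition table onto /dev/sdb
sgdisk -R /dev/sdb /dev/sda
# give the copy new random GUIDs so the two disks don't clash
sgdisk -G /dev/sdb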
I keep reading that these drives are slower, but they are cheap, still SSDs, and work very fast for my needs.
As you can see, the sequential read is 481-491MB/s. If I put them in mdadm RAID 10 mode (normal RAID 1), they should give me well over 900MB/s, with redundancy, while being very cheap for what they offer.
[1232206.315622] scsi 8:0:1:0: Direct-Access ATA ADATA SU800........
The only way I've found in mdadm to make 2 drives perform like a proper RAID 1 should (i.e. the read speed is 2x that of a single drive) is to use --layout=f2 (far 2).
mdadm RAID 10 performance issues
Be very aware that mdadm seems to default to --layout=n2 (near 2). In this scenario that means you get mdadm RAID 1 performance (the maximum read speed of a single drive).
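So you have to ask for the far layout at creation time; something like this (device names are just examples):
# 2-drive mdadm RAID 10 with the far 2 layout so reads stripe across both drives
mdadm --create /dev/md0 --level=10 --raid-devices=2 --layout=f2 /dev/sda1 /dev/sdb1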
dd if=/dev/md126 of=/dev/null bs=1M cou........
Here is the scenario: you or a client have a remote machine that was installed as a standard/default minimal CentOS 6.x machine on a single disk with LVM, for whatever reason. Many people do not know how to install to a RAID array, so this problem is common, and why reinstall if you don't need to? In some cases you can't easily reinstall a remote system without physical or KVM access.
So in this case you add a second physical disk or already ha........
I was surprised to see that Linux Mint, at the latest 17.2 version, still has NO mdadm installer option, and worse, the installer will not be able to create a proper booting environment even when you do install it.
How to set up mdadm in the Linux Mint LiveCD
sudo su
apt-get install mdadm
# partition as you need and then create your mdadm devices
# create your SWAP md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /d........
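Once the devices are created, on the Mint/Ubuntu side you generally also want the arrays recorded and the initramfs rebuilt (a sketch, assuming the stock Debian-style paths):
# save the array definitions where the initramfs tools expect them
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so it can assemble the arrays at boot
update-initramfs -u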
grub> root (hd2,1)
root (hd2,1)
Filesystem type unknown, partition type 0x83
grub> root (hd2,2)
root (hd2,2)
Filesystem type is ext2fs, partition type 0x83
grub> setup (hd2)
setup (hd2)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... no
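When setup can't find stage1 like this, legacy grub's find command will tell you which (hdX,Y) actually has it:
grub> find /grub/stage1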
# A weird thing about grub is that the drive you boot from is considered hd0
For example when booted fu........
mdadm --create /dev/md1 --level 10 --raid-devices=2 /dev/sdb2 /dev/sdc2 --layout=f2 --metadata=0.90
Note that layout=f2 or layout=n2 is very important. Also be aware that --raid-devices needs a count; leave it out, as in the command below, and you'll get a complaint like this:
mdadm --create /dev/md0 --level 10 --raid-devices /dev/sdb1 /dev/sdc1 missing missing
mdadm: invalid number of raid devices: /dev/sdb1
It is basically more like a prop........
I messed up the bootloader by accident on a standard CentOS 6.3 install because I turned the /dev/vda1 boot partition into an mdadm RAID 1. This was all done correctly aside from one point I didn't realize was an issue: metadata=0.90 is the only metadata version that will allow you to boot (otherwise grub won't work and you won't boot).
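In other words, if old grub has to read the array, create it with the 0.90 superblock explicitly, roughly like this (the second member here is hypothetical):
# 0.90 metadata sits at the end of the partition, so grub can still read /boot
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/vda1 missing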
So the next step is rescue mode from a CD, right? The problem you will find is that grub does not detect your hard drives; this, I believe, is be........
[3805108.257042] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257052] sd 0:0:0:0: [sda] Write Protect is off
[3805108.257054] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[3805108.257066] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[3805108.257083] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257090] sd 0:0:0:0: [sda] Write Protect is off........
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
1363020736 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 8.3% (113597440/1363020736) finish=276.2min speed=75366K/sec
........
This booting error is because the Xen PV guest image uses the Xen kernel, which is not compatible with anything but a host running a Xen kernel.
I ran kpartx -av virtual.img and it created some partitions that showed up in fdisk.
I mounted it, chrooted into it, removed the Xen kernel, and installed a normal kernel, but Grub still shows the same kernel (only the Xen one).
This is strange but it seems like this Xen PV guest has some sort of hidden or........
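For reference, the mount-and-chroot dance looks roughly like this (the /dev/mapper name is a guess, take it from kpartx's actual output):
# map the partitions inside the image to loop devices
kpartx -av virtual.img
# mount the guest's root partition and chroot into it
mount /dev/mapper/loop0p1 /mnt
chroot /mnt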
GNU GRUB version 0.97 (640K lower / 3072K upper memory)
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename.]
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup........
I separated the 2 drives in the RAID 1 array.
One, /dev/sda, is the old one and is out of date, while the other one, /dev/sdc, was attached to another machine, mounted, and used, so it has more (updated) data.
I wonder how mdadm will handle this:
usb-storage: device scan complete
md: md127 stopped.
md: bind
md: md127: raid array is not clean -- starting background reconstruction
raid1: raid set md127 active with 1 out of 2 m........
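Before letting it rebuild anything you can compare the two halves yourself; the superblock event counters show which member is the most recent (device names from above, adjust if your members are partitions):
# the member with the higher Events count is the up-to-date one
mdadm --examine /dev/sda
mdadm --examine /dev/sdc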
Moving to RAID was a pain.
What you have to do is the following from an existing install:
Install mdadm
Create your mdadm RAID 1 array on your spare hard drive.
Start it with the missing disk.
rsync the entire contents of your current / to the md partition.
Here's a good way of doing it:
rsync -Pha --exclude=/proc/* --exclude=/sys/* --exclude=/mnt/* /. /mnt/md2........
[27969.398749] sd 5:0:0:0: [sdb] 3907029168 512-byte hardware sectors (2000399 MB)
[27969.398749] sd 5:0:0:0: [sdb] Write Protect is off
[27969.398749] sd 5:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[27969.398749] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[27972.117543] ata6.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
[27972.117543] ata6.00: irq_stat 0x48000000
[27972.117543] ata6.00: cmd 60/08:00:ff:7........
This is obviously a bug in the r8169 kernel module and it seems to affect a lot of people. I upgraded to the latest kernel and hope this won't happen anymore, as it is a very serious error. It is especially serious for those running servers with this chipset: who can afford for the NIC to randomly go off-line for no apparent reason?
[655548.189113] type=1505 audit(1277067560.902:5): operation="profile_load" name="/usr/bin/freshclam"........