mdadm RAID 1 adventures

I separated the two drives of a RAID 1 array.
One, /dev/sda, is the old member and is out of date; the other, /dev/sdc, was moved to another machine, mounted, and written to, so it holds the newer (updated) data.

I wonder how mdadm will handle this:

usb-storage: device scan complete
md: md127 stopped.
md: bind
md: md127: raid array is not clean -- starting background reconstruction
raid1: raid set md127 active with 1 out of 2 mirrors
md: md126 stopped.
md: bind
raid1: raid set md126 active with 1 out of 2 mirrors
md: md125 stopped.
md: bind
raid1: raid set md125 active with 1 out of 2 mirrors
kjournald starting. Commit interval 5 seconds
EXT3-fs warning: checktime reached, running e2fsck is recommended
EXT3 FS on md127, internal journal
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
spurious 8259A interrupt: IRQ7.
md: md127 stopped.
md: unbind
md: export_rdev(sda3)
ata1: exception Emask 0x10 SAct 0x0 SErr 0x90202 action 0xe frozen
ata1: irq_stat 0x00400000, PHY RDY changed
ata1: SError: { RecovComm Persist PHYRdyChg 10B8B }
ata1: hard resetting link
ata1: link is slow to respond, please be patient (ready=0)
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1.00: configured for UDMA/133
ata1: EH complete
sd 0:0:0:0: [sda] 3907029168 512-byte hardware sectors (2000399 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
ata3: exception Emask 0x10 SAct 0x0 SErr 0x40d0000 action 0xe frozen
ata3: irq_stat 0x00400040, connection status changed
ata3: SError: { PHYRdyChg CommWake 10B8B DevExch }
ata3: hard resetting link
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3.00: ATA-8: WDC WD20EARS-00S8B1, 80.00A80, max UDMA/133
ata3.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)
ata3.00: configured for UDMA/133
ata3: EH complete
scsi 2:0:0:0: Direct-Access ATA WDC WD20EARS-00S 80.0 PQ: 0 ANSI: 5
sd 2:0:0:0: [sdc] 3907029168 512-byte hardware sectors (2000399 MB)
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: [sdc] 3907029168 512-byte hardware sectors (2000399 MB)
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: sdc1 sdc3
sd 2:0:0:0: [sdc] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg2 type 0

-bash-3.1# fdisk -l

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1825    14659281   83  Linux
/dev/sda2   *        1826        9121    58605120   83  Linux
/dev/sda3            9122      243201  1880247600   83  Linux

Disk /dev/sdb: 1002 MB, 1002438656 bytes
255 heads, 63 sectors/track, 121 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         122      978912+   c  W95 FAT32 (LBA)
Partition 1 has different physical/logical endings:
     phys=(120, 254, 63) logical=(121, 222, 37)

Disk /dev/md126: 60.0 GB, 60011577344 bytes
2 heads, 4 sectors/track, 14651264 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md126 doesn't contain a valid partition table

Disk /dev/md125: 15.0 GB, 15009981440 bytes
2 heads, 4 sectors/track, 3664546 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md125 doesn't contain a valid partition table

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        1825    14659281   83  Linux
/dev/sdc3            9122      243201  1880247600   83  Linux
-bash-3.1# mdadm -A -s
mdadm: /dev/md/diaghost05102010:2 exists - ignoring
mdadm: /dev/md127 has been started with 1 drive (out of 2).
mdadm: /dev/md/diaghost05102010:2 exists - ignoring
mdadm: /dev/md124 has been started with 1 drive (out of 2).
-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md124 : active raid1 sda3[1]
1880246440 blocks super 1.2 [2/1] [_U]

md125 : active raid1 sda1[1]
14658185 blocks super 1.2 [2/1] [_U]

md126 : active raid1 sda2[1]
58605056 blocks [2/1] [_U]

md127 : active raid1 sdc3[0]
1880246440 blocks super 1.2 [2/1] [U_]

unused devices: &lt;none&gt;

dmesg:

md: md127 stopped.
md: bind
md: bind
md: kicking non-fresh sda3 from array!
md: unbind
md: export_rdev(sda3)
raid1: raid set md127 active with 1 out of 2 mirrors
md: md124 stopped.
md: bind
raid1: raid set md124 active with 1 out of 2 mirrors

=====================
Looking at /proc/mdstat, it assembled the new, up-to-date disc (sdc3) as /dev/md127, and the old one (sda3) as /dev/md124.

So it somehow treats them as two totally separate arrays.
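A quick way to confirm that the two md devices really are two halves of one array, and which half is fresher, is to compare the Array UUID and Events fields that mdadm -E prints for each member. A minimal sketch, using the superblock values from this box as pasted sample data (on a live system you would read them straight from mdadm -E /dev/sda3 and /dev/sdc3):

```shell
# Sample superblock fields as pasted in this post; on a live system:
#   mdadm -E /dev/sda3 | egrep 'Array UUID|Events'
sda3_info="Array UUID : d851091d:3d109a17:921ab0e1:3a465899
Events : 38"
sdc3_info="Array UUID : d851091d:3d109a17:921ab0e1:3a465899
Events : 89394"

uuid_a=$(printf '%s\n' "$sda3_info" | awk -F' : ' '/Array UUID/ {print $2}')
uuid_c=$(printf '%s\n' "$sdc3_info" | awk -F' : ' '/Array UUID/ {print $2}')
ev_a=$(printf '%s\n' "$sda3_info" | awk '/Events/ {print $3}')
ev_c=$(printf '%s\n' "$sdc3_info" | awk '/Events/ {print $3}')

# Same UUID means the same array; the higher Events count is the fresh half.
[ "$uuid_a" = "$uuid_c" ] && echo "same array"
[ "$ev_c" -gt "$ev_a" ] && echo "sdc3 is fresher ($ev_c > $ev_a)"
```

The Events counter is what mdadm itself uses when it kicks a "non-fresh" member, so this is the same comparison the kernel log is reporting.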

mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Mon May 10 22:05:48 2010
Raid Level : raid1
Array Size : 1880246440 (1793.14 GiB 1925.37 GB)
Used Dev Size : 1880246440 (1793.14 GiB 1925.37 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Thu Dec 2 18:46:10 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : diaghost05102010:2
UUID : d851091d:3d109a17:921ab0e1:3a465899
Events : 89394

    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       1       0        0        1      removed

===========
mdadm -D /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Mon May 10 22:05:48 2010
Raid Level : raid1
Array Size : 1880246440 (1793.14 GiB 1925.37 GB)
Used Dev Size : 1880246440 (1793.14 GiB 1925.37 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Fri Dec 3 02:34:08 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : diaghost05102010:2
UUID : d851091d:3d109a17:921ab0e1:3a465899
Events : 38

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        3        1      active sync   /dev/sda3



Actually this seems smart: it knows /dev/sdc3 is the fresh one (its Events count is far higher).
I don't want to test this, but I wonder what would have happened if I had mounted md127 with just sda3 first. How would mdadm react if the stale half were already part of a live filesystem? I'm guessing (and hoping) it wouldn't let sdc3 join the array, and would instead let the user decide how to handle it.

This could still be an issue: why did mdadm, when I ran -A -s, start md124 as a new array when the UUID of both halves is the same?
I kind of get it: since the Events counts had diverged, it sees md124 as being the other, out-of-sync copy of the array.
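One way to take the guesswork out of assemble-scan is an explicit ARRAY line in /etc/mdadm.conf. With the array pinned to its UUID, mdadm -A -s should assemble exactly one md device for it and simply leave the non-fresh member out, rather than auto-assembling the leftover as a second array. A hypothetical line for this box (the device name is my choice; the UUID is taken from the output above):

```
# /etc/mdadm.conf
ARRAY /dev/md127 metadata=1.2 UUID=d851091d:3d109a17:921ab0e1:3a465899
```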

I'm going to manually stop md124:

mdadm --manage --stop /dev/md124

Now let's manually add the out-of-sync (old) member back to the array.

mdadm --add /dev/md127 /dev/sda3
mdadm: re-added /dev/sda3
-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md125 : active raid1 sda1[1]
14658185 blocks super 1.2 [2/1] [_U]

md126 : active raid1 sda2[1]
58605056 blocks [2/1] [_U]

md127 : active raid1 sda3[1] sdc3[0]
1880246440 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.0% (210496/1880246440) finish=446.5min speed=70165K/sec



Good, it worked as it should. When checking with mdadm, it clearly sees sdc3 as the active device and sda3 as the spare being rebuilt (I wish /proc/mdstat would clearly show which device is active and which one is being synced to):



mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : d851091d:3d109a17:921ab0e1:3a465899
Name : diaghost05102010:2
Creation Time : Mon May 10 22:05:48 2010
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 3760493152 (1793.14 GiB 1925.37 GB)
Array Size : 3760492880 (1793.14 GiB 1925.37 GB)
Used Dev Size : 3760492880 (1793.14 GiB 1925.37 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 3f924621:bd0d3b67:917ec834:90ef479f

Update Time : Fri Dec 3 03:01:21 2010
Checksum : 4acbcec8 - correct
Events : 89460


Device Role : spare
Array State : A. ('A' == active, '.' == missing)
-bash-3.1# mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : d851091d:3d109a17:921ab0e1:3a465899
Name : diaghost05102010:2
Creation Time : Mon May 10 22:05:48 2010
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 3760493152 (1793.14 GiB 1925.37 GB)
Array Size : 3760492880 (1793.14 GiB 1925.37 GB)
Used Dev Size : 3760492880 (1793.14 GiB 1925.37 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 5302b74a:fbb3097e:5f960dc5:b2d1461a

Update Time : Fri Dec 3 03:01:21 2010
Checksum : 964edf09 - correct
Events : 89460


Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
-bash-3.1#
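Since /proc/mdstat doesn't label roles, the per-device table at the bottom of mdadm -D is the place to look. A small sketch that pulls device/state pairs out of that table, fed here with a pasted sample (during a resync, mdadm -D reports the stale member as "spare rebuilding"); on a live box you would pipe mdadm -D /dev/md127 into the awk instead:

```shell
# Pasted sample of the device table from mdadm -D during the resync;
# live: mdadm -D /dev/md127 | awk '/\/dev\// {print $NF ": " $5 " " $6}'
detail="    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync        /dev/sdc3
       1       8        3        1      spare rebuilding   /dev/sda3"

# Fields 5 and 6 carry the state text; the last field is the device node.
printf '%s\n' "$detail" | awk '/\/dev\// {print $NF ": " $5 " " $6}'
```

This prints one "device: state" line per member, which answers the "who is active, who is rebuilding" question at a glance.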



So today we learned that mdadm isn't as smart as we might think: it shouldn't have created two separate arrays when the UUIDs matched.
Kicking the non-fresh device from the array was the right move, but it should not have then assembled that same device as a separate array.

It also isn't as smart as, say, DRBD (yes, I know DRBD is network block replication, but it's basically RAID 1 over the network), which doesn't resync the entire disc contents. It would be interesting if DRBD's method of marking blocks as out of sync could be applied to mdadm. It's rather silly that any change to the disk forces the whole array to be rebuilt, whereas DRBD would only resync the changed blocks.
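For what it's worth, mdadm does have a feature in this direction: the write-intent bitmap. With a bitmap enabled, md tracks which regions were written while a member was absent, and a re-added member only resyncs those regions instead of the whole 1.8 TB, much like DRBD's approach. A hypothetical invocation for this array (I haven't run this on this box):

```shell
# Add an internal write-intent bitmap to the live array; subsequent
# re-adds of a former member then trigger a partial resync only.
mdadm --grow --bitmap=internal /dev/md127
# /proc/mdstat gains a "bitmap: ..." line for the array afterwards.
cat /proc/mdstat
```

There is a small write-performance cost to keeping the bitmap updated, but for an array this size it would have saved the 446-minute full rebuild above.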


