Using the Debian installer, I created a single RAID 1 partition that contains both /boot and my root directory. The installer said GRUB installed successfully, but when I try booting the OS, GRUB apparently can't read anything:
GRUB Loading stage 1.5.
GRUB loading, please wait...
I get "Error 2" when trying to boot Debian. I also notice from a LiveCD that I can't even access hd0 or hd1 (both of which are part of the RAID 1 array). I'm thinking that if GRUB can't access them, this explains the boot error. Is this a known quirk/problem with Debian? It seems like it must be. Why can't they make this work like CentOS does? (I installed CentOS 5.5 in the same setup and it booted fine.)
From my LiveCD I can access the md array itself no problem, but inside GRUB I can't access any of the partitions individually.
Error 2: Bad file or directory type.
How can this happen? It seems CentOS sets up the RAID 1 so each partition can be accessed individually (which is what GRUB needs), but Debian sets it up so data can only be accessed through the md device/array.
mdadm version 3.1.2 produces this output when querying a created array (I don't know whether Debian uses this version, but I've seen a similar issue on other systems with this version of mdadm):
Version : 1.2
Creation Time : Mon May 10 22:05:12 2010
Raid Level : raid1
Array Size : 14658185 (13.98 GiB 15.01 GB)
Used Dev Size : 14658185 (13.98 GiB 15.01 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu May 13 05:17:33 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
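If you want to pull just the metadata version out of that output in a script, something like the following works. This is a sketch: the first lines of the dump above stand in for a real `mdadm --detail /dev/md1` call, which needs root and a live array.

```shell
# Stand-in for `mdadm --detail /dev/md1` output (first lines of the dump above).
detail_output='        Version : 1.2
  Creation Time : Mon May 10 22:05:12 2010
     Raid Level : raid1'

# The Version line has the form "Version : X.Y"; take the field after the colon.
metadata_version=$(printf '%s\n' "$detail_output" | awk '/Version/ {print $3; exit}')
echo "$metadata_version"   # prints 1.2
```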
I tested, and the Debian install returns "Version .9", which is the same format that works fine in CentOS (at least as far as GRUB's ability to access the partitions individually goes).
I don't get it: I can mount the two Debian partitions, but GRUB cannot read them; I get Error 2, just like when attempting to boot with GRUB.
*The issue here is clearly the difference between the ".9" and "1.2" superblocks. GRUB cannot access the new type of superblock, whose structure and on-disk layout are different; neither GRUB nor mdadm has accounted for this. I read about it here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#mdadm_v3.0_--_Adding_the_Concept_of_User-Space_Managed_External_Metadata_Formats
It seems that the new superblock format came into play starting with mdadm version 3.0. GRUB Legacy (0.97) really needs to be actively maintained so the distro maintainers don't have to keep making their own updates.
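The practical difference is where the superblock lives. With 0.90 metadata the superblock sits in the last 64-128 KiB of the member device, so the filesystem starts at sector 0 and each member looks like a plain partition to GRUB Legacy; 1.2 metadata puts the superblock 4 KiB from the start of the device, shifting the data, which GRUB 0.97 doesn't understand. A rough sketch of the 0.90 placement (the formula mirrors the kernel's 64 KiB-aligned calculation; the device size here is a hypothetical value in KiB, not taken from the array above):

```shell
# Hypothetical member-partition size in KiB.
dev_size_kib=14658240

# 0.90 superblock offset: round the size down to a 64 KiB boundary, then
# back off one more 64 KiB block -- i.e. the superblock sits at the END of
# the device, leaving the filesystem to start at sector 0.
sb_offset_kib=$(( (dev_size_kib & ~63) - 64 ))
echo "$sb_offset_kib"   # prints 14658176

# A 1.2 superblock, by contrast, sits 4 KiB from the START of the device,
# so the filesystem no longer begins at sector 0.
```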
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --metadata=0.90
*Note the "--metadata=0.90", that is the only metadata format that GRUB will be able to boot from (at this time).
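After creating the array, it's worth double-checking which metadata actually got written before installing GRUB; on real hardware `mdadm --examine /dev/sda1` reports it. The helper below is a hypothetical sketch of the decision, given that reported version string:

```shell
# Hypothetical helper: given the metadata version string that
# `mdadm --examine` reports for a member partition, say whether
# GRUB Legacy (0.97) will be able to boot from it.
grub_legacy_bootable() {
  case "$1" in
    0.90) echo yes ;;   # per the note above, the only bootable format for now
    *)    echo no  ;;   # 1.0/1.1/1.2 and anything newer
  esac
}

grub_legacy_bootable 0.90   # prints yes
grub_legacy_bootable 1.2    # prints no
```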
I did a test using mdadm version 3.1.2 from my LiveCD (where I also can't access individual partitions); then I used an older Debian mdadm (v2.5.6, 9 November 2006) and the partitions are accessible. Clearly Debian 5.04 is using a newer mdadm that causes this issue. So my solution will be to bypass the Debian installer and re-create my RAID array manually!
I have confirmed the following versions produce RAID 1 arrays that GRUB can read: 2.5.6 and 2.6.9