Debian 5.04 RAID 1 mdadm boot problem GRUB error

Through the Debian installer I successfully created a single RAID 1 partition which holds /boot and my root directory. The installer said GRUB installed successfully, but when I try booting the OS it seems GRUB can't read anything.

When trying to boot, GRUB shows:

GRUB Loading stage 1.5.
GRUB loading, please wait...
Error 2

I get "Error 2" when trying to boot Debian.  I also notice from a LiveCD that I can't even access hd0 or hd1 (both of which are part of the RAID 1 array).  I'm thinking that if GRUB can't access them, this explains the boot error.  Is this a quirk/problem with Debian?  It seems like it must be.  Why can't they make this work like CentOS does (I installed CentOS 5.5 in the same setup and it booted fine)?

From my LiveCD I can access the md array itself no problem, but inside GRUB I can't access any of the partitions individually.

Error 2: Bad file or directory type.

How can this happen?  It seems CentOS sets up the RAID 1 so each partition can be accessed individually (which is what GRUB needs), but Debian sets it up so you can only access data through the md device/array.

Difference in mdadm versions:

mdadm version 3.1.2 produces this output when querying a created array (I don't know if Debian uses this version, but I've experienced a similar issue with other systems using this version of mdadm):

       Version : 1.2
  Creation Time : Mon May 10 22:05:12 2010
     Raid Level : raid1
     Array Size : 14658185 (13.98 GiB 15.01 GB)
  Used Dev Size : 14658185 (13.98 GiB 15.01 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu May 13 05:17:33 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
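The dump above is what `mdadm --detail` prints for an assembled array. A small helper can pull out just the superblock-format line so the check can be scripted (a sketch; the function name is mine, and it assumes the "Version :" field layout shown above):

```shell
# Extract the superblock/metadata version from `mdadm --detail` output on stdin.
# Hypothetical helper; assumes the "Version : X.Y" line layout shown above.
md_metadata_version() {
    grep -m1 '^[[:space:]]*Version' | awk -F': ' '{print $2}'
}

# Usage on a live system (requires root); /dev/md1 is an example device:
#   mdadm --detail /dev/md1 | md_metadata_version
```

On the problem array this prints "1.2"; on a GRUB-readable one it prints "0.90" (or ".9" with older mdadm builds).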

The CentOS mdadm array reports the following (the CentOS array is readable by GRUB):

Version: .9

I tested, and the Debian install returns "Version .9", which is the same format that works fine in CentOS (at least as far as GRUB's ability to access the partitions individually goes).

I don't get it: I can mount the two Debian partitions, but GRUB cannot read them; I get Error 2 just like when attempting to boot.

*The issue here is clearly the difference between the "0.90" and "1.2" superblock formats.  GRUB cannot read arrays with the new type of superblock: the on-disk structure and layout are different (with 0.90 the superblock sits at the end of each member, so the filesystem still starts at the beginning of the partition; with 1.2 the data is offset, so GRUB no longer finds a plain filesystem there).  This is something that neither GRUB nor mdadm has accounted for.  I read about the formats here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#mdadm_v3.0_--_Adding_the_Concept_of_User-Space_Managed_External_Metadata_Formats

It seems that starting with mdadm version 3.0, the new superblock format came into play.  GRUB legacy (0.97) really needs to be maintained so the distro maintainers don't have to keep making their own updates.

Solution: How to Create a Proper 0.90-Metadata Superblock That GRUB Can Boot From

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --metadata=0.90

*Note the "--metadata=0.90"; at this time, that is the only metadata format GRUB will be able to boot from.
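After creating the array, it's worth verifying before rebooting that the members really carry the old superblock. Here is a quick predicate over `mdadm --examine` output (my own helper, assuming the version strings seen above: "0.90"/".9" for the old format versus "1.2" for the new one):

```shell
# Succeeds when the mdadm output on stdin reports a 0.90-family superblock,
# i.e. one that GRUB legacy can boot from. Hypothetical helper.
has_grub_bootable_superblock() {
    grep -qE 'Version[[:space:]]*:[[:space:]]*0?\.9'
}

# Usage (requires root; the device name is an example):
#   mdadm --examine /dev/sda1 | has_grub_bootable_superblock && echo "GRUB OK"
```

Run it against each member partition, not just the md device, since GRUB reads the members directly.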

I did a test using mdadm version 3.1.2 from my LiveCD (where I also can't access the individual partitions), then I used an older Debian mdadm (v2.5.6, 9 November 2006) and the partitions are accessible.  Obviously Debian 5.04 is using a newer mdadm whose default metadata format causes this issue.  So my solution will be to ignore the Debian installer's RAID setup and re-create my RAID array manually!

I have confirmed the following mdadm versions produce RAID 1 arrays that GRUB can read: 2.5.6 and 2.6.9.
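The manual rebuild from a LiveCD can be sketched as one function. Everything here is an assumption to adapt before running: the function name is mine, the device names are examples, and this destroys data on the member partitions, so restore or reinstall Debian afterwards. The mdadm flags are the ones from the solution above:

```shell
# Re-create a RAID 1 mirror with the GRUB-bootable 0.90 superblock.
# DESTROYS existing data on the members; device names are examples.
rebuild_grub_bootable_raid1() {
    member1="$1" member2="$2" md="$3"   # e.g. /dev/sda1 /dev/sdb1 /dev/md1
    mdadm --stop "$md" 2>/dev/null || true
    # 0.90 metadata lives at the END of each member, so the filesystem
    # still starts at the beginning of the partition and GRUB can read it.
    mdadm --create "$md" --metadata=0.90 --level=1 --raid-devices=2 \
          "$member1" "$member2"
}

# Example (run as root from a LiveCD), then restore the OS onto the array
# and reinstall GRUB on both disks so either one can boot:
#   rebuild_grub_bootable_raid1 /dev/sda1 /dev/sdb1 /dev/md1
#   grub-install /dev/sda && grub-install /dev/sdb
```

Installing GRUB on both disks means the machine still boots if either drive of the mirror fails.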


Tags:

debian, raid, mdadm, grub, superblock, metadata
