Debian 5.04 RAID 1 mdadm boot problem GRUB error

Using the Debian installer, I created a single RAID 1 partition that contains both /boot and my root directory.  The installer said GRUB installed successfully, but when I try booting the OS it seems GRUB can't read anything.

When trying to boot, GRUB displays:

GRUB Loading stage 1.5.
GRUB loading, please wait...
Error 2

I get "Error 2" when trying to boot Debian.  I also notice from a LiveCD that GRUB can't even access hd0 or hd1 (both of which are part of the RAID 1 array).  If GRUB can't access the disks, that would explain the boot error.  Is this a quirk/problem with Debian?  It seems like it must be.  Why can't they make this work the way CentOS does?  (I installed CentOS 5.5 in the same setup and it booted fine.)

From my LiveCD I can access the md array itself no problem, but inside GRUB I can't access any of the partitions individually.
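For anyone in the same situation, this is roughly how I check the array from the LiveCD (the device names /dev/md0, /dev/sda1, and /dev/sdb1 are examples; substitute your own):

```shell
# Assemble the existing array from its member partitions
# (LiveCDs often don't do this automatically).
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Show array status; "State : clean" means both mirrors are in sync.
mdadm --detail /dev/md0

# Mount the array and confirm the filesystem is readable through md,
# even though GRUB can't read the member partitions directly.
mount /dev/md0 /mnt
ls /mnt/boot
```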

Error 2: Bad file or directory type.

How can this happen?  It seems CentOS sets up the RAID 1 so that each partition can be accessed individually (which is what GRUB needs), while Debian sets it up so the data is only accessible through the md device/array.

Difference in mdadm versions:

mdadm version 3.1.2 produces this output when querying a created array (I don't know whether Debian uses this version, but I've seen the same issue on other systems running it):

       Version : 1.2
  Creation Time : Mon May 10 22:05:12 2010
     Raid Level : raid1
     Array Size : 14658185 (13.98 GiB 15.01 GB)
  Used Dev Size : 14658185 (13.98 GiB 15.01 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu May 13 05:17:33 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

The CentOS mdadm array reports the following (and that array is readable by GRUB):

Version: .9

I tested, and the Debian install also returns "Version .9", which is the same version that works fine in CentOS (at least as far as GRUB's ability to access the partitions individually).

I don't get it: I can mount the two Debian partitions, but GRUB cannot read them.  I get Error 2 just as when attempting to boot.

*The issue here is clearly the difference between the "0.90" and "1.2" superblock formats.  GRUB cannot read the newer type of superblock, whose structure and on-disk layout are different.  This is something that neither GRUB nor mdadm has accounted for.  I read about it here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#mdadm_v3.0_--_Adding_the_Concept_of_User-Space_Managed_External_Metadata_Formats
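To see which superblock format an array member actually carries, examine the partition directly (the device name is an example):

```shell
# Print the metadata version stored in a member's RAID superblock.
# The 0.90 superblock lives at the END of the partition, so the
# filesystem still starts at the beginning and GRUB Legacy can read it.
# The 1.2 superblock sits near the START of the partition (4K in),
# which shifts the filesystem and breaks GRUB Legacy's view of it.
mdadm --examine /dev/sda1 | grep -i version
```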

It seems that starting with mdadm version 3.0, the new superblock format came into play.  GRUB Legacy (0.97) really needs to be maintained upstream so the distro maintainers don't have to keep making their own updates.

Solution: How to Create a 0.90-Metadata Superblock That GRUB Can Boot From

mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

*Note the "--metadata=0.90"; at this time, that is the only metadata format GRUB is able to boot from.
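Putting it together, my plan for re-creating the array by hand looks roughly like this, run from a LiveCD.  The device names and the ext3 filesystem are assumptions for illustration, and --create destroys the existing data, so back everything up first:

```shell
# Stop the array the installer created.
mdadm --stop /dev/md1

# Wipe the old superblocks so mdadm isn't confused by leftovers.
# WARNING: this destroys the existing array metadata.
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1

# Re-create the mirror with the old 0.90 superblock GRUB understands.
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/sdb1

# Make a filesystem and restore the system onto it, then reinstall
# GRUB to both disks so either one can boot on its own.
mkfs.ext3 /dev/md1
grub-install /dev/sda
grub-install /dev/sdb
```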

I did a test using mdadm version 3.1.2 from my LiveCD (where I also can't access individual partitions), then used an older Debian mdadm (v2.5.6, 9 November 2006), and the partitions were accessible.  Obviously Debian 5.04 ships a newer mdadm that causes this issue, so my solution is to bypass the Debian installer and re-create my RAID array manually!

I have confirmed that the following versions produce RAID 1 arrays that GRUB can read: 2.5.6 and 2.6.9.


