Debian 5.04 RAID 1 mdadm boot problem GRUB error

I successfully created a single RAID 1 partition through the Debian installer, containing both /boot and my root filesystem.  The installer said GRUB installed successfully, but when I try booting the OS it seems GRUB can't read anything.

When trying to boot, GRUB prints:

GRUB Loading stage 1.5.
GRUB loading, please wait...
Error 2

I get "Error 2" when trying to boot Debian.  I also notice from a LiveCD that I can't even access hd0 or hd1 (both of which are members of the RAID 1 array).  If GRUB can't access them, that would explain the boot error.  Is this a weird problem/quirk with Debian?  It seems like it must be.  Why can't they make this work like CentOS does?  (I installed CentOS 5.5 in the same setup and it booted fine.)

From my LiveCD I can access the md array itself no problem, but inside GRUB I can't access any of the partitions individually.
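For anyone hitting the same thing, you can check which superblock format an array uses right from the LiveCD (a sketch; the device names /dev/md0 and /dev/sda1 are examples, so adjust them to your system):

```shell
# Report the metadata version of the assembled array:
mdadm --detail /dev/md0 | grep Version

# Or read the on-disk superblock from a member partition directly,
# which works even when the array is not assembled:
mdadm --examine /dev/sda1 | grep Version
```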

Error 2: Bad file or directory type.

How can this happen?  It seems CentOS sets up the RAID 1 so each partition can be accessed individually (which is what GRUB needs), but Debian sets it up so the data can only be accessed through the md device/array.

Difference in mdadm versions:

mdadm version 3.1.2 produces this output when querying a created array (I don't know whether Debian uses this exact version, but I've seen the same issue on other systems running it):

       Version : 1.2
  Creation Time : Mon May 10 22:05:12 2010
     Raid Level : raid1
     Array Size : 14658185 (13.98 GiB 15.01 GB)
  Used Dev Size : 14658185 (13.98 GiB 15.01 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu May 13 05:17:33 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

With the CentOS mdadm array it reports the following (the CentOS array is readable by GRUB):

        Version : 0.90

I tested, and that Debian install returns "Version : 0.90" as well, the same format that works fine in CentOS (at least as far as GRUB being able to access the partitions individually).

I don't get it: I can mount the two Debian partitions, but GRUB cannot read them.  I get Error 2, just like when attempting to boot with GRUB.

*The issue here is clearly the difference between the "0.90" and "1.2" superblock formats.  GRUB cannot access the newer type of superblock, because its structure and layout are different: the 0.90 superblock is stored at the end of the device, so the filesystem on each member partition starts at sector 0 and looks like a plain partition to GRUB, whereas the 1.2 superblock sits near the start of the device and shifts the data to an offset GRUB doesn't know about.  This is something that neither GRUB nor mdadm has accounted for.  I read about it here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#mdadm_v3.0_--_Adding_the_Concept_of_User-Space_Managed_External_Metadata_Formats
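You can see the layout difference directly on a member partition (a sketch; /dev/sda1 is the example member from this setup):

```shell
# On a 1.x-metadata member, mdadm reports non-zero "Super Offset" /
# "Data Offset" fields, meaning the filesystem does NOT begin at
# sector 0 of the partition.  On a 0.90 member those fields are absent:
# the superblock lives at the end of the device and the data starts at
# sector 0, which is exactly what legacy GRUB expects.
mdadm --examine /dev/sda1 | egrep 'Version|Super Offset|Data Offset'
```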

It seems that starting with version 3.0 of mdadm, the new superblock format came into play.  Legacy GRUB (0.97) really needs to be maintained upstream so the distro maintainers don't need to keep making their own updates.

Solution/How to Create a Proper 0.90-Metadata Superblock that GRUB Can Boot From

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --metadata=0.90
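After creating the array, it's worth verifying the format took effect and then reinstalling legacy GRUB to both disks so either drive can boot on its own (a sketch; assumes the /dev/md1, /dev/sda, /dev/sdb layout from the command above):

```shell
# Confirm the superblock format -- this should report "Version : 0.90":
mdadm --detail /dev/md1 | grep Version

# Reinstall legacy GRUB to the MBR of both member disks, so the
# machine can still boot if either disk fails:
grub-install /dev/sda
grub-install /dev/sdb
```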

*Note the "--metadata=0.90"; at this time, that is the only metadata format GRUB is able to boot from.
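If you create arrays often, you may be able to make 0.90 the default instead of remembering the flag every time (an assumption to verify against `man mdadm.conf` on your version; the CREATE keyword with a metadata= option exists in mdadm 2.x-era configs):

```shell
# /etc/mdadm/mdadm.conf
# Default new arrays to the 0.90 superblock:
CREATE metadata=0.90
```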

I did a test using mdadm version 3.1.2 from my LiveCD (where I also can't access individual partitions), then used an older Debian mdadm - v2.5.6 - 9 November 2006 - and the partitions are accessible.  Obviously Debian 5.04 is using a newer mdadm whose behavior causes this issue.  So my solution will be to ignore the Debian installer and re-create my RAID array manually!

I have confirmed the following versions produce RAID 1 arrays that GRUB can read: 2.5.6 and 2.6.9
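The version cutoff described above (mdadm 3.0+ defaulting to the new metadata, older versions to 0.90) can be expressed as a tiny helper script; the function name and messages here are my own:

```shell
#!/bin/sh
# check_metadata_default VERSION -- print which superblock format that
# mdadm version creates by default.  Per the cutoff above: 3.0 and
# later default to 1.x metadata (not bootable by legacy GRUB), while
# earlier versions default to 0.90.
check_metadata_default() {
    major="${1%%.*}"
    if [ "$major" -ge 3 ]; then
        echo "1.x (pass --metadata=0.90 for GRUB)"
    else
        echo "0.90 (bootable by legacy GRUB)"
    fi
}

check_metadata_default 3.1.2   # -> 1.x (pass --metadata=0.90 for GRUB)
check_metadata_default 2.6.9   # -> 0.90 (bootable by legacy GRUB)
```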


