Debian 5.0.4 RAID 1 mdadm boot problem: GRUB Error 2

I successfully created a single RAID 1 partition that contains both /boot and my root directory through the Debian installer.  The installer said GRUB installed successfully, but when I try to boot the OS it seems GRUB can't read anything.

When trying to boot, GRUB shows:

GRUB Loading stage 1.5.
GRUB loading, please wait...
Error 2

I get "Error 2" when trying to boot Debian.  I also notice from a LiveCD that I can't even access hd0 or hd1 (both of which are part of the RAID 1 array).  I'm thinking if GRUB can't access it then this explains the boot error.  Is this a weird/problem/quick with Debian?  It seems like it must be.  Why can't they make this work like Centos does (I installed on 5.5 Centos in the same setup and it booted fine).

From my LiveCD I can access the md array itself with no problem, but from within GRUB I can't access any of the partitions individually.
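For anyone following along, this is roughly what I run from the LiveCD to get at the array; the device names (/dev/md0, /dev/sda1, /dev/sdb1) are examples and may differ on your system:

# assemble the existing RAID 1 array from its two member partitions
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# mount the assembled array and browse it like any normal filesystem
mount /dev/md0 /mnt
ls /mnt/boot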

Error 2: Bad file or directory type.

How can this happen?  It seems CentOS sets up the RAID 1 so each partition can be accessed individually (which is what GRUB needs), but Debian sets it up so you can only access the data through the md device/array.

Difference in mdadm versions:

mdadm version 3.1.2 produces this output when querying a created array (I don't know whether Debian uses this exact version, but I've experienced the same issue on other systems running this version of mdadm):

       Version : 1.2
  Creation Time : Mon May 10 22:05:12 2010
     Raid Level : raid1
     Array Size : 14658185 (13.98 GiB 15.01 GB)
  Used Dev Size : 14658185 (13.98 GiB 15.01 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu May 13 05:17:33 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
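For reference, that output comes from querying the assembled array device with mdadm --detail; the device name here is just an example:

mdadm --detail /dev/md0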

With the CentOS mdadm array it says the following (and the CentOS array is readable by GRUB):

Version: .9

I tested, and the Debian install also returns "Version .9", which is the same version that works fine in CentOS (at least as far as GRUB being able to access the partitions individually goes).
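The metadata version of an existing array can also be checked directly against a member partition (handy from a LiveCD even when the array isn't assembled); the partition name is an example:

mdadm --examine /dev/sda1 | grep -i version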

I don't get it: I can mount the two Debian partitions, but GRUB cannot read them; I get Error 2 just like when attempting to boot with GRUB.
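One way to test this from the GRUB (Legacy) command line; the device and kernel file names are examples for this layout, and the exact error text can vary:

grub> root (hd0,0)
grub> kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/md0

On a readable 0.90 array the kernel line loads normally; on the Debian-created array it fails the same way the boot does.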

*The issue here is clearly the difference between the ".9" (0.90) and "1.2" superblock formats.  GRUB cannot access the new type of superblock because the structure and on-disk layout are different: the 0.90 superblock sits at the end of the member device, so the filesystem still starts where GRUB expects it, while the 1.2 superblock is placed near the start and shifts the data.  This is something that neither GRUB nor mdadm has accounted for.  I read about the formats here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#mdadm_v3.0_--_Adding_the_Concept_of_User-Space_Managed_External_Metadata_Formats

It seems that starting with version 3.0 of mdadm the new superblock formats came into play.  GRUB Legacy (0.97) really needs to be maintained so the distro maintainers don't have to keep making their own updates.

Solution: How to Create a RAID 1 Array with a 0.90 Metadata Superblock that GRUB Can Boot From

mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

*Note the "--metadata=0.90", that is the only metadata format that GRUB will be able to boot from (at this time).

I did a test using mdadm version 3.1.2 from my LiveCD (where I also can't access the individual partitions), then I used an older Debian mdadm (v2.5.6 - 9 November 2006) and the partitions are accessible.  Obviously Debian 5.0.4 is using a newer mdadm that causes this issue.  So my solution will be to ignore the Debian installer and re-create my RAID array manually!
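To check which mdadm build a given installer or LiveCD is using before creating an array, the version string is printed by:

mdadm --version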

I have confirmed that the following mdadm versions produce RAID 1 arrays that GRUB can read: 2.5.6 and 2.6.9.

