• CentOS 7 / 8 cannot boot with mdadm RAID array solution


    This article about migrating to a CentOS 7 / 8 RAID mdadm array has a lot of info, but I wanted to focus specifically on what newer versions of CentOS 7 require to boot from mdadm and what changes are necessary on CentOS 7.8+. CentOS 7 / 8 mdadm RAID booting requirements: this assumes you are chrooting into an existing install or using it to get a new deployment ready. However, these steps can........
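    A minimal sketch of those boot requirements, assuming you are already chrooted into the install, the array is /dev/md0, and the stock dracut is in use (the drop-in file name and device names here are illustrative):

        # record the array so mdadm and the initramfs can find it at boot
        mdadm --detail --scan >> /etc/mdadm.conf

        # tell dracut to embed mdadm support and the config into the initramfs
        echo 'mdadmconf="yes"' >> /etc/dracut.conf.d/mdraid.conf
        echo 'add_dracutmodules+=" mdraid "' >> /etc/dracut.conf.d/mdraid.conf

        # rebuild the initramfs for every installed kernel
        dracut -f --regenerate-all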
  • CentOS 8 how to convert to a bootable mdadm RAID software array


    The cool thing here is that we only need 1 drive to make a RAID 10 or RAID 1 array: we just tell the Linux mdadm utility that the other drive is "missing", and we can then add our original drive to the array after booting into our new RAID array. Step #1: Install the tools we need: yum -y install mdadm rsync. Step #2: Create your partitions on the drive that will be our RAID array. Here I assume it is /dev........
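    As a rough sketch of the "missing" trick (not the full article procedure), assuming the new disk's partition is /dev/sdb1 and the original disk's is /dev/sda1:

        # create a degraded RAID 1; "missing" holds the slot for the original drive
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

        # after copying data, fixing grub and booting from the array, add the old drive
        mdadm --manage /dev/md0 --add /dev/sda1

        # watch the rebuild progress
        cat /proc/mdstat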
  • mdadm how to make inactive array active


    This happens when you assemble an array; assembling it doesn't mean it will be active, for many reasons: md20 : inactive sdf1[2](S) 732442488 blocks super 1.2 Solution: sudo mdadm --manage /dev/md20 --run........
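    Put together, checking and then starting the inactive array looks roughly like this (array name taken from the example above):

        # confirm the array shows up as inactive
        cat /proc/mdstat

        # force md to run the array even though it was not auto-activated
        sudo mdadm --manage /dev/md20 --run

        # verify it is now active
        cat /proc/mdstat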
  • mdadm force/fix proper md127 name


    I have an md0 array that my CentOS install refers to. I feel this is half the reason why it won't boot anymore. I saw the initrd for CentOS was assembling it as md127 even though it was known as md0. The reason for this is that I used mdadm --assemble --scan to detect the array on a LiveCD. I had no idea this name would stick (but now I realize the name is permanently stored in the metadata once you mount md127 or whatever random name assemble gives it). W........
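    One hedged way to make the md0 name stick again, assuming version-1.x metadata and member partitions /dev/sda1 and /dev/sdb1 (adjust to your devices):

        # stop the wrongly named array first
        mdadm --stop /dev/md127

        # reassemble under the wanted name and rewrite the name stored in the superblock
        mdadm --assemble /dev/md0 --name=0 --update=name /dev/sda1 /dev/sdb1

        # persist the mapping and rebuild the initramfs so boot uses the same name
        mdadm --detail --scan >> /etc/mdadm.conf
        dracut -f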
  • mdadm Linux Software RAID auto-detect and assemble RAID Array


    mdadm --assemble --scan mdadm: /dev/md/diaghost05102010:2 has been started with 2 drives. mdadm: /dev/md/diaghost05102010:1 has been started with 2 drives. mdadm: /dev/md/diaghost05102010:0 has been started with 2 drives. -bash-3.1# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] md125 : active raid1 sda1[0] sdb1[1] 14658185 blocks super 1.2........