• md mdadm array inactive how to start and activate the RAID array


    cat /proc/mdstat Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] md124 : inactive sdj1[0](S) 1048512 blocks Solution: we "run" the array sudo mdadm --manage /dev/md124 --run mdadm: started array /dev/md/0_0........
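    The excerpt is cut off; if --run alone doesn't bring the array back, a fallback sketch is to stop the inactive device and let mdadm re-assemble whatever it can find (md124 is just the device from the example above):
        mdadm --stop /dev/md124
        mdadm --assemble --scan
        cat /proc/mdstat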
  • mdadm how to stop a check


    Is an mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS? cat /proc/mdstat Personalities : [raid1] [raid10] md127 : active raid10 sdb4[0] sda4[1] 897500672 blocks super 1.2 2 near-copies [2/2] [UU] [==========>..........] check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec ........
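    The fix is cut off above; one way that works is writing "idle" to the array's sync_action file as root (md127 follows the example):
        echo idle > /sys/block/md127/md/sync_action
        cat /proc/mdstat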
  • OpenVZ vs LXC DIR mode poor security in LXC


    It is unfortunate that LXC's dir mode is completely insecure and allows way too much information from the host to be seen. I wonder if there will eventually be a way to break into the host filesystem or other containers' storage? OpenVZ has better security: [root@ev ~]# cat /proc/mdstat cat: /proc/mdstat: No such file or directory /dev/simfs 843G 740G 61G........
  • mdadm force resync when resync=PENDING solution


    cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] md127 : active (auto-read-only) raid10 sdc1[0] sdb1[2] 1953382400 blocks super 1.2 512K chunks 2 far-copies [2/1] [U_] resync=PENDING bitmap: 15/15 pages [60KB], 65536KB chunk Solution force repai........
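    The solution is truncated above; a minimal sketch of the usual approach for an "active (auto-read-only) ... resync=PENDING" array is to switch it to read-write, which lets the pending resync start:
        mdadm --readwrite /dev/md127
        cat /proc/mdstat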
  • mdadm how to mount inactive array


    myguy@devbox:~$ sudo mdadm -As myguy@devbox:~$ cat /proc/mdstat |grep sdf md125 : inactive sdf3[2](S) sudo mdadm --manage /dev/md125 --run mdadm: started /dev/md125 ........
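    Once the array is running it can be mounted like any other block device; the mount point /mnt/md125 here is only an example:
        sudo mkdir -p /mnt/md125
        sudo mount /dev/md125 /mnt/md125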
  • How to find and mount mdadm arrays automatically


    A great trick if you have a bunch of drives with mdadm arrays connected, are looking for backups/archives, and don't know what is where! for md in `cat /proc/mdstat|grep md[0-99]|awk '{print $1}'`; do mkdir /mnt/$md; mount /dev/$md /mnt/$md; done........
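    A slightly tidier sketch of the same idea, pulling the mdN names straight out of /proc/mdstat (untested, but the logic is identical):
        for md in $(awk '/^md/ {print $1}' /proc/mdstat); do
          mkdir -p /mnt/$md
          mount /dev/$md /mnt/$md
        done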
  • mdadm how to recover from failed drive


    Remove the failed partition /dev/sde1 mdadm --manage /dev/md99 -r /dev/sde1 mdadm: hot removed /dev/sde1 from /dev/md99 Now add another drive back to replace it: # mdadm --manage /dev/md99 -a /dev/sdf1 mdadm: added /dev/sdf1 A "cat /proc/mdstat" should show it resyncing if all is well.........
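    For reference, a typical full sequence looks like this (a sketch using the device names from the excerpt; mark the disk failed first if the kernel has not already done so):
        mdadm --manage /dev/md99 --fail /dev/sde1
        mdadm --manage /dev/md99 --remove /dev/sde1
        mdadm --manage /dev/md99 --add /dev/sdf1
        cat /proc/mdstat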
  • Centos 6 how to guide convert LVM non-RAID into mdadm 1/10 RAID array live without reinstalling


    Here is the scenario: you or a client have a remote machine that was installed as a standard/default minimal Centos 6.x machine on a single disk with LVM for whatever reason. Many people do not know how to install to a RAID array, so this problem is common, and why reinstall if you don't need to? In some cases on a remote system you can't easily reinstall without physical or KVM access. So in this case you add a second physical disk or already ha........
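    The full guide is truncated above, but the core trick behind this kind of live conversion is creating a degraded RAID 1 on the new disk, migrating the data onto it, and only then absorbing the original disk. A rough sketch, assuming the new disk's partition is /dev/sdb1 and the old one is /dev/sda1 (hypothetical names):
        mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
        # ...migrate the data onto /dev/md0 (e.g. pvmove for LVM), fix fstab/grub, reboot onto md0...
        mdadm --manage /dev/md0 --add /dev/sda1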
  • What a hdd hard drive and mdadm RAID array failure looks like in Linux


    [3805108.257042] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB) [3805108.257052] sd 0:0:0:0: [sda] Write Protect is off [3805108.257054] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [3805108.257066] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [3805108.257083] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB) [3805108.257090] sd 0:0:0:0: [sda] Write Protect is off........
  • mdadm enable bitmap to speed up rebuilds


    Have you ever unplugged the wrong drive and then had to rebuild the entire array? It may not be a big deal in some ways, but it does make your system vulnerable until the rebuild is done. Many distros enable the "bitmap" feature, which basically keeps track of what parts need to be resynced after a temporary removal of a drive from the array; this way it only needs to sync what has changed. To enable bitmap to speed up rebuilds and sync........
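    The command itself is cut off above; an internal write-intent bitmap can be added to (or removed from) an existing array with --grow (replace /dev/md0 with your array):
        mdadm --grow --bitmap=internal /dev/md0
        mdadm --grow --bitmap=none /dev/md0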
  • mdadm create RAID 1 array example


    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3 cat /proc/mdstat Personalities : [raid1] md2 : active raid1 sdb3[1] sda3[0] 1363020736 blocks super 1.2 [2/2] [UU] [=>...................] resync = 8.3% (113597440/1363020736) finish=276.2min speed=75366K/sec ........
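    As a follow-up to the create example, the usual next steps are to put a filesystem on the new array and record it in mdadm.conf so it assembles on boot. A sketch only: the ext4 choice and the /etc/mdadm/mdadm.conf path are Debian/Ubuntu-style assumptions (Red Hat-style systems use /etc/mdadm.conf):
        mkfs.ext4 /dev/md2
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf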
  • mdadm recover from dead drive


    mdadm --manage /dev/md3 --add /dev/sda1 cat /proc/mdstat Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md0 : inactive sdd2[1] sdd1[2](S) 31270272 blocks md3 : active raid1 sda1[2] sdb1[1] sdc1[3](F) 943730240 blocks [2/1] [_U] [>....................]........
  • mdadm how to add a device to an array after a failure


    This array is a RAID 1 and in this case 1 of the 2 drives failed (a WD drive; I've found them to be the weakest and most unreliable of any brand, and easily damaged/DOA in shipping). mdadm --manage /dev/md0 --add /dev/sdb1 The above assumes the array you want to add to is /dev/md0 and the device we are adding is /dev/sdb1. *One thing to remember is to make sure the partition you are adding is the correct size for the array. You can also g........
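    One way to make sure the replacement partition matches is to clone the partition table from the surviving disk before adding. A sketch for MBR-style disks, with sda as the surviving disk and sdb as the new one (hypothetical names):
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        mdadm --manage /dev/md0 --add /dev/sdb1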
  • mdadm/md-check how to check array integrity without rebuilding


    This doesn't seem to be widely known (maybe it's in some documentation that none of us read, though) but there's an easy way to check the integrity of any mdadm array: sudo echo check > /sys/block/md0/md/sync_action -bash: /sys/block/md0/md/sync_action: Permission denied sudo will never work here because the redirection (>) is performed by your own unprivileged shell, not by the command sudo runs, so this only works from a root shell. /sys/devices/virtu........
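    If you would rather not open a root shell, the redirection can be done on the privileged side instead; either of these sketches works:
        echo check | sudo tee /sys/block/md0/md/sync_action
        sudo sh -c 'echo check > /sys/block/md0/md/sync_action'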
  • md: data-check of RAID array md3


    This really made me nervous, but notice the mdstat says "check". This is because in Ubuntu there is a scheduled mdadm cron script that runs every Sunday at 00:57 and checks your entire array. This is a good thing because it prevents gradual but unnoticed data corruption, which I never thought of. As long as the check completes properly you have peace of mind knowing that your data integrity is assured and that your hard drives are functioning properly (I'........
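    Not shown in the excerpt, but once such a check finishes the array's mismatch counter is worth a glance; 0 is what you want to see (md3 follows the example above):
        cat /sys/block/md3/md/mismatch_cnt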
  • mdadm RAID 1 adventures


    I separated the 2 drives in the RAID 1 array. One is the old one, /dev/sda, which is out of date, while the other, /dev/sdc, was mounted and used elsewhere, so it has more (updated) data. I wonder how mdadm will handle this: usb-storage: device scan complete md: md127 stopped. md: bind md: md127: raid array is not clean -- starting background reconstruction raid1: raid set md127 active with 1 out of 2 m........
  • mdadm Linux Software RAID QuickStart Guide


    Create New RAID 1 Array: First set up your partitions (make sure they are exactly the same size). In my example I have sda3 and sdb3 which are 500GB in size. mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3 mdadm: array /dev/md2 started. Check Status Of The Array *Note I already have other arrays md0 and md1. You can see below that md2 is syn........
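    Beyond /proc/mdstat, the detailed state of the new array can be inspected like this (a sketch using the md2 device from the example):
        mdadm --detail /dev/md2
        watch -n 5 cat /proc/mdstat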
  • mdadm "auto-read-only" Linux Software RAID solution


    If you have "(auto-read-only)" beside an array, I have no idea why that happens, but it is easy to fix. Just run "mdadm --readwrite /dev/md1" (replace md1 with the device that has the problem) and it will begin to resync. md1 : active (auto-read-only) raid1 sdb2[0] sda2[1] 19534976 blocks [2/2] [UU] resync=PENDING ........
  • mdadm Linux Software RAID auto-detect and assemble RAID Array


    mdadm --assemble --scan mdadm: /dev/md/diaghost05102010:2 has been started with 2 drives. mdadm: /dev/md/diaghost05102010:1 has been started with 2 drives. mdadm: /dev/md/diaghost05102010:0 has been started with 2 drives. -bash-3.1# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] md125 : active raid1 sda1[0] sdb1[1] 14658185 blocks super 1.2........
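    Before assembling, it can be useful to see which arrays mdadm detects on the attached disks; --examine --scan reads the superblocks without starting anything:
        mdadm --examine --scan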
  • Convert HDD/Hard Drive Partition(s) from non-RAID into RAID 1 using existing data without data loss and without reformatting.


    Before we start, I take no responsibility for this: you should have a backup, and if you make a mistake during this process you could wipe out all of your data. So back up somewhere else before starting this as a precaution, or make sure it's data you could afford to lose. The RAID 1 Setup (Hardware Wise) I've already set up my 2 x 1TB (Seagate) drives with identical partitions; make sure your new hard drive (the empty one) is set up like your curr........
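    The rest of the article is truncated; in rough outline the approach is the same degraded-array trick as the LVM guide above: build a one-sided RAID 1 on the empty drive, copy the data over, then absorb the original partition. A sketch with hypothetical device and path names (sdb1 is the empty partition, sda1 holds the existing data, /data is where it is mounted):
        mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb1
        mkfs.ext4 /dev/md1
        mount /dev/md1 /mnt && rsync -aHAX /data/ /mnt/
        # after switching over to using /dev/md1:
        mdadm --manage /dev/md1 --add /dev/sda1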