Convert HDD/Hard Drive Partition(s) from non-RAID to RAID 1 using existing data, without data loss and without reformatting.

Before we start: I take no responsibility for this. You should have a backup, and if you make a mistake during this process you could wipe out all of your data. So back up somewhere else before starting as a precaution, or make sure it's data you can afford to lose.

The RAID 1 Setup (Hardware Wise)

I've already set up my 2 x 1TB (Seagate) drives with identical partitions. Make sure your new hard drive (the empty one) is partitioned exactly like your current drive (the original one).
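If your second drive is still blank, one way to mirror the partition layout (my own shortcut, not something from the original setup) is to dump the table from the populated drive and feed it to the empty one with sfdisk. Double-check which device is which first, because this overwrites the target's partition table:

# Copy the partition layout from the source drive to the new, empty drive.
# Here sdb is treated as the source and sda as the target; swap the names if your empty drive is the other one.
sfdisk -d /dev/sdb | sfdisk /dev/sda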

Before you start, you should be in single-user mode.
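If you're not sure how to get there, on a SysV-style init system (which is what I'm assuming here) you can drop to single-user mode with:

telinit 1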

You can see my fdisk output here:


Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1460    11727418+  83  Linux
/dev/sda2            1461        3893    19543072+  83  Linux
/dev/sda3            3894      121601   945489510   83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1460    11727418+  83  Linux
/dev/sdb2            1461        3893    19543072+  83  Linux
/dev/sdb3            3894      121601   945489510   83  Linux

RAID 1 Migration Plan (Using Existing Data and Partitions)

My plan is to use the /dev/sda1 and /dev/sdb1 partitions for SWAP.

sda2 and sdb2 will hold about 18GB for the OS partition (md0).

sda3 and sdb3 will hold about 888GB for other data/backups (md1).

*Remember, you could just make a single partition and only have an md0, but I prefer to have a separate OS partition. This way it's much simpler to back up just the OS and config files, etc.
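The swap part of the plan isn't shown in the steps below, but it would go roughly like this (a sketch only, and remember that mkswap wipes whatever is currently on those partitions):

# Initialize swap on the first partition of each drive, then enable it.
mkswap /dev/sda1
mkswap /dev/sdb1
swapon /dev/sda1
swapon /dev/sdb1
# Add matching swap entries to /etc/fstab so they come back after a reboot.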

Create The RAID Arrays (md0 and md1)

With that in mind, let's create our first RAID 1 array (md0), which uses sda2 and sdb2. Currently sdb2 is the partition holding the data, so we will create and initialize the array with sda2 only. Notice the "missing" keyword I added below; it allows us to create the md0 array with only a single member (even though RAID 1 obviously needs two devices to actually be redundant).

Create the md0 RAID 1 Array (OS partition)

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda2

Be careful below: yes, my sda2 already had an ext3 filesystem on it (from before I planned to add the extra 1TB drive and make it RAID), but I mounted it and double-checked that sda2 is empty and contains no useful or important data.


mdadm: /dev/sda2 appears to contain an ext2fs file system
    size=19543072K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? yes
mdadm: array /dev/md0 started.
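The md1 array gets created the same way (the write-up jumps straight to formatting, so this is the analogous command, again starting degraded with sda3 since sdb3 is the partition that still holds the data):

mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sda3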

Format the RAID 1 arrays md0 and md1 with EXT3

*Make sure that you specify -b 4096, otherwise it can default to -b 1024, which gives you a maximum file size of just 16GB. That is a big issue for things like backup files and VMware disks.

mkfs -t ext3 -b 4096 -L md0 /dev/md0       
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2443200 inodes, 4885744 blocks
244287 blocks (5.00%) reserved for the super user
First data block=0
150 block groups
32768 blocks per group, 32768 fragments per group
16288 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

-bash-3.1# mkfs -t ext3 -b 4096 -L md1 /dev/md1
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md1
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
118194176 inodes, 236372352 blocks
11818617 blocks (5.00%) reserved for the super user
First data block=0
7214 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

How fdisk sees the arrays

Disk /dev/md0: 20.0 GB, 20012007424 bytes
2 heads, 4 sectors/track, 4885744 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 968.1 GB, 968181153792 bytes
2 heads, 4 sectors/track, 236372352 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Don't worry about the "doesn't contain a valid partition table" warnings. md0 and md1 are RAID block devices built directly on partitions, so they naturally have no partition table of their own; fdisk doesn't know this and treats each one like a bare disk with no partitions.

Copy Data To New RAID Arrays

Now copy all data from /dev/sdb2 to /mnt/md0 (in my case there is nothing to copy, since this is a new setup and the sdb drive only had data on sdb3, which is used entirely for backups).

Now copy all data from /dev/sdb3 to /mnt/md1.
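Mounting the degraded arrays and copying could look something like this (the mount points and rsync options are my own assumptions, adjust to your layout):

# Mount the new arrays somewhere temporary.
mkdir -p /mnt/md0 /mnt/md1
mount /dev/md0 /mnt/md0
mount /dev/md1 /mnt/md1
# Copy everything, preserving permissions, ownership, timestamps and hard links.
# /mnt/sdb2 and /mnt/sdb3 stand in for wherever the old partitions are currently mounted.
rsync -aHx /mnt/sdb2/ /mnt/md0/
rsync -aHx /mnt/sdb3/ /mnt/md1/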

Once you're satisfied the data has copied properly and nothing is missing, you are ready to actually put the RAID 1 arrays into action.

Synchronize The RAID Arrays

First we'll do md0 (OS partition)

mdadm /dev/md0 -a /dev/sdb2
mdadm: added /dev/sdb2


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda3[1]
      945489408 blocks [2/1] [_U]
     
md0 : active raid1 sdb2[2] sda2[1]
      19542976 blocks [2/1] [_U]
      [>....................]  recovery =  1.7% (334848/19542976) finish=6.6min speed=47835K/sec
     
unused devices: <none>

By checking /proc/mdstat you can track the progress of the synchronization.
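I like to leave a watch running so I don't have to keep re-typing the command (just a habit, not required):

watch -n 5 cat /proc/mdstat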

Now it's time for md1

mdadm /dev/md1 -a /dev/sdb3
mdadm: added /dev/sdb3


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[2] sda3[1]
      945489408 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (725888/945489408) finish=195.2min speed=80654K/sec
      
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]

When It's All Done

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[0] sda3[1]
      945489408 blocks [2/2] [UU]
     
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]
     
unused devices: <none>

Notice the [UU] for both arrays: it means both drives are up and the array is healthy.
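If you want more detail than /proc/mdstat gives, mdadm --detail prints the full state of each array (this extra check is my own addition):

mdadm --detail /dev/md0
mdadm --detail /dev/md1
# You should see "State : clean" and both member partitions listed as "active sync".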


