Convert HDD/Hard Drive Partition(s) from non-RAID to RAID 1 using existing data, without data loss and without reformatting.

Before we start: I take no responsibility for this. You should have a backup, and if you make a mistake during this process you could wipe out all of your data.  So back up somewhere else before starting as a precaution, or make sure it's data you can afford to lose.

The RAID 1 Setup (Hardware-Wise)

I've already set up my 2 x 1TB (Seagate) drives with identical partitions. Make sure your new hard drive (the empty one) is partitioned exactly like your current/original hard drive.
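If both drives are the same size, one quick way to replicate the partition layout is sfdisk. This is a sketch, assuming MBR partition tables and that /dev/sda is the original drive and /dev/sdb is the empty one; double-check the device names before running it:

sfdisk -d /dev/sda | sfdisk /dev/sdb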

Before you start you should be in single-user mode.
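On a SysV-style init system (as used here) you can usually drop to single-user mode with:

init 1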

You can see my fdisk output here:


Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1460    11727418+  83  Linux
/dev/sda2            1461        3893    19543072+  83  Linux
/dev/sda3            3894      121601   945489510   83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1460    11727418+  83  Linux
/dev/sdb2            1461        3893    19543072+  83  Linux
/dev/sdb3            3894      121601   945489510   83  Linux

RAID 1 Migration Plan (Using Existing Data and Partitions)

My plan is to use the /dev/sda1 and /dev/sdb1 partitions for SWAP.

sda2 and sdb2 will hold about 18GB for the OS partition (md0).

sda3 and sdb3 will hold about 888GB for other data/backups.

*Remember, you could just make a single partition and only have an md0, but I prefer to have a separate OS partition.  This way it's much simpler to back up just the OS and config files, etc.
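The walkthrough below doesn't show the swap step, so here is a minimal sketch of setting up plain (non-RAID) swap on sda1 and sdb1 as per the plan above. You could instead mirror swap in another md array if you want the machine to survive a disk failure without swap errors:

mkswap /dev/sda1
mkswap /dev/sdb1
swapon /dev/sda1 /dev/sdb1

# and in /etc/fstab so both activate at boot:
# /dev/sda1  none  swap  defaults  0 0
# /dev/sdb1  none  swap  defaults  0 0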

Create The RAID Arrays (md0 and md1)

With that in mind, let's create our first RAID 1 array (md0), which uses sda2 and sdb2 (currently sdb2 is the partition holding the data, so we will create and initialize the array with sda2 only).  Notice the "missing" keyword I added below.  It lets us create the md0 array with only a single partition (even though RAID 1 normally needs two, otherwise it's not really RAID).

Create the md0 RAID 1 Array (OS partition)

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda2

Be careful below: yes, my sda2 was already an ext3 partition (from before I planned to add the extra 1TB drive and make it RAID), but I mounted it and double-checked that sda2 is empty and contains no useful or important data.


mdadm: /dev/sda2 appears to contain an ext2fs file system
    size=19543072K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? yes
mdadm: array /dev/md0 started.
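The equivalent creation command for md1 isn't shown, but based on the /proc/mdstat output later (md1 starts out with only sda3, and sdb3 is added after the copy), it would look like this:

mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sda3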

Format the RAID 1 arrays md0 and md1 with EXT3

*Make sure that you specify -b 4096; otherwise it will default to -b 1024, which gives you a maximum file size of just 16GB, a big issue for things like backup files and VMware disks.

mkfs -t ext3 -b 4096 -L md0 /dev/md0       
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2443200 inodes, 4885744 blocks
244287 blocks (5.00%) reserved for the super user
First data block=0
150 block groups
32768 blocks per group, 32768 fragments per group
16288 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

-bash-3.1# mkfs -t ext3 -b 4096 -L md1 /dev/md1
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md1
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
118194176 inodes, 236372352 blocks
11818617 blocks (5.00%) reserved for the super user
First data block=0
7214 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

How fdisk sees the arrays

Disk /dev/md0: 20.0 GB, 20012007424 bytes
2 heads, 4 sectors/track, 4885744 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 968.1 GB, 968181153792 bytes
2 heads, 4 sectors/track, 236372352 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Don't worry about the "doesn't contain a valid partition table" warnings.  md0 and md1 don't have partition tables because they are essentially (RAID) partitions themselves, but fdisk does not know this; it treats each one just like a bare drive with no partitions.

Copy Data To New RAID Arrays

Now copy all data from /dev/sdb2 to /mnt/md0 (in my case there is nothing to copy, since this is a new setup and the sdb drive only had data on sdb3, which is used entirely for backups).

Now copy all data from /dev/sdb3 to /mnt/md1.
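Here is a minimal sketch of the mount-and-copy step. The mount points (/mnt/md0, /mnt/md1, /mnt/src) are just examples, and you could use rsync -a instead of cp -a:

mkdir -p /mnt/md0 /mnt/md1 /mnt/src

# OS partition: copy sdb2's contents onto md0
mount /dev/md0 /mnt/md0
mount /dev/sdb2 /mnt/src
cp -a /mnt/src/. /mnt/md0/
umount /mnt/src

# data/backup partition: copy sdb3's contents onto md1
mount /dev/md1 /mnt/md1
mount /dev/sdb3 /mnt/src
cp -a /mnt/src/. /mnt/md1/
umount /mnt/src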

Once you're satisfied the data has copied properly and nothing is missing, you are ready to actually put the RAID 1 arrays into action.

Synchronize The RAID Arrays

First we'll do md0 (OS partition)

mdadm /dev/md0 -a /dev/sdb2
mdadm: added /dev/sdb2


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda3[1]
      945489408 blocks [2/1] [_U]
     
md0 : active raid1 sdb2[2] sda2[1]
      19542976 blocks [2/1] [_U]
      [>....................]  recovery =  1.7% (334848/19542976) finish=6.6min speed=47835K/sec
     
unused devices: <none>

By checking /proc/mdstat you can track the progress of the synchronization.
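If you don't want to keep re-running cat by hand, watch (if installed) refreshes the output every two seconds:

watch cat /proc/mdstat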

Now it's time for md1

mdadm /dev/md1 -a /dev/sdb3
mdadm: added /dev/sdb3


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[2] sda3[1]
      945489408 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (725888/945489408) finish=195.2min speed=80654K/sec
      
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]

When It's All Done

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[0] sda3[1]
      945489408 blocks [2/2] [UU]
     
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]
     
unused devices: <none>

Notice the [UU] for both arrays; it means both drives are up and the array is fine.
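One follow-up step not shown above: depending on your distro you may want to record the arrays in mdadm's config file so they are assembled the same way at boot. The path is an assumption here; some systems use /etc/mdadm/mdadm.conf instead of /etc/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm.conf

Also remember to update /etc/fstab to mount /dev/md0 and /dev/md1 instead of the old single partitions.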

