Convert HDD/Hard Drive Partition(s) from non-RAID to RAID 1 using existing data, without data loss and without reformatting.

Before we start: I take no responsibility for this. You should have a backup, and if you make a mistake during this process you could wipe out all of your data. So back up somewhere else before starting as a precaution, or make sure it's data you could afford to lose.

The RAID 1 Setup (Hardware Wise)

I've already set up my 2 x 1TB (Seagate) drives with identical partitions. Make sure your new hard drive (the empty one) is partitioned exactly like your current (original) hard drive.
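
If your new drive isn't partitioned yet, one quick way to clone the layout is sfdisk (a minimal sketch; here I'm assuming /dev/sdb is the original drive with your data and /dev/sda is the new, empty one, so triple-check the device names, because this overwrites the target's partition table):

sfdisk -d /dev/sdb | sfdisk /dev/sda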

Before you start, you should be in single-user mode.
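
On a sysvinit-based system like the one used here, you can usually drop to single-user mode with:

telinit 1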

You can see my fdisk output here:


Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1460    11727418+  83  Linux
/dev/sda2            1461        3893    19543072+  83  Linux
/dev/sda3            3894      121601   945489510   83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1460    11727418+  83  Linux
/dev/sdb2            1461        3893    19543072+  83  Linux
/dev/sdb3            3894      121601   945489510   83  Linux

RAID 1 Migration Plan (Using Existing Data and Partitions)

My plan is to use the /dev/sda1 and /dev/sdb1 partitions for SWAP.

sda2 and sdb2 will hold about 18GB for the OS partition (md0).

sda3 and sdb3 will hold about 888GB for other data/backups (md1).

*Remember, you could just make a single partition and have only an md0, but I prefer to have a separate OS partition. This way it's much simpler to back up just the OS and config files, etc.
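
Setting up the swap partitions isn't shown in the RAID steps below; a minimal sketch for plain (non-mirrored) swap on sda1 and sdb1 would be the following (add matching entries to /etc/fstab to make them permanent):

mkswap /dev/sda1
mkswap /dev/sdb1
swapon /dev/sda1
swapon /dev/sdb1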

Create The RAID Arrays (md0 and md1)

With that in mind, let's create our first RAID 1 array (md0), which will use sda2 and sdb2 (sdb is the drive that currently holds the data, so we create and initialize the array with sda2 only). Notice the "missing" keyword I added below. It allows us to create the md0 array with only a single partition (even though RAID 1 obviously needs two members before it's actually redundant).

Create the md0 RAID 1 Array (OS partition)

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda2

Be careful below: yes, my sda2 was already an ext3 partition (from before I planned to add the extra 1TB drive and make it RAID). But I know for sure that sda2 is empty and contains no useful or important data (I mounted it to double-check).


mdadm: /dev/sda2 appears to contain an ext2fs file system
    size=19543072K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? yes
mdadm: array /dev/md0 started.
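
md1 (the data array) is created the same way, again with "missing" standing in for sdb3, which still holds the live data:

Create the md1 RAID 1 Array (data partition)

mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sda3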

Format the RAID 1 arrays md0 and md1 with EXT3

*Make sure that you specify -b 4096, otherwise it will default to -b 1024, which gives you a maximum file size of just 16GB. That's a big issue for things like backup files and VMware disks.

mkfs -t ext3 -b 4096 -L md0 /dev/md0       
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2443200 inodes, 4885744 blocks
244287 blocks (5.00%) reserved for the super user
First data block=0
150 block groups
32768 blocks per group, 32768 fragments per group
16288 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

-bash-3.1# mkfs -t ext3 -b 4096 -L md1 /dev/md1
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=md1
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
118194176 inodes, 236372352 blocks
11818617 blocks (5.00%) reserved for the super user
First data block=0
7214 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

How fdisk sees the arrays

Disk /dev/md0: 20.0 GB, 20012007424 bytes
2 heads, 4 sectors/track, 4885744 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 968.1 GB, 968181153792 bytes
2 heads, 4 sectors/track, 236372352 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Don't worry about the "doesn't contain a valid partition table" warnings. md0 and md1 are essentially (RAID) partitions themselves, so naturally they don't contain partition tables, but fdisk doesn't know this and treats each one just like a raw disk with no partitions.

Copy Data To New RAID Arrays

Now copy all data from /dev/sdb2 to /mnt/md0 (in my case there's nothing to copy, since this is a new setup and the sdb drive only had data on sdb3, which is used entirely for backups).

Now copy all data from /dev/sdb3 to /mnt/md1.
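
If you want the actual commands, here's a minimal sketch (the mount points /mnt/md0, /mnt/md1 and /mnt/sdb3 are just my assumptions, adjust to taste; note the trailing slashes on the rsync paths, which copy the contents rather than the directory itself). sdb3 must be unmounted again before you add it to the array in the next step:

mkdir -p /mnt/md0 /mnt/md1 /mnt/sdb3
mount /dev/md0 /mnt/md0
mount /dev/md1 /mnt/md1
mount /dev/sdb3 /mnt/sdb3
rsync -a /mnt/sdb3/ /mnt/md1/
umount /mnt/sdb3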

Once you're satisfied the data has copied properly and nothing is missing, you are ready to actually put the RAID 1 arrays into action.

Synchronize The RAID Arrays

First we'll do md0 (OS partition)

mdadm /dev/md0 -a /dev/sdb2
mdadm: added /dev/sdb2


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda3[1]
      945489408 blocks [2/1] [_U]
     
md0 : active raid1 sdb2[2] sda2[1]
      19542976 blocks [2/1] [_U]
      [>....................]  recovery =  1.7% (334848/19542976) finish=6.6min speed=47835K/sec
     
unused devices: <none>

By checking /proc/mdstat you can track the progress of the synchronization.
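
If you don't want to keep re-running cat, this refreshes the status every 5 seconds:

watch -n 5 cat /proc/mdstat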

Now it's time for md1

mdadm /dev/md1 -a /dev/sdb3
mdadm: added /dev/sdb3


-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[2] sda3[1]
      945489408 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (725888/945489408) finish=195.2min speed=80654K/sec
      
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]

When It's All Done

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[0] sda3[1]
      945489408 blocks [2/2] [UU]
     
md0 : active raid1 sdb2[0] sda2[1]
      19542976 blocks [2/2] [UU]
     
unused devices: <none>

Notice the [UU] for both arrays; it means both members are up and the arrays are fine.
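
Optionally, verify each array in detail and record them so they assemble at boot (the config file location varies by distro: /etc/mdadm.conf here, but /etc/mdadm/mdadm.conf on Debian-style systems):

mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail --scan >> /etc/mdadm.conf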


