On a test machine I was never able to access a newly created 4th partition. As we can see, there are /dev device nodes for everything but the 4th partition.
The usual "partprobe" or "kpartx", or telling the kernel to rescan the block device, didn't help (only a reboot did).
fdisk -l /dev/sda
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units........
This is an 8TB Seagate external USB 3.0 device. Apparently newer kernels use a module called "UAS" instead of "USB Storage", which causes issues because a lot of devices are not properly supported in UAS mode by the kernel driver. The solution some suggest is to disable UAS specifically for your USB device, but I'd rather just disable UAS altogether.
Solution, blacklist UAS: *do not do this, it does not work and just causes your USB 3.0........
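For comparison, the per-device approach mentioned above is normally done with a usb-storage quirk (a sketch; the 0bc2:331a vendor:product ID is an assumption, check yours with lsusb):
# /etc/modprobe.d/disable-uas.conf -- the trailing "u" flag tells usb-storage to ignore UAS for this device
options usb-storage quirks=0bc2:331a:u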
This is a VIA-made VL805 USB 3.0 chipset card with 4 ports, powered by MOLEX. First of all, this unit was cheap at only about 9 USD with fast shipping. My biggest concern was whether this was a quality unit and whether it would really give you full USB 3.0 speeds (some people reported with similar cards that for some weird r........
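A quick way to sanity-check the throughput claim is a raw sequential read test against a drive attached to the card (a sketch; /dev/sdb is a placeholder for the USB-attached disk):
hdparm -t /dev/sdb
dd if=/dev/sdb of=/dev/null bs=1M count=4096 status=progress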
In short, the two drives in the array were /dev/sdd and /dev/sde. The kernel sees they were unplugged and have gone down, as you can see below.
mdadm caught the first one being unplugged (/dev/sde) and disabled the missing drive. However, when the final drive that was part of the array was unplugged, it didn't notice at all. Instead, it complains about an IO error later for drives that the kernel knows no longer exist.
[45817.162728] ata4: exception........
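When mdadm is stuck in that state, a device that has physically vanished can usually be kicked out of the array with the "detached" keyword (standard mdadm usage; /dev/md0 is an assumed array name):
mdadm /dev/md0 --fail detached --remove detached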
It is already known that this is not possible:
mdadm --create /dev/md3 --level 10 --layout=f2 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid10 devices=2 ctime=Sat Dec 24 18:44:29 2016
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid10 devices=2 ctime=Sat Dec 24 18:44:29 2016
Continue creating ar........
[3805108.257042] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257052] sd 0:0:0:0: [sda] Write Protect is off
[3805108.257054] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[3805108.257066] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[3805108.257083] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257090] sd 0:0:0:0: [sda] Write Protect is off........
Have you ever unplugged the wrong drive and then had to rebuild the entire array? It may not be a big deal in some ways, but it does leave your array vulnerable to another failure until the rebuild is done.
Many distros enable the "bitmap" feature, which keeps track of which parts of the array need to be resynced after a drive is temporarily removed; this way only what has changed needs to be synced.
To enable bitmap to speed up rebuilds and sync........
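For reference, an internal write-intent bitmap can typically be added to an existing array like this (standard mdadm usage; /dev/md3 is an assumed array name):
mdadm --grow --bitmap=internal /dev/md3
And removed again with:
mdadm --grow --bitmap=none /dev/md3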
This is something many people, and especially businesses, worry about, or at least they should. Before throwing away a hard drive, returning a hard drive, or especially warrantying/RMAing it, you should wipe the drive.
Linux provides the "shred" and "dd" utilities, which work quite well. It seems even a single pass is good enough, but by default shred will do 3 passes.
Here's an example of using shred in Linux (I use a custom made distribution from........
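A typical invocation looks something like this (a sketch, not necessarily the exact example from the post; /dev/sdX is a placeholder for the drive to wipe, so triple-check it):
shred -v /dev/sdX
Or the single-pass dd equivalent:
dd if=/dev/zero of=/dev/sdX bs=1M status=progress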
mdadm --manage /dev/md3 --add /dev/sda1
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdd2[1] sdd1[2](S)
31270272 blocks
md3 : active raid1 sda1[2] sdb1[1] sdc1[3](F)
943730240 blocks [2/1] [_U]
[>....................]........
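Once the rebuild is underway, the failed member shown above as sdc1[3](F) can also be dropped from the array (standard mdadm usage):
mdadm --manage /dev/md3 --remove /dev/sdc1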
dd if=/dev/sda of=/dev/null creates an ever-increasing load every second.
After a few minutes the load had climbed to 4.79.
I've tried with two different disks in my system.
I wonder if it's a delay or problem with the SATA bus, because one drive I have connected recently failed.
I've noticed that when drives fail, you get IO blocking issues when they don't respond properly.
Yes, I believe it was, because here's the same disk after removing the dead........
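One way to tell whether that load is really IO wait from the failing drive rather than CPU usage is to watch the per-device stats (standard sysstat/procps tools, assuming they are installed):
iostat -x 1
vmstat 1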
[137392.910057] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x80000 action 0x6 frozen
[137392.910077] ata4: SError: { 10B8B }
[137392.910095] ata4.00: cmd 60/20:00:00:00:00/00:00:00:00:00/40 tag 0 ncq 16384 in
[137392.910099] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[137392.910122] ata4.00: status: { DRDY }
[137392.910135] ata4: hard resetting link
[137393.440060] ata4: SATA link........
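A SMART query is one quick way to confirm that the drive behind those resets is actually failing (smartmontools assumed installed; /dev/sdX is a placeholder for the device on ata4):
smartctl -a /dev/sdX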
Seagate Inventory/Firmware Check
I heard about this issue a long time ago but never looked into it. I figured I wasn't affected since my 500GB drives had been running for so long. I've been using Seagates since 2002, and to this day all of the Seagate drives I have are still alive.
*Update: the bad news is that I've realized one of my 500GB drives is about to die. It's not even a year old, but it's also not affected by the recall according to Seagate!
From the package "parted" you can use the command "partprobe" to re-read the partition table. I really hate rebooting, and that's what Iloved to hear about AHCI motherboards, that they allow hotswap so you don't have to reboot. But that's only as good as the OS, if the OS does not reload the partition table you won't be able to do anything with that new drive you attached without rebooting. Yes, even without re-reading the partiton table Linux will........