Have you got this error from Apache?
[notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[error] (28)No space left on device: Cannot create SSLMutex
At first glance it appears that you may be out of disk space, but the issue is actually IPC (interprocess communication).
Clearing out the stale IPC entries (which ipcs lists) gets things working again; this often happens during high traffic and may be a sign of a DDoS.
The command below will fix it; it will list al........
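A common way to do this (my sketch, not necessarily the truncated post's exact command; it assumes the semaphores are owned by the apache user, so adjust the grep to your setup):
ipcs -s | grep apache | awk '{print $2}' | xargs -r -n1 ipcrm -s
This lists the semaphore arrays, filters for the web server's user, and removes each stale semaphore by its ID.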
This seems to have changed for RHEL 8, where a normal dracut run to update your initramfs creates a system that only boots the running kernel. For example, if you have kernel 5 and then chroot into a RHEL 8 variant which uses kernel 4.18, and run dracut, it seems that by default the system will be unbootable.
It is also the case that if you move your RAID array or drives to another server, it will be unbootable, because dracut seems to only include modules needed for the curre........
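A sketch of the usual workaround (my wording, not quoted from the truncated text): regenerate the initramfs for every installed kernel with host-only mode disabled, so the image carries drivers beyond the currently detected hardware:
# run inside the chroot; --no-hostonly includes modules for hardware other than what dracut currently sees
for kver in $(ls /lib/modules); do
    dracut -f --no-hostonly /boot/initramfs-$kver.img $kver
done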
This article about migrating to a CentOS 7/8 RAID mdadm array has a lot of info, but I wanted to focus specifically on what newer versions of CentOS require to boot mdadm and what changes are necessary on CentOS 7.8+.
CentOS 7 / 8 mdadm RAID booting requirements
This assumes you are chrooting into an existing install or using it to get a new deployment ready. However, these steps can........
cat /proc/mdstat
Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md124 : inactive sdj1[0](S)
1048512 blocks
Solution: we "run" the array
sudo mdadm --manage /dev/md124 --run
mdadm: started array /dev/md/0_0........
Bonding is an excellent way to get both increased redundancy and throughput. It is similar to the "Network Teaming" feature in Windows.
There are a few different modes, but we will use mode 6 (balance-alb). I think it's the best of both worlds: it is not just failover, it also provides round robin, so you get both redundancy and load balancing. So if you have single 1G ports, four of them bonded give you a combined throughput of 4G. Just bear in mind that the true thr........
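As a rough sketch of the config (CentOS/RHEL-style ifcfg files; the device names and IP here are assumptions, and mode 6 is balance-alb):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=balance-alb miimon=100"
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each slave NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes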
-?????????? ? ? ? ? ? shadow
----------. 1 root root 748 Jul 10 04:35 shadow-
cat: shadow: Input/output error
If you see this you are probably in big trouble. It could be a physical error; if it's a VM image, it may be corrupted due to a physical error on the underlying disk/array/NAS; or it could a........
Is an mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS?
cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb4[0] sda4[1]
897500672 blocks super 1.2 2 near-copies [2/2] [UU]
[==========>..........] check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec
........
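If the check itself is the problem, one standard remedy (not necessarily what the truncated post recommends) is to cancel it via sysfs, using the md device shown in your mdstat output:
echo idle > /sys/block/md127/md/sync_action   # cancels the running check; must be run as root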
mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb1 missing --metadata=0.90
mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 is not suitable for this array.
mdadm: create aborted
Sometimes running "partprobe" can fix this. Other times it requires a reboot.
One other manual thing that can be done is the following to fix it (if device-mapper is using and blocking it):........
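The truncated fix likely involves device-mapper; as a hedged sketch, you can check whether dm holds the partition and release it (the mapping name is whatever dmsetup ls reports on your system):
dmsetup ls                 # see which dm mappings exist
dmsetup remove <mapping>   # release the mapping that is holding /dev/sdb1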
It is unfortunate that LXC's dir mode is completely insecure and exposes way too much information from the host. I wonder if there will eventually be a way to break into the host filesystem or other containers' storage?
OpenVZ has better security:
[root@ev ~]# cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
/dev/simfs 843G 740G 61G........
The cool thing here is that we only need 1 drive to make a RAID 10 or RAID 1 array: we just tell the Linux mdadm utility that the other drive is "missing", and we can then add our original drive to the array after booting into our new RAID array.
Step#1 Install tools we need
yum -y install mdadm rsync
Step #2 Create your partitions on the drive that will be our RAID array
Here I assume it is /dev........
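For example (device names here are placeholders), creating the degraded array with one real drive and one "missing" slot looks like:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
Later, after booting into the new array, the original drive's partition is added with mdadm --manage /dev/md0 --add /dev/sda1 and it resyncs.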
When using strip_tags and html_entity_decode with PHP, it often breaks and produces annoying diamonds with question marks.
It is probably because of characters like these:
… (looks like 3 dots but it is a single weird character).
’ (looks like a normal apostrophe but it is not).
” (looks like a normal double quote but it is not).
An easy way to sort this out is to copy the above and search in an ASCII table to extend the functional........
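One common fix (an assumption on my part, not the truncated method) is to tell the entity functions the input is UTF-8 so multi-byte characters survive:
php -r 'echo html_entity_decode("&hellip;", ENT_QUOTES, "UTF-8") . "\n";'
The third parameter is the character set; on older PHP versions it defaulted to ISO-8859-1, which is what typically produces the diamond/question-mark replacement characters.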
In a RAID array I have periodically lost a drive here and there over the past several months. I was always able to re-add and resync without losing data. However, at some point it looks like some minor corruption happened, and this makes DRBD unhappy.
Using fsck did not help either.
Dec 19 06:01:45 storageboxtest4 kernel: [19005.945890] EXT3-fs error (device drbd0): ext3_get_inode_loc: unable to read inode block - inode=22184379........
Server Side Config
1.) First install nfs-utils
yum -y install nfs-utils
2.) Configure nfs share
Create a directory for your NFS share
mkdir /datastore
Create your NFS share in /etc/exports
echo "/datastore 10.220.101.0/24(rw,sync,no_root_squash)" >> /etc/exports
systemctl restart nfs........
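To round this out with the client side (a sketch; the server IP and mount point are assumptions based on the subnet in the export above):
exportfs -ra                                  # re-read /etc/exports on the server
mount -t nfs 10.220.101.1:/datastore /mnt     # on a client in the allowed subnet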
The main issue is that it looks like Java is not configured to accept the invalid SSL cert that is coming from the download location.
Exception in thread "main" java.lang.RuntimeException: javax.net.ssl.SSLException: java.security.ProviderException: java.security.InvalidKeyException: EC parameters error
export ANDROID_HOME=/home/user/Downloads/tools/
Conversations-master$ ./gradlew
Downloading https://services.gradle.org/distributions/grad........
Normally when I've seen this, it's because you are using a variable like a normal string when in fact it's actually an object (or array), such as in this example:
[Tue Mar 13 04:22:35 2018] [error] PHP Catchable fatal error: Object of class WP_Term could not be converted to string in /vhost/httpdocs/wp-content/plugins/wp-instagram-post/classes/class-woo-igp.php on line 578
........
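A minimal illustration of the fix (using a hypothetical stand-in class, since WP_Term comes from WordPress): use the object's string property rather than the object itself in string context:
php -r 'class Term { public $name = "news"; } $t = new Term(); echo "Tag: " . $t->name . "\n";'
Concatenating $t directly instead of $t->name would throw the same catchable fatal error.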
myguy@devbox:~$ sudo mdadm -As
myguy@devbox:~$ cat /proc/mdstat |grep sdf
md125 : inactive sdf3[2](S)
sudo mdadm --manage /dev/md125 --run
mdadm: started /dev/md125
........
A great way if you have a bunch of drives and mdadm arrays connected, are looking for backups/archives, and don't know what is where!
for md in $(grep -o '^md[0-9]*' /proc/mdstat); do mkdir -p /mnt/$md; mount /dev/$md /mnt/$md; done........
Done on CentOS 7.3. This is very important, as based on older guides it was clearly a lot easier and simpler before! Hint: do not use grub2-install!
If you have trouble booting after this check this CentOS mdadm RAID booting/fixing guide.
One huge caveat if you are an oldschool user or sysadmin who has avoided UEFI booting
The nor........
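A hedged sketch of the UEFI-safe approach on CentOS 7 (package names as shipped by CentOS; adjust the path if your EFI system partition differs):
yum reinstall -y grub2-efi shim                    # restores the signed grubx64.efi instead of running grub2-install
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg    # regenerate the config the EFI loader reads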
In short, the two drives in the array were /dev/sdd and /dev/sde. The kernel sees they were unplugged and have gone down, as you can see below.
mdadm caught the first one being unplugged (/dev/sde) and disabled the missing drive. However, when the final drive that was part of the array was unplugged, it didn't notice at all. Instead it complains about an IO error later, for drives that the kernel knows do not exist anymore.
[45817.162728] ata4: exception........
1.) Replicate the number of partitions in your new drives.
gdisk /dev/sda
gdisk /dev/sdb
I created 3 partitions of the same size.
partition #1: +1G (/boot)
partition #2: +60G (swap)
partition #3: rest of it (/)
#note if you are using GPT/gdisk you need to create a separate partition at least 1MB in size (in my case I would add a 4th partition and mark it type ef02).........
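As an aside (my suggestion, not from the post): if both drives are GPT, sgdisk from the gdisk package can clone the layout instead of repeating it by hand:
sgdisk -R=/dev/sdb /dev/sda   # replicate sda's partition table onto sdb
sgdisk -G /dev/sdb            # randomize GUIDs so the two tables don't collide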
mdadm won't boot in Ubuntu/Mint/Debian anymore.
You just get the following in a loop:
mdadm: CREATE group disk not found
Incrementally started RAID arrays.
Incrementally starting RAID arrays...
mdadm: CREATE group disk not found
Incrementally started RAID arrays.
Incrementally starting RAID arrays...
mdadm: CREATE group disk not found
Incrementally started RAID arrays.
Incrementally starting RAID arrays...
mdadm: CREATE group dis........
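The elided fix is presumably along these lines (an assumption; this is the standard remedy on Debian-family systems): correct or remove the CREATE line in mdadm.conf, then rebuild the initramfs so the copy it embeds matches:
grep CREATE /etc/mdadm/mdadm.conf   # e.g. CREATE owner=root group=disk mode=0660 auto=yes; the "disk" group must exist
update-initramfs -u                 # regenerate the initramfs with the fixed config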
This was a surprising bug. I unplugged all drives for an array, md127. At first it was just 1 drive, and mdadm seemed to notice this. Then I unplugged the second drive, taking the array offline, but mdadm did not realize it was offline and still showed a non-existent disk as being part of it. This created problems trying to unmount it or even to stop the array, with mdadm freezing.
As for how to fix it, I can only think of making sure you are not in a mounted path of........
Assembling an array doesn't mean it will be active; it can stay inactive for many reasons:
md20 : inactive sdf1[2](S)
732442488 blocks super 1.2
Solution:
sudo mdadm --manage /dev/md20 --run........
It is already known that this is not possible:
mdadm --create /dev/md3 --level 10 --layout=f2 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid10 devices=2 ctime=Sat Dec 24 18:44:29 2016
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid10 devices=2 ctime=Sat Dec 24 18:44:29 2016
Continue creating ar........
The md127 issue: it should be /dev/md3 per mdadm.conf.
Any time something is mounted as md127, it almost always means there is no entry for this mdadm array in the mdadm.conf inside the initramfs (which is separate from your actual /etc/mdadm.conf).
cat /etc/mdadm.conf
ARRAY /dev/md3 metadata=1.2 UUID=b6722845:381cc94e:7a2c5b5f:8e3b7c4f
The reason for this is something strange: most Linux OSes bizarrely always keep their own copy of /etc/mdadm.con........
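So after correcting /etc/mdadm.conf, the copy inside the initramfs has to be rebuilt too (standard commands, not quoted from the truncated text):
dracut -f             # CentOS/RHEL
update-initramfs -u   # Debian/Ubuntu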
First you have to stop it.
mdadm --stop /dev/md0
Then you can remove it:
mdadm --remove /dev/md0........
It is possible to tell mdadm to create an md device on a raw disk. Even though it will give you an error, it writes a superblock, and this corrupts the partition table, which can result in your system not booting.
To fix it, just zero the superblock on the offending device where you made the mistake.
Eg: /dev/sda
mdadm --zero-superblock /dev/sda
It is also a way of starting fresh if you wanted to create a new array.........
This happened while an mdadm array was syncing; all access, from writing a new blank file to opening a small .txt file, was very slow:
[222117.312078] kjournald starting. Commit interval 5 seconds
[222117.685060] EXT3-fs (md0): using internal journal
[222117.685096] EXT3-fs (md0): mounted filesystem with ordered data mode
[222122.376847] kjournald starting. Commit interval 5 seconds
[222122.602825] EXT3-fs (md2): using internal jour........
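A common mitigation while a sync is running (standard md tunables, not from the truncated post) is to cap the resync bandwidth so foreground IO stays usable:
sysctl -w dev.raid.speed_limit_max=10000   # KB/s ceiling for resync traffic
sysctl dev.raid.speed_limit_min            # check the floor as well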
grub> root (hd0,0)
root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
But if you do:
root (hd1,0)
setup (hd1)
it does work. I think hd0/sda had a GPT partition table that was not removed properly (what I did was just dd the partition table from another drive with bs=512 count=1, since the partition tables should be identical).
Checking if "/boot/grub/........
In this example we have 2 drives in a RAID array, and /dev/sdb is the one that failed. /dev/sda1 is also the /boot partition, so we tell grub to use root (hd0,0) (/dev/sda1) and install onto the new drive /dev/sdb (hd1).
First copy the partition table from /dev/sda to /dev/sdb
dd if=/dev/sda of=/dev/sdb bs=512 count=1
Run partprobe to detect the new partition table
partprobe........
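The remaining steps presumably mirror the usual procedure (a hedged sketch using the legacy grub shell, consistent with the session described above; array and partition names assumed):
mdadm --manage /dev/md0 --add /dev/sdb1   # add the new partition to the array so it resyncs
grub
grub> root (hd0,0)
grub> setup (hd1)   # install grub's boot code onto the new drive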
Here is the scenario: you or a client have a remote machine that was installed as a standard/default minimal CentOS 6.x machine on a single disk with LVM, for whatever reason. Many people do not know how to install to a RAID array, so this problem is common, and why reinstall if you don't need to? In some cases, on a remote system you can't easily reinstall without physical or KVM access.
So in this case you add a second physical disk, or already ha........
I was surprised to see that Linux Mint, at the latest 17.2 version, still has NO mdadm installer option, and worse, the installer will not be able to create a proper booting environment even when you do install it.
How to setup mdadm in Linux mint LiveCD
sudo su
apt-get install mdadm
# partition as you need and then create your mdadm devices
# create your SWAP md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /d........
root (hd2,1)
Filesystem type unknown, partition type 0x83
grub> root (hd2,2)
root (hd2,2)
Filesystem type is ext2fs, partition type 0x83
grub> setup (hd2)
setup (hd2)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... no
#a weird thing about grub is that the drive you boot from is considered hd0
For example when booted fu........
mdadm --create /dev/md1 --level 10 --raid-devices=2 /dev/sdb2 /dev/sdc2 --layout=f2 --metadata=0.90
Note that layout=f2 or layout=n2 is very important as without it you'll get a complaint like this:
mdadm --create /dev/md0 --level 10 --raid-devices /dev/sdb1 /dev/sdc1 missing missing
mdadm: invalid number of raid devices: /dev/sdb1
It is basically more like a prop........
This is basically caused by upgrading PHP to a new version like 5.4 when you had 5.2 before, with an old version of Joomla. The only solution is to upgrade Joomla or downgrade PHP, both of which can be a pain.
Strict Standards: Non-static method JLoader::import() should not be called statically in /home/userdir/public_html/libraries/joomla/import.php on line 29
Strict Standards: Non-static method JLoader::register() should not be ca........
[Wed Jan 08 18:50:07 2014] [emerg] (28)No space left on device: Couldn't create accept lock (/etc/httpd/logs/accept.lock.15449) (5)
This may happen when trying to restart Apache: you find it dies right after starting, and you check /var/log/httpd/error_logs.
What is the cause of this?
You could be out of disk space (if you're not, then see #2 and below)
You're out of semaphores; you need to kill all the old ones.........
[3805108.257042] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257052] sd 0:0:0:0: [sda] Write Protect is off
[3805108.257054] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[3805108.257066] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[3805108.257083] sd 0:0:0:0: [sda] 1953525168 512-byte hardware sectors: (1.00 TB/931 GiB)
[3805108.257090] sd 0:0:0:0: [sda] Write Protect is off........
This is a great way to upgrade your RAID array or move it/copy it to a new set of hard drives.
Eg. you have a current RAID 1 array on older/slower drives.
Just add at least 1 of the new drives to the array, update grub/install it, and then boot into it. Then you have a transparent data migration that is fully synchronized.
mdadm --grow /dev/md126 --raid-devices 3
md127 : active raid1 sdc1........
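Filling in the flow around those lines (device names are assumptions, following the excerpt's md126): add the new drive, grow the mirror to cover it, and once the sync finishes drop the old member and shrink back:
mdadm --manage /dev/md126 --add /dev/sdc1
mdadm --grow /dev/md126 --raid-devices=3
# after /proc/mdstat shows the sync is complete:
mdadm --manage /dev/md126 --fail /dev/sda1 --remove /dev/sda1
mdadm --grow /dev/md126 --raid-devices=2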
I've got one of these for testing projects from work at home, and I got more than I bargained for with the time I've spent on it, due to the storage handling/Perc 6/i cards.
My particular model came with the following:
2U Rack Mount Server with Rails
2xOpteron 2373 EE (Quad Core, there is a 6-core version that can be found at times)
16GB RAM
2 x 250GB Seagate SATA
2 x Dell Perc 6/i (horrible and a nightmare to work........
LSi Megaraid
At first it was configured as a RAID 0, then I deleted the Virtual Disk Group.
I thought both drives would be shown and detected in Linux as sda and sdb but it actually shows nothing.
To make them work you have to hit Ctrl+R before the system boots (when prompted) and create a Virtual Disk Group. In my case I created each one as RAID 0 (with a single drive only), as I just wanted JBOD, but there is no such option or default in these Dell Pe........
Crashing with a RAID 1 array and when burning a CD.
Screen goes blank (no video signal) and system stops responding during heavier loads.
Is this a defective power supply or is it possible I have too many devices connected to the same rail?
How can I verify/troubleshoot this?........
Have you ever unplugged the wrong drive and then had to rebuild the entire array? It may not be a big deal in some ways but it does make your system vulnerable until the rebuild is done.
Many distros often enable the "bitmap" feature, which basically keeps track of what parts need to be resynced in the case of a temporary removal of a drive from the array; this way it only needs to sync what has changed.
To enable bitmap to speed up rebuilds and sync........
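The command itself (standard mdadm usage; the device name is a placeholder) is a one-liner on a live array:
mdadm --grow /dev/md0 --bitmap=internal   # add a write-intent bitmap; --bitmap=none removes it later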
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
1363020736 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 8.3% (113597440/1363020736) finish=276.2min speed=75366K/sec
........
Here's a proven example of what a bad hard drive can do. It was technically functioning OK in a RAID array, but the system became extremely slow, the load became high, and IOWAIT was even higher; I always thought it was a bad application. The truth is that this failing 1TB Hitachi had slowly gotten worse and caused huge slowdowns (eg. 100% load with Thunderbird waiting for e-mails to load etc.). After swapping it out, tabs change instantly, emails are not lagged, and........
mdadm --manage /dev/md1 --add /dev/sdb1
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: /dev/sdb1 not large enough to join array
md1's first primary member /dev/sda3 has 57394 cylinders while /dev/sdb1 has 57393 (1 less cylinder), which is why it won't work.
fdisk -l /dev/sda3
Disk /dev/sda3: 47........
This array is a RAID 1, and in this case 1 of the 2 drives failed (a WD drive; I've found them to be the weakest and most unreliable of any brand, and easily damaged/DOA when shipping them).
mdadm --manage /dev/md0 --add /dev/sdb1
The above assumes the array you want to add to is /dev/md0 and the device we are adding is /dev/sdb1
*One thing to remember is to make sure the partition you are adding is the correct size for the array. You can also g........
Neither the blkid UUID nor the UUID internal to mdadm works to automount for some reason in Debian.
partprobe doesn't work but was a good suggestion from: http://pato.dudits.net/2008/11/03/special-device-uuidxxxxxxxxxxxxxxxx-does-not-exist-especially-with-lvm
mount: special device /dev/disk/by-uuid/431b9b96-29e8f298-e89bd504-7065bddd does not exist
mdadm -D /dev/md_d12
mdadm: metadata format 00.90 unknown, ignored.
/dev/md_d12:
........
For years I've always built cheap systems, believing that there is little difference in more expensive components when it comes to reliability and quality. I generally still believe this, except for power supplies.
I've always bought cheap cases with nice-sounding 350-550W stock/cheap/crap power supplies and hadn't had any issues for the most part until recently.
One such case is an NGEAR case with a 550W Optimax power supply; I always read that these supplies don't produce the........
This is one in a series of weird things which I thought was motherboard related (I RMA'd the motherboard); the RAM tests fine with memtest86, I used badblocks on both RAID 1 members with no errors, and smartctl is happy with them.
Basically the array crashes the kernel a lot and has issues when writing.
[112322.723465] md0: rw=0, want=14958668696, limit=1887460480
[112322.731077] attempt to access beyond end of device
[112322.731087] md........
GNU GRUB version 0.97 (640K lower / 3072K upper memory)
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename.]
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup........
This happened during a RAID array check:
SMART says both drives pass the test, but I'm doing a long test on them and hopefully this is not a hardware error.
Apr 3 04:22:01 remote kernel: md: syncing RAID array md2
Apr 3 04:22:01 remote kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Apr 3 04:22:01 remote kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Apr........
MySQL errors even though these files do exist:
110405 13:21:37 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
110405 13:26:15 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means my........
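Error 13 is EACCES, so the usual suspects (my checklist, not quoted from the post; assumes the default /var/lib/mysql datadir) are ownership and, on CentOS, SELinux labels:
ls -ld /var/lib/mysql /var/lib/mysql/ibdata1   # confirm mysql:mysql owns the files
chown -R mysql:mysql /var/lib/mysql
restorecon -Rv /var/lib/mysql                  # fix SELinux contexts if SELinux is enabled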
high IO wait
424 root 39 19 1900 848 552 D 0.0 0.0 0:00.91 updatedb
root 424 0.0 0.0 1900 848 ? DN Mar11 0:00 /usr/bin/updatedb -f sysfs?rootfs?bdev?proc?cpuset?binfmt_misc?debugfs?sockfs?usbfs?pipefs?anon_inodefs?futexfs?tmpfs?inotifyfs?eventp........
I think this will be useful to others because I have a server that kept crashing mysteriously during intense disk usage/RAID checks. It would only crash during the weekly RAID integrity check.
Then I noticed during a reboot that not all CPUs were being brought up; as a result this actually creates much higher temperatures. From the output I got from sensors, just booting the system produced higher than normal temperatures.
You can imagine that a full blown RAID check........
Jan 16 04:02:03 centosbox syslogd 1.4.1: restart.
Jan 16 04:07:34 centosbox kernel: INFO: task updatedb:20771 blocked for more than 300 seconds.
Jan 16 04:07:34 centosbox kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 16 04:07:34 centosbox kernel: updatedb D F78BE050 6476 20771 20766........
This made me nervous but it's clearly a cronjob based on the messages log that happens every Sunday at about 4:22.
I actually can't find any evidence of it in cron.d or cron.daily, but it is there somewhere, obviously.
What I don't get is why this cronjob doesn't do a datacheck like Ubuntu's cron script does. When you unnecessarily rebuild the array you lose your redundancy for that whole time, which makes your data extremely vulnerable.
*Update I did a grep of "........
This doesn't seem to be widely known (maybe it's in some documentation that none of us read, though), but there's an easy way to check the integrity of any mdadm array:
sudo echo check > /sys/block/md0/md/sync_action
-bash: /sys/block/md0/md/sync_action: Permission denied
sudo alone will never work here. It's not that echo is the problem; the redirection into /sys is performed by your unprivileged shell before sudo even runs, so this only works from a root shell.
/sys/devices/virtu........
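The standard workaround (not from the truncated text) is to let a privileged tee perform the write instead of your shell:
echo check | sudo tee /sys/block/md0/md/sync_action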
This really made me nervous, but notice the mdstat says "check". This is because in Ubuntu there is a scheduled mdadm cron script that runs every Sunday at 00:57 and checks your entire array. This is a good thing, because it prevents gradual but unnoticed data corruption, which I had never thought of.
As long as the check completes properly you have peace of mind knowing that your data integrity is assured and that your hard drives are functioning properly (I'........
mdadm: metadata format 00.90 unknown, ignored.
This happens with various versions of older mdadm such as mdadm - v2.6.7.1 - 15th October 2008
It is all because of an extra 0 in 00.90 in /etc/mdadm/mdadm.conf that it doesn't like (it doesn't seem to cause any problem other than that message, though):
Solution - Edit your /etc/mdadm/mdadm.conf and change 00.90 to 0.90 in your arrays:
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=f41a4644:6b2a05f........
I separated the 2 drives in the RAID 1 array.
One is the old one, /dev/sda, which is out of date, while the other, /dev/sdc, was in another machine, mounted and used with more (updated) data.
I wonder how mdadm will handle this:
usb-storage: device scan complete
md: md127 stopped.
md: bind
md: md127: raid array is not clean -- starting background reconstruction
raid1: raid set md127 active with 1 out of 2 m........
Moving to RAID was a pain.
What you have to do is the following from an existing install:
Install mdadm
Create your mdadm RAID 1 array on your spare hard drive.
Start it with the missing disk.
rsync the entire contents of your current / to the md partition.
Here's a good way of doing it:
rsync -Pha --exclude=/proc/* --exclude=/sys/* --exclude=/mnt/* /. /mnt/md2........
Create New RAID 1 Array:
First setup your partitions (make sure they are exactly the same size)
In my example I have sda3 and sdb3 which are 500GB in size.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: array /dev/md2 started.
Check Status Of The Array
*Note I already have other arrays md0 and md1.
You can see below that md2 is syn........
Different distributions such as Debian and Centos behave differently when trying to shutdown your system.
shutdown -H now on Debian does not do what you'd expect: the system won't power down; it will halt and do everything but power off.
shutdown -PH now will do the job though (actually power the system off). This is important to test, especially if you are not near the system. If you just use -P it forcefully shuts off, which is not........
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sdb1 ...
md: adding sdb1 ...
md: adding sda1 ...
md: created md0
md: bind<sda1>
md: bind<sdb1>
md: running: <sdb1><sda1>
md: kicking non-fresh sda1 from array!
md: unbind<sda1>
md: export_rdev(sda1)
raid1: raid set md0 active with 1 out of 2 mirrors
The md0 raid kicked sda1 ou........
If you have "(auto-read-only)" beside an array, I have no idea why that happens, but it is easy to fix.
Just run "mdadm --readwrite /dev/md1" (replace md1 with the device that has the problem) and it will begin to resync.
md1 : active (auto-read-only) raid1 sdb2[0] sda2[1]
19534976 blocks [2/2] [UU]
resync=PENDING
........
http://www.tomshardware.com/news/RAID-5-Doomed-2009,6525.html
I found this article interesting: it basically says that with 2TB or larger hard drives, you are more likely to encounter an unrecoverable read error. But is this just another Y2K doomsday? Don't HDDs have enough advanced hardware ECC and read-recovery features to prevent this from happening?
I'm almost tempted to build a 3 x........
I was creating a RAID array and got this error: mdadm: /dev/sda1 is too small: 0K
mdadm: create aborted
Of course sda1 is not too small, both partitions sda1 and sdb1 are identical in size:
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Sta........
Why would you want to downgrade the superblock? Old mdadm versions like mdadm 2.5.6 only use the 0.90 superblock/metadata, while new versions use 1, 1.0, 1.1 and 1.2 superblocks by default.
There are some annoying caveats with this. First of all, the newer superblocks (later than 0.90) CANNOT be read by GRUB, so you won't even be able to install GRUB. Even worse, old versions of mdadm CANNOT automatically detect arrays even if they were created with a new version of mdadm with th........
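So for GRUB-bootable or old-mdadm-compatible arrays, the create command just pins the old metadata format (the same flag used elsewhere on this site; device names are placeholders):
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1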
Which one does the OS care about? blkid says the UUID is "787f1fa4-b010-4d77-a010-795b42884f56" while md insists its UUID is "4d96dd3b:deb5d555:7adb93cb:ce9182d9"
When in doubt, do we assume the OS takes the one from blkid?
/dev/md0: UUID="787f1fa4-b010-4d77-a010-795b42884f56" TYPE="ext3"
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 0.90
........
I have an md0 array that my CentOS install refers to. I feel this is half the reason why it won't boot anymore.
I saw the initrd for CentOS was assembling it as md127 even though it was known as md0.
The reason for this is that I used mdadm --assemble --scan to detect the array on a LiveCD. I had no idea this name would stick (but now I realize the name is permanently stored in the metadata once you mount md127 or whatever random name assemble gives it). W........
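One hedged way out (my suggestion, not from the truncated text; it assumes 1.x metadata, and the device names are placeholders): reassemble once with --update=name so the superblock records the name you want, then rebuild the initrd:
mdadm --assemble /dev/md0 --name=0 --update=name /dev/sda1 /dev/sdb1
dracut -f   # so the initrd assembles it as md0 on the next boot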
I successfully created a single RAID 1 partition (with /boot inside it) and my root directory through the Debian installer. It said GRUB installed successfully, but when I try booting the OS it seems GRUB can't read anything.
When trying to boot from GRUB
GRUB Loading stage 1.5.
GRUB loading, please wait...
Error 2
I get "Error 2" when trying to boot Debian. I also notice from a LiveCD that........
I installed 5.5 with a 300GB RAID 1 partition (boot is also on this partition). It booted up fine the first few times until after I used a Live CD and accessed the array, and it became named /dev/md127 for some reason.
Now when I boot into CentOS I get a kernel panic and different errors; once I got "invalid superblock", even though the array is fine (it didn't happen again, probably because I was sure to unmount and stop the mdadm array properly).
Here's what........
It was unbelievable how much the Xen kernel slows things down. Keep in mind both tests were done on the host node: one with the OpenVZ-Xen hybrid kernel and the other with plain OpenVZ. You can see the performance is nearly 300% better when not using the Xen kernel.
OpenVZ-Xen Kernel Test Results (I was wondering what was wrong/so slow with my Core i5!)
........
mdadm --assemble --scan
mdadm: /dev/md/diaghost05102010:2 has been started with 2 drives.
mdadm: /dev/md/diaghost05102010:1 has been started with 2 drives.
mdadm: /dev/md/diaghost05102010:0 has been started with 2 drives.
-bash-3.1# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md125 : active raid1 sda1[0] sdb1[1]
14658185 blocks super 1.2........
From the package "parted" you can use the command "partprobe" to re-read the partition table. I really hate rebooting, and that's what I loved to hear about AHCI motherboards: they allow hotswap so you don't have to reboot. But that's only as good as the OS; if the OS does not reload the partition table, you won't be able to do anything with that new drive you attached without rebooting. Yes, even without re-reading the partition table Linux will........
Before we start, I take no responsibility for this; you should have a backup, and if you make a mistake during this process you could wipe out all of your data. So back up somewhere else before starting this as a precaution, or make sure it's data you can afford to lose.
The RAID 1 Setup (Hardware Wise)
I've already set up my 2 x 1TB (Seagate) drives with identical partitions; make sure your new hard drive (the empty one) is set up like your curr........
Everyone says there is a "manual" way of doing it, and then they tell you to use iTunes. But if you're like me, you're travelling on business in a foreign country, your laptop does not have iTunes, and you don't have a way of getting it and/or don't want it.
For this example I'm using the provider "du" in Dubai, UAE (United Arab Emirates) but this method works for virtually all providers.
The requirements in this case to truly "manually update........
I've tried to find a good, sensible solution to cluster with. Each technology has its pros and cons, there is no perfect solution, and I've found a lot of "exaggerations" in the claimed applications, benefits and performance of these different filesystems.
DRBD
I first started off with DRBD, and I have to say it does live up to the hype and is quite reliable (although it can be annoying to match up the kernel module and user applications since they must match, and whe........