This sometimes happens when trying to install the EFI version of GRUB to a device while you are booted in Legacy/MBR mode. It doesn't seem to occur on all machines, just some, and appears to be somewhat BIOS dependent.
grub-install --target=x86_64-efi /dev/sda
Installing for x86_64-efi platform.
grub-install.real: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub-install.real: warning: Couldn't find physica........
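A quick way to confirm which mode you are actually booted in (a standard check, not from the original post): the EFI variables directory only exists under UEFI, and if you are in Legacy/MBR mode then the i386-pc target is the one to install.
[ -d /sys/firmware/efi ] && echo "Booted in UEFI mode" || echo "Booted in Legacy/MBR mode"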
cat /proc/mdstat
Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md124 : inactive sdj1[0](S)
1048512 blocks
Solution: we "run" the array:
sudo mdadm --manage /dev/md124 --run
mdadm: started array /dev/md/0_0........
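To confirm the array really came up, check mdstat again and, if you like, the array detail (device name taken from the example above); it should now show as active, possibly degraded if members are missing:
cat /proc/mdstat
sudo mdadm --detail /dev/md124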
Bonding is an excellent way to get both increased redundancy and throughput. It is similar to the "Network Teaming" feature in Windows.
There are a few different modes, but we will use mode 6 (balance-alb). I think it's the best of both worlds: it is not just failover, it also load balances traffic across the slave interfaces, so you get redundancy and load balancing. So if you bond four 1G ports, you will have a combined throughput of 4G at this point. Just bear in mind that the true thr........
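As a rough sketch of what that looks like on Debian/Ubuntu with the ifenslave package (the eth0-eth3 interface names are just placeholders for your own NICs), a mode 6 bond in /etc/network/interfaces might be:
auto bond0
iface bond0 inet dhcp
    # mode 6 = balance-alb (adaptive load balancing with failover)
    bond-mode balance-alb
    bond-miimon 100
    bond-slaves eth0 eth1 eth2 eth3
On RHEL/CentOS the equivalent goes into ifcfg-bond0 as BONDING_OPTS="mode=6 miimon=100".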
iw dev wlan0 station dump
This is very useful if you are running something like hostapd and need to see the signal strength and negotiated connection speed of associated stations.
Station ff:ff:ff:ff:ff:ff (on wlan0)
inactive time: 16309 ms
rx bytes: 25451
rx packets: 325
tx bytes: 44381
tx packets: 159
tx retries: 0
tx failed: 0
signal: -72 [-72] dBm
signal avg: -72 [-72] dBm........
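If you are tuning antenna placement or troubleshooting a flaky client, it can help to refresh this output continuously (standard watch usage, not from the original post):
watch -n 1 'iw dev wlan0 station dump'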
myguy@devbox:~$ sudo mdadm -As
myguy@devbox:~$ cat /proc/mdstat |grep sdf
md125 : inactive sdf3[2](S)
sudo mdadm --manage /dev/md125 --run
mdadm: started /dev/md125
........
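If you want to understand why the member showed up as an inactive spare in the first place, examining its superblock is a reasonable first step (sdf3 is just the device from the example above):
sudo mdadm --examine /dev/sdf3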
Assembling an array doesn't mean it will be active; it can come up inactive for many reasons:
md20 : inactive sdf1[2](S)
732442488 blocks super 1.2
Solution:
sudo mdadm --manage /dev/md20 --run........
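Usually the array is inactive because members are missing or only a spare was found; looking at the detail output for the array (md20 from the example above) generally makes the reason obvious:
sudo mdadm --detail /dev/md20
cat /proc/mdstat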
vgchange -ay
3 logical volume(s) in volume group "vg_12" now active
lvscan
inactive '/dev/vg_12/lv_root' [144.04 GB] inherit
inactive '/dev/vg_12/lv_home' [1.00 GB] inherit
inactive '/dev/vg_12/lv_swap' [7.85 GB] inherit........
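Once lvscan reports the logical volumes as ACTIVE, they can be mounted as usual; a minimal sketch using the volume group above (the /mnt mount point is arbitrary):
sudo lvscan
sudo mount /dev/vg_12/lv_root /mnt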
mdadm --manage /dev/md3 --add /dev/sda1
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdd2[1] sdd1[2](S)
31270272 blocks
md3 : active raid1 sda1[2] sdb1[1] sdc1[3](F)
943730240 blocks [2/1] [_U]
[>....................]........
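The rebuild can be followed live; a common way to watch the resync percentage tick up (not specific to this post) is:
watch -d cat /proc/mdstat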
I had a system running a 128MB live CD image with 2.8GB of available RAM, and the kernel OOM killer went crazy when using dd for more than 8 minutes, killing everything. I've read that this is due to lowmem exhaustion and paging on 32-bit kernels with lots of RAM.
I even enabled swap space on my LiveCD and the issue happened 25 minutes into dd rather than 8 minutes, so what gives?
Also no swap space was ever used!
cat /proc/s........
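One hedged workaround for this kind of page-cache pressure on 32-bit/lowmem systems is to have dd bypass the page cache with direct I/O (the input and output paths here are placeholders):
dd if=/dev/sdX of=/dev/sdY bs=1M iflag=direct oflag=direct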
This doesn't seem to be widely known (maybe it's in some documentation that none of us read, though), but there's an easy way to check the integrity of any mdadm array:
sudo echo check > /sys/block/md0/md/sync_action
-bash: /sys/block/md0/md/sync_action: Permission denied
sudo will never work here: the redirection is performed by your own (non-root) shell before sudo even runs, so only echo gets root privileges, not the write to the sysfs file. Run it from a root shell instead, or use the tee workaround shown below.
/sys/devices/virtu........
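A common workaround when you are not root is to let tee do the writing: tee runs under sudo and opens the sysfs file itself, so no unprivileged shell redirection is involved. Progress then shows up in /proc/mdstat and any inconsistencies in mismatch_cnt:
echo check | sudo tee /sys/block/md0/md/sync_action
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt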
Out of memory: kill process 7559 (rsync) score 635 or a child
Killed process 7559 (rsync)
I was surprised to see this in my dmesg when my rsync backup suddenly stalled/stopped.
This system has 3GB of RAM and lots of free memory, so I don't understand what is happening.
rsync invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0
Pid: 7600, comm: rsync Not tainted 2.6.24.2 #83
[] oom_kill_pr........
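On a 32-bit kernel, 3GB of RAM means most of it is highmem, and the OOM killer can fire when the much smaller lowmem zone runs out even though overall memory looks free. A quick way to check (these fields only appear on 32-bit kernels built with highmem support):
grep -i '^low' /proc/meminfo
free -lm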
Here are the results. The system is an AMD Mobile Sempron 3000+ with a 500GB HDD, 512MB of RAM and shared ATI Radeon graphics.
(ASCII-art banner from the benchmark output)........