Bonding is an excellent way to get both increased redundancy and throughput. It is similar to the "Network Teaming" feature in Windows.
There are a few different modes, but we will use mode 6 (balance-alb, adaptive load balancing). I think it's the best of both worlds: it is not just a failover mode, it also balances traffic across the slaves, so you get both redundancy and load balancing. So if you bond four single 1G ports, you will have a combined throughput of 4G at this point. Just bear in mind that the true thr........
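As a quick sketch of what mode 6 looks like on a CentOS/RHEL-style box (the device names, IP address, and four-slave layout are my assumptions; adjust to your hardware):

/etc/modprobe.conf (or a file under /etc/modprobe.d/):
alias bond0 bonding
options bond0 mode=6 miimon=100

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0

/etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1 through eth3):
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

After a network restart, cat /proc/net/bonding/bond0 to confirm the mode and that all slaves are up.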
In our 2024 VPS Server/Cloud Buyer's Guide, we rank the location of your VPS/hosting/server as one of the most often-overlooked priorities.
2024 Update - Datacent........
For a lot of reasons, it may be convenient to attach or detach live disks on a running VM without having to reboot it. Sure, you can use some network-based storage, but when performance counts, attaching a new virtual disk will usually give you better throughput and lower latency in a quick testing situation.
This doesn't work; why not?
drive_add 0 if=virtio,file=/tmp/vm.qcow2,if=virtio,format=qcow2,id=rtt
Can't hot-ad........
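Part of the problem is the doubled if= option in that line, but the usual recipe on recent QEMU monitors is to create the drive with if=none and then attach it with device_add. A minimal sketch (the rtt/rttdisk ids are arbitrary):

drive_add 0 if=none,file=/tmp/vm.qcow2,format=qcow2,id=rtt
device_add virtio-blk-pci,drive=rtt,id=rttdisk

info block should then list the new drive, and the guest sees a fresh virtio disk without a reboot.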
The first is a dual-CPU AMD Opteron 2373EE (4 cores x 2), and I think it did badly because it has some old 250GB SATA drives which can only do about 65MB/s max sequential reads. I think it should have blown away the second (an AMD X4 640 Quad Core).
[root@fs12home unixbench-4.1.0-wht-2]# ./Run
make all
make[1]: Entering directory `/root/unixbench-4.1.0-wht-2'
Checking distribution of files
./pgms exists
./src exists........
Here's a proven example of what a bad hard drive can do. It was technically functioning OK in a RAID array, but the system became extremely slow, the load became high, and IOWAIT was even higher; I always thought it was a bad application. The truth is that this failing 1TB Hitachi had slowly gotten worse and caused huge slowdowns (eg. 100% load with Thunderbird waiting for e-mails to load, etc.). After swapping it out, tabs change instantly, emails are not lagged, and........
I like dd; although it only reads the disk, a read test of the entire drive will usually uncover whether your hard drive is bad in some parts. This is a good thing to do at least once a month; a lot of the time, bizarre program behavior, lagginess and crashing/unmounting problems etc. are due to a failing disk, and SMART won't know it or indicate a problem:
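A minimal version of that monthly read test, assuming the disk is /dev/sda (this only reads; nothing is written to the drive):

dd if=/dev/sda of=/dev/null bs=1M

If it aborts with an I/O error, note how far it got and check dmesg for ata/media errors around that point.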
We must also remember there's never a guarantee. I've found that ever since we moved to larger capacities and more platters per drive with 1TB drives........
I had one of these shipped and it was not recognized when plugged in; here's what a dead drive looks like (I assume it's the circuit board which is dead):
ata1: link is slow to respond, please be patient (ready=0)
ata1: softreset failed (device not ready)
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1: link online but device misclassified, retrying
ata1: link is slow to respond, please be patient (ready=0)
ata1: softreset f........
It is unbelievable how much the Xen kernel slows things down. Keep in mind both tests were done on the hostnode: one with the OpenVZ-Xen hybrid kernel and the other with plain OpenVZ. You can see the performance is nearly 300% better when not using the Xen kernel.
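If you want to reproduce this, confirm which kernel each run is booted into before benchmarking:

uname -r

A hybrid OpenVZ-Xen kernel will generally carry a xen marker in its version string, while the plain OpenVZ kernel will not.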
OpenVZ-Xen Kernel Test Results (I was wondering what was wrong/so slow with my Core i5!)
[UnixBench ASCII banner trimmed]
I decided to use yum to help me choose; even though I normally use proftpd, I wanted to see what else I could find.
yum search ftp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* rpmforge: ftp-stud.fht-esslingen.de
* base: mirrors.netdna.com
* updates: updates.interworx.info
* addons: yum.singlehop.com
* extras: mirrors.netdna.com
rpmforge........
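From there the usual next step is to read up on and install whichever candidate looks best; vsftpd below is just an example pick, not a recommendation:

yum info vsftpd
yum -y install vsftpd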
Here are the results; the machine is a Sempron 3000+ AMD Mobile, 500GB HDD, 512MB RAM with shared ATI Radeon graphics.
[UnixBench ASCII banner trimmed]
I played around with xmit power (how much power in mW) to see if I could increase the range and signal strength. I don't think this Linksys WRT54G's strength is the wireless; it seems to have poor signal quality and transfer rates all around.
I think part of the problem is also that there are several wireless networks around my house that could be interfering, and the walls are thick here.
Anyway, moving on now :) The default is 28mW and I increased it to........
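For reference, on third-party firmware (DD-WRT and friends) this can also be done from the shell instead of the web UI. A rough sketch only; the txpwr variable name is an assumption on my part, and 50 is just an example value in mW:

nvram set txpwr=50
nvram commit
reboot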
The results are still not flattering and are nothing close to native performance. Unless GlusterFS has a "DRBD-like" option to delay writes over the network and to only read from the client side, I don't see how performance can ever improve much more.
After doing some client optimizations I added more to the score:
Start Benchmark Run: Sun Nov 29 00:37:44 PST 2009
00:37:44 up 3 min, 1 user, load average: 0.01........
You might remember my original GlusterFS/OpenVZ benchmark, which produced a horrible 29.8.
This is the exact same system, but using the latest 2.0.8 (with a small-files patch which speeds up performance) you can see it is about 25% faster.
I also haven't tuned my config files at all, but there are some settings that should increase performance on small files which I believe i........
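For anyone wanting to experiment, those small-file settings live in the client volfile as performance translators stacked on top of protocol/client. This is a sketch only; the host, subvolume name, and cache size are assumptions, and exact option names vary between 2.0.x releases:

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.1
  option remote-subvolume brick
end-volume

volume wb
  type performance/write-behind
  subvolumes remote
end-volume

volume cache
  type performance/io-cache
  option cache-size 64MB
  subvolumes wb
end-volume

The 2.0.x line also ships a performance/quick-read translator aimed specifically at small files, which stacks the same way.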
This is very disappointing since GlusterFS markets itself as a solution to deploy VPS servers on. On the HN itself I get a UnixBench score of about 360.
I'm also using an SSH tunnel to secure the communications, but even before that, things seemed very slow.
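The tunnel itself is nothing exotic; roughly this, assuming the server listens on the GlusterFS 2.0.x default TCP port of 6996 (the port and hostname are assumptions), with the client volfile then pointing its remote-host at 127.0.0.1:

ssh -f -N -L 6996:127.0.0.1:6996 root@glusterserver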
[UnixBench ASCII banner trimmed]