If you have swarm services and dockerd is generating high load even when the containers are just sitting idle, the easiest solution is to upgrade to a newer Docker version.
For example, an identical config of 3 nodes running Redis 5 with 30 replicas produces a load of about 1.45 on Debian 10 with Docker 18.09.1.
If I create the same setup on Debian 11 with Docker 20.10.5+dfsg1, the CPU usage stays low.
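For reference, a test swarm service like the one described can be spun up with something along these lines (the service name here is arbitrary), and you can then watch what dockerd itself consumes on a manager node:
docker service create --name redis-test --replicas 30 redis:5
top -p $(pidof dockerd)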
One other difference I wondered about is the kernel. In my test setup........
The idlepc value is very important to dynamips, and it is both image and often CPU dependent. There is no "magic" value that will work for all images and all CPUs, so I'll show you a quick and handy way to find one.
Also, don't be disappointed if some values do not work well; idlepc gives you several to try. In my example below, #6 didn't help at all but #7 got me down to about 6% CPU from 99-100%.
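As a quick sketch of the workflow (assuming your router instance is called R1 in dynagen), you ask the dynagen management console for candidate values, try the numbered entries until CPU usage drops, then save the winner to your lab file:
=> idlepc get R1
=> idlepc save R1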
*Befo........
This tutorial will get your router up and running using emulation tools. In this case we'll be getting a Cisco C7206 (C7200 series) VXR router going, which also supports SCCP VoIP services.
dynamips is the emulator itself and dynagen is the front-end tool that helps us control everything; these are the same engines used under the hood by tools such as GNS3 and EVE-NG.
Together the two tools (dynamips and dynagen) allow us to create and emulate REAL router........
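As a rough sketch of how the two fit together (the IOS image path, RAM size and lab file name below are placeholders, not from the original article), you start dynamips in hypervisor mode and point dynagen at a small .net file:
dynamips -H 7200 &
dynagen mylab.net

# mylab.net
[localhost]
    [[7200]]
        image = /opt/ios/c7200-image.bin
        npe = npe-400
        ram = 256
    [[ROUTER R1]]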
Is an mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS?
cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb4[0] sda4[1]
897500672 blocks super 1.2 2 near-copies [2/2] [UU]
[==========>..........] check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec
........
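If you just need the check to back off rather than stop it completely, the kernel's rebuild/check speed limits can be lowered temporarily (values are in KB/sec; the defaults are typically 1000 min and 200000 max):
echo 1000 > /proc/sys/dev/raid/speed_limit_max
echo 200000 > /proc/sys/dev/raid/speed_limit_max   # restore the default afterwards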
Occasionally my whole screen locks up and I cannot even switch to the console, and I find this in my syslog:
*-display
description: VGA compatible controller
product: Mullins [Radeon R3 Graphics]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
 ........
Video Links:
How To Setup 2 Phones on a Single CME Router and get the GUI going.
How to use Dialpeers with CME with two routers
How to implement call restrictions using COR / Class of Restriction
Set your clock (time format is HH:MM:SS):
clock set 22:00:00 24 September 2024........
It looks like this has something to do with APIC but I am not sure. I have similar CPUs with a different motherboard and BIOS that work fine on the same type of kernel. A lot of the time the issue comes down to the C-step setting in the BIOS.
The same thing happened on the 2.6 kernel with CentOS 6, but this is a homebrew 4.4 kernel, so I am not sure why it is happening when even the CentOS 7 (3.10) kernel works OK.
Solution - It comes down to the BIOS set........
It's fairly simple to start or stop a check, but I do wish mdadm's command had this built in. Sometimes it will run a check at the worst time, causing the server to slow to a crawl.
Stop check on md126:
echo idle > /sys/block/md126/md/sync_action
Start check on md126:
echo check > /sys/block/md126/md/sync_action
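You can confirm which state the array is in by reading the same file back; it prints check while a check is running and idle otherwise:
cat /sys/block/md126/md/sync_action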
........
Here's a proven example of what a bad hard drive can do: it was technically functioning OK in a RAID array, but the system became extremely slow, the load became high, and IOWAIT was even higher, and I always thought it was a bad application. The truth is that this failing 1TB Hitachi had slowly gotten worse and caused huge slowdowns (e.g. 100% load on Thunderbird waiting for e-mails to load, etc.). After swapping it out, tabs change instantly, emails are not lagged, and........
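If you suspect a drive is doing this to you, a reasonable first step (the device name below is just an example) is to pull the full SMART data and look at attributes such as Reallocated_Sector_Ct and Current_Pending_Sector, which tend to creep upward on a slowly failing disk:
smartctl -a /dev/sda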
mod_status is a great way to track down the source of high CPU usage and to find what vhost/script is the cause of it.
It gives you a live view of bandwidth usage, CPU usage, and memory usage broken down by domain/vhost and script/URI.
Enable mod_status
vi /etc/httpd/conf/httpd.conf
ExtendedStatus On
SetHandler server-status
Order Deny,Allow
Deny from all
All........
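For reference, the finished block usually ends up looking something like this (Apache 2.2 syntax; adjust the Allow line to the address you will browse from), followed by an httpd restart:
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>

service httpd restart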
This happened during a RAID array check:
SMART says both drives pass the test, but I'm doing a long test on them and hopefully this is not a hardware error.
Apr 3 04:22:01 remote kernel: md: syncing RAID array md2
Apr 3 04:22:01 remote kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Apr 3 04:22:01 remote kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Apr........
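For the long test itself, the usual sequence is to start it on each member drive and read the results back once the estimated runtime smartctl prints has passed (device names are examples):
smartctl -t long /dev/sda
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sda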
high IO wait
424 root 39 19 1900 848 552 D 0.0 0.0 0:00.91 updatedb
root 424 0.0 0.0 1900 848 ? DN Mar11 0:00 /usr/bin/updatedb -f sysfs rootfs bdev proc cpuset binfmt_misc debugfs sockfs usbfs pipefs anon_inodefs futexfs tmpfs inotifyfs eventp........
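One common way to stop updatedb from piling up IO wait like this is to run it niced and at the idle IO priority, so regular IO always wins:
nice -n 19 ionice -c3 /usr/bin/updatedb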
I think this will be useful to others because I have a server that kept crashing mysteriously during intense disk usage/RAID checks. It would only crash during the weekly RAID integrity check.
Then I noticed during a reboot that not all CPUs were being brought up. As a result, temperatures run much higher: based on the output I got from sensors, just booting the system produced higher than normal temperatures.
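A quick way to confirm whether all CPUs actually came up is to compare the number of online processors against what the box should have:
grep -c ^processor /proc/cpuinfo
cat /sys/devices/system/cpu/online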
You can imagine that a full blown RAID check........
Jan 16 04:02:03 centosbox syslogd 1.4.1: restart.
Jan 16 04:07:34 centosbox kernel: INFO: task updatedb:20771 blocked for more than 300 seconds.
Jan 16 04:07:34 centosbox kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 16 04:07:34 centosbox kernel: updatedb D F78BE050 6476 20771 20766........
This made me nervous, but based on the messages log it's clearly a cron job that runs every Sunday at about 4:22.
I actually can't find any evidence of it in cron.d or cron.daily, but it is there somewhere, obviously.
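One way to hunt it down is to grep the whole cron tree; on CentOS-style systems the culprit is usually a raid-check script controlled by /etc/sysconfig/raid-check:
grep -ril raid /etc/cron* /etc/sysconfig 2>/dev/null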
What I don't get is why this cron job doesn't do a data check like Ubuntu's cron script does. When you unnecessarily rebuild the array you lose your redundancy for that whole period, which leaves your data extremely vulnerable.
*Update I did a grep of "........
This doesn't seem to be widely known (maybe it's in some documentation that none of us read, though) but there's an easy way to check the integrity of any mdadm array:
sudo echo check > /sys/block/md0/md/sync_action
-bash: /sys/block/md0/md/sync_action: Permission denied
sudo will never work here and you need a root shell: while echo is indeed a bash builtin rather than a separate binary, the real problem is that the redirection is performed by your own non-root shell, so sudo elevates only the echo, never the write to the file.
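The usual workarounds are to let tee do the writing under sudo, or to run the whole command in a root shell:
echo check | sudo tee /sys/block/md0/md/sync_action
sudo sh -c 'echo check > /sys/block/md0/md/sync_action'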
/sys/devices/virtu........
This really made me nervous, but notice the mdstat says "check". This is because in Ubuntu there is a scheduled mdadm cron script that runs every Sunday at 00:57 and checks your entire array. This is a good idea because it catches gradual but otherwise unnoticed data corruption, which I had never thought about.
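On Debian/Ubuntu that job typically lives in /etc/cron.d/mdadm and calls the checkarray helper, which you can also run by hand if you would rather pick your own time:
sudo /usr/share/mdadm/checkarray --all --idle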
As long as the check completes properly you have peace of mind knowing that your data integrity is assured and that your hard drives are functioning properly (I'........
This is obviously a bug in the r8169 kernel module and it seems to affect a lot of people. I upgraded to the latest kernel and hope this won't happen anymore, as it is a very serious error, especially for those running servers with this chipset; who can afford to have the NIC randomly go off-line for no apparent reason?
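If you are hitting this, it's worth confirming which driver and kernel the box is actually running before and after the upgrade (the interface name is an example); reloading the module is also a common short-term way to get the NIC back without a full reboot:
ethtool -i eth0
uname -r
modprobe -r r8169 && modprobe r8169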
[655548.189113] type=1505 audit(1277067560.902:5): operation="profile_load" name="/usr/bin/freshclam"........
This drive is clearly on the way out; the kernel knows it, but I'm surprised that SMART is not concerned. I didn't blame Seagate for their past issues until now. This hard drive has hardly been used and has not even been powered on for a year according to SMART.
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.11
Device........
The binary "iostat" comes from the package "sysstat" and is available on all Linux/Unix like platforms.
Use the "-m" option to give you what you probably want, which is to see in MB/s how much bandwidth each disk is doing.
iostat -m
Linux 2.6.24.2 ((none)) 04/16/10
avg-cpu: %user %nice %system %iowait %steal %idle
........
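In practice you usually want iostat to refresh on an interval; adding -x includes per-device await and utilisation so a struggling disk stands out:
iostat -mx 5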
heartbeat is stopped for some reason
Anyway, hnode2 was active and the services are running fine, but I see heartbeat has been stopped somehow.
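A sensible first check (assuming the classic Linux-HA heartbeat package) is simply to ask the init script what it thinks and, if it really is down, start it again while watching ha-log:
/etc/init.d/heartbeat status
/etc/init.d/heartbeat start
tail -f /var/log/ha-log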
Here is the last log I see of heartbeat:
Sep 9 17:15:32 hnode2 heartbeat: [16738]: info: MSG stats: 9/1762471 ms age 0 [pid16738/MST_CONTROL]
Sep 9 17:15:32 hnode2 heartbeat: [16738]: info: cl_malloc stats: 716/51784021 152624/74519 [pid16738/MST_CONTROL]
Sep 9 17:15:32........
There's a lot of information and guides on OCFS2 for RHEL and CentOS Linux, but the package setup and configuration on Debian-based systems is slightly different and this has thrown some people off.
Installing OCFS2
You should install the following packages to get started:
apt-get install ocfs2-tools ocfs2console
Configure OCFS2
In RHEL/CentOS the main configuration file is located at /etc/sysconfig/o2cb.
However, in Debian-based Linux it is located........
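Whichever file holds the o2cb defaults on your distribution, the cluster layout itself is described in /etc/ocfs2/cluster.conf on both families. A minimal two-node sketch (the cluster name, node names and IP addresses are placeholders) looks roughly like this:
cluster:
        node_count = 2
        name = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.0.10
        number = 0
        name = node1
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.0.11
        number = 1
        name = node2
        cluster = mycluster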