Even today we see a lot of servers with various RPC services and ports left open, and this creates not only potential inbound vulnerabilities but, perhaps more commonly, the abuse of your network resources in reflective RPC queries.
To stop this problem, you should disable and remove all services relating to RPC, or at least block all relevant ports for the service.
Surprisingly, there are still some providers and Linux OS installs that set up these services and leave them........
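A minimal sketch of doing that on a typical systemd-based box (service names can differ by distro; rpcbind on port 111 is the usual portmapper):
# stop and disable the RPC portmapper so nothing registers RPC services
sudo systemctl disable --now rpcbind rpcbind.socket
# or, if you must keep it, at least block the portmapper port from outside
sudo iptables -A INPUT -p tcp --dport 111 -j DROP
sudo iptables -A INPUT -p udp --dport 111 -j DROP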
A lot of companies are unsure which solution to choose, and many may not be aware of Docker Swarm as an alternative to Kubernetes. One thing that many sysadmins find is that Docker Swarm is simply easier and far quicker to set up and maintain than Kubernetes.........
If you have swarm services and dockerd is creating a high load even with the containers sitting idle, the easiest solution is to upgrade to a newer Docker version.
For example, an identical config of 3 nodes running Redis 5 with 30 replicas produces a load of about 1.45 on Debian 10 with Docker 18.09.1.
If I create the same setup on Debian 11, with Docker 20.10.5+dfsg1, then the CPU usage is low.
One other difference I wondered about is the kernel. In my test setup........
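If you want to reproduce a comparison like this, a rough sketch on an existing swarm (the service name redis-test is just an example) looks like:
# confirm which Docker version the node runs
docker version --format '{{.Server.Version}}'
# create an idle Redis 5 service with 30 replicas, matching the test above
docker service create --name redis-test --replicas 30 redis:5
# let it settle, then compare the idle load and what dockerd is doing
uptime
top -b -n1 | grep dockerd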
We used a simple Debian 10 VM and measured the memory before starting Docker and then after starting it with no containers running. The goal is to show how much memory dockerd actually uses.
Before docker was started
The VM was using 58MB of RAM.
After docker was started
it was using 99MB of RAM.
How much RAM does docker use?
It's not scientific but fair to say dockerd itself uses about 41MB of RAM (99-58).........
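To repeat the (admittedly unscientific) measurement yourself, something along these lines works on a systemd-based Debian VM:
# memory in use with Docker stopped
sudo systemctl stop docker docker.socket
free -m
# start the daemon with no containers and measure again
sudo systemctl start docker
free -m
# the difference in the "used" column is roughly what dockerd itself costs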
Welcome to our in-depth guide on configuring NVIDIA GPUs with Docker on Ubuntu. This post is tailored for developers, data scientists, and IT professionals who are looking to leverage the power of NVIDIA's GPU acceleration within Docker containers.
Whether you're working on machine lea........
Just use apt-cache policy to find the repo of a package:
apt-cache policy lxd
lxd:
  Installed: 3.0.3-0ubuntu1~18.04.2
  Candidate: 3.0.3-0ubuntu1~18.04.2
  Version table:
 *** 3.0.3-0ubuntu1~18.04.2 500
        500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
........
If you are using Mint, delete the preference that stops snap from installing (as it is required for lxd):
sudo rm /etc/apt/preferences.d/nosnap.pref
1. Install lxd:
sudo apt install lxd
Issues installing lxd or errors? Click here
Debian at this time does not have lxd so you'll need to use snap:
sudo apt in........
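The command above is cut off, but the usual sequence on Debian looks roughly like this (a sketch; assumes snapd is available in your release's repos):
sudo apt install snapd
sudo snap install lxd
# run the interactive setup for storage and networking
sudo lxd init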
The Best Docker Tutorial for Beginners
We quickly explain the basic Docker concepts and show you how to do the most common tasks from starting your first container, to making custom images, a Docker Swarm Cluster Tutorial, docker compose and Docker buildfiles.........
If you don't already have it, you'll need EPEL.
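If EPEL isn't there yet, a minimal sketch of adding it (CentOS/RHEL with the extras repo enabled):
# EPEL carries the lxc and lxc-templates packages on CentOS/RHEL
yum -y install epel-release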
Install LXC
yum -y install lxc lxc-templates
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: mirror.it.ubc.ca
* epel: mirrors.kernel.org
* extras: mirror.it.ubc.ca
* updates:........
I've read a few guides about this but they didn't work for me.
sudo apt-get install bridge-utils
# I don't think the above is enough; it still won't work, even though by default you have an /etc/qemu-ifup that handles this if you have the right tools and setup
sudo qemu-system-x86_64 -net tap -net nic -enable-kvm -cpu host,vmx=on ~/VirtualBox\ VMs/vsphere-vcenter/vsphere-vcenter.vdi
W: /etc/qemu-ifup: no bridge for guest interface foun........
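That warning usually means there is no bridge for /etc/qemu-ifup to attach the tap device to. A rough sketch of creating one first (br0 and eth0 are example names; adjust to your NIC):
# create a bridge, move the physical NIC onto it, and get an address on the bridge
sudo brctl addbr br0
sudo brctl addif br0 eth0
sudo ip addr flush dev eth0
sudo dhclient br0
# the stock /etc/qemu-ifup then just adds the tap interface qemu creates to the first bridge it finds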
I can't get vmx CPU extensions to show up in VirtualBox guests despite enabling nested paging and
enabling VT-x in the VirtualBox guest settings. The catch is that checking VT-x or AMD Virtualization (SVM) enables hardware virtualization for running the guest, BUT does not pass it through into the guest. This means if you check cat /proc/cpuinfo in the guest you will see the CPU doesn't support virtualization. It looks like VirtualBox still hasn't implemented this!
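A quick way to confirm what the guest actually sees (a standard procfs check, nothing VirtualBox-specific):
# count CPUs exposing hardware virtualization flags inside the guest; 0 means none are passed through
egrep -c '(vmx|svm)' /proc/cpuinfo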
But there is good news I........
Migrating from an old OpenVZ (CentOS 5) to a new OpenVZ (CentOS 6)
Also, if migrating from a 32-bit HN to 64-bit, your RAM will probably appear much bigger than it should be!
16x bigger
e.g. 32-bit HN:
      total       used       free     shared    buffers     cached
Mem: ........
Errors like this show up on high-usage servers and ports, so it is common to see them on HTTP and even IMAP ports:
possible SYN flooding on port 80. Sending cookies.
The Linux kernel will even detect flooding on OpenVZ containers:
possible SYN flooding on ctid 6000, port 993. Sending cookies.
In many cases this is not an issue and is simply the result of regular but high-volume traffic.........
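If it really is just heavy legitimate traffic, the usual knobs are syncookies and the SYN backlog; a sketch with example values (tune them to your workload):
# keep syncookies on and raise the SYN/accept backlogs
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sysctl -w net.core.somaxconn=4096
# add the same keys to /etc/sysctl.conf to make them permanent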
Linux box13. 2.6.32-042stab076.5 #1 SMP Mon Mar 18 20:41:34 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux
Even setting privvmpages to a specific value does NOT affect "free -m" in containers.
This is probably a kernel issue.
23:36:29 up 159 days, 7:12, 4 users, load average: 0.42, 0.44, 0.33
[root@box13 ~]# free -m
total ........
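For reference, this is the kind of change that was being tested (CTID 101 and the page counts are example values; privvmpages is a barrier:limit pair counted in 4KB pages):
# give container 101 a 1GB privvmpages barrier/limit and persist it
vzctl set 101 --privvmpages 262144:262144 --save
# per the above, free -m inside the container still reports the same numbers
vzctl exec 101 free -m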
iptables v1.3.5: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
This solution applies to all other iptables modules/problems for OpenVZ; you'll just need to add them to both lists/lines below if you have modules other than the ones I have here.
The modules need to be enabled in both iptables and the OpenVZ hostnode itself, and then the containers which need them must be restarted.
How To Enable IPTables Modules in OpenVZ........
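A sketch of the two places involved (the module list is an example; add whatever modules you need to both, then restart the affected containers):
# 1) hostnode iptables config - /etc/sysconfig/iptables-config
IPTABLES_MODULES="ipt_REJECT ipt_tos ip_conntrack iptable_filter iptable_nat ip_nat_ftp"
# 2) OpenVZ config - /etc/vz/vz.conf
IPTABLES="ipt_REJECT ipt_tos ip_conntrack iptable_filter iptable_nat ip_nat_ftp"
# 3) restart any container that needs the new modules, e.g. CTID 101
vzctl restart 101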
One of my test CentOS 5 containers was on a partition that filled up; it threw all sorts of errors and stopped responding, and now I can't boot it anymore.
All the console shows is the Linux penguin in the top left corner, and the xm console says "usbcore: registered new driver hub" and halts there.
CentOS 5 Xen container stuck/frozen, won't boot past "usbcore: registered new driver hub"
Another great way of troubleshooting is booting fro........
This may not apply to everyone but here is what happened to me.
One day the IP connectivity for one container went dead; I could ping the hostnode from it and the hostnode could ping it, but there was no external routing. I restarted the network service but it didn't help.
I checked the routing table inside the VPS and on the host and everything looked normal. I added another, different IP on the same subnet to the container and it worked. Right away I st........
This error is annoying; in a Virtuozzo KB entry about this iptables NAT problem they say the kernel needs to be upgraded:
Symptoms
The node runs 2.6.18-x kernel older than 2.6.18-028stab053.10.
The NAT module does not work in a container, and you get a "can't initialize iptables table 'nat'" error:
# iptables -t nat........
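A quick check against the KB's threshold (the stab suffix is specific to OpenVZ/Virtuozzo kernels):
# hostnode kernel - per the KB, anything older than 2.6.18-028stab053.10 is affected
uname -r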
Proxmox has made this free utility to back up running OpenVZ containers. It's a great program, which is actually just a Perl script, but it gets the job done. The program is not 100% required because, as far as I know, all it really does is cp -a from your container's path, but it is still good to have uniformity in how you back up your containers.
For RPM distros such as CentOS/RHEL/Fedora etc., download and install this:
wget http://www.proxmox.com/cms_proxm........
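Once it's installed, a typical run looks something like this (CTID 101 and the dump directory are example values; check vzdump --help for the exact options your version supports):
# briefly suspend container 101 for consistency and write the backup to /backup
vzdump --suspend --dumpdir /backup 101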