I did a systemctl restart networking and it broke Proxmox VM connectivity!
Proxmox is the problem: after restarting the network, the tap devices go into the disabled state:
[2230884.919905] vmbr0: port 7(tap118i0) entered disabled state
[2230884.948864] vmbr0: port 8(tap122i0) entered disabled state
[2230884.972748] vmbr0: port 6(tap119i0) entered disabled state
[2230885.004745] vmbr0: port 5(tap117i0) entered disabled state
[2230885.03673........
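If you hit this, here is a minimal recovery sketch, assuming the tap devices from the log above still exist (the tap name and VMID 118 are just the example from the log):

# Check whether the taps are still enslaved to the bridge
brctl show vmbr0
# If one was dropped, re-add it and bring it up
brctl addif vmbr0 tap118i0
ip link set tap118i0 up
# Otherwise, restarting the affected VM recreates its tap device
qm stop 118 && qm start 118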
I am not sure why this is happening; neither the hostnode nor the VM changed. All I did was reboot the hostnode and start up the CentOS VM again. Also note it happened with both the original kernel on the VM and the latest CentOS 6.9 kernel as of this writing, as shown below.
Host Node: CentOS 6.9
Kernel: 2.6.32-696.6.3.el6.x86_64 (stock CentOS)
Kernel: 2.6.32-042stab123.9 (OpenVZ)
Same result with either kernel above........
Migrating from an old OpenVZ (CentOS 5) hostnode to a new OpenVZ (CentOS 6) one
Also, if migrating from a 32-bit HN to a 64-bit one, the RAM reported in the container will probably be much bigger than it should be!
16x bigger, in fact.
e.g. 32-bit HN:
             total       used       free     shared    buffers     cached
Mem:  ........
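If the inflated numbers come from the beancounter limits (an assumption on my part), here is a rough sketch for checking and resetting them (CTID 101 and the page counts are hypothetical; beancounter values are in 4 KB pages):

# Compare what the container reports against its config
vzctl exec 101 free -m
grep -i privvmpages /etc/vz/conf/101.conf
# Set the barrier:limit back to sane values explicitly
vzctl set 101 --privvmpages 262144:287744 --save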
iptables -t nat -A PREROUTING -p tcp -m tcp -d 192.168.2.1/32 --dport 3389 -j DNAT --to-destination 192.168.5.2:3389
iptables v1.4.7: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
The above is often because you don't have the correct modules loaded on the hostnode or enabled for the container, but in some cases it's actually a weird OpenVZ setting.
Che........
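As a quick first check on the hostnode, a hedged sketch (module names vary by kernel version):

# Is the NAT table module loaded at all?
lsmod | grep -E 'iptable_nat|ip_conntrack'
# Load it if it's missing
modprobe iptable_nat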
2014-08-12T19:05:55-0400 vzctl : CT 391801 : Unable to start init, probably incorrect template
2014-08-12T19:05:55-0400 vzctl : CT 391801 : Container start failed
This was caused by trying to run a 64-bit template on a 32-bit-kernel hostnode, which is obviously impossible.
The solution is to use a 32-bit template or upgrade the hostnode to 64-bit.........
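A quick way to verify the mismatch before recreating the container (the template name here is just an example):

# i686 means a 32-bit hostnode; x86_64 means 64-bit
uname -m
# Recreate the CT from a matching 32-bit template
vzctl create 391801 --ostemplate centos-6-x86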
iptables v1.3.5: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
This solution applies to all other iptables modules/problems in OpenVZ; just add your modules to both lists/lines below if you use modules other than the ones I show.
The modules need to be enabled both in iptables and on the OpenVZ hostnode itself, and then any container that needs them must be restarted.
How To Enable IPTables Modules in OpenVZ........
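As a sketch of the two places the modules go (the module list and CTID 101 are examples; adjust to your own setup):

# /etc/vz/vz.conf on the hostnode: add the NAT modules to the IPTABLES line
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle iptable_nat ip_nat_ftp"
# Enable them for the container too, then restart it
vzctl set 101 --iptables "iptable_filter iptable_mangle iptable_nat ip_nat_ftp" --save
vzctl restart 101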
This may not apply to everyone but here is what happened to me.
One day the IP connectivity for one container went dead: I could ping the hostnode from it, and the hostnode could ping it, but there was no external routing. I restarted the network service but it didn't help.
I checked the routing table inside the VPS and on the host and everything looked normal. I added another, different IP on the same subnet to the container and it worked. Right away I st........
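For anyone hitting something similar, a diagnostic sketch along the lines of the steps above (all addresses and the interface name are made up):

# From inside the container: is the gateway reachable? Do the routes look sane?
ping -c 2 192.168.5.1
ip route show
# From the hostnode: is something else answering ARP for the container's IP?
arping -I eth0 -c 3 192.168.5.2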
I shared a directory on my hostnode/local system (running Ubuntu 10.04) with my guest system running Windows XP. I have no idea why it's not mentioned or documented in an obvious way, but in the Windows guest you just access it with "\\vboxsvr".
Once you access that share you'll have access to all of the VBox shares on your local host. I think it should be indicated somewhere when you enable the sharing. Yes, I'm sure it's buried so........
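For reference, this is roughly how the share gets defined on the host side in the first place (the VM name, share name, and path are examples):

# Host (Ubuntu): attach a shared folder to the VM
VBoxManage sharedfolder add "WinXP" --name mydocs --hostpath /home/user/shared
# Guest (Windows XP): browse to \\vboxsvr\mydocs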
It is unbelievable how much the Xen kernel slows things down. Keep in mind both tests were done on the hostnode: one with the OpenVZ-Xen hybrid kernel and the other with plain OpenVZ. You can see performance is nearly 300% better when not using the Xen kernel.
OpenVZ-Xen Kernel Test Results (I was wondering what was wrong/so slow with my Core i5!)
# # # # # #........
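The original numbers are truncated above, but as a sketch, any quick CPU benchmark run under each kernel will show the gap; openssl is usually already installed:

# Run once under the OpenVZ-Xen kernel, once under plain OpenVZ, and compare
openssl speed sha1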
I didn't find any useful information that actually fixed this. My VPS was in the running state and I could not stop or restart it. I kept getting "Container already locked" no matter what I did (I tried all the suggestions in the Google results for this error).
Most of the suggestions were for Windows, but I only use Linux. The other solutions also said to restart the VZ service or even the entire hostnode, and this was not acceptable to me........
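As a hedged sketch of the usual non-reboot route, clearing the stale lock by hand (the CTID is an example; LOCKDIR defaults to /vz/lock in /etc/vz/vz.conf):

# Make sure no vzctl process is actually still working on the CT
ps aux | grep vzctl
# Then remove the stale lock file and retry
rm -f /vz/lock/391801.lck
vzctl stop 391801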
Enabling Fuse inside an OpenVZ container is very simple (although some people say it can't be done).
Remember that on your HN (hostnode), Fuse must be installed and the module must be loaded for this to work, as shown in the sketch below. In addition, you need the Fuse package installed inside the container, of course.
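A minimal sketch of the hostnode side, assuming the standard fuse kernel module:

# On the HN: load the fuse module and confirm it's there
modprobe fuse
lsmod | grep fuse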
vzctl set 2000 --devices c:10:229:rw --save
vzctl exec 2000 mknod /dev/fuse c 10 229
The part that most people forget........