vzctl stop 4096
Removing stale lock file /vz/lock/4096.lck
Stopping container ...
Child 546213 exited with status 1
^Z
[1]+ Stopped vzctl stop 4096
~]# rm /vz/lock/4096.lck
rm: remove regular file `/vz/lock/4096.lck'? y
~]# vzctl stop 4096
Stopping container ...
Child 546246 exited with status 1........
Error: Unable to apply new quota values: quota not running
Container start failed (try to check kernel messages, e.g. "dmesg | tail")
Killing container ...
Container was stopped
Error: Unable to apply new quota values: quota not running
Can't umount /vz/private/123123: Invalid argument
[root@rtt 123123]# vzquota on 123123
vzquota : (error) Can't open quota file for id 123123, maybe you need to reinit........
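One recovery sketch, assuming the quota file itself is corrupt (not something the output above confirms): move it aside and let vzctl re-initialize quota on start.
mv /var/vzquota/quota.123123 /var/vzquota/quota.123123.bad
vzctl start 123123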
This container won't start after exhausting its memory. There are no relevant or helpful messages in dmesg or vzctl.log either. Standard troubleshooting, such as disabling PPP, has not helped.
2017-07-06T23:33:29-0400 vzctl : CT 888171 : Locked by: pid 166029, cmdline vzctl start 888171
2017-07-06T23:33:29-0400 vzctl : CT 888171 : Container already locked
2017-07-06T23:33:29-0400 vzctl : CT 888171 : Container was stopped
2017-07........
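If the pid shown in the "Locked by" line is dead or hopelessly hung, a sketch of clearing the lock by hand (same idea as the stale .lck removal above):
# is the vzctl that took the lock still alive?
ps -fp 166029
# if not (or after killing it), remove the lock and retry
rm -f /vz/lock/888171.lck
vzctl start 888171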
The file ipupdate.txt should look like this:
ip.ip.ip.ip ctid
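For example (the addresses and CTIDs here are made up):
192.0.2.10 101
192.0.2.11 102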
# read one "IP CTID" pair per line and add that IP to the container
while read -r ip; do
    setip=$(echo "$ip" | awk '{print $1}')
    ctid=$(echo "$ip" | awk '{print $2}')
    echo "vzctl set $ctid --ipadd $setip --save"
    vzctl set "$ctid" --ipadd "$setip" --save
done < ipupdate.txt........
Migrating from an old OpenVZ (CentOS 5) to a new OpenVZ (CentOS 6)
Also, if migrating from a 32-bit HN to a 64-bit one, the RAM your containers see will probably be much bigger than it should be (16x bigger)!
eg. 32bit HN:
total used free shared buffers cached
Mem: ........
Syncing private
Live migrating container...
Syncing 2nd level quota
11000: invalid option -- F
Usage: vzdqload quotaid [-c file] commands
Loads user/group quota information from stdin into quota file.
-c file use given quota file
Commands specify what user/group information to load:
-G grace time
-U disk limits........
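This is usually a vzquota version mismatch between the two nodes (the destination's vzdqload doesn't know the newer -F flag), so upgrading vzquota on the destination is the clean fix. If the container doesn't actually need per-UID/GID quotas, a workaround sketch (my assumption, test it first) is to disable second-level quota so the vzdqdump/vzdqload step is skipped:
vzctl set $CTID --quotaugidlimit 0 --save
vzmigrate --online dest-host $CTID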
rm /vz/lock/1200.lck
rm: remove regular file `/vz/lock/1200.lck'? y
vzctl start 1200
Container already locked
vzctl start 1200
Starting container ...
vzquota : (error) can't lock quota file, some quota operations are performing for id 1200
vzquota on failed [7]
vzquota off 1200
vzctl start 1200
vzquota on 1200
root@rttbox ~]# vzquota off 1200
vzquota : (........
vzctl set 2 --devnodes fuse:rw --save
Where "2" is the ctid........
yum -y install wget
wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo
rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ
yum -y install vzkernel vzctl
#enable ip_forward
sed -i s/'net.ipv4.ip_forward = 0'/'net.ipv4.ip_forward = 1'/g /etc/sysctl.conf
#all interfaces should not send redirects
echo "net.ipv4.conf.default.send_redirects = 1" >> /etc/sysctl.conf
echo "net.ipv4.co........
vzctl set $CTID --devnodes net/tun:rw --capability net_admin:on --save........
Are you getting the same old error message even though your iptables settings for OpenVZ are correct?
iptables v1.3.5: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
The reason is that in newer vzctl the old way of setting IPTABLES="" in vz.conf is completely deprecated (I spent some time fiddling, wondering why my settings were correct but........
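If you are on vzctl 4.x, the per-container NETFILTER parameter is what replaced the old IPTABLES lists; a sketch of enabling the nat table for one container (check man vzctl for the exact values your version accepts):
vzctl set $CTID --netfilter full --save
vzctl restart $CTID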
2014-08-12T19:05:55-0400 vzctl : CT 391801 : Unable to start init, probably incorrect template
2014-08-12T19:05:55-0400 vzctl : CT 391801 : Container start failed
This was caused by trying to run a 64-bit template on a 32-bit kernel hostnode which is obviously impossible.
The solution is to use a 32-bit template or upgrade the hostnode to 64-bit.........
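A quick way to spot the mismatch (the template file names are just the usual convention; your paths may differ):
# host kernel architecture: i686 = 32-bit, x86_64 = 64-bit
uname -m
# OS templates normally encode the arch in the file name
ls /vz/template/cache/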
vzctl stop ctid
Killing container ...
Child 1033348 exited with status 7
Unable to stop container
vzctl enter ctid
enter into CT 29831 failed
Some have suggested using vzctl stop ctid --fast which does not work.
The only thing that seems to work is restarting the vz service.........
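In other words (note this cycles every container on the node, so schedule it accordingly):
service vz restart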
OpenVZ has made vzctl version 4.7 default to using ploop, which is a big annoyance. No one wants it; otherwise we'd use Xen or KVM.
Make sure to manually specify vzctl 4.6.1 or you will have issues with old scripts breaking, since newer versions default to ploop (a single image, like Xen/KVM).
Here's a list of old versions of vzctl.........
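If you do end up on a newer vzctl anyway, a sketch of keeping the old simfs layout (assuming vzctl 4.x; the CTID and template name are just examples):
# per container, at creation time
vzctl create 101 --ostemplate centos-6-x86_64 --layout simfs
# or globally in /etc/vz/vz.conf
# VE_LAYOUT=simfs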
yum -y install wget
wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo
rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ
yum -y install vzkernel vzctl
After that just reboot and you may also have to enable ip_forward in /etc/sysctl.conf........
Linux box13. 2.6.32-042stab076.5 #1 SMP Mon Mar 18 20:41:34 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux
Even setting privvmpages to a specific value does NOT affect "free -m" in containers.
This is probably a kernel issue
23:36:29 up 159 days, 7:12, 4 users, load average: 0.42, 0.44, 0.33
[root@box13 ~]# free -m
total ........
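My best guess (an assumption, not confirmed by the output above): on vSwap-capable 042stab kernels, what free reports inside a container is driven by physpages/swappages rather than privvmpages, so controlling it would look more like:
vzctl set $CTID --physpages 0:2G --swappages 0:1G --save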
Starting container...
vzquota : (error) Quota on syscall for id 42131: No such file or directory
vzquota on failed [3]
Solution
cd /var/vzquota
mv quota.42131 quota.42131-disable
vzctl start 42131
Starting container...
Initializing quota ...
Container is mounted
Adding IP address(es):
Setting CPU units: 1000
Container start in progress...
........
mkdir: cannot create directory 'test': Disk quota exceeded
Usually this means you are out of inodes:
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/simfs 200000 200000 0 100% /
none ........
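The fix from the host is to raise the inode limit (the numbers are only an example), or to delete files inside the container:
vzctl set $CTID --diskinodes 400000:440000 --save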
Container is currently mounted (umount first)
The container is stuck in the "mounted" state; you must manually start it to get it out of that state (there is no umount option as implied by OpenVZ vzctl).........
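In other words (ctid is a placeholder):
vzctl start ctid
vzctl stop ctid   # only if you actually wanted it stopped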
vzquota : (warning) block_hard_limit [102] < block_current_usage [520824]
This is because the container's disk usage exceeds the quota.
Eg. once on a test container I had accidentally set it to 32MB yet the OS took about 600MB.
Just set the quota to something bigger than the currently used space to solve it.
vzctl set 3891 --diskspace 5G:5G --save........
Error: detected vswap CT config but kernel does not support vswap
This means either old kernel or bad config (physpages NOT set to 0:unlimited)
Solution
vzctl set $veid --physpages 0:unlimited --save........
Stuff like this always happens/breaks after a vzctl update, whether it's new parameters being added or becoming required, etc.
File /etc/vz/conf/ve-vps.basic.conf-sample not found: No such file or directory
Fix the value of CONFIGFILE in /etc/vz/vz.conf
Creation of container private area failed
Warning: distribution not specified in CT config, using defaults from /etc/vz/dists/default
WARNING: /etc/vz/conf/4400.conf not found: No such file or directory........
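A sketch of the fix (the sample config names vary between vzctl versions):
# see which sample configs this vzctl actually ships
ls /etc/vz/conf/ve-*.conf-sample
# then set CONFIGFILE in /etc/vz/vz.conf to a name that exists,
# e.g. CONFIGFILE="basic" for ve-basic.conf-sample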
Starting online migration of CT 25000 to 192.168.5.1
Preparing remote node
Initializing remote quota
Syncing private
Live migrating container...
Error: Failed to suspend container
CPT ERR: f68cf000,25000 :foreign process 15755/14731(vzctl) inside CT (e.g. vzctl enter or vzctl exec).
CPT ERR: f68cf000,25000 :suspend is impossible now.
CPT ERR: f68cf000,25000 :foreign process 15755/14731(vzctl) inside CT (e.g. vzctl enter or........
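The CPT ERR lines tell you which process is blocking the suspend (pid 15755 here). A sketch of clearing it and retrying, assuming it is a leftover vzctl enter/exec session:
# see what is holding the container
ps -fp 15755
# exit that shell (or kill it if hung), then retry
vzmigrate --online 192.168.5.1 25000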
Make sure the module "tun" is loaded on the host.
vzctl set 2000 --devnodes net/tun:rw --save
*Note: what's below is what OpenVZ says you need (but I've never had to do it)
vzctl exec 2000 mkdir -p /dev/net
vzctl exec 2000 mknod /dev/net/tun c 10 200
vzctl exec 2000 chmod 600 /dev/net/tun
On the container, test the device:
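The usual check (the OpenVZ wiki convention; oddly, "File descriptor in bad state" is the good result here):
vzctl exec 2000 cat /dev/net/tun
# healthy: cat: /dev/net/tun: File descriptor in bad state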
When something is wrong:........
OpenVZ problem; it is confusing because it's an inode issue even though there is enough free space.
cp: cannot create regular file `forums/memberlist.php': Disk quota exceeded
/dev/simfs 60G 20G 41G 33% /
none 2.0G 4.0K 2.0G 1% /dev........
The first container would not come up:
Starting CT 2333:
service vz stop
OpenVZ is locked [FAILED]
2010-11-29T23:26:23-0800 vzctl : CT 2333 : Starting container ...
2010-11-29T23:37:21-08........
Initializing quota ...
Error: Not enough parameters, diskinodes quota not set
vzctl set $veid --diskinodes 90000:91000 --save
New versions of OpenVZ seem to have some strange diskinodes parameter which is required.........
vzmigrate --online dest-host VEID
eg.:
vzmigrate --online 192.168.1.55 101
One option I would recommend is "--keep-dst", that way if the migration is interrupted you can still bring the VPS back up on the original host. After the migration is successful you can manually destroy it.
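For example, combining it with the sample host and CTID above:
vzmigrate --online --keep-dst 192.168.1.55 101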
OpenVZ has a good writeup on this including Checkpointing and Restoring etc:........
cat /proc/user_beancounters produces the following:
   resource      held    maxheld    barrier      limit    failcnt
   kmemsize   1861537    5139870   12752512   12752512   26965041
Notice the failcnt "26965041"; that is for kmemsize and at first it confused me. The system had enough guaranteed and enough burst RAM available. kmemsize is a variable independent of that, but who cares about the explanation, right? Let's just make thing........
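Making it bigger looks like this (the numbers are illustrative; pick a barrier:limit comfortably above what user_beancounters shows):
vzctl set $CTID --kmemsize 25165824:27262976 --save   # roughly double the old 12752512 limit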
I didn't find any useful information that actually fixed this. My VPS was in the "Running State" and I could not stop or restart it. I kept getting "Container already locked" no matter what I did (I tried all the suggestions in the Google results for this error).
Most of the suggestions were for Windows, but I only use Linux. The other solutions also said to restart the VZ service or even the entire hostnode, and this was not acceptable to me........
To enable Fuse to work inside an OpenVZ container is very simple (although some people say it can't be done).
Remember that on your HN (HostNode), Fuse must be installed and the fuse module must be loaded for this to work. In addition, remember that you of course need the Fuse package installed inside the container too.
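On the HN that usually just means (standard module handling, nothing OpenVZ-specific):
modprobe fuse
lsmod | grep fuse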
vzctl set 2000 --devices c:10:229:rw --save
vzctl exec 2000 mknod /dev/fuse c 10 229
The part that most people forget........