Published Solutions
-
H-Sphere 3.x End of Life (EOL) and End of Maintenance (EOM) Statement
Original Publishing Date: 2022-04-13

KB 9012: H-Sphere End-of-Life (EOL) Versions (2.4 - 3.6.2)

Ingram Micro Cloud is committed to providing high-quality, cost-effective solutions to our customers. As new technologies emerge, the demand for older product versions and components decreases. Accordingly, as we continue to address our customers' needs by introducing new products and services, we also periodically need to end support for older software versions.

Change in Support Status

All H-Sphere versions reached “End of Life” and “End of Support” status as of December 31, 2017, meaning that we ceased further development and security fixes and no longer accept technical support requests related to H-Sphere.

Internal content
-
How to create an IP_GRE tunnel inside a container?
Original Publishing Date: 2022-04-13

Answer

To create a GRE (PPTP) tunnel inside a container, follow these steps:

1. Ensure that the ip_gre and nf_conntrack_proto_gre modules are loaded on the node:

   # lsmod | egrep 'ip_gre|nf_conntrack_proto_gre'

   If they are not present, load them manually:

   # modprobe ip_gre
   # modprobe nf_conntrack_proto_gre

   If they need to be loaded automatically on boot, configure this according to the instructions for the corresponding OS (RHEL 6 based OS, RHEL 5 based OS).

2. Configure TUN/TAP devices inside the container.

3. Configure the container to support the PPP device with the ipgre feature:

   # vzctl set CTID --save --devnodes "ppp:rw net/tun:rw" --features "ppp:on ipgre:on"

4. Configure the container to load the ip_gre iptables module:

   For Virtuozzo hypervisor (PSBM 5) and Virtuozzo Containers 4.7 and earlier versions:

   # vzctl set CTID --save --iptables ip_gre

   For Virtuozzo Server 6.0:

   # prlctl set CTID --save --netfilter full

   For more detailed information on configuring iptables modules in containers, check this article.

Note: the ip_gre module is virtualized since CU-2.6.18-028stab064.4.

Internal content
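Once the container has been configured and restarted, the tunnel itself can be created inside the container with standard iproute2 commands. The following is a minimal sketch only: the tunnel name gre1 and all IP addresses are hypothetical placeholders and must be replaced with real values.

Create the GRE interface (the endpoint addresses are examples):

   # ip tunnel add gre1 mode gre local 10.0.0.1 remote 203.0.113.10 ttl 255

Assign a tunnel-internal address and bring the interface up:

   # ip addr add 192.168.100.1/30 dev gre1
   # ip link set gre1 up

If the remote side is configured symmetrically, the opposite tunnel address (192.168.100.2 in this example) should become reachable with ping.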
-
Unable to set firewall rules inside a container: "No chain/target/match by that name"
Original Publishing Date: 2022-04-13

Symptoms

When trying to add an iptables rule inside a container, the operation fails with an error similar to the following:

# iptables -t mangle -A PREROUTING -s x.x.x.x -j TTL --ttl-set 64
iptables: No chain/target/match by that name.

Cause

To execute such rules, the corresponding matching and target modules must be available inside the container. Most likely, the required matching or target module is not loaded on the node.

Resolution

Check which matching and target modules are available for the container in question and load the absent ones.

Example for the command iptables -t mangle -A PREROUTING -s x.x.x.x -j TTL --ttl-set 64:

[root@mycontainer ~]# cat /proc/net/ip_tables_matches
udp tcp conntrack owner connlimit recent helper state length ttl tcpmss icmp multiport multiport limit tos
[root@mycontainer ~]# cat /proc/net/ip_tables_targets
REDIRECT MASQUERADE DNAT SNAT TCPMSS ERROR LOG TOS REJECT

For this command we need the matching module ttl (which is available) and the target module TTL, which is not present. To fix the issue, load the module on the node and restart the container:

[root@node ~]# modprobe ipt_TTL
[root@node ~]# vzctl restart CTID

To fix the issue permanently, configure the required modules to load automatically. Refer to this article for more information: Managing iptables modules in containers

Internal content

SYMPTOMS

Sometimes, when running an iptables command inside a container, one of the following errors occurs:

iptables: Unknown error 4294967295
iptables: Unknown error 18446744073709551615
iptables: No chain/target/match by that name

CAUSE

Most likely, not all required iptables modules are loaded on the node itself and available for containers.

RESOLUTION

To identify which modules should be loaded on the node and enabled for containers, follow these instructions:

1. Save the list of modules loaded on the node:

   # lsmod | awk '/^ip|^nf|^xt/{print$1}' > file1

2. On the node, run the iptables command that failed inside the container earlier. This way, all required modules will be loaded on the node. Remember to run only ALLOWING commands and only for NON-EXISTENT IP addresses, for example:

   # iptables -t mangle -A PREROUTING -s 1.1.1.1 -j TTL --ttl-set 64

3. Save the list of modules loaded on the node again:

   # lsmod | awk '/^ip|^nf|^xt/{print$1}' > file2

4. Compare the two lists:

   # diff -puN file1 file2

   In our example, after running # iptables -t mangle -A PREROUTING -s 1.1.1.1 -j TTL --ttl-set 64, the result is:

   # diff -puN file1 file2 | grep ^+
   +++ file2 2012-05-10 11:56:36.000000000 -0700
   +ipt_TTL

   i.e., the module ipt_TTL was not loaded.

5. Add the modules from step 4 to /etc/sysconfig/iptables-config on the node so that they are loaded automatically after a node reboot, and enable these modules for the container:

   # vzctl set CT_ID --save --iptables "<list of modules>"

Keep in mind that you can allow the following modules: iptable_filter, iptable_mangle, ipt_limit, ipt_multiport, ipt_tos, ipt_TOS, ipt_REJECT, ipt_TCPMSS, ipt_tcpmss, ipt_ttl, ipt_LOG, ipt_length, ip_conntrack, ip_conntrack_ftp, ip_conntrack_irc, ipt_conntrack, ipt_state, ipt_helper, iptable_nat, ip_nat_ftp, ip_nat_irc, ipt_owner.

If the modules you got in step 4 are not in this list (for example, ipt_TTL), there is no need to enable them for containers; they will be available for all containers automatically as long as they are loaded on the node.
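For convenience, the before/after comparison from steps 1-4 can be scripted. This is a minimal sketch, assuming the TTL example rule and temporary files under /tmp (both are illustrative and should be adapted to the rule that actually failed); run it on the node, not inside the container:

   #!/bin/bash
   # Snapshot netfilter-related modules before the test
   lsmod | awk '/^ip|^nf|^xt/{print $1}' | sort > /tmp/modules.before
   # Reproduce the failing rule with a non-existent IP (example rule; replace with the real one)
   iptables -t mangle -A PREROUTING -s 1.1.1.1 -j TTL --ttl-set 64
   # Snapshot again and print the modules that appeared because of the test rule
   lsmod | awk '/^ip|^nf|^xt/{print $1}' | sort > /tmp/modules.after
   comm -13 /tmp/modules.before /tmp/modules.after
   # Remove the test rule so it does not linger on the node
   iptables -t mangle -D PREROUTING -s 1.1.1.1 -j TTL --ttl-set 64

The printed module names are the candidates for /etc/sysconfig/iptables-config and, if they appear in the list above, for the container's --iptables setting.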
-
Online migration fails for containers: iptables-restore exited with 2
Original Publishing Date: 2022-04-13

Symptoms

Container migration fails with the following errors:

Actual result that you got:

1392199064: 1392216715: /usr/sbin/vzctl --skiplock --skipowner --ignore-ha-cluster restore 2737 --undump --skip_arpdetect --dumpfile /vz/dump/dumpfile.MCxGOk
1392199066: 1392216717: vzctl : Cannot undump the file: Invalid argument
1392199066: 1392216717: vzctl : Error: iptables-restore exited with 2
1392199066: 1392216717: vzctl : Error: Most probably some iptables modules are not loaded
1392199066: 1392216717: vzctl : Error: or CT's iptables utilities are incompatible with this kernel (version is older than 1.4.0)
1392199066: 1392216717: vzctl : Error: (Offline migration and iptools upgrade might help).
1392199066: 1392216717: vzctl : Error: rst_restore_net: -22
1392199066: 1392216717: vzctl : Failed to start the Container
1392199066: 1392216717: vzctl : Container is not running
1392199066: 1392216717: vzctl : Failed to start the Container
1392199066: 1392216717: /usr/sbin/vzctl exited with code 17
1392199069: 1392216720: /usr/sbin/vzctl exited with code 17
1392199069: 1392216720: error [-52] : /usr/sbin/vzctl exited with code 17
1392199069: /usr/sbin/vzctl exited with code 17
1392199069: can not undump CT#2737 : /usr/sbin/vzctl exited with code 17

The iptables configuration has been synced between the nodes. What is the problem?

Cause

The issue may arise from a complex iptables configuration on the source node: it may contain many rules that rely on a wide variety of iptables modules. As a result, all these modules are loaded on the source, and the container may freely use them unless it is restricted to a certain set of modules.

Resolution

1. Get the list of modules loaded on the source:

   [root@vz ~]# lsmod | awk '/^ip|^nf|^xt/{print$1}' | sort | uniq > file1

2. Copy file1 to the destination server.

3. Get the list of modules loaded on the destination:

   [root@vz ~]# lsmod | awk '/^ip|^nf|^xt/{print$1}' | sort | uniq > file2

4. Check the difference:

   [root@vz ~]# diff -pruN file1 file2 | grep ^-
   -ipmi_devintf
   -iptable_raw
   -ipt_addrtype
   -ipt_ah
   -ipt_ecn
   -ipt_ECN
   -ipt_MASQUERADE
   -ipt_NETMAP
   -ipt_REDIRECT
   -ipt_ULOG
   -nfnetlink_log
   -xt_CLASSIFY
   -xt_comment
   -xt_dccp
   -xt_hashlimit
   -xt_iprange
   -xt_mac
   -xt_mark
   -xt_MARK
   -xt_NFLOG
   -xt_NFQUEUE
   -xt_owner
   -xt_physdev
   -xt_pkttype
   -xt_policy
   -xt_recent

   This is the list of modules to be loaded on the destination.

5. Load these modules:

   [root@vz ~]# diff -pruN file1 file2 | grep ^- | tail -n+3 | sed 's~-~~' | while read mod ; do modprobe $mod ; done

6. Retry the migration.

Internal content
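The same synchronization can be done in one pass over SSH. This is a minimal sketch, assuming root SSH access from the destination to the source node and a hypothetical hostname src-node; it loads on the destination every netfilter-related module currently loaded on the source:

   # Run on the destination node; src-node is a placeholder for the source node's hostname
   ssh root@src-node "lsmod | awk '/^ip|^nf|^xt/{print \$1}'" | sort -u | \
       while read mod ; do modprobe "$mod" 2>/dev/null || echo "could not load: $mod" ; done

Entries that merely match the pattern but are not netfilter modules (such as ipmi_devintf in the example output above) are either loaded harmlessly or simply reported as not loadable.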
-
Broken VZFS links prevent a container from being converted to ploop
Original Publishing Date: 2022-04-13

Symptoms

I am trying to convert a VZFS container to ploop; however, I see errors like:

/bin/cp: cannot stat `/vz/root/113/./usr/sbin/suexec': No such file or directory
/bin/cp: cannot stat `/vz/root/113/./usr/sbin/httpd.worker': No such file or directory
/bin/cp: cannot stat `/vz/root/113/./usr/sbin/httpd.event': No such file or directory
/bin/cp: cannot stat `/vz/root/113/./usr/sbin/httxt2dbm': No such file or directory
/bin/cp: cannot stat `/vz/root/113/./usr/sbin/apachectl': No such file or directory
/bin/cp: cannot stat `/vz/root/113/./usr/lib/krb5': No such file or directory

Cause

The container has broken VZFS links that cannot be queried in order to be replaced with real files. To check for broken links:

~# find /vz/root/CTID/ -xdev -ls >/dev/null

Note: Replace CTID with the actual container ID value.

Resolution

Follow this article to resolve the problem.

Internal content
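As a quick way to collect the broken entries for review, the check above can be run with its error stream captured to a file. A minimal sketch, using the container ID 113 from the example output and an arbitrary file under /tmp (both placeholders):

   CTID=113    # example container ID; replace with the real one
   # stdout is discarded; only entries that cannot be queried end up in the error log
   find /vz/root/$CTID/ -xdev -ls > /dev/null 2> /tmp/broken-vzfs-$CTID.txt
   # Review the list of broken links
   cat /tmp/broken-vzfs-$CTID.txt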
-
A snapshot cannot be deleted or mounted
Original Publishing Date: 2022-04-13

Symptoms

A ploop snapshot fails to be deleted with the following error message:

~# prlctl snapshot-delete 333 -i f941b9bd-bc78-4b8f-a846-c91ec61499d2
Delete the snapshot...
Failed to delete snapshot: Operation failed. Failed to delete snapshot: Error in ioctl(PLOOP_IOC_MERGE): Device or resource busy [3]
Failed to delete snapshot {f941b9bd-bc78-4b8f-a846-c91ec61499d2}

It is not possible to create a new backup of the affected ploop container:

~# vzabackup -F localhost -e 333
Starting backup operation for node 'pcs.container.org'...
* Operation with the Container pcs.container.org is started
* Preparing for backup operation
* Operation with the Container pcs.container.org is finished successfully.
Backup operation for node 'pcs.container.org' failed: Backup failed

Cause

A previous backup of the container was not finished properly - it was terminated or it crashed. As a result, the container's image is mounted twice.

Resolution

1. Identify what locks root.hds:

   ~# grep /333/ /sys/block/ploop*/pdelta/*/image
   /sys/block/ploop32064/pdelta/0/image:/vz/private/333/root.hdd/root.hds
   /sys/block/ploop32064/pdelta/1/image:/vz/private/333/root.hdd/root.hds.{f37e6b00-7fcb-49b8-8942-58179ba3900d}
   /sys/block/ploop43803/pdelta/0/image:/vz/private/333/root.hdd/root.hds

   As you can see, root.hds is mounted twice.

2. Check which mount is the regular one and which one is related to the terminated backup:

   ~# cat /proc/mounts | grep 333 | grep ploop
   /dev/ploop32064p1 /vz/root/333 ext4 rw,relatime,barrier=1,data=ordered,balloon_ino=12,pfcache_csum,pfcache=/vz/pfcache,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
   /dev/ploop43803p1 /vz/backup/333/tmpidyHKK/fs ext4 ro,relatime,barrier=1,data=ordered,balloon_ino=12,pfcache_csum 0 0

   Most probably the second mount is related to a backup. Double-check this assumption:

   ~# cat /sys/block/ploop43803/pstate/cookie
   vzbackup

   So it is obviously a backup-related mount that should be unmounted.

3. Confirm that the mount is not used by any process:

   ~# lsof 2> /dev/null | grep /vz/backup/333/tmpidyHKK/fs
   ~#

   In case the mount is still in use by a backup process:

   ~# ps aux | grep vzlpl
   root 429273 0.0 0.0 124800 7552 ? S 2013 0:00 /opt/pva/agent/bin/vzlpl /var/opt/pva/agent/tmp.JJbakK
   root 520460 0.0 0.0 103256 848 pts/6 S+ 07:08 0:00 grep vzlpl
   root 568003 0.0 0.0 123968 4168 ? S 2013 0:00 /opt/pva/agent/bin/vzlpl /var/opt/pva/agent/tmp.QxeOqd

   Check whether the processes are actually operational and not stuck. For example, confirm that the temporary files in /var/opt/pva/agent/ are actually present. If /var/opt/pva/agent/tmp.JJbakK and /var/opt/pva/agent/tmp.QxeOqd do not exist, kill both vzlpl processes and run lsof once again to confirm that the mount is not used.

4. Unmount the device:

   ~# umount /vz/backup/333/tmpidyHKK/fs
   ~# ploop umount -d /dev/ploop43803
   Unmounting device /dev/ploop43803

Internal content

New backup creation should automatically unmount spare mounts (PVA-33402), but it can still fail to unmount them properly, and the operation might fail as a result. Temporary snapshots can also be checked with:

# ploop list | grep "101" | grep VZABasicFunctionalityLocal
ploop35802 /pcs/101/root.hdd/root.hds VZABasicFunctionalityLocal
ploop44017 /pcs/101/root.hdd/root.hds VZABasicFunctionalityLocal
ploop24305 /pcs/101/root.hdd/root.hds VZABasicFunctionalityLocal
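To spot leftover backup mounts node-wide rather than for a single container, the pstate cookie check used above can be looped over all ploop devices. A minimal sketch, assuming the cookie value vzbackup seen in the example:

   # List ploop devices whose pstate cookie marks them as backup-related,
   # together with the image files they hold open
   for dev in /sys/block/ploop* ; do
       [ -r "$dev/pstate/cookie" ] || continue
       if grep -q vzbackup "$dev/pstate/cookie" 2>/dev/null ; then
           echo "possible stale backup mount: /dev/${dev##*/}"
           cat "$dev"/pdelta/*/image 2>/dev/null
       fi
   done

Each reported device should still be verified against /proc/mounts and lsof, as in steps 2 and 3, before it is unmounted.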
-
Ploop snapshot cannot be deleted with "Single delta, nothing to merge" error message
Original Publishing Date: 2022-04-13

Symptoms

A ploop snapshot cannot be deleted, and the following error message appears:

~# prlctl snapshot-delete 333 -i 57ff7d22-ddce-4fd6-a8db-976094de6e55
Delete the snapshot...
Failed to delete snapshot: Operation failed. Failed to delete snapshot: Single delta, nothing to merge [38]
Failed to delete snapshot {57ff7d22-ddce-4fd6-a8db-976094de6e55}

Cause

The snapshot deletion process was interrupted, and DiskDescriptor.xml was not modified to reflect the changes.

Resolution

1. Confirm that only one delta is used:

   ~# grep 333 /sys/block/ploop*/pdelta/*/image
   /sys/block/ploop26196/pdelta/0/image:/vz/private/333/root.hdd/root.hds

2. Check that the snapshot file is not held by any process:

   ~# ll /vz/private/333/root.hdd/
   total 32879644
   drwx------ 4 root root 4096 Feb 11 2013 cache-private
   drwxr-xr-x 2 root root 4096 Feb 11 2013 cache-root
   -rw-r--r-- 1 root root 1134 Mar 1 20:00 DiskDescriptor.xml
   -rw------- 1 root root 0 Feb 11 2013 DiskDescriptor.xml.lck
   -rw------- 1 root root 21077426176 Mar 5 02:49 root.hds
   -rw------- 1 root root 12487491584 Mar 4 19:21 root.hds.{230c98c4-bdbb-4c0d-ad29-5269cce7366b}
   drwx------ 2 root root 4096 Apr 29 2013 root.hds.mnt
   drwxr-xr-x 21 root root 4096 Apr 29 2013 templates
   ~# lsof 2>/dev/null | grep 230c98c4-bdbb-4c0d-ad29-5269cce7366b
   ~#

   (In the example above you can also notice that the modification date of the root.hds file is more recent than that of root.hds.{230c98c4-bdbb-4c0d-ad29-5269cce7366b}.)

3. Stop the affected container.

4. Create a backup copy of the directory /vz/private/CTID/root.hdd.

5. Modify the StorageData and Snapshots sections in /vz/private/CTID/root.hdd/DiskDescriptor.xml to exclude the snapshot. The StorageData and Snapshots sections in the DiskDescriptor.xml of a container without any snapshots look this way:

   <StorageData>
     <Storage>
       <Start>0</Start>
       <End>20971520</End>
       <Blocksize>2048</Blocksize>
       <Image>
         <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>
         <Type>Compressed</Type>
         <File>root.hds</File>
       </Image>
     </Storage>
   </StorageData>
   <Snapshots>
     <TopGUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</TopGUID>
     <Shot>
       <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>
       <ParentGUID>{00000000-0000-0000-0000-000000000000}</ParentGUID>
     </Shot>
   </Snapshots>

6. Delete the /vz/private/CTID/Snapshots.xml file.

7. Start the container.

8. Delete the unnecessary snapshot image (/vz/private/333/root.hdd/root.hds.{230c98c4-bdbb-4c0d-ad29-5269cce7366b}).

Internal content
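A minimal sketch of steps 3-8 for the container from this example (the CTID 333, the backup directory name root.hdd.backup, and the stale snapshot GUID are taken from or modeled on the output above; the DiskDescriptor.xml edit itself still has to be done by hand and verified before anything is removed):

   CTID=333
   vzctl stop $CTID
   # Keep a full copy of the disk descriptor directory before editing anything;
   # this copies the image as well, so enough free disk space is required
   cp -a /vz/private/$CTID/root.hdd /vz/private/$CTID/root.hdd.backup
   # ... edit /vz/private/$CTID/root.hdd/DiskDescriptor.xml by hand here ...
   rm -f /vz/private/$CTID/Snapshots.xml
   vzctl start $CTID
   # Only after the container starts cleanly, remove the orphaned delta
   rm -i "/vz/private/$CTID/root.hdd/root.hds.{230c98c4-bdbb-4c0d-ad29-5269cce7366b}"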
-
php-cgi.exe process exited unexpectedly. Unable to load dynamic library "php_interbase.dll" - The specified module could not be found
Original Publishing Date: 2022-04-13

Symptoms

The website displays an "Internal error":

HTTP Error 500.0 - Internal Server Error
C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP53\php-cgi.exe - The FastCGI process exited unexpectedly

'php-cgi.exe -v' fails with the following warning:

PHP Warning: PHP Startup: Unable to load dynamic library 'C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP53\ext\php_interbase.dll' - The specified module could not be found.

Cause

'C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP53\ext\php_interbase.dll' requires the 'fbclient.dll' library, which is not present on Windows servers by default.

Resolution

Either install the Firebirdsql package, or comment out the following extension in the PHP configuration file ('C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP53\php.ini'):

;extension=php_interbase.dll

NOTE: The example above assumes that PHP 5.3 is used. If you use a different version, modify 'php.ini' in the corresponding location.

Internal content
-
[FAQ] Does Plesk support ASP.NET 4.5?
Original Publishing Date: 2022-04-13

Answer

Yes, ASP.NET 4.5 is supported by Plesk 11 and 12. Below are the ASP.NET versions supported by each Plesk version:

Plesk 9.5: ASP.NET 1.1.4322, ASP.NET 2.0.50727 (.NET Framework 2.0/3.0/3.5), ASP.NET 4.0
Plesk 10.4: ASP.NET 1.1, ASP.NET 2.0, ASP.NET 4.0
Plesk 11: ASP.NET 1.1 - 4.5
Plesk 11.5: ASP.NET 1.1 - 4.5
Plesk 12: ASP.NET 1.1 - 4.5

Note #1: As the .NET Framework 4.5 is an iterative update to .NET 4.0, Plesk and IIS will continue displaying the available ASP.NET version as 4.0.

Note #2: See KB article #115727 for instructions on how to set up the .NET Framework 1.1 on Windows 2008 with IIS 7.

This information was taken from the applicable release notes: Plesk 9.5 Release Notes, Plesk 10.4 Release Notes, Plesk 11.0 Release Notes, Plesk 11.5 Release Notes, Plesk 12 Release Notes.

Internal content
-
BA 7.0.0 HOTFIX 129095 BILLING v1
Original Publishing Date: 2022-04-13

Release Notes

This article was superseded by OA 7.0.0 HOTFIX 129312 BILLING v2.

Internal content