Resolution

You may encounter several types of problems when trying to migrate virtual environments (VEs) between nodes:

  1. When trying to migrate VE #101 to a Hardware Node with the IP address 192.168.0.2, you see the following error message:

    ~# vzmigrate 192.168.0.2 101
    Connection to destination HN (192.168.0.2) is successfully established
    Moving/copying VE#101 -> VE#101, [], [] ...
    Can't move/copy VE#101 -> VE#101, [], [] : Destination HN has not got required packages [fedora-core-4 20051215], use '-f' option
    

    This means that the OS template "fedora-core-4/20051215" is absent on the destination Hardware Node. As a result, it is not possible to start the VE on the node. The same may be true for application templates.

    Solution #1:

    Upload and install on the destination node all missing OS and application templates that the VE uses. After that, the VE should migrate without problems:

    • For standard templates, run vzup2date -t
    • For EZ templates, run vzup2date -z

    If the required templates are no longer present on the repositories, use rsync:

    ~# rsync -auv --rsh="ssh" root@source-node:/vz/template/ root@destination-node:/vz/template
    

    NOTE: There is no trailing slash ("/") at the end of the destination path, while there is a trailing slash at the end of the source path. Refer to the manual page of rsync for more details.
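    The trailing-slash distinction matters: with rsync, a trailing slash on the source means "copy the contents of this directory", while no slash means "copy the directory itself". A quick local sketch (using throwaway directories, not the real template paths) illustrates the difference:

    ```shell
    #!/bin/sh
    # Illustrate rsync's trailing-slash semantics with throwaway directories.
    set -e
    tmp=$(mktemp -d)
    mkdir -p "$tmp/src" "$tmp/with_slash" "$tmp/no_slash"
    touch "$tmp/src/template.conf"

    # Trailing slash on the source: the *contents* of src/ are copied,
    # so the file lands directly in the destination directory.
    rsync -a "$tmp/src/" "$tmp/with_slash"
    ls "$tmp/with_slash/template.conf"

    # No trailing slash: the directory src itself is copied,
    # so an extra "src" level appears under the destination.
    rsync -a "$tmp/src" "$tmp/no_slash"
    ls "$tmp/no_slash/src/template.conf"

    rm -rf "$tmp"
    ```

    This is why the command above uses a slash after /vz/template/ on the source but not on the destination: the templates end up under /vz/template on the destination node rather than under /vz/template/template.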

    Solution #2:

    Use the "-f" option of the "vzmigrate" utility, which ignores the absence of required package sets on the destination node. This solution is not recommended: to protect the VE against file system errors caused by the missing template files, the VE will not be started on the destination node after migration and must be started manually.

  2. A VE cannot be migrated with the following error message:

    ~# vzmigrate 192.168.0.2 101
    Enter passphrase for key '/root/.ssh/id_dsa':
    Can't init migrate : VE#101 already exists
    

    This error message means that the private area of VE #101 already exists on the destination node. There may be an entirely different VE #101 on the destination node, or the same VE #101 may exist on the destination node due to previous attempts to migrate the VE with the "--keep-dst" parameter.

    Solution #1:

    If the VE #101 on the destination node is not the same VE as on the source node, you can change the ID of the VE during migration:

    ~# vzmigrate 192.168.0.2 101:201
    

    This command will migrate VE #101 to the destination node and save it as VE #201. Of course, VE #201 should not already exist on the destination node.

    Solution #2:

    If the VE #101 on the destination node is the same VE as on the source node, rename the private area of the VE on the destination node to "VE_ID.migrated":

    ~# cd /vz/private
    ~# mv 101 101.migrated
    

    ... and run the migration again. The "vzmigrate" utility will recognize that part of the VE private area exists on the destination node and will correctly process the migration.

  3. A VE cannot be migrated with the following error message:

    ~# vzmigrate 192.168.0.2 101
    Enter passphrase for key '/root/.ssh/id_dsa':
    Can't init migrate : can't lock VE#101 :
    

    This error message means that VE #101 is locked by a process on the source node. It may mean that another migration is currently being performed, that the VE is being backed up, or that the VE is stopping/starting.

    Solution: Find the process that is locking the VE using this command:

    ~# cat /vz/lock/101.lck
    

    Analyze what the process is, then kill it if necessary or wait until it is finished.
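    The lock file contains the PID of the locking process. A small sketch of such a check (assuming the first whitespace-separated field of the lock file is the PID; /vz/lock/VE_ID.lck is the default location):

    ```shell
    #!/bin/sh
    # Show which process holds a VE lock file.
    # Assumes the first whitespace-separated field of the lock file is a PID.
    inspect_lock() {
        pid=$(awk '{print $1; exit}' "$1")
        if kill -0 "$pid" 2>/dev/null; then
            # The locking process is alive: see what it is before killing it.
            ps -p "$pid" -o pid,etime,args
        else
            echo "PID $pid from $1 is not running (stale lock)"
        fi
    }

    # Demo with a throwaway lock file naming the current shell as the locker:
    demo=$(mktemp)
    echo $$ > "$demo"
    inspect_lock "$demo"
    rm -f "$demo"
    ```

    If the PID belongs to a finished or crashed process, the lock is stale; otherwise wait for the operation (migration, backup, start/stop) to complete.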

  4. Other possible causes of problems with migrating VEs between nodes are:

    • The source node cannot connect to the destination node because of routing problems or firewall rules between them - check whether you can reach the destination node using the "ping", "ssh", or "traceroute" commands.
    • The destination node may be down - check if it is up and running.
    • The source node cannot connect to the destination node by SSH due to problems with the SSH daemon on it - check that the SSH daemon is up and running on the destination node and that it is listening on the required port (port 22 by default, although this can be overridden on the "vzmigrate" command line).
    • If you migrate a VE using Virtuozzo Management Console (VZMC) or Virtuozzo Control Center, the migration is performed via VZAgent. Problems can be caused by absent connections between the Service VEs on the source and destination nodes - check that the Service VEs on the nodes are able to communicate with each other by SSH.
    • A connection between the source and destination nodes cannot be established if the PVA agent uses the frontnet IP address as the default instead of the backnet IP address, or vice versa.
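    The basic connectivity checks above can be scripted. A minimal sketch (the host and port values are placeholders - substitute the real destination node address and SSH port):

    ```shell
    #!/bin/sh
    # Check basic reachability of a destination node before migrating.
    # The defaults below are placeholders taken from the examples above.
    host=${1:-192.168.0.2}
    port=${2:-22}

    # ICMP reachability (may be blocked by a firewall even when SSH works)
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "$host answers ping"
    else
        echo "$host does not answer ping"
    fi

    # TCP check of the SSH port using bash's /dev/tcp pseudo-device
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "port $port on $host is open"
    else
        echo "port $port on $host is closed or filtered"
    fi
    ```

    A closed SSH port with a successful ping usually points to the SSH daemon or a firewall rule; both failing usually points to routing or the node being down.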
