Wednesday, April 27, 2016

Copying Virtual Machines between repositories in Oracle VM for x86

Back in the days of OVM 2.x, the repository filesystem structure was somewhat flatter than what it looks like these days in 3.x.  It was much simpler to copy whole VMs from one repository to another because the files were all in the same folder.  Today, the easiest way to move VMs between repositories is to present both repos to the same pool (if they aren’t already), power off the VM and migrate it.  Voila!

Yeah- so what if I have two pools and I’m having problems getting both repos presented to the same pool?  Normally the way I’d handle this is by using an NFS repo as the go-between.  Create an NFS share and make a repository on top of it, then present it to both the source and target pools.  One of the cool things about NFS repos is that you can have them presented to more than one server pool at a time, unlike block-based storage repos (Fibre Channel or iSCSI).  This is due to the nature of the OCFS2 filesystem and underlying clusterware mechanisms that don’t like repos from other clusters muscling in on their territory.  So the path there would be: power off the VM and migrate it to the NFS repo in the source pool.  Then go to the target pool and migrate the VM from the NFS repo into the target pool’s persistent repository, whether that be Fibre Channel, iSCSI or even local disk on the OVM server.  Again- Voila!
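If you go the NFS route, the export side is simple.  Here’s a rough sketch of an /etc/exports entry on the NFS server- the share path and network are placeholders, and no_root_squash is typically needed since the OVM servers access the repo as root:

/export/ovm_transfer 192.168.1.0/24(rw,no_root_squash)

Run exportfs -ra to activate the export, then create the repository on top of the share in OVM Manager and present it to both pools.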
Ok, but I don’t have NFS or don’t want to do it that way or just plain can’t do it that way for one reason or another.  So be stubborn then!  Below is the process that I followed recently with a customer to copy the VM files themselves from one repository to another.  In my case, the customer had 2 repositories sitting on Fibre Channel disk.  One repo used to belong to a server pool that had its pool filesystem sitting on an NFS share that inadvertently got deleted out from under the pool.  Bad mojo there… Anyway, I was having a heck of a time getting the repo presented to the new pool because OVM Mangler thought the repo was still part of the pool that now has no pool filesystem.  The pool filesystem’s job, among other things, is to help keep track of what is presented where and how.  Remember- the pool filesystem was whacked.  I’ll let that sink in for a bit.
Anyway- I was able to re-signature the OCFS2 filesystem and put the new cluster stamp on it so I could at least mount it on a server in my target pool where I wanted the VMs to wind up.  I still couldn’t present the repository to the pool in OVM Manager due to the previously mentioned pool filesystem tomfoolery that took place.  Regardless, it got me far enough to do what I needed.  Keep in mind- I could have just copied the files across the network and not messed about with all this re-signaturing nonsense, but we’re talking about hundreds of gigs here and that would take too long for my impatience.
Here’s a high level outline of the steps I took to make this all work:
  • Make sure all VMs to be moved are powered off
  • Take note of the following items for each VM (either from OVM Manager or the CLI- see the example CLI session after this list) and put them into a notepad session
    • VM UUID (Servers and VMs tab – Source Pool – Virtual Machines.  Click on the down arrow to expand the VM’s info)
    • VM Disk UUIDs (same place as above)
    • VM Disk names
    • Current Repository UUID (Repositories tab – Source Repository – info)
    • Page83 ID of the current repository LUN (see my post here on mapping physical disks in OVM to their page83 SCSI ID using OVM CLI)
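For what it’s worth, most of these details can be pulled from a single OVM CLI session instead of clicking around the GUI.  A rough sketch- the manager hostname and the names in braces are placeholders:

ssh admin@{OVM Manager hostname} -p 10000

OVM> show Vm name={VM name}
OVM> show Repository name={source repository name}
OVM> show PhysicalDisk name={source repository LUN name}

show Vm lists the VM’s UUID and its disk mappings, show Repository gives the repo UUID, and show PhysicalDisk includes the Page83 ID.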
  • create a folder in the target repository that matches the UUID of the VM you’ll be copying
mkdir /OVS/Repositories/{UUID of Target Repository}/VirtualMachines/{VM UUID}
  • make sure the source repository LUN is presented to all servers in the target pool from the SAN.  Also make sure you can see it in the output of fdisk -l or multipath -ll (if you’re using multipathing).
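If the LUN doesn’t show up right away, a SCSI bus rescan on each server in the target pool usually shakes it loose.  Something like this (as root):

for host in /sys/class/scsi_host/host*; do echo "- - -" > $host/scan; done

multipath -ll | grep {page83 ID of source repository LUN}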
  • If necessary, resignature the repository LUN’s OCFS2 filesystem with the cluster ID of the target pool (do this from a server in the target pool)
fsck.ocfs2 /dev/mapper/{page83 ID of source repository LUN}

tunefs.ocfs2 --update-cluster-stack /dev/mapper/{page83 ID of source repository LUN}
  • mount the source repository on a server that belongs to the target pool and has that pool’s repository already mounted.  I use the /mnt mountpoint
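Something along these lines should do it (the ocfs2 type is usually auto-detected, but it doesn’t hurt to be explicit):

mount -t ocfs2 /dev/mapper/{page83 ID of source repository LUN} /mnt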
  • copy the vm.cfg from the source repository to the destination repository
cp /mnt/VirtualMachines/{UUID of VM to copy}/vm.cfg /OVS/Repositories/{UUID of target Repository}/VirtualMachines/{UUID of VM to copy}
  • copy all the Virtual Disks for the VM from the source repository to the target repository.  Note the --sparse=always flag: this is used so that when you copy the files they don’t get inflated to their full size in the target repo.  I’m assuming you gave the VM thin provisioned disks (or sparse disks, as OVM refers to them) to begin with.
cp --sparse=always /mnt/VirtualDisks/{UUID of virtual disk(s) in VM}.img /OVS/Repositories/{UUID of target Repository}/VirtualDisks/
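You can sanity-check that the copy stayed sparse by comparing the file’s apparent size against the space it actually consumes- if du reports a lot less than ls, the sparseness survived:

ls -lh /OVS/Repositories/{UUID of target Repository}/VirtualDisks/{UUID of virtual disk}.img

du -h /OVS/Repositories/{UUID of target Repository}/VirtualDisks/{UUID of virtual disk}.img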
Here’s where things can go sideways.  Make sure you write down or copy into a notepad session the information I had you note before.  You’re going to create new UUIDs for the VM itself as well as all the virtual disks contained in that VM.  Pay attention and go slow!
  • Create a new UUID for the VM’s identity and for each virtual disk that belongs to the VM
# uuidgen -r
18286ea4-55d7-424f-9c9b-6d250e64daee
{repeat for each virtual disk and write each one down}
  • cd into the target repository
  • cd into the VirtualDisks folder
  • for each disk that belongs to the VM, rename the virtual disk from its original UUID to the new one you just created.  Note which UUID you used and map the old one to the new one in your notepad session for later.
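Same pattern as the earlier commands- one mv per disk:

mv {old UUID of virtual disk}.img {new UUID of virtual disk}.img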
  • make a backup of the vm.cfg file (put it in another folder, not the current one)
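For example (stashing it in /tmp is just my habit- anywhere outside the repository folders works):

cp /OVS/Repositories/{UUID of target Repository}/VirtualMachines/{UUID of VM to copy}/vm.cfg /tmp/vm.cfg.bak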
  • open the vm.cfg file in the target repository (/OVS/Repositories/{UUID}/VirtualMachines/{UUID of VM}/vm.cfg)
  • Locate the line that begins with “disk = [“.  This is the listing of all the virtual disks assigned to this VM.  You will need to modify each disk entry to reflect the following:
    • change the UUID of the repository so it points to the target repo, not the source (/OVS/Repositories/{UUID})
    • change the UUID of the disk file to reflect the new UUID you created a few steps earlier (/OVS/Repositories/{UUID}/VirtualDisks/{new UUID of virtual disk})
  • Locate the line that starts with “name = ”
    • Replace the current contents of this field with the UUID that you generated earlier for the VM’s identity.  Note that you’ll have to strip out all of the dashes for this field.
  • Locate the line that starts with “uuid = ”
    • Replace the current contents of this field with the UUID that you generated earlier for the VM’s identity.  In this field, you will leave the dashes intact as they are.
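Putting those three edits together, here’s roughly what the relevant lines look like before and after- the xvda device name and the w mode are just examples, so your entries will differ:

Before:

disk = ['file:/OVS/Repositories/{UUID of source Repository}/VirtualDisks/{old UUID of virtual disk}.img,xvda,w']
name = '{old VM UUID with dashes stripped}'
uuid = '{old VM UUID with dashes intact}'

After:

disk = ['file:/OVS/Repositories/{UUID of target Repository}/VirtualDisks/{new UUID of virtual disk}.img,xvda,w']
name = '{new VM UUID with dashes stripped}'
uuid = '{new VM UUID with dashes intact}'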
  • Write the file out to disk
  • cd to the target repository
  • cd to the VirtualMachines folder
  • rename the old UUID of the VM to the new one you created
mv {old UUID} {new UUID}
  • At this point, you should be able to go into the OVM Mangler interface and navigate to the Repositories tab
  • locate the target repository and refresh it
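If you’d rather do the refresh from the OVM CLI, it’s something like this (the repository name is a placeholder):

OVM> refresh Repository name={target repository name}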
  • go to the Servers and VMs tab
  • look in the Unassigned Virtual Machines folder and you should see the VM you copied
  • edit the VM and rename the virtual disks back to what they were called in the source repo.  You will be renaming them from something like 18286ea455d7424f9c9b6d250e64daee.img to rootdisk_OS or whatever your disks were named before.  Use your notepad to fill in the details.
  • migrate the VM to the target pool
  • power on the VM
  • bask in your awesomeness for having done everything right the first time
OR
  • look at whatever error messages are thrown at you and observe your typos so you can fix them. :)



