Monthly Archives: October 2016

Unleash the power of the NTNX-AVM – Move VM from one container to another container (AHV) with move_vm

This post is related to AHV only!!! Make sure a recent backup of the VM exists!

In the last few weeks customers have asked me how to move a VM from one AHV container to another AHV container on the same cluster. The answer is: “There is no PRISM/GUI option for this and the manual task is pretty difficult”. So I wrote a script called move_vm, which I show in this post to simplify this.

But why should you move a VM?

There are several reasons for this.

  • container settings don’t fit
  • different containers for different organization units
  • DR/backup concepts based on containers
  • automation based on containers
  • … and more

Example: Let’s say the customer started with two containers:

  1. ISO – just for templates and CD/DVD ISO images
  2. prod – production environment

Now he realizes that some of the server VMs would be good candidates for compression but some are not. He used the best practices to figure out which server VMs would fit.

compression_bestpractices

For this case he created a new container called:

3. prod_comp with active inline compression for SQL Server 2012 R2 VMs

Now he tried to move the VM from container prod to prod_comp, found that this does not work, and asked me.

USE CASE: Move a VM from one AHV container to another AHV container to make use of inline compression for several VMs!

I created a new container called prod_comp with inline compression in my Intel NUC environment.

container_prod_comp

Then I created a VM called “Move_VM_Test1” with different kinds of disk devices and CD/DVD drives, which all reside on the prod container.

move_vm_test1

Now I am using the tool move_vm which is installed on the NTNX-AVM to move the VM. The tool has several options which I list here.

Move a whole VM from container prod to prod_comp

In the first case the syntax for moving the whole VM from prod to prod_comp is pretty simple. REMEMBER: the source VM needs to be powered off before you are able to move it.
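As a sketch, a whole-VM move could look like this. The flag names here are assumptions for illustration only; check `move_vm --help` on the NTNX-AVM for the real syntax.

```shell
# Sketch of a whole-VM move. The flag names are assumptions for
# illustration -- check "move_vm --help" on the NTNX-AVM for the real
# syntax. The commands are printed here, not executed.
VM="Move_VM_Test1"
DEST_CONTAINER="prod_comp"

# The source VM must be powered off first (e.g. via acli on a CVM):
echo "acli vm.off ${VM}"

# Move (clone) the whole VM to the destination container:
echo "move_vm --name ${VM} --container ${DEST_CONTAINER}"
```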

move_full_vm

overview_vms_double

You may have noticed that there are now two VMs with the same name. I believe it makes sense to keep the old VM until you are sure the new copy works.

You can use the option “–delete” to delete the source VM. The advantage is that the new network adapters will have the same MAC addresses as the source VM!


Move just one vDisk/disk of the VM from container prod_comp back to prod

I renamed the new Move_VM_Test1 to Move_VM_Test2 for the next part.

Let’s say you would like to move just one vDisk from container prod_comp back to prod because you found that inline compression makes no sense for it. An example may be the “transaction log” vdisk of an MS SQL Server.

First we need to find out which vdisks exist and what the mapping looks like. This can be done with the “–list_mapping” option.

The output is:

which means that there are four vdisks and two CD/DVD drives. Let’s say we identified that the second vdisk, “scsi.1”, is the one we would like to move back to container prod. In this case we need to specify the whole mapping when calling the move_vm tool to move only the second vdisk. Copy and paste is the way to go!

The syntax is:

I only changed prod_comp to prod for the scsi.1 vdisk!
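A sketch of what such a call could look like. The option name and the mapping format are assumptions based on the “–list_mapping” output described above; check `move_vm --help` for the real syntax.

```shell
# Hypothetical per-vdisk move: the whole mapping is specified and only
# scsi.1 points at the target container "prod". Option name and mapping
# format are assumptions; the command is printed, not executed.
VM="Move_VM_Test2"
MAPPING="scsi.0:prod_comp,scsi.1:prod,scsi.2:prod_comp,scsi.3:prod_comp"
echo "move_vm --name ${VM} --mapping ${MAPPING}"
```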

move_vdisk

You may ask: “Why is the whole VM cloned and not only the vdisk?” Yep, you are right, that would be the better way, but this is how the tool works at the moment. It is more a copy than a move, but it works, and it is pretty fast because only the vdisk “scsi.1” actually needs to be copied.

You could specify the option “–delete” to delete the source VM and to make sure the new network adapters get the same MAC addresses.

moved_vdisk

There we go.

For all the people who want to know more, this is an overview of how the tool works:

  1. Upload the vDisks of the source VM to the image service. This is needed because a direct copy is not possible
  2. Create new VM
  3. Clone Images to Disks
  4. Add network
  5. Delete images

Intel NUC NUC6i7KYK – Upgrade of Nutanix Community Edition (CE)

Today I upgraded the Intel NUC cluster based on Nutanix CE. This was pretty easy.

Choose “gear wheel” in the top right corner and select “Upgrade Software”

upgrade_ce

As you can see, the new Nutanix CE version was released a few days ago. So choose “Upgrade” -> “Upgrade now”

upgrade_ce2

The upgrade process starts. We just need to wait now… Or click on “Nothing to do?”

upgrade_ce3

I played the game “2048” but I was not really good this time.

upgrade_ce4

So I checked the upgrade process.

upgrade_ce5

Not ready yet…. So I wrote some emails in the meantime.

Around 20 minutes after I clicked upgrade:

upgrade_ce6

It’s done…..

This is why Nutanix is called the 1-click upgrade solution. (Yes, I clicked around 4-5 times overall.)

Go Audacity!


Unleash the power of the NTNX-AVM – Import/Export/Migrate VMs to/from AHV

In a recent customer Proof of Concept (PoC) I encountered a common task: importing a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and, after testing, exporting the VM back to the production environment. The import via the AHV Prism GUI can be done without too much effort, but there is no interface or command line tool which exports a VM. I wrote a script for the export which can be run in a Linux/Mac environment or on the NTNX-AVM. Jump directly to the export part if you want to skip the import.

USE CASE: Import a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and export it back after testing!

Importing/Migrating a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV)

There is a decent post from Artur Krzywdzinski on how to import/migrate a Windows 2012 R2 VM from VMware vSphere/ESXi to AHV. There is even a detailed documentation on the Nutanix support portal. Search for “MIGRATING WINDOWS VMS FROM ESXI TO AHV”.

The way this works can be described as follows:

  • Install all needed drivers into the VM before migrating (drivers for disk devices, video, network).
  • Copy the VM to a Nutanix NFS share which is mounted on the source ESXi/vSphere, via Storage vMotion if available, else copy it with the command line/GUI yourself.
  • Convert the VMware vmdk to a format AHV can read/write.
  • Create a VM based on the converted vmdk files with the same settings as in ESXi/vSphere.
  • …. something else maybe… Start the VM… done
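The conversion step can be sketched with qemu-img. Note that in this walkthrough the Prism image service does the conversion for you, so this is only the manual alternative; the paths are illustrative.

```shell
# Manual vmdk-to-raw conversion sketch (the image service normally does
# this for you). qemu-img reads the descriptor .vmdk, which references
# the -flat file. Paths are illustrative; printed, not executed.
SRC="/prod/WinClone/WinClone.vmdk"
DST="/prod/WinClone.raw"
echo "qemu-img convert -f vmdk -O raw ${SRC} ${DST}"
```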

migrate_portal

1. Windows VM Migration Prerequisites

I advise you to read the full documentation if you migrate VMs. I only list the basic steps needed in my test lab.

  • Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.
  • Optional: Clone any ESXi VMs you want to preserve.
  • Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later.
  • Mount the AHV container as an NFS datastore to vSphere.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

This is a Windows Server 2012 R2 VM with the VMware Tools installed.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter with no snapshots.

vm_nosnapshots

Optional: Clone any ESXi VMs you want to preserve.

Yes, I would like to do this. The VM which I will migrate will be the clone and not the original VM. Just in case we mess something up at any point, it is nice to still have the original one.

In this case I clone the VM directly to the NFS datastore mounted from the Nutanix cluster. So jump to the “Mount the AHV container as an NFS datastore to vSphere” part and continue here when you have finished that step.

I clone the Windows2012R2 server to the prod datastore which resides on the Nutanix cluster.

clone_prod

Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later. See the Hypervisor Details page on the Nutanix Support Portal for all the AHV versions

I am using the Nutanix CE edition so it is not that easy to make sure these requirements are met. Let’s start with AOS 4.5.x or later.

I connected today (04.10.2016) to my Nutanix NUC cluster and chose the option “Upgrade Software” in the top right corner in PRISM (gear wheel).

upgrade_ahv

As you can see, “2016.06.30” is installed, but is this equal to or later than AOS 4.5.x? Yep it is. The version format seems to be YYYY.MM.DD, so this should be okay. There is a table called “AHV AND NOS/AOS VERSION UPGRADE COMPATIBILITY” on the Nutanix portal which makes it easier to understand: the newer the NOS/AOS, the newer the AHV version which is required. In this case I believe the version should be 4.6.3 for NOS/AOS.

ahv_nos_upgrade

Now the AHV-20160217.2 hypervisor part. The Nutanix CE shows Nutanix 20150513, which does not meet this requirement. But anyway, it works.

BTW: In this case there is an update available but I will upgrade after I finished this post.

Mount the AHV container as an NFS datastore to vSphere

First we need to make sure that the source ESXi/vSphere environment is able to mount the NFS container/datastore. To do this we need to whitelist the ESXi/vSphere environment on Nutanix. In my case the ESXi/vSphere environment (192.168.178.80 vCenter / 192.168.178.81 ESXi) is in the same subnet as the Nutanix CE edition (192.168.178.130). Make sure that in your environment the ESXi host and the CVM are able to reach each other via IP and no firewalls are blocking traffic!

Choose the “gear wheel” in the top right corner in PRISM, select “Filesystem Whitelists” and enter the IP range which should be able to mount the container/datastore. In my case I used the whole subnet 192.168.178.0/24.

filesystem_whitelist

Now we are able to mount the NFS datastore. I would like to mount the Nutanix container “prod” into vSphere.

container_prod

In the vSphere client choose to add an NFS datastore and insert the needed values like I did. I used the Nutanix cluster IP as the server address and “/prod” as the folder.
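If you prefer the ESXi shell over the GUI, the same mount can be done with esxcli. The cluster IP and names are the ones used above; the command is printed rather than executed, since esxcli only exists on the ESXi host.

```shell
# Mount the Nutanix container "prod" as an NFS datastore from the ESXi
# shell. -H is the NFS server, -s the exported share, -v the datastore
# name. Printed here, not executed.
CLUSTER_IP="192.168.178.130"
echo "esxcli storage nfs add -H ${CLUSTER_IP} -s /prod -v prod"
```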

add_nfs_prod

There we go. A datastore called “prod” is available on the source ESXi environment.

prod_mounted

2. Install Nutanix VirtIO

In my case I created a clone of the source VM, so I install the drivers only into the clone. If you skipped the cloning, install them into the source VM.

Download the Nutanix VirtIO drivers from the Nutanix portal. I prefer the ISO image because it seems to be easier to mount it via the vSphere Web Client than to copy something to the source VM.

download_virtio

Mount the ISO and install the drivers.

install_virtio

Set the Microsoft Windows SAN policy to online (in an elevated command prompt run diskpart, then san policy=OnlineAll). This makes sure all disk devices will be mounted after the migration.

san_policy

3. Migrate the VM disks to Acropolis Distributed Storage Fabric (DSF)

To migrate a VM to the Acropolis Distributed Storage Fabric you only need to Storage vMotion the VM to the mounted NFS datastore. I already copied the VM to the container/datastore “/prod” when cloning the source VM. If you didn’t do this, you need to move all data of the VM to the prod container/datastore via Storage vMotion.

In the vSphere Web Client choose to “migrate” the VM and use the “datastore only” option, so all vmdisks will be moved to the container/datastore “prod”.

svmotion_prod

4. Convert the VM disks to AHV by importing them

To import a VMware vmdk to Nutanix AHV you need to use the Image Service/Image configuration. So click on “gear wheel” in the top right corner and select the “Image Configuration” (maybe the name changed already).

Choose “Upload” and enter the following:

import_disk

Attention!!! Make sure you use the “-flat” vmdk file!!!

nfs://127.0.0.1 will always work!

As you can see, it may not be easy to know the exact filenames. Use the vSphere Web Client datastore browser to get all needed details.

vm_file_details
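Put together, the source URL for the image upload follows this pattern. The VM folder and file name here are examples only; take the real ones from the datastore browser.

```shell
# Image-service source URL: nfs://127.0.0.1/<container>/<vm folder>/<file>.
# The file must be the "-flat.vmdk" (the one holding the actual data).
# Folder and file name are examples only.
URL="nfs://127.0.0.1/prod/WinClone/WinClone-flat.vmdk"
echo "${URL}"
```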

5. Create a new Windows VM on AHV and attach the Windows disk to the VM

Power Off the source VM now.

Create a VM in the Nutanix Prism GUI with the same settings as the source VM.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter in my case.

create_vm1

Add a disk and choose to clone it from the image service, where the disk should already be imported.

create_vm2

Add a network adapter and connect it.

create_vm3

Now you can start the VM and it should run. You need to configure the network and maybe an option here and there, but basically the VM is imported.

windows_started

Exporting a VM from Nutanix Acropolis Hypervisor (AHV) to VMware vSphere

As you may have noticed, the import works without the NTNX-AVM. But for the export there is no PRISM/NCLI/ACLI command. So I wrote a script which helps with this part, based on the great post from Artur Krzywdzinski.

The script will export the AHV VM to a container of your choice where you can copy it via NFS to ESXi/vSphere or to somewhere else.

In my case I would like to export the WinClone VM back to ESXi, so all tools, like the VMware Tools, are already installed. I will export the WinClone to the ISO container, just to make sure there is no confusion with the “prod” container where the import took place.

Connect to the NTNX-AVM via SSH. Now let’s export the VM with the export_vm command.

STEP 1 – Using export_vm to export the VM to a container

Make sure the source VM for the export is powered off!
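A sketch of the call: the flag names are assumptions for illustration; check `export_vm --help` on the NTNX-AVM for the real syntax.

```shell
# Hypothetical export_vm call: export the powered-off VM "WinClone" into
# the ISO container. Flag names are assumptions; printed, not executed.
VM="WinClone"
TARGET_CONTAINER="ISO"
echo "export_vm --name ${VM} --container ${TARGET_CONTAINER}"
```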

export_winclone

The export takes some time because all the VM disk data needs to be converted into the VMware vmdk format.

Mount the ISO container to the target ESXi/vSphere environment.

mount_iso

Step 2 – Register the VM in ESXi/vSphere

Now I would like to create a new VM on ESXi/vSphere based on the exported files. I will just register the .vmx file! The vmdk file is at the moment not a proper ESXi/vSphere file, so it needs to be converted. I am using migrate (move VM), which does the same but lets me avoid the command line. This KB article should help if you want to do it the manual way.
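For reference, the manual way boils down to a vmkfstools clone on the ESXi shell. This is a sketch; the target datastore and folder are illustrative.

```shell
# Manual conversion sketch: clone the exported vmdk into a proper
# thin-provisioned ESXi vmdk. Paths are illustrative; the command is
# printed here, not executed (vmkfstools only exists on the ESXi host).
SRC="/vmfs/volumes/ISO/WinClone/WinClone.vmdk"
DST="/vmfs/volumes/datastore1/Win-Clone-2/Win-Clone-2.vmdk"
echo "vmkfstools -i ${SRC} ${DST} -d thin"
```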

Browse the ISO datastore and register the .vmx file as Win-Clone-2. The “-2” is only needed if the original clone still exists.

registervm

registerdetails

Step 3 – Prepare the VM for the first boot

Upgrade the virtual Hardware.

upgradevm

Set the VM OS to Windows.

vm_os

Change the SCSI controller from “BusLogic” to “LSI Logic SAS”.

vm_scsi_controller

Add a network device, or all needed ones.

vm_network

Now the nice part begins. Instead of manually converting all vmdisks, we just migrate the VM to another datastore, in the best case directly to where the VM should reside.

vm_migrate

vm_migrate2

Step 4 – Last step! Power on the VM.

vm_poweredon

Go 7-zip