Category Archives: NTNX-AVM

Unleash the power of the NTNX-AVM – Move VM from one container to another container (AHV) with move_vm

This post is related to AHV only! Make sure a recent backup of the VM exists!

In the last weeks customers asked me how to move a VM from one AHV container to another AHV container on the same cluster. The answer is: “There is no PRISM/GUI option for this, and the manual task is pretty difficult.” So I wrote a script called move_vm, which I present in this post to simplify the task.

But why should you move a VM?

There are several reasons for this.

  • Container settings don’t fit
  • Different containers for different organization units
  • DR/backup concepts based on containers
  • Automation based on containers
  • … and more

Example: Let’s say the customer started with two containers:

  1. ISO – just for templates and CD/DVD ISO images
  2. prod – production environment

Now he realizes that some of the server VMs would be good candidates for compression but some are not. He used the best practices to figure out which server VMs would fit.

compression_bestpractices

For this case he created a new container called:

3. prod_comp – with active inline compression for SQL Server 2012 R2 VMs

Now he tried to move the VM from container prod to prod_comp, found that this does not work, and asked me.

USE CASE: Move a VM from one AHV container to another AHV container to make use of inline compression for several VMs!

I created a new container called prod_comp with inline compression in my Intel NUC environment.

container_prod_comp

Then I created a VM called “Move_VM_Test1” with different kinds of disk devices and CD/DVD drives, all of which reside on the prod container.

move_vm_test1

Now I am using the tool move_vm which is installed on the NTNX-AVM to move the VM. The tool has several options which I list here.

Move a whole VM from container prod to prod_comp

In the first case the syntax to move the whole VM from prod to prod_comp is pretty simple. REMEMBER: the source VM needs to be powered off before you are able to move it.
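As a rough sketch, an invocation from the NTNX-AVM shell might look like the following. The flag names and values are assumptions based on my lab setup, so check “move_vm --help” for the exact syntax.

    # Hypothetical example; verify the flags with "move_vm --help"
    move_vm --host=192.168.178.130 --username=admin --password='<PASSWORD>' \
            --vm-name=Move_VM_Test1 --container=prod_comp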

move_full_vm

overview_vms_double

You may have noticed that there are now two VMs with the same name. I believe it makes sense to keep the old VM until you are sure the new copy works.

You can use the option “--delete” to delete the source VM. The advantage is that the new network adapters will have the same MAC addresses as the source VM!

 

Move just one vDisk/disk of the VM from container prod_comp back to prod

I renamed the new Move_VM_Test1 to Move_VM_Test2 for the next part.

Let’s say you would like to move just one vDisk from container prod_comp back to prod because you found that inline compression makes no sense there. An example might be the “transaction log” vdisk of a MS SQL Server.

First we need to find out which vdisks exist and what the mapping looks like. This can be done with the “--list_mapping” option.
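A sketch of how that could look; apart from “--list_mapping” itself, the flag names below are assumptions, so again check “move_vm --help”.

    # List the vdisk/CD-ROM to container mapping of the VM (flags other than --list_mapping are assumptions)
    move_vm --host=192.168.178.130 --username=admin --password='<PASSWORD>' \
            --vm-name=Move_VM_Test2 --list_mapping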

The output is:

which means that there are four vdisks and two CD/DVD drives. Let’s say we identified that the second vdisk, “scsi.1”, is the one we would like to move back to container prod. In this case we need to specify the whole mapping when calling the move_vm tool to move only the second vdisk. Copy and paste is the way to go!

The syntax is:

I only changed prod_comp to prod for the scsi.1 vdisk!

move_vdisk

You may ask: “Why is the whole VM cloned and not only the vdisk?” Yep, you are right, that would be the better way; this is simply how the tool works at the moment. It is more a copy than a move, but it works and it is pretty fast because only the vdisk “scsi.1” needs to be copied.

You can specify the option “--delete” to delete the source VM and to make sure the new network adapters get the same MAC addresses.

moved_vdisk

There we go.

For everyone who wants to know more, this is an overview of how the tool works (a rough acli equivalent follows the list):

  1. Upload the vDisks of the source VM to the image service. This is needed because a direct copy is not possible
  2. Create new VM
  3. Clone Images to Disks
  4. Add network
  5. Delete images
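If you are curious what a manual equivalent would look like, the same sequence can be sketched with acli on a CVM. The names below are placeholders and the exact acli parameters may differ between AOS versions; move_vm does all of this for you via the REST API.

    # Rough manual equivalent on a CVM (placeholder names, syntax may vary by AOS version)
    acli image.create Move_VM_Test1_scsi0 source_url=nfs://127.0.0.1/prod/<path-to-vmdisk> container=prod_comp
    acli vm.create Move_VM_Test1_new num_vcpus=1 memory=2G
    acli vm.disk_create Move_VM_Test1_new clone_from_image=Move_VM_Test1_scsi0
    acli vm.nic_create Move_VM_Test1_new network=VLAN0
    acli image.delete Move_VM_Test1_scsi0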

Unleash the power of the NTNX-AVM – Import/Export/Migrate VMs to/from AHV

In a recent customer Proof of Concept (PoC) I encountered a common task: importing a VM from VMware vSphere to the Nutanix Acropolis Hypervisor (AHV) and, after testing, exporting the VM back to the production environment. The import via the AHV Prism GUI can be done without too much effort, but there is no interface or command line tool which exports a VM. I wrote a script for the export which can be run on a Linux/Mac environment or on the NTNX-AVM. Jump directly to the export part if you want to skip the import.

USE CASE: Import a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and export it back after testing!

Importing/Migrate a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV)

There is a decent post from Artur Krzywdzinski on how to import/migrate a Windows 2012 R2 VM from VMware vSphere/ESXi to AHV. There is even detailed documentation on the Nutanix support portal. Search for “MIGRATING WINDOWS VMS FROM ESXI TO AHV”.

The way this works can be described as follows:

  • Install all needed drivers into the VM before migrating. (drivers for disk devices, video, network)
  • Copy the VM to a Nutanix NFS share mounted on the source ESXi/vSphere host, via Storage vMotion if available; otherwise copy it yourself via command line/GUI.
  • Convert the VMware vmdk to a format AHV can read/write
  • Create a VM based on the converted vmdk files with the same settings as in ESXi/vSphere
  • …. something else maybe… Start the VM… done

migrate_portal

1. Windows VM Migration Prerequisites

I advise you to read the full documentation if you migrate VMs. I only list the basic steps needed in my test lab.

  • Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.
  • Optional: Clone any ESXi VMs you want to preserve.
  • Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later.
  • Mount the AHV container as an NFS datastore to vSphere.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

This is a Windows Server 2012 R2 VM with VMware Tools installed.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter with no snapshots.

vm_nosnapshots

Optional: Clone any ESXi VMs you want to preserve.

Yes, I would like to do this. The VM which I will migrate will be the clone and not the original VM. Just in case we mess something up at some point, it is nice to still have the original one.

In this case I clone the VM directly to the NFS datastore mounted from the Nutanix cluster. So jump to the “Mount the AHV container” part and continue here when you have finished that step.

I clone the Windows2012R2 server to the prod datastore which resides on the Nutanix cluster.

clone_prod

Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later. See the Hypervisor Details page on the Nutanix Support Portal for all AHV versions.

I am using the Nutanix CE edition, so it is not that easy to make sure these requirements are met. Let’s start with AOS 4.5.x or later.

I connected today (04.10.2016) to my Nutanix NUC cluster and chose the option “Upgrade Software” in the top right corner in PRISM (gear wheel).

upgrade_ahv

As you can see, “2016.06.30” is installed, but is this equal to or later than AOS 4.5.x? Yep, it is. The version format seems to be YYYY.MM.DD, so this should be okay. There is a table called “AHV AND NOS/AOS VERSION UPGRADE COMPATIBILITY” on the Nutanix portal which makes it easier to understand. The newer the NOS/AOS, the newer the AHV version which is required. In this case I believe the NOS/AOS version should be 4.6.3.

ahv_nos_upgrade

Now the AHV-20160217.2 hypervisor part. The Nutanix CE shows Nutanix 20150513, which does not meet this requirement. But anyway: it works.

BTW: In this case there is an update available, but I will upgrade after I finish this post.

Mount the AHV container as an NFS datastore to vSphere

First we need to make sure that the source ESXi/vSphere environment is able to mount the NFS container/datastore. To do this we need to whitelist the ESXi/vSphere environment on Nutanix. In my case the ESXi/vSphere environment (192.168.178.80 vCenter / 192.168.178.81 ESXi) is in the same subnet as the Nutanix CE edition (192.168.178.130). Make sure that in your environment the ESXi host and the CVMs are able to reach each other via IP and no firewalls are blocking traffic!

Choose the gear wheel in the top right corner in PRISM, select “Filesystem Whitelists” and enter the IP range which should be able to mount the container/datastore. In my case I used the whole subnet 192.168.178.0/24.

filesystem_whitelist

Now we are able to mount the NFS datastore. I would like to mount the Nutanix container “prod” into vSphere.

container_prod

In the vSphere client choose to add an NFS datastore and insert the needed values like I did. I used the Nutanix cluster IP as the server address and “/prod” as the folder.
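If you prefer the command line, the same datastore can also be mounted from the ESXi shell; this is just an alternative to the GUI steps above.

    # Mount the "prod" container as an NFS datastore from the ESXi shell
    esxcli storage nfs add --host=192.168.178.130 --share=/prod --volume-name=prod
    esxcli storage nfs list    # verify the datastore is mounted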

add_nfs_prod

There we go. A datastore called “prod” is available on the source ESXi environment.

prod_mounted

2. Install Nutanix VirtIO

In my case I created a clone of the source VM, so I install the drivers only into the clone. If you skipped the cloning, install them into the source VM.

Download the Nutanix VirtIO drivers from the Nutanix portal. I prefer the ISO image because it seems to be easier to mount it via the vSphere Web Client than to copy something into the source VM.

download_virtio

Mount the ISO and install the drivers.

install_virtio

Set the Microsoft Windows SAN policy to online. This makes sure all disk devices will be mounted after migration.
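In my lab I set the policy from an elevated command prompt inside the guest with diskpart:

    C:\> diskpart
    DISKPART> san                      (shows the current SAN policy)
    DISKPART> san policy=OnlineAll     (brings all disks online automatically)
    DISKPART> exit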

san_policy

3. Migrate the VM disks to Acropolis Distributed Storage Fabric (DSF)

To migrate a VM to the Acropolis Distributed Storage Fabric you only need to Storage vMotion the VM to the mounted NFS datastore. I already copied the VM to the container/datastore “/prod” when cloning the source VM. If you didn’t do this, you need to move all data of the VM to the prod container/datastore via Storage vMotion.

In the vSphere Web Client choose “migrate” the VM and use the datastore only option. So all vmdisks will be moved to the container/datastore “prod”.

svmotion_prod

4. Convert the VM disks to AHV by importing them

To import a VMware vmdk into Nutanix AHV you need to use the Image Service/Image Configuration. So click on the gear wheel in the top right corner and select “Image Configuration” (maybe the name has changed already).

Choose “Upload” and enter the following:

import_disk

Attention!!! Make sure you use the “-flat” vmdk file!!!

nfs://127.0.0.1 will always work!

As you can see, it may not be easy to know the exact filenames. Use the vSphere Web Client datastore browser to get all needed details.

vm_file_details
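As a hypothetical example: if the cloned VM is called WinClone and resides in the prod container, the image source URL would look something like the line below; adjust the folder and file name to whatever the datastore browser shows.

    nfs://127.0.0.1/prod/WinClone/WinClone-flat.vmdk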

5. Create a new Windows VM on AHV and attach the Windows disk to the VM

Power Off the source VM now.

Create a VM in the Nutanix Prism GUI with the same settings as the source VM.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter in my case.

create_vm1

Add a disk and choose to clone from the image service where the disk should be imported already.

create_vm2

Add a network adapter and connect it.

create_vm3

Now you can start the VM and it should run. You need to configure the network and maybe an option here and there, but basically the VM is imported.

windows_started

Exporting a VM from Nutanix Acropolis Hypervisor (AHV) to VMware vSphere

As you may have noticed, the import works without the NTNX-AVM. But for the export there is no PRISM/NCLI/ACLI command which exports a VM. So I wrote a script which helps with this part, based on the great post from Artur Krzywdzinski.

The script will export the AHV VM to a container of your choice, from where you can copy it via NFS to ESXi/vSphere or to somewhere else.

In my case I would like to export the WinClone VM back to ESXi, so all tools such as VMware Tools are still installed. I will export the WinClone to the ISO container, just to make sure there is no confusion with the “prod” container where the import took place.

Connect to the NTNX-AVM via SSH. Now let’s export the VM with the export_vm command.

STEP 1 – using export_vm to export the VM to a container

Make sure the source VM for the export is powered off!
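A sketch of the call; the flag names and values are assumptions for my lab, so check “export_vm --help” on the NTNX-AVM for the exact syntax.

    # Export the VM "WinClone" to the ISO container (flags are assumptions, see export_vm --help)
    export_vm --host=192.168.178.130 --username=admin --password='<PASSWORD>' \
              --vm-name=WinClone --container=ISO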

export_winclone

The export takes some time because all VM disk data needs to be converted into the VMware vmdk format.

Mount the ISO container to the target ESXi/vSphere environment.

mount_iso

Step 2 – Register the VM into the ESXi/vSphere

Now I would like to create a new VM on ESXi/vSphere based on the exported files. I will just register the .vmx file! The vmdk file is at the moment not a proper ESXi/vSphere file, so it needs to be converted. I am using migrate (move VM), which will do the same, so I can avoid the command line. But this KB article should help if you want to do it the manual way.
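For reference, the manual way described in the KB article essentially boils down to a vmkfstools clone on the ESXi shell; the paths below are placeholders.

    # Convert the exported vmdk into a proper ESXi format (placeholder paths)
    vmkfstools -i /vmfs/volumes/ISO/WinClone/WinClone.vmdk \
               /vmfs/volumes/<target-datastore>/Win-Clone-2/Win-Clone-2.vmdk -d thin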

Browse the ISO datastore and register the .vmx file as Win-Clone-2. The “-2” is only needed if the original clone still exists.

registervm

registerdetails

Step 3 – Prepare the VM for the first boot

Upgrade the virtual Hardware.

upgradevm

Set the VM OS to Windows.

vm_os

Change the SCSI controller from “BusLogic” to “LSI Logic SAS”.

vm_scsi_controller

Add a network device, or all needed ones.

vm_network

Now the nice part begins. Instead of manually converting all vmdisks, we just migrate the VM to another datastore, in the best case directly to where the VM should reside.

vm_migrate

vm_migrate2

Step 4 – Last step! Power on the VM.

vm_poweredon

Go 7-zip

Unleash the power of the NTNX-AVM – daily_health_report and monthly_ncc_health

I asked myself how an admin could use the NTNX-AVM. So I decided to show and provide some real-world examples of how this powerful automation VM can be used.

USE CASE: A daily health report should run on the Nutanix cluster and be sent to a specified email address!

Let’s start with the script itself. There is no script provided by Nutanix except the Nutanix Cluster Check (ncc). It does a decent job, but because of the hundreds of tests and their output it may not be the easiest thing to start with. So, based on the script provided by BMetcalf in the Nutanix Community, I developed a script called “daily_health_report.sh” for the NTNX-AVM. It is automatically installed with the NTNX-AVM starting today.

It runs the following command remotely on a CVM, which gives you a good overview of the current cluster status.

Okay, we have a script, but how do we run it once a day? For this I introduced jobber to the NTNX-AVM.

Learn jobber the fast way

Connect via SSH to the NTNX-AVM and run:
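The command is jobber’s list subcommand:

    jobber list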

jobber_list

In this case no job is known. I prepared an example which runs the script daily_health_report.sh every day at 04:00.

The easiest way to create this job is to copy the example from the source folder to a file called “.jobber” in the Nutanix home directory.

The last step is just to reload the jobs defined in “.jobber”.
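Assuming the daily example ships in the same folder layout as the monthly example mentioned further down, the two steps look roughly like this:

    # The path is an assumption based on the example folder used later for monthly_ncc_health
    cp ~/work/src/github.com/Tfindelkind/automation/NTNX-AVM/jobber/example/daily_health_report ~/.jobber
    jobber reload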

Review the jobber list.

jobber_finished

How does the daily_health_report.sh script work?

First of all, this script will not run in your environment as-is, because all parameters for daily_health_report.sh are set up for my lab environment. Okay, let’s make sure it will run in your environment.

STEP1 – Enable SSH access from NTNX-AVM to the cluster CVMs

The script makes use of ssh/scp to run the commands remotely on one of the CVMs. To run a script non-interactively we need to enable password-less authentication between the NTNX-AVM and the CVMs. I wrote a script which enables password-less authentication.

This script creates a key pair and deploys the keys to the CVMs. When you run it you need to specify the cluster IP/name and the PRISM admin password.

A test ssh connection should work now without requesting a password.
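If you want to verify the result, or set it up by hand, the manual equivalent of what the helper script does looks roughly like this:

    # Manual equivalent / verification; <CVM-IP> is a placeholder, repeat ssh-copy-id for every CVM
    ssh-keygen -t rsa                  # create a key pair on the NTNX-AVM if none exists yet
    ssh-copy-id nutanix@<CVM-IP>       # deploy the public key to a CVM
    ssh nutanix@<CVM-IP> hostname      # should now return without a password prompt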

STEP2 – Edit the jobber file

Use an editor of your choice, like “vi”, and edit the line which starts with “cmd: daily_h…”; adjust the parameters to your needs (a complete example follows the list below).

DO NOT use the cluster IP for host. Use one CVM IP.

  • --host=<YOUR-CVM-IP>
  • --recipient=<RECIPIENT-EMAIL>
  • --provider=other                // choose "other" to send email to a local email server
  • --emailuser=<EMAIL-USER>        // email user used to authenticate via SMTP (sender)
  • --emailpass=<EMAIL-PASSWORD>    // email password used to authenticate via SMTP
  • --server=<EMAIL-SERVER-IP>      // email server IP
  • --port=<SMTP-PORT>              // email server SMTP port
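Putting the parameters together, the job entry in ~/.jobber could look roughly like this. It is only a sketch: the field names follow the general jobber v1 syntax and the values are placeholders, so keep the structure of the shipped example and only adjust the parameters.

    - name: daily_health_report
      cmd: daily_health_report.sh --host=<YOUR-CVM-IP> --recipient=<RECIPIENT-EMAIL> --provider=other --emailuser=<EMAIL-USER> --emailpass=<EMAIL-PASSWORD> --server=<EMAIL-SERVER-IP> --port=25
      time: '0 0 4'          # jobber time spec (sec min hour ...): run every day at 04:00:00
      onError: Continue
      notifyOnError: false
      notifyOnFailure: false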

Reload the jobber file.

STEP3 – Test the job

The output should look like this:

edit_test_jobber

Check the email account to verify that the email was sent and received.

show_daily_email

There it is. Sorry for the German Thunderbird version, but you should get the idea of how the email looks: an email with one attachment called “daily_health_report-<DATE>.txt”.

USE CASE: Run a monthly “ncc health_checks run_all” and send the output to a specified email address!

Some Nutanix people would say: “Why don’t you use ncc itself instead?” Good point. This post shows how to run ncc every x hours and send an email. But how do you run ncc once a month and get all ERROR/FAIL messages in the email body?

For this case I created the ncc_health_report.sh script, which runs “ncc health_checks run_all” and sends an email.

STEP1 – Extend the “.jobber” file to add this job

The example, which can be found on the NTNX-AVM in “~/work/src/github.com/Tfindelkind/automation/NTNX-AVM/jobber/example/monthly_ncc_health”, defines a job which runs on the 1st of each month and calls ncc_health_report.sh.

Edit the ~/.jobber file and add the job text to the end of the file, BUT skip the first line “---”. The file should look like this.

both_jobs

Don’t forget to edit the parameters like in STEP 2 above.

Reload the jobber file.

STEP3 – Test the job

WARNING!!!! This may run for a while…

The output should look like this:

ncc_report_test

And an email should be in your inbox:

ncc_report_test_email

I know the format of the body is weird because all newlines have been removed. I hope to fix this in the near future.

BTW: I used the hMailServer in my lab environment. This was really the easiest mail server setup I have ever done.

Go hMailServer

 

Nutanix automation VM ( NTNX-AVM ) goes online

Since I started at Nutanix I have been thinking about a way to write and run scripts/tools around the Nutanix ecosystem. But there are different languages used by the community: Perl/Python/Golang/PowerShell etc. So I asked myself: “Where the heck should I install the runtime and the scripts/tools? The CVM is a bad place for this.”

The answer took me a while but here we go:

Nutanix automation VM called NTNX-AVM

So there is no image which fits all; instead, the NTNX-AVM is based on recipes which define the runtime/scripts/tools that will be installed. The foundation of these are the cloud images which are designed to run on cloud solutions like AWS/Azure/OpenStack. These images provide good security from scratch. Another advantage is that the images come pre-built, which means there is no installation other than “importing” a vendor-controlled image. This is good for maintaining the whole project.

NTNX-AVM v1, when deployed, provides golang, git, govc, java, ncli (CE edition), vSphere CLI and the automation scripts from https://github.com/Tfindelkind/automation preinstalled. So, for example, you can move a VM from container A to container B with the move_vm binary, which leverages the Nutanix REST API; this is not possible natively in AHV.

I introduced a job scheduler system called jobber (https://github.com/dshearer/jobber) to automate tasks/jobs. The advantages are that you are able to review the history of already executed jobs and you have more control when something goes wrong.

Use cases for the NTNX-AVM

  • Back up Nutanix VMs to an NFS store like Synology/QNAP/Linux…
  • Move a VM from one container to another one
  • Do daily tasks like generating reports of specific performance counters you would like to monitor which are not covered by Prism
  • Anything which talks to the Nutanix REST API and needs to be scheduled
  • …. there will be more

Installation of NTNX-AVM on Acropolis Hypervisor (AHV)

For an easy deployment and usage I created a simple bash script which will do all the hard work.

The deployment for VMware and Hyper-V will follow. At the moment the process is more manual. I will post a “HOW-TO install”.

What you need is a Nutanix cluster based on AHV (>= 4.7) and a client where you are able to run the bash script. Ubuntu, Debian, Red Hat, CentOS, or Mac OS should work fine as a client. The Community Edition (CE) is the base of my development environment and is fully supported.

This is how the environment looks before the deployment: my three node cluster based on Intel NUCs.

cluster_before_NTNX-AVM

Image_service_before_NTNX-AVM

Step-by-Step deployment of NTNX-AVM with Deploy Cloud Image script (DCI)

We start at the client system, in my case a MacBook Pro. Download the latest stable release of DCI from https://github.com/Tfindelkind/DCI/releases. In my case the version v1.0-stable is the latest build available. The “Source code (tar.gz)” will work for me.

Release v1.0-stable · Tfindelkind-DCI Google Chrome, Heute at 11.40.54

Change to the Download folder and unpack/untar the file:
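The archive name depends on the release you picked; for v1.0-stable it should be something like this:

    # File and folder names are assumptions based on the v1.0-stable release tag
    cd ~/Downloads
    tar -xzf DCI-1.0-stable.tar.gz
    cd DCI-1.0-stable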

Downloads — bash — 92×28 Terminal, Heute at 11.44.46

You can see there are several recipes available but let’s focus just on NTNX-AVM v1 based on CentOS7.

NTNX-AVM recipe config file

IMPORTANT: The NTNX-AVM needs an internet connection when deployed, because all tools need to be downloaded.

Now we need to edit the recipe config file of the NTNX-AVM to make sure that the IP, DNS, etc. are set up the way we need them. Use a text editor of your choice to edit the “/recipes/NTNX-AVM/v1/CentOS7/config” file.

You should edit following settings to your needs:

  • VM-NAME – the name of the VM guest OS
  • VM-IP – the fixed IP
  • VM-NET – the network of the VM
  • VM-MASK – the netmask of the network
  • VM-BC – the broadcast address of the network
  • VM-GW – the gateway
  • VM-NS – the nameserver
  • VM-USER – the username for the NTNX-AVM which will be created
  • VM-PASSWORD – the password for this user (support for access keys will be added soon). You need to escape some special characters like “/” with a “\” (backslash)
  • VCENTER_IP – IP of the vCenter, when used
  • VCENTER_USER – user of the vCenter
  • VCENTER_PASSWORD – password of that vCenter user

This is an example file for my environment:

CentOS7 — bash — 92×28 Terminal, Heute at 11.47.18

NTNX-AVM with DHCP enabled

If you don’t want to specify a fixed IP, DNS, etc., you can roll out the NTNX-AVM with DHCP. To do this, edit the “/recipes/NTNX-AVM/v1/CentOS7/meta-data.template” file and remove the network part so the file looks like this one. The “ifdown eth0” and “ifup eth0” lines are related to a bug with the CentOS 7 cloud image.

Deploy the NTNX-AVM to the Nutanix cluster

Now we are ready to deploy the VM to the Nutanix cluster with the dci.sh script.

We need to specify a few options to run it (a complete example follows the list):

  • --recipe=NTNX-AVM – use the pre-built NTNX-AVM recipe
  • --rv=v1 – it’s the first version, so we use v1
  • --ros=CentOS7 – in this case we use the CentOS7 image and not Ubuntu
  • --host=192.168.178.130 – the cluster IP of Nutanix (a CVM IP will work too)
  • --username/--password – Prism user and password
  • --vm-name – the name of the Nutanix VM object
  • --container=prod – in my case I used the container “prod” (production)
  • --vlan=VLAN0 – the Nutanix network the VM will be connected to
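Put together, the call for my lab could look like this; the VM name and password are placeholders.

    ./dci.sh --recipe=NTNX-AVM --rv=v1 --ros=CentOS7 \
             --host=192.168.178.130 --username=admin --password='<PASSWORD>' \
             --vm-name=NTNX-AVM --container=prod --vlan=VLAN0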

The dci.sh script will do the following:

  • First it will download the cloud image from a CentOS mirror. Then it will download the deploy_cloud_vm binary.
  • It will read the recipe config file and generate a cloud seed CD/DVD image. This means all configuration like IP, DNS, etc. will be saved into this CD/DVD image, called “seed.iso”.
  • DCI will upload the CentOS image and the seed.iso to the AHV image service.
  • The NTNX-AVM VM will be created based on the CentOS image, and the seed.iso will be connected to the CD-ROM drive. At the first boot all settings will be applied. This is called a NoCloud deployment, based on cloud-init, and will only work with cloud-init ready images.
  • The NTNX-AVM will be powered on and all configs will be applied.
  • In the background all tools/scripts will be installed.

DCI-1.0-stable — bash — 92×28 Terminal, Heute at 12.05.51

The CentOS cloud image and the seed.iso have been uploaded to the image service.

Nutanix Web Console Google Chrome, Heute at 12.06.17

The NTNX-AVM has been created and started.

Nutanix Web Console Google Chrome, Heute at 12.06.45

Using the Nutanix Automation VM aka NTNX-AVM the first time

Connect via SSH to the NTNX-AVM IP, 192.168.178.200 in my case. First of all we need to make sure that all tools are fully installed, because this is done in the background after the first boot.

So let’s check whether /var/log/cloud-init-output.log shows something like:

DCI-1.0-stable — nutanix@NTNX-AVM.~ — ssh — 105×28 Terminal, Heute at 07.45.06

The NTNX-AVM is finally up, after … seconds
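A simple way to follow the progress is to tail the log until that line appears:

    tail -f /var/log/cloud-init-output.log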

You should reconnect via SSH once all tools/scripts are installed to make sure all environment variables are set.

Everything is installed and we can use it.

Test the NTNX-AVM environment

Let’s connect to the Nutanix cluster with ncli (the Nutanix command line) and show the cluster status.
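A quick test could look like this; “cluster info” is one of the basic ncli status commands, and the flags point ncli at the cluster IP with the Prism credentials.

    # -s = cluster IP, -u = Prism user, -p = password (placeholder)
    ncli -s 192.168.178.130 -u admin -p '<PASSWORD>' cluster info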

DCI-1.0-stable — nutanix@NTNX-AVM.~ — ssh — 105×28 Terminal, Heute at 07.54.57

That’s it. The NTNX-AVM is up and running.

Today I start to implement the ntnx_backup tool, which will be able to back up/restore an AHV VM to/from an external share (NFS, SMB, …) and which will leverage jobber as the job scheduling engine.

Go Ubuntu