Intel NUC NUC6i7KYK – Upgrade of Nutanix Community Edition (CE)

Today I upgraded the Intel NUC cluster based on Nutanix CE. This was pretty easy.

Choose “gear wheel” in the top right corner and select “Upgrade Software”


As you can see, the new Nutanix CE version was released a few days ago. So choose "Upgrade" -> "Upgrade now".


The upgrade process starts. We just need to wait now… Or click on “Nothing to do?”


I played the game “2048” but I was not really good this time.


So I checked the upgrade progress.


Not ready yet... so I wrote some emails in the meantime.

Around 20 minutes after I clicked upgrade:


It’s done…..

This is why Nutanix calls it the 1-click upgrade. (Yes, I clicked around 4-5 times overall.)

Go Audacity!


Unleash the power of the NTNX-AVM – Import/Export/Migrate VMs to/from AHV

In a recent customer Proof of Concept (PoC) I encountered a common task: importing a VM from VMware vSphere to the Nutanix Acropolis Hypervisor (AHV) and, after testing, exporting the VM back to the production environment. The import via the AHV Prism GUI can be done without too much effort, but there is no interface or command-line tool which exports a VM. I wrote a script for the export which can be run on a Linux/Mac environment or on the NTNX-AVM. Jump directly to the export part if you want to skip the import.

USE CASE: Import a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and export it back after testing!

Importing/Migrating a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV)

There is a decent post from Artur Krzywdzinski on how to import/migrate a Windows 2012 R2 VM from VMware vSphere/ESXi to AHV. There is also detailed documentation on the Nutanix support portal. Search for "MIGRATING WINDOWS VMS FROM ESXI TO AHV".

The way this works can be described as follows:

  • Install all needed drivers into the VM before migrating (drivers for disk devices, video, network).
  • Copy the VM to a Nutanix NFS share which is mounted on the source ESXi/vSphere, via Storage vMotion if available; otherwise copy it yourself via command line/GUI.
  • Convert the VMware vmdk to a format AHV can read/write.
  • Create a VM based on the converted vmdk files with the same settings as in ESXi/vSphere.
  • Start the VM... done.


1. Windows VM Migration Prerequisites

I advise you to read the full documentation if you migrate VMs. I only list the basic steps needed in my test lab.

  • Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.
  • Optional: Clone any ESXi VMs you want to preserve.
  • Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later.
  • Mount the AHV container as an NFS datastore to vSphere.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

This is a Windows Server 2012 R2 VM with the VMware Tools installed.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter with no snapshots.


Optional: Clone any ESXi VMs you want to preserve.

Yes, I would like to do this. The VM which I will migrate will be the clone and not the original VM. Just in case we mess something up at any point, it is nice to still have the original one.

In this case I clone the VM directly to the NFS datastore mounted from the Nutanix cluster. So jump to the "Mount the AHV container" part and continue here when you have finished that step.

I clone the Windows2012R2 server to the prod datastore which resides on the Nutanix cluster.


Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later. See the Hypervisor Details page on the Nutanix Support Portal for all the AHV versions

I am using Nutanix CE, so it is not that easy to make sure these requirements are met. Let's start with AOS 4.5.x or later.

I connected today (04.10.2016) to my Nutanix NUC cluster and chose the option "Upgrade Software" in the top right corner in PRISM (gear wheel).


As you can see, "2016.06.30" is installed, but is this equal to or later than AOS 4.5.x? Yep, it is. The version format seems to be YYYY.MM.DD, so this should be okay. There is a table called "AHV AND NOS/AOS VERSION UPGRADE COMPATIBILITY" on the Nutanix portal which makes this easier to understand: the newer the NOS/AOS, the newer the required AHV version. In this case I believe the NOS/AOS version should be 4.6.3.


Now for the AHV-20160217.2 hypervisor part. Nutanix CE shows Nutanix 20150513, which does not meet this requirement, but it works anyway.

BTW: In this case there is an update available, but I will upgrade after I have finished this post.

Mount the AHV container as an NFS datastore to vSphere

First we need to make sure that the source ESXi/vSphere environment is able to mount the NFS container/datastore. To do this we need to whitelist the ESXi/vSphere environment on Nutanix. In my case the ESXi/vSphere (vCenter/ESXi) environment is in the same subnet as the Nutanix CE cluster. Make sure that in your environment the ESXi hosts and the CVMs are able to reach each other via IP and that no firewalls are blocking traffic!

Choose the "gear wheel" in the top right corner in PRISM, select "Filesystem Whitelists" and enter the IP range which should be able to mount the container/datastore. In my case I whitelisted the whole subnet.


Now we are able to mount the NFS datastore. I would like to mount the Nutanix container “prod” into vSphere.


In the vSphere client choose to add an NFS datastore and insert the needed values like I did. I used the Nutanix cluster IP as the server address and "/prod" as the folder.
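If you prefer the command line over the vSphere client, the same mount can be done with esxcli directly on the ESXi host. A minimal sketch, assuming the cluster IP and the container "prod" from above:

  # Mount the Nutanix container "prod" as an NFS datastore on this ESXi host
  esxcli storage nfs add --host=<NUTANIX-CLUSTER-IP> --share=/prod --volume-name=prod

  # Verify that the datastore is mounted
  esxcli storage nfs list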


There we go. A datastore called “prod” is available on the source ESXi environment.


2. Install Nutanix VirtIO

In my case I created a clone of the source VM, so I install the drivers only into the clone. If you skipped the cloning, install them into the source VM.

Download the Nutanix VirtIO drivers from the Nutanix portal. I prefer the ISO image because it is easier to mount via the vSphere Web Client than to copy something into the source VM.


Mount the ISO and install the drivers.


Set the Microsoft Windows SAN policy to online. This makes sure all disk devices will be mounted after migration.


3. Migrate the VM disks to Acropolis Distributed Storage Fabric (DSF)

To migrate a VM to the Acropolis Distributed Storage Fabric you only need to Storage vMotion the VM to the mounted NFS datastore. I already copied the VM to the container/datastore "prod" when cloning the source VM. If you did not do this, you need to move all data of the VM to the "prod" container/datastore via Storage vMotion.

In the vSphere Web Client choose to migrate the VM and select the storage-only option, so all vmdisks will be moved to the container/datastore "prod".


4. Convert the VM disks to AHV by importing them

To import a VMware vmdk to Nutanix AHV you need to use the Image Service/Image Configuration. So click on the "gear wheel" in the top right corner and select "Image Configuration" (the name may have changed in the meantime).

Choose “Upload” and enter the following:


Attention: make sure you use the "-flat" vmdk!

An nfs:// source URL will always work!

As you can see, it may not be easy to know the exact filenames. Use the vSphere Web Client datastore browser to get all the needed details.
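As an illustration, the source URL for the Image Configuration upload then has roughly this shape. Container, folder and file names are placeholders; take the real ones from the datastore browser:

  # Source URL for the "From URL"/upload field (placeholder names)
  nfs://<NUTANIX-CLUSTER-IP>/prod/WinClone/WinClone-flat.vmdk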


5. Create a new Windows VM on AHV and attach the Windows disk to the VM

Power Off the source VM now.

Create a VM in the Nutanix Prism GUI with the same settings as the source VM.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter in my case.


Add a disk and choose to clone it from the image service, where the disk has been imported in the previous step.


Add a network adapter and connect it.


Now you can start the VM and it should run. You need to configure the network and maybe an option here and there, but basically the VM is imported.


Exporting a VM from Nutanix Acropolis Hypervisor (AHV) to VMware vSphere

As you may have noticed, the import works without the NTNX-AVM. But for the export there is no PRISM/NCLI/ACLI command. So I wrote a script, based on the great post from Artur Krzywdzinski, which helps with this part.

The script will export the AHV VM to a container of your choice where you can copy it via NFS to ESXi/vSphere or to somewhere else.

In my case I would like to export the WinClone VM back to ESXi, so all tools like the VMware Tools are already installed. I will export the WinClone to the ISO container, just to make sure there is no confusion with the "prod" container where the import took place.

Connect to the NTNX-AVM via SSH. Now let's export the VM with the export_vm command.

STEP 1 – using export_vm to export the VM to a container

Make sure the source VM for the export is powered off!
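The call looks roughly like the following. The option names are my assumptions based on the other tools in this post, so please check the usage output of export_vm for the exact syntax:

  # Sketch only - verify the option names with the tool's usage output
  export_vm --host=<CLUSTER-IP> --username=admin --password='<PRISM-PASSWORD>' \
            --vm-name=WinClone --container=ISO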


The export takes some time because all VM disk data needs to be converted into the VMware vmdk format.

Mount the ISO container to the target ESXi/vSphere environment.


Step 2 – Register the VM into the ESXi/vSphere

Now I would like to create a new VM on ESXi/vSphere based on the exported files. I will just register the .vmx file! The vmdk file is not a proper ESXi/vSphere file at this point, so it needs to be converted. I am using a migrate (move VM) operation, which does the same but lets me avoid the command line. This KB article should help if you want to do it the manual way.

Browse the ISO datastore and register the .vmx file as Win-Clone-2. The “-2” is only needed if the original clone still exists.



Step 3 – Prepare the VM for the first boot

Upgrade the virtual Hardware.


Set the VM OS to Windows.


Change the SCSI controller from "BusLogic" to "LSI Logic SAS".


Add a network device, or all the ones you need.


Now the nice part begins. Instead of manually converting all vmdisks we just migrate the VM to another datastore, in the best case directly to where the VM should reside.



Step 4 – Last step! Power on the VM.


Go 7-zip

Unleash the power of the NTNX-AVM – daily_health_report and monthly_ncc_health

I asked myself how an admin could use the NTNX-AVM. So I decided to show and provide some real-world examples of how this powerful automation VM can be used.

USE CASE: A daily health report should run on the Nutanix cluster and be sent to a specified email address!

Let's start with the script itself. There is no script provided by Nutanix except the Nutanix Cluster Check (ncc). It does a decent job, but because of the hundreds of tests and the amount of output it may not be the easiest thing to start with. So, based on the script provided by BMetcalf in the Nutanix Community, I developed a script called "daily_health_report" for the NTNX-AVM. It is automatically installed with the NTNX-AVM starting today.

It runs a few commands remotely on a CVM, which gives you a good overview of the current cluster status.
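I will not reproduce the whole script here, but the kind of commands it collects remotely looks roughly like this. This is a sketch of the idea, not the exact script contents:

  CVM=<YOUR-CVM-IP>
  REPORT=/tmp/daily_health_report-$(date +%F).txt

  # Collect some cluster status output remotely (password-less SSH to the CVM is set up in STEP1 below)
  ssh nutanix@$CVM "source /etc/profile; cluster status"    >  "$REPORT" 2>&1
  ssh nutanix@$CVM "source /etc/profile; ncli cluster info" >> "$REPORT" 2>&1

  # ... the real script gathers more data and mails $REPORT as an attachment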

Okay, we do have a script, but how do we run it once a day? For this I introduced jobber on the NTNX-AVM.

Learn jobber the fast way

Connect via SSH to the NTNX-AVM and run:
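The first command to get familiar with is jobber's job list:

  jobber list      # show all jobs defined for this user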


In this case no job is known. I prepared an example which runs the script every day at 04:00.

The easiest way to create this job is to copy the example from the source folder to a file called ".jobber" in the nutanix user's home directory.

The last step is just to reload the jobs defined in “.jobber”.
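A sketch of the copy-and-reload steps; the exact path of the shipped example is an assumption, look it up under ~/work/src/:

  # Copy the shipped example job definition to ~/.jobber (path is an assumption)
  cp ~/work/src/<path-to-the-example>/jobber-example ~/.jobber

  # Tell jobber to pick up the new or changed job definitions
  jobber reload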

Review the jobber list.


How does the daily_health_report script work?

First of all, this script will not run in your environment as it is, because all parameters are set up for my lab environment. Okay, let's make sure it will run in your environment.

STEP1 – Enable SSH access from NTNX-AVM to the cluster CVMs

The script makes use of ssh/scp to run the commands remotely on one of the CVMs. To run a script non-interactively we need to enable password-less authentication from the NTNX-AVM to the CVMs. I wrote a script which enables password-less authentication.

This script creates a key pair and deploys the keys to the CVMs. When you run it you need to specify the cluster IP/name and the PRISM admin password.
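For reference, the manual equivalent of what the helper script does looks roughly like this. It is a sketch: the key name is arbitrary and the ncli syntax should be double-checked against your AOS version:

  # Create a key pair on the NTNX-AVM
  ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""

  # Register the public key cluster-wide via ncli ...
  ncli -s <CLUSTER-IP> -u admin -p '<PRISM-ADMIN-PASSWORD>' \
    cluster add-public-key name=ntnx-avm file-path=$HOME/.ssh/id_rsa.pub

  # ... or simply copy it to each CVM
  ssh-copy-id nutanix@<CVM-IP>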

A test SSH connection should now work without requesting a password.

STEP2 – Edit the jobber file

Use an editor of your choice like "vi", edit the line which starts with "cmd : daily_h…" and adjust the parameters to your needs (see the example after this list).

DO NOT use the cluster IP for host. Use one CVM IP.

  • --host=<YOUR-CVM-IP>
  • --recipient=<RECIPIENT-EMAIL>
  • --provider=other                  // choose "other" to send email to a local email server
  • --emailuser=<EMAIL-USER>          // email user used to authenticate via SMTP (sender)
  • --emailpass=<EMAIL-PASSWORD>      // email password used to authenticate via SMTP
  • --server=<EMAIL-SERVER-IP>        // email server IP
  • --port=<SMTP-PORT>                // email server SMTP port
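Put together, the job entry in ~/.jobber could look like this. The values are placeholders, and the time spec uses jobber's cron-like format with a leading seconds field (here: every day at 04:00):

  - name: daily_health_report
    cmd: daily_health_report --host=<YOUR-CVM-IP> --recipient=<RECIPIENT-EMAIL> --provider=other --emailuser=<EMAIL-USER> --emailpass=<EMAIL-PASSWORD> --server=<EMAIL-SERVER-IP> --port=<SMTP-PORT>
    time: '0 0 4'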

Reload the jobber file.

STEP3 – Test the job
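Instead of waiting until 04:00 you can trigger the job once on demand with jobber's test command (the job name is the one defined in ~/.jobber):

  jobber test daily_health_report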

The output should look like this:


Check the email account to verify that the email was sent and received.


There it is. Sorry for the German Thunderbird version, but you should get the idea of what the email looks like: an email with one attachment called "daily_health_report-<DATE>.txt".

USE CASE: Run a monthly “ncc health_checks run_all” and send the output to a specified email address!

Some Nutanix people would say: "Why don't you use ncc directly instead?" Good point. This post shows how to run ncc every x hours and send an email. But how do you run ncc once a month and get all ERROR/FAIL messages in the body?

For this case I created the monthly_ncc_health script, which runs "ncc health_checks run_all" and sends an email.

STEP1 – Extend the “.jobber” file to add this job

The example which can be found on the NTNX-AVM in "~/work/src/" defines a job which runs on the 1st of each month and calls the monthly_ncc_health script.

Edit the ~/.jobber file and add the job text to the end of the file, BUT skip the first line "---". The file should then look like this.
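A sketch of the appended job entry; the parameters mirror the daily job, and the time spec '0 0 4 1' means 04:00 on the 1st of each month (sec, min, hour, day-of-month):

  - name: monthly_ncc_health
    cmd: monthly_ncc_health --host=<YOUR-CVM-IP> --recipient=<RECIPIENT-EMAIL> --provider=other --emailuser=<EMAIL-USER> --emailpass=<EMAIL-PASSWORD> --server=<EMAIL-SERVER-IP> --port=<SMTP-PORT>
    time: '0 0 4 1'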


Don't forget to edit the parameters as in STEP2 above.

Reload the jobber file.

STEP3 – Test the job

WARNING!!!! This may run for a while…

The output should look like this:


And an email should be in your inbox:


I know the formatting of the body is weird because all newlines have been removed. I hope to fix this in the near future.

BTW: I used hMailServer in my lab environment. This was really the easiest mail server setup I have ever done.

Go hMailServer


Nutanix automation VM (NTNX-AVM) goes online

Since I started at Nutanix I have been thinking about a way to write and run scripts/tools around the Nutanix ecosystem. But there are different languages used by the community: Perl/Python/Golang/PowerShell etc. So I asked myself: "Where the heck should I install the runtime and the scripts/tools? The CVM is a bad place for this."

The answer took me a while but here we go:

Nutanix automation VM called NTNX-AVM

So there is no single image which fits all; instead the NTNX-AVM is based on recipes which define the runtime/scripts/tools to be installed. The foundation of these are the cloud images which are designed to run on cloud solutions like AWS/Azure/OpenStack. These images provide good security from scratch. Another advantage is that the images come pre-built, which means there is no other way to install them than "importing" a vendor-controlled image. This is good for maintaining the whole project.

NTNX-AVM v1, when deployed, provides golang, git, govc, java, ncli (CE edition), vSphere CLI and my automation scripts preinstalled. So, for example, you can move a VM from container A to container B with the move_vm binary, which leverages the Nutanix REST API; something that is not natively possible in AHV.

I introduced a job scheduler system called jobber to automate tasks/jobs. The advantages are that you are able to review the history of already executed jobs and you have more control when something goes wrong.

Use cases for the NTNX-AVM

  • Back up Nutanix VMs to an NFS store like Synology/QNAP/Linux...
  • Move a VM from one container to another one
  • Do daily tasks like generating reports of specific performance counters you would like to monitor which are not covered by Prism
  • Anything which talks to the Nutanix REST API and needs to be scheduled
  • ... there will be more

Installation of NTNX-AVM on Acropolis Hypervisor (AHV)

For an easy deployment and usage I created a simple bash script which will do all the hard work.

The deployment for VMware and Hyper-V will follow; at the moment the process there is more manual. I will post a "HOW-TO install".

What you need is a Nutanix cluster based on AHV (>= 4.7) and a client where you are able to run the bash script. Ubuntu, Debian, Red Hat, CentOS and macOS should work fine as a client. The Community Edition (CE) is the base of my development environment and is fully supported.

This is how the environment looks before the deployment: my three-node cluster based on Intel NUCs.



Step-by-Step deployment of NTNX-AVM with Deploy Cloud Image script (DCI)

We start on your client system, in my case a MacBook Pro. Download the latest stable release of DCI from the Tfindelkind-DCI GitHub release page. In my case version v1.0-stable is the latest build available. The "Source code (tar.gz)" will work for me.


Change to the Download folder and unpack/untar the file:
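On my Mac this is simply the following (adjust the archive name to whatever the release download is called):

  cd ~/Downloads
  tar -xzf DCI-1.0-stable.tar.gz     # unpack the release archive
  cd DCI-1.0-stable
  ls recipes/                        # list the available recipes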


You can see there are several recipes available, but let's focus on NTNX-AVM v1 based on CentOS7.

NTNX-AVM recipe config file

IMPORTANT: The NTNX-AVM needs an internet connection when deployed, because all tools need to be downloaded.

Now we need to edit the recipe config file of the NTNX-AVM to make sure that the IP, DNS, etc. are set up the way we need. Use a text editor of your choice to edit the "/recipes/NTNX-AVM/v1/CentOS7/config" file.

You should edit the following settings to your needs:

  • VM-NAME            The name of the VM guest OS
  • VM-IP               The fixed IP
  • VM-NET              The network of the VM
  • VM-MASK             The netmask of the network
  • VM-BC               The broadcast address of the network
  • VM-GW               The gateway
  • VM-NS               The nameserver
  • VM-USER             The username for the NTNX-AVM user which will be created
  • VM-PASSWORD         The password for this user (support for access keys will be added soon).
                        You need to escape some special characters like "/" with a "\" (backslash)
  • VCENTER_IP          IP of the vCenter, when used
  • VCENTER_USER        User of the vCenter

This is an example file for my environment:


NTNX-AVM with DHCP enabled

If you don't want to specify a fixed IP, DNS, etc., you can roll out the NTNX-AVM with DHCP. To do this, edit the "/recipes/NTNX-AVM/v1/CentOS7/meta-data.template" file and remove the network part so that only the non-network settings remain. The "ifdown eth0" and "ifup eth0" entries are related to a bug with the CentOS 7 cloud image.

Deploy the NTNX-AVM to the Nutanix cluster

Now we are ready to deploy the VM to the Nutanix cluster with the deploy script.

We need to specify a few options to run it (see the example call after this list):

  • --recipe=NTNX-AVM            Use the pre-built NTNX-AVM recipe
  • --rv=v1                      It's the first version, so we use v1
  • --ros=CentOS7                In this case we use the CentOS7 image and not Ubuntu
  • --host=                      The cluster IP of Nutanix; a CVM IP will work too
  • --username/--password        Prism user and password
  • --vm-name                    The name of the Nutanix VM object
  • --container=prod             In my case I used the container "prod" (production)
  • --vlan=VLAN0                 The Nutanix network the VM will be connected to
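An example call could look like this. The flag names are taken from the list above, while the script name, IP and credentials are placeholders; check the unpacked DCI folder for the exact script/binary name:

  ./dci --recipe=NTNX-AVM --rv=v1 --ros=CentOS7 \
        --host=<CLUSTER-IP> --username=admin --password='<PRISM-PASSWORD>' \
        --vm-name=NTNX-AVM --container=prod --vlan=VLAN0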

The script will do the following:

  • First it will download the CentOS cloud image. Then it will download the deploy_cloud_vm binary.
  • It will read the recipe config file and generate a cloud seed CD/DVD image. This means all configuration like IP, DNS, etc. will be saved into this CD/DVD image, called "seed.iso".
  • DCI will upload the CentOS image and seed.iso to the AHV image service.
  • The NTNX-AVM VM will be created based on the CentOS image, and the seed.iso will be connected to the CD-ROM. At the first boot all settings will be applied. This is called a NoCloud deployment, based on cloud-init, and it only works with cloud-init-ready images.
  • The NTNX-AVM will be powered on and all configs will be applied.
  • In the background all tools/scripts will be installed


The CentOS cloud image and the seed.iso have been uploaded to the image service.


The NTNX-AVM has been created and started.


Using the Nutanix Automation VM aka NTNX-AVM for the first time

Connect via SSH to the NTNX-AVM IP, in my case the one configured in the recipe. First of all we need to make sure that all tools are fully installed, because this happens in the background after the first boot.

So let's check whether /var/log/cloud-init-output.log shows something like:


The NTNX-AVM is finally up, after … seconds

You should reconnect via SSH once all tools/scripts are installed to make sure all environment variables are set.

Everything is installed and we can use it.

Test the NTNX-AVM environment

Let's connect to the Nutanix cluster with "ncli" (the Nutanix command line) and show the cluster status.
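A quick sketch, with the cluster IP and credentials as placeholders:

  ncli -s <CLUSTER-IP> -u admin -p '<PRISM-PASSWORD>' cluster info
  ncli -s <CLUSTER-IP> -u admin -p '<PRISM-PASSWORD>' cluster status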


That's it. The NTNX-AVM is up and running.

Today I started to implement the ntnx_backup tool, which will be able to back up/restore an AHV VM to/from an external share (NFS, SMB, ...) and which will leverage jobber as the job scheduling engine.

Go Ubuntu


Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part3 – 3 node cluster creation

Now it's time to create a Nutanix cluster. But there are some default settings I would like to change before I create the cluster. This is not mandatory, but it will increase usability in the future. Just jump to the "create cluster" part if you want to skip this.

Changing the AHV hypervisor hostname (optional)

Use an SSH client like PuTTY or my favorite, mRemoteNG, to connect to the AHV (host) IP. Use the default password when connecting as the "root" user, which is "nutanix/4u". Use a text editor like vi/nano to edit the "/etc/hostname" file and change the entry to the hostname you would like to have.


The following table shows the hostnames I used in this setup.


Changing the AHV hypervisor timezone (optional)

By default the timezone of the AHV hypervisor is PDT (Pacific Daylight Time). From a support perspective it makes sense that all logging dates use PDT, so that it is easier to analyse different log files side by side. But I would like to have the time in my own timezone, which is Germany's. To change the timezone you need to link /etc/localtime to the correct timezone file. You can find the available files in "/usr/share/zoneinfo".

  • Make a backup of the actual /etc/localtime:  “mv  /etc/localtime /etc/localtime.bak”
  • Make a link to the wanted timezone file: “ln -s /usr/share/zoneinfo/Europe/Berlin /etc/localtime”


Changing the CVM name (optional)

This is a tricky part. I could not find a solution to change the CVM name. It seems there is no way to do this.

Changing the CVM timezone (optional)

@TimArenz reminded me that it may be easier and better to change the timezone after the cluster is created. This can be done via the Nutanix CLI (ncli).
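For example, to set my timezone (adjust the timezone string to your location):

  ncli cluster set-timezone timezone=Europe/Berlin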

Creating the 3-node cluster

There are two ways to install a multi-node Nutanix CE cluster: via the cluster init website or via the command line.

Cluster init web page

Connect to: http://CVMIP:2100/cluster_init.html

Enter the needed values and start the creation.


Cluster create via command line

We need to connect to one of the CVMs of this setup via SSH with the user "nutanix" and the password "nutanix/4u".

The creation is pretty simple and involves two steps: invoke the cluster create command and set the DNS server.

cluster -s CVM-IP1,CVM-IP2,CVM-IP3 create
ncli cluster add-to-name-servers servers="DNS-SERVER"



The first connect to PRISM

Open a browser and connect to one of the CVM IPs. Enter the user credentials: “admin/admin”

When logging in for the first time after the installation you will be asked to change the admin password.


The NEXT credentials which were used for the download need to be entered now. This means that Nutanix CE needs an internet connection to work. There is a grace period which should be around 30 days.


Prism will be shown now and it's ready to go.



Go Wireshark!

Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part2 – AHV installation

There are several great posts which show how to set up Nutanix CE in a home lab, for example from:

  • Tim Expert
  • Gareth Chapman
  • Mike Sheehy

I will focus on my own setup, based on the Intel NUC6i7KYK. The setup is pretty straightforward up to the point where the onboard network comes into play: the Intel driver which is included in Nutanix CE is not the right one for the Intel NUC6i7KYK onboard network.

Overview of the Nutanix CE install process

  1. Make sure your environment meets the minimum requirements. The table shows that a minimum of two disks is needed, at least one of them an SSD. That's the reason why I used 2x SanDisk X400 M.2 2280 in my environment. Remember that NVMe drives are not working at the moment.
  2. Download the Nutanix CE disk image, which will be copied to a USB flash drive. This will be the install and boot device for this environment. The USB drive should be at least 8 GB in size, but I recommend using a device as big as possible; 32 GB flash drives start at around 10€. The reason is simple: if your environment for any reason starts to write extensive logs or data to the flash drive, an 8 GB drive may end up worn out. And second: maybe the image becomes bigger in the future?
  3. Boot from the USB flash drive and start the installer with the right values (IP, DNS, ...). This step will install the Controller VM (CVM) to one of the SSD drives, where all the Nutanix "magic" resides. All local disks will be directly mapped to the CVM. This means the Acropolis Hypervisor (AHV), which is KVM-based, is not able to use the storage directly anymore.
  4. If chosen, a single-node cluster will be created. In my case, where I will build a three-node cluster, I will leave this option blank.

Step-by-Step Installation of Nutanix CE based on the Intel NUC6i7KYK

Download the Nutanix Community Edition. You need to register first!


Download the software by scrolling down to the latest build.


The image itself is packed as ".gz". I used 7-Zip to unpack the file. A file like ce-2016.04.19-stable.img will be extracted, which is ready to be copied to the USB flash drive.


Now attach the USB flash drive and download the tool called Rufus. This program can "raw"-copy an image like this one byte by byte to a USB flash drive. Choose the right USB flash drive, then switch to "DD Image" (dd means disk dump). The last step is to choose the img file and hit "Start".

ATTENTION !!!! Make sure to choose the right device!!!


The copy process takes a while!
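On Linux or macOS you can skip Rufus and write the image with dd. A sketch; double-check the device name first (lsblk on Linux, diskutil list on macOS), because writing to the wrong device destroys its data:

  gunzip ce-2016.04.19-stable.img.gz                       # if not already unpacked
  sudo dd if=ce-2016.04.19-stable.img of=/dev/sdX bs=4M    # /dev/sdX = your USB flash drive (use bs=4m on macOS)
  sync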

Now we need to install the Intel network drivers

Download the Intel e1000e driver module for Nutanix CE on the Intel NUC6i7KYK, because the bundled version does not include the right one. Unzip the file so you end up with a file called "e1000e.ko".

Now we need to copy the file "e1000e.ko", which is a kernel module, to the USB flash drive. But the filesystem used on the USB flash drive is ext4, which MS Windows is not able to edit by default, so we need a tool like EXT2FSD.

After the installation of EXT2FSD and a reboot, start the Ext2 Volume Manager. In my case I needed to choose a drive letter manually to be able to work with the USB drive: scroll down to the right device in the bottom window, select the drive and hit the "F4" key, which should assign an unused drive letter.


Copy the file "e1000e.ko" to the USB flash drive into the following directory, overwriting the existing file: "/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e/".
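On a Linux machine the same can be done without EXT2FSD. A sketch; the device/partition name is an assumption, pick the partition of the USB stick that contains /lib/modules:

  sudo mkdir -p /mnt/ce-usb
  sudo mount /dev/sdX1 /mnt/ce-usb
  sudo cp e1000e.ko /mnt/ce-usb/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
  sudo umount /mnt/ce-usb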


The USB flash drive is ready to boot on the Intel NUC6i7KYK!

Attach the USB flash drive to your Intel NUC6i7KYK and boot it. Feel free to change the boot order right now so that the Intel NUC6i7KYK will always boot from the USB flash drive.


Now the Intel NUC6i7KYK is ready to boot from the USB flash drive.


After the boot you should see the login screen.


Log in as user "root" with the password "nutanix/4u". Loading the Intel network driver works with the command "modprobe e1000e". Use "exit" to return to the login screen.


The user “install” starts the installation.


Choose your keyboard setting. In my case I used “de-nodeadkeys”.


The following screen shows a small form. This is an example of a single-node setup.


You may miss the configuration for a 3- or 4-node cluster. If you would like to set up a multi-node cluster, your setup could look like this. The cluster itself will be created later; here we just install the environment. (Acropolis Hypervisor = host, CVM = Nutanix Controller VM)


There are two IPs which need to be configured. The host IP is the IP of the hypervisor. In the case of Nutanix CE the Acropolis Hypervisor will be installed, which is based on the KVM hypervisor. There are a lot of changes compared to vanilla KVM, so it is not the same. The logic of all Nutanix functions is implemented in the Controller VM. This is the reason why the OS which is installed in this VM is called NOS (Nutanix OS). NOS is based on CentOS.


The installation takes a while. At the end you should see a login screen with a random hostname.

The next post will show the configuration of the cluster.

Go mRemoteNG








Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part1 – Hardware setup

As already announced in my recent post, I bought three Intel NUC NUC6i7KYK to set up my demo/showcase environment based on the Nutanix Community Edition, which is free to use. In the following weeks I will show how I set up the environment step by step, and I will document the live demos I would like to show at upcoming events. This will include the OpenStack and Docker integration.


It all starts with the hardware itself. The NUC Skull Canyon edition is pretty new, and a post in the Nutanix Community literally convinced me to build a lab with these boxes. I used the following hardware setup. Be aware that DDR4 memory and SSDs are not included when buying the Intel NUCs.


  • Intel NUC Skull Canyon NUC6i7KYK
  • Intel Core i7-6770HQ (Skylake-H, 4C/8T, 2.6 GHz with Turbo up to 3.5 GHz, 14nm, 6MB L2, 45W TDP)
  • 32GB (2x 16384MB) Crucial CT2K16G4SFD8213 DDR4-2133 SO-DIMM CL15 Dual Kit
  • 2x SanDisk X400 M.2 2280 512 GB SATA SSD (6Gb/s)
  • Intel Ethernet Connection I219-LM GbE adapter (e1000e.ko driver)

Noise1-3 – HP ML 110 G6 cluster for Nutanix Community Edition

The HP ML 110 G6s are pretty old. I bought these boxes around 2012, but with 10 GbE Brocade CNA adapters and some fine SSDs they are still nice boxes to run the Nutanix Community Edition, which is free to use.

BUT be aware: there is a reason why I called the boxes Noise1, Noise2 and Noise3.


This is the actual listing of the components which I installed.

  • HP ML 110 G6 (2011.08.26)
  • CPU: Intel Xeon X3430 @ 2.4 GHz
  • RAM: 16GB DDR3 @ 1333 MHz
  • Graphics: onboard MGA G200e
  • LSI SAS controller: SAS1064ET Fusion-MPT SAS
  • SSD Samsung: 750 EVO MZ-750250BW (/dev/sda)
  • SSD SanDisk: SDSSDP12 128 GB (/dev/sdb)
  • HDD 1: WDC WD10EZRX-00L 1TB (/dev/sdc)
  • HDD 2: WDC WD10EZRX-00L 1TB (/dev/sdd)
  • HDD 3: VB0250EAVER 250GB (/dev/sde)
  • NIC onboard: Broadcom NetXtreme BCM5723 (1 GbE)
  • Intel NIC: Intel 82541PI (1 GbE)
  • Brocade CNA 10 GbE (version 3.2.5)
  • SanDisk/Fusion-io ioDrive2 1.2 TB


Nutanix – Upload ISO/Image to AHV from a NFS share

In addition to the post from Josh Odgers, it seems it is not well known how to upload an ISO/image directly from an NFS share to the image service. To achieve this you can leverage the "From URL" field in the PRISM interface.

The format for anonymous NFS access is:
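A sketch of the URL shape; server, share, path and file name are placeholders:

  nfs://<NFS-SERVER-IP>/<share>/<path>/<image-file>.iso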


If a user and password are required:
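Typically the credentials are embedded in the URL in the usual user:password form. This is an assumption on my side, so verify it against your Prism version:

  nfs://<USER>:<PASSWORD>@<NFS-SERVER-IP>/<share>/<path>/<image-file>.iso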







8. SQL Server Performance Tuning study with HammerDB – Solve PAGEIOLATCH latch contention

In the last part I found that there is a new bottleneck. It seems this is related to the PAGEIOLATCH_SH and PAGEIOLATCH_EX wait events. The exact values depend on the time slot which is measured by the ShowIOBottlenecks script. The picture shows >70 percent wait time.


To track down latch contention wait events, Microsoft provides a decent whitepaper. I used the following script and ran it several times to get an idea which resources are blocked.
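The whitepaper's scripts are more elaborate; here is a minimal snapshot of the same idea, run via sqlcmd (server name and authentication are placeholders, and you can of course run the statement from SSMS instead):

  # Show currently waiting tasks whose wait type is a page (IO) latch,
  # including the <DatabaseID,FileID,PageID> resource description
  sqlcmd -S localhost -E -Q "
  SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms, wt.resource_description
  FROM   sys.dm_os_waiting_tasks AS wt
  WHERE  wt.wait_type LIKE 'PAGEIOLATCH%' OR wt.wait_type LIKE 'PAGELATCH%';"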


The resource_description column returned by this script provides the resource description in the format <DatabaseID,FileID,PageID>, where the name of the database associated with DatabaseID can be determined by passing the value of DatabaseID to the DB_NAME() function.

First let's find out which table this is. This can be done by inspecting the page and retrieving the Metadata ObjectId.


The metadata objectid is 373576369. Now it is easy to retrieve the related table name.
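For reference, the lookups can be scripted as well. The DatabaseID/FileID/PageID values below are placeholders taken from resource_description; the object id is the one from above:

  # 1) Which database does DatabaseID 5 belong to? (placeholder ID)
  sqlcmd -S localhost -E -Q "SELECT DB_NAME(5) AS database_name;"

  # 2) Inspect the page and read 'Metadata: ObjectId' (arguments: db-id, file-id, page-id, print-option)
  sqlcmd -S localhost -E -Q "DBCC PAGE (5, 1, 104, 3) WITH TABLERESULTS;"

  # 3) Resolve the object id to a table name inside that database
  sqlcmd -S localhost -E -Q "SELECT OBJECT_NAME(373576369, 5) AS table_name;"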


It is the “warehouse” table.

What is the bottleneck here?


First of all, here is an explanation of the two wait events:

PAGEIOLATCH_EX: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_SH: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.

In our case this means a lot of inserts/updates are done when running the TPC-C workload, and tasks wait on a latch for this page in shared or exclusive mode. From inspecting the page we know it is the warehouse table, and we created the database with 33 warehouses in the beginning.

The page size in SQL Server is 8K, and the 33 rows all fit into a single page (m_slotcnt = 33). This means some operations cannot be parallelized!

To solve this I will change the "physical" design of this table, which is still in line with the TPC-C rules. There may be different ways to achieve this. I add a column and insert some text, which forces SQL Server to restructure the pages, and then delete the column.
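A sketch of that idea via sqlcmd; the database name "tpcc", the column name and the padding size are assumptions, not the exact script I used:

  # Pad each warehouse row so that only one row fits per 8K page, then drop the helper column.
  # (The space is not reclaimed until the table/index is rebuilt, so the rows keep one page each.)
  sqlcmd -S localhost -E -d tpcc -Q "ALTER TABLE warehouse ADD w_padding CHAR(7000) NULL;"
  sqlcmd -S localhost -E -d tpcc -Q "UPDATE warehouse SET w_padding = REPLICATE('x', 7000);"
  sqlcmd -S localhost -E -d tpcc -Q "ALTER TABLE warehouse DROP COLUMN w_padding;"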


Okay, now check if the m_slotCnt is 1, which means every row sits in its own page.


It’s done.


When running the workload again, the PAGEIOLATCH_SH and PAGEIOLATCH_EX wait events are nearly gone.


Before the change:

  • System achieved 338989 SQL Server TPM at 73685 NOPM
  • System achieved 348164 SQL Server TPM at 75689 NOPM
  • System achieved 336965 SQL Server TPM at 73206 NOPM


After the change:

  • System achieved 386324 SQL Server TPM at 83941 NOPM
  • System achieved 370919 SQL Server TPM at 80620 NOPM
  • System achieved 366426 SQL Server TPM at 79726 NOPM

The throughput increased slightly. Again I monitored that the CPU is at 100% while running. At this point I could continue to tune the SQL statements as I did in the last 2-3 posts. Remember, I started the SQL Server performance tuning at 20820 TPM and 4530 NOPM; this means we are now more than 10x faster!

But the next step may be to add some hardware. This all runs on just 2 of the 4 available cores, as I wrote in the first part.

Go ChaosMonkey!