Nutanix REST API with Golang/Go Tutorial – Part 2 – Authentication

The first step when interacting with the Nutanix REST API from Go is to authenticate against the API. To do so you need to know that Nutanix uses user-based authentication with a role concept, which means a user can have different rights when connecting.

There are three roles a user can be assigned to:

  • Viewer: This role allows a user to view information only. It does not provide permission to perform any administrative tasks.
  • Cluster Admin: This role allows a user to view information and perform any administrative task (but not create or modify user accounts).
  • User Admin: This role allows the user to view information, perform any administrative task, and create or modify user accounts.

The first user in PRISM (the GUI) which is directly related to API authentication is “admin” with the default password “admin”, which has to be changed on the first login to PRISM. In my test environments I prefer to set the admin password to the well-known default “nutanix/4u”, but this is up to you. The admin user has all three roles assigned.

admin_roles

Using Google Chrome developer tools to learn how PRISM authentication works (optional)

At the moment the documentation of the REST API lacks easy examples and explanations of how the authentication should be done. I started to learn this by using the Google Chrome developer tools while connecting to Nutanix via the web GUI.

The first step is to open Google Chrome, open the developer tools and switch to the Network tab. Clear the log and start recording. Then type in the CVM or cluster IP/DNS name to open the PRISM GUI.

open_prism_gui

There are two important URLs the browser requests:

  1. https://192.168.178.130:9440/PrismGateway/services/rest/v1/users/session_info
    • It looks like the client checks whether we are already connected, which in this case fails: the response code is “401”, which means we are not authorized to access this URL at the moment.
  2. https://192.168.178.130:9440/PrismGateway/services/rest/v1/utils/pre_login_details
    • After the client understood that this session is not authorized, it asks for some details before the login takes place.
    • It is good to know that the response (returned without authentication!) contains a lot of details, which helps to identify which API version is supported, and more.

Let’s type in the username and password now and stop the recording once we are successfully connected to the GUI. You should find a POST to https://192.168.178.130:9440/PrismGateway/j_spring_security_check. There are three interesting parts:

  1. It posts the username and password as form data in the fields j_username and j_password
  2. It receives a Set-Cookie: … in the response header, which means a cookie can be used for all subsequent HTTP requests
  3. It receives the Location: https://192.168.178.130:9440/PrismGateway/nmb/loginsuccess header, which acts like a redirect in this case

j_spring_security_check

The client then requests https://192.168.178.130:9440/PrismGateway/nmb/loginsuccess via GET using the cookie, which seems to check that it works. A status code of “200” and a response body of “Success” mean that cookie authentication is working. This is followed by a request to https://192.168.178.130:9440/PrismGateway/services/rest/v1/users/session_info, which returns some user session info like the userDTO.

I believe the rest can be ignored for our task. We have learned the basic workflow for authentication:

  1. Request session_info to check if we are already authenticated
  2. Request pre_login_details, which may be used to react to different API versions
  3. Send the username and password with the request
  4. Receive a cookie and use it for subsequent HTTP requests
  5. Check if we are connected via loginsuccess

Nutanix REST API Authentication with GO

There are two methods to authenticate to the Nutanix REST API.

1. Basic Authentication:
The user provides the user ID and password with every request in the Authorization header.

2. Session Authentication:
The user authenticates once and receives a session cookie, which is then used for all subsequent requests.

GO Authenticate via “Basic Authentication”

I wrote an example which shows how basic authentication works. The general workflow is simple: EVERY HTTP request sends the username and password in an encoded fashion. The Nutanix REST API expects base64 encoding, which Go provides in the “encoding/base64” package.

I created the function EncodeCredentials, which encodes the username and password parameters as required.
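A minimal sketch of such a helper could look like this (the exact code from the original example may differ):

  // EncodeCredentials returns the base64 encoded "username:password" string
  // required for HTTP basic authentication against the Nutanix REST API.
  func EncodeCredentials(username, password string) string {
      return base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
  }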

The next two functions return the API entry points for v0.8 and v1 of the Nutanix API. They use the NutanixHost parameter to generate the full HTTPS request string. The new API versions v2.0 and v3.0 will be introduced soon.
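A hedged sketch of these two helpers (the function names are my assumption) could look like this:

  // Entry point for the v0.8 (MGMT) API.
  func NutanixAPIurl08(NutanixHost string) string {
      return "https://" + NutanixHost + ":9440/PrismGateway/api/nutanix/v0.8"
  }

  // Entry point for the v1 API.
  func NutanixAPIurl1(NutanixHost string) string {
      return "https://" + NutanixHost + ":9440/PrismGateway/services/rest/v1"
  }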

For testing purposes I set some variables in the main function to match my test environment. Feel free to change them to your settings.

The next part is only required if the Nutanix cluster uses a self-signed certificate. Basically, it ignores the fact that this certificate cannot be validated.

An HTTP client is created which leverages the Transport “tr” we just defined.
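A sketch of this part, assuming the variable names tr and client and the imports “crypto/tls” and “net/http”, could look like this:

  // Skip certificate validation for self-signed certificates (lab use only).
  tr := &http.Transport{
      TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
  }
  client := &http.Client{Transport: tr}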

Now we are able to create the GET request to session_info at https://192.168.178.130:9440/PrismGateway/services/rest/v1/users/session_info. This is in line with the PRISM GUI workflow.

Before we send the request we make sure the username and password are included in the request header. The key we need to set is “Authorization”, and its value is the string “Basic ” + encodedString.

The request can now be sent and we are able to handle the response. In this example I am checking for some response codes, but it is up to you to implement more.

The last part prints the response body to give you feedback and show the received information about the user.
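Putting these steps together, a hedged sketch of the request handling, reusing the variables and helpers from the fragments above plus the imports “fmt”, “io/ioutil”, “log” and “net/http”, could look like this:

  req, err := http.NewRequest("GET", NutanixAPIurl1(NutanixHost)+"/users/session_info", nil)
  if err != nil {
      log.Fatal(err)
  }
  // Basic authentication: base64 encoded "username:password" in the Authorization header.
  req.Header.Set("Authorization", "Basic "+EncodeCredentials(username, password))

  resp, err := client.Do(req)
  if err != nil {
      log.Fatal(err)
  }
  defer resp.Body.Close()

  switch resp.StatusCode {
  case 200:
      body, _ := ioutil.ReadAll(resp.Body)
      fmt.Println(string(body)) // session info of the authenticated user
  case 401:
      fmt.Println("Authentication failed: check username and password")
  default:
      fmt.Println("Unexpected status:", resp.Status)
  }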

That’s it. Just remember that you only need to make sure the request header includes the base64 encoded string.

GO Authenticate via “Session Authentication”

I wrote an example which shows how session authentication works. The general workflow is: send a “basic authentication” HTTP GET with base64 encoded credentials and receive a cookie, then use the cookie for all subsequent HTTP requests.

I will only focus on the parts which differ from the “basic authentication” example.

The way the HTTP client is created changes. First the cookie jar is created, and then the newly created HTTP client uses it.

There is a second request, but this time we don’t set the Authorization header. The reason is simple: the HTTP client makes sure the session cookie is sent automatically.
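A hedged sketch of this variant, reusing the transport tr and the helpers from the basic authentication example and additionally importing “net/http/cookiejar”, could look like this:

  // The cookie jar stores the session cookie returned by the first request.
  jar, err := cookiejar.New(nil)
  if err != nil {
      log.Fatal(err)
  }
  client := &http.Client{Transport: tr, Jar: jar}

  // First request: authenticate once with the Authorization header.
  req, _ := http.NewRequest("GET", NutanixAPIurl1(NutanixHost)+"/users/session_info", nil)
  req.Header.Set("Authorization", "Basic "+EncodeCredentials(username, password))
  resp, err := client.Do(req)
  if err != nil {
      log.Fatal(err)
  }
  resp.Body.Close()

  // Second request: no Authorization header, the client sends the session cookie automatically.
  req2, _ := http.NewRequest("GET", NutanixAPIurl1(NutanixHost)+"/vms/", nil)
  resp2, err := client.Do(req2)
  if err != nil {
      log.Fatal(err)
  }
  defer resp2.Body.Close()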

Done.

I found two blogs that helped me learn about this:

Dwayne Lessner @dlink7 

and

Jason Langone @langonej

 

Nutanix REST API with Golang/Go Tutorial – Part 1 – Nutanix REST API Overview

This blog series is dedicated to the Nutanix REST API and how to automate tasks with the Go language by leveraging the REST API.

I will focus on the typical tasks you would like to automate. Some examples:

  • Retrieve data like VMs and their configs, extract performance values, …
  • Create VM, VLAN, vdisk, projects….
  • Show typical tasks in the new self service portal (SSP)
  • Using different kind of Nutanix REST APIs (v0.8, v1, v2 , v3)
  • … a lot more

Part 1 – Nutanix REST API Overview

This part will show the basic REST API entry points and how to use them directly.

The REST API Explorer

The first step is to learn a little bit more about the Nutanix REST API. There is a REST API Explorer which shows and documents the two versions (v0.8 and v1) of the Nutanix REST API.

Hint: You may need to login to PRISM before you are able to browse/use the REST API Explorer!

The documentation entry points are:

https://CLUSTER-IP_or_CVM_IP:9440/console/nutanixapi/   -> v0.8 API, the so-called MGMT API

nutanixapi

https://CLUSTER-IP_or_CVM_IP:9440/console/api/   -> v1 API

api

The PRISM GUI itself uses the REST API. This means everything you can do in PRISM can be done via the REST API, and even more. I believe it makes no sense to explain every method of the REST API right now. Instead, I will show some basic examples in this tutorial and explain them by implementing use cases. But feel free to browse through the different methods/objects.

Your first API call using the REST API Explorer

Connect to https://CLUSTER-IP_or_CVM_IP:9440/console/api/

Click on /vms and you are able to see the standard HTTP methods like GET/PUT/POST/DELETE which are used to modify/create/get VMs on the Nutanix platform.

vms_all_methods

Retrieve a list of all VMs which are running on Nutanix!

In this case we use the GET method to retrieve a list of all VMs which are running on Nutanix. So click on “GET /vms/”

vms_get

The Implementation Notes say: “Get the list of VMs configured in the cluster”. This means we would get a list of ALL configured VMs if we call a “GET” using the right URL. Okay, you may ask: “What is the URL I need to send a GET to?”

URL entry points

  • For v0.8 the entry point is : https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/api/nutanix/v0.8
  • For v1 the entry point is : https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/services/rest/v1

So in this case we are using the v1 API and the URL is:

https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/services/rest/v1/vms/

Copying and pasting this into the browser will show something like this:

vms_response

Alternatively, you could use a tool like curl, but then you need to handle the authentication as well. I will talk about authentication in the second part of this tutorial.

 

But back to the REST API Explorer, because this can be done more easily. If you scroll down in the /vms GET section you will find a button called “Try it out!” which will do exactly the same for you. Click “Try it out!”

vms_tryout

You will see that the JSON response is much easier to view now and we get some nice details!

First, the “Request URL” field shows the same URL I already showed you.

Second, you are able to scroll through the “Response Body” and search for the key “vmName” and its corresponding values to find all VM names.

Third, the “Response Code” is displayed: “200”, which means everything worked great. 🙂

vms_tryout_output

But this lists all VMs which are configured, not only the ones which are running. We would like to change this. Let’s first search the response for a key that shows the actual state of the VM.

You will find a key called “powerState”. Let’s then try to filter the response and only retrieve the VMs which are “on”.

vms_tryout_output_power

Using Nutanix FilterCriteria

For this we are able to use the option “filterCriteria” in the REST API Explorer to find only the VMs which are powered on. Type “powerState==on” into the filterCriteria field and try it again.

vms_tryout_filtercriteria

vms_tryout_filtercriteria_fail

This request failed with response code “500” and the message “invalid filter criteria specified”. You may ask: “Why? It is stated exactly like in the response! And where the hell should I learn more about this?”

The answer: there is a KB article which sheds some light here: KB 000001962.

For all who are not able to access the KB, it says:

If you would like to learn more about the filters you can use in a query, use the arithmos_cli on the CVM to get more details.

arith

In this case connect to a CVM and type:

arith_vm_power

This shows that the attribute is called “power_state” instead of “powerState”. Let’s try it again with the filter criteria “power_state==on”.

vms_tryout_filtercriteria_done

Boom!… It works!
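If you would like to do the same from Go instead of the REST API Explorer, a hedged sketch reusing the (assumed) basic-authentication helpers from the authentication part of this series, plus the “net/url” import for query escaping, could look like this:

  // GET /vms/ with a filterCriteria query parameter to list only powered-on VMs.
  u := NutanixAPIurl1(NutanixHost) + "/vms/?filterCriteria=" + url.QueryEscape("power_state==on")
  req, _ := http.NewRequest("GET", u, nil)
  req.Header.Set("Authorization", "Basic "+EncodeCredentials(username, password))
  resp, err := client.Do(req)
  if err != nil {
      log.Fatal(err)
  }
  defer resp.Body.Close()
  body, _ := ioutil.ReadAll(resp.Body)
  fmt.Println(string(body)) // JSON list of all powered-on VMs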

This completes part 1 of this tutorial! It will get an update soon with the new API coming with the Asterix release (v2 and v3)!

If you would like to learn more about the REST API there are some resources you may have a look into:

Acropolis API Reference

Nutanix PrismAPI for Devs

Acropolis Python Tutorial

Andre Leibovici

Unleash the power of the NTNX-AVM – Move VM from one container to another container (AHV) with move_vm

This post is related to AHV only!!! Make sure a recent backup of the VM exists!

In the last weeks customers asked me how to move a VM from one AHV container to another AHV container on the same cluster. The answer is: “There is no PRISM/GUI option for this and the manual task is pretty difficult”. So I wrote a tool called move_vm, which I show in this post, to simplify this.

But why should you move a VM?

There are several reasons for this.

  • container settings don’t fit
  • different containers for different organization units
  • DR/backup concepts based on containers
  • Automation based on containers
  • … and more

Example: Let’s say the customer started with two containers:

  1. ISO – just for templates and CD/DVD ISO images
  2. prod – production environment

Now he realizes that some of the server VMs would be good candidates for compression but some are not. He used the best practices to figure out which server VMs would fit.

compression_bestpractices

For this case he created a new container called:

3. prod_comp – with inline compression enabled, for SQL Server 2012 R2 VMs

Now he tried to move the VMs from container prod to prod_comp, found that this does not work via PRISM, and asked me.

USE CASE: Move a VM from one AHV container to another AHV container to make use of inline compression for several VMs!

I created a new container called prod_comp with inline compression in my Intel NUC environment.

container_prod_comp

Then I created a VM called “Move_VM_Test1” with different kinds of disk devices and CD/DVD drives, all residing on the prod container.

move_vm_test1

Now I am using the tool move_vm, which is installed on the NTNX-AVM, to move the VM. The tool has several options, which I list here.

Move a whole VM from container prod to prod_comp

In the first case the syntax to move the whole VM from prod to prod_comp is pretty simple. REMEMBER: the source VM needs to be powered off before you are able to move it.

move_full_vm

overview_vms_double

You may have noticed that there are now two VMs with the same name. I believe it makes sense to keep the old VM until you are sure the new copy works.

You can use the option “--delete” to delete the source VM. The advantage is that the new network adapters will have the same MAC addresses as the source VM!

 

Move just one vDisk/disk of the VM from container prod_comp back to prod

I renamed the new Move_VM_Test1 to Move_VM_Test2 for the next part.

Let’s say you would like to move just one vDisk from the container prod_comp back to prod because you found that inline compression makes no sense for it. An example may be the “transaction log” vdisk of an MS SQL Server.

First we need to find out which vdisks exist and what the mapping looks like. This can be done with the “--list_mapping” option.

The output is:

This means that there are four vdisks and two CD/DVD drives. Let’s say we identified that the second vdisk, “scsi.1”, is the one we would like to move back to container prod. In this case we need to specify the whole mapping when calling the move_vm tool to move only the second vdisk. Copy and paste is the way to go!

The syntax is:

I only changed prod_comp to prod for the scsi.1 vdisk!

move_vdisk

You may ask: “Why is the whole VM cloned and not only the vdisk?” Yep, you are right, that would be the better way, but this is how the tool works at the moment. It is more a copy than a move, but it works and it is pretty fast because only the vdisk “scsi.1” needs to be copied.

You could specify the option “--delete” to delete the source VM and to make sure the new network adapters get the same MAC addresses.

moved_vdisk

There we go.

For all the people who want to know more, this is an overview of how the tool works:

  1. Upload the vDisks of the source VM to the image service. This is needed because a direct copy is not possible
  2. Create new VM
  3. Clone Images to Disks
  4. Add network
  5. Delete images

Intel NUC NUC6i7KYK – Upgrade of Nutanix Community Edition (CE)

Today I upgraded the Intel NUC cluster based on Nutanix CE. This was pretty easy.

Choose “gear wheel” in the top right corner and select “Upgrade Software”

upgrade_ce

As you can see, the new Nutanix CE version was released a few days ago. So choose “Upgrade” -> “Upgrade now”.

upgrade_ce2

The upgrade process starts. We just need to wait now… Or click on “Nothing to do?”

upgrade_ce3

I played the game “2048” but I was not really good this time.

upgrade_ce4

So I checked the upgrade process.

upgrade_ce5

Not ready yet… so I wrote some emails in the meantime.

Around 20 minutes after I clicked upgrade:

upgrade_ce6

It’s done…..

This is why Nutanix is called the 1-click upgrade solution. (Yes, I clicked around 4-5 times overall.)

Go Audacity!

 

Unleash the power of the NTNX-AVM – Import/Export/Migrate VMs to/from AHV

In a recent customer Proof of Concept (PoC) I encountered a common task: importing a VM from VMware vSphere to the Nutanix Acropolis Hypervisor (AHV) and, after testing, exporting the VM back to the production environment. The import via the AHV PRISM GUI can be done without too much effort, but there is no interface or command line tool which exports a VM. I wrote a script for the export which can be run in a Linux/Mac environment or on the NTNX-AVM. Feel free to jump directly to the export part and skip the import.

USE CASE: Import a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and export it back after testing!

Importing/Migrate a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV)

There is a decent post from Artur Krzywdzinski on how to import/migrate a Windows 2012 R2 VM from VMware vSphere/ESXi to AHV. There is also detailed documentation on the Nutanix support portal. Search for “MIGRATING WINDOWS VMS FROM ESXI TO AHV”.

The way this works can be described as follows:

  • Install all needed drivers into the VM before migrating. (drivers for disk devices, video, network)
  • Copy the VM to a Nutanix NFS share which is mounted on the source ESXi/vSphere via Storage vMotion if available, else copy it with command line/GUI yourself.
  • Convert the VMware vmdk to a format AHV can read/write
  • Create a VM based on the converted vmdk files with the same settings like in ESXi/vSphere
  • …. something else maybe… Start the VM… done

migrate_portal

1. Windows VM Migration Prerequisites

I advise you to read the full documentation if you migrate VMs. I only list the basic steps needed in my test lab.

  • Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.
  • Optional: Clone any ESXi VMs you want to preserve.
  • Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later.
  • Mount the AHV container as an NFS datastore to vSphere.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

This is a Windows Server 2012 R2 VM with VMware Tools installed.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter with no snapshots.

vm_nosnapshots

Optional: Clone any ESXi VMs you want to preserve.

Yes, I would like to do this. The VM which I will migrate will be the clone, not the original VM. Just in case we mess something up at any point, it is nice to still have the original one.

In this case I clone the VM directly to the NFS datastore mounted from the Nutanix cluster. So jump to the “Mount the AHV container” part and continue here when you have finished that step.

I clone the Windows2012R2 server to the prod datastore which resides on the Nutanix cluster.

clone_prod

Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later. See the Hypervisor Details page on the Nutanix Support Portal for all the AHV versions

I am using the Nutanix CE edition, so it is not that easy to make sure these requirements are met. Let’s start with AOS 4.5.x or later.

I connected today (04.10.2016) to my Nutanix NUC cluster and chose the option “Upgrade Software” in the top right corner in PRISM (gear wheel).

upgrade_ahv

As you can see, “2016.06.30” is installed, but is this equal to or later than AOS 4.5.x? Yep, it is. The version format seems to be YYYY.MM.DD, so this should be okay. There is a table called “AHV AND NOS/AOS VERSION UPGRADE COMPATIBILITY” on the Nutanix portal which makes it easier to understand: the newer the NOS/AOS, the newer the required AHV version. In this case I believe the corresponding NOS/AOS version should be 4.6.3.

ahv_nos_upgrade

Now to the AHV-20160217.2 hypervisor part. Nutanix CE shows Nutanix 20150513, which does not meet this requirement. But anyway, it works.

BTW: In this case there is an update available, but I will upgrade after I finish this post.

Mount the AHV container as an NFS datastore to vSphere

First we need to make sure that the source ESXi/vSphere environment is able to mount the NFS container/datastore. To do this we need to whitelist the ESXi/vSphere environment on Nutanix. In my case the ESXi/vSphere environment (192.168.178.80 vCenter / 192.168.178.81 ESXi) is in the same subnet as the Nutanix CE edition (192.168.178.130). Make sure that in your environment the ESXi host and the CVM are able to reach each other via IP and no firewalls are blocking traffic!

Choose the “gear wheel” in the top right corner in PRISM, select “Filesystem Whitelists” and enter the IP range which should be able to mount the container/datastore. In my case I whitelisted the whole subnet 192.168.178.0/24.

filesystem_whitelist

Now we are able to mount the NFS datastore. I would like to mount the Nutanix container “prod” into vSphere.

container_prod

In the vSphere client choose to Add NFS datastore and insert the needed values like I did. I used the Nutanix cluster IP as the Server address and “/prod” as the folder.

add_nfs_prod

There we go. A datastore called “prod” is available on the source ESXi environment.

prod_mounted

2. Install Nutanix VirtIO

In my case I created a clone of the source VM, so I install the drivers only into the clone. If you skipped the cloning, install them into the source VM.

Download the Nutanix VirtIO drivers from the Nutanix portal. I prefer the ISO image because it is easier to mount via the vSphere Web Client than to copy something into the source VM.

download_virtio

Mount the ISO and install the drivers.

install_virtio

Set the Microsoft Windows SAN policy to online. This makes sure all disk devices will be mounted after migration.

san_policy

3. Migrate the VM disks to Acropolis Distributed Storage Fabric (DSF)

To migrate a VM to the Acropolis Distributed Storage Fabric you only need to Storage vMotion the VM to the mounted NFS datastore. I already copied the VM to the container/datastore “/prod” when cloning the source VM. If you didn’t do this, you need to move all data of the VM to the prod container/datastore via Storage vMotion.

In the vSphere Web Client choose to “Migrate” the VM and use the “datastore only” option, so all VM disks are moved to the container/datastore “prod”.

svmotion_prod

4. Convert the VM disks to AHV by importing them

To import a VMware vmdk to Nutanix AHV you need to use the Image Service/Image configuration. So click on “gear wheel” in the top right corner and select the “Image Configuration” (maybe the name changed already).

Choose “Upload” and enter the following:

import_disk

Attention!!! Make sure you use the “-flat” vmdk file!!!

nfs://127.0.0.1 will always work!

As you can see, it may not be easy to know the exact filenames. Use the vSphere Web Client datastore browser to get all the needed details.

vm_file_details

5. Create a new Windows VM on AHV and attach the Windows disk to the VM

Power Off the source VM now.

Create a VM in the Nutanix PRISM GUI with the same settings as the source VM.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter in my case.

create_vm1

Add a disk and choose to clone it from the image service where the disk has already been imported.

create_vm2

Add a network adapter and connect it.

create_vm3

Now you can start the VM and it should run. You need to configure the network and maybe an option here and there, but basically the VM is imported.

windows_started

Exporting a VM from Nutanix Acropolis Hypervisor (AHV) to VMware vSphere

As you may have noticed, the import works without the NTNX-AVM. But for the export there is no PRISM/NCLI/ACLI command. So I wrote a script which helps with this part, based on the great post from Artur Krzywdzinski.

The script will export the AHV VM to a container of your choice where you can copy it via NFS to ESXi/vSphere or to somewhere else.

In my case I would like to export the WinClone VM back to ESXi, so all tools like the VMware Tools are still installed. I will export the WinClone VM to the ISO container, just to make sure there is no confusion with the “prod” container where the import took place.

Connect to the NTNX-AVM via SSH. Now let’s export the VM with the export_vm command.

STEP 1 – using export_vm to export the VM to a container

Make sure the source VM for the export is powered off!

export_winclone

The export takes some time because all the VM disk data needs to be converted into the VMware vmdk format.

Mount the ISO container to the target ESXi/vSphere environment.

mount_iso

Step 2 – Register the VM into the ESXi/vSphere

Now I would like to create a new VM on ESXi/vSphere based on the exported files. I will just register the .vmx file! The vmdk file is not yet a proper ESXi/vSphere file, so it needs to be converted. I am using migrate (move VM), which will do the same conversion but lets me avoid the command line. This KB article should help if you want to do it the manual way.

Browse the ISO datastore and register the .vmx file as Win-Clone-2. The “-2” is only needed if the original clone still exists.

registervm

registerdetails

Step 3 – Prepare the VM for the first boot

Upgrade the virtual Hardware.

upgradevm

Set the VM OS to Windows.

vm_os

Change the SCSI Controller from “Bus logic” to “LSI LOGIC SAS”

vm_scsi_controller

Add a network device, or all needed ones.

vm_network

Now the nice part begins. Instead of manually converting all VM disks, we just migrate the VM to another datastore, in the best case directly to where the VM should reside.

vm_migrate

vm_migrate2

Step 4 – Last step! Power on the VM.

vm_poweredon

Go 7-zip

Unleash the power of the NTNX-AVM – daily_health_report and monthly_ncc_health

I asked myself how an admin could use the NTNX-AVM. So I decided to show and provide some real-world examples of how this powerful automation VM can be used.

USE CASE: A daily health report should run on the Nutanix cluster and be sent to a specified email address!

Let’s start with the script itself. There is no script provided by Nutanix except the Nutanix Cluster Check (ncc). It does a decent job, but because of the hundreds of tests and the amount of output it may not be the easiest to start with. So, based on the script provided by BMetcalf in the Nutanix Community, I developed a script called “daily_health_report.sh” for the NTNX-AVM. It is automatically installed with the NTNX-AVM starting today.

It runs the following command remotely on a CVM, which gives you a good overview of the current cluster status.

Okay, we have a script, but how do we run it once a day? For this I introduced jobber on the NTNX-AVM.

Learn jobber the fast way

Connect via SSH to the NTNX-AVM and run:

jobber_list

In this case no job is known. I prepared an example which runs the script daily_health_report.sh every day at 04:00.

The easiest way to create this job is to copy the example from the source folder to a file called “.jobber” in the Nutanix home directory.

The last step is just to reload the jobs defined in “.jobber”.

Review the jobber list.

jobber_finished

How does the daily_health_report.sh work?

First of all, this script will not run in your environment as-is because all parameters for daily_health_report.sh are set up for my lab environment. Okay, let’s make sure it will run in yours.

STEP1 – Enable SSH access from NTNX-AVM to the cluster CVMs

The script makes use of ssh/scp to run the commands remotely on one of the CVMs. To run a script non-interactively we need to enable password-less authentication between the NTNX-AVM and the CVMs. I wrote a script which enables password-less authentication.

This script creates a key pair and deploys the keys to the CVMs. When you run it, you need to specify the cluster IP/name and the PRISM admin password.

A test ssh connection should work now without requesting a password.

STEP2 – Edit the jobber file

Use an editor of your choice, like “vi”, edit the line which starts with “cmd : daily_h…” and adjust the parameters to your needs.

DO NOT use the cluster IP for host. Use one CVM IP.

  • --host=<YOUR-CVM-IP>
  • --recipient=<RECIPIENT-EMAIL>
  • --provider=other                    // choose "other" to send email to a local email server
  • --emailuser=<EMAIL-USER>            // email user used to authenticate via SMTP (sender)
  • --emailpass=<EMAIL-PASSWORD>        // email password used to authenticate via SMTP
  • --server=<EMAIL-SERVER-IP>          // email server IP
  • --port=<SMTP-PORT>                  // email server SMTP port

Reload the jobber file.

STEP3 – Test the job

The output should look like this:

edit_test_jobber

Check the email account to verify that the email was sent and received.

show_daily_email

There it is. Sorry for the German Thunderbird version, but you should get the idea of how the email looks: an email with one attachment called “daily_health_report-<DATE>.txt”.

USE CASE: Run a monthly “ncc health_checks run_all” and send the output to a specified email address!

Some Nutanix people would say: “Why don’t you use ncc for this instead?” Good point. This post shows how to run ncc every x hours and send an email. But how do you run ncc once a month and get all ERROR/FAIL messages in the email body?

For this case I created the ncc_health_report.sh script, which runs “ncc health_checks run_all” and sends an email.

STEP1 – Extend the “.jobber” file to add this job

The example which can be found on the NTNX-AVM in  “~/work/src/github.com/Tfindelkind/automation/NTNX-AVM/jobber/example/monthly_ncc_health” defines a job which runs on the 1st of each month and calls the ncc_health_report.sh

Edit the ~/.jobber file and add the job text to the end of the file, BUT skip the first line “---”. The file should look like this:

both_jobs

Don’t forget to edit the parameters like in STEP2

Reload the jobber file.

STEP3 – Test the job

WARNING!!!! This may run for a while…

The output should look like this:

ncc_report_test

And an email should be in your inbox:

ncc_report_test_email

I know the format of the body is weird because all newlines have been removed. I hope to fix this in the near future.

BTW: I used the hMailServer in my lab environment. This was really the easiest mail server setup I have ever done.

Go hMailServer

 

Nutanix automation VM ( NTNX-AVM ) goes online

Since I started at Nutanix I have thought about a way to write and run scripts/tools around the Nutanix ecosystem. But there are different languages used by the community: Perl, Python, Go, PowerShell, etc. So I asked myself: “Where the heck should I install the runtimes and the scripts/tools? The CVM is a bad place for this.”

The answer took me a while but here we go:

Nutanix automation VM called NTNX-AVM

There is no single image which fits all needs. Instead, the NTNX-AVM is based on recipes which define the runtimes/scripts/tools that will be installed. The foundation of these are the cloud images which are designed to run on cloud solutions like AWS/Azure/OpenStack. These images provide good security from scratch. Another advantage is that the images come pre-installed, which means there is no way to install them other than “importing” a vendor-controlled image. This is good for maintaining the whole project.

NTNX-AVM v1, when deployed, provides golang, git, govc, java, ncli (CE edition), vSphere CLI and the automation scripts from https://github.com/Tfindelkind/automation preinstalled. So, for example, you can move a VM from container A to container B with the move_vm binary, which leverages the Nutanix REST API; this is not possible natively in AHV.

I introduced a job scheduler called jobber (https://github.com/dshearer/jobber) to automate tasks/jobs. The advantages are that you are able to review the history of already executed jobs and you have more control when something goes wrong.

Use cases for the NTNX-AVM

  • Back up Nutanix VMs to an NFS store like Synology/QNAP/Linux…
  • Move a VM from one container to another one
  • Do daily tasks like generating reports of specific performance counters you would like to monitor which are not covered by Prism
  • Anything which talks to the Nutanix REST API and needs to be scheduled
  • … there will be more

Installation of NTNX-AVM on Acropolis Hypervisor (AHV)

For an easy deployment and usage I created a simple bash script which will do all the hard work.

The deployment for VMware and Hyper-V will follow. At the moment the process is more manual. I will post a “HOW-TO install”.

What you need is a Nutanix cluster based on AHV (>= 4.7) and a client where you are able to run the bash script. Ubuntu, Debian, Red Hat, CentOS and Mac OS should work fine as a client. The Community Edition (CE) is the base of my development environment and is fully supported.

This is how the environment looks before the deployment: my three-node cluster based on Intel NUCs.

cluster_before_NTNX-AVM

Image_service_before_NTNX-AVM

Step-by-Step deployment of NTNX-AVM with Deploy Cloud Image script (DCI)

We start on your client system, in my case a MacBook Pro. Download the latest stable release of DCI from https://github.com/Tfindelkind/DCI/releases. In my case the version v1.0-stable is the latest build available. The “Source code (tar.gz)” will work for me.

Release v1.0-stable · Tfindelkind-DCI Google Chrome, Heute at 11.40.54

Change to the Download folder and unpack/untar the file:

Downloads — bash — 92×28 Terminal, Heute at 11.44.46

You can see there are several recipes available but let’s focus just on NTNX-AVM v1 based on CentOS7.

NTNX-AVM recipe config file

IMPORTANT: The NTNX-AVM needs an internet connection when deployed, because all tools need to be downloaded.

Now we need to edit the recipe config file of the NTNX-AVM to make sure that the IP, DNS, etc. are set up the way we need them. Use a text editor of your choice to edit the “/recipes/NTNX-AVM/v1/CentOS7/config” file.

You should edit the following settings to your needs:

  • VM-NAME          This is the name of the VM guest OS.
  • VM-IP                  The fixed IP
  • VM-NET              The network of VM
  • VM-MASK           The netmask of the network
  • VM-BC                 The broadcast address of the network
  • VM-GW                The gateway
  • VM-NS                 The nameserver
  • VM-USER             The username for the NTNX-AVM which will be created
  • VM-PASSWORD  The password for this user -> Support for access keys will be added soon.
  •                               You need to escape some special characters like “/” with a “\” (Backslash)
  • VCENTER_IP         IP of the vcenter when used.
  • VCENTER_USER   User of the vcenter
  • VCENTER_PASSWORD

This is an example file for my environment:

CentOS7 — bash — 92×28 Terminal, Heute at 11.47.18

NTNX-AVM with DHCP enabled

If you don’t want to specify a fixed IP, DNS, etc., you can roll out the NTNX-AVM with DHCP. To do this, edit the “/recipes/NTNX-AVM/v1/CentOS7/meta-data.template” file and remove the network part so the file looks like this one. The “ifdown eth0” and “ifup eth0” entries are related to a bug in the CentOS 7 cloud image.

Deploy the NTNX-AVM to the Nutanix cluster

Now we are ready to deploy the VM to the Nutanix cluster with the dci.sh script.

We need to specify a few options to run it:

  • --recipe=NTNX-AVM            Use the pre-built NTNX-AVM recipe
  • --rv=v1                      It’s the first version, so we use v1
  • --ros=CentOS7                In this case we use the CentOS7 image and not Ubuntu
  • --host=192.168.178.130       This is the cluster IP of Nutanix; a CVM IP will work too
  • --username/--password        Prism user and password
  • --vm-name                    The name of the Nutanix VM object
  • --container=prod             In my case I used the container “prod” (production)
  • --vlan=VLAN0                 The Nutanix network where the VM will be connected to

The dci.sh script will do the following:

  • First it downloads the CentOS cloud image. Then it downloads the deploy_cloud_vm binary.
  • It reads the recipe config file and generates a cloud seed CD/DVD image. This means all configuration like IP, DNS, etc. is saved into this CD/DVD image, called “seed.iso”.
  • DCI will upload the CentOS image and seed.iso to the AHV image service.
  • The NTNX-AVM VM will be created based on the CentOS image and the seed.iso will be connected to the CD-ROM. At the first boot all settings will be applied. This is called the NoCloud deployment based on cloud-init. This will only work with cloud-init ready images.
  • The NTNX-AVM will be powered on and all configs will be applied.
  • In the background all tools/scripts will be installed

DCI-1.0-stable — bash — 92×28 Terminal, Heute at 12.05.51

The CentOS cloud image and the seed.iso have been uploaded to the image service.

Nutanix Web Console Google Chrome, Heute at 12.06.17

The NTNX-AVM has been created and started.

Nutanix Web Console Google Chrome, Heute at 12.06.45

Using the Nutanix Automation VM aka NTNX-AVM the first time

Connect via SSH to the NTNX-AVM IP, 192.168.178.200 in my case. First of all we need to make sure that all tools are fully installed, because this happens in the background after the first boot.

So let’s check whether /var/log/cloud-init-output.log shows something like:

DCI-1.0-stable — nutanix@NTNX-AVM.~ — ssh — 105×28 Terminal, Heute at 07.45.06

The NTNX-AVM is finally up, after … seconds

You should reconnect via ssh once all tools/scripts are installed to make sure all environment variables will be set.

Everything is installed and we can use it.

Test the NTNX-AVM environment

Let’s connect to the Nutanix cluster with the “ncli” (nutanix command line) and show the cluster status.

DCI-1.0-stable — nutanix@NTNX-AVM.~ — ssh — 105×28 Terminal, Heute at 07.54.57

That’s it. The NTNX-AVM is up and running.

Today I started to implement the ntnx_backup tool, which will be able to back up/restore an AHV VM to/from an external share (NFS, SMB, …) and will leverage jobber as the job scheduling engine.

Go Ubuntu

 

Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part3 – 3 node cluster creation

Now it’s time to create a Nutanix cluster. But there are some default settings I would like to change before I create the cluster. This is not mandatory, but it will increase usability in the future. Just jump to the cluster creation part if you want to skip this.

Changing the AHV hypervisor hostname (optional)

Use an SSH client like PuTTY or my favorite, mRemoteNG, to connect to the AHV (host) IP. Connect as the “root” user with the default password, which is “nutanix/4u”. Use a text editor like vi/nano to edit the “/etc/hostname” file and change the entry to the hostname you would like to have.

change_AHV_hostname

The following table shows the hostnames I used in this setup.

DNS-Name         Type    IP
NTNX-NUC1        AHV     192.168.178.121
NTNX-NUC2        AHV     192.168.178.122
NTNX-NUC3        AHV     192.168.178.123
NTNX-NUC1-CVM    CVM     192.168.178.131
NTNX-NUC2-CVM    CVM     192.168.178.132
NTNX-NUC3-CVM    CVM     192.168.178.133

Changing the AHV hypervisor timezone (optional)

By default the timezone of the AHV hypervisor is PDT (Pacific Daylight Time). From a support perspective it makes sense that all log timestamps use PDT, so that it is easier to analyse different log files side by side. But I would like to have the time in my own timezone, which is Germany’s. To change the timezone you need to link the correct timezone file to /etc/localtime. You can find the needed files in “/usr/share/zoneinfo”.

  • Make a backup of the actual /etc/localtime:  “mv  /etc/localtime /etc/localtime.bak”
  • Make a link to the wanted timezone file: “ln -s /usr/share/zoneinfo/Europe/Berlin /etc/localtime”

change_time_zone

Changing the CVM name (optional)

This is a tricky part. I could not find a solution to change the CVM name. It seems there is no way to do this.

Changing the CVM timezone (optional)

@TimArenz reminded me that it may be easier and better to change the timezone after the cluster is created. This can be done via the Nutanix CLI (ncli).

Creating the 3 node cluster

There are two ways to create a multi-node Nutanix CE cluster: via the cluster init web page or via the command line.

Cluster init web page

Connect to: http://CVMIP:2100/cluster_init.html

Enter the needed values and start the creation.

cluster_config

Cluster create via command line

We need to connect to one of the CVMs of this setup via SSH with the user “nutanix” and password “nutanix/4u”.

The creation is pretty simple and involves two steps: invoke the cluster create command and set the DNS server.

cluster -s CVM-IP1,CVM-IP2,CVM-IP3 create
ncli cluster add-to-name-servers servers="DNS-SERVER"

cluster_create_begin

create_cluster_end

The first connect to PRISM

Open a browser and connect to one of the CVM IPs. Enter the user credentials: “admin/admin”

When logging in for the first time after the installation, you will be asked to change the admin password.

change_password

The NEXT credentials which were used for the download need to be entered now. This means that the Nutanix CE edition needs an internet connection to work. There is a grace period which should be around 30 days.

enter_next_account

PRISM will be shown now and it’s ready to go.

installation_done

 

Go Wireshark!

Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part2 – AHV installation

There are several great posts which show how to setup Nutanix CE in a HomeLab.

Tim Expert

TinkerTry

Gareth Chapman

XenAppBlog

Mike Sheehy

I will focus on my own setup, based on the Intel NUC6i7KYK. The setup is pretty straightforward up to the point where the onboard network comes into play. The Intel driver included in Nutanix CE is not the right one for the Intel NUC6i7KYK onboard network.

Overview of the Nutanix CE install process

  1. Make sure your environment meets the minimum requirements. The table shows that a minimum of two disks is needed, at least one of them an SSD. That’s the reason why I used 2x SanDisk X400 M.2 2280 in my environment. Remember that NVMe drives are not working at the moment. minimum_requirements
  2. Download the Nutanix CE disk image, which will be copied to a USB flash drive. This will be the install and boot device for this environment. The USB drive should be at least 8 GB in size, but I recommend using a device as big as possible; 32 GB flash drives start at around 10€. The reason is simple: if your environment for any reason starts to write extensive logs or data to the flash drive, an 8 GB drive may wear out. And maybe the image becomes bigger in the future?
  3. Boot from the USB flash drive and start the installer with the right values (IP, DNS, …). This step installs the Controller VM (CVM), where all the Nutanix “magic” resides, to one of the SSD drives. All local disks will be directly mapped to the CVM. This means the Acropolis Hypervisor (AHV), which is KVM based, is not able to use the storage directly anymore.
  4. If chosen, a single-node cluster will be created. In my case, where I will build a three-node cluster, I will leave this option blank.

Step-by-Step Installation of Nutanix CE based on the Intel NUC6i7KYK

Download the Nutanix Community Edition. You need to register first!

NutanixCE_register_download

Download the software by scrolling down to the latest build.

nutanixCE_downloadLatestbuild

The image itself is packed as “.gz”. I used the tool 7-Zip to unpack the file. A file like ce-2016.04.19-stable.img will be extracted, which is ready to be copied to the USB flash drive.

7zipimage7zipIMG

Now attach the USB flash drive and download the tool called Rufus. This program can “raw” copy an img file like this one byte by byte to a USB flash drive. Choose the right USB flash drive, then switch to “DD Image” (dd means disk dump). The last step is to choose the img file and click “Start”.

ATTENTION !!!! Make sure to choose the right device!!!

rufuschoose_rufus_img

The copy process takes a while!

Now we need to install the Intel network drivers

Download the Intel e1000e driver for Nutanix CE on the Intel NUC6i7KYK, because the included version does not provide the right one. Unzip the file so you have a file called “e1000e.ko”.

Now we need to copy the file “e1000e.ko”, which is a kernel module, to the USB flash drive. But the filesystem used on the USB flash drive is ext4, which MS Windows is not able to write by default. So we need a tool like Ext2Fsd to do so.

After the installation of Ext2Fsd and a reboot, start the Ext2 Volume Manager. In my case I needed to choose a drive letter manually to be able to work with the USB drive: scroll down to the right device in the bottom window, select the drive and hit the “F4” key, which should assign an unused drive letter.

EXT2Volumemanager

Copy the file “e1000e.ko” to the following directory on the USB flash drive and overwrite the existing file: “/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e/”

copy_e10000e

The USB flash drive is ready to boot on the Intel NUC6i7KYK!

Attach the USB flash drive to your Intel NUC6i7KYK and boot it. Feel free to change the boot order right now so that the Intel NUC6i7KYK will always boot from the USB flash drive.

IMG_20160626_090725 IMG_20160626_090415

Now the Intel NUC6i7KYK is ready to boot from the USB flash drive.

IMG_20160626_090853

After the boot you should see the login screen.

IMG_20160626_090912

Log in as user “root” with the password “nutanix/4u”. Load the Intel network driver with the command “modprobe e1000e”. Use “exit” to return to the login screen.

IMG_20160626_091010

Logging in as the user “install” starts the installation.

IMG_20160626_091034

Choose your keyboard setting. In my case I used “de-nodeadkeys”.

IMG_20160626_091104

The following screen shows a small form. This is an examples for a single node setup.

IMG_20160626_091301

You may miss the configuration for a 3- or 4-node cluster. If you would like to set up a multi-node cluster, your setup could look like this. This means that the cluster itself will be created later and we just install the environment. (Acropolis Hypervisor = host, CVM = Nutanix Controller VM)

IMG_20160626_091320

There are two IPs which need to be configured. The host IP is the IP of the hypervisor; in the case of Nutanix CE the Acropolis Hypervisor will be installed, which is based on the KVM hypervisor. There are a lot of changes compared to vanilla KVM, so it is not the same. The logic of all Nutanix functions is implemented in the Controller VM. This is the reason why the OS which is installed in this VM is called NOS (Nutanix OS). NOS is based on CentOS.

IMG_20160626_091352

The installation takes a while. In the end you should see a login screen with a random hostname.

The next post will show the configuration of the cluster.

Go mRemoteNG

Intel NUC NUC6i7KYK – Installation of Nutanix Community Edition (CE) – Part1 – Hardware setup

As already announced in my recent post, I bought three Intel NUC NUC6i7KYK units to set up my demo/showcase environment based on the Nutanix Community Edition, which is free to use. In the following weeks I will show how I set up the environment step by step, and I will document the live demos I would like to show at upcoming events. This will include the OpenStack and Docker integration.

nuc_hardware_installed

It all starts with the hardware itself. The NUC Skull Canyon edition is pretty new, and this post in the Nutanix Community literally convinced me to build a lab with these boxes. I used the following hardware setup. Be aware that DDR4 memory and SSDs are not included when buying the Intel NUCs.

NUC-AHV

Item                                            Description                                                            Firmware   Driver      Hints
Intel NUC Skull Canyon NUC6i7KYK                -                                                                      -          -           -
Intel Core i7-6770HQ                            Skylake-H, 4C/8T, 2.6 GHz (Turbo to 3.5 GHz), 14nm, 6MB L2, 45W TDP    -          -           -
32GB (2x 16384MB) Crucial CT2K16G4SFD8213       DDR4-2133 SO-DIMM CL15 Dual Kit                                        -          -           -
2 x SanDisk X400 M.2 2280 512 GB                SATA SSD (6Gb/s)                                                       -          -           -
Intel Ethernet Connection I219-LM GbE Adapter   -                                                                      -          e1000e.ko   -