Getting Hands on With AIX on a Nutanix Cluster

May 15, 2018

Ever since IBM's intriguing statement of direction about AIX running on POWER-based Nutanix clusters, I've eagerly awaited the real thing. The wait ended last week, when availability of the hyperconverged systems was made official at the Nutanix .NEXT conference in New Orleans.

Now here's the really cool part: during the IBM Technical University earlier this month, I got some hands-on experience with AIX running on a Nutanix cluster. Then last week, I was able to access a cluster again, this time via Webex video conferencing.

So how does this all work? I'll start with the Prism interface. Watch this to get some familiarity with it. Prism is the GUI used to create and manage virtual machines. While the video I reference shows an x86 cluster, Prism's look and feel is similar on a POWER-based cluster.

Once we were logged into Prism, we loaded a pre-GA raw disk image provided by IBM into our image repository. It's very similar to how we use the VIO server's virtual media library, only instead of booting from CD and installing AIX, we basically took a clone of this disk image and booted from that.

Compared to creating a machine on the HMC, there isn't much to configure when creating a VM via Prism. (This video gives you a feel for those tasks.) This solution, particularly its ability to clone virtual machines, feels similar to using PowerVC images and shared storage pools with our existing POWER servers. However, with a hyperconverged solution, there's no need to manage a SAN at all, because your disks are locally attached to your compute nodes.

I entered the name of my VM, the number of virtual CPUs, the number of cores per VCPU, and the amount of memory I wanted. Then I added a network interface and some logical disks that I carved out of a larger pool of physical disk. I selected "clone from image service" along with the correct disk image. I clicked on add, and the VM was created. After clicking on the power on option and selecting the console, the machine booted up. I logged in as root with no password and I was up and running.
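The same creation steps can also be driven from the Nutanix command line. Here's a rough sketch using acli, Nutanix's CLI; the VM name, network name, and image name (aixvm1, vmnet, aix72-image) are hypothetical, and the exact flag syntax is my best understanding of acli rather than something from this walkthrough:

```shell
# Hypothetical acli equivalent of the Prism steps described above.
acli vm.create aixvm1 num_vcpus=4 num_cores_per_vcpu=1 memory=8G
acli vm.disk_create aixvm1 clone_from_image=aix72-image  # clone the provided disk image
acli vm.nic_create aixvm1 network=vmnet                  # add a virtual NIC
acli vm.on aixvm1                                        # power on; open the console from Prism
```

In Prism itself these are just form fields and buttons, but the CLI view makes it clear how little configuration the hypervisor actually needs compared to an HMC partition profile.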

At this point I clicked the clone option; that's all it took to get another machine up and running. The lspv command displayed the same PVID on both systems. They were identical disk clones.
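Because the clones share a PVID, it's worth knowing how to check for and fix that if it matters for your environment. A minimal sketch, with an illustrative PVID and assuming the disk in question isn't in an active volume group:

```shell
# Compare the PVID column (second field) on each clone.
lspv
# hdisk0  00f6db0a1234abcd  rootvg  active    <- illustrative; same PVID on both clones
# To give a cloned, non-rootvg disk a fresh PVID, clear and regenerate it:
chdev -l hdisk1 -a pv=clear   # remove the existing PVID
chdev -l hdisk1 -a pv=yes     # assign a new one
```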
In the prtconf command output, I saw the following:

System Model: IBM pSeries (emulated by qemu)
Machine Serial Number: Not Available
Processor type: PowerPC_POWER8
Processor Version: PV_S_Compat
Number of Processors: 4
Processor Clock Speed: 2095 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: (this was a long UUID string)
Platform Firmware level: Not available
Firmware Version: SLOF, HEAD

The information about the physical hardware is a little different from what we're used to seeing in PowerVM-based systems. To determine my serial number, I'd typically run either uname -a or prtconf; neither worked in this instance. Instead I went into the Prism GUI to see which physical node my AIX image was running on.

Here's a snippet of some of the output generated by running lsdev. Again, there are some differences:

vscsi0 Virtual SCSI Client adapter
cd0 Virtual SCSI Optical Served by VIO Server
ent0 qemu_virtio-net-pci:0000:00:01.0 Virtio NIC Client Adapter
scsi0 qemu_vhost-user-scsi-pci:0000:00:02.0 Virtio SCSI Client Adapter
hdisk0 qemu_vhost-user-scsi-pci:0000:00:02.0-LW_0 MPIO Other Virtio SCSI Disk Drive
hdisk1 qemu_vhost-user-scsi-pci:0000:00:02.0-LW_0 MPIO Other Virtio SCSI Disk Drive

Later, I built an "empty" virtual machine. I gave it a name and assigned memory, CPU, disk and a network, but I didn't give it anything to boot from. On the Nutanix cluster there's no SMS to boot into. By default it tried to boot from the network. After that timed out, it booted into the Slimline Open Firmware (SLOF) interface.

Since I didn't have a NIM server built, I couldn't test that process. Rest assured, that will be one of the first things I do once I get my own solution.

In the systems running AIX, I was able to load a virtual CD .iso containing AIX filesets, just as we'd do with PowerVM and VIO optical media libraries. Then I went into smitty and loaded filesets, like on any other AIX system.
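The smitty path above has a straightforward command-line equivalent. A sketch, assuming the virtual optical device shows up as cd0 (as in the lsdev output earlier) and using /cdmnt as a hypothetical mount point:

```shell
# Mount the virtual CD and install filesets from it
# (roughly what smitty install_latest drives for you).
mkdir -p /cdmnt
mount -V cdrfs -o ro /dev/cd0 /cdmnt   # cdrfs is AIX's CD-ROM filesystem type
installp -acgXd /cdmnt all             # apply/commit all filesets, pulling prerequisites
```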

When I ran oslevel -s, the system returned 7200-02-02-1810.

Using the chfs command to resize filesystems went as expected.
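For reference, this is the usual shape of a chfs resize; /home is just an example filesystem:

```shell
# Grow /home by 1 GB; chfs takes absolute or relative (+/-) sizes.
chfs -a size=+1G /home
# JFS2 filesystems can also be shrunk with a negative delta:
chfs -a size=-512M /home
df -g /home   # confirm the new size in GB blocks
```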

Running lsattr -El hdisk0 produced some interesting unique_id information. The disks appeared as 54391NUTANIX devices.

I ran the mount command to mount the virtual CD inside AIX, poked around for a bit, and unmounted it. Then I went into the Prism GUI, removed the .iso I'd been using and added a different image into the virtual CD. Finally, I went back into AIX and mounted this new .iso on the fly.
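The on-the-fly swap boils down to an unmount-and-remount around the Prism change. A sketch, again assuming cd0 for the virtual optical device and /mnt as the mount point:

```shell
# Release the old image inside AIX first.
umount /mnt
# (swap the .iso attached to the VM's virtual CD in the Prism GUI)
# Then remount; the new image appears with no reboot needed.
mount -V cdrfs -o ro /dev/cd0 /mnt
```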

Migrating virtual machines across physical nodes was like running Live Partition Mobility with PowerVM. Of course there were minor differences running AIX on this different hypervisor, but overall everything worked as expected. Getting right to work in this new environment was very simple.

As you'll need AIX 7.2 to deploy machines into this environment, you should listen to Chris Gibson’s recent AIX Virtual User Group presentation on AIX 7.2 migration.

There's much more I want to do with this technology. I plan to test out a mksysb migration to move my systems to the supported version of AIX that will run on a Nutanix cluster. Later on, I'll get into SLOF and boot from a NIM server. I also want to kick off workloads and run performance scripts. Basically, I want to see what can and can't be done with this compared to traditional AIX environments running on PowerVM.
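For the planned mksysb migration, the commands involved would be the standard rootvg backup and verification pair; the path here is illustrative, and the restore side (e.g. via a NIM server) is exactly what I still need to build out:

```shell
# Take a rootvg system backup on the source machine.
mksysb -i /backup/aixhost.mksysb      # -i regenerates /image.data first
# Sanity-check the backup's contents and level before migrating.
lsmksysb -lf /backup/aixhost.mksysb
```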

The fact that there's another platform and hypervisor choice when it comes to running AIX is a big deal. For one thing, it's still more proof that AIX is here for the long haul.

Hopefully I've explained everything well. Please pose questions and share your impressions in comments.
