Installing the VMware ESXi Embedded Host Client

As most everyone knows, the old VMware vSphere C# client has been on its way out for years. One of the things keeping it alive is that not everyone has a vCenter Server, and even those who do don’t necessarily use the Web Client. Sadly, there are some really cool features the old Windows client can’t touch, such as exposing hardware-assisted virtualization to individual VMs.

If you have a home lab and don’t need vCenter, the ESXi Embedded Host Client gives you web-based access to these hidden features of your standalone ESXi host without having to spin up a real vCenter Server.

Here’s how to install it:

  1. Shut down all VMs and place the host in maintenance mode (see the command sketch after this list)
  2. SSH into the ESXi host and execute the following:
    [root@esxi:~] esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-4393350.vib
  3. Browse to https://[ESXi]/ui
    You should see the login screen:
    [Screenshot: VMware ESXi Embedded Host Client login screen]
  4. Log in using whatever credentials you use in the old C# vSphere client. You should see something that looks an awful lot like the vSphere Web Client:
    [Screenshot: VMware ESXi Embedded Host Client initial screen]
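For reference, here’s a rough sketch of the surrounding steps from the same SSH session: entering maintenance mode before the install, confirming the VIB landed, and exiting maintenance mode afterward. (The installed VIB name, which I’m just grepping loosely for here, may differ slightly depending on the Host Client build you install.)

esxcli system maintenanceMode set --enable true
esxcli software vib list | grep -i ui
esxcli system maintenanceMode set --enable false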

Using IRQbalance to Improve Network Throughput in XenServer

If you are running XenServer 5.6 FP1 or later, there is a little trick you can use to improve network throughput on the host.

By default, XenServer uses the netback process to handle guest network traffic, and each host is limited to four instances of netback, with one instance running on each of dom0’s vCPUs. When a VM starts, each of its VIFs (Virtual InterFaces) is assigned to a netback instance in round-robin fashion. While this results in a fairly even distribution of VIFs to netback instances, it can be very inefficient under heavy network load, because a single busy VIF can saturate its one vCPU while the other vCPUs sit mostly idle.

For example, suppose you have four VMs on a host, each with one VIF. VM1 is assigned to netback instance 0, which is tied to vCPU0; VM2 is assigned to netback instance 1, which is tied to vCPU1; and so on. Now suppose VM1 experiences a very high network load. Netback instance 0 is tasked with handling all of VM1’s traffic, and vCPU0 is the only vCPU doing work for that instance. That means the other three vCPUs sit idle while vCPU0 does all the work.

You can see this phenomenon for yourself by doing a cat /proc/interrupts from dom0’s console. You’ll see something similar to this:


[Screenshot: cat /proc/interrupts output from dom0]
(The screenshot doesn’t label the columns, but the first column of highlighted numbers is CPU0, the second is CPU1, and so on. The numbers represent the count of interrupt requests each CPU has handled.)
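If you can’t picture the output, here is a hypothetical excerpt (the interrupt counts are made up and the device names on your host will differ) illustrating the skew: the busy VM’s VIF has piled all of its interrupts onto CPU0 while the other vCPUs stay nearly idle.

          CPU0       CPU1       CPU2       CPU3
 276:  48721934          0          0          0   vif1.0
 277:     10238          0          0          0   vif2.0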

If you’ve ever troubleshot obscure networking configurations in the physical world, you’ve probably run into a router or firewall whose CPU was being asked to do so much that it was causing a network slowdown. Fortunately in this case, we don’t have to make any major configuration changes or buy new hardware to fix the problem.

All we need to do to increase efficiency in this scenario is to evenly distribute the VIFs’ workloads across all available CPUs. We could manually do this at the bash prompt, or we could just download and install irqbalance.

irqbalance is a Linux daemon that automatically distributes interrupts across all available CPUs and cores. To install it, issue the following command at the dom0 bash prompt:

yum install irqbalance --enablerepo=base

You can either restart the host or manually start the service/daemon by issuing:

service irqbalance start
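If you start it manually, you’ll probably also want irqbalance to come back after the next reboot. On XenServer’s CentOS-based dom0 that should just be a chkconfig call (a quick sketch; run both from the dom0 prompt):

chkconfig irqbalance on
service irqbalance status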

Now restart your VMs and do another cat /proc/interrupts. This time you should see something like this:
[Screenshot: cat /proc/interrupts output after installing irqbalance, with interrupts spread more evenly across all four CPUs]

That’s much better! Try this out on your test XenServer host(s) first and see if you can tell a difference. Citrix has a whitepaper titled Achieving a fair distribution of the processing of guest network traffic over available physical CPUs (that’s a mouthful) that goes into more technical detail about netback and irqbalance.

How To Get a Unique STA ID for Each of Your PVS-Provisioned XenApp Servers

Citrix Provisioning Services is very nice, but it does come with a slightly annoying quirk: All of your provisioned XenApp servers end up with the same STA ID! This will cause all sorts of problems for Citrix Access Gateway, Citrix Receiver, and anything else that may depend on having unique STA IDs. The good news is that fixing this little problem is easier than you might think.

To resolve the duplicate STA ID issue, we’ll do the following:
1. Create personality strings in Provisioning Services for each XenApp server
2. Put a PowerShell script on our golden image
3. Create a startup task to execute the PowerShell script

Let’s begin:

The format of the STA ID is simple. It is “STA” followed by the MAC address of the XenApp server’s NIC. The STA ID can really be anything beginning with “STA”, so you could get creative and have “STABLEFLY”, “STANK”, “STALE”, “STARTBUTTON”, “STACKOVERFLOW”... and the list goes on. But I recommend sticking with the MAC address because it’s unique (usually) and easy to match up to its server.

1. In Provisioning Services, create a personality string for each server with the Name “UID” and the String “STA001122DDEEFF”, substituting the MAC address of the server for the hex I just threw in there.
[Screenshot: defining personality strings in the Provisioning Services console]
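When a provisioned target boots, Provisioning Services drops its personality data into C:\Personality.ini on the server. Assuming a string named UID as above, the relevant part of that file should look roughly like this (a sketch; the surrounding entries vary by PVS version):

[StringData]
UID=STA001122DDEEFF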

2. Copy this PowerShell script to your scripts location on your golden image. (Note: if you copy and paste the text below, make sure the quotation marks come through as plain straight quotes, otherwise PowerShell will complain.)

# STA Replacement Script for Citrix Provisioned Servers
# Created 7-25-11 by Ben Piper (http://benpiper.com)

# Get the UID personality string (e.g. UID=STA001122DDEEFF) that PVS wrote to Personality.ini
$stastr = Get-Content C:\Personality.ini | Select-String -Pattern "UID=STA"

# Replace the existing UID= line in CtxSta.config with the personality string
$CtxStaConfig = Get-Content 'C:\Program Files (x86)\Citrix\system32\CtxSta.config' | ForEach-Object {$_ -replace '^UID=.+$', $stastr}
$CtxStaConfig | Set-Content 'C:\Program Files (x86)\Citrix\system32\CtxSta.config'

# Restart the Citrix STA service so it picks up the new ID
Stop-Service CtxHTTP
Start-Service CtxHTTP

3. Create a scheduled task to execute the PowerShell script at startup. Make sure the account that will be executing the script has appropriate permissions.
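One way to wire that up (a sketch, assuming you saved the script as C:\Scripts\Set-StaId.ps1; adjust the path, task name, and account to suit your environment) is with schtasks from an elevated prompt on the golden image:

schtasks /create /tn "Replace STA ID" /tr "powershell.exe -File C:\Scripts\Set-StaId.ps1" /sc onstart /ru SYSTEM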

If you are not already running PowerShell scripts, you’ll need to set the ExecutionPolicy on your golden image to Unrestricted by issuing “Set-ExecutionPolicy Unrestricted” at a PowerShell prompt.

I recommend testing the script first on your Master Target server before deploying it farm-wide. The script will still work as long as you have a personality string defined for your Master Target server, even if the vDisk is in Private mode.