vSphere 6 now supports NFS v4.1 with Authentication

VMware finally supports NFS version 4.1, including Kerberos authentication.  This lets administrators use features that were added to the NFS kernel code back in 2010.  The biggest advantage, in my mind, is the ability to have “multipathing” on your shares: you can now associate multiple IP addresses with a single NFS mount for redundancy.  In addition, you can now authenticate the mounts with more than just IP addressing, by using Kerberos to authenticate the ESXi hosts.  Note that you must be using AD, the ESXi host must be joined to the domain, and you must have specified the NFS Authentication Credentials (which are in System -> Authentication Credentials on each host).
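As a rough sketch, the equivalent mount from the ESXi command line looks something like this (the IP addresses, share path, and datastore name are placeholders I made up; the flags are from my reading of the esxcli nfs41 namespace, so double-check them on your build):

```shell
# Mount an NFS 4.1 datastore with two server IPs for multipathing,
# using Kerberos (SEC_KRB5) instead of the default AUTH_SYS
esxcli storage nfs41 add \
  --hosts=192.168.1.10,192.168.1.11 \
  --share=/vol/datastore1 \
  --volume-name=nfs41-ds1 \
  --sec=SEC_KRB5
```

If you leave off the `--sec` option you get the old AUTH_SYS behavior, i.e. trust based on the host rather than Kerberos.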

Continue reading

5.5 U1b now out – Fixes NFS and Heartbleed issues

So for those who didn’t know, ESXi 5.5 U1 had a pretty severe issue relating to NFS.

Occasionally, connections to NFS storage would end up in an All Paths Down (APD) condition. This is obviously pretty bad, as things tend to break when the storage is ripped out from underneath the VMs running on the hosts.

This was a known bug at VMware. The issue had absolutely nothing to do with network or storage hardware; however, NetApp did come out with a patch that helps prevent it.

In addition, ESXi 5.5 was vulnerable to the Heartbleed issue. If you read that and are confused, well, you must have been living under a rock.

VMware has released 5.5 U1b with the patches baked in. If you don’t want to do a full update, the patch is here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2077361

A colleague of mine has created a script that you can run on your ESXi hosts to update the server if you don’t have VUM installed. You must enable SSH on the host first.

# Open the firewall for outgoing HTTP requests:
esxcli network firewall ruleset set -e true -r httpClient
# Install the ESXi 5.5 (pre-U1) Heartbleed-fix image profile from the VMware online depot:
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20140401020s-standard --allow-downgrades
# Reboot the host:
reboot

Creating Custom ESXi Images Video

This is a video I did showing how to create a custom ESXi image (ISO and ZIP) that incorporates specific VIBs.

In this case, it was to swap out the Cisco net-enic and scsi-fnic drivers to match the versions on the HCL for the UCS release we were running.  The Cisco-built ESXi image didn’t even ship with the right versions.

I do know that there are a hundred videos out there covering this, but I had created it, so why not share it 🙂

I have the video embedded here, but since we’re looking at text anyway, I really recommend going straight to YouTube to watch it, since the screen will be larger.

Time settings/issues on ESXi 4.1

Here are some short and sweet items I discovered yesterday:

Interestingly, ESXi does not allow you to change the time zone; it is permanently set to UTC.

Also, if you set up an NTP server on your ESXi hosts and that NTP server goes away for some reason, the ESXi host will not fall back to its own clock or even continue making a valiant effort at keeping time.  Instead, it reverts to January 1, 0001.  To say this creates some issues is putting it mildly.  The ESXi hosts complain about not being able to “synchronize,” which is the first clue you get about the problem.  When I tried to manually set the date through the VI Client, I got a bunch of errors on every action, and then the VI Client froze.  The only option I found was to get that NTP server back online.

Note: It may have been possible to fix this via PowerShell or CLI commands; however, I needed to get the NTP servers online anyway, and once they were back, the ESXi servers re-synced and became responsive again.
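If you have Tech Support Mode (local or SSH) available, one thing worth trying is to kick the NTP client on the host once the server is reachable again.  This is a sketch, not something I verified during the outage; it assumes the stock ntpd init script on the 4.x builds and that ntpd logs to /var/log/messages:

```shell
# Restart the NTP daemon so the host attempts to re-sync immediately
/etc/init.d/ntpd restart
# Watch the log for synchronization messages from ntpd
grep ntpd /var/log/messages | tail -n 20
```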

Why the 2TB Limit in ESX 4.1

So I was asked this question by a colleague of mine: “Why is there still a 2TB size limit on a LUN in ESX?”  The argument was that in this day and age, with ESX now running on a 64-bit kernel, why are we still hamstrung by this limitation?  After some consideration, we thought maybe VMFS was the issue, but it can’t be, because you can join extents into one big VMFS volume.

Well, the answer is that ESX (and ESXi) still use the SCSI-2 standard. Yes, the same SCSI-2 from ~1995.  The ultimate issue has to do with the way the standard reports the capacity of the LUN: it uses the 10-byte READ_CAPACITY command, which limits the returned block address to a 32-bit number. (It is actually just under a full 32-bit range; the last value, 0xFFFFFFFF, is reserved and can’t be used.)  With the standard 512-byte blocks, this limits you to 2,199,023,255,040 bytes, which is just under the 2TB mark.
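The arithmetic works out like this (assuming the standard 512-byte block size):

```shell
# READ_CAPACITY(10) returns the last LBA as a 32-bit value, and
# 0xFFFFFFFF is reserved, so the highest usable last LBA is 0xFFFFFFFE.
# That gives at most 0xFFFFFFFF (2^32 - 1) addressable blocks.
blocks=$(( 0xFFFFFFFF ))
bytes=$(( blocks * 512 ))
echo "$bytes bytes"   # 2199023255040 bytes -- 512 bytes shy of 2 TiB
```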

The only current workaround is extents, which are much less than ideal.

So until VMware decides to update their outdated storage methodology, we’re stuck with the 2TB limit.