What’s new & cool in vSphere 6?

So vSphere 6 is finally here!  VMware has given us some pretty big new features as well as some great updates to existing ones.  I will go over these at a high level here and will dive into each of them, and then some, in the next few articles.  So let's dive right into the cool stuff.

vSphere 6: Multi-Processor Fault Tolerance

With the announcement of vSphere 6.0, one very cool new feature is Multi-Processor Fault Tolerance.  You can now turn FT on for VMs with up to 4 vCPUs.  This opens up Fault Tolerance to a much larger range of VM workloads; previously the single-vCPU limit excluded almost all server-type workloads.  Update: Running FT on vCenter itself will be supported in a few limited cases; this is still being worked out by VMware.

Setting it up is exactly the same as it has always been.  First, ensure that you have a VMkernel NIC configured for Fault Tolerance traffic.  Then right-click on a VM and choose “Turn on Fault Tolerance”.  Finally, choose the location for the files and an ESXi host to run the secondary VM.  It's that simple.
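If you prefer the CLI, the VMkernel NIC step can be sketched with esxcli over SSH. This is just a sketch; the interface name vmk1 is an assumption, so substitute the vmkernel NIC sitting on your FT network.

```shell
# Tag an existing vmkernel interface for Fault Tolerance logging.
# vmk1 is a placeholder -- use the vmkernel NIC on your FT network.
esxcli network ip interface tag add -i vmk1 -t faultToleranceLogging

# Verify the tag was applied
esxcli network ip interface tag get -i vmk1
```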

One thing to keep in mind is the networking requirements.  Since the CPU instructions are being mirrored between the hosts, there can be quite a bit of network traffic.  I have already seen a 4-vCPU VM consume a pretty good amount of bandwidth.  I would absolutely have at least a 10 Gb link dedicated to this purpose.

vSphere 6 now supports NFS v4.1 with Authentication

VMware finally supports NFS version 4.1 and even allows Kerberos authentication.  This lets administrators use features that were brought to the NFS kernel back in 2010.  The biggest advantage in my mind is the ability to have “multipathing” on your shares: you can now have multiple IP addresses associated with a single NFS mount for redundancy.  In addition, you can now authenticate the mounts with more than just IP addressing, using Kerberos to authenticate the ESXi hosts.  Note that you must be using AD, the ESXi host must be joined to the domain, and you must have specified the NFS Authentication Credentials (which are in System -> Authentication Credentials on each host).
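A mount with multiple server IPs can be sketched from the CLI as well. The IPs, export path, and datastore name below are hypothetical examples, and this assumes the esxcli nfs41 namespace introduced with vSphere 6:

```shell
# Mount an NFS 4.1 datastore with two server IPs for multipathing.
# Addresses, share path, and volume name are placeholders.
esxcli storage nfs41 add \
  --hosts=192.168.10.11,192.168.10.12 \
  --share=/vol/datastore1 \
  --volume-name=nfs41-ds01 \
  --sec=SEC_KRB5

# Use --sec=AUTH_SYS instead if you are not doing Kerberos.
# Confirm the mount:
esxcli storage nfs41 list
```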

vSphere 6 : vSphere Client is ALIVE!!

UPDATE: I have updated the below to fix the misinformation.

There has been a long-standing rumor that VMware is killing off the VI Client.  When the vSphere beta originally came out, I was bummed to see it missing.  It had been replaced with a very clunky and slow client that looked a lot like the full Web Client.  Honestly, it was terrible and I was very disappointed.

I was very pleasantly surprised when I fired up the latest build of vSphere, clicked on “Download vSphere Client”, and the normal-looking vSphere Client installed.  I was a bit excited; the icons looked the same as the old client I liked so much.  Sure enough, it looks and works the same as the old 5.5 client.  That also means that in order to use the new features you will need to use the Web Client.

vMotion between AMD & Intel

UPDATE: It seems Virtual Hardware 10 VMs will NOT vMotion even with the below done, might want to hang on to v8 for now if this is something you want to do.

It's always been said that you can't vMotion VMs between very different processor types.  In fact, without enabling EVC you can't vMotion even between different generations of processors, let alone between AMD and Intel processors.  This isn't entirely true.  There are certain parameters you can set in vCenter, and sometimes on the VMs themselves, that will allow you to actually do this vMotion.

I must state that this is in NO way supported by VMware.  In fact, if you ask their support teams to help you do this, they will tell you it's impossible, since the CPUs aren't really virtualized and are passed through to the VMs.  Please do not do this to your production systems, or at least not to anything really important; there are weird bugs.

In my home basement lab I have an AMD Opteron 2350 server and my newly acquired Intel L5520-based boxes.  These are running vSphere 5.5 U2.  I wanted to use the AMD box since it still has a decent amount of RAM and cores, enough for some management servers anyway.  I wanted to be able to vMotion VMs without powering them off to move them; that's just annoying.

To allow this, I enabled two advanced options in vCenter: config.migrate.test.CpuCompatibleError, which I set to false, and config.migrate.test.CpuCompatibleMonitorSupport, which I also set to false.

This allowed me to vMotion most of my VMs with nothing more than a warning in the “Migrate” wizard dialog.  There was not a hiccup on these VMs, sweet!  However, not everything was roses; some of my VMs required hiding the NX bit or even doing some custom CPU masking.  Here is the VMware link for the custom masking: VMware CPU Masking.

One thing I did notice is that some machines actually behaved better when being moved from Intel to AMD; the VM would use various AMD CPU bits but not the Intel ones.  This was most frustrating on my Horizon View Connection Server.  I could have gone and masked each feature on these VMs, but I just made it easy: I disabled DRS for that VM and used the Intel box as the primary host, so it would only move to the AMD box if there was an HA event or I needed to do maintenance.

Micro-Segmentation on NSX!!

I recently attended a really cool presentation by Scott Lowe about the ability to do Micro-segmentation with NSX.  This is, in my opinion, the biggest use case for NSX and something that impresses me a lot.

Before we deep dive on what NSX and Micro-Segmentation are, first consider where most of us stand today with network security.  Most environments use a perimeter-type security model.  Unfortunately, it tends not to be very resilient and can have a lot of issues.  First off, if you're able to breach this “shell” and get access to the interior servers, you typically have a pretty open environment.  There is typically little security protecting East/West traffic between servers; coupled with the ever-increasing network traffic between those servers, it's fairly easy to hide rogue traffic while attacking other servers.  Once you have access and control, you can launch your attack when it's most advantageous.  A perfect example of this is the recent big-box store breaches, Target for example: the attackers got in and waited some time before actually stealing data.

How can we best combat this today?  The best option is to utilize a Least Privilege or Zero-Trust security model.  This puts firewalls between everything: all servers, both North/South and East/West.  I have actually seen a customer do this.  Not only did they spend a TON of money on both physical and virtual firewalls, it was an administrative nightmare.  They had a lot of touch points, and one of the biggest problems was identifying rules, what they did, and why they were there.  There were rules still in place for servers and applications that had long been decommissioned.  It was also a nightmare to try to open up ports between applications and equipment: they had to touch multiple firewalls, all while trying to monitor the traffic to catch those ports that vendors don't always list in their docs about “needed firewall ports”.

NSX and its Micro-Segmentation are an awesome answer to this problem!  NSX at its heart is network virtualization: it decouples the network from the hardware and allows centralized management of this decoupled network.  With NSX we separate out the various network “planes”.  First, there is the Management plane; this is typically vCenter with the Network & Security plugin installed, and it is where we define all the various rules and policies.  Next, the Control plane: the NSX Manager and NSX Controllers.  This is the decision maker; it controls the rule states as well as their definitions and keeps track of everything.  Finally we have the Distributed Data plane: the modules loaded into all of the ESXi hypervisors, which enable the distributed routers, distributed firewalls, and switches.  This is where the actual packet switching happens.  All of the configuration can be done manually or through the various REST APIs for automation, typically with vCAC.
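To give a feel for that REST automation, here is a minimal sketch of pulling the Distributed Firewall configuration from NSX Manager with curl. The hostname and credentials are placeholders, and the endpoint path is from the NSX for vSphere API as I recall it, so verify it against your version's API guide:

```shell
# Fetch the current Distributed Firewall ruleset from NSX Manager.
# nsxmgr.lab.local and admin:password are hypothetical placeholders.
curl -k -u 'admin:password' \
  https://nsxmgr.lab.local/api/4.0/firewall/globalroot-0/config
```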

OK, now that we know what NSX basically is, why is micro-segmentation so awesome?  NSX's Micro-Segmentation allows us to implement a true Zero-Trust security policy easily, without the complication, huge costs, and administration overhead it typically takes.  We can now put an intelligent firewall and routing between every single VM!  The really cool part is that we are doing this at the hypervisor level, so the traffic doesn't have to “hairpin” through a firewall or router.  The traffic is typically analyzed at the originating vNIC.  (It can be done on the receive side, but that isn't recommended.)  The rules are also dynamic and follow the VM wherever it goes in the environment.  The rules are removed if the VMs are powered off or deleted, re-created when the VM is powered on, and CAN be automatically created when a VM is provisioned.

NSX Micro-Segmentation lets datacenter security sit in a “sweet spot”.  It is close enough to the VMs and workloads to provide workload intelligence and granular control.  However, unlike many agent-based tools that provide this level of intelligence and control, the security features live in the hypervisor and not in the VM.  This means a compromised VM still has all its security features intact.  In fact, as we'll see later on, NSX can potentially recognize the compromise and perform specific actions against the VM.

NSX allows for a very flexible network design.  At a very high level, it comprises three main types of “tenants” or networks.  We can Isolate networks from each other.  We can also Segment networks, allowing only certain types of traffic between various points.  Finally, NSX lets us bring in “Advanced Services” via third-party applications that plug into NSX.  Let's dive into each.

Isolation networks let us create just that: completely isolated networks.  These networks have absolutely no knowledge of each other, nor of the underlying physical connections.  They could both be running over one big flat network and still not see each other, and they can even use the exact same IP space.  This is excellent for keeping Dev, Test, and Production networks separate while all running within the same vSphere environment!  It is also used for multi-tenant environments, where the workloads shouldn't ever see each other.

Segmented networks are the most common type; these cover the traditional 3-tier app scenario.  Without NSX, you'd typically have a perimeter firewall, a DMZ, and then the inside firewall or firewalls that lead to various networks.  The rules are always in place on the firewalls no matter what the workload states are.  The rules also have to be entered manually into each firewall, and in some cases moved if workloads move, depending on network design.  With NSX we have flexibility and choices.  We can create logical networks per group, app, BU, etc. and then apply the rules to each VM.  We could even be a bit silly and just dump everything onto one giant logical switch and apply the rules to the VMs that way.  Either way gives the same result: the rules are applied at the VMs, and the logical networks can be designed however makes the most sense for the environment.

The “Advanced Services” functions of NSX bring a lot of really neat functionality and intelligence to traffic flows that weren't really possible before at the scale and ease of administration NSX provides.  With NSX we can set up the firewalls to do intelligent and dynamic routing.  We can send traffic through a malware scanner and then, based on the output of the scans, do different things with the traffic.  You can also send traffic to a deep packet scanner, or just pass the traffic straight to its destination.  With these advanced services we can build if/then-type rules for the traffic.  For example, if traffic is sent to Trend Micro and a virus is found, NSX can quarantine the VM and not allow it to pass any traffic until it's resolved.  If a scan finds a big vulnerability due to old software, NSX can monitor all of that VM's traffic via IPS until it's fixed.  If a different scan finds sensitive data, we can encrypt and restrict the traffic while it's investigated.  I think this is a very, very neat feature, and something that should make both network and security guys very happy!  A bunch of vendors are providing plugins for this, such as Rapid7, McAfee, Palo Alto, Symantec, and Trend Micro.

One thing Scott mentioned that makes a ton of sense and is really cool is Security Groups.  Security Groups allow the administrator to group VMs together logically using a variety of static and dynamic variables, such as datacenter, virtual machine, vNIC, OS type, user ID, security tag (applied by an IPS or malware scanner), etc.  The various policies you create (firewall rules, send to IPS, etc.) are then applied to given security groups and tags within the group.  In reality this is the only way to do NSX at any real scale; it would be very time consuming to create rules for each and every VM.  By using security groups, VMs become part of the groups and get the right policies applied automatically.

NSX and the Micro-Segmentation feature are all very simply managed within the vSphere Web Client.  (If you're still using the old .NET client, you need to stop, or you'll be very, very sad in the near future.)  Everything lives under the Network & Security plugin.  This is where you create your Security Groups along with their elements or attributes, create all your various policies, and view all the events and logs.  There is also an area where you can view all of the traffic flows from VM to VM, and even create rules against the observed flows.  It can be easier to create rulesets from real flows rather than trying to set them up ahead of time, if you'd like.

Scott also mentioned a few use cases for this micro-segmentation.  There was one use case that I will actually be exploring more in depth in the near future because it interested me a lot.  He mentioned that an admin can use NSX and Micro-Segmentation with VMware View.  In a typical VMware View design there tend to be various pools for different user types, such as internal workers and external or offshore users who shouldn't be able to access certain things, or separate pools for dev teams who should only access their development machines.  This usually means different VLANs, with firewall and routing rules to accomplish the segmenting.  However, NSX can apply firewall and routing rules based on the logged-in user ID.  This means you could actually have a single pool for all your users on a single flat network, and because NSX applies the locked-down firewall and routing rules to the VM when a “restricted user” logs in, you accomplish the same goal as the more complicated setup.  Now that's awesome!

Scott had a lot of really interesting slides and visualizations in his presentation that unfortunately I can't use.  I would look for more information at VMworld.  In addition, there is a good white paper here.



5.5 U1b now out – Fixes NFS and Heartbleed issues

So for those who didn't know, ESXi 5.5 U1 had a pretty severe issue relating to NFS.

Occasionally, connections to NFS storage would end up in an All Paths Down (APD) condition.  This is obviously pretty bad, as things tend to break when the storage is ripped out from underneath the VMs running on the hosts.

This was a known bug at VMware.  The issue had absolutely nothing to do with network or storage hardware; however, NetApp did come out with a patch that helps prevent it.

In addition, ESXi 5.5 was vulnerable to the Heartbleed bug.  If you read that and are confused, well, you must have been living under a rock.

VMware has released 5.5 U1b with the patches baked in.  If you don't want to do a full update, the patch is here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2077361

A colleague of mine has created a script that you can run on your ESXi hosts to update the server if you don't have VUM installed.  You must enable SSH on the host.

# open firewall for outgoing http requests:
esxcli network firewall ruleset set -e true -r httpClient
# Install the ESXi 5.5 pre-U1 Heartbleed Fix Image Profile from the VMware Online depot
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20140401020s-standard --allow-downgrades
# Reboot your host once the update completes
reboot

Custom UCS/FlexPod Build Script

UPDATE: Working with some of our internal guys, it's come to my attention that parts of the script have broken with newer UCSM versions.  I will be updating it to be more “adaptable”; for now, use the script for ideas and feel free to borrow any code from it.


So I started working on a PowerShell script that grabs variables from an Excel sheet and creates a UCS build from them.

I am at a point where the build actually works quite well now. I'm pretty proud of myself, since I'm NOT a deep PowerShell guy. This came about from looking at other UCS PowerShell scripts and a lot of tweaking and testing.

Anyway, this script will continue to grow and its functionality expand. My end goal is to be able to do a base FlexPod build entirely by scripting, including UCS, Nexus switches, NetApp, and VMware.

It will take a lot of time, and I may never really use the script, but it's more of a pet project: partly to see if I can do it, and partly to grow my PowerShell skillset.

Here is the GitHub repo if you'd like to follow, assist, or download it and play with it a bit.


Creating Custom ESXi Images Video

This is a video I did to show how to create a custom ESXi image ISO and ZIP, incorporating specific VIBs.

In this case it was to swap out the Cisco net-enic and scsi-fnic drivers to match the versions on the HCL for the UCS version we were running.  The Cisco-created ESXi image didn't even have the right versions.

I know there are a hundred videos out there for this, but I had created it, so why not share it 🙂

I have this embedded here, but since we're looking at text, I really do recommend going straight to YouTube to watch it, since the screen will be larger.