What’s new & cool in vSphere 6?

So vSphere 6 is finally here!!  VMware has given us some pretty big new features as well as some great updates to existing ones.  I will go over these at a high level here and will be diving into each of them, and then some, in the next few articles.  So let's dive right into the cool stuff.


vSphere 6: Multi-Processor Fault Tolerance

With the announcement of vSphere 6.0, one very cool new feature is Multi-Processor Fault Tolerance.  You can now turn FT on for VMs with up to 4 vCPUs.  This opens up Fault Tolerance to a much larger set of VM workloads; previously the single-vCPU limit excluded almost all server-type workloads.  Update: FT for vCenter will be supported in a few limited cases; this is still being worked out by VMware.

Setting it up is exactly the same as it has been. First, ensure that you have a VMkernel NIC configured for Fault Tolerance traffic. Then right-click a VM and choose “Turn on Fault Tolerance”.  Finally, choose the location for the files and the ESXi host that will run the secondary VM; it's that simple.
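
If you prefer the command line for the VMkernel prep, here is a rough sketch from the ESXi Shell. The interface name vmk1 is just a placeholder for whichever VMkernel port you dedicate to FT, and it's worth confirming the tag name against esxcli on your build before relying on it.

    # Tag an existing VMkernel interface (vmk1 is a placeholder) for FT logging traffic
    esxcli network ip interface tag add -i vmk1 -t faultToleranceLogging

    # Verify the tag was applied
    esxcli network ip interface tag get -i vmk1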

One thing to keep in mind is the networking requirement.  Since the CPU instructions are being mirrored between the hosts, there can be quite a bit of network traffic.  I have already seen a 4-vCPU VM start to consume a pretty good amount of bandwidth.  I would absolutely have at least a 10Gb link dedicated for this purpose.

vSphere 6 now supports NFS v4.1 with Authentication

VMware finally supports NFS version 4.1 and even allows Kerberos authentication.  This lets administrators use features that were brought to the NFS protocol back in 2010.  The biggest advantage in my mind is the ability to have “multipathing” on your shares: you can now have multiple IP addresses associated with a single NFS mount for redundancy.  In addition, you can now authenticate the mounts with something stronger than IP addressing by using Kerberos to authenticate the ESXi hosts.  Note that you must be using AD, the ESXi host must be joined to the domain, and you must have specified the NFS Authentication Credentials (which are in System -> Authentication Credentials on each host).
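
If you'd rather mount from the command line, here is a rough sketch of adding an NFS 4.1 datastore with two server addresses for multipathing. The IPs, export path, and datastore name below are made up, and the security option is from memory, so double-check the exact flags with esxcli storage nfs41 add --help before using them.

    # Mount an NFS 4.1 export with two server IPs (hypothetical addresses) for path redundancy
    esxcli storage nfs41 add -H 192.168.10.11,192.168.10.12 -s /export/ds01 -v NFS41-DS01

    # Same mount, but using Kerberos (requires the AD join and NFS credentials described above)
    esxcli storage nfs41 add -H 192.168.10.11,192.168.10.12 -s /export/ds01 -v NFS41-DS01 -a SEC_KRB5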


vSphere 6: vSphere Client is ALIVE!!

UPDATE: I have updated the post below to correct the misinformation.

There has been a long-standing rumor that VMware is killing off the VI Client.  When the vSphere beta originally came out, I was bummed to see it missing.  It had been replaced with a very clunky and slow client that looked a lot like the full Web Client.  Honestly, it was terrible and I was very disappointed.

I was very pleasantly surprised when I fired up the latest build of vSphere, clicked on “Download vSphere Client”, and the familiar vSphere Client installed.  I was a bit excited; the icons looked the same as the old client I liked so much.  Sure enough, it looks and works the same as the old 5.5 client. That also means that in order to use the new features you will need to use the Web Client.

Enable Spell Check in Jabber Client

This worked for me; I can't guarantee it'll work for you.

My company uses Cisco Jabber for its chat and softphone client.  I noticed that the new 10.5 client running on Windows 8 supports spell check.  Since my spelling is not exactly excellent, I found this to be a welcome feature.  However, it seems my company did not have it enabled by default in the backend.  Here is how to change it for your client.

  • Close Jabber if it is running.
  • Browse to C:\Program Files (x86)\Cisco Systems\Cisco Jabber
  • Open the “jabber-config-defaults.xml” file.
  • Add <userconfig name="spell_check_enabled" value="TRUE"/> to the <!-- Options --> section of the file (the full-file sketch below shows roughly where this lands).
  • Save the file.  (I had to change permissions on the Jabber folder to allow my user to save there.)
  • Open Jabber.  You will now see the red “squiggly” line when you misspell something, and can right-click it for suggestions.
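
For context, here is roughly where that line lands in the file. This is a stripped-down sketch rather than a complete file; your jabber-config-defaults.xml will have more in it, so just add the one userconfig line rather than replacing the file.

    <?xml version="1.0" encoding="utf-8"?>
    <config version="1.0">
      ...
      <!-- Options -->
      <userconfig name="spell_check_enabled" value="TRUE"/>
      ...
    </config>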

CCIE Data Center Studies

I have been going absolutely insane studying for my CCIE Data Center lab exam.  I have been working in our Advanced Technology Center on various equipment: Nexus 7Ks, 5Ks, and 2Ks, UCS, MDS, and storage.  I've been polishing skills on things I do all the time and doing things I've only ever read about, like FabricPath and OTV.

For anybody who is planning on taking the CCIE-DC: start playing with the gear early.  I don't care how many times you've done something before; once you start building things out and doing stuff you've never done, you realize just how much work you have to do in a very short amount of time.

I will be writing up a very detailed article, within the limits of the NDA, on how I studied for and (hopefully) passed the exam.  If I can help just one person, it's a win.  In addition, I've learned some really good best practices around vPC, EIGRP, and HSRP that I want to share.

New Position

I’ve recently taken a new position at World Wide that will allow me to touch and play with some cool stuff as well as give back to the Professional Services deployment teams.

My first tasks are to develop some training labs around UCS, Avamar/Data Domain, as well as data and OS migrations.  What's cool is that it lets me play with the equipment, learn it incredibly deeply, and then help write the guides and design and build the environments that let our guys learn this stuff.

It will be a bit different for me, as I will no longer be external customer-facing.  All levels of management and internal teams will be my customers.

Fog Computing? What the heck?

So I've heard a few people talk about and ask about “Fog Computing” lately.  Honestly, it's a term I only heard fairly recently, even though it really dates from earlier in the year.

What is it?

It started with the marketing minds at Cisco.  It is the architectural concept of putting more data, compute, applications, and intelligence closer to the end users, out at the edge.

Why do we need something new? Isn’t the cloud fixing all my issues?

Traditionally we have large, centralized datacenters with a ton of horsepower that do all of our computing and work for us.  End devices and users have to connect through the network to reach the datacenter, have the computational work done, and get the results sent back.   In this country, one of the biggest issues I've seen companies have is network bandwidth in the WAN or pseudo-LAN.  With the big push for more connected devices and endpoints we are only increasing our network requirements, and upgrades are either very expensive or simply not possible.

“The Cloud” doesn't always fix these issues.  Typically your cloud provider still has large datacenters in geographically diverse areas.  They tend to have a bit more capital for really big network pipes, but that may not be enough.  No matter the size of the pipe, things like latency or inefficient routing can be major headaches.

Alright, so how does “Fog Computing” help us??

What Cisco is calling Fog Computing is a bit of a play on words about bringing the “Cloud” everywhere.  (Get it??)  Anyway, it involves using the new generation of Cisco gear, much of which may already be your edge.  What Cisco wants to help you do is perform a lot of the work right at the edge instead of sending a bunch of raw data to your datacenters only to have a small result sent back.

Cisco has used the example of a jet engine generating ~10TB of real-time performance and condition metrics in 30 minutes.  You don't want to send all of that information to the cloud/datacenter only to get back a simple reply.  I like the example, but I've seen some more “real-world” scenarios where edge computing would help.

For example, a power company would rather collect its smart meter metrics at the edge, and traffic lights could do real-time decision making locally, rather than ship the data back over the WAN.  I could also see this being useful for distributed gaming engines and possibly VDI workloads. What about something as simple as DMZ-type services, web services, or VPN solutions?

How are we going to do this?

Cisco is leveraging its new IOx platform, Cisco's new IOS/Linux operating system.  It is really the IOS code running on an x86 code base, IOS XR.  Now, IOS XR has been running on the ASR platform for some time.  I recently attended a call about the new ISR 4400 line of Integrated Services Routers.  These allow various “Service Containers”, which are basically VMs, to run on them.  In addition, you can actually run UCS E-Series servers in them, which are full-blown x86 servers.  This brings real compute and storage power right to the edge of your network.

vMotion between AMD & Intel

UPDATE: It seems Virtual Hardware 10 VMs will NOT vMotion even with the settings below, so you might want to hang on to v8 for now if this is something you want to do.

It's always been said that you can't vMotion VMs between very different processor types.  In fact, without enabling EVC you can't vMotion between even different generations of processors, let alone between AMD and Intel processors.  This isn't really true, well, mostly anyway.  There are certain parameters you can set in vCenter, and sometimes on the VMs, that will allow you to actually do this vMotion.

I must state that this is in NO way supported by VMware.  In fact, if you ask their support teams to help you do this, they will tell you it's impossible, since the CPUs aren't really virtualized and are passed through to the VMs.  Please do not do this to your production systems, or at least not to anything really important; there are weird bugs.

In my home basement lab I have an AMD Opteron 2350 server and my newly acquired Intel L5520-based boxes, all running vSphere 5.5 U2.  I wanted to use the AMD box since it still has a decent amount of RAM and cores, enough for some management servers anyway, and I wanted to be able to vMotion VMs without powering them off to move them; that's just annoying.

To allow this, I enabled two advanced options in vCenter.  The first was config.migrate.test.CpuCompatibleError, which was set to false.  The second, config.migrate.test.CpuCompatibleMonitorSupport, was also set to false.
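
For reference, here are the two entries as they end up under vCenter's Advanced Settings (key on the left, value on the right):

    config.migrate.test.CpuCompatibleError            false
    config.migrate.test.CpuCompatibleMonitorSupport   false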

This allowed me to vMotion most of my VMs with nothing more than a warning in the “Migrate” wizard dialog.  There was not a bit of hiccup on these VMs, sweet!!  However, not everything was roses; some of my VMs required hiding the NX bit or even doing some custom CPU masking.  Here is the VMware link on custom masking: VMware CPU Masking.

One thing I did notice is that some machines actually played nicer when being moved from Intel to AMD; the VM would use various AMD CPU bits but not the Intel ones.  This was the most frustrating on my Horizon View Connection Server.  I could have gone and masked each feature on those VMs, but I made it easy instead: I disabled DRS for that VM, used the Intel box as its primary host, and only let it move to the AMD host if there was an HA event or I needed to do maintenance.