Good questions asked during a UCS Design Workshop

So I’ve recently started working for a large technology company on their Datacenter Services team in their Professional Services org. It’s been quite an experience so far, and I’m doing my first solo Cisco UCS Design Workshop coupled with an installation as well as some basic training.

I was asked some good questions and figured that others may be asked the same things, or may have the same questions themselves, so I figured I’d share and maybe help somebody else. I will try to keep this page updated with some of the more interesting questions that aren’t easily answered by Cisco’s documentation.

Q1. According to Cisco’s documents, when you’re using VM-FEX or Pass-Through Switching there is a limit of 54 VMs per server when those hosts have two HBAs. What is the real reason for the low limit? With today’s high-powered servers, 54 VMs isn’t an unreachable number.

A1. The 54-VM limit is based on VN-Tag address space limitations in the UCS 6100 ASICs. Future UCS hardware will support more. PTS may not be the right fit for high-density virtual deployments, especially VDI. Here is a link to a great blog post on it: http://vblog.wwtlab.com/2011/03/01/cisco-ucs-pts-vs-1000v/

Q2. What is the minimum number of power supplies needed for a UCS Chassis?

A2. The answer is two, even for a fully populated chassis. In that case you are running in non-redundant mode: if one of the power supplies fails, the UCS system will continue to power the fans and the IO Modules, but it will begin to power off the blades in reverse numerical order until it reaches a supportable power load.
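For reference, here is a rough sketch of setting the chassis power redundancy policy from the UCS Manager CLI (the same policy lives in the GUI under Equipment > Policies). Treat the exact scope names and redundancy keywords as approximate, since they can vary a bit between UCSM releases:

UCS-A# scope org /
UCS-A /org # scope psu-policy
UCS-A /org/psu-policy # set redundancy grid
UCS-A /org/psu-policy # commit-buffer

With the policy set to grid or N+1, UCSM will raise a fault if the installed supplies can’t actually deliver that level of redundancy.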

Q3. Can you change the number of uplinks from the IO Modules to the Fabric Interconnects once the system is online?

A3. Changing the number of cables from the Fabric Interconnect to the chassis requires re-pinning the server links to the new number of uplinks. The pinning is a hard-coded static mapping determined by the number of links in use. The re-pinning is temporarily disruptive, first to the A-fabric path and then to the B-fabric path on the chassis. NIC teaming / SAN multipathing will handle failover and failback if they are in place.
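Mechanically, the change is made by setting the chassis discovery policy to the new link count and then re-acknowledging the chassis, which is what triggers the re-pinning. A rough sketch from the UCS Manager CLI, using a 4-link policy and chassis 1 purely as examples (keywords may differ slightly by UCSM version):

UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set action 4-link
UCS-A /org/chassis-disc-policy # commit-buffer
UCS-A /org/chassis-disc-policy # top
UCS-A# acknowledge chassis 1
UCS-A# commit-buffer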

Q4. If the uplinks from the Fabric Interconnects are connected to Nexus switches and we don’t use vPC on them, do we lose the full bandwidth because the switches are in an active/passive mode? Can you get the full bandwidth using VMware without vPC?

A4. Even without vPC, the UCS Fabric Interconnects will utilize the bandwidth of all of the uplinks; there is no active/passive behavior. However, I would still recommend configuring VMware for active/active use, but make sure you are using MAC- or virtual-port-based pinning rather than source/destination IP hash.
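As an example, on a standard vSwitch the teaming policy can be checked and set from the ESXi shell. This sketch assumes ESXi 5.x-style esxcli, and vSwitch0, vmnic0 and vmnic1 are placeholders; on a distributed switch or an older host you would make the equivalent change in the vSphere Client instead:

esxcli network vswitch standard policy failover get --vswitch-name vSwitch0
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 --load-balancing portid --active-uplinks vmnic0,vmnic1

The key point is load balancing by originating virtual port (portid) or source MAC rather than IP hash, since IP hash expects an upstream EtherChannel that the Fabric Interconnects don’t present to the host vNICs.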

Q5. So are there any advantages to doing vPC other than the simplified management?

A5. Two of them: faster failover, and the potential for a single server to utilize more than 10 Gbps thanks to port-channel load balancing.
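For anyone who hasn’t built one before, the Nexus side of a vPC pair is fairly lightweight. Here is a minimal sketch of one peer’s relevant config, with the domain ID, port-channel numbers, interfaces and keepalive addresses all chosen purely for illustration (the second peer mirrors it):

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel10
  description vPC peer-link to the other Nexus
  switchport mode trunk
  vpc peer-link

interface port-channel101
  description Uplinks from UCS Fabric Interconnect A
  switchport mode trunk
  vpc 101

interface Ethernet1/1
  description FI-A uplink
  switchport mode trunk
  channel-group 101 mode active

On the UCS side the uplinks simply go into one uplink port-channel per Fabric Interconnect; the vPC lets the two Nexus switches present that port-channel as a single logical peer.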