Custom UCS/FlexPod Build Script

UPDATE: Working with some of our internal guys, it’s come to my attention that some of the script has broken with the newer UCSM versions. I will be updating this to be more “adaptable”; for now, use the script for ideas and feel free to kang any code from it.


 

So I started developing a PowerShell script that grabs variables from an Excel sheet and creates a UCS build from them.

I am at a point where the build actually works quite well now. I’m pretty proud of myself, since I’m NOT a deep PowerShell guy. This came about from looking at other UCS PowerShell scripts and a lot of tweaking and testing.

Anyway, this script will continue to grow and its functionality expand. My end goal is to be able to do a base FlexPod build by scripting, including UCS, Nexus switches, NetApp and VMware.

It will take a lot of time, and I may never really use the script, but it’s more of a pet project: to see not only if I can do it, but also to grow my PowerShell skill set.

Here is the GitHub repo if you’d like to follow/assist, or download it and play with it a bit.

https://github.com/cknic/UCS_Build
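
To give a flavor of the approach, here’s a minimal sketch rather than the actual script. It assumes Cisco UCS PowerTool is installed (the module name varies by PowerTool version), and the file path, worksheet layout, and UCSM hostname below are all made up:

# A minimal sketch, not the real script. Assumes Cisco UCS PowerTool and an
# Excel file with a "VLANs" sheet: names in column A, IDs in column B (row 1 = headers).
Import-Module Cisco.UCSManager   # older PowerTool releases use a different module name
Connect-Ucs -Name ucsm.example.com -Credential (Get-Credential)

$excel = New-Object -ComObject Excel.Application
$wb    = $excel.Workbooks.Open("C:\Scripts\UCS_Build.xlsx")
$sheet = $wb.Worksheets.Item("VLANs")

$row = 2   # row 1 holds the headers
while ($sheet.Cells.Item($row, 1).Value2) {
    $name = $sheet.Cells.Item($row, 1).Value2
    $id   = [string]$sheet.Cells.Item($row, 2).Value2
    Get-UcsLanCloud | Add-UcsVlan -Name $name -Id $id   # create each VLAN in the LAN Cloud
    $row++
}

$wb.Close($false)
$excel.Quit()
Disconnect-Ucs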

Updates

So I’ve been neglecting this site really badly.

I’ve been insanely busy with all kinds of things. So what’s new:

Got my Citrix CCIA certification, woohoo!!  Now just waiting on some Citrix projects 🙂

 

In the meantime I’m doing more FlexPods. I am working on an update document for my iSCSI boot. I found that the new UCS 2.0(2r), I believe, added the IQN Pools, so some of the screenshots changed; however, the process is basically the same.

I’ve also been writing a lot of internal documentation right now, so the thought of writing more when I’m “off the clock” hasn’t been fun. That’s coming to a bit of an end, so I’m going to start writing more here again. I’ve come across some interesting things lately.

iSCSI Boot with ESXi 5.0 & UCS Blades

UPDATE: The issue was the NIC/HBA Placement Policy. The customer had set a policy to have the HBAs first, then the iSCSI overlay NIC, then the remaining NICs. When we moved the iSCSI NIC to the bottom of the list, the ESXi 5.0 installer worked just fine. I’m not 100% sure why this fix works, but either way, it works.
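
As a quick sanity check from PowerShell, something like this should list each vNIC on a service profile with its relative order; the vCon placement policy itself is a separate setting, but this is an easy way to eyeball the ordering. It’s just a sketch, assuming UCS PowerTool and a made-up service profile name:

# List vNIC order on a service profile ("ESX-01" is a hypothetical name)
Import-Module Cisco.UCSManager
Connect-Ucs -Name ucsm.example.com -Credential (Get-Credential)
Get-UcsServiceProfile -Name "ESX-01" | Get-UcsVnic |
    Sort-Object Order | Select-Object Name, Order
Disconnect-Ucs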

So at a recent customer’s site I was trying to configure iSCSI booting of ESXi 5.0 on a UCS blade, a B230 M2. To make a long story short, it doesn’t fully work and isn’t officially supported by Cisco. In fact, NO blade models are supported by Cisco for ESXi 5.0 with iSCSI boot. They claim a fix is on the way, and I will post an update when there is one.

Here is the exact issue, and my original thoughts, in case it helps anybody:

We got an error installing ESXi 5 to a NetApp LUN: “Expecting 2 bootbanks, found 0” at 90% of the install. The blade is a B230 M2.

The LUN is seen in the BIOS as well as by the ESXi 5 installer. I even checked the “Details” option, and all the information is correct.

Doing an Alt-F12 during the install and watching the logs more closely today, at ~90% it appears to unload a module that, judging by its name, is some sort of VMware Tools-type package. As SOON as it does that, the installer claims there is no IP address on the iSCSI NIC and begins to look for DHCP. The issue is that during the configuration of the Service Profile and the iSCSI NIC, at no time did we choose DHCP; we chose Static (we even tried Pooled). Since there is no DHCP server in that subnet, it doesn’t pick up an address and thus loses connectivity to the LUN.

So we rebooted the blade after the error, and ESXi 5 actually loads with no errors. The odd thing is that the root password we specified isn’t set; it’s blank, like ESXi 4.x was.

So an interesting question is: what’s happening during that last 10% of the ESXi 5 installation? Since it boots cleanly, it almost seems like it does a sort of “sysprep” of the OS, i.e. applying all the configuration details. If that’s the only issue, then it might technically be OK. However, I don’t get the “warm and fuzzies.” My concern is that, maybe not today but down the road, some module that wasn’t loaded correctly will come back to bite the client.

Also, what is happening in that last 10% that’s different from ESXi 4.x? We were able to load 4.1 just fine with no errors.

Again, we called Cisco TAC and were told that ESXi 5 iSCSI booting wasn’t supported on any blade. They do support 4.1, as well as Windows and a variety of Linux distros.

Configuring iSCSI boot on a FlexPod

Here is a nice document to follow to configure iSCSI booting for a FlexPod, i.e. UCS blades, a NetApp array and ESXi.

UPDATE: This document has the fix I found for ESXi 5.0. This was tested on B230 M2s and seems to work every time.

This document will be updated as I get new information.

FlexPod iSCSI Boot-Fixed

Enabling Jumbo Frames in a FlexPod Environment

Update: I have fixed the 5548 section; I was missing the last two lines.

This post will help you enable jumbo frames in your FlexPod environment. It will also work for just about any UCS-based environment; however, you will have to check how to enable jumbo frames on your particular storage array.

This post assumes a few things:

The environment is running Nexus 5548 switches.
You need to set up jumbo frames on the NetApp for NFS/CIFS shares.
The NetApp has VIF or MMVIF connections for said NFS/CIFS shares.

Cisco UCS Configuration 

-Log in to UCSM and click on the LAN tab.
-Expand LANs & LAN Cloud.
-Click on QoS System Class and change the “Best-Effort” MTU to 9216.

NOTE: You need to type the number in directly; it’s not one of the values that can be selected in the drop-down.

-Expand the Policies section on the LAN tab. Right-click on QoS Policies and click “Create new QoS Policy”. Call it “Jumbo-Frames” or something similar. A new policy’s priority defaults to Best Effort, which is the class we just set to 9216, so it can be left as-is.
-On the vNIC template, or the actual vNIC on the Service Profile, set the “QoS Policy” to the new policy.
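
If you’d rather script those two changes, here’s a rough UCS PowerTool sketch; the cmdlet names are from memory, so verify them against your PowerTool version (the template name is a made-up example):

# Bump the Best Effort system class MTU to 9216
Get-UcsBestEffortQosClass | Set-UcsBestEffortQosClass -Mtu 9216 -Force
# Create the QoS policy (a new policy defaults to the best-effort priority)
Get-UcsOrg -Level root | Add-UcsQosPolicy -Name "Jumbo-Frames"
# Attach it to a vNIC template ("ESX-Data" is a hypothetical template name)
Get-UcsVnicTemplate -Name "ESX-Data" | Set-UcsVnicTemplate -QosPolicyName "Jumbo-Frames" -Force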

 ESX/ESXi Configuration

-Either SSH or console into the ESX host. If you’re using ESXi, you’ll need to ensure local or remote Tech Support Mode is enabled.
-We need to set the vSwitch that the jumbo-framed NICs will be on to allow jumbo frames.
          Type esxcfg-vswitch -l to find the vSwitch we need to modify.
          Type esxcfg-vswitch -m 9000 vSwitch# (replace # with the actual number).
          Type esxcfg-vswitch -l again; you should now see the MTU set to 9000.

-We now need to set the actual VMkernel NICs.

          Type esxcfg-vmknic -l to find the vmks we need to modify.
          Type esxcfg-vmknic -m 9000 "<portgroup name>" (this is the portgroup the vmk is part of).
          Type esxcfg-vmknic -l to verify that the MTU is now 9000.

Note: If you’re using dvSwitches, you can set the MTU size through the vSphere Client.
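
If you’d rather do the whole thing from PowerCLI instead of the console, the equivalent looks roughly like this; the vCenter, host, vSwitch, and vmk names are made-up examples:

# Set vSwitch and VMkernel NIC MTU via PowerCLI (names are examples)
Connect-VIServer -Server vcenter.example.com
$vmhost = Get-VMHost -Name "esx01.example.com"
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false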

5548 Configuration 

-Log in to the 5548 switch on the “A” side.
-Type the following:

configure terminal
system jumbomtu 9216
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
multicast-optimize
exit
system qos
service-policy type network-qos jumbo
exit
copy run start

-Repeat on the “B” side.
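
To confirm it took, type show queuing interface ethernet 1/1 (substitute one of your actual uplink ports) and check that the MTU shown for the class is now 9216.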

NetApp Configuration 

-Log in to the filer.
-Type ifconfig -a to verify which ports need to run jumbo frames.
-Type ifconfig <VIF_NAME> mtusize 9000

NOTE: You need to make sure you enable jumbo frames not only on the VLAN’d VIF but also on the “root” VIF.
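
Also worth noting: on a 7-Mode filer, an MTU set with ifconfig doesn’t survive a reboot, so add the same mtusize option to the matching ifconfig line in /etc/rc as well.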