Enable Spell Check in Jabber Client

This worked for me; I can't guarantee it'll work for you.

My company uses Cisco Jabber as its chat and softphone client. I noticed that the new 10.5 client running on Windows 8 supports spell check. Since my spelling is not exactly excellent, I found this to be a welcome feature. However, it seems my company did not have it enabled by default on the backend. Here is how to enable it for your client.

  • Close Jabber if it is running.
  • Browse to C:\Program Files (x86)\Cisco Systems\Cisco Jabber
  • Open the “jabber-config-defaults.xml” file.
  • Add <userconfig name="spell_check_enabled" value="TRUE"/> to the <!-- Options --> section of the file (see the example below).
  • Save the file. (I had to change permissions on the Cisco Jabber folder to allow my user to save there.)
  • Open Jabber. You will now see the red "squiggly" line when you spell poorly, and you can right-click for suggestions.
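
For reference, after the edit the relevant portion of jabber-config-defaults.xml should look roughly like this (only the userconfig line is new; any other entries already in your file stay as they are):

```xml
<!-- Options -->
<userconfig name="spell_check_enabled" value="TRUE"/>
```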

CCIE-Datacenter Studies

I have been going absolutely insane studying for my CCIE Data Center lab exam. I have been working in our Advanced Technology Center on various equipment: Nexus 7Ks, 5Ks, 2Ks, UCS, MDS, and storage. I've been polishing skills on things I do all the time and doing things I've only ever read about, like FabricPath and OTV.

For anybody who is planning on taking the CCIE-DC: start playing with the gear early. I don't care how many times you've done it; once you start building things out and doing stuff you've never done, you realize just how much work you have to do in a very short amount of time.

I will be writing up a very detailed article, within the limits of the NDA, on how I studied and (hopefully) passed the exam. If I can help just one person, it's a win. In addition, I've learned some really good best practices around vPC, EIGRP, and HSRP that I want to share.

Fog Computing? What the heck?

So I've heard a few people talk about and ask about "Fog Computing" lately. Honestly, it's a term I only heard fairly recently, even though it really dates from earlier in the year.

What is it?

It started with the marketing minds at Cisco. It is the architectural concept of putting more data, compute, applications, and intelligence closer to the end users, at the edge of the network.

Why do we need something new? Isn’t the cloud fixing all my issues?

Traditionally we have large centralized data centers with a ton of horsepower that do all of our computing work for us. End devices and users have to connect through the network to access the data center, have the computational work done, and get the results sent back. In this country, one of the biggest issues I've seen companies have is network bandwidth in the WAN or pseudo-LAN. With the big push for more connected devices and endpoints we are only increasing our network requirements, and upgrading is either very expensive or just isn't possible.

"The Cloud" doesn't always fix these issues. Typically your cloud provider still has large data centers in geographically diverse areas. They tend to have a bit more capital to buy really big network pipes, but that may not be enough. No matter the size of the pipe, things like latency or inefficient routing can be major headaches.

Alright, so how does “Fog Computing” help us??

What Cisco is calling Fog Computing is a bit of a play on words on bringing the "Cloud" everywhere. (Get it??) Anyway, it involves using the new generation of Cisco gear, much of which may already be at your edge. What Cisco wants to help you do is perform a lot of the work right at the edge, instead of sending a bunch of raw data to your data centers only to have results sent back.

Cisco has used the example of a jet engine generating ~10TB of real-time performance and condition metrics in 30 minutes. You don't want to send all of that information to the cloud/data center only to get back a simple reply. I like the example, but I've seen some more "real-world" cases where edge computing would help.

For example, a power company would rather collect its smart meter metrics at the edge, or traffic lights could do real-time decision making locally, rather than ship the data back over the WAN. I could also see this being useful for distributed gaming engines and possibly VDI workloads. What about something as simple as DMZ-type services, web services, or VPN solutions?

How are we going to do this?

Cisco is leveraging its new IOx platform, Cisco's new combined IOS/Linux operating system. It is essentially IOS code running on a Linux/x86 base, much like IOS XE, which has been running on the ASR platform for some time now. I recently attended a call about the new ISR 4400 line of Integrated Services Routers. These can run various "Service Containers," which are basically VMs, and you can even put UCS E-Series servers in them, which are full-blown x86 servers. This brings real compute and storage power right to the edge of your network.

Major Releases from Cisco today!!

Cisco has announced some major products and updates this morning around its UCS product line. These announcements are not just hardware and software; they also show how Cisco sees the changing data center landscape and where it needs to compete. As a note, I will be updating this as I get more information, as well as deep diving into each topic shortly. I suggest you follow me on Twitter @ck_nic or subscribe to the blog to see when I've updated this or added deep dives.

Cisco has realized that the single monolithic data center is no longer the norm. Companies are moving toward remote-site or multi-data center environments. In my opinion this has been happening for years, but it has only recently accelerated. With new technologies such as VXLAN and OTV, and huge advances in the virtualization world, there is no need to build a single giant data center. Personally, I'm seeing companies build multiple smaller and more efficient data centers, or utilize space in remote offices or shared facilities. The push for the "Internet of Things" is only going to accelerate that, as computing power will need to be closer to the edge. Cisco has also recognized the "app-centric" data center and cloud model that most vendors are moving toward, especially around SDN and automation. Cisco has announced several new items that speak to this.

UCS Mini

First, Cisco has announced the UCS Mini. This is a UCS blade chassis with the Fabric Interconnects in the back of the chassis instead of at the top of the rack. Cisco is positioning this as "Edge-Scale Computing." It sees the UCS Mini being deployed in remote offices or smaller data centers where the expected growth is small and the power and cooling requirements need to be lower than with the current UCS line. For WAY more information I suggest you read my earlier post on the UCS Mini here. I have updated it with some new information gained in the last week or so.

UCS Director Updates

Secondly, Cisco has updated its UCS Director software to be more useful to more people. UCS Director will now allow an administrator to automate and monitor not only UCS equipment but Nexus products as well. UCS Director will also be able to push out ACI configurations to the new Nexus 9K product line (more on that here). UCS Director has also introduced what it calls "Application Containers." These allow configuration to be done at the "application level," meaning you will be able to create networking and compute resources for a given application. Cisco states that this is a very good way to simplify private-cloud deployments. Finally, UCS Director now provides Hadoop integration: there is a very easy way to deploy and monitor Hadoop clusters on UCS hardware. This is something I'd like to see more of, personally.

UCS M-Series Servers

Cisco is announcing a new line of servers today that is very different from just about any other server on the market. Cisco's M-Series servers are modular servers that can pack 16 individual servers into a single 2U chassis. This is accomplished with "compute cartridges" that consist of CPU and memory only. Each cartridge contains two separate servers, each with a single Intel Xeon E3 processor and four DIMM slots. All of the cartridges share four SSDs that serve iSCSI boot LUNs to each compute node, as well as the power supplies, fans, and outbound network and SAN connections. These servers support the new VIC 1300 mentioned below, which means they can be uplinked to a UCS Fabric Interconnect as well. Now, these servers are NOT designed to run your typical virtualization or bare-metal OSes. They are designed for a lightweight OS, such as Linux. Cisco sees these being deployed in large numbers for uses like Big Data and other "scale-out" applications, online gaming, and e-commerce.

There has been a lot of talk comparing these to both HP's Moonshot servers and the offerings of Nutanix. They are a bit different from both. Nutanix is a "hyper-converged" platform that uses its own filesystem and does a lot of neat tricks to distribute things across the nodes; the compute nodes become part of the virtual environment more than normal servers do. The M-Series is "disaggregated": it uses what Cisco calls its "System Link" technology to separate the components, making them more modular. HP's Moonshot is somewhat similar to the M-Series in that it uses "server cartridges"; however, those are mostly Atom-based processors and still carry some other hardware in the cartridges. Cisco's cartridges are all full Intel Xeon x86 processors.

UCS C3000 Series Servers

Cisco is not only releasing a compute-heavy server but also a storage-heavy one. Cisco has announced the C3160 Rack Storage Server. It is a 4U server capable of holding up to 360TB of storage. It is a single server just like any other; it has two processor sockets and an LSI 12Gb SAS controller connected to the disks. Cisco is targeting this server at Big Data or web applications that need a very large, fast central storage repository. Cisco has provided some examples that use both the new M-Series and the new C3160 together in various designs. It has mentioned both Big Data and gaming services where the compute is distributed across an array of M-Series servers with all of the backend storage hosted on C3160s.

New M4 Servers & VIC released

Cisco has announced the newest line of its blade and rackmount servers: the B200 M4, C220 M4, and C240 M4. These servers take advantage of the latest Intel processors as well as DDR4 RAM, with up to 1.5TB of RAM per server. Cisco is not introducing the configuration constraints that some other vendors have been adding. Cisco has said it will support the new 18-core Intel processor when it is released. This means you could get 36 full cores per blade, 72 if you count hyper-threading!!! Cisco has also announced a new VIC 1300 to go along with the newer servers. This VIC is natively 40Gb capable; however, until the new FIs and IOMs are released the card will run as 4x 10Gb. The PCIe-based version of the VIC has QSFP ports that will support both breakout cables and a special adapter that converts 40Gb to 10Gb. It's nice that these VICs are released to "future proof" hardware for when we see more 40Gb switches, but I am a bit bummed we didn't see a 40Gb FI.

Overall, a lot of things were announced and there is a lot of information to be digested. I have seen some pictures of the new hardware and hope to get to play with it soon. Expect some deep dives on the hardware and my experience with it.

Custom UCS/FlexPod Build Script

UPDATE: Working with some of our internal guys, it has come to my attention that some of the script has broken with newer UCSM versions. I will be updating it to be more "adaptable"; in the meantime, use the script for ideas and feel free to kang any code from it.

So I started developing a PowerShell script that grabs variables from an Excel sheet and creates a UCS build off of that.

I am at a point where the build actually works quite well now. I'm pretty proud of myself, since I'm NOT a deep PowerShell guy. This came about from looking at other UCS PowerShell scripts and a lot of tweaking and testing.

Anyway, this script will continue to grow and its functionality will expand. My end goal is to be able to do a base FlexPod build through scripting, including UCS, Nexus switches, NetApp, and VMware.

It will take a lot of time, and I may never really use the script, but it's more of a pet project to not only see if I can do it but also grow my PowerShell skill set.
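
To give you an idea of the general pattern, here is a minimal sketch (this is NOT the actual script from the repo; the input file, column names, and IP address are made up, and it assumes the Cisco UCS PowerTool module is installed):

```powershell
# Minimal sketch: read build variables from a CSV export of the spreadsheet
# and push a couple of objects into UCSM via Cisco UCS PowerTool.
Import-Module Cisco.UCSManager        # UCS PowerTool module (name varies by version)

# Hypothetical input file with columns: VlanName,VlanId
$vars = Import-Csv -Path .\ucs_build_vars.csv

# Connect to UCS Manager (prompts for credentials)
$cred = Get-Credential
Connect-Ucs -Name 10.0.0.10 -Credential $cred

# Create each VLAN defined in the sheet under the LAN cloud
foreach ($row in $vars) {
    Get-UcsLanCloud | Add-UcsVlan -Name $row.VlanName -Id $row.VlanId
}

Disconnect-Ucs
```

The real script reads directly from Excel and builds far more than VLANs, but the flow is the same: pull the variables from the sheet, connect, and loop through the Add-Ucs* cmdlets.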

Here is the GitHub repo if you'd like to follow along, assist, or download it and play with it a bit.

https://github.com/cknic/UCS_Build

iSCSI Boot with ESXi 5.0 & UCS Blades

UPDATE: The issue was the vNIC/vHBA placement policy. The customer had set a policy to place the HBAs first, then the iSCSI overlay NIC, then the remaining NICs. When we moved the iSCSI NIC to the bottom of the list, the ESXi 5.0 installer worked just fine. I'm not 100% sure why this fix works, but either way it works.
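
If you want to double-check the vNIC ordering on a service profile without clicking through UCSM, something like this with UCS PowerTool should do it (a rough sketch; the service profile name and IP are placeholders, and property names may differ slightly between PowerTool versions):

```powershell
# List the vNICs on a service profile along with their desired placement order
Import-Module Cisco.UCSManager
Connect-Ucs -Name 10.0.0.10 -Credential (Get-Credential)

Get-UcsServiceProfile -Name "ESX-Host-01" |
    Get-UcsVnic |
    Select-Object Name, Order        # the iSCSI overlay vNIC should sort last

Disconnect-Ucs
```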

So at a recent customer's site I was trying to configure iSCSI booting of ESXi 5.0 on a UCS blade, a B230 M2. To make a long story short, it doesn't fully work and isn't officially supported by Cisco. In fact, NO blade models are supported by Cisco for ESXi 5.0 with iSCSI boot. They claim a fix is on the way, and I will post an update when there is one.

Here is the exact issue, along with my original thoughts, in case it helps anybody:

We got an error installing ESXi 5 to a NetApp LUN: "Expecting 2 bootbanks, found 0" at 90% of the ESXi install. The blade is a B230 M2.

The LUN is seen in the BIOS as well as by the ESXi 5 installer. I even verified the "Details" option, and all the information is correct.

Pressing Alt-F12 during the install and watching the logs more closely today, at ~90% the installer appears to unload a module that, judging by its name, is some sort of VMware Tools-type package. As SOON as it does that, the installer claims there is no IP address on the iSCSI NIC and begins to look for DHCP. The issue is that during the configuration of the service profile and the iSCSI NIC we never chose DHCP; we chose static. (We even tried pooled.) Since there is no DHCP server in that subnet, it doesn't pick up an address and thus loses connectivity to the LUN.

So we rebooted the blade after the error, and ESXi 5 actually loads with no errors. The odd thing is that the root password we specified isn't set; it's blank, like ESXi 4.x used to be.

So an interesting question is: what's happening during that last 10% of the ESXi 5 installation?? Since it boots cleanly, it almost seems like that stage does a sort of "sysprep" of the OS, i.e. all the configuration details. If that's the only issue, then it might technically be OK. However, I don't get the "warm and fuzzies." My concern is that, maybe not today but down the road, some module that wasn't loaded correctly will come back to bite the client.

Also, what is happening in that last 10% that's different from ESXi 4.x?? We were able to load 4.1 just fine with no errors.

Again, we called Cisco TAC and were told that ESXi 5 iSCSI booting isn't supported on any blade. They do support 4.1, as well as Windows and a variety of Linux distros.

Good questions asked during UCS Design Workshop

So I've recently started working for a large technology company on the Data Center Services team in their Professional Services org. It's been quite an experience so far, and I'm doing my first solo Cisco UCS design workshop coupled with an installation, as well as some basic teaching.

I was asked some good questions and figured that others may be asked the same things, or may have the same questions themselves, so I'd share and maybe help somebody else. I will try to keep this page updated with some of the more interesting questions that aren't easily answered in Cisco's documentation.

Q1. According to Cisco's documents, when you're using VM-FEX or Pass-Through Switching there is a limit of 54 VMs per server when those hosts have 2 HBAs. What is the real reason for the low limit? With today's high-powered servers, 54 VMs isn't an unreachable number.

A1. The limit of 54 is based on VN-Tag address space limitations in the UCS 6100 ASICs. Future UCS hardware will support more. PTS may not be the right fit for high-density virtual deployments, especially VDI. Here is a link to a great blog post on it: http://vblog.wwtlab.com/2011/03/01/cisco-ucs-pts-vs-1000v/

Q2. What is the minimum number of power supplies needed for a UCS Chassis?

A2. The answer is 2, even for a fully populated chassis. In this case you are running in a non-redundant mode. If one of the power supplies fails, the UCS system will continue to power the fans and the IO modules. It will, however, begin to power off blades in reverse numerical order until it reaches a supportable power load.

Q3. Can you change the number of uplinks from the IO modules to the Fabric Interconnects once the system is online?

A3. Yes, but changing the number of cables from the FI to the chassis requires re-pinning the server links to the new number of uplinks. The pinning is a hard-coded static mapping based on the number of links used. The re-pinning is temporarily disruptive, first to the A-fabric path and then to the B-fabric path on the chassis. NIC teaming / SAN multipathing will handle failover and failback if they are in place.

Q4. If the uplinks from the Fabric Interconnects are connected to Nexus switches and we don't use vPC on them, do we lose the full bandwidth because the switches are in an active/passive mode?? Can you get the full bandwidth using VMware and no vPC?

A4. Even without vPC the UCS Fabric Interconnects will utilize the bandwidth of all the uplinks; there is no active/passive behavior. However, I would still recommend configuring VMware for active/active use, and ensure you are using MAC- or virtual port-based pinning rather than source/destination IP hash.
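
For what it's worth, here is one way to set that teaming policy on a standard vSwitch port group with VMware PowerCLI (a rough sketch; the host and port group names are placeholders, and the exact parameters can vary between PowerCLI versions):

```powershell
# Set the teaming policy on a port group to route based on the originating
# virtual port ID (not IP hash), which plays nicely with UCS uplink pinning.
Connect-VIServer -Server vcenter.lab.local

Get-VMHost -Name esx01.lab.local |
    Get-VirtualPortGroup -Name "VM Network" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId
```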

Q5. So are there any advantages to doing vPC other than the simplified management??

A5. Two of them: faster failover, and the potential for a single server to utilize more than 10Gbps thanks to port-channel load balancing.