CCIE Data Center Studies

I have been going absolutely insane studying for my CCIE Data Center lab exam.  I have been working in our Advanced Technology Center on various equipment: Nexus 7Ks, 5Ks, and 2Ks, UCS, MDS, and storage.  I've been polishing skills on things I do all the time, and doing things I've only ever read about, like FabricPath and OTV.

For anybody who is planning on taking the CCIE DC: start playing with the gear early.  I don't care how many times you've done it; once you start building things out and doing stuff you've never done, you realize just how much work you have to do, and in a very short amount of time.

I will be writing a very detailed article, within the limits of the NDA, on how I studied for and (hopefully) passed the exam.  If I can help just one person, it's a win.  In addition, I've learned some really good best practices around vPC, EIGRP, and HSRP that I want to share.

Fog Computing? What the heck?

So I've heard a few people talk about and ask about "Fog Computing" lately.  Honestly, it's a term I only heard fairly recently, even though it dates from earlier in the year.

What is it?

It started with the marketing minds at Cisco.  It is the thought process, or architectural concept, of putting more data, compute, applications, and intelligence closer to the end users, at the edge of the network.

Why do we need something new? Isn’t the cloud fixing all my issues?

Traditionally we have large, centralized datacenters with a ton of horsepower that do all of our computing and work for us.  End devices and users have to connect through the network to reach the datacenter, have the computational work done, and get the results sent back.  In this country, one of the biggest issues I've seen companies run into is network bandwidth in the WAN or pseudo-LAN.  With the big push for more connected devices and endpoints, we are only increasing our network requirements, and upgrades are either very expensive or simply not possible.

"The Cloud" doesn't always fix these issues.  Typically your cloud provider still has large datacenters in geographically diverse areas.  They tend to have a bit more capital for really big network pipes, but that may not be enough.  No matter the size of the pipe, things like latency or inefficient routing can be major headaches.

Alright, so how does “Fog Computing” help us??

What Cisco is calling Fog Computing is a bit of a play on words: bringing the "Cloud" down to ground level, everywhere.  (Get it??)  It involves the new generation of Cisco gear, much of which may already be your edge.  What Cisco wants to help you do is perform a lot of the work right at the edge, instead of sending a bunch of raw data to your datacenters just to have a small result sent back.

Cisco has used the example of a jet engine generating ~10TB of real-time performance and condition metrics in 30 minutes.  You don't want to send all of that to the cloud/datacenter only to get back a simple reply.  I like the example, but I've seen some more "real-world" cases where edge computing would help.

For example, a power company would rather collect smart-meter metrics at the edge, and traffic lights could make real-time decisions locally, rather than ship the data back over the WAN.  I could also see this being useful for distributed gaming engines, and possibly VDI workloads.  What about something as simple as DMZ-type services, web services, or VPN solutions?

How are we going to do this?

Cisco is leveraging their new IOx platform for this.  IOx pairs IOS with a Linux environment so applications can run right on the network device.  I recently attended a call about the new ISR 4400 line of Integrated Services Routers.  These can host various "Service Containers," which are basically VMs running on the router.  In addition, you can actually run UCS E-Series modules in them, which are full-blown x86 servers.  This puts real compute and storage power right at the edge of your network.


Micro-Segmentation on NSX!!

I recently attended a really cool presentation by Scott Lowe about doing micro-segmentation with NSX.  This is, in my opinion, the biggest use case for NSX and something that impresses me a lot.

Before we deep-dive into what NSX and micro-segmentation are, first consider where most of us stand today with network security.  Most environments use a perimeter-type security model.  Unfortunately, it tends not to be very resilient and can have a lot of issues.  First off, if you're able to breach the "shell" and get access to the interior servers, you typically find a pretty open environment.  There is usually little security protecting East/West traffic between servers, and coupled with the ever-increasing network traffic between those servers, it's fairly easy to hide rogue traffic while attacking other servers.  Once you have access and control, you can launch your attack when it suits you best.  A perfect example is the recent big-box store breaches, Target for instance: the attackers got in and waited some time before actually stealing data.

How can we best combat this today? The best option is to utilize a Least Privilege or Zero-Trust security model.  This puts firewalls between everything: all servers, both North/South and East/West.  I have actually seen a customer do this.  Not only did they spend a TON of money on both physical and virtual firewalls, it was also an administrative nightmare.  They had a lot of touch points, and one of the biggest problems was identifying the rules, what they did, and why they were there.  There were rules still in place for servers and applications that had long been decommissioned.  Opening up ports between applications and equipment was also a nightmare: they had to touch multiple firewalls, all while monitoring the traffic to catch the ports that vendors don't always list in their docs of "needed firewall ports".

NSX and its micro-segmentation are an awesome answer to this problem!  NSX at its heart is network virtualization: it decouples the network from the hardware and allows centralized management of that decoupled network.  With NSX we separate out the various network "planes."  First, the management plane: NSX Manager, which you work with through the Networking & Security plugin in the vSphere Web Client.  This is where we define all the various rules and policies.  Next, the control plane: the NSX Controllers.  These are the decision makers; they control rule state and definitions and keep track of everything.  Finally, we have the distributed data plane: the modules loaded into all of the ESXi hypervisors, which enable the distributed routers, distributed firewalls, and logical switches.  This is where the actual packet switching happens.  All of this configuration can be done manually or driven through the REST APIs for automation, typically with vCAC.
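Since that REST API is the hook for automation, here is a minimal PowerShell sketch of what talking to it can look like: pulling the Distributed Firewall config from NSX Manager.  The hostname and credentials are placeholders, and the /api/4.0/firewall path is my reading of the NSX-v API reference, so verify it against your release (lab NSX Managers usually have self-signed certs, so you may also need to relax certificate validation first).

# Minimal sketch: read the Distributed Firewall config from NSX Manager.
# The manager hostname is a placeholder; the API path is per the NSX-v REST docs.
$nsxManager = "nsxmgr.lab.local"
$cred       = Get-Credential
$pair       = "{0}:{1}" -f $cred.UserName, $cred.GetNetworkCredential().Password
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# GET returns the DFW ruleset as XML; the same URI accepts PUT to push changes back.
$dfw = Invoke-RestMethod -Uri "https://$nsxManager/api/4.0/firewall/globalroot-0/config" `
                         -Headers $headers -Method Get

# Walk the default layer-3 section and list rule names and actions
$dfw.firewallConfiguration.layer3Sections.section.rule | Select-Object name, action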

OK, now that we know what NSX basically is, why is micro-segmentation so awesome?? NSX's micro-segmentation allows us to implement a true Zero-Trust security policy easily, without the complication, huge costs, and administrative overhead it typically takes.  We can now put an intelligent firewall and routing between every single VM!!  The really cool part is that we do this at the hypervisor level, so traffic doesn't have to "hairpin" through a firewall or router.  Traffic is typically analyzed at the originating vNIC.  (It can be done on the receive side, but that isn't recommended.)  The rules are also dynamic and follow the VM wherever it goes in the environment.  Rules are removed when VMs are powered off or deleted, re-created when the VM is powered on, and CAN be created automatically when a VM is provisioned.

NSX micro-segmentation lets datacenter security sit in a "sweet spot."  It is close enough to the VMs and workloads to provide workload intelligence and granular control.  However, unlike many agent-based tools that offer that level of intelligence and control, the security features live in the hypervisor, not in the VM.  That means a compromised VM still has all of its security features intact.  In fact, as we'll see later on, NSX can potentially recognize the compromise and take specific actions against the VM.

NSX allows for a very flexible network design.  At a very high level it comprises three main types of "tenants," or types of networks.  We can Isolate networks from each other.  We can also Segment networks, meaning we allow only certain types of traffic between various points.  Finally, NSX lets us bring in "Advanced Services" via third-party applications that plug into NSX.  Let's dive into each a bit more.

Isolation networks allow us to create just that: completely isolated networks.  These networks have absolutely no knowledge of each other.  They also have no knowledge of the underlying physical connections; they could both be running over one big flat network and still never see each other.  They can even use the exact same IP space.  This is excellent for keeping Dev, Test, and Production networks separate, yet all running within the same vSphere environment!!  It is also used for multi-tenant environments, where the workloads should never see each other.

Segmented networks are the most common type; these are the traditional 3-tier app scenario.  Without NSX, you'd typically have a perimeter firewall, a DMZ, and then the inside firewall or firewalls that lead to the various networks.  The rules are always in place on the firewalls no matter what state the workloads are in.  The rules also have to be entered manually into each firewall and, depending on network design, moved in some cases when workloads move.  With NSX we have flexibility and choices.  We can create logical networks per group, app, BU, etc. and then apply the rules to each VM.  We could even be a bit silly and dump everything onto one giant logical switch and apply the rules to the VMs that way.  Either way the result is the same: the rules are applied at the VMs, and the logical networks can be designed however makes the most sense for the environment.

The "Advanced Services" functions of NSX bring a lot of really neat functionality and intelligence to traffic flows that weren't really possible before at the scale, and with the ease of administration, that NSX offers.  With NSX we can set up the firewalls to do intelligent and dynamic routing.  We can send traffic through a malware scanner and then, based on the scan results, do different things with it.  We can send traffic to a deep packet scanner, or just pass it straight to its destination.  With these advanced services we can build if/then-type rules for traffic.  For example, if traffic is sent to Trend Micro and a virus is found, NSX can quarantine the VM and not allow it to pass any traffic until the issue is resolved.  If a scan finds a big vulnerability due to old software, NSX can monitor all of that VM's traffic via IPS until it's fixed.  If a different scan finds sensitive data, we can encrypt the traffic and restrict it while it's investigated.  I think this is a very, very neat feature, and something that should make both network and security folks very happy!  A bunch of vendors are providing plugins for this, such as Rapid7, McAfee, Palo Alto, Symantec, and Trend Micro.

One thing Scott mentioned that makes a ton of sense and is really cool is Security Groups.  Security Groups allow the administrator to group VMs together logically using a variety of static and dynamic variables, such as datacenter, virtual machine, vNIC, OS type, user ID, security tag (applied by an IPS or malware scanner), etc.  The various policies you create (firewall rules, send to IPS, etc.) are then applied to given security groups and the tags within them.  In reality, this is the only way to do NSX at any real scale; it would be very time-consuming to create rules for each and every VM.  With security groups, VMs join the groups and get the right policies applied automatically.
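To show how scriptable those groups are, here is a rough sketch that creates a security group through the REST API, reusing the $nsxManager and $headers variables from the earlier firewall example.  The bulk/globalroot-0 endpoint and the XML shape are taken from my reading of the NSX-v API reference; treat both as assumptions to verify.

# Sketch: create a Security Group via the NSX REST API (reuses $nsxManager/$headers from above).
# The group name and description are made-up examples.
$body = @"
<securitygroup>
  <name>SG-Web-Tier</name>
  <description>Web servers - policies attach to this group</description>
</securitygroup>
"@

# Dynamic membership criteria (OS type, security tag, VM name pattern, etc.) would go in a
# dynamicMemberDefinition element inside the body; omitted here for brevity.
Invoke-RestMethod -Uri "https://$nsxManager/api/2.0/services/securitygroup/bulk/globalroot-0" `
                  -Headers $headers -Method Post -Body $body -ContentType "application/xml"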

NSX and its micro-segmentation feature are all managed very simply within the vSphere Web Client.  (If you're still using the old .NET client, you need to stop, or you'll be very, very sad in the near future.)  Everything lives under the Networking & Security plugin.  This is where you create your Security Groups along with their elements and attributes, create all your various policies, and view all the events and logs.  There is also an area where you can view all of the traffic flows from VM to VM and even create rules against the observed flows.  That makes it a bit easier to create rulesets from real flows rather than trying to define them all ahead of time, if you'd like.

Scott also mentioned a few use cases for this micro-segmentation.  One of them I will actually be exploring more in depth in the near future because it interested me a lot: using NSX micro-segmentation with VMware View.  In a typical VMware View design there tend to be various pools for different user types, such as internal workers versus external or offshore users who shouldn't be able to access certain things, or separate pools for dev teams who should only access their development machines.  That usually means different VLANs, with firewall and routing rules to accomplish the segmenting.  However, NSX can apply firewall and routing rules based on the logged-in user ID.  That means you could have a single pool for all your users on a single flat network, and because NSX applies the locked-down firewall and routing rules to the VM when a "restricted user" logs in, you accomplish the same goal as the more complicated setup.  Now that's awesome!

Scott had a lot of really interesting slides and visualizations in his presentation that unfortunately I can't use.  I would look for more information at VMworld.  In addition, there is a good white paper here.


Not sure I want the Internet of Everything

I'm not so sure I want the Internet of Everything!!

The idea of having more things in our lives available to the internet is cool.  It allows things to be interconnected and brings an intelligence, and an ability to automate, that was once only seen in "visions of the future."  Even just 10 years ago, automating things at home was available only to the very rich, and it involved setups so elaborate and complicated that you needed very expensive equipment and people to get it working.  Now Lowe's sells a DIY kit for home automation.  Heck, there are refrigerators that have internet access!!

Now, the "home market" isn't really the biggest push for the Internet of Everything.  The big money is in large companies and utilities.  One of the biggest pushes is utility providers creating a "Smart Grid": bringing better intelligence and automation to the power grid, water supplies, and even traffic-control devices.  I've done projects for some large utilities that are doing exactly that, putting in smart meters with internet connectivity baked in.  Traffic lights that communicate with each other about traffic patterns and adjust automatically already exist in many areas.

However, let's be totally honest here: there is a reason "password" is typically in the top 5 passwords in the world.  People either don't fully understand security or are lazy about it.  My biggest fear is that various manufacturers, especially ones trying to cut costs, will jump on the "automation" and "connected devices" bandwagon and forget about securing them.  Now, I'm not of the tin-foil-hat variety, but let's be honest, there have been a lot of high-profile security breaches lately.  For every Target and Adobe there are tens of thousands of smaller breaches, ranging from simply causing havoc to major theft of money and PII (Personally Identifiable Information).  Now imagine the wrong people getting into the "Smart Grids" and causing really, really big problems.

As much of a tech geek as I am, I think I'll wait a while before I get too nuts with my own entrance into the Internet of Everything.


SDN for the common nerd

What the heck is SDN?

So what is SDN, really?  Unless you have lived under a rock and haven't gone on any tech blogs, attended any conferences, or talked to any technical human lately, you have heard of SDN, or "Software Defined Networking."  I recently had to describe what SDN really is to some semi-technical folks, people who wanted to know what the concept meant both directly and indirectly: what does it mean to them today, and what is it trying to accomplish?  I'll be honest, today is the "easy" part; where I personally think we're all heading is the harder part, and a lot more foggy.

Let us start simple.  What is SDN?? Wikipedia states it “is an approach to computer networking that allows network administrators to manage network services through abstraction of lower level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane).” (source)

That is absolutely a mouthful.  The first sentence says we want to let our network admins manage the network at a higher level, not directly on each device.  The second says we accomplish this by splitting network "devices" into two different functions.  One is the control plane (the brains, the part that makes the decisions about traffic); the other is the data or forwarding plane (the underlying system the packets actually travel through).  So basically we want to take the brains out of the switch/router/firewall, centralize them, and then have a bunch of dumb devices out there controlled by said brains.  In most cases this is at least pretty close to what people and vendors are envisioning, or at least some variation of it.

This is not THAT different from what we've been doing in the compute world with hypervisors such as VMware's.  We are letting software define how we set up, carve up, and configure our computing resources.  In fact, we're excellent at that; we've been doing it for years.  Storage isn't far behind compute: there are plenty of products out there that can interact with the APIs presented by storage arrays to carve up and serve storage to whatever device needs it.  Now all we're doing is letting a centralized device or piece of software control all the networking nodes.  This allows for a lot of really simple and also some really neat things.

So why do we want SDN?

Well, we can now do some really cool things.  We can provision things like VLANs, QoS, and security ACLs in a much more granular and dynamic way.  For example, we can have these items follow a VM as it vMotions around the environment.  OK, so you say you can do that already within VMware, and that is mostly true.  But can you do it when you vMotion across datacenters, or in a DR situation?  This also allows for super easy provisioning.  In your centralized tool you create "profiles" and use them when you're creating your workload.  Now you can have the network configured correctly, including things like firewall ACLs, automagically.  This will not eliminate the network admin's job; it will just change it slightly.  Instead of running cables and logging into 100 devices to perform a task, you will log into the centralized software/hardware and perform your tasks there.

Where do we stand today?

SDN today is very much a still-evolving concept and product space.  Many companies are releasing or have released products that speak to SDN.  VMware's NSX is a huge example: a product that has been deployed at some very large customers and has a good following.  Another big release is Cisco's ACI product line.  I won't compare them in this article; that comparison is upcoming, and it's a bit muddy since they aren't exactly apples to apples, more like apples to pears.  There are also open standards, the largest being OpenDaylight.  To me, right now looks a lot like when "Cloud" or "VDI" were the big buzzwords.  Some people are picking a solution and going with it, but many others are waiting to see how the market defines itself.  Unless there is a major need, or a company is driven by "pie in the sky" type architects, I see many people just watching and waiting.

Where do i see this going?

So where do I eventually see us heading??  Now, I like VMware's NSX; it works quite well.  However, I see us moving toward an "Application-Centric Datacenter."  This is more the approach Cisco is going for and speaking about.  Some people say Cisco isn't doing true SDN the way NSX is, since hardware is involved.  I can understand that argument; however, at the end of the day you still need hardware.  Cables need to connect somewhere, and there has to be some sort of hardware involved.

Overall, the end state is that the end user no longer has their applications or workloads constrained to certain hardware, or certain sections of a datacenter, just because that's the hardware that supports them.  No longer will it take forever to provision resources for a new rollout, or to "balloon" during periods of peak usage.  I can see a centralized "orchestration" engine at the center of it all, using the standards and APIs that are only now becoming common, to provision compute, storage, and now network for a workload.  No longer will an application owner have to make calls to multiple teams to get something done.

Now, I'm also quite aware that there need to be limitations, quotas, and the like; many times as a VMware guy I've heard "I need as many vCPUs and as much memory as I can get."  Things like vCloud did a great job of enforcing quotas and limiting access to those resources.  The same will become necessary on the network side; we don't need people grabbing huge chunks of network "just because."  Either way, the next 12-18 months will be very, very interesting to watch and be a part of.

Custom UCS/FlexPod Build Script

UPDATE: Working with some of our internal guys, it's come to my attention that parts of the script have broken with newer UCSM versions.  I will be updating it to be more "adaptable"; in the meantime, use the script for ideas and feel free to kang any code from it.


So I started working on a PowerShell script that grabs variables from an Excel sheet and creates a UCS build from that.

I am at the point where the build actually works quite well now. I'm pretty proud of myself since I'm NOT a deep PowerShell guy. This came about from looking at other UCS PowerShell scripts and a lot of tweaking and testing.

Anyway, this script will continue to grow and its functionality expand. My end goal is to be able to do a base FlexPod build by script, including UCS, Nexus switches, NetApp, and VMware.

It will take a lot of time, and I may never really use the script, but it's more of a pet project: to see if I can do it, and to grow my PowerShell skillset.
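To give a flavor of the pattern (the full script is on GitHub below), here is a trimmed-down sketch: read rows from the sheet, connect to UCSM with Cisco's UCS PowerTool, and push objects in a loop. The CSV column names are hypothetical stand-ins for my Excel layout, and the module name varies between PowerTool versions.

# Trimmed-down sketch of the build pattern; CSV column names are hypothetical.
# Requires Cisco UCS PowerTool (module name differs across PowerTool versions).
Import-Module Cisco.UCSManager

$rows = Import-Csv .\ucs_build.csv                          # stand-in for the Excel sheet
Connect-Ucs -Name $rows[0].UcsmHost -Credential (Get-Credential) | Out-Null

# Create each VLAN listed in the sheet under the LAN cloud
foreach ($row in $rows) {
    Get-UcsLanCloud | Add-UcsVlan -Name $row.VlanName -Id $row.VlanId | Out-Null
}

Disconnect-Ucs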

Here is the GitHub repo if you'd like to follow/assist, or download it and play with it a bit.

https://github.com/cknic/UCS_Build

Auto Deploy and VXLAN

UPDATE!!: Another awesome engineer I work with has found a solution to this.

"as long as you let VSM create the vmknic the first time, and then preserve that MAC address (in the answer file of the Host Profile), you're good.  (Assuming you've added the vxlan vib to your image)"  He also mentions: "I think one of the keys is to make sure VSM is showing the correct IP in Datacenters -> Network Virtualization -> Preparation -> Connectivity BEFORE attempting host reboots."

Thank you, Jason!
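To sanity-check that before letting a host reboot, a quick PowerCLI look at the vmknics shows the MAC you need preserved in the answer file. The vCenter and host names here are placeholders.

# Quick PowerCLI check: list the host's vmknics with MAC and IP so you can confirm
# the VXLAN vmknic MAC matches what the host profile answer file preserved.
Connect-VIServer vcenter.lab.local | Out-Null

Get-VMHost esx01.lab.local |
    Get-VMHostNetworkAdapter -VMKernel |
    Select-Object Name, Mac, IP, PortGroupName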

Original Post:

So, working with the same technician who found the UCS and PXE bug, he found another one, this time relating to VXLAN and Auto Deploy. (Thanks Zach & Eric)

First, a little background.

The customer is doing a full-scale vCloud Suite Enterprise deployment.  They wanted to utilize VXLAN and fully stateless hosts.

Normally, when the cluster gets prepared, vShield Manager (now called vCloud Networking and Security; I'm sure the name will change tomorrow) creates a vmknic on each host that is used for VXLAN transport.

Now we have Auto Deploy, which muddies everything up…. The process that should occur is this:

First, boot a new host and configure it as needed.  Then prepare the cluster through vSM/VCNS; this adds a vmknic to the host for the VXLAN transport.  Then you create a host profile from that host.  As you add more hosts, you update the answer file for each of them.  Then reboot, and all is happy…

Well, here is the rub: upon reboot, the vSM/VCNS prep happens before the host profile is applied.  So when the host profile gets applied, the just-created vmknic is removed and re-added. Sadly, there is no way to just add the IP address to this vmknic through the host profile; it's an all-or-nothing affair.  What sucks is that during host profile remediation the vmknic isn't just modified, it's actually deleted and re-created with the appropriate settings.  I'm sure this is to simplify code.  Anyway, the new vmknic is created with the correct settings, but vSM/VCNS doesn't know about it because some identifier has changed…. doh!!

There is currently no fix for this order-of-operations issue…  It will be fixed when VCNS gets updated to 5.5, though.  So for now it's VXLAN or Auto Deploy, not both.


Updates

So I’ve been neglecting this site really badly.

I've been insanely busy with all kinds of things.  So, what's new:

Got my Citrix CCIA certification, woohoo!!  Now just waiting on some Citrix projects 🙂


In the meantime, I'm doing more FlexPods.  I am working on an updated document for my iSCSI boot procedure.  I found that the new UCS 2.0(2r), I believe, added IQN pools, so some of the screenshots have changed; the process is basically the same, however.

I've also been writing a lot of internal documentation lately, so the thought of writing more when I'm "off the clock" hasn't been fun.  That's coming to an end, so I'm going to start writing more here again.  I've come across some interesting things lately.

iSCSI Boot with ESXi 5.0 & UCS Blades

UPDATE: The issue was the NIC/HBA placement policy.  The customer had set a policy to place the HBAs first, then the iSCSI overlay NIC, then the remaining NICs.  When we moved the iSCSI NIC to the bottom of the list, the ESXi 5.0 installer worked just fine.  I'm not 100% sure why this fix works, but either way, it works.
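If you want to double-check the placement order yourself, a UCS PowerTool sketch like this should show it (the service profile name is a placeholder, and the property names are per my PowerTool install, so verify on yours):

# Sketch: list the vNICs on a service profile with their placement order, so you can
# confirm the iSCSI overlay NIC sits at the bottom of the list.
Get-UcsServiceProfile -Name "ESXi-iSCSI-Boot" |
    Get-UcsVnic |
    Sort-Object Order |
    Select-Object Name, Order, SwitchId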

At a recent customer site I was trying to configure iSCSI boot of ESXi 5.0 on a UCS blade, a B230 M2.  To make a long story short, it doesn't fully work and isn't officially supported by Cisco.  In fact, NO blade models are supported for ESXi 5.0 iSCSI boot by Cisco.  They claim a fix is on the way, and I will post an update when there is one.

Here is the exact issue, and my original thoughts, in case it helps anybody:

We got an error installing ESXi 5 to a NetApp LUN: "Expecting 2 bootbanks, found 0" at 90% of the ESXi install. The blade is a B230 M2.

The LUN is seen in BIOS as well as by the ESXi 5 installer.  I even verified the “Details” option, and all the information is correct.

Doing an Alt-F12 during the install and watching the logs more closely, at ~90% the installer appears to unload a module that, judging by its name, is some sort of VMware Tools-type package.  As SOON as it does that, the installer claims there is no IP address on the iSCSI NIC and begins to look for DHCP.  The issue is that during the configuration of the service profile and the iSCSI NIC, at no time did we choose DHCP; we chose static.  (We even tried Pooled.)  Since there is no DHCP server in that subnet, it doesn't pick up an address and thus loses connectivity to the LUN.

So we rebooted the blade after the error, and ESXi 5 actually loads with no errors.  The odd thing is that the root password we specified isn't set; it's blank, like ESXi 4.x used to be.

So an interesting question is: what happens during that last 10% of the ESXi 5 installation??  Since it boots cleanly, it almost seems like it does a sort of "sysprep" of the OS, i.e., all the configuration details.  If that's the only issue, then it might technically be OK.  However, I don't get the "warm and fuzzies."  My concern is that, maybe not today but down the road, some module that wasn't loaded correctly will come back to bite the client.

Also, what happens in that last 10% that's different from ESXi 4.x??  We were able to load 4.1 just fine with no errors.

Again, we called Cisco TAC and were told that ESXi 5 iSCSI booting isn't supported on any blade.  They do support 4.1, as well as Windows and a variety of Linux distros.

Enabling Jumbo Frames in a Flexpod Environment

Update: I have fixed the 5548 section; I was missing the last two lines.

This post will help you enable jumbo frames in your FlexPod environment. It will also work for just about any UCS-based environment; however, you will have to check how to enable jumbo frames on your particular storage array.

This post assumes a few things:

The environment is running Nexus 5548 switches.
The user needs to set up jumbo frames on the NetApp for NFS/CIFS shares.
The NetApp has VIF or MMVIF connections for said NFS/CIFS connections.

Cisco UCS Configuration 

-Log in to UCSM and click on the LAN tab.
-Expand LANs & LAN Cloud.
-Click on QoS System Class and change the "Best-Effort" MTU to 9216.

NOTE: You need to type the number in; it's not one of the values that can be selected from the drop-down.

-Expand the Policies section on the LAN tab.  Right-click on QoS Policies and click "Create new QoS Policy".  Call it "Jumbo-Frames" or something similar.
-On the vNIC template, or the actual vNIC on the service profile, set the "QoS Policy" to the new policy.
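If you'd rather script that MTU change, UCS PowerTool can do it in one line.  This is a sketch; I believe the Best-Effort system class has its own Get/Set cmdlet pair, but verify the names against your PowerTool version.

# Same change via UCS PowerTool (verify cmdlet names on your PowerTool version):
# raise the Best-Effort system class MTU to 9216.
Get-UcsBestEffortQosClass | Set-UcsBestEffortQosClass -Mtu 9216 -Force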

ESX/ESXi Configuration

-Either SSH or console into the ESX host.  If you're using ESXi, you'll need to ensure local or remote Tech Support Mode is enabled.
-We need to set the vSwitch that the jumbo-framed NICs will be on to allow jumbo frames.
          Type esxcfg-vswitch -l to find the vSwitch we need to modify.
          Type esxcfg-vswitch -m 9000 vSwitch# (replace # with the actual number).
          Type esxcfg-vswitch -l again; you should now see the MTU set to 9000.

-We now need to set the actual VMkernel NICs.

          Type esxcfg-vmknic -l to find the vmk's we need to modify.
          Type esxcfg-vmknic -m 9000 <portgroup name> (this is the portgroup that the vmk is part of).
          Type esxcfg-vmknic -l and verify that the MTU is now 9000.

Note: If you're using dvSwitches, you can set the MTU size through the VI Client.
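If you'd rather avoid the console entirely, the same settings can be pushed with PowerCLI.  A rough sketch; the host, vSwitch, and portgroup names are placeholders for your environment.

# PowerCLI sketch: set a 9000 MTU on a standard vSwitch and its storage vmknics.
# Host, vSwitch, and portgroup names are placeholders.
$vmhost = Get-VMHost esx01.lab.local

Get-VirtualSwitch -VMHost $vmhost -Name vSwitch1 |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false

Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
    Where-Object { $_.PortGroupName -match "NFS|iSCSI" } |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false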

5548 Configuration 

-Log in to the 5548 switch on the "A" side.
-Type the following:

system jumbomtu 9216
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
multicast-optimize
exit
system qos
service-policy type network-qos jumbo
exit
copy run start

-Repeat on the "B" side.

NetApp Configuration 

-Log in to the filer.
-Type ifconfig -a to verify which interfaces need to run jumbo frames.
-Type ifconfig <VIF_NAME> mtusize 9000

NOTE: You need to make sure you enable jumbo frames not only on the VLAN'd VIF but also on the "root" VIF.  Also remember that ifconfig changes on a 7-Mode filer don't persist across reboots; add the same mtusize setting to the VIF lines in /etc/rc.