So I’ve heard a few people talk about and ask about “Fog Computing” lately. Honestly, it’s a term I only heard fairly recently, even though it has been around since earlier in the year.
What is it?
It started with the marketing minds at Cisco. It’s the architectural concept of putting more data, compute, applications, and intelligence closer to the end users, at the edge of the network.
Why do we need something new? Isn’t the cloud fixing all my issues?
Traditionally we have large, centralized datacenters that have a ton of horsepower and do all of our computing and work for us. End devices and users have to connect through the network to reach the datacenter, have the computational work done, and get the results sent back. One of the biggest issues I’ve seen companies in this country run into is network bandwidth in the WAN or pseudo-LAN. With the big push for more connected devices and endpoints we are only increasing our network requirements, and upgrading is either very expensive or just isn’t possible.
“The Cloud” doesn’t always fix all of these issues. Typically your cloud provider still has large datacenters in geographically diverse areas. They tend to have a bit more capital to buy really big network pipes, but that may not be enough. No matter the size of the pipe, things like latency or inefficient routing can be major headaches.
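To see why latency hurts even on a huge pipe, here’s a quick back-of-the-envelope sketch. The RTT and round-trip count are assumptions picked for illustration, not measurements:

```python
# Why latency matters even with a fat pipe: a chatty exchange
# pays the WAN round-trip time on every request/response pair.
# (RTT and round-trip count below are assumed for illustration.)

rtt_ms = 40        # assumed round-trip time to the remote datacenter
round_trips = 50   # assumed number of request/response exchanges

# Time spent doing nothing but waiting on the network, in seconds.
wan_wait = rtt_ms * round_trips / 1000
print(f"{wan_wait:.1f} s of pure network wait")  # prints "2.0 s of pure network wait"
```

No amount of extra bandwidth makes that two seconds go away; only moving the work closer to the endpoint does.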
Alright, so how does “Fog Computing” help us??
What Cisco is calling Fog Computing is a bit of a play on words, bringing the “Cloud” everywhere. (Get it??) Anyway, it involves using the new generation of Cisco gear, much of which may already be at your edge. What Cisco wants to help you do is perform a lot of the work right at the edge instead of sending a bunch of raw data to your datacenters only to have results sent back.
Cisco has used the example of a jet engine generating ~10TB of real-time data about performance and condition metrics in 30 minutes. You don’t want to ship all of that to the cloud/datacenter only to get a simple reply back. I like the example, but I’ve seen some more “real-world” cases where edge computing would help.
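To put that figure in perspective, here’s a quick calculation of the sustained WAN throughput you’d need just to ship that raw telemetry off-box (taking the ~10TB/30min number at face value, in decimal terabytes):

```python
# Back-of-the-envelope: sustained bandwidth needed to move
# ~10 TB of raw telemetry off the edge in 30 minutes.
TB = 10
seconds = 30 * 60

bits = TB * 1e12 * 8          # 10 decimal TB expressed in bits
gbps = bits / seconds / 1e9   # sustained throughput in Gbit/s

print(f"~{gbps:.1f} Gbit/s sustained")
```

That works out to roughly 44 Gbit/s, sustained, for a single engine. Not many WAN links are going to swallow that.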
For example, a power company would rather collect its smart-meter metrics at the edge, and traffic lights could do real-time decision making locally, rather than ship the data back over the WAN. I could also see this being useful for distributed gaming engines, and possibly VDI workloads. What about something as simple as DMZ-type services, web services, or VPN solutions?
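A minimal sketch of the idea behind the smart-meter example: aggregate the raw readings on the edge node and only ship a compact summary over the WAN. The readings and summary fields here are made up purely for illustration:

```python
# Sketch: summarize raw smart-meter readings at the edge, so only
# a small record crosses the WAN instead of every sample.
# (Sample values and summary fields are invented for illustration.)

def summarize(readings):
    """Reduce a batch of per-meter kWh samples to one summary record."""
    return {
        "samples": len(readings),
        "total_kwh": round(sum(readings), 2),
        "min_kwh": min(readings),
        "max_kwh": max(readings),
    }

# Raw samples stay on the edge node...
raw = [0.42, 0.39, 0.45, 0.41, 0.44, 0.40]

# ...and only this small record gets shipped upstream.
summary = summarize(raw)
print(summary)
```

Six samples in, one small dictionary out; scale that to millions of meters and the WAN savings are the whole point.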
How are we going to do this?
Cisco is leveraging its new IOx platform, Cisco’s new IOS/Linux operating system. It is really the IOS code running on a Linux/x86 code base, much like IOS XE, which has been running on the ASR platform for some time now. I recently attended a call about the new ISR 4400 line of Integrated Services Routers. These can run various “Service Containers,” which are basically VMs, right on the router. In addition, you can actually put UCS E-Series servers in them, which are full-blown x86 servers. This brings real compute and storage power right to the edge of your network.