Networking

How Overlay Networks Pave the Way for Seamless Hybrid Clouds

Jun 3rd, 2016 6:36am by Mathew Lodge

Mathew Lodge
Mathew Lodge has more than 20 years of diverse experience in cloud computing and product leadership. He is COO of Weaveworks, and most recently was VP of VMware’s Cloud Services group. He has built compilers and distributed systems for projects like the International Space Station, helped connect six countries to the Internet for the first time, and managed a $630 million router product line at Cisco. Prior to VMware, Mathew was Senior Director at Symantec in its $1 billion+ information management group.

I’ve seen an interesting trend emerge over the last few months: Containers and container overlay networks are making hybrid cloud (a mixture of on-premises data centers and off-premises public clouds) easier.

Many development teams take the opportunity to update or refactor an application when adopting public cloud. But public clouds don’t work the same way as traditional data centers: they offer new application services not available on-premises and have different idioms for things like scale, redundancy and recovery. And when development teams refactor and build new components, they’re increasingly containerizing as they do it, both for the speed and portability benefits and because they want to evolve toward a microservices architecture.

Weaveworks pioneered overlay container networking early on as a way to make life simpler for application developers: Instead of containers trying to share the IP address of the VM or server (the Docker host), every container has its own IP address on a container-only virtual network. Processes inside that container can use standard ports like 80 and 443, and simple DNS-based service discovery using container names makes it trivial to find the IP address of any other container regardless of where it is.
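To make that concrete, here is a minimal sketch of what service discovery looks like from inside a container on such a network. The container name `billing`, the port and the request are purely illustrative; the point is that the application uses nothing beyond ordinary DNS and TCP:

```python
import socket

# Hypothetical setup: another container on the overlay network was started
# with the name "billing". On a DNS-enabled container network, that name
# resolves directly to the container's own IP address.
info = socket.getaddrinfo("billing", 80, socket.AF_INET, socket.SOCK_STREAM)
ip, port = info[0][4]
print(f"'billing' resolves to {ip}")

# Connect on the standard port, exactly as if the service had a machine of
# its own -- no port remapping, no service registry client library.
with socket.create_connection((ip, port), timeout=5) as conn:
    conn.sendall(b"GET / HTTP/1.0\r\nHost: billing\r\n\r\n")
    print(conn.recv(1024).decode(errors="replace"))
```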

As it turns out, these qualities also make the transition to hybrid cloud simpler.

Hybrid Cloud is the New Normal for Enterprises

It’s extremely unusual to refactor an entire enterprise application, move all of it to the public cloud, or transition to microservices overnight. Some parts of the app will continue to run in the data center, either because there is no good reason to move them or because there are simply better things to do with the team’s time than a complex migration that adds no new capabilities. Unlike start-ups, enterprises have no “clean slate.”

So new and refactored application components end up in containers. Some of them will run in the public cloud, some will run in the data center, and other parts of the application will be completely uncontained.

The question then becomes how to weave together all the containers running on and off premises and connect them to other (uncontained) services, without this turning into a configuration hairball and security nightmare.

Container Virtual Networking to the Rescue

Container overlay networks are essentially the de facto standard now because they avoid the configuration hairball. The container virtual network rides on top of the underlying IP networks, so it looks the same to the application regardless of differences in the underlying network technology. A data center network doesn’t work the same way as, say, AWS Virtual Private Cloud. Essentially, the problem of managing the differences is pushed down into the container networking layer. That means there is no configuration or code required in the application itself: it can just use regular networking constructs like TCP/IP and DNS.

In the case of Weave Net, the overlay network just looks like an Ethernet network. On each Docker host, the container has a virtual Ethernet interface, just like the eth0 interface developers are used to seeing on Linux machines, which is wired into a virtual Ethernet switch. The container doesn’t have to care about how packets make it to other containers, or which host they are on, or how IP addresses are allocated. Broadcast and multicast just work like you’d expect.
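As a sketch of what “multicast just works” means in practice, the receiver below joins a multicast group using only the standard library, exactly as it would on a physical Ethernet LAN. The group address and port are arbitrary illustrative values; a sender would simply `sendto()` datagrams to the same group and port.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5000  # illustrative multicast group and port

# Receiver: bind to the port and join the multicast group on the container's
# virtual Ethernet interface. On an overlay that behaves like an Ethernet
# network, this is the same code you'd run on a physical LAN.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
membership = struct.pack("4s4s",
                         socket.inet_aton(GROUP),
                         socket.inet_aton("0.0.0.0"))  # any local interface
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(4096)
print(f"received {len(data)} bytes from {sender}")
```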

What about Security?

Many overlay networks store state information in an external cluster store, such as etcd, Consul or ZooKeeper. This means that every Docker host in the network has to have credentials for talking to the cluster store. Two security issues stem from this: securely managing the credentials (storage and credential rotation), and securing the connection from the host to the store. Both are aggravated in a hybrid cloud scenario because the hosts sit in different security domains.


There are two solutions to this: manage the credentials and connections with configuration management tools, or eliminate the cluster store. Weave Net takes the latter route: Network state is cached on each host by Weave and is eventually consistent across the cluster, rather than being stored centrally. This also makes the cluster more reliable in the event of connectivity problems or partitions (when the container network is split into multiple pieces due to link failure or congestion).

Security of the container network traffic can be handled by the overlay network itself or by the underlying network. An example of the latter is using IPSec on the link between the data center and the public cloud provider.

Examples in the Real World

The International Securities Exchange (ISE) operates options exchanges and uses containers and overlay networking to implement novel disaster recovery for its “Anywhere Exchange” service. The application runs inside ISE’s data centers as well as at Amazon Web Services so that it can offer full geographically-distinct disaster recovery.

This presented a new challenge: How to distribute market data feeds to all the components, regardless of where they run? ISE turned to Weave Net to solve this problem, using it to carry the multicast data feed and deliver it to exchange components running in AWS and ISE’s data centers.

Need for Speed: Aren’t Overlay Networks Slow?

But wait a second, an overlay means you’re adding a packet header (encapsulation) somewhere, so what does that do to the speed of the network?

The irony is that overlays are as old as computer networking and have endured and thrived because the overhead is slight and well worth the upside. IP networking, for example, was initially delivered over X.25 networks as an overlay, and then later as an overlay on Ethernet – and it still works this way today.

When using a container overlay network, packets between containers on the same host have no encapsulation overhead because they go over that virtual Ethernet bridge. To get between hosts, packets are placed in a tunnel. The most popular tunnel encapsulation, chosen for efficiency, is VXLAN: it wraps each Ethernet frame in an ordinary UDP/IP packet, so there is only a small, fixed amount of extra header per packet. Switching hardware has VXLAN support built in, and server NICs increasingly offer hardware acceleration for it.
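To put a number on “low overhead,” here is a back-of-the-envelope calculation of the fixed header cost VXLAN adds to each tunneled packet, assuming a plain IPv4 underlay and the common 1500-byte Ethernet MTU:

```python
# Outer headers added by VXLAN encapsulation (IPv4 underlay, no VLAN tag):
OUTER_ETHERNET = 14  # outer Ethernet header
OUTER_IPV4     = 20  # outer IPv4 header
OUTER_UDP      = 8   # outer UDP header
VXLAN          = 8   # VXLAN header (flags + 24-bit virtual network ID)

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN  # 50 bytes

MTU = 1500  # a typical Ethernet MTU for the inner traffic
print(f"{overhead} bytes of encapsulation per packet, "
      f"roughly {overhead / MTU:.1%} of a {MTU}-byte packet")
# -> 50 bytes of encapsulation per packet, roughly 3.3% of a 1500-byte packet
```

In practice the cost shows up less as raw bytes on the wire and more as CPU time spent encapsulating, which is exactly what the NIC offloads mentioned above address.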

OK, so how much impact does encapsulation really have? In testing conducted by Weaveworks engineers, we were able to push 7Gbit/sec of traffic over a 10G network from a single container running in an AWS cc4.8xlarge “Network optimized” instance. We chose this instance size because it occupies the entire server, so there are no “noisy neighbor” effects.

Conclusion

The adoption of containers is helping organizations deliver new cloud applications faster, and in enterprises a hybrid of on-premises and public cloud deployment is becoming the new normal. Overlay container networking makes the network as portable as the application, reducing complexity and eliminating configuration hairballs in a secure fashion.

Weaveworks is a sponsor of The New Stack.

Feature image: Street-sign yarn bomb by Knitorious M.E.G, Richmond, Virginia.
