
The Rise of the Cloud Native Cloud

With fewer resources to manage than the cloud pioneers require, you get into that new cluster much faster.
Jun 12th, 2023 9:38am by

Kubefirst delivers instant GitOps platforms made from popular free and open source cloud native tools. We’ve supported the Amazon Web Services (AWS) cloud for years and love how well our platform runs on Elastic Kubernetes Service (EKS). We recently announced our expanded support for the new Civo cloud, a cloud native cloud that runs all of its infrastructure on Kubernetes. There are some pretty staggering differences between the two clouds, yet some things remain virtually identical, and it got me thinking about the wild journey of how we got here as an industry.

Remember the Original Public Clouds?

Remember when public clouds were new? Think back to computing just 10 years ago. In 2013, AWS was working to extend its stronghold on the new cloud computing space with the self-service public cloud infrastructure model of Elastic Compute Cloud (EC2). Google Cloud Platform and Microsoft Azure were just a couple of years removed from announcing their own offerings, further solidifying the architectural shift away from self-managed data centers.

Despite the higher compute cost of public cloud infrastructure compared to its on-premises equivalents, the overall time and money saved by leveraging repeatable on-demand cloud infrastructure prompted companies to begin tearing down their rack space and moving their infrastructure to the public clouds. The self-service model gave more power to the developer, fewer handoffs in the DevOps space and more autonomy to engineering teams. The public cloud era was here to stay.

The IaC Revolution

Although the days of sluggish infrastructure IT tickets were now a thing of the past, the potential of the cloud still remained untapped for many organizations. True to Tesler’s Law, the shift toward the public cloud hadn’t exactly removed system complexity — the complexity had just found a new home.

To tackle that complexity, we needed new automated ways to manage our infrastructure, and the era of Infrastructure as Code (IaC) rose to the challenge. New technologies like CloudFormation, Ansible, Chef, Puppet and Terraform each did their best to meet the need, but the infrastructure story from company to company generally remained a complex and bespoke operation.

The Container Revolution

Around the same time, another movement was sweeping through the application space: containerization. Largely Docker-based at the time, containerizing your apps was a new way to create a consistent application runtime environment, isolating the application from the infrastructure it runs upon.

With containerization, we were suddenly able to run an app the same way on different operating systems or distributions, whether running on your laptop, on on-premises infrastructure or in the public cloud. This solved a lot of problems that companies suddenly had as their infrastructure requirements began to dramatically shift in new directions.
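
As a minimal sketch of that workflow, assuming a hypothetical app with a Dockerfile in the current directory (the image name, port and registry are purely illustrative), the same image can be built once and then run unchanged on a laptop, an on-premises server or a cloud VM:

    # Package the app and its runtime into an image
    docker build -t my-app:1.0 .

    # Run it locally, mapping the container's port 8080 to the host
    docker run --rm -p 8080:8080 my-app:1.0

    # Push the identical artifact to a registry so any environment can pull it
    docker tag my-app:1.0 registry.example.com/my-app:1.0
    docker push registry.example.com/my-app:1.0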

Organizations with classic monolithic applications began exploring how container-based microservices could help ease their software development and scaling woes. As the containerized world evolved and teams started building containerized microfrontends making calls to containerized microbackends, the sprawl of micro products started to become a lot to manage. This was felt particularly in the management of applications, infrastructure, secrets and observability at scale.

The Orchestration Battle

With the motion to put applications into containers and the resulting explosion of microservices and containerized micro products came a new challenge: managing all of them.

HashiCorp Nomad, Docker Swarm and Google’s Kubernetes (now part of CNCF) swiftly found their way to the conference keynote stages.

Each had its distinct advantages, but Kubernetes rose to the top with its declarative design, operating system and cloud portability, in addition to an unprecedentedly strong user community. The YAML-based system made it easy to organize your desired state into simple files that represent everything an application needs to work. It could be run in any cloud, on your on-premises infrastructure or even on your laptop, and it boasts a bustling community of cloud native engineers who share a uniform vision for modern solutions.
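
For a concrete sense of that declarative model, here is a minimal sketch assuming a hypothetical web app: a small manifest describes the desired state (three replicas of a container image), and a single command asks Kubernetes to make the cluster match it. The names, image and filename are illustrative.

    # deployment.yaml — the desired state, described declaratively
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
            - name: hello-web
              image: nginx:1.25
              ports:
                - containerPort: 80

    # Apply it; Kubernetes continuously reconciles the cluster toward this state
    kubectl apply -f deployment.yaml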

To Kubernetes Goes the Spoils

Cloud native engineers were quick to identify that software running inside Kubernetes was much easier to manage than software running outside of it. Opinions began to form that if your product didn’t have a Helm chart (Helm being the Kubernetes package manager), it probably wasn’t very desirable to the cloud native engineers responsible for platform technology choices. After all, if you need to install complex third-party software, your choices are a Helm install command that takes seconds to run or pages upon pages of installation guides and error-prone instructions.
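
To make that contrast concrete, here is roughly what the Helm path looks like for Argo Workflows, one of the tools mentioned later in this article. The release and namespace names are illustrative; check the chart’s documentation for the values you’ll actually want to set.

    # Add the community chart repository and install in one short sequence
    helm repo add argo https://argoproj.github.io/argo-helm
    helm repo update
    helm install argo-workflows argo/argo-workflows --namespace argo --create-namespace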

Opportunistic software vendors were quick to pick up on this trend and feverishly began rearchitecting their systems to be installable by Helm and operational on Kubernetes. Delivering complex, multicomponent software packages with intricate microarchitectures that remain easily installable in any cloud environment has long been the dream of software delivery teams, and it has finally become a reality with Kubernetes at the helm.

How Complex Does Your Cloud Need to Be?

We first built kubefirst to provision instant cloud native (Kubernetes) platforms in the world’s largest public cloud, AWS, and it runs very well there. The maturity of the AWS cloud is largely unparalleled. If you need to accommodate large swaths of Fortune 500 complexities, Federal Information Processing Standards (FIPS)-compliant endpoints from all angles, extreme scales with enormous data volume or some other nightmare of this type, choosing one of the “big 3” (AWS, Google Cloud or Microsoft Azure) is a pretty easy instinct to follow.

If you’re working in this type of environment, kubefirst is lightning-fast and can turn 12 months of platform building into a single 30-minute command (kubefirst aws create).

We still love the big clouds. However, when we asked our community what clouds we should expand into, we weren’t too surprised to find a clamoring of interest for a simpler cloud option focused on managed Kubernetes. Newer cloud providers like Civo, Vultr, DigitalOcean and others of this ilk boast blazing-fast cluster provisioning times with significantly reduced complexity. With fewer resources to manage than the cloud pioneers require, they get you into that new cluster much faster.

Let’s break this down in terms of Terraform cloud resources, the code objects in your infrastructure as code. To create a new kubefirst instant platform from scratch in AWS, our kubefirst CLI needs to provision 95 AWS cloud resources. This includes everything — the VPC, subnets, key management service keys, state store buckets, backends, identity and access management (IAM) roles, policy bindings, security groups and the EKS cluster itself. Many of these resources are abstracted behind Terraform modules within the kubefirst platform, so the complexity of the cloud has been heavily reduced from the platform engineer’s perspective, but there’s still quite a bit of “cloud going on.” It’s also a very sophisticated and enterprise-ready setup if that’s what your organization requires.
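
If you want to see that resource count for yourself, any Terraform-managed workspace is easy to interrogate; the module address in the last command is hypothetical and will differ in kubefirst’s actual modules.

    # List every resource Terraform is managing, then count them
    terraform state list
    terraform state list | wc -l

    # Drill into a single resource, such as an EKS cluster
    terraform state show 'module.eks.aws_eks_cluster.cluster'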

But there’s a cost for this sophistication. To provision these 95 resources and get you into your cluster, you’ll have to wait about 25 fully automated minutes, and a lot of that is waiting on cluster provision time. It takes about 15 minutes to provision the master control plane and another 10 to provision and attach the node groups to it. If you need to destroy all these resources, it will take another 20 minutes of (automated) waiting.

But to have the same kubefirst platform in Civo, you only need to manage three Terraform resources instead of 95, and instead of the 45 minutes it takes to provision and destroy, you could do the same in about four minutes. When infrastructure is part of what you’re changing and testing, this is an enormously consequential detail for a platform team.

The Rise of Platform Engineering and the Cloud Native Cloud

Platform engineering is an emerging practice that allows organizations to modernize software delivery by establishing a platform team to build a self-service developer platform as their product. The practice requires that platform teams iterate regularly on the provisioning of infrastructure, cloud native application suites, application CI/CD, and Day 2 observability and monitoring. As provisioning entire software development ecosystems over and over becomes the new normal, spending 45 minutes between iterations instead of four can be a costly detail for your platform team’s productivity.

If you fear that you will eventually need the complexities of “the big 3” clouds, that doesn’t mean that you need to borrow that cloud complexity today. Kubefirst is able to abstract the cloud from the platform so you can build your platform on kubefirst civo today and move it to kubefirst aws tomorrow with all of the same cloud native platform tools working in all the same ways.

The Kubefirst Platform on the Cloud Native Clouds

Kubefirst provisions instant, fully automated, open source cloud native platforms on AWS, Civo, Vultr (beta), DigitalOcean (beta) and on the localhost with k3d Kubernetes. Civo Cloud is offering a one-month $250 free credit so you can try our instant platform on its cloud for free.

To create a new Civo account, add a domain, configure the nameserver records at your domain registrar, then run kubefirst civo create (full instructions when using Civo with GitHub, and with GitLab).
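
A typical invocation looks something like the sketch below. The flag names and values are illustrative rather than authoritative, so lean on the kubefirst docs linked above for the exact options your version expects.

    # Flag names are illustrative — consult the kubefirst docs for the exact options
    kubefirst civo create \
      --domain-name example.com \
      --github-org your-org \
      --alerts-email you@example.com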

Within a few minutes you’ll have:

  • A gitops repo added to your GitHub/GitLab that powers your new platform so you can add your favorite tools and extend the platform as you need.
  • A Civo cloud and Kubernetes cluster provisioned with and configured by Terraform IaC.
  • A GitOps registry of cloud native platform application configurations, preconfigured to work well with each other.
  • HashiCorp Vault secrets management with all the platform secrets preconfigured and bound to their respective tools.
  • A user management platform with single sign-on (SSO) for your admins and engineers and an OpenID Connect (OIDC) provider preconfigured to work with all of your platform tools.
  • An example React microservice with source code that demonstrates GitOps pipelines and delivery to your new Kubernetes development, staging and production environments.
  • An Argo Workflows library of templates that conduct GitOps CI and integrate the Kubernetes native CI with GitHub Actions/GitLab pipelines.
  • Atlantis to integrate any Terraform changes with your pull or merge request workflow so that infrastructure changes are automated and auditable to your team.
  • Self-hosted GitLab/GitHub runners to keep your workloads cost-free and unlimited in use.
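
Once the installation finishes, a quick sanity check from your terminal shows the pieces above running in the new cluster. The argocd namespace name below is an assumption based on common defaults.

    # Confirm the platform workloads are up
    kubectl get pods --all-namespaces

    # If Argo CD is managing the GitOps registry, list the applications it has synced
    kubectl get applications -n argocd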

And with kubefirst, you can throw away your production cluster and have the next iteration available just a couple of minutes later.

The rise of the cloud native cloud is here.
