An Overview of Containerization Technologies | The New Stack
https://thenewstack.io/containers/

A CTO’s Guide to Navigating the Cloud Native Ecosystem
https://thenewstack.io/a-ctos-guide-to-navigating-the-cloud-native-ecosystem/ (Tue, 13 Jun 2023)

While container and cloud technology are increasingly mature, there are still a lot of different software, staffing and architecture considerations that CTOs must address to ensure that everything runs smoothly and operates together.

The Gartner “A CTO’s Guide to Navigating the Cloud Native Container Ecosystem” report estimates that by 2028, more than 95% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 50% in 2023.

This level of adoption means that organizations must have the right software to effectively manage, monitor and run container-based, cloud native environments. And there is a multitude of options for CTOs and enterprise architects (EAs) to sift through, which makes it hard to get environments level-set and to standardize processes.

“Despite the apparent progress and continued industry consolidation, the ecosystem remains fragmented and fast-paced. This makes it difficult for EAs and CTOs to build robust cloud native architectures and institute operational governance,” the authors state.

As container adoption expands for cloud native environments, more IT leaders will see an increase in both vendor and open source options. Such variety makes it harder to select the right tools to run a cloud native ecosystem and stretches out the evaluation process.

Here’s a look at container ecosystem components, software offerings and how CTOs can evaluate the best configuration for their organization.

What Are the Components of Container-Based Cloud Native Ecosystems?

Gartner explains that “containers are not a monolithic technology, the ecosystem is a hodgepodge of several components vital for production readiness.”

The foundation of a containerized ecosystem includes:

  • Container runtime lets developers deploy applications, configurations and other container image dependencies.
  • Container orchestrator supports features for policy-based deployment, application configuration management, high availability cluster establishment and container integration into overall infrastructure.
  • Container management software provides a management console, automation features, plus operational, security and developer tools. Vendors in this sector include Amazon Web Services (AWS), Microsoft, Google, Red Hat, SUSE and VMware.
  • Open source tools and code: The Cloud Native Computing Foundation is the governance body that hosts several open source projects in this space.

These components all help any container-based applications run on cloud native architecture to support business functions and IT operations, such as DevOps, FinOps, observability, security and APIs. There are lots of open source projects that support all of these architectural components and platform engineering tools for Kubernetes.

At the start of cloud native ecosystem adoption, Gartner recommends:

Map your functional requirements to the container management platforms and identify any gaps that can be potentially filled by open source projects and commercial products outlined in this research for effective deployments.

Choose open source projects carefully, based on software release history, the permissiveness of software licensing terms and the vibrancy of the community, characterized by a broad ecosystem of vendors that provide commercial maintenance and support.

What Are the Container Management Platform Components?

Container management is an essential part of cloud native ecosystems; it should be top of mind during software selection and container environment implementation. But legacy application performance monitoring isn’t suited for newer cloud technology.

Cloud native container management platforms include the following tools:

  • Observability enables a skilled observer — a software developer or site reliability engineer — to effectively explain unexpected system behavior. Gartner mentions Chronosphere for this cloud native container management platform.
  • Networking manages communication inside the pod, between containers in the cluster and with the outside world.
  • Storage delivers granular data services, high availability and performance for stateful applications with deep integration with the container management systems.
  • Ingress control gatekeeps network communications of a container orchestration cluster. All inbound traffic to services inside the cluster must pass through the ingress gateway.
  • Security and compliance provides assessment of risk/trust of container content, secrets management and Kubernetes configurations. It also extends into production with runtime container threat protection and access control.
  • Policy-based management lets IT organizations programmatically express IT requirements, which is critical for container-based environments. Organizations can use the automation toolchain to enforce these policies.

More specific container management platform components and methodologies include Infrastructure as Code, CI/CD, API gateways, service meshes and registries.

How to Effectively Evaluate Software for Cloud Native Ecosystems

There are two types of container platforms that bring all required components together: integrated cloud infrastructure and platform services (CIPS) and software for the cloud.

Hyperscale cloud providers offer integrated CIPS capabilities that allow users to develop and operate cloud native applications with a unified environment. Almost all of these providers can deliver an effective experience within their platforms, including some use cases of hybrid cloud and edge. Key cloud providers include Alibaba Cloud, AWS, Google Cloud, Microsoft Azure, Oracle Cloud, IBM Cloud and Tencent.

The second category comprises software vendors that offer on-premises and edge solutions and may offer either marketplace or managed services offerings in multiple public cloud environments. Key software vendors include Red Hat, VMware, SUSE (Rancher), Mirantis, HashiCorp (Nomad), etc.

The authors note that critical factors in platform provider selection include:

  • Automated, secure, and distributed operations
    • Hybrid and multicloud
    • Edge optimization
    • Support for bare metal
    • Serverless containers
    • Security and compliance
  • Application modernization
    • Developer inner and outer loop tools
    • Service mesh support
  • Open-source commitment
  • Pricing

IT leaders can figure out which provider has the most ideal offering if they match software to their infrastructure (current and future), security protocols, budget requirements, application modernization toolkit and open source integrations.

Gartner recommends that organizations:

Strive to standardize on a consistent platform, to the extent possible across use cases, to enhance architectural consistency, democratize operational know-how, simplify developer workflow and provide sourcing advantages.

Create a weighted decision matrix by considering the factors outlined above to ensure an objective decision is made.

Prioritize developers’ needs and their inherent expectations of operational simplicity, because any decision that fails to prioritize the needs of developers is bound to fail.

Read the full report to learn about ways to effectively navigate cloud native ecosystems.

Deploy a Kubernetes Development Environment with Kind
https://thenewstack.io/deploy-a-kubernetes-development-environment-with-kind/ (Sat, 10 Jun 2023)

Let me set the stage: You’re just starting your journey into Kubernetes and you’re thrilled at the idea of developing your first application or service. Your first step is to deploy a Kubernetes cluster so you can start building but almost immediately realize how challenging a task that is.

All you wanted to do was take those first steps into the world of container development but actually getting Kubernetes up and running in a decent amount of time has proven to be a bit of a challenge.

Would that there were something a bit kinder.

There is and it’s called kind.

From the official kind website: kind is a tool for running local Kubernetes clusters using Docker container “nodes.” kind was primarily designed for testing Kubernetes itself but may be used for local development or continuous integration.

Kind is one of the easiest ways of starting out with Kubernetes development, especially if you’re just beginning your work with containers. In just a few minutes you can get kind installed and running, ready for work.

Let me show you how it’s done.

What You’ll Need

You can install kind on Linux, macOS, and Windows. I’ll demonstrate how to install kind on all three platforms. Before you install kind on your operating system of choice, you will need to have both Docker and Go installed. I’ll demonstrate it on Ubuntu Server 22.04. If you use a different Linux distribution, you’ll need to alter the installation steps accordingly.
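
If you're not sure whether those prerequisites are already in place, a quick check from the terminal will confirm it (the reported versions will vary by system):

docker --version
go version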

Installing Docker

The first thing to do is install Docker. Here’s how on each OS.

Linux

Log into your Ubuntu instance and access a terminal window. Add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Install the necessary dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release git -y


Update apt:

sudo apt-get update


Install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.
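
If you'd rather not log out right away, starting a new shell with the docker group applied also works:

newgrp docker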

macOS/Windows

The easiest method of installing Docker on macOS and Windows is by way of Docker Desktop. You can download the installers for macOS Intel, macOS Apple Silicon, or Windows, double-click the files, and walk through the installation wizards.

Installing Go

Next, install Go. Here’s how.

Ubuntu Linux

To install Go on Ubuntu, open a terminal window and issue the command:

sudo apt-get install golang-go -y

macOS/Windows

To install Go on macOS or Windows, simply download and run the installer file which can be found for macOS Intel, macOS Apple Silicon, and Windows.

Installing kind

Now, we can install kind. Here’s how for each platform.

Linux

Download the binary file with the command:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64


Give the file the necessary permissions with:

chmod +x kind


Move it to /usr/bin with:

sudo mv kind /usr/bin/

macOS

Open the terminal application. For macOS Intel, download kind with:

[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-darwin-amd64


For Apple Silicon, issue the command:

[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-darwin-arm64


Give the file executable permissions with:

chmod +x kind


Move kind so that it can be run globally with the command:

mv ./kind /usr/local/bin/kind

Windows

Open the terminal window app. Download kind with:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64


Move the executable file to the directory of your choice with the command:

Move-Item .\kind-windows-amd64.exe c:\DIRECTORY\kind.exe


Where DIRECTORY is the name of the directory to house kind.

Create a Dev Environment

It’s now time to deploy your first Kubernetes cluster with kind. Let’s create one called tns-test with the command:

kind create cluster --name=tns-test


You should see the following output in the terminal window:

✓ Ensuring node image (kindest/node:v1.24.0) 🖼

✓ Preparing nodes 📦

✓ Writing configuration 📜

✓ Starting control-plane 🕹️

✓ Installing CNI 🔌

✓ Installing StorageClass 💾

Once the output completes, you’re ready to go. One thing to keep in mind, however, is that the command only deploys a single node cluster. Say you have to start developing on a multinode cluster. How do you pull that off? First, you would need to delete the single node cluster with the command:

kind delete cluster --name=tns-test


Next, you must create a YML file that contains the information for the nodes. Do this with the command:

nano kindnodes.yml


In that file, paste the following contents:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker


Save and close the file. You can then deploy with the command:

kind create cluster --name=tns-multi-test --config=kindnodes.yml


To verify your cluster is running, issue the command:

kind get clusters


You should see tns-multi-test in the output.

If you want to interact with kubectl, you first must install it. On Ubuntu, that’s as simple as issuing the command:

sudo snap install kubectl --classic


Once kubectl is installed, you can check the cluster info with a command like this:

kubectl cluster-info --context kind-tns-multi-test


You should see something like this in the output:

Kubernetes control plane is running at https://127.0.0.1:45465
CoreDNS is running at https://127.0.0.1:45465/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can now start developing on a multinode Kubernetes cluster, with full use of the kubectl command.
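
As one more sanity check, you can list the nodes; kind names them after the cluster, so with the configuration above you should see one control-plane node and one worker node:

kubectl get nodes --context kind-tns-multi-test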

Chainguard Improves Security for Its Container Image Registry
https://thenewstack.io/chainguard-improves-security-for-its-container-image-registry/ (Wed, 31 May 2023)

A year ago, Chainguard released Chainguard Images. These are container base images designed for a secure software supply chain. They do this by providing developers and users with continuously updated base container images with zero-known vulnerabilities. That’s all well and good, but now the well-regarded software developer security company has also upgraded how it hosts and distributes its Images to improve security.

Before this, Chainguard distributed its images using a slim wrapper over GitHub’s Container Registry. The arrangement allowed the company to focus on its tools and systems, enabling flexible adjustments to image distribution.

However, as the product gained traction and scaling became necessary, Chainguard ran into limitations. So, the business reevaluated its image distribution process and created its own registry. Leveraging the company’s engineering team’s expertise in managing hyperscale registries, Chainguard has built the first passwordless container registry, focusing on security, efficiency, flexibility and cost-effectiveness.

How It Works

Here’s how it works. For starters, for Identity and Access Management (IAM), Chainguard relies on short-lived OpenID Connect (OIDC) credentials instead of conventional username-password combinations. OIDC is an identity layer built on top of the OAuth 2.0 framework. To ensure the registry is only accessible to authorized Chainguard personnel, only the GitHub Actions workflow identity can push to the public Chainguard registry repository. This promotes a secure, auditable and accountable process for making changes to the repository.

On the user side, when pulling images, you can authenticate with a credential helper built into Chainguard’s chainctl CLI. This also relies on OIDC for authentication. With this approach, there are no long-lived tokens stored on the user’s computer. Both chainctl and the credential helper are aware of common OIDC-enabled execution environments such as GitHub Actions. With this, customers can also limit who and how images can be pulled.
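
As a rough illustration of the user side, pulling one of the public images is an ordinary registry pull. The image path below is an assumption based on Chainguard's public catalog rather than something stated in this article:

docker pull cgr.dev/chainguard/nginx:latest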

If your environment doesn’t support OIDC, the registry also offers long-lived authentication options. For the sake of your own security, I urge you to move to an OIDC-compliant process.

For now, existing Chainguard Images customers cannot push directly to the registry. It can currently be used only to host Chainguard-created and -managed Images.

As part of the Chainguard Enforce software supply chain control plane platform, the new Chainguard Registry supports CloudEvents to notify users of significant activities with their images. Customers can create subscriptions and receive event notifications for image pushes and pulls, including failures. They can leverage these events to initiate base image updates, conduct vulnerability scans, duplicate pushed images or audit system activities.

Cloudflare R2

Chainguard’s done this by building its own container image registry on Cloudflare R2. With this new method, the company has far greater control and has cut back considerably on its costs.

Why Cloudflare R2? Simple. It’s all about egress fees: the charges a cloud provider levies for external data transfer. Chainguard opted for Cloudflare R2 for image blob distribution because it offers zero egress-fee hosting and a fast, globally trusted distribution network, promising a sustainable model for hosting free public images without excessive costs or rate limitations.

This is a huge deal. As Jason Hall, a Chainguard software engineer, explained, “The 800-pound gorilla in the room of container image registry operators is egress fees. … Image registries move a lot of bits to a lot of users all over the world, and moving those bits can become very expensive, very quickly. In fact, just paying to move image bits is often the main cost of operating an image registry. For example, Docker’s official Nginx image has been pulled over a billion times, about 31 million times in the last week alone. The image is about 55 megabytes, so that’s 1.7 PB of egress. At S3’s standard egress pricing of $0.05/GB, that’s $85,000, to serve just the nginx image, for just one week.”
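
A quick back-of-the-envelope check of those figures (using decimal units, as the quote does) bears the math out:

echo "scale=2; 31000000 * 55 / 10^9" | bc          # weekly egress in petabytes: about 1.70
echo "scale=2; 31000000 * 55 * 0.05 / 10^3" | bc   # weekly cost in dollars at $0.05/GB: about 85250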

Companies that host registries have had to pay cloud providers for that egress, and you end up paying for it as image providers pass the costs along to you through paid plans or up-priced services.

Chainguard thinks Cloudflare R2 “fundamentally changes the story for image hosting providers and makes this a sustainable model for hosting free public images without imposing onerous costs or rate limits.” I think Cloudflare needs to pay its bills too, and eventually, there will be a charge for the service.

For now, though, Chainguard can save money and re-invest in further securing images. This sounds like a win to me. You can try Chainguard Images today to see if their security-first images work for you.

How to Protect Containerized Workloads at Runtime
https://thenewstack.io/how-to-protect-containerized-workloads-at-runtime/ (Tue, 30 May 2023)

Security is (finally) getting its due in the enterprise. Witness trends such as DevSecOps and the “shift left” approach — meaning to move security as early as possible into development pipelines. But the work is never finished.

Shift left and similar strategies are generally good things. They begin to address a long-overdue problem of treating security as a checkbox or a final step before deployment. But in many cases, that is still not quite enough for the realities of running modern software applications. The shift-left approach might cover only the build and deploy phases, for example, but not apply enough security focus to another critical phase for today’s workloads: runtime.

Runtime security “is about securing the environment in which an application is running and the application itself when the code is being executed,” said Yugal Joshi, partner at the technology research firm Everest Group.

The emerging class of runtime security tools and practices aims to address three essential security challenges in the age of containerized workloads, Kubernetes and heavily automated CI/CD pipelines, according to Utpal Bhatt, CMO at Tigera, a security platform company.

First, the speed and automation intrinsic to modern software development pipelines create more threat vectors and opportunities for vulnerabilities to enter a codebase.

Second, the orchestration layer itself, like Kubernetes, also heavily automates the deployment of container images and introduces new risks.

Third, the dynamic nature of running container-based workloads, especially when those workloads are decomposed into hundreds or thousands of microservices that might be talking to one another, creates a very large and ever-changing attack surface.

“The threat vectors increase with these types of applications,” Bhatt told The New Stack. “It’s virtually impossible to eliminate these threats when focusing on just one part of your supply chain.”

Runtime Security: Prevention First

Runtime security might sound like a super-specific requirement or approach, but Bhatt and other experts note that, done right, holistic approaches to runtime security can bolster the security posture of the entire environment and organization.

The overarching need for strong runtime security is to shift from a defensive or detection-focused approach to a prevention-focused approach.

“Given the large attack surface of containerized workloads, it’s impossible to scale a detection-centric approach to security,” said Mikheil Kardenakhishvili, CEO and co-founder of Techseed, one of Tigera’s partners. “Instead, focusing on prevention will help to reduce attacks and subsequently the burden on security teams.”

Instead of a purely detection-based approach, one that often burns out security teams and puts them in the position of being seen as bottlenecks or inhibitors by the rest of the business, the best runtime security tools and practices, according to Bhatt, implement a prevention-first approach backed by traditional detection response.

“Runtime security done right means you’re blocking known attacks rather than waiting for them to happen,” Bhatt said.

Runtime security can provide common services as a platform offering that any application can use for secure execution, noted Joshi, the Everest Group analyst.

“Therefore, things like identity, monitoring, logging, permissions, and control will fall under this runtime security remit,” he said. “In general, it should also provide an incident-response mechanism through prioritization of vulnerability based on criticality and frequency. Runtime security should also ideally secure the environment, storage, network and related libraries that the application needs to use to run.”

A SaaS Solution for Runtime Security

Put in more colloquial terms: Runtime security means securing all of the things commonly found in modern software applications and environments.

The prevention-first, holistic approach is part of the DNA of Calico Open Source, an open source networking and network security project for containers, virtual machines, and native host-based workloads, as well as Calico Cloud and Calico Enterprise, the latter of which is Tigera’s commercial platform built on the open source project it created.

Calico Cloud, a Software as a service (SaaS) solution focused on cloud native apps running in containers with Kubernetes, offers security posture management, robust runtime security for identifying known threats, and threat-hunting capabilities for discovering Zero Day attacks and other previously unknown threats.

These four components of Calico — securing your posture in a Kubernetes-centric way, protecting your environment from known attackers, detecting Zero Day attacks, and incident response/risk mitigation — also speak to four fundamentals for any high-performing runtime security program, according to Bhatt.

Following are the four principles to follow for protecting your runtime.

4 Keys to Doing Runtime Security Right

1. Protect your applications from known threats. This is core to the prevention-first mindset, and focuses on ingesting reliable threat feeds that your tool(s) continuously check against — not just during build and deploy but during runtime as well.
Examples of popular, industry-standards feeds include network addresses of known malicious servers, process file hashes of known malware, and the OWASP Top 10 project.

  2. Protect your workloads from vulnerabilities in the containers. In addition to checking against known, active attack methods, runtime security should proactively protect against vulnerabilities in the container itself and in everything that the container needs to run, including the environment.

This isn’t a “check once” type of test, but a virtuous feedback loop that should include enabling security policies that protect workloads from any vulnerabilities, including limiting communication or traffic between services that aren’t known/trusted or when a risk is detected (see the policy sketch after this list).

3. Detect and protect against container and network anomalous behaviors. This is “the glamorous part” of runtime security, according to Bhatt, because it enables security teams to find and mitigate suspicious behavior in the environment even when it’s not associated with a known threat, such as with Zero Day attacks.

Runtime security tools should be able to detect anomalous behavior in container or network activity and alert security operations teams (via integration with security information and event management, or SIEM, tools) to investigate and mitigate as needed.

  4. Assume breaches have occurred; be ready with incident response and risk mitigation. Lastly, even while shifting to a prevention-first, detection-second approach, Bhatt said runtime security done right requires a fundamental assumption that your runtime has already been compromised (and will be again). This means your organization is ready to act quickly in the event of an incident and minimize the potential fallout in the process.
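
To make the second point concrete, here is a minimal sketch of the kind of policy it describes: a default-deny rule that blocks all ingress traffic to pods in a namespace until explicit allow rules are added. It uses the plain Kubernetes NetworkPolicy API rather than Calico's richer policy resources, the namespace name is hypothetical, and a policy like this is only enforced when the cluster's network plugin (Calico, for example) supports network policies:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF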

Zero trust is also considered a best strategy for runtime security tools and policies, according to Bhatt.

The bottom line: The perimeter-centric, detect-and-defend mindset is no longer enough, even if some of its practices are still plenty valid. As Bhatt told The New Stack: “The world of containers and Kubernetes requires a different kind of security posture.”

Runtime security tools and practices exist to address the much larger and more dynamic threat surface created by containerized environments. Bhatt loosely compared today’s software environments to large houses with lots of doors and windows. Legacy security approaches might only focus on the front and back door. Runtime security attempts to protect the whole house.

Bhatt finished the metaphor: “Would you rather have 10 locks on one door, or one lock on every door?”

How to Containerize a Python Application with Paketo Buildpacks
https://thenewstack.io/how-to-containerize-a-python-application-with-packeto-buildpacks/ (Mon, 29 May 2023)

Containers have been in use for almost a decade, but containerizing applications can still pose challenges. More specifically, Dockerfiles, which dictate how container images are built, can be challenging to write properly. Even simple Dockerfiles can be problematic. A study found that nearly 84% of the projects analyzed had smells, which are quality problems, in their Dockerfiles.

In this article, I will demonstrate an alternative method to Dockerfiles for containerizing an application, following best practices, with just a single command. Before demonstrating this technique, let’s first look at the difficulties associated with containerizing applications using traditional approaches.

Great Dockerfiles Are Hard to Write

What’s so hard about Dockerfiles?

  1. It’s a craft: writing good Dockerfiles requires deep knowledge and experience. There are a number of best practices that must be implemented for every Dockerfile. Developers — who are generally the ones writing them — might not have the knowledge or resources to do it right.

  2. Security: they can be a security threat if not well written. For example, a common issue with Dockerfiles is that they often use the root user in their instructions, which can create security vulnerabilities and allow an attacker to gain full control over the host system.

  3. They are not natively fast: getting fast build times takes work, from using minimal base images and minimizing the number of layers to using build caching and setting up a multistage build.

Learning how to create the perfect Dockerfile can be enjoyable when working with one or two images. However, the excitement wanes as the number of images increases, requiring management across multiple repositories, projects, and stacks, as well as constant maintenance. This is where the open source project Paketo Buildpacks offers a solution.

An Easier Way

Before diving into the tutorial, let’s discuss the concept behind Buildpacks, an open source project maintained by the Cloud Native Computing Foundation.

Developed by Heroku, Buildpacks transform application source code into images that can run on any cloud platform. They analyze the code, identify what is needed to build and run the software, and then assemble all components into an image. By examining applications, Buildpacks determine the necessary dependencies and configure them in a series of layers, ultimately creating a container image. Buildpacks also feature optimization mechanisms to reduce build time.

While the Cloud Native Buildpacks project offers a specification for Buildpacks, it doesn’t supply ready-to-use Buildpacks; that’s what Paketo Buildpacks provide. This community-driven project develops production-ready Buildpacks.

Paketo Buildpacks adhere to best practices for each language ecosystem, currently supporting Java, Go, Node.js, .NET, Python, and PHP, among others. The community constantly addresses vulnerabilities in upstream language runtimes and operating system packages, saving you the effort of monitoring for susceptible dependencies.

Let’s Containerize a Python Application

There are two requirements to use this tutorial:

  1. Have Docker Desktop installed; here is a guide to install it.

  2. Have pack CLI installed; here is a guide to install it.

In this example, we will use a Python application. I provide a sample app for the sake of testing but feel free to use your own.

git clone git@github.com:sylvainkalache/Containerize-Python-app-with-Paketo.git && cd Containerize-Python-app-with-Paketo

Once you are in the application root directory, run the command:

pack build my-python-app --builder paketobuildpacks/builder:base

That’s the only command you need to create an image! Now you can run it as you would usually do.

docker run -ti -p 5000:8000 -e PORT=8000 my-python-app

Now let’s check that the app is working properly by running this command in another terminal:

$ curl 0:5000

Hello, TheNewStack readers!

$

You can continue developing your application, and whenever you need a new image, simply run the same pack build command. The initial run of the command might take some time, as it needs to download the paketobuildpacks/builder:base image. However, subsequent iterations will be much faster, thanks to advanced caching features implemented by buildpack authors.

Other Benefits of Using Paketo Buildpacks?

With increasing security standards, numerous engineering organizations have started to depend on SBOMs (software bills of materials) to mitigate the risk of vulnerabilities in their infrastructure. Buildpacks offer a straightforward approach to gaining insight into the contents of images through standard build-time SBOMs, which Buildpacks can generate in CycloneDX, SPDX, and Syft JSON formats.

You can try it on your image by using the following command:

pack sbom download my-python-app

Another benefit of using Paketo Buildpacks is that you will be using minimal images that contain only what is necessary. For example, while my image based on paketobuildpacks/builder:base was only 295MB, a bare python:3 Docker image is already 933MB.
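
You can compare sizes on your own machine with a standard image listing; the exact numbers will vary with your application's dependencies:

docker images my-python-app
docker images python:3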

Conclusion

Although Dockerfiles have served us well, they are not the most efficient use of a developer’s time. The need to manage and maintain Dockerfiles can become significant, especially with the rise of microservices and distributed architecture. By using Paketo Buildpacks, developers can build better images faster, giving them more time to focus on what adds more value to their projects. And the best part? While we used Python in this article, the same principle can be applied to any project with any supported stack.

Can Rancher Deliver on Making Kubernetes Easy?
https://thenewstack.io/can-rancher-deliver-on-making-kubernetes-easy/ (Sat, 27 May 2023)

Over the past few years, Kubernetes has become increasingly difficult to deploy. When you couple that with the idea that Kubernetes itself can be a challenge to learn, you have the makings of a system that could have everyone jumping ship for the likes of Docker Swarm.

I’m always on the lookout for easier methods of deploying Kubernetes for development purposes. I’ve talked extensively about Portainer (which I still believe is the best UI for container management) and have covered other Kubernetes tools, such as another favorite, MicroK8s.

Recently, I’ve started exploring Rancher, a tool that hasn’t (for whatever reason) been on my radar to this point. The time for ignoring the tool is over and my initial experience so far has been, shall I say, disappointing. One would expect a tool with a solid reputation for interacting with Kubernetes to be easy to deploy and use. After all, the official Rancher website makes it clear it is “Kubernetes made simple.” But does it follow through with that promise?

Not exactly.

Let me explain by way of walking you through the installation and the first steps of both Rancher on a server and the Rancher Desktop app.

One thing to keep in mind is that this piece is a work in progress and this is my initial experience with the tool. I will continue my exploration with Rancher as I learn more about the system. But this initial piece was undertaken after reading the official documentation and, as a result, made a few discoveries in the process. I will discuss those discoveries (and the results from them) in my next post.

I’m going to show you how I attempted to deploy Rancher on Ubuntu Server 22.04.

Installing Rancher on Ubuntu Server 22.04

Before you dive into this, there’s one very important thing you need to know. Installing Rancher this way does not automatically give you a Kubernetes cluster. In fact, you actually need a Kubernetes cluster already running. This is only a web-based GUI. And even then, it can be problematic.

The first step to installing Rancher on Ubuntu Server is to log into your Ubuntu server instance. That server must have a regular user configured with sudo privileges and a minimum of 2 CPU Core and 4 GB RAM.

Once you’ve logged in, you must first install a few dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Next, add the necessary GPG key with:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Add the official Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] &amp;&amp;
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 
| sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null


Update apt with:

sudo apt-get update


Install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with the command:

sudo usermod -aG docker $USER


Finally, log out and log back in for the changes to take effect.

Deploy Rancher

Now that Docker is installed, you can deploy Rancher with:

docker run -d --name=rancher-server --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.4.18


An older version of Rancher must be used because the latest version fails to start.

The deployment will take some time to complete. When it does, open a web browser and point it to http://SERVER (where SERVER is the IP address of the hosting server). You’ll be greeted by the welcome screen, where you must set a password for the admin user (Figure 1).

Figure 1: Setting a password for the default Rancher admin user.

In the next window (Figure 2), you must set the Rancher Server URL. If you’ll be using an IP address, leave it as is. If you’ll use a domain, change the entry and click Save URL.

Figure 2: Setting the URL for the Rancher server.

You will then be prompted to add a cluster (Figure 3). If your cluster is in-house, select “From existing nodes”. If you’ll be using a cluster from a third party, select the service.

Figure 3: Selecting the cluster type for your deployment.

In the resulting window (Figure 4), fill out the necessary details and configure the cluster as needed. At the bottom of the window, click Next.

Figure 4: The Custom cluster configuration window.

You will then be given a command to run on your Kubernetes cluster (Figure 5).

Figure 5: The command must be run on a supported version of Docker (I used the latest version of Docker CE).

After the command completes on the Kubernetes server, click Done.

At this point, the cluster should register with Rancher. “Should” being the operative term. Unfortunately, even though my Kubernetes cluster was running properly, the registration never succeeded. Even though the new node was listed in the Nodes section, the registration hadn’t been completed after twenty minutes. This could be because my Kubernetes cluster is currently being pushed to its limits. Because of that, I rebooted every machine in the cluster and tried again.

No luck.

My guess is the problem with my setup is the Kubernetes cluster was deployed with MicroK8s and Rancher doesn’t play well with that system. Although you can deploy Rancher with MicroK8s, Helm, and a few other tools, that process is quite challenging.

I decided to bypass deploying Rancher on Ubuntu Server and went straight to Rancher Desktop. After all, Rancher Desktop is supposed to be similar to Docker Desktop, only with a Kubernetes backend.

Here’s the process of installing Rancher Desktop on Pop!_OS Linux:

  1. First, check to make sure you have kvm privileges with the command [ -r /dev/kvm ] && [ -w /dev/kvm ] || echo 'insufficient privileges'
  2. Generate a GPG key with gpg --generate-key
  3. Copy your GPG key and add it to the command pass init KEY (where KEY is your GPG key)
  4. Allow Traefik to listen on port 80 with sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
  5. Add the Rancher GPG key with the command curl -s https://download.opensuse.org/repositories/isv:/Rancher:/stable/deb/Release.key | gpg --dearmor | sudo dd status=none of=/usr/share/keyrings/isv-rancher-stable-archive-keyring.gpg
  6. Add the official Rancher repository with echo 'deb [signed-by=/usr/share/keyrings/isv-rancher-stable-archive-keyring.gpg] https://download.opensuse.org/repositories/isv:/Rancher:/stable/deb/ ./' | sudo dd status=none of=/etc/apt/sources.list.d/isv-rancher-stable.list
  7. Update apt with the command sudo apt update
  8. Install Rancher Desktop with sudo apt install rancher-desktop -y

Launch Rancher Desktop from your desktop menu and accept the default PATH configuration (Figure 6).

Figure 6: The only configuration option you need to set for Rancher Desktop.

Rancher Desktop will then download and start the necessary software to run. Once that completes, you’ll find yourself on the Welcome to Rancher Desktop window (Figure 7).

Figure 7: The main Rancher Desktop window.

Here’s where things take a turn for the confusing. With Rancher Desktop, the only things you can actually do are manage port forwarding, pull and build images, scan images for vulnerabilities (which is a very handy feature), and troubleshoot. What you cannot do is deploy containers.

To do that, you have to revert to the command line using the nerdctl command which, oddly enough, isn’t installed along with Rancher Desktop on Linux. I did run a test by installing Rancher Desktop on macOS and found that nerdctl was successfully installed, leading me to believe this is a Linux issue. Another thing to keep in mind is that the macOS installation of Rancher Desktop is considerably easier. However, it suffers from the same usability issues as it does on Linux.

If you’d like to keep experimenting with Rancher Desktop, you’ll need to get up to speed with nerdctl which I demonstrated here.
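
To give you a feel for it, nerdctl intentionally mirrors the Docker CLI, so a basic test run from the terminal looks like this (assuming the nerdctl binary is installed and on your PATH):

nerdctl run -d --name nginx-test -p 8080:80 nginx:latest
nerdctl ps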

You can also build an image with Rancher Desktop, by clicking Images > Add Image and then clicking the Build tab. Give your image a name and click Build. You then must select a build directory. What it doesn’t tell you is that the build directory must contain a proper Dockerfile. With the Dockerfile in place, the image will build.

Maybe the GUI should key users in on that fact.

Once the image is built, you should be good to go to deploy a container based on that image. Right? Not within Rancher Desktop you can’t. Instead, you have to go back to the terminal window and deploy the container with the nerdctl command.

How is any of this Kubernetes made simple? It’s not. If you want Kubernetes made simple, you go with the MicroK8s/Portainer combo and call it a day.

From my perspective, if you’re going to claim that your product makes Kubernetes simple (which is a big promise, to begin with), you shouldn’t require users to jump through so many hoops to reach a point where they can successfully work with the container management platform. Simple is a word too many companies use these days but fail to deliver on.

Red Hat Podman Container Engine Gets a Desktop Interface
https://thenewstack.io/red-hat-podman-container-engine-gets-a-desktop-interface/ (Tue, 23 May 2023)

Red Hat’s open source Podman container engine now has a full-fledged desktop interface.

With a visual user interface replacing Podman’s command lines, the open source enterprise software company wants to attract developers new to the containerization space, as well as small businesses that wish to test the waters for running their applications on Kubernetes, particularly of the OpenShift variety.

The desktop “simplifies the creation, management, and deployment of containers, while abstracting the underlying configuration, making it a lightweight, efficient alternative for container management, reducing the administrative overhead,” promised Mithun Dhar, Red Hat vice president and general manager for developer tools and programs, in a blog post.

Podman, short for Pod Manager, is a command line tool for managing containers in a Linux environment, executing tasks such as inspecting and running containers, building and pulling images.
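
For readers new to it, Podman's day-to-day commands deliberately mirror Docker's CLI; a minimal session might look like this:

podman pull docker.io/library/nginx:latest
podman run -d --name web -p 8080:80 docker.io/library/nginx:latest
podman ps
podman stop web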

In its own Linux distributions, Red Hat offers Podman in lieu of the Docker container engine for running containers. Docker also has a desktop interface for its own container engine, so time will tell how Red Hat’s desktop interface will compare. The Red Hat desktop can work not only with the Podman container engine itself but also with Docker and Lima, a container engine for Mac.

Podman Desktop 1.0 offers a visual environment for all of these tasks supported by Podman itself. From the comfort of a graphical user interface, devs can build images, pull images from registries, push images to OCI registries, start and stop containers, inspect logs, start terminal sessions from within the containers, and test and deploy their images on Kubernetes. It also offers widgets to monitor the usage of the app itself.

It’s very Kubernetes-friendly. Kind, a tool for running Kubernetes multi-node clusters locally, provides an environment for creating and testing applications. Developers can work directly with Kubernetes Objects through Podman.

The Podman desktop can be installed on Windows, Linux or Mac.

OpenShift Connects

OpenShift is Red Hat’s enterprise Kubernetes platform, so not surprisingly, Red Hat is using Podman as a ramp-up point for OpenShift.

The desktop, Dhar wrote, is integrated with Red Hat OpenShift Local, which provides a way to test applications in a production-equivalent environment.

Podman Desktop is also connected to Developer Sandbox for Red Hat OpenShift, a free cloud-based OpenShift hosting service. This could give an organization a way to test its applications in a Kubernetes environment.

Red Hat released the desktop software during its Red Hat Summit, being held this week in Boston.

Other Red Hat news this week from the Summit:

Red Hat paid for this reporter’s travel and lodging to attend the Red Hat Summit.

Scan Container Images for Vulnerabilities with Docker Scout
https://thenewstack.io/scan-container-images-for-vulnerabilities-with-docker-scout/ (Sat, 20 May 2023)

The security of your containers builds on a foundation formed from the images you use. If you work with an image rife with vulnerabilities, your containers will be vulnerable. Conversely, if you build your containers on a solid foundation of secure images, those containers will be more secure by default (so long as you follow standard best practices).

Every container developer who’s spent long enough with the likes of Docker and Kubernetes understands this idea. The issue is putting it into practice. Fortunately, there are plenty of tools available for scanning images for vulnerabilities. One such tool is Docker Scout, which was released in early preview with Docker Desktop 4.17. The tool can be used either from the Docker Desktop GUI or the command line interface and offers insights into the contents of a container image.

What sets Docker Scout apart from some of the other offerings is that it displays not only CVEs but also the composition of the image (such as the base image and update recommendations). In other words, anyone who depends on Docker should consider Scout a must-use.

I’m going to show you how to use Docker Scout from both the Docker Desktop GUI and the Docker command line interface.

What You’ll Need

To use Docker Scout, you’ll need Docker Desktop installed, which is available for Linux, macOS, and Windows. When you install Docker Desktop it will also install the Docker CLI tool. If you prefer the command line, I’ll first show you how to install the latest version of Docker CE (Community Edition). You’ll also need a user with sudo (or admin) privileges.

How to Install Docker CE

The first thing we’ll do is install Docker CE. I’ll demonstrate on Ubuntu Server 22.04, so if you use a different Linux distribution, you’ll need to alter the installation commands as needed.

If you’ve already installed Docker or Docker Desktop, you can skip these steps.

First, add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null


Install the required dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release git -y


Update apt with:

sudo apt-get update


Finally, we can install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Next, you must add your user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.

How to use Docker Scout from Docker Desktop

The first method I’ll demonstrate is via the Docker Desktop GUI. When you open Docker Desktop, you should see Docker Scout listed in the left navigation. Do take note the feature is currently in early access. Once early access closes, you’ll need either a Docker Pro, Team, or Business subscription to use the feature. Until then, however, the feature is free to use on Docker Desktop.

Click Docker Scout and you’ll see the Analyze Image button and a drop-down where you can select the image you want to scan. If you don’t see the image you want to scan in the drop-down, you’ll need to pull it by typing the image name in the Search field at the top of the Docker Desktop window, click the Images tab in the resulting popup, and then click Pull (Figure 1).

Figure 1: Pulling the official NGINX image with Docker Desktop.

Once the image is pulled, go back to Docker Scout, select the image from the drop-down, and click Analyze Image (Figure 2).

Figure 2: Analyzing the latest NGINX image.

Depending on the size of the image, the analysis shouldn’t take too much time. When it completes, it will report back what it finds. For example, with the nginx:latest image, it found zero vulnerabilities or other issues (Figure 3).

Figure 3: The nginx:latest image is clean.

On the other hand, a quick scan of the Rocky Linux minimal image comes up with 16 vulnerabilities, all of which are marked as High. After that scan, click View Packages and CVEs to reveal the detailed results. You can expand each entry to view even more results (Figure 4).

Figure 4: Click Fixable Packages to see what packages have issues you can easily mitigate.

How to Run Docker Scout from the CLI

If you prefer the command line, Docker Scout has you covered. Let’s examine the NGINX image. There are four main commands you can use with Docker Scout CLI, which are:

  • docker scout compare – Compares two images and displays the differences.
  • docker scout cves – Displays the CVEs identified for any software artifacts in the image.
  • docker scout quickview – Displays a quick overview of an image.
  • docker scout recommendations – Displays all available base image updates and remediation recommendations.

Let’s run the quickview command on the latest NGINX image. That command looks like this:

docker scout quickview nginx:latest


The results will reveal any CVEs found in your images, the base image, and the updated base image (Figure 5).

Figure 5: The quickview results for the nginx:latest image.

The results will also offer you other commands you can run on the image to get more details, such as:

docker scout cves nginx:latest
docker scout recommendations nginx:latest


I would highly recommend running the recommendations command because it gives you quite a lot of important information about the image.

And that’s the gist of using Docker Scout from both the Docker Desktop GUI and the CLI. If you’re serious about the security of your containers, you should start using this tool right away.

Container or VM? How to Choose the Right Option in 2023
https://thenewstack.io/container-or-vm-how-to-choose-the-right-option-in-2023/ (Wed, 17 May 2023)

A few years back, most technology articles would have you thinking that Linux containers and virtual machines were diametrically opposed components in the data center. That’s natural when a new technology is adopted: The hype cycle can push such innovations into every nook and cranny of the industry, looking for new wins over old software and hardware combinations.

You may remember when JavaScript was going to take over the server side, or when virtual reality was going to revolutionize education. In truth, these technologies eventually found comfortable areas of use, rather than supplanting every other idea for themselves. Things settle over time, and it can be tricky to discern where a given technology will end up most useful, and where it will be supplanted by better options farther down the line.

Now that Linux containers and virtual machines are no longer brand new, they’ve become well-understood tools for the common software developer to consider for various scenarios. We’d like to provide a guide, now, to just when and where each technology is appropriate in today’s hybrid cloud environments.

Big or Small?

Perhaps the easiest way to make your decision is according to application size and complexity. Containers are, among other things, an application packaging technology. Containers can be — and there are often very good and valid reasons for using them this way — deployed without Kubernetes directly into an operating system. This is part of our edge strategy with Red Hat Enterprise Linux and Ansible too: Containers are an easy, replicable way to deploy software while minimizing drift and moving parts.

There are other similar and competing technologies that have many of the same capabilities, such as unikernel, Wasm etc. Thus, while containers might be the right way to deploy an application today, there may be some movement around this model in the future as it is optimized and takes on new types of deployment models.

Some applications are, quite simply, too big and complex to fit into a container as is. We colloquially refer to these as monoliths. It should be noted that there is no technical limitation here: There’s no CPU/memory threshold that you cross and end up disqualified. Rather, this is based on the value of investment. For example, a single installer that deploys a database plus middleware plus $thing1 and $thing2, etc. onto a single server can be challenging to containerize as is. “Modernization” of the application may be required to decouple the components and/or adopt application frameworks and/or runtimes that are more friendly to containerization. One example of this would be moving a Java application from Spring Boot to Quarkus.

For the Developers

Developers, and administrators, regardless of whether they’ve adopted new-fangled cloud native architectures and/or DevSecOps methodologies, should embrace containers for many reasons. Speed, security, portability and simplicity are among the benefits of application containerization. And yet, this does not mean dumping virtual machines completely overboard.

The real question becomes, “Do I want to deploy my containerized application to Kubernetes or directly to a (virtualized) operating system?” There are many factors here to consider. One is the application’s requirements. Does the application need to run constantly as a single node, without interruption? Kubernetes does not migrate application components between nodes non-disruptively. They are terminated and restarted. If this isn’t behavior your application can tolerate, then Kubernetes is not a good fit.

It’s also important to consider the state of the application’s various components. If the application in question relies on third-party components, those may limit the use of containers. Many third-party vendors, especially in more stoic VM-centric industries, are slow to create Kubernetes-ready/compatible versions of their software. This means you can either deploy a VM or take the onus of supporting their software in Kubernetes yourself.

And even before you evaluate these options, it’s important to take a serious look at the skills available inside your organization. Does your team possess the skills and competency to handle Linux containers? Do you have, or are you willing to build or acquire, the necessary expertise for Kubernetes? This extends to API-driven consumption and configuration. Do your application and development teams need/want the ability to consume and configure the platform using APIs?

This is possible with all of “private cloud,” public cloud and Kubernetes, but is often more complex and harder on-prem, requiring a lot of glue from specialized automation teams. When it comes to the public clouds, your team needs specific expertise in each public cloud it’s using, adding another layer of complexity to manage. This is an area where Kubernetes can homogenize and further enable portability.

Infrastructure Efficiency

In many/most cases, a “web scale” application that has tens to thousands of instances is going to be much more efficient running on a Kubernetes cluster than in VMs. This is because the containerized components are bin packed into the available resources and there are fewer operating system instances to manage and maintain.

Furthermore, Kubernetes facilitates the scaling up and down of applications more seamlessly and with less effort. While it’s possible to create new VMs to scale new instances of an application component or service, this is often far slower and harder than with Kubernetes. Kubernetes is focused on automating at the application layer, not at the virtualization layer, though that can be done as well with KubeVirt.
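
To illustrate the difference in effort, scaling a containerized component in Kubernetes is a one-line operation (the deployment and namespace names below are placeholders):

# Scale a web front end from 3 to 50 replicas
kubectl scale deployment/web-frontend --replicas=50 -n shop

# Or let Kubernetes scale it automatically based on CPU usage
kubectl autoscale deployment/web-frontend --min=3 --max=50 --cpu-percent=70 -n shop

Achieving the same elasticity with VMs typically means templating new machines, booting full operating systems and registering them with a load balancer.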

Infrastructure efficiency also implies cost impact. This is going to be different for each organization, but for some, reducing the number of VMs will affect what they’re paying to their operating system vendor for licenses, their hypervisor vendor and their hardware vendor. This may or may not be counteracted by the cost of Kubernetes and the talent needed to manage it, however.

And there are still other considerations when it comes to security. Kubernetes uses a shared kernel model, where many containers, representing many applications, run on the same nodes. This isn’t to say they’re insecure — Red Hat OpenShift and containers deployed to Red Hat operating systems make use of SELinux and other security features and capabilities.

However, sometimes this isn’t good enough for security requirements and compliance needs. This leaves several options for further isolation: Deploy many Kubernetes clusters (which a lot of folks do), use specialized technologies like Kata containers or use full VMs.

No matter what the requirements are for your organization, or whether you choose containers or virtual machines for your applications, there is one fundamental rule that is always at play in the enterprise software world: Change is hard. Sometimes, if something is working, there’s no reason to move it, update it or migrate it. If your applications are running reliably on virtual machines and there’s no corporate push to migrate them elsewhere, perhaps they are fine where they are for as long as they can reliably be supported.

Sometimes, the best place for change inside an organization isn’t deep in the stacks of legacy applications, it’s out in the green fields where new ideas are growing. But even those green fields have to connect to the old barn somehow.

The actual technology being used doesn’t necessarily place something in those green fields, however. In this way, it is important to find a method of supporting both containers and virtual machines inside your environments, as the only real mistake you can make is to ignore one of these technologies completely.

The post Container or VM? How to Choose the Right Option in 2023 appeared first on The New Stack.

]]>
How Otomi Helped the City of Utrecht Move to Kubernetes https://thenewstack.io/how-otomi-helped-the-city-of-utrecht-move-to-kubernetes/ Mon, 15 May 2023 17:00:01 +0000 https://thenewstack.io/?p=22706920

With digital transformation sweeping across industries, we are seeing more and more organizations adopting cloud native technologies to modernize their

The post How Otomi Helped the City of Utrecht Move to Kubernetes appeared first on The New Stack.

]]>

With digital transformation sweeping across industries, we are seeing more and more organizations adopting cloud native technologies to modernize their IT infrastructure. Kubernetes has become the go-to solution for many when managing containers at scale.

While my experience building Red Kubes as CTO has highlighted the need for these technologies, it has also shed light on how integral the adoption process is for companies and organizations, such as The Municipality of Utrecht in the Netherlands.

Together, we addressed a common issue: complex and siloed applications. For context, Utrecht is one of the largest municipalities in the Netherlands and deals with a myriad of applications and huge volumes of data.

Essentially, its IT infrastructure needed a more modern approach to improving its services for the residents. I’m sure you’ve personally experienced the struggle and frustration of trying to get something from your council, municipality, or city.

The Challenge:

At Red Kubes, we designed Otomi (our open source platform) to address these issues, and we personalize each aspect of the platform to meet the needs of the user. Considering the challenge lay in speeding up delivery, building connections between these silos was of utmost importance.


Before we stepped in, the process of updating (or even changing) applications was time-consuming, costly and complex.

Furthermore, there was an increasing need for collaboration and information exchange between municipalities, but the current architecture made it difficult to achieve.

I believe many organizations are facing similar issues in modernizing their infrastructure to support more modern application architectures.

To address these challenges, Utrecht, along with 15 other major cities, initiated a review of their current information systems and architecture based on “Common Ground.”

The goal was to establish modern standards for data exchange between municipalities through microservices and an API-driven approach. The new standards could not be supported by the existing infrastructure so there was a need to transition to a modern architecture.

As applications and workloads were to be containerized for better cloud portability, Kubernetes was identified as the ideal solution for container orchestration.

Utrecht recognized that they would need to hire talent or contractors with the necessary skills and expertise to set up and manage a Kubernetes environment.

It’s a good thing the city was aware of the complexity of Kubernetes but especially what comes after installing a Kubernetes cluster.

The Solution:

Utrecht searched for a solution that would make Kubernetes easily manageable and ready for production without requiring extensive staff training or hiring new talent in such a tight market. A review of the proposed solutions revealed that our open source project Otomi could deliver on the requirements.

In a nutshell, Otomi simplifies the engineering and management of all the additional components required to run Kubernetes in a secure, compliant and automated way, while providing self-service to developers. It is designed to enable organizations to get the most out of their containerized applications in just a few days.

Utrecht successfully adopted Kubernetes technology by leveraging Otomi and creating a platform engineering team to build a production-ready platform on top of the Azure Kubernetes environment.

This allowed developers to concentrate on coding while the platform engineering team focused on security, compliance, scalability and stability (the important stuff in Kubernetes environments!).

By combining AKS (Azure Kubernetes Service) and Otomi, Utrecht was able to set up its production-ready Kubernetes environment within a few days instead of the many months it would have taken using traditional methods.

The Results: Technical, Operational and Security

With the implementation of Kubernetes, topped with Otomi, the outcomes for the city included a host of technical, operational and security benefits. From a technical standpoint, the deployment resulted in faster, automated testing, enhanced observability, monitoring and immediate access to root cause analysis (RCA).

Additionally, automatic scaling of the Kubernetes environment was achieved, a process that previously took three to six months before Kubernetes and Otomi. Now, development environments can be deployed within one minute, providing instant self-service for development teams, compared to months in the legacy architecture.

Utrecht explained to us that the benefits of Otomi were also significant from an operational perspective. Applications can now be deployed within one day, compared to the previous process which took months.

Furthermore, the entire journey from application concept to production now averages around four weeks, compared to the prior duration of at least six to nine months.

The platform also achieved stability with 24/7 uptime, automatic restart and recovery, and up to 40% productivity gain for developers through Otomi’s self-service capabilities.

We were also able to uplift the security posture, as the implementation resulted in numerous improvements, including Open Web Application Security Project (OWASP), microsegmentation, live scanning, traceability, cluster and network policy enforcement, and more.

While naturally, I’m biased, the solution worked extremely well. Utrecht’s Senior Manager of Digital Services Lazo Bozarov, shared that the platform has allowed the municipality to accelerate its containerization and cloud journey in which they have modernized their architecture towards microservices and an API-centric infrastructure. Goal achieved.

By integrating Otomi with Kubernetes, containerization is simplified, reducing the need for extensive environment management. This results in organizations accelerating their container platform’s time-to-value and the applications on it. For organizations like Utrecht, implementing Otomi on top of Kubernetes will lead to substantial cost savings, time reduction and risk mitigation.

As someone who has co-engineered this product from the ground up, it’s rewarding to see these real-life adoptions actually making a difference. It’s also exciting to see how Kubernetes can revolutionize IT infrastructure modernization. There’s a bright future ahead for the world of Kubernetes, especially in organizations such as these.

The post How Otomi Helped the City of Utrecht Move to Kubernetes appeared first on The New Stack.

]]>
Deploy an On-Premises Bitwarden Server with Docker https://thenewstack.io/deploy-an-on-premises-bitwarden-server-with-docker/ Sat, 06 May 2023 13:00:17 +0000 https://thenewstack.io/?p=22706662

Bitwarden is one of the best password managers on the market. Not only does it include features that make it

The post Deploy an On-Premises Bitwarden Server with Docker appeared first on The New Stack.

]]>

Bitwarden is one of the best password managers on the market. Not only does it include features that make it perfectly at home with teams and organizations, but you can also deploy your own instance of the tool, so you never have to worry about your company’s most sensitive data ever being synced, shared, or saved on a third-party server. This is a great option for businesses that work with highly sensitive account details, notes, and identities.

And, thanks to Docker, the process of deploying Bitwarden in-house is actually pretty easy. I’m going to walk you through the steps, so you can use this password manager service within your LAN. You can deploy it to a single machine in your data center or even a VM hosted on a third-party cloud-based service.

What You’ll Need

Here’s what you’ll need to make this work:

  • A running instance of an operating system that supports Docker (I’ll demonstrate this on Ubuntu Server 22.04).
  • A user with sudo privileges.
  • An SMTP server (I’ll demonstrate using the Gmail SMTP service).

That’s it. Let’s get to work.

How to Install Docker CE

On the off-chance you haven’t installed Docker, here are the steps for doing so.

First, add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg]&amp;&amp;
 https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | &amp;&amp;
sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null


Before you can install Docker, you must install a few dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Update apt with:

sudo apt-get update


Finally, we can install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.
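
To confirm that the installation worked and your group membership took effect, run a quick sanity check (this pulls a tiny test image from Docker Hub):

docker run --rm hello-world

If you see the "Hello from Docker!" message, the runtime is ready.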

Deploy Bitwarden with Docker

We’re now ready to deploy Bitwarden. First, download the handy script the company created for this process with the command:

curl -Lso bitwarden.sh "https://func.bitwarden.com/api/dl/?app=self-host&platform=linux" && chmod 700 bitwarden.sh


Once that downloads, run the install command with:

./bitwarden.sh install


During the installation, you’ll be asked the following questions:

  • Enter the domain name for your Bitwarden instance — if you don’t have a domain, you can use the IP address of your hosting server.
  • Do you want to use Let’s Encrypt to generate a free SSL certificate? (y/n) — if you don’t have a domain associated with this server, you must select n.
  • Enter your installation id — this is accessed by visiting https://bitwarden.com/host
  • Enter your installation key — this key will be presented on the same page as the installation id.
  • Do you have an SSL certificate to use? (y/n) — if you have an SSL certificate, type y, otherwise type n.
  • Do you want to generate a self-signed SSL certificate? (y/n) — if you don’t have an SSL certificate, answer y.

It is absolutely crucial that you use an SSL certificate; otherwise, you will not be able to create an account or use a number of the Bitwarden features.

Once the installation completes, you’ll need to configure the environment variables for the SMTP server. If you use the Gmail SMTP servers and you have 2FA enabled for your account, you’ll need to create an app password, which can be done here.

Configure the SMTP Server

To configure the SMTP server, open the global env file with the command:

nano ~/bwdata/env/global.override.env


In that file, look for the following lines:

globalSettings__mail__replyToEmail=REPLACE
globalSettings__mail__smtp__host=REPLACE
globalSettings__mail__smtp__port=587
globalSettings__mail__smtp__ssl=false
globalSettings__mail__smtp__username=REPLACE
globalSettings__mail__smtp__password=REPLACE


If you’re using the Gmail SMTP servers, change everything marked REPLACE to:

  • Replace replyTo_email with your email address.
  • Replace smtp__host with smtp.gmail.com.
  • Replace smtp__username with your Gmail address.
  • Replace smtp__password with the app password you generated.

If you’re using a different SMTP server, make sure to configure it as necessary.
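
For reference, a filled-in version of those lines for Gmail would look something like this (the address and app password are placeholders):

globalSettings__mail__replyToEmail=admin@example.com
globalSettings__mail__smtp__host=smtp.gmail.com
globalSettings__mail__smtp__port=587
globalSettings__mail__smtp__ssl=false
globalSettings__mail__smtp__username=admin@example.com
globalSettings__mail__smtp__password=abcdefghijklmnop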

Save and close the file.

Restart the Bitwarden instance with:

./bitwarden.sh restart


Once the Bitwarden service has restarted, open a web browser and point it to https://SERVER (where SERVER is the IP address or domain of your hosting server). You will be greeted by the Bitwarden login screen (Figure 1).

Figure 1: The Bitwarden login screen.

Click Create Account and, in the resulting window (Figure 2), fill out the necessary information for the new account and click Create Account.

Figure 2: Creating a new account for your Bitwarden in-house instance.

You will then be kicked back to the login screen, where you can log in with your new account. In the resulting window (Figure 3), click Send Email in the Verify Email box. You’ll be sent an email where you can then verify the new account.

Figure 3: The main Bitwarden window, showing the need to verify the initial account.

And that’s all there is to deploying an on-premises instance of the Bitwarden password manager server. Enjoy that added level of privacy for your most important secrets.

The post Deploy an On-Premises Bitwarden Server with Docker appeared first on The New Stack.

]]>
4 Ways to Enhance Your Dockerfiles https://thenewstack.io/four-ways-to-enhance-your-dockerfiles/ Thu, 04 May 2023 17:00:28 +0000 https://thenewstack.io/?p=22705952

Ten years ago, Docker released its first version of what democratized container technology. According to a recent survey, it is

The post 4 Ways to Enhance Your Dockerfiles appeared first on The New Stack.

]]>

Ten years ago, Docker released the first version of the technology that democratized containers. According to a recent survey, it is still the most used software package technology, with Maven and NPM listed as second and third, respectively. To celebrate the anniversary of the Docker container technology, let’s explore four areas — and their associated open source tools — where developers can better use Dockerfiles and images.

Lint Your Dockerfiles

I’ve always liked using linters, which help ensure your work is polished and give the satisfaction of seeing a green check mark after all the little details are adjusted. We, humans, tend to forget things and make typos. That’s where linters come to the rescue.

Hadolint is one of the most popular open source linters for Dockerfiles. A linter examines a Dockerfile for errors. It uses a set of predefined rules — the complete list is available here — to analyze your Dockerfile and provide recommendations for improving its syntax style, efficiency and consistency.

Hadolint checks for issues such as using the latest tag, incorrect syntax and unnecessary instructions. You can also bring in your own custom rules or ignore predefined rules that don’t apply to your use case. Hadolint is easy and quick to run from your command line interface, and it runs on Linux and Windows.
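
As a quick illustration, assuming hadolint is installed locally (it can also run from its container image), checking a Dockerfile looks like this:

hadolint Dockerfile

# Or, without installing the binary:
docker run --rm -i hadolint/hadolint < Dockerfile

Typical findings include warnings such as DL3007 (using the latest tag) or DL3015 (missing --no-install-recommends on apt-get install).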

Run a Security Audit on Them

The Snyk 2022 Container Security Trends report found that 46% of respondents mentioned security being a bottleneck that slows the speed of cloud deployment. And while container-based distributed architectures can provide benefits such as increased scalability, flexibility and fault tolerance, they also introduce new security challenges that need to be addressed.

Dockle is an open source tool that performs security-focused analysis on Docker images and Dockerfiles. It analyzes various aspects of the image build, such as ensuring the use of trusted base images, the exclusion of unnecessary packages and that security patches have been applied. It also checks for configuration-related issues such as unnecessarily exposed ports, the use of the root user and the storage of secrets. This tool fits well in a shipping pipeline and can help ensure that Docker images are secure.
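
A minimal example, assuming Dockle is installed and using a placeholder image name:

dockle myorg/myapp:latest

# It can also run from its own container image:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock goodwithtech/dockle:latest myorg/myapp:latest

The output lists findings such as running as root or files that may contain credentials, each with a severity level.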

Build and Update Them

Building Docker images is an art, and while some have been doing it for a decade, there is a constant flow of newcomers. The recent platform engineering movement was clear about one thing: the fewer Ops tasks developers can do, the better. And the responsibility of writing Dockerfiles often ends up with developers who may not have the knowledge to do it properly.

Paketo Buildpacks are a collection of open source Cloud Native Buildpacks that transform your application source code into images that can run on Docker runtimes. These images can even be built without the need for Dockerfiles.

With a single command, the tool will automatically detect your application language and automatically build a Docker container image that fits production requirements, including dependencies, language runtimes and other components. It supports popular programming languages such as Golang, Java, Node.js, Python, Ruby and PHP. Paketo isn’t only helping to build container images; it helps to maintain them as well. The tool even allows updating the OS layer of your app images without rebuilding your source code.
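
For instance, with the pack CLI installed and a Paketo builder (the image name is a placeholder), building an image straight from source looks roughly like this:

# Detects the language, adds the runtime and dependencies, and produces an OCI image
pack build myorg/myapp --builder paketobuildpacks/builder:base

# Later, swap in updated OS layers without rebuilding the source
pack rebase myorg/myapp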

Minimize Them

The size of Docker images getting out of control is a well-known issue: The 2020 Docker Usage Report, from Sysdig, found that the average size of Docker images has increased by 75% since 2016, and the average size of a Docker image is 1.5 GB.

Slim, which was initially created during a Docker Global Hack Day project, is tackling this. The open source tool can minify Docker images to as little as one-thirtieth of their original size. The tool inspects the container metadata and data, and it runs the application to build smaller images. The results are impressive; for example, Slim minimizes the Ubuntu 14.04 Ruby Docker image from 433MB to 13.8 MB (31x).
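
As an example, assuming the slim CLI (formerly docker-slim) is installed and using a placeholder image name, a minification run looks like this:

# Analyzes the image, runs it to observe what it actually needs, and emits a smaller copy
slim build --target myorg/myapp:latest

By default, the result is a new image tagged with a .slim suffix that contains only the files and libraries the application was observed to use.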

A Decade of Docker

As the most used containerization method with a year-over-year growth of 10% in 2022, Docker isn’t going anywhere. The company, which struggled to adjust when Kubernetes won the orchestration battle, has been gaining momentum and recently announced WebAssembly tooling. Happy birthday, Docker! I hope this list of tools will help many to make better use of it.

The post 4 Ways to Enhance Your Dockerfiles appeared first on The New Stack.

]]>
Manage Secrets in Portainer for Docker and Kubernetes https://thenewstack.io/manage-secrets-in-portainer-for-docker-and-kubernetes/ Sat, 29 Apr 2023 13:00:02 +0000 https://thenewstack.io/?p=22705935

Deploying a very basic Docker container isn’t all that hard. With just a couple of quick commands, you can have

The post Manage Secrets in Portainer for Docker and Kubernetes appeared first on The New Stack.

]]>

Deploying a very basic Docker container isn’t all that hard. With just a couple of quick commands, you can have an NGINX/Alpine Linux container up and running and even do so with persistent storage. That level of simplicity is one of the beauties of Docker.

However, deploying a basic container vs. a full-stack application are two very different tasks. And then, when those deployments require security, things can get considerably more complicated.

You might even need to use environmental variables or (especially) secrets.

What are secrets? Secrets are encrypted tokens that can serve as passwords to gain access to applications without potential exposure to outside parties. They are a built-in feature of the Docker and Kubernetes orchestrators that serve as a convenient method of adding credentials to your deployments without exposing them such that a hacker could gain access to passwords or tokens and use them against you. Secrets are made secure in Docker deployments because they are not housed in container manifests and are encrypted at rest and during transit. This way, those secrets are only available to the services that have been granted explicit access and only while the service tasks are running.

So, instead of hard-coding your credentials and/or tokens in the YAML manifests, you use secrets to obfuscate that information from sight.

The problem with using secrets with Docker is that it can get far more complicated and/or not nearly as efficient as you’d like it to be. In fact, using secrets with the Docker CLI can get pretty confusing and if you make one mistake, you might have to undo everything you’d just done and do it all over again.

Here’s an example. Let’s say you want to create a secret called tns_secret. To do this, you’d log into your Docker Swarm controller (as the secrets feature isn’t available to a standalone Docker instance) and issue the command:

printf "H3r3 !$ my sup3r s3cret p@$$w0rd" | docker secret create tns_secret -


Verify the secret was created with:

docker secret ls


You should see something like this:

ajebh69739ox06irww7gz3svz   tns_secret  8 seconds ago   8 seconds ago


Next, you would deploy a service that makes use of the new secret like so:

docker service create --name redis --secret tns_secret redis:alpine
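
Inside the service’s containers, Docker Swarm mounts the secret as an in-memory file under /run/secrets/, so the application reads it like any other file. A quick way to confirm this (the container name will vary):

docker exec $(docker ps -q -f name=redis) cat /run/secrets/tns_secret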


Although that process isn’t terribly challenging, it’s not very efficient. And isn’t efficiency the name of the game? You want to empower your teams to work smarter, not harder, which leads them to more effective, reliable, and secure containerized deployments. Imagine you have a complicated full-stack application to deploy. That, in and of itself, can be a bit daunting. Now, add the complexity of secrets to that full-stack deployment and the Docker CLI will take considerably more time and effort.

Why not let the Portainer platform help with this? Portainer works with Docker and Kubernetes secrets to make them far easier to create and manage.

Portainer allows you to create secrets that can not only be used by a single deployment but can be reused for as many apps and stacks as needed. Even better, those secrets are encrypted and encoded such that they cannot be viewed from within the GUI, not even by an administrator. As soon as you create a secret, it’s ready to be used in your deployments.

Even better, an admin can create all the secrets that are needed for a company and configure access such that it’s available only to admins, restricted users, or even the public.

That’s not only efficient, but it’s also secure.

You can find out how to manage secrets with Portainer in my piece “Container Security: Manage Secrets with Portainer.”

What About Third-Party Secrets Services?

There are plenty of third-party services that can house and serve up the required secrets for your containerized deployments. They work, but many of them can get overly complicated to use. Not only is the creation of those secrets often harder than necessary, but integrating those third-party services into your deployments adds yet another layer of complexity to the mix.

When you use Portainer, it’s all there, ready to go. On top of which, once your teams are up to speed with Portainer, the entire workflow is seamless, so there are fewer hurdles to overcome.

Environment Variables vs. Secrets

Another way of passing credentials and keys to your deployments is by way of environmental variables. This is a very handy tool for defining various types of items within your deployments. Environment variables are used in a key-value format, such as USERNAME:PASSWORD. And, as you might expect, you can create environment variables that pass credentials to your deployments.

Even though you can, you shouldn’t. Why? Because anyone can read environment variables. They aren’t encrypted or encoded. Instead, they’re passed to the containers in plain text, so using this feature for secrets is a bad idea that could expose your services such that any hacker might steal those precious credentials.
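
It’s easy to see why: anything passed as an environment variable can be read back with a simple inspect call (the container name here is a placeholder):

docker inspect --format '{{.Config.Env}}' my-app-container

Every variable, credentials included, comes back in plain text.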

The Kubernetes Caveat

Unlike in Docker, where secrets are encrypted so no one can read them once they’re created, Kubernetes secrets are stored unencrypted in the API server’s data store. That means anyone with API access can not only read but modify a secret. According to the Kubernetes documentation, because of this underlying issue, there are several steps you must take to ensure the security of your secrets, which are:

  • Enable Encryption at Rest for Secrets.
  • Enable or configure RBAC rules with least-privilege access to Secrets.
  • Restrict Secret access to specific containers.
  • Consider using external Secret store providers.
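
To see why these steps matter, note that anyone with permission to read Secrets via the API can recover the value with nothing more than kubectl and base64 (the secret and key names here are placeholders):

kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d

The default protection is base64 encoding, not encryption, which is why encryption at rest and tight RBAC appear in the list above.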

How to Create a Secret in Portainer

The first thing you must know is that the Secrets feature is only available via Portainer deployed to either a Docker Swarm or a Kubernetes cluster. On a standalone server, you won’t find the secrets feature in the menu.

On a Docker Swarm, creating a secret is as simple as clicking Secrets in the left navigation, clicking Add Secret, giving the secret a name, typing the actual secret in the Secret field, configuring access control, and clicking Save (Figure 1).

Figure 1: Creating a new secret in Portainer running on a Docker Swarm.

With Kubernetes, secrets are created similarly, only you’ll find more options available during the process (Figure 2).

Figure 2: When creating a Kubernetes secret in Portainer, you can select the type of secret you want to create, such as Basic Auth, SSH Auth, TLS, Bootstrap Token, and more.

How to Use a Secret in Portainer

Using a secret in Portainer is just as easy. Create your service as you normally would, click Secrets near the bottom of the page, and click Add A Secret. In the resulting Secret drop-down (Figure 3), select the secret you want to use, finish creating the service, and then click Create The Service.

Figure 3: Adding a secret to a service deployed to Docker Swarm.

That’s efficiency. Imagine having to do the same thing over and over from the CLI.

Truth be told, using secrets from the CLI isn’t all that challenging. However, when services and deployments get even more complicated, so too does the use of secrets. On top of that, do you want your teams spending more time having to remember and typing complicated commands or using an effective GUI tool that empowers them to work with a much higher level of efficiency?

In the end, if you want your teams to work securely with ease, you’ll want the power and simplicity that comes with Portainer.

The post Manage Secrets in Portainer for Docker and Kubernetes appeared first on The New Stack.

]]>
SUSE Unveils Rancher 2.7.2, Enhanced Kubernetes Management https://thenewstack.io/suse-unveils-rancher-2-7-2-enhanced-kubernetes-management/ Fri, 28 Apr 2023 16:36:15 +0000 https://thenewstack.io/?p=22706577

As Kubernetes users know, Rancher is a popular complete software stack for running and managing multiple Kubernetes clusters across any

The post SUSE Unveils Rancher 2.7.2, Enhanced Kubernetes Management appeared first on The New Stack.

]]>

As Kubernetes users know, Rancher is a popular complete software stack for running and managing multiple Kubernetes clusters across any infrastructure. At KubeCon Europe, SUSE released its latest and greatest version, Rancher 2.7.2.

This update aims to foster stronger ecosystem adoption. It does this by decoupling the Rancher Manager‘s user functionality (UF) so users can independently extend and enhance the Rancher UI. This enables them to build on top of the Rancher platform and better integrate Rancher into their environments by building custom, peer-developed, or Rancher-developed UI extensions.

First Three

The first three Rancher-developed extensions are:

  • Kubewarden Extension, which delivers a comprehensive way to manage the lifecycle of Kubernetes policies across Rancher clusters.
  • Elemental Extension, which provides operators with the ability to manage their cloud native OS and Edge devices from Rancher.
  • Harvester Extension, which helps operators load their virtualized Harvester cluster into Rancher to manage and inspect easily.

The idea is for Rancher to provide hybrid and multicloud multicluster management within the same pane of glass. While, at the same time, extension providers can deliver a highly customized user experience to users.

As Peter Smails, SUSE’s General Manager of Enterprise Container Management, explained, “As the Kubernetes ecosystem expands and becomes more complex, innovation, interoperability, and simplicity have never been more important. Our free-to-use Rancher UI extension framework empowers users and independent software vendors (ISVs) to create customized user experiences, significantly enhancing the operationalization of their entire Kubernetes environment.”

Ecosystem

The updated Rancher ecosystem also includes:

  • Rancher Desktop 1.8 with configurable application behaviors such as auto-start login and all applications settings configurable via its command line interface and new experimental updates.
  • Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.
  • Opni 0.9, the Multicluster Observability tool, has several observability feature updates as it approaches its planned GA later in the year.
  • S3GW (S3 Gateway) 14.0 has new features such as lifecycle management, object locking and holds, and UI improvements.
  • Epinio 1.7, an opinionated Kubernetes platform that can take you from App to URL in one step, now has a UI with Dex integration, the identity service that uses OpenID Connect to drive authentication for other apps, and support for SUSE’s S3GW, an AWS S3-compatible gateway based on the Ceph RADOS Gateway (RGW).

The latest core Rancher also provides better value with SUSE’s commercial open source support subscription service for Rancher, Rancher Prime. It now includes Service-Level Agreement (SLA)-backed support for Policy and OS Management natively within the Rancher platform via the Kubewarden and Elemental extensions. Prime subscribers also now have access to SUSE’s customer engagement platform, SUSE Collective. This includes access to peers, exclusive roadmap materials, reference architectures, operating-at-scale documentation, and on-demand start-up guides.

Puzzled by all this technology? Join the crowd. SUSE feels your pain. So, Tom Callway, SUSE’s VP of Product Marketing and Community, announced that since “cloud native expertise remains one of the biggest inhibitors to Kubernetes’ adoption,” SUSE is re-launching Rancher Academy. Its aim is “to help demystify the complexities of cloud native platforms like Kubernetes and break down the barriers faced by users when deploying new workloads by offering free, high-quality educational resources.” Courses cover Kubernetes, Container Fundamentals, Rancher Multicluster Management, Container Security, and more.

For these classes alone, you should check out Rancher. That said, SUSE and Rancher are doing some very interesting things for Kubernetes managers.

The post SUSE Unveils Rancher 2.7.2, Enhanced Kubernetes Management appeared first on The New Stack.

]]>
Rafay Backstage Plugins Simplify Kubernetes Deployments https://thenewstack.io/rafay-backstage-plugins-simplify-kubernetes-deployments/ Mon, 24 Apr 2023 13:00:50 +0000 https://thenewstack.io/?p=22705992

Rafay Systems recently announced the release of Backstage Plugins. These open source software plugins for Spotify’s Backstage platform create self-service workflows for

The post Rafay Backstage Plugins Simplify Kubernetes Deployments appeared first on The New Stack.

]]>

Rafay Systems recently announced the release of Backstage Plugins. These open source software plugins for Spotify’s Backstage platform create self-service workflows for developers while providing the governance and standardization platform teams require.

The announcement was made at KubeCon + CloudNativeCon Europe. Rafay Backstage Plugins will become globally available to consumers by July 2023.

The plugins are designed to balance the seemingly opposing needs of platform teams and developers while working with frameworks such as Kubernetes and varying environments. Platform teams seek to formalize how developers provision and access their resources, while developers are concerned with quickly innovating and testing varying applications or web pages.

According to Abhinav Mishra, Senior Product Manager at Rafay, “Developer self-service for Kubernetes is a huge challenge. Developers need a cluster, or namespace, or environment, to provision and test applications and it takes too long for this process to happen. It can take organizations three months to get an app fixed in production.”

Using Backstage Plugins, however, organizations can drastically reduce the time required to create, test, and operationalize applications — while doing so in a well-governed, repeatable manner.

Tedious Manual Approaches

Backstage Plugins connects developers’ Internal Developer Platforms (IDPs) in Backstage — a widely used open platform for developer teams — to Rafay Kubernetes Operations Platform. In turn, “Backstage allows for development of IDPs atop Kubernetes,” Mishra said. The plugins enable platform engineers to create reusable templates that adhere to the governance concerns of the organization, spanning everything from cost to multitenancy and regulatory compliance.

Without this methodology, organizations are frequently slowed by lengthy back-and-forth conversations between developers and platform teams about which resources to use, how to provision them, and how to secure them. “The way a lot of companies handle this workflow is if developers need an environment or to deploy an app, they submit a ticket,” maintained Sean Wilcox, Rafay SVP of Marketing. “Those tickets sit in a sea of tickets for like, a week, and there’s several questions and opportunities for platform teams to get back to them. There’s a back and forth like a ping pong game that happens for a month.” A Rafay survey found these manual methods often delay application implementations for anywhere from one to three months.

Governance Templates

The introduction of Rafay Backstage Plugins replaces these time-consuming efforts with an alternative approach in which platform engineering teams create templates for developers. Those templates specify all aspects of the infrastructural, governance, access, and provisioning of resources that platform engineering teams require developers to follow. Backstage Plugins enable developers to consume those templates (and their corresponding resources) while using their IDPs that are connected to Backstage. The connection to Rafay Kubernetes Operations Platform enables developers to avail themselves of its bevy of capabilities for facilitating governed access to Kubernetes.

Examples of templates include those designed to support “Cluster as a Service, Namespace as a Service, or Environments as a Service,” Mishra mentioned. Platform teams benefit from this approach by specifying how developers spin up resources and provision them in a governed manner. Developers benefit by accessing those resources and environments “by just entering a name and the description of a cluster,” Mishra said. “With a one-click provision, they get a nice view of their environment, where they can download [resources] and deploy apps easily to it and reduce the cognitive load of having to learn the intricacies of Kubernetes.”

Reducing the Cognitive Load

The reduced time-to-value that Backstage Plugins supports has lasting ramifications for developers and IT teams. It enables each of them to concentrate on what they do best. For platform teams, that’s “setting up those standards and workflows and ensuring they’re done in a compliant way for regulations, or internal policies, or even costs,” Mishra commented. For developers, it’s realizing the freedom from infrastructural, access, and security concerns to spur creativity.

“Developers shouldn’t have to deal with this stuff,” Wilcox remarked. “They want to code and deploy fast. All the infrastructure stuff they shouldn’t have to be concerned with, like does this even go in a cluster or a namespace? Frankly, they should just be able to deploy while the platform team, in an automated way, figures this out for them.”

The objective is for developers to spend their time pursuing higher-value tasks related to devising new and better solutions, instead of miring themselves in the infrastructure particulars and governance demands of doing so.

The post Rafay Backstage Plugins Simplify Kubernetes Deployments appeared first on The New Stack.

]]>
What eBPF Means for Container Threat Detection https://thenewstack.io/what-ebpf-means-for-container-threat-detection/ Sat, 15 Apr 2023 17:00:05 +0000 https://thenewstack.io/?p=22704567

This blog post was adapted from a talk at osquery@scale 2022 given by Ryan Mack (vice president of engineering and

The post What eBPF Means for Container Threat Detection appeared first on The New Stack.

]]>

This blog post was adapted from a talk at osquery@scale 2022 given by Ryan Mack (vice president of engineering and head of infrastructure at Uptycs) and Christopher Stanley (a security engineer in the aerospace industry).

eBPF (extended Berkeley Packet Filter) is a Linux kernel technology that offers a powerful and stable method of observing the Linux kernel. It’s like having a VM in the kernel that can safely run hooks (i.e. programs) for filtering data like network events, system calls, packets, and more. eBPF is being adopted at scale for its guaranteed stability, the ability to work directly in the kernel, and potential savings when factoring in the compute process for gathering telemetry on Linux servers and containers.

eBPF is rapidly gaining traction in cloud native applications, especially in places where traditional security monitoring doesn’t work. eBPF is well suited for use in distributed and container-based environments, including Kubernetes. The core benefits of the technology include speed and performance, a low level of intrusiveness, security, unified tracing, and programmability. It is safer than previous options because of the way it sees inside processes without introducing the risk of crashing the application or modifying the kernel in any way. eBPF is a preferred alternative to the auditd framework because it is less invasive and more efficient.
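
To make this concrete, here is a small, illustrative example using bpftrace, one of several front ends for writing eBPF programs (assumed to be installed and run as root); it attaches to the execve tracepoint and prints every new process started on the host:

sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s -> %s\n", comm, str(args->filename)); }'

Security tools built on eBPF hook the same kinds of events, then enrich them with container and Kubernetes context before shipping them to a detection pipeline.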

By monitoring from the kernel layer, many of the challenges associated with observability in the cloud are solved. You can enjoy deeper visibility, more context, and more accuracy in your data. If you have an interest in increasing your container security, it’s worth learning more about what eBPF can do for you.

eBPF Adds Context Around Containers

Many people running containers in their environment wrongly assume containers are a security boundary and that there’s no way applications can break out of them. I don’t see containers as security. They are like a barrier. They can be one piece of the puzzle to contain your application, but they are not, by themselves, security. For containers (as with everything else), you should follow the rule of least privilege, which in this case means only running the binaries that you need in the containers. You shouldn’t run as root in the container, but the fact is, many developers do, and this gives other binaries a chance to escape the container.

With eBPF, you get contextual information around your containers. You can learn what syscalls in the container were run, what host it was on, what the container name was, and what the image was. You can pump that information into your SIEM and have context around these hosts. Then you can start writing detections of abnormal behavior.

In the screenshots below, you can see that a process ran, who ran it, what the container’s name was, and so on. Without that context, it’s very hard to look at a host running 20,000 containers and identify which container has a security issue.


eBPF Telemetry Can Detect Unusual Activity

You might recall CVE-2022-0185. In this vulnerability, there was an underflow attack such that you could abuse spraying kmalloc to privilege escalate outside of an unprivileged namespace. Historically speaking, privilege escalation from a root user didn’t really matter. Before containers, if you were root and you were hitting this code that was in the kernel, that code would be less scrutinized. But that’s not necessarily the case anymore. When you have these root namespaces that are running within containers, sometimes you are hitting the code in the kernel that was less scrutinized, and that can allow an underflow attack to happen.

In the image below, I type whoami on my box. I am CStanley. I echo $$ and get the process ID, and then I do a pscap | grep with that process ID to see what capabilities I have on the system. I don’t have any. So, I type unshare -r and enter a privileged namespace, and I type whoami. I’m root, privilege escalated. But it’s not that easy. So, I echo $$ again, get the ID, pscap | grep that, and I see I have full capabilities on this system now. I’ve privilege escalated!

Actually, no. In this context, in this namespace, the system is doing what it’s supposed to do. It has isolated that namespace off, and it’s putting that small amount of protection around there, saying: you’re not really root, you’re only root in the context of this namespace. I try to change the root password and it fails. I try to install a binary, but it fails. I only have permissions in the context of that namespace.
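
Paraphrasing that sequence as shell commands (output will differ on your system; pscap comes from the libcap-ng tools):

whoami          # cstanley
echo $$         # prints the shell's process ID
pscap | grep $$ # no capabilities listed
unshare -r      # enter a new user namespace mapped to root
whoami          # root
echo $$
pscap | grep $$ # full capabilities now, but only inside this namespace
passwd root     # fails: the root mapping doesn't extend to the host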

This is where CVE-2022-0185 comes into play. If you pull down and run that binary and do this kmalloc spray, then it essentially does an underflow and passes a negative number in. It moves that pointer into the memory space to where it can execute code that allows you to privilege escalate. In the image below, I ran the exploit and get into the container namespace. Then I changed the root password and I have privilege escalated. This is a case where the container by itself failed us. It didn’t prevent us from affecting the host.

The image below shows a detection I did within osquery, utilizing eBPF telemetry. When I ran that same exploit, it shows there was a privilege escalation attack that occurred, and it detected the kthreadd. This detection triggered based on path two being spawned, and with kthreadd, that’s an indication that something happened within kernel space and that privilege escalated up. It’s a rudimentary detection but it’s effective.

eBPF is still evolving and will get better with time. Meanwhile, it’s already improved what’s possible with container threat detection. The examples given here used the open-source osquery technology, but the Uptycs solution extends osquery to provide similar container-level detections correlated with activity at the Kubernetes control plane.

Uptycs is sponsoring KubeCon + CloudNativeCon Europe 2023. Please stop by our booth to get a demo of our K8s and container security solution.

The post What eBPF Means for Container Threat Detection appeared first on The New Stack.

]]>
Walkthrough: Bitwarden’s New Secrets Manager https://thenewstack.io/walkthrough-bitwardens-new-secrets-manager/ Sat, 15 Apr 2023 16:00:10 +0000 https://thenewstack.io/?p=22704805

It was only a matter of time before a popular password manager, such as Bitwarden, would create a secrets manager,

The post Walkthrough: Bitwarden’s New Secrets Manager appeared first on The New Stack.

]]>

It was only a matter of time before a popular password manager, such as Bitwarden, would create a secrets manager, an application to create and store security tokens so they don’t have to be hard-coded into the application itself. It makes sense, especially given that Bitwarden is open source and the folks behind it seem to understand the growing need for managing secrets in cloud native and container technology.

And that’s what they’ve done: created the ideal password manager for teams that work with things like containerized and cloud native deployments. I will warn you, however, that the workflow of the Secrets Manager is a bit confusing at first. But once you understand how it works, you’ll be using it like a champ.

Although this new Secrets Manager will be a separate product from the company’s flagship Password Manager, the combination of the two gives Bitwarden a leg up over most of the competition. As of this moment, pricing is TBD for the Secrets Manager, as it is still in beta.

How the Bitwarden Secrets Manager Works

First off, you must have a valid Bitwarden account that includes organizations. For that, you’ll probably want one of the Teams accounts (otherwise, you are limited in the number of organizations and/or members you add).

Enable the Beta

The first thing you must do is enable the beta. To do that, log into your Bitwarden Web Vault. Click the Organizations tab and then click Billing > Subscription. You should see a checkmark for Enable Secrets Manager Beta (Figure 1).

Figure 1: Enabling the beta for the Bitwarden Secrets Manager.

Accessing the Secrets Manager

Once the Secrets beta has been enabled, click on the icon to the left of the profile drop-down near the upper right corner and select Secrets Manager Beta (Figure 2).

Figure 2: Accessing the Bitwarden Secrets Manager from the Product Switcher.

You should now find yourself on the main Bitwarden Secrets Manager page (Figure 3).

Figure 3: The Bitwarden Secrets Manager main page.

Create a Service Account

The next step is to create a service account that will hold something like an API token. To do that, click Service Accounts in the left navigation. On the resulting page (Figure 4), click New Service Account.

Figure 4: Once you’ve created your first Service Account, you will create the next account from the New drop-down in the upper right corner.

In the resulting popup (Figure 5), give the new Service Account a name and click Save.

Figure 5: Naming your Service Account.

You will then be directed back to the Service Account page, where your new entry is listed. Click the name of that new entry and you can then add Projects to the Service Account, add members, and access tokens.

Before you can add projects and members, they have to exist.

Adding Projects

Projects are a way to collect secrets that should be logically grouped together. Let’s create a project that can be added to the Service Account. Click Projects in the left navigation and then click Add New Project. Give the project a name and click Save. Just like with Service Accounts, once you’ve created a project, you can then add People and Service Accounts to the Project (Figure 6). With People, however, those are added in the Organizations section of the Bitwarden Password Manager.

Figure 6: A newly created project for the Bitwarden Secrets Manager.

Add Projects and People to a Service Account

Service accounts represent non-human accounts (such as system accounts, applications, and deployment pipelines). Now that we’ve had our detour through Projects, you’ll want to add information to your new Service Account. Go back to the Service Account section and click to open the Service Account you just added. Add a Project (if necessary) and add People.

Create an Access Token

An Access Token is the authentication vehicle that allows you to script secret injection into your application and service deployments, machines, and applications, as well as the ability to decrypt secrets that are stored in your vault. This prevents you from having to save actual passwords or use them in your manifests and/or code.

How this works is pretty simple: Each Access Token is issued to a particular service account. With that association, it will grant any machine it’s applied to access to the secrets associated with that service account. So, to make this work, you must create Service Accounts and then add Secrets to them. Those secrets are then accessible to any Access Token that has access to a particular Service Account. It’s a bit confusing, but once you start playing around with the Secrets Manager, you’ll pick up on the workflow.

To create your first token, click on the Access Tokens tab and click New Access Token. In the popup (Figure 7), give your new Access Token a name, select the required permissions from the Permissions drop-down, and give it an expiration date.

Figure 7: Adding a new Access Token to the Secrets Manager.

Click New Access Token to generate the access token you’ll use for the service in question. One thing to keep in mind is that you must copy the new access token, as they aren’t stored nor can be retrieved. So click Copy Token (Figure 8) to save it to your computer’s clipboard.

Figure 8: Our new access token is ready to be copied.

At any time, you can manually revoke an Access Token by navigating to Service Accounts > Access Tokens, selecting the access token, clicking the associated menu, and clicking Revoke Access Token.

And that’s the basics of using the new Bitwarden Secrets Manager. For any organization that already uses Bitwarden and needs to be able to manage Secrets as well, this will be a welcome addition. For those who’ve yet to try Bitwarden, this might be just the feature to win you over.

The post Walkthrough: Bitwarden’s New Secrets Manager appeared first on The New Stack.

]]>
Docker Gets up to Speed for WebAssembly https://thenewstack.io/webassembly/docker-needs-to-get-up-to-speed-for-webassembly/ Fri, 14 Apr 2023 11:00:08 +0000 https://thenewstack.io/?p=22704279

For those who still are looking to discuss whether WebAssembly (Wasm) will replace containers and even Kubernetes is missing the

The post Docker Gets up to Speed for WebAssembly appeared first on The New Stack.

]]>

Those who are still looking to debate whether WebAssembly (Wasm) will replace containers, or even Kubernetes, are missing the point. Both are very different, yet important technologies. And even though there is some overlap in purpose, they often serve specific and separate needs.

At least in the immediate future, many organizations will be loath to replace their container infrastructure and Kubernetes environments. Besides the investments they would likely forfeit by ripping those out, WebAssembly is not a replace-all technology for every containerized environment. How containers and Wasm compare, and how Docker will continue to support containerized infrastructure when Wasm is in use, were among the main talking points during Wasm I/O 2023.

During the course of the week of the conference, Docker made a series of announcements about how it will accommodate and extend support for WebAssembly. How both will work together and especially how Docker is used with containers to allow for them to deploy and manage applications with WebAssembly were often discussed. These adaptations are largely seen as necessary to pave the way for Wasm’s adoption and use with containers and Kubernetes.

Docker sees Wasm as a complementary technology to Linux containers where developers “can choose which technology they use (or both) depending on the use case,” Michael Irwin, senior manager of developer relations, wrote in a blog post. “As the community explores what’s possible with Wasm, we want to help make Wasm applications easier to develop, build, and run using the experience and tools you know and love,” Irwin wrote.

Indeed, Docker has made and continues to make progress as it seeks to support Wasm. Following its October release of Docker+Wasm and after joining the Bytecode Alliance for Wasm and WebAssembly System Interface (WASI) development, Docker released new Wasm runtimes at the same time as this month’s Wasm I/O 2023.

The three new runtimes use the runwasi library, which creates the namespaces, configures the network and handles the other workload tasks that containerd manages when deploying a Wasm module.
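
As an illustration of how this surfaces for users, the Docker+Wasm technical preview lets you run a Wasm module much like a container by selecting a Wasm runtime and platform (the image below is a public WasmEdge sample; the exact flags reflect the preview and may change):

docker run --rm --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm secondstate/rust-example-hello

Under the hood, containerd hands the module to runwasi instead of starting a Linux container.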

Given Wasm’s likely importance for a wave of deployments and use cases we will see in the near future, it is up to Docker to continue widening its support. Docker is motivated to do this since “The Docker Desktop key value proposition focuses on developer productivity,” Torsten Volk, an analyst at Enterprise Management Associates (EMA), said. “Wasm simply constitutes another deployment target for Docker Desktop, in addition to standard Linux containers. As was the case many years ago with Linux containers, Docker has now set out to simplify the adoption of Wasm, an application runtime that has the potential to save significant developer cycles by consistently running the same code on any infrastructure,” Volk said. “This lets developers worry about code, while platform engineers can take care of the scalability and resiliency of the underlying servers, network, and storage resources. Making this capability available to its user community definitely adds to the appeal of Docker Desktop.”

Bringing containers and WebAssembly closer “will benefit everyone,” Djordje Lukic, a software staff engineer for Docker, said during Wasm I/O 2023. “WebAssembly can make use of all the existing infrastructure for building and delivering the workloads…and adding WebAssembly features to container orchestrators makes them a great choice for running workloads where performance and a small footprint is paramount,” Lukic said.

Wasm and Docker Action

Announcements are often interesting, but they are not worth much when the technology is not ready. That concern about Docker’s announcement was allayed by the “Containers Deep Dive” talk and demo that Lukic gave at Wasm I/O 2023. During his talk, Lukic demoed running a WebAssembly module locally using Docker and containerd (a container runtime) and running the module in the cloud on a Kubernetes cluster. The demo covered “what it takes” for a container runtime to be able to run a Wasm module, and the benefits of this approach, including faster startup times, security guarantees and easy integration into multi-tier services, Lukic said.

During his demo, Lukic ran a Wasm module with Docker inside Kubernetes. He showed the Kubernetes cluster running on Docker Desktop, with a pod running alongside the definition of the Wasm module. “What it’s saying is, okay, I have a deployment,” Lukic said.

The Split

As mentioned above, Wasm on the one hand and Docker and containers on the other each serve specific functions very well. “I think containers versus WebAssembly is really about how you want to build your applications,” Kate Goldenring, senior software engineer at Fermyon, said during the panel discussion “Containers vs. WebAssembly: What’s the Difference and Which Should I Use?” “If you’re interested in serverless event-driven applications, WebAssembly is there for you. If you’re interested in continuing with the microservices architecture you have today — such as using Kubernetes, even if WebAssembly is next to it — that is an option.”

Daniel Lopez Ridruejo, a senior director at VMware and the CEO of Bitnami before VMware acquired it in 2019, said during the panel discussion that he both agreed and disagreed with Goldenring’s statement. While “most containers in the world running on Kubernetes are running on virtual machines,” there is much activity around engineering how to run WebAssembly in containers on Kubernetes, he said. “But what I’m particularly excited about, through your work and Microsoft’s pioneering work, is how you run this on IoT devices: how you actually get rid of containers and get rid of VMs and can have that unit of portability on devices that you will not typically associate with running software,” Ridruejo said. “In a way, you can think of this as a wave…that I think is going to be disruptive once you can put compute, and standardized compute, in devices.”

Serverless has not lived up to its earlier promise of allowing applications to be deployed and managed with minimal operations work to support them. To this end, WebAssembly providers are speeding ahead to fill the shortcomings of these serverless applications. Recent examples include Fermyon’s release of the open source Spin 1.0, which is geared for serverless. Meanwhile, containers and Docker will likely remain part of the equation for serverless deployments with WebAssembly. Fermyon and other companies working on Wasm for serverless are focusing on the speed of deploying and managing modules, Shivay Lamba, a software developer specializing in DevOps, machine learning and full stack development, said during the panel discussion. “That helps you to save costs as well. So, if you have such use cases where you have smaller functions, those can be very easily replicated inside of Wasm. And while we are working on some of these toolings, which are still not supported very well in Wasm, those can still be run very easily in Docker or in containers.”

In a nutshell, Wasm should “in no way in the near future” serve as a direct replacement for all containerized Docker workloads, Saiyam Pathak, director of technical evangelism for Civo Cloud, said during the panel discussion. Instead, applications that do not necessarily run very well with Wasm should continue to work just fine with Docker and containers, reflecting how to “take the best advantages of the Wasm ecosystem.”

The post Docker Gets up to Speed for WebAssembly appeared first on The New Stack.

]]>
Tech Backgrounder: Slim.AI Makes Container Hardening Easier https://thenewstack.io/tech-backgrounder-slim-ai-makes-container-hardening-easier/ Thu, 13 Apr 2023 16:00:19 +0000 https://thenewstack.io/?p=22705073

The Slim Developer Platform aims to take the pain out of vulnerability remediation and management for container-based applications. The platform can

The post Tech Backgrounder: Slim.AI Makes Container Hardening Easier appeared first on The New Stack.

]]>

The Slim Developer Platform aims to take the pain out of vulnerability remediation and management for container-based applications. The platform can reduce vulnerability counts by 80% (on average) and equips developers and security professionals with tools to understand which vulnerabilities matter and which don’t. Using proprietary “container hardening” algorithms based on the ultra-popular SlimToolkit open source project, Slim removes unnecessary libraries, packages and binaries, thus minimizing a container’s attack surface and the potential for zero-day attacks.
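As a rough illustration of the underlying open source workflow, here is a minimal sketch using the SlimToolkit CLI, assuming it is installed as docker-slim (newer releases also ship the binary as slim) and using nginx purely as an example target; exact flags can vary by release.

# Build a minified, hardened copy of an image; the original is left untouched
# and the output is typically tagged IMAGE.slim (e.g., nginx.slim)
docker-slim build --http-probe nginx:latest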

Differentiator

Top differentiators of Slim.AI’s platform include the following:

  1. Slim.AI provides proactive vulnerability remediation. While most software supply-chain companies are focused on generating awareness of existing vulnerabilities (through vulnerability scanning or Software Bills of Material a.k.a “SBOMs”), Slim.AI reduces production applications to a minimal footprint proactively, removing potential future threats.
  2. Slim.AI is focused on automation for any technology stack. Previous approaches to slimming containers can result in manual effort for developers or ask developers to change their base image, distribution, or package ecosystem. Slim’s goal is to let developers work however they want and to provide trustworthy automations that run in CI/CD with every build. This approach decreases the friction between developer teams and security/compliance teams — a win-win.
  3. The Slim Developer Platform is built on SlimToolkit open source software (16K GitHub stars and growing), which many organizations have already embraced as a valuable tool for modernizing their cloud native workflows. Slim.AI makes using SlimToolkit easier, faster and more scalable for teams of developers worldwide.

Automated vulnerability remediation is gathering steam. Several startups — such as RapidFort, Chainguard and EndorLabs — are focused on the problem, though all have different approaches. Additionally, there are several existing methods for managing container vulnerabilities, including:

  1. Alternative base images: Alpine Linux, Distroless and Scratch images ask developers to start with a minimal image and add the tools, packages and libraries they need to it. For some developers, these approaches are challenging due to low-level differences in the distributions or lack of understanding as to how these techniques work.
  2. Vulnerability scanners and SBOMs: While a critical part of a secure posture, these technologies are point-in-time and reactive solutions to security. They can create friction for development teams and don’t address other aspects of attack surface outside of vulnerabilities and package information.
  3. Policy engines: These rules-based engines can prevent risky containers or configurations from reaching production and are necessary to ensure compliance. However, they tend to be a “red light” approach to security and can have a negative impact on developer velocity.

Slim.AI is focused on containers as the atomic unit of a secure cloud native posture and is the only company offering a proven, trusted method for automatically hardening containers en route to production. Being a SaaS service lets Slim.AI connect with multiple cloud providers (Amazon Web Services, GCR, Azure, etc.), but also facilitates team collaboration, sharing and reuse of important artifacts for delivery and security.

Problem Space

Large, unoptimized containers can be rife with vulnerabilities and additional attack surface (see Slim.AI’s annual Public Container Report for more information); yet, to date, hardening containers is a highly specialized and labor-intensive job.

Benefits of Slim.AI

Slim.AI seeks to be a communication platform between container producers (software companies shipping products to customers in the form of containers) and container consumers (their customers). By reducing the attack surface of a container (i.e., removing shells and package managers), the exploitability of a given vulnerability is greatly reduced.

Company

In 2015, the Docker community held a Global Hack Day in Seattle. Kyle Quest’s concept for “DockerSlim,” which he described as “a magic diet pill for your containers,” won first place in the local event and second place in the global “plumbing” category that year.

That’s how the seeds were sown for an open source community that now supports SlimToolkit. By around 2019, the project had gained so much momentum that users were regularly asking for extended features and additional functionality. That spurred Quest and John Amaral to put together a business plan, and the two launched Slim.AI in 2020 (as founding CTO and founding CEO, respectively) on the premise that true software security comes from within. The company’s vision is to empower developers to employ container best practices to deliver not only more efficient and performant software but more secure software as well.

Stack

The Slim platform can analyze and harden any OCI-compliant container image, regardless of its base image, package ecosystem or build origin. While the SlimToolkit open source software requires the Docker daemon, Slim’s Automated Container Hardening doesn’t and can be used with any runtime, including containerd/Kubernetes.

Images should be hosted in one of the many cloud registries supported by Slim (e.g., Docker Hub, AWS Elastic Container Registry, Google Container Registry, Microsoft/Azure, RedHat Quay, GitHub Container Registry and others). Additionally, Slim supports several CI/CD system integrations including GitHub Actions, CircleCI Orbs, GitLab and Jenkins.

While Slim supports both Linux/AMD- and ARM-based image architectures, cross-architecture builds are currently not supported. Additionally, Slim’s core hardening capability requires a secured connection to the Slim platform, though air-gapped and on-premises solutions are on the near-term roadmap.

Partnerships/Customers

Numerous Slim.AI design partners have testified to the impact of the Slim.AI platform; here are a few who have documented their experiences and results:

  • BigID: BigID automates container security with Slim.AI to reduce vulnerabilities and maximize security posture. Learn more about BigID and Slim.AI on this episode of TFiR.
  • PaymentWorks: PaymentWorks used Slim.AI to eliminate 80% of container vulnerabilities with no additional developer overhead. Read the PaymentWorks case study.
  • Jit: Jit achieved a step change in DevX with minimal integration effort, reducing container size by 90% and cutting bootstrap time in half. Read the Jit case study.
  • Security Risk Advisors: SRA sought to deploy modern processes like containerization, slimming, SBOMs (software bills of materials) and vulnerability management without having to largely expand the DevOps team, and it found the ideal solution in Slim.AI.

Pricing

The Slim.AI platform is currently in beta and available for free to developers. Developers can log in to the Slim.AI platform to analyze their containers, get vulnerability reports from multiple scanners and automatically harden their container images for production.

Additionally, Slim.AI has been adding functionality for teams and is accepting a limited number of organizations into its Design Partner Program.

For more information, contact ian.riopel@slim.ai.

The post Tech Backgrounder: Slim.AI Makes Container Hardening Easier appeared first on The New Stack.

]]>
Learn 12 Factor Apps Before Kubernetes https://thenewstack.io/learn-12-factor-apps-before-kubernetes/ Tue, 11 Apr 2023 13:00:08 +0000 https://thenewstack.io/?p=22704884

Have you ever worked at a company where you struggled with containerized apps but couldn’t quite express why? My initial

The post Learn 12 Factor Apps Before Kubernetes appeared first on The New Stack.

]]>

Have you ever worked at a company where you struggled with containerized apps but couldn’t quite express why?

My initial experiences with containers were at a company implementing them in every wrong way imaginable. For example, they ran databases inside a container with no external volumes. You read that right: They wrote the database storage to an aufs file system, which is not designed for long-term storage and is also very slow. When I mentioned this was a terrible idea because we could lose all the data, the answer was, “We are doing snapshots, so we are fine.”

The first apps they put into containers were not much better:

  • They didn’t use environment variables for configuration; instead, they hardcoded configuration and mounted config files.
  • The app died immediately when the database was unavailable; it didn’t wait or retry until it became available.
  • Logging was terrible, or logs went to files instead of stdout.
  • They ran admin processes such as database migrations with a different app.
  • Apps were needlessly stateful.

I solved most of these issues with entry-point scripts, as mentioned by Kelsey Hightower. But that’s a hacky solution to make up for terrible design. I remember coyly asking the developers to redesign their apps to address all these issues, with only my opinion to back me up. So I went online to do some research and found 12 Factor apps, which not only expanded and validated my points, but also gave me an excellent framework to back up my arguments.
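For illustration, here is a rough sketch of the kind of entry-point wrapper involved; DB_HOST and DB_PORT are assumed to be provided as environment variables, and the exact commands differ per app.

#!/bin/sh
# Hacky fix: wait for the database instead of letting the app crash on startup
until nc -z "${DB_HOST:-db}" "${DB_PORT:-5432}"; do
  echo "waiting for database at ${DB_HOST:-db}:${DB_PORT:-5432}..."
  sleep 2
done
# Hand off to the real container command
exec "$@"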

Why Learn 12-Factor Apps

The 12-factor app methodology is a set of best practices for building containerized applications. Heroku introduced these practices in 2011, and they have since been widely adopted by software development teams worldwide.

The 12 factors provide a framework for building scalable, portable, maintainable and resilient applications. But perhaps the most important benefit is that they create apps that are easy for operators because they are designed to work seamlessly with any container orchestrator.

Kubernetes (K8s) works best with 12-factor apps because they are the best design practices for containerized applications. K8s, being a container orchestrator, is designed with the assumption that your applications are 12-factor apps.

Venturing into container orchestration without knowing how to engineer container apps will make them significantly more tedious to operate and less scalable. Sure, you can make monoliths and poorly designed apps run in Kubernetes. For example, you can mount volumes, run StatefulSets and even do vertical autoscaling. But ultimately, you will have to contend with high operational costs. Factor by factor, the operational cost of not implementing each one looks like this:

  • Codebase: Apps with a shared code base are harder to update and maintain.
  • Dependencies: Time is spent finding and installing dependencies that should be clearly defined and packaged with the container.
  • Config: Time and engineering effort are spent creating entry-point scripts and/or custom images from source code to change hard-coded configuration.
  • Backing services: Changing backing services means costly, time-consuming migrations and/or significant downtime.
  • Build, release, run: Treating app code and the running server as one leads to snowflake servers, painful maintenance and costly upgrades.
  • Processes: Apps cannot scale horizontally when state is shared, and they cannot be seamlessly replaced when upgraded.
  • Port binding: You end up maintaining a web server container like Tomcat, causing significant configuration overhead and an inflated app runtime.
  • Concurrency: Apps not designed with concurrency in mind might use an excessive amount of resources, making them a poor choice for scaling.
  • Disposability: Data loss and performance issues result from the lack of graceful shutdown handling and of request handling that can cope with crashes.
  • Dev/prod parity: It is impossible to predict how an app will behave in production; downtime increases and deployment velocity suffers.
  • Logs: It is tedious to send logs to a log warehouse; container orchestrators expect logs on stdout.
  • Admin processes: Time is wasted building a process that’s not part of the app, or even doing the task manually.

Platform Engineering and 12-Factor Apps

Platform engineering helps deliver excellent self-service and great developer experience with an internal developer platform (IDP). And an IDP significantly reduces cognitive load for developers by providing golden paths and multiple abstractions based on user roles.

In platform engineering, 12-factor apps are important because developers self-serve their applications and infrastructure needs with an IDP. Internal developer platforms generally leverage container orchestration and shift the operation of services to developers, meaning that operational excellence is paramount to mitigate all the issues described above.

A platform orchestrator like Humanitec sits at the center of your IDP, making it easy to deploy workloads and all their resources to all environments with a simple workload specification.

Humanitec uses K8s to deploy workloads; therefore, designing 12-factor apps is crucial to maintaining high operational performance. When using an IDP, developers self-serve their infrastructure and configuration needs, including deploying and operating applications. If they use non–12-factor apps, they will experience all the pain points described above.

For example, let’s say you have an application that uses a database. If it isn’t a 12-factor app, you may need to mount its configuration on disk, even though whatever tooling you use to automate deployments is likely designed to pass configuration as environment variables. If you have multiple environments, the problem compounds.
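To make that concrete, here is a hedged sketch of the difference; the image and variable names are made up for illustration.

# Non-12-factor pattern: configuration baked in or mounted from disk
docker run -v "$(pwd)/config/prod.ini:/app/config.ini" my-app:latest

# Factor III (config in the environment): the same image runs everywhere; only the variables change
docker run -e DATABASE_URL="postgres://db:5432/app" -e LOG_LEVEL=info my-app:latest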

Overall, 12-factor apps make deploying, managing and scaling applications easier. They also make it easier to collaborate with other developers.

Conclusion

The 12-factor apps framework provides best practices for building containerized applications that are scalable, portable, maintainable and resilient. They are essential for maintaining high operational excellence when deploying and operating applications in the cloud.

Platform engineering helps developers consume their own infrastructure and easily operate their own services, but those services must be designed with these tools in mind. We encourage all developers to adopt the 12-factor methodology to make their services easier to operate.

Have your 12-factor apps ready? Standardize and deploy them with Humanitec today.

The post Learn 12 Factor Apps Before Kubernetes appeared first on The New Stack.

]]>
Container Security 101: A Guide to Safe and Efficient Operations https://thenewstack.io/container-security-101-a-guide-to-safe-and-efficient-operations/ Mon, 10 Apr 2023 17:00:04 +0000 https://thenewstack.io/?p=22703831

Over the past few years, container adoption has revolutionized everything. Containers became the de facto standard of software deployments, providing

The post Container Security 101: A Guide to Safe and Efficient Operations appeared first on The New Stack.

]]>

Over the past few years, container adoption has revolutionized everything. Containers became the de facto standard of software deployments, providing a wide range of advantages such as:

  • Fast deployment
  • Automation
  • Resource isolation
  • Workload portability
  • High scalability
  • Better observability

Before we dive into the technical details, let’s ensure we’re on the same page by giving a brief recap of what containers are in the context of software development.

Containers are system processes (from the host machine’s perspective) that run with dedicated resources. The next logical question, then, is: How are these containers created?

Answering this question leads us forward in understanding the heart of this article’s topics.

Images

Containers are generated from OCI images that include the elements needed to run an application in a containerized way, such as code, config files, environment variables and libraries, as well as metadata describing the application’s needs and capabilities.

The most common scenario in container generation is developers relying on base images taken from public registries, to which they add the software they have developed.

There are different types of base images that developers can use; they could be “simple,” like a base OS image, or “complex,” and already contain information like specific system libraries or tools.

“The devil is in the details,” as the saying goes, and so is DevSecOps.

  • Can you really trust and rely on a base image made by someone else?
  • Is it safe to consider “production ready” software based on public images?

It can be challenging to ensure that selected base images will not have any security impact while executed, especially if you rely on “complex” ones.

Security? Yes, Please!

Let’s start from the basics: being aware of the risks is a good starting point to take countermeasures.

There are different sources where developers can pull base images to build their containers, mostly from public registries like:

  • Docker Hub
  • Quay.io
  • Cloud provider registries (Amazon ECR, Azure Container Registry, etc.)

Other common sources are git repositories, where developers can easily find Dockerfiles with the instructions needed for the build.

This is a great example of open source strength, as it enables everyone to build their own images starting from someone else’s work.

The downside is that risks need to be considered when deploying them in production:

  • Malicious code
  • CVEs
  • Bugs
  • Image misconfiguration

Let’s take a deeper look at these and at the easiest best practices developers can implement to avoid them.

Malicious Code

A good way to limit the risk of having an image with malicious code is to pull base images only from an official source or a verified developer.

The well-known public registries have many verified and official developers/companies that push and maintain updated images.

CVEs

All of the registries mentioned above run regular vulnerability scans and provide reports on the vulnerabilities currently detected.

No official repository can completely solve this issue if the images are not proactively updated with regular scanning and patching processes.

Bugs and Image Misconfiguration

Bugs and image misconfiguration can be mitigated by using only recent and regularly updated images. On Kubernetes, for example, a good mitigation practice could be an admission webhook that denies the deployment of containers based on images older than a given date.

Risks Involved

At this stage, it should be clear that working with containers and images is a blast, but it needs to be done the right way or at least with awareness.

Over the last few years, unfortunately, many security breaches have leveraged a compromised CI/CD supply chain, sometimes driven by malicious code injected into images, sometimes making use of known CVEs.

In 2023, DevOps teams should definitely be aware of these risks and work with their internal security teams to mitigate them.

How to Keep Risks Away? (Container Security Golden Rules)

The picture is now clear. How can we live securely, or at least reduce the risks?

The answer can be long and complicated, and will depend on the level of security required for the production workloads.

Some basic level one rules:

  • Retrieve images only from trusted registries.
  • Use only official images.
  • Check the number of vulnerabilities before considering the use.
  • Fix (at least) the critical vulnerabilities of the images.
  • Use recent images when available.

Another great piece of advice about container images: the more minimal, the better.

Using a complete OS as a base container image could be useful for troubleshooting purposes, but more libraries and executables inside the images also means a larger attack surface.

As risk mitigation, DevOps should consider the use of a minimal Linux base image (like Alpine) or a distro-less container image.

Consider, though, that this strategy makes it harder to troubleshoot. Using minimal base images only in a production environment could be a good compromise between security (where it matters most) and troubleshooting during development.
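To give a sketch of what that can look like in practice, here is an illustrative multistage Dockerfile. It assumes, purely as an example, a statically compiled Go service, and the image tags are arbitrary; the build runs on a full-featured image, and only the resulting binary ships on a minimal base.

#
# Build stage: full toolchain available for compiling
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app

#
# Final stage: minimal base image (Alpine here; a distroless image works the same way)
FROM alpine:3.17
COPY --from=build /app /app
USER nobody
ENTRYPOINT ["/app"]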

Conclusions

Taking the security aspect of base images into account, keeping them updated and secured over time can be challenging. Usually, relying on trusted sources and verified registries and using updated images is enough, but this is not always the case.

If a high level of security is mandatory for the containerized workloads, for example in finance, insurance or any other high-risk environment, a good idea might be to rely on dedicated services that offer secured, verified and regularly updated images.

The post Container Security 101: A Guide to Safe and Efficient Operations appeared first on The New Stack.

]]>
How Testcontainers Is Demonstrating Value as a Key CI Tool https://thenewstack.io/how-testcontainers-is-demonstrating-value-as-a-key-ci-tool/ Mon, 10 Apr 2023 14:55:08 +0000 https://thenewstack.io/?p=22704969

When software developers are building their microservices for use inside Docker containers, it saves a great deal of time and

The post How Testcontainers Is Demonstrating Value as a Key CI Tool appeared first on The New Stack.

]]>

When software developers are building their microservices for use inside Docker containers, it saves a great deal of time and effort for them to also test for various dependencies in parallel instead of starting from scratch after the app is done. As those dependency dots are connected, they often require changes to the app, which means doubling back and re-doing the code. And nobody likes re-doing anything.

This is where something new called Testcontainers comes to the proverbial rescue. Testcontainers is a library originally written in Java that helps developers run module-specific Docker containers while the app is being built in order to simplify integration testing. These Docker containers are lightweight, and once the tests are finished, the containers are destroyed, and developers can move on to the next project.

Modules Program

Software company AtomicJar, a pioneer in this sector, on April 5 launched its Testcontainers Official Modules program with the backing of several major vendors. Redpanda, StreamNative, Neo4j, Cockroach Labs, LocalStack, Oracle and Yugabyte were among the first to declare support for Testcontainers.

The modules catalog features more than 50 modules supporting a list of often-used technologies and provides certification, support, and maintenance of Testcontainers modules for the development community. Each community partner is committed to supporting the program as the preferred way of testing their work and to developing with other partners locally.

“Testcontainers allow developers to test and develop their code against the real dependencies they will use when the app goes live for use,” Eli Aleyner, co-founder of AtomicJar, told The New Stack. “For example, a developer could write a test that is to be executed with a real instance of Kafka, MySQL or any other technology. When the test is complete, it will tear down any dependencies. This allows developers to create self-contained, repeatable and idempotent tests that can be run either locally or in the continuous integration process (CI).”

In the background, Testcontainers utilizes its own containers to spin up the dependencies, Aleyner said. “So when a developer uses Testcontainers to say: ‘I want an instance of Kafka,’ before the test runs, the Testcontainers library will fetch the Kafka container, start it locally, handle all the port mapping and other details automatically.”

The larger impact of this approach is that it enables organizations to give developers more control and allow them to get more confidence in the software they write before checking in their code, Aleyner said.

“Previously, the only place developers used to discover integration issues was during the CI process. With Testcontainers, developers are able to shift this to the left, find issues faster and iterate quicker,” Aleyner said.

There is no substitute for speed in agile software development, and tools like this one help developers stomp down on the accelerator.

Started in Java

The Testcontainers project started in 2015 in Java and has grown to include hundreds of thousands of instances of Postgres, Elastic, MySQL, and other enterprise components, Aleyner said. Testcontainers has since evolved beyond the Java ecosystem libraries into .Net, Go, Node.js, Python, Rust and Haskell as those communities begin to realize the value of quicker iteration enabled through this library.

Since its inception, the Testcontainers library has been implemented in seven languages, and it has been embraced by development teams at companies large and small, Aleyner said. DoorDash, Uber, Spotify, Netflix, Capital One and several others have talked publicly about using Testcontainers to simplify their testing setup, he said.

“In aggregate, we are currently tracking around 6 million Docker image pulls for Testcontainers a month,” Aleyner said. “We have seen the Testcontainers library being downloaded 100 million times in January of this year; we crossed 50 million downloads in May of last year, so the technology is getting a lot of traction.”

The post How Testcontainers Is Demonstrating Value as a Key CI Tool appeared first on The New Stack.

]]>
Build and Use a Custom Image with Portainer https://thenewstack.io/build-and-use-a-custom-image-with-portainer/ Sat, 08 Apr 2023 13:00:17 +0000 https://thenewstack.io/?p=22704099

Have you ever tried to deploy a local Docker registry, build a Docker image, push the image to that local

The post Build and Use a Custom Image with Portainer appeared first on The New Stack.

]]>

Have you ever tried to deploy a local Docker registry, build a Docker image, push the image to that local registry, and use the image for a new container deployment? If you have, you know how challenging and time-consuming that task can be. Get one thing wrong and nothing will work. I’ve done it enough to know that it’s not always guaranteed to work as expected.

But what if there was a much easier route? Wouldn’t it make sense to take the path that is not only more efficient but is almost a sure thing to work exactly as expected… every time?

The answer to that is a resounding yes.

To that end, the other day I set out to see just how easy that process would be with my favorite container management platform, Portainer. I was shocked at how simple it is. Now, there are a few “moving parts” you have to take care of, but far fewer than when doing it the old-fashioned command line way, and it’s also more reliable. On top of all that, it’s done entirely through the web-based GUI, so it’s not just easier, it’s far more efficient.

I’m going to show you how it’s done.

What You’ll Need

The only thing you’ll need for this is a running instance of Portainer. That’s it. Let’s make it happen.

How to Create a Custom Registry

The first thing we have to do is create a custom registry. Fortunately, Portainer has everything needed for this built right in. So, to create the new registry, log in to Portainer, select your Docker environment, and then click Applications in the left navigation. Select the first entry, titled Registry. Give the new registry a name and click Deploy the container.

Next, click Environments in the left navigation and click Add Environment. Select Docker Standalone and then click Start Wizard. In the resulting window (Figure 1), select API, give the new registry a name, and then, for Environment Address, type SERVER:5000 (where SERVER is the IP address of the hosting server).

Figure 1: Creating a new environment in Portainer.

After completing the wizard, click Connect and your new registry is ready to go. Make sure to then go to Registries (in the left navigation), click Add Registry, select Custom Registry, give the registry a name, use SERVER:5000 for the address (where SERVER is the IP address of the hosting machine), and click Add Registry.

How to Build a Custom Image

The next step is building a custom image. We’re going to build an image using Debian and NGINX. To do that, click Images in the left navigation and then click Build a New Image (Figure 2).

Figure 2: Building a new image with Portainer is fairly straightforward.

On the next page (Figure 3), give the new image a name (such as debian:nginx), click Web Editor, and then paste the following into the editor pane:

#
# Base the image on the latest version of Debian
FROM debian:latest

#
# Identify yourself as the image maintainer (where EMAIL is your email address)
LABEL maintainer="EMAIL"

#
# Update apt and update Debian
RUN apt-get update && apt-get upgrade -y

#
# Install NGINX
RUN apt-get install nginx -y

#
# Expose port 80 (or whatever port you need)
EXPOSE 80

#
# Start NGINX within the Container
CMD ["nginx", "-g", "daemon off;"]

Figure 3: Our custom image is ready to build.

Scroll down and click Build Image. It’ll take 2-5 minutes to complete the build process. When finished, click Images again in the left navigation and locate the newly built image with the name you gave it.

Tag the New Image

To make the image available to the custom registry, you must tag it as such. Remember, our custom registry URL is SERVER:5000 (where SERVER is the IP address of our registry), which is what we’ll use in the tagging process. Go back to Images and locate the newly built image. It should be listed with a tag from the name you gave the custom image (Figure 4). In my case, that was debian:nginx.

Figure 4: Our custom image was successfully built and is ready to tag.

Click the image ID (the blue random string of characters). In the resulting window, make sure to select the custom registry you created from the Registry drop-down (mine is named hive) and then type the name of the custom image you created. You should also see SERVER:5000 (where SERVER is the IP address of your hosting server) to the left of the image name (Figure 5).

Figure 5: We’re tagging our new image.

Click Tag and the image will now be available to our custom registry.
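For reference, what Portainer is doing here maps roughly to the following CLI commands (SERVER is again the IP address of the hosting machine; note that pushing to a plain HTTP registry may also require adding SERVER:5000 to Docker's insecure-registries setting):

# Tag the image for the local registry, then push it there
docker tag debian:nginx SERVER:5000/debian:nginx
docker push SERVER:5000/debian:nginx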

Deploy a Container with Our New Image

Click Containers in the left navigation and Add Container. In the resulting window (Figure 6), give the container a name, select your custom registry from the Registry drop-down, type the name of the image to use (in my case, debian:nginx), add a custom port mapping of something like 8888 for the local port and 80 for the container port, and click Deploy The Container.

Figure 6: Deploying a new container using our custom image from our custom registry.

You should be automatically redirected back to the container list where you’ll see the new container is running (Figure 7).

Figure 7: Our new container is up and running.

Point a web browser to http://SERVER:PORT (where SERVER is the IP address of your hosting server and PORT is the local port you used when configuring the container deployment). You should see the NGINX welcome screen (Figure 9).

Figure 9: NGINX is serving up web pages.

Congratulations, you’ve successfully created a local image registry, built a custom image, and deployed a container from that image. Thanks to Portainer, this process is considerably easier than doing so from the command line.

The post Build and Use a Custom Image with Portainer appeared first on The New Stack.

]]>
What Is Container Monitoring? https://thenewstack.io/what-is-container-monitoring/ Wed, 05 Apr 2023 14:31:21 +0000 https://thenewstack.io/?p=22704515

Container monitoring is the process of collecting metrics on microservices-based applications running on a container platform. Containers are designed to

The post What Is Container Monitoring? appeared first on The New Stack.

]]>

Container monitoring is the process of collecting metrics on microservices-based applications running on a container platform. Containers are designed to spin up code and shut down quickly, which makes it essential to know when something goes wrong as downtime is costly and outages damage customer trust.

Containers are an essential part of any cloud native architecture, which makes it paramount to have software that can effectively monitor and oversee container health and optimize resources to ensure high infrastructure availability.

Let’s take a look at the components of container monitoring, how to select the right software and current offerings.

Benefits and Constraints of Containers

Containers provide IT teams with a more agile, scalable, portable and resilient infrastructure. Container monitoring tools are necessary, as they let engineers resolve issues more proactively, get detailed visualizations, access performance metrics and track changes. Because engineers get all of this data in near-real time, there is good potential to reduce mean time to repair (MTTR).

Engineers must be aware of the limitations of containers: complexity and changing performance baselines. While containers can spin up quickly, they can increase infrastructure sprawl, which means greater environmental complexity. It also can be hard to define baseline performance as containerized infrastructure consistently changes.

Container monitoring must be specifically suited for the technology; legacy monitoring platforms, designed for virtualized environments, are inadequate and do not scale well with container environments. Cloud native architectures don’t rely on dedicated hardware like virtualized infrastructure, which changes monitoring requirements and processes.

How Container Monitoring Works

A container monitoring platform uses logs, tracing, notifications and analytics to gather data.

What Does Container Monitoring Data Help Users Do?

It allows users to:

  • Know when something is amiss
  • Triage the issue quickly
  • Understand the incident to prevent future occurrences

The software uses these methods to capture data on memory utilization, CPU use, CPU limits and memory limits — to name a few.
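For a quick, ad hoc look at those numbers, the raw figures can also be pulled from the command line; this is only a rough sketch and no substitute for a monitoring platform, and the kubectl commands assume a cluster with metrics-server installed.

docker stats --no-stream            # one-shot snapshot of per-container CPU, memory, network and block I/O
kubectl top pods --all-namespaces   # per-pod CPU and memory usage
kubectl top nodes                   # per-node resource usage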

Distributed tracing is an essential part of container monitoring. Tracing helps engineers understand containerized application performance and behavior. It also provides a way to identify bottlenecks and latency problems, how changes affect the overall system and what fixes work best in specific situations. It’s very effective at providing insights into the path taken by an application through a collection of microservices when it’s making a call to another system.

More comprehensive container monitoring offerings account for all stack layers. They can also produce text-based error data such as “container restart” or “could not connect to database” for quicker incident resolution. Detailed container monitoring means users can learn which types of incidents affect container performance and how shared computing resources connect with each other.

How Do You Monitor Container Health?

Container monitoring requires multiple layers throughout the entire technology stack to collect metrics about the container and any supporting infrastructure, much like application monitoring. Engineers should make sure they can use container monitoring software to track the cluster manager, cluster nodes, the daemon, container and original microservice to get a full picture of container health.

For effective monitoring, engineers must create a connection across the microservices running in containers. Instead of using service-to-service communication for multiple independent services, engineers can implement a service mesh to manage communication across microservices. Doing so allows users to standardize communication among microservices, control traffic, streamline the distributed architecture and get visibility of end-to-end communication.

How to Select a Container Monitoring Tool

In the container monitoring software selection process, it’s important to identify which functions are essential, nice to have or unnecessary. Tools often include these features:

  • Alerts: Notifications that provide information to users about incidents when they occur.
  • Anomaly detection: A function that lets users have the system continuously oversee activity and compare against programmed baseline patterns.
  • Architecture visualization: A graphical depiction of services, integrations and infrastructure that support the container ecosystem.
  • Automation: A service that performs changes to mitigate container issues without human intervention.
  • API monitoring: A function that tracks containerized environment connections to identify anomalies, traffic and user access.
  • Configuration monitoring: A capability that lets users oversee rule sets, enforce policies and log changes within the environment.
  • Dashboards and visualization: The ability to present container data visually so users can quickly see how the system is performing.

Beyond specific features and functions, there are also user experience questions to ask about the software:

  • How quickly and easily can users add instrumentation to code?
  • What is the process for alarm, alert and automation?
  • Can users see each component and layer to isolate the source of failure?
  • Can users view entire application performance for both business and technical organizations?
  • Is it possible to proactively and reactively correlate events and logs to spot abnormalities?
  • Can the software analyze, display and alarm on any set of acquired metrics?

The right container monitoring software should make it easy for engineers to create alarms and automate actions when the system reaches certain resource usage thresholds.

When it comes to container management and monitoring, the industry offers a host of open source and open-source-managed offerings: Prometheus, Kubernetes, Jaeger, Linkerd, Fluentd and cAdvisor are a few examples.

Ways Chronosphere Can Monitor Containers 

Chronosphere’s offering is built for cloud native architectures and Kubernetes to help engineering teams that are collecting container data at scale. Chronosphere’s platform can monitor all standard data ingestion for Kubernetes clusters, such as pods and nodes, using standard ingestion protocols such as Prometheus.

Container monitoring software generates a lot of data. When combined with cloud native environment metrics, this creates a data overload that outpaces infrastructure growth. This makes it important to have tools that can help refine what data is useful so that it gets to the folks who need it the most and ends up on the correct dashboards.

The Control Plane can help users fine-tune which container metrics and traces the system ingests. Plus, the Metrics Usage Analyzer puts users back in control of which container observability data is being used and, more importantly, points out when data is not used. Users decide which data is important after ingestion with the Control Plane, so their organization avoids excessive costs across its container and services infrastructure.

To see how Chronosphere can help you monitor your container environments, contact us for a demo today. 

The post What Is Container Monitoring? appeared first on The New Stack.

]]>
This Week in Computing: Malware Gone Wild https://thenewstack.io/this-week-in-computing-malware-gone-wild/ Sat, 25 Mar 2023 14:10:18 +0000 https://thenewstack.io/?p=22703513

Malware is sneaky AF. It tries to hide itself and cover up its actions. It detects when it is being

The post This Week in Computing: Malware Gone Wild appeared first on The New Stack.

]]>

Malware is sneaky AF. It tries to hide itself and cover up its actions. It detects when it is being studied in a virtual sandbox, and so it sits still to evade detection. But when it senses a less secure environment — such as an unpatched Windows 7 box — it goes wild, as if possessing a split personality.

In other words, malware can no longer be fully understood simply by studying it in a lab setting, asserted University of Maryland associate professor Tudor Dumitras in a recently posted talk from USENIX's last-ever Enigma security and privacy conference.

Today, most malware is studied by examining the execution traces that the malicious program generates (“dynamic malware analysis”). This is usually done in a controlled environment, such as a sandbox or virtual machine. Such analysis creates the signatures that describe the behavior of the malicious software.

The malware community, of course, has been long hip to this scrutiny, and has developed an evasion technique known as red pills, which helps malware detect when it is in a controlled environment, and change its behavior accordingly.

As a result, many of the signatures used in commercial malware detection packages may not adequately identify malware in all circumstances, depending on what traces the signature actually captured.

What we really need, Dumitras said, is execution traces from the wild. Dumitras led a study that collected info on real-world attacks, consisting of over 7.6 million traces from 5.4 million users.

“Sandbox traces can not account for the range of behaviors encountered in the wild.”

They found that, as Dumitras expected, traces collected in a sandbox rarely capture the full behavior of malware in the wild.

In the case of the WannaCry ransomware attack, for instance, sandbox tracing caught only 18% of all the actions that the ransomware executed in the wild.

For the keepers of malware detection engines, Dumitras advised using traces from multiple executions in the wild. He advised using three separate traces, as diminishing returns set in after that.


Reporter’s Notebook

“So far, having an AI CEO hasn’t had any catastrophic consequences for NetDragon Websoft. In fact, since Yu’s appointment, the company has outperformed Hong Kong’s stock market.” — The Hustle, on replacing CEOs with AI Chatbots.

AI “Latent space embeddings end up being a double-edged sword. They allow the model to efficiently encode and use a large amount of data, but they also cause possible problems where the AI will spit out related but wrong information.” — Geek Culture, on why ChatGPT lies.

“We think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words.” ⁦– Wired, on its editorial use of generative AI.

“I think with Kubernetes, we did a decent job on the backend. But we did not get developers, not one little bit. That was a missed opportunity to really bring the worlds together in a natural way” — Kubernetes co-founder Craig McLuckie, on how the operations-centric Kubernetes perplexed developers (See: YAML), speaking at a Docker press roundtable this week.

McLuckie also noted that 60% of machine learning workloads now run on Kubernetes.

“After listening to feedback and consulting our community, it’s clear that we made the wrong decision in sunsetting our Free Team plan. Last week we felt our communications were terrible but our policy was sound. It’s now clear that both the communications and the policy were wrong, so we’re reversing course and no longer sunsetting the Free Team plan” —Docker, responding to the outcry in the open source community over the suspension of its free Docker Hub tier for teams.

“Decorators are by far the biggest new feature, making it possible to decorate classes and their members to make them more easily reusable. […] Decorators are just syntactic glue aiming to simplify the definition of higher-order functions” — Software Engineer Sergio De Simone on the release of TypeScript 5.0, in InfoQ.

“If these details cannot be hidden from you, and you need to build a large knowledge base around stuff that does not directly contribute to implementing your program, then choose another platform.” — Hacker News commenter, on the needless complexity that came with using Microsoft Foundation Classes (MFC) for C++ coding.

Now 25 years old, the venerable Unix curl utility can enjoy an adult beverage in New Delhi.

Ken Thompson “has a long and storied history of trolling the computer industry […] he revealed, during his Turing Award lecture, that he had planted an essentially untraceable back door in the original C compiler… and it was still there.” — Liam Proven, The Register.

“It’s just like planning a dinner. You have to plan ahead and schedule everything so it’s ready when you need it.” —  Grace Hopper, 1967, explaining programming to the female audience of Cosmopolitan.

The post This Week in Computing: Malware Gone Wild appeared first on The New Stack.

]]>
Inspect Container Images with the docker scan Command https://thenewstack.io/scan-container-images-with-the-docker-scan-command/ Sat, 25 Mar 2023 13:00:21 +0000 https://thenewstack.io/?p=22702817

If you’re serious about container security, then you know it all begins at the beguine…images. No matter how much work

The post Inspect Container Images with the docker scan Command appeared first on The New Stack.

]]>

If you’re serious about container security, then you know it all begins at the beguine…images. No matter how much work you put into locking down your deployments, your network, and your infrastructure, if you base your containers on images with vulnerabilities, those deployments will simply not be secure. And simply trusting that a random image pulled from Docker Hub is enough is a big mistake.

Sure, there are verified images to be had on Docker Hub, but those verifications cost quite a bit for a company, so not every image is verified. And although you can generally trust verified images, it’s best to know, first-hand, that trust is warranted.

And as far as unverified images, every single one you attempt to use could cause you problems. To that end, you must scan them for vulnerabilities. If you find an image contains vulnerabilities, at least you’re informed and, in some cases, you could mitigate a vulnerability by updating the packages contained within an image.

Fortunately, there are a number of tools you can use to scan those images. One such tool is built right into Docker, called docker scan. It’s very easy to use and reports back very simple information about any known vulnerabilities it finds.

Let’s see just how easy the docker scan command is to use.

What You’ll Need

The only things you’ll need for this are an operating system that supports Docker and a user with admin privileges. I’m going to demonstrate on Ubuntu Server 22.04. If you’re using a different platform, you’ll need to only adjust the installation steps for installing Docker. If you already have Docker up and running, you won’t have to worry about installing anything. You’ll also need a valid Docker Hub account and an access token created for this purpose.

Let’s get busy.

Install the Latest Version of Docker

The first thing to do is to add the necessary Docker repository. To do this, you must add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null


You’ll then need to install a few dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Finally, we can install the latest version of the Docker engine with these two commands:

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


You might also want to install Docker Compose with:

sudo apt-get install docker-compose -y


To finish things up, make sure your user is a member of the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.

Create a Docker Hub Access Token

Log in to your Docker Hub account and click your user profile icon in the top right corner. From the drop-down menu, select Account Settings. In the resulting window, click Security in the left sidebar and then click New Access Token. Name the token something like DOCKER SCAN, give it Read, Write, Delete access, and click Generate.

Once the token has been generated, make sure to copy it to your computer clipboard.

Back at the terminal window, you’ll need to log in to Docker Hub with the command:

docker login


When prompted, type your Docker Hub username and then paste the access token into the terminal. Hit Enter and you should be successfully logged in.

Finally, you’ll need to accept the license with the command:

docker scan --accept-license --version


This will accept the license and print out the version of the docker command available on your system.

One thing to keep in mind is that you are limited to 10 scans a month unless you authenticate with a Snyk account. There’s a caveat to doing this in that the machine you are working on must have a web browser. So, if you’re working on a server OS, there must be a GUI because the authentication happens within a web browser.

To do this, you must run the docker scan command like so:

docker scan --login


This will generate a link for you to click that will open your default web browser. Follow the prompts to create an account and log in. Once you’ve done that, the authentication will finish and you’re ready to go.

You can now use the docker scan command.

Use the docker scan Command

Let’s run a quick scan on the nginx:latest image. To do that, issue the command:

docker scan nginx:latest


Docker will pull down the latest NGINX image and scan it for vulnerabilities. In my case, it reported the following:

Testing nginx:latest...
Organization:      xxx-k42
Package manager:   maven
Target file:       /usr/share/java
Project name:      nginx:latest:/usr/share/java
Docker image:      nginx:latest
Licenses:          enabled

✔ Tested nginx:latest for known issues, no vulnerable paths found.


Next, I tested an image that was created with numerous vulnerabilities (for this very purpose) with the command:

docker scan infoslack/dvwa

The docker scan command pulled down the test image, scanned it for vulnerabilities and came up with the mother lode (1101 issues). If you run that command, you’ll find vulnerabilities listed like this:

High severity vulnerability found in apt/libapt-pkg4.12
  Description: Improper Certificate Validation
  Info: https://security.snyk.io/vuln/SNYK-UBUNTU1404-APT-407425
  Introduced through: apt/libapt-pkg4.12@1.0.1ubuntu2.10, apt@1.0.1ubuntu2.10, apt/libapt-inst1.5@1.0.1ubuntu2.10, apt/apt-utils@1.0.1ubuntu2.10, ubuntu-meta/ubuntu-minimal@1.325
  From: apt/libapt-pkg4.12@1.0.1ubuntu2.10
  From: apt@1.0.1ubuntu2.10 > apt/libapt-pkg4.12@1.0.1ubuntu2.10
  From: apt/libapt-inst1.5@1.0.1ubuntu2.10 > apt/libapt-pkg4.12@1.0.1ubuntu2.10
  and 7 more...
  Fixed in: 1.0.1ubuntu2.17
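If a scan turns up a flood of results like this, the plugin also accepts a few flags for narrowing them down. The examples below reflect the docker scan options documented at the time of writing (they may differ in your version), and my-image:latest is a placeholder for your own image.

docker scan --severity high infoslack/dvwa                      # report only high-severity findings
docker scan --file Dockerfile --exclude-base my-image:latest    # skip issues inherited from the base image (needs the image's Dockerfile)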


And there you go. You now have the ability to easily scan Docker images for vulnerabilities. If you run a scan and come across a number of issues, you might want to steer clear of the image you scanned. After all, when you use an image with vulnerabilities, the containers you deploy will also be vulnerable.

The post Inspect Container Images with the docker scan Command appeared first on The New Stack.

]]>