Cloud Native and IT Security News and Trends | The New Stack (https://thenewstack.io/security/)

The Transformative Power of SBOMs and IBOMs for Cloud Apps | https://thenewstack.io/the-transformative-power-of-sboms-and-iboms-for-cloud-apps/ | Thu, 15 Jun 2023

As we continue to navigate the digital landscape, it is clear that every innovation brings with it a wealth of opportunities as well as a host of challenges. One of the most prevalent trends in today’s tech world is the increasing reliance on cloud-based applications. These applications offer flexibility, scalability and reliability, but they also introduce complexity, particularly when operating in multicloud or hybrid environments. We must adopt a fresh perspective to manage this ever-evolving IT ecosystem effectively.

In this blog post, I want to explore a transformative concept that could redefine the way we manage our business applications: the integration of the software bill of materials (SBOM) and infrastructure bill of materials (IBOM).

SBOM and IBOM: A Unified Approach to Tech Management

Traditionally, an SBOM serves as an inventory list detailing all components of software, including libraries and dependencies. It plays a crucial role in managing software updates, ensuring compliance and facilitating informed decision-making. However, in today’s intricate application landscape, having knowledge of the software alone is insufficient.

This is where the concept of the IBOM comes into play. An IBOM is a comprehensive list of all critical components a business application requires to run, including network components, databases, message queuing systems, caching layers, cloud infrastructure components and cloud services. By integrating an SBOM and an IBOM, we can better understand our application environment. This powerful combination enables us to effectively manage critical areas such as security, performance, operations, data protection and cost control.
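
To make this concrete, here is a minimal sketch of what a combined record might look like. The field names and the small helper function are illustrative assumptions rather than a standard format such as SPDX or CycloneDX, but they show how a single inventory can answer questions across both the software and infrastructure layers.

    # Illustrative only: a hypothetical combined SBOM/IBOM record for one application.
    # Field names are assumptions, not a standard schema such as SPDX or CycloneDX.
    app_inventory = {
        "application": "payments-service",
        "sbom": [  # software components and their versions
            {"name": "openssl", "version": "3.0.8", "type": "library"},
            {"name": "requests", "version": "2.31.0", "type": "library"},
        ],
        "ibom": [  # infrastructure the application depends on
            {"name": "postgres", "version": "14.8", "type": "database"},
            {"name": "redis", "version": "7.0", "type": "cache"},
            {"name": "aws-s3", "type": "cloud-service"},
        ],
    }

    def components_matching(inventory: dict, name: str) -> list[dict]:
        """Return every software or infrastructure component with the given name."""
        return [c for c in inventory["sbom"] + inventory["ibom"] if c["name"] == name]

    # Example: during a vulnerability review, check whether the app depends on OpenSSL.
    print(components_matching(app_inventory, "openssl"))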

The Business Benefits of SBOM and IBOM Integration

The integration of an SBOM and an IBOM offers numerous benefits that can enhance various aspects of business operations:

  • Security – A comprehensive view of both software and infrastructure components allows organizations to identify potential vulnerabilities early on. This level of visibility is critical for bolstering data protection and reducing overall risk. In essence, complete visibility acts as a safety net, enabling businesses to safeguard their digital assets from threats.
  • Performance – Detailed knowledge of software and infrastructure components can significantly enhance application performance. Improved performance translates into superior customer experiences and more efficient business operations, ultimately leading to increased customer satisfaction and profitability.
  • Operations – A complete view of all application components facilitates effective operational planning. This not only simplifies the deployment and maintenance of applications but also streamlines workflows and boosts operational efficiency.
  • Cost Control – The granular information provided by SBOMs and IBOMs enables businesses to make informed decisions, optimize resource utilization and manage costs effectively. By strategically deploying resources, businesses can eliminate unnecessary expenditures and invest in areas that offer the highest value.

Navigating the Complex World of Cloud-Based Applications

The rise of homegrown applications has led to a significant increase in the number of applications that need to be managed. Coupled with the shift toward cloud-based applications and the complexities associated with multicloud or hybrid environments, this trend underscores the importance of having a comprehensive SBOM and IBOM.

Without a thorough understanding of their application landscape, organizations may find it challenging to manage and prioritize operational and security tasks. SBOMs and IBOMs are indispensable tools for effective control and management in this era of rapidly evolving applications and infrastructure.

Embracing the Future of Automation and Integration: The Role of GitOps

The future of business applications presents exciting opportunities for automation and integration. As the complexity and scale of applications continue to grow, manual management is becoming increasingly challenging. Automating the creation and maintenance of SBOMs and IBOMs is crucial to keeping pace with the rapidly changing tech landscape.

One of the most promising approaches to this automation and integration is GitOps. GitOps is a paradigm or a set of practices that empowers developers to perform tasks that typically fall under IT operations’ purview. GitOps leverages the version control system as the single source of truth for declarative infrastructure and applications, enabling developers to use the same git pull requests they use for code review and collaboration to manage deployments and infrastructure changes.

In the context of SBOMs and IBOMs, GitOps can automate the process of tracking and managing changes to both software and infrastructure components. By storing the SBOM and IBOM in a git repository, any changes to the software or infrastructure can be tracked and managed through git. This simplifies the management process and enhances visibility and traceability, which are crucial for security and compliance.
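
As a rough sketch of that workflow, the script below regenerates an inventory file and commits it only when it has changed, so every revision of the SBOM and IBOM flows through normal git history and pull-request review. The file path, commit message and the generate_inventory() stub are assumptions; a real pipeline would call an actual SBOM generator and an infrastructure discovery step instead.

    # A minimal GitOps-style sketch: keep the SBOM/IBOM under version control.
    # generate_inventory() is a stand-in for a real SBOM/IBOM generation step.
    import json
    import subprocess
    from pathlib import Path

    INVENTORY_FILE = Path("inventory/app-bom.json")  # hypothetical repo path

    def generate_inventory() -> dict:
        # Placeholder: in practice this would come from a scanner or IaC state.
        return {"application": "payments-service", "sbom": [], "ibom": []}

    def commit_if_changed() -> None:
        INVENTORY_FILE.parent.mkdir(parents=True, exist_ok=True)
        INVENTORY_FILE.write_text(json.dumps(generate_inventory(), indent=2))
        status = subprocess.run(
            ["git", "status", "--porcelain", str(INVENTORY_FILE)],
            capture_output=True, text=True, check=True,
        )
        if status.stdout.strip():  # file changed since the last commit
            subprocess.run(["git", "add", str(INVENTORY_FILE)], check=True)
            subprocess.run(
                ["git", "commit", "-m", "chore: update SBOM/IBOM inventory"], check=True
            )

    if __name__ == "__main__":
        commit_if_changed()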

Moreover, these automated systems could be integrated into secure, automated supply chains, marking this technological revolution’s next phase. This is an exciting prospect and one that holds immense potential for businesses looking to streamline their operations and enhance their efficiency. With GitOps, the creation and maintenance of SBOMs and IBOMs become a part of the natural development workflow, making it easier to keep up with the fast-paced world of cloud-based applications.

The Role of SBOMs and IBOMs in Compliance and Auditing

Another significant advantage of integrating SBOMs and IBOMs is their crucial role in compliance and auditing. In today’s digital landscape, the emphasis on data privacy and security has never been greater. Businesses must adhere to many regulations, from data protection laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) to industry-specific regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in healthcare and the Payment Card Industry Data Security Standard (PCI DSS) in finance.

Having comprehensive SBOMs and IBOMs provides the necessary transparency and traceability to meet these regulatory requirements. They serve as a detailed inventory of all software and infrastructure components, including their versions, configurations and interdependencies. This level of detail is crucial for demonstrating compliance with regulations requiring businesses to thoroughly understand their IT environment.

For instance, in the event of a data breach, an SBOM and IBOM can help a team identify which components were affected and assess the extent of the breach. This can aid in incident response and reporting, both of which are key requirements of data protection regulations.
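
A simple illustration of that breach-scoping exercise: given a compromised component and version, walk the combined inventories and list the affected applications. The inventory structure here is a hypothetical one used only for the example.

    # Hypothetical breach-scoping helper: given a compromised component and version,
    # list the applications whose SBOM or IBOM includes it.
    inventories = [
        {"application": "payments-service",
         "sbom": [{"name": "openssl", "version": "3.0.8"}],
         "ibom": [{"name": "postgres", "version": "14.8"}]},
        {"application": "orders-service",
         "sbom": [{"name": "openssl", "version": "1.1.1t"}],
         "ibom": [{"name": "mysql", "version": "8.0"}]},
    ]

    def affected_applications(component: str, version: str) -> list[str]:
        hits = []
        for inv in inventories:
            for item in inv["sbom"] + inv["ibom"]:
                if item["name"] == component and item.get("version") == version:
                    hits.append(inv["application"])
                    break
        return hits

    # Which applications ran the compromised OpenSSL build?
    print(affected_applications("openssl", "3.0.8"))  # ['payments-service']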

The integration of SBOM and IBOM is not just about managing complexity in the cloud-based app era. It’s also about ensuring that businesses can meet their compliance obligations and maintain the trust of their customers in an increasingly regulated and security-conscious digital landscape.

The Future Is Integrated

As we continue to navigate the digital future, it’s clear that the integration of SBOMs and IBOMs will play a pivotal role in managing the complexity of cloud-based applications. By providing a comprehensive view of the application environment, they can help businesses enhance security, improve performance, streamline operations and control costs.

The future of business applications is undoubtedly integrated. By embracing the power of SBOMs and IBOMs, businesses can not only navigate the complexities of the digital landscape but also unlock new opportunities for growth and innovation. As we continue to explore the potential of these tools, one thing is clear: The future of tech management is here, and it’s integrated.

Salesforce Officially Launches Einstein AI-Based Data Cloud | https://thenewstack.io/salesforce-officially-launches-einstein-ai-based-data-cloud/ | Thu, 15 Jun 2023

Salesforce has been sprinkling its brand of Einstein AI into a bevy of its products during the past couple of years, including such popular services as CRM Cloud and Marketing Cloud. Now it’s going all-out for AI in a dedicated platform.

After introducing it at last September’s Dreamforce conference, the company on June 12 officially launched the dedicated generative AI service it calls Data Cloud — a catch-all subscription service that can be utilized by enterprise IT staff, data scientists and line-of-business people alike.

CEO and co-founder Marc Benioff, speaking to a livestream audience from New York ahead of this week’s re:Inforce conference in Anaheim, Calif., told listeners that since its soft launch to existing customers last fall, Data Cloud has become the company’s “fastest-growing cloud EVER.”

“One of the reasons why this is becoming such an important cloud for our customers is as every customer is preparing for generative AI. They must get their data together. They must organize and prepare their data. So creating a data cloud is becoming that important,” Benioff said.

Einstein Trust Layer Maintains a Safety Shield

Salesforce Data Cloud includes something called the Einstein Trust Layer, a new AI moderation and redaction service that overlays all enterprise AI functions while providing data privacy and data security, Benioff said. The Trust Layer addresses the risks associated with adopting generative AI by meeting enterprise data security and compliance demands while offering users the continually unfolding benefits of generative AI.

“Trust is always at the start of what we do, and it’s at the end of what we do,” he said. “We came up with our first trust model for predictive (AI) in 2016, and now with generative AI, we’re able to take the same technology, and the same idea to create what we call a GPT trust layer, which we’re going to roll out to all of our customers.

“They will have the ability to use generative AI without sacrificing their data privacy and data security. This is critical for each and every one of our customers all over the world,” Benioff said.

Einstein Trust Layer aims to prevent text-generating models from retaining sensitive data, such as customer purchase orders and phone numbers. It is positioned between an app or service and a text-generating model, detecting when a prompt might contain sensitive information and automatically removing it on the backend before it reaches the model.
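
Salesforce has not published the Trust Layer’s internals, but the general pattern of intercepting a prompt and masking sensitive values before it reaches a model can be sketched roughly as follows. The regular expressions and placeholder tokens are illustrative assumptions only, not Salesforce’s implementation.

    # Illustrative prompt-redaction sketch, not Salesforce's implementation.
    import re

    REDACTION_RULES = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),            # US SSN-like values
        (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[REDACTED_CARD]"),         # card-like numbers
        (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[REDACTED_PHONE]"),
    ]

    def redact(prompt: str) -> str:
        """Mask obviously sensitive values before the prompt is sent to a model."""
        for pattern, placeholder in REDACTION_RULES:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(redact("Customer 555-12-3456 paid with 4111 1111 1111 1111, call 415-555-0100"))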

Trust Layer is aimed at companies with strict compliance and governance requirements that would normally preclude them from using generative AI tools. It’s also a way for Salesforce to address concerns about the privacy risks of generative AI, which have been raised by organizations such as Amazon, Goldman Sachs and Verizon.

How the AI in Data Cloud Works

A real-life example of how AI in the Data Cloud works was offered in a demo by the French sporting goods conglomerate Rossignol, which built its reputation on high-end ski wear and apparel, snowboarding and other winter sports equipment. Due to shortened winters, it is now moving increasingly into the year-round sporting goods market, which includes mountain bikes and other products, so its product SKUs are multiplying fast.

Bringing up a Rossignol product list in a demo for the audience, a company staffer was able to generate and populate the descriptions of dozens of products (already in the company’s storage bank) into a spreadsheet, a task that normally would have taken a team days to research, write and edit. The demo then showed how all those product descriptions could be translated into various languages with a few clicks, again saving considerable time for the marketing person completing the task.

Additional Salesforce News

The company also revealed its intention to infuse Einstein AI GPT into all its services by way of a distribution called GPT for Customer 360. This will make Einstein AI GPT available so enterprises can create “trusted AI-created content across every sales, service, marketing, commerce, and IT interaction, at hyperscale,” Benioff said.

Salesforce revealed new generative AI research. Key data points include:

  • While 61% of employees use or plan to use generative AI at work, nearly 60% of those don’t know how to do so in a trusted way.
  • Some 73% of employees believe generative AI introduces new security risks, yet 83% of C-suite leaders claim they know how to use generative AI while keeping data secure, compared to only 29% of individual contributors. This shows a clear disconnect between leadership and individual contributors.

Red Hat Launches OpenStack Platform 17.1 with Enhanced Security | https://thenewstack.io/red-hat-launches-openstack-platform-17-1-with-enhanced-security/ | Wed, 14 Jun 2023

VANCOUVER — At OpenInfra Summit here, Red Hat announced the impending release of its OpenStack Platform 17.1. This release is the product of the company’s ongoing commitment to support telecoms as they build their next-generation 5G network infrastructures.

In addition to bridging existing 4G technologies with emerging 5G networks, the platform enables advanced use cases like 5G standalone (SA) core, open virtualized radio access networks (RAN), and network, storage, and compute functionalities, all with increased resilience. And, when it comes to telecoms, the name of the game is resilience. Without it, your phone won’t work, and that can’t happen.

Runs On OpenShift

The newest version of the OpenStack Platform runs on Red Hat OpenShift, the company’s Kubernetes distro, which in turn runs on Red Hat Enterprise Linux (RHEL) 8.4 or 9.2. This means it can support logical volume management partitioning and Domain Name System as a Service (DNSaaS).

The volume management partitioning enables short-lived snapshot and revert functionality, letting service providers roll back to a previous state during upgrades if something goes wrong. Of course, we all know that everything goes smoothly during updates and upgrades. Not.

This take on DNSaaS includes a framework for integration with Compute (Nova) and OpenStack Networking (Neutron) notifications, allowing auto-generated DNS records. In addition, DNSaaS includes integration support for Bind9.

Other Improvements

Red Hat also announced improvements to the Open Virtual Networking (OVN) capabilities, Octavia load balancer, and virtual data path acceleration. These enhancements ensure higher network service quality and improved OVN migration time for large-scale deployments.

OpenStack Platform 17.1 continues its legacy of providing a secure and flexible private cloud built on open source foundations. This latest release offers role-based access control (RBAC), FIPS-140 (ISO/IEC 19790) compatibility, federation through OpenID Connect, and Fernet tokens, ensuring a safer, more controlled IT environment.

Looking ahead to the next version, Red Hat software engineers are working on making it much easier to upgrade its OpenStack distro from one version to the next. Historically, this has always been a major headache for all versions of OpenStack. Red Hat’s control plane-based approach, a year or so in the future, sounds very promising.

Survey Says: Cloud Maturity Matters | https://thenewstack.io/survey-says-cloud-maturity-matters/ | Tue, 13 Jun 2023

The third annual State of Cloud Strategy Survey, commissioned by HashiCorp and conducted by Forrester Consulting, focuses on operational cloud maturity — defined not by the amount of cloud usage but by adoption of a combination of technology and organizational best practices at scale.

The results were unambiguous: The organizations using operational best practices are deriving the biggest benefits from their cloud efforts, in everything from security and compliance to availability and the ability to cope with the ongoing shortage of critical cloud skills. High-maturity companies were more likely to report increases in cloud spending and less likely to say they were wasting money on avoidable cloud spending.

Seven headline numbers capture many of the survey’s most important findings, and you can view the interactive State of Cloud Strategy Survey microsite for detailed results and methodology. Read on to learn more about our cloud maturity model and some of the key differences we found between high and low cloud-maturity organizations.

Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023

Our Cloud Maturity Model

To fully understand the survey results you need to know something about the cloud maturity model developed by HashiCorp and Forrester to describe where organizations are in their cloud adoption journey. HashiCorp commissioned Forrester Consulting to survey almost 1,000 technology practitioners and decision-makers from companies in a variety of industries around the world, primarily those with more than 1,000 employees.

Forrester asked about their use of best practices across technology layers including infrastructure, security, networking and applications, as well as their use of platform teams, and used that data to separate respondents into three tiers:

  • Low-maturity organizations, the lowest 25% of respondents, are experimenting with these practices.
  • Medium-maturity companies, the middle 50%, are standardizing their use of these practices.
  • High-maturity respondents, the top 25%, are scaling these practices across the entire organization.

How High-Maturity Organizations Are Different

Multicloud works better for highly mature companies. More than three quarters (76%) of high-cloud-maturity organizations say multicloud is helping them achieve their business goals, and another 17% expect it to within the next 12 months. That compares to just 60% of low-maturity respondents who say multicloud is working for them, while another 22% expect it to do so in the next year.

The Great Cloud Skills Shortage

“Skills shortages” is the most commonly cited barrier to operationalizing multicloud, and almost three quarters (74%) of high-maturity respondents say multicloud helps them attract, motivate and retain talent. That compares to less than half (48%) of low-maturity organizations who can say the same. Other large differences between the benefits experienced by high- and low-maturity respondents showed up in the areas of compliance and risk (80% to 56%), infrastructure visibility/insight (82% to 59%) and speed (76% to 59%). Also significant, 79% of high-maturity organizations report that their multicloud efforts have resulted in a stronger security posture, perhaps because working in multiple cloud environments can help organizations keep their security professionals engaged, and also be a forcing function toward more intentional oversight of their security operations.

Cloud Spending and Cloud Waste

Despite macroeconomic uncertainty, 62% of highly mature companies boosted their cloud spending in the last year. That compares to 56% of respondents overall and just 38% of low-maturity organizations. Yet even as they increased cloud spending, more than half (53%) of high-maturity respondents used multicloud to cut costs, compared to just 42% of low-maturity respondents.

Avoidable cloud spending remains high, with 94% of respondents reporting some degree of cloud waste (about the same as in last year’s survey). But the factors contributing to that waste differ notably: Low cloud-maturity firms, in particular, struggle with over-provisioning resources (53%, compared to 47% for high maturity firms), idle or underused resources (55% compared to 51%) and lack of needed skills (47% vs. 43%).

Multicloud Drivers

High- and low-maturity organizations also differ on what drives their multicloud efforts. For example, along with cost reductions, reliability, scalability, security and governance, digital transformation and, especially, portability of data and applications are much more commonly cited by high-maturity organizations. On the other hand, factors such as remote working, shopping for best-fit cloud service, desire for operational efficiency, backup/disaster recovery and avoiding vendor lock-in were relatively similar across all levels of maturity.

What are the business and technology factors driving your multicloud adoption?

Base: 963 respondents who are application development and delivery practitioners and decision-makers with budget authority for new investments. Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023.

When it comes to security threats, 49% of both high- and low-maturity respondents worry about data theft (the top-ranking choice), and roughly equal percentages are concerned about phishing and social engineering attacks. Notably, though, while 61% of low-maturity companies rank password/credential/secrets leaks as a big concern, only 47% of high-maturity respondents agree. Similarly, ransomware is an issue for 47% of low-maturity respondents but just 39% of their high-maturity counterparts.

What are the biggest threats your organization faces when it comes to cloud security?

Base: 957 respondents who are application development and delivery practitioners and decision-makers with budget authority for new investments. Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023.

Find out More

You can explore the full results of the survey on HashiCorp’s interactive State of Cloud Strategy Survey microsite, where you can also download Forrester Consulting’s “​​Operational Maturity Optimizes Multicloud” study, which presents the firm’s key survey findings, analysis and recommendations for enterprises.

Open Sourcing AWS Cedar Is a Game Changer for IAM | https://thenewstack.io/open-sourcing-aws-cedar-is-a-game-changer-for-iam/ | Mon, 12 Jun 2023

In today’s cloud native world, managing permissions and access control has become a critical challenge for many organizations. As applications and microservices become more distributed, it’s essential to ensure that only the right people and systems have access to the right resources.

However, managing this complexity can be difficult, especially as teams and organizations grow. That’s why the launch of Cedar, a new open source project from Amazon Web Services, is a tectonic shift in the identity and access management (IAM) space, making it clear that the problem of in-app permissions has grown too big to ignore.

Traditionally, organizations have relied on access control lists (ACLs) and role-based access control (RBAC) to manage permissions. However, as the number of resources and users grows, it becomes difficult to manage and scale these policies. This is where policy as code emerges as a de facto standard. It enables developers to write policies as code, which can be versioned, tested and deployed like any other code. This approach is more scalable, flexible and auditable than traditional approaches.

The Advantages of Cedar

Aside from impressive performance, one of the most significant advantages of Cedar is its readability. The language is designed to be extremely readable, empowering even nontechnical stakeholders to read it (if not write it) for auditing purposes. This is critical in today’s world, where security and compliance are top priorities.

Cedar policies are written in a declarative language, which means they can be easily understood and audited. Cedar also offers features like policy testing and simulation, which make it easier to ensure that policies are enforced correctly.

Unlike some other policy languages, Cedar adheres to a stricter, more structured syntax, which provides its aforementioned readability, an emphasis on safety by default (i.e., deny by default) and stronger assurances of correctness and security thanks to verification-guided development.
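
To give a feel for that syntax, here is a small Cedar policy embedded in a Python script that asks a locally running agent for a decision. The policy follows Cedar’s documented permit(principal, action, resource) structure, but the agent URL, endpoint path and request payload are assumptions included only to illustrate the flow, not a documented API.

    # A sketch of checking a request against a Cedar policy via a local agent.
    # The policy syntax follows Cedar's documentation; the HTTP endpoint and
    # payload fields below are illustrative assumptions, not a guaranteed API.
    import requests

    CEDAR_POLICY = """
    permit(
        principal == User::"alice",
        action == Action::"view",
        resource == Photo::"beach.jpg"
    );
    """
    # The policy would be loaded into the agent separately (details omitted here).

    def is_authorized(principal: str, action: str, resource: str) -> bool:
        response = requests.post(
            "http://localhost:8180/v1/is_authorized",  # hypothetical Cedar-Agent endpoint
            json={"principal": principal, "action": action, "resource": resource},
            timeout=5,
        )
        response.raise_for_status()
        return response.json().get("decision") == "Allow"

    if __name__ == "__main__":
        print(CEDAR_POLICY)  # readable even for nontechnical reviewers
        print(is_authorized('User::"alice"', 'Action::"view"', 'Photo::"beach.jpg"'))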

Open Source Supporting Open Source

AWS has recognized the huge challenge that is application-level access control and strives to make Cedar easily consumed within its cloud via Amazon Verified Permissions (AVP). But what about on-premises deployments or other clouds? This is where other open source projects come into play.

With Cedar-Agent, developers can easily run Cedar as a standalone agent (just like Open Policy Agent) wherever they need it. And with OPAL, developers can manage Cedar-Agent at scale, from a unified event-driven control plane. OPAL makes sure that agents like OPA, AVP (Amazon Verified Permissions) and Cedar-Agent are loaded with the policy and data they need in real time.

Permit’s Unified Platform for Policy as Code

As developers, being polyglot and avoiding lock-in enables us to choose the right tool for the right job. With Permit’s SaaS platform, developers can choose between OPA’s Rego, AWS Cedar or any other tool as their policy engine of choice. And by leveraging Permit’s low code/no-code interfaces, RBAC and ABAC policy as code will be automatically generated so that users can take full advantage of policy as code without having to learn a new language.

Conclusion

The launch of AWS’ Cedar is a tectonic shift in the IAM space. It’s clear that the problem of in-app permissions has grown too big to ignore. Policy as code has emerged as a de facto standard, and tools like OPAL and Permit.io are making it easier for developers to write and manage policies at scale. Cedar’s readability and testing features make it an attractive choice for many organizations looking to manage permissions in a scalable, auditable and flexible way.

As the ecosystem continues to expand, we’ll likely see more tools and systems adopting policy as code as the preferred approach to managing permissions and access control in the cloud.

API Security: Is Authorization the Biggest Threat? | https://thenewstack.io/api-security-is-authorization-the-biggest-threat/ | Mon, 12 Jun 2023

As API usage continues to grow, so too does the need to secure APIs to prevent incidents, leakages, and outages. Authorization schemes have begun to gather attention from industry consortiums and vendors, with many seeking to address this longstanding and worsening set of API risks.

Recently, OWASP announced the 2023 update to the OWASP API Security Top 10, keeping up with the rapid pace of change.

The update took center stage at a keynote at API Days NY last month, as Erez Yalon of Checkmarx and Inon Shkedy of OWASP highlighted the increased focus on authorization controls.

Much Improved

Jeremy Snyder, founder and CEO of API security company FireTail, said at the conference that he thinks the new release is much better for that reason. “Authorization issues are the cause of more than 50% of API security problems,” he said. “It’s not only about who can see what, but also about what I can do.”

It’s necessary to protect APIs not only against improper access to sensitive data but also to protect them against improper execution of restricted functions and programs, he added.

For this to work, both the resources and programs being accessed need a list of permissions attached to them that can be matched against the list of policies attached to the API’s ID. A match means access is granted; no match means access is denied.
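
Stripped to its essence, that matching step can be sketched as a set comparison, as below. Real authorization systems layer scopes, conditions and resource hierarchies on top of this, so treat the snippet as an illustration of the idea rather than a workable design.

    # Minimal illustration of matching an identity's policies against a resource's
    # required permissions. Real authorization systems are far richer than this.
    def is_allowed(identity_policies: set[str], required_permissions: set[str]) -> bool:
        """Access is granted only if every required permission is covered."""
        return required_permissions.issubset(identity_policies)

    api_identity = {"orders:read", "orders:create"}
    print(is_allowed(api_identity, {"orders:read"}))    # True: match, access granted
    print(is_allowed(api_identity, {"orders:delete"}))  # False: no match, no access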

“The task of building an infrastructure and setting up permissions, while seemingly simple at the onset, becomes exponentially complex as a company expands and as internal and external requirements evolve. Such complexity, coupled with any misconfiguration, can lead to potentially catastrophic consequences,” said Emre Baran, co-founder and CEO of authorization specialist Cerbos. In this context, specialized solutions such as Cerbos become indispensable, he added.

Without central management and governance, it’s difficult to eliminate risks and maintain security for new IDs, resources, and programs. But plenty of vendors were on hand at the conference to offer their products and services in this area, underscoring the trend toward an API security specialization in the industry.

Finding the Right Approach

At the conference, Gartner spent a lot of time on API security. Gartner analyst Mark O’Neill highlighted the lack of security on API response messages, for example. Many tend to forget about securing the response messages, even when the invocation messages are protected, he said.

O’Neill listed five steps to ensuring your APIs are secure:

  • Inventory — list all APIs: internal, external, SaaS-based, etc.
  • Use the OWASP API Security Top 10 to calculate your security posture.
  • Ensure adequate testing, including SaaS APIs.
  • Ensure runtime protection is in place, including WAFs and API Gateways.
  • Implement fine-grained access control.

In developing an approach to API security, the first thing to do is figure out how many APIs you have, and what type of APIs you have (internal, external, third-party, SaaS, etc.).

Once you have created an inventory, next you have to check whether each of the APIs is secured against the most common API security threats and vulnerabilities, such as those listed on the OWASP API Security Top 10.

Secure Beyond the API Gateway

API gateways provide basic security by authenticating users of the API, checking any security policies configured for that API, and generating JWT tokens for passing IDs and associated policies to the next API in the call chain, if any.

Gateways can also implement rate-limiting policies to guard against DDoS attacks. They can require encrypted communication between the API client and the API. And finally, they can encrypt communication between the gateway and the program that implements the API.
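
Rate limiting is usually configured in the gateway rather than hand-written, but the underlying mechanism is easy to show. The sketch below is a simple in-memory token bucket keyed by client ID, a stand-in assumption for whatever algorithm a production gateway actually uses.

    # A toy token-bucket rate limiter of the kind an API gateway applies per client.
    import time
    from collections import defaultdict

    RATE = 5    # tokens added per second
    BURST = 10  # maximum bucket size

    _buckets = defaultdict(lambda: (float(BURST), time.monotonic()))

    def allow_request(client_id: str) -> bool:
        tokens, last = _buckets[client_id]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
        if tokens >= 1:
            _buckets[client_id] = (tokens - 1, now)
            return True
        _buckets[client_id] = (tokens, now)
        return False

    for _ in range(12):
        print(allow_request("client-123"))  # first 10 pass, then the bucket is empty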

Authentication and rate limiting are basic requirements. But they are only the beginning of the story. Security professionals have to assume that someone will break in through the gateway and recommend establishing additional defenses — especially to prevent unauthorized access to data and programs.

Credential stuffing, or username/password theft, is a common problem for APIs and typically needs the protection of an anti-bot software system with AI capabilities that can distinguish good traffic from bad, and block the bad without blocking the good.

Good monitoring and alerting tools are needed to detect API vulnerabilities and, where possible, provide automated remediation with guardrails.

API Security in the Cloud

One of the biggest challenges, especially for cloud security, is to match up the privileges associated with an ID to the permissions and policies on resources and functions. In the cloud, operations are executed using APIs that can change security policies; for example, changing an AWS S3 bucket’s permissions from private to internet-accessible is done via an API call.
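
The flip side is that the same API surface can be used to enforce guardrails. As a minimal sketch using boto3, assuming credentials are already configured and using a placeholder bucket name, a single call can block public access on a bucket outright.

    # Locking down public access on an S3 bucket with a single API call (boto3).
    # The bucket name is a placeholder; credentials come from the environment.
    import boto3

    s3 = boto3.client("s3")

    s3.put_public_access_block(
        Bucket="example-sensitive-data-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print("Public access blocked for example-sensitive-data-bucket")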

Unauthorized API calls made through an over-privileged account are a serious vulnerability, however, and were part of what happened in the famous Capital One breach.

Of course, APIs must be thoroughly tested before being put into production, especially externally facing ones.

Largest Vulnerability Area

The type and usage of APIs continue to explode, and the software industry continues to trend toward smaller API gateways complemented by specialized API tools. As a result, API security has become a major focus area for a wide variety of new startups that follow the Gartner recommendations for approaching API security and align with the new OWASP API Security Top 10.

Authorization is the largest vulnerability area that is not protected well and represents the biggest current risk for API security. Although many new startups are jumping in to close this gap, it’s fair to say that authorization remains a largely unsolved industry problem.

Eric Newcomer is CTO at Intellyx. He has been a CTO for leading integration vendors WSO2 and IONA Technologies, and Chief Architect for major enterprises such as Citibank and Credit Suisse. He has created some of the best-known industry standards and university textbooks in use today.

How DevSecOps Teams Should Approach API Security | https://thenewstack.io/how-devsecops-teams-should-approach-api-security/ | Mon, 12 Jun 2023

Software organizations need to store data and expose it over the internet to user-facing applications. The standard way to manage this is to host APIs, whose endpoints are called by both web and mobile clients. APIs are usually also broken down into manageable codebases, sometimes called microservices, which then call each other.

These components are likely familiar to anyone working in software, from business owners to developers, DevOps and compliance staff. Yet in my experience, it is common for these roles to lack a unified vision on how they approach API security. Therefore, in this post, I will provide a recommended API security setup that benefits all parties involved.

Token-Based Architectures

To secure APIs, you must send a message credential with every API request. The most secure way to protect your data is to design this credential with minimal privileges, based on end-to-end client and API flows. The credential must be unforgeable and sendable from any type of client. The JSON Web Token (JWT) format meets these requirements.

The following example shows one possible financial use case: An app sends a token to an API, which forwards it to other APIs. The token restricts access to a particular user and payment transaction. The token is locked down in business terms and is, therefore, more secure than an infrastructure-based credential, such as a client certificate.

The ability to lock down access according to business rules is the primary security behavior of the OAuth 2.0 authorization framework. On every request, the API must cryptographically verify a JWT access token, after which the API can trust the values contained within the token, which are called claims. In this example, APIs could use the received transaction_id claim to restrict access to the single payment transaction. More commonly, APIs filter resources according to business rules based on the user’s identity.
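
As an illustration of enforcing such a claim, the handler below assumes the JWT has already been verified upstream and simply applies the business rule carried in the claims. The claim and scope names mirror the example above but are otherwise hypothetical.

    # Hypothetical claim check inside an API, after the JWT has been verified.
    # Claim names (sub, scope, transaction_id) follow the example in the text.
    class Forbidden(Exception):
        pass

    def get_payment(claims: dict, requested_transaction_id: str) -> dict:
        if "payments:read" not in claims.get("scope", "").split():
            raise Forbidden("missing payments:read scope")
        if claims.get("transaction_id") != requested_transaction_id:
            raise Forbidden("token is not bound to this transaction")
        return {"transaction_id": requested_transaction_id, "status": "settled"}

    claims = {"sub": "user-123", "scope": "payments:read", "transaction_id": "tx-789"}
    print(get_payment(claims, "tx-789"))  # allowed: the claims match the request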

Access tokens issued to each client can be designed differently based on that client’s end-to-end API flows. APIs use token sharing to forward access tokens to each other so that the user identity and claims flow securely. Each API then implements authorization using the received claims. This is more secure than solutions that receive a user ID in an encrypted cookie, where the API always allows the user’s full privileges.

Using tokens in this way provides a zero trust API architecture, where APIs do not trust each other. Instead, they only trust the token issuer, which in OAuth 2.0 is the authorization server. A third party provides this specialist component, and using one should give the richest security capabilities for applications and APIs.

This article’s main focus is DevSecOps, so next I will discuss how this API architecture affects security-related roles within an organization. The main behaviors, and the benefits of a token-based architecture, are most apparent once the essential requirements from all DevSecOps stakeholders are understood.

Development Teams

When using OAuth 2.0, frontend developers don’t have to deal with the complexity of user authentication. Instead, logins are implemented using a code flow. This involves redirecting the user to authenticate via the authorization server, which provides many ways to authenticate users. The party providing the authorization server should continually add support for the most cutting-edge authentication options. However, frontend developers need to understand the moving parts, including OAuth 2.0 messages, expiry events and error conditions. Therefore they must learn some OAuth troubleshooting skills.

Meanwhile, both developers and testers need productive ways to get user-level access tokens for test users so that they can send sample API requests. There are various options here, such as using online tools to run a code flow or using mock OAuth infrastructure. The end result should be a productive setup where APIs can easily be supplied with an access token, which can then be validated using a token-signing public key downloaded from the authorization server.
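
A minimal sketch of that validation step, using the PyJWT library, might look like the following. The JWKS URL, audience and issuer values are placeholders for whatever your authorization server actually exposes.

    # Verifying a JWT access token with a signing key fetched from the
    # authorization server's JWKS endpoint (PyJWT). URLs and claims are placeholders.
    import jwt  # pip install pyjwt[crypto]

    JWKS_URL = "https://login.example.com/oauth/v2/jwks"  # placeholder JWKS endpoint
    jwks_client = jwt.PyJWKClient(JWKS_URL)

    def verify_access_token(token: str) -> dict:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience="payments-api",             # placeholder audience
            issuer="https://login.example.com",  # placeholder issuer
        )

The decoded claims returned here are what drives the kind of claims-based authorization shown earlier.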

Security Teams

Security and compliance teams have their own requirements, which are typically captured by architects when designing API solutions. These span areas like API hosting, browser security, managing personal data, auditing and regulations. The authorization server provides ways to externalize some of these difficult requirements, such as privacy and consent, from applications and APIs. Security teams should also have an awareness of OAuth 2.0 secure development best practices for APIs and clients.

The security team should first ensure that the token-based architecture meets confidentiality requirements. Access tokens delivered to APIs should use the JSON Web Token (JWT) format, yet since these are easily readable, they should not be returned to internet clients. To ensure token confidentiality, the preferred option is to use the phantom token pattern. This involves clients receiving opaque access tokens, which reveal no sensitive data. When APIs are called, an API gateway can introspect the token and forward a JWT to APIs. The end-to-end flow does not add any complexity to API code or require APIs to manage their own crypto keys.
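
A gateway-side sketch of that exchange is shown below. The introspection URL and client credentials are placeholders, and the Accept header requesting a JWT back reflects the phantom token pattern as described in Curity’s guides; other authorization servers may expose this differently.

    # Gateway-side sketch of the phantom token pattern: introspect an opaque token
    # and forward a JWT to the upstream API. URL and credentials are placeholders.
    import requests

    INTROSPECTION_URL = "https://login.example.com/oauth/v2/introspect"  # placeholder

    def exchange_opaque_token(opaque_token: str) -> str:
        """Swap an opaque token for a JWT that can be forwarded to upstream APIs."""
        response = requests.post(
            INTROSPECTION_URL,
            data={"token": opaque_token},
            headers={"Accept": "application/jwt"},  # ask the server for a JWT back
            auth=("gateway-client-id", "gateway-client-secret"),  # placeholder credentials
            timeout=5,
        )
        response.raise_for_status()
        return response.text  # forwarded upstream in the Authorization header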

Some organizations use an entitlement management system, such as Open Policy Agent, to centralize authorization. Doing so gives the security team the best visibility into which parties access important business resources. APIs using a token-based architecture integrate well with such systems, since the access token serves as a policy information point (PIP) that can be sent to a policy decision point (PDP), either from the API gateway or the API itself.

DevOps Teams

In an OAuth 2.0 architecture, APIs and user applications outsource all of the low-level security, including key management and user credentials, to the authorization server. Over time, this component therefore accumulates many intricate security settings. The DevOps team is most often responsible for maintaining its high availability and correct production configuration.

The authorization server should be considered a specialist API, hosted right next to the organization’s APIs. Doing so provides the best performance and allows control over which endpoints are exposed to the internet. DevOps teams should also understand how to de-risk authorization server deployment and upgrades. They should use an OAuth 2.0 parameterized configuration created only once, after which the same binaries and configuration are simply promoted down a pipeline.

Once the token-based architecture is live, DevOps teams need a productive way to manage Day 2 operations for both APIs and the authorization server. This should include dashboard integration, auto-healing, auto-scaling, alerts and useful technical support logs.

DevOps teams often implement security jobs related to the API gateway. An example might be implementing intelligent routing of API requests, such as to the user’s home region, to meet data sovereignty restrictions identified by the security team. For instance, an American user can be re-routed to the correct region, based on a claim in the access token, to ensure that the user’s transactions are stored in the United States.

Conclusion

Implemented correctly, an OAuth 2.0 token-based architecture provides a complete zero trust solution for APIs. The best solutions require cross-team collaboration to meet the crucial requirements of all DevSecOps roles. Business owners can then deliver digital services with future-facing security. This solution should offer multiple user authentication methods and first-class interoperability with external systems.

Since OAuth 2.0 requires a distributed architecture, teams often must learn new best practices and put in place productive technical setups. Developers can start by following solid standards-based learning resources like the Curity Guides. The security components you choose are also important. Use an API gateway with good support for the intelligent processing of API requests. Also, verify early that the proposed authorization server has up-to-date support for standards and is extensible. This will enable you to deliver the right claims to APIs and customize user authentication when required.

This article has summarized the core setup needed to implement a modern token-based architecture. Once the correct separation is in place, you can meet all of the main requirements for all DevSecOps roles. The architecture will then scale to many components and other security use cases.

Cloud-Focused Attacks Growing More Frequent, More Brazen | https://thenewstack.io/cloud-focused-attacks-growing-more-frequent-more-brazen/ | Mon, 12 Jun 2023

Cloud-focused attacks have soared in recent years, with attackers growing more sophisticated, brazen and determined in cloud exploitation, according to a new report.

Exploitations targeting cloud infrastructure increased 95% from 2021 to 2022, and cases of adversaries targeting cloud environments have nearly tripled in the same timeframe, as noted in the CrowdStrike 2023 Cloud Risk Report.

This report by the cybersecurity platform company shares in rich detail how attackers are going after enterprise cloud environments, as well as how those threat actors use the same cloud platforms to support their own malicious campaigns.

One key finding is that hackers are becoming more adept — and more motivated — in targeting enterprise cloud environments through a growing range of tactics, techniques and procedures. These include deploying command-and-control channels on top of existing cloud services, achieving privilege escalation, and moving laterally within an environment after gaining initial access.

Many cloud-focused campaigns begin with a single set of compromised account credentials, which attackers use to gain a back door into a customer’s cloud environment. “One of the big things a lot of customers don’t realize is that the adversary will use their initial access to gain access to their identity system,” said James Perry, CrowdStrike’s senior director, incident response services, at the CrowdStrike Cloud Threat Summit, a virtual event held this past Tuesday and Wednesday. (Video presentations from the event are now available on demand.)

“That allows them to use single sign-on to access many other applications, including their cloud – all they need is one password,” Perry said. “That allows them to pivot from an on-prem identity into the cloud and gain that more destructive access.”

Hackers are also getting better at avoiding detection once they’ve breached an environment: In 28% of incidents during the period when CrowdStrike collected data for this report, an attacker had manually deleted a cloud instance to hide evidence and avoid detection. Threat actors also commonly deactivate security tools running inside virtual machines once they’ve gained access, the report noted, another maneuver to evade detection.

Cloud Misconfigurations Drive Risk

But the cloud isn’t just a target for adversaries — it’s a tool, too. Attackers will use cloud infrastructure to host tools, such as phishing lure documents and malware payloads, that support their attacks.

The CrowdStrike 2023 Cloud Risk Report offers a deep dive into the various methods and attack vectors modern adversary groups are deploying today, noting the ephemeral nature of some cloud instances is pushing attackers to become even more tenacious in their pursuit of cloud compromise.

Moreover, the relative infancy of many cloud-centric paradigms and technologies, such as containers and orchestration, expands the threat surface as well. Teams may simply not know all they need to know in order to keep their cloud infrastructure and workloads safe.

Among the report’s findings:

  • Sixty percent of container workloads lack properly configured security protections, and nearly one in four are running with root-like capabilities.
  • Kubernetes (K8s) misconfigurations can create similar risks at the orchestration layer: 26% of K8s Service Account Tokens are automounted, according to CrowdStrike, which can enable unauthorized access and communication with the Kubernetes API. (One way to audit for this is sketched after this list.)
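
One rough way to audit for that automount issue is to scan the cluster for pods that still mount a service account token by default. The sketch below uses the official Kubernetes Python client and assumes local kubeconfig access; a pod that leaves the field unset inherits the service account’s default, which is typically to automount, so it is flagged as well.

    # Flag pods that may automount a service account token (Kubernetes Python client).
    # Assumes a local kubeconfig; an unset field defers to the service account default,
    # which typically means the token is mounted, so None is treated as a finding.
    from kubernetes import client, config

    def pods_with_automounted_tokens() -> list[str]:
        config.load_kube_config()
        v1 = client.CoreV1Api()
        findings = []
        for pod in v1.list_pod_for_all_namespaces().items:
            automount = pod.spec.automount_service_account_token
            if automount is None or automount:
                findings.append(f"{pod.metadata.namespace}/{pod.metadata.name}")
        return findings

    if __name__ == "__main__":
        for name in pods_with_automounted_tokens():
            print(name)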

While attack vectors and methods are increasingly varied, they often rely on some common denominators, including the oldest one around: human error. For example, 38% of observed cloud environments were running with insecure default settings from the cloud service provider.

Indeed, cloud misconfigurations are one of the major sources of breaches.

Similarly, identity access management (IAM) is another huge area of risk rife with human error. In two out of three cloud security incidents observed by CrowdStrike, IAM credentials were found to be over-permissioned, meaning the user had higher levels of privileges than necessary.

This is inextricably linked with a broader misconfiguration problem: CrowdStrike found nearly half of all detected cloud misconfigurations considered critical were the result of ineffective identity and entitlement hygiene, such as excessive permissions.

“Threat actors have become very adept at pivoting from on-prem enterprises to directly into the cloud leveraging stolen identities,” said Adam Meyers, CrowdStrike’s senior vice president of intelligence. “Identity security has become a major concern across all of our enterprise customers, as they understand that there’s not a single hack that’s taking place that doesn’t involve a compromised credential.”

Creating a Stronger Security Posture

Misconfiguration and identity challenges are highly preventable when organizations invest in the people, tooling and processes needed to get it right.

“CrowdStrike is consistently called in to investigate cloud breaches that could have been detected earlier or prevented if cloud security settings had been correctly configured,” the report said.

That speaks to a broader point: The report isn’t a doomsday story. It’s more of a call to arms, offering a blueprint for how enterprises can fight back and best protect their cloud environments from malicious actors. Since so many cloud security incidents begin with leaky credentials or oversized permissions, for example, shoring up identity and entitlement management is table stakes for a strong cloud security posture.

CrowdStrike identifies four pillars of a cloud-focused security posture that makes life difficult for even the most sophisticated adversaries.

  1. Cloud workload protection (CWP): A product that provides continuous threat monitoring and detection for cloud workloads across modern cloud environments.
  2. Cloud security posture management (CSPM): A set of processes and capabilities that detects, prevents and remediates the misconfigurations adversaries exploit.
  3. Cloud infrastructure entitlement management (CIEM): A set of features that secures cloud identities and permissions across multi-cloud environments, detects account compromises, and prevents identity misconfigurations, stolen access keys, insider threats and other malicious activity.
  4. Container security: A set of tools that perform detection, investigation and threat-hunting tasks on containers, even those that have been decommissioned.

This multi-layered approach, starting at the workload level, is crucial in today’s security landscape, said CrowdStrike president Michael Sentonas.

“If you’re not on the workload, you can’t stop an attack,” Sentonas said. “At best, you’re detecting it without the ability to do anything about it.”

The multi-pronged approach is what’s needed to protect and mitigate against both active attacks and the persistent reality of human error, he said: “Organizations need the tight native integration of an agent and an agentless solution that spans runtime to CSPM to CIEM to stop breaches from both adversaries and human error.”

Read the full report to boost your cloud security awareness and strategy.

Vetting an Open Source Database? 5 Green Flags to Look for | https://thenewstack.io/vetting-an-open-source-database-5-green-flags-to-look-for/ | Fri, 09 Jun 2023

By now, the vast majority of companies (90%, according to a report from GitHub) use open source in some way. The benefits are undeniable: Open source is cost-effective, accelerates innovation, allows for faster iteration, features robust community-driven support and can be a magnet for attracting talent.

While unsupported open source is free, most companies choose to invest in some type of supported open source solution to make their implementation of this technology robust enough to operate at enterprise scale. These solutions provide a sweet spot between the challenges of managing open source oneself and the vendor lock-in associated with proprietary software.

Given open source’s massive popularity, it’s no surprise that a plethora of supported open source solutions exist, but not all open source solutions and providers are created equally. It’s important to vet your options carefully — your mission-critical applications depend on it.

Here are five green flags to look for.

1. The Solution Offers Resiliency

Nobody wants to deal with application downtime: At best, it’s inconvenient and, at worst, it cuts into revenue and can cause reputational damage to a business. So, what happens if you experience a failure in your infrastructure or data center provider? How do you minimize the impact of planned maintenance?

Open source products, more specifically, open source databases, seldom have built-in resiliency solutions to address this.

For this reason, resiliency capabilities are the hallmark of solid open source database solutions. Depending on a company’s recovery time objective (RTO), which can range from seconds to days, businesses should look for holistic open source database solutions that offer database high-availability/disaster recovery in the event of unexpected failure and, in some cases, go further to facilitate uninterrupted application uptime during scheduled maintenance. Backup and restore capabilities, too, are an important part of the resiliency equation, so make sure any solution you adopt supports regular backups (that are actually usable!) at appropriate intervals. Backup capabilities to look for are the ability to perform full backups, incremental backups, point-in-time recovery and selective data restoration.

2. The Solution Features Robust Security

In today’s world, where high-profile data breaches are a frequent occurrence, robust security is vital. From a database perspective, supported open source solutions should provide safeguards like encryption while data is in transit and at rest, plus value-add options such as redaction for sensitive information, like credit card data. This is especially crucial for highly regulated industries like financial services, health care and government that handle our most sensitive data.

Capabilities for enhanced auditing are also important for security, as they let organizations see who did what to a given data set, and at what point in time. Additionally, employing fine-grained role-based access control (RBAC) enables companies to establish specific parameters governing data access, ensuring that information is only visible to individuals on a need-to-know basis. These are just some of the capabilities that can denote superior, safe and secure open source database solutions.

3. Your Provider Gives Back to the Community

Organizations should be invested in giving back to the open source projects their solutions support, so keep an eye out for companies who focus on driving innovation for the greater good of the community. Giving back might include things like providing funding, making significant contributions to the code or educating people on/furthering the message of the project. These are all signs of a true open source partner.

The closer a company is to the open source project its solution supports, the more adept it becomes at understanding and solving its customers’ problems. This is the most effective way it can influence the direction of the project to better support customers while simultaneously driving innovation in the community.

4. It’s True, Non-Captive Open Source

There’s an important difference between offerings that are legitimate open source versus open source-compatible. “Captive” open source solutions pose as the original open source solution from which they originated, but in reality, they are merely branches of the original code. This can result in compromised functionality or the inability to access features introduced in newer versions of the true open source solution, as the branching occurred prior to the introduction of those features. “Fake” open source can feature restrictive licensing, a lack of source code availability and a non-transparent development process.

Despite this, these solutions are sometimes still marketed as open source because, technically, the code is open to inspection and contributions are possible. But when it comes down to it, the license is held by a single company, so the degree of freedom is minute compared to that of actual open source. The key is to minimize the gap between the core database and its open source origins.

Choose solutions with licenses approved by the Open Source Initiative (OSI), which certifies that they can be freely used, modified and shared. Signs to look for include solutions that are supported by a robust community rather than driven by a single company. Solutions that frequently release new versions and features are also indicators of a quality provider.

5. The Solution Is Flexible

The database you choose should be flexible and customizable, allowing for different deployment models, integration with other systems and support for different data types and formats. A truly flexible database service can be deployed in various models, including on-premises, cloud-based, or hybrid and multicloud deployments. It also caters to different infrastructure preferences such as bare metal, hypervisor and Kubernetes. This flexibility can extend into support for multiple data models, allowing users to work with relational, document, graph or other data models within a single service to accommodate different application requirements.

Database services with flexible pricing and billing have the added benefit of allowing users to choose the most cost-effective plan based on their usage patterns. Look for solutions that offer various pricing models, such as pay-as-you-go, subscription-based or tiered pricing to maximize value for your investment.

At the end of the day, when it comes to open source database solutions, appearances can be deceiving. It is crucial for companies to invest additional time in thoroughly evaluating these solutions to avoid getting locked into an undesirable situation. When all is said and done, the rewards of effectively harnessing the power of open source are significant. By remaining vigilant and discerning throughout the evaluation process, you can identify the most suitable solution that truly fulfills your requirements. Look for those green flags.

The post Vetting an Open Source Database? 5 Green Flags to Look for appeared first on The New Stack.

]]>
Unlocking DevSecOps’ Potential Challenges, Successes, Future https://thenewstack.io/unlocking-devsecops-potential-challenges-successes-future/ Fri, 09 Jun 2023 14:40:20 +0000 https://thenewstack.io/?p=22710577

It has been more than 15 years since DevOps emerged on the technology landscape, promising to revolutionize team collaboration and

The post Unlocking DevSecOps’ Potential Challenges, Successes, Future appeared first on The New Stack.

]]>

It has been more than 15 years since DevOps emerged on the technology landscape, promising to revolutionize team collaboration and streamline development processes. While some people now say platform engineering is the one true way forward, DevOps' scope widened to include security, giving rise to DevSecOps, which remains influential. Unfortunately, even as the need for coding and operational security grows, a Progress Software study has found that many organizations have struggled to implement DevSecOps.

To find out why, Progress interviewed 606 IT/Security/App Dev and DevOps decision-makers from organizations with over 500 employees across 11 countries. The survey’s goals were to identify what was hindering DevSecOps success and to uncover best practices from companies with thriving DevSecOps programs.

The Challenges

They found:

  1. DevSecOps success has been hindered by complexity and constant change.
  2. Effective DevSecOps requires collaboration and investment in culture.
  3. The desire to succeed in DevSecOps did not guarantee mastery of its practices.

These DevSecOps challenges included complexity, competing priorities, and a lack of clear business impact and Return on Investment (ROI). Additionally, while the participants recognized the potential benefits of adopting cloud native technology, AI, and Policy as Code in their DevSecOps strategy, they had trouble demonstrating the ROI for these investments. That, of course, made it difficult to secure buy-in from stakeholders.

In addition, despite security threats being the primary driver of DevSecOps evolution, many respondents proved only somewhat familiar with how security fits into DevSecOps. In short, they didn't really understand the techniques they were trying to use. Specifically, they had trouble prioritizing security efforts, securing different types of workloads, and meeting delivery deadlines and audit requirements.

While everyone agreed that collaboration and culture were critical factors for successfully implementing DevSecOps, only 30% of the respondents felt confident in the level of collaboration between security and development teams. Furthermore, 71% agreed that culture was the biggest barrier to DevSecOps progress, yet only 16% prioritized culture as an area for optimization in the next 12-18 months. This discrepancy underscored the need for fostering a collaborative culture within organizations.

Addressing the Challenges

Therefore, to fully harness the potential of DevSecOps, organizations must address several key challenges. These are:

  1. Overcome obstacles to collaboration: Encourage cross-functional communication and collaboration between security, app development, and other teams.
  2. Incorporate new technologies and processes: Balance modernizing technology, processes, and culture, as focusing on just one area will not be enough.
  3. Address conflicting interests: Ensure leadership prioritizes and invests in key areas that drive DevSecOps success, including adopting a holistic approach that engages teams from across the organization.
  4. Build confidence in securing cloud native adoption: Focus on implementing and leveraging the benefits of cloud-first technologies while considering cloud security.

It’s become clear that even though we’ve been using DevOps for years, many of us still haven’t mastered creating an effective DevSecOps culture. Companies must engage in honest conversations from the executive level down about where they are in their journey and how to move forward to success.

The post Unlocking DevSecOps’ Potential Challenges, Successes, Future appeared first on The New Stack.

]]>
Security as Code Protects Rapidly Developing Cloud Native Architectures https://thenewstack.io/security-as-code-protects-rapidly-developing-cloud-native-architectures/ Thu, 08 Jun 2023 17:00:23 +0000 https://thenewstack.io/?p=22709691

Enterprises are increasingly going beyond lift-and-shift migrations to adopt cloud native strategies — the approach of developing, releasing and maintaining

The post Security as Code Protects Rapidly Developing Cloud Native Architectures appeared first on The New Stack.

]]>

Enterprises are increasingly going beyond lift-and-shift migrations to adopt cloud native strategies — the approach of developing, releasing and maintaining applications, all within cloud environments. According to Gartner, more than 95% of new digital initiatives will be conducted on cloud native platforms by 2025.

As enterprises dial up the focus on cloud native functionality, they’re moving away from manual click-Ops approaches to adopt automation that enables higher velocity and better manages increasing cloud complexity and scale. HashiCorp’s State of Cloud Strategy Survey shows that 81% of enterprises are already multicloud or plan to be within a year. Of those who have adopted multicloud, 90% say it works for them.

There’s a familiar problem amid all this adoption, one that’s plagued the entire industry for years: Traditional security workflows can’t keep up. They were never designed to support a paradigm where the architecture is represented as code that can change several times a day. The velocity and scope of change of today’s cloud native architectures cause security teams to struggle.

Embracing automation is the only viable approach for security teams to support this new paradigm. Developers have leaned on Infrastructure as Code (IaC) to build these cloud native applications on a large scale, even in complex environments. Security as Code (SaC) also leverages automation to intelligently analyze and remediate security and compliance design gaps, even as context changes. It’s the missing piece that completes an enterprise cloud environment.

HashiCorp’s survey shows a whopping 89% of respondents see security as a key driver of cloud success. Cloud-service providers recognize their customers’ challenges and are making investments in security to mitigate them.

Infrastructure automation tools are a catalyst for boosting operational efficiency in development, and the same is true for security. Automation helps optimize cost and scale operations. SaC ensures these applications are built right the first time, rather than security teams rushing to put out fires after they’re deployed. Empowering security teams to codify security best practices, and enforce them autonomously, allows them to focus on the strategic work of building standards that provide the necessary guardrails for developers to move with velocity. The future of SaC should be a corollary of IaC adoption, which is growing.
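
As a rough, hedged illustration of what a codified guardrail can look like, the Python sketch below scans a simplified, hypothetical infrastructure definition (a plain list of dictionaries standing in for parsed IaC output) and flags two common misconfigurations. Real SaC tooling evaluates actual Terraform plans or live cloud APIs, but the pattern of small, machine-checkable rules is the same.

# A minimal sketch of a Security as Code check, assuming the infrastructure
# definition has already been parsed into plain Python dictionaries.

def check_resources(resources):
    """Return a list of policy violations found in the resource list."""
    violations = []
    for res in resources:
        # Rule 1: storage buckets must be encrypted at rest.
        if res["type"] == "storage_bucket" and not res.get("encrypted", False):
            violations.append(f"{res['name']}: bucket is not encrypted at rest")
        # Rule 2: firewall rules must not allow SSH from the whole internet.
        if res["type"] == "firewall_rule":
            if res.get("port") == 22 and "0.0.0.0/0" in res.get("source_ranges", []):
                violations.append(f"{res['name']}: SSH open to 0.0.0.0/0")
    return violations

# Hypothetical example input, e.g. derived from a parsed IaC plan.
example = [
    {"type": "storage_bucket", "name": "logs", "encrypted": False},
    {"type": "firewall_rule", "name": "allow-ssh", "port": 22,
     "source_ranges": ["0.0.0.0/0"]},
]

for issue in check_resources(example):
    print("POLICY VIOLATION:", issue)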

SaC helps both security and development teams operate autonomously, share responsibility, and collaborate more effectively in delivering secure products and features at the speed required by today’s business landscape. SaC is the only way that we can ensure security keeps up with the rapid pace of cloud native development.

How We Got Here

Modern application architectures have increased scale and complexity, completely outpacing traditional security methods, which can’t offer adequate protection in today’s landscape. These architectures are defined in IaC languages like Terraform and often span more than 100,000 lines of code that change frequently. This has allowed development teams to rapidly evolve the architecture, deliver infrastructure in an agile manner, and build architectures at an unparalleled scale and velocity.

These developers are increasingly empowered to choose their cloud providers, feature capabilities and tech stacks to rapidly deliver on customer needs. With all the choices developers are empowered to make, applications live in heterogeneous environments that are difficult to manage. If we were to measure the average entropy of an application architecture based on the interconnectedness of components, the curve would be exponential. Add in the false positives and lack of actionable, achievable and applicable feedback, and the impact on developer productivity is huge. This is especially detrimental at scale, and when time to market is a critical business objective.

Now consider the breadth of the community that’s creating these complex architectures. The Cloud Native Computing Foundation (CNCF) reports that there are 7.1 million cloud native developers — more than two and a half times the population of Chicago.

Multicloud strategies, diversity of cloud-feature capabilities, disparate tech stacks and an enormous base of developers combine to make security an incredibly complex undertaking. Functionality is prioritized, and often the security guardrails we need are not calculated in that developer freedom.

Why SaC

Traditional security measures simply can’t match the scale of today’s cloud native architectures, and enterprises recognize this issue. One report shows that nearly 70% of organizations believe their cloud security is only “somewhat mature,” and 79% aren’t sure they enforce policies consistently.

The answer is SaC, because it solves the most-pressing business challenges.

Say you need to deliver a unique solution for a fleeting business opportunity. Often, security considerations slow down the time to market. With SaC, instead of being an inhibitor, security becomes an accelerator. SaC provides the developers with flexible guardrails that let them operate autonomously with velocity. Developers can evolve their feature capabilities without having to slow down for security and potentially miss the window of opportunity.

SaC comes to the rescue when technology needs change, like modernizing your tech stack to pay off tech debt and adopt new capabilities. It also allows you to rapidly evolve security practices when your threat landscape changes because your business is increasingly being targeted. Enterprises struggling with compliance at scale can alleviate those challenges by leveraging SaC to automate compliance workflows to reduce the time and cost of becoming compliant.

McKinsey saw the promise of SaC as the best “and maybe only” path to securing cloud native architectures more than a year ago. In addition to being the next logical step of IaC and operating at the scale and pace of innovation with security baked in, SaC creates transparency in security design, and consistent, repeatable and reusable representations of the security architecture.

What SaC Enables

We’re already seeing the payoff. Opening up our SaC framework is the feature our customers ask for the most. It’s allowed resource-constrained security teams to stop putting out fires and elevate their strategy, leveraging automation to do the tedious work. Our customers have reported a 70% reduction in security design review time and 40% reduction of cost in delivering security design by automating design validation using SaC.

SaC is also the key to unlocking collaboration, autonomy and shared responsibility across development and security teams, enabling the DevOps and DevSecOps cultures that organizations want to adopt.

This is increasingly a priority, as 62% of organizations have a DevSecOps plan or are evaluating use cases, and 84% believe getting the right data and tools to developers is key to enabling DevSecOps, according to ESG Research. As modern application development evolves, SaC is the accelerator that allows security to keep pace with everything else.

Envisioning a Modern Security Practice

Developers have been unleashed to innovate as fast as possible, using whatever tools and cloud environments they wish. The only way to have security keep up with them is to identify best practices at the policy level, agnostic to the technology stacks these developers choose. Automation, powered by SaC, turns that from a dream to reality.

We can use SaC to fit into developers’ workflows and democratize security for them. This completely changes the dynamic of how developers and security interact. Ten years from now, the traditional workflows that rely on Word documents, Excel spreadsheets and Visio diagrams will be a thing of the past. Developers will have an increased responsibility for security, with collaboration making those efforts stronger. When security is defined as code, developers can easily change a security architecture to better meet their requirements.

Shifting to SaC allows enterprises to make security a driver of their velocity and agility. Automation improves security from reducing human error, to eliminating scaling challenges so security can keep pace with development, to providing richer security policies.

With SaC, we finally have a way to quickly make changes that deliver repeatable outcomes at the same speed as application development. As cloud native architectures become more prominent, this is the only way security can keep pace.

The post Security as Code Protects Rapidly Developing Cloud Native Architectures appeared first on The New Stack.

]]>
Chainguard Unveils Speranza: A Novel Software Signing System https://thenewstack.io/chainguard-unveils-speranza-a-novel-software-signing-system/ Wed, 07 Jun 2023 21:22:23 +0000 https://thenewstack.io/?p=22710335

Chainguard Labs, the company behind Sigstore code signing, in collaboration with MIT CSAIL and Purdue University researchers, has unveiled a

The post Chainguard Unveils Speranza: A Novel Software Signing System appeared first on The New Stack.

]]>

Chainguard Labs, the company behind Sigstore code signing, in collaboration with MIT CSAIL and Purdue University researchers, has unveiled a new preprint titled “Speranza: Usable, privacy-friendly software signing.” It describes how to balance usability and privacy in software signing techniques. The result, they hope, will augment software supply chain security.

Why is this a problem? Because as it is, there’s no guarantee that the person signing the code is actually the authorized author. “Digital signatures can bolster authenticity and authorization confidence, but long-lived cryptographic keys have well-known usability issues, and Pretty Good Privacy (PGP) signatures on PyPI [Python Package Index], for example, are ‘worse than useless.’”

That’s because PGP isn’t really a standard. In PyPI‘s case, “many are generated from weak keys or malformed certificates.” In short, there is no ultimate source of trust for PGP signatures. We need more.

In addition, Sigstore’s keyless signing flow exposes your email with every signature. For a variety of reasons, not everyone wants their name attached to a signed code artifact.

Where Speranza Comes in

That’s where the Speranza project comes in. It takes a novel approach to this problem, by proposing a solution that maintains signer anonymity while verifying software package authenticity.

It does this by incorporating zero-knowledge identity co-commitments. Zero-knowledge proofs are a cryptographic technique, popularized by blockchain systems, that lets a prover show that a piece of hidden information is valid and known to them with a high degree of certainty, without revealing the information itself.

In the Speranza model, a signer uses an automated certificate authority (CA) to generate a new pseudonym each time. These pseudonyms are manifested as Pedersen commitments. While these are cryptographically linked to plaintext email identities, they don’t reveal any further identity information. Using a zero-knowledge approach, they still, however, assure you that there’s a real, legitimate user behind the code’s signature.
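
The paper’s construction is more involved, but a toy Python sketch can show the basic shape of a Pedersen commitment: a value is hidden behind a random blinding factor, yet the commitment still binds the committer to that value. The group parameters below are tiny and insecure, chosen purely for illustration.

import secrets

# Toy Pedersen commitment over a small multiplicative group.
# These parameters are far too weak for real use; they only illustrate the idea.
p = 2 ** 127 - 1          # modulus of the group (toy choice)
g = 5                     # generator
h = 7                     # second generator; in practice log_g(h) must be unknown

def commit(message: int, blinding: int) -> int:
    """C = g^m * h^r mod p hides m but binds the committer to it."""
    return (pow(g, message, p) * pow(h, blinding, p)) % p

m = 42                            # e.g. an identity encoded as an integer (illustrative)
r = secrets.randbelow(p)          # random blinding factor kept by the signer
c = commit(m, r)

# Revealing (m, r) later lets anyone verify the commitment...
assert c == commit(m, r)
# ...while a different message under the same blinding does not match.
assert c != commit(m + 1, r)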

Adding another layer to the process, the signer uses the zero-knowledge commitment equality proofs to show that two commitments denote the same value. Thus, you can be sure a pair of signatures originated from the same identity, all while keeping that identity and any links between other signatures concealed.

Got all that? I hope so because that’s as clear as I can make it.

Speranza Can Be Trusted

Pragmatically speaking, Speranza-based signatures can be trusted. That is not the case with many current signature approaches. The npm package provenance, for example, sidesteps this issue by using machine identities, but that doesn’t help with author signature privacy.

The proposed Speranza approach also requires a package repository to maintain the mapping from package names to commitments to the identities of authorized signers. This allows the signer to create proof that the identity from the map and the identity for their latest signature are co-commitments. The project also employs techniques from key transparency to alleviate the necessity for users to download the entire package ownership map.

The Speranza Rust proof-of-concept implementation shows that the overheads for maintainers (signing) and end users (verifying) are minimal, even for repositories hosting millions of packages. The system dramatically reduces data requirements, and server costs are negligible.

In conclusion, Speranza represents a feasible, practical solution that has the potential to operate on the scale of the largest software repositories currently in existence. By successfully marrying robust verification with crucial privacy measures, it aims to enable deployment on real package repositories and in enterprise settings.

Read the paper, give the code a try, and let the Speranza team know what you think. This is still a work in progress.

The post Chainguard Unveils Speranza: A Novel Software Signing System appeared first on The New Stack.

]]>
4 Factors to Consider When Choosing a Cloud Native App Platform https://thenewstack.io/4-factors-to-consider-when-choosing-a-cloud-native-app-platform/ Fri, 02 Jun 2023 17:00:14 +0000 https://thenewstack.io/?p=22709895

Embracing the cloud widens your attack surface while your security budget stays the same. Choosing the right cloud native application

The post 4 Factors to Consider When Choosing a Cloud Native App Platform appeared first on The New Stack.

]]>

Embracing the cloud widens your attack surface while your security budget stays the same. Choosing the right cloud native application platform is therefore a crucial decision: it means managing risk and regulatory compliance across the organization, expediting app delivery and removing friction with automated security.

Every dollar spent on security must minimize security risks and streamline security while producing a return on investment (ROI) in the form of better detection or prevention. As an IT leader, finding the tool that meets this requirement is not always easy. It is tempting for CISOs and CIOs to succumb to the “shiny toy” syndrome: to buy the newest tool claiming to address the security challenges facing their hybrid environment, instead of simplifying and extending their security across the entire infrastructure with the tools they already have to secure cloud native applications.

With cloud adoption on the rise, securing cloud assets is a critical aspect of supporting digital transformation efforts and the continuous delivery of applications and services to customers faster, securely and efficiently.

However, embracing the cloud widens the attack surface, which now spans private, public and hybrid environments. A traditional approach to security simply doesn’t provide the level of protection this environment needs, and securing it requires organizations to have granular visibility over cloud events.

Organizations need a new unified approach, one that provides them with the visibility and control they need while also supporting the CI/CD pipeline, combining automated agent and agentless detection and response through the entire app life cycle.

How to Begin

To address these challenges head-on, organizations are turning to unified cloud native application-protection platforms. But how do IT and business leaders know which boxes these solutions should check? Which solution is best for addressing cloud-security threats based on the changing adversary landscape? 

To help guide the decision-making process, here are four key evaluation points:

1. Cloud Protection as an Extension of Endpoint Security

Focusing on endpoint security alone is not sufficient to secure the hybrid environments many organizations now have to protect. For those organizations, choosing the right unified security platform across endpoint and cloud workload is vital.

2. Understanding Adversary Actions against Your Cloud Workloads

Real-time, up-to-date threat intelligence is a critical consideration when evaluating security platforms. As adversaries ramp up actions to exploit cloud services, having the latest information about attacker tactics and applying it successfully is a necessary part of breach prevention.

For example, CrowdStrike researchers noted seeing adversaries targeting neglected cloud infrastructure slated for retirement that still contains sensitive data and adversaries leveraging common cloud services to obfuscate malicious activity.

A proper approach to securing cloud resources leverages enriched threat intelligence to deliver a visual representation of relationships across account roles, workloads and APIs to provide deeper context for a faster, more effective response.

3. Complete Visibility into Misconfiguration, Vulnerabilities and More

Closing the door on attackers also involves identifying the vulnerabilities and misconfiguration they’re most likely to exploit. A sound approach to cloud security will weave these capabilities into the CI/CD pipeline, enabling organizations to catch vulnerabilities early.

For example, they can create verified image policies to guarantee that only approved images can pass through the pipeline. By continuously scanning container images for known vulnerabilities and configuration issues and integrating security with developer toolchains, organizations can speed up application delivery and empower DevOps teams.
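
As a hedged sketch of what such a gate might look like inside a pipeline step, the Python below checks a candidate image against an allowlist of trusted registries and pinned digests. The registry name and digest are made up, and a production pipeline would typically verify cryptographic signatures rather than a static list.

# Minimal sketch of a CI gate for verified image policies.
# The registry name and digest below are hypothetical placeholders.

EXAMPLE_DIGEST = "sha256:" + "ab" * 32        # placeholder 64-hex-char digest
TRUSTED_REGISTRIES = {"registry.internal.example.com"}
APPROVED_DIGESTS = {EXAMPLE_DIGEST}

def image_allowed(image_ref: str) -> bool:
    """Allow only digest-pinned images from trusted registries."""
    if "@sha256:" not in image_ref:
        return False                           # tags are mutable; require a digest
    registry = image_ref.split("/", 1)[0]
    digest = "sha256:" + image_ref.split("@sha256:", 1)[1]
    return registry in TRUSTED_REGISTRIES and digest in APPROVED_DIGESTS

candidate = f"registry.internal.example.com/payments/api@{EXAMPLE_DIGEST}"
if not image_allowed(candidate):
    raise SystemExit("image rejected by policy")
print("image passed the verified image policy")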

Catching vulnerabilities is also the job of cloud-security posture-management technology. These solutions allow organizations to continuously monitor the compliance of all of their cloud resources. This ability is critical because misconfiguration is at the heart of many data leaks and breaches. Having these solutions bolster your cloud-security strategy will enable you to reduce risk and embrace the cloud with more confidence.

4. Managed Threat Hunting

Technology alone is not enough. As adversaries refine their tradecraft to avoid detection, access to managed detection and response (MDR) and advanced threat-hunting services for the cloud can be the difference in stopping a breach. Managed services should be able to leverage up-to-the-minute threat intelligence to search for stealthy and sophisticated attacks. This human touch adds a team of experts that can augment existing security capabilities and improve customers’ ability to detect and respond to threats.

Choosing the Right Cloud Native Application Protection Platform

Weighing the differences between security vendors is not always simple. However, there are some must-haves for cloud-security solutions. From detection to prevention to integration with DevOps tools, organizations need to adopt the capabilities that put them in the best position to take advantage of cloud computing as securely as possible.

The post 4 Factors to Consider When Choosing a Cloud Native App Platform appeared first on The New Stack.

]]>
VeeamON 2023: When Your Nightmare Comes True https://thenewstack.io/veeamon-2023-when-your-nightmare-comes-true/ Fri, 02 Jun 2023 14:47:43 +0000 https://thenewstack.io/?p=22709684

Conferences can run the gamut from being poorly organized with a product focus on one end of the spectrum to

The post VeeamON 2023: When Your Nightmare Comes True appeared first on The New Stack.

]]>

Conferences can run the gamut from being poorly organized with a narrow product focus on one end of the spectrum to offering both deep-dive and accessible talks chock full of information to solve real-world problems. Veeam’s annual user conference, VeeamON 2023, squarely falls into the latter category.

The key takeaway: By becoming more digitized, the amount of data organizations must manage and the number of security holes and attacks continues to explode. So, when, and not if, a ransomware or another attack shuts down your organization’s operations, you had better have a working disaster recovery system in place.

“The explosion of devices and sensors connected to IoT has increased massively the endpoints that must be managed, protected and made secure,” Anand Eswaran, CEO of Veeam, said during his keynote.

All told, the massive amount of new connections means the sheer volume of data being generated will skyrocket worldwide from 79 zettabytes today to 175 zettabytes by 2025, according to IDC numbers Eswaran discussed. “Digital transformation is happening in every single business and data is the key to covering digital transformation,” Eswaran said. “So, protecting data becomes life itself. It’s not a surprise then that cybercrime and ransomware targeting the data is exponentially on the rise.”

Much appreciated was how data and security trends were broken down into key data points and analyzed in terms of how organizations are struggling with and overcoming security threats, especially ransomware attacks. To wit, VeeamON marked the release of Veeam’s annual Ransomware Trends Report, which covered around 1,200 organizations that were victims of ransomware attacks. The insights Eswaran shared included how:

The majority of organizations seek “higher reliability and improved recoverability.” The data, Eswaran said, shows that four out of five companies felt there was a gap between how quickly they need to recover and how quickly they actually can, which points to a big gap in reliability. With those concerns in mind, four out of five companies in the survey also faced widening gaps between the amount of data businesses can afford to lose and how frequently data is protected.

Ransomware remains the top threat. In the survey, a staggering 85% of the respondents reported an attack during the past 12 months; 17 respondents reported four or more attacks in just the last couple of months. And 60% believe significant improvement is needed in how the cyber and backup teams come together, a pressing concern given that backups are the first target of the attack almost 93% of the time.

Cyber insurance remains necessary, but finding viable plans for coverage is becoming more challenging and more expensive. Premiums and deductibles are increasing, while coverage benefits become skimpier.

You don’t necessarily get your money back when you pay the ransom. “Paying ransomware does not ensure recoverability,” Eswaran said. According to the study, 21% of the respondents said their organization could not recover the data, while only 16% reported that they were able to recover their data without paying the ransom (compared to 19% in the previous year’s survey).

To recover without paying, your backups must survive. As Eswaran noted, 75% of organizations in the study lost some of their backup repositories during an attack, and when that happened, 39% of backup repositories were lost. “Imagine two out of five files gone. Two out of five hard drives — gone. Two out of five of your family pictures — gone,” Eswaran said. “That’s a huge impact.”

The secret to survivable backups is immutability. “Most of you use immutable repositories in some way, but you are actually still unable to recover your backups without paying the ransom. And why is that?” Eswaran said. “It actually means that you need to pay a little more attention to the architecture of the platform… There is clearly [often] a gap between the promise and execution of when companies say they offer immutable storage.”

The secret to recoverability is portability. “While many large organizations have multiple data centers, which helps them do this better, many do not,” Eswaran said. “A hybrid approach and data portability are supercritical. It allows you to backup to and from anywhere and recover to and from anywhere.”

It is critical to not reinfect during the recovery process. “More than half the organizations run the risk of infection because they do not have the means to ensure they have clean data during recovery,” Eswaran said. “You need immutable and air-gapped backups. You need Hybrid IT architectures, which allows you to create data portability and you need a staged recovery to prevent reinfection.”

The Ransomware Elephant in the Room

Security attacks are not the only thing that can cause an organization to lose data, especially if proper disaster recovery is not in place. If your organization is running a data center, conceivable and real threats still include floods, fires and other natural disasters. Human error and sabotage are always a threat for data in the cloud or in data centers. But during the past few years, ransomware has remained the mother of all threats. “For the last several years, we have asked the question, what’s the most common cause of outages?” Jason Buffington, vice president, market strategy, Veeam, said.

“Three years running ransomware was the cause of the most impactful events and the last two years, the most common cause of outages as well,” said Buffington while discussing the report with Dave Russell, vice president, enterprise strategy, Veeam, in the talk “Ransomware Trends Report for 2023.”

But when it comes to investing in resiliency for proper backups and other ways to protect data against ransomware and other attacks, CTOs, CxOs and other stakeholders with purchasing power are seemingly investing more in protection, but the growth in spending does not seem to be exponential.

Citing Gartner and IDC data, Russell said security budgets in general are up about three to four percent this year and are being “positively influenced.”

“There has been a lot of talk lately about how security budgets are getting positively influenced because of the cycle of trends to invest more and more in those areas. But in fact, on the recovery side, we’re seeing similar kinds of activities,” Russell said. “So, there is recognition that recovery plays a role in overall cyber resiliency.”

But when it comes to resiliency, money will eventually be spent regardless. “In cyber resiliency, you are either going to pay in advance or you’re gonna pay after the fact,” Buffington said. “So, if you don’t want to pay after the fact, i.e. ransomware or in downtime, then you better pay upfront.”

The post VeeamON 2023: When Your Nightmare Comes True appeared first on The New Stack.

]]>
Compiled Python Code Used in a New PyPI Attack https://thenewstack.io/compiled-python-code-used-in-a-new-pypi-attack/ Fri, 02 Jun 2023 13:00:58 +0000 https://thenewstack.io/?p=22709874

The Python Package Index (PyPI), can’t catch a break. The popular Python programming language code repository has been subject to

The post Compiled Python Code Used in a New PyPI Attack appeared first on The New Stack.

]]>

The Python Package Index (PyPI) can’t catch a break. The popular Python programming language code repository has been subject to numerous attacks and has had to restrict new members for a while. Now, ReversingLabs, a software supply chain company, has found a novel attack that uses compiled Python code to dodge software code security scanners.

This technique leverages the direct execution of Python byte code (PYC) files and could be the first of its kind. Great. Just great.

Stumbled on the Attack

While hunting for threats across open source repositories looking for suspicious files, ReversingLabs stumbled upon this unique supply chain attack. It employs a previously unexplored approach, exploiting the capability of PYC files to be directly executed. Thus, it avoids security tools that scan Python source code (PY) files for trouble.

The ReversingLabs crew found the suspect package when its ReversingLabs Software Supply Chain Security platform discovered suspicious behaviors in a compiled binary from the fshec2 package. Specifically, once the binary was decompiled, they discovered URLs that reference the host by IP address, as well as the creation of a process and execution of a file.

Deeper Dig

Digging deeper with a manual analysis revealed that there was nothing obviously wrong with the source code. Instead, the malicious functionality was hidden within a single compiled Python byte code file.

Unlike more commonplace attacks that rely on obfuscated code, here the entry point of the package was found in the __init__.py file, which imports a function from another plaintext file, main.py. That file contains Python source code responsible for loading the compiled Python module located in one of the other files, full.pyc.

So far, so innocent. But this function import triggered a previously unseen loading technique inside the main.py file that avoids the usual import directive, which is the simplest way to load a Python-compiled module. Had it done so, that would likely have raised a red flag. Instead, importlib, the implementation of import in Python source code that is portable to any Python interpreter, is used to avoid detection by security tools. Importlib is typically used in cases where the imported library is dynamically modified upon import. However, the library loaded by main.py was unchanged, meaning that the regular import function would have sufficed. In short, the package’s authors were up to mischief.
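
For defenders who want to recognize the pattern, here is a benign, hedged Python sketch of how a loader script can execute a compiled module through importlib instead of a plain import statement. The file name full.pyc follows the article’s example, and the module is assumed here to contain only harmless code.

# Benign sketch of the loader pattern described above: executing a compiled
# .pyc module through importlib rather than a normal "import" statement.
import importlib.machinery
import importlib.util
import os

PYC_PATH = "full.pyc"   # compiled module shipped alongside the package

if os.path.exists(PYC_PATH):
    # SourcelessFileLoader executes byte code directly; no .py source is required.
    loader = importlib.machinery.SourcelessFileLoader("full", PYC_PATH)
    spec = importlib.util.spec_from_loader("full", loader)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # runs whatever the byte code contains

    # From here the package can call module attributes as if it had been imported,
    # which is why scanners that only read .py source never see the payload.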

Malicious Module

Loader scripts such as those in this package contain a minimal amount of Python code and perform a simple action: loading a compiled Python module. It just happens to be a malicious module. Inspector, the PyPI security team’s default tool, doesn’t, at the moment, provide any way of analyzing binary files to spot malicious behaviors.

Once active, the loaded library would then execute a host of malicious functions, such as collecting usernames, hostnames, and directory listings, and fetching commands for execution using scheduled tasks or cronjob.

The attack showed an ability to evolve, with the functionality to download commands from a remote server. This would have allowed the attackers to add new programs to their malware infection. It appears a keylogger was one such addition.

Two Targets Affected

Further research confirmed that at least two targets, which appear to be developer machines, had been infected. The hosts have had their machine names, usernames, and directory listings harvested. But, since the PyPI security team removed it immediately from the repository on April 17, 2023, there aren’t likely to be any additional cases in the wild.

That said, while the fshec2 package and its associated Command-and-Control (C2) infrastructure isn’t cutting-edge, a new method of attack is still bad news. We can expect other, more experienced Python hackers to adopt this path for their more sophisticated attacks.

For us, that means we must be more suspicious of Python-compiled byte code. What looks harmless at the surface may conceal a vicious attack.

The post Compiled Python Code Used in a New PyPI Attack appeared first on The New Stack.

]]>
Demystifying WebAssembly: What Beginners Need to Know https://thenewstack.io/webassembly/webassembly-what-beginners-need-to-know/ Fri, 02 Jun 2023 12:35:19 +0000 https://thenewstack.io/?p=22708617

WebAssembly (Wasm) is a binary format that was designed to enhance the performance of web applications. It was created to

The post Demystifying WebAssembly: What Beginners Need to Know appeared first on The New Stack.

]]>

WebAssembly (Wasm) is a binary format that was designed to enhance the performance of web applications. It was created to address the limitations of JavaScript, an interpreted language that can lead to slower performance and longer page load times.

With WebAssembly, developers can compile code to a low-level binary format that can be executed by modern web browsers at near-native speeds. This can be particularly useful for applications that require intensive computation or need to process large amounts of data.

Compiling code to Wasm requires some knowledge of the programming language and tools being used, as well as an understanding of the WebAssembly format and how it interacts with the browser environment. However, the benefits of improved performance and security make it a worthwhile endeavor for many developers.

In this article, we will explore the basics of WebAssembly, including how it works with web browsers, how to compile code to Wasm, and best practices for writing secure WebAssembly code.

We will also discuss benchmarks and examples that illustrate the performance benefits of using WebAssembly compared to traditional web technologies. You will learn how WebAssembly can be used to create faster, more efficient and more secure web applications.

The Benefits of Using WebAssembly

As mentioned previously, WebAssembly offers faster execution times and improved performance compared to JavaScript, due to its efficient binary format and simpler instruction set. It enables developers to create web applications in other languages, such as C++ and Rust.

Wasm also provides a more secure environment for running code on the web. In addition to performance, there are several other benefits to using it in web development:

Portability. Wasm is designed to be language-agnostic and can be used with multiple programming languages, enabling developers to write code in their preferred language and compile it to WebAssembly for use on the web.

Security. It provides a sandboxed environment for executing code, making it more secure than executing untrusted code directly in the browser.

Interoperability. Wasm modules can be easily integrated with JavaScript, allowing developers to use existing libraries and frameworks alongside new WebAssembly modules.

Accessibility. It can be used to bring applications written in native languages to the web, making them more accessible to users without requiring them to install additional software.

WebAssembly can be represented in two forms: binary format and textual format.

The binary format is Wasm’s native format, consisting of a sequence of bytes that represent the program’s instructions and data. This binary format is designed to be compact, efficient and easily parsed by machines. The binary format is also the form that is typically transmitted over the network when a Wasm program is loaded into a web page.

The textual representation of WebAssembly, on the other hand, is a more human-readable form that is similar to assembly language. The textual format is designed to be more readable, and easier to write and debug, than the binary format. The textual format consists of a series of instructions, each represented using a mnemonic and its operands, and it can be translated to the binary format using a WebAssembly compiler.

The textual format can be useful for writing and debugging Wasm programs, as it allows developers to more easily read and understand the program’s instructions. Additionally, the textual format can be used to write programs in high-level programming languages that can then be compiled to WebAssembly, which can help to simplify the process of writing and optimizing Wasm programs.

What Is the WebAssembly Instruction Set?

WebAssembly has a simple, stack-based instruction set that is designed to be easy to optimize for performance. It supports basic types such as integers and floating-point numbers, as well as more complex data structures such as vectors and tables.

The Wasm instruction set consists of a small number of low-level instructions that can be used to build more complex programs. These instructions can be used to manipulate data types such as integers, floats and memory addresses, and to perform control flow operations such as branching and looping.

Some examples of WebAssembly instructions include:

  • i32.add: adds two 32-bit integers together.
  • f64.mul: multiplies two 64-bit floating-point numbers together.
  • i32.load: loads a 32-bit integer from memory.
  • i32.store: stores a 32-bit integer into memory.
  • br_if: branches to a given label if a condition is true.

WebAssembly instructions operate on a stack-based virtual machine, where values are pushed onto and popped off of a stack as instructions are executed. For example, the i32.add instruction pops two 32-bit integers off the stack, adds them together, and then pushes the result back onto the stack.
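
To make the stack discipline concrete, here is a small, hedged Python sketch of a toy stack machine that evaluates a couple of Wasm-like instructions. It is not a real WebAssembly runtime, only an illustration of how an instruction such as i32.add consumes and produces stack values.

# Toy stack machine illustrating how Wasm-style instructions use the stack.
# This is an illustration only, not a real WebAssembly interpreter.

def run(program):
    stack = []
    for op, *args in program:
        if op == "i32.const":          # push an immediate value
            stack.append(args[0] & 0xFFFFFFFF)
        elif op == "i32.add":          # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)   # wrap to 32 bits
        elif op == "i32.mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & 0xFFFFFFFF)
        else:
            raise ValueError(f"unknown instruction {op}")
    return stack

# Equivalent of: i32.const 1, i32.const 2, i32.add  ->  leaves 3 on the stack
print(run([("i32.const", 1), ("i32.const", 2), ("i32.add",)]))   # [3]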

This is significant because it improves the efficiency and simplicity of execution.

A stack-based architecture allows for the efficient execution of instructions. Since values are pushed onto the stack, instructions can easily access and operate on the topmost values without the need for explicit addressing or complex memory operations. This reduces the number of instructions needed to perform computations, resulting in faster execution.

Also, the stack-based model simplifies the design and implementation of the virtual machine. Instructions can be designed to work directly with values on the stack, eliminating the need for complex register management or memory addressing modes. This simplicity leads to a more compact and easier-to-understand instruction set.

The small number of instructions in the WebAssembly instruction set makes it easy to optimize and secure. Because the instructions are low-level, they can be easily translated into machine code, making Wasm programs fast and efficient.

Additionally, the fixed instruction set means that those programs are not prone to the same types of security vulnerabilities that can occur in more complex instruction sets.

How Does Wasm Work with the Browser?

WebAssembly code is loaded and executed within the browser’s sandboxed environment. It is typically loaded asynchronously using the fetch() API and then compiled and executed using the WebAssembly API.

Wasm can work with web browsers to provide efficient and secure execution of code in the client-side environment. Its code can be loaded and executed within a web page using JavaScript, and can interact with the Document Object Model (DOM) and other web APIs.

When a web page loads a WebAssembly module, the browser downloads the module’s binary file and compiles it to machine code using a virtual machine called the WebAssembly Runtime. The WebAssembly Runtime is integrated into the browser’s JavaScript engine and translates the Wasm code into machine code that can be executed by the browser’s processor.

Once the WebAssembly module is loaded and compiled, the browser can execute its functions and interact with its data. Wasm code can also call JavaScript functions and access browser APIs using JavaScript interop, which allows seamless communication between WebAssembly and JavaScript.

WebAssembly’s efficient execution can provide significant performance benefits for web applications, especially for computationally intensive tasks such as data processing or scientific calculations. Additionally, Wasm’s security model, which enforces strict memory isolation and control flow integrity, can improve the security of web applications and reduce the risk of security vulnerabilities.

How to Compile Code to WebAssembly

To compile code to WebAssembly, developers can use compilers that target the Wasm binary format, such as Clang or Emscripten.

Developers can also use languages that have built-in support for WebAssembly, such as Rust or AssemblyScript.

To compile code to WebAssembly, you will need a compiler that supports generating Wasm output. Here are some general steps:

  1. Choose a programming language that has a compiler capable of generating WebAssembly output. Some popular languages that support WebAssembly include C/C++, Rust and Go.
  2. Install the necessary tools for compiling code to WebAssembly. This can vary depending on the programming language and the specific compiler being used. For example, to compile C/C++ code to WebAssembly, you may need to install Emscripten, which is a toolchain for compiling C/C++ to WebAssembly.
  3. Write your code in the chosen programming language, making sure to follow any specific guidelines for WebAssembly output. For example, in C/C++, you may need to use special Emscripten-specific functions to interact with the browser environment.
  4. Use the compiler to generate WebAssembly output from your code. This will typically involve passing in command-line options or setting environment variables to specify that the output should be in Wasm format.

  5. Optionally, optimize the WebAssembly output for performance or size. This can be done using tools such as wasm-opt or wasm-pack.
  6. Load the generated WebAssembly code in your application or website using JavaScript or another compatible language.

Wasm modules are typically loaded asynchronously using the fetch() API.

Once the module is loaded, it can be compiled and instantiated using the WebAssembly API.

To load and run a WebAssembly module, you first need to create an instance of the module using the WebAssembly.instantiateStreaming or WebAssembly.instantiate method in JavaScript. WebAssembly.instantiateStreaming takes a fetch() response for the binary file, while WebAssembly.instantiate takes the module’s bytes as an ArrayBuffer; both return a Promise that resolves to a WebAssembly.Module object and an instance whose exports contain the module’s functions.

Once you have the WebAssembly.Module object and exported functions, you can call the exported functions to interact with the Wasm module. These functions can be called just like any other JavaScript function, but they execute WebAssembly code instead of JavaScript code.

Here’s an example of how to load and run a simple WebAssembly module in JavaScript:

// Load the WebAssembly module from a binary file
fetch('module.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))
  .then(module => {
    // Get the exported function from the module
    const add = module.instance.exports.add;

    // Call the function and print the result
    const result = add(1, 2);
    console.log(result);
  });


In this example, we use the fetch API to load the WebAssembly binary file as an ArrayBuffer, and then pass it to the WebAssembly.instantiate method to create an instance of the WebAssembly module.

We then get the exported function add from the instance, call it with arguments 1 and 2, and print the result to the console.

It’s important to note that WebAssembly modules run in a sandboxed environment and cannot access JavaScript variables or APIs directly.

To communicate with JavaScript, WebAssembly modules must use the WebAssembly.Memory and WebAssembly.Table objects to interact with data and function pointers that are passed back and forth between the WebAssembly and JavaScript environments.

Performance Advantages of WebAssembly

WebAssembly can improve performance compared to other web technologies in a number of ways.

First, Wasm code can be compiled ahead-of-time (AOT) or just-in-time (JIT) to improve performance. AOT compilation allows WebAssembly code to be compiled to machine code that can be executed directly by the CPU, bypassing the need for an interpreter.

JIT compilation, on the other hand, allows WebAssembly code to be compiled to machine code on the fly, at runtime, which can provide faster startup times and better performance for code that is executed frequently.

Additionally, WebAssembly can take advantage of hardware acceleration, such as SIMD (single instruction, multiple data) instructions, to further improve performance. SIMD instructions allow multiple operations to be performed simultaneously on a single processor core, which can significantly speed up mathematical and other data-intensive operations.

Here are some benchmarks and examples that illustrate the performance benefits of using WebAssembly.

Game of Life. A cellular automaton that involves updating a grid of cells based on a set of rules. The algorithm is simple, but it can be computationally intensive. The WebAssembly version of the algorithm runs about 10 times faster than the JavaScript version.

Image processing. Image processing algorithms can be highly optimized using SIMD instructions, which are available in WebAssembly. The Wasm version of an image processing algorithm can run about three times faster than the JavaScript version.

AI/machine learning. Machine learning algorithms can be highly compute-intensive, making them a good candidate for WebAssembly. TensorFlow.js is a popular JavaScript library for machine learning, but its performance can be improved by using the WebAssembly version of TensorFlow. In some benchmarks, the Wasm version runs about two times faster than the JavaScript version.

Audio processing. WebAssembly can be used to implement real-time audio processing algorithms. The Web Audio API provides a way to process audio data in the browser, and the WebAssembly version of an audio processing algorithm can run about two times faster than the JavaScript version.

Wasm Security Considerations

WebAssembly supports various security policies that allow web developers to control how their code interacts with the browser’s resources. For example, Wasm modules can be restricted from accessing certain APIs or executing certain types of instructions.

WebAssembly code runs within the browser’s sandboxed environment, which limits its access to the user’s system.

Wasm code is subject to the same-origin policy, which restricts access to resources from a different origin (i.e., domain, protocol and port). This prevents Wasm code from accessing sensitive resources or data on a website that it shouldn’t have access to.

WebAssembly also supports sandboxing through the use of a memory-safe execution environment. This means that Wasm code cannot access memory outside of its own allocated memory space, preventing buffer overflow attacks and other memory-related vulnerabilities.

Additionally, WebAssembly supports features such as trap handlers, which can intercept and handle potential security issues, and permissions, which allow a module to specify which resources it needs access to.

Furthermore, Wasm can be signed and verified using digital signatures, ensuring that the code has not been tampered with or modified during transmission or storage. WebAssembly code can also be executed in a secure execution environment, such as within a secure enclave, to further enhance its security.

Best Practices for Writing Secure Wasm Code

When writing WebAssembly code, there are several best practices that developers can follow to ensure the security of their code.

Validate inputs. As with any code, it is important to validate inputs to ensure that they are in the expected format and range. This can help prevent security vulnerabilities such as buffer overflows and integer overflows.

Use memory safely. WebAssembly provides low-level access to memory, which can be a source of vulnerabilities such as buffer overflows and use-after-free bugs. It is important to use memory safely by checking bounds, initializing variables and releasing memory when it is no longer needed.

Avoid branching on secret data. Branching on secret data can leak information through side channels such as timing attacks. To avoid this, it is best to use constant-time algorithms or to ensure that all branches take the same amount of time.
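
The same timing concern applies outside of WebAssembly. As a quick illustration, written in Python for brevity, a naive equality check can return as soon as the first mismatch is found and so leaks information through timing, while a constant-time comparison does not; the secret value here is a made-up example.

import hmac

SECRET_TOKEN = "s3cr3t-value"          # hypothetical secret

def naive_check(guess: str) -> bool:
    # Plain == comparison can stop at the first mismatching byte,
    # so response time may leak how much of the prefix was correct.
    return guess == SECRET_TOKEN

def constant_time_check(guess: str) -> bool:
    # hmac.compare_digest examines the full input regardless of content.
    return hmac.compare_digest(guess.encode(), SECRET_TOKEN.encode())

print(constant_time_check("s3cr3t-value"))   # True
print(constant_time_check("wrong-guess!"))   # False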

Use typed arrays. WebAssembly provides typed arrays that can be used to store and manipulate data in a type-safe manner. Using typed arrays can help prevent vulnerabilities such as buffer overflows and type confusion.

Limit access to imported functions. Imported functions can introduce vulnerabilities if they are not properly validated or if they have unintended side effects. To limit the risk, it is best to restrict access to imported functions and to validate their inputs and outputs.

Use sandboxes. To further isolate WebAssembly code from the rest of the application, it can be run in a sandboxed environment with restricted access to resources such as the file system and network. This can help prevent attackers from using WebAssembly code as a vector for attacks.

Keep code minimal. Write minimal code with clear boundaries that separate untrusted and trusted code, thus reducing the attack surface area.

Avoid using system calls as much as possible. Instead, use web APIs to perform operations that require input/output or other system-related tasks.

Use cryptographic libraries. Well-known cryptographic libraries like libsodium, Bcrypt, or scrypt can help secure your data.

The post Demystifying WebAssembly: What Beginners Need to Know appeared first on The New Stack.

]]>
PyPI Strives to Pull Itself Out of Trouble https://thenewstack.io/pypi-strives-to-pull-itself-out-of-trouble/ Thu, 01 Jun 2023 18:43:27 +0000 https://thenewstack.io/?p=22709859

The Python Package Index (PyPI), is the most popular Python programming language software repository. It’s also a mess. Earlier this year,

The post PyPI Strives to Pull Itself Out of Trouble appeared first on The New Stack.

]]>

The Python Package Index (PyPI) is the most popular Python programming language software repository. It’s also a mess. Earlier this year, the FortiGuard team discovered zero-day malware in three PyPI packages called “colorslib,” “httpslib,” and “libhttps.” Before that, 2022 closed with PyTorch-nightly on Linux being poisoned with a fake dependency. More recently, PyPI had to stop new user registrations and project creations because of a flood of malicious users. PyPI isn’t the only one to notice the user trouble. The Python Software Foundation (PSF) received three subpoenas for PyPI user data. What is going on here!?

The root problem is that Python is used extensively in many fields, and PyPI, with a full-time staff of two people and relatively little automation, simply doesn’t have the resources to deal with securing its code repository. It’s trivial to place malware in PyPI. Rubbing salt in the wound, ChatGPT and other generative AI tools have made it child’s play to create malicious code.

Cavalier Users

Users are also cavalier about using PyPI code. Package managers, such as Python’s default manager pip, use PyPI as their default source for packages and their dependencies. If you don’t look closely at what you’re installing, you won’t see malware coming until it’s too late.

As Pete Morgan, Co-founder and CSO at Phylum, a software supply chain security company, said, “The dynamics between software, security, and business are changing. The purpose of package managers like PyPI is to provide a platform for developers to share their code. But it is a company’s decision to allow that code from a stranger on the Internet to be used in the applications they build for profit. So who is responsible for ensuring the code is secure? The volunteer maintainers at PyPI are doing their best. …  Or should the business be taking more responsibility for protecting its developers when using open source code?”

As it is, “The volume of malicious users and malicious projects being created on the index in the past week has outpaced our ability to respond to it in a timely fashion, especially with multiple PyPI administrators on leave.” PyPI is now letting new users in, but the administrators are still close to being overwhelmed.

Were some of the malicious users the targets of US DoJ subpoenas? We don’t know. Neither does the PSF. What we do know is that the PSF remains “committed to the freedom, security, and privacy of our users.”

So, the PSF is adopting new data retention and disclosure policies. Specifically, going forward, PyPI is reducing how it retains and uses Internet Protocol (IP) addresses. But, while that’s all well and good, it won’t help keep out criminal hackers.

2FA

To make it harder for the crooks of code, PyPI will require every account that maintains any project or organization on PyPI to enable Two-Factor Authentication (2FA) by the end of 2023.

In other words, ordinary users will still be able to use PyPI without 2FA… for now. Eventually, as GitHub already does, PyPI will require all users to use 2FA.

When PyPI says 2FA, it means strong 2FA. SMS-based 2FA will not be supported. Instead, you’ll need to use TOTP or WebAuthn, either via authenticator apps or hardware security keys such as YubiKeys or Google Titan.

This isn’t the first time PyPI tried to get top developers to use 2FA. In July of 2022, PyPI had a security key giveaway that began mandating 2FA for the top 1% of projects on PyPI by download count. Earlier this year, PyPI introduced “Trusted Publishing.” This uses the OpenID Connect (OIDC) standard to exchange short-lived identity tokens between a trusted third-party service and PyPI.

Shortsighted?

Not everyone is thrilled with these changes. Donald Stufft, a primary PyPI maintainer whose day job is as a senior software engineer at Datadog, wrote:

There are some people who believe that efforts to improve supply chain security benefits only corporate or business users, and that individual developers should not be asked to take on an uncompensated burden for their benefit.

We believe this is shortsighted.

A compromise in the supply chain can be used to attack individual developers the same as it is able to attack corporate and business users. In fact, we believe that individual developers are in a more vulnerable position than corporate and business users.

Stufft also noted, “The workload to support end users relies heavily on a very small group of volunteers. When a user account report is seen by our trusted admins, we have to take time to properly investigate. These are often reported as an emergency, red-alert-level urgency. By mandating 2FA for project maintainers, the likelihood of account takeovers drops significantly, reserving the emergency status for truly extraordinary circumstances. Account recovery becomes part of normal routine support efforts instead of admin-level urgency.”

Will this be enough? Probably not. As Stufft commented on Reddit, PyPI “was first created back in 2002 or 2003 …, and was sort of designed as a weekend hack project to showcase an idea to bring a package repository to Python.” It’s still close to its hackish roots. In short, PyPI comes with an enormous amount of technical debt.

Hopefully, adding a dedicated PyPI Safety and Security Engineer role will help. Frankly, PyPI can use all the help it can get. It’s a virtual project that is all too vulnerable to attacks.

The post PyPI Strives to Pull Itself Out of Trouble appeared first on The New Stack.

]]>
Chainguard Improves Security for Its Container Image Registry https://thenewstack.io/chainguard-improves-security-for-its-container-image-registry/ Wed, 31 May 2023 13:30:49 +0000 https://thenewstack.io/?p=22709510

A year ago, Chainguard released Chainguard Images. These are container base images designed for a secure software supply chain. They

The post Chainguard Improves Security for Its Container Image Registry appeared first on The New Stack.

]]>

A year ago, Chainguard released Chainguard Images. These are container base images designed for a secure software supply chain. They do this by providing developers and users with continuously updated base container images with zero-known vulnerabilities. That’s all well and good, but now the well-regarded software developer security company has also upgraded how it hosts and distributes its Images to improve security.

Before this, Chainguard distributed its images using a slim wrapper over GitHub’s Container Registry. The arrangement allowed the company to focus on its tools and systems, enabling flexible adjustments to image distribution.

However, as the product gained traction and scaling became necessary, Chainguard ran into limitations. So, the business reevaluated its image distribution process and created its own registry. Leveraging its engineering team’s expertise in managing hyperscale registries, Chainguard has built the first passwordless container registry, focusing on security, efficiency, flexibility and cost-effectiveness.

How It Works

Here’s how it works. For starters, for Identity and Access Management (IAM), Chainguard relies on short-lived OpenID Connect (OIDC) instead of conventional username-password combinations. OIDC is an identity layer built on top of the OAuth 2.0 framework credentials. To ensure the registry is only accessible to authorized Chain Guard personnel, only the GitHub Actions workflow identity can push to the public Chainguard registry repository. This promotes a secure, auditable and accountable process for making changes to the repository.

On the user side, when pulling images, you can authenticate with a credential helper built into Chainguard’s chainctl CLI. This also relies on OIDC for authentication. With this approach, there are no long-lived tokens stored on the user’s computer. Both chainctl and the credential helper are aware of common OIDC-enabled execution environments such as GitHub Actions. With this, customers can also limit who and how images can be pulled.

If your environment doesn’t support OIDC, the registry also offers long-lived authentication options. For the sake of your own security, I urge you to move to an OIDC-compliant process.

For now, existing Chainguard Images customers cannot push directly to the registry; it can currently only be used to host Chainguard-created and Chainguard-managed Images.

As part of the Chainguard Enforce software supply chain control plane platform, the new Chainguard Registry supports CloudEvents to notify users of significant activities with their images. Customers can create subscriptions and receive event notifications for image pushes and pulls, including failures. They can leverage these events to initiate base image updates, conduct vulnerability scans, duplicate pushed images or audit system activities.

Cloudflare R2

Chainguard’s done this by building its own container image registry on Cloudflare R2. With this new method, the company has far greater control and has cut back considerably on its costs.

Why Cloudflare R2? Simple. It’s all about egress fees — the charges cloud providers levy for external data transfer. Chainguard opted for Cloudflare R2 for image blob distribution because it offers zero-egress-fee hosting and a fast, globally trusted distribution network, promising a sustainable model for hosting free public images without excessive costs or rate limitations.

This is a huge deal. As Jason Hall, a Chainguard software engineer, explained, “The 800-pound gorilla in the room of container image registry operators is egress fees. … Image registries move a lot of bits to a lot of users all over the world, and moving those bits can become very expensive, very quickly. In fact, just paying to move image bits is often the main cost of operating an image registry. For example, Docker’s official Nginx image has been pulled over a billion times, about 31 million times in the last week alone. The image is about 55 megabytes, so that’s 1.7 PB of egress. At S3’s standard egress pricing of $0.05/GB, that’s $85,000, to serve just the nginx image, for just one week.”

Companies that host registries have had to pay cloud providers those egress fees, and you end up paying for it as the image providers pass the costs along to you with paid plans or up-priced services.

Chainguard thinks Cloudflare R2 “fundamentally changes the story for image hosting providers and makes this a sustainable model for hosting free public images without imposing onerous costs or rate limits.” I think Cloudflare needs to pay its bills too, and eventually, there will be a charge for the service.

For now, though, Chainguard can save money and re-invest in further securing images. This sounds like a win to me. You can try Chainguard Images today to see if their security-first images work for you.

The post Chainguard Improves Security for Its Container Image Registry appeared first on The New Stack.

]]>
How to Integrate OpenShift with Keycloak https://thenewstack.io/how-to-integrate-openshift-with-keycloak/ Tue, 30 May 2023 18:00:30 +0000 https://thenewstack.io/?p=22707376

If you want to integrate with an identity provider, such as Keycloak, you must first understand how user authentication and

The post How to Integrate OpenShift with Keycloak appeared first on The New Stack.

]]>

If you want to integrate Red Hat OpenShift with an identity provider, such as Keycloak, you must first understand how user authentication and token management work. During the OAuth process, the user’s credentials are verified by the identity provider, and the user’s information is mapped to an identity in OpenShift.

But any changes made to the user’s information or credentials on the identity provider (such as deleting a user or adding a group) will not impact or invalidate an active bearer token from a previous authentication.

The API server validates the access token, but user authentication happens during the early OAuth process, so the token will remain active regardless of any changes made to the user’s information or credentials.

This means you’ve got a security vulnerability. If a bearer token remains active even after changes have been made to the user’s information or credentials, it could potentially be used by a hacker or malicious actor to access protected resources.

By default, the access token lifetime is set to 24 hours, but this can be configured using the steps described in the OpenShift documentation. When deciding on the token’s lifetime, consider how soon you want authorization-related changes made to an identity provider to take effect in the OpenShift cluster.
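
For reference, here is a minimal sketch of what that token configuration might look like, assuming the cluster-scoped OAuth resource and its tokenConfig stanza; the eight-hour value is purely illustrative, so check the documentation for your OpenShift version before applying anything like it:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    # Illustrative value: shorten access tokens from the 24-hour default
    # to eight hours so identity provider changes take effect sooner
    accessTokenMaxAgeSeconds: 28800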

OpenShift Security and Manageability

One way to address these concerns is by integrating OpenShift with Keycloak. Keycloak is an open source identity and access management (IAM) solution, originally developed by Red Hat, that can provide more control over bearer token policies and enforce access control rules more effectively. In April, it was accepted as an incubating project at the Cloud Native Computing Foundation.

As an open source solution, it’s free to use and can be customized to meet specific requirements. Additionally, the community provides regular updates and security patches, ensuring the solution remains up-to-date and secure.

Integrating OpenShift with Keycloak provides a wide range of benefits that can improve security and access control in your cluster. Some of the main benefits of this integration include:

Federation support. Keycloak provides support for federation, allowing you to integrate with external identity providers, such as Lightweight Directory Access Protocol (LDAP) or Active Directory. This enables you to leverage existing user management systems and extend their capabilities to your OpenShift applications.

Fine-grained access control. Keycloak provides features such as multi-factor authentication (MFA), social login, and identity brokering that can enhance the security of your OpenShift applications. Keycloak can also enforce complex access control policies, such as role-based access control (RBAC) and attribute-based access control (ABAC), to ensure that only authorized users can access your OpenShift applications.

Flexible token management. By integrating OpenShift with Keycloak, you can gain more control over bearer token policies. Keycloak provides a token management system that allows you to set policies for token expiration, revocation and renewal. This can help prevent unauthorized access to your OpenShift applications and reduce the impact of a leaked token.

Customizable user interfaces. Keycloak provides customizable user interfaces that can be branded to match the look and feel of your OpenShift applications. This can help create a seamless user experience for your users and reinforce your brand.

Centralized authentication and authorization. With Keycloak, you can centralize authentication and authorization for all your OpenShift applications. This means that you can manage user access across all your applications and services from a single location, simplifying user management and improving security.

Multi-factor authentication. Keycloak provides support for MFA, allowing you to add an additional layer of security to your OpenShift applications. This can include options such as SMS authentication, Google Authenticator, and email-based one-time passwords.

Single sign-on. With Keycloak, you can enable single sign-on (SSO) for your OpenShift applications. This means that users only need to authenticate once, and they can then access all the applications and services that they are authorized to use, without the need for additional logins.

Integrating OpenShift with Keycloak: Getting Started

Step 1: Create a Keycloak Realm

To create a new realm in Keycloak, follow these steps:

  1. Log in to the Keycloak web console and navigate to the “Realms” tab.
  2. Click on the “Add realm” button and enter a name for your realm.
  3. Click on “Create” to create your new realm.

Log in to Keycloak:

$ oc login -u <username> -p <password> https://<keycloak-url>/auth


Create a new realm:

$ oc create configmap keycloak-realm --from-file=<realm-config-file>.json
$ oc process -f <realm-template-file>.yaml \
  --param-file=<realm-params-file>.properties | oc apply -f -

Step 2: Create Keycloak Clients

To create a new client in Keycloak, follow these steps:

  1. Navigate to the “Clients” tab within your realm and click on “Create.”
  2. Enter a name for your client and click on “Save.”

Configure the client settings according to your requirements. For example, you can set the client protocol to “OpenID Connect” and specify the redirect URIs for your client.

Create a new client:

$ oc create configmap keycloak-client --from-file=<client-config-file>.json
$ oc process -f <client-template-file>.yaml \
  --param-file=<client-params-file>.properties | oc apply -f -

Step 3: Configure Authentication

To configure authentication for your OpenShift applications, you can follow these steps:

  1. Create an OpenID Connect identity provider within your Keycloak realm by navigating to the “Identity Providers” tab and clicking on “Add provider.”
  2. Configure the identity provider by specifying the client ID and client secret of your OpenShift client, along with the authorization and token endpoints.
  3. Configure your OpenShift application to use the OpenID Connect identity provider for authentication.

Create an OpenID Connect Identity Provider:

$ oc create configmap keycloak-oidc --from-file=<oidc-config-file>.json
$ oc process -f <oidc-template-file>.yaml \
  --param-file=<oidc-params-file>.properties | oc apply -f -

Step 4: Enforce Authorization

To enforce authorization policies for your OpenShift applications, you can follow these steps:

  1. Create groups, roles, and permissions within your Keycloak realm by navigating to the “Groups,” “Roles” and “Permissions” tabs.
  2. Assign the roles and permissions to your OpenShift clients by navigating to the “Clients” tab and selecting the client that you want to configure.
  3. Configure the access control policies according to your requirements. For example, you can create a role that allows read-only access to a particular resource, and assign this role to a specific group.

Create groups, roles and permissions within your Keycloak Realm:

$ oc create configmap keycloak-groups --from-file=<groups-config-file>.json
$ oc process -f <groups-template-file>.yaml \
  --param-file=<groups-params-file>.properties | oc apply -f -

$ oc create configmap keycloak-roles --from-file=<roles-config-file>.json
$ oc process -f <roles-template-file>.yaml \
  --param-file=<roles-params-file>.properties | oc apply -f -

$ oc create configmap keycloak-permissions \
  --from-file=<permissions-config-file>.json
$ oc process -f <permissions-template-file>.yaml \
  --param-file=<permissions-params-file>.properties | oc apply -f -

Step 5: Integrate OpenShift with Keycloak

The final step is to create a new OpenShift OAuth2 provider by creating a new custom resource of type OAuth in the openshift-config namespace. This will allow OpenShift to use Keycloak for authentication and authorization.

Create a new OpenShift OAuth2 provider by creating a new custom resource of type OAuth in the openshift-config namespace. You can use the following YAML file as a template:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keycloak
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <client-ID>
      clientSecret: <client-secret>
      extraScopes: []
      issuerURL: https://<keycloak-url>/auth/realms/<realm-name>
      claims:
        id:
        - sub
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email


Replace <client-ID>, <client-secret>, <keycloak-url>, and <realm-name> with the appropriate values for your Keycloak realm and client.

Apply the YAML file to create the OAuth Custom Resource:

$ oc apply -f <yaml-file>


Verify that the OAuth custom resource has been created:

$ oc get oauth cluster -o yaml

  1. Log in to the OpenShift console and navigate to the “OAuth” section. You should see “Keycloak” listed as an identity provider.
  2.  Test the integration by logging in to the OpenShift console using a user from your Keycloak realm.

By following these steps, you can fully integrate Keycloak with OpenShift and take advantage of its advanced authentication and authorization features.

The post How to Integrate OpenShift with Keycloak appeared first on The New Stack.

]]>
Cloud Security: Don’t Confuse Vendor and Tool Consolidation https://thenewstack.io/cloud-security-dont-confuse-vendor-and-tool-consolidation/ Tue, 30 May 2023 13:24:18 +0000 https://thenewstack.io/?p=22709273

In the current macroeconomic climate, many organizations are looking to consolidate and work with a smaller number of vendors. It’s

The post Cloud Security: Don’t Confuse Vendor and Tool Consolidation appeared first on The New Stack.

]]>

In the current macroeconomic climate, many organizations are looking to consolidate and work with a smaller number of vendors. It’s understandable. Not only are you reducing potential runaway costs and making vendor relationships easier to manage, you can also gain a more advantageous bargaining position on price. The fewer individual vendors a company has to deal with, the easier it is to manage purchasing, get legal clearances, request support and so on.

However, from a security professional’s end-user perspective, vendor consolidation doesn’t necessarily translate to greater efficiency. The reason is simple: Even when you consolidate vendors, you may not consolidate tools. Unless your vendor offers a truly integrated platform, you still end up working with a discrete set of disparate, disconnected solutions. Whether or not they happen to be provided by the same vendor doesn’t matter much.

This is a reality that cloud security teams know all too well today. As business folks push for vendor consolidation, cybersecurity practitioners are left to wonder what vendor consolidation actually means for them, or how it can improve security outcomes.

Let’s take a moment to explore this phenomenon, discuss why vendor consolidation doesn’t always yield the desired results “on the ground” and what to do to ensure that consolidation initiatives result in tangible benefits.

Why the C-Suite Loves Vendor Consolidation

To start, let’s consider why organizations prefer to consolidate cybersecurity tool vendors.

They do it because it streamlines their business processes and leaves them with fewer vendors to interface with. They get a one-stop shopping process that — just like buying groceries at a supermarket instead of going to individual bakers, butchers, produce stands and so on — will save them time. It might also result in lower overall costs because vendors are more willing to offer pricing discounts when they are selling multiple products to a single customer.

Why Cybersecurity Vendor Consolidation Doesn’t Always Live Up to Its Promise

Unfortunately, simply buying solutions from fewer vendors doesn’t necessarily deliver operational efficiencies or effective security coverage — that depends entirely on the nature of those solutions, how integrated they are and how good a user experience they provide.

If you’re an in-the-trenches application developer or security practitioner, consolidating cybersecurity-tool vendors might not mean much to you. If the vendor that your business chooses doesn’t offer an integrated platform, you’re still left juggling multiple tools.

You are constantly toggling between screens and dealing with the productivity hit that comes with endless context switching. You have to move data manually from one tool to another to aggregate, normalize, reconcile, analyze and archive it. You have to sit down and think about which alerts to prioritize because each tool is generating different alerts, and without tooling integrations, one tool is incapable of telling you how an issue it has surfaced might (or might not) be related to an alert from a different tool.

In short, vendor consolidation without an integrated platform or tight integration between the different tools (that seldom exist) doesn’t make life any easier at all for cybersecurity practitioners. It might improve business efficiency for procurement but at the same time add overhead and reduce efficiency of security operations.

A Better Approach to Cloud Security Tooling

Fortunately, it doesn’t have to be this way. It’s possible to consolidate both vendors and tools — a strategy that yields tangible benefits from both a business perspective and a security operations perspective.

In the realm of cybersecurity, and particularly in cloud native security, this approach is possible when businesses choose to work with a vendor that offers a fully unified cloud native application protection platform, or CNAPP. In fact, Gartner expects cloud native security to consolidate from the 10 or more tools/vendors used today to a more viable two to three in a few years.

A true CNAPP will integrate all of the tools that practitioners need to operate efficiently into a single solution. It does away with context switching, and it ensures that teams can draw on all available contextual data when managing alerts and remediation workflows.

At the same time, if you choose a real end-to-end CNAPP developed by a single vendor, it will deliver both the business-process consolidation that executives love and the operational efficiency that practitioners need. The business gets the one-stop cybersecurity shopping it longs for, while practitioners get a solution that addresses all aspects of cloud native application security, across all stages of the application delivery life cycle.

A Holistic Approach to Cybersecurity Vendor Consolidation

The bottom line is this: Consolidation only works when organizations think in terms of vendor consolidation and tool consolidation at the same time. Consolidating vendors alone offers little value if it leaves practitioners struggling to manage discrete, poorly integrated tools, which in turn leaves the business at greater risk of cyberattack because cloud native security teams can’t identify or respond to risks as effectively when they lack a centralized, consolidated platform. It might deliver some cost benefits and easier vendor management, but those efficiencies can be canceled out or even outweighed by a poor user experience; a lack of consolidated policies, processes and outcomes; and higher overall operational overhead.

The good news is that CNAPP solves this dilemma. A CNAPP platform worth its name delivers all-in-one protection that keeps business folks happy while also helping to maximize the operational efficiency of cybersecurity teams.

Contact us to learn more about how Aqua’s CNAPP platform helps organizations optimize business efficiency and cybersecurity readiness at the same time.

The post Cloud Security: Don’t Confuse Vendor and Tool Consolidation appeared first on The New Stack.

]]>
How to Protect Containerized Workloads at Runtime https://thenewstack.io/how-to-protect-containerized-workloads-at-runtime/ Tue, 30 May 2023 11:00:22 +0000 https://thenewstack.io/?p=22709118

Security is (finally) getting its due in the enterprise. Witness trends such as DevSecOps and the “shift left” approach —

The post How to Protect Containerized Workloads at Runtime appeared first on The New Stack.

]]>

Security is (finally) getting its due in the enterprise. Witness trends such as DevSecOps and the “shift left” approach — meaning to move security as early as possible into development pipelines. But the work is never finished.

Shift left and similar strategies are generally good things. They begin to address a long-overdue problem of treating security as a checkbox or a final step before deployment. But in many cases it is still not quite enough for the realities of running modern software applications. The shift-left approach might cover only the build and deploy phases, for example, while not applying enough security focus to another critical phase for today’s workloads: runtime.

Runtime security “is about securing the environment in which an application is running and the application itself when the code is being executed,” said Yugal Joshi, partner at the technology research firm Everest Group.

The emerging class of runtime security tools and practices aims to address three essential security challenges in the age of containerized workloads, Kubernetes and heavily automated CI/CD pipelines, according to Utpal Bhatt, CMO at Tigera, a security platform company.

First, the speed and automation intrinsic to modern software development pipelines create more threat vectors and opportunities for vulnerabilities to enter a codebase.

Second, the orchestration layer itself, like Kubernetes, also heavily automates the deployment of container images and introduces new risks.

Third, the dynamic nature of running container-based workloads, especially when those workloads are decomposed into hundreds or thousands of microservices that might be talking to one another, creates a very large and ever-changing attack surface.

“The threat vectors increase with these types of applications,” Bhatt told The New Stack. “It’s virtually impossible to eliminate these threats when focusing on just one part of your supply chain.”

Runtime Security: Prevention First

Runtime security might sound like a super-specific requirement or approach, but Bhatt and other experts note that, done right, holistic approaches to runtime security can bolster the security posture of the entire environment and organization.

The overarching need for strong runtime security is to shift from a defensive or detection-focused approach to a prevention-focused approach.

“Given the large attack surface of containerized workloads, it’s impossible to scale a detection-centric approach to security,” said Mikheil Kardenakhishvili, CEO and co-founder of Techseed, one of Tigera’s partners. “Instead, focusing on prevention will help to reduce attacks and subsequently the burden on security teams.”

Instead of a purely detection-based approach, one that often burns out security teams and puts them in the position of being seen as bottlenecks or inhibitors by the rest of the business, the best runtime security tools and practices, according to Bhatt, implement a prevention-first approach backed by traditional detection response.

“Runtime security done right means you’re blocking known attacks rather than waiting for them to happen,” Bhatt said.

Runtime security can provide common services as a platform offering that any application can use for secure execution, noted Joshi, the Everest Group analyst.

“Therefore, things like identity, monitoring, logging, permissions, and control will fall under this runtime security remit,” he said. “In general, it should also provide an incident-response mechanism through prioritization of vulnerability based on criticality and frequency. Runtime security should also ideally secure the environment, storage, network and related libraries that the application needs to use to run.”

A SaaS Solution for Runtime Security

Put in more colloquial terms: Runtime security means securing all of the things commonly found in modern software applications and environments.

The prevention-first, holistic approach is part of the DNA of Calico Open Source, an open source networking and network security project for containers, virtual machines, and native host-based workloads, as well as Calico Cloud and Calico Enterprise, the latter of which is Tigera’s commercial platform built on the open source project it created.

Calico Cloud, a software-as-a-service (SaaS) solution focused on cloud native apps running in containers with Kubernetes, offers security posture management, robust runtime security for identifying known threats, and threat-hunting capabilities for discovering Zero Day attacks and other previously unknown threats.

These four components of Calico — securing your posture in a Kubernetes-centric way, protecting your environment from known attackers, detecting Zero Day attacks, and incident response/risk mitigation — also speak to four fundamentals for any high-performing runtime security program, according to Bhatt.

Following are the four principles to follow for protecting your runtime.

4 Keys to Doing Runtime Security Right

1. Protect your applications from known threats. This is core to the prevention-first mindset, and focuses on ingesting reliable threat feeds that your tool(s) continuously check against — not just during build and deploy but during runtime as well.
Examples of popular, industry-standard feeds include network addresses of known malicious servers, process file hashes of known malware, and the OWASP Top 10 project.

2. Protect your workloads from vulnerabilities in the containers. In addition to checking against known, active attack methods, runtime security should proactively protect against vulnerabilities in the container itself — and in everything the container needs to run, including the environment.

This isn’t a “check once” type of test, but a virtuous feedback loop that should include enabling security policies that protect workloads from any vulnerabilities, including limiting communication or traffic between services that aren’t known/trusted or when a risk is detected.

3. Detect and protect against container and network anomalous behaviors. This is “the glamorous part” of runtime security, according to Bhatt, because it enables security teams to find and mitigate suspicious behavior in the environment even when it’s not associated with a known threat, such as with Zero Day attacks.

Runtime security tools should be able to detect anomalous behavior in container or network activity and alert security operations teams (via integration with security information and event management, or SIEM, tools) to investigate and mitigate as needed.

4. Assume breaches have occurred; be ready with incident response and risk mitigation. Lastly, even while shifting to a prevention-first, detection-second approach, Bhatt said runtime security done right requires a fundamental assumption that your runtime has already been compromised (and will be again). This means your organization is ready to act quickly in the event of an incident and minimize the potential fallout in the process.
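
As referenced in the second item, here is a minimal sketch of a policy that limits traffic between services. It uses the standard Kubernetes NetworkPolicy API, which Calico enforces; the namespace, labels and port are hypothetical, and a real deployment would typically layer Calico’s own policy resources on top for finer-grained, identity-based control:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend-only   # hypothetical policy name
  namespace: shop                      # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments                    # the workload being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend                # only the frontend service may connect
    ports:
    - protocol: TCP
      port: 8443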

Zero trust is also considered a best strategy for runtime security tools and policies, according to Bhatt.

The bottom line: The perimeter-centric, detect-and-defend mindset is no longer enough, even if some of its practices are still plenty valid. As Bhatt told The New Stack: “The world of containers and Kubernetes requires a different kind of security posture.”

Runtime security tools and practices exist to address the much larger and more dynamic threat surface created by containerized environments. Bhatt loosely compared today’s software environments to large houses with lots of doors and windows. Legacy security approaches might only focus on the front and back door. Runtime security attempts to protect the whole house.

Bhatt finished the metaphor: “Would you rather have 10 locks on one door, or one lock on every door?”

The post How to Protect Containerized Workloads at Runtime appeared first on The New Stack.

]]>
Top 3 Application Security Must-Haves https://thenewstack.io/top-3-application-security-must-haves/ Fri, 26 May 2023 14:59:48 +0000 https://thenewstack.io/?p=22709202

Look to slow down a development team with security and expect to be greeted with a wave of frustration. Between

The post Top 3 Application Security Must-Haves appeared first on The New Stack.

]]>

Look to slow down a development team with security and expect to be greeted with a wave of frustration. Between a lack of integration of security tools and confusion about shared responsibility, security teams are often playing from behind when it comes to defending cloud environments.

Meeting the needs of DevOps and the multiple clouds that companies now need to protect requires a unified platform that automates security controls and compliance for hosts and containers regardless of the cloud provider or deployment model. To win the cloud security race, organizations need the right ingredients for effective security to end up in the winner’s circle.

Ingredient No. 1: Unified and Portable

Let’s start with an unfortunate truth. Traditional security tools simply do not work in the cloud; they are not designed to scale alongside dynamic cloud environments. The end result is gaps in visibility and security. Meeting those challenges with point solutions is untenable for security teams seeking to keep pace with the realities of a cloud native world. As the limitations of those point products became apparent, it led to ad hoc approaches designed to address blind spots and a lack of integration.

Eliminating visibility gaps takes a cloud native security platform, a unified solution capable of providing visibility into the ever-growing number of containers and microservices today’s organizations need to protect. Armed with comprehensive visibility and continuous workload discovery, these platforms support efforts to identify vulnerabilities and ultimately help DevOps teams weave security into CI/CD workflows so that issues can be fixed before they reach production.

Security has to move at the speed of DevOps, and it needs to work across any cloud so that when workloads move, security and visibility are maintained. It’s a multicloud world, and security solutions need to live in it and not get passed on the outside.

Ingredient No. 2: Automated and Fast

Rapid changes are a part of that world as well. Microservices, for example, can be quickly spun up and are often short-lived. While they can simplify application updates, they are also a reminder of how dynamic cloud environments are. Enterprises need to know what is running, where and who is running it. With automated asset discovery and monitoring, organizations can get a handle on everything happening across their cloud environment without slowing anything down.

As noted earlier, integrating security with CI/CD improves security by enabling a “shift left” approach. Automation allows security to be orchestrated more effectively to resolve vulnerabilities and security risks early in the development life cycle, though care must be taken to prevent security holes from being introduced via Infrastructure-as-Code (IaC) templates. Recently, a survey of 300 CISOs performed by IDC revealed that 67% of respondents viewed security misconfigurations in production environments as a top concern. By automating the discovery of misconfigurations, organizations can reduce the chance that one will slip through their defenses and affect their customers or business.
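
As a purely illustrative example — the workload and image names are hypothetical — the kind of IaC misconfiguration that automated discovery is meant to catch can be as small as a couple of lines in a pod template:

apiVersion: v1
kind: Pod
metadata:
  name: billing-worker                        # hypothetical workload
spec:
  hostNetwork: true                           # misconfiguration: shares the node's network namespace
  containers:
  - name: worker
    image: registry.example.com/billing:1.4   # hypothetical image
    securityContext:
      privileged: true                        # misconfiguration: full access to the host

Settings like these are exactly what automated misconfiguration checks look for, and catching them in the template is far cheaper than catching them in production.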

Ingredient No. 3: Integrated and Scalable

To ensure success, security and DevOps teams need to operate like a finely tuned engine. It should be clear by now that security cannot be treated as an afterthought or bolted on. It must be integrated into the development process from the beginning and implemented to work seamlessly with applications, cloud instances and cloud workloads. Doing so allows teams to build securely in the cloud knowing cloud native apps are protected from the control plane to runtime.

This is the ingredient that makes the term “cloud native” an essential part of the cloud security winning team you are trying to build for your organization. Non-cloud native tools increase complexity; they are not optimized for cloud native applications, and they make monitoring harder. They also require more manual intervention. Conversely, cloud native solutions ensure consistency across the entire cloud estate. API-driven and integrated with DevOps tools, cloud native solutions allow organizations to maintain security and compliance levels operating at top speeds to take the lead and win the race.

The right solution will also empower businesses to scale at will in accordance with their needs. As businesses grow, security needs to grow alongside it. Cloud security solutions need to be able to scale at will, adding and decommissioning capabilities as simply as possible so enterprises can get the security they need when they need it, where they need it.

A Winning Combination

To win the cloud security race requires the right ingredients, and so does protecting your cloud environment. The ability to leverage a cloud native platform that provides visibility and control across public, private, hybrid and multicloud environments is a winning combination for any business. By automating cloud security management across the application development life cycle and providing real-time monitoring of cloud resources, this type of approach will enable organizations to prevent the types of cloud misconfigurations that are often exploited in cyberattacks and to deploy applications securely.

The post Top 3 Application Security Must-Haves appeared first on The New Stack.

]]>
Better Security with ChatGPT: Using AI’s Defensive Strengths https://thenewstack.io/better-security-with-chatgpt-using-ais-defensive-strengths/ Fri, 26 May 2023 13:00:45 +0000 https://thenewstack.io/?p=22709243

While ChatGPT has grabbed negative headlines recently due to cybercriminals’ use of the technology to strengthen attacks, it can also

The post Better Security with ChatGPT: Using AI’s Defensive Strengths appeared first on The New Stack.

]]>

While ChatGPT has grabbed negative headlines recently due to cybercriminals’ use of the technology to strengthen attacks, it can also be a formidable asset for cyber defense, helping companies maximize their security posture while promising to bridge any skills gaps in their workforce.

That’s particularly relevant as security teams become increasingly overwhelmed by an ever-expanding threat landscape — according to the results of a recent Cobalt survey, 79% of cybersecurity professionals say they’re having to deprioritize key projects just to stay on top of their workload.

Mike Fraser, vice president and field CTO of DevSecOps at Sophos, told The New Stack that generative AI has an enormous amount to offer to those overloaded security teams. “ChatGPT can be utilized for threat intelligence analysis, incident response guidance, security documentation and training generation, vulnerability management, security policy compliance, and automation,” he said. “With automation alone, the cybersecurity use cases are endless.”

The Cloud Security Alliance (CSA) recently published a white paper examining ChatGPT’s offensive and defensive potential in detail. CSA technical research director Sean Heide, one of the paper’s authors, said one key strength of the tool is that it allows users to simply ask in natural language for a specific attribute they need written for a task, or to make tasks more efficient with new suggestions.

“These tasks would typically take teams, depending on experience, a few hours to properly research, write out, test, and then push into a production scenario,” Heide said. “We are now seeing these same scripts being able to be accurately produced within seconds, and working the same, if not better.”

And Ric Smith, chief product and technology officer at SentinelOne, said it’s important to keep in mind that ChatGPT itself isn’t the only way to make use of large language models — dedicated solutions like SentinelOne’s recently announced AI-based threat-hunting platform can do it in a more focused way. “Companies need to think of LLMs as expert services and maintain a level of pragmatism in how and where they leverage generative AI,” he said. “You can create a fantastic generalist like GPT-4. But in reality, having a complex model is optional if the task is more focused.”

Bridging the Skills Gap

Chang Kawaguchi, vice president and AI security architect at Microsoft, said generative AI tools like his company’s Security Copilot can serve both to assist highly skilled employees and to fill in knowledge gaps for less-skilled workers. With Cybersecurity Ventures reporting a total of 3.5 million cybersecurity job vacancies worldwide (and expecting that number to remain unchanged until at least 2025), there’s a real need for that kind of support.

“We’re definitely hoping to make already skilled defenders more effective, more efficient — but also, because this technology can provide natural-language interfaces for complex tools, what we are starting to see is that lower-skilled folks become more effective in larger percentages,” Kawaguchi said.

At every level, Smith said, ChatGPT can simply make the work more approachable. “By enabling analysts to pose questions in their natural form, you are reducing the learning curve and making security operations more accessible to a larger pool of talent,” he said. “You are also making it easier to move more rudimentary operations to junior analysts, freeing veteran analysts to take on more thought work and sophisticated tasks.”

That’s equally true for the summarization and interpretation of data. “When you run hunting queries, you need to be able to interpret the results meaningfully to understand if there is an important finding and the resulting action that needs to be taken,” Smith said. “Generative AI is exceptionally good at both of these tasks and reduces, not eliminates, the burden of analysis for operators.”

It’s not that different, Smith said, from what spell check has done in freeing writers to focus on content rather than on proofreading. “We are lowering the cognitive burden to allow humans to do what they do best: creative thinking and reasoning,” he said.

Still, it’s not just about supporting less-skilled users. Different levels of generative AI capability, Kawaguchi said, are better suited for different levels of user expertise. At a higher level, he said, consider the potential of a tool like GitHub Copilot. “It can provide really complex code examples, and if you’re a highly skilled developer, you can clearly understand those samples and make them fit — make sure that they’re good with your own code,” he said. “So there’s a spectrum of capabilities that generative AI offers, some of which will be more useful to lower-skilled folks and some of which will be more useful to higher-skilled folks.”

Handling Hallucinations

As companies increasingly leverage these types of tools, it’s reasonable to be concerned that errors or AI hallucinations will cause confusion — as an example, Microsoft’s short video demo of Security Copilot shows the solution referring confidently to the non-existent Windows 9. In general, Kawaguchi said Security Copilot strives to avoid hallucinations by grounding it in an organization’s data or in information from trusted sources like the National Institute of Standards and Technology (NIST). “With grounding the data, we think that there’s a significant opportunity to, if not completely eliminate, greatly reduce the hallucination risk,” he said.

Basic checks and balances, Heide said, are also key to mitigating the potential impact of any hallucinations. “Much like there are review processes for development, the same will need to be taken around the usage of answers received from ChatGPT or other language models,” he said. “I foresee teams needing to check for accuracy of prompts being given, and the type of answers being provided.”

Still, Fraser said one of the key remaining barriers to adoption for a lot of companies lies in concerns about accuracy. “Thorough testing, validation and ongoing monitoring are necessary to build confidence in their effectiveness and minimize risks of false positives, false negatives or biased outputs,” he said.

It’s similar, Fraser said, to the benefits and challenges of automation, where ongoing tuning and management are key. “Human oversight is necessary to validate AI outputs, make critical judgments and respond effectively to evolving threats,” he said. “Security professionals can also provide critical thinking, contextual understanding and domain expertise to assess the accuracy and reliability of AI-generated information, which is essential to a successful strategy using ChatGPT and similar tools.”

Understanding the Benefits

While many companies at this point are more concerned about the threat from ChatGPT than they are invested in its potential as a defensive tool, Heide said that will inevitably shift as more and more users understand its potential. “I think as time goes on, and teams can see how quickly simple scripts can be completed to match an internal use case in a fraction of the time, they will begin to build more pipelines around its usage,” Heide said.

And as we move forward, Kawaguchi said, there’s an inevitable balancing act to be found between proceeding carefully in adopting generative AI and staying ahead of adversaries who may be surging forward with it. “It does feel relatively analogous to other step changes in technology that we’ve seen, where both offense and defense move forward and it’s a race to learn about new technology,” he said. “Our goal is to do so responsibly, so we’re taking it at an appropriate speed — but also not letting offense get ahead of us, not letting the malicious use of these technologies outpace just because we’re worried about potential misuse.”

Ultimately, Fraser said ChatGPT’s future as an asset for cyber defense will depend on responsible development, deployment, and regulation. “With responsible usage, ongoing advancements in AI and a collaborative approach between human experts and AI tools, ChatGPT can be a net benefit for cybersecurity,” he said. “It has the potential to significantly enhance defensive capabilities, support security teams in their fight against emerging threats, solve the skills gap through smarter automation, and enable a more proactive and effective approach to cyber defense.”

The post Better Security with ChatGPT: Using AI’s Defensive Strengths appeared first on The New Stack.

]]>
Bitwarden Moves into Passwordless Security https://thenewstack.io/bitwarden-moves-into-passwordless-security/ Thu, 25 May 2023 14:11:30 +0000 https://thenewstack.io/?p=22709077

Passwords are so passe. People who are serious about security are moving to Zero Trust Security or other passwordless Identity

The post Bitwarden Moves into Passwordless Security appeared first on The New Stack.

]]>

Passwords are so passé. People who are serious about security are moving to Zero Trust Security or other passwordless Identity and Access Management (IAM) systems. Now, Bitwarden, the curator of the prominent open source password management program of the same name, has officially launched Bitwarden Passwordless.dev. This is a comprehensive developer toolkit for integrating FIDO2 WebAuthn-based passkeys into consumer websites and enterprise applications.

The time is right for Bitwarden to expand beyond its top-rated password manager.

Passwordless technology is gaining significant traction. A Bitwarden survey found 56% of individuals are enthusiastic about passwordless technology.

But forget about what users want. The sad, simple truth is that password breaches are becoming as common as people running stop signs. Don’t believe me? Check your own email account at HaveIBeenPwned to see if it’s been swiped in one security breach or another. I’ll wait. Unless you’re one in a million, one or more of your accounts have already been exposed.

A Better Way

There has got to be a better way. Passwordless designs are one. That’s easier said than done. Most organizations have yet to adopt this technology. About half of the IT decision-makers cite the lack of passwordless design in the applications they use as the primary reason.

That’s where Bitwarden comes in. Its latest offering aims to bridge this gap.

Passkeys not only eliminate the need for passwords, usernames, and two-factor authentication (2FA), but they also enhance user security by mitigating the risk of phishing attacks.

Bitwarden Passwordless.dev uses an easy-to-use application programming interface (API) to provide a simplified approach to implementing passkey-based authentication with your existing code. This enables developers to create seamless authentication experiences swiftly and efficiently. For example, you can use it to integrate with FIDO2 WebAuthn applications such as Face ID, fingerprint, and Windows Hello.

Gaining Popularity

Enterprises also face challenges in integrating passkey-based authentication into their existing applications. Another way Bitwarden Passwordless.dev addresses this issue is by including an admin console. This enables programmers to configure applications, manage user attributes, monitor passkey usage, deploy code, and get started instantly.

“Passwordless authentication is rapidly gaining popularity due to its enhanced security and streamlined user login experience,” said Michael Crandell, CEO of Bitwarden. “Bitwarden equips developers with the necessary tools and flexibility to implement passkey-based authentication swiftly and effortlessly, thereby improving user experiences while maintaining optimal security levels.”

Lundatech AB, a Bitwarden customer and business cloud integrator, is already using Bitwarden Passwordless.dev to enhance its employee and customer sign-up and login. They’re happy with it.

“Our clientele includes software vendors, large private corporations, and government agencies, all of whom have stringent security and reliability requirements,” said Henrik Doverhill, Lundatech’s CTO and founder. “We aimed to provide them with superior security and a modern, streamlined authentication experience. Bitwarden Passwordless.dev significantly reduced our development process – we had passwordless authentication operational within an hour.”

Looking ahead, Bitwarden also recently unveiled the open beta of Bitwarden Secrets Manager. This is designed to securely manage sensitive authentication credentials within the privileged developer and DevOps environments.

Given Bitwarden’s sterling open source and security track record, if you want to replace passwords for better, more secure user IAM, I’d give Bitwarden Passwordless.dev a long hard look.

The post Bitwarden Moves into Passwordless Security appeared first on The New Stack.

]]>
Detect and Mitigate Common Attack Techniques for Containers https://thenewstack.io/detect-and-mitigate-common-attack-techniques-for-containers/ Wed, 24 May 2023 17:55:40 +0000 https://thenewstack.io/?p=22708952

The MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework and its corresponding matrices help us understand how an organization’s

The post Detect and Mitigate Common Attack Techniques for Containers appeared first on The New Stack.

]]>

The MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework and its corresponding matrices help us understand how an organization’s attack surface can be exploited by an adversary and how they would likely approach an attack.

With a rising number of Kubernetes vulnerabilities being discovered each day, it’s important for organizations to be aware of the various attack vectors, and corresponding tactics and techniques, that are relevant to containerized applications in the cloud. Let’s look at the tactics and techniques outlined in MITRE’s containers matrix, and the types of detection and mitigation solutions you’ll need to address each tactic outlined in the matrix.

Mitre Att&Ck Framework and the Containers Matrix

In this article, I’ll be focusing on MITRE’s containers matrix. The matrix is a table organized by tactic, with each column listing the techniques related to that tactic.

In addition to the containers matrix, the MITRE ATT&CK framework includes a wide range of enterprise matrices that include Linux, cloud and network matrices. To make sure you have a strong overall security posture, I suggest you also explore these matrices.

Detecting and Mitigating Common Attack Techniques

Let’s look at the tactics and techniques outlined in the containers matrix by organizing them into the four main stages of an attack: reconnaissance, delivery and exploitation, installation and spread, and command and control. For each tactic, I’ll identify the types of detection and mitigation solutions you’ll need to address the related attack techniques.

Reconnaissance

Initial Access

The containers matrix begins with initial access (how attackers enter an organization’s environment). This initial access can be achieved through scanning for any public-facing application that the organization has built.

To mitigate risk, you need a solution that offers granular, zero trust runtime security features, including:

  • Vulnerability management (image scanning and admission controller).
  • Deep packet inspection.
  • Workload-centric web application firewall (WAF) with application-level visibility.
  • Identity-based microsegmentation to reduce attack surface.
  • Domain Name Service (DNS) policies with least-privilege access.

Execution

After gaining access, an attacker will attempt to execute some type of malicious code, spin up a new container, execute some code from the container orchestration platform or get a user to unsuspectingly execute the code on the attacker’s behalf via social engineering, for example, phishing campaigns using file extensions such as .doc. During this stage, the attacker has not fully gained access to critical data or resources.

To mitigate the risks posed by this tactic, you need a solution that enables you to strengthen your runtime threat defense. Features to look for include:

  • Center for Internet Security (CIS) benchmark reports for KSPM (Kubernetes Security Posture Management).
  • Malware protection through signature-based detection.
  • Container threat detection with behavioral-based learning to detect unknown container threats.
  • Identity-based microsegmentation to reduce attack surface.
  • DNS policies.

CIS benchmark reports for KSPM can help track and fix misconfigurations in the platform’s control-plane components. All role-based access control (RBAC) configurations — where users are allocated permissions based on their roles — should likewise be locked down in the Kubernetes platform itself.
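
As a rough illustration of least-privilege RBAC — the role, namespace and group names are hypothetical — a namespaced role that only allows reading pods, bound to a single group, might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                    # hypothetical role
  namespace: payments                 # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]     # read-only; no create, update or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
- kind: Group
  name: payments-viewers              # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io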

Persistence

Persistence refers to a determined bad actor trying to find alternate methods to infiltrate an organization’s network. In other words, even though a first attempt at access might have been futile, continued attempts are made to gain access.

Deploy-time security features can help to mitigate the risks posed by this tactic. Look for a solution that offers these features:

  • Image scanner.
  • CIS benchmark reports for KSPM.
  • Identity-based microsegmentation to reduce attack surface.
  • DNS policies.
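
One generic deploy-time guardrail worth adding here (my own suggestion, not one of the specific features listed above) is the built-in Pod Security Admission controller, which rejects pods that request risky settings at admission time and removes many of the footholds an attacker could use to persist. A sketch, assuming a namespace named shop:

```yaml
# Hypothetical namespace enforcing the built-in "restricted" Pod Security
# Standard: non-compliant pods are rejected at deploy time.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                           # assumed namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```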

Privilege Escalation

Privilege escalation refers to the techniques an adversary uses to obtain higher-level permissions, which can then be used to introduce malware directly or to move further into other components of a container environment. Techniques under privilege escalation give the attacker a higher-level privilege (such as system or root) on a container or the host.

A combination of build- and deploy-time security features can mitigate the risk of privilege escalation. These features include:

  • CIS benchmark reports for KSPM.
  • Workload-based IDS/IPS (intrusion detection and prevention).
  • Global threat feed intelligence.
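
At the workload level, a hardened securityContext complements these checks by closing off the root and privilege-escalation paths this tactic depends on. A minimal sketch; the image reference is hypothetical:

```yaml
# Hypothetical hardened pod: no root user, no privilege escalation,
# no extra Linux capabilities, read-only root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: storefront
spec:
  containers:
    - name: app
      image: registry.example.com/storefront:1.4.2   # assumed image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```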

Delivery and Exploitation

Defense Evasion

Many security solutions offer a wide range of features to detect and track malicious behavior in containers. Defense evasion techniques are meant to sidestep these tools and erase the attacker's footprints: logs and events related to the malicious activity are deleted so that the administrator of a security, SIEM (security information and event management) or observability tool has no idea an unauthorized event or process ever occurred.

To protect against defense evasion, you need a container security solution that detects malware during runtime and provides threat detection and blocking capabilities, including:

  • Container threat detection with behavior-based learning.
  • Runtime threat defense to protect against malware.
  • Honeypods to capture malicious actors and activity.
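
A complementary control worth considering alongside these (my own suggestion, not a feature from the list above) is Kubernetes API server audit logging shipped off-cluster, so the record survives even if an attacker wipes local traces. A minimal audit policy might look like the sketch below, assuming it is referenced through the kube-apiserver's audit flags and the resulting log is forwarded to an external SIEM:

```yaml
# Hypothetical audit policy: log Secret access at the metadata level (never
# their contents), record other write operations in full, drop the rest.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
  - level: None
```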

Credential Access

If, after employing defense evasion techniques, an attacker has not yet obtained the sensitive data they're after, they will likely go looking for the accounts, passwords and other credentials that can get them to it. There are multiple ways an attacker can obtain those credentials, such as social engineering, spear phishing, brute force and network sniffing.

In a Kubernetes-based environment, API communication between workloads and the Kubernetes API server is authorized with access tokens, such as service account tokens. If these tokens are compromised, an attacker can run Kubernetes commands as an authorized user.
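
One simple way to shrink this particular exposure is to stop mounting API tokens into pods that never need to talk to the Kubernetes API, so there is no token inside the container to steal. A hedged sketch; the service account and image names are assumptions:

```yaml
# Hypothetical example: no API token is mounted into this workload's pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storefront
  namespace: shop
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: storefront
  namespace: shop
spec:
  serviceAccountName: storefront
  automountServiceAccountToken: false  # can also be set per pod
  containers:
    - name: app
      image: registry.example.com/storefront:1.4.2   # assumed image
```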

Mitigation strategies for this tactic include:

  • Container threat detection with behavior-based detection.
  • Workload-centric WAF.
  • DNS policies.

Discovery

This is a critical tactic for both the attacker and the defending organization. Once an adversary gathers enough information about resources such as pods, nodes and images, they have an approximate blueprint of the entire application. That information is then used to plan how to move from workload to workload until they reach their objective. Most threat actors spend a considerable amount of time in this phase.

To mitigate risks posed by this tactic, you’ll need features that are designed for zero trust workload access and deliver the following mitigation strategies:

  • DNS policies and workload access controls to limit access to resources.
  • Identity-based microsegmentation to reduce the attack surface and prevent sensitive workloads from being discovered.
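
As a rough sketch of such access controls built from plain Kubernetes primitives (labels and namespace are assumptions), the policy below lets a workload resolve names through the cluster DNS service but blocks all other egress, limiting how much of the cluster an attacker inside the pod can enumerate.

```yaml
# Hypothetical example: "storefront" pods may query cluster DNS but have no
# other egress, so internal scanning and enumeration from the pod is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: storefront-dns-only-egress
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: storefront
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns        # assumed label used by the cluster DNS pods
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```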

Installation and Spread

Lateral Movement

Lateral movement is a critical aspect of container security because it can be used to evade traditional security tools that were not designed for Kubernetes-based applications. Because basic Kubernetes networking is flat and, by default, all pods can talk to each other, lateral movement is easy for a threat actor looking to steal data, install ransomware or enlist workloads into a botnet.

To combat this tactic, you need a security solution that provides:

  • Identity-based microsegmentation to reduce attack surface.
  • Fine-grained egress access controls for workloads (with the ability to apply DNS policies).
  • Global default-deny policy and least-privilege access.

With these features in place, an attacker has far less opportunity to move laterally, because the number of workloads they can reach is smaller.
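
A default-deny posture is the foundation for all of this. In plain Kubernetes, a minimal per-namespace version looks like the sketch below (namespace name assumed); pods in that namespace can then only communicate where a more specific allow policy exists.

```yaml
# Hypothetical namespace-wide default deny: no pod in "shop" may send or
# receive traffic unless another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: shop
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```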

Command and Control

Impact

Organizations are most worried about losing critical data — both internal and customer information — through command and control activity. Any security system, no matter how good it is at detecting vulnerabilities or threat activity, must be able to block the transfer of sensitive data from inside the organization to an external actor. In a containerized environment, this means applying the principle of least privilege to workloads when they communicate with other workloads within a cluster, with external applications and workloads outside the cluster, and with end users.

To protect workloads from severe impact from an attack, you need a solution that provides:

  • A suite of zero trust security policies.
  • Microsegmentation to limit an attack’s blast radius.
  • A global default-deny policy.
  • Alerting for anomaly detection.
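
Building on that default-deny posture, egress allow-lists are one way to make bulk exfiltration harder, since workloads may only reach the external endpoints they legitimately need. A hypothetical sketch; the labels and CIDR are illustrative only:

```yaml
# Hypothetical allow-list: on top of a default deny, "storefront" pods may
# reach only an approved external range over HTTPS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: storefront-egress-allowlist
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: storefront
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24       # assumed external service range
      ports:
        - protocol: TCP
          port: 443
```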

Final Thoughts

A comprehensive runtime security solution is one that can both detect and mitigate reconnaissance techniques while also providing a robust zero trust architecture to thwart the unauthorized network activity that leads to command-and-control situations. Cloud native applications have a unique architecture, so when dealing with containers and Kubernetes, you need a security solution built with that architecture in mind. Without one, it will be a challenge to detect, mitigate and prevent attacks. My recommendation is to invest in a security approach tailor-made for containers and Kubernetes. A solid defense-in-depth strategy, combined with a Kubernetes-native solution, will help you stay one step ahead of attackers.

To learn more about cloud native approaches for establishing security and observability for containers and Kubernetes, check out this O’Reilly eBook, authored by Tigera.

The post Detect and Mitigate Common Attack Techniques for Containers appeared first on The New Stack.

]]>
Lineaje Unveils SBOM360 Hub for Software Bills of Materials https://thenewstack.io/lineaje-unveils-sbom360-hub-for-software-bills-of-materials/ Wed, 24 May 2023 15:46:20 +0000 https://thenewstack.io/?p=22709020

We all need to use SBOMs moving forward, Lineaje gives you a way to manage them. Using Software Bills of

The post Lineaje Unveils SBOM360 Hub for Software Bills of Materials appeared first on The New Stack.

]]>

We all need to use SBOMs moving forward, and Lineaje gives you a way to manage them.

Using Software Bills of Materials (SBOMs) isn't just a good idea. It's the law. No, seriously. It is. Executive Order 14028 requires automatically generated software inventories for software used in federal agencies, presented to the appropriate agencies by Sept. 14, 2023. If you're smart, you're already on top of that. But if you need an effortless way to deliver your SBOMs, check out Lineaje's SBOM360 Hub repository.

SBOM360 Hub offers a comprehensive service that enables software producers to manage and publish their software distribution chain efficiently through a unified platform. With SBOM360 Hub, software producers can publish all their SBOMs to their entire distribution chain in one place.

Subscribe to SBOMs

Simultaneously, software consumers can subscribe to their vendors' SBOMs and manage their entire software supply chain in one location. They can also subscribe to specific notifications, such as when new versions are available or when new vulnerabilities are found. The SBOM360 Hub assessment engines continuously scan all subscribed SBOMs and provide automated notifications for relevant updates.

SBOM360 Hub’s key features include the following:

  1. With SBOM360 Hub, software producers and sellers can swiftly create and publish approved, attested, and compliant SBOMs, self-attestation forms, and related artifacts for their products. These can be mapped to the SKUs they offer, ensuring smooth and private sharing with customers and the distribution chain.
  2. The platform enables software distributors and resellers to request SBOMs and related artifacts from vendors. They can easily make these available to their distribution channels and customers with a single click, facilitating efficient information flow.
  3. By subscribing to SBOM360 Hub, software consumers gain access to a centralized location where they can search for and request specific vendor SBOMs and related artifacts. They can directly communicate with their vendors to obtain all the necessary information for evaluation, purchase, and compliance. Additionally, the platform provides automated updates on software changes, new versions, and vulnerabilities, ensuring users stay informed.

Moreover, SBOM360 Hub offers comprehensive security profiles of open source dependencies within commercial products, providing a valuable tool for vulnerability assessment and better roadmap planning. Users can identify trends in the security profile of each software component, enabling collaboration and enhanced decision-making throughout the software distribution chain.

Private, Secure, Searchable

The SBOM360 Hub also includes a private, secure and searchable environment for publishing and sharing SBOMs. Within that environment, the creator controls the depth and breadth of what users see, so the data can be customized to meet specific compliance requirements. The hub also supports both product-level and SKU-level SBOMs, which makes it simple for software producers to offer multiple versions of a product. In addition, continuous assessment and automated notifications for subscribed SBOMs keep users informed of critical security updates.

SBOM360 Hub is now available for Early Access, offering a free trial for software producers, consumers, distributors, resellers, and system integrators. You can give it a try now to see if it will meet your needs. Remember, the SBOM clock is ticking.

The post Lineaje Unveils SBOM360 Hub for Software Bills of Materials appeared first on The New Stack.

]]>