An E-tailer’s Journey to Real-Time AI: Recommendations
https://thenewstack.io/an-e-tailers-journey-to-real-time-ai-recommendations/ (Thu, 15 Jun 2023)

The journey to implementing artificial intelligence and machine learning solutions requires solving a lot of common challenges that routinely crop up in digital systems: updating legacy systems, eliminating batch processes and using innovative technologies that are grounded in AI/ML to improve the customer experience in ways that seemed like science fiction just a few years ago.

To illustrate this evolution, let’s follow a hypothetical contractor who was hired to help implement AI/ML solutions at a big-box retailer. This is the first in a series of articles that will detail important aspects of the journey to AI/ML.

The Problem

First day at BigBoxCo on the “Infrastructure” team. After working through the obligatory human resources activities, I received my contractor badge and made my way over to my new workspace. After meeting the team, I was told that we have a meeting with the “Recommendations” team this morning. My system access isn’t quite working yet, so hopefully IT will get that squared away while we’re in the meeting.

In the meeting room, it’s just a few of us: my manager and two other engineers from my new team, and one engineer from the Recommendations team. We start off with some introductions, and then move on to discuss an issue from the week prior. Evidently, there was some kind of overnight batch failure last week, and they’re still feeling the effects of that.

It seems like the current product recommendations are driven by data collected from customer orders. With each order, there’s a new association between the products ordered, which is recorded. When customers view product pages, they can get recommendations based on how many other customers bought the current product alongside different products.

The product recommendations are served to users on bigboxco.com via a microservice layer in the cloud. The microservice layer uses a local (cloud) data center deployment of Apache Cassandra to serve up the results.

How the results are collected and served, though, is a different story altogether. Essentially, the results of associations between products (purchased together) are compiled during a MapReduce job. This is the batch process that failed last week. While this batch process has never been fast, it has become slower and more brittle over time. In fact, sometimes the process takes two or even three days to run.

Improving the Experience

After the meeting, I check my computer and it looks like I can finally log in. As I’m looking around, our principal engineer (PE) comes by and introduces himself. I tell him about the meeting with the Recommendations team, and he gives me a little more of the history behind the Recommendation service.

It sounds like that batch process has been in place for about 10 years. The engineer who designed it has moved on, not many people in the organization really understand it, and nobody wants to touch it.

The other problem, I begin to explain, is that the dataset driving each recommendation is almost always a couple of days old. While this might not be a big deal in the grand scheme of things, if the recommendation data were more up to date, it would benefit the short-term promotions that marketing runs.

He nods in agreement and says that he’s definitely open to suggestions on how to improve the system.

Maybe a Graph Problem?

At the outset, this sounds to me like a graph problem. We have customers who log on to the site and buy products. Before that, when they look at a product or add it to the cart, we can show recommendations in the form of “Customers who bought X also bought Y.” The site has this today, in that the recommendations service does exactly this: It returns the top four additional products that are frequently purchased together.

But we’d have to have some way to “rank” the products, because the mapping of one product to every other product purchased at the same time by any of our 200 million customers is going to get big, fast. So we can rank them by the number of times they appear in an order. A graph of this system might look something like what is shown below in Figure 1.

Figure 1. A product recommendation graph showing the relationship between customers and their purchased products.

After modeling this out and running it on our graph database with real volumes of data, I quickly realized that this isn’t going to work. The traversal from one product to nearby customers, to their products, and then computing the products that appear most often takes somewhere in the neighborhood of 10 seconds. Essentially, we’ve “punted” on the two-day batch problem, only to have each lookup put the traversal latency precisely where we don’t want it: in front of the customer.

But perhaps that graph model isn’t too far off from what we need to do here. In fact, the approach described above is a machine learning (ML) technique known as “collaborative filtering.” Essentially, collaborative filtering is an approach that examines the similarity of data objects based on the activity of other users, and it enables us to make predictions from that data. In our case, we will be implicitly collecting cart/order data from our customer base, and we will use it to make better product recommendations to increase online sales.

Implementation

First of all, let’s look at data collection. Adding an extra service call to the shopping “place order” function isn’t too big of a deal. In fact, it already exists; it’s just that data gets stored in a database and processed later. Make no mistake: We still want to include the batch processing. But we’ll also want to process that cart data in real time, so we can feed it right back into the online data set and use it immediately afterward.

We’ll start out by putting in an event streaming solution like Apache Pulsar. That way, all new cart activity is published to a Pulsar topic, where it is consumed and sent both to the underlying batch database and to the process that trains our real-time ML model.
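
As a rough sketch of what that event capture could look like, the “place order” service might publish one event per completed cart with the Pulsar Python client. The broker URL, topic name and event shape here are illustrative assumptions, not details of the actual BigBoxCo system:

import json
import uuid
import pulsar

# Illustrative producer; broker URL, topic name and event fields are assumptions.
client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer("persistent://public/default/cart-activity")

def publish_cart_event(cart_id: str, items: dict) -> None:
    """Publish one event per completed cart so downstream consumers can update
    both the batch store and the real-time recommendation tables."""
    event = {
        "cart_id": cart_id,
        "products": [{"id": pid, "qty": qty} for pid, qty in items.items()],
    }
    producer.send(json.dumps(event).encode("utf-8"))

publish_cart_event(str(uuid.uuid4()), {"DSH915": 1, "APC30": 2})
client.close()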

As for the latter, our Pulsar consumer will write to a Cassandra table (shown in Figure 2) designed simply to hold entries for each product in the order. Each product then gets a row for every other product purchased with it in that order and in other orders:

CREATE TABLE order_products_mapping (
    id text,                  -- the product whose page we will recommend from
    added_product_id text,    -- a product purchased together with it
    cart_id uuid,             -- the cart/order the pairing came from
    qty int,                  -- quantity of the added product in that cart
    PRIMARY KEY (id, added_product_id, cart_id)
) WITH CLUSTERING ORDER BY (added_product_id ASC, cart_id ASC);


Figure 2. Augmenting an existing batch-fed recommendation system with Apache Pulsar and Apache Cassandra.
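
On the consuming side, a minimal sketch of the Pulsar consumer that feeds this table might look like the following. It assumes the same illustrative topic and event shape as the producer sketch above, plus a “bigbox” keyspace; none of these names come from the actual system:

import json
import uuid
import pulsar
from cassandra.cluster import Cluster

# Assumed names throughout: keyspace, topic and subscription are illustrative.
session = Cluster(["127.0.0.1"]).connect("bigbox")
insert = session.prepare(
    "INSERT INTO order_products_mapping (id, added_product_id, cart_id, qty) "
    "VALUES (?, ?, ?, ?)"
)

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe("persistent://public/default/cart-activity", "recs-writer")

while True:
    msg = consumer.receive()
    event = json.loads(msg.data())
    products = event["products"]
    cart_id = uuid.UUID(event["cart_id"])
    # For every product in the cart, record every *other* product bought with it.
    for p in products:
        for other in products:
            if other["id"] != p["id"]:
                session.execute(insert, (p["id"], other["id"], cart_id, other["qty"]))
    consumer.acknowledge(msg)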

We can then query this table for a particular product (“DSH915” in this example), like this:

SELECT added_product_id, SUM(qty)
FROM order_products_mapping
WHERE id='DSH915'
GROUP BY added_product_id;

 added_product_id | system.sum(qty)
------------------+-----------------
            APC30 |               7
           ECJ112 |               1
            LN355 |               2
            LS534 |               4
           RCE857 |               3
          RSH2112 |               5
           TSD925 |               1

(7 rows)


We can then take the top four results and put them into the product recommendations table, ready for the recommendation service to query by product_id:

SELECT * FROM product_recommendations
WHERE product_id='DSH915';

 product_id | tier | recommended_id | score
------------+------+----------------+-------
     DSH915 |    1 |          APC30 |     7
     DSH915 |    2 |        RSH2112 |     5
     DSH915 |    3 |          LS534 |     4
     DSH915 |    4 |         RCE857 |     3

(4 rows)
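
The small job that refreshes the recommendations could be as simple as the sketch below. The product_recommendations schema (product_id, tier, recommended_id, score) is inferred from the query results above rather than taken from a published schema, and the keyspace name is an assumption:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("bigbox")  # assumed keyspace

def refresh_recommendations(product_id: str, top_n: int = 4) -> None:
    # Sum the quantities of co-purchased products for this product...
    rows = session.execute(
        "SELECT added_product_id, SUM(qty) AS total FROM order_products_mapping "
        "WHERE id=%s GROUP BY added_product_id",
        (product_id,),
    )
    ranked = sorted(rows, key=lambda r: r.total, reverse=True)[:top_n]
    # ...and write the top N back as tiered recommendations.
    for tier, row in enumerate(ranked, start=1):
        session.execute(
            "INSERT INTO product_recommendations (product_id, tier, recommended_id, score) "
            "VALUES (%s, %s, %s, %s)",
            (product_id, tier, row.added_product_id, row.total),
        )

refresh_recommendations("DSH915")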


In this way, the new recommendation data is constantly being kept up to date. Also, all of the infrastructure assets described above are located in the local data center. Therefore, the process of pulling product relationships from an order, sending them through a Pulsar topic and processing them into recommendations stored in Cassandra happens in less than a second. With this simple data model, Cassandra is capable of serving the requested recommendations in single-digit milliseconds.

Conclusions and Next Steps

We’ll want to be sure to examine how our data is being written to our Cassandra tables in the long term. This way we can get ahead of potential problems related to things like unbounded row growth and in-place updates.

Some additional heuristic filters may be necessary to add as well, like a “do not recommend” list. This is because there are some products that our customers will buy either once or infrequently, and recommending them will only take space away from other products that they are much more likely to buy on impulse. For example, recommending a purchase of something from our appliance division such as a washing machine is not likely to yield an “impulse buy.”

Another future improvement would be to implement a real-time AI/ML platform like Kaskada to handle both the product relationship streaming and to serve the recommendation data to the service directly.

Fortunately, we did come up with a way to augment the existing, sluggish batch process using Pulsar to feed the cart-add events to be processed in real time. Once we get a feel for how this system performs in the long run, we should consider shutting down the legacy batch process. The PE acknowledged that we made good progress with the new solution, and, better yet, that we have also begun to lay the groundwork to eliminate some technical debt. In the end, everyone feels good about that.

In an upcoming article, we’ll take a look at improving product promotions with vector searching.

Learn how DataStax enables real-time AI.

How Dell’s Data Science Team Benefits from Agile Practices
https://thenewstack.io/how-dells-data-science-team-benefits-from-agile-practices/ (Thu, 15 Jun 2023)

Agile development doesn’t work for data science… at least, not at first, said Randi Ludwig, Dell Technologies’ director of Data Science. That’s because, in part, there is an uncertainty that’s innate to data science, Ludwig told audiences at the Domino Data Lab Rev4 conference in New York on June 1.

“One of the things that breaks down for data science, in terms of agile development practices, is you don’t always know exactly where you’re going,” Ludwig said. “I haven’t even looked at that data. How am I supposed to know where do I even start with that?”

Nonetheless, Dell uses agile practices with its data science team and what Ludwig has found is that while there is a certain amount of uncertainty, it’s contained to the first part of the process where data scientists collect the data, prove there’s value and obtain sign-off from stakeholders. To manage that first part, she suggested time boxing it to three or four weeks.

“The uncertainty really only lies in the first part of this process,” she said. “What that agile looks like in the first half and then the second half of the process are different on a day-to-day basis for the team.”

After the uncertainty period, the rest of the data science process is more like software development, and agile becomes beneficial, she said.

Ludwig interwove how Dell implements agile practices in data science with the benefits the team reaps from those practices.

Benefits of Standups

First, standups should include anyone involved in a data science project, including data engineers, analysts and technical project managers, Ludwig said. Talking to each other on a regular basis runs counter to data scientists’ tendency to work in isolation, but it helps put everyone on the same page and delivers value by adding context and avoiding rework. This pays dividends in that team members can step in for one another more than they can under the “lone wolf” approach to data science.

“Doing standups gives visibility to everybody else in the story,” she said. “That lack of context goes away just by talking to each other every day, and then if you actually write down what you talk about every day, you get other amazing benefits out of it.”

The standup doesn’t necessarily need to be every day, but it should be a recurring cadence that’s short enough that the project can’t go wildly afield, she added.

Benefits of Tickets

Documenting tickets is also a key practice that’s easy to do while alleviating single points of failure, she said, plus tickets have the benefit of not being onerous documentation.

“Just the fact of having things written down and talking to each other every day is massively beneficial, and in my experience is not how data science teams organically develop most of the time,” she said.

In the second half of the data science process, teams can articulate more clearly what exactly they’re going to do so tickets become possible. It’s important not to be too broad when writing tickets, however. Instead, break big ideas down into bite-sized chunks of work, she advised.

“‘I’m going to do EDA (exploratory data analysis) on finance data’ is way too broad. That’s way too big of a ticket. You’ve got to break those things down into smaller pieces,” she said. “Even just getting the team to articulate what are some of the things you’re going to look for — you’re going to look for missing values, you’re going to look for columns that are high-quality data, you’re going to look to see if there’s any correlations between some of those columns — so that you’re not bringing in redundant features.”

It also helps inform the team about the why and how of the models being built. There can also be planning tickets that incorporate questions that need to be asked, she said.

Tickets become another form of data that can be used in year-end reviews and for the management of the team. For instance, one of Ludwig’s data scientists was able to demonstrate through tagged tickets how much time was spent on building data pipelines.

“Data scientists are not best at building data pipelines, you need data engineers for that,” Ludwig said. “This is a great tool because now I know that I need to either redistribute resources I have or go ask for more resources. I actually need more data engineers.”

Tickets can also be used to document problems encountered by the data science team. For instance, Ludwig was able to use tickets to show the database management team all the problems they were encountering with a particular database, thus justifying improvements to that database.

It can be challenging to get members to make tickets and keep them updated, she acknowledged, so she has everyone open GitHub so they can update the tickets during the standup.

Benefits of a Prioritization Log

Tickets also allow the team to create a prioritization log, she said. That triggers a slew of benefits, such as providing the team with support when there is pushback from stakeholders about requests.

“This magical thing happens where now you have stuff written down, which means you have a prioritization backlog, you can actually go through all of the ideas and thoughts you’ve had and figure out how to prioritize the work instead of just wondering,” she said. “You actually foster much less contentious relationships with stakeholders in terms of new asks by having all of the stuff written down.”

Stakeholders will start to understand that for the team to prioritize their request, they need to do some homework such as identifying what data should be used, what business unit will consume the output of the data and what they think it should look like.

Another benefit: It can keep data scientists from wandering down rabbit holes as they explore the data. Instead, they should bring those questions to the standup and decide as a team how to prioritize them.

“This helps you on your internal pipeline, as well as your intake with external stakeholders. Once they see that you have a list to work against, then they’re, ‘Oh, I need to actually be really specific about what I’m asking from you,’” she said.

Finally, there’s no more “wondering what the data science team is doing” and whether it will deliver benefits.

“One of the biggest concerns I’ve ever heard from leadership about data science teams is that they don’t know what your plan’s going to be, what are you going to deliver in 12 or 18 months, how many things I could learn between here that’s going to completely change whatever I tell you right now,” she said. “At least now you know that this investment has a path and a roadmap that’s going to continue to provide value for a long time.”

Benefits of Reviews and Retrospectives

“Stakeholders are just really convinced that people just disappear off into an ivory tower, and then they have no idea what are those data scientists doing,” Ludwig said.

There’s a lot of angst that can be eliminated just by talking with business stakeholders, which review sessions give you a chance to do. It’s important to take the time to make sure they understand what you’re working on, why and what you found out about it, and that you understand their business problem.

Retrospectives are also beneficial because they allow the data science team to reflect and improve.

“One of the things that I actually thought was one of the most interesting about data scientists or scientists at heart, they love to learn, they love to make things more efficient and optimize, but the number of teams that organically just decide to have retrospectives is very small, in my experience,” she said. “Having an organized framework of we’re going to sit down and periodically review what we’re doing and make sure we learn from it is an ad hoc thing that some people do or some people don’t. Just enforcing that regularly has a ton of value.”

Domino Data Lab paid for The New Stack’s travel and accommodations to attend the Rev4 conference.

The Transformative Power of SBOMs and IBOMs for Cloud Apps
https://thenewstack.io/the-transformative-power-of-sboms-and-iboms-for-cloud-apps/ (Thu, 15 Jun 2023)

As we continue to navigate the digital landscape, it is clear that every innovation brings with it a wealth of opportunities as well as a host of challenges. One of the most prevalent trends in today’s tech world is the increasing reliance on cloud-based applications. These applications offer flexibility, scalability and reliability but also introduce complexity, mainly when operating in multicloud or hybrid environments. We must adopt a fresh perspective to manage this ever-evolving IT ecosystem effectively.

In this blog post, I want to explore a transformative concept that could redefine the way we manage our business applications: the integration of the software bill of materials (SBOM) and infrastructure bill of materials (IBOM).

SBOM and IBOM: A Unified Approach to Tech Management

Traditionally, an SBOM serves as an inventory list detailing all components of software, including libraries and dependencies. It plays a crucial role in managing software updates, ensuring compliance and facilitating informed decision-making. However, in today’s intricate application landscape, having knowledge of the software alone is insufficient.

This is where the concept of the IBOM comes into play. An IBOM is a comprehensive list of all critical components a business application requires to run, including network components, databases, message queuing systems, caching layers, cloud infrastructure components and cloud services. By integrating an SBOM and an IBOM, we can better understand our application environment. This powerful combination enables us to effectively manage critical areas such as security, performance, operations, data protection and cost control.
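
To make the idea concrete, a combined inventory for a single application might be modeled roughly like the sketch below. This is purely illustrative: it does not follow a standard format such as SPDX or CycloneDX, and every name in it is invented:

# Purely illustrative combined inventory; component names and fields are invented.
application_bom = {
    "application": "checkout-service",
    "sbom": [  # software components and their versions
        {"name": "spring-boot", "version": "3.1.0", "license": "Apache-2.0"},
        {"name": "log4j-core", "version": "2.20.0", "license": "Apache-2.0"},
    ],
    "ibom": [  # infrastructure the application depends on to run
        {"type": "database", "name": "orders-postgres", "engine": "PostgreSQL 15"},
        {"type": "message-queue", "name": "orders-events", "engine": "Kafka 3.4"},
        {"type": "cache", "name": "session-cache", "engine": "Redis 7"},
        {"type": "cloud-service", "name": "object-storage", "provider": "AWS S3"},
    ],
}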

The Business Benefits of SBOM and IBOM Integration

The integration of an SBOM and an IBOM offers numerous benefits that can enhance various aspects of business operations:

  • Security – A comprehensive view of both software and infrastructure components allows organizations to identify potential vulnerabilities early on. This level of visibility is critical for bolstering data protection and reducing overall risk. In essence, complete visibility acts as a safety net, enabling businesses to safeguard their digital assets from threats.
  • Performance – Detailed knowledge of software and infrastructure components can significantly enhance application performance. Improved performance translates into superior customer experiences and more efficient business operations, ultimately leading to increased customer satisfaction and profitability.
  • Operations – A complete view of all application components facilitates effective operational planning. This not only simplifies the deployment and maintenance of applications but also streamlines workflows and boosts operational efficiency.
  • Cost Control – The granular information provided by SBOMs and IBOMs enables businesses to make informed decisions, optimize resource utilization and manage costs effectively. By strategically deploying resources, businesses can eliminate unnecessary expenditures and invest in areas that offer the highest value.

Navigating the Complex World of Cloud-Based Applications

The rise of homegrown applications has led to a significant increase in the number of applications that need to be managed. Coupled with the shift toward cloud-based applications and the complexities associated with multicloud or hybrid environments, this trend underscores the importance of having a comprehensive SBOM and IBOM.

Without a thorough understanding of their application landscape, organizations may find it challenging to manage and prioritize operational and security tasks. SBOMs and IBOMs are indispensable tools for effective control and management in this rapidly evolving applications and infrastructure era.

Embracing the Future of Automation and Integration: The Role of GitOps

The future of business applications presents exciting opportunities for automation and integration. As the complexity and scale of applications continue to grow, manual management is becoming increasingly challenging. Automating the creation and maintenance of SBOMs and IBOMs is crucial to keeping pace with the rapidly changing tech landscape.

One of the most promising approaches to this automation and integration is GitOps. GitOps is a paradigm or a set of practices that empowers developers to perform tasks that typically fall under IT operations’ purview. GitOps leverages the version control system as the single source of truth for declarative infrastructure and applications, enabling developers to use the same git pull requests they use for code review and collaboration to manage deployments and infrastructure changes.

In the context of SBOMs and IBOMs, GitOps can automate the process of tracking and managing changes to both software and infrastructure components. By storing the SBOM and IBOM in a git repository, any changes to the software or infrastructure can be tracked and managed through git. This simplifies the management process and enhances visibility and traceability, which are crucial for security and compliance.
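
As a minimal sketch of what that tracking could look like, the following compares two checked-in versions of a combined BOM file (for example, the copy from two git revisions) and reports added, removed and changed components. The file layout is the same invented one used in the earlier sketch:

import json

def load_components(path: str) -> dict:
    """Flatten a combined BOM file into {component_name: record}; layout is illustrative."""
    with open(path) as f:
        bom = json.load(f)
    return {c["name"]: c for c in bom.get("sbom", []) + bom.get("ibom", [])}

def diff_boms(old_path: str, new_path: str) -> None:
    old, new = load_components(old_path), load_components(new_path)
    for name in sorted(new.keys() - old.keys()):
        print(f"added:   {name}")
    for name in sorted(old.keys() - new.keys()):
        print(f"removed: {name}")
    for name in sorted(new.keys() & old.keys()):
        if new[name] != old[name]:
            print(f"changed: {name}: {old[name]} -> {new[name]}")

# For example, after `git show HEAD~1:bom.json > old.json`, compare with the working copy:
diff_boms("old.json", "bom.json")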

Moreover, these automated systems could be integrated into secure, automated supply chains, marking this technological revolution’s next phase. This is an exciting prospect and one that holds immense potential for businesses looking to streamline their operations and enhance their efficiency. With GitOps, the creation and maintenance of SBOMs and IBOMs become a part of the natural development workflow, making it easier to keep up with the fast-paced world of cloud-based applications.

The Role of SBOMs and IBOMs in Compliance and Auditing

Another significant advantage of integrating SBOMs and IBOMs is their crucial role in compliance and auditing. In today’s digital landscape, the emphasis on data privacy and security has never been greater. Businesses must adhere to many regulations, from data protection laws like GDPR and California Consumer Privacy Act (CCPA) to industry-specific regulations such as Health Insurance Portability and Accountability Act (HIPAA) in healthcare and Payment Card Industry Data Security Standard (PCI DSS) in finance.

Having comprehensive SBOMs and IBOMs provides the necessary transparency and traceability to meet these regulatory requirements. They serve as a detailed inventory of all software and infrastructure components, including their versions, configurations and interdependencies. This level of detail is crucial for demonstrating compliance with regulations requiring businesses to thoroughly understand their IT environment.

For instance, in the event of a data breach, an SBOM and IBOM can help a team identify which components were affected and assess the extent of the breach. This can aid in incident response and reporting, both of which are key requirements of data protection regulations.

The integration of SBOM and IBOM is not just about managing complexity in the cloud-based app era. It’s also about ensuring that businesses can meet their compliance obligations and maintain the trust of their customers in an increasingly regulated and security-conscious digital landscape.

The Future Is Integrated

As we continue to navigate the digital future, it’s clear that the integration of SBOMs and IBOMs will play a pivotal role in managing the complexity of cloud-based applications. Providing a comprehensive view of our application environment can help businesses enhance security, improve performance, streamline operations and control costs.

The future of business applications is undoubtedly integrated. By embracing the power of SBOMs and IBOMs, businesses can not only navigate the complexities of the digital landscape but also unlock new opportunities for growth and innovation. As we continue to explore the potential of these tools, one thing is clear: The future of tech management is here, and it’s integrated.

Pulumi: New Features for Infrastructure as Code Automation
https://thenewstack.io/pulumi-new-features-for-infrastructure-as-code-automation/ (Thu, 15 Jun 2023)

Given the enormous complexity involved, orchestrating cloud infrastructure manually, even with Infrastructure as Code (IaC), is time-consuming and tough. Enterprises often have dozens and sometimes hundreds of public cloud accounts, with new ones popping up all the time.

Without a unified control plane that keeps track of application stacks across clouds and cloud accounts, achieving operational consistency, cost efficiency and resiliency becomes near impossible.

Additionally, enterprises are missing out on the opportunity to learn from what worked and what didn’t work in the past, when creating new app stacks, Torsten Volk, an analyst at Enterprise Management Associates, told The New Stack.

He added, “Ideally, developers will be able to define their infrastructure requirements straight from within code functions, without having to specify the exact resources needed, while the IaC platform analyzes the new app, compares it to existing apps that are similar in character, and automatically derives the optimal infrastructure resources.”

Pulumi, an IaC provider, is seeking to simplify and automate IaC for complex cloud environments (Amazon Web Services, for instance, has more than 300 infrastructure resources alone). As part of that mission, it announced new product features during its PulumiUP virtual conference on Thursday.

For organizations that have cloud native ambitions but struggle with just getting started, Pulumi’s new AI-enhanced features, along with its other new features and existing API, are designed for the task.

Other newly introduced features include the ability to convert infrastructure across a stack from an alternative such as Terraform with accessible IaC commands.

AI and Insights

When managing thousands of resources across multiple clouds, manual errors can be devastating. A proper IaC platform must prevent manual errors and streamline operations. It should provide a single source of truth, something that becomes a necessity at the scale of cloud native environments.

For serverless architectures and Kubernetes applications, for example, managing infrastructure with a programming language of your choice, a capability Pulumi provides, is also critical as IaC becomes the default choice in the cloud native world.

“Pulumi is more suitable for this new world, where infrastructure plays a different role,” Aaron Kao, Pulumi’s vice president for marketing, told The New Stack.

Pulumi’s new features are designed to increase developer productivity and operational scalability by leveraging metrics from past projects to automatically compile an optimal application stack for new projects, Volk said.

For example, he said, the analytics engine might find that leveraging SQL databases over NoSQL ones leads to a decreased number of weekly deployments that at the same time show higher failure rates and a longer mean time to recovery.

The new features Pulumi announced at its conference include:

An On-Ramp from Terraform

A new feature in Pulumi makes converting Terraform infrastructure as code easier.

Tf2pulumi, which converts Terraform projects to Pulumi programs, is now part of the Pulumi CLI. The new Terraform conversion support covers Terraform modules, all core features of Terraform 1.4 and the majority of Terraform built-in functions.

The tf2pulumi feature previously converted snippets of Terraform to Pulumi, and now supports conversion of most complete Terraform projects. It is now integrated with the pulumi convert command in the CLI, which can also be used to convert Pulumi YAML to other Pulumi languages.

A Deeper Dive into Cloud Resources

Pulumi Insights now lets engineers ask questions about cloud resource property values, in addition to resource types, packages, projects and stacks. This property search capability allows teams to perform deeper analysis on their resources.

The Insights feature also now allows search filtering by teams. This allows organizations to analyze resources under management per team and better estimate usage and cost.

Pulumi Insights is where Pulumi’s AI capabilities particularly shine, with a heavy implementation of ChatGPT functionality. Information is retrieved by issuing commands in conversational English, and Pulumi Insights offers actionable analysis and information about how to accomplish infrastructure orchestration-related tasks.

On-Demand Infrastructure Stacks for Testing

Review Stacks, a new feature of Pulumi Deployments, are temporary, on-demand infrastructure environments created for each pull request in a repository. They allow engineers to review and test IaC changes in an isolated setting before merging them into the main branch.

The feature streamlines the development process by maintaining a separation between testing and production environments and catching potential issues before they reach production. With Review Stacks, organizations can enhance resource efficiency by spinning up a test stack only when needed, which is intended to accelerate deployment cadence.

Managing Kubernetes Complexity in Multicloud Environments
https://thenewstack.io/managing-kubernetes-complexity-in-multicloud-environments/ (Thu, 15 Jun 2023)

Kubernetes has become the ubiquitous choice as the container orchestration platform for building and deploying cloud native applications. As enterprises adopt Kubernetes, one of the key decisions they have to make is around adopting a multicloud strategy. It’s essential to understand the factors driving the need for a solution across public cloud providers such as Amazon Web Services (AWS), Azure, GCP, Oracle, Alibaba, etc., and validate whether those factors are relevant currently or in the future. Some factors that influence multicloud strategy are:

  • Specialized cloud technology needs/requirements for particular applications
  • Multiple business units adopting separate clouds
  • GDPR and other locality considerations
  • Disaster recovery
  • Mergers and acquisitions of other businesses that have adopted different clouds
  • Dependency on a cloud-managed service

Specialized Cloud Technology Needs/Requirements for a Particular Application

Some applications require specialized cloud services only available on specific cloud platforms. For example, Google Bigtable is a NoSQL database only available on Google Cloud. Similarly, Azure has specialized machine learning and AI services, such as Azure Cognitive Services.

In such scenarios, enterprises need to deploy their applications across multiple clouds to access the specialized services required for their applications. This approach can also help organizations optimize costs by choosing the most cost-effective cloud service for each application.

Multiple Business Units Adopting Separate Clouds

In large organizations, different business units may have unique requirements for their cloud services, leading to the adoption of separate cloud services. For example, one business unit may prefer Google Cloud for its machine learning capabilities, while another may prefer AWS for its breadth of services. As a result, the cloud environment becomes fragmented, and deploying applications across multiple clouds becomes complex.

GDPR and Other Locality Considerations

Regional regulations can also drive the need for a multicloud approach. For example, enterprises may need to store and process data in specific regions to comply with data residency regulations. For instance, Alibaba Cloud is China’s leading cloud provider and the preferred cloud in that region.

Deploying applications across multiple clouds in different regions can help enterprises meet their data residency and compliance requirements.

Disaster Recovery

Implementing disaster recovery in the right manner is essential for enterprises, as downtime can lead to significant revenue loss and reputational damage. A multicloud approach can help enterprises ensure business continuity by deploying applications across multiple clouds. In such scenarios, primary applications can run in one cloud while secondary applications can run in another for disaster recovery.

This approach can also help enterprises optimize their costs by choosing the most cost-effective cloud service for disaster recovery.

Mergers and Acquisitions

When organizations merge, they may have different cloud environments that must be integrated. Similarly, when organizations acquire other companies, they may need to integrate the acquired company’s cloud environment with their existing cloud environment, hence the need for a multicloud approach.

Dependency on a Particular Cloud Service

Enterprises may need to deploy applications in a particular cloud due to the dependency on a specific service that a specific cloud provider only offers. For example, an organization may require managed Oracle for its databases or SAP HANA for its ERP systems. In this case, deploying the applications in the same cloud is necessary to be closer to the database. Platform and site reliability engineering (SRE) teams must now acquire skills to manage Kubernetes infrastructure on a new public cloud. Platform teams must thoroughly understand all their application team requirements to see whether any of their applications will fall into this category.

How to Manage Multicloud Kubernetes Operations with a Platform Approach

Enterprises may want to invest in a true Kubernetes operations platform if the multicloud deployment is a critical requirement now or in the future. A true Kubernetes operations platform helps enterprises develop standardized automation across clouds while leveraging public cloud Kubernetes distributions such as AWS EKS, Azure AKS, Google GKE, etc. On the other hand, deploying and managing Kubernetes infrastructure on multiple clouds without a Kubernetes operations platform requires a lot of manual effort and can lead to substantial operational costs, operational inconsistencies, project delays, etc.

  • A Kubernetes operations platform can standardize the process for deploying and managing Kubernetes clusters across multiple clouds. Enterprises can use a unified interface to automate the deployment and management of Kubernetes clusters across multiple clouds. This automation helps improve consistency and reduce the risk of human error. It also reduces the need for specialized skills.
  • Enterprises also need to maintain a unified security posture across clouds. In a multicloud environment, each cloud provider has its own security policies, which makes it hard for enterprises to implement standard security policies across the clouds. A Kubernetes operations platform can provide consistent security policies across clouds, enforcing governance and compliance uniformly.
  • Consistent policy management and network security policies across clouds are critical for adopting multicloud Kubernetes deployments. A Kubernetes operations platform should provide standardized workflows for applying network security and Open Policy Agent (OPA) policies for Kubernetes clusters spanning clouds. Policies, including network policies, ingress and egress rules, can be defined in a centralized location and deployed to all Kubernetes clusters, ensuring consistency and reducing operational complexity. A rough sketch of distributing one such policy to several clusters is shown after this list.
  • A true Kubernetes operations platform should provide a unified bimodal multitenancy (cluster and namespace) across clouds. This platform should allow multiple teams and applications to share the same Kubernetes clusters without affecting each other, providing better resource utilization and cost efficiency. Similarly, for teams, applications or environments that require dedicated clusters, the Kubernetes platform should offer cluster-as-a-service where the individual teams can create their clusters in a self-serve manner adhering to the security, governance and compliance set by the platform and SRE teams.
  • Kubernetes access control, role-based access control (RBAC) and single sign-on (SSO) across all clouds are essential for a Kubernetes operations platform. However, access management becomes increasingly complex when deploying Kubernetes across multiple clouds. A unified access management solution can simplify the process and reduce the security risk.
  • Finally, a single pane of administration offering visibility for the entire infrastructure spanning multiple clouds is essential for a Kubernetes operations platform. A single management plane can provide centralized visibility into Kubernetes clusters across multiple clouds, allowing enterprises to monitor, manage and troubleshoot their Kubernetes clusters more efficiently.
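
To illustrate the centralized policy distribution point above, the sketch below applies the same default-deny ingress NetworkPolicy to clusters registered as separate kubeconfig contexts. The context names and namespace are assumptions, and a real operations platform would do this through its own workflows rather than a one-off script:

from kubernetes import client, config

# Assumed kubeconfig context names for clusters running on different clouds.
CONTEXTS = ["eks-prod", "aks-prod", "gke-prod"]

# One policy definition, applied identically everywhere: deny all ingress by default.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # selects every pod in the namespace
        policy_types=["Ingress"],               # no ingress rules listed, so all ingress is denied
    ),
)

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    networking = client.NetworkingV1Api(api_client)
    networking.create_namespaced_network_policy(namespace="payments", body=policy)
    print(f"applied default-deny-ingress to {ctx}")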

Conclusion

A multicloud strategy may be an important consideration for enterprises that are adopting a Kubernetes operations platform for managing their Kubernetes infrastructure. Enterprises should carefully look at all factors that influence a multicloud deployment and decide whether multicloud is required for their organization. A true multicloud Kubernetes operations platform should provide standardized automation, consistent security policies, unified Kubernetes bimodal multitenancy, access management and a single administration pane, offering visibility for the entire infrastructure spanning multiple clouds.

Apache SeaTunnel Integrates Masses of Divergent Data Faster
https://thenewstack.io/apache-seatunnel-integrates-masses-of-divergent-data-faster/ (Thu, 15 Jun 2023)

The latest project to reach top-level status with the Apache Software Foundation (ASF) was designed to solve common problems in data integration. Apache SeaTunnel can ingest and synchronize massive amounts of data from disparate sources faster, greatly reducing the cost of data transfer.

“Currently, the big data ecosystem consists of various data engines, including Hadoop, Hive, Kudu, Kafka, HDFS for big data ecology, MongoDB, Redis, ClickHouse, Doris for the generalized big database ecosystem, AWS S3, Redshift, BigQuery, Snowflake in the cloud, and various data ecosystems like MySQL, PostgreSQL, IoTDB, TDEngine, Salesforce, Workday, etc.,” Debra Chen, community manager for SeaTunnel, wrote in an email message to The New Stack.

“We need a tool to connect these data sources. Apache SeaTunnel serves as a bridge to integrating these complex data sources accurately, in real-time, and with simplicity. It becomes the ‘highway’ for data flow in the big data landscape.”

The open source tool is described as an “ultra-high-performance distributed data integration platform that supports real-time synchronization of massive data.” We’re talking tens of billions of data points a day.

Efficient and Rapid Data Delivery

Begun in 2017 and originally called Waterdrop, the project was renamed in October 2021 and entered the ASF incubator in December the same year. Created by a small group in China, SeaTunnel since has grown to more than 180 contributors around the world.

Built in Java and other languages, it consists of three main components: source connectors, transfer compute engines and sink connectors. The source connectors read data from the source end (it could be JDBC, binlog, unstructured Kafka, a Software as a Service API, or AI data models) and transform the data into a standard format understood by SeaTunnel.

Then the transfer compute engines process and distribute the data (such as data format conversion, tokenization, etc.). Finally, the sink connector transforms the SeaTunnel data format into the format required by the target database for storage.
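
As a purely conceptual illustration of that three-stage flow (not SeaTunnel’s actual connector API, which is written in Java), the pattern looks roughly like this:

# Toy illustration of the source -> transform -> sink pattern; not SeaTunnel code.
def source():
    """Read raw records from an upstream system (JDBC, Kafka, a SaaS API and so on)."""
    yield {"user_id": "42", "amount": "19.99"}

def transform(record: dict) -> dict:
    """Convert each record into a standard intermediate format the engine understands."""
    return {"user_id": int(record["user_id"]), "amount_cents": int(float(record["amount"]) * 100)}

def sink(record: dict) -> None:
    """Write the standardized record out in whatever format the target store expects."""
    print(f"INSERT INTO target_table VALUES ({record['user_id']}, {record['amount_cents']})")

for raw in source():
    sink(transform(raw))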

“Of course, there are also complex high-performance data transfer mechanisms, distributed snapshots, global checkpoints, two-phase commits, etc., to ensure efficient and rapid data delivery to the target end,” Chen said.

SeaTunnel provides a connector API that does not depend on a specific execution engine. While it uses its own SeaTunnel Engine for data synchronization by default, it also supports multiple versions of Spark and Flink. The plug-in design allows users to easily develop their own connector and integrate it into the SeaTunnel project. It currently supports more than 100 connectors.

It supports various synchronization scenarios, such as offline-full synchronization, offline-incremental synchronization, change data capture (CDC), real-time synchronization and full database synchronization.

Enterprises use a variety of technology components and must develop corresponding synchronization programs for different components to complete data integration. Existing data integration and data synchronization tools often require vast computing resources or Java Database Connectivity (JDBC) connection resources to complete real-time synchronization. SeaTunnel aims to ease these burdens, making data transfer faster, less expensive and more efficient.

New Developments in the Project

In October 2022, SeaTunnel released its major version 2.2.0, introducing the SeaTunnel Zeta engine, its data integration-specific computing engine, and enabling cross-engine connector support.

Last December it added support for CDC synchronization, and earlier this year added support for Flink 1.15 and Spark 3. The Zeta engine was enhanced to support CDC full-database synchronization, multi-table synchronization, schema evolution and automatic table creation.

The community also recently submitted SeaTunnel-Web, which allows users not only to use SQL-like languages for transformation but also to directly connect different data sources, using a drag-and-drop interface.

“Any open source user can easily extend their own connector for their data source, submit it to the Apache community, and enable more people to use it,” Chen said. “At the same time, you can quickly solve the data integration issues between your enterprise data sources by using connectors contributed by others.”

SeaTunnel is used in more than 1,000 enterprises, including Shopee, Oppo, Kidswant and Vipshop.

What’s Ahead for SeaTunnel?

Chen laid out these plans for the project going forward:

  • SeaTunnel will further improve the performance and stability of the Zeta engine and fulfill the previously planned features such as data definition language change synchronization, error data handling, flow rate control and multi-table synchronization.
  • SeaTunnel-Web will transition from the alpha stage to the release stage, allowing users to define and control the entire synchronization process directly from the interface.
  • Cooperation with the artificial general intelligence component will be strengthened. In addition to using ChatGPT to automatically generate connectors, the plan is to enhance the integration of vector databases and plugins for large models, enabling seamless integration of over 100 data sources.
  • The relationship with the upstream and downstream ecosystems will be enhanced, integrating and connecting with other Apache ecosystems such as Apache DolphinScheduler and Apache Airflow. Regular communication occurs through emails and issue discussions, and major progress and plans of the project and community are announced through community media channels to maintain openness and transparency.
  • After supporting Google Sheets, Feishu (Lark), and Tencent Docs, it will focus on constructing SaaS connectors, such as ChatGPT, Salesforce and Workday.

Salesforce Officially Launches Einstein AI-Based Data Cloud
https://thenewstack.io/salesforce-officially-launches-einstein-ai-based-data-cloud/ (Thu, 15 Jun 2023)

Salesforce has been sprinkling its brand of Einstein AI into a bevy of its products during the past couple of years, including such popular services as CRM Cloud and Marketing Cloud. Now it’s going all-out for AI in a dedicated platform.

After introducing it at last September’s Dreamforce conference, the company on June 12 officially launched the dedicated generative AI service it calls Data Cloud — a catch-all subscription service that can be utilized by enterprise IT staff, data scientists and line-of-business people alike.

CEO and co-founder Marc Benioff, speaking to a livestream audience from New York ahead of this week’s re:Inforce conference in Anaheim, Calif., told listeners that since its soft launch to existing customers last fall, Data Cloud has become the company’s “fastest-growing cloud EVER.”

“One of the reasons why this is becoming such an important cloud for our customers is as every customer is preparing for generative AI. They must get their data together. They must organize and prepare their data. So creating a data cloud is becoming that important,” Benioff said.

Einstein Trust Layer Maintains a Safety Shield

Salesforce Data Cloud includes something called the Einstein Trust Layer, a new AI moderation and redaction service that overlays all enterprise AI functions while providing data privacy and data security, Benioff said. The Trust Layer resolves concerns of risks associated with adopting generative AI by meeting enterprise data security and compliance demands while offering users the continually unfolding benefits of generative AI.

“Trust is always at the start of what we do, and it’s at the end of what we do,” he said. “We came up with our first trust model for predictive (AI) in 2016, and now with generative AI, we’re able to take the same technology, and the same idea to create what we call a GPT trust layer, which we’re going to roll out to all of our customers.

“They will have the ability to use generative AI without sacrificing their data privacy and data security. This is critical for each and every one of our customers all over the world,” Benioff said.

Einstein Trust Layer aims to prevent text-generating models from retaining sensitive data, such as customer purchase orders and phone numbers. It is positioned between an app or service and a text-generating model, detecting when a prompt might contain sensitive information and automatically removing it on the backend before it reaches the model.

Trust Layer is aimed at companies with strict compliance and governance requirements that would normally preclude them from using generative AI tools. It’s also a way for Salesforce to address concerns about the privacy risks of generative AI, which have been raised by organizations such as Amazon, Goldman Sachs and Verizon.

How the AI in Data Cloud Works

A real-life example of how AI in the Data Cloud works was offered in a demo by the French sporting goods conglomerate Rossignol, which built its reputation on high-end ski wear and apparel, snowboarding and other winter sports equipment. Due to shortened winters, it is now moving increasingly into the year-round sporting goods market, which includes mountain bikes and other products, so its product SKUs are multiplying fast.

Bringing up a Rossignol product list in a demo for the audience, a company staffer was able to populate the descriptions of dozens of products (already in the company’s storage bank) into a spreadsheet that normally would have taken a team days to research, write and edit. The demo then showed how all those product descriptions could be translated into various languages with a mere series of clicks — again saving a considerable window of time for the marketing person completing this task.

Additional Salesforce News

The company also revealed its intention to infuse Einstein AI GPT into all its services by way of a distribution called GPT for Customer 360. This will make available Einstein AI GPT so enterprises can create “trusted AI-created content across every sales, service, marketing, commerce, and IT interaction, at hyperscale,” Benioff said.

Salesforce revealed new generative AI research. Key data points include:

  • While 61% of employees use or plan to use generative AI at work, nearly 60% of those don’t know how to do so in a trusted way.
  • Some 73% of employees believe generative AI introduces new security risks, yet 83% of C-suite leaders claim they know how to use generative AI while keeping data secure, compared to only 29% of individual contributors. This shows a clear disconnect between leadership and individual contributors.

ServiceNow Revamps Intelligent Chatbot with Generative AI
https://thenewstack.io/servicenow-revamps-intelligent-chatbot-with-generative-ai/ (Wed, 14 Jun 2023)

ServiceNow recently unveiled Now Assist for Virtual Agent, a solution that fortifies the platform’s chatbot with textual applications of Generative AI. The combination is designed to enable conversational, self-service user experiences to easily initiate — and complete — what otherwise may be complicated workflows.

Now Assist for Virtual Agent is the latest ServiceNow offering to exploit the enterprise worth of Generative AI. The company previously announced ServiceNow Generative AI Controller, which enables the platform to access Generative AI models via OpenAI API and Microsoft Azure OpenAI Service. ServiceNow architected Now Assist for Search on top of the controller; the former provides search results from internal enterprise sources.

The Virtual Agent solution sits atop the search and controller components, both of which are available for rapid question answering, intelligent search, and content summarization. Now Assist for Virtual Agent also integrates with a designer component (Virtual Agent Designer) for creating workflows.

According to Jeremy Barnes, ServiceNow VP for Platform Product AI, uniting these capabilities through Now Assist for Virtual Agent “gives a control that our customers know and love, but allows it to be much more rapid to create these conversational experiences that users want so they get resolutions faster.”

From Analysis to Action

Coupling leading-edge natural language technologies with ServiceNow’s chatbot expedites timely action from information retrieval, textual summarizations, and low code workflow design. Users can ask questions, issue prompts, and search content before acting on results. Jon Sigler, ServiceNow VP for the Now Platform, described a use case in which an employee requires Now Assist for Virtual Agent to search through internal documentation to find options for benefits packages.

In this example, the bot would access the relevant knowledge base to find all the pertinent documents, send it to Azure OpenAI service or OpenAI API to summarize this information, then present results to the user. “In conjunction with the Generative AI piece, we can add actions users can then take,” Sigler commented. “In this use case, they may be able to change their benefits based on the conversation they’re having with the bot, as opposed to going in and doing it themselves or getting an agent involved.”
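
To illustrate the general retrieve-then-summarize pattern described here (a generic sketch, not ServiceNow’s actual implementation), the following uses the OpenAI Python client; the search_knowledge_base stub stands in for whatever internal retrieval the platform performs:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_knowledge_base(question: str) -> list[dict]:
    """Stub for an internal knowledge-base search; a real system would query
    trusted enterprise content, not the public internet."""
    return [{"title": "Benefits guide", "body": "Open enrollment runs in November."}]

def answer_from_internal_docs(question: str) -> str:
    docs = search_knowledge_base(question)
    context = "\n\n".join(f"{d['title']}:\n{d['body']}" for d in docs)
    # Summarize only the retrieved documents; the bot can then attach follow-up actions.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided documents."},
            {"role": "user", "content": f"Question: {question}\n\nDocuments:\n{context}"},
        ],
    )
    return response.choices[0].message.content

print(answer_from_internal_docs("What are my options for benefits packages?"))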

Conversational Workflow Design

Another premier benefit of buttressing Virtual Agent (VA) with vanguard language understanding models is users can build workflows for the bot via natural language interfaces. Augmenting the VA designer with the heightened Natural Language Understanding of contemporary Large Language Models (LLMs) creates a situation in which “anyone can build one of these VA workflows, and you can now do that conversationally with a great experience,” Barnes remarked.

The actions that can be taken after employing LLMs to search through and summarize results from relevant documentation can be input in workflows as suggestions. The entire process is substantially accelerated, from designing workflows and triggering them via natural language to implementing next actions. Barnes explained these “actions can be taken because they’re embedded in a workflow platform that has built them in. So that connection, instead of you going through a traditional, slot-filling, cumbersome flow, enables you to get to the resolution much faster while still invoking stuff that’s there.”

Internal Applicability

One of the caveats about using bots like ChatGPT for information retrieval is that they search for answers through public sources on the internet, which aren’t always reliable. Now Assist for Virtual Agent supports implementations in which results are gleaned from trusted, internal sources. Granted, API calls are still made to summarize those results as needed.

Nonetheless, since the answers are derived from enterprise content repositories or databases, they’re much more credible than they’d be otherwise. “It’s not that we expose all of the internet to what we’re making available,” Sigler said. “The real power here is taking things from within the instance, the customer-specific data, the knowledge articles.”

Underlying Utility

Keeping the internal sources that natural language searches draw on separate from the environments in which models synthesize results is crucial to the overall enterprise adoption and underlying utility of Now Assist for Virtual Agent. “That is the real key to success here,” Sigler reflected. “It’s your ability to take that domain-specific instance, specific data, and send that off to the backend and get a summarization of that.”

When these capabilities are joined with the ability to take action from an intelligent bot with an easily-defined, conversationally developed workflow, the effectiveness and efficiency of this approach become apparent.

The post ServiceNow Revamps Intelligent Chatbot with Generative AI appeared first on The New Stack.

The Risks of Decomposing Software Components https://thenewstack.io/the-risks-of-decomposing-software-components/ Wed, 14 Jun 2023 19:47:23 +0000 https://thenewstack.io/?p=22711004

Software components decompose. They are not like gold bricks in a safe. They are more like lettuce in a warehouse. They have to be used and replaced. It’s not apples to apples, per se, but the point is that components require updating.

If the software is not updated, problems arise, and we face security vulnerabilities like Log4Shell, the flaw in the widely used Log4j library.

But getting components updated in a timely way is a universal problem, and one being tackled by the Linux Foundation’s Open Source Security Foundation (OpenSSF), said Omkhar Arasaratnam, general manager of the OpenSSF, and Brian Behlendorf, CTO of the OpenSSF, in an interview at the Open Source Summit North America in Vancouver, British Columbia.

When a component is fixed, how do you get the fix from upstream to downstream as quickly and efficiently as possible? As Log4j illustrated, many teams still rely on outdated and vulnerable versions of software components.

“It’s a very classical case of a security issue, it’s not something novel,” Arasaratnam said. “I’d like to ensure that we start by making our software secure by construction so the issues like that don’t exist at all: through education, through using different techniques, hardened libraries, well-vetted patterns for addressing those kinds of issues. Now, when issues like that do occur, then you’re right, we do have to jump into rapid response mode. We have to have not only, as you pointed out, a good mechanism of traversing stuff from upstream all the way back down to what’s running in prod. But that’s where artifacts like SBOMs come in.”

An SBOM is a software bill of materials. The SBOM tells you which components a piece of software contains and, ideally, even more.

According to the Linux Foundation, an SBOM is “a complete, formally structured list of components, libraries, and modules required to build (i.e., compile and link) a given piece of software and the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access.”
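
To make the definition concrete, here is a hedged sketch of consuming an SBOM: a simplified CycloneDX-style component list checked against a known fixed-in version. The JSON fragment is abbreviated and the version comparison is deliberately naive; real SBOMs and real scanners carry far more detail.

```python
import json

# Simplified, CycloneDX-style SBOM fragment: a formally structured list of
# components and versions (real SBOMs also record suppliers, hashes, licenses
# and the dependency relationships between components).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "org.apache.logging.log4j:log4j-core", "version": "2.14.1"},
    {"name": "com.fasterxml.jackson.core:jackson-databind", "version": "2.15.2"}
  ]
}
"""

# Illustrative "fixed-in" versions; a real check would consult a vulnerability feed.
FIXED_IN = {"org.apache.logging.log4j:log4j-core": "2.17.1"}

def flag_outdated(sbom: dict) -> list[str]:
    """Return components still below the known fixed version (naive string compare)."""
    findings = []
    for comp in sbom.get("components", []):
        fixed = FIXED_IN.get(comp["name"])
        if fixed and comp["version"] < fixed:
            findings.append(f"{comp['name']} {comp['version']} (fixed in {fixed})")
    return findings

print(flag_outdated(json.loads(sbom_json)))
```

Walking an inventory like this is exactly the kind of upstream-to-downstream traversal Arasaratnam describes: once you know which components are running in production, you can tell which ones still need the fix.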

Arasaratnam, who recently joined OpenSSF as general manager, said SBOMs provide telemetry, a point Behlendorf also noted. They provide data that can be reasoned over when making some of these decisions.

“Wouldn’t it be wonderful if we could also provide reputation data on a particular repo you’ve decided to link against? Wouldn’t it be great if you had that full inventory of the time that you use that GCC compiler flag that could have caused some kind of regression? All of this data is extremely valuable. And I think for a long time, we, in enterprise in general and production environments, have been fumbling around with imprecise data, and have been unable to really leverage all the telemetry we could be using.”

The discussion also covers the issues with package managers and how we may quantify the risks of software vulnerabilities.

The post The Risks of Decomposing Software Components appeared first on The New Stack.

Canva Launches Developer Platform, Eyes Generative AI Apps https://thenewstack.io/canva-launches-developer-platform-eyes-generative-ai-apps/ Wed, 14 Jun 2023 18:00:08 +0000 https://thenewstack.io/?p=22710979

Today at Canva’s first-ever developer conference in San Francisco, the company announced a new developer platform, along with a $50 million “Innovation Fund.” Canva, a design platform that competes with the likes of Adobe and Figma, claims it has 135 million monthly active users. So this is potentially of great interest to devs — including independent developers, who will be able to charge money for the apps they build.

To find out more about Canva’s developer platform, and why developers might want to utilize it, I spoke to Canva’s head of ecosystem, Anwar Haneef.

The key to the new dev platform is the “Canva App,” which is described as “a JavaScript file that runs inside an iframe.” The file can then be displayed within Canva — which is both a web-based service and an application across various platforms — and access a number of APIs that interact with a user’s design. To build apps, developers can access the Canva Apps SDK (Software Development Kit), which is now available publicly.

What Kinds of Apps Will Be Built?

Canva’s user base is more wide-ranging than Figma’s — it’s generally seen as a business or marketing tool, whereas Figma is explicitly targeted at designers (although I discovered earlier this year that a lot of developers use Figma too). Haneef said that its users utilize Canva for various design purposes, such as marketing and sales materials, and social media content.

When I asked what kinds of apps Canva hopes will be built for this large user base, unsurprisingly Haneef highlighted generative AI apps first. He expects to see AI apps such as virtual avatars, image manipulation apps, and photo editing tools. Indeed, one of the apps to be showcased at the developer conference today is a generative audio app, which he said will generate custom music for a Canva user.

“There’s a whole gamut of media, visual, and auditory media type of applications that we expect,” said Haneef, “especially building off of this Cambrian explosion of AI happening lately.”

Canva Apps

Canva AI Apps

Another area of interest to Canva is workflow-focused apps, continued Haneef. Canva has many users from marketing and sales backgrounds, who use the platform to create designs and incorporate assets from digital asset management suites. So he envisions apps that seamlessly integrate productivity tools, like Monday or Asana, into Canva.

A Canva App Store for Developers

Canva apps will be a combination of both free and paid options, so there will be a marketplace for the apps. Haneef said that Canva wants to make the platform sustainable for everyone involved, whether they’re developers working on behalf of external companies or independent devs hoping to monetize their app.

Developers will be able to set up subscription services or one-time payment models for their apps, Haneef confirmed. In addition, the $50m “Canva Developers Innovation Fund” is available for developers to apply to.

Clearly, Canva is targeting JavaScript developers first and foremost. But Haneef also said that it wants to entice any other frontend developer to build on Canva. The APIs and tools provided by Canva are designed to be familiar and comfortable for web developers to pick up and use, he said. They will be offering pre-built components and libraries, which he said will allow developers to create apps “in a matter of days.”

Canva Apps Marketplace

While JavaScript is the main focus, Canva also has plans to launch something called “Connect APIs”, which will be REST APIs that can connect any external app with Canva. A waitlist for this will open today and the APIs will be ready later this year, stated the company in its press release.

Haneef added that the Connect APIs won’t have an SDK, so developers will be able to use them with any programming language of their choice.
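
Because the Connect APIs are plain REST and not yet released, any example is necessarily speculative. The sketch below only shows the general shape of calling such an API from Python; the host, endpoint path, payload fields and authentication scheme are placeholders, not Canva's published interface.

```python
import os
import requests

API_BASE = "https://api.canva.example/v1"     # placeholder host, not a real endpoint
TOKEN = os.environ["CANVA_API_TOKEN"]         # placeholder auth scheme

def create_design_from_template(template_id: str, title: str) -> dict:
    """Ask a hypothetical Connect endpoint to instantiate a design from a template."""
    response = requests.post(
        f"{API_BASE}/designs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"template_id": template_id, "title": title},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```

Any language with an HTTP client could make the same call, which is the point of shipping REST endpoints without an SDK.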

The ‘Canva For Everything’ Hype Cycle

Given its broad user base and the fact it can be used to design pretty much anything digital, Canva is often talked about on social media as a threat to other creator platforms. Just this week, Jamie Marsland (who runs a WordPress dev shop) suggested that Canva is a threat to WordPress, because “Canva’s distribution potential is absolutely enormous.” He pointed out that Canva already has a one-page site builder.

Commenters on Marsland’s tweet pointed out that Canva is more of a competitor to Photoshop currently. But this new developer platform could add a lot of nifty functionality for Canva users. Perhaps a CMS company will create an app that does indeed make Canva into a full-fledged website builder.

Regardless, it’s clear that Canva itself has grand ambitions to broaden its usage. According to Haneef, Canva aims to be “the most pluggable platform in the world.” That sounds hyperbolic — and it is — but Canva’s large user base undoubtedly makes it an attractive proposition for developers. So if you’re a dev looking for opportunities to plug in and profit, then it’s worth checking out this new Canva developer platform.

The post Canva Launches Developer Platform, Eyes Generative AI Apps appeared first on The New Stack.

70% of Devs Using or Will Use AI, Says Stack Overflow Survey https://thenewstack.io/70-percent-of-developers-using-or-will-use-ai-says-stack-overflow-survey/ Wed, 14 Jun 2023 17:45:51 +0000 https://thenewstack.io/?p=22710894

Artificial intelligence is hot, and GitHub Copilot and ChatGPT are poised to benefit developers, according to Stack Overflow’s 2023 Developer Survey.

Adoption of the Lua and Rust programming languages spiked in 2023, as did that of the Python-based FastAPI framework, reported the latest global survey of more than 90,000 developers.

When asked about their plans to use AI tools in their development process, 44% of developers said they already do this and another 26% plan to do so soon.

When this group was asked what specific AI-powered developer tools they use, 55% mentioned GitHub Copilot, while 13% use Tabnine and 5% use Amazon Web Services CodeWhisperer. The other seven tools included in the survey were used by no more than 2%.

The gap between GitHub Copilot and similar tools is most noticeable when looking at how many of its users plan to continue using it — 72% of GitHub Copilot users want to use it in the upcoming year, as compared to only 53% of AWS CodeWhisperer users and 37% of Tabnine users.

AI search tools like ChatGPT were also highlighted in the study, which asked respondents which of 11 different AI search tools they had used in the past year.

About 78% of both ChatGPT and Phind users claim they will continue using the technology in the next year, which is higher than the 61% of Bard AI and 62% of Bing AI users expressing that type of loyalty.

Lua, Rust, TypeScript and Go Gain Users

Lua and Rust were among the fastest-growing programming languages. Rust usage grew 40% in 2023, and its adoption rate now stands at 13% of developers. Meanwhile, Lua adoption rose 50%, to a still-modest 6%.

Among languages with larger user bases, TypeScript and Go saw the largest gains; TypeScript adoption rose 12%, to 39% of developers. Go rose 19%, used by 13% of devs.

When it comes to web frameworks, Python-based FastAPI saw the most rapid growth, rising 48% to a still-modest 7% of developers surveyed. While not technically a framework, jQuery continues its steady decline, dropping 23%, to usage by 22% of developers.

Other notable changes include:

  • Docker adoption fell 19% over the last year, but 51% of developers in the 2023 study reported that they were using it. It remains incredibly popular among its core user base — 75% of developers that have used Docker in the last year have a desire to continue using it in the next 12 months. This tells us that even after several bumps in the hype cycle, Docker and containers are here to stay.
  • Use of Atlassian's Trello, a tool for asynchronous collaboration and project management, fell 42% since 2022, down to 19% of survey participants. Asana adoption also fell 36%, with only 5% using it in 2023.
  • Less significant declines were also seen by the leaders in this category, including Atlassian's Jira and Confluence, which may indicate a market consolidation and that developers are relying on fewer collaboration tools.
  • Adoption of the popular package managers npm and Yarn is on the decline. Use of npm fell 24% to 49% of respondents, and Yarn dropped 21% to 22% of respondents.

Heroku Usage Down 40% Since 2022's Report

Some of the biggest news coming out of the Stack Overflow survey is about the technologies and frameworks that have seen significant declines in usage.

Almost all of the cloud providers saw declines, but Heroku's was the worst. Only 12% of developers used Heroku in 2023, down 40% from the 2022 study. Even worse, only 22% of the technology's user base said they want to continue using it in the upcoming year.

Even the cloud leaders are taking a hit, with Google Cloud usage falling 11% compared to the previous year, Microsoft Azure dropping 9%, and Amazon Web Services declining 5%.

A new group of cloud platforms may be taking advantage of this slump. For the first time, Cloudflare (used by 15% of the survey participants), Vercel (used by 11%), Netlify (also used by 11%) and Germany’s Hetzner (used by 4%) were included in the survey. Notably, Vercel and Hetzner were the most admired cloud providers among those that survey participants were asked about.

The post 70% of Devs Using or Will Use AI, Says Stack Overflow Survey appeared first on The New Stack.

Red Hat Launches OpenStack Platform 17.1 with Enhanced Security https://thenewstack.io/red-hat-launches-openstack-platform-17-1-with-enhanced-security/ Wed, 14 Jun 2023 17:34:12 +0000 https://thenewstack.io/?p=22711054

VANCOUVER — At the OpenInfra Summit here, Red Hat announced the impending release of its OpenStack Platform 17.1. This release is the product of the company’s ongoing commitment to support telecoms as they build their next-generation 5G network infrastructures.

In addition to bridging existing 4G technologies with emerging 5G networks, the platform enables advanced use cases like 5G standalone (SA) core, open virtualized radio access networks (RAN), and network, storage, and compute functionalities, all with increased resilience. And, when it comes to telecoms, the name of the game is resilience. Without it, your phone won’t work, and that can’t happen.

Runs On OpenShift

The newest version of the OpenStack Platform runs on Red Hat OpenShift, the company’s Kubernetes distribution, with Red Hat Enterprise Linux (RHEL) 8.4 or 9.2 underneath. This means it can support logical volume management partitioning and Domain Name System as a Service (DNSaaS).

The volume management partitioning enables short-lived snapshot and revert functionality. This lets service providers revert to a previous state during upgrades if something goes wrong. Of course, we all know that everything goes smoothly during updates and upgrades. Not.

This take on DNSaaS includes a framework for integration with Compute (Nova) and OpenStack Networking (Neutron) notifications, allowing auto-generated DNS records. In addition, DNSaaS includes integration support for Bind9.

Other Improvements

Red Hat also announced improvements to its Open Virtual Network (OVN) capabilities, the Octavia load balancer, and virtual data path acceleration. These enhancements ensure higher network service quality and improved OVN migration time for large-scale deployments.

OpenStack Platform 17.1 continues its legacy of providing a secure and flexible private cloud built on open source foundations. This latest release offers role-based access control (RBAC), FIPS-140 (ISO/IEC 19790) compatibility, federation through OpenID Connect, and Fernet tokens, ensuring a safer, more controlled IT environment.

Looking ahead to the next version, Red Hat software engineers are working on making it much easier to upgrade its OpenStack distro from one version to the next. Historically, this has always been a major headache for all versions of OpenStack. Red Hat’s control plane-based approach, a year or so in the future, sounds very promising.

The post Red Hat Launches OpenStack Platform 17.1 with Enhanced Security appeared first on The New Stack.

No More FOMO: Efficiency in SLO-Driven Monitoring https://thenewstack.io/no-more-fomo-efficiency-in-slo-driven-monitoring/ Wed, 14 Jun 2023 17:00:07 +0000 https://thenewstack.io/?p=22708546

Observability is a concept that has been defined in various ways by different experts and practitioners. However, the core idea that underlies all these definitions is efficiency.

Efficiency means using the available resources in the best possible way to achieve the desired outcomes. In the current scenario, where every business is facing fierce competition and changing customer demands, efficiency is crucial for survival and growth. Resources include not only money, but also time, productivity, quality and strategy.

IT spending is often a reflection of the market conditions. When the market is booming, companies tend to spend more on IT projects and tools, without being too concerned about the value they are getting from them. This can create some problems, such as having too many tools that are not integrated or aligned with the business goals, wasting resources on unnecessary or redundant tasks, and losing visibility and control over the IT environment.

IT spend always correlates to market temperature.

Even the companies that spend heavily on cloud services are reconsidering their big decisions that involve significant, long-term investments. Companies are reassessing their existing substantial spend to ensure their investments can be aligned with revenues or future revenue potential.

Observability tools are also subject to the same review. It is essential that the total operating cost of observability tools can also be directly linked to revenue, customer satisfaction, growth in business innovation and operational efficiency.

Why Do We Need Monitoring?

  • If we had a system that would absolutely never fail, we wouldn’t need to monitor that system.
  • If we had a system whose performance, reliability and functionality we never had to worry about, we wouldn’t need to monitor that system.
  • If we had a system that corrected itself and auto-recovered from failures, we wouldn’t need to monitor that system.

None of the aforementioned points are true today, so it is obvious that we need to set up monitoring for our infrastructure and applications, no matter the scale at which you operate.

What Is FOMO-Driven Monitoring?

When you are responsible for operating a critical production system, it is natural to want to collect as much monitoring data as possible. After all, the more data you have, the better equipped you will be to identify and troubleshoot problems. However, there are a number of challenges associated with collecting too much monitoring data.

Data Overload

One of the biggest challenges of collecting too much monitoring data is data overload. When you have too much data, it can be difficult to know what to look at and how to prioritize your time. This can lead to missed problems and delayed troubleshooting.

Storage Costs

Another challenge of collecting too much monitoring data is storage costs. Monitoring data can be very large, and storing it can be expensive. If you are not careful, you can quickly rack up a large bill for storage.

Reduced Visibility

When there is too much data, it can be difficult to see the big picture. This can make it difficult to identify trends and patterns that could indicate potential problems.

Increased Noise

More data also means more noise. This can make it difficult to identify important events and trends.

Security Concerns

Collecting too much monitoring data can also raise security concerns. If your monitoring data is not properly secured, it could be vulnerable to attack. This could lead to theft of sensitive data or disruption of your production systems.

FOMO-driven monitoring

Ultimately, an approach driven by the fear of missing out does not result in an optimal observability setup and, in fact, can contribute to plenty of chaos, increased expenses, ambiguity between teams and an overall drop in efficiency.

You can address this situation by being intentional in making decisions on all aspects of the observability pipeline, including signal collection, dashboarding and alerting. Using service-level objectives (SLOs) is one strategy that offers plenty of benefits.

What Are SLOs?

An SLO is a target or goal for a specific service or system. A good SLO will define the level of performance your application needs, but not any higher than necessary.

SLOs help us set a target performance level for a system and measure the performance over a period of time.

Example SLO: An API’s p95 will not exceed 300ms response time

How Do You Set SLOs?

SLOs are ultimately driven by customers; yes, they are the final authority. However, customers do not literally set SLOs themselves, as you can imagine. It is up to the business teams to tell the IT operations and development teams the expected performance and availability of a system.

For example, the business teams operating a marketing lead sign-up page can tell the IT teams that they want the page to load within 200ms at least 90% of the time. They would derive this conclusion by looking at the customer behavior already captured.

Now the IT teams can set the SLO for tracking by identifying SLIs (service-level indicators) in order to measure the SLO over a period of time. SLIs are the specific metrics, and the queries over those metrics, used to keep track of the SLO’s progression.
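
As a concrete illustration of the SLI-to-SLO relationship, the sketch below computes a p95 latency SLI from a batch of response-time samples and checks it against the 300ms target from the earlier example. The latencies are randomly generated stand-ins; in practice they would be pulled from your metrics backend.

```python
import random

def p95(samples: list[float]) -> float:
    """Service-level indicator: the 95th-percentile latency of the samples."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

SLO_TARGET_MS = 300.0  # "An API's p95 will not exceed 300ms response time"

# Made-up latencies standing in for data queried from a metrics backend.
latencies_ms = [random.gauss(180, 60) for _ in range(10_000)]

sli = p95(latencies_ms)
status = "met" if sli <= SLO_TARGET_MS else "violated"
print(f"p95 latency: {sli:.1f} ms -> SLO {status}")
```

Running the same calculation over rolling windows, and alerting only when the SLO is at risk, is what keeps the signal-to-noise ratio high compared with alerting on every raw metric.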

Here is what your observability life cycle looks like implementing an SLO-driven strategy.

SLO-driven strategy

The SLO-driven strategy builds in an intentional feedback loop. Observability is never a settled problem. Organizations that do not keep reinventing their observability strategy fall behind very quickly, ending up with ambiguous tools and outdated processes and practices, which in the end increases overall operational cost while decreasing efficiency.

With this approach, you get the ability to scientifically measure your infrastructure and application performance over a period of time. The data collected as a result can be used to inform important decisions on infrastructure spend, which in turn helps further improve efficiency.

What Does This Tell Us?

Taking an SLO-first approach allows us to be intentional about which metrics to collect to meet commitments to the business.

These are some of the benefits that organizations can achieve by following SLO-based observability strategy:

  • Improves the signal-to-noise ratio
  • Reduces tool proliferation
  • Enriches monitoring data, resulting in reduced MTTR/MTTI
  • Provides continuous improvement opportunities through the feedback loop
  • Connects monitoring costs to business outcomes, making it easier to justify spend to management

Use SLOs to drive your monitoring decisions:

  • Measure, revisit and review SLOs periodically based on outcomes
  • Improve observability posture through
    • Lower cost
    • Reduced issue resolution time
    • Increased team efficiency and innovation

Conclusion

We live in an era where efficiency is critical for organizational success. Observability costs can become uncontrollable if you do not have a proper strategy in place. An SLO-driven observability strategy can help you set guardrails, track performance goals and business metrics, and measure impact in a consistent manner while increasing operational efficiency and innovation.

The post No More FOMO: Efficiency in SLO-Driven Monitoring appeared first on The New Stack.

3 AI Moves Smart Companies Get Right https://thenewstack.io/3-ai-moves-smart-companies-get-right/ Wed, 14 Jun 2023 15:41:17 +0000 https://thenewstack.io/?p=22711017

Artificial intelligence leaders get three moves right when it comes to creating outcomes: priorities, people and platforms. That’s according to Nick Elprin, co-founder and CEO of machine learning/AI platform Domino Data Lab, speaking at this month’s Rev4 conference.

Priorities may seem like an obvious one, but companies do get it wrong, he said.

“Too many companies make the mistake of starting with some interesting data set they have or some trendy or novel new technique or algorithm, and they ask what can I do with this?” Elprin said. “In contrast, AI leaders working backwards, they start from a strategic objective or a business goal and they ask how can AI help me achieve this.”

Surprisingly, many companies also don’t talk about KPIs or business goals, he added — instead, many seem to view it as a shiny new toy without having clarity around how it will help their businesses, he said.

People and Platforms

Once there’s clarity around priorities, AI leaders build their talent strategy around a core of professional data scientists.

“That doesn’t mean that everyone has to be a Ph.D. in computer science, but what it does mean is that you need people inside your organization who have the expertise and the knowledge and a sound fundamental understanding of the methods and techniques involved in this type of work,” Elprin told audiences.

He shared customer testimonials about Domino’s support for collaboration across people and — perhaps more importantly to programmers and data scientists — different programming languages, including Python and R. He also predicted that a new wave of advanced AI, with its more complex models, is going to be the death knell for “citizen data scientist experiments.”

“They have a wider range of unexpected failure modes and negative consequences for the model from unexpected model behavior,” he said. “So it’s going to be ineffective and risky to have citizens doing the heavy lifting and building operators models.”

The third step is to empower them with technology and platforms for operating AI, he added.

“It [AI] is unlike anything that most businesses have had to build or operate or manage in the past, and it has some important implications for the kinds of technology you need to empower enable this sort of work,” he said.

How Domino Data Lab Differentiates

Domino Data Lab has built a business model on the premise of a purpose-built system. It handles the infrastructure and integration pieces, allowing a data scientist to start with a smaller footprint and then scale up — whether that means more GPU, CPU or whatever — as needed, without rebuilding. That’s one way it differentiates itself from the big cloud providers, who focus on compute and use proprietary platforms. It primarily competes against these cloud providers, custom solutions and, to some extent, the SAS Institute.

The company announced a number of new capabilities at its Rev4 conference in New York, starting with Code Assist for hyperparameter tuning of foundation models. Ramanan Balakrishnan, vice president of product marketing, demoed deploying a new chatbot. He shared how experiment managers can enable automatic logging of key metrics and artifacts during normal training to monitor the progress of AI experiments, including model training and fine-tuning. Domino Data Lab has also added enterprise security to ensure only approved personnel can see the metrics, logs and artifacts.
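
The article does not spell out Domino's tracking API, so the sketch below uses the open source MLflow client purely as a stand-in to show what logging key metrics and artifacts during a training run generally looks like; the run name, metric names and values are invented.

```python
import mlflow

# Generic experiment-tracking sketch; this is a stand-in, not Domino's API.
with mlflow.start_run(run_name="chatbot-finetune-demo"):
    for epoch in range(3):
        # Invented numbers standing in for real training metrics.
        mlflow.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)
        mlflow.log_metric("eval_accuracy", 0.70 + 0.05 * epoch, step=epoch)

    # Persist an artifact (here, the tuned hyperparameters) alongside the metrics.
    with open("hyperparams.txt", "w") as f:
        f.write("learning_rate=2e-5\nbatch_size=16\n")
    mlflow.log_artifact("hyperparams.txt")
```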

The summer release, which will be available in August, also includes advanced cost management tools. Specifically, Domino introduced detailed controls for actionable cost management. Balakrishnan also introduced Model Sentry, a responsible AI solution for in-house generative AI. One aspect of Model Sentry that will be of interest to international companies is that it supports the training of models using on-premises GPUs, so data isn’t moved across borders, he said.

Domino Cloud will now include Nexus support. Users can now use a fully-managed control plane in the cloud with single-pane access to private hybrid data planes, including NVIDIA DGX clusters. Finally, Domino has a new Domino Cloud for Life Sciences, which incorporates an audit-ready specialized AI cloud platform with a Statistical Computing Environment to address the unique needs of the pharmaceutical industry.

“It’s fair to say that now we live in a new era of AI,” Balakrishnan said.

Domino Data Lab paid for The New Stack’s travel and accommodations to attend the Rev4 conference.

The post 3 AI Moves Smart Companies Get Right appeared first on The New Stack.

Can Companies Really Self-Host at Scale? https://thenewstack.io/can-companies-really-self-host-at-scale/ Wed, 14 Jun 2023 14:47:06 +0000 https://thenewstack.io/?p=22710876

There’s no such thing as a free lunch, or in this case, free software. It’s a myth. Paul Vixie, vice president of security at Amazon Web Services and a principal author of the BIND Domain Name System (DNS) software, gave a compelling presentation at Open Source Summit Europe 2022 about this topic. His presentation included a comprehensive list of “dos and don’ts” for consumers of free software. Vixie’s docket included labor-intensive, often expensive engineering work that ran the gamut from small routine upgrades to locally maintaining orphaned dependencies.

To sum up the “dos and don’ts” in one sentence: engineers are always working, monitoring, watching and ready for action. These “ready for action” engineers must have high-level expertise so they can handle anything that comes their way. Free software isn’t inherently bad, and it definitely works. The same exercise of identifying hidden costs applies to the decision to self-host a database. Self-hosting is effective for many companies. But when is it time to let go and try the easier way?

What Is a Self-Hosted Database?

Self-hosted databases come in many forms. Locally hosted open source databases are the most obvious example. However, many commercial database products have tiered packages that include self-managed options. On-premises hosting comes with pros and cons: low security risk, the ability to work directly beside the data and complete control over the database are a few advantages. There is, of course, the problem with scaling. Self-hosting creates challenges for any business or developer team with spiky or unreliable traffic because on-demand scaling is impossible. Database engineers must always account for the highest amount of traffic with on-premises servers or otherwise risk an outage in the event of a traffic spike.

For businesses that want to self-host and scale on demand, self-hosting in the cloud is another option. This allows businesses with spiky or less predictable traffic to scale alongside their needs. When self-hosting in the cloud, the business installs and hosts its database on a virtual machine from a cloud provider, in a traditional deployment model. When you’re hosting a commercial database in the cloud this way, support for both the cloud and the database is minimal, because self-hosted always means your engineering resources helm the project. This extends to emergencies like outages and even security breaches.

The Skills Gap

There are many skilled professionals with experience managing databases at scale on-premises and in the cloud. SQL databases were the de facto database for decades. Now, with the rise of more purpose-built databases geared toward deriving maximum value from the data points they’re storing, the marketplace is shifting. Newer database types that are gaining a foothold within the community are columnar databases, search engine databases, graph databases and time series databases. Now developers familiar with these technologies can choose what they want to do with their expertise.

Time Series Data

Gradient Flow expects the global market for time series analysis software will grow at a compound annual rate of 11.5% from 2020 to 2027. Time series data is a vast category and includes any data with a timestamp. Businesses collect time series data from the physical world through items like consumer Internet of Things (IoT), industrial IoT and factory equipment. Time series data originating from online sources include observability metrics, logs, traces, security monitoring and DevOps performance monitoring. Time series data powers real-time dashboards, decision-making and statistical and machine learning models that heavily influence many artificial intelligence applications.

Bridging the Skills Gap

InfluxDB 3.0 is a purpose-built time series database that ingests, stores and analyzes all types of time series data in a single datastore, including metrics, events and traces. It’s built on top of Apache Arrow and optimized for scale and performance, which allows for real-time query responses. InfluxDB has native SQL support and open source extensibility and interoperability with data science tools.
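
As a rough illustration of what ingesting time series data looks like in practice, here is a hedged sketch using the open source influxdb-client Python package against an InfluxDB 2.x-style endpoint; the URL, token, org, bucket and measurement names are placeholders, and an InfluxDB 3.0 or Cloud Dedicated deployment may expose different client options.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; substitute your own URL, token, org and bucket.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# A time series point: measurement name, tags, a field and an implicit timestamp.
point = (
    Point("machine_temperature")
    .tag("factory", "berlin-01")
    .tag("machine_id", "press-7")
    .field("celsius", 72.4)
)
write_api.write(bucket="iot-metrics", record=point)
client.close()
```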

InfluxDB Cloud Dedicated is a fully managed, single-tenant instance of InfluxDB created for customers who require privacy and customization without the challenges of self-hosting. The dedicated infrastructure is resilient and scalable, with built-in, multi-tier data durability and 2x data replication. Managed services mean around-the-clock support, automated patches and version updates. A higher level of customization is also a characteristic of InfluxDB Cloud Dedicated: customers choose the cluster tier that best matches their data and workloads for their dedicated private cloud resources. Increased query timeouts and in-memory caching are two of the many customizable characteristics.

Conclusion

It’s up to every organization to decide whether to self-manage or choose a managed database. Decision-makers and engineers must have a deep understanding of the organization’s needs, traffic flow patterns, engineering skills and resources and characteristics of the data before reaching the best decision.

To get started, check out this demo of InfluxDB Cloud Dedicated, contact our sales team or sign up for your free cloud account today.

The post Can Companies Really Self-Host at Scale? appeared first on The New Stack.

What’s Up with OpenStack in 2023 https://thenewstack.io/whats-up-with-openstack-in-2023/ Wed, 14 Jun 2023 14:00:30 +0000 https://thenewstack.io/?p=22710866

The OpenStack community has released its 27th version of the software, circling all the way back to the beginning of the alphabet. Due to its passionate and active contributor base, OpenStack continues to be one of the top five most active open source projects. Organizations around the globe, spanning almost every industry, have embraced OpenStack, reaching 40 million cores of compute in production. Within this footprint, adoption specifically among OpenStack-powered public clouds now spans over 300 data centers worldwide.

In addition to OpenStack, the OpenInfra Foundation has replicated its model for hosting open source projects including Kata Containers, StarlingX and Zuul. This model is now readily available for any organization that wants to leverage the Four Opens and three forces to build a sustainable open source project within the infrastructure layer.

The OpenInfra Summit Vancouver, held June 13-15, is a great opportunity to get involved in the OpenStack community while collaborating more closely with other OpenInfra projects and learning from the world’s biggest users.

OpenStack Is More Reliable and Stable Than Ever

As the OpenStack software platform has matured, there has been a notable emphasis on reliability and stability. Many features and enhancements have been introduced to ensure a smoother and more robust experience. These improvements include the implementation of a new “skip level upgrade release process” cadence, which began with the Antelope release in March 2023.

One significant aspect of OpenStack’s evolution is the increased emphasis on thorough testing. More extensive testing procedures are now in place, ensuring that the platform is closely examined for potential issues and vulnerabilities.

Another recent focus for the upstream community has been removing under-maintained services and features to allow for a more focused and efficient system, eliminating unnecessary components that may hinder the reliability and stability of OpenStack.

OpenStack also places a strong emphasis on interoperability. Integration and collaboration efforts with other popular open source components such as GNU/Linux, Kubernetes, Open vSwitch, Ceph and Ansible have been prioritized. These initiatives promote compatibility and interaction between different software systems, which has enhanced overall reliability.

In contrast to the past focus on vendor-specific drivers and niche features, OpenStack now prioritizes contributions to general functionality. For example, developing a unified client/SDK offers a standardized and consistent experience across the platform. This shift promotes stability and reliability by focusing on core functionalities that benefit all users.

As OpenStack continues to mature, these various measures and initiatives demonstrate a strong commitment to reliability, stability and long-term success.

Flexible Support for a Variety of Workload Models

OpenStack is a powerful and versatile cloud computing platform that offers flexible support for various workload models. One of its standout features is the ability to work closely with hardware, enabling users to harness the full potential of their systems. For instance, the Ironic bare metal systems deployment and lifecycle management tool and the Nova bare metal driver provide seamless integration of on-demand physical server access into a full-featured OpenStack deployment.
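
For a sense of what consuming these services looks like from code, here is a minimal sketch using the official openstacksdk Python library to boot an instance through Nova; the cloud name, image, flavor and network values are placeholders tied to a hypothetical clouds.yaml entry.

```python
import openstack

# Assumes a "mycloud" entry in clouds.yaml with credentials for your deployment.
conn = openstack.connect(cloud="mycloud")

# Provision a small instance; image, flavor and network names are placeholders.
server = conn.create_server(
    name="demo-instance",
    image="ubuntu-22.04",
    flavor="m1.small",
    network="private-net",
    wait=True,
)
print(server.status, server.id)
```

The same connection object exposes bare metal (Ironic), networking (Neutron), block storage and the other services mentioned here, which is part of what makes the platform feel like one coherent API surface.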

To ensure long-term sustainability, OpenStack’s capabilities are continuously tested on architectures like ARM/AArch64 and have included support for other unconventional processor architectures like PowerPC. It also offers advanced scheduling capabilities like PCI passthrough, CPU pinning, and coordination and life cycle management of peripherals like accelerators, graphics processing units (GPUs), data processing units (DPUs) and field-programmable gate arrays (FPGAs). Moreover, OpenStack has tighter integration with the container ecosystem, with Magnum as a Kubernetes-certified distribution, Zun enabling individual application containers to be provisioned and manipulated as first-class server objects and Kuryr delivering advanced Neutron network features directly to container processes.

OpenStack also offers solutions for running its services as container workloads, with Kolla and OpenStack-Helm. It has fostered close collaboration with the Kubernetes community, with current and former leadership cross-over between the two projects. OpenStack provides services to facilitate long-lived processes and precious data, such as scheduling policies, data retention, backups and high availability/disaster recovery. Its services facilitate ephemeral and distributed applications with load distribution and multi and hybrid cloud, along with cloud-bursting features. Overall, this is an ideal platform for organizations looking to achieve maximum flexibility and efficiency in their cloud computing environments, with a broad range of tools and features that can support a wide variety of workloads and use cases.

What’s the Story with Security?

Security is a major concern in any computing platform, and the OpenStack community takes this issue very seriously. Over time, the OpenStack contributors have made significant strides in enhancing security through long-term improvement initiatives. Community goals have been set to tackle critical security aspects, such as role-based access control, privilege separation for services, image encryption and Federal Information Processing Standards (FIPS) compliance testing. These efforts demonstrate the community’s commitment to continuously enhancing security features and mitigating potential risks.

One notable achievement is the steady reduction in the volume of reported security vulnerabilities. By actively identifying and addressing security concerns, the community has created a safer environment for cloud deployments.

Additionally, OpenStack has implemented new vulnerability coordination policies that promote transparency and collaboration. These policies not only provide open access to more projects but also mandate clearer publication timelines. By ensuring that vulnerabilities are promptly disclosed and addressed, OpenStack enables users to stay informed and take appropriate actions to protect their systems.

OpenStack’s commitment to security extends beyond its own ecosystem. The platform has been a pioneer in establishing a sustainable vulnerability management process, which has served as a model for many other open source communities. This recognition highlights the effectiveness of OpenStack’s security practices and reinforces its position as a leader in the open source world.

How Can People Contribute and Add to This Project?

The OpenStack community welcomes all individuals and organizations to actively participate and enhance the community by adhering to OpenInfra’s “Four Opens” principles. If you’re interested in joining this collaborative effort, various avenues are available to guide you through the process.

To begin contributing, the project offers Contributor Guides that serve as valuable resources for both individuals and organizations. These guides not only assist with upstream code contributions but also provide insights into non-code contributions. Additionally, they outline opportunities for users and operators to contribute their expertise and insights to the project’s growth.

One way to make a meaningful impact is by volunteering as a mentor for university interns. Sharing your knowledge and experience can help shape the next generation of contributors. Moreover, you can propose efforts for sponsorship through programs like Outreachy, which provides opportunities for individuals from underrepresented backgrounds to contribute to open source projects. Additionally, you can support events such as Open Source Day at the Grace Hopper conference.

For someone who is new and seeking information and advice, the First Contact SIG (Special Interest Group) within the OpenStack community is an excellent starting point. This group’s mission is to provide a place for new contributors, making it a welcoming and inclusive space for those who are just beginning their journey in the project.

If you’re looking to make a more significant impact, consider exploring Upstream Investment Opportunities. These opportunities offer a curated set of suggested investment areas based on the current needs in the OpenStack community along with contact points for who can help you get started.

Overall, the OpenStack project offers a range of avenues for individuals and organizations to contribute and add value to the community. Whether it’s through code or non-code contributions, mentoring, sponsorships or investment opportunities, there are numerous ways to engage and actively participate in the growth and success of the project.

The post What’s Up with OpenStack in 2023 appeared first on The New Stack.

Kubernetes Operators: The Real Reason Your Boss Is Smiling https://thenewstack.io/kubernetes-operators-the-real-reason-your-boss-is-smiling/ Wed, 14 Jun 2023 13:30:45 +0000 https://thenewstack.io/?p=22710777

It’s no industry secret that the cloud native segment around Kubernetes has shifted toward hosted Kubernetes providers who build, run and partially manage the Kubernetes infrastructure for organizations. Compared to organizations building and maintaining their own Kubernetes infrastructure, hosted Kubernetes providers allow you to offload a measurable amount of technical complexity so staff can focus on operations and innovation.

Along with the rise of hosted Kubernetes providers, more enterprises are favoring larger Kubernetes distributions from the likes of OpenShift, Rancher, Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) and others rather than building their own homegrown distribution from the upstream codebase.

These trends are not limited to the Kubernetes platform itself but showcase a general movement toward letting the providers of strong core platform layers do what they do best so companies can focus on the business value that comes from building on top of Kubernetes. This was a chant heard in 2017 to “make Kubernetes boring,” and I think that we are getting there as an ecosystem.

But that was six years ago. What does “boring” look like in 2023 and how do new trends like the rise of Kubernetes operators fit into this picture? There are three ways I think of this when evaluating modern Kubernetes deployments:

I want my organization to build value on top of Kubernetes.

Similar to the mantra of 2017, the “value” we mean here is everything that is built on top of Kubernetes and the infrastructure layers, which has seen substantial progress and evolution from the community over the past six years.

I want Kubernetes to be simple.

Every organization is unique, and roles within your organization may differ depending on not only size, but also Kubernetes maturity. Because of this, skill sets vary, and not everyone has the time or ambition to become an expert. Those who aren’t experts want Kubernetes to be easy so daily tasks aren’t intrusive.

I want Kubernetes to be scalable.

Deployment models for Kubernetes are expanding, and enterprises are taking advantage of using Kubernetes across on-premises, multicloud and hybrid cloud environments. Kubernetes needs to be flexible across these environments while also enabling cluster growth with streamlined scalability as the practice matures.

Building Value on Top of Kubernetes

Once the Kubernetes infrastructure layers are solid for your organization, it’s time to build the “value” on top, whether that is an application end users interact with or a platform layer that adds advanced data services such as observability. Developers need to start somewhere, and this usually consists of finding the right Kubernetes resources for the workload, such as creating deployments, services, jobs, statefulsets, daemonsets, persistent volumes, pod security policies, role-based access control (RBAC) rules, secrets, service accounts and much more.

Managing and tracking all these resources can get quite complicated, and it’s likely that your team doesn’t need to control all of these objects, but it still has to work with the resources that affect how applications run. There are cases where this development practice is something that must happen: For instance, if the application you are building is unique to your organization, then the API resources prevent you from having to start from scratch.

However, on the flip side, we see DevOps teams, developers and application owners turning to trusted, prebuilt Kubernetes operators to run, configure and manage common applications so they can focus on the value above these layers.

Operators: Bringing Together Value, Simplicity and Scalability

If you’re not familiar with what a Kubernetes operator is, then I suggest reading the documentation.

Switchboard operator

However, whenever I hear the term “operator,” my mind immediately jumps to a switchboard operator with a massive telephone network in front of them moving wires in and out at a rapid pace while transferring calls.

You may remember them from the pilot of the hit show “Mad Men” or recall the popular saying, “Operator, please hold.”

Much like the way a switchboard operator in the 20th century assisted in the routing and transfer of phone calls, a Kubernetes operator facilitates the deployment, management and ongoing operations of a Kubernetes application. Except instead of having a person move wires behind a telephone switchboard, think of it as a robot who is listening to the inputs and commands and outputting the Kubernetes resources in the appropriate namespaces.
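
One way to make that idea concrete is a tiny handler written with the community kopf framework for building operators in Python. This is a hedged sketch, not a production operator: the example.com/Database custom resource and its spec fields are invented, and a real operator would go on to create the Deployments, Services and secrets the database needs.

```python
import kopf

@kopf.on.create("example.com", "v1", "databases")   # invented CRD, for illustration only
def create_database(spec, name, namespace, logger, **kwargs):
    """React to a new Database object by provisioning what it describes."""
    replicas = spec.get("replicas", 1)
    logger.info(f"Would provision database '{name}' with {replicas} replica(s) in '{namespace}'")

    # A real handler would build and apply Deployments, Services, PVCs and so on
    # here, typically via the official kubernetes Python client.

    return {"phase": "Provisioning"}   # kopf records this under the object's status
```

Running it (for example with `kopf run operator.py`) would also require registering the matching CustomResourceDefinition in the cluster first.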

It’s Like a Robot, but without the Attitude

Unlike the switchboard operator, the core tenet of a Kubernetes operator is automation. Automation is a necessity as the community forges ahead with Kubernetes, allowing end users to focus on what matters to them while relying on operators to automate deployments, operations and management of common components in their stack.

There is a community propensity to use trusted operators for applications and not reinvent the wheel when running a particular service on Kubernetes. Take the database landscape’s use of operators as an example.

As seen at KubeCon EU in Amsterdam, the operator pattern has a strong use case for databases because, in general, they are a common denominator for many application stacks. Applications may use Postgres or Redis in slightly different ways, but they are common services that need to be installed, configured and managed. Databases on Kubernetes, deployed via an operator in a trusted way for production, are a major win for time-to-value when it comes to DevOps development cycles.

It doesn’t stop at databases, though; operators can be used for all kinds of applications. Operators can be used for almost anything from monitoring and alerting software, to storage integrations, to fully customized applications that may be delivered to internal customers.

It’s great to see the focus move northbound as the Kubernetes ecosystem matures. As end users and organizations are gravitating to hosted Kubernetes and application automation through operators, I’m excited to see the innovations that come next focus on what can be built on top of Kubernetes.

How Do We Use Operators?

Operator frameworks are extremely popular among Dell’s customers, and we are actively working to introduce deeper operator capabilities for our Kubernetes storage capabilities, such as our container storage modules, as well as container storage interface drivers, which are available on OperatorHub.io. Operators are also a key part of our future portfolio offerings and will be integrated into our upcoming user interface for Kubernetes data storage.

The benefits of using operators are straightforward: less time spent on manual processes, more time spent on coding and innovation. If you haven’t started with operators today in your business, I highly suggest exploring the world of Kubernetes operators and seeing how to take advantage of automation to make your life a little easier.

Simple, scalable and adding value on top of Kubernetes.

The post Kubernetes Operators: The Real Reason Your Boss Is Smiling appeared first on The New Stack.

Reducing Complexity with a Multimodel Database https://thenewstack.io/reducing-complexity-with-a-multimodel-database/ Tue, 13 Jun 2023 19:42:48 +0000 https://thenewstack.io/?p=22710663

“Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation).”

With these words, E.F. Codd (known as “Ted” to his friends) began the seminal paper that begat the “relational wave” that would spend the next 50 years dominating the database landscape.

“Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed.”

When Codd wrote this paper back in 1969, data access was in its infancy: Programmers wrote code that accessed flat files or tables and followed “pointers” from a row in one file to a row in a separate file. By introducing a “model” of data that encapsulated the underlying implementation (of how data was stored and retrieved) and putting a domain-specific language (in this case, SQL) in front of that model, programmers found their interaction with the database elevated away from the physical details of the data, and instead were free to think more along the logical levels of their problem, code and application.

Whether Codd knew this or not, he was tapping into a concept known today as a “complexity budget”: the idea that developers — any organization, really — can only keep track of so much complexity within their projects or organization. When a project reaches the limits of that budget, the system starts to grow too difficult to manage and all sorts of inefficiencies and problems arise — difficulties in upgrading, tracking down bugs, adding new features, refactoring code, the works. Codd’s point, really, was simple: If too much complexity is spent navigating the data “by hand,” there is less available to manage code that captures the complexities of the domain.

Fifty years later, we find ourselves still in the same scenario — needing to think more along logical and conceptual lines rather than the physical details of data. Our projects wrestle with vastly more complex domains than ever before. And while Codd’s model of relational data has served us well for over a half-century, it’s important to understand that the problem, in many respects, is still there — different in detail than the one that Codd sought to solve, but fundamentally the same issue.

Models in Nature

In Codd’s day, data was limited in scope and nature, most of it business transactions of one form or another. Parts had suppliers; manufacturers had locations; customers had orders. Creating a system of relationships between all of these was a large part of the work required by developers.

Fifty years later, however, data has changed. Not only has the amount of data stored by a business exploded by orders of magnitude (many orders of magnitude), but the shape of the data generated is wildly more irregular than it was in Codd’s day. Or, perhaps fairer to say, we capture more data than we did 50 years ago, and that data comes in all different shapes and sizes: images, audio and video, to start, but also geolocation information, genealogy data, biometrics, and that’s just a start. And developers are expected to be able to weave all of it together into a coherent fabric and present it to end users in a meaningful way. And — oh, by the way — the big launch is next month.

For its time, Codd’s relational model provided developers with exactly that — a way to weave data together into a coherent fabric. But with the growth of and changes to the data with which we have to contend, new tactics, ones which didn’t throw away the relational model but added upon it, were necessary.

We wrought what we could using the concept of “polyglot persistence,” the idea of bringing disparate parts together into a distributed system. But as any experienced architect knows all too well, the more distinct nodes there are in a distributed system, the greater the complexity. And the more complexity we must spend on manually stitching together data from different nodes in the database system, the less we have to spend on the complexity of the domain.

Nature of Storage

But complexity doesn’t live just in the shape of the data we weave; it also lives in the very places we store it.

What Codd hadn’t considered, largely because it was 50 years too early, is that databases also carry with them a physical concern that has to do with the actual physical realm — the servers, the disks on which the data is stored, the network and more. For decades, an organization “owning” a database has meant a non-trivial investment into all the details around what that ownership means, including the employment of a number of people whose sole purpose is the care and feeding of those machines. These “database administrators” were responsible for machine procurement and maintenance, software upgrades and patches, backups and restorations and more — all before ever touching the relational schema itself.

Like the “physical” details of data access 50 years ago, devoting time to the physical details of the database’s existence is also a costly endeavor. Between the money and time spent on the actual maintenance and the opportunity cost of the database being offline and unavailable, keeping a non-trivial database up and running is a cost that can grow quite sizable, and it requires significant ongoing training and learning for those involved.

Solutions

By this point, it should be apparent that developers need to aggressively look for ways to reduce accidental and wasteful spending of complexity. We seek this in so many ways; the programming languages we use look for ways to promote encapsulation of algorithms and data, for example, and libraries and services tuck away functionality behind APIs.

Providing a well-encapsulated data strategy in the modern era often means two things: the use of a multimodel database to bring together the differing shapes of data into a single model, and the use of a cloud database provider to significantly reduce the time spent managing the database’s operational needs. Which one you choose is obviously the subject of a different conversation — just make sure it’s one that supports all the models your data needs, in an environment that requires the smallest management necessary.

Multimodel brings all the benefits of polyglot persistence without its disadvantages. Essentially, it does this by combining a document store (JSON documents), a key/value store and other data storage models into one database engine with a common query language and a single API. Learn more about Couchbase’s multimodel database here, and try Couchbase for yourself today with our free trial.
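
To make the idea concrete, here is a minimal sketch using the Couchbase Python SDK. The cluster address, bucket name (“demo”) and credentials are placeholders, and the sketch assumes a primary index exists for the ad hoc query; the point is that key/value access and SQL-like queries over JSON documents go through one engine and one API rather than several separate databases.

```python
# pip install couchbase   (sketch; "demo", "user" and "password" are placeholders)
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
collection = cluster.bucket("demo").default_collection()

# Key/value access and JSON document storage share a single engine...
collection.upsert("order::1001", {"type": "order", "total": 42.50, "items": ["widget"]})
print(collection.get("order::1001").content_as[dict])

# ...and the same data is reachable through a SQL-like query API (SQL++/N1QL),
# assuming a primary index has been created on the bucket.
for row in cluster.query("SELECT d.total FROM `demo` AS d WHERE d.type = 'order'"):
    print(row)
```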

The post Reducing Complexity with a Multimodel Database appeared first on The New Stack.

]]>
Google’s DeepMind Extends AI with Faster Sort Algorithms https://thenewstack.io/googles-deepmind-extends-ai-with-faster-sort-algorithms/ Tue, 13 Jun 2023 19:09:51 +0000 https://thenewstack.io/?p=22710747

Computing pioneer Grace Hopper once quipped that the most dangerous phrase in data processing is ‘We’ve always done it this

The post Google’s DeepMind Extends AI with Faster Sort Algorithms appeared first on The New Stack.

]]>

Computing pioneer Grace Hopper once quipped that the most dangerous phrase in data processing is “We’ve always done it this way.” In that spirit, Google’s DeepMind searched for a faster sorting algorithm using an AI system — and the company’s researchers are now claiming the new algorithms they’ve found “will transform the foundations of computing.”

Google emphasized that sorting algorithms affect billions of people every day — from how online search results get ranked to how data gets processed. But “Making further improvements on the efficiency of these routines has proved challenging,” notes a recent paper from DeepMind, “for both human scientists and computational approaches.” DeepMind focused on the algorithms for sorting short sequences — with between three and five elements — because they’re the most commonly used (often called when sorting even larger sequences).

And for short sequences of numbers, their results were up to 70% faster.

But even for longer sequences with over 250,000 elements, the results were still 1.7% faster. And this isn’t just an abstract exercise. Google has already made the code open source, uploading it into LLVM’s main library for standard C++ functions — the first change to its sorting algorithm in over a decade. Google proudly points out that “millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management.”

In announcing their results, DeepMind offered more examples where they’d applied AI to real-world problems, trying to demonstrate that beyond all the hype, some truly impactful improvements are waiting to be discovered. It’s interesting to see how they approached the problem — but the exercise also raises the possibility that some long-hidden secrets may finally be unlocked with our new and powerful AI systems.

How They Did It

To hunt for improvements, DeepMind drilled down to one of the lowest levels of programming: assembly language (a human-readable representation of machine code).

Their blog post calls this “looking where most humans don’t” (or “starting from scratch”). “We believe many improvements exist at this lower level that may be difficult to discover in a higher-level coding language,” argues DeepMind’s blog. “Computer storage and operations are more flexible at this level, which means there are significantly more potential improvements that could have a larger impact on speed and energy usage.”

For their search, the researchers created a program based on DeepMind’s AlphaZero, which beat the world’s best players in chess and Go. That program trained solely by playing games against itself, getting better and better through a kind of massively automated trial and error that eventually converges on an optimal approach. DeepMind’s researchers modified it into a new coding-oriented program called AlphaDev, calling this an important next step. “With AlphaDev, we show how this model can transfer from games to scientific challenges, and from simulations to real-world applications,” they write on the DeepMind blog.

The breakthrough came when AlphaDev transformed coding into a new kind of game, in which it continually adds single instructions to its algorithm and assesses the results. (“Winning a game” is replaced here by rewards for correct and speedy results.) The researchers called it “AssemblyGame,” and the blog points out that the number of possible combinations of instructions “is similar to the number of particles in the universe.” But the paper also clearly quantifies the game’s stakes.

“Winning the game corresponds to generating a correct, low-latency algorithm using assembly instructions.”

DeepMind’s blog post reports the newly discovered sorting algorithms “contain new sequences of instructions that save a single instruction each time they’re applied.” (It then envisions this performance savings multiplied by the trillions of times a day that this code is run.) “AlphaDev skips over a step to connect items in a way that looks like a mistake but is actually a shortcut.” (DeepMind’s blog argues this is similar to an AlphaZero Go move that looked like a mistake but ultimately led it to victory — and believes the discovery “shows AlphaDev’s ability to uncover original solutions and challenges the way we think about how to improve computer science algorithms.”)

Their paper says it shows “how artificial intelligence can go beyond the current state of the art,” because ultimately AlphaDev’s sorts use fewer lines of code for sorting sequences with between three elements and eight elements — for every number of elements except four. And these shorter algorithms “do indeed lead to lower latency,” the paper points out, “as the algorithm length and latency are correlated.”

The current (human-generated) sorting for up to four numbers first checks the length of the sequence, then calls an algorithm optimized for that length. (Unless the length is one, meaning no sorting is required.) But AlphaDev realized that with four-element sequences, it’s faster to just sort the first three elements — and then use a simpler algorithm to find that fourth element’s position among the three already-sorted. And this approach eliminates much of the overhead of “branching” into an entirely different set of code for every other possible sequence length. Instead AlphaDev can handle most sequence lengths as part of its first check (for how the length relates to the number two).

  • Is length < 2? (If there’s one element, just return its value.)
  • Is length = 2? (If there are two elements, sort them and return them.)
  • Is length > 2? (Sort the first three elements. If there are only three elements, return them.)
  • If there are four elements, find the position of the fourth element among the already-sorted three (a rough sketch of this approach follows below).
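
Here is a short Python illustration of that branch-reduced idea. This is only an explanatory sketch of the “sort three, then insert the fourth” approach described above, not AlphaDev’s generated assembly.

```python
def sort_up_to_four(seq):
    """Sketch of the branch-reduced approach: sort 3, then insert the 4th."""
    n = len(seq)
    if n < 2:
        return seq[:]                      # zero or one element: nothing to sort
    if n == 2:
        a, b = seq
        return [a, b] if a <= b else [b, a]
    # n > 2: sort the first three elements
    a, b, c = sorted(seq[:3])
    if n == 3:
        return [a, b, c]
    # n == 4: insert the fourth element into the already-sorted three
    d = seq[3]
    if d < a:
        return [d, a, b, c]
    if d < b:
        return [a, d, b, c]
    if d < c:
        return [a, b, d, c]
    return [a, b, c, d]

print(sort_up_to_four([3, 1, 4, 2]))  # [1, 2, 3, 4]
```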

Beyond Coding

Their paper applauds the results as “both new and more efficient than the state-of-the-art human benchmarks.” But that was just the beginning. DeepMind moved on, discovering a new hashing algorithm that was 30% faster in the 9-16 bytes range (adding it to Google’s Abseil library of C++ functions in January).

Google also sicced AlphaZero on its data centers to optimize workload distribution, according to another post, ultimately resulting in a 19% drop in underused hardware. And it also improved the compression of videos on YouTube, reducing the bitrate by 4%.

DeepMind now argues that AlphaDev’s success at coding represents a step toward general-purpose AI tools that solve problems to the benefit of society — including helping to optimize more of our code. And while better hardware has “kept pace” for the last half century, “as microchips approach their physical limits, it’s critical to improve the code that runs on them to make computing more powerful and sustainable.”

The paper points out this isn’t the first use of reinforcement learning for optimizing code — there have even been earlier attempts to optimize sorting algorithms.

So maybe the ultimate affirming message here is the reminder that no single corporation is driving the progress. Instead, the results announced this month are just part of a larger, broad-based human effort to deliver real and tangible benefits using our newest tools.

And as society weighs potential dystopian futures and the possible dangers of AI systems, maybe that is balanced by the prospect that AI systems could also deliver a very different outcome.

The post Google’s DeepMind Extends AI with Faster Sort Algorithms appeared first on The New Stack.

]]>
3 Ways to Drive Open Source Software Maturity https://thenewstack.io/3-ways-to-drive-open-source-software-maturity/ Tue, 13 Jun 2023 19:00:53 +0000 https://thenewstack.io/?p=22710640

Open source software (OSS) is taking over the world. It’s a faster, more collaborative and flexible way of driving software

The post 3 Ways to Drive Open Source Software Maturity appeared first on The New Stack.

]]>

Open source software (OSS) is taking over the world. It’s a faster, more collaborative and flexible way of driving software innovation than proprietary code. This flexibility appeals to developers and can help organizational leadership drive down costs while supporting digital transformation goals. The figures speak for themselves: 80% of organizations increased their OSS use in 2022, especially those operating in critical infrastructure sectors such as oil and gas, telecommunications and energy.

However, open source is not a panacea. There can be challenges around governance, security and the balance between contributing to OSS development and preserving a commercial advantage. These each need careful consideration if developers want to maximize the impact of their work on open source projects.

Open Source Software Saves Time and Drives Innovation

There’s no one-size-fits-all approach with OSS. Projects could range from relatively small software components, such as general-purpose Java class libraries, to major systems, such as Kubernetes for container management or Apache’s HTTP server for modern operating systems. Those projects receiving regular contributions from reputable sources are likely to be most widely adopted and frequently updated. But there is already a range of proven benefits across them all.

Open source can save time and resources, as developers don’t have to expend their own energy writing code. The top four OSS ecosystems are estimated to have recorded over 3 trillion requests for components last year. That’s a great deal of effort potentially saved. It also means those same developer teams can focus more fully on proprietary functionality that advances the baseline functionality available through OSS to boost revenue streams. It’s estimated that just $1.1 billion invested in OSS in the EU back in 2018 generated $71 billion to $104 billion for the regional economy.

OSS also encourages experts from across the globe — whether individual hobbyists or DevOps teams from multinational companies — to contribute their coding skills and industry knowledge. The idea is projects will benefit from a large and diverse pool of developers, driving up the quality of the final product. In contributing to these projects, businesses and individuals can stake a claim to the future direction of a particular product or field of technology, helping to shape it in a way that advances their own solutions. Companies also benefit from being at the leading edge of any new discoveries and leaps in innovation as they emerge, so they can steal a march on the competition by being first to market.

This, in turn, can help to drive a culture of innovation at organizations that contribute regularly to OSS. Alongside a company’s track record on patents, their commitment to OSS projects can be a useful indicator to prospective new hires of their level of ambition, helping attract the brightest and best talent going forward.

Three Ways to Drive OSS Maturity

To maximize the benefit of their contributions to the OSS community, DevOps leaders should ensure their organization has a clear, mature approach. There are three key points to consider in these efforts:

1. Define the Scope of the Organization’s Contribution

OSS is built on the expertise of a potentially wide range of individuals and organizations, many of whom are otherwise competitors. This “wisdom of the crowd” can ultimately help to create better-quality products more quickly. However, it can also raise difficult questions about how to keep proprietary secrets under wraps when there is pressure from the community to share certain code bases or functionality that could benefit others. By defining at the outset what they want to keep private, contributors can draw a clear line between commercial advantage and community spirit to avoid such headaches later.

2. Contribute to Open Standards

Open standards are the foundation on which OSS contributors can collaborate. By getting involved in these initiatives, organizations have a fantastic opportunity to shape the future direction of OSS, helping to solve common problems in a manner that will enhance the value of their commercial products. OpenTelemetry is one such success story. This collection of tools, application programming interfaces and software development kits simplifies the capture and export of telemetry data from applications to make tracing more seamless across boundaries and systems. As a result, OpenTelemetry has become a de facto industry standard for the way organizations capture and process observability data, bringing them closer to achieving a unified view of hybrid technology stacks in a single platform.
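
As a small illustration of how lightweight that kind of instrumentation can be, here is a minimal sketch using the OpenTelemetry Python API and SDK. The console exporter and the “checkout-service” tracer name are placeholders for demonstration; a real deployment would export spans to an OTLP-compatible backend.

```python
# pip install opentelemetry-api opentelemetry-sdk   (sketch with a console exporter)
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def charge_card(amount_cents: int) -> None:
    # Each unit of work becomes a span; attributes carry the telemetry detail.
    with tracer.start_as_current_span("charge_card") as span:
        span.set_attribute("payment.amount_cents", amount_cents)

charge_card(1299)
```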

3. Build Robust Security Practices

Despite the benefits of OSS, there’s always a risk of vulnerabilities slipping into production if they’re not detected and remediated quickly and effectively in development environments. Three-quarters (75%) of chief information security officers (CISOs) worry the prevalence of team silos and point solutions throughout the software development lifecycle makes it easier for vulnerabilities to fly below the radar. Their concerns are valid. The average application development project contains 49 vulnerabilities, according to one estimate. These risks will only grow as ChatGPT-like tools are increasingly used to support software development by compiling code snippets from open source libraries.

Given the dynamic, fast-changing nature of cloud native environments and the sheer scale of open source use, automation is the only way DevOps teams can take control of the situation. To support this, they should converge security data with real-time, end-to-end observability to create a unified source of insights. By combining this with trustworthy AI that can understand the full context behind that observability and security data, teams can unlock precise, real-time answers about vulnerabilities in their environment. Armed with those answers, they can implement security gates throughout the delivery pipeline so bugs are automatically resolved as soon as they are detected.
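
What such a gate might look like in practice varies by toolchain. The sketch below assumes a generic scanner that emits a JSON list of findings — a made-up report format, not any particular vendor’s — and fails the pipeline step when policy thresholds are exceeded.

```python
# Hypothetical CI security gate: exit non-zero when scan findings exceed policy limits.
import json
import sys

MAX_ALLOWED = {"critical": 0, "high": 3}  # placeholder policy thresholds

def gate(report_path: str) -> int:
    # Assumed report shape: [{"id": "...", "severity": "critical"}, ...]
    findings = json.load(open(report_path))
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1
    for sev, limit in MAX_ALLOWED.items():
        if counts.get(sev, 0) > limit:
            print(f"Gate failed: {counts[sev]} {sev} findings exceed limit of {limit}")
            return 1
    print("Security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```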

OSS is increasingly important to long-term success, even for commercially motivated organizations. How effectively they’re able to harness and contribute to its development will define the winners and losers of the next decade. If they put careful consideration into these three key points, DevOps leaders will bring their organizations much closer to being recognized as a leading innovator in their industries.

The post 3 Ways to Drive Open Source Software Maturity appeared first on The New Stack.

]]>
The First Kubernetes Bill of Materials Standard Arrives https://thenewstack.io/the-first-kubernetes-bill-of-materials-standard-arrives/ Tue, 13 Jun 2023 17:48:51 +0000 https://thenewstack.io/?p=22710825

If you’re not using a Software Bill of Materials (SBOM) yet, you will be soon. They’re seen as essential groundwork

The post The First Kubernetes Bill of Materials Standard Arrives appeared first on The New Stack.

]]>

If you’re not using a Software Bill of Materials (SBOM) yet, you will be soon. They’re seen as essential groundwork for building a code security defense. While there are many SBOM standards, such as Software Package Data Exchange (SPDX), CycloneDX and GitHub’s dependency submission format, there hasn’t been one just for the popular container orchestration program Kubernetes until now: Kubernetes Security Operations Center’s (KSOC) Kubernetes Bill of Materials (KBOM) standard.

At this early stage, KBOM is a rough first draft. It provides an initial specification in JavaScript Object Notation (JSON). It’s been shown to work with Kubernetes 1.19 and newer, with hyperscale cloud service providers, and with do-it-yourself Kubernetes deployments.

With the KBOM’s shell interface, cloud security teams can gain a comprehensive understanding of third-party tooling within their environment. This development is aimed at enabling quicker responses to the surge of new Kubernetes tooling vulnerabilities.

Is It Necessary?

Is there really a need for this, though, given that there are already many SBOM standards? Kubernetes is used by over 96% of organizations to orchestrate container deployments, yet Kubernetes security adoption remains low, at just 34% in 2022, so there is clearly a deployment security gap. A major barrier to securing Kubernetes is getting an accurate grasp of the environment’s scope.

As KSOC CTO Jimmy Mesta explained: “Kubernetes is orchestrating the applications of many of the biggest business brands we know and love. Adoption is no longer an excuse, and yet from a security perspective, we continually leave Kubernetes itself out of the conversation when it comes to standards and compliance guidelines, focusing only on the activity before application deployment.” Therefore, “We are releasing this KBOM standard as a first step to getting Kubernetes into the conversation when it comes to compliance guidelines.”

To meet these needs, KBOM offers a concise overview of a Kubernetes cluster’s elements. These include:

  • Workload count.
  • Cost and type of hosting service.
  • Vulnerabilities for both internal and hosted images.
  • Third-party customization, for example, the deployed custom resources, authentication, and service mesh.
  • Version details for the managed platform, the Kubelet, and more.

Sounds interesting? It should. To contribute, you can download the CLI tool today or learn more about the standard. You can also work on this Apache 2.0-licensed open source project via its GitHub page.

The post The First Kubernetes Bill of Materials Standard Arrives appeared first on The New Stack.

]]>
A CTO’s Guide to Navigating the Cloud Native Ecosystem https://thenewstack.io/a-ctos-guide-to-navigating-the-cloud-native-ecosystem/ Tue, 13 Jun 2023 16:39:29 +0000 https://thenewstack.io/?p=22710615

While container and cloud technology are increasingly mature, there are still a lot of different software, staffing and architecture considerations

The post A CTO’s Guide to Navigating the Cloud Native Ecosystem appeared first on The New Stack.

]]>

While container and cloud technology are increasingly mature, there are still a lot of different software, staffing and architecture considerations that CTOs must address to ensure that everything runs smoothly and operates together.

The Gartner “A CTO’s Guide to Navigating the Cloud Native Container Ecosystem” report estimates that by 2028, more than 95% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 50% in 2023.

This level of adoption means that organizations must have the right software to effectively manage, monitor and run container-based, cloud native environments. And there is a multitude of options for CTOs and enterprise architecture (EA) leaders to sift through, which makes it hard to get environments level-set and to standardize processes.

“Despite the apparent progress and continued industry consolidation, the ecosystem remains fragmented and fast-paced. This makes it difficult for EAs and CTOs to build robust cloud native architectures and institute operational governance,” the authors state.

As container adoption expands for cloud native environments, more IT leaders will see an increase in both vendor and open source options. Such variety makes it harder to select the right tools to run a cloud native ecosystem and stretches out the evaluation process.

Here’s a look at container ecosystem components, software offerings and how CTOs can evaluate the best configuration for their organization.

What Are the Components of Container-Based Cloud Native Ecosystems?

Gartner explains that “containers are not a monolithic technology, the ecosystem is a hodgepodge of several components vital for production readiness.”

The foundation of a containerized ecosystem includes:

  • Container runtime lets developers deploy applications, configurations and other container image dependencies.
  • Container orchestrator supports features for policy-based deployment, application configuration management, high availability cluster establishment and container integration into overall infrastructure.
  • Container management software provides a management console, automation features, plus operational, security and developer tools. Vendors in this sector include Amazon Web Services (AWS), Microsoft, Google, Red Hat, SUSE and VMware.
  • Open source tools and code: The Cloud Native Computing Foundation is the governance body that hosts several open source projects in this space.

These components all help any container-based applications run on cloud native architecture to support business functions and IT operations, such as DevOps, FinOps, observability, security and APIs. There are lots of open source projects that support all of these architectural components and platform engineering tools for Kubernetes.

At the start of cloud native ecosystem adoption, Gartner recommends:

Map your functional requirements to the container management platforms and identify any gaps that can be potentially filled by open source projects and commercial products outlined in this research for effective deployments.

Choose open source projects carefully, based on software release history, the permissiveness of software licensing terms and the vibrancy of the community, characterized by a broad ecosystem of vendors that provide commercial maintenance and support.

What Are the Container Management Platform Components?

Container management is an essential part of cloud native ecosystems; it should be top of mind during software selection and container environment implementation. But legacy application performance monitoring isn’t suited for newer cloud technology.

Cloud native container management platforms include the following tools:

  • Observability enables a skilled observer — a software developer or site reliability engineer — to effectively explain unexpected system behavior. Gartner mentions Chronosphere for this cloud native container management platform.
  • Networking manages communication inside the pod, between containers across the cluster and with the outside world.
  • Storage delivers granular data services, high availability and performance for stateful applications with deep integration with the container management systems.
  • Ingress control gatekeeps network communications of a container orchestration cluster. All inbound traffic to services inside the cluster must pass through the ingress gateway.
  • Security and compliance provides assessment of risk/trust of container content, secrets management and Kubernetes configurations. It also extends into production with runtime container threat protection and access control.
  • Policy-based management lets IT organizations programmatically express IT requirements, which is critical for container-based environments. Organizations can use the automation toolchain to enforce these policies.

More specific container monitoring platform components and methodologies include Infrastructure as Code, CI/CD, API gateways, service meshes and registries.

How to Effectively Evaluate Software for Cloud Native Ecosystems

There are two types of container platforms that bring all required components together: integrated cloud infrastructure and platform services (CIPS) and software for the cloud.

Hyperscale cloud providers offer integrated CIPS capabilities that allow users to develop and operate cloud native applications with a unified environment. Almost all of these providers can deliver an effective experience within their platforms, including some use cases of hybrid cloud and edge. Key cloud providers include Alibaba Cloud, AWS, Google Cloud, Microsoft Azure, Oracle Cloud, IBM Cloud and Tencent.

Vendors in this category offer on-premises and edge solutions, and may offer either marketplace or managed services offerings in multiple public cloud environments. Key software vendors include Red Hat, VMware, SUSE (Rancher), Mirantis, HashiCorp (Nomad), etc.

The authors note that critical factors in platform provider selection include:

  • Automated, secure, and distributed operations
    • Hybrid and multicloud
    • Edge optimization
    • Support for bare metal
    • Serverless containers
    • Security and compliance
  • Application modernization
    • Developer inner and outer loop tools
    • Service mesh support
  • Open-source commitment
  • Pricing

IT leaders can figure out which provider has the most ideal offering if they match software to their infrastructure (current and future), security protocols, budget requirements, application modernization toolkit and open source integrations.

Gartner recommends that organizations:

Strive to standardize on a consistent platform, to the extent possible across use cases, to enhance architectural consistency, democratize operational know-how, simplify developer workflow and provide sourcing advantages.

Create a weighted decision matrix by considering the factors outlined above to ensure an objective decision is made.

Prioritize developers’ needs and their inherent expectations of operational simplicity, because any decision that fails to prioritize the needs of developers is bound to fail.

Read the full report to learn about ways to effectively navigate cloud native ecosystems.

The post A CTO’s Guide to Navigating the Cloud Native Ecosystem appeared first on The New Stack.

]]>
At PlatformCon: For Realtor.com, Success Is Driven by Stories https://thenewstack.io/at-platformcon-for-realtor-com-success-is-driven-by-stories/ Tue, 13 Jun 2023 16:31:53 +0000 https://thenewstack.io/?p=22710695

You’re only as good as the stories you tell. Storytelling, after all, is a tenet of humanity, and the best

The post At PlatformCon: For Realtor.com, Success Is Driven by Stories appeared first on The New Stack.

]]>

You’re only as good as the stories you tell. Storytelling, after all, is a tenet of humanity, and the best way to pass information, at least when it’s anchored in context. It’s also a pillar of successful sales. No matter what you’re selling or who you’re selling it to.

For platform engineering, your eager or not-so-eager audience is made up of your colleagues, the internal developers as well as other company-wide stakeholders and influencers. You have to understand the context and needs of your different target personas, and how they could respond to the changes you’re making. Much of intentional developer experience and platform adoption hinges on your ability to convey what works and what hasn’t, often socratically repeating back to be sure you comprehend your stakeholders’ stakes — and making sure they feel heard.

For Realtor.com, a platform engineering mindset is anchored in the power of success stories. Suzy Julius, SVP of product and engineering, joined the virtual PlatformCon stage to share how the top U.S. real estate site, with 86 million visits per month, went from a culture where you couldn’t say platform to a culture that embraces it.

The First Step Is Always Recognition

Realtor.com is a company that has, over the last couple of years, scaled mainly via acquisition, which often results in either spaghetti architecture or a complete lack of visibility into other business units. It pretty much always signals an increase in complexity.

“Our tech stack became extremely complex, slowing down our ability to build features in a fast and reliable way,” Julius said. “The existing tech stack made it difficult to ensure a quality product or ensure reliable feature releases.”

Facing its divergent and often duplicated tech ecosystem, in 2020, the company embarked on a transformation, with the aim to “simplify to scale” in order to accelerate innovation.

A platform emerged as the solution.

When Julius joined the company at the start of 2021, her team recognized the common barriers to entry to platform adoption, mainly, “knowing that there was a reluctance to building a platform, with fear that one would slow down the engineering team by creating more complexity.” Not an uncommon hurdle for platform engineers to face at all.

So the platform team kicked off this journey by gathering feedback from a diverse set of stakeholders, not just from engineering but also from business and security, and offered a compelling success story, she explained. Now, 150 people are considered part of the platform organization — a mix of product leaders and engineers, who she said are all “focused on developer experience, data, content and personalization.”

Next, It’s Time to Adopt a Product Mindset

Come 2022, the platform team was embracing a platform mindset, concentrating on developer enablement and providing a service to their colleagues. Specifically, Julius outlined the aims as:

  • To provide service to others to help everyone go faster and more reliably.
  • To understand as a platform team the vision and principles, and then to get corporate buy-in.
  • To be able to show short-term and long-term wins.
  • To measure, iterate and evangelize the vision to be a platform empowering all products and unlocking business opportunities.

These goals, she said, mostly focused on developer experience, but they also created a data platform for a “clear line of sight to understand business metrics or give analytics the ability to create a canonical source of truth dataset for our consumer and customers.”

The tech stack that drove this sociotechnical change included:

  • For developer experience — CircleCI, Apollo supergraph, GraphQL, Amazon EKS, ArgoCD, Tyk API gateway, Vault developer portal
  • For data, content and personalization — Fivetran automated data movement platform, Snowflake for data warehousing, Apache Kafka, dbt for data transformation, Apache Airflow, NodeJS, Amazon SageMaker for machine learning, Optimizely, Metaflow data science framework, ElasticSearch

All the platform tech, people and processes are aligned around the vision to become the preferred platform on which their internal customers choose to build. That is grounded, Julius explained, in connecting wins with features that drive business metrics, namely, revenue and/or user engagement.

She highlighted sociotechnical lessons they learned over the past year:

  • A platform mindset is not just a technical but a cultural shift.
  • Adoption hinges on training, documentation and awareness.
  • You need a tighter feedback loop to establish stakeholder sentiment.
  • Be aware not to over-index on microservices. For example, they had rate-limiting in different locations, which Julius said made it hard to build client features.
  • Align around a few programming languages, as too many make it much harder to build cross-company platform features like logging and monitoring.
  • And, in a time of tighter budgets, make sure you commit to continuously invest in your platform service, no matter what.

Keep up the Momentum

Now, this year at Realtor.com is all about embracing the Platform as a Product mindset and building a differentiated, self-service product suite. Treating your platform as a product is about treating your developers like your customers, always focusing on improving developer experience or DevEx. For Realtor.com, this includes continuous feedback and stakeholder scorecards.

This year is about “understanding that we need to continue to solve problems and to make it easy and intuitive to use our platform,” Julius said. “And we need to realize gains beyond tech, [like] more involvement and input into what the platforms do and how they can help the entire company.”

Many of the platform engineering thought leaders The New Stack has interviewed have talked about the importance of using the platform as a single pane of glass to create a common language between business and engineering. This helps business understand the value of the big cost center that is engineering, while engineering can better connect their work to driving real business value to end customers. Julius’s team stands out in looking to leverage the platform to measure that effect. She said they are currently working “to incorporate how platforms impact our end-user strategy and experience,” connecting the DevEx to the DevOps.

They are also working out how to evangelize the platform internally. Like with all things, communication is key, including around onboarding and design-first thinking. They are customizing their messaging for different stakeholders. Julius noted they all have to get comfortable repeating themselves to not get lost in the email and Slack cacophony. The platform team is also considering adopting a tool like Backstage to help facilitate that internal product marketing and to, as she said, “bring it all together.”

All this feeds into a continued highlighting of performance, security and reliability gains.

Julius next turned to their playbook: identity (start with the end state and vision), principles and self-awareness, a first-team mindset, reputation and brand, execution and barriers, and the importance of failure.

How Mature Is Your Platform?

Platform teams are cost centers, but, until recently, developer productivity wasn’t something that could be easily measured. This means that platform teams have had difficulty assessing their performance and impact. Last month, a new DevEx framework came out that examines developers’ flow state, feedback loops, and cognitive load.

The month before, the Syntasso team open-sourced their Platform Maturity Model which guides teams to answering the following questions:

  • How does the company value (and therefore fund) platform efforts?
  • What compels users to start, and be successful, using your platform?
  • How do users interact with and consume offerings on your platform?
  • How are requests and requirements identified and prioritized on your platform?
  • How does product engineering manage non-differentiating (and often internally common) tooling and infrastructure?
  • How does each business requirement (e.g. compliance or performance) get enabled by platform offerings?

Each of these questions has answers from Levels 1 through 4 to mark maturity of a platform team.

The Realtor.com platform team has created what it refers to as a playbook — an artifact that helps continuously build onto the organization’s Platform-as-a-Product culture. This includes their own maturity model. “It’s recognizing and reminding us that we don’t want to stop at a platform that just works, but we want to be seen for the good and invested in,” Julius said.

Pulling a metaphor from the company’s core industry, she compared a platform to a house. There are parts that you don’t really notice until something goes wrong, like a window that won’t open or a cracked foundation. She explained that where they strive to mature is toward a platform where “you notice the doors, you notice the windows, and they’re seen for the good.”

Next, the playbook features two decision-making frameworks to decide when to slow down or to speed up. She called them a flywheel to show off how they make decisions collaboratively and cross-functionally, “in a way that we can keep coming back and pointing at that decision as we progress.” They are:

  • Strategic technical initiative group (STIG) — to ensure technical decisions are made collaboratively and consider the future tech stack and feature development.
  • Cross-functional workshops — to collaborate and focus on both the Platform-as-a-Product and tech strategy.

Finally, the playbook centers on identity — which Julius said she could’ve given a whole talk around, it’s that essential to the Realtor.com product team. Identity leans into the importance of vision and purpose. A platform team always needs empathy, she argues, putting itself in its stakeholders’ shoes to better understand the technology and onboarding. It’s treating internal customers with the same level of care as external users.

Identity is all about understanding what a success story looks like and working backward to identify key aspects of that story, Julius explained, aligning that story with key decisions and remaining focused on the vision. It’s always about maintaining the organization’s reputation and grounding every decision in context.

“This is all about having the end state in mind, combining the fundamentals with your vision. It’s that compelling story of success.”

The post At PlatformCon: For Realtor.com, Success Is Driven by Stories appeared first on The New Stack.

]]>
A New Tool for the Open Source LLM Developer Stack: Aviary https://thenewstack.io/a-new-tool-for-the-open-source-llm-developer-stack-aviary/ Tue, 13 Jun 2023 14:53:00 +0000 https://thenewstack.io/?p=22710854

The company behind Ray, an open source AI framework that helps power ChatGPT, has just released a new tool to

The post A New Tool for the Open Source LLM Developer Stack: Aviary appeared first on The New Stack.

]]>

The company behind Ray, an open source AI framework that helps power ChatGPT, has just released a new tool to help developers work with large language models (LLMs). Called Aviary, Anyscale describes it as the “first fully free, cloud-based infrastructure designed to help developers choose and deploy the right technologies and approach for their LLM-based applications.” Like Ray, Aviary is being released as an open source project.

I spoke to Anyscale’s Head of Engineering, Waleed Kadous, to discuss the new tool and its impact on LLM applications.

The goal of Aviary is to enable developers to identify the best open source platform to fine-tune and scale an LLM application. Developers can submit test prompts to a pre-selected set of LLMs, including Llama, CarperAI, Dolly 2.0, Vicuna, StabilityAI, and Amazon’s LightGPT.

The Emergence of an Open Source LLM Stack

I told Kadous that there’s an emerging developer ecosystem building up around AI and LLMs; I mentioned LangChain and also Microsoft’s new Copilot stack as examples. I asked how Aviary fits into this new ecosystem.

He replied that we are witnessing the development of an open source LLM stack. He drew a parallel to the LAMP stack of the 1990s and early 2000s (which I also did, in my LangChain post). In the open source LLM stack, he continued, Ray serves as the bottom layer for orchestration and management. Above that, there is an interface for model storage and retrieval — something like Hugging Face. Then there are tools like LangChain “that kind of glues it all together and does all the prompt adjustments.”

Aviary is essentially the back end to run something like LangChain, he explained.

“LangChain is really good for a single query, but it doesn’t really have an off-the-shelf deployment suite,” he said.

Aviary in action.

So why does this LLM stack have to be open source, especially considering the strength of OpenAI and the other big tech companies (like Google) when it comes to LLMs?

Kadous noted the downsides of LLMs owned by companies (such as OpenAI or Google), since their inner workings are often not well understood. They wanted to create a tool that would help access open source LLMs, which are more easily understood. Initially, he said, they intended to just create a comparison tool — which turned out to be the first part of Aviary. But as they worked on the project, he continued, they realized there was a significant gap in the market. There needed to be a way for developers to easily deploy, manage and maintain their chosen open source model. So that became the second half of what Aviary offers.

How a Dev Uses Aviary

Kadous explained that there are two main tasks involved in choosing and then setting up an LLM for an application. The first is comparing different LLM models, which can be done through Aviary’s frontend website, or via the command line.

Aviary currently supports nine different open source LLMs, ranging from small models with 2 billion parameters to larger ones with 30 billion parameters. He said that it took them “a fair amount of effort” to get the comparison engine up to par.

“Each one [LLM] has unique stop tokens [and] you have to kind of tailor the process a little bit,” he said. “In some cases, you can accelerate them using something like DeepSpeed, which is a library that helps to make models run faster.”

One interesting note here is that for the evaluation process, they use OpenAI’s GPT-4 (not an open source LLM!). Kadous said they chose this because it’s currently considered the most advanced model globally. The GPT-4 evaluation provides rankings and comparisons for each prompt, across whichever models were selected.

The second key task for a developer is getting the chosen model into production. The typical workflow involves downloading a model from a repository like Hugging Face. But then additional considerations arise, said Kadous, such as understanding stop tokens, implementing learning tweaks, enabling auto-scaling, and determining the required GPU specifications.

He said that Aviary simplifies the deployment process by allowing users to configure the models through a config file. The aim is to make deployment as simple as running a few command lines, he added.

Ray Serve

Aviary’s main connection with Ray, the distributed computing framework that Anyscale is best known for, is that it uses a library called Ray Serve, which is described as “a scalable model serving library for building online inference APIs.” I asked Kadous to explain how this works.

Ray Serve is specifically designed for serving machine learning models and handling model traffic, he replied. It enables the inference process, where models respond to queries. One of its benefits, he said, is its flexibility and scalability — which allows for easy service deployment and scaling from one instance to multiple instances. He added that Ray Serve incorporates cost-saving features like utilizing spot instances, which he said are significantly cheaper than on-demand instances.

Kadous noted that Ray Serve’s capabilities are particularly important when dealing with large models that require coordination across multiple machines. For example, Falcon LLM has 40 billion parameters, which necessitates running on multiple GPUs. Ray Serve leverages the Ray framework to handle the coordination between those GPUs and manage workloads distributed across multiple machines, which in turn enables Aviary to support these complex models effectively.
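For a sense of what serving looks like in code, here is a minimal Ray Serve sketch (Ray 2.x API). The trivial echo deployment stands in for a real LLM; the replica count, not the model, is the point, and the deployment name is an assumption.

```python
# pip install "ray[serve]"   (sketch assumes Ray 2.x)
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)  # scale out by raising num_replicas
class EchoModel:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        # A real deployment would run LLM inference here.
        return {"completion": payload.get("prompt", "").upper()}

# Starts Serve locally and exposes the deployment over HTTP (port 8000 by default).
serve.run(EchoModel.bind())
```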

Customized Data Requirements

I wanted to know how a developer with a specific use case — say, someone who works for a small insurance company — might use Aviary. Can they upload insurance-related data to Aviary and test it against the models?

Kadous said that developers can engage with Anyscale and request their own customized version of Aviary, which allows them to set up a fine-tuned model. For example, an insurance company might fine-tune a model to generate responses to insurance claims. By comparing the prompts sent to the original model and the fine-tuned model, developers can assess if the fine-tuning has produced the desired differences, or if any unexpected behavior occurs.

Examples of LLM Apps

Finally, I asked Kadous what are the most impressive applications built on top of open LLMs that he’s seen so far.

He mentioned the prevalence of retrieval Q&A applications that utilize embeddings. Embeddings involve converting sentences into sequences of numbers that represent their semantic meaning, he explained. He thinks open source engines have proven to be particularly effective in generating these embeddings and creating semantic similarity.
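A small sketch of that idea, assuming the open source sentence-transformers package and the all-MiniLM-L6-v2 model, ranks documents by cosine similarity to a query:

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["How do I reset my password?",
        "Steps to change your account password",
        "Best hiking trails near Seattle"]
query = "I forgot my login password"

doc_vecs = model.encode(docs)          # each sentence becomes a vector of floats
query_vec = model.encode([query])[0]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by semantic similarity to the query.
for doc, vec in sorted(zip(docs, doc_vecs), key=lambda p: -cosine(query_vec, p[1])):
    print(f"{cosine(query_vec, vec):.3f}  {doc}")
```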

Additionally, open source models are often used for summarizing the results obtained from retrieval-based applications, he added.

The post A New Tool for the Open Source LLM Developer Stack: Aviary appeared first on The New Stack.

]]>
Survey Says: Cloud Maturity Matters https://thenewstack.io/survey-says-cloud-maturity-matters/ Tue, 13 Jun 2023 13:20:36 +0000 https://thenewstack.io/?p=22710700

The third annual State of Cloud Strategy Survey, commissioned by HashiCorp and conducted by Forrester Consulting, focuses on operational cloud

The post Survey Says: Cloud Maturity Matters appeared first on The New Stack.

]]>

The third annual State of Cloud Strategy Survey, commissioned by HashiCorp and conducted by Forrester Consulting, focuses on operational cloud maturity — defined not by the amount of cloud usage but by adoption of a combination of technology and organizational best practices at scale.

The results were unambiguous: The organizations using operational best practices are deriving the biggest benefits from their cloud efforts, in everything from security and compliance to availability and the ability to cope with the ongoing shortage of critical cloud skills. High-maturity companies were more likely to report increases in cloud spending and less likely to say they were wasting money on avoidable cloud spending.

The seven headline numbers below capture many of the survey’s most important findings, and you can view the interactive State of Cloud Strategy Survey microsite for detailed results and methodology. Read on to learn more about our cloud maturity model and some of the key differences we found between high and low cloud-maturity organizations.

Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023

Our Cloud Maturity Model

To fully understand the survey results you need to know something about the cloud maturity model developed by HashiCorp and Forrester to describe where organizations are in their cloud adoption journey. HashiCorp commissioned Forrester Consulting to survey almost 1,000 technology practitioners and decision-makers from companies in a variety of industries around the world, primarily those with more than 1,000 employees.

Forrester asked about their use of best practices across technology layers including infrastructure, security, networking and applications, as well as their use of platform teams, and used that data to separate respondents into three tiers:

  • Low-maturity organizations, the lowest 25% of respondents, are experimenting with these practices.
  • Medium-maturity companies, the middle 50%, are standardizing their use of these practices.
  • High-maturity respondents, the top 25%, are scaling these practices across the entire organization.

How High-Maturity Organizations Are Different

Multicloud works better for highly mature companies. More than three quarters (76%) of high-cloud-maturity organizations say multicloud is helping them achieve their business goals, and another 17% expect it to within the next 12 months. That compares to just 60% of low-maturity respondents who say multicloud is working for them, while another 22% expect it to do so in the next year.

The Great Cloud Skills Shortage

“Skills shortages” is the most commonly cited barrier to operationalizing multicloud, and almost three quarters (74%) of high-maturity respondents say multicloud helps them attract, motivate and retain talent. That compares to less than half (48%) of low-maturity organizations who can say the same. Other large differences between the benefits experienced by high- and low-maturity respondents showed up in the areas of compliance and risk (80% to 56%), infrastructure visibility/insight (82% to 59%) and speed (76% to 59%). Also significant, 79% of high-maturity organizations report that their multicloud efforts have resulted in a stronger security posture, perhaps because working in multiple cloud environments can help organizations keep their security professionals engaged, and also be a forcing function toward more intentional oversight of their security operations.

Cloud Spending and Cloud Waste

Despite macroeconomic uncertainty, 62% of highly mature companies boosted their cloud spending in the last year. That compares to 56% of respondents overall and just 38% of low-maturity organizations. Yet even as they increased cloud spending, more than half (53%) of high-maturity respondents used multicloud to cut costs, compared to just 42% of low-maturity respondents.

Avoidable cloud spending remains high, with 94% of respondents reporting some degree of cloud waste (about the same as in last year’s survey). But the factors contributing to that waste differ notably: Low cloud-maturity firms, in particular, struggle with over-provisioning resources (53%, compared to 47% for high maturity firms), idle or underused resources (55% compared to 51%) and lack of needed skills (47% vs. 43%).

Multicloud Drivers

High- and low-maturity organizations also differ on what drives their multicloud efforts. For example, along with cost reductions, reliability, scalability, security and governance, digital transformation and, especially, portability of data and applications are much more commonly cited by high-maturity organizations. On the other hand, factors such as remote working, shopping for best-fit cloud service, desire for operational efficiency, backup/disaster recovery and avoiding vendor lock-in were relatively similar across all levels of maturity.

What are the business and technology factors driving your multicloud adoption?

Base: 963 respondents who are application development and delivery practitioners and decision-makers with budget authority for new investments. Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023.

When it comes to security threats, 49% of both high- and low-maturity respondents worry about data theft (the top-ranking choice), and roughly equal percentages are concerned about phishing and social engineering attacks. Notably, though, while 61% of low-maturity companies rank password/credential/secrets leaks as a big concern, only 47% of high-maturity respondents agree. Similarly, ransomware is an issue for 47% of low-maturity respondents but just 39% of their high-maturity counterparts.

What are the biggest threats your organization faces when it comes to cloud security?

Base: 957 respondents who are application development and delivery practitioners and decision-makers with budget authority for new investments. Source: A commissioned study conducted by Forrester Consulting on behalf of HashiCorp, February 2023.

Find out More

You can explore the full results of the survey on HashiCorp’s interactive State of Cloud Strategy Survey microsite, where you can also download Forrester Consulting’s “​​Operational Maturity Optimizes Multicloud” study, which presents the firm’s key survey findings, analysis and recommendations for enterprises.

The post Survey Says: Cloud Maturity Matters appeared first on The New Stack.

]]>
In the Great Microservices Debate, Value Eats Size for Lunch https://thenewstack.io/in-the-great-microservices-debate-value-eats-size-for-lunch/ Tue, 13 Jun 2023 13:10:42 +0000 https://thenewstack.io/?p=22710290

In May, an old hot topic in software design long thought to be settled was stirred up again, sparked by

The post In the Great Microservices Debate, Value Eats Size for Lunch appeared first on The New Stack.

]]>

In May, an old hot topic in software design long thought to be settled was stirred up again, sparked by an article from the Amazon Prime Video architecture team about moving from serverless microservices to a monolith. This sparked some spirited takes and also a clamor for using the right architectural pattern for the right job.

Two interesting aspects can be observed in the melee of ensuing conversations. First, the original article was more about the scaling challenges with serverless “lambdas” rather than purely about microservices. Additionally, it covered state changes within Step Functions leading to higher costs, data transfer between lambdas and S3 storage, and so on.

It bears repeating that there are other, and possibly better, ways of implementing microservices than serverless alone. The choice of serverless lambdas is not synonymous with the choice of microservices. Choosing serverless as a deployment vehicle should be contingent upon factors such as expected user load and call frequency patterns, among other things.

The second and more interesting aspect was the size of the services (micro!), and this was the topic of most of the debates that emerged. How micro is micro? Is it a binary choice of micro versus monolith? Or is there a spectrum of choices of granularity? How should the size or granularity factor into the architecture?

Value-Based Services: Decoupling to Provide Value Independently

A key criterion for a service to be standing alone as a separate code base and a separately deployable entity is that it should provide some value to the users — ideally the end users of the application. A useful heuristic to determine whether or not a service satisfies this criterion is to think about whether most enhancements to the service would result in benefits perceivable by the user. If in a vast majority of updates the service can only provide such user benefit by having to also get other services to release enhancements, then the service has failed the criterion.

Services Providing Shared Internal Value: Coupling Non-Divergent Dependent Paths

What about services that offer capabilities internally to other services rather than directly to the end user? For instance, there might be a service that provides a specialized form of queuing that the application requires. In such cases, the question becomes whether the capability the service provides has just one internal client or several.

If a service ends up calling just one other service most of the time, apart from a few exceptional cases in which the call path diverges, there is little benefit in keeping that service separate from its predominant dependency. Another useful heuristic: if a circuit breaker trips and a service cannot reach one of its dependencies, can the calling service still provide anything at all to its users, or nothing?
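Here is a minimal sketch of that circuit-breaker heuristic, assuming a hypothetical product page that calls a recommendations dependency; fetch_recommendations and CircuitOpen are invented names, not any particular library's API. If the only honest fallback when the circuit is open is an empty or useless response, the two services may not deserve separate lives.

```python
import random  # stands in for the unpredictability of a real network call

class CircuitOpen(Exception):
    """Raised when the (hypothetical) circuit breaker has tripped."""

def fetch_recommendations(product_id):
    # Placeholder for a network call guarded by a circuit breaker.
    if random.random() < 0.5:
        raise CircuitOpen()
    return [f"also-bought-{product_id}-{n}" for n in range(3)]

def render_product_page(product_id):
    page = {"product": product_id, "description": "..."}
    try:
        page["recommendations"] = fetch_recommendations(product_id)
    except CircuitOpen:
        # The heuristic in practice: can we still show something useful?
        # Here the page degrades gracefully, so a separate service is defensible.
        # If the page were unusable without this data, the split is questionable.
        page["recommendations"] = []
    return page

print(render_product_page("sku-123"))
```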

Avoiding Talkative Services with Heavy Payloads

Providing value is also about the cost efficiency of designing a capability as multiple services versus combining it into a single service. One aspect highlighted in the Prime Video case was chatty network calls. These can be a double whammy: they not only add latency before a response goes back to the user, but they can also increase your bandwidth costs.

This becomes more problematic when large payloads, or many of them, move between services across network boundaries. One mitigation is to use a storage service so the payload itself never moves around: only an identifier for the payload is passed, and only the services that actually need the payload consume it.
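This is essentially the claim-check pattern. The sketch below is an assumption-heavy illustration that uses an in-memory dictionary in place of a real object store; store_payload, fetch_payload and PAYLOAD_STORE are hypothetical names, not a specific service's API.

```python
import json
import uuid

# Stand-in for an object store such as S3; purely illustrative.
PAYLOAD_STORE = {}

def store_payload(payload: dict) -> str:
    """Upload the payload once and return a small claim-check ID."""
    payload_id = str(uuid.uuid4())
    PAYLOAD_STORE[payload_id] = json.dumps(payload)
    return payload_id

def fetch_payload(payload_id: str) -> dict:
    """Only services that actually need the payload pull it back down."""
    return json.loads(PAYLOAD_STORE[payload_id])

# Upstream service: store the heavy payload once, pass only the ID downstream.
payload_id = store_payload({"video_frames": ["..."] * 10_000})

# Downstream services exchange just the small identifier...
message = {"payload_id": payload_id, "step": "analyze"}

# ...and only the service that needs the data dereferences it.
frames = fetch_payload(message["payload_id"])["video_frames"]
print(len(frames))  # 10000
```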

However, even if only an ID is passed around, if several services along the call path need to inspect or operate on the payload, each of them has to pull it down from the storage service, which nullifies the benefit and can even worsen the situation.

How and where payloads are handled should be an important input into designing service boundaries, and thereby into how many services the system ends up with.

Testability and Deployability

Finally, one more consideration is the cost of rapidly testing and deploying services. Consider a scenario in which, a majority of the time, multiple services must be enhanced simultaneously to deliver a single feature enhancement to the user.

Feature testing would then involve testing all of those services together. This can create release bottlenecks or force complex release-control and testing mechanisms, such as feature flags or blue-green deployment of whole sets of services. That tendency is a sure sign of a disadvantageous proliferation of too many discrete parts.

Too many teams fall into the trap of shipping “service enhancements” in every release that do little for the end user, because pieces from a number of other services still need to come together. Such a highly coupled architecture complicates both dependency management and versioning, and it delays the delivery of “end user value.”
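To make that coordination cost concrete, here is a hypothetical sketch of a cross-service feature flag that can only be switched on once every participating service has deployed its piece; the service names and version numbers are invented for illustration.

```python
# Hypothetical: a cross-service feature can be enabled only once every
# participating service has deployed a compatible version.
REQUIRED_VERSIONS = {
    "catalog": 42,
    "pricing": 17,
    "recommendations": 9,
}

def can_enable_flag(deployed_versions: dict) -> bool:
    """True only when all coupled services have shipped their part."""
    return all(
        deployed_versions.get(service, 0) >= minimum
        for service, minimum in REQUIRED_VERSIONS.items()
    )

# The flag stays off because "recommendations" has not released yet:
# three teams' work delivers no user value until the slowest one ships.
print(can_enable_flag({"catalog": 42, "pricing": 18, "recommendations": 8}))  # False
```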

Value-Based Services, Not ‘Micro’ Services!

An architecture should be able to deliver value to end users, a majority of the time, through the independent release of individual services. Considerations of coupling, dependencies, ease of testing and frequency of deployment matter more; the size of the service itself is useful mainly for applying reasonable limits so that it becomes neither too gigantic nor too nano-sized.

There may be other, more esoteric reasons for splitting into multiple services, such as the way teams are organized (Conway’s law, anyone?) or the desire for flexibility in languages and frameworks, but these are rarely real needs when it comes to providing value in enterprise software development.

One could very well have a performant, cost-efficient architecture that delivers “value” with a diverse mix of services of various sizes: some big, some micro, and others somewhere in between. Think of it as a “value-based services architecture” rather than a “microservices-based architecture,” one that enables services to deliver value quickly and independently. Because value always eats size for lunch!

The post In the Great Microservices Debate, Value Eats Size for Lunch appeared first on The New Stack.

]]>