WebAssembly Overview, News & Trends | The New Stack

WebAssembly and Go: A Guide to Getting Started (Part 1)
https://thenewstack.io/webassembly-and-go-a-guide-to-getting-started-part-1/ (June 12, 2023)

WebAssembly (Wasm) and Go are a powerful combination for building efficient and high-performance web applications. WebAssembly is a portable and efficient binary instruction format designed for web browsers, while Go is a programming language known for its simplicity, speed and concurrency features.

In this article, we will explore how WebAssembly and Go can work together to create web applications that leverage the benefits of both technologies. We will demonstrate the steps involved in compiling Go code into Wasm format, loading the resulting WebAssembly module into the browser, and enabling bidirectional communication between Go and JavaScript.

Using Go for WebAssembly offers several advantages. First, Go provides a familiar and straightforward programming environment for web developers, making it easy to transition from traditional Go development to web development.

Secondly, Go’s performance and concurrency features are well-suited for building efficient web applications that can handle heavy workloads.

Finally, the combination of Go and WebAssembly allows for cross-platform compatibility, enabling the deployment of applications on various browsers without the need for plugins or additional dependencies.

We will dive into the technical details of compiling Go code to Wasm, loading the module in a web browser, and establishing seamless communication between Go and JavaScript for WebAssembly.

You’ll come away with a comprehensive understanding of how Wasm and Go can be leveraged together to create efficient, cross-platform web applications. Whether you are a Go developer looking to explore web development or a web developer seeking high-performance options, this article will equip you with the knowledge and tools to get started with WebAssembly and Go.

Go and Its Use Cases

Go is often used for server-side development, network programming and distributed systems, but it can also be used for client-side web development.

Web development. Go is a popular choice for web development due to its simplicity, speed and efficient memory usage. It is well-suited for building backend web servers, APIs and microservices. Go’s standard library includes many built-in packages that make web development easy and efficient. Some popular web frameworks built in Go include Gin, Echo and Revel.

System programming. Go was designed with system programming in mind. It has a low-level feel and provides access to system-level features such as memory management, network programming and low-level file operations. This makes it ideal for building system-level applications such as operating systems, device drivers and network tools.

DevOps tools. Go’s simplicity and efficiency make it well-suited for building DevOps tools such as build systems, deployment tools, and monitoring software. Many popular DevOps tools are built in Go, such as Docker, Kubernetes, and Terraform.

Machine learning. Although not as popular as other programming languages for machine learning, Go’s performance and concurrency features make it a good choice for building machine learning models. It has a growing ecosystem of machine learning libraries and frameworks such as Gorgonia and the Go bindings for TensorFlow.

Command-line tools. Go’s simplicity and fast compilation time make it an ideal choice for building command-line tools. Go’s standard library includes many built-in packages for working with the command-line interface, such as the “flag” package for parsing command-line arguments and the “os/exec” package for executing external commands.
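
As a small illustration (the flag names here are purely hypothetical), a command-line tool built on the standard “flag” package looks like this:

package main

import (
  "flag"
  "fmt"
)

func main() {
  // Define flags; these particular names are only for illustration.
  name := flag.String("name", "world", "who to greet")
  times := flag.Int("times", 1, "how many greetings to print")
  flag.Parse()

  for i := 0; i < *times; i++ {
    fmt.Printf("Hello, %s!\n", *name)
  }
}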

Key Benefits of Using WebAssembly with Go

Performance. WebAssembly is designed to be fast and efficient, which makes it an ideal choice for running computationally intensive tasks in the browser. Go is also known for its speed and efficiency, making it a good fit for building high-performance web applications.

Portability. Wasm is designed to be portable across different platforms and architectures. This means that you can compile Go code into WebAssembly format and run it on any platform that supports WebAssembly. This makes it easier to build web applications that work seamlessly across different devices and operating systems.

Security. WebAssembly provides a sandboxed environment for running code in the browser, which helps to prevent malicious code from accessing sensitive user data. Go also has built-in security features such as memory safety and type safety, which can help to prevent common security vulnerabilities.

Concurrency. Go is designed with concurrency in mind, which makes it easier to build web applications that can handle multiple requests simultaneously. By combining WebAssembly and Go, you can build web applications that are highly concurrent and can handle a large number of requests at the same time.

How WebAssembly Works with the Browser

When a Wasm module is loaded in a browser, it is executed by the browser’s WebAssembly runtime, which is built into the same engine that runs JavaScript and compiles the Wasm bytecode into machine code the processor can execute.

That runtime is exposed to developers through the JavaScript WebAssembly API, which provides functions for loading, validating and instantiating Wasm modules. When a Wasm module is loaded, the runtime validates the module’s bytecode and creates an instance of the module, which can be used to call its functions and access its data.

Wasm modules can interact with the browser’s Document Object Model (DOM) and other web APIs using JavaScript. For example, a Wasm module can modify the contents of a webpage, listen for user events, and make network requests using the browser’s web APIs.

One of the key benefits of using Wasm in the browser is that it provides a way to run code that is more performant than JavaScript. JavaScript must be parsed and just-in-time compiled at runtime, so for compute-heavy work it is generally slower than ahead-of-time compiled languages like C++ or Go. By compiling such code into Wasm format, it can be executed at near-native speed, making it ideal for computationally intensive tasks such as machine learning or 3D graphics rendering.

Using WebAssembly with Go

The Go programming language has a compiler that can produce Wasm binaries, allowing Go programs to run in a web browser. WebAssembly has been a standard compilation target of the Go toolchain since Go 1.11; it is selected by setting the GOOS=js and GOARCH=wasm environment variables.

When compiling a Go program for WebAssembly, the Go compiler generates WebAssembly bytecode that can be executed in the browser using the WebAssembly Runtime. The generated Wasm module includes all of the Go runtime components needed to run the program, so no additional runtime support is required in the browser.

The Go compiler for WebAssembly supports the same set of language features as the regular Go compiler, including concurrency, garbage collection, and type safety. However, some Go features are not yet fully supported in WebAssembly, such as reflection and cgo.

Reflection. Reflection is a powerful feature in Go that allows programs to examine and manipulate their own types and values at runtime. However, due to the limitations of the Wasm runtime environment, reflection is not fully supported in Go programs compiled to WebAssembly. Some reflection capabilities may be limited or unavailable in WebAssembly binaries.

Cgo. The cgo tool in Go enables seamless integration with C code, allowing Go programs to call C functions and use C libraries. However, the cgo functionality is not currently supported in Go programs compiled to WebAssembly. This means that you cannot directly use cgo to interface with C code from WebAssembly binaries.

Technical Overview: How Wasm and Go Work Together

To compile Go code into WebAssembly format, you can use the Golang Wasm compiler. This tool generates a .wasm file that can be executed in a web browser. The compiler translates Go code into WebAssembly instructions that can be executed by a virtual machine in the browser.

Once you have the .wasm file, you need to load it into the browser using the WebAssembly JavaScript API. This API provides functions to load the module, instantiate it, and execute its functions.

You can load the .wasm file using the fetch() function and read the response as an ArrayBuffer. You can then instantiate the module using the WebAssembly.instantiate() function which, when given raw bytes, returns a Promise that resolves to an object containing both a WebAssembly.Module and a WebAssembly.Instance.
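
Here is a minimal sketch of that flow. The importObject name is a placeholder for whatever imports your module expects; for modules built with the standard Go toolchain it comes from the Go class provided by wasm_exec.js, as shown later in this article:

// Inside an async function or a JavaScript module (for top-level await).
const response = await fetch('add.wasm');
const bytes = await response.arrayBuffer();

// With raw bytes, WebAssembly.instantiate() resolves to { module, instance }.
const { module, instance } = await WebAssembly.instantiate(bytes, importObject);

// Exported functions are available on instance.exports.
console.log(Object.keys(instance.exports));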

Calling Go Functions from JavaScript

After the WebAssembly module is loaded and instantiated, it exposes its functions to JavaScript. These functions can be called from JavaScript using the WebAssembly JavaScript API.

You can use the WebAssembly.instantiate() function to obtain a JavaScript object that contains the exported functions from the WebAssembly module. You can then call these functions from JavaScript just like any other JavaScript function.

Calling JavaScript Functions from Go

To call JavaScript functions from Go, you can use the syscall/js package. This package provides a way to interact with the JavaScript environment. You can create JavaScript values, call JavaScript functions, and handle JavaScript events from Go.

Use the js.Global() function to get the global object in the JavaScript environment. You can then call any function on this object using the Call() function, passing in the function name and any arguments.
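
A minimal sketch of what that looks like from the Go side (the calls target standard browser globals like alert and console):

package main

import "syscall/js"

func main() {
  // js.Global() returns the JavaScript global object (window in a browser).
  global := js.Global()

  // Call a global function by name: window.alert("Hello from Go!").
  global.Call("alert", "Hello from Go!")

  // Get a property and call a method on it: console.log(...).
  global.Get("console").Call("log", "Logged from Go via syscall/js")
}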

The Golang WebAssembly API

The Golang WebAssembly API provides a set of functions that Go code compiled to WebAssembly can use to interact with the JavaScript host while running in a web browser. These functions allow Go programs to read and write JavaScript values, call JavaScript functions and exchange data with the surrounding page.

On the Go side, this support is implemented primarily in the “syscall/js” package, which provides the bridge between Go and JavaScript. On the browser side, the wasm_exec.js support file that ships with the Go distribution loads the compiled module and wires it into the JavaScript environment.

Using this API, a Go program can, for example, perform a complex computation in the browser and hand the results back to JavaScript, or react to values and events passed in from the page.

The API also provides a way to define and export Go functions so they can be called from JavaScript. This allows Go programs to expose functionality to the page and integrate with existing JavaScript codebases.

Here’s a demonstration of how to compile a simple Go program to WebAssembly and load it in the browser:

First, make sure you have a reasonably recent Go toolchain installed; WebAssembly support has been built into the standard compiler since Go 1.11, so no separate compiler needs to be installed. You will also want to copy the JavaScript support file that ships with Go into your project directory:

cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .


This wasm_exec.js file provides the Go class that the browser uses to load and run the compiled module. (In newer Go releases the file may live under lib/wasm instead of misc/wasm.)

Next, we can write a simple Go program that adds two numbers together:

package main

import (
  "fmt"
  "syscall/js"
)

// add is registered as a global JavaScript function so the browser can call it.
func add(this js.Value, args []js.Value) interface{} {
  return args[0].Int() + args[1].Int()
}

func main() {
  fmt.Println("Hello from Go!")

  // Expose add to JavaScript and keep the Go runtime alive.
  js.Global().Set("add", js.FuncOf(add))
  select {}
}


We can then compile this program to WebAssembly by running the following command:

GOARCH=wasm GOOS=js go build -o add.wasm


This will generate a WebAssembly binary file called “add.wasm.”

Now we can write some JavaScript code to load and execute the WebAssembly module. Here’s an example:

const go = new Go();

WebAssembly.instantiateStreaming(fetch('add.wasm'), go.importObject).then((result) => {
  go.run(result.instance);
  console.log("Result:", add(2, 3)); // call the 'add' function defined in the Go program
});


This code creates a new instance of the Go WebAssembly API, loads the add.wasm module using the WebAssembly API, runs the module, and then calls the add function defined in the Go program.

Finally, we can load our JavaScript code in a webpage and view the output in the browser console. For example:

<!DOCTYPE html>
<html>
 <head>
   <meta charset="utf-8">
   <title>Go + WebAssembly Example</title>
 </head>
 <body>
   <script src="wasm_exec.js"></script>
   <script>
     // insert JavaScript code here
   </script>
 </body>
</html>


This HTML file loads the wasm_exec.js file, which is included with the Go compiler for WebAssembly, and then includes our JavaScript code to load and execute the add.wasm module.

That’s it! With these steps, we can compile a simple Go program to WebAssembly and load it in a web browser using JavaScript. This provides a powerful way to build high-performance web applications with the simplicity and ease of use of the Go programming language.

How to Use Go with Various Wasm Frameworks

Here’s an overview of different WebAssembly frameworks and toolchains that can be used with Go, including AssemblyScript (a TypeScript-like language that compiles to Wasm) and TinyGo (a Go compiler that targets WebAssembly and embedded systems).

AssemblyScript

AssemblyScript provides a familiar syntax for web developers and can be used alongside Go to provide additional functionality to a web application. The two are glued together with plain JavaScript, which loads each compiled module. Here’s a sketch of the JavaScript side loading a Go-built module (the Go class comes from wasm_exec.js):

const go = new Go();

const wasmModule = await WebAssembly.compileStreaming(fetch('add.wasm'));
const wasmInstance = await WebAssembly.instantiate(wasmModule, go.importObject);

go.run(wasmInstance); // start the Go runtime; main() registers the 'add' function globally

console.log(add(2, 3)); // call the function exposed by the Go module


In this example, we compile and instantiate the add.wasm module with the Go import object, start the Go runtime, and then call the add function that the Go program registered on the global object. An AssemblyScript module would be compiled and instantiated separately in the same way, using its own import object, and the two can then call into each other through the JavaScript glue code.

TinyGo

TinyGo provides a subset of the Go standard library and can be used to write low-level code that runs in the browser. Here’s an example of a Go function that is compiled to WebAssembly with TinyGo and exposed so it can be called from JavaScript:

package main

import "syscall/js"

// add reads two integers passed in from JavaScript and returns their sum.
func add(this js.Value, inputs []js.Value) interface{} {
  a := inputs[0].Int()
  b := inputs[1].Int()
  return a + b
}

func main() {
  c := make(chan struct{})
  js.Global().Set("add", js.FuncOf(add)) // expose add as a global JavaScript function
  <-c                                    // block forever so the exported function stays available
}


In this example, we define a function called add that takes two integer parameters and returns their sum. We then use the “syscall/js” package to export this function to JavaScript. Finally, we block the main thread using a channel to prevent the Go program from exiting.
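
To produce the module, this program can be compiled with TinyGo targeting the browser. Note that TinyGo ships its own wasm_exec.js support file (under its targets directory), which should be used in place of the one from the standard Go distribution:

tinygo build -o add.wasm -target wasm ./main.go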

We can then call this function from JavaScript using the following code:

const go = new Go();

WebAssembly.instantiateStreaming(fetch('add.wasm'), go.importObject).then((result) => {
   go.run(result.instance);
   console.log("Result:", add(2, 3)); // call the 'add' function registered by the Go program
});


In this example, we instantiate the WebAssembly module and pass it to the Go runtime using the Go import object. We then run the Go program and call the add function defined in the Go program. The result is then printed to the console.

Using Wasm for Cross-Platform Development

WebAssembly code can be run in any environment that supports it, including browsers and standalone runtimes. Developers can use it to create applications that can run on multiple platforms with minimal code changes — fulfilling WebAssembly’s promise of “build once, run anywhere.” This can help to reduce development time and costs, while also providing a consistent user experience across different devices and platforms.

One way to use Wasm for cross-platform development is to build an application in a language that can be compiled to WebAssembly, such as Go or Rust. Once the application is built, it can be compiled to WebAssembly and deployed to the web, or compiled to native code and deployed to a desktop environment, using a framework like Electron or GTK.

Another way to use Wasm for cross-platform development is to take an existing codebase written in a language like C or C++ and compile it to WebAssembly using a tool like Emscripten, driving it from JavaScript in the browser. This approach can be especially useful for porting existing native code to the web, or for building applications that need to run on both the web and desktop.

Go programs can be compiled to both WebAssembly and native desktop environments using a number of different tools and frameworks.

For example, Electron is a popular framework for building cross-platform desktop applications using web technologies like HTML, CSS, and JavaScript. Go programs can be compiled to run on Electron using a tool like Go-Electron, which provides a way to package Go applications as Electron apps.

Another option is to use GTK, a popular cross-platform toolkit for building desktop applications. Go programs can be compiled to run on GTK using the gotk3 package, which provides Go bindings for GTK.
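
As a rough sketch of what the gotk3 API looks like (this is ordinary desktop Go, unrelated to WebAssembly, and assumes the GTK 3 development libraries are installed), a minimal window might be:

package main

import "github.com/gotk3/gotk3/gtk"

func main() {
  gtk.Init(nil)

  // Create a top-level window with a label and wire up the close button.
  win, err := gtk.WindowNew(gtk.WINDOW_TOPLEVEL)
  if err != nil {
    panic(err)
  }
  win.SetTitle("Go + GTK")
  win.Connect("destroy", func() { gtk.MainQuit() })

  label, err := gtk.LabelNew("Hello from Go and GTK")
  if err != nil {
    panic(err)
  }
  win.Add(label)
  win.ShowAll()

  // Hand control to the GTK main loop.
  gtk.Main()
}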

WebAssembly and Go: A Guide to Getting Started (Part 2)
https://thenewstack.io/webassembly-and-go-a-guide-to-getting-started-part-2/ (June 12, 2023)

WebAssembly (Wasm) and Golang (Go) are a dynamic duo for high-performance web applications due to their specific features and advantages. Wasm is a binary instruction format that allows running code at near-native speed in modern web browsers. It provides a low-level virtual machine that enables efficient execution of code, making it ideal for performance-intensive tasks.

Go is a statically typed, compiled programming language known for its simplicity, efficiency and high-performance characteristics. It offers built-in concurrency support, efficient memory management, and excellent execution speed. These qualities make Go a suitable language for developing backend systems that power web applications.

By combining WebAssembly and Go, developers can achieve exceptional performance in web applications. Go can be used to write backend services, APIs and business logic, while WebAssembly can be used to execute performance-critical code in the browser. This combination allows for offloading computation to the client-side, reducing server load and improving responsiveness.

Furthermore, Go has excellent interoperability with WebAssembly, allowing seamless integration between the two. Developers can compile Go code to WebAssembly modules, which can be executed in the browser alongside JavaScript, enabling the utilization of Go’s performance benefits on the client side.

Performance is of paramount importance in web applications for several reasons:

User experience. A fast and responsive web application enhances the user experience and satisfaction. Users expect web pages to load quickly and respond promptly to their interactions. Slow and sluggish applications can lead to frustration, abandonment and loss of users.

Conversion rates. Performance directly impacts conversion rates, especially in e-commerce and online businesses. Even minor delays in page load times can result in higher bounce rates and lower conversion rates, studies have shown. Improved performance can lead to increased engagement, longer session durations and higher conversion rates.

Search Engine Optimization (SEO). Search engines, like Google, take website performance into account when ranking search results. Faster-loading websites tend to have better search engine rankings, which can significantly impact organic traffic and visibility.

Mobile users. With the increasing use of mobile devices, performance becomes even more critical. Mobile networks can be slower and less reliable than fixed-line connections. Optimizing web application performance ensures a smooth experience for mobile users, leading to better engagement and retention.

Competitiveness. In today’s highly competitive digital landscape, performance can be a key differentiator. Users have numerous options available, and if your application is slow, they may switch to a competitor offering a faster and more efficient experience.

How Wasm Enhances Web Application Performance

Near-native performance. WebAssembly is designed to execute code at near-native speed. It achieves this by using a compact binary format that can be efficiently decoded and executed by modern web browsers. Unlike JavaScript, which must be parsed and just-in-time compiled at runtime, Wasm code arrives already compiled to a binary format that the browser’s virtual machine can translate into machine code quickly, resulting in faster and more predictable execution times.

Efficient execution. WebAssembly provides a low-level virtual machine that allows for efficient execution of code. It uses a stack-based architecture that minimizes the overhead associated with memory access and function calls. Additionally, WebAssembly operates on a compact binary format, reducing the size of the transmitted code and improving load times.

Multilanguage support. WebAssembly is designed to be language-agnostic, which means it can be used with a wide range of programming languages. This allows developers to leverage the performance benefits of Wasm while using their preferred programming language. By compiling code from languages like C, C++, Rust, and Go to WebAssembly, developers can take advantage of their performance characteristics and seamlessly integrate them into web applications.

Offloading computation. Wasm enables offloading computationally intensive tasks from the server to the client side. By moving certain operations to the browser, web applications can reduce the load on the server, distribute computation across multiple devices and improve overall responsiveness. This can be particularly beneficial for applications that involve complex calculations, image processing, simulations and other performance-critical tasks.

Seamless integration with JavaScript. WebAssembly can easily integrate with JavaScript, the traditional language of the web. This allows developers to combine the performance benefits of Wasm with the rich ecosystem of JavaScript libraries and frameworks. WebAssembly modules can be imported and exported from JavaScript code, enabling interoperability and smooth interaction between the two.

Progressive enhancement. Wasm supports a progressive enhancement approach to web development. Developers can choose to compile performance-critical parts of their application to WebAssembly while keeping the rest of the code in JavaScript. This way, the performance gains are selectively applied where they are most needed, without requiring a complete rewrite of the entire application.

WebAssembly vs. Other Web Technologies

WebAssembly outperforms JavaScript and asm.js in terms of execution speed. JavaScript is an interpreted language, while asm.js is a subset of JavaScript optimized for performance.

In contrast, WebAssembly executes at near-native speed, thanks to its efficient binary format and ahead-of-time (AOT) compilation. Wasm is language-agnostic, allowing developers to use multiple languages.

JavaScript has a larger developer community and mature tooling, while asm.js requires specific optimizations. WebAssembly binaries are smaller, resulting in faster load times. JavaScript has wider browser compatibility and seamless interoperability with web technologies.

WebAssembly requires explicit interfaces for interaction with JavaScript. Overall, Wasm offers high performance, while JavaScript has wider adoption and tooling support. Usage of asm.js has diminished with the rise of WebAssembly. The choice depends on performance needs, language preferences and browser support.

How Go Helps Create High-Performance Apps

Go is known for its key features that contribute to building high-performance applications. These features include:

Compiled language. Go compiles source code into efficient machine code, which results in fast execution and eliminates the need for interpretation at runtime. The compiled binaries can be directly executed by the operating system, providing excellent performance.

Concurrency support. The language has built-in support for concurrency through goroutines and channels. Goroutines are lightweight threads that allow concurrent execution of functions, while channels facilitate communication and synchronization between goroutines.

This concurrency model makes it easy to write highly concurrent and parallel programs, enabling efficient use of available resources and improving performance in scenarios like handling multiple requests or processing large amounts of data concurrently.
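
A minimal sketch of this model, fanning CPU-bound work out to goroutines and collecting the results over a channel:

package main

import (
  "fmt"
  "sync"
)

// square stands in for any CPU-bound unit of work.
func square(n int) int { return n * n }

func main() {
  inputs := []int{1, 2, 3, 4, 5}
  results := make(chan int, len(inputs))

  var wg sync.WaitGroup
  for _, n := range inputs {
    wg.Add(1)
    go func(n int) { // one lightweight goroutine per work item
      defer wg.Done()
      results <- square(n)
    }(n)
  }

  wg.Wait()      // wait for every goroutine to finish
  close(results) // then close the channel so the range below terminates

  for r := range results {
    fmt.Println(r)
  }
}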

Efficient garbage collection. Go incorporates a garbage collector that automatically manages memory allocation and deallocation. It uses a concurrent garbage collector that minimizes pauses and allows applications to run smoothly without significant interruptions. The garbage collector efficiently reclaims unused memory, preventing memory leaks and enabling efficient memory management in high-performance applications.

Strong standard library. Go comes with a rich standard library that provides a wide range of functionalities, including networking, file I/O, encryption, concurrency primitives and more. The standard library is designed with performance and efficiency in mind, offering optimized implementations and well-designed APIs.

Developers can leverage these libraries to build high-performance applications without relying heavily on third-party dependencies.

Native support for concurrency patterns. Go provides native support for common concurrency patterns, such as mutexes, condition variables and atomic operations. These features enable developers to write thread-safe and efficient concurrent code without the complexities typically associated with low-level synchronization primitives.

This native support simplifies the development of concurrent applications and contributes to improved performance.

Efficient networking. Golang’s standard library includes a powerful networking package that offers efficient abstractions for building networked applications. It provides a robust set of tools for handling TCP/IP, UDP, HTTP, and other protocols. The networking capabilities of Go are designed to be performant, enabling the development of high-throughput and low-latency network applications.

Compilation to standalone binaries. Go can compile code into standalone binaries that contain all the necessary dependencies and libraries. These binaries can be easily deployed and executed on various platforms without requiring the installation of additional dependencies.

This approach simplifies deployment and can contribute to better performance by reducing overhead and ensuring consistent execution environments.

Using Wasm for Computationally Intensive Tasks

Wasm can greatly improve the performance of computationally intensive tasks like image processing or cryptography by leveraging its near-native execution speed. By compiling algorithms or libraries written in languages like C/C++ or Rust to WebAssembly, developers can achieve significant performance gains.

WebAssembly’s efficient binary format and ability to execute in a sandboxed environment make it ideal for running computationally intensive operations in the browser.

Go programs can benefit from improved performance when compiled to Wasm for computationally intensive tasks. For example, Go libraries or applications that involve heavy image manipulation, complex mathematical calculations or cryptographic operations can be compiled to WebAssembly to take advantage of its speed.

Using WebAssembly for UI Rendering

WebAssembly can improve UI rendering performance in the browser compared to traditional JavaScript approaches. By leveraging Wasm’s efficient execution and direct access to low-level operations, rendering engines can achieve faster updates and smoother animations.

WebAssembly allows UI rendering code to run closer to native speeds, resulting in improved user experiences, especially for complex or graphically intensive applications.

Applications built with UI frameworks or libraries like React or Vue.js can also benefit when their performance-critical parts are moved into WebAssembly modules. By leveraging the speed and efficiency of Wasm, such applications can deliver faster rendering and more responsive user interfaces. Compiling UI components written in languages like Rust, C++ or Go to WebAssembly can enhance the overall performance and responsiveness of the UI, making the user experience more seamless and interactive.

Using WebAssembly for Game Development

WebAssembly’s efficient execution and direct access to hardware resources make it ideal for browser-based game development. It offers improved performance compared to traditional JavaScript game engines. By compiling game logic and rendering code to WebAssembly, developers can achieve near-native speeds, enabling complex and visually rich games to run smoothly in the browser.

Go-based game engines like Azul3D can benefit from improved performance when compiled to WebAssembly. By leveraging the speed and efficiency of Wasm, Go game engines can deliver high-performance browser games with advanced graphics and physics simulations.

Compiling Go-based game engines to WebAssembly enables developers to harness Go’s performance characteristics and create immersive gaming experiences that rival native applications.

The Power of Go and WebAssembly: Case Studies

TinyGo

TinyGo is a project that compiles Go code to WebAssembly for running on resource-constrained devices and in the browser. It showcases the performance gains of combining Go with Wasm for scenarios where efficiency and low resource usage are crucial.

Wasmer

Wasmer is an open-source runtime for executing WebAssembly outside the browser. It supports running Go code as WebAssembly modules. Wasmer’s performance benchmarks have demonstrated that Go code executed as Wasm can achieve comparable or better performance than JavaScript in various scenarios.

Vecty

Vecty is a web framework for building responsive and dynamic frontends in Go using WebAssembly. It aims to compete with modern web frameworks like React and Vue.js. Here are some key features of Vecty:

  • Simplicity. Vecty is designed to be easily mastered by newcomers, especially those familiar with the Go programming language. It follows Go’s philosophy of simplicity and readability.
  • Performance. Vecty focuses on providing efficient and understandable performance. It aims to generate small bundle sizes, resulting in faster loading times for your web applications. Vecty strives to achieve the same performance as raw JavaScript, HTML and CSS.
  • Composability. Vecty allows you to nest components, enabling you to build complex user interfaces by logically separating them into smaller, reusable packages. This composability promotes code reusability and maintainability.
  • Designed for Go. Vecty is specifically designed for Go developers. Instead of translating popular libraries from other languages to Go, Vecty was built from the ground up, asking the question, “What is the best way to solve this problem in Go?” This approach ensures that Vecty leverages Go’s unique strengths and idioms.

Best Practices: Developing Web Apps with Wasm and Go

Optimize Go Code for WebAssembly

Minimize memory allocations. Excessive memory allocations can impact performance. Consider using object pooling or reusing memory to reduce the frequency of allocations and deallocations.
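
One common way to reuse memory is the standard library’s sync.Pool; here is a minimal sketch that pools byte buffers instead of allocating a new one per call:

package main

import (
  "bytes"
  "fmt"
  "sync"
)

// bufPool hands out reusable buffers to avoid a fresh allocation on every call.
var bufPool = sync.Pool{
  New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
  buf := bufPool.Get().(*bytes.Buffer)
  defer func() {
    buf.Reset() // clear the contents before returning the buffer to the pool
    bufPool.Put(buf)
  }()

  fmt.Fprintf(buf, "hello, %s", name)
  return buf.String()
}

func main() {
  fmt.Println(render("wasm"))
}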

Use efficient data structures. Choose data structures that are optimized for performance. Go provides various built-in data structures like slices and maps that are efficient for most use cases.

Limit garbage collection pressure. Excessive garbage collection can introduce pauses and affect performance. Minimize unnecessary object allocations and use the appropriate garbage collection settings to optimize memory management.

Optimize loops and iterations. Identify loops and iterations that can be optimized. Use loop unrolling, minimize unnecessary calculations and ensure efficient memory access patterns.

Leverage goroutines and channels. Go’s concurrency primitives (goroutines and channels) can help maximize performance. Use them to parallelize tasks and handle concurrent operations efficiently.

Maximize Performance in Wasm Modules

Minimize startup overhead. Reduce the size of the WebAssembly module by eliminating unnecessary code and dependencies. Minify and compress the module to minimize download time.

Optimize data transfers. Minimize data transfers between JavaScript and Wasm modules. Use efficient memory layouts and data representations to reduce serialization and deserialization overhead.

Use SIMD instructions. If applicable, use single instruction, multiple data (SIMD) instructions to perform parallel computations and improve performance. SIMD can be especially beneficial for tasks involving vector operations.

Profile and optimize performance-critical code. Identify performance bottlenecks by profiling the WebAssembly module. Optimize the hot paths, critical functions and sections that consume significant resources to improve overall performance.

Use compiler and optimization flags. Use compiler-specific flags and optimizations tailored for WebAssembly. Different compilers may have specific optimizations to improve performance for Wasm targets.
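
As an illustration (the exact flags depend on your toolchain and version), the standard Go toolchain can strip symbol and debug information, and TinyGo can optimize for size:

# Standard Go toolchain: strip the symbol table and DWARF debug info
GOOS=js GOARCH=wasm go build -ldflags="-s -w" -o app.wasm

# TinyGo: optimize for size and omit debug info
tinygo build -o app.wasm -target wasm -opt=z -no-debug .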

Minimize Latency and Improve Responsiveness

Reduce round trips. Minimize the number of network requests by combining resources, utilizing caching mechanisms, and employing efficient data transfer protocols like HTTP/2 or WebSockets.

Do asynchronous operations. Use asynchronous programming techniques to avoid blocking the main thread and enhance responsiveness. Employ callbacks, Promises, or async/await syntax for non-blocking I/O operations.

Employ lazy loading and code splitting. Divide the application into smaller modules and load them on-demand as needed. Lazy loading and code splitting reduce the initial load time and improve perceived performance.

Use efficient DOM manipulation. Optimize Document Object Model (DOM) manipulation operations by batching changes and reducing layout recalculations. Use techniques like virtual DOM diffing to minimize updates and optimize rendering.

Rely on caching and prefetching. Leverage browser caching mechanisms and prefetching to proactively load resources that are likely to be needed, reducing latency and improving perceived performance.

How WASM (and Rust) Unlocks the Mysteries of Quantum Computing
https://thenewstack.io/how-wasm-and-rust-unlocks-the-mysteries-of-quantum-computing/ (June 8, 2023)

WebAssembly has come a long way from the browser; it can be used for building high-performance web applications, for serverless applications, and for many other uses.

Recently, we also spotted it as a key technology used in creating and controlling a previously theoretical state of matter that could unlock reliable quantum computing — for the same reasons that make it an appealing choice for cloud computing.

Quantum Needs Traditional Computing

Quantum computing uses exotic hardware (large, expensive and very, very cold) to model complex systems and problems that need more memory than the largest supercomputer: it stores information in equally exotic quantum states of matter and runs computations on it by controlling the interactions of subatomic particles.

But alongside that futuristic quantum computer, you need traditional computing resources to feed data into the quantum system, to get the results back from it — and to manage the state of the qubits to deal with errors in those fragile quantum states.

As Dr. Krysta Svore, the researcher heading the team building the software stack for Microsoft’s quantum computing project, put it in a recent discussion of hybrid quantum computing, “We need 10 to 100 terabytes a second bandwidth to keep the quantum machine alive in conjunction with a classical petascale supercomputer operating alongside the quantum computer: it needs to have this very regular 10 microsecond back and forth feedback loop to keep the quantum computer yielding a reliable solution.”

Qubits can be affected by what’s around them and lose their state in microseconds, so the control system has to be fast enough to measure the quantum circuit while it’s operating (that’s called a mid-circuit measurement), find any errors and decide how to fix them — and send that information back to control the quantum system.

“Those qubits may need to remain alive and remain coherent while you go do classical compute,” Svore explained. “The longer that delay, the more they’re decohering, the more noise that is getting applied to them and thus the more work you might have to do to keep them stable and alive.”

Fixing Quantum Errors with WASM

There are different kinds of exotic hardware in quantum computers and you have a little more time to work with a trapped-ion quantum computer like the Quantinuum System Model H2, which will be available through the Azure Quantum service in June.

That extra time means the algorithms that handle the quantum error correction can be more sophisticated, and WebAssembly is the ideal choice for building them, Pete Campora, a quantum compiler engineer at Quantinuum, told The New Stack.

Over the last few years, Quantinuum has used WebAssembly (WASM) as part of the control system for increasingly powerful quantum computers, going from just demonstrating that real-time quantum error correction is possible to experimenting with different error correction approaches and, most recently, creating and manipulating for the first time the exotic entangled quantum states (called non-Abelian anyons) that could be the basis of fault-tolerant quantum computing.

Move one of these quasiparticles around another — like braiding strings — and they store that sequence of movements in their internal state, forming what’s called a topological qubit that’s much more error resistant than other types of qubit.

At least, that’s the theory: and WebAssembly is proving to be a key part of proving it will work — which still needs error correction on today’s quantum computers.

“We’re using WebAssembly in the middle of quantum circuit execution,” Campora explained. The control system software is “preparing quantum states, doing some mid-circuit measurements, taking those mid-circuit measurements, maybe doing a little bit of classical calculation in the control system software and then passing those values to the WebAssembly environment.”

Controlling Quantum Circuits

In cloud, developers are used to picking the virtual machine with the right specs or choosing the right accelerator for a workload.

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits as well as the algorithm that will run on it — and error-correcting the qubits while the job is running — with a language like OpenQASM (Open Quantum Assembly Language); that’s rather like controlling an FPGA with a hardware description language like Verilog.

You can’t measure a qubit to check for errors directly while it’s working or you’d end the computation too soon, but you can measure an extra qubit (called an “ancilla” because it’s used to store partial results) and extrapolate the state of the working qubit from that.

What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to “diagnose” or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.

OpenQASM is good for basic integer calculation, but it requires a lot of expertise to write that code: “There’s a lot more boilerplate than if you just call out to a nice function in WASM.”

Writing the algorithmic decoder that uses those qubit measurements to work out what the most likely error is and how to correct it in C, C++ or Rust and compiling it to WebAssembly makes it more accessible and lets the quantum engineers use more complex data structures like vectors, arrays, tuples and other ways to pass data between different functions to write more sophisticated algorithms that deliver more effective quantum error correction.

“An algorithmic decoder is going to require data structures beyond what you would reasonably try to represent with just integers in the control system: it just doesn’t make sense,” Campora said. “The WASM environment does a lot of the heavy lifting of mutating data structures and doing these more complex algorithms. It even does things like dynamic allocation that normally you’d want to avoid in control system software due to timing requirements and being real time. So, the Rust programmer can take advantage of Rust crates for representing graphs and doing graph algorithms and dynamically adding these nodes into a graph.”

The first algorithmic decoder the Quantinuum team created in Rust and compiled to WASM was fairly simple: “You had global arrays or dictionaries that mapped your sequence of syndromes to a result.” The data structures used in the most recent paper are more complex and quantum engineers are using much more sophisticated algorithms like graph traversal and Dijkstra’s [shortest path] algorithm. “It’s really interesting to see our quantum error correction researchers push the kinds of things that they can write using this environment.”

Enabling software that’s powerful enough to handle different approaches to quantum error correction makes it much faster and more accessible for researchers to experiment than if they had to make custom hardware each time, or even reprogram an FPGA, especially for those with a background in theoretical physics (with the support of the quantum compiler team if necessary). “It’s portable, and you can generate it from different languages, so that frees people up to pick whatever language and software that can compile to WASM that’s good for their application.”

“It’s definitely a much easier time for them to get spun up trying to think about compiling Rust to WebAssembly versus them having to try and program an FPGA or work with someone else and describe their algorithms. This really allows them to just go and think about how they’re going to do it themselves,” Campora said.

Sandboxes and System Interfaces

With researchers writing their own code to control a complex — and expensive — quantum system, protecting that system from potentially problematic code is important and that’s a key strength of WebAssembly, Campora noted. “We don’t have to worry about the security concerns of people submitting relatively arbitrary code, because the sandbox enforces memory safety guarantees and basically isolates you from certain OS processes as well.”

Developing quantum computing takes the expertise of multiple disciplines and both commercial and academic researchers, so there are the usual security questions around code from different sources. “One of the goals with this environment is that, because it’s software, external researchers that we’re collaborating with can write their algorithms for doing things like decoders for quantum error correction and can easily tweak them in their programming language and resubmit and keep re-evaluating the data.”

A language like Portable C could do the computation, “but then you lose all of those safety guarantees,” Campora pointed out. “A lot of the compilation tooling is really good about letting you know that you’re doing something that would require you to break out of the sandbox.”

WebAssembly restricts what a potentially malicious or inexpert user could do that might damage the system but also allows system owners to offer more capabilities to users who need them, using WASI — the WebAssembly System Interface that standardizes access to features and services that aren’t in the WASM sandbox.

“I like the way WASI can allow you, in a more fine-grained way, to opt into a few more things that would normally be considered breaking the sandbox. It gives you control. If somebody comes up to you with a reasonable request that that would be useful for, say, random number generation we can look into adding WASI support so that we can unblock them, but by default, they’re sandboxed away from OS things.”

In the end, esoteric as the work is, the appeal of WebAssembly for quantum computing error correction is very much what makes it so useful in so many areas.

"The web part of the name is almost unfortunate in certain ways," Campora noted, "because it’s really this generic virtual machine-stack machine-sandbox, so it can be used for a variety of domains. If you have those sandboxing needs, it’s really a great target for you to get some safety guarantees and still allows people to submit code to it."

The Need to Roll up Your Sleeves for WebAssembly
https://thenewstack.io/the-need-to-roll-up-your-sleeves-for-webassembly/ (June 5, 2023)

We already know how putting applications in WebAssembly modules can improve runtime performance, latency and compatibility when deployed. We also know that WebAssembly has been used to improve application performance both in the browser and on the backend. But the day when developers can create applications in the language of their choice for distribution across any environment simultaneously, whether on Kubernetes clusters, servers or edge devices, remains a work in progress.

This status quo became that much more apparent from the talks and impromptu meetings I had during KubeCon + CloudNativeCon in April. Beyond the growing number of WebAssembly module and service providers and startups offering support for WebAssembly, it’s hard to find any organization that is not at least experimenting with it as a sandbox project in anticipation of customers asking for, or requiring, it.

Many startups, established players and tool and platform providers are actively contributing to the common pool of knowledge by creating or maintaining open source projects, taking part in efforts such as the Bytecode Alliance, or sharing their knowledge and experiences at conferences such as KubeCon + CloudNativeCon Europe’s co-located event, Cloud Native Wasm Day. This collective effort will very likely serve as a catalyst for WebAssembly to move past its current status as just a very promising new technology and begin to be used for what it’s intended for on a massive industry scale.

Indeed, WebAssembly is the logical next step in the evolution from running applications on specific hardware, to running them on virtual machines, to running them in containers on Kubernetes, Torsten Volk, an analyst at Enterprise Management Associates (EMA), said. “The payout in terms of increased developer productivity alone justifies the initial investments that come with achieving this ultimate level of abstraction between code and infrastructure. No more library hell: No more debugging app-specific infrastructure. No more refactoring of app code for edge deployments. In general, no more wasting developer time on stuff other than writing code,” Volk said. “This will get us to a state where we can truly compose new applications from existing components without having to worry about compatibility.”

 

Work to Be Done

But until we get that point of developer-productivity nirvana, work needs to be done. “Now we need all-popular Python libraries to work on WebAssembly and integrations with key components of modern distributed apps, such as NoSQL storage, asynchronous messaging, distributed tracing, caching, etc.,” Volk said. “Luckily there’s a growing number of startups completing the ‘grunt work’ for us to make 2024 the year when WebAssembly really takes off in production.”

Collaboration, alliances and harmony in the community, especially in the realm of open source, will be critical. “The one thing I’ve learned from the container wars is that we were fighting each other too early in the process. There was this mindset that the winner would take all, but the truth is the winner takes all the burden,” Kelsey Hightower, principal developer advocate, Google Cloud, said during the opening remarks at KubeCon + CloudNativeCon Europe’s Cloud Native Wasm Day. “You will be stuck trying to maintain the standards on behalf of everyone else. Remember collaboration is going to be super important — because the price for this has to be this invisible layer underneath that’s just doing all of this hard work.”

At the end of the day, those writing software probably just want to use their favorite language and framework in order to do it, Hightower said. “How compatible will you be with that? Or will we require them to rewrite all the software?” Hightower said. “My guess is anything that requires people to rewrite everything is doomed to fail, almost guaranteed and that there is no way that the world is going to stop innovating at the pace we’re on where the world will stop, and implement all the lower levels. So, it is a time to be excited, but understand what the goal is and make sure that this thing is usable and has tangible results along the way.”

During the sidelines of the conference, Peter Smails, senior vice president and general manager, enterprise container management, at SUSE, discussed how internal teams at SUSE shared an interest in Wasm without going into details about SUSE’s involvement. “WebAssembly has an incredibly exciting future and we see practical application of WebAssembly. I personally think of it as similar to being next-generation Java: it is a small, lightweight, fast development platform and, arguably, is an infrastructure that lets you write code and deploy it where you want and that’s pretty cool,” Smails told The New Stack.

In many ways, WebAssembly proponents face a chicken-and-egg challenge. After all, what developer would not want to be able to use the programming language of their choice to deploy applications to any environment or device without having to worry about configuration issues? What operations and security team would not appreciate a single, secure path from finalized application code to deployment on any device or environment (including Kubernetes), without the hassle of reconfiguring the application for each endpoint? But we are not there yet, and many risks must be taken and investments made before wide-scale adoption really does happen the way it should in theory.

“We have a lot of people internally very excited about it, but practically speaking, we don’t have customers coming to talk about this asking for the requirements — that’s why it’s in the future,” Smails said. “We see it more as a potentially exciting space because we’re all about infrastructure.”

Get the Job Done

Meanwhile, there is huge momentum to create, test and standardize the Wasm infrastructure to pave the way for mass adoption. This is thanks largely to the work of the open source community on projects sponsored in-house or by the new tool-provider startups that continue to sprout up, as mentioned above. Among the more promising projects discussed during the KubeCon + CloudNativeCon co-located event Cloud Native Wasm Day, Saúl Cabrera, a staff developer at Shopify, described how he is leading the development of Winch during his talk “The Road to Winch.” Winch is a compiler in Wasmtime created to improve startup performance beyond what existing Wasm compilers provide. As a baseline, intentionally non-optimizing alternative to an optimizing compiler, Winch (WebAssembly Intentionally Non-optimizing Compiler and Host) improves the startup times of WebAssembly applications, Cabrera said. Benchmark results that demonstrate the touted performance metrics will be available in the near future, Cabrera said.

Python and WebAssembly: Elevating Performance for Web Apps
https://thenewstack.io/python-and-webassembly-elevating-performance-for-web-apps/ (June 5, 2023)

Python developers have long appreciated the language’s versatility and productivity. However, concerns persist about Python’s performance limitations and seamless integration with other languages.

The emergence of WebAssembly (Wasm) bridges this gap. Wasm empowers Python users to explore new frontiers of speed, compatibility and language interoperability.

In this article, we’ll delve into the world of WebAssembly and its relevance for Python enthusiasts. We will explore how Wasm propels Python applications to near-native performance levels, extends their capabilities across platforms and ecosystems, and unlocks a plethora of possibilities for web-based deployments.

WebAssembly simplifies the deployment of Python applications on the web. By compiling Python code into a format that can be executed directly in the browser, developers can seamlessly deliver their Python applications to a wide range of platforms without the need for complex setup or server-side processing.

The combination of Wasm and Python empowers developers to build high-performance web applications, leverage existing Python code and libraries, and explore new domains where Python’s productivity and versatility shine.

The Benefits of Using WebAssembly with Python

Wasm brings a plethora of benefits when combined with Python, revolutionizing the way developers can leverage the language. Let’s explore some of the key advantages of using WebAssembly with Python:

Enhanced performance. Python, while highly expressive and easy to use, has traditionally been criticized for its relatively slower execution speed compared to low-level languages. By using WebAssembly, Python code can be compiled into highly optimized, low-level binary code that runs at near-native speed, significantly enhancing application performance and reducing network latency.

This performance boost allows Python developers to tackle computationally intensive tasks, process large datasets or build real-time applications with enhanced responsiveness.

Language interoperability. WebAssembly provides a seamless integration pathway between Python and other languages like C++, Rust, and Go. By leveraging WebAssembly’s interoperability features, Python developers can tap into the vast ecosystem of libraries and tools available in these languages.

This empowers developers to harness the performance and functionality of existing codebases, extending Python’s capabilities and enabling them to build sophisticated applications with ease.

Platform independence. Wasm is not limited to the web browser environment. It offers a cross-platform runtime, making it possible to execute Python code on a wide range of devices and operating systems.

This cross-platform compatibility enables Python developers to target desktop applications, mobile apps, Internet of Things (IoT) devices, and more, using a unified codebase. It reduces development efforts, simplifies maintenance and expands the reach of Python applications to diverse computing environments.

Web deployment. WebAssembly has gained significant traction as a deployment format for web applications. By compiling Python code to WebAssembly, developers can directly execute Python in the browser, eliminating the need for server-side execution or transpiling Python to JavaScript.

This opens up exciting possibilities for building web applications entirely in Python, with seamless client-side interactivity and reduced server-side load.

Performance-critical components. Wasm is an excellent choice for integrating performance-critical components or algorithms into Python applications.

By offloading computationally intensive tasks to WebAssembly modules written in languages like Rust or C, developers can achieve significant performance improvements without sacrificing the productivity and ease of use provided by Python.

This hybrid approach combines the best of both worlds, leveraging Python’s high-level abstractions with the speed and efficiency of low-level code.

A growing ecosystem and tooling. The WebAssembly ecosystem is rapidly evolving, with a thriving community and an expanding range of tools, libraries and frameworks. Python developers can tap into this vibrant ecosystem to compile, optimize and run their code in Wasm.

The availability of tooling makes adoption easier and ensures developers have the necessary resources to harness the power of WebAssembly effectively.

7 Steps to Compile Python Code to Wasm

What follows are general steps to compile Python code to WebAssembly; the exact process and tools may vary depending on the specific compiler and configuration you choose. Refer to the documentation and resources provided by the compiler you’re using for detailed instructions and best practices.

Additionally, keep in mind that not all Python code may be suitable for compilation to WebAssembly, especially if it relies heavily on features that are not supported in the Wasm environment or if it requires extensive access to system resources.

  1. Choose a WebAssembly compiler. There are several compilers available that can convert Python code to WebAssembly. One popular option is Emscripten, which provides a toolchain for compiling code written in C/C++ to WebAssembly, including Python through the CPython interpreter.
  2. Set up the development environment. Install the necessary dependencies and tools for the chosen compiler. This typically includes Python, a C/C++ compiler, and the WebAssembly compiler itself (such as Emscripten or Pyodide). Pyodide is a full Python environment that runs entirely in the browser, while Emscripten is a toolchain for compiling C and C++ code to Wasm.
  3. Prepare your Python code. Ensure that your Python code is compatible with the compiler. It’s essential to avoid using Python features or libraries that are not supported by the WebAssembly environment, as it has limited access to certain system resources.
  4. Compile Python to WebAssembly. Use the chosen compiler to translate the Python code into WebAssembly. The specific command or process will depend on the toolchain you’re using. With Emscripten-based toolchains such as Pyodide, it is the CPython interpreter (plus any required native extensions) that is compiled to Wasm; your Python source files are then packaged alongside it and executed by that interpreter.
  5. Optimize the WebAssembly output. After compiling, you may need to optimize the resulting Wasm code to improve performance and reduce the file size. The compiler may offer optimization flags or options to leverage to achieve this.
  6. Integrate with JavaScript. WebAssembly modules are typically loaded and interacted with through JavaScript. You will need to write JavaScript code that interacts with the compiled Wasm module, providing an interface for calling functions, passing data and handling the Python code’s results.
  7. Test and deploy. Once the compilation and integration steps are complete, test the WebAssembly module in various environments and scenarios to ensure it behaves as expected. You can then deploy the Wasm module to the desired target, such as a web server or an application that supports WebAssembly execution.
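To give a sense of where these steps lead, here is a minimal, hedged sketch of Python code as it might run in the browser once a toolchain like Pyodide (step 2 above) has done the compiling. It assumes the Pyodide runtime is already loaded in the page; the js module that exposes the browser’s globals is provided by Pyodide, not by standard CPython, so this snippet will not run under a plain desktop interpreter.

# Sketch: Python running in the browser on top of the Pyodide (CPython-on-Wasm) runtime.
# The `js` module is supplied by Pyodide and proxies the browser's JavaScript globals.
from js import document

greeting = document.createElement("p")
greeting.textContent = "Hello from Python running on WebAssembly!"
document.body.appendChild(greeting)

Note that nothing here touches a server: the interpreter itself has been compiled to Wasm, and the script is interpreted by it entirely on the client.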

Loading and Executing Wasm Modules in Python

Remember that the specific steps and syntax needed for loading and executing WebAssembly modules in Python may vary depending on which Wasm interface library you’ve chosen. Again, refer to the documentation and resources provided by the library you’re using for detailed instructions and examples.

Choose a WebAssembly interface. Select a Python library or package that provides the necessary functionality for loading and executing WebAssembly modules. Some popular options include wasmtime, pywasm, and pyodide.

Install the required libraries. Install the chosen WebAssembly interface library using a package manager like pip. For example, you can install wasmtime by running $ pip install wasmtime.

Load the WebAssembly module. Use the WebAssembly interface library to load the Wasm module into your Python environment. Typically, you will provide the path to the WebAssembly module file as input to the loading function.

For instance, with wasmtime, the wasmtime.wat2wasm() function translates a module written in the WebAssembly Text Format (WAT) into binary Wasm bytes, and Module.from_file() loads a compiled .wasm module directly from disk.

Create an instance. Once the Wasm module is loaded, you need to create an instance of it to execute its functions. This step involves invoking a function provided by the WebAssembly interface library, and passing the loaded module as a parameter.

The exact function and syntax may vary depending on the chosen library. For example, in wasmtime, you can use the wasmtime.Instance() constructor to create an instance.

Call WebAssembly functions. After creating the instance, you can access and call functions defined within the WebAssembly module. The Wasm interface library typically provides methods or attributes to access these functions.

You can invoke the functions using the instance object, passing any required arguments. The return values can be retrieved from the function call. The specific syntax and usage depend on the chosen library.
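Putting the load, instantiate and call steps together, here is a minimal sketch using the wasmtime package. The API shown matches recent versions of wasmtime-py, but treat the exact names as assumptions and check your version’s documentation. The tiny module is written inline in the WebAssembly Text Format so the example does not depend on a prebuilt .wasm file.

# Minimal sketch: load a Wasm module and call an exported function with wasmtime-py.
# Install the dependency first: pip install wasmtime
from wasmtime import Engine, Store, Module, Instance, wat2wasm

# A tiny module, in the WebAssembly Text Format, that exports an "add" function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)

wasm_bytes = wat2wasm(WAT)              # translate the text format into binary Wasm
module = Module(engine, wasm_bytes)     # compile the binary module
instance = Instance(store, module, [])  # this module needs no imports

add = instance.exports(store)["add"]    # look up the exported function by name
print(add(store, 1, 2))                 # prints 3

Loading a precompiled binary is just as direct: Module.from_file(engine, "module.wasm") replaces the wat2wasm() step.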

Handle data interchange. WebAssembly modules often require exchanging data between Python and the Wasm environment. This can involve passing arguments to WebAssembly functions or retrieving results back to Python.

The Wasm interface library should provide mechanisms or functions to handle data interchange between Python and WebAssembly. This may include converting data types or handling memory management.

Handle errors and exceptions. When working with WebAssembly modules, it’s important to handle errors and exceptions gracefully. The chosen WebAssembly interface library should provide error-handling mechanisms or exception classes to catch and handle any potential errors or exceptions that may occur during module loading or function execution.
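As a sketch of what that can look like with wasmtime-py, the call below traps on an integer division by zero, and the runtime surfaces the trap as a Python exception (the Trap class in the versions I have tried; treat the exact class as an assumption and confirm it against your library’s documentation). The inline module and its div export are purely illustrative.

# Sketch: catching a Wasm trap as a Python exception with wasmtime-py.
from wasmtime import Engine, Store, Module, Instance, Trap, wat2wasm

WAT = """
(module
  (func (export "div") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.div_s))
"""

engine = Engine()
store = Store(engine)
instance = Instance(store, Module(engine, wat2wasm(WAT)), [])
div = instance.exports(store)["div"]

print(div(store, 10, 2))   # prints 5

try:
    div(store, 1, 0)       # integer division by zero traps inside the Wasm VM
except Trap as trap:
    print("caught a Wasm trap:", trap)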

Test and iterate. Once the initial integration is complete, test the loaded WebAssembly module and its functions within your Python environment. Verify that the module executes as expected, produces the desired results, and handles edge cases appropriately. Iterate and refine your code as necessary.

Wasm and Python Use Cases across Different Domains

Scientific Simulations

Python is widely used in scientific computing, and WebAssembly can bring its computational capabilities to the web. For example, you can compile scientific simulation code written in Python to Wasm and run it directly in the browser.

This enables interactive and visually appealing web-based simulations, allowing users to explore scientific concepts without the need for server-side processing. Libraries like NumPy and SciPy can be utilized in combination with WebAssembly to achieve high-performance scientific simulations in the browser.

Machine Learning Models

Python is renowned for its rich ecosystem of machine learning libraries like TensorFlow, PyTorch, and Scikit-learn. With WebAssembly, you can compile trained machine learning models built in Python and deploy them in the browser or other environments.

This allows for client-side inference and real-time prediction capabilities without relying on server-side APIs. WebAssembly’s performance benefits enable efficient execution of complex models, empowering developers to create browser-based machine learning applications.

Web-Based Games

Python is increasingly used for game development due to its simplicity and versatility. By leveraging WebAssembly, Python game developers can bring their creations to the web without sacrificing performance.

By compiling game logic written in Python to Wasm, developers can create browser-based games with near-native speed and interactivity. Libraries like Pygame and Panda3D, when combined with WebAssembly, provide a powerful platform for cross-platform game development.

Web User Interfaces

Python developers can leverage Wasm to create rich, responsive UIs for web applications. By compiling Python UI frameworks or components, such as Pywebview or BeeWare, to WebAssembly, developers can build browser-based UIs that offer the simplicity and productivity of Python. This allows for a seamless user experience while retaining the power and expressiveness of Python for developing complex web applications.

Data Processing and Visualization

Python’s data processing and visualization libraries, such as Pandas, Matplotlib, and Plotly, can be used in conjunction with WebAssembly to perform data analysis and generate interactive visualizations directly in the browser.

By compiling Python code to Wasm, developers can create web applications that handle large datasets and provide real-time visualizations without the need for server-side computation.

The post Python and WebAssembly: Elevating Performance for Web Apps appeared first on The New Stack.

]]>
Demystifying WebAssembly: What Beginners Need to Know https://thenewstack.io/webassembly/webassembly-what-beginners-need-to-know/ Fri, 02 Jun 2023 12:35:19 +0000 https://thenewstack.io/?p=22708617

WebAssembly (Wasm) is a binary format that was designed to enhance the performance of web applications. It was created to

The post Demystifying WebAssembly: What Beginners Need to Know appeared first on The New Stack.

]]>

WebAssembly (Wasm) is a binary format that was designed to enhance the performance of web applications. It was created to address the limitations of JavaScript, an interpreted language that can lead to slower performance and longer page load times.

With WebAssembly, developers can compile code to a low-level binary format that can be executed by modern web browsers at near-native speeds. This can be particularly useful for applications that require intensive computation or need to process large amounts of data.

Compiling code to Wasm requires some knowledge of the programming language and tools being used, as well as an understanding of the WebAssembly format and how it interacts with the browser environment. However, the benefits of improved performance and security make it a worthwhile endeavor for many developers.

In this article, we will explore the basics of WebAssembly, including how it works with web browsers, how to compile code to Wasm, and best practices for writing secure WebAssembly code.

We will also discuss benchmarks and examples that illustrate the performance benefits of using WebAssembly compared to traditional web technologies. You will learn how WebAssembly can be used to create faster, more efficient and more secure web applications.

The Benefits of Using WebAssembly

As mentioned previously, WebAssembly offers faster execution times and improved performance compared to JavaScript, due to its efficient binary format and simpler instruction set. It enables developers to create web applications in languages other than JavaScript, such as C++ and Rust.

Wasm also provides a more secure environment for running code on the web. In addition to performance, there are several other benefits to using it in web development:

Portability. Wasm is designed to be language-agnostic and can be used with multiple programming languages, enabling developers to write code in their preferred language and compile it to WebAssembly for use on the web.

Security. It provides a sandboxed environment for executing code, making it more secure than executing untrusted code directly in the browser.

Interoperability. Wasm modules can be easily integrated with JavaScript, allowing developers to use existing libraries and frameworks alongside new WebAssembly modules.

Accessibility. It can be used to bring applications written in native languages to the web, making them more accessible to users without requiring them to install additional software.

WebAssembly can be represented in two forms: binary format and textual format.

The binary format is Wasm’s native form: a sequence of bytes representing the program’s instructions and data. It is designed to be compact, efficient and easily parsed by machines, and it is the form typically transmitted over the network when a Wasm program is loaded into a web page.

The textual representation of WebAssembly, on the other hand, is a more human-readable form that is similar to assembly language. The textual format is designed to be more readable, and easier to write and debug, than the binary format. The textual format consists of a series of instructions, each represented using a mnemonic and its operands, and it can be translated to the binary format using a WebAssembly compiler.

The textual format can be useful for writing and debugging Wasm programs, as it allows developers to more easily read and understand the program’s instructions. Additionally, the textual format can be used to write programs in high-level programming languages that can then be compiled to WebAssembly, which can help to simplify the process of writing and optimizing Wasm programs.
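As a small illustration of how the two forms relate, the sketch below uses the wat2wasm() helper from the wasmtime Python package (one of several tools that perform this translation) to turn the smallest possible module from the text format into the binary format, whose first four bytes are the \0asm magic number.

# Sketch: translating the textual format (WAT) into the binary format.
# Requires the wasmtime package: pip install wasmtime
from wasmtime import wat2wasm

wasm_bytes = wat2wasm("(module)")    # the smallest valid module

print(len(wasm_bytes), "bytes")      # an empty module is only a handful of bytes
print(wasm_bytes[:4])                # b'\x00asm', the Wasm binary magic number
print(wasm_bytes[4:8])               # the binary format version, little-endian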

What Is the WebAssembly Instruction Set?

WebAssembly has a simple, stack-based instruction set that is designed to be easy to optimize for performance. It supports basic types such as integers and floating-point numbers, as well as more complex data structures such as vectors and tables.

The Wasm instruction set consists of a small number of low-level instructions that can be used to build more complex programs. These instructions can be used to manipulate data types such as integers, floats and memory addresses, and to perform control flow operations such as branching and looping.

Some examples of WebAssembly instructions include:

  • i32.add: adds two 32-bit integers together.
  • f64.mul: multiplies two 64-bit floating-point numbers together.
  • i32.load: loads a 32-bit integer from memory.
  • i32.store: stores a 32-bit integer into memory.
  • br_if: branches to a given label if a condition is true.

WebAssembly instructions operate on a stack-based virtual machine, where values are pushed onto and popped off of a stack as instructions are executed. For example, the i32.add instruction pops two 32-bit integers off the stack, adds them together, and then pushes the result back onto the stack.

This is significant because it improves the efficiency and simplicity of execution.

A stack-based architecture allows for the efficient execution of instructions. Since values are pushed onto the stack, instructions can easily access and operate on the topmost values without the need for explicit addressing or complex memory operations. This reduces the number of instructions needed to perform computations, resulting in faster execution.

Also, the stack-based model simplifies the design and implementation of the virtual machine. Instructions can be designed to work directly with values on the stack, eliminating the need for complex register management or memory addressing modes. This simplicity leads to a more compact and easier-to-understand instruction set.
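To make the stack discipline concrete, here is a toy sketch in Python of how a stack machine evaluates two constants followed by an i32.add. It is purely an illustration of the execution model described above, not how a real Wasm engine is implemented.

# Toy model of Wasm's stack-based evaluation (illustration only, not a real engine).
stack = []

def i32_const(value):
    # push an integer constant onto the stack, wrapped to 32 bits
    stack.append(value & 0xFFFFFFFF)

def i32_add():
    # pop the two topmost operands, add them, and push the 32-bit result back
    b = stack.pop()
    a = stack.pop()
    stack.append((a + b) & 0xFFFFFFFF)

# equivalent to the instruction sequence: i32.const 40, i32.const 2, i32.add
i32_const(40)
i32_const(2)
i32_add()
print(stack)  # [42], the result is left on top of the stack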

The small number of instructions in the WebAssembly instruction set makes it easy to optimize and secure. Because the instructions are low-level, they can be easily translated into machine code, making Wasm programs fast and efficient.

Additionally, the fixed instruction set means that those programs are not prone to the same types of security vulnerabilities that can occur in more complex instruction sets.

How Does Wasm Work with the Browser?

WebAssembly code is loaded and executed within the browser’s sandboxed environment. It is typically loaded asynchronously using the fetch() API and then compiled and executed using the WebAssembly API.

Wasm can work with web browsers to provide efficient and secure execution of code in the client-side environment. Its code can be loaded and executed within a web page using JavaScript, and can interact with the Document Object Model (DOM) and other web APIs.

When a web page loads a WebAssembly module, the browser downloads the module’s binary file and compiles it to machine code using a virtual machine called the WebAssembly Runtime. The WebAssembly Runtime is integrated into the browser’s JavaScript engine and translates the Wasm code into machine code that can be executed by the browser’s processor.

Once the WebAssembly module is loaded and compiled, the browser can execute its functions and interact with its data. Wasm code can also call JavaScript functions and access browser APIs using JavaScript interop, which allows seamless communication between WebAssembly and JavaScript.

WebAssembly’s efficient execution can provide significant performance benefits for web applications, especially for computationally intensive tasks such as data processing or scientific calculations. Additionally, Wasm’s security model, which enforces strict memory isolation and control flow integrity, can improve the security of web applications and reduce the risk of security vulnerabilities.

How to Compile Code to WebAssembly

To compile code to WebAssembly, developers can use compilers that target the Wasm binary format, such as Clang or Emscripten.

Developers can also use languages that have built-in support for WebAssembly, such as Rust or AssemblyScript.

To compile code to WebAssembly, you will need a compiler that supports generating Wasm output. Here are some general steps:

  1. Choose a programming language that has a compiler capable of generating WebAssembly output. Some popular languages that support WebAssembly include C/C++, Rust and Go.
  2. Install the necessary tools for compiling code to WebAssembly. This can vary depending on the programming language and the specific compiler being used. For example, to compile C/C++ code to WebAssembly, you may need to install Emscripten, which is a toolchain for compiling C/C++ to WebAssembly.
  3. Write your code in the chosen programming language, making sure to follow any specific guidelines for WebAssembly output. For example, in C/C++, you may need to use special Emscripten-specific functions to interact with the browser environment.
  4. Use the compiler to generate WebAssembly output from your code. This will typically involve passing in command-line options or setting environment variables to specify that the output should be in Wasm format.

Optionally, optimize the WebAssembly output for performance or size. This can be done using tools such as wasm-opt or wasm-pack. Finally, load the generated WebAssembly code in your application or website using JavaScript or another compatible language.

Wasm modules are typically loaded asynchronously using the fetch() API.

Once the module is loaded, it can be compiled and instantiated using the WebAssembly API.

To load and run a WebAssembly module, you first need to create an instance of the module using the WebAssembly.instantiateStreaming or WebAssembly.instantiate method in JavaScript. WebAssembly.instantiateStreaming takes the Response returned by fetch()ing the module’s URL, while WebAssembly.instantiate takes the module’s bytes as an ArrayBuffer; both return a Promise that resolves to an object containing a WebAssembly.Module and a WebAssembly.Instance whose exports you can call.

Once you have the WebAssembly.Module object and exported functions, you can call the exported functions to interact with the Wasm module. These functions can be called just like any other JavaScript function, but they execute WebAssembly code instead of JavaScript code.

Here’s an example of how to load and run a simple WebAssembly module in JavaScript:

// Load the WebAssembly module from a binary file
fetch('module.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))
  .then(module => {
    // Get the exported function from the module
    const add = module.instance.exports.add;

    // Call the function and print the result
    const result = add(1, 2);
    console.log(result);
  });


In this example, we use the fetch API to load the WebAssembly binary file as an ArrayBuffer, and then pass it to the WebAssembly.instantiate method to create an instance of the WebAssembly module.

We then get the exported function add from the instance, call it with arguments 1 and 2, and print the result to the console.

It’s important to note that WebAssembly modules run in a sandboxed environment and cannot access JavaScript variables or APIs directly.

To communicate with JavaScript, WebAssembly modules must use the WebAssembly.Memory and WebAssembly.Table objects to interact with data and function pointers that are passed back and forth between the WebAssembly and JavaScript environments.

Performance Advantages of WebAssembly

WebAssembly can improve performance compared to other web technologies in a number of ways.

First, Wasm code can be compiled ahead-of-time (AOT) or just-in-time (JIT) to improve performance. AOT compilation allows WebAssembly code to be compiled to machine code that can be executed directly by the CPU, bypassing the need for an interpreter.

JIT compilation, on the other hand, allows WebAssembly code to be compiled to machine code on the fly, at runtime, which can provide faster startup times and better performance for code that is executed frequently.

Additionally, WebAssembly can take advantage of hardware acceleration, such as SIMD (single instruction, multiple data) instructions, to further improve performance. SIMD instructions allow multiple operations to be performed simultaneously on a single processor core, which can significantly speed up mathematical and other data-intensive operations.

Here are some benchmarks and examples that illustrate the performance benefits of using WebAssembly.

Game of Life. A cellular automaton that involves updating a grid of cells based on a set of rules. The algorithm is simple, but it can be computationally intensive. The WebAssembly version of the algorithm runs about 10 times faster than the JavaScript version.

Image processing. Image processing algorithms can be highly optimized using SIMD instructions, which are available in WebAssembly. The Wasm version of an image processing algorithm can run about three times faster than the JavaScript version.

AI/machine learning. Machine learning algorithms can be highly compute-intensive, making them a good candidate for WebAssembly. TensorFlow.js is a popular JavaScript library for machine learning, but its performance can be improved by using the WebAssembly version of TensorFlow. In some benchmarks, the Wasm version runs about two times faster than the JavaScript version.

Audio processing. WebAssembly can be used to implement real-time audio processing algorithms. The Web Audio API provides a way to process audio data in the browser, and the WebAssembly version of an audio processing algorithm can run about two times faster than the JavaScript version.

Wasm Security Considerations

WebAssembly supports various security policies that allow web developers to control how their code interacts with the browser’s resources. For example, Wasm modules can be restricted from accessing certain APIs or executing certain types of instructions.

WebAssembly code runs within the browser’s sandboxed environment, which limits its access to the user’s system.

Wasm code is subject to the same-origin policy, which restricts access to resources from a different origin (i.e., domain, protocol and port). This prevents Wasm code from accessing sensitive resources or data on a website that it shouldn’t have access to.

WebAssembly also supports sandboxing through the use of a memory-safe execution environment. This means that Wasm code cannot access memory outside of its own allocated memory space, preventing buffer overflow attacks and other memory-related vulnerabilities.

Additionally, WebAssembly supports features such as trap handlers, which can intercept and handle potential security issues, and permissions, which allow a module to specify which resources it needs access to.

Furthermore, Wasm can be signed and verified using digital signatures, ensuring that the code has not been tampered with or modified during transmission or storage. WebAssembly code can also be executed in a secure execution environment, such as within a secure enclave, to further enhance its security.

Best Practices for Writing Secure Wasm Code

When writing WebAssembly code, there are several best practices that developers can follow to ensure the security of their code.

Validate inputs. As with any code, it is important to validate inputs to ensure that they are in the expected format and range. This can help prevent security vulnerabilities such as buffer overflows and integer overflows.

Use memory safely. WebAssembly provides low-level access to memory, which can be a source of vulnerabilities such as buffer overflows and use-after-free bugs. It is important to use memory safely by checking bounds, initializing variables and releasing memory when it is no longer needed.

Avoid branching on secret data. Branching on secret data can leak information through side channels such as timing attacks. To avoid this, it is best to use constant-time algorithms or to ensure that all branches take the same amount of time.

Use typed arrays. WebAssembly provides typed arrays that can be used to store and manipulate data in a type-safe manner. Using typed arrays can help prevent vulnerabilities such as buffer overflows and type confusion.

Limit access to imported functions. Imported functions can introduce vulnerabilities if they are not properly validated or if they have unintended side effects. To limit the risk, it is best to restrict access to imported functions and to validate their inputs and outputs.

Use sandboxes. To further isolate WebAssembly code from the rest of the application, it can be run in a sandboxed environment with restricted access to resources such as the file system and network. This can help prevent attackers from using WebAssembly code as a vector for attacks.

Keep code minimal. Write minimal code with clear boundaries that separate untrusted and trusted code, thus reducing the attack surface area.

Avoid using system calls as much as possible. Instead, use web APIs to perform operations that require input/output or other system-related tasks.

Use cryptographic libraries. Well-established cryptographic libraries and primitives, such as libsodium, bcrypt or scrypt, can help secure your data.

The post Demystifying WebAssembly: What Beginners Need to Know appeared first on The New Stack.

]]>
Case Study: A WebAssembly Failure, and Lessons Learned https://thenewstack.io/webassembly/case-study-a-webassembly-failure-and-lessons-learned/ Thu, 25 May 2023 14:00:55 +0000 https://thenewstack.io/?p=22708922

VANCOUVER — In their talk “Microservices and WASM, Are We There Yet?” at the Open Source Summit North America, Kingdon

The post Case Study: A WebAssembly Failure, and Lessons Learned appeared first on The New Stack.

]]>

VANCOUVER — In their talk “Microservices and WASM, Are We There Yet?” at the Linux Foundation’s Open Source Summit North America, Kingdon Barrett, of Weaveworks, and Will Christensen, of Defense Unicorns, said they were as surprised as anyone that their talk was accepted, since they were newbies who had spent only about three weeks delving into this nascent technology.

And their project failed. (Barrett argued, “It only sort of failed … We accomplished the goal of the talk!”)

But they learned a lot about what WebAssembly, or Wasm, can and cannot do.

“Wasm has largely delivered on its promise in a browser and in apps, but what about for microservices?” the pair’s talk synopsis summarized. “We didn’t know either, so we tried to build a simple project that seemed fun, and learned Wasm for microservices is not as mature and a bit more complicated than running in the browser.”

“Are we there yet? Not really. There’s some caveats,” said Christensen. “But there are a lot of things that do work, but it’s not enough that I wouldn’t bet the farm on it kind of thing.”

Finding Wasm’s Limitations

Barrett, an open source support engineer at Weaveworks, called WebAssembly “this special compiled bytecode language that works on some kind of like a virtual machine that’s very native toward JavaScript. It’s definitely shown that is significantly faster than, let’s say, JavaScript running with the JIT (just-in-time compiler).

“And when you write software to compile for it, you just need to treat it like a different target — like on x86 or Arm architectures; we can compile to a lot of different targets.”

The speakers found there are limitations or design constraints, if you will:

  • You cannot access the network in an unpermissioned way.
  • You cannot pass a string as an argument to a function.
  • You cannot access the file system unless you have specified the things that are permitted.

“There is no string type,” Barrett said. “As far as I can tell, you have to manage memory and count the bytes you’re going to pass. Make sure you don’t lose that number. That’s a little awkward, but there is a way around that as well.”

The talk was part of the OpenGovCon track at the conference.

“We came up with this concept, being the government space, that I thought was going to be really interesting for an ATO perspective” — authorized to operate — “which is, how do you enable continuous delivery while still maintaining a consistent environment?” Christensen said.

The government uses ATO certification to manage risk in contractors’ networks by evaluating the security controls for new and existing systems.

One of the big potential benefits for government contractors with Wasm, Christensen said, is the ability to use existing code and to retain people with deep knowledge in a particular language.

“You can use that, tweak it a little bit and get life out of it,” he said. “You may have some performance losses where there may be some nuances, but largely you can retain a lot of that domain language or that sort of domain knowledge and carry it over for the future.”

Barrett and Christensen set out to write a Kubernetes operator.

“I wanted to write something in Go … so all your functions for this or wherever you need come in the event hooks,” Christensen said.

Then, instead of keeping the state in a function or a class inside that monolithic operator design, the idea is that you can reference an external value store, such as a Redis cache, a database or object storage. Wasm is small enough that a small binary can be loaded at initialization time.

If cold start times are not a problem, you could write something that will, on request, pull a Wasm module, load it, run it and return the result.

And, Christensen continued, “if you really want to get creative, you can shove it in as a config map inside of Kubernetes and … whatever you want to do, but the biggest thing is Wasm gets pulled in. And the idea is you call it almost like a function, and you just execute it.

“And each one of those executions would be a sandbox so you can control the exposure and security and what’s exposed throughout the entire operator. … You could statically compile the entire operator and control it that way. Anyone who wants to work in the sandbox with modules, they would have the freedom within the sandbox to execute. This is the dream. … Well, it didn’t work.”

The idea was that there would be stringent controls in a sandbox about how the runtime would be exposed to the Wasm module, which would include logging and traceability for compliance.

Runtimes and Languages

WebAssembly is being hailed for its ability to compile from any language, though Andrew Cornwall, a Forrester analyst, told The New Stack that it’s easier to compile languages that do not have garbage collectors, so languages such as Java, Python and interpreted languages tend to be more difficult to run in WebAssembly than languages such as C or Rust.

Barrett and Christensen took a few runtimes and languages for (ahem) a spin. Here’s what they found:

Fermyon Spin

Kubernetes’ RuntimeClass has been available since v1.12. Spin is easy to get started with but light on controls; the design requires privileged access to your nodes, and containerd shims control which nodes get provisioned with the runtime.

Kwasm

“There’s a field on the deployment class called runtimeClassName, and you can set that to whatever you want, as long as containerd knows what that means. So Kwasm operator breaks into the host node and sets up some containerd configuration imports of binary from wherever — this is not production ready,” Barrett said, unless you already had separate controls around all of those knobs and know how to authorize that type of grant safely.

He added, “Anyway, this was very easy to get your Wasm modules to run directly on Kubernetes this way, despite it does require privileged access to the nodes and it’s definitely not ATO.”

WASI/WAGI

WASI (WebAssembly System Interface) provides system interfaces; WAGI (WebAssembly Gateway Interface) permits standard IO to be treated as a connection.

“Basically, you don’t have to handle connections, the runtime handles that for you,” Barrett said. “That’s how I would summarize WAGI, and WASI is the system interface that makes that possible. You have standard input, standard output, you have the ability to share memory, and functions — you can import them or export them, call them from inside or outside of the Wasm, but only in ways that you permit.”

WasmEdge

WasmEdge Runtime, based on C++, became a Cloud Native Computing Foundation project in 2021.

The speakers extolled an earlier talk at the conference by Michael Yuan, a maintainer of the project, and urged attendees to look for it.

Wasmer/Wasmtime

Barrett and Christensen touted the documentation on these runtime projects.

“There are a lot of language examples that are pretty much parallel to what I went through … and it started to click for me,” Barrett said. “I didn’t really understand WASI at first, but going through those examples made it pretty clear.”

They’re designed to get you thinking about low-level constructs of Wasm:

  • What is possible with a function, memory, compiler.
  • How to exercise these directly from within the host language.
  • How to separate your business logic.
  • Constraints in these environments will help you scope your project’s deliverable functions down smaller and smaller.

Wasmtime or Wasmer run examples in WAT (WebAssembly Text Format), a textual representation of the Wasm binary format, something to keep in mind when working in a language like Go. If you’re trying to figure out how to call modules in Go and it’s not working, check out Wazero, the zero-dependency WebAssembly runtime written in Go, Barrett said.

Rust

It has first-class support and the most documentation, the speakers noted.

“If you have domain knowledge of Rust already, you can start exploring right now how to use Wasm in your production workflow,” Christensen said.

Node.js/Deno

Wasm was first designed for use in web browsers. There’s a lot of information out there already about the V8 engine running code that wasn’t just JavaScript in the browser. V8 is implemented in C++ with support for JavaScript. That same V8 engine is found at the heart of Node.js and Deno. The browser-native JavaScript runtimes in something like Node.js or Deno are what made their use with Wasm so simple.

“A lot of the websites that had the integration already with the V8 engine, so we found that from the command line from a microservices perspective was kind of really easy to implement,” Christensen said.

“So the whole concept about the strings part, about passing it with a pointer, if you’re running Node.js and Deno, you can pass strings natively and you don’t even know it’s any different. …Using Deno, it was really simple to implement. …There are a lot of examples that we’ve discovered, one of which is ‘Hello World,’ actually works. I can compile it so it actually runs and can pass a string and get a string out simply from a web assembly module with Deno.”

Christensen said that Deno or Node.js currently provides the best combination of WASM support that is production ready with a sufficient developer experience.

A Few Caveats

“But a little bit of warning when you go to compile,” Christensen said. “What we have discovered is: all WASM is not compiled the same.”

There are three compilers for Wasm:

  • Singlepass doesn’t have the fastest runtime, but has the fastest compilation.
  • Cranelift is a main engine used in Wasmer and Wasmtime. It doesn’t have the fastest runtime (though it’s much better than Singlepass’s), and its compilation time falls between Singlepass and LLVM.
  • LLVM has the slowest compile time. No one who’s ever used LLVM is surprised there, but it is the fastest at runtime.

A Few Problems

Pointer functions for handling strings are problematic. String passing, specifically with Rust, even when done correctly, could decrease performance by up to 20 times, they said.

There is a significant difference between compiled and interpreted languages when compiled to a Wasm target. Wasm binaries for Ruby and Python may carry a 20MB to 50MB size penalty compared to Go or Rust because of the inclusion of the interpreter.

“And specifically, just because we’re compiling Ruby or Python to Wasm, you do need to compile the entire interpreter into it,” Christensen said. “So that means if you are expecting Wasm to be better for boot times and that kind of stuff, if you’re using an interpreted language, you are basically shoving the entire interpreter into the Wasm binary and then running your code to be on the interpreter. So please take note that it’s not a uniform experience.”

“If you’re using an interpreted language, it’s still interpreted in Wasm,” Barrett said. “If you’re passing the script itself into Wasm, the interpreter is compiled in Wasm but the script is still interpreted.”

And Christensen added, “You’re restricted to the runtime restrictions of the browser itself, which means sometimes they may be single-threaded. Good, bad, just be aware.”

Chromium-based web browsers, Deno and Node.js all use the V8 engine, meaning they all exhibit the same limitations when running Wasm.

And language threading needs to be known at runtime for both host and module.

“One thing I’ve noticed: in Go, if I use the HTTP module to do a request from a Wasm-compiled Go module from Deno, there is no way that I can turn around and make sure that’s not gonna break the threaded nature of Deno and that V8 engine,” Christensen said.

He added, “Maybe there’s an answer there, but I didn’t find it. So if you are just getting started and you’re just trying to mess around and try to find all that happening, just know that you may spend some time there.”

And what happens when you have a C dependency with your RubyGem?

Barrett said he didn’t try that at all.

“Most Ruby dependencies are probably native Ruby, not native extensions,” he said. “They’re pure Ruby, but a ‘native extension’ is Ruby compiling C code. And then you have to deal with C code now,” in addition to Ruby.

“Of course, C compiles to Wasm, so I’m sure there is a solution for this. But I haven’t found anyone who has solved it yet.”

It applies to some Python packages as well, Christensen said.

“They [Python eggs] are using the binary modules as well, so there is definitely no way to do a [native system] binary translation into Wasm — binary to binary,” he said. “So if you need to do it, you need to get your hands dirty, compile the library itself to Wasm, then compile whatever gem or package that function calls are there.”

The speakers said that in working with Wasm, they found that ChatGPT wasn’t very helpful and that debugging can be harsh.

So, Should You Be Excited about Wasm?

“Yes. There’s plenty of reasons to be excited,” Christensen said. “It may not be ready yet, but I definitely think it’s enough to move forward and start playing around yourself.”

When Wasm is fully mature, he said, it will have benefits in terms of tech workforce retention, especially in governmental organizations: “You can take existing workforce, you don’t have to re-hire and you can get longevity out of them. Especially to have all that wonderful domain knowledge and you don’t have to re-solve the same problem using a new tool.

“If you have a lot of JavaScript stuff, [you’ll have] better control over it and it runs faster, which is the whole reason why Wasm is interesting,” Christensen said. The reason is that JavaScript compiled to Wasm is much faster, as the V8 engine no longer has to do “just-in-time” operations.

“And then finally, I’m sure a lot of you have an ARM MacBook, and then you try to deploy something to the cloud,” he said. “And next thing you realize, ‘Oh look, my entire stack is in x86.’ Well, Wasm magically does take care of this. I did test this out on a Mac Mini and ran it on a brand new AMD 64 system and Deno couldn’t tell the difference.”

WebAssembly is ready to be tested, Christensen said, and the open source community is the way to make that happen.

“Let the maintainers know; start talking about it. Bring up issues. We need more working examples. That’s missing. We can’t even get ChatGPT to give us anything decent,” he said, so the community is relying on its members to experiment with it and share their experiences.

The post Case Study: A WebAssembly Failure, and Lessons Learned appeared first on The New Stack.

]]>
New Image Trends Frontend Developers Should Support https://thenewstack.io/new-image-trends-frontend-developers-should-support/ Thu, 25 May 2023 13:00:55 +0000 https://thenewstack.io/?p=22708951

Media management firm Cloudinary is working on a plug-in that will enable developers to leverage its image capabilities from within

The post New Image Trends Frontend Developers Should Support appeared first on The New Stack.

]]>

Media management firm Cloudinary is working on a plug-in that will enable developers to leverage its image capabilities from within ChatGPT.

It’s part of keeping up with new technologies that, like AI, are changing user expectations when it comes to a frontend experience, said Tal Lev-Ami, CTO and co-founder of online media management company Cloudinary.

“If you look at e-commerce, many websites now have ways to know what you want to buy the 360 [degree] way and some of them also have integrated AR experiences where you can take whatever object it is and either see it in the room or see it on yourself,” Lev-Ami told The New Stack. “These are considerations that are becoming more critical for developers to support.”

Another thing developers should consider is how AI-enabled media manipulation will alter the expectations of end users. He compared it to the internet’s shift from simply text to using images. Images didn’t replace text, but users suddenly expected images on web pages.

“The expectations of the end users on the quality and personalization of the media is ever increasing, because they see ads and they see more sophisticated visual experiences,” he said. “It’s not that everything before is meaningless; it’s still needed. But if you’re not there to meet the expectations of the end user in terms of experiences, then you’re getting left behind.”

Supporting 3D

There are challenges around supporting 3D, such as how to optimize images and (for instance) how to take a file developed for CAD and convert it to a media 3D format that’s supported on the web, such as glTF, an open standard file format for three-dimensional scenes and models, Lev-Ami said.

A case study with Minted, a crowdsourced art site with 59.8 million images, offers a look at what’s required to support 3D. Minted used Cloudinary to improve its image generation pipeline with support for a full set of 2D and 3D transforms and automation technology. A single product at Minted can have more than 100,000 variants, according to a case study of Minted’s Cloudinary deployment.

The case study explained how the art site worked with the media company to create a 3D shopping experience. First, the images of the scenes are created in a studio; then an internal image specialist slices each image into layers and corrects for transparency, color and position. A script is then used to generate the coordinates needed to position these layers as named transforms into a text file (CSV), which, when uploaded to Cloudinary (with the previously created screen layers), creates the final image.

Separately, Minted’s proprietary pipeline ingests raw art files from artists and builds the base images for each winning design. When a customer navigates to an art category page or product details page on Minted, the page sends requests to Cloudinary for images that composite the correct combination of scenes, designs, frame and texture into the final thumbnails, the case study explained.

“For close-up product images, Minted makes use of Cloudinary’s 3D rendering capability as well as its e_distort API feature,” the case study noted. “A 3D model with texture UV mapping was created for the close-up image that shows off the texture and wrapping effect of a stretched canvas art print. With some careful tweaking of the 3D coordinates, the model is uploaded and Cloudinary does the rest, composing the art design as texture onto the model.”

Bring Your Own Algorithms

WebAssembly is another relative newcomer technology for the frontend, where it can be used to deploy streaming media, so I asked Lev-Ami whether Wasm is also changing how media works on the frontend, or perhaps how Cloudinary manages its own workload. While Cloudinary does deploy Wasm to support edge computing, the company also allows developers to upload Wasm and run their own algorithms.

“We actually have a capability where you can upload your own Wasm so that you can run your own algorithm as part of the media processing pipeline,” he said. “If you have some unique algorithm that you want to run as part of the media processing pipeline, you can do that. The safety and security around Wasm allows us to be more open as a platform and allows customers to handle use cases where they need to run their own algorithms part of the pipeline.”

Wasm carries fewer security risks than conventionally executed code because it runs within its own sandbox, according to Andrew Cornwall, a senior analyst with Forrester who specializes in the application development space. Code compiled to WebAssembly can’t grab passwords, for instance, Cornwall recently told The New Stack.

The post New Image Trends Frontend Developers Should Support appeared first on The New Stack.

]]>
Could WebAssembly Be the Key to Decreasing Kubernetes Use? https://thenewstack.io/could-webassembly-be-the-key-to-decreasing-kubernetes-use/ Mon, 22 May 2023 13:00:06 +0000 https://thenewstack.io/?p=22708613

WebAssembly, aka Wasm, is already changing how companies deploy Kubernetes, according to Taylor Thomas, a systems engineer and director of

The post Could WebAssembly Be the Key to Decreasing Kubernetes Use? appeared first on The New Stack.

]]>

WebAssembly, aka Wasm, is already changing how companies deploy Kubernetes, according to Taylor Thomas, a systems engineer and director of customer engineering at Cosmonic. Fortune 100 companies are spinning down Kubernetes clusters to use Wasm instead, he said.

There will always be a place for Kubernetes, he added — just perhaps not as an ad hoc development platform.

“We’ve seen so many companies in the Fortune 100 who we’ve talked to who are getting rid of Kubernetes teams and spinning down Kubernetes clusters,” Thomas told The New Stack. “It’s just so expensive. It’s so wasteful that the utilization numbers we get from most people are 25 to 35%.”

Kubernetes forces developers to care about infrastructure and they don’t necessarily want to, he added.

“Basically, developers have to care about their infrastructure much more than they need to,” he said. “A lot of these things around microservices, we did them in Kubernetes because that was a great way to do it before we had stuff like WebAssembly, but microservices and functions … all those things work better in a world where WebAssembly exists because you focus just on writing that code.”

WebAssembly, or Wasm, is a low-level byte code that can be translated to assembly. A bytecode is computer object code that an interpreter converts into binary machine code so it can be read by a computer’s hardware processor.

Cosmonic Bets on Open Source

Cosmonic is counting on Wasm winning. In April, the WebAssembly platform-as-a-service company launched its open beta and released Cosmonic Connect, a set of third-party connectors designed to simplify Wasm integration. The first Cosmonic Connect integration to launch was Cosmonic Connect Kubernetes.

“You can now connect Kubernetes clusters with a single command,” he said. “We manage all the Wasm cloud-specific bits. We have a beautiful UI you can use to see and manage these things.”

Cosmonic is also involved in furthering WebAssembly standards, including the proposed component model. With the component model, language silos could be broken down by compiling to Wasm, Thomas said. Functions then become like Lego blocks — developers could combine functions from different languages into WebAssembly and the functions would work together, he added.

“We’ve been focusing on a common set of contracts that we’ve been using at Wasm cloud for a long time, and we’re now centralizing on in the WebAssembly community called wasi-cloud,” he said. “These things are wasi key value, wasi messaging — [if] you want to use a key-value database in 80% of the use cases, you just need the same five functions — get set, put, all these common things — and so it’s defined by an interface.”

That will allow developers to “click” code from different languages together, he said.

“That language barrier is so incredibly powerful — that really fundamentally changes how we put together applications,” Thomas said. “Because of WebAssembly being able to compile from any language, that thing you’re using could be written in Rust or C, and the thing you’re writing could be in Go or Python, and then they plug together when they actually run.”

That doesn’t just break the language barrier — it can also break down vendor barriers because now everything can be moved around, he added. Components will also liberate developers from being locked into custom software development kits (SDKs) or libraries, he said.

“It’s a walled garden and we don’t want that to be the case. We want it to be you just write against the contracts and we provide the stuff you need for our platform but you just focus on the code part of it,” he said. “That’s very different than all these other approaches where you either had to confine yourself to a specific language or a specific type of way things were set up or any of those kinds of details.”

Cosmonic also is a maintainer on the CNCF project wasmCloud and works with the wasmCloud Application Deployment Manager (WADM) standard. He compared WADM to running a YAML file.

“WADM gives you the ability to connect to something to use a familiar pattern,” Thomas said. “A user is able to define their application, they can say, Okay, here’s the dependencies I’m using that I’m going to link and at runtime, here’s the configuration I’m passing to it. And here’s the code I’m running. And they can specify all those things where they want to run it, and then it’ll run it everywhere for them, and then automatically reconcile if something disappears, or something moves around.”

The post Could WebAssembly Be the Key to Decreasing Kubernetes Use? appeared first on The New Stack.

]]>
Forrester on WebAssembly for Developers: Frontend to Backend https://thenewstack.io/forrester-on-webassembly-for-developers-frontend-to-backend/ Wed, 17 May 2023 13:00:11 +0000 https://thenewstack.io/?p=22708204

There are a lot of things to love about WebAssembly — but how do developers decide when to use it?

The post Forrester on WebAssembly for Developers: Frontend to Backend appeared first on The New Stack.

]]>

There are a lot of things to love about WebAssembly — but how do developers decide when to use it? Does it matter in what language you write to WebAssembly? And what about security? To learn more about what frontend developers need to know, I sat down with Andrew Cornwall, a senior analyst with Forrester who specializes in the application development space.

The good news is that functionality does not change depending on which language you write in. Write in C++, AssemblyScript, Rust — it’s the developer’s choice, Cornwall said. Typically, it’s easier to compile languages that do not have garbage collectors, so languages such as Java, Python and interpreted languages tend to be more difficult to run in WebAssembly than languages such as C or Rust. But the end result will be WebAssembly, which he noted is best thought of as a processor rather than a language.

“Something like JavaScript or Java or Python, where there’s a whole ecosystem in there that needs to be in place before you can run,” Cornwall said.

Typically, developers will take the C implementation of Python and compile it using a compiler that outputs WebAssembly, he said. Now they have a Python interpreter that is written in WebAssembly, which they can then feed regular Python code.

“That is easier to do than converting Python to WebAssembly itself,” he added. “Once it’s in WebAssembly, it doesn’t matter. It just runs — it’s essentially very similar to machine code.”

For other supported languages, rather than compiling to x86 or Arm, developers choose WebAssembly as the target when compiling, he explained. The compiler outputs the bytecode that will run — WebAssembly, or Wasm, is a low-level bytecode that can be translated to assembly. A bytecode is computer object code that an interpreter converts into binary machine code so it can be read by a computer’s hardware processor. Essentially, WebAssembly converts code to this portable binary-code format. As such, it has more in common with machine language than anything else, and that’s why it’s so gosh darn fast.

Wasm Use Cases for the Frontend

When WebAssembly first came out, it was seen primarily as a solution for frontend needs, Cornwall said. Typical use cases for the frontend include operations with a lot of matrix math and video. If you need something to start executing right away and don’t have time to wait for the JavaScript to download and parse in the browser, then WebAssembly is a great solution, he said. For instance, the BBC created a video player for its site in Wasm, and Figma is written in C++ and compiled to WebAssembly, which cut its load time by a factor of three.

“WebAssembly can be streaming so you can download it and start executing it right away,” Cornwall said. “Other than that, the other interesting use case for WebAssembly on web front ends is going to be not so much for JavaScript developers, but for developers of other things.”

That’s in part because JavaScript running through the just-in-time [JIT] compiler is actually pretty fast, he said, adding that developers can get to half native speed with JavaScript “if you let it run long enough.” For other developers, Wasm means they can write in their favorite, supported code and then compile to Wasm for the frontend.

“The interesting parts where WebAssembly gets used are essentially things where you’d go down to machine code if you were writing a program in another language,” he said. “If there is something that needs to be really fast right away, and you can’t afford to wait for the JIT to bring it into high speed by optimizing it, or if there is something you need it to start right away and you don’t want to wait for the time for the JavaScript to be parsed, for instance, so you have it in WebAssembly.”

Wasm for the Backend

Then a funny thing happened along the way to the assembly (ahem): Wasm started to become less of a frontend thing and more of a backend thing as it began to be leveraged for serverless compute, he said.

“WebAssembly VMs [virtual machines] start really fast compared to JavaScript VMs or containers,” he said. “A JavaScript VM starts in milliseconds, so 50, 100 milliseconds; WebAssembly VMs can start in microseconds. … If you’re running serverless functions, that’s great because you make a call out to the server and say, give me the result. It can then start up and give you the results really quickly, whereas other things like JavaScript VMs, Java VMs and containers have that startup time: the cost it takes for them to start running before they can do something with the values that you’re passing them and give you the result.”

That includes Kubernetes containers, he added. And there are places — serverless functions or where the web browser wants to make a request of a search function — where developers would want to use WebAssembly VMs instead of a Kubernetes container, he added.

“If you send that search request off, you’re waiting until the container comes up, runs the search code itself and then sends the result back. Often containers will allow multiple connections because it’s expensive to bring a container up,” he said. “So with Kubernetes, there is a cost to bring the container up. With WebAssembly you don’t have as much of a cost. It’s microseconds to come up rather than milliseconds; or if it’s a container, it could even be hundreds of milliseconds or getting close to half a second.”

Multiply that by thousands of requests and those milliseconds start to add up.

How Wasm Improves Security

There’s also a security risk in containers because people tend to reuse them rather than shut them down and start over. That’s not an issue with Wasm.

“Then you need to worry about how what someone who came before me did affects what the current person is requesting, or what’s going on with the current request,” Cornwall said. “With WebAssembly it’s so cheap, you just throw it away. You can just write a serverless function, start up the VM, execute the serverless function and then throw it all away and wait for the next request.”
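As a rough illustration of that start-execute-discard pattern, here is a hedged Go sketch using the wasmtime-go SDK; the module path version suffix, the handler.wasm file and its exported “handle” function are assumptions made for the example, not anyone’s production code:

```go
package main

import (
	"log"
	"net/http"

	wasmtime "github.com/bytecodealliance/wasmtime-go/v14" // version suffix is an assumption
)

func main() {
	engine := wasmtime.NewEngine()

	// Compile the serverless function once up front; compilation is the expensive part.
	module, err := wasmtime.NewModuleFromFile(engine, "handler.wasm") // hypothetical module
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
		// A fresh store and instance per request: cheap to create and discarded as soon
		// as the response is written, so nothing from a previous caller can leak in.
		store := wasmtime.NewStore(engine)
		instance, err := wasmtime.NewInstance(store, module, nil)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		handle := instance.GetFunc(store, "handle") // exported name is an assumption
		if handle == nil {
			http.Error(w, "handle export not found", http.StatusInternalServerError)
			return
		}
		if _, err := handle.Call(store); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```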

Not that Wasm is a replacement for containers all the time, he cautioned. Containers are still needed and make sense when running big queries on large databases, where adding another 300 milliseconds to the query really doesn’t make much of a difference.

“Things like that will probably stay in containers because it is a little bit easier to manage a container, at least right now, than it is to manage WebAssembly serverless functions that just kind of float around in space,” he said. “WebAssembly is going to be an addition to when you need to make fast calls to serverless functions, as opposed to taking over for all containers.”

Another way Wasm is more secure than other options is that it will only execute within its sandbox — nothing goes outside of the sandbox. That’s why so far the biggest security threat seen with WebAssembly has been from websites where bitcoin miners were hidden in the WebAssembly, causing website users to unwittingly lend their CPUs for bitcoin mining. It’s not possible for code compiled into Wasm to reach out and send passwords, for instance, because the code stays within the Wasm sandbox, Cornwall explained.

The post Forrester on WebAssembly for Developers: Frontend to Backend appeared first on The New Stack.

]]>
Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ https://thenewstack.io/dev-news-dart-3-meets-wasm-flutter-3-10-and-qwik-streamable-javascript/ Sat, 13 May 2023 16:00:58 +0000 https://thenewstack.io/?p=22708063

Google released Dart 3 this week, with the big news being it is now a 100% sound null-safe language and

The post Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ appeared first on The New Stack.

]]>

Google released Dart 3 this week, with the big news being it is now a 100% sound null-safe language and the first preview of Dart to WebAssembly compilation.

“With 100% null safety in Dart, we have a sound type system,” wrote Michael Thomsen, the product manager working on Dart and Flutter. “You can trust that if a type says a value isn’t null, then it never can be null. This avoids certain classes of coding errors, such as null pointer exceptions. It also allows our compilers and runtimes to optimize code in ways it couldn’t without null safety.”

The trade-off, he acknowledged, is that migrations became a bit harder. However, 99% of the top 1000 packages on pub.dev support null safety, so Google expects the “vast majority of packages and apps that have been migrated to null safety” will work with Dart 3. For those who do experience problems using the Dart 3 SDK, there’s a Dart 3 migration guide.

Thomsen also announced a first preview of Dart to WebAssembly compilation. Flutter, which is written in Dart, already uses Wasm, he added.

“We’ve long had an interest in using Wasm to deploy Dart code too, but we’ve been blocked. Dart, like many other object-oriented languages, uses garbage collection,” he wrote. “Over the past year, we’ve collaborated with several teams across the Wasm ecosystem to add a new WasmGC feature to the WebAssembly standard. This is now near-stable in the Chromium and Firefox browsers.”

Compiling Dart to Wasm modules will help achieve high-level goals for web apps, including faster load times; better performance because Wasm modules are low-level and closer to machine code; and semantic consistency.

“For example, Dart web currently differs in how numbers are represented,” he wrote. “With Wasm modules, we’d be able to treat the web like a ‘native’ platform with semantics similar to other native targets.”

Also in Dart 3, Google added records, patterns and class modifiers. The long-standing request for multiple return values was Dart’s fourth highest-rated language issue, and by adding records, developers can “build up structured data with nice and crisp syntax,” Thomsen noted.

“In Dart, records are a general feature,” he stated. “They can be used for more than function return values. You can also store them in variables, put them into a list, use them as keys in a map, or create records containing other records.”

Records simplify how you build up structured data, he continued, while not replacing using classes for more formal type hierarchies.

Patterns come into play when developers might want to break that structured data into its individual elements to work with them. Patterns shine when used in a switch statement, he explained. While Dart has had limited support for switch, in Dart 3, they’ve broadened the power and expressiveness of the switch statement.

“We now support pattern matching in these cases. We’ve removed the need for adding a break at the end of each case. We also support logical operators to combine cases,” he wrote.

Google also added class modifiers for fine-grained access control for classes.

“Unlike records and patterns that we expect every Dart developer to use, this is more of a power-user feature. It addresses the needs of Dart developers crafting large API surfaces or building enterprise-class apps,” Thomsen stated. “Class modifiers enable API authors to support only a specific set of capabilities. The defaults remain unchanged though. We want Dart to remain simple and approachable.”

Flutter v3.10 Released

Since Flutter is built on Dart, and Dart 3 launched this week, it’s not surprising that Google also launched Flutter version 3.10 at its Google I/O event Wednesday. It was buried in the slew of news announcements, but fortunately, more details were available in a blog post by Kevin Chisholm, Google’s technical program manager for Dart and Flutter.

Flutter 3.10 includes improvements to web, mobile, graphics and security. The framework now complies with Supply Chain Levels for Software Artifacts (SLSA) Level 1, which adds security features such as:

  • Scripted build process, which now allows for automated builds on trusted build platforms;
  • Multi-party approval with audit logging, in which all executions create auditable log records; and
  • Provenance, with each release publishing links to view and verify provenance on the SDK archive.

This is also the first step toward SLSA L2 and L3 compliance, which focus on protecting artifacts during and after the build process, Chisholm explained.

When it comes to the web, there are a number of new changes, including improved load times for web apps, because the release reduces the file size of icon fonts and prunes unused glyphs from Material and Cupertino. Also reduced in size: CanvasKit for all browsers, which should further improve performance.

It also now supports element embedding, which means developers can serve Flutter web apps from a specific element in a page. Previously, apps could either take up the entire page or display within an iframe tag.

The Impeller rendering engine on iOS was tested in the 3.7 stable release, but with v3.10 it’s now set as the default renderer on iOS, which should translate into “less jank and better, consistent performance,” Chisholm wrote. In fact, eliminating jank is a big part of this release: Chisholm thanked open source contributor luckysmg, who discovered that it was possible to slash the time to get the next drawable layer from the Metal driver.

“To get that bonus, you need to set FlutterView’s background color to a non-nil value,” he explained. “This change eliminates low frame rates on recent iOS devices with 120Hz displays. In some cases, it triples the frame rate. This helped us close over half a dozen GitHub issues. This change held such significance that we backported a hotfix into the 3.7 release.”

Among the other lengthy list of improvements are the ability to decode APNG images, improved image loading APIs and support for wireless debugging.

Qwik v1.0: A Full-Stack Framework with ‘Streaming JavaScript’

Qwik, a full-stack web framework, reached version 1.0 this week, with the Qwik team promising a “fundamentally new approach to delivering instant apps at scale.”

The open source JavaScript framework draws inspiration from React, Vue, Angular, Svelte, SolidJS and their meta frameworks — think Next.js, Nuxt, SvelteKit — according to the post announcing the new release. Qwik promises to provide the same strengths as these frameworks while adapting for scalability.

“As web applications get large, their startup performance degrades because current frameworks send too much JavaScript to the client. Keeping the initial bundle size small is a never-ending battle that’s no fun, and we usually lose,” the Qwik team wrote. “Qwik delivers instant applications to the user. This is achieved by keeping the initial JavaScript cost constant, even as your application grows in complexity. Qwik then delivers only the JavaScript for the specific user interaction.”

The result is that the JavaScript doesn’t “overwhelm” the browser even as the app becomes larger. It’s like streaming for JavaScript, they added.

To that end, Qwik solves for instant loading time with JavaScript streaming, speculative code fetching, lazy execution, optimized rendering time and data fetching, to name a few of the benefits listed in the post.

It also incorporates ready-to-use integrations with popular libraries and frameworks, the post noted. Qwik also includes adapters for Azure, Cloudflare, Google Cloud Run, Netlify, Node.js, Deno and Vercel.

The post Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ appeared first on The New Stack.

]]>
Our WebAssembly Experiment: Extending NGINX Agent https://thenewstack.io/our-webassembly-experiment-extending-nginx-agent/ Thu, 11 May 2023 15:21:03 +0000 https://thenewstack.io/?p=22707568

This is the second in a two-part series. Read Part 1 here. At NGINX, we’re excited about what WebAssembly (Wasm)

The post Our WebAssembly Experiment: Extending NGINX Agent appeared first on The New Stack.

]]>

This is the second in a two-part series. Read Part 1 here.

At NGINX, we’re excited about what WebAssembly (Wasm) can offer the community, especially in regard to extensibility. We’ve built a variety of products that benefit from modularity and plugins, including NGINX Open Source and NGINX Plus. This also includes open source NGINX Agent, which is a companion daemon that enables remote management of NGINX configurations, alongside collection and reporting of real-time NGINX performance and operating system metrics.

NGINX Agent is designed with modularity in mind, and it’s written in a popular and Wasm-friendly language: Go. It also uses a publish-subscribe event system to push messages to cooperating plugins. Its current stage of development, however, limits plugin creation to the Go language and static linkage.

Seeing as NGINX Agent is designed with a powerful and flexible architecture, we wondered how we could improve the developer experience by experimenting with an external plugin model (caveat: not as a roadmap item, but to evaluate the ergonomics of using Wasm in a production-grade system).

The choices available to us are wide and varied. We could directly use one of the many runtime engines in development, build some bespoke tools and bindings, or adopt one of the burgeoning plugin software development kits (SDKs) developing in the community. Two such SDKs — Extism and waPC — are compelling, active, excellent examples of the growing ecosystem surrounding Wasm outside the browser.

The Extism and waPC projects take complementary but different approaches to embedding Wasm into an application. They provide server-side SDKs to simplify runtime interfaces, loading and executing Wasm binaries, life-cycle management and server function exports, while also expanding the language set available to the programmer.

Another project, Wasmtime, provides APIs for using Wasm from Rust, C, Python, .NET, Go, BASH and Ruby. Extism has expanded on that set with OCaml, Node, Erlang/Elixir, Haskell, Zig. It also provides an extensive collection of client-side APIs, referred to as plug-in development kits (PDKs). The waPC project takes a similar approach by providing server-side and client-side SDKs to ease the interaction with the underlying runtime engine.

However, some significant differences remain between Extism and waPC. Here is a basic comparison chart:

  • Extism: helper APIs (e.g., memory allocation, function exists); waPC: fewer client-side APIs (cannot access memory)
  • Extism: direct runtime invocations; waPC: abstracted runtime invocations with indirect server and client APIs
  • Extism: single runtime engine; waPC: multiple runtime engines
  • Both: host function exports
  • Extism: complex routing input and output system; waPC: simplified inputs and language-native function output
  • Extism: high number of server languages; waPC: limited server language support (Rust, Go, JavaScript)
  • Extism: high number of client languages; waPC: limited client language support (Rust, Go, AssemblyScript, Zig)
  • Extism: required C namespace code; waPC: C namespace and bindings hidden behind abstraction
  • Both: early, pre-GA development releases
  • Both: active projects
  • Extism: smaller backing group; waPC: used by Dapr, with a larger potential backing
  • Extism: configurable state through supported APIs; waPC: durable state must be passed via a custom initialization stage
  • Extism: basic hash validation; waPC: no custom bytecode validation
  • Extism: host call user data supported; waPC: host call user data unsupported

Depending on your use cases, either Extism or waPC may be a better fit:

  • Extism supports only one runtime engine — Wasmtime; waPC supports multiple runtime engines and is more configurable.
  • Extism allows calls directly to the exported symbols from server and client sides. The waPC project builds an abstraction between the server and client sides by exporting specific call symbols and tracking user-registered functions in a lookup table.
  • Extism defers data serialization entirely to the user. The waPC project integrates with an Interface Definition Language (IDL) to automate some of the serialization or deserialization chores.

We extended NGINX Agent with both projects and used Wasmtime as the exclusive engine to keep things simple. With our candidate SDKs and runtime chosen, it was generally a straightforward process shunting in an external plugin mechanism.

Our process of extending NGINX Agent followed these stages:

  • Extended the NGINX Agent configuration semantics to define external plugins and their bytecode source.
  • Created an adapter abstraction as a concrete Go structure to shim the Go function calls to their Wasm counterparts.
  • Defined the client API (Guest) as expected client-side function exports.
  • Defined the server API (Host) as expected server-side function exports.
  • Defined data semantics for Host and Guest calls. (Wasm’s type system is strict but limited, and its memory model is a contiguous array of uninterpreted bytes, so passing complex data requires interface definitions and serialization and deserialization utilities.)
  • Finally, we wired everything together by initializing our runtime, registering our expected server API exports, loading example plugins as bytecode, validating expected client APIs, and running the mostly unchanged NGINX Agent core code.

The diagram below shows the high-level data flow for the plugin components using Extism. It differs slightly from waPC, in that waPC brings its own abstraction between the Host and Guest systems. That said, the same conclusions can be drawn. Adding an external plugin system to a new or existing one does add some overhead and complexity, but for that cost, our plugins also gain significant benefits in developer choice and portability. Compared to network latency, microservice complexity, distributed race conditions, increased security surface area and the need to protect data on the wire and at endpoints, the tradeoff is reasonable.

In this simplified view, you can see our shunt between the NGINX Agent core executable and the Wasm “Guest” (or client) code. We used “Go Runtime” as shorthand for the NGINX Agent system and executable. NGINX Agent, having already supported plugins, provided the “Plugin Interface.” Then, we built a small shim structure to shunt between Go native calls and the respective SDK calls, such that a call to Plugin.Process simply generated a call to Extism.Plugin.Call(process). The SDK (for both Extism and waPC) does the rest of the work regarding memory, Wasmtime integration and function invocation until the client-side plugin execution. As shown in the diagram, plugins can also call back to the “Host” through Wasm exports, in this case allowing plugins to also publish new messages and events.
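As a rough sketch of what that shim can look like in Go, the adapter below is illustrative only: the interface and type names are ours rather than NGINX Agent’s internals, and wasmCall stands in for whichever SDK invocation (Extism’s plugin call or waPC’s equivalent) is wired up underneath:

```go
// plugin_adapter.go: illustrative only; names do not match NGINX Agent internals.
package plugins

// Plugin mirrors the kind of interface a statically linked Go plugin already satisfies.
type Plugin interface {
	Init() error
	Process(message []byte) error
	Close() error
}

// wasmCall stands in for the underlying SDK invocation (for example, Extism's plugin
// call or waPC's invoke), which handles memory, Wasmtime integration and the actual
// function invocation on the Guest side.
type wasmCall func(export string, payload []byte) ([]byte, error)

// wasmPlugin shims native Go calls onto the expected client-side (Guest) exports.
type wasmPlugin struct {
	call wasmCall
}

// NewWasmPlugin wraps an SDK-backed call function so the rest of the agent can keep
// treating the external Wasm plugin like any other statically linked plugin.
func NewWasmPlugin(call wasmCall) Plugin {
	return &wasmPlugin{call: call}
}

func (p *wasmPlugin) Init() error {
	_, err := p.call("init", nil)
	return err
}

// Process forwards a serialized message from the publish-subscribe bus to the
// plugin's exported "process" function; the Guest can publish back through Host exports.
func (p *wasmPlugin) Process(message []byte) error {
	_, err := p.call("process", message)
	return err
}

func (p *wasmPlugin) Close() error {
	_, err := p.call("close", nil)
	return err
}
```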

Wasm as a Universal Backend Control and Configuration Plane for Plugin Architectures

The Wasm landscape and ecosystem is rapidly advancing. Use outside of the browser is now more than science fiction — it’s a reality with increasingly extensive options for runtime engines, SDKs, utilities, tools and documentation at the developer’s disposal. We see further improvements coming fast on the horizon. The Wasm community is actively working on the component model, along with specifications like WIT and code-generation tools like wit-bindgen defining interoperable Wasm components, server and client APIs. Standardized interfaces could become commonplace, like we experience when writing protobuf files.

Without a doubt, there are more challenges ahead. To name one: higher-order language impedance, such as “What does a server-side Go context mean to a Haskell-sourced client bytecode?” Even so, we found our limited — and experimental — exercise of embedding Wasm into pre-existing projects exciting and illuminating. We plan to do more because Wasm clearly will play a major role in the future of running applications.

In theory, many other applications with plugin architectures could benefit from a similar Wasm stack. We will continue exploring more ways we can use Wasm at NGINX in our open source projects. It’s a brave new Wasm world for the server side, and we are only starting to get a glimpse of what’s possible. As the Wasm toolchain continues to mature and compatibility issues are ironed out, Wasm appears to be a promising path toward enhancing application performance while improving developer experience.

The post Our WebAssembly Experiment: Extending NGINX Agent appeared first on The New Stack.

]]>
A Workaround to WebAssembly’s Endpoint Compatibility Issues? https://thenewstack.io/a-workaround-webassemblys-endpoint-compatibility-issues/ Mon, 08 May 2023 15:00:31 +0000 https://thenewstack.io/?p=22706863

A new WebAssembly player Loophole Labs has joined the WebAssembly module provider fold with its open source platform Scale. Most

The post A Workaround to WebAssembly’s Endpoint Compatibility Issues? appeared first on The New Stack.

]]>

A new WebAssembly player, Loophole Labs, has joined the WebAssembly module provider fold with its open source platform Scale. Most recently, it announced during KubeCon + CloudNativeCon support for deploying WebAssembly functions to the cloud as well as serverless environments.

Scale’s creators also say Scale’s Signature technology offers a workaround for endpoint-compatibility issues ahead of when — if ever — component modules are standardized. Eventually, a common component standard would allow — in theory — code and applications running in a Wasm module to be deployed across various endpoints, including edge devices and servers, without the hassle of specifying interfaces and painstakingly reading memory across module boundaries for higher-level types, Loophole Labs says. “Better higher-level, non-serializing interfaces would allow for much less tedious configuration work, and more reusability and even less host dependence,” Trezy Who, a principal software engineer at the company, told The New Stack during KubeCon+CloudNativeCon. But until that day arrives, Loophole Labs says Scale’s Signature offers a way to deploy Wasm modules beyond WebAssembly’s current limitations outside the browser and backend (more about this below), before a standard component model is developed.

Scale Signatures help to ensure compatibility of the endpoints where applications and code are deployed within a Scale module. Signatures are used with Scale Functions to help define the inputs and outputs of a function using declarative syntax, according to Scale’s documentation.

Fast and Easy

The startup is also touting what it says are impressive benchmarks, measured by runtime performance and latency, for deploying applications and code within Scale WebAssembly modules.

Loophole Labs is attempting to capitalize on WebAssembly’s key concepts and strengths: developers should be able to create applications that are loaded into a WebAssembly module and deployed without having to worry about configuring their applications or code for the Wasm module, or for deployment across any environment or device that is able to process a CPU instruction set. What’s running underneath the hood should not be a concern for a developer working with a Wasm module, security features notwithstanding, since the code inside a Wasm module remains in a closed loop, or so-called sandbox.

“Our goal is to turn Wasm into the default target development environment,” Who said. “In order to do that, we want to abstract away WebAssembly so that all anybody has to think about is: if you’re going to build an application, write the code and don’t worry about the Wasm module.”

Loophole Labs’ creators say it takes about “20 seconds” to write, build and begin running a Scale module using the Scale CLI and a curl command. This means the code running in the Scale Wasm module is compiled and running locally within that 20-second time frame.

A WebAssembly-powered Scale function can process hundreds of thousands of requests per second with ~30ms latencies from different endpoints worldwide, the company says. The benchmarks ran on a 48-core Ryzen CPU with 192GB of RAM using 16KB payloads for five minutes, and are reproducible with this GitHub repository, the company says.

The low latency specs make a good case for relying on the Scale Wasm module versus a container, in addition to the security benefits of deploying applications and code in a closed environment, Shivansh Vij, CEO and founder of Loophole Labs, told The New Stack during KubeCon+CloudNativeCon.

“Often overlooked, many people do not realize that I can ship applications anywhere in the world much faster and cheaper than it would be possible with a container,” Vij said.

While — like all Wasm module providers — Loophole Labs says Scale should eventually be polyglot and incorporate all languages that WebAssembly is designed to support, Scale presently offers support for Go and Rust, with runtimes for Go and TypeScript.

The post A Workaround to WebAssembly’s Endpoint Compatibility Issues? appeared first on The New Stack.

]]>
IBM’s Quiet Approach to AI, Wasm and Serverless https://thenewstack.io/ibms-quiet-approach-to-ai-wasm-and-serverless/ Thu, 04 May 2023 13:00:27 +0000 https://thenewstack.io/?p=22707069

It’s been 12 years since IBM’s Watson took on Jeopardy champions and handily won. Since then, the celebrity of Watson

The post IBM’s Quiet Approach to AI, Wasm and Serverless appeared first on The New Stack.

]]>

It’s been 12 years since IBM’s Watson took on Jeopardy champions and handily won. Since then, the celebrity of Watson has been usurped by ChatGPT, but not because IBM has abandoned Watson or artificial intelligence. In fact, the company’s approach to artificial intelligence has evolved over the years and now reflects a different, more targeted path forward for AI — beyond pumping out generic large language models.

I sat down with IBM Fellow and CTO of IBM Cloud Jason McGee during KubeCon+CloudNativeCon EU, to discuss how Big Blue is approaching modern challenges such as serverless, WebAssembly in the enterprise, and of course artificial intelligence. The conversation has been edited for clarity and brevity.

Using AI for Code Automation

What is IBM doing with automation?

There [are] a lot of dimensions to automation. At the base technology level, we obviously do a lot of work with Ansible and the Red Hat side, and then we use Terraform pretty extensively as a kind of infrastructure-as-code language for provisioning cloud resources and managing a lot of those reference architectures — under the covers [they] are essentially collections of Terraform automation that [are] configured [in] the cloud. There is also higher-level work going on in automation, and that’s more like business process automation and robotic process automation, and things like that. With products like Watson Automate, [we] are applying AI and automation to customers’ business processes and automating manual things. So that’s kind of higher up the stack.

We have tools [like robotic process automation and business process management] in our space, and we’re applying AI to that and then down the technology stack. We have software automation tools like Terraform and Ansible that we’re using. We’re doing some interesting work on Ansible with the research team, applying foundation models to help with code assist on Ansible and helping people write automation using AI, to help fill in best-practice code based on natural language descriptions and so on.

What does the AI do in that context?

Think about if you’re writing an Ansible playbook, you might have a block that’s, “I want to deploy a web application on Node.js” or something. You could just write a comment, “Create a Node.js server running on port 80,” in natural language, and it would read that comment and automatically fill in all of the code and all the Ansible commands to provision and configure that using best practices. It’s been trained on all the Ansible Galaxy playbooks and GitHub Ansible code. So it’s helping them write all the Ansible and write good Ansible […] based on natural descriptions of what they’re trying to achieve.

The AI is based on large language models. Do they hallucinate? I keep hearing they hallucinate and I’m reminded of the story, “Do Androids Dream of Electric Sheep?”

A great question, and it’s part of the example I gave you: that model was trained for a more narrow purpose of doing Ansible code assist, versus something like GPT, which was trained on everything. It can therefore be more accurate at the smaller scope, right? It understands natural language but also understands Ansible very precisely, and so it can have a higher accuracy than a general-purpose large language model, which also could spit out Ansible or Terraform, or Java or whatever the heck you wanted it to, but maybe has less awareness of how good or accurate that language is.

We’re using it in AIOps as well, for incident management, availability management and problem determination. That’s another kind of big space that IBM is investing a lot in — Instana, which is one of our key observability tools.

How do we help customers adopt and leverage large-scale foundations with large language models? In IBM Cloud we have this thing called the Vela cluster, which is a high-performance foundation model training cluster that’s in our cloud in Washington, DC. It was originally built for our research team so that the IBM Research Group could use it to do all their research and training on models and build things like Project Wisdom on it.

Now we’re starting to expose that for customers. We believe that enterprises will build some of their own large language models or take base models — because we’re also building a bunch of base models — and then customize them by training them on additional unique data. We’re doing work in OpenShift, to allow you to use OpenShift as the platform for that. We’re doing work in open source around that software stack for building models. And then we’re of course building a whole bunch of models.

Beyond Traditional Serverless

TNS: What else are you here promoting today at KubeCon?

McGee: There’s a lot of activity in this space that we’ve been working on for a long time, so it’s more progression. One is serverless, and we have a capability called IBM Cloud Code Engine that’s based on Knative, which is like a layer on top of Kubernetes, designed to help developers consume cloud native. We’ve been doing a lot of work recently expanding that serverless notion to a more varied set of workloads.

Traditional serverless was like apps and functions running event-driven kinds of workloads — a lot of limitations on what kinds of applications you could run there. What we’ve been doing is extending that and opening up the kinds of workloads you can run, so we’re adding in things like batch processing, large-scale parallel computation, compute-intensive, simulation kind of workloads. We’re starting to do some work on HPC [high-performance computing] so people can do financial modeling or EDA [electronic design automation], industrial design and silicon design workloads, leveraging a serverless paradigm. We have a lot of activity going on in that space.

We’re also working with a project called Ray, which is a distributed computing framework that’s being used for a lot of AI and data analytics workloads. We’ve enabled Ray to work with the Code Engine so that you can do large-scale bursts [of] compute on cloud and use it to do data analytics processing. We’ve also built a serverless Spark capability, which is another data analytics framework. All of those things are exposed in a single service in Code Engine. So instead of having seven or eight different cloud services that do all these different kinds of workloads, we have a model where we can do all that in one logical service.

What kinds of use cases are you seeing from your customers with serverless?

One of the challenges with serverless is [that] when it started a few years ago, with cloud functions and Lambda, it was positioned in a very narrow kind of way — like it was good for event-driven, it was good for kind of web frontends.

That’s interesting, but customers actually get a lot more value out of these more large-scale, compute-intensive workloads. Especially in cloud, you’d have this massive pool of resources. How do you quickly use that massive pool of resources to run a Monte Carlo simulation or to run a batch job or to run an iteration of design verification for a silicon device you’re building? When you have those large-scale workloads, the traditional way you would do that is you would build a big compute grid, and then you have a lot of costs sunk in all this infrastructure.

We’re starting to see them use serverless as the paradigm for how they run these more compute-intensive, large-scale workloads, because that combines a nice set of attributes, like the resource pool of cloud, with [a] pay-as-you-go pricing model, with a no infrastructure management. You just like simply spin up and spin back down as you run your work. So that’s the angle on serverless we’re seeing a lot more adoption on.

Wasm’s Potential

Are people using serverless on the edge?

They do. It’s more niche, of course. But you see, for example, in CDN (content delivery network), where people want to push small-scale computation out to the edge of the network, close to the end users — so I think there [are] use cases like that. At IBM Cloud, we use Cloudflare as kind of our core internet service, [with] global load balancer and edge CDN, and they support our cloud functions. You see technology like Wasm — just a lot of people here talking about Wasm. Wasm has a role to play in those scenarios.

Is IBM doing anything with Wasm? Is it useful in the enterprise?

We’re enabling some of that; we’re looking at it in the edge. We support Wasm in Code Engine. It gives you a nice, super fast startup time, like workload invocation in 10 milliseconds or something, because I can inject it straight in with Wasm, which is useful if you’re doing large-scale bursty things but you don’t want to pay the penalty of waiting for things to spin up.

But I still think that whole space is more exploratory. It’s not like there [are] massive piles of enterprise workloads waiting to run on Wasm, right? So it’s more next-gen edge device stuff. It’s useful — there [are] some interesting use cases around that HPC [high-performance computing] space potentially … because I can inject small fragments of code into an existing grid, but I also think it’s a little more niche, specialist workloads.

CNCF paid for travel and accommodations for The New Stack to attend the KubeCon+CloudNativeCon Europe 2023 conference.

The post IBM’s Quiet Approach to AI, Wasm and Serverless appeared first on The New Stack.

]]>
Wasm-Based SQL Extensions — Toward Portability and Compatibility https://thenewstack.io/wasm-based-sql-extensions-toward-portability-and-compatibility/ Mon, 01 May 2023 16:23:11 +0000 https://thenewstack.io/?p=22706741

WebAssembly (Wasm) is becoming well known for letting users run code written in different languages in the browser. But that’s

The post Wasm-Based SQL Extensions — Toward Portability and Compatibility appeared first on The New Stack.

]]>

WebAssembly (Wasm) is becoming well known for letting users run code written in different languages in the browser. But that’s not all it lets you do. Wasm’s portability, speed and security make it a great way for you to create platforms and extensible frameworks that let users compile their code to Wasm and run it in your system quickly.

Databases and other data-intensive systems are great candidates for becoming Wasm-powered platforms. When you have a lot of data, it’s cheaper to move the compute to the data than the other way around. Wasm gives us the tools to do this well, but it’s missing a few features that we can either all build on our own in a thousand incompatible ways or build together in the open.

Many SQL databases already have extensibility features that let you create new functions, aggregates, types and more. For example, in databases like PostgreSQL, each extension has an installation script written in SQL and may also include C code that is compiled to a shared library. The C code may use database APIs and implement logic that would be hard to write in procedural SQL languages.

These shared libraries don’t create a secure sandbox, so you can’t easily prevent an extension from using too many resources, corrupting memory or messing with the system. They’re also not very portable, since you have to compile them for each platform on which you run the database.

This is a natural fit for Wasm since its modules are portable, sandboxed and “capability-safe,” which means they can only access what you give them permission to. SingleStore released Wasm-powered extensibility last summer, including user-defined functions (UDFs) created from Wasm. We’re not alone either — several other products and open source projects are also working on Wasm-based extensibility.

Like other Wasm use cases, people working on SQL extensions quickly realized they need some way to pass data like strings, lists and records in and out of Wasm. The core Wasm spec doesn’t provide a way to do this and only defines things like numbers and memory as a flat array of bytes, not higher-level types.
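A hedged TinyGo sketch makes that gap visible; the export names and build target below are illustrative. Integers cross the module boundary on their own, but anything like a string has to be hand-carried through linear memory as a pointer and a length:

```go
// udf.go: compile with something like `tinygo build -o udf.wasm -target wasi udf.go`.
package main

// add_ints needs nothing beyond core Wasm: i32 in, i32 out.
//
//export add_ints
func addInts(a, b int32) int32 {
	return a + b
}

// A string cannot be passed directly. The caller must copy the bytes into the
// module's linear memory and hand over a pointer plus a length, and both sides
// must agree on how results come back. That agreement is the ad hoc ABI each
// platform currently invents for itself.
//
//export count_bytes
func countBytes(ptr *byte, length int32) int32 {
	_ = ptr // a real UDF would read `length` bytes starting at ptr
	return length
}

func main() {} // TinyGo still expects a main function for this build target
```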

This can lead different Wasm platforms to come up with their own Application Binary Interface (ABI), procedure call mechanism, mapping to gRPC or other solutions. These different solutions to describing high-level interfaces and types lead to a huge amount of fragmentation. This means that Wasm created for one platform can’t be used in another, and users need a different set of tools for each language for each platform, which is both inconvenient and a waste of resources to develop.

However, there is a way out of this fragmentation nightmare: the WebAssembly System Interface (WASI) and the component model. WASI is a subgroup of the WebAssembly Community Group (CG), and it’s working to define standardized interfaces for common system resources and a component model. Wasm Components are wrappers around core Wasm modules, giving us a way to statically link them together and include high-level interfaces and types in the binary.

The component model provides a general solution with a path to standardization for these high-level types and interfaces that are currently being achieved in a huge variety of bespoke ways. If we want to prevent fragmentation, reduce the amount of duplicate work done in the Wasm + SQL ecosystem, and make extensions work in a wide variety of projects and products, the component model and WASI are the answer.

That’s why SingleStoreDB is championing the WASI SQL Embedding proposal, which describes how Wasm can be embedded in SQL environments as extensions. The standard will leverage the component model and its interfaces to provide a way for users to create SQL extensions using only open source component model tools like Cargo Component and Componentize-JS.

The WASI SQL Embedding proposal is fully open source and part of the WASI subgroup. If you’re interested in being part of a more cohesive and less fragmented SQL-extension ecosystem based on Wasm, come join us.

The post Wasm-Based SQL Extensions — Toward Portability and Compatibility appeared first on The New Stack.

]]>
Will JavaScript Become the Most Popular WebAssembly Language? https://thenewstack.io/webassembly/will-javascript-become-the-most-popular-webassembly-language/ Tue, 25 Apr 2023 13:00:13 +0000 https://thenewstack.io/?p=22706212

Since it grew out of the browser, it’s easy to assume that JavaScript would be a natural fit for WebAssembly.

The post Will JavaScript Become the Most Popular WebAssembly Language? appeared first on The New Stack.

]]>

Since it grew out of the browser, it’s easy to assume that JavaScript would be a natural fit for WebAssembly. But originally, the whole point of WebAssembly was to compile other languages so that developers could interact with them in the browser from JavaScript (compilers that generate Wasm for browsers create both the Wasm module and a JavaScript shim that allows the Wasm module to access browser APIs).

Now there are multiple non-browser runtimes for server-side WebAssembly (plus Docker’s Wasm support), where Wasm modules actually run inside a JavaScript runtime (like V8), so alignment with JavaScript is still important as WebAssembly becomes more of a universal runtime.

Wasm is intentionally polyglot and it always will be; a lot of the recent focus has been on supporting languages like Rust and Go, as well as Python, Ruby and .NET. But JavaScript is also the most popular programming language in the world, and there’s significant on-going work to improve the options for using JavaScript as a language for writing modules that can be compiled to WebAssembly (in addition to the ways WebAssembly already relies on JavaScript), as well as attempts to apply the lessons learned about improving JavaScript performance to Wasm.

Developer Demand 

When Fermyon released SDKs for building components for its Spin framework using first .NET and then JavaScript and TypeScript, CEO Matt Butcher polled customers to discover what languages they wanted to be prioritized. “[We asked] what languages are you interested in? What languages are you writing in? What languages would you prefer to write in? And basically, JavaScript and TypeScript are two of the top three.” (The third language developers picked was Rust — likely because of the maturity of Rust tooling for Wasm generally — with .NET, Python and Java also proving popular.)

Suborbital saw similar reactions when it launched JavaScript support for building server-side extensions, which quickly became its most popular developer language, Butcher told us.

It wasn’t clear whether the 31% of Fermyon customers wanting JavaScript support and the 20% wanting TypeScript support were the same developers or a full half of the respondents, but the language had a definite and surprising lead. “It was surprising to us; that momentum in a community we thought would be the one to push back the most on the idea that JavaScript was necessary inside of WebAssembly is the exact community that is saying no, we really want [JavaScript] support in WebAssembly.”

Butcher had expected more competition between languages for writing WebAssembly, but the responses changed his mind. “They’re not going to compete. It’s just going to be one more place where everybody who knows JavaScript will be able to write and run JavaScript in an emerging technology. People always end up wanting JavaScript.”

“I think at this point, it’s inevitable. It’s going to not just be a WebAssembly language, but likely the number one or number two WebAssembly language very quickly.”

While Butcher pointed at Atwood’s Law (anything that can be written in JavaScript will be), director of the Bytecode Alliance Technical Steering Committee Bailey Hayes brought up Gary Bernhardt’s famous Birth and Death of JavaScript (which predicts a runtime like WebAssembly and likens JavaScript to a cockroach that can survive an apocalypse).

“Rust can be hard to learn. It’s the most loved language, but it also has a pretty steep learning curve. And if somebody’s just getting started, I would love for them to start working with what they know.” Letting developers explore a new area like WebAssembly with the tools they’re familiar with makes them more effective and makes for a better software ecosystem, Hayes suggested. “Obviously we’re all excited about JavaScript because it’s the most popular thing in the world and we want to get as many people on WebAssembly as possible!”

What Developers Want to Do in JavaScript 

Butcher put WebAssembly usage into four main groups: browser applications, cloud applications, IoT applications and plugin applications. JavaScript is relevant to all of them.

“What we have seen [at Fermyon] is [developers] using JavaScript and WebAssembly to write backends for heavily JavaScript-oriented frontends, so they’ll serve out their React app, and then they’ll use the JavaScript back end to implement the data storage or the processing.”

There are obvious advantages for server-side Wasm, Hayes pointed out. “Folks that do server-side JavaScript are going to roll straight into server-side Wasm and get something that’s even smaller and starts even faster: they’re going to see benefits without hardly any friction.”

“People are very excited about running WebAssembly outside the browser, so let’s take the most popular language in the world and make sure it works for this new use case of server-side WebAssembly.”

There were some suggestions for what else JavaScript in WebAssembly would be useful for that struck Butcher as very creative. “One person articulated an interesting in-browser reason why they want JavaScript in WebAssembly, that you can create an even more secure JavaScript sandbox and execute arbitrary untrusted code inside of WebAssembly with an interface to the browser’s version of JavaScript that prevents the untrusted JavaScript from doing things to the trusted JavaScript.”

Being able to isolate snippets of untrusted code in the Wasm sandbox is already a common use case for embedded WebAssembly: SingleStore, Scylla, Postgres, TiDB and CockroachDB have been experimenting with using Wasm for what are effectively stored procedures.

Fastly’s js-compute runtime is JavaScript running on WebAssembly for edge computing, Suborbital is focusing on plugins (where JavaScript makes a lot of sense), Shopify recently added JavaScript as a first-class language for WebAssembly functions to customize the backend, and Redpanda shipped WebAssembly support some time ago (again using JavaScript).

Redpanda’s WebAssembly module exposes a JavaScript API for writing policy on how data is stored on its Kafka-compatible streaming platform, and CEO Alex Gallego told us that’s because of both the flexibility and popularity of JavaScript with developers.

The flexibility is important for platform developers. “When you’re starting to design something new, the most difficult part is committing to a long-term API,” he noted. “Once you commit, people are going to put that code in production, and that’s it: you’re never going to remove that, you’re stuck with your bad decisions. What JavaScript allows you to do, from a framework developer perspective, is iterate on feedback from the community super-fast and change the interface relatively easily because it’s a dynamic language.”

With JavaScript, developers get a familiar programming model for business logic like masking social security numbers, finding users in specific age groups, or credit-scoring IP addresses — all without needing to be an expert in the intricacies of distributed storage and streaming pipelines. “The scalability dimensions of multithreading, vectorization instructions, IO, device handling, network throughput; all of the core gnarly things are still handled by the underlying platform.”

JavaScript: Popular and Performant

Appealing to developers is a common reason for enabling JavaScript support for writing WebAssembly modules.

When a new service launches, obviously developers won’t have experience with it; but because they know JavaScript, it’ll be much easier for them to get up to speed with what they want to do. That gives platforms a large community of potential customers, Gallego noted.

“It gives WebAssembly the largest possible programming community in the world to draw talent from!”

“WebAssembly allows you to mix and match programming languages, which is great. But in practical terms, I think JavaScript is the way to go. It’s super easy. It’s really friendly, has great packaging, there are a million tutorials for developers. And as you’re looking at expanding talent, right, which is challenging as companies grow, it’s much easier to go and hire JavaScript developers.”

“When it comes to finding the right design for the API that you want to expose, to me, leaning into the largest programming community was a pretty key decision.”

“JavaScript is one of the most widely used languages; it’s always very important because of adoption,” agreed Fastly’s Guy Bedford, who works on several projects in this space. “WebAssembly has all these benefits which apply in all the different environments where it can be deployed, because of its security properties and its performance properties and its portability. All these companies are doing these very interesting things with WebAssembly, but they want to support developers to come from these existing ecosystems.”

JavaScript has some obvious advantages, Butcher noted: “the low barrier to entry, the huge variety of readily available resources to learn it, the unbelievably gigantic number of off-the-shelf libraries that you can pull through npm.”


Libraries are a big part of why using JavaScript with WebAssembly will be important for functionality as well as adoption. “If you’ve developed a library that’s very good at matrix multiplication, you really want to leverage the decade of developer hours that it took you to build that library.” With those advantages, JavaScript could become the SQL equivalent for Wasm, Gallego suggested.

The 20 years of optimization that JavaScript has had are also a big part of the appeal. “There’s so much money being poured into this ecosystem,” he pointed out. “Experts are very financially motivated to make sure that your website renders fast.” The programming team behind the V8 JavaScript engine includes the original creator of Java’s garbage collector. “The people that are focused on the performance of JavaScript are probably the best people in the world to focus on that; that’s a huge leg up on anything else.”

“I think that’s why JavaScript continues to stay relevant: it’s just the number of smart, talented people working on the language not just at the spec level, but also at the execution level.”

“Single thread performance [in JavaScript] is just fantastic,” he noted: that makes a big difference at the edge, turning the combination of WebAssembly and JavaScript into “a real viable vehicle for full-blown application development”.

Similarly, Butcher mused about the server-side rendering of React applications on a WebAssembly cloud to cater to devices that can’t run large amounts of JavaScript in the browser.

“V8 has all of these great performance optimizations,” he agreed. “Even mature languages like Python and Ruby haven’t had the same devoted attention from so many optimizers [as JavaScript] making it just a little bit faster, and just a little more faster.”

“The performance has been pretty compelling and the fact that it’s easy to take a JavaScript runtime and drop it into place… I looked at that and of course, people would want a version that would run in WebAssembly. They can keep reaping the same benefits they’ve had for so long.”

But WebAssembly isn’t quite ready for mainstream JavaScript developers today.

“JavaScript has this low barrier to entry where you don’t have to have a degree or a bunch of experience; it’s a very accessible language. But if you’re a JavaScript developer and you want to be using WebAssembly it’s not easy to know how to do that,” Bedford warned.

Different Ways to Bring JavaScript to Wasm

You can already use JavaScript to write WebAssembly modules, but “there are significant updates coming from the Bytecode Alliance over the next few months that are going to enable more JavaScript,” Cosmonic CEO Liam Randall told us.

“When we think about what the big theme for WebAssembly is going to be in 2023, it really comes down to components, components, components.”

“There have been significant advancements this year in the ability to build, create and operate components and the first two languages down the pipe are Rust and some of this JavaScript work,” Randall continued.

Currently, the most popular approach is to use the very small (210KB) QuickJS interpreter originally adopted and popularized by Shopify, which is included in a number of WebAssembly runtimes. For example, Shopify’s Javy and Fermyon’s spin-js-sdk use QuickJS with the Wasmtime runtime (which has early bindings for TypeScript but doesn’t yet include JavaScript as an officially supported language), and there’s a version of QuickJS for the CNCF’s WasmEdge runtime that supports both JavaScript in WebAssembly and calling C/C++ and Rust functions from JavaScript.

QuickJS supports the majority of ECMAScript 2020 features, including strings, arrays, objects and the methods to support them, async generators, JSON parsing, RegExps, ES modules and optional operator overloading, big decimal (BigDecimal) and big binary floating-point numbers (BigFloat). So it can run most JavaScript code. As well as being small, it starts up fairly quickly and offers good performance for running JavaScript — but it doesn’t support JIT.

Using QuickJS effectively means bundling in a JavaScript runtime, and there’s a tradeoff for this simplicity, Hayes noted: “you typically have a little bit larger size and maybe the performance isn’t as perfect as it could be — but it works in most cases, and I’ve been seeing it get adopted all over.”

Fermyon’s JavaScript SDK builds on the way Javy uses QuickJS but uses the Wizer pre-initializer to speed up the QuickJS startup time by saving a snapshot of what the code will look like once it’s initialized. “Wizer is what makes .NET so fast on WebAssembly,” Butcher explained. “It starts off the runtime, loads up all the runtime environment for .NET and then writes it back out to disk as a new WebAssembly module. We discovered we can do the same thing with QuickJS.”

“When you run your spin build, the SDK takes the JavaScript runtime, takes your source files, optimizes it with Wizer and then packages all of that up and ships that out as a new WebAssembly binary.”

If the idea of getting a speed boost by pre-optimizing the code for an interpreted language sounds familiar, that’s because it’s the way most of the browser JavaScript engines work. “They start interpreting the JavaScript but while they’re interpreting, they feed in the JavaScript files to an optimizer so that a few milliseconds into execution, you flip over from interpreted mode into the compiled optimized mode.”

“One of the biggest untold stories is how much, at the end of the day, WebAssembly really is just everything we’ve learned from JavaScript, Java, .NET — all the pioneering languages in the 90s,” Butcher suggested. “What did we learn in 15-20 years of doing those languages and how do we make that the new baseline that we start with and then start building afresh on top of that?”

Adding JIT

Shopify also contracted Igalia to bring SpiderMonkey, the Mozilla JavaScript engine, to Wasm; while Fastly (which has a number of ex-Mozilla staff) has taken an alternative approach with componentize-js, using SpiderMonkey to run JavaScript for WebAssembly in the high-speed mode it runs in the browser, JIT compiling at least part of your JavaScript code and running it inside the WebAssembly interpreter.

Although WebAssembly modules are portable enough to use in many different places, it’s not yet easy to compose multiple Wasm modules into a program (as opposed to writing an entire, monolithic program in one source language and then compiling that into a single module). Type support in Wasm is primitive; the different WebAssembly capabilities various modules may require are grouped into different “worlds” (like web, cloud and the CLI), and modules typically define their own local address space.

“The problem with WebAssembly has been that you get this binary, but you’ve got all these very low-level binding functions and there’s a whole lot of wiring process. You have to do that wiring specifically for every language and it’s a very complex marshaling of data in and out, so you have to really be a very experienced developer to be able to know how to handle this,” Bedford told us.
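For a feel of what that low-level wiring involves, here is a hedged Go sketch using the wasmtime-go SDK; the guest.wasm file and its “memory”, “alloc” and “greet” exports are assumptions about one particular module, not a standard interface:

```go
package main

import (
	"log"

	wasmtime "github.com/bytecodealliance/wasmtime-go/v14" // version suffix is an assumption
)

func main() {
	engine := wasmtime.NewEngine()
	store := wasmtime.NewStore(engine)

	module, err := wasmtime.NewModuleFromFile(engine, "guest.wasm") // hypothetical module
	if err != nil {
		log.Fatal(err)
	}
	instance, err := wasmtime.NewInstance(store, module, nil)
	if err != nil {
		log.Fatal(err)
	}

	memory := instance.GetExport(store, "memory").Memory()
	alloc := instance.GetFunc(store, "alloc")
	greet := instance.GetFunc(store, "greet")

	input := []byte("hello")

	// Ask the guest to reserve space, then copy the bytes into its linear memory
	// by hand; core Wasm itself only understands the resulting pointer and length.
	ptrVal, err := alloc.Call(store, int32(len(input)))
	if err != nil {
		log.Fatal(err)
	}
	ptr := ptrVal.(int32)
	copy(memory.UnsafeData(store)[ptr:int(ptr)+len(input)], input)

	// Only now can the "string" argument be passed, as two plain integers.
	if _, err := greet.Call(store, ptr, int32(len(input))); err != nil {
		log.Fatal(err)
	}
}
```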

The WebAssembly component model adds dependency descriptions and high-level, language-independent interfaces for passing values and pointers. These interfaces solve what he calls “the high-level encapsulation problem with shared nothing completely separated memory spaces.”

“You don’t just have a box, you have a box with interfaces, and they can talk to each other,” he explained. “You’re able to have functions and different types of structs and object structures and you can have all of these types of data structures passing across the component boundary.”

That enables developers to create the kind of reusable modules that are common in JavaScript, Python, Rust and other languages.

Componentize-js builds on this and allows developers to work with arbitrary bindings. “You bring your bindings and your JavaScript module that you want to run and we give you a WebAssembly binary that represents the entire JavaScript runtime and engine with those bindings. We can do that very quickly and we can generate very complex bindings.”

This doesn’t need a lot of extra build steps for WebAssembly: JavaScript developers can use familiar tooling, and install the library from npm.

Although the SpiderMonkey engine size is larger than QuickJs — Bedford estimates a binary with the JavaScript runtime and a developer’s JavaScript module will be 5-6MB — that’s still small enough to initialize quickly, even on the kind of hardware that will be available at the edge (where Fastly’s platform runs).

Again, this uses Wizer to optimize initialization performance, because that affects the cold start time. “We pre-initialize all of the JavaScript up until right before the point where it’s going to call your function, so there’s no JavaScript engine initialization happening. Everything is already pre-initialized using Wizer.”

“You’re just calling the code that you need to call so there’s not a whole lot of overhead.”

That isn’t AOT (Ahead Of Time) compilation, but later this year and next year, componentize-js will have more advanced runtime optimizations using partial evaluation techniques that Bedford suggested will effectively deliver AOT. “Because you know which functions are bound you can partially evaluate the interpreter using Futamura projections and get the compiled version of those functions as a natural process of partially evaluating the interpreter in SpiderMonkey itself.”
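
For readers unfamiliar with the term, the first Futamura projection says that partially evaluating an interpreter with respect to a fixed program yields, in effect, a compiled version of that program. The toy Go sketch below illustrates only that general idea; it is not how SpiderMonkey or componentize-js actually implement it, and the instruction set and names are purely illustrative.

    // Toy illustration of partially evaluating an interpreter on a fixed program.
    package main

    import "fmt"

    // A trivial "program": a list of add/multiply instructions.
    type instr struct {
        op  byte // '+' or '*'
        arg int
    }

    // interpret re-dispatches on every instruction each time it runs.
    func interpret(prog []instr, x int) int {
        for _, in := range prog {
            switch in.op {
            case '+':
                x += in.arg
            case '*':
                x *= in.arg
            }
        }
        return x
    }

    // specialize "partially evaluates" the interpreter for one fixed program,
    // resolving the dispatch ahead of time and returning the moral equivalent
    // of a compiled version of that program.
    func specialize(prog []instr) func(int) int {
        steps := make([]func(int) int, 0, len(prog))
        for _, in := range prog {
            in := in
            switch in.op {
            case '+':
                steps = append(steps, func(x int) int { return x + in.arg })
            case '*':
                steps = append(steps, func(x int) int { return x * in.arg })
            }
        }
        return func(x int) int {
            for _, step := range steps {
                x = step(x)
            }
            return x
        }
    }

    func main() {
        prog := []instr{{'+', 2}, {'*', 3}}
        compiled := specialize(prog)
        fmt.Println(interpret(prog, 5), compiled(5)) // both print 21
    }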

Componentize-js is part of a larger effort from the Bytecode Alliance called jco — JavaScript components tooling for WebAssembly — an experimental JavaScript component toolchain that isn’t specific to the JavaScript runtime Fastly uses for its own edge offering. “The idea was to build a more generic tool, so wherever you’re putting WebAssembly and you want to allow people to write a small bit of JavaScript, you can do it,” Bedford explained.

Jco is a project “where you can see the new JavaScript experience from stem to stern”, Randall noted, suggesting that you can expect to see more mature versions of the JavaScript and Rust component work for the next release of wasmtime, which will be aligned with WASI Preview2. It’s important to note that this is all still experimental — there hasn’t been a full release of the WebAssembly component model yet and Bedford refers to componentize-js as research rather than pre-release software: “this is a first step to bring this stuff to developers who want to be on the bleeding edge exploring this”.

The experimental SlightJS is also targeting the WebAssembly component model, by creating the Wasm Interface Types (WIT) bindings that let packages share types and definitions for JavaScript. So far the wit-bindgen generator (which creates language bindings for programs developers want to compile to WebAssembly and use with the component model) only supports compiled languages — C/C++, Rust, Java and TinyGo — so adding an interpreted language like JavaScript may be challenging.

While spin-js-sdk produces bindings specifically for Spin HTTP triggers, SlightJS aims to create bindings for any WIT interface a developer wants to use. Eventually, it will be part of Microsoft’s SpiderLightning project, which provides WIT interfaces for features developers need when building cloud native applications, adding JavaScript support to the slight CLI for running Wasm applications that use SpiderLightning.

Currently, SlightJS uses QuickJs because the performance is better, but as the improvements to SpiderMonkey arrive it could switch; Butcher pointed out the possible performance advantages of a JIT-style JavaScript runtime. QuickJs itself has largely replaced an earlier embeddable JavaScript engine, Duktape.

“There’s a real explosion of activity,” Bedford told us: “there’s very much a sense of accelerating development momentum in this space at the moment.”

Improving JavaScript and Wasm Together

You can think of these options as “JavaScript on top and WebAssembly on the bottom,” suggested Daniel Ehrenberg, vice president of the TC39 ECMAScript working group, but another approach is “JavaScript and WebAssembly side by side with the JavaScript VM beneath it”.

The latter is where Bloomberg and Igalia have been focusing, with proposals aimed at enabling efficient interaction between JavaScript and WebAssembly, like reference-typed strings to make it easier for WebAssembly programs to create and consume JavaScript strings, and WebAssembly GC for garbage collection to simplify memory management.

Making strings work better between the two languages is about efficiency, TC39 co-chair and head of Bloomberg’s JavaScript Infrastructure and Tooling team Rob Palmer explained.

“This unlocks a lot of use cases for smaller scale use of WebAssembly [for] speeding up some small amount of computation.”

“At the moment they cannot currently really be efficient, because the overhead of copying strings in between the two domains outweighs the benefit of higher speed processing within WebAssembly.”

GC goes beyond the weak references and finalization registry additions to JavaScript (in ECMAScript 2021), which provide what Ehrenberg calls a bare minimum of interoperability between WebAssembly’s linear memory and JavaScript heap-based memory, allowing some Wasm programs to be compiled. The GC proposal is more comprehensive. “WebAssembly doesn’t just have linear memory; WebAssembly can also allocate several different garbage-collection-allocated objects that all point to each other and have completely automatic memory management,” Ehrenberg explains. “You just have the reference tracing and when something’s dead, it goes away.”

Work on supporting threads in WASI to improve performance through parallelization and give access to existing libraries is at an even earlier stage (it’s initially only for C and it isn’t clear how it will work with the component model) but these two WebAssembly proposals are fairly well developed and he expects to see them in browsers soon, where they will help a range of developers.

“Partly that’s been enabling people to compile languages like Kotlin to WebAssembly and have that be more efficient than it would be if it were just directly with its own memory allocation, but it also enables zero-copy memory sharing between JavaScript and WebAssembly in this side-by-side architecture.”

For server-side JavaScript, Ehrenberg is encouraged by early signs of better alignment between two approaches that initially seemed to be pulling in different directions: WinterCG APIs (designed to enable web capabilities in server-side environments) and WASI, which aims to offer stronger IO capabilities in WebAssembly.

“You want WinterCG APIs to work in Deno but you also want them to work in Shopify’s JavaScript environment and Fastly’s JavaScript environment that are implemented on top of WebAssembly using WASI,” he pointed out. “Now that people are implementing JavaScript on top of WebAssembly, they’re looking at can JavaScript support the WinterCG APIs and then can those WinterCG APIs be implemented in WASI?”

The Promise of Multilanguage Wasm 

The flexibility of JavaScript makes it a good way to explore the componentization and composability that gives the WebAssembly component model so much promise, embryonic as it is today.

Along with Rust, JavaScript will be the first language to take advantage of a modular WebAssembly experience that Randall predicted will come to all languages, allowing developers to essentially mix and match components from multiple WebAssembly worlds in different languages and put them together to create new applications.

“You could use high performance and secure Rust to build cloud components, much like wasmCloud does, and you could pair that with less complicated to write user-facing code in JavaScript. I could take JavaScript components from different worlds and marry them together and I could take cargo components written in Rust, and I can now recompose those in many different ways.”

“You can have Rust talking to JavaScript and you can be running it in the sandbox or you could have a JavaScript component that’s alerting a highly optimized Rust component to do some heavy lifting, but you’re writing the high-level component that’s your edge service in JavaScript,” agreed Bedford.

The way componentize-js lets you take JavaScript and bundle it as a WebAssembly component will translate to working in multiple languages with the Jco toolchain and equivalent tools like cargo-component that also rely on the component model.

Despite WebAssembly’s support for multiple languages, using them together today is hard.

“You have to hope that someone’s going to go and take that Rust application and write some JavaScript — write the JavaScript bindgen for it and then maintain that bindgen,” Bedford explained. “Whereas with the component model, they don’t even need to think about targeting JavaScript in particular; they can target the component model, making this available to any number of languages and then you as a JavaScript developer just go for it.”

“That’s what the component model brings to these workflows. Someone can write their component in Rust and you can very easily bring it into a JavaScript environment. And then [for environments] outside the browser you can now bring JavaScript developers along.”

That will also open up JavaScript components for Rust developers, he noted. “Jco is a JavaScript component toolchain that supports both creating JavaScript components and running components in JavaScript.”

In the future, the wasm-compose library “that lets you take two components and basically smoosh them together” could help with this, Hayes suggested. As the component model becomes available over the next few years, it will make WebAssembly a very interesting place to explore.

“If you support JavaScript and Rust, you’ve just combined two massive language ecosystems that people love, and now they can interop and let people just pick the best library or tool.”

“I’m so excited about WebAssembly components because, in theory, it should break down the silos that we’ve created between frontend and backend engineers and language ecosystems.”

The post Will JavaScript Become the Most Popular WebAssembly Language? appeared first on The New Stack.

WebAssembly for the Server Side: A New Way to NGINX https://thenewstack.io/webassembly-for-the-server-side-a-new-way-to-nginx/ Fri, 21 Apr 2023 18:11:42 +0000 https://thenewstack.io/?p=22705788

This is the first of a two-part series.

The meteoric rise of WebAssembly (Wasm) started because it’s a language-agnostic runtime environment for the browser that enables safe and fast execution of languages other than JavaScript. Although Wasm’s initial focus was in the browser, developers have begun to explore the possibilities of Wasm on the backend, where it opens many possibilities for server and network management.

Similar to NGINX, many server-side technologies operate with a standard plugin model, which relies on statically or dynamically injecting linked object files into an executable running in the same address space.

However, plugins have considerable limitations. In particular, they typically allow extensibility only through native-language extensions, which limits developer choice in terms of languages and language-specific capabilities. Other plugins must conform to complex linking methods that require both server and client languages to support the same functionality interface, which adds complexity for plugin creators.

Finally, some plugins work through dynamic languages and scripting layers. These are easier to use but sacrifice performance. Dynamic scripting can introduce layers of abstraction as well as additional security risk. For example, remote procedure calls (RPCs) must address network communication, serialization and deserialization, error handling, asynchronous behavior, multiplatform compatibility, and latency when those challenges cause problems. While a plugin that uses RPCs is flexible, it’s at the cost of greatly increased complexity.

Why Wasm Rocks: Fast, Secure, Flexible

So, what is this Wasm thing? Wasm is a binary format and runtime environment for executing code. In short, Wasm was created as a low-level, efficient and secure way to run code at near-native speeds. Wasm code is designed to be compiled from high-level programming languages such as C, C++, Golang and Rust. In reality, Wasm is language-agnostic and portable. This is becoming more important as developers who deploy and maintain applications increasingly prefer to write as much as possible in a single language (in other words, less YAML).
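
As a minimal illustration, a trivial Go program can be compiled into a Wasm module for either the browser or a WASI host; the build commands shown in the comments below are typical invocations, and the exact flags depend on the toolchain version.

    // hello.go: a trivial program that can be compiled to a Wasm module.
    //
    // Typical build invocations (toolchain versions vary):
    //   GOOS=js GOARCH=wasm go build -o main.wasm .       (browser target)
    //   GOOS=wasip1 GOARCH=wasm go build -o main.wasm .   (WASI target, Go 1.21+)
    //   tinygo build -o main.wasm -target=wasi .          (TinyGo, smaller binaries)
    package main

    import "fmt"

    func main() {
        fmt.Println("hello from a Wasm module")
    }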

Wasm blows the standard plugin model wide open by allowing for far more flexible and manageable plugins. With Wasm, making plugins language-neutral, hardware-neutral, modular and isolated is much easier than with existing plugin models. This enables developers to customize behaviors beyond the browser, specific to their environment and use cases, in the language of their choice.

Wasm achieves all this while maintaining near-native code levels of performance thanks to:

  • A compact binary format smaller than equivalent human-readable code, resulting in faster download and parse times.
  • An instruction set that is closer to native machine instructions, allowing for faster interpretation and compilation to native code.
  • An extremely fast JIT with strong typing that delivers better optimization opportunities for faster code generation and execution through application of a variety of optimization techniques.
  • A contiguous, resizable linear memory model that simplifies memory management, allowing for more efficient memory access patterns.
  • Concurrency and parallel execution that unlocks performance from multicore processors (currently a WIP).

Designed initially for running untrusted code on the web, Wasm has a particularly strong security model that includes:

  • A sandboxed code execution environment that limits its access to system resources and ensures that it cannot interfere with other processes nor the operating system.
  • A “memory-safe” architecture that helps prevent common security vulnerabilities such as buffer overflows.
  • A robust typing system that enforces strict typing rules.
  • Small code size compared to other runtimes, which reduces the attack surface.
  • A bytecode format that is designed to be easy to analyze and optimize, which makes it easier to detect and fix potential security vulnerabilities.
  • Minimal need to refactor code for different platforms because of its high degree of portability.

A More Flexible Way to Build Plugins

Server-side Wasm has a number of impressive potential benefits, both primary and secondary. To start, using Wasm environments can make it much easier for standard application developers to interact with backend systems. Wasm also allows anyone to set up granular guardrails for what a function can and cannot do when it attempts to interact with the lower-level functionality of a networking or server-side application. That’s important because backend systems may be interacting with sensitive data or require higher levels of trust.

Similarly, server systems can be configured or designed to limit interaction with the Wasm plugin environment by explicitly exporting only limited functionality or only providing specific file descriptors for communication. For example, every Wasm bytecode binary has an imports section. Each import must be satisfied before instantiation. This allows a host system to register (or export in Wasm parlance) specific functions to interact with as a system.

Runtime engines will prevent instantiation of the Wasm module when those imports are not satisfied, giving host systems the ability to guardrail, control, validate and restrict what interaction the client has with the environment.
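
A rough sketch can make that mechanism concrete. The example below uses wazero, a WebAssembly runtime written in Go, as the host; the module name env, the exported log function and the plugin.wasm file are illustrative placeholders, and the builder API may differ slightly between wazero versions. The point is simply that the host registers the only functions it is willing to expose, and instantiation fails if the guest imports anything else.

    // Sketch: a host that exposes exactly one capability ("env.log") to guest code.
    package main

    import (
        "context"
        "log"
        "os"

        "github.com/tetratelabs/wazero"
    )

    func main() {
        ctx := context.Background()
        r := wazero.NewRuntime(ctx)
        defer r.Close(ctx)

        // Register the only host function the guest is allowed to import.
        _, err := r.NewHostModuleBuilder("env").
            NewFunctionBuilder().
            WithFunc(func(_ context.Context, v uint32) { log.Printf("guest logged: %d", v) }).
            Export("log").
            Instantiate(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // plugin.wasm is a placeholder for whatever guest module the host loads.
        guest, err := os.ReadFile("plugin.wasm")
        if err != nil {
            log.Fatal(err)
        }

        // Instantiation is refused unless every import declared by plugin.wasm
        // is satisfied by what the host registered above.
        if _, err := r.Instantiate(ctx, guest); err != nil {
            log.Fatalf("module rejected: %v", err)
        }
    }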

With more traditional plugin models and compiler technologies, creating this granularity and utility level is a challenge. The high degree of difficulty discourages developers from making plugins, further limiting choice. Perhaps most importantly, role-based access control and attribute-based access control, and other authorization and access control technologies, can introduce complex external systems that must be synchronized with the plugin as well as the underlying server-side technology. In contrast, Wasm access control capabilities are often built directly into the runtime engines, reducing the complexities and simplifying the development process.

Looking Ahead to the Great Wasm Future

In a future sprinkled with Wasm pixie dust, developers will be able to more easily design bespoke or semi-custom configurations and business logic for their applications. Additionally, they’ll be able to apply that to the server side to remove much of the development friction between backend, middle and frontend.

A Wasm-based plugin future could mean many cool things: easier and finer tuning of application performance, specific scaling and policy triggers based on application-level metrics and more.

With warg.io, we’re already seeing how Wasm might fuel innovative, composable approaches to building capabilities that apply the existing package management and registry approach to building with trusted Wasm code elements. In other words, Wasm might give us composable plugins that are not that different from the way a developer might put together several npm modules to achieve a specific functionality profile.

Application developers and DevOps teams generally have had blunt instruments to improve application performance. When latency issues or other problems arise, they have a few choices:

  1. Throw more compute at the problem.
  2. Increase memory (and, indirectly, I/O).
  3. Go into the code and try to identify the sources of latency.

The first two can be expensive. The last is incredibly laborious. With Wasm, developers can elect to run large parts of apps or functions that are slowing down performance inside a Wasm construct, and use a faster language or construct. They can do this without having to rip out the whole application and can focus on low-hanging fruit (for example, replacing slow JavaScript code used for calculations with C code or Go code compiled inside Wasm).
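
As a generic sketch of that idea, the calculation-heavy loop below is written in Go, compiled for the browser with GOOS=js GOARCH=wasm, and exposed to JavaScript through the standard syscall/js package; the sumSquares function and the calc.wasm file name are purely illustrative.

    //go:build js && wasm

    // Compile with: GOOS=js GOARCH=wasm go build -o calc.wasm .
    package main

    import "syscall/js"

    // sumSquares is the calculation-heavy loop moved out of JavaScript into Go.
    func sumSquares(n int) int {
        total := 0
        for i := 0; i < n; i++ {
            total += i * i
        }
        return total
    }

    func main() {
        // Expose the function to JavaScript as a global, so existing JS code can
        // call sumSquares(n) without caring that it now runs inside Wasm.
        js.Global().Set("sumSquares", js.FuncOf(func(this js.Value, args []js.Value) any {
            return sumSquares(args[0].Int())
        }))
        select {} // keep the Go runtime alive so JavaScript can keep calling in
    }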

In fact, Wasm has a host of performance advantages over JavaScript. To paraphrase Lin Clark from Mozilla on the original Wasm team:

  • It’s faster to fetch Wasm, as it is more compact than JavaScript, even when compressed.
  • Decoding Wasm is faster than parsing JavaScript.
  • Because Wasm is closer to machine code than JavaScript, and already has gone through optimization on the server side, compiling and optimizing takes less time.
  • Code execution runs faster because there are fewer compiler tricks and gotchas necessary for the developer to know in order to write consistently performant code. Plus, Wasm’s set of instructions is more ideal for machines.

So let’s imagine this future: Microservices aren’t choreographing through expensive Kubernetes API server calls or internal east-west RPCs, but instead through modular, safe and highly performant Wasm components bounded within a smaller process space and surface area.

Traditionally, developers have used other data encoding languages like YAML to invoke custom resource definitions (CRDs) and other ways to add functionality to their applications running as microservices in Kubernetes. This adds overhead and complexity, making performance tuning more challenging. With a Wasm-based plugin, developers can take advantage of language primitives (Go, Rust, C++) that are well known and trusted rather than reinventing the wheel with more CRDs.

The post WebAssembly for the Server Side: A New Way to NGINX appeared first on The New Stack.

Fermyon Cloud: Save Your WebAssembly Serverless Data Locally https://thenewstack.io/fermyon-cloud-save-your-webassembly-serverless-data-locally/ Thu, 20 Apr 2023 20:22:25 +0000 https://thenewstack.io/?p=22705714

Fermyon Technologies has added local stateful storage capacity for Fermyon Cloud as well as Spin 1.1, as the WebAssembly startup seeks to improve the developer experience for Wasm.

With the introduction of the Fermyon Cloud Key Value Store, users can now persist non-relational data in a key/value store managed by Fermyon Cloud that remains available for your serverless application. This availability of the data is measured in milliseconds — with no cold starts as the company says — given the low latency that WebAssembly offers for data connections. The Fermyon Cloud Key Value Store is an implementation of Spin’s key/value API, which means you can deploy Spin apps that use key/value data to Fermyon Cloud without changing anything about your application, the company says.

“When designing Fermyon Cloud, we wanted to retain certain stateful mechanisms because there are things that are going to start up, run to completion and stop. So while stateless is really a prerequisite to be able to scale for that, we wanted to give the developer the feeling that they didn’t have to start wiring up their own extra storage service,” Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack during the first day of KubeCon + CloudNativeCon. “Now that Key Value Store is released inside of Fermyon Cloud, the developer is really just making what appears to be regular API calls to store data. It’s deploying into a highly scalable, replicated environment.”

In a blog post, Fermyon communicated the following pain points that Fermyon Cloud users have experienced, which include:

  • Having to manage external stateful data services to use from Spin apps introduces additional infrastructure and operational overhead.
  • Changes in configuration and code between environments often introduce friction between local development and deploying to production.

So, previously, users working with serverless workloads had to rely only on external services to persist state beyond the lifespan of a single request, though Spin lets you use databases you manage yourself (like Redis, PostgreSQL or MySQL).

“So it really feels a lot of this was based on the idea that we want to remove developer friction all along the pipeline, by trying to figure out what are frustrating points for the developer,” Butcher said. “For example, without Fermyon Cloud Key, the developer might have to stand up a local copy of Redis and install it and keep it running. Instead, this step is thus removed by using Fermyon Cloud Key Value Store to allow this to happen.”

Fermyon Technologies offers key-value storage for serverless functions with 1,000 free database records at 1MB each. Spin, the popular open source product that is the easiest way for developers to build WebAssembly serverless apps, added local key-value storage in version 1.0 and now developers can instantly utilize key-value capability in a serverless runtime on Fermyon Cloud, which is also free.

Under the Hood

The Fermyon Cloud Key Value Store is an implementation of Spin’s key/value API, which means you can deploy Spin apps that use key/value data to Fermyon Cloud without changing anything about the application, Fermyon said. The final command once setup is completed is very simple: typically just running spin deploy from the project directory.

The latest feature added is also in support of WebAssembly’s adoption in general, for which Butcher said momentum continues to build. “There has been a growing general awareness of what WebAssembly is, what it can do and what its strengths are,” Butcher said. “We were talking at the beginning of 2023 about how it’s likely that WebAssembly becomes mainstream this year. We’re definitely seeing evidence of that happening already.”

Check back often this week for all things KubeCon+CloudNativeCon Europe 2023. The New Stack will be your eyes and ears on the ground in Amsterdam!

The post Fermyon Cloud: Save Your WebAssembly Serverless Data Locally appeared first on The New Stack.

WebAssembly: The Ultimate Guide https://thenewstack.io/webassembly-the-ultimate-guide/ Mon, 17 Apr 2023 15:06:22 +0000 https://thenewstack.io/?p=22701845

WebAssembly is arguably beginning to live up to its hype, although whether it realizes its potential remains to be seen, and its ultimate success largely depends on factors beyond its worth as a technology. Things that could hold it back include a lack of agreement about standardization across the devices on which it is deployed.

Already, WebAssembly (aka Wasm) has been shown to work exceedingly well in the browser. It is widely used to improve speed, security and, especially, computing simplicity for applications that run directly in the browser, notably with JavaScript as well as other languages. That speed and simplicity stem from its binary format, which runs directly and cleanly on the CPU.

WebAssembly is expected to eventually see wide-scale use as a way to deploy applications in a single module across different containers and Kubernetes clusters, devices (such as for edge and IoT devices) and multicloud environments simultaneously.

Other things that WebAssembly offers, given its small instruction-set size, are ultrafast speeds and its security aspect — or its sandbox design, to use industry jargon. No other services or applications can access the code during deployment: it remains isolated and inaccessible throughout its lightning-fast journey, measured in milliseconds, across different environments.

WebAssembly is very well suited to serverless environments and is seen as a way to overcome many of the issues impeding serverless adoption. Today, serverless typically requires the support of a third party, which is more often than not a cloud vendor. For many, serverless architecture might be equated with Lambda on Amazon Web Services or an offering from another cloud vendor such as Azure, Google Cloud, Oracle or IBM. The organization must thus be content to entrust its infrastructure, and in many cases its critical apps, to a single third-party cloud provider rather than to multiple vendors. For this reason alone, the avoidance of vendor lock-in is a key Wasm selling point.

“One of the things that we at Fermyon hear all the time is that developers love the serverless functions paradigm,” Matt Butcher, co-founder and CEO of Fermyon Technologies, said. “That statement almost always comes with a ‘but,’ though: While the big clouds each provide serverless, developers dislike the vendor lock-in, performance, and developer experience accompanying those offerings.”

An essential feature of Wasm is how it allows developers to no longer concern themselves with working with a potential multitude of libraries in order for their code to see deployment. “WebAssembly offers the promise of sharing libraries regardless of the underlying language. For example, a JavaScript program can load a library originally written in Python, and another written in Rust, and use them both,” Butcher said. “In today’s language ecosystem, every programming language has its own YAML parser, its own JPEG library, and so on. How many hours, days, and months are wasted implementing the same algorithms in a plethora of languages? WebAssembly is the remedy.”

Indeed, WebAssembly has the potential to become the new standard for composing apps, consisting of “truly universal building blocks” that can be combined and molded into many different apps, Torsten Volk, an analyst for Enterprise Management Associates (EMA), said. For the developer, this is accomplished “without worrying about getting it to work within these apps’ runtimes. This opens the door for a massive jump in developer productivity, as developers could pick and choose from a library of boilerplate modules that could even be available as part of the runtime,” Volk said. “They could consist of microservices for identity management, access control, app messaging, data storage, and data mining or they could be entire data pipelines, machine learning models, or API integrations. This prospect of developers becoming laser-focused on writing business code, and business code only, is what makes Wasm so exciting.”

However, again, WebAssembly, as it stands now, remains a work in progress. Among other things, it awaits the standardization of WASI, the system interface layer required to ensure endpoint compatibility among the different devices and servers on which Wasm applications are deployed.

What Does WebAssembly Really Do?

The idea is that WebAssembly is designed to take applications written in the language of the developer’s choice and deploy them anywhere, simultaneously, across disparate and varied environments. “Disparate” since WebAssembly runs on a CPU and only requires a device, server, etc., to be able to run a CPU instruction set. This means that a single deployment of an application in a WebAssembly module theoretically should be able to run and be updated on a multitude of different disparate devices whether that might be for servers, edge devices, multiclouds, serverless environments, etc.

Anywhere there is a CPU capable of running instruction sets, WebAssembly is designed to run applications written in a growing number of languages it can host in a module. It now accommodates Python, JavaScript, C++, Rust and others. Different applications written with different programming languages should be able to function within a single module, although this capability largely remains under development. Essentially, a microservices-packed module should be able to be used to deploy multiple services across multiple disparate environments and to offer application updates without reconfiguring the endpoints. In theory, it is just a matter of configuring the application in the module so that each environment in which the module is deployed does not have to be reconfigured separately once the work is done inside the module.

Can WebAssembly Replace Containers?

The argument that WebAssembly will replace containers and Kubernetes is largely a non sequitur. This is because WebAssembly and containers and Kubernetes are different, yet important technologies. And even though there are some overlapping purposes, they also meet specific and separate computing needs.

At least in the immediate future, many organizations will be loath to replace their container infrastructure and Kubernetes environments. Besides likely losing their investments in those by replacing them with WebAssembly, WebAssembly is not a replace-all technology for all containerized environments. In fact, much attention is being paid these days to using Wasm to deploy applications on containers and in Kubernetes environments.

Docker continues to make announcements about how it will accommodate and extend support for WebAssembly. How the two will work together, and especially how Docker and containers can be used to deploy and manage applications with WebAssembly, is a recurring topic of discussion. These adaptations are largely seen as necessary to pave the way for Wasm’s adoption and use with containers and Kubernetes.

“With supersonic startup speed and light runtime requirements, Wasm is well suited for serverless functions – something that has historically been hard to implement well in Docker. Conversely, Docker’s standout feature is its ability to easily bundle up a long-running server and its environment in a portable fashion,” Butcher said. “Long-running servers are not yet Wasm’s strong suit. Now that Wasm can be packaged in the same image format as a container, we’ll see the two technologies combined to build the kind of hybrid serverless-and-server microservice apps that have been difficult to achieve with prior technologies.”

Is WebAssembly Faster Than JavaScript?

Towards the beginning of what is popularly known as the World Wide Web, there was JavaScript. JavaScript has been around since 1995 when Brendan Eich created the language to support Netscape, the now sadly defunct yet aesthetically pleasing web browser that was revolutionary for its time. Since then, the ECMAScript standard has served to underpin web development, representing the vast majority of applications that run in the web browser.

More recently, WebAssembly — which actually has been around for a while — has emerged. When the World Wide Web Consortium (W3C) named it a web standard in 2019, it became the fourth web standard alongside HTML, CSS and JavaScript. But while web browser applications have been Wasm’s central and historical use case, the point, again, is that it is designed to run anywhere on a properly configured CPU — this is where Wasm and JavaScript both diverge and, for some use cases, become more integrated.

Wasm and JavaScript remain closely linked, yet Wasm is very much about other things in addition to JavaScript. In a nutshell, Wasm’s original purpose to help JavaScript run more efficiently in the web browser remains a key component of their integration. That integration now extends beyond the web browser, and into edge and server applications for which JavaScript alone has not been the best fit.

This is due to how Wasm runs in a binary format on a CPU level. And lest we forget, unlike JavaScript, Wasm is not a programming language. One of the main beauties of Wasm is that its functionality enables it to accommodate a number of different languages in addition to JavaScript, including Python, Rust, of course, as well as Go, .NET, C++, Java and PHP.

So, WebAssembly can integrate JavaScript when needed, but of course it is not limited to JavaScript. This integration has been a cornerstone of the symbiosis between WebAssembly and JavaScript, especially in the sphere of web applications.

For pure compute performance, as well as for tasks such as image processing, WebAssembly has certainly shown its merit as being much faster than JavaScript. But arguably the context is more complex than that. Faster compute times do not always matter as much; for lighter coding tasks in mobile and web applications, JavaScript is often all that is needed.

JavaScript is a language that is accessible to almost anyone and offers lots of community-supported libraries that support many use cases without the need to reinvent the wheel each time, Volk noted. “Executing otherwise interpreter-dependent languages like JavaScript and Python as bytecode and separating out boilerplate code from the core application, could bring tremendous performance and capacity advantages,” Volk said.

Will WebAssembly Replace JavaScript?

The point is not if WebAssembly will replace JavaScript, because there are no foreseeable reasons why it might. What WebAssembly will do instead is extend the reach of JavaScript to make it more deployable beyond just the browser.

“What we’re seeing in Fermyon surprised us. Developers are clamoring to execute JavaScript and TypeScript in WebAssembly. What we hear from our community is that the serverless paradigm is what they love, and JavaScript is just one of a variety of languages they want to have on hand when building serverless functions,” Butcher said. “So, if Wasm was originally a supplement to JavaScript, in some ways the relationship has inverted.”

Does WebAssembly Offer Superior Security?

Wasm can offer security advantages compared to code deployed only in JavaScript. Wasm serves to make JavaScript code more secure when Wasm is used as a “compiler on steroids” with which JavaScript applications can be deployed. Wasm, for example, isolates JavaScript from the browser, ensures memory safety, and implements strongly typed variables that are harder to exploit compared to JavaScript’s dynamically typed ones.

“Wasm’s security model could enable the vast JavaScript community to start creating complete apps, instead of building out only frontends and relying on backend developers to do the rest,” Volk said. “The ability to chain together individual Wasm modules into basic apps that bring life to traditional JavaScript frontends is an exciting perspective. Imagine the possibilities if frontend developers could securely store and access data on and from MongoDB, Postgres or the SalesForce API.”

Indeed, Wasm offers security advantages on a number of fronts. This is because, as Sounil Yu, chief information security officer at JupiterOne, a provider of cyber asset management and governance solutions, communicated:

Wasm as a compiler for JavaScript can improve the security of the application by reducing the vulnerability attack surface, providing better memory safety, obscuring the code, sandboxing the execution environment and leveraging an existing security ecosystem. Wasm has a limited set of instructions and better memory management, which helps reduce the attack surface for vulnerabilities and prevents some common types of vulnerabilities such as buffer overflows.

Wasm code offers a bit of security through obscurity by not being human-readable, making it harder for attackers to reverse-engineer the code and thus more difficult to discover and exploit vulnerabilities.

Wasm can also be run in a sandboxed environment, which can help to isolate the code from the rest of the system to prevent it from accessing sensitive information or performing illegal operations.

Wasm Frameworks, like CNCF’s wasmCloud, extend the Wasm security footprint further by providing higher-level abstractions, reducing the amount of code that developers embed in each application. wasmCloud also eases the security burden for developers by making it easier to sign artifacts, enable built-in monitoring, and automate the patching of applications.

But let’s not say JavaScript is inherently insecure. In fact, JavaScript “can be made quite secure,” Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream, said in an email response. “Browsers are some of the most attacked surfaces on the planet. WebAssembly, however, makes it easier to defend in depth with a mathematically provable sandbox model, which tools like Veriwasm take advantage of,” he said.

“In addition, you can use the upcoming component model to constrain the attack surface — the host might, for example, not even offer a file system API — and in the coming world these kinds of constraints will prove critical,” Squillace said. “But don’t be fooled: hosts can still make config mistakes and give too much power to a module!”

The post WebAssembly: The Ultimate Guide appeared first on The New Stack.

Docker Gets up to Speed for WebAssembly https://thenewstack.io/webassembly/docker-needs-to-get-up-to-speed-for-webassembly/ Fri, 14 Apr 2023 11:00:08 +0000 https://thenewstack.io/?p=22704279

Those who are still looking to debate whether WebAssembly (Wasm) will replace containers and even Kubernetes are missing the point. Both are very different, yet important, technologies. And even though there are some overlapping purposes, they also often serve specific and separate needs.

At least in the immediate future, many organizations will be loath to replace their container infrastructure and Kubernetes environments. Besides likely losing their investments in those by replacing them with WebAssembly, WebAssembly is not a replace-all technology for all containerized environments. Comparisons between containers and Wasm, and how Docker will continue to support containerized infrastructures when Wasm is in use, were among the main talking points during Wasm I/O 2023.

During the course of the week of the conference, Docker made a series of announcements about how it will accommodate and extend support for WebAssembly. How both will work together and especially how Docker is used with containers to allow for them to deploy and manage applications with WebAssembly were often discussed. These adaptations are largely seen as necessary to pave the way for Wasm’s adoption and use with containers and Kubernetes.

Docker sees Wasm as a complementary technology to Linux containers where developers “can choose which technology they use (or both) depending on the use case,” Michael Irwin, senior manager of developer relations, wrote in a blog post. “As the community explores what’s possible with Wasm, we want to help make Wasm applications easier to develop, build, and run using the experience and tools you know and love,” Irwin wrote.

Indeed, Docker has made and continues to make progress as it seeks to support Wasm. Following its October release of Docker+Wasm and after joining the Bytecode Alliance for Wasm and WebAssembly System Interface (WASI) development, Docker released new Wasm runtimes to coincide with this month’s Wasm I/O 2023.

The three new runtimes use the runwasi library, which creates the namespaces, configures the networking and handles the other workload tasks that containerd manages when deploying a Wasm module.

Given Wasm’s likely importance for a wave of deployments and use cases we will likely see in the near future, it is up to Docker to continue widening its support. Docker is motivated to do this since “The Docker Desktop key value proposition focuses on developer productivity,” Torsten Volk, an analyst at Enterprise Management Associates (EMA), said. “Wasm simply constitutes another deployment target for Docker Desktop, in addition to standard Linux containers. As was the case many years ago with Linux containers, Docker has now set out to simplify the adoption of Wasm, an application runtime that has the potential to save significant developer cycles by consistently running the same code on any infrastructure,” Volk said. “This lets developers worry about code, while platform engineers can take care of the scalability and resiliency of the underlying servers, network, and storage resources. Making this capability available to its user community definitely adds to the appeal of Docker Desktop.”

Bringing containers and WebAssembly closer “will benefit everyone,” Djordje Lukic, a software staff engineer for Docker, said during Wasm I/O 2023. “WebAssembly can make use of all the existing infrastructure for building and delivering the workloads…and adding WebAssembly features to container orchestrators makes them a great choice for running workloads where performance and a small footprint is paramount,” Lukic said.

Wasm and Docker Action

Announcements are often interesting but they are not worth much when the technology is not ready. That concern about Docker’s announcement was allayed by the “Containers Deep Dive” talk and demo that Lukic gave at Wasm I/O 2023. During his talk, Lukic demoed running a WebAssembly module locally using Docker and containerd (a container runtime) and running the module in the cloud on a Kubernetes cluster. The demo covered “what it takes” for a container runtime to be able to run a Wasm module, and the benefits of this approach, including faster startup times, security guarantees and easy integration into multi-tier services, Lukic said.

During his demo, Lukic ran a Wasm module with Docker inside Kubernetes, showing the Kubernetes cluster running on Docker Desktop, a pod running and the definition of the Wasm module. “What it’s saying is okay, I have a deployment,” Lukic said.

The Split

As mentioned above, Wasm on the one hand and Docker and containers on the other each often serve specific functionalities very well. “I think containers versus WebAssembly is really about how you want to build your applications,” Kate Goldenring, a senior software engineer at Fermyon, said during the panel discussion “Containers vs. WebAssembly: What’s the Difference and Which Should I Use?” “If you’re interested in serverless event-driven applications, WebAssembly is there for you. If you’re interested in continuing with the microservices architecture you have today — such as using Kubernetes even if WebAssembly is next to it — that is an option.”

Daniel Lopez Ridruejo, a senior director at VMware and CEO of Bitnami before VMware acquired it in 2019, said during the panel discussion that he both agreed and disagreed with Goldenring’s statement. While “most containers in the world running Kubernetes are running virtual machines,” there is much activity around engineering how to run WebAssembly on containers on Kubernetes, he said. “But what I’m particularly excited about, through your work and Microsoft pioneers, is how you run this on IoT devices: how you actually get rid of containers and get rid of VMs and can have that unit of portability on devices that you will not typically associate with running software,” Ridruejo said. “In a way, you can think of this as a wave…that I think it’s going to be disruptive once you can put compute and standardized compute in devices.”

Serverless has not lived up to its earlier promise of allowing for the deployment and management of applications with a minimal amount of operations required to support them. To this end, WebAssembly providers are speeding ahead to fill shortcomings in these serverless applications. Recent examples include Fermyon’s release of open source Spin 1.0 which is geared for serverless. Meanwhile, containers and Docker will likely remain part of the equation for serverless deployments with WebAssembly. Fermyon and other companies working on Wasm for serverless are focusing on speed of deployments for the management of modules, Shivay Lamba, a software developer specializing in DevOps, machine learning and full stack development, said during the panel discussion. “That helps you to save costs as well. So, if you have such use cases where you have smaller functions, those can be very easily replicated inside of Wasm. And while we are working on some of these toolings, which are still not supported very well in Wasm, those can still be run very easily in Docker or in containers.”

In a nutshell, Wasm should “in no way in the near future” serve as a direct replacement for all containerized Docker workloads, Saiyam Pathak, director of technical evangelism for Civo Cloud, said during the panel discussion. Instead, applications that do not necessarily run very well with Wasm should continue to work just fine with Docker and containers, reflecting how to “take the best advantages of the Wasm ecosystem.”

The post Docker Gets up to Speed for WebAssembly appeared first on The New Stack.

Why WebAssembly Is Perfect for Tiny IoT Devices https://thenewstack.io/why-webassembly-is-perfect-for-tiny-iot-devices/ Fri, 07 Apr 2023 17:00:41 +0000 https://thenewstack.io/?p=22703824

As the world becomes more interconnected, the number of Internet of Things (IoT) devices has exploded. These devices come in a wide range of shapes and sizes, from massive industrial machines to tiny sensors. While larger devices may run on Linux or other operating systems, smaller devices require a different approach. In this article, I’ll explain why WebAssembly (Wasm) is the perfect runtime for tiny IoT devices that are too small for Linux and may need to run on battery power.

Challenges

First, let’s look at the challenges of running a traditional operating system on tiny IoT devices. These devices are typically low-powered and have limited memory and storage capacity. Running a full-blown operating system such as Linux requires a significant amount of resources, which can quickly drain the battery life of the device.

Additionally, these devices may not have the hardware necessary to support a full Linux OS, such as an MMU (memory management unit), which is required for hardware virtual memory. Various RTOSes (real-time operating systems) exist for tiny IoT devices — some examples include FreeRTOS, ThreadX, and NuttX. As such, we cannot use Linux containers (aka Docker) as a unit of isolation on these tiny IoT devices.

Second, the traditional development cycle for embedded systems, like IoT devices, differs significantly from that of cloud software. Although making changes to the code may be uncomplicated, delivering them to customers is a complex process. Typically, there is a monthly code freeze, during which all modifications undergo thorough hardware-in-the-loop testing, potentially on many types of devices.

Following successful testing of the new release, a staged rollout process is initiated to prevent issues with firmware updates, customer complaints or support escalations. It may take several weeks, or even months, for a minor code change to reach most customers. This process is no longer acceptable in the era of agile development and the cloud.

WebAssembly

Enter WebAssembly. Wasm is a stack-based virtual machine and bytecode format. Originally designed for web browser plugins, it is not limited to just web applications. It’s a versatile runtime that can be used for a wide range of applications, including IoT devices. Wasm is designed to be small and efficient, which makes it a great fit for tiny IoT devices.

Wasm modules are typically a few kilobytes in size, which is much smaller than a typical Linux kernel or a Linux container. This small size means that Wasm can run on devices with limited memory and storage capacity, with near-native performance.

Another advantage of using Wasm for IoT devices is that it’s a platform-independent runtime. This means that Wasm modules can be written in any programming language and executed on any platform that supports Wasm. This flexibility makes it easy to develop applications for IoT devices, regardless of the hardware or software environment.

Developers can write code in their preferred programming language, and then compile it to Wasm, which can be executed on the target device. C, C++, Rust, JavaScript are well supported. Languages that require garbage collection are less well supported, but that is due to change as the Wasm spec evolves.

Wasm modules are executed in a sandboxed environment, which means that they’re isolated from the rest of the system. This makes it difficult for attackers to exploit vulnerabilities in the system, as they’re unable to access the underlying operating system or hardware. This is particularly important for tiny IoT devices which have no virtual memory. Additionally, Wasm modules can be verified, ahead of deployment, and signed, which adds an extra layer of security to the system.

Some Wasm runtimes support AoT (Ahead of Time) compilation, which takes the Wasm bytecode and produces machine code for the target CPU/MCU type. This is very useful, or even essential, in the context of tiny IoT devices, which may not have the available CPU and memory to perform JIT (Just in Time) compilation, as we typically do in the cloud or on the desktop.

The AoT compilation can be part of a cloud service which manages the life cycle of software deployed on the IoT devices. AoT-compiled code can run nearly at native speeds and orders of magnitude faster than interpreted languages like Micropython.

Maturity

While Wasm has been around for several years, it’s still a relatively new technology. There are not as many tools and libraries available for Wasm as there are for traditional programming languages and frameworks. Some parts of the Wasm spec are still evolving, such as native support for GC required by popular languages like Python. However, this is changing rapidly, as more and more developers adopt Wasm for a variety of applications.

Another difficulty of using Wasm for IoT devices is the lack of support for hardware-specific functionality. Since Wasm is a platform-independent runtime, it doesn’t have direct access to the hardware of the device, or to peripherals such as sensors.

Developers must use a combination of Wasm and native code to access the hardware of the device, and create a WASI API extension that abstracts the hardware functionality and exposes it to the Wasm module. The Wasm community is working to standardize various system interfaces in the WASI (WebAssembly System Interface) specification.

Conclusion

Overall, the benefits of using Wasm for IoT devices far outweigh the potential drawbacks. Wasm is a lightweight, efficient, secure runtime that’s perfect for devices with limited resources. It’s also flexible and platform-independent, which makes it easy to develop applications for a wide range of hardware and software environments, and bring agile development to tiny IoT devices.

As the number of IoT devices continues to grow, Wasm will become an increasingly important tool for developers who are looking to create efficient and secure applications for these devices. With the increasing importance of edge computing and the rise of IoT, it’s clear that WebAssembly is set to play a significant role in the future of computing.

The post Why WebAssembly Is Perfect for Tiny IoT Devices appeared first on The New Stack.

Python in the Browser: Free PyScript SaaS Launches https://thenewstack.io/python-in-the-browser-free-pyscript-saas-launches/ Tue, 28 Mar 2023 14:12:35 +0000 https://thenewstack.io/?p=22703887

Anaconda is offering a free PyScript software-as-a-service, PyScript.com, starting today, nearly a year after launching the open source language project last April. It will allow developers to deploy Python to run in the browser, alongside other HTML content.

PyScript is essentially HTML, but with an ecosystem of Python libraries, according to PyScript.net, which was the original project launched last year. It leverages Pyodide, WASM, and web technologies to allow Python to run in the browser.

“Everything that you can do with Pyscript.com, you could do without it — pretty much like you could run your own Git server and access Git on your own,” Fabio Pliger, principal architect at Anaconda and PyScript creator, told The New Stack. “But GitHub provides you [with] a lot of niceties and good features [so] that it makes sense to just sign up and start using those. I think PyScript.com is in the same vein. Where do I find new plugins that people are creating? I could just go to Pyscript.com and look for them and have a list of which ones are available, what’s more popular, and whatnot.”

The original project at PyScript.net will still be maintained and will evolve, but PyScript.com incorporates an IDE on a free coding platform with Python-powered data interactivity and computation. The platform is now generally available for free as a software service, although there are plans to add paid tiers.

Developers will be able to develop a project on PyScript.com in the browser-based IDE and then — more significantly — deploy the app in the browser and share it via a url.

PyScript.com Goes from IDE to Launch

Previously, if a developer wanted to learn Python, they would have to download Python, configure an environment and install packages on a local machine, Pliger said.

“With PyScript, we tried to reduce that and say you just need to edit one txt file and we will build your environment and everything in the browser without you having to care about resources or installations or things like this,” he explained. “PyScript.com is basically a space where users can create, share, deploy and copy PyScript projects.”

Once developers log in and start a project, they’ll see an IDE that looks much like Visual Studio or other IDEs, with a “Hello World” script. PyScript.com will offer plugins as well, so, for instance, if a teacher wants to grade a slideshow, that teacher will be able to find a plug-in to enable that.

“We’re trying to carve out an experience that is easy and also suggest best practices to users,” Pliger said. “PyScript by design is very flexible in what you can do. So for instance, when I create a new project, I could choose this route, which is defining my Python files and things separately, or I could just say something like, print ‘Hello world’ and that would just work, which is great for novice users.”

The application is automatically deployed from the IDE.

“I can send you a link and you can see the application running — all of that within seconds,” he said. “Compared to one of the difficulties in Python is the question [of], once I have my application, how do I deploy that to my users?”

Like GitHub, PyScript.com allows users to copy projects from others and modify them for their own purposes. Anaconda has also created templates to make it easier to add functionality.

“One of the concepts in PyScript is that you have your configuration file that allows you to set packages, that you’re using dependencies, plugins, and things like this,” he said. “This is just as easy as saying like, packages equals NumPy. And then here all of a sudden, I have a version that includes NumPy and I can use those as well.”
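To illustrate what such a configuration looks like in practice, here is a minimal, hypothetical PyScript page (not one of Anaconda's examples) using the <py-config> and <py-script> tags the project documented at the time; the package list and the script are placeholders:

<html>
  <head>
    <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" />
    <script defer src="https://pyscript.net/latest/pyscript.js"></script>
  </head>
  <body>
    <!-- Declares the environment: listed packages are fetched and installed in the browser -->
    <py-config>
      packages = ["numpy"]
    </py-config>
    <!-- Python runs client-side via Pyodide/Wasm; the output is rendered into the page -->
    <py-script>
      import numpy as np
      print("mean:", np.mean([1, 2, 3, 4]))
    </py-script>
  </body>
</html>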

While the IDE will be familiar to developers, one thing the project did not want to do was create a super powerful editor that might overwhelm non-programmers. That’s because one of PyScript’s goals is to be a language for the 99% of web users who aren’t programmers, as Anaconda co-founder and CEO Peter Wang told The New Stack in July.

“The biggest thing I would say to people who are like, ‘I don’t see why this is necessary,’ is simply [that] it’s not about you,” Wang said. “It’s for all of the schoolchildren that you don’t spend your afternoons trying to teach how to build their first application.”

Future Features for PyScript.com

But developers are still a major consideration, of course. One addition that many developers will like is a planned command line interface for PyScript.com that would enable developers to code on their own machines and then synchronize the work in real time with PyScript.com.

“If you’re a GitHub user, for instance, and you want to use that, we want to encourage this and say, keep using your tools, keep using everything and just use the features … that you need, like the ease of deployment or share-ability and things like that,” Pliger said.

There are also plans to add more social support, so that users can follow others to see what projects they’re working on and clone projects. Already, there are demos of games and other projects available from maintainers and other beta participants, he said. Pliger demoed a game that looked much like the original Super Mario Brothers and worked within the browser.

Image via Anaconda

“I’ve been amazed by the things that people are doing,” he said. “Games are a channel for students and people who would like to program but they can’t, to catch their interest, right? If you go to a class of students and you tell them, ‘Today we’re going to learn Python and learn how to print things on the screen,’ […] you would lose their interest. But if you go and say, ‘Hey, we’re going to use Python to automate Minecraft and to create blocks and things like this,’ all of a sudden you catch their interest, and they actually want to learn how to use that for their own interest. I think it’s very powerful.”

On another practical note, teachers can run scripts on a page within the content that students are learning, so there’s no need for two pages — which makes for a powerful learning experience, he added.

There will be paid tiers for Pyscript.com, but for now, Anaconda wants to encourage “passionate individuals to help pave the future of PyScript,” the press release stated. It’s offering a Founder’s Package for a one-time fee of $150 for those who want to be more involved in the project. Founders will get early access to beta features, have a channel for direct feedback to the core developers, and get one year of unlimited access to new features as they get released. In addition, Founders will receive special edition apparel featuring PyScript’s new mascot, Rabbit.

Since its debut in 2022, PyScript’s GitHub has grown to more than 15,000 stars and monthly usage reached more than 20,000 web developers, data science practitioners, and learners.

The post Python in the Browser: Free PyScript SaaS Launches appeared first on The New Stack.

]]>
What Wasm Needs to Reach the Edge https://thenewstack.io/what-wasm-needs-to-reach-the-edge/ Mon, 27 Mar 2023 16:00:24 +0000 https://thenewstack.io/?p=22703629

Write once, run anywhere. This mantra continues to hold true for the promise of WebAssembly (WASM) — but the keyword

The post What Wasm Needs to Reach the Edge appeared first on The New Stack.

]]>

Write once, run anywhere. This mantra continues to hold true for the promise of WebAssembly (WASM) — but the keyword is “promise” since we are not there yet, especially for edge applications, or at least not completely. Of course, strides have been made in WebAssembly’s ability to accommodate languages beyond JavaScript and Rust, as vendors begin to support TypeScript, Python and C#.

As of today, WASM is very much present in the browser. It is also rapidly being adopted for backend server applications. And yet, much work needs to be done before applications can reach the edge. The developer probably does not care that much — they just want their applications to run well and securely wherever they are accessed, without wondering why the edge is not ready yet, only when it will be.

Indeed, the developer might want to design one app, deployed through a WebAssembly module, that will be distributed across a wide variety of edge devices. Unlike years past, when designing an application for a particular device could require a significant amount of time reinventing the wheel for each device type, one of the beautiful things about WASM — once standardization is in place — is that a developer will be able to create, say, a voice-transcription application that can run not only on a smartphone or PC but on a minuscule edge device that can be hidden in a secret agent’s clothing during a mission. In other words, the application is deployed anywhere and everywhere across different edge environments simultaneously and seamlessly.

During the WASM I/O conference held in Barcelona, a few of the talks covered successes in reaching the edge, as well as what still needs to be accomplished before that happens, namely, having standardized components in place for edge devices.

The Missing Link

Edge is one of those buzzwords that can be misused or even misunderstood. For telcos, it might mean servers or phone devices. For industry, it might mean IoT devices, applicable to any industrial or consumer use case that requires connected devices with CPUs.

An organization might want to deploy WASM modules through a Kubernetes cluster in order to manage applications on edge devices. Such a WASM use case was the subject of the conference talk and demo “Connecting to devices at the edge with minimal footprint using AKS Edge Essentials and Akri on WASMs,” given by Francisco Cabrera Lieutier, technical program manager for Microsoft, and virtually by Yu Jin Kim, product manager at Microsoft’s Edge and Platforms.

Lieutier and Kim showed how a WASM module was used to deploy and manage camera devices through a Kubernetes environment. This was accomplished with AKS Edge Essentials and Akri. One of the main benefits of WASM’s small footprint was being able to remotely manage the camera device, which, like other edge devices such as thermometers and other sensors, lacks the CPU power to run Kubernetes, which would otherwise be required.

“How can we coordinate and manage these devices from the cluster?” Kim said. The solution used in the demo is Akri, a Kubernetes interface that makes connections to IoT devices with WASM, Kim explained.

However, while different edge devices can be connected and managed with WASM with AKS Edge Essentials and Akri, the edge device network is not yet compatible with say an edge network running under an AWS cluster from the cloud or distributed directly from an on-premises environment.

Again, the issue is interoperability. “We know that WebAssembly already works. It does what you need to do and the feature set of WASM has already been proven in production, both in the browser and on the server,” Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream, told The New Stack during the conference sidelines.

“The thing that’s missing is we don’t have interoperability, which we call portability — the ability to take the same module and, after rebuilding, deploy it to a different cloud. But you need a common interface, common runtime experience and specialization. That’s what the component model provides for interoperability.”

Not that progress isn’t being made, so hopefully the interoperability issue will be solved and a standardized component model will be adopted for edge devices in the near future. As it stands now, WASI has emerged as the best candidate for extending the reach of Wasm beyond the browser. Described as a modular system interface for WebAssembly, it is proving apt at helping solve the complexities of running Wasm runtimes anywhere there is a properly configured CPU — which has been one of the main selling points of WebAssembly since its creation. With standardization, the WASI layers should eventually be able to turn different Wasm modules into components that run on any and all edge devices with a CPU.

During the talk “wasi-cloud: The Future of Cloud Computing with WebAssembly,” Bailey Hayes, director of the Bytecode Alliance Technical Standards Committee and a director at Cosmonic, and Dan Chiarlone (virtually), an open source software engineer on Microsoft’s WASM Container Upstream team, showed in a working demo how wasi-cloud offers standardized interfaces for running Wasm code on the cloud.

“Our answer to the question of how do you write one application that you can run anywhere across clouds is with wasi-cloud,” Hayes said. “And you can imagine that using standard APIs, one application is runnable anywhere or on any architecture, cloud or platform.”

The post What Wasm Needs to Reach the Edge appeared first on The New Stack.

]]>
No More JavaScript: How Microsoft Blazor Uses WebAssembly https://thenewstack.io/no-more-javascript-how-microsoft-blazor-uses-webassembly/ Mon, 27 Mar 2023 14:22:53 +0000 https://thenewstack.io/?p=22702973

Last week I introduced you to Blazor, Microsoft’s web stack that eschews JavaScript and enables developers to use WebAssembly on

The post No More JavaScript: How Microsoft Blazor Uses WebAssembly appeared first on The New Stack.

]]>

Last week I introduced you to Blazor, Microsoft’s web stack that eschews JavaScript and enables developers to use WebAssembly on the client side. We saw quite a pleasing HTML/code separation on templates and a solid component system.

Now it is time to venture further into the purple man’s domain (the presenter in the explainer videos, Jeff Fritz, wears a very fetching purple blazer) and beyond.

You will remember from last time that you can have multiple routes marked out on the page. We can see an interesting variant using this on the example Counter.razor:

@page "/counter"

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p role="status">Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}


We can add another route to this one, which takes an argument that binds with StartingValue. You can see how readable this is:

@page "/counter"
@page "/counter/{startingValue:int}"

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p role="status">Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    [Parameter]
    public int StartingValue { get; set; }

    protected override void OnParametersSet()    
    {
        currentCount = StartingValue;
        base.OnParametersSet();
    }

    private void IncrementCount()
    {
        currentCount++;
    }
}


This takes a parameter from the URL and puts it straight into the code (allowing for capitalization differences) using an override. Clearly, we are picking up an OnParametersSet event. Without the parameter present, the code works as before. If I do have a parameter that matches the type int, the counter starts at that value instead.

This sets us up for the trickier binding of different types of HTML UI elements to C# code.

But for now, we will leave the Purple Man’s video lessons for a more pressing issue. This is the bit where we want to talk about the (purple) elephant in the room.

Are We Just Creating Server-Side Apps?

The answer is no. It is Microsoft’s business to create a unified development environment and blur the line between server-side and client-side. But Microsoft does make it clear that Blazor Server and Blazor WebAssembly are different projects.

However, there are a few shiny but confusing baubles in the explanations for the two options above. WebAssembly directly supports .NET on the browser, and thus gives you offline behavior. In the Server App, no C# goes to the client at all. SignalR sounds like a toothpaste, but it is just asynchronous communication helper code for the client/server connection.

You may also have noticed that the last sentence in both descriptions is identical. Microsoft is trying to square off its older systems with ways to support modern techniques while protecting its platform legacy.

Differences Between the Server and Wasm Apps

I want to compare the differences in approach between a server and a WebAssembly-based app by looking at an example component.

So what are the differences? The demo project I have been using is, under the hood, a Blazor Server app. So let’s look at the simple FetchData example. It shows some fake weather data.

On the backend is a simple fake service in pure C#:

public class WeatherForecastService
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    public Task<WeatherForecast[]> GetForecastAsync(DateOnly startDate)
    {
        return Task.FromResult(Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = startDate.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        }).ToArray());
    }
}


Note the use of Random to generate results, as you might expect. Each time I refresh, I get some different fake data.

Naturally, there is a FetchData.razor for the page, and it has two interesting parts:

@page "/fetchdata"
@using FirstBlazorApp.Data
@inject WeatherForecastService ForecastService
...


The inject directive is a form of Dependency Injection; Blazor uses this so that components can use available services in an independent manner — a real service in the wild would not be part of the calling program.

Here is the corresponding code at the bottom of FetchData.razor:

...
@code {
   private WeatherForecast[]? forecasts;

   protected override async Task OnInitializedAsync()
   {
      forecasts = await ForecastService.GetForecastAsync(DateOnly.FromDateTime(DateTime.Now));
   }
}


We know the rest of the code on the page is just going to loop through the array of WeatherForecast and display it.

So we are just missing the code that “registers” this service. That is added to the boilerplate of Program.cs:

...
var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();
builder.Services.AddSingleton<WeatherForecastService>();
...


It appears that the design treats the pages themselves as separate services. All components requiring a singleton service receive the same instance of the service.

So in summary, we make a fake service for our server, register it, then “inject” it into the pages of the App. The client’s only responsibility is to display the information — and that is done in the rest of FetchData.razor by some simple HTML.
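As an aside, and as a sketch of my own rather than anything in the demo project, the same injection pattern is what lets you swap the fake service for a real one later: register the service behind an interface, and no component has to change.

// Hypothetical refactoring (not part of the demo): hide the service behind an interface.
public interface IWeatherForecastService
{
    Task<WeatherForecast[]> GetForecastAsync(DateOnly startDate);
}

public class WeatherForecastService : IWeatherForecastService
{
    // ... same fake implementation as shown above ...
}

// Program.cs: choose the concrete implementation in one place.
builder.Services.AddSingleton<IWeatherForecastService, WeatherForecastService>();

// FetchData.razor: inject the abstraction instead of the concrete class.
// @inject IWeatherForecastService ForecastService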

This cannot possibly be the model for a WebAssembly client. So let’s pull in the WebAssembly equivalent of the demo project and compare the differences. And yes, there is a FetchData example in this project too. It runs the same as the server version, except there is a wait of a few seconds for .NET to load into the browser. The data doesn’t change after a refresh (we will see why shortly). This wait is acceptable as long as the page will hold the user’s interest for some amount of time. We can imagine that this will improve over time, too.

Let’s start with the top of FetchData.razor:

@page "/fetchdata"
@inject HttpClient Http
...


Ah — so we are not injecting some backend fake service. Because, of course, there is no backend. In the demo, the method used is an HttpClient, so you can suck up some data from a nice REST service.

How will that be applied?

@code {
    private WeatherForecast[]? forecasts;

    protected override async Task OnInitializedAsync()
    {
        forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("sample-data/weather.json");
    }
    ...
}


Ah, so the service is now an HTTP call for some trusty fake JSON.

The static JSON file is sitting in the wwwroot directory of the project, and hence the data won’t refresh when the app runs:

[
  {
    "date": "2022-01-06",
    "temperatureC": 1,
    "summary": "Freezing"
  },
  {
    "date": "2022-01-07",
    "temperatureC": 14,
    "summary": "Bracing"
  },
  {
    "date": "2022-01-08",
    "temperatureC": -13,
    "summary": "Freezing"
  },
  {
    "date": "2022-01-09",
    "temperatureC": -16,
    "summary": "Balmy"
  },
  {
    "date": "2022-01-10",
    "temperatureC": -2,
    "summary": "Chilly"
  }
]


We still need to find out how we register this service, and also find out why the HTTP started looking in our wwwroot. The answers happen to be on the same line:

...
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("#app");
builder.RootComponents.Add<HeadOutlet>("head::after");

builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });


So we add the HttpClient service and, in this case, the base address is set to the host environment’s base address. A scoped service is tied to the lifetime of a connection.
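As a general .NET note, rather than anything specific to this demo, the three built-in service lifetimes behave slightly differently in a WebAssembly app, where there is no per-request scope on a server; the type names below are illustrative:

// Hedged sketch of the three .NET dependency injection lifetimes:
builder.Services.AddSingleton<AppState>();   // one shared instance for the lifetime of the app
builder.Services.AddScoped<HttpClient>();    // one instance per scope; in Blazor WebAssembly this
                                             // effectively acts like a singleton, since the whole
                                             // app lives in a single browser tab
builder.Services.AddTransient<Formatter>();  // a fresh instance every time it is requested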

Subtle differences in approach aside, the two projects are largely the same. I hope this quick look at how Microsoft has smoothed the path for WebAssembly has tempted you to leave JavaScript behind and put on a colorful jacket.

The post No More JavaScript: How Microsoft Blazor Uses WebAssembly appeared first on The New Stack.

]]>
WebAssembly Providers Speed Ahead to Fill Serverless Gaps https://thenewstack.io/webassembly-providers-speed-ahead-to-fill-serverless-gaps/ Fri, 24 Mar 2023 09:00:37 +0000 https://thenewstack.io/?p=22703430

It can be safely said that the vast majority of developers and operations team members are not overly concerned about

The post WebAssembly Providers Speed Ahead to Fill Serverless Gaps appeared first on The New Stack.

]]>

It can be safely said that the vast majority of developers and operations team members are not overly concerned about the underlying mechanisms of serverless. In other words, what’s running underneath the hood — as long as it’s safe and secure — is of little interest. It’s the features that count.

At the same time, serverless has not lived up to its earlier promise of allowing for the deployment and management of applications with a minimal amount of operations required to support them. To this end, WebAssembly (WASM) providers are speeding ahead to fill shortcomings in these serverless applications.

They are also looking to add features and fill in the gaps in their offerings so that WASM can deliver on its earlier hype: a way to develop and deploy applications more securely and simply, and with better computing performance, than any widely adopted technology has enabled in the past.

What the WASM providers need to provide, and are trying to provide, to fill these gaps includes proper SDKs, so that developers can simply use the language of their choice beyond JavaScript and the difficult-to-learn Rust, and so that operations teams can rest assured that applications can be deployed securely across multiple targets and devices simultaneously, with little or no configuration required for the WASM module and the endpoint.

Indeed, the ability for WASM to run “anywhere applies not just to which processor and operating system you’re in,” but also its ability to accommodate “multiple other binaries … being able to run it inside other languages,” Fastly CTO Tyler McMullen said during his talk “The Return of Write Once, Run Anywhere.”

However, while WASM is now widely used for the browser applications for which it was originally created, it remains a work in progress. Work is ongoing to take full advantage of its runtime structure, which is designed to run directly on the CPU, in order to offer a more direct way to run the same application and code, whether distributed in containers or across different devices and environments.

The race to deliver on serverless was specifically addressed in talks about the launch of Fermyon’s open source Spin 1.0, VMware’s WASM Workers Server project and other discussions during the first day of the WASM I/O conference held in Barcelona.

Take a Spin

Spin 1.0’s main new feature is its ability to accommodate a number of languages in addition to Rust. Created and maintained by Fermyon and especially geared toward developers, it is designed to make up for some of WebAssembly’s shortcomings for serverless.

Fermyon describes Spin 1.0 — which is a Function as a Service — as the first stable release of the open source developer tool for building serverless applications with WebAssembly. The tool and framework guides the user through creating, building, distributing, and running serverless applications with WebAssembly, Fermyon says. This includes the ability for users to use starter templates to build, distribute and run applications from a single interface (or locally).
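To give a feel for that workflow, here is a rough sketch using the basic Spin commands documented around the 1.0 release; the template and application names are placeholders:

# Hedged sketch: scaffold, build and run a Spin application locally.
spin new http-rust my-api   # create a project from the HTTP starter template for Rust
cd my-api
spin build                  # compile the handler to WebAssembly
spin up                     # serve it locally, routing HTTP requests to the Wasm module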

Beyond Rust, Fermyon says it is building support for JavaScript, TypeScript, Python and C#, and Spin is integrated with HashiCorp Vault for managing runtime configuration. It is also designed for distributing applications using popular registry services and for allowing users to run applications on Kubernetes.

“Configurability is key when it comes to building distributed applications that run on different environments and Spin is no exception,” said Thorsten Hans, a cloud native consultant at Germany’s Thinktecture AG, during his talk “Spin it! Jumpstart your Wasm journey with Fermyon Spin.”

WASM Workers Unite!

Not unlike Fermyon’s Spin, VMware’s WASM Workers Server open source project is designed to offer a very quick and painless path to getting started for serverless application development and deployments with WASM.

During their talk “Develop serverless apps with WASM Workers Server,” VMware staff engineers Angel De Miguel and Rafael Fernandez Lopez, both of VMware’s Office of the CTO, described how a user can get started in just a few minutes. VMware and Fermyon are, of course, not the only providers of WASM FaaS and other open source WebAssembly alternatives; others include Cosmonic’s wasmCloud, the Suborbital Extension Engine and numerous other stand-alone and in-house WASM projects.

Eventually, the idea is to make everything easy and seamless for developers for serverless.

“We want to get as many developers as possible using WebAssembly so they only have to worry about developing applications and not the underlying infrastructure needed to support those applications,” De Miguel said.

More to Life Beyond Serverless

Serverless computing continues to grow in demand for a range of use cases, for those organizations seeking to create and run applications with a relatively minimal amount of infrastructure management involved. This is good news for the startup seeking to offer software applications, services or both without making significant investments in on-premises servers or having to configure or manage their own infrastructure through a cloud vendor.

Just one step removed from adding an API or service on top of prebuilt Software as a Service (SaaS) platforms, a serverless alternative allows organizations to begin offering their own business service or application with a minimal amount of overhead and fewer administrative and management tasks for maintenance.

However, WebAssembly is certainly not just about serverless. Indeed, thinking WASM might replace serverless would be to miss the point. WASM can be thought of as a different mindset, not only in terms of its computing structure but especially for deployments in general. Sure, you can use WASM to run serverless applications, but it is much more than that, because it is a way of deploying, among other things, applications in a highly distributed way.

So, eventually, you might one day turn to a SaaS or cloud provider for a serverless application with WASM running underneath. You might also use WASM directly to distribute applications run in a wide variety of languages simultaneously across distributed environments, including not only Kubernetes clusters but across edge devices as well.

Meanwhile, for serverless alone, it is highly likely that cloud vendors and SaaS providers will follow VMware, Fermyon and Cosmonic. “They’re all obviously looking at WASM for being able to run their serverless applications,” Bailey Hayes, director of the Bytecode Alliance Technical Standards Committee and a director at Cosmonic, said on the sidelines of the conference. “It is just so much better than anything else,” she said, in consideration of its very small and simple computing structure, its ability to do cold starts very well and a number of other features.

The post WebAssembly Providers Speed Ahead to Fill Serverless Gaps appeared first on The New Stack.

]]>
VMware and Other Wasm Players Want WebAssembly  https://thenewstack.io/vmware-and-other-wasm-players-want-webassembly/ Thu, 23 Mar 2023 15:52:37 +0000 https://thenewstack.io/?p=22702342

The signs were everywhere: WebAssembly was going to be something big. This was already becoming apparent not long after the

The post VMware and Other Wasm Players Want WebAssembly  appeared first on The New Stack.

]]>

The signs were everywhere: WebAssembly was going to be something big. This was already becoming apparent not long after the World Wide Web Consortium (W3C) named WebAssembly, or Wasm, a web standard in 2019, alongside HTML, CSS and JavaScript (the bread-and-butter language for Wasm thus far). So, while web browser applications have represented Wasm’s central and historical use case, that is not what Wasm’s potential is about.

Wasm is designed to let applications run anywhere there is a properly configured CPU, which is to say anywhere a Wasm runtime exists for the CPU architecture, and it is expected to serve as a very important computing structure. This is because Wasm allows applications to be deployed and distributed with fewer security layers to manage, a single configuration for deployment across numerous devices simultaneously and a relatively small runtime structure that lends itself to low latency when deploying across networks. It became obvious how it could lend itself well to a host of new target environments: serverless applications, or anywhere code might run on a processor, such as IoT or other edge devices.

VMware Wasm Labs

In this context, about two years ago, Daniel Lopez Ridruejo, a senior director at VMware and the CEO of Bitnami before VMware acquired it in 2019, was one of more than a few engineers and technology executives at VMware and elsewhere to realize that Wasm was going to do a lot more than help broadly improve the browser experience. The result was the formation of VMware’s Wasm Labs, a team within the Office of the CTO at VMware.

“Nobody had really figured out exactly how everything was going to work, but I realized there had been a shift,” Ridruejo said. “I came to the leadership of VMware, and I said ‘hey, we should be paying attention to this.’”

Even without necessarily creating a formal department to support Wasm-related open source projects, development, integrations with infrastructure and network topologies, or Wasm application development, tech leaders beyond VMware are almost invariably working with Wasm, whether in production or as a sandbox project.

“Wasm is a powerful core technology that continues the brilliant story of an enterprise app store that was originally envisioned by Bitnami. These guys were close to creating a universal app store for simple and consistent cross-platform deployment of complex enterprise apps, but ultimately fell short of enabling turnkey deployments due to the extreme diversity of the target infrastructure environments,” Torsten Volk, an analyst at Enterprise Management Associates (EMA), said. “Wasm eliminates this complexity and could ultimately make this type of turnkey app store for enterprise apps possible.”

No Guarantees

We are still not in a Wasm renaissance at this point. While using Wasm to run browser applications is becoming very popular, much work remains to be done for Wasm to properly integrate the other languages it is designed to accommodate. Work also still needs to be done to create the layers on top of Wasm that will allow it to live up to its full potential: a single application configuration that can be created once and deployed across multiple devices and networks.

“Of course, there is a lot of homework to be done in the form of language and infrastructure integration, but the Wasm guys are on the right path and all this needs is some patience and some investments into the folks currently creating these integrations,” Volk said. “It will be key that Wasm not only works but is super simple to program for and deploy to.”

To help provide missing links to pave the way for Wasm’s adoption, VMware’s Wasm Labs initiated the Wasm Language Runtimes project. The idea is to “provide basic building blocks for developers who are looking to adopt WebAssembly,” Ridruejo said. The Language Runtime project can be used with other projects, such as mod_wasm (for running traditional web applications, such as WordPress), Wasm Workers Server (for running edge/serverless apps) or other open source projects, such as Fermyon’s Spin, Ridruejo said.

“Adding language support is hugely beneficial to the entire Wasm community — there are around 20 languages that Wasm must support to be successful,” Matt Butcher, co-founder and CEO of Fermyon Technologies, said. “Wasm Labs first released PHP for Wasm, and this was a brilliant opening move. I dropped everything and went to try it out, and even added support to Fermyon Spin.”

Meanwhile, in Fermyon’s case for Spin, VMware’s Wasm Language Runtimes project is beneficial because “Spin does not support running canned web applications without the need for significant setup and integration work,” Volk said. “Wasm Language Runtimes delivers turnkey runtimes for popular enterprise apps and, I believe, supports the installation of today’s hottest data science libraries, such as Pandas and Numpy.”

The Python Connection

In order for Wasm to realize its potential for use cases beyond the browser, such as for edge devices, backend server applications and IoT, Wasm will need to be able to accommodate Python better than it does now.

“Python is a flexible language. It plays an important role in server-side web development, in data and ML in UNIX system scripting and in other places,” Butcher said. “When a general-purpose scripting language like this is well-implemented in Wasm, we get a glimpse into the breadth of applications Wasm can be used for.”

At this point, the core work around Python is done (thanks to VMware and also Christian Heimes, who initiated the CPython work), Butcher said. “That is, we have the runtime. There are rough edges to sand down, but Python-in-Wasm is usable,” Butcher said. “In the next few months, we will start to see how developers apply Python in the Wasm space. If there is one major thing Python needs in the Wasm space, it is support for Numpy and Pandas.”

The big appeal of Python is that there “is a library for everything,” Volk said. “Therefore developers can create feature-rich apps around these libraries, without having to worry about exactly how things work under the covers. For example, if I want to grab data from different APIs to use in my own app, I do not have to create most of the API integrations and neither do I need to figure out how to write the resulting data streams to my SQL or NoSQL database — there are turnkey libraries that do all of this for me,” Volk said. “All I need to do is chain them together and pass them some parameters. Once I have the freedom to do all this in Wasm, we will quickly achieve mass adoption.”

VMware recently added Python support to Wasm Language Runtimes. According to the project’s documentation, Wasm Language Runtimes offers popular language runtimes, such as Python, Ruby and others, precompiled to WebAssembly, tested for compatibility and kept up to date when new versions of the upstream languages are released.

It provides a build of Python for the wasm32-wasi target. Based on the WASI support that is already available in CPython (the mainstream, C-based implementation of Python), the runtimes are augmented with additional libraries and usage examples to make them as easy to use as possible, VMware staff engineer Asen Alexandrov wrote in a blog post.
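As a rough illustration of what a wasm32-wasi build of Python means in practice, and assuming a standalone WASI runtime such as Wasmtime is installed, a script can be run along these lines; the file names are placeholders:

# Hedged sketch: run a Python script on a WASI runtime with a precompiled python.wasm.
wasmtime --dir=. python.wasm app.py   # --dir preopens the current directory so the
                                      # interpreter can read app.py and any data files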

“Our goal is to work with upstream projects whenever possible. CPython added WASI support later last year and it has been great,” Ridruejo said. “SingleStore and Fermyon are companies that pioneered Python work in this area.”

Interestingly, VMware is not focused on runtime performance, but rather on compatibility with the largest number of applications, Ridruejo said. “For that, we are currently focusing on adding as much third-party library support as possible and related dependencies, such as libXML and graphics manipulation libraries.”

The post VMware and Other Wasm Players Want WebAssembly  appeared first on The New Stack.

]]>