
Google’s DeepMind Extends AI with Faster Sort Algorithms

As improvements in hardware performance start to level off in the decades ahead, faster sorting algorithms such as Google's will be essential to scaling AI operations.
Jun 13th, 2023 12:09pm
Image by congerdesign from Pixabay.

Computing pioneer Grace Hopper once quipped that the most dangerous phrase in data processing is "We've always done it this way." In that spirit, Google's DeepMind searched for a faster sorting algorithm using an AI system — and the company's researchers are now claiming the new algorithms they've found "will transform the foundations of computing."

Google emphasized that sorting algorithms affect billions of people every day — from how online search results get ranked to how data gets processed. But “Making further improvements on the efficiency of these routines has proved challenging,” notes a recent paper from DeepMind, “for both human scientists and computational approaches.” DeepMind focused on the algorithms for sorting short sequences — with between three and five elements — because they’re the most commonly used (often called when sorting even larger sequences).
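To see why those short sequences matter so much, consider how a general-purpose sort is typically structured: a divide-and-conquer routine that hands small subranges off to a specialized fixed-length sorter. The C++ below is an illustration only, not the actual libc++ code, and the names sort_small and hybrid_sort are hypothetical; it simply shows where a faster sort for three-to-five elements would pay off.

```cpp
#include <algorithm>

// Illustration only (not the actual libc++ implementation): a quicksort-style
// routine that hands short subranges to a specialized small-N sorter. That
// small-N sorter is the hot path where faster three-to-five element sorts apply.
void sort_small(int* first, int* last) {
    std::sort(first, last);  // stand-in for hand-tuned fixed-length code
}

void hybrid_sort(int* first, int* last) {
    if (last - first <= 5) {   // short sequences are handled directly
        sort_small(first, last);
        return;
    }
    int pivot_value = *first;
    int* mid = std::partition(first + 1, last,
                              [pivot_value](int x) { return x < pivot_value; });
    std::iter_swap(first, mid - 1);  // place the pivot between the partitions
    hybrid_sort(first, mid - 1);
    hybrid_sort(mid, last);
}
```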

And for short sequences of numbers, their results were up to 70% faster.

But even for longer sequences with over 250,000 elements, the results were still 1.7% faster. And this isn’t just an abstract exercise. Google has already made the code open source, uploading it into LLVM’s main library for standard C++ functions — the first change to its sorting algorithm in over a decade. Google proudly points out that “millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management.”
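One practical consequence: any C++ program that calls std::sort and is built against a libc++ release containing the new routines picks up the improvement without any source changes. A minimal usage sketch:

```cpp
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v = {4, 1, 3, 2};
    // On libc++ builds that include the AlphaDev-derived routines, short
    // ranges like this one are handled by the updated fixed-length sorts.
    std::sort(v.begin(), v.end());
    return 0;
}
```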

In announcing their results, DeepMind offered more examples where they'd applied AI to real-world problems, trying to demonstrate that beyond all the hype, some truly impactful improvements are waiting to be discovered. It's interesting to see how they approached the problem — but the exercise also raises the possibility that some long-hidden secrets may finally be unlocked with our new and powerful AI systems.

How They Did It

To hunt for improvements, DeepMind drilled down to one of the lowest levels of programming: assembly language (a human-readable representation of machine code).

Their blog post calls this “looking where most humans don’t” (or “starting from scratch”). “We believe many improvements exist at this lower level that may be difficult to discover in a higher-level coding language,” argues DeepMind’s blog. “Computer storage and operations are more flexible at this level, which means there are significantly more potential improvements that could have a larger impact on speed and energy usage.”
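As a rough illustration of what that low-level flexibility looks like: fixed-length sorts are usually built from compare-and-swap steps that compilers lower to branch-free conditional moves on most CPUs, and it is exactly those instruction sequences that AlphaDev manipulates directly. The C++ below is a hedged sketch of the general pattern, not AlphaDev's output.

```cpp
#include <algorithm>

// Sketch of a compare-exchange step, the building block of fixed-length
// sorting networks. On typical hardware, std::min/std::max on integers
// compile down to branch-free conditional-move instructions, which is the
// level of detail AlphaDev optimizes directly in assembly.
inline void compare_exchange(int& a, int& b) {
    int lo = std::min(a, b);
    int hi = std::max(a, b);
    a = lo;
    b = hi;
}

// A three-element sorting network expressed with compare-exchange steps.
inline void sort3(int& a, int& b, int& c) {
    compare_exchange(b, c);
    compare_exchange(a, c);
    compare_exchange(a, b);
}
```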

For their search, the researchers created a program based on DeepMind's AlphaZero, which beat the world's best players in chess and Go. That program trained solely by playing games against itself, getting better and better through a kind of massively automated trial and error that eventually converges on an optimal approach. DeepMind's researchers modified it into a new coding-oriented program called AlphaDev, calling this an important next step. "With AlphaDev, we show how this model can transfer from games to scientific challenges, and from simulations to real-world applications," they write on the DeepMind blog.

The breakthrough came when AlphaDev transformed coding into a new kind of game, in which it continually adds single instructions to its algorithm and assesses the results. ("Winning a game" is replaced here by rewards for correct and speedy results.) The researchers called it "AssemblyGame," and the blog points out that the number of possible combinations of instructions "is similar to the number of particles in the universe." But the paper also clearly quantifies the game's stakes.

“Winning the game corresponds to generating a correct, low-latency algorithm using assembly instructions.”
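The paper frames this as a reinforcement-learning reward: a candidate program only scores well if it produces correct output on test inputs, and among correct programs, shorter (and therefore typically lower-latency) ones score higher. The sketch below is a loose illustration of that reward shape; the structures, names, and constants are assumptions, not DeepMind's implementation.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Loose illustration of the "AssemblyGame" reward idea. In the real system
// the candidate is a sequence of assembly instructions; here a callable that
// sorts a vector stands in for it, and instruction count is used as a rough
// proxy for latency. All names and constants are illustrative.
struct Candidate {
    std::function<void(std::vector<int>&)> program;
    int instruction_count;
};

double reward(const Candidate& candidate,
              const std::vector<std::vector<int>>& test_inputs) {
    for (const auto& input : test_inputs) {
        std::vector<int> got = input, want = input;
        candidate.program(got);
        std::sort(want.begin(), want.end());
        if (got != want) return -1.0;  // incorrect programs are penalized
    }
    // Correct programs: fewer instructions earn a higher reward.
    return 1.0 - 0.01 * candidate.instruction_count;
}
```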

DeepMind’s blog post reports the newly discovered sorting algorithms “contain new sequences of instructions that save a single instruction each time they’re applied.” (It then envisions this savings multiplied by the trillions of times a day this code is run.) “AlphaDev skips over a step to connect items in a way that looks like a mistake but is actually a shortcut.” (DeepMind’s blog argues this is similar to a Go move by AlphaZero that looked like a mistake but ultimately led it to victory — and believes the discovery “shows AlphaDev’s ability to uncover original solutions and challenges the way we think about how to improve computer science algorithms.”)

Their paper says it shows “how artificial intelligence can go beyond the current state of the art,” because ultimately AlphaDev’s sorts use fewer lines of code for sorting sequences of between three and eight elements — for every length except four. And these shorter algorithms “do indeed lead to lower latency,” the paper points out, “as the algorithm length and latency are correlated.”

The current (human-generated) sort for up to four numbers first checks the length of the sequence, then calls an algorithm optimized for that length (unless the length is one, in which case no sorting is required). But AlphaDev realized that with four-element sequences, it’s faster to sort just the first three elements and then use a simpler routine to find the fourth element’s position among the three already sorted. This approach eliminates much of the overhead of “branching” into an entirely different set of code for every possible sequence length. Instead, AlphaDev handles most sequence lengths as part of its first check, which compares the length to the number two (see the C++ sketch after this list).

  • Is length < 2? (If there’s only one element, just return it.)
  • Is length = 2? (If there are two elements, sort them and return them.)
  • Is length > 2? (Sort the first three elements. If there were only three elements, return them.)
  • If there are four elements, find the position of the fourth element among the already-sorted three.
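Here is a minimal C++ sketch of that control flow. It illustrates the idea described above rather than reproducing AlphaDev's generated assembly, and the function name is hypothetical.

```cpp
#include <utility>

// Illustrative sketch of the control flow described above, not AlphaDev's
// actual generated assembly. Lengths 1-3 fall through a single path; a
// possible fourth element is then inserted into the already-sorted three.
void sort_up_to_4(int* v, int n) {
    if (n < 2) return;                        // one element: nothing to do
    if (n == 2) {                             // two elements: one compare-swap
        if (v[1] < v[0]) std::swap(v[0], v[1]);
        return;
    }
    // Length > 2: sort the first three elements with a tiny sorting network.
    if (v[1] < v[0]) std::swap(v[0], v[1]);
    if (v[2] < v[1]) std::swap(v[1], v[2]);
    if (v[1] < v[0]) std::swap(v[0], v[1]);
    if (n == 3) return;
    // Four elements: insert v[3] into the sorted first three.
    int x = v[3];
    int i = 3;
    while (i > 0 && v[i - 1] > x) {
        v[i] = v[i - 1];
        --i;
    }
    v[i] = x;
}
```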

Beyond Coding

Their paper applauds the results as “both new and more efficient than the state-of-the-art human benchmarks.” But that was just the beginning. DeepMind moved on, discovering a new hashing algorithm that was 30% faster for inputs in the 9-16 byte range (and adding it to Google’s Abseil library of C++ functions in January).
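The announcement doesn't spell out which internal routine changed, but Abseil's public entry point for hashing is absl::Hash, so callers hashing short keys in that 9-16 byte range would pick up the benefit without code changes. A usage sketch, with the speedup figure taken from the article rather than measured here:

```cpp
#include <cstddef>
#include <string>
#include "absl/hash/hash.h"

// Usage sketch: hashing a short string key through Abseil's public hashing
// API. Keys in the 9-16 byte range are the case where the article reports
// the AlphaDev-discovered routine is about 30% faster.
std::size_t HashKey(const std::string& key) {
    return absl::Hash<std::string>{}(key);
}
```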

Google also sicced AlphaZero on its data centers to optimize how workloads are distributed, according to another post, ultimately resulting in a 19% drop in underused hardware. It also improved the compression of videos on YouTube, reducing the bitrate by 4%.

DeepMind now argues that AlphaDev’s success at coding represents a step toward general-purpose AI tools that solve problems to the benefit of society — including helping to optimize more of our code. And while better hardware has “kept pace” for the last half century, “as microchips approach their physical limits, it’s critical to improve the code that runs on them to make computing more powerful and sustainable.”

The paper points out this isn’t the first use of reinforcement learning for optimizing code, and that some earlier efforts even tried to optimize sorting algorithms.

So maybe the most affirming message is the reminder that no single corporation is driving the progress. Instead, the results announced this month are just part of a larger, broad-based human effort to deliver real and tangible benefits using our newest tools.

And as society weighs potential dystopian futures and the possible dangers of AI systems, maybe that’s balanced by the prospect that AI systems could also deliver genuinely beneficial outcomes like these.
