Contributed"> The Edge Is Replacing the Cloud, and Mobile Is Making It Happen - The New Stack
Modal Title
Edge Computing / Software Development

The Edge Is Replacing the Cloud, and Mobile Is Making It Happen

Jan 4th, 2018 9:00am by Alexander Stigsen

Alexander Stigsen
Alexander is co-founder and CEO of Realm, the leading mobile data platform embedded in over 3.5 billion mobile applications.

Something weird happened last year at Apple’s launch of its latest iPhones. At least according to some benchmarks, this newest generation of devices is even faster than Apple’s latest MacBook Pro line. If there hasn’t yet been a clear starting line to mark the era of edge computing, we’ve got one now: the devolution of processing power from servers to phones has progressed far enough that it simply makes sense for more work to be done at the edge.

In just the six years between the release of the original iPhone and the iPhone 5S, the CPU got 54 times faster. A typical smartphone is now far faster than the servers that once enabled that phone to do anything interesting. And mobile networks are evolving too. We’re only a few years away from the broad rollout of 5G networks, which aim to dramatically reduce latency and increase data speeds even further for connected mobile devices.

5G is about more than speed: it needs to support the many devices that will become part of the Internet of Things. How many devices? The current spec calls for supporting up to one million connected devices per square kilometer, each of which will need a reliable, efficient connection. Even if they’re not pushing the limits of performance, the sheer volume of deployment means these devices will benefit just as much from advances in edge computing as the iPhone 16 might.

In the face of all of these developments in devices and the networks that power them, there’s one thing that isn’t changing very fast at all: the time it takes for an HTTP call to get from your phone to a data center and back again. With that latency holding nearly constant, the trend is for interesting work to happen closer and closer to the device, and to be distributed across more and more devices.

What could that future look like? When it comes to autonomous vehicles, finding the right place for an algorithm to do its work can mean life or death. Especially for split-second decisions in traffic, speed matters, and by building the technologies that underlie edge computing properly, we can give a vehicle’s computer as good a chance as possible of getting that decision right. A future where people get hurt or killed because a car takes too long to get a response back from a server in a data center is not the future we want.

For mobile developers, that future feels pretty close. After all, if you’ve been building mobile apps, you’ve already been building on the edge. Building apps that are resilient and capable is nothing new to you. You’ve been dealing with intermittent networks, learning how to keep features working even when your users go offline, and how to resolve conflicts when they get a connection back again. You’ve figured out how to package high-performance features like machine learning into something that can run on a phone. You’ve lived with battery, processing and storage constraints, and know how to get the most out of them.

The many advancements in mobile databases were at least partly responsible for turning apps into something more than lazy clients. Sure, you still fetch data from and post data to the server every chance you get, but once you’ve got that data, why should you have to re-fetch it? Instead, you store any fetched data locally and show your users what they’re looking for by querying the local database first, avoiding a round-trip to the server unless it’s absolutely necessary.
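
To make that pattern concrete, here is a minimal Swift sketch of a cache-first read path. The names `LocalStore`, `ArticleRepository` and `fetchFromServer` are illustrative stand-ins for whatever mobile database and networking layer an app actually uses, not any particular vendor’s API.

```swift
import Foundation

// Illustrative model object.
struct Article {
    let id: String
    let title: String
}

// Stand-in for an on-device database: an in-memory map for the sketch.
final class LocalStore {
    private var articles: [String: Article] = [:]

    func article(withID id: String) -> Article? {
        return articles[id]
    }

    func save(_ article: Article) {
        articles[article.id] = article
    }
}

final class ArticleRepository {
    let store = LocalStore()

    // Hypothetical network call; a real app would issue an HTTP request here.
    func fetchFromServer(id: String, completion: @escaping (Article) -> Void) {
        completion(Article(id: id, title: "Fetched from server"))
    }

    // Query the local database first; only go to the network on a miss.
    func article(withID id: String, completion: @escaping (Article) -> Void) {
        if let cached = store.article(withID: id) {
            completion(cached)          // served entirely from the device, no round-trip
            return
        }
        fetchFromServer(id: id) { fresh in
            self.store.save(fresh)      // persist so the next read stays local
            completion(fresh)
        }
    }
}
```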

Building apps also opens you up to another deeply frustrating reality: Networking is painful. Often, doing it right results in writing an outrageous amount of code to capture all the possible error states you might run into. But what if you’re trying to accommodate the shifting locations of nodes and servers that will make up edge computing? You’d have to write so much code just to get data to the right place.

By taking the focus off of the endpoint-based networking of RESTful APIs, and focusing more on the flow and processing of data that comes through the network, mobile developers get to focus on writing actually interesting code. Instead of writing new endpoints and networking code for nearly every button in the app, you just make changes to data in your local database, and rely on a sync system to move that data to your servers.
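
Here is roughly what that write path might look like. This is a hedged Swift sketch, not a real sync engine: `SyncEngine`, `FavoritesStore` and `markFavorite` are hypothetical names, and the queueing stub stands in for the batching, conflict resolution and retry logic a production sync system would need.

```swift
import Foundation

// A local record with a dirty flag the sync layer can watch.
struct Favorite {
    let articleID: String
    var isFavorite: Bool
    var pendingUpload: Bool
}

final class SyncEngine {
    func enqueue(_ change: Favorite) {
        // A real sync engine would batch changes, resolve conflicts and
        // retry when connectivity returns; this stub just logs.
        print("queued for sync: \(change.articleID)")
    }
}

final class FavoritesStore {
    private var favorites: [String: Favorite] = [:]
    private let sync = SyncEngine()

    // A button tap becomes a local write, not a bespoke endpoint call.
    func markFavorite(articleID: String) {
        var record = favorites[articleID]
            ?? Favorite(articleID: articleID, isFavorite: false, pendingUpload: false)
        record.isFavorite = true
        record.pendingUpload = true     // mark the change as not yet uploaded
        favorites[articleID] = record
        sync.enqueue(record)            // the sync layer decides when and where this change travels
    }
}
```

The point of the dirty flag is that the app never has to know whether the network is reachable at the moment of the tap; the change is safe locally either way, and the sync layer moves it when it can.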

You probably don’t want to solve sync on your own, but there are great mobile platforms that will save your mobile team an enormous amount of time it would otherwise spend building out its own server tech. Using one of those to back a RESTless architecture lets you build resilient, intelligent networks that distribute data to where it needs to go.

The intelligence of the network is core to realizing the fullest possibilities of edge computing. If the network is smart, then it can figure out where to send changes, run compute tasks, and store persistent data. And your users win — whether it’s because their cars will learn to drive themselves, or simply because their apps work better and do new, cool things.

Feature image by Micah Williams on Unsplash.
