(Flickr Photo / Tim Green CC 2.0)

The last ten years marked a centralization of computing, as we moved away from relying on our individual computers for processing and toward a world in which lightweight mobile apps and web services, backed by powerful cloud data centers, took over.

At Structure 2017 in San Francisco on Tuesday, it was pretty clear things are moving back in the other direction.

Several of the sessions on the opening day of the venerable cloud computing conference addressed the growing certainty that computing power is moving back to intelligent connected devices on the “edge” of the network. Microsoft CEO Satya Nadella made it a key theme of his opening keynote at Microsoft Build in May, and momentum toward this shift would appear to be growing.

David King, Foghorn Systems CEO. (LinkedIn Photo)

Edge computing “improves security and privacy, and is more close to real-time latency,” said David King, CEO of Foghorn Systems. Foghorn develops machine-learning software that can operate within the constraints of connected devices in the field, such as industrial machines. Or, as he put it, “We deliver a small footprint computing capability to execute data science.”

Latency — the delay incurred when a signal travels across a network — is what causes the breakdown as these connected devices spread throughout both the commercial and industrial internet, relying on back-end cloud services to unlock data insights or control complicated machinery. The demands of our real-time world are running up against the limitations of the speed of light.
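To see why the speed of light is a hard floor, a quick back-of-the-envelope calculation helps. The sketch below assumes signals travel through fiber at roughly two-thirds the speed of light in a vacuum; the distances are illustrative, not drawn from the article.

```python
# Physical lower bound on network round-trip latency over fiber.
# Assumption: light in fiber moves at roughly 2/3 of c.

SPEED_OF_LIGHT_KM_S = 299_792      # km/s in a vacuum
FIBER_FACTOR = 2 / 3               # typical slowdown inside optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A machine 2,000 km from its cloud region can never see a round trip
# faster than ~20 ms -- before any routing, queuing, or processing.
print(f"{min_round_trip_ms(2000):.1f} ms")
# An on-premises edge device one kilometer away is effectively instant.
print(f"{min_round_trip_ms(1):.3f} ms")
```

For a control loop that needs single-digit-millisecond reaction times, no amount of cloud optimization can beat that floor — which is exactly the case for moving the computation onto the device itself.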

Edge computing will allow expensive manufacturing devices to do data analysis directly on the machine. (Wikimedia Photo / CC 3.0)

There are also cost benefits from being able to process data locally, said Jason Shepherd, director of IoT strategy and partnerships at Dell. Because of data egress fees at cloud providers, “it’s getting expensive if you have to pay every time you want to touch (your data).”

As with most shifts in technology, don’t expect everything to go along for the ride to the edge.

“There are certain classes of apps where the edge will be more important than others,” said Silicon Valley veteran and C3 IoT CEO Tom Siebel, citing surveillance systems as one area where sophisticated image recognition capabilities running directly on the cameras themselves could provide real-time intelligence that would be harder to do remotely.

(L to R): George Gilbert, Big Data and Data Analytics Analyst, Wikibon/The Cube; David King, CEO, Foghorn Systems; and Jason Shepherd, director of IoT partnerships, Dell, discuss edge computing at Structure 2017. (Structure Photo / Philip Van Nostrand)

But any application that finds itself constrained by the delay in connecting to cloud servers will benefit from increased computing power on the device itself. There’s a concept in cloud computing known as “data gravity,” which could also be thought of as data inertia to some extent: data at rest tends to want to stay at rest.

“The conditions that cause data gravity are going to push workloads out to the edge,” said Mark Russinovich, CTO for Microsoft Azure. He pointed to examples such as the budding market for augmented reality devices, where a fair amount of computing will have to be done on the device to generate real-time results, and self-driving vehicles; “any time life is involved, you want the device to be autonomous.”

Mark Russinovich. (Microsoft Photo)

Edge computing will also take off in places where local networks are underdeveloped, such as emerging markets around the world. At this point, there are lots of places on the planet that have been touched by wired or wireless internet access, but those networks aren’t necessarily maintained and updated at the pace their users would like.

Lior Netzer, vice president and general manager of Akamai’s mobile business unit, laid out a scenario in which a coffee shop in India that wanted to offer online content to its customers could have that content pushed to a local machine overnight, when fewer people are using the network. The coffee shop could then serve that content directly to its customers when it opens for business the next morning. Customers would get speedy access to news and information that wouldn’t necessarily be possible if they all tried to pull it from the cloud at the same time.
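The overnight-prefetch pattern Netzer described can be sketched in a few lines. Everything here is a hypothetical illustration — the class, the URL list, and `fetch_from_origin` are invented for the example, not Akamai APIs.

```python
# Minimal sketch of an edge node that prefetches content off-peak
# and serves it locally during business hours. All names are
# hypothetical; this is not an Akamai interface.

class EdgeCache:
    def __init__(self):
        self._store = {}

    def prefetch(self, urls, fetch_from_origin):
        """Run overnight: pull content while the network is quiet."""
        for url in urls:
            self._store[url] = fetch_from_origin(url)

    def serve(self, url):
        """Run during the day: answer from the local copy if present."""
        return self._store.get(url)  # None -> would need a slow origin fetch


# Simulate the coffee-shop scenario with a stand-in origin server.
def fake_origin(url):
    return f"content for {url}"

cache = EdgeCache()
cache.prefetch(["/news/front-page", "/sports/scores"], fake_origin)
print(cache.serve("/news/front-page"))  # served locally, no origin round trip
```

The design choice is simple: shift the expensive network traffic to a time window when capacity is idle, then absorb the daytime demand entirely from local storage.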

And thinking about how to operate at the edge, under constraints not normally experienced by cloud users on fast reliable connections, has other benefits. “Problems in constrained emerging markets lead to innovation in developed markets,” Netzer said. “The ability to take caching and put it on any device opens up a lot of possibilities.”

As Russinovich mentioned, several speakers agreed the self-driving car will probably be the first big test of edge networking.

Even though the evolving 5G wireless networking standard promises extremely fast connections to mobile devices (and yes, we’re going to start talking about cars as mobile devices), the sensitivities around how self-driving cars are deployed will likely mean that a lot of the computing required to safely navigate will be done locally. Self-driving cars will be responsible for taking in data to paint a picture of their surroundings and making instant decisions about how to react to that data, while also gathering data on their own activities for maintenance purposes.

“Why take that data from the car?” Foghorn’s King wondered. There doesn’t appear to be a very good answer.
