Edge computing will change many of the underlying assumptions that applications make in the era of cloud and mobile computing, and a new project led by Pittsburgh’s Carnegie Mellon University wants to make sure that developers are prepared.
The project, known as CONIX, is “looking at how we can design future communications networks to act more like computers rather than things that push along packets,” said Anthony Rowe, a professor at CMU leading the effort, which also includes researchers from the University of Washington.
Backed by a $27.5 million grant from the Semiconductor Research Corporation — an industry trade group that includes Intel, Microsoft, and Arm — the idea is to prepare for the rise of edge computing by figuring out how to write software that can take advantage of the push to place more processing power across different points on a network.
You’re going to hear a lot about edge computing over the next few years. Microsoft CEO Satya Nadella actually made it a central theme of his Build 2017 keynote speech to Microsoft developers. The concept is unfolding as the internet of things is finally coming online at scale, and it holds that processing power is going to move out of public clouds and huge data centers back toward devices on a network.
Rowe compared the CONIX effort to the human nervous system. The brain is responsible for most of our cognition, and our network of senses gives us the data needed to make decisions. But some actions — the reflexive jerk away from a hot stove, for example — are actually handled by the spinal cord, which can react much more quickly than the brain.
As real-time computing becomes more central to our lives, building out the “spine” in this scenario becomes more important. There are a growing number of applications that rely on sensor data to make decisions, and waiting for the central cloud or data center to make those decisions can cost users time and money. Nadella’s example involved a factory floor with extremely expensive machines where detecting an anomaly in a machine and reacting quickly could save millions in maintenance or replacement costs.
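The factory scenario above boils down to a simple pattern: screen sensor readings locally and only pay the round trip to the cloud when something looks wrong. As a minimal sketch (the threshold logic and function names here are illustrative, not part of CONIX), an edge device might flag a reading as anomalous when it strays too far from its recent history:

```python
from statistics import mean, stdev

# Hypothetical edge-side filter: flag a sensor reading as anomalous when it
# deviates from a recent window of readings by more than `z_threshold`
# standard deviations, so only anomalies need a round trip to the cloud.
def is_anomalous(window, reading, z_threshold=3.0):
    """Return True when `reading` is an outlier relative to `window`."""
    if len(window) < 2:
        return False  # not enough history to judge yet
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

# Normal vibration readings stay local; a spike gets escalated upstream.
history = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8]
print(is_anomalous(history, 10.1))  # in range -> handled locally (False)
print(is_anomalous(history, 25.0))  # spike -> escalate to the cloud (True)
```

The point of running this check at the edge rather than in a data center is that the decision happens in microseconds on the factory floor, instead of after a network round trip.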
But there is also an opportunity to take advantage of the growing amount of computing horsepower in the networking components themselves, Rowe said. Long thought of as dumb pipes shuffling bits to and fro, modern routers and switches are now much more capable of handling some of the processing work, assuming software is written with that in mind.
That’s probably the biggest thing Rowe and his team of almost 30 people are working on: “It’s a heavy software focus,” he said. The team hopes to create a new programming language designed for edge computing applications, which will have to understand how to allocate and access resources spread out across a network — a far messier problem than writing a mobile app that simply talks to a cloud computing service.
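One way to picture the resource-allocation problem such a language would have to solve: given a task’s latency budget, decide which tier of the network — sensor, router, or data center — should run it. The tiers, latencies, and helper below are entirely hypothetical, a sketch of the idea rather than anything from the CONIX design:

```python
# Hypothetical placement helper: pick the nearest network tier that can
# meet a task's latency budget. Tier names and round-trip times are
# illustrative assumptions, not part of CONIX.
TIERS = [
    ("device", 1),    # on-sensor microcontroller, ~1 ms round trip
    ("gateway", 10),  # local router/switch with spare cycles
    ("cloud", 100),   # regional data center
]

def place(latency_budget_ms, needs_heavy_compute=False):
    """Return the first tier whose round-trip latency fits the budget.

    Heavy jobs skip the constrained device tier when the budget allows.
    """
    for name, rtt in TIERS:
        if needs_heavy_compute and name == "device":
            continue
        if rtt <= latency_budget_ms:
            return name
    return None  # no tier can satisfy the budget

print(place(5))                             # tight budget -> 'device'
print(place(50, needs_heavy_compute=True))  # heavy job -> 'gateway'
```

Today, a developer makes this choice by hand when carving an application into firmware, router plugins, and cloud services; the CONIX bet is that a language and runtime could make the choice automatically.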
However, hardware will also need to accommodate this latest swing from centralized to decentralized computing. Rowe and his team are studying how drones, virtual reality, and smart cities will evolve around edge computing, which will require a lot of research into sensor networks and the underlying infrastructure needed to collect, process, and act upon data.
This will be a multiyear effort. The grant asked the team to look at the future of distributed computing over a ten-year horizon, and Rowe said that the team hopes to have something of a blueprint after five years of work.
“We’ll be steering more toward the really forward-looking architectures that are higher risk for companies” to research on their own, Rowe said.