Mark Russinovich, CTO, Microsoft Azure. (GeekWire Photo / Todd Bishop)

Hang around enterprise computing types long enough, and you’ll wind up talking about “the stack” at some point. It’s a term used to refer to the complicated layers of software that run in modern data centers, and the most fundamental part of the stack is the operating system, which manages how everything else in that stack uses the hardware.


For years, Microsoft and Linux vendors have fought for control of this basic and lucrative part of the stack. But as cloud computing evolves, we’re starting to see other parts of the stack take on greater prominence.

Containers — which allow applications to run independently of the operating system — were the spark for this evolution, and the growing importance of container orchestration software like Kubernetes means that a certain amount of the resource management once done by the operating system can now be handled in other places. And the emergence of event-driven serverless development techniques could lead to more changes in the way we think about operating systems in the stack, according to Mark Russinovich, Microsoft Azure chief technology officer.

“If you take a look at the way that containers have evolved, it’s basically an evolution of the OS model we’ve had to this point; they have a file-system view of things,” said Russinovich, an operating-system historian in his own right, in a recent interview. “If you take a look at what an app is trying to do, it’s possible to break away from that type of abstraction.”

The operating system isn’t going anywhere: something has to take charge of allocating hardware resources in response to the basic demands of the applications running on a server. But the role it plays could be changing quite a bit, and that shift could have a profound effect on how the data centers of the future are put together. It could pose problems for Red Hat and Microsoft, which have aggressively embraced the cloud but still make a lot of money selling traditional operating systems to server vendors and companies building on-premises data centers.

And it could unlock some interesting opportunities for startups with a fresh approach to a product that historically has taken an awful lot of resources to develop and maintain. Just as FPGAs (field-programmable gate arrays) are gaining steam among artificial intelligence researchers thanks to their flexibility, lightweight operating systems — which promise that only a bare-bones package at the bottom of the stack is needed to make everything work — could emerge as the cloud-native approach to computing.

Order of operations

The operating system does more or less what its name implies: it’s a system that operates the computer. Operating systems serve as a bridge between higher-level application activity and hardware components like the processor, memory, and storage, and they have traditionally been one of the most important components of the aforementioned stack.

Unix was the predominant operating system for enterprise computing around the time grunge rock was sweeping the nation, and as the internet took off in the late 1990s, the rise of the scale-out low-end server brought Windows into the enterprise mix. Around the same time, a guy named Linus Torvalds was leading a project to build Linux, an open-source operating system modeled on Unix.

A typical data center setup. (Courtesy: Wikipedia)

Now there are dozens of versions of Linux running enterprise computers, from Red Hat Enterprise Linux to Amazon Web Services’ custom Linux distribution. Microsoft Azure, in a push to be OS-agnostic, now offers eight Linux options for its customers on a service that used to be all Windows all the time. Most cloud vendors also offer an array of Linux options.

And now we’re seeing another transition.

The hottest enterprise technology (yes, there are such things) of the 2000s was the virtual machine, which let companies run multiple isolated applications, each with its own operating system, on a single physical server, thanks to the introduction of hardware virtualization and software from VMware. Still widely in use, virtual machines need a copy of the operating system packaged with the rest of the application software in order to run, and a piece of software called a hypervisor manages how those virtual machines are deployed.

Now, containers, based on operating-system-level virtualization, are allowing developers to pack even more applications onto a single piece of hardware. Containers are also interesting because they don’t need to bundle their own copy of the operating system (they share the host’s kernel), which means they can be launched very quickly, especially compared to virtual machines.

“As virtualization allowed people to squeeze more performance out of the same hardware, containers make another leap,” Russinovich said.
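
Under the hood, that leap rests on kernel features like namespaces and control groups rather than on booting a guest operating system. Here is a minimal sketch of the idea in Go, assuming a Linux host and root privileges (the clone flags and the /bin/sh path are illustrative, not a complete container runtime): it launches a shell in its own hostname, process-ID, and mount namespaces.

```go
// Minimal sketch of OS-level virtualization: the child process gets fresh
// namespaces but shares the host kernel, so no guest OS has to boot --
// one reason containers start so much faster than virtual machines.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New UTS (hostname), PID, and mount namespaces for the child.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Real container runtimes layer image unpacking, cgroup resource limits, and network namespaces on top of these same kernel primitives.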

Containers at the core

But containers are changing the notion of what is expected from the operating system.

We might be at the beginning of a downsizing movement in the operating system designs chosen to run the enterprise computers of the 21st century. Companies like CoreOS and open-source projects like Alpine and CentOS are advocating for stripped-down operating systems, believing that a lot of the complexity in the higher-level parts of the operating system can be handled by container-management software like Kubernetes, the hypervisor of the container era.

“We kind of kicked off this whole category of container-focused OSes,” said Brandon Philips, co-founder and chief technology officer of CoreOS. “We’ve seen from the very beginning that containers would change the way you think about the OS.”

Before containers, enterprise applications had to be tightly integrated with the operating system because all of the components they depend on to run — binaries and libraries — had to be available within the operating system. Containers allow developers to package those binaries and libraries with their applications without having to bring the operating system along, which means the operating system itself doesn’t have to provide for as wide a range of application dependencies.

An overview of the difference between a containerized approach (left) and virtual machines. (Docker Photo)

Lightening the load could have a number of interesting effects.

For one thing, the less complex an operating system, the more stable it tends to be. And in a world where everybody is hacking everybody, a smaller code base offers what the security types call “a reduced attack surface,” meaning there are fewer software vulnerabilities to be discovered and exploited if there’s less software.

It’s also easier for operating system developers to update and patch their systems if the code base is smaller, Philips said. “OS distributors have always been afraid of shipping updates for fear of breaking applications, but containerization breaks that,” he said.

At the far end of this movement are unikernels, in which the application more or less manages the hardware resources itself. A unikernel compiles an application together with only the operating-system components it actually needs into a single image that runs directly on the hardware or a hypervisor, eliminating a management layer and a lot of the code that traditional applications rely on the operating system to provide.

Similar to how the first movies were pretty much live-action plays that happened to be filmed until creative people began to understand the new possibilities of the medium, there are a lot of people who believe that simply moving old-school architectural strategies to the cloud is missing out on the ultimate promise of flexible, on-demand cloud computing.

Popping the kernel

Tech veterans have seen this pendulum swing back and forth over the years, and there are a lot of skeptics (and comedians) when it comes to ideas like unikernels.

Bryan Cantrill, chief technology officer at Samsung’s Joyent, wrote a rather pointed (but extremely on-brand, for anybody who knows Bryan) argument last year about why the unikernel concept wasn’t even close to ready for production systems, and in a recent conversation I could almost see him rolling his eyes about the latest code-cutting debate.

“Any time you have a movement that wants to radically simplify a system, it’s because of the belief that systems have gotten too complicated. And then a radical movement comes through and simplifies it, and then the system that started out as simple becomes more complex,” he said, articulating an age-old truism about technology development. Systems start out simple, and then features get added to do cool things (or sell more software), up until the point where the system is so bloated that it becomes ripe for disruption by something just as capable but simpler.

However, for an average-sized tech organization, a slimmed-down OS approach means a lot more work piecing together all the components an application needs to run. Some engineering teams will like that sort of flexibility, while others will bemoan the complexity.

“Linux distros were created for a reason, it’s because people didn’t want to stick all those packages together,” said Gunnar Hellekson, director of product management at Red Hat, which has a lot at stake in this discussion. “There’s value in having someone else do that job.”

It’s quite possible that as tools like Kubernetes become easier to use and more mature, prompting another explosion in container usage, the value of a lightweight OS might start to shine. But the rise of serverless technologies might just sidestep this debate entirely.

Event-driven programming models like serverless development step even further away from the operating system layer, to the point that the application developer doesn’t have to think about the operating system at all. Services like AWS Lambda and Azure Functions abstract away not just the hardware resources but the operating system as well, allowing developers to concentrate on events and desired outcomes while cloud vendors run whatever works best for them underneath.
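
To make that concrete, here is a minimal sketch of a function written for AWS Lambda’s Go runtime. The handler shape and the lambda.Start entry point come from Amazon’s aws-lambda-go library; the GreetEvent type and the greeting logic are hypothetical stand-ins for illustration. Notice what’s absent: no server to provision, no OS image to choose, no patching schedule.

```go
// A function for AWS Lambda's Go runtime: the developer writes only
// event-in, result-out logic; the operating system underneath is
// entirely the cloud vendor's concern.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// GreetEvent is a hypothetical event payload, used here for illustration.
type GreetEvent struct {
	Name string `json:"name"`
}

func handler(ctx context.Context, evt GreetEvent) (string, error) {
	return fmt.Sprintf("Hello, %s!", evt.Name), nil
}

func main() {
	// Hands control to the Lambda runtime, which calls handler once per event.
	lambda.Start(handler)
}
```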

Keep it simplish

We’ve seen steady progress over the last few decades abstracting the mind-boggling complexity of how computers actually work, and that has paved the way for an explosion in software development that has unlocked entire new industries.

Yet we have to always remind ourselves that for all the cutting-edge cloud computing technology that inspires conferences and technical talks, there are still plenty of companies that are running well enough on outdated technology, which can slow the adoption of promising new approaches to enterprise computing. It’s going to take some time before the benefits of changing the size and role of the operating system make sense to a wide enough number of businesses, and for some categories of workloads, it might never happen.

But there are some types of apps that might thrive in such an environment, and there are always new ideas and applications built using the newest tools that just couldn’t have been done reliably or at scale in the past.

And software developers tend to look for solutions that make their jobs easier and faster, Philips said. For them, it’s “less about the OS, it’s more about, ‘is my application up and is the OS doing the job of keeping it up?’”
