Christopher Budd

Earlier this month, Microsoft marked the tenth anniversary of its regular “Patch Tuesday” release of security updates. There wasn’t a lot of fanfare, but there was reflection on how this new, regular process improved security for Microsoft customers and for security practices in the industry overall. Larry Seltzer and Andrew Storms both give a good sense of what the world was like before this, and the good this program has done over the decade.

I was part of the Microsoft Security Response Center (MSRC) for ten years, and my tenure straddles the Patch Tuesday process. I started before Patch Tuesday, worked to help build it, and was a part of it for many years. While people are looking back on the good that Patch Tuesday has done, like I did on the ten-year anniversary of the Trustworthy Computing Memo, I want to use the occasion to look forward instead. I believe that, while Patch Tuesday has been a very good thing, we need to move beyond it to something newer and better — something more suited to today’s threat environment. Ten years from now, Patch Tuesday needs to be a thing of the past.

How it all started

To understand my logic, it helps to remember the beginnings of Patch Tuesday. Before Patch Tuesday, everyone followed a “ship when ready” model: when the security fixes were done, you shipped them. Sometimes we’d even have multiple shipments in one day. And in those days you also had multiple, widespread, debilitating attacks like Code Red, NIMDA, Blaster and Sasser. Taking security fixes and attacks together, that created a world where the IT Manager might wake up to find another “grenade in their inbox,” as one colleague put it. Some order was needed for that chaos. But when Microsoft started moving away from the “ship when ready” model, there was a lot of criticism that we were leaving people vulnerable to attack longer than they needed to be. Through the history of the Patch Tuesday process, this has been an issue. It comes to the fore whenever there is a zero-day situation and people clamor for an “out of band” release. In those situations, the benefits of a structured process collide with the problem of the increased time a vulnerability is open to attack.

We were aware of the risk around waiting when we built the regular Patch Tuesday schedule. And we managed that risk throughout my time there (just as my colleagues there still do to this day) with the “out of band” option. But because of the risks of holding fixes back, some of us didn’t see “Patch Tuesday” as an end state, but rather a confidence-building step. By standardizing the delivery schedule, we hoped for a day in the future when people, and especially enterprises, wouldn’t throttle the deployment of patches with their testing. The end state we envisioned would be a synthesis of the benefits that a structured process gave us and the speed of protection that the old “ship when ready” model offered.

When we look at today’s threat environment, time is even more of an issue. Zero-day attacks have increased exponentially in the past decade, as have the speed and sophistication of attacks. Attackers have also turned the fixed schedule of Patch Tuesday against it, as evidenced by the well-established pattern of “Patch Tuesday/Exploit Wednesday.” “Exploit Wednesday” is a problem for other vendors who follow regular releases, like Oracle and Adobe. And with Oracle’s quarterly rather than monthly schedule, the problems of time delays and schedule exploitation are even more acute, and the complexity of patch deployment is much greater.

To be clear, this isn’t a call to return to what things were like before. There’s a lot of important work around information sharing and coordination between companies that has grown up around the structured Patch Tuesday process. That absolutely should continue; if anything, it needs to keep growing. However, all of the attendant goodness that surrounds Patch Tuesday needs to be adapted and made nimble enough to support a “ship when ready” model.

Adjusting to the new world

At first glance it may seem easier to simply let tech evolution take its course. Outside of desktops and servers, apps on mobile devices and online services are already serviced with a “ship when ready” model and have been from the beginning. Even Microsoft has embraced the “ship when ready” model for Windows 8 apps available through the Windows Store. It’s very tempting to just accept two different standards for now, especially when it looks like the model better suited to moving forward is tied to technologies we associate as key to the future.


But accepting one standard for mobile platforms and online services and another for desktops and servers is not the right answer. Accepting the status quo leaves desktops and servers at greater risk of attack for an unacceptably long time. And while the model for servicing apps and online services has the advantage of speed, it generally lacks the kind of broad, layered support that has evolved around the desktop and server security-response processes. It’s also unrealistic to pretend that desktops and servers are going to disappear entirely in favor of mobile devices and the cloud. Instead, the best and most realistic answer is to synthesize these two security servicing models into new industry best practices for the future.

Having created the best practices around desktop and server security response, Microsoft is well-placed to lead the next wave of best practices. They’ve done good work in adapting to the needs of mobile devices and online services. But Microsoft doesn’t drive the industry like it once did; the current environment makes it impossible for any one vendor or platform to drive the entire industry. Progress will require customers and users to demand better security practices rather than accepting what’s offered. Specifically, it means they must demand better-quality patches, improved delivery methods (including fewer reboots), and anything else they need to feel comfortable updating their servers’ applications much like they update their mobile phone’s apps.

I don’t know that these changes will happen. Customers have historically been passive when it comes to security updates, and vendors typically don’t do anything until they have to — there’s a lot of “good enough” inertia in this space. And there’s no single big player in dire trouble today, as Microsoft was ten years ago, whose predicament could be harnessed for the greater good. But when you’re in security like I am, you accept the importance and necessity of being a voice in the wilderness when you need to be, in the hopes that eventually people will listen. We did it when I was in the MSRC and we started calling for a more regular process, and eventually the world followed. Now I’m doing it again, saying that it’s time to move on and replace Patch Tuesday with a new, faster process that’s ready to meet the challenges of the next ten years.

Christopher Budd works for Trend Micro, focusing on communications in the areas of online security and privacy, incident response, and crisis communications. Prior to that, he was an independent consultant and before that a ten-year veteran of the Microsoft Security Response Center (MSRC). He combines his prior career as an engineer with his communications expertise to help bridge the gap between the technical and communications realms. Follow him on his personal blog or on Twitter @christopherbudd.

Comments

  • panacheart

    Good in-depth article.

    • Christopher Budd

      Thanks for reading and thanks for the comment!

  • SilverSee

    Nice job Christopher. I think a transition to a “ship when ready” model is inevitable, and Microsoft probably is aware of that already. As you know, the regular and predictable schedule of security updates provided an opportunity for IT staff to test patches before deployment. I think the new threat environment makes this practice harder to justify, and today’s application frameworks and server infrastructure are a lot more robust, making the need for in-house testing less critical. I think at some point, both Microsoft and its enterprise customers will arrive naturally at a more nimble model, but opinion pieces like this one can help drive the change.

    • Christopher Budd

      Strange: I replied yesterday but the comment is gone.

      Thanks for reading and thanks for the comment. That’s a good point you made about things being more evolved these days and so better able to support the kind of thing I’m calling for.

  • Out For Justice

    I disagree that MSFT leads in security innovation. Patch Tuesday and rebooting is the very reason why Windows sucks in the server space. It is unfortunate that a more secure and always-available type of kernel from MSFT did not see the light of day (e.g., Singularity). Erlang has had a much better “update without shutting down” model for over 20 years, and the Linux kernel is far superior in terms of security and stability. I do not have much faith that MSFT will ever make an OS that isn’t plagued by security flaws and update hell. After all, they did invent reboot and DLL hell in the previous generations…

    • Christopher Budd

      Thanks for reading and for the comment.

      I certainly won’t argue about rebooting being a problem. When I was there, Microsoft promised to minimize reboots, and we started to see some progress toward that goal around 2006 or so. The reality is that, for whatever reason, around 2007 the priority on fulfilling that promise disappeared. I can’t speak to where things are today, but when I left at the end of 2010 there hadn’t been any forward motion on reboots for some time.

      That always disappointed me particularly because I felt we had made a promise there and had an obligation to keep it. But, as the saying goes, no one asked me.
