SAN FRANCISCO – It takes a special kind of person to encourage a room full of journalists to blow up the furniture around them, but Brandon Bray is just that kind of man. With his encouragement, I looked at a paper ball hovering over the coffee table in front of me, said “fire in the hole” and watched as the ball dropped to the table’s surface, where it exploded and opened a hole in the table…to reveal a low-poly landscape with flying paper cranes and a rushing river.
That scene was the result of a program I loaded onto Microsoft’s HoloLens augmented reality headgear, under the tutelage of Bray, a principal group program manager at the company who was among the instructors at an almost two-hour-long demo session held in San Francisco today.
I took part in a special condensed version of the “Holographic Academy” sessions Microsoft has been running in a hotel next door to Moscone West, where the company is hosting its Build developer conference.
I walked away optimistic about Microsoft’s plans for the future of computing in the long term, and impressed by the opportunities for developers in the short term. I had a chance to try on the latest version of the HoloLens hardware, which is the same model that Microsoft executives have worn on stage when they’ve demonstrated the device.
Each session, held in a conference room stocked with PCs, is designed to familiarize developers with the Windows Holographic platform and what it’s like to develop for the HoloLens. The sessions usually run for more than four hours. The guided demo covered building an augmented reality 3D scene using the Unity 5 game engine, then bringing it to life with C# scripts, audio files and animations.
How the SDK works
The HoloLens software development kit is surprisingly elegant, given the complexity of the device itself, which is crammed full of speakers, sensors, and processors – including a dedicated Holographic Processing Unit to help handle everything Microsoft needs it to do. The HoloToolkit framework contains a number of different classes that handle everything from mapping the physical space around the device’s wearer to voice recognition and registering hand gestures from the user.
To add a voice command, developers call the AddKeyword method from the KeywordRecognizer class, and pass their trigger phrase to it as a string of plain English text. It’s incredibly straightforward code that belies the complexity of the system, which has to match the audio it picks up from the user against those registered phrases.
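Based on Bray’s description, wiring up the demo’s “fire in the hole” command might look something like the following Unity C# script. This is an illustrative sketch, not code from the session: the AddKeyword call and KeywordRecognizer class are as described above, but the callback signature, the component lookup and the animation trigger are assumptions, and the exact API of the pre-release SDK may differ.

```csharp
using UnityEngine;

// Hypothetical sketch of registering a HoloLens voice command in Unity.
// Component and member names beyond KeywordRecognizer.AddKeyword are illustrative.
public class ExplodeOnCommand : MonoBehaviour
{
    void Start()
    {
        // Assumes a KeywordRecognizer component is attached to this object.
        var recognizer = GetComponent<KeywordRecognizer>();

        // Register the trigger phrase as a plain English string,
        // along with the handler to run when the phrase is heard.
        recognizer.AddKeyword("fire in the hole", OnFireInTheHole);
    }

    void OnFireInTheHole()
    {
        // Drop the paper ball and play the explosion animation
        // (assumes an Animator with an "Explode" trigger parameter).
        GetComponent<Animator>().SetTrigger("Explode");
    }
}
```

The appeal of this pattern is that the script only deals in plain text and callbacks; all of the audio capture and speech matching happens inside the platform.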
Speaking of voice commands, one of the things that surprised me most about the HoloLens was the sensitivity of its microphones. I could speak the key phrases I had set up in a soft murmur, and the hardware would still register them fairly reliably.
Alex Kipman, the technical fellow at Microsoft who has been the public face of the HoloLens project, told me after the class that the system’s performance was the result of the HoloLens having so many microphones close to the user’s face, which allow the device to pick up incredibly precise sound data.
People who are already used to building apps for smartphones or tablets in Visual Studio should feel right at home deploying code to the HoloLens. From what I can tell, Microsoft’s development environment handles the HoloLens like another mobile device, and Bray said the experiences of developing for a HoloLens and a phone are very similar.
Of course, there’s a glaring difference between the two. Phone apps – even the ones that have pushed aggressively into augmented reality – have never felt this cool to use.
Still under development
Microsoft seems to be a long way away from a shipping version of the HoloLens. The hardware and software I used had a few obvious bugs – the spatial mapping refused to work in a particular spot in the room during one deployment of the test application, and the first unit I was given didn’t start up properly. But that’s to be expected, since the device is still under active and incredibly heavy development.
“The software is just so new that we literally built it a week ago,” Bray told the room before we got underway.
Bugs aside, the device isn’t without its limitations. Holographic objects appear in a rectangular band that takes up a user’s central field of view, but are cut off in a person’s peripheral vision. In practice, that meant “close” holographic objects were clipped in places where they ordinarily would have continued. Kipman said that was intentional, so HoloLens wearers could still take advantage of their peripheral vision while wearing the device.
People who talk with their hands like I do may also find themselves setting off the HoloLens’s gesture controls unintentionally; I managed to fire off a command without meaning to a few times while carrying on a conversation with one of the mentors in attendance. That said, the system has an incredibly wide field of view when it comes to detecting gestures. I could hold my hand close to my body and slightly above my waist, and have the HoloLens detect the mid-air tapping gesture that’s roughly analogous to clicking in a traditional mouse and keyboard paradigm.
It can be a bit ungainly to set up at first, since users have to adjust a headband to fit their head and then lower the HoloLens visor into place. Once it’s on, the device does a good job of keeping weight centered on a user’s head rather than placing it on their nose.
If I had to use one every day, I’d likely swap out my massive hipster glasses for a set of contact lenses, but the HoloLens fit over my specs more comfortably than an Oculus Rift does.
As any programmer will tell you, building something in a heavily guided code demo and creating something from scratch are two vastly different animals. It’s going to take a whole lot longer than a couple of hours for developers to create holographic apps that people will want to use, but after today, I’m convinced it’s an achievable goal for anyone who wants to take on the challenge.