Walking through the airport, you look up at the big screen to find your gate. But this is no ordinary public display. Rather than a list of arrivals and departures, you see just your flight information, large enough to view from a distance. Other travelers who look at the same screen, at the same time, see their flight information instead.
On the road, traffic signals are targeted individually to your car and other vehicles as they move — showing you a red light when you won’t make the intersection in time, and displaying a green light to another driver who can make it through safely.
At a stadium, the scoreboard displays stats for your favorite players. Fans nearby, each looking simultaneously at the same screen, instead see their favorites and other content customized to them.
These are examples of the long-term potential for “parallel reality” display technology to personalize the world, as envisioned by Misapplied Sciences Inc., a Redmond, Wash.-based startup founded by a small team of Microsoft and Walt Disney Imagineering veterans.
It might sound implausible, but the company has already developed the underlying technology that will make these scenarios possible. It’s a new type of display, enabled by a “multi-view” pixel. Unlike traditional pixels, each of which emits one color of light in all directions, Misapplied Sciences says its pixel can send different colors of light in tens of thousands or even millions of directions.
They call it a “magic pixel.”
“Multiple people can be looking at the same pixel at the same time, and yet perceive a completely different color,” said Albert Ng, the company’s CEO and co-founder. “That’s each individual pixel. Then, we can create displays by having arrays of these multi-view pixels, and we can control the colors of light that each pixel sends. After coordinating all those light rays together, we can form images at different locations.”
The result: a display that lets many different people see completely different content on the same screen, simultaneously. When combined with location technology and sensors, similar to those already embedded in a smartphone, the company says this content can be targeted in real time from public displays to specific locations, people and objects, essentially following them in three-dimensional space as they move through the world.
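The concept can be sketched in code. The following is a purely illustrative model of the idea described above, not Misapplied Sciences’ implementation; all class and function names here are hypothetical. A conventional pixel holds a single color; a multi-view pixel instead holds a separate color per viewing zone, so one display can carry a distinct image for each zone at the same time.

```python
# Conceptual sketch only: a multi-view pixel stores one color per
# "viewing zone" rather than a single color for all viewers.
from dataclasses import dataclass


@dataclass(frozen=True)
class Zone:
    """A region of 3D space assigned its own content stream."""
    zone_id: str


class MultiViewPixel:
    """Unlike a normal pixel (one color for everyone), a multi-view
    pixel maps each viewing zone to its own color."""
    def __init__(self):
        self.color_by_zone = {}

    def set_color(self, zone, rgb):
        self.color_by_zone[zone] = rgb

    def perceived_color(self, zone, default=(0, 0, 0)):
        # Two viewers in different zones see different colors
        # from the very same pixel at the same moment.
        return self.color_by_zone.get(zone, default)


def render(display, zone, image):
    """A display is an array of such pixels; rendering means writing
    a separate image into each zone's channel."""
    for pixel, rgb in zip(display, image):
        pixel.set_color(zone, rgb)


display = [MultiViewPixel() for _ in range(4)]
alice, bob = Zone("gate-A12"), Zone("gate-B7")
render(display, alice, [(255, 0, 0)] * 4)  # Alice's zone: all red
render(display, bob, [(0, 0, 255)] * 4)    # Bob's zone: all blue

print(display[0].perceived_color(alice))   # (255, 0, 0)
print(display[0].perceived_color(bob))     # (0, 0, 255)
```

The real engineering challenge the company describes, steering physical light rays to tens of thousands of directions per pixel, is optics and hardware; this sketch only captures the bookkeeping of one-image-per-zone.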
It works with the naked eye, no bulky headset or high-tech goggles required. And no need to bury your head in a smartphone for personalized information.
Misapplied Sciences will face a series of technical, economic and even societal challenges as it works to achieve widespread adoption for the technology. But with seemingly endless potential to create new forms of entertainment and information, the company’s long-term ambition is nothing less than to change the way people experience the world.
“Bill Gates, when he started Microsoft, had this crazy idea that in the future there would be a computer on every desk. People thought he was nuts,” said Paul Dietz, the Misapplied Sciences co-founder, chief technology officer and chairman. “We have a version of that. Our version is that, in the future, everything you see on a public display will be targeted to you.”
Whether the idea is crazy or not, three members of the GeekWire team experienced the Misapplied Sciences technology with our own eyes. After working in secret for several years, the company recently gave us a series of demonstrations inside their nondescript headquarters, behind the blacked-out windows of a former retail store in a Redmond strip mall.
This is the first time they’ve shown or talked about the technology publicly.
‘Parallel reality’ in person
Standing on the main floor of the Misapplied Sciences office space, we looked at a large display across the room. We each saw the content on the screen change as we moved to different locations, while our colleagues, standing in a variety of different spots nearby, continued to see alternative content on the same display.
The video above, provided by Misapplied Sciences, demonstrates the effect we experienced in person — presenting simultaneous views of the same display taken two feet apart, using the movement of the dragon character to show that each view is actually happening at the same time, not a video trick.
In the video below, the empty frame represents a point in 3D space. The content on the display across the room changes as the vantage point of the camera moves.
It might be tempting to think that a filter or something else inside the frame is causing the effect, which is what we assumed initially when we saw this demo in person. That’s not the case. The frame outlines the point in space where the content changes for the viewer. When we stepped on the other side, with no frame between us and the display, the visual effect was the same.
In another demo, the same display showed two sets of numbers: one representing a row, and another a column. As we moved left and right, the number on the left changed, and when we moved up and down, the number on the right changed, effectively pinpointing our vantage point at any given moment on the X and Y axes.
In the video below, the company uses mirrors to demonstrate how different pieces of content can be seen on one screen from multiple different perspectives. The single screen is behind the photographer. We also saw this mirror array and screen in person as part of the demos we experienced, and we can confirm that only one display is being used.
In another demo, not shown in the videos, we each held different frames containing infrared LEDs, indicating our position to cameras across the room. In this demo, the content that we saw on the display across the room, through the empty frames, moved with us as we moved. Different content was visible when we looked outside the frames.
This was an early demonstration of the ability to target specific content to people as they move, using sensors and location tracking to create personalized “viewing zones” that can be updated in real time.
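In sketch form, that targeting loop might look like the following. This is a hypothetical simplification, with invented names, of whatever sensor pipeline the company actually uses: quantize each tracked position into a discrete viewing-zone index, and move a person's content to the new zone whenever they cross a zone boundary.

```python
# Hypothetical sketch of real-time "viewing zone" targeting as described
# above; not the company's actual tracking pipeline.

def zone_index(x, y, z, zone_size=0.5):
    """Quantize a tracked 3D position (in meters) into a viewing-zone index."""
    return (int(x // zone_size), int(y // zone_size), int(z // zone_size))


class ZoneTargeter:
    """Follows tracked viewers and re-targets their content as they move."""
    def __init__(self):
        self.last_zone = {}  # viewer_id -> zone index
        self.content = {}    # zone index -> content label

    def update(self, viewer_id, position, content_label):
        zone = zone_index(*position)
        if self.last_zone.get(viewer_id) != zone:
            # Viewer crossed into a new zone: free the old zone and
            # assign their content to the new one.
            old = self.last_zone.get(viewer_id)
            if old is not None:
                self.content.pop(old, None)
            self.content[zone] = content_label
            self.last_zone[viewer_id] = zone
        return zone


targeter = ZoneTargeter()
z1 = targeter.update("traveler-7", (1.0, 0.2, 3.0), "Flight UA 421, Gate B7")
z2 = targeter.update("traveler-7", (2.6, 0.2, 3.0), "Flight UA 421, Gate B7")
# The content follows the traveler from zone z1 to zone z2 as they walk.
```

Run against every frame of tracking data, this kind of loop keeps a viewer's content aligned with their position, which is the effect the frame-with-LEDs demo showed.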
An ‘enormous change’
For a company with world-changing ambitions, Misapplied Sciences has raised a relatively small amount of money so far: $3.4 million in equity investment, and another $900,000 from the National Science Foundation’s Small Business Innovation Research program.
Dietz, the company’s CTO and chairman, is widely recognized in engineering circles for his early work on multi-touch screen technology, now common in smartphone and tablet displays. At Microsoft Research from 2007 to 2014, he developed key interface technologies for devices including the original Microsoft Surface table. Earlier, at Disney, he created the technology behind the “Pal Mickey” plush device that used location technology to create an interactive experience for Walt Disney World visitors.
Dave Thompson, co-founder and chief operating and creative officer, also worked at Walt Disney Imagineering, leading the design of theme park attractions and cruise ship experiences, before going into advertising and marketing, working on some of the first campaigns that blended the physical and digital worlds.
Ng, the company’s co-founder and CEO, is a Stanford University and Caltech alum recognized for his work in areas including low-latency touchscreens. He was a research intern at Microsoft Research during Dietz’s tenure at the company.
Misapplied Sciences was founded in 2014, shortly after Dietz and Ng left Microsoft. They declined to talk about the origins of the startup but said they would tell the story at a later date. Dietz worked in Microsoft’s Applied Sciences hardware research group, sometimes abbreviated “MS Applied Sciences,” which demonstrated a different, more basic approach to multi-view displays as early as 2010. Microsoft does not have an ownership stake in Misapplied Sciences. A Microsoft representative declined to comment this week when asked if Microsoft has any other type of ongoing business relationship with the startup.
Others, such as the MIT Media Lab, have also pursued the concept of multi-view displays using different techniques in the past. More recently, a startup called MirraViz drummed up attention at the CES technology trade show this year with a system that uses multiple projectors to display different content to multiple people on the same screen. Engadget called it “one of the wildest” things at the show, while noting that it was limited by the number of projectors that could fit around the screen.
There’s no such limitation with the Misapplied Sciences technology, given the way its multi-view pixel works. Misapplied Sciences has applied for 18 patents related to its technology, three of which have been granted, and the founders say they have more in the pipeline. Its approved patents are for a multi-view architectural lighting system, a computational pipeline and architecture for multi-view display, and multi-view traffic signage, displaying customized content to different vehicles.
A captioning system customized to individual theater seats is another possible application cited several times by the company’s founders, allowing people to choose the language of the captions they view from the same screen. Kelly Tremblay, a Seattle-based clinician and neuroscientist who focuses on issues including hearing loss among aging adults, was shown a demo of such a system by Misapplied Sciences, after the company invited her to its offices.
“I couldn’t stop thinking about different ways it could be implemented and made useful — if it could be scalable enough and made affordable for different venues in different populations,” said Tremblay, who is also a professor at the University of Washington in the Department of Speech & Hearing Services.
Another possibility is multiplayer arena games, where each competitor sees his or her own view on the big screen. And, yes, billboards could be targeted to individual people or locations. What happens to the outdoor advertising market when someone can buy a personalized message to propose from a specific spot in Times Square?
The potential is immense, said Carl Ledbetter, managing director of Pelion Venture Partners, a Misapplied Sciences board member and early investor in the company. A former AT&T and Novell executive who has been in venture capital for more than two decades, Ledbetter said his initial experience with the Misapplied Sciences technology was the first time he saw a tech demo and was forced to ask, “How does that work?”
“The decision to invest really was almost an instinctive one, based on the enormous change that this could bring about,” Ledbetter said.
Implications of ‘parallel reality’
That change could be both positive and negative. In a world already struggling with digital tracking and “alternative facts,” the ethical and privacy implications of “parallel reality” could be significant. How will we agree on a shared reality or common set of facts when we can look at the same screen and see something different?
Misapplied Sciences says it’s thinking about safeguards, including ways to extend digital privacy to the physical world of parallel reality. Advertising could be opt-in, for example, and the person being tracked through the airport could be an anonymous blob to the system, identified by flight number after an initial registration. In some situations, they point out, targeting content to an individual person or location could actually make it more private.
“In general, parallel reality is a new medium that connects content with people, and like other powerful media such as internet news or social networks, there are many different ways in which personal information can be provided and used so content can be relevant to the viewer,” said Ng, the company’s CEO. “The current conversations about privacy are taking place on many fronts for many different technologies. The same debates will apply to some applications of parallel reality, as well. And of course we recognize and we share these concerns.”
“Our goal with parallel reality is to provide people with content that’s beneficial, interesting, entertaining while being sensitive to their privacy,” he said. “Ultimately parallel reality will revolutionize how we think about things like accessibility, safety, traffic management, wayfinding, entertainment and many other things. We believe parallel reality will bring tremendous benefit, and will help a lot of people.”
Misapplied Sciences has been in stealth mode for several years, as evidenced by its low-profile office space. GeekWire has been trying to figure out exactly what Misapplied Sciences is doing since 2015, when we reported on an SEC filing that disclosed its funding round.
Recently, a mutual contact let us know they might finally be ready to talk publicly. We scheduled a visit, dispatching more people than we normally would to help verify that our eyes weren’t deceiving us. The team — including Dietz, Ng and Thompson — greeted us at the door with the nervous grins of engineers ready to finally reveal something they’ve been working on secretly for years.
After introductions at a conference table in the back of the office, it was time for the demos in the main room. GeekWire reporter Taylor Soper and I were joined by GeekWire developer, photographer and videographer Kevin Lisota, a technology industry veteran. Afterward, Kevin offered these takeaways on the experience.
It is rare that a tech demo surprises me. My first reaction was “what the hell??” when I thought my eyes were playing tricks on me. But indeed their screens were projecting different images to different points in space. I even found a spot where I could cover one eye and see one image, then cover the other eye and see a different one.
For those who can’t see it in person yet, these multi-view displays are somewhat like the “tilt cards” or “motion cards” you’ve likely seen. Those use lenticular printing to show a different image depending upon the angle of view. This seemed similar in concept, but on a digital display and capable of showing far more distinct images to many different points in space.
The utility of these multi-view displays is going to be very interesting, though it may take a while to take hold. A world in which a computer or digital display can show entirely different images to different people or different points in space really twists our perception of what we’re being shown.
The demos we saw involved two types of displays: a large horizontal display with a lower pixel density, which could be used as large public signage, and a larger rectangular display with a higher resolution, consisting of 36 individual panels, each 28 pixels wide by 16 pixels high. The only glitch we noticed came while I was holding one of the frames with infrared LEDs: the content inside briefly flickered, which the company attributed to a depleted battery in the frame.
In addition to the multi-view pixel technology, the company says it has developed a custom processor that allows the displays to run efficiently, along with software that allows content creators to easily target content to different points in space.
Applications and challenges
An early investor in the company is Ginger Alford, a teacher, museum director and expert in computer graphics, who is also a leader of the SIGGRAPH graphics and interactive technology conference. Alford called the concept of personalizing the environment in public settings “an astounding idea,” citing potential applications including the ability to tailor classroom lessons to individual students by language, learning styles and cultural touchstones.
“A key aspect is that, while it is individualized, it is not isolating. It is very social,” Alford said. Asked for her take on the biggest challenge Misapplied Sciences will face, Alford said the capabilities “are so far beyond how people currently interact with the world I think it might be hard for people to get their heads around it.”
Daniel Wigdor, a human computer interaction expert who worked with Dietz at Microsoft and Mitsubishi Electric Research Labs, said he has seen the Misapplied Sciences team make “incredible advances” over the past few years.
“People walk around now with their faces pointed at private little screens they carry in their pockets, necessary to get a private, personalized experience, but detaching them from the shared experience of the world,” Wigdor said. “Misapplied Sciences promises a sort of parallel reality, in which people can coexist in a shared world, but still receive private and personalized content.”
Don Dorsey, a legendary former Disney audio engineer and experience designer known for Disneyland’s Electrical Parade and Epcot’s IllumiNations, has also seen the Misapplied Sciences technology. He called it “mind-boggling, in the same way good magic tricks and grand illusions are.” In addition to delivering personalized information, Dorsey said, the technology could be used for an entirely new form of entertainment, giving different views and experiences to different people in an audience.
“This is a type of storytelling we have only begun to explore through multi-player games and virtual reality. Sharing this type of experience with others in the real world is the leap forward here,” Dorsey said.
It’s not without challenges, he said. “On a more practical level, each independent view has to be designed, scripted, created and implemented. This can mean a potentially massive increase in workload and cost on the part of the experience provider. Consideration must also be given to the possibility that the ‘trick’ could supersede the story experience.”
The economics of further developing and commercializing the technology would seem to be a challenge for any small startup with these ambitions. The funding Misapplied Sciences has raised so far has been enough to keep the five-person company running for the past few years, thanks in part to a frugal approach, exemplified by its office filled with furniture from the University of Washington surplus store. But it’s a fraction of what some of the most ambitious virtual and augmented reality startups have raised.
However, when we pressed the Misapplied Sciences team on the technical and economic challenges of scaling the multi-view display technology and bringing it to market, they said repeatedly that they’re confident it can be done.
“The breakthroughs that we were creating in the past few years, most of them were to make it affordable and practical and manufacturable,” Ng said. “We are on the cusp of bringing this out as product.”
Sitting at their conference table, they said it wasn’t yet time to reveal what that product will be, or how they plan to roll the technology out. For now, they say, their goal is to demonstrate that “parallel reality” is real — to get people to understand the concept of what they’ve created, and to start to think about its potential impact on the world.