You know you’re on to something in computer graphics when you get to explain in your research paper how you’ve advanced the state of the art beyond The Matrix: Reloaded and the Beowulf remake.
In a paper to be presented at the SIGGRAPH computer graphics conference in Vancouver, B.C., this week, Microsoft researchers tip their hats to the technologies used in those movies for accurate 3D computer modeling of the human face.
But the researchers, from Microsoft’s Beijing lab, say they’ve figured out how to do it even better. The technique they’ve come up with can automatically model faces with a new level of accuracy — down to the last wrinkle.
They use a combination of 3D scanning technology and a motion-capture system (which explains the markers on the actor’s face above). They also developed a technique to determine the minimal number of face scans needed to create an accurate model, which makes the system faster and more efficient.
Research projects don’t necessarily translate into shipping products, but it’s not hard to imagine aspects of the approach being incorporated somehow into future versions of Microsoft’s Kinect sensor.
Microsoft’s new Avatar Kinect virtual conferencing service lets people control some of the facial expressions on their on-screen avatars, by lifting an eyebrow, for example. But a more accurate model of the face could open up more possibilities.
Who knows, maybe future Windows users will be able to launch Outlook with a particular type of smirk.
Here’s the abstract from the paper, for more of the technical details.
This paper introduces a new approach for acquiring high-fidelity 3D facial performances with realistic dynamic wrinkles and fine-scale facial details. Our approach leverages state-of-the-art motion capture technology and advanced 3D scanning technology for facial performance acquisition. We start the process by recording 3D facial performances of an actor using a marker-based motion capture system and perform facial analysis on the captured data, thereby determining a minimal set of face scans required for accurate facial reconstruction. We introduce a two-step registration process to efficiently build dense consistent surface correspondences across all the face scans. We reconstruct high-fidelity 3D facial performances by combining motion capture data with the minimal set of face scans in the blendshape interpolation framework. We have evaluated the performance of our system on both real and synthetic data. Our results show that the system can capture facial performances that match both the spatial resolution of static face scans and the acquisition speed of motion capture system.
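The blendshape interpolation the abstract mentions is a standard idea in facial animation: each animated frame is built as the neutral face plus a weighted combination of offsets from a small set of key scans. Here is a minimal sketch of that general idea in Python with NumPy; the function name, the toy meshes, and the weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

def blend(neutral, scans, weights):
    """Reconstruct a face mesh as the neutral mesh plus a weighted
    sum of per-scan offsets (scan - neutral).

    neutral: (n_verts, 3) array of vertex positions
    scans:   list of k arrays, each (n_verts, 3)
    weights: (k,) array of blend weights for this frame
    """
    offsets = np.stack([s - neutral for s in scans])  # (k, n_verts, 3)
    return neutral + np.tensordot(weights, offsets, axes=1)

# Toy example: a two-vertex "face" and two hypothetical key scans.
neutral = np.zeros((2, 3))
smile = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
frown = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0]])

# A half-strength smile: vertex 0 moves halfway toward the smile scan.
frame = blend(neutral, [smile, frown], np.array([0.5, 0.0]))
```

In the paper's setting, the motion capture markers would drive the per-frame weights, while the high-resolution scans supply the wrinkle-level detail that the markers alone cannot capture.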