Seminar - Human Capture with Depth Sensors

School of Engineering and Computer Science Seminar

Speaker: Hao Li
Time: Monday 28th July 2014 at 03:00 PM - 04:00 PM
Location: Cotton Club, Cotton 350
URL: http://hao.li/


Abstract

With the emergence of real-time 3D depth sensing, performance capture technologies in visual effects are undergoing a radical shift from traditional marker and vision-based imaging to a geometry processing pipeline, allowing more faithful measurement of people without the need to wear motion capture suits. The next generation of imaging systems will recover large amounts of dynamic 3D data, and the big challenge will be their efficient treatment and analysis. I will present an overview of state-of-the-art 3D capture techniques and introduce a fundamental technique that allows computers to automatically understand captured data from a geometric perspective. I will show how these techniques can help model digital humans in the context of entertainment and how they also impact other sciences, such as evolutionary biology, cardiology, and cancer treatment. With the democratization of 3D depth sensors such as Microsoft's Kinect, these systems are no longer prototype technologies from research labs or visual effects studios, but are available to everyone. They will play a key role in future consumer-level products, innovating human-computer interfaces and 3D communication. I will also show a live demonstration of a real-time facial animation system that I developed at Industrial Light & Magic, which is based on a low-cost depth sensor. The system learns personalized facial expressions on the fly without the need for a tedious training process. While primarily developed to innovate real-time virtual production in the film and gaming industries, the tracking technology has the potential to improve the fidelity of emotion analysis, enhance surveillance and monitoring systems, and enable massive data collection for behavioral analytics.
I will also discuss several important challenges that lie ahead in the real-time 3D digitization of humans and of 3D hair, and motivate the potential of upcoming personalized applications ranging from interactive gaming to 3D printing.

Hao Li recently joined USC as an assistant professor of Computer Science, after working for a year as a research lead in the R&D group at Industrial Light & Magic/Lucasfilm, developing next-generation real-time performance capture technologies for the upcoming Star Wars episodes. Prior to his time in the visual effects industry, he spent a year as a postdoctoral researcher at Columbia and Princeton Universities, after obtaining his Ph.D. from ETH Zurich in 2010. His research lies in geometry processing, 3D reconstruction, and real-time performance capture. While primarily developed for computer graphics and vision applications, his work on geometry-driven performance capture has also impacted the fields of human shape analysis and biomedicine. His algorithms are widely deployed in industry, from leading visual effects studios to manufacturers of state-of-the-art radiation therapy systems. Since 2009, eight of his papers have been published at SIGGRAPH and SIGGRAPH Asia. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013 and one of this year's NextGen 10: Innovators Under 40 by CSQ magazine. He was also awarded the Swiss National Science Foundation fellowship for prospective researchers in 2011, and received the best paper award at the Symposium on Computer Animation in 2009.
