Ken Perlin Lecture - Future of 3D Immersion

This is a recap of the Volumetric Meetup I attended on Feb. 5, 2014, featuring Ken Perlin.

Ken Perlin is a professor in the Department of Computer Science at New York University, director of the NYU Games For Learning Institute, and a participating faculty member in the NYU Media and Games Network (MAGNET). He was also founding director of the Media Research Laboratory and director of the NYU Center for Advanced Technology. His research interests include graphics, animation, augmented and mixed reality, user interfaces, science education, and multimedia. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television; the 2008 ACM/SIGGRAPH Computer Graphics Achievement Award; the TrapCode award for achievement in computer graphics research; the NYC Mayor's Award for Excellence in Science and Technology; the Sokol Award for outstanding science faculty at NYU; and a Presidential Young Investigator Award from the National Science Foundation.

In short, he is the creator of "Perlin noise" for procedural texturing.
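For context, the technique blends pseudo-random gradients anchored at integer lattice points using a smooth interpolant. Here is a minimal 2D sketch in Python of the classic algorithm (his original reference implementation differs in details):

```python
import math
import random

# Shuffled permutation table, doubled so lattice hashes never index out of range.
_rng = random.Random(0)
PERM = list(range(256))
_rng.shuffle(PERM)
PERM = PERM + PERM

def fade(t):
    # Perlin's quintic interpolant: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

def grad(h, x, y):
    # Pick one of 8 gradient directions from the hash and dot it with (x, y).
    h &= 7
    u = x if h < 4 else y
    v = y if h < 4 else x
    return (u if h & 1 == 0 else -u) + (v if h & 2 == 0 else -v)

def perlin2d(x, y):
    # Lattice cell containing (x, y), plus the position within that cell.
    xi, yi = int(math.floor(x)) & 255, int(math.floor(y)) & 255
    xf, yf = x - math.floor(x), y - math.floor(y)
    u, v = fade(xf), fade(yf)
    # Hash the four corners of the cell.
    aa = PERM[PERM[xi] + yi]
    ab = PERM[PERM[xi] + yi + 1]
    ba = PERM[PERM[xi + 1] + yi]
    bb = PERM[PERM[xi + 1] + yi + 1]
    # Blend the corner gradients' contributions; result is roughly in [-1, 1].
    x1 = lerp(grad(aa, xf, yf), grad(ba, xf - 1, yf), u)
    x2 = lerp(grad(ab, xf, yf - 1), grad(bb, xf - 1, yf - 1), u)
    return lerp(x1, x2, v)
```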

He started off by creating procedural characters that could be directed, not hand-animated, using simple AI, similar to the approach Pixar takes. That led him to wonder: what would it be like to have these characters interact with real objects?
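The lecture didn't go into code, but the flavor of "directed, not hand-animated" can be sketched: smooth noise drives each joint's idle motion, and a high-level directive blends a target pose on top. A minimal sketch, with made-up joint parameters and constants:

```python
import math
import random

def noise1d(t, seed=0):
    # Smooth 1D value noise: random values at integer times, smoothly blended.
    i = int(math.floor(t))
    r0 = random.Random(i * 1000003 + seed).uniform(-1, 1)
    r1 = random.Random((i + 1) * 1000003 + seed).uniform(-1, 1)
    f = t - i
    f = f * f * (3 - 2 * f)  # smoothstep
    return (1 - f) * r0 + f * r1

def joint_angle(t, base, directive_pose, directive_weight, seed):
    # Idle wiggle from noise, blended toward a pose requested by a "director".
    idle = base + 0.2 * noise1d(t, seed)
    return (1 - directive_weight) * idle + directive_weight * directive_pose

# A directive ramps in over one second while idle motion continues underneath.
for frame in range(5):
    t = frame / 30.0
    print(joint_angle(t, base=0.0, directive_pose=1.2,
                      directive_weight=min(1.0, t), seed=7))
```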

He brought up Thad Starner and Steve Mann, who spent the '90s and 2000s trying to turn themselves into human cyborgs with wearable sensors and virtual reality gear. Now virtual reality contact lenses are making their work more mainstream.

Eventually, though, VR will happen through direct stimulation of the brain. Stimulating the brain with infrared light through the skull has also been explored.

He said holograms will arrive through implants before they arrive through projection. Twenty-five years from now, an artificial retina will be better than the original, and people will demand it.

Western Electric's telephone crossed with a TV was 50 years too early, much as virtual reality was too early in the '90s.

Rainbows End by Vernor Vinge is not a very good book, in his opinion, but it is a good description of how some of these optical technologies will work in the future.

He talked about how hard it is to track people's hands; his research group is training algorithms to match hand poses captured with the Kinect, using real-time 3D matching to a 3D hand model through inverse kinematics.
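He didn't show code, but the core move, fitting an articulated model to observations via inverse kinematics, can be sketched with a toy planar "finger" and numerical gradient descent (the real system fits a full 3D hand model to depth data):

```python
import math

def forward(angles, lengths):
    # Forward kinematics of a planar joint chain: returns the fingertip (x, y).
    x = y = total = 0.0
    for a, l in zip(angles, lengths):
        total += a
        x += l * math.cos(total)
        y += l * math.sin(total)
    return x, y

def error(angles, lengths, target):
    # Squared distance between the model's fingertip and the observed point.
    px, py = forward(angles, lengths)
    return (px - target[0]) ** 2 + (py - target[1]) ** 2

def fit_pose(target, lengths, angles, iters=300, step=0.1, eps=1e-5):
    # Numerical gradient descent on joint angles to match the observed fingertip.
    angles = list(angles)
    for _ in range(iters):
        e0 = error(angles, lengths, target)
        grads = []
        for i in range(len(angles)):
            tweaked = list(angles)
            tweaked[i] += eps
            grads.append((error(tweaked, lengths, target) - e0) / eps)
        angles = [a - step * g for a, g in zip(angles, grads)]
    return angles

# Fit a two-segment finger to a fingertip observed at (1.2, 0.8).
pose = fit_pose(target=(1.2, 0.8), lengths=[1.0, 0.8], angles=[0.3, 0.3])
print(pose, forward(pose, [1.0, 0.8]))
```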

The VR research led to the iBird project, which tracks the body with a Kinect and an Oculus Rift while wind blows at the user, a bit like Disney's Soarin' ride, to give the user more cues to movement while in the virtual world.

His latest work deals with drawing things on a blackboard or whiteboard and giving them life-like properties automatically; eventually he will tie this into body tracking with computer vision for virtual reality interactions. Drawing graphs, numbers, and letters on the blackboard makes them come to life: it is a drawing program with AI. It combines the liveness of the blackboard with the simulation power of the screen and the lab. The gesture and writing library will be used in virtual reality in real time so avatars can communicate with one another.
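His system is far richer than this, but the basic step, classifying a freehand stroke and attaching behavior to it, might look like this toy sketch (all names and rules below are made up for illustration):

```python
import math

def stroke_is_closed(points, tol=0.2):
    # A stroke counts as closed if its endpoints nearly meet, relative to its size.
    (x0, y0), (x1, y1) = points[0], points[-1]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    size = max(max(xs) - min(xs), max(ys) - min(ys))
    return math.hypot(x1 - x0, y1 - y0) < tol * size

def interpret(points):
    # Closed strokes become bouncing balls; open strokes become static walls.
    if stroke_is_closed(points):
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        return {"kind": "ball", "pos": [cx, cy], "vel": [0.0, 0.0]}
    return {"kind": "wall", "points": points}

# A roughly circular stroke becomes a "ball" that a physics loop could animate.
circle = [(math.cos(a), math.sin(a)) for a in [i * 0.2 for i in range(32)]]
print(interpret(circle)["kind"])  # -> ball
```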

The first-generation Kinect loses really small things and gives a bad silhouette; the new Kinect has other problems, but its silhouette is gorgeous.

Every company is working on true augmented reality, focusing on small form factors, low-power components, and depth cameras.

The electronics will eventually just be inside our bodies.
