I’m seeing more and more articles talking about natural user interfaces (NUI) and using hand gestures to perform computing tasks. Many of the papers at the recent ACM SIGCHI conference in Atlanta dealt with new ways to interact with a computer that don’t require a keyboard and mouse. [Disclosure: I’m currently working as a contractor on the Microsoft Surface team, so I have a greater than average interest in this. Unfortunately, I can’t discuss any new features under development.]
One project disclosed at CHI was a method to use gestures to control the image displayed by an omnidirectional video projector. The example application was developed by Hrvoje Benko and Andrew Wilson and is a home planetarium called Pinch-The-Sky. Obviously, there could be other practical uses too (oil exploration, medical imaging, and video gaming come to mind). An application like this (except with the projector inside and the user outside) would work great in conjunction with the Orbitarium projector I described in a previous blog post. Imagine being able to spin the globe, stop it, and zoom in or out.
My wife often complains that PowerPoint was a big step back from overhead transparencies because it makes it difficult or impossible for her to incorporate audience feedback into her presentations on the spot. The ability to combine pen input with hand gestures could offer a solution. A prototype application that does this, called Manual Deskterity, was also demonstrated at CHI. It is the work of Ken Hinckley, Bill Buxton, and others.
Finally, I just noticed that Motorola has introduced a new Android-based phone called the BackFlip that has a touch sensor on its back cover. Motorola calls it BackTrack navigation. The rear touch sensor lets users select items on the screen without obscuring the display with their fingers. Back-of-device touch input originated with Daniel Wigdor’s LucidTouch project and was refined in Patrick Baudisch’s NanoTouch project.