I remember when I first saw the E3 presentation: I was so impressed by the idea of using the human body as a controller. Years later, I wondered, what about using the human face as a controller for viewing? So I did an undergraduate research project on using face-tracking data to control the viewing of panoramic images in real time. Because of the instability of the face tracker used in my research, the results weren't really promising, so I never published a paper on it. Then, in the same year, I came across FaceAPI (linked below). They did basically the same thing, but with much more success than I had.
I think incorporating face-tracking data into computer control could bring many changes to interface design. We could adjust the image shown on the monitor according to the user's head movements, which would let more information be displayed without enlarging the screen (e.g., when the user looks left, we show the left part of the desktop, and when the user looks right, we show the right part). The amazing part is that this doesn't even require special hardware: in both my research and FaceAPI, the only additional equipment needed is a web camera, which is already built into most laptop computers and monitors.
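As a rough sketch of the mapping described above, the tracked face position can be translated into a viewport offset over a wider virtual desktop. The function and parameter names here are my own illustration, not code from my project or from FaceAPI:

```python
def head_to_viewport_x(face_x, frame_width, desktop_width, view_width):
    """Map the horizontal position of a tracked face to a viewport offset.

    face_x:        x-coordinate of the face center in the camera frame (pixels)
    frame_width:   width of the camera frame (pixels)
    desktop_width: width of the virtual desktop (pixels)
    view_width:    width of the visible viewport (pixels)
    """
    # The camera image is mirrored relative to the user: looking left moves
    # the face to the right of the frame, so we invert the normalized value.
    t = 1.0 - (face_x / frame_width)
    # The viewport can slide across the desktop space that does not fit on screen.
    max_offset = desktop_width - view_width
    offset = t * max_offset
    # Clamp in case the tracker reports a face outside the frame.
    return max(0.0, min(max_offset, offset))
```

With a 3840-pixel virtual desktop shown through a 1920-pixel viewport, a face in the middle of a 640-pixel-wide camera frame would center the viewport, while a face at the frame's left edge would slide it all the way to one side.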
However, the big challenge is calibrating the control. The dilemma is that small face movements are hard to track, while large movements are counter-intuitive when the user is looking at a screen close to them. Given the limited accuracy and robustness of current face-tracking algorithms, perfect calibration is almost impossible. And because we're dealing with interaction, even a tiny delay or mismatch causes a lot of confusion for users. I think we still need to be cautious in experimenting with this, but I hope there will be future advances in the area.
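One common way to soften this dilemma, though not something I used in my original project, is to combine exponential smoothing with a deadzone and a gain factor: jitter gets filtered out, and the remaining small, comfortable head movements are amplified into useful view changes. A minimal sketch, with parameter values chosen purely for illustration:

```python
def smooth_and_gain(raw, prev, alpha=0.3, deadzone=0.02, gain=3.0):
    """Tame noisy head-tracking input (a sketch, not a tested calibration).

    raw:      new normalized head displacement in [-1, 1]
    prev:     previous smoothed value (the filter's state)
    alpha:    smoothing factor; lower = smoother but laggier
    deadzone: ignore displacements smaller than this (tracker jitter)
    gain:     amplify what remains, so small head movements suffice
    Returns (output displacement, new filter state).
    """
    # Exponential smoothing reduces frame-to-frame jitter at the cost of a
    # little latency -- exactly the trade-off discussed above.
    smoothed = alpha * raw + (1 - alpha) * prev
    # Suppress tiny movements the tracker cannot distinguish from noise.
    if abs(smoothed) < deadzone:
        return 0.0, smoothed
    # Amplify so the user never has to look far away from the screen,
    # clamped back into the valid range.
    amplified = max(-1.0, min(1.0, smoothed * gain))
    return amplified, smoothed
```

The deadzone directly trades away sensitivity to small movements, and the smoothing adds delay, so these parameters only shift the dilemma around rather than solve it; better trackers are what would really move the needle.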