Thursday, September 22, 2011

Paper Reading #10: Sensing Foot Gestures from the Pocket



Sensing Foot Gestures from the Pocket

By:
Jeremy Scott, David Dearman, Koji Yatani, Khai Truong.

Presented at UIST 2010.
  • Jeremy Scott has a Bachelor of Science, a Master of Science, and a PhD in Pharmacology and Toxicology from the University of Western Ontario. He is currently employed as an Assistant Professor at the University of Toronto.
  • David Dearman is currently a PhD student at the University of Toronto.
  • Koji Yatani is also a current PhD student at the University of Toronto.
  • Khai Truong is currently an Associate Professor in the C.S. Department at the University of Toronto.
Summary
Hypothesis
The authors aim to study whether foot-based gestures can be used to control a phone in the user's pocket. Using the results of that study, they then developed a working system that recognises foot gestures and takes action on the phone.

Methods
For the purposes of gathering initial results, the authors had participants perform tasks in which targets on the phone were mapped to positions of their dominant foot (the right foot, for all participants). The participants were asked to hit 43 targets across three different flexions and rotations. Once this study had established how the system would need to be set up, they ran a second study to test how foot gestures could control a phone, using an iPhone, accelerometers, and multiple cameras in a setup similar to the preliminary study. A minimal sketch of the target-selection idea is shown below.
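The paper doesn't spell out its exact target layout or angle values, so the numbers below are purely illustrative; this sketch just shows the basic idea of snapping a measured foot-rotation angle to the nearest radial target.

```python
# Hedged sketch: map a measured foot-rotation angle to the nearest target.
# Target spacing and range are assumptions, not the paper's actual values.

def nearest_target(angle_deg, targets_deg):
    """Return the index of the target closest to the measured rotation angle."""
    return min(range(len(targets_deg)),
               key=lambda i: abs(targets_deg[i] - angle_deg))

# Hypothetical layout: targets spaced every 15 degrees of foot rotation.
targets = [i * 15 for i in range(-3, 4)]   # -45 .. +45 degrees
print(nearest_target(22.0, targets))       # -> index of the 15-degree target
```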

Results
Their testing resulted in 82%-92% accuracy in classifying foot gestures, which is fairly reliable for controlling the phone. They also learnt that keeping the phone at the user's side, as opposed to in a front or rear pocket, allowed for greater accuracy. Unfortunately, the system occasionally confused similar gestures, especially those in neighbouring degree ranges.

Contents
The authors spent a fair bit of time gathering their data: they compiled a list of all the gestures that were possible and needed, and then studied how easily each could be recognised. Once that was in place, they integrated the gathered data into a system of their own design that could interface with a smartphone. For data gathering and testing, they used multiple cameras placed at various locations and angles, working in conjunction with accelerometers to interface with the phone. A rough sketch of what such a recognition pipeline could look like follows.
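The blog post doesn't reproduce the paper's actual features or classifier, so everything in this sketch is an assumption: it windows the accelerometer stream, extracts simple per-axis statistics, and labels each window with a nearest-neighbour rule against pre-gathered training examples.

```python
# Hedged sketch of an accelerometer-based gesture classifier (illustrative
# only; the paper's real feature set and classifier may differ).

import math

def features(window):
    """Mean and standard deviation of each accelerometer axis in a window."""
    feats = []
    for axis in zip(*window):                  # window: list of (x, y, z) samples
        mean = sum(axis) / len(axis)
        var = sum((v - mean) ** 2 for v in axis) / len(axis)
        feats += [mean, math.sqrt(var)]
    return feats

def classify(window, training):
    """Label the window with the class of its nearest training example."""
    f = features(window)
    label, _ = min(training, key=lambda ex: math.dist(f, ex[1]))
    return label

# training: list of (gesture_label, feature_vector) pairs gathered beforehand,
# e.g. [("flex_up", features(w1)), ("rotate_left", features(w2)), ...]
```

A real system would also need a segmentation step to decide when a gesture starts and ends, which is exactly where the walking/running confusion discussed below becomes a problem.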

Discussion
While the actual study and implementation are rather intriguing and "cool" in themselves, I personally find the overarching concept to be of little value. The fact that the authors never resolved the problem of distinguishing deliberate gestures from the "gestures" that are really just walking or running makes it even less useful. In addition, the fact that the system greatly restricts where the phone can be carried in order to stay accurate is even harder to accept. While this may prove to be a somewhat useful tool for users who lack visual feedback, its use is greatly limited and its capability even more so.
