Wednesday, December 14, 2011

Paper Reading #32: Taking advice from intelligent systems: the double-edged sword of explanations

Taking advice from intelligent systems: the double-edged sword of explanations

IUI '11

By:
Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen.
  • Kate Ehrlich is a senior Technical Staff member at IBM.
  • Susanna Kirk holds an MS in Human Factors in Information Design.
  • John Patterson is a Distinguished Engineer at IBM.
  • Jamie Rasmussen is a member of the same team as John and Kate at IBM.
  • Steven Ross is a member of the same team as Jamie, John and Kate at IBM.
  • Daniel Gruen is currently working on the Unified Activity Management project at IBM.

Summary
Hypothesis
The researchers aim to assess how well people currently take advice from intelligent systems and to explore ways of improving that interaction.

Methods
Using software called NIMBLE, the researchers gathered data on the effect of network analysts following the advice of an intelligent system.

Results
Users' performance improved slightly when the system gave a correct recommendation, and when no correct recommendation or justification was available, the system kept quiet. These gains mattered less than expected, since most users ignored the recommendations and relied on their own knowledge instead. The researchers also noticed that users followed a recommendation more quickly when it was closer to what they were already inclined to do.

Contents
The authors aimed to create a study that would test the accuracy of intelligent system recommendations and their effect on humans. By comparing the gathered values with baseline results, the researchers were able to weigh the benefit of correct recommendations against the harm caused by incorrect ones.
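To make that comparison concrete, here is a rough sketch of my own (not the authors' analysis) of how the net effect of advice could be expressed relative to a no-advice baseline; all variable names and numbers are hypothetical.

```python
def net_advice_effect(baseline_acc, acc_with_correct_rec,
                      acc_with_incorrect_rec, p_correct_rec):
    """Weigh the benefit of correct recommendations against the harm of
    incorrect ones, relative to a no-advice baseline."""
    benefit = acc_with_correct_rec - baseline_acc    # gain when the system is right
    harm = baseline_acc - acc_with_incorrect_rec     # loss when the system is wrong
    # Expected change in accuracy if the system is right with probability p_correct_rec
    return p_correct_rec * benefit - (1 - p_correct_rec) * harm

# Hypothetical illustration numbers, not figures from the paper:
print(net_advice_effect(baseline_acc=0.70, acc_with_correct_rec=0.78,
                        acc_with_incorrect_rec=0.55, p_correct_rec=0.80))  # ~0.034
```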

Discussion
While intriguing, I feel the topic of this paper isn't likely to progress very much. As the researchers noticed, most users aren't inclined to base actions they already have experience with on a computer's recommendations.

Paper Reading #31: Identifying emotional states using keystroke dynamics

Identifying emotional states using keystroke dynamics

CHI '11

By:
Clayton Epp, Michael Lippold, and Regan Mandryk.
  • Clayton Epp is currently a Software Engineer for a private consulting firm. He also holds a master's degree in CHI from the University of Saskatchewan.
  • Michael Lippold is currently a masters student at the University of Saskatchewan.
  • Regan Mandryk is an Assistant Professor in the department of CS at the University of Saskatchewan.

Summary
Hypothesis
The researchers hypothesise that it is feasible to discern a user's emotional state from their keystrokes.

Methods
Using a program that recorded keystrokes, the researchers had users fill out an emotional-state questionnaire and type an additional fixed piece of text, with prompts triggered by how much they had been typing. The data gathered included key-press and key-release events, the code assigned to each key, and timestamps for every key event.
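To make the kind of data concrete, here is a minimal sketch (my own, not the authors' logging tool) of how per-keystroke timing features such as dwell time (press to release) and flight time (release to the next press) can be derived from such events; the event format and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key_code: int      # code assigned to the key
    event: str         # "press" or "release"
    timestamp: float   # seconds

def keystroke_features(events):
    """Derive simple keystroke-dynamics features from a stream of key events:
    dwell time (press -> release of the same key) and flight time
    (release of one key -> press of the next)."""
    pressed = {}          # key_code -> press timestamp
    last_release = None
    dwells, flights = [], []
    for e in sorted(events, key=lambda e: e.timestamp):
        if e.event == "press":
            if last_release is not None:
                flights.append(e.timestamp - last_release)
            pressed[e.key_code] = e.timestamp
        elif e.event == "release" and e.key_code in pressed:
            dwells.append(e.timestamp - pressed.pop(e.key_code))
            last_release = e.timestamp
    return dwells, flights

# Example: typing two keys (key codes are arbitrary placeholders)
events = [KeyEvent(72, "press", 0.00), KeyEvent(72, "release", 0.09),
          KeyEvent(73, "press", 0.21), KeyEvent(73, "release", 0.30)]
print(keystroke_features(events))  # two dwell times and one flight time
```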

Results
By under-sampling the training data so that their various models saw more balanced (and therefore more meaningful) classes, the researchers found the resulting models to be more accurate and more consistent; essentially, they were far better.
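For readers unfamiliar with the technique, under-sampling simply discards examples from over-represented classes so that each emotional state contributes roughly equally to training. The sketch below is a generic illustration of random under-sampling, not the authors' exact procedure.

```python
import random
from collections import defaultdict

def undersample(samples, labels, seed=0):
    """Randomly discard samples from majority classes so every class
    ends up with as many samples as the rarest class."""
    by_label = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_label[label].append(sample)
    n = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for label, group in by_label.items():
        for sample in rng.sample(group, n):
            balanced.append((sample, label))
    rng.shuffle(balanced)
    return balanced

# Example: five "neutral" feature vectors vs. two "frustrated" ones
data = [([0.1], "neutral")] * 5 + [([0.9], "frustrated")] * 2
print(undersample([x for x, _ in data], [y for _, y in data]))
```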

Contents
This paper discusses how minute measurements of keystrokes can be used to give a fairly accurate picture of the user's emotional state. There is some discussion of related work in human computing. The authors also spend some time discussing how to further this research and how to use the paper's results effectively.

Discussion
I feel this paper points to some potential problems: it could simply mean that the ads I get online will now be fine-tuned to my mood. However, it has plenty of pros as well. Text-only online communication could become far clearer if the accompanying mood were displayed, reducing misunderstandings. This might even lead to the acceptance of sarcasm in text-based communication.

Paper Reading #30: Life "modes" in social media

Life "modes" in social media

CHI '11

By:
Fatih Kursat Ozenc, and Shelly Farnham.
  • Fatih Kursat Ozenc is at Carnegie Mellon and also holds a PhD in Interaction Design.
  • Shelly Farnham is currently a researcher at Microsoft Research. She also holds a PhD from the University of Washington.
Summary
Hypothesis
This paper looks at ways to improve users' ability to get the most out of social networking sites by letting them organise their online world around life "modes".

Methods
The researchers gathered data for their idea by selecting 16 participants after a thorough screening and asking them to draw a model of their lives using a colour scheme, with a focus on how they communicate with each person and how much time they spend with them.

Results
The vast majority of participants drew their lives as a social meme map, while a few used a timeline. The researchers discovered that communication channels depended heavily on how close a person was to a given area of one's life: the closer the person, the more means of communication were used. They also noticed a degree of compartmentalisation, and a relation between the level of compartmentalisation and the user's age, personality and culture.

Contents
This paper attempted to figure out a way to allow people to manage, organise and compartmentalise their lives on social networking sites. Based on their research they were able to come up with a method that they felt worked pretty well.

Discussion
The conclusion of their research sounds an awful lot like what Facebook implemented recently with its circles and the ability to select how close one is to each friend. I'm unsure whether Facebook drew on this work, but that implementation shows that this research was correct and has, in effect, been validated.

Paper Reading #29: Usable gestures for blind people: understanding preference and performance

Usable gestures for blind people: understanding preference and performance

CHI '11

By:
Shaun Kane, Jacob Wobbrock, and Richard Ladner.
  • Shaun Kane is currently an Associate Professor at the University of Maryland. He also holds a PhD from the University of Washington.
  • Jacob Wobbrock is currently an Associate Professor at the University of Washington.
  • Richard Ladner is currently a Professor at the University of Washington. He also holds a PhD in Math from the University of California, Berkeley.
Summary
Hypothesis
Considering the different needs of blind people, especially with regard to touch-based gestures, this paper examines the problem and seeks a solution.

Methods
The researchers set up two studies to gather appropriate data, with one group of sighted participants and one group of blind participants. Both groups were asked to invent two gestures for each command read to them, and these gestures were later evaluated for ease of use and appropriateness. A second study targeted the specifics of how blind people perform gestures: both groups were asked to perform standardised gestures, and the results were recorded for analysis.

Results
The first experiment showed that blind people were more likely to use complex gestures and to place their gestures closer to the edge of the tablet. They were also far more likely to use multi-touch gestures. The second experiment showed no great difference, except that blind people's gestures were larger and took longer to draw.

Contents
This paper makes an effort to bring touch-screen phones to the blind in a usable way. The authors discuss some previous work before describing their experiments and the rather predictable data gleaned from them. Using the results, they recommend improvements that could make touch-screen devices more usable for the blind.

Discussion
This paper does a fantastic job of bringing something that is rarely thought about to the forefront. Blind people are not usually who one pictures as users of touch-screen phones, and this paper shows there is no reason they can't use touch-screen devices. Given the recommendations in this paper and some further research, this gap could be bridged quite simply.

Paper Reading #28: Experimental analysis of touch-screen gesture designs in mobile environments

Experimental analysis of touch-screen gesture designs in mobile environments

CHI '11

By:
Andrew Bragdon, Eugene Nelson, Yang Li, and Ken Hinckley.
  • Andrew Bragdon is currently a PhD student at Brown University.
  • Eugene Nelson is currently a PhD Student at Brown University.
  • Yang Li is currently a researcher at Google. He also holds a PhD from the Chinese Academy of Sciences.
  • Ken Hinckley is a Principal Researcher at Microsoft Research. He also holds a PhD from the University of Virginia.
Summary
Hypothesis
The researchers are developing an application that uses bezel- and mark-based gestures to allow faster, more accurate actions on touch-screen phones while demanding less of the user's attention.

Methods
To test and assess their application, 15 participants were asked to complete a series of tasks under distractions of varying intensity. Sitting and walking were the two major conditions studied, with distraction levels ranging from none at all to fully attention-demanding. To gather data properly, the participants were given a pre-test questionnaire along with appropriate instructions for completing their tasks.

Results
Bezel marks were by far the fastest in mean completion time, with soft and hard buttons coming next with a negligible difference between their mean completion times. While bezel marks and soft buttons performed similarly under direct attention, bezel marks far outperformed soft buttons under distraction. Bezel-plus-soft-button paths had a quicker mean time than bezel-plus-hard-button paths.

Contents
This paper spends much of its time looking at the effect of different types of buttons and their varying levels of effectiveness under different circumstances. The authors concluded that direct touch is the most accurate and the quickest regardless of input form, but that under distraction hard buttons are the most successful form of input.

Discussion
The authors did a fantastic job of attaining their objective. By the end of the experiment they had a superior understanding of distractions, their effect on user input and the best input and type of input to use during varying levels of interaction and distraction. It's a fairly relevant paper and certainly worth further research.

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

Sensing cognitive multitasking for a brain-based adaptive user interface

CHI '11

By:
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert Jacob.

  • Erin Treacy Solovey is currently a Post Doctoral fellow at MIT.
  • Francine Lalooses is currently a PhD student at Tufts University.
  • Krysta Chauncey is currently a Post Doctoral researcher at Tufts University.
  • Douglas Weaver has a doctorate from Tufts University.
  • Margarita Parasi is currently a Masters student at Tufts University.
  • Angelo Sassaroli is currently a research assistant Professor at Tufts University. He also holds a PhD from the University of Electro-Communications.
  • Sergio Fantini is currently a Professor at Tufts University in the department of Biomedical Engineering.
  • Paul Schermerhorn is currently a post doctoral researcher at Tufts University and was previously at Indiana University.
  • Audrey Girouard is currently an assistant Professor at Queen's University and holds a PhD from Tufts University.
  • Robert Jacob is currently a Professor at Tufts University.
Summary
Hypothesis
The researchers aim to build a system that can recognise cognitive multitasking states and help humans complete the tasks involved.

Methods
In the first experiment, participants were asked to interact with a simulated robot on Mars that was collecting and sorting rocks. Data was collected for three task classifications: delay, dual-task and branching. In the second experiment the researchers looked more specifically at branching tasks, to see whether they could distinguish between random branching and predictive branching. They then repeated the first experiment, this time with only two experimental states.

Results
Statistical analysis of the first experiment, involving all variables, showed significant differences in response time between the delay and dual-task conditions and between the delay and branching conditions. There were, however, no strong correlations, and therefore no learning curve was discovered. Statistical analysis of the second experiment found no significant relationships and no correlations between any of the variables.

Contents
The paper describes the objective and then the studies carried out to gather data toward it. The authors' attempts to assess cognitive multitasking in support of human-robot interaction are chronicled. The paper connects the current research to related work and attempts to expand upon pre-existing research.

Discussion
While the scientists were unable to fully confirm their hypothesis, they did make considerable progress in certain directions, opening up future research in the area. Their testing and research were exceptionally thorough, so any further work based on this research will be built on a solid foundation.

Paper Reading #26: Embodiment in brain-computer interaction

Embodiment in brain-computer interaction

CHI '11

By:
Kenton O'Hara, Abigail Sellen, and Richard Harper.
  • Kenton O'Hara is currently a senior researcher at Microsoft Research.
  • Abigail Sellen is currently a principal researcher at Microsoft Research and holds a PhD in CS from the University of California, San Diego.
  • Richard Harper is currently a principal researcher at Microsoft Research and holds a PhD in CS from Manchester.
Summary
Hypothesis
The authors of this paper study how the brain and the full body together interact with computers.

Methods
The study used MindFlex, a game that pairs an EEG headset with a fan: the headset measures brain activity and the game adjusts the fan's speed accordingly. High brain activity produces a higher fan speed and reduced brain activity a lower one. Participants were asked to play the game in a relaxed setting and their game play was recorded. The recordings were analysed, and the bodily reactions (gestures, facial and bodily expressions, audible words), combined with the reaction of the fan, gave the researchers a better idea of how the interaction worked. They were able to better describe visible reactions and their relation to brain activity.
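As a rough sketch of the feedback loop the game embodies (my own assumption, since the paper does not give MindFlex's internals), a normalised attention reading from the EEG could be mapped to fan speed like this:

```python
def fan_speed(attention, min_rpm=0, max_rpm=3000):
    """Map a normalised EEG attention reading in [0, 1] to a fan speed.
    Higher measured brain activity -> faster fan -> the ball rises."""
    attention = max(0.0, min(1.0, attention))   # clamp noisy readings
    return min_rpm + attention * (max_rpm - min_rpm)

# A relaxed player vs. a concentrating player (hypothetical readings)
print(fan_speed(0.2))   # 600.0 rpm
print(fan_speed(0.85))  # 2550.0 rpm
```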

Results
Body reactions correlated strongly with the task at hand: tasks requiring concentration produced actions such as hunching over or clenching fists. The researchers also noticed that players performed more bodily "instructions" than the game actually required, and looked at how this related to performance.

Contents
The paper spends a fair bit of time on the need for a better understanding of the human mind and the body's role in affecting and supporting it. It then describes the testing phase, the results of that phase, and the relationships the researchers discovered and their implications.

Discussion
A rather dull paper to read. While I certainly appreciate the attempt to take computing to a new level and the logical leap from here to ubiquitous computing, this was still a pain to read. A highly intriguing topic, just very poorly and dully written.

Paper Reading #25: Twitinfo: aggregating and visualizing microblogs for event exploration

Twitinfo: aggregating and visualizing microblogs for event exploration

CHI '11

By:
Adam Marcus, Michael Bernstein, Osama Badar, David Karger, Samuel Madden, and Robert Miller.

  •  Adam Marcus is currently a graduate student at MIT in the CS and AI department.
  • Michael Bernstein is currently a graduate student at MIT in the CS and AI department concentrating on HCI.
  • Osama Badar is a member of the CS and AI department at MIT.
  • David Karger is a Professor in the EECS department at MIT and a member of the CS and AI department.
  • Samuel Madden is an associate professor at MIT in the EECS department.
  • Robert Miller is an associate professor at MIT in the EECS department and is currently leading the User Interface Design group.

Summary
Hypothesis
The purpose of Twitinfo is to analyse Twitter data and draw conclusions from that analysis. It can also summarise tweets around a search term or a detected event.
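Event detection of this kind ultimately comes down to finding spikes in tweet volume over time. The sketch below is a simple moving-average detector of my own, not TwitInfo's actual algorithm, just to illustrate the idea:

```python
def detect_peaks(counts, window=5, threshold=2.0):
    """Flag time bins whose tweet count far exceeds the recent average.

    counts: tweets per time bin (e.g. per minute); a bin is a peak when its
    count exceeds `threshold` times the mean of the previous `window` bins.
    """
    peaks = []
    for i in range(window, len(counts)):
        recent_mean = sum(counts[i - window:i]) / window
        if recent_mean > 0 and counts[i] > threshold * recent_mean:
            peaks.append(i)
    return peaks

# Hypothetical per-minute tweet counts around a goal in a football match
counts = [10, 12, 9, 11, 10, 55, 60, 20, 12, 11]
print(detect_peaks(counts))  # [5, 6] -> the spike spans bins 5 and 6
```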

Methods
All testing of Twitinfo focused on its user interface and thus its usability. Twelve participants were asked to search Twitter using Twitinfo and research different recent events. The second test involved a similar search with a time limit. After the tests the users were interviewed about their reactions to using Twitinfo in the two sessions.

Results
Most participants were able to research topics thoroughly when they had no time limit: they explored the tweets and related links and used the map to learn more. Introducing the time limit led to hastier research with more skimming and noticeably less use of advanced features. Tweets were used more to confirm previous information than to gather new information, which made the research slightly less thorough.

Contents
Most of the article concentrates on the specifics of how Twitinfo works, its underlying database and user interface, and its particular implementation. User testing is also described, along with potential uses such as identifying key trends on Twitter.

Discussion
I find this paper to be extremely academic in nature and see very little real-world application for Twitinfo. While it may be somewhat useful for sentiment analysis and for gaining a broad understanding of public opinion, it isn't a particularly useful or accurate tool for serious polling for advertising or political purposes.

Paper Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

Gesture avatar: a technique for operating mobile user interfaces using gestures

CHI '11

By:
Hao Lu and Yang Li.
  • Hao Lu is currently a graduate student in CSE at the University of Washington.
  • Yang Li has a PhD in Computer Science from the Chinese Academy of Sciences. He is currently a Senior Research Scientist for Google.

Summary
Hypothesis
Gesture Avatar is meant to resolve the problem of imprecise touch-screen input. The paper also compares Gesture Avatar with Shift, looking at Gesture Avatar's speed on small targets as opposed to large ones, its generally lower error rate, and its ease of use regardless of the usage state (walking, sitting, etc.).

Methods
Participants were divided into two groups and asked to run a series of tests on either Gesture Avatar or Shift and then later switch. All tests were done both while walking and while sitting still. The tests included acquiring targets of varying size, shape, ambiguity and complexity. Using the English alphabet and varying the size of the keys and the space between them, the researchers were able to test thoroughly and find the combination that best reduced errors.

Results
Gesture Avatar was found to lag behind Shift at a target size of 20 pixels, to perform about the same at 15 pixels, and to have the advantage at 10 pixels. At sizes greater than 20 pixels both Gesture Avatar and Shift became quicker. Shift performed better in a stationary setting than in a moving one, while Gesture Avatar was equally quick in both states of use.

Contents
This paper presents Gesture Avatar, an application designed to make touch-screen input more precise. The application was developed for Android and was pitted against the Shift technique to better understand its limitations and areas for improvement. After much testing the authors concluded that they had met their objective, given the positive reviews from the test subjects.

Discussion
This paper was fantastic and highly relevant considering the massive influx of touch-screen phones on the market and the fairly high rate of inaccurate screens among them. I feel a polished, complete version of this application would work out extremely well, especially if the authors built on the user feedback.

Paper Reading #23: User-Defined Motion Gestures for Mobile Interaction

User-Defined Motion Gestures for Mobile Interaction

CHI '11

By:
Jaime Ruiz, Yang Li, Edward Lank.
  • Jaime Ruiz is currently a Doctoral Student in HCI at the University of Waterloo.
  • Yang Li has a PhD in Computer Science from the Chinese Academy of Sciences. He is currently a Senior Research Scientist for Google.
  • Edward Lank has a PhD in Computer Science from Queen's University. He is currently an Assistant Professor for Computer Science at the University of Waterloo.
Summary
Hypothesis
The authors of this paper feel that more research is needed into how best to use and program the multiple three-dimensional sensors on modern smartphones.

Methods
To check the validity of their ideas, the researchers had 20 participants design, demonstrate and test motion gestures for controlling and commanding smartphones. These user-created gestures were analysed for accuracy and usefulness and then used in a second study, in which users were asked to perform the aforementioned gestures and rank them based on how well they matched the command at hand and how easy they were to use.

Results
Naturally, most participants in the first study came up with simple, obvious gestures that mimicked typical use of the commands. The physical nature of the phone and its dimensions were used extensively, and users found it relatable to treat the phone as a physical object.

Contents
The paper spends a large amount of time describing the experimental setup, its objectives and its results. Most participants felt that the more natural gestures should be mapped to the more frequent commands, making them easy to learn and more intuitive. The paper goes on to discuss the parameters used to analyse the gestures the participants produced, and defines taxonomy dimensions of gesture mapping and physical characteristics, with gesture mapping further broken down into abstract, metaphor, physical and symbol.

Discussion
This paper was interesting to read, and while it shows great leaps and bounds of progress in the field of mobile devices and how we interact with them, I wasn't all that impressed. I see no reason for all this work, and personally I am perfectly happy using keys to communicate effectively with my device as opposed to trying to use a gesture multiple times before finally getting it right.