Wednesday, December 14, 2011

Paper Reading #32: Taking advice from intelligent systems: the double-edged sword of explanations

Taking advice from intelligent systems: the double-edged sword of explanations

IUI '11

By:
Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen.
  • Kate Ehrlich is a Senior Technical Staff Member at IBM.
  • Susanna Kirk holds an MS in Human Factors in Information Design.
  • John Patterson is a Distinguished Engineer at IBM.
  • Jamie Rasmussen is a member of the same team as John and Kate at IBM.
  • Steven Ross is a member of the same team as Jamie, John and Kate at IBM.
  • Daniel Gruen is currently working on the Unified Activity Management project at IBM.

Summary
Hypothesis
The researchers aim to determine how reliably users currently take advice from intelligent systems, and to explore ways of improving upon it.

Methods
Using software called NIMBLE, the researchers gathered data on how a network analyst's performance was affected by following the advice of an intelligent system.

Results
Users' performance improved slightly when the system made a correct recommendation; when no correct recommendation or justification was available, the system kept quiet. All of this mattered less than expected, since most users ignored the recommendations and relied on their own knowledge instead. The researchers also noticed that users followed a recommendation more quickly when it was closer to what they were already inclined towards.

Contents
The authors aimed to create a study that would test the accuracy of intelligent-system recommendations and their effect on humans. By comparing the gathered values with baseline results, the researchers were able to weigh the benefit of correct recommendations against the harm done by incorrect ones.

Discussion
While intriguing, I feel the topic of this paper isn't likely to progress very much. As the researchers noticed, most users aren't inclined to change actions they have experience with based on a computer's recommendations.

Paper Reading #31: Identifying emotional states using keystroke dynamics

Identifying emotional states using keystroke dynamics

CHI '11

By:
Clayton Epp, Michael Lippold, and Regan Mandryk.
  • Clayton Epp is currently a Software Engineer for a private consulting firm. He also holds a master's degree in human-computer interaction from the University of Saskatchewan.
  • Michael Lippold is currently a master's student at the University of Saskatchewan.
  • Regan Mandryk is an Assistant Professor in the department of CS at the University of Saskatchewan.

Summary
Hypothesis
The researchers hypothesise that it is feasible to discern a user's emotional state from their keystroke dynamics.

Methods
Using a program that recorded keystrokes, the researchers had users periodically fill out an emotional-state questionnaire and then type a fixed piece of text. The data gathered consisted of key press and release events, the code assigned to each key, and timestamps for the key events.
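
From events like these, keystroke-dynamics features such as dwell time (how long a key is held) and flight time (the gap between one key's release and the next key's press) can be derived. A minimal sketch of that derivation, with field names and structure that are my assumptions rather than the authors' implementation:

```python
def keystroke_features(events):
    """events: list of (key_code, press_ms, release_ms) tuples,
    ordered by press time."""
    # Dwell time: how long each key was held down.
    dwell = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "mean_dwell": sum(dwell) / len(dwell),
        "mean_flight": sum(flight) / len(flight) if flight else 0.0,
    }

# Example: three keystrokes typed in quick succession.
sample = [(72, 0, 95), (73, 130, 210), (33, 260, 340)]
print(keystroke_features(sample))  # {'mean_dwell': 85.0, 'mean_flight': 42.5}
```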

Results
By under-sampling the over-represented classes before training their various models, the researchers obtained more meaningful data and concluded that these models were more accurate and more consistent; essentially, they were far better than models trained on the raw, imbalanced data.
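
Under-sampling here means discarding examples of the over-represented classes until every class is the size of the rarest one. A generic sketch of random under-sampling (my own illustration, not the paper's code):

```python
import random

def undersample(samples, labels, seed=0):
    """Randomly drop majority-class examples until every class
    has as many examples as the rarest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n = min(len(xs) for xs in by_class.values())  # size of the rarest class
    balanced = [(x, y) for y, xs in by_class.items() for x in rng.sample(xs, n)]
    rng.shuffle(balanced)
    return balanced
```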

Contents
This paper discusses how minute measurements of keystroke timing can be used to give a fairly accurate picture of the user's emotional state. There is some discussion of related work in human computing. The authors also spend some time discussing ways to take this research further and how to use the paper's results effectively.

Discussion
I feel this paper points to some potential problems: it may just mean that the ads I get online will be fine-tuned to my mood. However, it does have plenty of pros. It could make text-only online communication far clearer by displaying the accompanying mood, thus reducing misunderstandings. This may even lead to the acceptance of sarcasm in text-based communication.

Paper Reading #30: Life "modes" in social media

Life "modes" in social media

CHI '11

By:
Fatih Kursat Ozenc and Shelly Farnham.
  • Fatih Kursat Ozenc is at Carnegie Mellon and also holds a PhD in Interaction Design.
  • Shelly Farnham is currently a researcher at Microsoft Research. She also holds a PhD from the University of Washington.
Summary
Hypothesis
This paper looks at ways to improve users' ability to get the most out of social networking sites by letting them organise their online world more thoroughly around life "modes".

Methods
The researchers gathered data by selecting 16 participants after a thorough screening and asking them to draw a model of their lives using a colour scheme, with a focus on how they communicated with each person and how much time they spent with them.

Results
The vast majority of participants drew their lives as a social meme map, while a few used the timeline method instead. The researchers discovered that communication channels depended heavily on closeness to a person in each area of one's life: the closer a person was, the more means of communication were used. They also noticed a degree of compartmentalisation, and a relation between the level of compartmentalisation and the user's age, personality, and culture.

Contents
This paper attempted to figure out a way to let people manage, organise, and compartmentalise their lives on social networking sites. Based on their research, the authors came up with a method that they felt worked pretty well.

Discussion
The conclusion of their research seems an awful lot like what Google+ implemented recently with its Circles, and the ability to select how close one is to each friend. I'm unsure whether this research informed that feature, but the implementation suggests the research's conclusions were on the right track.

Paper Reading #29: Usable gestures for blind people: understanding preference and performance

Usable gestures for blind people: understanding preference and performance

CHI '11

By:
Shaun Kane, Jacob Wobbrock, and Richard Ladner.
  • Shaun Kane is currently an Associate Professor at the University of Maryland. He also holds a PhD from the University of Washington.
  • Jacob Wobbrock is currently an Associate Professor at the University of Washington.
  • Richard Ladner is currently a Professor at the University of Washington. He also holds a PhD in Math from the University of California, Berkeley.
Summary
Hypothesis
Considering that blind people have different needs, especially in regard to touch-based gestures, this paper examines how their gesture preferences and performance differ from sighted people's and seeks a solution.

Methods
The researchers set up two studies to gather appropriate data. They had one group of sighted participants and one group of blind participants, and both groups were asked to invent two gestures for each command read to them. These gestures were later evaluated for ease of use and appropriateness. A second study targeted the specifics of how blind people perform gestures: both groups were asked to perform standardised gestures, and the results were recorded for analysis.

Results
The first experiment showed that blind people were more liable to use complex gestures placed closer to the edge of the tablet. They were also a lot more likely to use multi-touch gestures. The second experiment showed no great difference, except that blind participants' gestures were larger and took longer to draw.
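
Metrics like gesture size and drawing time are straightforward to compute from a recorded touch trace. A hypothetical sketch (the trace representation and metric names are mine, not the paper's):

```python
def gesture_metrics(trace):
    """trace: list of (x, y, t) touch samples for one gesture,
    with coordinates in pixels and t in seconds."""
    xs = [x for x, _, _ in trace]
    ys = [y for _, y, _ in trace]
    return {
        # Bounding-box area as a simple proxy for gesture size.
        "area": (max(xs) - min(xs)) * (max(ys) - min(ys)),
        # Elapsed time from first to last touch sample.
        "duration": trace[-1][2] - trace[0][2],
    }

# Example: a short diagonal swipe.
print(gesture_metrics([(10, 10, 0.0), (60, 40, 0.2), (120, 90, 0.5)]))
```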

Contents
This paper makes an effort to bring touch-screen devices to the blind in a usable way. The authors discuss some previous work before describing their experiments and the rather predictable data gleaned from them. Using the results, they recommend improvements that could make touch-screen devices more usable for the blind.

Discussion
This paper does a fantastic job of bringing something rarely thought about to the forefront. Blind people are hardly who one thinks of as users of touch-screen phones, and this paper shows that there is no reason they can't use touch-screen devices. Given the recommendations in this paper and some further research, this gap could be bridged quite simply.

Paper Reading #28: Experimental analysis of touch-screen gesture designs in mobile environments

Experimental analysis of touch-screen gesture designs in mobile environments

CHI '11

By:
Andrew Bragdon, Eugene Nelson, Yang Li, and Ken Hinckley.
  • Andrew Bragdon is currently a PhD student at Brown University.
  • Eugene Nelson is currently a PhD student at Brown University.
  • Yang Li is currently a researcher at Google. He also holds a PhD from the Chinese Academy of Sciences.
  • Ken Hinckley is a Principal Researcher at Microsoft Research. He also holds a PhD from the University of Virginia.
Summary
Hypothesis
The researchers are developing an application that uses bezel-initiated, mark-based gestures to allow faster and more accurate actions on touch-screen phones while demanding less of the user's attention.
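
A bezel gesture begins at the physical edge of the screen and slides inward, which lets the system distinguish it from ordinary on-screen touches. A minimal sketch of how such a touch might be flagged (the threshold and names are my assumptions, not the authors' implementation):

```python
BEZEL_BAND_PX = 20  # assumed width of the edge band treated as "bezel"

def starts_on_bezel(x, y, screen_w, screen_h, band=BEZEL_BAND_PX):
    """Return True if a touch-down at (x, y) begins within the edge band,
    so the following stroke can be treated as a bezel gesture rather
    than a direct touch."""
    return (x < band or x > screen_w - band or
            y < band or y > screen_h - band)

# Example: a touch starting 5 px from the left edge of a 1080x1920 screen.
print(starts_on_bezel(5, 800, 1080, 1920))  # True -> bezel gesture
```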

Methods
To test and assess their application, the researchers had 15 participants complete a series of tasks under distractions of varying intensity. Sitting and walking were the two major conditions studied, with distraction levels ranging from none to complete. To gather data properly and run a fair test, the participants were given a pre-test questionnaire along with appropriate instructions for completing their tasks.

Results
Bezel marks were by far the fastest in mean completion time, with soft and hard buttons coming next with a negligible difference between their mean completion times. While bezel marks and soft buttons performed similarly under direct attention, bezel marks far outperformed soft buttons under distraction. Bezel and soft-button paths had a quicker mean time than bezel and hard-button paths.

Contents
This paper spends much of its time looking at the effects of the different button types and their varying levels of effectiveness under different circumstances. The authors concluded that direct touch is the most accurate and the quickest regardless of input form; under distraction, however, hard buttons are the most successful form of input.

Discussion
The authors did a fantastic job of attaining their objective. By the end of the experiment they had a superior understanding of distractions, their effect on user input, and the best type of input to use at varying levels of interaction and distraction. It's a fairly relevant paper and certainly worth further research.

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

Sensing cognitive multitasking for a brain-based adaptive user interface

CHI '11

By:
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert Jacob.

  • Erin Treacy Solovey is currently a Post Doctoral fellow at MIT.
  • Francine Lalooses is currently a PhD student at Tufts University.
  • Krysta Chauncey is currently a Post Doctoral researcher at Tufts University.
  • Douglas Weaver has a doctorate from Tufts University.
  • Margarita Parasi is currently a Masters student at Tufts University.
  • Angelo Sassaroli is currently a research assistant Professor at Tufts University. He also holds a PhD from the University of Electro-Communications.
  • Sergio Fantini is currently a Professor at Tufts University in the department of Biomedical Engineering.
  • Paul Schermerhorn is currently a post doctoral researcher at Tufts University and was previously at Indiana University.
  • Audrey Girouard is currently an assistant Professor at Queen's University and holds a PhD from Tufts University.
  • Robert Jacob is currently a Professor at Tufts University.
Summary
Hypothesis
The researchers aim to build a system that can recognise cognitive multitasking and help humans complete such tasks.

Methods
In the first experiment, participants were asked to interact with a simulated robot on Mars that was collecting and sorting rocks. Data was recorded for three task classifications: delay, dual-task, and branching. In the second experiment, the researchers focused specifically on branching tasks to see whether they could distinguish between random branching and predictive branching. They then repeated the first experiment, this time with only two experimental conditions.

Results
Statistical analysis of the first experiment, involving all variables, showed significant differences in response time between the delay and dual-task conditions and between the delay and branching conditions. There were, however, no strong correlations, and therefore no learning curve was discovered. Statistical analysis of the second experiment found no significant relationships, and no correlations, between any of the variables.
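
For illustration, the kind of pairwise comparison described (e.g. delay vs. dual-task response times) can be run as a paired t-test. This is a generic sketch with invented numbers, not the authors' actual analysis:

```python
from scipy import stats

# Hypothetical per-participant mean response times in seconds;
# these values are invented purely for illustration.
delay_rt = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2]
dual_rt = [2.6, 2.9, 2.5, 3.1, 2.8, 3.0]

# Paired t-test: each participant contributes one value per condition.
t, p = stats.ttest_rel(delay_rt, dual_rt)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 would indicate significance
```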

Contents
The paper describes the objective and then the studies done to gather data toward it. It chronicles the researchers' attempts to assess cognitive multitasking in order to support human-robot interaction, connects the current research to related works, and attempts to expand upon pre-existing research.

Discussion
While the scientists were unable to completely confirm their hypothesis, they did make considerable progress in certain directions, allowing for future research in the area. Their testing and research were exceptionally thorough, so any further work based upon this research will be built upon a solid foundation.

Paper Reading #26: Embodiment in brain-computer interaction

Embodiment in brain-computer interaction

CHI '11

By:
Kenton O'Hara, Abigail Sellen, and Richard Harper.
  • Kenton O'Hara is currently a senior researcher at Microsoft Research.
  • Abigail Sellen is currently a principal researcher at Microsoft Research and holds a PhD in CS from the University of California, San Diego.
  • Richard Harper is currently a principal researcher at Microsoft Research and holds a PhD in CS from Manchester.
Summary
Hypothesis
The authors of this paper study the role of the body in brain-computer interaction, examining how brain activity and full-body behaviour together shape interaction with computers.

Methods
The study used MindFlex, a game that pairs with an EEG headset to measure brain activity and control a fan's speed accordingly: high brain activity results in a greater fan speed, and reduced brain activity results in a lower fan speed. Participants were asked to play this game in a relaxed setting while their game play was recorded. The recordings were then analysed; the bodily reactions (gestures, facial and bodily expressions, audible words), combined with the reaction of the fan, allowed the researchers to gain a better idea of how visible reactions relate to brain activity.
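
Conceptually, the game's control loop is a simple mapping from a brain-activity score to fan power. A toy sketch of that mapping (the score scale and names are my assumptions, not the MindFlex internals):

```python
def fan_speed(activity, min_pwm=0, max_pwm=255):
    """Map a normalised brain-activity score in [0, 1] to a fan PWM level:
    higher sustained activity spins the fan faster, lifting the ball."""
    activity = max(0.0, min(1.0, activity))  # clamp out-of-range readings
    return int(min_pwm + activity * (max_pwm - min_pwm))

# Example: concentrating hard (0.9) vs. relaxing (0.2).
print(fan_speed(0.9), fan_speed(0.2))  # 229 51
```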

Results
Body reactions correlated strongly with the task at hand. Tasks requiring concentration resulted in actions such as hunching over or clenching fists. The researchers also noticed that players gave a certain amount of bodily "instruction" that exceeded what the game required, and examined how that related to performance.

Contents
The paper spends a fair bit of time laying out the need for a superior understanding of the human mind and the body's role in affecting and supporting it. It then describes the testing phase, followed by the results of that phase, the relationships discovered by the researchers, and their implications.

Discussion
A rather dull paper to read. While I certainly appreciate the attempt to take computing to a new level, and the logical leap from here to ubiquitous computing, it was still a pain to read. A highly intriguing topic, just poorly and dully written.