IUI '11
By:
Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen.
- Kate Ehrlich is a Senior Technical Staff Member at IBM.
- Susanna Kirk holds an MS in Human Factors in Information Design.
- John Patterson is a Distinguished Engineer at IBM.
- Jamie Rasmussen is a member of the same team as John and Kate at IBM.
- Steven Ross is a member of the same team as Jamie, John and Kate at IBM.
- Daniel Gruen is currently working on the Unified Activity Management project at IBM.
Summary
Hypothesis
The researchers aim to determine how accurately people currently take advice from intelligent systems and to identify ways of improving that interaction.
Methods
Using software called NIMBLE, the researchers gathered data on how a network analyst's performance was affected by following the advice of an intelligent system.
Results
Users' performance improved slightly when the system gave a correct recommendation, and when no correct recommendation or justification was available, the system kept quiet. All of this was somewhat irrelevant, however, because most users ignored the recommendations and instead relied on their own knowledge. The researchers also noticed that most users followed a recommendation more quickly when it was closer to what they were already inclined toward.
Contents
The authors designed a study to test the accuracy of intelligent system recommendations and their effect on human decision-making. By comparing the gathered values with baseline results, the researchers were able to weigh the benefit of correct recommendations against the harm caused by incorrect ones.
Discussion
While intriguing, I feel the topic of this paper isn't likely to progress very much. As the researchers observed, most users are reluctant to base actions in their own area of expertise on a computer's recommendations.