Sandra D. Starke and Chris Baber, University of Birmingham
Many aspects of everyday life rely on people correctly interpreting visual information in complex scenarios: doctors working with medical images, air traffic controllers monitoring multiple screens, or analysts searching for patterns across different information sources. These tasks can require significant cognitive effort, with constraints arising from limitations in, for example, short-term memory, logical reasoning, expectation-driven biases and, importantly, the human visual system: at any given moment, the only area seen in full resolution is about the size of a thumbnail held at arm’s length. We therefore have to move our eyes to build up a representation of what is in front of us, and in dynamic tasks the scene may change while we do so. The perceived ‘reality’ we are left with is hence a jumble of actual visual input, gap-filling by the brain, expectation and projection, outdated memories and ignored information. Unsurprisingly, people make mistakes.
To combat the limitations of human information processing, computer systems have long been integrated into human decision making, with many success stories to report. However, the interaction of humans with such systems is less well understood and can suffer from a range of issues when it comes to uptake. While such interaction can be difficult to quantify, eye tracking permits real-time recording of where someone is looking, with the potential to alert operators to neglected information. This could, for example, prompt operators to look at information that might be important but that had not yet been visually attended.
We designed a pilot study to investigate how people might integrate a gaze-based recommender system into their workflow when trying to find patterns in a complex scenario. The task was to classify credit card transactions as legitimate or fraudulent based on nine information sources with varying predictive power regarding the true status of a transaction. For each transaction, participants followed a multi-step workflow: they first made their own independent assessment; the computer then recommended the three most important sources and highlighted any of those three that the participant had not looked at. Feedback on the correctness of the decision was given for each trial.
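The highlighting step described above can be sketched in a few lines of code: given the participant’s recorded fixation points and the on-screen regions of the nine information sources, flag any recommended source whose region received no fixation. The source names, coordinates and data structures below are illustrative assumptions, not details from the study:

```python
# Minimal sketch of gaze-based highlighting, assuming fixations arrive as
# (x, y) screen coordinates and each information source occupies a known
# rectangular region (area of interest). All names/values are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        """True if the point (px, py) falls inside this region."""
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def unattended_sources(recommended, regions, fixations):
    """Return the recommended source names that received no fixation."""
    looked_at = {r.name
                 for r in regions
                 for (px, py) in fixations
                 if r.contains(px, py)}
    return [name for name in recommended if name not in looked_at]

# Illustrative layout of three of the nine sources and two recorded fixations.
regions = [
    Region("amount",   0,   0, 100, 50),
    Region("merchant", 0,  60, 100, 50),
    Region("location", 0, 120, 100, 50),
]
fixations = [(10, 20), (30, 70)]  # fell on "amount" and "merchant"

print(unattended_sources(["amount", "merchant", "location"], regions, fixations))
# → ['location'], i.e. the source the system would highlight
```

In a live system the fixation list would stream from the eye tracker and the check would run continuously, but the underlying test, a point-in-region lookup against the recommended sources, stays the same.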