The identification of specific objects in highly dynamic environments is a significant data analysis challenge inherent to current mobile eye-tracking technologies. A novel algorithmic approach was developed to annotate mobile eye-tracking data, incorporating human insight through a semi-automatic decision-making process. Combining a computer's ability to quickly process large quantities of data with a human's ability to interpret and make decisions about complex images and situations yields an analysis process that is more robust and less resource intensive.
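To illustrate how such a semi-automatic decision-making process might be structured (this is a minimal sketch under assumed function names and an assumed confidence threshold, not the published implementation), a fixation image can be labeled automatically when its best match against reference object templates is confident, and queued for a human annotator otherwise:

    # Illustrative human-in-the-loop annotation loop; the score function,
    # threshold, and data structures are assumptions for this sketch.
    def annotate_fixations(fixation_crops, templates, score_fn, threshold=0.8):
        """Label each fixation crop automatically when confident, else defer to a human."""
        auto_labels, needs_review = {}, []
        for frame_id, crop in fixation_crops.items():
            # Score the crop against every reference object template.
            scores = {name: score_fn(crop, tmpl) for name, tmpl in templates.items()}
            best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
            if best_score >= threshold:
                auto_labels[frame_id] = best_label      # computer decides
            else:
                needs_review.append(frame_id)           # human decides later
        return auto_labels, needs_review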
The prototype was developed and validated outside a controlled laboratory setting in two scenarios: one involving a participant looking at three different objects from various angles, and the other involving a physician performing an intubation on a mannequin. Both scenarios were tested using two different comparison techniques, spatial histogram and flow. Overall accuracy was 82.3% compared to manually annotated ground-truth data.
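A spatial-histogram comparison could look roughly like the sketch below: each image is divided into a grid of cells, a color histogram is computed per cell, and the concatenated histograms are correlated so that spatial layout as well as color is compared. The grid size, bin count, and use of OpenCV here are assumptions for illustration, not the exact published technique.

    # Rough sketch of a spatial-histogram similarity measure (assumed approach).
    # Requires OpenCV (cv2) and NumPy.
    import cv2
    import numpy as np

    def spatial_histogram(img, grid=(4, 4), bins=8):
        """Concatenate per-cell color histograms so spatial layout is preserved."""
        h, w = img.shape[:2]
        cell_h, cell_w = h // grid[0], w // grid[1]
        feats = []
        for r in range(grid[0]):
            for c in range(grid[1]):
                cell = img[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
                hist = cv2.calcHist([cell], [0, 1, 2], None,
                                    [bins] * 3, [0, 256] * 3)
                feats.append(cv2.normalize(hist, hist).flatten())
        return np.concatenate(feats)

    def similarity(img_a, img_b):
        """Correlation between spatial histograms; 1.0 indicates identical color layout."""
        return cv2.compareHist(spatial_histogram(img_a).astype(np.float32),
                               spatial_histogram(img_b).astype(np.float32),
                               cv2.HISTCMP_CORREL)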
“Making Sense of Mobile Eye-Tracking Data in the Real-World: A Human-in-the-Loop Analysis Approach” was accepted and presented at the Human Factors and Ergonomics Society International Annual Meeting, September 19-23, 2016, Washington, D.C.