Novel Human-in-the-Loop approach to annotate Mobile Eye Tracking data

What is it? What does it do?

Identifying specific objects in highly dynamic environments is a significant data-analysis challenge inherent to current mobile eye-tracking technologies. A novel algorithmic approach has been developed that annotates mobile eye-tracking data by incorporating human insight into a semi-automatic decision-making process. Combining a computer's ability to quickly process large quantities of data with a human's ability to interpret and make decisions about complex images and situations yields a more robust analysis process that is less resource intensive.
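
The sketch below illustrates the general idea of such a semi-automatic loop, not the inventors' specific implementation: frames whose automatic match falls below a confidence threshold are deferred to a human, and the human's label is then reused as an exemplar for later frames. The names and threshold (score_against_exemplars, CONFIDENCE_THRESHOLD, ask_human) are illustrative assumptions.

    # A minimal, self-contained sketch of a semi-automatic decision loop,
    # assuming a per-frame similarity score in [0, 1] against human-labelled
    # exemplars. score_against_exemplars, CONFIDENCE_THRESHOLD and ask_human
    # are illustrative names, not part of the published method.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off below which a human decides


    @dataclass
    class Annotation:
        frame_index: int
        label: str
        source: str  # "auto" or "human"


    def score_against_exemplars(frame_features, exemplars):
        """Return (best_label, similarity) for a frame's gaze-region features
        compared against the human-labelled exemplars collected so far."""
        best_label, best_score = "unknown", 0.0
        for label, ref in exemplars.items():
            # Toy similarity: 1 minus the mean absolute feature difference.
            diff = sum(abs(a - b) for a, b in zip(frame_features, ref)) / len(ref)
            score = max(0.0, 1.0 - diff)
            if score > best_score:
                best_label, best_score = label, score
        return best_label, best_score


    def annotate(frames, ask_human):
        """Annotate frames automatically, deferring uncertain ones to a human.

        frames is a list of per-frame feature vectors; ask_human is a callback
        returning the object label for a given frame index. Human answers are
        kept as exemplars so later frames can be matched automatically."""
        exemplars = {}      # label -> representative feature vector
        annotations = []
        for i, feats in enumerate(frames):
            label, score = score_against_exemplars(feats, exemplars)
            if score >= CONFIDENCE_THRESHOLD:
                annotations.append(Annotation(i, label, "auto"))
            else:
                label = ask_human(i)        # human-in-the-loop decision
                exemplars[label] = feats    # reuse the decision downstream
                annotations.append(Annotation(i, label, "human"))
        return annotations


    if __name__ == "__main__":
        # Tiny synthetic example: three "frames" described by 3-value features.
        frames = [[0.10, 0.20, 0.90], [0.12, 0.18, 0.88], [0.90, 0.80, 0.10]]
        for ann in annotate(frames, ask_human=lambda i: f"object-{i}"):
            print(ann)

In this arrangement the human is only consulted for frames the computer cannot resolve confidently, which is where the reduction in annotation effort comes from.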

Why is it better?

  • Blended approach improves accuracy and reduces the time spent on object identification and annotation in mobile eye-tracking studies
  • Method can be implemented without any training data or predefined models of the environment
  • Platform agnostic and can be used with various software systems and eye-tracker devices

What is its current status?

The prototype was developed and validated in two scenarios outside a controlled laboratory setting: one involving a participant looking at three different objects from various angles, and the other involving a physician performing an intubation on a mannequin, tested using two different comparison techniques, spatial histogram and flow. Overall accuracy was 82.3% compared to manually annotated ground-truth data.
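
As a rough illustration of one of the two comparison techniques named above, the sketch below compares spatial colour histograms of gaze-region crops using OpenCV. The grid size, bin count, and similarity metric are assumptions for demonstration, not parameters taken from the study.

    # A rough sketch of a spatial-histogram comparison between gaze-region
    # crops, using OpenCV. Grid size, bin count and the similarity metric are
    # assumptions for demonstration, not parameters from the study.
    import cv2
    import numpy as np


    def spatial_histogram(crop, grid=(2, 2), bins=8):
        """Concatenate per-cell colour histograms so the descriptor retains
        some spatial layout rather than only overall colour."""
        h, w = crop.shape[:2]
        cell_h, cell_w = h // grid[0], w // grid[1]
        features = []
        for r in range(grid[0]):
            for c in range(grid[1]):
                cell = crop[r * cell_h:(r + 1) * cell_h,
                            c * cell_w:(c + 1) * cell_w]
                hist = cv2.calcHist([cell], [0, 1, 2], None,
                                    [bins, bins, bins],
                                    [0, 256, 0, 256, 0, 256])
                cv2.normalize(hist, hist)
                features.append(hist.flatten())
        return np.concatenate(features).astype(np.float32)


    def similarity(crop_a, crop_b):
        """Correlation between the spatial-histogram descriptors of two crops."""
        return cv2.compareHist(spatial_histogram(crop_a),
                               spatial_histogram(crop_b),
                               cv2.HISTCMP_CORREL)


    if __name__ == "__main__":
        # Stand-in gaze-region crops; in practice these would be patches cut
        # around the recorded gaze point in consecutive scene-video frames.
        rng = np.random.default_rng(0)
        crop_a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
        noise = rng.integers(-10, 10, crop_a.shape)
        crop_b = np.clip(crop_a.astype(int) + noise, 0, 255).astype(np.uint8)
        print(f"similarity: {similarity(crop_a, crop_b):.3f}")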

