Show simple item record

Using Eye-tracking Data to Predict Situation Awareness in Real Time during Takeover Transitions in Conditionally Automated Driving

dc.contributor.author: Zhou, Feng
dc.contributor.author: Yang, X. Jessie
dc.contributor.author: de Winter, Joost
dc.date.accessioned: 2021-03-27T02:45:21Z
dc.date.available: 2021-03-27T02:45:21Z
dc.date.issued: 2021-03-26
dc.identifier.uri: https://hdl.handle.net/2027.42/167003
dc.description.abstract: Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies have measured SA during or after the driving task, few have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand which factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, and further improved the performance of LightGBM through feature selection. SA was standardized between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles relative to the ego-vehicle) obtained when 33 participants recreated simulated driving scenarios after viewing 32 videos of six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, with a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a correlation coefficient of 0.719 between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model has important implications for how to monitor and predict SA in real time in automated driving using eye-tracking data.
dc.language.iso: en_US
dc.publisher: IEEE
dc.rights: CC0 1.0 Universal
dc.rights.uri: http://creativecommons.org/publicdomain/zero/1.0/
dc.subject: Real-time situation awareness prediction, takeover, automated driving, eye-tracking measures, explainability
dc.title: Using Eye-tracking Data to Predict Situation Awareness in Real Time during Takeover Transitions in Conditionally Automated Driving
dc.type: Article
dc.subject.hlbsecondlevel: Industrial and Operations Engineering
dc.subject.hlbtoplevel: Engineering
dc.description.peerreviewed: Peer Reviewed
dc.contributor.affiliationum: University of Michigan, Ann Arbor
dc.contributor.affiliationum: University of Michigan, Dearborn
dc.contributor.affiliationother: Delft University of Technology
dc.contributor.affiliationumcampus: Dearborn
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/167003/1/hkwggmgngbsqcmmqkbffywbrjtcmhhxx.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/799
dc.identifier.source: IEEE Transactions on Intelligent Transportation Systems
dc.identifier.orcid: 0000-0001-6123-073X
dc.description.filedescription: Description of hkwggmgngbsqcmmqkbffywbrjtcmhhxx.pdf : Main article
dc.description.depositor: SELF
dc.identifier.name-orcid: Zhou, Feng; 0000-0001-6123-073X
dc.working.doi: 10.7302/799
dc.owningcollname: Industrial and Manufacturing Systems Engineering (IMSE, UM-Dearborn)
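
The abstract above describes training a LightGBM regressor on eye-tracking features to predict SA (standardized to [0, 1]) and explaining the model with SHAP values for feature ranking and selection. Below is a minimal sketch of that kind of pipeline, using synthetic placeholder features and labels rather than the study's data; feature names, data shapes, and hyperparameters are assumptions for illustration only (the authors' published code is at https://github.com/refengchou/Situation-awareness-prediction).

import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-ins for eye-tracking features (e.g., fixation duration,
# saccade amplitude, pupil diameter) and SA labels standardized to [0, 1].
# Real data would come from the takeover-transition experiment, not from a RNG.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = rng.random(500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a LightGBM regressor to predict SA from the features.
model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with the same metrics reported in the abstract (RMSE, MAE,
# correlation); values here are meaningless because the data are random.
pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
r = np.corrcoef(y_test, pred)[0, 1]
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  r={r:.3f}")

# SHAP values attribute each prediction to individual features; the mean
# absolute SHAP value per feature gives a ranking that can drive feature
# selection, as described in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = np.abs(shap_values).mean(axis=0)
print("Feature ranking (most to least important):", np.argsort(importance)[::-1])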


