
An interpretable neural network for outcome prediction in traumatic brain injury

dc.contributor.author: Minoccheri, Cristian
dc.contributor.author: Williamson, Craig A.
dc.contributor.author: Hemmila, Mark
dc.contributor.author: Ward, Kevin
dc.contributor.author: Stein, Erica B.
dc.contributor.author: Gryak, Jonathan
dc.contributor.author: Najarian, Kayvan
dc.date.accessioned: 2022-08-03T07:43:58Z
dc.date.available: 2022-08-03T07:43:58Z
dc.date.issued: 2022-08-01
dc.identifier.citation: BMC Medical Informatics and Decision Making. 2022 Aug 01;22(1):203
dc.identifier.uri: https://doi.org/10.1186/s12911-022-01953-z
dc.identifier.uri: https://hdl.handle.net/2027.42/173158
dc.description.abstract:
Background: Traumatic Brain Injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have previously been used to help physicians predict the long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are “black box”: it is difficult for humans to understand the decisions made by the model, and post-hoc explanations identify only isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization.
Methods: In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months from hospitalization in TBI patients, based on information available at the time of admission.
Results: The proposed method is compared to established machine learning methods (XGBoost, Random Forest, and SVM), achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC): 0.799 for the proposed method vs. 0.810 for the best black-box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model’s predictions and can be used as general guidelines by clinicians to inform treatment decisions.
Conclusions: The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes, and they can be applied in situations when not all of the factors needed by the full model are known.
dc.title: An interpretable neural network for outcome prediction in traumatic brain injury
dc.type: Journal Article
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/173158/1/12911_2022_Article_1953.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/4889
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dc.date.updated: 2022-08-03T07:43:56Z
dc.owningcollname: Interdisciplinary and Peer-Reviewed
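
Note: The record above describes the tropical-geometry network only at the level of the abstract; the authors' actual architecture and trained weights are not given here. As a rough, hypothetical sketch of the general idea, a max-plus (tropical) layer computes, for each output unit, the maximum of additively weighted inputs, and a tropical rational map is the difference of two such layers. All class names, weights, and inputs below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    class TropicalLayer:
        """Minimal max-plus (tropical) layer: output i is
        max_j (x[j] + W[i, j]), i.e., a tropical matrix-vector product
        where ordinary addition plays the role of multiplication and
        max plays the role of addition. Weights are random placeholders."""

        def __init__(self, n_in, n_out, rng=None):
            rng = rng or np.random.default_rng(0)
            self.W = rng.normal(size=(n_out, n_in))  # additive weights

        def forward(self, x):
            # Broadcast x across rows of W, then take the max over inputs.
            return np.max(self.W + x[np.newaxis, :], axis=1)

    # A tropical rational map f(x) = p(x) - q(x): the difference of two
    # max-plus layers. Such maps are piecewise linear.
    p = TropicalLayer(n_in=4, n_out=3)
    q = TropicalLayer(n_in=4, n_out=3)
    x = np.array([0.5, -1.0, 2.0, 0.0])  # hypothetical admission features
    score = p.forward(x) - q.forward(x)
    print(score)

Because each output of such a map is a difference of maxima of affine terms, the terms that "win" the max at a given input single out a specific combination of input variables; piecewise-linear structure of this kind is what makes the rule extraction described in the abstract possible.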


