
Alternative Markers of Performance in Simulation: Where We Are and Where We Need To Go

dc.contributor.authorWillemsen-Dunlap, Ann M.
dc.contributor.authorBinstadt, Emily S.
dc.contributor.authorNguyen, Michael C.
dc.contributor.authorElliott, Nicole C.
dc.contributor.authorCheney, Alan R.
dc.contributor.authorStevens, Ronald H.
dc.contributor.authorDooley-Hash, Suzanne
dc.date.accessioned2018-03-07T18:25:48Z
dc.date.available2019-04-01T15:01:10Zen
dc.date.issued2018-02
dc.identifier.citationWillemsen-Dunlap, Ann M.; Binstadt, Emily S.; Nguyen, Michael C.; Elliott, Nicole C.; Cheney, Alan R.; Stevens, Ronald H.; Dooley-Hash, Suzanne (2018). "Alternative Markers of Performance in Simulation: Where We Are and Where We Need To Go." Academic Emergency Medicine 25(2): 250-254.
dc.identifier.issn1069-6563
dc.identifier.issn1553-2712
dc.identifier.urihttps://hdl.handle.net/2027.42/142535
dc.description.abstractThis article on alternative markers of performance in simulation is the product of a session held during the 2017 Academic Emergency Medicine Consensus Conference "Catalyzing System Change Through Health Care Simulation: Systems, Competency, and Outcomes." There is a dearth of research on the use of performance markers other than checklists, holistic ratings, and behaviorally anchored rating scales in the simulation environment. Through literature review, group discussion, and consultation with experts prior to the conference, the working group defined five topics for discussion: 1) establishing a working definition for alternative markers of performance, 2) defining goals for using alternative performance markers, 3) implications for measurement when using alternative markers, 4) identifying practical concerns related to the use of alternative performance markers, and 5) identifying potential for alternative markers of performance to validate simulation scenarios. Five research propositions also emerged and are summarized.
dc.publisherSpringer International Publishing
dc.publisherWiley Periodicals, Inc.
dc.titleAlternative Markers of Performance in Simulation: Where We Are and Where We Need To Go
dc.typeArticleen_US
dc.rights.robotsIndexNoFollow
dc.subject.hlbsecondlevelMedicine (General)
dc.subject.hlbtoplevelHealth Sciences
dc.description.peerreviewedPeer Reviewed
dc.description.bitstreamurlhttps://deepblue.lib.umich.edu/bitstream/2027.42/142535/1/acem13321_am.pdf
dc.description.bitstreamurlhttps://deepblue.lib.umich.edu/bitstream/2027.42/142535/2/acem13321.pdf
dc.identifier.doi10.1111/acem.13321
dc.identifier.sourceAcademic Emergency Medicine
dc.identifier.citedreferenceStevens R, Galloway T, Halpin D, Willemsen-Dunlap A. Healthcare teams neurodynamically reorganize when resolving uncertainty. Entropy 2016; 18: 427.
dc.identifier.citedreferenceMarriage B, Kinnear J. Assessing team performance – markers and methods. Trends Anaesth Crit Care 2016; 7-8: 11-6.
dc.identifier.citedreferenceTognoli E, Kelso JA. The coordination dynamics of social neuromarkers. Front Hum Neurosci 2015; 9: 563.
dc.identifier.citedreferenceJackson A, Marks LB, Bentzen SM, et al. The lessons of QUANTEC: recommendations for reporting and gathering data on dose-volume dependencies of treatment outcome. Int J Radiat Oncol Biol Phys 2010; 76: S155-60.
dc.identifier.citedreferenceGorman JC, Martin MJ, Dunbar TA, et al. Cross-level effects between neurophysiology and communication during team training. Hum Factors 2015; 58: 181-99.
dc.identifier.citedreferenceStevens RH, Galloway T. Are neurodynamic organizations a fundamental property of teamwork? Front Psychol 2017; 8: 644.
dc.identifier.citedreferenceStevens R, Galloway T, Lamb J, Steed R, Lamb C. Linking team neurodynamic organizations with observational ratings of team performance. In: Innovative Assessment of Collaboration. Cham: Springer International Publishing; 2017. p. 315-30.
dc.identifier.citedreferenceErk S, Kleczar A, Walter H. Valence-specific regulation effects in a working memory task with emotional context. Neuroimage 2007; 37: 623-32.
dc.identifier.citedreferenceHoward SJ, Burianová H, Ehrich J, et al. Behavioral and fMRI evidence of the differing cognitive load of domain-specific assessments. Neuroscience 2015; 297: 38-46.
dc.identifier.citedreferenceDan M, Saha A, Konar A, Ralescu AL, Nagar AK. A type-2 fuzzy approach towards cognitive load detection using fNIRS signals. 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). Piscataway (NJ): IEEE; 2016. p. 2508-15.
dc.identifier.citedreferenceFishburn FA, Norr ME, Medvedev AV, Vaidya CJ. Sensitivity of fNIRS to cognitive state and load. Front Hum Neurosci 2014; 8: 76.
dc.identifier.citedreferenceGuastello SJ, Reiter K, Malon M, et al. Cognitive workload and fatigue in a vigilance dual task: miss errors, false alarms, and the impact of wearing biometric sensors while working. Nonlinear Dynamics Psychol Life Sci 2016; 20: 509-35.
dc.identifier.citedreferenceCook DA. Technology-enhanced simulation to assess health professionals. Acad Med 2013; 88: 872-83.
dc.identifier.citedreferenceMessick S. Validity. In: Linn RL, editor. Educational Measurement. The American Council on Education/Macmillan Series on Higher Education. New York: Macmillan, 1989: 13-103.
dc.identifier.citedreferenceKane M. An argument-based approach to validity. Psychol Bull 1992; 112: 527-35.
dc.identifier.citedreferenceSalas E, Stevens R, Gorman J, Cooke NJ, Guastello SJ, von Davier A. What will quantitative measures of teamwork look like in 10 years? Proc Hum Factors Ergon Soc Annu Meet 2015; 59: 235-9.
dc.identifier.citedreferenceGalster SM, Johnson EM. Sense-assess-augment: a taxonomy for human effectiveness. (Report AFRL-RH-WP-TM-2013-0002). Wright-Patterson Air Force Base (OH): US Air Force Research Laboratory; 2013.
dc.identifier.citedreferenceMicrosoft Solution Providers. Microsoft Data Mining. Amsterdam: Elsevier, 2001: 255-87.
dc.owningcollnameInterdisciplinary and Peer-Reviewed


