
Clinical performance comparators in audit and feedback: a review of theory and evidence

dc.contributor.author: Gude, Wouter T
dc.contributor.author: Brown, Benjamin
dc.contributor.author: van der Veer, Sabine N
dc.contributor.author: Colquhoun, Heather L
dc.contributor.author: Ivers, Noah M
dc.contributor.author: Brehaut, Jamie C
dc.contributor.author: Landis-Lewis, Zach
dc.contributor.author: Armitage, Christopher J
dc.contributor.author: de Keizer, Nicolette F
dc.contributor.author: Peek, Niels
dc.date.accessioned: 2019-04-28T03:35:20Z
dc.date.available: 2019-04-28T03:35:20Z
dc.date.issued: 2019-04-24
dc.identifier.citation: Implementation Science. 2019 Apr 24;14(1):39
dc.identifier.uri: https://doi.org/10.1186/s13012-019-0887-1
dc.identifier.uri: https://hdl.handle.net/2027.42/148826
dc.description.abstract: Background: Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator. Methods: We described current choices for performance comparators by conducting a secondary review of randomised trials of A&F interventions and identifying the associated mechanisms that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis. Results: We found across 146 trials that feedback recipients’ performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients’ own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. In studies featuring benchmarks, 42% compared against mean performance. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies. Conclusion: Clinical performance comparators in the published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies. Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information upon request to balance the feedback’s credibility and actionability, (3) providing performance trends but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.
dc.title: Clinical performance comparators in audit and feedback: a review of theory and evidence
dc.type: Article en_US
dc.description.bitstreamurl: https://deepblue.lib.umich.edu/bitstream/2027.42/148826/1/13012_2019_Article_887.pdf
dc.language.rfc3066: en
dc.rights.holder: The Author(s).
dc.date.updated: 2019-04-28T03:35:21Z
dc.owningcollname: Interdisciplinary and Peer-Reviewed



