Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts
Jordan, Jaime; Hopson, Laura R.; Molins, Caroline; Bentley, Suzanne K.; Deiorio, Nicole M.; Santen, Sally A.; Yarris, Lalena M.; Coates, Wendy C.; Gisondi, Michael A.
2021-08
Citation
Jordan, Jaime; Hopson, Laura R.; Molins, Caroline; Bentley, Suzanne K.; Deiorio, Nicole M.; Santen, Sally A.; Yarris, Lalena M.; Coates, Wendy C.; Gisondi, Michael A. (2021). "Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts." AEM Education and Training (4): n/a-n/a.
Abstract
Background
Research abstracts are submitted for presentation at scientific conferences; however, criteria for judging abstracts are variable. We sought to develop two rigorous abstract scoring rubrics for education research submissions reporting (1) quantitative data and (2) qualitative data and then to collect validity evidence to support score interpretation.

Methods
We used a modified Delphi method to achieve expert consensus for scoring rubric items to optimize content validity. Eight education research experts participated in two separate modified Delphi processes, one to generate quantitative research items and one to generate qualitative research items. Modifications were made between rounds based on item scores and expert feedback. Homogeneity of ratings in the Delphi process was calculated using Cronbach's alpha, with increasing homogeneity considered an indication of consensus. Rubrics were piloted by scoring abstracts from 22 quantitative publications from AEM Education and Training "Critical Appraisal of Emergency Medicine Education Research" (11 highlighted for excellent methodology and 11 that were not) and 10 qualitative publications (five highlighted for excellent methodology and five that were not). Intraclass correlation coefficient (ICC) estimates of reliability were calculated.

Results
Each rubric required three rounds of a modified Delphi process. The resulting quantitative rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Cronbach's α for the third round = 0.922, ICC for total scores during piloting = 0.893). The resulting qualitative rubric contained seven items: quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Cronbach's α for the third round = 0.913, ICC for total scores during piloting = 0.788).

Conclusion
We developed scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts to aid in selection for presentation at scientific meetings. Our tools demonstrated high reliability.

Publisher
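The reliability statistics reported above can be illustrated with a short sketch. The snippet below computes Cronbach's alpha for a subjects-by-raters score matrix using the standard formula, α = k/(k−1) · (1 − Σ item variances / total-score variance). This is a minimal illustration, not the authors' analysis code; the example score matrix is hypothetical.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for an (n_subjects, k_raters) score matrix.

    Each column holds one rater's scores; each row is one abstract.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of raters/items
    item_vars = scores.var(axis=0, ddof=1)       # per-rater score variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 3 abstracts scored by 2 raters in perfect agreement
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # perfect agreement -> 1.0
```

Higher alpha (the study reports 0.922 and 0.913 in the final Delphi rounds) indicates more homogeneous ratings across experts; the ICC estimates reported for piloting would typically be computed with a two-way model (e.g., via a dedicated statistics package) rather than this formula.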
American Council on Education and Macmillan Wiley Periodicals, Inc.
ISSN
2472-5390
Types
Article