
Numerical Likelihood Estimates from Physicians and Linear Models (Probability Scoring Rates, Calibration, Coronary Artery Disease, Verification Bias).

dc.contributor.authorLevi, Keith Randell
dc.date.accessioned2020-09-09T02:11:00Z
dc.date.available2020-09-09T02:11:00Z
dc.date.issued1985
dc.identifier.urihttps://hdl.handle.net/2027.42/160824
dc.description.abstractThere are many potential benefits to be gained from using explicit numerical estimates of likelihood as opposed to qualitative verbal expressions of uncertainty. Medical diagnosis is one domain in which this issue is obviously important. This study investigated whether physicians are capable of giving meaningful and accurate numerical likelihood estimates. I considered two questions in regard to the issue of meaningfulness. First, could physicians discriminate enough categories of relative likelihood to justify the use of numerical responses? Second, were the reported numbers well calibrated? That is, given the physicians' reported numerical estimates, did the expected proportion of patients turn out to have disease? Accuracy was measured by the mean probability score (PS). Physicians' accuracy was compared with a simple linear model. Finally, I also examined skills contributing to forecast accuracy. The subjects were nine nuclear-medicine physicians. The stimuli were 220 case histories of patients who had undergone a nuclear-medicine diagnostic test for coronary artery disease (CAD) as well as coronary angiography. Each physician read all 220 cases. For each case, physicians gave three- and six-category likelihood ratings, and a numerical response. Coronary angiography determined whether a patient actually had significant CAD. Physicians showed reliable discriminations in going from a three- to a six-category likelihood rating. Calibration was generally quite good. However, there was a consistent trend to under-predict CAD at low likelihoods and over-predict it at high likelihoods. Under-prediction of CAD was surprising because a false negative error was considered much worse than a false positive. There were no significant accuracy differences between physicians. However, the linear model was significantly more accurate than the physicians.
The lens-model analyses implied that physicians' failure to outperform the model was due mainly to their lack of proper nonlinear use of cues, and also to their less-than-perfect reliability and less-than-optimal weighting of cues. I concluded that physicians' reliable discrimination of relative likelihoods and generally good calibration showed that they could give meaningful numerical likelihood estimates. In addition, their estimates should be improved by using predictive models as diagnostic aids.
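The mean probability score used in the abstract is the average squared difference between a forecast probability and the observed outcome (the Brier score), and calibration compares forecast probabilities with observed disease rates within each probability band. A minimal illustrative sketch of both measures; the function names and binning scheme are my own, not taken from the dissertation:

```python
# Illustrative sketch (not from the dissertation itself).
# Mean probability score (Brier score): average of squared differences
# between forecast probabilities and binary outcomes (1 = CAD present).
def mean_probability_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Calibration check: group forecasts into equal-width probability bins and
# compare each bin's mean forecast with its observed disease rate.
def calibration_table(forecasts, outcomes, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for f, o in zip(forecasts, outcomes):
        idx = min(int(f * n_bins), n_bins - 1)  # clamp f == 1.0 into last bin
        bins[idx].append((f, o))
    table = []
    for b in bins:
        if b:
            mean_forecast = sum(f for f, _ in b) / len(b)
            observed_rate = sum(o for _, o in b) / len(b)
            table.append((round(mean_forecast, 2), round(observed_rate, 2), len(b)))
    return table
```

A well-calibrated forecaster produces bins whose mean forecast roughly matches the observed rate; the under/over-prediction trend the abstract reports would show up as observed rates above the mean forecast in the low bins and below it in the high bins.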
dc.format.extent128 p.
dc.languageEnglish
dc.titleNumerical Likelihood Estimates from Physicians and Linear Models (Probability Scoring Rates, Calibration, Coronary Artery Disease, Verification Bias).
dc.typeThesis
dc.description.thesisdegreenamePhDen_US
dc.description.thesisdegreedisciplineExperimental psychology
dc.description.thesisdegreegrantorUniversity of Michigan
dc.subject.hlbtoplevelSocial Sciences
dc.contributor.affiliationumcampusAnn Arbor
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/160824/1/8600486.pdfen_US
dc.owningcollnameDissertations and Theses (Ph.D. and Master's)


