Bayesian inference of dependent kappa for binary ratings
dc.contributor.author | Sen, Ananda | |
dc.contributor.author | Li, Pin | |
dc.contributor.author | Ye, Wen | |
dc.contributor.author | Franzblau, Alfred | |
dc.date.accessioned | 2021-11-02T00:48:19Z | |
dc.date.available | 2022-12-01 20:48:17 | en |
dc.date.available | 2021-11-02T00:48:19Z | |
dc.date.issued | 2021-11-20 | |
dc.identifier.citation | Sen, Ananda; Li, Pin; Ye, Wen; Franzblau, Alfred (2021). "Bayesian inference of dependent kappa for binary ratings." Statistics in Medicine 40(26): 5947-5960. | |
dc.identifier.issn | 0277-6715 | |
dc.identifier.issn | 1097-0258 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/170898 | |
dc.publisher | John Wiley & Sons | |
dc.subject.other | covariate adjustment | |
dc.subject.other | test of homogeneity | |
dc.subject.other | Bayesian inference | |
dc.subject.other | correlated kappa | |
dc.subject.other | grouped data | |
dc.title | Bayesian inference of dependent kappa for binary ratings | |
dc.type | Article | |
dc.rights.robots | IndexNoFollow | |
dc.subject.hlbsecondlevel | Medicine (General) | |
dc.subject.hlbsecondlevel | Statistics and Numeric Data | |
dc.subject.hlbsecondlevel | Public Health | |
dc.subject.hlbtoplevel | Health Sciences | |
dc.subject.hlbtoplevel | Science | |
dc.subject.hlbtoplevel | Social Sciences | |
dc.description.peerreviewed | Peer Reviewed | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/170898/1/sim9165.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/170898/2/sim9165_am.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/170898/3/SIM_9165-sup-0001-supinfo.pdf | |
dc.identifier.doi | 10.1002/sim.9165 | |
dc.identifier.source | Statistics in Medicine | |
dc.identifier.citedreference | Sen A, Lee SY, Gillespie BW, et al. Comparing film and digital radiographs for reliability of pneumoconiosis classifications: a modeling approach. Acad Radiol. 2010;17(4):511-519. | |
dc.identifier.citedreference | Kang C, Qaqish B, Monaco J, Sheridan SL, Cai J. Kappa statistic for clustered dichotomous responses from physicians and patients. Stat Med. 2013;32(21):3700-3719. | |
dc.identifier.citedreference | Yang Z, Zhou M. Weighted kappa statistic for clustered matched-pair ordinal data. Comput Stat Data Anal. 2015;82:1-18. | |
dc.identifier.citedreference | Nelson KP, Edwards D. Improving the reliability of diagnostic tests in population-based agreement studies. Stat Med. 2010;29(6):617-626. | |
dc.identifier.citedreference | Ma Y, Tang W, Feng C, Tu XM. Inference for kappas for longitudinal study data: applications to sexual health research. Biometrics. 2008;64(3):781-789. | |
dc.identifier.citedreference | Fleiss JL, Cohen J, Everitt BS. Large sample standard errors of kappa and weighted kappa. Psychol Bull. 1969;72(5):323. | |
dc.identifier.citedreference | Williamson JM, Lipsitz SR, Manatunga AK. Modeling kappa for measuring dependent categorical agreement data. Biostatistics. 2000;1(2):191-202. | |
dc.identifier.citedreference | International Labour Office. Guidelines for the use of the ILO international classification of radiographs of pneumoconioses, revised edition 2000; 2002. | |
dc.identifier.citedreference | Mulloy KB, Coultas DB, Samet JM. Use of chest radiographs in epidemiological investigations of pneumoconioses. Occup Environ Med. 1993;50(3):273-275. | |
dc.identifier.citedreference | Pham QT. Chest radiography in the diagnosis of pneumoconiosis [Workshop Report]. Int J Tuberc Lung Dis. 2001;5(5):478-482. | |
dc.identifier.citedreference | Franzblau A, teWaterNaude J, Sen A, et al. Comparison of digital and film chest radiography for detection and medical surveillance of silicosis in a setting with a high burden of tuberculosis. Am J Ind Med. 2018;61(3):229-238. | |
dc.identifier.citedreference | Takashima Y, Suganuma N, Sakurazawa H, et al. A flat-panel detector digital radiography and a storage phosphor computed radiography: screening for pneumoconioses. J Occup Health. 2007;49(1):39-45. | |
dc.identifier.citedreference | Zähringer M, Piekarski C, Saupe M, et al. Comparison of digital selenium radiography with an analog screen-film system in the diagnostic process of pneumoconiosis according to ILO classification. RoFo Fortschritte auf dem Gebiete der Rontgenstrahlen und der Nuklearmedizin. 2001;173(10):942-948. | |
dc.identifier.citedreference | Franzblau A, Kazerooni EA, Sen A, et al. Comparison of digital radiographs with film radiographs for the classification of pneumoconiosis. Acad Radiol. 2009;16(6):669-677. | |
dc.identifier.citedreference | Jeffreys H. Theory of Probability. 3rd ed. Oxford, UK: Clarendon Press; 1961. | |
dc.identifier.citedreference | Kass R, Raftery A. Bayes factors. J Am Stat Assoc. 1995;90(430):773-795. | |
dc.identifier.citedreference | Satagopan J, Sen A, Zhou Q, et al. Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies. Biometrics. 2016;72(2):584-595. | |
dc.identifier.citedreference | Plummer M. rjags: Bayesian graphical models using MCMC. R package version 3-10; 2013. | |
dc.identifier.citedreference | Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM. Measurement Error in Nonlinear Models: A Modern Perspective. Boca Raton, FL: CRC Press; 2006. | |
dc.identifier.citedreference | Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543-549. | |
dc.identifier.citedreference | Williamson JM, Manatunga AK. Assessing interrater agreement from dependent data. Biometrics. 1997;53(2):707-714. | |
dc.identifier.citedreference | Feuerman M, Miller A. The kappa statistic as a function of sensitivity and specificity. Int J Math Educ Sci Technol. 2005;36(5):517-527. | |
dc.identifier.citedreference | Lin L-K. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45(1):255-268. | |
dc.identifier.citedreference | Lin L-K. Assay validation using the concordance correlation coefficient. Biometrics. 1992;48(2):599-604. | |
dc.identifier.citedreference | Quiroz J. Assessment of equivalence using a concordance correlation coefficient in a repeated measurements design. J Biopharm Stat. 2005;15(6):913-928. | |
dc.identifier.citedreference | Guo Y, Manatunga AK. Modeling the agreement of discrete bivariate survival times using kappa coefficient. Lifetime Data Anal. 2005;11(3):309-332. | |
dc.identifier.citedreference | Peng L, Li R, Guo Y, Manatunga A. A framework for assessing broad sense agreement between ordinal and continuous measurements. J Am Stat Assoc. 2011;106(496):1592-1601. | |
dc.identifier.citedreference | Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20(1):37-46. | |
dc.identifier.citedreference | Cicchetti DV, Allison T. A new procedure for assessing reliability of scoring EEG sleep recordings. Am J EEG Technol. 1971;11(3):101-110. | |
dc.identifier.citedreference | Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33(3):613-619. | |
dc.identifier.citedreference | Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics. 1977;33(2):363-374. | |
dc.identifier.citedreference | Fleiss JL, Levin B, Paik MC. Statistical Methods for Rates and Proportions. 3rd ed. Hoboken, NJ: John Wiley & Sons; 2003. | |
dc.identifier.citedreference | Shoukri MM. Measures of Interobserver Agreement and Reliability. 2nd ed. Boca Raton, FL: CRC Press; 2010. | |
dc.identifier.citedreference | Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76(5):378. | |
dc.identifier.citedreference | Kraemer HC. Extension of the kappa coefficient. Biometrics. 1980;36(2):207-216. | |
dc.identifier.citedreference | Donner A, Eliasziw M, Klar N. Testing the homogeneity of kappa statistics. Biometrics. 1996;52(1):176-183. | |
dc.identifier.citedreference | Basu S, Banerjee M, Sen A. Bayesian inference for kappa from single and multiple studies. Biometrics. 2000;56(2):577-582. | |
dc.identifier.citedreference | Lipsitz SR, Williamson J, Klar N, Ibrahim J, Parzen M. A simple method for estimating a regression model for κ between a pair of raters. J R Stat Soc A. 2001;164(3):449-465. | |
dc.identifier.citedreference | Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: a review of interrater agreement measures. Can J Stat. 1999;27(1):3-23. | |
dc.identifier.citedreference | Donner A, Shoukri MM, Klar N, Bartfay E. Testing the equality of two dependent kappa statistics. Stat Med. 2000;19(3):373-387. | |
dc.identifier.citedreference | McKenzie DP, Mackinnon AJ, Péladeau N, et al. Comparing correlated kappas by resampling: is one level of agreement significantly different from another? J Psychiatr Res. 1996;30(6):483-492. | |
dc.identifier.citedreference | Vanbelle S, Albert A. A bootstrap method for comparing correlated kappa coefficients. J Stat Comput Simul. 2008;78(11):1009-1015. | |
dc.identifier.citedreference | Barnhart HX, Williamson JM. Weighted least-squares approach for comparing correlated kappa. Biometrics. 2002;58(4):1012-1019. | |
dc.working.doi | NO | en |
dc.owningcollname | Interdisciplinary and Peer-Reviewed |