Paradata, Interviewing Quality, and Interviewer Effects
Sharma, Sharan
2019
Abstract
A vast literature on interviewer effects (interviewer measurement error variance) is devoted to estimating these effects and understanding their causes and associated factors. However, consideration of interviewer effects in active quality control (QC) does not seem widespread, despite their known role in reducing the precision of survey estimates. We address this gap in this dissertation with the overarching goal of using item-level paradata (keystrokes and time stamps generated during the computer-assisted interviewing process) in a systematic manner to develop an active interviewer monitoring system in order to control interviewer effects. The dissertation is structured around exploring associations between paradata, indicators of interviewing quality, and interviewer effects. Our hypothesis is that different levels of interviewing quality cause different paradata patterns. Differing levels of interviewing quality also result in different between-interviewer response means even after controlling for respondent characteristics, leading to interviewer effects. Thus, interviewing quality is conceptualized as a common cause of both interviewer effects and paradata patterns, making it possible to treat paradata patterns as potentially effective proxies for interviewer effects. Little is known about what paradata say about the actual quality of an interview. This is explored in Chapter 2, where we use paradata patterns to predict either the proportion of QC flags in an interview (interview-level analysis) or the occurrence of a QC flag for an item (item-level analysis). The results show that paradata patterns have strong associations with interviewing quality. A key finding is that a multivariate approach to paradata use is necessary. Chapter 3 turns to investigating associations of indicators of interviewing quality with interviewer effects. Survey QC systems monitor interviewers for their compliance with interviewing protocol.
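The notion of interviewer effects as between-interviewer variance can be illustrated with a minimal variance-components sketch. This is purely illustrative simulated data with hypothetical parameter values, not the PSID data or models analyzed in the dissertation; it simply shows how responses nested within interviewers yield a between-interviewer variance component and an intra-interviewer correlation.

```python
import random
import statistics

# Hypothetical balanced design: K interviewers, N respondents each.
# Parameter values below are illustrative assumptions only.
random.seed(42)
K, N = 25, 40                  # interviewers, respondents per interviewer
SIGMA_B, SIGMA_W = 0.5, 1.0    # between- and within-interviewer SDs

data = []
for j in range(K):
    intercept = random.gauss(0.0, SIGMA_B)   # interviewer-specific shift
    data.append([5.0 + intercept + random.gauss(0.0, SIGMA_W)
                 for _ in range(N)])

# One-way ANOVA (method-of-moments) variance-component estimates.
grand_mean = statistics.mean(y for grp in data for y in grp)
msb = N * sum((statistics.mean(g) - grand_mean) ** 2 for g in data) / (K - 1)
msw = sum((y - statistics.mean(g)) ** 2 for g in data for y in g) / (K * (N - 1))

sigma2_b = max((msb - msw) / N, 0.0)          # between-interviewer variance
sigma2_w = msw                                # residual (within) variance
rho_int = sigma2_b / (sigma2_b + sigma2_w)    # intra-interviewer correlation

print(f"between-interviewer variance: {sigma2_b:.3f}")
print(f"within-interviewer variance:  {sigma2_w:.3f}")
print(f"intra-interviewer correlation: {rho_int:.3f}")
```

In this framing, "explaining interviewer variance" means that adding interviewer-level predictors (QC indicators, paradata summaries) shrinks the between-interviewer component relative to this unconditional baseline.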
What is less clear is whether deviations from protocol are also associated with interviewer effects. While our analysis shows only moderate associations in this regard, we find that QC variables are complementary to other interviewer-level characteristics; when used together, they explain a fair portion of the interviewer variance. Building on the foundations laid by Chapters 2 and 3, Chapter 4 uses paradata to directly predict interviewer effects. We find that paradata are fairly strong predictors of interviewer effects for the items we analyzed, explaining more than half the magnitude of interviewer effects on average. Paradata also outperformed interviewer-level demographic and work-related variables in explaining interviewer effects. While most of the focus in the literature and in practice has been on time-based paradata (e.g., item times), we find that non-time-based paradata (e.g., frequency of item revisits) outperform time-based paradata for a large majority of items. We discuss how survey organizations can use the dissertation findings in active quality control. All our analyses use data from the 2015 wave of the Panel Study of Income Dynamics.
Subjects
Interviewer effects; Paradata; Survey quality control; Interviewing quality
Types
Thesis