Careless Survey Respondents: Approaches to Identify and Reduce their Negative Impact on Survey Estimates
Melipillan Araneda, Edmundo Roberto
2019
Abstract
Multi-item response scales are widely used in surveys to assess a variety of constructs, including respondents’ attitudes, behavior, and personality. Multi-item scales often appear in grid question formats that apply the same response options to a set of survey items. In these types of questions, survey satisficing is likely to occur: respondents might skim instructions, respond in a haphazard fashion, or rush through questionnaires. Respondents who exhibit these behaviors are often referred to as satisficers or careless respondents (CR). Although previous literature has extensively discussed ways to identify satisficing behaviors in these types of scales (e.g., the detection of response order effects, straightlining, and speeding, and the use of trap questions), two methods have been overlooked in the survey literature. One is the family of person-fit statistics, which identify inconsistent response patterns by comparing the responses a model expects to the responses actually reported. Among the most popular of these is the standardized log-likelihood person-fit statistic l_z^p, which has proven useful in multi-item scales with a large number of items. The other is the autoencoder method, initially developed and used in engineering to identify anomalies or outlier cases. This dissertation intends to fill three important gaps in the existing literature related to identifying CR and reducing their negative effects. Specifically, it examines i) the use of the standardized log-likelihood l_z^p and the autoencoder to identify CR in multi-item scales; ii) the use of multiple imputation to deal with data from identified CR; and iii) the use of the standardized log-likelihood l_z^p and the autoencoder as an alternative to trap questions, and how best to deal with trapped respondents.
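The core idea behind the autoencoder approach can be conveyed with a toy sketch. The following is an illustrative example only, not the dissertation’s implementation: a minimal linear autoencoder trained on simulated 5-point Likert responses flags respondents whose answers it reconstructs poorly. The simulated data, the 8→2→8 architecture, and the 95th-percentile cutoff are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert data: 190 attentive respondents answer near a
# shared profile, while 10 careless respondents answer uniformly at random.
n_items = 8
profile = rng.integers(2, 5, size=n_items)
attentive = np.clip(profile + rng.integers(-1, 2, size=(190, n_items)), 1, 5)
careless = rng.integers(1, 6, size=(10, n_items))
X = np.vstack([attentive, careless]).astype(float)
X = (X - 3.0) / 2.0  # rescale 1..5 responses to roughly [-1, 1]

# Minimal linear autoencoder (8 -> 2 -> 8) fit by batch gradient descent.
h = 2
W1 = rng.normal(0.0, 0.1, (n_items, h))   # encoder weights
W2 = rng.normal(0.0, 0.1, (h, n_items))   # decoder weights
lr = 0.1
for _ in range(2000):
    Z = X @ W1                 # encode
    Xhat = Z @ W2              # decode
    err = Xhat - X
    gW2 = (Z.T @ err) / len(X)
    gW1 = (X.T @ (err @ W2.T)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# Per-respondent reconstruction error: careless response patterns fit the
# learned low-dimensional structure poorly, so their errors tend to be large.
errors = ((X @ W1 @ W2 - X) ** 2).mean(axis=1)
flagged = np.where(errors > np.quantile(errors, 0.95))[0]  # top 5% as candidate CR
```

This sketch only conveys reconstruction error as a carelessness signal; the dissertation’s approach additionally involves refitting (the two-iteration variant examined in the first study), and nonlinear autoencoders are common in practice.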
The first study compares the performance of the standardized log-likelihood l_z^p and the autoencoder in identifying CR in a multi-item scale with a small number of items. This research is based on a full factorial simulation experiment focusing on two types of CR behavior: random responding and non-differentiation with respect to question item direction. Results indicate that the autoencoder with two iterations identifies more CR (higher sensitivity) than the standardized log-likelihood l_z^p in all conditions while maintaining acceptable false positive rates. The second study compares three approaches to treating data from CR: using the full sample (the “complete data analysis” approach), excluding all CR data, and deleting and imputing CR data. Results of this chapter suggest that autoencoder identification with imputation of CR data outperforms standardized log-likelihood l_z^p identification with imputation. The third study examines whether the standardized log-likelihood l_z^p and the autoencoder can be used as an alternative to the trap question, and what the optimal approach is for dealing with CR data. Data from a non-probability web survey suggest that the autoencoder provides results equivalent to the use of trap questions. In addition, it is possible to remove only a subset of trapped respondent data from analysis by using the autoencoder to identify the most careless subset of trapped respondents.
Subjects
Careless respondents; Satisficing; Person-fit indices; Autoencoder
Types
Thesis