Show simple item record

Simulation-based Approaches for Evaluating Information Elicitation Mechanisms and Information Aggregation Algorithms

dc.contributor.author: Burrell, Noah
dc.date.accessioned: 2023-09-22T15:30:52Z
dc.date.available: 2023-09-22T15:30:52Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.identifier.uri: https://hdl.handle.net/2027.42/177922
dc.description.abstract: The mathematical study of information elicitation has led to elegant theories about the behavior of economic agents asked to share their private information. Similarly, the study of information aggregation has illuminated the possibility of combining independent sources of imperfect information so that the combined information is more valuable than that from any single source. However, despite a flourishing academic literature in both areas, some of their key insights have yet to be embraced in many of their purported applications. In this dissertation, we revisit prior work in the applications of crowdsourcing and peer assessment to address overlooked obstacles to more widespread adoption of their key contributions. We apply simulation-based methods to the evaluation of information elicitation mechanisms and information aggregation algorithms.

First, we use real crowdsourcing data to explore common assumptions about the way that crowd workers make mistakes in labeling. We find that a common assumption from theoretical work, that workers identify correct labels with a constant probability, is implausible in the data sets we consider. We further find different forms of heterogeneity among both tasks and workers, which have different implications for the design and evaluation of label aggregation algorithms.

Then, we turn to peer assessment. Despite many potential benefits from peer grading, the traditional paradigm, where one instructor grades each submission, predominates. One persistent impediment to adopting a new grading paradigm is doubt that it will assign grades that are at least as good as those that would have been assigned under the existing paradigm. We address this impediment by using tools from economics to define a practical framework for determining when peer grades clearly exceed the standard set by the instructor baseline. We simulate realistic grading data using a model from prior work, and find that peer grading is unlikely to be clearly preferable to instructor grading for participants unless we either significantly increase the workload of students or make stronger assumptions about participants’ utility functions. We also study the effectiveness of various interventions to improve the quality of peer grading and the optimal allocations of peer and instructor resources under a fixed grading budget.

Lastly, we propose measurement integrity, which quantifies a peer prediction mechanism’s ability to assign rewards that reliably measure agents according to the quality of their reports, as a novel desideratum in many applications. We perform computational experiments with mechanisms that elicit information without verification in the setting of peer assessment to empirically evaluate mechanisms according to both measurement integrity and robustness against strategic reporting. We find an apparent trade-off between these properties: the best-performing mechanisms in terms of measurement integrity are highly susceptible to strategic reporting. But we also find that supplementing mechanisms with realistic parametric statistical models results in mechanisms that strike the best balance between them.
dc.language.iso: en_US
dc.subject: Simulation
dc.subject: Information Elicitation
dc.subject: Information Aggregation
dc.title: Simulation-based Approaches for Evaluating Information Elicitation Mechanisms and Information Aggregation Algorithms
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Computer Science & Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Peikert, Christopher J
dc.contributor.committeemember: Schoenebeck, Grant
dc.contributor.committeemember: Resnick, Paul
dc.contributor.committeemember: Koutra, Danai
dc.subject.hlbsecondlevel: Economics
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Business and Economics
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/177922/1/burrelln_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/8379
dc.identifier.orcid: 0000-0003-3448-080X
dc.identifier.name-orcid: Burrell, Noah; 0000-0003-3448-080X
dc.working.doi: 10.7302/8379
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


