Simulation-based Approaches for Evaluating Information Elicitation Mechanisms and Information Aggregation Algorithms
dc.contributor.author | Burrell, Noah | |
dc.date.accessioned | 2023-09-22T15:30:52Z | |
dc.date.available | 2023-09-22T15:30:52Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/177922 | |
dc.description.abstract | The mathematical study of information elicitation has led to elegant theories about the behavior of economic agents asked to share their private information. Similarly, the study of information aggregation has illuminated the possibility of combining independent sources of imperfect information so that the combined information is more valuable than that from any single source. However, despite a flourishing academic literature in both areas, some of their key insights have yet to be embraced in many of their purported applications. In this dissertation, we revisit prior work in the applications of crowdsourcing and peer assessment to address overlooked obstacles to more widespread adoption of their key contributions. We apply simulation-based methods to the evaluation of information elicitation mechanisms and information aggregation algorithms. First, we use real crowdsourcing data to explore common assumptions about the way that crowd workers make mistakes in labeling. We find that a common assumption from theoretical work, that workers identify correct labels with a constant probability, is implausible in the data sets we consider. We further find different forms of heterogeneity among both tasks and workers, which have different implications for the design and evaluation of label aggregation algorithms. Then, we turn to peer assessment. Despite many potential benefits from peer grading, the traditional paradigm, where one instructor grades each submission, predominates. One persistent impediment to adopting a new grading paradigm is doubt that it will assign grades that are at least as good as those that would have been assigned under the existing paradigm. We address this impediment by using tools from economics to define a practical framework for determining when peer grades clearly exceed the standard set by the instructor baseline. We simulate realistic grading data using a model from prior work, and find that peer grading is unlikely to be clearly preferable to instructor grading for participants unless we either significantly increase the workload of students or make stronger assumptions about participants’ utility functions. We also study the effectiveness of various interventions to improve the quality of peer grading and the optimal allocation of peer and instructor resources under a fixed grading budget. Lastly, we propose measurement integrity, which quantifies a peer prediction mechanism’s ability to assign rewards that reliably measure agents according to the quality of their reports, as a novel desideratum in many applications. We perform computational experiments with mechanisms that elicit information without verification in the setting of peer assessment to empirically evaluate mechanisms according to both measurement integrity and robustness against strategic reporting. We find an apparent trade-off between these properties: the best-performing mechanisms in terms of measurement integrity are highly susceptible to strategic reporting. But we also find that supplementing mechanisms with realistic parametric statistical models yields mechanisms that strike the best balance between the two properties. | |
dc.language.iso | en_US | |
dc.subject | Simulation | |
dc.subject | Information Elicitation | |
dc.subject | Information Aggregation | |
dc.title | Simulation-based Approaches for Evaluating Information Elicitation Mechanisms and Information Aggregation Algorithms | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Computer Science & Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Peikert, Christopher J | |
dc.contributor.committeemember | Schoenebeck, Grant | |
dc.contributor.committeemember | Resnick, Paul | |
dc.contributor.committeemember | Koutra, Danai | |
dc.subject.hlbsecondlevel | Economics | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbtoplevel | Business and Economics | |
dc.subject.hlbtoplevel | Engineering | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/177922/1/burrelln_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/8379 | |
dc.identifier.orcid | 0000-0003-3448-080X | |
dc.identifier.name-orcid | Burrell, Noah; 0000-0003-3448-080X | en_US |
dc.working.doi | 10.7302/8379 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |