Scientific Analysis by the Crowd: A System for Implicit Collaboration between Experts, Algorithms, and Novices in Distributed Work.
dc.contributor.author | Lee, David | en_US |
dc.date.accessioned | 2014-01-16T20:41:19Z | |
dc.date.available | NO_RESTRICTION | en_US |
dc.date.available | 2014-01-16T20:41:19Z | |
dc.date.issued | 2013 | en_US |
dc.date.submitted | 2013 | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/102368 | |
dc.description.abstract | Crowdsourced strategies have the potential to increase the throughput of tasks historically constrained by the performance of individual experts. A critical open question is how to configure crowd-based mechanisms, such as online micro-task markets, to accomplish work normally done by experts. In the context of one kind of expert work, feature extraction from electron microscope images, this thesis describes three experiments conducted with Amazon’s Mechanical Turk to explore the feasibility of crowdsourcing for tasks that traditionally rely on experts. The first experiment combined the output from learning algorithms with judgments made by non-experts to see whether the crowd could efficiently and accurately detect the best algorithmic performance for image segmentation. Image segmentation is an important but rate-limiting step in analyzing biological imagery; current best practice relies on extracting features by hand. Results showed that crowd workers matched the results of expert workers in 87.5% of cases on the same task, and did so with very little training. The second experiment used crowd responses to progressively refine task instructions. Results showed that crowd workers were able to consistently add information to the instructions, producing versions the crowd perceived as clearer by an average of 8.7%. Finally, the third experiment mapped images to abstract representations to see whether the crowd could efficiently and accurately identify target structures. Results showed that crowd workers found 100% of known structures with an 82% decrease in false positives compared to conventional automated image processing. This thesis makes a number of contributions. 
First, the work demonstrates that tasks previously performed by highly trained experts, such as feature extraction, can be accomplished by non-experts in less time and with comparable accuracy when organized through a micro-task market. Second, the work shows that engaging crowd workers to reflect on task descriptions leads them to refine the tasks, increasing engagement by subsequent crowd workers. Finally, the work shows that abstract representations perform nearly as well as actual images when a crowd of non-experts is used to locate targeted features. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Crowdsourcing | en_US |
dc.subject | Neuroscience | en_US |
dc.title | Scientific Analysis by the Crowd: A System for Implicit Collaboration between Experts, Algorithms, and Novices in Distributed Work. | en_US |
dc.type | Thesis | en_US |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Information | en_US |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | en_US |
dc.contributor.committeemember | Finholt, Thomas A. | en_US |
dc.contributor.committeemember | Athey, Brian D. | en_US |
dc.contributor.committeemember | Newman, Mark W. | en_US |
dc.contributor.committeemember | Adar, Eytan | en_US |
dc.contributor.committeemember | Ellisman, Mark H. | en_US |
dc.subject.hlbsecondlevel | Information and Library Science | en_US |
dc.subject.hlbtoplevel | Social Sciences | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/102368/1/dlzz_1.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |