Show simple item record

Scientific Analysis by the Crowd: A System for Implicit Collaboration between Experts, Algorithms, and Novices in Distributed Work.

dc.contributor.author: Lee, David
dc.date.accessioned: 2014-01-16T20:41:19Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2014-01-16T20:41:19Z
dc.date.issued: 2013
dc.date.submitted: 2013
dc.identifier.uri: https://hdl.handle.net/2027.42/102368
dc.description.abstract: Crowdsourced strategies have the potential to increase the throughput of tasks historically constrained by the performance of individual experts. A critical open question is how to configure crowd-based mechanisms, such as online micro-task markets, to accomplish work normally done by experts. In the context of one kind of expert work, feature extraction from electron microscope images, this thesis describes three experiments conducted with Amazon’s Mechanical Turk to explore the feasibility of crowdsourcing for tasks that traditionally rely on experts. The first experiment combined the output of learning algorithms with judgments made by non-experts to see whether the crowd could efficiently and accurately detect the best algorithmic performance for image segmentation. Image segmentation is an important but rate-limiting step in analyzing biological imagery, and current best practice relies on extracting features by hand. Results showed that crowd workers matched the results of expert workers in 87.5% of cases on the same task, and that they did so with very little training. The second experiment used crowd responses to progressively refine task instructions. Results showed that crowd workers consistently added information to the instructions and produced instructions the crowd rated as clearer by an average of 8.7%. Finally, the third experiment mapped images to abstract representations to see whether the crowd could efficiently and accurately identify target structures. Results showed that crowd workers found 100% of known structures with an 82% decrease in false positives compared to conventional automated image processing. This thesis makes a number of contributions. First, the work demonstrates that tasks previously performed by highly trained experts, such as feature extraction from images, can be accomplished by non-experts in less time and with comparable accuracy when organized through a micro-task market. Second, the work shows that engaging crowd workers to reflect on the description of a task leads them to refine that task in ways that increase engagement among subsequent crowd workers. Finally, the work shows that abstract representations perform nearly as well as actual images when a crowd of non-experts is used to locate targeted features.
dc.language.iso: en_US
dc.subject: Crowdsourcing
dc.subject: Neuroscience
dc.title: Scientific Analysis by the Crowd: A System for Implicit Collaboration between Experts, Algorithms, and Novices in Distributed Work.
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Information
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Finholt, Thomas A.
dc.contributor.committeemember: Athey, Brian D.
dc.contributor.committeemember: Newman, Mark W.
dc.contributor.committeemember: Adar, Eytan
dc.contributor.committeemember: Ellisman, Mark H.
dc.subject.hlbsecondlevel: Information and Library Science
dc.subject.hlbtoplevel: Social Sciences
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/102368/1/dlzz_1.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


