Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms

dc.contributor.author: Ye, Teng
dc.contributor.author: Reinecke, Katharina
dc.contributor.author: Robert, Lionel
dc.date.accessioned: 2016-12-16T10:00:25Z
dc.date.available: 2016-12-16T10:00:25Z
dc.date.issued: 2017-02-25
dc.identifier.citation: Ye, T., Reinecke, K. and Robert, L. P. (2017). Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms, Proceedings of the 20th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, February 25 - March 01, 2017, Portland, OR, USA
dc.identifier.uri: https://hdl.handle.net/2027.42/134704
dc.description.abstract: We compared the data reliability on a subjective task from two platforms: Amazon's Mechanical Turk (MTurk) and LabintheWild. MTurk incentivizes participants with financial compensation, while LabintheWild provides participants with personalized feedback. LabintheWild was found to produce higher data reliability than MTurk. Our findings suggest that online experiment platforms providing feedback in exchange for study participation can produce more reliable data in subjective preference tasks than those offering financial compensation.
dc.language.iso: en_US
dc.publisher: ACM
dc.subject: Crowdsourcing
dc.subject: Online Experimentation
dc.subject: Mechanical Turk
dc.subject: Compensation
dc.subject: Crowdworker Compensation
dc.subject: Crowdsourcing compensation
dc.subject: incentives
dc.subject: crowdsourcing incentives
dc.subject: motivation
dc.subject: crowdsourcing motivation
dc.subject: online study data quality
dc.subject: online experimentation data reliability
dc.subject: online studies data reliability
dc.subject: LabintheWild
dc.title: Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms
dc.type: Article
dc.subject.hlbsecondlevel: Information and Library Science
dc.subject.hlbtoplevel: Social Sciences
dc.description.peerreviewed: Peer Reviewed
dc.contributor.affiliationum: Information, School of
dc.contributor.affiliationother: University of Washington
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/134704/1/Ye et al. 2017.pdf
dc.identifier.doi: 10.1145/3022198.3026339
dc.identifier.source: Proceedings of the 20th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, February 25 - March 01, 2017, Portland, OR, USA
dc.owningcollname: Information, School of (SI)

