Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms
dc.contributor.author | Ye, Teng | |
dc.contributor.author | Reinecke, Katharina | |
dc.contributor.author | Robert, Lionel | |
dc.date.accessioned | 2016-12-16T10:00:25Z | |
dc.date.available | 2016-12-16T10:00:25Z | |
dc.date.issued | 2017-02-25 | |
dc.identifier.citation | Ye, T., Reinecke, K. and Robert, L. P. (2017). Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms, Proceedings of the 20th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, February 25 - March 01, 2017, Portland, OR, USA | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/134704 | |
dc.description.abstract | We compared the reliability of subjective-task data collected on two platforms: Amazon's Mechanical Turk (MTurk) and LabintheWild. MTurk incentivizes participants with financial compensation, while LabintheWild provides participants with personalized feedback. LabintheWild was found to produce higher data reliability than MTurk. Our findings suggest that online experiment platforms that provide feedback in exchange for study participation can produce more reliable data on subjective preference tasks than those offering financial compensation. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | ACM | en_US |
dc.subject | Crowdsourcing | en_US |
dc.subject | Online Experimentation | en_US |
dc.subject | Mechanical Turk | en_US |
dc.subject | Compensation | en_US |
dc.subject | Crowdworker Compensation | en_US |
dc.subject | Crowdsourcing compensation | en_US |
dc.subject | incentives | en_US |
dc.subject | crowdsourcing incentives | en_US |
dc.subject | motivation | en_US |
dc.subject | crowdsourcing motivation | en_US |
dc.subject | online study data quality | en_US |
dc.subject | online experimentation data reliability | en_US |
dc.subject | online studies data reliability | en_US |
dc.subject | LabintheWild | en_US |
dc.title | Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Information and Library Science | |
dc.subject.hlbtoplevel | Social Sciences | |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | Information, School of | en_US |
dc.contributor.affiliationother | University of Washington | en_US |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/134704/1/Ye et al. 2017.pdf | |
dc.identifier.doi | 10.1145/3022198.3026339 | |
dc.identifier.source | Proceedings of the 20th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, February 25 - March 01, 2017, Portland, OR, USA | en_US |
dc.owningcollname | Information, School of (SI) |