"What are you doing, TikTok" : How Marginalized Social Media Users Perceive, Theorize, and "Prove" Shadowbanning

dc.contributor.author: Delmonaco, Daniel
dc.contributor.author: Mayworm, Samuel
dc.contributor.author: Thach, Hibby
dc.contributor.author: Guberman, Josh
dc.contributor.author: Augusta, Aurelia
dc.contributor.author: Haimson, Oliver L.
dc.date.accessioned: 2024-03-07T15:50:14Z
dc.date.available: 2024-03-07T15:50:14Z
dc.date.issued: 2024-04
dc.identifier.uri: https://hdl.handle.net/2027.42/192621 [en]
dc.description.abstract: Shadowbanning is a unique content moderation strategy receiving recent media attention for the ways it impacts marginalized social media users and communities. Social media companies often deny this content moderation practice despite user experiences online. In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning, develop folk theories about shadowbanning, and attempt to prove its occurrence. We find that marginalized social media users collaboratively develop and test algorithmic folk theories to make sense of their unclear experiences with shadowbanning. Participants reported direct consequences of shadowbanning, including frustration, decreased engagement, the inability to post specific content, and potential financial implications. They reported holding negative perceptions of platforms where they experienced shadowbanning, sometimes attributing their shadowbans to platforms’ deliberate suppression of marginalized users’ content. Some marginalized social media users acted on their theories by adapting their social media behavior to avoid potential shadowbans. We contribute collaborative algorithm investigation: a new concept describing social media users’ strategies of collaboratively developing and testing algorithmic folk theories. Finally, we present design and policy recommendations for addressing shadowbanning and its potential harms. [en_US]
dc.description.sponsorship: National Science Foundation award #1942125 [en_US]
dc.language.iso: en_US [en_US]
dc.publisher: ACM [en_US]
dc.subject: content moderation [en_US]
dc.subject: social media [en_US]
dc.subject: marginalization [en_US]
dc.subject: shadowbanning [en_US]
dc.subject: algorithmic folk theories [en_US]
dc.subject: collaborative algorithm investigation [en_US]
dc.title: "What are you doing, TikTok": How Marginalized Social Media Users Perceive, Theorize, and "Prove" Shadowbanning [en_US]
dc.type: Article [en_US]
dc.subject.hlbsecondlevel: Information Science
dc.subject.hlbtoplevel: Social Sciences
dc.description.peerreviewed: Peer Reviewed [en_US]
dc.contributor.affiliationum: Information, School of [en_US]
dc.contributor.affiliationumcampus: Ann Arbor [en_US]
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/192621/1/Shadowbanning_CSCW23_MinorRevisions.pdf
dc.identifier.doi: 10.1145/3637431
dc.identifier.doi: https://dx.doi.org/10.7302/22437
dc.identifier.source: Proceedings of the ACM Human Computer Interaction (PACM HCI) (CSCW 2024) [en_US]
dc.description.mapping: c4321027-eaa6-44f5-a298-a6880ec181d5 [en_US]
dc.identifier.orcid: 0000-0001-6552-4540 [en_US]
dc.description.filedescription: Description of Shadowbanning_CSCW23_MinorRevisions.pdf: Main article
dc.description.depositor: SELF [en_US]
dc.identifier.name-orcid: Haimson, Oliver; 0000-0001-6552-4540 [en_US]
dc.working.doi: 10.7302/22437 [en_US]
dc.owningcollname: Information, School of (SI)
