Gender and Big Data: Finding or Making Stereotypes?

dc.contributor.author: Mandell, Laura
dc.date.accessioned: 2016-03-10T21:34:37Z
dc.date.available: 2016-03-10T21:34:37Z
dc.date.issued: 2016-02-01
dc.identifier.uri: https://hdl.handle.net/2027.42/117493
dc.description: Introduced by: Paul Conway
dc.description.abstract: Beyond CTRL+F: Text Mining Across the Disciplines Conference, Keynote Speaker. In his book Macroanalysis, Matthew Jockers argues that we have reached a “tipping point”: now that so much data has been digitized, we can use techniques and methodologies developed for exploring big data, such as text mining, topic modeling, machine learning, and named entity recognition. Two problems confront digital literary historians of women writers who wish to apply these methodologies. First, the number of women writers who published works before 1800 in Britain and America, as well as the number of their publications that have been preserved, is small compared to that of men, a problem compounded by how few works by early modern women writers are currently being digitized: roughly 4% of the 307,000 volumes in Early English Books Online and Eighteenth-Century Collections Online were written by women. Second, many of the data analysts currently comparing what they call “female writing” to “male writing” propagate rather than interrogate stereotypes about women and women writers. Sociologists have worked on such problems, and in this talk I will outline some of their strategies and discuss how literary critics who wish to perform macroanalysis might make use of them. Data scientists in the commercial world have worked on the problem of representing minorities “fairly” even when they are represented by only a small sample. Thanks to the robust history of feminist theory and criticism, we have the means for generating vocabularies, taxonomies, and ontologies for semantic searching and supervised topic modeling that differ from those generated through big-data techniques that naïvely privilege historically oppressive discourses. Moreover, the need to shift from quantitative to qualitative analysis (and back again) is heightened when analyzing textual data produced by minorities. I argue that, once again, the concern for social justice enhances intellectual work by effectively demonstrating the inadequacy of claiming “new” discoveries based upon “statistical significance” alone.
dc.title: Gender and Big Data: Finding or Making Stereotypes?
dc.type: Video
dc.subject.hlbsecondlevel: Humanities (General)
dc.subject.hlbtoplevel: Humanities
dc.contributor.affiliationother: Texas A&M University
dc.contributor.affiliationumcampus: Ann Arbor
dc.identifier.videostream: https://cdnapisec.kaltura.com/p/1038472/sp/103847200/embedIframeJs/uiconf_id/33084471/partner_id/1038472?autoembed=true&entry_id=1_deszddpe&playerId=kaltura_player_1455309475&cache_st=1455309475&width=400&height=330&flashvars[streamerType]=auto
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/117493/1/2016WI013-003720.mov
dc.owningcollname: Library (University of Michigan Library)
