Prompt and Circumstance: Investigating the Relationship between College Writing and Postsecondary Policy

dc.contributor.authorGodfrey, Jason
dc.date.accessioned2024-09-03T18:38:12Z
dc.date.available2024-09-03T18:38:12Z
dc.date.issued2024
dc.date.submitted2024
dc.identifier.urihttps://hdl.handle.net/2027.42/194541
dc.description.abstractIn US-based postsecondary education, first-year students commonly have their compositional ability consequentially assessed on the basis of standardized tests. As a result, students who score above certain thresholds on ACT, SAT, or AP exams are often placed into honors or remedial courses, receive credit remissions, and/or test out of general education classes such as first-year composition. While the thresholds and applicable tests vary from institution to institution, over 2,000 Title IV schools implement policies based on such tests. However, there is little evidence that the linguistic patterns that correlate with success on timed, high-stakes tests carry forward to college-level writing tasks. Consequently, contemporary composition scholars call for research that centers examination of student writing itself rather than assessments of writing quality such as standardized tests. This dissertation responds to that call by answering two questions: How do linguistic features observed in college-level writing relate to institutionally sanctioned measures of writing quality? And what are the implications for policy levers based on those measures? To answer these questions, I leverage a longitudinal corpus (2009-2019) of approximately 47,000 student essays matched with data on test scores. Together, these data allow me to investigate whether the test scores, implemented as Boolean policy levers, meaningfully distinguish between students who write using measurably distinct linguistic patterns. To measure such distinctions, this study employs natural language processing, incorporating large language models designed for text classification tasks: BERT, RoBERTa, and XLNet. These methods yield a quadratic weighted kappa of 0.43, indicating that the models classify student essays better than random assignment; nevertheless, the relationship between student writing and test scores remains minimal. Ideally, educational policy that consequentially sorts students into different educational tracks at the most vulnerable point of their college career would bear more than a weak relationship to their college-level performance. To uncover which linguistic features are most correlated with higher scores, I employ OLS, multiple, and logistic regression. These models find significant differences between the essays of students with high and low test scores. Across most models, students with higher test scores write, on average, fewer clauses per sentence; more prepositions, adverbs, colons, and adjectives; and the same number of personal pronouns. While these findings are statistically significant, they only weakly describe the differences between high- and low-scoring students, such that distinguishing between essays of students who are near common policy thresholds would be an error-prone task for any human or algorithm. Additionally, while the logistic regression based on the existing policy threshold at the University of Michigan had the greatest explanatory power of the logistic models (pseudo-R^2 = 0.09), linear regressions based on a normalized ACT-SAT score had more explanatory power still (R^2 = 0.161). While these metrics cannot be directly compared, the difference in their relative strength nonetheless reveals a disparity in goodness-of-fit, demonstrating that educational policy based on a Boolean threshold from one test is functionally less discriminating than a metric based on multiple measures.
Statistical significance notwithstanding, the overall weak correlation between standardized test scores and college-level writing demonstrates the inability of a timed, high-stakes writing test to predict writing in other circumstances, including college-level writing tasks. These results evidence the brittleness of these test scores as measures of writing quality and cast doubt on their utility as policy levers.
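As a hedged illustration of the text-classification setup the abstract describes, the sketch below loads a transformer checkpoint with a sequence-classification head and assigns an essay to one of several score bands. The checkpoint name, the number of bands (num_labels=4), and the essay text are assumptions for illustration only; an untuned checkpoint like this would need fine-tuning on labeled essays before its predictions mean anything.

    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    # Hypothetical stand-in for the fine-tuned BERT/RoBERTa/XLNet classifiers
    # described in the abstract; num_labels=4 (score bands) is an assumption.
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=4
    )

    essay = "Placeholder essay text for illustration..."
    inputs = tok(essay, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_band = logits.argmax(dim=-1).item()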
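A quadratic weighted kappa such as the 0.43 reported above can be computed, for instance, with scikit-learn's cohen_kappa_score; the band labels below are invented purely to show the call.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ordinal score bands: model predictions versus the bands
    # implied by students' actual test scores.
    true_bands = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
    pred_bands = [0, 1, 1, 2, 2, 2, 0, 3, 3, 0]

    # Quadratic weights penalize large ordinal disagreements more heavily,
    # which suits ordered categories such as score bands.
    qwk = cohen_kappa_score(true_bands, pred_bands, weights="quadratic")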
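A minimal sketch of the regression comparison, assuming statsmodels and synthetic data: OLS fits the continuous, normalized ACT-SAT score (analogous to the R^2 = 0.161 reported above), while the logistic model fits only a Boolean above/below-threshold flag (analogous to the pseudo-R^2 = 0.09). The feature names, values, and threshold are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic per-essay linguistic features of the kind the abstract names
    # (clauses per sentence, preposition rate); values are invented.
    df = pd.DataFrame({
        "clauses_per_sentence": rng.normal(2.0, 0.5, 500),
        "prepositions_per_100w": rng.normal(12.0, 2.0, 500),
        "act_sat_norm": rng.normal(0.0, 1.0, 500),
    })
    X = sm.add_constant(df[["clauses_per_sentence", "prepositions_per_100w"]])

    # Continuous outcome: OLS regression on the normalized test score.
    ols = sm.OLS(df["act_sat_norm"], X).fit()
    r_squared = ols.rsquared

    # Boolean policy lever: logistic regression on an above-threshold flag.
    above = (df["act_sat_norm"] > 0.5).astype(int)
    logit = sm.Logit(above, X).fit(disp=0)
    pseudo_r_squared = logit.prsquared  # McFadden's pseudo-R^2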
dc.language.isoen_US
dc.subjectEducational Policy
dc.subjectCollege Writing
dc.subjectStandardized Tests
dc.subjectAP, ACT, SAT
dc.subjectNatural Language Processing
dc.titlePrompt and Circumstance: Investigating the Relationship between College Writing and Postsecondary Policy
dc.typeThesis
dc.description.thesisdegreenamePhD
dc.description.thesisdegreedisciplineEnglish & Education
dc.description.thesisdegreegrantorUniversity of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeememberAull, Laura L
dc.contributor.committeememberJurgens, David
dc.contributor.committeememberGere, Anne Ruggles
dc.contributor.committeememberSchleppegrell, Mary J
dc.subject.hlbsecondlevelEducation
dc.subject.hlbtoplevelSocial Sciences
dc.contributor.affiliationumcampusAnn Arbor
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/194541/1/jgod_1.pdf
dc.identifier.doihttps://dx.doi.org/10.7302/23889
dc.identifier.orcid0000-0002-1977-9427
dc.identifier.name-orcidGodfrey, Jason; 0000-0002-1977-9427
dc.working.doi10.7302/23889
dc.owningcollnameDissertations and Theses (Ph.D. and Master's)

