
WHAT’S IN THE CHATTERBOX? LARGE LANGUAGE MODELS, WHY THEY MATTER, AND WHAT WE SHOULD DO ABOUT THEM

dc.contributor.author: Okerlund, Johanna
dc.contributor.author: Klasky, Evan
dc.contributor.author: Middha, Aditya
dc.contributor.author: Kim, Sujin
dc.contributor.author: Rosenfeld, Hannah
dc.contributor.author: Kleinman, Molly
dc.contributor.author: Parthasarathy, Shobita
dc.date.accessioned: 2023-12-21T17:29:59Z
dc.date.available: 2023-12-21T17:29:59Z
dc.date.issued: 2022-04
dc.identifier.uri: https://hdl.handle.net/2027.42/191718
dc.description.abstract: Large language models (LLMs)—machine learning algorithms that can recognize, summarize, translate, predict, and generate human languages on the basis of very large text-based datasets—are likely to provide the most convincing computer-generated imitation of human language yet. Because language generated by LLMs will be more sophisticated and human-like than that of their predecessors, and because they perform better on tasks for which they have not been explicitly trained, we expect that they will be widely used. Policymakers might use them to assess public sentiment about pending legislation, patients could use them to summarize and evaluate the state of biomedical knowledge to empower their interactions with healthcare professionals, and scientists could use them to translate research findings across languages. In sum, LLMs have the potential to transform how and with whom we communicate.
dc.description.sponsorship: The Technology Assessment Project is supported in part through a generous grant from the Alfred P. Sloan Foundation (grant #G-2021-16769).
dc.language.iso: en_US
dc.subject: Large language models, ChatGPT, LLM, text-based datasets, human language
dc.title: WHAT’S IN THE CHATTERBOX? LARGE LANGUAGE MODELS, WHY THEY MATTER, AND WHAT WE SHOULD DO ABOUT THEM
dc.type: Technical Report
dc.subject.hlbtoplevel: Government, Politics and Law
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/191718/1/large-language-models-TAP-2022-final-051622.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/191718/2/LLMImplicationsforScience.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/191718/3/Large Language Models Executive Summary 2022.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/191718/4/large-language-models-one-pager STPP-TAP-2022-v3.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/21898
dc.identifier.source: UNIVERSITY OF MICHIGAN TECHNOLOGY ASSESSMENT PROJECT
dc.description.filedescription: Description of large-language-models-TAP-2022-final-051622.pdf: What’s in the Chatterbox? Large Language Models, Why They Matter, and What We Should Do About Them
dc.description.filedescription: Description of LLMImplicationsforScience.pdf: Implications for the Scientific Landscape (31 pages)
dc.description.filedescription: Description of Large Language Models Executive Summary 2022.pdf: Executive Summary - LLM
dc.description.filedescription: Description of large-language-models-one-pager STPP-TAP-2022-v3.pdf: One-pager LLM
dc.description.depositor: SELF
dc.working.doi: 10.7302/21898
dc.owningcollname: Science, Technology, and Public Policy (STPP) program

