WHAT’S IN THE CHATTERBOX? LARGE LANGUAGE MODELS, WHY THEY MATTER, AND WHAT WE SHOULD DO ABOUT THEM
dc.contributor.author | Okerlund, Johanna | |
dc.contributor.author | Klasky, Evan | |
dc.contributor.author | Middha, Aditya | |
dc.contributor.author | Kim, Sujin | |
dc.contributor.author | Rosenfeld, Hannah | |
dc.contributor.author | Kleinman, Molly | |
dc.contributor.author | Parthasarathy, Shobita | |
dc.date.accessioned | 2023-12-21T17:29:59Z | |
dc.date.available | 2023-12-21T17:29:59Z | |
dc.date.issued | 2022-04 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/191718 | en |
dc.description.abstract | Large language models (LLMs)—machine learning algorithms that can recognize, summarize, translate, predict, and generate human language on the basis of very large text-based datasets—are likely to provide the most convincing computer-generated imitation of human language yet. Because language generated by LLMs will be more sophisticated and human-like than that of their predecessors, and because they perform better on tasks for which they have not been explicitly trained, we expect that they will be widely used. Policymakers might use them to assess public sentiment about pending legislation, patients could use them to summarize and evaluate the state of biomedical knowledge to empower their interactions with healthcare professionals, and scientists could translate research findings across languages. In sum, LLMs have the potential to transform how and with whom we communicate. | en_US |
dc.description.sponsorship | The Technology Assessment Project is supported in part through a generous grant from the Alfred P. Sloan Foundation (grant #G-2021-16769) | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Large language models, ChatGPT, LLM, text-based datasets, human language | en_US |
dc.title | WHAT’S IN THE CHATTERBOX? LARGE LANGUAGE MODELS, WHY THEY MATTER, AND WHAT WE SHOULD DO ABOUT THEM | en_US |
dc.type | Technical Report | en_US |
dc.subject.hlbtoplevel | Government, Politics and Law | |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/191718/1/large-language-models-TAP-2022-final-051622.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/191718/2/LLMImplicationsforScience.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/191718/3/Large Language Models Executive Summary 2022.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/191718/4/large-language-models-one-pager STPP-TAP-2022-v3.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/21898 | |
dc.identifier.source | UNIVERSITY OF MICHIGAN TECHNOLOGY ASSESSMENT PROJECT | en_US |
dc.description.filedescription | Description of large-language-models-TAP-2022-final-051622.pdf : What’s in the Chatterbox? Large Language Models, Why They Matter, and What We Should Do About Them | |
dc.description.filedescription | Description of LLMImplicationsforScience.pdf : Implications for the Scientific Landscape (31 pages) | |
dc.description.filedescription | Description of Large Language Models Executive Summary 2022.pdf : Executive Summary- LLM | |
dc.description.filedescription | Description of large-language-models-one-pager STPP-TAP-2022-v3.pdf : One-pager LLM | |
dc.description.depositor | SELF | en_US |
dc.working.doi | 10.7302/21898 | en_US |
dc.owningcollname | Science, Technology, and Public Policy (STPP) program |