
Grounding Language Learning in Vision for Artificial Intelligence and Brain Research

dc.contributor.author: Zhang, Yizhen
dc.date.accessioned: 2021-09-24T19:15:17Z
dc.date.available: 2023-09-01
dc.date.available: 2021-09-24T19:15:17Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/2027.42/169841
dc.description.abstract: Most models for natural language processing learn words from text alone. Humans, in contrast, learn language by referring to real-world experience and knowledge. My research aims to ground language learning in visual perception, taking one step closer to making machines learn language the way humans do. To achieve this goal, I designed a two-stream model with deep neural networks: one stream extracts image features, the other extracts language features, and the two streams merge to connect image and language features in a joint representation space. Using contrastive learning, I first trained the model to align images with their captions, and then refined it to retrieve visual objects from language queries and infer their visual relations. After training, the model’s language stream is a stand-alone system capable of embedding words in a visually grounded semantic space. This space exhibits principal dimensions that are explainable with human intuition and neurobiological knowledge. The visually grounded language model also enables compositional language understanding based on visual knowledge, as well as multimodal image search with queries that combine image and text. The model can further explain human brain activity observed with functional magnetic resonance imaging during natural language comprehension, shedding new light on how the brain stores and organizes concepts by their semantic relations and attributes.
dc.language.iso: en_US
dc.subject: Machine learning and artificial intelligence
dc.subject: Language learning
dc.subject: Visual grounding
dc.subject: Multimodal learning
dc.subject: Neuroscience
dc.subject: Grounded cognition
dc.title: Grounding Language Learning in Vision for Artificial Intelligence and Brain Research
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Electrical and Computer Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Liu, Zhongming
dc.contributor.committeemember: Brang, David Joseph
dc.contributor.committeemember: Fessler, Jeffrey A
dc.contributor.committeemember: Owens, Andrew
dc.subject.hlbsecondlevel: Electrical Engineering
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/169841/1/zhyz_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/2886
dc.identifier.orcid: 0000-0002-2836-2666
dc.identifier.name-orcid: Zhang, Yizhen; 0000-0002-2836-2666
dc.working.doi: 10.7302/2886
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
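
The abstract above describes a two-stream architecture trained with contrastive learning to align images with their captions in a joint representation space. The following is a minimal, hypothetical sketch of that kind of model in PyTorch; the encoder designs, dimensions, and hyperparameters are illustrative assumptions and not the dissertation's actual implementation.

# Minimal sketch of a two-stream contrastive model (assumed architecture;
# encoders, dimensions, and hyperparameters are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamModel(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=256):
        super().__init__()
        # Vision stream: a small CNN standing in for the image feature extractor.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Language stream: word embeddings averaged over the caption,
        # then projected into the shared space.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.language = nn.Linear(embed_dim, embed_dim)

    def forward(self, images, token_ids):
        img = F.normalize(self.vision(images), dim=-1)
        txt = self.word_embed(token_ids).mean(dim=1)  # pool over words
        txt = F.normalize(self.language(txt), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Align each image with its own caption; push apart mismatched pairs.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: one training step on random data.
model = TwoStreamModel()
images = torch.randn(8, 3, 64, 64)
token_ids = torch.randint(0, 30000, (8, 12))
img_emb, txt_emb = model(images, token_ids)
loss = contrastive_loss(img_emb, txt_emb)
loss.backward()

The symmetric cross-entropy over the image-text similarity matrix is a standard contrastive objective for image-caption alignment; the dissertation's training procedure and model details may differ.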

