Causal Processes Underlying Unimodal and Multimodal Language
Ahn, EunSeon
2023
Abstract
Language, including speech production and perception, is a major cognitive function necessary for a healthy social and vocational outlook. Approximately 5-10% of the American population is reported to experience communication disorders, which can manifest as hearing impairments, difficulty speaking, speech impairments such as stuttering, and more complex language disorders (Ruben, 2009). Given the high prevalence of communication disorders in the United States and the crucial role that language plays in everyday life, it is important to investigate the neural processes and mechanisms that support this social function as well as the brain regions and networks involved. A deeper understanding of these mechanisms and their structural correlates can help identify the numerous ways in which these functions may be impaired by disorder, disease, or injury. By understanding which specific components of the process are affected by neural damage, researchers may gain greater insight into new ways to treat and rehabilitate language impairments and to promote the development of devices that can assist in living with these deficits.

In this dissertation, I focus on two aspects of language that are relevant to clinical deficits: semantic naming and audiovisual speech integration. Across these two components, I discuss three lines of research that examine the causal contributions of the brain regions involved in these unimodal and multimodal language functions. In Study 1, I employ a causal method, voxel-based lesion-symptom mapping, in intrinsic brain tumor patients to show that the left middle temporal gyrus (MTG) is the primary locus of semantic naming. This finding is consistent with established results in the stroke lesion literature and demonstrates the validity of the brain tumor model for lesion mapping.

In Study 2, I extend the scope of causal language mapping to audiovisual speech integration using the same brain tumor model. Audiovisual speech integration is a highly relevant form of multisensory integration: it merges information from separate unisensory modalities into a single coherent percept and is an important part of how the brain processes sensory information. Using lesion mapping, I examine which brain regions are critically responsible for audiovisual speech integration behaviors, testing whether the merging of conflicting audiovisual speech and the processing of congruent audiovisual speech rely on the same integration mechanism. This study challenges the widely held assumption that these two forms of audiovisual processing reflect a single integration mechanism.

Lastly, in Study 3, I extend the investigation of the brain regions causally involved in audiovisual speech to healthy individuals. In this study, single-pulse transcranial magnetic stimulation was applied to disrupt cortical activity in the left posterior superior temporal sulcus (pSTS), a region widely believed to be the hub of multisensory speech processing. I show that inhibitory stimulation to this multisensory zone can disrupt the fusing of conflicting audiovisual speech while having no effect on the processing of congruent audiovisual speech. These findings point to a dissociation in neural mechanisms between the two audiovisual integration processes and demonstrate that the pSTS is only one of multiple critical areas necessary for audiovisual speech interactions.
Subjects
Audiovisual processing; Multisensory integration; Semantic naming; Speech perception; Lesion symptom mapping; Transcranial magnetic stimulation
Types
Thesis