Work Description

Title: Prosodic Processing: Neuroimaging in children and adults Open Access Deposited

Methodology
  • We used Dr. Ted Huppert's NIRS Toolbox to process this optical data, ultimately telling us whether the infrared light passed through oxygenated blood (HbO) or deoxygenated blood (HbR) in a given region at a given point in time. Oxygenated blood is measured as a proxy for activation in the brain, so we are able to make inferences about a BOLD response to a given stimulus.

  • This technology was used to study prosodic processing in children and adults in this experiment. In one trial subjects listened to two sentences, and they had to judge whether the sentences were the same or not. There were 24 trials of each type in an experimental run. There were three conditions: (1) same words, same intonation, (2) same words, different intonation, (3) different words, same intonation.

  • Intonation was altered in the 'Different Intonation' condition, which was the key condition: the first sentence might be a statement, and the next sentence would be a question using the same words in the same order.

  • In the rhyming task, participants listened to two words and had to judge whether the second rhymed with the first. There were three conditions: (1) rhyme where the two words rhyme, (2) easy non-rhyme where the phonemes were easily distinguishable, (3) hard non-rhyme where the phonemes were closer in sound.

  • In the oddball task, subjects listened to single phonemes (e.g. 'LA'), but sometimes there was an "oddball" (e.g. 'DA'). Participants had to press a button when they heard an "oddball" phoneme.
Description
  • This data is from a project examining prosodic processing in children and adults using functional near-infrared spectroscopy (fNIRS) neuroimaging. fNIRS data is optical data collected using a cap with an array of source and detector fibers that emit and detect infrared light, respectively. We used fNIRS neuroimaging to explore prosodic processing, rhyme judgement, and the "oddball" paradigm in children, adults, and a small sample of children with cochlear implants. MATLAB scripts, including Dr. Ted Huppert's NIRS Toolbox, were used to process the neuroimaging data. The children also took a battery of behavioral assessments (OWLS, Digit Span, PPVT, CTOPP).
Creator
Depositor
  • zackar@umich.edu
Contact information
Discipline
Funding agency
  • National Institutes of Health (NIH)
  • Other Funding Agency
Other Funding agency
  • Center for Human Growth and Development and University of Michigan Office for Research, University of Michigan

  • Michigan Institute for Health and Clinical Science, University of Michigan
ORSP grant number
  • NIH R01HD092498
Keyword
Resource type
Last modified
  • 11/20/2022
Published
  • 07/22/2021
Language
DOI
  • https://doi.org/10.7302/xj95-gv44
License
To Cite this Work:
Pasquinelli, R., Hu, X., Tessier, A., Kovelman, I., Zwolan, T. A., Karas, Z. E., Wagley, N. (2021). Prosodic Processing: Neuroimaging in children and adults [Data set], University of Michigan - Deep Blue Data. https://doi.org/10.7302/xj95-gv44

Relationships

This work is not a member of any user collections.

Files (Count: 18; Size: 14.1 GB)

GENERAL INFO------------------------------------------------------------------------------

This data is from a project examining prosodic processing in children and adults using functional near-infrared spectroscopy (fNIRS) neuroimaging. More detail on the experimental design is provided below in the Methods section. Data was collected in 2017 in the Language and Literacy Lab at the University of Michigan. The project was led by Rennie Pasquinelli under the guidance of Dr. Ioulia Kovelman and Dr. Anne-Michelle Tessier. The study had three sources of funding: two were internal to the University of Michigan, and the third was from the National Institutes of Health:

2016-2018 Source: Michigan Institute for Health and Clinical Science, University of Michigan. Investigators: Hu, Kovelman, Johnson, Kileny
Grant Title: NIRS Imaging for Understanding Language and Hearing Restoration with Cochlear Implants

2018-2023 Source: National Institutes of Health (NIH) R01, R01 HD092498. Investigators: Kovelman, Hu, Johnson, Tardif, Satterfield. Title: Impact of heritage language on bilingual children’s path to English literacy

2014-2018 Source: Center for Human Growth and Development and University of Michigan Office for Research, University of Michigan. Title: Pediatric Neuroimaging with functional Near Infrared Spectroscopy. Investigators: Volling, Kovelman

This NIRS data was analyzed on a Mac desktop computer running macOS Catalina version 10.15.7.

The MATLAB version used for analyzing the data was 2019a. There were plotting issues in MATLAB version 2019b and later (brain activations plotted only in blue, rather than in blue and red).

The raw nirs files cannot be opened directly, but they can be loaded into MATLAB after adding the 'HLR01 data analysis 2020' folder to your path and then using the 'nirs.loadDirectory' command. This command is used in the 'FullPipeline_V2.m' script, which is the primary script with which users need to interact. Tutorials for how to use the toolbox can be found on YouTube: https://www.youtube.com/watch?v=zlAC3aWW8kc&ab_channel=NIRxMedicalTechnologies
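A minimal loading sketch, assuming the folder layout described in the file inventory below (the second argument to 'nirs.loadDirectory', which tells the loader how to interpret the folder hierarchy, is an assumption based on the toolbox's conventions and should be checked against 'FullPipeline_V2.m'):

```matlab
% Add the toolbox and back-end analysis scripts to the MATLAB path
% (edit this path to point at your copy of the folder).
addpath(genpath('HLR01 data analysis 2020'));

% Load all raw .nirs files under the given folder into an array of
% toolbox data objects, one element per recording.
raw = nirs.loadDirectory('fNIRS data/Intonation Task/Raw', {'group','subject'});

% Each element holds the optical time series plus a demographics table.
disp(raw(1).demographics);
```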

To open the processed data (subject level and group level statistics), you will need to use the 'FullPipeline_V2.m' script. The first two uncommented lines will set the path to the 'HLR01 data analysis 2020' folder and the analysis scripts. YOU WILL NEED TO EDIT THE PATHS TO THIS FOLDER.

Once the path to 'HLR01 data analysis 2020' is set, you can load the Subject Stats and Group Stats without the fields loading in as 'uint32' (as they would otherwise appear in the Variables panel of MATLAB).
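As a sketch of that loading step (the .mat file names below are placeholders; use the files in the 'Processed' folders of the task you are analyzing):

```matlab
% With 'HLR01 data analysis 2020' on the path, the processed statistics
% load as toolbox objects rather than as opaque 'uint32' fields.
addpath(genpath('HLR01 data analysis 2020'));

S = load('fNIRS data/Intonation Task/Processed/SubjStats.mat');   % placeholder name
G = load('fNIRS data/Intonation Task/Processed/GroupStats.mat');  % placeholder name
```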

Custom analysis scripts were written by Dr. Xiaosu (Frank) Hu, Rebecca Marks, Jessica Kim, Chi-Lin Yu, and Zach Karas.

FILE INVENTORY----------------------------------------------------------------------------

E-Prime_ReactionTime_Accuracy.xlsx - Excel document containing accuracy and reaction time information from all three neuroimaging tasks, gender, and age of the participants. Missing information is conveyed with a 'NaN' (not a number) label and light-blue highlights.

Analysis Scripts (folder)-

frankPlot - this is a custom plotting function for brain plots written by Dr. Xiaosu (Frank) Hu. It is called by the FullPipeline_V2.m script, but on line 26 you can change 'pval' to 'qval' and 0.05 to 0.01, depending on how you'd like to threshold the activations on the brain plots.

GainAdjustmentforCW6 - involved with the preprocessing steps in the FullPipeline_V2.m script.

getintensity.m - uses t-values from channel activations and scales these to red and blue intensities for the brain plots.

getp.m - extracts the p-values from the channel statistics at either the subject or group level.

getq.m - extracts the q-values from the channel statistics at either the subject or group level.

Intonation_finalStimFix.m - adjusts the stimulus marks so both sentences in each trial are modeled separately.

Plot3D_channel_registration_result.m - involved with the frankPlot plotting function, written by Dr. Xiaosu (Frank) Hu. You do not need to interact with this file directly. It is used "behind-the-scenes" by frankPlot.

Behavioral Test Scores - contains raw and standardized scores from behavioral measures administered to normal-hearing children and children with cochlear implants: Oral and Written Language Scales (OWLS), Digit Span, Nonword Repetition, and Peabody Picture Vocabulary Test (PPVT).

CHMNI_Bilateral46_AUG2020.mat - contains the MNI coordinates for the sources and detectors in the cap used for this experiment.

Consent_NH group_fNIRS.AME00075796 .pdf - Consent form used in the experiments.

fNIRS data (folder) -

Intonation Task - contains raw nirs files and processed subject-level and group-level statistics from the intonation task.

Raw - contains raw nirs files from all three groups.

Processed - contains the Subject and Group level statistics after quality control checks were implemented.

Oddball Task - contains raw nirs files from the oddball task. These data were not processed, so only the raw files exist.

Raw - raw nirs files from all three groups.

Rhyming Task - contains raw nirs files and processed subject-level and group-level statistics from the rhyming task.

Raw - raw nirs files from all three groups.

Processed - contains the Subject and Group level statistics after quality control checks were implemented.

NOTE: Users cannot open these files directly, and will need to do so through MATLAB.

FullPipeline_V2.m - this is the main pipeline script. To run any analyses or load any data, this will be the primary script.

HLR01 data analysis 2020 (folder) - contains the nirs toolbox and the back-end analysis scripts used by the FullPipeline_V2.m script. USERS DO NOT NEED TO INTERACT WITH THIS FOLDER.

Intonation ACC.xlsx - contains accuracy information from the E-Prime task completed by participants during the neuroimaging experiment.

Intonation_stim_deletion - script used to delete extra stimulus marks hypothesized to come from the rest trials.

plotting.m - independent plotting script that can be used instead of locating the plotting code in the FullPipeline_V2.m

README - file of general info and experimental methods. This file.

DEFINITIONS OF TERMS AND VARIABLES--------------------------------------------------------

Beta value - a quantity calculated from the general linear model that quantifies brain activation.

Cap - the physical cap worn by participants where source and detector fiberoptic cables are attached.

Channel - the path between a source-detector pair along which light is measured while data is being recorded.

Condition - one attribute of an experimental task. To give an example, in this experiment studying prosody, there was one condition where the sentences were the same, another condition where the two sentences contained different words, and another condition where the sentences had the same words but different intonation.

Detector (fiber) - a fiberoptic cable that detects the infrared light emitted by sources.

fNIRS - a neuroimaging technique that studies changes in brain activation by shining near-infrared light into subjects' heads and detecting the changes in intensity.

General Linear Model (GLM) - A mathematical model used to calculate the brain signal. The model attempts to mirror the raw signal by combining a standardized hemodynamic response function with the experimental design as well as confounding factors such as motion or noise.

GroupStats - A variable in the Matlab workspace that summarizes the brain activations for all of the subjects included in the model. Contains other statistics as well, such as t-statistics and p/q-values for each channel and condition.

Motion Artifacts - participant motion that can be seen as jumps or spikes in the raw fNIRS signal.

Pipeline - Processing steps that are run sequentially with the goal, in our case, of turning raw data into statistical results.

Preprocessing - Early steps in the pipeline to clean the raw data of noise and participant motion artifacts. These steps also help to translate the data into a format that can be used for statistical analyses.

Prosody - patterns of pitch and emphasis in speech.

Prosodic processing - the brain mechanisms used to extrapolate meaning from changes in pitch and emphasis.

Raw data - unprocessed data. The data as it's originally recorded.

Script - a file of executable MATLAB code.

Source (fiber) - a fiberoptic cable that emits infrared light.

SubjStats - A variable in the Matlab workspace that summarizes the brain activations for each participant individually, calculated using a general linear model. Contains other statistics as well, such as t-statistics and p/q-values for each channel and condition.

Trial - One discrete event that occurs in an experimental task. For our purposes, in one trial a participant hears two sentences and is given some time to decide whether the sentences are the same or not.

METHODS-----------------------------------------------------------------------------------

fNIRS data is optical data collected using a cap with an array of source and detector fibers that emit and detect infrared light, respectively. We use Dr. Ted Huppert's NIRS Toolbox to process this optical data, ultimately telling us whether the infrared light passed through oxygenated blood (HbO) or deoxygenated blood (HbR) in a given region at a given point in time. Oxygenated blood is measured as a proxy for activation in the brain, so we are able to make inferences about a BOLD response to a given stimulus.

This technology was used to study prosodic processing in children and adults in this experiment. In one trial subjects listened to two sentences, and they had to judge whether the sentences were the same or not. There were 24 trials of each type in an experimental run. There were three conditions:
(1) same words, same intonation
(2) same words, different intonation
(3) different words, same intonation.

Intonation was altered in the 'Different Intonation' condition, which was the key condition: the first sentence might be a statement, and the next sentence would be a question using the same words in the same order. Each trial was 6.5 seconds long, with each sentence taking 2.25 seconds and a response window of 2 seconds. For our purposes, we were only interested in the brain activation from the second sentence, so the 'Intonation_finalStimFix.m' script was created to split the sentences for analysis. The response time was not analyzed. This sentence separation was done after the raw data was loaded in using the 'FullPipeline_V2.m' script, but before any processing of the raw data.

In the rhyming task, participants listened to two words and had to judge whether the second rhymed with the first. There were three conditions:
(1) rhyme where the two words rhyme
(2) easy non-rhyme where the phonemes were easily distinguishable
(3) hard non-rhyme where the phonemes were closer in sound

In the oddball task, subjects listened to single phonemes (e.g. 'LA'), but sometimes there was an "oddball" (e.g. 'DA'). Participants had to press a button when they heard an "oddball" phoneme.

The 'FullPipeline_V2.m' script was run section by section to process the data. The t-values, beta values, p/q-values, and other statistics for each subject, channel by channel, can be found in the SubjStats variables. Beta values are used as a measure of brain activity and are calculated using a general linear model that models the brain activity based on an idealized hemodynamic response function, the experimental design (stimulus marks), and other factors that may contribute to noise. A more detailed explanation can be found by watching Dr. Ted Huppert's workshops online, or by reading his publications.
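A sketch of pulling those channel-wise statistics out of a SubjStats entry, assuming the toolbox's channel-statistics interface (the .table method and field names follow NIRS Toolbox conventions and are not guaranteed by this deposit):

```matlab
% Each SubjStats element summarizes one participant: one row per
% channel-and-condition, holding beta, t, p, and q values.
stats = SubjStats(1);

T = stats.table();        % flatten the statistics into a MATLAB table
disp(stats.conditions);   % condition names used in the GLM
disp(T(T.q < 0.05, :));   % channels surviving an FDR (q-value) threshold
```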

After quality control checks were run to determine which participants would be excluded from the group-level model, accuracy data from E-Prime was included as demographic information. These values were written in the 'Intonation ACC.xlsx' file. The group-level formula used was 'beta ~ -1 + Group:cond + ACC + (1|Subject)'.
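The quoted formula can be fit with the toolbox's mixed-effects module; a sketch (the module name and fields follow NIRS Toolbox conventions and should be checked against 'FullPipeline_V2.m'):

```matlab
% Fit the group-level linear mixed-effects model across subjects.
% 'beta ~ -1 + Group:cond + ACC + (1|Subject)' estimates a separate
% effect for every group-by-condition cell (no intercept), covaries
% out task accuracy (ACC), and adds a random intercept per subject.
j = nirs.modules.MixedEffects();
j.formula = 'beta ~ -1 + Group:cond + ACC + (1|Subject)';
GroupStats = j.run(SubjStats);
```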

In plotting the group level statistics, contrast vectors (e.g. [1 0 0 1 0 0 1 0 0]) are used to determine which conditions are "on" in the plot. These conditions can be found in the SubjStats and GroupStats variables in the 'conditions' field. The values plotted onto the brain plots are the t-values associated with each channel. Because each sentence was modeled separately in our analysis, contrast vectors exceeded lengths of 36 numbers.
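A contrast like the example above can be applied and the resulting t-values plotted; a sketch assuming the toolbox's ttest/draw interface (the short vector below is illustrative only — as noted above, the real contrast vectors here exceed 36 entries because sentences were modeled separately):

```matlab
% Build a contrast over the conditions listed in GroupStats.conditions.
% A 1 switches a condition 'on'; the vector length must match the
% number of conditions in the model.
c = [1 0 0 1 0 0 1 0 0];
contrastStats = GroupStats.ttest(c);

% Plot channel t-values, thresholded at q < 0.05.
contrastStats.draw('tstat', [], 'q < 0.05');
```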

Jessica Kim did much of the initial analyses, but in 2019 the project was passed to Zach Karas, who restarted the analyses and helped bring the project to completion.

-written by Zach Karas 1/28/2021

Download All Files (to download individual files, select them in the "Files" panel above)

The total work file size of 14.1 GB is too large to download directly. Consider using Globus, the platform Deep Blue Data uses to make large data sets available; it is recommended for data sets larger than 3 GB.
