Work Description

Title: Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity"

Methodology
  • A de-identified iEEG and fMRI dataset obtained from individuals (n = 14 iEEG; n = 64 fMRI) during an auditory-visual speech perception task. The data is preprocessed according to the descriptions provided in the Readme. The task consisted of auditory, visual, and auditory-visual words presented to subjects, who were instructed to identify the initial consonant of each word. Anatomical information is provided for all subjects, taken from the Freesurfer recon-all preprocessing pipeline. Matlab scripts to replicate the results in the accompanying manuscript are also included. Instrument and/or Software specifications: Matlab R2019a or higher; SPM (https://www.fil.ion.ucl.ac.uk/spm/software/); Freesurfer (https://surfer.nmr.mgh.harvard.edu/).
Description
  • Key Points: We provide a dataset obtained from iEEG and fMRI participants who completed an auditory-visual speech perception task. The data is fully preprocessed and ready for analysis. Research Overview: Here, we investigated the hypothesis that the auditory system encodes visual speech information through spatially distributed representations, using fMRI data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words from the spatial pattern of auditory cortex responses. When we examined the time-course of classification in the intracranial recordings, lipread words were classified at earlier time-points than heard words, suggesting a predictive mechanism for facilitating speech perception. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
Creator
  • David Brang
Depositor
  • djbrang@umich.edu
Contact information
  • David Brang, djbrang@umich.edu
Discipline
Funding agency
  • National Institutes of Health (NIH)
Citations to related material
  • Article in review
Resource type
  • Dataset
Last modified
  • 07/19/2024
Published
  • 07/19/2024
DOI
  • https://doi.org/10.7302/0xb6-8855
License
  • Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
To Cite this Work:
Brang, D. (2024). Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity" [Data set]. University of Michigan - Deep Blue Data. https://doi.org/10.7302/0xb6-8855

Relationships

This work is not a member of any user collections.

Files (Count: 2; Size: 24.5 GB)

Date: 2 July 2024

Dataset Title: Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity"

Dataset Creator: David Brang

Dataset Contact: David Brang, djbrang@umich.edu

 
Methodology
A de-identified iEEG and fMRI dataset obtained from individuals (n = 14 iEEG; n = 64 fMRI) during an auditory-visual speech perception task. The data is preprocessed according to the descriptions provided in the Readme. The task consisted of auditory, visual, and auditory-visual words presented to subjects, who were instructed to identify the initial consonant of each word. Anatomical information is provided for all subjects, taken from the Freesurfer recon-all preprocessing pipeline. Matlab scripts to replicate the results in the accompanying manuscript are also included.

Instrument and/or Software specifications: Matlab R2019a or higher; SPM (https://www.fil.ion.ucl.ac.uk/spm/software/); Freesurfer (https://surfer.nmr.mgh.harvard.edu/).
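Before running the included scripts, a quick Matlab sanity check of these prerequisites can be helpful. This is a minimal sketch; the installation paths are placeholders for your local setup, not paths shipped with the dataset.

% Minimal environment check (placeholder paths; edit for your system)
assert(~verLessThan('matlab', '9.6'), 'Matlab R2019a (version 9.6) or higher is required');
addpath('/path/to/spm12');              % your SPM installation (placeholder path)
addpath('/path/to/freesurfer/matlab');  % Freesurfer's Matlab utilities, e.g., MRIread (placeholder path)
assert(exist('spm', 'file') == 2, 'SPM was not found on the Matlab path');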

 
Description   
Key Points: We provide a dataset obtained from iEEG and fMRI participants who completed an auditory-visual speech perception task. The data is fully preprocessed and ready for analysis.
Research Overview: Here, we investigated the hypothesis that the auditory system encodes visual speech information through spatially distributed representations, using fMRI data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words from the spatial pattern of auditory cortex responses. When we examined the time-course of classification in the intracranial recordings, lipread words were classified at earlier time-points than heard words, suggesting a predictive mechanism for facilitating speech perception. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
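For orientation, the sketch below shows the general shape of this kind of multivariate decoding in Matlab (Statistics and Machine Learning Toolbox). It runs on placeholder data and is not the authors' pipeline; the actual analyses are implemented in the included iEEG_Classification.m and fMRI_Classification.m scripts.

% Illustrative multiclass linear decoding on placeholder data (not the authors' pipeline)
rng(1);                                   % reproducible cross-validation folds
X = randn(120, 50);                       % 120 trials x 50 spatial features (placeholder)
y = repmat((1:4)', 30, 1);                % 4 word labels, 30 trials each (placeholder)
mdl = fitcecoc(X, y, 'Learners', templateSVM('KernelFunction', 'linear'));
cv  = crossval(mdl, 'KFold', 10);         % 10-fold cross-validation
acc = 1 - kfoldLoss(cv);                  % mean decoding accuracy across folds
fprintf('Decoding accuracy: %.1f%% (chance = %.1f%%)\n', 100*acc, 100/4);

With random placeholder data the accuracy should sit near the 25% chance level; above-chance accuracy on real spatial patterns is what indicates decodable word information.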
 
Files Contained here:
The dataset zip folder consists of two main sub-folders: (1) iEEG_Data and (2) fMRI_Data.

Within iEEG_Data there are three relevant groups of subfolders (a loading sketch follows this list):
* A folder named for each of the 14 subjects that contains the ERP data in Matlab format (at 500 Hz and 10 Hz). The files named X_ECoGInfo.mat are used by the analysis scripts to identify channel information.
* The Freesurfer directory contains relevant data for electrodes analyzed in MNI space. Specifically, iEEG data were registered to MNI space using Freesurfer's cvs_avg35_inMNI152 template.
* The Electrodes folder contains one file per subject (X_Electrodes.mat) that contains the RAS coordinates for each electrode.
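As a hypothetical loading sketch: the subject ID and the variable names stored inside each .mat file are assumptions here, so inspect the files (e.g., with whos) and consult the Readme for the actual layout.

% Hypothetical example of loading one subject's iEEG files (subject ID is a placeholder)
root = 'iEEG_Data';
subj = 'S01';                                                         % placeholder subject ID
info = load(fullfile(root, subj, [subj '_ECoGInfo.mat']));            % channel information
elec = load(fullfile(root, 'Electrodes', [subj '_Electrodes.mat']));  % RAS electrode coordinates
dir(fullfile(root, subj))                                             % list the ERP files (500 Hz and 10 Hz)
whos('-file', fullfile(root, subj, [subj '_ECoGInfo.mat']))           % show the variables stored in the file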

Within fMRI_Data there are four relevant subfolders (a loading sketch follows this list):
* GroupData contains each subject's MVPA and univariate results registered to the fsaverage surfaces.
* ROIs contains subject-specific ROIs in functional space.
* SubData contains one folder for each of the 64 subjects; within each are the beta files, contrast files, and SPM.mat files.
* T1s contains the defaced T1.mgz files.
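The sketch below shows one way to read these files using SPM's I/O utilities and Freesurfer's Matlab tools; the subject folder name, the beta filename (standard SPM naming), and the T1 filename are placeholders, not names confirmed by this Readme.

% Hypothetical example of reading one subject's fMRI files (names are placeholders)
subjdir = fullfile('fMRI_Data', 'SubData', 'sub01');   % placeholder subject folder
load(fullfile(subjdir, 'SPM.mat'));                    % loads the SPM design structure 'SPM'
V    = spm_vol(fullfile(subjdir, 'beta_0001.nii'));    % header of one beta image (standard SPM naming)
beta = spm_read_vols(V);                               % beta estimates as a 3-D array
t1   = MRIread(fullfile('fMRI_Data', 'T1s', 'sub01_T1.mgz'));  % defaced T1; requires Freesurfer's Matlab path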

The top-level directory contains three analysis scripts (a usage sketch follows the list):
iEEG_Classification.m is the main script used for the analysis of iEEG data.
iEEG_PlotElectrodes.m is used for plotting the results of electrode-level classification.
fMRI_Classification.m is the main script used for the analysis of fMRI data.
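A hypothetical getting-started sequence, assuming the archive has been unzipped, the prerequisites above are on the Matlab path, and the scripts run as plain scripts with no arguments (check each script's header if not):

% Hypothetical getting-started sequence (dataset root is a placeholder path)
cd('/path/to/unzipped/dataset');   % folder containing the scripts, iEEG_Data, and fMRI_Data
iEEG_Classification                % main iEEG analysis
fMRI_Classification                % main fMRI analysis
iEEG_PlotElectrodes                % plot electrode-level classification results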

Use and Access:
This data set is made available under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).

To Cite Data:
Brang, D. (2024). Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity" [Data set]. University of Michigan - Deep Blue Data. https://doi.org/10.7302/0xb6-8855

Downloading the Data:
The total work file size of 24.5 GB is too large to download directly; use Globus, the platform Deep Blue Data uses to make large data sets (> 3 GB) available. Individual files can be downloaded by selecting them in the "Files" panel.
