Date: 2 July 2024

Dataset Title: Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity"

Dataset Creator: David Brang

Dataset Contact: David Brang, djbrang@umich.edu

Methodology

A de-identified iEEG and fMRI dataset obtained from individuals (n = 14 iEEG; n = 64 fMRI) during an auditory-visual speech perception task. The data is preprocessed according to the descriptions provided in the Readme. The task consisted of auditory, visual, and auditory-visual words presented to subjects, who were instructed to identify the initial consonant of each word. Anatomical information is provided for all subjects, taken from the FreeSurfer recon-all preprocessing pipeline. Matlab code to replicate the results in the accompanying manuscript is also included.

Instrument and/or Software specifications: Matlab R2019a or higher; SPM; FreeSurfer.

Description

Key Points:
- We provide a dataset of iEEG and fMRI recordings from participants who completed an auditory-visual speech perception task. The data is fully preprocessed and ready for analysis.

Research Overview:
Here, we investigated the hypothesis that the auditory system encodes visual speech information through spatially distributed representations, using fMRI data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words from the spatial pattern of auditory cortex responses. When we examined the time-course of classification in the intracranial recordings, lipread words were classified at earlier time points than heard words, suggesting a predictive mechanism for facilitating speech perception. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.

Files Contained here:

The dataset zip folder consists of two main sub-folders: (1) iEEG_Data and (2) fMRI_Data.

Within iEEG_Data there are three relevant groups of subfolders:
* A folder named for each of the 14 subjects that contains the ERP data in Matlab format (at 500 Hz and 10 Hz). The files named X_ECoGInfo.mat are used by the analysis scripts to identify channel information.
* The Freesurfer directory contains relevant data for electrodes analyzed in MNI space. Specifically, iEEG data were registered to MNI space using FreeSurfer's cvs_avg35_inMNI152 template.
* The Electrodes folder contains one file per subject (X_Electrodes.mat) with the RAS coordinates for each electrode.

Within fMRI_Data there are four relevant subfolders:
* GroupData contains each subject's MVPA and univariate results registered to the fsaverage surfaces.
* ROIs contains subject-specific ROIs in functional space.
* SubData contains one folder for each of the 64 subjects; each folder holds the beta files, contrast files, and SPM.mat files.
* T1s contains the defaced T1.mgz files.

The top-level directory contains three analysis scripts:
* iEEG_Classification.m is the main script used for the analysis of iEEG data.
* iEEG_PlotElectrodes.m is used for plotting the results of electrode-level classification.
* fMRI_Classification.m is the main script used for the analysis of fMRI data.
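To orient new users, here is a minimal, illustrative sketch of loading one subject's iEEG files and running a cross-validated linear decoder in Matlab. The directory layout follows the description above, but the ERP file name and the field names inside the .mat files (erp, labels) are assumptions; iEEG_Classification.m implements the actual published analysis.

    % Illustrative only -- names marked "hypothetical" are assumptions; see
    % the Readme and iEEG_Classification.m for the real file structures.
    subj = 'S01';                                                % hypothetical subject ID
    info = load(fullfile('iEEG_Data', subj, [subj '_ECoGInfo.mat']));           % channel info
    elec = load(fullfile('iEEG_Data', 'Electrodes', [subj '_Electrodes.mat'])); % RAS coordinates

    d    = load(fullfile('iEEG_Data', subj, [subj '_ERP_500Hz.mat']));  % hypothetical file name
    tWin = 51:150;                             % hypothetical post-stimulus sample window
    X    = mean(d.erp(:, :, tWin), 3);         % trials x channels spatial pattern (assumed field)
    y    = d.labels;                           % word identity per trial (assumed field)

    % 10-fold cross-validated linear SVM decoding of word identity from the
    % spatial pattern of responses -- the same spirit as the manuscript's
    % linear classifiers, not necessarily the exact implementation.
    mdl = fitcecoc(X, y, 'Learners', templateSVM('KernelFunction', 'linear'), 'KFold', 10);
    fprintf('Decoding accuracy: %.1f%%\n', 100 * (1 - kfoldLoss(mdl)));

For the fMRI data, the beta images in each SubData folder can be read with SPM's standard Matlab utilities (spm_vol, spm_read_vols). The sketch below assumes SPM is on the Matlab path; the beta file follows SPM's default naming, and the ROI mask name is a placeholder, so check fMRI_Classification.m for the exact inputs.

    % Illustrative only -- file names are placeholders.
    subjDir = fullfile('fMRI_Data', 'SubData', 'Sub01');     % hypothetical subject folder
    V = spm_vol(fullfile(subjDir, 'beta_0001.nii'));         % header for one beta image
    Y = spm_read_vols(V);                                    % voxel data as a 3-D array

    % Restrict to a subject-specific ROI (mask name hypothetical).
    roi     = spm_read_vols(spm_vol(fullfile('fMRI_Data', 'ROIs', 'Sub01_ROI.nii')));
    pattern = Y(roi > 0);                                    % spatial pattern used for MVPA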
Use and Access:
This dataset is made available under a Creative Commons Attribution-NonCommercial License (CC BY-NC 4.0).

To Cite Data:
Brang, D. Dataset for "Auditory cortex encodes lipreading information through spatially distributed activity" [Data set]. University of Michigan - Deep Blue Data. https://doi.org/10.7302/0xb6-8855