Key Points:
- We provide a dataset obtained from iEEG and fMRI recordings of participants who completed an auditory-visual speech perception task.
- The data are fully preprocessed and ready for analysis.
Research Overview: Here, we investigated the hypothesis that the auditory system encodes visual speech information through spatially distributed representations, using fMRI data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words from the spatial pattern of auditory cortex responses. When we examined the time-course of classification in the intracranial recordings, lipread words were classified at earlier time-points than heard words, suggesting a predictive mechanism that facilitates speech perception. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
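The decoding analysis described above can be sketched as a cross-validated linear classifier applied to trial-wise spatial response patterns. The following is a minimal illustration on synthetic data, not the study's actual pipeline; the trial counts, feature counts, and noise level are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 120 trials x 50 auditory-cortex features (voxels or
# electrodes), with 4 word identities whose mean spatial patterns differ.
n_trials, n_features, n_words = 120, 50, 4
labels = rng.integers(0, n_words, size=n_trials)
patterns = rng.normal(size=(n_words, n_features))   # one mean pattern per word
X = patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))

# Linear classifier with 5-fold cross-validation; accuracy above chance
# (1/4 here) indicates word identity is linearly decodable from the
# spatial response pattern.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, labels, cv=5).mean()
print(f"mean cross-validated accuracy: {accuracy:.2f}")
```

The same logic applies to the time-resolved analysis: repeating the fit within a sliding time window yields the classification time-course used to compare lipread and heard words.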
Key Points:
- We provide a dataset obtained from iEEG
- A total of 5 participants completed an audio-visual spatial memory task, with memory-associated sounds played during sleep.
- The data are fully preprocessed and ready for analysis in three frequency bands: theta (4-8 Hz), sigma (12-16 Hz), and gamma (20-100 Hz). We followed up by testing low gamma (20-50 Hz), mid gamma (50-80 Hz), and high gamma (80-100 Hz), as well as a separate ripple analysis.

Research Overview:
Here, we investigated overnight memory change by measuring electrical activity in and near the hippocampus. Electroencephalographic (EEG) recordings were made in five patients from electrodes implanted to determine whether a surgical treatment could relieve their seizure disorders. One night, while each patient slept in a hospital monitoring room, we recorded electrophysiological responses to 10-20 specific sounds that were presented very quietly, to avoid arousal. Half of the sounds had been associated with objects whose precise spatial locations patients had learned before sleep. After sleep, we found systematic improvements in spatial recall, replicating prior results. We assume that when the sounds were presented during sleep, they reactivated and strengthened the corresponding spatial memories. Notably, the sounds also elicited oscillatory intracranial EEG activity, including increases in the theta, sigma, and gamma EEG bands. Gamma responses, in particular, were consistently associated with the degree of improvement in spatial memory exhibited after sleep. We thus conclude that this electrophysiological activity in the hippocampus and adjacent medial temporal cortex reflects sleep-based enhancement of memory storage.
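Band-limited activity of the kind analyzed here is typically extracted with a zero-phase bandpass filter followed by a Hilbert envelope. The sketch below shows this for the theta, sigma, and gamma bands named above; the filter order, simulated signal, and use of SciPy are illustrative assumptions, not the dataset's actual preprocessing code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(signal, fs, low, high, order=4):
    """Zero-phase bandpass, then instantaneous power via the Hilbert envelope."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    return np.abs(hilbert(filtered)) ** 2

fs = 1024                               # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
# Toy signal: a 6 Hz (theta-band) oscillation plus broadband noise.
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Band edges as reported above; gamma is further split in the follow-up analyses.
bands = {"theta": (4, 8), "sigma": (12, 16), "gamma": (20, 100)}
power = {name: band_power(eeg, fs, lo, hi).mean() for name, (lo, hi) in bands.items()}
print(power)
```

For the toy signal, mean theta power dominates because the oscillation falls inside the 4-8 Hz band, while sigma and gamma capture only noise.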
Citation to related publication:
Creery JD, Brang D, Arndt JD, Bassard A, Towle VL, Tao JX, Wu S, Rose S, Warnke P, Issa NP, Paller KA (in press). Electrical Markers of Memory Consolidation in the Human Brain when Memories are Reactivated during Sleep. Proceedings of the National Academy of Sciences.
Data were acquired from 21 patients with intractable epilepsy undergoing clinical evaluation using iEEG. Patients ranged in age from 15-58 years (mean = 37.1, SD = 12.8) and included 10 females. Across all patients, data were recorded from a total of 1367 electrodes.

Each participant was presented with multiple trials of auditory-only and congruent audio-visual stimuli. On each trial, a single phoneme was presented to the participant. Three variants of the task were used, each consisting of a different set of phonemes (variant A: /ba/ /da/ /ta/ /tha/; variant B: /ba/ /da/ /ga/; variant C: /ba/ /ga/ /ka/ /pa/). Trials were presented in a random order, and phonemes were distributed uniformly across conditions. While conditions were matched in trial numbers, participants completed a variable number of trials, depending on the task variant and the number of blocks completed.

All provided data were resampled to 1024 Hz during the initial stages of processing. Data were referenced in a bipolar fashion (each electrode's signal subtracted from its immediately adjacent neighbor in a pairwise manner) to ensure that the observed signals were derived from maximally local neuronal populations. The preprocessing steps are described in the detailed description document in the attached materials.

The dataset zip folder consists of three main sub-folders:
1) Electrodes: This folder provides details on the individual electrodes for each subject, including their MNI coordinates and vertex information according to FreeSurfer parcellations. It also contains images of the physical location of each electrode set.
2) Processed: This folder contains preprocessed data in all three frequency bands (theta, beta, and high gamma power) for individual subjects, along with the corresponding vertex locations for each of the electrodes from which the data were recorded. The images subfolder also contains the figures provided in the main manuscript.
3) MatlabCodes: This folder contains all the MATLAB scripts required to reproduce the results in the main manuscript. LME_AvsAV_Main_Windows.m is the main file that a user runs to reproduce the results.
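The bipolar referencing described in the acquisition notes above amounts to a pairwise difference of adjacent channels. The following is a minimal illustration assuming channels are ordered along an electrode shank; the real channel layout comes from the Electrodes folder, and the dataset's actual preprocessing lives in the MatlabCodes scripts.

```python
import numpy as np

def bipolar_reference(data):
    """Pairwise-subtract adjacent channels: channel i minus channel i+1.

    data: array of shape (n_channels, n_samples), with channels assumed to
    be ordered along the shank. Returns (n_channels - 1, n_samples).
    Any signal common to neighboring contacts (e.g., a shared reference
    or far-field activity) cancels, leaving maximally local signals.
    """
    return data[:-1] - data[1:]

# Toy example: a component common to all channels cancels after referencing.
common = np.sin(np.linspace(0, 10, 1000))
local = np.zeros((4, 1000))
local[2] = 0.1 * np.random.default_rng(0).normal(size=1000)  # activity local to channel 2
raw = common + local
bipolar = bipolar_reference(raw)
print(bipolar.shape)  # (3, 1000)
```

Here the first bipolar pair (channels 0 and 1) is exactly zero because both contacts carry only the common signal, while pairs involving channel 2 retain its local activity.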
Ganesan, K., Plass, J., Beltz, A. M., Liu, Z., Grabowecky, M., Suzuki, S., ... & Brang, D. (2020). Visual speech differentially modulates beta, theta, and high gamma bands in auditory cortex. bioRxiv. https://doi.org/10.1101/2020.09.07.284455