These data and scripts are intended to test and demonstrate seizure differentiation based on bifurcation theory. A zip file is included that contains real and simulated seizure waveforms, MATLAB scripts, and metadata. The MATLAB scripts allow for visual review validation and objective feature analysis. The file "README.txt" provides more detail about each individual file within the zip file. Data citation: Crisp, D.N., Saggio, M.L., Scott, J., Stacey, W.C., Nakatani, M., Gliske, S.F., Lin, J. (2019). Epidynamics: Navigating the map of seizure dynamics - Code & Data [Data set]. University of Michigan Deep Blue Data Repository. https://doi.org/10.7302/ejhy-5h41
These data and scripts show that seizure onset dynamics and evoked responses change over the progression of epileptogenesis in this intrahippocampal tetanus toxin rat model. All tests explored in this study can be repeated with the data and scripts included in this repository. Dataset citation: Crisp, D.N., Cheung, W., Gliske, S.V., Lai, A., Freestone, D.R., Grayden, D.B., Cook, M.J., Stacey, W.C. (2019). Epileptogenesis modulates spontaneous and responsive brain state dynamics [Data set]. University of Michigan Deep Blue Data Repository. https://doi.org/10.7302/r6vg-9658
The relationships between words in a sentence often tell us more about the underlying semantic content of a document than its individual words do. Recent publications in the natural language processing arena, specifically those using word embeddings, try to incorporate semantic aspects into their word vector representations by considering the context of words and how they are distributed in a document collection. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II, which combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings into a single decoupled system. In short, our approach makes three main contributions: (i) unsupervised techniques that fully integrate word embeddings and lexical chains; (ii) a more solid semantic representation that considers the latent relations between words in a document; and (iii) lightweight word embedding models that can be extended to any natural language task. Knowledge-based systems that use natural language text can benefit from our approach to mitigate the ambiguous semantic representations produced by traditional statistical approaches. The proposed techniques are tested against seven word embedding algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
Raw rheology data supplementing the 2019 Macromolecules publication: "Assessing the Range of Validity of Current Tube Models Through Analysis of a Comprehensive Set of Star-Linear 1,4-Polybutadiene Polymer Blends"
This dataset contains the following files:

GOES_flare_list: a list of more than 10,000 flare events. The list has six columns: flare classification, active region number, date, start time, end time, and emission peak time.

GOES_B_flare_list: time series data of SDO/HMI SHARP parameters for B-class solar flares.

GOES_MX_flare_list: time series data of SDO/HMI SHARP parameters for M- and X-class solar flares.

SHARP_B_flare_data_300.hdf5 and SHARP_MX_flare_data_300.hdf5: time series of more than 20 physical variables derived from the SDO/HMI SHARP data files. These data are saved at a 12-minute cadence and are used to train the LSTM model.

B_HARPs_CNNencoded_part_xxx.hdf5 and M_X_HARPs_CNNencoded_part_xxx.hdf5: neural-network-encoded features derived from vector magnetogram images taken by the Solar Dynamics Observatory (SDO) Helioseismic and Magnetic Imager (HMI). Each of these files typically contains one or two sequences of magnetograms covering an active region for a period of 24 hours at a 1-hour cadence. Each magnetogram is encoded as frames of a fixed size of 8x16 with 512 channels.
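As a rough illustration of how the six-column flare list described above might be read, the following is a minimal sketch assuming a simple comma-separated layout. The column names, delimiter, and sample rows here are assumptions for illustration, not the dataset's actual format; consult the files themselves for the true layout.

```python
import csv
import io

# The six columns documented for GOES_flare_list; these field names are
# hypothetical labels chosen for readability.
COLUMNS = ["flare_class", "active_region", "date",
           "start_time", "end_time", "peak_time"]

def parse_flare_list(text):
    """Parse comma-separated flare-list rows into dicts keyed by column."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(COLUMNS, row)) for row in reader if row]

# Illustrative rows only; the values are invented, not taken from the dataset.
sample = """\
M1.2,12673,2017-09-04,18:05,18:41,18:22
X9.3,12673,2017-09-06,11:53,12:10,12:02
"""

events = parse_flare_list(sample)
print(events[1]["flare_class"])  # → X9.3
```

Rows parsed this way can then be filtered by flare classification (e.g. B versus M/X) to mirror the split used in the per-class SHARP files.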