Search Results
-
- Creator:
- Fries, Kevin J.
- Description:
- This data is in support of the WRR paper by Fries and Kerkez: Big Ship Data: Using Vessel Measurements to Improve Estimates of Temperature and Wind Speed on the Great Lakes. Code is also provided.
- Keyword:
- Gaussian process regression, Data integration, Wind speed, Water surface temperature, and Air temperature
- Citation to related publication:
- Fries, K., and B. Kerkez (2017), Big Ship Data: Using vessel measurements to improve estimates of temperature and wind speed on the Great Lakes, Water Resour. Res., 53, 3662–3679, http://doi.org/10.1002/2016WR020084.
- Discipline:
- Engineering
- Title:
- Big Ship Data: Pre- and Post-Processed Spatiotemporal Data for 2006-2014 for Great Lakes Air Temperature, Dew Point, Surface Water Temperature, and Wind Speed
-
- Creator:
- Malik, Hafiz and Khan, Muhammad Khurram, King Saud University
- Description:
- Details of the microphones used for data collection, the acoustic environments in which the data were collected, and the file naming convention are provided here. A minimal sketch of parsing this naming convention follows this record.

1 - Microphones used: The microphones used to collect this dataset belong to 7 different trademarks. Table 1 lists the number of microphones of each trademark and model.

Table 1: Trademarks and models of microphones
Shure SM-58: 3
Electro-Voice RE-20: 2
Sennheiser MD-421: 3
AKG C 451: 2
AKG C 3000 B: 2
Neumann KM184: 2
Coles 4038: 2
The t.bone MB88U: 6
Total: 22

2 - Environment description: A brief description of the 6 environments in which the dataset was collected: (i) Soundproof room: a small room (nearly 1.5m × 1.5m × 2m) that is closed and completely isolated. With the exception of a small glass window at the front of the room, all walls are made of wood and covered by a layer of sponge on the inner side, and the floor is covered by carpet. (ii) Class room: a standard classroom (6m × 5m × 3m). (iii) Lab: a small lab (4m × 4m × 3m). All walls are made of glass and the floor is covered by carpet. The lab contains 9 computers. (iv) Stairs: a stairway on the second floor; the recording area is 3m × 5m. (v) Parking: the college parking lot. (vi) Garden: an open space outside the buildings.

3 - Naming convention: The following rules give each file in the dataset a unique name: (i) The file name is 20 characters long and consists of six sections separated by underscores. (ii) The first section (3 characters) indicates the microphone trademark. (iii) The second section (4 characters) indicates the microphone model, as in Table 2. (iv) The third section (2 characters) identifies a specific microphone within a set of the same trademark and model, since there is more than one microphone of the same trademark and model. (v) The fourth section (2 characters) indicates the environment: Soundproof room --> 01, Class room --> 02, Lab --> 03, Stairs --> 04, Parking --> 05, Garden --> 06. (vi) The fifth section (2 characters) indicates the language: Arabic --> 01, English --> 02, Chinese --> 03, Indonesian --> 04. (vii) The sixth section (2 characters) indicates the speaker.

Table 2: Microphone naming criteria (original trademark and model --> naming convention)
Shure SM-58 --> SHU_0058
Electro-Voice RE-20 --> ELE_0020
Sennheiser MD-421 --> SEN_0421
AKG C 451 --> AKG_0451
AKG C 3000 B --> AKG_3000
Neumann KM184 --> NEU_0184
Coles 4038 --> COL_4038
The t.bone MB88U --> TBO_0088

For example, SEN_0421_02_01_02_03 is an English recording by speaker number 3 in the soundproof room using microphone number 2 of the Sennheiser MD-421.
- Keyword:
- audio forensics, multimedia forensics, microphone identification, tamper detection, splicing detection, and codec identification
- Citation to related publication:
- http://dx.doi.org/10.1080/00450618.2017.1296186
- Discipline:
- Science, Government, Politics and Law, and Engineering
- Title:
- The KSU-UMD Dataset for Benchmarking Audio Forensic Algorithms
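To make the convention above concrete, here is a minimal Python sketch that splits a file name into its six fields. The parser and its lookup tables are illustrative reconstructions from this record's tables, not code shipped with the dataset.

# Hypothetical parser for the naming convention described in this record.
ENVIRONMENTS = {"01": "Soundproof room", "02": "Class room", "03": "Lab",
                "04": "Stairs", "05": "Parking", "06": "Garden"}
LANGUAGES = {"01": "Arabic", "02": "English", "03": "Chinese", "04": "Indonesian"}

def parse_filename(name: str) -> dict:
    """Split a file name such as 'SEN_0421_02_01_02_03' into its six fields."""
    trademark, model, mic, env, lang, speaker = name.split("_")
    return {
        "trademark": trademark,           # 3-character trademark code, e.g. SEN
        "model": model,                   # 4-character model code, e.g. 0421
        "microphone": int(mic),           # which unit of that trademark/model
        "environment": ENVIRONMENTS[env],
        "language": LANGUAGES[lang],
        "speaker": int(speaker),
    }

print(parse_filename("SEN_0421_02_01_02_03"))
# {'trademark': 'SEN', 'model': '0421', 'microphone': 2,
#  'environment': 'Soundproof room', 'language': 'English', 'speaker': 3}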
-
- Creator:
- Smith, Joseph P., Gronewold, Andrew D., Read, Laura, Crooks, James L., School for Environment and Sustainability, University of Michigan, Department of Civil and Environmental Engineering, University of Michigan, and Cooperative Institute for Great Lakes Research
- Description:
- Using the statistical programming package R (https://cran.r-project.org/) and JAGS (Just Another Gibbs Sampler, http://mcmc-jags.sourceforge.net/), we processed multiple estimates of the Laurentian Great Lakes water balance components -- over-lake precipitation, evaporation, lateral tributary runoff, connecting channel flows, and diversions -- feeding them into prior distributions (using data from 1950 through 1979) and likelihood functions. The Bayesian network is coded in the BUGS language. Water balance computations assume that the monthly change in storage for a given lake is the difference between the beginning-of-month water levels bracketing that month; for example, the change in storage for June 2015 is the difference between the beginning-of-month water level for July 2015 and that for June 2015. More details on the model can be found in the following summary report for the International Watersheds Initiative of the International Joint Commission, where the model was used to generate a new water balance historical record for 1950 through 2015: https://www.glerl.noaa.gov/pubs/fulltext/2018/20180021.pdf. Large Lake Statistical Water Balance Model (L2SWBM): https://www.glerl.noaa.gov/data/WaterBalanceModel/. This data set has a shorter timespan to accommodate a prior which uses data not used in the likelihood functions. A small worked example of the change-in-storage convention follows this record.
- Keyword:
- Water Balance, Great Lakes, Laurentian, Machine Learning, Lakes, and Bayesian Network
- Citation to related publication:
- Discipline:
- Engineering and Science
- Title:
- Large Lake Statistical Water Balance Model - Laurentian Great Lakes - 1 month time window - 1980 through 2015 monthly summary data and model output
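The change-in-storage convention in the description reduces to one line of arithmetic. Here is a minimal Python sketch with hypothetical water-level numbers; the real monthly values are in the dataset itself.

# Minimal sketch of the change-in-storage convention described in this record.
# The beginning-of-month (BOM) water levels below are hypothetical placeholders,
# not values from the dataset.
bom_level_m = {"2015-06": 183.62, "2015-07": 183.71}  # metres, illustrative only

# Change in storage for June 2015 = BOM level for July 2015 - BOM level for June 2015.
delta_storage_june_2015 = bom_level_m["2015-07"] - bom_level_m["2015-06"]
print(f"Change in storage, June 2015: {delta_storage_june_2015:+.2f} m")  # +0.09 m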
-
- Creator:
- Smith, Joseph P., Gronewold, Andrew D., Read, Laura, Crooks, James L., School for Environment and Sustainability, University of Michigan, Department of Civil and Environmental Engineering, University of Michigan, and Cooperative Institute for Great Lakes Research
- Description:
- Using the statistical programming package R (https://cran.r-project.org/) and JAGS (Just Another Gibbs Sampler, http://mcmc-jags.sourceforge.net/), we processed multiple estimates of the Laurentian Great Lakes water balance components -- over-lake precipitation, evaporation, lateral tributary runoff, connecting channel flows, and diversions -- feeding them into prior distributions (using data from 1950 through 1979) and likelihood functions. The Bayesian network is coded in the BUGS language. Water balance computations assume that the monthly change in storage for a given lake is the difference between the beginning-of-month water levels bracketing that month; for example, the change in storage for June 2015 is the difference between the beginning-of-month water level for July 2015 and that for June 2015. More details on the model can be found in the following summary report for the International Watersheds Initiative of the International Joint Commission, where the model was used to generate a new water balance historical record for 1950 through 2015: https://www.glerl.noaa.gov/pubs/fulltext/2018/20180021.pdf. Large Lake Statistical Water Balance Model (L2SWBM): https://www.glerl.noaa.gov/data/WaterBalanceModel/. This data set has a shorter timespan to accommodate a prior which uses data not used in the likelihood functions.
- Keyword:
- Water Balance, Great Lakes, Laurentian, Machine Learning, Lakes, and Bayesian Network
- Citation to related publication:
- Discipline:
- Engineering and Science
- Title:
- Large Lake Statistical Water Balance Model - Laurentian Great Lakes - 6 month time window - 1980 through 2015 monthly summary data and model output
-
- Creator:
- Baskar, Deepika and Gorodetsky, Alex
- Description:
- Studying the effect of wind on urban air mobility typically requires comprehensive fluid dynamics simulations in a realistic urban geometry. Motivated by the goal of enabling widespread autonomous drone activity in urban centers, several authors have indeed considered such studies in the recent literature. However, the accessibility of these approaches to those with less fluid dynamics experience and/or without access to purpose-built simulation tools has limited validation and application of the resulting path planning strategies. The .dat files contain the flow variables for each of the 402,240 points sampled from the region under study. For flow visualization purposes, the .dat files are readable using the Tecplot software. A hedged loading sketch follows this record.
- Keyword:
- UAM, Energy-efficient path planning, CFD, and City of Boston
- Citation to related publication:
- Discipline:
- Other and Engineering
- Title:
- A Simulated Wind-field Dataset for Testing Energy Efficient Path-Planning Algorithms for UAVs in Urban Environment
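For readers without Tecplot, a hedged NumPy sketch for loading one of the .dat files is below. The file name, the number of header lines, and the column layout are assumptions to check against an actual file header; only the point count comes from this record.

import numpy as np

# Hedged sketch: Tecplot ASCII files usually begin with TITLE/VARIABLES/ZONE
# header lines; adjust skiprows and the column meanings after inspecting a file.
data = np.loadtxt("wind_field.dat", skiprows=3)  # hypothetical name and header size
assert data.shape[0] == 402240  # one row per sampled point, per the record
x, y, z = data[:, 0], data[:, 1], data[:, 2]     # assumed: point coordinates
u, v, w = data[:, 3], data[:, 4], data[:, 5]     # assumed: velocity components
print("mean horizontal wind speed:", np.hypot(u, v).mean())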
-
- Creator:
- Smith, Joseph P., Gronewold, Andrew D., Read, Laura, Crooks, James L., School for Environment and Sustainability, University of Michigan, Department of Civil and Environmental Engineering, University of Michigan, and Cooperative Institute for Great Lakes Research, University of Michigan
- Description:
- Using the statistical programming package R (https://cran.r-project.org/) and JAGS (Just Another Gibbs Sampler, http://mcmc-jags.sourceforge.net/), we processed multiple estimates of the Laurentian Great Lakes water balance components -- over-lake precipitation, evaporation, lateral tributary runoff, connecting channel flows, and diversions -- feeding them into prior distributions (using data from 1950 through 1979) and likelihood functions. The Bayesian network is coded in the BUGS language. Water balance computations assume that the monthly change in storage for a given lake is the difference between the beginning-of-month water levels bracketing that month; for example, the change in storage for June 2015 is the difference between the beginning-of-month water level for July 2015 and that for June 2015. More details on the model can be found in the following summary report for the International Watersheds Initiative of the International Joint Commission, where the model was used to generate a new water balance historical record for 1950 through 2015: https://www.glerl.noaa.gov/pubs/fulltext/2018/20180021.pdf. Large Lake Statistical Water Balance Model (L2SWBM): https://www.glerl.noaa.gov/data/WaterBalanceModel/. This data set has a shorter timespan to accommodate a prior which uses data not used in the likelihood functions.
- Keyword:
- Water Balance, Great Lakes, Laurentian, Machine Learning, Lakes, and Bayesian Network
- Citation to related publication:
- Discipline:
- Engineering and Science
- Title:
- Large Lake Statistical Water Balance Model - 12 month time window - 1980 through 2015 monthly summary data and model output
-
- Creator:
- Chen, Yang and Manchester, Ward IV
- Description:
- GOES_flare_list contains a list of more than 10,000 flare events; the list has 6 columns: flare classification, active region number, date, start time, end time, and emission peak time. GOES_B_flare_list contains time series data of SDO/HMI SHARP parameters for B-class solar flares. GOES_MX_flare_list contains time series data of SDO/HMI SHARP parameters for M- and X-class solar flares. The SHARP_B_flare_data_300.hdf5 and SHARP_MX_flare_data_300.hdf5 files contain time series of more than 20 physical variables derived from the SDO/HMI SHARP data files; these data are saved at a 12-minute cadence and are used to train the LSTM model. B_HARPs_CNNencoded_part_xxx.hdf5 and M_X_HARPs_CNNencoded_part_xxx.hdf5 include neural-network-encoded features derived from vector magnetogram images from the Solar Dynamics Observatory (SDO) Helioseismic and Magnetic Imager (HMI). These data files typically contain one or two sequences of magnetograms covering an active region for a period of 24 h with a 1-hour cadence. Each magnetogram is encoded as frames of a fixed size of 8x16 with 512 channels. A hedged inspection sketch follows this record.
- Keyword:
- machine learning, data science, and solar flare prediction
- Citation to related publication:
- Chen, Y., Manchester, W., Hero, A., Toth, G., DuFumier, B., Zhou, T., Wang, X., Zhu, H., Sun, Z., Gombosi, T., Identifying Solar Flare Precursors Using Time Series of SDO/HMI Images and SHARP Parameters, Space Weather Journal, submitted
- Discipline:
- Engineering and Science
- Title:
- Data and Data products for machine learning applied to solar flares
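A hedged h5py sketch for inspecting one of the CNN-encoded files is below. The part number is hypothetical and the internal dataset keys are not given in the record, so the sketch simply lists whatever is present.

import h5py

# List the groups/datasets actually present in one encoded-features file.
# The part number is hypothetical; the keys inside are not specified by the record.
with h5py.File("B_HARPs_CNNencoded_part_001.hdf5", "r") as f:
    f.visit(print)
    # Per the record, each magnetogram is encoded as 8x16 frames with 512
    # channels, so a 24-hour sequence at 1-hour cadence should have a shape
    # along the lines of (24, 8, 16, 512).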
-
- Creator:
- Ramasubramani, Vyas
- Description:
- The goal of the work is to elucidate the stability of a complex experimentally observed structure of proteins. We found that supercharged GFP molecules spontaneously assemble into a complex 16-mer structure that we term a protomer, and that under the right conditions an even larger assembly is observed. The protomer structure is very well defined, and we performed simulations to try to understand the mechanics underlying its behavior. In particular, we focused on understanding the role of electrostatics in this system and how varying salt concentrations would alter the stability of the structure, with the ultimate goal of predicting the effects of various mutations on the stability of the structure.

There are two separate projects included in this repository, but the two are closely linked. One, the candidate_structures folder, contains the atomistic outputs used to generate coarse-grained configurations. The actual coarse-grained simulations are in the rigid_protein folder, which pulls the atomistic coordinates from the other folder. All data is managed by signac and lives in the workspace directories, which contain various folders corresponding to different parameter combinations; the parameters associated with a given folder are stored in the signac_statepoint.json files within each subdirectory.

The atomistic data uses experimentally determined protein structures as a starting point; all of these are stored in the ConfigFiles folder. The primary output is the topology files generated from the PDBs by GROMACS; these topologies are then used to parametrize the Monte Carlo simulations. In some cases, atomistic simulations were also run, and the outputs are stored alongside the topology files.

In the rigid_protein folder, the ConfigFiles folder contains MSMS, the software used to generate polyhedral representations of proteins from the PDBs in the candidate_structures folder; all of the actual polyhedral structures are also stored there. The actual simulation trajectories are stored as general simulation data (GSD) files within each subdirectory of the workspace, along with a single .pos file that contains the shape definition of the (nonconvex) polyhedron used to represent a protein. Logged quantities, such as energies and MC move sizes, are stored in .log files. The logic for the simulations in the candidate_structures project is in the Python scripts project.py, operations.py, and scripts/init.py. The rigid_protein folder also includes the notebooks directory, which contains Jupyter notebooks used to perform analyses, as well as the Python scripts used to actually perform the simulations and manage the data space; again, project.py, operations.py, and scripts/init.py contain most of the logic associated with the simulations. A hedged sketch for walking this data space follows this record.
- Keyword:
- Protein assembly, Cryo TEM, Hierarchical Assembly, Monte Carlo simulation, and Coarse-grained simulation
- Citation to related publication:
- Anna J Simon, Vyas Ramasubramani, Jens Glaser, Arti Pothukuchy, Jillian Gerberich, Janelle Leggere, Barrett R Morrow, Jimmy Golihar, Cheulhee Jung, Sharon C Glotzer, David W Taylor, Andrew D Ellington, "Supercharging enables organized assembly of synthetic biomolecules," bioRxiv 323261; doi: https://doi.org/10.1101/323261
- Discipline:
- Engineering and Science
- Title:
- Simulation Data associated with the paper: Supercharging enables organized assembly of synthetic biomolecules
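As a rough guide to the layout described above, the following stdlib-only sketch walks a workspace directory and pairs each job's signac_statepoint.json parameters with its GSD trajectories. The workspace path is an assumption, and the signac library itself offers richer access to the same data.

import json
from pathlib import Path

# Walk a signac workspace: each subdirectory is one parameter combination (job).
for job_dir in sorted(Path("rigid_protein/workspace").iterdir()):  # assumed path
    statepoint_file = job_dir / "signac_statepoint.json"
    if statepoint_file.is_file():
        params = json.loads(statepoint_file.read_text())       # job parameters
        gsd_files = [p.name for p in job_dir.glob("*.gsd")]    # trajectories
        print(job_dir.name, params, gsd_files)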
-
- Creator:
- Burgin, Tucker and Mayes, Heather B.
- Description:
- This project aimed to discover and analyze the molecular mechanism by which a mutant enzyme, Thermotoga maritima Alpha-L-Fucosidase D224G, synthesizes two particular fucosylated oligosaccharide products; the wild type performs the opposite reaction (cleavage of fucosyl glycosidic bonds). Discovery of the mechanism was performed using an unbiased simulation method known as aimless shooting, whereas analysis of the mechanism in terms of the energy profile was performed using a separate method known as equilibrium path sampling. The data here concern the latter method. The contents of atesa_master.zip are the ATESA GitHub project, a Python program for automating transition path sampling with aimless shooting using Amber: https://github.com/team-mayes/atesa
- Keyword:
- Equilibrium Path Sampling, Transition Path Sampling, Enzymatic Mechanism, and GH29
- Citation to related publication:
- 10.1039/C8RE00240A
- Discipline:
- Engineering
- Title:
- Equilibrium Path Sampling Data for Two Glycosynthetic Reactions of Thermotoga maritima Alpha-L-Fucosidase D224G
-
- Creator:
- Stoev, Stilian and Hu, Weifeng
- Description:
- Many data sets come as point patterns of the form (longitude, latitude, time, magnitude). Examples of data sets in this format include tornado events, origins/destinations of internet flows, earthquakes, and terrorist attacks. It is difficult to visualize such data with simple plotting. This research project studies and implements non-parametric kernel smoothing in Python as a way of visualizing the intensity of point patterns in space and time. A two-dimensional grid M of size mx by my stores the kernel-smoothing result for each grid point. The heat map in Python then uses the grid to plot the resulting images on a map whose resolution is determined by mx and my. The resulting images also depend on spatial and temporal smoothing parameters, which control the resolution (smoothness) of the figure. The Python code is applied to visualize over 56,000 tornado landings in the continental U.S. over the period 1950 - 2014. Tornado magnitudes are based on the Fujita scale. A minimal smoothing sketch follows this record.
- Citation to related publication:
- Discipline:
- Engineering and Science
- Title:
- Statistics and Visualization of Point-Patterns
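The smoothing step described above amounts to summing a kernel bump centred at each event over an mx by my grid. Here is a minimal spatial-only Python sketch with a Gaussian kernel and synthetic events; the function name, the synthetic data, and the bandwidth h are illustrative rather than taken from the project's code, and the temporal dimension is omitted for brevity.

import numpy as np

def smoothed_intensity(lon, lat, grid_lon, grid_lat, h):
    """Kernel-smoothed intensity of events (lon, lat) on an (mx, my) grid."""
    glon, glat = np.meshgrid(grid_lon, grid_lat, indexing="ij")  # shape (mx, my)
    # Sum a Gaussian bump centred at each event; the bandwidth h controls
    # the smoothness (resolution) of the resulting heat map.
    d2 = (glon[..., None] - lon) ** 2 + (glat[..., None] - lat) ** 2
    return np.exp(-d2 / (2 * h**2)).sum(axis=-1) / (2 * np.pi * h**2 * len(lon))

rng = np.random.default_rng(0)
lon = rng.uniform(-100, -90, 500)  # synthetic event longitudes
lat = rng.uniform(30, 40, 500)     # synthetic event latitudes
grid = smoothed_intensity(lon, lat, np.linspace(-100, -90, 80),
                          np.linspace(30, 40, 60), h=0.5)
print(grid.shape)  # (80, 60): an mx-by-my intensity grid for a heat map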