This collection contains estimates of the water balance of the Laurentian Great Lakes produced by the Large Lakes Statistical Water Balance Model (L2SWBM). Each data set has a different configuration and was used as supplementary material for a published peer-reviewed article (see the "Citations to related material" section in the metadata of individual data sets). The key variables estimated by the L2SWBM are (1) over-lake precipitation, (2) over-lake evaporation, (3) lateral runoff, (4) connecting-channel outflows, (5) diversions, and (6) changes in lake storage.
Contact: Andrew Gronewold
Office: 4040 Dana
Phone: (734) 764-6286
Email: drewgron@umich.edu
Smith, J. P., & Gronewold, A. D. (2017). Development and analysis of a Bayesian water balance model for large lake systems. arXiv preprint arXiv:1710.10161.
Gronewold, A. D., Smith, J. P., Read, L., & Crooks, J. L. (2020). Reconciling the water balance of large lake systems. Advances in Water Resources, 103505.
Do, H. X., Smith, J., Fry, L. M., & Gronewold, A. D. Seventy-year long record of monthly water balance estimates for Earth's largest lake system (under revision).
This database contains six datasets intended to aid in the conception, training, demonstration, evaluation, and comparison of reduced-complexity models for fluid mechanics. The six datasets are: large-eddy-simulation data for a turbulent jet, direct-numerical-simulation data for a zero-pressure-gradient turbulent boundary layer, particle-image-velocimetry data for the same boundary layer, direct-numerical-simulation data for laminar stationary and pitching flat-plate airfoils, particle-image-velocimetry and force data for an airfoil encountering a gust, and large-eddy-simulation data for the separated, turbulent flow over an airfoil.
All data are stored within hdf5 files, and each dataset additionally contains a README file and a Matlab script showing how the data can be read and manipulated. Since all data files use the hdf5 format, they can alternatively be read in virtually any other programming environment. An example.zip file included with each dataset provides an entry point for users.
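As an illustration of reading the hdf5 files outside of Matlab, the sketch below uses Python's h5py package. The group and dataset names (`velocity/u`, the `units` attribute) are hypothetical placeholders, not the actual layout of any file in this database; consult each dataset's README for the real structure.

```python
import h5py
import numpy as np

# Write a small stand-in hdf5 file so the example is self-contained.
# The group/dataset names below are illustrative only.
with h5py.File("example.h5", "w") as f:
    f.create_dataset("velocity/u", data=np.random.rand(4, 8))
    f["velocity/u"].attrs["units"] = "m/s"

# Read it back: list the contents, then load a dataset as a NumPy array.
with h5py.File("example.h5", "r") as f:
    f.visit(print)             # print every group/dataset path in the file
    u = f["velocity/u"][:]     # slice the dataset into memory as an ndarray
    units = f["velocity/u"].attrs["units"]

print(u.shape, units)
```

The same pattern (open file, navigate groups, slice datasets into arrays) carries over to other hdf5 bindings, such as MATLAB's `h5read` or Julia's HDF5.jl.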
The database is an initiative of the AIAA Discussion Group on Reduced-Complexity Modeling and is detailed in the paper listed below. For each dataset, the paper introduces the flow setup and computational or experimental methods, describes the available data, and provides an example of how these data can be used for reduced-complexity modeling. All users should cite this paper as well as the appropriate primary sources contained therein.
Towne, A., Dawson, S., Brès, G. A., Lozano-Durán, A., Saxton-Fox, T., Parthasarthy, A., Biler, H., Jones, A. R., Yeh, C.-A., Patel, H., & Taira, K. (2022). A database for reduced-complexity modeling of fluid flows. AIAA Journal 61(7): 2867-2892.
The collection contains the code and the data used to train machine learning algorithms to emulate simplified physical parameterizations within the Community Atmosphere Model (CAM6). CAM6 is the atmospheric general circulation model (GCM) within the Community Earth System Model (CESM) framework, developed by the National Center for Atmospheric Research (NCAR). GCMs are made up of a dynamical core, responsible for the geophysical fluid flow calculations, and physical parameterization schemes, which estimate various unresolved processes. Simple physics schemes were used to train both random forests and neural networks, to explore the feasibility of using machine learning techniques in conjunction with the dynamical core for improved efficiency of future climate and weather models. The results of the research show that various physical forcing tendencies and precipitation rates can be effectively emulated by the machine learning models.