The purpose of this research is to compare levels of unprenylated Rab proteins in CHM-/- iPSC-RPE cells with and without compactin. Compactin is a statin that inhibits prenyl synthesis, thereby reducing prenylation of all proteins without bias. We therefore expect compactin to have minimal effect on Rabs that are already poorly prenylated at baseline in choroideremia RPE cells, and a much greater effect on Rabs that are efficiently prenylated at baseline. We used tandem mass tag (TMT) mass spectrometry to compare the ratio of each unprenylated Rab in compactin-treated versus untreated choroideremia cells.

In the spreadsheet, "F8" refers to the CHM-/- iPSC-RPE cells and "WT" refers to the isogenic control iPSC-RPE cells. In the "Proteins only" tab, column M shows the ratio of each protein in "DMSO" (untreated) choroideremia cells compared to compactin-treated choroideremia cells. Compactin-treated control cells are also included in other columns. Untreated control cells could not be used because prenylation is so efficient in these cells that almost no material remains after the in vitro prenylation assay (i.e., almost no unprenylated proteins to biotinylate).
The column descriptions can be found in the sheet titled "Explanations." In addition, AAs = number of amino acids in the protein, MW = molecular weight of the protein, and pI = isoelectric point.
The software is set to report abundance values only when certain criteria are met (e.g., an S/N of 6 and at least one unique peptide). When the data for a protein fall below these criteria, no value is reported and the cell is left blank.
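As an illustration, the "Proteins only" tab can be read with standard tools such as pandas, which parses the blank cells as missing values. The following is a minimal sketch; the file name and the exact column headers (the ratio and description columns) are assumptions and should be checked against the "Explanations" sheet.

    import pandas as pd

    # Load the "Proteins only" tab; blank cells (data below the reporting
    # criteria) are parsed as NaN. The file name here is hypothetical.
    df = pd.read_excel("TMT_proteomics.xlsx", sheet_name="Proteins only")

    # Column M holds the DMSO (untreated) vs. compactin-treated CHM-/- ("F8")
    # ratio; the header string below is an assumption -- consult the
    # "Explanations" sheet for the actual name.
    ratio_col = "Abundance Ratio: (F8 DMSO) / (F8 Compactin)"

    # Keep only proteins whose ratio passed the reporting criteria.
    reported = df.dropna(subset=[ratio_col])

    # Rabs that are efficiently prenylated at baseline should show a large
    # compactin effect, i.e., a low DMSO/compactin ratio of unprenylated
    # protein. "Description" is an assumed column name.
    rabs = reported[reported["Description"].str.contains("Rab", case=False, na=False)]
    print(rabs[["Description", ratio_col]].sort_values(ratio_col))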
Raeker, M.O., Perera, N.D., Karoukis, A.J., Chen, L., Feathers, K.L., Ali, R.R., Thompson, D.A., Fahim, A.T. Reduced retinal pigment epithelial autophagy due to loss of Rab12 prenylation in a human iPSC-RPE model of choroideremia. Cells, manuscript accepted, in press.
WarpX simulations of the 2D axial-azimuthal Hall thruster benchmark, as described in IEPC paper 409 (2024):
https://www.thomasmarks.space/files/Marks_T_IEPC_2024_WarpX.pdf
Contains one subdirectory:
baseline_20us: 20 µs of data, saved every 5000 iterations (32 GB)
The data is in AMReX plotfile format.
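AMReX plotfiles can be opened with the yt package, among other tools. A minimal sketch follows; the plotfile directory name and the field name are assumptions (list the baseline_20us directory and inspect ds.field_list for the actual names).

    import yt

    # Open one plotfile from baseline_20us; the directory name (e.g.,
    # plt005000 for iteration 5000) is hypothetical.
    ds = yt.load("baseline_20us/plt005000")

    print(ds.field_list)   # available mesh and particle fields
    ad = ds.all_data()

    # "Ex" is a typical WarpX field name, but confirm against ds.field_list;
    # yt exposes AMReX plotfile fields under the "boxlib" namespace.
    ex = ad["boxlib", "Ex"]
    print(ex.shape, ex.units)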
@inproceedings{marksWarpX2024,
  title = {Hall thruster simulations in {{Warp-X}}},
  booktitle = {38th {{International Electric Propulsion Conference}}},
  author = {Marks, Thomas A. and Gorodetsky, Alex A.},
  year = {2024},
  month = jun,
  address = {Toulouse, France}
}
The MEVDT dataset was created to fill a critical gap in event-based computer vision research by supplying a high-quality, real-world labeled dataset. Intended to facilitate the development of advanced algorithms for object detection and tracking applications, MEVDT includes multi-modal traffic scene data with synchronized grayscale images and high-temporal-resolution event streams. It also provides annotations for object detection and tracking with class labels, pixel-precise bounding box coordinates, and unique object identifiers. The dataset is organized into directories containing sequences of images and event streams, comprehensive ground truth labels, fixed-duration event samples, and data indexing sets for training and testing.

To access and utilize the dataset, researchers need software or scripts compatible with the included data formats: PNG for grayscale images, CSV for event stream data, AEDAT for the encoded fixed-duration event samples, and TXT for annotations. Recommended tools include standard image processing libraries for PNG files and CSV or text parsers for event data. A specialized Python script for reading AEDAT files, which streamlines access to the encoded event sample data, is available at: https://github.com/Zelshair/cstr-event-vision/blob/main/scripts/data_processing/read_aedat.py
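As a starting point, the images and event streams can be loaded with common Python libraries. The sketch below assumes hypothetical file paths and CSV column ordering; check the dataset documentation for the actual layout, and use the linked read_aedat.py script for the AEDAT samples.

    import pandas as pd
    from PIL import Image

    # Load one grayscale frame (the path is hypothetical).
    frame = Image.open("sequences/seq_01/images/frame_000001.png")

    # Load an event stream; whether the CSV carries a header row, and the
    # column order (timestamp, x, y, polarity), are assumptions.
    events = pd.read_csv("sequences/seq_01/events.csv",
                         header=None, names=["t", "x", "y", "polarity"])

    # Example: select the events inside a hypothetical time window.
    window = events[(events["t"] >= 1_000_000) & (events["t"] < 1_033_000)]
    print(frame.size, len(window))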
El Shair, Z. and Rawashdeh, S., 2024. MEVDT: Multi-Modal Event-Based Vehicle Detection and Tracking Dataset. Data in Brief (under review).
El Shair, Z. and Rawashdeh, S.A., 2022. High-temporal-resolution object detection and tracking using images and events. Journal of Imaging, 8(8), p.210.
El Shair, Z. and Rawashdeh, S., 2023. High-temporal-resolution event-based vehicle detection and tracking. Optical Engineering, 62(3), p.031209.
El Shair, Z.A., 2024. Advancing Neuromorphic Event-Based Vision Methods for Robotic Perception Tasks. Doctoral dissertation, University of Michigan-Dearborn.
The study aims to describe how children worldwide progress through a sequence of theory of mind understandings in their development of insights into persons and minds. The focus is on the studies using Wellman and Liu's (2004) Theory of Mind Scale. A comprehensive search was run in PsycINFO, PsycArticles, Child Development & Adolescent Studies, Education Abstracts, Family & Society Studies Worldwide, and Social Sciences Abstracts. The dataset includes 91 studies using Wellman and Liu's (2004) Theory of Mind Scale.
Our project, mainly on Dogon languages of Mali, has branched out to Burkina Faso with emphasis on documentation of the most endangered languages. Tiefo-N was studied on an emergency basis since it was down to two aging competent speakers. For additional comments and links to a reference grammar, see the readme file.
Jalkunan is a small-population Mande language spoken in the Blédougou village cluster on the Banfora plateau in SW Burkina Faso. A grammar was published electronically at the Language Description Heritage Library in 2017.
http://ldh.clld.org/2017/01/01/escidoc2346932/
This is backed up at Deep Blue Documents: http://hdl.handle.net/2027.42/139025
See also: http://dogonlanguages.org/other#mande
Seven texts were recorded digitally in 2016 and are archived here. Three of them (texts 1, 2, and 4) were transcribed and translated at the end of the published grammar. The remaining tapes are not transcribed as of May 2018. I give permission to other linguists to transcribe, translate, and/or analyse the remaining texts.
Moran, Steven & Forkel, Robert & Heath, Jeffrey (eds.) 2016. Dogon and Bangime Linguistics. Jena: Max Planck Institute for the Science of Human History. http://dogonlanguages.org
This is the flora-fauna lexical material obtained in the course of more general lexical and grammatical fieldwork on languages of central-eastern Mali (Dogon, Songhay, Bangime, Bozo). The spreadsheets in this work, duplicated in xlsx and csv formats, present our flora-fauna lexicons as of early 2019 for many languages of central-eastern Mali, and certain languages of southwestern Burkina Faso. The Malian data is in two spreadsheets (flora, fauna), while the Burkina data is in separate spreadsheets for flora, birds, fish, insects, lizards and snakes, and mammals. Please begin with the “readme” document.
Moran, Steven & Forkel, Robert & Heath, Jeffrey (eds.) 2016. Dogon and Bangime Linguistics. Jena: Max Planck Institute for the Science of Human History. https://dogonlanguages.org
Naumann, Christfried & Güldemann, Tom & Moran, Steven & Segerer, Guillaume & Forkel, Robert (eds.) 2015. Tsammalex: A lexical database on plants and animals. Leipzig: Max Planck Institute for Evolutionary Anthropology. https://tsammalex.clld.org
The spreadsheets (in csv and xlsx formats) have columns for botanical family, genus-species binomials, and synonymy (outdated binomials) on the left, followed by columns with native terms in several Dogon languages and in Bangime. Dogon languages included are Toro Tegu, Ben Tey, Bankan Tey, Nanga, Jamsay (main dialect), Perge Tegu (Jamsay of Pergé village), Gourou (aberrant variety of Jamsay), Togo Kan, Yorno So and Ibi So (in the Toro So dialect complex), Donno So, Tomo Kan (of Segué and of Diangassagou), Dogul Dom, Tebul Ure, Yanda Dom, Najamba, Tiranige, Mombo, Ampari, Bunoge, and Penange. JH in column headings indicates that the material is from Dr. Heath's fieldwork.

For images of many of these plants, see the collection "Mali flora images" in Deep Blue Data (https://doi.org/10.7302/aef4-fk26). For a practical guide to these plants, click on the link below in "related items in Deep Blue Documents".
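The csv versions of the spreadsheets can be loaded directly with standard tools; the sketch below uses pandas, with a hypothetical file name and column headers (see the "readme" document for the actual names).

    import pandas as pd

    # Load the Malian flora spreadsheet (csv version); file name hypothetical.
    flora = pd.read_csv("flora_Mali.csv")

    # The left-hand columns hold botanical family, genus-species binomial,
    # and synonymy; the remaining columns hold native terms per language.
    print(flora.columns.tolist())

    # Example: native Jamsay terms alongside the binomials (column names
    # are assumptions).
    print(flora[["Genus-species", "Jamsay"]].head())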