Data Models
EnzymeML
EnzymeML is an XML-based data exchange format that supports the comprehensive documentation of enzymatic data by describing reaction conditions, time courses of substrate and product concentrations, the kinetic model, and the estimated kinetic constants. EnzymeML is based on the Systems Biology Markup Language, which was extended by implementing the STRENDA Guidelines. An EnzymeML document serves as a container to transfer data between experimental platforms, modeling tools, and databases. EnzymeML supports the scientific community by introducing a standardized data exchange format to make enzymatic data findable, accessible, interoperable, and reusable according to the FAIR data principles.
Porous Media Flow Model
This is the preliminary data model of EXC2075 PN1-3 provided in Markdown. The main goal of this document is to define a data storage standard for Particle Image Velocimetry (PIV) recordings. The data model is still under development. PIV is an optical, particle-based measurement technique used to measure fluid flow velocities. By illuminating small particles in the flow field with a laser sheet and analyzing their displacement between two consecutive images, PIV provides highly time- and space-resolved data on velocity profiles and flow structures. EXC2075 PN1-3 focuses on understanding the turbulent pumping mechanisms in different porous structure topologies with different characteristic pore scales. The interactions between energy, mass, and momentum transfer in these flows need to be better understood to improve engineering applications such as transpiration cooling, filtration processes, and heat exchangers. To that aim, time-resolved velocity measurements were performed at the interface between a turbulent free flow and various porous structures.
Software-driven Research Data Management
Data models are commonly written in generic formats like XML or JSON, which makes them machine-readable and machine-actionable. However, it also means that specialized software is required to handle the format and allow for integration. Unfortunately, the development of software often lags behind that of the format, which can lead to compatibility issues. Additionally, data models are often exclusive to a single format, while applications require different formats. To solve these problems, Software-driven Research Data Management (sdRDM) offers a generic object model for the data model. This model allows for more flexibility and compatibility across different formats, which can simplify the management of research data. Software-driven RDM allows the user to build modular data models from existing standards and to link them. Data models can also be generated from Markdown documents or from XML Schema Definitions. Furthermore, sdRDM data models can be converted to other standards by developing interfaces.
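The core idea of a format-agnostic object model can be sketched without sdRDM itself: one internal representation is serialized to several exchange formats. The class and field names below are invented for illustration and do not reflect sdRDM's actual API.

```python
import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass, asdict

# Hypothetical minimal object model, mirroring the idea of a single
# format-agnostic data model with multiple export targets.
@dataclass
class Measurement:
    species: str
    concentration: float
    unit: str

    def to_json(self) -> str:
        # One internal representation, many output formats.
        return json.dumps(asdict(self), indent=2)

    def to_xml(self) -> str:
        root = ET.Element("Measurement")
        for key, value in asdict(self).items():
            ET.SubElement(root, key).text = str(value)
        return ET.tostring(root, encoding="unicode")

m = Measurement(species="substrate", concentration=1.5, unit="mmol/l")
print(m.to_json())
print(m.to_xml())
```

Because both exporters read from the same dataclass, the formats cannot drift apart, which is the compatibility benefit the paragraph describes.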
Database Tools
Machine Learning
Catalax
Catalax is a JAX-based framework that facilitates simulation and parameter inference through optimization algorithms and Hamiltonian Monte Carlo sampling. Its features enable efficient model building and inference, including the utilization of neural ODEs to model system dynamics and serve as surrogates for the aforementioned techniques.
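The simulate-then-infer workflow that Catalax supports can be illustrated with a dependency-free sketch: integrate a kinetic ODE, then recover a parameter by comparing simulations against data. Catalax itself builds on JAX with gradient-based optimization and HMC; the Euler integrator, brute-force scan, and all numbers below are illustrative stand-ins.

```python
# Dependency-free sketch of simulation plus parameter inference
# (Catalax uses JAX; this pure-Python version only shows the idea).

def simulate(vmax, km, s0=1.0, dt=0.01, steps=500):
    """Integrate Michaelis-Menten decay dS/dt = -vmax * S / (km + S)
    with an explicit Euler scheme and return the trajectory."""
    s, traj = s0, []
    for _ in range(steps):
        s += dt * (-vmax * s / (km + s))
        traj.append(s)
    return traj

# "Observed" data generated with a known vmax, then recovered by a
# brute-force scan -- a stand-in for optimization or HMC sampling.
observed = simulate(vmax=0.8, km=0.3)
best = min((sum((a - b) ** 2 for a, b in zip(simulate(v, 0.3), observed)), v)
           for v in [0.2, 0.5, 0.8, 1.1])[1]
print(best)  # candidate with the smallest squared error
```

Replacing the scan with a gradient-based optimizer over a JAX-differentiable simulator is essentially what a framework like Catalax automates.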
Publication Tools
Dataverse Repository Sync
This is a GitHub workflow that offers seamless synchronization between your repository and your Dataverse dataset. This workflow empowers you to manage your data with greater ease and efficiency. It:
- Pushes all your repository files to your dataset
- Detects any changes and updates your dataset files accordingly
- Removes any files from your dataset that are not in your repository
- Lets you push content to any directory within your dataset
Besides being a publication, this dataset also serves as an example/test: it was created using the workflow outlined here, and any updates made to the repository will be reflected in this dataset.
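A workflow like this is configured in a YAML file under `.github/workflows/`. The sketch below shows the general shape only; the action reference, version, input names, and DOI are assumptions that must be checked against the actual workflow repository.

```yaml
# Hypothetical workflow file -- action name, inputs, and DOI are
# placeholders, not taken from the actual repository.
name: Dataverse Sync
on:
  push:
    branches: [main]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: gdcc/dataverse-uploader@v1   # assumed action reference
        with:
          DATAVERSE_TOKEN: ${{ secrets.DATAVERSE_TOKEN }}
          DATAVERSE_SERVER: https://demo.dataverse.org
          DATAVERSE_DATASET_DOI: doi:10.70122/FK2/EXAMPLE
```

The API token is kept in a repository secret so the workflow can authenticate against the Dataverse installation without exposing credentials.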
EasyDataverse
EasyDataverse is a Python library used to interface with Dataverse installations and to generate Python code compatible with the metadata block configuration of a given Dataverse installation. In addition, EasyDataverse allows you to export and import datasets to and from various data formats. Features:
- Metadata-block-compliant classes for flexible dataset creation
- Upload and download of files and directories to and from Dataverse installations
- Export and import of datasets to various formats (JSON, YAML, XML and HDF5)
- Fetching of datasets from any Dataverse installation into an object-oriented structure ready to be integrated
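Under the hood, client libraries like EasyDataverse wrap Dataverse's native REST API. The stdlib sketch below builds the standard dataset-export URL to show what such a wrapper does; it is not EasyDataverse's own API, and the demo server and DOI are placeholders.

```python
from urllib.parse import urlencode

def dataset_export_url(base_url: str, doi: str,
                       fmt: str = "dataverse_json") -> str:
    """Build a Dataverse native-API export URL for a dataset.
    Illustrates the kind of request a client library issues;
    this is not EasyDataverse's actual interface."""
    query = urlencode({"exporter": fmt, "persistentId": doi})
    return f"{base_url.rstrip('/')}/api/datasets/export?{query}"

url = dataset_export_url("https://demo.dataverse.org",
                         "doi:10.70122/FK2/ABC")
print(url)
```

A GET request to this URL returns the dataset metadata in the requested format, which a library can then map onto an object-oriented structure.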
EasyReview
EasyReview is a web-based tool designed to help review datasets stored in Dataverse. Datasets are an important part of RDM, and reviewing them helps guarantee high-quality data. EasyReview breaks the review process down to the level of individual dataset fields. Reviewers can easily be invited to join a review through a provided link and can share their feedback with others involved in the process.
Harvester-Curator, a tool to elevate metadata provision in data and/or software repositories
Harvester-Curator is a tool designed to elevate metadata provision in data repositories. In the first phase, Harvester-Curator acts as a scanner, navigating through user code and/or data repositories to identify suitable parsers for different file types. It collects metadata from each file by applying the corresponding parser and then compiles this information into a structured JSON file, providing researchers with a seamless and automated solution for metadata collection. In the second phase, Harvester-Curator becomes a curator, leveraging the harvested metadata to populate metadata fields in a target repository. By automating this process, it not only relieves researchers of a manual burden but also ensures the accuracy and comprehensiveness of the metadata. Beyond streamlining the intricate task of metadata collection, the tool contributes to the broader objective of elevating data accessibility and interoperability within repositories.
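The harvest phase described above amounts to a parser registry keyed by file type, with all results compiled into one JSON document. The registry, parser functions, and field names below are invented for illustration, not taken from Harvester-Curator's code.

```python
import json
from pathlib import Path

# Illustrative parser registry: map file suffixes to functions that
# extract metadata (field names here are invented for the sketch).
PARSERS = {
    ".json": lambda p: json.loads(p.read_text()),
    ".txt":  lambda p: {"n_lines": len(p.read_text().splitlines())},
}

def harvest(files):
    """Apply the matching parser to each file and compile the
    harvested metadata into a single structured JSON string."""
    collected = {}
    for path in map(Path, files):
        parser = PARSERS.get(path.suffix)
        if parser:  # files without a suitable parser are skipped
            collected[path.name] = parser(path)
    return json.dumps(collected, indent=2)
```

The curate phase would then map keys of this JSON document onto the metadata fields of the target repository.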
Python DVUploader
A Python equivalent to the DVUploader written in Java. It complements other libraries written in Python and facilitates the upload of files to a Dataverse instance via direct upload.
- Parallel direct upload to a Dataverse backend storage
- Files are streamed directly instead of being buffered in memory
- Supports multipart uploads and chunks data accordingly
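The chunked, parallel pattern in the feature list can be sketched with the standard library. The `upload_chunk` function below is a stand-in that only records what it would send; the real uploader issues HTTP multipart requests against a Dataverse backend, and the tiny chunk size is purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4  # bytes; real multipart chunks are megabytes in size

def chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split the payload into numbered chunks for multipart upload."""
    for i in range(0, len(data), size):
        yield i // size, data[i:i + size]

def upload_chunk(part):
    # Stand-in for an HTTP multipart request to the backend storage.
    index, payload = part
    return index, len(payload)

data = b"0123456789abcdef"
with ThreadPoolExecutor(max_workers=4) as pool:
    results = sorted(pool.map(upload_chunk, chunks(data)))
print(results)  # -> [(0, 4), (1, 4), (2, 4), (3, 4)]
```

Because chunks are generated lazily and handed to a thread pool, the whole file never needs to sit in memory at once, which is the streaming behavior the feature list describes.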
Simulation Software
DuMux 3.6.0
Release 3.6.0 of DuMux, DUNE for Multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media. DuMux is a free and open-source simulator for flow and transport processes in and around porous media. It is based on the Distributed and Unified Numerics Environment DUNE.
DuMux 3.7.0
Release 3.7.0 of DuMux, DUNE for Multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media. DuMux is a free and open-source simulator for flow and transport processes in and around porous media. It is based on the Distributed and Unified Numerics Environment DUNE.
DuMux 3.8.0
Release 3.8.0 of DuMux, DUNE for Multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media. DuMux is a free and open-source simulator for flow and transport processes in and around porous media. It is based on the Distributed and Unified Numerics Environment DUNE.
Micro Manager Version v0.3.0
The Micro Manager is a tool to facilitate solving two-scale (macro-micro) coupled problems using the coupling library preCICE. The compressed source files of this data set are only meant to archive the version v0.3.0. If you want to use the Micro Manager, please follow the information on the preCICE website. This version of the Micro Manager is compatible with preCICE v2.5.0.
Softwarepackage CCMOR2
The dataset entails the code of the CCMOR2 package, developed in MATLAB. This project aims to model and perform certified model order reduction on multi-physical systems. The systems are formulated in the port-Hamiltonian framework, which incorporates useful system-theoretic properties such as stability and passivity. The approach is subdivided into three parts:
- Modeling of the multi-physical system in the port-Hamiltonian framework
- Reduction of the high-dimensional system in a structure-preserving manner
- Certification of the reduction by performing an a-posteriori error analysis
We refer to the README for further information on how to use this software.
preCICE Distribution Version v2104.0
The preCICE distribution is the larger ecosystem around preCICE, which includes the core library, language bindings, adapters for popular solvers, tutorials, and Vagrant files to prepare a virtual machine image. The compressed source files of this data set are only meant to archive this specific version v2104.0 of the distribution. If you want to use preCICE, please follow the information on the preCICE website. For your first steps with preCICE, we particularly recommend the virtual machine, which contains all packages of the preCICE distribution, many supported solvers, and more useful tools.
Visualisation
Dataverse HDF5 Previewer
This is the source code of the Dataverse HDF5 file previewer, which has been adapted from the H5Web demo and adjusted to Dataverse's requirements for external tools. The following adjustments have been made:
- Reduction to the H5Wasm example (the others were not applicable)
- Query parsing for the siteUrl, fileid and key parameters
- GET request to fetch files from a Dataverse installation
- Build options resulting in a non-index HTML file for previewer hosting
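The query-parsing step in the list above can be illustrated in a few lines: Dataverse calls an external tool with the target installation, file id, and API key encoded as URL parameters, which the previewer must extract. The parameter names come from the list above; the example URL and values are made up, and the actual previewer performs this in TypeScript rather than Python.

```python
from urllib.parse import urlparse, parse_qs

def parse_preview_url(url: str) -> dict:
    """Extract the siteUrl, fileid and key parameters from the URL
    that Dataverse passes to an external previewer tool."""
    params = parse_qs(urlparse(url).query)
    return {name: params[name][0] for name in ("siteUrl", "fileid", "key")}

cfg = parse_preview_url(
    "https://previewer.example/?siteUrl=https://demo.dataverse.org"
    "&fileid=42&key=abc123"
)
print(cfg)
```

With these three values the previewer can issue its GET request for the file contents against the calling Dataverse installation.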
Workflow Manager
ZnTrack
ZnTrack (zɪŋk træk) is a lightweight and easy-to-use package for tracking parameters in your Python projects using DVC. With ZnTrack, you can define parameters in Python classes and monitor how they change over time. This information can then be used to compare the results of different runs, identify computational bottlenecks, and avoid re-running code components whose parameters have not changed.
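The skip-if-unchanged behavior described above boils down to hashing a step's parameters and re-executing only when the hash changes. ZnTrack's real interface is class-based and backed by DVC; the plain functions, names, and in-memory cache below are an invented analogy.

```python
import hashlib
import json

def params_hash(params: dict) -> str:
    """Deterministic fingerprint of a step's parameters."""
    payload = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

cache = {}

def run_step(name, params, fn):
    key = (name, params_hash(params))
    if key not in cache:          # first run, or parameters changed
        cache[key] = fn(**params)
    return cache[key]             # unchanged parameters: reuse result

result = run_step("square", {"x": 3}, lambda x: x * x)
print(result)  # -> 9; a second call with x=3 reuses the cached result
```

DVC applies the same fingerprinting idea to files and pipeline stages on disk, so entire workflow steps are skipped when neither code inputs nor parameters have changed.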