C-tau

From BioUML platform
Revision as of 16:53, 17 March 2022 by Ilya Kiselev (Talk | contribs)


C-tau is a platform based on the BioUML platform for the analysis of experimental data in nuclear physics. It was created for the Super C-tau Factory project developed by the Institute of Nuclear Physics SB RAS (https://ctd.inp.nsk.su/c-tau/). It does not include BioUML plugins specific to biomedical data and analysis (e.g. SBML and SBGN support).

The C-tau platform integrates the following additional technologies:

ROOT (http://root.cern.ch) - a set of tools for working with large amounts of data. ROOT uses a system-independent binary file format to store data. The user can import such a file into the platform and feed it to an analysis input, or open it in a separate tool that uses the JSROOT library to visualize the data contained in the file. ROOT and Jupyter notebooks can be used in combination: ROOT commands can be executed through a Jupyter notebook to load, analyze and visualize data in the platform (fig. X).
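A Jupyter cell using ROOT might look like the following minimal sketch. The file name "events.root" and histogram name "h_energy" are placeholders, and the sketch assumes a PyROOT-enabled kernel; it degrades gracefully when ROOT is not installed.

```python
# Hedged sketch: loading a histogram from a ROOT file in a Jupyter cell.
# "events.root" and "h_energy" are illustrative names, not part of the platform.
import importlib.util

def load_histogram(path, hist_name):
    """Return the named histogram from a ROOT file, or None if unavailable."""
    if importlib.util.find_spec("ROOT") is None:
        return None  # PyROOT kernel not available in this environment
    import ROOT
    f = ROOT.TFile.Open(path)
    if not f or f.IsZombie():
        return None  # file missing or unreadable
    return f.Get(hist_name)

hist = load_histogram("events.root", "h_energy")
```

In a real notebook the returned histogram would then be drawn, e.g. on a `ROOT.TCanvas`, with JSROOT providing the in-browser rendering.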

Jade diagram type - a special diagram type that integrates AGNES (a library developed at the Institute of Computational Mathematics and Mathematical Geophysics SB RAS) for agent-based modeling of telecommunication systems on top of Jade (https://jade.tilab.com). To integrate AGNES, we included Jade in the platform and created a software module that allows the user to create AGNES simulation models as visual diagrams, edit agent properties and run numerical calculations (fig. Y). Each agent type has its own graphic notation. Each object in the diagram corresponds to one group of agents with the same properties; the number of agents in each group is given in parentheses. Arrows between objects indicate connections between agents. When a calculation is started from the visual representation, an XML file with AGNES settings is generated and the simulation is run using Jade.
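The diagram-to-XML step can be sketched as follows. The element and attribute names below are assumptions for illustration only; the real settings schema is defined by the AGNES library and is not shown here.

```python
# Hedged sketch: serializing diagram objects (agent groups) and arrows (links)
# into an XML settings string. The schema here is hypothetical.
import xml.etree.ElementTree as ET

def diagram_to_settings(groups, links):
    """Serialize agent groups (name, type, count) and links into XML text."""
    root = ET.Element("agnes-settings")
    for name, agent_type, count in groups:
        # one element per group of agents with the same properties
        ET.SubElement(root, "group", name=name, type=agent_type,
                      count=str(count))
    for src, dst in links:
        # arrows between diagram objects become links between groups
        ET.SubElement(root, "link", {"from": src, "to": dst})
    return ET.tostring(root, encoding="unicode")

xml_text = diagram_to_settings(
    [("routers", "Router", 5), ("clients", "Client", 100)],
    [("clients", "routers")],
)
```

The generated file would then be passed to Jade to start the simulation.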

The CWL (Common Workflow Language, www.commonwl.org) standard is used as follows (fig. Z):

  • to describe program launch options
  • to run the program with these parameters on any computing node of the cluster
  • to transfer the necessary input files from the data warehouse to the required computing node, as well as to transfer the results obtained to the data warehouse
  • to perform several tasks within one scenario (workflow).
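A CWL tool description covering the first three points might look like the sketch below. The program name, Docker image and parameters are illustrative placeholders, not the platform's actual tools; file staging to and from the warehouse is handled by the CWL runner based on the declared `File` inputs and outputs.

```yaml
# Hypothetical CWL CommandLineTool description (all names are illustrative).
cwlVersion: v1.2
class: CommandLineTool
baseCommand: analyze_events          # placeholder program name
requirements:
  DockerRequirement:
    dockerPull: ctau/analysis:latest # placeholder container image
inputs:
  events:                            # input file staged to the compute node
    type: File
    inputBinding:
      position: 1
  threshold:                         # launch option passed on the command line
    type: float
    inputBinding:
      prefix: --threshold
outputs:
  result:                            # result file transferred back afterwards
    type: File
    outputBinding:
      glob: result.root
```

Several such tools can then be chained in a `class: Workflow` document to cover the fourth point.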

Computing nodes are integrated into a cluster under the control of the SLURM workload manager (https://github.com/SchedMD/slurm). Each computational task has associated meta-information about the number of processors and the amount of RAM required for execution. When a compute task arrives, SLURM finds a compute node with available resources and launches a Docker container to run the simulation. If the resources required by the task are not available, the task is placed in the SLURM queue and waits for resources to be released.
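The resource metadata can be mapped onto a SLURM submission roughly as in this sketch. The helper name, image and values are illustrative; `--cpus-per-task`, `--mem` and `--wrap` are standard `sbatch` options.

```python
# Hedged sketch: composing an sbatch command that requests the task's
# processors and RAM and wraps a docker run. Image and program names are
# placeholders, not the platform's actual containers.
def submit_command(cpus, mem_gb, image, program):
    """Build an sbatch command line for a containerized simulation task."""
    docker = f"docker run --rm --cpus={cpus} --memory={mem_gb}g {image} {program}"
    return [
        "sbatch",
        f"--cpus-per-task={cpus}",  # processors required by the task
        f"--mem={mem_gb}G",         # RAM required by the task
        f"--wrap={docker}",         # command executed on the chosen node
    ]

cmd = submit_command(4, 16, "ctau/sim:latest", "run_simulation")
```

If no node currently satisfies the request, SLURM keeps the job queued until the resources become free, as described above.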
