
1.1. CPO - Consistent Physical Object

A Consistent Physical Object (CPO) is a data structure that contains the relevant information on a physical entity, e.g. the plasma equilibrium, the core plasma profiles (densities, temperatures, etc.), distribution functions, and so on.

What is important from the user's perspective is the way CPO data are stored. The structure of each CPO can be browsed in the UAL documentation available on the ITM Portal.

Accessing the documentation is fairly easy: open the ITM Portal web page in a browser and navigate to the ISIP-related documentation.

 

[Image: ITM Portal page]

 

Once the documentation is open, browse to "Data structure" (it is located at the bottom of the picture).

 

[Image: ITM Portal - ISIP related section]

 

You should see a web page similar to the one shown in the picture below.

 

[Image: UAL documentation]

 

After choosing "Browse", you will see the structure of the CPOs.

[Image: CPOs within data structure]

 

The picture above shows the collection of CPOs. All CPOs are bound to the top-level element. After you expand a particular CPO you can browse its details. In this case magdiag (Magnetic diagnostics) was expanded.

 

Tip

Downloading structure

If you plan to browse the structure extensively, it is advisable to download it to your local system.

 

Exercise no. 1 - After this exercise you will:
  • know how to find CPO related documentation
  • know how to browse through the tree structure
  • know how to access CPO details
Exercise no. 1 (approx. 10 min)

In this exercise you will browse documentation of CPO elements.

1. Log in to the ITM Portal

2. Browse to Documentation -> ISIP -> Data structure

3. Choose "Browse" from "Data structure 4.10b (Browse) (Download)"

4. Check a few CPOs to get familiar with the documentation

1.1 Structure of CPO

1.1.1 Time independent CPOs vs. time dependent CPOs

In general a CPO can contain both time-dependent and time-independent information. Typically, information related to the tokamak hardware will not be time-dependent, while the values of plasma physical quantities will likely be time-dependent. Therefore the fields of a CPO can be either time-dependent or time-independent. The CPO itself is time-independent if it contains only time-independent fields; otherwise it is time-dependent. Only a few CPOs in the ITM database are time-independent (e.g. topinfo), while the others describe physical phenomena which vary over time.

...

We may have in the workflow an arbitrary number of such modules, each of them producing a specific output. These output CPOs must initially be kept clearly separated, since they are produced by independent modules; they are therefore stored in multiple occurrences of the generic CPO. The various source modules may be called at different times during the workflow. This is allowed since all occurrences are independent: they can have an arbitrary number of time slices and each has its own time base.
During a workflow, multiple occurrences of CPOs of the same type can be used when there are multiple actors solving the same physical problem. In most cases, these occurrences are used for exchanging intermediate physical data during the workflow, i.e. data that the user does not want to store in the simulation results (such as the internal time steps of a transport equation solver).

1.1 Browsing CPOs

In this section we will take a closer look at the data. After copying the MDSPlus database files into your account's public area, you can dump the data using the cpodump script or the jTraverser application.
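As a rough sketch of that copy step (the source directory below is a hypothetical placeholder for wherever the example database files are provided; the target path matches the euitm_path used in Exercise 2), it could look like this:

Code Block (bash) - copying the example database files (sketch)
# create the expected directory layout under your public area
mkdir -p $HOME/public/itmdb/itm_trees/test/4.10a/mdsplus/0
# copy the MDSPlus tree files (they carry the "euitm_" prefix);
# /path/to/example/files is a placeholder, not an actual location
cp /path/to/example/files/euitm_* $HOME/public/itmdb/itm_trees/test/4.10a/mdsplus/0/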

1.1.1 CPO dump

 

Code Block (bash) - cpodump script
cpodump 13 3 equilibrium
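The two numeric arguments are assumed here to be the shot and run numbers of the database entry, followed by the CPO name (this interpretation is an assumption, not stated above). Since the dump of a large CPO can be lengthy, it may be convenient to redirect it to a file:

Code Block (bash) - saving a CPO dump to a file (sketch)
# same call as above; assuming the dump is written to standard output,
# it can be redirected to a file for later inspection
cpodump 13 3 equilibrium > equilibrium_dump.txt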

1.1.2 jTraverser

 

Exercise no. 2 (approx. 10 min)

In this exercise you will browse the MDSPlus database using the jTraverser application.

1. Source the ITMv1 script by invoking:

source $ITMSCRIPTDIR/ITMv1 kepler test 4.10a > /dev/null

2. Set up the location of your database files (in this use case we will use the machine "test"):

setenv euitm_path $HOME/public/itmdb/itm_trees/test/4.10a/mdsplus/0

3. Start the jTraverser application:

jTraverser

4. Open the database file:

File -> Open
Tree: euitm
Shot: [the number following "euitm_" prefix]
5. You can now browse the data.
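For convenience, the commands from steps 1-3 are collected below exactly as they appear above (note that setenv is csh/tcsh syntax, matching the shell assumed in this exercise):

Code Block (shell) - Exercise 2: environment setup and starting jTraverser
# step 1: source the ITMv1 environment script
source $ITMSCRIPTDIR/ITMv1 kepler test 4.10a > /dev/null
# step 2: point the UAL to your local database files (machine "test")
setenv euitm_path $HOME/public/itmdb/itm_trees/test/4.10a/mdsplus/0
# step 3: start the jTraverser application
jTraverser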

 

1.2. UAL - Universal Access Layer

In order to cope with multiple languages while at the same time maintaining a unique structure definition, the UAL architecture defines two layers. The top layer provides the external Application Programming Interface (API), and its code is automatically produced from the XML description of the ITM database structure. For each supported programming language, a high-level layer is generated in the target language. The following sections describe the language-specific APIs and provide all the information required by simulation program developers.

The lower layer is implemented in C and provides unstructured data access to the underlying database. It defines an API which is used by all the high-level layer implementations. Knowledge of this API (presented in a later section) is not necessary for end users; it is only required by developers of new language-specific high-level implementations of the UAL and by developers of support tools for ITM management.

source: UAL User Guide

 

1.3. Kepler

Kepler is a workflow engine and design platform for analyzing and modeling scientific data. Kepler provides a graphical interface and a library of pre-defined components that enable users to construct scientific workflows covering a wide range of functionality. It is primarily designed to access, analyse, and visualise scientific data, but it can also be used to construct whole programs or to run pre-existing simulation codes.

Kepler builds upon the mature Ptolemy II framework, developed at the University of California, Berkeley. Kepler itself is developed and maintained by the cross-project Kepler collaboration.

The main components in a Kepler workflow are actors, which are used in a design (inherited from Ptolemy II) that separates workflow components ("actors") from workflow orchestration ("directors"), making components more easily reusable. Workflows can operate at very different levels of granularity, from low-level workflows (that explicitly move data around or start and monitor remote jobs, for example) to high-level workflows that interlink complex steps/actors. Actors can be reused to construct more complex actors, enabling complex functionality to be encapsulated in easy-to-use packages. A wide range of actors is available for use and reuse.

1.4. FC2K

FC2K is a tool for wrapping Fortran or C++ source code into a Kepler actor. Before using it, your physics code should be ITM-compliant (i.e. use CPOs as input/output). After running the ITMv1 script (to properly set up the environment variables), FC2K can be run simply by typing fc2k on the Linux command line. FC2K was developed by ISIP in Java/Python. You can find more information regarding FC2K at the following location.
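A minimal sketch of such a session, assuming the same environment setup as in Exercise 2 (the ITMv1 arguments may differ for your setup):

Code Block (shell) - launching FC2K (sketch)
# set up the ITM environment first (same script as in Exercise 2)
source $ITMSCRIPTDIR/ITMv1 kepler test 4.10a > /dev/null
# launch the FC2K graphical tool; wrapping a Fortran/C++ code into
# a Kepler actor is then configured interactively
fc2k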