...

Running these commands in the terminal starts the workflow. This means we are running a Kepler workflow on a PRACE machine that does not have the IMAS environment installed, since everything runs inside the Docker image. This is a sample of the output:


local/bin/udocker run imas /bin/bash
 
 ******************************************************************************
 *                                                                            *
 *               STARTING f3e7e0cb-ea21-3e9c-a826-7e5256354c57                *
 *                                                                            *
 ******************************************************************************
 executing: bash
f3e7e0cb$ module load imas kepler
f3e7e0cb$ module load keplerdir
f3e7e0cb$ imasdb test
f3e7e0cb$ export USER=imas
f3e7e0cb$ kepler -runwf -nogui -user imas /home/imas/simple-workflow.xml
The base dir is /home/imas/keplerdir/kepler/kepler
Kepler.run going to run.setMain(org.kepler.Kepler)
JVM Memory: min = 1G,  max = 8G, stack = 20m, maxPermGen = default
adding $CLASSPATH to RunClassPath: /usr/share/java/jaxfront/JAXFront Eclipse Example Project/lib/xercesImpl.jar:/usr/share/java/jaxfront/JAXFront Eclipse Example Project/lib/jaxfront-swing.jar:/usr/share/java/jaxfront/JAXFront Eclipse Example Project/lib/jaxfront-core.jar:/home/imas/imas/core/imas/3.20.0/ual/3.8.3/jar/imas.jar:/usr/share/java/saxon/saxon9he.jar:/usr/share/java/saxon/saxon9-test.jar:/usr/share/java/saxon/saxon9-xqj.jar
      [run] log4j.properties found in CLASSPATH: /home/imas/keplerdir/kepler/kepler/kepler-2.5/resources/log4j.properties
      [run] Initializing Configuration Manager.
      [run] Setting Java Properties.
      [run] Copying Module Files.
      [run] Initializing Module: core.
      [run] Initializing Module: gui.
      [run] Kepler Initializing...
      [run] Starting HSQL Server for hsqldb
      [run] INFO  (org.kepler.util.sql.HSQL:_getConnection:771) started HSQL server at jdbc:hsqldb:hsql://localhost:24131/hsqldb;filepath=hsqldb:file:/home/imas/.kepler/cache-2.5/cachedata/hsqldb
      [run] Starting HSQL Server for coreDB
      [run] INFO  (org.kepler.util.sql.HSQL:_getConnection:771) started HSQL server at jdbc:hsqldb:hsql://localhost:44781/coreDB;filepath=hsqldb:file:/home/imas/KeplerData/modules/core/db-2.5/coreDB
      [run] Debug execution mode
      [run]
      [run] Synchronized execution mode: true
      [run] Wait for Python to finish: true
      [run] Out idx chosen from first input IDS: true
      [run] Input IDSs slice mode: false
      [run] Output IDSs slice mode: false
      [run] creation of a temporary file...

 

Configure and submit workflow, then check output on GW

To configure a job we have to edit the configuration files copied to $HOME/.sumi:

Code Block
mkdir $HOME/.sumi
cp sumi/conf/*.conf $HOME/.sumi

jobs.conf

The configuration file jobs.conf, located in the local directory $HOME/.sumi/, contains the configuration for the jobs to be run. The sample configuration file at $SUMI_DIR/conf/jobs.conf has the following content:

Code Block
[test]
udocker = udocker.py
arguments =
cpus = 1
time = 1
threads_per_process = 1
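As a sketch of how a job entry of our own could be added, the snippet below appends a new INI section to the local configuration file. The `[myjob]` name and its argument string are assumptions for illustration; only the field names follow the sample above.

```shell
# Append a hypothetical [myjob] section to the local SUMI jobs file.
# The section name and argument string are illustrative assumptions;
# the field names follow the sample jobs.conf above.
CONF="${HOME}/.sumi/jobs.conf"
mkdir -p "$(dirname "$CONF")"

# Quoted 'EOF' keeps $HOME literal in the file, as in the samples.
cat >> "$CONF" <<'EOF'
[myjob]
udocker = $HOME/.local/bin/udocker
arguments = run imas /bin/bash -l script.sh
cpus = 1
time = 20
threads_per_process = 1
EOF

grep -q '^\[myjob\]' "$CONF" && echo "job section added"
# prints: job section added
```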

servers.conf

The configuration file servers.conf, located in the local directory $HOME/.sumi/, contains the configuration for the servers where SUMI will connect. The sample configuration file located at $SUMI_DIR/conf/servers.conf has the following content.

Code Block
[machine]
server = example.com
user = username
manager = slurm
protocol = ssh
upload_files =
upload_to =
download_files =
download_to =

To configure the login node of the remote supercomputer, just specify the login node address, your user name and the name of the resource manager; the accepted values are sge, slurm and pbs.
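Since only three resource managers are accepted, a small sanity check of the manager value can catch typos before submission. The check_manager helper below is an illustrative sketch, not something SUMI provides:

```shell
# Validate a servers.conf "manager" value against the accepted set.
# This helper is illustrative only; SUMI does not ship it.
check_manager() {
  case "$1" in
    sge|slurm|pbs) echo "manager '$1' is supported" ;;
    *) echo "manager '$1' is not supported (use sge, slurm or pbs)" >&2
       return 1 ;;
  esac
}

check_manager slurm
# prints: manager 'slurm' is supported
```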

SUMI allows files to be uploaded and downloaded automatically before and after the execution. For this we can assume a directory "mywf" in our local home directory containing a script.sh with all the instructions we want to run, as shown below:

Code Block
#!/bin/bash

module load imas kepler
module load keplerdir
imasdb test
export USER=imas
kepler -runwf -nogui -user imas /home/imas/simple-workflow.xml

Inside the "mywf" directory we will also have a new version of our "simple-workflow.xml" file, which will overwrite the existing one. These files will be copied inside the image, whose filesystem is contained in the directory ".udocker/containers/imas/ROOT/". Once our job has finished we want to copy the IDS files back to our local Gateway/ITER account.
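The remote-side paths involved can be composed as below; the container name imas and the directory layout come from the text above, while the variable names are just for illustration:

```shell
# Compose the remote-side paths used for uploads and results
# (layout as described above; variable names are illustrative).
CONTAINER_ROOT="${HOME}/.udocker/containers/imas/ROOT"

# Files uploaded here appear as /home/imas/... inside the image:
UPLOAD_TARGET="${CONTAINER_ROOT}/home/imas/"

# IDS files written by the workflow end up under the imasdb tree:
RESULTS="${CONTAINER_ROOT}/home/imas/public/imasdb/test/3/0/"

echo "$UPLOAD_TARGET"
echo "$RESULTS"
```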


Code Block
[marconi]
server = login.marconi.cineca.it
user = USER
manager = slurm
protocol = ssh
upload_files = /afs/eufus.eu/g2itmdev/user/USER/mywf/*
upload_to = /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/
download_files = /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/*
download_to = /afs/eufus.eu/g2itmdev/user/USER/mywf/

Finally, we configure the [test] job in jobs.conf so that udocker runs our script inside the image:

Code Block
[test]
udocker = $HOME/.local/bin/udocker
arguments = run imas /bin/bash -l script.sh
cpus = 1
time = 20
threads_per_process = 1
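To see what this configuration amounts to, the sketch below parses the [test] section and prints the command line it describes. The sed-based parsing is an illustrative assumption about the INI layout, not part of SUMI:

```shell
# Compose the udocker command described by a [test] job section.
# The parsing below is an illustrative assumption, not part of SUMI.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
[test]
udocker = $HOME/.local/bin/udocker
arguments = run imas /bin/bash -l script.sh
cpus = 1
time = 20
threads_per_process = 1
EOF

UDOCKER=$(sed -n 's/^udocker = //p' "$CONF")
ARGS=$(sed -n 's/^arguments = //p' "$CONF")
echo "$UDOCKER $ARGS"
# prints: $HOME/.local/bin/udocker run imas /bin/bash -l script.sh
```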

Once the job has been configured we can run it using the following command:

...

This will show the structure of the generated IDSs, demonstrating that they were created correctly.