...
Inside the "wf" directory we will also have a new version of our "simple-workflow.xml" file, which will overwrite the existing one. These files will be copied into the image, which is contained in the directory ".udocker/containers/imas/ROOT/". Once our job has finished we want to copy the IDS files back to our local Gateway/ITER machine. The following configuration is for a Gateway machine, but this only affects the paths of "upload_files" and "download_to"
Code Block |
---|
[marconi]
server = login.marconi.cineca.it
user = USER
manager = slurm
protocol = ssh
upload_files = /afs/eufus.eu/g2itmdev/USER/mywf/*
upload_to = /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/
download_files = /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/*
download_to = /afs/eufus.eu/g2itmdev/user/USER/public/imasdb/test/3/0/ |
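The machine section above uses standard INI syntax, so it can be read with an off-the-shelf INI parser. The following sketch uses Python's stdlib configparser purely as an illustration of the section layout; it is not SUMI's actual implementation.

```python
# Illustration only: reading a SUMI-style [marconi] machine section
# with Python's stdlib configparser (not SUMI's own code).
from configparser import ConfigParser
from io import StringIO

machine_conf = """
[marconi]
server = login.marconi.cineca.it
user = USER
manager = slurm
protocol = ssh
upload_files = /afs/eufus.eu/g2itmdev/USER/mywf/*
upload_to = /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/
"""

parser = ConfigParser()
parser.read_file(StringIO(machine_conf))

marconi = parser["marconi"]
print(marconi["server"])   # login.marconi.cineca.it
print(marconi["manager"])  # slurm
```

In a real setup the section would live in SUMI's configuration file rather than an inline string.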
The job configuration is the following
Code Block |
---|
[test]
udocker = $HOME/.local/bin/udocker
arguments = run imas /bin/bash -l script.sh
cpus = 1
time = 20
threads_per_process = 1 |
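The "udocker" and "arguments" entries combine into the command line that launches the container. A minimal sketch of how they fit together (the helper name build_command is hypothetical, used only for illustration):

```python
# Illustration: how the [test] job section's "udocker" path and
# "arguments" string combine into an argument vector.
import shlex

job = {
    "udocker": "$HOME/.local/bin/udocker",
    "arguments": "run imas /bin/bash -l script.sh",
}

def build_command(job):
    """Combine the udocker binary path with its split argument string."""
    return [job["udocker"]] + shlex.split(job["arguments"])

cmd = build_command(job)
print(cmd)
```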
...
Code Block |
---|
sumi -r -j test -m marconi |
This will copy the files, run the workflow and retrieve the output results.
The output will first show how the job is being configured and how the connection is set up
No Format |
---|
2018/12/19 17:09:47 INFO SUMI: Starting
2018/12/19 17:09:47 INFO SUMI: Reading local configuration
2018/12/19 17:09:47 INFO Job: configuring
2018/12/19 17:09:52 INFO SUMI: uploading files
2018/12/19 17:09:52 INFO Connected (version 2.0, client OpenSSH_6.6.1)
2018/12/19 17:09:53 INFO Authentication (publickey) successful! |
SUMI then copies the files from the Gateway to Marconi
No Format |
---|
2018/12/19 17:09:54 INFO [chan 0] Opened sftp connection (server version 3)
2018/12/19 17:09:54 INFO SUMI: scp /afs/eufus.eu/g2itmdev/user/USER/mywf/simple-workflow.xml /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/
2018/12/19 17:09:54 INFO SUMI: scp /afs/eufus.eu/g2itmdev/user/USER/mywf/script.sh /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/
2018/12/19 17:09:54 INFO [chan 0] sftp session closed. |
It then submits the job to Marconi's remote queueing system and waits for the job to finish
No Format |
---|
2018/12/19 17:09:59 INFO Job: starting
2018/12/19 17:09:59 INFO Job: ID [slurm+ssh://login.marconi.cineca.it]-[3264873]
2018/12/19 17:10:03 INFO Job: state Pending
2018/12/19 17:10:03 INFO Job: waiting
2018/12/19 17:17:13 INFO Job: State Done
2018/12/19 17:17:13 INFO Job: Exitcode 0 |
Once the job has finished, SUMI retrieves the results and finishes
No Format |
---|
2018/12/19 17:17:13 INFO SUMI: downloading files
2018/12/19 17:17:13 INFO Connected (version 2.0, client OpenSSH_6.6.1)
2018/12/19 17:17:14 INFO Authentication (publickey) successful!
2018/12/19 17:17:14 INFO [chan 1] Opened sftp connection (server version 3)
2018/12/19 17:17:14 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_10001.characteristics /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:23 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_10001.datafile /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:23 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_10001.tree /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:31 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_19999.characteristics /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:39 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_19999.datafile /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:39 INFO SUMI: scp /marconi/home/userexternal/USER/.udocker/containers/imas/ROOT/home/imas/public/imasdb/test/3/0/ids_19999.tree /afs/eufus.eu/g2itmdev/user/user/public/imasdb/test/3/0/
2018/12/19 17:17:47 INFO [chan 1] sftp session closed.
2018/12/19 17:17:47 INFO SUMI: Done |
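Before running idsdump it can be useful to confirm that every expected IDS file family actually arrived in the local imasdb tree. The sketch below is a hypothetical helper (not part of SUMI); the shot numbers 10001 and 19999 and the three file extensions are taken from the log above.

```python
# Hypothetical sanity check: list which expected IDS files are
# missing from a local imasdb directory after a SUMI download.
import os
import tempfile

def missing_ids_files(directory, shots=(10001, 19999)):
    """Return the expected IDS file names not present in directory."""
    expected = [
        f"ids_{shot}.{ext}"
        for shot in shots
        for ext in ("characteristics", "datafile", "tree")
    ]
    return [name for name in expected
            if not os.path.exists(os.path.join(directory, name))]

# Against an empty temporary directory, all six names are reported missing.
with tempfile.TemporaryDirectory() as tmp:
    print(missing_ids_files(tmp))
```

On the Gateway the directory to check would be the "download_to" path from the machine configuration.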
Once we have the files we can check that the results are correct by running idsdump, which will print the structure of the generated IDS and demonstrate that it was written correctly.
Code Block |
---|
idsdump 1 1 pf_active |
This will check that the whole process produced valid IDS.
Run as MPI job
The goal is to submit HPC-demanding workflows to supercomputers, so the image must support codes that use MPI. The uDocker instructions describe how to install OpenMPI inside the image, but the aim here is to use the MPI libraries of the host. This is one of the main challenges when running MPI codes, because the MPI libraries and their configuration are optimized for the underlying system.
When binding the paths that allow the uDocker image to use the host MPI libraries, it is important to export all the environment variables. This is because many MPI parameters are loaded from environment variables and then used to determine the behaviour of the code. To tell udocker to pass the host environment variables we will use the parameter "--hostenv".
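Putting these flags together, the shape of such an invocation can be sketched as follows. The helper name udocker_mpi_command and the argument values are placeholders; --hostenv, -v and -w are udocker's own options for passing the host environment, binding a host path and setting the working directory.

```python
# Illustration: composing a udocker invocation that exposes the host
# MPI installation (-v), sets the workdir (-w) and passes the host
# environment (--hostenv). Helper name and values are placeholders.
def udocker_mpi_command(mpi_path, image, inner_cmd):
    """Build the argv list for running inner_cmd inside the image."""
    return (
        ["udocker.py", "run", "--hostenv",
         "-v", mpi_path, "-w", mpi_path, image]
        + inner_cmd
    )

cmd = udocker_mpi_command(
    "/cineca/prod/opt/compilers/intel/", "imas", ["/bin/bash"]
)
print(" ".join(cmd))
```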
The following sections describe how to run MPI codes inside a uDocker image on different supercomputers
Marconi
As mentioned before, Marconi is one of the targeted supercomputers for this project. In this example the Intel MPI library will be loaded.
Because the binary will use the underlying environment and the host libraries will be loaded, the binary can be compiled on the host system and then run on the guest system.
Code Block |
---|
module load intel/pe-xe-2018--binary
module load intelmpi/2018--binary
mpicc code.c -o binary |
In the case of Marconi, running the command "module show" reveals where the Intel libraries are installed, so that the image can be given access to that path. In this case the path is /cineca/prod/opt/compilers/intel. This path must be accessible inside uDocker to retrieve all the necessary data, so it has to be specified with the "-v" parameter, which mounts a host path inside the image. In the described case, to get a bash command line, run the following command
Code Block |
---|
udocker.py run --hostenv -v /cineca/prod/opt/compilers/intel/ -w /cineca/prod/opt/compilers/intel/ imas /bin/bash |
MareNostrum IV
MareNostrum IV has a different setup but a similar approach. In this case the test will also use the Intel MPI library
Code Block |
---|
module load intel/2017.4
module load impi/2017.4 |
The modules have to be loaded on the host system to define the environment variables. Once this is done we can launch a shell script which runs the MPI code inside the container, as follows.
Code Block |
---|
udocker.py run --hostenv -v /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/ -v $HOME -w $HOME imaswf /bin/bash -c "chmod +x launch_mm.sh; ./launch_mm.sh" |