...

This will check that the whole process produced valid IDS.

 

Run as an MPI job

The goal is to submit HPC-demanding workflows to supercomputers, so the image must support running MPI codes. When running an MPI job, the MPI libraries can either be installed inside the image or taken from the host. The uDocker documentation describes how to install OpenMPI inside the image, but the aim here is to use the MPI libraries of the host. Using the host MPI libraries is one of the main challenges when running MPI codes, because those libraries and their configuration are optimized for the underlying system.

When binding the paths that allow the uDocker image to use the host MPI libraries, it is important to export all the environment variables. Many MPI settings are stored in environment variables, which are then read by the compiler and the runtime to determine their behaviour. To tell udocker to pass the host environment variables into the container, the parameter "--hostenv" is used.
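To verify that the host environment actually reaches the container, the MPI-related variables can be listed from inside it. This is a minimal sketch, assuming the "imas" image used in the examples below:

Code Block
languagebash
# List the MPI-related environment variables as seen inside the container
udocker.py run --hostenv imas /bin/bash -c "env | grep -i mpi"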

The following sections describe how to run MPI codes inside a uDocker image on different supercomputers.

Marconi

As mentioned before, Marconi is one of the target supercomputers for this project. In this example the Intel MPI library is used.

Because the binary uses the underlying environment and the host libraries are loaded, it can be compiled on the host system and then run on the guest system.

Code Block
languagebash
module load intel/pe-xe-2018--binary
module load intelmpi/2018--binary
mpicc code.c -o binary
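
Before running the binary inside the container, it can be useful to check which MPI shared libraries it expects, since these are the libraries that must be visible inside the image. This is a hedged check, assuming the output name "binary" from the compile command above:

Code Block
languagebash
# Show the dynamic MPI dependencies of the host-compiled binary (e.g. Intel MPI's libmpi)
ldd binary | grep -i mpi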

 

On Marconi, the command "module show" can be used to find out where the Intel libraries are installed, so that the image can be given access to that path. In this case the path is /cineca/prod/opt/compilers/intel. This path must be accessible inside uDocker to retrieve all the necessary data, so the "-v" parameter is used to mount the host path inside the image. In the described case, to get a bash command line, run the following command:

Code Block
languagebash
udocker.py run --hostenv -v /cineca/prod/opt/compilers/intel/ -w /cineca/prod/opt/compilers/intel/  imas /bin/bash
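
From this point, one possible launch pattern, sketched here under the assumption that the binary compiled above is available in $HOME and that four ranks are wanted, is to let the host mpirun start one uDocker container instance per MPI rank; on Marconi the actual run would normally go through the batch system:

Code Block
languagebash
# Hypothetical launch: the host mpirun starts N processes, each running the
# host-compiled binary inside its own uDocker container instance
mpirun -np 4 udocker.py run --hostenv -v /cineca/prod/opt/compilers/intel/ -v $HOME -w $HOME imas $HOME/binary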

 

MareNostrum IV

MareNostrum IV has a different setup but follows a similar approach. In this case the Intel MPI library is also used for testing.

Code Block
languagebash
module load intel/2017.4
module load impi/2017.4 

 

The modules have to be loaded on the host system to define the environment variables. Once that is done, a shell script that runs the MPI code inside the container can be launched as follows.

 

Code Block
languagebash
udocker.py run --hostenv -v /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/ -v $HOME -w $HOME  imaswf /bin/bash -c "chmod +x launch_mm.sh; ./launch_mm.sh"
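
The contents of launch_mm.sh are not shown here; a hypothetical minimal version, assuming the MPI binary was compiled on the host (as in the Marconi example) and placed in $HOME, could look like this:

Code Block
languagebash
#!/bin/bash
# Hypothetical launch_mm.sh: run the host-compiled MPI binary using the Intel MPI
# runtime mounted from the host; the rank count falls back to 4 outside a batch job
mpirun -np ${SLURM_NTASKS:-4} ./binary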