1. Container image

  • Images are binary files containing all the data and metadata required to start the container
  • They can be built locally or downloaded from remote locations
  • The most common standard is the Docker image format

1.1. Docker image format

  • A Docker image is a tar archive with metadata and layers

  • Each layer consists of its own metadata and another tar archive with the set of changes the layer introduces

  • The first metadata file in the image is manifest.json:

    [
      {
        "Config": "f63181f19b2fe819156dcb068b3b5bc036820bec7014c5f77277cfa341d4cb5e.json",
        "RepoTags": [
          "ubuntu:latest"
        ],
        "Layers": [
          "151ae8ef4f042fd5173fd2497f0a365b4413468163e7bd567146f29dcfea3517/layer.tar",
          "2872658e1abe34d0c7391abbc0848fdeddb456659e39511df0574fcfc8b7ad70/layer.tar",
          "2b83a9243dd8405d0811beeb14aeb797745b100e4538d056adb63fcc6b47c59f/layer.tar"
        ]
      }
    ]
    
  • It contains:

    • Config -- path to configuration file (architecture requirements, etc.),
    • RepoTags -- the list of tags used,
    • Layers -- paths to tar files containing layer information
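
  • You can inspect this structure yourself with standard tools (a minimal sketch; ubuntu:latest is just an example and must already be present locally):

    docker save ubuntu:latest -o ubuntu.tar   # export the image as a tar archive
    tar tf ubuntu.tar                         # list the metadata files and layer directories
    tar xf ubuntu.tar manifest.json           # extract only the manifest
    cat manifest.json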

1.2. OCI image format

  • An alternative format was proposed by OCI (Open Container Initiative)

  • It is also a tar archive containing metadata and layers in the form of embedded tar archives

  • The first metadata file is index.json:

    {
      "schemaVersion": 2,
      "manifests": [
        {
          "mediaType": "application/vnd.oci.image.manifest.v1+json",
          "digest": "sha256:7ad481b55901a1b5472c0e1b3fbf0bf2867dc38feb6eb7a18cd310f00208e05c",
          "size": 658
        }
      ]
    }
    
  • The manifest contains references (by digest) to the configuration and the layers:

    {
      "schemaVersion": 2,
      "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:10bdc2317d43a5421151e135881e172002c7d61e934de7e1e79df560a151f112",
        "size": 2427
      },
      "layers": [
        {
          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
          "digest": "sha256:f3f8f4bd7c131f4d967bc162207ab72c24f427915682f895eb4f793ad05d7e35",
          "size": 29989546
        },
        {
          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
          "digest": "sha256:0188b501936213b7cd0b5333245960781a8b035249cfa427fe9a229fe557c624",
          "size": 924
        },
        {
          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
          "digest": "sha256:db861e57845ea7ba52a2ac277abbdd8cd04bda5db69c49bf95be49d11e5a47e1",
          "size": 202
        }
      ]
    }
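
  • One way to obtain an image in this layout locally is to copy it with the skopeo tool (a sketch, assuming skopeo is installed; the ubuntu image is just an example):

    # copy ubuntu:latest from Docker Hub into a local OCI layout directory
    skopeo copy docker://docker.io/library/ubuntu:latest oci:ubuntu-oci:latest
    cat ubuntu-oci/index.json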
    

1.3. Local vs remote

  • Both formats describe how the image is stored as a local file

  • When transferring an image from a remote server, the client asks for the list of layers, checks its cache contents and finally downloads only the missing layers

  • This allows layers to be reused between images that build on one another

  • For example:

    • Let's say image A is based on ubuntu
    • Image B extends A by adding Python executable on top
    • Image C also extends A, but it adds Apache HTTP server instead
    • A first-time user of image B will download the layers from ubuntu, then from A and finally from B
    • When they later want to use image C, only the layers with the Apache HTTP server will be downloaded, as all the previous ones are still in the cache (see the sketch below)
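
    • A minimal sketch of such Dockerfiles (hypothetical image names; python3 and apache2 stand in for the added software):

      # Dockerfile for image A (built as: docker build -t a .)
      FROM ubuntu
      RUN echo "shared base layer" > /etc/motd

      # Dockerfile for image B (adds Python on top of A)
      FROM a
      RUN apt-get update -y && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*

      # Dockerfile for image C (adds Apache HTTP server on top of A)
      FROM a
      RUN apt-get update -y && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*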

2. Building container images

  • The most straightforward approach is to use the Dockerfile format
  • A Dockerfile is a text file with commands describing the recipe to build the image
  • The first command is FROM <image>, which specifies what to base the image on
  • The RUN <command> executes a command during the build
  • The CMD <command> configures the default command a container will run when started
  • The EXPOSE <port> adds metadata about a port a service inside the image will listen on (Important! The container creator decides which ports to publish and how. Exposing a port in the Dockerfile serves as a form of documentation.)
  • The ENV <key>=<value> sets environment variable values
  • The COPY <src> <dest> copies files from the build context into the image
  • The WORKDIR <dir> sets the current working directory as seen inside the container
  • The ARG <name>=<default> configures a build-time argument that can be overridden by the image builder

2.1. Example

  • Contents of index.html:

    <h1>Hello World from Docker!</h1>
    
  • Contents of Dockerfile:

    FROM ubuntu
    ENV DEBIAN_FRONTEND=noninteractive
    COPY index.html /var/www/html/index.html
    RUN apt-get update -y
    RUN apt-get install -y apache2
    EXPOSE 80
    CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]
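
  • The image can then be built and tested like this (a sketch; the my-apache tag and host port 8080 are arbitrary choices):

    docker build -t my-apache .          # build the image from the Dockerfile in the current directory
    docker run -d -p 8080:80 my-apache   # publish container port 80 on host port 8080
    curl http://localhost:8080           # returns the Hello World page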
    

3. Optimization

  • The layer system allows caching, but it also has consequences you need to be aware of

  • Scenario 1. Image A creates /bigfile; image B, which extends A, deletes it. The deletion is merely masked -- i.e. the user of B will not see /bigfile, but the file is still part of image B and still takes up a lot of space

  • Scenario 2. One of the layers contains secrets (passwords, unencrypted private keys, etc.) and the next layers delete them. Even though the secrets are not directly visible, one can extract the tar archive of every layer and get access to them anyway (see the sketch below)
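
    • A minimal way to see both effects (a sketch; my-image is a hypothetical image name):

      docker history my-image              # per-layer sizes: the "deleted" big file still counts in an earlier layer
      docker save my-image -o my-image.tar # dump the image with all of its layers
      tar xf my-image.tar                  # each <layer-id>/layer.tar can now be inspected individually
      tar tf <layer-id>/layer.tar          # lists the files added by that layer, including deleted secrets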

  • In Dockerfile, each command creates a separate layer, so you should usually do this:

    • Combine subsequent RUN commands:

      -RUN touch /test1
      -RUN date > /test2
      +RUN touch /test1 && date > /test2
      
    • In a combined RUN, make sure to delete all temporary files:

      -RUN apt-get update -y
      -RUN apt-get install -y git
      -RUN rm -rf /var/lib/apt/lists/*
      +RUN apt-get update -y \
      + && apt-get install -y git \
      + && rm -rf /var/lib/apt/lists/*
      
      -RUN curl URL > archive.tar
      -RUN tar xf archive.tar
      -RUN rm archive.tar
      +RUN curl URL | tar x
      
  • The build procedure also makes use of caching

  • Scenario 1:

    • Layer 1: Install Apache HTTP server
    • Layer 2: Copy index.html

  • Scenario 2:

    • Layer 1: Copy index.html
    • Layer 2: Install Apache HTTP server

  • Both scenarios create equivalent images, but if you change index.html, then in Scenario 2 both layers will be rebuilt, while in Scenario 1 only the second one

  • A good practice is to order the layers by how likely they are to change (the more likely a layer is to change, the later it should come)

3.1. Example after optimization

  • Contents of Dockerfile:

    FROM ubuntu
    ENV DEBIAN_FRONTEND=noninteractive
    RUN apt-get update -y \
        && apt-get install -y \
            apache2 \
        && rm -rf /var/lib/apt/lists/*
    EXPOSE 80
    CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]
    COPY index.html /var/www/html/index.html

4. Multi-stage Dockerfiles

  • Sometimes, to build an image, you first need to generate resources, e.g. compile and build a JAR file from a Java project

  • In such cases, the source code and the set of tools required to process it are usually not needed in the final image

  • To optimize the image, you could try performing the following in a single RUN command:

    • Copy in the source code
    • Install build-time dependencies (compilers, etc.)
    • Build everything
    • Transfer the generated resource to its final destination
    • Remove all intermediate files

  • This is a lot to be done in a single command, so it is prone to errors and hard to debug

  • To overcome this, you can use multi-stage building, which is like building multiple images simultaneously and freely transferring files between them

  • Each stage starts with its own FROM <image> AS <stage> command

  • You can copy files between images created in separate stages using COPY --from=<stage> syntax

4.1. Example

  • Contents of hello.go:

    package main
    import "fmt"
    func main() {
        fmt.Println("hello world")
    }
    
  • Contents of Dockerfile:

    FROM golang AS builder
    COPY hello.go hello.go
    RUN go build hello.go
    
    FROM ubuntu
    COPY --from=builder /go/hello /usr/bin/hello
    CMD /usr/bin/hello
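
  • Building and running it (a sketch; the hello tag is arbitrary):

    docker build -t hello .   # the golang builder stage is not part of the final image
    docker run --rm hello     # prints: hello world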
    

5. BuildKit

  • Starting with version 18.09, Docker is shipped with two build engines: the legacy one (used by default) and BuildKit

  • To use the new engine, you have to set the DOCKER_BUILDKIT=1 environment variable
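
  • For example, in a Bash shell (a minimal invocation; any docker build command works the same way):

    export DOCKER_BUILDKIT=1
    docker build .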

  • The new engine has the following advantages:

    • Steps of independent stages are executed in parallel

    • You can pass private SSH keys to the build process and be sure they do not end up in any layer or metadata:

      RUN --mount=type=ssh <command>
      
      docker build --ssh default .
      
    • Similarly, you can pass any secrets to the build process:

      RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
      
      docker build --secret id=mysecret,src=file.txt .
      
  • And the following disadvantage:

    • It is harder to debug problems

6. Debugging (legacy engine)

  • Many things might go wrong during Docker image preparation
  • When building with the legacy engine, each command in the Dockerfile creates a layer, which gets stored under a unique ID
  • If something goes wrong, you can instantiate an interactive container from the last layer that was built successfully and look for clues

6.1. Example

  • Contents of Dockerfile

    FROM alpine
    RUN date > /tmp/build-date.txt
    RUN cat /tmp/build-dat.txt > /tmp/final-date.txt
    
  • Results of docker build .:

    Sending build context to Docker daemon  2.048kB
    Step 1/3 : FROM alpine
     ---> 14119a10abf4
    Step 2/3 : RUN date > /tmp/build-date.txt
     ---> Running in 3fe480f490d7
    Removing intermediate container 3fe480f490d7
     ---> 7eadcff6ea01
    Step 3/3 : RUN cat /tmp/build-dat.txt > /tmp/final-date.txt
     ---> Running in 7cba04ecd3e3
    cat: can't open '/tmp/build-dat.txt': No such file or directory
    The command '/bin/sh -c cat /tmp/build-dat.txt > /tmp/final-date.txt' returned a non-zero code: 1
    
  • The result of FROM command is stored as 14119a10abf4

  • The result of the first RUN command is stored as 7eadcff6ea01

  • The second RUN fails, so you debug it by starting an interactive container (the -it switch):

    docker run -it 7eadcff6ea01
    
  • Now you can try to execute the command that failed: cat /tmp/build-dat.txt

  • Then figure out why it failed, what could cause it, etc.

7. Debugging (BuildKit)

  • With BuildKit, layers are no longer stored after each command in the Dockerfile
  • To improve performance, BuildKit only stores the image when a stage is finished
  • To debug problems, you have to exploit this rule and introduce artificial stage beginnings and ends

7.1. Example

  • For the same Dockerfile as before, with BuildKit you will see the following output (with the plain progress style, configured by running docker build --progress plain .):

    #2 [internal] load .dockerignore
    #2 sha256:28b059ecac284a33ba98daa285c6a068d86485b54afc2e67f18e2bd1640d871a
    #2 transferring context: 2B done
    #2 DONE 0.1s
    
    #1 [internal] load build definition from Dockerfile
    #1 sha256:cbd3d6400308afcf33c0910b894d2e44156fc4127a0db290d19df5a4e8eae37e
    #1 transferring dockerfile: 37B done
    #1 DONE 0.3s
    
    #3 [internal] load metadata for docker.io/library/alpine:latest
    #3 sha256:d4fb25f5b5c00defc20ce26f2efc4e288de8834ed5aa59dff877b495ba88fda6
    #3 DONE 0.0s
    
    #4 [1/3] FROM docker.io/library/alpine
    #4 sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7
    #4 CACHED
    
    #5 [2/3] RUN date > /tmp/build-date.txt
    #5 sha256:684d70446f71e64256b21f59555e6fedc1eac55780675519af54f9e174fd16e1
    #5 DONE 1.2s
    
    #6 [3/3] RUN cat /tmp/build-dat.txt > /tmp/final-date.txt
    #6 sha256:bde8a93c3b05727094dd3d24e010c506f451a33718cc07f32f4e6b1ccab0b645
    #6 0.925 cat: can't open '/tmp/build-dat.txt': No such file or directory
    #6 ERROR: executor failed running [/bin/sh -c cat /tmp/build-dat.txt > /tmp/final-date.txt]: exit code: 1
    ------
     > [3/3] RUN cat /tmp/build-dat.txt > /tmp/final-date.txt:
    ------
    executor failed running [/bin/sh -c cat /tmp/build-dat.txt > /tmp/final-date.txt]: exit code: 1
    
  • Although you will notice lines starting with sha256:..., they do not correspond to intermediate layer IDs

  • To be able to debug the problem as previously, you have to alter the Dockerfile by (1) naming the debugged stage, if not yet done, and (2) adding a FROM scratch line just before the command you need to debug:

    -FROM alpine
    +FROM alpine AS debug
     RUN date > /tmp/build-date.txt
    +FROM scratch
     RUN cat /tmp/build-dat.txt > /tmp/final-date.txt
    
  • Now you can tell BuildKit to build only the debug target:

    docker build --progress plain --target debug .
    
    #1 [internal] load build definition from Dockerfile
    #1 sha256:ffe58018ac4c453ce043471e51216f7528c1fe315a9a83fa1ad276df0ac9f8a6
    #1 transferring dockerfile: 157B done
    #1 DONE 0.1s
    
    #2 [internal] load .dockerignore
    #2 sha256:dda1a34cfb4f2eb8169d58953a937062de5c70a0ae78ca49118e49ea8279a7b7
    #2 transferring context: 2B done
    #2 DONE 0.2s
    
    #3 [internal] load metadata for docker.io/library/alpine:latest
    #3 sha256:d4fb25f5b5c00defc20ce26f2efc4e288de8834ed5aa59dff877b495ba88fda6
    #3 DONE 0.0s
    
    #4 [debug 1/2] FROM docker.io/library/alpine
    #4 sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7
    #4 DONE 0.0s
    
    #5 [debug 2/2] RUN date > /tmp/build-date.txt
    #5 sha256:684d70446f71e64256b21f59555e6fedc1eac55780675519af54f9e174fd16e1
    #5 CACHED
    
    #6 exporting to image
    #6 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
    #6 exporting layers
    #6 exporting layers 0.4s done
    #6 writing image sha256:0846d70d4f562fa95835da5893cb53beb7f623cf228f57460bd298a0f87da680 0.0s done
    #6 DONE 0.5s
    
  • Finally, you can start an interactive container using the hash of the written image:

    docker run -it 0846d70d4f562fa95835da5893cb53beb7f623cf228f57460bd298a0f87da680
    

8. IMAS Docker

8.1. General remarks

  • The IMAS Docker build project is available at git.iter.org as IMEX/imas-container
  • It contains ansible-container/, buildah/ and docker/ subdirectories
  • Only the last one is actively developed
  • The Dockerfile requires BuildKit as it uses --mount=type=ssh
  • This also means that if you want to build IMAS Docker, you need to configure an SSH agent and add to it a private key that has access to the following git.iter.org repositories (a sketch of this setup follows the list):

    ssh://git@git.iter.org/imas/access-layer.git
    ssh://git@git.iter.org/imas/data-dictionary.git
    ssh://git@git.iter.org/imex/fc2k.git
    ssh://git@git.iter.org/imas/installer.git
    ssh://git@git.iter.org/imex/kepler-core-actors.git
    ssh://git@git.iter.org/imex/kepler-installer.git
    ssh://git@git.iter.org/imex/kepler-patches.git
    ssh://git@git.iter.org/imas/uda.git
    ssh://git@git.iter.org/imas/uda-plugins.git
    ssh://git@git.iter.org/imex/kepler.git
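
  • A minimal sketch of the required setup before starting the build (the key path ~/.ssh/id_rsa is an assumption; use whichever key is authorized for git.iter.org):

    eval "$(ssh-agent)"        # start an SSH agent in the current shell
    ssh-add ~/.ssh/id_rsa      # add the private key with access to git.iter.org
    export DOCKER_BUILDKIT=1   # the Dockerfile requires BuildKit
    ./build.sh                 # run the build (see the description of build.sh below)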

8.2. build.sh

  • This Bash script accepts the following command line options:

    -f            disable cache (build everything from scratch)
    -u            build with UDA
    -c CPUs       number of CPUs [default=$(nproc)]
    -t target     build only one target
    
  • There are three auxiliary Bash functions defined:

    • latest_git_tag url blacklisted. Returns the latest tag (in the sense of sort --version-sort) of the git repository given by url. If blacklisted is given, that tag is ignored. For example, the UDA repository has the tag code_camp_cadarache, which should be ignored. (A possible implementation is sketched after this list.)
    • latest_stable_git_tag url. As above, but the tag has to contain the stable keyword (applicable to MDS+)
    • latest_released_git_tag url. As above, but the tag has to contain the rel keyword (applicable to Ant)
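
    • A possible implementation of the first helper (a sketch only; the actual function in build.sh may differ):

      # latest_git_tag url [blacklisted]
      latest_git_tag() {
          local url=$1 blacklisted=${2:-__none__}
          git ls-remote --tags --refs "$url" \
              | sed 's|.*refs/tags/||' \
              | grep -v -x "$blacklisted" \
              | sort --version-sort \
              | tail -n 1
      }
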
  • Next, in the build.sh script you can set any of these tags to specific values:

    tag_al=4.8.7                        # FIXME: 4.8.7 is used by ETS
    tag_ant=1.10.6                      # FIXME: later versions of ant fail to build...
    tag_blitz=
    tag_cmake=v3.20.0
    tag_dd=3.31.0                       # FIXME: 3.31.0 is used by ETS
    tag_fc2k=
    tag_hdf5=hdf5-1_12_0
    tag_installer=
    tag_kca=
    tag_kepler_installer=
    tag_kp=
    tag_lapack=
    tag_mdsplus=stable_release-7-96-15  # FIXME: later versions of mdsplus fail to build...
    tag_tigervnc=
    tag_uda=2.3.1                       # FIXME: uda/2.3.1 is known to work well with uda-plugins/1.2.0
    tag_uda_plugins=1.2.0               # FIXME: uda/2.3.1 is known to work well with uda-plugins/1.2.0
    ver_kb=
    
  • All values left unset are determined using the functions described above
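
  • For instance, an empty tag might be resolved like this (a sketch only; the exact call in build.sh may differ):

    tag_blitz=${tag_blitz:-$(latest_git_tag https://github.com/blitzpp/blitz.git)}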

8.3. files/{base,ual,kepler,fc2k,gui}

  • Several files used in the Dockerfile are available in the files/* subdirectories:

    • In files/base you can find blas-ifort.pc and lapack-ifort.pc files preconfigured for BLAS and LAPACK (they do not come with the upstream package, so they were crafted manually)

    • In files/kepler you can find a modulefile for JAXFront (it also had to be crafted manually)

    • In files/{ual,kepler} you can find Makefile.Docker.Ubuntu, which is a configuration file used by the IMAS Installer and the Kepler Installer

      • In these files you select what to build, e.g. you can switch off gfortran compilation of the Fortran interface

    • In files/fc2k you can find install_Docker.Ubuntu.xml and settings_Docker.Ubuntu.xml, which control the installation and usage of FC2K

    • In files/{ual,kepler,fc2k} you can find docker-entrypoint.sh which is set to be the entrypoint of corresponding images. The role of these scripts is to load necessary modules (e.g. IMAS or UDA) and execute imasdb test

8.4. Dockerfile

  • There are 14 stages in the Dockerfile, all based on Ubuntu 18.04

    • common-builder has compilers and libraries for building Blitz++, HDF5 and MDS+. It installs CMake from GitHub, because the version in Ubuntu 18.04 repo is too old

      FROM ubuntu:18.04 AS common-builder
      
      ENV DEBIAN_FRONTEND=noninteractive
      
      RUN apt-get update \
          && apt-get install -y \
              curl \
              g++ \
              gfortran \
              git \
              libreadline-dev \
              libxml2-dev \
              make \
              openjdk-8-jdk-headless \
              openssh-client \
              python \
              tclsh \
          && rm -rf /var/lib/apt/lists/*
      
      # Ubuntu 18.04 has old version of CMake, so blitz++ and hdf5 cannot be built -> install it manually
      ARG tag_cmake
      RUN curl -L https://github.com/Kitware/CMake/releases/download/${tag_cmake}/cmake-${tag_cmake#v}-linux-x86_64.tar.gz | tar xz
      
    • blitz-builder builds Blitz++ in /opt/blitz, hdf5-builder builds HDF5 in /opt/hdf5, mdsplus-builder builds MDS+ in /opt/mdsplus

      # blitz-builder ###############################################################
      
      FROM common-builder AS blitz-builder
      
      # Ubuntu 18.04 does not have blitz++ -> have to install it manually
      ARG tag_blitz
      ARG cpus=1
      RUN curl -L https://github.com/blitzpp/blitz/archive/${tag_blitz}.tar.gz | tar xz
      RUN mkdir blitz-${tag_blitz}/build \
          && cd blitz-${tag_blitz}/build \
          && /cmake-${tag_cmake#v}-linux-x86_64/bin/cmake \
              -DCMAKE_INSTALL_PREFIX:PATH=/opt/blitz \
              .. \
          && make -j ${cpus} lib \
          && make install
      
      # hdf5-builder ################################################################
      
      FROM common-builder AS hdf5-builder
      
      # Ubuntu 18.04 packages hdf5 in a different way than IMAS expects it to be packaged
      ARG tag_hdf5
      ARG cpus=1
      RUN curl -L https://github.com/HDFGroup/hdf5/archive/refs/tags/${tag_hdf5}.tar.gz | tar xz
      RUN mkdir hdf5-${tag_hdf5}/build \
          && cd hdf5-${tag_hdf5}/build \
          && /cmake-${tag_cmake#v}-linux-x86_64/bin/cmake \
              -DCMAKE_INSTALL_PREFIX:PATH=/opt/hdf5 \
              .. \
          && make -j ${cpus} \
          && make install \
          && version=$(echo ${tag_hdf5} | sed -e 's/^hdf5-//' -e 's/_/./g') \
          && ln -s hdf5-${version}.pc /opt/hdf5/lib/pkgconfig/hdf5.pc \
          && ln -s hdf5_cpp-${version}.pc /opt/hdf5/lib/pkgconfig/hdf5-cpp-gnu.pc
      
      # mdsplus-builder #############################################################
      
      FROM common-builder AS mdsplus-builder
      
      ARG tag_mdsplus
      RUN curl -L https://github.com/MDSplus/mdsplus/archive/${tag_mdsplus}.tar.gz | tar xz
      RUN cd mdsplus-${tag_mdsplus} \
          && ./configure --prefix=/opt/mdsplus \
          && make \
          && make install
      
    • imas-git-puller pulls all repositories from git and it is the only stage that accesses SSH keys

      FROM common-builder AS imas-git-puller
      
      RUN mkdir -p /root/.ssh \
          && chmod 700 /root/.ssh \
          && ssh-keyscan git.iter.org > /root/.ssh/known_hosts
      
      ARG tag_fc2k
      RUN --mount=type=ssh true \
          && git clone --single-branch -b ${tag_fc2k} ssh://git@git.iter.org/imex/fc2k.git
      
      ARG tag_uda
      RUN --mount=type=ssh true \
          && git clone --single-branch -b ${tag_uda} ssh://git@git.iter.org/imas/uda.git
      
      ARG tag_uda_plugins
      RUN --mount=type=ssh true \
          && git clone --single-branch -b ${tag_uda_plugins} ssh://git@git.iter.org/imas/uda-plugins.git
      
      ARG tag_kepler_installer
      RUN --mount=type=ssh true \
          && git clone --single-branch -b ${tag_kepler_installer} ssh://git@git.iter.org/imex/kepler-installer.git
      
      COPY files/kepler/Makefile.Docker.Ubuntu /kepler-installer-site-config/
      ARG ver_kb
      ARG tag_kp
      ARG tag_kca
      RUN --mount=type=ssh true \
          && cd kepler-installer \
          && make update \
              SITECONFIG=/kepler-installer-site-config/Makefile.Docker.Ubuntu \
              VER_KB=${ver_kb} \
              TAG_KP=${tag_kp} \
              TAG_KCA=${tag_kca}
      
      ARG tag_installer
      ARG tag_dd
      ARG tag_al
      RUN --mount=type=ssh true \
          && git clone --single-branch -b ${tag_installer} ssh://git@git.iter.org/imas/installer.git \
          && cd installer \
          && make checkout \
              TAG_DD=${tag_dd} \
              TAG_AL=${tag_al}
      
      COPY files/ual/Makefile.Docker.Ubuntu /installer/site-config/
      
    • base contains a long list of applications, libraries and environment variables that will be used by any other imas/* image

      FROM ubuntu:18.04 AS base
      
      ENV DEBIAN_FRONTEND=noninteractive
      
      RUN apt-get update \
          && apt-get install -y \
              autoconf \
              bsdtar \
              cmake \
              curl \
              doxygen \
              environment-modules \
              g++ \
              gawk \
              gcc \
              gfortran \
              git \
              libboost-dev \
              libboost-filesystem-dev \
              libboost-system-dev \
              libmemcached-dev \
              libopenmpi-dev \
              libreadline-dev \
              libsaxonhe-java \
              libssh-dev \
              libssl-dev \
              libxml2-dev \
              make \
              openjdk-8-jdk-headless \
              openmpi-bin \
              pkg-config \
              python3 \
              python3-dev \
              python3-pip \
              rsync \
              tclsh \
              tcsh \
              unzip \
              xinetd \
              xsltproc \
          && rm -rf /var/lib/apt/lists/* \
          && update-alternatives --install /usr/bin/python python /usr/bin/python3 10 \
          && pip3 install \
              cython \
              numpy \
              matplotlib \
              setuptools \
              wheel \
          && ln --symbolic /usr/share/java/Saxon-HE.jar /usr/share/java/saxon9he.jar \
          && ln --symbolic /usr/sbin/xinetd /usr/local/sbin/xinetd \
          && ln --symbolic --force /bin/bash /bin/sh
      
      COPY --from=blitz-builder /opt/blitz /opt/blitz
      COPY --from=hdf5-builder /opt/hdf5 /opt/hdf5
      COPY --from=mdsplus-builder /opt/mdsplus /opt/mdsplus
      
      ENV CFLAGS=-pthread \
          CLASSPATH=/usr/share/java/saxon9he.jar \
          CXXFLAGS=-pthread \
          HDF5_HOME=/opt/hdf5 \
          ITMSCRATCH=/tmp \
          JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
          LANG=C.UTF-8 \
          MDSPLUS_DIR=/opt/mdsplus \
          PKG_CONFIG_PATH=/opt/blitz/lib/pkgconfig:/opt/hdf5/lib/pkgconfig
      ENV CPLUS_INCLUDE_PATH=${HDF5_HOME}/include:${MDSPLUS_DIR}/include \
          C_INCLUDE_PATH=${HDF5_HOME}/include:${MDSPLUS_DIR}/include:/usr/lib/jvm/java-8-openjdk-amd64/include/linux:/usr/lib/jvm/java-8-openjdk-amd64/include \
          LD_LIBRARY_PATH=${HDF5_HOME}/lib:${MDSPLUS_DIR}/lib:/usr/lib/x86_64-linux-gnu \
          LIBRARY_PATH=${HDF5_HOME}/lib:${MDSPLUS_DIR}/lib:/usr/lib/x86_64-linux-gnu \
          MDS_PATH=${MDSPLUS_DIR}/tdi
      
    • base-devel adds Intel compilers on top of that and it uses them to compile BLAS and LAPACK

      FROM base AS base-devel
      
      COPY --from=intel/oneapi-hpckit:devel-ubuntu18.04 /opt/intel /opt/intel
      
      ARG tag_lapack
      ARG cpus=1
      RUN curl -L https://github.com/Reference-LAPACK/lapack/archive/refs/tags/${tag_lapack}.tar.gz | tar xz \
          && . /opt/intel/oneapi/compiler/latest/env/vars.sh \
          && cd /lapack-${tag_lapack#v} \
          && cp INSTALL/make.inc.ifort make.inc \
          && sed -i \
              -e 's/CFLAGS =.*/& -fPIC/g' \
              -e 's/FFLAGS =.*/& -fPIC/g' \
              -e 's/FFLAGS_NOOPT =.*/& -fPIC/g' \
              -e 's/LDFLAGS =.*/& -fPIC/g' \
              make.inc \
          && make -j ${cpus} \
          && install -Dm644 librefblas.a /opt/blas/lib/librefblas.a \
          && install -Dm644 liblapack.a /opt/lapack/lib/liblapack.a \
          && install -d /opt/blas/lib/pkgconfig /opt/lapack/lib/pkgconfig \
          && ln -s librefblas.a /opt/blas/lib/libblas.a \
          && ln -s blas-ifort.pc /opt/blas/lib/pkgconfig/blas.pc \
          && ln -s lapack-ifort.pc /opt/lapack/lib/pkgconfig/lapack.pc
      
      COPY files/base/blas-ifort.pc /opt/blas/lib/pkgconfig/blas-ifort.pc
      COPY files/base/lapack-ifort.pc /opt/lapack/lib/pkgconfig/lapack-ifort.pc
      
      ENV PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:/opt/blas/lib/pkgconfig:/opt/lapack/lib/pkgconfig \
          LIBRARY_PATH=${LIBRARY_PATH}:/opt/blas/lib:/opt/lapack/lib
      
    • ual-devel does the following: (1) compiles UDA and UDA Plugins, (2) compiles IMAS without Fortran interface, (3) compiles Fortran interface, (4) installs IMAS, (5) builds UDA Plugins again

      • There is a cyclic dependency: IMAS requires UDA, while the UDA Plugins require IMAS
      • The Fortran interface takes the longest to compile and requires the most RAM. If you have trouble building the image, reduce the number of CPUs used for parallel building (see -c CPUs in the build.sh description)

      FROM base-devel AS ual-devel
      
      COPY --from=imas-git-puller /uda /uda
      COPY --from=imas-git-puller /uda-plugins /uda-plugins
      ARG with_uda
      ARG cpus=1
      
      ENV UDA_HOME=/opt/uda
      
      # build UDA
      RUN mkdir /opt/uda \
          && if test ${with_uda} = y; then \
              mkdir /uda/build \
              && cd /uda/build \
              && cmake \
                  -DCMAKE_CXX_FLAGS=-I/usr/include/libxml2 \
                  -DUDA_HOST=localhost \
                  -DUDA_PORT=56565 \
                  -DBUILD_SHARED_LIBS=true \
                  -DCMAKE_INSTALL_PREFIX:PATH=${UDA_HOME} \
                 .. \
              && make -j ${cpus} \
              && make install \
              && sed -i 's/^Libs:.*/& -lmemcached/g' ${UDA_HOME}/lib/pkgconfig/uda-* \
              && cd ${UDA_HOME}/python_installer \
              && python setup.py install --prefix=${UDA_HOME} \
              ; \
          fi
      
      ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${UDA_HOME}/lib/plugins \
          LIBRARY_PATH=${LIBRARY_PATH}:${UDA_HOME}/lib/plugins \
          MODULEPATH=/etc/modulefiles:${UDA_HOME}/modulefiles \
          PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:${UDA_HOME}/lib/pkgconfig \
          PYTHONPATH=${UDA_HOME}/lib/python3.6/site-packages \
          UDA_DISABLED=0 \
          UDA_IMAS_MACHINE_MAP=${UDA_HOME}/etc/udaMachines.txt \
          UDA_IMAS_PLUGIN_MAP=${UDA_HOME}/etc/udaMappings.txt \
          UDA_LOG=/tmp \
          UDA_LOG_LEVEL=ERROR \
          UDA_LOG_MODE=a \
          UDA_PLUGIN_CONFIG=${UDA_HOME}/etc/udaPlugins.conf \
          UDA_PLUGIN_DEBUG_SINGLEFILE=1 \
          UDA_PLUGIN_DIR=${UDA_HOME}/lib/plugins \
          UDA_SARRAY_CONFIG=${UDA_HOME}/etc/udagenstruct.conf
      
      # build UDA plugins
      RUN if test ${with_uda} = y; then \
              mkdir /uda-plugins/build \
              && cd /uda-plugins/build \
              && cmake \
                  -DCMAKE_INSTALL_PREFIX:PATH=${UDA_HOME} \
                  '-DBUILD_PLUGINS:STRING=exp2imas;hl2a;imas_forward;imas_mapping;imas_partial;imas_remote;imas_uda;imasdd;iter_md;jet_equilibrium;jet_magnetics;jet_summary;mast_imas;source;tcv;tcvm;tore_supra;west;west_tunnel' \
                  .. \
              && make -j ${cpus} \
              && make install \
              && cat ${UDA_HOME}/etc/plugins/udaPlugins* \
                  | sed 's/, , WEST_TUNNEL::read()/, Use SSH tunnel, WEST_TUNNEL::read()/g' \
                  | sort -u > ${UDA_PLUGIN_CONFIG} \
              && echo 'JET  IMAS_MAPPING - -' >  ${UDA_IMAS_PLUGIN_MAP} \
              && echo 'MAST IMAS_MAPPING - -' >> ${UDA_IMAS_PLUGIN_MAP} \
              && echo 'WEST WEST_TUNNEL  - -' >> ${UDA_IMAS_PLUGIN_MAP} \
              && echo 'JET  * - EXP2IMAS'     >  ${UDA_IMAS_MACHINE_MAP} \
              && echo 'MAST * - IMAS_UDA'     >> ${UDA_IMAS_MACHINE_MAP} \
              && echo 'WEST * - WEST_TUNNEL'  >> ${UDA_IMAS_MACHINE_MAP} \
              ; \
          fi
      
      COPY --from=imas-git-puller /installer /installer
      ARG tag_dd
      ARG tag_al
      
      # compile everything but fortran
      RUN cd installer \
          && if test ${with_uda} = y; then . /etc/profile.d/modules.sh && module load uda && export IMAS_UDA=fat; fi \
          && make -j ${cpus} IMAS_FORTRAN=no GIT_OFFLINE=y SITECONFIG=/installer/site-config/Makefile.Docker.Ubuntu TAG_DD=${tag_dd} TAG_AL=${tag_al}
      
      # compile fortran
      RUN cd installer \
          && . /opt/intel/oneapi/compiler/latest/env/vars.sh \
          && if test ${with_uda} = y; then . /etc/profile.d/modules.sh && module load uda && export IMAS_UDA=fat; fi \
          && make -j ${cpus} GIT_OFFLINE=y SITECONFIG=/installer/site-config/Makefile.Docker.Ubuntu TAG_DD=${tag_dd} TAG_AL=${tag_al}
      
      # install everything
      RUN cd installer \
          && if test ${with_uda} = y; then . /etc/profile.d/modules.sh && module load uda && export IMAS_UDA=fat; fi \
          && make install -j ${cpus} GIT_OFFLINE=y SITECONFIG=/installer/site-config/Makefile.Docker.Ubuntu TAG_DD=${tag_dd} TAG_AL=${tag_al}
      
      ENV MODULEPATH=${MODULEPATH}:/opt/imas/etc/modulefiles
      
      # build UDA Plugins again, so that now it finds IMAS
      RUN if test ${with_uda} = y; then \
              . /etc/profile.d/modules.sh \
              && module load IMAS \
              && mkdir /uda-plugins/build-with-imas \
              && cd /uda-plugins/build-with-imas \
              && cmake \
                  -DCMAKE_INSTALL_PREFIX:PATH=${UDA_HOME} \
                  '-DBUILD_PLUGINS:STRING=exp2imas;hl2a;imas_forward;imas_mapping;imas_partial;imas_remote;imas_uda;imasdd;iter_md;jet_equilibrium;jet_magnetics;jet_summary;mast_imas;source;tcv;tcvm;tore_supra;west;west_tunnel' \
                  .. \
              && make -j ${cpus} \
              && make install \
              ; \
          fi
      
      COPY files/ual/docker-entrypoint.sh /
      ENTRYPOINT ["/docker-entrypoint.sh"]
      CMD ["/bin/bash"]
      
    • kepler-devel adds Ant, JAXFront and Kepler

      • JAXFront is licensed software; IMAS Docker uses the free edition

      FROM ual-devel AS kepler-devel
      
      COPY files/kepler/Makefile.Docker.Ubuntu /kepler-installer/site-config/
      COPY files/kepler/jaxfront /etc/modulefiles/
      
      COPY --from=imas-git-puller /kepler-installer /kepler-installer
      ARG ver_kb
      ARG tag_kp
      ARG tag_kca
      ARG tag_ant
      
      RUN curl -L https://archive.apache.org/dist/ant/binaries/apache-ant-${tag_ant}-bin.tar.xz | tar xJ -C /opt \
          && curl -L https://www.jaxfront.org/download/JAXFront-Demo-Project.zip | bsdtar xf - \
          && mv JAXFront\ Eclipse\ Example\ Project /opt/jaxfront \
          && ln -s /opt/apache-ant-${tag_ant}/bin/ant /usr/bin/ \
          && cd kepler-installer/ \
          && . /etc/profile.d/modules.sh \
          && module load IMAS jaxfront \
          && make install GIT_OFFLINE=y SITECONFIG=/kepler-installer/site-config/Makefile.Docker.Ubuntu VER_KB=${ver_kb} TAG_KP=${tag_kp} TAG_KCA=${tag_kca} \
          && module load Kepler \
          && kepler_install -f default
      
      COPY files/kepler/docker-entrypoint.sh /
      
      ENV USER=root
      ENV KEPLER=${USER}/.local/Kepler/default/kepler
      
    • fc2k-devel adds FC2K

      FROM kepler-devel AS fc2k-devel
      
      COPY files/fc2k/install_Docker.Ubuntu.xml /fc2k/config/
      COPY files/fc2k/settings_Docker.Ubuntu.xml /fc2k/config/
      
      COPY --from=imas-git-puller /fc2k /fc2k
      RUN cd fc2k \
          && . /etc/profile.d/modules.sh \
          && module load IMAS \
          && ln --symbolic --force /fc2k/config/settings_Docker.Ubuntu.xml /fc2k/config/settings_default.xml \
          && ant install -Dcfg.file=/fc2k/config/install_Docker.Ubuntu.xml
      
      COPY files/fc2k/docker-entrypoint.sh /
      
    • ual starts from the base image and copies from ual-devel everything built previously (BLAS, LAPACK, UDA, IMAS)

      • This way the ual image is free of the *-devel software (i.e. the Intel compilers) and of the IMAS source code

      FROM base AS ual
      
      COPY --from=base-devel /opt/blas /opt/blas
      COPY --from=base-devel /opt/lapack /opt/lapack
      COPY --from=ual-devel /opt/uda /opt/uda
      COPY --from=ual-devel /opt/imas /opt/imas
      
      ENV PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:/opt/blas/lib/pkgconfig:/opt/lapack/lib/pkgconfig \
          UDA_HOME=/opt/uda
      ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${UDA_HOME}/lib/plugins \
          LIBRARY_PATH=${LIBRARY_PATH}:${UDA_HOME}/lib/plugins \
          MODULEPATH=/etc/modulefiles:${UDA_HOME}/modulefiles:/opt/imas/etc/modulefiles \
          PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:${UDA_HOME}/lib/pkgconfig \
          PYTHONPATH=${UDA_HOME}/lib/python3.6/site-packages \
          UDA_DISABLED=0 \
          UDA_IMAS_MACHINE_MAP=${UDA_HOME}/etc/udaMachines.txt \
          UDA_IMAS_PLUGIN_MAP=${UDA_HOME}/etc/udaMappings.txt \
          UDA_LOG=/tmp \
          UDA_LOG_LEVEL=ERROR \
          UDA_LOG_MODE=a \
          UDA_PLUGIN_CONFIG=${UDA_HOME}/etc/udaPlugins.conf \
          UDA_PLUGIN_DEBUG_SINGLEFILE=1 \
          UDA_PLUGIN_DIR=${UDA_HOME}/lib/plugins \
          UDA_SARRAY_CONFIG=${UDA_HOME}/etc/udagenstruct.conf
      
      COPY files/ual/docker-entrypoint.sh /
      ENTRYPOINT ["/docker-entrypoint.sh"]
      CMD ["/bin/bash"]
      
    • kepler extends it and copies from kepler-devel

      FROM ual AS kepler
      
      COPY files/kepler/jaxfront /etc/modulefiles/
      
      ARG tag_ant
      COPY --from=kepler-devel /opt/apache-ant-${tag_ant} /opt/apache-ant-${tag_ant}
      COPY --from=kepler-devel /opt/imas/etc/modulefiles/Kepler /opt/imas/etc/modulefiles/Kepler
      COPY --from=kepler-devel /opt/imas/etc/modulefiles/Keplertools /opt/imas/etc/modulefiles/Keplertools
      COPY --from=kepler-devel /opt/imas/extra/Kepler /opt/imas/extra/Kepler
      COPY --from=kepler-devel /opt/imas/extra/Keplertools /opt/imas/extra/Keplertools
      COPY --from=kepler-devel /opt/jaxfront /opt/jaxfront
      COPY --from=kepler-devel /root/.local/Kepler /root/.local/Kepler
      COPY --from=kepler-devel /usr/bin/ant /usr/bin/ant
      
      ENV USER=root
      ENV KEPLER=${USER}/.local/Kepler/default/kepler
      
      COPY files/kepler/docker-entrypoint.sh /
      
    • fc2k extends it and copies from fc2k-devel

      FROM kepler AS fc2k
      
      COPY --from=fc2k-devel /opt/imas/etc/modulefiles/fc2k /opt/imas/etc/modulefiles/fc2k
      COPY --from=fc2k-devel /opt/imas/extra/fc2k /opt/imas/extra/fc2k
      
      COPY files/fc2k/docker-entrypoint.sh /
      
    • gui extends it and adds XFCE4 and TigerVNC

      FROM fc2k AS gui
      
      # the password to VNC is imas
      COPY files/gui/passwd /root/.vnc/
      COPY files/gui/xstartup /root/.vnc/
      COPY files/gui/Kepler.desktop /root/Desktop/
      
      ARG tag_tigervnc
      RUN apt-get update \
          && apt-get install -y \
              xfce4 \
          && rm -rf /var/lib/apt/lists/*
      RUN curl -L https://sourceforge.net/projects/tigervnc/files/stable/${tag_tigervnc#v}/tigervnc-${tag_tigervnc#v}.x86_64.tar.gz | tar xz -C /opt
      CMD ["/opt/tigervnc-${tag_tigervnc#v}.x86_64/usr/bin/vncserver", "-fg", ":1"]
      
  • There are 7 images produced:

    • imas/ual-devel
    • imas/kepler-devel
    • imas/fc2k-devel
    • imas/ual
    • imas/kepler
    • imas/fc2k
    • imas/gui

8.5. push.sh and save.sh

  • The Bash script push.sh tags all images with a registry prefix and pushes them to that registry
  • The Bash script save.sh saves the non-devel images as .tar.zst archives in the /tmp directory
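
  • A sketch of what they do for a single image (my-registry.example.org is a hypothetical registry; the real scripts iterate over all images):

    docker tag imas/ual my-registry.example.org/imas/ual   # push.sh: re-tag with the registry prefix
    docker push my-registry.example.org/imas/ual           #          and push it there
    docker save imas/ual | zstd > /tmp/imas-ual.tar.zst    # save.sh: export and compress with zstd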

9. Dockerfile graph

  • (Figure: dependency graph of the Dockerfile stages -- not reproduced here)

This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200—EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. The scientific work is published for the realization of the international project co-financed by the Polish Ministry of Science and Higher Education in 2021 from the financial resources of the program entitled "PMW" 5218/HEU - EURATOM/2022/2
