Lancium Prebuilt Images

Table of Contents

Singularity Images

Generic Linux Images


Ubuntu

Version: 18.04

Included Packages:

Example GCLI startContainers call:

$ gcli startContainers Lancium/ubuntu18.04.simg MyUbuntuJob 1 \
	"echo hello world" \
	-o ubuntuoutput.txt stdout.txt

Ubuntu with CUDA Libraries

Version: Ubuntu 18.04, CUDA 9.2

Included Packages:



Example GCLI startContainers call:

$ gcli startContainers Lancium/ubuntu18.04_cuda9.2.simg MyUbuntuJob 1 \
	"echo hello world" \
	-o ubuntu_cuda_output.txt stdout.txt

Molecular Dynamics


GROMACS

Version: 2018.8

Author: KTH Royal Institute


$ gmx --version

GROMACS:      gmx, version 2018.8
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /tmp
Command line:
  gmx --version

GROMACS version:    2018.8
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX_256
FFT library:        fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
Built on:           2019-12-02 16:33:44
Built by:           root@lmf03ab2300 [CMAKE]
Build OS/arch:      Linux 4.15.0-62-generic x86_64
Build CPU vendor:   Intel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
Build CPU family:   6   Model: 45   Stepping: 7
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler:         /usr/bin/cc GNU 7.4.0
C compiler flags:    -mavx     -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /usr/bin/c++ GNU 7.4.0
C++ compiler flags:  -mavx    -std=c++11   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
CUDA compiler:      /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on Sun_Jul_28_19:07:16_PDT_2019;Cuda compilation tools, release 10.1, V10.1.243
CUDA compiler flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES;; ;-mavx;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:        10.20
CUDA runtime:       10.10


There is nothing specific to GROMACS setup in the environment apart from /usr/local/gromacs/bin/ being added to the search path.

Example GCLI startContainers call:

$ gcli startContainers Lancium/gromacs.simg MyGromacsJob 1 \
	"gmx --version" \
	-o gmxoutput.txt stdout.txt
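
A production run typically chains gmx grompp (preprocessing) and gmx mdrun (the simulation itself). As a sketch only — md.mdp, conf.gro, and topol.top are hypothetical input files you would stage with the job — the command might look like:

$ gcli startContainers Lancium/gromacs.simg MyGromacsRun 1 \
	"gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr && gmx mdrun -deffnm topol" \
	-o gmx_md_output.txt stdout.txt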


Quantum Espresso

Version: 6.4.1


QE_BIN_PATH=/q-e-qe-6.4.1/bin/, the install directory of Quantum Espresso.

Included binaries:


alpha2f.x     fermi_velocity.x		     ld1.x		   pp.x		   simple_bse.x
average.x     fqha.x			     manycp.x		   ppacf.x	   simple_ip.x
bands.x       fs.x			     manypw.x		   projwfc.x	   spectra_correction.x
bse_main.x    generate_rVV10_kernel_table.x  matdyn.x		   pw.x		   sumpdos.x
cell2ibrav.x  generate_vdW_kernel_table.x    molecularnexafs.x	   pw2bgw.x	   turbo_davidson.x
cp.x	      gww.x			     molecularpdos.x	   pw2critic.x	   turbo_eels.x
dist.x	      gww_fit.x			     neb.x		   pw2gw.x	   turbo_lanczos.x
dos.x	      head.x			     open_grid.x	   pw2wannier90.x  turbo_spectrum.x
dynmat.x      hp.x			     path_interpolation.x  pw4gww.x	   wannier_ham.x
epa.x	      ibrav2cell.x		     ph.x		   pwcond.x	   wannier_plot.x
epsilon.x     initial_state.x		     phcg.x		   pwi2xsf.x	   wfck2r.x
ev.x	      iotk			     plan_avg.x		   q2qstar.x	   wfdd.x
fd.x	      iotk.x			     plotband.x		   q2r.x	   xspectra.x
fd_ef.x       iotk_print_kinds.x	     plotproj.x		   q2trans.x
fd_ifc.x      kpoints.x			     plotrho.x		   q2trans_fd.x
fermi_proj.x  lambda.x			     pmw.x		   simple.x

Example GCLI startContainers call:

$ gcli startContainers Lancium/QuantumEspresso.simg MyQEJob 1 \
	"pw.x" \
	-o QEoutput.txt stdout.txt
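
pw.x reads its input from standard input or from a file passed with -i. A sketch, assuming a hypothetical SCF input file scf.in has been staged with the job:

$ gcli startContainers Lancium/QuantumEspresso.simg MyQEJob 1 \
	"pw.x -i scf.in" \
	-o QE_scf_output.txt stdout.txt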


AMBER

Version: AMBER 18, CUDA 9.2


AMBERHOME=/opt/amber, the install directory of AMBER.

Included Binaries:

$ ls $AMBERHOME/bin

AddToBox  ChBox  CheckMD  PropPDB  UnitCell
addles  am1bcc  amber.conda  amber.ipython  amber.jupyter
amber.pip  amber.python  ambmask  ambpdb  antechamber
atomtype  bondtype  cestats  cphstats  cpptraj
cpptraj.cuda  dacdif  elsize  espgen  fantasian
ffgbsa  fftw-wisdom  fftw-wisdom-to-conf  fix_new_inpcrd_vel  gbnsr6
hcp_getpdb  lmanal  makeANG_RST  makeCHIR_RST  makeDIST_RST
make_crd_hg  mdgx  mdnab  memembed  metatwist
minab  mm_pbsa_nabnmode  mmpbsa_py_energy  mmpbsa_py_nabnmode  molsurf
nab  nab2c  nc-config  nccopy  ncdump
ncgen  ncgen3  nef_to_RST  new2oldparm  new_crd_to_dyn
new_to_old_crd  nf-config  nfe-umbrella-slice  nmode  packmol
packmol-memgen  paramfit  parmchk2  parmed  pbsa
pbsa.cuda  pdb4amber  pmemd  pmemd.cuda  pmemd.cuda_DPFP
pmemd.cuda_SPFP  prepgen  process_mdout.perl  process_minout.perl  pymdpbsa
pytleap  reduce  residuegen  resp  respgen
rism1d  rism3d.orave  rism3d.snglpnt  rism3d.thermo  sander
sander.LES  saxs_md  saxs_rism  senergy  sqm
sviol  sviol2  teLeap  tinker_to_amber  tleap
to_be_dispatched  ucpp  volslice  xaLeap  xleap
xparmed

Example GCLI startContainers call:

$ gcli startContainers Lancium/L_amber18-cuda9.2.simg MyAmberJob 1 \
	"/opt/amber/bin/pmemd.cuda --help" \
	-o amberoutput.txt stdout.txt
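
For an actual MD run, pmemd.cuda takes the usual sander-style flags. A sketch only — mdin, prmtop, and inpcrd are hypothetical input files you would stage with the job:

$ gcli startContainers Lancium/L_amber18-cuda9.2.simg MyAmberRun 1 \
	"/opt/amber/bin/pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt" \
	-o amber_md_output.txt stdout.txt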

Please also see the comprehensive AMBER demo.

Machine Learning

Note: All of our ML containers (except for Theano, currently a work in progress) are built for both Python 2 and Python 3. To select a variant, prefix the image name accordingly — for example, use Lancium/py2_pycaffe or Lancium/py3_pycaffe when you call startContainers.



Caffe

$ caffe --version

caffe version 1.0.0
Debug build (NDEBUG not #defined)


$PATH includes the directory containing the Caffe binaries:

/opt/conda/envs/caffe2/bin in Python 2 Container

/opt/conda/envs/caffe/bin in Python 3 Container

Example GCLI startContainers call:

$ gcli startContainers Lancium/py2_caffe.simg MyCaffeJob 1 \
	"caffe --version" \
	-o caffeoutput.txt stdout.txt

$ gcli startContainers Lancium/py3_caffe.simg MyCaffeJob 1 \
	"caffe --version" \
	-o caffeoutput.txt stdout.txt



Tensorflow

Version: 1.14.0 in Python 2 Container; 1.13.1 in Python 3 Container


Tensorflow binary path included in search path.

/opt/conda/envs/tensorflow-gpu2/bin in Python 2 Container

/opt/conda/envs/tensorflow-gpu/bin in Python 3 Container

Example GCLI startContainers call:

$ gcli startContainers Lancium/py2_tensorflow.simg MyTensorflowJob 1 \
	"ls /opt/conda/envs/tensorflow-gpu2/bin" \
	-o tf_output.txt stdout.txt

$ gcli startContainers Lancium/py3_tensorflow.simg MyTensorflowJob 1 \
	"ls /opt/conda/envs/tensorflow-gpu/bin" \
	-o tf_output.txt stdout.txt
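
To confirm the framework itself rather than just listing the bin directory, a one-line Python check works in either container (a sketch; tf.__version__ is standard TensorFlow API):

$ gcli startContainers Lancium/py3_tensorflow.simg MyTensorflowJob 1 \
	"python -c 'import tensorflow as tf; print(tf.__version__)'" \
	-o tf_version.txt stdout.txt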



Theano

Version: 1.0.4 in Python 2 Container


Theano binary path included in search path.

/opt/conda/envs/theano2/bin in Python 2 Container

Example GCLI startContainers call:

$ gcli startContainers Lancium/py2_theano.simg MyTheanoJob 1 \
	"ls /opt/conda/envs/theano2/bin" \
	-o theano_output.txt stdout.txt



PyTorch

Version: 1.1.0 in Python 2 Container; 1.0.0 in Python 3 Container


PyTorch binary path included in search path.

/opt/conda/envs/pytorch2/bin in Python 2 Container

/opt/conda/envs/pytorch/bin in Python 3 Container

Example GCLI startContainers call:

$ gcli startContainers Lancium/py2_pytorch.simg MyPyTorchJob 1 \
	"ls /opt/conda/envs/pytorch2/bin" \
	-o pytorch_output.txt stdout.txt

$ gcli startContainers Lancium/py3_pytorch.simg MyPyTorchJob 1 \
	"ls /opt/conda/envs/pytorch/bin" \
	-o pytorch_output.txt stdout.txt
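
To verify that a GPU is visible to PyTorch inside the container, a one-line Python check can stand in for the ls command above (a sketch; torch.__version__ and torch.cuda.is_available() are standard PyTorch API):

$ gcli startContainers Lancium/py3_pytorch.simg MyPyTorchJob 1 \
	"python -c 'import torch; print(torch.__version__, torch.cuda.is_available())'" \
	-o pytorch_check.txt stdout.txt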