Hey Patrick,
Here’s my first attempt at getting VASP going on Europa. As you know, we tend to use the Intel compilers with VASP. The standard Intel Parallel Studio compilers require a license, which also makes it harder for you guys to create equivalent development environments locally.
TRECIS is about teaching people to support themselves and helping to create a more educated user community. So, let’s try to do this build with containers, using the beta release of the Intel oneAPI HPC Toolkit: https://hub.docker.com/r/intel/oneapi-hpckit
If I can document how to build VASP with the Intel compilers from the oneAPI container, the procedure would be general and would work everywhere. So let’s see how far it gets us.
I’m going to document building the container, building VASP using the container, and then running jobs here. I’m also going to do this build on Europa. That means we can’t use Docker, but Podman is available to build Docker images, and we can then convert those Podman images to an “HPC” container.
OpenHPC (and Europa) support two container technologies out of the box: Singularity and Charliecloud.
I started with the hpckit-devel-centos8 Dockerfile from the Intel oneAPI containers GitHub repo.
I built the Podman container from the Dockerfile and then converted it to a Charliecloud image with the following script:
#!/bin/bash
ml load charliecloud singularity
export TMPDIR=$SCRATCH/tmp
APPROOT=/scratch/csim/containers
APPNAME=intel_oneapi-hpckit
APPVER=2021.1-beta10
# create directories for the Charliecloud tree, Singularity images, and sources
for APPDIR in ch sifs src
do mkdir -p $APPROOT/$APPNAME/$APPDIR
done
# build the image with Podman, then flatten it to a tarball with Charliecloud
podman build -t $APPNAME -f Dockerfile-$APPNAME
ch-builder2tar $APPNAME $APPROOT/$APPNAME/src
pushd $APPROOT/$APPNAME/src
umask 0022
# unpack the tarball into an image directory and keep a versioned copy
ch-tar2dir $APPNAME.tar.gz ../ch
mv $APPNAME.tar.gz $APPNAME-$APPVER.tar.gz
# pre-create the bind-mount target for $SCRATCH inside the image
mkdir -p ../ch/intel_oneapi-hpckit/scratch/csim
popd
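The script also creates a sifs directory; if we want a Singularity image of the same container later, one possible route (a sketch I haven’t run on Europa yet, so treat the exact commands as assumptions) is to export the Podman image and build a SIF from the docker-archive:

```shell
# Sketch: convert the Podman image to a Singularity SIF.
# Variables ($APPROOT, $APPNAME, $APPVER) reuse the ones from the script above.
podman save -o $APPROOT/$APPNAME/src/$APPNAME-docker.tar $APPNAME
singularity build $APPROOT/$APPNAME/sifs/$APPNAME-$APPVER.sif \
    docker-archive://$APPROOT/$APPNAME/src/$APPNAME-docker.tar
```

That would populate the sifs directory the script sets up but doesn’t yet use.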
After that, let’s start our container.
$ ml load charliecloud
$ cd $SCRATCH/containers/intel_oneapi-hpckit/ch/
$ ch-run -b /scratch/csim/:/scratch/csim --set-env=./intel_oneapi-hpckit/ch/environment ./intel_oneapi-hpckit/ -- bash
With the above command we have:
- started our charliecloud intel oneapi container
- set our environment to be the same as the builder environment via the --set-env option
- mounted $SCRATCH inside our container ($HOME is mounted by default)
Once we are inside the container, the Intel compilers are available.
[csim@europa src]$ which ifort
/opt/intel/oneapi/compiler/2021.1-beta10/linux/bin/intel64/ifort
After that, it’s a fairly standard compile.
$ tar xf vasp.5.4.4.tar.gz && cd vasp.5.4.4
$ cp arch/makefile.include.linux_intel makefile.include
Edit makefile.include to add -DNGZhalf to the C preprocessor options.
$ make std
Edit makefile.include to add -DwNGZhalf to the C preprocessor options.
$ make gam
Copy back the original Intel makefile.include if you also want to build ncl.
$ cp arch/makefile.include.linux_intel makefile.include
$ make ncl
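After the three make targets finish, the binaries land under bin/ in the build tree, so a quick sanity check looks like this (binary names per the standard VASP 5.4.4 targets):

```shell
# each target (std, gam, ncl) drops one binary into bin/
ls bin/
# expected: vasp_gam  vasp_ncl  vasp_std
```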
So all of that worked. I now have VASP binaries on Europa. This is just standard, vanilla VASP. None of the VTST stuff and none of the Wannier90 stuff is there yet. Let’s start with pure VASP, test it, and then add the patches.
Do you have a small test job with known good output I could use to test? Perhaps an input file, a Slurm submission script, and reference output from Ganymede?
Once I have that, I’ll test it and make sure the container-built binaries work, then make them available along with an example Slurm script. After that, I’ll look at VTST and Wannier90.
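For the eventual example, something along these lines is what I have in mind — a sketch only, since the partition name, rank count, the path to vasp_std inside the image, and the exact MPI launch details for Charliecloud on Europa are all assumptions until I’ve tested:

```shell
#!/bin/bash
#SBATCH -J vasp-test          # job name (placeholder)
#SBATCH -N 1                  # single node to start
#SBATCH -n 16                 # MPI ranks; adjust to the node size
#SBATCH -p normal             # partition name is a guess
#SBATCH -t 01:00:00

ml load charliecloud

CH_IMG=$SCRATCH/containers/intel_oneapi-hpckit/ch/intel_oneapi-hpckit

# launch one ch-run per MPI rank, reusing the bind-mount and --set-env
# options from the interactive session above; vasp_std path is a placeholder
mpirun -np $SLURM_NTASKS \
    ch-run -b /scratch/csim:/scratch/csim \
           --set-env=$CH_IMG/ch/environment \
           $CH_IMG -- /path/to/vasp_std
```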