VASP for Europa

Hello @csim,

The Ganymede VASP modulefile needs to be rebuilt for Europa. The VASP module was originally compiled using fftw, scalapack, and impi. Of those dependencies only impi is missing from Europa, but maybe another MPI stack will work fine. Since we’re rebuilding anyway, I’d like to add Wannier90 if possible. As of version 5.2.12, VASP has a full interface to Wannier90. Note that Wannier90 is now up to version 3.1.0, but VASP only supports version 1.2. Unfortunately, I have never built this package or interface myself, but the documentation on the linked page for Wannier90 seems thorough.

Thanks,
Patrick

Hey Patrick,

Yeah we don’t have the Intel compiler and MPI stacks on Europa (yet). I might be able to find some time on Monday to start on this; can’t promise how quickly it’ll go as I’ve never done the Wannier90 interface (and VASP interfaces can be delicate). Otherwise, it’ll be Wednesday of next week before I can get to it.

Longer term, I’d like to finish full documentation of setting up VASP on OpenHPC-based systems and show you guys how to maintain modulefiles (and software). While VASP is often provided at HPC centers, it usually does not have all the bells and whistles like the VTST tools or the Wannier90 interface.

So, longer term, it’s probably in y’all’s best interest to learn how to set it all up.

Here’s the documentation from the last time I did it on Ganymede: VASP on Ganymede. If you want to start running now, you can try to set it up yourself.

More from me early next week.

In the time before your team centralized infrastructure and support, our group mostly managed itself in this regard. So most of our senior members have compiled VASP before, on different systems with different configurations. We can make it work for Europa without too much trouble. Previously, none of us were familiar with the module system, so we’ve deferred to you until now. And I won’t try to make a module for now - I’ll just leave the binary in my home directory.

Like I said, I’ve never built the Wannier90 package or interface, though. So we’ll see how that goes.

Hey Patrick,

Here’s my first attempt at getting VASP going on Europa. As you know, we tend to use the Intel compilers with VASP. The standard Intel Parallel Studio compilers require a license, though, which also makes it harder for you guys to create equivalent development environments locally.

TRECIS is about teaching people to support themselves and helping to create a more educated user community. So, let’s try to do this build with containers using the beta release of the Intel oneAPI HPC Kit, https://hub.docker.com/r/intel/oneapi-hpckit

If I can document how to build VASP using the Intel compilers from the oneAPI container, this procedure would be general and would work everywhere. So let’s see how far it gets us.

I’m going to document building the container, building VASP using the container, and then running jobs here. I’m also going to do this build on Europa. That means we can’t use Docker, but Podman is available to build Docker images, and we can then convert those Podman images to an “HPC” container.

OpenHPC (and Europa) support two container technologies out of the box: Singularity and Charliecloud.

I started with the hpckit-devel-centos8 Dockerfile from Intel’s oneapi-containers GitHub repo.
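
If you want to grab that Dockerfile yourself, something like the following should work; the path inside the repo is from memory and may have moved, so double-check the repo layout first.

# Grab the upstream Dockerfile as a starting point.
# NOTE: the path inside the repo is from memory and may have changed.
git clone https://github.com/intel/oneapi-containers
cp oneapi-containers/images/docker/hpckit-devel-centos8/Dockerfile Dockerfile-intel_oneapi-hpckit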

I build the Podman container from the Dockerfile and then convert it to a Charliecloud image via the following script:

#!/bin/bash

# Container tooling available on Europa
ml load charliecloud singularity

# Keep build scratch out of $HOME
export TMPDIR=$SCRATCH/tmp

APPROOT=/scratch/csim/containers
APPNAME=intel_oneapi-hpckit
APPVER=2021.1-beta10

# Directory layout: ch = unpacked Charliecloud image, sifs = Singularity images,
# src = image tarballs
for APPDIR in ch sifs src
do
    mkdir -p $APPROOT/$APPNAME/$APPDIR
done

# Build the image with Podman from the Dockerfile above
podman build -t $APPNAME -f Dockerfile-$APPNAME

# Flatten the Podman image into a tarball Charliecloud understands
ch-builder2tar $APPNAME $APPROOT/$APPNAME/src

# Unpack the tarball into a Charliecloud image directory and keep a
# versioned copy of the tarball around
pushd $APPROOT/$APPNAME/src
umask 0022
ch-tar2dir $APPNAME.tar.gz ../ch
mv $APPNAME.tar.gz $APPNAME-$APPVER.tar.gz
mkdir -p ../ch/intel_oneapi-hpckit/scratch/csim   # mount point for the $SCRATCH bind
popd

After that, let’s start our container.

$ ml load charliecloud
$ cd $SCRATCH/containers/intel_oneapi-hpckit/ch/
$ ch-run -b /scratch/csim/:/scratch/csim --set-env=./intel_oneapi-hpckit/ch/environment ./intel_oneapi-hpckit/ -- bash

With the above command we have:

  • started our Charliecloud Intel oneAPI container
  • set our environment to be the same as the builder environment via the --set-env option
  • mounted $SCRATCH inside our container ($HOME is mounted by default)

Once we are inside the container, the Intel compilers are available.

[csim@europa src]$ which ifort
/opt/intel/oneapi/compiler/2021.1-beta10/linux/bin/intel64/ifort

After that, it’s a fairly standard compile.

$ tar xf vasp.5.4.4.tar.gz && cd vasp.5.4.4
$ cp arch/makefile.include.linux_intel makefile.include

edit makefile.include to add -DNGZhalf to the C preprocessor options (CPP_OPTIONS)

$ make std

edit makefile.include to add -DwNGZhalf to the C preprocessor options

$ make gam

copy back the original Intel makefile.include if you also want to build ncl

$ cp arch/makefile.include.linux_intel makefile.include
$ make ncl
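
For reference, here’s one way those edits could be scripted rather than done by hand. Just a sketch: it assumes the stock Intel template defines its preprocessor flags in a CPP_OPTIONS line (the 5.4.4 one does), so double-check against your makefile.include before trusting it.

#!/bin/bash
# Sketch of the sequence above, run inside the container from the directory
# holding vasp.5.4.4.tar.gz. Assumes the Intel template uses CPP_OPTIONS.
set -e

tar xf vasp.5.4.4.tar.gz && cd vasp.5.4.4

# std: start from the Intel template and append -DNGZhalf to CPP_OPTIONS
cp arch/makefile.include.linux_intel makefile.include
sed -i 's/^CPP_OPTIONS *=/& -DNGZhalf/' makefile.include
make std

# gam: additionally append -DwNGZhalf
sed -i 's/^CPP_OPTIONS *=/& -DwNGZhalf/' makefile.include
make gam

# ncl: restore the untouched template
cp arch/makefile.include.linux_intel makefile.include
make ncl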

So all of that worked. I now have VASP binaries on Europa. This is just standard, vanilla VASP. None of the VTST stuff and none of the Wannier90 stuff is there yet. Let’s start with pure VASP, test it, and then add the patches.

Do you have a small test job with known good output I could use to test? Perhaps an input file, slurm submission script, and output from Ganymede?

Once I have that, I’ll test it and make sure the container-built binaries work and then will make them available along with an example slurm script. After that, I’ll look at VTST and Wannier90.
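
As a preview, the eventual Slurm script will probably look roughly like the sketch below. Single node only for now; the job name, partition, core count, and binary path are placeholders that will change once the binaries have a permanent home.

#!/bin/bash
#SBATCH -J vasp_test        # placeholder job name
#SBATCH -N 1                # single node for now
#SBATCH -n 16               # adjust to the node's core count
#SBATCH -t 02:00:00
#SBATCH -p normal           # placeholder partition name

ml load charliecloud

IMG=$SCRATCH/containers/intel_oneapi-hpckit/ch/intel_oneapi-hpckit

# Launch the container-built vasp_std using the MPI stack inside the
# container; the binary path is a placeholder.
ch-run -b /scratch/csim:/scratch/csim \
       --set-env=$IMG/ch/environment \
       $IMG -- \
       mpirun -np $SLURM_NTASKS /scratch/csim/vasp.5.4.4/bin/vasp_std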

Hi Chris,

Thanks for detailing all of this. As I’ve said, many of our group members are familiar with the build process, but I don’t think any of us have ever used containers. Multiple compute resources are available to us, but we tend to heavily compartmentalize our work due to the effort involved in moving between systems. So something like this could be very useful to us.

I’m going to email you a test case.


Hi @csim,

For future VASP builds, please include the following modification:
in the source file linear_optics.F, un-comment the line which reads “CALL WRT_CDER_BETWEEN_STATES_FORMATTED”

Calculations involving optical properties write the optical transition state data to a binary file. By un-commenting the above line, the data is instead written in plaintext, allowing us to actually use it. As of now nobody seems to have figured out a direct conversion, so anyone who wants to use optical data from VASP has to modify the source code in this way. I recently learned that the guys in our group studying qubits are using a separate build for this reason. For now there’s no need to rebuild the Ganymede module, but as we develop Europa and test new hardware for Ganymede, we should include this in our process.
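
Something like the following before compiling should take care of it; this is only a sketch, and it assumes the call is commented out with a leading “!” in src/linear_optics.F, so check the file for your version first.

# Un-comment the formatted-output call; assumes a leading "!" style comment.
sed -i 's/^\( *\)!\( *CALL WRT_CDER_BETWEEN_STATES_FORMATTED\)/\1\2/' src/linear_optics.F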

Thanks,
Patrick

Hi @csim

When building VASP on a machine using Intel CPUs, we would ideally use the Intel Fortran Compiler, Intel MKL for all the linear algebra dependencies, and Intel’s implementation of MPI. As you mentioned, none of these are currently available on Europa. I am less concerned with the compiler, but the lack of MKL and IMPI is problematic.

I tried to build VASP myself using openmpi (v4.0.4 module), openblas (v0.3.7 module), scalapack (v2.1.0 module), fftw (v3.3.8 module), and LAPACK (v3.9 local install). The compiler was the mpifort wrapper included with the openmpi module.

VASP compiled without error using these dependencies, and the binary runs successfully on a single node. Setup time is atypically long, and upon completion I get the following message written 16 times (once per MPI task): “Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG IEEE_DENORMAL”. All the data is as expected.

However, if I try to run the binary on more than one node, the job fails with the following error:
“An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back
to mpirun due to a lack of common network interfaces and/or no
route found between them. Please check network connectivity
(including firewalls and network routing requirements).”

Is this an issue with my build, or the hardware configuration?

About VASP pre-compiler flags:
The flags -DNGZhalf, -DwNGZhalf, -DNGXhalf, and -DwNGXhalf are technically deprecated options. One of the lower-level makefiles will filter out all of these options from your makefile.include, and then add back the flag which is appropriate for your version (std, gam, ncl).

What’s the “better” way to do it, then, than setting these deprecated flags? Last time I looked, this was still the way TACC was doing it.

Best,
Chris

Deprecated is a strange term for these flags, because they are still technically used. The manual has very little to say on the matter, but you can see what happens by digging into the source code.

Under src/ there is a low-level makefile which contains the following line:
FPP=$(filter-out -DwNGZhalf -DNGZhalf -DwNGXhalf -DNGXhalf,$(CPP))

But this is followed by a series of if/else statements that detect the version and reinsert the appropriate flag. So -DNGZhalf does get used for the std version, for example. That decision is just made automatically.
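
Paraphrasing rather than quoting the source, the reinsertion looks roughly like this (mirroring the std/gam/ncl recipe earlier in the thread; not copied verbatim from the VASP makefile):

# Rough sketch, not verbatim VASP source: the filtered flag list gets the
# appropriate half-grid flag added back per build target.
ifeq ($(VERSION),std)
  FPP += -DNGZhalf
endif
ifeq ($(VERSION),gam)
  FPP += -DwNGZhalf
endif
# ncl keeps the full grid, so nothing is added back.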

I think before VASP 5.x the user was required to specify the appropriate flag for each version manually, which may be why TACC still specifies its use.

“As of now nobody seems to have figured out a direct conversion, so anyone who wants to use optical data from VASP has to modify the source code in this way.”

Hello, I stumbled on this thread while trying to compile VASP using an Intel oneapi-hpckit image (unfortunately without full success: the compilation works and the binary runs in serial, but it fails under mpirun).

In any case, I read this comment and just wanted to mention that pymatgen can read both formatted and unformatted WAVEDER(F) files and the same information is ultimately contained in both, so this is one option for people who do not want to recompile. Hope this helps.

Best,

Matt

(Just visiting your nice forum, not affiliated with CI)