How do I convert my OpenMPI PBS job script to a SLURM script?

How can I convert this PBS script to SLURM?

In particular, I’m not quite sure which SLURM parameters will provide the same behavior as this directive in the PBS script: -l nodes=2:ppn=16

#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -N abcName
#PBS -A 9876
#PBS -o $PBS_JOBID.stdout
#PBS -e $PBS_JOBID.stderr
#PBS -r n
#PBS -M mlacount@mines.edu
#PBS -m abe
#PBS -V
#PBS -l walltime=3:00:00
#-----------------------------------------------------
module load PrgEnv/intel/default
module load openmpi/intel/default
module load PrgEnv/libs/fftw/3.3.3
module load Apps/RGWBS/Jan2_2009

cd $PBS_O_WORKDIR

#save a nicely sorted list of nodes
#sort -u  $PBS_NODEFILE > mynodes.$PBS_JOBID

mpiexec parsec.mpi > parsec.$PBS_JOBID.log
mpiexec tdlda.mpi > tdlda.$PBS_JOBID.log
mpiexec sigma.mpi > sigma.$PBS_JOBID.log
mpiexec bsesolv.mpi > bse.$PBS_JOBID.log
PBS                                           SLURM
#!/bin/bash                                   #!/bin/bash
#PBS -l nodes=2:ppn=16                        #SBATCH --nodes=2
                                              #SBATCH --ntasks-per-node=16
#PBS -N abcName                               #SBATCH --job-name="abcName"
#PBS -A 9876                                  #SBATCH --account=9876
#PBS -o $PBS_JOBID.stdout                     #SBATCH -o %j.out
#PBS -e $PBS_JOBID.stderr                     #SBATCH -e %j.err
#PBS -r n                                     #SBATCH --no-requeue
#PBS -M baz@foo.edu                           #SBATCH --mail-user=baz@foo.edu
#PBS -m abe                                   #SBATCH --mail-type=ALL
#PBS -V                                       #SBATCH --export=ALL
#PBS -l walltime=3:00:00                      #SBATCH -t 3:00:00
#-----------------------------------------------------

module load PrgEnv/intel/default              module load PrgEnv/intel/default
module load openmpi/intel/default             module load openmpi/intel/default
module load PrgEnv/libs/fftw/3.3.3            module load PrgEnv/libs/fftw/3.3.3
module load Apps/RGWBS/Jan2_2009              module load Apps/RGWBS/Jan2_2009

cd $PBS_O_WORKDIR                             cd $SLURM_SUBMIT_DIR

mpiexec parsec.mpi > parsec.$PBS_JOBID.log    srun parsec.mpi > parsec.$SLURM_JOBID.log
mpiexec tdlda.mpi > tdlda.$PBS_JOBID.log      srun tdlda.mpi > tdlda.$SLURM_JOBID.log
mpiexec sigma.mpi > sigma.$PBS_JOBID.log      srun sigma.mpi > sigma.$SLURM_JOBID.log
mpiexec bsesolv.mpi > bse.$PBS_JOBID.log      srun bsesolv.mpi > bse.$SLURM_JOBID.log
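One detail worth noting in the output-file translation: #SBATCH directives are parsed at submission time, before the job exists, so shell variables such as $SLURM_JOBID are not expanded there. SLURM instead provides filename patterns; %j expands to the numeric job ID when the job runs:

#SBATCH -o %j.out        # stdout file named after the job ID
#SBATCH -e %j.err        # stderr file named after the job ID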

SchedMD provides a document that helps translate between different schedulers: the SLURM Rosetta Stone (https://slurm.schedmd.com/rosetta.pdf).

NOTES: In PBS, there is no equivalent to SLURM’s --cpus-per-task. PBS scripts are based on a quantity called the Processor Equivalent (PE), a scalar value that is the maximum of four job- and system-dependent parameters. We tended to write PBS scripts in terms of the number of nodes and processors per node, as in the PBS column above. SLURM instead lets a script specify resources in terms of tasks; the SLURM column above is an attempt at a “direct” translation. These examples are just that: other valid parameter combinations can achieve the same configuration goals, as sketched below.
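For instance, assuming 16-core nodes as implied by the original nodes=2:ppn=16 request, each of the following header fragments is a valid way to ask for the same 32 processors; which one is preferable depends on whether the code is pure MPI or hybrid MPI/OpenMP:

# Pure MPI: 2 nodes, 16 single-threaded tasks on each
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

# Total-task form: 32 tasks, node layout left to SLURM
#SBATCH --ntasks=32

# Hybrid MPI/OpenMP: 8 tasks per node, 2 cores per task for threads
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=2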

Recommended SLURM script:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=2
#SBATCH --job-name="abcName"
#SBATCH --account=9876
#SBATCH -o %j.out
#SBATCH -e %j.err
#SBATCH --mail-user=baz@foo.edu
#SBATCH --mail-type=ALL
#SBATCH --export=ALL
#SBATCH -t 3:00:00

module load PrgEnv/intel/default
module load openmpi/intel/default
module load PrgEnv/libs/fftw/3.3.3
module load Apps/RGWBS/Jan2_2009

cd $SLURM_SUBMIT_DIR

srun parsec.mpi > parsec.$SLURM_JOBID.log
srun tdlda.mpi > tdlda.$SLURM_JOBID.log
srun sigma.mpi > sigma.$SLURM_JOBID.log
srun bsesolv.mpi > bse.$SLURM_JOBID.log
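
Once saved (as, say, abcName.slurm; the filename is arbitrary), the script is submitted with sbatch rather than qsub, and jobs are monitored and cancelled with the usual SLURM tools:

sbatch abcName.slurm    # prints: Submitted batch job <jobid>
squeue -u $USER         # list your queued and running jobs
scancel <jobid>         # cancel a job (the analogue of qdel)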