MPI program runs out of memory

I submitted my MPI job and many of the nodes ran out of memory. Is there a way to submit my code so that it fits within the memory limit on each node?

CURATOR: Katia

ANSWER: In many cases the memory usage on each node can be reduced by running fewer MPI tasks than there are cores on the node, so that each task gets a larger share of the node's memory. For example, suppose your MPI application normally runs on 3 nodes with 16 cores each. To request all three nodes but run with only half of the cores on each, your SLURM submission script would include lines like the following:

# Request 3 nodes with 16 cores each:
#SBATCH -n 48
#SBATCH --ntasks-per-node=16

#load modules if needed
module load openmpi

# Run the program with only half of the available cores on each node
# (8 tasks per node, 24 tasks in total)
srun -n 24 --ntasks-per-node=8 ./hello-mpi
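
Since the openmpi module is loaded, the same layout can also be launched with Open MPI's mpirun instead of srun; its --map-by node option (the newer form of the old -bynode flag) places ranks round-robin across the allocated nodes. This is a sketch under the same 3-node, 48-core allocation, assuming hello-mpi is the executable built against the loaded Open MPI:

# Alternative: launch through Open MPI's mpirun and map ranks by node,
# which again gives 8 ranks per node when 24 ranks run on 3 nodes
mpirun -np 24 --map-by node ./hello-mpi

Either launcher has the same effect: each MPI task now shares the node's memory with 7 other tasks instead of 15.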