Just curious whether anyone has comments on a problem we are seeing on our new Skylake cluster. Every compiler/MPI combination we have tried (intel-2018, intel-2019, and gcc-9.1.0, each with openmpi or mvapich2, as well as intel-2018-mpi) runs fine on all 960 cores; intel-2019-mpi, however, fails on more than ~300 cores. We can switch FI_PROVIDER from ofi_rxm to verbs, but then all codes slow down significantly (though nothing crashes).
I have posted a message on Mellanox’s forum and was told I should contact Intel. I have also tried to post on Intel’s HPC forum and the libfabric forum, but these messages are stuck in moderation.
Anyway, I would appreciate any pointers on this; I am really interested in using intel-2019 MPI.
Hi, we are currently standing up a new cluster with Mellanox ConnectX-5 adapters. I have found that with Open MPI, MVAPICH2, and Intel 2018 MPI we can run MPI jobs on all 960 cores in the cluster; with Intel 2019 MPI, however, we cannot get beyond ~300 MPI ranks. If we do, we get the following error for every rank:
Abort(273768207) on node 650 (rank 650 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack:
PMPI_Comm_split(507)…: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=650, new_comm=0x7911e8) failed
PMPI_Comm_split(489)…:
MPIR_Comm_split_impl(167)…:
MPIR_Allgather_intra_auto(145)…: Failure during collective
MPIR_Allgather_intra_auto(141)…:
MPIR_Allgather_intra_brucks(115)…:
MPIC_Sendrecv(344)…:
MPID_Isend(662)…:
MPID_isend_unsafe(282)…:
MPIDI_OFI_send_lightweight_request(106):
(unknown)(): Other MPI error
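For reference, here is a minimal sketch of the failing call (my own toy, not one of our production codes): every rank does the MPI_Comm_split shown at the top of the stack, with color=0 and key=rank. Built with mpiicc, it should exercise the same path:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* same arguments as in the error stack: color=0, key=rank */
    MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &newcomm);

    if (rank == 0)
        printf("MPI_Comm_split OK on %d ranks\n", size);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}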
The error above occurs with the default FI_PROVIDER, ofi_rxm. If we switch to "verbs", we can run on all 960 cores, but tests show an order-of-magnitude increase in latency and much longer run times.
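For concreteness, the only change we make to get the full-size runs working is in the job script (FI_PROVIDER is a standard libfabric variable; ./my_app below is just a placeholder for our actual codes):

export FI_PROVIDER=verbs     # bypass ofi_rxm; runs on all 960 cores, but slowly
mpirun -np 960 ./my_app      # placeholder binary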
We have also tried installing our own libfabric (built from the git repo; we verified with verbose debugging that this libfabric is the one being used), and the behavior does not change.
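In case it is useful, this is roughly how we pointed Intel MPI at our own build and verified it (FI_LOG_LEVEL and the fi_info utility ship with libfabric; I_MPI_OFI_LIBRARY_INTERNAL=0 is Intel MPI 2019's switch for using an external libfabric; the install prefix below is just our local choice):

export I_MPI_OFI_LIBRARY_INTERNAL=0                          # use the external libfabric, not the bundled one
export LD_LIBRARY_PATH=/opt/libfabric/lib:$LD_LIBRARY_PATH   # our install prefix
export FI_LOG_LEVEL=debug                                    # startup log shows which library/provider is loaded
fi_info -p verbs                                             # confirm the verbs provider is visible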
Is there anything I can change to allow all 960 cores with the default ofi_rxm provider? Or is there a way to improve performance with the verbs provider?
For completeness:
OFED: MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.6-x86_64
OS: CentOS 7.6.1810 (kernel 3.10.0-957.21.3.el7.x86_64)
Intel Parallel Studio version 19.0.4.243
Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Thanks!
Eric