
Re: [HTCondor-users] Parallel universe - Multi-hosts MPI UCX not working



Hi Martin,

I helped develop part of the openmpiscript a few years ago, and it hasn't been tested since the days when Open MPI 3.x was current, so I'm not too surprised that it's probably time to look at it again. I don't know what MCA parameters are available for UCX, but maybe you could try fiddling with MCA parameters to crank up the verbosity of messages (based on your email, I'm guessing one of these could be "--mca pml_ucx_verbose 100") and send along what you find.
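For reference, a sketch of how that verbosity could be turned up when openmpiscript invokes mpirun (the application name is a placeholder; pml_ucx_verbose and pml_base_verbose are standard Open MPI MCA parameters, and UCX_LOG_LEVEL is UCX's own logging knob):

```shell
# Sketch: maximize UCX PML and PML framework verbosity for one debug run.
# ./your_mpi_app is a placeholder for the real application.
UCX_LOG_LEVEL=debug \
mpirun --mca pml ucx \
       --mca pml_ucx_verbose 100 \
       --mca pml_base_verbose 100 \
       ./your_mpi_app
```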

Jason Patton

On Thu, Nov 17, 2022 at 12:07 PM Beaumont, Martin <Martin.Beaumont@xxxxxxxxxxxxxxx> wrote:

Hello all,


Using Open MPI 4.x does not work when a parallel universe job requests more than one host. Has anyone succeeded in using the ucx PML with HTCondor's openmpiscript wrapper example?


$CondorVersion: 9.0.15 Jul 20 2022 BuildID: 597761 PackageID: 9.0.15-1 $

$CondorPlatform: x86_64_Rocky8 $


Using both OpenFOAM and SU2 compiled against Open MPI 4.1.4, with the mpirun MCA arguments "--mca btl ^openib --mca pml ucx --mca plm rsh", parallel jobs running on only one host complete successfully. If more than one host is requested, UCX spits out memory allocation errors and the job fails right away. This is on an InfiniBand fabric.
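For concreteness, the failing multi-host invocation looks roughly like this (a sketch: the core count, hostfile name, and solver binary are placeholders; in the real run openmpiscript and HTCondor supply them):

```shell
# Sketch of the multi-host case that fails under the parallel universe.
# -np, the hostfile, and ./solver are placeholders for what
# openmpiscript generates from the HTCondor slot assignment.
mpirun --mca btl ^openib --mca pml ucx --mca plm rsh \
       -np 64 --hostfile machine_file ./solver
```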


[1668707979.493116] [compute1:22247:0] ib_iface.c:966 UCX ERROR ibv_create_cq(cqe=4096) failed: Cannot allocate memory

… (repeated for X number of cores)

[compute1:22247] ../../../../../ompi/mca/pml/ucx/pml_ucx.c:309 Error: Failed to create UCP worker

--------------------------------------------------------------------------

No components were able to be opened in the pml framework.


This typically means that either no components of this type were

installed, or none of the installed components can be loaded.

Sometimes this means that shared libraries required by these

components are unable to be found/loaded.


 Host: compute1

 Framework: pml

--------------------------------------------------------------------------

[compute1:22247] PML ucx cannot be selected

[compute1:22236] ../../../../../ompi/mca/pml/ucx/pml_ucx.c:309 Error: Failed to create UCP worker

… (repeated for X number of cores)

[compute2:22723] 31 more processes have sent help message help-mca-base.txt / find-available:none found

[compute2:22723] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages


If I instead use Open MPI 3.1.6 with the arguments "--mca btl openib,self --mca plm rsh", then multi-host jobs complete successfully.

Also, if I use the same mpirun command with "--mca btl ^openib --mca pml ucx" without HTCondor (no schedd, running live on the cluster using passwordless ssh), multi-host jobs also work.
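The working non-HTCondor test is essentially (a sketch: the host names and slot counts are placeholders taken from the log excerpt above, and ./solver stands in for the application):

```shell
# Outside HTCondor: same MCA settings, launched over passwordless ssh.
# This variant succeeds across multiple hosts on the same fabric.
mpirun --mca btl ^openib --mca pml ucx \
       --host compute1:32,compute2:32 ./solver
```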


Since openib is already deprecated in OMPI 4.x and scheduled to be removed in 5.x (https://www.open-mpi.org/faq/?category=openfabrics#openfabrics-default-stack), instead of trying to make openib work again with OMPI 4+, I'd prefer to find a way to make UCX work within HTCondor.

Using OMPI 3.1.6 is still viable for now, but I'm guessing we'll eventually hit an OS or app version that simply won't work with old OMPI versions.


My wild guess is that it has something to do with orted_launcher.sh / get_orted_cmd.sh / condor_chirp and UCX not being able to work together properly, but this is beyond my understanding at this point.


Any clues would be appreciated. Thanks!


Martin


_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/