Hi Todd, thanks for your answer!
The setup looks like this:
[setup diagram]

The common filesystem only exists on the APSUSE cluster and the head node, _not_ on the ProtoNip cluster.
I would still like to transfer job input data from the BeeGFS to the ProtoNip cluster.
In doing so, I would like Condor to use the extra 10GbE link and not the default 1GbE line.
As I understand it, this can be achieved with URL transfers, e.g. using the standard HTTP plugin or a custom rsync plugin.
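For illustration, a submit file fragment along these lines is what I have in mind (the hostname "apsuse-10g" and the port are made up; the name would have to resolve to the head node's 10GbE interface on the execute nodes):

    universe                = vanilla
    executable              = process.sh
    # fetch the input over the 10GbE network via the standard HTTP plugin
    transfer_input_files    = http://apsuse-10g:8080/data/input.dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue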
However, the manual says:
"For transferring input files, URL specification is limited to jobs running under the vanilla universe and to a vm universe VM image file." [1]
So my question is: If I cannot use URL file transfers with the Docker universe, how would you go about transferring job input data from APSUSE's BeeGFS to the ProtoNip cluster execution nodes?
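In case it is useful, this is roughly the kind of rsync plugin I had in mind; a minimal sketch in Python, assuming the classic single-file plugin interface (invoked with -classad for capability discovery, otherwise with a source URL and a destination path) and, again, a made-up 10GbE hostname:

    #!/usr/bin/env python3
    # Sketch of a custom HTCondor file transfer plugin for rsync:// URLs.
    # Assumes the single-file plugin interface; hostnames/paths are
    # placeholders for this setup.
    import subprocess
    import sys

    def print_capabilities():
        # Condor invokes the plugin with -classad to discover which
        # URL methods it supports.
        print('PluginVersion = "0.1"')
        print('PluginType = "FileTransfer"')
        print('SupportedMethods = "rsync"')

    def main():
        if len(sys.argv) == 2 and sys.argv[1] == "-classad":
            print_capabilities()
            return 0
        if len(sys.argv) != 3:
            # expected invocation: rsync_plugin.py <source-url> <dest-path>
            return 1
        url, dest = sys.argv[1], sys.argv[2]
        # rsync understands rsync://host/module/path URLs natively,
        # provided an rsync daemon is running on the source host.
        return subprocess.run(["rsync", "-a", url, dest]).returncode

    if __name__ == "__main__":
        sys.exit(main())

If I read the admin manual correctly, the plugin would then be registered on the execute nodes via FILETRANSFER_PLUGINS (e.g. FILETRANSFER_PLUGINS = $(FILETRANSFER_PLUGINS), /usr/libexec/condor/rsync_plugin.py), and jobs would use rsync:// URLs in transfer_input_files.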
Thanks for your help!
Cheers, Jan
[1] https://htcondor.readthedocs.io/en/latest/admin-manual/setting-up-special-environments.html
--
MAX-PLANCK-INSTITUT fuer Radioastronomie
Jan Behrend - Backend Development Group
----------------------------------------
Auf dem Huegel 69, D-53121 Bonn
Tel: +49 (228) 525 248
http://www.mpifr-bonn.mpg.de