Hello,
I'm trying to submit a job via bosco with

request_cpus = N

in the submit file. For some reason, the job on the remote submit host (condor batch system) always uses 1 for RequestCpus, ignoring my request. If I use request_memory, that works fine.
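For context, the submit file on the bosco side looks roughly like this (the executable name and the grid resource string below are placeholders, not my exact values):

  universe       = grid
  grid_resource  = batch condor khurtado@apf-test.virtualclusters.org
  executable     = test.sh
  request_cpus   = 8
  request_memory = 4000
  output         = job.out
  error          = job.err
  log            = job.log
  queue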
Digging into the parameters passed to bosco/condor/glite/bin/condor_submit.sh, I can see "-n" is never parsed [1].

I see a patch regarding multicore here, but I don't know what is in charge of sending that 'mpinodes' parameter to blah. Is that the remote gahp / batch_gahp?
https://github.com/htcondor/htcondor/commit/02223163371769e652ea9072caa07191137c86ce#diff-2acfefa411fd99ff66c1d123923ec9b5
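To illustrate what I mean by "parsed": this is roughly the kind of handling I would expect somewhere in condor_submit.sh. The option string and the bls_opt_mpinodes / bls_tmp_file variable names are my guesses based on the other blahp submit scripts, not the actual patch:

  #!/bin/bash
  # sketch only: accept -n (mpinodes) alongside the options the
  # wrapper is already called with (see [1], where -n never appears)
  bls_opt_mpinodes=1
  bls_tmp_file=${bls_tmp_file:-/tmp/bls_submit.$$}
  while getopts "c:i:o:e:w:T:O:D:m:n:V:" arg; do
      case "$arg" in
          n) bls_opt_mpinodes="$OPTARG" ;;  # cpu count from request_cpus
          *) ;;                             # other options handled as today
      esac
  done
  # forward the value into the submit file generated for the remote
  # condor batch system, so RequestCpus stops defaulting to 1:
  echo "request_cpus = $bls_opt_mpinodes" >> "$bls_tmp_file"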
What do I need to do to properly submit multicore jobs? I'm using condor 8.6.9 on the bosco side.
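For reference, this is essentially how I'm checking the result on the remote submit host (plain condor_q, nothing bosco-specific):

  # on the remote condor batch system:
  condor_q -autoformat ClusterId RequestCpus RequestMemory
  # RequestMemory tracks my request_memory, but RequestCpus is always 1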
[1]
-c /home/khurtado/.condor/bosco/sandbox/58fb/58fbd746/apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834/condor_exec.exe 1525460844647080
-T /tmp
-O /tmp/OutputFileList_2924_11000_apf-test.virtualclusters.org#55038.0#1525460834
-i /dev/null
-o _condor_stdout
-e _condor_stderr
-w /home/khurtado/.condor/bosco/sandbox/58fb/58fbd746/apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834
-D home_bl_apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834
-m 4000
-V "FACTORYUSER=autopyfactory"

(Note that "-n" does not appear anywhere in that list, while -m 4000 does match my request_memory.)

Best,
Kenyi