On 13/05/2016 14:48, Rich Pieri wrote:
> On 5/13/16 9:08 AM, sjones wrote:
>> # cat /etc/condor/config.d/00-node_parameters
>> NUM_SLOTS = 1
>> SLOT_TYPE_1 = cpus=32,mem=auto,disk=auto
>> NUM_SLOTS_TYPE_1 = 1
>> SLOT_TYPE_1_PARTITIONABLE = TRUE
>>
>> Yet the system continues to run the previous number, cpus=24.
>
> Try "NUM_CPUS = 32" to override the detected CPU count. As a rule of
> thumb, though, you should not allocate more CPUs or cores than you
> actually have. Running two jobs on one core simultaneously will take
> longer for both to complete than running each job sequentially on the
> same core.

That's unless your jobs are doing significant amounts of other activity (in particular disk or network I/O), in which case you may need to pretend that there are more cores than you really have in order to utilise them fully.
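For what it's worth, something along these lines is roughly what I would expect the drop-in to look like with Rich's override added (an untested sketch; the file name and the value 32 are just taken from the thread):

    # /etc/condor/config.d/00-node_parameters
    # Override the detected core count, then expose it as a single
    # partitionable slot.
    NUM_CPUS = 32
    NUM_SLOTS = 1
    NUM_SLOTS_TYPE_1 = 1
    SLOT_TYPE_1 = cpus=32,mem=auto,disk=auto
    SLOT_TYPE_1_PARTITIONABLE = TRUE

As far as I remember, changes to the slot layout only take effect once the condor_startd is restarted, so a plain condor_reconfig may not be enough.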
Last time I looked at this I wasn't able to make a job request a fraction of a CPU, although it has been a while since I tried.
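That is, in a submit description file you can ask for cores along the lines of the sketch below, but as far as I recall request_cpus only accepts whole numbers (the executable name and memory figure are just placeholders):

    universe       = vanilla
    executable     = my_job.sh
    request_cpus   = 1       # whole cores only, as far as I remember
    request_memory = 1024
    queue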
Also: if a slot advertises one CPU, I don't think a matched job running in it will actually be prevented from using more than one CPU, unless you are using cgroup enforcement.
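If you do want enforcement, the cgroup-related knobs I remember are roughly as below (a sketch from memory, not a tested recipe; check the manual for your HTCondor version). Note that the cgroup CPU controls are proportional, so a job is only throttled back to its share when the machine is actually contended:

    # /etc/condor/config.d/10-cgroups  (assumes a cgroup-enabled kernel)
    BASE_CGROUP = htcondor
    # "soft" lets jobs use spare memory; "hard" enforces the request.
    CGROUP_MEMORY_LIMIT_POLICY = soft
    # Alternatively, pin each slot's jobs to specific cores:
    # ASSIGN_CPU_AFFINITY = True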
Regards, Brian.