This works as job-side sorting that keeps the small jobs on the smaller-memory nodes. Two things to note here:
1) The 'M' suffix on the memory quantity works for the request_memory command in the submit file, but doesn't work in the requirements expression. The units for RequestMemory in the job ad and Memory in the slot ad are megabytes; condor_submit translates the 'M' suffix in request_memory from the submit file when constructing the job ad.
2) You don't need to set requirements for the large-memory jobs at all. condor_submit will then add the default constraint of (TARGET.Memory >= RequestMemory).
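Putting both notes together, a minimal submit-file sketch of the job-side policy might look like this (the executable name and memory values are placeholders; note that the cap in the requirements expression is in plain megabytes, with no M suffix):

    # Small job: request 400GB and restrict it to the 500GB nodes.
    # Slot Memory is in megabytes, so the cap is written as 500000.
    executable      = my_job
    request_memory  = 400000M
    requirements    = (TARGET.Memory >= RequestMemory) && (TARGET.Memory <= 500000)
    queue

Because this requirements expression references Memory explicitly, it spells out the (TARGET.Memory >= RequestMemory) clause itself rather than relying on the default.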
You can also do execute-side enforcement if you want this policy to apply to all jobs. You'd modify the START expression of the glideins on the large-memory nodes to be this (along with any other constraints you want):
START = TARGET.RequestMemory > 500000
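If the glidein configuration already defines a START expression you want to keep, one way to fold the memory constraint in (a sketch, assuming the existing policy should be preserved) is:

    # Large-memory glidein: keep the existing START policy and only
    # accept jobs requesting more than 500GB (RequestMemory is in MB).
    START = ($(START)) && (TARGET.RequestMemory > 500000)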
- Jaime
On Mar 5, 2024, at 4:20 PM, Seung-Jin Sul <ssul@xxxxxxx> wrote:
Hi,
We are using a SLURM-based glide-in system. Some of the nodes are high-mem nodes with 1.5TB of memory and the others have 500GB. What would be the best way to schedule tasks to the different pools based on `RequestMemory`?
For example:

if `RequestMemory` <= 500GB
===>
requirements = (TARGET.Memory >= RequestMemory) && (TARGET.Memory <= 500000M)

else if `RequestMemory` > 500GB and `RequestMemory` <= 1.5TB
===>
requirements = (TARGET.Memory >= RequestMemory) && (TARGET.Memory > 500000M) && (TARGET.Memory <= 1500000M)
Thank you for your help!
Best,
Seung