[HTCondor-users] problems with jobs requiring more than 2GB memory
- Date: Fri, 30 May 2025 10:28:48 +0300
- From: Mihai Ciubancan <ciubancan@xxxxxxxx>
- Subject: [HTCondor-users] problems with jobs requiring more than 2GB memory
Hello,
I am encountering issues with LHCb jobs, which require more than 2GB per
job. The jobs are failing with the following error:
LastHoldReason = "Error from reserved-LHCb2_5@xxxxxxxxxxxxxx: Job has
gone over cgroup memory limit of 2048 megabytes. Last measured usage:
2033 megabytes. Consider resubmitting with a higher request_memory."
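For reference, the hold message suggests resubmitting with a higher
request_memory. A minimal submit-file sketch (the executable name and the
4096 MB value are placeholders) would look like:

    # hypothetical submit file: raises the memory request above the 2048 MB limit
    executable = run_lhcb_job.sh
    request_memory = 4096
    queue

However, the jobs come in through the pilot framework, so I cannot easily
change the per-job request and would like to address this in the slot
configuration instead.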
I have configured partitionable slots:
CLAIM_WORKLIFE=3600
CONTINUE=TRUE
JOB_RENICE_INCREMENT=10
KILL=FALSE
NUM_SLOTS=4
NUM_SLOTS_TYPE_1=4
SLOT_TYPE_1_PARTITIONABLE=TRUE
SLOT_TYPE_1=cpus=8, memory=4096
SLOT_TYPE_1_START=Owner=="pillhcb01"
SLOT_TYPE_1_NAME_PREFIX=reserved-LHCb
PREEMPT=FALSE
RANK=0
SUSPEND=FALSE
SLOT_TYPE_1_CONSUMPTION_POLICY=False
CONSUMPTION_POLICY=False
CLAIM_PARTITIONABLE_LEFTOVERS=False
The cgroup policy is also enabled:
BASE_CGROUP = /system.slice/condor.service
CGROUP_MEMORY_LIMIT_POLICY = soft
MAXJOBRETIREMENTTIME = $(HOUR) * 24 * 7
SYSTEM_PERIODIC_REMOVE = ResidentSetSize > 3000*RequestMemory
Any suggestions would be highly appreciated!
Best,
Mihai