[HTCondor-users] Using cgroups to limit job memory
- Date: Wed, 01 Apr 2015 15:20:43 +0100
- From: Roderick Johnstone <rmj@xxxxxxxxxxxxx>
- Subject: [HTCondor-users] Using cgroups to limit job memory
Hi
I'm using HTCondor 8.2.7 on Red Hat 6.6 and have set up cgroups as per
the manual, so that jobs with many processes cannot use too much memory.
I have CGROUP_MEMORY_LIMIT_POLICY = hard
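For reference, the relevant execute-node configuration looks something
like this (BASE_CGROUP shown with what I believe is its default value):

  # condor_config on the execute node (sketch)
  BASE_CGROUP = htcondor
  CGROUP_MEMORY_LIMIT_POLICY = hard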
When I specify, e.g., request_memory = 100M in the job submit file, the
job is indeed limited to 100M of resident memory.
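A minimal submit file along these lines shows the effect (my_job is
just a placeholder for the real executable):

  # minimal submit file sketch; my_job is a placeholder
  executable     = my_job
  request_memory = 100M
  queue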
While this behaviour is good for the machine owner, it's less than
ideal for the job owner: the job may continue, but only very slowly,
because it is paging heavily. This condition might not be obvious to
the job owner.
Although this seems to be the behaviour documented in the manual, I'm
sure I have seen a description of a configuration in which the job can
be placed on hold, with a suitable message, if it tries to allocate
more memory than it requested, but I can't find that description now.
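If I'm remembering right, it may have been something along the lines of
a periodic_hold expression in the submit file, sketched from memory
here (the UNDEFINED guard is because MemoryUsage is not defined until
the job reports its usage):

  # sketch from memory: hold the job once measured memory exceeds the request
  periodic_hold        = (MemoryUsage =!= UNDEFINED) && (MemoryUsage > RequestMemory)
  periodic_hold_reason = "Job exceeded its requested memory"

or perhaps the equivalent SYSTEM_PERIODIC_HOLD expression in the
schedd's configuration.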
So, is it possible to configure what happens when a job exceeds its
requested memory?
Thanks
Roderick Johnstone