Re: [Condor-users] Request_memory and dynamic slots
- Date: Wed, 23 May 2012 07:30:45 -0500
- From: Todd Tannenbaum <tannenba@xxxxxxxxxxx>
- Subject: Re: [Condor-users] Request_memory and dynamic slots
On 5/23/2012 3:08 AM, Ian Cottam wrote:
> Scenario: we have a Linux/64 pool with all clients using dynamic slots.
> We have 7.4/7.6 (but if the answer to the question below is different for
> 7.8 I would be interested in that too).
I'll tackle the answer for v7.8, since I think I can do that off the top
of my head and things did change in v7.8... I'll let others answer for
v7.4/v7.6.
> We are about to tell our users not to use (Memory >= n) in Requirements,
> but to use
>   Request_memory = n
> as this appears to work much better with dynamic slots.
Yes, it not only appears to work better, it does work better :). In
fact, in v7.8 by default condor_submit will give a warning if you
reference Memory in the Requirements expression and will suggest you use
request_memory instead.
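For example, a minimal submit description file using request_memory (the executable name and the 3000 MB value here are hypothetical, just for illustration) might look like:

```
# Hypothetical submit file: ask for memory via request_memory
# rather than writing (Memory >= 3000) into Requirements yourself
universe       = vanilla
executable     = my_job        # hypothetical executable
request_memory = 3000          # in MB
queue
```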
In v7.8, the administrator has some config knobs to play with:
JOB_DEFAULT_REQUESTMEMORY : Used by condor_submit as the value for
request_memory if the user does not include request_memory in the submit
file. Defaults to:
   ifthenelse(MemoryUsage =!= UNDEFINED, MemoryUsage, 1)
The admin can also set up a default request_memory on the execute node -
this is very handy if you don't have config control over all the
machines submitting/flocking into your pool, or if you want to quantize
the incoming request_memory so you don't end up with lots of tiny
1 MB slots that are hard to reuse. The knob for this is:

MODIFY_REQUEST_EXPR_REQUESTMEMORY : ClassAd expression used by the
startd to modify/override whatever request_memory value is specified by
the incoming job. The expression can reference attributes in the
incoming job classad as well as machine ad attributes. Defaults to:
   quantize(RequestMemory, {TotalSlotMemory / TotalSlotCpus / 4})
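As a sketch, an administrator could set these knobs in the condor_config like so (the 1024 MB quantum below is an illustrative value, not a recommendation; the JOB_DEFAULT_REQUESTMEMORY line just restates the documented default):

```
# Submit side: default request_memory for jobs that omit it
# (this is the documented v7.8 default, shown for illustration)
JOB_DEFAULT_REQUESTMEMORY = ifthenelse(MemoryUsage =!= UNDEFINED, MemoryUsage, 1)

# Execute side: round incoming memory requests up to 1024 MB chunks
# so tiny requests don't fragment the partitionable slot
# (1024 is an illustrative quantum)
MODIFY_REQUEST_EXPR_REQUESTMEMORY = quantize(RequestMemory, {1024})
```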
> Say a given job will grow to use 3000 MB of memory; what is the difference
> between the following cases:
Note that in all cases, condor_submit will append a clause to the
Requirements expression such that:
   Memory >= request_memory
(i.e., match the job with a slot that has at least as much memory as the
job requests).
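Concretely, the effect looks something like the following sketch (the exact syntax of the appended clause may differ by version; the user-supplied values are illustrative):

```
# What the user writes in the submit file:
Requirements   = (OpSys == "LINUX")
request_memory = 3000

# What condor_submit effectively submits (sketch):
# Requirements = (OpSys == "LINUX") && (Memory >= RequestMemory)
```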
> a) User does not supply a Request_memory line.
> I believe the default is Request_memory = 1.
Then JOB_DEFAULT_REQUESTMEMORY is used, which will default to
request_memory = 1 if the job has never run, else it will default to the
memory usage observed from the last run.
Note that if request_memory = 1, that will not necessarily result in a
1 MB dynamic slot being created, because of the default
MODIFY_REQUEST_EXPR_REQUESTMEMORY. If a job with request_memory = 1
lands on a machine with 8 cores and 16 GB of memory configured into one
partitionable slot, the startd will attempt to create a 512 MB dynamic
slot for this job (16*1024/8/4 = 512).
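The quantize arithmetic above can be sketched in Python (a toy model of the quantize() ClassAd function for the single-element-list case; not the actual startd code):

```python
import math

def quantize(request_mb, quantum_mb):
    """Round a memory request up to the nearest multiple of the quantum,
    mimicking quantize(RequestMemory, {quantum}) with a one-element list."""
    return math.ceil(request_mb / quantum_mb) * quantum_mb

# The machine from the example: 8 cores, 16 GB in one partitionable slot
total_slot_memory = 16 * 1024                         # MB
total_slot_cpus = 8
quantum = total_slot_memory // total_slot_cpus // 4   # 512 MB

print(quantize(1, quantum))      # request_memory = 1 still yields a 512 MB slot
print(quantize(3000, quantum))   # 3000 MB rounds up to 3072 MB
```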
> b) User supplies:
>    Request_memory = 1
> Is this different from the default case (a) (on 7.4/7.6)?
On v7.8 it is indeed different. In case (a), request_memory equals
whatever JOB_DEFAULT_REQUESTMEMORY is in the config; in case (b),
request_memory equals 1.
> c) User supplies:
>    Request_memory = 1000
> d) User supplies:
>    Request_memory = 3000
(c) and (d) are the same as (b): condor_submit goes with what the user
specifies.
hope the above helps
Todd