Re: [HTCondor-devel] p-slots and preemption


Date: Tue, 18 Jun 2013 06:49:36 -0500
From: Brian Bockelman <bbockelm@xxxxxxxxxxx>
Subject: Re: [HTCondor-devel] p-slots and preemption
Yes - this would increase the size of the slots used by opportunistic jobs.

However, not all of our nodes have 2.5GB of RAM per core, so we would have to sacrifice some cores in that case.

Brian

On Jun 17, 2013, at 9:51 PM, Erik Erlandson <eje@xxxxxxxxxx> wrote:

> With consumption policies, the way to get slots with more RAM would be:
> 
> # startd config:
> CONSUMPTION_POLICY = True
> CONSUMPTION_CPUS = quantize(target.Cpus, {1})
> CONSUMPTION_MEMORY = quantize(target.Memory, {2500})
> CONSUMPTION_DISK = quantize(target.Disk, {100}) # or whatever
> 
> FYI, consumption policies are now available on master.  Memory quantization could also be achieved with
> 
> MODIFY_REQUEST_EXPR_REQUESTMEMORY = quantize(RequestMemory,{2500})
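For readers unfamiliar with the ClassAd quantize() built-in used above, its rounding behavior can be sketched in Python. This is only an illustration of the semantics, not HTCondor's implementation; the function name simply mirrors the ClassAd built-in:

```python
import math

def quantize(value, points):
    """Sketch of the ClassAd quantize() built-in with a list argument:
    return the smallest item in `points` that is >= value; if value
    exceeds every item, round up to a multiple of the last item."""
    pts = sorted(points)
    for p in pts:
        if value <= p:
            return p
    return math.ceil(value / pts[-1]) * pts[-1]

# A 1.9GB request is rounded up to a 2.5GB slot,
# which is what lets the 2.5GB CMS jobs match:
print(quantize(1900, [2500]))  # 2500
print(quantize(3000, [2500]))  # 5000
```

So with MODIFY_REQUEST_EXPR_REQUESTMEMORY set as above, every job's memory request is rounded up to a multiple of 2500 before a slot is carved out.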
> 
> 
> 
> 
> ----- Original Message -----
>> Hi,
>> 
>> We recently tried to enable preemption on our local cluster - and, well, not
>> much happened.
>> 
>> We were trying to preempt opportunistic jobs - 1.9GB RAM, 1 core requested
>> apiece - with CMS jobs - 2.5GB RAM, 1 core.  Unfortunately, we use p-slots.
>> That means each opportunistic job runs in a slot which is precisely 1.9GB
>> of RAM, too small for CMS jobs to match.
>> 
>> CMS jobs don't match, meaning they can't preempt, meaning the high-priority
>> folks starve in line behind the low-priority folks.
>> 
>> I can't think of any particular way to "simply" fix this.
>> 
>> Thoughts?  How would this work with eje's consumption branch?
>> 
>> Brian
>> _______________________________________________
>> HTCondor-devel mailing list
>> HTCondor-devel@xxxxxxxxxxx
>> https://lists.cs.wisc.edu/mailman/listinfo/htcondor-devel

