Re: [HTCondor-users] Global prioritization
- Date: Thu, 07 Nov 2013 13:15:33 -0600
- From: Todd Tannenbaum <tannenba@xxxxxxxxxxx>
- Subject: Re: [HTCondor-users] Global prioritization
On 11/6/2013 10:38 AM, Alison McCrea wrote:
> Hello users,
>
> I'm trying to figure out how to use global prioritization, which I heard
> was available on Condor 8.0.x. I don't see much in the v8 manual
> (condor_prio doesn't seem to support prioritization across multiple
> schedds). Does anyone have any knowledge about how to use this feature?
This new feature you heard about for v8.0 allows the job priority (as
set by condor_prio) to be honored across multiple schedds if you set
USE_GLOBAL_JOB_PRIOS = True
in condor_config.
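For instance, a minimal sketch (the cluster id 123 and the priority
value 10 are made-up illustrations; setting the knob pool-wide is the
simplest choice):

# condor_config, pool-wide
USE_GLOBAL_JOB_PRIOS = True

# then, from a submit machine, raise the priority of job cluster 123
condor_prio -p 10 123

With that in place, the job priorities are honored across all the
schedds in the pool rather than only within a single schedd's queue.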
But based upon your goal stated below, job priority is not what you
want: job priority only dictates the order of execution among jobs
submitted by the same user. It has no impact on which user's jobs get
to run ahead of another's, and your goal is to run high-priority jobs
regardless of the user who submitted them. More below...
> My goal is to be able to prioritize certain jobs (jobs asking for memory
> above a certain threshold), regardless of schedd/user, and make them
> highest priority in the entire pool. Is this possible? (Or is it expected
> in a later version of Condor?)
Whether or not what you want to do is possible is hard to say precisely,
since the above is not a complete policy description... for instance,
you say jobs with memory > x should be 'high priority'. What if two
different users submit high-priority jobs - should those jobs run FIFO
independent of who submitted them, or should all high-priority jobs run
'fair-share' across all the users who submitted them? Should a
high-priority job preempt (kick off) an actively running low-priority
job, or wait for the low-priority job to complete (perhaps only waiting
for a max of X seconds)?
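As an aside, the 'wait for a max of X seconds' variant is usually
expressed with the startd's retirement time; a sketch, with a made-up
one-hour grace period:

# condor_config on the execute machines: when a job is preempted, let
# it keep running for up to an hour before it is actually kicked off
MAXJOBRETIREMENTTIME = 3600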
Having said the above, the answer to your questions is "yeah, it is
probably possible". :)
Hopefully I can point you in the right direction -
Are your machines configured with static slots or partitionable slots?
If static slots, perhaps the easiest thing to do is to simply set a
startd RANK expression which essentially says you want the slots
themselves to prefer "high-priority jobs", e.g., in condor_config have
RANK = TARGET.RequestMemory > x
(the TARGET. prefix makes the reference resolve against the job's
requested memory; an unqualified "memory" would resolve against the
machine's own Memory attribute instead).
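For instance, with a made-up threshold of 1024 MB:

# condor_config on the static-slot execute machines
RANK = (TARGET.RequestMemory > 1024)

The expression evaluates to 1 for jobs requesting more than 1024 MB and
0 for everything else, so each slot prefers the high-memory jobs; under
default settings a higher-RANK job can also preempt a running
lower-RANK one.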
While this is pleasantly simple, it unfortunately will not work with
partitionable slots, as startd rank and partitionable slots currently
cannot be mixed (with luck we will remedy that situation in the current
development cycle). If you do not know what a partitionable slot is, you
are likely not using them (and are just using the default static slots),
so you can safely ignore this caveat.
Another possibility would be to configure an accounting group quota for
the high-priority jobs, and set the quota on this group to be bigger
than the size of your pool. This has the advantage of working with both
static and partitionable slots. The potential downside is that the job
would have to be flagged as 'high priority' in the job submit file, in
the sense that the submit file would have to specify the accounting
group to use.
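A sketch of that wiring (the group name, quota value, and user name are
all made up; the quota just needs to exceed the size of the pool):

# condor_config on the central manager
GROUP_NAMES = group_highprio
GROUP_QUOTA_group_highprio = 100000

# in the submit file of each high-priority job
+AccountingGroup = "group_highprio.alison"

Jobs that carry the AccountingGroup attribute negotiate against the
oversized quota and so are never capped by it, while everything else
shares the pool as usual.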
This part of the manual and/or this HOWTO may also be helpful:
https://htcondor-wiki.cs.wisc.edu/index.cgi/wiki?p=HowToConfigPrioritiesForUsers
http://goo.gl/J9Q1QG
Feel free to repost with more specific questions if you need help,
and/or once you have fleshed out your desired policy more precisely...
hope the above helps,
best regards
Todd