Re: [Condor-users] Default User priority factor


Date: Wed, 9 Feb 2005 14:27:41 +0000
From: Matt Hope <matthew.hope@xxxxxxxxx>
Subject: Re: [Condor-users] Default User priority factor
On Wed, 9 Feb 2005 08:53:15 -0500, Robert E. Parrott
<parrott@xxxxxxxxxxxxxxxx> wrote:
> Hi folks,
> 
> We'd like to manage effective user priority based on how much a
> research group contributes to the resources in a condor pool.
> 
> As such, we need to be able to increase the effective priority on a
> per-user basis, raising certain users' priorities above the rest.
> 
> To do this, ideally we would want the default user priority factor
> to be something like 100, and lower it in proportion to the resources
> that a user (or user's group) brings to the table.
> 
> At this point, the default user factor is set to 1.0, and can't go any
> lower, which means we've bottomed out in being able to raise a certain
> user's priority over the default.
> 
> Is there a way to set the default user priority factor? We've set the
> NICE_USER_PRIO_FACTOR to 10, but this only affects users who choose to
> nice themselves in the submit scripts, and so doesn't allow for
> enforcement of such a policy.
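
As a direct answer to the question: the default factor is a config
knob on the central manager, and per-user factors can then be set
below it with condor_userprio. A minimal sketch (the user name is
made up):

    # negotiator config on the central manager
    DEFAULT_PRIO_FACTOR = 100.0

    # then lower the factor for users whose groups contribute resources:
    #   condor_userprio -setfactor a1@your.domain 25.0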

However, unless you only have one submitting user per 'group', the
policy enforcement you describe will have issues.

Say you have three users, A1, A2, and B1, from groups A and B, where
group A contributes 1/4 of the pool and group B contributes 3/4.

If you give A1 and A2 each an effective priority 3 times lower (i.e.
better) than B1's, then when all users are submitting jobs, A1 and A2
between them will get more than the "fair share" for group A: Condor
hands out machines roughly in inverse proportion to effective
priority, so the split tends toward A1:A2:B1 = 3:3:1, and group A
collects about 6/7 of the pool rather than its intended 1/4.

If you instead split the bump in priority factor between all the
users in the group, then the group only achieves its full fair share
when every one of its users is running something (see the sketch
below).
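
To make the arithmetic concrete, here is a sketch of factors that
would hit the 1/4 : 3/4 split when everyone is busy, and what happens
when one group member goes idle (shares come out roughly inversely
proportional to the factor; user names made up):

    #   condor_userprio -setfactor a1@your.domain 6.0   -> share 1/8
    #   condor_userprio -setfactor a2@your.domain 6.0   -> share 1/8
    #   condor_userprio -setfactor b1@your.domain 1.0   -> share 3/4
    # with A2 idle, A1's share is (1/6)/(1/6 + 1) = 1/7 of the pool,
    # well short of the 1/4 that group A contributed.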

Neither solution is going to work 100% unless group == user.

If the resources are themselves contributed by the individuals, then
you will get fair behaviour by letting the individual resources
express a preference via machine RANK. The notable problem with this
approach is that there is NO WAY to prevent preemptions from
occurring (job retirement will prevent the preemption from having a
major impact, but this is a fudge at best and may prevent alternate
preemption methodologies from working appropriately). If you will be
running standard universe jobs or other preemption-friendly code,
this may not be an issue.
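
A sketch of what that looks like in the startd config on the machines
group A contributed (user names made up; MAXJOBRETIREMENTTIME assumes
a retirement-aware 6.7-era startd):

    # prefer group A's jobs; a job that ranks higher than the running
    # one triggers rank preemption, which is exactly the annoyance below
    RANK = (Owner == "a1") || (Owner == "a2")

    # soften it: let the preempted job run up to a day before eviction
    MAXJOBRETIREMENTTIME = 86400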

Incidentally, it matters not at all if you are sourcing the pool
infrastructure centrally from contributions of money rather than of
physical machines; just set up the configs on sections of those
machines as required.

Two issues with such a setup:
- your granularity of segmentation is limited by the number of
  execute nodes, and
- you need to maintain a few macros in the config files identifying
  which users are in which group (a sketch follows below).
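
One plausible arrangement for those macros (names invented) is a
shared file defining group membership once, referenced from each
segment's local config:

    # shared config
    GROUP_A_USERS = (Owner == "a1") || (Owner == "a2")
    GROUP_B_USERS = (Owner == "b1")

    # local config on group A's machines
    RANK = $(GROUP_A_USERS)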
Hopefully neither of these is a showstopper... For me, the rank
triggering preemption is the biggest annoyance. I would love there to
be (yet) another PREEMPT expression which would be used for cases
where a machine ranks a job higher than its existing job...

Matt
