Hi Jeff,
The original goal of consumption policies was, as Max stated, to allow a p-slot to be split multiple times in a single negotiation cycle. However, this behavior is now achieved by the Schedd itself via
CLAIM_PARTITIONABLE_LEFTOVERS. That being said, consumption policies are still useful in the following cases:
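For concreteness, the schedd-side splitting mentioned above is controlled by a single knob (it is a boolean evaluated by both the schedd and the startd, and is on by default in recent versions -- check your manual's version):

CLAIM_PARTITIONABLE_LEFTOVERS = True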
- When the pool has more users than p-slots (e.g. 16 users and 8x 128-core machines). Assuming an effectively endless supply of jobs that don't all end at the same time, the EPs will eventually average out
to 64 cores per user.
- Making concurrency limits behave the way you would expect. The negotiator doesn't -- and presently can't -- know how many concurrency tokens to assign to a given p-slot, so each slot only gets one. When the
schedd gets a p-slot with room for more than one job, it will therefore start only one job that requires a concurrency token, no matter how many other jobs from that user it starts on that p-slot. This can lead to (something like) priority inversion, where lower-priority
jobs that don't need the concurrency token run instead, because the schedd wholly consumes the p-slot before the negotiator gets another chance to distribute another concurrency token.
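For readers unfamiliar with the feature: a concurrency limit is declared in the negotiator's configuration and referenced by name from the submit file. The names below are purely illustrative:

# Negotiator configuration: at most 10 "db_license" tokens pool-wide.
DB_LICENSE_LIMIT = 10

# Submit file: each running instance of this job holds one token.
concurrency_limits = db_license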
The bad news, just so you know, is that there is no plan to spend developer hours supporting consumption policies.
Hope this helps,
Cole Bollig
From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> on behalf of Kühn, Max (SCC) <max.fischer@xxxxxxx>
Sent: Wednesday, February 4, 2026 2:53 PM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: Re: [HTCondor-users] What do consumption policies actually do?
Hi Jeff,
The ConsumptionPolicies really just do what they say, nothing more. They allow multiple jobs to match the same partitionable slot *during one cycle*. They don’t allow matching them all at once, or even selecting the best job to match.
Basically, ConsumptionPolicies allow the Negotiator to *immediately compute* the remainder after a match. Otherwise, it has to *wait* for the job to actually land on the slot, have its resources carved out, then the
remainder reported back to the collector, which then is *observed* by the Negotiator in a future cycle.
The Negotiator will still just match one slot after another. It’s just that “one after another” gets compressed into one negotiation cycle.
What this buys you is the option for depth-first filling (the negotiator can proceed filling the same slot instead of having to look at another) and faster turnaround (because more matches can be made per cycle).
The options don’t define “best”; they really just define whether this mechanism is used at all (CONSUMPTION_POLICY) and how large or small the carved-out slots are compared to the request.
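To make that concrete, here is a minimal EP-side sketch using the knob names from the HTCondor manual (treat the exact values as illustrative, not a recommended configuration):

NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = 100%
SLOT_TYPE_1_PARTITIONABLE = True
# Enable negotiator-side splitting for this slot type...
SLOT_TYPE_1_CONSUMPTION_POLICY = True
# ...and carve exactly what the job asks for out of the Cpus resource.
SLOT_TYPE_1_CONSUMPTION_CPUS = TARGET.RequestCpus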
Cheers,
Max
On 4. Feb 2026, at 09:18, Jeff Templon <templon@xxxxxxxxx> wrote:
Hi,
We’re looking into how to effectively deal with scheduling of GPU jobs here. In the manual, GPUs are mentioned under “consumption policies”. How to set one up is pretty well documented; what is not documented (at least, not in the section explaining how
to configure them) is what a consumption policy actually DOES. There are these two bits:
For partitionable slots, the specification of a consumption policy permits matchmaking at the negotiator. A dynamic slot carved from the partitionable slot acquires the required quantities of
resources, leaving the partitionable slot with the remainder. This differs from scheduler matchmaking in that multiple jobs can match with the partitionable slot during a single negotiation cycle.
and
and that the resource this policy cares about allocating are the cores.
Summary, matchmaking can happen at the negotiator, multiple jobs can match the slot during a single negotiation cycle, and you can tell it which resource to care about. I still don’t see what that “caring about” means in practice - presumably matching
multiple jobs in a single cycle means that you can pick the best match? And what defines “best” if I have only
CONSUMPTION_POLICY = True
defined? And is that different than if I have
SLOT_TYPE_1_CONSUMPTION_CPUS = TARGET.RequestCpus
defined?
Thanks in advance,
JT
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
The archives can be found at: https://www-auth.cs.wisc.edu/lists/htcondor-users/