The simplest solution is to adjust the 'rank' expression for your jobs.
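For example, something along these lines in the submit description file (only a sketch; Memory and Mips are standard machine ClassAd attributes, and the exact weighting is an assumption you would want to tune for your pool):

    # Higher Rank is preferred, so negating the machine's Memory attribute
    # steers this job towards machines that advertise less memory.
    rank = 0 - Memory

    # To also prefer faster machines, a speed attribute such as Mips could be
    # folded in; the relative weighting here is only a guess.
    # rank = Mips - Memory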
If you wanted to apply it across the pool, you could update your SUBMIT_EXPRS to include a default rank expression.
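On the submit host that could look roughly like this (again only a sketch, assuming the SUBMIT_EXPRS mechanism on your version inserts the attribute into every job ad submitted from that machine; the expression itself is the same assumption as above):

    # condor_config on the submit host
    SUBMIT_EXPRS = $(SUBMIT_EXPRS) Rank
    Rank = 0 - Memory

A job that sets its own rank in the submit file should still be able to override this, but check the manual for your version before relying on that.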
Cheers,
Tim
From: "Hermann Fuchs" <hermann.fuchs@xxxxxxxxxxxxxxxx>
To: htcondor-users@xxxxxxxxxxx
Sent: Wednesday, April 17, 2013 2:16:26 AM
Subject: [HTCondor-users] Steer jobs towards machines with less memory
Hello everybody
First a bit of background:
In our cluster we have some jobs which require a lot of memory (3 GB+) and many which require less (800 MB+).
We have a lot of machines with a limited amount of RAM (1-2 GB) and a few which have around 8 GB of RAM.
All machines use dynamic partitioning.
I am now observing that one or two small jobs get started on the big-RAM machines, effectively blocking the big jobs due to lack of memory, while at the same time some small-RAM machines are still unclaimed.
Is there a configuration setting to steer jobs towards machines with less free memory, e.g. so that machines with less free memory are filled up first?
Something like: Prefer faster machines with less free memory
How would I do this?
Best regards from Austria,
Hermann
--
-------------
DI Hermann Fuchs
Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology
Department of Radiation Oncology
Medical University Vienna
Währinger Gürtel 18-20
A-1090 Wien
Tel. + 43 / 1 / 40 400 7271
Mail. hermann.fuchs@xxxxxxxxxxxxxxxx