Thank you for the suggestion, Dimitri. But if I understand correctly what is happening there, a job that exceeds the limit will be put on hold and then rescheduled, even if it would be possible to simply increase request_memory and stay on the same machine. We cannot work with checkpoints here (at least not using HTCondor's standard universe), so this would mean that jobs would need to rerun from the very beginning.

If there were a possibility to update the requirements of a job while it is running, after checking whether the job can remain on the machine under these new requirements, that would be great for my use case.

Don't get me wrong: yours is a wonderful suggestion, and if this extra bit is not possible I will definitely test it!

Thanks again,
Thomas

On 2016-03-15 at 18:50, Dimitri Maziuk wrote:
On 03/15/2016 08:24 AM, Thomas Hartmann wrote:

2. Handle RAM allocations more dynamically. For instance:
2.1. If a job wants to use more RAM than previously requested, see whether the machine on which it runs still has this amount of RAM available.
2.2. If it does, update request_memory to a safe value and continue running the job.
2.3. If the extra RAM is not available, stop the job, update request_memory to a safe value and put it back into the queue.

Courtesy of Lauren Michael:

--
Dr. Thomas Hartmann
Centre for Cognitive Neuroscience
FB Psychologie
Universität Salzburg
Hellbrunnerstraße 34/II
5020 Salzburg

Tel: +43 662 8044 5109
Email: thomas.hartmann@xxxxxxxx

"I am a brain, Watson. The rest of me is a mere appendix." (Arthur Conan Doyle)
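
The snippet that "Courtesy of Lauren Michael:" introduces above did not survive in this archive. As a rough sketch of the hold-and-release pattern being discussed (not the original snippet; the 2048 MB floor, the 3/2 growth factor and the 120-second delay are illustrative assumptions), the submit-file expressions would look something like this:

    # request_memory is re-evaluated at every (re)match, so a released
    # job asks for more memory the next time it starts (values in MB).
    request_memory = ifThenElse(MemoryUsage =!= undefined, MemoryUsage * 3/2, 2048)

    # Put a running job on hold once its measured usage reaches its request.
    periodic_hold = (JobStatus == 2) && (MemoryUsage >= RequestMemory)

    # Release held jobs back into the queue after a short pause; the
    # request_memory expression above then yields the larger value.
    # (A real policy would also check HoldReasonCode so that it does not
    # release jobs held for unrelated reasons.)
    periodic_release = (JobStatus == 5) && ((CurrentTime - EnteredCurrentStatus) > 120)

Note that this implements step 2.3 of the quoted scheme but not 2.2: a held job is restarted from scratch on its next match, which is exactly the rerun cost the reply above is trying to avoid.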
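
As for updating a job's requirements in place: condor_qedit can rewrite attributes of a job that is already in the queue, e.g. (the job id 123.0 is hypothetical):

    # Raise the memory request of job 123.0 in the job ad. For a running
    # job this does not, as far as I know, resize the slot it currently
    # occupies; the new value only takes effect at the next match.
    condor_qedit 123.0 RequestMemory 4096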