On 09/25/2012 08:44 AM, Tim St Clair wrote:
> can you please dump a condor_q -long on a stuck job into this email
> thread.

It's too late now. I will if it happens again, but I've already changed
the submit file to "request_memory = 1K".

> Typically >= 7.8 it is recommended to use the Request* and not alter
> the requirements.

What I actually want is a "requirements = off" switch. I *know* that
a) condor gets memory requirements wrong for these jobs, and b) all
nodes in my pool have enough resources to run these jobs.

As far as not-clearly-documented side effects that are subject to
change without notice go, "Memory > 0" is a much cleaner one than
"request_memory = some-number". What does "request_memory = 0" mean --
nothing gets to run, because every node has > 0 memory? What happens
when I lie to condor with "request_memory = 1K" and the job actually
asks for 2GB? And so on.

So it may be a much shinier knob for fine-tuning the requirements, but
unfortunately it's the exact opposite of what I need.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
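
P.S. For anyone less familiar with the two knobs being argued about, here is
a minimal submit-description sketch of both approaches. The executable name
is a placeholder, not from this thread, and the comments reflect my reading
of the 7.8-era behavior:

    # Hand-written requirements (the approach I'd rather keep):
    # "match any slot with some memory", i.e. effectively no memory constraint.
    universe     = vanilla
    executable   = myjob.sh          # placeholder
    requirements = (Memory > 0)
    queue

    # Request*-style submit (recommended for 7.8+):
    # condor_submit appends a (Memory >= RequestMemory) clause to the
    # job's Requirements based on this value (units default to MB).
    universe       = vanilla
    executable     = myjob.sh        # placeholder
    request_memory = 1024
    queue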