[Condor-users] Pool with some dynamic slots
- Date: Thu, 10 Nov 2011 16:33:07 +0100
- From: Steffen Grunewald <Steffen.Grunewald@xxxxxxxxxx>
- Subject: [Condor-users] Pool with some dynamic slots
Good afternoon,
I have run into an issue which even the latest manual (7.6.4) doesn't help
me resolve.
I'm running a pool consisting of a number of nodes which, for historical
reasons, have been "hard-partitioned" into static slots. Some of them offer
a single slot with 2500 MB of RAM, and some are split at a ~40/60 ratio.
Now I want to add another node, which comes with 8 GB of RAM installed, and
use dynamic provisioning (partitionable slots) on it.
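On that node I have set up a single partitionable slot as described in the
dynamic provisioning section of the manual - roughly like this (the exact
resource percentages are just what I picked, not something prescribed):
---
# local config on the new 8 GB node
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=100%, memory=100%, disk=100%
SLOT_TYPE_1_PARTITIONABLE = TRUE
---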
Here's my submit file:
---
universe = vanilla
initialdir = /home/testuser/test
notification = Never
on_exit_remove = (ExitBySignal == False) || ((ExitBySignal == True) && (ExitSignal != 11))
executable = /bin/sleep
arguments = 1800
Requirements = (Memory >= 2600)
request_cpus = 1
request_memory = 2650
queue 4
---
What confuses me is the semantics of the Requirements and request_memory
lines.
To keep the jobs from being scheduled onto the old-style slots, I have to
define a Requirements expression which rules them out.
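The expression I ended up with is the one from the submit file above, i.e.
a plain memory threshold just above the largest old-style slot. If matching
on a slot attribute would be cleaner, I'd be glad to know - something like
the commented-out variant below is what I had in mind, but I haven't tried it:
---
# rule out the old-style slots (they all advertise < 2600 MB)
Requirements = (Memory >= 2600)
# untested idea: match only the partitionable slot / its dynamic children
# Requirements = (PartitionableSlot =?= True) || (DynamicSlot =?= True)
---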
If I set request_memory = 500, the matchmaker assigns the proper slot, but
the startd hands the job back. The result is condor_q -better output claiming
"job not yet considered by the matchmaker" (which isn't true), and the job
never starts.
If I raise request_memory to 2650, the jobs will run (three of them in
parallel, as only ~500 MB are then left on the slot).
If I remove the Requirements line, the job will be matched against an
old-style slot which isn't large enough.
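For reference, this is roughly how I compare what the slots advertise
(plain condor_status formatting, attribute names from the machine ClassAd
list in the manual):
---
# list the Memory each slot currently advertises
condor_status -format "%-30s " Name -format "%d\n" Memory
---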
Where's my mistake?
Cheers,
Steffen
--
Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
Cluster Admin * --------------------------------- * http://www.aei.mpg.de/
* e-mail: steffen.grunewald(*)aei.mpg.de * +49-331-567-{fon:7274,fax:7298}