As far as I know the user queued each job separately, i.e. each job sits in its
own cluster (694488.0, 694489.0, ...) rather than being sub-jobs of a single cluster.
The command I used to put the jobs on hold was:
condor_hold <username> -constraint JobStatus==1 -reason '<bad user, blah blah>'
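In hindsight a quoted constraint scoped to the owner would probably have been
safer; something along these lines (the owner name and reason are placeholders,
not what I actually typed):
condor_hold -constraint 'Owner == "baduser" && JobStatus == 1' -reason 'bad user'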
I tried to stop the condor service since we were at risk of running out
of RAM & swap, but both condor_master and condor_schedd were hanging.
In the meantime, no command was able to query the scheduler.
After some deliberation we decided to kill the processes and start
the service from scratch; the same pattern (RAM & swap filling up) repeated.
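For the record, the kill-and-restart was roughly the following (illustrative;
the exact service commands depend on the OS):
service condor stop        # hung, so:
pkill -9 condor_schedd
pkill -9 condor_master
service condor start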
While I was chatting with the users and debating whether to delete the
queue (most jobs were from the bad user), I added 100 GB of swap, updated
condor to 8.4.5 and attempted another restart.
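The extra swap was just a swap file; roughly like this (file name and path are
illustrative):
fallocate -l 100G /var/swap_condor
chmod 600 /var/swap_condor
mkswap /var/swap_condor
swapon /var/swap_condor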
At some point during this the user in question must have deleted his held
jobs, as I then found the queue containing only his running jobs and other users' jobs.
All in all, I am pleasantly surprised that, whatever happened with the
held jobs, other users did not suffer.
Thanks for the reply and for reading :).
Cheers,
Luke
On 31 March 2016 at 20:02, Todd Tannenbaum <tannenba@xxxxxxxxxxx> wrote:
On 3/31/2016 7:55 AM, L Kreczko wrote:
Dear experts,
I am trying to understand the schedd behaviour I witnessed today.
After sending 10k (bad) jobs to hold status, the RAM usage of the
condor_schedd process exploded (see attached png).
The job_queue log is now 9.3GB and contains all ClassAds of the held
jobs (I assume this is what is causing the RAM usage).
This was not the case when the jobs were idle. Is this
behaviour expected?
Can I do something to prevent this from happening?
Cheers,
Luke
Hi Luke,
What HTCondor version / operating system are you using?
Including version information in any incident report is always a
good idea. :)
Also, did you submit these 10k jobs via 10,000 invocations of
condor_submit, or via one invocation with "queue 10000" ?
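For example, the single-invocation case would be a submit file along these
lines (purely illustrative, not your actual submit file):
executable = job.sh
arguments  = $(Process)
queue 10000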
Just to be sure we have the correct facts: you submitted the 10k
jobs, and memory usage of the schedd was fine (i.e. less than 5 gig
according to your graph). Then schedd memory usage exploded to
15GB+ as soon as you did the condor_hold, and most (all?) of the
jobs you put on hold were previously in the idle state.
Also, could you send the output of
condor_schedd -v
and
condor_config_val -dump QUEUE
As to whether there is something you can do to prevent this: once we have
clarification on the above, we can investigate more (i.e. reproduce
here) and hopefully give better advice. Until then I cannot say
precisely what is going on, so my naive advice in the meantime
would be to run the latest release in whatever series you are
using, and perhaps hold jobs a chunk at a time, e.g. 500 at a time
could be done like
condor_hold -cons 'ClusterId > 5000 && ClusterId <= 5500'
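For instance, a rough sketch of holding a backlog in chunks of 500 (the range
bounds here are made up; substitute your real cluster ids):
for start in $(seq 5000 500 9500); do
    condor_hold -constraint "ClusterId > $start && ClusterId <= $((start+500))"
done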
Certainly HTCondor should be able to handle putting 10k jobs on hold
in one go. As to what I think is going on: when you do condor_hold
(or whatever) on a large group of jobs all at once, either all the
jobs will go on hold or none of them will (i.e.
database-style transactional processing). The schedd will store the 10k
changes in a transaction log in RAM... I wouldn't expect this log to
take many gigs of RAM, however! But one improvement we've had in
mind for a while (mainly for speed) is, instead of writing 10k
transaction log entries, to write a single transaction log action
that effectively records the constraint you gave to condor_hold
(e.g. "all jobs")... A downside of implementing this is that it would
not be forwards compatible, i.e. after upgrading to a new schedd
with this feature you may not be able to downgrade anymore (because
the job_queue.log file may contain entries an old schedd would not
understand).
In the absolute worst case you could shut down HTCondor and remove
everything in the $(SPOOL) directory, effectively flushing all your
jobs to the bitbucket. Then before restarting you could set config
knob SCHEDD_CLUSTER_INITIAL_VALUE to a number higher than your
previous job id so that you don't repeat job id numbers, if you care
about that. Of course it shouldn't have to come down to this
extreme option, but I thought I'd mention it just in case everything
is on fire and restarting HTCondor doesn't help.
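Roughly, that would look like the following (illustrative; the value for
SCHEDD_CLUSTER_INITIAL_VALUE is a made-up example, pick something above your
highest existing cluster id):
service condor stop
rm -rf "$(condor_config_val SPOOL)"/*
# add to the local config before restarting:
#   SCHEDD_CLUSTER_INITIAL_VALUE = 700000
service condor start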
Thanks
Todd
--
*********************************************************
Dr Lukasz Kreczko
Research Associate
Department of Physics
Particle Physics Group
University of Bristol
HH Wills Physics Lab
Tyndall Avenue
Bristol
BS8 1TL
+44 (0)117 928 8724
L.Kreczko@xxxxxxxxxxxxx
A top 5 UK university with leading employers (2015)
A top 5 UK university for research (2014 REF)
A world top 40 university (QS Ranking 2015)
*********************************************************
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users
The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/