Re: [HTCondor-users] two jobs, but only one scratch dir created
- Date: Fri, 05 Apr 2013 10:58:59 -0500
- From: Todd Tannenbaum <tannenba@xxxxxxxxxxx>
- Subject: Re: [HTCondor-users] two jobs, but only one scratch dir created
On 4/5/2013 3:24 AM, Karin Lagesen wrote:
> Hi!
>
> I am trying to get some jobs going under condor. The scripts are all the
> same, except input/output. When I condor_submit one script, everything
> works nicely. However, when I submit more, I get no results back.
Could you post the submit files you are using with condor_submit?
Perhaps you are telling HTCondor to write the output to the same file,
so when you run more than one job the files clobber each other. For
instance, a submit file like this would be bad:
output = foo
executable = myprogram.exe
queue
Instead, you want something like:
output = foo.$(Cluster).$(Process)
executable = myprogram.exe
queue
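A quick sketch of how those macros expand (the cluster number below is hypothetical): if condor_submit assigns cluster 123 to a submit file that queues two jobs,

```
output     = foo.$(Cluster).$(Process)
executable = myprogram.exe
queue 2
```

then the jobs write to foo.123.0 and foo.123.1 respectively, so each job gets its own output file and nothing is clobbered.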
> I now
> submitted two jobs at the same time, and I noticed that in my scratch
> directory, inside the execute directory, only one temp directory is
> created. This directory contains the files that it should for the first
> job.
How many machines are in your pool? Perhaps your two jobs are running
on two different execute machines instead of on two different slots on
the same machine?
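One quick way to check, assuming the jobs are still running, is condor_q with the -run option, which lists the execute machine each running job landed on (the cluster id 123 below is hypothetical):

```shell
# Show the RemoteHost (execute machine) for each of your running jobs;
# if the two jobs list different hosts, they ran on different machines.
condor_q -run

# Or pull the same attribute straight from the job ClassAds for one cluster:
condor_q 123 -format "%d." ClusterId -format "%d " ProcId -format "%s\n" RemoteHost
```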
On your execute node, there should be a one-to-one correspondence
between the number of slots that are in the Claimed/Busy state and the
number of sub-directories in the condor execute directory. Are you
saying you observe a single execute machine with two Claimed/Busy slots
and yet only one scratch directory (named execute/dir_<pid of the
condor_starter>)? That would be very, very strange...
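A minimal way to check that correspondence on the execute node itself (the EXECUTE path is read from the local configuration rather than hard-coded, since the default location varies by install):

```shell
# List the Claimed slots on this execute machine.
condor_status -claimed "$(hostname)"

# Count the per-job scratch sub-directories under the configured
# EXECUTE directory; this count should match the number of
# Claimed/Busy slots reported above.
ls -d "$(condor_config_val EXECUTE)"/dir_* 2>/dev/null | wc -l
```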
Hope the above helps
Todd