Re: [Condor-users] Forcing job to run on submit host
- Date: Mon, 5 Nov 2007 12:31:11 -0600
- From: Erik Paulson <epaulson@xxxxxxxxxxx>
- Subject: Re: [Condor-users] Forcing job to run on submit host
On Mon, Nov 05, 2007 at 01:41:55PM +0100, Steffen Grunewald wrote:
> On Mon, Oct 22, 2007 at 09:41:58AM -0500, R. Kent Wenger wrote:
> > On Mon, 22 Oct 2007, Atle Rudshaug wrote:
> >
> > > My question is: how do I force the last job in the DAGMan workflow
> > > (which is the child of all the other jobs) to run on the submitter's
> > > machine, where all the output files are (the job being a bash script
> > > or a C++ executable)? Is there a better/easier way of doing this?
> >
> > Just make that job a local universe job; a sketch of such a submit
> > file follows the quoted thread below. (Note that you do this by
> > specifying the local universe in the job's submit file -- it's not
> > something you do in the DAG file, and it's independent of whether the
> > job runs within a DAG.)
>
> I've got quite the opposite problem: since the submit hosts run only a
> scaled-down set of the Condor daemons (in particular, the STARTD is
> missing), they refuse to run "local universe" jobs as well.
>
> Any suggestion on how to make submit machines available for the local
> universe only would be highly appreciated!
>
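Kent's suggestion above amounts to a one-line choice in the node's submit
description file. A minimal sketch, with hypothetical file names (the
universe setting is the only part that matters here):

    # Hypothetical submit file for the final DAG node; "local" universe
    # jobs run on the submit host, spawned by the schedd.
    universe   = local
    executable = collect_results.sh
    output     = collect_results.out
    error      = collect_results.err
    log        = collect_results.log
    queue

In the .dag file the node is declared as usual (e.g. JOB final
collect_results.sub); nothing DAG-specific changes.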
Jobs in the "scheduler" and "local" universe are spawned directly by
the condor_schedd. The main difference between the "scheduler" universe
and the "local" universe is that the "local" universe requires a
condor_starter to be present (note: startER, not startD).
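By that description, if installing a starter is not an option, the
scheduler universe is the nearest substitute, since the schedd spawns
those jobs without a condor_starter. In the sketch above that would be
the one-line change:

    # Also runs on the submit host, but needs no condor_starter.
    universe = scheduler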
You may have scaled down Condor too far on your submit hosts; make sure
the condor_starter binary is available there.
-Erik
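To make the advice above concrete, here is a minimal sketch of the
relevant condor_config entries for a submit-only host, assuming a stock
install with binaries under $(SBIN):

    # Hypothetical excerpt from condor_config on a submit-only host.
    # No STARTD in the daemon list; local universe jobs still work as
    # long as the condor_starter binary exists for the schedd to spawn.
    DAEMON_LIST   = MASTER, SCHEDD

    # Path the schedd uses for local universe starters (this is the
    # stock default, shown here only for clarity).
    STARTER_LOCAL = $(SBIN)/condor_starter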