Brian,
This is a good question and one we have started to
explore. There are definitely times when one job in a DAG
needs to do preparatory work for a following phase, needs to
verify that a node is sufficient for a following task,
or wants to stage work into place.
If you view many tasks as phases of work, you can extend
this list of cases easily.
It gets more complicated because just going back to the same
machine may not be sufficient. What if disk space is a critical
resource, and a job that runs in between takes this resource below
the level needed for the child node?
Ideally, we would have information in the schedd to allow extending
the claim on a node that meets our requirements, so that multiple
nodes could run once we have a workable match.
However, we are not quite there yet, and there are at least two
ways to get a partial solution. The simplest would be to have the parent
node report the machine name where it ran. The POST script
could then rewrite the requirements of the child nodes to
match only that machine.
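As a rough sketch of that first approach (every file name and path here is hypothetical, chosen just for illustration): the parent job writes the name of the machine it ran on to a small file, and the DAGMan POST script rewrites the child's submit description to pin it to that machine.

```shell
#!/bin/sh
# Hypothetical POST script for the parent DAG node.
# Assumes the parent job recorded its execute machine's name in
# parent_machine.txt, and that child.sub already contains a line
# beginning "Requirements =".

MACHINE=$(cat parent_machine.txt)

# Replace the child's Requirements so it matches only that machine.
# A .bak copy of the original submit file is kept.
sed -i.bak \
  "s/^Requirements *=.*/Requirements = (Machine == \"$MACHINE\")/" \
  child.sub
```

In the DAG file this would be attached with something like `SCRIPT POST Parent pin_child.sh`, so the rewrite happens after the parent finishes and before DAGMan submits the child.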
More complicated would be to have Hawkeye running in your
pool with a module your jobs can talk to; this can be
as simple as the jobs adding information to a file. The
Hawkeye module would then add match information for the child
nodes into the machine ClassAd where the parent ran. The child simply
adds this information to its job requirements.
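For the Hawkeye route, the child's submit file would reference whatever attribute your module publishes into the machine ClassAd. A minimal fragment (the attribute name ParentDataReady is purely hypothetical; your module defines the real name):

```
# Child submit file fragment. ParentDataReady is a made-up attribute
# that your Hawkeye module would publish into the machine ClassAd of
# the node where the parent ran.
Requirements = (ParentDataReady == TRUE)
```

Jobs only match machines whose ads advertise the attribute as TRUE, so the child naturally lands where the parent left its work.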
Hope this helps some.
Bill Taylor
Condor Project
> -----Original Message-----
> From: condor-users-bounces@xxxxxxxxxxx
> [mailto:condor-users-bounces@xxxxxxxxxxx] On Behalf Of Brian Gyss
> Sent: Wednesday, February 09, 2005 7:11 PM
> To: condor-users@xxxxxxxxxxx
> Subject: [Condor-users] quick DAG question...
>
> Hello all -
>
> Is there any way to set up a DAG so that the child job runs
> on the same machine as the parent?
>
> Thanks.
>
> - Brian
> _______________________________________________
> Condor-users mailing list
> Condor-users@xxxxxxxxxxx
> https://lists.cs.wisc.edu/mailman/listinfo/condor-users
>