
[HTCondor-users] specify scratch directory on computation machine



Hi,
 First let me provide basic information on the condor system I am using. Our condor system is composed by a head node and 10 identical computation nodes. Head node is only used for submit job. All these machines has access to share file system. My condor and dagman jobs are running fine with share file system.

 Now we want to take advantage of the fast local disks (SSDs) on the compute nodes, but these disks are not on the shared file system. I read Example 6 on page 30 of the Condor manual (version 8.1.6), which seems close to what I want. My questions are:

(1) How do I specify the directory path in the Condor submit file? I want to use a directory on the compute node's local disk for the I/O-intensive computation. Example 6 shows how to specify an output file as an argument, but not a directory. My job will generate several hundred output files, so I would like to know whether I can specify a directory instead of listing individual file names.

(2) Will the files in this directory be deleted automatically, or do I have to clean them up manually?

 The executable and all input files are on the shared file system, and I want all final output to be put back on the shared file system when the job finishes. Note that all 10 compute nodes are identical.
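 For reference, here is a sketch of the kind of submit file I have in mind, based on the file transfer mechanism in the manual. The names my_program, input.dat, and results_dir are placeholders, and whether transfer_output_files can name a directory rather than individual files is exactly what I am asking in question (1):

```
# Sketch only -- my_program, input.dat, and results_dir are placeholders.
universe                = vanilla
executable              = my_program
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = input.dat
# Can this be a directory whose contents get copied back
# to the submit-side directory when the job exits?
transfer_output_files   = results_dir
output                  = job.out
error                   = job.err
log                     = job.log
queue
```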

Thanks

Jiande Wang