
Re: [HTCondor-users] Jobs on Windows and heterogeneous pool



Dear Romain,

You are probably running into permissions problems. Avoid absolute file paths unless you have set up credd: the job may not have permission to use them. Note that you cannot use shared resources unless you are running with "run_as_owner", or with dedicated run accounts that have access to all the resources (see Robert McMillan's post on this), and you cannot use mapped drive letters without remapping them on the execute machine (net use or pushd/popd).
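
For example, a wrapper batch file can remap a share before calling the real program; \\server\share, the S: drive letter, and myprog.exe below are placeholders for illustration, not anything from your setup:
---------------------------
rem Map the share for this session only; needs an account with rights to it.
net use S: \\server\share /persistent:no
rem Run the real executable against data on the remapped drive.
S:\tools\myprog.exe input.dat
rem Release the mapping when the job is done.
net use S: /delete
---------------------------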

Looking ahead to when you have solved your permissions problems: don't forget to make sure the job also has the environment variables it needs.
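
For instance, rather than relying on getenv, the variables can be set explicitly in the submit file (the names and values here are made up for illustration):
---------------------------
getenv = false
environment = "BLASTDB=C:\blastdb ANALYSIS_HOME=C:\tools"
---------------------------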


I have ended up with the following setup for running analyses on Windows:
* adding credd with a pool password
* using getenv and/or load_profile, OR run_as_owner
* calling my executables from batch scripts
* transferring files (sending the batch file that calls the executable and the input files, and returning the log files)
* alternatively, using pushd and popd to access shared resources (this is done in the batch file with a double-layer approach: push to the network resource and store the path, then push again to the execute folder; see the sketch below)
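
A minimal sketch of that double-layer batch pattern, assuming a hypothetical share \\server\share\data and a hypothetical executable analyse.exe:
---------------------------
rem First pushd maps a free drive letter to the share and moves there.
pushd \\server\share\data
rem Store the mapped path to the network data.
set DATA=%CD%
rem Second pushd returns to HTCondor's scratch directory for this job.
pushd %_CONDOR_SCRATCH_DIR%
rem Run the analysis, reading its input from the share.
analyse.exe %DATA%\input.dat
rem Unwind both levels; leaving the share releases the temporary mapping.
popd
popd
---------------------------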

This is my submit file:
---------------------------
universe = vanilla
executable = run_GSA.cmd
should_transfer_files = true
when_to_transfer_output = ON_EXIT
transfer_input_files = input_file.gwb
error = job-error.$(Cluster).$(Process).txt
log = job-log.$(Cluster).$(Process).txt
output = job-output.$(Cluster).$(Process).txt
getenv = False
run_as_owner = true
queue
--------------------------
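
For reference, run_GSA.cmd is just a thin wrapper around the real executable; a minimal sketch, with a made-up install path and command line (not the actual GSA interface):
---------------------------
rem run_GSA.cmd -- called by HTCondor in the job's scratch directory,
rem where the transferred input_file.gwb already sits.
"C:\Program Files\Analysis\analyse.exe" input_file.gwb
rem Pass the program's exit code back to HTCondor.
exit /b %ERRORLEVEL%
---------------------------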

Hope this helps,

Andrew

-----Original Message-----
From: htcondor-users-bounces@xxxxxxxxxxx [mailto:htcondor-users-bounces@xxxxxxxxxxx] On Behalf Of Romain
Sent: 07 May 2013 15:35
To: htcondor-users@xxxxxxxxxxx
Subject: [HTCondor-users] Jobs on Windows and heterogeneous pool

Hi everybody,

I am having some trouble executing jobs on Windows: jobs go on hold shortly after submission. Are there parameters that need to be set differently in condor_config on Windows than on Linux?

I would like to know the best solution(s) for running jobs simply and effectively on a heterogeneous pool composed of Windows (7) and Linux (Ubuntu 12.04/Bio-Linux 7) machines.

So far in my tests I have had problems with:
- sending jobs to Windows from a Linux machine
- running jobs on Windows (they go on hold shortly after submission)

My goal is to submit jobs as simply as possible (in a single submission) that can run on Linux or Windows indifferently.

Would the solution be to use a Linux "simulator" on Windows? (I know nothing about this.)

For software that exists on both Windows AND Linux, is there a method?
For example, Blast on Windows creates the same directory layout as on Linux, so...

What kind of submit file should I write?
-----
This is the one I use (for Windows):
universe     = vanilla
executable   = C:\Program Files\NCBI\blast-2.2.28+\bin\blastn.exe
arguments    = -query Z:\xxxx\xxxx\xxxx.fna -db Z:\xxxx\xxxx\xxxx -out Z:\xxxx\xxxx\folder_out\xxxx$(Cluster)_$(Process).out
log          = Z:\xxxx\xxxx\folder_out\sub.log.$(Cluster).$(Process)
output       = Z:\xxxx\xxxx\folder_out\sub.out.$(Cluster).$(Process)
error        = Z:\xxxx\xxxx\folder_out\sub.err.$(Cluster).$(Process)
requirements = ( OpSys == "WINDOWS" )
getenv       = true
queue
-----

Thank you in advance
Have a nice day

-* Romain *-
