Hello again,
My setup for the remote SGE cluster is finally working!
However, Bosco (Condor) is still ignoring the SGE configuration in my
job file. Bosco submits a job to a remote SGE cluster using a Condor
job file. The executable in the Condor file points to a wrapper script
that contains the SGE configuration (number of processors, selected
queue, job name, current working directory), but Condor completely
ignores those settings when submitting the job.
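The wrapper looks roughly like this (a minimal sketch; the directive
values and the bwa command line are illustrative placeholders, not my
exact file):

    #!/bin/bash
    # SGE directives embedded in the wrapper (example values only)
    #$ -N bwa_job      # job name
    #$ -q main.q       # selected queue
    #$ -pe smp 4       # number of processors
    #$ -cwd            # run in the current working directory

    bwa mem reference.fa reads.fq > aln.sam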
Thank you.
Best regards,
Guillermo.
On 03/02/2013 02:22 PM, Guillermo Marco Puche wrote:
Hello,
That's exactly what I thought. I'll stick to Bosco then ;)
Thanks.
Best regards,
Guillermo.
On 01/03/2013 18:53, Jaime Frey wrote:
On Mar 1, 2013, at 5:00 AM, Guillermo
Marco Puche <guillermo.marco@xxxxxxxxxxxxxxxxxxxxx> wrote:
I've been trying Bosco lately and it seems
to work pretty well for submitting to an SGE cluster
on another LAN.
For example:
$ condor_q
-- Submitter: brugal : <192.168.6.2:11000?sock=3072_dcd9_3> : brugal
 ID      OWNER    SUBMITTED     RUN_TIME ST PRI SIZE CMD
62.0     gmarco   3/1  04:43  0+00:14:32 R  0   0.0  bwa.sh
I was then trying to achieve the same thing with my local Condor
installation instead of the Condor pool inside Bosco, but I'm having
no success when submitting exactly the same Condor job file.
As root I start Condor with "condor_master":
ps -ef | grep condor
condor   3850     1  0 05:05 ?        00:00:00 condor_master
condor   3851  3850  0 05:05 ?        00:00:00 condor_collector -f
condor   3853  3850  0 05:05 ?        00:00:00 condor_negotiator -f
condor   3854  3850  0 05:05 ?        00:00:00 condor_schedd -f
condor   3855  3850  0 05:05 ?        00:00:00 condor_startd -f
root     3856  3854  0 05:05 ?        00:00:00 condor_procd -A /var/run/condor/procd_pipe.SCHEDD -L /var/log/condor/ProcLog.SCHEDD -R 10000000 -S 60 -C 498
condor   3907  3855 87 05:05 ?        00:00:03 mips
root     3924  3758  0 05:05 pts/0    00:00:00 grep condor
When I try to submit my job, it stays in the Idle state forever;
with Bosco I don't have that problem:
condor_q
-- Submitter: brugal : <192.168.6.2:41257> : brugal
 ID      OWNER    SUBMITTED     RUN_TIME ST PRI SIZE CMD
26.0     gmarco   3/1  05:07  0+00:00:00 I  0   0.0  bwa.sh
That's my job file:
universe = grid
grid_resource = batch sge gmarco@cacique
executable = bwa.sh
output = bwa.out
error = bwa.err
log = bwa.log
should_transfer_files = YES
transfer_output = true
stream_output = true
when_to_transfer_output = ON_EXIT_OR_EVICT
queue
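For reference, I submit the file and check the job's status with the
standard HTCondor commands (the submit file name and job ID below are
just illustrative):

    condor_submit bwa.sub
    condor_q
    # inspect the idle job's ClassAd for more detail
    condor_q -long 26.0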
Submitting jobs to a remote cluster using a regular installation
of HTCondor requires some manual configuration steps, which we
don't have documented currently. This is one of the advantages
of Bosco. Over time, we may make this kind of job submission
easier to do with a regular HTCondor installation.
Thanks and regards,
Jaime Frey
UW-Madison HTCondor Project
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to
htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users
The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/