Re: [HTCondor-users] DAG error: "BAD EVENT: job (...) executing, total end count != 0 (1)"
- Date: Tue, 12 Feb 2019 22:08:36 +0100
- From: Nicolas Arnaud <narnaud@xxxxxxxxxxxx>
- Subject: Re: [HTCondor-users] DAG error: "BAD EVENT: job (...) executing, total end count != 0 (1)"
Hi Mark,
I've been looking into this.
Thanks!
(...)
Are you running on Windows or Linux? It seems that all previous
occurrences of this problem happened on Windows.
I'm running on Linux. Some information:
condor_version
$CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $
$CondorPlatform: x86_64_RedHat7 $
echo $UNAME
Linux-x86_64-CL7
These bugs were never resolved, although it seems like Kent spent some
time on them and determined the problem was most likely in the
log-reading code (so at the user level, not the farm). However, it's hard
to tell without seeing what events are actually showing up in the log.
I'd like to try and reproduce this locally -- could you send your a)
.nodes.log file, b) .dagman.out file, c) full .dag file? These should
help me figure out where the bug is happening.
Please find attached two sets of these three files:
* those tagged "20190207_narnaud_2" correspond to a "BAD EVENT" case
followed by a dag abort (DAGMAN_ALLOW_EVENTS = 114, the default value)
* those tagged "20190212_narnaud_7" correspond to a "BAD EVENT" case,
mitigated by DAGMAN_ALLOW_EVENTS = 5: the dag goes on until completion.
As the dag file relies on independent sub files, I am also sending you
the template sub file we're using to generate all the individual task
sub files.
For a short-term workaround, you could try adjusting the value of
DAGMAN_ALLOW_EVENTS to 5 like you suggested. It's true this could affect
the semantics, but I think the worst case is that DAGMan could get stuck
in a logical loop. If you're able to keep an eye on its progress and
manually abort if necessary, I think this should work.
See above: indeed setting DAGMAN_ALLOW_EVENTS = 5 allows the dag to go on.
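For reference, here is a minimal sketch of where this knob can live in our setup; the dag already points to a per-DAG configuration file via a CONFIG line, so the override goes there (the file name below is illustrative, and the same macro could equally be set in the submit host's regular HTCondor configuration):

   # in the .dag file: point DAGMan at a per-DAG configuration file
   CONFIG dag.config

   # in dag.config: the value used for the "20190212_narnaud_7" run
   DAGMAN_ALLOW_EVENTS = 5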
The point is that, since I noticed this issue, I have always been running
the "same" dag: the only thing that changes is its tag -- which basically
drives the output directory and appears in many filenames. In about 40% of
the cases I get a "BAD EVENT" error, but each time it affects a different
task and therefore occurs at a different point of the dag processing, as
the tasks have very different durations. In the remaining ~60% of the
cases, the dag completes fine without any "BAD EVENT".
Let me know if you need more information or if anything is unclear.
Cheers,
Nicolas
Mark
On Tue, Feb 12, 2019 at 2:42 AM Nicolas Arnaud <narnaud@xxxxxxxxxxxx> wrote:
Hello,
I'm using a Condor farm to run dags containing a dozen independent tasks,
each task made of a few processes that run sequentially following the
parent/child logic. Lately I have encountered errors like the one below:
> (...)
> 02/08/19 00:30:10 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:06}
> 02/08/19 00:30:10 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:06}
> 02/08/19 00:30:10 Number of idle job procs: 0
> 02/08/19 00:30:10 Node test_20190208_narnaud_virgo_status job proc (281605.0.0) completed successfully.
> 02/08/19 00:30:10 Node test_20190208_narnaud_virgo_status job completed
> 02/08/19 00:30:10 Event: ULOG_EXECUTE for HTCondor Node test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:07}
> 02/08/19 00:30:10 BAD EVENT: job (281605.0.0) executing, total end count != 0 (1)
> 02/08/19 00:30:10 ERROR: aborting DAG because of bad event (BAD EVENT: job (281605.0.0) executing, total end count != 0 (1))
> (...)
> 02/08/19 00:30:10 ProcessLogEvents() returned false
> 02/08/19 00:30:10 Aborting DAG...
> (...)
Condor correctly assesses one job as having completed successfully, but it
seems to start executing it again immediately. Then there is a "BAD EVENT"
error and the DAG aborts, killing all the jobs that were running.
So far this problem seems to occur randomly: some dags complete fine, and
when the problem does occur, the job that suffers from it is different
each time, as are the machine and the slot on which that particular job
is running.
In the above example, the dag snippet is fairly simple
> (...)
> JOB test_20190208_narnaud_virgo_status virgo_status.sub
> VARS test_20190208_narnaud_virgo_status initialdir="/data/procdata/web/dqr/test_20190208_narnaud/dag"
> RETRY test_20190208_narnaud_virgo_status 1
> (...)
and the sub file reads
> universe = vanilla
> executable = /users/narnaud/Software/RRT/Virgo/VirgoDQR/trunk/scripts/virgo_status.py
> arguments = "--event_gps 1233176418.54321 --event_id test_20190208_narnaud --data_stream /virgoData/ffl/raw.ffl --output_dir /data/procdata/web/dqr/test_20190208_narnaud --n_seconds_backward 10 --n_seconds_forward 10"
> priority = 10
> getenv = True
> error = /data/procdata/web/dqr/test_20190208_narnaud/virgo_status/logs/$(cluster)-$(process)-$$(Name).err
> output = /data/procdata/web/dqr/test_20190208_narnaud/virgo_status/logs/$(cluster)-$(process)-$$(Name).out
> notification = never
> +Experiment = "DetChar"
> +AccountingGroup= "virgo.prod.o3.detchar.transient.dqr"
> queue 1
=> Would you know what could cause this error? And whether this is at my level (user) or at the level of the farm?
=> And, until the problem is fixed, would there be a way to convince the dag to continue instead of aborting? Possibly by modifying the default value of the macro
> DAGMAN_ALLOW_EVENTS = 114
? But changing this value to 5 [!?] is said to "break the semantics of the DAG" => I'm not sure this is the right way to proceed.
Thanks in advance for your help,
Nicolas
--
Mark Coatsworth
Systems Programmer
Center for High Throughput Computing
Department of Computer Sciences
University of Wisconsin-Madison
+1 608 206 4703
CONFIG /virgoData/VirgoDQR/Parameters/dag.config
JOB test_20190212_narnaud_7_gps_numerology gps_numerology.sub
VARS test_20190212_narnaud_7_gps_numerology initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_gps_numerology 1
JOB test_20190212_narnaud_7_virgo_noise virgo_noise.sub
VARS test_20190212_narnaud_7_virgo_noise initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_virgo_noise 0
JOB test_20190212_narnaud_7_virgo_noise_json virgo_noise_json.sub
VARS test_20190212_narnaud_7_virgo_noise_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_virgo_noise_json 0
PARENT test_20190212_narnaud_7_virgo_noise CHILD test_20190212_narnaud_7_virgo_noise_json
JOB test_20190212_narnaud_7_virgo_status virgo_status.sub
VARS test_20190212_narnaud_7_virgo_status initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_virgo_status 1
JOB test_20190212_narnaud_7_dqprint_brmsmon dqprint_brmsmon.sub
VARS test_20190212_narnaud_7_dqprint_brmsmon initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_dqprint_brmsmon 0
JOB test_20190212_narnaud_7_dqprint_brmsmon_json dqprint_brmsmon_json.sub
VARS test_20190212_narnaud_7_dqprint_brmsmon_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_dqprint_brmsmon_json 0
PARENT test_20190212_narnaud_7_dqprint_brmsmon CHILD test_20190212_narnaud_7_dqprint_brmsmon_json
JOB test_20190212_narnaud_7_dqprint_dqflags dqprint_dqflags.sub
VARS test_20190212_narnaud_7_dqprint_dqflags initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_dqprint_dqflags 0
JOB test_20190212_narnaud_7_dqprint_dqflags_json dqprint_dqflags_json.sub
VARS test_20190212_narnaud_7_dqprint_dqflags_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_dqprint_dqflags_json 0
PARENT test_20190212_narnaud_7_dqprint_dqflags CHILD test_20190212_narnaud_7_dqprint_dqflags_json
JOB test_20190212_narnaud_7_omicronscanhoftV1 omicronscanhoftV1.sub
VARS test_20190212_narnaud_7_omicronscanhoftV1 initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftV1 0
JOB test_20190212_narnaud_7_omicronscanhoftV1_json omicronscanhoftV1_json.sub
VARS test_20190212_narnaud_7_omicronscanhoftV1_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftV1_json 0
PARENT test_20190212_narnaud_7_omicronscanhoftV1 CHILD test_20190212_narnaud_7_omicronscanhoftV1_json
JOB test_20190212_narnaud_7_omicronscanhoftH1 omicronscanhoftH1.sub
VARS test_20190212_narnaud_7_omicronscanhoftH1 initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftH1 0
JOB test_20190212_narnaud_7_omicronscanhoftH1_json omicronscanhoftH1_json.sub
VARS test_20190212_narnaud_7_omicronscanhoftH1_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftH1_json 0
PARENT test_20190212_narnaud_7_omicronscanhoftH1 CHILD test_20190212_narnaud_7_omicronscanhoftH1_json
JOB test_20190212_narnaud_7_omicronscanhoftL1 omicronscanhoftL1.sub
VARS test_20190212_narnaud_7_omicronscanhoftL1 initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftL1 0
JOB test_20190212_narnaud_7_omicronscanhoftL1_json omicronscanhoftL1_json.sub
VARS test_20190212_narnaud_7_omicronscanhoftL1_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanhoftL1_json 0
PARENT test_20190212_narnaud_7_omicronscanhoftL1 CHILD test_20190212_narnaud_7_omicronscanhoftL1_json
JOB test_20190212_narnaud_7_omicronscanfull2048 omicronscanfull2048.sub
VARS test_20190212_narnaud_7_omicronscanfull2048 initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanfull2048 0
JOB test_20190212_narnaud_7_omicronscanfull2048_json omicronscanfull2048_json.sub
VARS test_20190212_narnaud_7_omicronscanfull2048_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanfull2048_json 0
PARENT test_20190212_narnaud_7_omicronscanfull2048 CHILD test_20190212_narnaud_7_omicronscanfull2048_json
JOB test_20190212_narnaud_7_omicronscanfull512 omicronscanfull512.sub
VARS test_20190212_narnaud_7_omicronscanfull512 initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanfull512 0
JOB test_20190212_narnaud_7_omicronscanfull512_json omicronscanfull512_json.sub
VARS test_20190212_narnaud_7_omicronscanfull512_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronscanfull512_json 0
PARENT test_20190212_narnaud_7_omicronscanfull512 CHILD test_20190212_narnaud_7_omicronscanfull512_json
JOB test_20190212_narnaud_7_omicronplot omicronplot.sub
VARS test_20190212_narnaud_7_omicronplot initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronplot 0
JOB test_20190212_narnaud_7_omicronplot_exe omicronplot_exe.sub
VARS test_20190212_narnaud_7_omicronplot_exe initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronplot_exe 0
PARENT test_20190212_narnaud_7_omicronplot CHILD test_20190212_narnaud_7_omicronplot_exe
JOB test_20190212_narnaud_7_omicronplot_json omicronplot_json.sub
VARS test_20190212_narnaud_7_omicronplot_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_omicronplot_json 0
PARENT test_20190212_narnaud_7_omicronplot_exe CHILD test_20190212_narnaud_7_omicronplot_json
JOB test_20190212_narnaud_7_query_ingv_public_data query_ingv_public_data.sub
VARS test_20190212_narnaud_7_query_ingv_public_data initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_query_ingv_public_data 1
JOB test_20190212_narnaud_7_scan_logfiles scan_logfiles.sub
VARS test_20190212_narnaud_7_scan_logfiles initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_scan_logfiles 1
JOB test_20190212_narnaud_7_decode_DMS_snapshots decode_DMS_snapshots.sub
VARS test_20190212_narnaud_7_decode_DMS_snapshots initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_decode_DMS_snapshots 1
JOB test_20190212_narnaud_7_upv upv.sub
VARS test_20190212_narnaud_7_upv initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_upv 1
JOB test_20190212_narnaud_7_upv_exe upv_exe.sub
VARS test_20190212_narnaud_7_upv_exe initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_upv_exe 1
PARENT test_20190212_narnaud_7_upv CHILD test_20190212_narnaud_7_upv_exe
JOB test_20190212_narnaud_7_upv_json upv_json.sub
VARS test_20190212_narnaud_7_upv_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_upv_json 1
PARENT test_20190212_narnaud_7_upv_exe CHILD test_20190212_narnaud_7_upv_json
JOB test_20190212_narnaud_7_bruco bruco.sub
VARS test_20190212_narnaud_7_bruco initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco 1
JOB test_20190212_narnaud_7_bruco_std bruco_std.sub
VARS test_20190212_narnaud_7_bruco_std initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco_std 1
PARENT test_20190212_narnaud_7_bruco CHILD test_20190212_narnaud_7_bruco_std
JOB test_20190212_narnaud_7_bruco_std-prev bruco_std-prev.sub
VARS test_20190212_narnaud_7_bruco_std-prev initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco_std-prev 1
PARENT test_20190212_narnaud_7_bruco CHILD test_20190212_narnaud_7_bruco_std-prev
JOB test_20190212_narnaud_7_bruco_env bruco_env.sub
VARS test_20190212_narnaud_7_bruco_env initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco_env 1
PARENT test_20190212_narnaud_7_bruco CHILD test_20190212_narnaud_7_bruco_env
JOB test_20190212_narnaud_7_bruco_env-prev bruco_env-prev.sub
VARS test_20190212_narnaud_7_bruco_env-prev initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco_env-prev 1
PARENT test_20190212_narnaud_7_bruco CHILD test_20190212_narnaud_7_bruco_env-prev
JOB test_20190212_narnaud_7_bruco_json bruco_json.sub
VARS test_20190212_narnaud_7_bruco_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_bruco_json 1
PARENT test_20190212_narnaud_7_bruco_std CHILD test_20190212_narnaud_7_bruco_json
PARENT test_20190212_narnaud_7_bruco_std-prev CHILD test_20190212_narnaud_7_bruco_json
PARENT test_20190212_narnaud_7_bruco_env CHILD test_20190212_narnaud_7_bruco_json
PARENT test_20190212_narnaud_7_bruco_env-prev CHILD test_20190212_narnaud_7_bruco_json
JOB test_20190212_narnaud_7_data_ref_comparison_INJ data_ref_comparison_INJ.sub
VARS test_20190212_narnaud_7_data_ref_comparison_INJ initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_data_ref_comparison_INJ 1
JOB test_20190212_narnaud_7_data_ref_comparison_INJ_comparison data_ref_comparison_INJ_comparison.sub
VARS test_20190212_narnaud_7_data_ref_comparison_INJ_comparison initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_data_ref_comparison_INJ_comparison 1
PARENT test_20190212_narnaud_7_data_ref_comparison_INJ CHILD test_20190212_narnaud_7_data_ref_comparison_INJ_comparison
JOB test_20190212_narnaud_7_data_ref_comparison_ISC data_ref_comparison_ISC.sub
VARS test_20190212_narnaud_7_data_ref_comparison_ISC initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_data_ref_comparison_ISC 1
JOB test_20190212_narnaud_7_data_ref_comparison_ISC_comparison data_ref_comparison_ISC_comparison.sub
VARS test_20190212_narnaud_7_data_ref_comparison_ISC_comparison initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_data_ref_comparison_ISC_comparison 1
PARENT test_20190212_narnaud_7_data_ref_comparison_ISC CHILD test_20190212_narnaud_7_data_ref_comparison_ISC_comparison
JOB test_20190212_narnaud_7_generate_dqr_json generate_dqr_json.sub
VARS test_20190212_narnaud_7_generate_dqr_json initialdir="/data/procdata/web/dqr/test_20190212_narnaud_7/dag"
RETRY test_20190212_narnaud_7_generate_dqr_json 0
000 (283564.000.000) 02/12 17:33:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_gps_numerology
...
000 (283565.000.000) 02/12 17:33:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_virgo_noise
...
000 (283566.000.000) 02/12 17:33:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_virgo_status
...
000 (283567.000.000) 02/12 17:33:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_dqprint_brmsmon
...
000 (283568.000.000) 02/12 17:33:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_dqprint_dqflags
...
000 (283569.000.000) 02/12 17:33:18 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftV1
...
000 (283570.000.000) 02/12 17:33:18 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftH1
...
000 (283571.000.000) 02/12 17:33:18 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftL1
...
000 (283572.000.000) 02/12 17:33:18 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanfull2048
...
000 (283573.000.000) 02/12 17:33:18 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanfull512
...
000 (283574.000.000) 02/12 17:33:23 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronplot
...
000 (283575.000.000) 02/12 17:33:23 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_query_ingv_public_data
...
000 (283576.000.000) 02/12 17:33:23 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_scan_logfiles
...
000 (283577.000.000) 02/12 17:33:23 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_decode_DMS_snapshots
...
000 (283578.000.000) 02/12 17:33:23 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_upv
...
000 (283579.000.000) 02/12 17:33:28 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco
...
000 (283580.000.000) 02/12 17:33:28 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_data_ref_comparison_INJ
...
000 (283581.000.000) 02/12 17:33:28 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_data_ref_comparison_ISC
...
000 (283582.000.000) 02/12 17:33:28 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_generate_dqr_json
...
001 (283565.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.100:9618?addrs=90.147.139.100-9618+[--1]-9618&noUDP&sock=3253_ddab_3>
...
001 (283567.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.88:9618?addrs=90.147.139.88-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
001 (283566.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.97:9618?addrs=90.147.139.97-9618+[--1]-9618&noUDP&sock=3357_3385_3>
...
001 (283564.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.75:9618?addrs=90.147.139.75-9618+[--1]-9618&noUDP&sock=3224_f96c_3>
...
001 (283568.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.82:9618?addrs=90.147.139.82-9618+[--1]-9618&noUDP&sock=3373_2d09_3>
...
001 (283576.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.102:9618?addrs=90.147.139.102-9618+[--1]-9618&noUDP&sock=3369_6ea8_3>
...
001 (283572.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.99:9618?addrs=90.147.139.99-9618+[--1]-9618&noUDP&sock=3344_48ca_3>
...
001 (283570.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.87:9618?addrs=90.147.139.87-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
001 (283578.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.104:9618?addrs=90.147.139.104-9618+[--1]-9618&noUDP&sock=3354_24bc_3>
...
001 (283577.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.84:9618?addrs=90.147.139.84-9618+[--1]-9618&noUDP&sock=3351_15f3_3>
...
001 (283571.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.79:9618?addrs=90.147.139.79-9618+[--1]-9618&noUDP&sock=3366_5fdf_3>
...
001 (283579.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.96:9618?addrs=90.147.139.96-9618+[--1]-9618&noUDP&sock=3361_f1e6_3>
...
001 (283580.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.92:9618?addrs=90.147.139.92-9618+[--1]-9618&noUDP&sock=3353_7524_3>
...
001 (283569.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.89:9618?addrs=90.147.139.89-9618+[--1]-9618&noUDP&sock=3339_dad0_3>
...
001 (283575.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.78:9618?addrs=90.147.139.78-9618+[--1]-9618&noUDP&sock=3368_bf10_3>
...
001 (283573.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.94:9618?addrs=90.147.139.94-9618+[--1]-9618&noUDP&sock=3349_b6c3_3>
...
001 (283574.000.000) 02/12 17:33:29 Job executing on host: <90.147.139.91:9618?addrs=90.147.139.91-9618+[--1]-9618&noUDP&sock=3354_24bc_3>
...
006 (283579.000.000) 02/12 17:33:30 Image size of job updated: 35
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283579.000.000) 02/12 17:33:30 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90545
Memory (MB) : 0 1 1
...
001 (283582.000.000) 02/12 17:33:31 Job executing on host: <90.147.139.96:9618?addrs=90.147.139.96-9618+[--1]-9618&noUDP&sock=3361_f1e6_3>
...
006 (283565.000.000) 02/12 17:33:37 Image size of job updated: 71132
61 - MemoryUsage of job (MB)
61684 - ResidentSetSize of job (KB)
...
006 (283566.000.000) 02/12 17:33:37 Image size of job updated: 68488
67 - MemoryUsage of job (MB)
68484 - ResidentSetSize of job (KB)
...
006 (283567.000.000) 02/12 17:33:37 Image size of job updated: 66984
66 - MemoryUsage of job (MB)
66844 - ResidentSetSize of job (KB)
...
006 (283564.000.000) 02/12 17:33:37 Image size of job updated: 69356
68 - MemoryUsage of job (MB)
69352 - ResidentSetSize of job (KB)
...
006 (283568.000.000) 02/12 17:33:37 Image size of job updated: 26292
7 - MemoryUsage of job (MB)
6832 - ResidentSetSize of job (KB)
...
006 (283574.000.000) 02/12 17:33:38 Image size of job updated: 104
1 - MemoryUsage of job (MB)
104 - ResidentSetSize of job (KB)
...
006 (283573.000.000) 02/12 17:33:38 Image size of job updated: 41748
20 - MemoryUsage of job (MB)
20316 - ResidentSetSize of job (KB)
...
005 (283565.000.000) 02/12 17:33:38 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:03, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:03, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 1 1 90552
Memory (MB) : 61 1 1
...
006 (283578.000.000) 02/12 17:33:38 Image size of job updated: 104
1 - MemoryUsage of job (MB)
104 - ResidentSetSize of job (KB)
...
006 (283576.000.000) 02/12 17:33:38 Image size of job updated: 69812
69 - MemoryUsage of job (MB)
69808 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:33:38 Image size of job updated: 62064
24 - MemoryUsage of job (MB)
24392 - ResidentSetSize of job (KB)
...
006 (283570.000.000) 02/12 17:33:38 Image size of job updated: 61544
24 - MemoryUsage of job (MB)
23920 - ResidentSetSize of job (KB)
...
006 (283577.000.000) 02/12 17:33:38 Image size of job updated: 70272
69 - MemoryUsage of job (MB)
70268 - ResidentSetSize of job (KB)
...
006 (283580.000.000) 02/12 17:33:38 Image size of job updated: 23588
18 - MemoryUsage of job (MB)
18080 - ResidentSetSize of job (KB)
...
006 (283571.000.000) 02/12 17:33:38 Image size of job updated: 62240
24 - MemoryUsage of job (MB)
24500 - ResidentSetSize of job (KB)
...
006 (283575.000.000) 02/12 17:33:38 Image size of job updated: 24840
19 - MemoryUsage of job (MB)
18488 - ResidentSetSize of job (KB)
...
000 (283583.000.000) 02/12 17:33:38 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco_std
...
000 (283584.000.000) 02/12 17:33:38 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco_std-prev
...
006 (283569.000.000) 02/12 17:33:38 Image size of job updated: 68360
26 - MemoryUsage of job (MB)
26432 - ResidentSetSize of job (KB)
...
000 (283585.000.000) 02/12 17:33:38 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco_env
...
000 (283586.000.000) 02/12 17:33:38 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco_env-prev
...
006 (283582.000.000) 02/12 17:33:39 Image size of job updated: 25212
19 - MemoryUsage of job (MB)
18844 - ResidentSetSize of job (KB)
...
001 (283581.000.000) 02/12 17:33:39 Job executing on host: <90.147.139.100:9618?addrs=90.147.139.100-9618+[--1]-9618&noUDP&sock=3253_ddab_3>
...
006 (283564.000.000) 02/12 17:33:39 Image size of job updated: 77588
68 - MemoryUsage of job (MB)
69352 - ResidentSetSize of job (KB)
...
005 (283564.000.000) 02/12 17:33:39 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 15 15 90530
Memory (MB) : 68 1 1
...
001 (283583.000.000) 02/12 17:33:39 Job executing on host: <90.147.139.75:9618?addrs=90.147.139.75-9618+[--1]-9618&noUDP&sock=3224_f96c_3>
...
006 (283577.000.000) 02/12 17:33:41 Image size of job updated: 82744
69 - MemoryUsage of job (MB)
70368 - ResidentSetSize of job (KB)
...
005 (283577.000.000) 02/12 17:33:41 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 17 17 90542
Memory (MB) : 69 1 1
...
001 (283584.000.000) 02/12 17:33:42 Job executing on host: <90.147.139.84:9618?addrs=90.147.139.84-9618+[--1]-9618&noUDP&sock=3351_15f3_3>
...
006 (283581.000.000) 02/12 17:33:43 Image size of job updated: 7
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283581.000.000) 02/12 17:33:43 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90552
Memory (MB) : 0 1 1
...
006 (283580.000.000) 02/12 17:33:43 Image size of job updated: 81596
18 - MemoryUsage of job (MB)
18080 - ResidentSetSize of job (KB)
...
005 (283580.000.000) 02/12 17:33:43 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90543
Memory (MB) : 18 1 1
...
000 (283587.000.000) 02/12 17:33:44 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_virgo_noise_json
...
001 (283585.000.000) 02/12 17:33:44 Job executing on host: <90.147.139.100:9618?addrs=90.147.139.100-9618+[--1]-9618&noUDP&sock=3253_ddab_3>
...
001 (283586.000.000) 02/12 17:33:45 Job executing on host: <90.147.139.92:9618?addrs=90.147.139.92-9618+[--1]-9618&noUDP&sock=3353_7524_3>
...
006 (283583.000.000) 02/12 17:33:48 Image size of job updated: 1174152
1147 - MemoryUsage of job (MB)
1174052 - ResidentSetSize of job (KB)
...
000 (283588.000.000) 02/12 17:33:49 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_data_ref_comparison_ISC_comparison
...
000 (283589.000.000) 02/12 17:33:49 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_data_ref_comparison_INJ_comparison
...
006 (283584.000.000) 02/12 17:33:50 Image size of job updated: 1272764
1243 - MemoryUsage of job (MB)
1272756 - ResidentSetSize of job (KB)
...
006 (283567.000.000) 02/12 17:33:51 Image size of job updated: 102488
66 - MemoryUsage of job (MB)
66900 - ResidentSetSize of job (KB)
...
005 (283567.000.000) 02/12 17:33:51 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:17, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:17, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 75 75 90538
Memory (MB) : 66 1 1
...
001 (283587.000.000) 02/12 17:33:52 Job executing on host: <90.147.139.88:9618?addrs=90.147.139.88-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
006 (283575.000.000) 02/12 17:33:52 Image size of job updated: 92884
19 - MemoryUsage of job (MB)
18488 - ResidentSetSize of job (KB)
...
005 (283575.000.000) 02/12 17:33:52 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90541
Memory (MB) : 19 1 1
...
006 (283587.000.000) 02/12 17:33:52 Image size of job updated: 2
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283587.000.000) 02/12 17:33:52 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 2 2 90538
Memory (MB) : 0 1 1
...
006 (283582.000.000) 02/12 17:33:53 Image size of job updated: 94132
19 - MemoryUsage of job (MB)
18844 - ResidentSetSize of job (KB)
...
005 (283582.000.000) 02/12 17:33:53 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 10 10 90545
Memory (MB) : 19 1 1
...
006 (283586.000.000) 02/12 17:33:53 Image size of job updated: 606504
588 - MemoryUsage of job (MB)
601592 - ResidentSetSize of job (KB)
...
001 (283588.000.000) 02/12 17:33:53 Job executing on host: <90.147.139.78:9618?addrs=90.147.139.78-9618+[--1]-9618&noUDP&sock=3368_bf10_3>
...
001 (283589.000.000) 02/12 17:33:53 Job executing on host: <90.147.139.88:9618?addrs=90.147.139.88-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
006 (283585.000.000) 02/12 17:33:53 Image size of job updated: 1173724
1147 - MemoryUsage of job (MB)
1173720 - ResidentSetSize of job (KB)
...
006 (283566.000.000) 02/12 17:33:53 Image size of job updated: 3817688
67 - MemoryUsage of job (MB)
68484 - ResidentSetSize of job (KB)
...
005 (283566.000.000) 02/12 17:33:53 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:09, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:09, Sys 0 00:00:03 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 30 30 90545
Memory (MB) : 67 1 1
...
006 (283568.000.000) 02/12 17:33:56 Image size of job updated: 147980
7 - MemoryUsage of job (MB)
6832 - ResidentSetSize of job (KB)
...
005 (283568.000.000) 02/12 17:33:56 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:17, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:17, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 75 75 90534
Memory (MB) : 7 1 1
...
000 (283590.000.000) 02/12 17:33:59 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_dqprint_brmsmon_json
...
001 (283590.000.000) 02/12 17:33:59 Job executing on host: <90.147.139.96:9618?addrs=90.147.139.96-9618+[--1]-9618&noUDP&sock=3361_f1e6_3>
...
006 (283590.000.000) 02/12 17:33:59 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283590.000.000) 02/12 17:33:59 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90545
Memory (MB) : 0 1 1
...
006 (283588.000.000) 02/12 17:34:02 Image size of job updated: 199648
190 - MemoryUsage of job (MB)
193776 - ResidentSetSize of job (KB)
...
006 (283589.000.000) 02/12 17:34:02 Image size of job updated: 141240
138 - MemoryUsage of job (MB)
141200 - ResidentSetSize of job (KB)
...
000 (283591.000.000) 02/12 17:34:04 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_dqprint_dqflags_json
...
001 (283591.000.000) 02/12 17:34:04 Job executing on host: <90.147.139.82:9618?addrs=90.147.139.82-9618+[--1]-9618&noUDP&sock=3373_2d09_3>
...
006 (283591.000.000) 02/12 17:34:04 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283591.000.000) 02/12 17:34:04 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90534
Memory (MB) : 0 1 1
...
006 (283589.000.000) 02/12 17:34:59 Image size of job updated: 503256
138 - MemoryUsage of job (MB)
141200 - ResidentSetSize of job (KB)
...
005 (283589.000.000) 02/12 17:34:59 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:55, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:55, Sys 0 00:00:03 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 27 27 90538
Memory (MB) : 138 1 1
...
006 (283569.000.000) 02/12 17:35:29 Image size of job updated: 5156932
26 - MemoryUsage of job (MB)
26480 - ResidentSetSize of job (KB)
...
005 (283569.000.000) 02/12 17:35:29 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:01:43, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:01:43, Sys 0 00:00:03 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90548
Memory (MB) : 26 1 1
...
000 (283592.000.000) 02/12 17:35:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftV1_json
...
001 (283592.000.000) 02/12 17:35:39 Job executing on host: <90.147.139.88:9618?addrs=90.147.139.88-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
006 (283592.000.000) 02/12 17:35:39 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283592.000.000) 02/12 17:35:39 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90538
Memory (MB) : 0 1 1
...
006 (283588.000.000) 02/12 17:36:11 Image size of job updated: 265240
190 - MemoryUsage of job (MB)
193776 - ResidentSetSize of job (KB)
...
005 (283588.000.000) 02/12 17:36:11 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:02:12, Sys 0 00:00:01 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:02:12, Sys 0 00:00:01 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 27 27 90541
Memory (MB) : 190 1 1
...
006 (283571.000.000) 02/12 17:38:04 Image size of job updated: 5158424
24 - MemoryUsage of job (MB)
24500 - ResidentSetSize of job (KB)
...
005 (283571.000.000) 02/12 17:38:04 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:04:16, Sys 0 00:00:04 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:04:16, Sys 0 00:00:04 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90538
Memory (MB) : 24 1 1
...
006 (283570.000.000) 02/12 17:38:13 Image size of job updated: 5160580
24 - MemoryUsage of job (MB)
23924 - ResidentSetSize of job (KB)
...
005 (283570.000.000) 02/12 17:38:13 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:04:25, Sys 0 00:00:04 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:04:25, Sys 0 00:00:04 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90540
Memory (MB) : 24 1 1
...
000 (283593.000.000) 02/12 17:38:14 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftL1_json
...
001 (283593.000.000) 02/12 17:38:14 Job executing on host: <90.147.139.79:9618?addrs=90.147.139.79-9618+[--1]-9618&noUDP&sock=3366_5fdf_3>
...
006 (283593.000.000) 02/12 17:38:15 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283593.000.000) 02/12 17:38:15 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90538
Memory (MB) : 0 1 1
...
000 (283594.000.000) 02/12 17:38:19 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanhoftH1_json
...
001 (283594.000.000) 02/12 17:38:19 Job executing on host: <90.147.139.87:9618?addrs=90.147.139.87-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
006 (283594.000.000) 02/12 17:38:20 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283594.000.000) 02/12 17:38:20 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90540
Memory (MB) : 0 1 1
...
006 (283573.000.000) 02/12 17:38:38 Image size of job updated: 2117948
2019 - MemoryUsage of job (MB)
2066580 - ResidentSetSize of job (KB)
...
006 (283574.000.000) 02/12 17:38:38 Image size of job updated: 107956
1 - MemoryUsage of job (MB)
104 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:38:39 Image size of job updated: 1859204
1766 - MemoryUsage of job (MB)
1807868 - ResidentSetSize of job (KB)
...
006 (283578.000.000) 02/12 17:38:39 Image size of job updated: 107956
1 - MemoryUsage of job (MB)
104 - ResidentSetSize of job (KB)
...
006 (283576.000.000) 02/12 17:38:39 Image size of job updated: 89012
87 - MemoryUsage of job (MB)
89008 - ResidentSetSize of job (KB)
...
006 (283583.000.000) 02/12 17:38:48 Image size of job updated: 1180420
1153 - MemoryUsage of job (MB)
1180260 - ResidentSetSize of job (KB)
...
006 (283584.000.000) 02/12 17:38:50 Image size of job updated: 1280584
1251 - MemoryUsage of job (MB)
1280560 - ResidentSetSize of job (KB)
...
006 (283586.000.000) 02/12 17:38:53 Image size of job updated: 1288624
1248 - MemoryUsage of job (MB)
1277936 - ResidentSetSize of job (KB)
...
006 (283585.000.000) 02/12 17:38:54 Image size of job updated: 1179648
1152 - MemoryUsage of job (MB)
1179628 - ResidentSetSize of job (KB)
...
006 (283586.000.000) 02/12 17:39:34 Image size of job updated: 1300676
1248 - MemoryUsage of job (MB)
1277936 - ResidentSetSize of job (KB)
...
005 (283586.000.000) 02/12 17:39:34 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:05:22, Sys 0 00:00:07 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:05:22, Sys 0 00:00:07 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90543
Memory (MB) : 1248 1 1
...
006 (283585.000.000) 02/12 17:39:34 Image size of job updated: 1194596
1152 - MemoryUsage of job (MB)
1179628 - ResidentSetSize of job (KB)
...
005 (283585.000.000) 02/12 17:39:34 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:05:27, Sys 0 00:00:06 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:05:27, Sys 0 00:00:06 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90552
Memory (MB) : 1152 1 1
...
006 (283573.000.000) 02/12 17:39:42 Image size of job updated: 2239964
2019 - MemoryUsage of job (MB)
2066580 - ResidentSetSize of job (KB)
...
005 (283573.000.000) 02/12 17:39:42 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:05:05, Sys 0 00:00:20 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:05:05, Sys 0 00:00:20 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90548
Memory (MB) : 2019 1 1
...
000 (283595.000.000) 02/12 17:39:49 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanfull512_json
...
001 (283595.000.000) 02/12 17:39:50 Job executing on host: <90.147.139.94:9618?addrs=90.147.139.94-9618+[--1]-9618&noUDP&sock=3349_b6c3_3>
...
006 (283595.000.000) 02/12 17:39:51 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283595.000.000) 02/12 17:39:51 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90548
Memory (MB) : 0 1 1
...
006 (283576.000.000) 02/12 17:43:40 Image size of job updated: 89032
87 - MemoryUsage of job (MB)
89028 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:43:40 Image size of job updated: 2154000
2054 - MemoryUsage of job (MB)
2102664 - ResidentSetSize of job (KB)
...
006 (283576.000.000) 02/12 17:48:40 Image size of job updated: 89044
87 - MemoryUsage of job (MB)
89040 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:48:41 Image size of job updated: 2474836
2367 - MemoryUsage of job (MB)
2423500 - ResidentSetSize of job (KB)
...
006 (283584.000.000) 02/12 17:48:51 Image size of job updated: 1280592
1251 - MemoryUsage of job (MB)
1280568 - ResidentSetSize of job (KB)
...
006 (283576.000.000) 02/12 17:53:40 Image size of job updated: 89048
87 - MemoryUsage of job (MB)
89044 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:53:41 Image size of job updated: 2714236
2601 - MemoryUsage of job (MB)
2662900 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 17:58:41 Image size of job updated: 3088156
2966 - MemoryUsage of job (MB)
3036820 - ResidentSetSize of job (KB)
...
005 (283572.000.000) 02/12 18:00:13 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:23:33, Sys 0 00:01:16 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:23:33, Sys 0 00:01:16 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90543
Memory (MB) : 2966 1 1
...
001 (283572.000.000) 02/12 18:00:14 Job executing on host: <90.147.139.99:9618?addrs=90.147.139.99-9618+[--1]-9618&noUDP&sock=3344_48ca_3>
...
000 (283712.000.000) 02/12 18:00:21 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronscanfull2048_json
...
001 (283712.000.000) 02/12 18:00:22 Job executing on host: <90.147.139.75:9618?addrs=90.147.139.75-9618+[--1]-9618&noUDP&sock=3224_f96c_3>
...
006 (283712.000.000) 02/12 18:00:23 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283712.000.000) 02/12 18:00:23 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90530
Memory (MB) : 0 1 1
...
006 (283572.000.000) 02/12 18:00:23 Image size of job updated: 39968
25 - MemoryUsage of job (MB)
25180 - ResidentSetSize of job (KB)
...
006 (283572.000.000) 02/12 18:01:17 Image size of job updated: 9057840
25 - MemoryUsage of job (MB)
25180 - ResidentSetSize of job (KB)
...
005 (283572.000.000) 02/12 18:01:17 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:20, Sys 0 00:00:05 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:20, Sys 0 00:00:05 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 30 30 90543
Memory (MB) : 25 1 1
...
005 (283576.000.000) 02/12 18:02:52 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:53, Sys 0 00:00:20 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:53, Sys 0 00:00:20 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 20 20 90537
Memory (MB) : 87 1 1
...
005 (283583.000.000) 02/12 18:04:28 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:29:31, Sys 0 00:00:37 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:29:31, Sys 0 00:00:37 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90530
Memory (MB) : 1153 1 1
...
006 (283584.000.000) 02/12 18:05:11 Image size of job updated: 1312140
1251 - MemoryUsage of job (MB)
1280568 - ResidentSetSize of job (KB)
...
005 (283584.000.000) 02/12 18:05:11 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:30:10, Sys 0 00:00:38 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:30:10, Sys 0 00:00:38 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90542
Memory (MB) : 1251 1 1
...
000 (283713.000.000) 02/12 18:05:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_bruco_json
...
001 (283713.000.000) 02/12 18:05:17 Job executing on host: <90.147.139.99:9618?addrs=90.147.139.99-9618+[--1]-9618&noUDP&sock=3344_48ca_3>
...
006 (283713.000.000) 02/12 18:05:17 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283713.000.000) 02/12 18:05:17 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90543
Memory (MB) : 0 1 1
...
005 (283578.000.000) 02/12 18:06:50 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90548
Memory (MB) : 1 1 1
...
005 (283574.000.000) 02/12 18:06:51 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources :    Usage  Request Allocated
   Cpus                 :                 1         1
   Disk (KB)            :       35       35     90553
   Memory (MB)          :        1        1         1
...
000 (283714.000.000) 02/12 18:06:57 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_upv_exe
...
000 (283715.000.000) 02/12 18:06:57 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronplot_exe
...
001 (283714.000.000) 02/12 18:06:57 Job executing on host: <90.147.139.91:9618?addrs=90.147.139.91-9618+[--1]-9618&noUDP&sock=3354_24bc_3>
...
001 (283715.000.000) 02/12 18:06:57 Job executing on host: <90.147.139.104:9618?addrs=90.147.139.104-9618+[--1]-9618&noUDP&sock=3354_24bc_3>
...
006 (283715.000.000) 02/12 18:07:03 Image size of job updated: 58832
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283715.000.000) 02/12 18:07:03 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:02, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:02, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources :    Usage  Request Allocated
   Cpus                 :                 1         1
   Disk (KB)            :      100      100     90548
   Memory (MB)          :        0        1         1
...
006 (283714.000.000) 02/12 18:07:05 Image size of job updated: 122600
120 - MemoryUsage of job (MB)
122348 - ResidentSetSize of job (KB)
...
000 (283716.000.000) 02/12 18:07:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_omicronplot_json
...
001 (283716.000.000) 02/12 18:07:12 Job executing on host: <90.147.139.104:9618?addrs=90.147.139.104-9618+[--1]-9618&noUDP&sock=3354_24bc_3>
...
006 (283716.000.000) 02/12 18:07:12 Image size of job updated: 2
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283716.000.000) 02/12 18:07:12 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources :    Usage  Request Allocated
   Cpus                 :                 1         1
   Disk (KB)            :        2        2     90548
   Memory (MB)          :        0        1         1
...
006 (283714.000.000) 02/12 18:12:06 Image size of job updated: 162276
159 - MemoryUsage of job (MB)
161832 - ResidentSetSize of job (KB)
...
006 (283714.000.000) 02/12 18:17:07 Image size of job updated: 182280
178 - MemoryUsage of job (MB)
181812 - ResidentSetSize of job (KB)
...
005 (283714.000.000) 02/12 18:17:22 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:08:16, Sys 0 00:00:22 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:08:16, Sys 0 00:00:22 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources :    Usage  Request Allocated
   Cpus                 :                 1         1
   Disk (KB)            :       75       75     90553
   Memory (MB)          :      178        1         1
...
000 (283727.000.000) 02/12 18:17:28 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
DAG Node: test_20190212_narnaud_7_upv_json
...
001 (283727.000.000) 02/12 18:17:28 Job executing on host: <90.147.139.73:9618?addrs=90.147.139.73-9618+[--1]-9618&noUDP&sock=3353_7524_3>
...
006 (283727.000.000) 02/12 18:17:30 Image size of job updated: 14708
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (283727.000.000) 02/12 18:17:30 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources :    Usage  Request Allocated
   Cpus                 :                 1         1
   Disk (KB)            :       15       15     90538
   Memory (MB)          :        0        1         1
...
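For reference, a minimal sketch of how the DAGMAN_ALLOW_EVENTS = 5 workaround is enabled on our side -- assuming the knob is picked up from the DAGMan config file that appears in the .dagman.out excerpt below, /virgoData/VirgoDQR/Parameters/dag.config (it could equally be set in condor_config.local):

  # dag.config -- read by condor_dagman at startup (see the "Using DAGMan config file" line below)
  # Tolerate the "BAD EVENT" instead of aborting the DAG (the default mask is 114)
  DAGMAN_ALLOW_EVENTS = 5
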
02/12/19 17:33:08 ******************************************************
02/12/19 17:33:08 ** condor_scheduniv_exec.283563.0 (CONDOR_DAGMAN) STARTING UP
02/12/19 17:33:08 ** /usr/bin/condor_dagman
02/12/19 17:33:08 ** SubsystemInfo: name=DAGMAN type=DAGMAN(10) class=DAEMON(1)
02/12/19 17:33:08 ** Configuration: subsystem:DAGMAN local:<NONE> class:DAEMON
02/12/19 17:33:08 ** $CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $
02/12/19 17:33:08 ** $CondorPlatform: x86_64_RedHat7 $
02/12/19 17:33:08 ** PID = 150593
02/12/19 17:33:08 ** Log last touched time unavailable (No such file or directory)
02/12/19 17:33:08 ******************************************************
02/12/19 17:33:08 Using config source: /etc/condor/condor_config
02/12/19 17:33:08 Using local config sources:
02/12/19 17:33:08 /etc/condor/condor_config.local
02/12/19 17:33:08 config Macros = 205, Sorted = 205, StringBytes = 7384, TablesBytes = 7428
02/12/19 17:33:08 CLASSAD_CACHING is ENABLED
02/12/19 17:33:08 Daemon Log is logging: D_ALWAYS D_ERROR
02/12/19 17:33:08 DaemonCore: No command port requested.
02/12/19 17:33:08 Using DAGMan config file: /virgoData/VirgoDQR/Parameters/dag.config
02/12/19 17:33:08 DAGMAN_USE_STRICT setting: 1
02/12/19 17:33:08 DAGMAN_VERBOSITY setting: 3
02/12/19 17:33:08 DAGMAN_DEBUG_CACHE_SIZE setting: 5242880
02/12/19 17:33:08 DAGMAN_DEBUG_CACHE_ENABLE setting: False
02/12/19 17:33:08 DAGMAN_SUBMIT_DELAY setting: 0
02/12/19 17:33:08 DAGMAN_MAX_SUBMIT_ATTEMPTS setting: 6
02/12/19 17:33:08 DAGMAN_STARTUP_CYCLE_DETECT setting: False
02/12/19 17:33:08 DAGMAN_MAX_SUBMITS_PER_INTERVAL setting: 5
02/12/19 17:33:08 DAGMAN_USER_LOG_SCAN_INTERVAL setting: 5
02/12/19 17:33:08 DAGMAN_DEFAULT_PRIORITY setting: 0
02/12/19 17:33:08 DAGMAN_SUPPRESS_NOTIFICATION setting: True
02/12/19 17:33:08 allow_events (DAGMAN_ALLOW_EVENTS) setting: 5
02/12/19 17:33:08 DAGMAN_RETRY_SUBMIT_FIRST setting: True
02/12/19 17:33:08 DAGMAN_RETRY_NODE_FIRST setting: False
02/12/19 17:33:08 DAGMAN_MAX_JOBS_IDLE setting: 1000
02/12/19 17:33:08 DAGMAN_MAX_JOBS_SUBMITTED setting: 0
02/12/19 17:33:08 DAGMAN_MAX_PRE_SCRIPTS setting: 20
02/12/19 17:33:08 DAGMAN_MAX_POST_SCRIPTS setting: 20
02/12/19 17:33:08 DAGMAN_MUNGE_NODE_NAMES setting: True
02/12/19 17:33:08 DAGMAN_PROHIBIT_MULTI_JOBS setting: False
02/12/19 17:33:08 DAGMAN_SUBMIT_DEPTH_FIRST setting: False
02/12/19 17:33:08 DAGMAN_ALWAYS_RUN_POST setting: False
02/12/19 17:33:08 DAGMAN_ABORT_DUPLICATES setting: True
02/12/19 17:33:08 DAGMAN_ABORT_ON_SCARY_SUBMIT setting: True
02/12/19 17:33:08 DAGMAN_PENDING_REPORT_INTERVAL setting: 600
02/12/19 17:33:08 DAGMAN_AUTO_RESCUE setting: True
02/12/19 17:33:08 DAGMAN_MAX_RESCUE_NUM setting: 100
02/12/19 17:33:08 DAGMAN_WRITE_PARTIAL_RESCUE setting: True
02/12/19 17:33:08 DAGMAN_DEFAULT_NODE_LOG setting: @(DAG_DIR)/@(DAG_FILE).nodes.log
02/12/19 17:33:08 DAGMAN_GENERATE_SUBDAG_SUBMITS setting: True
02/12/19 17:33:08 DAGMAN_MAX_JOB_HOLDS setting: 100
02/12/19 17:33:08 DAGMAN_HOLD_CLAIM_TIME setting: 20
02/12/19 17:33:08 ALL_DEBUG setting:
02/12/19 17:33:08 DAGMAN_DEBUG setting:
02/12/19 17:33:08 DAGMAN_SUPPRESS_JOB_LOGS setting: False
02/12/19 17:33:08 DAGMAN_REMOVE_NODE_JOBS setting: True
02/12/19 17:33:08 argv[0] == "condor_scheduniv_exec.283563.0"
02/12/19 17:33:08 argv[1] == "-Lockfile"
02/12/19 17:33:08 argv[2] == "dqr_test_20190212_narnaud_7.dag.lock"
02/12/19 17:33:08 argv[3] == "-AutoRescue"
02/12/19 17:33:08 argv[4] == "1"
02/12/19 17:33:08 argv[5] == "-DoRescueFrom"
02/12/19 17:33:08 argv[6] == "0"
02/12/19 17:33:08 argv[7] == "-Dag"
02/12/19 17:33:08 argv[8] == "dqr_test_20190212_narnaud_7.dag"
02/12/19 17:33:08 argv[9] == "-Suppress_notification"
02/12/19 17:33:08 argv[10] == "-CsdVersion"
02/12/19 17:33:08 argv[11] == "$CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $"
02/12/19 17:33:08 argv[12] == "-Dagman"
02/12/19 17:33:08 argv[13] == "/usr/bin/condor_dagman"
02/12/19 17:33:08 Workflow batch-name: <dqr_test_20190212_narnaud_7.dag+283563>
02/12/19 17:33:08 Workflow accounting_group: <>
02/12/19 17:33:08 Workflow accounting_group_user: <>
02/12/19 17:33:08 Warning: failed to get attribute DAGNodeName
02/12/19 17:33:08 DAGMAN_LOG_ON_NFS_IS_ERROR setting: False
02/12/19 17:33:08 Default node log file is: </data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log>
02/12/19 17:33:08 DAG Lockfile will be written to dqr_test_20190212_narnaud_7.dag.lock
02/12/19 17:33:08 DAG Input file is dqr_test_20190212_narnaud_7.dag
02/12/19 17:33:08 Parsing 1 dagfiles
02/12/19 17:33:08 Parsing dqr_test_20190212_narnaud_7.dag ...
02/12/19 17:33:08 Dag contains 38 total jobs
02/12/19 17:33:08 Sleeping for 3 seconds to ensure ProcessId uniqueness
02/12/19 17:33:11 Bootstrapping...
02/12/19 17:33:11 Number of pre-completed nodes: 0
02/12/19 17:33:11 MultiLogFiles: truncating log file /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:11 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:11 Of 38 nodes total:
02/12/19 17:33:11 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:11 === === === === === === ===
02/12/19 17:33:11 0 0 0 0 19 19 0
02/12/19 17:33:11 0 job proc(s) currently held
02/12/19 17:33:11 Registering condor_event_timer...
02/12/19 17:33:12 Submitting HTCondor Node test_20190212_narnaud_7_gps_numerology job(s)...
02/12/19 17:33:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:12 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_gps_numerology -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_gps_numerology -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" gps_numerology.sub
02/12/19 17:33:13 From submit: Submitting job(s).
02/12/19 17:33:13 From submit: 1 job(s) submitted to cluster 283564.
02/12/19 17:33:13 assigned HTCondor ID (283564.0.0)
02/12/19 17:33:13 Submitting HTCondor Node test_20190212_narnaud_7_virgo_noise job(s)...
02/12/19 17:33:13 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:13 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:13 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:13 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_virgo_noise -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_virgo_noise -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_noise.sub
02/12/19 17:33:13 From submit: Submitting job(s).
02/12/19 17:33:13 From submit: 1 job(s) submitted to cluster 283565.
02/12/19 17:33:13 assigned HTCondor ID (283565.0.0)
02/12/19 17:33:13 Submitting HTCondor Node test_20190212_narnaud_7_virgo_status job(s)...
02/12/19 17:33:13 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:13 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:13 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:13 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_virgo_status -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_virgo_status -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_status.sub
02/12/19 17:33:13 From submit: Submitting job(s).
02/12/19 17:33:13 From submit: 1 job(s) submitted to cluster 283566.
02/12/19 17:33:13 assigned HTCondor ID (283566.0.0)
02/12/19 17:33:13 Submitting HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon job(s)...
02/12/19 17:33:13 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:13 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:13 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:13 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_dqprint_brmsmon -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_dqprint_brmsmon -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_brmsmon.sub
02/12/19 17:33:13 From submit: Submitting job(s).
02/12/19 17:33:13 From submit: 1 job(s) submitted to cluster 283567.
02/12/19 17:33:13 assigned HTCondor ID (283567.0.0)
02/12/19 17:33:13 Submitting HTCondor Node test_20190212_narnaud_7_dqprint_dqflags job(s)...
02/12/19 17:33:13 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:13 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:13 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:13 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_dqprint_dqflags -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_dqprint_dqflags -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_dqflags.sub
02/12/19 17:33:13 From submit: Submitting job(s).
02/12/19 17:33:13 From submit: 1 job(s) submitted to cluster 283568.
02/12/19 17:33:13 assigned HTCondor ID (283568.0.0)
02/12/19 17:33:13 Just submitted 5 jobs this cycle...
02/12/19 17:33:13 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:13 Of 38 nodes total:
02/12/19 17:33:13 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:13 === === === === === === ===
02/12/19 17:33:13 0 0 5 0 14 19 0
02/12/19 17:33:13 0 job proc(s) currently held
02/12/19 17:33:18 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 job(s)...
02/12/19 17:33:18 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:18 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:18 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:18 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftV1 -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftV1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftV1.sub
02/12/19 17:33:18 From submit: Submitting job(s).
02/12/19 17:33:18 From submit: 1 job(s) submitted to cluster 283569.
02/12/19 17:33:18 assigned HTCondor ID (283569.0.0)
02/12/19 17:33:18 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 job(s)...
02/12/19 17:33:18 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:18 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:18 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:18 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftH1 -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftH1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftH1.sub
02/12/19 17:33:18 From submit: Submitting job(s).
02/12/19 17:33:18 From submit: 1 job(s) submitted to cluster 283570.
02/12/19 17:33:18 assigned HTCondor ID (283570.0.0)
02/12/19 17:33:18 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 job(s)...
02/12/19 17:33:18 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:18 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:18 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:18 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftL1 -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftL1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftL1.sub
02/12/19 17:33:18 From submit: Submitting job(s).
02/12/19 17:33:18 From submit: 1 job(s) submitted to cluster 283571.
02/12/19 17:33:18 assigned HTCondor ID (283571.0.0)
02/12/19 17:33:18 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 job(s)...
02/12/19 17:33:18 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:18 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:18 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:18 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanfull2048 -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanfull2048 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull2048.sub
02/12/19 17:33:18 From submit: Submitting job(s).
02/12/19 17:33:18 From submit: 1 job(s) submitted to cluster 283572.
02/12/19 17:33:18 assigned HTCondor ID (283572.0.0)
02/12/19 17:33:18 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanfull512 job(s)...
02/12/19 17:33:18 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:18 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:18 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:18 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanfull512 -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanfull512 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull512.sub
02/12/19 17:33:18 From submit: Submitting job(s).
02/12/19 17:33:18 From submit: 1 job(s) submitted to cluster 283573.
02/12/19 17:33:18 assigned HTCondor ID (283573.0.0)
02/12/19 17:33:18 Just submitted 5 jobs this cycle...
02/12/19 17:33:18 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_gps_numerology from (283564.0.0) to (283564.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_gps_numerology (283564.0.0) {02/12/19 17:33:13}
02/12/19 17:33:18 Number of idle job procs: 1
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_virgo_noise from (283565.0.0) to (283565.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_virgo_noise (283565.0.0) {02/12/19 17:33:13}
02/12/19 17:33:18 Number of idle job procs: 2
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_virgo_status from (283566.0.0) to (283566.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_virgo_status (283566.0.0) {02/12/19 17:33:13}
02/12/19 17:33:18 Number of idle job procs: 3
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_dqprint_brmsmon from (283567.0.0) to (283567.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon (283567.0.0) {02/12/19 17:33:13}
02/12/19 17:33:18 Number of idle job procs: 4
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_dqprint_dqflags from (283568.0.0) to (283568.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags (283568.0.0) {02/12/19 17:33:13}
02/12/19 17:33:18 Number of idle job procs: 5
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftV1 from (283569.0.0) to (283569.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 (283569.0.0) {02/12/19 17:33:18}
02/12/19 17:33:18 Number of idle job procs: 6
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftH1 from (283570.0.0) to (283570.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 (283570.0.0) {02/12/19 17:33:18}
02/12/19 17:33:18 Number of idle job procs: 7
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftL1 from (283571.0.0) to (283571.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 (283571.0.0) {02/12/19 17:33:18}
02/12/19 17:33:18 Number of idle job procs: 8
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_omicronscanfull2048 from (283572.0.0) to (283572.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:33:18}
02/12/19 17:33:18 Number of idle job procs: 9
02/12/19 17:33:18 Reassigning the id of job test_20190212_narnaud_7_omicronscanfull512 from (283573.0.0) to (283573.0.0)
02/12/19 17:33:18 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:33:18}
02/12/19 17:33:18 Number of idle job procs: 10
02/12/19 17:33:18 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:18 Of 38 nodes total:
02/12/19 17:33:18 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:18 === === === === === === ===
02/12/19 17:33:18 0 0 10 0 9 19 0
02/12/19 17:33:18 0 job proc(s) currently held
02/12/19 17:33:23 Submitting HTCondor Node test_20190212_narnaud_7_omicronplot job(s)...
02/12/19 17:33:23 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:23 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:23 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:23 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronplot -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronplot -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronplot.sub
02/12/19 17:33:23 From submit: Submitting job(s).
02/12/19 17:33:23 From submit: 1 job(s) submitted to cluster 283574.
02/12/19 17:33:23 assigned HTCondor ID (283574.0.0)
02/12/19 17:33:23 Submitting HTCondor Node test_20190212_narnaud_7_query_ingv_public_data job(s)...
02/12/19 17:33:23 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:23 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:23 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:23 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_query_ingv_public_data -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_query_ingv_public_data -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" query_ingv_public_data.sub
02/12/19 17:33:23 From submit: Submitting job(s).
02/12/19 17:33:23 From submit: 1 job(s) submitted to cluster 283575.
02/12/19 17:33:23 assigned HTCondor ID (283575.0.0)
02/12/19 17:33:23 Submitting HTCondor Node test_20190212_narnaud_7_scan_logfiles job(s)...
02/12/19 17:33:23 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:23 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:23 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:23 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_scan_logfiles -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_scan_logfiles -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" scan_logfiles.sub
02/12/19 17:33:23 From submit: Submitting job(s).
02/12/19 17:33:23 From submit: 1 job(s) submitted to cluster 283576.
02/12/19 17:33:23 assigned HTCondor ID (283576.0.0)
02/12/19 17:33:23 Submitting HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots job(s)...
02/12/19 17:33:23 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:23 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:23 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:23 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_decode_DMS_snapshots -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_decode_DMS_snapshots -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" decode_DMS_snapshots.sub
02/12/19 17:33:23 From submit: Submitting job(s).
02/12/19 17:33:23 From submit: 1 job(s) submitted to cluster 283577.
02/12/19 17:33:23 assigned HTCondor ID (283577.0.0)
02/12/19 17:33:23 Submitting HTCondor Node test_20190212_narnaud_7_upv job(s)...
02/12/19 17:33:23 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:23 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:23 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:23 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_upv -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_upv -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" upv.sub
02/12/19 17:33:23 From submit: Submitting job(s).
02/12/19 17:33:23 From submit: 1 job(s) submitted to cluster 283578.
02/12/19 17:33:23 assigned HTCondor ID (283578.0.0)
02/12/19 17:33:23 Just submitted 5 jobs this cycle...
02/12/19 17:33:23 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:23 Reassigning the id of job test_20190212_narnaud_7_omicronplot from (283574.0.0) to (283574.0.0)
02/12/19 17:33:23 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronplot (283574.0.0) {02/12/19 17:33:23}
02/12/19 17:33:23 Number of idle job procs: 11
02/12/19 17:33:23 Reassigning the id of job test_20190212_narnaud_7_query_ingv_public_data from (283575.0.0) to (283575.0.0)
02/12/19 17:33:23 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_query_ingv_public_data (283575.0.0) {02/12/19 17:33:23}
02/12/19 17:33:23 Number of idle job procs: 12
02/12/19 17:33:23 Reassigning the id of job test_20190212_narnaud_7_scan_logfiles from (283576.0.0) to (283576.0.0)
02/12/19 17:33:23 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:33:23}
02/12/19 17:33:23 Number of idle job procs: 13
02/12/19 17:33:23 Reassigning the id of job test_20190212_narnaud_7_decode_DMS_snapshots from (283577.0.0) to (283577.0.0)
02/12/19 17:33:23 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots (283577.0.0) {02/12/19 17:33:23}
02/12/19 17:33:23 Number of idle job procs: 14
02/12/19 17:33:23 Reassigning the id of job test_20190212_narnaud_7_upv from (283578.0.0) to (283578.0.0)
02/12/19 17:33:23 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_upv (283578.0.0) {02/12/19 17:33:23}
02/12/19 17:33:23 Number of idle job procs: 15
02/12/19 17:33:23 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:23 Of 38 nodes total:
02/12/19 17:33:23 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:23 === === === === === === ===
02/12/19 17:33:23 0 0 15 0 4 19 0
02/12/19 17:33:23 0 job proc(s) currently held
02/12/19 17:33:28 Submitting HTCondor Node test_20190212_narnaud_7_bruco job(s)...
02/12/19 17:33:28 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:28 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:28 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:28 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" bruco.sub
02/12/19 17:33:28 From submit: Submitting job(s).
02/12/19 17:33:28 From submit: 1 job(s) submitted to cluster 283579.
02/12/19 17:33:28 assigned HTCondor ID (283579.0.0)
02/12/19 17:33:28 Submitting HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ job(s)...
02/12/19 17:33:28 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:28 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:28 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:28 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_data_ref_comparison_INJ -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_data_ref_comparison_INJ -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_INJ.sub
02/12/19 17:33:28 From submit: Submitting job(s).
02/12/19 17:33:28 From submit: 1 job(s) submitted to cluster 283580.
02/12/19 17:33:28 assigned HTCondor ID (283580.0.0)
02/12/19 17:33:28 Submitting HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC job(s)...
02/12/19 17:33:28 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:28 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:28 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:28 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_data_ref_comparison_ISC -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_data_ref_comparison_ISC -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_ISC.sub
02/12/19 17:33:28 From submit: Submitting job(s).
02/12/19 17:33:28 From submit: 1 job(s) submitted to cluster 283581.
02/12/19 17:33:28 assigned HTCondor ID (283581.0.0)
02/12/19 17:33:28 Submitting HTCondor Node test_20190212_narnaud_7_generate_dqr_json job(s)...
02/12/19 17:33:28 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:28 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:28 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:28 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_generate_dqr_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_generate_dqr_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" generate_dqr_json.sub
02/12/19 17:33:28 From submit: Submitting job(s).
02/12/19 17:33:28 From submit: 1 job(s) submitted to cluster 283582.
02/12/19 17:33:28 assigned HTCondor ID (283582.0.0)
02/12/19 17:33:28 Just submitted 4 jobs this cycle...
02/12/19 17:33:28 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:28 Reassigning the id of job test_20190212_narnaud_7_bruco from (283579.0.0) to (283579.0.0)
02/12/19 17:33:28 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco (283579.0.0) {02/12/19 17:33:28}
02/12/19 17:33:28 Number of idle job procs: 16
02/12/19 17:33:28 Reassigning the id of job test_20190212_narnaud_7_data_ref_comparison_INJ from (283580.0.0) to (283580.0.0)
02/12/19 17:33:28 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ (283580.0.0) {02/12/19 17:33:28}
02/12/19 17:33:28 Number of idle job procs: 17
02/12/19 17:33:28 Reassigning the id of job test_20190212_narnaud_7_data_ref_comparison_ISC from (283581.0.0) to (283581.0.0)
02/12/19 17:33:28 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC (283581.0.0) {02/12/19 17:33:28}
02/12/19 17:33:28 Number of idle job procs: 18
02/12/19 17:33:28 Reassigning the id of job test_20190212_narnaud_7_generate_dqr_json from (283582.0.0) to (283582.0.0)
02/12/19 17:33:28 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_generate_dqr_json (283582.0.0) {02/12/19 17:33:28}
02/12/19 17:33:28 Number of idle job procs: 19
02/12/19 17:33:28 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:28 Of 38 nodes total:
02/12/19 17:33:28 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:28 === === === === === === ===
02/12/19 17:33:28 0 0 19 0 0 19 0
02/12/19 17:33:28 0 job proc(s) currently held
02/12/19 17:33:33 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_virgo_noise (283565.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 18
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon (283567.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 17
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_virgo_status (283566.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 16
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_gps_numerology (283564.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 15
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags (283568.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 14
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 13
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 12
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 (283570.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 11
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_upv (283578.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 10
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots (283577.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 9
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 (283571.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 8
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco (283579.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 7
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ (283580.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 6
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 (283569.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 5
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_query_ingv_public_data (283575.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 4
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 3
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronplot (283574.0.0) {02/12/19 17:33:29}
02/12/19 17:33:33 Number of idle job procs: 2
02/12/19 17:33:33 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco (283579.0.0) {02/12/19 17:33:30}
02/12/19 17:33:33 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco (283579.0.0) {02/12/19 17:33:30}
02/12/19 17:33:33 Number of idle job procs: 2
02/12/19 17:33:33 Node test_20190212_narnaud_7_bruco job proc (283579.0.0) completed successfully.
02/12/19 17:33:33 Node test_20190212_narnaud_7_bruco job completed
02/12/19 17:33:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_generate_dqr_json (283582.0.0) {02/12/19 17:33:31}
02/12/19 17:33:33 Number of idle job procs: 1
02/12/19 17:33:33 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:33 Of 38 nodes total:
02/12/19 17:33:33 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:33 === === === === === === ===
02/12/19 17:33:33 1 0 18 0 4 15 0
02/12/19 17:33:33 0 job proc(s) currently held
02/12/19 17:33:38 Submitting HTCondor Node test_20190212_narnaud_7_bruco_std job(s)...
02/12/19 17:33:38 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:38 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:38 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:38 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco_std -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco_std -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_bruco" bruco_std.sub
02/12/19 17:33:38 From submit: Submitting job(s).
02/12/19 17:33:38 From submit: 1 job(s) submitted to cluster 283583.
02/12/19 17:33:38 assigned HTCondor ID (283583.0.0)
02/12/19 17:33:38 Submitting HTCondor Node test_20190212_narnaud_7_bruco_std-prev job(s)...
02/12/19 17:33:38 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:38 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:38 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:38 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco_std-prev -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco_std-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_bruco" bruco_std-prev.sub
02/12/19 17:33:38 From submit: Submitting job(s).
02/12/19 17:33:38 From submit: 1 job(s) submitted to cluster 283584.
02/12/19 17:33:38 assigned HTCondor ID (283584.0.0)
02/12/19 17:33:38 Submitting HTCondor Node test_20190212_narnaud_7_bruco_env job(s)...
02/12/19 17:33:38 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:38 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:38 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:38 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco_env -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco_env -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_bruco" bruco_env.sub
02/12/19 17:33:38 From submit: Submitting job(s).
02/12/19 17:33:38 From submit: 1 job(s) submitted to cluster 283585.
02/12/19 17:33:38 assigned HTCondor ID (283585.0.0)
02/12/19 17:33:38 Submitting HTCondor Node test_20190212_narnaud_7_bruco_env-prev job(s)...
02/12/19 17:33:38 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:38 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:38 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:38 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco_env-prev -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco_env-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_bruco" bruco_env-prev.sub
02/12/19 17:33:39 From submit: Submitting job(s).
02/12/19 17:33:39 From submit: 1 job(s) submitted to cluster 283586.
02/12/19 17:33:39 assigned HTCondor ID (283586.0.0)
02/12/19 17:33:39 Just submitted 4 jobs this cycle...
02/12/19 17:33:39 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_virgo_noise (283565.0.0) {02/12/19 17:33:37}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_virgo_status (283566.0.0) {02/12/19 17:33:37}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon (283567.0.0) {02/12/19 17:33:37}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_gps_numerology (283564.0.0) {02/12/19 17:33:37}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags (283568.0.0) {02/12/19 17:33:37}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronplot (283574.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_virgo_noise (283565.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Number of idle job procs: 1
02/12/19 17:33:39 Node test_20190212_narnaud_7_virgo_noise job proc (283565.0.0) completed successfully.
02/12/19 17:33:39 Node test_20190212_narnaud_7_virgo_noise job completed
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv (283578.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 (283570.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots (283577.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ (283580.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 (283571.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_query_ingv_public_data (283575.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Reassigning the id of job test_20190212_narnaud_7_bruco_std from (283583.0.0) to (283583.0.0)
02/12/19 17:33:39 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco_std (283583.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Number of idle job procs: 2
02/12/19 17:33:39 Reassigning the id of job test_20190212_narnaud_7_bruco_std-prev from (283584.0.0) to (283584.0.0)
02/12/19 17:33:39 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Number of idle job procs: 3
02/12/19 17:33:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 (283569.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Reassigning the id of job test_20190212_narnaud_7_bruco_env from (283585.0.0) to (283585.0.0)
02/12/19 17:33:39 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Number of idle job procs: 4
02/12/19 17:33:39 Reassigning the id of job test_20190212_narnaud_7_bruco_env-prev from (283586.0.0) to (283586.0.0)
02/12/19 17:33:39 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:33:38}
02/12/19 17:33:39 Number of idle job procs: 5
02/12/19 17:33:39 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:39 Of 38 nodes total:
02/12/19 17:33:39 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:39 === === === === === === ===
02/12/19 17:33:39 2 0 21 0 1 14 0
02/12/19 17:33:39 0 job proc(s) currently held
02/12/19 17:33:44 Submitting HTCondor Node test_20190212_narnaud_7_virgo_noise_json job(s)...
02/12/19 17:33:44 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:44 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:44 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:44 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_virgo_noise_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_virgo_noise_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_virgo_noise" virgo_noise_json.sub
02/12/19 17:33:44 From submit: Submitting job(s).
02/12/19 17:33:44 From submit: 1 job(s) submitted to cluster 283587.
02/12/19 17:33:44 assigned HTCondor ID (283587.0.0)
02/12/19 17:33:44 Just submitted 1 job this cycle...
02/12/19 17:33:44 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_generate_dqr_json (283582.0.0) {02/12/19 17:33:39}
02/12/19 17:33:44 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC (283581.0.0) {02/12/19 17:33:39}
02/12/19 17:33:44 Number of idle job procs: 4
02/12/19 17:33:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_gps_numerology (283564.0.0) {02/12/19 17:33:39}
02/12/19 17:33:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_gps_numerology (283564.0.0) {02/12/19 17:33:39}
02/12/19 17:33:44 Number of idle job procs: 4
02/12/19 17:33:44 Node test_20190212_narnaud_7_gps_numerology job proc (283564.0.0) completed successfully.
02/12/19 17:33:44 Node test_20190212_narnaud_7_gps_numerology job completed
02/12/19 17:33:44 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco_std (283583.0.0) {02/12/19 17:33:39}
02/12/19 17:33:44 Number of idle job procs: 3
02/12/19 17:33:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots (283577.0.0) {02/12/19 17:33:41}
02/12/19 17:33:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_decode_DMS_snapshots (283577.0.0) {02/12/19 17:33:41}
02/12/19 17:33:44 Number of idle job procs: 3
02/12/19 17:33:44 Node test_20190212_narnaud_7_decode_DMS_snapshots job proc (283577.0.0) completed successfully.
02/12/19 17:33:44 Node test_20190212_narnaud_7_decode_DMS_snapshots job completed
02/12/19 17:33:44 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 17:33:42}
02/12/19 17:33:44 Number of idle job procs: 2
02/12/19 17:33:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC (283581.0.0) {02/12/19 17:33:43}
02/12/19 17:33:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC (283581.0.0) {02/12/19 17:33:43}
02/12/19 17:33:44 Number of idle job procs: 2
02/12/19 17:33:44 Node test_20190212_narnaud_7_data_ref_comparison_ISC job proc (283581.0.0) completed successfully.
02/12/19 17:33:44 Node test_20190212_narnaud_7_data_ref_comparison_ISC job completed
02/12/19 17:33:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ (283580.0.0) {02/12/19 17:33:43}
02/12/19 17:33:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ (283580.0.0) {02/12/19 17:33:43}
02/12/19 17:33:44 Number of idle job procs: 2
02/12/19 17:33:44 Node test_20190212_narnaud_7_data_ref_comparison_INJ job proc (283580.0.0) completed successfully.
02/12/19 17:33:44 Node test_20190212_narnaud_7_data_ref_comparison_INJ job completed
02/12/19 17:33:44 Reassigning the id of job test_20190212_narnaud_7_virgo_noise_json from (283587.0.0) to (283587.0.0)
02/12/19 17:33:44 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_virgo_noise_json (283587.0.0) {02/12/19 17:33:44}
02/12/19 17:33:44 Number of idle job procs: 3
02/12/19 17:33:44 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:44 Of 38 nodes total:
02/12/19 17:33:44 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:44 === === === === === === ===
02/12/19 17:33:44 6 0 18 0 2 12 0
02/12/19 17:33:44 0 job proc(s) currently held
02/12/19 17:33:49 Submitting HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison job(s)...
02/12/19 17:33:49 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:49 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:49 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:49 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_data_ref_comparison_ISC_comparison -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_data_ref_comparison_ISC_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_data_ref_comparison_ISC" data_ref_comparison_ISC_comparison.sub
02/12/19 17:33:49 From submit: Submitting job(s).
02/12/19 17:33:49 From submit: 1 job(s) submitted to cluster 283588.
02/12/19 17:33:49 assigned HTCondor ID (283588.0.0)
02/12/19 17:33:49 Submitting HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison job(s)...
02/12/19 17:33:49 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:49 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:49 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:49 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_data_ref_comparison_INJ_comparison -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_data_ref_comparison_INJ_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_data_ref_comparison_INJ" data_ref_comparison_INJ_comparison.sub
02/12/19 17:33:49 From submit: Submitting job(s).
02/12/19 17:33:49 From submit: 1 job(s) submitted to cluster 283589.
02/12/19 17:33:49 assigned HTCondor ID (283589.0.0)
02/12/19 17:33:49 Just submitted 2 jobs this cycle...
02/12/19 17:33:49 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:49 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:33:44}
02/12/19 17:33:49 Number of idle job procs: 2
02/12/19 17:33:49 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:33:45}
02/12/19 17:33:49 Number of idle job procs: 1
02/12/19 17:33:49 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std (283583.0.0) {02/12/19 17:33:48}
02/12/19 17:33:49 Reassigning the id of job test_20190212_narnaud_7_data_ref_comparison_ISC_comparison from (283588.0.0) to (283588.0.0)
02/12/19 17:33:49 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison (283588.0.0) {02/12/19 17:33:49}
02/12/19 17:33:49 Number of idle job procs: 2
02/12/19 17:33:49 Reassigning the id of job test_20190212_narnaud_7_data_ref_comparison_INJ_comparison from (283589.0.0) to (283589.0.0)
02/12/19 17:33:49 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison (283589.0.0) {02/12/19 17:33:49}
02/12/19 17:33:49 Number of idle job procs: 3
02/12/19 17:33:49 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:49 Of 38 nodes total:
02/12/19 17:33:49 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:49 === === === === === === ===
02/12/19 17:33:49 6 0 20 0 0 12 0
02/12/19 17:33:49 0 job proc(s) currently held
02/12/19 17:33:54 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 17:33:50}
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon (283567.0.0) {02/12/19 17:33:51}
02/12/19 17:33:54 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon (283567.0.0) {02/12/19 17:33:51}
02/12/19 17:33:54 Number of idle job procs: 3
02/12/19 17:33:54 Node test_20190212_narnaud_7_dqprint_brmsmon job proc (283567.0.0) completed successfully.
02/12/19 17:33:54 Node test_20190212_narnaud_7_dqprint_brmsmon job completed
02/12/19 17:33:54 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_virgo_noise_json (283587.0.0) {02/12/19 17:33:52}
02/12/19 17:33:54 Number of idle job procs: 2
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_query_ingv_public_data (283575.0.0) {02/12/19 17:33:52}
02/12/19 17:33:54 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_query_ingv_public_data (283575.0.0) {02/12/19 17:33:52}
02/12/19 17:33:54 Number of idle job procs: 2
02/12/19 17:33:54 Node test_20190212_narnaud_7_query_ingv_public_data job proc (283575.0.0) completed successfully.
02/12/19 17:33:54 Node test_20190212_narnaud_7_query_ingv_public_data job completed
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_virgo_noise_json (283587.0.0) {02/12/19 17:33:52}
02/12/19 17:33:54 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_virgo_noise_json (283587.0.0) {02/12/19 17:33:52}
02/12/19 17:33:54 Number of idle job procs: 2
02/12/19 17:33:54 Node test_20190212_narnaud_7_virgo_noise_json job proc (283587.0.0) completed successfully.
02/12/19 17:33:54 Node test_20190212_narnaud_7_virgo_noise_json job completed
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_generate_dqr_json (283582.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_generate_dqr_json (283582.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Number of idle job procs: 2
02/12/19 17:33:54 Node test_20190212_narnaud_7_generate_dqr_json job proc (283582.0.0) completed successfully.
02/12/19 17:33:54 Node test_20190212_narnaud_7_generate_dqr_json job completed
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison (283588.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Number of idle job procs: 1
02/12/19 17:33:54 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison (283589.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Number of idle job procs: 0
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_virgo_status (283566.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_virgo_status (283566.0.0) {02/12/19 17:33:53}
02/12/19 17:33:54 Number of idle job procs: 0
02/12/19 17:33:54 Node test_20190212_narnaud_7_virgo_status job proc (283566.0.0) completed successfully.
02/12/19 17:33:54 Node test_20190212_narnaud_7_virgo_status job completed
02/12/19 17:33:54 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:54 Of 38 nodes total:
02/12/19 17:33:54 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:54 === === === === === === ===
02/12/19 17:33:54 11 0 15 0 1 11 0
02/12/19 17:33:54 0 job proc(s) currently held
02/12/19 17:33:59 Submitting HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon_json job(s)...
02/12/19 17:33:59 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:33:59 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:33:59 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:33:59 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_dqprint_brmsmon_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_dqprint_brmsmon_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_dqprint_brmsmon" dqprint_brmsmon_json.sub
02/12/19 17:33:59 From submit: Submitting job(s).
02/12/19 17:33:59 From submit: 1 job(s) submitted to cluster 283590.
02/12/19 17:33:59 assigned HTCondor ID (283590.0.0)
02/12/19 17:33:59 Just submitted 1 job this cycle...
02/12/19 17:33:59 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:33:59 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags (283568.0.0) {02/12/19 17:33:56}
02/12/19 17:33:59 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags (283568.0.0) {02/12/19 17:33:56}
02/12/19 17:33:59 Number of idle job procs: 0
02/12/19 17:33:59 Node test_20190212_narnaud_7_dqprint_dqflags job proc (283568.0.0) completed successfully.
02/12/19 17:33:59 Node test_20190212_narnaud_7_dqprint_dqflags job completed
02/12/19 17:33:59 Reassigning the id of job test_20190212_narnaud_7_dqprint_brmsmon_json from (283590.0.0) to (283590.0.0)
02/12/19 17:33:59 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon_json (283590.0.0) {02/12/19 17:33:59}
02/12/19 17:33:59 Number of idle job procs: 1
02/12/19 17:33:59 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:33:59 Of 38 nodes total:
02/12/19 17:33:59 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:33:59 === === === === === === ===
02/12/19 17:33:59 12 0 15 0 1 10 0
02/12/19 17:33:59 0 job proc(s) currently held
02/12/19 17:34:04 Submitting HTCondor Node test_20190212_narnaud_7_dqprint_dqflags_json job(s)...
02/12/19 17:34:04 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:34:04 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:34:04 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:34:04 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_dqprint_dqflags_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_dqprint_dqflags_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_dqprint_dqflags" dqprint_dqflags_json.sub
02/12/19 17:34:04 From submit: Submitting job(s).
02/12/19 17:34:04 From submit: 1 job(s) submitted to cluster 283591.
02/12/19 17:34:04 assigned HTCondor ID (283591.0.0)
02/12/19 17:34:04 Just submitted 1 job this cycle...
02/12/19 17:34:04 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:34:04 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon_json (283590.0.0) {02/12/19 17:33:59}
02/12/19 17:34:04 Number of idle job procs: 0
02/12/19 17:34:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon_json (283590.0.0) {02/12/19 17:33:59}
02/12/19 17:34:04 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_dqprint_brmsmon_json (283590.0.0) {02/12/19 17:33:59}
02/12/19 17:34:04 Number of idle job procs: 0
02/12/19 17:34:04 Node test_20190212_narnaud_7_dqprint_brmsmon_json job proc (283590.0.0) completed successfully.
02/12/19 17:34:04 Node test_20190212_narnaud_7_dqprint_brmsmon_json job completed
02/12/19 17:34:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison (283588.0.0) {02/12/19 17:34:02}
02/12/19 17:34:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison (283589.0.0) {02/12/19 17:34:02}
02/12/19 17:34:04 Reassigning the id of job test_20190212_narnaud_7_dqprint_dqflags_json from (283591.0.0) to (283591.0.0)
02/12/19 17:34:04 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags_json (283591.0.0) {02/12/19 17:34:04}
02/12/19 17:34:04 Number of idle job procs: 1
02/12/19 17:34:04 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:34:04 Of 38 nodes total:
02/12/19 17:34:04 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:34:04 === === === === === === ===
02/12/19 17:34:04 13 0 15 0 0 10 0
02/12/19 17:34:04 0 job proc(s) currently held
02/12/19 17:34:09 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:34:09 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags_json (283591.0.0) {02/12/19 17:34:04}
02/12/19 17:34:09 Number of idle job procs: 0
02/12/19 17:34:09 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags_json (283591.0.0) {02/12/19 17:34:04}
02/12/19 17:34:09 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_dqprint_dqflags_json (283591.0.0) {02/12/19 17:34:04}
02/12/19 17:34:09 Number of idle job procs: 0
02/12/19 17:34:09 Node test_20190212_narnaud_7_dqprint_dqflags_json job proc (283591.0.0) completed successfully.
02/12/19 17:34:09 Node test_20190212_narnaud_7_dqprint_dqflags_json job completed
02/12/19 17:34:09 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:34:09 Of 38 nodes total:
02/12/19 17:34:09 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:34:09 === === === === === === ===
02/12/19 17:34:09 14 0 14 0 0 10 0
02/12/19 17:34:09 0 job proc(s) currently held
02/12/19 17:34:59 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:34:59 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison (283589.0.0) {02/12/19 17:34:59}
02/12/19 17:34:59 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison (283589.0.0) {02/12/19 17:34:59}
02/12/19 17:34:59 Number of idle job procs: 0
02/12/19 17:34:59 Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison job proc (283589.0.0) completed successfully.
02/12/19 17:34:59 Node test_20190212_narnaud_7_data_ref_comparison_INJ_comparison job completed
02/12/19 17:34:59 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:34:59 Of 38 nodes total:
02/12/19 17:34:59 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:34:59 === === === === === === ===
02/12/19 17:34:59 15 0 13 0 0 10 0
02/12/19 17:34:59 0 job proc(s) currently held
02/12/19 17:35:34 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:35:34 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 (283569.0.0) {02/12/19 17:35:29}
02/12/19 17:35:34 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1 (283569.0.0) {02/12/19 17:35:29}
02/12/19 17:35:34 Number of idle job procs: 0
02/12/19 17:35:34 Node test_20190212_narnaud_7_omicronscanhoftV1 job proc (283569.0.0) completed successfully.
02/12/19 17:35:34 Node test_20190212_narnaud_7_omicronscanhoftV1 job completed
02/12/19 17:35:34 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:35:34 Of 38 nodes total:
02/12/19 17:35:34 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:35:34 === === === === === === ===
02/12/19 17:35:34 16 0 12 0 1 9 0
02/12/19 17:35:34 0 job proc(s) currently held
02/12/19 17:35:39 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1_json job(s)...
02/12/19 17:35:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:35:39 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:35:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:35:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftV1_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftV1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronscanhoftV1" omicronscanhoftV1_json.sub
02/12/19 17:35:39 From submit: Submitting job(s).
02/12/19 17:35:39 From submit: 1 job(s) submitted to cluster 283592.
02/12/19 17:35:39 assigned HTCondor ID (283592.0.0)
02/12/19 17:35:39 Just submitted 1 job this cycle...
02/12/19 17:35:39 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:35:39 Of 38 nodes total:
02/12/19 17:35:39 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:35:39 === === === === === === ===
02/12/19 17:35:39 16 0 13 0 0 9 0
02/12/19 17:35:39 0 job proc(s) currently held
02/12/19 17:35:44 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:35:44 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftV1_json from (283592.0.0) to (283592.0.0)
02/12/19 17:35:44 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1_json (283592.0.0) {02/12/19 17:35:39}
02/12/19 17:35:44 Number of idle job procs: 1
02/12/19 17:35:44 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1_json (283592.0.0) {02/12/19 17:35:39}
02/12/19 17:35:44 Number of idle job procs: 0
02/12/19 17:35:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1_json (283592.0.0) {02/12/19 17:35:39}
02/12/19 17:35:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftV1_json (283592.0.0) {02/12/19 17:35:39}
02/12/19 17:35:44 Number of idle job procs: 0
02/12/19 17:35:44 Node test_20190212_narnaud_7_omicronscanhoftV1_json job proc (283592.0.0) completed successfully.
02/12/19 17:35:44 Node test_20190212_narnaud_7_omicronscanhoftV1_json job completed
02/12/19 17:35:44 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:35:44 Of 38 nodes total:
02/12/19 17:35:44 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:35:44 === === === === === === ===
02/12/19 17:35:44 17 0 12 0 0 9 0
02/12/19 17:35:44 0 job proc(s) currently held
02/12/19 17:36:14 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:36:14 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison (283588.0.0) {02/12/19 17:36:11}
02/12/19 17:36:14 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison (283588.0.0) {02/12/19 17:36:11}
02/12/19 17:36:14 Number of idle job procs: 0
02/12/19 17:36:14 Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison job proc (283588.0.0) completed successfully.
02/12/19 17:36:14 Node test_20190212_narnaud_7_data_ref_comparison_ISC_comparison job completed
02/12/19 17:36:14 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:36:14 Of 38 nodes total:
02/12/19 17:36:14 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:36:14 === === === === === === ===
02/12/19 17:36:14 18 0 11 0 0 9 0
02/12/19 17:36:14 0 job proc(s) currently held
02/12/19 17:38:09 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:09 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 (283571.0.0) {02/12/19 17:38:04}
02/12/19 17:38:09 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1 (283571.0.0) {02/12/19 17:38:04}
02/12/19 17:38:09 Number of idle job procs: 0
02/12/19 17:38:09 Node test_20190212_narnaud_7_omicronscanhoftL1 job proc (283571.0.0) completed successfully.
02/12/19 17:38:09 Node test_20190212_narnaud_7_omicronscanhoftL1 job completed
02/12/19 17:38:09 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:38:09 Of 38 nodes total:
02/12/19 17:38:09 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:38:09 === === === === === === ===
02/12/19 17:38:09 19 0 10 0 1 8 0
02/12/19 17:38:09 0 job proc(s) currently held
02/12/19 17:38:14 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1_json job(s)...
02/12/19 17:38:14 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:38:14 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:38:14 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:38:14 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftL1_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftL1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronscanhoftL1" omicronscanhoftL1_json.sub
02/12/19 17:38:14 From submit: Submitting job(s).
02/12/19 17:38:14 From submit: 1 job(s) submitted to cluster 283593.
02/12/19 17:38:14 assigned HTCondor ID (283593.0.0)
02/12/19 17:38:14 Just submitted 1 job this cycle...
02/12/19 17:38:14 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:14 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 (283570.0.0) {02/12/19 17:38:13}
02/12/19 17:38:14 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1 (283570.0.0) {02/12/19 17:38:13}
02/12/19 17:38:14 Number of idle job procs: 0
02/12/19 17:38:14 Node test_20190212_narnaud_7_omicronscanhoftH1 job proc (283570.0.0) completed successfully.
02/12/19 17:38:14 Node test_20190212_narnaud_7_omicronscanhoftH1 job completed
02/12/19 17:38:14 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftL1_json from (283593.0.0) to (283593.0.0)
02/12/19 17:38:14 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1_json (283593.0.0) {02/12/19 17:38:14}
02/12/19 17:38:14 Number of idle job procs: 1
02/12/19 17:38:14 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:38:14 Of 38 nodes total:
02/12/19 17:38:14 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:38:14 === === === === === === ===
02/12/19 17:38:14 20 0 10 0 1 7 0
02/12/19 17:38:14 0 job proc(s) currently held
02/12/19 17:38:19 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1_json job(s)...
02/12/19 17:38:19 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:38:19 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:38:19 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:38:19 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanhoftH1_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanhoftH1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronscanhoftH1" omicronscanhoftH1_json.sub
02/12/19 17:38:19 From submit: Submitting job(s).
02/12/19 17:38:19 From submit: 1 job(s) submitted to cluster 283594.
02/12/19 17:38:19 assigned HTCondor ID (283594.0.0)
02/12/19 17:38:19 Just submitted 1 job this cycle...
02/12/19 17:38:19 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:19 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1_json (283593.0.0) {02/12/19 17:38:14}
02/12/19 17:38:19 Number of idle job procs: 0
02/12/19 17:38:19 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1_json (283593.0.0) {02/12/19 17:38:15}
02/12/19 17:38:19 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftL1_json (283593.0.0) {02/12/19 17:38:15}
02/12/19 17:38:19 Number of idle job procs: 0
02/12/19 17:38:19 Node test_20190212_narnaud_7_omicronscanhoftL1_json job proc (283593.0.0) completed successfully.
02/12/19 17:38:19 Node test_20190212_narnaud_7_omicronscanhoftL1_json job completed
02/12/19 17:38:19 Reassigning the id of job test_20190212_narnaud_7_omicronscanhoftH1_json from (283594.0.0) to (283594.0.0)
02/12/19 17:38:19 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1_json (283594.0.0) {02/12/19 17:38:19}
02/12/19 17:38:19 Number of idle job procs: 1
02/12/19 17:38:19 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:38:19 Of 38 nodes total:
02/12/19 17:38:19 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:38:19 === === === === === === ===
02/12/19 17:38:19 21 0 10 0 0 7 0
02/12/19 17:38:19 0 job proc(s) currently held
02/12/19 17:38:24 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:24 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1_json (283594.0.0) {02/12/19 17:38:19}
02/12/19 17:38:24 Number of idle job procs: 0
02/12/19 17:38:24 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1_json (283594.0.0) {02/12/19 17:38:20}
02/12/19 17:38:24 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanhoftH1_json (283594.0.0) {02/12/19 17:38:20}
02/12/19 17:38:24 Number of idle job procs: 0
02/12/19 17:38:24 Node test_20190212_narnaud_7_omicronscanhoftH1_json job proc (283594.0.0) completed successfully.
02/12/19 17:38:24 Node test_20190212_narnaud_7_omicronscanhoftH1_json job completed
02/12/19 17:38:24 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:38:24 Of 38 nodes total:
02/12/19 17:38:24 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:38:24 === === === === === === ===
02/12/19 17:38:24 22 0 9 0 0 7 0
02/12/19 17:38:24 0 job proc(s) currently held
02/12/19 17:38:39 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:38:38}
02/12/19 17:38:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronplot (283574.0.0) {02/12/19 17:38:38}
02/12/19 17:38:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:38:39}
02/12/19 17:38:44 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv (283578.0.0) {02/12/19 17:38:39}
02/12/19 17:38:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:38:39}
02/12/19 17:38:49 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:49 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std (283583.0.0) {02/12/19 17:38:48}
02/12/19 17:38:54 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:38:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 17:38:50}
02/12/19 17:38:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:38:53}
02/12/19 17:38:54 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:38:54}
02/12/19 17:39:34 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:39:34 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:39:34}
02/12/19 17:39:34 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco_env-prev (283586.0.0) {02/12/19 17:39:34}
02/12/19 17:39:34 Number of idle job procs: 0
02/12/19 17:39:34 Node test_20190212_narnaud_7_bruco_env-prev job proc (283586.0.0) completed successfully.
02/12/19 17:39:34 Node test_20190212_narnaud_7_bruco_env-prev job completed
02/12/19 17:39:34 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:39:34 Of 38 nodes total:
02/12/19 17:39:34 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:39:34 === === === === === === ===
02/12/19 17:39:34 23 0 8 0 0 7 0
02/12/19 17:39:34 0 job proc(s) currently held
02/12/19 17:39:39 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:39:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:39:34}
02/12/19 17:39:39 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco_env (283585.0.0) {02/12/19 17:39:34}
02/12/19 17:39:39 Number of idle job procs: 0
02/12/19 17:39:39 Node test_20190212_narnaud_7_bruco_env job proc (283585.0.0) completed successfully.
02/12/19 17:39:39 Node test_20190212_narnaud_7_bruco_env job completed
02/12/19 17:39:39 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:39:39 Of 38 nodes total:
02/12/19 17:39:39 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:39:39 === === === === === === ===
02/12/19 17:39:39 24 0 7 0 0 7 0
02/12/19 17:39:39 0 job proc(s) currently held
02/12/19 17:39:44 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:39:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:39:42}
02/12/19 17:39:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanfull512 (283573.0.0) {02/12/19 17:39:42}
02/12/19 17:39:44 Number of idle job procs: 0
02/12/19 17:39:44 Node test_20190212_narnaud_7_omicronscanfull512 job proc (283573.0.0) completed successfully.
02/12/19 17:39:44 Node test_20190212_narnaud_7_omicronscanfull512 job completed
02/12/19 17:39:44 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:39:44 Of 38 nodes total:
02/12/19 17:39:44 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:39:44 === === === === === === ===
02/12/19 17:39:44 25 0 6 0 1 6 0
02/12/19 17:39:44 0 job proc(s) currently held
02/12/19 17:39:49 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanfull512_json job(s)...
02/12/19 17:39:49 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 17:39:49 Masking the events recorded in the DAGMAN workflow log
02/12/19 17:39:49 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 17:39:49 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanfull512_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanfull512_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronscanfull512" omicronscanfull512_json.sub
02/12/19 17:39:49 From submit: Submitting job(s).
02/12/19 17:39:49 From submit: 1 job(s) submitted to cluster 283595.
02/12/19 17:39:49 assigned HTCondor ID (283595.0.0)
02/12/19 17:39:49 Just submitted 1 job this cycle...
02/12/19 17:39:49 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:39:49 Of 38 nodes total:
02/12/19 17:39:49 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:39:49 === === === === === === ===
02/12/19 17:39:49 25 0 7 0 0 6 0
02/12/19 17:39:49 0 job proc(s) currently held
02/12/19 17:39:54 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:39:55 Reassigning the id of job test_20190212_narnaud_7_omicronscanfull512_json from (283595.0.0) to (283595.0.0)
02/12/19 17:39:55 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanfull512_json (283595.0.0) {02/12/19 17:39:49}
02/12/19 17:39:55 Number of idle job procs: 1
02/12/19 17:39:55 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512_json (283595.0.0) {02/12/19 17:39:50}
02/12/19 17:39:55 Number of idle job procs: 0
02/12/19 17:39:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull512_json (283595.0.0) {02/12/19 17:39:51}
02/12/19 17:39:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanfull512_json (283595.0.0) {02/12/19 17:39:51}
02/12/19 17:39:55 Number of idle job procs: 0
02/12/19 17:39:55 Node test_20190212_narnaud_7_omicronscanfull512_json job proc (283595.0.0) completed successfully.
02/12/19 17:39:55 Node test_20190212_narnaud_7_omicronscanfull512_json job completed
02/12/19 17:39:55 DAG status: 0 (DAG_STATUS_OK)
02/12/19 17:39:55 Of 38 nodes total:
02/12/19 17:39:55 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 17:39:55 === === === === === === ===
02/12/19 17:39:55 26 0 6 0 0 6 0
02/12/19 17:39:55 0 job proc(s) currently held
02/12/19 17:43:40 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:43:40 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:43:40}
02/12/19 17:43:45 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:43:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:43:40}
02/12/19 17:48:40 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:48:40 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:48:40}
02/12/19 17:48:45 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:48:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:48:41}
02/12/19 17:48:55 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:48:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 17:48:51}
02/12/19 17:53:40 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:53:40 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 17:53:40}
02/12/19 17:53:45 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:53:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:53:41}
02/12/19 17:58:46 Currently monitoring 1 HTCondor log file(s)
02/12/19 17:58:46 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 17:58:41}
02/12/19 18:00:16 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:00:16 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 18:00:13}
02/12/19 18:00:16 Number of idle job procs: 0
02/12/19 18:00:16 Node test_20190212_narnaud_7_omicronscanfull2048 job proc (283572.0.0) completed successfully.
02/12/19 18:00:16 Node test_20190212_narnaud_7_omicronscanfull2048 job completed
02/12/19 18:00:16 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 18:00:14}
02/12/19 18:00:16 BAD EVENT: job (283572.0.0) executing, total end count != 0 (1)
02/12/19 18:00:16 Continuing with DAG in spite of bad event (BAD EVENT: job (283572.0.0) executing, total end count != 0 (1)) because of allow_events setting
02/12/19 18:00:16 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:00:16 Of 38 nodes total:
02/12/19 18:00:16 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:00:16 === === === === === === ===
02/12/19 18:00:16 27 0 5 0 1 5 0
02/12/19 18:00:16 0 job proc(s) currently held
02/12/19 18:00:21 Submitting HTCondor Node test_20190212_narnaud_7_omicronscanfull2048_json job(s)...
02/12/19 18:00:21 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:00:21 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:00:21 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:00:21 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronscanfull2048_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronscanfull2048_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronscanfull2048" omicronscanfull2048_json.sub
02/12/19 18:00:21 From submit: Submitting job(s).
02/12/19 18:00:21 From submit: 1 job(s) submitted to cluster 283712.
02/12/19 18:00:21 assigned HTCondor ID (283712.0.0)
02/12/19 18:00:21 Just submitted 1 job this cycle...
02/12/19 18:00:21 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:00:21 Of 38 nodes total:
02/12/19 18:00:21 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:00:21 === === === === === === ===
02/12/19 18:00:21 27 0 6 0 0 5 0
02/12/19 18:00:21 0 job proc(s) currently held
02/12/19 18:00:26 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:00:26 Reassigning the id of job test_20190212_narnaud_7_omicronscanfull2048_json from (283712.0.0) to (283712.0.0)
02/12/19 18:00:26 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048_json (283712.0.0) {02/12/19 18:00:21}
02/12/19 18:00:26 Number of idle job procs: 1
02/12/19 18:00:26 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048_json (283712.0.0) {02/12/19 18:00:22}
02/12/19 18:00:26 Number of idle job procs: 0
02/12/19 18:00:26 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048_json (283712.0.0) {02/12/19 18:00:23}
02/12/19 18:00:26 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048_json (283712.0.0) {02/12/19 18:00:23}
02/12/19 18:00:26 Number of idle job procs: 0
02/12/19 18:00:26 Node test_20190212_narnaud_7_omicronscanfull2048_json job proc (283712.0.0) completed successfully.
02/12/19 18:00:26 Node test_20190212_narnaud_7_omicronscanfull2048_json job completed
02/12/19 18:00:26 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 18:00:23}
02/12/19 18:00:26 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:00:26 Of 38 nodes total:
02/12/19 18:00:26 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:00:26 === === === === === === ===
02/12/19 18:00:26 28 0 5 0 0 5 0
02/12/19 18:00:26 0 job proc(s) currently held
02/12/19 18:01:21 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:01:21 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 18:01:17}
02/12/19 18:01:21 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronscanfull2048 (283572.0.0) {02/12/19 18:01:17}
02/12/19 18:01:21 BAD EVENT: job (283572.0.0) ended, total end count != 1 (2)
02/12/19 18:01:21 Continuing with DAG in spite of bad event (BAD EVENT: job (283572.0.0) ended, total end count != 1 (2)) because of allow_events setting
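[Editor's note, not part of the attached log] The two "BAD EVENT" warnings above correspond to an ULOG_EXECUTE event for job proc (283572.0.0) arriving after that proc's ULOG_JOB_TERMINATED event, followed later by a second ULOG_JOB_TERMINATED for the same proc. As a purely illustrative aid, the sketch below scans a dagman.out file for exactly that pattern, using the "Event: ULOG_... for HTCondor Node ... (cluster.proc.sub)" line format visible above; the script name, file-path argument, and helper names are assumptions, not anything shipped with HTCondor.

#!/usr/bin/env python3
# Minimal sketch: flag the event sequences that trigger DAGMan's
# "BAD EVENT" warnings in a dagman.out file, i.e. an ULOG_EXECUTE or a
# second ULOG_JOB_TERMINATED seen for a job proc that already terminated.
import re
import sys
from collections import defaultdict

EVENT_RE = re.compile(
    r"Event: (ULOG_\w+) for HTCondor Node (\S+) \((\d+\.\d+\.\d+)\)"
)

def find_bad_sequences(path):
    terminated = defaultdict(int)  # job proc id -> TERMINATED events seen so far
    suspicious = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            m = EVENT_RE.search(line)
            if not m:
                continue
            event, node, proc = m.groups()
            if event == "ULOG_JOB_TERMINATED":
                terminated[proc] += 1
                if terminated[proc] > 1:
                    suspicious.append((lineno, node, proc, "second TERMINATED"))
            elif event == "ULOG_EXECUTE" and terminated[proc] > 0:
                suspicious.append((lineno, node, proc, "EXECUTE after TERMINATED"))
    return suspicious

if __name__ == "__main__":
    # Usage (hypothetical file name): python3 find_bad_events.py dqr_test.dag.dagman.out
    for lineno, node, proc, what in find_bad_sequences(sys.argv[1]):
        print(f"line {lineno}: {what} for {node} ({proc})")

Run against the log above, this would report the EXECUTE-after-TERMINATED at 18:00:16 and the duplicate TERMINATED at 18:01:21 for proc 283572.0.0.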
02/12/19 18:02:56 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:02:56 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_scan_logfiles (283576.0.0) {02/12/19 18:02:52}
02/12/19 18:02:56 Number of idle job procs: 0
02/12/19 18:02:56 Node test_20190212_narnaud_7_scan_logfiles job proc (283576.0.0) completed successfully.
02/12/19 18:02:56 Node test_20190212_narnaud_7_scan_logfiles job completed
02/12/19 18:02:56 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:02:56 Of 38 nodes total:
02/12/19 18:02:56 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:02:56 === === === === === === ===
02/12/19 18:02:56 29 0 4 0 0 5 0
02/12/19 18:02:56 0 job proc(s) currently held
02/12/19 18:04:31 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:04:31 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco_std (283583.0.0) {02/12/19 18:04:28}
02/12/19 18:04:31 Number of idle job procs: 0
02/12/19 18:04:31 Node test_20190212_narnaud_7_bruco_std job proc (283583.0.0) completed successfully.
02/12/19 18:04:31 Node test_20190212_narnaud_7_bruco_std job completed
02/12/19 18:04:31 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:04:31 Of 38 nodes total:
02/12/19 18:04:31 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:04:31 === === === === === === ===
02/12/19 18:04:31 30 0 3 0 0 5 0
02/12/19 18:04:31 0 job proc(s) currently held
02/12/19 18:05:11 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:05:11 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 18:05:11}
02/12/19 18:05:11 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco_std-prev (283584.0.0) {02/12/19 18:05:11}
02/12/19 18:05:11 Number of idle job procs: 0
02/12/19 18:05:11 Node test_20190212_narnaud_7_bruco_std-prev job proc (283584.0.0) completed successfully.
02/12/19 18:05:11 Node test_20190212_narnaud_7_bruco_std-prev job completed
02/12/19 18:05:11 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:05:11 Of 38 nodes total:
02/12/19 18:05:11 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:05:11 === === === === === === ===
02/12/19 18:05:11 31 0 2 0 1 4 0
02/12/19 18:05:11 0 job proc(s) currently held
02/12/19 18:05:16 Submitting HTCondor Node test_20190212_narnaud_7_bruco_json job(s)...
02/12/19 18:05:16 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:05:16 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:05:16 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:05:16 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_bruco_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_bruco_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_bruco_std,test_20190212_narnaud_7_bruco_std-prev,test_20190212_narnaud_7_bruco_env,test_20190212_narnaud_7_bruco_env-prev" bruco_json.sub
02/12/19 18:05:17 From submit: Submitting job(s).
02/12/19 18:05:17 From submit: 1 job(s) submitted to cluster 283713.
02/12/19 18:05:17 assigned HTCondor ID (283713.0.0)
02/12/19 18:05:17 Just submitted 1 job this cycle...
02/12/19 18:05:17 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:05:17 Of 38 nodes total:
02/12/19 18:05:17 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:05:17 === === === === === === ===
02/12/19 18:05:17 31 0 3 0 0 4 0
02/12/19 18:05:17 0 job proc(s) currently held
02/12/19 18:05:22 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:05:22 Reassigning the id of job test_20190212_narnaud_7_bruco_json from (283713.0.0) to (283713.0.0)
02/12/19 18:05:22 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_bruco_json (283713.0.0) {02/12/19 18:05:17}
02/12/19 18:05:22 Number of idle job procs: 1
02/12/19 18:05:22 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_bruco_json (283713.0.0) {02/12/19 18:05:17}
02/12/19 18:05:22 Number of idle job procs: 0
02/12/19 18:05:22 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_bruco_json (283713.0.0) {02/12/19 18:05:17}
02/12/19 18:05:22 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_bruco_json (283713.0.0) {02/12/19 18:05:17}
02/12/19 18:05:22 Number of idle job procs: 0
02/12/19 18:05:22 Node test_20190212_narnaud_7_bruco_json job proc (283713.0.0) completed successfully.
02/12/19 18:05:22 Node test_20190212_narnaud_7_bruco_json job completed
02/12/19 18:05:22 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:05:22 Of 38 nodes total:
02/12/19 18:05:22 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:05:22 === === === === === === ===
02/12/19 18:05:22 32 0 2 0 0 4 0
02/12/19 18:05:22 0 job proc(s) currently held
02/12/19 18:06:52 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:06:52 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_upv (283578.0.0) {02/12/19 18:06:50}
02/12/19 18:06:52 Number of idle job procs: 0
02/12/19 18:06:52 Node test_20190212_narnaud_7_upv job proc (283578.0.0) completed successfully.
02/12/19 18:06:52 Node test_20190212_narnaud_7_upv job completed
02/12/19 18:06:52 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronplot (283574.0.0) {02/12/19 18:06:51}
02/12/19 18:06:52 Number of idle job procs: 0
02/12/19 18:06:52 Node test_20190212_narnaud_7_omicronplot job proc (283574.0.0) completed successfully.
02/12/19 18:06:52 Node test_20190212_narnaud_7_omicronplot job completed
02/12/19 18:06:52 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:06:52 Of 38 nodes total:
02/12/19 18:06:52 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:06:52 === === === === === === ===
02/12/19 18:06:52 34 0 0 0 2 2 0
02/12/19 18:06:52 0 job proc(s) currently held
02/12/19 18:06:57 Submitting HTCondor Node test_20190212_narnaud_7_upv_exe job(s)...
02/12/19 18:06:57 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:06:57 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:06:57 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:06:57 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_upv_exe -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_upv_exe -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_upv" upv_exe.sub
02/12/19 18:06:57 From submit: Submitting job(s).
02/12/19 18:06:57 From submit: 1 job(s) submitted to cluster 283714.
02/12/19 18:06:57 assigned HTCondor ID (283714.0.0)
02/12/19 18:06:57 Submitting HTCondor Node test_20190212_narnaud_7_omicronplot_exe job(s)...
02/12/19 18:06:57 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:06:57 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:06:57 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:06:57 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronplot_exe -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronplot_exe -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronplot" omicronplot_exe.sub
02/12/19 18:06:57 From submit: Submitting job(s).
02/12/19 18:06:57 From submit: 1 job(s) submitted to cluster 283715.
02/12/19 18:06:57 assigned HTCondor ID (283715.0.0)
02/12/19 18:06:57 Just submitted 2 jobs this cycle...
02/12/19 18:06:57 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:06:57 Of 38 nodes total:
02/12/19 18:06:57 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:06:57 === === === === === === ===
02/12/19 18:06:57 34 0 2 0 0 2 0
02/12/19 18:06:57 0 job proc(s) currently held
02/12/19 18:07:02 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:07:02 Reassigning the id of job test_20190212_narnaud_7_upv_exe from (283714.0.0) to (283714.0.0)
02/12/19 18:07:02 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:06:57}
02/12/19 18:07:02 Number of idle job procs: 1
02/12/19 18:07:02 Reassigning the id of job test_20190212_narnaud_7_omicronplot_exe from (283715.0.0) to (283715.0.0)
02/12/19 18:07:02 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronplot_exe (283715.0.0) {02/12/19 18:06:57}
02/12/19 18:07:02 Number of idle job procs: 2
02/12/19 18:07:02 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:06:57}
02/12/19 18:07:02 Number of idle job procs: 1
02/12/19 18:07:02 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronplot_exe (283715.0.0) {02/12/19 18:06:57}
02/12/19 18:07:02 Number of idle job procs: 0
02/12/19 18:07:07 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:07:07 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronplot_exe (283715.0.0) {02/12/19 18:07:03}
02/12/19 18:07:07 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronplot_exe (283715.0.0) {02/12/19 18:07:03}
02/12/19 18:07:07 Number of idle job procs: 0
02/12/19 18:07:07 Node test_20190212_narnaud_7_omicronplot_exe job proc (283715.0.0) completed successfully.
02/12/19 18:07:07 Node test_20190212_narnaud_7_omicronplot_exe job completed
02/12/19 18:07:07 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:07:05}
02/12/19 18:07:07 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:07:07 Of 38 nodes total:
02/12/19 18:07:07 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:07:07 === === === === === === ===
02/12/19 18:07:07 35 0 1 0 1 1 0
02/12/19 18:07:07 0 job proc(s) currently held
02/12/19 18:07:12 Submitting HTCondor Node test_20190212_narnaud_7_omicronplot_json job(s)...
02/12/19 18:07:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:07:12 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:07:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:07:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_omicronplot_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_omicronplot_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_omicronplot_exe" omicronplot_json.sub
02/12/19 18:07:12 From submit: Submitting job(s).
02/12/19 18:07:12 From submit: 1 job(s) submitted to cluster 283716.
02/12/19 18:07:12 assigned HTCondor ID (283716.0.0)
02/12/19 18:07:12 Just submitted 1 job this cycle...
02/12/19 18:07:12 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:07:12 Of 38 nodes total:
02/12/19 18:07:12 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:07:12 === === === === === === ===
02/12/19 18:07:12 35 0 2 0 0 1 0
02/12/19 18:07:12 0 job proc(s) currently held
02/12/19 18:07:17 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:07:17 Reassigning the id of job test_20190212_narnaud_7_omicronplot_json from (283716.0.0) to (283716.0.0)
02/12/19 18:07:17 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_omicronplot_json (283716.0.0) {02/12/19 18:07:12}
02/12/19 18:07:17 Number of idle job procs: 1
02/12/19 18:07:17 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_omicronplot_json (283716.0.0) {02/12/19 18:07:12}
02/12/19 18:07:17 Number of idle job procs: 0
02/12/19 18:07:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_omicronplot_json (283716.0.0) {02/12/19 18:07:12}
02/12/19 18:07:17 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_omicronplot_json (283716.0.0) {02/12/19 18:07:12}
02/12/19 18:07:17 Number of idle job procs: 0
02/12/19 18:07:17 Node test_20190212_narnaud_7_omicronplot_json job proc (283716.0.0) completed successfully.
02/12/19 18:07:17 Node test_20190212_narnaud_7_omicronplot_json job completed
02/12/19 18:07:17 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:07:17 Of 38 nodes total:
02/12/19 18:07:17 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:07:17 === === === === === === ===
02/12/19 18:07:17 36 0 1 0 0 1 0
02/12/19 18:07:17 0 job proc(s) currently held
02/12/19 18:12:07 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:12:07 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:12:06}
02/12/19 18:17:08 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:17:08 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:17:07}
02/12/19 18:17:23 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:17:23 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_upv_exe (283714.0.0) {02/12/19 18:17:22}
02/12/19 18:17:23 Number of idle job procs: 0
02/12/19 18:17:23 Node test_20190212_narnaud_7_upv_exe job proc (283714.0.0) completed successfully.
02/12/19 18:17:23 Node test_20190212_narnaud_7_upv_exe job completed
02/12/19 18:17:23 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:17:23 Of 38 nodes total:
02/12/19 18:17:23 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:17:23 === === === === === === ===
02/12/19 18:17:23 37 0 0 0 1 0 0
02/12/19 18:17:23 0 job proc(s) currently held
02/12/19 18:17:28 Submitting HTCondor Node test_20190212_narnaud_7_upv_json job(s)...
02/12/19 18:17:28 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log
02/12/19 18:17:28 Masking the events recorded in the DAGMAN workflow log
02/12/19 18:17:28 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/12/19 18:17:28 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190212_narnaud_7_upv_json -a +DAGManJobId' '=' '283563 -a DAGManJobId' '=' '283563 -batch-name dqr_test_20190212_narnaud_7.dag+283563 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190212_narnaud_7_upv_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag/./dqr_test_20190212_narnaud_7.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190212_narnaud_7/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190212_narnaud_7_upv_exe" upv_json.sub
02/12/19 18:17:28 From submit: Submitting job(s).
02/12/19 18:17:28 From submit: 1 job(s) submitted to cluster 283727.
02/12/19 18:17:28 assigned HTCondor ID (283727.0.0)
02/12/19 18:17:28 Just submitted 1 job this cycle...
02/12/19 18:17:28 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:17:28 Of 38 nodes total:
02/12/19 18:17:28 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:17:28 === === === === === === ===
02/12/19 18:17:28 37 0 1 0 0 0 0
02/12/19 18:17:28 0 job proc(s) currently held
02/12/19 18:17:33 Currently monitoring 1 HTCondor log file(s)
02/12/19 18:17:33 Reassigning the id of job test_20190212_narnaud_7_upv_json from (283727.0.0) to (283727.0.0)
02/12/19 18:17:33 Event: ULOG_SUBMIT for HTCondor Node test_20190212_narnaud_7_upv_json (283727.0.0) {02/12/19 18:17:28}
02/12/19 18:17:33 Number of idle job procs: 1
02/12/19 18:17:33 Event: ULOG_EXECUTE for HTCondor Node test_20190212_narnaud_7_upv_json (283727.0.0) {02/12/19 18:17:28}
02/12/19 18:17:33 Number of idle job procs: 0
02/12/19 18:17:33 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190212_narnaud_7_upv_json (283727.0.0) {02/12/19 18:17:30}
02/12/19 18:17:33 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190212_narnaud_7_upv_json (283727.0.0) {02/12/19 18:17:30}
02/12/19 18:17:33 Number of idle job procs: 0
02/12/19 18:17:33 Node test_20190212_narnaud_7_upv_json job proc (283727.0.0) completed successfully.
02/12/19 18:17:33 Node test_20190212_narnaud_7_upv_json job completed
02/12/19 18:17:33 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:17:33 Of 38 nodes total:
02/12/19 18:17:33 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:17:33 === === === === === === ===
02/12/19 18:17:33 38 0 0 0 0 0 0
02/12/19 18:17:33 0 job proc(s) currently held
02/12/19 18:17:33 Warning checking HTCondor job events: BAD EVENT: job (283572.0.0) ended, total end count != 1 (2)
02/12/19 18:17:33 All jobs Completed!
02/12/19 18:17:33 Note: 0 total job deferrals because of -MaxJobs limit (0)
02/12/19 18:17:33 Note: 0 total job deferrals because of -MaxIdle limit (1000)
02/12/19 18:17:33 Note: 0 total job deferrals because of node category throttles
02/12/19 18:17:33 Note: 0 total PRE script deferrals because of -MaxPre limit (20) or DEFER
02/12/19 18:17:33 Note: 0 total POST script deferrals because of -MaxPost limit (20) or DEFER
02/12/19 18:17:33 DAG status: 0 (DAG_STATUS_OK)
02/12/19 18:17:33 Of 38 nodes total:
02/12/19 18:17:33 Done Pre Queued Post Ready Un-Ready Failed
02/12/19 18:17:33 === === === === === === ===
02/12/19 18:17:33 38 0 0 0 0 0 0
02/12/19 18:17:33 0 job proc(s) currently held
02/12/19 18:17:33 Wrote metrics file dqr_test_20190212_narnaud_7.dag.metrics.
02/12/19 18:17:33 Metrics not sent because of PEGASUS_METRICS or CONDOR_DEVELOPERS setting.
02/12/19 18:17:33 **** condor_scheduniv_exec.283563.0 (condor_DAGMAN) pid 150593 EXITING WITH STATUS 0
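
A side note on the warning visible near the end of the log above ("BAD EVENT: job (283572.0.0) ended, total end count != 1 (2)"): whether DAGMan merely warns or aborts on such an event is governed by the allow_events bitmask printed at startup (114 in the 20190207 log further down). The snippet below is only a sketch of the arithmetic -- it decomposes the mask into its individual bits; the mapping of each bit to a class of tolerated "bad" event is defined in the HTCondor manual, which remains the authoritative reference.

  # Hypothetical helper: split a DAGMAN_ALLOW_EVENTS bitmask into its set bits.
  # What each bit tolerates is documented in the HTCondor manual; this only
  # does the arithmetic.
  def allow_events_bits(mask: int) -> list[int]:
      return [1 << i for i in range(mask.bit_length()) if mask & (1 << i)]

  print(allow_events_bits(114))  # -> [2, 16, 32, 64]
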
universe = vanilla
executable = [EXE]
arguments = "[ARGS]"
priority = [PRIORITY]
getenv = True
error = [OUTDIR]/[TESTNAME]/logs/$(cluster)-$(process)-$$(Name).err
output = [OUTDIR]/[TESTNAME]/logs/$(cluster)-$(process)-$$(Name).out
notification = never
+Experiment = "DetChar"
+AccountingGroup = "virgo.prod.o3.detchar.transient.dqr"
queue 1
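
For completeness: the per-task .sub files are generated from the template above by plain placeholder substitution. The snippet below is a minimal sketch of that step (hypothetical function and file names, not our actual generator), assuming the bracketed tokens ([EXE], [ARGS], [PRIORITY], [OUTDIR], [TESTNAME]) are replaced verbatim.

  # Minimal sketch (not the actual generator): fill the [TOKEN] placeholders
  # of the template to produce one .sub file per task.
  from pathlib import Path

  def make_sub(template: str, task: str, exe: str, args: str,
               outdir: str, testname: str, priority: int = 0) -> Path:
      sub = (template.replace("[EXE]", exe)
                     .replace("[ARGS]", args)
                     .replace("[PRIORITY]", str(priority))
                     .replace("[OUTDIR]", outdir)
                     .replace("[TESTNAME]", testname))
      out = Path(f"{task}.sub")
      out.write_text(sub)
      return out

  # e.g. (hypothetical values):
  # make_sub(Path("template.sub").read_text(), "upv_json",
  #          "/path/to/upv_json", "--option value",
  #          "/data/procdata/web/dqr", "test_20190212_narnaud_7")
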
JOB test_20190207_narnaud_2_gps_numerology gps_numerology.sub
VARS test_20190207_narnaud_2_gps_numerology initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_gps_numerology 1
JOB test_20190207_narnaud_2_virgo_noise virgo_noise.sub
VARS test_20190207_narnaud_2_virgo_noise initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_virgo_noise 0
JOB test_20190207_narnaud_2_virgo_noise_json virgo_noise_json.sub
VARS test_20190207_narnaud_2_virgo_noise_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_virgo_noise_json 0
PARENT test_20190207_narnaud_2_virgo_noise CHILD test_20190207_narnaud_2_virgo_noise_json
JOB test_20190207_narnaud_2_virgo_status virgo_status.sub
VARS test_20190207_narnaud_2_virgo_status initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_virgo_status 1
JOB test_20190207_narnaud_2_dqprint_brmsmon dqprint_brmsmon.sub
VARS test_20190207_narnaud_2_dqprint_brmsmon initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_dqprint_brmsmon 0
JOB test_20190207_narnaud_2_dqprint_brmsmon_json dqprint_brmsmon_json.sub
VARS test_20190207_narnaud_2_dqprint_brmsmon_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_dqprint_brmsmon_json 0
PARENT test_20190207_narnaud_2_dqprint_brmsmon CHILD test_20190207_narnaud_2_dqprint_brmsmon_json
JOB test_20190207_narnaud_2_dqprint_dqflags dqprint_dqflags.sub
VARS test_20190207_narnaud_2_dqprint_dqflags initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_dqprint_dqflags 0
JOB test_20190207_narnaud_2_dqprint_dqflags_json dqprint_dqflags_json.sub
VARS test_20190207_narnaud_2_dqprint_dqflags_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_dqprint_dqflags_json 0
PARENT test_20190207_narnaud_2_dqprint_dqflags CHILD test_20190207_narnaud_2_dqprint_dqflags_json
JOB test_20190207_narnaud_2_omicronscanhoftV1 omicronscanhoftV1.sub
VARS test_20190207_narnaud_2_omicronscanhoftV1 initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftV1 0
JOB test_20190207_narnaud_2_omicronscanhoftV1_json omicronscanhoftV1_json.sub
VARS test_20190207_narnaud_2_omicronscanhoftV1_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftV1_json 0
PARENT test_20190207_narnaud_2_omicronscanhoftV1 CHILD test_20190207_narnaud_2_omicronscanhoftV1_json
JOB test_20190207_narnaud_2_omicronscanhoftH1 omicronscanhoftH1.sub
VARS test_20190207_narnaud_2_omicronscanhoftH1 initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftH1 0
JOB test_20190207_narnaud_2_omicronscanhoftH1_json omicronscanhoftH1_json.sub
VARS test_20190207_narnaud_2_omicronscanhoftH1_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftH1_json 0
PARENT test_20190207_narnaud_2_omicronscanhoftH1 CHILD test_20190207_narnaud_2_omicronscanhoftH1_json
JOB test_20190207_narnaud_2_omicronscanhoftL1 omicronscanhoftL1.sub
VARS test_20190207_narnaud_2_omicronscanhoftL1 initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftL1 0
JOB test_20190207_narnaud_2_omicronscanhoftL1_json omicronscanhoftL1_json.sub
VARS test_20190207_narnaud_2_omicronscanhoftL1_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanhoftL1_json 0
PARENT test_20190207_narnaud_2_omicronscanhoftL1 CHILD test_20190207_narnaud_2_omicronscanhoftL1_json
JOB test_20190207_narnaud_2_omicronscanfull2048 omicronscanfull2048.sub
VARS test_20190207_narnaud_2_omicronscanfull2048 initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanfull2048 0
JOB test_20190207_narnaud_2_omicronscanfull2048_json omicronscanfull2048_json.sub
VARS test_20190207_narnaud_2_omicronscanfull2048_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanfull2048_json 0
PARENT test_20190207_narnaud_2_omicronscanfull2048 CHILD test_20190207_narnaud_2_omicronscanfull2048_json
JOB test_20190207_narnaud_2_omicronscanfull512 omicronscanfull512.sub
VARS test_20190207_narnaud_2_omicronscanfull512 initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanfull512 0
JOB test_20190207_narnaud_2_omicronscanfull512_json omicronscanfull512_json.sub
VARS test_20190207_narnaud_2_omicronscanfull512_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronscanfull512_json 0
PARENT test_20190207_narnaud_2_omicronscanfull512 CHILD test_20190207_narnaud_2_omicronscanfull512_json
JOB test_20190207_narnaud_2_omicronplot omicronplot.sub
VARS test_20190207_narnaud_2_omicronplot initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronplot 0
JOB test_20190207_narnaud_2_omicronplot_exe omicronplot_exe.sub
VARS test_20190207_narnaud_2_omicronplot_exe initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronplot_exe 0
PARENT test_20190207_narnaud_2_omicronplot CHILD test_20190207_narnaud_2_omicronplot_exe
JOB test_20190207_narnaud_2_omicronplot_json omicronplot_json.sub
VARS test_20190207_narnaud_2_omicronplot_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_omicronplot_json 0
PARENT test_20190207_narnaud_2_omicronplot_exe CHILD test_20190207_narnaud_2_omicronplot_json
JOB test_20190207_narnaud_2_query_ingv_public_data query_ingv_public_data.sub
VARS test_20190207_narnaud_2_query_ingv_public_data initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_query_ingv_public_data 1
JOB test_20190207_narnaud_2_scan_logfiles scan_logfiles.sub
VARS test_20190207_narnaud_2_scan_logfiles initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_scan_logfiles 1
JOB test_20190207_narnaud_2_decode_DMS_snapshots decode_DMS_snapshots.sub
VARS test_20190207_narnaud_2_decode_DMS_snapshots initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_decode_DMS_snapshots 1
JOB test_20190207_narnaud_2_upv upv.sub
VARS test_20190207_narnaud_2_upv initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_upv 1
JOB test_20190207_narnaud_2_upv_exe upv_exe.sub
VARS test_20190207_narnaud_2_upv_exe initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_upv_exe 1
PARENT test_20190207_narnaud_2_upv CHILD test_20190207_narnaud_2_upv_exe
JOB test_20190207_narnaud_2_upv_json upv_json.sub
VARS test_20190207_narnaud_2_upv_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_upv_json 1
PARENT test_20190207_narnaud_2_upv_exe CHILD test_20190207_narnaud_2_upv_json
JOB test_20190207_narnaud_2_bruco bruco.sub
VARS test_20190207_narnaud_2_bruco initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco 1
JOB test_20190207_narnaud_2_bruco_std bruco_std.sub
VARS test_20190207_narnaud_2_bruco_std initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco_std 1
PARENT test_20190207_narnaud_2_bruco CHILD test_20190207_narnaud_2_bruco_std
JOB test_20190207_narnaud_2_bruco_std-prev bruco_std-prev.sub
VARS test_20190207_narnaud_2_bruco_std-prev initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco_std-prev 1
PARENT test_20190207_narnaud_2_bruco CHILD test_20190207_narnaud_2_bruco_std-prev
JOB test_20190207_narnaud_2_bruco_env bruco_env.sub
VARS test_20190207_narnaud_2_bruco_env initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco_env 1
PARENT test_20190207_narnaud_2_bruco CHILD test_20190207_narnaud_2_bruco_env
JOB test_20190207_narnaud_2_bruco_env-prev bruco_env-prev.sub
VARS test_20190207_narnaud_2_bruco_env-prev initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco_env-prev 1
PARENT test_20190207_narnaud_2_bruco CHILD test_20190207_narnaud_2_bruco_env-prev
JOB test_20190207_narnaud_2_bruco_json bruco_json.sub
VARS test_20190207_narnaud_2_bruco_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_bruco_json 1
PARENT test_20190207_narnaud_2_bruco_std CHILD test_20190207_narnaud_2_bruco_json
PARENT test_20190207_narnaud_2_bruco_std-prev CHILD test_20190207_narnaud_2_bruco_json
PARENT test_20190207_narnaud_2_bruco_env CHILD test_20190207_narnaud_2_bruco_json
PARENT test_20190207_narnaud_2_bruco_env-prev CHILD test_20190207_narnaud_2_bruco_json
JOB test_20190207_narnaud_2_data_ref_comparison_INJ data_ref_comparison_INJ.sub
VARS test_20190207_narnaud_2_data_ref_comparison_INJ initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_data_ref_comparison_INJ 1
JOB test_20190207_narnaud_2_data_ref_comparison_INJ_comparison data_ref_comparison_INJ_comparison.sub
VARS test_20190207_narnaud_2_data_ref_comparison_INJ_comparison initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_data_ref_comparison_INJ_comparison 1
PARENT test_20190207_narnaud_2_data_ref_comparison_INJ CHILD test_20190207_narnaud_2_data_ref_comparison_INJ_comparison
JOB test_20190207_narnaud_2_data_ref_comparison_ISC data_ref_comparison_ISC.sub
VARS test_20190207_narnaud_2_data_ref_comparison_ISC initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_data_ref_comparison_ISC 1
JOB test_20190207_narnaud_2_data_ref_comparison_ISC_comparison data_ref_comparison_ISC_comparison.sub
VARS test_20190207_narnaud_2_data_ref_comparison_ISC_comparison initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_data_ref_comparison_ISC_comparison 1
PARENT test_20190207_narnaud_2_data_ref_comparison_ISC CHILD test_20190207_narnaud_2_data_ref_comparison_ISC_comparison
JOB test_20190207_narnaud_2_generate_dqr_json generate_dqr_json.sub
VARS test_20190207_narnaud_2_generate_dqr_json initialdir="/data/procdata/web/dqr/test_20190207_narnaud_2/dag"
RETRY test_20190207_narnaud_2_generate_dqr_json 0
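
The .dag file above is itself machine-generated; here is a minimal sketch (hypothetical helper, not our production code) of the kind of loop that writes the JOB / VARS / RETRY lines and the PARENT ... CHILD ... dependencies in that format.

  # Minimal sketch: emit JOB/VARS/RETRY lines plus PARENT/CHILD dependencies
  # in the same format as the DAG file above.
  def write_dag(path, tag, initialdir, tasks, deps, retries):
      # tasks: task names; deps: {child: [parents]}; retries: {task: n}
      with open(path, "w") as dag:
          for task in tasks:
              node = f"{tag}_{task}"
              dag.write(f"JOB {node} {task}.sub\n")
              dag.write(f'VARS {node} initialdir="{initialdir}"\n')
              dag.write(f"RETRY {node} {retries.get(task, 0)}\n")
          for child, parents in deps.items():
              for parent in parents:
                  dag.write(f"PARENT {tag}_{parent} CHILD {tag}_{child}\n")

  # e.g. (hypothetical call):
  # write_dag("dqr_test.dag", "test_20190207_narnaud_2",
  #           "/data/procdata/web/dqr/test_20190207_narnaud_2/dag",
  #           ["virgo_noise", "virgo_noise_json"],
  #           {"virgo_noise_json": ["virgo_noise"]},
  #           {"virgo_noise": 0, "virgo_noise_json": 0})
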
000 (281365.000.000) 02/07 16:09:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_gps_numerology
...
000 (281366.000.000) 02/07 16:09:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_virgo_noise
...
000 (281367.000.000) 02/07 16:09:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_virgo_status
...
000 (281368.000.000) 02/07 16:09:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_dqprint_brmsmon
...
000 (281369.000.000) 02/07 16:09:02 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_dqprint_dqflags
...
001 (281366.000.000) 02/07 16:09:04 Job executing on host: <90.147.139.47:9618?addrs=90.147.139.47-9618+[--1]-9618&noUDP&sock=16201_1def_3>
...
001 (281368.000.000) 02/07 16:09:04 Job executing on host: <90.147.139.52:9618?addrs=90.147.139.52-9618+[--1]-9618&noUDP&sock=17699_9ece_3>
...
001 (281367.000.000) 02/07 16:09:04 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=11932_f021_3>
...
001 (281369.000.000) 02/07 16:09:04 Job executing on host: <90.147.139.48:9618?addrs=90.147.139.48-9618+[--1]-9618&noUDP&sock=14762_1526_3>
...
001 (281365.000.000) 02/07 16:09:04 Job executing on host: <90.147.139.62:9618?addrs=90.147.139.62-9618+[--1]-9618&noUDP&sock=15582_88cf_3>
...
000 (281370.000.000) 02/07 16:09:07 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronscanhoftV1
...
000 (281371.000.000) 02/07 16:09:07 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronscanhoftH1
...
000 (281372.000.000) 02/07 16:09:07 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronscanhoftL1
...
000 (281373.000.000) 02/07 16:09:07 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronscanfull2048
...
000 (281374.000.000) 02/07 16:09:07 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronscanfull512
...
006 (281368.000.000) 02/07 16:09:08 Image size of job updated: 75
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281368.000.000) 02/07 16:09:08 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 75 75 90566
Memory (MB) : 0 1 1
...
001 (281370.000.000) 02/07 16:09:08 Job executing on host: <90.147.139.52:9618?addrs=90.147.139.52-9618+[--1]-9618&noUDP&sock=17699_9ece_3>
...
006 (281366.000.000) 02/07 16:09:09 Image size of job updated: 1
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
006 (281369.000.000) 02/07 16:09:09 Image size of job updated: 75
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281366.000.000) 02/07 16:09:09 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:03, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:03, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 1 1 90421
Memory (MB) : 0 1 1
...
005 (281369.000.000) 02/07 16:09:09 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 75 75 90478
Memory (MB) : 0 1 1
...
001 (281371.000.000) 02/07 16:09:09 Job executing on host: <90.147.139.48:9618?addrs=90.147.139.48-9618+[--1]-9618&noUDP&sock=14762_1526_3>
...
001 (281372.000.000) 02/07 16:09:09 Job executing on host: <90.147.139.47:9618?addrs=90.147.139.47-9618+[--1]-9618&noUDP&sock=16201_1def_3>
...
000 (281375.000.000) 02/07 16:09:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_omicronplot
...
000 (281376.000.000) 02/07 16:09:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_query_ingv_public_data
...
000 (281377.000.000) 02/07 16:09:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_scan_logfiles
...
000 (281378.000.000) 02/07 16:09:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_decode_DMS_snapshots
...
000 (281379.000.000) 02/07 16:09:12 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_upv
...
006 (281367.000.000) 02/07 16:09:12 Image size of job updated: 85148
84 - MemoryUsage of job (MB)
85144 - ResidentSetSize of job (KB)
...
006 (281365.000.000) 02/07 16:09:12 Image size of job updated: 84044
83 - MemoryUsage of job (MB)
84040 - ResidentSetSize of job (KB)
...
006 (281365.000.000) 02/07 16:09:13 Image size of job updated: 84100
83 - MemoryUsage of job (MB)
84096 - ResidentSetSize of job (KB)
...
005 (281365.000.000) 02/07 16:09:13 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 15 15 90582
Memory (MB) : 83 1 1
...
001 (281376.000.000) 02/07 16:09:14 Job executing on host: <90.147.139.62:9618?addrs=90.147.139.62-9618+[--1]-9618&noUDP&sock=15582_88cf_3>
...
006 (281376.000.000) 02/07 16:09:16 Image size of job updated: 7
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281376.000.000) 02/07 16:09:16 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90582
Memory (MB) : 0 1 1
...
001 (281373.000.000) 02/07 16:09:16 Job executing on host: <90.147.139.62:9618?addrs=90.147.139.62-9618+[--1]-9618&noUDP&sock=15582_88cf_3>
...
006 (281371.000.000) 02/07 16:09:17 Image size of job updated: 5087800
4969 - MemoryUsage of job (MB)
5087800 - ResidentSetSize of job (KB)
...
006 (281372.000.000) 02/07 16:09:17 Image size of job updated: 73024
29 - MemoryUsage of job (MB)
29540 - ResidentSetSize of job (KB)
...
000 (281380.000.000) 02/07 16:09:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_bruco
...
000 (281381.000.000) 02/07 16:09:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_data_ref_comparison_INJ
...
000 (281382.000.000) 02/07 16:09:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_data_ref_comparison_ISC
...
000 (281383.000.000) 02/07 16:09:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_generate_dqr_json
...
000 (281384.000.000) 02/07 16:09:17 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_dqprint_brmsmon_json
...
006 (281370.000.000) 02/07 16:09:17 Image size of job updated: 5086276
4968 - MemoryUsage of job (MB)
5086276 - ResidentSetSize of job (KB)
...
000 (281385.000.000) 02/07 16:09:22 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_virgo_noise_json
...
000 (281386.000.000) 02/07 16:09:22 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_dqprint_dqflags_json
...
001 (281383.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.62:9618?addrs=90.147.139.62-9618+[--1]-9618&noUDP&sock=15582_88cf_3>
...
001 (281381.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.47:9618?addrs=90.147.139.47-9618+[--1]-9618&noUDP&sock=16201_1def_3>
...
001 (281385.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.48:9618?addrs=90.147.139.48-9618+[--1]-9618&noUDP&sock=14762_1526_3>
...
001 (281386.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.50:9618?addrs=90.147.139.50-9618+[--1]-9618&noUDP&sock=13815_84f6_3>
...
001 (281382.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=11932_f021_3>
...
001 (281374.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.83:9618?addrs=90.147.139.83-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
001 (281384.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.52:9618?addrs=90.147.139.52-9618+[--1]-9618&noUDP&sock=17699_9ece_3>
...
001 (281379.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.76:9618?addrs=90.147.139.76-9618+[--1]-9618&noUDP&sock=3371_cdd8_3>
...
001 (281378.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.68:9618?addrs=90.147.139.68-9618+[--1]-9618&noUDP&sock=13894_b4f2_3>
...
001 (281377.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.49:9618?addrs=90.147.139.49-9618+[--1]-9618&noUDP&sock=14991_2857_3>
...
001 (281380.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.63:9618?addrs=90.147.139.63-9618+[--1]-9618&noUDP&sock=15658_aa02_3>
...
001 (281375.000.000) 02/07 16:09:25 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=17331_33f4_3>
...
006 (281380.000.000) 02/07 16:09:25 Image size of job updated: 35
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
006 (281385.000.000) 02/07 16:09:25 Image size of job updated: 2
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281380.000.000) 02/07 16:09:25 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90483
Memory (MB) : 0 1 1
...
005 (281385.000.000) 02/07 16:09:25 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 2 2 90478
Memory (MB) : 0 1 1
...
006 (281382.000.000) 02/07 16:09:25 Image size of job updated: 7
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281382.000.000) 02/07 16:09:25 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90480
Memory (MB) : 0 1 1
...
006 (281373.000.000) 02/07 16:09:26 Image size of job updated: 238512
188 - MemoryUsage of job (MB)
192004 - ResidentSetSize of job (KB)
...
006 (281384.000.000) 02/07 16:09:26 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281384.000.000) 02/07 16:09:26 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90566
Memory (MB) : 0 1 1
...
006 (281386.000.000) 02/07 16:09:26 Image size of job updated: 3
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281386.000.000) 02/07 16:09:26 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 3 3 90583
Memory (MB) : 0 1 1
...
006 (281383.000.000) 02/07 16:09:27 Image size of job updated: 10
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281383.000.000) 02/07 16:09:27 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 10 10 90582
Memory (MB) : 0 1 1
...
006 (281381.000.000) 02/07 16:09:28 Image size of job updated: 7
0 - MemoryUsage of job (MB)
0 - ResidentSetSize of job (KB)
...
005 (281381.000.000) 02/07 16:09:28 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 7 7 90421
Memory (MB) : 0 1 1
...
000 (281387.000.000) 02/07 16:09:32 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_bruco_std
...
000 (281388.000.000) 02/07 16:09:32 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_bruco_std-prev
...
000 (281389.000.000) 02/07 16:09:32 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_bruco_env
...
000 (281390.000.000) 02/07 16:09:32 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_bruco_env-prev
...
000 (281391.000.000) 02/07 16:09:32 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_data_ref_comparison_ISC_comparison
...
001 (281387.000.000) 02/07 16:09:33 Job executing on host: <90.147.139.47:9618?addrs=90.147.139.47-9618+[--1]-9618&noUDP&sock=16201_1def_3>
...
001 (281388.000.000) 02/07 16:09:33 Job executing on host: <90.147.139.63:9618?addrs=90.147.139.63-9618+[--1]-9618&noUDP&sock=15658_aa02_3>
...
001 (281389.000.000) 02/07 16:09:33 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=11932_f021_3>
...
006 (281374.000.000) 02/07 16:09:33 Image size of job updated: 1618940
1581 - MemoryUsage of job (MB)
1618940 - ResidentSetSize of job (KB)
...
006 (281379.000.000) 02/07 16:09:33 Image size of job updated: 100
1 - MemoryUsage of job (MB)
100 - ResidentSetSize of job (KB)
...
006 (281378.000.000) 02/07 16:09:33 Image size of job updated: 91364
90 - MemoryUsage of job (MB)
91360 - ResidentSetSize of job (KB)
...
006 (281377.000.000) 02/07 16:09:33 Image size of job updated: 89020
87 - MemoryUsage of job (MB)
88992 - ResidentSetSize of job (KB)
...
006 (281375.000.000) 02/07 16:09:33 Image size of job updated: 100
1 - MemoryUsage of job (MB)
100 - ResidentSetSize of job (KB)
...
005 (281378.000.000) 02/07 16:09:33 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:01, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:01, Sys 0 00:00:00 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 17 17 90467
Memory (MB) : 90 1 1
...
001 (281391.000.000) 02/07 16:09:33 Job executing on host: <90.147.139.68:9618?addrs=90.147.139.68-9618+[--1]-9618&noUDP&sock=13894_b4f2_3>
...
000 (281392.000.000) 02/07 16:09:38 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=1346832_4e48_3>
DAG Node: test_20190207_narnaud_2_data_ref_comparison_INJ_comparison
...
006 (281367.000.000) 02/07 16:09:38 Image size of job updated: 7527836
84 - MemoryUsage of job (MB)
85144 - ResidentSetSize of job (KB)
...
005 (281367.000.000) 02/07 16:09:38 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:21, Sys 0 00:00:04 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:21, Sys 0 00:00:04 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 30 30 90480
Memory (MB) : 84 1 1
...
001 (281392.000.000) 02/07 16:09:39 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=11932_f021_3>
...
006 (281387.000.000) 02/07 16:09:41 Image size of job updated: 1178116
1151 - MemoryUsage of job (MB)
1178112 - ResidentSetSize of job (KB)
...
006 (281389.000.000) 02/07 16:09:41 Image size of job updated: 1180280
1153 - MemoryUsage of job (MB)
1180276 - ResidentSetSize of job (KB)
...
006 (281388.000.000) 02/07 16:09:41 Image size of job updated: 1138736
1113 - MemoryUsage of job (MB)
1138732 - ResidentSetSize of job (KB)
...
006 (281391.000.000) 02/07 16:09:42 Image size of job updated: 215380
211 - MemoryUsage of job (MB)
215376 - ResidentSetSize of job (KB)
...
001 (281390.000.000) 02/07 16:09:45 Job executing on host: <90.147.139.62:9618?addrs=90.147.139.62-9618+[--1]-9618&noUDP&sock=15582_88cf_3>
...
006 (281392.000.000) 02/07 16:09:48 Image size of job updated: 220780
216 - MemoryUsage of job (MB)
220776 - ResidentSetSize of job (KB)
...
006 (281390.000.000) 02/07 16:09:53 Image size of job updated: 1276912
1247 - MemoryUsage of job (MB)
1276908 - ResidentSetSize of job (KB)
...
006 (281392.000.000) 02/07 16:10:38 Image size of job updated: 507828
216 - MemoryUsage of job (MB)
220776 - ResidentSetSize of job (KB)
...
005 (281392.000.000) 02/07 16:10:38 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:54, Sys 0 00:00:02 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:54, Sys 0 00:00:02 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 27 27 90480
Memory (MB) : 216 1 1
...
001 (281392.000.000) 02/07 16:10:39 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=11932_f021_3>
...
006 (281379.000.000) 02/07 16:10:43 Image size of job updated: 107960
1 - MemoryUsage of job (MB)
100 - ResidentSetSize of job (KB)
...
004 (281379.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90523
Memory (MB) : 1 1 1
...
006 (281375.000.000) 02/07 16:10:43 Image size of job updated: 107956
1 - MemoryUsage of job (MB)
100 - ResidentSetSize of job (KB)
...
004 (281375.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90575
Memory (MB) : 1 1 1
...
009 (281379.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
009 (281375.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281391.000.000) 02/07 16:10:43 Image size of job updated: 282528
211 - MemoryUsage of job (MB)
215376 - ResidentSetSize of job (KB)
...
004 (281391.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:06, Sys 0 00:00:00 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 27 27 90467
Memory (MB) : 211 1 1
...
009 (281391.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281377.000.000) 02/07 16:10:43 Image size of job updated: 89064
87 - MemoryUsage of job (MB)
88992 - ResidentSetSize of job (KB)
...
004 (281377.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:29, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 20 20 90585
Memory (MB) : 87 1 1
...
009 (281377.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281390.000.000) 02/07 16:10:43 Image size of job updated: 1277912
1247 - MemoryUsage of job (MB)
1276908 - ResidentSetSize of job (KB)
...
004 (281390.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:54, Sys 0 00:00:01 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90582
Memory (MB) : 1247 1 1
...
009 (281390.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281370.000.000) 02/07 16:10:43 Image size of job updated: 5105308
4968 - MemoryUsage of job (MB)
5086276 - ResidentSetSize of job (KB)
...
004 (281370.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:29, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90566
Memory (MB) : 4968 1 1
...
009 (281370.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281371.000.000) 02/07 16:10:43 Image size of job updated: 5103936
4969 - MemoryUsage of job (MB)
5087800 - ResidentSetSize of job (KB)
...
004 (281371.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:29, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90478
Memory (MB) : 4969 1 1
...
009 (281371.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281374.000.000) 02/07 16:10:43 Image size of job updated: 1964264
1582 - MemoryUsage of job (MB)
1619620 - ResidentSetSize of job (KB)
...
004 (281374.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:24, Sys 0 00:00:26 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90544
Memory (MB) : 1582 1 1
...
009 (281374.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281387.000.000) 02/07 16:10:43 Image size of job updated: 1179644
1151 - MemoryUsage of job (MB)
1178188 - ResidentSetSize of job (KB)
...
004 (281387.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:04, Sys 0 00:00:02 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90421
Memory (MB) : 1151 1 1
...
009 (281387.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281389.000.000) 02/07 16:10:43 Image size of job updated: 1789152
1153 - MemoryUsage of job (MB)
1180276 - ResidentSetSize of job (KB)
...
004 (281389.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:06, Sys 0 00:00:01 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90480
Memory (MB) : 1153 1 1
...
009 (281389.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281373.000.000) 02/07 16:10:43 Image size of job updated: 2277020
189 - MemoryUsage of job (MB)
193084 - ResidentSetSize of job (KB)
...
004 (281373.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:00:08, Sys 0 00:00:27 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90582
Memory (MB) : 189 1 1
...
006 (281372.000.000) 02/07 16:10:43 Image size of job updated: 5156020
29 - MemoryUsage of job (MB)
29540 - ResidentSetSize of job (KB)
...
009 (281373.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
004 (281372.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:20, Sys 0 00:00:03 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 47 47 90421
Memory (MB) : 29 1 1
...
006 (281388.000.000) 02/07 16:10:43 Image size of job updated: 1279084
1113 - MemoryUsage of job (MB)
1139344 - ResidentSetSize of job (KB)
...
009 (281372.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
004 (281388.000.000) 02/07 16:10:43 Job was evicted.
(0) Job was not checkpointed.
Usr 0 00:01:01, Sys 0 00:00:02 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 35 35 90483
Memory (MB) : 1113 1 1
...
009 (281388.000.000) 02/07 16:10:43 Job was aborted by the user.
via condor_rm (by user narnaud)
...
006 (281392.000.000) 02/07 16:10:48 Image size of job updated: 1195104
1168 - MemoryUsage of job (MB)
1195100 - ResidentSetSize of job (KB)
...
006 (281392.000.000) 02/07 16:11:10 Image size of job updated: 8392024
1168 - MemoryUsage of job (MB)
1195468 - ResidentSetSize of job (KB)
...
005 (281392.000.000) 02/07 16:11:10 Job terminated.
(1) Normal termination (return value 0)
Usr 0 00:00:21, Sys 0 00:00:05 - Run Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
Usr 0 00:00:21, Sys 0 00:00:05 - Total Remote Usage
Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
0 - Total Bytes Sent By Job
0 - Total Bytes Received By Job
Partitionable Resources : Usage Request Allocated
Cpus : 1 1
Disk (KB) : 30 30 90480
Memory (MB) : 1168 1 1
...
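
Since the suspicion is on the log-reading side, one cross-check I can run on these .nodes.log files is to count the submit (000) and terminated (005) events recorded for each job id, independently of DAGMan. A minimal sketch (hypothetical script, assuming the event-code / job-id layout visible in the log above):

  # Minimal sketch: count submit (000) and terminated (005) events per job id
  # in a .nodes.log, to cross-check DAGMan's "total end count" complaints.
  import re
  from collections import Counter

  event_re = re.compile(r"^(\d{3}) \((\d+\.\d{3}\.\d{3})\)")

  def count_events(logfile):
      submits, ends = Counter(), Counter()
      with open(logfile) as f:
          for line in f:
              m = event_re.match(line)
              if m:
                  code, job = m.groups()
                  if code == "000":
                      submits[job] += 1
                  elif code == "005":
                      ends[job] += 1
      return submits, ends

  submits, ends = count_events("dqr_test_20190207_narnaud_2.dag.nodes.log")
  for job, n in sorted(ends.items()):
      if n != 1:
          print(f"{job}: {n} terminated events ({submits[job]} submit events)")
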
02/07/19 16:08:57 ******************************************************
02/07/19 16:08:57 ** condor_scheduniv_exec.281364.0 (CONDOR_DAGMAN) STARTING UP
02/07/19 16:08:57 ** /usr/bin/condor_dagman
02/07/19 16:08:57 ** SubsystemInfo: name=DAGMAN type=DAGMAN(10) class=DAEMON(1)
02/07/19 16:08:57 ** Configuration: subsystem:DAGMAN local:<NONE> class:DAEMON
02/07/19 16:08:57 ** $CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $
02/07/19 16:08:57 ** $CondorPlatform: x86_64_RedHat7 $
02/07/19 16:08:57 ** PID = 2452541
02/07/19 16:08:57 ** Log last touched time unavailable (No such file or directory)
02/07/19 16:08:57 ******************************************************
02/07/19 16:08:57 Using config source: /etc/condor/condor_config
02/07/19 16:08:57 Using local config sources:
02/07/19 16:08:57 /etc/condor/condor_config.local
02/07/19 16:08:57 config Macros = 204, Sorted = 204, StringBytes = 7353, TablesBytes = 7392
02/07/19 16:08:57 CLASSAD_CACHING is ENABLED
02/07/19 16:08:57 Daemon Log is logging: D_ALWAYS D_ERROR
02/07/19 16:08:57 DaemonCore: No command port requested.
02/07/19 16:08:57 DAGMAN_USE_STRICT setting: 1
02/07/19 16:08:57 DAGMAN_VERBOSITY setting: 3
02/07/19 16:08:57 DAGMAN_DEBUG_CACHE_SIZE setting: 5242880
02/07/19 16:08:57 DAGMAN_DEBUG_CACHE_ENABLE setting: False
02/07/19 16:08:57 DAGMAN_SUBMIT_DELAY setting: 0
02/07/19 16:08:57 DAGMAN_MAX_SUBMIT_ATTEMPTS setting: 6
02/07/19 16:08:57 DAGMAN_STARTUP_CYCLE_DETECT setting: False
02/07/19 16:08:57 DAGMAN_MAX_SUBMITS_PER_INTERVAL setting: 5
02/07/19 16:08:57 DAGMAN_USER_LOG_SCAN_INTERVAL setting: 5
02/07/19 16:08:57 DAGMAN_DEFAULT_PRIORITY setting: 0
02/07/19 16:08:57 DAGMAN_SUPPRESS_NOTIFICATION setting: True
02/07/19 16:08:57 allow_events (DAGMAN_ALLOW_EVENTS) setting: 114
02/07/19 16:08:57 DAGMAN_RETRY_SUBMIT_FIRST setting: True
02/07/19 16:08:57 DAGMAN_RETRY_NODE_FIRST setting: False
02/07/19 16:08:57 DAGMAN_MAX_JOBS_IDLE setting: 1000
02/07/19 16:08:57 DAGMAN_MAX_JOBS_SUBMITTED setting: 0
02/07/19 16:08:57 DAGMAN_MAX_PRE_SCRIPTS setting: 20
02/07/19 16:08:57 DAGMAN_MAX_POST_SCRIPTS setting: 20
02/07/19 16:08:57 DAGMAN_MUNGE_NODE_NAMES setting: True
02/07/19 16:08:57 DAGMAN_PROHIBIT_MULTI_JOBS setting: False
02/07/19 16:08:57 DAGMAN_SUBMIT_DEPTH_FIRST setting: False
02/07/19 16:08:57 DAGMAN_ALWAYS_RUN_POST setting: False
02/07/19 16:08:57 DAGMAN_ABORT_DUPLICATES setting: True
02/07/19 16:08:57 DAGMAN_ABORT_ON_SCARY_SUBMIT setting: True
02/07/19 16:08:57 DAGMAN_PENDING_REPORT_INTERVAL setting: 600
02/07/19 16:08:57 DAGMAN_AUTO_RESCUE setting: True
02/07/19 16:08:57 DAGMAN_MAX_RESCUE_NUM setting: 100
02/07/19 16:08:57 DAGMAN_WRITE_PARTIAL_RESCUE setting: True
02/07/19 16:08:57 DAGMAN_DEFAULT_NODE_LOG setting: @(DAG_DIR)/@(DAG_FILE).nodes.log
02/07/19 16:08:57 DAGMAN_GENERATE_SUBDAG_SUBMITS setting: True
02/07/19 16:08:57 DAGMAN_MAX_JOB_HOLDS setting: 100
02/07/19 16:08:57 DAGMAN_HOLD_CLAIM_TIME setting: 20
02/07/19 16:08:57 ALL_DEBUG setting:
02/07/19 16:08:57 DAGMAN_DEBUG setting:
02/07/19 16:08:57 DAGMAN_SUPPRESS_JOB_LOGS setting: False
02/07/19 16:08:57 DAGMAN_REMOVE_NODE_JOBS setting: True
02/07/19 16:08:57 argv[0] == "condor_scheduniv_exec.281364.0"
02/07/19 16:08:57 argv[1] == "-Lockfile"
02/07/19 16:08:57 argv[2] == "dqr_test_20190207_narnaud_2.dag.lock"
02/07/19 16:08:57 argv[3] == "-AutoRescue"
02/07/19 16:08:57 argv[4] == "1"
02/07/19 16:08:57 argv[5] == "-DoRescueFrom"
02/07/19 16:08:57 argv[6] == "0"
02/07/19 16:08:57 argv[7] == "-Dag"
02/07/19 16:08:57 argv[8] == "dqr_test_20190207_narnaud_2.dag"
02/07/19 16:08:57 argv[9] == "-Suppress_notification"
02/07/19 16:08:57 argv[10] == "-CsdVersion"
02/07/19 16:08:57 argv[11] == "$CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $"
02/07/19 16:08:57 argv[12] == "-Dagman"
02/07/19 16:08:57 argv[13] == "/usr/bin/condor_dagman"
02/07/19 16:08:57 Workflow batch-name: <dqr_test_20190207_narnaud_2.dag+281364>
02/07/19 16:08:57 Workflow accounting_group: <>
02/07/19 16:08:57 Workflow accounting_group_user: <>
02/07/19 16:08:57 Warning: failed to get attribute DAGNodeName
02/07/19 16:08:57 DAGMAN_LOG_ON_NFS_IS_ERROR setting: False
02/07/19 16:08:57 Default node log file is: </data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log>
02/07/19 16:08:57 DAG Lockfile will be written to dqr_test_20190207_narnaud_2.dag.lock
02/07/19 16:08:57 DAG Input file is dqr_test_20190207_narnaud_2.dag
02/07/19 16:08:57 Parsing 1 dagfiles
02/07/19 16:08:57 Parsing dqr_test_20190207_narnaud_2.dag ...
02/07/19 16:08:57 Dag contains 38 total jobs
02/07/19 16:08:57 Sleeping for 3 seconds to ensure ProcessId uniqueness
02/07/19 16:09:00 Bootstrapping...
02/07/19 16:09:00 Number of pre-completed nodes: 0
02/07/19 16:09:00 MultiLogFiles: truncating log file /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:00 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:00 Of 38 nodes total:
02/07/19 16:09:00 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:00 === === === === === === ===
02/07/19 16:09:00 0 0 0 0 19 19 0
02/07/19 16:09:00 0 job proc(s) currently held
02/07/19 16:09:00 Registering condor_event_timer...
02/07/19 16:09:01 Submitting HTCondor Node test_20190207_narnaud_2_gps_numerology job(s)...
02/07/19 16:09:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:01 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_gps_numerology -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_gps_numerology -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" gps_numerology.sub
02/07/19 16:09:01 From submit: Submitting job(s).
02/07/19 16:09:01 From submit: 1 job(s) submitted to cluster 281365.
02/07/19 16:09:01 assigned HTCondor ID (281365.0.0)
02/07/19 16:09:01 Submitting HTCondor Node test_20190207_narnaud_2_virgo_noise job(s)...
02/07/19 16:09:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:01 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_virgo_noise -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_virgo_noise -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_noise.sub
02/07/19 16:09:01 From submit: Submitting job(s).
02/07/19 16:09:01 From submit: 1 job(s) submitted to cluster 281366.
02/07/19 16:09:01 assigned HTCondor ID (281366.0.0)
02/07/19 16:09:01 Submitting HTCondor Node test_20190207_narnaud_2_virgo_status job(s)...
02/07/19 16:09:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:01 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_virgo_status -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_virgo_status -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_status.sub
02/07/19 16:09:01 From submit: Submitting job(s).
02/07/19 16:09:01 From submit: 1 job(s) submitted to cluster 281367.
02/07/19 16:09:01 assigned HTCondor ID (281367.0.0)
02/07/19 16:09:01 Submitting HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon job(s)...
02/07/19 16:09:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:01 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_dqprint_brmsmon -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_dqprint_brmsmon -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_brmsmon.sub
02/07/19 16:09:01 From submit: Submitting job(s).
02/07/19 16:09:01 From submit: 1 job(s) submitted to cluster 281368.
02/07/19 16:09:01 assigned HTCondor ID (281368.0.0)
02/07/19 16:09:01 Submitting HTCondor Node test_20190207_narnaud_2_dqprint_dqflags job(s)...
02/07/19 16:09:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:01 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_dqprint_dqflags -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_dqprint_dqflags -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_dqflags.sub
02/07/19 16:09:02 From submit: Submitting job(s).
02/07/19 16:09:02 From submit: 1 job(s) submitted to cluster 281369.
02/07/19 16:09:02 assigned HTCondor ID (281369.0.0)
02/07/19 16:09:02 Just submitted 5 jobs this cycle...
02/07/19 16:09:02 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:02 Of 38 nodes total:
02/07/19 16:09:02 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:02 === === === === === === ===
02/07/19 16:09:02 0 0 5 0 14 19 0
02/07/19 16:09:02 0 job proc(s) currently held
02/07/19 16:09:07 Submitting HTCondor Node test_20190207_narnaud_2_omicronscanhoftV1 job(s)...
02/07/19 16:09:07 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:07 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:07 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:07 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronscanhoftV1 -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronscanhoftV1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftV1.sub
02/07/19 16:09:07 From submit: Submitting job(s).
02/07/19 16:09:07 From submit: 1 job(s) submitted to cluster 281370.
02/07/19 16:09:07 assigned HTCondor ID (281370.0.0)
02/07/19 16:09:07 Submitting HTCondor Node test_20190207_narnaud_2_omicronscanhoftH1 job(s)...
02/07/19 16:09:07 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:07 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:07 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:07 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronscanhoftH1 -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronscanhoftH1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftH1.sub
02/07/19 16:09:07 From submit: Submitting job(s).
02/07/19 16:09:07 From submit: 1 job(s) submitted to cluster 281371.
02/07/19 16:09:07 assigned HTCondor ID (281371.0.0)
02/07/19 16:09:07 Submitting HTCondor Node test_20190207_narnaud_2_omicronscanhoftL1 job(s)...
02/07/19 16:09:07 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:07 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:07 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:07 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronscanhoftL1 -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronscanhoftL1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftL1.sub
02/07/19 16:09:07 From submit: Submitting job(s).
02/07/19 16:09:07 From submit: 1 job(s) submitted to cluster 281372.
02/07/19 16:09:07 assigned HTCondor ID (281372.0.0)
02/07/19 16:09:07 Submitting HTCondor Node test_20190207_narnaud_2_omicronscanfull2048 job(s)...
02/07/19 16:09:07 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:07 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:07 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:07 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronscanfull2048 -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronscanfull2048 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull2048.sub
02/07/19 16:09:07 From submit: Submitting job(s).
02/07/19 16:09:07 From submit: 1 job(s) submitted to cluster 281373.
02/07/19 16:09:07 assigned HTCondor ID (281373.0.0)
02/07/19 16:09:07 Submitting HTCondor Node test_20190207_narnaud_2_omicronscanfull512 job(s)...
02/07/19 16:09:07 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:07 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:07 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:07 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronscanfull512 -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronscanfull512 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull512.sub
02/07/19 16:09:07 From submit: Submitting job(s).
02/07/19 16:09:07 From submit: 1 job(s) submitted to cluster 281374.
02/07/19 16:09:07 assigned HTCondor ID (281374.0.0)
02/07/19 16:09:07 Just submitted 5 jobs this cycle...
02/07/19 16:09:07 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_gps_numerology from (281365.0.0) to (281365.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_gps_numerology (281365.0.0) {02/07/19 16:09:01}
02/07/19 16:09:07 Number of idle job procs: 1
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_virgo_noise from (281366.0.0) to (281366.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_virgo_noise (281366.0.0) {02/07/19 16:09:01}
02/07/19 16:09:07 Number of idle job procs: 2
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_virgo_status from (281367.0.0) to (281367.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_virgo_status (281367.0.0) {02/07/19 16:09:01}
02/07/19 16:09:07 Number of idle job procs: 3
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_dqprint_brmsmon from (281368.0.0) to (281368.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon (281368.0.0) {02/07/19 16:09:01}
02/07/19 16:09:07 Number of idle job procs: 4
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_dqprint_dqflags from (281369.0.0) to (281369.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags (281369.0.0) {02/07/19 16:09:02}
02/07/19 16:09:07 Number of idle job procs: 5
02/07/19 16:09:07 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_virgo_noise (281366.0.0) {02/07/19 16:09:04}
02/07/19 16:09:07 Number of idle job procs: 4
02/07/19 16:09:07 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon (281368.0.0) {02/07/19 16:09:04}
02/07/19 16:09:07 Number of idle job procs: 3
02/07/19 16:09:07 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_virgo_status (281367.0.0) {02/07/19 16:09:04}
02/07/19 16:09:07 Number of idle job procs: 2
02/07/19 16:09:07 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags (281369.0.0) {02/07/19 16:09:04}
02/07/19 16:09:07 Number of idle job procs: 1
02/07/19 16:09:07 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_gps_numerology (281365.0.0) {02/07/19 16:09:04}
02/07/19 16:09:07 Number of idle job procs: 0
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_omicronscanhoftV1 from (281370.0.0) to (281370.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronscanhoftV1 (281370.0.0) {02/07/19 16:09:07}
02/07/19 16:09:07 Number of idle job procs: 1
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_omicronscanhoftH1 from (281371.0.0) to (281371.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronscanhoftH1 (281371.0.0) {02/07/19 16:09:07}
02/07/19 16:09:07 Number of idle job procs: 2
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_omicronscanhoftL1 from (281372.0.0) to (281372.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronscanhoftL1 (281372.0.0) {02/07/19 16:09:07}
02/07/19 16:09:07 Number of idle job procs: 3
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_omicronscanfull2048 from (281373.0.0) to (281373.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronscanfull2048 (281373.0.0) {02/07/19 16:09:07}
02/07/19 16:09:07 Number of idle job procs: 4
02/07/19 16:09:07 Reassigning the id of job test_20190207_narnaud_2_omicronscanfull512 from (281374.0.0) to (281374.0.0)
02/07/19 16:09:07 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronscanfull512 (281374.0.0) {02/07/19 16:09:07}
02/07/19 16:09:07 Number of idle job procs: 5
02/07/19 16:09:07 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:07 Of 38 nodes total:
02/07/19 16:09:07 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:07 === === === === === === ===
02/07/19 16:09:07 0 0 10 0 9 19 0
02/07/19 16:09:07 0 job proc(s) currently held
02/07/19 16:09:12 Submitting HTCondor Node test_20190207_narnaud_2_omicronplot job(s)...
02/07/19 16:09:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:12 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_omicronplot -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_omicronplot -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronplot.sub
02/07/19 16:09:12 From submit: Submitting job(s).
02/07/19 16:09:12 From submit: 1 job(s) submitted to cluster 281375.
02/07/19 16:09:12 assigned HTCondor ID (281375.0.0)
02/07/19 16:09:12 Submitting HTCondor Node test_20190207_narnaud_2_query_ingv_public_data job(s)...
02/07/19 16:09:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:12 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_query_ingv_public_data -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_query_ingv_public_data -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" query_ingv_public_data.sub
02/07/19 16:09:12 From submit: Submitting job(s).
02/07/19 16:09:12 From submit: 1 job(s) submitted to cluster 281376.
02/07/19 16:09:12 assigned HTCondor ID (281376.0.0)
02/07/19 16:09:12 Submitting HTCondor Node test_20190207_narnaud_2_scan_logfiles job(s)...
02/07/19 16:09:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:12 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_scan_logfiles -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_scan_logfiles -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" scan_logfiles.sub
02/07/19 16:09:12 From submit: Submitting job(s).
02/07/19 16:09:12 From submit: 1 job(s) submitted to cluster 281377.
02/07/19 16:09:12 assigned HTCondor ID (281377.0.0)
02/07/19 16:09:12 Submitting HTCondor Node test_20190207_narnaud_2_decode_DMS_snapshots job(s)...
02/07/19 16:09:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:12 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_decode_DMS_snapshots -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_decode_DMS_snapshots -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" decode_DMS_snapshots.sub
02/07/19 16:09:12 From submit: Submitting job(s).
02/07/19 16:09:12 From submit: 1 job(s) submitted to cluster 281378.
02/07/19 16:09:12 assigned HTCondor ID (281378.0.0)
02/07/19 16:09:12 Submitting HTCondor Node test_20190207_narnaud_2_upv job(s)...
02/07/19 16:09:12 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:12 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:12 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:12 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_upv -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_upv -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" upv.sub
02/07/19 16:09:12 From submit: Submitting job(s).
02/07/19 16:09:12 From submit: 1 job(s) submitted to cluster 281379.
02/07/19 16:09:12 assigned HTCondor ID (281379.0.0)
02/07/19 16:09:12 Just submitted 5 jobs this cycle...
02/07/19 16:09:12 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:12 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon (281368.0.0) {02/07/19 16:09:08}
02/07/19 16:09:12 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon (281368.0.0) {02/07/19 16:09:08}
02/07/19 16:09:12 Number of idle job procs: 5
02/07/19 16:09:12 Node test_20190207_narnaud_2_dqprint_brmsmon job proc (281368.0.0) completed successfully.
02/07/19 16:09:12 Node test_20190207_narnaud_2_dqprint_brmsmon job completed
02/07/19 16:09:12 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftV1 (281370.0.0) {02/07/19 16:09:08}
02/07/19 16:09:12 Number of idle job procs: 4
02/07/19 16:09:12 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_virgo_noise (281366.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags (281369.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_virgo_noise (281366.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Number of idle job procs: 4
02/07/19 16:09:12 Node test_20190207_narnaud_2_virgo_noise job proc (281366.0.0) completed successfully.
02/07/19 16:09:12 Node test_20190207_narnaud_2_virgo_noise job completed
02/07/19 16:09:12 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags (281369.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Number of idle job procs: 4
02/07/19 16:09:12 Node test_20190207_narnaud_2_dqprint_dqflags job proc (281369.0.0) completed successfully.
02/07/19 16:09:12 Node test_20190207_narnaud_2_dqprint_dqflags job completed
02/07/19 16:09:12 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftH1 (281371.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Number of idle job procs: 3
02/07/19 16:09:12 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftL1 (281372.0.0) {02/07/19 16:09:09}
02/07/19 16:09:12 Number of idle job procs: 2
02/07/19 16:09:12 Reassigning the id of job test_20190207_narnaud_2_omicronplot from (281375.0.0) to (281375.0.0)
02/07/19 16:09:12 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_omicronplot (281375.0.0) {02/07/19 16:09:12}
02/07/19 16:09:12 Number of idle job procs: 3
02/07/19 16:09:12 Reassigning the id of job test_20190207_narnaud_2_query_ingv_public_data from (281376.0.0) to (281376.0.0)
02/07/19 16:09:12 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_query_ingv_public_data (281376.0.0) {02/07/19 16:09:12}
02/07/19 16:09:12 Number of idle job procs: 4
02/07/19 16:09:12 Reassigning the id of job test_20190207_narnaud_2_scan_logfiles from (281377.0.0) to (281377.0.0)
02/07/19 16:09:12 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_scan_logfiles (281377.0.0) {02/07/19 16:09:12}
02/07/19 16:09:12 Number of idle job procs: 5
02/07/19 16:09:12 Reassigning the id of job test_20190207_narnaud_2_decode_DMS_snapshots from (281378.0.0) to (281378.0.0)
02/07/19 16:09:12 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_decode_DMS_snapshots (281378.0.0) {02/07/19 16:09:12}
02/07/19 16:09:12 Number of idle job procs: 6
02/07/19 16:09:12 Reassigning the id of job test_20190207_narnaud_2_upv from (281379.0.0) to (281379.0.0)
02/07/19 16:09:12 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_upv (281379.0.0) {02/07/19 16:09:12}
02/07/19 16:09:12 Number of idle job procs: 7
02/07/19 16:09:12 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:12 Of 38 nodes total:
02/07/19 16:09:12 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:12 === === === === === === ===
02/07/19 16:09:12 3 0 12 0 7 16 0
02/07/19 16:09:12 0 job proc(s) currently held
02/07/19 16:09:17 Submitting HTCondor Node test_20190207_narnaud_2_bruco job(s)...
02/07/19 16:09:17 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:17 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:17 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:17 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_bruco -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_bruco -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" bruco.sub
02/07/19 16:09:17 From submit: Submitting job(s).
02/07/19 16:09:17 From submit: 1 job(s) submitted to cluster 281380.
02/07/19 16:09:17 assigned HTCondor ID (281380.0.0)
02/07/19 16:09:17 Submitting HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ job(s)...
02/07/19 16:09:17 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:17 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:17 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:17 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_data_ref_comparison_INJ -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_data_ref_comparison_INJ -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_INJ.sub
02/07/19 16:09:17 From submit: Submitting job(s).
02/07/19 16:09:17 From submit: 1 job(s) submitted to cluster 281381.
02/07/19 16:09:17 assigned HTCondor ID (281381.0.0)
02/07/19 16:09:17 Submitting HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC job(s)...
02/07/19 16:09:17 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:17 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:17 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:17 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_data_ref_comparison_ISC -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_data_ref_comparison_ISC -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_ISC.sub
02/07/19 16:09:17 From submit: Submitting job(s).
02/07/19 16:09:17 From submit: 1 job(s) submitted to cluster 281382.
02/07/19 16:09:17 assigned HTCondor ID (281382.0.0)
02/07/19 16:09:17 Submitting HTCondor Node test_20190207_narnaud_2_generate_dqr_json job(s)...
02/07/19 16:09:17 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:17 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:17 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:17 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_generate_dqr_json -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_generate_dqr_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" generate_dqr_json.sub
02/07/19 16:09:17 From submit: Submitting job(s).
02/07/19 16:09:17 From submit: 1 job(s) submitted to cluster 281383.
02/07/19 16:09:17 assigned HTCondor ID (281383.0.0)
02/07/19 16:09:17 Submitting HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon_json job(s)...
02/07/19 16:09:17 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:17 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:17 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:17 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_dqprint_brmsmon_json -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_dqprint_brmsmon_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_dqprint_brmsmon" dqprint_brmsmon_json.sub
02/07/19 16:09:17 From submit: Submitting job(s).
02/07/19 16:09:17 From submit: 1 job(s) submitted to cluster 281384.
02/07/19 16:09:17 assigned HTCondor ID (281384.0.0)
02/07/19 16:09:17 Just submitted 5 jobs this cycle...
02/07/19 16:09:17 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_virgo_status (281367.0.0) {02/07/19 16:09:12}
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_gps_numerology (281365.0.0) {02/07/19 16:09:12}
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_gps_numerology (281365.0.0) {02/07/19 16:09:13}
02/07/19 16:09:17 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_gps_numerology (281365.0.0) {02/07/19 16:09:13}
02/07/19 16:09:17 Number of idle job procs: 7
02/07/19 16:09:17 Node test_20190207_narnaud_2_gps_numerology job proc (281365.0.0) completed successfully.
02/07/19 16:09:17 Node test_20190207_narnaud_2_gps_numerology job completed
02/07/19 16:09:17 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_query_ingv_public_data (281376.0.0) {02/07/19 16:09:14}
02/07/19 16:09:17 Number of idle job procs: 6
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_query_ingv_public_data (281376.0.0) {02/07/19 16:09:16}
02/07/19 16:09:17 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_query_ingv_public_data (281376.0.0) {02/07/19 16:09:16}
02/07/19 16:09:17 Number of idle job procs: 6
02/07/19 16:09:17 Node test_20190207_narnaud_2_query_ingv_public_data job proc (281376.0.0) completed successfully.
02/07/19 16:09:17 Node test_20190207_narnaud_2_query_ingv_public_data job completed
02/07/19 16:09:17 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronscanfull2048 (281373.0.0) {02/07/19 16:09:16}
02/07/19 16:09:17 Number of idle job procs: 5
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftH1 (281371.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftL1 (281372.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Reassigning the id of job test_20190207_narnaud_2_bruco from (281380.0.0) to (281380.0.0)
02/07/19 16:09:17 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_bruco (281380.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Number of idle job procs: 6
02/07/19 16:09:17 Reassigning the id of job test_20190207_narnaud_2_data_ref_comparison_INJ from (281381.0.0) to (281381.0.0)
02/07/19 16:09:17 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ (281381.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Number of idle job procs: 7
02/07/19 16:09:17 Reassigning the id of job test_20190207_narnaud_2_data_ref_comparison_ISC from (281382.0.0) to (281382.0.0)
02/07/19 16:09:17 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC (281382.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Number of idle job procs: 8
02/07/19 16:09:17 Reassigning the id of job test_20190207_narnaud_2_generate_dqr_json from (281383.0.0) to (281383.0.0)
02/07/19 16:09:17 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_generate_dqr_json (281383.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Number of idle job procs: 9
02/07/19 16:09:17 Reassigning the id of job test_20190207_narnaud_2_dqprint_brmsmon_json from (281384.0.0) to (281384.0.0)
02/07/19 16:09:17 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon_json (281384.0.0) {02/07/19 16:09:17}
02/07/19 16:09:17 Number of idle job procs: 10
02/07/19 16:09:17 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:17 Of 38 nodes total:
02/07/19 16:09:17 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:17 === === === === === === ===
02/07/19 16:09:17 5 0 15 0 2 16 0
02/07/19 16:09:17 0 job proc(s) currently held
02/07/19 16:09:22 Submitting HTCondor Node test_20190207_narnaud_2_virgo_noise_json job(s)...
02/07/19 16:09:22 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:22 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:22 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:22 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_virgo_noise_json -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_virgo_noise_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_virgo_noise" virgo_noise_json.sub
02/07/19 16:09:22 From submit: Submitting job(s).
02/07/19 16:09:22 From submit: 1 job(s) submitted to cluster 281385.
02/07/19 16:09:22 assigned HTCondor ID (281385.0.0)
02/07/19 16:09:22 Submitting HTCondor Node test_20190207_narnaud_2_dqprint_dqflags_json job(s)...
02/07/19 16:09:22 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:22 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:22 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:22 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_dqprint_dqflags_json -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_dqprint_dqflags_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_dqprint_dqflags" dqprint_dqflags_json.sub
02/07/19 16:09:22 From submit: Submitting job(s).
02/07/19 16:09:22 From submit: 1 job(s) submitted to cluster 281386.
02/07/19 16:09:22 assigned HTCondor ID (281386.0.0)
02/07/19 16:09:22 Just submitted 2 jobs this cycle...
02/07/19 16:09:22 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:22 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronscanhoftV1 (281370.0.0) {02/07/19 16:09:17}
02/07/19 16:09:22 Reassigning the id of job test_20190207_narnaud_2_virgo_noise_json from (281385.0.0) to (281385.0.0)
02/07/19 16:09:22 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_virgo_noise_json (281385.0.0) {02/07/19 16:09:22}
02/07/19 16:09:22 Number of idle job procs: 11
02/07/19 16:09:22 Reassigning the id of job test_20190207_narnaud_2_dqprint_dqflags_json from (281386.0.0) to (281386.0.0)
02/07/19 16:09:22 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags_json (281386.0.0) {02/07/19 16:09:22}
02/07/19 16:09:22 Number of idle job procs: 12
02/07/19 16:09:22 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:22 Of 38 nodes total:
02/07/19 16:09:22 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:22 === === === === === === ===
02/07/19 16:09:22 5 0 17 0 0 16 0
02/07/19 16:09:22 0 job proc(s) currently held
02/07/19 16:09:27 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_generate_dqr_json (281383.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 11
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ (281381.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 10
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_virgo_noise_json (281385.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 9
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags_json (281386.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 8
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC (281382.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 7
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronscanfull512 (281374.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 6
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon_json (281384.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 5
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_upv (281379.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 4
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_decode_DMS_snapshots (281378.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 3
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_scan_logfiles (281377.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 2
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_bruco (281380.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 1
02/07/19 16:09:27 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_omicronplot (281375.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_bruco (281380.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_virgo_noise_json (281385.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_bruco (281380.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_bruco job proc (281380.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_bruco job completed
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_virgo_noise_json (281385.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_virgo_noise_json job proc (281385.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_virgo_noise_json job completed
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC (281382.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC (281382.0.0) {02/07/19 16:09:25}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_data_ref_comparison_ISC job proc (281382.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_data_ref_comparison_ISC job completed
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronscanfull2048 (281373.0.0) {02/07/19 16:09:26}
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon_json (281384.0.0) {02/07/19 16:09:26}
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_dqprint_brmsmon_json (281384.0.0) {02/07/19 16:09:26}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_dqprint_brmsmon_json job proc (281384.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_dqprint_brmsmon_json job completed
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags_json (281386.0.0) {02/07/19 16:09:26}
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_dqprint_dqflags_json (281386.0.0) {02/07/19 16:09:26}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_dqprint_dqflags_json job proc (281386.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_dqprint_dqflags_json job completed
02/07/19 16:09:27 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_generate_dqr_json (281383.0.0) {02/07/19 16:09:27}
02/07/19 16:09:27 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_generate_dqr_json (281383.0.0) {02/07/19 16:09:27}
02/07/19 16:09:27 Number of idle job procs: 0
02/07/19 16:09:27 Node test_20190207_narnaud_2_generate_dqr_json job proc (281383.0.0) completed successfully.
02/07/19 16:09:27 Node test_20190207_narnaud_2_generate_dqr_json job completed
02/07/19 16:09:27 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:27 Of 38 nodes total:
02/07/19 16:09:27 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:27 === === === === === === ===
02/07/19 16:09:27 11 0 11 0 5 11 0
02/07/19 16:09:27 0 job proc(s) currently held
02/07/19 16:09:32 Submitting HTCondor Node test_20190207_narnaud_2_bruco_std job(s)...
02/07/19 16:09:32 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:32 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:32 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:32 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_bruco_std -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_bruco_std -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_bruco" bruco_std.sub
02/07/19 16:09:32 From submit: Submitting job(s).
02/07/19 16:09:32 From submit: 1 job(s) submitted to cluster 281387.
02/07/19 16:09:32 assigned HTCondor ID (281387.0.0)
02/07/19 16:09:32 Submitting HTCondor Node test_20190207_narnaud_2_bruco_std-prev job(s)...
02/07/19 16:09:32 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:32 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:32 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:32 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_bruco_std-prev -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_bruco_std-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_bruco" bruco_std-prev.sub
02/07/19 16:09:32 From submit: Submitting job(s).
02/07/19 16:09:32 From submit: 1 job(s) submitted to cluster 281388.
02/07/19 16:09:32 assigned HTCondor ID (281388.0.0)
02/07/19 16:09:32 Submitting HTCondor Node test_20190207_narnaud_2_bruco_env job(s)...
02/07/19 16:09:32 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:32 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:32 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:32 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_bruco_env -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_bruco_env -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_bruco" bruco_env.sub
02/07/19 16:09:32 From submit: Submitting job(s).
02/07/19 16:09:32 From submit: 1 job(s) submitted to cluster 281389.
02/07/19 16:09:32 assigned HTCondor ID (281389.0.0)
02/07/19 16:09:32 Submitting HTCondor Node test_20190207_narnaud_2_bruco_env-prev job(s)...
02/07/19 16:09:32 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:32 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:32 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:32 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_bruco_env-prev -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_bruco_env-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_bruco" bruco_env-prev.sub
02/07/19 16:09:32 From submit: Submitting job(s).
02/07/19 16:09:32 From submit: 1 job(s) submitted to cluster 281390.
02/07/19 16:09:32 assigned HTCondor ID (281390.0.0)
02/07/19 16:09:32 Submitting HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC_comparison job(s)...
02/07/19 16:09:32 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:32 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:32 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:32 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_data_ref_comparison_ISC_comparison -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_data_ref_comparison_ISC_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_data_ref_comparison_ISC" data_ref_comparison_ISC_comparison.sub
02/07/19 16:09:32 From submit: Submitting job(s).
02/07/19 16:09:32 From submit: 1 job(s) submitted to cluster 281391.
02/07/19 16:09:32 assigned HTCondor ID (281391.0.0)
02/07/19 16:09:32 Just submitted 5 jobs this cycle...
02/07/19 16:09:32 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:32 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ (281381.0.0) {02/07/19 16:09:28}
02/07/19 16:09:32 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ (281381.0.0) {02/07/19 16:09:28}
02/07/19 16:09:32 Number of idle job procs: 0
02/07/19 16:09:32 Node test_20190207_narnaud_2_data_ref_comparison_INJ job proc (281381.0.0) completed successfully.
02/07/19 16:09:32 Node test_20190207_narnaud_2_data_ref_comparison_INJ job completed
02/07/19 16:09:32 Reassigning the id of job test_20190207_narnaud_2_bruco_std from (281387.0.0) to (281387.0.0)
02/07/19 16:09:32 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_bruco_std (281387.0.0) {02/07/19 16:09:32}
02/07/19 16:09:32 Number of idle job procs: 1
02/07/19 16:09:32 Reassigning the id of job test_20190207_narnaud_2_bruco_std-prev from (281388.0.0) to (281388.0.0)
02/07/19 16:09:32 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_bruco_std-prev (281388.0.0) {02/07/19 16:09:32}
02/07/19 16:09:32 Number of idle job procs: 2
02/07/19 16:09:32 Reassigning the id of job test_20190207_narnaud_2_bruco_env from (281389.0.0) to (281389.0.0)
02/07/19 16:09:32 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_bruco_env (281389.0.0) {02/07/19 16:09:32}
02/07/19 16:09:32 Number of idle job procs: 3
02/07/19 16:09:32 Reassigning the id of job test_20190207_narnaud_2_bruco_env-prev from (281390.0.0) to (281390.0.0)
02/07/19 16:09:32 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_bruco_env-prev (281390.0.0) {02/07/19 16:09:32}
02/07/19 16:09:32 Number of idle job procs: 4
02/07/19 16:09:32 Reassigning the id of job test_20190207_narnaud_2_data_ref_comparison_ISC_comparison from (281391.0.0) to (281391.0.0)
02/07/19 16:09:32 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC_comparison (281391.0.0) {02/07/19 16:09:32}
02/07/19 16:09:32 Number of idle job procs: 5
02/07/19 16:09:32 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:32 Of 38 nodes total:
02/07/19 16:09:32 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:32 === === === === === === ===
02/07/19 16:09:32 12 0 15 0 1 10 0
02/07/19 16:09:32 0 job proc(s) currently held
02/07/19 16:09:37 Submitting HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison job(s)...
02/07/19 16:09:37 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log
02/07/19 16:09:37 Masking the events recorded in the DAGMAN workflow log
02/07/19 16:09:37 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/07/19 16:09:37 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190207_narnaud_2_data_ref_comparison_INJ_comparison -a +DAGManJobId' '=' '281364 -a DAGManJobId' '=' '281364 -batch-name dqr_test_20190207_narnaud_2.dag+281364 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190207_narnaud_2_data_ref_comparison_INJ_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag/./dqr_test_20190207_narnaud_2.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190207_narnaud_2/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190207_narnaud_2_data_ref_comparison_INJ" data_ref_comparison_INJ_comparison.sub
02/07/19 16:09:38 From submit: Submitting job(s).
02/07/19 16:09:38 From submit: 1 job(s) submitted to cluster 281392.
02/07/19 16:09:38 assigned HTCondor ID (281392.0.0)
02/07/19 16:09:38 Just submitted 1 job this cycle...
02/07/19 16:09:38 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:38 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_bruco_std (281387.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Number of idle job procs: 4
02/07/19 16:09:38 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_bruco_std-prev (281388.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Number of idle job procs: 3
02/07/19 16:09:38 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_bruco_env (281389.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Number of idle job procs: 2
02/07/19 16:09:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronscanfull512 (281374.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_upv (281379.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_decode_DMS_snapshots (281378.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_scan_logfiles (281377.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_omicronplot (281375.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_decode_DMS_snapshots (281378.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Number of idle job procs: 2
02/07/19 16:09:38 Node test_20190207_narnaud_2_decode_DMS_snapshots job proc (281378.0.0) completed successfully.
02/07/19 16:09:38 Node test_20190207_narnaud_2_decode_DMS_snapshots job completed
02/07/19 16:09:38 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC_comparison (281391.0.0) {02/07/19 16:09:33}
02/07/19 16:09:38 Number of idle job procs: 1
02/07/19 16:09:38 Reassigning the id of job test_20190207_narnaud_2_data_ref_comparison_INJ_comparison from (281392.0.0) to (281392.0.0)
02/07/19 16:09:38 Event: ULOG_SUBMIT for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:09:38}
02/07/19 16:09:38 Number of idle job procs: 2
02/07/19 16:09:38 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:38 Of 38 nodes total:
02/07/19 16:09:38 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:38 === === === === === === ===
02/07/19 16:09:38 13 0 15 0 0 10 0
02/07/19 16:09:38 0 job proc(s) currently held
02/07/19 16:09:43 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_virgo_status (281367.0.0) {02/07/19 16:09:38}
02/07/19 16:09:43 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_virgo_status (281367.0.0) {02/07/19 16:09:38}
02/07/19 16:09:43 Number of idle job procs: 2
02/07/19 16:09:43 Node test_20190207_narnaud_2_virgo_status job proc (281367.0.0) completed successfully.
02/07/19 16:09:43 Node test_20190207_narnaud_2_virgo_status job completed
02/07/19 16:09:43 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:09:39}
02/07/19 16:09:43 Number of idle job procs: 1
02/07/19 16:09:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_bruco_std (281387.0.0) {02/07/19 16:09:41}
02/07/19 16:09:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_bruco_env (281389.0.0) {02/07/19 16:09:41}
02/07/19 16:09:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_bruco_std-prev (281388.0.0) {02/07/19 16:09:41}
02/07/19 16:09:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_ISC_comparison (281391.0.0) {02/07/19 16:09:42}
02/07/19 16:09:43 DAG status: 0 (DAG_STATUS_OK)
02/07/19 16:09:43 Of 38 nodes total:
02/07/19 16:09:43 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:09:43 === === === === === === ===
02/07/19 16:09:43 14 0 14 0 0 10 0
02/07/19 16:09:43 0 job proc(s) currently held
02/07/19 16:09:48 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:48 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_bruco_env-prev (281390.0.0) {02/07/19 16:09:45}
02/07/19 16:09:48 Number of idle job procs: 0
02/07/19 16:09:53 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:53 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:09:48}
02/07/19 16:09:58 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:09:58 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_bruco_env-prev (281390.0.0) {02/07/19 16:09:53}
02/07/19 16:10:43 Currently monitoring 1 HTCondor log file(s)
02/07/19 16:10:43 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:10:38}
02/07/19 16:10:43 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:10:38}
02/07/19 16:10:43 Number of idle job procs: 0
02/07/19 16:10:43 Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison job proc (281392.0.0) completed successfully.
02/07/19 16:10:43 Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison job completed
02/07/19 16:10:43 Event: ULOG_EXECUTE for HTCondor Node test_20190207_narnaud_2_data_ref_comparison_INJ_comparison (281392.0.0) {02/07/19 16:10:39}
02/07/19 16:10:43 BAD EVENT: job (281392.0.0) executing, total end count != 0 (1)
02/07/19 16:10:43 ERROR: aborting DAG because of bad event (BAD EVENT: job (281392.0.0) executing, total end count != 0 (1))
02/07/19 16:10:43 ProcessLogEvents() returned false
02/07/19 16:10:43 Aborting DAG...
02/07/19 16:10:43 Writing Rescue DAG to dqr_test_20190207_narnaud_2.dag.rescue001...
02/07/19 16:10:43 Removing submitted jobs...
02/07/19 16:10:43 Removing any/all submitted HTCondor jobs...
02/07/19 16:10:43 Running: /usr/bin/condor_rm -const DAGManJobId' '=?=' '281364
02/07/19 16:10:43 Note: 0 total job deferrals because of -MaxJobs limit (0)
02/07/19 16:10:43 Note: 0 total job deferrals because of -MaxIdle limit (1000)
02/07/19 16:10:43 Note: 0 total job deferrals because of node category throttles
02/07/19 16:10:43 Note: 0 total PRE script deferrals because of -MaxPre limit (20) or DEFER
02/07/19 16:10:43 Note: 0 total POST script deferrals because of -MaxPost limit (20) or DEFER
02/07/19 16:10:43 DAG status: 1 (DAG_STATUS_ERROR)
02/07/19 16:10:43 Of 38 nodes total:
02/07/19 16:10:43 Done Pre Queued Post Ready Un-Ready Failed
02/07/19 16:10:43 === === === === === === ===
02/07/19 16:10:43 15 0 13 0 0 10 0
02/07/19 16:10:43 0 job proc(s) currently held
02/07/19 16:10:43 Wrote metrics file dqr_test_20190207_narnaud_2.dag.metrics.
02/07/19 16:10:43 Metrics not sent because of PEGASUS_METRICS or CONDOR_DEVELOPERS setting.
02/07/19 16:10:43 **** condor_scheduniv_exec.281364.0 (condor_DAGMAN) pid 2452541 EXITING WITH STATUS 1
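For what it's worth, the offending sequence is visible just above: job (281392.0.0) gets a ULOG_JOB_TERMINATED event and then a ULOG_EXECUTE event a second later, which is exactly what DAGMan flags as "BAD EVENT" before aborting. To locate that pattern quickly in a large dagman.out without reading it line by line, here is a minimal sketch (plain Python, not part of HTCondor; the script name and function are made up for illustration) that reports any job proc whose EXECUTE event is logged after its TERMINATED event:

#!/usr/bin/env python3
# check_bad_events.py -- illustrative helper, not an HTCondor tool.
# Scans a dagman.out-style file for the "EXECUTE after TERMINATED"
# pattern that DAGMan reports as a BAD EVENT.
import re
import sys

# Matches lines like:
#   02/07/19 16:10:43 Event: ULOG_EXECUTE for HTCondor Node <node> (<job id>) {...}
EVENT_RE = re.compile(r"Event: (ULOG_\w+) for HTCondor Node (\S+) \((\S+)\)")

def find_bad_events(path):
    terminated = set()   # job ids already seen as terminated
    bad = []             # (line number, node name, job id) of late EXECUTE events
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            m = EVENT_RE.search(line)
            if not m:
                continue
            event, node, job_id = m.groups()
            if event == "ULOG_JOB_TERMINATED":
                terminated.add(job_id)
            elif event == "ULOG_EXECUTE" and job_id in terminated:
                bad.append((lineno, node, job_id))
    return bad

if __name__ == "__main__":
    for lineno, node, job_id in find_bad_events(sys.argv[1]):
        print("line %d: EXECUTE after TERMINATED for %s (%s)" % (lineno, node, job_id))

Running it on the file above (e.g. "python3 check_bad_events.py dqr_test_20190207_narnaud_2.dag.dagman.out") would print the single out-of-order event for (281392.0.0); the same idea could be pointed at the .nodes.log if one prefers to look at the raw user-log events instead of DAGMan's digest of them.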