I have a question about querying condor_stats. I'm seeing some peculiar
behavior in terms of trace granularity. For example, if I query:
condor_stats -orgformat -from 2 13 2007 -resourcequery laotzu.cse.nd.edu -f laotzu.cse.nd.edu.2007.txt
then the output is roughly 37 KB and consists of measurements every 16 minutes or so. However, if I query:
condor_stats -orgformat -from 2 14 2007 -resourcequery laotzu.cse.nd.edu -f laotzu.cse.nd.edu.2007.txt
the output is roughly 131 KB and consists of measurements every 6 minutes
or so. I've also noticed that the granularity of the data gets coarser
and coarser the further back I go. It seems that everything within the
past week is sampled every 6 minutes, everything within the past month
every 16 minutes, and everything older every 64 minutes. Is this just
a limitation of the condor_stats system? Does it simply aggregate data
more and more as you go back in time, and then on output use the
coarsest granularity in the requested range (that seems to be the case,
given the behavior)? And lastly, is there a way around this to get older
historical data at finer granularity (or, if that's not possible, to
have it output all the data it has, letting the sampling interval shrink
as it reaches more recent information)?
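In case it helps reproduce what I'm seeing, a small shell loop like the following (a dry-run sketch; the host and the -from dates are just the examples from above) prints the queries for several start dates, so the resulting file sizes and sample intervals can be compared:

```shell
#!/bin/sh
# Sketch: query the same resource with different -from start dates to
# compare trace granularity. Commands are only printed (dry run); pipe
# the output through sh to actually execute them against a live pool.
HOST=laotzu.cse.nd.edu   # example machine from above
CMDS=""
for FROM in "2 13 2007" "2 14 2007" "1 15 2007"; do
    # one output file per start date, e.g. laotzu.cse.nd.edu.2.13.2007.txt
    OUT="$HOST.$(printf '%s' "$FROM" | tr ' ' '.').txt"
    CMD="condor_stats -orgformat -from $FROM -resourcequery $HOST -f $OUT"
    echo "$CMD"
    CMDS="$CMDS $CMD"
done
```

Counting lines in each resulting file (e.g. with wc -l) then gives a quick estimate of the sampling interval over each window.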
I was also wondering whether anyone knows if it is possible to tell
from the historical information when a machine shut down or when it
failed. That is, is there a way, using condor_stats or any other Condor
mechanism, to see when a machine was shut down (e.g. the Condor software
was stopped) versus when the machine itself failed?
Thanks for your help in advance,
-- Brent E. Rood -bayroot@xxxxxxxxxx