Hi,

Ah, not great… I guess I could work around that with a script parsing the history (though parsing ClassAds may not be that easy for a newbie like me), or even just by building an auto-updated "nodes" file with puppet. I'm wondering, though, how people debug batch issues if they can't even identify failing nodes from the batch system's point of view. I guess people have monitoring scripts checking (at least) for the presence of a startd process, and probably some other trivial things (but which ones?), to be sure the startd daemons are correctly registered in the pool?

Regards

From: HTCondor-users [mailto:htcondor-users-bounces@xxxxxxxxxxx]
On Behalf Of Marc Volovic

You can see drained nodes with condor_status. For nodes that are down, that is a more difficult question; I'd do it using an external means.

From: HTCondor-users [mailto:htcondor-users-bounces@xxxxxxxxxxx]
On Behalf Of SCHAER Frederic

Hi,

I'm used to Torque, in which there is a "pbsnodes -l" command that displays nodes that are down or drained. Strangely, I can't find how to see this information in Condor: what would be the Condor way of finding it? I'm sure this can become hard when the pool is dynamic, but even then there must be traces of nodes that belonged to the pool "one day", or in the last X days?

Thanks
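For what it's worth, the two pieces discussed in this thread could be combined into a small script: ask the collector which machines are currently reporting (drained slots can be selected with a `State == "Drained"` constraint to `condor_status`), and diff that against a locally maintained inventory file to spot nodes that are down, i.e. nodes whose startd never registered. This is only a sketch, not an official tool; the inventory file (e.g. the puppet-built "nodes" file mentioned above, one hostname per line) and the helper names are assumptions.

```python
import subprocess

def condor_machines(constraint=None):
    """Machines currently reporting to the collector, via condor_status.

    Sketch: assumes condor_status is in PATH; with no constraint it
    returns every machine the collector knows about right now.
    """
    cmd = ["condor_status", "-af", "Machine"]
    if constraint:
        cmd += ["-constraint", constraint]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return set(out.split())

def missing_nodes(inventory, reporting):
    """Inventory hosts whose startd is not registered in the pool.

    A rough equivalent of the "down" part of `pbsnodes -l`: anything we
    expect to exist (inventory) but that the collector does not report.
    """
    return sorted(set(inventory) - set(reporting))
```

Usage would be something like `missing_nodes(open("nodes").read().split(), condor_machines())` for down candidates, and `condor_machines('State == "Drained"')` for the drained ones.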