During the course of a Condor job I frequently call condor_chirp as a Python subprocess on the worker nodes, both to put files to and to fetch files from the submit machine.
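For context, this is roughly the pattern I use (simplified; the helper names and file paths below are just placeholders, not my actual code):

    import subprocess

    def chirp_put(local_path, remote_path):
        # Copy a file from the worker up to the submit machine.
        subprocess.check_call(["condor_chirp", "put", local_path, remote_path])

    def chirp_fetch(remote_path, local_path):
        # Copy a file from the submit machine down to the worker.
        subprocess.check_call(["condor_chirp", "fetch", remote_path, local_path])

    # These get called many times over the life of a single job, e.g.
    # chirp_put("result_0001.dat", "results/result_0001.dat")
    # chirp_fetch("inputs/next_task.dat", "next_task.dat")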
I periodically get the following terminal error:
"chirp: couldn't get file: cannot allocate memory"
I suspect that each condor_chirp process is remaining open and holding on to some memory, so that after many calls the memory is eventually exhausted. But I am not sure how to test this (or, if that is the case, how to release the memory after each condor_chirp call).
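To make that concrete, the kind of check I had in mind is something like the sketch below, which counts condor_chirp processes (live or zombie) still present on the worker; I would log this before and after batches of chirp calls to see whether it keeps growing over the life of the job. This is just an idea for a test, not something I have verified.

    import subprocess

    def count_lingering_chirp_processes():
        # List process names and states on the worker and count how many
        # condor_chirp entries are still hanging around.
        ps = subprocess.run(
            ["ps", "-eo", "comm,stat"],
            capture_output=True, text=True, check=True,
        )
        return sum(
            1 for line in ps.stdout.splitlines()
            if line.startswith("condor_chirp")
        )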
Thanks for any assistance.
Wes Zell