[HTCondor-users] Jobs using large memory but profiler says that jobs are fine?

Dear Condor experts,

I am running a job that appears to use more than 8 GB of memory. However, when I run it under memray (a memory profiler for Python projects), it reports a peak memory usage of only about 107 MB.

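For reference, I profile the job roughly like this (script.py stands in for my actual entry point):

    # attach memray to the parent process and write a capture file;
    # note that without --follow-fork, memray does not record allocations
    # made in child processes forked by the script
    memray run -o profile.bin script.py

    # render the capture as an HTML flame graph
    memray flamegraph profile.bin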
I have asked about this on the Python discussion forum:

https://discuss.python.org/t/concurrent-features-vs-multiprocessing-vs-suprocess/47653/5
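For context, the worker layout in my job is roughly this (a minimal sketch, not my actual code):

    from concurrent.futures import ProcessPoolExecutor

    def work(item):
        # stand-in for the real per-item computation
        return item * item

    if __name__ == "__main__":
        # each worker is a separate OS process, so its allocations happen
        # outside the parent process that the profiler is attached to
        with ProcessPoolExecutor(max_workers=2) as pool:
            results = list(pool.map(work, range(100)))
        print(sum(results))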

Following the suggestions there, I try to turn off multithreading (see the snippet below), but I still see the high memory usage. Why is this happening, and how can I fix it? I cannot request 16 GB of memory for my jobs; very few machines have that much.
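This is roughly how I try to disable the implicit threading, assuming the usual BLAS/OpenMP environment variables (my job may use other libraries with their own knobs):

    import os

    # these caps must be set before numpy/scipy (or anything else linking
    # a BLAS) is imported, otherwise the thread pools already exist
    os.environ["OMP_NUM_THREADS"] = "1"
    os.environ["OPENBLAS_NUM_THREADS"] = "1"
    os.environ["MKL_NUM_THREADS"] = "1"
    os.environ["NUMEXPR_NUM_THREADS"] = "1"

    import numpy as np  # imported only after the caps are in place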

Cheers.