Thanks Todd. After your comment I found the following link with details:

I was trying to understand further how HTCondor uses FDs by submitting a batch of 3k jobs. I reduced the open-file limits to 100 (soft) and 2k (hard). I thought I would not be able to run more than 2k jobs, but I still saw all 3k jobs running. The number of allocated file handles increased by approximately 60k and dropped back to about 21k after the jobs were removed:

# cat /proc/sys/fs/file-nr
21568 0 6573632
# cat /proc/sys/fs/file-nr
81472 0 6573632

Enabling FD logging doesn't show me too many FDs used by condor:

SCHEDD_DEBUG = D_FDS
SHADOW_DEBUG = D_FDS
SHARED_PORT_DEBUG = D_FDS

Basically I am trying to understand: where does condor use FDs? That would help me answer which limits condor can hit if we don't bump the descriptor values.

Thanks & Regards,
Vikrant Aggarwal

On Mon, Mar 8, 2021 at 10:22 PM Todd L Miller <tlmiller@xxxxxxxxxxx> wrote:
> Finally able to get the parameters due to which it was happening but didn't
> understand why it's happening.
    IIRC, HTCondor closes (almost?) all FDs after fork()ing* but
before exec()ing the shadow. There was not, until relatively recently, a
way to close all the FDs associated with a process; you had to make a
system call for each FD. When you have to close 102,400 FDs, that's a lot
of system calls, and it takes a while.
- ToddM
*: On Linux, HTCondor actually calls clone().
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users
The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/