I am curious whether the developers have any updates to the general
description given in
https://research.cs.wisc.edu/htcondor/wiki-archive/pages/HowToManageLargeCondorPools/
of how an AP's CPU and memory requirements scale with the size of the
prospective job queue.
For reference, the condor_schedd configuration entries:
https://htcondor.readthedocs.io/en/latest/admin-manual/configuration-macros.html#condor-schedd-configuration-file-entries
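
For context, here is a sketch of the schedd-side knobs I assume matter
most at that scale; the values and the spool path are placeholders from
my reading of the docs, not tested recommendations:

    # Cap on the number of jobs permitted in this AP's queue.
    MAX_JOBS_SUBMITTED = 250000

    # Cap on concurrently running jobs; each running job has a
    # condor_shadow process on the AP, which dominates CPU/memory
    # on the running side.
    MAX_JOBS_RUNNING = 10000

    # Forked workers that answer condor_q queries, so large queries
    # do not block the main schedd loop.
    SCHEDD_QUERY_WORKERS = 16

    # The job queue transaction log; every queue change is a synced
    # write here, so fast local storage seems prudent (this path is
    # hypothetical).
    JOB_QUEUE_LOG = /mnt/nvme/condor/spool/job_queue.log

In particular, I assume AP memory grows roughly linearly with the
number of queued job ClassAds, plus a per-shadow cost for running
jobs, but I would welcome corrections on that mental model.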
With modern servers able to have hundreds of GB of system memory, is
it possible to get queues of jobs (pending >> running) into the 250k
range or higher? Or does the speed of storage or network communication
become the bottleneck before the queue gets that large?
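
Back-of-envelope: if each queued job costs the schedd on the order of
tens of KB of memory (a figure I am guessing at, not one from the
docs), then 250,000 jobs at, say, 50 KB each is only about 12.5 GB.
That is why I would expect RAM to stop being the limit on modern
hardware and something else, such as synced writes to job_queue.log or
query traffic, to bite first.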
Cheers,
Matt
Matthew T. West (he,him,his) | Systems Programmer/Administrator
UNC Charlotte | University Research Computing (office of OneIT)
9214 South Library Lane | Room 301 | Charlotte, NC 28223
Phone: 704-687-8766
mwest53@xxxxxxxxxxxxx | http://www.charlotte.edu/urc/