
Re: [HTCondor-users] Does HTC limit CPU usage on macOS?




> On 29 Jul 2025, at 2:28 AM, Greg Thain via HTCondor-users <htcondor-users@xxxxxxxxxxx> wrote:
> 
> On 7/27/25 19:45, Neil Clayton via HTCondor-users wrote:
>> I've been learning HTC over the weekend, and have a small cluster working across 3 machines: a linux box (50+ cores), an M4 Mac Pro, and some other linux thing.
>> 
>> The macOS box (named speedy4) has the condor executor installed manually, as root.
>> An executor linux box (50+ cores) exists, named "happy".
>> The central manager + submit + negotiator (everything else, *not* an executor) is on a separate linux box (named: containery).
>> 
>> I can queue + run jobs over the machines in the cluster fine (it's a java universe job, btw). Using dynamic slots, jobs are allocated as I'd expect.
>> I am queuing directly from "containery" (a submit node).
> 
> 
> HTCondor doesn't explicitly limit cpu on the Mac, but Java universe jobs are passed the memory-limiting option "-Xmx" to the java runtime, which should roughly correspond to the "request_memory" command in the submit file.  We have seen some cases where a lower allocation of memory to the java job causes the garbage collector to work a lot more than it otherwise would, and slow the job down.  If you run it with a much larger request_memory, does it get faster?

I wondered about this as well, and have seen similar slowdowns when running code in an IDE and forgetting to raise the JVM's upper RAM limit.
However:

a) the job runs as expected if I put "happy" (the linux host) back into the cluster. It works fine on that host.
b) if I put a docker container (executor) into the cluster, running on the M4, that also works fine (and you can see the process itself is taking up 30 GB of RAM, so I figure the -Xmx flag is being passed correctly).
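
That said, I'll still try Greg's suggestion and rerun with a much larger request_memory, to rule out GC pressure. Roughly something like this (just a sketch of what I mean; the class name and numbers are placeholders, not my actual submit file):

    universe       = java
    executable     = MyJob.class
    arguments      = MyJob
    request_cpus   = 8
    # bumped well above what the job should actually need
    request_memory = 48 GB
    queue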


Just re-reading.
OK, so should I not be passing -Xmx in myself (I currently do, via java_vm_args)?
If I don't pass my own -Xmx, the JVM appears to be allocated around ~16 GB of RAM (which is too low, and definitely thrashes the GC), even though I'm asking for ~24 GB.
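
For reference, the relevant part of my submit file currently looks roughly like this (again a sketch; the class name is a placeholder and the numbers are approximate):

    universe       = java
    executable     = MyJob.class
    arguments      = MyJob
    java_vm_args   = -Xmx24g
    request_memory = 24 GB
    request_cpus   = 8
    queue

So the question is whether I should drop the java_vm_args line and rely on the -Xmx that HTCondor passes on its own (which, per Greg's note, should roughly track request_memory), given that without it the heap seems to end up around 16 GB.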

Just tried a macOS native executor on another (Intel, macOS 13) Mac, and it's running up to 100%.
I don't have another Apple Silicon machine to try, so at the moment I can't tell if it's architecture-related. Oh, the M4 is running macOS 15. Who knows if that matters.

I'll keep plugging away. It helps to know HTCondor itself isn't the thing throttling the job...

--neil