
[HTCondor-users] Cgroups v2 and memory limits for WLCG sites



Hi,

could you please clarify how to use memory limits with HTCondor and cgroups v2? Do we understand correctly that cgroups v2 also accounts page cache (e.g. disk buffers) to the job's (process tree's) memory? Such behavior makes cgroups v2 very hard to use for enforcing memory limits, because it is unpredictable how much page cache our jobs end up using (a less stressed machine => potentially more memory accounted to the job's cgroup).
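To make the question concrete, this is roughly how we look at a job's cgroup today (a minimal sketch in plain Python, not anything HTCondor-specific; the cgroup path is only an example):

from pathlib import Path

cg = Path("/sys/fs/cgroup/htcondor/job_123")   # example path, not necessarily how HTCondor names it

current = int((cg / "memory.current").read_text())   # everything charged to the cgroup

stat = {}
for line in (cg / "memory.stat").read_text().splitlines():
    key, value = line.split()
    stat[key] = int(value)

anon = stat.get("anon", 0)        # anonymous (process) memory
filecache = stat.get("file", 0)   # page cache charged to the cgroup

print(f"memory.current : {current / 2**20:8.1f} MiB  (what the limit is checked against)")
print(f"  anon         : {anon / 2**20:8.1f} MiB")
print(f"  page cache   : {filecache / 2**20:8.1f} MiB  ({100.0 * filecache / max(current, 1):.0f}% of the total)")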

What are our options to enforce reasonable memory limits?

  • don't enforce memory limits via cgroups v2 at all, as described in https://opensciencegrid.atlassian.net/browse/HTCONDOR-2521
  • sacrifice a bit of performance by aggressively dropping page cache with CGROUP_LOW_MEMORY_LIMIT (our understanding of the mechanism is sketched after this list). Which values should be used? Do you have an idea of the impact on performance?
  • other options? recommendations? Could cgroups v2 be configured to enforce limits only on process memory and not include page cache?
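Regarding the second option, our assumption (please correct us if this is wrong) is that a low/soft limit ends up as something like cgroup v2's memory.high, so the kernel reclaims page cache before the job ever reaches the hard memory.max. A minimal sketch of that kernel mechanism on a throw-away cgroup (needs root and the memory controller enabled on the parent; names and numbers are made up):

from pathlib import Path

cg = Path("/sys/fs/cgroup/htc-test")          # made-up test cgroup, not an HTCondor one
cg.mkdir(exist_ok=True)

soft = 2 * 2**30                              # arbitrary example: reclaim starts at 2 GiB
hard = 3 * 2**30                              # arbitrary example: OOM kill only beyond 3 GiB

(cg / "memory.high").write_text(str(soft))    # kernel reclaims (mostly page cache) above this
(cg / "memory.max").write_text(str(hard))     # hard limit, OOM killer beyond this

# On kernels >= 5.19 reclaim can also be forced explicitly
# (the write fails with EAGAIN if that much cannot be reclaimed):
reclaim = cg / "memory.reclaim"
if reclaim.exists():
    reclaim.write_text(str(1 * 2**30))

print((cg / "memory.events").read_text())     # low / high / max / oom / oom_kill counters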

  • We have sites that moved to cgroups v2, and we started to observe random job failures that are very tricky to understand; debugging them is very time consuming. We can easily measure how much memory our jobs need (e.g. with scouting jobs that estimate memory usage), but the page cache size is completely unpredictable to us, and this seems to make cgroups v2 memory limits pretty much unusable. We would like to have clear and simple instructions for HTCondor batch systems, because otherwise enforcing memory limits becomes an operational nightmare on a distributed infrastructure where each site invents its own solution (or even keeps killing jobs based on page cache size).
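If nothing built-in exists, what we would actually be happy with is policing only the process memory (anon + shmem from memory.stat) against the job's requested memory, roughly along the lines of this sketch (the cgroup path and the request value are placeholders, not anything HTCondor provides):

import sys
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/htc-test")   # placeholder job cgroup
REQUEST_MEMORY_MIB = 2048                  # placeholder for the job's RequestMemory

stat = {}
for line in (CGROUP / "memory.stat").read_text().splitlines():
    key, value = line.split()
    stat[key] = int(value)

# count only memory the processes themselves hold, not page cache
used_mib = (stat.get("anon", 0) + stat.get("shmem", 0)) / 2**20

if used_mib > REQUEST_MEMORY_MIB:
    print(f"over request: {used_mib:.0f} MiB anon+shmem > {REQUEST_MEMORY_MIB} MiB", file=sys.stderr)
    sys.exit(1)
print(f"within request: {used_mib:.0f} MiB anon+shmem <= {REQUEST_MEMORY_MIB} MiB")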

    Petr