[HTCondor-users] job memory requirements and free memory vs cached
- Date: Fri, 23 Oct 2015 14:42:01 -0700
- From: Michael Paterson <mhp@xxxxxxx>
- Subject: [HTCondor-users] job memory requirements and free memory vs cached
Hello,
I'm trying to run 4 single-core jobs on a 4-CPU box with partitionable
slots and ~7500 MB of memory; each job requests 1500 MB via its memory
requirement.
Sometimes all 4 slots will start up on a machine, but other times only 3 do.
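For reference, a minimal submit description along the lines described above might look like this (the executable name is a placeholder; request_memory is in MB):

```
# hypothetical submit file; executable name is illustrative
universe       = vanilla
executable     = basf2.sh
request_cpus   = 1
request_memory = 1500
queue 4
```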
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7804 slot03 30 10 1381m 787m 26m R 100.0 10.5 63:25.24 basf2
10279 slot02 30 10 1347m 793m 64m R 100.0 10.6 41:47.69 basf2
6322 slot01 30 10 1546m 891m 21m R 98.4 11.9 386:24.91 basf2
# free -m
total used free shared buffers cached
Mem: 7514 6824 689 0 69 3320
-/+ buffers/cache: 3434 4080
Swap: 16383 10 16373
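On a 2.6.32 kernel there is no MemAvailable field in /proc/meminfo, but free + buffers + cached is a reasonable approximation of what the kernel could reclaim for new jobs. A small sketch to compute that:

```shell
#!/bin/sh
# Estimate reclaimable memory on older kernels that lack MemAvailable:
# MemFree + Buffers + Cached approximates what could be freed for new jobs.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
buffers_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
avail_mb=$(( (free_kb + buffers_kb + cached_kb) / 1024 ))
echo "roughly reclaimable: ${avail_mb} MB"
```

In the `free -m` output above that would be 689 + 69 + 3320, i.e. roughly the 4080 MB shown on the "-/+ buffers/cache" line.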
Is the ~3 GB in cache preventing the 4th job from getting a slot? If so,
is there a way to limit the cache size, or to force the machine to give
it up, so that Condor sees enough free memory to start another job?
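Two possible angles on this, both sketches rather than confirmed fixes for this setup. On the kernel side, `sync && sysctl -w vm.drop_caches=3` (as root) asks the kernel to discard clean page cache, dentries, and inodes; it is safe but can hurt I/O performance until the cache warms back up. On the Condor side, the detected memory can be overridden in the execute node's condor_config; MEMORY and RESERVED_MEMORY are real HTCondor configuration macros, though the values below are illustrative:

```
# condor_config on the execute node (values illustrative)
MEMORY = 7500            # advertise the full physical memory, in MiB
RESERVED_MEMORY = 0      # hold no memory back from the partitionable slot
```

Note that by default HTCondor advertises detected physical memory, not free memory, so whether either workaround applies depends on what the startd is actually reporting.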
I've seen a few references to vm.pagecache, but attempting to set it
gives an error:
# sysctl -w vm.pagecache="20"
error: "vm.pagecache" is an unknown key
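As far as I know, vm.pagecache was a Red Hat-specific tunable in older RHEL kernels and does not exist in the 2.6.32 line, which would explain the "unknown key" error. You can enumerate the vm.* tunables your kernel actually exposes:

```shell
#!/bin/sh
# List the vm.* sysctl keys this kernel actually supports; on a 2.6.32
# kernel, expect vm.drop_caches and vm.vfs_cache_pressure but no vm.pagecache.
vm_keys=$(ls /proc/sys/vm)
echo "$vm_keys" | sed 's/^/vm./'
```

If cache pressure is really the issue, vm.vfs_cache_pressure (which tunes how aggressively dentries and inodes are reclaimed) may be the closest supported knob, though it does not cap page cache size directly.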
The machine is running Scientific Linux release 6.6 (Carbon) with
kernel vmlinuz-2.6.32-573.
Thank you