Greg,
Thanks. It's working now.
-----------------------------------
GERARD WEATHERBY
Application Architect
NMRhub
nmrhub.org

From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> on behalf of Greg Thain via HTCondor-users <htcondor-users@xxxxxxxxxxx>
Date: Monday, April 22, 2024 at 11:04 AM
To: htcondor-users@xxxxxxxxxxx <htcondor-users@xxxxxxxxxxx>
Cc: Greg Thain <gthain@xxxxxxxxxxx>
Subject: Re: [HTCondor-users] JOB_DEFAULT_REQUESTMEMORY
On 4/19/2024 11:04 AM, Weatherby,Gerard wrote:
Jobs that don't specify request_memory and exceed 128 MB are getting killed. We would like to increase the default and have tried setting
JOB_DEFAULT_REQUESTMEMORY = 2048
in configuration files on the node jobs are submitted from (our access point), our central manager, and our execute nodes. It does not seem to be having any effect.
condor_config_val JOB_DEFAULT_REQUESTMEMORY does output 2048.
Hi Gerard:
JOB_DEFAULT_REQUESTMEMORY only needs to be set on the access point, but takes effect only for jobs that are submitted after the parameter is set.
-greg
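
A minimal sketch of the sequence Greg describes, assuming a standard HTCondor install; the config file name and job.sub below are placeholders, while condor_reconfig, condor_config_val, and condor_submit are standard HTCondor commands.

    # On the access point only; the file name is hypothetical.
    # /etc/condor/config.d/99-memory-default.config:
    JOB_DEFAULT_REQUESTMEMORY = 2048

    # Tell the running daemons to re-read their configuration.
    condor_reconfig

    # Verify the live value; should print 2048.
    condor_config_val JOB_DEFAULT_REQUESTMEMORY

    # Only jobs submitted from this point on pick up the new default;
    # jobs already in the queue keep their original request_memory.
    condor_submit job.sub    # job.sub is a placeholder submit description file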