Hello,

I'm running HTCondor 8.8.4 on Windows 10 and am seeing the following error in the ShadowLog of one of my schedd hosts:

08/02/19 17:12:36 (321.0) (12116): Job 321.0 terminated: exited with status 0
08/02/19 17:12:36 (321.0) (12116): Reporting job exit reason 100 and attempting to fetch new job.
08/02/19 17:12:36 (321.0) (12116): ERROR: SharedPortEndpoint: Named pipe does not exist.
08/02/19 17:12:36 (321.0) (12116): SharedPortEndpoint: Destructor: Problem in thread shutdown notification: 0
08/02/19 17:12:36 (321.0) (12116): **** condor_shadow (condor_SHADOW) pid 12116 EXITING WITH STATUS 100

The error doesn't appear to have a negative impact on the function of the schedd: jobs are submitted, negotiated, and run successfully. A quick web search for the error string led me to the shadow code on GitHub, but I'm not familiar enough with the code to judge the impact from there:

if(child_pipe == INVALID_HANDLE_VALUE)
{
    dprintf(D_ALWAYS, "ERROR: SharedPortEndpoint: Named pipe does not exist.\n");
    MSC_SUPPRESS_WARNING_FOREVER(6258) // warning: Using TerminateThread does not allow proper thread clean up
    TerminateThread(thread_handle, 0);
    break;
}

It looks as though this is meant to handle a condition that is expected when running the shared port daemon on Windows, but I'm not sure. Could anyone share some insight on how to resolve this error, or whether it matters at all? For reference, I've listed the shared-port settings I'm assuming are relevant in a P.S. after the config. My condor_config is listed below.

Thanks and Regards,
Mark O'Neal
Leica Geosystems, Inc.

######################################################################
##
##  condor_config
##
##  This is the global configuration file for condor. This is where
##  you define where the local config file is. Any settings
##  made here may potentially be overridden in the local configuration
##  file. KEEP THAT IN MIND! To double-check that a variable is
##  getting set from the configuration file that you expect, use
##  condor_config_val -v <variable name>
##
##  condor_config.annotated is a more detailed sample config file
##
##  Unless otherwise specified, settings that are commented out show
##  the defaults that are used if you don't define a value. Settings
##  that are defined here MUST BE DEFINED since they have no default
##  value.
##
######################################################################

## Where have you installed the bin, sbin and lib condor directories?
RELEASE_DIR = C:\condor

## Where is the local condor directory for each host? This is where the local config file(s), logs and
## spool/execute directories are located. This is the default for Linux and Unix systems.
#LOCAL_DIR = $(TILDE)
## This is the default on Windows systems.
#LOCAL_DIR = $(RELEASE_DIR)

## Where is the machine-specific local config file for each host?
LOCAL_CONFIG_FILE = $(LOCAL_DIR)\condor_config.local
## If your configuration is on a shared file system, then this might be a better default
#LOCAL_CONFIG_FILE = $(RELEASE_DIR)\etc\$(HOSTNAME).local
## If the local config file is not present, is it an error? (WARNING: This is a potential security issue.)
REQUIRE_LOCAL_CONFIG_FILE = FALSE

## The normal way to do configuration with RPMs is to read all of the
## files in a given directory that don't match a regex as configuration files.
## Config files are read in lexicographic order.
LOCAL_CONFIG_DIR = $(LOCAL_DIR)\config
#LOCAL_CONFIG_DIR_EXCLUDE_REGEXP = ^((\..*)|(.*~)|(#.*)|(.*\.rpmsave)|(.*\.rpmnew))$

## Use a host-based security policy. By default CONDOR_HOST and the local machine will be allowed.
use SECURITY : HOST_BASED

## To expand your condor pool beyond a single host, set ALLOW_WRITE to match all of the hosts
#ALLOW_WRITE = *.cs.wisc.edu

## FLOCK_FROM defines the machines that grant access to your pool via flocking. (i.e. these machines can join your pool).
#FLOCK_FROM =

## FLOCK_TO defines the central managers that your schedd will advertise itself to (i.e. these pools will give matches to your schedd).
#FLOCK_TO = condor.cs.wisc.edu, cm.example.edu

##--------------------------------------------------------------------
## Values set by the condor_configure script:
##--------------------------------------------------------------------

CONDOR_HOST = AATLSRVGCSMGR01
UID_DOMAIN = lgs-net.com
CONDOR_ADMIN =
SMTP_SERVER =
ALLOW_READ = *.lgs-net.com
ALLOW_WRITE = *.lgs-net.com
ALLOW_ADMINISTRATOR = *.lgs-net.com
START = FALSE
WANT_VACATE = FALSE
WANT_SUSPEND = TRUE
DAEMON_LIST = MASTER SCHEDD

##--------------------------------------------------------------------
## Values configured for implementation of central cred store
## Adapted from default configuration in
## C:\condor\etc\condor_config.local.credd
##--------------------------------------------------------------------

## Computer where the condor_credd is running
CREDD_HOST = AATLSRVGCSMGR01

## Required parameters to allow jobs to run as owner, don't change
STARTER_ALLOW_RUNAS_OWNER = True
CREDD_CACHE_LOCALLY = True

## Required security setting for authentication method
## to access the central cred store, don't change
SEC_CLIENT_AUTHENTICATION_METHODS = NTSSPI, PASSWORD

## Domain user who can remotely store pool password from credd host
## use @* to simplify domain notation
ALLOW_CONFIG = condor@*

## Required security settings for allow_config setting above,
## don't change.
SEC_CONFIG_NEGOTIATION = REQUIRED
SEC_CONFIG_AUTHENTICATION = REQUIRED
SEC_CONFIG_ENCRYPTION = REQUIRED
SEC_CONFIG_INTEGRITY = REQUIRED
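
P.S. For reference, these are the shared-port-related settings I was going to check on the schedd host, using condor_config_val -v as suggested in the stock config header above. I'm assuming USE_SHARED_PORT, DAEMON_SOCKET_DIR, and SHADOW_DEBUG are the relevant knobs here; please correct me if there are better ones to look at.

REM Is this host actually using the shared port daemon?
condor_config_val -v USE_SHARED_PORT

REM Where does shared port keep its daemon sockets / named pipes?
condor_config_val -v DAEMON_SOCKET_DIR

REM Current shadow debug level; I could raise this (e.g. D_FULLDEBUG) in the
REM local config to get more context around the destructor error.
condor_config_val -v SHADOW_DEBUG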