Thank you for the suggestion and information. Oddly enough the reverse analyze tells me that the jobs should match the slots:
$ condor_q -analyze -reverse -machine slot1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 2544.0
-- Slot: slot1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx : Analyzing matches for 1 Jobs in 1 autoclusters
The Requirements expression for this slot is
  ( START ) && ( IsValidCheckpointPlatform ) &&
  ( WithinResourceLimits )
  START is ( Owner == "mleblanc" )
This slot defines the following attributes:
  CheckpointPlatform = "LINUX X86_64 3.10.44-74.cernvm.x86_64 normal 0x2aaaaaaab000 ssse3 sse4_1 sse4_2"
  Cpus = 8
  Disk = 153054548
  Memory = 30161
  IsValidCheckpointPlatform = true
  WithinResourceLimits = false
Job 2544.0 has the following attributes:
  TARGET.Owner = "mleblanc"
  TARGET.JobUniverse = 5
  TARGET.NumCkpts = 0
  TARGET.RequestCpus = 1
  TARGET.RequestDisk = 10000000
  TARGET.RequestMemory = 29500
The Requirements expression for this slot reduces to these conditions:
         Clusters
Step    Matched  Condition
-----  --------  ---------
[0]           1  Owner == "mleblanc"
[1]           1  IsValidCheckpointPlatform
[3]           1  WithinResourceLimits
slot1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: Run analysis summary of 1 jobs.
  1 (100.00 %) match both slot and job requirements.
  1 match the requirements of this slot.
  1 have job requirements that match this slot.
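(For what it's worth, since the slot attributes above show WithinResourceLimits = false, one way to see the raw values the startd is actually advertising is to query the slot directly; the slot name and grep pattern here are just illustrative:

$ condor_status -long slot1@<worker> | grep -Ei 'withinresourcelimits|^(cpus|memory|disk) '
)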
But this did get me to check the worker node more closely; there is an odd message in the StartLog:
That got me to investigate the network on the worker, and it turns out some of my contextualization scripts had mangled the worker's network configuration. I'll get back to you on whether fixing those scripts resolves the problems I'm seeing.
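(For reference, one way to locate the StartLog and sanity-check the worker's network from the node itself is along these lines; exact paths and tooling will vary by install and image:

$ condor_config_val STARTD_LOG
$ tail -n 100 $(condor_config_val STARTD_LOG)
$ hostname -f
$ ip addr show
$ ip route
)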