Hi Jiang,

do the idling jobs have different requirements than the starting jobs, or are all jobs more or less equal? If, for example, some of the jobs require multiple cores, you may have to add a defrag daemon to get the requested resources freed.

Cheers,
Thomas

On 2016-10-22 18:08, jiangxw@xxxxxxxxxxxxxxx wrote:
> Dear all,
> We use HTCondor in our local cluster and have divided the resources into
> different groups.
> But we have a negotiation problem:
> some jobs in the schedd stay idle for a long time, even though lots of slots
> in the collector are idle.
> However, these idle jobs are negotiated to run on the
> machines when new jobs are submitted to the schedd.
>
> Our group configuration is as follows:
> GROUP_NAMES = cms, juno, physics, higgs, dyw, hxmt
> GROUP_QUOTA_DYNAMIC_cms = 1.0
> GROUP_QUOTA_DYNAMIC_juno = 1.0
> GROUP_QUOTA_DYNAMIC_higgs = 1.0
> GROUP_QUOTA_DYNAMIC_physics = 1.0
> GROUP_QUOTA_DYNAMIC_dyw = 1.0
> GROUP_QUOTA_DYNAMIC_hxmt = 1.0
> GROUP_ACCEPT_SURPLUS = true
>
> We don't want the quota to limit the number of slots used by each
> group, so we set dynamic quotas.
> Is there some wrong configuration causing the negotiation
> problem? Thanks for the help.
>
> Best regards,
> Jiang Xiaowei
> ------------------------------------------------------------------------
> NAME: Jiang Xiaowei
> MAIL: jiangxw@xxxxxxxxxxxxxxx
> TEL: 010 8823 6024
> DEPARTMENT: Computing Center of IHEP
>
>
> _______________________________________________
> HTCondor-users mailing list
> To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with the
> subject: Unsubscribe
> You can also unsubscribe by visiting
> https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users
>
> The archives can be found at:
> https://lists.cs.wisc.edu/archive/htcondor-users/
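[For reference: enabling the defrag daemon that Thomas mentions is done in the central manager's condor_config. The fragment below is a minimal sketch with illustrative, untuned values; the macro names are standard HTCondor defrag knobs, but the thresholds would need to be adapted to the pool.]

```
# Run condor_defrag alongside the existing daemons
DAEMON_LIST = $(DAEMON_LIST) DEFRAG

# Drain at most one machine at a time (illustrative limit)
DEFRAG_MAX_CONCURRENT_DRAINING = 1

# Only consider partitionable slots that have already been carved up
DEFRAG_REQUIREMENTS = PartitionableSlot && TotalCpus > Cpus

# A machine counts as "whole" again when all of its cores are free,
# so multi-core jobs can match it
DEFRAG_WHOLE_MACHINE_EXPR = Cpus == TotalCpus
```

The defrag daemon periodically drains busy partitionable machines so that large (e.g. multi-core) requests can be satisfied; without it, a pool full of single-core dynamic slots may never free a big enough block for such jobs.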