Dear Mr. Candler,

On 07/10/2012 04:19 PM, Brian Candler wrote:
If one has a large number of jobs to submit - say 100,000 jobs - what is the recommended way of doing this? Can simply submitting that number of jobs cause problems? These jobs will take their input from a shared filesystem. From what I've read, each job will take ~10KB of RAM in the schedd, so 100K jobs would be about 1GB of RAM just for the job queue. If I can afford that, is there anything else to worry about?
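As a quick sanity check on the estimate in the question (taking the ~10KB-per-job figure at face value as a rule of thumb, not a measured constant):

```python
# Back-of-the-envelope check of the schedd memory estimate quoted above.
# The ~10 KB-per-job figure comes from the question itself; treat it as
# a rough rule of thumb.
per_job_bytes = 10 * 1024          # ~10 KiB of schedd RAM per queued job
num_jobs = 100_000

total_bytes = per_job_bytes * num_jobs
total_gib = total_bytes / (1024 ** 3)
print(f"{total_gib:.2f} GiB")      # roughly 1 GiB for the whole queue
```

So the "about 1G of RAM" figure in the question checks out, which is why memory headroom (and address space, below) is the main concern.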
You can configure more than one scheduler (schedd). You should also consider running it on a 64-bit architecture, because of the amount of memory a single daemon has to allocate.

--
Best Regards,
Martin Kudlej
MRG/Grid Senior Quality Assurance Engineer
Red Hat Czech s.r.o.
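[Editor's note: a second schedd instance is typically defined in the condor_config via a local name. The sketch below follows that pattern; the instance name, log, and spool paths are illustrative, not from the original mail.]

```
# Define a second schedd instance (SCHEDD2 is an arbitrary illustrative name)
SCHEDD2      = $(SCHEDD)
SCHEDD2_ARGS = -local-name schedd2

# Per-instance settings so the two schedds do not share state
SCHEDD.SCHEDD2.SCHEDD_NAME = schedd2@
SCHEDD.SCHEDD2.SCHEDD_LOG  = $(LOG)/SchedLog.schedd2
SCHEDD.SCHEDD2.SPOOL       = $(SPOOL)/schedd2

DAEMON_LIST = $(DAEMON_LIST), SCHEDD2
```

Jobs can then be directed to the second instance with `condor_submit -name schedd2@<hostname>`, spreading the 100K-job queue (and its memory footprint) across daemons.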