Re: [Condor-users] memory "sharing" question
- Date: Tue, 21 Mar 2006 16:27:45 -0000
- From: "Kewley, J \(John\)" <j.kewley@xxxxxxxx>
- Subject: Re: [Condor-users] memory "sharing" question
> I have dual-CPU computers with 2 GB RAM, so the Ad says that
> each CPU has 1 GB. Does that mean that if a process needs
> more than 1 GB of memory, it won't get it? Or will it get this
> "over"-memory?
If the job's requirements (some of which are generated automatically)
say that it needs more than 1 GB, I don't think it will be matched.
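
For illustration, a minimal submit-file sketch (the executable name
and the 1200 MB figure are placeholders of mine):

    universe     = vanilla
    executable   = big_job            # placeholder name
    # Only match machines advertising more than 1.2 GB; the Memory
    # attribute in the machine ad is in megabytes.
    requirements = (Memory > 1200)
    queue

Even without an explicit clause, condor_submit appends a default
requirement along the lines of ((Memory * 1024) >= ImageSize), so a
job whose image is bigger than the advertised per-CPU memory will
never match anyway.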
> How would it be possible to declare that a CPU has 2 GB?
> Because, on the other hand, I have processes which use hardly
> any memory, so in the best case I could run 2 processes on
> 2 CPUs: one with almost 2 GB of RAM and one with almost
> nothing. But I don't want to reserve the CPU for only these
> kinds of jobs; sometimes I don't have any, so the CPU must
> stay free for other purposes...
What would be nice would be an option that lets a job's memory
allowance vary dynamically depending on what is already running on
the node, but I don't think that is possible. You would then
advertise each machine as having a SHARED_MAXMEMORY pooled between
its processors.
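
Purely to illustrate that idea (no such knob exists in Condor; this
is hypothetical config, not a real setting):

    # HYPOTHETICAL -- Condor has no such parameter; sketching the
    # idea: both procs draw from one 2 GB pool instead of being
    # pinned to fixed 1 GB halves.
    SHARED_MAXMEMORY = 2048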
Current alternatives include:
* As now: pretend each proc has a maximum of 1 GB.
* If you have predominantly large jobs, pretend each node has only 1 proc with 2 GB.
* Pretend you have more than 1 GB on each node by setting MEMORY yourself.
* Set one proc to 1.5 GB and the other to 0.5 GB, so smaller jobs go to one proc and larger ones to the other (see the sketch below).
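
As a rough sketch, the last three of those might look something like
this in condor_config (pick one; knob names vary between Condor
versions, and the slot-type syntax below is the newer spelling of
what 6.x calls virtual-machine types, so check your manual):

    # (a) Pretend each node has a single proc with all the RAM:
    NUM_CPUS = 1

    # (b) Advertise 3 GB total (megabytes) so each of the two
    #     procs shows 1.5 GB rather than the detected 1 GB:
    MEMORY = 3072

    # (c) Split the node unevenly into 1.5 GB and 0.5 GB slots:
    NUM_SLOTS_TYPE_1 = 1
    SLOT_TYPE_1 = cpus=1, memory=1536
    NUM_SLOTS_TYPE_2 = 1
    SLOT_TYPE_2 = cpus=1, memory=512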
There may be other ways; this is the best I can do for now.
JK