Hi,
I have been using Simics and GEMS for several months, but I just ran
into the following problem after a recent reinstallation. I want to
confirm with the GEMS folks that the root cause I found is really the
problem, and hopefully other people facing the same issue can benefit
from it.
I was using Simics 3.0.23 + GEMS 1.3.1 with 8-core and 16-core
configurations. However, we only ran single-threaded programs on
processors 0 and 1, bound there with processor_bind or psrset. After
installing GEMS 1.4, I started doing some work using all 16 cores and
found that I often (not every time) got very low (close to 0) user
misses but still quite large supervisor misses. A friend studying at
U. of Wisconsin told me to try Simics 2.2, since GEMS is more robust
with it, but I kept getting the same problem after trying different
configurations and Solaris versions.
Looking for the cause, I ran a single-threaded program bound to
different processors and found that on processors 0, 1, 2, and 3 the
user miss rate was normal, while on any other processor it was wrong
(close to zero).
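For anyone who wants to repeat the experiment, this is roughly how I
cycled the program through the processors (a minimal sketch; the
binary name is just a placeholder, and pbind(1M) is the command-line
front end to processor_bind):

import subprocess

for cpu in range(16):
    # Start the (placeholder) single-threaded benchmark, then bind it
    # to one processor with the Solaris pbind(1M) command. A little
    # work may run before the binding takes effect, but that did not
    # change the miss-rate pattern for me.
    p = subprocess.Popen(["./single_thread_prog"])
    subprocess.call(["pbind", "-b", str(cpu), str(p.pid)])
    p.wait()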
I initially had the following configuration for Simics 2.2, since
there were posts on both the Simics forum and the GEMS mailing list
saying that the memory distribution among boards doesn't matter:
@boards = {0 : [[0, 4, 1024], [1, 4, 1024], [2, 4, 1024], [3, 4, 1024]]}
I changed it to:
@boards = {0 : [[0, 4, 4096], [1, 4, 0], [2, 4, 0], [3, 4, 0]]}
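If I read the serengeti target scripts correctly (this is my
assumption, please correct me if wrong), each triple is [board ID,
CPUs on that board, memory in MB], so the change just moves all 4 GB
onto the first board:

@boards = {0 : [[0, 4, 4096],   # board 0: 4 CPUs, all 4096 MB here
                [1, 4, 0],      # boards 1-3: 4 CPUs each, no memory
                [2, 4, 0],
                [3, 4, 0]]}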
Now the problem seems to be gone, at least for the several programs I
tried.
In Simics 3.0, I think there is a line in the script file that sets
mem_per_processor, and the default configuration spreads memory
across the boards in a balanced way. I wonder if that is the cause of
the unsolved problem in
https://lists.cs.wisc.edu/archive/gems-users/2006-November/msg00109.shtml
Can someone confirm this and give a reason? Or maybe I did something
else in a wrong way?
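In case it helps, here is roughly the kind of change I mean for the
Simics 3.0 scripts (a sketch only; the variable names are from
memory and may not match the actual script):

mem_per_processor = 256          # MB per CPU, whatever the script sets
# The default spreads memory evenly, something like
#   per_board = 4 * mem_per_processor   (one share per board)
# whereas the workaround concentrates everything on board 0:
total_megs = 16 * mem_per_processor
boards = {0 : [[0, 4, total_megs], [1, 4, 0], [2, 4, 0], [3, 4, 0]]}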
Thanks a lot!
Qingda Lu