1. In theory, the two latencies represent the time required to respond with data (response) versus the time to respond with a control message (request), which relates to the tag/data array organization. In practice, many protocols simply use one latency and ignore the others. Hence, my earlier advice stands: grep through the protocol to find out which ones are actually used.
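For example (a sketch only: the file path, action name, and latency identifier below are made up to stand in for whatever your GEMS checkout actually defines):

```shell
# Toy .sm fragment standing in for a real protocol file such as a
# MOESI_CMP_directory L2 controller in a GEMS tree.
cat > /tmp/toy_protocol.sm <<'EOF'
action(d_sendData, "d") {
  enqueue(responseNetwork_out, ResponseMsg, latency="L2_RESPONSE_LATENCY") {
  }
}
EOF

# Pointed at the real .sm files instead, this shows which latency
# parameters the protocol actually references.
grep -n "LATENCY" /tmp/toy_protocol.sm
```

Any latency parameter that never shows up in the grep output is simply not used by that protocol.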
2. Are you running for a fixed number of instructions? If so, it is likely that one or more processors are spinning, completing instructions very quickly. The non-spinning processors spend more time waiting on memory (because you have lengthened the cache latency), so they execute a smaller share of the fixed instruction budget, which reduces the total number of misses.
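A toy model makes the effect concrete (nothing here comes from the simulator; the two-core setup, miss pattern, and rates are all invented):

```python
def misses_for_run(mem_latency_cycles, total_instr_budget=200_000):
    """Toy two-core run that ends after a fixed combined instruction count.

    Core A spins in cache (one instruction per cycle, no misses).
    Core B misses on every 10th instruction and then stalls for
    mem_latency_cycles. The miss pattern and rates are invented.
    """
    instr_a = instr_b = misses = stall_b = 0
    while instr_a + instr_b < total_instr_budget:
        instr_a += 1               # the spinner retires every cycle
        if stall_b > 0:
            stall_b -= 1           # core B is waiting on memory
        else:
            instr_b += 1
            if instr_b % 10 == 0:  # invented miss pattern
                misses += 1
                stall_b = mem_latency_cycles
    return misses
```

With a longer latency the spinner consumes more of the fixed budget, so core B executes fewer miss-generating instructions and the total miss count drops, even though nothing about the cache improved.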
Regards, Dan
On Tue, Apr 20, 2010 at 2:18 PM, Mark Samuelson <msamuelson@xxxxxxxx> wrote:
I meant: what do the response latency and the request latency measure? Is
the request latency the time between when a request is sent to a cache
and when the cache receives it? Is the response latency the time
it takes to send a response from the cache? Shouldn't those be the
same thing?
When we increased the latency, the cache misses halved, from 100000 to
50000. All subsequent runs at either latency setting are very close to
these values (within one or two misses).
On 4/20/2010 1:06 PM, Dan Gibson wrote:
1. You can grep through a protocol's .sm files to find out
what latencies it uses.
2. You can never trust a comparison of only one run. When simulating
multiprocessors, small timing perturbations can cause threads to take a
different path. This is why we typically use many (~10) runs per
configuration to achieve a reasonable 95% confidence interval. See
Alameldeen and Wood: http://www.cs.wisc.edu/multifacet/papers/hpca03_variability.pdf
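A quick sketch of the interval computation (the runtimes below are invented; note the 1.96 z value is the normal approximation, and for ~10 runs a Student's t critical value, about 2.26 at 9 degrees of freedom, is slightly wider and more appropriate):

```python
import math
import statistics

def ci95(samples):
    """Approximate 95% confidence interval for the mean of per-run
    results, using the normal z value 1.96."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - 1.96 * sem, mean + 1.96 * sem

# Invented runtimes (cycles) from repeated runs of one configuration:
runtimes = [1.02e6, 0.98e6, 1.05e6, 0.97e6, 1.01e6,
            1.00e6, 1.03e6, 0.99e6, 1.04e6, 0.96e6]
low, high = ci95(runtimes)
```

Two configurations only differ meaningfully when their intervals do not overlap; a single run per configuration tells you nothing about where inside the interval that run fell.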
Regards,
Dan
On Tue, Apr 20, 2010 at 1:03 PM, Mark Samuelson <msamuelson@xxxxxxxx> wrote:
We are using MOESI_CMP_directory. And yes, we checked to make sure
everything else was the same.
Philip Garcia wrote:
What protocol are you using? I know many of these
parameters aren't used under some protocols, and even if they are,
increasing them shouldn't reduce runtime. Are you sure you're running
on the same checkpoint, and that nothing else changed?
Phil
On Apr 20, 2010, at 12:37 PM, Mark Samuelson wrote:
We are running a simple program and found increasing the L2 request and
response latencies improved both the cache miss rate and the overall
runtime.
Does anyone know what the difference between the request, response, and
tag latencies is?