I meant, what do the response latency and the request latency measure? Is
the request latency the time between when a request is sent to a cache
and the time the cache receives it? Is the response latency the time
it takes to send a response from the cache? Shouldn't those be the
same thing?
When we increased the latency, the cache misses halved from 100,000 to
50,000. All subsequent runs at either latency setting come very close to
these values (within one or two misses).
On 4/20/2010 1:06 PM, Dan Gibson wrote:
1. You can grep through a protocol's .sm files to find out
what latencies it uses (a quick sketch of this follows below).
2. You can never trust a comparison of only one run. When simulating
multiprocessors, small timing perturbations can cause threads to take a
different path. This is why we typically use many (~10) runs per
configuration to achieve a reasonable 95% confidence interval; a small
example of that calculation also follows below. See Alameldeen and Wood:
http://www.cs.wisc.edu/multifacet/papers/hpca03_variability.pdf
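
For point 1, here is a minimal sketch of how to list those latencies. The
protocols/ path is an assumption about a standard GEMS checkout; a plain
grep -in over the same .sm files does exactly the same job:

import glob

# Print every latency-related line in the MOESI_CMP_directory SLICC files.
# Adjust the glob pattern to wherever the .sm files live in your tree.
for path in sorted(glob.glob("protocols/MOESI_CMP_directory-*.sm")):
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            if "latency" in line.lower():
                print("%s:%d: %s" % (path, lineno, line.rstrip()))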
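And for point 2, a small example of the confidence-interval calculation
itself (this is not GEMS code, and the miss counts below are made up):
collect the same statistic from ~10 runs of one configuration and report
the mean with a 95% interval, then compare the intervals of the two
configurations rather than single runs.

import math
import statistics

# Hypothetical miss counts from 10 runs of the same configuration.
misses = [49810, 50230, 50015, 49950, 50400, 49720, 50120, 50290, 49880, 50060]

mean = statistics.mean(misses)
# Standard error of the mean from the sample standard deviation.
sem = statistics.stdev(misses) / math.sqrt(len(misses))
# Two-tailed t value for 95% confidence with n-1 = 9 degrees of freedom.
t_95 = 2.262
half_width = t_95 * sem

print("mean = %.1f, 95%% CI = [%.1f, %.1f]"
      % (mean, mean - half_width, mean + half_width))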
Regards,
Dan
On Tue, Apr 20, 2010 at 1:03 PM, Mark Samuelson <msamuelson@xxxxxxxx> wrote:
We are using MOESI_CMP_directory. And yes, we checked to make sure
everything else was the same.
Philip Garcia wrote:
What protocol are you using? I know many of these
parameters aren't used under some protocols, and even if they are,
increasing them shouldn't reduce runtime. Are you sure you're running
on the same checkpoint, and that nothing else changed?
Phil
On Apr 20, 2010, at 12:37 PM, Mark Samuelson wrote:
--
http://www.cs.wisc.edu/~gibson
[esc]:wq!