Date: Thu, 10 May 2007 14:10:21 -0500 (CDT)
From: Mike Marty <mikem@xxxxxxxxxxx>
Subject: Re: [Gems-users] Discrepancy in Ruby Statistics
I'm not sure what is going on.  I will say that we run the same
configuration on hundreds of cluster nodes, across different OSes and
32-bit/64-bit hosts, and we haven't seen discrepancies related to the host
machine.

--Mike


>  There could be several things going on:
>
> 1) To verify the setup: Are you issuing identical commands to _Simics_? e.g.
> dstc-disable, cpu-switch-time 1, etc.?
>
> **Yes, the commands used are exactly the same. Basically, I am using the
> same script to dump the statistics.
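
For reference, the identical-setup check in point 1 refers to a Simics command script like the following (a sketch; only dstc-disable and cpu-switch-time 1 are taken from this thread, the other lines are typical GEMS boilerplate and may differ per setup):

```
# Disable Simics' STCs so Ruby observes every memory access
istc-disable
dstc-disable
# Switch between simulated CPUs every cycle
cpu-switch-time 1
# ... run the workload ...
# Dump Ruby's statistics at the end of the run
ruby0.dump-stats output.stats
```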
>
> 2) The RANDOM_SEED will vary from run to run unless you change it. If your
> workload is sensitive to lock acquisition order (most are) then it can make
> a big difference. This is why we run each data point many times to achieve
> 95% confidence intervals. Since the Ruby_cycles difference is only about 2%,
> it could be easily attributable to differences in RANDOM_SEED.
>
> **g_RANDOM_SEED is set to 1 in rubyconfig.defaults for all my simulations
> and I am not changing it anywhere else in my simulation script. Given
> this, can I exclude randomness due to a differing seed? What else could
> cause this difference, since with the same RANDOM_SEED both models should
> produce similar results?
>
> 3) Resolving locks in a particular order *is* random noise, but it is not
> something that should be discarded from simulations -- real applications
> will acquire locks in many different orders, just as the simulator does. A
> sound methodology would be to run the same simulation a few more times and
> take an average, rather than to "decide between A and B".
>
> **Since I have fixed the random seed to a constant value, different runs
> of the same simulation produce exactly the same results. But my concern is
> that the performance benefit (base protocol vs. new protocol) comes out
> differently in my two simulation models (A and B). The performance
> difference (in terms of Ruby cycles) varies between the two models, e.g.
> for one application:
>
> Sim A: New Protocol performs 8% better than base protocol.
> Sim B: New Protocol performs on par with the base protocol (negligible
> performance gain).
>
> I can try running the same simulation multiple times (varying the
> RANDOM_SEED) and verify whether the performance trend remains the same.
>
> -Thank You
> Nitin Bhardwaj
>