Which protocol are you using? If it is an SMP protocol, the L2 might be
private. In most CMP protocols the L2 is shared, but even then it might
be non-inclusive.
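To make the inclusive/non-inclusive distinction concrete, here is a toy sketch (my own illustration, not GEMS code): a fully associative LRU L1 backed by an L2 of the same capacity. With inclusion, the L2 holds nothing beyond what the L1 already has, so it contributes almost no hits; run as a non-inclusive victim cache, it can.

```python
# Toy model (not GEMS): an L1 backed by an equal-sized L2, both LRU and
# fully associative. Shows why an inclusive L2 no bigger than the L1 it
# backs adds almost no hits, while a non-inclusive (victim) L2 does.

def l2_hits(accesses, l1_blocks, l2_blocks, inclusive):
    l1, l2, hits = [], [], 0
    for addr in accesses:
        if addr in l1:                        # L1 hit: refresh LRU order
            l1.remove(addr)
            l1.append(addr)
            continue
        if addr in l2:                        # L1 miss, L2 hit
            hits += 1
            l2.remove(addr)
            l2.append(addr)
        elif inclusive:                       # inclusive: fill L2 on every miss
            l2.append(addr)
            if len(l2) > l2_blocks:
                victim = l2.pop(0)
                if victim in l1:              # inclusion back-invalidates the L1
                    l1.remove(victim)
        l1.append(addr)                       # fill the L1
        if len(l1) > l1_blocks:
            evicted = l1.pop(0)
            if not inclusive:                 # victim cache: L1 evictions fill L2
                l2.append(evicted)
                if len(l2) > l2_blocks:
                    l2.pop(0)
    return hits

stream = list(range(8)) * 4                   # 8-block working set, reused 4x
print(l2_hits(stream, 4, 4, inclusive=True))  # inclusive L2: 0 hits
print(l2_hits(stream, 4, 4, inclusive=False)) # victim L2: some hits
```

With the working set twice the L1 capacity, the inclusive L2 only ever mirrors the L1 contents, so every L1 miss is also an L2 miss.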
On Thu, Apr 8, 2010 at 8:05 PM, Nilay Vaish <nilay@xxxxxxxxxxx> wrote:
> Dan, thanks for the help.
>
> One more question. Below is an excerpt from the output produced by Ruby --
>
> Chip Config
> -----------
> Total_Chips: 1
>
> L2Cache_L2_TBEs numberPerChip: 16
> TBEs_per_TBETable: 128
>
> L2Cache_L2cacheMemory numberPerChip: 16
> Cache config: L2Cache_0
> cache_associativity: 4
> num_cache_sets_bits: 12
> num_cache_sets: 4096
> cache_set_size_bytes: 262144
> cache_set_size_Kbytes: 256
> cache_set_size_Mbytes: 0.25
> cache_size_bytes: 1048576
> cache_size_Kbytes: 1024
> cache_size_Mbytes: 1
>
>
> From this it seems that even the L2 is private to each processor. Is this
> true?
>
> Thanks
> Nilay
>
> On Thu, 8 Apr 2010, Dan Gibson wrote:
>
>> Ahh, ocean. If only I had a dollar for every time I've seen this same
>> issue with ocean. Here is what is going on:
>>
>> 258x258 elements * 4 bytes/integer * 2 copies of the ocean (because of
>> shadowing) = ~520 KB working set size.
>>
>> Aggregate L1 cache size = 64KB * 16 Processors = 1MB.
>> Aggregate L2 cache size = 1MB.
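For reference, that arithmetic can be checked quickly (a sketch, using only the figures quoted above):

```python
# Sanity-check of the working-set and cache-size figures quoted above.
KB, MB = 1024, 1024 * 1024

elements = 258 * 258                  # ocean grid
working_set = elements * 4 * 2        # 4 bytes/int, 2 copies (shadowing)
print(round(working_set / KB))        # -> 520 (KB)

l1_aggregate = 16 * 64 * KB           # 16 processors x 64 KB L1D
l2_size = 1 * MB

# The whole ocean fits in the aggregate L1s (and matches the L2 size),
# so after the first timestep L1 misses are sharing misses.
assert working_set <= l1_aggregate == l2_size
```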
>>
>> In other words, after the first full timestep, the entire ocean is
>> resident in the L1 caches. Any misses from the L1s at that point are
>> sharing misses, which 'miss' in the L2 because other L1s must be
>> invalidated.
>>
>> Also, for this particular case, the L2 is no bigger than the aggregation
>> of the L1s. I would hope that the L2 isn't inclusive, but I honestly
>> don't know if it is or not for MESI_CMP_filter_directory. If it IS
>> inclusive, obviously an L2 hit would only happen rarely, on a transient
>> condition.
>>
>> Regards,
>> Dan
>>
>> On Thu, Apr 8, 2010 at 11:44 AM, Nilay Vaish <nilay@xxxxxxxxxxx> wrote:
>>
>>> On Thu, 8 Apr 2010, Dan Gibson wrote:
>>>
>>>> It is not out of the question that there might be profiling problems
>>>> with MESI_CMP_filter_directory. However, let's cover the basics first:
>>>> - What are your cache sizes?
>>>
>>> L2 cache_size_Kbytes: 1024
>>> L1I cache_size_Kbytes: 64
>>> L1D cache_size_Kbytes: 64
>>>
>>>> - What is the workload?
>>>
>>> I am simulating the Ocean application with 4 threads. It makes use of a
>>> 258 x 258 grid of integers.
>>>
>>>
>>>> - For how long are you running the simulation?
>>>
>>> The simulation runs for about 18 minutes.
>>>
>>>
>>>
>>>> Regards,
>>>> Dan
>>>>
>>>> On Thu, Apr 8, 2010 at 10:52 AM, Nilay Vaish <nilay@xxxxxxxxxxx> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I am using GEMS to study the effect of L2 cache parameters on the
>>>>> performance of CMPs. Right now I am using the protocol
>>>>> MESI_CMP_filter_directory. The L2 misses reported are suspiciously
>>>>> close to the sum of the L1D and L1I cache misses. I am quoting the
>>>>> misses obtained for a 16-processor machine.
>>>>>
>>>>> L1D_cache cache stats:
>>>>> L1D_cache_total_misses: 1334256
>>>>> L1D_cache_total_demand_misses: 1334256
>>>>>
>>>>> L1I_cache cache stats:
>>>>> L1I_cache_total_misses: 2010
>>>>> L1I_cache_total_demand_misses: 2010
>>>>>
>>>>> L2_cache cache stats:
>>>>> L2_cache_total_misses: 1336262
>>>>> L2_cache_total_demand_misses: 1336262
>>>>>
>>>>> Has anyone used GEMS for profiling L2 misses? What might be going
>>>>> wrong here?
>>>>>
>>>>> --
>>>>> Nilay
>>>>>
>>>>> _______________________________________________
>>>>> Gems-users mailing list
>>>>> Gems-users@xxxxxxxxxxx
>>>>> https://lists.cs.wisc.edu/mailman/listinfo/gems-users
>>>>> Use Google to search the GEMS Users mailing list by adding "site:
>>>>> https://lists.cs.wisc.edu/archive/gems-users/" to your search.
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> http://www.cs.wisc.edu/~gibson [esc]:wq!
>>>>
>>>>
>>> Thanks
>>>
>>> Nilay
>>
>>
>> --
>> http://www.cs.wisc.edu/~gibson [esc]:wq!
>>
--
Regards
James Wang
.-- .- -. --. @ -.-. ... .-.-.- .-- .. ... -.-. .-.-.- . -.. ..-