Hi guys,
Thank you all for your suggestions. I am pretty sure they will solve my problem.
Phil, I have already started modifying the files in the config directory, so the fun never stops...
Kostis
Another fairly easy way to do this is to modify interfaces/OpalInterface.C (assuming you're using Opal; if not, change the Simics interface file). The function advanceTime is called every cycle, so you can just put a hook in there that calls your function every X cycles (if you really want to have fun, modify the files in the config directory to make this parameterizable at runtime).
Phil
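In C++ terms, Phil's suggestion amounts to something like the sketch below. This is illustrative only: the counter, interval, and myPeriodicWork are hypothetical names, not actual GEMS identifiers, and the real advanceTime in OpalInterface.C of course does more than this.

```cpp
#include <cstdint>

// Hypothetical sketch of the hook described above: advanceTime() runs
// once per simulated cycle, so a counter can trigger work every X
// cycles. Names here (INTERVAL, myPeriodicWork) are illustrative.
static const uint64_t INTERVAL = 1000000;  // fire every 1M cycles
static uint64_t g_cycle = 0;
static uint64_t g_fired = 0;  // how many times the periodic hook ran

static void myPeriodicWork() {
  // e.g. walk the L2 cache here and do your per-interval processing
  ++g_fired;
}

void advanceTime() {
  // ... the existing per-cycle work of the interface would stay here ...
  if (++g_cycle % INTERVAL == 0) {
    myPeriodicWork();
  }
}
```

Making INTERVAL a config-directory parameter instead of a compile-time constant is the "parameterizable at runtime" variant Phil mentions.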
On Jul 1, 2008, at 9:07 AM, Konstantinos Nikas wrote:
   
Hi Mike,
Thanks for the suggestion. I managed to record the information I wanted, but now I am stuck at another point :-).
What I want to do is make Ruby call a specific method (which is going to look into the L2 cache and do some tricks) every 1 million cycles. However, I am not sure how I am supposed to do that, since (if I have understood it correctly) Ruby does not simulate every single cycle, but instead wakes up whenever there is an event from the processor.
Any suggestions?
Kind regards,
Kostis
     
Just add a field to the Entry structure (defined in L2cache.sm) and
then update the field with L2cacheMemory[addr].thread_ID :=
in_msg.Requestor or similar
--Mike
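In C++ terms, the change Mike sketches amounts to something like the following. This is a hedged analogue, not GEMS source: the Entry struct and CacheMemory class here are simplified placeholders, and the real change goes in the SLICC file (L2cache.sm) as Mike says.

```cpp
#include <cstdint>
#include <map>

// Simplified analogue of adding a thread_ID field to the cache Entry
// and stamping it with the requestor on allocation. All types and
// names here are illustrative, not the actual GEMS classes.
typedef uint64_t Address;
typedef int NodeID;

struct Entry {
  NodeID thread_ID;  // id of the thread/processor that brought the block in
};

class CacheMemory {
 public:
  // Corresponds to: L2cacheMemory[addr].thread_ID := in_msg.Requestor
  void allocate(Address addr, NodeID requestor) {
    m_table[addr].thread_ID = requestor;
  }
  void deallocate(Address addr) {
    m_table.erase(addr);  // the thread_ID goes away with the entry
  }
  bool isTagPresent(Address addr) const { return m_table.count(addr) != 0; }
  NodeID getThreadID(Address addr) { return m_table[addr].thread_ID; }

 private:
  std::map<Address, Entry> m_table;
};
```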
On Thu, Jun 26, 2008 at 9:26 AM, Konstantinos Nikas
<knikas@xxxxxxxxxxxxxxxxx> wrote:
       
Hi all,
I want to use GEMS to study cache replacement policies for CMPs. One of the things I need to do is add to the L2 cache a thread_ID field which will hold the id of the processor/thread that brought the cache block into the L2 cache in the first place.
So, I have added an array that holds the id to the CacheMemory class, and whenever deallocate is called I reset the thread_ID of the touched block (this is the easy part, of course :-) ). The problem is that I am not sure how to set the ID whenever a new block is allocated into the cache.
First of all, I am not sure what happens when an L2 miss is detected. I would assume that the following happens:
L2 miss detected -> L2_cacheMemory.cacheProbe (to find the replacement victim) -> L2_cacheMemory.deallocate (deallocate the selected victim) -> L2_cacheMemory.allocate (allocation of the new block).
Is that right? And more importantly, where is this sequence defined?
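The assumed sequence above can be sketched as follows. This is only the order as described, not verified against the GEMS source, and the class and replacement policy here are placeholders.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

// Minimal sketch of the assumed miss-handling order:
// cacheProbe -> deallocate -> allocate. Placeholder code, not GEMS.
typedef uint64_t Address;

class CacheMemory {
 public:
  explicit CacheMemory(size_t capacity) : m_capacity(capacity) {}
  bool isFull() const { return m_blocks.size() >= m_capacity; }
  Address cacheProbe(Address /*newAddr*/) const {
    // placeholder policy: evict the lowest-addressed resident block
    return m_blocks.begin()->first;
  }
  void deallocate(Address addr) { m_blocks.erase(addr); }
  void allocate(Address addr) { m_blocks[addr] = true; }
  bool isTagPresent(Address addr) const { return m_blocks.count(addr) != 0; }

 private:
  size_t m_capacity;
  std::map<Address, bool> m_blocks;
};

// Miss handling in the assumed order: probe, evict victim, install block.
void handleL2Miss(CacheMemory& l2, Address newAddr) {
  if (l2.isFull()) {
    Address victim = l2.cacheProbe(newAddr);
    l2.deallocate(victim);
  }
  l2.allocate(newAddr);
}
```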
I've looked at the generated code of the protocol, and I found that in L2Cache_Transitions.C the qq_allocateL2CacheBlock method (which calls the allocate method of CacheMemory) is called when, for example, the state in L2 is NP (not present) and the event is L2Cache_Event_L1_GET_INSTR. However, I wasn't able to find where the victim cache block is selected and deallocated prior to the new block's allocation.
Any pointers/comments/insights will be greatly appreciated!
Kind regards,
Kostis
_______________________________________________
Gems-users mailing list
Gems-users@xxxxxxxxxxx
https://lists.cs.wisc.edu/mailman/listinfo/gems-users
Use Google to search the GEMS Users mailing list by adding "site:https://lists.cs.wisc.edu/archive/gems-users/" to your search.