I would think that save-caches/load-caches should be sufficient for warming up an identical configuration, but I'm not entirely sure it was intended to function in that respect. I have previously only used tracer-output-file, not save-caches.
Regards, Dan
On Wed, Mar 17, 2010 at 2:43 PM, sparsh mittal ISU <sparsh@xxxxxxxxxxx> wrote:
Hello, Just to confirm: if I need to warm up exactly the same cache configuration on a single core, then save-caches/load-caches should be sufficient, shouldn't it? Thanking you in anticipation, Sparsh
On Fri, Mar 12, 2010 at 9:33 AM, Dan Gibson <degibson@xxxxxxxx> wrote:
I think you guys are victims of bad naming in Ruby.
Despite the name, save-caches isn't the intended method to generate a warm-up file for load-caches. The proper command is tracer-output-file. E.g.,
read-configuration myWorkload.check
[disable istcs, etc.]
ruby0.init
ruby0.tracer-output-file myWorkload-caches.check
c 10000000
write-configuration myWorkload-warm.check
q
The save-caches command only prints the current cache state, and as Byn points out, it prints incorrectly in some respects. The current cache state isn't sufficient to warm up an arbitrary cache hierarchy -- only an identical hierarchy, or a strictly smaller one with lower associativity. To warm up an arbitrary cache hierarchy, you need the entire request stream. Hence, warm-up requires trace generation.
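For completeness, here is a sketch of the corresponding load side. This assumes load-caches accepts the trace file generated above, with the file names carried over from the example; treat it as an outline rather than verified commands:

read-configuration myWorkload-warm.check
[set up the target cache configuration, disable istcs, etc.]
ruby0.init
ruby0.load-caches myWorkload-caches.check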
Regards, Dan
On Fri, Mar 12, 2010 at 8:09 AM, Byn Choi <bynchoi1@xxxxxxxxxxxx> wrote:
Javi,
I see what you were saying in your original comment. That's a clever way to circumvent this issue. I'm also interested in how others deal with it - it's very easy not to even notice that it is happening.
In my opinion, this is dangerous enough that it should be classified as a bug (by default, the lines from different cores should be saved as such); at the very least, doing save-caches with a CMP or SCMP protocol should warn the user about this possibility.
Searching the mailing list turned up this thread: https://lists.cs.wisc.edu/archive/gems-users/2007-December/msg00012.shtml.
I suppose it is a known issue in the GEMS community. I only wish it were stated explicitly somewhere.
@GEMS Folks: Would it be possible to have some provision for this incorporated into future versions of GEMS - either a warning or maybe even a complete fix? Or is there a reason, which I am not aware of, why it behaves this way?
Thanks,
Byn
On Mar 12, 2010, at 7:54 AM, Byn Choi wrote:
My apologies, I misread your comment. Yes, the scheme works (_only_) if you have one core per chip. But if you read carefully, I assumed a single-chip multicore environment in my original post. As I mentioned, the issue is precisely that save-caches distinguishes the existing lines at chip granularity, not at core granularity. As soon as you have more than one core per chip (g_PROCS_PER_CHIP > 1 with CMP or SCMP protocols), you have a problem. To reiterate, after load-caches, all the cores but the first one of every chip remain completely empty.
Byn
On Mar 12, 2010, at 5:56 AM, Javi Merino wrote:
Byn Choi wrote:
If you do CacheMemory::print() before save-caches and another print() after load-caches, you will see that they are substantially different. Specifically, with a single-chip multicore, you'll see that only core 0 has any lines allocated while all the other cores (cores 1, 2, and 3 in a quad-core setup) are completely empty.
You can also see this by ungzipping the cache checkpoints. With a single-chip multicore system, all the entries will say that they belong to core 0. This means that, when you do load-caches, all the transactions will be fed into core 0 only.
When I use the setup I explained (do the save-caches with MOESI_SMP_directory, then the load-caches with whatever CMP protocol), that doesn't happen.
And yes, unzipping the cache checkpoints shows accesses for all the processors.
Byn
On Mar 12, 2010, at 3:48 AM, Javi Merino wrote:
Byn Choi wrote:
Hello,
I think I may have found a bug in the save-caches/load-caches mechanism.
Here, I'm assuming a single-chip multicore environment.
[...]
CacheMemory::recordCacheContents() has the line

  tr.addRecord(m_chip_ptr->getID(), m_cache[i][j].m_Address, Address(0), ...);
m_chip_ptr->getID() is the same for all the cores in a chip, i.e. with a single-chip multicore, this value is always 0.
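For illustration, one hypothetical fix would be to fold the core's index within the chip into the recorded ID, so each record names a unique core. This is only a sketch: the names m_version and RubyConfig::numberOfProcsPerChip() are my assumptions about the GEMS source, not a tested patch.

  // Hypothetical sketch: record a globally unique per-core ID instead of the
  // per-chip ID, so load-caches can feed each line back to the right core.
  // Assumed names: m_version (this cache's index within its chip) and
  // RubyConfig::numberOfProcsPerChip() (i.e. g_PROCS_PER_CHIP).
  int proc_id = m_chip_ptr->getID() * RubyConfig::numberOfProcsPerChip() + m_version;
  tr.addRecord(proc_id, m_cache[i][j].m_Address, Address(0), ...);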
[...]
Am I missing something here? Is there an option that should be set to
correct this behavior?
Hi Byn, at my university we do the warm-up part of the checkpoint with the MOESI_SMP_directory protocol and g_PROCS_PER_CHIP set to 1. Afterwards, we load the caches of single-chip multicores without problems: the data goes to the caches of the different cores.
I think this is the correct way of doing it. Even if they are different coherence protocols, the data in the cache should be mostly the same, if your cache configurations are similar in size. After all, it is the most recently used data.
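A sketch of that two-phase workflow, in the command style of Dan's example above. The file names and the bracketed per-run settings are illustrative assumptions, not verified commands:

[warm-up run, built with PROTOCOL=MOESI_SMP_directory and g_PROCS_PER_CHIP=1]
read-configuration myWorkload.check
ruby0.init
c 10000000
ruby0.save-caches myWorkload-caches.check
write-configuration myWorkload-warm.check

[measurement run, built with a CMP protocol and the same total core count]
read-configuration myWorkload-warm.check
ruby0.init
ruby0.load-caches myWorkload-caches.check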
That's my experience, but I'd like to know how others on the list do the save-caches part of the workload creation.
Regards,
Javi