Re: [Gems-users] Managing Latencies


Date: Mon, 8 Oct 2007 10:32:44 -0500 (CDT)
From: Mike Marty <mikem@xxxxxxxxxxx>
Subject: Re: [Gems-users] Managing Latencies

MOSI_SMP_directory_1level assumes an ordered forward network. Therefore the directory can (logically) change its state immediately, instead of entering a Busy state and either buffering subsequent requests or NACKing them.
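To make that concrete, here is a sketch of what the relevant transition looks like (names are approximate, not copied from the protocol file). The state change takes effect atomically in the cycle the request is processed; the latency="MEMORY_LATENCY" argument on the enqueue only delays when the data message becomes visible on the response network:

  // Sketch only -- state/action names approximate the real file.
  // Processing the GETX atomically moves the block I -> M; the data
  // message is enqueued with latency="MEMORY_LATENCY", so it appears
  // on the response network that many cycles later.
  transition(I, GETX, M) {
    b_dataToRequestor;           // data delayed by MEMORY_LATENCY
    i_popIncomingRequestQueue;   // request consumed immediately
  }

This is also the answer to the MEMORY_LATENCY question below: the latency is charged to the message sitting in the output buffer, not to the directory transition itself, so the directory is free to process the second GETS in the very next cycle.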

The situation below is not modeled in the protocol. If it were, all you would need to do is process the second GETS once the directory transition for the first request completes. You could stall it at the head of the queue if you wanted; a sketch of that pattern follows.
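Something like this (hypothetical -- the Busy_M state, the z_stall action, and the memory-response event are not in MOSI_SMP_directory_1level, though other GEMS protocols use the same pattern):

  // Hypothetical: the first GETX moves the block into a transient
  // Busy state while the (modeled) memory access is outstanding.
  transition(I, GETX, Busy_M) {
    b_dataToRequestor;
    i_popIncomingRequestQueue;
  }

  // Do-nothing action: the message is not popped, so it stays at the
  // head of the queue and the transition is retried each cycle.
  action(z_stall, "z", desc="Stall the head of the request queue") {
  }

  // A later request to the same block waits until the block leaves
  // the Busy state (e.g., on a memory-response event, not shown).
  transition(Busy_M, {GETS, GETX}) {
    z_stall;
  }

The alternative to stalling is to pop the second request and NACK it back to the requestor, which then retries.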

--Mike


Hello!

I was looking at the MOSI_SMP_directory_1level protocol. I don't
understand what happens when the Directory receives a request for a
given block while it is still serving a previous request for the same
block (for example, because the Directory has to read the block from
main memory, resulting in a large latency) and the response message
has not been sent yet.
As an example of this situation, consider the following sequence of events:
1. A cache (say Local) issues a GETX request to the Directory
2. The Directory receives the GETX and has to load the block from main
memory (no sharers for the block are recorded in the Directory)
3. Another cache (say Remote) issues a second GETS
4. The Directory receives the second GETS while it is still waiting for
the block from main memory

Local   Home    Remote
|GETX   |       |
|  \    |       |
|    \  |       |GETS
|      \|      /|
|       L    /  |
|       A  /    |
|       T/      | <--- what happens here?
|       E       |
|       N       |
|       C       |
|       Y       |
|  Data/|       |
|    /  |       |
|  /    |       |
|/      |       |

In this context, the response message has not been created yet when the
second GETS arrives at the Directory. Is it possible, with SLICC, to
handle such a situation?

In the SLICC code of the Directory, I can read the following:

 action(b_dataToRequestor, "b", desc="Send data to requestor") {
   peek(requestNetwork_in, RequestMsg) {
     enqueue(responseNetwork_out, ResponseMsg, latency="MEMORY_LATENCY") {
       [...]
     }
   }
 }

Maybe my previous question can be reduced to: how does GEMS handle the
latency="MEMORY_LATENCY" constraint?

I hope my question is clear (it is not a question about how the
protocol behaves).
Thanks to all!
Marco

