Re: [Gems-users] Managing Latencies


Date: Wed, 10 Oct 2007 10:25:03 -0500
From: Mike Marty <mikem@xxxxxxxxxxx>
Subject: Re: [Gems-users] Managing Latencies
Marco,

The ForwardNetwork is ordered to prevent races to the Owner. Consider what would happen if a GETS were forwarded to the Owner and then a GETX followed, but these messages passed one another on the interconnect?

As far as stalling the RequestNetwork, yes, that would block the entire queue. You can either pop the queue and store the message in a dedicated buffer like you suggest, or you can use the zz_recycle hack seen in other protocols that just pops the message and re-enqueues it at the back of the same queue.
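For reference, the zz_recycle idiom Mike mentions looks roughly like the sketch below. The action name and the recycle() call follow the convention used in other GEMS protocols; the in_port name is taken from the snippet later in this thread. Treat it as an illustration, not the exact code:

  action(zz_recycleRequestQueue, "zz", desc="Recycle the request queue") {
    // recycle() pops the message at the head and re-enqueues it at the
    // tail of the same queue, so later messages (possibly for other
    // blocks) are not blocked behind it.
    requestNetwork_in.recycle();
  }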
--Mike


Marco Solinas wrote:
Thank you for your reply, Mike!

Actually, I don't understand why I have to consider the ForwardNw ordered, since the response msg to the "Local" cache travels on the ResponseNw, while the Fwd_GETS is sent through the ForwardNw. Here is the SLICC code:

action(b_dataToRequestor, "b", desc="Send data to requestor") {
    peek(requestNetwork_in, RequestMsg) {
      enqueue(responseNetwork_out, ResponseMsg, latency="MEMORY_LATENCY") {
        [...]
  }
  action(d_forwardRequestToOwner, "d", desc="Forward request to owner") {
    peek(requestNetwork_in, RequestMsg) {
      enqueue(forwardedRequestNetwork_out, RequestMsg, latency="DIRECTORY_LATENCY") {
        [...]
  }

The first action is performed by the Directory for the first GETX from the Local cache, and the second for the GETS. I guess this can lead the messages to arrive out of order at the Local cache (of course, this is not a problem, since the L1 cache controller is able to handle such a situation), especially because the issue latencies of the two actions differ.

Just one last question: if I stall the RequestNw queue in order to simulate the situation I depicted in my previous post, any other incoming msg (related to different blocks) will wait until the previous transition is completed. Is that correct? If so, this would be an undesirable situation! ;-) I guess it would be better to pop the queue and (for example) store the msg in a dedicated data structure in the directory. In any case, does GEMS provide any way to know how long a msg still has to wait in the queue before entering the network? I need this because the arrival of a new request for a busy block is a completely asynchronous event, so I have to know the "delay time" to wait before re-processing the subsequent msg.

Thank you for your help, Mike!
Marco

Mike Marty wrote:
MOSI_SMP_directory_1level assumes an ordered forward network. Therefore the directory can (logically) immediately change its state instead of entering a Busy state and either buffering subsequent requests or Nacking them.

Dealing with the situation below is not modeled in the protocol. If it were, all you would need to do is process the second GETS when the directory transition completes for the first request. You could stall the head of the queue if you wanted.
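A minimal sketch of stalling the head of the queue, in the style of other GEMS protocols (the Busy state and the transition shown here are hypothetical, since MOSI_SMP_directory_1level does not define a Busy state):

  action(z_stall, "z", desc="Stall the incoming message") {
    // Intentionally empty: the message is not popped, so it stays at
    // the head of the queue and the transition is retried later.
  }

  // Hypothetical transition: a GETS arriving while the directory is
  // still Busy with a previous request for the block simply waits.
  transition(Busy, GETS) {
    z_stall;
  }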

--Mike


Hello!

I was looking at the MOSI_SMP_directory_1level protocol. I don't
understand what happens when the Directory receives a request for a
given block while the Directory is still serving a previous request for
the same block (for example, because the Directory has to read the block
from the main memory, thus resulting in a huge latency), and the
response msg hasn't been sent yet.
As an example, please consider the following situation:
1. A cache (say Local) issues a GETX request to the Directory
2. The Directory receives the GETX, and has to load the block from MM
(no sharers for the block are stored in the Directory)
3. Another cache (say Remote) issues a second GETS
4. The Directory receives the second GETS while it is still waiting for
the block

Local   Home    Remote
|GETX   |       |
|  \    |       |
|    \  |       |GETS
|      \|      /|
|       L    /  |
|       A  /    |
|       T/      | <--- what happens here?
|       E       |
|       N       |
|       C       |
|       Y       |
|  Data/|       |
|    /  |       |
|  /    |       |
|/      |       |

In this context, the response msg hasn't been created yet when the
second GETS arrives at the Directory. Is it possible, with SLICC, to
handle such a situation?

In the SLICC code of the Directory, I can read the following:

 action(b_dataToRequestor, "b", desc="Send data to requestor") {
   peek(requestNetwork_in, RequestMsg) {
     enqueue(responseNetwork_out, ResponseMsg, latency="MEMORY_LATENCY") {
        [...]
 }

Maybe my previous question can be reduced to: how does GEMS handle the
<<latency="MEMORY_LATENCY">> constraint?

I hope that my question is clear (it is not a question on how the
protocol behaves).
Thanks to all!
Marco

_______________________________________________
Gems-users mailing list
Gems-users@xxxxxxxxxxx
https://lists.cs.wisc.edu/mailman/listinfo/gems-users
Use Google to search the GEMS Users mailing list by adding "site:https://lists.cs.wisc.edu/archive/gems-users/" to your search.





