views: 107
answers: 3

For example, if a request is made to a resource and another identical request arrives before the first has returned a result, the server returns the result of the first request for the second request as well. This is to avoid unnecessary processing on the resource. It is not the same thing as caching/memoization, since it only concerns identical requests that are in flight at the same time.

Is there a term for reusing the results of currently ongoing requests to a resource in order to minimize processing?

A: 

If you queue up your requests, the code waiting for the resource can examine the queue to see whether any identical requests are pending and return the same result for those as well.
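
As a rough illustration of that approach, here is a minimal sketch in Erlang (the language the asker mentions), assuming a hypothetical slow_resource:fetch/1 call standing in for the hardware resource and a made-up module name: a gen_server keeps a map from request key to the list of waiting callers, and any caller that arrives while an identical request is in flight simply piggybacks on it.

-module(coalescer).
-behaviour(gen_server).

-export([start_link/0, request/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Blocks until a result for Key is available; identical concurrent
%% requests share a single underlying call to the resource.
request(Key) ->
    gen_server:call(?MODULE, {request, Key}, infinity).

init([]) ->
    {ok, #{}}.   %% State: Key => [From, ...] for requests currently in flight

handle_call({request, Key}, From, Pending) ->
    case maps:find(Key, Pending) of
        {ok, Waiters} ->
            %% Identical request already in flight: just queue this caller.
            {noreply, maps:put(Key, [From | Waiters], Pending)};
        error ->
            %% First request for this Key: ask the (slow) resource
            %% asynchronously and remember who is waiting.
            Self = self(),
            spawn_link(fun() ->
                Result = slow_resource:fetch(Key),  %% hypothetical resource call
                Self ! {result, Key, Result}
            end),
            {noreply, maps:put(Key, [From], Pending)}
    end.

handle_info({result, Key, Result}, Pending) ->
    %% Reply to every caller that piggybacked on this request.
    Waiters = maps:get(Key, Pending, []),
    [gen_server:reply(W, Result) || W <- Waiters],
    {noreply, maps:remove(Key, Pending)};
handle_info(_Msg, Pending) ->
    {noreply, Pending}.

handle_cast(_Msg, State) ->
    {noreply, State}.

The first caller for a given key triggers the actual request; every identical caller arriving while it is in flight is only added to the waiting list, and all of them receive the single result via gen_server:reply/2 when it comes back.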

Have you done any profiling? I'd bet this is way more work than it is worth.

Byron Whitlock
Oh, I doubt that (knock on wood). The system in question is written in Erlang, where notifications (messages) are very cheap. The resource is a hardware component which is not very fast. Although you are right that benchmarking should be done before such a mechanism is put in front of a resource.
Adam Lindberg
+1  A: 

That's really just caching/memoization with a few restrictions; some might call it result-reuse.

nos
+1  A: 

I call it request piggybacking.

Joshua
Since I don't see it as caching (it's only "cached" during the actual request duration), this is the term that comes closest.
Adam Lindberg