I am developing a data-flow oriented domain-specific language. To simplify, let's just look at Operations. Operations have a number of named parameters and can be asked to compute their result using their current state.

To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer.

An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations.

So far, so good: a nicely decoupled design, composable and reusable and, depending on the specific Observer used, as asynchronous as you want it to be.
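
To make the rest of this question concrete, here is a minimal sketch of how I picture these parts as C# interfaces; all names and signatures are simplified stand-ins for the actual DSL:

    using System;

    // Simplified, hypothetical interfaces for the parts described above.
    public interface ISignal { }

    public interface ISignalObserver
    {
        void Emit(ISignal signal);               // may dispatch asynchronously
        void Register(Action<ISignal> callback); // invoked for every dispatched Signal
    }

    public interface IDecision
    {
        // Told which parameter got a value from whom; once fulfilled,
        // it emits a Signal through the Observer.
        void ParameterUpdated(string name, object source);
    }

    public interface IOperation
    {
        void SetParameter(string name, object value, object source);
        object Result(); // computed from the Operation's current state
    }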

Now here's my problem: I would love to start coding actual Tests against this design. But with an asynchronous Observer...

  • How do I know that the whole signal-and-parameter plumbing worked?
  • Do I need to use timeouts while waiting for a Signal in order to decide whether it was emitted successfully or not?
  • How can I be formally sure that the Signal will not be emitted if I just wait a little longer (halting problem? ;-))?
  • And how can I be sure that the Signal was emitted because I set a parameter, and not because another Operation did? It might well be that my test comes too early and sees a Signal that was emitted well before my setting a parameter caused a Decision to emit it.

Currently, I guess the trivial cases are easy to test, but as soon as I want to test complex many-to-many situations between Operations, I must resort to hoping that the design Just Works (tm)...
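
For illustration, the kind of test I have in mind for a trivial case looks roughly like this (NUnit; the fixture wiring is omitted, and the registration API is a stand-in):

    using System;
    using System.Threading;
    using NUnit.Framework;

    [TestFixture]
    public class SignalPlumbingTests
    {
        private ISignalObserver observer; // wired up in [SetUp] (omitted)
        private IOperation operation;

        [Test]
        public void Signal_is_emitted_after_setting_a_parameter()
        {
            var signalled = new ManualResetEvent(false);
            // Register before acting, so the test cannot miss the Signal
            // it is about to cause.
            observer.Register(signal => signalled.Set());

            operation.SetParameter("x", 42, this);

            // The timeout turns "wait forever" into a test failure. It cannot
            // prove the Signal would never arrive (the halting-problem caveat),
            // but a generous bound makes false negatives rare in practice.
            Assert.IsTrue(signalled.WaitOne(TimeSpan.FromSeconds(5)),
                          "Signal was not emitted within the timeout.");
        }
    }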

Edit (1):

Let's consider the following scenario:

Imagine the case where an Operation A provides a value to Operations B1, B2 and B3, each having an On-Every-Input Decision (one that is fulfilled whenever any parameter is updated). Then have B1, B2 and B3 each supply their value to the same Parameter of an Operation C (in order to, say, aggregate these values into a lookup table or some such).

The intended steps are:

  1. A signals that it has a new value (by virtue of its Decision)
  2. Some time later, the asynchronous Observer dispatches the Signal to whatever has registered
  3. Ah, an Accessor has registered. Its callback is invoked, which in turn fetches the Result of A and multiplexes it to the Parameters of B1, B2 and B3
  4. B1, B2 and B3 inform their Decisions about this, which creates three new Signals for the Observer
  5. Some time later, the asynchronous Observer dispatches B1's signal, then B2's, then B3's
  6. Each signal results in an Accessor fetching the Result of B1 (respectively B2, B3) and feeding it into C

So, I know that in this case I can mock e.g. the Decision for C to see whether it indeed got informed about what B1, B2 and B3 did. The question is: when am I safe to check this?
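
One way I could pin down "when": wait until all expected effects have been observed rather than for a fixed delay. A sketch, assuming .NET 4's CountdownEvent and a hand-rolled test double as C's Decision (a, b1, b2 and b3 are the Operations from the scenario; their wiring is omitted):

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using NUnit.Framework;

    // Test double for C's Decision: records every update and counts
    // down once per expected Signal.
    class RecordingDecision : IDecision
    {
        private readonly CountdownEvent remaining;
        public readonly List<object> Sources = new List<object>();

        public RecordingDecision(CountdownEvent remaining)
        {
            this.remaining = remaining;
        }

        public void ParameterUpdated(string name, object source)
        {
            lock (Sources) Sources.Add(source);
            remaining.Signal();
        }
    }

    [Test]
    public void C_is_informed_once_by_each_of_B1_B2_B3()
    {
        var remaining = new CountdownEvent(3); // one count per expected update
        var recorder = new RecordingDecision(remaining);
        // ... wire A -> B1/B2/B3 -> C, with recorder as C's Decision (omitted) ...

        a.SetParameter("input", 42, this); // kick off the cascade

        // Only now is it safe to check: either all three updates arrived,
        // or the timeout elapsed and the test fails.
        Assert.IsTrue(remaining.Wait(TimeSpan.FromSeconds(5)),
                      "C did not receive all three updates in time.");
        CollectionAssert.AreEquivalent(new object[] { b1, b2, b3 }, recorder.Sources);
    }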

Edit (2): My aim seems to be more like end-to-end testing, i.e. putting together the various parts of the DSL and seeing if the result behaves in the way I expect it to.

Edit (3): Turns out I was overcomplicating things :-)

+1  A: 

I haven't used this myself, but I have heard that the Reactive Framework can be used to turn events into LINQ queries, which can then be used to enable easy unit testing.

This is, I believe, how a lot of Silverlight code is unit tested; in fact, the Reactive Framework is distributed with the Silverlight Toolkit (System.Reactive.dll).
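
For instance, a Subject<T> could bridge such a signal stream into Rx so that tests can query it with LINQ operators (just a sketch; Signal and its Source property are stand-ins for whatever your types look like):

    using System;
    using System.Reactive.Linq;
    using System.Reactive.Subjects;

    // Push each emitted Signal into the subject from a registered callback.
    var signals = new Subject<Signal>();

    // A LINQ-style query over the stream: the first Signal caused by "me",
    // or a TimeoutException that the test can assert on.
    var mine = signals
        .Where(s => s.Source == me) // hypothetical Source property
        .Timeout(TimeSpan.FromSeconds(5))
        .First();                   // blocks until a match or the timeout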

samjudson
The Rx guys have stated that they are testing their framework with Pex. http://research.microsoft.com/en-us/projects/pex/
dtb
The DSL does not use events to communicate between Operations, but a more decoupled approach using the Signal/Observer pattern. Thanks for the hint, though; maybe there is some synergy I can get out of the Rx framework.
Arne
+2  A: 

You need to ensure that all of your different components are interfaced out, and then test one specific class at a time, mocking out absolutely everything else.

Note: This explanation presupposes that you are using the principles of dependency inversion as well as a mocking library (like Rhino Mocks).

You state:

To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer.

An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations.

This says to me that you would construct an Operation that has a mocked-out IDecision. Your unit test can then orchestrate the behavior of the IDecision in such a way as to exercise all possible scenarios that an Operation may have to deal with.
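
With Rhino Mocks' AAA syntax, such a test might look roughly like this (the interface and member names are hypothetical, taken from the question's description):

    using NUnit.Framework;
    using Rhino.Mocks;

    [Test]
    public void Operation_informs_its_decision_when_a_parameter_is_set()
    {
        var decision = MockRepository.GenerateMock<IDecision>();
        var operation = new Operation(decision); // hypothetical constructor injection

        operation.SetParameter("x", 42, this);

        // The Operation is exercised in complete isolation: no Observer,
        // no asynchrony, no timeouts.
        decision.AssertWasCalled(d => d.ParameterUpdated(
            Arg<string>.Is.Equal("x"), Arg<object>.Is.Anything));
    }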

Likewise, your Accessor tests have a mock IDecision that is set up to behave in a realistic fashion so that you can fully test the Accessor class in isolation. It can also have a mock IOperation, and you can test that your Accessor calls the appropriate methods on the mock object(s) in response to the desired stimuli.

Summary: Test each of your classes in isolation, using mocked out objects for all of the other parts to orchestrate the appropriate behaviors.

Andrew Anderson
You gently glossed over the asynchronous Observer part :-) I added a hopefully usable example to my question. Is there actually a problem I need to solve or am I overcomplicating things?
Arne
Let me respond with a question that will help me better understand where you're coming from: are you trying to unit test your classes individually, or test the integration of all of the classes operating together?
Andrew Anderson
The latter, I guess. The problem I see is that isolated testing does not seem to guarantee proper functionality once things are put together.
Arne