Strategies for testing reactive, asynchronous code
I am developing a data-flow-oriented domain-specific language. To simplify, let's just look at Operations. Operations have a number of named parameters and can be asked to compute their result using their current state.
To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer.
An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations.
So far, so good, nicely decoupled design, composable and reusable and, depending on the specific Observer used, as asynchronous as you want it to be.
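To make this concrete, here is roughly what the seams look like in C# (a simplified sketch only; the real signatures in my DSL differ, and all member names here are illustrative):

    // Hypothetical interfaces matching the description above; the actual
    // DSL has richer signatures, but these are the seams that matter.
    public interface IOperation
    {
        void SetParameter(string name, object value, IOperation source);
        object Result();   // computes from the Operation's current state
    }

    public interface IDecision
    {
        // Informed whenever a parameter receives a value from some source.
        void ParameterUpdated(string name, IOperation source);
        // When fulfilled, the Decision emits a Signal through an Observer,
        // which an Accessor subscribes to.
    }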
Now here's my problem: I would love to start writing actual tests against this design. But with an asynchronous Observer...
Currently, the trivial cases are easy enough to test, but as soon as I want to test complex many-to-many situations between Operations, I must resort to hoping that the design Just Works (tm)...
Edit (1):
Let's consider the following scenario:
Imagine the case where an Operation A provides a value to Operations B1, B2, and B3, each having an On-Every-Input Decision (one that is fulfilled whenever any parameter is updated). Then have B1, B2, and B3 each supply their value to the same parameter of an Operation C (in order to, say, aggregate these values into a lookup table or some such).
The intended steps are:
1. A computes its result and supplies it to B1, B2, and B3.
2. Each B's Decision is fulfilled by the update, so each B computes its result.
3. B1, B2, and B3 each supply their result to the same parameter of C, so C's Decision is informed three times.
So, I know that in this case I can mock, e.g., the Decision for C to see whether it indeed got informed about what B1, B2, and B3 did. The question is: when am I safe to check this?
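For concreteness, the wiring looks roughly like this (a sketch only: the Operation constructor, ConnectTo, and OnEveryInputDecision are simplified stand-ins for my DSL's real API, and I am using Rhino Mocks for C's Decision):

    var cDecision = MockRepository.GenerateMock<IDecision>();

    var a  = new Operation("A");
    var b1 = new Operation("B1", new OnEveryInputDecision());
    var b2 = new Operation("B2", new OnEveryInputDecision());
    var b3 = new Operation("B3", new OnEveryInputDecision());
    var c  = new Operation("C", cDecision);

    a.ConnectTo(b1, "in");  a.ConnectTo(b2, "in");  a.ConnectTo(b3, "in");
    b1.ConnectTo(c, "values"); b2.ConnectTo(c, "values"); b3.ConnectTo(c, "values");

    a.SetParameter("seed", 42, null);   // kicks off the cascade

    // With an asynchronous Observer, when is it safe to do this?
    cDecision.AssertWasCalled(
        d => d.ParameterUpdated(Arg<string>.Is.Anything, Arg<IOperation>.Is.Anything),
        o => o.Repeat.Times(3));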
Edit (2): My aim seems to be more like end-to-end testing, i.e., putting together the various parts of the DSL and seeing whether the result behaves in the way I expect it to.
Edit (3): Turns out I was overcomplicating things :-)
You need to ensure that all of your different components are interfaced out, and then test one specific class at a time, mocking out absolutely everything else.
Note: This explanation presupposes that you are using the principles of dependency inversion as well as a mocking library (like Rhino Mocks).
You state:
To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer.
An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations.
This says to me that you would construct an Operation that has a mocked-out IDecision. Your unit test can then orchestrate the behavior of the IDecision in such a way as to exercise all possible scenarios that an Operation may have to deal with.
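For instance, a test along these lines (a sketch assuming NUnit and Rhino Mocks' AAA syntax; the members of Operation and IDecision are invented for illustration, so substitute your real ones):

    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class OperationTests
    {
        [Test]
        public void InformsItsDecisionWhenAParameterReceivesAValue()
        {
            var decision  = MockRepository.GenerateMock<IDecision>();
            var operation = new Operation(decision);

            operation.SetParameter("lhs", 42, null);

            // Verify the Operation reported the update to its Decision.
            decision.AssertWasCalled(d => d.ParameterUpdated(
                Arg<string>.Is.Equal("lhs"), Arg<IOperation>.Is.Anything));
        }
    }

Because the IDecision is a mock, the test controls exactly when it reports itself fulfilled, so no real asynchrony is involved at this level.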
Likewise, your Accessor tests have a mock IDecision that is set up to behave in a realistic fashion so that you can fully test the Accessor class in isolation. It can also have a mock IOperation, and you can test that your Accessor calls the appropriate methods on the mock object(s) in response to the desired stimuli.
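The Accessor test is the mirror image (again a sketch; OnSignal is a hypothetical stand-in for however your Observer actually delivers the Signal):

    [TestFixture]
    public class AccessorTests
    {
        [Test]
        public void CallsResultOnTheOperationWhenTheSignalFires()
        {
            var operation = MockRepository.GenerateMock<IOperation>();
            operation.Stub(o => o.Result()).Return(42);

            var accessor = new Accessor(operation);

            // Simulate the Decision's Signal arriving.
            accessor.OnSignal();

            operation.AssertWasCalled(o => o.Result());
        }
    }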
Summary: Test each of your classes in isolation, using mocked out objects for all of the other parts to orchestrate the appropriate behaviors.
I haven't used this myself, but I have heard that the Reactive Framework (Rx) can be used to turn events into LINQ statements, which can then be used to enable easy unit testing.
This is, I believe, how a lot of Silverlight code is unit tested; in fact, the Reactive Framework is distributed with the Silverlight Toolkit (System.Reactive.dll).
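For what it's worth, the core idea looks something like this (a minimal sketch using a Subject<T> as a stand-in for the Signal stream; the namespaces shown are those of current Rx releases, and the Silverlight-era System.Reactive.dll laid things out differently):

    using System;
    using System.Collections.Generic;
    using System.Reactive.Linq;
    using System.Reactive.Subjects;
    using NUnit.Framework;

    [TestFixture]
    public class SignalQueryTests
    {
        [Test]
        public void FulfilledSignalsCanBeAssertedSynchronously()
        {
            var signals = new Subject<string>();   // stand-in for the Signal stream

            // Events-as-LINQ: declaratively filter the stream under test.
            var fulfilled = from s in signals
                            where s == "fulfilled"
                            select s;

            var received = new List<string>();
            using (fulfilled.Subscribe(received.Add))
            {
                signals.OnNext("pending");
                signals.OnNext("fulfilled");   // delivered inline, on this thread
            }

            // Safe to assert immediately: nothing hopped threads.
            Assert.AreEqual(1, received.Count);
        }
    }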