I have got a server that controls a complicated industrial system.

We need to write a remote client that can connect to the server and "observe" its internal state. I need to work out:

  • How to establish the current state when the client connects
  • How to keep the client up to date with changes on the server
  • Do we replicate all of the objects/state on the server, or just a subset?

My current approach is to hand-write code that watches for changes to every object on the server and sends those changes as messages to the client. The client receives those messages and applies them to its own local model.

The problem is that this involves a lot of manual coding, and I end up with three classes for each entity: server, message, and client. Even watching for state changes is pretty labour-intensive.

I feel there must be a better, generalised way to achieve this?

Incidentally, the technologies I am using are .NET, C#, WPF and WCF.

+2  A: 

What I'm about to suggest may be very hard to retrofit to an existing system, but I think it's an effective pattern. You might call it a "Replicated Model".

My idea is that your server has a Model, and the client should have an identical Model.

All updates to the server Model arrive as events applied to the Model. Now, provided those events are serialisable, we can ship them to the client too, and it will see the same updates.

All we need to do is get the initial state of the client and server models to be in step. That's simplest if we make the Model serializable too. With a timestamp mechanism the client now goes:

 Hey Server, I'd like to start replicating

the server goes

 Here's the current snapshot and I'll be sending you all updates after that

There are plenty of wrinkles to this, for example how to get back in sync if messages are lost.
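
To make that concrete, here is a minimal sketch of what the snapshot-plus-updates handshake could look like as a WCF service contract. The names (IModelReplication, PlantModel) and the sequence-number scheme are my own assumptions for illustration, not part of the original design, and the client here pulls updates rather than having them pushed:

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical replication contract; all names and the sequence-number
    // scheme are illustrative only.
    [ServiceContract]
    public interface IModelReplication
    {
        // "Here's the current snapshot" - the full Model plus the sequence
        // number of the last change it already includes.
        [OperationContract]
        ModelSnapshot GetSnapshot();

        // "...and I'll be sending you all updates after that" - pulled here
        // by the client, keyed on the last sequence number it has applied.
        [OperationContract]
        IList<ModelChangeEvent> GetChangesSince(long lastAppliedSequenceNumber);
    }

    [DataContract]
    public class ModelSnapshot
    {
        [DataMember] public long SequenceNumber { get; set; }
        [DataMember] public PlantModel Model { get; set; }   // PlantModel = the serialisable Model (assumed type)
    }

    // Base class for the serialisable change events; concrete event types
    // would need [KnownType] registrations so WCF can serialise them
    // polymorphically.
    [DataContract]
    public abstract class ModelChangeEvent
    {
        [DataMember] public long SequenceNumber { get; set; }

        // Applied identically on server and client, keeping the Models in step.
        public abstract void ApplyTo(PlantModel model);
    }

A gap in the sequence numbers also gives the client a cheap way to notice lost messages and re-request a snapshot.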

The key thing here is that the Model needs to be decoupled from the other code in the Server. Its classes must not refer to processing classes; instead the Model must emit events if it is to trigger work. In the Server those events cause things to happen, but in the client these work events are simply ignored.

The advantage of this approach is that once you've got the replication mechanism in place there's virtually no maintenance as the Model changes. Provided all model classes and change events are serializable, the same code runs in server and client.
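
For illustration, a rough sketch of what one decoupled model class might look like, with invented names (Valve, ValveOpenedEvent): it raises a serialisable change event instead of calling processing classes directly.

    using System;

    // Illustrative event: serialisable, so it can be replayed on the client.
    [Serializable]
    public class ValveOpenedEvent
    {
        public string ValveId { get; set; }
        public DateTime OpenedAtUtc { get; set; }
    }

    [Serializable]
    public class Valve
    {
        public string Id { get; private set; }
        public bool IsOpen { get; private set; }

        // The model only raises events; it never references processing classes.
        // The backing delegate field is excluded from serialisation.
        [field: NonSerialized]
        public event Action<ValveOpenedEvent> Opened;

        public Valve(string id) { Id = id; }

        public void Open()
        {
            if (IsOpen) return;
            IsOpen = true;

            var handler = Opened;
            if (handler != null)
                handler(new ValveOpenedEvent { ValveId = Id, OpenedAtUtc = DateTime.UtcNow });
        }
    }

On the server the Opened event would be wired both to the processing code and to the replication layer; on the client nothing subscribes except, perhaps, the UI.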

djna
Hi djna, I have considered something like this. There are great advantages to sharing the same classes between client and server, but as you have pointed out it is not easy to retrofit. We already have the 'processing' classes on the server. We can change them, but I guess the question is whether shared, replicated classes are the way to go.
Schneider
It's a call as to whether some serious refactoring now is a better investment than trying to fit some other Observer pattern on top of today's more complex structure. My instinct is that this is a case where the benefits of the refactoring will be positive in the server too.
djna
The other thing that bugs me somewhat is that the server objects "know" that they are being replicated. In other words you need to build everything from the ground up to have that capability; there is no easy way to retrofit it to an existing object. I guess there is no way around it.
Schneider
The "knowledge" is only that they are serializable, they don't need to know **why** they are serializable. And we don't necesserily need to serialize everything, there may be non-serializeable sub-portions. We can be a little clever here if we know how much detail is needed in the replicas.
djna
+1  A: 

Does the client need to know everything about the server's state? I assume some things happening on the server are unimportant and don't need to be monitored, or are too detailed to monitor completely. So the client will be looking at a summary of the server's state, rather than the full details of everything.

djna's replicated model is a great idea - a separate set of model classes on the server to represent the state that needs to be shared with the client. This would be a simplified summary of the server's complete internal state.


I'm wondering how many different pieces of code on the server make changes that the client needs to know about - changes to the shared model. If changes are coming from many places, could you put a simplified facade on top of the model, to control access to it? So if components A, B, C, and D all need to make changes, they all have to go through the facade. Then you can put the event-tracking logic in the facade, rather than all throughout the model.
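
A rough sketch of what such a facade could look like; SharedModel, GetTank and SetTankLevel are invented names, the point being that every mutation funnels through one class that raises the change event:

    using System;

    // Hypothetical facade: components A, B, C and D call this instead of
    // mutating the shared model directly, so change tracking lives in one place.
    public class SharedModelFacade
    {
        private readonly SharedModel _model;   // the simplified, shared model (assumed type)

        public event EventHandler<ModelChangedEventArgs> ModelChanged;

        public SharedModelFacade(SharedModel model) { _model = model; }

        public void SetTankLevel(string tankId, double level)
        {
            _model.GetTank(tankId).Level = level;          // the actual mutation (assumed members)
            RaiseChanged("Tank." + tankId + ".Level", level);
        }

        private void RaiseChanged(string path, object newValue)
        {
            var handler = ModelChanged;
            if (handler != null)
                handler(this, new ModelChangedEventArgs(path, newValue));
        }
    }

    public class ModelChangedEventArgs : EventArgs
    {
        public string Path { get; private set; }
        public object NewValue { get; private set; }

        public ModelChangedEventArgs(string path, object newValue)
        {
            Path = path;
            NewValue = newValue;
        }
    }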


If you have to do the tracking in many different classes, you could look into using aspect-oriented programming to automate adding the tracking code to each class. PostSharp is a good tool that can do this by adding code to your .NET assemblies when you compile your app.

Here is a blog post about using PostSharp to automate change tracking using the INotifyPropertyChanged interface. It looks like there's also a PostSharp plugin called PropFu for this.

Since you control the code on both ends (firing and consuming the events), you don't have to use INotifyPropertyChanged - you could define your own interface that's better for your application. But you could use a similar approach.
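
As one example of such a custom interface, the event argument could itself be a data contract so it can be queued and sent over WCF unchanged; all names here are illustrative:

    using System;
    using System.Runtime.Serialization;

    // Illustrative alternative to INotifyPropertyChanged: carries more than a
    // property name, and is directly serialisable for the wire.
    public interface INotifyStateChanged
    {
        event EventHandler<StateChangedEventArgs> StateChanged;
    }

    [DataContract]
    public class StateChange
    {
        [DataMember] public string EntityId { get; set; }
        [DataMember] public string PropertyName { get; set; }
        [DataMember] public object NewValue { get; set; }   // in practice constrain this type or register known types
        [DataMember] public DateTime TimestampUtc { get; set; }
    }

    public class StateChangedEventArgs : EventArgs
    {
        public StateChange Change { get; private set; }
        public StateChangedEventArgs(StateChange change) { Change = change; }
    }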


The change-tracking code could put the events into an in-memory queue on the server. The client could periodically ask the server for the latest events; the server would then check this queue and send all the queued events to the client. (You could also push each event to the client in real time, but that's probably not practical if they're happening really fast.)

When the client connects, the server could send a snapshot to the client as djna described. From then on, the server could keep track of events in its queue. When the client disconnects, the server could stop tracking events, until the client connects again later. When the client reconnects, the server would send another full snapshot, followed by more events.
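
Here is a rough server-side sketch of that queue-and-snapshot behaviour for a single client, reusing the illustrative StateChange type from above; the snapshot delegate stands in for however you already serialise the model:

    using System;
    using System.Collections.Generic;

    // Hypothetical single-client tracker: snapshot on connect, queue change
    // events while connected, drain the queue when the client polls.
    public class ReplicationSession
    {
        private readonly object _lock = new object();
        private readonly Queue<StateChange> _pending = new Queue<StateChange>();
        private bool _clientConnected;

        // Client connects (or reconnects): discard stale events, start queuing,
        // and hand back a full snapshot produced by the caller-supplied delegate.
        public byte[] Connect(Func<byte[]> takeSnapshot)
        {
            lock (_lock)
            {
                _pending.Clear();
                _clientConnected = true;
                return takeSnapshot();
            }
        }

        // Wired to the change-tracking events (e.g. from the facade above).
        public void OnStateChanged(StateChange change)
        {
            lock (_lock)
            {
                if (_clientConnected)
                    _pending.Enqueue(change);
            }
        }

        // Called when the client polls for the latest events.
        public IList<StateChange> DrainPendingChanges()
        {
            lock (_lock)
            {
                var batch = new List<StateChange>(_pending);
                _pending.Clear();
                return batch;
            }
        }

        // Client disconnects: stop tracking until the next Connect.
        public void Disconnect()
        {
            lock (_lock)
            {
                _clientConnected = false;
                _pending.Clear();
            }
        }
    }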

I've been assuming there's only one instance of the client. If there's more than one, you'd need to separately keep track of which events have been sent to each client, and you would have to keep tracking events as long as at least one client is connected.

Richard Beier
Yeah so it all sounds non-trivial. What surprises me is that I cannot find any frameworks or open source applications that implement something like this.
Schneider