I would like input on the design I currently have planned.

Basically, I have some number of external instruments, each of which should always be running and collecting specific data. My thought was to create a service for each: always running, polling its instrument, performing logging, and so on. There could be one instrument, or there could be 40.

However, I need one application to consume all this data, run some math on it, and do the charting, display, emailing, etc. The kicker is that even if this application is not running, the services should still be constantly collecting data. Also, these services will almost always run on the same machine as the client application, but the ability to run them over a network (as .NET Remoting used to allow) could be an interesting feature.

My question is... is this the best design? If it is, how do I go about doing the communication between the services and the application? I've looked into WCF, but it seems to be geared towards request-response web services, not something that continually streams data to anything that might be listening. Alternatively, should I have these services report to some other web service via WCF, which then compiles the data for use in a thin-client viewer that polls the web service frequently?

Any links and resources would be greatly appreciated. .NET namespaces for me to research are also appreciated. If I wasn't clear about something let me know.

+3  A: 

Just a thought....but have you considered perhaps adding a backend database? All services could collate data and persist it then your application that needs to process the information can just query the database rather than setting up loads of IPC between the services.

James
I have thought about this, but the problem is more complex than that. Each instrument generates on the order of 1 MB/second. While the client application is running, this is synthesized down to the roughly 1 KB/second that actually needs to be stored, which is more manageable. But when the client is not running, it is easy to see how that database would grow to unimaginable proportions (for a desktop computer). Basically, I need to set it up so that if nothing is observing the data, it is ignored (but logged in an abbreviated format).
drharris
@drharris, you could have a field in the database then (similar to what websites do) that indicates that the client application is running. Only ever persist data whilst the client is running, otherwise, just log it to file.
James
My first thought was what happens if the application crashes. But I could tag it with a DateTime, and only persist if it was within the past X minutes or something similar. Man, I hate to get into DBA work too, but I think this may be my only option here. MSMQ gets way too complicated in terms of architecture, and while I like WCF for web stuff, it seems like overkill. I think DB is the way to go.
drharris
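For what it's worth, the heartbeat-gated persistence described in these comments might look something like this in C# with ADO.NET. The `Heartbeat` table, connection string, and five-minute window are all made up for illustration, not part of the thread:

```csharp
// Sketch only: assumes a Heartbeat table with a LastSeenUtc datetime column
// that the client application updates periodically while it is running.
using System;
using System.Data.SqlClient;

class HeartbeatGate
{
    // Illustrative connection string; adjust for your environment.
    const string ConnStr =
        "Data Source=.;Initial Catalog=Instruments;Integrated Security=True";

    // Returns true if the client app has touched its heartbeat row recently,
    // meaning the service should persist full data rather than just log.
    public static bool ClientIsAlive(TimeSpan window)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT MAX(LastSeenUtc) FROM Heartbeat", conn))
        {
            conn.Open();
            object result = cmd.ExecuteScalar();
            if (result == null || result == DBNull.Value)
                return false;
            return DateTime.UtcNow - (DateTime)result < window;
        }
    }
}
```

Each service would check `ClientIsAlive` before persisting a batch, while the client updates its heartbeat row every minute or so. This also covers the crash case mentioned above, since a stale timestamp looks the same as a cleanly closed application.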
+1  A: 

WCF can handle streaming. It can also use MSMQ as a transport, which will ensure that no messages are lost, even if your instruments begin producing large quantities of data.

John Saunders
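On the streaming point: WCF is not limited to request-reply. A one-way operation contract lets the services push readings without waiting for a response. A minimal sketch (the interface and parameter names are invented for illustration):

```csharp
// Sketch of a one-way WCF contract for pushing instrument readings.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IInstrumentData
{
    // IsOneWay = true means the caller fires the reading at the endpoint
    // and does not block for a reply, so this is not request-response.
    // One-way operations are also what the MSMQ transport requires.
    [OperationContract(IsOneWay = true)]
    void PushReading(string instrumentId, double value, DateTime timestampUtc);
}
```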
I think this would be the most robust design, but unfortunately I'm in a time crunch on this, and learning WCF/MSMQ without a lot of good resources/tutorials does not seem like the way to go for this particular project. I'll save this idea for when I get some downtime to learn about these things.
drharris
See http://msdn.microsoft.com/en-us/netframework/dd939784.aspx. It's not that hard. And once you have the queue configured, configuring WCF to use it is almost trivial.
John Saunders
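As a rough idea of what "configuring WCF to use the queue" looks like, here is an illustrative app.config fragment using netMsmqBinding. The queue name, service name, and contract are assumptions, not from the thread:

```xml
<!-- Sketch: WCF service endpoint reading from a private MSMQ queue. -->
<system.serviceModel>
  <services>
    <service name="InstrumentService">
      <endpoint address="net.msmq://localhost/private/InstrumentData"
                binding="netMsmqBinding"
                contract="IInstrumentData" />
    </service>
  </services>
  <bindings>
    <netMsmqBinding>
      <!-- durable="true" keeps messages across restarts; exactlyOnce
           requires a transactional queue, so it is off here. -->
      <binding exactlyOnce="false" durable="true">
        <security mode="None" />
      </binding>
    </netMsmqBinding>
  </bindings>
</system.serviceModel>
```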