I have a process that can have multiple AppDomains. Each AppDomain collects some statistics. After a specified time, I want to accumulate these statistics and save them to a file.

One way to do this is Remoting, which I want to avoid.

The only other technique I have in mind is to save each AppDomain's data to a file, and after a specific time, have one of the AppDomains collect all the data and accumulate it.

But it would be ideal if this all could be done in-memory, without the cost of serializing the information to pass between AppDomains. Anyone have any ideas?

+1  A: 

I do appreciate you want to keep this in-memory, but my first suggestion would be to write the data to a database and query from there. Remoting is still a remote call, which is where much of the "cost" of using a database server comes from, and you'd have to build in transaction-handling to make sure you don't lose data. If you write to a SQL Server database you have transaction support ready and waiting for you, and it's fast-fast-fast for queries.

Neil Barnwell
While it may be a good idea to use a database, since it persists the data and solves the communication problem with an established technology, I do not think transactions would be a key benefit. If the source application domain crashes, the data is lost regardless of whether it was on the wire to the database or in an in-memory stream.
Daniel Brückner
+2  A: 

The only way to avoid serialisation is to represent your data using objects which derive from MarshalByRefObject, but in that case you will still have the cost of marshalling across the AppDomain boundaries. This may also involve the refactoring/re-writing of much of your code.
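A minimal sketch of what that might look like (the class and member names here are hypothetical, not from the question):

```csharp
using System;

// Hypothetical collector that lives in a child AppDomain. Because it
// derives from MarshalByRefObject, the main domain receives a proxy,
// and each call crosses the boundary instead of copying the object.
public class StatsCollector : MarshalByRefObject
{
    private long _count;
    private double _sum;

    public void Record(double value)
    {
        _count++;
        _sum += value;
    }

    public long Count { get { return _count; } }
    public double Sum { get { return _sum; } }
}

// In the main domain, something like:
// AppDomain worker = AppDomain.CreateDomain("worker");
// StatsCollector collector = (StatsCollector)worker.CreateInstanceAndUnwrap(
//     typeof(StatsCollector).Assembly.FullName,
//     typeof(StatsCollector).FullName);
// collector.Record(42.0); // executes inside the worker domain via the proxy
```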

Assuming marshalling by reference is not an option, you will have to serialise at some point. It simply cannot be avoided. One way to do this is as Neil Barnwell suggests, with a database, another would be with a local file as you suggest yourself.

Another way, which may or may not be feasible depending on your delivery timeline and/or .NET 4.0 adoption, would be to use a memory-mapped file; see .Net Framework 4.0: Using memory mapped files.
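For illustration, a rough sketch of the .NET 4.0 memory-mapped file API (the map name "StatsMap" and the offsets are made up for this example):

```csharp
using System.IO.MemoryMappedFiles;

// One AppDomain (or process) creates a named map and writes raw values;
// another opens the same map by name and reads them back, with no
// serialization step in between.
class SharedStatsExample
{
    static void Writer()
    {
        // Keep this handle open for as long as readers need the map;
        // disposing the last handle releases the shared memory.
        var mmf = MemoryMappedFile.CreateOrOpen("StatsMap", 1024);
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, 12345L);  // a long counter at offset 0
            accessor.Write(8, 0.25);    // a double at offset 8
        }
    }

    static void Reader()
    {
        using (var mmf = MemoryMappedFile.OpenExisting("StatsMap"))
        using (var accessor = mmf.CreateViewAccessor())
        {
            long count = accessor.ReadInt64(0);
            double ratio = accessor.ReadDouble(8);
        }
    }
}
```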

AdamRalph
I have not written any code yet; I am still working on the design. Can you point me to an article that explains sharing data using the first approach you posted?
cornerback84
Marshaling by reference will serialize the data too, but in small pieces: every method call returns a bit of information, effectively serializing a bit of the data at a time. This is probably a good idea if you require only a small portion of the data. But if you have to process (almost) the whole data set, getting it bit by bit with many cross-domain calls will be incredibly slow compared to serializing and transferring the data at once.
Daniel Brückner
If you follow this road, don't forget to override the InitializeLifetimeService method; that was driving me crazy a few days ago ("Object '...' has been disconnected or does not exist at the server.").
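Concretely, the override looks like this; returning null gives the remoted object an infinite lease so it is never disconnected:

```csharp
public class StatsCollector : MarshalByRefObject
{
    // The default remoting lease expires after a few minutes of
    // inactivity, producing the "has been disconnected" exception.
    // Returning null keeps the object alive for the lifetime of
    // its AppDomain.
    public override object InitializeLifetimeService()
    {
        return null;
    }
}
```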
Rubens Farias
+1  A: 

I tend to say just use remoting. Writing the data to a file requires serialization, too. Serialization seems to be almost unavoidable whatever technology you use: you have to transfer data from one application domain to another through some channel, and you will have to serialize the data in order to get it through that channel.

The only way to avoid serialization seems to be using shared memory, so that both application domains can access the data without ever going through a channel. Even deep cloning the data from one application domain's memory into the other's is at its core nothing more than a binary serialization (where the result is not necessarily stored in consecutive memory locations).

Daniel Brückner
Remoting also involves Reflection, that is, Serialization + Reflection. On the other hand, my data is just some long and double values that I can write to a file without much overhead.
cornerback84
You are looking at the wrong spots. The bottleneck of using a file is the disk access, which takes several milliseconds and limits you to a transfer rate below one hundred megabytes per second. I am not sure what the actual bottleneck of remoting is (as far as I remember, the performance is limited by the number of cross-domain calls, not the amount of transferred data), but it is possible to transfer several hundred megabytes per second between application domains. Remoting strings using the fast path achieves transfer rates of several gigabytes per second.
Daniel Brückner