tags:

views:

919

answers:

6

I have a core .NET application that needs to spawn an arbitrary number of sub-processes. These processes need to be able to access some form of state object in the core application.

What is the best technique? I'll be moving a large amount of data between processes (Bitmaps), so it needs to be fast.

+3  A: 

I have similar requirements and am using Windows Communication Foundation to do that right now. My data sizes are probably a bit smaller though.

For reference I'm doing about 30-60 requests of about 5K-30K per second on a quad core machine. WCF has been holding up quite well so far.

With WCF you have the added advantages of choosing a transport protocol and security mode that is suitable for your application.
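For same-machine IPC, WCF's named-pipe binding is usually the fastest built-in transport. A minimal sketch of what this answer describes (the contract, service name, and endpoint address below are illustrative, not from the original answer):

```csharp
using System;
using System.ServiceModel;

// Contract shared between the core application and its sub-processes.
[ServiceContract]
public interface IStateService
{
    [OperationContract]
    byte[] GetBitmap(string key);
}

public class StateService : IStateService
{
    public byte[] GetBitmap(string key)
    {
        // Look up the requested bitmap in the core application's state.
        return new byte[0];
    }
}

class HostProgram
{
    static void Main()
    {
        // NetNamedPipeBinding is WCF's binding for processes
        // on the same machine.
        using (var host = new ServiceHost(typeof(StateService),
                   new Uri("net.pipe://localhost/CoreAppState")))
        {
            host.AddServiceEndpoint(typeof(IStateService),
                new NetNamedPipeBinding(), "");
            host.Open();

            // A sub-process would connect like this:
            // var factory = new ChannelFactory<IStateService>(
            //     new NetNamedPipeBinding(),
            //     "net.pipe://localhost/CoreAppState");
            // IStateService proxy = factory.CreateChannel();

            Console.ReadLine(); // keep the host alive
        }
    }
}
```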

fung
+2  A: 

If you truly need to have separate processes, there are always named pipes, which would perform quite well.

However, would an AppDomain boundary suffice? Then you could do object marshaling and things would be a lot easier. Your application could share instances of the same object by inheriting from MarshalByRefObject (a base class, not an attribute).
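A minimal sketch of the AppDomain approach (class and domain names are illustrative; this requires the full .NET Framework, since cross-AppDomain remoting is not available in .NET Core):

```csharp
using System;

// Deriving from MarshalByRefObject lets an instance be accessed
// across AppDomain boundaries through a transparent proxy.
public class SharedState : MarshalByRefObject
{
    public int Counter { get; set; }
}

class Program
{
    static void Main()
    {
        AppDomain child = AppDomain.CreateDomain("Child");

        // The object lives in the child domain; we hold a proxy to it.
        var state = (SharedState)child.CreateInstanceAndUnwrap(
            typeof(SharedState).Assembly.FullName,
            typeof(SharedState).FullName);

        state.Counter = 42;               // call crosses the boundary
        Console.WriteLine(state.Counter); // prints 42

        AppDomain.Unload(child);
    }
}
```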

Jason Whitehorn
+2  A: 

I'd be hesitant to move large data around; I'd be inclined to move pointers to the large data around instead, i.e. memory-mapped files.

Tim Jarvis
Explain how that would work across process boundaries.
FlySwat
I think Jarvis means: use memory-mapped files to share memory between two or more processes.
Jonke
Of course. Memory-mapped files are designed specifically to share memory between processes. Interestingly, .NET 4.0 now has built-in MemoryMappedFile classes for just this task.
Tim Jarvis
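Using the .NET 4.0 classes Tim mentions, a minimal sketch of sharing memory between processes (the map name and sizes are illustrative; named maps require Windows):

```csharp
using System;
using System.IO.MemoryMappedFiles;

class Program
{
    static void Main()
    {
        // Create a named shared-memory region. Another process can open
        // the same region with MemoryMappedFile.OpenExisting("BitmapShare").
        using (var mmf = MemoryMappedFile.CreateNew("BitmapShare", 1024 * 1024))
        using (var accessor = mmf.CreateViewAccessor())
        {
            byte[] pixels = { 1, 2, 3, 4 };
            accessor.WriteArray(0, pixels, 0, pixels.Length);

            // A reader (normally in another process) sees the same bytes.
            byte[] readBack = new byte[4];
            accessor.ReadArray(0, readBack, 0, readBack.Length);
            Console.WriteLine(string.Join(",", readBack)); // prints 1,2,3,4
        }
    }
}
```

Only the small map name crosses the process boundary; the bitmap bytes themselves are never copied through a channel.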
+3  A: 

You can use .NET Remoting for inter-process communication (IPC) with IpcChannel. Otherwise, you can search for shared-memory wrappers and other IPC approaches.

EDIT: There is an MSDN article comparing WCF to a variety of technologies, including Remoting. Unless I am reading the bar graph wrong, it shows Remoting performing the same or slightly better (contrary to what the other answer says). There is also a blog post on WCF vs. Remoting; it clearly shows Remoting is faster for binary objects. Since you are passing Bitmaps (binary objects), Remoting, shared memory, or another IPC option might be faster, although WCF might not be a bad choice anyway.
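A minimal sketch of the IpcChannel approach (the port and object names are illustrative; Remoting requires the full .NET Framework):

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// State object exposed over the IPC channel; calls on it from the
// child process are marshaled back to this server instance.
public class SharedState : MarshalByRefObject
{
    public byte[] Bitmap { get; set; }
}

class Server
{
    static void Main()
    {
        // IpcChannel uses named pipes under the hood.
        ChannelServices.RegisterChannel(new IpcChannel("CoreAppPort"), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(SharedState), "state", WellKnownObjectMode.Singleton);

        Console.ReadLine(); // keep the server process alive
    }
}

// In the child process:
// var state = (SharedState)Activator.GetObject(
//     typeof(SharedState), "ipc://CoreAppPort/state");
// byte[] bitmap = state.Bitmap;
```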

Ryan
+3  A: 

WCF would probably fit the bill...

Here's a really good article on .NET remoting for performing distributed intensive analysis. Though remoting has been replaced with WCF, the article is relevant and shows how to make the calls asynchronously, etc.

This article contrasts WCF with .NET remoting -- EDIT: the key takeaway is that WCF throughput outperforms Remoting for small data but approaches Remoting's performance as the data size increases.

bryanbcook
A: 

It is also possible to use the Eneter Messaging Framework. The framework offers a broker component to which all clients can subscribe to be notified when the state changes. To connect clients with the broker, the framework offers named pipes, TCP, or HTTP; named pipes look like the best fit here.

More information about the framework can be found at link text.

Ondrej Uzovic