tags:

views:

1719

answers:

4

We need to develop some kind of buffer management for an application we are developing using C#.

Essentially, the application receives messages from devices as and when they come in (there could be many in a short space of time). We need to queue them up in some kind of buffer pool so that we can process them in a managed fashion.

We were thinking of allocating a block of memory in 256 byte chunks (all messages are less than that) and then using buffer pool management to have a pool of available buffers that can be used for incoming messages and a pool of buffers ready to be processed.

So the flow would be "Get a buffer" (process it) "Release buffer" or "Leave it in the pool". We would also need to know when the buffer was filling up.
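The "Get a buffer" / "Release buffer" flow you describe could be sketched roughly like this (class and member names here are illustrative, not an existing .NET API):

```csharp
using System;
using System.Collections.Concurrent;

// Minimal sketch of a fixed-size buffer pool. Names are illustrative.
class BufferPool
{
    private readonly ConcurrentBag<byte[]> _free = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferCount, int bufferSize)
    {
        _bufferSize = bufferSize;
        for (int i = 0; i < bufferCount; i++)
            _free.Add(new byte[bufferSize]);
    }

    // Hands out a free buffer, or allocates a fresh one if the pool is drained.
    public byte[] Acquire()
    {
        byte[] buffer;
        return _free.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    // Returns a buffer so it can be reused for the next incoming message.
    public void Release(byte[] buffer)
    {
        _free.Add(buffer);
    }

    // Lets the caller see how close the pool is to running dry.
    public int FreeCount { get { return _free.Count; } }
}
```

Watching `FreeCount` drop toward zero gives you the "buffer is filling up" signal you mention.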

Potentially, we would also need a way to "peek" into the buffers to see what the highest priority buffer in the pool is rather than always getting the next buffer.

Is there already support for this in .NET or is there some open source code that we could use?

+5  A: 

C#'s memory management is actually quite good, so instead of having a pool of buffers, you could just allocate exactly what you need and stick it into a queue. Once you are done with the buffer, just let the garbage collector handle it.

One other option (knowing only very little about your application), is to process the messages minimally as you get them, and turn them into full fledged objects (with priorities and all), then your queue could prioritize them just by investigating the correct set of attributes or methods.

If your messages come in too fast even for minimal processing, you could have a two-queue system: one is just a queue of unprocessed buffers, and the second is a queue of message objects built from those buffers.
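The two-queue idea might be sketched like this, with a parser stage draining raw buffers into full-fledged messages (`ParseMessage` is a placeholder, and the message type here is just a string for illustration):

```csharp
using System.Collections.Concurrent;
using System.Text;

// Sketch of the two-queue system: raw buffers are queued as fast as they
// arrive; a separate parse loop turns them into message objects in a
// second queue for full processing. Names are illustrative.
class TwoStagePipeline
{
    public readonly BlockingCollection<byte[]> RawBuffers =
        new BlockingCollection<byte[]>();
    public readonly BlockingCollection<string> Messages =
        new BlockingCollection<string>();

    // Run on a dedicated thread: drains raw buffers into parsed messages
    // until RawBuffers is marked complete.
    public void ParseLoop()
    {
        foreach (byte[] buffer in RawBuffers.GetConsumingEnumerable())
            Messages.Add(ParseMessage(buffer));
    }

    // Placeholder parse step (illustrative only).
    static string ParseMessage(byte[] buffer)
    {
        return Encoding.ASCII.GetString(buffer);
    }
}
```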

I hope this helps.

grieve
Not a good idea. Anything that involves interop will cause fragmentation (and can result in out-of-memory exceptions) because the GC has to pin objects.
Jonathan C Dickinson
Interop between what? Threads? Applications? Processes?
grieve
I think nzpcmad is working pure-managed w/o P/Invoke.
sixlettervariables
+2  A: 

Why wouldn't you just receive the messages, create a DeviceMessage (for lack of a better name) object, and put that object into a Queue? If prioritization is important, implement a PriorityQueue class that handles it automatically (by placing the DeviceMessage objects in priority order as they're inserted into the queue). That seems like a more OO approach, and it would simplify maintenance of the prioritization logic over time.
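A PriorityQueue along those lines could be sketched as below; the `DeviceMessage` shape and the "lower value = higher priority" convention are assumptions for illustration:

```csharp
using System.Collections.Generic;

// Illustrative DeviceMessage; the real fields depend on the device protocol.
class DeviceMessage
{
    public int Priority;   // lower value = higher priority (an assumption)
    public byte[] Payload;
}

// Simple priority queue: one FIFO queue per priority level, with a
// SortedDictionary keeping the priority levels in order.
class PriorityQueue
{
    private readonly SortedDictionary<int, Queue<DeviceMessage>> _queues =
        new SortedDictionary<int, Queue<DeviceMessage>>();

    public void Enqueue(DeviceMessage msg)
    {
        Queue<DeviceMessage> q;
        if (!_queues.TryGetValue(msg.Priority, out q))
        {
            q = new Queue<DeviceMessage>();
            _queues.Add(msg.Priority, q);
        }
        q.Enqueue(msg);
    }

    // Removes and returns the highest-priority message, or null if empty.
    public DeviceMessage Dequeue()
    {
        foreach (var pair in _queues)
        {
            DeviceMessage msg = pair.Value.Dequeue();
            if (pair.Value.Count == 0)
                _queues.Remove(pair.Key);
            return msg; // first entry in the SortedDictionary = highest priority
        }
        return null;
    }
}
```

This also gives you the "peek at the highest-priority message" behavior the question asks for, since the first entry of the dictionary is always the most urgent one.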

Harper Shelby
+1  A: 

I'm doing something similar. I have messages coming in on MTA threads that need to be serviced on STA threads.

I used a BlockingCollection (part of the Parallel Extensions to the .NET Framework) that is monitored by several STA threads (configurable, but defaulting to a multiple of the number of cores). Each thread tries to pop a message off the queue; it either times out and tries again, or successfully pops a message and services it.
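A stripped-down sketch of that pattern (the worker count, timeout value, and names here are illustrative, not the actual settings):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: several workers drain one BlockingCollection, each popping with
// a timeout and retrying until the queue is marked complete and empty.
static class MessagePump
{
    public static int Drain(BlockingCollection<string> queue, int workerCount)
    {
        int processed = 0;
        var workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workers[i] = Task.Run(() =>
            {
                string message;
                while (!queue.IsCompleted)
                {
                    // Either time out and try again, or pop a message and
                    // "service" it (here we just count it).
                    if (queue.TryTake(out message, TimeSpan.FromMilliseconds(100)))
                        Interlocked.Increment(ref processed);
                }
            });
        }
        Task.WaitAll(workers);
        return processed;
    }
}
```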

I've got it wired with perfmon counters to keep track of idle time, job lengths, incoming messages, etc, which can be used to tweak the queue's settings.

You'd have to implement a custom collection, or perhaps extend BlockingCollection, to support queue item priorities.

One of the reasons I implemented it this way is that, as I understand it, queueing theory generally favors a single line with multiple servers (why do I feel like I'm going to catch crap about that?).

Will
+1  A: 

@grieve: Networking is native, meaning that when buffers are used to receive/send data on the network, they are pinned in memory. See my comments below for elaboration.

Amit Ben Shahar
-1, I'm fairly certain the System.Net namespace abstracts all of that away from you, the end user, keeping you from having to deal with any of it. I also don't see any mention in the OP that the application uses interop of any kind.
sixlettervariables
Maybe I wasn't clear - networking always happens in an unmanaged context, meaning the buffers you hand to the network API (which at the bottom is native Winsock, whether you like it or not) are pinned in memory. They cannot be garbage-collected, nor compacted (moved in memory) when the GC tries to compact the heap. This causes heavy memory fragmentation and performance penalties, as buffers may survive to generation 1 (maybe even gen 2) and be a pain to collect, while you keep allocating more buffers.
Amit Ben Shahar
The correct way to go about it is to create a pool of buffers that you use with the network: the memory is committed up front, but it is condensed in one place, doesn't add GC overhead, and you never re-allocate buffers for memory you've already paid for. For instance, in a web-server scenario, you'd be wise to keep a cache of the files you are serving, holding the file buffers in memory, and use those buffers when calling Socket.Send - no allocation or penalty, and those buffers stay valid for a long time in almost all real-life scenarios.
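One common way to realize this (class names here are illustrative) is to pre-allocate a single large array and carve it into fixed-size segments handed out to socket calls, so pinning touches one long-lived block instead of scattering pinned buffers across the heap:

```csharp
using System;
using System.Collections.Generic;

// Sketch: one large pre-allocated block carved into fixed-size segments.
// Pinning any segment pins only this single long-lived array, so the GC
// can still compact the rest of the heap freely.
class SegmentPool
{
    private readonly byte[] _block;
    private readonly int _segmentSize;
    private readonly Stack<int> _freeOffsets = new Stack<int>();

    public SegmentPool(int segmentCount, int segmentSize)
    {
        _segmentSize = segmentSize;
        _block = new byte[segmentCount * segmentSize];
        for (int i = 0; i < segmentCount; i++)
            _freeOffsets.Push(i * segmentSize);
    }

    // The returned segment can be passed to socket APIs, e.g. via
    // SocketAsyncEventArgs.SetBuffer(array, offset, count).
    public ArraySegment<byte> Acquire()
    {
        if (_freeOffsets.Count == 0)
            throw new InvalidOperationException("pool exhausted");
        return new ArraySegment<byte>(_block, _freeOffsets.Pop(), _segmentSize);
    }

    public void Release(ArraySegment<byte> segment)
    {
        _freeOffsets.Push(segment.Offset);
    }
}
```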
Amit Ben Shahar