I've created a simple buffer manager class to be used with asynchronous sockets. It should protect against memory fragmentation and improve performance. Any suggestions for further improvements, or other approaches?

using System.Net.Sockets;
using System.Threading;

public class BufferManager
{
    private int[] free;                   // 1 = slot free, 0 = slot in use
    private byte[] buffer;                // one large array, carved into fixed-size blocks
    private readonly int blocksize;

    public BufferManager(int count, int blocksize)
    {
        buffer = new byte[count * blocksize];
        free = new int[count];
        this.blocksize = blocksize;

        for (int i = 0; i < count; i++)
            free[i] = 1;
    }

    public void SetBuffer(SocketAsyncEventArgs args)
    {
        // Claim the first free slot with an atomic compare-and-swap.
        for (int i = 0; i < free.Length; i++)
        {
            if (1 == Interlocked.CompareExchange(ref free[i], 0, 1))
            {
                args.SetBuffer(buffer, i * blocksize, blocksize);
                return;
            }
        }
        // Pool exhausted: fall back to a throwaway array.
        args.SetBuffer(new byte[blocksize], 0, blocksize);
    }

    public void FreeBuffer(SocketAsyncEventArgs args)
    {
        int offset = args.Offset;
        byte[] buff = args.Buffer;

        args.SetBuffer(null, 0, 0);

        // Only release the slot if the buffer actually came from the pool.
        if (buffer == buff)
            free[offset / blocksize] = 1;
    }
}
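For context, this is roughly how I intend to wire it into a receive loop (the StartReceive method and the completion handler below are just illustrative, not part of the class):

using System;
using System.Net.Sockets;

// Illustrative wiring only; the completion handler frees the buffer when done.
static class ReceiveExample
{
    public static void StartReceive(Socket socket, BufferManager manager,
                                    EventHandler<SocketAsyncEventArgs> onCompleted)
    {
        var args = new SocketAsyncEventArgs();
        args.Completed += onCompleted;

        manager.SetBuffer(args);          // lease one blocksize slice of the shared buffer
        if (!socket.ReceiveAsync(args))   // false means the operation completed synchronously
            onCompleted(socket, args);

        // The completion handler calls manager.FreeBuffer(args) once the data is consumed.
    }
}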
A: 

Edit:

The original answer below addresses a code construction issue of overly tight coupling. However, considering the solution as a whole, I would avoid using one large buffer and handing out slices of it in this way: you expose your code to buffer overrun (and, shall we call them, buffer "underrun") issues. Instead I would manage an array of byte arrays, each being a discrete buffer. The offset handed over is always 0 and the size is always the length of the buffer, so any bad code that attempts to read or write beyond those boundaries will be caught, roughly as sketched below.
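Something along these lines (the class and member names here are just a sketch of the idea, not a drop-in replacement):

using System.Collections.Generic;
using System.Net.Sockets;

// Sketch only: each lease is a whole discrete array, so the offset is always 0
// and any out-of-range access throws instead of silently trampling a
// neighbouring connection's slice.
public class DiscreteBufferManager
{
    private readonly Stack<byte[]> free = new Stack<byte[]>();
    private readonly object sync = new object();
    private readonly int blocksize;

    public DiscreteBufferManager(int count, int blocksize)
    {
        this.blocksize = blocksize;
        for (int i = 0; i < count; i++)
            free.Push(new byte[blocksize]);
    }

    public void SetBuffer(SocketAsyncEventArgs args)
    {
        byte[] block = null;
        lock (sync)
        {
            if (free.Count > 0)
                block = free.Pop();
        }
        if (block == null)
            block = new byte[blocksize];   // pool exhausted: fall back to a throwaway array

        args.SetBuffer(block, 0, block.Length);
    }

    public void FreeBuffer(SocketAsyncEventArgs args)
    {
        byte[] block = args.Buffer;
        args.SetBuffer(null, 0, 0);

        // Anything of the right size goes back; the pool simply grows to its high-water mark.
        if (block != null && block.Length == blocksize)
            lock (sync) free.Push(block);
    }
}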

Original answer

You've coupled the class to SocketAsyncEventArgs when in fact all it needs is a function to assign the buffer. Change SetBuffer to:-

public void SetBuffer(Action<byte[], int, int> fnSet)
{
    for (int i = 0; i < free.Length; i++)
    {
        if (1 == Interlocked.CompareExchange(ref free[i], 0, 1))
        {
            fnSet(buffer, i * blocksize, blocksize);
            return;
        }
    }
    fnSet(new byte[blocksize], 0, blocksize);
}

Now you can call it from consuming code something like this:-

myMgr.SetBuffer((buf, offset, size) => myArgs.SetBuffer(buf, offset, size));

I'm not sure that type inference is clever enough to resolve the types of buf, offset and size in this case. If not, you will have to place the types in the argument list:-

myMgr.SetBuffer((byte[] buf, int offset, int size) => myArgs.SetBuffer(buf, offset, size));

However, now your class can be used to allocate a buffer for all manner of requirements that use the byte[], int, int pattern, which is very common. For example:-
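The same manager can feed a classic Begin/EndReceive call or fill a buffer from a stream without knowing anything about either (socket, stream and OnReceive below are just illustrative, like myMgr above):

// The manager no longer cares who consumes the slice; anything that
// takes the (byte[], int, int) triple works.
myMgr.SetBuffer((buf, offset, size) =>
    socket.BeginReceive(buf, offset, size, SocketFlags.None, OnReceive, socket));

myMgr.SetBuffer((buf, offset, size) => stream.Read(buf, offset, size));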

Of course you need to decouple the free operation too, but that's just:-

public void FreeBuffer(byte[] buff, int offset)
{
    if (buffer == buff)
        free[offset / blocksize] = 1;
}

This requires you to call SetBuffer on the EventArgs in the consuming code in the SocketAsyncEventArgs case. If you are concerned that this approach reduces the atomicity of freeing the buffer and removing it from the socket's use, then sub-class this adjusted buffer manager and include the SocketAsyncEventArgs-specific code in the sub-class, along these lines:-
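A sketch of what that sub-class might look like, assuming the adjusted SetBuffer/FreeBuffer above live on the base BufferManager (the class name is just illustrative):

using System.Net.Sockets;

// Hypothetical sub-class: keeps the general-purpose manager reusable while
// restoring the one-call convenience (and atomicity) for SocketAsyncEventArgs.
public class SocketBufferManager : BufferManager
{
    public SocketBufferManager(int count, int blocksize)
        : base(count, blocksize) { }

    public void SetBuffer(SocketAsyncEventArgs args)
    {
        base.SetBuffer((buf, offset, size) => args.SetBuffer(buf, offset, size));
    }

    public void FreeBuffer(SocketAsyncEventArgs args)
    {
        byte[] buff = args.Buffer;
        int offset = args.Offset;
        args.SetBuffer(null, 0, 0);     // detach the buffer from the EventArgs first
        base.FreeBuffer(buff, offset);  // then return the slot to the pool
    }
}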

AnthonyWJones
Great suggestions! I originally decided to use one large buffer to protect against memory fragmentation, but maybe that doesn't matter when all the bytes are allocated at the same time.
remdao
Fragmentation only really becomes a problem with buffers larger than about 80K, since only allocations of that size or above are taken from the large object heap, which gets no compaction. Anything smaller comes from the normal heap, which the GC compacts after collection, hence removing any fragmentation.
AnthonyWJones
I realized that it's probably better to implement the buffer manager as a stack of byte arrays: if it doesn't have to keep track of offsets, there's no reason to have a loop, and it will be faster to push and pop the byte array objects.
remdao
A: 

I've created a new class with a completely different approach.

I have a server class that receives byte arrays. It then invokes various delegates, handing them the buffer objects so that other classes can process them. When those classes are done, they need a way to push the buffers back onto the stack.

using System.Collections;

public class SafeBuffer
{
    private static Stack bufferStack;
    private static byte[][] buffers;

    private readonly byte[] buffer;
    private int offset, length;

    private SafeBuffer(byte[] buffer)
    {
        this.buffer = buffer;
        offset = 0;
        length = buffer.Length;
    }

    // Pre-allocates all buffers up front and wraps them in SafeBuffer instances.
    public static void Init(int count, int blocksize)
    {
        bufferStack = Stack.Synchronized(new Stack());
        buffers = new byte[count][];

        for (int i = 0; i < buffers.Length; i++)
            buffers[i] = new byte[blocksize];

        for (int i = 0; i < buffers.Length; i++)
            bufferStack.Push(new SafeBuffer(buffers[i]));
    }

    // Borrows a buffer from the pool.
    public static SafeBuffer Get()
    {
        return (SafeBuffer)bufferStack.Pop();
    }

    // Returns this buffer to the pool once every consumer is done with it.
    public void Close()
    {
        bufferStack.Push(this);
    }

    public byte[] Buffer
    {
        get { return buffer; }
    }

    public int Offset
    {
        get { return offset; }
        set { offset = value; }
    }

    public int Length
    {
        get { return length; }
    }
}
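Typical usage from the consuming code looks something like this (the SocketAsyncEventArgs wiring is just illustrative):

// Somewhere at start-up: pre-allocate 1000 buffers of 4 KB each.
SafeBuffer.Init(1000, 4096);

// Borrow a buffer and hand it to an async socket operation.
SafeBuffer buf = SafeBuffer.Get();   // note: Pop throws if the pool is exhausted
var args = new SocketAsyncEventArgs();
args.SetBuffer(buf.Buffer, buf.Offset, buf.Length);

// ... later, once every consumer has finished with the data ...
buf.Close();                         // push the buffer back onto the stack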
remdao