I have a very large set of binary files from which several thousand raw video frames are read and processed sequentially, and I’m now looking to optimize this code since it appears to be more CPU-bound than I/O-bound.
The frames are currently being read in this manner, and I suspect this is the biggest culprit:
private byte[] frameBuf;
BinaryReader binRead = new BinaryReader(FS);
// Allocate a new buffer of sizeof(frame) for this frame
frameBuf = new byte[VARIABLE_BUFFER_SIZE];

// Read sizeof(frame) bytes from the file; note that ReadBytes() allocates and
// returns yet another new array, so the buffer allocated above is discarded immediately
frameBuf = binRead.ReadBytes(VARIABLE_BUFFER_SIZE);
Would it make much of a difference in .NET to re-organize the I/O to avoid creating all these new byte arrays with each frame?
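To be concrete, by “re-organize the I/O” I mean something along these lines, where the array is allocated once and refilled for every frame with Read(buffer, index, count); MAX_FRAME_SIZE and frameSize are placeholder names here, not anything in my real code:

// Allocate the frame buffer once, before the per-frame loop
byte[] frameBuf = new byte[MAX_FRAME_SIZE];

// Per frame: refill the same array instead of letting ReadBytes() hand back a new one
int bytesRead = binRead.Read(frameBuf, 0, frameSize);

// Only frameBuf[0 .. bytesRead - 1] holds this frame's data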
My understanding of .NET’s memory allocation is weak, since I come from a pure C/C++ background. My idea is to rewrite this around a static buffer class: one very large shared buffer, plus an integer that tracks the actual size of the current frame. That said, I love the simplicity and readability of the current implementation and would rather keep it if the CLR already handles this in some way I am not aware of.
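For concreteness, the shared-buffer version I am picturing is roughly the sketch below; FrameBuffer, Data, Length, ReadFrame, and MaxFrameSize are names I am inventing purely for illustration:

using System.IO;

// Sketch of the shared-buffer idea: one large array allocated once for the
// whole run, plus an integer recording how many bytes the current frame occupies
static class FrameBuffer
{
    public const int MaxFrameSize = 8 * 1024 * 1024;   // placeholder capacity
    public static readonly byte[] Data = new byte[MaxFrameSize];
    public static int Length;                           // actual size of the current frame

    public static void ReadFrame(BinaryReader reader, int frameSize)
    {
        // Refill the shared array in place; no new byte[] per frame
        Length = reader.Read(Data, 0, frameSize);
    }
}

Each frame would then be read with FrameBuffer.ReadFrame(binRead, frameSize) and processed straight out of FrameBuffer.Data.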
Any input would be much appreciated.