views: 257
answers: 1
I have a MemoryStream containing the bytes of a PNG-encoded image, and I want to check whether there is an exact duplicate of that image data in a directory on disk. The first obvious step is to only consider files that match the exact length, but after that I'd like to know the most efficient way to compare the in-memory data against those files. I'm not very experienced working with streams.
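Something like the following is roughly what I mean by the length check (the directory path, file pattern, and method name here are just placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Pre-filter: only files whose length matches the in-memory PNG are worth
// comparing byte-for-byte. Directory and pattern are examples only.
static List<FileInfo> FindCandidates(MemoryStream image, string directory)
{
    return new DirectoryInfo(directory)
        .GetFiles("*.png")
        .Where(f => f.Length == image.Length)
        .ToList();
}
```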

I had a couple thoughts on the matter:

First, if I could get a hash code for the file, it would (presumably) be more efficient to compare hash codes rather than every byte of the image. Similarly, I could compare just some of the bytes of the image, giving a "close-enough" answer.

And then of course I could just compare the entire stream, but I don't know how quick that would be.

What's the best way to compare a MemoryStream to a file? Byte-by-byte in a for-loop?

+4  A: 

Firstly, getting a hash code of the two streams won't help here: to calculate a hash code you'd need to read the entire contents anyway, performing some simple calculation as you go. If you instead compare the files byte-by-byte or in buffered blocks, you can stop as soon as you find the first byte or block that doesn't match.

However, hashing would make sense if you needed to compare the MemoryStream against multiple files, because then you'd only need to loop through the MemoryStream once (to calculate its hash code) and then loop through all the files.
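As a rough sketch of that hash-based approach (MD5 from System.Security.Cryptography is used here purely as an example, and the method name and the way candidate paths are supplied are placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

// Hash the MemoryStream once, then hash each candidate file and compare.
static bool HasDuplicate(MemoryStream image, IEnumerable<string> candidatePaths)
{
    using (var md5 = MD5.Create())
    {
        image.Position = 0;                        // rewind before hashing
        byte[] imageHash = md5.ComputeHash(image);

        foreach (string path in candidatePaths)
        {
            using (FileStream file = File.OpenRead(path))
            {
                byte[] fileHash = md5.ComputeHash(file);
                if (imageHash.SequenceEqual(fileHash))
                    return true;  // hashes match; a byte-by-byte check could
                                  // confirm, given the (remote) chance of a collision
            }
        }
    }
    return false;
}
```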

In any case, you'll have to write code that reads the entire file. As you mentioned, this can be done either byte-by-byte or using buffers. Reading data into a buffer is a good idea, because it is usually a more efficient operation when reading from a HDD (e.g. reading a 1 kB buffer at a time). Moreover, you could use the asynchronous BeginRead method if you need to process multiple files in parallel.

Summary:

  • If you need to compare against multiple files, use a hash code
  • To read/compare the content of a single file:
    • Read 1 kB of data into a buffer from each stream
    • See if there is a difference (if yes, quit)
    • Continue looping

Implement the above steps asynchronously using BeginRead if you need to process multiple files in parallel.
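A minimal synchronous sketch of the comparison loop above might look like this (the 1 kB buffer size and the method name are arbitrary choices):

```csharp
using System.IO;

// Compare the MemoryStream against one file, block by block,
// bailing out at the first difference.
static bool StreamEqualsFile(MemoryStream image, string path)
{
    var fileInfo = new FileInfo(path);
    if (fileInfo.Length != image.Length)
        return false;                         // cheap length check first

    const int BufferSize = 1024;
    byte[] a = new byte[BufferSize];
    byte[] b = new byte[BufferSize];

    image.Position = 0;
    using (FileStream file = fileInfo.OpenRead())
    {
        int readA;
        while ((readA = image.Read(a, 0, BufferSize)) > 0)
        {
            // FileStream.Read may return fewer bytes than requested,
            // so keep reading until we have as many as the memory block.
            int readB = 0;
            while (readB < readA)
            {
                int n = file.Read(b, readB, readA - readB);
                if (n == 0) return false;     // file ended early
                readB += n;
            }

            for (int i = 0; i < readA; i++)
                if (a[i] != b[i]) return false;   // first mismatch, stop
        }
    }
    return true;
}
```

The same structure can be reworked around BeginRead/EndRead if the files really do need to be read in parallel.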

Tomas Petricek
It's important to be aware of the (unlikely) possibility of hash collisions. Byte comparison would be necessary to avoid this issue.
k_b
So to be clear, I would read 1 kb chunks from the file into a buffer, then compare those buffers to the memstream byte by byte?
chaiguy
BufferedStream as a wrapper for the FileStream should take care of the buffering issue.
Markus
Concurrently reading multiple files from the same HDD isn't necessarily more efficient than one at a time, due to repositioning of the head.
k_b
@chaiguy: Yes, that should be the most efficient option, although if you use `BufferedStream`, reading byte-by-byte should work too. You may also run some performance tests to identify the best buffer size.
Tomas Petricek
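For reference, the `BufferedStream` variant from the last two comments would look roughly like this (the 4 kB buffer size and the method name are arbitrary):

```csharp
using System.IO;

// Byte-by-byte comparison through a BufferedStream; the wrapper turns the
// per-byte reads into larger reads against the underlying FileStream.
static bool StreamEqualsFileBuffered(MemoryStream image, string path)
{
    image.Position = 0;
    using (var file = new BufferedStream(File.OpenRead(path), 4096))
    {
        int a, b;
        do
        {
            a = image.ReadByte();
            b = file.ReadByte();
            if (a != b) return false;   // mismatch, or one stream ended first
        } while (a != -1);              // -1 from both means the ends were reached together
    }
    return true;
}
```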