I'm developing a back-end application for a search system. The search system copies files to a temporary directory, gives them random names, and then passes the temporary file names to my application. My application must process each file within a limited period of time, otherwise it is shut down - a watchdog-like safety measure. Processing a file is likely to take longer than that limit, so I need to design the application to handle this scenario. If my application gets shut down, the next time the search system wants to index the same file it will likely give it a different temporary name.

The obvious solution is to provide an intermediate layer between the search system and the backend. It queues the request to the backend and waits for the result to arrive. If the request times out in the intermediate layer - no problem: the backend keeps working, only the intermediate layer is restarted, and it can retrieve the finished result from the backend when the search system later repeats the request.
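
To make the idea concrete, here is a minimal sketch of such an intermediate layer in Python (the language is my assumption, as are all the names such as IntermediateLayer and handle): it keys pending work and finished results by content hash, so a repeated request for the same content under a new temporary name finds the result the backend produced in the meantime.

    import hashlib
    import threading

    def file_md5(path, chunk_size=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    class IntermediateLayer:
        def __init__(self, process_file):
            self.process_file = process_file   # the slow backend call
            self.results = {}                  # content hash -> result
            self.pending = {}                  # content hash -> threading.Event
            self.lock = threading.Lock()

        def handle(self, path, timeout):
            key = file_md5(path)               # identity survives renaming
            with self.lock:
                if key in self.results:
                    return self.results[key]   # repeated request, already done
                event = self.pending.get(key)
                if event is None:              # first time we see this content
                    event = threading.Event()
                    self.pending[key] = event
                    threading.Thread(target=self._run,
                                     args=(key, path, event),
                                     daemon=True).start()
            # Wait up to the watchdog limit; on timeout return nothing and
            # let the backend keep working - a repeated request for the same
            # content picks up the result later.
            event.wait(timeout)
            with self.lock:
                return self.results.get(key)

        def _run(self, key, path, event):
            result = self.process_file(path)
            with self.lock:
                self.results[key] = result
                del self.pending[key]
            event.set()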

The problem is how to identify the files, since their names change randomly. I intend to use a hash function like MD5 on the file contents. I'm well aware of the birthday paradox and used an estimate from the linked article to compute the probability. Assuming no more than 100 000 files, the probability of two files having the same MD5 (128-bit) hash is about 1.47x10^-29.
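
For reference, the standard birthday approximation reproduces that figure; a quick check in Python (plain arithmetic, nothing more):

    # Birthday bound: p ~ n(n-1) / (2 * 2^bits) for n items, a bits-bit hash
    n = 100_000        # number of files
    bits = 128         # MD5 digest size
    p = n * (n - 1) / (2 * 2 ** bits)
    print(p)           # ~1.47e-29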

Should I care about such a collision probability, or can I just assume that equal hash values mean equal file contents?

+1  A: 

I think you shouldn't.

However, you should if you have a notion of two files with equal content but different identities (real names, not MD5-based). For example, in a search system two documents might have exactly the same content but still be distinct because they're located in different places.

alamar
That's a problem for the search system, not for my application. My application only needs to extract text from the files it is given.
sharptooth
+5  A: 

An equal hash means an equal file, unless someone malicious is messing around with your files and injecting collisions (which could be the case if the files are downloaded from the internet). If that's a concern, go for a SHA-2 based function.
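
In Python (again my assumption about the language), swapping in a SHA-2 function is a one-line change with the standard hashlib module:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Same streaming pattern as an MD5 hash, but with a SHA-2 digest,
        # for which no collisions are known.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()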

There are no known accidental MD5 collisions; 1.47x10^-29 is a really, really, really small number.

To avoid the cost of rehashing big files, I would use a three-phase identity scheme:

  1. File size alone
  2. File size plus a hash of four 64 KB blocks read from different positions in the file
  3. A full hash of the entire contents

So if you see a file with a new size, you know for certain it is not a duplicate, and so on up the phases.
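
A minimal sketch of the three phases in Python (the language and the block offsets are my assumptions; the answer doesn't pin down where the four 64 KB samples go):

    import hashlib
    import os

    SAMPLE_SIZE = 64 * 1024      # 64 KB per sampled block
    NUM_SAMPLES = 4              # the "64K * 4" from the answer

    def sampled_md5(path):
        # Phase 2: hash NUM_SAMPLES blocks spread evenly through the file.
        # The exact offsets are an assumption - any fixed positions work,
        # as long as they are computed the same way for every file.
        size = os.path.getsize(path)
        h = hashlib.md5()
        with open(path, "rb") as f:
            for i in range(NUM_SAMPLES):
                f.seek((size // NUM_SAMPLES) * i)
                h.update(f.read(SAMPLE_SIZE))
        return h.hexdigest()

    def full_md5(path, chunk_size=1 << 20):
        # Phase 3: hash the whole file, reading it in chunks.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def same_file(path_a, path_b):
        # Phase 1: different sizes can never mean equal content.
        if os.path.getsize(path_a) != os.path.getsize(path_b):
            return False
        # Phase 2: a cheap sampled hash weeds out most same-size files.
        if sampled_md5(path_a) != sampled_md5(path_b):
            return False
        # Phase 3: only now pay for hashing the entire file.
        return full_md5(path_a) == full_md5(path_b)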

Sam Saffron
Nice point about rehashing big files.
sharptooth
@sharptooth see this question for some tricks you can use: http://stackoverflow.com/questions/788761/algorithm-for-determining-a-files-identity-optimisation
Sam Saffron