I am going to be a bit lame here and give a more theoretical response rather than a pinpointed answer, but please bear with me.
First, there are two distinct problems:
a. Collision probability
b. Performance of hashing (time, CPU cycles, etc.)
The two problems are only mildly correlated; they are not perfectly correlated.
Problem a deals with the difference in size between the input space and the hash space. When you hash a 1 KB (1024-byte) file into a 32-byte hash, there are
about 1.09 × 10^2466 possible input files (a number with 2467 digits),
while the hash space holds only
about 1.16 × 10^77 possible values (a number with 78 digits).
The difference is HUGE: the input space is roughly 10^2389 times larger. There WILL be collisions (a collision is when two DIFFERENT input files produce the exact same hash), because we are squeezing ~10^2466 possible inputs into ~10^77 possible outputs; by the pigeonhole principle many inputs must share a hash.
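As a quick sanity check of those numbers, here is a sketch in Python (the concrete sizes are just the 1 KB / 32-byte example from above; Python ints are arbitrary precision, so the exact values are easy to inspect):

```python
input_space = 2 ** (1024 * 8)   # every possible 1 KB (8192-bit) file
hash_space = 2 ** 256           # every possible 32-byte (256-bit) hash value

print(len(str(input_space)))                # 2467 digits, i.e. about 1.09e2466
print(len(str(hash_space)))                 # 78 digits,   i.e. about 1.16e77
print(len(str(input_space // hash_space)))  # 2389 digits: on average ~10^2389 inputs per hash value
```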
The only way to reduce collision risk is to enlarge the hash space, i.e. to make the hash longer. In the extreme, the hash would be as long as the file itself, which of course defeats the purpose of hashing.
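To get a feel for how collision risk scales with hash length in practice, the usual tool is the birthday bound. A minimal sketch, where the file counts and bit widths are my own arbitrary examples:

```python
import math

def collision_probability(num_items: int, hash_bits: int) -> float:
    """Birthday-bound approximation: P(at least one collision) ~= 1 - exp(-n^2 / 2^(b+1))."""
    return -math.expm1(-(num_items ** 2) / 2 ** (hash_bits + 1))

# A billion distinct files hashed with a 256-bit (32-byte) hash:
print(collision_probability(10 ** 9, 256))   # ~4e-60, negligible in practice

# The same billion files with a 64-bit hash:
print(collision_probability(10 ** 9, 64))    # ~0.027, already a noticeable risk
```

So even though collisions must exist in principle, a sufficiently long hash makes the probability of actually hitting one vanishingly small for any realistic number of files.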
The second problem is performance. This depends only on the hashing algorithm. Of course a longer hash will most probably require more CPU cycles, but a smarter algorithm might not. I have no clear-cut answer for this question; it depends too much on the implementation.
However, you can benchmark different hashing implementations on your own hardware and data, and draw preliminary conclusions from that.
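As a starting point, here is a minimal benchmarking sketch using Python's hashlib and timeit; the algorithm list and the 1 MiB input size are arbitrary choices on my part, so adjust them to your actual workload:

```python
import hashlib
import os
import timeit

payload = os.urandom(1024 * 1024)   # 1 MiB of random data to hash
algorithms = ["md5", "sha1", "sha256", "sha512", "blake2b"]

for name in algorithms:
    # Time 100 full hashes of the payload and report the average per call.
    seconds = timeit.timeit(lambda: hashlib.new(name, payload).digest(), number=100)
    digest_bits = hashlib.new(name).digest_size * 8
    print(f"{name:8s} {digest_bits:4d}-bit digest  {seconds / 100 * 1000:7.3f} ms per MiB")
```

Run it on the machine(s) you actually care about; relative results vary a lot with the CPU (e.g. hardware SHA extensions) and with the typical size of your inputs.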
Good luck ;)