I know, I've read about the difference between double precision and single precision and so on. But they should give the same results in most cases, right?
I was solving a problem in a programming contest, and there were calculations with floating point numbers that were not really big, so I decided to use float instead of double. I checked it and I was getting the correct results. But when I submitted the solution, only 1 of 10 tests passed. I checked again and again, until I found that using float is not the same as using double. I put double for the calculations and double for the output, and the program gave the SAME results, but this time it passed all 10 tests.
I repeat: the output was the SAME and the results were the SAME, but float didn't work, only double did. The values were not very big either, and the program gave the same results on the same tests with both float and double, but the online judge only accepted the double version.
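(This is not the actual contest problem, just a made-up sketch of what I mean by "the same output": when printed with limited precision, a float and a double result can look identical even though the float value has already drifted.)

```cpp
#include <cstdio>

int main() {
    // Hypothetical calculation, not the real contest problem:
    // accumulate many small increments in both float and double.
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 10000; ++i) {
        f += 0.0001f;
        d += 0.0001;
    }

    // Printed with two decimals, both look the same...
    printf("float : %.2f\n", f);   // both print roughly 1.00
    printf("double: %.2f\n", d);

    // ...but with more digits the accumulated float error shows up.
    printf("float : %.9f\n", f);   // noticeably off from 1.0
    printf("double: %.9f\n", d);   // much closer to 1.0
    return 0;
}
```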
Why? What is the difference?