views:

51

answers:

3

Hello,

I have a Theora video decoder library and application compiled with VS2008 on Windows (Intel x86 architecture). I use this setup to decode Theora bitstreams (*.ogg files). The decoder library's source code is taken from the FFmpeg v0.5 source package, with some modifications to make it compile with the Windows/VS2008 combination.

Now, when I decode the same Theora bitstream with the FFmpeg (v0.5) application on Linux (Intel x86 architecture), built with gcc, the decoded YUV output file has 1-bit differences from the output obtained with the Windows/VS2008 setup, and only in a few bytes of the file, not all of them. I expected the two outputs to be bit-exact.

I suspect the following factors:

a.) Some data-type mismatch between the two compilers, gcc and MS VS2008?

b.) I have verified that the code does not use any run-time math library functions such as log, pow, exp, cos, etc., but it still contains operations like (a+b+c)/3. Could this be an issue?

The implementation of this "divide by three", or by any other number, could differ between the two setups (see the sketch after this list).

c.) Some kind of rounding/truncation effect happening differently in the two builds?

d.) Could I be missing a macro that is defined on Linux via a makefile/configure option but is absent from the Windows setup?
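
For doubt b.), here is a minimal sketch of what I mean (the variable names are illustrative, not from the decoder source). The pure-integer average is fully determined for non-negative operands, so by itself it cannot explain the mismatch; the risk appears when such an average goes through floating point, where intermediates can be kept at x87 extended precision in one build but not the other:

    #include <stdio.h>

    int main(void)
    {
        int a = 100, b = 101, c = 101;

        /* Pure integer arithmetic: for non-negative operands every C compiler
         * gives the same result, so this cannot differ between the builds. */
        int avg_int = (a + b + c) / 3;

        /* If the average goes through floating point, intermediate values may
         * be held at 80-bit x87 precision in one build and rounded to 64-bit
         * double in the other; over longer chains of operations the rounded
         * result can change. */
        int avg_flt = (int)((a + b + c) / 3.0 + 0.5);

        printf("integer average: %d, float average: %d\n", avg_int, avg_flt);
        return 0;
    }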

But I am not able to narrow down the problem or find a fix for it.

1.) Are my doubts above valid, or could other issues cause these 1-bit differences in the output produced by the two setups?

2.) How do I debug and fix this?

I guess this scenario of differing outputs between a Linux/gcc setup and the Windows/MS compilers could hold for any generic code, not just my video decoder application.

Any pointers on this would be helpful.

thanks,

-AD

A: 

1. Probably a different optimization of some floating-point library.

2. Is it a problem?

edit:
Take a look at the /fp:precise option on VS (http://msdn.microsoft.com/en-us/library/e7s85ffb.aspx) and gcc's floating-point options such as -ffloat-store.

Martin Beckett
@Martin Beckett: I guess you are asking whether the bit difference is a problem. Then yes, for me it is, since this Windows application for the Theora decoder, which I created from the FFmpeg source code, should be bit-exact with the reference, i.e. the FFmpeg output.
goldenmean
I meant: for a compressed video stream, is a 0.4% (1-bit) difference in the brightness of a pixel significant? You should probably experiment with the various /fp switches.
Martin Beckett
@Martin: Thanks for the pointers about the optimization and floating-point library behaviour options. I will check them.
goldenmean
A: 

Regarding b), integer and floating-point division are completely specified in C99: it mandates round-toward-zero for integers (earlier standards left the rounding direction implementation-defined) and IEEE 754 semantics for floating point.

That said, VS2008 does not claim to implement C99, so this does not really help. Implementation-defined at least means that you can write a few test cases and find out which choice your compiler made.
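
As an illustration of the "few test cases" idea (this snippet is just an example, not taken from the decoder), printing the result of a division with negative operands on both platforms shows immediately whether the two compilers round the same way:

    #include <stdio.h>

    int main(void)
    {
        /* C99 requires truncation toward zero: -7 / 2 == -3 and -7 % 2 == -1.
         * A pre-C99 compiler may instead round toward negative infinity,
         * giving -4 and +1, so comparing this output from the gcc and VS2008
         * builds tells you whether they agree. */
        printf("-7 / 2 = %d, -7 %% 2 = %d\n", -7 / 2, -7 % 2);
        return 0;
    }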

If you really care about this, how about instrumenting the code to output verbose traces to a separate file and examining the traces for the first difference? Hey, perhaps the tracing is even already there for debugging purposes!
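
As a sketch of what such instrumentation might look like (the helper names and output format below are made up, not part of FFmpeg), a small hex-dump routine called after each decoding stage lets you diff the Linux and Windows traces and find the first diverging module:

    #include <stdio.h>

    static FILE *trace_file;

    /* Open one trace file per run, e.g. trace_linux.txt or trace_win32.txt. */
    static void trace_open(const char *path)
    {
        trace_file = fopen(path, "w");
    }

    /* Dump a tagged buffer (e.g. "after_idct frame 3") in hex; diffing the
     * two trace files then pinpoints the first stage where they diverge. */
    static void trace_buffer(const char *tag, const unsigned char *buf,
                             size_t len)
    {
        size_t i;
        if (!trace_file)
            return;
        fprintf(trace_file, "%s len=%lu\n", tag, (unsigned long)len);
        for (i = 0; i < len; i++)
            fprintf(trace_file, "%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
        fprintf(trace_file, "\n");
    }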

Pascal Cuoq
@Pascal: Yes, adding dumps/traces before each module in both setups is one of the options I have thought about: keep comparing the dumps module by module until the difference is found. But I wanted to rule out all the other causes with some higher-level analysis/debugging before jumping into detailed debugging!
goldenmean
+1  A: 

I think such behaviour may come from x87 vs. SSE2 math. What version of gcc do you use? Do you use float (32-bit) or double (64-bit)? Math on the x87 keeps more precision internally (80-bit registers) than can be stored in memory.
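
A small illustration of the effect (treat this as a sketch; whether it actually triggers depends on the optimization level and target flags):

    #include <stdio.h>

    /* 'volatile' stops the compiler from folding the division at compile time. */
    volatile double num = 1.0, den = 3.0;

    int main(void)
    {
        double x = num / den;   /* may be spilled to memory as a 64-bit double */

        /* On x87 the right-hand side can be recomputed in an 80-bit register,
         * so the comparison may fail; with -mfpmath=sse (gcc) or /arch:SSE2
         * (MSVC) both sides are plain 64-bit doubles and it holds. */
        if (x == num / den)
            printf("consistent\n");
        else
            printf("excess x87 precision detected\n");
        return 0;
    }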

Flags to try for gcc: -ffloat-store, or -msse2 -mfpmath=sse.

Flags for MSVC: /fp:fast /arch:SSE2.

osgx
Options from this post about the same problem: http://gcc.gnu.org/ml/gcc-help/2009-07/msg00417.html
osgx