Hey everyone,
I came from this thread: http://stackoverflow.com/questions/1536867/flops-intel-core-and-testing-it-with-c-innerproduct
As I began writing simple test scripts, a few questions came to mind.
Why floating point? What is so significant about floating point operations that we measure them specifically? Why not a simple int?
If I want to measure FLOPS, let's say I am doing the inner product of two vectors. Must the two vectors be float[]? How would the measurement differ if I used int[] instead?
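To make that concrete, here is roughly what I have in mind (the function names and loops are just my own sketch, not something from the thread):

#include <stddef.h>

/* The float version I would time for FLOPS: one multiply and one add per iteration? */
float dot_float(const float *x, const float *y, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += x[i] * y[i];
    return sum;
}

/* The same loop on ints -- would timing this tell me anything about FLOPS at all? */
int dot_int(const int *x, const int *y, size_t n) {
    int sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += x[i] * y[i];
    return sum;
}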
I am not familiar with Intel architectures. Let's say I have the following operations:
float a = 3.14159;
float b = 3.14158;
for (int i = 0; i < 100; ++i) {
    a + b;
}

How many "floating point operations" is this?
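If it helps, here is the kind of full test I was going to write around it -- just a sketch using clock() from <time.h>, with the variables marked volatile in a crude attempt to stop the compiler from deleting the loop (the iteration count is made up):

#include <stdio.h>
#include <time.h>

int main(void) {
    volatile float a = 3.14159f, b = 3.14158f; /* volatile so the reads happen every iteration */
    volatile float c = 0.0f;                   /* volatile so the store happens every iteration */
    long iters = 100000000L;                   /* made-up count, just big enough to time */

    clock_t t0 = clock();
    for (long i = 0; i < iters; ++i)
        c = a + b;                             /* is each of these one "floating point operation"? */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("%ld adds in %f s -> %f FLOPS?\n", iters, secs, iters / secs);
    return 0;
}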
I am a bit confused because I studied a simplified 32-bit MIPS architecture, where every instruction is 32 bits wide, with fields like 5 bits for operand 1, 5 bits for operand 2, and so on. For Intel architectures (specifically the same architecture from the previous thread), I was told that a register can hold 128 bits. Since a SINGLE PRECISION floating point number is 32 bits, does that mean each instruction fed to the processor can take 4 floating point numbers? Don't we also have to account for the bits that encode the operands and other parts of the instruction? How can we just feed 4 floating point numbers to a CPU without the instruction giving them any specific meaning?
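From googling around, my guess is that the 128-bit registers are the SSE registers, and that "4 floating point numbers per instruction" means something like the packed add below. This is only my guess at what people mean (I found _mm_add_ps in the intrinsics headers and have not actually benchmarked it):

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics -- my guess at what the 128-bit registers refer to */

int main(void) {
    float x[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float y[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    __m128 vx = _mm_loadu_ps(x);    /* load 4 single-precision floats into one 128-bit register */
    __m128 vy = _mm_loadu_ps(y);
    __m128 vr = _mm_add_ps(vx, vy); /* one instruction that performs 4 float adds at once? */
    _mm_storeu_ps(r, vr);

    printf("%f %f %f %f\n", r[0], r[1], r[2], r[3]);
    return 0;
}

Is that the right mental model, or am I misreading what the 128 bits are for?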
I don't know whether my approach of thinking about everything in bits and pieces makes sense. If not, at what "height" of perspective should I be looking?