I have this simple C program:
Code:
#include <stdio.h>
#include <float.h>
int main( void ) {
    float fl1 = 0.1;
    // Set x87 precision to 24 bits (single precision)
    _control87( _PC_24, MCW_PC );
    printf( "fl1: %.10f\n", fl1 );
    printf( "fl1*fl1: %.10f\n", fl1 * fl1 );
    return 0;
}
and I compile it like this:
Code:
cl /W4 /Od /arch:IA32 test1.c
gives me:
fl1: 0.1000000015
fl1*fl1: 0.0100000007
Can someone please explain why multiplying the `fl1` variable by itself returns `0.0100000007` instead of `0.0100000003`, as I expected?
PS.
I know that assigning the `0.1` literal to a `float` makes it *truncated* from `double` to `float`, but even so, `0.1000000015 x 0.1000000015` comes out to **`0.01000000030000000225`**, as I've checked in the *Windows Programmer Calculator*.