Returns 2.29999995231628 here on Windows 7 Ultimate x86 as well, and it is represented correctly within the standard's requirements. Don't confuse precision with representation.
Part of it is how your CPU handles floating point, and the compiler may even choose between different floating point methods. From what I discovered about C++ (in which I am certainly no expert), the rest is explained by how the mantissa is stored. The mantissa is stored as a binary fraction with a value greater than or equal to 1 but less than 2. Because its leading bit is always 1, that bit is assumed rather than stored, in both the real*4 format (8-bit exponent, bias 127) and the real*8 format (11-bit exponent, bias 1023). The binary (not decimal) point is assumed to be just to the right of the leading 1. PureBasic works the same way (I think), and the help alludes to this where it says "Another limitation of floating point numbers is that they still work in binary, so they can only store numbers exactly which can be made up of multiples and divisions of 2."

Most compilers have options that let you choose how floating point operations are handled; generally, faster means less accurate and more accurate means slower. Results can also differ between operating systems. That is one of the inherent "flaws" of floating point numbers: there is no guarantee they will behave the same everywhere. I try to stay away from them myself unless I am working with sums of powers of 2, and then I generally round. If you need more accuracy, consider doubles instead of floats.

Also note that your number (2.3) cannot be built from powers of 2, so it cannot be stored exactly, and neither can 2.4. For example:
Code: Select all

Debug ValF("2.4")   ; displays 2.40000009536743
Debug ValF("2.3")   ; displays 2.29999995231628
Both pick up a small error in the last digits, because neither 2.4 nor 2.3 can be made from a finite sum of powers of 2. A number that can, such as 2.5 (2 + 1/2), would come back exact.
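If you are curious, you can peek at the raw bits yourself. Here is a minimal sketch (assuming IEEE 754 single precision, which is what PureBasic floats use on x86) that splits a float into its sign, exponent and mantissa fields:

Code: Select all

f.f = 2.3
bits.l = PeekL(@f)                 ; reinterpret the float's 4 bytes as a 32-bit long
sign.l     = (bits >> 31) & 1      ; 1 sign bit
exponent.l = (bits >> 23) & $FF    ; 8 exponent bits, stored with a bias of 127
mantissa.l = bits & $7FFFFF        ; 23 mantissa bits; the leading 1 is implicit
Debug "sign:     " + Str(sign)
Debug "exponent: " + Str(exponent) + " (power of 2: " + Str(exponent - 127) + ")"
Debug "mantissa: %" + RSet(Bin(mantissa), 23, "0")

For 2.3 this should print exponent 128 (so 2^1) and mantissa %00100110011001100110011. The 0011 group would repeat forever if there were room, which is exactly why 2.3 can only be approximated.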
The PureBasic help says this:
"Special information about Floats and Doubles
A floating point number is stored in a way that makes the binary point "float" around the number, so that it is possible to store very large numbers or very small numbers. However, you cannot store very large numbers with very high accuracy (big and small numbers at the same time, so to speak).
Another limitation of floating point numbers is that they still work in binary, so they can only store numbers exactly which can be made up of multiples and divisions of 2. This is especially important to realise when you try to print a floating point number in a human readable form (or when performing operations on that float) - storing numbers like 0.5 or 0.125 is easy because they are divisions of 2. Storing numbers such as 0.11 are more difficult and may be stored as a number such as 0.10999999. You can try to display to only a limited range of digits, but do not be surprised if the number displays different from what you would expect!
This applies to floating point numbers in general, not just those in PureBasic.
Like the name says the doubles have double-precision (64 bit) compared to the single-precision of the floats (32 bit). So if you need more accurate results with floating point numbers use doubles instead of floats.
The exact range of values, which can be used with floats and doubles to get correct results from arithmetic operations, looks as follows:
Float: +- 1.175494e-38 till +- 3.402823e+38
Double: +- 2.2250738585072013e-308 till +- 1.7976931348623157e+308"
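To see that doubled precision directly, compare the two (a small sketch; the exact digits Debug prints may vary a little between PureBasic versions):

Code: Select all

Debug ValF("2.3")            ; single precision: 2.29999995231628
Debug ValD("2.3")            ; double precision: about 2.2999999999999998
Debug StrF(ValF("2.3"), 2)   ; rounding the display to 2 decimals gives "2.30"

Note the double is still not exactly 2.3; it just pushes the error out past roughly the 16th significant digit instead of the 8th.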
If you take a look at the IEEE 754 standard at
http://en.wikipedia.org/wiki/IEEE_754 you will find that the representation you're seeing meets the standard's requirements. So although it may not be as precise as you wish, the representation is correct according to the standard.
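You can even check the standard's answer by hand. In the range [2, 4) single-precision floats are spaced 2^-22 apart, so the nearest float to 2.3 is the closest integer multiple of 2^-22: 2.3 * 2^22 = 9646899.2, which rounds to 9646899. A quick sketch to confirm (4194304 is just 2^22 written out):

Code: Select all

d.d = 9646899.0 / 4194304.0   ; 9646899 * 2^-22, exact in a double
Debug d                       ; 2.2999999523162842, the value the standard mandates
Debug ValF("2.3")             ; the same value: 2.29999995231628

So the 2.29999995231628 you are seeing is not a bug; it is simply the closest number to 2.3 that 32 bits can hold.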
The advantage of a 64-bit operating system over a 32-bit operating system comes down to only being twice the headache.