In the beginning I was also struggling with type casting, but I'm no longer sure it actually matters. Once you've got the logic of PureBasic's 'expression evaluation' down, it starts to make some sense.
Though I KNOW, from an intellectual point of view, that 2.5 and 3.5 are internally converted to binary floating point, I'm still puzzled by the fact that in good ol' GFAbasic the following code would output 1.4 (which is what my calculator says as well), whilst in PureBasic it outputs 1.39999997615814:
a.f = 3.5
b.f = 2.5
c.f = a.f / b.f
Debug c.f ; prints 1.39999997615814
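For what it's worth, you can make PureBasic show the exact value it actually stored by asking for more decimals (a quick sketch; StrD() is only used here to squeeze extra display digits out of the float):

a.f = 3.5
b.f = 2.5
c.f = a.f / b.f
Debug StrD(c.f, 20) ; the float nearest to 1.4: 1.3999999761581420898...

So the division itself is as good as a 32-bit float can make it; the quotient simply lands on the nearest representable value.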
In the opposite direction the output is as expected (presumably because the error in the stored 1.4 is small enough that the product rounds back to exactly 3.5)...
a.f = 2.5
b.f = 1.4
c.f = a.f * b.f
Debug c.f ; prints 3.5
Now Fred is perfectly right: when converting decimal numbers to binary floating point, some values simply are not exactly representable in binary.
This, however, raises the question: how do other languages pull off that trick? Why can they produce the 'proper' result of the division whilst PureBasic cannot?
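One possibility (just a guess on my part, assuming GFA Basic rounds its output to six or seven significant digits before printing, which I haven't verified): the stored float is equally inexact everywhere, and the difference lies only in how it is displayed. Rounding the display reproduces the GFA Basic result in PureBasic:

a.f = 3.5
b.f = 2.5
c.f = a.f / b.f
Debug StrF(c.f, 1) ; rounds only the display: prints 1.4
Debug StrF(c.f, 7) ; still 1.4000000 at seven decimals

The number in memory hasn't changed, of course; StrF() just rounds it on the way to the screen.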
I also sometimes wonder whether this is done by manipulating the mantissa as opposed to the exponent (I hope I got the terms right), i.e. 14x10^-1 is the same value as 1.4x10^0, but one representation might survive the trip to binary FP whilst the other might not? (This is not my field, so I might be entirely talking nonsense.) Nevertheless, fact is: some languages report 3.5 / 2.5 as 1.4, and PureBasic does not. (Frankly, it's a bit of a bummer, and it does cause some concern when doing financial stuff.)
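For the curious: here's how I convinced myself that 1.4 genuinely has no exact float representation (a little sketch; it just peeks at the raw IEEE-754 bits of the variable):

a.f = 1.4
Debug Hex(PeekL(@a)) ; prints 3FB33333: sign 0, exponent 127, mantissa 011 0011 0011 0011 0011 0011

In binary, 1.4 is 1.0110011001100110...b, where the 0110 pattern repeats forever, exactly like 1/3 = 0.333... in decimal, so the 23-bit mantissa has to cut it off somewhere. The same is true for doubles, by the way; they just cut it off further out.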