maths functions

Psychophanta
Always Here
Posts: 5153
Joined: Wed Jun 11, 2003 9:33 pm
Location: Anare

Post by Psychophanta »

The flowers are still here and you are giving me more :P

|Xr-Xs| is the error produced per single pass through a given algorithm.

In the original snippet, sospel clearly considers 10000000 passes through his algorithm, not one single closed-box algorithm. And it must be understood that what he calls "% of error" is the error (the change in a value) that happens per 100 passes through his formula.

So the error is relative to one single pass.

About your points:
1. is absolutely false: the error is not MUCH larger in the second example, but the same, because the error is calculated arithmetically (i.e. |Xr-Xs|), not geometrically nor exponentially.

2. The % is relative to the number of passes performed, as explained above.

3. No, my formula just calculates the average error produced by an individual instruction that is repeated many times; it is meant for a steadily growing error, as in the sospel case, not for alternating errors. That is why it is divided by the number of iterations and then multiplied by 100, to give that average error over 100 passes. Your example is a very different case and requires another kind of treatment and formula, because the values oscillate.
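
To put it as a minimal sketch (the numbers below are only illustrative placeholders, not sospel's exact output), the error value I mean is computed like this:

Code: Select all

; Xr = computed final value, Xs = expected exact value, N = number of passes
Xr.d = 10000001.001   ; illustrative computed result after N passes
Xs.d = 10000001.0     ; expected exact result
N.i  = 10000000       ; passes through the formula
; arithmetic error |Xr-Xs| averaged per pass, then scaled to 100 passes:
Debug StrD(Abs(Xr - Xs) / N * 100.0) + " average error per 100 passes"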

Thanks for giving me still more flowers :)

And yes, yes, yes: Let's hope Fred fixes it fast :D
http://www.zeitgeistmovie.com

while (world==business) world+=mafia;
Froggerprogger
Enthusiast
Posts: 423
Joined: Fri Apr 25, 2003 5:22 pm

Post by Froggerprogger »

You are missing the point that they are Trojan flowers. ;)

A) The formula you use does calculate something; let's call it an error-value. There is a clear interpretation of that value. (Read carefully:) It is the sum of all signed absolute errors (the signed differences Xr-Xs) over multiple calls to a function, divided by the number of calls, and multiplied by 100.0.
You want to call it the average error produced by an individual instruction that is often repeated, and you define the error per single pass as |Xr-Xs|, i.e. the absolute value of the difference. So why don't you accumulate |Xr-Xs|, instead of only the signed (Xr-Xs)?
Here's an example:

Code: Select all

Procedure.d getX1(x.d) ; should return exactly x - bad implementation
  If Int(x)&1
    ProcedureReturn x + 0.1
  Else
    ProcedureReturn x - 0.1
  EndIf
EndProcedure

Procedure.d getX2(x.d) ; should return exactly x - good implementation
  ProcedureReturn x
EndProcedure

#N = 1000
#ExpectedResult = ((#N * (#N+1)) / 2)

value1.d = 0
value2.d = 0

absErr1.d = 0
absErr2.d = 0
For x=1 To #N
  value1 = value1 + getX1(x)
  value2 = value2 + getX2(x)
  
  absErr1 + Abs(getX1(x) - x)
  absErr2 + Abs(getX2(x) - x)
Next

Debug "Expected: " + Str(#ExpectedResult)
Debug "value 1 : " + StrD(value1)
Debug "value 2 : " + StrD(value2)
Debug ""
Debug "## Psycho-formula-analysis:"
Debug "error-value 1 : " + StrD(Abs(value1 - #ExpectedResult)/#N*100.0)
Debug "error-value 2 : " + StrD(Abs(value2 - #ExpectedResult)/#N*100.0)
Debug "=> both procedures are wonderful. We cannot determine any difference"
Debug "   though we want to analyse them call-by-call"
Debug ""
Debug "## absolute error per call - analysis:"
Debug "cumulated (by absolute values) absolute error 1 : " + StrD(absErr1)
Debug "cumulated (by absolute values) absolute error 2 : " + StrD(absErr2)
Debug "absolute error per call 1 : " + StrD(absErr1 / #N)
Debug "absolute error per call 2 : " + StrD(absErr2 / #N)
Debug "=> procedure 2 is much better. More: Its optimal!"
And here is its output:

Code: Select all

Expected: 500500
value 1 : 500500.0000000000
value 2 : 500500.0000000000

## Psycho-formula-analysis:
error-value 1 : 0.0000000000
error-value 2 : 0.0000000000
=> both procedures are wonderful. We cannot determine any difference
   though we want to analyse them call-by-call

## absolute error per call - analysis:
cumulated (by absolute values) absolute error 1 : 100.0000000000
cumulated (by absolute values) absolute error 2 : 0.0000000000
absolute error per call 1 : 0.1000000000
absolute error per call 2 : 0.0000000000
=> procedure 2 is much better. More: It's optimal!
As you can see, by accumulating |Xr-Xs| in absErr you really do get the average absolute error per function call, which is the value you intended to calculate; you don't get it by accumulating the signed (Xr-Xs).

B) Any error-value (however defined) is intended to carry comparable information about the 'numerical quality' of a function. In practice, errors are very often relative to the input size, e.g. when rounding to a fixed-length float mantissa, as happens in nearly all mathematical functions. So a 'good' error-value should also work relative to the input size, as the following example shows: it calculates the average relative error of one function call and compares it to your error-value:

Code: Select all

Procedure.d getX(x.d) ; should return exactly x - always produces a 10% error of x*0.1
  ProcedureReturn x + x*0.1
EndProcedure

#N = 1000

Procedure Test(mode.l)
  If mode = 1
    expectedResult.d = ((1.0 * #N * (#N+1)) / 2)
  Else
    expectedResult.d = (((1.0 * #N * (#N+1)) / 2) / 100.0)
  EndIf
  
  value.d = 0
  relErr.d = 0
  
  For i=1 To #N
    If mode=1
     x.d = i
    Else
     x.d = i/100.0
    EndIf
    
    value = value + getX(x)
    
    relErr + (Abs(getX(x) - x) / x)
  Next
  
  Debug "expected result: " + StrD(expectedResult)
  Debug "but is result: " + StrD(value)
  Debug "## Psycho-formula-analysis:"
  Debug "error-value : " + StrD(Abs(value - expectedResult)/#N*100.0) + "% of something"
  Debug "## relative error per call - analysis:"
  Debug "relative error per call : " + StrD(relErr / #N * 100.0) + "% of input-value"
EndProcedure

Debug "N = " + Str(#N)
Debug "##################################"
Debug "Testing sum of x = 1 .. N"
Debug "##################################"
Test(1)
Debug ""
Debug ""
Debug "##################################"
Debug "Testing sum of x = 1/100.0 .. N/100.0"
Debug "##################################"
Test(2)
And here is its output:

Code: Select all

N = 1000
##################################
Testing sum of x = 1 .. N
##################################
expected result: 500500.0000000000
but is result: 550550.0000000000
## Psycho-formula-analysis:
error-value : 5005.0000000000% of something
## relative error per call - analysis:
relative error per call : 10.0000000000% of input-value


##################################
Testing sum of x = 1/100.0 .. N/100.0
##################################
expected result: 5005.0000000000
but is result: 5505.5000000000
## Psycho-formula-analysis:
error-value : 50.0500000000% of something
## relative error per call - analysis:
relative error per call : 10.0000000000% of input-value

As you can see, the relative error is correctly identified as 10% and is clearly interpretable as a percentage of (any) input size. It is not a 'very creative', meaningless number.

C) Sospel actually did something totally different from what we are discussing here at the moment: he does not feed the values 1, 2, 3, ... into the function, but the intermediate result itself, e.g. 1.00002, 2.00003, 2.9999964, ..., so he focuses on the stability of the whole algorithm, including some self-correcting effects caused by alternating errors. He does not need to extract some more or less meaningful information on a per-call basis; he wants to analyse how much this total behaviour differs from the expected total result, so he wants it relative to 10000001, not N.
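
As a minimal sketch of what I mean (sospel's original loop flattened into a single loop, with the expected final value 10000001 taken from this thread):

Code: Select all

value.d = 1.0
For i = 1 To 10000000
  ; each pass feeds the previous intermediate result back into the chain of functions
  value = Tan(ATan(Pow(10.0, Log10(Sqr(value * value))))) + 1.0
Next
; total deviation, taken relative to the expected final result and not per call
Debug StrD(Abs(value - 10000001.0) / 10000001.0 * 100.0) + " % of the expected total"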

%1>>1+1*1/1-1!1|1&1<<$1=1
sospel
User
Posts: 16
Joined: Wed Sep 17, 2008 3:34 pm

Post by sospel »

hello !

Thank you for your numerous answers :) . Here are mine:

blueznl : "I think it's a bug, caused by wrong type conversion / enforcement. " ==> I agree.

HELLE obtained a good result, but one that depends on the way the code is written: this is unacceptable for a high-level language!


Psychophanta : "I agree with sospel in the part that PB should be at least as GFA-BASIC (win32) for accuracy for calculations with floats, doubles, or any." ==> I agree 100% !

blueznl : "You never tried Forth "

Wrong :wink: 20 years ago I bought the CANON "CX07" handheld computer, programmable in BASIC ... and in FORTH. I tested it, and I challenge you to program even slightly complex mathematical formulae in it, such as astronomical calculations (my hobby). Furthermore, FORTH programs are incomprehensible two months (days?) later, to another programmer as much as to the original programmer!! On the other hand, on an AMIGA 2000 I had programmed in GFA a complete calculation of the ephemerides of all the planets of the Solar system (~4000 lines of code, with the formulae programmed just as they are written), and the results were as good as FORTRAN on a SUN workstation.

Psychophanta : "anyone tested it with blitz?"
I did! But BLITZ-3D gives the worst result = 2968.7. Nonsense ... :( !!!

Road Runner : "PowerBASIC answer = 10000001.0010595 "
Thank you for this additional test, which proves that a BASIC language can give good results (but it costs $200). Can you post the POWERBASIC instructions?

Indeed, the posts on the calculation of relative errors are very instructive. I broadly agree with Froggerprogger's arguments ("he (sospel) focusses on the stability of the whole algorithm, including some self-correcting effects as done by alternating errors"). But the other contributions are also useful for a better definition of this error calculation. (My calculation should have taken into account that the final value is 10000001 and not 10000000 ... I'm very, very sorry :wink: !!)

Well, in summary, my opinion has not changed: the tree must not hide the forest. The main problem is the reliability of these mathematical functions.
For PureBASIC to reach the level of GFA, we must be able to program mathematical formulae just as we write them, without programming tricks that influence the result. In my humble opinion, it would be interesting for the designers of PureBasic to give us their opinion on this defect of the language. Is it complicated to obtain a "bullet-proof" mathematical library and integrate it?
Cordially,
SosPel :)
Road Runner
User
Posts: 48
Joined: Tue Oct 07, 2003 3:10 pm

Post by Road Runner »

Sospel.
Can you post the instructions in POWERBASIC?


The following is the complete PowerBASIC Console Compiler code (currently $169):

Code: Select all

DEFDBL a-z 'you want to test DOUBLES so set default variable type to that
FUNCTION PBMAIN () AS LONG

  value = 1.0
  FOR i = 1 TO 10
    FOR j = 1 TO 1000
      FOR k = 1 TO 1000
        value = TAN(ATN(10^LOG10(SQR(value*value)))) + 1.0
      NEXT k
    NEXT j
  NEXT i

  PRINT value

END FUNCTION
The only changes to the code you posted are minor syntax changes, ATAN -> ATN and POW(10,x) -> 10^x

As it happens, the PowerBASIC DOS compiler costs $49 and runs the same code giving an answer of 10000001.0014146
Psychophanta
Always Here
Posts: 5153
Joined: Wed Jun 11, 2003 9:33 pm
Location: Anare

Post by Psychophanta »

So up to now the winner is PowerBasic!?
Any proof in VC++?
http://www.zeitgeistmovie.com

while (world==business) world+=mafia;
Road Runner
User
Posts: 48
Joined: Tue Oct 07, 2003 3:10 pm

Post by Road Runner »

I'm surprised any standard, modern compiler "wins". Surely they all use the same FPU and it's that which sets the accuracy.
FPU arithmetic has been standardised for 20+ years.
djes
Addict
Posts: 1806
Joined: Sat Feb 19, 2005 2:46 pm
Location: Pas-de-Calais, France

Post by djes »

One of the hardest things to code is a good expression evaluator. Why don't you, good maths fellows, try to give Fred a good pseudo-code algorithm so he can build exactly what you want?
buddymatkona
Enthusiast
Posts: 252
Joined: Mon Aug 16, 2010 4:29 am

Re: maths functions

Post by buddymatkona »

I was just doing a little informal regression testing with PB64 4.60 RC1. The original 2008 loop code that showed a problem runs better than ever. :)

Code: Select all

value = 1.0
For i = 1 To 10
  For j = 1 To 1000
    For k = 1 To 1000
      value = Tan(ATan(Pow(10.00, Log10(Sqr(value*value))))) + 1.0
    Next k
  Next j
Next i
Debug StrD(value) ; = 10000001.0000000000  OR using value.d, I got 9999818.1154463831. Pretty good!