1. We have to specify what we are calculating the error of: e.g. the error of a single math function, or the error of a whole algorithm treated as a black box (in which the errors of many primitive functions may accumulate).
2. If there's an input to the algorithm/function, then we have to decide whether we are talking about the maximum possible error, the mean error, or some concrete example(s).
3. We have to decide which kind of error to calculate: the absolute or the relative error.
For 1., Road Runner decided to calculate the error of the whole (black-box) algorithm, which would produce 10000001 on a mathematically perfect calculating machine, but actually produces 10000001.0010595...
Since there is no input, 2. doesn't apply here.
For 3., he decided to calculate the relative error (in percent). If Xs is the expected result and Xr is the produced result, then the absolute error is defined as:
|Xr - Xs|
and the relative error as:
|Xr - Xs| / |Xs|
(both taken as absolute values). So this is all consistent with the standard theory of numerical errors.
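As a quick illustration, here is a minimal Python sketch of these two definitions (the function names are my own, not from any particular library), applied to Road Runner's numbers:

```python
def absolute_error(x_result: float, x_expected: float) -> float:
    """Absolute error: |Xr - Xs|."""
    return abs(x_result - x_expected)


def relative_error(x_result: float, x_expected: float) -> float:
    """Relative error: |Xr - Xs| / |Xs|. Not defined for Xs == 0."""
    if x_expected == 0.0:
        raise ValueError("relative error is not defined for an expected value of 0")
    return abs(x_result - x_expected) / abs(x_expected)


# Road Runner's example: expected 10000001, produced 10000001.0010595
x_expected = 10000001.0
x_result = 10000001.0010595

print(absolute_error(x_result, x_expected))        # ~0.0010595
print(relative_error(x_result, x_expected) * 100)  # in percent, ~1.06e-8 %
```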
Now imagine the algorithm as a black box that should produce the number 0.0, but for whatever reason produces, e.g., 0.0010595.
How do you calculate the errors now?
absolute error: 0.0010595
relative error: not defined (the case Xs = 0.0 is explicitly excluded)
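In code this case looks like the following self-contained sketch (again just my own illustration): the absolute error is still well defined, while the relative error would require dividing by |Xs| = 0.

```python
# Black-box example: expected 0.0, produced 0.0010595
x_expected = 0.0
x_result = 0.0010595

abs_err = abs(x_result - x_expected)
print(abs_err)  # 0.0010595

# The relative error would be abs_err / abs(x_expected), i.e. a division by 0,
# so it is simply not defined in this case.
```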
This is how it was taught in my numerics lecture at university - so I'm right.
