What does round-off error in double arithmetic mean?


Let's say I have a function $f$ and furthermore let there also be a recursion formula for $f$.

So I can evaluate $f(x)$ directly and I can evaluate $f(x)$ using the recursion formula.

Now, in the recursion formula when $x$ is close to $0$ we subtract two nearly equal numbers (in my specific example), so we get a loss of significance.

The task is: Analyze the round-off errors for small $x$ in double-arithmetic.

What are the round-off errors? Is it the absolute difference of $f(x)$ evaluated directly and $f(x)$ evaluated using the recursion?

I would do this using a computer algebra system. Does "double arithmetic" simply mean that I input my $x$ as a double?

Edit:
$$ f_j(x)=\frac{1}{(j-1)!}\int_{0}^{1}e^{(1-t)x}t^{j-1}\,dt, \ \ \ j \ge 1 $$

$$f_0(x)=e^x$$

recursion:\begin{equation} f_j(x)=\frac{f_{j-1}(x)-f_{j-1}(0)}{x}, \ \ \ j \ge 1, \ \ \ x \neq 0. \end{equation}

For example, $j=3$:

$f_3(x)=\frac{f_2(x)}{x}-\frac{f_2(0)}{x}=\frac{f_1(x)}{x^2}-\frac{f_1(0)}{x^2}-\frac{f_2(0)}{x}=\frac{f_0(x)}{x^3}-\frac{f_0(0)}{x^3}-\frac{f_1(0)}{x^2}-\frac{f_2(0)}{x}$

In general, we have $$f_j(x)=\frac{f_0(x)}{x^j}-\sum_{i=0}^{j-1}\frac{f_i(0)}{x^{j-i}}.$$

I want to implement:

(two screenshots showing the algorithm to implement, not reproduced here)
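To see the loss of significance concretely, the recursion above can be run directly in double precision (a sketch with my own function name, not part of the original task); the exact limit as $x\to 0$ is $f_3(0)=1/3!\approx 0.1667$:

```python
from math import exp

def f_recursive(j, x):
    """Evaluate f_j(x) via f_j(x) = (f_{j-1}(x) - f_{j-1}(0)) / x,
    using the exact values f_i(0) = 1/i!."""
    val = exp(x)      # f_0(x)
    fact = 1.0        # f_0(0) = 1/0! = 1
    for i in range(1, j + 1):
        val = (val - fact) / x   # subtracts nearly equal numbers for small x
        fact /= i                # f_i(0) = 1/i!
    return val

# the result should approach 1/6, but for tiny x the digits degrade badly
for x in (1e-1, 1e-3, 1e-5, 1e-7):
    print(x, f_recursive(3, x))
```

The printed values drift away from $1/6$ as $x$ shrinks, which is exactly the round-off behavior the task asks to analyze.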


2 Answers


Interpreting the example

Note first that $f_j(0)=\frac1{j!}$, so that what you are computing is the remainder of the Taylor polynomials of degree $j-1$ of the exponential function, reduced by its leading power, $f_j(x)=\sum_{k=j}^\infty\frac{x^{k-j}}{k!}$.
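This interpretation can be sanity-checked numerically (my own sketch; `f_series` and `f_integral` are hypothetical names), comparing the series to the original integral definition via a simple midpoint rule:

```python
from math import exp, factorial

def f_series(j, x, terms=30):
    # f_j(x) = sum_{k >= j} x^(k-j) / k!
    return sum(x**(k - j) / factorial(k) for k in range(j, j + terms))

def f_integral(j, x, n=20000):
    # midpoint rule for (1/(j-1)!) * integral_0^1 e^((1-t)x) t^(j-1) dt
    h = 1.0 / n
    s = sum(exp((1.0 - (i + 0.5) * h) * x) * ((i + 0.5) * h) ** (j - 1)
            for i in range(n))
    return h * s / factorial(j - 1)

print(f_series(2, 0.3), f_integral(2, 0.3))  # the two values closely agree
```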

Python code

For small $x$, this remainder series can be evaluated as a sum of geometrically falling positive terms, which is numerically very stable.

```python
def f(j, x):
    # compute the leading factorial 1/j!
    term = 1.0
    for k in range(2, j + 1):
        term /= k
    # start the summation of the series,
    # use the quotient between terms to reduce the number of operations,
    # stop when the numerical result no longer changes
    k = j
    res = term
    while res + term != res:
        k += 1
        term *= x / k
        res += term
    return res
```

Then compute and compare the expressions in question

```python
from math import exp

x = 0.01
print("%.17f\n%.17f" % (f(3, x), exp(x)/x**3 - 1/x**3 - 1/x**2 - 0.5/x))
```

with the result

```
0.16708416805754217
0.16708416794426739
```

As one can see, the first nine digits after the decimal point coincide, and the next differs by one.

A more theoretical analysis

This is what one would expect from the catastrophic cancellation in the subtractions of the second expression. For small $x$, the value $e^x/x^3$ is very large; to get the small result close to $1/6$, one needs to subtract equally large correction terms, which implies catastrophic cancellation. Setting $x=10^{-2}$ gives $e^x/x^3\approx 1/x^3=10^6$, so in the course of the computation the leading $6$ digits have to cancel, leaving at most $15.5-6=9.5$ correct digits after the decimal point, which is what is happening in the computed results.
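This estimate can be checked numerically; a self-contained sketch (the stable reference repeats the series summation in double precision):

```python
from math import exp

x = 0.01
# closed form with catastrophic cancellation: the intermediate e^x/x^3 ~ 1e6
unstable = exp(x)/x**3 - 1/x**3 - 1/x**2 - 0.5/x

# stable reference: f_3(x) = 1/3! + x/4! + x^2/5! + ...
res, term, k = 0.0, 1.0/6.0, 3
while res + term != res:
    res += term
    k += 1
    term *= x / k

observed = abs(unstable - res)
predicted = 2.2e-16 * exp(x) / x**3   # machine epsilon times the large intermediate
print(observed, predicted)            # the observed error matches the predicted magnitude
```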


The language in which the task is expressed seems to be non-standard.

I can only guess that "double-arithmetic" is someone's informal way of saying double-precision floating-point arithmetic. (I would even insert the word "computer" before "arithmetic", but almost nobody does that since it's almost always clear from context.)

Double-precision arithmetic is a more standard shortened form of what I think the author was trying to express by "double-arithmetic". (Ugh, even the use of the hyphen there is non-standard.)

Likewise, round-off error is a term with a specific technical definition. In this context it refers to the fact that in general, a double-precision floating-point number in a computer may differ from the "true" value it was supposed to represent by about $10^{-16}$ times the value of the number.

The main source of error you've already identified is actually catastrophic cancellation, which occurs when you take the difference between two nearly equal numbers. You might say this is caused by round-off error, but strictly speaking it's a distinct effect of finite-precision arithmetic: it greatly magnifies the effect of the original round-off error.
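A minimal toy example of the distinction (my own, not from the problem): each operand is correct to about one part in $10^{16}$, yet the subtraction leaves only a few correct digits:

```python
a = 1.0 + 1e-13   # stored with an absolute representation error of ~1e-16
b = 1.0
diff = a - b      # the subtraction itself is exact, but inherits a's error
print(diff)       # not 1e-13: roughly 9.992e-14, a relative error of ~1e-3
```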

Again we have to guess a bit about what the author of this problem meant, but I think it's a pretty good guess that when they asked for "round-off error" they meant for you to analyze all of the floating-point arithmetic errors that occur in this calculation, whose effects might be summarized by the difference between the values of $f(x)$ computed according to the two methods. (Technically, both values are likely to have floating-point arithmetic error, but if one method is much better than the other then its error will be much smaller and the difference will be a good estimate of the error in the other method.)

And yes, I believe you are meant to input the numbers as double types, but also you should represent all the intermediate steps of the calculation as double types. (In actual computers, floating-point arithmetic errors are sometimes reduced by using a larger register for the intermediate results of calculations; I guess you are not supposed to do that.)

