Let $T$ be the true value and $A$ an approximation of it. Absolute error: $A - T$. Relative error: $\frac{A - T}{T}$. Letting $\frac{A - T}{T} = \delta$, we can write $A = T(1 + \delta)$, which avoids the division issue when $T = 0$.
Correct digits: $A := 5.46729\times 10^{-12}$, $T := 5.46417\times 10^{-12}$. Then $\delta_{\text{absolute}} = A - T = 0.00312\times 10^{-12}$ and $\delta_{\text{relative}} = 0.00312/5.46417 = 3.12/5.46417 \times 10^{-3} \approx 5.7\times 10^{-4}$. Let $p$ be the position of the first significant digit on which $A$ and $T$ disagree; then the relative error has magnitude around $10^{-p\pm 1}$.
0.99... = 1: However, consider $A = 1.00596\times10^{-10}$, $T = 0.99452\times10^{-10}$. The leading digits disagree, yet the relative error is only about $1.15\times 10^{-2}$, much smaller than the digit comparison suggests. We therefore say that $0.99\ldots$ agrees with $1.00\ldots$, so $A$ and $T$ agree to 2 significant digits.
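As a quick numerical sanity check, a minimal Python sketch using the two pairs of values above:

```python
# Absolute and relative error of an approximation A to a true value T.
def errors(A, T):
    return A - T, (A - T) / T

# First example: digits disagree from the 4th significant digit on.
print(errors(5.46729e-12, 5.46417e-12))   # ~ (3.12e-15, 5.7e-4)

# Second example: the leading digits differ (1.00... vs 0.99...), yet the
# relative error is ~1.15e-2, i.e. about 2 significant digits of agreement.
print(errors(1.00596e-10, 0.99452e-10))   # ~ (1.14e-12, 1.15e-2)
```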
Data error and computational error
Data error: let $\hat{x} :=$ the input value and $x$ the actual value; the data error is $\hat{x} - x$. For example:
- we cannot represent $\pi$ as a terminating floating-point number ($3.14 \approx \pi$)
- measurement error
Computational error: let $f$ be the actual mapping and $\hat{f}$ the approximating function; the computational error is $\hat{f}(x) - f(x)$. For example:
- $\cos$ is often computed from its Taylor series, but the Taylor series is an infinite sum and we can only compute a finite approximation, as in the sketch below.
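For instance, a minimal sketch of the cosine case (the cutoff of 5 terms is an arbitrary choice for illustration):

```python
import math

def cos_taylor(x, n_terms=5):
    # Truncated Taylor series of cos around 0: sum of (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(n_terms))

x = 1.2
print(cos_taylor(x) - math.cos(x))   # computational error f_hat(x) - f(x), ~1.7e-6
```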
Truncation error and rounding error (subclasses of computational error)
Truncation error: the difference between the true result $f(x)$ and the result $\hat{f}(x)$ that would be produced by a given (finite-approximation) algorithm using exact arithmetic.
Rounding error: the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic.
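A minimal sketch separating the two for the truncated cosine series above: evaluating the same finite sum in exact rational arithmetic isolates the truncation error, and the gap between the float result and the exact-arithmetic result is the rounding error.

```python
import math
from fractions import Fraction

x, n_terms = 1.2, 5

# The same truncated cosine series, once in floats and once in exact rationals.
float_sum = sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(n_terms))
exact_sum = sum(Fraction((-1) ** k, math.factorial(2 * k)) * Fraction(x) ** (2 * k)
                for k in range(n_terms))

truncation_error = float(exact_sum) - math.cos(x)  # finite algorithm, exact arithmetic vs true cos
rounding_error = float_sum - float(exact_sum)      # finite-precision result vs exact-arithmetic result
print(truncation_error, rounding_error)            # truncation (~1.7e-6) dominates rounding (~1e-16)
```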
Example: approximating $f'(x)$ from a known $f$. By definition $f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$; hence, choosing a small but nonzero value $h$, we can approximate $f'(x)$ for any $x$. Assuming $f$ is smooth,
$$
\begin{aligned}
\frac{f(x+h)-f(x)}{h}
&= \frac{1}{h}\left(\sum_{k=0}^{\infty}\frac{f^{(k)}(x)}{k!}h^k - f(x)\right) && \text{Taylor expansion around } x\\
&= \frac{1}{h}\left(f(x) + f'(x)h + \frac{f''(c)}{2}h^2 - f(x)\right) && \text{remainder theorem, } c\in[x,x+h]\\
&= f'(x) + \frac{f''(c)}{2}h
\end{aligned}
$$
Therefore, the truncation error is $\frac{f''(c)}{2}h$; since $f''$ is continuous on the closed interval $[x, x+h]$, it is bounded there, so this error is bounded and shrinks linearly in $h$.
Then, let $FL$ denote evaluation in floating-point arithmetic, and assume $FL(f(x+h)) = f(x+h) + \epsilon_h$ and $FL(f(x)) = f(x) + \epsilon_0$.
Note that $\epsilon_h$ is only weakly influenced by $h$ when $h$ is small, so we can treat $\epsilon := \epsilon_h - \epsilon_0$ as a constant. The computed quotient is then $f'(x) + \frac{f''(c)}{2}h + \frac{\epsilon}{h}$. Let $E(h)$ be the total error and let $M$ bound $|f''|$:
$$E(h) = \frac{f''(c)}{2}h + \frac{\epsilon}{h}, \qquad E'(h) = \frac{f''(c)}{2} - \frac{\epsilon}{h^2} \;\Rightarrow\; \operatorname{argmin}_h E(h) \approx \sqrt{2\epsilon/M}.$$
Claim: $\lim_{h\to 0}\, FL(f(x+h)) - FL(f(x)) = 0$.
Notice that when $h$ is much smaller than the spacing between floating-point numbers near $x$, i.e. $x \gg h$, then $x + h$ evaluates to exactly $x$ in floating point.
Therefore, for very small $h$, $FL\!\left(\frac{f(x+h)-f(x)}{h} - f'(x)\right) = FL(-f'(x))$.
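A minimal numerical sketch of this trade-off (the choices $f = \sin$ and $x = 1.0$ are arbitrary): the error first shrinks with $h$ as truncation dominates, bottoms out near $h \approx \sqrt{\epsilon_{\text{mach}}}$, grows again as rounding dominates, and for tiny $h$ the quotient collapses to $0$, so the error is just $|f'(x)|$.

```python
import numpy as np

f, fprime, x = np.sin, np.cos, 1.0

for k in range(1, 21):
    h = 10.0 ** (-k)
    approx = (f(x + h) - f(x)) / h                            # forward difference
    print(f"h=1e-{k:02d}  error={abs(approx - fprime(x)):.3e}")
# The error bottoms out around h ~ 1e-8 (~sqrt(machine epsilon)); for h <= 1e-16,
# x + h rounds to exactly x, the quotient is 0, and the error equals |cos(1)| ~ 0.54.
```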
Forward Error and backward error
Let $y = f(x)$ be the true result and $\hat{y} = \mathrm{COMP}(f(x))$ the computed result, where $\mathrm{COMP}$ accounts for all errors introduced in the computation. The forward error is $\hat{y} - y$. Take $\hat{x}$ such that, under exact computation, $\hat{y} = f(\hat{x})$; the backward error is $\hat{x} - x$.
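As a concrete (illustrative) instance, suppose $f(x) = \sqrt{x}$, $x = 2$, and the computed answer is $\hat{y} = 1.4$. Then the forward error is $1.4 - \sqrt{2} \approx -0.0142$, and since $1.4 = \sqrt{1.96}$, the backward error is $1.96 - 2 = -0.04$: the computed answer is the exact answer to a nearby input.

```python
import math

x, y_hat = 2.0, 1.4            # pretend COMP(sqrt(2)) returned 1.4
y = math.sqrt(x)               # true result
forward_error = y_hat - y      # ~ -0.0142
x_hat = y_hat ** 2             # the input that sqrt maps exactly to y_hat
backward_error = x_hat - x     # ~ -0.04
print(forward_error, backward_error)
```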
Conditioning of Problems
Let $\hat{x} \in (x - \delta, x + \delta)$, i.e. any $\hat{x}$ in a neighborhood of some $x$. Consider the relative forward error, assuming $f$ is differentiable around $x$:
$$
\begin{aligned}
\frac{\hat{y} - y}{y} :&= \frac{f(\hat{x}) - f(x)}{f(x)}\\
&= \frac{f'(c)(\hat{x} - x)}{f(x)} && \text{MVT, } c\in[x,\hat{x}]\\
&= \frac{x f'(c)}{f(x)}\cdot\frac{\hat{x} - x}{x}\\
&\approx \frac{x f'(x)}{f(x)}\cdot\frac{\hat{x} - x}{x} && c \text{ is quite near } x\\
&= \text{condition number} \times \text{relative backward error}
\end{aligned}
$$
The problem is well-conditioned if for all $x$ the condition number is small; it is ill-conditioned if for some $x$ the condition number is large.
Connection to approximation error: $y = f(x) \Rightarrow \hat{y} = f(\hat{x})$. The absolute backward error $\hat{x} - x$ gives a different conditioning than the relative backward error $\frac{\hat{x} - x}{x}$; we usually measure relative errors.
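A minimal sketch of checking the condition number numerically (the test function $\log$ and the evaluation points are arbitrary choices): apply a small relative perturbation to the input and compare the amplification against $\left|\frac{x f'(x)}{f(x)}\right|$.

```python
import numpy as np

def measured_condition(f, x, rel=1e-6):
    # ratio of relative forward error to relative backward error for a small perturbation
    x_hat = x * (1 + rel)
    return abs((f(x_hat) - f(x)) / f(x) / rel)

# log is well-conditioned away from 1 but ill-conditioned near 1, since |x f'(x)/f(x)| = 1/|ln x|
print(measured_condition(np.log, np.e))     # ~ 1
print(measured_condition(np.log, 1.001))    # ~ 1000
```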
Example: $f(x) := \sqrt{x}$, $x \ge 0$. For all $x \ge 0$, $c \approx \frac{x f'(x)}{f(x)} = \frac{x}{2\sqrt{x}\cdot\sqrt{x}} = 1/2 \Rightarrow$ very well-conditioned.
Example: $f(x) = e^x$
$c \approx \frac{x e^x}{e^x} = x \Rightarrow$ less and less well-conditioned as $|x|$ gets big. However, $e^x$ will overflow or underflow when $|x|$ is large, even before the problem becomes ill-conditioned, as the small check below shows.
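A tiny illustrative check: double-precision $e^x$ already overflows just above $x \approx 709.8$ (and underflows to $0$ near $x \approx -745$), where the condition number is still only a few hundred.

```python
import numpy as np

print(np.exp(709.0))    # ~ 8.2e307, still representable; condition number here is ~709
print(np.exp(710.0))    # inf: overflow kicks in long before conditioning is the real problem
print(np.exp(-750.0))   # 0.0: underflow for large negative x
```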
Example: $f(x) = \sin(x)$
$c \approx \frac{x\cos(x)}{\sin(x)} = x\cot(x) \Rightarrow$ ill-conditioned near $k\pi$, $k \neq 0$, or when $|x|$ is big and $\cos(x) \neq 0$.
```python
import numpy as np
import matplotlib.pyplot as plt

# condition number x*cot(x) of sin(x) near the origin
t = np.arange(-5., 5., 0.01)
plt.figure(figsize=(12, 4))
plt.plot(t, t * np.cos(t) / np.sin(t))
plt.title("conditioning number of sin(x)")
plt.savefig("assets/approx_error_1.jpg")
```
```python
# same condition number over a wider range: |x cot x| keeps growing with |x|
t = np.arange(-100., 100., 0.2)
plt.figure(figsize=(12, 4))
plt.plot(t, t * np.cos(t) / np.sin(t))
plt.title("conditioning number of sin(x)")
plt.savefig("assets/approx_error_2.jpg")
```
Stability of Algorithms
An algorithm is stable if a small change in the input results in only a small change in the output.
stable ≠ good
Consider $\hat{f}(x) := 0$ vs. $f(x) := \sin(x)$: $\hat{f}$ is very stable, yet it does not accomplish a good approximation, as the small sketch below shows.
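A tiny sketch of that point: the constant-zero approximation of $\sin$ is perfectly stable, yet its approximation error can be close to $1$.

```python
import numpy as np

def f_hat(x):
    return 0.0                       # constant-zero "approximation" of sin

x, dx = 1.3, 1e-6
print(f_hat(x + dx) - f_hat(x))      # 0.0: a small input change causes zero output change (stable)
print(abs(f_hat(x) - np.sin(x)))     # ~0.96: but a poor approximation of sin(x)
```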