- By interpolation one means the process of replacing a function, given by a finite set of data, by a "simple" function which reproduces these data and is given by a formula that can be evaluated everywhere in the range of interest, with the aim of evaluation or other mathematical manipulation (say integration, differentiation, etc.). Here we have the most common case: the "simple" function is chosen as a polynomial of given maximum degree, and the independent variable is one-dimensional.
- Here we have data $x_i, y_i$, $i=0,\dots,n$ (with the meaning $y_i = f(x_i)$, $f$ not necessarily known). We want to approximate the inverse function of $f$, i.e. $f^{-1}$, and use for this a polynomial $p$ of given maximum degree $n$ such that
$$ p(y_i) = x_i, \qquad i=0,\dots,n . $$
The unique solvability of this problem requires the $y_i$, $i=0,\dots,n$, to be pairwise distinct; since $f$ must be invertible anyway, we require the $y_i$ to be strictly monotone.
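A minimal sketch of this setup, assuming an arbitrary strictly increasing test function $f(x) = x^3 + x$ and an arbitrary node count (both are illustrative choices, not prescribed above): the samples $y_i$ become the abscissas, the $x_i$ the values, and strict monotonicity of the $y_i$ is checked before any interpolation is attempted.

```python
import numpy as np

# Illustration only: f(x) = x**3 + x is strictly increasing, so the
# samples y_i = f(x_i) are strictly monotone and admissible abscissas.
f = lambda t: t**3 + t

x = np.linspace(0.0, 1.0, 6)      # x_0, ..., x_n
y = f(x)                          # y_i = f(x_i), the new abscissas

# strict monotonicity of the y_i (here: increasing) must hold
assert np.all(np.diff(y) > 0)

# the sought polynomial p of degree <= n has to satisfy p(y_i) = x_i
print(list(zip(y, x)))
```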
- This problem can be solved in a variety of ways, depending on the choice of the basis of the linear space of polynomials of maximum degree $n$. The most obvious choice, the monomials, is discouraged because of the inherent bad conditioning of the resulting Vandermonde system. The Lagrange basis
$$ L_i(y) = \prod_{j=0,\ j\ne i}^{n} \frac{y-y_j}{y_i-y_j} $$
is often used in theoretical work, but for numerical representation the Newton basis
$$ N_0(y) = 1, \qquad N_i(y) = \prod_{j=0}^{i-1} (y-y_j), \quad i=1,\dots,n, $$
is much better suited.
The computation of the coefficients is done using the scheme of "divided differences"
$$ c_{i,0} = x_i, \qquad i=0,\dots,n, $$
$$ c_{i,j+1} = \frac{c_{i+1,j}-c_{i,j}}{y_{i+j+1}-y_i}, \qquad j=0,\dots,n-1, \quad i=0,\dots,n-j-1, $$
with the final form
$$ p(y) = \sum_{i=0}^{n} c_{0,i}\, N_i(y) . $$
If you compare this with ''ordinary'' polynomial interpolation, you will see that our work simply consisted in interchanging the roles of $x$ and $y$.
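The scheme translates directly into code. Below is a compact sketch (the function name, the NumPy usage and the in-place storage of one column of the table are my choices, not prescribed by the text); after the loops, entry $j$ of the array holds $c_{0,j}$.

```python
import numpy as np

def divided_differences(y, x):
    """Coefficients c_{0,0}, ..., c_{0,n} of the Newton form built on the
    abscissas y_i, so that p(y_i) = x_i (inverse interpolation)."""
    y = np.asarray(y, dtype=float)
    c = np.array(x, dtype=float)          # c_{i,0} = x_i
    n = len(y) - 1
    for j in range(1, n + 1):             # order of the divided difference
        for i in range(n, j - 1, -1):     # backwards, so lower orders survive
            c[i] = (c[i] - c[i - 1]) / (y[i] - y[i - j])
    return c                              # c[j] now equals c_{0,j}

# quick check with y_i = 0, 1, 4 and x_i = 0, 1, 2 (i.e. x = sqrt(y) at the nodes)
print(divided_differences([0.0, 1.0, 4.0], [0.0, 1.0, 2.0]))   # [0, 1, -1/6]
```

The inner loop runs backwards so that each entry is overwritten only after it has been used, which keeps the storage at a single array of length $n+1$.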
- The computational effort for the coefficients is $O(n^2)$, and a single polynomial value costs only $O(n)$ operations, since the common factors of the $N_i$ allow a Horner-like nested evaluation (see the sketch below).
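A sketch of such a nested evaluation (names and test data are my own): the common factors $(t-y_j)$ of the $N_i$ are pulled out so that each additional degree costs one multiplication and one addition. The example coefficients are the divided differences computed above for $y_i = 0, 1, 4$, $x_i = 0, 1, 2$.

```python
def newton_eval(c, y_nodes, t):
    """Evaluate p(t) = sum_j c[j] * N_j(t) in O(n) operations by
    factoring out the common terms (t - y_j) of the N_j."""
    p = c[-1]
    for j in range(len(c) - 2, -1, -1):
        p = p * (t - y_nodes[j]) + c[j]
    return p

# divided differences for the data y = [0, 1, 4], x = [0, 1, 2] (see above)
c = [0.0, 1.0, -1.0 / 6.0]
y_nodes = [0.0, 1.0, 4.0]

print(newton_eval(c, y_nodes, 4.0))   # reproduces x_2 = 2 at the node y_2 = 4
print(newton_eval(c, y_nodes, 1.0))   # reproduces x_1 = 1 at the node y_1 = 1
```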
- If $x_i = f^{-1}(y_i)$ and $f$ is $n+1$ times continuously differentiable with $f' \ne 0$ on the interval of interest, then so is $f^{-1}$, and the interpolation error is
$$ f^{-1}(y) - p(y) = \frac{(f^{-1})^{(n+1)}(\tilde y)}{(n+1)!}\,(y-y_0)\cdots(y-y_n), $$
where $\tilde y$ is an unknown value between $y$ and the $y_i$, and hence
$$ |f^{-1}(y) - p(y)| \le \frac{M_{n+1}}{(n+1)!}\,|y_n-y_0|^{n+1} $$
for $y$ in the interval covered by the $y_i$ and an arbitrary distribution of these values, where $M_{n+1}$ is an upper bound for the absolute value of the $(n+1)$-st derivative of $f^{-1}$ on that interval.
The derivatives of the inverse function can easily be obtained from
$$ \frac{d}{dy} f^{-1}(y) = \left.\frac{1}{f'(x)}\right|_{x=f^{-1}(y)} $$
using the chain rule, giving for example
$$ \frac{d^2}{dy^2} f^{-1}(y) = -\left.\frac{f''(x)}{(f'(x))^3}\right|_{x=f^{-1}(y)} $$
and
$$ \frac{d^3}{dy^3} f^{-1}(y) = \left.\frac{3 f''(x)^2 - f'(x) f'''(x)}{(f'(x))^5}\right|_{x=f^{-1}(y)} . $$
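These formulas are easy to check on a case where everything is known in closed form. A small sketch, assuming $f(x) = e^x$, so that $f^{-1}(y) = \ln y$ with exact derivatives $1/y$, $-1/y^2$ and $2/y^3$ (the helper names are mine):

```python
import math

# f(x) = exp(x): all derivatives of f coincide with exp itself,
# and f^{-1}(y) = log(y) has derivatives 1/y, -1/y**2, 2/y**3.
fp = fpp = fppp = math.exp

def d1_inv(y):                      # 1 / f'(x) at x = f^{-1}(y)
    x = math.log(y)
    return 1.0 / fp(x)

def d2_inv(y):                      # -f''(x) / f'(x)**3 at x = f^{-1}(y)
    x = math.log(y)
    return -fpp(x) / fp(x) ** 3

def d3_inv(y):                      # (3 f''(x)**2 - f'(x) f'''(x)) / f'(x)**5
    x = math.log(y)
    return (3.0 * fpp(x) ** 2 - fp(x) * fppp(x)) / fp(x) ** 5

y = 2.5
print(d1_inv(y), 1.0 / y)           # agree up to rounding: 0.4
print(d2_inv(y), -1.0 / y ** 2)     # agree up to rounding: -0.16
print(d3_inv(y), 2.0 / y ** 3)      # agree up to rounding: 0.128
```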
- In this situation one does not have the chance to predetermine a good distribution of the (original) $y$-values, which are the new abscissas, and this may cause trouble if the span of the interval is not small. Also, ''harmless'' looking functions like the sine, the inverse tangent or the hyperbolic tangent have inverses with singularities ($\arcsin$ and $\operatorname{artanh}$ have unbounded derivatives at $\pm 1$, and $\tan$ has poles), and hence they are not well suited for polynomial approximation. This should warn you about the predetermined test cases; see the sketch below.
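A small demonstration of this effect, assuming $f(x) = \sin x$ sampled at equidistant points of $[0, \pi/2]$ (node count and sample points are my choices): since $\arcsin$ has an unbounded derivative at $y = 1$, one should observe that the interpolation error near $y = 1$ is far larger than in the interior of the interval.

```python
import numpy as np

# Inverse interpolation of y = sin(x) on [0, pi/2].  The inverse, arcsin,
# has an unbounded derivative at y = 1, so the polynomial p with
# p(sin(x_i)) = x_i approximates arcsin poorly near the right end.
n = 8
x = np.linspace(0.0, np.pi / 2.0, n + 1)
y = np.sin(x)                                  # strictly monotone abscissas

c = x.copy()                                   # divided differences (as above)
for j in range(1, n + 1):
    for i in range(n, j - 1, -1):
        c[i] = (c[i] - c[i - 1]) / (y[i] - y[i - j])

def p(t):                                      # nested evaluation of the Newton form
    v = c[-1]
    for j in range(n - 1, -1, -1):
        v = v * (t - y[j]) + c[j]
    return v

t = np.linspace(0.0, 1.0, 2001)
err = np.abs(p(t) - np.arcsin(t))              # p broadcasts over the array t
print(err[t <= 0.9].max(), err[t >= 0.98].max())   # interior error vs. error near y = 1
```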
- Inverse interpolation has important applications in zero finding (in one dimension) and in event detection during the numerical solution of ordinary differential equations, for example the prediction of switching points.
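As an illustration of the zero-finding application, here is a sketch assuming the test equation $\cos x = x$, i.e. $f(x) = \cos x - x$ (equation, interval and node count are my choices): sample $f$ where it is strictly monotone, build $p$ with $p(f(x_i)) = x_i$ via the divided-difference scheme, and take $p(0)$ as the approximation of the root.

```python
import numpy as np

# Zero finding by inverse interpolation for f(x) = cos(x) - x.
f = lambda t: np.cos(t) - t

x = np.linspace(0.0, 1.0, 6)     # f is strictly decreasing on [0, 1]
y = f(x)                         # strictly monotone (decreasing) abscissas

n = len(x) - 1
c = x.copy()                     # divided differences on the abscissas y_i
for j in range(1, n + 1):
    for i in range(n, j - 1, -1):
        c[i] = (c[i] - c[i - 1]) / (y[i] - y[i - j])

root = c[n]                      # nested evaluation of p at y = 0
for j in range(n - 1, -1, -1):
    root = root * (0.0 - y[j]) + c[j]

print(root)                      # compare with the reference root 0.7390851332151607
```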