Inverse interpolation by polynomials


 
  • Interpolation means the replacement of a function, given only by a finite set of data, by a "simple" function which reproduces these data and is given by a formula that can be evaluated everywhere in the range of interest, with the aim of evaluation or other mathematical manipulation (e.g. integration, differentiation). Here we treat the most common case: the "simple" function is a polynomial of given maximum degree and the independent variable is one-dimensional.
  • Here we have data x_i, y_i, i = 0,...,n (with the meaning y_i = f(x_i), where f itself is not necessarily known). We want to approximate the inverse function of f, i.e.
     f^{-1},
    and use for this a polynomial p of given maximum degree n such that
     p(y_i) = x_i,  i = 0,...,n .
    The unique solvability of this problem requires the values y_i, i = 0,...,n, to be pairwise distinct, which is ensured here by requiring them to be strictly monotone.
  • This problem can be solved in a variety of ways, depending on the choice of basis for the linear space of polynomials of maximum degree n. The most obvious choice, the monomials, is discouraged because of the inherently bad conditioning of the resulting Vandermonde system. The Lagrange basis
     L_i(y) = ∏_{j=0,...,n, j≠i} (y - y_j)/(y_i - y_j)
    is often used in theoretical work, but for numerical representation the Newton basis
     N_i(y) = ∏_{j=0,...,i-1} (y - y_j),  i = 1,...,n,
     N_0(y) = 1
    is much better suited. The coefficients are computed with the scheme of "divided differences"
     c_{i,0} = x_i,  i = 0,...,n,
     c_{i,j+1} = (c_{i+1,j} - c_{i,j})/(y_{i+j+1} - y_i),  j = 0,...,n-1,  i = 0,...,n-j-1,
    which gives the final form
     p(y) = ∑_{i=0}^{n} c_{0,i} N_i(y) .
    If you compare this with "ordinary" polynomial interpolation, you will see that the only change is that the roles of x and y have been interchanged (a short code sketch of this scheme follows after this list).
  • The computational effort is O(n^2) for the coefficients and O(n) for a single polynomial value; the evaluation can be done especially efficiently by exploiting the common factors of the N_i (generalized Horner scheme).
  • If x_i = f^{-1}(y_i) and f is n+1 times continuously differentiable with nonvanishing derivative, then so is f^{-1}, and the interpolation error is
     (f^{-1})^{(n+1)}(ỹ)/(n+1)! * (y - y_0)*...*(y - y_n),
    where ỹ is an unknown value between y and the y_i, and hence the error is bounded by
     M_{n+1}/(n+1)! * |y_n - y_0|^{n+1}
    for an arbitrary distribution of the values y_i, where M_{n+1} is an upper bound for the absolute value of the (n+1)-th derivative of f^{-1} on the interval covered by the y_i. The derivatives of the inverse function are easily obtained from
     (d/dy) f^{-1}(y) = 1/f'(x) |_{x = f^{-1}(y)}
    using the chain rule, giving for example
     (d/dy)^2 f^{-1}(y) = - { f''(x)/(f'(x))^3 } |_{x = f^{-1}(y)}
    and
     (d/dy)^3 f^{-1}(y) = { (3 f''(x)^2 - f'(x) f'''(x))/(f'(x))^5 } |_{x = f^{-1}(y)} .
    A worked example of this bound is given after this list.
  • In this situation one usually cannot predetermine a good distribution of the (original) y-values (which are the new abscissas), and this may cause trouble if the span of the interval is not small. Moreover, "harmless"-looking functions like the sine, the arctangent or the hyperbolic tangent exhibit singularities in their inverses and hence are not well suited for polynomial approximation. Keep this in mind when using the predefined test cases.
  • Inverse interpolation has important applications in zero finding (in 1D) and in event detection when solving ordinary differential equations, for example the prediction of switching points; a small sketch follows after this list.
 
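The following is a minimal Python sketch (not part of the original applet) of the scheme described above: the divided-difference table for the conditions p(y_i) = x_i and the O(n) evaluation of the Newton form by the generalized Horner scheme. The function names and the test function f(x) = exp(x) are chosen for illustration only.

    import math

    def inverse_newton_coeffs(x, y):
        # Divided differences for the interpolation conditions p(y_i) = x_i.
        # Column 0 of the table is c_{i,0} = x_i; the table is overwritten in
        # place so that on exit c[i] holds the top-row entry c_{0,i}.
        n = len(y) - 1
        c = [float(v) for v in x]
        for j in range(1, n + 1):            # order of the divided difference
            for i in range(n, j - 1, -1):    # bottom-up, so lower entries stay usable
                c[i] = (c[i] - c[i - 1]) / (y[i] - y[i - j])
        return c

    def newton_eval(c, y_nodes, y):
        # Generalized Horner scheme for the Newton form
        # p(y) = c_0 + (y - y_0)*(c_1 + (y - y_1)*(c_2 + ...)), O(n) operations.
        p = c[-1]
        for i in range(len(c) - 2, -1, -1):
            p = c[i] + (y - y_nodes[i]) * p
        return p

    # Example: approximate f^{-1} = ln for f(x) = exp(x), degree n = 4.
    xs = [1.0 + 0.25 * k for k in range(5)]
    ys = [math.exp(v) for v in xs]           # strictly monotone y-data
    c = inverse_newton_coeffs(xs, ys)
    print(newton_eval(c, ys, 3.0), math.log(3.0))   # p(3.0) vs. the exact ln 3

The double loop costs O(n^2) operations, each evaluation of p only O(n), as stated above.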
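A worked instance of the error bound: for f(x) = e^x one has f^{-1}(y) = ln(y) and, by repeated differentiation,
     (f^{-1})^{(n+1)}(y) = (-1)^n n!/y^{n+1},
so if all y_i lie in an interval [a, b] with a > 0, then M_{n+1} <= n!/a^{n+1} and the error estimate is at most
     M_{n+1}/(n+1)! * |y_n - y_0|^{n+1} <= 1/(n+1) * (|y_n - y_0|/a)^{n+1} .
This tends to zero with increasing n only if the span |y_n - y_0| is smaller than the distance a of the data from the singularity of ln at y = 0, a concrete instance of the remarks made above on interval length and on singularities of the inverse.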
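For the application to zero finding mentioned in the list above, the following illustrative Python fragment performs one step of inverse quadratic interpolation, i.e. the scheme above with n = 2 evaluated at y = 0, as it is used for instance inside Brent's root-finding method; the sample function is chosen only for illustration.

    def inverse_quadratic_step(x0, f0, x1, f1, x2, f2):
        # Newton form of the quadratic p with p(f_k) = x_k, evaluated at y = 0;
        # p(0) is the new estimate for the root of f. The f-values must be
        # pairwise distinct.
        c01 = (x1 - x0) / (f1 - f0)          # first divided differences
        c11 = (x2 - x1) / (f2 - f1)
        c02 = (c11 - c01) / (f2 - f0)        # second divided difference
        return x0 + (0.0 - f0) * (c01 + (0.0 - f1) * c02)

    def f(x):
        return x**3 - 2.0                    # sample function, root at 2**(1/3)

    print(inverse_quadratic_step(1.0, f(1.0), 1.5, f(1.5), 1.25, f(1.25)))

One such step applied to the bracketing samples 1.0, 1.5, 1.25 already gives an estimate close to 2^{1/3} ≈ 1.26.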

Input

 
  • You may either interpolate a discrete data set of your own (you then obtain a plot showing the polynomial and your data) or choose a demonstration in which predefined or self-defined functions generate the y-data.
  • In both cases you have the choice to see the table of divided differences, from which you may draw information about the growth of the derivatives of f^{-1}, since column j, j = 0,...,n, contains values of
     (f^{-1})^{(j)}(·)/j!
    at unknown intermediate points (as long as the computed values are not spoiled by roundoff).
  • In both cases you can also obtain a printed formula for the polynomial, as provided by the generalized Horner scheme.
  • There are 5 predefined cases of functions with different growth properties of their derivatives. You may also define a function of your own choice using a piece of code following FORTRAN conventions.
  • You also specify the degree n, which is restricted to 1 <= n <= 50.
  • Then you either give a list of your (x,y)-data or
  • specify the interval from which the abscissae are to be drawn.
 

Output

 
  • In the case of your own discrete data: a plot showing these data and the interpolating polynomial, and a printed formula for p in the form of the generalized Horner scheme.
  • If you have generated the data artificially, you get a plot of f^{-1} and p (if the error is very small, you will see only the green curve, p) and a second plot of the error curve f^{-1}(.) - p(.) .
 

Questions?!

 
  • For which functions does the error decrease with increasing degree, and why?
  • For which functions is the opposite observed (divergence), and why?
  • How does the precision improve as the interval length shrinks?
 
