Interpolation by polynomials


 
  • By interpolation one means the process of replacing a function, given only by a finite set of data, by a "simple" function that is given as a formula evaluable everywhere in the range of interest, with the aim of evaluation or other mathematical manipulation (e.g. integration, differentiation). Here we treat the most common case: the "simple" function is a polynomial of given maximum degree, and the independent variable is one-dimensional.
  • We have data x_i, y_i, i = 0, ..., n (with the meaning y_i = f(x_i), f not necessarily known), and want to compute a polynomial p of maximum degree n such that
    p(x_i) = y_i,  i = 0, ..., n .
    The unique solvability of this problem requires only that the x_i, i = 0, ..., n, are pairwise distinct.
  • This problem can be solved in a variety of ways, depending on the choice of basis for the linear space of polynomials of maximum degree n. The most obvious choice, the monomials, is discouraged because of the inherently bad conditioning of the resulting Vandermonde system. The Lagrange basis
    L_i(x) = \prod_{j=0, j \ne i}^{n} \frac{x - x_j}{x_i - x_j}
    is often used in theoretical work, but for numerical representation the Newton basis
    N_i(x) = \prod_{j=0}^{i-1} (x - x_j),  i = 1, ..., n,
    N_0(x) = 1

    is much better suited. The computation of its coefficients is done using the scheme of "divided differences"
    c_{i,0} = y_i,  i = 0, ..., n,
    c_{i,j+1} = \frac{c_{i+1,j} - c_{i,j}}{x_{i+j+1} - x_i},  j = 0, ..., n-1,  i = 0, ..., n-j-1,

    with the final form
    p(x) = \sum_{i=0}^{n} c_{0,i} N_i(x) .
  • The computational effort is O(n^2) for the coefficients and O(n) for a single polynomial value; the evaluation is done particularly efficiently by exploiting the common factors of the N_i (generalized Horner scheme; see the code sketch at the end of this list).
  • If y_i = f(x_i) and f is (n+1) times continuously differentiable, then the interpolation error is
    f(x) - p(x) = \frac{f^{(n+1)}(\tilde{x})}{(n+1)!} (x - x_0) \cdots (x - x_n) ,
    where \tilde{x} is an unknown point between x and the x_i, and hence
    |f(x) - p(x)| \le \frac{M_{n+1}}{(n+1)!} (b-a)^{n+1}
    for an arbitrary distribution of the abscissae in [a,b], where M_{n+1} is an upper bound for the absolute value of the (n+1)-th derivative of f on [a,b]. If this derivative has no zeroes on [a,b] and m_{n+1} > 0 is a lower bound for its absolute value on [a,b], then there is at least one x in [a,b] where
    |f(x) - p(x)| \ge \frac{m_{n+1}}{(n+1)!} \cdot 2 \left(\frac{b-a}{4}\right)^{n+1} ,
    whatever (even optimal) distribution of abscissae one chooses. This shows that the method always gives good results if the interval span is small, but not if f has a (possibly complex) singularity whose distance from [a,b] is smaller than the interval length b-a, because then M_{n+1} grows (in n) faster than (n+1)!/(b-a)^{n+1}, so the upper bound above no longer tends to zero.
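  • As announced above, here is a minimal Python sketch of the divided-difference scheme and the O(n) evaluation of the Newton form; the function names and the sample data are illustrative only and not part of the demo itself.

    def divided_difference_table(x, y):
        # full table c[i][j] = f[x_i, ..., x_{i+j}]; the Newton coefficients are c[0][j]
        n = len(x) - 1
        c = [[0.0] * (n + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            c[i][0] = y[i]                      # c_{i,0} = y_i
        for j in range(n):                      # column j+1 is built from column j
            for i in range(n - j):              # i = 0, ..., n-j-1
                c[i][j + 1] = (c[i + 1][j] - c[i][j]) / (x[i + j + 1] - x[i])
        return c

    def newton_eval(x, coeffs, t):
        # evaluate p(t) = sum_i c_{0,i} N_i(t) in O(n) by the generalized Horner scheme
        p = coeffs[-1]
        for i in range(len(coeffs) - 2, -1, -1):
            p = p * (t - x[i]) + coeffs[i]
        return p

    # illustrative data only
    xs = [0.0, 1.0, 2.0, 4.0]
    ys = [1.0, 2.0, 0.0, 5.0]
    table = divided_difference_table(xs, ys)
    coeffs = [table[0][j] for j in range(len(xs))]   # c_{0,0}, ..., c_{0,n}
    print([newton_eval(xs, coeffs, t) for t in xs])  # reproduces ys up to roundoff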
 

Input

 
  • You may either interpolate a discrete data set of your own (you then get a plot showing the polynomial and your data) or choose a demonstration in which the y-data are generated from a predefined or self-defined function.
  • In both cases you may choose to see the table of divided differences, from which you can draw information about the growth of the derivatives of f, since column j, j = 0, ..., n, contains values of f^{(j)}(·)/j! at unknown intermediate points (as long as the computed values are not spoiled by roundoff).
  • Also in both cases you may request a printed formula of the polynomial in the form of the generalized Horner scheme.
  • There are 5 predefined functions with different growth properties of their derivatives. You may also define a function of your own choice using a piece of code following FORTRAN conventions; details are given on the input page.
  • You also specify the degree n, which is restricted to 1 <= n <= 50.
  • Then you either give a list of your (x,y)-data or
  • specify the interval from which the abscissae should be drawn.
  • In the latter case you also choose how to distribute the abscissae in this interval: either equidistantly (a common case in practice), i.e.
    x_i = a + i \frac{b-a}{n},  i = 0, ..., n,
    or in an optimal way (as far as this is possible independently of f), namely
    x_i = \frac{a+b}{2} + \frac{b-a}{2} \cos\left(\frac{2i+1}{2n+2} \pi\right),  i = 0, ..., n,
    the so-called Chebyshev abscissae, which are obtained by solving
    \min_{x_0, ..., x_n \in [a,b]} \; \max_{x \in [a,b]} \left| \prod_{j=0}^{n} (x - x_j) \right|
    (a comparison sketch of the two distributions is given at the end of this list).
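  • As a complement, here is a small Python sketch comparing the two node distributions; interval, degree, and grid resolution are illustrative choices (the grid maximum is only an approximation), and none of it is part of the input form itself. It computes both node sets and the corresponding values of the maximum over [a,b] of |\prod_j (x - x_j)|, the quantity minimized by the Chebyshev abscissae.

    import math

    def equidistant_nodes(a, b, n):
        # x_i = a + i*(b-a)/n, i = 0, ..., n
        return [a + i * (b - a) / n for i in range(n + 1)]

    def chebyshev_nodes(a, b, n):
        # x_i = (a+b)/2 + ((b-a)/2) * cos(((2i+1)/(2n+2)) * pi), i = 0, ..., n
        return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * i + 1) / (2 * n + 2) * math.pi)
                for i in range(n + 1)]

    def max_abs_node_polynomial(nodes, a, b, samples=20001):
        # approximate max over [a,b] of |prod_j (x - x_j)| on a fine grid
        best = 0.0
        for k in range(samples):
            x = a + k * (b - a) / (samples - 1)
            w = 1.0
            for xj in nodes:
                w *= x - xj
            best = max(best, abs(w))
        return best

    a, b, n = -1.0, 1.0, 10                        # illustrative values
    print(max_abs_node_polynomial(equidistant_nodes(a, b, n), a, b))
    print(max_abs_node_polynomial(chebyshev_nodes(a, b, n), a, b))
    print(2 * ((b - a) / 4) ** (n + 1))            # minimum, attained by the Chebyshev abscissae

    For these sample values the Chebyshev result agrees with the theoretical minimum up to the grid resolution, while the equidistant nodes give a noticeably larger maximum.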
 

Output

 
  • In the case of your own discrete data: a plot showing these data and the interpolating polynomial, together with a printed formula of p in the form of the generalized Horner scheme.
  • If the data have been generated from a function, you get a plot of f and one of the error curve f(·) - p(·).
 

Questions?!

 
  • For which functions does the error decrease with increasing degree, and why?
  • For which functions does just the opposite (divergence) occur, and why?
  • What happens if you take the Chebyshev abscissae?
  • What happens if you try a nonsmooth function?
 


 
 

18.02.2015