- In this chapter we present the most popular methods of interpolation
and approximation. You might try the prepared examples in order to get
an impression of ''how it works''. We have prepared some representative functions for which you
can study the approximation properties of the methods, but you can equally well
use this chapter to solve your own (not too large) problems.
- First, there is classical polynomial interpolation in 1D,
for which we use the scheme of divided differences (Newton's scheme). Here you can test
the approximation properties using an equidistant or a Chebyshev grid.
You may also interpolate your own data and see the table of divided differences
and the polynomial formula; a small sketch of the scheme is given below.
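A minimal Python sketch of the divided-difference scheme (not the chapter's own code; the test function, the number of nodes and the Chebyshev grid are assumptions chosen for illustration):

    # Newton interpolation via divided differences on a Chebyshev grid,
    # applied to Runge's example f(x) = 1/(1 + 25x^2) on [-1, 1].
    import numpy as np

    def divided_differences(x, y):
        # Returns the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,x_{n-1}].
        c = np.array(y, dtype=float)
        for j in range(1, len(x)):
            c[j:] = (c[j:] - c[j-1:-1]) / (x[j:] - x[:-j])
        return c

    def newton_eval(c, nodes, t):
        # Horner-like evaluation of the Newton form at the points t.
        p = np.full_like(t, c[-1])
        for k in range(len(c) - 2, -1, -1):
            p = p * (t - nodes[k]) + c[k]
        return p

    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    n = 11
    x_cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev nodes
    c = divided_differences(x_cheb, f(x_cheb))
    t = np.linspace(-1.0, 1.0, 201)
    print(np.max(np.abs(newton_eval(c, x_cheb, t) - f(t))))     # maximal error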
- By simply interchanging the roles of x and y and using the same
method, we can approximate the inverse of a one-to-one function.
This is known as ''inverse interpolation'' and is useful for zero finding (in 1D) and for event
detection in ordinary differential equations (for example, locating switch points); see the sketch below.
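A sketch of inverse interpolation used for zero finding (the test function and the bracketing nodes are assumptions; the code reuses the divided-difference idea from above):

    # Interpolate x as a polynomial in y through samples of a monotone f,
    # then evaluate that polynomial at y = 0 to approximate the zero of f.
    import numpy as np

    def inverse_interpolation_root(f, xs):
        xs = np.asarray(xs, dtype=float)
        ys = f(xs)
        c = xs.copy()                         # divided differences of x over the nodes y_i
        for j in range(1, len(ys)):
            c[j:] = (c[j:] - c[j-1:-1]) / (ys[j:] - ys[:-j])
        p = c[-1]                             # evaluate the Newton form at y = 0
        for k in range(len(c) - 2, -1, -1):
            p = p * (0.0 - ys[k]) + c[k]
        return p

    f = lambda x: x**3 + x - 1.0              # monotone, zero near 0.6823
    print(inverse_interpolation_root(f, [0.5, 0.6, 0.7, 0.8]))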
- Next, there is Hermite interpolation, where the interpolating polynomial matches
not only the given function values but also the given derivative values.
You have the same options as with Newton's scheme; a sketch follows below.
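A sketch using SciPy's Krogh interpolator, which realizes Hermite interpolation by repeating nodes (the test function sin and the three nodes are assumptions):

    import numpy as np
    from scipy.interpolate import KroghInterpolator

    nodes = np.array([0.0, 1.0, 2.0])
    xi = np.repeat(nodes, 2)          # each node repeated once per supplied derivative
    yi = np.empty_like(xi)
    yi[0::2] = np.sin(nodes)          # function values
    yi[1::2] = np.cos(nodes)          # first derivative values
    p = KroghInterpolator(xi, yi)     # degree-5 Hermite interpolating polynomial
    t = np.linspace(0.0, 2.0, 5)
    print(np.abs(p(t) - np.sin(t)))   # small errors between the nodes as well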
- Interpolation or approximation by cubic splines (natural, Hermite, periodic) and exponential splines
is provided, as well as a cubic smoothing spline. The smoothing spline
tries to estimate the noise in the given data. Here we
have the same basic set of test functions, so you can compare the results with
classical polynomial interpolation.
The periodic cubic spline has a nice application in the construction of
closed, twice differentiable curves through a given set of points (with a given
order of appearance on the curve), since such a curve can be seen as a vector function
with two periodic components; this is illustrated in the sketch below.
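A sketch of the closed-curve construction with SciPy's periodic cubic spline (the four points and the simple parametrization are assumptions chosen for illustration):

    import numpy as np
    from scipy.interpolate import CubicSpline

    pts = np.array([[1.0, 0.0], [0.0, 1.2], [-1.0, 0.0], [0.0, -0.8]])
    closed = np.vstack([pts, pts[:1]])          # repeat the first point to close the curve
    t = np.arange(len(closed), dtype=float)     # parameter values 0, 1, 2, ...
    sx = CubicSpline(t, closed[:, 0], bc_type='periodic')
    sy = CubicSpline(t, closed[:, 1], bc_type='periodic')
    tt = np.linspace(0.0, t[-1], 9)
    print(np.column_stack([sx(tt), sy(tt)]))    # points on the smooth closed curve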
- In addition, there is a smoothing or interpolating spline with variable nodes
and selectable order from 1 to 5. Here you supply an upper bound for the sum of squares
of the errors as input (see the sketch below).
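The following sketch shows an analogous setting in SciPy, where the parameter s of UnivariateSpline plays the role of this upper bound (the noisy test data are generated here purely as an assumption):

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 40)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)      # noisy samples
    spl = UnivariateSpline(x, y, k=3, s=0.4)                # k: order 1..5, s: error bound
    print(len(spl.get_knots()), np.sum((spl(x) - y) ** 2))  # few knots, residual <= s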
- Trigonometric interpolation uses sin(2*k*pi*x) and cos(2*k*pi*x) as a
basis of a linear subspace of C[a,b]; interpolation of given data can be done
quite efficiently thanks to the addition theorems for these functions. This leads to the
famous FFT. You can apply the FFT here to discrete (given or generated) data, but
also use it as an approximation scheme. Using saw-tooth or step functions allows you
to study the famous Gibbs phenomenon, as in the sketch below.
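A sketch of trigonometric interpolation of a step function via the FFT, which makes the Gibbs overshoot near the jump visible (the grid size and the fine evaluation grid are assumptions):

    import numpy as np

    n = 64
    x = np.arange(n) / n                       # equidistant grid on [0, 1)
    y = np.where(x < 0.5, 1.0, -1.0)           # step function
    c = np.fft.fft(y) / n                      # coefficients of the trigonometric interpolant
    # Evaluate the interpolant on a finer grid by zero-padding the spectrum.
    m = 1024
    C = np.zeros(m, dtype=complex)
    C[:n // 2] = c[:n // 2]
    C[-n // 2:] = c[-n // 2:]
    fine = np.real(np.fft.ifft(C) * m)
    print(fine.max())                          # overshoot above 1 near the jump (Gibbs)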
- For interpolation in two dimensions we provide continuous piecewise
linear interpolation on a triangulation (with the option to compute such a triangulation
for a given scattered point set) and classical Lagrange interpolation on a
rectangle (the tensor-product method); a small sketch of the first variant follows below.
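A sketch of the triangulation-based variant with SciPy (the scattered points and the test function are assumptions):

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(1)
    pts = rng.random((50, 2))                       # scattered points in the unit square
    vals = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])
    tri = Delaunay(pts)                             # triangulation of the point set
    lin = LinearNDInterpolator(tri, vals)           # continuous piecewise linear interpolant
    print(lin(0.5, 0.5))                            # value at a point inside the convex hull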
- Finally, there is a code for computing a first derivative using function
values only, by so-called finite differences. This will show you the limitations
of this approach, but also that the loss of precision due to cancellation
can be overcome to a large extent if one is willing to spend more
work, i.e. more function values; the sketch below compares two difference orders.
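A sketch of the cancellation effect for central differences of order 2 and 4 (the test function exp and the step sizes are assumptions; the higher order allows a larger h and hence suffers less from cancellation):

    import numpy as np

    f, x = np.exp, 1.0
    exact = np.exp(1.0)
    for h in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9]:
        d2 = (f(x + h) - f(x - h)) / (2 * h)                                     # order 2
        d4 = (8 * (f(x + h) - f(x - h)) - (f(x + 2*h) - f(x - 2*h))) / (12 * h)  # order 4
        print(f"h={h:.0e}  err2={abs(d2 - exact):.1e}  err4={abs(d4 - exact):.1e}")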
- In many applications there are positivity constraints on the function values,
for example if the data have a physical meaning such as mass. A simple method to
fulfill this constraint is to take g(x) = exp(p(x)) with p(x)
interpolating the data log(y_i). We extend this approach here by allowing
a shift of the ordinates; a sketch of the basic idea is given below.
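A sketch of the basic exp/log construction (the data and the shift value are assumptions; shift = 0 corresponds to the plain logarithmic transformation):

    import numpy as np
    from numpy.polynomial import polynomial as P

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.2, 1.5, 0.1, 2.0])         # positive data, e.g. masses
    shift = 0.0                                 # shift of the ordinates
    coef = P.polyfit(x, np.log(y + shift), deg=len(x) - 1)   # p interpolates log(y_i + shift)
    g = lambda t: np.exp(P.polyval(t, coef)) - shift
    print(g(x))                                 # reproduces the data; g stays above -shift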
- In many practical cases, e.g. where the function itself exhibits singularities or bounded asymptotes,
fitting or interpolation by polynomials isn't successful, whereas quotients of polynomials
(rational functions) work very well. We provide rational interpolation with degrees up to (20,20)
and different distributions of the abscissas; a small sketch of the idea follows below.
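A sketch of the underlying idea for modest degrees, written as a linear system for the coefficients of p and q (the degrees, nodes and test function are assumptions; this is a sketch only and does not handle degenerate or ill-conditioned cases):

    # Rational interpolation r = p/q with deg p = m, deg q = k, normalized so that
    # the constant coefficient of q is 1: the interpolation conditions
    # p(x_i) - y_i * q(x_i) = 0 are then linear in the remaining coefficients.
    import numpy as np

    def rational_interp(x, y, m, k):
        x, y = np.asarray(x, float), np.asarray(y, float)   # needs m + k + 1 data points
        A = np.hstack([np.vander(x, m + 1, increasing=True),
                       -y[:, None] * np.vander(x, k + 1, increasing=True)[:, 1:]])
        coef = np.linalg.solve(A, y)
        p, q = coef[:m + 1], np.concatenate([[1.0], coef[m + 1:]])
        return lambda t: (np.polynomial.polynomial.polyval(t, p)
                          / np.polynomial.polynomial.polyval(t, q))

    f = lambda t: np.exp(t) / (1.2 - t)          # pole just outside the interval [0, 1]
    x = np.linspace(0.0, 1.0, 5)                 # degrees (2, 2) need 5 nodes
    r = rational_interp(x, f(x), 2, 2)
    t = np.linspace(0.0, 1.0, 7)
    print(np.abs(r(t) - f(t)))                   # small errors despite the nearby pole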