- In this chapter we present the basic methods of quadrature
(numerical computation of definite integrals) in one and two dimensions.
There is a common set of prepared test functions of increasing smoothness,
which allows you to study the influence of smoothness on precision and required
work. You might also try a function of your own choice.
- All the methods implemented here are adaptive composite rules: the grid of
function values is refined, either locally or globally, until an error estimate
signals that the desired precision has been reached. This error estimate is
essentially always based on comparing two methods of different precision on the
same (sub)interval. Such estimators rest on assumptions about the smoothness
of the integrand and hence may fail.
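The comparison principle can be sketched in a few lines of Python (a toy illustration, not the code used here): the trapezoidal and Simpson rules are evaluated on the same interval, and their difference serves as an error estimate for the less precise rule.

```python
import math

def trapezoid(f, a, b):
    # Trapezoidal rule: exact for polynomials of degree 1.
    return 0.5 * (b - a) * (f(a) + f(b))

def simpson(f, a, b):
    # Simpson's rule: exact for polynomials of degree 3.
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def estimate(f, a, b):
    # The difference of two rules of different precision on the same
    # (sub)interval estimates the error of the less precise one.
    t, s = trapezoid(f, a, b), simpson(f, a, b)
    return s, abs(s - t)

value, err_est = estimate(math.exp, 0.0, 1.0)   # integral of e^x over [0, 1]
true_err = abs(value - (math.e - 1.0))
```

Here the estimate is conservative: it measures the (larger) trapezoidal error, so it bounds the true error of the returned Simpson value.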
- Adaptive quadrature based on Simpson's rule is a scheme often used
in practice. Here the grid is refined locally, but within each subinterval
the nodes (evaluation points) are equally spaced. A highly
oscillatory integrand whose maxima and minima are regularly distributed can
easily fool such a scheme.
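A minimal Python sketch of such a scheme (function names and tolerance handling are illustrative, not the implementation used here): the interval is split wherever comparing the coarse and the refined Simpson value indicates too large an error.

```python
import math

def adaptive_simpson(f, a, b, tol):
    def S(a, b):
        # One Simpson step on [a, b] with locally equidistant nodes.
        m = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

    def refine(a, b, whole, tol):
        m = 0.5 * (a + b)
        left, right = S(a, m), S(m, b)
        # Coarse vs. refined value yields the classical estimate
        # |S2 - S1| / 15 (from the O(h^4) order of Simpson's rule).
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        # Otherwise split locally and recurse with halved tolerance.
        return refine(a, m, left, 0.5 * tol) + refine(m, b, right, 0.5 * tol)

    return refine(a, b, S(a, b), tol)

val = adaptive_simpson(math.sqrt, 0.0, 1.0, 1e-8)   # exact value: 2/3
```

The square-root example shows the local refinement at work: the grid is subdivided heavily near the singularity at 0 and hardly at all elsewhere.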
- Romberg's method (essentially Richardson extrapolation to ''stepsize zero'' applied to
the composite trapezoidal rule) refines the grid globally. As a consequence,
the number of function values used grows drastically if a function
of low smoothness is to be integrated to high precision. On the other hand,
this method was the subject of deep mathematical analysis in the middle
of the last century, yielding much insight, and it triggered the use
of such extrapolation methods in several other applications, especially the
integration of ODEs.
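The Romberg tableau can be sketched as follows (an illustrative Python version, assuming the standard halving of the stepsize at every level):

```python
import math

def romberg(f, a, b, levels):
    n, h = 1, b - a
    t = [0.5 * h * (f(a) + f(b))]              # trapezoidal value T(h)
    for k in range(1, levels):
        # Global refinement: halve the stepsize, reuse all old nodes.
        h *= 0.5
        n *= 2
        mid = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        row = [0.5 * t[0] + h * mid]           # T(h/2) from T(h) plus new nodes
        for j in range(1, k + 1):
            # Richardson extrapolation eliminates the h^(2j) error term.
            row.append(row[j - 1] + (row[j - 1] - t[j - 1]) / (4 ** j - 1))
        t = row
    return t[-1]

val = romberg(math.exp, 0.0, 1.0, 6)           # integral of e^x over [0, 1]
```

Note that level k needs 2^(k-1) new function values, which is exactly the drastic global growth mentioned above.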
- The method of choice in ''real life'' nowadays is the use of Gauss-Kronrod
quadrature formulas (or similar ones) combined with local grid refinement. A Gauss-Kronrod
pair consists of two formulas of high order (i.e. exact for polynomials of high degree),
the second obtained by extending the node set of the first.
These schemes, which use non-equidistant grids, are much more robust than those
based on equidistant grids: they converge for every continuous function and
are especially efficient for very smooth functions.
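The Kronrod node tables are too long to reproduce here, so the following Python sketch illustrates only the pair idea with two plain Gauss-Legendre rules (a genuine Gauss-Kronrod pair would reuse the seven nodes of the first rule inside the second instead of evaluating from scratch):

```python
import numpy as np

def gauss(f, a, b, n):
    # Gauss-Legendre rule with n interior, non-equidistant nodes,
    # exact for polynomials of degree 2n - 1.
    x, w = np.polynomial.legendre.leggauss(n)
    y = 0.5 * (b - a) * x + 0.5 * (a + b)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(y))

def gauss_pair(f, a, b):
    # Two formulas of different precision on the same interval; their
    # difference serves as the error estimate driving local refinement.
    low, high = gauss(f, a, b, 7), gauss(f, a, b, 15)
    return high, abs(high - low)

val, err = gauss_pair(np.exp, 0.0, 1.0)        # integral of e^x over [0, 1]
```

For this very smooth integrand both the result and the error estimate are accurate to near machine precision with only a handful of nodes.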
- Integrals over infinite intervals can be treated either by substitution techniques
that make the integrand decay fast for large arguments (so that everything outside
a moderate interval around zero may finally be neglected), or by a transformation
onto a fixed finite interval (here [0,1[) followed by adaptive quadrature using interior nodes
only. Here we present the second way, using the Gauss-Kronrod approach.
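A sketch of such an interval transformation in Python (the substitution and the test integrand are chosen for simplicity of illustration): x = t/(1-t) maps [0,1[ onto [0,oo[, and since Gauss-Legendre nodes are strictly interior, the critical endpoint t = 1 is never evaluated.

```python
import numpy as np

def integrate_halfline(g, n=20):
    # Substitute x = t / (1 - t), dx = dt / (1 - t)^2, mapping
    # [0, 1[ onto [0, oo[.  The quadrature nodes are interior,
    # so the transformed endpoint t = 1 is never touched.
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)                # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    x = t / (1.0 - t)
    return np.sum(w * g(x) / (1.0 - t) ** 2)

# Example: the integral of 1 / (1 + x^2) over [0, oo[ equals pi / 2.
val = integrate_halfline(lambda x: 1.0 / (1.0 + x * x))
```

For this integrand the transformed function is analytic on [0,1], so 20 nodes already give close to machine precision.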
- All the methods mentioned so far run into severe trouble if applied to
highly oscillatory integrands as they occur e.g. in computing Fourier
transforms. For such cases we provide Filon's method in its simplest form,
using quadratic interpolation on subpanels.
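The following Python sketch shows the idea in its simplest form (an illustration, not the implementation used here): on each subpanel, f is replaced by its quadratic interpolant, which is then integrated against cos(wx) exactly via the closed-form moments of x^k cos(wx); Filon's classical formulas package the same computation into three w-dependent weights.

```python
import numpy as np

def cos_moments(a, b, w):
    # Antiderivatives of x^k * cos(w x) for k = 0, 1, 2.
    def F(x):
        s, c = np.sin(w * x), np.cos(w * x)
        return np.array([s / w,
                         x * s / w + c / w ** 2,
                         x * x * s / w + 2 * x * c / w ** 2 - 2 * s / w ** 3])
    return F(b) - F(a)

def filon_cos(f, a, b, w, panels):
    # Replace f by its quadratic interpolant on each subpanel and
    # integrate interpolant * cos(w x) exactly via the moments above.
    edges = np.linspace(a, b, panels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.array([lo, 0.5 * (lo + hi), hi])
        c2, c1, c0 = np.polyfit(xs, f(xs), 2)   # p(x) = c2 x^2 + c1 x + c0
        m0, m1, m2 = cos_moments(lo, hi, w)
        total += c0 * m0 + c1 * m1 + c2 * m2
    return total

# f(x) = x is reproduced exactly by the interpolant, so even 4 panels
# recover the closed-form value despite the rapid oscillation (w = 50).
exact = np.sin(50.0) / 50.0 + (np.cos(50.0) - 1.0) / 2500.0
val = filon_cos(lambda x: x, 0.0, 1.0, 50.0, 4)
```

A standard rule with the same 9 nodes would be useless here, since it samples the oscillation far too coarsely; Filon's approach only needs the smooth factor f to be well resolved.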
- Conversely, integrals over finite intervals with boundary singularities
can be treated by a transformation to the whole real axis followed
by the trapezoidal rule with equidistant nodes, provided the transformed
integrand decays sufficiently fast for large arguments. Here we present the
double exponential transformation of Mori and Takahasi. You will find it
exceptionally efficient.
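A bare-bones Python sketch of this transformation on [-1,1] (the stepsize and truncation point are illustrative; a real implementation chooses them adaptively): x = tanh((pi/2) sinh t) maps the interval onto the whole axis, the transformed integrand decays doubly exponentially, and the equidistant trapezoidal rule then applies.

```python
import math

def tanh_sinh(f, h=0.1, tmax=3.5):
    # Double exponential substitution x = tanh((pi/2) sinh t); the
    # trapezoidal sum over the equidistant grid t = k*h is truncated
    # where the weights have decayed below rounding level.
    n = int(tmax / h)
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        weight = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += f(math.tanh(u)) * weight
    return h * total

val = tanh_sinh(math.exp)          # integral of e^x over [-1, 1]
```

The nodes cluster doubly exponentially towards the endpoints -1 and 1, which is exactly what tames boundary singularities; the smooth test integrand is used here only to keep the demonstration simple.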
- In two dimensions the computation of definite integrals is called ''cubature''.
This task is much more involved, since now the shape of the region plays an
important role in the construction of rules: in principle, every region needs a rule of its own.
We present here an obvious idea, namely
the restriction to so called ''normal regions with respect to an axis'', which can be
written for example as B = { (x,y) : a ≤ x ≤ b , ψ(x) ≤ y ≤ φ(x) }.
Here the integral can be expressed as a one-dimensional integral over one axis
whose integrand is itself an integral over the other one (and this idea could be repeated for higher
dimensions). Most regions of practical importance can be cut into such special
regions, which would seemingly solve the problem. But here the so called ''curse of
dimension'' comes into play: the number of function evaluations required grows
exponentially with the dimension. In two dimensions, however, this may still be a
reasonable choice, and we show it here for Simpson's rule and the Gauss-Kronrod
formulas.
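The iterated approach for a normal region can be sketched in Python with Simpson's rule (an illustration, assuming smooth boundary functions psi and phi):

```python
def simpson(g, a, b, n=16):
    # Composite Simpson's rule with an even number n of subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

def cubature_normal_region(f, a, b, psi, phi, n=16):
    # Normal region B = {(x, y) : a <= x <= b, psi(x) <= y <= phi(x)}:
    # the outer integral runs over x; its integrand is itself an
    # integral over y with x-dependent limits.
    def inner(x):
        return simpson(lambda y: f(x, y), psi(x), phi(x), n)
    return simpson(inner, a, b, n)

# Example: integral of x * y over the triangle 0 <= y <= x <= 1 is 1/8.
val = cubature_normal_region(lambda x, y: x * y, 0.0, 1.0,
                             lambda x: 0.0, lambda x: x)
```

Note the cost: n outer nodes times n inner nodes already means on the order of n^2 evaluations in two dimensions, the same growth that becomes prohibitive in higher dimensions.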