Adaptive quadrature using Simpson's rule

- The numerical approximation of definite integrals can be based on integrating interpolation
  polynomials exactly. One replaces the integrand f by an interpolation polynomial,
  either on the whole interval [a,b] or piecewise on small subintervals of it.
  These polynomials can be integrated analytically, and the value obtained (or the sum of the
  piecewise values) replaces the true integral. The error of this process is therefore simply
  the integral of the interpolation remainder.
- Here we use Simpson's rule in its composite form, not on an equidistant grid but
  rather on a locally variable one. Simpson's rule is obtained by integrating the second-degree
  polynomial that interpolates f at three symmetrically placed points xl, xl+h, xr = xl+2*h.
  This yields the value
     I1 = (1/3)*h*(f(xl)+4*f(xl+h)+f(xr)) ,   xr = xl+2*h .
  The meaning of the abbreviation I1 becomes clear below.
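This basic panel rule can be sketched in a few lines of Python (the applet itself takes FORTRAN-style input; the function name simpson_panel is our own choice for illustration):

```python
def simpson_panel(f, xl, h):
    """Simpson's rule on one panel [xl, xr], xr = xl + 2h:
    I1 = (1/3)*h*(f(xl) + 4*f(xl+h) + f(xr))."""
    return (h / 3.0) * (f(xl) + 4.0 * f(xl + h) + f(xl + 2.0 * h))

# Integral of x^2 over [0,2] is 8/3; one panel reproduces it (up to rounding),
# since the quadratic is its own interpolation polynomial.
print(simpson_panel(lambda x: x * x, 0.0, 1.0))
```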
- One would at first guess that this is exact only for polynomials of degree two, but it
  turns out to be exact for polynomials of degree three as well. This is due to the
  symmetry of the formula. The length of the subintervals [xl,xr] is chosen adaptively.
  The total error of this process can be bounded from above by
     (1/90) * max{ |f^(4)(x)| : x in [a,b] } * sum_i h_i^5 .
  This bound, however, can be quite crude.
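The extra degree of exactness is easy to verify numerically; this sketch (our own, not part of the applet) checks a cubic and a quartic on [0,2]:

```python
def simpson_panel(f, xl, h):
    # Simpson's rule on the panel [xl, xl+2h]
    return (h / 3.0) * (f(xl) + 4.0 * f(xl + h) + f(xl + 2.0 * h))

# Degree 3: integral of x^3 over [0,2] is 4; Simpson reproduces it exactly
# (up to rounding), thanks to the symmetry of the formula.
cubic = simpson_panel(lambda x: x ** 3, 0.0, 1.0)

# Degree 4: integral of x^4 over [0,2] is 32/5 = 6.4, but Simpson gives
# (1/3)*(0 + 4 + 16) = 20/3, so exactness is lost at degree four.
quartic = simpson_panel(lambda x: x ** 4, 0.0, 1.0)
print(cubic, quartic)
```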
- The method works, however, even if the integrand is less differentiable: the order
  of convergence is at least O(h_max^k) if f is k times differentiable, k = 1, 2, 3,
  and even if f is only continuous, convergence is guaranteed for h_max -> 0, since
  the formula is a special form of a Riemann sum.
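A small experiment (our own sketch, assuming an equidistant composite rule for simplicity) illustrates this: f(x) = sqrt(x) has an unbounded derivative at 0, so the O(h^4) theory does not apply, yet the error still shrinks as the grid is refined:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n equal panels (grid of 2n+1 points)."""
    h = (b - a) / (2 * n)
    s = f(a) + f(b)
    for i in range(1, 2 * n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

# Exact value of the integral of sqrt(x) over [0,1] is 2/3.
exact = 2.0 / 3.0
errs = [abs(composite_simpson(math.sqrt, 0.0, 1.0, n) - exact)
        for n in (4, 8, 16, 32)]
print(errs)  # errors decrease monotonically despite the singular derivative
```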
- Here we determine the length of the subintervals by estimating the quadrature error
  locally: a more precise analysis of the remainder of Simpson's rule yields the error
  expression
     E1 = (1/180)*h^4*(f^(3)(xr)-f^(3)(xl)) + O(h^6)
  provided f is five times continuously differentiable.
  Hence, if we also use a second estimate for the integral, obtained from
     I2 = (1/6)*h*(f(xl)+4*f(xl+h/2)+2*f(xl+h)+4*f(xr-h/2)+f(xr))
  (i.e. applying Simpson's rule twice with the stepsize halved), then its error is
     E2 = (1/180)*(h^4/16)*(f^(3)(xr)-f^(3)(xl)) + O(h^6) .
  Hence, up to an error O(h^6), the error E1 can be obtained from
     Itrue - I1 = E1
     Itrue - I2 = E2 ,
  so that
     I2 - I1 = E1 - E2 = (15/16)*E1
  and hence
     E1 = (16/15)*(I2 - I1) = errest(h) .
  But errest(h) is computable!
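The following sketch (our own; the helper names are illustrative) computes I1, I2 and errest(h) on one panel and checks the estimate against the true error for f = exp:

```python
import math

def simpson_panel(f, xl, h):
    return (h / 3.0) * (f(xl) + 4.0 * f(xl + h) + f(xl + 2.0 * h))

def simpson_with_errest(f, xl, h):
    """Return I2 and errest(h) = (16/15)*(I2 - I1) for the panel [xl, xl+2h]."""
    I1 = simpson_panel(f, xl, h)
    I2 = simpson_panel(f, xl, h / 2.0) + simpson_panel(f, xl + h, h / 2.0)
    return I2, (16.0 / 15.0) * (I2 - I1)

# Check on f = exp over [0, 0.5] (so h = 0.25): errest reproduces the true
# error Itrue - I1 up to the neglected O(h^6) terms.
exact = math.exp(0.5) - 1.0
I2, est = simpson_with_errest(math.exp, 0.0, 0.25)
true_err_I1 = exact - simpson_panel(math.exp, 0.0, 0.25)
print(true_err_I1, est)
```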
- Given a desired bound eps for the total error of the integral, we require
     |errest(h)| <= eps*(xr-xl)/(b-a)
  as precision for the subinterval.
  Now let c*hcur be the largest stepsize compatible with this, where errest
  was obtained with the stepsize hcur. Inserting this into the expression for E1
  yields
     c = ( eps*hcur*15 / (16*(b-a)*|I2 - I1|) )^(1/4) .
  If it turns out that c >= 1, then we accept I2 for this subinterval and proceed
  by replacing xl with xr and hcur with c*hcur.
  Otherwise the current step is rejected and repeated with a reduced stepsize; in this case
  we use 0.9*c*hcur.
- This adaptive stepsize control is safeguarded by two "safety" parameters hmin and
  hmax, which are never exceeded.
- Summing up all the values errest, we obtain an estimate of the total error.
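Putting the pieces together, the whole control loop can be sketched as follows. This is our own illustration, not the applet's actual code: the function name adaptive_simpson, the growth cap of 2, and the exact form of the per-panel tolerance are assumptions.

```python
import math

def simpson_panel(f, xl, h):
    return (h / 3.0) * (f(xl) + 4.0 * f(xl + h) + f(xl + 2.0 * h))

def adaptive_simpson(f, a, b, eps):
    """Adaptive Simpson sketch: accept/reject panels via errest(h).
    Returns (integral, accumulated error estimate)."""
    hmax = 0.1 * (b - a)          # internal bound hmax <= (b-a)/10
    hmin = 1.0e-6 * (b - a)       # minimal stepsize, as in the text
    xl = a
    h = hmax / 2.0                # h = half the current panel length
    total, toterr = 0.0, 0.0
    while b - xl > 1e-14 * (b - a):
        h = min(h, (b - xl) / 2.0, hmax / 2.0)
        I1 = simpson_panel(f, xl, h)
        I2 = simpson_panel(f, xl, h / 2.0) + simpson_panel(f, xl + h, h / 2.0)
        est = (16.0 / 15.0) * (I2 - I1)      # errest for this panel
        tol = eps * (2.0 * h) / (b - a)      # this panel's share of eps
        if abs(est) <= tol or 2.0 * h <= hmin:
            total += I2                      # accept the finer value I2
            toterr += est
            xl += 2.0 * h
            c = (tol / abs(est)) ** 0.25 if est != 0.0 else 2.0
            h *= min(c, 2.0)                 # grow the step, capped
        else:
            c = (tol / abs(est)) ** 0.25
            h *= 0.9 * c                     # reject: retry with 0.9*c*h
    return total, toterr
```

For f = exp on [0,1] with eps = 1.0e-6 this returns a value within about 1e-8 of e - 1, using roughly ten panels.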
- One must be aware that this is not a strict error bound: we neglected higher-order
  terms in our arguments. Clearly this can fail.


Input

- You first specify the integrand. There are two predefined test cases, but you also have
  the option to define f yourself, using a piece of code written according to
  FORTRAN rules.
- You also specify the interval and have the choice to specify eps.
  Attention: eps is an absolute error; if f is quite large, it might never be
  reachable. We provide a minimal stepsize 1.0e-6*(b-a) (which means you may use about
  one million function values), but we terminate prematurely if it is reached.
- Because the higher-order terms are neglected, we also require a maximal
  stepsize hmax to be specified. You might play a bit with hmax,
  which however is internally bounded from above by (b-a)/10.


Output

- You get a short report of your input and the result obtained.
- A plot shows the integrand (shifted above the x-axis: you see f - min{f}) and, below the
  x-axis, a grid showing the subintervals actually used in the process.
  This grid can be shorter than [a,b] in case of premature termination, in which case
  the results refer to the indicated subinterval only.


Questions?!

- In which cases will you run into trouble?
- What can be said about the effort, measured in function evaluations?
- How, at the least, should hmax be restricted?
- What effect does the choice of hmax have on the total effort?
- What can be observed if you use discontinuous or nonsmooth functions (for example sign(...))?