Adaptive quadrature using Filon's method

 


 
  • This application deals with the computation of integrals of the type
    ∫_a^b g(x) cs(ω π x) dx
    where cs stands for either sin or cos.
  • A quadrature rule for approximating a definite integral numerically is said to have "order exactly k" if it integrates every polynomial of degree <= k-1 exactly, but there is a polynomial of degree k which is not integrated exactly. If such a rule is applied to a function f which is at least k times continuously differentiable on the interval [a,b] under consideration, then the quadrature error is bounded by
    C × (b-a)^(k+1) × max_[a,b] |f^(k)(x)|
    with C a constant. Now consider the cases
    f(x) = g(x) sin(ω π x)   or   f(x) = g(x) cos(ω π x).
    Applying Leibniz' rule for differentiating a product one gets
    f^(k)(x) = Σ_(j=0)^k (k over j) g^(j)(x) (ω π)^(k-j) cs^((k-j))(ω π x)
    with cs either sin or cos, and it becomes clear that the quadrature error can become enormously large unless (b-a)ωπ << 1. On the other hand, if g is a smooth function whose derivatives do not become very large, then it can be approximated quite well by polynomials on [a,b]. Hence Filon's idea was to replace g by such a polynomial p and then to integrate
    ∫_a^b p(x) cs(ω π x) dx
    in closed form (i.e. exactly).
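    As a quick numerical illustration of the problem described above (a sketch in Python, not part of the application): composite Simpson's rule with a fixed number of subintervals, applied to ∫_0^1 sin(ωπx) dx = (1-cos(ωπ))/(ωπ), loses several digits of accuracy when ω grows.

    ```python
    import math

    def simpson(f, a, b, n):
        """Composite Simpson's rule with an even number n of subintervals."""
        h = (b - a) / n
        s = f(a) + f(b)
        for i in range(1, n):
            s += (4 if i % 2 else 2) * f(a + i * h)
        return s * h / 3

    def exact(om):
        # closed form of the integral of sin(om*pi*x) over [0,1]
        return (1 - math.cos(om * math.pi)) / (om * math.pi)

    # same number of subintervals, increasing frequency omega
    errs = {}
    for om in (1.5, 10.5):
        approx = simpson(lambda x, om=om: math.sin(om * math.pi * x), 0.0, 1.0, 20)
        errs[om] = abs(approx - exact(om))
    ```

    With 20 subintervals the error for ω = 10.5 is larger than that for ω = 1.5 by several orders of magnitude, in line with the (ωπ)^k factor in the error bound.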
  • Here we use this idea in combination with adaptive quadrature on subintervals [x_i, x_i + h_i] (so-called "panels"), using quadratic interpolation on each subinterval (i.e. Simpson's rule). In order to be able to estimate the local error we repeat this on the two halves of each panel, which requires 4 new function values on each such panel. This also allows us to estimate the third derivative of g on the panel using divided differences. This third derivative enters the interpolation error, which reads
    g^(3)(ξ)/6 × (x-x_i)(x-x_i-h_i/2)(x-x_i-h_i).
    This leads to a first error estimate for this type of quadrature rule, namely
    max_[a,b] |g^(3)(x)|/(72 √3) × (b-a) × max_i {h_i^3},
    which is obtained by multiplying the interpolation remainder by cs(ω π x) and bounding the integral of this product from above. This is not only a very crude estimate, it also completely hides the dependency on ω. For sufficiently small equidistant panel size h, Knut Petras (BIT 30 (1990), 529--541) proves a bound which, for one panel, gives
    h^5/180 × max_[x,x+h] (ω π |g^(3)(x)| + |g^(4)(x)|),
    which is much better. In order to be able to change the panel size arbitrarily we use the following approach for error estimation: we assume that g^(3)(x) is almost constant on a panel (the user can help here by choosing the input hmax small enough) and integrate
    ∫_(x0)^(x0+h) (x-x0)(x-x0-h/2)(x-x0-h) cs(ωπx) dx
    exactly, obtaining the error bounds

    C cos(ω π (x0+h/2)) × 12 θ R1(θ)/(ω π)^4

    with
    R1(θ) = cos(θ) - (sin(θ)/θ)(1 - θ^2/3)
    for the sin integral and

    -2C/(ω π)^4 × sin(ω π (x0+h/2)) × R2(θ)
    for the cos integral, where
    R2(θ) = (-6 + 2θ^2) sin(θ) + 6θ cos(θ).
    Here C stands for |g^(3)(ξ)|/6, obtained by finite differencing, and θ = ω π h/2. Taylor expansion of R1 and R2 shows that these bounds decrease with increasing ω like 1/ω^2.
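    The closed form of the sin error integral can be checked numerically. The following sketch (Python, with g ≡ 1 so that the factor C plays no role, and with hypothetical values for x0, h, ω) integrates the cubic node polynomial times sin(ωπx) with a fine midpoint rule and compares against 2 cos(ωπ(x0+h/2)) R2(θ)/(ωπ)^4, which equals the stated bound because 12θ R1(θ) = 2 R2(θ).

    ```python
    import math

    def r1(t):
        return math.cos(t) - (math.sin(t) / t) * (1 - t * t / 3)

    def r2(t):
        return (-6 + 2 * t * t) * math.sin(t) + 6 * t * math.cos(t)

    def panel_error_sin(x0, h, om):
        """Closed form of the integral of (x-x0)(x-x0-h/2)(x-x0-h)*sin(om*pi*x)
        over [x0, x0+h], expressed via R2 and theta = om*pi*h/2."""
        c = om * math.pi
        th = c * h / 2
        return 2 * math.cos(c * (x0 + h / 2)) * r2(th) / c ** 4

    def midpoint(f, a, b, n):
        """Simple composite midpoint rule as a reference."""
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    x0, h, om = 0.3, 0.2, 5.0          # hypothetical panel and frequency
    th = om * math.pi * h / 2
    poly_sin = lambda x: (x - x0) * (x - x0 - h / 2) * (x - x0 - h) * math.sin(om * math.pi * x)
    num = midpoint(poly_sin, x0, x0 + h, 20000)
    ```

    The midpoint reference and the closed form agree to roughly machine precision on this panel.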
  • The interpolation polynomial is taken in Lagrange form and, multiplied by cs(ω π x), is integrated in closed form, which yields the following two formulas for the panel and its refinement:

    I1s = (h/2) (α(θ)(fcos_0 - fcos_2) + β(θ)(fsin_0 + fsin_2)/2 + γ(θ) fsin_1)

    I2s = (h/4) (α(θ/2)(fcos_0 - fcos_2) + β(θ/2)((fsin_0 + fsin_2)/2 + fsin_1) + γ(θ/2)(fsin_01 + fsin_11))

    I1c = (h/2) (α(θ)(fsin_2 - fsin_0) + β(θ)(fcos_0 + fcos_2)/2 + γ(θ) fcos_1)

    I2c = (h/4) (α(θ/2)(fsin_2 - fsin_0) + β(θ/2)((fcos_0 + fcos_2)/2 + fcos_1) + γ(θ/2)(fcos_01 + fcos_11))

    Here I1s, I2s, I1c, I2c denote the sin and cos integrals, respectively, on the panel and on the refined grid of the panel; fsin_i = g(x_i) sin(ω π x_i) and fcos_i = g(x_i) cos(ω π x_i); the indices 0, 01, 1, 11, 2 refer to the grid points x0, x0+h/4, x0+h/2, x0+3h/4, x0+h; and θ = ω π h/2 is related to the grid size.

    The weight functions α, β, γ are
    α(θ) = 1/θ + sin(2θ)/(2θ^2) - 2 sin^2(θ)/θ^3
    β(θ) = 2((1 + cos^2(θ))/θ^2 - sin(2θ)/θ^3)
    γ(θ) = 4(sin(θ)/θ^3 - cos(θ)/θ^2).

    For small θ the direct evaluation of these formulas leads to severe cancellation. Hence we use their Taylor expansions (up to the power 12) for θ <= 0.3, which guarantees full precision.
    One sees that both integrals involve the function values of g multiplied by both the sin and the cos values. This is one reason for computing both integrals simultaneously; the other is their use in computing the Fourier transform of g.
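    The one-panel formulas I1s and I1c can be sketched as follows (illustrative Python, not the applet's code; the function names filon_weights and filon_panel are ours). For simplicity the Taylor branch below is truncated well before the power 12, so it switches already at θ <= 0.1 instead of 0.3.

    ```python
    import math

    def filon_weights(th):
        """Filon weights alpha, beta, gamma for theta = omega*pi*h/2.
        For small theta the closed forms cancel severely, so a short
        Taylor branch is used (truncated earlier than the text's
        power-12 expansion, hence the smaller switch point 0.1)."""
        if abs(th) <= 0.1:
            t2 = th * th
            a = th * t2 * (2.0 / 45 - t2 * (2.0 / 315 - t2 * (2.0 / 4725)))
            b = 2.0 / 3 + t2 * (2.0 / 15 - t2 * (4.0 / 105 - t2 * (2.0 / 567)))
            g = 4.0 / 3 - t2 * (2.0 / 15 - t2 * (1.0 / 210 - t2 * (1.0 / 11340)))
        else:
            s, c = math.sin(th), math.cos(th)
            a = 1 / th + math.sin(2 * th) / (2 * th * th) - 2 * s * s / th ** 3
            b = 2 * ((1 + c * c) / th ** 2 - math.sin(2 * th) / th ** 3)
            g = 4 * (s / th ** 3 - c / th ** 2)
        return a, b, g

    def filon_panel(gfun, x0, h, om):
        """I1s and I1c on one panel [x0, x0+h]: approximations to the
        integrals of g(x)*sin(om*pi*x) and g(x)*cos(om*pi*x)."""
        c = om * math.pi
        a, b, g = filon_weights(c * h / 2)
        xs = (x0, x0 + h / 2, x0 + h)
        fs = [gfun(x) * math.sin(c * x) for x in xs]  # fsin_0, fsin_1, fsin_2
        fc = [gfun(x) * math.cos(c * x) for x in xs]  # fcos_0, fcos_1, fcos_2
        i1s = (h / 2) * (a * (fc[0] - fc[2]) + b * (fs[0] + fs[2]) / 2 + g * fs[1])
        i1c = (h / 2) * (a * (fs[2] - fs[0]) + b * (fc[0] + fc[2]) / 2 + g * fc[1])
        return i1s, i1c
    ```

    Since the rule integrates the quadratic interpolant of g exactly, it reproduces the integrals exactly (up to rounding) whenever g is a polynomial of degree at most 2.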
  • The acceptance rules we employ are the following: we require that both error bounds for a panel are below
    ε × ∫_x^(x+h) |g(t)| dt /(b-a) + (ε/(b-a))^2 × h
    where the integral is replaced by Simpson's rule on the fine grid. The differences I1s-I2s and I1c-I2c are taken as the actual errors, and their sum appears in the output. If this error criterion is satisfied we accept the step and possibly increase h, but never by more than a factor of 2; if it is violated we reject the step and decrease h, again by no more than a factor of 1/2. Since the local error depends on h^4, the increase/reduction factor involves the 4th root of the quotient of the computed error bound and its allowed upper bound, plus some safeguards. We omit further details here.
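    The step-size control just described can be sketched like this (the safety factor 0.9 and the guard against division by zero are illustrative assumptions, not the applet's exact safeguards):

    ```python
    def adapt_step(h, err, tol, hmin, hmax):
        """Accept or reject a panel and propose the next step size.
        The 4th root reflects the h^4 local error behaviour; the
        safety factor 0.9 and the clamps are illustrative values."""
        accept = err <= tol
        fac = 0.9 * (tol / max(err, 1e-300)) ** 0.25
        fac = min(2.0, max(0.5, fac))    # change h by at most a factor of 2
        return accept, min(hmax, max(hmin, h * fac))
    ```

    For example, an error 100 times below the tolerance doubles h (the proposed factor 0.9 * 100^(1/4) ≈ 2.85 is clamped to 2), while an error 100 times above it halves h.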
  • The following table shows the effort, measured in evaluations of g, needed by this special Filon technique and by a standard Gauss-Kronrod rule of order 22 (n=7) applied directly to f. One sees that increasing ω by a factor of ten also increases the effort of Gauss-Kronrod by roughly a factor of ten, whereas Filon's method even reduces the effort for very large ω, as predicted by the error bounds.
    ==================================================================
    parameters:
    eps=1.0e-7  hmax=0.1 hmin=1.0e-4*hmax
    ==================================================================
    case    function g                              interval
     1     exp(-x^2)*sin(pi*x)                      [-1,1]
     2     cos(pi*x)^2                              [0,1]
     3     1/((x-1/2)^2+1/1000)+1/((x-7/8)^2+1/100) [0,1]
    
    
    ==================================================================
      case omega   #g filon   #g gauss-kronrod n=7
                    order 3           order 22
       1      1        413              45
       1     10        713             465
       1    100       1217            3825
       1   1000         85           47145
       2      1        249              45
       2     10        441             225
       2    100        765            1905
       2   1000        433           20775 
       3      1        493             315
       3     10        829             405
       3    100       1377            2745
       3   1000       1941           29295
    =================================================================
    
  • Iserles shows (IMA Journal of Numerical Analysis 24 (2004), 365-391) that by using the Lobatto points (to which the Simpson grid belongs) as interpolation points on the panel, the error of this Filon-type quadrature is O(h^(n-1)/ω^2) if h ω π >> 1, where n is the number of Lobatto points used.
  • One must be aware of the fact that the error bounds used here are not strict: we neglected higher order terms in our arguments. Clearly this can fail.
 

Input

 
  • You first specify the integrand. There are three test cases predefined, but you also have the option to define g yourself, using a piece of code written according to FORTRAN rules.
  • You also specify the interval and have the choice to specify eps. Attention: eps is a relative error measure with respect to ∫_a^b |g(x)| dx. The minimal allowed stepsize is 1.0e-4*hmax, and we terminate prematurely if it is reached.
  • Because higher order terms are neglected, we also require a maximal stepsize hmax to be specified. You might play a bit with hmax, which however is internally bounded from above by (b-a)/10.
  • Finally you specify ω.
 

Output

 
  • You get a short protocol about your input and the obtained results.
  • A plot shows the two integrands (shifted above the x-axis: you see both versions of f - min{f}) and, below the x-axis, a grid showing the subintervals actually used in the process.
    With a large ω you will necessarily see only a block of green color above the x-axis, due to the limited screen resolution, and the blue of the integration grid may likewise appear as a solid block of color.
    The grid can be shorter than [a,b] in case of premature termination, in which case the results are for the indicated subinterval only.
 

Questions?!

 
  • In which cases will you get into trouble?
  • What can be said about the effort, measured in function evaluations?
  • How, at least, should hmax be restricted?
  • What effect does the choice of hmax have on the total effort?
  • What can be observed if you use discontinuous or nonsmooth functions (for example sign(...))?
 


 

31.08.2015