Solution of the initial value problem for ODEs
- Any explicit ordinary differential equation, including those containing higher-order derivatives,
  can be rewritten in the form
      y'(t) = f(t, y(t)).
  Here y and f are elements of R^n, usually with n > 1, and y'(t) denotes the componentwise
  derivative of y(t) with respect to ''time'' t. Hence we use this ''standard'' form throughout.
  For example, the second-order equation u''(t) = -u(t) takes this form with y = (u, u')^T and
  f(t, y) = (y_2, -y_1)^T. One may also remove the special role of t by introducing an additional
  component y_{n+1} and adding the equations
      y'_{n+1} = 1 ,  y_{n+1}(t_0) = 0
  to the original problem; substituting y_{n+1} for t then makes the system ''autonomous''.
- In order to obtain a uniquely solvable problem one needs n initial values for y at some
  time point a = t_0. This means we fix one element of the solution manifold of this equation.
  We may write this as
      y(a) = y_a.
  In addition to the joint continuity of f with respect to (t,y) one needs some further
  properties, for example Lipschitz continuity with respect to y, at least locally near
  (a, y(a)), in order to guarantee existence and uniqueness of the solution.
- All the methods used here restrict themselves to finding approximate values of y on a
  grid on the time axis, which may be equidistant or of variable grid size. The endpoint b
  of this grid is fixed in advance, and of course one assumes that the solution exists on
  all of [a,b]. This is not trivial, since there are equations with singularities whose
  position also depends on the initial value.
- The basic idea of these methods is twofold: either one discretizes y' at a time instance
  t_i using neighboring, already known values and inserts this into the differential
  equation, also using the value of the right-hand side at t_i, or one integrates the ode
  over some time interval [t_{i-j}, t_i] and replaces the integral of the right-hand side
  by a quadrature rule. Hence these methods are sometimes called ''integrators''.
- t_i denotes the new time point here; often j = 1 holds (this is the case for all methods
  implemented here: they are ''one-step methods''). The numerical quadrature rule, however,
  may use (and here does use) additional values of the right-hand side, which themselves
  must be computed approximately, since the solution is unknown there. A general Runge-Kutta
  method is obtained in the following steps:
  step 1: y(t_i) = y(t_{i-1}) + ∫_{t_{i-1}}^{t_i} f(t, y(t)) dt
  step 2: ∫_{t_{i-1}}^{t_i} f(t, y(t)) dt ≈ h*∑_{j=1..m} γ_j*y'(t_{i-1} + α_j*h)
  where α_j and γ_j are the nodes and the weights of a quadrature rule on [0,1] and h is the
  current step size. Now we need to approximate the values y'(t_{i-1} + α_j*h). Using
  y' = f(t,y) we therefore need to approximate
  step 3: y(t_{i-1} + α_j*h) = y(t_{i-1}) + ∫_{t_{i-1}}^{t_{i-1}+α_j*h} f(t, y(t)) dt
  and now we use the ''trick'' of approximating these new integrals with the same nodes but
  different weights:
  step 4: ∫_{t_{i-1}}^{t_{i-1}+α_j*h} f(t, y(t)) dt ≈ h*∑_{k=1..m} β_{j,k}*y'(t_{i-1} + α_k*h)
  Such a method is completely described by the array of coefficients α, β, γ, the so-called
  Butcher array (named after its inventor, who treated the general case). In general this
  results in a (non)linear system of equations for the intermediate y'-values, often called
  k_j, depending on f:
      k_j = f(t_{i-1} + α_j*h, y_{i-1} + h*∑_{l=1..m} β_{j,l}*k_l) ,  j = 1,...,m .
  If, however, β_{j,k} = 0 for k >= j, then these values can be computed one after the other,
  and we obtain the so-called explicit RK methods, of which two are presented here (see the
  first code sketch at the end of this overview). The data m, α, β, γ can be chosen according
  to different criteria, for example maximal order of consistency or good absolute stability.
- Since the use of a predefined fixed grid size is often inefficient or even unsafe, due to
  a possible blow-up of the discretization error, there is the possibility of using a
  variably sized grid whose grid size is adapted locally, based on additional computations.
  This is done by estimating the error currently introduced (the so-called ''local
  discretization error'') and trying to keep it below a user-defined threshold (see the
  second code sketch at the end of this overview).
- There are two different properties of such a method which are of importance: consistency
  and absolute stability. (For multistep methods, treated in another application, asymptotic
  stability also comes into play; here it is automatically guaranteed by the assumed
  Lipschitz continuity of f.)
- One says the order of consistency of such a discretization is p if the error of a single
  step can be bounded locally from above by a constant times the (p+1)-st power of the grid size.
- One says that a method has a region G of absolute stability in the complex plane if the
  discretized solution of
      y' = λ*y ,  y(0) = 1  with Re(λ) < 0
  using the fixed step size h decays in absolute value whenever the product h*λ lies in G.
  For the explicit Euler method, for example, one step gives y_{i+1} = (1 + h*λ)*y_i, so its
  region of absolute stability is the set of all z = h*λ with |1 + z| < 1, the unit disc
  centered at -1.
- The methods which we present here are:
- Euler explicit (order 1) and its modified version of order 2.
The local error is estimated by the difference of these two values.
This method has an interval of absolute stability ]-2,0[ on the real axis.
- The Runge-Kutta pair of Dormand and Prince (of orders 5 and 4).
  The local error is estimated by the difference of the two values.
  The interval of absolute stability is ]-3.3, 0[.
- A Rosenbrock-Wanner formula of order 4. This is the method GRK4a with coefficients from
  Kaps and Rentrop, in our own implementation. It uses the Jacobian of f w.r.t. y and can
  be considered a simplification of a singly diagonally implicit Runge-Kutta method using
  only one Newton step plus a correction term.
  The local error is estimated by extrapolation (repetition of the computation with halved
  step size, taking the difference of the two results); this has order 5.
  The region of absolute stability contains a cone in the left half-plane with vertex 0 and
  an interior angle of 2*α, α = 89 degrees.
- The fully implicit Runge-Kutta method RADAU5 (code from Hairer and Wanner) with two inner
  nodes and the new time value as third node. It is A- and L-stable, i.e. the region of
  absolute stability contains the whole left half-plane, and the quotient y_{i+1}/y_i
  converges to zero for fixed i and fixed h > 0 as λ tends to -infinity.
  The local error is estimated by local extrapolation: repetition of the computation with
  the stepsize halved and taking the difference of the computed values.
- The integrators 2, 3 and 4 all have order 5.
  The stability properties improve in the direction 1 < 2 < 3 < 4.
  On the other hand there is a considerable increase in computational effort for a single
  integration step:
  Method 1 is explicit and requires only two evaluations of f.
  Method 2 is also explicit, but already uses 6 such evaluations (there is a 7th f-value,
  which however can be reused in the next step).
  Method 3 is linearly implicit; it solves a linear system of equations involving the
  derivatives (d/dt)f and (d/dy)f.
  Method 4 is fully implicit and solves the nonlinear equations by a simplified Newton
  scheme. It needs one evaluation of (d/dt)f and (d/dy)f and a number of f-values depending
  on the number of Newton steps, about 12.
- Using these known results one can study various effects:
- All these methods are so-called one-step methods: they use f-values taken only from the
  current interval [t_{i-1}, t_i].
- For the explicit methods there is a direct coupling between the stepsize and the
  stability boundary when variable step size is used.
- Whereas the control of the local error works reasonably well in variable stepsize mode,
  there is no control of the global error, which is determined by the properties of the
  solution manifold of the ode.
- One observes strong variations of the stepsize if the derivatives of y (resp. f)
vary strongly.
- Discontinuities of f or of its higher partial derivatives often reduce the stepsize
  dramatically. Hence it is advisable to identify their locations and to restart the
  integration from there.
- ''Stiff'' equations (i.e. something corresponding to a huge negative λ in the model
  equation above) make the explicit integrators fail.
- Methods of higher order are generally more efficient than those of low order.
- If f is strongly nonlinear then the fully implicit method works better than the
linearly implicit one.
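- As an illustration of the explicit Runge-Kutta scheme derived above, here is a minimal
  sketch (in Python, not part of the package itself) of one step computed from a Butcher
  array; the classical 4-stage tableau is used purely as an example, it is not one of the
  integrators offered here.

    import numpy as np

    # Butcher array of the classical 4-stage Runge-Kutta method (example only):
    # alpha = nodes, gamma = weights, beta strictly lower triangular => explicit method.
    alpha = np.array([0.0, 0.5, 0.5, 1.0])
    gamma = np.array([1.0/6.0, 1.0/3.0, 1.0/3.0, 1.0/6.0])
    beta  = np.array([[0.0, 0.0, 0.0, 0.0],
                      [0.5, 0.0, 0.0, 0.0],
                      [0.0, 0.5, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0]])

    def explicit_rk_step(f, t_old, y_old, h):
        """One explicit RK step: k_j = f(t_old + alpha_j*h, y_old + h*sum_l beta_{j,l}*k_l)."""
        m = len(alpha)
        k = np.zeros((m, len(y_old)))
        for j in range(m):
            # beta[j, l] = 0 for l >= j, so only already computed stages enter here
            k[j] = f(t_old + alpha[j]*h, y_old + h*(beta[j, :j] @ k[:j]))
        return y_old + h*(gamma @ k)

    # one step of size 0.1 for the model problem y' = -2*y, y(0) = 1
    print(explicit_rk_step(lambda t, y: -2.0*y, 0.0, np.array([1.0]), 0.1))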
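- The step-size control described above can be sketched similarly. The following fragment
  (again an illustrative sketch, not the package's code) pairs an explicit Euler step of
  order 1 with a modified Euler (Heun) step of order 2, uses their difference as the local
  error estimate and adapts the step size; the safety factor 0.9 and the bounds on the
  step-size change are common but arbitrary choices here.

    import numpy as np

    def euler_pair_adaptive(f, a, b, y0, tol, h0, hmax, hmin=1e-12):
        """Integrate y' = f(t,y) on [a,b] with an Euler(1)/Heun(2) pair and a
        simple proportional step-size control (illustrative sketch only)."""
        t, y, h = a, np.asarray(y0, float), min(h0, hmax)
        ts, ys = [t], [y.copy()]
        while b - t > 1e-14*(1.0 + abs(b)):
            h = min(h, b - t)
            k1 = f(t, y)
            y_low = y + h*k1                          # explicit Euler, order 1
            k2 = f(t + h, y_low)
            y_high = y + 0.5*h*(k1 + k2)              # modified Euler (Heun), order 2
            # local error estimate, measured relative to the size of the solution
            err = np.max(np.abs(y_high - y_low)) / max(1.0, np.max(np.abs(y_high)))
            if err <= tol:                            # accept the step
                t, y = t + h, y_high
                ts.append(t); ys.append(y.copy())
            # the error of the order-1 result behaves like C*h^2, hence the square root
            h = min(hmax, h*min(2.0, max(0.2, 0.9*np.sqrt(tol/max(err, 1e-16)))))
            if h < hmin:
                raise RuntimeError("step size control fell below hmin")
        return np.array(ts), np.array(ys)

    # model problem y' = -2*y, y(0) = 1 on [0, 5]
    ts, ys = euler_pair_adaptive(lambda t, y: -2.0*y, 0.0, 5.0, [1.0], 1e-4, 0.1, 0.5)
    print(len(ts) - 1, "steps, y(5) =", ys[-1][0])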
Input: Predefined cases
- There is a choice between 6 different problems which can be solved with the four
different integrators.
- These problems are
- the one-dimensional model equation y'(t) = -2*y(t) with initial value y(0) = 1.
- the one-dimensional test case y'(t) = (y(t))^2 with initial value y(0) = 0.25.
- the restricted three-body problem.
- the computation of the current in a nonlinear RLC circuit with a tunnel diode in parallel
  (simulating a flip-flop; a tunnel diode can act as a resistor or as a source, depending
  on the voltage).
- the reduced Reynolds equation.
- an ode with discontinuous right-hand side f(t, y(t)).
- You select the integrator.
- You select the time step selection method. Either you use a fixed stepsize h > 0, which
  then must be given; keep the stability bound in mind, otherwise you might produce an overflow!
- Or you select the variable stepsize mode, in which case three parameters tol, hmax and
  smally must be specified. tol is the bound for the local error, measured relative to the
  size of the solution. Important: 0 < tol < 1!
- The parameter smally is a lower bound for the absolute values of the components of y
  which play a role in the definition of the relative error. Of course smally > 0!
  Internally we control the local error in a weighted norm which also varies with time:
      ||y - y_appr|| = sqrt( ||D*(y - y_appr)||_2^2 / n ) ,
      D = diag( 1 / max{ smally, |y_i_appr_old|, |y_i_appr_cur|, 2*macheps/tol } )
  This means that components which are large compared to smally are measured by their
  relative error, and the small ones relative to smally (a code sketch of this norm follows
  at the end of this list).
- hmax is the maximum stepsize allowed; of course it must be smaller than b-a.
- If you want the solution on a fixed predefined grid which may have no relation to the
  internally used one, you can request output on an equidistant grid defined by an integer
  NUM (i.e. this grid's size is (b-a)/NUM). This is not the integrator's stepsize! If NUM
  is greater than one, then at the bottom of your output you get a table showing the time,
  the number of function evaluations used so far, the solution and, as far as available,
  the estimate of the global discretization error and the true global error.
- For the methods 1, 2 and 3 we provide an estimate of the global error by performing the
  same computation on a grid of halved stepsizes. This integration is carried out
  simultaneously with, but independently of, the one on the original grid; acceptance of a
  step on the original grid, however, depends on the acceptance of all three error
  estimates involved (see the second code sketch at the end of this list).
- There are predefined values for the different cases as follows:
    case | hstart   | hmax | tol   | smally | num
    -----+----------+------+-------+--------+----
      1  | 0.1      | 0.5  | 1.e-6 | 0.1    |  1
      2  | 0.1      | 0.2  | 1.e-6 | 1.0    |  1
      3  | 0.001    | 0.1  | 1.e-4 | 1.e-6  |  1
      4  | 0.001    | 0.25 | 1.e-4 | 0.1    |  1
      5  | 0.000001 | 0.1  | 1.e-2 | 1.e-8  |  1
      6  | 0.03     | 0.3  | 1.e-3 | 1.0    |  1
      7  | 0.001    | -    | -     | -      |  -
- For all cases h_min = 1.0e-12 is the smallest time step allowed. If the stepsize control
  falls below this, the integration is terminated, but you obtain the results computed so far.
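- A minimal sketch of the weighted error norm described above (in Python, with our own
  variable names, not the package's actual code); a step would be accepted if the returned
  value does not exceed tol.

    import numpy as np

    def weighted_error_norm(err, y_old, y_new, smally, tol):
        """sqrt(||D*err||_2^2 / n) with
        D = diag(1 / max{smally, |y_i_old|, |y_i_new|, 2*macheps/tol})."""
        macheps = np.finfo(float).eps
        scale = np.maximum.reduce([np.full_like(err, smally),
                                   np.abs(y_old), np.abs(y_new),
                                   np.full_like(err, 2.0*macheps/tol)])
        return np.sqrt(np.mean((err/scale)**2))

    # example: the large component is measured relatively, the tiny one against smally
    y_old = np.array([1.0, 1.0e-8]); y_new = np.array([0.9, 2.0e-8])
    est = np.array([1.0e-7, 1.0e-9])              # some estimated local error
    print(weighted_error_norm(est, y_old, y_new, smally=0.1, tol=1.0e-6))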
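- The global error estimate by step halving can be illustrated as follows (a plain Python
  sketch, with the explicit Euler method standing in for the integrators of the package):
  the same problem is solved once with step h and once with step h/2, and the difference of
  the two results at the common grid points serves as the estimate.

    import numpy as np

    def euler_fixed(f, a, b, y0, h):
        """Explicit Euler with fixed step h; returns the values at a, a+h, ..., b."""
        n_steps = int(round((b - a)/h))
        y, ys = np.asarray(y0, float), []
        ys.append(y.copy())
        for i in range(n_steps):
            y = y + h*f(a + i*h, y)
            ys.append(y.copy())
        return np.array(ys)

    f, h = (lambda t, y: -2.0*y), 0.1             # predefined case 1, exact solution exp(-2t)
    coarse = euler_fixed(f, 0.0, 1.0, [1.0], h)
    fine = euler_fixed(f, 0.0, 1.0, [1.0], h/2)[::2]   # every 2nd point of the fine grid
    estimate = fine - coarse                      # estimated global error of 'coarse'
    # for a method of order p the true error is roughly this difference times 2^p/(2^p - 1)
    exact = np.exp(-2.0*np.linspace(0.0, 1.0, coarse.shape[0]))[:, None]
    print(estimate[-1], (exact - coarse)[-1])     # estimate vs. true global error at t = 1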
Output: Predefined case
- The total computation time and an account of the effort used:
- the number of computed time steps.
- the number of rejected time steps (local error too large, step size reduced).
- the number of accepted time steps.
- the number of evaluations of f.
- the number of evaluations of (df(t,y)/dt).
- the number of evaluations of (df(t,y)/dy).
- In addition you get a plot of:
- the step sizes used (in logarithmic scale).
- the local error estimate (in logarithmic scale).
- the global error estimate (methods 1, 2, 3 only) (in logarithmic scale).
- one or two components of the solution vector.
- !!!! It may occur that a log-scale plot displays no values, or that a message like
  "all points undefined" appears. This means that the errors are so small that, with the
  default settings of the graphics program used, they are all treated as zero. This happens
  if you use a much too small fixed stepsize for the simple test cases.
  A message "stop irregular" means that the precision of the integration is already at
  roundoff level, so that the error estimation fails (for example, the rule of order p+1
  which is used to estimate the error of the rule of order p suffers from larger roundoff
  than the latter).
- The plot for the global error contains two curves -- the true and the estimated error --
  in case the analytical solution is known (cases 1, 2 and 6).
- The plots seem to miss the initial point: this is because we attribute
the step information to the next point.
- In order to get reasonable plots we sometimes predefine the range of values to be
  plotted. If you get a plot without axis tics, this means that your values fall outside
  this range, for example because the local or global errors became extremely small.
Questions: Predefined case?!
- Explain the behaviour of the stepsize for the different test cases.
- Does the local error behave as expected?
- In which cases is the global error indeed at most the sum of the local errors?
- Does it make sense to use a fixed stepsize, and if so, when?
- How does the choice of smally influence the stepsize selection?
Input: User-defined case
- Here you have the possibility to define an IVP yourself. This requires
- Specification of n, the number of components of y resp. f(t,y).
- Specification of the components of f(t,y) = (f_1,...,f_n)^T. The functions f_i have the
  general form f_i = f_i(t, y(1),...,y(n)), but of course not all variables must appear
  explicitly. You write this as part of a piece of code which is included in a larger
  environment. The result of your computation, which may be a larger piece of code, must
  finally be of the form f(i) = some arithmetic expression. The input must conform to the
  rules of FORTRAN.
- The interval [a,b], where the computation takes place.
- The initial values y(a).
- For graphical output you can specify two components for plotting.
- Choose the integrator (see above).
- Choose the step size strategy; for a fixed stepsize you must specify h. Important: h > 0!
- In the case of a variable stepsize three parameters are needed: tol, smally and hmax.
  tol is the desired bound for the estimated local error. Important: 0 < tol < 1!
- smally is a lower bound for the magnitude of the components of the solution which are
  considered ''important''. The local error is measured in a locally varying, componentwise
  defined norm in which smally replaces components that become very small.
  Important: smally > 0!
- hmax is the maximum stepsize allowed.
- If you want the solution on a fixed predefined grid which may have no relation to the
  internally used one, you can request output on an equidistant grid, defined by an integer
  NUM (i.e. this grid's size is (b-a)/NUM; it is not the integrator's stepsize!).
Output: User-defined case
- The total computation time and an account of the effort used:
- the number of computed time steps.
- the number of rejected time steps (local error too large, step size reduced).
- the number of accepted time steps.
- the number of evaluations of f.
- the number of evaluations of (df(t,y)/dt).
- the number of evaluations of (df(t,y)/dy).
- In addition you get a plot of:
- the step sizes used (in logarithmic scale).
- the local error estimate (in logarithmic scale).
- the global error estimate (methods 1, 2, 3 only) (in logarithmic scale).
- one or two components of the solution vector.
- !!!! It may occur that a log-scale plot displays no values, or that a message like
  "all points undefined" appears. This means that the errors are so small that, with the
  default settings of the graphics program used, they are all treated as zero. This happens
  if you use a much too small fixed stepsize for simple test cases.
  A message "stop irregular" means that the precision of the integration is already at
  roundoff level, so that the error estimation fails (for example, the rule of order p+1
  which is used to estimate the error of the rule of order p suffers from larger roundoff
  than the latter).
- The plot for the global error contains two curves -- the true and the estimated error --
  in case the analytical solution is known.
- The plots seem to miss the initial point: this is because we attribute
the step information to the next point.
- If you set NUM greater than one, then at the bottom of your output you get an equidistant
  table with stepsize (b-a)/NUM containing the solution and, as far as available, the global
  error estimates and the true global error.
Questions: User-defined case?!
- This case is meant for the solution of an initial value problem of your
own interest.
- The questions to be asked here are of course the same as for the predefined cases.