- In this chapter we present the most important methods for solving nonlinear equations
and systems of nonlinear equations.
However, since this site is meant as an aid for beginners in numerical analysis,
the dimension of the cases considered is rather limited.
There are two sections: one unknown and several unknowns, the latter including
nonlinear least squares.
- For equations in one unknown we present the always-convergent methods of
interval halving (bisection) and false position with its enhancement, the Illinois
algorithm, and finally the method of choice: Brent-Dekker (successive inverse
quadratic interpolation with safeguards). These are assembled in one
application, which lets you play with prepared test cases or use your own.
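To give a flavor of these bracketing methods, here is a minimal sketch of the Illinois variant of false position (our own illustration, not the applet's code; names and tolerances are ours): the secant step of regula falsi is kept, but when the same endpoint is retained in two consecutive steps, its stored function value is halved, which prevents the one-sided stagnation of pure false position.

```python
import math

def illinois(f, a, b, tol=1e-12, maxit=100):
    """Illinois variant of the method of false position.

    Keeps a sign-changing bracket [a, b]; if the same endpoint is
    retained twice in a row, its stored function value is halved,
    which restores fast convergence on one-sided problems.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must differ in sign")
    side, c = 0, a
    for _ in range(maxit):
        c = (a * fb - b * fa) / (fb - fa)   # secant (false position) step
        fc = f(c)
        if abs(fc) < tol:
            break
        if fc * fb < 0:       # zero lies in (c, b): endpoint b is retained
            a, fa = c, fc
            if side == -1:
                fb *= 0.5     # b retained twice in a row: halve fb
            side = -1
        else:                 # zero lies in (a, c): endpoint a is retained
            b, fb = c, fc
            if side == 1:
                fa *= 0.5
            side = 1
    return c

# example: the classical test equation cos(x) = x on [0, 1]
root = illinois(lambda x: math.cos(x) - x, 0.0, 1.0)
```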
- Newton's method, the father of all nonlinear solvers, is implemented with
a graphical tool which lets you zoom in during the solution process,
showing the enormous speedup provided by superlinearly convergent methods,
here of order 2.
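The basic iteration behind that tool can be sketched in a few lines (a bare-bones version of our own, without the safeguards a production code would need):

```python
def newton(f, df, x, tol=1e-14, maxit=50):
    """Newton's method x <- x - f(x)/f'(x); near a simple zero the
    number of correct digits roughly doubles per step (order 2)."""
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) <= tol * (1.0 + abs(x)):
            break
    return x

# example: sqrt(2) as the positive zero of f(x) = x^2 - 2
r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x = 1, the iterates 1.5, 1.41667, 1.41422, ... illustrate the doubling of correct digits per step.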
- In former centuries the solution of polynomial equations was a task
that occupied many mathematicians, in theory as well as in applications.
We show five such solvers here. The first, which finds only the real zeroes
of an arbitrary real polynomial, is based on upper and lower bounds for the
function value obtained from the complete Taylor expansion at an arbitrary point.
It can also cope with multiple zeroes and may show you the dangerous
effects of roundoff.
- The classical approach for finding all zeroes of a polynomial,
finding one and dividing it out, was shown to be highly sensitive to roundoff
by Wilkinson in his famous treatise ''Rounding Errors in Algebraic Processes''
and has since served as a cautionary example. We show this here in connection
with Newton's method, combined with a variation, Maehly's trick, which uses this deflation
only implicitly and results in a fast and accurate solver for polynomial zeroes,
provided only simple real zeroes are present (as in the case of orthogonal
polynomials with respect to a weight function on an interval).
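A minimal sketch of Newton's method with Maehly's trick (our own illustration, assuming simple real zeroes and reasonable starting guesses): the zeroes found so far enter only through a correction term in the denominator, so the coefficient vector is never actually deflated and no rounding error accumulates in it.

```python
def horner2(c, x):
    """Evaluate p(x) and p'(x) by Horner's scheme; c holds the
    coefficients from the highest power down."""
    p, dp = 0.0, 0.0
    for a in c:
        dp = dp * x + p
        p = p * x + a
    return p, dp

def maehly_roots(c, guesses, tol=1e-12, maxit=100):
    """Newton's method with Maehly's implicit deflation: the update is
    x <- x - p(x) / (p'(x) - p(x) * sum_j 1/(x - r_j))
    over the roots r_j already found."""
    roots = []
    for x in guesses:
        for _ in range(maxit):
            p, dp = horner2(c, x)
            s = sum(1.0 / (x - r) for r in roots)  # implicit deflation term
            step = p / (dp - p * s)
            x -= step
            if abs(step) <= tol * (1.0 + abs(x)):
                break
        roots.append(x)
    return roots

# example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
roots = maehly_roots([1.0, -6.0, 11.0, -6.0], [0.5, 1.6, 2.7])
```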
- If asked for a ''general'' zero finder for polynomials, the usual answer will
be ''cpoly'' (or, instead of the code name, ''use Jenkins-Traub''). We present this
method here; it is strongly related to the complex QR method for the nonhermitian
eigenvalue problem. Yes, curiously enough, the matrix eigenvalue problem is
solved not via the characteristic polynomial but by a genuine solver, and
conversely, the polynomial zero problem is solved via an eigenvalue solver.
- There is an alternative to Jenkins-Traub, Hirano's method, which is
based on a generalization of Newton's method. It also solves any complex polynomial zero
problem, often even more accurately, though at higher cost.
For this method even analytical complexity bounds are known.
- There were earlier attempts to solve the complete zero problem of a
polynomial, for example Laguerre's method, which can find complex zeroes of a real polynomial from real
initial guesses and is cubically convergent. Although there is no proof of global convergence
in the general case, the method is quite successful in practice. We present it here
for complex polynomials, supported by several enhancements.
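One Laguerre iteration can be sketched as follows (a bare minimal version of our own, without the enhancements mentioned above; coefficients are given with the highest power first):

```python
import cmath

def laguerre(c, x, tol=1e-12, maxit=100):
    """One zero of the polynomial with coefficients c (leading
    coefficient first) by Laguerre's method, started at x."""
    n = len(c) - 1
    x = complex(x)
    for _ in range(maxit):
        # p, p' and p''/2 at x via nested Horner schemes
        p = dp = h = 0j
        for a in c:
            h = h * x + dp
            dp = dp * x + p
            p = p * x + a
        if abs(p) < tol:
            break
        G = dp / p
        H = G * G - 2.0 * h / p          # H = G^2 - p''/p
        sq = cmath.sqrt((n - 1) * (n * H - G * G))
        # choose the sign giving the larger denominator (smaller step)
        d = G + sq if abs(G + sq) >= abs(G - sq) else G - sq
        x -= n / d
    return x

# example: x^2 + 1 has the zeroes +/- i; even a real start reaches them,
# since the square root above turns complex on its own
z = laguerre([1, 0, 1], 1.0)
```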
- Bairstow's method is another way to find complex zeroes of real polynomials
(this time in real arithmetic): it finds quadratic factors directly instead of linear ones.
We present it here as a damped Newton's method for the coefficients of such a factor,
enhanced with some safeguards. In this form it shows good reliability and performance.
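The core of such an iteration might look as follows (a simplified sketch of our own: the linear remainder of the division by x^2 + u*x + v is driven to zero by a damped Newton step on (u, v); for brevity the Jacobian is approximated by finite differences, where the classical method uses a second synthetic division):

```python
import math

def quad_remainder(a, u, v):
    """Divide the real polynomial with coefficients a (leading first)
    by x^2 + u*x + v; return (r1, r0) of the remainder r1*x + r0."""
    b0 = b1 = 0.0
    for ak in a:
        b0, b1 = ak - u * b0 - v * b1, b0
    return b1, b0 + u * b1

def bairstow(a, u, v, tol=1e-10, maxit=200):
    """Damped Newton iteration on (u, v) for a quadratic factor."""
    for _ in range(maxit):
        r1, r0 = quad_remainder(a, u, v)
        res = abs(r1) + abs(r0)
        if res < tol:
            break
        h = 1e-7                          # finite-difference Jacobian
        s1, s0 = quad_remainder(a, u + h, v)
        t1, t0 = quad_remainder(a, u, v + h)
        j11, j12 = (s1 - r1) / h, (t1 - r1) / h
        j21, j22 = (s0 - r0) / h, (t0 - r0) / h
        det = j11 * j22 - j12 * j21      # assumed nonzero (simple factor)
        du = (r1 * j22 - r0 * j12) / det
        dv = (r0 * j11 - r1 * j21) / det
        lam = 1.0                        # damping: halve until residual drops
        while lam > 1e-6:
            n1, n0 = quad_remainder(a, u - lam * du, v - lam * dv)
            if abs(n1) + abs(n0) < res:
                break
            lam *= 0.5
        u, v = u - lam * du, v - lam * dv
    return u, v

# example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3);
# starting near the factor (x-1)(x-2) = x^2 - 3x + 2
u, v = bairstow([1.0, -6.0, 11.0, -6.0], -3.0, 1.5)
```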
- In the last century much effort was put into obtaining optimal complexity
bounds for the evaluation of polynomials (which has a high impact on computing,
since polynomial or rational approximations are behind every computation of
our transcendental and higher transcendental functions).
Here we present Knuth's result that every polynomial of degree n can be evaluated with
floor(n/2)+2 multiplications and n+1 additions.
The required preprocessing needs a polynomial zero solver, hence its inclusion here.
Again you can study the possibly destructive effects of rounding errors.
- For nonlinear systems we first have Newton's method for two unknowns,
again with a graphical representation which should help you understand how
the method works in general.
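A stripped-down version of the iteration behind that picture (our own sketch; the 2x2 linear system for the Newton correction is solved by Cramer's rule):

```python
def newton2(F, J, x, y, tol=1e-12, maxit=50):
    """Newton's method for F(x, y) = (f1, f2) = (0, 0); J returns the
    Jacobian entries (df1/dx, df1/dy, df2/dx, df2/dy)."""
    for _ in range(maxit):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule for J*(dx,dy) = -F
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# example: intersect the circle x^2 + y^2 = 4 with the hyperbola x*y = 1
x, y = newton2(lambda x, y: (x * x + y * y - 4.0, x * y - 1.0),
               lambda x, y: (2.0 * x, 2.0 * y, y, x),
               2.0, 0.5)
```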
- Next comes a professional nonlinear solver: Newton's method in n
variables in a careful implementation containing many enhancements.
To obtain ''global'' convergence, step damping is used here.
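The skeleton of such a solver, in a much-simplified form of our own (Gaussian elimination with partial pivoting for the Newton correction, and the simplest damping rule: halve the step until the residual norm decreases):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting; A and b are modified."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def damped_newton(F, J, x, tol=1e-12, maxit=100):
    """Newton's method with step damping: the Newton direction is kept,
    but its length is halved until ||F|| decreases (monotonicity test)."""
    def norm(v):
        return sum(t * t for t in v) ** 0.5
    for _ in range(maxit):
        fx = F(x)
        if norm(fx) < tol:
            break
        d = solve([row[:] for row in J(x)], [-t for t in fx])
        lam = 1.0
        while lam > 1e-10:
            xn = [xi + lam * di for xi, di in zip(x, d)]
            if norm(F(xn)) < norm(fx):
                break
            lam *= 0.5
        x = xn
    return x

# example (our own toy system): x0^2 = x1, x1^2 = x2, x0 + x1 + x2 = 3
F = lambda x: [x[0] ** 2 - x[1], x[1] ** 2 - x[2], x[0] + x[1] + x[2] - 3.0]
Jf = lambda x: [[2 * x[0], -1.0, 0.0], [0.0, 2 * x[1], -1.0], [1.0, 1.0, 1.0]]
sol = damped_newton(F, Jf, [2.0, 2.0, 2.0])
```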
- There is a nonlinear least squares solver based on linearization
and the accurate solution of the linear least squares problems:
the damped Gauss-Newton method. Here you can play with the classical
(and notoriously ill-conditioned) problem of exponential fitting,
which can be seen as a parameter identification problem for a linear
ordinary differential equation with constant coefficients. You can also
work with your own data and your own fit function.
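For the simplest exponential model y ~ c*exp(a*t), a damped Gauss-Newton step fits in a few lines (our own sketch; for brevity the 2x2 normal equations are solved directly, whereas an accurate code would use a QR-based least squares solve as described above):

```python
import math

def gauss_newton_exp(t, y, c=1.0, a=0.0, tol=1e-12, maxit=100):
    """Damped Gauss-Newton fit of y ~ c*exp(a*t): linearize the
    residuals r_i = c*exp(a*t_i) - y_i, solve the 2x2 normal
    equations, and halve the step until the sum of squares drops."""
    def ssq(c, a):
        return sum((c * math.exp(a * ti) - yi) ** 2 for ti, yi in zip(t, y))
    for _ in range(maxit):
        e = [math.exp(a * ti) for ti in t]
        r = [c * ei - yi for ei, yi in zip(e, y)]
        j1 = e                                        # dr/dc
        j2 = [c * ti * ei for ti, ei in zip(t, e)]    # dr/da
        g11 = sum(v * v for v in j1)
        g12 = sum(p * q for p, q in zip(j1, j2))
        g22 = sum(v * v for v in j2)
        b1 = -sum(p * q for p, q in zip(j1, r))
        b2 = -sum(p * q for p, q in zip(j2, r))
        det = g11 * g22 - g12 * g12
        dc = (b1 * g22 - b2 * g12) / det
        da = (b2 * g11 - b1 * g12) / det
        lam, s0 = 1.0, ssq(c, a)
        while lam > 1e-10 and ssq(c + lam * dc, a + lam * da) >= s0:
            lam *= 0.5
        c, a = c + lam * dc, a + lam * da
        if abs(dc) + abs(da) < tol:
            break
    return c, a

# synthetic, noise-free data for y = 2*exp(-0.5*t)
t = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]
c, a = gauss_newton_exp(t, y, c=1.0, a=0.0)
```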
- We also present a version of the Levenberg-Marquardt method
(a so-called ''trust region method'') for the solution of nonlinear systems.
A variant of it for nonlinear fitting of data can be found in
the section on least squares and unconstrained optimization.
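The flavor of such a trust-region iteration can be sketched as follows (our own two-variable toy version, not the applet's code: the regularized normal equations (J^T J + mu*I) d = -J^T F are solved directly, and mu is shrunk after a successful step and grown after a rejected one):

```python
import math

def levmarq(F, J, x, y, mu=1e-2, tol=1e-10, maxit=200):
    """Basic Levenberg-Marquardt for a 2x2 nonlinear system."""
    def ssq(x, y):
        f1, f2 = F(x, y)
        return f1 * f1 + f2 * f2
    for _ in range(maxit):
        f1, f2 = F(x, y)
        if f1 * f1 + f2 * f2 < tol * tol:
            break
        a, b, c, d = J(x, y)
        # normal-equations matrix J^T J + mu*I and right-hand side -J^T F
        g11 = a * a + c * c + mu
        g12 = a * b + c * d
        g22 = b * b + d * d + mu
        r1 = -(a * f1 + c * f2)
        r2 = -(b * f1 + d * f2)
        det = g11 * g22 - g12 * g12
        dx = (r1 * g22 - r2 * g12) / det
        dy = (r2 * g11 - r1 * g12) / det
        if ssq(x + dx, y + dy) < ssq(x, y):
            x, y, mu = x + dx, y + dy, mu * 0.5   # success: trust more
        else:
            mu *= 4.0                             # failure: trust less
    return x, y

# example (our own): circle and exponential curve meeting at (1, 1)
F = lambda x, y: (x * x + y * y - 2.0, math.exp(x - 1.0) + y ** 3 - 2.0)
Jm = lambda x, y: (2.0 * x, 2.0 * y, math.exp(x - 1.0), 3.0 * y * y)
x, y = levmarq(F, Jm, 0.5, 0.5)
```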
- As an application of nonlinear least squares fitting we present
orthogonal distance fitting of an ellipse in the plane. This lets
you play with synthetic data and shows how a problem can turn from
simple to unsolvable in the presence of noise and restricted data.
This is a so-called implicit model, where a sum of simple squares of variables
is minimized subject to a system of nonlinear equations in these variables.
If you want to try other cases of this type, we recommend the SQP method
in the chapter on constrained optimization.
- We also have orthogonal distance minimization for an errors-in-variables model,
where both the independent and the dependent variables are subject to noise.
- We have the SOR-Newton method, which
has applications to large-scale nonlinear systems of special structure.
There is a graphical representation of the solution process
in the plane and, as a second case, an implementation which lets you study a
general case in dimensions up to 1200.
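A small self-contained illustration of the idea (our own example, not the applet's: a one-dimensional nonlinear boundary value problem discretized by finite differences; each sweep applies one relaxed Newton step per equation in ''its'' unknown):

```python
def sor_newton(n=50, omega=1.8, tol=1e-10, maxit=5000):
    """One-step SOR-Newton for the discretized boundary value problem
    -u'' + u^3 = 1, u(0) = u(1) = 0 on n interior grid points:
    sweep over the components, doing a single Newton step on the i-th
    equation in the i-th unknown, relaxed by the factor omega."""
    h = 1.0 / (n + 1)
    x = [0.0] * n
    for sweep in range(maxit):
        delta = 0.0
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            # i-th equation and its diagonal derivative
            fi = 2.0 * x[i] - left - right + h * h * (x[i] ** 3 - 1.0)
            dfi = 2.0 + 3.0 * h * h * x[i] ** 2
            step = omega * fi / dfi
            x[i] -= step
            delta = max(delta, abs(step))
        if delta < tol:
            return x, sweep + 1
    return x, maxit

x, sweeps = sor_newton()
```

Note that only the diagonal entry of the Jacobian is ever needed, which is what makes the method attractive for large sparse systems of this structure.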
- Finally, there is the Newton-GMRES method, implemented as a derivative-free inexact Newton solver.