Newton's method, several unknowns


 
  • In the following, x and F denote vectors with n components.
  • Newton's method is the best-known method for solving systems of nonlinear equations
    F(x) = F(x_1,...,x_n) = 0 with F : R^n --> R^n.
  • We assume in the following that F is at least twice continuously differentiable, that a solution x* of the problem exists, and that the Jacobian of F is invertible there. Then the method and its properties can be derived using Taylor's theorem. From
    F(x*) - F(x) = JF(x) e + O(||e||^2)
    with
    e = x* - x
    we get from our assumptions that
    ||F(x)|| = O(||e||) and ||e|| = O(||F(x)||). (*)
    Since F(x*) = 0, neglecting the second-order term in e gives a linear system of equations for e:
    JF(x) e = -F(x). (**)
    We use this for x = x_k in order to define the next iterate. The full-step method would be x_{k+1} = x_k + e_k.
    From this, using Taylor's formula for F(x_{k+1}) - F(x_k), we get
    ||F(x_{k+1})|| = O(||F(x_k)||^2),
    hence, using (*) again,
    ||e_{k+1}|| = O(||e_k||^2),
    and this shows that the method is locally quadratically convergent under these assumptions.
  • However, in order to extend the region of convergence of the method, it is applied here in the more general form of the damped method:
    x_{k+1} = x_k + t_k e_k.
    t_k appears as ''damp. fct'' in the iteration protocol.
    This is possible since one can show rather easily that e defined by (**) is a direction of descent for f(x) = ||A F(x)||^2 for any fixed invertible A. In this program we use, following Deuflhard, a variable matrix for A, namely JF(x_k)^(-1) (the ''natural level function''). A mixture of backtracking and interpolation computes t_k such that
    f(x_{k+1}) <= (1 - δ t_k) f(x_k)
    with a small positive δ < 1/2 and t_k <= 1; since one can show that there exists a strictly positive lower bound for t_k, convergence to a zero follows. The proof assumes that the level set of f for x_0 is compact and that the Jacobian matrix is invertible there.
  • One can show that t_k = 1 for k sufficiently large, so that quadratic convergence is maintained.
  • The invertibility of the Jacobian matrix is an indispensable requirement for the method, but continuous differentiability (once, not twice) suffices for proving superlinear convergence.
  • The method is applicable for any dimension.
  • In order to simplify the application we compute the Jacobian analytically using JAKEF from netlib. Remember that you must use strict f77, without extensions, when you write the code for your equations.
  • The linear system solver is a standard Gauss code with partial pivoting. The numerical code is nleq1 from the Elib of ZIB.
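The damped iteration described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the nleq1 code: it uses a Gauss solver with partial pivoting for the linear system (**), a backtracking line search enforcing f(x_{k+1}) <= (1 - δ t_k) f(x_k), and, for simplicity, the fixed weighting A = I instead of Deuflhard's variable matrix. The test system F(x, y) = (x^2 + y^2 - 4, x*y - 1) and all names are illustrative assumptions.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

def damped_newton(F, J, x, eps=1e-10, maxstep=100, delta=0.25):
    """Damped Newton iteration x_{k+1} = x_k + t_k e_k with backtracking."""
    for _ in range(maxstep):
        Fx = F(x)
        f = sum(v * v for v in Fx)        # f(x) = ||F(x)||^2  (A = I here)
        if f ** 0.5 < eps:
            break
        e = gauss_solve(J(x), [-v for v in Fx])   # JF(x) e = -F(x)   (**)
        t = 1.0
        while True:
            xn = [xi + t * ei for xi, ei in zip(x, e)]
            fn = sum(v * v for v in F(xn))
            # accept when f(x + t e) <= (1 - delta*t) f(x), with delta < 1/2
            if fn <= (1.0 - delta * t) * f or t < 1e-10:
                break                     # a real code would flag t-underflow as failure
            t *= 0.5
        x = xn
    return x

# Illustrative test system (not from the page): F(x, y) = (x^2 + y^2 - 4, x*y - 1)
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]
J = lambda x: [[2 * x[0], 2 * x[1]], [x[1], x[0]]]
root = damped_newton(F, J, [2.0, 1.0])
```

The full-step method is recovered whenever the very first trial t = 1 passes the descent test, which, per the remark above, happens for all sufficiently large k.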
 

Input

 
  • Here you must define the evaluation of F by a piece of FORTRAN code which will be copied into a prepared subprogram.
    You specify the dimension n (Important: 1 <= n <= 50 !)
  • and the evaluation of the components F(1),...,F(n) following FORTRAN rules. More details are given on the input form.
  • Additional inputs: Initial value xstart = x(1),...,x(n).
  • desired relative precision eps in xfinal.
  • the maximum number of steps allowed maxstep. (Important: 1 <= maxstep <= 1000 !)
  • The amount of intermediate information.
  • The amount of error information.
  • The amount of final output.
 

Output

 
  • Intermediate and final results as requested
  • two semilogarithmic plots which show the development of ||F||_2 and ||x_next - x||_2. Since we require monotonic descent only of a weighted norm of F, these curves will typically not be monotonic, but in the tail you should observe the quadratic convergence. The value '-25' represents zero here.
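The quadratic tail visible in the ||F||_2 plot can be reproduced with a small self-contained sketch (plain Python, full steps t_k = 1, a hypothetical 2x2 example; the 2x2 linear system (**) is solved by Cramer's rule rather than the Gauss code): once the iterate is close, the residual norm roughly squares from one step to the next.

```python
import math

def newton_residuals(x, y, steps=6):
    """Full-step Newton (t_k = 1) for F(x, y) = (x^2 + y^2 - 4, x*y - 1).
    Returns the residual norms ||F(x_k)||_2; the 2x2 system JF e = -F
    is solved by Cramer's rule."""
    res = []
    for _ in range(steps):
        f1 = x * x + y * y - 4.0
        f2 = x * y - 1.0
        res.append(math.hypot(f1, f2))
        det = 2.0 * x * x - 2.0 * y * y    # det of JF = [[2x, 2y], [y, x]]
        ex = (-f1 * x + 2.0 * y * f2) / det
        ey = (-2.0 * x * f2 + f1 * y) / det
        x, y = x + ex, y + ey
    return res

res = newton_residuals(2.0, 1.0)   # residuals decay roughly quadratically
```

Plotting log10 of these values reproduces the shape of the first output plot: a steep, ever-accelerating drop until rounding error is reached.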
 

Questions ?!

 
  • What must be strictly observed in choosing the initial guess?
  • What happens if the Jacobian matrix is quite ill-conditioned?
  • If there are several solutions: does the code always identify the one nearest to the initial guess?
 


 

13.08.2013