Wolfram Mathematica® Tutorial Collection
ADVANCED NUMERICAL DIFFERENTIAL EQUATION SOLVING IN MATHEMATICA
For use with Wolfram Mathematica® 7.0 and later.
For the latest updates and corrections to this manual: visit reference.wolfram.com
For information on additional copies of this documentation: visit the Customer Service website at www.wolfram.com/services/customerservice or email Customer Service at [email protected]
Comments on this manual are welcomed at: [email protected]
Content authored by: Mark Sofroniou and Rob Knapp
Printed in the United States of America.
©2008 Wolfram Research, Inc. All rights reserved. No part of this document may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the copyright holder. Wolfram Research is the holder of the copyright to the Wolfram Mathematica software system ("Software") described in this document, including without limitation such aspects of the system as its code, structure, sequence, organization, “look and feel,” programming language, and compilation of command names. Use of the Software unless pursuant to the terms of a license granted by Wolfram Research or as otherwise authorized by law is an infringement of the copyright. Wolfram Research, Inc. and Wolfram Media, Inc. ("Wolfram") make no representations, express, statutory, or implied, with respect to the Software (or any aspect thereof), including, without limitation, any implied warranties of merchantability, interoperability, or fitness for a particular purpose, all of which are expressly disclaimed. Wolfram does not warrant that the functions of the Software will meet your requirements or that the operation of the Software will be uninterrupted or error free. As such, Wolfram does not recommend the use of the software described in this document for applications in which errors or omissions could threaten life, injury or significant loss. Mathematica, MathLink, and MathSource are registered trademarks of Wolfram Research, Inc. J/Link, MathLM, .NET/Link, and webMathematica are trademarks of Wolfram Research, Inc. Windows is a registered trademark of Microsoft Corporation in the United States and other countries. Macintosh is a registered trademark of Apple Computer, Inc. All other trademarks used herein are the property of their respective owners. Mathematica is not associated with Mathematica Policy Research, Inc.
Contents

Introduction
    Overview
    The Design of the NDSolve Framework
ODE Integration Methods
    Methods
    Controller Methods
    Extensions
Partial Differential Equations
    The Numerical Method of Lines
Boundary Value Problems
    Shooting Method
    Chasing Method
    Boundary Value Problems with Parameters
Differential-Algebraic Equations
    Introduction
    IDA Method
Delay Differential Equations
    Comparison and Contrast with ODEs
    Propagation and Smoothing of Discontinuities
    Storing History Data
    The Method of Steps
    Examples
Norms in NDSolve
    ScaledVectorNorm
Stiffness Detection
    Overview
    Introduction
    Linear Stability
    "StiffnessTest" Method Option
    "NonstiffTest" Method Option
    Examples
    Option Summary
Structured Systems
    Numerical Methods for Solving the Lotka-Volterra Equations
    Rigid Body Solvers
Components and Data Structures
    Introduction
    Example
    Creating NDSolve`StateData Objects
    Iterating Solutions
    Getting Solution Functions
    NDSolve`StateData Methods
DifferentialEquations Utility Packages
    InterpolatingFunctionAnatomy
    NDSolveUtilities
References
Introduction to Advanced Numerical Differential Equation Solving in Mathematica

Overview

The Mathematica function NDSolve is a general numerical differential equation solver. It can handle a wide range of ordinary differential equations (ODEs) as well as some partial differential equations (PDEs). In a system of ordinary differential equations there can be any number of unknown functions xi, but all of these functions must depend on a single "independent variable" t, which is the same for each function. Partial differential equations involve two or more independent variables. NDSolve can also solve some differential-algebraic equations (DAEs), which are typically a mix of differential and algebraic equations.
NDSolve[{eqn1, eqn2, …}, u, {t, tmin, tmax}]
    find a numerical solution for the function u with t in the range tmin to tmax
NDSolve[{eqn1, eqn2, …}, {u1, u2, …}, {t, tmin, tmax}]
    find numerical solutions for several functions ui

Finding numerical solutions to ordinary differential equations.
NDSolve represents solutions for the functions xi as InterpolatingFunction objects. The InterpolatingFunction objects provide approximations to the xi over the range of values tmin to tmax for the independent variable t.

In general, NDSolve finds solutions iteratively. It starts at a particular value of t, then takes a sequence of steps, trying eventually to cover the whole range tmin to tmax.

In order to get started, NDSolve has to be given appropriate initial or boundary conditions for the xi and their derivatives. These conditions specify values for xi[t], and perhaps derivatives xi'[t], at particular points t. When there is only one point t at which conditions are given, the equations and initial conditions are collectively referred to as an initial value problem. A boundary value problem occurs when conditions are given at multiple points t. NDSolve can solve nearly all initial value problems that can symbolically be put in normal form (i.e., are solvable for the highest derivative order), but only linear boundary value problems.
This finds a solution for x with t in the range 0 to 2, using an initial condition for x at t == 1.

In[1]:= NDSolve[{x'[t] == x[t], x[1] == 3}, x, {t, 0, 2}]
Out[1]= {{x -> InterpolatingFunction[{{0., 2.}}, <>]}}
When you use NDSolve, the initial or boundary conditions you give must be sufficient to determine the solutions for the xi completely. When you use DSolve to find symbolic solutions to differential equations, you can specify fewer conditions. The reason is that DSolve automatically inserts arbitrary symbolic constants C[i] to represent degrees of freedom associated with initial conditions that you have not specified explicitly. Since NDSolve must give a numerical solution, it cannot represent these kinds of additional degrees of freedom. As a result, you must explicitly give all the initial or boundary conditions that are needed to determine the solution. In a typical case, if you have differential equations with up to nth derivatives, then you need to either give initial conditions for derivatives up to order n - 1, or give boundary conditions at n points.

This solves an initial value problem for a second-order equation, which requires two conditions; they are given at t == 0.

In[2]:= NDSolve[{x''[t] == x[t]^2, x[0] == 1, x'[0] == 0}, x, {t, 0, 2}]
Out[2]= {{x -> InterpolatingFunction[{{0., 2.}}, <>]}}
This plots the solution obtained.

In[3]:= Plot[Evaluate[x[t] /. %], {t, 0, 2}]
Out[3]= (plot of x[t] for 0 <= t <= 2)
Here is a simple boundary value problem.

In[4]:= NDSolve[{y''[x] + x y[x] == 0, y[0] == 1, y[1] == -1}, y, {x, 0, 1}]
Out[4]= {{y -> InterpolatingFunction[{{0., 1.}}, <>]}}
You can use NDSolve to solve systems of coupled differential equations as long as each variable has the appropriate number of conditions.

This finds a numerical solution to a pair of coupled equations.

In[5]:= sol = NDSolve[{x''[t] == y[t] x[t], y'[t] == -(1/2) x[t] + y[t]^2, x[0] == 1, x'[0] == 0, y[0] == 0}, {x, y}, {t, 0, 100}]
Out[5]= {{x -> InterpolatingFunction[{{0., 100.}}, <>], y -> InterpolatingFunction[{{0., 100.}}, <>]}}
Here is a plot of both solutions.

In[6]:= Plot[Evaluate[{x[t], y[t]} /. %], {t, 0, 100}, PlotRange -> All, PlotPoints -> 200]
Out[6]= (plot of x[t] and y[t] for 0 <= t <= 100)
You can give initial conditions as equations of any kind. If these equations have multiple solutions, NDSolve will generate multiple solutions.

The initial conditions in this case lead to multiple solutions.

In[7]:= NDSolve[{y'[x]^2 - y[x]^3 == 0, y[0]^2 == 4}, y, {x, 1}]
Out[7]= {{y -> InterpolatingFunction[{{0., 1.}}, <>]}, {y -> InterpolatingFunction[{{0., 1.}}, <>]},
    {y -> InterpolatingFunction[{{0., 1.1161*10^-8}}, <>]}, {y -> InterpolatingFunction[{{0., 1.}}, <>]}}

NDSolve was not able to find the solution for y'[x] == -Sqrt[y[x]^3], y[0] == -2 because of problems with the branch cut in the square root function.
This shows the real part of the solutions that NDSolve was able to find. (The upper two solutions are strictly real.)

In[8]:= Plot[Evaluate[Part[Re[y[x] /. %], {1, 2, 4}]], {x, 0, 1}]
Out[8]= (plot of the real parts of the three solutions for 0 <= x <= 1)
NDSolve can solve a mixed system of differential and algebraic equations, referred to as differential-algebraic equations (DAEs). In fact, the previous example is a sort of DAE, since the equations are not expressed explicitly in terms of the derivatives. Typically, however, in DAEs you are not able to solve for the derivatives at all, and the problem must be solved using a different method entirely.

Here is a simple DAE.

In[9]:= NDSolve[{x''[t] + y[t] == x[t], x[t]^2 + y[t]^2 == 1, x[0] == 0, x'[0] == 1}, {x, y}, {t, 0, 2}]
Out[9]= {{x -> InterpolatingFunction[{{0., 1.66565}}, <>], y -> InterpolatingFunction[{{0., 1.66565}}, <>]}}
Note that while both of the equations have derivative terms, the variable y appears without any derivatives, so NDSolve issues a warning message. When the usual substitution to convert to first-order equations is made, one of the equations does indeed become effectively algebraic. Also, since y only appears algebraically, it is not necessary to give an initial condition to determine its values. Finding initial conditions that are consistent with DAEs can, in fact, be quite difficult. The tutorial "Numerical Solution of Differential-Algebraic Equations" has more information.
This shows a plot of the solutions.

In[10]:= Plot[Evaluate[{x[t], y[t]} /. %], {t, 0, 1.66}]
Out[10]= (plot of x[t] and y[t] for 0 <= t <= 1.66)
From the plot, you can see that the derivative of y is tending to vary arbitrarily fast. Even though it does not explicitly appear in the equations, this condition means that the solver cannot continue further.

Unknown functions in differential equations do not necessarily have to be represented by single symbols. If you have a large number of unknown functions, for example, you will often find it more convenient to give the functions names like x[i] or xi.

This constructs a set of twenty-five coupled differential equations and initial conditions and solves them.

In[11]:= n = 25; Subscript[x, 0][t_] := 0; Subscript[x, n][t_] := 1;
         eqns = Table[{Subscript[x, i]'[t] == n^2 (Subscript[x, i + 1][t] - 2 Subscript[x, i][t] + Subscript[x, i - 1][t]), Subscript[x, i][0] == (i/n)^10}, {i, n - 1}];
         vars = Table[Subscript[x, i][t], {i, n - 1}];
In[16]:= sol = NDSolve[eqns, vars, {t, 0, .25}]
Out[16]= {{x1[t] -> InterpolatingFunction[{{0., 0.25}}, <>][t],
    x2[t] -> InterpolatingFunction[{{0., 0.25}}, <>][t],
    …,
    x24[t] -> InterpolatingFunction[{{0., 0.25}}, <>][t]}}
This actually computes an approximate solution of the heat equation for a one-dimensional rod of length 1 with constant temperature maintained at either end. (For more accurate solutions, you can increase n.)

This shows the solutions considered as spatial values as a function of time.

In[17]:= ListPlot3D[Table[vars /. First[%], {t, 0, .25, .025}]]
Out[17]= (surface plot of the solution values over space and time)
An unknown function can also be specified to have a vector (or matrix) value. The dimensionality of an unknown function is taken from its initial condition. You can mix scalar and vector unknown functions as long as the equations have consistent dimensionality according to the rules of Mathematica arithmetic. The InterpolatingFunction result will give values with the same dimensionality as the unknown function. Using nonscalar variables is very convenient when a system of differential equations is governed by a process that may be difficult or inefficient to express symbolically.

This uses a vector-valued unknown function to solve the same system as earlier.

In[18]:= f[x_?VectorQ] := n^2 ListConvolve[{1, -2, 1}, x, {2, 2}, {1, 0}]
In[19]:= NDSolve[{X'[t] == f[X[t]], X[0] == Table[(i/n)^10, {i, n - 1}]}, X, {t, 0, .25}]
Out[19]= {{X -> InterpolatingFunction[{{0., 0.25}}, <>]}}
NDSolve is able to solve some partial differential equations directly when you specify more independent variables.
NDSolve[{eqn1, eqn2, …}, u, {t, tmin, tmax}, {x, xmin, xmax}, …]
    solve a system of partial differential equations for a function u[t, x, …] with t in the range tmin to tmax and x in the range xmin to xmax, …
NDSolve[{eqn1, eqn2, …}, {u1, u2, …}, {t, tmin, tmax}, {x, xmin, xmax}, …]
    solve a system of partial differential equations for several functions ui

Finding numerical solutions to partial differential equations.

Here is a solution of the heat equation found directly by NDSolve.

In[20]:= NDSolve[{D[u[x, t], t] == D[u[x, t], x, x], u[x, 0] == x^10, u[0, t] == 0, u[1, t] == 1}, u, {x, 0, 1}, {t, 0, .25}]
Out[20]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 0.25}}, <>]}}
Here is a plot of the solution.

In[21]:= Plot3D[Evaluate[First[u[x, t] /. %]], {x, 0, 1}, {t, 0, .25}]
Out[21]= (surface plot of u[x, t] for 0 <= x <= 1, 0 <= t <= 0.25)
NDSolve currently uses the numerical method of lines to compute solutions to partial differential equations. The method is restricted to problems that can be posed with an initial condition in at least one independent variable. For example, the method cannot solve elliptic PDEs such as Laplace's equation because these require boundary values. For the problems it does solve, the method of lines is quite general, handling systems of PDEs or nonlinearity well, and often quite fast. Details of the method are given in "Numerical Solution of Partial Differential Equations".
This finds a numerical solution to a generalization of the nonlinear sine-Gordon equation to two spatial dimensions with periodic boundary conditions.

In[22]:= NDSolve[{D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y] - Sin[u[t, x, y]],
    u[0, x, y] == Exp[-(x^2 + y^2)], Derivative[1, 0, 0][u][0, x, y] == 0,
    u[t, -5, y] == u[t, 5, y] == 0, u[t, x, -5] == u[t, x, 5] == 0},
    u, {t, 0, 3}, {x, -5, 5}, {y, -5, 5}]
Out[22]= {{u -> InterpolatingFunction[{{0., 3.}, {-5., 5.}, {-5., 5.}}, <>]}}
Here is a plot of the result at t == 3.

In[23]:= Plot3D[First[u[3, x, y] /. %], {x, -5, 5}, {y, -5, 5}]
Out[23]= (surface plot of u[3, x, y] for -5 <= x <= 5, -5 <= y <= 5)
As mentioned earlier, NDSolve works by taking a sequence of steps in the independent variable t. NDSolve uses an adaptive procedure to determine the size of these steps. In general, if the solution appears to be varying rapidly in a particular region, then NDSolve will reduce the step size to be able to better track the solution.

NDSolve allows you to specify the precision or accuracy of the result you want. In general, NDSolve makes the steps it takes smaller and smaller until the solution reached satisfies either the AccuracyGoal or the PrecisionGoal you give. The setting for AccuracyGoal effectively determines the absolute error to allow in the solution, while the setting for PrecisionGoal determines the relative error. If you need to track a solution whose value comes close to zero, then you will typically need to increase the setting for AccuracyGoal. By setting AccuracyGoal -> Infinity, you tell NDSolve to use PrecisionGoal only. Generally, AccuracyGoal and PrecisionGoal are used to control the error local to a particular time step. For some differential equations, this error can accumulate, so it is possible that the precision or accuracy of the result at the end of the time interval may be much less than what you might expect from the settings of AccuracyGoal and PrecisionGoal.
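As a concrete illustration, the tolerance options can be combined like this; the equation and the particular goal settings are our own illustrative choices, not taken from the manual:

NDSolve[{x'[t] == -x[t], x[0] == 1}, x, {t, 0, 20},
    AccuracyGoal -> 15, PrecisionGoal -> 8]

Raising AccuracyGoal tightens the absolute error tolerance, which matters here because the true solution E^-t approaches zero over the integration range.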
NDSolve uses the setting you give for WorkingPrecision to determine the precision to use in its internal computations. If you specify large values for AccuracyGoal or PrecisionGoal, then you typically need to give a somewhat larger value for WorkingPrecision. With the default setting of Automatic, both AccuracyGoal and PrecisionGoal are equal to half of the setting for WorkingPrecision.

NDSolve uses error estimates for determining whether it is meeting the specified tolerances. When working with systems of equations, it uses the setting of the option NormFunction -> f to combine errors in different components. The norm is scaled in terms of the tolerances, so that NDSolve tries to take steps such that

    f(err1/(tolr Abs[x1] + tola), err2/(tolr Abs[x2] + tola), …) <= 1

where erri is the ith component of the error, xi is the ith component of the current solution, and tolr and tola are the relative and absolute tolerances derived from PrecisionGoal and AccuracyGoal, respectively.

This generates a high-precision solution to a differential equation.

In[24]:= NDSolve[{x'''[t] == x[t], x[0] == 1, x'[0] == x''[0] == 0}, x, {t, 1},
    AccuracyGoal -> 20, PrecisionGoal -> 20, WorkingPrecision -> 25]
Out[24]= {{x -> InterpolatingFunction[{{0, 1.000000000000000000000000}}, <>]}}

Here is the value of the solution at the endpoint.

In[25]:= x[1] /. %
Out[25]= {1.168058313375918525580620}
Through its adaptive procedure, NDSolve is able to solve "stiff" differential equations in which there are several components varying with t at extremely different rates.

NDSolve follows the general procedure of reducing step size until it tracks solutions accurately. There is a problem, however, when the true solution has a singularity. In this case, NDSolve might go on reducing the step size forever, and never terminate. To avoid this problem, the option MaxSteps specifies the maximum number of steps that NDSolve will ever take in attempting to find a solution. For ordinary differential equations, the default setting is MaxSteps -> 10000.

NDSolve stops after taking 10,000 steps.

In[26]:= NDSolve[{y'[x] == 1/x^2, y[-1] == 1}, y[x], {x, -1, 0}]
Out[26]= {{y[x] -> InterpolatingFunction[{{-1., -1.00413*10^-172}}, <>][x]}}
There is in fact a singularity in the solution at x = 0.

In[27]:= Plot[Evaluate[y[x] /. %], {x, -1, 0}]
Out[27]= (plot of y[x] for -1 <= x <= 0, growing rapidly near x = 0)
The default setting MaxSteps -> 10000 should be sufficient for most equations with smooth solutions. When solutions have a complicated structure, however, you may sometimes have to choose larger settings for MaxSteps. With the setting MaxSteps -> Infinity, there is no upper limit on the number of steps used.

NDSolve has several different methods built in for computing solutions, as well as a mechanism for adding additional methods. With the default setting Method -> Automatic, NDSolve will choose a method that should be appropriate for the differential equations. For example, if the equations have stiffness, implicit methods will be used as needed, or if the equations make a DAE, a special DAE method will be used. In general, it is not possible to determine the nature of solutions to differential equations without actually solving them: thus, the default Automatic methods are good for solving a wide variety of problems, but the one chosen may not be the best one available for your particular problem. Also, you may want to choose methods, such as symplectic integrators, which preserve certain properties of the solution.

Choosing an appropriate method for a particular system can be quite difficult. To complicate it further, many methods have their own settings, which can greatly affect solution efficiency and accuracy. Much of this documentation consists of descriptions of methods to give you an idea of when they should be used and how to adjust them to solve particular problems. Furthermore, NDSolve has a mechanism that allows you to define your own methods and still have the equations and results processed by NDSolve just as for the built-in methods.
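An explicit method choice overrides the Automatic selection; a minimal sketch, with an equation of our own choosing:

NDSolve[{x'[t] == -x[t], x[0] == 1}, x, {t, 0, 1},
    Method -> "ExplicitRungeKutta"]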
When NDSolve computes a solution, there are typically three phases. First, the equations are processed, usually into a function that represents the right-hand side of the equations in normal form. Next, the function is used to iterate the solution from the initial conditions. Finally, data saved during the iteration procedure is processed into one or more InterpolatingFunction objects. Using functions in the NDSolve` context, you can run these steps separately and, more importantly, have more control over the iteration process. The steps are tied by an NDSolve`StateData object, which keeps all of the data necessary for solving the differential equations.
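The three phases can be run separately with the NDSolve` context functions described later in "Components and Data Structures"; the equation here is an illustrative choice:

state = First[NDSolve`ProcessEquations[{x'[t] == -x[t], x[0] == 1}, x, t]];
NDSolve`Iterate[state, 1];
sol = NDSolve`ProcessSolutions[state]

NDSolve`ProcessEquations sets up an NDSolve`StateData object, NDSolve`Iterate advances the solution to t == 1, and NDSolve`ProcessSolutions converts the accumulated data into rules with InterpolatingFunction objects.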
The Design of the NDSolve Framework

Features

Supporting a large number of numerical integration methods for differential equations is a lot of work. In order to cut down on maintenance and duplication of code, common components are shared between methods. This approach also allows code optimization to be carried out in just a few central routines.

The principal features of the NDSolve framework are:

† Uniform design and interface
† Code reuse (common code base)
† Object orientation (method property specification and communication)
† Data hiding
† Separation of method initialization phase and run-time computation
† Hierarchical and reentrant numerical methods
† Uniform treatment of rounding errors (see [HLW02], [SS03] and the references therein)
† Vectorized framework based on a generalization of the BLAS model [LAPACK99] using optimized in-place arithmetic
† Tensor framework that allows families of methods to share one implementation † Type and precision dynamic for all methods † Plug-in capabilities that allow user extensibility and prototyping † Specialized data structures
Common Time Stepping

A common time-stepping mechanism is used for all one-step methods. The routine handles a number of different criteria, including:

† Step sizes in a numerical integration do not become too small in value, which may happen in solving stiff systems
† Step sizes do not change sign unexpectedly, which may be a consequence of user programming error
† Step sizes are not increased after a step rejection
† Step sizes are not decreased drastically toward the end of an integration
† Specified (or detected) singularities are handled by restarting the integration
† Divergence of iterations in implicit methods (e.g., using fixed, large step sizes)
† Unrecoverable integration errors (e.g., numerical exceptions)
† Rounding error feedback (compensated summation), which is particularly advantageous for high-order methods or methods that conserve specific quantities during the numerical integration
Data Encapsulation

Each method has its own data object that contains information that is needed for the invocation of the method. This includes, but is not limited to, coefficients, workspaces, step-size control parameters, step-size acceptance/rejection information, and Jacobian matrices. This is a generalization of the ideas used in codes like LSODA ([H83], [P83]).
Method Hierarchy

Methods are reentrant and hierarchical, meaning that one method can call another. This is a generalization of the ideas used in the Generic ODE Solving System, Godess (see [O95], [O98] and the references therein), which is implemented in C++.
Initial Design

The original method framework design allowed a number of methods to be invoked in the solver.

NDSolve -> "ExplicitRungeKutta"
NDSolve -> "ImplicitRungeKutta"
First Revision

This was later extended to allow one method to call another in a sequential fashion, with an arbitrary number of levels of nesting.

NDSolve -> "Extrapolation" -> "ExplicitMidpoint"

The construction of compound integration methods is particularly useful in geometric numerical integration.

NDSolve -> "Projection" -> "ExplicitRungeKutta"
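A nested invocation like these is requested through nested Method options; a sketch with an illustrative equation:

NDSolve[{x'[t] == -x[t], x[0] == 1}, x, {t, 0, 1},
    Method -> {"Extrapolation", Method -> "ExplicitMidpoint"}]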
Second Revision

A more general tree invocation process was required to implement composition methods.

NDSolve -> "Composition" -> "ExplicitEuler"
                            "ImplicitEuler"
                            "ExplicitEuler"

This is an example of a method composed with its adjoint.
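A composition like the one diagrammed above can be written as follows; the suboption form is a sketch based on the "Composition and Splitting Methods for NDSolve" documentation, and the equation is an illustrative choice:

NDSolve[{x'[t] == -x[t], x[0] == 1}, x, {t, 0, 1},
    Method -> {"Composition", Method -> {"ExplicitEuler", "ImplicitEuler", "ExplicitEuler"}}]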
Current State

The tree invocation process was extended to allow for a subfield to be solved by each method, instead of the entire vector field. This example turns up in the ABC Flow subsection of "Composition and Splitting Methods for NDSolve".

NDSolve -> "Splitting" (f = f1 + f2) -> "LocallyExact" (f1)
                                        "ImplicitMidpoint" (f2)
                                        "LocallyExact" (f1)
User Extensibility

Built-in methods can be used as building blocks for the efficient construction of special-purpose (compound) integrators. User-defined methods can also be added.
Method Classes

Methods such as "ExplicitRungeKutta" include a number of schemes of different orders. Moreover, alternative coefficient choices can be specified by the user. This is a generalization of the ideas found in RKSUITE [BGS93].
Automatic Selection and User Controllability The framework provides automatic step-size selection and method-order selection. Methods are user-configurable via method options. For example a user can select the class of “ExplicitRungeKutta“ methods, and the code will automatically attempt to ascertain the "optimal" order according to the problem, the relative and absolute local error tolerances, and the initial step-size estimate.
Here is a list of options appropriate for "ExplicitRungeKutta".

In[1]:= Options[NDSolve`ExplicitRungeKutta]

Out[1]= {Coefficients -> EmbeddedExplicitRungeKuttaCoefficients, DifferenceOrder -> Automatic, EmbeddedDifferenceOrder -> Automatic, StepSizeControlParameters -> Automatic, StepSizeRatioBounds -> {1/8, 4}, StepSizeSafetyFactors -> Automatic, StiffnessTest -> Automatic}
MethodMonitor

In order to illustrate the low-level behavior of some methods, such as stiffness switching or order variation that occurs at run time, a new "MethodMonitor" has been added. This fits between the relatively coarse resolution of "StepMonitor" and the fine resolution of "EvaluationMonitor":

StepMonitor
  MethodMonitor
    EvaluationMonitor

This feature is not officially documented, and the functionality may change in future versions.
Shared Features

These features are not necessarily restricted to NDSolve, since they can also be used for other types of numerical methods.

- Function evaluation is performed using a NumericalFunction that dynamically changes type as needed, such as when IEEE floating-point overflow or underflow occurs. It also calls Mathematica's compiler Compile for efficiency when appropriate.
- Jacobian evaluation uses symbolic differentiation or finite difference approximations, including automatic or user-specifiable sparsity detection.
- Dense linear algebra is based on LAPACK, and sparse linear algebra uses special-purpose packages such as UMFPACK.
- Common subexpressions in the numerical evaluation of the function representing a differential system are detected and collected to avoid repeated work.
- Other supporting functionality that has been implemented is described in "Norms in NDSolve".

This system dynamically switches type from real to complex during the numerical integration, automatically recompiling as needed.

In[2]:= y[1/2] /. NDSolve[{y'[t] == Sqrt[y[t]] - 1, y[0] == 1/10}, y, {t, 0, 1}, Method -> "ExplicitRungeKutta"]

Out[2]= {-0.349043 + 0.150441 I}
Some Basic Methods

order  method                                          formula
1      Explicit Euler                                  y_{n+1} = y_n + h_n f(t_n, y_n)
2      Explicit Midpoint                               y_{n+1/2} = y_n + (h_n/2) f(t_n, y_n)
                                                       y_{n+1} = y_n + h_n f(t_{n+1/2}, y_{n+1/2})
1      Backward or Implicit Euler (1-stage RadauIIA)   y_{n+1} = y_n + h_n f(t_{n+1}, y_{n+1})
2      Implicit Midpoint (1-stage Gauss)               y_{n+1} = y_n + h_n f(t_{n+1/2}, (y_{n+1} + y_n)/2)
2      Trapezoidal (2-stage Lobatto IIIA)              y_{n+1} = y_n + (h_n/2) (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
1      Linearly Implicit Euler                         (I - h_n J) (y_{n+1} - y_n) = h_n f(t_n, y_n)
2      Linearly Implicit Midpoint                      (I - (h_n/2) J) (y_{n+1/2} - y_n) = (h_n/2) f(t_n, y_n)
                                                       (I - (h_n/2) J) (Δy_n - Δy_{n-1/2})/2 = (h_n/2) f(t_{n+1/2}, y_{n+1/2}) - Δy_{n-1/2}

Some of the one-step methods that have been implemented.

Here Δy_n = y_{n+1} - y_{n+1/2}, I denotes the identity matrix, and J denotes the Jacobian matrix ∂f/∂y (t_n, y_n).
Although the implicit midpoint method has not been implemented as a separate method, it is available through the one-stage Gauss scheme of the “ImplicitRungeKutta“ method.
ODE Integration Methods

Methods

"ExplicitRungeKutta" Method for NDSolve

Introduction

This loads packages containing some test problems and utility functions.

In[3]:= Needs["DifferentialEquations`NDSolveProblems`"];
        Needs["DifferentialEquations`NDSolveUtilities`"];
Euler's Method

One of the first and simplest methods for solving initial value problems was proposed by Euler:

y_{n+1} = y_n + h f(t_n, y_n).    (1)

Euler's method is not very accurate. Local accuracy is measured by how many terms of the Taylor expansion of the solution are matched. Euler's method is first-order accurate, so the local error per step is of order h^2.

Euler's method is implemented in NDSolve as "ExplicitEuler".

In[5]:= NDSolve[{y'[t] == -y[t], y[0] == 1}, y[t], {t, 0, 1}, Method -> "ExplicitEuler", "StartingStepSize" -> 1/10]

Out[5]= {{y[t] -> InterpolatingFunction[{{0., 1.}}, <>][t]}}
Generalizing Euler's Method

The idea of Runge-Kutta methods is to take successive (weighted) Euler steps to approximate a Taylor series. In this way function evaluations (and not derivatives) are used.
For example, consider the one-step formulation of the midpoint method:

k1 = f(t_n, y_n)
k2 = f(t_n + h/2, y_n + (h/2) k1)    (1)
y_{n+1} = y_n + h k2

The midpoint method can be shown to have a local error of O(h^3), so it is second-order accurate.

The midpoint method is implemented in NDSolve as "ExplicitMidpoint".

In[6]:= NDSolve[{y'[t] == -y[t], y[0] == 1}, y[t], {t, 0, 1}, Method -> "ExplicitMidpoint", "StartingStepSize" -> 1/10]

Out[6]= {{y[t] -> InterpolatingFunction[{{0., 1.}}, <>][t]}}
Runge-Kutta Methods

Extending the approach in (1), repeated function evaluation can be used to obtain higher-order methods. Denote the Runge-Kutta method for the approximate solution to an initial value problem at t_{n+1} = t_n + h by:

g_i = y_n + h Σ_{j=1}^{s} a_{i,j} k_j
k_i = f(t_n + c_i h, g_i),   i = 1, 2, ..., s,    (1)
y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i

where s is the number of stages. It is generally assumed that the row-sum conditions hold:

c_i = Σ_{j=1}^{s} a_{i,j}    (2)

These conditions effectively determine the points in time at which the function is sampled and are a particularly useful device in the derivation of high-order Runge-Kutta methods. The coefficients of the method are free parameters that are chosen to satisfy a Taylor series expansion through some order in the time step h. In practice other conditions such as stability can also constrain the coefficients.

Explicit Runge-Kutta methods are a special case where the matrix A is strictly lower triangular:

a_{i,j} = 0,  j >= i,  j = 1, ..., s.
It has become customary to denote the method coefficients c = [c_i]^T, b = [b_i]^T, and A = [a_{i,j}] using a Butcher table, which has the following form for explicit Runge-Kutta methods (3):

 0   |
 c2  | a_{2,1}
 :   |   :       :
 cs  | a_{s,1}  a_{s,2}  ...  a_{s,s-1}
-----+---------------------------------------
     | b_1      b_2      ...  b_{s-1}    b_s

The row-sum conditions can be visualized as summing across the rows of the table. Notice that a consequence of explicitness is c_1 = 0, so that the function is sampled at the beginning of the current integration step.
Example

The Butcher table for the explicit midpoint method (1) is given by:

 0   |
 1/2 | 1/2
-----+---------
     | 0    1
FSAL Schemes

A particularly interesting special class of explicit Runge-Kutta methods, used in most modern codes, are those for which the coefficients have a special structure known as First Same As Last (FSAL):

a_{s,i} = b_i,  i = 1, ..., s-1  and  b_s = 0.    (1)

For consistent FSAL schemes the Butcher table (3) has the form (2):

 0       |
 c2      | a_{2,1}
 :       |   :         :
 c_{s-1} | a_{s-1,1}  a_{s-1,2}  ...
 1       | b_1        b_2        ...  b_{s-1}  0
---------+---------------------------------------
         | b_1        b_2        ...  b_{s-1}  0

The advantage of FSAL methods is that the function value k_s at the end of one integration step is the same as the first function value k_1 at the next integration step.
The function values at the beginning and end of each integration step are required anyway when constructing the InterpolatingFunction that is used for dense output in NDSolve.
Embedded Pairs and Local Error Estimation

An efficient means of obtaining local error estimates for adaptive step-size control is to consider two methods of different orders p and p̂ that share the same coefficient matrix (and hence function values) (1):

 0       |
 c2      | a_{2,1}
 :       |   :         :
 c_{s-1} | a_{s-1,1}  a_{s-1,2}  ...
 c_s     | a_{s,1}    a_{s,2}    ...  a_{s,s-1}
---------+----------------------------------------
         | b_1        b_2        ...  b_{s-1}  b_s
         | b̂_1       b̂_2       ...  b̂_{s-1} b̂_s

These give two solutions:

y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i    (2)

ŷ_{n+1} = y_n + h Σ_{i=1}^{s} b̂_i k_i    (3)

A commonly used notation is p(p̂), typically with p̂ = p - 1 or p̂ = p + 1.

In most modern codes, including the default choice in NDSolve, the solution is advanced with the more accurate formula so that p̂ = p - 1, which is known as local extrapolation.

The vector of coefficients e = [b_1 - b̂_1, b_2 - b̂_2, ..., b_s - b̂_s]^T gives an error estimator avoiding subtractive cancellation of y_n in floating-point arithmetic when forming the difference between (2) and (3):

err_n = h Σ_{i=1}^{s} e_i k_i

The quantity ||err_n|| gives a scalar measure of the error that can be used for step-size selection.
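A minimal Python sketch of this idea, pairing the midpoint method (p = 2) with Euler's method (p̂ = 1) as an illustrative embedded pair (not one of NDSolve's built-in pairs): both formulas reuse the same stages, and the error estimate is h Σ e_i k_i with e = b - b̂.

```python
def embedded_step(f, t, y, h):
    """Midpoint/Euler 2(1) pair: shared stages, solution plus error estimate."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    b = (0.0, 1.0)        # midpoint weights, order p = 2
    b_hat = (1.0, 0.0)    # Euler weights, order p_hat = 1
    e = tuple(bi - bhi for bi, bhi in zip(b, b_hat))
    y_new = y + h * (b[0] * k1 + b[1] * k2)   # advance with the higher-order formula
    err = h * (e[0] * k1 + e[1] * k2)         # here simply h (k2 - k1)
    return y_new, abs(err)

y_new, err = embedded_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```

No extra function evaluations are needed for the estimate: the stages k_i are shared by both formulas.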
Step Control

The classical Integral (or I) step-size controller uses the formula:

h_{n+1} = h_n (Tol / ||err_n||)^(1/p̃)    (1)

where p̃ = min(p, p̂) + 1.

The error estimate is therefore used to determine the next step size to use from the current step size. The notation Tol/||err_n|| is explained within "Norms in NDSolve".
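As a one-line sketch in Python (the numeric inputs below are illustrative):

```python
def i_controller(h, err, tol, p, p_hat):
    """Classical Integral controller: h_{n+1} = h * (tol/err)^(1/p_tilde)."""
    p_tilde = min(p, p_hat) + 1
    return h * (tol / err) ** (1.0 / p_tilde)

# If the error estimate exactly equals the tolerance, the step size is kept.
h_next = i_controller(0.1, 1e-6, 1e-6, 2, 1)
```

A smaller error estimate proposes a larger next step, and vice versa.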
Overview

Explicit Runge-Kutta pairs of orders 2(1) through 9(8) have been implemented. Formula pairs have the following properties:

- First Same As Last strategy.
- Local extrapolation mode, that is, the higher-order formula is used to propagate the solution.
- Stiffness detection capability (see "StiffnessTest Method Option for NDSolve").
- Proportional-Integral step-size controller for stiff and quasi-stiff systems [G91].

Optimal formula pairs of orders 2(1), 3(2), and 4(3) subject to the already stated requirements have been derived using Mathematica, and are described in [SS04]. The 5(4) pair selected is due to Bogacki and Shampine [BS89b, S94], and the 6(5), 7(6), 8(7), and 9(8) pairs are due to Verner.

For the selection of higher-order pairs, issues such as local truncation error ratio and stability region compatibility should be considered (see [S94]). Various tools have been written to assess these qualitative features.

Methods are interchangeable so that, for example, it is possible to substitute the 5(4) method of Bogacki and Shampine with a method of Dormand and Prince.

Summation of the method stages is implemented using level 2 BLAS, which is often highly optimized for particular processors and can also take advantage of multiple cores.
Example

Define the Brusselator ODE problem, which models a chemical reaction.

In[7]:= system = GetNDSolveProblem["BrusselatorODE"]

Out[7]= NDSolveProblem[{{Y1'[T] == 1 - 4 Y1[T] + Y1[T]^2 Y2[T], Y2'[T] == 3 Y1[T] - Y1[T]^2 Y2[T]}, {Y1[0] == 3/2, Y2[0] == 3}, {Y1[T], Y2[T]}, {T, 0, 20}, {}, {}, {}}]

This solves the system using an explicit Runge-Kutta method.

In[8]:= sol = NDSolve[system, Method -> "ExplicitRungeKutta"]

Out[8]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T], Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}
Extract the interpolating functions from the solution.

In[9]:= ifuns = system["DependentVariables"] /. First[sol];

Plot the solution components.

In[10]:= ParametricPlot[Evaluate[ifuns], Evaluate[system["TimeData"]]]

Out[10]= [parametric plot of the two solution components]
Method Comparison

Sometimes you may be interested to find out what methods are being used in NDSolve. Here you can see the coefficients of the default 2(1) embedded pair.

In[11]:= NDSolve`EmbeddedExplicitRungeKuttaCoefficients[2, Infinity]

Out[11]= {{{1}, {1/2, 1/2}}, {1/2, 1/2, 0}, {1, 1}, {-1/2, 2/3, -1/6}}
You also may want to compare some of the different methods to see how they perform for a specific problem.

Utilities

You will make use of a utility function CompareMethods for comparing various methods. Some useful NDSolve features of this function for comparing methods are:

- The option EvaluationMonitor, which is used to count the number of function evaluations.
- The option StepMonitor, which is used to count the number of accepted and rejected integration steps.

This displays the results of the method comparison using a GridBox.

In[12]:= TabulateResults[labels_List, names_List, data_List] :=
          DisplayForm[FrameBox[GridBox[Apply[{labels, ##} &, MapThread[Prepend, {data, names}]]]]];
Reference Solution

A number of examples for comparing numerical methods in the literature rely on the fact that a closed-form solution is available, which is obviously quite limiting. In NDSolve it is possible to get very accurate approximations using arbitrary-precision, adaptive step-size, adaptive-order methods based on "Extrapolation".

The following reference solution is computed with a method that switches between a pair of "Extrapolation" methods, depending on whether the problem appears to be stiff.

In[13]:= sol = NDSolve[system, Method -> "StiffnessSwitching", WorkingPrecision -> 32];
         refsol = First[FinalSolutions[system, sol]];
Automatic Order Selection

When you select "DifferenceOrder" -> Automatic, the code will automatically attempt to choose the optimal order method for the integration.

Two algorithms have been implemented for this purpose and are described within "SymplecticPartitionedRungeKutta Method for NDSolve".
Example 1

Here is an example that compares built-in methods of various orders, together with the method that is selected automatically.

This selects the order of the methods to choose between and makes a list of method options to pass to NDSolve.

In[15]:= orders = Join[Range[2, 9], {Automatic}];

In[17]:= data = CompareMethods[system, refsol, methods];

Display the results in a table.

In[18]:= labels = {"Method", "Steps", "Cost", "Error"};
         TabulateResults[labels, orders, data]

Out[19]//DisplayForm=

Method     Steps         Cost     Error
2          {124381, 0}   248764   1.90685×10^-8
3          {4247, 2}     12749    3.45492×10^-8
4          {940, 6}      3786     8.8177×10^-9
5          {188, 16}     1430     1.01784×10^-8
6          {289, 13}     2418     1.63157×10^-10
7          {165, 19}     1842     2.23919×10^-9
8          {87, 16}      1341     1.20179×10^-8
9          {91, 24}      1842     1.01705×10^-8
Automatic  {91, 24}      1843     1.01705×10^-8
The default method has order nine, which is close to the optimal order of eight in this example. One function evaluation is needed during the initialization phase to determine the order.
Example 2 A limitation of the previous example is that it did not take into account the accuracy of the solution obtained by each method, so that it did not give a fair reflection of the cost. Rather than taking a single tolerance to compare methods, it is preferable to use a range of tolerances. The following example compares various “ExplicitRungeKutta“ methods of different orders using a variety of tolerances.
This selects the order of the methods to choose between and makes a list of method options to pass to NDSolve.

In[20]:= orders = Join[Range[4, 9], {Automatic}];

In[22]:= data = Table[Map[Rest, CompareMethods[system, refsol, methods, AccuracyGoal -> tol, PrecisionGoal -> tol]], {tol, 3, 14}];

In[23]:= ListLogLogPlot[Transpose[data], Joined -> True, Axes -> False, Frame -> True, PlotMarkers -> Map[Style[#, Medium] &, Join[Drop[orders, -1], {"A"}]]]

Out[23]= [log-log plot comparing the methods across the range of tolerances, with each curve marked by its order and "A" marking the automatic order selection]
The order-selection algorithms are heuristic in that the optimal order may change through the integration but, as the examples illustrate, a reasonable default choice is usually made. Ideally, a selection of different problems should be used for benchmarking.
Coefficient Plug-in

The implementation of "ExplicitRungeKutta" provides a default method pair at each order. Sometimes, however, it is convenient to use a different method, for example:

- To replicate the results of someone else.
- To use a special-purpose method that works well for a specific problem.
- To experiment with a new method.
The Classical Runge-Kutta Method

This shows how to define the coefficients of the classical explicit Runge-Kutta method of order four, approximated to precision p.

In[24]:= crkamat = {{1/2}, {0, 1/2}, {0, 0, 1}};
         crkbvec = {1/6, 1/3, 1/3, 1/6};
         crkcvec = {1/2, 1/2, 1};
         ClassicalRungeKuttaCoefficients[4, p_] := N[{crkamat, crkbvec, crkcvec}, p];

The method has no embedded error estimate and hence there is no specification of the coefficient error vector. This means that the method is invoked with fixed step sizes.

Here is an example of the calling syntax.

In[27]:= NDSolve[system, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4, "Coefficients" -> ClassicalRungeKuttaCoefficients}, StartingStepSize -> 1/10]

Out[27]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T], Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}
ode23

This defines the coefficients for a 3(2) FSAL explicit Runge-Kutta pair. The third-order formula is due to Ralston, and the embedded method was derived by Bogacki and Shampine [BS89a].

This defines a function for computing the coefficients to a desired precision.

In[28]:= BSamat = {{1/2}, {0, 3/4}, {2/9, 1/3, 4/9}};
         BSbvec = {2/9, 1/3, 4/9, 0};
         BScvec = {1/2, 3/4, 1};
         BSevec = {-5/72, 1/12, 1/9, -1/8};
         BSCoefficients[4, p_] := N[{BSamat, BSbvec, BScvec, BSevec}, p];

The method is used in the Texas Instruments TI-85 pocket calculator, Matlab, and RKSUITE [S94]. Unfortunately it does not allow for the form of stiffness detection that has been chosen.

A Method of Fehlberg

This defines the coefficients for a 4(5) explicit Runge-Kutta pair of Fehlberg that was popular in the 1960s [F69]. The fourth-order formula is used to propagate the solution, and the fifth-order formula is used only for the purpose of error estimation.
This defines the function for computing the coefficients to a desired precision.

In[33]:= Fehlbergamat = {{1/4}, {3/32, 9/32}, {1932/2197, -7200/2197, 7296/2197}, {439/216, -8, 3680/513, -845/4104}, {-8/27, 2, -3544/2565, 1859/4104, -11/40}};
         Fehlbergbvec = {25/216, 0, 1408/2565, 2197/4104, -1/5, 0};
         Fehlbergcvec = {1/4, 3/8, 12/13, 1, 1/2};
         Fehlbergevec = {-1/360, 0, 128/4275, 2197/75240, -1/50, -2/55};
         FehlbergCoefficients[4, p_] := N[{Fehlbergamat, Fehlbergbvec, Fehlbergcvec, Fehlbergevec}, p];

In contrast to the classical Runge-Kutta method of order four, the coefficients include an additional entry that is used for error estimation. The Fehlberg method is not a FSAL scheme since the coefficient matrix is not of the form (2); it is a six-stage scheme, but it requires six function evaluations per step because of the function evaluation that is required at the end of the step to construct the InterpolatingFunction.
A Dormand-Prince Method

Here is how to define a 5(4) pair of Dormand and Prince coefficients [DP80]. This is currently the method used by ode45 in Matlab.

This defines a function for computing the coefficients to a desired precision.

In[38]:= DOPRIamat = {{1/5}, {3/40, 9/40}, {44/45, -56/15, 32/9}, {19372/6561, -25360/2187, 64448/6561, -212/729}, {9017/3168, -355/33, 46732/5247, 49/176, -5103/18656}, {35/384, 0, 500/1113, 125/192, -2187/6784, 11/84}};
         DOPRIbvec = {35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0};
         DOPRIcvec = {1/5, 3/10, 4/5, 8/9, 1, 1};
         DOPRIevec = {71/57600, 0, -71/16695, 71/1920, -17253/339200, 22/525, -1/40};
         DOPRICoefficients[5, p_] := N[{DOPRIamat, DOPRIbvec, DOPRIcvec, DOPRIevec}, p];

The Dormand-Prince method is a FSAL scheme since the coefficient matrix is of the form (2); it is a seven-stage scheme, but effectively uses only six function evaluations.

Here is how the coefficients of Dormand and Prince can be used in place of the built-in choice. Since the structure of the coefficients includes an error vector, the implementation is able to ascertain that adaptive step sizes can be computed.

In[43]:= NDSolve[system, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5, "Coefficients" -> DOPRICoefficients, "StiffnessTest" -> False}]

Out[43]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T], Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}
Method Comparison

Here you solve a system using several explicit Runge-Kutta pairs.

For the Fehlberg 4(5) pair, the option "EmbeddedDifferenceOrder" is used to specify the order of the embedded method.

In[44]:= Fehlberg45 = {"ExplicitRungeKutta", "Coefficients" -> FehlbergCoefficients, "DifferenceOrder" -> 4, "EmbeddedDifferenceOrder" -> 5, "StiffnessTest" -> False};

The Dormand and Prince 5(4) pair is defined as follows.

In[45]:= DOPRI54 = {"ExplicitRungeKutta", "Coefficients" -> DOPRICoefficients, "DifferenceOrder" -> 5, "StiffnessTest" -> False};

The 5(4) pair of Bogacki and Shampine is the default order-five method.

In[46]:= BS54 = {"ExplicitRungeKutta", "Coefficients" -> "EmbeddedExplicitRungeKuttaCoefficients", "DifferenceOrder" -> 5, "StiffnessTest" -> False};

Put the methods and some descriptive names together in a list.

In[47]:= names = {"Fehlberg 4(5)", "Dormand-Prince 5(4)", "Bogacki-Shampine 5(4)"};
         methods = {Fehlberg45, DOPRI54, BS54};

Compute the number of integration steps, function evaluations, and the endpoint global error.

In[49]:= data = CompareMethods[system, refsol, methods];

Display the results in a table.

In[50]:= labels = {"Method", "Steps", "Cost", "Error"};
         TabulateResults[labels, names, data]

Out[51]//DisplayForm=

Method                 Steps      Cost  Error
Fehlberg 4(5)          {320, 11}  1977  1.52417×10^-7
Dormand-Prince 5(4)    {292, 10}  1814  1.73878×10^-8
Bogacki-Shampine 5(4)  {188, 16}  1430  1.01784×10^-8
The default method was the least expensive and provided the most accurate solution.
Method Plug-in

This shows how to implement the classical explicit Runge-Kutta method of order four using the method plug-in environment.
This definition is optional since the method in fact has no data. However, any expression can be stored inside the data object. For example, the coefficients could be approximated here to avoid coercion from rational to floating-point numbers at each integration step.

In[52]:= ClassicalRungeKutta /: NDSolve`InitializeMethod[ClassicalRungeKutta, __] := ClassicalRungeKutta[];

The actual method implementation is written using a stepping procedure.

In[53]:= ClassicalRungeKutta[___]["Step"[f_, t_, h_, y_, yp_]] :=
          Block[{deltay, k1, k2, k3, k4},
            k1 = yp;
            k2 = f[t + 1/2 h, y + 1/2 h k1];
            k3 = f[t + 1/2 h, y + 1/2 h k2];
            k4 = f[t + h, y + h k3];
            deltay = h (1/6 k1 + 1/3 k2 + 1/3 k3 + 1/6 k4);
            {h, deltay}
          ];

Notice that the implementation closely resembles the description that you might find in a textbook. There are no memory allocation/deallocation statements or type declarations, for example. In fact the implementation works for machine real numbers or machine complex numbers, and even using arbitrary-precision software arithmetic.

Here is an example of the calling syntax. For simplicity the method only uses fixed step sizes, so you need to specify what step sizes to take.

In[54]:= NDSolve[system, Method -> ClassicalRungeKutta, StartingStepSize -> 1/10]

Out[54]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T], Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}
Many of the methods that have been built into NDSolve were first prototyped using top-level code before being implemented in the kernel for efficiency.
Stiffness

Stiffness is a combination of problem, initial data, numerical method, and error tolerances. Stiffness can arise, for example, in the translation of diffusion terms by divided differences into a large system of ODEs.

In order to understand more about the nature of stiffness it is useful to study how methods behave when applied to a simple problem.
Linear Stability

Consider applying a Runge-Kutta method to a linear scalar equation known as Dahlquist's equation:

y'(t) = λ y(t),  λ ∈ ℂ,  Re(λ) < 0.    (1)

The result is a rational function of polynomials R(z) where z = h λ (see for example [L87]).

This utility function finds the linear stability function R(z) for Runge-Kutta methods. The form depends on the coefficients and is a polynomial if the Runge-Kutta method is explicit.

Here is the stability function for the fifth-order scheme in the Dormand-Prince 5(4) pair.

In[55]:= DOPRIsf = RungeKuttaLinearStabilityFunction[DOPRIamat, DOPRIbvec, z]

Out[55]= 1 + z + z^2/2 + z^3/6 + z^4/24 + z^5/120 + z^6/600
The following package is useful for visualizing linear stability regions for numerical methods for differential equations.

In[56]:= Needs["FunctionApproximations`"];

You can now visualize the absolute stability region |R(z)| = 1.

In[57]:= OrderStarPlot[DOPRIsf, 1, z]

Out[57]= [order star plot of DOPRIsf, showing the region of absolute stability]
Depending on the magnitude of λ in (1), if you choose the step size h such that |R(hλ)| < 1, then errors in successive steps will be damped, and the method is said to be absolutely stable. If |R(hλ)| > 1, then step-size selection will be restricted by stability and not by local accuracy.
Stiffness Detection

The device for stiffness detection that is used with the option "StiffnessTest" is described within "StiffnessTest Method Option for NDSolve".

Recast in terms of explicit Runge-Kutta methods, the condition for stiffness detection can be formulated as:

λ̃ = ||k_s - k_{s-1}|| / ||g_s - g_{s-1}||    (2)

with g_i and k_i defined in (1).

The difference g_s - g_{s-1} can be shown to correspond to a number of applications of the power method applied to h J. The difference is therefore a good approximation of the eigenvector corresponding to the leading eigenvalue.

The product |h λ̃| gives an estimate that can be compared to the stability boundary in order to detect stiffness. An s-stage explicit Runge-Kutta method has a form suitable for (2) if c_{s-1} = c_s = 1 (3):

 0   |
 c2  | a_{2,1}
 :   |   :         :
 1   | a_{s-1,1}  a_{s-1,2}  ...
 1   | a_{s,1}    a_{s,2}    ...  a_{s,s-1}
-----+---------------------------------------
     | b_1        b_2        ...  b_{s-1}  b_s

The default embedded pairs used in "ExplicitRungeKutta" all have the form (3). An important point is that (2) is very cheap and convenient: it uses already available information from the integration and requires no additional function evaluations. Another advantage of (3) is that it is straightforward to make use of consistent FSAL methods (1).
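A sketch of the estimate (2) in Python, using the linear test problem y' = A y, for which k_i = A g_i, so the ratio recovers the magnitude of the eigenvalue picked out by the stage difference (the matrix and stage vectors are illustrative, not produced by an actual integration):

```python
def norm(v):
    return sum(x * x for x in v) ** 0.5

def lambda_tilde(f, g_last, g_prev):
    """Stiffness estimate (2): ||k_s - k_{s-1}|| / ||g_s - g_{s-1}||."""
    k_last, k_prev = f(g_last), f(g_prev)
    num = norm([a - c for a, c in zip(k_last, k_prev)])
    den = norm([a - c for a, c in zip(g_last, g_prev)])
    return num / den

# Linear system with eigenvalues -1 and -1000 (diagonal, so eigenvectors are axes).
f = lambda y: [-1.0 * y[0], -1000.0 * y[1]]

# When the stage difference lies along the stiff eigenvector, the estimate is 1000.
est = lambda_tilde(f, [1.0, 1.0 + 1e-4], [1.0, 1.0])
```

In an actual integration the stage difference is not chosen by hand; the power-method effect described above aligns it with the dominant eigendirection.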
Examples

Select a stiff system modeling a chemical reaction.

In[58]:= system = GetNDSolveProblem["Robertson"];

This applies a built-in explicit Runge-Kutta method to the stiff system. By default stiffness detection is enabled, since it only has a small impact on the running time.

In[59]:= NDSolve[system, Method -> "ExplicitRungeKutta"];

NDSolve::ndstf : At T == 0.012555829610695773`, system appears to be stiff. Methods Automatic, BDF or StiffnessSwitching may be more appropriate.

The coefficients of the Dormand-Prince 5(4) pair are of the form (3), so stiffness detection is enabled.

In[60]:= NDSolve[system, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5, "Coefficients" -> DOPRICoefficients}];
Since no "LinearStabilityBoundary" property has been specified, a default value is chosen. In this case the value corresponds to a generic method of order 5.

In[61]:= genlsb = NDSolve`LinearStabilityBoundary[5]

Out[61]= Root[240 + 120 #1 + 60 #1^2 + 20 #1^3 + 5 #1^4 + #1^5 &, 1]
You can set up an equation in terms of the linear stability function and solve it exactly to find the point where the contour crosses the negative real axis.

In[62]:= DOPRIlsb = Reduce[Abs[DOPRIsf] == 1 && z < 0, z]

Out[62]= z == Root[600 + 300 #1 + 100 #1^2 + 25 #1^3 + 5 #1^4 + #1^5 &, 1]
The default generic value is very slightly smaller in magnitude than the computed value.

In[63]:= N[{genlsb, DOPRIlsb[[2]]}]

Out[63]= {-3.21705, -3.30657}
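The crossing can also be located numerically. A sketch in Python that bisects |R(z)| - 1 on the negative real axis, using the stability polynomial of the fifth-order Dormand-Prince formula found above (the bracketing interval is an illustrative choice):

```python
def R(z):
    """Stability polynomial of the fifth-order Dormand-Prince formula."""
    return 1 + z + z**2/2 + z**3/6 + z**4/24 + z**5/120 + z**6/600

def stability_boundary(lo=-4.0, hi=-1.0, tol=1e-10):
    """Bisect |R(z)| - 1 = 0 for z < 0; assumes one sign change in [lo, hi]."""
    f = lambda z: abs(R(z)) - 1.0
    assert f(lo) > 0 > f(hi)              # bracket must straddle the boundary
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

boundary = stability_boundary()
```

The result agrees with the exact Root value computed above, approximately -3.30657.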
In general, there may be more than one point of intersection, and it may be necessary to choose the appropriate solution.
The following definition sets the value of the linear stability boundary.

In[64]:= DOPRICoefficients[5]["LinearStabilityBoundary"] = Root[600 + 300 #1 + 100 #1^2 + 25 #1^3 + 5 #1^4 + #1^5 &, 1, 0];

Using the new value for this example does not affect the time at which stiffness is detected.

In[65]:= NDSolve[system, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5, "Coefficients" -> DOPRICoefficients}];

The Fehlberg 4(5) method does not have the correct coefficient structure (3) required for stiffness detection, since c_s = 1/2 ≠ 1. The default value "StiffnessTest" -> Automatic checks to see if the method coefficients provide a stiffness detection capability; if they do, then stiffness detection is enabled.
Step Control Revisited

There are some reasons to look at alternatives to the standard Integral step controller (1) when considering mildly stiff problems.

This system models a chemical reaction.

In[66]:= system = GetNDSolveProblem["Robertson"];

This defines an explicit Runge-Kutta method based on the Dormand-Prince coefficients that does not use stiffness detection.

In[67]:= IERK = {"ExplicitRungeKutta", "Coefficients" -> DOPRICoefficients, "DifferenceOrder" -> 5, "StiffnessTest" -> False};

This solves the system and plots the step sizes that are taken using the utility function StepDataPlot.

In[68]:= isol = NDSolve[system, Method -> IERK];
         StepDataPlot[isol]

Out[69]= [plot of step sizes over 0 <= T <= 0.30, oscillating between roughly 0.0010 and 0.0015]
Solving a stiff or mildly stiff problem with the standard step-size controller leads to large oscillations, sometimes leading to a number of undesirable step-size rejections. The study of this issue is known as step-control stability.
It can be studied by matching the linear stability regions for the high- and low-order methods in an embedded pair. One approach to addressing the oscillation is to derive special methods, but this compromises the local accuracy.
PI Step Control

An appealing alternative to Integral step control (1) is Proportional-Integral or PI step control. In this case the step size is selected using the local error in two successive integration steps according to the formula:

h_{n+1} = h_n (Tol / ||err_n||)^(k1/p̃) (||err_{n-1}|| / ||err_n||)^(k2/p̃)    (1)

This has the effect of damping and hence gives a smoother step-size sequence.

Note that Integral step control (1) is a special case of (1) and is used if a step is rejected: k1 = 1, k2 = 0.

The option "StepSizeControlParameters" -> {k1, k2} can be used to specify the values of k1 and k2. The scaled error estimate in (1) is taken to be ||err_{n-1}|| = ||err_n|| for the first integration step.
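A sketch of the PI update in Python; the defaults below are the generic Gustafsson parameters mentioned in the next section, and the numeric inputs are illustrative:

```python
def pi_controller(h, err, err_prev, tol, p_tilde, k1=0.3, k2=0.4):
    """PI step control: h_{n+1} = h (tol/err)^(k1/p) (err_prev/err)^(k2/p).

    Defaults k1 = 3/10, k2 = 2/5 follow Gustafsson; k1 = 1, k2 = 0
    recovers the Integral controller."""
    return h * (tol / err) ** (k1 / p_tilde) * (err_prev / err) ** (k2 / p_tilde)

# With k1 = 1, k2 = 0 this reduces to the I controller.
h_i = pi_controller(0.1, 1e-8, 1e-8, 1e-6, 2.0, k1=1.0, k2=0.0)
```

When the error estimate grows from one step to the next, the second factor is less than one and damps the proposed increase, which is the source of the smoother step-size sequences shown below.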
Examples

Stiff Problem

This defines a method similar to IERK that uses the option "StepSizeControlParameters" to specify a PI controller. Here you use generic control parameters suggested by Gustafsson: k1 = 3/10, k2 = 2/5.

This specifies the step-control parameters.

In[70]:= PIERK = {"ExplicitRungeKutta", "Coefficients" -> DOPRICoefficients, "DifferenceOrder" -> 5, "StiffnessTest" -> False, "StepSizeControlParameters" -> {3/10, 2/5}};
Solving the system again, it can be observed that the step-size sequence is now much smoother.

In[71]:= pisol = NDSolve[system, Method -> PIERK];
         StepDataPlot[pisol]

Out[72]= [plot of step sizes over 0 <= T <= 0.30, now varying smoothly between roughly 0.0010 and 0.0015]
Nonstiff Problem

In general the I step controller is able to take larger steps for a nonstiff problem than the PI step controller, as the following example illustrates.

Select and solve a nonstiff system using the I step controller.

In[73]:= system = GetNDSolveProblem["BrusselatorODE"];

In[74]:= isol = NDSolve[system, Method -> IERK];
StepDataPlot[isol]

Out[75]= [plot of the step sizes, ranging from roughly 0.015 to 0.2 over 0 <= T <= 20]
Using the PI step controller the step sizes are slightly smaller.

In[76]:= pisol = NDSolve[system, Method -> PIERK];
StepDataPlot[pisol]

Out[77]= [plot of the step sizes, ranging from roughly 0.010 to 0.15 over 0 <= T <= 20]
For this reason, the default setting for "StepSizeControlParameters" is Automatic, which is interpreted as follows:

† Use the I step controller if "StiffnessTest" -> False.

† Use the PI step controller if "StiffnessTest" -> True.
Fine-Tuning

Instead of using (1) directly, it is common practice to include safety factors, which ensure that the error of the next step is acceptable with high probability, thereby preventing unwanted step rejections. The option "StepSizeSafetyFactors" -> {s1, s2} specifies the safety factors to use in the step-size estimate, so that (1) becomes:

    h_(n+1) = h_n s1 (s2 Tol / ||err_n||)^(k1/p) (||err_(n-1)|| / ||err_n||)^(k2/p)

Here s1 is an absolute factor and s2 typically scales with the order of the method.

The option "StepSizeRatioBounds" -> {srmin, srmax} specifies bounds on the next step size to take, such that:

    srmin <= |h_(n+1) / h_n| <= srmax
Option summary

option name                     default value
"Coefficients"                  EmbeddedExplicitRungeKuttaCoefficients   specify the coefficients of the explicit Runge-Kutta method
"DifferenceOrder"               Automatic       specify the order of local accuracy
"EmbeddedDifferenceOrder"       Automatic       specify the order of the embedded method in a pair of explicit Runge-Kutta methods
"StepSizeControlParameters"     Automatic       specify the PI step-control parameters
"StepSizeRatioBounds"           {1/8, 4}        specify the bounds on a relative change in the new step size
"StepSizeSafetyFactors"         Automatic       specify the safety factors to use in the step-size estimate
"StiffnessTest"                 Automatic       specify whether to use the stiffness detection capability

Options of the method "ExplicitRungeKutta".
The default setting of Automatic for the option "DifferenceOrder" selects the default coefficient order based on the problem, initial values, and local error tolerances, balanced against the work of the method for each coefficient set.

The default setting of Automatic for the option "EmbeddedDifferenceOrder" specifies that the default order of the embedded method is one lower than the method order. This depends on the value of the "DifferenceOrder" option.

The default setting of Automatic for the option "StepSizeControlParameters" uses the values {1, 0} if stiffness detection is active and {3/10, 2/5} otherwise.

The default setting of Automatic for the option "StepSizeSafetyFactors" uses the values {17/20, 9/10} if the I step controller is used and {9/10, 9/10} if the PI step controller is used. The step controller used depends on the values of the options "StepSizeControlParameters" and "StiffnessTest".

The default setting of Automatic for the option "StiffnessTest" activates the stiffness test if the coefficients have the form (3).
"ImplicitRungeKutta" Method for NDSolve

Introduction

Implicit Runge-Kutta methods have a number of desirable properties. The Gauss-Legendre methods, for example, are self-adjoint, meaning that they provide the same solution when integrating forward or backward in time.

This loads packages defining some example problems and utility functions.

In[3]:= Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];

Coefficients

A generic framework for implicit Runge-Kutta methods has been implemented. The focus so far is on methods with interesting geometric properties, and it currently covers the following schemes:

† "ImplicitRungeKuttaGaussCoefficients"

† "ImplicitRungeKuttaLobattoIIIACoefficients"

† "ImplicitRungeKuttaLobattoIIIBCoefficients"

† "ImplicitRungeKuttaLobattoIIICCoefficients"

† "ImplicitRungeKuttaRadauIACoefficients"

† "ImplicitRungeKuttaRadauIIACoefficients"

The derivation of the method coefficients can be carried out to arbitrary order and arbitrary precision.

Coefficient Generation

† Start with the definition of the polynomial defining the abscissas of the s-stage coefficients. For example, the abscissas for Gauss-Legendre methods are defined as the roots of

    d^s/dx^s [x^s (1 - x)^s].

† Univariate polynomial factorization gives the underlying irreducible polynomials defining the roots of the polynomials.

† Root objects are constructed to represent the solutions (using unique root isolation and Jenkins-Traub for the numerical approximation).

† Root objects are then approximated numerically to obtain precision coefficients.

† Condition estimates for the Vandermonde systems governing the coefficients yield the precision to take in approximating the roots numerically.

† Specialized solvers for nonconfluent Vandermonde systems are then used to solve equations for the coefficients (see [GVL96]).

† One step of iterative refinement is used to polish the approximate solutions and to check that the coefficients are obtained to the requested precision.

This generates the coefficients for the two-stage fourth-order Gauss-Legendre method to 50 decimal digits of precision.

In[5]:=
NDSolve`ImplicitRungeKuttaGaussCoefficients[4, 50]

Out[5]= {{{0.25000000000000000000000000000000000000000000000000,
    -0.038675134594812882254574390250978727823800875635063},
   {0.53867513459481288225457439025097872782380087563506,
    0.25000000000000000000000000000000000000000000000000}},
  {0.50000000000000000000000000000000000000000000000000,
   0.50000000000000000000000000000000000000000000000000},
  {0.21132486540518711774542560974902127217619912436494,
   0.78867513459481288225457439025097872782380087563506}}

The coefficients have the form {a, b^T, c^T}.
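The abscissa definition in the coefficient-generation outline above can be cross-checked numerically: up to a constant factor, d^s/dx^s [x^s (1 - x)^s] is the shifted Legendre polynomial P_s(2x - 1), so its roots can be bracketed and bisected using only the standard library. This Python sketch (all names are illustrative; the tutorial itself works with exact Root objects) reproduces the abscissas c shown in Out[5].

```python
def shifted_legendre(s, x):
    """Evaluate P_s(2 x - 1) by the three-term Legendre recurrence.
    Up to a constant factor this equals d^s/dx^s [x^s (1 - x)^s]."""
    t = 2.0 * x - 1.0
    if s == 0:
        return 1.0
    p0, p1 = 1.0, t
    for n in range(1, s):
        p0, p1 = p1, ((2 * n + 1) * t * p1 - n * p0) / (n + 1)
    return p1

def gauss_abscissas(s, tol=1e-14):
    """The s Gauss-Legendre abscissas in (0, 1): scan a grid for sign
    changes of P_s(2x - 1) and bisect each bracket to tolerance tol."""
    grid = 1000 * s
    xs = [i / grid for i in range(grid + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = shifted_legendre(s, a), shifted_legendre(s, b)
        if fa == 0.0:
            # Exact zero on the grid; avoid recording it twice.
            if not roots or a - roots[-1] > tol:
                roots.append(a)
        elif fa * fb < 0.0:
            while b - a > tol:
                m = 0.5 * (a + b)
                if shifted_legendre(s, a) * shifted_legendre(s, m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Two stages: c = ((3 - Sqrt[3])/6, (3 + Sqrt[3])/6), matching Out[5].
print(gauss_abscissas(2))
```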
This generates the coefficients for the two-stage fourth-order Gauss-Legendre method exactly. For high-order methods, generating the coefficients exactly can often take a very long time.

In[6]:= NDSolve`ImplicitRungeKuttaGaussCoefficients[4, Infinity]

Out[6]= {{{1/4, (3 - 2 Sqrt[3])/12}, {(3 + 2 Sqrt[3])/12, 1/4}},
  {1/2, 1/2}, {(3 - Sqrt[3])/6, (3 + Sqrt[3])/6}}
This generates the coefficients for the five-stage ninth-order RadauIA implicit Runge-Kutta method to 20 decimal digits of precision.

In[7]:= NDSolve`ImplicitRungeKuttaRadauIACoefficients[10, 20]

Out[7]= {{{0.040000000000000000000, -0.087618018725274235050, 0.085317987638600293760, -0.055818078483298114837, 0.018118109569972056127},
   {0.040000000000000000000, 0.12875675325490976116, -0.047477730403197434295, 0.026776985967747870688, -0.0082961444756796453993},
   {0.040000000000000000000, 0.23310008036710237092, 0.16758507013524896344, -0.032883343543501401775, 0.0086077606722332473607},
   {0.040000000000000000000, 0.21925333267709602305, 0.33134489917971587453, 0.14621486784749350665, -0.013656113342429231907},
   {0.040000000000000000000, 0.22493691761630663460, 0.30390571559725175840, 0.30105430635402060050, 0.072998864317903324306}},
  {0.040000000000000000000, 0.22310390108357074440, 0.31182652297574125408, 0.28135601514946206019, 0.14371356079122594132},
  {0, 0.13975986434378055215, 0.41640956763108317994, 0.72315698636187617232, 0.94289580388548231781}}
Examples

Load an example problem.

In[8]:= system = GetNDSolveProblem["PerturbedKepler"];
vars = system["DependentVariables"];

This problem has two invariants that should remain constant. A numerical method may not be able to conserve these invariants.

In[10]:= invs = system["Invariants"]

Out[10]= {-1/(400 (Y1[T]^2 + Y2[T]^2)^(3/2)) - 1/Sqrt[Y1[T]^2 + Y2[T]^2] +
   1/2 (Y3[T]^2 + Y4[T]^2), -Y2[T] Y3[T] + Y1[T] Y4[T]}
This solves the system using an implicit Runge|Kutta Gauss method. The order of the scheme is selected using the “DifferenceOrder“ method option. In[11]:=
sol = NDSolve@system, Method Ø 8“FixedStep“, Method Ø 8“ImplicitRungeKutta“, “DifferenceOrder“ Ø 10<<, StartingStepSize Ø 1 ê 10D
Out[11]= 88Y1 @TD Ø InterpolatingFunction@880., 100.<<, <>D@TD,
Y2 @TD Ø InterpolatingFunction@880., 100.<<, <>D@TD, Y3 @TD Ø InterpolatingFunction@880., 100.<<, <>D@TD, Y4 @TD Ø InterpolatingFunction@880., 100.<<, <>D@TD<<
A plot of the error in the invariants shows an increase as the integration proceeds.

In[12]:= InvariantErrorPlot[invs, vars, T, sol,
   PlotStyle -> {Red, Blue}, InvariantErrorSampleRate -> 1]

Out[12]= [plot of the invariant errors, growing to about 2*10^-10 over 0 <= T <= 100]
The "ImplicitSolver" method of "ImplicitRungeKutta" has options AccuracyGoal and PrecisionGoal that specify the absolute and relative error to aim for in solving the nonlinear system of equations. These options have the same default values as the corresponding options in NDSolve, since often there is little point in solving the nonlinear system to much higher accuracy than the local error of the method.

However, for certain types of problems it can be useful to solve the nonlinear system up to the working precision.

In[13]:= sol = NDSolve[system, Method -> {"FixedStep",
     Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10,
       "ImplicitSolver" -> {"Newton", AccuracyGoal -> MachinePrecision,
         PrecisionGoal -> MachinePrecision,
         "IterationSafetyFactor" -> 1}}}, StartingStepSize -> 1/10]

Out[13]= {{Y1[T] -> InterpolatingFunction[{{0., 100.}}, <>][T],
   Y2[T] -> InterpolatingFunction[{{0., 100.}}, <>][T],
   Y3[T] -> InterpolatingFunction[{{0., 100.}}, <>][T],
   Y4[T] -> InterpolatingFunction[{{0., 100.}}, <>][T]}}
The first invariant is the Hamiltonian of the system, and the error is now bounded, as it should be, since the Gauss implicit Runge-Kutta method is a symplectic integrator.
The second invariant is conserved exactly (up to roundoff) since the Gauss implicit Runge-Kutta method conserves quadratic invariants.

In[14]:= InvariantErrorPlot[invs, vars, T, sol,
   PlotStyle -> {Red, Blue}, InvariantErrorSampleRate -> 1]

Out[14]= [plot of the invariant errors, now remaining bounded by about 6*10^-11 over 0 <= T <= 100]
This defines the implicit midpoint method as the one-stage implicit Runge-Kutta method of order two. For this problem it can be more efficient to use a fixed-point iteration instead of a Newton iteration to solve the nonlinear system.

In[15]:= ImplicitMidpoint = {"FixedStep",
    Method -> {"ImplicitRungeKutta",
      "Coefficients" -> "ImplicitRungeKuttaGaussCoefficients",
      "DifferenceOrder" -> 2,
      "ImplicitSolver" -> {"FixedPoint",
        "AccuracyGoal" -> MachinePrecision,
        "PrecisionGoal" -> MachinePrecision,
        "IterationSafetyFactor" -> 1}}};

In[16]:= NDSolve[system, {T, 0, 1}, Method -> ImplicitMidpoint,
   StartingStepSize -> 1/100]

Out[16]= {{Y1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
   Y2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
   Y3[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
   Y4[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
At present, the implicit Runge-Kutta method framework does not use banded Newton techniques for uncoupling the nonlinear system.
Option Summary

"ImplicitRungeKutta" Options

option name                     default value
"Coefficients"                  "ImplicitRungeKuttaGaussCoefficients"   specify the coefficients to use in the implicit Runge-Kutta method
"DifferenceOrder"               Automatic       specify the order of local accuracy of the method
"ImplicitSolver"                "Newton"        specify the solver to use for the nonlinear system; valid settings are "FixedPoint" or "Newton"
"StepSizeControlParameters"     Automatic       specify the step-control parameters
"StepSizeRatioBounds"           {1/8, 4}        specify the bounds on a relative change in the new step size
"StepSizeSafetyFactors"         Automatic       specify the safety factors to use in the step-size estimate

Options of the method "ImplicitRungeKutta".

The default setting of Automatic for the option "StepSizeSafetyFactors" uses the values {9/10, 9/10}.
"ImplicitSolver" Options

option name                     default value
AccuracyGoal                    Automatic       specify the absolute tolerance to use in solving the nonlinear system
"IterationSafetyFactor"         1/100           specify the safety factor to use in solving the nonlinear system
MaxIterations                   Automatic       specify the maximum number of iterations to use in solving the nonlinear system
PrecisionGoal                   Automatic       specify the relative tolerance to use in solving the nonlinear system

Common options of "ImplicitSolver".
option name                             default value
"JacobianEvaluationParameter"           1/1000          specify when to recompute the Jacobian matrix in Newton iterations
"LinearSolveMethod"                     Automatic       specify the linear solver to use in Newton iterations
"LUDecompositionEvaluationParameter"    6/5             specify when to compute LU decompositions in Newton iterations

Options specific to the "Newton" method of "ImplicitSolver".
"SymplecticPartitionedRungeKutta" Method for NDSolve

Introduction

When numerically solving Hamiltonian dynamical systems it is advantageous if the numerical method yields a symplectic map.

† The phase space of a Hamiltonian system is a symplectic manifold on which there exists a natural symplectic structure in the canonically conjugate coordinates.

† The time evolution of a Hamiltonian system is such that the Poincaré integral invariants associated with the symplectic structure are preserved.

† A symplectic integrator computes exactly, assuming infinite-precision arithmetic, the evolution of a nearby Hamiltonian, whose phase-space structure is close to that of the original system.

If the Hamiltonian can be written in separable form, H(p, q) = T(p) + V(q), there exists an efficient class of explicit symplectic numerical integration methods.

An important property of symplectic numerical methods when applied to Hamiltonian systems is that a nearby Hamiltonian is approximately conserved for exponentially long times (see [BG94], [HL97], and [R99]).
Hamiltonian Systems

Consider a differential equation

    dy/dt = F(t, y),  y(t0) = y0.   (1)
A d-degree-of-freedom Hamiltonian system is a particular instance of (1) with y = (p1, ..., pd, q1, ..., qd)^T, where

    dy/dt = J^(-1) ∇H.   (2)

Here ∇ represents the gradient operator:

    ∇ = (∂/∂p1, ..., ∂/∂pd, ∂/∂q1, ..., ∂/∂qd)^T

and J is the skew-symmetric matrix:

    J = (  0  I )
        ( -I  0 )

where I and 0 are the identity and zero d x d matrices. The components of q are often referred to as position or coordinate variables and the components of p as the momenta.

If H is autonomous, dH/dt = 0. Then H is a conserved quantity that remains constant along solutions of the system. In applications, this usually corresponds to conservation of energy.

A numerical method applied to a Hamiltonian system (2) is said to be symplectic if it produces a symplectic map. That is, let (p*, q*) = ψ(p, q) be a C^1 transformation defined in a domain Ω such that, for all (p, q) in Ω,

    ψ'^T J ψ' = (∂(p*, q*)/∂(p, q))^T J (∂(p*, q*)/∂(p, q)) = J,

where the Jacobian of the transformation is:

    ψ' = ∂(p*, q*)/∂(p, q) = ( ∂p*/∂p  ∂p*/∂q )
                             ( ∂q*/∂p  ∂q*/∂q ).
The flow of a Hamiltonian system is depicted together with the projection onto the planes formed by canonically conjugate coordinate and momenta pairs. The sum of the oriented areas remains constant as the flow evolves in time.

[figure: the flow of a Hamiltonian system, with projections onto the (q1, p1) and (q2, p2) planes of the oriented areas A1 and A2 enclosed by the loop integrals of p dq]
Partitioned Runge-Kutta Methods

It is sometimes possible to integrate certain components of (1) using one Runge-Kutta method and other components using a different Runge-Kutta method. The overall s-stage scheme is called a partitioned Runge-Kutta method, and the free parameters are represented by two Butcher tableaux:

    a11 ... a1s        A11 ... A1s
     :       :          :       :
    as1 ... ass        As1 ... Ass
    ------------       ------------
    b1  ... bs         B1  ... Bs
Symplectic Partitioned Runge-Kutta (SPRK) Methods

For general Hamiltonian systems, symplectic Runge-Kutta methods are necessarily implicit. However, for separable Hamiltonians H(p, q, t) = T(p) + V(q, t) there exist explicit schemes corresponding to symplectic partitioned Runge-Kutta methods.
Instead of the general pair of tableaux above, the free parameters now take either the form:

    0   0   ...      0        B1  0   ...  0
    b1  0   ...      0        B1  B2  ...  0
    :   :            :        :   :        :
    b1  ... b_(s-1)  0        B1  B2  ... Bs
    -----------------         --------------
    b1  ... b_(s-1)  bs       B1  B2  ... Bs

or the form:

    b1  0   ...  0        0   0   ...          0
    b1  b2  ...  0        B1  0   ...          0
    :   :        :        :   :                :
    b1  b2  ... bs        B1  ... B_(s-1)      0
    --------------        --------------------
    b1  b2  ... bs        B1  ... B_(s-1)     Bs

The 2 s free parameters of the second form are sometimes represented using the shorthand notation [b1, ..., bs](B1, ..., Bs).

The differential system for a separable Hamiltonian system can be written as:

    dpi/dt = f(q, t) = -∂V(q, t)/∂qi,  dqi/dt = g(p) = ∂T(p)/∂pi,  i = 1, ..., d.

In general the force evaluations -∂V(q, t)/∂q are computationally dominant, and the second form is preferred over the first, since it is possible to save one force evaluation per time step when dense output is required.
Standard Algorithm

The structure of the second form permits a particularly simple implementation (see for example [SC94]).

Algorithm 1 (Standard SPRK)

    P0 = pn
    Q1 = qn
    for i = 1, ..., s
        Pi = P_(i-1) + h_(n+1) b_i f(Q_i, t_n + C_i h_(n+1))
        Q_(i+1) = Q_i + h_(n+1) B_i g(P_i)
    return p_(n+1) = P_s and q_(n+1) = Q_(s+1)

The time weights are given by C_j = sum_(i=1)^(j-1) B_i, j = 1, ..., s.

If B_s = 0 then Algorithm 1 effectively reduces to an (s-1)-stage scheme since it has the First Same As Last (FSAL) property.
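Algorithm 1 translates almost line for line into code. Below is a minimal Python sketch (the function name and coefficient choice are illustrative, not taken from NDSolve) applying the standard SPRK loop to the harmonic oscillator used later in this section, with the two-stage leapfrog coefficients b = (1/2, 1/2), B = (1, 0); since B_s = 0, the scheme has the FSAL property described above.

```python
def sprk_step(p, q, f, g, h, b, B, t=0.0):
    """One step of Algorithm 1 (standard SPRK) for a separable
    Hamiltonian, with dp/dt = f(q, t) and dq/dt = g(p).
    The time weight C accumulates the B coefficients."""
    C = 0.0
    for bi, Bi in zip(b, B):
        p = p + h * bi * f(q, t + C * h)   # P_i update
        q = q + h * Bi * g(p)              # Q_(i+1) update
        C += Bi
    return p, q

# Harmonic oscillator H = p^2/2 + q^2/2: f(q, t) = -q, g(p) = p.
f = lambda q, t: -q
g = lambda p: p
b, B = (0.5, 0.5), (1.0, 0.0)   # leapfrog; B_s = 0 (FSAL)

p, q, h = 0.0, 1.0, 1 / 25
for n in range(25 * 100):        # integrate to T = 100
    p, q = sprk_step(p, q, f, g, h, b, B, t=n * h)
energy_error = abs((p * p + q * q) / 2 - 0.5)
```

Being symplectic, the sketch keeps the energy error bounded over the whole interval, in contrast to the explicit Euler behavior shown later.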
Example

This loads some useful packages.

In[1]:= Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
The Harmonic Oscillator

The harmonic oscillator is a simple Hamiltonian problem that models a material point attached to a spring. For simplicity consider unit mass and spring constant, for which the Hamiltonian is given in separable form:

    H(p, q) = T(p) + V(q) = p^2/2 + q^2/2.

The equations of motion are given by:

    dp/dt = -∂H/∂q = -q,  dq/dt = ∂H/∂p = p,  q(0) = 1,  p(0) = 0.   (1)

Input

In[3]:= system = GetNDSolveProblem["HarmonicOscillator"];
eqs = {system["System"], system["InitialConditions"]};
vars = system["DependentVariables"];
H = system["Invariants"];
time = {T, 0, 100};
step = 1/25;
Explicit Euler Method

Numerically integrate the equations of motion for the harmonic oscillator using the explicit Euler method.

In[9]:= solee = NDSolve[eqs, vars, time, Method -> "ExplicitEuler",
   StartingStepSize -> step, MaxSteps -> Infinity];
Since the method is dissipative, the trajectory spirals into or away from the fixed point at the origin.

In[10]:= ParametricPlot[Evaluate[vars /. First[solee]],
   Evaluate[time], PlotPoints -> 100]

Out[10]= [plot: the numerical trajectory spirals outward from the origin, reaching a radius of about 6]
A dissipative method typically exhibits linear error growth in the value of the Hamiltonian.

In[11]:= InvariantErrorPlot[H, vars, T, solee, PlotStyle -> Green]

Out[11]= [plot: the error in the Hamiltonian grows roughly linearly to about 25 over 0 <= T <= 100]
Symplectic Method

Numerically integrate the equations of motion for the harmonic oscillator using a symplectic partitioned Runge-Kutta method.

In[12]:= sol = NDSolve[eqs, vars, time,
   Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 2,
     "PositionVariables" -> {Y1[T]}}, StartingStepSize -> step,
   MaxSteps -> Infinity];
The solution is now a closed curve.

In[13]:= ParametricPlot[Evaluate[vars /. First[sol]], Evaluate[time]]

Out[13]= [plot: the trajectory is a closed curve of radius 1 about the origin]
In contrast to dissipative methods, symplectic integrators yield an error in the Hamiltonian that remains bounded.

In[14]:= InvariantErrorPlot[H, vars, T, sol, PlotStyle -> Blue]

Out[14]= [plot: the error in the Hamiltonian oscillates but remains bounded by about 2*10^-4 over 0 <= T <= 100]
Rounding Error Reduction In certain cases, lattice symplectic methods exist and can avoid step-by-step roundoff accumulation, but such an approach is not always possible [ET92].
Consider the previous example, where the combination of step size and order of the method is now chosen such that the error in the Hamiltonian is around the order of unit roundoff in IEEE double-precision arithmetic.

In[15]:= solnoca = NDSolve[eqs, vars, time,
   Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 10,
     "PositionVariables" -> {Y1[T]}}, StartingStepSize -> step,
   MaxSteps -> Infinity, "CompensatedSummation" -> False];
InvariantErrorPlot[H, vars, T, solnoca, PlotStyle -> Blue]

Out[16]= [plot: the error in the Hamiltonian drifts upward to about 1.5*10^-15 over 0 <= T <= 100]
There is a curious drift in the error in the Hamiltonian that is actually a numerical artifact of floating-point arithmetic. This phenomenon can have an impact on long-time integrations. This section describes the formulation used by "SymplecticPartitionedRungeKutta" in order to reduce the effect of such errors.

There are two types of errors in integrating a flow numerically: those along the flow and those transverse to the flow. In contrast to dissipative systems, the rounding errors in Hamiltonian systems that are transverse to the flow are not damped asymptotically.
[figure: the error of a numerical step relative to the exact time-h flow, split into a component e_t along the flow and a component e_y transverse to it]

Many numerical methods for ordinary differential equations involve computations of the form:

    y_(n+1) = y_n + d_n,

where the increments d_n are usually smaller in magnitude than the approximations y_n. Let e(x) denote the exponent and m(x), 1 > m(x) >= 1/b, the mantissa of a number x in precision-p, radix-b arithmetic: x = m(x) b^e(x). Then you can write:

    y_n = m(y_n) b^e(y_n) = y_n^h + y_n^l b^e(d_n)

and

    d_n = m(d_n) b^e(d_n) = d_n^h + d_n^l b^(e(y_n) - p).

[diagram: y_n and d_n aligned according to their exponents, showing the high and low parts y_n^h, y_n^l and d_n^h, d_n^l; the low part d_n^l holds the digits below the precision of y_n]

Of interest is an efficient way of computing the quantities d_n^l, which effectively represent the radix-b digits discarded due to the difference in the exponents of y_n and d_n.
Compensated Summation

The basic motivation for compensated summation is to simulate 2n-bit addition using only n-bit arithmetic.
Example

This repeatedly adds a fixed amount to a starting value. Cumulative roundoff error has a significant influence on the result.

In[17]:= reps = 10^6; base = 0.; inc = 0.1;
Do[base = base + inc, {reps}];
InputForm[base]

Out[21]//InputForm= 100000.00000133288
In many applications the increment may vary and the number of operations is not known in advance.
Algorithm

Compensated summation (see for example [B87] and [H96]) computes the rounding error along with the sum, so that y_(n+1) = y_n + h f(y_n) is replaced by:

Algorithm 2 (Compensated Summation)

    yerr = 0
    for n = 1, ..., N
        Dy_n = h f(y_n) + yerr
        y_(n+1) = y_n + Dy_n
        yerr = (y_n - y_(n+1)) + Dy_n

The algorithm is carried out component-wise for vectors.
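Algorithm 2 is the classical Kahan compensated summation. A minimal Python sketch (the function name is illustrative) replays the repeated addition of 0.1 from the example above, once naively and once with the carried correction term:

```python
def compensated_add(y, dy, yerr):
    """One update of Algorithm 2: add dy to y, carrying the rounding
    error yerr forward into the next addition."""
    dy = dy + yerr
    y_new = y + dy
    yerr = (y - y_new) + dy   # the part of dy that was rounded away
    return y_new, yerr

naive = 0.0
total, err = 0.0, 0.0
for _ in range(10 ** 6):
    naive += 0.1
    total, err = compensated_add(total, 0.1, err)

# naive accumulates roundoff (cf. 100000.00000133288 above), while the
# compensated sum is far closer to 100000.
```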
Example The function CompensatedPlus (in the Developer` context) implements the algorithm for compensated summation.
By repeatedly feeding back the rounding error from one sum into the next, the effect of rounding errors is significantly reduced.

In[22]:= err = 0.; base = 0.; inc = 0.1;
Do[{base, err} = Developer`CompensatedPlus[base, inc, err], {reps}];
InputForm[base]

Out[26]//InputForm= 100000.

An undocumented option CompensatedSummation controls whether built-in integration methods in NDSolve use compensated summation.
An Alternative Algorithm

There are various ways that compensated summation can be used. One way is to compute the error in every addition update in the main loop of Algorithm 1. An alternative algorithm, proposed because of its more general applicability together with its reduced arithmetic cost, is given next. The essential ingredients are the increments DP_i = P_i - p_n and DQ_i = Q_i - q_n.

Algorithm 3 (Increment SPRK)

    DP_0 = 0
    DQ_1 = 0
    for i = 1, ..., s
        DP_i = DP_(i-1) + h_(n+1) b_i f(q_n + DQ_i, t_n + C_i h_(n+1))
        DQ_(i+1) = DQ_i + h_(n+1) B_i g(p_n + DP_i)
    return Dp_(n+1) = DP_s and Dq_(n+1) = DQ_(s+1)

The desired values p_(n+1) = p_n + Dp_(n+1) and q_(n+1) = q_n + Dq_(n+1) are obtained using compensated summation. Compensated summation could also be used in every addition update in the main loop of Algorithm 3, but our experiments have shown that this adds a non-negligible overhead for a relatively small gain in accuracy.
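Algorithm 3 can be sketched as follows (Python, with illustrative names): the stage loop accumulates only the small increments DP and DQ, and the caller folds them into (p, q) with compensated additions, so the only significant rounding occurs once per step.

```python
def sprk_increment_step(p, q, f, g, h, b, B, t=0.0):
    """One step of Algorithm 3 (increment SPRK): return the increments
    dP = P_s - p_n and dQ = Q_(s+1) - q_n rather than the new values."""
    dP, dQ, C = 0.0, 0.0, 0.0
    for bi, Bi in zip(b, B):
        dP += h * bi * f(q + dQ, t + C * h)
        dQ += h * Bi * g(p + dP)
        C += Bi
    return dP, dQ

# Harmonic oscillator (f(q, t) = -q, g(p) = p) driven with leapfrog
# coefficients; the increments are added to (p, q) using compensated
# summation, as in Algorithm 2.
f = lambda q, t: -q
g = lambda p: p
p, q, perr, qerr = 0.0, 1.0, 0.0, 0.0
h = 1 / 25
for n in range(25 * 100):
    dP, dQ = sprk_increment_step(p, q, f, g, h, (0.5, 0.5), (1.0, 0.0), n * h)
    # compensated additions p <- p + dP and q <- q + dQ
    dP += perr; pn = p + dP; perr = (p - pn) + dP; p = pn
    dQ += qerr; qn = q + dQ; qerr = (q - qn) + dQ; q = qn
energy_error = abs((p * p + q * q) / 2 - 0.5)
```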
Numerical Illustration

Rounding Error Model

The amount of expected roundoff error in the relative error of the Hamiltonian for the harmonic oscillator (1) will now be quantified. A probabilistic average-case analysis is considered in preference to a worst-case upper bound. For a one-dimensional random walk with equal probability of a deviation in either direction, the expected absolute distance after n steps is O(sqrt(n)).

The relative error of a floating-point operation +, -, *, / using IEEE round-to-nearest mode satisfies the following bound [K93]:

    e_round <= 1/2 b^(-p+1) ~= 1.11022*10^-16,

where b = 2 is the base used for representing floating-point numbers on the machine and p = 53 for IEEE double precision. Therefore the roundoff error after n steps is expected to be approximately

    k e_round sqrt(n)

for some constant k.

In the examples that follow, a constant step size of 1/25 is used and the integration is performed over the interval [0, 80000] for a total of 2*10^6 integration steps. The error in the Hamiltonian is sampled every 200 integration steps.

The 8th-order 15-stage (FSAL) method D of Yoshida is used. Similar results have been obtained for the 6th-order 7-stage (FSAL) method A of Yoshida with the same number of integration steps and a step size of 1/160.
Without Compensated Summation The relative error in the Hamiltonian is displayed here for the standard formulation in Algorithm 1 (green) and for the increment formulation in Algorithm 3 (red) for the Harmonic oscillator (1).
Algorithm 1 for a 15-stage method corresponds to n = 15 * 2*10^6 = 3*10^7. In the incremental Algorithm 3 the internal stages are all of the order of the step size, and the only significant rounding error occurs at the end of each integration step; thus n = 2*10^6, which is in good agreement with the observed improvement. This shows that for Algorithm 3, with sufficiently small step sizes, the rounding error growth is independent of the number of stages of the method, which is particularly advantageous for high order.
With Compensated Summation The relative error in the Hamiltonian is displayed here for the increment formulation in Algorithm 3 without compensated summation (red) and with compensated summation (blue) for the Harmonic oscillator (1).
Using compensated summation with Algorithm 3, the error growth appears to satisfy a random walk with deviation h e_round, so that it has been reduced by a factor proportional to the step size.
Arbitrary Precision The relative error in the Hamiltonian is displayed here for the increment formulation in Algorithm 3 with compensated summation using IEEE double-precision arithmetic (blue) and with 32-decimal-digit software arithmetic (purple) for the Harmonic oscillator (1).
However, the solution obtained using software arithmetic is around an order of magnitude slower than machine arithmetic, so strategies to reduce the effect of roundoff error are worthwhile.
Examples

Electrostatic Wave

Here is a non-autonomous Hamiltonian (it has a time-dependent potential) that models n perturbing electrostatic waves, each with the same wave number and amplitude, but different temporal frequencies w_i (see [CR91]):

    H(p, q) = p^2/2 + q^2/2 + e sum_(i=1)^n cos(q - w_i t).   (1)

This defines a differential system from the Hamiltonian (1) for dimension n = 3 with frequencies w1 = 7, w2 = 14, w3 = 21.

In[27]:= H = p[t]^2/2 + q[t]^2/2 + Sum[Cos[q[t] - 7 i t], {i, 3}];
A general technique for computing Poincaré sections is described within "EventLocator Method for NDSolve". Specifying an empty list for the variables avoids storing all the data of the numerical integration. The integration is carried out with a symplectic method with a relatively large number of steps, and the solutions are collected using Sow and Reap when the time is a multiple of 2 Pi. The "Direction" option of "EventLocator" is used to control the sign in the detection of the event.

In[33]:= sprkmethod = {"SymplecticPartitionedRungeKutta",
    "DifferenceOrder" -> 4, "PositionVariables" -> {q[t]}};
sprkdata = Block[{k = 1},
   Reap[NDSolve[{eqs, ics}, {}, time,
     Method -> {"EventLocator", "Direction" -> 1,
       "Event" :> (t - 2 k Pi),
       "EventAction" :> (k++; Sow[{q[t], p[t]}]),
       Method -> sprkmethod}]]];
This displays the solution at time intervals of 2 Pi.

In[35]:= ListPlot[sprkdata[[-1, 1]], Axes -> False, Frame -> True,
   AspectRatio -> 1, PlotRange -> All]

Out[35]= [Poincaré section computed with the symplectic method: points filling a region of the (q, p) plane of radius about 30, with fine island structure resolved]
For comparison, a Poincaré section is also computed using an explicit Runge-Kutta method of the same order.

In[36]:= rkmethod = {"FixedStep",
    Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4}};
rkdata = Block[{k = 1},
   Reap[NDSolve[{eqs, ics}, {}, time,
     Method -> {"EventLocator", "Direction" -> 1,
       "Event" :> (t - 2 k Pi),
       "EventAction" :> (k++; Sow[{q[t], p[t]}]),
       Method -> rkmethod}]]];

Fine structural details are resolved less satisfactorily with this method.

In[38]:= ListPlot[rkdata[[-1, 1]], Axes -> False, Frame -> True,
   AspectRatio -> 1, PlotRange -> All]

Out[38]= [Poincaré section computed with the explicit Runge-Kutta method; the fine structure is blurred]
Toda Lattice

The Toda lattice models particles on a line interacting with pairwise exponential forces and is governed by the Hamiltonian:

    H(p, q) = sum_(k=1)^n (1/2 p_k^2 + (exp(q_(k+1) - q_k) - 1)).

Consider the case when periodic boundary conditions q_(n+1) = q_1 are enforced.

The Toda lattice is an example of an isospectral flow. Using the notation

    a_k = -1/2 p_k,  b_k = 1/2 exp(1/2 (q_(k+1) - q_k)),
then the eigenvalues of the following matrix are conserved quantities of the flow:

        ( a1   b1                          bn     )
        ( b1   a2   b2                            )
    L = (      b2   a3   b3           0           )
        (  0             ...                      )
        (           b_(n-2)  a_(n-1)  b_(n-1)     )
        ( bn                 b_(n-1)  an          )

Define the input for the Toda lattice problem for n = 3.

In[39]:= n = 3;
periodicRule = {q_(n+1)[t] -> q_1[t]};
H = Sum[p_k[t]^2/2 + (Exp[q_(k+1)[t] - q_k[t]] - 1), {k, n}] /. periodicRule;
eigenvalueRule = {a_(k_)[t] :> -p_k[t]/2,
   b_(k_)[t] :> 1/2 Exp[1/2 (q_(k+1)[t] - q_k[t])]};
L = {{a_1[t], b_1[t], b_3[t]},
    {b_1[t], a_2[t], b_2[t]},
    {b_3[t], b_2[t], a_3[t]}} /. eigenvalueRule /. periodicRule;
eqs = {q_1'[t] == D[H, p_1[t]], q_2'[t] == D[H, p_2[t]],
   q_3'[t] == D[H, p_3[t]], p_1'[t] == -D[H, q_1[t]],
   p_2'[t] == -D[H, q_2[t]], p_3'[t] == -D[H, q_3[t]]};
ics = {q_1[0] == 1, q_2[0] == 2, q_3[0] == 4,
   p_1[0] == 0, p_2[0] == 1, p_3[0] == 1/2};
eqs = {eqs, ics};
vars = {q_1[t], q_2[t], q_3[t], p_1[t], p_2[t], p_3[t]};
time = {t, 0, 50};

Define a function to compute the eigenvalues of a matrix of numbers, sorted in increasing order. This avoids computing the eigenvalues symbolically.

In[49]:=
NumberMatrixQ[m_] := MatrixQ[m, NumberQ];
NumberEigenvalues[m_?NumberMatrixQ] := Sort[Eigenvalues[m]];

Integrate the equations for the Toda lattice using the "ExplicitMidpoint" method.

In[51]:= emsol = NDSolve[eqs, vars, time, Method -> "ExplicitMidpoint",
   StartingStepSize -> 1/10];
The absolute error in the eigenvalues is now plotted throughout the integration interval. Options are used to specify the dimension of the result of NumberEigenvalues (since it is not an explicit list) and that the absolute error specified using InvariantErrorFunction should include the sign of the error (the default uses Abs).
The eigenvalues are clearly not conserved by the "ExplicitMidpoint" method.

In[52]:= InvariantErrorPlot[NumberEigenvalues[L], vars, t, emsol,
             InvariantErrorFunction -> (#1 - #2 &), InvariantDimensions -> {n},
             PlotStyle -> {Red, Blue, Green}]

Out[52]= [plot: the eigenvalue errors drift to roughly ±0.5 over 0 <= t <= 50]
Integrate the equations for the Toda lattice using the "SymplecticPartitionedRungeKutta" method.

In[53]:= sprksol = NDSolve[eqs, vars, time,
             Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 2,
                 "PositionVariables" -> {q_1[t], q_2[t], q_3[t]}},
             StartingStepSize -> 1/10];

The error in the eigenvalues now remains bounded throughout the integration.
In[54]:= InvariantErrorPlot[NumberEigenvalues[L], vars, t, sprksol,
             InvariantErrorFunction -> (#1 - #2 &), InvariantDimensions -> {n},
             PlotStyle -> {Red, Blue, Green}]

Out[54]= [plot: the eigenvalue errors remain within roughly ±0.005 for 0 <= t <= 50]
Some recent work on numerical methods for isospectral flows can be found in [CIZ97], [CIZ99], [DLP98a], and [DLP98b].
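The isospectral property can also be spot-checked outside Mathematica. The following Python sketch (function and variable names are invented for illustration) integrates the n = 3 periodic lattice with a classical fourth-order Runge-Kutta step and monitors tr L and tr L^2, which are symmetric functions of the conserved eigenvalues:

```python
import math

def toda_rhs(state):
    # state = (q1, q2, q3, p1, p2, p3); periodic neighbors via index wrap-around
    q, p = state[:3], state[3:]
    dq = list(p)                                   # dq_k/dt = p_k
    dp = [math.exp(q[(k + 1) % 3] - q[k]) - math.exp(q[k] - q[k - 1])
          for k in range(3)]                       # dp_k/dt = -dH/dq_k
    return dq + dp

def rk4_step(y, h):
    k1 = toda_rhs(y)
    k2 = toda_rhs([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = toda_rhs([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = toda_rhs([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def lax_traces(state):
    # a_k = -p_k/2, b_k = exp((q_{k+1} - q_k)/2)/2; return (tr L, tr L^2)
    q, p = state[:3], state[3:]
    a = [-pk / 2 for pk in p]
    b = [math.exp((q[(k + 1) % 3] - q[k]) / 2) / 2 for k in range(3)]
    return sum(a), sum(ak ** 2 for ak in a) + 2 * sum(bk ** 2 for bk in b)

y = [1.0, 2.0, 4.0, 0.0, 1.0, 0.5]    # the initial conditions used above
t0 = lax_traces(y)
for _ in range(1000):                  # integrate to t = 10 with h = 1/100
    y = rk4_step(y, 0.01)
t1 = lax_traces(y)
```

Note that tr L, being the (scaled) total momentum, is a linear invariant and is preserved by Runge-Kutta methods up to roundoff, while tr L^2, like the Hamiltonian, is preserved by this non-symplectic integrator only approximately.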
Available Methods

Default Methods

The following table lists the current default choice of SPRK methods.

Order   f evaluations   Method                          Symmetric   FSAL
1       1               Symplectic Euler                No          No
2       1               Symplectic pseudo leapfrog      Yes         Yes
3       3               McLachlan and Atela [MA92]      No          No
4       5               Suzuki [S90]                    Yes         Yes
6       11              Sofroniou and Spaletta [SS05]   Yes         Yes
8       19              Sofroniou and Spaletta [SS05]   Yes         Yes
10      35              Sofroniou and Spaletta [SS05]   Yes         Yes
Unlike the situation for explicit Runge-Kutta methods, the coefficients for high-order SPRK methods are given only numerically in the literature. Yoshida [Y90], for example, gives coefficients accurate to only 14 decimal digits. Since NDSolve also works in arbitrary precision, a process is needed for obtaining the coefficients to the same precision as that used in the solver. When the closed form of the coefficients is not available, the order equations for the symmetric composition coefficients can be refined in arbitrary precision using FindRoot, starting from the known machine-precision solution.
Alternative Methods

Due to the modular design of the new NDSolve framework, it is straightforward to add an alternative method and use it in place of one of the default methods. Several checks are made before any integration is carried out:

• The two vectors of coefficients should be nonempty and of the same length, and numerical approximation should yield number entries of the correct precision.

• Both coefficient vectors should sum to unity, so that they yield a consistent (order 1) method.
Example

Select the perturbed Kepler problem.

In[55]:= system = GetNDSolveProblem["PerturbedKepler"];
         time = {T, 0, 290};
         step = 1/25;

Define a function for computing a numerical approximation to the coefficients for a fourth-order method of Forest and Ruth [FR90], Candy and Rozmus [CR91], and Yoshida [Y90].

In[58]:= YoshidaCoefficients[4, prec_] :=
           N[{{Root[-1 + 12 #1 - 48 #1^2 + 48 #1^3 &, 1, 0],
               Root[1 - 24 #1^2 + 48 #1^3 &, 1, 0],
               Root[1 - 24 #1^2 + 48 #1^3 &, 1, 0],
               Root[-1 + 12 #1 - 48 #1^2 + 48 #1^3 &, 1, 0]},
              {Root[-1 + 6 #1 - 12 #1^2 + 6 #1^3 &, 1, 0],
               Root[1 - 3 #1 + 3 #1^2 + 3 #1^3 &, 1, 0],
               Root[-1 + 6 #1 - 12 #1^2 + 6 #1^3 &, 1, 0], 0}}, prec];

Here are machine-precision approximations for the coefficients.
In[59]:=
YoshidaCoefficients[4, MachinePrecision]

Out[59]= {{0.675604, -0.175604, -0.175604, 0.675604}, {1.35121, -1.70241, 1.35121, 0.}}
This invokes the symplectic partitioned Runge-Kutta solver using Yoshida's coefficients.

In[60]:= Yoshida4 = {"SymplecticPartitionedRungeKutta",
             "Coefficients" -> YoshidaCoefficients, "DifferenceOrder" -> 4,
             "PositionVariables" -> {Y_1[T], Y_2[T]}};
         Yoshida4sol = NDSolve[system, time, Method -> Yoshida4,
             StartingStepSize -> step, MaxSteps -> Infinity];

This plots the solution of the position variables, or coordinates, in the Hamiltonian formulation.
In[62]:= ParametricPlot[Evaluate[{Y_1[T], Y_2[T]} /. Yoshida4sol], Evaluate[time]]

Out[62]= [plot: the orbit traced in the (Y_1, Y_2) plane, filling an annular region within roughly ±1.5 in each coordinate]
Automatic Order Selection

Given that a variety of methods of different orders are available, it is useful to have a means of automatically selecting an appropriate method. In order to accomplish this, a measure of work is needed for each method. A reasonable measure of work for an SPRK method is the number of stages s (or s - 1 if the method is FSAL).

Definition (Work per unit step). Given a step size h_k and a work estimate W_k for one integration step with a method of order k, the work per unit step is given by U_k = W_k / h_k.

Let P be a nonempty set of method orders, let P_k denote the kth element of P, and let |P| denote the cardinality (number of elements). A comparison of work for the default SPRK methods gives P = {2, 3, 4, 6, 8, 10}.

A prerequisite is a procedure for estimating the starting step h_k of a numerical method of order k (see for example [GSB87] or [HNW93]).

The first case to be considered is when the starting step estimate h can be freely chosen. By bootstrapping from low order, the following algorithm finds the order that locally minimizes the work per unit step.

Algorithm 4 (h free)
    Set U = ∞.
    For k = 1, ..., |P|:
        compute h_{P_k};
        if U > W_{P_k} / h_{P_k}, set U = W_{P_k} / h_{P_k}, and if k = |P| return P_k;
        otherwise return P_{k-1}.

The second case to be considered is when the starting step estimate h is given. The following algorithm then gives the order of the method that minimizes the computational cost while satisfying given absolute and relative local error tolerances.
Algorithm 5 (h specified)
    For k = 1, ..., |P|:
        compute h_{P_k};
        if h_{P_k} > h or k = |P|, return P_k.

Algorithms 4 and 5 are heuristic, since the optimal step size and order may change through the integration, although symplectic integration often involves fixed choices. Despite this, both algorithms incorporate salient integration information, such as local error tolerances, system dimension, and initial conditions, to avoid poor choices.
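Stated in executable form, the two algorithms are short. The following Python sketch uses the stage counts from the table of default methods as work estimates; the starting-step estimator passed in at the end is a stand-in invented for illustration, not the estimator NDSolve actually uses.

```python
import math

WORK = {2: 1, 3: 3, 4: 5, 6: 11, 8: 19, 10: 35}   # f evaluations per step
P = sorted(WORK)                                   # available method orders

def select_order_h_free(starting_step):
    # Algorithm 4: locally minimize the work per unit step W_k / h_k,
    # bootstrapping upward from the lowest order.
    u = math.inf
    for i, k in enumerate(P):
        h_k = starting_step(k)                     # starting step for order k
        if u > WORK[k] / h_k:
            u = WORK[k] / h_k
            if i == len(P) - 1:
                return k                           # highest order still improves
        else:
            return P[i - 1]                        # previous order was cheaper

def select_order_h_given(starting_step, h):
    # Algorithm 5: cheapest order whose error-based step estimate exceeds h.
    for i, k in enumerate(P):
        if starting_step(k) > h or i == len(P) - 1:
            return k

# Invented model estimator: larger steps are admissible at higher order.
tol = 1e-9
est = lambda k: 2 * tol ** (1 / k)
```

With this model estimator, the free-step selection settles on the eighth-order method, while a prescribed step of 1/64 leads to order 6 and a coarser step of 1/8 to order 8, mirroring the behavior described in the examples below.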
Examples

Consider Kepler's problem, which describes the motion in the configuration plane of a material point that is attracted toward the origin with a force inversely proportional to the square of the distance:

    H(p, q) = 1/2 (p_1^2 + p_2^2) - 1/sqrt(q_1^2 + q_2^2).        (1)

For initial conditions take

    p_1(0) = 0,  p_2(0) = sqrt((1 + e)/(1 - e)),  q_1(0) = 1 - e,  q_2(0) = 0,

with eccentricity e = 3/5.
Algorithm 4 The following figure shows the methods chosen automatically at various tolerances for the Kepler problem (1) according to Algorithm 4 on a log-log scale of maximum absolute phase error versus work.
It can be observed that the algorithm does a reasonable job of staying near the optimal method, although it switches over to the eighth-order method slightly earlier than necessary. This can be explained by the fact that the starting step size routine is based on low-order derivative estimation, which may not be ideal for selecting high-order methods.
Algorithm 5

The following figure shows the methods chosen automatically with an absolute local error tolerance of 10^-9 and step sizes 1/16, 1/32, 1/64, and 1/128 for the Kepler problem (1) according to Algorithm 5, on a log-log scale of maximum absolute phase error versus work.
With the local tolerance and step size fixed the code can only choose the order of the method. For large step sizes a high-order method is selected, whereas for small step sizes a low-order method is selected. In each case the method chosen minimizes the work to achieve the given tolerance.
Option Summary

option name            default value
"Coefficients"         "SymplecticPartitionedRungeKuttaCoefficients"   specify the coefficients of the symplectic partitioned Runge-Kutta method
"DifferenceOrder"      Automatic                                       specify the order of local accuracy of the method
"PositionVariables"    {}                                              specify a list of the position variables in the Hamiltonian formulation

Options of the method "SymplecticPartitionedRungeKutta".
Controller Methods

"Composition" and "Splitting" Methods for NDSolve

Introduction

In some cases it is useful to split the differential system into subsystems and solve each subsystem using appropriate integration methods. Recombining the individual solutions often allows certain dynamical properties, such as volume, to be conserved. More information on splitting and composition can be found in [MQ02] and [HLW02]; specific aspects related to NDSolve are discussed in [SS05] and [SS06].
Definitions

Of concern are initial value problems y'(t) = f(y(t)), where y(0) = y_0 ∈ ℝ^n.
"Composition"

Composition is a useful device for raising the order of a numerical integration scheme. In contrast to the Aitken-Neville algorithm used in extrapolation, composition can conserve geometric properties of the base integration method (e.g. symplecticity).
Let Φ^(i)_{f, γ_i h} be a basic integration method that takes a step of size γ_i h, with γ_1, ..., γ_s given real numbers. Then the s-stage composition method Ψ_{f,h} is given by

    Ψ_{f,h} = Φ^(s)_{f, γ_s h} ∘ ... ∘ Φ^(1)_{f, γ_1 h}.

Often of interest are composition methods Ψ_{f,h} that involve the same base method: Φ = Φ^(i), i = 1, ..., s.

An interesting special case is symmetric composition: γ_i = γ_{s-i+1}, i = 1, ..., ⌊s/2⌋.

The most common types of composition are:

• Symmetric composition of symmetric second-order methods

• Symmetric composition of first-order methods (e.g. a method Φ with its adjoint Φ*)

• Composition of first-order methods
"Splitting"

An s-stage splitting method is a generalization of a composition method in which f is broken up in an additive fashion:

    f = f_1 + ... + f_k,  k ≤ s.

The essential point is that there can often be computational advantages in solving problems involving f_i instead of f. An s-stage splitting method is a composition of the form

    Ψ_{f,h} = Φ^(s)_{f_s, γ_s h} ∘ ... ∘ Φ^(1)_{f_1, γ_1 h},

with f_1, ..., f_s not necessarily distinct. Each base integration method now solves only part of the problem, but a suitable composition can still give rise to a numerical scheme with advantageous properties. If the vector field f_i is integrable, then the exact solution, or flow, φ_{f_i, h} can be used in place of a numerical integration method.
A splitting method may also use a mixture of flows and numerical methods. An example is Lie-Trotter splitting [T59]: split f = f_1 + f_2 with γ_1 = γ_2 = 1; then Ψ_{f,h} = φ^(2)_{f_2, h} ∘ φ^(1)_{f_1, h} yields a first-order integration method.

Computationally it can be advantageous to combine flows using the group property

    φ_{f_i, h_1 + h_2} = φ_{f_i, h_2} ∘ φ_{f_i, h_1}.
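For a concrete illustration, the harmonic oscillator q' = p, p' = -q splits into two integrable fields whose exact flows are simple shears. The following Python sketch (names invented for illustration) composes the exact subflows first in Lie-Trotter fashion and then as the symmetric Strang composition φ_{f1,h/2} ∘ φ_{f2,h} ∘ φ_{f1,h/2}, and measures the observed orders:

```python
import math

def flow1(q, p, h):   # exact flow of f1: q' = p, p' = 0
    return q + h * p, p

def flow2(q, p, h):   # exact flow of f2: q' = 0, p' = -q
    return q, p - h * q

def lie_trotter(q, p, h):        # first-order: flow of f2 after flow of f1
    return flow2(*flow1(q, p, h), h)

def strang(q, p, h):             # second-order symmetric composition
    q, p = flow1(q, p, h / 2)
    q, p = flow2(q, p, h)
    return flow1(q, p, h / 2)

def integrate(stepper, h, t_end=1.0):
    q, p = 1.0, 0.0
    for _ in range(round(t_end / h)):
        q, p = stepper(q, p, h)
    return q                     # exact value is cos(t_end)

# error at t = 1; halving h should divide it by ~2 (Lie-Trotter), ~4 (Strang)
err = lambda stepper, h: abs(integrate(stepper, h) - math.cos(1.0))
```

The Strang composition here is exactly the symplectic leapfrog scheme used in the examples below, which is why it attains second order from two first-order subflows.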
Implementation

Several changes to the new NDSolve framework were needed in order to implement splitting and composition methods:

• Allow a method to call an arbitrary number of submethods.

• Add the ability to pass around a function for numerically evaluating a subfield, instead of the entire vector field.

• Add a "LocallyExact" method to compute the flow: analytically solve a subsystem and advance the (local) solution numerically.

• Cache data for identical methods to avoid repeated initialization. Data for numerically evaluating identical subfields is also cached.

A simplified input syntax allows omitted vector fields and methods to be filled in cyclically. These must be defined unambiguously: {f_1, f_2, f_1, f_2} can be input as {f_1, f_2}, but {f_1, f_2, f_3, f_2, f_1} cannot be input as {f_1, f_2, f_3}, since this corresponds to {f_1, f_2, f_3, f_1, f_2}.
Nested Methods

The following example constructs a high-order splitting method from a low-order splitting using "Composition".

    NDSolve -> "Composition" ->
        "Splitting" f = f1 + f2 -> { "LocallyExact" f1, ImplicitMidpoint f2, "LocallyExact" f1 }
        "Splitting" f = f1 + f2 -> { "LocallyExact" f1, ImplicitMidpoint f2, "LocallyExact" f1 }
        "Splitting" f = f1 + f2 -> { "LocallyExact" f1, ImplicitMidpoint f2, "LocallyExact" f1 }
Simplification

A more efficient integrator can be obtained in the previous example by using the group property of flows and calling the "Splitting" method directly, so that adjacent "LocallyExact" steps for f1 are merged:

    NDSolve -> "Splitting" f = f1 + f2 ->
        "LocallyExact" f1
        ImplicitMidpoint f2
        "LocallyExact" f1
        ImplicitMidpoint f2
        "LocallyExact" f1
        ImplicitMidpoint f2
        "LocallyExact" f1
Examples The following examples will use a second-order symmetric splitting known as the Strang splitting [S68], [M68]. The splitting coefficients are automatically determined from the structure of the equations.
This defines a method known as symplectic leapfrog in terms of the method "SymplecticPartitionedRungeKutta".

In[2]:= SymplecticLeapfrog = {"SymplecticPartitionedRungeKutta",
            "DifferenceOrder" -> 2, "PositionVariables" :> qvars};

Load a package with some useful example problems.

In[3]:= Needs["DifferentialEquations`NDSolveProblems`"];
Symplectic Splitting

Symplectic Leapfrog

"SymplecticPartitionedRungeKutta" is an efficient method for solving separable Hamiltonian systems H(p, q) = T(p) + V(q) with favorable long-time dynamics. "Splitting" is a more general-purpose method, but it can be used to construct partitioned symplectic methods (though it is somewhat less efficient than "SymplecticPartitionedRungeKutta").

Consider the harmonic oscillator that arises from a linear differential system governed by the separable Hamiltonian H(p, q) = p^2/2 + q^2/2.

In[5]:=
system = GetNDSolveProblem["HarmonicOscillator"]

Out[5]= NDSolveProblem[{{Y_1'[T] == Y_2[T], Y_2'[T] == -Y_1[T]},
            {Y_1[0] == 1, Y_2[0] == 0}, {Y_1[T], Y_2[T]}, {T, 0, 10}, {},
            {1/2 (Y_1[T]^2 + Y_2[T]^2)}}]
Split the Hamiltonian vector field into independent components governing momentum and position. This is done by setting the relevant right-hand sides of the equations to zero.

In[6]:= eqs = system["System"];
        Y1 = eqs; Part[Y1, 1, 2] = 0;
        Y2 = eqs; Part[Y2, 2, 2] = 0;

This composition of weighted (first-order) Euler integration steps corresponds to the symplectic (second-order) leapfrog method.

In[11]:= tfinal = 1;
         time = {T, 0, tfinal};
         qvars = {Y_1[T]};
         splittingsol = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y1, Y2, Y1},
                 "Method" -> {"ExplicitEuler", "ExplicitEuler", "ExplicitEuler"}}]
Out[14]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
The method "ExplicitEuler" could have been specified only once, since the second and third instances would have been filled in cyclically. This is the result at the end of the integration step.

In[15]:= InputForm[splittingsol /. T -> tfinal]
Out[15]//InputForm= {{Subscript[Y, 1][1] -> 0.5399512509335085, Subscript[Y, 2][1] -> -0.8406435124348495}}
This invokes the built-in integration method corresponding to the symplectic leapfrog integrator.

In[16]:= sprksol = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> SymplecticLeapfrog]

Out[16]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
The result at the end of the integration step is identical to the result of the splitting method.

In[17]:= InputForm[sprksol /. T -> tfinal]
Out[17]//InputForm= {{Subscript[Y, 1][1] -> 0.5399512509335085, Subscript[Y, 2][1] -> -0.8406435124348495}}
Composition of Symplectic Leapfrog

This takes the symplectic leapfrog scheme as the base integration method and constructs a fourth-order symplectic integrator using a symmetric composition of Ruth-Yoshida [Y90].

In[18]:= YoshidaCoefficients = RootReduce[
             {1/(2 - 2^(1/3)), -2^(1/3)/(2 - 2^(1/3)), 1/(2 - 2^(1/3))}];
         splittingsol = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> {"Composition", "Coefficients" -> YoshidaCoefficients,
                 Method -> SymplecticLeapfrog}]
Out[20]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
This is the result at the end of the integration step.

In[21]:= InputForm[splittingsol /. T -> tfinal]
Out[21]//InputForm= {{Subscript[Y, 1][1] -> 0.5403078808898406, Subscript[Y, 2][1] -> -0.8414706295697821}}
This invokes the built-in symplectic integration method using coefficients for the fourth-order methods of Ruth and Yoshida.

In[22]:= SPRK4[4, prec_] :=
           N[{{Root[-1 + 12 #1 - 48 #1^2 + 48 #1^3 &, 1, 0],
               Root[1 - 24 #1^2 + 48 #1^3 &, 1, 0],
               Root[1 - 24 #1^2 + 48 #1^3 &, 1, 0],
               Root[-1 + 12 #1 - 48 #1^2 + 48 #1^3 &, 1, 0]},
              {Root[-1 + 6 #1 - 12 #1^2 + 6 #1^3 &, 1, 0],
               Root[1 - 3 #1 + 3 #1^2 + 3 #1^3 &, 1, 0],
               Root[-1 + 6 #1 - 12 #1^2 + 6 #1^3 &, 1, 0], 0}}, prec];
         sprksol = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> {"SymplecticPartitionedRungeKutta", "Coefficients" -> SPRK4,
                 "DifferenceOrder" -> 4, "PositionVariables" -> qvars}]
Out[23]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
The result at the end of the integration step is identical to the result of the composition method.

In[24]:= InputForm[sprksol /. T -> tfinal]
Out[24]//InputForm= {{Subscript[Y, 1][1] -> 0.5403078808898406, Subscript[Y, 2][1] -> -0.8414706295697821}}
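The fourth-order behavior of this composition is easy to confirm in a few lines of any language. This hedged Python sketch (illustrative, not the NDSolve implementation) composes a second-order leapfrog step with the Yoshida weights 1/(2 - 2^(1/3)) and -2^(1/3)/(2 - 2^(1/3)) and checks the convergence rate on the harmonic oscillator:

```python
import math

def leapfrog(q, p, h):
    # second-order Strang (leapfrog) step for H = p^2/2 + q^2/2
    q += h / 2 * p
    p -= h * q
    q += h / 2 * p
    return q, p

C = 2 - 2 ** (1 / 3)
WEIGHTS = (1 / C, -2 ** (1 / 3) / C, 1 / C)   # Yoshida's symmetric triple jump

def yoshida4(q, p, h):
    for w in WEIGHTS:
        q, p = leapfrog(q, p, w * h)
    return q, p

def error(h):
    q, p = 1.0, 0.0
    for _ in range(round(1.0 / h)):
        q, p = yoshida4(q, p, h)
    return abs(q - math.cos(1.0))     # exact solution is q(t) = cos(t)

ratio = error(0.02) / error(0.01)     # ~16 for a fourth-order method
```

Note the middle substep has a negative weight, so the method takes a backward fractional step; this is unavoidable for compositions of this type beyond order two.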
Hybrid Methods

While a closed-form solution often does not exist for the entire vector field, in some cases it is possible to analytically solve a system of differential equations for part of the vector field. When a solution can be found by DSolve, direct numerical evaluation can be used to locally advance the solution. This idea is implemented in the method "LocallyExact".
Harmonic Oscillator Test Example

This example checks that the solution using the exact flows of the split components of the harmonic oscillator equations is the same as applying Euler's method to each of the split components (for these subsystems an explicit Euler step coincides with the exact flow).

In[25]:= system = GetNDSolveProblem["HarmonicOscillator"];
         eqs = system["System"];
         Y1 = eqs; Part[Y1, 1, 2] = 0;
         Y2 = eqs; Part[Y2, 2, 2] = 0;
         tfinal = 1;
         time = {T, 0, tfinal};

In[33]:= solexact = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y1, Y2, Y1}, "Method" -> {"LocallyExact"}}];
In[34]:=
InputForm[solexact /. T -> 1]
Out[34]//InputForm= {{Subscript[Y, 1][1] -> 0.5399512509335085, Subscript[Y, 2][1] -> -0.8406435124348495}}
In[37]:= soleuler = NDSolve[system, time, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y1, Y2, Y1}, "Method" -> {"ExplicitEuler"}}];
         InputForm[soleuler /. T -> 1]
Out[38]//InputForm= {{Subscript[Y, 1][1] -> 0.5399512509335085, Subscript[Y, 2][1] -> -0.8406435124348495}}
Hybrid Numeric-Symbolic Splitting Methods (ABC Flow)

Consider the Arnold-Beltrami-Childress flow, a widely studied model for volume-preserving three-dimensional flows.

In[39]:= system = GetNDSolveProblem["ArnoldBeltramiChildress"]

Out[39]= NDSolveProblem[{{Y_1'[T] == 3/4 Cos[Y_2[T]] + Sin[Y_3[T]],
             Y_2'[T] == Cos[Y_3[T]] + Sin[Y_1[T]],
             Y_3'[T] == Cos[Y_1[T]] + 3/4 Sin[Y_2[T]]},
            {Y_1[0] == 1/4, Y_2[0] == 1/3, Y_3[0] == 1/2},
            {Y_1[T], Y_2[T], Y_3[T]}, {T, 0, 100}, {}, {}}]
When applied directly, a volume-preserving integrator would not in general preserve symmetries, while a symmetry-preserving integrator, such as the implicit midpoint rule, would not preserve volume. This defines a splitting of the system by setting some of the right-hand side components to zero.

In[40]:= eqs = system["System"];
         Y1 = eqs; Part[Y1, 2, 2] = 0;
         Y2 = eqs; Part[Y2, {1, 3}, 2] = 0;

In[45]:= Y1

Out[45]= {Y_1'[T] == 3/4 Cos[Y_2[T]] + Sin[Y_3[T]], Y_2'[T] == 0,
          Y_3'[T] == Cos[Y_1[T]] + 3/4 Sin[Y_2[T]]}

In[46]:= Y2

Out[46]= {Y_1'[T] == 0, Y_2'[T] == Cos[Y_3[T]] + Sin[Y_1[T]], Y_3'[T] == 0}
The system for Y1 is solvable exactly by DSolve so that you can use the “LocallyExact“ method. Y2 is not solvable, however, so you need to use a suitable numerical integrator in order to obtain the desired properties in the splitting method.
This defines a method for computing the implicit midpoint rule in terms of the built-in "ImplicitRungeKutta" method.

In[47]:= ImplicitMidpoint = {"FixedStep", Method -> {"ImplicitRungeKutta",
             "Coefficients" -> "ImplicitRungeKuttaGaussCoefficients",
             "DifferenceOrder" -> 2,
             "ImplicitSolver" -> {"FixedPoint", AccuracyGoal -> MachinePrecision,
                 PrecisionGoal -> MachinePrecision, "IterationSafetyFactor" -> 1}}};

This defines a second-order, volume-preserving, reversing symmetry-group integrator [MQ02].
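The implicit midpoint rule y_{n+1} = y_n + h f((y_n + y_{n+1})/2), with the implicit equation solved by fixed-point iteration as configured above, is simple to prototype. This Python sketch (illustrative, not the NDSolve implementation) also checks the rule's well-known exact conservation of quadratic invariants on the harmonic oscillator:

```python
def midpoint_step(f, y, h, tol=1e-14, max_iter=50):
    # Solve z = y + h*f((y + z)/2) by fixed-point iteration from an Euler guess.
    z = [yi + h * fi for yi, fi in zip(y, f(y))]
    for _ in range(max_iter):
        mid = [(yi + zi) / 2 for yi, zi in zip(y, z)]
        z_new = [yi + h * fi for yi, fi in zip(y, f(mid))]
        if max(abs(a - b) for a, b in zip(z, z_new)) < tol:
            return z_new
        z = z_new
    return z

f = lambda y: [y[1], -y[0]]        # harmonic oscillator vector field
y = [1.0, 0.0]
for _ in range(100):               # 100 steps of size 0.1
    y = midpoint_step(f, y, 0.1)
energy_drift = abs(y[0] ** 2 + y[1] ** 2 - 1.0)
```

Fixed-point iteration converges here because the contraction factor is roughly h/2 times the Lipschitz constant of f; for stiff problems a Newton-type solver would be needed instead.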
In[48]:= splittingsol = NDSolve[system, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y2, Y1, Y2},
                 "Method" -> {"LocallyExact", ImplicitMidpoint, "LocallyExact"}}]

Out[48]= {{Y_1[T] -> InterpolatingFunction[{{0., 100.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 100.}}, <>][T],
           Y_3[T] -> InterpolatingFunction[{{0., 100.}}, <>][T]}}
Lotka-Volterra Equations

Various numerical integrators for this system are compared within "Numerical Methods for Solving the Lotka-Volterra Equations".

Euler's Equations

Various numerical integrators for Euler's equations are compared within "Rigid Body Solvers".
Non-Autonomous Vector Fields

Consider the Duffing oscillator, a forced planar non-autonomous differential system.

In[49]:= system = GetNDSolveProblem["DuffingOscillator"]

Out[49]= NDSolveProblem[{{Y_1'[T] == Y_2[T],
             Y_2'[T] == 3 Cos[T]/10 + Y_1[T] - Y_1[T]^3 + Y_2[T]/4},
            {Y_1[0] == 0, Y_2[0] == 1}, {Y_1[T], Y_2[T]}, {T, 0, 10}, {}, {}}]

This defines a splitting of the system.

In[50]:= Y1 = {Y_1'[T] == Y_2[T], Y_2'[T] == Y_2[T]/4};
         Y2 = {Y_1'[T] == 0, Y_2'[T] == 3 Cos[T]/10 + Y_1[T] - Y_1[T]^3};
The splitting of the time component among the vector fields is ambiguous, so the method issues an error message.

In[52]:= splittingsol = NDSolve[system, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y2, Y1, Y1}, "Method" -> {"LocallyExact"}}]

         The differential system {0, 3 Cos[T]/10 + Y_1[T] - Y_1[T]^3} in the method
             Splitting depends on T, which is ambiguous. The differential system
             should be in autonomous form.

         NDSolve::initf : The initialization of the method NDSolve`Splitting failed.

Out[52]= {{Y_1[T] -> InterpolatingFunction[{{0., 0.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 0.}}, <>][T]}}
The equations can be extended by introducing a new "dummy" variable Z[T] such that Z[T] == T and specifying how it should be distributed among the split differential systems.

In[53]:= Y1 = {Y_1'[T] == Y_2[T], Y_2'[T] == Y_2[T]/4, Z'[T] == 1};
         Y2 = {Y_1'[T] == 0, Y_2'[T] == 3 Cos[Z[T]]/10 + Y_1[T] - Y_1[T]^3, Z'[T] == 0};
         eqs = Join[system["System"], {Z'[T] == 1}];
         ics = Join[system["InitialConditions"], {Z[0] == 0}];
         vars = Join[system["DependentVariables"], {Z[T]}];

In[59]:= splittingsol = NDSolve[{eqs, ics}, vars, time, StartingStepSize -> 1/10,
             Method -> {"Splitting", "DifferenceOrder" -> 2,
                 "Equations" -> {Y2, Y1, Y2}, "Method" -> {"LocallyExact"}}]

Out[59]= {{Y_1[T] -> InterpolatingFunction[{{0., 10.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 10.}}, <>][T],
           Z[T] -> InterpolatingFunction[{{0., 10.}}, <>][T]}}
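The dummy-variable device is not specific to NDSolve: appending the time as an extra state component with unit derivative makes any system autonomous. In this Python sketch (a forced Duffing-type oscillator with illustrative coefficients, not the package's exact system), the augmented form reproduces direct time-dependent integration step for step:

```python
import math

def f(t, y):
    # a forced Duffing-type oscillator (coefficients chosen for illustration)
    return [y[1], 0.3 * math.cos(t) + y[0] - y[0] ** 3]

def euler_direct(h, n):
    # explicit Euler, tracking t alongside the state
    y, t = [0.0, 1.0], 0.0
    for _ in range(n):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

def f_aug(u):
    # autonomous form: u = (y1, y2, z) with z' = 1 playing the role of t
    return f(u[2], u[:2]) + [1.0]

def euler_aug(h, n):
    u = [0.0, 1.0, 0.0]
    for _ in range(n):
        u = [ui + h * di for ui, di in zip(u, f_aug(u))]
    return u[:2]

direct = euler_direct(0.01, 100)
aug = euler_aug(0.01, 100)
```

Since the augmented component z accumulates exactly the same values as the explicit time variable, the two integrations agree to roundoff.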
Here is a plot of the solution.

In[60]:= ParametricPlot[Evaluate[system["DependentVariables"] /. First[splittingsol]],
             Evaluate[time], AspectRatio -> 1]

Out[60]= [plot: the (Y_1, Y_2) phase portrait, spanning roughly -3 <= Y_1 <= 3 and -5 <= Y_2 <= 5]
Option Summary

The default coefficient choice in "Composition" tries to automatically select between "SymmetricCompositionCoefficients" and "SymmetricCompositionSymmetricMethodCoefficients", depending on the properties of the methods specified using the Method option.

option name          default value
"Coefficients"       Automatic     specify the coefficients to use in the composition method
"DifferenceOrder"    Automatic     specify the order of local accuracy of the method
Method               None          specify the base methods to use in the numerical integration

Options of the method "Composition".
option name          default value
"Coefficients"       {}            specify the coefficients to use in the splitting method
"DifferenceOrder"    Automatic     specify the order of local accuracy of the method
"Equations"          {}            specify the way in which the equations should be split
Method               None          specify the base methods to use in the numerical integration

Options of the method "Splitting".
Submethods

"LocallyExact" Method for NDSolve

Introduction

A differential system can sometimes be solved by analytic means. The function DSolve implements many of the known algorithmic techniques. However, differential systems that can be solved in closed form constitute only a small subset. Despite this fact, when a closed-form solution does not exist for the entire vector field, it is often possible to analytically solve a system of differential equations for part of the vector field. An example of this is the method "Splitting", which breaks up a vector field f into subfields f_1, ..., f_n such that f = f_1 + ... + f_n.

The idea underlying the method "LocallyExact" is that, rather than using a standard numerical integration scheme, when a solution can be found by DSolve, direct numerical evaluation can be used to locally advance the solution. Since the method "LocallyExact" makes no attempt to adaptively adjust step sizes, it is primarily intended for use as a submethod between integration steps.
Examples

Load a package with some predefined problems.

In[1]:= Needs["DifferentialEquations`NDSolveProblems`"];
Harmonic Oscillator

Numerically solve the equations of motion for a harmonic oscillator using the method "LocallyExact". The result is two interpolating functions that approximate the solution and its first derivative.

In[2]:= system = GetNDSolveProblem["HarmonicOscillator"];
        vars = system["DependentVariables"];
        tdata = system["TimeData"];
        sols = vars /. First[NDSolve[system, StartingStepSize -> 1/10,
            Method -> "LocallyExact"]]

Out[5]= {InterpolatingFunction[{{0., 10.}}, <>][T], InterpolatingFunction[{{0., 10.}}, <>][T]}
The solution evolves on the unit circle.

In[6]:= ParametricPlot[Evaluate[sols], Evaluate[tdata], AspectRatio -> 1]

Out[6]= [plot: the unit circle traced in the phase plane]
Global versus Local

The method "LocallyExact" is not intended as a substitute for a closed-form (global) solution. Although the method "LocallyExact" uses the analytic solution to advance the solution, it produces solutions only at the grid points of the numerical integration (or inside grid points if called appropriately). Therefore, there can be errors due to sampling at interpolation points that do not lie exactly on the numerical integration grid.
Plot the error in the first solution component of the harmonic oscillator, comparing it with the exact flow.

In[7]:= Plot[Evaluate[First[sols] - Cos[T]], Evaluate[tdata]]

Out[7]= [plot: the sampling error oscillates within roughly ±2 × 10^-7 over 0 <= T <= 10]
Simplification

The method "LocallyExact" has an option "SimplificationFunction" that can be used to simplify the results of DSolve. Here is the linearized component of the differential system that turns up in the splitting of the Lorenz equations using standard values for the parameters.

In[8]:= eqs = {Y_1'[T] == σ (Y_2[T] - Y_1[T]), Y_2'[T] == r Y_1[T] - Y_2[T],
            Y_3'[T] == -b Y_3[T]} /. {σ -> 10, r -> 28, b -> 8/3};
        ics = {Y_1[0] == -8, Y_2[0] == 8, Y_3[0] == 27};
        vars = {Y_1[T], Y_2[T], Y_3[T]};
This subsystem is exactly solvable by DSolve.

In[11]:= DSolve[eqs, vars, T]

Out[11]= {{Y_1[T] ->
             (1/2402) ((1201 + 9 Sqrt[1201]) E^(1/2 (-11 - Sqrt[1201]) T) +
                 (1201 - 9 Sqrt[1201]) E^(1/2 (-11 + Sqrt[1201]) T)) C[1] -
             (10 (E^(1/2 (-11 - Sqrt[1201]) T) - E^(1/2 (-11 + Sqrt[1201]) T)) C[2])/Sqrt[1201],
           Y_2[T] ->
             -(28 (E^(1/2 (-11 - Sqrt[1201]) T) - E^(1/2 (-11 + Sqrt[1201]) T)) C[1])/Sqrt[1201] +
             (1/2402) ((1201 - 9 Sqrt[1201]) E^(1/2 (-11 - Sqrt[1201]) T) +
                 (1201 + 9 Sqrt[1201]) E^(1/2 (-11 + Sqrt[1201]) T)) C[2],
           Y_3[T] -> E^(-8 T/3) C[3]}}
Often the results of DSolve can be simplified. This defines a function that simplifies an expression and also prints the input and the result.

In[12]:= myfun[x_] := Module[{simpx},
            Print["Before simplification ", x];
            simpx = FullSimplify[ExpToTrig[x]];
            Print["After simplification ", simpx];
            simpx];

The function can be passed as an option to the method "LocallyExact".

In[13]:= NDSolve[{eqs, ics}, vars, {T, 0, 1}, StartingStepSize -> 1/10,
            Method -> {"LocallyExact", "SimplificationFunction" -> myfun}]
Before simplification
    {(1/2402) ((1201 + 9 Sqrt[1201]) E^(1/2 (-11 - Sqrt[1201]) T) +
         (1201 - 9 Sqrt[1201]) E^(1/2 (-11 + Sqrt[1201]) T)) Y_1[T] -
       (10 (E^(1/2 (-11 - Sqrt[1201]) T) - E^(1/2 (-11 + Sqrt[1201]) T)) Y_2[T])/Sqrt[1201],
     -(28 (E^(1/2 (-11 - Sqrt[1201]) T) - E^(1/2 (-11 + Sqrt[1201]) T)) Y_1[T])/Sqrt[1201] +
       (1/2402) ((1201 - 9 Sqrt[1201]) E^(1/2 (-11 - Sqrt[1201]) T) +
         (1201 + 9 Sqrt[1201]) E^(1/2 (-11 + Sqrt[1201]) T)) Y_2[T],
     E^(-8 T/3) Y_3[T]}

After simplification
    {E^(-11 T/2) (1201 Cosh[Sqrt[1201] T/2] Y_1[T] +
         Sqrt[1201] Sinh[Sqrt[1201] T/2] (-9 Y_1[T] + 20 Y_2[T]))/1201,
     E^(-11 T/2) (Cosh[Sqrt[1201] T/2] Y_2[T] +
         Sqrt[1201] Sinh[Sqrt[1201] T/2] (56 Y_1[T] + 9 Y_2[T])/1201),
     E^(-8 T/3) Y_3[T]}

Out[13]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
           Y_3[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
The simplification is performed only once during the initialization phase that constructs the data object for the numerical integration method.
Option Summary

option name                 default value
"SimplificationFunction"    None          function to use in simplifying the result of DSolve

Option of the method "LocallyExact".
"DoubleStep" Method for NDSolve

Introduction

The method "DoubleStep" performs a single application of Richardson's extrapolation for any one-step integration method. Although it is not always optimal, it is a general scheme for equipping a method with an error estimate (hence adaptivity in the step size) and for extrapolating to increase the order of local accuracy. "DoubleStep" is a special case of extrapolation but has been implemented as a separate method for efficiency.

Given a method of order p:

• Take a step of size h to get a solution y_1.

• Take two steps of size h/2 to get a solution y_2.

• Find an error estimate of order p as

      e = (y_2 - y_1) / (2^p - 1).        (1)

• The correction term e can be used for error estimation, enabling an adaptive step-size scheme for any base method.

• Either use y_2 for the new solution, or form an improved approximation using local extrapolation as

      ŷ_2 = y_2 + e.                      (2)

• If the base numerical integration method is symmetric, then the improved approximation has order p + 2; otherwise it has order p + 1.
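The double-step device can be sketched generically. This Python example (an assumed test equation y' = -y with explicit Euler, p = 1, as the base method; not the NDSolve implementation) forms the error estimate (1) and the locally extrapolated value (2):

```python
import math

def euler(y, h):
    # base method of order p = 1 applied to y' = -y
    return y + h * (-y)

def double_step(y, h, p=1):
    y1 = euler(y, h)                     # one step of size h
    y2 = euler(euler(y, h / 2), h / 2)   # two steps of size h/2
    e = (y2 - y1) / (2 ** p - 1)         # error estimate, equation (1)
    return y2 + e, e                     # extrapolated value (2) and estimate

h = 0.1
yhat, e = double_step(1.0, h)
exact = math.exp(-h)                     # exact solution after one step
```

With these numbers, y1 = 0.9 and y2 = 0.9025; the estimate e = 0.0025 tracks the true error of y2 closely, and the extrapolated value 0.905 is an order of magnitude closer to exp(-0.1) than y2 is, illustrating the gain from local extrapolation.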
Examples

Load some packages with example problems and utility functions.

In[5]:= Needs["DifferentialEquations`NDSolveProblems`"];
        Needs["DifferentialEquations`NDSolveUtilities`"];

Select a nonstiff problem from the package.

In[7]:= nonstiffsystem = GetNDSolveProblem["BrusselatorODE"];
Select a stiff problem from the package.

In[8]:= stiffsystem = GetNDSolveProblem["Robertson"];
Extending Built-in Methods

The method "ExplicitEuler" carries out one integration step using Euler's method. It has no local error control and hence uses fixed step sizes. This integrates a differential system using one application of Richardson's extrapolation (see (2)) with the base method "ExplicitEuler". The local error estimate (1) is used to dynamically adjust the step size throughout the integration.

In[9]:= eesol = NDSolve[nonstiffsystem, {T, 0, 1},
            Method -> {"DoubleStep", Method -> "ExplicitEuler"}]

Out[9]= {{Y_1[T] -> InterpolatingFunction[{{0., 1.}}, <>][T],
          Y_2[T] -> InterpolatingFunction[{{0., 1.}}, <>][T]}}
This illustrates how the step size varies during the numerical integration. In[10]:=
StepDataPlot@eesolD
Out[10]= (plot of the step sizes used, varying between roughly 0.0001 and 0.0002 over the interval [0, 1])
The stiffness detection device (described within "StiffnessTest Method Option for NDSolve") ascertains that the "ExplicitEuler" method is restricted by stability rather than local accuracy.

In[11]:= NDSolve[stiffsystem, Method -> {"DoubleStep", Method -> "ExplicitEuler"}]

Out[11]= {{Y1[T] -> InterpolatingFunction[{{0., 0.00725321}}, <>][T],
          Y2[T] -> InterpolatingFunction[{{0., 0.00725321}}, <>][T],
          Y3[T] -> InterpolatingFunction[{{0., 0.00725321}}, <>][T]}}
An alternative base method is more appropriate for this problem.

In[12]:= liesol = NDSolve[stiffsystem,
          Method -> {"DoubleStep", Method -> "LinearlyImplicitEuler"}]

Out[12]= {{Y1[T] -> InterpolatingFunction[{{0., 0.3}}, <>][T],
          Y2[T] -> InterpolatingFunction[{{0., 0.3}}, <>][T],
          Y3[T] -> InterpolatingFunction[{{0., 0.3}}, <>][T]}}
User-Defined Methods and Method Properties

Integration methods can be added to the NDSolve framework. In order for these to work like built-in methods, it can be necessary to specify various method properties. These properties can then be used by other methods to build up compound integrators.

Here is how to define a top-level plug-in for the classical Runge-Kutta method (see "NDSolve Method Plug-in Framework: Classical Runge-Kutta" and "ExplicitRungeKutta Method for NDSolve" for more details).

In[13]:= ClassicalRungeKutta[___]["Step"[f_, t_, h_, y_, yp_]] :=
          Block[{deltay, k1, k2, k3, k4},
            k1 = yp;
            k2 = f[t + 1/2 h, y + 1/2 h k1];
            k3 = f[t + 1/2 h, y + 1/2 h k2];
            k4 = f[t + h, y + h k3];
            deltay = h (1/6 k1 + 1/3 k2 + 1/3 k3 + 1/6 k4);
            {h, deltay}
          ];
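For comparison, the same step function can be sketched outside Mathematica; this Python version (illustrative names only, not part of any NDSolve API) mirrors the "Step" definition, taking the already-computed derivative yp and returning the step size taken together with the solution increment:

```python
import math

def classical_rk4_step(f, t, h, y, yp):
    # mirrors the plug-in: yp is f(t, y), computed before the step
    k1 = yp
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    deltay = h * (k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6)
    return h, deltay   # step size taken and the increment, as in "Step"

# integrate y' = y from y(0) = 1 to t = 1; the exact answer is e
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    _, dy = classical_rk4_step(f, t, h, y, f(t, y))
    t, y = t + h, y + dy
print(abs(y - math.e))  # fourth-order accuracy: error on the order of 1e-6
```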
Method properties used by "DoubleStep" are now described.
Order and Symmetry

This attempts to integrate a system using one application of Richardson's extrapolation based on the classical Runge-Kutta method.

In[14]:= NDSolve[nonstiffsystem, Method -> {"DoubleStep", Method -> ClassicalRungeKutta}]

Without knowing the order of the base method, "DoubleStep" is unable to carry out Richardson's extrapolation. This defines a method property to communicate to the framework that the classical Runge-Kutta method has order four.

In[15]:= ClassicalRungeKutta[___]["DifferenceOrder"] := 4;
The method "DoubleStep" is now able to ascertain that ClassicalRungeKutta is of order four and can use this information when refining the solution and estimating the local error.

In[16]:= NDSolve[nonstiffsystem, Method -> {"DoubleStep", Method -> ClassicalRungeKutta}]

Out[16]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T],
          Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}

The order of the result of Richardson's extrapolation depends on whether the extrapolated method has a local error expansion in powers of h or h^2 (the latter occurs if the base method is symmetric). If no method property for symmetry is defined, the "DoubleStep" method assumes by default that the base integrator is not symmetric.

This explicitly specifies that the classical Runge-Kutta method is not symmetric using the "SymmetricMethodQ" property.

In[17]:= ClassicalRungeKutta[___]["SymmetricMethodQ"] := False;
Stiffness Detection

Details of the scheme used for stiffness detection can be found within "StiffnessTest Method Option for NDSolve".

Stiffness detection relies on knowledge of the linear stability boundary of the method, which has not been defined. Computing the exact linear stability boundary of a method under extrapolation can be quite complicated. Therefore a default value is selected that works for all methods. This corresponds to considering the p-th order power series approximation to the exponential at 0 and ignoring higher-order terms.

- If "LocalExtrapolation" is True, then a generic value is selected corresponding to a method of order p + 2 (symmetric) or p + 1.
- If "LocalExtrapolation" is False, then the property "LinearStabilityBoundary" of the base method is checked. If no value has been specified, then a default for a method of order p is selected.

This computes the linear stability boundary for a generic method of order 4.

In[18]:= Reduce[Abs[Sum[z^i/i!, {i, 0, 4}]] == 1 && z < 0, z]

Out[18]= z == Root[24 + 12 #1 + 4 #1^2 + #1^3 &, 1]
A default value for the "LinearStabilityBoundary" property is used.

In[19]:= NDSolve[stiffsystem, Method -> {"DoubleStep",
          Method -> ClassicalRungeKutta, "StiffnessTest" -> True}]

This shows how to specify the linear stability boundary of the method for the framework. This value will only be used if "DoubleStep" is invoked with "LocalExtrapolation" -> False.

In[20]:= ClassicalRungeKutta[___]["LinearStabilityBoundary"] :=
          Root[24 + 12 #1 + 4 #1^2 + #1^3 &, 1];

"DoubleStep" assumes by default that a method is not appropriate for stiff problems (and hence uses stiffness detection) when no "StiffMethodQ" property is specified. This shows how to define the property.

In[21]:= ClassicalRungeKutta[___]["StiffMethodQ"] := False;
Higher Order

The following example extrapolates the classical Runge-Kutta method of order four using two applications of (2). The inner specification of "DoubleStep" constructs a method of order five. A second application of "DoubleStep" is used to obtain a method of order six, which uses adaptive step sizes.

Nested applications of "DoubleStep" are used to raise the order and provide an adaptive step-size estimate.

In[22]:= NDSolve[nonstiffsystem, Method -> {"DoubleStep",
          Method -> {"DoubleStep", Method -> ClassicalRungeKutta}}]

Out[22]= {{Y1[T] -> InterpolatingFunction[{{0., 20.}}, <>][T],
          Y2[T] -> InterpolatingFunction[{{0., 20.}}, <>][T]}}

In general the method "Extrapolation" is more appropriate for constructing high-order integration schemes from low-order methods.
Option Summary

option name               default value   description

"LocalExtrapolation"      True            specify whether to advance the solution using
                                          local extrapolation according to (2)
Method                    None            specify the method to use as the base
                                          integration scheme
"StepSizeRatioBounds"     {1/8, 4}        specify the bounds on a relative change in the
                                          new step size h_(n+1) from the current step
                                          size h_n as low <= h_(n+1)/h_n <= high
"StepSizeSafetyFactors"   Automatic       specify the safety factors to incorporate into
                                          the error estimate (1) used for adaptive
                                          step sizes
"StiffnessTest"           Automatic       specify whether to use the stiffness detection
                                          capability

Options of the method "DoubleStep".
The default setting of Automatic for the option "StiffnessTest" indicates that the stiffness test is activated if a nonstiff base method is used. The default setting of Automatic for the option "StepSizeSafetyFactors" uses the values {9/10, 4/5} for a stiff base method and {9/10, 13/20} for a nonstiff base method.
"EventLocator" Method for NDSolve

Introduction

It is often useful to be able to detect and precisely locate a change in a differential system. For example, with the detection of a singularity or state change, the appropriate action can be taken, such as restarting the integration.

An event for a differential system

    Y'(t) = f(t, Y(t))

is a point along the solution at which a real-valued event function is zero:

    g(t, Y(t)) = 0

It is also possible to consider Boolean-valued event functions, in which case the event occurs when the function changes from True to False or vice versa.
The "EventLocator" method that is built into NDSolve works effectively as a controller method; it handles checking for events and taking the appropriate action, but the integration of the differential system is otherwise left completely to an underlying method.

In this section, examples are given to demonstrate the basic use of the "EventLocator" method and options. Subsequent sections show more involved applications of event location, such as period detection, Poincaré sections, and discontinuity handling.

These initialization commands load some useful packages that have some differential equations to solve and define some utility functions.

In[1]:= Needs["DifferentialEquations`NDSolveProblems`"];
        Needs["DifferentialEquations`NDSolveUtilities`"];
        Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
        Needs["GUIKit`"];

A simple example is locating an event, such as the moment when a pendulum started at a nonequilibrium position swings through its lowest point, and stopping the integration at that point.

This integrates the pendulum equation up to the first point at which the solution y[t] crosses the axis.

In[5]:= sol = NDSolve[{y''[t] + Sin[y[t]] == 0, y'[0] == 0, y[0] == 1}, y, {t, 0, 10},
          Method -> {"EventLocator", "Event" -> y[t]}]

Out[5]= {{y -> InterpolatingFunction[{{0., 1.67499}}, <>]}}

From the solution you can see that the pendulum reaches its lowest point y[t] = 0 at about t = 1.675. Using the InterpolatingFunctionAnatomy package, it is possible to extract the value from the InterpolatingFunction object.

This extracts the point at which the event occurs and makes a plot of the solution (black) and its derivative (blue) up to that point.

In[6]:= end = InterpolatingFunctionDomain[First[y /. sol]][[1, -1]];
        Plot[Evaluate[{y[t], y'[t]} /. First[sol]], {t, 0, end},
          PlotStyle -> {{Black}, {Blue}}]
Out[7]= (plot of the solution and its derivative on [0, 1.675]; y[t] decreases from 1 to 0 while y'[t] decreases to about -1)
When you use the event locator method, the events to be located and the action to take upon finding an event are specified through method options of the "EventLocator" method.
The default action on detecting an event is to stop the integration, as demonstrated earlier. The event action can be any expression. It is evaluated with numerical values substituted for the problem variables whenever an event is detected.

This prints the time and values each time the event y'[t] = y[t] is detected for a damped pendulum.

In[8]:= NDSolve[{y''[t] + .1 y'[t] + Sin[y[t]] == 0, y'[0] == 0, y[0] == 1},
          y, {t, 0, 10}, Method -> {"EventLocator", "Event" -> y'[t] - y[t],
            "EventAction" :> Print["y'[", t, "] = y[", t, "] = ", y[t]]}]

y'[2.49854] = y[2.49854] = -0.589753
y'[5.7876] = y[5.7876] = 0.501228
y'[9.03428] = y[9.03428] = -0.426645

Out[8]= {{y -> InterpolatingFunction[{{0., 10.}}, <>]}}
Note that in the example, the "EventAction" option was given using RuleDelayed (:>) to prevent it from evaluating except when the event is located. You can see from the printed output that when the action does not stop the integration, multiple instances of an event can be detected.

Events are detected when the sign of the event expression changes. You can restrict the event to be only for a sign change in a particular direction using the "Direction" option.

This collects the points at which the velocity changes from negative to positive for a damped driven pendulum. Reap and Sow are programming constructs that are useful for collecting data when you do not, at first, know how much data there will be. Reap[expr] gives the value of expr together with all expressions to which Sow has been applied during its evaluation. Here Reap encloses the use of NDSolve, and Sow is a part of the event action, which allows you to collect data for each instance of an event.

In[9]:= Reap[NDSolve[{y''[t] + .1 y'[t] + Sin[y[t]] == .1 Cos[t], y'[0] == 0, y[0] == 1},
          y, {t, 0, 50}, Method -> {"EventLocator", "Event" -> y'[t],
            "Direction" -> 1, "EventAction" :> Sow[{t, y[t], y'[t]}]}]]

Out[9]= {{{y -> InterpolatingFunction[{{0., 50.}}, <>]}},
         {{{3.55407, -0.879336, 1.87524*10^-15}, {10.4762, -0.832217, -5.04805*10^-16},
           {17.1857, -0.874939, -4.52416*10^-15}, {23.7723, -0.915352, 1.62717*10^-15},
           {30.2805, -0.927186, -1.17094*10^-16}, {36.7217, -0.910817, -2.63678*10^-16},
           {43.1012, -0.877708, 1.33227*10^-15}, {49.4282, -0.841083, -8.66494*10^-16}}}}
You may notice from the output of the previous example that the events are detected when the derivative is only approximately zero. When the method detects the presence of an event in a step of the underlying integrator (by a sign change of the event expression), it uses a numerical method to approximately find the position of the root. Since the location process is numerical, you should expect only approximate results. Location method options AccuracyGoal, PrecisionGoal, and MaxIterations can be given to those location methods that use FindRoot to control tolerances for finding the root.

For Boolean-valued event functions, an event occurs when the function switches from True to False or vice versa. The "Direction" option can be used to restrict the event only to changes from True to False ("Direction" -> -1) or only to changes from False to True ("Direction" -> 1).

This opens up a small window with a button, which when clicked changes the value of the variable stop to True from its initialized value of False.

In[10]:= NDSolve`stop = False;
         GUIRun[Widget["Panel", {Widget["Button", {"label" -> "Stop",
           BindEvent["action", Script[NDSolve`stop = True]]}]}]]
In[12]:= NDSolve[{y''[t] + Sin[y[t]] == 0, y[0] == 1, y'[0] == 0}, y, {t, 0, Infinity},
          Method -> {"EventLocator", "Event" :> NDSolve`stop}, MaxSteps -> Infinity]

Out[12]= {{y -> InterpolatingFunction[{{0., 620015.}}, <>]}}
Take note that in this example, the "Event" option was specified with RuleDelayed (:>) to prevent the immediate value of stop from being evaluated and set up as the function.

You can specify more than one event. If the event function evaluates numerically to a list, then each component of the list is considered to be a separate event. You can specify different actions, directions, etc. for each of these events by specifying the values of these options as lists of the appropriate length.

This integrates the pendulum equation up until the point at which the button is clicked. The number of complete swings of the pendulum is kept track of during the integration.

In[13]:= NDSolve`stop = False; swings = 0;
         {NDSolve[{y''[t] + Sin[y[t]] == 0, y[0] == 0, y'[0] == 1}, y, {t, 0, 1000000},
            Method -> {"EventLocator", "Event" :> {y[t], NDSolve`stop},
              "EventAction" :> {swings++, Throw[Null, "StopIntegration"]},
              "Direction" -> {1, All}}, MaxSteps -> Infinity], swings}

Out[13]= {{{y -> InterpolatingFunction[{{0., 24903.7}}, <>]}}, 3693}
As you can see from the previous example, it is possible to mix real- and Boolean-valued event functions. The expected number of components and the type of each component are based on the values at the initial condition and need to be consistent throughout the integration.

The "EventCondition" option of "EventLocator" allows you to specify additional Boolean conditions that need to be satisfied for an event to be tested. It is advantageous to use this instead of a Boolean event when possible, because the root-finding process can be done more efficiently.

This stops the integration of a damped pendulum at the first time that y(t) = 0 once the decay has reduced the energy integral to -0.9.

In[14]:= sol = NDSolve[{y''[t] + .1 y'[t] + Sin[y[t]] == 0, y'[0] == 1, y[0] == 0},
          y, {t, 0, 100}, Method -> {"EventLocator", "Event" -> y[t],
            "EventCondition" -> (y'[t]^2/2 - Cos[y[t]] < -0.9),
            "EventAction" :> Throw[end = t, "StopIntegration"]}]

Out[14]= {{y -> InterpolatingFunction[{{0., 19.4446}}, <>]}}

This makes a plot of the solution (black), the derivative (blue), and the energy integral (green). The energy threshold is shown in red.

In[15]:= Plot[Evaluate[{y[t], y'[t], y'[t]^2/2 - Cos[y[t]], -.9} /. First[sol]],
          {t, 0, end}, PlotStyle -> {{Black}, {Blue}, {Green}, {Red}}]
Out[15]= (plot of the solution, its derivative, and the energy integral over 0 <= t <= 19.4; the energy curve decays to the threshold -0.9)
The Method option of "EventLocator" allows the specification of the numerical method to use in the integration.

Event Location Methods

The "EventLocator" method works by taking a step of the underlying method and checking to see if the sign (or parity) of any of the event functions is different at the step endpoints. Event functions are expected to be real- or Boolean-valued, so if there is a change, there must be an event in the step interval.

For each event function that has an event occurrence in a step, a refinement procedure is carried out to locate the position of the event within the interval. There are several different methods that can be used to refine the position. These include simply taking the solution at the beginning or the end of the integration interval, a linear interpolation of the event value, and using bracketed root-finding methods. The appropriate method to use depends on a trade-off between execution speed and location accuracy.
If the event action is to stop the integration, then the particular value at which the integration is stopped depends on the value obtained from the "EventLocationMethod" option of "EventLocator".

Location of a single event is usually fast enough that the method used will not significantly influence the overall computation time. However, when an event is detected multiple times, the location refinement method can have a substantial effect.

"StepBegin" and "StepEnd" Methods

The crudest methods are appropriate when the exact position of the event location does not really matter, or does not reflect anything with precision in the underlying calculation. The stop-button example from the previous section is such a case: time steps are computed so quickly that there is no way you can time the click of a button to be within a particular time step, much less at a particular point within a time step. Thus, based on the inherent accuracy of the event, there is no point in refining at all. You can specify this by using the "StepBegin" or "StepEnd" location methods. In any example where the definition of the event is heuristic or somewhat imprecise, this can be an appropriate choice.
"LinearInterpolation" Method

When event results are needed for the purpose of points to plot in a graph, you only need to locate the event to the resolution of the graph. While just using the step end is usually too crude for this, a single linear interpolation based on the event function values suffices.

Denote the event function values at successive mesh points of the numerical integration by:

    w_n = g(t_n, y_n),   w_(n+1) = g(t_(n+1), y_(n+1))

Linear interpolation gives the fraction of the step at which the event occurs:

    w_e = w_n / (w_n - w_(n+1))

A linear approximation of the event time is then:

    t_e = t_n + w_e h_n

Linear interpolation could also be used to approximate the solution at the event time. However, since derivative values f_n = f(t_n, y_n) and f_(n+1) = f(t_(n+1), y_(n+1)) are available at the mesh points, a better approximation of the solution at the event can be computed cheaply using cubic Hermite interpolation as:

    y_e = k_n y_n + k_(n+1) y_(n+1) + l_n f_n + l_(n+1) f_(n+1)

for suitably defined interpolation weights:

    k_n     = (w_e - 1)^2 (2 w_e + 1)
    k_(n+1) = (3 - 2 w_e) w_e^2
    l_n     = h_n (w_e - 1)^2 w_e
    l_(n+1) = h_n (w_e - 1) w_e^2
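The interpolation formulas can be checked directly on a known solution; this Python sketch (helper names are illustrative) applies the linear event fraction and the cubic Hermite weights to y(t) = sin(t) with event function g = y' = cos(t), whose root is at pi/2:

```python
import math

def event_fraction(w_n, w_np1):
    # fraction of the step at which the linearly interpolated event occurs
    return w_n / (w_n - w_np1)

def hermite_weights(we, h):
    # cubic Hermite weights at fraction we of a step of size h
    kn   = (we - 1) ** 2 * (2 * we + 1)
    knp1 = (3 - 2 * we) * we ** 2
    ln   = h * (we - 1) ** 2 * we
    lnp1 = h * (we - 1) * we ** 2
    return kn, knp1, ln, lnp1

tn, h = 1.5, 0.2                      # mesh interval [1.5, 1.7] bracketing pi/2
w_n, w_np1 = math.cos(tn), math.cos(tn + h)
we = event_fraction(w_n, w_np1)
te = tn + we * h                      # linear approximation of the event time

# Hermite solution estimate uses y and y' = f at both step ends
kn, knp1, ln, lnp1 = hermite_weights(we, h)
ye = (kn * math.sin(tn) + knp1 * math.sin(tn + h)
      + ln * math.cos(tn) + lnp1 * math.cos(tn + h))
print(te, ye)  # te close to pi/2, ye close to sin(pi/2) = 1
```

Note that the event time comes from a single linear interpolation, while the solution value uses the higher-order Hermite cubic, exactly the division of labor described above.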
You can specify refinement based on a single linear interpolation with the setting "LinearInterpolation".
This computes the solution for a single period of the pendulum equation and plots the solution for that period.

In[16]:= sol = First[NDSolve[{y''[t] + Sin[y[t]] == 0, y[0] == 3, y'[0] == 0},
          y, {t, 0, Infinity}, Method -> {"EventLocator", "Event" -> y'[t],
            "EventAction" :> Throw[end = t, "StopIntegration"], "Direction" -> -1,
            "EventLocationMethod" -> "LinearInterpolation",
            Method -> "ExplicitRungeKutta"}]]

Out[17]= (plot of the derivative over one period, 0 <= t <= 16, with values between about -3 and 1)
At the resolution of the plot over the entire period, you cannot see that the endpoint may not be exactly where the derivative hits the axis. However, if you zoom in enough, you can see the error.
This shows a plot just near the endpoint.

In[18]:= Plot[Evaluate[y'[t] /. sol], {t, end*(1 - .001), end}, PlotStyle -> Blue]

Out[18]= (plot near the endpoint: as t approaches end, around 16.155, y'[t] is still between about 0.0005 and 0.002 above zero, showing the interpolation error)
The linear interpolation method is sufficient for most viewing purposes, such as the Poincaré section examples shown in the following section. Note that for Boolean-valued event functions, linear interpolation is effectively only one bisection step, so the linear interpolation method may be inadequate for graphics.
Brent's Method

The default location method is the event location method "Brent", which finds the location of the event using FindRoot with Brent's method. Brent's method starts with a bracketed root and combines steps based on interpolation and bisection, guaranteeing a convergence rate at least as good as that of bisection. You can control the accuracy and precision to which FindRoot tries to get the root of the event function using method options for the "Brent" event location method. The default is to find the root to the same accuracy and precision as NDSolve is using for local error control.

For methods that support continuous, or dense, output, the argument for the event function can be found quite efficiently simply by using the continuous output formula. However, for methods that do not support continuous output, the solution needs to be computed by taking a step of the underlying method, which can be relatively expensive. An alternate way of getting a solution approximation that is not accurate to the method order, but is consistent with using FindRoot on the InterpolatingFunction object returned from NDSolve, is to use cubic Hermite interpolation, obtaining approximate solution values in the middle of the step by interpolation based on the solution values and solution derivative values at the step ends.
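A stripped-down illustration of the bracketed refinement idea follows; it uses plain bisection, which provides the guaranteed-convergence half of Brent's method (a production implementation would interleave inverse-quadratic and secant interpolation steps for speed):

```python
import math

def refine_event(g, t_lo, t_hi, tol=1e-12):
    # bracketed refinement of an event: g(t_lo) and g(t_hi) must have
    # opposite signs, so a root is guaranteed to lie in [t_lo, t_hi]
    g_lo = g(t_lo)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if g_lo * g(mid) <= 0:       # root lies in the left half
            t_hi = mid
        else:                        # root lies in the right half
            t_lo, g_lo = mid, g(mid)
    return 0.5 * (t_lo + t_hi)

# the event function g = cos(t) changes sign on [1, 2]; its root is pi/2
root = refine_event(math.cos, 1.0, 2.0)
print(root)  # close to pi/2
```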
Comparison

This example integrates the pendulum equation for a number of different event location methods and compares the time when the event is found.

This defines the event location methods to use.

In[19]:= eventmethods = {"StepBegin", "StepEnd", "LinearInterpolation", Automatic};

This integrates the system and prints out the method used and the value of the independent variable when the integration is terminated.

In[20]:= Map[
           NDSolve[{y''[t] + Sin[y[t]] == 0, y[0] == 3, y'[0] == 0}, y, {t, 0, Infinity},
             Method -> {"EventLocator", "Event" -> y'[t],
               "EventAction" :> Throw[Print[#, ": t = ", t, " y'[t] = ", y'[t]],
                 "StopIntegration"], "Direction" -> -1,
               Method -> "ExplicitRungeKutta", "EventLocationMethod" -> #}] &,
           eventmethods];

StepBegin: t = 15.8022 y'[t] = 0.0508999
StepEnd: t = 16.226 y'[t] = -0.00994799
LinearInterpolation: t = 16.1567 y'[t] = -0.000162503
Automatic: t = 16.1555 y'[t] = -2.35922*10^-16
Examples

Falling Body

This system models a body falling under the force of gravity and encountering air resistance (see [M04]). The event action stores the time when the falling body hits the ground and stops the integration.

In[21]:= sol = y[t] /. First[NDSolve[{y''[t] == -1 + y'[t]^2, y[0] == 1, y'[0] == 0},
          y, {t, 0, Infinity}, Method -> {"EventLocator", "Event" :> y[t],
            "EventAction" :> Throw[tend = t, "StopIntegration"]}]]

Out[21]= InterpolatingFunction[{{0., 1.65745}}, <>][t]
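This particular model has the closed-form solution y(t) = 1 - ln(cosh t), so the impact time is arccosh(e), about 1.65745, matching the event time found above. A self-contained Python sketch (fixed-step RK4 plus sign-change detection and bisection refinement; all names are illustrative, not any NDSolve API) reproduces it:

```python
import math

def rhs(t, s):
    # falling body y'' = -1 + (y')^2, written as a first-order system
    y, v = s
    return (v, -1.0 + v * v)

def rk4_step(t, s, h):
    def nudge(state, deriv, c):
        return (state[0] + c * deriv[0], state[1] + c * deriv[1])
    k1 = rhs(t, s)
    k2 = rhs(t + h / 2, nudge(s, k1, h / 2))
    k3 = rhs(t + h / 2, nudge(s, k2, h / 2))
    k4 = rhs(t + h, nudge(s, k3, h))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

t, s, h = 0.0, (1.0, 0.0), 0.01       # y(0) = 1, y'(0) = 0
while True:
    s_new = rk4_step(t, s, h)
    if s_new[0] <= 0.0:               # event y = 0 bracketed in [t, t + h]
        lo, hi = t, t + h
        for _ in range(50):           # bisection, re-stepping from (t, s)
            mid = 0.5 * (lo + hi)
            if rk4_step(t, s, mid - t)[0] <= 0.0:
                hi = mid
            else:
                lo = mid
        t_hit = 0.5 * (lo + hi)
        break
    t, s = t + h, s_new

print(t_hit)  # about 1.65745
```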
This plots the solution and highlights the initial and final points (green and red) by encircling them.

In[22]:= plt = Plot[sol, {t, 0, tend}, Frame -> True, Axes -> False,
           PlotStyle -> Blue, DisplayFunction -> Identity];
         grp = Graphics[{{Green, Circle[{0, 1}, 0.025]},
           {Red, Circle[{tend, sol /. t -> tend}, 0.025]}}];
         Show[plt, grp]

Out[24]= (plot of the height decreasing from 1 at t = 0 to 0 at t = 1.657, with the initial and final points circled)
Period of the Van der Pol Oscillator

The Van der Pol oscillator is an example of an extremely stiff system of ODEs. The event locator method can call any method for actually doing the integration of the ODE system. The default method, Automatic, automatically switches to a method appropriate for stiff systems when necessary, so that stiffness does not present a problem.

This integrates the Van der Pol system for a particular value of the parameter μ = 1000 up to the point where the variable y2 reaches its initial value and direction.

In[25]:= vsol = NDSolve[{y1'[t] == y2[t],
           y2'[t] == 1000 (1 - y1[t]^2) y2[t] - y1[t],
           y1[0] == 2, y2[0] == 0},
          {y1, y2}, {t, 3000},
          Method -> {"EventLocator", "Event" -> y2[t], "Direction" -> -1}]

Out[25]= {{y1 -> InterpolatingFunction[{{0., 1614.29}}, <>],
          y2 -> InterpolatingFunction[{{0., 1614.29}}, <>]}}

Note that the event at the initial condition is not considered. By selecting the endpoint of the NDSolve solution, it is possible to write a function that returns the period as a function of μ.
This defines a function that returns the period as a function of μ.

In[26]:= vper[μ_] := Module[{vsol},
           vsol = First[y2 /. NDSolve[{y1'[t] == y2[t],
               y2'[t] == μ (1 - y1[t]^2) y2[t] - y1[t],
               y1[0] == 2, y2[0] == 0},
             {y1, y2}, {t, Max[100, 3 μ]},
             Method -> {"EventLocator", "Event" -> y2[t], "Direction" -> -1}]];
           InterpolatingFunctionDomain[vsol][[1, -1]]];

This uses the function to compute the period at μ = 1000.

In[27]:= vper[1000]

Out[27]= 1614.29

Of course, it is easy to generalize this method to any system with periodic solutions.
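The same period-detection idea can be sketched in plain Python for a system whose period is known exactly; here the harmonic oscillator y'' = -y (period 2π) stands in for the Van der Pol system, with a hypothetical `period` helper combining fixed-step RK4, an upward zero crossing of the first component, and linear interpolation of the event time:

```python
import math

def rk4_step(f, t, s, h):
    # one classical Runge-Kutta step for a system given as a list of values
    k1 = f(t, s)
    k2 = f(t + h / 2, [si + h / 2 * ki for si, ki in zip(s, k1)])
    k3 = f(t + h / 2, [si + h / 2 * ki for si, ki in zip(s, k2)])
    k4 = f(t + h, [si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def period(f, s0, h=0.001, t_max=100.0):
    # time of the first return of component 0 to its initial value,
    # crossing upward, like "Direction" -> 1 in the text
    t, s = 0.0, list(s0)
    while t < t_max:
        s_new = rk4_step(f, t, s, h)
        g_old, g_new = s[0] - s0[0], s_new[0] - s0[0]
        if g_old < 0.0 <= g_new:
            return t + h * g_old / (g_old - g_new)   # linear interpolation
        t, s = t + h, s_new
    return None

harmonic = lambda t, s: [s[1], -s[0]]    # y'' = -y as a first-order system
p = period(harmonic, [0.0, 1.0])
print(p)  # close to 2*pi
```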
Poincaré Sections

Using Poincaré sections is a useful technique for visualizing the solutions of high-dimensional differential systems. For an interactive graphical interface, see the package EquationTrekker.
The Hénon-Heiles System

Define the Hénon-Heiles system that models stellar motion in a galaxy.

This gets the Hénon-Heiles system from the NDSolveProblems package.

In[28]:= system = GetNDSolveProblem["HenonHeiles"];
         vars = system["DependentVariables"];
         eqns = {system["System"], system["InitialConditions"]}

Out[29]= {{Y1'[T] == Y3[T], Y2'[T] == Y4[T],
          Y3'[T] == -Y1[T] (1 + 2 Y2[T]),
          Y4'[T] == -Y1[T]^2 + (-1 + Y2[T]) Y2[T]},
         {Y1[0] == 3/25, Y2[0] == 3/25, Y3[0] == 3/25, Y4[0] == 3/25}}

The Poincaré section of interest in this case is the collection of points in the Y2-Y4 plane when the orbit passes through Y1 = 0.

Since the actual result of the numerical integration is not required, it is possible to avoid storing all the data in InterpolatingFunction by specifying the output variables list (in the second argument to NDSolve) as empty, or {}. This means that NDSolve will produce no InterpolatingFunction as output, avoiding storing a lot of unnecessary data. NDSolve does give a message, NDSolve::noout, warning that there will be no output functions, but it can safely be turned off in this case since the data of interest is collected from the event actions.
The linear interpolation event location method is used because the purpose of the computation here is to view the results in a graph with relatively low resolution. If you were doing an example where you needed to zoom in on the graph to great detail or to find a feature, such as a fixed point of the Poincaré map, it would be more appropriate to use the default location method.

This turns off the message warning about no output.

In[30]:= Off[NDSolve::noout];

This integrates the Hénon-Heiles system using a fourth-order explicit Runge-Kutta method with fixed step size of 0.25. The event action is to use Sow on the values of Y2 and Y4.

In[31]:= data = Reap[
           NDSolve[eqns, {}, {T, 10000},
             Method -> {"EventLocator", "Event" -> Y1[T],
               "EventAction" :> Sow[{Y2[T], Y4[T]}],
               "EventLocationMethod" -> "LinearInterpolation",
               "Method" -> {"FixedStep", "Method" ->
                 {"ExplicitRungeKutta", "DifferenceOrder" -> 4}}},
             StartingStepSize -> 0.25, MaxSteps -> Infinity]];
In[32]:=
psdata = data@@- 1, 1DD; ListPlot@psdata, Axes Ø False, Frame Ø True, AspectRatio Ø 1D 0.2
0.1
Out[33]=
0.0
-0.1
-0.2 -0.2
-0.1
0.0
0.1
0.2
Since the Hénon-Heiles system is Hamiltonian, a symplectic method gives much better qualitative results for this example.
This integrates the Hénon-Heiles system using a fourth-order symplectic partitioned Runge-Kutta method with fixed step size of 0.25. The event action is to use Sow on the values of Y2 and Y4.

In[34]:= sdata = Reap[
           NDSolve[eqns, {}, {T, 10000},
             Method -> {"EventLocator", "Event" -> Y1[T],
               "EventAction" :> Sow[{Y2[T], Y4[T]}],
               "EventLocationMethod" -> "LinearInterpolation",
               "Method" -> {"SymplecticPartitionedRungeKutta",
                 "DifferenceOrder" -> 4,
                 "PositionVariables" -> {Y1[T], Y2[T]}}},
             StartingStepSize -> 0.25, MaxSteps -> Infinity]];

This plots the Poincaré section. The collected data is found in the last part of the result of Reap, and the list of points is the first part of that.

In[35]:= psdata = sdata[[-1, 1]];
         ListPlot[psdata, Axes -> False, Frame -> True, AspectRatio -> 1]

Out[36]= (scatter plot of the Poincaré section over the same region; the symplectic integration shows a cleaner structure)
The ABC Flow

This loads an example problem of the Arnold-Beltrami-Childress (ABC) flow that is used to model chaos in laminar flows of the three-dimensional Euler equations.

In[37]:= system = GetNDSolveProblem["ArnoldBeltramiChildress"];
         eqs = system["System"];
         vars = system["DependentVariables"];
         icvars = vars /. T -> 0;

This defines a splitting Y1, Y2 of the system by setting some of the right-hand side components to zero.

In[41]:= Y1 = eqs; Y1[[2, 2]] = 0; Y1

Out[41]= {Y1'[T] == 3/4 Cos[Y2[T]] + Sin[Y3[T]], Y2'[T] == 0,
          Y3'[T] == Cos[Y1[T]] + 3/4 Sin[Y2[T]]}

In[42]:= Y2 = eqs; Y2[[{1, 3}, 2]] = 0; Y2

Out[42]= {Y1'[T] == 0, Y2'[T] == Cos[Y3[T]] + Sin[Y1[T]], Y3'[T] == 0}
This defines the implicit midpoint method.

In[43]:= ImplicitMidpoint = {"ImplicitRungeKutta",
           "Coefficients" -> "ImplicitRungeKuttaGaussCoefficients",
           "DifferenceOrder" -> 2,
           "ImplicitSolver" -> {"FixedPoint", AccuracyGoal -> 10,
             PrecisionGoal -> 10, "IterationSafetyFactor" -> 1}};

This constructs a second-order splitting method that retains volume and reversing symmetries.

In[44]:= ABCSplitting = {"Splitting", "DifferenceOrder" -> 2,
           "Equations" -> {Y2, Y1, Y2},
           "Method" -> {"LocallyExact", ImplicitMidpoint, "LocallyExact"}};

This defines a function that gives the Poincaré section for a particular initial condition.

In[45]:= psect[ics_] := Module[{reapdata},
           reapdata = Reap[
             NDSolve[{eqs, Thread[icvars == ics]}, {}, {T, 1000},
               Method -> {"EventLocator", "Event" -> Y2[T],
                 "EventAction" :> Sow[{Y1[T], Y3[T]}],
                 "EventLocationMethod" -> "LinearInterpolation",
                 Method -> ABCSplitting},
               StartingStepSize -> 1/4, MaxSteps -> Infinity]];
           reapdata[[-1, 1]]];
This finds the Poincaré sections for several different initial conditions and flattens them together into a single list of points. In[46]:=
data = Mod@Map@psect, 884.267682454609692, 0, 0.9952906114885919<, 81.6790790859443243, 0, 2.1257099470901704<, 82.9189523719753327, 0, 4.939152797323216<, 83.1528896559036776, 0, 4.926744120488727<, 80.9829282640373566, 0, 1.7074633238173198<, 80.4090394012299985, 0, 4.170087631574883<, 86.090600411133905, 0, 2.3736566160602277<, 86.261716134007686, 0, 1.4987884558838156<, 81.005126683795467, 0, 1.3745418575363608<, 81.5880780704325377, 0, 1.3039536044289253<, 83.622408133554125, 0, 2.289597511313432<, 80.030948690635763183, 0, 4.306922133429981<, 85.906038850342371, 0, 5.000045498132029<
Out[47]= (scatter plot of the Poincaré section data; both coordinates range over [0, 2 Pi])
Bouncing Ball

This example is a generalization of an example in [SGT03]. It models a ball bouncing down a ramp with a given profile. The example is good for demonstrating how you can use multiple invocations of NDSolve with event location to model some behavior.
This defines a function that computes the solution from one bounce to the next. The solution is computed until the next time the path intersects the ramp.
In[48]:= OneBounce[k_, ramp_][{t0_, x0_, xp0_, y0_, yp0_ …
This defines the function for the bounce when the ball hits the ramp. The formula is based on reflection about the normal to the ramp, assuming that only the fraction k of the energy is left after a bounce.
In[49]:= Reflection[k_, ramp_][{x_, xp_, y_, yp_ …
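The Reflection code is truncated in this copy, but the geometry it describes is simple to state: the velocity is mirrored about the normal to the ramp, and the speed is rescaled so that only the fraction k of the kinetic energy survives the bounce. Here is a minimal Python sketch of that formula (the function name and signature are illustrative, not the tutorial's):

```python
import math

def reflect(k, slope, vx, vy):
    """Velocity after a bounce on a ramp y = ramp(x) with ramp'(x) = slope.
    The velocity is mirrored about the ramp normal and the speed is scaled
    by sqrt(k), so the kinetic energy is multiplied by k."""
    nx, ny = -slope, 1.0                  # unnormalized upward normal (-r', 1)
    nn = math.hypot(nx, ny)
    nx, ny = nx / nn, ny / nn
    vdotn = vx * nx + vy * ny
    rx = vx - 2 * vdotn * nx              # specular reflection v - 2 (v.n) n
    ry = vy - 2 * vdotn * ny
    s = math.sqrt(k)
    return s * rx, s * ry
```

On a flat ramp (slope 0), a velocity (1, -1) reflects to (1, 1) with no energy loss, and halves in each component when k = 1/4.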
In[50]:= BouncingBall[k_, ramp_, {x0_, y0_ … 0.01], 2, 25]], _, Rule];
    end = data[[1, 1]]; data = Last[data];
    bounces = ("Bounces" /. data);
    xmax = Max[bounces[[All, 1]]]; xmin = Min[bounces[[All, 1]]];
    ymax = Max[bounces[[All, 2]]]; ymin = Min[bounces[[All, 2]]];
    Show[{Plot[ramp[x], {x, xmin, xmax},
       PlotRange -> {{xmin, xmax}, {ymin, ymax}},
       Epilog -> {PointSize[.004], Map[Point, bounces]},
       AspectRatio -> (ymax - ymin)/(xmax - xmin)],
      ParametricPlot[Evaluate[{Piecewise["X" /. data], Piecewise["Y" /. data] …
In[51]:= ramp[x_] := If[x < 1, 1 - x, 0];
    BouncingBall[.7, ramp, {0, 1.25}]
Out[52]= (plot of the ball bouncing down the linear ramp)
The ramp is now defined to be a quarter circle.
In[53]:= circle[x_] := If[x < 1, Sqrt[1 - x^2], 0];
    BouncingBall[.7, circle, {.1, 1.25}]
Out[54]= (plot of the ball bouncing down the quarter-circle ramp)
This adds a slight waviness to the ramp.
In[55]:= wavyramp[x_] := If[x < 1, 1 - x + .05 Cos[11 Pi x], 0];
    BouncingBall[.7, wavyramp, {0, 1.25}]
Out[56]= (plot of the ball bouncing down the wavy ramp)
Event Direction

Ordinary Differential Equation

This example illustrates the solution of the restricted three-body problem, a standard nonstiff test system of four equations. The example traces the path of a spaceship traveling around the moon and returning to the earth (see p. 246 of [SG75]). The ability to specify multiple events and the direction of the zero crossing is important.
The initial conditions have been chosen to make the orbit periodic. The value of m corresponds to a spaceship traveling around the moon and the earth.
In[57]:= m = 1/82.45; mstar = 1 - m;
    r1 = Sqrt[(y1[t] + m)^2 + y2[t]^2];
    r2 = Sqrt[(y1[t] - mstar)^2 + y2[t]^2];
    eqns = {{y1'[t] == y3[t], y1[0] == 1.2},
      {y2'[t] == y4[t], y2[0] == 0},
      {y3'[t] == 2 y4[t] + y1[t] - mstar (y1[t] + m)/r1^3 - m (y1[t] - mstar)/r2^3,
       y3[0] == 0},
      {y4'[t] == -2 y3[t] + y2[t] - mstar y2[t]/r1^3 - m y2[t]/r2^3,
       y4[0] == -1.04935750983031990726`20.020923474937767}};
The event function is the derivative of the squared distance from the initial conditions. A local maximum or minimum of the distance occurs when this value crosses zero.
In[62]:= ddist = 2 (y3[t] (y1[t] - 1.2) + y4[t] y2[t]);
There are two events, which for this example are the same. The first event (with Direction 1) corresponds to the point where the distance from the initial point is a local minimum, so that the spaceship returns to its original position. The event action is to store the time of the event in the variable tfinal and to stop the integration. The second event (with Direction -1) corresponds to a local maximum. The event action is to store the time at which the spaceship is farthest from the starting position in the variable tfar.
In[63]:= sol = First[NDSolve[eqns, {y1, y2, y3, y4}, {t, Infinity},
    Method -> {"EventLocator", "Event" -> {ddist, ddist},
      "Direction" -> {1, -1},
      "EventAction" :> {Throw[tfinal = t, "StopIntegration"], tfar = t},
      Method -> "ExplicitRungeKutta"}]]
Out[63]= {y1 -> InterpolatingFunction[{{0., 6.19217}}, <>],
  y2 -> InterpolatingFunction[{{0., 6.19217}}, <>],
  y3 -> InterpolatingFunction[{{0., 6.19217}}, <>],
  y4 -> InterpolatingFunction[{{0., 6.19217}}, <>]}
The first two solution components are coordinates of the body of infinitesimal mass, so plotting one against the other gives the orbit of the body. This displays the half-orbit up to the point where the spaceship is farthest from the initial position.
In[64]:= ParametricPlot[{y1[t], y2[t]} /. sol, {t, 0, tfar}]
Out[64]= (plot of the half-orbit)
This displays one complete orbit when the spaceship returns to the initial position.
In[65]:= ParametricPlot[{y1[t], y2[t]} /. sol, {t, 0, tfinal}]
Out[65]= (plot of the complete periodic orbit)
Delay Differential Equation

The following system models an infectious disease (see [HNW93], [ST00] and [ST01]).
In[66]:= system = {y1'[t] == -y1[t] y2[t - 1] + y2[t - 10], y1[t /; t <= 0] == 5,
     y2'[t] == y1[t] y2[t - 1] - y2[t], y2[t /; t <= 0] == 1/10,
     y3'[t] == y2[t] - y2[t - 10], y3[t /; t <= 0] == 1};
    vars = {y1[t], y2[t], y3[t]};
This collects the data for a local maximum of each component as the integration proceeds. A separate tag for Sow and Reap is used to distinguish the components.
In[68]:= data = Reap[
     sol = First[NDSolve[system, vars, {t, 0, 40},
       Method -> {"EventLocator",
         "Event" :> {y1'[t], y2'[t], y3'[t]},
         "EventAction" :> {Sow[{t, y1[t]}, 1], Sow[{t, y2[t]}, 2], Sow[{t, y3[t]}, 3]},
         "Direction" -> {-1, -1, -1}}]]];
In[69]:= colors = {{Red}, {Blue}, {Green}};
    plots = Plot[Evaluate[vars /. sol], {t, 0, 40}, PlotStyle -> colors];
    max = ListPlot[Part[data, -1, All, 1], PlotStyle -> colors];
    Show[plots, max]
Out[72]= (plot of the three components for 0 <= t <= 40 with their local maxima marked)
Discontinuous Equations and Switching Functions

In many applications the function in a differential system may not be analytic or continuous everywhere. A common discontinuous problem that arises in practice involves a switching function g:

    y' = fI(t, y)   if g(t, y) > 0
         fII(t, y)  if g(t, y) < 0

In order to illustrate the difficulty in crossing a discontinuity, consider the following example [GØ84] (see also [HNW93]):

    y' = t^2 + 2 y^2        if (t + 1/20)^2 + (y + 3/20)^2 <= 1
         2 t^2 + 3 y^2 - 2  if (t + 1/20)^2 + (y + 3/20)^2 > 1
Here is the input for the entire system. The switching function is assigned to the symbol event, and the function defining the system depends on the sign of the switching function.
In[73]:= t0 = 0; ics0 = 3/10;
    event = (t + 1/20)^2 + (y[t] + 3/20)^2 - 1;
    system = {y'[t] == If[event <= 0, t^2 + 2 y[t]^2, 2 t^2 + 3 y[t]^2 - 2],
      y[t0] == ics0};
The symbol odemethod is used to indicate the numerical method that should be used for the integration. For comparison, you might want to define a different method, such as "ExplicitRungeKutta", and rerun the computations in this section to see how other methods behave.
In[77]:= odemethod = Automatic;
This solves the system on the interval [0, 1] and collects data for the mesh points of the integration using Reap and Sow.
In[78]:= data = Reap[
      sol = y[t] /. First[NDSolve[system, y, {t, t0, 1},
         Method -> odemethod, MaxStepFraction -> 1,
         StepMonitor :> Sow[t]]]][[2, 1]];
    sol
Out[79]= InterpolatingFunction[{{0., 1.}}, <>][t]
Advanced Numerical Differential Equation Solving in Mathematica
107
Here is a plot of the solution.
In[80]:= dirsol = Plot[sol, {t, t0, 1}]
Out[80]= (plot of the solution for 0 <= t <= 1; the values range from about 0.3 to 0.7)
Despite the fact that a solution has been obtained, it is not clear whether it has been obtained efficiently. The following example shows that the crossing of the discontinuity presents difficulties for the numerical solver.
This defines a function that displays the mesh points of the integration together with the number of integration steps that are taken.
In[81]:= StepPlot[data_, opts___?OptionQ] := Module[{sdata},
    sdata = Transpose[{data, Range[Length[data]]}];
    ListPlot[sdata, opts …
In[82]:= StepPlot[data]
Out[82]= (step plot: about 100 steps, with the steps clustering near the discontinuity)
One of the most efficient methods of crossing a discontinuity is to break the integration by restarting at the point of discontinuity. The following example shows how to use the "EventLocator" method to accomplish this.
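The restart strategy itself is not specific to NDSolve. As a language-neutral illustration, the following Python sketch (all names are hypothetical) integrates the example system with the classical Runge-Kutta method, detects a sign change of the switching function over a step, refines the crossing by bisection on the substep, and then restarts on the other branch. It recovers the discontinuity location near 0.6234 quoted below.

```python
def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_to_event(f, g, t, y, tend, h):
    """Integrate y' = f(t, y) until the switching function g changes sign
    over a step, then refine the crossing by bisection on the substep."""
    while t < tend - 1e-14:
        hs = min(h, tend - t)
        ynew = rk4_step(f, t, y, hs)
        if g(t, y) * g(t + hs, ynew) < 0:
            lo, hi = 0.0, hs
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                ymid = rk4_step(f, t, y, mid)
                if g(t, y) * g(t + mid, ymid) < 0:
                    hi = mid
                else:
                    lo = mid
            te = t + 0.5 * (lo + hi)
            return te, rk4_step(f, t, y, te - t), True
        t, y = t + hs, ynew
    return t, y, False

# the discontinuous example above
f1 = lambda t, y: t ** 2 + 2 * y ** 2          # branch where g <= 0
f2 = lambda t, y: 2 * t ** 2 + 3 * y ** 2 - 2  # branch where g > 0
g = lambda t, y: (t + 1 / 20) ** 2 + (y + 3 / 20) ** 2 - 1

t1, yev, hit = integrate_to_event(f1, g, 0.0, 0.3, 1.0, 0.01)

# restart on the other branch from the located discontinuity
t, y = t1, yev
while t < 1.0 - 1e-14:
    hs = min(0.01, 1.0 - t)
    y = rk4_step(f2, t, y, hs)
    t += hs
```

The bisection plays the role of the event refinement; any bracketing root finder, such as Brent's method, could be substituted.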
This numerically integrates the first part of the system up to the point of discontinuity. The switching function is given as the event. The direction of the event is restricted to a change from negative to positive. When the event is found, the solution and the time of the event are stored by the event action.
In[83]:= system1 = {y'[t] == t^2 + 2 y[t]^2, y[t0] == ics0};
    data1 = Reap[
       sol1 = y[t] /. First[NDSolve[system1, y, {t, t0, 1},
          Method -> {"EventLocator", "Event" -> event, "Direction" -> 1,
            "EventAction" :> Throw[t1 = t; ics1 = y[t], "StopIntegration"],
            Method -> odemethod},
          MaxStepFraction -> 1, StepMonitor :> Sow[t]]]][[2, 1]];
    sol1
Out[85]= InterpolatingFunction[{{0., 0.623418}}, <>][t]
Using the discontinuity found by the "EventLocator" method as a new initial condition, the integration can now be continued. This defines a system and initial condition, solves the system numerically, and collects the data used for the mesh points.
In[86]:= system2 = {y'[t] == 2 t^2 + 3 y[t]^2 - 2, y[t1] == ics1};
    data2 = Reap[
       sol2 = y[t] /. First[NDSolve[system2, y, {t, t1, 1},
          Method -> odemethod, MaxStepFraction -> 1,
          StepMonitor :> Sow[t]]]][[2, 1]];
    sol2
Out[88]= InterpolatingFunction[{{0.623418, 1.}}, <>][t]
A plot of the two solutions is very similar to that obtained by solving the entire system at once.
In[89]:= evsol = Plot[If[t <= t1, sol1, sol2], {t, 0, 1}]
Out[89]= (plot of the two-piece solution for 0 <= t <= 1)
Examining the mesh points, it is clear that far fewer steps were taken by the method and that the problematic behavior encountered near the discontinuity has been eliminated.
In[90]:= StepPlot[Join[data1, data2]]
Out[90]= (step plot: about 60 steps, with no clustering near the discontinuity)
The value of the discontinuity is given as 0.6234 in [HNW93], which coincides with the value found by the "EventLocator" method. In this example it is possible to solve the system analytically and use a numerical method to check the value. The solution of the system up to the discontinuity can be represented in terms of Bessel and gamma functions.
In[91]:= dsol = FullSimplify[First[DSolve[system1, y[t], t]]]
Out[91]= {y[t] -> (a quotient of linear combinations of BesselJ[±1/4, t^2/2] and BesselJ[-3/4, t^2/2] terms weighted by Gamma[1/4] and Gamma[3/4])}
Substituting the solution into the switching function, numerical root finding confirms the value of the discontinuity.
In[92]:= FindRoot[event /. dsol, {t, 3/5}]
Out[92]= {t -> 0.623418}
Avoiding Wraparound in PDEs

Many evolution equations model behavior on a spatial domain that is infinite or sufficiently large to make it impractical to discretize the entire domain without using specialized discretization methods. In practice, it is often possible to use a smaller computational domain for as long as the solution of interest remains localized.
In situations where the boundaries of the computational domain are imposed by practical considerations rather than by the model being studied, it is possible to pick boundary conditions appropriately. Using a pseudospectral method with periodic boundary conditions can make it possible to increase the extent of the computational domain because of the superb resolution of the periodic pseudospectral approximation. The drawback of periodic boundary conditions is that signals that propagate past the boundary persist on the other side of the domain, affecting the solution through wraparound. It is possible to use an absorbing layer near the boundary to minimize these effects, but it is not always possible to eliminate them completely.
The sine-Gordon equation turns up in differential geometry and relativistic field theory. This example integrates the equation, starting with a localized initial condition that spreads out. The periodic pseudospectral method is used for the integration. Since no absorbing layer has been introduced near the boundaries, it is most appropriate to stop the integration once wraparound becomes significant. This condition is easily detected with event location using the "EventLocator" method. The integration is stopped when the size of the solution at the periodic wraparound point crosses a threshold of 0.01, beyond which the form of the wave would be affected by periodicity.
In[93]:=
Timing[sgsol = First[NDSolve[{
     D[u[t, x], t, t] == D[u[t, x], x, x] - Sin[u[t, x]],
     u[0, x] == E^(-(x - 5)^2/2) + E^(-(x + 5)^2/2),
     Derivative[1, 0][u][0, x] == 0, u[t, -50] == u[t, 50]},
    u, {t, 0, 1000}, {x, -50, 50},
    Method -> {"MethodOfLines",
      "SpatialDiscretization" -> {"TensorProductGrid",
        "DifferenceOrder" -> "Pseudospectral"},
      Method -> {"EventLocator", "Event" :> Abs[u[t, -50]] - 0.01,
        "EventLocationMethod" -> "StepBegin"}}]]]
This extracts the ending time from the InterpolatingFunction object and makes a plot of the computed solution. You can see that the integration has been stopped just as the first waves begin to reach the boundary.
In[94]:= end = InterpolatingFunctionDomain[u /. sgsol][[1, -1]];
    DensityPlot[u[t, x] /. sgsol, {x, -50, 50}, {t, 0, end},
     Mesh -> False, PlotPoints -> 100]
Out[95]= (density plot of u[t, x] for -50 <= x <= 50, 0 <= t <= end)
The "DiscretizedMonitorVariables" option affects the way the event is interpreted for PDEs; with the setting True, u[t, x] is replaced by a vector of discretized values. This is much more efficient because it avoids explicitly constructing the InterpolatingFunction to evaluate the event.
In[96]:= Timing[sgsol = First[NDSolve[{
     D[u[t, x], t, t] == D[u[t, x], x, x] - Sin[u[t, x]],
     u[0, x] == E^(-(x - 5)^2/2) + E^(-(x + 5)^2/2),
     Derivative[1, 0][u][0, x] == 0, u[t, -50] == u[t, 50]},
    u, {t, 0, 1000}, {x, -50, 50},
    Method -> {"MethodOfLines", "DiscretizedMonitorVariables" -> True,
      "SpatialDiscretization" -> {"TensorProductGrid",
        "DifferenceOrder" -> "Pseudospectral"},
      Method -> {"EventLocator", "Event" :> Abs[First[u[t, x]]] - 0.01,
        "EventLocationMethod" -> "StepBegin"}}]]]
Performance Comparison

The following example constructs a table comparing two different integration methods. This defines a function that returns the time it takes to compute a solution of a mildly damped pendulum equation up to the point at which the bob has momentarily been at rest 1000 times.
In[97]:= EventLocatorTiming[locmethod_, odemethod_] :=
    Block[{Second = 1, y, t, p = 0},
     First[Timing[NDSolve[
        {y''[t] + (1/1000) y'[t] + Sin[y[t]] == 0, y[0] == 3, y'[0] == 0},
        y, {t, Infinity},
        Method -> {"EventLocator", "Event" -> y'[t],
          "EventAction" :> If[p++ >= 1000, Throw[end = t, "StopIntegration"]],
          "EventLocationMethod" -> locmethod, "Method" -> odemethod},
        MaxSteps -> Infinity]]]];
This uses the function to make a table comparing the different location methods for two different ODE integration methods.
In[98]:=
elmethods = {"StepBegin", "StepEnd", "LinearInterpolation",
    {"Brent", "SolutionApproximation" -> "CubicHermiteInterpolation"}, Automatic};
   odemethods = {Automatic, "ExplicitRungeKutta"};
   TableForm[Outer[EventLocatorTiming, elmethods, odemethods, 1],
    TableHeadings -> {elmethods, odemethods}]

Out[100]//TableForm=
                                                               Automatic   ExplicitRungeKutta
  StepBegin                                                    0.234964    0.204969
  StepEnd                                                      0.218967    0.205968
  LinearInterpolation                                          0.221967    0.212967
  {Brent, SolutionApproximation -> CubicHermiteInterpolation}  0.310953    0.314952
  Automatic                                                    0.352947    0.354946
While simple step begin/end and linear interpolation location have essentially the same low cost, the better location methods are more expensive. The default location method is particularly expensive for the explicit Runge-Kutta method because it does not yet support a continuous output formula; it therefore needs to repeatedly invoke the method with different step sizes during the local minimization. It is worth noting that, often, a significant part of the extra time for computing events arises from the need to evaluate the event functions at each time step to check for the possibility of a sign change.
In[101]:=
TableForm[{Map[
     Block[{Second = 1, y, t, p = 0},
       First[Timing[NDSolve[
          {y''[t] + (1/1000) y'[t] + Sin[y[t]] == 0, y[0] == 3, y'[0] == 0},
          y, {t, end}, Method -> #, MaxSteps -> Infinity]]]] &,
     odemethods]},
   TableHeadings -> {None, odemethods}]

Out[101]//TableForm=
  Automatic   ExplicitRungeKutta
  0.105984    0.141979
An optimization is performed for event functions involving only the independent variable. Such events are detected automatically at initialization time. This has the advantage, for example, that interpolation of the solution of the dependent variables is not carried out at each step of the local optimization search; it is deferred until the value of the independent variable has been found.
Limitations

One limitation of the event locator method is that, since the event function is only checked for sign changes over a step interval, if the event function has multiple roots in a step interval, all or some of the events may be missed. This typically happens only when the solution to the ODE varies much more slowly than the event function. When you suspect that this may have occurred, the simplest solution is to decrease the maximum step size the method can take by using the MaxStepSize option to NDSolve. More sophisticated approaches can be taken, but the best approach depends on what is being computed. An example follows that demonstrates the problem and shows two approaches for fixing it.
This should compute the number of positive integers less than e^5 (there are 148). However, most are missed because the method takes large time steps, since the solution y[t] is so simple.
In[102]:= Block[{n = 0},
    NDSolve[{y'[t] == y[t], y[-1] == E^-1}, y, {t, 5},
     Method -> {"EventLocator", "Event" -> Sin[Pi y[t]],
       "EventAction" :> n++}]; n]
Out[102]= 18
This restricts the maximum step size so that all the events are found.
In[103]:= Block[{n = 0},
    NDSolve[{y'[t] == y[t], y[-1] == E^-1}, y, {t, 5},
     Method -> {"EventLocator", "Event" -> Sin[Pi y[t]],
       "EventAction" :> n++}, MaxStepSize -> 0.001]; n]
Out[103]= 148
It is quite apparent from the nature of the example problem that if the endpoint is increased, it is likely that a smaller maximum step size may be required. Taking very small steps everywhere is quite inefficient. It is possible to introduce an adaptive time step restriction by setting up a variable that varies on the same time scale as the event function. This introduces an additional function to integrate, which is the event function itself. With this modification, and allowing the method to take as many steps as needed, it is possible to find the correct value up to t = 10 in a reasonable amount of time.
In[104]:= Block[{n = 0},
    NDSolve[{y'[t] == y[t], y[-1] == E^-1,
      z'[t] == D[Sin[Pi y[t]], t], z[-1] == Sin[Pi E^-1]}, {y, z}, {t, 10},
     Method -> {"EventLocator", "Event" -> z[t], "EventAction" :> n++},
     MaxSteps -> Infinity]; n]
Out[104]= 22026
Option Summary

"EventLocator" Options

option name            default value
"Direction"            All                              the direction of zero crossing to allow for the event; 1 means from negative to positive, -1 means from positive to negative, and All includes both directions
"Event"                None                             an expression that defines the event; an event occurs at points where substituting the numerical values of the problem variables makes the expression equal to zero
"EventAction"          Throw[Null, "StopIntegration"]   what to do when an event occurs: problem variables are substituted with their numerical values at the event; in general, you need to use RuleDelayed (:>) to prevent the option from being evaluated except with numerical values
"EventLocationMethod"  Automatic                        the method to use for refining the location of a given event
"Method"               Automatic                        the method to use for integrating the system of ODEs

"EventLocator" method options.
"EventLocationMethod" Options “Brent“
use FindRoot with Method -> “Brent“ to locate the event; this is the default with the setting Automatic
“LinearInterpolation“
locate the event time using linear interpolation; cubic Hermite interpolation is then used to find the solution at the event time
“StepBegin“
the event is given by the solution at the beginning of the step
“StepEnd“
the event is given by the solution at the end of the step
Settings for the “EventLocationMethod“ option.
"Brent" Options option name
default value
“MaxIterations“
100
the maximum number of iterations to use for locating an event within a step of the method
“AccuracyGoal“
Automatic
accuracy goal setting passed to FindRoot ; if Automatic, the value passed to FindRoot is based on the local error setting for NDSolve
“PrecisionGoal“
Automatic
precision goal setting passed to FindRoot ; if Automatic, the value passed to FindRoot is based on the local error setting for NDSolve
“SolutionApproximation“
Automatic
how to approximate the solution for evaluating the event function during the refinement process; can be Automatic or
“CubicHermiteInterpolation“ Options for event location method “Brent“.
"Extrapolation" Method for NDSolve Introduction Extrapolation methods are a class of arbitrary-order methods with automatic order and stepsize control. The error estimate comes from computing a solution over an interval using the same method with a varying number of steps and using extrapolation on the polynomial that fits through the computed solutions, giving a composite higher-order method [BS64]. At the same time, the polynomials give a means of error estimation. Typically, for low precision, the extrapolation methods have not been competitive with Runge| Kutta-type methods. For high precision, however, the arbitrary order means that they can be arbitrarily faster than fixed-order methods for very precise tolerances. The order and step-size control are based on the codes odex.f and seulex.f described in [HNW93] and [HW96]. This loads packages that contain some utility functions for plotting step sequences and some predefined problems. In[3]:=
Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
"Extrapolation" The method “DoubleStep“ performs a single application of Richardson's extrapolation for any one-step integration method and is described within "DoubleStep Method for NDSolve". “Extrapolation“ generalizes the idea of Richardson's extrapolation to a sequence of refinements. Consider a differential system y£ HtL = f Ht, yHtLL, yHt0 L = y0 .
(1)
Let H > 0 be a basic step size; choose a monotonically increasing sequence of positive integers n1 < n2 < n3 < < nk and define the corresponding step sizes h1 > h2 > h3 > > hk by
hi =
H ni
, i = 1, 2, …, k.
Choose a numerical method of order p and compute the solution of the initial value problem by carrying out ni steps with step size hi to obtain: Ti,1 = yhi Hto + HL, i = 1, 2, …, k. Extrapolation is performed using the Aitken|Neville algorithm by building up a table of values: Ti, j = Ti, j-1 +
Ti, j-1 -Ti-1, j-1 ni ni- j+1
w
-1
, i = 2, …, k, j = 2, …, i,
(2)
where w is either 1 or 2 depending on whether the base method is symmetric under extrapolation.
A dependency graph of the values in (2) illustrates the relationship; each Ti,j is computed from Ti,j-1 (to its left) and Ti-1,j-1 (above left):

    T1,1
    T2,1  T2,2
    T3,1  T3,2  T3,3
    T4,1  T4,2  T4,3  T4,4

Considering k = 2, n1 = 1, n2 = 2 is equivalent to Richardson's extrapolation.
For non-stiff problems the order of Tk,k in (2) is p + (k - 1) w. For stiff problems the analysis is more complicated and involves the investigation of perturbation terms that arise in singular perturbation problems [HNW93, HW96].
Extrapolation Sequences

Any extrapolation sequence can be specified in the implementation. Some common choices are as follows.
This is the Romberg sequence.
In[5]:=
NDSolve`RombergSequenceFunction[1, 10]
Out[5]= {1, 2, 4, 8, 16, 32, 64, 128, 256, 512}
This is the Bulirsch sequence.
In[6]:= NDSolve`BulirschSequenceFunction[1, 10]
Out[6]= {1, 2, 3, 4, 6, 8, 12, 16, 24, 32}
This is the harmonic sequence.
In[7]:= NDSolve`HarmonicSequenceFunction[1, 10]
Out[7]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
A sequence that satisfies (ni/ni-j+1)^w >= 2 has the effect of minimizing the roundoff errors for an order-p base integration method.
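These sequences are easy to generate outside Mathematica as well. The Python sketch below (function names are illustrative, not NDSolve's) reproduces the Romberg, Bulirsch, and harmonic sequences, together with a rounding-optimal sequence for w = 2 constructed as the smallest strictly increasing sequence with (n_i/n_{i-1})^w >= 2; the assumption is that consecutive ratios are the binding case, which is consistent with the values shown in this section.

```python
import math

def romberg(k):
    # 1, 2, 4, 8, ...: repeated doubling
    return [2 ** i for i in range(k)]

def bulirsch(k):
    # 1, 2, 3, then each new term doubles the term two places back:
    # 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, ...
    seq = [1, 2, 3]
    while len(seq) < k:
        seq.append(2 * seq[-2])
    return seq[:k]

def harmonic(k):
    # 1, 2, 3, ..., k
    return list(range(1, k + 1))

def optimal_rounding(k, w=2):
    # smallest strictly increasing sequence with (n_i/n_{i-1})^w >= 2
    seq = [1]
    while len(seq) < k:
        seq.append(max(seq[-1] + 1, math.ceil(seq[-1] * 2 ** (1 / w))))
    return seq
```

For w = 2 this reproduces 1, 2, 3, 5, 8, 12, 17, 25, 36, 51, matching the order-two values shown below.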
For a base method of order two, the first entries in the sequence are given by the following.
In[8]:= NDSolve`OptimalRoundingSequenceFunction[1, 10, 2]
Out[8]= {1, 2, 3, 5, 8, 12, 17, 25, 36, 51}
Here is an example of adding a function to define the harmonic sequence, where the method order is an optional pattern.
In[9]:= Default[myseqfun, 3] = 1; myseqfun[n1_, n2_, p_.] := Range[n1, n2]
The sequence with the lowest cost is the harmonic sequence, but it is not without problems, since rounding errors are not damped.
Rounding Error Accumulation

For high-order extrapolation an important consideration is the accumulation of rounding errors in the Aitken-Neville algorithm (2). As an example consider Exercise 5 of Section II.9 in [HNW93]. Suppose that the entries T1,1, T2,1, T3,1, ... are disturbed with rounding errors e, -e, e, ... and compute the propagation of these errors into the extrapolation table. Due to the linearity of the extrapolation process (2), suppose that the Ti,j are equal to zero and take e = 1. This shows the evolution of the Aitken-Neville algorithm (2) on the initial data using the harmonic sequence and a symmetric order-two base integration method, w = p = 2.

     1.
    -1.  -1.66667
     1.   2.6       3.13333
    -1.  -3.57143  -5.62857   -6.2127
     1.   4.55556   9.12698   11.9376    12.6938
    -1.  -5.54545 -13.6263   -21.2107   -25.3542   -26.4413
     1.   6.53846  19.1259    35.0057    47.6544    54.144     55.8229
    -1.  -7.53333 -25.6256   -54.3125   -84.0852  -105.643   -116.295   -119.027
Hence, for an order-sixteen method approximately two decimal digits are lost due to rounding error accumulation.
This model is somewhat crude because, as you will see later, it is more likely that rounding errors are made in Ti+1,1 than in Ti,1 for i >= 1.
Rounding Error Reduction

It seems worthwhile to look for approaches that can reduce the effect of rounding errors in high-order extrapolation. Selecting a different step sequence to diminish rounding errors is one approach, although the drawback is that forming the Ti,1 in the first column of the extrapolation table requires more integration steps. Some codes, such as STEP, take active measures to reduce the effect of rounding errors for stringent tolerances [SG75]. An alternative strategy, which does not appear to have received a great deal of attention in the context of extrapolation, is to modify the base integration method in order to reduce the magnitude of the rounding errors in floating-point operations. This approach, based on ideas that date back to [G51] and used to good effect for the two-body problem in [F96b] (for background see also [K65], [M65a], [M65b], [V79]), is explained next.
Base Methods

The following methods are the most common choices for base integrators in extrapolation.
† "ExplicitEuler"
† "ExplicitMidpoint"
† "ExplicitModifiedMidpoint" (Gragg smoothing step (1))
† "LinearlyImplicitEuler"
† "LinearlyImplicitMidpoint" (Bader-Deuflhard formulation without smoothing step (1))
† "LinearlyImplicitModifiedMidpoint" (Bader-Deuflhard formulation with smoothing step (1))
For efficiency, these have been built into NDSolve and can be called via the Method option as individual methods. The implementation of these methods has a special interpretation for multiple substeps within "DoubleStep" and "Extrapolation".
The NDSolve framework for one-step methods uses a formulation that returns the increment or update to the solution. This is advantageous for geometric numerical integration, where numerical errors are not damped over long time integrations. It also allows the application of efficient correction strategies such as compensated summation. This formulation is also useful in the context of extrapolation. The methods are now described together with the increment reformulation that is used to reduce rounding error accumulation.
Multiple Euler Steps

Given t0, y0 and H, consider a succession of n = nk integration steps with step size h = H/n carried out using Euler's method:

    y1 = y0 + h f(t0, y0)
    y2 = y1 + h f(t1, y1)
    y3 = y2 + h f(t2, y2)
     ...
    yn = yn-1 + h f(tn-1, yn-1)   (1)

where ti = t0 + i h.
Correspondence with Explicit Runge-Kutta Methods

It is well known that, for certain base integration schemes, the entries Ti,j in the extrapolation table produced from (2) correspond to explicit Runge-Kutta methods (see Exercise 1, Section II.9 in [HNW93]). For example, (1) is equivalent to an n-stage explicit Runge-Kutta method:

    ki = f(t0 + ci H, y0 + H Sum[ai,j kj, {j, 1, i - 1}]), i = 1, ..., n,
    yn = y0 + H Sum[bi ki, {i, 1, n}],   (1)

where the coefficients are represented by the Butcher table:

    0        |
    1/n      | 1/n
     ...     |  ...
    (n-1)/n  | 1/n  1/n  ...  1/n
    ---------+---------------------
             | 1/n  1/n  ...  1/n  1/n   (2)
Reformulation

Let Dyn = yn+1 - yn. Then the integration (1) can be rewritten to reflect the correspondence with an explicit Runge-Kutta method (1, 2) as:

    Dy0   = h f(t0, y0)
    Dy1   = h f(t1, y0 + Dy0)
    Dy2   = h f(t2, y0 + (Dy0 + Dy1))
     ...
    Dyn-1 = h f(tn-1, y0 + (Dy0 + Dy1 + ... + Dyn-2))   (1)

where terms in the right-hand side of (1) are now considered as departures from the same value y0. The Dyi in (1) correspond to the h ki in (1). Let SDyn = Dy0 + Dy1 + ... + Dyn-1; then the required result can be recovered as:

    yn = y0 + SDyn   (2)

Mathematically the formulations (1) and (1, 2) are equivalent. For n > 1, however, the computations in (1) have the advantage of accumulating a sum of smaller O(h) quantities, or increments, which reduces rounding error accumulation in finite-precision floating-point arithmetic.
Multiple Explicit Midpoint Steps

Expansions in even powers of h are extremely important for an efficient implementation of Richardson's extrapolation, and an elegant proof is given in [S70]. Consider a succession of n = 2 nk integration steps with step size h = H/n carried out using one Euler step followed by multiple explicit midpoint steps:

    y1 = y0 + h f(t0, y0)
    y2 = y0 + 2 h f(t1, y1)
    y3 = y1 + 2 h f(t2, y2)
     ...
    yn = yn-2 + 2 h f(tn-1, yn-1)   (1)
If (1) is computed with 2 nk - 1 midpoint steps, then the method has a symmetric error expansion ([G65], [S70]).
Reformulation

Reformulation of (1) can be accomplished in terms of increments as:

    Dy0   = h f(t0, y0)
    Dy1   = 2 h f(t1, y0 + Dy0) - Dy0
    Dy2   = 2 h f(t2, y0 + (Dy0 + Dy1)) - Dy1
     ...
    Dyn-1 = 2 h f(tn-1, y0 + (Dy0 + Dy1 + ... + Dyn-2)) - Dyn-2   (1)
Gragg's Smoothing Step

The smoothing step of Gragg has its historical origins in the weak stability of the explicit midpoint rule:

    S_h y(n) = 1/4 (yn-1 + 2 yn + yn+1)   (1)

In order to make use of (1), the formulation (1) is computed with 2 nk steps. This has the advantage of increasing the stability domain and evaluating the function at the end of the basic step [HNW93]. Notice that, because of the construction, a sum of increments is available at the end of the algorithm together with two consecutive increments. This leads to the following formulation:

    SD_h y(n) = S_h y(n) - y0 = SDyn + 1/4 (Dyn - Dyn-1).   (2)
Moreover (2) has an advantage over (1) in finite-precision arithmetic because the values yi, which typically have a larger magnitude than the increments Dyi, do not contribute to the computation. Gragg's smoothing step is not of great importance if the method is followed by extrapolation, and Shampine proposes an alternative smoothing procedure that is slightly more efficient [SB83]. The method "ExplicitMidpoint" uses 2 nk - 1 steps and "ExplicitModifiedMidpoint" uses 2 nk steps followed by the smoothing step (2).
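A single basic step of the smoothed method is easy to express directly. The Python sketch below (an illustrative implementation, not NDSolve's) takes one Euler starter step, 2 nk explicit midpoint substeps, and applies Gragg's smoothing to the last three values; on y' = y it exhibits the expected second-order behavior.

```python
import math

def gragg_step(f, t0, y0, H, nk):
    """One basic step of size H of the smoothed (modified) midpoint method:
    an Euler starter, explicit midpoint substeps, then Gragg's smoothing
    S = 1/4 (y_{n-1} + 2 y_n + y_{n+1}) with n = 2 nk."""
    n = 2 * nk
    h = H / n
    ys = [y0, y0 + h * f(t0, y0)]                 # y_0 and the Euler step y_1
    for i in range(1, n + 1):                     # midpoint steps up to y_{n+1}
        ys.append(ys[-2] + 2 * h * f(t0 + i * h, ys[-1]))
    return 0.25 * (ys[n - 1] + 2 * ys[n] + ys[n + 1])

f = lambda t, y: y
approx = gragg_step(f, 0.0, 1.0, 0.1, 5)          # close to e^0.1
```

Doubling the substep count reduces the error by roughly a factor of four, consistent with an error expansion in even powers of h.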
Stability Regions

The following figures illustrate the effect of the smoothing step on the linear stability domain (carried out using the package FunctionApproximations.m).

[Figure: linear stability regions for T_{i,i}, i = 1, ..., 5, for the explicit midpoint rule (left) and the explicit midpoint rule with smoothing (right); both axes run from -6 to 6.]
Since the precise stability boundary can be complicated to compute for an arbitrary base method, a simpler approximation is used. For an extrapolation method of order p, the intersection with the negative real axis is taken to be the point z at which:

| ∑_{i=1}^{p} z^i / i! | = 1.

The stability region is approximated as a disk with this radius, centered at the origin, restricted to the negative half-plane.
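The radius of this disk can be located numerically by bisection; a minimal Python sketch of the stated criterion (the function name and the bisection bracket are my own choices):

```python
from math import factorial

def stability_radius(p, hi=100.0, steps=200):
    """Approximate negative-real-axis stability radius for an extrapolation
    method of order p: the r > 0 where |sum_{i=1}^p (-r)^i / i!| reaches 1,
    located by bisection on [0, hi]."""
    def g(r):
        return abs(sum((-r) ** i / factorial(i) for i in range(1, p + 1))) - 1.0
    lo = 0.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For p = 1 the criterion gives r = 1, and for p = 2 it gives the root of r²/2 − r = 1, namely r = 1 + √3.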
Implicit Differential Equations

A generalization of the differential system (1) arises in many situations, such as the spatial discretization of parabolic partial differential equations:

M y'(t) = f(t, y(t)), y(t_0) = y_0,    (1)

where M is a constant matrix that is often referred to as the mass matrix. Base methods in extrapolation that involve the solution of linear systems of equations can easily be modified to solve problems of the form (1).
Multiple Linearly Implicit Euler Steps

Increments arise naturally in the description of many semi-implicit and implicit methods. Consider a succession of integration steps carried out using the linearly implicit Euler method for the system (1) with n = n_k and h = H/n:

(M − h J) Δy_0 = h f(t_0, y_0),    y_1 = y_0 + Δy_0
(M − h J) Δy_1 = h f(t_1, y_1),    y_2 = y_1 + Δy_1
(M − h J) Δy_2 = h f(t_2, y_2),    y_3 = y_2 + Δy_2
⋮
(M − h J) Δy_{n-1} = h f(t_{n-1}, y_{n-1})    (1)

Here M denotes the mass matrix and J denotes the Jacobian of f:

J = ∂f/∂y (t_0, y_0).

The solution of the equations for the increments in (1) is accomplished using a single LU decomposition of the matrix M − h J, followed by the solution of triangular linear systems for each right-hand side. The desired result is obtained from (1) as: y_n = y_{n-1} + Δy_{n-1}.
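In the scalar case the linear solves collapse to a division, which makes the structure of the recursion easy to see. A hedged Python sketch (names are mine; in the matrix case the division becomes one LU factorization of M − h J reused for every right-hand side):

```python
def linearly_implicit_euler(f, jac, t0, y0, H, n, mass=1.0):
    """Scalar sketch of n linearly implicit Euler steps:
    (M - h J) Δy_k = h f(t_k, y_k),  y_{k+1} = y_k + Δy_k,
    with the Jacobian J frozen at (t0, y0)."""
    h = H / n
    J = jac(t0, y0)
    t, y = t0, y0
    for _ in range(n):
        dy = h * f(t, y) / (mass - h * J)   # the scalar 'linear solve'
        y += dy
        t += h
    return y
```

For the stiff test problem y' = −50 y with h = 0.1, each step multiplies the solution by 1/6, so the method remains stable where explicit Euler would oscillate and blow up.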
Reformulation

Reformulation in terms of increments as departures from y_0 can be accomplished as follows:

(M − h J) Δy_0 = h f(t_0, y_0)
(M − h J) Δy_1 = h f(t_1, y_0 + Δy_0)
(M − h J) Δy_2 = h f(t_2, y_0 + (Δy_0 + Δy_1))
⋮
(M − h J) Δy_{n-1} = h f(t_{n-1}, y_0 + (Δy_0 + Δy_1 + ⋯ + Δy_{n-2}))    (1)

The result y_n is then obtained by summing the increments, y_n = y_0 + (Δy_0 + ⋯ + Δy_{n-1}). Notice that (1) reduces to the increment reformulation of the explicit Euler method when J = 0 and M = I.
Multiple Linearly Implicit Midpoint Steps

Consider one step of the linearly implicit Euler method followed by multiple linearly implicit midpoint steps with n = 2 n_k and h = H/n, using the formulation of Bader and Deuflhard [BD83]:

(M − h J) Δy_0 = h f(t_0, y_0),    y_1 = y_0 + Δy_0
(M − h J) (Δy_1 − Δy_0) = 2 (h f(t_1, y_1) − Δy_0),    y_2 = y_1 + Δy_1
(M − h J) (Δy_2 − Δy_1) = 2 (h f(t_2, y_2) − Δy_1),    y_3 = y_2 + Δy_2
⋮
(M − h J) (Δy_{n-1} − Δy_{n-2}) = 2 (h f(t_{n-1}, y_{n-1}) − Δy_{n-2})    (1)

If (1) is computed for 2 n_k − 1 linearly implicit midpoint steps, then the method has a symmetric error expansion [BD83].
Reformulation

Reformulation of (1) in terms of increments can be accomplished as follows:

(M − h J) Δy_0 = h f(t_0, y_0)
(M − h J) (Δy_1 − Δy_0) = 2 (h f(t_1, y_0 + Δy_0) − Δy_0)
(M − h J) (Δy_2 − Δy_1) = 2 (h f(t_2, y_0 + (Δy_0 + Δy_1)) − Δy_1)
⋮
(M − h J) (Δy_{n-1} − Δy_{n-2}) = 2 (h f(t_{n-1}, y_0 + (Δy_0 + Δy_1 + ⋯ + Δy_{n-2})) − Δy_{n-2})    (1)
Smoothing Step

An appropriate smoothing step for the linearly implicit midpoint rule is [BD83]:

S y_h(n) = 1/2 (y_{n-1} + y_{n+1}).    (1)

Bader's smoothing step (1), rewritten in terms of increments, becomes:

S Δy_h(n) = S y_h(n) − y_0 = ΣΔy_n + 1/2 (Δy_n − Δy_{n-1}).    (2)

The required quantities are obtained when the increment formulation is run with 2 n_k steps.

The smoothing step for the linearly implicit midpoint rule has a different role from Gragg's smoothing for the explicit midpoint rule (see [BD83] and [SB83]). Since there is no weakly stable term to eliminate, the aim is to improve the asymptotic stability.

The method "LinearlyImplicitMidpoint" uses 2 n_k − 1 steps and "LinearlyImplicitModifiedMidpoint" uses 2 n_k steps followed by the smoothing step (2).
Polynomial Extrapolation in Terms of Increments

You have seen how to modify T_{i,1}, the entries in the first column of the extrapolation table, in terms of increments. However, for certain base integration methods each of the T_{i,j} corresponds to an explicit Runge-Kutta method. Therefore, it appears that the correspondence has not yet been fully exploited and further refinement is possible.

Since the Aitken-Neville algorithm (2) involves linear differences, the entire extrapolation process can be carried out using increments. This leads to the following modification of the Aitken-Neville algorithm:

ΔT_{i,j} = ΔT_{i,j-1} + (ΔT_{i,j-1} − ΔT_{i-1,j-1}) / ((n_i / n_{i-j+1})^p − 1),   i = 2, ..., k,  j = 2, ..., i.    (1)
The quantities ΔT_{i,j} = T_{i,j} − y_0 in (1) can be computed iteratively, starting from the initial quantities ΔT_{i,1} that are obtained from the modified base integration schemes without adding the contribution from y_0. The final desired value T_{k,k} can be recovered as ΔT_{k,k} + y_0. The advantage is that the extrapolation table is built up using smaller quantities, so the effect of rounding errors from subtractive cancellation is reduced.
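The whole pipeline, base increments plus Aitken-Neville on the increments, can be sketched in Python (here with an explicit Euler base of order p = 1 and the harmonic sequence; all names are mine, not NDSolve's):

```python
import math

def euler_increment(f, t0, y0, H, n):
    """ΔT_{i,1}: the increment y_n - y_0 produced by n explicit Euler substeps."""
    h = H / n
    d = 0.0
    for k in range(n):
        d += h * f(t0 + k * h, y0 + d)
    return d

def extrapolate_on_increments(f, t0, y0, H, seq=(1, 2, 3, 4, 5, 6)):
    """Aitken-Neville recursion carried out entirely on increments ΔT_{i,j};
    y0 is added back only once, at the very end.  For a base method of
    order p = 1 the weight is n_i / n_{i-j+1} - 1."""
    k = len(seq)
    T = [[0.0] * k for _ in range(k)]
    for i in range(k):
        T[i][0] = euler_increment(f, t0, y0, H, seq[i])
        for j in range(1, i + 1):
            r = seq[i] / seq[i - j]
            T[i][j] = T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (r - 1.0)
    return y0 + T[k - 1][k - 1]
```

For y' = y over one basic step H = 1, six entries of the harmonic sequence already give several correct digits of e.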
Implementation Issues There are a number of important implementation issues that should be considered, some of which are mentioned here.
Jacobian Reuse

The Jacobian is evaluated only once for all entries T_{i,1} at each time step, by storing it together with the time at which it was evaluated. This also has the advantage that the Jacobian does not need to be recomputed for rejected steps.
Dense Linear Algebra For dense systems, the LAPACK routines xyyTRF can be used for the LU decomposition and the routines xyyTRS for solving the resulting triangular systems [LAPACK99].
Adaptive Order and Work Estimation In order to adaptively change the order of the extrapolation throughout the integration, it is important to have a measure of the amount of work required by the base scheme and extrapolation sequence. A measure of the relative cost of function evaluations is advantageous. The dimension of the system, preferably with a weighting according to structure, needs to be incorporated for linearly implicit schemes in order to take account of the expense of solving each linear system.
Stability Check

Extrapolation methods use a large basic step size, which can give rise to some difficulties: "Neither code can solve the van der Pol equation problem in a straightforward way because of overflow..." [S87].

Two forms of stability check are used for the linearly implicit base schemes (for further discussion, see [HW96]). One check is performed during the extrapolation process. Let err_j = ||T_{j,j-1} − T_{j,j}||. If err_j ≥ err_{j-1} for some j ≥ 3, then the step is recomputed with H = H/2.

In order to interrupt computations even earlier, in the computation of T_{1,1}, Deuflhard suggests checking whether the Newton iteration applied to a fully implicit scheme would converge. For the implicit Euler method this leads to consideration of:

(M − h J) Δ_0 = h f(t_0, y_0)
(M − h J) Δ_1 = h f(t_0, y_0 + Δ_0) − Δ_0    (1)

Notice that (1) differs from the linearly implicit Euler scheme only in the second equation. It requires finding the solution for a different right-hand side, but no extra function evaluation. For the implicit midpoint method, Δ_0 = Δy_0 and Δ_1 = 1/2 (Δy_1 − Δy_0), which simply requires a few basic arithmetic operations.

If ||Δ_1|| ≥ ||Δ_0||, then the implicit iteration diverges, so the step is recomputed with H = H/2.

Increments are a more accurate formulation for the implementation of both forms of stability check.
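A scalar Python sketch of this convergence check (names are mine; the increments Δ_0, Δ_1 are the two solves described above, sharing one factorization of M − h J):

```python
def step_stability_check(f, jac, t0, y0, h, mass=1.0):
    """Solve for the two increments Δ_0, Δ_1 and report whether the
    implied implicit iteration contracts.  A False result signals that
    the basic step H should be halved."""
    J = jac(t0, y0)
    d0 = h * f(t0, y0) / (mass - h * J)
    d1 = (h * f(t0, y0 + d0) - d0) / (mass - h * J)
    return abs(d1) < abs(d0)
```

For a linear problem with an exact Jacobian, Δ_1 vanishes and the check always passes; for a strongly nonlinear problem with a too-large step it fails.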
Examples

Work-Error Comparison

For comparing different extrapolation schemes, consider an example from [HW96].

In[12]:= t0 = Pi/6; h0 = 1/10; y0 = {2/Sqrt[3]};
         eqs = {y'[t] == (-y[t] Sin[t] + 2 Tan[t]) y[t], y[t0] == y0};
         exactsol = y[t] /. First[DSolve[eqs, y[t], t]] /. t -> t0 + h0;
         idata = {{eqs, y[t], t}, h0, exactsol};

The exact solution is given by y(t) = 1/cos(t).
Increment Formulation

This example involves an eighth-order extrapolation of "ExplicitEuler" with the harmonic sequence. Approximately two decimal digits of accuracy are gained by using the increment-based formulation throughout the extrapolation process.

† The results for the standard formulation are depicted in green.
† The results for the increment formulation followed by standard extrapolation are depicted in blue.
† The results for the increment formulation, with extrapolation carried out on the increments, are depicted in red.

[Figure: plot of work vs. error on a log-log scale for the three formulations; errors range from about 10^-2 down to 10^-14.]
This compares the relative error in the integration data that forms the initial column of the extrapolation table for the previous example. Reference values were computed using software arithmetic with 32 decimal digits and converted to the nearest IEEE double-precision floating-point numbers. An ULP signifies a Unit in the Last Place (or Unit in the Last Position).

                                                  T11  T21     T31  T41    T51    T61       T71  T81
Standard formulation                              0    -1 ULP  0    1 ULP  0      1.5 ULPs  0    1 ULP
Increment formulation applied to the base method  0    0       0    0      1 ULP  0         0    1 ULP

Notice that the rounding-error model that was used to motivate the study of rounding-error growth is limited, because in practice errors in T_{i,1} can exceed 1 ULP. The increment formulation used throughout the extrapolation process produces rounding errors in T_{i,1} that are smaller than 1 ULP.
Method Comparison

This compares the work required for extrapolation based on "ExplicitEuler" (red), "ExplicitMidpoint" (blue), and "ExplicitModifiedMidpoint" (green). All computations are carried out using software arithmetic with 32 decimal digits.

[Figure: plot of work vs. error on a log-log scale; errors range from about 10^-3 down to 10^-23.]
Order Selection

Select a problem to solve.

In[32]:= system = GetNDSolveProblem["Pleiades"];

Define a monitor function to store the order and the time of evaluation.

In[33]:= OrderMonitor[t_, method_NDSolve`Extrapolation] := Sow[{t, method["DifferenceOrder"]}];

Compute the solution, collecting the monitor data.

In[34]:= data = Reap[
           NDSolve[system,
             Method -> {"Extrapolation", Method -> "ExplicitModifiedMidpoint"},
             "MethodMonitor" :> OrderMonitor[T, NDSolve`Self]]][[-1, 1]];

Display how the order varies during the integration.

In[35]:= ListLinePlot[data]

[Figure: the difference order varies between 9 and 14 during the integration.]
Method Comparison

Select the problem to solve.

In[67]:= system = GetNDSolveProblem["Arenstorf"];

A reference solution is computed with a method that switches between a pair of "Extrapolation" methods, depending on whether the problem appears to be stiff.

In[68]:= sol = NDSolve[system, Method -> "StiffnessSwitching", WorkingPrecision -> 32];
         refsol = First[FinalSolutions[system, sol]];

Define a list of methods to compare.

In[70]:= methods = {{"ExplicitRungeKutta", "StiffnessTest" -> False},
           {"Extrapolation", Method -> "ExplicitModifiedMidpoint", "StiffnessTest" -> False}};

The data comparing accuracy and work is computed using CompareMethods for a range of tolerances.

In[71]:= data = Table[Map[Rest, CompareMethods[system, refsol, methods,
             AccuracyGoal -> tol, PrecisionGoal -> tol]], {tol, 4, 14}];

Display the work-error comparison.

In[73]:= ListLogLogPlot[Transpose[data], Joined -> True, Axes -> False,
           Frame -> True, PlotStyle -> {{Green}, {Red}}]

[Figure: work-error comparison on log-log axes; errors from about 10^-12 to 10^-6 against work from about 1000 to 7000.]
Stiff Systems

One of the simplest nonlinear equations describing a circuit is van der Pol's equation.

In[18]:= system = GetNDSolveProblem["VanderPol"];
         vars = system["DependentVariables"];
         time = system["TimeData"];

This solves the equations using "Extrapolation" with the "ExplicitModifiedMidpoint" base method and the default double-harmonic sequence 2, 4, 6, .... The stiffness detection device terminates the integration and an alternative method is suggested.

In[21]:= vdpsol = Flatten[vars /. NDSolve[system,
           Method -> {"Extrapolation", Method -> "ExplicitModifiedMidpoint"}]]

Out[21]= {InterpolatingFunction[{{0., 0.0229201}}, <>][T],
          InterpolatingFunction[{{0., 0.0229201}}, <>][T]}

This solves the equations using "Extrapolation" with the "LinearlyImplicitEuler" base method and the default sub-harmonic sequence 2, 3, 4, ....

In[22]:= vdpsol = Flatten[vars /. NDSolve[system,
           Method -> {"Extrapolation", Method -> "LinearlyImplicitEuler"}]]

Out[22]= {InterpolatingFunction[{{0., 2.5}}, <>][T], InterpolatingFunction[{{0., 2.5}}, <>][T]}

Notice that the Jacobian matrix is computed automatically (user-specifiable by using either numerical differences or symbolic derivatives), and appropriate linear algebra routines are selected and invoked at run time.

This plots the first solution component over time.

In[23]:= Plot[Evaluate[First[vdpsol]], Evaluate[time], Frame -> True, Axes -> False]

[Figure: the first solution component oscillates between about -2 and 2 over the interval [0, 2.5].]

This plots the step sizes taken in computing the solution.

In[24]:= StepDataPlot[vdpsol]

[Figure: step sizes on a logarithmic scale, varying between about 0.002 and 0.05 over the interval [0, 2.5].]
High-Precision Comparison

Select the Lorenz equations.

In[25]:= system = GetNDSolveProblem["Lorenz"];

This invokes a bigfloat, or software floating-point number, embedded explicit Runge-Kutta method of order 9(8) [V78].

In[26]:= Timing[
           erksol = NDSolve[system,
             Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 9},
             WorkingPrecision -> 32];]

Out[26]= {3.3105, Null}

This invokes the "Adams" method using a bigfloat version of LSODA. The maximum order of these methods is twelve.

In[27]:= Timing[adamssol = NDSolve[system, Method -> "Adams", WorkingPrecision -> 32];]

Out[27]= {1.81172, Null}

This invokes the "Extrapolation" method with "ExplicitModifiedMidpoint" as the base integration scheme.

In[28]:= Timing[
           extrapsol = NDSolve[system,
             Method -> {"Extrapolation", Method -> "ExplicitModifiedMidpoint"},
             WorkingPrecision -> 32];]

Out[28]= {0.622906, Null}

Here are the step sizes taken by the various methods. The high order used in extrapolation means that much larger step sizes can be taken.

In[29]:= methods = {"ExplicitRungeKutta", "Adams", "Extrapolation"};
         solutions = {erksol, adamssol, extrapsol};
         MapThread[StepDataPlot[#2, PlotLabel -> #1] &, {methods, solutions}]

[Figure: step-size plots for the three methods over [0, 15]; "ExplicitRungeKutta" and "Adams" take steps of up to about 0.005, while "Extrapolation" takes steps of up to about 0.12.]
Mass Matrix - fem2ex

Consider the partial differential equation:

∂u/∂t = exp(t) ∂²u/∂x²,  u(0, x) = sin(x),  u(t, 0) = u(t, π) = 0.    (1)

Given an integer n, define h = π/(n + 1) and approximate at x_k = k h with k = 0, ..., n + 1 using the Galerkin discretization:

u(t, x_k) ≈ ∑_{k=1}^{n} c_k(t) φ_k(x)    (2)

where φ_k(x) is a piecewise linear function that is 1 at x_k and 0 at x_j ≠ x_k.

The discretization (2) applied to (1) gives rise to a system of ordinary differential equations with a constant mass matrix, of the form considered earlier. The ODE system is the fem2ex problem in [SR97] and is also found in the IMSL library.

The problem is set up to use sparse arrays for the matrices, which is not necessary for the small dimension being considered but will scale well if the number of discretization points is increased. A vector-valued variable is used for the initial conditions. The system will be solved over the interval [0, π].

In[35]:= n = 9;
         h = N[Pi/(n + 1)];
         amat = SparseArray[
           {{i_, i_} -> 2 h/3, {i_, j_} /; Abs[i - j] == 1 -> h/6}, {n + 2, n + 2}, 0.];
         rmat = SparseArray[
           {{i_, i_} -> -2/h, {i_, j_} /; Abs[i - j] == 1 -> 1/h}, {n + 2, n + 2}, 0.];
         vars = {y[t]};
         eqs = {amat.y'[t] == rmat.(Exp[t] y[t])};
         ics = {y[0] == Table[Sin[k h], {k, 0, n + 1}]};
This solves the system using "Extrapolation" with the "LinearlyImplicitEuler" base method, specifying that the equations are in mass matrix form.

In[44]:= sollim = NDSolve[system, vars, time,
           Method -> {"Extrapolation", Method -> "LinearlyImplicitEuler"},
           "SolveDelayed" -> "MassMatrix", MaxStepFraction -> 1];

This plot shows the relatively large step sizes that are taken by the method.

In[45]:= StepDataPlot[sollim]

[Figure: step sizes between about 0.2 and 0.5 over the interval [0, π].]

The default method for this type of problem is "IDA", which is a general-purpose differential algebraic equation solver [HT99]. Being much more general in scope, this method is somewhat overkill for this example, but it serves for comparison purposes.

In[46]:= soldae = NDSolve[system, vars, time, MaxStepFraction -> 1];

The following plot clearly shows that a much larger number of steps are taken by the DAE solver.

In[47]:= StepDataPlot[soldae]

[Figure: step sizes between about 1×10^-5 and 10^-2 over the interval [0, π].]
Define a function that can be used to plot the solutions on a grid.

In[48]:= PlotSolutionsOn3DGrid[{ndsol_}, opts___?OptionQ] :=
           Module[{data, m, n, sols, tvals, xvals},
             tvals = First[Head[ndsol]["Coordinates"]];
             sols = Transpose[ndsol /. t -> tvals];
             m = Length[tvals];
             n = Length[sols];
             xvals = Range[0, n - 1];
             data = Table[{{Part[tvals, j], Part[xvals, i]}, Part[sols, i, j]},
               {j, m}, {i, n}];
             ListPlot3D[Flatten[data, 1], opts]];

This plots the finite element solution on the grid.

In[49]:= femsol = PlotSolutionsOn3DGrid[vars /. First[sollim],
           Ticks -> {Table[i Pi, {i, 0, 1, 1/2}], Automatic, Automatic}]

[Figure: 3D surface of the solution plotted against time (0 to π) and grid index (0 to 10).]
Fine-Tuning

"StepSizeSafetyFactors"

As with most methods, there is a balance between taking too small a step and trying to take too big a step that will be frequently rejected. The option "StepSizeSafetyFactors" -> {s1, s2} constrains the choice of step size as follows. The step size chosen by the method for order p satisfies:

h_{n+1} = h_n s_1 (s_2 Tol / ||err_n||)^(1/(p+1)).    (1)

This includes both an order-dependent factor and an order-independent factor.
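The update (1) is a one-liner; a hedged Python sketch (function name and defaults chosen for illustration; the quoted stiff defaults are {9/10, 4/5}):

```python
def next_step_size(h, err, tol, p, s1=9.0/10.0, s2=4.0/5.0):
    """Step-size update with safety factors {s1, s2}:
    h_{n+1} = h_n * s1 * (s2 * Tol / ||err_n||)^(1/(p+1)).
    s2 scales the tolerance (order-independent), while the exponent
    1/(p+1) makes the update order-dependent."""
    return h * s1 * (s2 * tol / err) ** (1.0 / (p + 1))
```

When the measured error exactly matches the scaled tolerance, the step is merely shrunk by the factor s1; a much larger error shrinks the step sharply.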
"StepSizeRatioBounds" The option “StepSizeRatioBounds“ -> 8srmin , srmax < specifies bounds on the next step size to take such that:
srmin §
hn+1 hn
§ srmax .
"OrderSafetyFactors" An important aspect in “Extrapolation“ is the choice of order. Each extrapolation step k has an associated work estimate k . The work estimate for explicit base methods is based on the number of function evaluations and the step sequence used. The work estimate for linearly implicit base methods also includes an estimate of the cost of evaluating the Jacobian, the cost of an LU decomposition, and the cost of backsolving the linear equations. Estimates for the work per unit step are formed from the work estimate k and the expected new step size to take for a method of order k (computed from (1)): k = k ë hkn+1 . Comparing consecutive estimates, k allows a decision about when a different order method will be more efficient.
The option "OrderSafetyFactors" -> {f1, f2} specifies safety factors to be included in the comparison of the estimates W_k: an order decrease is made when W_{k-1} < f_1 W_k, and an order increase is made when W_{k+1} < f_2 W_k. There are some additional restrictions; for example, the maximal order increase per step is one (two for symmetric methods), and an increase in order is prevented immediately after a rejected step. For a nonstiff base method the default values are {4/5, 9/10}, whereas for a stiff method they are {7/10, 9/10}.
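The order decision described above can be sketched as a small Python routine (an illustrative reduction; names and the dictionary interface are mine, and the defaults shown are the quoted nonstiff factors):

```python
def choose_order(A, h, k, f1=4.0/5.0, f2=9.0/10.0):
    """A[j] is the work estimate for order j and h[j] the expected step
    size; W_j = A[j]/h[j] is the work per unit step.  Decrease order when
    W_{k-1} < f1*W_k; increase when W_{k+1} < f2*W_k; else keep k."""
    W = {j: A[j] / h[j] for j in (k - 1, k, k + 1)}
    if W[k - 1] < f1 * W[k]:
        return k - 1
    if W[k + 1] < f2 * W[k]:
        return k + 1
    return k
```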
Option Summary

option name                  default value               description
"ExtrapolationSequence"      Automatic                   specify the sequence to use in extrapolation
"MaxDifferenceOrder"         Automatic                   specify the maximum order to use
Method                       "ExplicitModifiedMidpoint"  specify the base integration method to use
"MinDifferenceOrder"         Automatic                   specify the minimum order to use
"OrderSafetyFactors"         Automatic                   specify the safety factors to use in the estimates for adaptive order selection
"StartingDifferenceOrder"    Automatic                   specify the initial order to use
"StepSizeRatioBounds"        Automatic                   specify the bounds on a relative change in the new step size h_{n+1} from the current step size h_n as low ≤ h_{n+1}/h_n ≤ high
"StepSizeSafetyFactors"      Automatic                   specify the safety factors to incorporate into the error estimate used for adaptive step sizes
"StiffnessTest"              Automatic                   specify whether to use the stiffness detection capability

Options of the method "Extrapolation".

The default setting of Automatic for the option "ExtrapolationSequence" selects a sequence based on the stiffness and symmetry of the base method.

The default setting of Automatic for the option "MaxDifferenceOrder" bounds the maximum order by two times the decimal working precision.
The default setting of Automatic for the option "MinDifferenceOrder" selects the minimum number of two extrapolations, starting from the order of the base method. This also depends on whether the base method is symmetric.

The default setting of Automatic for the option "OrderSafetyFactors" uses the values {7/10, 9/10} for a stiff base method and {4/5, 9/10} for a nonstiff base method.

The default setting of Automatic for the option "StartingDifferenceOrder" depends on the setting of "MinDifferenceOrder" p_min. It is set to p_min + 1 or p_min + 2, depending on whether the base method is symmetric.

The default setting of Automatic for the option "StepSizeRatioBounds" uses the values {1/10, 4} for a stiff base method and {1/50, 4} for a nonstiff base method.

The default setting of Automatic for the option "StepSizeSafetyFactors" uses the values {9/10, 4/5} for a stiff base method and {9/10, 13/20} for a nonstiff base method.

The default setting of Automatic for the option "StiffnessTest" indicates that the stiffness test is activated if a nonstiff base method is used.

option name         default value   description
"StabilityCheck"    True            specify whether to carry out a stability check on consecutive implicit solutions (see e.g. (1))

Option of the methods "LinearlyImplicitEuler", "LinearlyImplicitMidpoint", and "LinearlyImplicitModifiedMidpoint".
"FixedStep" Method for NDSolve Introduction It is often useful to carry out a numerical integration using fixed step sizes. For example, certain methods such as “DoubleStep“ and “Extrapolation“ carry out a sequence of fixed-step integrations before combining the solutions to obtain a more accurate method with an error estimate that allows adaptive step sizes to be taken. The method “FixedStep“ allows any one-step integration method to be invoked using fixed step sizes.
140
Advanced Numerical Differential Equation Solving in Mathematica
This loads a package with some example problems and a package with some utility functions. In[3]:=
Needs@“DifferentialEquations`NDSolveProblems`“D; Needs@“DifferentialEquations`NDSolveUtilities`“D;
Examples Define an example problem. In[5]:=
system = GetNDSolveProblem@“BrusselatorODE“D
£ 2 £ 2 Out[5]= NDSolveProblemB:9HY1 L @TD ã 1 - 4 Y1 @TD + Y1 @TD Y2 @TD, HY2 L @TD ã 3 Y1 @TD - Y1 @TD Y2 @TD=,
:Y1 @0D ã
3 2
, Y2 @0D ã 3>, 8Y1 @TD, Y2 @TD<, 8T, 0, 20<, 8<, 8<, 8<>F
This integrates a differential system using the method “ExplicitEuler“ with a fixed step size of 1 ê 10. In[6]:=
NDSolve@8y ‘‘@tD ã - y@tD, y@0D ã 1, y ‘@0D ã 0<, y, 8t, 0, 1<, StartingStepSize Ø 1 ê 10, Method Ø 8“FixedStep“, Method Ø “ExplicitEuler“
Out[6]= 88y Ø InterpolatingFunction@880., 1.<<, <>D<<
Actually the “ExplicitEuler“ method has no adaptive step size control. Therefore, the integration is already carried out using fixed step sizes so the specification of “FixedStep“ is unnecessary. In[7]:=
sol = NDSolve@system, StartingStepSize Ø 1 ê 10, Method Ø “ExplicitEuler“D; StepDataPlot@sol, PlotRange Ø 80, 0.2
Out[8]= 0.1
0
5
10
15
20
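The idea of the "FixedStep" wrapper, driving an arbitrary one-step method with a constant step size, can be sketched outside Mathematica as well. A Python illustration (all names are mine; only the final step is shortened to land exactly on the end point):

```python
import math

def fixed_step(step, f, t0, y0, t1, h):
    """Drive any one-step method step(f, t, y, h) -> y_new with a fixed
    step size h, shortening only the final step to land exactly on t1."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        hh = min(h, t1 - t)
        y = step(f, t, y, hh)
        t += hh
    return y

def explicit_euler_step(f, t, y, h):
    """The simplest one-step method, used here as the wrapped integrator."""
    return y + h * f(t, y)
```

Any other one-step rule (midpoint, Runge-Kutta, ...) can be passed in place of the Euler step without changing the driver.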
Here are the step sizes taken by the method "ExplicitRungeKutta" for this problem.

In[9]:= sol = NDSolve[system, StartingStepSize -> 1/10, Method -> "ExplicitRungeKutta"];
        StepDataPlot[sol]

[Figure: step sizes varying between about 0.1 and 0.3.]

This specifies that fixed step sizes should be used for the method "ExplicitRungeKutta".

In[11]:= sol = NDSolve[system, StartingStepSize -> 1/10,
           Method -> {"FixedStep", Method -> "ExplicitRungeKutta"}];
         StepDataPlot[sol]

[Figure: constant step size of 0.1 over the interval [0, 20].]

The option MaxStepFraction provides an absolute bound on the step size that depends on the integration interval. Since the default value of MaxStepFraction is 1/10, the step size in this example is bounded by one-tenth of the integration interval, which leads to using a constant step size of 1/20.

In[13]:= time = {T, 0, 1/2};
         sol = NDSolve[system, time, StartingStepSize -> 1/10,
           Method -> {"FixedStep", Method -> "ExplicitRungeKutta"}];
         StepDataPlot[sol]

[Figure: constant step size of 0.05 over the interval [0, 1/2].]

By setting the value of MaxStepFraction to a different value, the dependence of the step size on the integration interval can be relaxed or removed entirely.

In[16]:= sol = NDSolve[system, time, StartingStepSize -> 1/10,
           MaxStepFraction -> Infinity,
           Method -> {"FixedStep", Method -> "ExplicitRungeKutta"}];
         StepDataPlot[sol]

[Figure: constant step size of 0.1 over the interval [0, 1/2].]
Option Summary

option name   default value   description
Method        None            specify the method to use with fixed step sizes

Option of the method "FixedStep".
"OrthogonalProjection" Method for NDSolve Introduction Consider the matrix differential equation: y£ HtL = f Ht, yHtLL, t > 0, where the initial value y0 = yH0L œ mµp is given. Assume that y0 T y0 = I, that the solution has the property of preserving orthonormality, yHtLT yHtL = I, and that it has full rank for all t ¥ 0. From a numerical perspective, a key issue is how to numerically integrate an orthogonal matrix differential system in such a way that the numerical solution remains orthogonal. There are several strategies that are possible. One approach, suggested in [DRV94], is to use an implicit Runge|Kutta method (such as the Gauss scheme). Some alternative strategies are described in [DV99] and [DL01]. The approach taken here is to use any reasonable numerical integration method and then postprocess using a projective procedure at the end of each integration step.
Advanced Numerical Differential Equation Solving in Mathematica
143
An important feature of this implementation is that the basic integration method can be any built-in numerical method, or even a user-defined procedure. In the following examples an explicit Runge|Kutta method is used for the basic time stepping. However, if greater accuracy is required an extrapolation method could easily be used, for example, by simply setting the appropriate Method option. Projection Step At the end of each numerical integration step you need to transform the approximate solution matrix of the differential system to obtain an orthogonal matrix. This can be carried out in several ways (see for example [DRV94] and [H97]): † Newton or Schulz iteration † QR decomposition † Singular value decomposition The Newton and Schulz methods are quadratically convergent, and the number of iterations may vary depending on the error tolerances used in the numerical integration. One or two iterations are usually sufficient for convergence to the orthonormal polar factor (see the following) in IEEE double-precision arithmetic. QR decomposition is cheaper than singular value decomposition (roughly by a factor of two), but it does not give the closest possible projection. Definition (Thin singular value decomposition [GVL96]): Given a matrix A œ mµp with m ¥ p there exist two matrices U œ mµp and V œ pµp such that U T A V is the diagonal matrix of singular values of A, S = diagIs1 , …, s p M œ pµp , where s1 ¥ ¥ s p ¥ 0. U has orthonormal columns and V is orthogonal. Definition (Polar decomposition): Given a matrix A and its singular value decomposition U S V T , the polar decomposition of A is given by the product of two matrices Z and P where Z = U V T and P = V S V T . Z has orthonormal columns and P is symmetric positive semidefinite. The orthonormal polar factor Z of A is the matrix that solves: min 9 »» A - Z »» : Z T Z = I=
Zœmµp
for the 2 and Frobenius norms [H96].
Schulz Iteration

The approach chosen is based on the Schulz iteration, which works directly for m ≥ p. In contrast, Newton iteration for m > p needs to be preceded by QR decomposition. Comparison with direct computation based on the singular value decomposition is also given.

The Schulz iteration is given by:

Y_{i+1} = Y_i + Y_i (I − Y_i^T Y_i) / 2,  Y_0 = A.    (1)

The Schulz iteration has an arithmetic operation count per iteration of 2 m² p + 2 m p² floating-point operations, but is rich in matrix multiplication [H97]. In a practical implementation, GEMM-based level 3 BLAS of LAPACK [LAPACK99] can be used in conjunction with architecture-specific optimizations via the Automatically Tuned Linear Algebra Software [ATLAS00]. Such considerations mean that the arithmetic operation count of the Schulz iteration is not necessarily an accurate reflection of the observed computational cost.

A useful bound on the departure from orthonormality of A is given in [H89]: ||A^T A − I||_F. Comparison with the Schulz iteration gives the stopping criterion ||A^T A − I||_F < τ for some tolerance τ.
Standard Formulation

Assume that an initial value y_n for the current solution of the ODE is given, together with a solution y_{n+1} = y_n + Δy_n from a one-step numerical integration method. Assume that an absolute tolerance τ for controlling the Schulz iteration is also prescribed.

The following algorithm can be used for implementation.

Step 1. Set Y_0 = y_{n+1} and i = 0.
Step 2. Compute E = I − Y_i^T Y_i.
Step 3. Compute Y_{i+1} = Y_i + Y_i E / 2.
Step 4. If ||E||_F ≤ τ or i = i_max, then return Y_{i+1}.
Step 5. Set i = i + 1 and go to step 2.
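Steps 1 through 5 can be sketched in pure Python for a square matrix (an illustrative toy; in practice the matrix products would go through optimized level 3 BLAS, and the function name is mine):

```python
def schulz_project(A, tol=1e-12, imax=40):
    """Iterate Y <- Y + Y (I - Y^T Y)/2, returning once the Frobenius norm
    of E = I - Y^T Y drops below tol or i reaches imax."""
    n = len(A)
    Y = [row[:] for row in A]
    for i in range(imax + 1):
        Yt = [list(col) for col in zip(*Y)]                       # Y^T
        YtY = [[sum(Yt[r][k] * Y[k][c] for k in range(n))
                for c in range(n)] for r in range(n)]
        E = [[(1.0 if r == c else 0.0) - YtY[r][c]
              for c in range(n)] for r in range(n)]
        YE = [[sum(Y[r][k] * E[k][c] for k in range(n))
               for c in range(n)] for r in range(n)]
        Ynext = [[Y[r][c] + 0.5 * YE[r][c] for c in range(n)] for r in range(n)]
        errF = sum(E[r][c] ** 2 for r in range(n) for c in range(n)) ** 0.5
        if errF <= tol or i == imax:
            return Ynext
        Y = Ynext
```

Applied to a slightly scaled rotation matrix, a handful of iterations recovers the underlying orthogonal (polar) factor.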
Increment Formulation

NDSolve uses compensated summation to reduce the effect of rounding errors made by repeatedly adding the contribution of small quantities Δy_n to y_n at each integration step [H96]. Therefore, the increment Δy_n is returned by the base integrator.

An appropriate orthogonal correction ΔY_i for the projective iteration can be determined using the following algorithm.

Step 1. Set ΔY_0 = 0 and i = 0.
Step 2. Set Y_i = ΔY_i + y_{n+1}.
Step 3. Compute E = I − Y_i^T Y_i.
Step 4. Compute ΔY_{i+1} = ΔY_i + Y_i E / 2.
Step 5. If ||E||_F ≤ τ or i = i_max, then return ΔY_{i+1} + Δy_n.
Step 6. Set i = i + 1 and go to step 2.

This modified algorithm is used in "OrthogonalProjection" and shows an advantage of using an iterative process over a direct process, since it is not obvious how an orthogonal correction can be derived for direct methods.
Examples Orthogonal Error Measurement A function to compute the Frobenius norm »» A »»F of a matrix A can be defined in terms of the Norm function as follows. In[1]:=
FrobeniusNorm@a_ ? MatrixQD := Norm@a, FrobeniusD; An upper bound on the departure from orthonormality of A can then be measured using this function [H97].
In[2]:=
OrthogonalError[a_?MatrixQ] :=
  FrobeniusNorm[Transpose[a].a - IdentityMatrix[Last[Dimensions[a]]]];
This defines a utility function for extracting the times at which the integration method has sampled the solution.

In[4]:= (* Utility function for extracting a list of values of the
   independent variable at which the integration method has sampled *)
TimeData[{v_?VectorQ, ___?VectorQ}] := v;
In[6]:=
(* Utility function for plotting the orthogonal error in a numerical integration *)
OrthogonalErrorPlot[sol_] :=
  Module[{errdata, samples, soldata},
    (* Form a list of times at which the method is invoked *)
    samples = TimeData[sol];
    (* Form a list of solutions at the integration times *)
    soldata = Map[(sol /. t -> #) &, samples];
    (* Form a list of the orthogonal errors *)
    errdata = Map[OrthogonalError, soldata];
    ListLinePlot[Transpose[{samples, errdata}],
      PlotLabel -> "Orthogonal error vs time"]];
Square Systems

This example concerns the solution of a matrix differential system on the orthogonal group O_3(R) (see [Z98]). The matrix differential system is given by

Y' = F(Y) Y = (A + (I - Y Y^T)) Y

with

A = (  0  -1   1
       1   0   1
      -1  -1   0 )

and Y_0 = I_3. The solution evolves as Y(t) = exp(t A).
The eigenvalues of Y(t) are λ_1 = 1, λ_2 = exp(t i √3), λ_3 = exp(-t i √3). Thus as t approaches π/√3, two of the eigenvalues of Y(t) approach -1. The numerical integration is carried out on the interval [0, 2].

In[7]:=
n = 3;
A = {{0, -1, 1}, {1, 0, 1}, {-1, -1, 0}};
Y = Table[y[i, j][t], {i, n}, {j, n}];
(* Vector differential system *)
system = Thread[Flatten[D[Y, t]] ==
    Flatten[(A + (IdentityMatrix[n] - Y.Transpose[Y])).Y]];
(* Vector initial conditions *)
ics = Thread[Flatten[(Y /. t -> 0)] == Flatten[IdentityMatrix[Length[Y]]]];
eqs = {system, ics};
vars = Flatten[Y];
time = {t, 0, 2};

This computes the solution using an explicit Runge-Kutta method. The appropriate initial step size and method order are selected automatically, and the step size may vary throughout the integration interval in order to satisfy local relative and absolute error tolerances. Alternatively, the order of the method could be specified by using a Method option.
In[16]:=
solerk = NDSolve[eqs, vars, time, Method -> "ExplicitRungeKutta"];

This computes the orthogonal error, or absolute deviation from the orthogonal manifold, as the integration progresses. The error is of the order of the local accuracy of the numerical method.
In[17]:=
solerk = Y /. First[solerk];
OrthogonalErrorPlot[solerk]

Out[18]= (* plot of the orthogonal error ||Y^T Y - I||_F vs time on [0, 2]; the error is of order 10^-10 to 10^-9 *)
This computes the solution using an orthogonal projection method with an explicit Runge-Kutta method used for the basic integration step. The initial step size and method order are the same as earlier, but the step size sequence in the integration may differ.

In[19]:=
solop = NDSolve[eqs, vars, time,
   Method -> {"OrthogonalProjection", Method -> "ExplicitRungeKutta",
     "Dimensions" -> Dimensions[Y]}];
In[20]:=
solop = Y /. First[solop];
OrthogonalErrorPlot[solop]

Out[21]= (* plot of the orthogonal error ||Y^T Y - I||_F vs time on [0, 2]; the error remains at the level of roundoff, of order 10^-16 *)
The Schulz iteration, using the incremental formulation, generally yields smaller errors than the direct singular value decomposition.
Rectangular Systems

The following example shows how the implementation of the orthogonal projection method also works for rectangular matrix differential systems. Formally stated, the interest is in solving ordinary differential equations on the Stiefel manifold, the set of n×p orthonormal matrices with p < n.
Definition

The Stiefel manifold of n×p orthogonal matrices is the set V_{n,p}(R) = {Y ∈ R^{n×p} : Y^T Y = I_p}, 1 ≤ p < n, where I_p is the p×p identity matrix.

Solutions that evolve on the Stiefel manifold find numerous applications, such as eigenvalue problems in numerical linear algebra, computation of Lyapunov exponents for dynamical systems, and signal processing.

Consider an example adapted from [DL01]:

q'(t) = A q(t), t > 0, q(0) = q_0

where q_0 = (1/√n) [1, …, 1]^T, A = diag[a_1, …, a_n] ∈ R^{n×n}, with a_i = (-1)^i a, i = 1, …, n and a > 0.
The exact solution is given by:

q(t) = (1/√n) [exp(a_1 t), …, exp(a_n t)]^T.

Normalizing q(t) as

Y(t) = q(t) / ||q(t)|| ∈ R^{n×1},

it follows that Y(t) satisfies the following weak skew-symmetric system on V_{n,1}(R):

Y' = F(Y) Y = (I_n - Y Y^T) A Y.
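This identity can be checked numerically. The following Python sketch (independent of the Mathematica session; the parameter values mirror the example that follows, n = 2 and a = 9/10) compares a central finite difference of Y(t) = q(t)/||q(t)|| against the right-hand side:

```python
import math

A_DIAG = (-0.9, 0.9)  # a_i = (-1)^i a with a = 9/10, n = 2

def Y(t):
    # normalized exact solution q(t)/||q(t)||; the constant factor
    # 1/sqrt(n) in q cancels under normalization, so it is omitted
    q = [math.exp(a * t) for a in A_DIAG]
    nrm = math.hypot(*q)
    return [qi / nrm for qi in q]

def rhs(y):
    # (I - Y Y^T) A Y for diagonal A
    Ay = [a * yi for a, yi in zip(A_DIAG, y)]
    yAy = sum(yi * ai for yi, ai in zip(y, Ay))
    return [ai - yi * yAy for ai, yi in zip(Ay, y)]

t, h = 0.7, 1e-6
fd = [(a - b) / (2 * h) for a, b in zip(Y(t + h), Y(t - h))]
residual = max(abs(d - r) for d, r in zip(fd, rhs(Y(t))))
```

The residual stays at the level of the finite-difference truncation and roundoff error, confirming that the normalized solution satisfies the weak skew-symmetric system.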
In the following example, the system is solved on the interval [0, 5] with a = 9/10 and dimension n = 2.

In[22]:=
p = 1;
n = 2;
a = 9/10;
ics = Table[1, {n}]/Sqrt[n];
avec = Table[(-1)^i a, {i, n}];
A = DiagonalMatrix[avec];
Y = Table[{y[i][t]}, {i, n}];
system = Thread[Flatten[D[Y, t]] ==
    Flatten[(IdentityMatrix[n] - Y.Transpose[Y]).A.Y]];
eqs = {system, Thread[Flatten[(Y /. t -> 0)] == ics]};
vars = Flatten[Y];
tfinal = 5;
time = {t, 0, tfinal};
solexact = Transpose[{# / Norm[#, 2] & [Exp[avec t] / Sqrt[n]]}];
This computes the solution using an explicit Runge-Kutta method.

In[37]:=
solerk = NDSolve[eqs, vars, time, Method -> "ExplicitRungeKutta"];
solerk = Y /. First[solerk];

This computes the componentwise absolute global error at the end of the integration interval.
In[39]:=
(solexact - solerk) /. t -> tfinal

Out[39]= {{-2.03407×10^-11}, {2.96319×10^-13}}
This computes the orthogonal error, a measure of the deviation from the Stiefel manifold.

In[40]:=
OrthogonalErrorPlot[solerk]

Out[40]= (* plot of the orthogonal error ||Y^T Y - I||_F vs time on [0, 5]; the error is of order 10^-10 *)
This computes the solution using an orthogonal projection method with an explicit Runge-Kutta method as the basic numerical integration scheme.

In[41]:= solop = NDSolve[eqs, vars, time,
   Method -> {"OrthogonalProjection", Method -> "ExplicitRungeKutta",
     "Dimensions" -> Dimensions[Y]}];
solop = Y /. First[solop];
In[43]:=
(solexact - solop) /. t -> tfinal

Out[43]= {{-2.03407×10^-11}, {2.55351×10^-15}}
Using the orthogonal projection method, however, the deviation from the Stiefel manifold is reduced to the level of roundoff. In[44]:=
OrthogonalErrorPlot[solop]

Out[44]= (* plot of the orthogonal error ||Y^T Y - I||_F vs time on [0, 5]; the error remains at the level of roundoff, of order 10^-17 to 10^-16 *)
Implementation

The implementation of the method "OrthogonalProjection" has three basic components:

• Initialization. Set up the base method to use in the integration, determining any method coefficients and setting up any workspaces that should be used. This is done once, before any actual integration is carried out, and the resulting MethodData object is validated so that it does not need to be checked at each integration step. At this stage the system dimensions and initial conditions are checked for consistency.

• Invoke the base numerical integration method at each step.

• Perform an orthogonal projection. This performs various tests, such as checking that the basic integration proceeded correctly and that the Schulz iteration converges.

Options can be used to modify the stopping criteria for the Schulz iteration. One option provided by the code is "IterationSafetyFactor", which allows control over the tolerance τ of the iteration. The factor is combined with a Unit in the Last Place, determined according to the working precision used in the integration (ULP ≈ 2.22045×10^-16 for IEEE double precision). The Frobenius norm used for the stopping criterion can be computed efficiently using the LAPACK LANGE functions [LAPACK99]. The option MaxIterations controls the maximum number of iterations that should be carried out.
Option Summary

option name               default value         description
"Dimensions"              {}                    specify the dimensions of the matrix differential system
"IterationSafetyFactor"   1/10                  specify the safety factor to use in the termination criterion for the Schulz iteration
"MaxIterations"           Automatic             specify the maximum number of iterations to use in the Schulz iteration
Method                    "StiffnessSwitching"  specify the method to use for the numerical integration

Options of the method "OrthogonalProjection".
"Projection" Method for NDSolve

Introduction

When a differential system has a certain structure, it is advantageous if a numerical integration method preserves that structure. In certain situations it is useful to solve differential equations in which solutions are constrained. Projection methods work by taking a time step with a numerical integration method and then projecting the approximate solution onto the manifold on which the true solution evolves.

NDSolve includes a differential algebraic equation solver, which may be appropriate and is described in more detail within "Numerical Solution of Differential-Algebraic Equations". Sometimes the form of the equations may not be reducible to the form required by a DAE solver. Furthermore, so-called index reduction techniques can destroy certain structural properties, such as symplecticity, that the differential system may possess (see [HW96] and [HLW02]). An example that illustrates this can be found in the documentation for DAEs. In such cases it is often possible to solve a differential system and then use a projective procedure to ensure that the constraints are conserved. This is the idea behind the method "Projection".

If the differential system is ρ-reversible, then a symmetric projection process can be advantageous (see [H00]). Symmetric projection is generally more costly than projection and has not yet been implemented in NDSolve.
Invariants

Consider a differential equation

y'(t) = f(y), y(t_0) = y_0,    (1)

where y may be a vector or a matrix.

Definition: A nonconstant function I(y) is called an invariant of (1) if I'(y) f(y) = 0 for all y. This implies that every solution y(t) of (1) satisfies I(y(t)) = I(y_0) = constant.

Synonymous with invariant, the terms first integral, conserved quantity, or constant of the motion are also common.
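As a concrete check, for the harmonic oscillator y1' = y2, y2' = -y1, the function I(y) = (y1^2 + y2^2)/2 is an invariant, since I'(y) f(y) = y1 y2 - y2 y1 = 0. The condition is easy to test numerically (a Python sketch with illustrative names):

```python
def f(y):
    # harmonic oscillator right-hand side: y1' = y2, y2' = -y1
    return [y[1], -y[0]]

def grad_invariant(y):
    # gradient of the candidate invariant I(y) = (y1^2 + y2^2)/2
    return [y[0], y[1]]

def directional_derivative(y):
    # I'(y) f(y): the rate of change of I along the flow
    return sum(g * v for g, v in zip(grad_invariant(y), f(y)))
```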
Manifolds

Consider an (n - m)-dimensional submanifold of R^n defined, for g: R^n -> R^m, by M = {y : g(y) = 0}.

Given the differential equation (1), y_0 ∈ M implies y(t) ∈ M for all t. This is a weaker assumption than invariance, and g(y) is called a weak invariant (see [HLW02]).
Projection Algorithm

Let ỹ_{n+1} denote the solution from a one-step numerical integrator. Considering a constrained minimization problem leads to the following system (see [AP91], [HW96] and [HLW02]):

y_{n+1} = ỹ_{n+1} + g'(y_{n+1})^T λ
      0 = g(y_{n+1}).    (2)

To save work, g'(y_{n+1}) is approximated by g'(ỹ_{n+1}). Substituting the first relation into the second relation in (2) leads to the following simplified Newton scheme for λ:

Δλ_i = -(g'(ỹ_{n+1}) g'(ỹ_{n+1})^T)^-1 g(ỹ_{n+1} + g'(ỹ_{n+1})^T λ_i)    (3)
λ_{i+1} = λ_i + Δλ_i

with λ_0 = 0.

The first increment Δλ_0 is of size O(h_n^{p+1}), so that (3) usually converges quickly. The added expense of using a higher-order integration method can be offset by fewer Newton iterations in the projective step.

For the termination criterion in the method "Projection", the option "IterationSafetyFactor" is combined with one Unit in the Last Place in the working precision used by NDSolve.
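The scheme can be illustrated for a single scalar constraint. The following Python sketch (hypothetical names; a simplified illustration of the scheme above, not the NDSolve code) projects a point near the unit circle, g(y) = y1^2 + y2^2 - 1, back onto it:

```python
# Simplified Newton iteration for the multiplier lambda, with the
# constraint Jacobian frozen at the predicted point y_tilde.

def project(y_tilde, g, grad_g, tol=1e-14, max_iter=10):
    J = grad_g(y_tilde)              # g'(y~), a row vector
    JJt = sum(j * j for j in J)      # g'(y~) g'(y~)^T (a scalar here)
    lam = 0.0
    for _ in range(max_iter):
        y = [yt + j * lam for yt, j in zip(y_tilde, J)]
        r = g(y)                     # residual g(y~ + g'(y~)^T lambda)
        if abs(r) <= tol:
            break
        lam -= r / JJt               # the increment delta lambda
    return [yt + j * lam for yt, j in zip(y_tilde, J)]

def g_circle(y):
    return y[0] ** 2 + y[1] ** 2 - 1.0

def grad_circle(y):
    return [2.0 * y[0], 2.0 * y[1]]
```

For example, project([1.01, 0.02], g_circle, grad_circle) returns a nearby point satisfying the constraint to roundoff after a handful of iterations.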
Examples

Load some utility packages.

In[3]:= Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
Linear Invariants

Define a stiff system modeling a chemical reaction.

In[5]:= system = GetNDSolveProblem["Robertson"];
vars = system["DependentVariables"];

This system has a linear invariant.

In[7]:= invariant = system["Invariants"]

Out[7]= {Y1[T] + Y2[T] + Y3[T]}
Linear invariants are generally conserved by numerical integrators (see [S86]), including the default NDSolve method, as can be observed in a plot of the error in the invariant. In[8]:=
sol = NDSolve[system];
InvariantErrorPlot[invariant, vars, T, sol]

Out[9]= (* plot of the error in the invariant on [0, 0.3]; the error remains at the level of roundoff, below about 3.×10^-16 *)
Therefore, in this example there is no need to use the method "Projection".

Certain numerical methods preserve quadratic invariants exactly (see for example [C87]). The implicit midpoint rule, or one-stage Gauss implicit Runge-Kutta method, is one such method.
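This can be verified with a small Python experiment (an illustration, not NDSolve code): for the harmonic oscillator y' = A y with A = {{0, 1}, {-1, 0}}, the implicit midpoint step can be solved in closed form, and the quadratic invariant is conserved to roundoff over many steps.

```python
def midpoint_step(y, h):
    # Implicit midpoint rule for y' = A y, A = [[0, 1], [-1, 0]]:
    # y_new = (I - h/2 A)^{-1} (I + h/2 A) y, solved in closed form.
    d = 1.0 + (h / 2.0) ** 2          # det(I - (h/2) A)
    u = y[0] + (h / 2.0) * y[1]       # (I + (h/2) A) y
    v = y[1] - (h / 2.0) * y[0]
    # multiply by (I - (h/2) A)^{-1} = [[1, h/2], [-h/2, 1]] / d
    return [(u + (h / 2.0) * v) / d, (v - (h / 2.0) * u) / d]

def invariant(y):
    return 0.5 * (y[0] ** 2 + y[1] ** 2)

y, h = [1.0, 0.0], 0.1
e0 = invariant(y)
for _ in range(1000):
    y = midpoint_step(y, h)
drift = abs(invariant(y) - e0)
```

The drift stays at the level of accumulated roundoff even after 1000 steps, because the one-step map is an exact rotation (a Cayley transform), even though the trajectory itself is only second-order accurate.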
Harmonic Oscillator

Define the harmonic oscillator.

In[10]:= system = GetNDSolveProblem["HarmonicOscillator"];
vars = system["DependentVariables"];
The harmonic oscillator has the following invariant.

In[12]:= invariant = system["Invariants"]

Out[12]= {1/2 (Y1[T]^2 + Y2[T]^2)}
Solve the system using the method "ExplicitRungeKutta". The error in the invariant grows roughly linearly, which is typical behavior for a dissipative method applied to a Hamiltonian system.

In[13]:=
erksol = NDSolve[system, Method -> "ExplicitRungeKutta"];
InvariantErrorPlot[invariant, vars, T, erksol]

Out[14]= (* plot of the error in the invariant on [0, 10]; the error grows roughly linearly to about 2.×10^-9 *)
This also solves the system using the method "ExplicitRungeKutta", but it projects the solution at the end of each step. A plot of the error in the invariant shows that it is conserved up to roundoff.

In[15]:= projerksol = NDSolve[system, Method -> {"Projection",
    Method -> "ExplicitRungeKutta", "Invariants" -> invariant}];
InvariantErrorPlot[invariant, vars, T, projerksol]

Out[16]= (* plot of the error in the invariant on [0, 10]; the error remains at the level of roundoff, below about 10^-16 *)
Since the system is Hamiltonian (the invariant is the Hamiltonian), a symplectic integrator performs well on this problem, giving a small bounded error. In[17]:=
projerksol = NDSolve[system,
   Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 8,
     "PositionVariables" -> {Y1[T]}}, StartingStepSize -> 1/5];
InvariantErrorPlot[invariant, vars, T, projerksol]

Out[18]= (* plot of the error in the invariant on [0, 10]; the error is small and bounded, of order 10^-13 *)
Perturbed Kepler Problem

This loads a Hamiltonian system known as the perturbed Kepler problem, sets the integration interval and the step size to take, and defines the position variables in the Hamiltonian formalism.

In[19]:= system = GetNDSolveProblem["PerturbedKepler"];
time = system["TimeData"];
step = 3/100;
pvars = Take[system["DependentVariables"], 2]

Out[22]= {Y1[T], Y2[T]}
The system has two invariants, which are defined as H and L.

In[23]:= {H, L} = system["Invariants"]

Out[23]= {-1/Sqrt[Y1[T]^2 + Y2[T]^2] - 1/(400 (Y1[T]^2 + Y2[T]^2)^(3/2)) +
    1/2 (Y3[T]^2 + Y4[T]^2), -Y2[T] Y3[T] + Y1[T] Y4[T]}
An experiment now illustrates the importance of using all the available invariants in the projective process (see [HLW02]). Consider the solutions obtained using:

• The method "ExplicitEuler"
• The method "Projection" with "ExplicitEuler", projecting onto the invariant L
• The method "Projection" with "ExplicitEuler", projecting onto the invariant H
• The method "Projection" with "ExplicitEuler", projecting onto both the invariants H and L

In[24]:=
sol = NDSolve[system, Method -> "ExplicitEuler", StartingStepSize -> step];
ParametricPlot[Evaluate[pvars /. First[sol]], Evaluate[time]]

Out[25]= (* plot of the orbit in the (Y1, Y2) plane; with no projection the solution drifts far from the true orbit *)

In[26]:= sol = NDSolve[system, Method -> {"Projection", Method -> "ExplicitEuler",
    "Invariants" -> {H}}, StartingStepSize -> step];
ParametricPlot[Evaluate[pvars /. First[sol]], Evaluate[time]]

Out[27]= (* plot of the orbit; projecting onto H alone keeps the orbit bounded, but it is still qualitatively incorrect *)

In[28]:= sol = NDSolve[system, Method -> {"Projection", Method -> "ExplicitEuler",
    "Invariants" -> {L}}, StartingStepSize -> step];
ParametricPlot[Evaluate[pvars /. First[sol]], Evaluate[time]]

Out[29]= (* plot of the orbit; projecting onto L alone still allows the solution to drift away *)
In[30]:= sol = NDSolve[system, Method -> {"Projection", Method -> "ExplicitEuler",
    "Invariants" -> {H, L}}, StartingStepSize -> step];
ParametricPlot[Evaluate[pvars /. First[sol]], Evaluate[time]]

Out[31]= (* plot of the orbit; projecting onto both invariants gives the correct qualitative behavior *)
It can be observed that only the solution with projection onto both invariants gives the correct qualitative behavior. For comparison, results using an efficient symplectic solver can be found in "SymplecticPartitionedRungeKutta Method for NDSolve".
Lotka-Volterra

An example of constraint projection for the Lotka-Volterra system is given within "Numerical Methods for Solving the Lotka-Volterra Equations".
Euler's Equations

An example of constraint projection for Euler's equations is given within "Rigid Body Solvers".
Option Summary

option name               default value         description
"Invariants"              None                  specify the invariants of the differential system
"IterationSafetyFactor"   1/10                  specify the safety factor to use in the iterative solution of the invariants
"MaxIterations"           Automatic             specify the maximum number of iterations to use in the iterative solution of the invariants
Method                    "StiffnessSwitching"  specify the method to use for integrating the differential system numerically

Options of the method "Projection".
"StiffnessSwitching" Method for NDSolve

Introduction

The basic idea behind the "StiffnessSwitching" method is to provide an automatic means of switching between a nonstiff and a stiff solver.

The "StiffnessTest" and "NonstiffTest" options (described within "Stiffness Detection in NDSolve") provide a useful means of detecting when a problem appears to be stiff. The "StiffnessSwitching" method traps any failure code generated by "StiffnessTest" and switches to an alternative solver. The "StiffnessSwitching" method also uses the method specified in the "NonstiffTest" option to switch back from a stiff to a nonstiff method.

"Extrapolation" provides a powerful technique for computing highly accurate solutions using dynamic order and step size selection (see "Extrapolation Method for NDSolve" for more details) and is therefore used as the default choice in "StiffnessSwitching".
Examples

This loads some useful packages.

In[3]:= Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];

This selects a stiff problem and specifies a longer integration time interval than the default specified by NDSolveProblem.

In[5]:= system = GetNDSolveProblem["VanderPol"];
time = {T, 0, 10};

The default "Extrapolation" base method is not appropriate for stiff problems and gives up quite quickly.
In[7]:=
NDSolve[system, time, Method -> "Extrapolation"]

NDSolve::ndstf : At T == 0.022920104414210326`, system appears to be stiff. Methods Automatic, BDF or StiffnessSwitching may be more appropriate.

Out[7]= {{Y1[T] -> InterpolatingFunction[{{0., 0.0229201}}, <>][T],
    Y2[T] -> InterpolatingFunction[{{0., 0.0229201}}, <>][T]}}
Instead of giving up, the "StiffnessSwitching" method continues the integration with a stiff solver.

In[8]:= NDSolve[system, time, Method -> "StiffnessSwitching"]

Out[8]= {{Y1[T] -> InterpolatingFunction[{{0., 10.}}, <>][T],
    Y2[T] -> InterpolatingFunction[{{0., 10.}}, <>][T]}}
The "StiffnessSwitching" method uses a pair of extrapolation methods as the default. The nonstiff solver uses the "ExplicitModifiedMidpoint" base method, and the stiff solver uses the "LinearlyImplicitEuler" base method.

For small values of the AccuracyGoal and PrecisionGoal tolerances, it is sometimes preferable to use an explicit Runge-Kutta method for the nonstiff solver.

The "ExplicitRungeKutta" method eventually gives up when the problem is considered to be stiff.

In[9]:=
NDSolve[system, time, Method -> "ExplicitRungeKutta",
   AccuracyGoal -> 5, PrecisionGoal -> 4]

NDSolve::ndstf : At T == 0.028229404169279455`, system appears to be stiff. Methods Automatic, BDF or StiffnessSwitching may be more appropriate.

Out[9]= {{Y1[T] -> InterpolatingFunction[{{0., 0.0282294}}, <>][T],
    Y2[T] -> InterpolatingFunction[{{0., 0.0282294}}, <>][T]}}
This sets the "ExplicitRungeKutta" method as a submethod of "StiffnessSwitching".

In[10]:= sol = NDSolve[system, time,
   Method -> {"StiffnessSwitching", Method -> {"ExplicitRungeKutta", Automatic}},
   AccuracyGoal -> 5, PrecisionGoal -> 4]

Out[10]= {{Y1[T] -> InterpolatingFunction[{{0., 10.}}, <>][T],
    Y2[T] -> InterpolatingFunction[{{0., 10.}}, <>][T]}}
A switch to the stiff solver occurs at T ≈ 0.0282294, and a plot of the step sizes used shows that the stiff solver takes much larger steps.

In[11]:= StepDataPlot[sol]

Out[11]= (* logarithmic plot of the step sizes vs time on [0, 10]; after the switch the step sizes grow from about 0.001 to about 0.02 *)
Option Summary

option name      default value           description
Method           {Automatic, Automatic}  specify the methods to use for the nonstiff and stiff solvers, respectively
"NonstiffTest"   Automatic               specify the method to use for deciding whether to switch to a nonstiff solver

Options of the method "StiffnessSwitching".
Extensions

NDSolve Method Plug-in Framework

Introduction

The control mechanisms set up for NDSolve enable you to define your own numerical integration algorithms and use them as specifications for the Method option of NDSolve.

NDSolve accesses its numerical algorithms and the information it needs from them in an object-oriented manner. At each step of a numerical integration, NDSolve keeps the method in a form so that it can keep private data as needed.
AlgorithmIdentifier[data]
    an algorithm object that contains any data that a particular numerical ODE integration algorithm may need to use; the data is effectively private to the algorithm; AlgorithmIdentifier should be a Mathematica symbol, and the algorithm is accessed from NDSolve by using the option Method -> AlgorithmIdentifier

The structure for method data used in NDSolve.
NDSolve does not access the data associated with an algorithm directly, so you can keep the information needed in any form that is convenient or efficient to use. The algorithm and information that might be saved in its private data are accessed only through method functions of the algorithm object.
AlgorithmObject["Step"[rhs, t, h, y, yp]]
    attempt to take a single time step of size h from time t to time t + h using the numerical algorithm, where y and yp are the approximate solution vector and its time derivative, respectively, at time t; the function should generally return a list {newh, Δy}, where newh is the best size for the next step determined by the algorithm and Δy is the increment such that the approximate solution at time t + h is given by y + Δy; if the time step is too large, the function should only return the value {newh}, where newh should be small enough for an acceptable step (see later for complete descriptions of possible return values)

AlgorithmObject["DifferenceOrder"]
    return the current asymptotic difference order of the algorithm

AlgorithmObject["StepMode"]
    return the step mode for the algorithm object, where the step mode should be either Automatic or Fixed; Automatic means that the algorithm has a means to estimate error and determines an appropriate size newh for the next time step; Fixed means that the algorithm will be called from a time step controller and is not expected to do any error estimation

Required method functions for algorithms used from NDSolve.
These method functions must be defined for the algorithm to work with NDSolve. The "Step" method function should always return a list, but the length of the list depends on whether the step was successful or not. Also, some methods may need to compute the function value rhs[t + h, y + Δy] at the step end, so to avoid recomputation, you can add that to the list.
"Step"[rhs, t, h, y, yp] output    interpretation

{newh, Δy}
    successful step with computed solution increment Δy and recommended next step newh

{newh, Δy, yph}
    successful step with computed solution increment Δy, recommended next step newh, and time derivatives computed at the step endpoint, yph = rhs[t + h, y + Δy]

{newh, Δy, yph, newobj}
    successful step with computed solution increment Δy, recommended next step newh, and time derivatives computed at the step endpoint, yph = rhs[t + h, y + Δy]; any changes in the object data are returned in the new instance of the method object, newobj

{newh, Δy, None, newobj}
    successful step with computed solution increment Δy and recommended next step newh; any changes in the object data are returned in the new instance of the method object, newobj

{newh}
    rejected step with recommended next step newh such that |newh| < |h|

{newh, $Failed, None, newobj}
    rejected step with recommended next step newh such that |newh| < |h|; any changes in the object data are returned in the new instance of the method object, newobj

Interpretation of "Step" method output.
Classical Runge-Kutta

Here is an example of how to set up and access a simple numerical algorithm.

This defines a method function to take a single step toward integrating an ODE using the classical fourth-order Runge-Kutta method. Since the method is so simple, it is not necessary to save any private data.

In[1]:= CRK4[]["Step"[rhs_, t_, h_, y_, yp_]] :=
   Module[{k0, k1, k2, k3},
     k0 = h yp;
     k1 = h rhs[t + h/2, y + k0/2];
     k2 = h rhs[t + h/2, y + k1/2];
     k3 = h rhs[t + h, y + k2];
     {h, (k0 + 2 k1 + 2 k2 + k3)/6}];
In[2]:=
CRK4[___]["DifferenceOrder"] := 4
This defines a method function for the step mode so that NDSolve will know how to control time steps. This algorithm method does not have any step control, so you define the step mode to be Fixed.

In[3]:= CRK4[___]["StepMode"] := Fixed

This integrates the simple harmonic oscillator equation with fixed step size.
In[4]:=
fixed = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
   x, {t, 0, 2 Pi}, Method -> CRK4]

Out[4]= {{x -> InterpolatingFunction[{{0., 6.28319}}, <>]}}
Generally, using a fixed step size is less efficient than allowing the step size to vary with the local difficulty of the integration. Modern explicit Runge-Kutta methods (accessed in NDSolve with Method -> "ExplicitRungeKutta") have a so-called embedded error estimator that makes it possible to determine appropriate step sizes very efficiently. An alternative is to use built-in step controller methods that use extrapolation. The method "DoubleStep" uses an extrapolation based on integrating a time step with a single step of size h and two steps of size h/2. The method "Extrapolation" does a more sophisticated extrapolation and modifies the degree of extrapolation automatically as the integration is performed, but it is generally used with base methods of difference orders 1 and 2.

This integrates the simple harmonic oscillator using the classical fourth-order Runge-Kutta method with steps controlled by using the "DoubleStep" method.

In[5]:=
dstep = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
   x, {t, 0, 2 Pi}, Method -> {"DoubleStep", Method -> CRK4}]

Out[5]= {{x -> InterpolatingFunction[{{0., 6.28319}}, <>]}}
This makes a plot comparing the error in the computed solutions at the step ends. The error for the "DoubleStep" method is shown in blue.

In[6]:= ploterror[{sol_}, opts___] :=
   Module[{points, values},
     points = First[(x /. sol)["Coordinates"]];
     values = (x /. sol)["ValuesOnGrid"];
     ListPlot[Transpose[{points, values - Cos[points]}], opts]];
Out[7]= (* plot comparing the errors at the step ends on [0, 2π]; both methods give errors of order 10^-9 to 10^-8, with the fixed-step errors smaller *)
The fixed step size ended up with smaller overall error, mostly because the steps are so much smaller; it required more than three times as many steps. For a problem where the local solution structure changes more significantly, the difference can be even greater. A facility for stiffness detection is described within "DoubleStep Method for NDSolve".

For more sophisticated methods, it may be necessary or more efficient to set up some data for the method to use. When NDSolve uses a particular numerical algorithm for the first time, it calls an initialization function. You can define rules for the initialization that will set up appropriate data for your method.
InitializeMethod[AlgorithmIdentifier, stepmode, state, AlgorithmOptions]
    the expression that NDSolve evaluates for initialization when it first uses an algorithm for a particular integration, where stepmode is either Automatic or Fixed depending on whether your method is expected to be called within the framework of a step controller or another method or not; state is the NDSolveState object used by NDSolve; and AlgorithmOptions is a list that contains any options given specifically with the specification to use the particular algorithm, for example, {opts} in Method -> {AlgorithmIdentifier, opts}
AlgorithmIdentifier /: InitializeMethod[AlgorithmIdentifier, stepmode_,
    rhs_NumericalFunction, state_NDSolveState, {opts___?OptionQ}] := initialization

The form of the initialization definition for an algorithm.
As a system symbol, InitializeMethod is protected, so to attach rules to it, you would need to unprotect it first. It is better to keep the rules associated with your method. A tidy way to do this is to make the initialization definition using TagSet, as shown earlier.

As an example, suppose you want to redefine the Runge-Kutta method shown earlier so that, instead of using the exact coefficients 2, 1/2, and 1/6, numerical values with the appropriate precision are used to make the computation slightly faster.

This defines a method function to take a single step toward integrating an ODE using the classical fourth-order Runge-Kutta method, using saved numerical values for the required coefficients.

In[15]:=
CRK4[{two_, half_, sixth_}]["Step"[rhs_, t_, h_, y_, yp_]] :=
   Module[{k0, k1, k2, k3},
     k0 = h yp;
     k1 = h rhs[t + half h, y + half k0];
     k2 = h rhs[t + half h, y + half k1];
     k3 = h rhs[t + h, y + k2];
     {h, sixth (k0 + two (k1 + k2) + k3)}];
In[16]:=
CRK4 /: NDSolve`InitializeMethod[CRK4, stepmode_, rhs_, state_, opts___] :=
   Module[{prec},
     prec = state["WorkingPrecision"];
     CRK4[N[{2, 1/2, 1/6}, prec]]]
Saving the numerical values of the coefficients gives between 5 and 10 percent speedup for a longer integration using "DoubleStep".
Adams Methods In terms of the NDSolve framework, it is not really any more difficult to write an algorithm that controls steps automatically. However, the requirements for estimating error and determining an appropriate step size usually make this much more difficult from both the mathematical and programming standpoints. The following example is a partial adaptation of the Fortran DEABM code of Shampine and Watts to fit into the NDSolve framework. The algorithm adaptively chooses both step size and order based on criteria described in [SG75].
The first stage is to define the coefficients. The integration method uses variable step-size coefficients. Given a sequence of step sizes {h_{n-k+1}, h_{n-k+2}, …, h_n}, where h_n is the current step to take, the coefficients g_j(n) for the method with Adams-Bashforth predictor of order k and Adams-Moulton corrector of order k + 1 are such that

y_{n+1} = p_{n+1} + h_n g_k(n) Φ_k(n+1)
p_{n+1} = y_n + h_n Σ_{j=0}^{k-1} g_j(n) Φ*_j(n),

where the Φ_j(n) are scaled divided differences,

Φ_j(n) = Π_{i=0}^{j-1} (t_n - t_{n-i-1}) · f[t_n, …, t_{n-j}],

with f[t_n, …, t_{n-j}] denoting the divided difference of f, and

Φ*_j(n) = β_j(n) Φ_j(n), with β_j(n) = Π_{i=0}^{j-1} (t_{n+1} - t_{n-i}) / (t_n - t_{n-i-1}).

This defines a function that computes the coefficients β_j(n) and g_j(n), along with the σ_j(n) used in error estimation. The formulas are from [HNW93] and use essentially the same notation.

In[17]:=
AdamsBMCoefficients[hlist_List] :=
   Module[{k, h, Dh, brat, b, a, s, c, g},
     k = Length[hlist];
     h = Last[hlist];
     Dh = Drop[FoldList[Plus, 0, Reverse[hlist]], 1];
     brat = Drop[Dh, -1]/(Drop[Dh, 1] - h);
     b = FoldList[Times, 1, brat];
     a = h/Dh;
     s = FoldList[Times, 1, a Range[Length[a]]];
     c[0] = Table[1/q, {q, 1, k + 1}];
     Do[c[j] = Drop[c[j - 1], -1] - Drop[c[j - 1], 1] h/Dh[[j]], {j, 1, k}];
     g = Table[First[c[j]], {j, 0, k}];
     {b, g, s}];
hlist is the list of step sizes {h_{n-k}, h_{n-k+1}, …, h_n} from past steps. The constant step size Adams coefficients can be computed once, and much more easily. Since the constant step size Adams-Moulton coefficients are used in error prediction for changing the method order, it makes sense to define them once, with rules that save the values.

This defines a function that computes and saves the values of the constant step size Adams-Moulton coefficients.

In[18]:=
Moulton@0D = 1; Moulton@m_D := Moulton@mD = - Sum@Moulton@kD ê H1 + m - kL, 8k, 0, m - 1
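The recurrence translates directly into other languages. Here is a rough Python sketch (illustrative, not part of the tutorial) that uses exact rational arithmetic and memoization in place of Mathematica's `Moulton[m] = …` caching idiom:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def moulton(m):
    # Constant step size Adams-Moulton coefficients via the same
    # recurrence as the Mathematica definition above.
    if m == 0:
        return Fraction(1)
    return -sum(moulton(k) / (1 + m - k) for k in range(m))

print([moulton(m) for m in range(5)])
```

The first few values are 1, -1/2, -1/12, -1/24, -19/720, the classical Adams-Moulton error coefficients.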
The next stage is to set up a data structure that will keep the necessary information between steps and define how that data should be initialized. The key information that needs to be saved is the list of past step sizes, hlist, and the divided differences, Φ. Since the method does the error estimation, it needs to get the correct norm to use from the NDSolve`StateData object. Some other data such as precision is saved for optimization and convenience.

This defines a rule for initializing the AdamsBM method from NDSolve.

In[20]:= AdamsBM /: NDSolve`InitializeMethod[AdamsBM, {Automatic, DenseQ_}, rhs_, ndstate_, opts___] :=
  Module[{prec, norm, hlist, Φ, mord},
    mord = MaxDifferenceOrder /. Flatten[{opts, Options[AdamsBM]}];
    If[mord =!= ∞ && ! (IntegerQ[mord] && mord > 0), Return[$Failed]];
    prec = ndstate@"WorkingPrecision";
    norm = ndstate@"Norm";
    hlist = {};
    Φ = {ndstate@"SolutionDerivativeVector"["Active"]};
    AdamsBM[{{hlist, Φ, N[0, prec] Φ[[1]]}, {norm, prec, mord, 0, True}}]]
hlist is initialized to {} since at initialization time there have been no steps. Φ is initialized to the derivative of the solution vector at the initial condition, since the 0th divided difference is just the function value. Note that Φ is a matrix. The third element, which is initialized to a zero vector, is used for determining the best order for the next step; it is effectively an additional divided difference. The use of the other parts of the data is clarified in the definition of the stepping function. The initialization is also set up to get the value of an option that can be used to limit the maximum order of the method. In the code DEABM, this is limited to 12, because that is a practical limit for machine-precision calculations. However, in Mathematica, computations can be done in higher precision, where higher-order methods may be of significant advantage, so there is no good reason for an absolute limit of this sort. Thus, the default of the option is set to ∞.
This sets the default for the MaxDifferenceOrder option of the AdamsBM method.

In[21]:= Options[AdamsBM] = {MaxDifferenceOrder -> ∞};
Before proceeding to the more complicated “Step“ method functions, it makes sense to define the simple “StepMode“ and “DifferenceOrder“ method functions. This defines the step mode for the AdamsBM method to always be Automatic. This means that it cannot be called from step controller methods that request fixed step sizes of possibly varying sizes. In[22]:=
AdamsBM@___D@“StepMode“D = Automatic; This defines the difference order for the AdamsBM method. This varies with the number of past values saved.
In[23]:=
AdamsBM@data_D@“DifferenceOrder“D := Length@data@@1, 2DDD;
Finally, here is the definition of the "Step" function. The actual process of taking a step is only a few lines. The rest of the code handles the automatic step size and order selection, following very closely the DEABM code of Shampine and Watts.

This defines the "Step" method function for AdamsBM that returns step data according to the templates described earlier.

In[24]:= AdamsBM[data_]["Step"[rhs_, t_, h_, y_, yp_]] :=
  Module[{prec, norm, hlist, Φ, Φ1, ns, starting, k, zero, g, β, σ, p, f, Δy, normh, ev, err, PE, knew, hnew, temp},
    {{hlist, Φ, Φ1}, {norm, prec, mord, ns, starting}} = data;
    (* Norm scaling will be based on current solution y. *)
    normh = (Abs[h] temp[#1, y] &) /. {temp -> norm};
    k = Length[Φ];
    zero = N[0, prec];
    (* Keep track of number of steps at this stepsize h. *)
    If[Length[hlist] > 0 && Last[hlist] == h, ns++, ns = 1];
    hlist = Join[hlist, {h}];
    (* … predictor step and scaled error estimate err … *)
    If[err > 1,
      (* Rejected step: reduce h by half, make sure starting
         mode flag is unset, and reset Φ to previous values *)
      hnew = h/2; Δy = $Failed; f = None; starting = False; Φ = data[[1, 2]],
      (* Successful step: CE: Correct and evaluate *)
      Δy = h (p + ev Last[g]);
      f = rhs[h + t, y + Δy];
      temp = f - Last[Φ];
      (* Update the divided differences *)
      Φ = (temp + #1 &) /@ Φ;
      (* Determine best order and step size for the next step *)
      Φ1 = temp - Φ1;
      knew = ChooseNextOrder[starting, PE, k, knew, Φ1, normh, σ, mord, ns];
      hnew = ChooseNextStep[PE, knew, h]];
    (* Truncate hlist and Φ to the appropriate length for the chosen order. *)
    hlist = Take[hlist, 1 - knew];
    If[Length[Φ] > knew, Φ1 = Φ[[Length[Φ] - knew]]; Φ = Take[Φ, -knew]];
    (* Return step data along with updated method data *)
    {hnew, Δy, f, AdamsBM[{{hlist, Φ, Φ1}, {norm, prec, mord, ns, starting}}]}]
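For orientation, the predict-evaluate-correct-evaluate (PECE) cycle that the "Step" function performs adaptively can be sketched at fixed order and step size. The following Python fragment (an illustrative stand-in, not the tutorial's code) pairs the order-2 Adams-Bashforth predictor with the Adams-Moulton corrector:

```python
import math

def abm2_solve(f, t0, y0, h, steps):
    """Fixed-step order-2 Adams-Bashforth-Moulton PECE loop."""
    # bootstrap the multistep history with one Heun (RK2) step
    f0 = f(t0, y0)
    y1 = y0 + h / 2 * (f0 + f(t0 + h, y0 + h * f0))
    t, y, f_prev, f_curr = t0 + h, y1, f0, f(t0 + h, y1)
    for _ in range(steps - 1):
        p = y + h / 2 * (3 * f_curr - f_prev)   # P: AB2 predictor
        fp = f(t + h, p)                        # E: evaluate at prediction
        y = y + h / 2 * (f_curr + fp)           # C: AM2 corrector
        t += h
        f_prev, f_curr = f_curr, f(t, y)        # E: evaluate at new solution
    return t, y

# y' = y, y(0) = 1: the result should approach e at t = 1
t_end, y_end = abm2_solve(lambda t, y: y, 0.0, 1.0, 1e-3, 1000)
err = abs(y_end - math.e)
```

The full method in the tutorial does the same cycle but with variable step size and order, and with divided differences in place of the stored derivative values.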
There are a few deviations from DEABM in the code here. The most significant is that coefficients are recomputed at each step, whereas DEABM computes only those that need updating. This modification was made to keep the code simpler, but it does incur a performance loss, particularly for small to moderately sized systems. A second significant modification is that much of the code for limiting rejected steps is left to NDSolve, so there are no checks in this code to see if the step size is too small or the tolerances are too large. The stiffness detection heuristic has also been left out. The order and step-size determination code has been modularized into separate functions.

This defines a function that constructs error estimates PE[j] for j = k - 2, k - 1, and k and determines whether the order should be lowered.

In[25]:= OrderCheck[PE_, k_, Φ_, ev_, normh_, σ_] := Module[{knew = k},
    PE[k] = Abs[σ[[k + 1]] Moulton[k] normh[ev]];
    If[k > 1,
      PE[k - 1] = Abs[σ[[k]] Moulton[k - 1] normh[ev + Φ[[2]]]];
      If[k > 2,
        PE[k - 2] = Abs[σ[[k - 1]] Moulton[k - 2] normh[ev + Φ[[3]]]];
        If[Max[PE[k - 1], PE[k - 2]] < PE[k], knew = k - 1],
        If[PE[k - 1] < PE[k]/2, knew = k - 1]]];
    knew];

This defines a function that determines the best order to use after a successful step.
In[26]:= SetAttributes[ChooseNextOrder, HoldFirst];
    ChooseNextOrder[starting_, PE_, k_, knw_, Φ1_, normh_, σ_, mord_, ns_] :=
      Module[{knew = knw},
        starting = starting && knew >= k && k < mord;
        If[starting,
          knew = k + 1; PE[k + 1] = 0,
          If[knew >= k && ns >= k + 1,
            PE[k + 1] = Abs[Moulton[k + 1] normh[Φ1]];
            If[k > 1,
              If[PE[k - 1] <= Min[PE[k], PE[k + 1]],
                knew = k - 1,
                If[PE[k + 1] < PE[k] && k < mord, knew = k + 1]],
              If[PE[k + 1] < PE[k]/2, knew = k + 1]]]];
        knew];

This defines a function that determines the best step size to use after a successful step of size h.

In[28]:=
ChooseNextStep[PE_, k_, h_] :=
  If[PE[k] < 2^-(k + 2),
    2 h,
    If[PE[k] < 1/2,
      h,
      h Max[1/2, Min[9/10, (1/(2 PE[k]))^(1/(k + 1))]]]];
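The same selection logic reads as follows in Python (an illustrative transcription, with the error estimate PE_k passed in directly as a number):

```python
def choose_next_step(pe_k, k, h):
    # Mirror of ChooseNextStep: double the step when the scaled error
    # estimate is far below 1, keep it when comfortably below 1/2,
    # otherwise rescale by (1/(2 PE_k))^(1/(k+1)), clamped to [1/2, 9/10].
    if pe_k < 2.0 ** -(k + 2):
        return 2 * h
    if pe_k < 0.5:
        return h
    return h * max(0.5, min(0.9, (1.0 / (2.0 * pe_k)) ** (1.0 / (k + 1))))
```

The clamp keeps the controller from over-reacting to a single large or small error estimate, which is the usual safety-factor design in step-size controllers.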
Once these definitions are entered, you can access the method in NDSolve by simply using Method -> AdamsBM.

This solves the harmonic oscillator equation with the Adams method defined earlier.

In[29]:= asol = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0}, x, {t, 0, 2 π}, Method -> AdamsBM]

Out[29]= {{x -> InterpolatingFunction[{{0., 6.28319}}, <>]}}
This shows the error of the computed solution. It is apparent that the error is kept within reasonable bounds. Note that after the first few points, the step size has been increased.

In[30]:= ploterror[asol]

Out[30]= (plot of the error over 0 ≤ t ≤ 2π; the error stays on the order of ±2×10⁻⁸)
Where this method has the potential to outperform some of the built-in methods is with high-precision computations with strict tolerances. This is because the built-in methods are adapted from codes with the restriction to order 12.

In[31]:= LorenzEquations = {
      {x'[t] == -3 (x[t] - y[t]), x[0] == 0},
      {y'[t] == -x[t] z[t] + 53/2 x[t] - y[t], y[0] == 1},
      {z'[t] == x[t] y[t] - z[t], z[0] == 0}};
    vars = {x[t], y[t], z[t]};

A lot of time is required for coefficient computation.

In[33]:= Timing[NDSolve[LorenzEquations, vars, {t, 0, 20}, Method -> AdamsBM]]

Out[33]= {7.04 Second, {{x[t] -> InterpolatingFunction[{{0., 20.}}, <>][t],
    y[t] -> InterpolatingFunction[{{0., 20.}}, <>][t],
    z[t] -> InterpolatingFunction[{{0., 20.}}, <>][t]}}}

This is not using as high an order as might be expected. In any case, about half the time is spent generating coefficients, so to make it better, you need to figure out the coefficient update.

In[34]:= Timing[NDSolve[LorenzEquations, vars, {t, 0, 20}, Method -> AdamsBM, WorkingPrecision -> 32]]

Out[34]= {11.109, {{x[t] -> InterpolatingFunction[{{0, 20.000000000000000000000000000000}}, <>][t],
    y[t] -> InterpolatingFunction[{{0, 20.000000000000000000000000000000}}, <>][t],
    z[t] -> InterpolatingFunction[{{0, 20.000000000000000000000000000000}}, <>][t]}}}
Numerical Solution of Partial Differential Equations

The Numerical Method of Lines

Introduction

The numerical method of lines is a technique for solving partial differential equations by discretizing in all but one dimension, and then integrating the semi-discrete problem as a system of ODEs or DAEs. A significant advantage of the method is that it allows the solution to take advantage of the sophisticated general-purpose methods and software that have been developed for numerically integrating ODEs and DAEs. For the PDEs to which the method of lines is applicable, the method typically proves to be quite efficient.

It is necessary that the PDE problem be well-posed as an initial value (Cauchy) problem in at least one dimension, since the ODE and DAE integrators used are initial value problem solvers. This rules out purely elliptic equations such as Laplace's equation, but leaves a large class of evolution equations that can be solved quite efficiently.

A simple example illustrates better than mere words the fundamental idea of the method. Consider the following problem (a simple model for seasonal variation of heat in soil):

$$u_t = \tfrac{1}{8} u_{xx}, \quad u(0,t) = \sin(2 \pi t), \quad u_x(1,t) = 0, \quad u(x,0) = 0 \qquad (1)$$

This is a candidate for the method of lines since you have the initial value $u(x,0) = 0$.

Problem (1) will be discretized with respect to the variable $x$ using second-order finite differences, in particular using the approximation

$$u_{xx}(x,t) \approx \frac{u(x+h,t) - 2\,u(x,t) + u(x-h,t)}{h^2} \qquad (2)$$
Even though finite difference discretizations are the most common, there is certainly no requirement that discretizations for the method of lines be done with finite differences; finite volume or even finite element discretizations can also be used.
To use the discretization shown, choose a uniform grid $x_i$, $0 \le i \le n$, with spacing $h = 1/n$ such that $x_i = i\,h$. Let $u_i[t]$ be the value of $u(x_i, t)$. For the purposes of illustrating the problem setup, a particular value of $n$ is chosen.

This defines a particular value of n and the corresponding value of h used in the subsequent commands. This can be changed to make a finer or coarser spatial approximation.

In[1]:= n = 10; hn = 1/n;

This defines the vector of uᵢ.

In[2]:= U[t_] = Table[uᵢ[t], {i, 0, n}]

Out[2]= {u₀[t], u₁[t], u₂[t], u₃[t], u₄[t], u₅[t], u₆[t], u₇[t], u₈[t], u₉[t], u₁₀[t]}
For $1 \le i \le 9$, you can use the centered difference formula (2) to obtain a system of ODEs. However, before doing this, it is useful to incorporate the boundary conditions first. The Dirichlet boundary condition at $x = 0$ can easily be handled by simply defining $u_0$ as a function of $t$. An alternative option is to differentiate the boundary condition with respect to time and use the corresponding differential equation. In this example, the latter method will be used. The Neumann boundary condition at $x = 1$ is a little more difficult. With second-order differences, one way to handle it is with reflection: imagine that you are solving the problem on the interval $0 \le x \le 2$ with the same boundary conditions at $x = 0$ and $x = 2$. Since the initial condition and boundary conditions are symmetric with respect to $x = 1$, the solution should be symmetric with respect to $x = 1$ for all time, and so symmetry is equivalent to the Neumann boundary condition at $x = 1$. Thus, $u(1+h,t) = u(1-h,t)$, so $u_{n+1}[t] = u_{n-1}[t]$.
This uses ListCorrelate to apply the difference formula. The padding {uₙ₋₁[t]} implements the Neumann boundary condition.

In[3]:= eqns = Thread[D[U[t], t] == Join[{D[Sin[2 π t], t]},
      ListCorrelate[{1, -2, 1}/hn^2, U[t], {1, 2}, {uₙ₋₁[t]}]/8]]

Out[3]= {u₀'[t] == 2 π Cos[2 π t],
  u₁'[t] == 1/8 (100 u₀[t] - 200 u₁[t] + 100 u₂[t]),
  u₂'[t] == 1/8 (100 u₁[t] - 200 u₂[t] + 100 u₃[t]),
  u₃'[t] == 1/8 (100 u₂[t] - 200 u₃[t] + 100 u₄[t]),
  u₄'[t] == 1/8 (100 u₃[t] - 200 u₄[t] + 100 u₅[t]),
  u₅'[t] == 1/8 (100 u₄[t] - 200 u₅[t] + 100 u₆[t]),
  u₆'[t] == 1/8 (100 u₅[t] - 200 u₆[t] + 100 u₇[t]),
  u₇'[t] == 1/8 (100 u₆[t] - 200 u₇[t] + 100 u₈[t]),
  u₈'[t] == 1/8 (100 u₇[t] - 200 u₈[t] + 100 u₉[t]),
  u₉'[t] == 1/8 (100 u₈[t] - 200 u₉[t] + 100 u₁₀[t]),
  u₁₀'[t] == 1/8 (200 u₉[t] - 200 u₁₀[t])}
This sets up the zero initial condition.

In[4]:= initc = Thread[U[0] == Table[0, {n + 1}]]

Out[4]= {u₀[0] == 0, u₁[0] == 0, u₂[0] == 0, u₃[0] == 0, u₄[0] == 0,
  u₅[0] == 0, u₆[0] == 0, u₇[0] == 0, u₈[0] == 0, u₉[0] == 0, u₁₀[0] == 0}
Now the PDE has been partially discretized into an ODE initial value problem that can be solved by the ODE integrators in NDSolve.

This solves the ODE initial value problem.

In[5]:= lines = NDSolve[{eqns, initc}, U[t], {t, 0, 4}]

Out[5]= {{u₀[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₁[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₂[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₃[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₄[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₅[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₆[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₇[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₈[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₉[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
  u₁₀[t] -> InterpolatingFunction[{{0., 4.}}, <>][t]}}
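The same semidiscretization can be cross-checked outside Mathematica. The following Python sketch (an illustration, not part of the tutorial) rebuilds the system with NumPy and integrates it with a hand-rolled classical RK4 loop standing in for NDSolve's adaptive integrator; n, h, and the reflected Neumann condition mirror the commands above, while the fixed step dt = 0.005 is an assumption chosen inside the explicit stability limit.

```python
import numpy as np

n = 10
h = 1.0 / n

def rhs(t, u):
    du = np.empty_like(u)
    du[0] = 2 * np.pi * np.cos(2 * np.pi * t)          # differentiated Dirichlet BC
    du[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / (8 * h**2)
    du[-1] = 2 * (u[-2] - u[-1]) / (8 * h**2)          # Neumann BC via reflection
    return du

# classical RK4 time stepping of the semidiscrete system from t = 0 to t = 4
u = np.zeros(n + 1)
t, dt = 0.0, 0.005
while t < 4.0 - 1e-12:
    k1 = rhs(t, u)
    k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
```

At t = 4 the boundary value u₀ returns to sin(8π) = 0, which gives a quick sanity check on the integration.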
This shows the solutions u(xᵢ, t) plotted as a function of x and t.

In[6]:= ParametricPlot3D[Evaluate[Table[{i hn, t, First[uᵢ[t] /. lines]}, {i, 0, n}]], {t, 0, 4}]

Out[6]= (surface plot showing the grid solutions uᵢ[t] as lines in (x, t, u) space, with u ranging over about -1 to 1)
The plot indicates why this technique is called the numerical "method of lines". The solution in between lines can be found by interpolation. When NDSolve computes the solution for the PDE, the result is a two-dimensional InterpolatingFunction.

This uses NDSolve to compute the solution of the heat equation (1) directly.

In[7]:= solution = NDSolve[{D[u[x, t], t] == 1/8 D[u[x, t], x, x], u[x, 0] == 0,
      u[0, t] == Sin[2 π t], (D[u[x, t], x] /. x -> 1) == 0},
    u, {x, 0, 1}, {t, 0, 4}]

Out[7]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 4.}}, <>]}}
This creates a surface plot of the solution.

In[8]:= Plot3D[Evaluate[First[u[x, t] /. solution]], {x, 0, 1}, {t, 0, 4},
    PlotPoints -> {14, 36}, PlotRange -> All]

Out[8]= (surface plot of u(x, t) over 0 ≤ x ≤ 1, 0 ≤ t ≤ 4)
The setting n = 10 used did not give a very accurate solution. When NDSolve computes the solution, it uses spatial error estimates on the initial condition to determine what the grid spacing should be. The error in the temporal (or at least time-like) variable is handled by the adaptive ODE integrator.

In the example (1), the distinction between time and space was quite clear from the problem context. Even when the distinction is not explicit, this tutorial will refer to "spatial" and "temporal" variables. The "spatial" variables are those to which the discretization is done. The "temporal" variable is the one left in the ODE system to be integrated.

option name                       default value
TemporalVariable                  Automatic          what variable to keep derivatives with respect to in the derived ODE or DAE system
Method                            Automatic          what method to use for integrating the ODEs or DAEs
SpatialDiscretization             TensorProductGrid  what method to use for spatial discretization
DifferentiateBoundaryConditions   True               whether to differentiate the boundary conditions with respect to the temporal variable
ExpandFunctionSymbolically        False              whether to expand the effective function symbolically or not
DiscretizedMonitorVariables       False              whether to interpret dependent variables given in monitors like StepMonitor or in method options for methods like EventLocator and Projection as functions of the spatial variables or vectors representing the spatially discretized values

Options for NDSolve`MethodOfLines.
Use of some of these options requires further knowledge of how the method of lines works and will be explained in the sections that follow.

Currently, the only method implemented for spatial discretization is the TensorProductGrid method, which uses discretization methods for one spatial dimension and an outer tensor product to derive methods for multiple spatial dimensions on rectangular regions. TensorProductGrid has its own set of options that you can use to control the grid selection process. The following sections give sufficient background information so that you will be able to use these options if necessary.
Spatial Derivative Approximations

Finite Differences

The essence of the concept of finite differences is embodied in the standard definition of the derivative

$$f'(x_i) = \lim_{h \to 0} \frac{f(x_i + h) - f(x_i)}{h}$$

where, instead of passing to the limit as h approaches zero, the finite spacing to the next adjacent point, $x_{i+1} = x_i + h$, is used so that you get an approximation:

$$f'(x_i)_{\mathrm{approx}} = \frac{f(x_{i+1}) - f(x_i)}{h}$$

The difference formula can also be derived from Taylor's formula,

$$f(x_{i+1}) = f(x_i) + h\,f'(x_i) + \frac{h^2}{2} f''(\xi_i); \quad x_i < \xi_i < x_{i+1}$$

which is more useful since it provides an error estimate (assuming sufficient smoothness):

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_i)}{h} - \frac{h}{2} f''(\xi_i)$$

An important aspect of this formula is that $\xi_i$ must lie between $x_i$ and $x_{i+1}$, so that the error is local to the interval enclosing the sampling points. It is generally true for finite difference formulas that the error is local to the stencil, or set of sample points. Typically, for convergence and other analysis, the error is expressed in asymptotic form:

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_i)}{h} + O(h)$$

This formula is most commonly referred to as the first-order forward difference. The backward difference would use $x_{i-1}$.
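The asymptotic orders are easy to observe numerically. The short Python check below (illustrative, not part of the tutorial) halves h and watches the error shrink by the expected factor for the forward and centered differences:

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h              # first-order forward difference

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)    # second-order centered difference

x = 1.0
exact = math.cos(x)
# halving h should roughly halve the forward error (O(h))
# and quarter the centered error (O(h^2))
fwd_ratio = (abs(forward_diff(math.sin, x, 1e-2) - exact)
             / abs(forward_diff(math.sin, x, 5e-3) - exact))
cen_ratio = (abs(centered_diff(math.sin, x, 1e-2) - exact)
             / abs(centered_diff(math.sin, x, 5e-3) - exact))
```

The ratios come out very close to 2 and 4, confirming the O(h) and O(h²) error behavior derived above.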
Taylor's formula can easily be used to derive higher-order approximations. For example, subtracting

$$f(x_{i+1}) = f(x_i) + h\,f'(x_i) + \frac{h^2}{2} f''(x_i) + O(h^3)$$

from

$$f(x_{i-1}) = f(x_i) - h\,f'(x_i) + \frac{h^2}{2} f''(x_i) + O(h^3)$$

and solving for $f'(x_i)$ gives the second-order centered difference formula for the first derivative:

$$f'(x_i) = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} + O(h^2)$$

If the Taylor formulas shown are expanded out one order farther and added, and then combined with the formula just given, it is not difficult to derive a centered formula for the second derivative:

$$f''(x_i) = \frac{f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})}{h^2} + O(h^2)$$

Note that while having a uniform step size h between points makes it convenient to write out the formulas, it is certainly not a requirement. For example, the approximation to the second derivative is in general

$$f''(x_i) = \frac{2\,\bigl(f(x_{i+1})(x_{i-1} - x_i) + f(x_{i-1})(x_i - x_{i+1}) + f(x_i)(x_{i+1} - x_{i-1})\bigr)}{(x_{i-1} - x_i)(x_{i-1} - x_{i+1})(x_i - x_{i+1})} + O(h)$$
where h corresponds to the maximum local grid spacing. Note that the asymptotic order of the three-point formula has dropped to first order; that it was second order on a uniform grid is due to fortuitous cancellations.

In general, formulas for any given derivative with asymptotic error of any chosen order can be derived from the Taylor formulas as long as a sufficient number of sample points are used. However, this method becomes cumbersome and inefficient beyond the simple examples shown. An alternate formulation is based on polynomial interpolation: since the Taylor formulas are exact (no error term) for polynomials of sufficiently low order, so are the finite difference formulas. It is not difficult to show that the finite difference formulas are equivalent to the derivatives of interpolating polynomials. For example, a simple way of deriving the formula just shown for the second derivative is to interpolate a quadratic and find its second derivative (which is essentially just the leading coefficient).

This finds the three-point finite difference formula for the second derivative by differentiating the polynomial interpolating the three points (x_{i-1}, f(x_{i-1})), (x_i, f(x_i)), and (x_{i+1}, f(x_{i+1})).

In[9]:=
D[InterpolatingPolynomial[Table[{x_{i+k}, f[x_{i+k}]}, {k, -1, 1}], x], x, x]

Out[9]= 2 ((f[x_i] - f[x_{i-1}])/(x_i - x_{i-1}) - (f[x_{i+1}] - f[x_i])/(x_{i+1} - x_i))/(x_{i-1} - x_{i+1})
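The same derivation can be checked numerically. A small Python sketch (illustrative, using NumPy's polyfit in place of exact symbolic interpolation) recovers the three-point second-derivative weights by interpolating unit vectors:

```python
import numpy as np

def second_deriv_weights(xm, x0, xp):
    # The second derivative of the quadratic through three points is twice
    # its leading coefficient; interpolating the unit vectors extracts the
    # weight each function value carries in the formula.
    pts = np.array([xm, x0, xp])
    return np.array([2 * np.polyfit(pts, e, 2)[0] for e in np.eye(3)])

w_uniform = second_deriv_weights(-1.0, 0.0, 1.0)   # uniform grid, h = 1
w_nonuni = second_deriv_weights(-1.0, 0.0, 2.0)    # nonuniform three-point weights
```

On the uniform grid the weights are (1, -2, 1)/h², matching the centered formula; on any grid they sum to zero, since the second derivative of a constant vanishes.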
In this form of the formula, it is easy to see that it is effectively a difference of the forward and backward first-order derivative approximations. Sometimes it is advantageous to use finite differences in this way, particularly for terms with coefficients inside of derivatives, such as (a(x) uₓ)ₓ, which commonly appear in PDEs.

Another property made apparent by considering interpolation formulas is that the point at which you get the derivative approximation need not be on the grid. A common use of this is with staggered grids, where the derivative may be wanted at the midpoints between grid points.

This generates a fourth-order approximation for the first derivative on a uniform staggered grid, where the main grid points x_{i+k/2} are at x_i + h k/2, for odd k.

In[10]:= Simplify[D[InterpolatingPolynomial[Table[{x_i + k h/2, f[x_{i+k/2}]}, {k, -3, 3, 2}], x], x] /. x -> x_i]

Out[10]= (f[x_{-3/2+i}] - 27 f[x_{-1/2+i}] + 27 f[x_{1/2+i}] - f[x_{3/2+i}])/(24 h)

The fourth-order error coefficient for this formula is (3/640) h⁴ f⁽⁵⁾(x_i) versus (1/30) h⁴ f⁽⁵⁾(x_i) for the standard fourth-order formula derived next. Much of the reduced error can be attributed to the reduced stencil size.

This generates a fourth-order approximation for the first derivative at a point on a uniform grid.

In[11]:= Simplify[D[InterpolatingPolynomial[Table[{x_i + k h, f[x_{i+k}]}, {k, -2, 2, 1}], x], x] /. x -> x_i]

Out[11]= (f[x_{-2+i}] - 8 f[x_{-1+i}] + 8 f[x_{1+i}] - f[x_{2+i}])/(12 h)
In general, a finite difference formula for the m-th derivative using n points will be exact for functions that are polynomials of degree n - 1 and will have asymptotic order at least n - m. On uniform grids, you can expect higher asymptotic order, especially for centered differences.

Using efficient polynomial interpolation techniques is a reasonable way to generate coefficients, but B. Fornberg has developed a fast algorithm for finite difference weight generation [F92], [F98], which is substantially faster.

In [F98], Fornberg presents a one-line Mathematica formula for explicit finite differences.

This is the simple formula of Fornberg for generating weights on a uniform grid. Here it has been modified slightly by making it a function definition.

In[12]:= UFDWeights[m_, n_, s_] := CoefficientList[Normal[Series[x^s Log[x]^m, {x, 1, n}]/h^m], x]

Here m is the order of the derivative, n is the number of grid intervals enclosed in the stencil, and s is the number of grid intervals between the point at which the derivative is approximated and the leftmost edge of the stencil. There is no requirement that s be an integer; noninteger values simply lead to staggered grid approximations. Setting s to be n/2 always generates a centered formula.
This uses the Fornberg formula to generate the weights for a staggered fourth-order approximation to the first derivative. This is the same one computed earlier with InterpolatingPolynomial.

In[13]:= UFDWeights[1, 3, 3/2]

Out[13]= {1/(24 h), -9/(8 h), 9/(8 h), -1/(24 h)}
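The general (nonuniform-grid) weight computation behind finite difference approximations can also be sketched in Python. The following is a transcription of Fornberg's classic recursive algorithm (an illustration with names of our own choosing, not the tutorial's code):

```python
def fd_weights(z, x, m):
    """Fornberg's algorithm for finite difference weights on an arbitrary grid.
    Returns c with c[k][j] = weight of f(x[j]) in the approximation of the
    k-th derivative at the point z, for k = 0..m (k = 0 is interpolation)."""
    n = len(x) - 1
    c = [[0.0] * (n + 1) for _ in range(m + 1)]
    c1, c4 = 1.0, x[0] - z
    c[0][0] = 1.0
    for i in range(1, n + 1):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                # extend the stencil by the new point x[i]
                for k in range(mn, 0, -1):
                    c[k][i] = c1 * (k * c[k - 1][i - 1] - c5 * c[k][i - 1]) / c2
                c[0][i] = -c1 * c5 * c[0][i - 1] / c2
            # update the weights of the existing points
            for k in range(mn, 0, -1):
                c[k][j] = (c4 * c[k][j] - k * c[k - 1][j]) / c3
            c[0][j] = c4 * c[0][j] / c3
        c1 = c2
    return c

# fourth-order centered weights for the first derivative on a uniform grid
w = fd_weights(0.0, [-2.0, -1.0, 0.0, 1.0, 2.0], 1)[1]
```

On the five-point uniform stencil this reproduces the weights (1, -8, 0, 8, -1)/12 of the standard fourth-order formula, and on a staggered grid it reproduces the UFDWeights result above.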
A table of some commonly used finite difference formulas follows for reference.

formula                                                                                                              error term
f'(x_i) ≈ (f(x_{i-2}) - 4 f(x_{i-1}) + 3 f(x_i))/(2h)                                                                (1/3) h² f⁽³⁾
f'(x_i) ≈ (f(x_{i+1}) - f(x_{i-1}))/(2h)                                                                             (1/6) h² f⁽³⁾
f'(x_i) ≈ (-3 f(x_i) + 4 f(x_{i+1}) - f(x_{i+2}))/(2h)                                                               (1/3) h² f⁽³⁾
f'(x_i) ≈ (3 f(x_{i-4}) - 16 f(x_{i-3}) + 36 f(x_{i-2}) - 48 f(x_{i-1}) + 25 f(x_i))/(12h)                           (1/5) h⁴ f⁽⁵⁾
f'(x_i) ≈ (-f(x_{i-3}) + 6 f(x_{i-2}) - 18 f(x_{i-1}) + 10 f(x_i) + 3 f(x_{i+1}))/(12h)                              (1/20) h⁴ f⁽⁵⁾
f'(x_i) ≈ (f(x_{i-2}) - 8 f(x_{i-1}) + 8 f(x_{i+1}) - f(x_{i+2}))/(12h)                                              (1/30) h⁴ f⁽⁵⁾
f'(x_i) ≈ (-3 f(x_{i-1}) - 10 f(x_i) + 18 f(x_{i+1}) - 6 f(x_{i+2}) + f(x_{i+3}))/(12h)                              (1/20) h⁴ f⁽⁵⁾
f'(x_i) ≈ (-25 f(x_i) + 48 f(x_{i+1}) - 36 f(x_{i+2}) + 16 f(x_{i+3}) - 3 f(x_{i+4}))/(12h)                          (1/5) h⁴ f⁽⁵⁾
f'(x_i) ≈ (10 f(x_{i-6}) - 72 f(x_{i-5}) + 225 f(x_{i-4}) - 400 f(x_{i-3}) + 450 f(x_{i-2}) - 360 f(x_{i-1}) + 147 f(x_i))/(60h)     (1/7) h⁶ f⁽⁷⁾
f'(x_i) ≈ (-2 f(x_{i-5}) + 15 f(x_{i-4}) - 50 f(x_{i-3}) + 100 f(x_{i-2}) - 150 f(x_{i-1}) + 77 f(x_i) + 10 f(x_{i+1}))/(60h)        (1/42) h⁶ f⁽⁷⁾
f'(x_i) ≈ (f(x_{i-4}) - 8 f(x_{i-3}) + 30 f(x_{i-2}) - 80 f(x_{i-1}) + 35 f(x_i) + 24 f(x_{i+1}) - 2 f(x_{i+2}))/(60h)               (1/105) h⁶ f⁽⁷⁾
f'(x_i) ≈ (-f(x_{i-3}) + 9 f(x_{i-2}) - 45 f(x_{i-1}) + 45 f(x_{i+1}) - 9 f(x_{i+2}) + f(x_{i+3}))/(60h)                             (1/140) h⁶ f⁽⁷⁾
f'(x_i) ≈ (2 f(x_{i-2}) - 24 f(x_{i-1}) - 35 f(x_i) + 80 f(x_{i+1}) - 30 f(x_{i+2}) + 8 f(x_{i+3}) - f(x_{i+4}))/(60h)               (1/105) h⁶ f⁽⁷⁾
f'(x_i) ≈ (-10 f(x_{i-1}) - 77 f(x_i) + 150 f(x_{i+1}) - 100 f(x_{i+2}) + 50 f(x_{i+3}) - 15 f(x_{i+4}) + 2 f(x_{i+5}))/(60h)        (1/42) h⁶ f⁽⁷⁾
f'(x_i) ≈ (-147 f(x_i) + 360 f(x_{i+1}) - 450 f(x_{i+2}) + 400 f(x_{i+3}) - 225 f(x_{i+4}) + 72 f(x_{i+5}) - 10 f(x_{i+6}))/(60h)    (1/7) h⁶ f⁽⁷⁾

Finite difference formulas on uniform grids for the first derivative.
formula                                                                                                              error term
f''(x_i) ≈ (-f(x_{i-3}) + 4 f(x_{i-2}) - 5 f(x_{i-1}) + 2 f(x_i))/h²                                                 (11/12) h² f⁽⁴⁾
f''(x_i) ≈ (f(x_{i-1}) - 2 f(x_i) + f(x_{i+1}))/h²                                                                   (1/12) h² f⁽⁴⁾
f''(x_i) ≈ (2 f(x_i) - 5 f(x_{i+1}) + 4 f(x_{i+2}) - f(x_{i+3}))/h²                                                  (11/12) h² f⁽⁴⁾
f''(x_i) ≈ (-10 f(x_{i-5}) + 61 f(x_{i-4}) - 156 f(x_{i-3}) + 214 f(x_{i-2}) - 154 f(x_{i-1}) + 45 f(x_i))/(12h²)    (137/180) h⁴ f⁽⁶⁾
f''(x_i) ≈ (f(x_{i-4}) - 6 f(x_{i-3}) + 14 f(x_{i-2}) - 4 f(x_{i-1}) - 15 f(x_i) + 10 f(x_{i+1}))/(12h²)             (13/180) h⁴ f⁽⁶⁾
f''(x_i) ≈ (-f(x_{i-2}) + 16 f(x_{i-1}) - 30 f(x_i) + 16 f(x_{i+1}) - f(x_{i+2}))/(12h²)                             (1/90) h⁴ f⁽⁶⁾
f''(x_i) ≈ (10 f(x_{i-1}) - 15 f(x_i) - 4 f(x_{i+1}) + 14 f(x_{i+2}) - 6 f(x_{i+3}) + f(x_{i+4}))/(12h²)             (13/180) h⁴ f⁽⁶⁾
f''(x_i) ≈ (45 f(x_i) - 154 f(x_{i+1}) + 214 f(x_{i+2}) - 156 f(x_{i+3}) + 61 f(x_{i+4}) - 10 f(x_{i+5}))/(12h²)     (137/180) h⁴ f⁽⁶⁾
f''(x_i) ≈ (-126 f(x_{i-7}) + 1019 f(x_{i-6}) - 3618 f(x_{i-5}) + 7380 f(x_{i-4}) - 9490 f(x_{i-3}) + 7911 f(x_{i-2}) - 4014 f(x_{i-1}) + 938 f(x_i))/(180h²)     (363/560) h⁶ f⁽⁸⁾
f''(x_i) ≈ (11 f(x_{i-6}) - 90 f(x_{i-5}) + 324 f(x_{i-4}) - 670 f(x_{i-3}) + 855 f(x_{i-2}) - 486 f(x_{i-1}) - 70 f(x_i) + 126 f(x_{i+1}))/(180h²)               (29/560) h⁶ f⁽⁸⁾
f''(x_i) ≈ (-2 f(x_{i-5}) + 16 f(x_{i-4}) - 54 f(x_{i-3}) + 85 f(x_{i-2}) + 130 f(x_{i-1}) - 378 f(x_i) + 214 f(x_{i+1}) - 11 f(x_{i+2}))/(180h²)                 (47/5040) h⁶ f⁽⁸⁾
f''(x_i) ≈ (2 f(x_{i-3}) - 27 f(x_{i-2}) + 270 f(x_{i-1}) - 490 f(x_i) + 270 f(x_{i+1}) - 27 f(x_{i+2}) + 2 f(x_{i+3}))/(180h²)                                   (1/560) h⁶ f⁽⁸⁾
f''(x_i) ≈ (-11 f(x_{i-2}) + 214 f(x_{i-1}) - 378 f(x_i) + 130 f(x_{i+1}) + 85 f(x_{i+2}) - 54 f(x_{i+3}) + 16 f(x_{i+4}) - 2 f(x_{i+5}))/(180h²)                 (47/5040) h⁶ f⁽⁸⁾
f''(x_i) ≈ (126 f(x_{i-1}) - 70 f(x_i) - 486 f(x_{i+1}) + 855 f(x_{i+2}) - 670 f(x_{i+3}) + 324 f(x_{i+4}) - 90 f(x_{i+5}) + 11 f(x_{i+6}))/(180h²)               (29/560) h⁶ f⁽⁸⁾
f''(x_i) ≈ (938 f(x_i) - 4014 f(x_{i+1}) + 7911 f(x_{i+2}) - 9490 f(x_{i+3}) + 7380 f(x_{i+4}) - 3618 f(x_{i+5}) + 1019 f(x_{i+6}) - 126 f(x_{i+7}))/(180h²)      (363/560) h⁶ f⁽⁸⁾

Finite difference formulas on uniform grids for the second derivative.
One thing to notice from this table is that the farther the formulas get from centered, the larger the error term coefficient, sometimes by factors of hundreds. For this reason, where one-sided derivative formulas are required (such as at boundaries), formulas of higher order are sometimes used to offset the extra error.
NDSolve`FiniteDifferenceDerivative

Fornberg [F92], [F98] also gives an algorithm that, though not quite so elegant and simple, is more general and, in particular, is applicable to nonuniform grids. It is not difficult to program in Mathematica, but to make it as efficient as possible, a new kernel function has been provided as a simpler interface (along with some additional features).
NDSolve`FiniteDifferenceDerivative[Derivative[m], grid, values]
    approximate the mth-order derivative for the function that takes on values on the grid

NDSolve`FiniteDifferenceDerivative[Derivative[m1, m2, …, mn], {grid1, grid2, …, gridn}, values]
    approximate the partial derivative of order (m1, m2, …, mn) for the function of n variables that takes on values on the tensor product grid defined by the outer product of (grid1, grid2, …, gridn)

NDSolve`FiniteDifferenceDerivative[Derivative[m1, m2, …, mn], {grid1, grid2, …, gridn}]
    compute an NDSolve`FiniteDifferenceDerivativeFunction, which can be repeatedly applied to values on the grid

Finding finite difference approximations to derivatives.
This defines a uniform grid with points spaced apart by a symbolic distance h.

In[14]:= ugrid = h Range[0, 8]

Out[14]= {0, h, 2 h, 3 h, 4 h, 5 h, 6 h, 7 h, 8 h}

This gives the first derivative formulas on the grid for a symbolic function f.

In[15]:= NDSolve`FiniteDifferenceDerivative[Derivative[1], ugrid, Map[f, ugrid]]

Out[15]= {-(25 f[0])/(12 h) + (4 f[h])/h - (3 f[2 h])/h + (4 f[3 h])/(3 h) - f[4 h]/(4 h),
  -f[0]/(4 h) - (5 f[h])/(6 h) + (3 f[2 h])/(2 h) - f[3 h]/(2 h) + f[4 h]/(12 h),
  f[0]/(12 h) - (2 f[h])/(3 h) + (2 f[3 h])/(3 h) - f[4 h]/(12 h),
  f[h]/(12 h) - (2 f[2 h])/(3 h) + (2 f[4 h])/(3 h) - f[5 h]/(12 h),
  f[2 h]/(12 h) - (2 f[3 h])/(3 h) + (2 f[5 h])/(3 h) - f[6 h]/(12 h),
  f[3 h]/(12 h) - (2 f[4 h])/(3 h) + (2 f[6 h])/(3 h) - f[7 h]/(12 h),
  f[4 h]/(12 h) - (2 f[5 h])/(3 h) + (2 f[7 h])/(3 h) - f[8 h]/(12 h),
  -f[4 h]/(12 h) + f[5 h]/(2 h) - (3 f[6 h])/(2 h) + (5 f[7 h])/(6 h) + f[8 h]/(4 h),
  f[4 h]/(4 h) - (4 f[5 h])/(3 h) + (3 f[6 h])/h - (4 f[7 h])/h + (25 f[8 h])/(12 h)}
The derivatives at the endpoints are computed using one-sided formulas. The formulas shown in the previous example are fourth-order accurate, which is the default. In general, when you use a symbolic grid and/or data, you get symbolic formulas. This is often useful for doing analysis on the methods; however, for actual numerical grids, it is usually faster and more accurate to give the numerical grid to NDSolve`FiniteDifferenceDerivative rather than using the symbolic formulas.
This defines a randomly spaced grid between 0 and 2π.

In[16]:= rgrid = Sort[Join[{0, 2 π}, Table[2 π RandomReal[], {10}]]]

Out[16]= {0, 0.94367, 1.005, 1.08873, 1.72052, 1.78776, 2.41574, 2.49119, 2.93248, 4.44508, 6.20621, 2 π}

This approximates the derivative of the sine function at each point on the grid.

In[17]:= NDSolve`FiniteDifferenceDerivative[Derivative[1], rgrid, Sin[rgrid]]

Out[17]= {0.989891, 0.586852, 0.536072, 0.463601, -0.149152, -0.215212, -0.747842, -0.795502, -0.97065, -0.247503, 0.99769, 0.999131}

This shows the error in the approximations.

In[18]:= % - Cos[rgrid]

Out[18]= {-0.0101091, 0.000031019, -0.0000173088, -0.0000130366, 9.03135×10⁻⁶, 0.0000521639,
  0.0000926836, 0.000336785, 0.00756426, 0.0166339, 0.000651758, -0.000869237}
In multiple dimensions, NDSolve`FiniteDifferenceDerivative works on tensor product grids, and you only need to specify the grid points for each dimension.

This defines grids xgrid and ygrid for the x and y directions, gives an approximation for the mixed x-y partial derivative of the Gaussian on the tensor product of xgrid and ygrid, and makes a surface plot of the error.

In[19]:= xgrid = Range[0, 8]; ygrid = Range[0, 10];
gaussian[x_, y_] = Exp[-((x - 4)^2 + (y - 5)^2)/10];
values = Outer[gaussian, xgrid, ygrid];
ListPlot3D[NDSolve`FiniteDifferenceDerivative[{1, 1}, {xgrid, ygrid}, values] -
  Outer[Function[{x, y}, Evaluate[D[gaussian[x, y], x, y]]], xgrid, ygrid]]

Out[23]= (surface plot of the error in the mixed partial derivative approximation, on the order of ±0.002)
Note that the values need to be given in a matrix corresponding to the outer product of the grid coordinates.
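As a minimal illustration of this layout (the symbols here are hypothetical, chosen only for the sketch), the values matrix has one row per x grid point and one column per y grid point:

```mathematica
(* sketch: values on a tensor product grid are stored as an outer product *)
xg = Range[0, 3];               (* 4 points in the x direction *)
yg = Range[0, 4];               (* 5 points in the y direction *)
vals = Outer[Times, xg, yg];    (* vals[[i, j]] is the value at {xg[[i]], yg[[j]]} *)
Dimensions[vals]                (* {4, 5}: rows index x, columns index y *)
```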
NDSolve`FiniteDifferenceDerivative does not compute weights for sums of derivatives. This means that for common operators like the Laplacian, you need to combine two approximations.

This makes a function that approximates the Laplacian operator on a tensor product grid.

In[24]:= lap[values_, {xgrid_, ygrid_}] :=
  NDSolve`FiniteDifferenceDerivative[{2, 0}, {xgrid, ygrid}, values] +
  NDSolve`FiniteDifferenceDerivative[{0, 2}, {xgrid, ygrid}, values]
This applies the function to the Gaussian values and makes a surface plot of the result.

In[25]:= ListPlot3D[lap[values, {xgrid, ygrid}]]
Out[25]= (surface plot of the Laplacian approximation, with values between about -0.4 and 0)
option name              default value
"DifferenceOrder"        4        asymptotic order of the error
PeriodicInterpolation    False    whether to consider the values as those of a periodic function with the period equal to the interval enclosed by the grid

Options for NDSolve`FiniteDifferenceDerivative.

This approximates the derivatives for the sine function on the random grid defined earlier, assuming that the function repeats periodically.

In[26]:= NDSolve`FiniteDifferenceDerivative[1, rgrid, Sin[rgrid], PeriodicInterpolation -> True]
Out[26]= {0.99895, 0.586765, 0.536072, 0.463601, -0.149152, -0.215212, -0.747842, -0.795502, -0.97065, -0.247503, 0.994585, 0.99895}
When using PeriodicInterpolation -> True, you can omit the last point in the values since it should always be the same as the first. This feature is useful when solving a PDE with periodic boundary conditions.
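A minimal sketch of this, using a uniform grid for clarity (and assuming the shorter form returns one value per supplied point): with PeriodicInterpolation -> True, passing Most[values] should reproduce the derivative values obtained from the full list, since the last value duplicates the first.

```mathematica
(* sketch: with periodic interpolation the repeated endpoint value may be omitted *)
ugrid = N[2 π Range[0, 10]/10];   (* first and last points coincide modulo the period *)
full  = NDSolve`FiniteDifferenceDerivative[1, ugrid, Sin[ugrid],
          PeriodicInterpolation -> True];
short = NDSolve`FiniteDifferenceDerivative[1, ugrid, Most[Sin[ugrid]],
          PeriodicInterpolation -> True];
Max[Abs[Most[full] - short]]      (* expected to be at roundoff level *)
```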
This generates second-order finite difference formulas for the first derivative of a symbolic function. The results are the derivatives of the quadratic interpolant through the three points, evaluated at each point.

In[27]:= NDSolve`FiniteDifferenceDerivative[1, {x₋₁, x₀, x₁}, {f₋₁, f₀, f₁}, "DifferenceOrder" -> 2]

Out[27]= {f₋₁ (2 x₋₁ - x₀ - x₁)/((x₋₁ - x₀) (x₋₁ - x₁)) + f₀ (x₋₁ - x₁)/((x₀ - x₋₁) (x₀ - x₁)) + f₁ (x₋₁ - x₀)/((x₁ - x₋₁) (x₁ - x₀)),
 f₋₁ (x₀ - x₁)/((x₋₁ - x₀) (x₋₁ - x₁)) + f₀ (2 x₀ - x₋₁ - x₁)/((x₀ - x₋₁) (x₀ - x₁)) + f₁ (x₀ - x₋₁)/((x₁ - x₋₁) (x₁ - x₀)),
 f₋₁ (x₁ - x₀)/((x₋₁ - x₀) (x₋₁ - x₁)) + f₀ (x₁ - x₋₁)/((x₀ - x₋₁) (x₀ - x₁)) + f₁ (2 x₁ - x₀ - x₋₁)/((x₁ - x₋₁) (x₁ - x₀))}
Fourth-order differences typically provide a good balance between truncation (approximation) error and roundoff error at machine precision. However, there are some applications where fourth-order differences produce excessive oscillation (Gibbs phenomenon), so second-order differences are better. Also, for high-precision calculations, higher-order differences may be appropriate. Even values of "DifferenceOrder" use centered formulas, which typically have smaller error coefficients than noncentered formulas, so even values are generally recommended.
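The effect of the difference order is easy to check directly. The following sketch (grid size chosen arbitrarily for illustration) compares the maximum error of several orders on a uniform grid:

```mathematica
(* sketch: maximum first-derivative error for several difference orders *)
grid = N[2 π Range[0, 32]/32];
Table[
 {p, Max[Abs[
    NDSolve`FiniteDifferenceDerivative[1, grid, Sin[grid],
      "DifferenceOrder" -> p] - Cos[grid]]]},
 {p, {2, 4, 6, 8}}]
(* for this smooth function the error should drop rapidly with increasing order *)
```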
NDSolve`FiniteDifferenceDerivativeFunction

When computing the solution to a PDE, it is common to repeatedly apply the same finite difference approximation to different values on the same grid. A significant savings can be made by storing the necessary weight computations and applying them to the changing data. When you omit the (third) argument with function values in NDSolve`FiniteDifferenceDerivative, the result will be an NDSolve`FiniteDifferenceDerivativeFunction, which is a data object that stores the weight computations in an efficient form for future repeated use.
NDSolve`FiniteDifferenceDerivative[{m1,m2,…},{grid1,grid2,…}]
    compute the finite difference weights needed to approximate the derivative of order (m1, m2, …) on the tensor product grid, returning an NDSolve`FiniteDifferenceDerivativeFunction object

NDSolve`FiniteDifferenceDerivativeFunction[Derivative[m],data]
    a data object that contains the weights and other data needed to quickly approximate the mth-order derivative of a function; in the standard output form, only the Derivative[m] operator it approximates is shown

NDSolve`FiniteDifferenceDerivativeFunction[data][values]
    approximate the derivative of the function that takes on values on the grid used to determine data

Computing finite difference weights for repeated use.

This defines a uniform grid with 25 points on the unit interval and evaluates the sine function with one period on the grid.

In[2]:= n = 24; grid = N[Range[0, n]/n]; values = Sin[2 π grid]
Out[4]= {0., 0.258819, 0.5, 0.707107, 0.866025, 0.965926, 1., 0.965926, 0.866025, 0.707107, 0.5, 0.258819, 1.22465×10^-16, -0.258819, -0.5, -0.707107, -0.866025, -0.965926, -1., -0.965926, -0.866025, -0.707107, -0.5, -0.258819, -2.44929×10^-16}
This defines an NDSolve`FiniteDifferenceDerivativeFunction, which can be repeatedly applied to different values on the grid to approximate the second derivative.

In[5]:= fddf = NDSolve`FiniteDifferenceDerivative[Derivative[2], grid]
Out[5]= NDSolve`FiniteDifferenceDerivativeFunction[Derivative[2], <>]
Note that the standard output form is abbreviated and only shows the derivative operators that are approximated.

This computes the approximation to the second derivative of the sine function.

In[6]:= fddf[values]
Out[6]= {0.0720267, -10.2248, -19.7382, -27.914, -34.1875, -38.1312, -39.4764, -38.1312, -34.1875, -27.914, -19.7382, -10.2172, 3.39687×10^-13, 10.2172, 19.7382, 27.914, 34.1875, 38.1312, 39.4764, 38.1312, 34.1875, 27.914, 19.7382, 10.2248, -0.0720267}
This function is only applicable for values defined on the particular grid used to construct it. If your problem requires changing the grid, you will need to use NDSolve`FiniteDifferenceDerivative to generate weights each time the grid changes. However, when you can use NDSolve`FiniteDifferenceDerivativeFunction objects, evaluation will be substantially faster.

This compares timings for computing the second derivative with the function just defined and with the definition of the previous section. A loop is used to repeat the calculation in each case because it is too fast for the differences to show up with Timing.

In[9]:= repeats = 10000;
{First[Timing[Do[fddf[values], {repeats}]]],
 First[Timing[Do[NDSolve`FiniteDifferenceDerivative[2, grid, values], {repeats}]]]}
Out[10]= {0.047, 2.25}
An NDSolve`FiniteDifferenceDerivativeFunction can be used repeatedly in many situations. As a simple example, consider a collocation method for solving the boundary value problem

    u_xx + sin(2 π x) u = λ u,  u(0) = u(1) = 0

on the unit interval. (This simple method is not necessarily the best way to solve this particular problem, but it is useful as an example.)

This defines a function that will have all components zero at an approximate solution of the boundary value problem. Using the intermediate vector v and setting its endpoints (parts {1,-1}) to the corresponding components of u is a fast and simple trick to enforce the boundary conditions. Evaluation is prevented except for numbers λ because the part assignment would not work otherwise. (Also, because Times is Listable, Sin[2 π grid] u threads componentwise.)

In[11]:= Clear[fun];
fun[u_, λ_?NumberQ] := Module[{v = fddf[u] + (Sin[2 π grid] - λ) u},
  v[[{1, -1}]] = u[[{1, -1}]]; v]
This uses FindRoot to find an approximate eigenfunction, using the constant coefficient case for a starting value, and shows the eigenvalue and a plot of the eigenfunction.

In[13]:= s4 = FindRoot[fun[u, λ], {u, values}, {λ, -4 π^2}];
λ /. s4
Out[13]= -39.4004

In[14]:= ListPlot[Transpose[{grid, u /. s4}]]
Out[14]= (plot of the eigenfunction, with values ranging from about -0.3 to 0.3)
Since the setup for this problem is so simple, it is easy to compare various alternatives. For example, to compare the solution above, which used the default fourth-order differences, to the usual use of second-order differences, all that needs to be changed is the "DifferenceOrder".

This solves the boundary value problem using second-order differences and shows a plot of the difference between it and the fourth-order solution.

In[39]:= fddf = NDSolve`FiniteDifferenceDerivative[Derivative[2], grid, "DifferenceOrder" -> 2];
s2 = FindRoot[fun[u, λ], {u, values}, {λ, -4 π^2}];
ListPlot[Transpose[{grid, (u /. s4) - (u /. s2)}]]
Out[41]= (plot of the difference between the two solutions, on the order of a few hundredths)
One way to determine which is the better solution is to study the convergence as the grid is refined; this is considered to some extent in the section on differentiation matrices below. While the most vital information about an NDSolve`FiniteDifferenceDerivativeFunction object, the derivative order, is displayed in its output form, it is sometimes useful to extract this and other information from the object, say for use in a program. Since the way the data is stored may change between versions of Mathematica, extracting the information by using parts of the expression is not recommended. A better alternative is to use any of the several method functions provided for this purpose.
Let FDDF represent an NDSolve`FiniteDifferenceDerivativeFunction[data] object.

FDDF@"DerivativeOrder"          get the derivative order that FDDF approximates
FDDF@"DifferenceOrder"          get the list with the difference order used for the approximation in each dimension
FDDF@"PeriodicInterpolation"    get the list with elements True or False indicating whether periodic interpolation is used for each dimension
FDDF@"Coordinates"              get the list with the grid coordinates in each dimension
FDDF@"Grid"                     form the tensor of the grid points; this is the outer product of the grid coordinates
FDDF@"DifferentiationMatrix"    compute the sparse differentiation matrix mat such that mat.Flatten[values] is equivalent to Flatten[FDDF[values]]

Method functions for extracting information from an NDSolve`FiniteDifferenceDerivativeFunction[data] object.

Any of the method functions that return a list with an element for each of the dimensions can be used with an integer argument dim, which will return only the value for that particular dimension, such that FDDF@method[dim] == (FDDF@method)[[dim]]. The following examples show how you might use some of these methods.

Here is an NDSolve`FiniteDifferenceDerivativeFunction object created with random grids having between 10 and 16 points in each dimension.

In[15]:= fddf = NDSolve`FiniteDifferenceDerivative[Derivative[0, 1, 2],
  Table[Sort[Join[{0., 1.}, Table[RandomReal[], {RandomInteger[{8, 14}]}]]], {3}]]
Out[15]= NDSolve`FiniteDifferenceDerivativeFunction[Derivative[0, 1, 2], <>]
This shows the dimensions of the outer product grid.

In[20]:= Dimensions[tpg = fddf@"Grid"]
Out[20]= {15, 10, 11, 3}
Note that the rank of the grid point tensor is one more than the dimensionality of the tensor product. This is because the three coordinates defining each point are themselves in a list. If you have a function that depends on the grid variables, you can use Apply[f, fddf@"Grid", {n}], where n is the dimensionality of the space, to get the values of the function at each point of the grid.
This defines a Gaussian function of 3 variables and applies it to the grid on which the NDSolve`FiniteDifferenceDerivativeFunction is defined.

In[21]:= f = Function[{x, y, z}, Exp[-((x - .5)^2 + (y - .5)^2 + (z - .5)^2)]];
values = Apply[f, fddf@"Grid", {Length[fddf@"DerivativeOrder"]}];

This applies the derivative function and shows the grid points colored according to the value of the derivative approximation.

In[23]:= Module[{dvals = fddf[values], maxval, minval},
  maxval = Max[dvals]; minval = Min[dvals];
  Graphics3D[MapThread[{Hue[(#2 - minval)/(maxval - minval)], Point[#1]} &,
    {fddf@"Grid", fddf[values]}, Length[fddf@"DerivativeOrder"]]]]
Out[23]= (3D graphic of the grid points colored by the derivative values)
For a moderate-sized tensor product grid like the example here, using Apply is reasonably fast. However, as the grid size gets larger, this approach may not be the fastest, because Apply can only be used in limited ways with the Mathematica compiler and hence with packed arrays. If you can define your function so that you can use Map instead of Apply, you may be able to use a CompiledFunction, since Map has greater applicability within the Mathematica compiler than Apply does.
This defines a CompiledFunction that uses Map to get the values on the grid. (If the first grid dimension is greater than the system option "MapCompileLength", then you do not need to construct the CompiledFunction, since the compilation is done automatically when grid is a packed array.)

In[24]:= cf = Compile[{{grid, _Real, 4}},
  Map[Function[{X}, Module[{Xs = X - .5}, Exp[-(Xs.Xs)]]], grid, {3}]]
Out[24]= CompiledFunction[{grid}, Map[Function[{X}, Module[{Xs = X - 0.5}, Exp[-Xs.Xs]]], grid, {3}], -CompiledCode-]
An even better approach, when possible, is to take advantage of listability when your function consists of operations and functions that have the Listable attribute. The trick is to separate the x, y, and z values at each of the points on the tensor product grid. The fastest way to do this is using Transpose[fddf@"Grid", RotateLeft[Range[n + 1]]], where n = Length[fddf@"DerivativeOrder"] is the dimensionality of the space in which you are approximating the derivative. This will return a list of length n, which has the values on the grid for each of the component dimensions separately. With the Listable attribute, functions applied to this will thread over the grid.

This defines a function that takes advantage of the fact that Exp has the Listable attribute to find the values on the grid.

In[25]:= fgrid[grid_] := Apply[f, Transpose[grid, RotateLeft[Range[TensorRank[grid]], 1]]]

This compares timings for the three methods. The commands are repeated several times to get more accurate timings.
In[26]:= Module[
  {repeats = 100, grid = fddf@"Grid", n = Length[fddf@"DerivativeOrder"]},
  {First[Timing[Do[Apply[f, grid, {n}], {repeats}]]],
   First[Timing[Do[cf[grid], {repeats}]]],
   First[Timing[Do[fgrid[grid], {repeats}]]]}]
Out[26]= {1.766, 0.125, 0.047}
The example timings show that using the CompiledFunction is typically much faster than using Apply, and that taking advantage of listability is a little faster yet.

Pseudospectral Derivatives

The maximum value the difference order can take on is determined by the number of points in the grid. If you exceed this, a warning message will be given and the order will be reduced automatically.
This uses maximal order to approximate the first derivative of the sine function on a random grid.

In[50]:= NDSolve`FiniteDifferenceDerivative[1, rgrid, Sin[rgrid], "DifferenceOrder" -> Length[rgrid]]

NDSolve`FiniteDifferenceDerivative::ordred : There are insufficient points in dimension 1 to achieve the requested approximation order. Order will be reduced to 11.

Out[50]= {1.00001, 0.586821, 0.536089, 0.463614, -0.149161, -0.215265, -0.747934, -0.795838, -0.978214, -0.264155, 0.997089, 0.999941}
Using the limiting order is commonly referred to as a pseudospectral derivative. A common problem with these is that artificial oscillations (Runge's phenomenon) can be extreme. However, there are two instances where this is not the case: a uniform grid with periodic repetition, and a grid with points at the zeros of the Chebyshev polynomials Tn, or Chebyshev-Gauss-Lobatto points [F96a], [QV94]. The computation in both of these cases can be done using a fast Fourier transform, which is efficient and minimizes roundoff error.
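For the uniform periodic case, the FFT-based computation can be sketched directly. The wavenumber bookkeeping below is an illustrative assumption about one standard way to do this, not the internal implementation used by NDSolve:

```mathematica
(* sketch: periodic pseudospectral differentiation via the FFT *)
n = 16; grid = N[2 π Range[0, n - 1]/n];
vals = Sin[grid];
(* integer wavenumbers, with the Nyquist mode zeroed for an odd-order derivative *)
ks = Join[Range[0, n/2 - 1], {0}, Range[-n/2 + 1, -1]];
coeffs = Fourier[vals, FourierParameters -> {1, -1}];
deriv = Re[InverseFourier[I ks coeffs, FourierParameters -> {1, -1}]];
Max[Abs[deriv - Cos[grid]]]      (* should be at roundoff level *)
```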
"DifferenceOrder" -> n                  use nth-order finite differences to approximate the derivative
"DifferenceOrder" -> Length[grid]       use the highest possible order finite differences to approximate the derivative on the grid (not generally recommended)
"DifferenceOrder" -> "Pseudospectral"   use a pseudospectral derivative approximation; only applicable when the grid points are spaced corresponding to the Chebyshev-Gauss-Lobatto points or when the grid is uniform with PeriodicInterpolation -> True
"DifferenceOrder" -> {n1, n2, …}        use difference orders n1, n2, … in dimensions 1, 2, …, respectively

Settings for the "DifferenceOrder" option.

This gives a pseudospectral approximation for the first derivative of the sine function on a uniform grid.

In[27]:= ugrid = N[2 π Range[0, 10]/10];
NDSolve`FiniteDifferenceDerivative[1, ugrid, Sin[ugrid],
  PeriodicInterpolation -> True, "DifferenceOrder" -> "Pseudospectral"]
Out[28]= {1., 0.809017, 0.309017, -0.309017, -0.809017, -1., -0.809017, -0.309017, 0.309017, 0.809017, 1.}
This computes the error at each point. The approximation is accurate to roundoff because the effective basis for the pseudospectral derivative on a uniform grid for a periodic function is the set of trigonometric functions.

In[29]:= % - Cos[ugrid]
Out[29]= {6.66134×10^-16, -7.77156×10^-16, 4.996×10^-16, 1.11022×10^-16, -3.33067×10^-16, 4.44089×10^-16, -3.33067×10^-16, 3.33067×10^-16, -3.88578×10^-16, -1.11022×10^-16, 6.66134×10^-16}
The Chebyshev-Gauss-Lobatto points are the zeros of (1 - x^2) Tn'(x). Using the property Tn(x) = Tn(cos(θ)) = cos(n θ), these can be shown to be at x_j = cos(π j/n).
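This relationship is easy to verify numerically; a quick sketch using the cosine form of the points:

```mathematica
(* sketch: the points cos(π j/n) are zeros of (1 - x^2) Tn'(x) *)
n = 8;
pts = N[Cos[π Range[0, n]/n]];
poly[x_] = (1 - x^2) D[ChebyshevT[n, x], x];
Max[Abs[poly /@ pts]]            (* should vanish to roundoff *)
```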
This defines a simple function that generates a grid of n points with leftmost point at x0 and interval length L having the spacing of the Chebyshev-Gauss-Lobatto points.

In[30]:= CGLGrid[x0_, L_, n_Integer /; n > 1] :=
  x0 + 1/2 L (1 - Cos[π Range[0, n - 1]/(n - 1)])

This computes the pseudospectral derivative for a Gaussian function.
In[31]:= cgrid = CGLGrid[-5, 10., 16];
NDSolve`FiniteDifferenceDerivative[1, cgrid, Exp[-cgrid^2],
  "DifferenceOrder" -> "Pseudospectral"]
Out[31]= {0.0402426, -0.0209922, 0.0239151, -0.0300589, 0.0425553, -0.0590871, 0.40663, 0.60336, -0.60336, -0.40663, 0.0590871, -0.0425553, 0.0300589, -0.0239151, 0.0209922, -0.0402426}
This shows a plot of the approximation and the exact values.

In[32]:= Show[{ListPlot[Transpose[{cgrid, %}]],
  Plot[Evaluate[D[Exp[-x^2], x]], {x, -5, 5}]}]
Out[32]= (plot of the approximated points lying on the exact derivative curve)
This shows a plot of the derivative computed using a uniform grid with the same number of points and maximal difference order.

In[35]:= ugrid = -5 + 10. Range[0, 15]/15;
Show[{ListPlot[Transpose[{ugrid,
     NDSolve`FiniteDifferenceDerivative[1, ugrid, Exp[-ugrid^2],
       "DifferenceOrder" -> Length[ugrid] - 1]}], PlotStyle -> PointSize[0.025]],
  Plot[Evaluate[D[Exp[-x^2], x]], {x, -5, 5}]}]
Out[36]= (plot showing oscillations on the order of ±20 near the ends of the interval)
Even though the approximation is somewhat better in the center (because the points are more closely spaced there in the uniform grid), the plot clearly shows the disastrous oscillation typical of overly high-order finite difference approximations. Using the Chebyshev-Gauss-Lobatto spacing has minimized this.

This shows a plot of the pseudospectral derivative approximation computed using a uniform grid with periodic repetition.

In[70]:= ugrid = -5 + 10. Range[0, 15]/15;
Show[{ListPlot[Transpose[{ugrid,
     NDSolve`FiniteDifferenceDerivative[1, ugrid, Exp[-ugrid^2],
       "DifferenceOrder" -> "Pseudospectral", PeriodicInterpolation -> True]}],
    PlotStyle -> PointSize[0.025]],
  Plot[Evaluate[D[Exp[-x^2], x]], {x, -5, 5}]}]
Out[71]= (plot of the approximated points lying close to the exact derivative curve)
With the assumption of periodicity, the approximation is significantly improved. The accuracy of the periodic pseudospectral approximations is sufficiently high to justify, in some cases, using a larger computational domain to simulate periodicity, say for a pulse like the example. Despite the great accuracy of these approximations, they are not without pitfalls: one of the worst is probably aliasing error, whereby an oscillatory function component with too high a frequency can be misapproximated or disappear entirely.
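Aliasing is easy to demonstrate directly: on an n-point uniform periodic grid, a mode of frequency k + n takes exactly the same sample values as the mode of frequency k, so no derivative approximation based only on those samples can distinguish them. (The grid size below is chosen only for illustration.)

```mathematica
(* sketch: frequencies k and k + n alias on an n-point periodic grid *)
n = 8; grid = N[2 π Range[0, n - 1]/n];
Max[Abs[Sin[3 grid] - Sin[(3 + n) grid]]]   (* zero to roundoff: the samples coincide *)
```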
Accuracy and Convergence of Finite Difference Approximations

When using finite differences, it is important to keep in mind that the truncation error, or the asymptotic approximation error induced by cutting off the Taylor series approximation, is not the only source of error. There are two other sources of error in applying finite difference formulas: condition error and roundoff error [GMW81]. Roundoff error comes from roundoff in the arithmetic computations required. Condition error comes from magnification of any errors in the function values, typically from the division by a power of the step size, and so grows with decreasing step size. This means that in practice, even though the truncation error approaches zero as h does, the actual error will start growing beyond some point. The following figures demonstrate the typical behavior as h becomes small for a smooth function.

(figure) A logarithmic plot of the maximum error for approximating the first derivative of the Gaussian f(x) = e^(-(15 (x - 1/2))^2) at points on a grid covering the interval [0, 1] as a function of the number of grid points, n, using machine precision. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and gray, respectively.
(figure) A logarithmic plot of the truncation error (dotted) and the condition and roundoff error (solid line) for approximating the first derivative of the Gaussian f(x) = e^(-(15 (x - 1/2))^2) at points on a grid covering the interval [0, 1] as a function of the number of grid points, n. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and gray, respectively. The truncation error was computed by computing the approximations with very high precision. The roundoff and condition error was estimated by subtracting the machine-precision approximation from the high-precision approximation.

The roundoff and condition error tends to increase linearly (because of the 1/h factor common to finite difference formulas for the first derivative) and tends to be a little higher for higher-order differences. The pseudospectral derivatives show more variation because the error of the FFT computations varies with length. Note that the truncation error for the uniform (periodic) pseudospectral derivative does not decrease below about 10^-22. This is because, mathematically, the Gaussian is not a periodic function; this error in essence gives the deviation from periodicity.
(figure) A semilogarithmic plot of the error for approximating the first derivative of the Gaussian f(x) = e^(-(x - 1/2)^2) as a function of x at points on a 45-point grid covering the interval [0, 1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and gray, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/45.

It is apparent that the error for the pseudospectral derivatives is not so localized; this is not surprising, since the approximation at any point is based on the values over the whole grid. The error for the finite difference approximations is localized, and the magnitude of the errors follows the size of the Gaussian (which is parabolic on a semilogarithmic plot).
From the second plot, it is apparent that there is a step size for which the best possible derivative approximation is found; for larger h, the truncation error dominates, and for smaller h, the condition and roundoff error dominate. The optimal h tends to give better approximations for higher-order differences. This is not typically an issue for spatial discretization of PDEs because computing to that level of accuracy would be prohibitively expensive. However, this error balance is a vitally important issue when using low-order differences to approximate, for example, Jacobian matrices. To avoid extra function evaluations, first-order forward differences are usually used, and the error balance occurs at roughly the square root of unit roundoff, so picking a good value of h is important [GMW81]. The plots showed the situation typical for smooth functions where there were no real boundary effects. If the parameter in the Gaussian is changed so the function is flatter, boundary effects begin to appear.
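The balance point for first-order forward differences can be seen with a quick experiment (function and expansion point chosen arbitrarily for illustration); the error is smallest for h near Sqrt[$MachineEpsilon], about 1.5×10^-8:

```mathematica
(* sketch: error of a forward difference approximation to f'(1) as h decreases *)
f = Exp; x0 = 1.;
Table[{h, Abs[(f[x0 + h] - f[x0])/h - f'[x0]]}, {h, 10.^-Range[2, 12, 2]}]
(* the error first decreases with h, then grows again as roundoff dominates *)
```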
(figure) A semilogarithmic plot of the error for approximating the first derivative of the Gaussian f(x) = e^(-(15 (x - 1/2))^2) as a function of x at points on a 45-point grid covering the interval [0, 1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (nonperiodic) and Chebyshev spacing are shown in black and gray, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/45.

The error for the finite difference approximations is localized, and the magnitude of the errors follows the magnitude of the first derivative of the Gaussian. The error near the boundary for the uniform-spacing pseudospectral (order-45 polynomial) approximation becomes enormous; as h decreases, this is not bounded. On the other hand, the error for the Chebyshev-spacing pseudospectral derivative is more uniform and overall quite small.
From what has been shown so far, it would appear that the higher the order of the approximation, the better. However, there are two additional issues to consider. The higher-order approximations lead to more expensive function evaluations, and if implicit iteration is needed (as for a stiff problem), then not only is computing the Jacobian more expensive, but the eigenvalues of the matrix also tend to be larger, leading to more stiffness and more difficulty for iterative solvers. This is at an extreme for pseudospectral methods, where the Jacobian has essentially no zero entries [F96a]. Of course, these problems are a trade-off for smaller system (and hence matrix) size. The other issue is associated with discontinuities. Typically, the higher the order of the polynomial approximation, the worse the oscillation near a discontinuity. To make matters even worse, for a true discontinuity, the errors magnify as the grid spacing is reduced.
(figure) A plot of approximations for the first derivative of the discontinuous unit step function f(x) = UnitStep(x - 1/2) as a function of x at points on a 128-point grid covering the interval [0, 1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and gray, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/128. All show oscillatory behavior, but it is apparent that the Chebyshev pseudospectral derivative does better in this regard.
There are numerous alternatives that are used around known discontinuities, such as front tracking. First-order forward differences minimize oscillation, but introduce artificial viscosity terms. One good alternative is the class of so-called essentially nonoscillatory (ENO) schemes, which have full order away from discontinuities, but near discontinuities introduce limits that reduce the approximation order and the oscillatory behavior. At this time, ENO schemes are not implemented in NDSolve. In summary, choosing an appropriate difference order depends greatly on the problem structure. The default of 4 was chosen to be generally reasonable for a wide variety of PDEs, but you may want to try other settings for a particular problem to get better results.
Differentiation Matrices

Since differentiation, and hence finite difference approximation, is a linear operation, an alternative way of expressing the action of an NDSolve`FiniteDifferenceDerivativeFunction is with a matrix. A matrix that represents an approximation to the differential operator is referred to as a differentiation matrix [F96a]. While differentiation matrices may not always be the optimal way of applying finite difference approximations (particularly in cases where an FFT can be used to reduce complexity and error), they are invaluable as aids for analysis and, sometimes, for use in the linear solvers often needed to solve PDEs.

Let FDDF represent an NDSolve`FiniteDifferenceDerivativeFunction[data] object.
FDDF@"DifferentiationMatrix"    recast the linear operation of FDDF as a matrix that represents the linear operator

Forming a differentiation matrix.

This creates an NDSolve`FiniteDifferenceDerivativeFunction object.

In[37]:= fdd = NDSolve`FiniteDifferenceDerivative[2, Range[0, 10]]
Out[37]= NDSolve`FiniteDifferenceDerivativeFunction[Derivative[2], <>]

This makes a matrix representing the underlying linear operator.

In[38]:= smat = fdd@"DifferentiationMatrix"
Out[38]= SparseArray[<59>, {11, 11}]
The matrix is given in a sparse form because, in general, differentiation matrices have relatively few nonzero entries.
This converts to a normal dense matrix and displays it using MatrixForm.

In[39]:= MatrixForm[mat = Normal[smat]]
Out[39]//MatrixForm= (an 11×11 matrix; each interior row contains the centered fourth-order second-derivative stencil -1/12, 4/3, -5/2, 4/3, -1/12 centered on the diagonal, and the rows near the boundary contain one-sided fourth-order formulas with entries such as 15/4, -77/6, 107/6, -13, 61/12, -5/6)
This shows that all three of the representations are roughly equivalent in terms of their action on data.

In[40]:= data = Map[Exp[-#^2] &, N[Range[0, 10]]];
{fdd[data], smat.data, mat.data}
Out[41]= {{-0.646094, 0.367523, 0.361548, -0.00654414, -0.00136204, -0.0000101341, -9.35941×10^-9, -1.15702×10^-12, -1.93287×10^-17, 1.15721×10^-12, -1.15721×10^-11},
 {-0.646094, 0.367523, 0.361548, -0.00654414, -0.00136204, -0.0000101341, -9.35941×10^-9, -1.15702×10^-12, -1.93287×10^-17, 1.15721×10^-12, -1.15721×10^-11},
 {-0.646094, 0.367523, 0.361548, -0.00654414, -0.00136204, -0.0000101341, -9.35941×10^-9, -1.15702×10^-12, -1.93287×10^-17, 1.15721×10^-12, -1.15721×10^-11}}
As mentioned previously, the matrix form is useful for analysis. For example, it can be used in a direct solver, or to find the eigenvalues that could, for example, be used for linear stability analysis.

This computes the eigenvalues of the differentiation matrix.

In[42]:= Eigenvalues[N[smat]]
Out[42]= {-4.90697, -3.79232, -2.38895, -1.12435, -0.287414, 8.12317×10^-6 + 0.0000140698 I, 8.12317×10^-6 - 0.0000140698 I, -0.0000162463, -8.45104×10^-6, 4.22552×10^-6 + 7.31779×10^-6 I, 4.22552×10^-6 - 7.31779×10^-6 I}
For pseudospectral derivatives, which can be computed using fast Fourier transforms, it may be faster to use the differentiation matrix for small grid sizes, but ultimately, on a larger grid, the better complexity and numerical properties of the FFT make it the much better choice.
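As an illustration of using a differentiation matrix in a direct solver (a sketch only; the boundary treatment here, overwriting the first and last rows with identity rows, is one simple choice among several), consider the two-point boundary value problem u''(x) = -sin(x), u(0) = u(2π) = 0, whose exact solution is u(x) = sin(x):

```mathematica
(* sketch: solve u'' = -sin(x) with Dirichlet conditions via a differentiation matrix *)
grid = N[2 π Range[0, 32]/32];
fdd2 = NDSolve`FiniteDifferenceDerivative[2, grid];
mat = Normal[fdd2@"DifferentiationMatrix"];
mat[[1]] = UnitVector[33, 1];  mat[[-1]] = UnitVector[33, 33];   (* boundary rows *)
rhs = -Sin[grid];  rhs[[1]] = 0;  rhs[[-1]] = 0;                 (* boundary values *)
sol = LinearSolve[mat, rhs];
Max[Abs[sol - Sin[grid]]]        (* small: the discrete solution tracks sin(x) *)
```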
For multidimensional derivatives, the matrix is formed so that it operates on the flattened data; it is the KroneckerProduct of the matrices for the one-dimensional derivatives. It is easiest to understand this through an example.
This evaluates a Gaussian function on the grid that is the outer product of grids in the x and y directions.
In[4]:= xgrid = N[Range[-2, 2, 1/10]];
        ygrid = N[Range[-2, 2, 1/8]];
        data = Outer[Exp[-(#1^2 + #2^2)] &, xgrid, ygrid];
This defines an NDSolve`FiniteDifferenceDerivativeFunction, which computes the mixed x-y partial of the function using fourth-order differences.
In[7]:= fdd = NDSolve`FiniteDifferenceDerivative[{1, 1}, {xgrid, ygrid}]
Out[7]= NDSolve`FiniteDifferenceDerivativeFunction[Derivative[1, 1], <>]
This computes the associated differentiation matrix.
In[8]:= dm = fdd["DifferentiationMatrix"]
Out[8]= SparseArray[<22848>, {1353, 1353}]
Note that the differentiation matrix is a 1353×1353 matrix. The number 1353 is the total number of points on the tensor product grid, which, of course, is the product of the number of points on the x and y grids. The differentiation matrix operates on a vector of data that comes from flattening the data on the tensor product grid. The matrix is also very sparse; only about one-half of a percent of the entries are nonzero. This is easily seen with a plot of the positions with nonzero values.
This shows a plot of the positions with nonzero values for the differentiation matrix.
In[9]:= MatrixPlot[Unitize[dm]]
Out[9]= [MatrixPlot of the 1353×1353 matrix showing the nonzero positions in a banded block pattern]
This compares the computation of the mixed x-y partial with the two methods.
In[53]:= Norm[dm.Flatten[data] - Flatten[fdd[data]]]
Out[53]= 3.60822×10^-15
The matrix is the KroneckerProduct, or direct matrix product, of the one-dimensional matrices.
Get the one-dimensional differentiation matrices and form their direct matrix product.
In[16]:= fddx = NDSolve`FiniteDifferenceDerivative[{1}, {xgrid}, … ]   (the remainder of this input is lost in the extraction)
Out[17]= True
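The Kronecker-product structure is easy to verify outside Mathematica. The following is a minimal Python sketch (not NDSolve's implementation; tiny grids and low-order stencils are assumed for brevity) showing that applying kron(Dx, Dy) to row-major flattened data agrees with applying the one-dimensional derivative matrices along each axis.

```python
# Sketch: the mixed x-y partial on a tensor product grid equals the
# Kronecker product Dx (x) Dy acting on the row-major flattened data.

def centered_d1(n, h):
    """n x n centered first-derivative matrix (one-sided at the ends)."""
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if 0 < i < n - 1:
            D[i][i - 1], D[i][i + 1] = -1 / (2 * h), 1 / (2 * h)
        elif i == 0:
            D[0][0], D[0][1] = -1 / h, 1 / h
        else:
            D[i][i - 1], D[i][i] = -1 / h, 1 / h
    return D

def kron(A, B):
    """Kronecker product of two dense matrices (lists of lists)."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

nx, ny, hx, hy = 4, 3, 0.5, 0.25
Dx, Dy = centered_d1(nx, hx), centered_d1(ny, hy)

# data[i][j] = f(x_i, y_j); here f(x, y) = x*y, so d2f/dxdy = 1 exactly
data = [[(i * hx) * (j * hy) for j in range(ny)] for i in range(nx)]
flat = [data[i][j] for i in range(nx) for j in range(ny)]

# Method 1: Kronecker-product matrix on the flattened data
mixed_flat = matvec(kron(Dx, Dy), flat)

# Method 2: apply Dy along rows, then Dx along the resulting columns
tmp = [matvec(Dy, row) for row in data]
mixed = [matvec(Dx, [tmp[i][j] for i in range(nx)]) for j in range(ny)]
mixed2 = [mixed[j][i] for i in range(nx) for j in range(ny)]

assert all(abs(a - b) < 1e-12 for a, b in zip(mixed_flat, mixed2))
```

Because the test function is bilinear, both paths reproduce the mixed partial (which is identically 1) exactly; the point is only that the flattened Kronecker form and the axis-by-axis form are the same linear operator.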
Using the differentiation matrix results in slightly different values for machine numbers because the order of operations is different, which, in turn, leads to different roundoff errors. The differentiation matrix can be advantageous when what is desired is a linear combination of derivatives. For example, the computation of the Laplacian operator can be put into a single matrix.
This makes a function that approximates the Laplacian operator on the tensor product grid.
In[18]:= flap = Function[Evaluate[
           NDSolve`FiniteDifferenceDerivative[{2, 0}, {xgrid, ygrid}][#] +
           NDSolve`FiniteDifferenceDerivative[{0, 2}, {xgrid, ygrid}][#]]]
Out[18]= NDSolve`FiniteDifferenceDerivativeFunction[Derivative[0, 2], <>][#1] +
         NDSolve`FiniteDifferenceDerivativeFunction[Derivative[2, 0], <>][#1] &
This computes the differentiation matrices associated with the derivatives in the x and y directions.
In[19]:= dmlist = Map[(Head[#]["DifferentiationMatrix"]) &, List @@ First[flap]]
Out[19]= {SparseArray[<6929>, {1353, 1353}], SparseArray[<…>, {1353, 1353}]}   (the second nonzero count is lost in the extraction)
This adds the two sparse matrices together, resulting in a single matrix for the Laplacian operator.
In[68]:= slap = Total[dmlist]
Out[68]= SparseArray[<12473>, {1353, 1353}]
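The same idea can be sketched in a few lines of Python (an illustrative stand-in for SparseArray, not Mathematica's implementation): two one-dimensional second-derivative matrices are Kronecker-expanded and added into a single sparse Laplacian matrix, here stored as a plain {(row, col): value} dictionary.

```python
# Hedged sketch: L = Dxx (x) Iy + Ix (x) Dyy assembled as one sparse matrix.

def d2(n, h):
    """Second-derivative stencil (1, -2, 1)/h^2, shifted inward at the ends."""
    D = {}
    for i in range(n):
        k = min(max(i, 1), n - 2)          # shift the stencil at the boundaries
        for c, j in ((1, k - 1), (-2, k), (1, k + 1)):
            D[(i, j)] = D.get((i, j), 0.0) + c / h**2
    return D

def kron_sparse(A, B, nb):
    """Kronecker product of two sparse {(i, j): v} matrices; B is nb x nb."""
    return {(ia * nb + ib, ja * nb + jb): va * vb
            for (ia, ja), va in A.items() for (ib, jb), vb in B.items()}

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return out

def apply(A, v):
    out = [0.0] * len(v)
    for (i, j), a in A.items():
        out[i] += a * v[j]
    return out

nx, ny, hx, hy = 5, 4, 0.5, 0.5
Ix = {(i, i): 1.0 for i in range(nx)}
Iy = {(i, i): 1.0 for i in range(ny)}
lap = add(kron_sparse(d2(nx, hx), Iy, ny), kron_sparse(Ix, d2(ny, hy), ny))

# f(x, y) = x^2 + y^2 has Laplacian exactly 4; the 3-point stencil is exact
# on quadratics, so every entry of the result is 4 (up to roundoff).
flat = [(i * hx) ** 2 + (j * hy) ** 2 for i in range(nx) for j in range(ny)]
out = apply(lap, flat)
assert all(abs(v - 4.0) < 1e-9 for v in out)
```

Adding the two dictionaries mirrors adding the two SparseArrays above: overlapping diagonal entries merge, which is why the combined matrix has fewer nonzeros than the sum of the two counts.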
This shows a plot of the positions with nonzero values for the differentiation matrix.
In[69]:= MatrixPlot[Unitize[slap]]
Out[69]= [MatrixPlot of the 1353×1353 Laplacian matrix showing the nonzero positions in a banded block pattern]
This compares the values and timings for the two different ways of approximating the Laplacian.
In[64]:= Block[{repeats = 1000, l1, l2},
          data = Developer`ToPackedArray[data];
          fdata = Flatten[data];
          Map[First, {
            Timing[Do[l1 = flap[data], {repeats}]],
            Timing[Do[l2 = slap.fdata, {repeats}]],
            {Norm[Flatten[l1] - l2]}}]]
Out[64]= {0.14, 0.047, 1.39888×10^-14}
Interpretation of Discretized Dependent Variables
When a dependent variable is given in a monitor option (e.g. StepMonitor) or in a method where interpretation of the dependent variable is needed (e.g. EventLocator and Projection), the interpretation for ODEs is generally clear: at a particular value of time (or the independent variable), use the value for that component of the solution for the dependent variable.
For PDEs, the interpretation to use is not so obvious. Mathematically speaking, the dependent variable at a particular time is a function of space. This leads to the default interpretation, which is to represent the dependent variable as an approximate function across the spatial domain using an InterpolatingFunction.
Another possible interpretation for PDEs is to consider the dependent variable at a particular time as representing the spatially discretized values at that time, that is, discretized both in time and space. You can request that monitors and methods use this fully discretized interpretation by using the MethodOfLines option DiscretizedMonitorVariables -> True.
The best way to see the difference between the two interpretations is with an example.
This solves Burgers' equation. The StepMonitor is set so that it makes a plot of the solution at the step time of every tenth time step, producing a sequence of curves of gradated color. You can animate the motion by replacing Show with ListAnimate; note that the motion of the wave in the animation does not reflect the actual wave speed, since it effectively includes the step size used by NDSolve.
In[5]:= curves = Reap[Block[{count = 0},
    Timing[NDSolve[{D[u[t, x], t] == 0.01 D[u[t, x], x, x] + u[t, x] D[u[t, x], x],
        u[0, x] == Cos[2 Pi x], u[t, 0] == u[t, 1]}, u, {t, 0, 1}, {x, 0, 1},
      StepMonitor :> If[Mod[count++, 10] == 0,
        Sow[Plot[u[t, x], {x, 0, 1}, PlotRange -> {{0, 1}, {-1, 1}}, PlotStyle -> Hue[t]]]],
      Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid",
          "MinPoints" -> 100, "DifferenceOrder" -> "Pseudospectral"}}]]]]
In[8]:= Show[curves]
Out[6]= [combined plot of the monitored solution curves over 0 ≤ x ≤ 1 with plot range -1 to 1, colored by time]
In executing the command above, u[t, x] in the StepMonitor is effectively a function of x, so it can be plotted with Plot. You could do other operations on it, such as numerical integration.
This solves Burgers' equation. The StepMonitor is set so that it makes a list plot of the spatially discretized solution at the step time every tenth step. You can animate the motion by replacing Show with ListAnimate.
In[10]:=
discretecurves = Reap[Block[{count = 0},
    Timing[NDSolve[{D[u[t, x], t] == 0.01 D[u[t, x], x, x] + u[t, x] D[u[t, x], x],
        u[0, x] == Cos[2 Pi x], u[t, 0] == u[t, 1]}, u, {t, 0, 1}, {x, 0, 1},
      StepMonitor :> If[Mod[count++, 10] == 0,
        Sow[ListPlot[u[t, x], PlotRange -> {-1, 1}, PlotStyle -> Hue[t]]];],
      Method -> {"MethodOfLines", "DiscretizedMonitorVariables" -> True,
        "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 100,
          "DifferenceOrder" -> "Pseudospectral"}}]]]]
In[11]:= Show[discretecurves]
Out[11]= [combined list plot of the discretized solution values versus grid index (1 to 100), plot range -1 to 1, colored by time]
In this case, u[t, x] is given at each step as a vector with the discretized values of the solution on the spatial grid. Showing the discretization points makes for a more informative monitor in this example, since it allows you to see how well the front is resolved as it forms. The vector of values contains no information about the grid itself; in the example, the plot is made versus the index values, which shows the correct spacing for a uniform grid. Note that when u is interpreted as a function, the grid is contained in the InterpolatingFunction used to represent the spatial solution, so if you need the grid, the easiest way to get it is to extract it from the InterpolatingFunction that represents u[t, x]. Finally, note that using the discretized representation is significantly faster. This may be an important issue if you are using the representation in a solution method such as Projection or EventLocator. An example where event detection is used to prevent solutions from going beyond a computational domain is computed much more quickly by using the discretized interpretation.
Boundary Conditions Often, with PDEs, it is possible to determine a good numerical way to apply boundary conditions for a particular equation and boundary condition. The example given previously in the introduction of "The Numerical Method of Lines" is such a case. However, the problem of finding a general algorithm is much more difficult and is complicated somewhat by the effect that boundary conditions can have on stiffness and overall stability. Periodic boundary conditions are particularly simple to deal with: periodic interpolation is used for the finite differences. Since pseudospectral approximations are accurate with uniform grids, solutions can often be found quite efficiently.
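The wrap-around indexing behind periodic finite differences can be sketched in a few lines of Python (an illustration of the idea only, not how NDSolve implements periodic interpolation): the same centered stencil is used at every point, with indices taken modulo the grid size, so no one-sided formulas are needed at the ends.

```python
import math

n = 64
h = 2 * math.pi / n
x = [i * h for i in range(n)]
u = [math.sin(v) for v in x]

def periodic_d1(u, h):
    """Fourth-order centered first derivative with periodic (wrap-around) indexing."""
    n = len(u)
    return [(u[(i - 2) % n] - 8 * u[(i - 1) % n]
             + 8 * u[(i + 1) % n] - u[(i + 2) % n]) / (12 * h)
            for i in range(n)]

du = periodic_d1(u, h)
# compare with the exact derivative cos(x); accuracy is O(h^4) uniformly,
# including at the "ends" of the grid
err = max(abs(d - math.cos(v)) for d, v in zip(du, x))
assert err < 1e-5
```

Uniform accuracy at every grid point is what makes periodic problems so convenient for uniform-grid (and especially pseudospectral) discretizations.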
NDSolve[{eqn1, eqn2, …, u1[t, xmin] == u1[t, xmax], u2[t, xmin] == u2[t, xmax], …},
  {u1[t, x], u2[t, x], …}, {t, tmin, tmax}, {x, xmin, xmax}]

NDSolve[{eqn1, eqn2, …, u1[t, x1min, x2, …] == u1[t, x1max, x2, …],
    u2[t, x1min, x2, …] == u2[t, x1max, x2, …], …},
  {u1[t, x1, x2, …], u2[t, x1, x2, …], …},
  {t, tmin, tmax}, {x1, x1min, x1max}, {x2, x2min, x2max}, …]
If you are solving for several functions u1 , u2 , … then for any of the functions to have periodic boundary conditions, all of them must (the condition need only be specified for one function). If you are working with more than one spatial dimension, you can have periodic boundary conditions in some independent variable dimensions and not in others. This solves a generalization of the sine-Gordon equation to two spatial dimensions with periodic boundary conditions using a pseudospectral method. Without the pseudospectral method enabled by the periodicity, the problem could take much longer to solve. In[2]:=
sol = NDSolve[{D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y] - Sin[u[t, x, y]],
    u[0, x, y] == Exp[-(x^2 + y^2)], Derivative[1, 0, 0][u][0, x, y] == 0,
    u[t, -10, y] == u[t, 10, y], u[t, x, -10] == u[t, x, 10]},
   u, {t, 0, 6}, {x, -10, 10}, {y, -10, 10},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
     {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}]
Out[2]= {{u -> InterpolatingFunction[{{0., 6.}, {-10., 10.}, {-10., 10.}}, <>]}}
In the InterpolatingFunction object returned as a solution, the ellipses in the notation {…, xmin, xmax, …} are used to indicate that this dimension repeats periodically.
This makes a surface plot of a part of the solution derived from periodic continuation at t == 6.
In[7]:= Plot3D[First[u[6, x, y] /. sol], {x, 20, 40}, {y, -15, 15},
          PlotRange -> All, PlotPoints -> 40]
Out[7]= [surface plot over 20 ≤ x ≤ 40, -15 ≤ y ≤ 15, with values roughly between -0.10 and 0.10]
NDSolve uses two methods for nonperiodic boundary conditions. Both have their merits and drawbacks. The first method is to differentiate the boundary conditions with respect to the temporal variable and solve the resulting differential equation(s) at the boundary. The second method is to discretize each boundary condition as it is. This typically results in an algebraic equation for the boundary solution component, so the equations must be solved with a DAE solver. The choice is controlled with the DifferentiateBoundaryConditions option to MethodOfLines.
To see how the differentiation method works, consider again the simple example of the method of lines introduction section. In the first formulation, the Dirichlet boundary condition at x == 0 was handled by differentiation with respect to t. The Neumann boundary condition was handled using the idea of reflection, which worked fine for a second-order finite difference approximation, but does not generalize quite as easily to higher order (though it can be done easily for this problem by computing the entire reflection). The differentiation method, however, can be used for any order of differences on the Neumann boundary condition at x == 1. As an example, a solution to the problem will be developed using fourth-order differences.
This is a setting for the number of and spacing between spatial points. It is purposely set small so you can see the resulting equations. You can change it later to improve the accuracy of the approximations.
In[8]:= n = 10; hn = 1/n;
This defines the vector of ui.
In[9]:= U[t_] = Table[ui[t], {i, 0, n}]
Out[9]= {u0[t], u1[t], u2[t], u3[t], u4[t], u5[t], u6[t], u7[t], u8[t], u9[t], u10[t]}
This discretizes the Neumann boundary condition at x == 1 in the spatial direction.
In[10]:= bc = Last[NDSolve`FiniteDifferenceDerivative[1, hn Range[0, n], U[t]]] == 0
Out[10]= (5 u6[t])/2 - (40 u7[t])/3 + 30 u8[t] - 40 u9[t] + (125 u10[t])/6 == 0
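The coefficients in this discretized boundary condition can be cross-checked independently: they are 1/h times the standard fourth-order one-sided (backward) first-derivative stencil (1/4, -4/3, 3, -4, 25/12) with h = 1/10, which is exact on polynomials up to degree four. A small Python verification using exact rational arithmetic (the quartic test function here is an arbitrary choice):

```python
from fractions import Fraction as F

h = F(1, 10)
# coefficients multiplying u6[t], u7[t], u8[t], u9[t], u10[t] in Out[10]
coeffs = [F(5, 2), F(-40, 3), F(30), F(-40), F(125, 6)]

def f(x):
    """Quartic test function; f'(1) = 4 + 3 + 2 + 1 = 10."""
    return x**4 + x**3 + x**2 + x

grid = [F(i, 10) for i in range(6, 11)]        # x = 0.6, 0.7, ..., 1.0
approx = sum(c * f(x) for c, x in zip(coeffs, grid))
assert approx == 10                             # exact for degree <= 4
```

With exact rationals the stencil reproduces the derivative of any quartic with no error at all, confirming both the coefficients and the order of the formula.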
This differentiates the discretized boundary condition with respect to t.
In[11]:= bcprime = D[bc, t]
Out[11]= (5 u6'[t])/2 - (40 u7'[t])/3 + 30 u8'[t] - 40 u9'[t] + (125 u10'[t])/6 == 0
Technically, it is not necessary that the discretization of the boundary condition be done with the same difference order as the rest of the DE; in fact, since the error terms for the one-sided derivatives are much larger, it may sometimes be desirable to increase the order near the boundaries. NDSolve does not do this because it is desirable that the difference order and the InterpolatingFunction interpolation order be consistent across the spatial direction.
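The claim about larger error in the one-sided formulas is easy to check numerically. This Python sketch (step size and test function chosen arbitrarily) compares the fourth-order centered and one-sided first-derivative stencils at the same h; their leading error constants are 1/30 and 1/5, respectively, so the one-sided error is roughly six times larger:

```python
import math

h, x = 0.05, 1.0
f = math.exp                     # f' = f, so the exact derivative at x is f(x)

# fourth-order centered stencil, leading error (h^4/30) f^(5)
centered = (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

# fourth-order one-sided (backward) stencil, leading error (h^4/5) f^(5)
one_sided = (25/12*f(x) - 4*f(x - h) + 3*f(x - 2*h)
             - 4/3*f(x - 3*h) + 1/4*f(x - 4*h)) / h

err_c = abs(centered - f(x))
err_o = abs(one_sided - f(x))
assert err_o > 2 * err_c         # one-sided error is several times larger
```

Both formulas converge at the same O(h^4) rate; it is only the error constant that differs, which is why the extra error is confined to the boundary rows.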
This is another way of generating the equations using NDSolve`FiniteDifferenceDerivative. The first and last will have to be replaced with the appropriate equations from the boundary conditions. In[12]:=
eqns = ThreadB D@U@tD, tD ã
£ Out[12]= :u0 @tD ã
u1 £ @tD ã u2 £ @tD ã u3 £ @tD ã u4 £ @tD ã u5 £ @tD ã u6 £ @tD ã u7 £ @tD ã u8 £ @tD ã
1 8 1 8 1 8 1 8 1 8 1 8 1 8 1 8 1
1 8
NDSolve`FiniteDifferenceDerivative@2, hn Range@0, nD, U@tDDF
375 u0 @tD 250 u0 @tD 3 -
25 3 25 3 25 3 25 3 25 3 25 3 25
3850 u1 @tD
+
3
- 125 u1 @tD -
u0 @tD + u1 @tD + u2 @tD + u3 @tD + u4 @tD + u5 @tD +
400 u1 @tD 3 400 u2 @tD 3 400 u3 @tD 3 400 u4 @tD 3 400 u5 @tD 3 400 u6 @tD 3 400 u7 @tD
5350 u2 @tD
3 100 u2 @tD
+
3
- 250 u2 @tD + - 250 u3 @tD + - 250 u4 @tD + - 250 u5 @tD + - 250 u6 @tD + - 250 u7 @tD +
- 1300 u3 @tD + 350 u3 @tD 3 400 u3 @tD 3 400 u4 @tD 3 400 u5 @tD 3 400 u6 @tD 3 400 u7 @tD 3 400 u8 @tD
1525 u4 @tD
-
3 400 u9 @tD
250 u5 @tD
3
25 u4 @tD 3 25 u5 @tD 3 25 u6 @tD 3 25 u7 @tD 3 25 u8 @tD 3 25 u9 @tD
,
3
- 50 u4 @tD + -
-
25 u5 @tD
,
3 , , , , , ,
3 25 u10 @tD
, u6 @tD + - 250 u8 @tD + 3 3 3 3 25 u5 @tD 350 u7 @tD 100 u8 @tD 250 u10 @tD , u9 £ @tD ã - 50 u6 @tD + - 125 u9 @tD + 8 3 3 3 3 1525 u6 @tD 5350 u8 @tD 3850 u9 @tD 1 250 u5 @tD + u10 £ @tD ã - 1300 u7 @tD + + 375 u10 @tD > 8 3 3 3 3 8 1
Now you can replace the first and last equation with the boundary condition.
In[13]:= eqns[[1, 2]] = D[Sin[2 Pi t], t];
         eqns[[-1]] = bcprime;
         eqns
Out[15]= [the same list of ODEs, now with the first equation u0'[t] == 2 π Cos[2 π t] and the last equation replaced by the differentiated boundary condition
  (5 u6'[t])/2 - (40 u7'[t])/3 + 30 u8'[t] - 40 u9'[t] + (125 u10'[t])/6 == 0;
the middle equations are unchanged, but garbled in this extraction.]
NDSolve is capable of solving the system as is for the appropriate derivatives, so it is ready for the ODEs.
In[16]:= diffsol = NDSolve[{eqns, Thread[U[0] == Table[0, {11}]]}, U[t], {t, 0, 4}]
Out[16]= {{u0[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
    u1[t] -> InterpolatingFunction[{{0., 4.}}, <>][t], …,
    u10[t] -> InterpolatingFunction[{{0., 4.}}, <>][t]}}
This shows a plot of how well the boundary condition at x == 1 was satisfied.
In[17]:= Plot[Evaluate[Apply[Subtract, bc] /. diffsol], {t, 0, 4}]
Out[17]= [plot of the residual over 0 ≤ t ≤ 4; the values stay between about -5×10^-16 and 1×10^-15]
Treating the boundary conditions as algebraic conditions saves a couple of steps in the processing at the expense of using a DAE solver.
This replaces the first and last equations (from before) with algebraic conditions corresponding to the boundary conditions.
In[18]:= eqns[[1]] = u0[t] == Sin[2 Pi t];
         eqns[[-1]] = bc;
         eqns
Out[20]= [the list of equations, now with the algebraic conditions u0[t] == Sin[2 π t] first and
  (5 u6[t])/2 - (40 u7[t])/3 + 30 u8[t] - 40 u9[t] + (125 u10[t])/6 == 0
last; the middle ODEs are unchanged, but garbled in this extraction.]
This solves the system of DAEs with NDSolve.
In[21]:= daesol = NDSolve[{eqns, Thread[U[0] == Table[0, {11}]]}, U[t], {t, 0, 4}]
Out[21]= {{u0[t] -> InterpolatingFunction[{{0., 4.}}, <>][t],
    u1[t] -> InterpolatingFunction[{{0., 4.}}, <>][t], …,
    u10[t] -> InterpolatingFunction[{{0., 4.}}, <>][t]}}
This shows how well the boundary condition was satisfied.
In[22]:= Plot[Evaluate[Apply[Subtract, bc] /. daesol], {t, 0, 4}, PlotRange -> All]
Out[22]= [plot of the residual over 0 ≤ t ≤ 4; the values stay between about -1.5×10^-14 and 1.5×10^-14]
For this example, the boundary condition was satisfied well within tolerances in both cases, but the differentiation method did very slightly better. This is not always true; in some cases, with the differentiation method, the boundary condition can experience cumulative drift since the error control in this case is only local. The Dirichlet boundary condition at x == 0 in this example shows some drift. This makes a plot that compares how well the Dirichlet boundary condition at x == 0 was satisfied with the two methods. The solution with the differentiated boundary condition is shown in black. In[23]:=
Plot[Evaluate[{u0[t] /. diffsol, u0[t] /. daesol} - Sin[2 Pi t]], {t, 0, 4},
  PlotStyle -> {{Black}, {Blue}}, PlotRange -> All]
Out[23]= [plot over 0 ≤ t ≤ 4 of the deviation from Sin[2 π t]; the values range up to about 4×10^-7]
When using NDSolve, it is easy to switch between the two methods by using the DifferentiateBoundaryConditions option. Remember that when you use DifferentiateBoundaryConditions -> False, you are not as free to choose integration methods; the method needs to be a DAE solver. With systems of PDEs or equations with higher-order derivatives having more complicated boundary conditions, both methods can be made to work in general. When there are multiple boundary conditions at one end, it may be necessary to attach some conditions to interior points. Here is an example of a PDE with two boundary conditions at each end of the spatial interval.
This solves a differential equation with two boundary conditions at each end of the spatial interval. The StiffnessSwitching integration method is used to avoid potential problems with stability from the fourth-order derivative.
In[25]:= dsol = NDSolve[{D[u[x, t], t, t] == -D[u[x, t], x, x, x, x],
     {u[x, t] == x^2/2 - x^3/3 + x^4/12, D[u[x, t], t] == 0} /. t -> 0,
     Table[(D[u[x, t], {x, d}] … ]},   (the boundary-condition Table is truncated in this extraction)
    u, {x, 0, 1}, {t, 0, 2}, Method -> "StiffnessSwitching", InterpolationOrder -> All]
Out[25]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 2.}}, <>]}}
The message about spatial error is addressed in the next section. For now, ignore it and consider the boundary conditions.
This forms a list of InterpolatingFunctions differentiated to the same order as each of the boundary conditions.
In[26]:= bct = Table[(D[u[x, t], {x, d}] … ]   (the input is truncated in this extraction)
Out[26]= {{InterpolatingFunction[{{0., 1.}, {0., 2.}}, <>][0, t],
     InterpolatingFunction[{{0., 1.}, {0., 2.}}, <>][0, t]},
    {InterpolatingFunction[{{0., 1.}, {0., 2.}}, <>][1, t],
     InterpolatingFunction[{{0., 1.}, {0., 2.}}, <>][1, t]}}
This makes a logarithmic plot of how well each of the four boundary conditions is satisfied by the solution computed with NDSolve as a function of t.
In[27]:= LogPlot[Evaluate[Map[Abs, bct, {2}]], {t, 0, 2}]
Out[27]= [log plot over 0 ≤ t ≤ 2; the residuals lie roughly between 10^-16 and 10^-8]
It is clear that the boundary conditions are satisfied to well within the tolerances allowed by AccuracyGoal and PrecisionGoal options. It is typical that conditions with higher-order derivatives will not be satisfied as well as those with lower-order derivatives.
Inconsistent Boundary Conditions
It is important that the boundary conditions you specify be consistent with both the initial condition and the PDE. If this is not the case, NDSolve will issue a message warning about the inconsistency. When this happens, the solution may not satisfy the boundary conditions, and in the worst cases, instability may appear.
In this example for the heat equation, the boundary condition at x == 0 is clearly inconsistent with the initial condition.
In[2]:= sol = NDSolve[{D[u[t, x], t] == D[u[t, x], x, x], u[t, 0] == 1, u[t, 1] == 0,
     u[0, x] == .5}, u, {t, 0, 1}, {x, 0, 1}]
NDSolve::ibcinc : Warning: Boundary and initial conditions are inconsistent.
Out[2]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 1.}}, <>]}}
This shows a plot of the solution at x == 0 as a function of t. The boundary condition u(t, 0) == 1 is clearly not satisfied.
In[3]:= Plot[Evaluate[First[u[t, 0] /. sol]], {t, 0, 1}]
Out[3]= [plot over 0 ≤ t ≤ 1; the value at x == 0 stays at the initial value 0.5 rather than the boundary value 1]
The reason the boundary condition is not satisfied is that once it is differentiated, it becomes ut(t, 0) == 0, so the solution will be whatever constant value comes from the initial condition.
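A minimal illustration of this effect (forward Euler in Python, not NDSolve's integrator): once the Dirichlet condition u(t, 0) == 1 has been differentiated to u_t(t, 0) == 0, integration can only preserve whatever value the inconsistent initial condition supplied.

```python
# The differentiated boundary condition is the ODE  u0'(t) = 0.
# Integrating it from the (inconsistent) initial value 0.5 can never
# reach the intended boundary value 1.
u0 = 0.5                 # initial condition at x == 0
dt = 0.01
for _ in range(100):     # integrate u0'(t) = 0 from t = 0 to t = 1
    u0 += dt * 0.0
assert u0 == 0.5         # the boundary value stays at 0.5, not 1
```

The derivative condition carries no memory of the intended boundary value, which is exactly why the constant from the initial condition persists.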
When the boundary conditions are not differentiated, the DAE solver in effect modifies the initial conditions so that the boundary condition is satisfied.
In[4]:= daesol = NDSolve[{D[u[t, x], t] == D[u[t, x], x, x], u[t, 0] == 1, u[t, 1] == 0,
     u[0, x] == 0}, u, {t, 0, 1}, {x, 0, 1},
    Method -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> False}]
Out[5]= [plot of the boundary residual over 0 ≤ t ≤ 1; the values stay within about ±1×10^-15]
It is not always the case that the DAE solver will find good initial conditions that lead to an effectively correct solution like this. A better way to handle this problem is to give an initial condition that is consistent with the boundary conditions, even if it is discontinuous. In this case, the unit step function does what is needed.
This uses a discontinuous initial condition to match the boundary condition, giving a solution correct to the resolution of the spatial discretization.
In[6]:= usol = NDSolve[{D[u[t, x], t] == D[u[t, x], x, x], u[t, 0] == 1, u[t, 1] == 0,
     u[0, x] == UnitStep[-x]}, u, {t, 0, 1}, {x, 0, 1}]
Out[7]= [surface plot of the solution over 0 ≤ t ≤ 1, 0 ≤ x ≤ 1, falling from 1 at x == 0 to 0 at x == 1]
In general, with discontinuous initial conditions, spatial error estimates cannot be satisfied, since they are predicated on smoothness. It is therefore best to choose how well you want to model the effect of the discontinuity, either by giving a smooth function that approximates the discontinuity or by specifying explicitly the number of points to use in the spatial discretization. More detail on spatial error estimates and discretization is given in "Spatial Error Estimates".
A more subtle inconsistency arises when the temporal variable has higher-order derivatives and boundary conditions may be differentiated more than once. Consider the wave equation
u_tt = u_xx

with initial conditions

u(0, x) = sin(x),  u_t(0, x) = 0

and boundary conditions

u(t, 0) = 0,  u_x(t, 0) = e^t
The initial condition sin(x) satisfies the boundary conditions, so you might be surprised that NDSolve issues the NDSolve::ibcinc message.
In this example, the boundary and initial conditions appear to be consistent at first glance, but actually have inconsistencies which show up under differentiation.
In[8]:= isol = NDSolve[{D[u[t, x], t, t] == D[u[t, x], x, x], u[0, x] == Sin[x],
     (D[u[t, x], t] /. t -> 0) == 0, u[t, 0] == 0,
     (D[u[t, x], x] /. x -> 0) == Exp[t]}, u, {t, 0, 1}, {x, 0, 2 Pi}]
Out[8]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 6.28319}}, <>]}}
The inconsistency appears when you differentiate the second initial condition with respect to x, giving u_tx(0, x) = 0, and differentiate the second boundary condition with respect to t, giving u_xt(t, 0) = e^t. These two are inconsistent at x = t = 0. Occasionally, NDSolve will issue the NDSolve::ibcinc message warning about inconsistent boundary conditions when they are actually consistent. This happens due to discretization error in approximating Neumann boundary conditions, or any boundary condition that involves a spatial derivative. The reason this happens is that the spatial error estimates (see "Spatial Error Estimates") used to determine how many points to discretize with are based on the PDE and the initial condition, but not the boundary conditions. The one-sided finite difference formulas that are used to approximate the boundary conditions also have larger error than a centered formula of the same order, leading to additional discretization error at the boundary. Typically this is not a problem, but it is possible to construct examples where it does occur.
In this example, because of discretization error, NDSolve incorrectly warns about inconsistent boundary conditions.
In[9]:= sol = NDSolve[{D[u[x, t], t] == D[u[x, t], x, x],
     u[x, 0] == 1 - Sin[4 Pi x]/(4 Pi), u[0, t] == 1,
     u[1, t] + Derivative[1, 0][u][1, t] == 0}, u, {x, 0, 1}, {t, 0, 1}]
Out[9]= {{u -> InterpolatingFunction[{{0., 1.}, {0., 1.}}, <>]}}
A plot of the boundary condition shows that the error, while not large, is outside of the default tolerances.
In[10]:= Plot[First[u[1, t] + Derivative[1, 0][u][1, t] /. sol], {t, 0, 1}]
Out[10]= [plot over 0 ≤ t ≤ 1; the residual is nearly constant at about 0.000235]
When the boundary conditions are consistent, a way to correct this error is to specify that NDSolve use a finer spatial discretization.
With a finer spatial discretization, there is no message and the boundary condition is satisfied better.
In[13]:= fsol = NDSolve[{D[u[x, t], t] == D[u[x, t], x, x],
     u[x, 0] == 1 - Sin[4 Pi x]/(4 Pi), u[0, t] == 1,
     u[1, t] + Derivative[1, 0][u][1, t] == 0}, u, {x, 0, 1}, {t, 0, 1},
    Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "MinPoints" -> 100}}]
Out[14]= [plot of the boundary residual over 0 ≤ t ≤ 1; the values are near 1.385×10^-6]
Spatial Error Estimates Overview When NDSolve solves a PDE, unless you have specified the spatial grid for it to use, by giving it explicitly or by giving equal values for the MinPoints and MaxPoints options, NDSolve needs to make a spatial error estimate. Ideally, the spatial error estimates would be monitored over time and the spatial mesh updated according to the evolution of the solution. The problem of grid adaptivity is difficult enough for a specific type of PDE and certainly has not been solved in any general way. Furthermore, techniques such as local refinement can be problematic with the method of lines since changing the number of mesh points requires a complete restart of the ODE methods. There are moving mesh techniques that appear promising for this approach, but at this point, NDSolve uses a static grid. The grid to use is determined by an a priori error estimate based on the initial condition. An a posteriori check is done at the end of the temporal interval for reasonable consistency and a warning message is given if that fails. This can, of course, be fooled, but in practice it provides a reasonable compromise. The most common cause of failure is when initial conditions have little variation, so the estimates are essentially meaningless. In this case, you may need to choose some appropriate grid settings yourself. Load a package that will be used for extraction of data from InterpolatingFunction objects. In[1]:=
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]
A priori Error Estimates
When NDSolve solves a PDE using the method of lines, a decision has to be made about an appropriate spatial grid. NDSolve does this using an error estimate based on the initial condition (thus, a priori). It is easiest to show how this works in the context of an example. For illustrative purposes, consider the sine-Gordon equation in one dimension with periodic boundary conditions.
This solves the sine-Gordon equation with a Gaussian initial condition.
In[5]:= ndsol = NDSolve[{D[u[x, t], t, t] == D[u[x, t], x, x] - Sin[u[x, t]],
     u[x, 0] == Exp[-(x^2)], Derivative[0, 1][u][x, 0] == 0,
     u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 0, 5}]
Out[5]= {{u -> InterpolatingFunction[{{-5., 5.}, {0., 5.}}, <>]}}
This gives the number of spatial and temporal points used, respectively.
In[6]:= Map[Length, InterpolatingFunctionCoordinates[First[u /. ndsol]]]
Out[6]= {97, 15}
The temporal points are chosen adaptively by the ODE method based on local error control. NDSolve used 97 (98 including the periodic endpoint) spatial points. This choice will be illustrated through the steps that follow.
In the equation-processing phase of NDSolve, one of the first things that happens is that equations with second- (or higher-) order temporal derivatives are replaced with systems having only first-order temporal derivatives.
This is a first-order system equivalent to the sine-Gordon equation earlier.
In[7]:= {D[u[x, t], t] == v[x, t], D[v[x, t], t] == D[u[x, t], x, x] - Sin[u[x, t]]}
Out[7]= {u^(0,1)[x, t] == v[x, t], v^(0,1)[x, t] == -Sin[u[x, t]] + u^(2,0)[x, t]}
The next stage is to solve for the temporal derivatives.
This is the solution for the temporal derivatives, with the right-hand side of the equations in normal (ODE) form.
In[8]:= rhs = {D[u[x, t], t], D[v[x, t], t]} /. Solve[%, {D[u[x, t], t], D[v[x, t], t]}]
Out[8]= {{v[x, t], -Sin[u[x, t]] + u^(2,0)[x, t]}}
Now the problem is to choose a uniform grid that will approximate the derivative to within the local error tolerances specified by AccuracyGoal and PrecisionGoal. For this illustration, use the default "DifferenceOrder" (4) and the default AccuracyGoal and PrecisionGoal (both 4 for PDEs). The methods used to integrate the system of ODEs that results from discretization base their own error estimates on the assumption of sufficiently accurate function values. The estimates here have the goal of finding a spatial grid for which (at least with the initial condition) the spatial error is somewhat balanced with the local temporal error. This sets variables to reflect the default settings for "DifferenceOrder", AccuracyGoal, and PrecisionGoal.
In[9]:= p = 4; atol = 1.*^-4; rtol = 1.*^-4;
The error estimate is based on Richardson extrapolation. If you know that the error is O(h^p) and you have two approximations y1 and y2 at different step sizes h1 and h2, then you can, in effect, extrapolate to the limit h -> 0 to get an error estimate

    y1 - y2 = (c h1^p + y) - (c h2^p + y) = c h1^p (1 - (h2/h1)^p)

so the error in y1 is estimated to be

    ||y1 - y|| ≈ c h1^p = ||y1 - y2|| / (1 - (h2/h1)^p)    (1)
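As a concrete illustration of formula (1) outside Mathematica, here is a minimal Python sketch (the test function, evaluation point, and step sizes are arbitrary choices for the example, not from the tutorial): a first-order forward-difference approximation of cos(1) is computed at two step sizes with h1 = 2 h2, and the Richardson formula estimates the error in the cruder approximation y1.

```python
import math

def forward_diff(f, x, h):
    # First-order accurate approximation of f'(x): error is ~ c*h, so p = 1
    return (f(x + h) - f(x)) / h

p = 1                      # order of the approximation
h1, h2 = 0.1, 0.05         # two step sizes with h1 = 2*h2
y1 = forward_diff(math.sin, 1.0, h1)
y2 = forward_diff(math.sin, 1.0, h2)

# Richardson estimate of the error in y1:  |y1 - y2| / (1 - (h2/h1)^p)
est = abs(y1 - y2) / (1 - (h2 / h1) ** p)
actual = abs(y1 - math.cos(1.0))   # true error, known here since f' = cos
print(est, actual)                 # the two values agree to about 1%
```

The estimate tracks the true error closely because the leading error term c h dominates at these step sizes; for very coarse h the asymptotic assumption behind the formula breaks down, a caveat the tutorial returns to below.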
Here y1 and y2 are vectors of different lengths and y is a function, so you need to choose an appropriate norm. If you choose h1 = 2 h2, then you can simply use a scaled norm on the components common to both vectors, which is all of y1 and every other point of y2. This is a good choice because it does not require any interpolation between grids. For a given interval on which you want to set up a uniform grid, you can define a function h(n) = L/n, where L is the length of the interval, such that the grid is {x0, x1, ..., xn}, where x_j = x0 + j h(n). This defines functions that return the step size h and a list of grid points as a function of n for the sine-Gordon equation.
In[12]:=
Clear[h, grid];
h[n_] := 10/n;
grid[n_] := N[-5 + Range[0, n]*h[n]];
For a given grid, the equation can be discretized using finite differences. This is easily done using NDSolve`FiniteDifferenceDerivative. This defines a symbolic discretization of the right-hand side of the sine-Gordon equation as a function of a grid. It returns a function of u and v, which gives the approximate values for u_t and v_t in a list. (Note that in principle this works for any grid, uniform or not, though in the following, only uniform grids will be used.)
In[15]:= sdrhs[grid_] := Block[{app, u, v},
   app = rhs /. Derivative[i_, 0][var : (u | v)][x, t] :>
      NDSolve`FiniteDifferenceDerivative[i, grid,
        "DifferenceOrder" -> p, PeriodicInterpolation -> True][var];
   app = app /. (var : (u | v))[x, t] :> var;
   Function[{u, v}, Evaluate[app]]]
For a given step size and grid, you can also discretize the initial conditions for u and v.
This defines a function that discretizes the initial conditions for u and v. The last grid point is dropped because, by periodic continuation, it is considered the same as the first.
In[16]:= dinit[n_] := Transpose[Map[Function[{x}, {Exp[-x^2], 0.}], Drop[grid[n], -1]]]
The quantity of interest is the approximation of the right-hand side for a particular value of n with this initial condition. This defines a function that returns a vector consisting of the approximation of the right-hand side of the equation for the initial condition for a given step size and grid. The vector is flattened to make subsequent analysis of it simpler.
In[17]:= rhsinit[n_] := Flatten[Apply[sdrhs[grid[n]], dinit[n]]]
Starting with a particular value of n, you can obtain the error estimate by generating the right-hand side for n and 2 n points. This gives an example of the right-hand side approximation vector for a grid with 10 points.
In[18]:= rhsinit[10]
Out[18]= {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.0000202683, -0.00136216, -0.00666755, 0.343233, 0.0477511, -2.36351, 0.0477511, 0.343233, -0.00666755, -0.00136216}
This gives an example of the right-hand side approximation vector for a grid with 20 points.
In[19]:= rhsinit[20]
Out[19]= {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -5.80538×10^-8, -1.01297×10^-6, -0.0000168453, -0.0000373357, 0.00285852, 0.0419719, 0.248286, 0.640267, 0.337863, -1.48981, -2.77952, -1.48981, 0.337863, 0.640267, 0.248286, 0.0419719, 0.00285852, -0.0000373357, -0.0000168453, -1.01297×10^-6}
As mentioned earlier, every other point on the grid with 2 n points lies on the grid with n points. Thus, for simplicity, you can use a norm that only compares points common to both grids. Because the goal is ultimately to satisfy absolute and relative tolerance criteria, it is appropriate to use a scaled norm. In addition to taking into account the size of the right-hand side for the scaling, it is also important to include the size of the corresponding components of u and v on the grid, since error in the right-hand side is ultimately included in u and v. This defines a norm function for the difference in the approximation of the right-hand side. (The body shown here is a plausible completion of a definition truncated in the source; the componentwise scaling combines atol, rtol, and the sizes of the solution and right-hand side components.)
In[20]:= dnorm[rhsn_, rhs2n_, uv_] := Module[{rhs2 = Take[rhs2n, {1, -1, 2}]},
   Norm[(rhsn - rhs2)/(atol + rtol MapThread[Max, {Abs[uv], Abs[rhsn]}])]/Sqrt[Length[rhsn]]]
This applies the norm function to the two approximations found.
In[21]:= dnorm[rhsinit[10], rhsinit[20], Flatten[dinit[10]]]
Out[21]= 2168.47
To get the error estimate from the distance, according to the Richardson extrapolation formula (1), this simply needs to be divided by (1 - (h2/h1)^p) = (1 - 2^-p). This computes the error estimate for n == 10. Since this is based on a scaled norm, the tolerance criteria are satisfied if the result is less than 1.
In[22]:= %/(1 - 2^-p)
Out[22]= 2313.04
This makes a function that combines the earlier functions to give an error estimate as a function of n.
In[23]:= errest[n_] := dnorm[rhsinit[n], rhsinit[2 n], Flatten[dinit[n]]]/(1 - 2^-p)
The goal is to find the minimum value of n such that the error estimate is less than or equal to 1 (since it is based on a scaled norm). In principle, it would be possible to use a root-finding algorithm on this, but since n can only be an integer, this would be overkill, and adjustments would have to be made to the stopping conditions. An easier solution is simply to use the Richardson extrapolation formula to predict what value of n would be appropriate and repeat the prediction process until an appropriate n is found. The condition to satisfy is

    c h_opt^p = 1

and you have estimated that c h(n)^p ≈ errest(n), so you can project that

    h_opt ≈ h(n) (1/errest(n))^(1/p)

or, in terms of n, which is proportional to the reciprocal of h,

    n_opt ≈ ⌈n errest(n)^(1/p)⌉
This computes the predicted optimal value of n based on the error estimate for n == 10 computed earlier.
In[24]:= Ceiling[10 errest[10]^(1/p)]
Out[24]= 70
This computes the error estimate for the new value of n.
In[25]:= errest[%]
Out[25]= 3.75253
It is often the case that a prediction based on a very coarse grid will be inadequate. A coarse grid may completely miss some solution features that contribute to the error on a finer grid. Also, the error estimate is based on an asymptotic formula, so for coarse spacings, the estimate itself may not be very good, even when all the solution features are resolved to some extent. In practice, it can be fairly expensive to compute these error estimates. Also, it is not necessary to find the very optimal n, but one that satisfies the error estimate. Remember, everything can change as the PDE evolves, so it is simply not worth a lot of extra effort to find an optimal spacing for just the initial time. A simple solution is to include an extra factor greater than 1 in the prediction formula for the new n. Even with an extra factor, it may still take a few iterations to get to an acceptable value. It does, however, typically make the process faster. This defines a function that gives a predicted value for the number of grid points, which should satisfy the error estimate.
In[26]:= pred[n_] := Ceiling[1.05 n errest[n]^(1/p)]

This iterates the predictions until a value is found that satisfies the error estimate.
In[27]:= NestWhileList[pred, 10, (errest[#] > 1) &]
Out[27]= {10, 73, 100}
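The whole selection loop can be sketched in a few lines of Python for a model problem. This is an illustration of the idea, not of NDSolve's internals: it applies fourth-order periodic finite differences to u''(x) for u(x) = exp(-x^2) on [-5, 5), with the tutorial's p = 4, tolerances of 10^-4, and the 1.05 safety factor; the scaled norm here uses only the right-hand-side components, a simplification of the tutorial's dnorm.

```python
import math

p = 4                      # spatial difference order
atol = rtol = 1e-4         # absolute and relative tolerances
L, x0 = 10.0, -5.0         # interval [-5, 5), treated as periodic

def rhs_approx(n):
    # 4th-order central differences for u'' on a uniform periodic grid
    h = L / n
    u = [math.exp(-(x0 + j * h) ** 2) for j in range(n)]
    return [(-u[(j - 2) % n] + 16 * u[(j - 1) % n] - 30 * u[j]
             + 16 * u[(j + 1) % n] - u[(j + 2) % n]) / (12 * h * h)
            for j in range(n)]

def errest(n):
    # Scaled-norm distance between grids of n and 2n points, compared only
    # at the common points, divided by 1 - 2^-p per the Richardson formula
    c, f = rhs_approx(n), rhs_approx(2 * n)[::2]
    s = sum(((ci - fi) / (atol + rtol * abs(ci))) ** 2
            for ci, fi in zip(c, f))
    return math.sqrt(s / len(c)) / (1 - 2.0 ** -p)

def pred(n):
    # Predict a grid size that should bring the estimate down to about 1,
    # with a 5% safety factor as in the tutorial
    return math.ceil(1.05 * n * errest(n) ** (1.0 / p))

n, trace = 10, [10]
while errest(n) > 1:
    n = pred(n)
    trace.append(n)
print(trace)   # grid sizes tried until the scaled estimate drops below 1
```

As in the Mathematica session, a couple of iterations usually suffice once the initial profile is resolved at all; the exact numbers depend on the model problem and are not meant to reproduce Out[27].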
It is important to note that this process must have a limiting value, since it may not be possible to satisfy the error tolerances, for example, with a discontinuous initial function. In NDSolve, the MaxSteps option provides the limit; for spatial discretization, this defaults to a total of 10000 across all spatial dimensions. Pseudospectral derivatives cannot use this error estimate since they have exponential rather than polynomial convergence. An estimate can be made based on the formula used earlier in the limit p -> Infinity. What this amounts to is considering the result on the finer grid to be exact and basing the error estimate on the difference, since 1 - 2^-p approaches 1. A better approach is to use the fact that on a given grid with n points, the pseudospectral method is O(h^n). When comparing results for two grids, it is appropriate to use the smaller n for p. This provides an imperfect, but adequate, estimate for the purpose of determining grid size. This modifies the error estimation function so that it will work with pseudospectral derivatives.
In[28]:=
errest[n_] := dnorm[rhsinit[n], rhsinit[2 n], Flatten[dinit[n]]]/(1 - 2^-If[p === "Pseudospectral", n, p])
The prediction formula can be modified to use n instead of p in a similar way. This modifies the function predicting an appropriate value of n to work with pseudospectral derivatives. This formulation does not try to pick an efficient FFT length.
In[29]:= pred[n_] := Ceiling[1.05 n errest[n]^(1/If[p === "Pseudospectral", n, p])]
When finalizing the choice of n for a pseudospectral method, an additional consideration is to choose a value that not only satisfies the tolerance conditions, but is also an efficient length for computing FFTs. In Mathematica, an efficient FFT does not require a power-of-two length, since the Fourier command has a prime-factor algorithm built in. Typically, the difference order has a profound effect on the number of points required to satisfy the error estimate. This makes a table of the number of points required to satisfy the a priori error estimate as a function of the difference order.
In[30]:= TableForm[Map[Block[{p = #}, {p, NestWhile[pred, 10, (errest[#] > 1) &]}] &,
   {2, 4, 6, 8, "Pseudospectral"}]]
Out[30]//TableForm=
DifferenceOrder    Number of points
2                  804
4                  100
6                  53
8                  37
Pseudospectral     24
A table of the number of points required as a function of difference order goes a long way toward explaining why the default setting for the method of lines is "DifferenceOrder" -> 4: the improvement from 2 to 4 is usually the most dramatic, and in the default tolerance range, fourth-order differences do not tend to produce large roundoff errors, which can be the case with higher orders. Pseudospectral differences are often a good choice, particularly with periodic boundary conditions, but they are not a good default because they lead to full Jacobian matrices, which can be expensive to generate and solve if needed for stiff equations.
For nonperiodic grids, the error estimate is done using only interior points. The reason is that the error coefficients for the derivatives near the boundary are different. This may miss features that are near the boundary, but the main idea is to keep the estimate simple and inexpensive since the evolution of the PDE may change everything anyway. For multiple spatial dimensions, the determination is made one dimension at a time. Since better resolution in one dimension may change the requirements for another, the process is repeated in reverse order to improve the choice.
A Posteriori Error Estimates

When the solution of a PDE is computed with NDSolve, a final step is to do a spatial error estimate on the evolved solution and issue a warning message if this is excessively large. These error estimates are done in a manner similar to the a priori estimates described previously. The only real difference is that, instead of using grids with n and 2 n points to estimate the error, grids with n/2 and n points are used. This is because, while there is no way to generate the values on a grid of 2 n points without using interpolation, which would introduce its own errors, values are readily available on a grid of n/2 points simply by taking every other value. This is easily done in the Richardson extrapolation formula by using h2 = 2 h1, which gives

    ||y1 - y|| ≈ ||y1 - y2|| / (2^p - 1)
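The a posteriori variant can be sketched in Python with the same kind of scalar example used for formula (1) (the test function, point, and step sizes are arbitrary illustrations, not from the tutorial). Here the coarser comparison value comes from doubling the step already in use, so the divisor becomes 2^p - 1.

```python
import math

def forward_diff(f, x, h):
    # First-order accurate approximation of f'(x): error is ~ c*h, so p = 1
    return (f(x + h) - f(x)) / h

p = 1
h = 0.05                               # the step size actually in use
y_fine = forward_diff(math.sin, 1.0, h)        # the computed value
y_coarse = forward_diff(math.sin, 1.0, 2 * h)  # "every other point" value

# Error estimate for the finer approximation: |y1 - y2| / (2^p - 1)
est = abs(y_fine - y_coarse) / (2 ** p - 1)
actual = abs(y_fine - math.cos(1.0))
print(est, actual)
```

Note the direction of the estimate: unlike the a priori form, where the error of the coarser approximation is bounded, here it is the error of the value already computed on the working grid that is estimated, using only data that is free to obtain.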
This defines a function (based on functions defined in the previous section) that can compute an error estimate on the solution of the sine-Gordon equation from solutions for u and v expressed as vectors. The function has been defined to be a function of the grid since this is applied to a grid already constructed. (Note that, as defined here, this only works for grids of even length. It is not difficult to handle odd lengths, but it makes the function somewhat more complicated. The body shown is a plausible completion of a definition truncated in the source.)
In[31]:= posterrest[{uvec_, vvec_}, grid_] := Module[{
    huvec = Take[uvec, {1, -1, 2}], hvvec = Take[vvec, {1, -1, 2}],
    hgrid = Take[grid, {1, -1, 2}]},
   dnorm[Flatten[Apply[sdrhs[hgrid], {huvec, hvvec}]],
     Flatten[Apply[sdrhs[grid], {uvec, vvec}]],
     Flatten[{huvec, hvvec}]]/(2^p - 1)]
This recomputes the solution of the sine-Gordon equation from the earlier example, using InterpolationOrder -> All.
In[41]:= ndsol = First[NDSolve[{D[u[x, t], t, t] == D[u[x, t], x, x] - Sin[u[x, t]],
    u[x, 0] == Exp[-x^2], Derivative[0, 1][u][x, 0] == 0, u[-5, t] == u[5, t]},
   u, {x, -5, 5}, {t, 0, 5}, InterpolationOrder -> All]]
Out[41]= {u -> InterpolatingFunction[{{-5., 5.}, {0., 5.}}, <>]}
This is the grid used in the spatial direction, which is the first set of coordinates used in the InterpolatingFunction. A grid with the last point dropped is used to obtain the values because of periodic continuation.
In[42]:= ndgrid = InterpolatingFunctionCoordinates[u /. ndsol][[1]]
        pgrid = Drop[ndgrid, -1];
Out[42]= {-5., -4.89583, -4.79167, -4.6875, -4.58333, -4.47917, -4.375, -4.27083, -4.16667, -4.0625, -3.95833, -3.85417, -3.75, -3.64583, -3.54167, -3.4375, -3.33333, -3.22917, -3.125, -3.02083, -2.91667, -2.8125, -2.70833, -2.60417, -2.5, -2.39583, -2.29167, -2.1875, -2.08333, -1.97917, -1.875, -1.77083, -1.66667, -1.5625, -1.45833, -1.35417, -1.25, -1.14583, -1.04167, -0.9375, -0.833333, -0.729167, -0.625, -0.520833, -0.416667, -0.3125, -0.208333, -0.104167, 0., 0.104167, 0.208333, 0.3125, 0.416667, 0.520833, 0.625, 0.729167, 0.833333, 0.9375, 1.04167, 1.14583, 1.25, 1.35417, 1.45833, 1.5625, 1.66667, 1.77083, 1.875, 1.97917, 2.08333, 2.1875, 2.29167, 2.39583, 2.5, 2.60417, 2.70833, 2.8125, 2.91667, 3.02083, 3.125, 3.22917, 3.33333, 3.4375, 3.54167, 3.64583, 3.75, 3.85417, 3.95833, 4.0625, 4.16667, 4.27083, 4.375, 4.47917, 4.58333, 4.6875, 4.79167, 4.89583, 5.}
This makes a function that gives the a posteriori error estimate at a particular numerical value of t.
In[44]:= peet[t_ ? NumberQ] := posterrest[{u[pgrid, t], Derivative[0, 1][u][pgrid, t]} /. ndsol, ndgrid]

This makes a plot of the a posteriori error estimate as a function of t.
In[45]:= Plot[peet[t], {t, 0, 5}, PlotRange -> All]
Out[45]= [plot of the a posteriori error estimate for 0 <= t <= 5; the values oscillate, reaching up to about 6]
The large amount of local variation seen in this function is typical. For that reason, NDSolve does not warn about excessive error unless this estimate gets above 10 (rather than the value of 1, which is used to choose the grid based on initial conditions). The extra factor of 10 is further justified by the fact that the a posteriori error estimate is less accurate than the a priori one. Thus, when NDSolve issues a warning message based on the a posteriori error estimate, it is usually because new solution features have appeared or because there is instability in the solution process.
This is an example with the same initial condition used in the earlier examples, but where NDSolve gives a warning message based on the a posteriori error estimate.
In[46]:= bsol = First[NDSolve[{D[u[x, t], t] == 0.01 D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
    u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 0, 4}]]
This shows a plot of the solution at t == 4. It is apparent that the warning message is appropriate because the oscillations near the peak are not physical.
In[47]:= Plot[u[x, 4] /. bsol, {x, -5, 5}, PlotRange -> All]
Out[47]= [plot of the solution at t == 4, showing spurious oscillations near the steep front]
When the NDSolve::eerr message does show up, it may be necessary for you to use options to control the grid-selection process, since it is likely that the default settings did not find an accurate solution.

Controlling the Spatial Grid Selection

The NDSolve implementation of the method of lines has several ways to control the selection of the spatial grid.
option name          default value
AccuracyGoal         Automatic      the number of digits of absolute tolerance for determining grid spacing
PrecisionGoal        Automatic      the number of digits of relative tolerance for determining grid spacing
"DifferenceOrder"    Automatic      the order of finite difference approximation to use for spatial discretization
Coordinates          Automatic      the list of coordinates for each spatial dimension, {{x1, x2, ...}, {y1, y2, ...}, ...} for independent variable dimensions x, y, ...; this overrides the settings for all the options following in this list
MinPoints            Automatic      the minimum number of points to be used for each dimension in the grid; for Automatic, the value will be determined by the minimum number of points needed to compute an error estimate for the given difference order
MaxPoints            Automatic      the maximum number of points to be used in the grid
StartingPoints       Automatic      the number of points with which to begin the process of grid refinement using the a priori error estimates
MinStepSize          Automatic      the minimum grid spacing to use
MaxStepSize          Automatic      the maximum grid spacing to use
StartingStepSize     Automatic      the grid spacing with which to begin the process of grid refinement using the a priori error estimates

Tensor product grid options for the method of lines.
All the options for tensor product grid discretization can be given as a list with length equal to the number of spatial dimensions, in which case the parameter for each spatial dimension is determined by the corresponding component of the list. With the exception of pseudospectral methods on nonperiodic problems, discretization is done with uniform grids, so when solving a problem on an interval of length L, there is a direct correspondence between the Points and StepSize options:

    MaxPoints -> n       <==>  MaxStepSize -> L/n
    MinPoints -> n       <==>  MinStepSize -> L/n
    StartingPoints -> n  <==>  StartingStepSize -> L/n
The StepSize options are effectively converted to the equivalent Points values. They are simply provided for convenience, since sometimes it is more natural to relate the problem specification to step size rather than the number of discretization points. When values other than Automatic are specified for both a Points option and the corresponding StepSize option, generally the more stringent restriction is used. In the previous section an example was shown where the solution was not resolved sufficiently because the solution steepened as it evolved. The examples that follow will show some different ways of modifying the grid parameters so that the near-shock is better resolved. One way to avoid the oscillations that showed up in the solution as the profile steepened is to make sure that you use sufficient points to resolve the profile at its steepest. In the one-hump solution of Burgers' equation,

    u_t + u u_x = ν u_xx

it can be shown [W76] that the width of the shock profile is proportional to ν as ν -> 0. More than 95% of the change is included in a layer of width 10 ν. Thus, if you pick a maximum step size of half the profile width, there will always be a point somewhere in the steep part of the profile, and there is a hope of resolving it without significant oscillation. This computes the solution to Burgers' equation such that there are sufficient points to resolve the shock profile.
In[48]:=
ν = 0.01;
bsol2 = First[NDSolve[{D[u[x, t], t] == ν D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
    u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 0, 4},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "MaxStepSize" -> 10 ν/2}}]]
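The arithmetic behind this step-size choice is worth spelling out; the following short Python check just restates the numbers from the text (shock layer of width 10 ν, maximum step of half that width, interval [-5, 5] of length 10):

```python
nu = 0.01
width = 10 * nu            # layer containing >95% of the change
h_max = width / 2          # maximum step: half the profile width
n_min = round(10 / h_max)  # grid points implied on the length-10 interval
print(h_max, n_min)        # 0.05 and 200
```

So the MaxStepSize setting above implicitly demands at least 200 uniform grid points, which is why the profile is always sampled inside its steep part.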
Note that resolving the profile alone is by no means sufficient to meet the default tolerances of NDSolve, which require an accuracy of 10^-4. However, once you have sufficient points to resolve the basic profile, it is not unreasonable to project based on the a posteriori error estimate shown in the NDSolve::eerr message (with an extra 10% since, after all, it is just a projection).
This computes the solution to Burgers' equation with the maximum step size chosen so that it should be small enough to meet the default error tolerances, based on a projection from the error of the previous calculation.
In[50]:= ν = 0.01;
bsol3 = First[NDSolve[{D[u[x, t], t] == ν D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
    u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 0, 4},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "MaxStepSize" -> (10 ν/2)/(1.1*85)^(1/4)}}]]
Out[51]= {u -> InterpolatingFunction[{{-5., 5.}, {0., 4.}}, <>]}
To compare solutions like this, it is useful to look at a plot of the solution only at the spatial grid points. Because the grid points are stored as part of the InterpolatingFunction, it is fairly simple to define a function that does this. This defines a function that plots a solution only at the spatial grid points at a time t.
In[52]:= GridPointPlot[{u -> if_InterpolatingFunction}, t_, opts___] :=
   Module[{grid = InterpolatingFunctionCoordinates[if][[1]]},
    ListPlot[Transpose[{grid, if[grid, t]}], opts]]
This compares the three solutions of Burgers' equation at the grid points at t = 4.
In[53]:= Show[Block[{t = 4}, {GridPointPlot[bsol3, 4],
     GridPointPlot[bsol2, 4, PlotStyle -> Hue[1/3]],
     GridPointPlot[bsol, 4, PlotStyle -> Hue[1]]}], PlotRange -> All]
Out[53]= [plot of the three solutions at their grid points; the underresolved solution shows oscillations near the front]
In this example, the left-hand side of the domain really does not need so many points. The points need to be clustered where the steep profile evolves, so it might make sense to consider explicitly specifying a grid that has more points where the profile appears.
This solves Burgers' equation on a specified grid that has most of its points to the right of x = 1.
In[54]:= mygrid = Join[-5. + 10 Range[0, 48]/80, 1. + Range[1, 4*70]/70];
ν = 0.01;
bsolg = First[NDSolve[{D[u[x, t], t] == ν D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
    u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 0, 4},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "Coordinates" -> {mygrid}}}]]
This makes a plot of the values of the solution at the assigned spatial grid points.
In[57]:= GridPointPlot[bsolg, 4]
Out[57]= [plot of the solution at the specified grid points; the clustered points resolve the steep front]
Many of the same principles apply to multiple spatial dimensions. Burgers' equation in two dimensions with anisotropy provides a good example. This solves a variant of Burgers' equation in two dimensions with different velocities in the x and y directions.
In[58]:= ν = 0.075;
sol1 = First[NDSolve[{D[u[t, x, y], t] == ν (D[u[t, x, y], x, x] + D[u[t, x, y], y, y]) -
      u[t, x, y] (2 D[u[t, x, y], x] - D[u[t, x, y], y]),
    u[0, x, y] == Exp[-(x^2 + y^2)], u[t, -4, y] == u[t, 4, y], u[t, x, -4] == u[t, x, 4]},
   u, {t, 0, 2}, {x, -4, 4}, {y, -4, 4}]]
Out[59]= {u -> InterpolatingFunction[{{0., 2.}, {-4., 4.}, {-4., 4.}}, <>]}
This shows a surface plot of the leading edge of the solution at t = 2. In[60]:=
Plot3D@u@2, x, yD ê. sol1, 8x, 0, 4<, 8y, - 4, 0<, PlotRange Ø AllD
0.4
Out[60]=
0
0.2 0.0
–1 0
–2 1
–3
2 3 4
–4
Similar to the one-dimensional case, the leading edge steepens. Since the viscosity term (ν) is larger, the steepening is not quite so extreme, and this default solution actually resolves the front reasonably well. Therefore it should be possible to project from the error estimate to meet the default tolerances. A simple scaling argument indicates that the profile width in the x direction will be narrower than in the y direction by a factor of Sqrt[2]. Thus, it makes sense that the step sizes in the y direction can be larger than those in the x direction by this factor, or, correspondingly, that the minimum number of points in the y direction can be smaller by a factor of 1/Sqrt[2].
This solves the two-dimensional variant of Burgers' equation with appropriate step-size restrictions in the x and y directions, projected from the a posteriori error estimate of the previous computation, which was done with 69 points in the x direction.
In[61]:= ν = 0.075;
sol2 = First[NDSolve[{D[u[t, x, y], t] == ν (D[u[t, x, y], x, x] + D[u[t, x, y], y, y]) -
      u[t, x, y] (2 D[u[t, x, y], x] - D[u[t, x, y], y]),
    u[0, x, y] == Exp[-(x^2 + y^2)], u[t, -4, y] == u[t, 4, y], u[t, x, -4] == u[t, x, 4]},
   u, {t, 0, 2}, {x, -4, 4}, {y, -4, 4},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "MinPoints" -> Ceiling[{1, 1/Sqrt[2]} 69 3^(1/4)]}}]]
Out[62]= {u -> InterpolatingFunction[{{0., 2.}, {-4., 4.}, {-4., 4.}}, <>]}
This solution takes a substantial amount of time to compute, which is not surprising since it involves solving a system of more than 18000 ODEs. In many cases, particularly in more than one spatial dimension, the default tolerances may be unrealistic to achieve, so you may have to reduce them by using AccuracyGoal and/or PrecisionGoal appropriately. Sometimes, especially with the coarser grids that come with less stringent tolerances, the systems are not stiff, and it is possible to use explicit methods that avoid the numerical linear algebra, which can be problematic, especially for higher-dimensional problems. For this example, using Method -> ExplicitRungeKutta gets the solution in about half the time.

Any of the other grid options can be specified as a list giving the values for each dimension. When only a single value is given, it is used for all the spatial dimensions. The two exceptions to this are MaxPoints, where a single value is taken to be the total number of grid points in the outer product, and Coordinates, where a grid must be specified explicitly for each dimension. This chooses parts of the grid from the previous solutions so that it is more closely spaced where the front is steeper.
In[63]:=
ν = 0.075;
xgrid = Join[Select[Part[u /. sol1, 3, 2], Negative], {0.}, Select[Part[u /. sol2, 3, 2], Positive]];
ygrid = Join[Select[Part[u /. sol2, 3, 3], Negative], {0.}, Select[Part[u /. sol1, 3, 3], Positive]];
sol3 = First[NDSolve[{D[u[t, x, y], t] == ν (D[u[t, x, y], x, x] + D[u[t, x, y], y, y]) -
      u[t, x, y] (2 D[u[t, x, y], x] - D[u[t, x, y], y]),
    u[0, x, y] == Exp[-(x^2 + y^2)], u[t, -4, y] == u[t, 4, y], u[t, x, -4] == u[t, x, 4]},
   u, {t, 0, 2}, {x, -4, 4}, {y, -4, 4},
   Method -> {"MethodOfLines", "SpatialDiscretization" ->
      {"TensorProductGrid", "Coordinates" -> {xgrid, ygrid}}}]]
Out[65]= {u -> InterpolatingFunction[{{0., 2.}, {-4., 4.}, {-4., 4.}}, <>]}
It is important to keep in mind that the a posteriori spatial error estimates are simply estimates of the local error in computing spatial derivatives and may not reflect the actual accumulated spatial error for a given solution. One way to get an estimate of the actual spatial error is to compute the solution to very stringent tolerances in time for different spatial grids. To show how this works, consider again the simpler one-dimensional Burgers' equation. This computes a list of solutions using {33, 65, ..., 4097} spatial grid points to compute the solution to Burgers' equation for difference orders 2, 4, and 6 and pseudospectral. The temporal accuracy and precision tolerances are set very high so that essentially all of the error comes from the spatial discretization. Note that by specifying {t, 4, 4} in NDSolve, only the solution at t = 4 is saved. Without this precaution, some of the solutions for the finer grids (which take many more time steps) could exhaust available memory. Even so, the list of solutions takes a substantial amount of time to compute.
In[66]:= ν = 0.01;
solutions = Map[Table[n = 2^i + 1;
     u /. First[NDSolve[{D[u[x, t], t] == ν D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
        u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]}, u, {x, -5, 5}, {t, 4, 4},
       AccuracyGoal -> 10, PrecisionGoal -> 10, MaxSteps -> Infinity,
       Method -> {"MethodOfLines", "SpatialDiscretization" ->
          {"TensorProductGrid", "DifferenceOrder" -> #, AccuracyGoal -> 0,
           PrecisionGoal -> 0, "MaxPoints" -> n, "MinPoints" -> n}}]],
    {i, 5, 12}] &, {2, 4, 6, "Pseudospectral"}];
Given two solutions, a comparison needs to be done between the two. To keep out any sources of error except for that in the solutions themselves, it is best to use the data that is interpolated to make the InterpolatingFunction. This can be done by using points common to the two solutions. This defines a function to estimate error by comparing two different solutions at the points common to both. The arguments coarse and fine should be the solutions on the coarser and finer grids, respectively. This works for the solutions generated earlier with grid spacing varying by powers of 2. In[68]:=
Clear[errfun];
errfun[t_, coarse_InterpolatingFunction, fine_InterpolatingFunction] :=
  Module[{cgrid = InterpolatingFunctionCoordinates[coarse][[1]], c, f},
   c = coarse[cgrid, t]; f = fine[cgrid, t];
   Norm[f - c, Infinity]/Length[cgrid]]
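The idea behind errfun carries over directly to any language: when the grid sizes differ by a power of 2 (here 2n - 1 points versus n points on the same interval), the coarse grid's points are a subset of the fine grid's, so no interpolation is needed. Here is a small Python sketch with made-up sample data (sin on [0, pi]), using the same scaled inf-norm:

```python
import math

def errfun(coarse, fine):
    # coarse has n points; fine has 2*(n-1)+1 points on the same interval,
    # so every other fine value sits exactly on a coarse grid point
    common = fine[::2]
    assert len(common) == len(coarse)
    return max(abs(c - f) for c, f in zip(coarse, common)) / len(coarse)

# Example data: the same function sampled on 5-point and 9-point grids
coarse = [math.sin(math.pi * j / 4) for j in range(5)]
fine = [math.sin(math.pi * j / 8) for j in range(9)]
print(errfun(coarse, fine))   # 0.0 here, since the samples agree exactly
```

With samples of the same exact function the difference is zero, which confirms the grids line up; for two numerical solutions of differing accuracy, the returned value estimates the error in the coarser one.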
To get an indication of the general trend in error (in cases of instability, solutions do not converge, so this does not assume that), you can compare the difference of successive pairs of solutions. This defines a function that will plot a sequence of error estimates for the successive solutions found for a given difference order and uses it to make a logarithmic plot of the estimated error as a function of the number of grid points. In[69]:=
Clear[errplot];
errplot[t_, sols : {_InterpolatingFunction ..}, opts___] := Module[{errs, lens},
   errs = MapThread[errfun[t, ##] &, Transpose[Partition[sols, 2, 1]]];
   lens = Map[Length, Drop[sols[[All, 3, 1]], -1]];
   ListLogLogPlot[Transpose[{lens, errs}], opts]]
This shows the error plots for all of the difference orders together.
In[71]:= colors = {RGBColor[1, 0, 0], RGBColor[0, 1, 0], RGBColor[0, 0, 1], RGBColor[0, 0, 0]};
Show[MapThread[errplot[4, #1, PlotStyle -> #2] &, {solutions, colors}]]
Out[71]= [log-log plot of the estimated error versus the number of grid points, roughly 100 to 2000 points with errors decreasing to about 10^-6]
A logarithmic plot of the maximum spatial error in approximating the solution of Burgers' equation at t = 4 as a function of the number of grid points. Finite differences of order 2, 4, and 6 on a uniform grid are shown in red, green, and blue, respectively. Pseudospectral derivatives with uniform (periodic) spacing are shown in black.
The upper-left part of the plot shows the grids where the profile is not adequately resolved, so differences are simply of magnitude order 1 (it would be a lot worse if there were instability). However, once there are a sufficient number of points to resolve the profile without oscillation, convergence becomes quite rapid. Not surprisingly, the slope of the logarithmic line is -4, which corresponds to the difference order NDSolve uses by default. If your grid is fine enough to be in the asymptotically converging part, a simpler error estimate could be effected by using Richardson extrapolation as in the previous two sections, but on the overall solution rather than the local error. On the other hand, computing more values and viewing a plot gives a better indication of whether you are in the asymptotic regime or not. It is fairly clear from the plot that the best solution computed is the pseudospectral one with 2049 points (the one with more points was not computed because its spatial accuracy far exceeds the temporal tolerances that were set). This solution can, in effect, be treated almost as an exact solution, at least up to error tolerances of 10^-9 or so. To get a perspective on how best to solve the problem, it is useful to do the following: for each solution found that was at least a reasonable approximation, recompute it with the temporal accuracy tolerance set to be comparable to the possible spatial accuracy of the solution, and plot the resulting accuracy as a function of solution time. The following (somewhat complicated) commands do this.
Advanced Numerical Differential Equation Solving in Mathematica
239
This identifies the "best" solution that will be used, in effect, as an exact solution in the computations that follow. It is dropped from the list of solutions it will be compared against, since comparing it to itself would be meaningless.

In[72]:= best = Last[Last[solutions]];
   solutions[[-1]] = Drop[solutions[[-1]], -1];

This defines a function that, given a difference order, do, and a solution, sol, computed with that difference order, recomputes it with a local temporal tolerance slightly more stringent than the actual spatial accuracy achieved, if that accuracy is sufficient. The function output is a list of {number of grid points, difference order, time to compute in seconds, actual error of the recomputed solution}.
In[74]:= TimeAccuracy[do_][sol_] := Block[{tol, ag, n, solt, Second = 1},
   tol = errfun[4, sol, best];
   ag = -Log[10., tol];
   If[ag < 2, $Failed,
    n = Length[sol[[3, 1]]];
    secs = First[Timing[solt = First[u /. NDSolve[
         {D[u[x, t], t] == ν D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
          u[x, 0] == Exp[-x^2], u[-5, t] == u[5, t]},
         u, {x, -5, 5}, {t, 4, 4},
         AccuracyGoal -> ag + 1, PrecisionGoal -> Infinity, MaxSteps -> Infinity,
         Method -> {"MethodOfLines", "SpatialDiscretization" ->
            {"TensorProductGrid", "DifferenceOrder" -> do,
             AccuracyGoal -> 0, PrecisionGoal -> 0,
             "MaxPoints" -> n, "MinPoints" -> n}}]]]];
    {n, do, secs, errfun[4, solt, best]}]]
In[75]:= results = MapThread[Map[TimeAccuracy[#1], #2] &,
    {{2, 4, 6, "Pseudospectral"}, solutions}]
Out[75]= {{$Failed, $Failed, {129, 2, 0.06, 0.00432122}, {257, 2, 0.12, 0.000724265},
   {513, 2, 0.671, 0.0000661853}, {1025, 2, 1.903, 4.44696*10^-6},
   {2049, 2, 5.879, 3.10464*10^-7}, {4097, 2, 17.235, 2.4643*10^-8}},
  {$Failed, {65, 4, 0.02, 0.00979942}, {129, 4, 0.1, 0.00300281},
   {257, 4, 0.161, 0.000213248}, {513, 4, 1.742, 6.02345*10^-6},
   {1025, 4, 5.438, 1.13695*10^-7}, {2049, 4, 43.793, 2.10218*10^-9},
   {4097, 4, 63.551, 6.48318*10^-11}},
  {$Failed, {65, 6, 0.03, 0.00853295}, {129, 6, 0.14, 0.00212781},
   {257, 6, 0.37, 0.0000935051}, {513, 6, 1.392, 1.1052*10^-6},
   {1025, 6, 7.14, 6.38732*10^-9}, {2049, 6, 35.121, 3.22349*10^-11},
   {4097, 6, 89.809, 2.15934*10^-11}},
  {{33, Pseudospectral, 0.02, 0.00610004}, {65, Pseudospectral, 0.03, 0.00287949},
   {129, Pseudospectral, 0.08, 0.000417946}, {257, Pseudospectral, 0.22, 3.72935*10^-6},
   {513, Pseudospectral, 2.063, 2.28232*10^-9}, {1025, Pseudospectral, 544.974, 8.81844*10^-13}}}
This removes the cases that were not recomputed and makes a logarithmic plot of accuracy as a function of computation time.

In[76]:= fres = Map[DeleteCases[#, $Failed] &, results];
   ListLogLogPlot[fres[[All, All, {3, 4}]], ...
Out[76]= (log-log plot of error versus computation time, with each point labeled by the number of spatial grid points)

A logarithmic plot of the error in approximating the solution of Burgers' equation at t = 4 as a function of the computation time. Each point shown indicates the number of spatial grid points used to compute the solution. Finite differences of order 2, 4, and 6 on a uniform grid are shown in red, blue, and green, respectively. Pseudospectral derivatives with uniform (periodic) spacing are shown in black. Note that the cost of the pseudospectral method jumps dramatically from 513 to 1025 points. This is because the method has switched to the stiff solver, which is very expensive with the dense Jacobian produced by the discretization.
The resulting graph demonstrates quite forcefully that, when they work, as in this case, periodic pseudospectral approximations are incredibly efficient. Otherwise, up to a point, the higher the difference order, the better the approximation will generally be. These are all features of smooth problems, of which this particular instance of Burgers' equation is one. However, the higher-order solutions would generally be quite poor if you went toward the limit ν -> 0. One final point to note is that the above graph was computed using the Automatic method for the temporal direction. This uses LSODA, which switches between a stiff and a nonstiff method depending on how the solution evolves. For the coarser grids, strictly explicit methods are typically a bit faster, and, except for the pseudospectral case, the implicit BDF methods are faster for the finer grids. A variety of alternative ODE methods are available in NDSolve.
Error at the Boundaries

The a priori error estimates are computed in the interior of the computational region because the differences used there all have consistent error terms that can be used to effectively estimate the number of points to use. Including the boundaries in the estimates would complicate the process beyond what is justified for such an a priori estimate. Typically, this approach is successful in keeping the error under reasonable control. However, there are a few cases that can lead to difficulties. Because the error terms are larger for the one-sided derivatives used at the boundaries, NDSolve may occasionally detect an inconsistency between the boundary and initial conditions that is merely an artifact of the discretization error.

This solves the one-dimensional heat equation with the left end held at constant temperature and the right end radiating into free space.

In[2]:=
solution = First[NDSolve[{
     D[u[x, t], t] == D[u[x, t], x, x],
     u[x, 0] == 1 - Sin[4 Pi x]/(4 Pi),
     u[0, t] == 1,
     u[1, t] + Derivative[1, 0][u][1, t] == 0},
    u, {x, 0, 1}, {t, 0, 1}]]

NDSolve::ibcinc : Warning: Boundary and initial conditions are inconsistent.

Out[2]= {u -> InterpolatingFunction[{{0., 1.}, {0., 1.}}, <>]}
The NDSolve::ibcinc message is issued, in this case, due entirely to the larger discretization error at the right boundary. For this particular example, the extra error is not a problem because it gets damped out due to the nature of the equation. However, it is possible to eliminate the message by using just a few more spatial points.

This computes the solution to the same equation as above, but using a minimum of 50 points in the x direction.

In[3]:=
solution = First[NDSolve[{
     D[u[x, t], t] == D[u[x, t], x, x],
     u[x, 0] == 1 - Sin[4 Pi x]/(4 Pi),
     u[0, t] == 1,
     u[1, t] + Derivative[1, 0][u][1, t] == 0},
    u, {x, 0, 1}, {t, 0, 1},
    Method -> {"MethodOfLines",
      "SpatialDiscretization" -> {"TensorProductGrid", MinPoints -> 50}}]]
One other case where error problems at the boundary can affect the discretization unexpectedly is when periodic boundary conditions are given with a function that is not truly periodic, so that an unintended discontinuity is introduced into the computation.
This begins the computation of the solution to the sine-Gordon equation with a Gaussian initial condition and periodic boundary conditions. The NDSolve command is wrapped with TimeConstrained since solving the given problem can take a very long time and a large amount of system memory. In[4]:=
L = 1;
TimeConstrained[
 sol1 = First[NDSolve[{
     D[u[t, x], t, t] == D[u[t, x], x, x] - Sin[u[t, x]],
     u[0, x] == Exp[-x^2], Derivative[1, 0][u][0, x] == 0,
     u[t, -1] == u[t, 1]},
    u, {t, 0, 1}, {x, -1, 1}, Method -> StiffnessSwitching]], 10]

NDSolve::mxsst : Using maximum number of grid points 10000 allowed by the MaxPoints or MinStepSize options for independent variable x.

Out[5]= $Aborted
The problem here is that the initial condition is effectively discontinuous when the periodic continuation is taken into account. This shows a plot of the initial condition over the extent of three full periods.

In[6]:= Plot[Exp[-(Mod[x + 1, 2] - 1)^2], {x, -3, 3}]
Out[6]= (plot of the periodic continuation of the initial condition over three periods)
Since there is always a large derivative error at the cusps, NDSolve is forced to use the maximum number of points in an attempt to satisfy the a priori error bound. To make matters worse, the extreme change makes solving the resulting ODEs more difficult, leading to a very long solution time that uses a lot of memory. If the discontinuity is really intended, you will typically want to specify a number of points or a spacing for the spatial grid that is sufficient to handle the aspects of the discontinuity you are interested in. Modeling discontinuities with high accuracy typically requires specialized methods that are beyond the scope of the general methods that NDSolve provides. On the other hand, if the discontinuity was unintended, say in this example by simply choosing a computational domain that was too small, it can usually be fixed easily enough by extending the domain or by adding terms to smooth things between periods.
This solves the sine-Gordon problem on a computational domain large enough so that the discontinuity in the initial condition is negligible compared to the error allowed by the default tolerances. In[7]:=
L = 10;
Timing[sol2 = First[NDSolve[{
     D[u[t, x], t, t] == D[u[t, x], x, x] - Sin[u[t, x]],
     u[0, x] == Exp[-x^2], Derivative[1, 0][u][0, x] == 0,
     u[t, -L] == u[t, L]},
    u, {t, 0, 1}, {x, -L, L}]]]

Out[8]= {0.031, {u -> InterpolatingFunction[{{0., 1.}, {-10., 10.}}, <>]}}
Numerical Solution of Boundary Value Problems

"Shooting" Method

The shooting method works by considering the boundary conditions as a multivariate function of initial conditions at some point, reducing the boundary value problem to finding the initial conditions that give a root. The advantage of the shooting method is that it takes advantage of the speed and adaptivity of methods for initial value problems. The disadvantage of the method is that it is not as robust as finite difference or collocation methods: some initial value problems with growing modes are inherently unstable, even though the BVP itself may be quite well posed and stable.

Consider the BVP system

  X'(t) = F(t, X(t));  G(X(t1), X(t2), …, X(tn)) = 0,  t1 < t2 < … < tn

The shooting method looks for initial conditions X(t0) = c so that G = 0. Since you are varying the initial conditions, it makes sense to think of X = Xc as a function of them, so shooting can be thought of as finding c such that, with

  Xc'(t) = F(t, Xc(t));  Xc(t0) = c

the boundary conditions

  G(Xc(t1), Xc(t2), …, Xc(tn)) = 0

are satisfied. After setting up the function for G, the problem is effectively passed to FindRoot to find the initial conditions c giving the root. The default method is to use Newton's method, which involves computing the Jacobian. While the Jacobian can be computed using finite differences, the sensitivity of solutions of an IVP to its initial conditions may be too great to get reasonably accurate derivative values, so it is advantageous to compute the Jacobian as a solution to ODEs.
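The reduction just described — the boundary conditions become a function G of the unknown initial conditions, which is handed to a root finder — can be made concrete with a small sketch. The following is illustrative Python, not NDSolve's implementation: it uses a fixed-step RK4 integrator and a secant iteration (standing in for FindRoot's Newton iteration) on the hypothetical test problem x'' = -x, x(0) = 0, x(π/2) = 1, whose exact missing slope is x'(0) = 1:

```python
import math

def rk4(f, y, t0, t1, n=200):
    # Classical fixed-step 4th-order Runge-Kutta for the system y' = f(t, y).
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Test BVP: x'' = -x, x(0) = 0, x(pi/2) = 1; exact solution x = sin t.
def G(s):
    # Residual of the right boundary condition as a function of the
    # unknown initial slope s = x'(0); shooting seeks G(s) = 0.
    x_end, _ = rk4(lambda t, y: [y[1], -y[0]], [0.0, s], 0.0, math.pi / 2)
    return x_end - 1.0

# Secant iteration on s (a simple stand-in for the Newton step).
s0, s1 = 0.0, 1.5
for _ in range(30):
    g0, g1 = G(s0), G(s1)
    if abs(g1) < 1e-12 or g0 == g1:
        break
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
```

Because this particular G is linear in s, the iteration locks onto the root s = x'(0) = 1 after a single correction; for nonlinear problems the same loop simply iterates to convergence.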
Linearization and Newton's Method

Linear problems can be described by

  Xc'(t) = J(t) Xc(t) + F0(t);  Xc(t0) = c
  G(Xc(t1), Xc(t2), …, Xc(tn)) = B0 + B1 Xc(t1) + B2 Xc(t2) + … + Bn Xc(tn)

where J(t) is a matrix and F0(t) is a vector, both possibly depending on t, B0 is a constant vector, and B1, B2, …, Bn are constant matrices. Let Y = ∂Xc(t)/∂c. Then, differentiating both the IVP and the boundary conditions with respect to c gives

  Y'(t) = J(t) Y(t);  Y(t0) = I
  ∂G/∂c = B1 Y(t1) + B2 Y(t2) + … + Bn Y(tn)

Since G is linear when thought of as a function of c, you have G(c) = G(c0) + (∂G/∂c)(c - c0), so the value of c for which G(c) = 0 satisfies

  c = c0 - (∂G/∂c)^(-1) G(c0)

for any particular initial condition c0.

For nonlinear problems, let J(t) be the Jacobian for the nonlinear ODE system, and let Bi be the Jacobian of the ith boundary condition. Then computing ∂G/∂c for the linearized system gives the Jacobian for the nonlinear system at a particular initial condition, leading to the Newton iteration

  c_(n+1) = c_n - (∂G/∂c)^(-1)(c_n) G(c_n)
"StartingInitialConditions"

For boundary value problems, there is no guarantee of uniqueness as there is in the initial value problem case. "Shooting" will find only one solution. Just as you can affect the particular solution FindRoot finds for a system of nonlinear algebraic equations by changing the starting values, you can change the solution that "Shooting" finds by giving different initial conditions to start the iterations from.

"StartingInitialConditions" is an option of the "Shooting" method that allows you to specify the values and position of the initial conditions to start the shooting process from. The shooting method by default starts with zero initial conditions, so that if there is a zero solution, it will be returned.

This computes some simple solutions to the boundary value problem x'' + sin(x) == 0 with x(0) == x(10) == 0.

In[105]:= sols = Map[
    First[NDSolve[{x''[t] + Sin[x[t]] == 0, x[0] == x[10] == 0}, x, {t, 0, 10},
      Method -> {"Shooting",
        "StartingInitialConditions" -> {x[0] == 0, x'[0] == #}}]] &, ...
Out[106]= (plot of the computed solutions on 0 <= t <= 10)
By default, "Shooting" starts from the left side of the interval and shoots forward in time. There are cases where it is advantageous to go backwards, or even to start from a point somewhere in the middle of the interval. Consider the linear boundary value problem

  x'''(t) - 2 λ x''(t) - λ^2 x'(t) + 2 λ^3 x(t) = (λ^2 + π^2) (2 λ cos(π t) + π sin(π t))

with boundary conditions

  x(0) = 1 + (1 + e^(-2 λ) + e^(-λ))/(2 + e^(-λ)),  x(1) = 0,  x'(1) = (3 λ - λ e^(-λ))/(2 + e^(-λ))
that has the solution

  x(t) = (e^(λ (t-1)) + e^(2 λ (t-1)) + e^(-λ t))/(2 + e^(-λ)) + cos(π t)

For moderate values of λ, the initial value problem starting at t = 0 becomes unstable because of the growing e^(λ (t-1)) and e^(2 λ (t-1)) terms. Similarly, starting at t = 1, instability arises from the e^(-λ t) term, though this is not as large as the terms in the forward direction. Beyond some value of λ, shooting will not be able to get a good solution because the sensitivity in either direction will be too great. However, up to that point, choosing a point in the interval that balances the growth in the two directions will give the best solution.

This gives the equation, boundary conditions, and exact solution as Mathematica input.

In[107]:= eqn = x'''[t] - 2 l x''[t] - l^2 x'[t] + 2 l^3 x[t] ==
      (l^2 + Pi^2) (2 l Cos[Pi t] + Pi Sin[Pi t]);
   bcs = {x[0] == 1 + (1 + E^(-2 l) + E^(-l))/(2 + E^(-l)), x[1] == 0,
      x'[1] == (3 l - E^(-l) l)/(2 + E^(-l))};
   xsol[t_] = (E^(l (t - 1)) + E^(2 l (t - 1)) + E^(-l t))/(2 + E^(-l)) + Cos[Pi t];

This solves the system with l = 10 shooting from the default t = 0.
In[110]:= Block[{l = 10},
   sol = First[NDSolve[{eqn, bcs}, x, {t, 0, 1}]];
   Plot[{xsol[t], x[t] /. sol}, {t, 0, 1}]]
Out[110]= (plot of xsol[t] and the computed x[t]; visible errors appear in the computed solution)
Shooting from t = 0, the "Shooting" method gives warnings about an ill-conditioned matrix, and further that the boundary conditions are not satisfied as well as they should be. This is because a small error at t = 0 is amplified by e^20 > 4 × 10^8. Since the reciprocal of this is of the same order of magnitude as the local truncation error, visible errors such as those seen in the plot are not surprising. In the reverse direction, the magnification is much less: e^10 > 2 × 10^4, so the solution should be much better.

This computes the solution using shooting from t = 1.

In[111]:=
Block[{l = 10},
  sol = First[NDSolve[{eqn, bcs}, x, {t, 0, 1},
     Method -> {"Shooting", "StartingInitialConditions" ->
        {x[1] == 0, x'[1] == (3 l - E^(-l) l)/(2 + E^(-l)), x''[1] == 0}}]];
  Plot[{xsol[t], x[t] /. sol}, {t, 0, 1}]]
Out[111]= (plot of xsol[t] and the computed x[t]; the agreement is much better)
A good point to choose is actually one that balances the sensitivity in each direction, which is at about t = 2/3. With this, the error with l = 15 will still be under reasonable control.

This computes the solution for l = 15 shooting from t = 2/3.

In[112]:= Block[{l = 15},
   sol = First[NDSolve[{eqn, bcs}, x, {t, 0, 1},
      Method -> {"Shooting", "StartingInitialConditions" ->
         {x[2/3] == 0, x'[2/3] == 0, x''[2/3] == 0}}]];
   Plot[{xsol[t], x[t] /. sol}, {t, 0, 1}]]
Out[112]= (plot of xsol[t] and the computed x[t] for l = 15)
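The choice t = 2/3 can be checked with a back-of-the-envelope computation: shooting forward from a start point t0, the worst mode grows like e^(2 λ (1 - t0)); shooting backward, like e^(λ t0). Equating the exponents, 2 λ (1 - t0) = λ t0, gives t0 = 2/3 independent of λ. A small, purely illustrative sketch in plain Python:

```python
import math

lam = 15.0

def amplification(t0, lam):
    # Worst-case sensitivity of shooting from t0: the e^(2*lam*t) mode
    # grows forward over [t0, 1], while the e^(-lam*t) mode grows
    # backward over [0, t0].
    return max(math.exp(2 * lam * (1 - t0)), math.exp(lam * t0))

# Scan starting points; the minimum sits where the exponents balance,
# 2*lam*(1 - t0) == lam*t0, i.e. t0 = 2/3 for every lam.
t_best = min((k / 1000 for k in range(1001)),
             key=lambda t0: amplification(t0, lam))
```

At the balance point the amplification is only e^(λ 2/3) (e^10 for λ = 15), compared with e^(2 λ) = e^30 when shooting from t = 0.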
Option summary

option name                    default value
"StartingInitialConditions"    Automatic       the initial conditions to initiate the shooting method from
"ImplicitSolver"               Automatic       the method to use for solving the implicit equation defined by the boundary conditions; this should be an acceptable value for the Method option of FindRoot
"MaxIterations"                Automatic       how many iterations to use for the implicit solver method
"Method"                       Automatic       the method to use for integrating the system of ODEs

"Shooting" method options.
"Chasing" Method

The method of chasing came from a manuscript of Gel'fand and Lokutsiyevskii first published in English in [BZ65] and further described in [Na79]. The idea is to establish a set of auxiliary problems that can be solved to find initial conditions at one of the boundaries. Once the initial conditions are determined, the usual methods for solving initial value problems can be applied. The chasing method is, in effect, a shooting method that uses the linearity of the problem to good advantage.

Consider the linear ODE

  X'(t) == A(t) X(t) + A0(t)                                        (2)

where X(t) = (x1(t), x2(t), …, xn(t)), A(t) is the coefficient matrix, and A0(t) is the inhomogeneous coefficient vector, with n linear boundary conditions

  Bi . X(ti) == bi0,  i = 1, 2, …, n                                (3)

where Bi = (bi1, bi2, …, bin) is a coefficient vector. From this, construct the augmented homogeneous system

  X̄'(t) == Ā(t) X̄(t),  B̄i . X̄(ti) == 0                              (4)
where

  X̄(t) = (1, x1(t), x2(t), …, xn(t))

  Ā(t) has a first row of zeros (since the first component of X̄ is the constant 1), followed by the rows (a0i(t), ai1(t), ai2(t), …, ain(t)) for i = 1, …, n, and

  B̄i = (-bi0, bi1, bi2, …, bin)

The chasing method amounts to finding a vector function Fi(t) such that Fi(ti) = B̄i and Fi(t) . X̄(t) = 0. Once the functions Fi(t) are known, if there is a full set of boundary conditions, solving the linear system

  ( F1(t0); F2(t0); …; Fn(t0) ) . X̄(t0) = 0                          (5)

can be used to determine initial conditions (x1(t0), x2(t0), …, xn(t0)) that can be used with the usual initial value problem solvers. Note that the solution to system (5) is nontrivial because the first component of X̄ is always 1. Thus, solving the boundary value problem is reduced to solving the auxiliary problems for the Fi(t).

Differentiating the relation Fi(t) . X̄(t) = 0 gives

  Fi'(t) . X̄(t) + Fi(t) . X̄'(t) = 0

Substituting the differential equation,

  Fi'(t) . X̄(t) + Fi(t) . Ā(t) X̄(t) = 0

and transposing,

  (Fi'(t) + Ā(t)^T Fi(t)) . X̄(t) = 0

Since this should hold for all solutions X̄, you have the initial value problem for Fi,

  Fi'(t) + Ā(t)^T Fi(t) = 0  with initial condition  Fi(ti) = B̄i     (6)

Given t0 where you want to have solutions to all of the boundary value problems, Mathematica just uses NDSolve to solve the auxiliary problems for F1, F2, …, Fn by integrating them to t0.
The results are then combined into the matrix of (5), which is solved for X̄(t0) to obtain the initial value problem that NDSolve integrates to give the returned solution. This variant of the method is further described in and used by the MathSource package [R98], which also allows you to solve linear eigenvalue problems.

There is an alternative, nonlinear way to set up the auxiliary problems that is closer to the original method proposed by Gel'fand and Lokutsiyevskii. Assume that the boundary conditions are linearly independent (if not, then the problem is insufficiently specified). Then each Bi has at least one nonzero component; without loss of generality, assume that bij ≠ 0. Now solve for the component Fij in terms of the other components of Fi, Fij = B̃i . F̃i, where F̃i = (1, Fi1, …, Fij-1, Fij+1, …, Fin) and B̃i = (bi0, bi1, …, bij-1, bij+1, …, bin) / (-bij). Using (6) and replacing Fij, you get the nonlinear equation

  F̃i'(t) = -Ã(t)^T F̃i(t) + (Aj . F̃i(t)) F̃i(t)

where Ã is Ā with the jth column removed and Aj is the jth column of Ā. The nonlinear method can be more numerically stable than the linear method, but it has the disadvantage that integration along the real line may lead to singularities. This problem can be eliminated by integrating on a contour in the complex plane. However, the integration in the complex plane typically has more numerical error than a simple integration along the real line, so in practice the nonlinear method does not typically give results better than the linear method. For this reason, and because it is also generally faster, the default for Mathematica is to use the linear method.

This solves a two-point boundary value problem for a second-order equation.

In[113]:=
nsol1 = NDSolve[{y''[t] + y[t]/4 == 8, y[0] == 0, y[10] == 0}, y, {t, 0, 10}]

Out[113]= {{y -> InterpolatingFunction[{{0., 10.}}, <>]}}
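The linear chasing construction can be sketched numerically for exactly this problem, y'' + y/4 == 8 with y(0) == y(10) == 0. The following is an illustrative Python sketch, not the NDSolve implementation: the augmented state is X̄ = (1, y, y'), the right boundary condition is chased back to t = 0 by integrating the auxiliary equation F' = -Ā^T F, and the resulting linear relation determines the missing slope y'(0), which can be checked against the closed-form solution:

```python
import math

def rk4(f, y, t0, t1, n=2000):
    # Classical 4th-order Runge-Kutta; t1 < t0 integrates backwards.
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Augmented system X' = A X for y'' + y/4 == 8, with X = (1, y, y').
A = [[0, 0,     0],
     [0, 0,     1],
     [8, -0.25, 0]]

def aux(t, F):
    # The auxiliary ("chasing") equation F' = -A^T F.
    return [-sum(A[j][i] * F[j] for j in range(3)) for i in range(3)]

# Chase the right boundary condition y(10) == 0 (coefficients (0, 1, 0))
# back to t = 0.
F2 = rk4(aux, [0.0, 1.0, 0.0], 10.0, 0.0)

# The left condition gives y(0) = 0 directly, so F2 . (1, 0, y'(0)) == 0
# determines the missing initial slope.
yp0 = -F2[0] / F2[2]

# Closed form: y = 32 (1 - cos(t/2)) + B sin(t/2) with y(10) = 0 gives
# y'(0) = B/2 = 16 (cos 5 - 1) / sin 5.
exact = 16 * (math.cos(5) - 1) / math.sin(5)
```

Once y(0) = 0 and y'(0) ≈ 11.95 are in hand, the BVP has been reduced to an ordinary initial value problem, which is exactly the step NDSolve performs after solving system (5).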
This shows a plot of the solution.

In[114]:= Plot[First[y[t] /. nsol1], {t, 0, 10}]
Out[114]= (plot of the solution on 0 <= t <= 10)
The solver can solve multipoint boundary value problems for linear systems of equations. (Note that each boundary equation must be at one specific value of t.)

In[115]:= bconds = {x[0] + x'[0] + y[0] + y'[0] == 1,
      x[1] + 2 x'[1] + 3 y[1] + 4 y'[1] == 5,
      y[2] + 2 y'[2] == 4, x[3] - x'[3] == 7};
   nsol2 = NDSolve[{x''[t] + x[t] + y[t] == t, y''[t] + y[t] == Cos[t], bconds},
      {x, y}, {t, 0, 4}]

Out[116]= {{x -> InterpolatingFunction[{{0., 4.}}, <>], y -> InterpolatingFunction[{{0., 4.}}, <>]}}
In general, you cannot expect the boundary value equations to be satisfied to the close tolerance of Equal.

This checks to see whether the boundary conditions are "satisfied".

In[117]:= bconds /. First[nsol2]

Out[117]= {True, False, False, False}
They are typically satisfied only to tolerances that come from the AccuracyGoal and PrecisionGoal options of NDSolve. Usually, the actual accuracy and precision are somewhat less, because these goals are used for local, not global, error control.

This checks the residual error at each of the boundary conditions.

In[118]:= Apply[Subtract, bconds, 1] /. First[nsol2]

Out[118]= {0., -2.5751*10^-7, -4.13357*10^-8, -2.95508*10^-8}
When you give NDSolve a problem that has no solution, numerical error may make it appear to be a solvable problem. Typically, NDSolve will issue a warning message.
This is a boundary value problem that has no solution.

In[125]:= NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x[Pi] == 0}, x, {t, 0, Pi},
    Method -> "Chasing"]

NDSolve::bvluc : The equations derived from the boundary conditions are numerically ill-conditioned. The boundary conditions may not be sufficient to uniquely define a solution. The computed solution may match the boundary conditions poorly.

Out[125]= {{x -> InterpolatingFunction[{{0., 3.14159}}, <>]}}
In this case, it is not able to integrate over the entire interval because of nonexistence. Another situation in which the equations can be ill-conditioned is when the boundary conditions do not give a unique solution.

Here is a boundary value problem that does not have a unique solution. Its general solution, computed symbolically with DSolve, is shown.

In[120]:= dsol = First[x /. DSolve[{x''[t] + x[t] == t, x'[0] == 1, x[Pi/2] == Pi/2}, x, t]]

DSolve::bvsing : Unable to resolve some of the arbitrary constants in the general solution using the given boundary conditions. It is possible that some of the conditions have been specified at a singular point for the equation.

Out[120]= Function[{t}, t + C[1] Cos[t]]
NDSolve issues a warning message because the matrix to solve for the initial conditions is singular, but it still finds a solution.

In[122]:= onesol = First[x /. NDSolve[{x''[t] + x[t] == t, x'[0] == 1, x[Pi/2] == Pi/2},
      x, {t, 0, Pi/2}, Method -> "Chasing"]]

NDSolve::bvluc : The equations derived from the boundary conditions are numerically ill-conditioned. The boundary conditions may not be sufficient to uniquely define a solution. The computed solution may match the boundary conditions poorly.

Out[122]= InterpolatingFunction[{{0., 1.5708}}, <>]
You can identify which solution it found by fitting it to the interpolating points. This makes a plot of the error relative to the actual best-fit solution.

In[126]:= ip = onesol["Coordinates"][[1]]; points = Transpose[{ip, onesol[ip]}]; ...
Out[130]= (plot of the difference between the computed solution and the best-fit solution; the error is of order 10^-8)
Typically the default values Mathematica uses work fine, but you can control the chasing method by giving NDSolve the option Method -> {"Chasing", chasing options}. The possible chasing options are shown in the following table.

option name        default value
Method             Automatic         the numerical method to use for computing the initial value problems generated by the chasing algorithm
"ExtraPrecision"   0                 number of digits of extra precision to use for solving the auxiliary initial value problems
"ChasingType"      "LinearChasing"   the type of chasing to use, which can be either "LinearChasing" or "NonlinearChasing"

Options for the "Chasing" method of NDSolve.
The setting "ChasingType" -> "NonlinearChasing" itself has two options.

option name       default value
"ContourType"     "Ellipse"     the shape of contour to use when integration in the complex plane is required, which can be either "Ellipse" or "Rectangle"
"ContourRatio"    1/10          the ratio of the width to the length of the contour; typically a smaller number gives more accurate results, but yields more numerical difficulty in solving the equations

Options for the "NonlinearChasing" setting of the "ChasingType" option.
These options, especially "ExtraPrecision", can be useful in cases where there is a strong sensitivity to the computed initial conditions.

Here is a boundary value problem with a simple solution computed symbolically using DSolve.

In[131]:= bvp = {x''[t] + 1000 x[t] == 0, x[0] == 0, x[1] == 1};
   dsol = First[x /. DSolve[bvp, x, t]]

Out[132]= Function[{t}, Csc[10 Sqrt[10]] Sin[10 Sqrt[10] t]]
This shows the error in the solution computed using the chasing method in NDSolve.

In[133]:= sol = First[x /. NDSolve[{x''[t] + 1000 x[t] == 0, x[0] == 0, x[1] == 1},
      x, {t, 0, 1}, Method -> "Chasing"]];
   Plot[sol[t] - dsol[t], {t, 0, 1}]
Out[134]= (plot of the error, which reaches a magnitude of order 10^-5)
Using extra precision to solve for the initial conditions reduces the error substantially.

In[135]:= sol = First[x /. NDSolve[{x''[t] + 1000 x[t] == 0, x[0] == 0, x[1] == 1},
      x, {t, 0, 1}, Method -> {"Chasing", "ExtraPrecision" -> 10}]];
   Plot[sol[t] - dsol[t], {t, 0, 1}]
Out[136]= (plot of the error, now of order 10^-7)
Increasing the extra precision beyond this will not really help, because a significant part of the error results from computing the solution once the initial conditions are found. To reduce this, you need to give more stringent AccuracyGoal and PrecisionGoal options to NDSolve.

This uses extra precision to compute the initial conditions, along with more stringent settings for the AccuracyGoal and PrecisionGoal options.

In[137]:= sol = First[x /. NDSolve[{x''[t] + 1000 x[t] == 0, x[0] == 0, x[1] == 1},
      x, {t, 0, 1}, Method -> {"Chasing", "ExtraPrecision" -> 10},
      AccuracyGoal -> 10, PrecisionGoal -> 10]];
   Plot[sol[t] - dsol[t], {t, 0, 1}]
Out[138]= (plot of the error, now of order 10^-9)
Boundary Value Problems with Parameters

In many of the applications where boundary value problems arise, there may be undetermined parameters, such as eigenvalues, that are part of the desired solution. By introducing the parameters as dependent variables, the problem can often be written as a boundary value problem in standard form.
For example, the flow in a channel can be modeled by

  f''' - R ((f')^2 - f f'') + R a = 0
  f(0) = f'(0) = 0,  f(1) = 1,  f'(1) = 0

where R (the Reynolds number) is given, but a is to be determined. To find the solution f and the value of a, just add the equation a' == 0.

This solves the flow problem with R = 1 for f and a, plots the solution f, and returns the value of a.

In[1]:= Block[{R = 1},
   sol = NDSolve[{f'''[t] - R ((f'[t])^2 - f[t] f''[t]) + R a[t] == 0, a'[t] == 0,
      f[0] == f'[0] == f'[1] == 0, f[1] == 1}, {f, a}, t];
   Column[{Plot[f[t] /. First[sol], {t, 0, 1}], ...
Out[1]= (plot of f on 0 <= t <= 1, followed by the computed parameter value a = 14.3659)
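The same add-an-equation trick handles classical eigenvalue problems. As an illustration (a Python sketch with a fixed-step RK4 integrator and a secant iteration, not NDSolve), treat a in x'' + a x == 0, x(0) == 0, x(1) == 0 as a dependent variable with a' == 0, fix the scaling with the normalization x'(0) == 1, and shoot on a; the smallest eigenvalue is π^2:

```python
import math

def rk4(f, y, t0, t1, n=400):
    # Classical fixed-step 4th-order Runge-Kutta for y' = f(t, y).
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def residual(a):
    # a' == 0 just says the parameter is constant along the integration,
    # so integrate x'' + a x == 0 with x(0) = 0 and x'(0) = 1; the extra
    # boundary condition x(1) == 0 is the shooting residual.
    x_end, _ = rk4(lambda t, y: [y[1], -a * y[0]], [0.0, 1.0], 0.0, 1.0)
    return x_end

# Secant iteration on the parameter a; the smallest eigenvalue is pi^2.
a0, a1 = 8.0, 11.0
for _ in range(50):
    g0, g1 = residual(a0), residual(a1)
    if abs(g1) < 1e-12 or g0 == g1:
        break
    a0, a1 = a1, a1 - g1 * (a1 - a0) / (g1 - g0)
```

The starting interval [8, 11] selects the lowest mode; starting elsewhere finds higher eigenvalues, just as different "StartingInitialConditions" select different BVP solutions.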
Numerical Solution of Differential-Algebraic Equations

Introduction

In general, a system of ordinary differential equations (ODEs) can be expressed in the normal form x' = f(t, x). The derivatives of the dependent variables x are expressed explicitly in terms of the independent variable t and the dependent variables x. As long as the function f has sufficient continuity, a unique solution can always be found for an initial value problem where the values of the dependent variables are given at a specific value of the independent variable.
With differential-algebraic equations (DAEs), the derivatives are not, in general, expressed explicitly. In fact, derivatives of some of the dependent variables typically do not appear in the equations. The general form of a system of DAEs is

  F(t, x, x') = 0                                                   (7)

where the Jacobian with respect to x', ∂F/∂x', may be singular.

A system of DAEs can be converted to a system of ODEs by differentiating it with respect to the independent variable t. The index of a DAE is effectively the number of times you need to differentiate the DAEs to get a system of ODEs. Even though the differentiation is possible, it is not generally used as a computational technique because properties of the original DAEs are often lost in numerical simulations of the differentiated equations. Thus, numerical methods for DAEs are designed to work with the general form of a system of DAEs. The methods in NDSolve are designed to generally solve index-1 DAEs, but may work for higher-index problems as well. This tutorial will show numerous examples that illustrate some of the differences between solving DAEs and ODEs.

This loads packages that will be used in the examples and turns off a message.

In[10]:= Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
The specification of initial conditions is quite different for DAEs than for ODEs. For ODEs, as already mentioned, a set of initial conditions uniquely determines a solution. For DAEs, the situation is not nearly so simple; it may even be difficult to find initial conditions that satisfy the equations at all. To better understand this issue, consider the following example [AP98].

Here is a system of DAEs with three equations, but only one differential term.

In[11]:= DAE = {x1'[t] == x3[t],
      x2[t] (1 - x2[t]) == 0,
      x1[t] x2[t] + x3[t] (1 - x2[t]) == t};
The initial conditions are clearly not free; the second equation requires that x2[t0] be either 0 or 1.

This solves the system of DAEs, starting with a specified initial condition for the derivative of x1.

In[12]:= sol = NDSolve[{DAE, x1'[0] == 1}, {x1, x2, x3}, {t, 0, 1}]

Out[12]= {{x1 -> InterpolatingFunction[{{0., 1.}}, <>],
    x2 -> InterpolatingFunction[{{0., 1.}}, <>],
    x3 -> InterpolatingFunction[{{0., 1.}}, <>]}}
258
Advanced Numerical Differential Equation Solving in Mathematica
To get this solution, NDSolve first searches for initial conditions that satisfy the equations, using a combination of Solve and a procedure much like FindRoot. Once consistent initial conditions are found, the DAE is solved using the IDA method.

This shows the initial conditions found by NDSolve.

In[13]:= {{x1'[0]}, {x1[0], x2[0], x3[0]}} /. First[sol]

Out[13]= {{1.}, {0., 1., 1.}}
This shows a plot of the solution. The curve for x2 is obscured by the curve for x3, which has the same constant value of 1.

In[15]:= Plot[Evaluate[{x1[t], x2[t], x3[t]} /. First[sol]], {t, 0, 1},
           PlotStyle -> {Red, Black, Blue}]

Out[15]= (plot of x1, x2, and x3 on 0 <= t <= 1)
However, there may not be a solution from all initial conditions that satisfy the equations.

This tries to find a solution starting from steady state, with the derivative of x1 equal to 0.

In[16]:= sols = NDSolve[{DAE, x1'[0] == 0}, {x1, x2, x3}, {t, 0, 1}]

Out[16]= {{x1 -> InterpolatingFunction[{{0., 0.}}, <>],
           x2 -> InterpolatingFunction[{{0., 0.}}, <>],
           x3 -> InterpolatingFunction[{{0., 0.}}, <>]}}

Note that the interpolating functions cover only the degenerate interval from 0 to 0: NDSolve was not able to advance the solution at all.

This shows the initial conditions found by NDSolve.

In[17]:= {{x1'[0]}, {x1[0], x2[0], x3[0]}} /. First[sols]

Out[17]= {{0.}, {0., 1., 0.}}
If you look at the equations with x2 set to 1, you can see why it is not possible to advance the solution beyond t == 0.

Substitute x2[t] = 1 into the equations.

In[18]:= DAE /. x2[t] -> 1

Out[18]= {x1'[t] == x3[t], True, x1[t] == t}
The middle equation effectively drops out. If you differentiate the last equation with x2[t] == 1, you get the condition x1'[t] == 1, but then the first equation is inconsistent with the value x3[0] == 0 in the initial conditions. It turns out that the only solution with x2[t] == 1 is {x1[t] == t, x2[t] == 1, x3[t] == 1}, and along this solution, the system has index 2.

The other set of solutions for the problem occurs when x2[t] == 0. You can find these by specifying that as an initial condition.

This finds a solution with x2[0] == 0. It is also necessary to specify a value for x1[0] because x1 is a differential variable.

In[19]:= sol0 = NDSolve[{DAE, x1[0] == 1, x2[0] == 0}, {x1, x2, x3}, {t, 0, 1}]

Out[19]= {{x1 -> InterpolatingFunction[{{0., 1.}}, <>],
           x2 -> InterpolatingFunction[{{0., 1.}}, <>],
           x3 -> InterpolatingFunction[{{0., 1.}}, <>]}}
This shows a plot of the nonzero components of the solution.

In[21]:= Plot[Evaluate[{x1[t], x3[t]} /. First[sol0]], {t, 0, 1}, PlotStyle -> {Red, Blue}]

Out[21]= (plot of x1 and x3 on 0 <= t <= 1)
In general, you must specify initial conditions for the differential variables because typically there is a parametrized general solution. For this problem with x2[t] == 0, the general solution is {x1[t] == x1[0] + t^2/2, x2[t] == 0, x3[t] == t}, so it is necessary to give x1[0] to determine the solution.

NDSolve cannot always find initial conditions consistent with the equations, because this is sometimes a difficult problem. "Often the most difficult part of solving a DAE system in applications is to determine a consistent set of initial conditions with which to start the computation." [BCP89]
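The parametrized general solution can be verified by direct substitution into the residuals. A small Python check (an illustration with a made-up helper name, not NDSolve output):

```python
def residuals(t, c):
    # Substitute x1 = c + t^2/2, x2 = 0, x3 = t (with c = x1(0)) into
    #   x1' == x3,  x2 (1 - x2) == 0,  x1 x2 + x3 (1 - x2) == t
    x1, x2, x3 = c + t * t / 2.0, 0.0, t
    x1p = t                               # d/dt (c + t^2/2)
    return (x1p - x3,
            x2 * (1.0 - x2),
            x1 * x2 + x3 * (1.0 - x2) - t)

# The residuals vanish for every t and every value of the free parameter c.
checks = [residuals(k / 10.0, c) for k in range(11) for c in (0.0, 1.0, -2.5)]
```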
Here NDSolve fails to find a consistent initial condition.

In[22]:= NDSolve[{DAE, x1[0] == 1}, {x1, x2, x3}, {t, 0, 1}]

Out[22]= {}
If NDSolve fails to find consistent initial conditions, you can use FindRoot with a good starting value, or some other procedure, to obtain consistent initial conditions and supply them. If you know values close to a consistent initial condition, you can give those as starting values; NDSolve uses them to begin its search, which may help. You may specify values for both the dependent variables and their derivatives.

With index-1 systems of DAEs, it is often possible to differentiate and use an ODE solver to get the solution.

Here is the Robertson chemical kinetics problem. Because of the large and small rate constants, the problem is quite stiff.

In[23]:= kinetics = {y1'[t] == -(1/25) y1[t] + 10^4 y2[t] y3[t],
                     y2'[t] == (1/25) y1[t] - 10^4 y2[t] y3[t] - 3*10^7 y2[t]^2};
         balance = y1[t] + y2[t] + y3[t] == 1;
         start = {y1[0] == 1, y2[0] == 0, y3[0] == 0};

This solves the Robertson kinetics problem as an ODE by differentiating the balance equation.

In[26]:= odesol = First[NDSolve[{kinetics, D[balance, t], start}, {y1, y2, y3}, {t, 0, 40000}]]

Out[26]= {y1 -> InterpolatingFunction[{{0., 40000.}}, <>],
          y2 -> InterpolatingFunction[{{0., 40000.}}, <>],
          y3 -> InterpolatingFunction[{{0., 40000.}}, <>]}
The stiffness of the problem is evidenced by the fact that y1 and y2 have their main variation on two completely different time scales.
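The way an implicit solver handles such a system can be sketched outside Mathematica. The Python code below applies backward Euler (the order-1 BDF, the simplest member of the family used by the IDA method described later in this tutorial) with a plain Newton iteration to the Robertson ODEs. Because the mass balance y1 + y2 + y3 is a linear invariant of the equations, the implicit method preserves it to the tolerance of the Newton solve. The fixed step size and iteration count are arbitrary choices for this sketch.

```python
K1, K2, K3 = 1.0 / 25.0, 1.0e4, 3.0e7    # Robertson rate constants

def f(y):
    y1, y2, y3 = y
    return [-K1 * y1 + K2 * y2 * y3,
            K1 * y1 - K2 * y2 * y3 - K3 * y2 * y2,
            K3 * y2 * y2]

def jac(y):
    y1, y2, y3 = y
    return [[-K1,  K2 * y3,                 K2 * y2],
            [ K1, -K2 * y3 - 2 * K3 * y2,  -K2 * y2],
            [ 0.0, 2 * K3 * y2,             0.0]]

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    A = [row[:] for row in A]; b = b[:]
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]; b[k], b[p] = b[p], b[k]
        for i in range(k + 1, 3):
            m = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def backward_euler(y, h, nsteps):
    for _ in range(nsteps):
        z = y[:]
        for _ in range(8):               # Newton on  z - y - h f(z) == 0
            fz, Jz = f(z), jac(z)
            A = [[(1.0 if i == j else 0.0) - h * Jz[i][j] for j in range(3)]
                 for i in range(3)]
            r = [z[i] - y[i] - h * fz[i] for i in range(3)]
            dz = solve3(A, r)
            z = [z[i] - dz[i] for i in range(3)]
        y = z
    return y

y = backward_euler([1.0, 0.0, 0.0], 1.0e-3, 1000)    # integrate to t = 1
```

Even with this crude fixed-step scheme, the sum of the components stays at 1 to roundoff, because backward Euler inherits linear conservation laws from the differential equations.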
This shows the solutions y1 and y2.

In[27]:= GraphicsRow[{
           Plot[y1[t] /. odesol, {t, 0, 25}, PlotRange -> All, ImageSize -> 200],
           Plot[y2[t] /. odesol, {t, 0, 0.01}, PlotRange -> All, ImageSize -> 200,
             Ticks -> {{0., 0.005, 0.01}, {0., 5.*^-6, 1.5*^-5, 2.5*^-5, 3.5*^-5}}]}]

Out[27]= (plots of y1, which varies on 0 <= t <= 25, and y2, which has its main variation before t == 0.01)
This solves the Robertson kinetics problem as a DAE.

In[33]:= daesol = First[NDSolve[{kinetics, balance, start}, {y1, y2, y3}, {t, 0, 40000}]]

Out[33]= {y1 -> InterpolatingFunction[{{0., 40000.}}, <>],
          y2 -> InterpolatingFunction[{{0., 40000.}}, <>],
          y3 -> InterpolatingFunction[{{0., 40000.}}, <>]}
The solutions for a given component appear quite close, but comparing the chemical balance constraint shows a difference between them.

Here is a graph of the error in the balance equation for the ODE and DAE solutions, shown in black and blue, respectively. A log-log scale is used because of the large variation in t and in the magnitude of the error.

In[34]:= berr[t_] = Abs[Apply[Subtract, balance]];
         gode = First[InterpolatingFunctionCoordinates[y1 /. odesol]];
         gdae = First[InterpolatingFunctionCoordinates[y1 /. daesol]];
         Show[{ListLogLogPlot[Transpose[{gode, berr[gode] /. odesol}], PlotStyle -> Black],
               ListLogLogPlot[Transpose[{gdae, berr[gdae] /. daesol}], PlotStyle -> Blue]}]

Out[37]= (log-log plot of the balance error at the time steps of the two solutions; the errors range from about 10^-16 up to about 7.*10^-15)
In this case, both solutions satisfy the balance equation well beyond expected tolerances. Note that even though the error in the balance equation is greater at some points for the DAE solution, over the long term the DAE solution is brought back to satisfy the constraint more closely once the range of quick variation is passed.

You may want to solve DAEs of the form

    x'(t) = f(t, x(t))
    g(t, x(t)) = 0,

such that the solution of the differential equation is required to satisfy a particular constraint. NDSolve cannot handle such DAEs directly, because the index is too high and NDSolve expects the number of equations to be the same as the number of dependent variables. NDSolve does, however, have a "Projection" method that can often solve the problem.

A very simple example of such a constrained system is a nonlinear oscillator modeling the motion of a pendulum.

This defines the equation, invariant constraint, and starting condition for a simulation of the motion of a pendulum.

In[55]:= equation = x''[t] + Sin[x[t]] == 0;
         invariant = x'[t]^2 - 2 Cos[x[t]];
         start = {x[0] == 1, x'[0] == 0};

Note that the differential equation is effectively the derivative of the invariant, so one way to solve the equation is to use the invariant.

This solves for the motion of a pendulum using the invariant equation. The SolveDelayed option tells NDSolve not to solve the quadratic equation for x' symbolically, but instead to solve the system as a DAE.

In[58]:= isol = First[NDSolve[{invariant == -2 Cos[1], start}, x, {t, 0, 1000}, SolveDelayed -> True]]

Out[58]= {x -> InterpolatingFunction[{{0., 1000.}}, <>]}
However, this solution may not be quite what you expect: with x'(0) == 0, the invariant equation also has the constant solution x[t] == 1, and in fact the solutions from this starting point are not unique. This is because if you actually do solve for x', the resulting right-hand side does not satisfy the continuity requirements for uniqueness.
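To see the loss of uniqueness concretely, solve the invariant equation for the derivative: x'(t) = ±sqrt(2 cos x(t) - 2 cos 1). The constant function x(t) == 1 satisfies this exactly, and the square root is not Lipschitz continuous at x == 1, so the standard uniqueness theorem does not apply at the starting point. A quick Python check of both facts:

```python
import math

def g(x):
    # x' = ±g(x) after solving the invariant x'^2 == 2 Cos[x] - 2 Cos[1]
    return math.sqrt(max(2.0 * math.cos(x) - 2.0 * math.cos(1.0), 0.0))

# The constant solution x(t) == 1, x'(t) == 0 satisfies the invariant exactly:
const_residual = 0.0 ** 2 - (2.0 * math.cos(1.0) - 2.0 * math.cos(1.0))

# g is not Lipschitz at x == 1: the difference quotient grows like 1/sqrt(d).
quotients = [(g(1.0 - d) - g(1.0)) / d for d in (1e-2, 1e-4, 1e-6)]
```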
This solves for the motion of a pendulum using only the differential equation. The "ExplicitRungeKutta" method is used because it can also be a submethod of the projection method.

In[59]:= dsol = First[NDSolve[{equation, start}, x, {t, 0, 2000}, Method -> "ExplicitRungeKutta"]]

Out[59]= {x -> InterpolatingFunction[{{0., 2000.}}, <>]}
This shows the solution plotted over the last several periods.

In[60]:= Plot[x[t] /. dsol, {t, 1950, 2000}]

Out[60]= (plot of the oscillating solution on 1950 <= t <= 2000)
This shows a plot of the invariant at the ends of the time steps NDSolve took.

In[61]:= ts = First[InterpolatingFunctionCoordinates[x /. dsol]];
         ListPlot[Transpose[{ts, invariant + 2 Cos[1] /. dsol /. t -> ts}]]

Out[62]= (plot of the invariant error; it drifts steadily down to about -3.*10^-7 by t == 2000)
The error in the invariant is not large, but it shows a steady and consistent drift. Eventually, it could become large enough to affect the fidelity of the solution.

This solves for the motion of the pendulum, constraining the motion at each step to lie on the invariant.

In[63]:= psol = First[NDSolve[{equation, start}, x, {t, 0, 2000},
           Method -> {"Projection", Method -> "ExplicitRungeKutta", "Invariants" -> invariant}]]

Out[63]= {x -> InterpolatingFunction[{{0., 2000.}}, <>]}
This shows a plot of the invariant at the ends of the time steps NDSolve took with the projection method.

In[64]:= ts = First[InterpolatingFunctionCoordinates[x /. psol]];
         ListPlot[Transpose[{ts, invariant + 2 Cos[1] /. psol /. t -> ts}]]

Out[65]= (plot of the invariant error; with projection it stays at the roundoff level, a few times 10^-16)
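The idea behind projection is simple enough to sketch directly: take an ordinary explicit step, then pull the result back onto the invariant manifold with a few Newton iterations along the constraint gradient. The Python sketch below does this for the pendulum with a classical fourth-order Runge-Kutta step. It illustrates the principle only; it is not NDSolve's actual "Projection" implementation, and the fixed step size is an arbitrary choice.

```python
import math

C = -2.0 * math.cos(1.0)        # value of the invariant for x(0)=1, x'(0)=0

def f(y):
    x, v = y
    return (v, -math.sin(x))    # pendulum as a first-order system

def rk4_step(y, h):
    k1 = f(y)
    k2 = f((y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
    k3 = f((y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
    k4 = f((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def G(y):                        # invariant residual
    return y[1] ** 2 - 2.0 * math.cos(y[0]) - C

def project(y):
    # A few Newton steps along the constraint gradient (2 Sin[x], 2 v).
    for _ in range(3):
        gx, gv = 2.0 * math.sin(y[0]), 2.0 * y[1]
        lam = G(y) / (gx * gx + gv * gv)
        y = (y[0] - lam * gx, y[1] - lam * gv)
    return y

h, n = 0.1, 20000                # integrate to t = 2000
plain = proj = (1.0, 0.0)
for _ in range(n):
    plain = rk4_step(plain, h)
    proj = project(rk4_step(proj, h))
```

With projection the invariant stays at roundoff level, while the plain Runge-Kutta solution shows the same kind of steady drift seen in the plot above.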
IDA Method for NDSolve

The IDA package is part of SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers), developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory. As described in the IDA user guide [HT99], "IDA is a general purpose solver for the initial value problem for systems of differential-algebraic equations (DAEs). The name IDA stands for Implicit Differential-Algebraic solver. IDA is based on DASPK ..." DASPK [BHP94], [BHP98] is a Fortran code for solving large-scale differential-algebraic systems.

In Mathematica, an interface has been provided to the IDA package so that, rather than your needing to write and compile a C function that evaluates the residual, Mathematica generates the function automatically from the equations you give to NDSolve.

IDA solves the DAE system (7) with Backward Differentiation Formula (BDF) methods of orders 1 through 5, implemented in variable-step form. The BDF of order k at time t_n = t_(n-1) + h_n is given by the formula

    sum_{i=0}^{k} a_(n,i) x_(n-i) = h_n x_n'.

The coefficients a_(n,i) depend on the order k and on the past step sizes. Applying the BDF to the DAE (7) gives a system of nonlinear equations to solve at each step:

    F(t_n, x_n, (1/h_n) sum_{i=0}^{k} a_(n,i) x_(n-i)) = 0.
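For constant step sizes the coefficients a_(n,i) are determined by order conditions: the formula must hold exactly whenever x is a polynomial of degree at most k. The Python sketch below solves those conditions in exact rational arithmetic (an illustration only; IDA computes coefficients for variable step sizes internally):

```python
from fractions import Fraction

def bdf_coefficients(k):
    """Exact fixed-step BDF-k coefficients a_0..a_k in
       sum_i a_i x_{n-i} = h x'_n, from the order conditions
       sum_i a_i (-i)^j = (1 if j == 1 else 0) for j = 0..k."""
    n = k + 1
    A = [[Fraction((-i) ** j) for i in range(n)] for j in range(n)]
    b = [Fraction(1 if j == 1 else 0) for j in range(n)]
    # Gaussian elimination over the rationals
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]; b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            A[r] = [A[r][j] - m * A[c][j] for j in range(n)]
            b[r] -= m * b[c]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

bdf1 = bdf_coefficients(1)
bdf2 = bdf_coefficients(2)
```

bdf1 is just backward Euler, x_n - x_(n-1) = h x_n', and bdf2 gives the familiar (3/2) x_n - 2 x_(n-1) + (1/2) x_(n-2) = h x_n'.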
The solution of the nonlinear system is achieved by Newton-type methods, typically using an approximation to the Jacobian

    J = ∂F/∂x + c_n ∂F/∂x',  where c_n = a_(n,0)/h_n.    (8)
“Its [IDAs] most notable feature is that, in the solution of the underlying nonlinear system at each time step, it offers a choice of Newton/direct methods or an Inexact Newton/Krylov (iterative) method.” [HT99] In Mathematica, you can access these solvers using method options or use the default Mathematica LinearSolve , which switches automatically to direct sparse solvers for large problems. At each step of the solution, IDA computes an estimate En of the local truncation error and the step size and order are chosen so that the weighted norm Norm@En ê wn D is less than 1. The jth component, wn, j , of wn is given by
wn, j =
1 10
-prec
°xn, j • + 10-acc
.
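The error test is straightforward to state in code. Here is a Python sketch, assuming (as in the IDA user guide) that the norm is a weighted root-mean-square norm; the sample numbers are made up:

```python
import math

def error_weights(x, prec, acc):
    # w_j = 10^-prec |x_j| + 10^-acc, from PrecisionGoal and AccuracyGoal
    return [10.0 ** -prec * abs(xj) + 10.0 ** -acc for xj in x]

def weighted_rms(err, w):
    return math.sqrt(sum((e / wj) ** 2 for e, wj in zip(err, w)) / len(err))

w = error_weights([1.0, -100.0, 1e-6], prec=8, acc=8)
accept = weighted_rms([1e-9, 1e-7, 1e-9], w) <= 1.0   # is the step accepted?
```

Components with large magnitude are judged relative to the PrecisionGoal, while components near zero are judged against the absolute AccuracyGoal threshold.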
The values prec and acc are taken from the NDSolve settings PrecisionGoal -> prec and AccuracyGoal -> acc.

Because IDA provides a great deal of flexibility, particularly in the way the nonlinear equations are solved, there are a number of method options that allow you to control how this is done. You can give method options to IDA with the NDSolve option Method -> {"IDA", ida method options}. The options for the IDA method are associated with the symbol IDA in the NDSolve` context.

In[1]:= Options[NDSolve`IDA]

Out[1]= {"MaxDifferenceOrder" -> 5, "ImplicitSolver" -> "Newton"}
  IDA method option name    default value
  "ImplicitSolver"          "Newton"        how to solve the implicit equations
  "MaxDifferenceOrder"      5               the maximum order BDF to use

IDA method options.
When strict accuracy of intermediate values computed with the InterpolatingFunction object returned from NDSolve is important, you will want to use the NDSolve method option setting InterpolationOrder -> All, which uses interpolation based on the order of the method, sometimes called dense output, to represent the solution between time steps. By default, NDSolve stores a minimal amount of data to represent the solution well enough for graphical purposes. Keeping the amount of data small saves both memory and time for more complicated solutions.

As an example that highlights the difference between minimal output and full method interpolation order, consider tracking the quantity f(t) = x(t)^2 + y(t)^2 derived from the solution of a simple linear equation for which the exact solution can be computed using DSolve.

This defines the function f giving the quantity as a function of time, based on solutions x[t] and y[t].

In[2]:= f[t_] := x[t]^2 + y[t]^2;

This defines the linear equations along with initial conditions.

In[3]:= eqns = {x'[t] == x[t] - 2 y[t], y'[t] == x[t] + y[t]};
        ics = {x[0] == 1, y[0] == 1};

The exact value of f as a function of time can be computed symbolically using DSolve.
In[4]:= fexact[t_] = First[f[t] /. DSolve[{eqns, ics}, {x, y}, t]]

Out[4]= E^(2 t) (Cos[Sqrt[2] t] - Sqrt[2] Sin[Sqrt[2] t])^2 +
        (1/4) E^(2 t) (2 Cos[Sqrt[2] t] + Sqrt[2] Sin[Sqrt[2] t])^2
The exact solution will be compared with solutions computed with and without dense output.

A simple way to track the quantity is to create a function that derives it from the numerical solution of the differential equation.

In[5]:= f1[t_] = First[f[t] /. NDSolve[{eqns, ics}, {x, y}, {t, 0, 1}]]

Out[5]= InterpolatingFunction[{{0., 1.}}, <>][t]^2 + InterpolatingFunction[{{0., 1.}}, <>][t]^2

It can also be computed with dense output.

In[6]:= f1dense[t_] = First[f[t] /. NDSolve[{eqns, ics}, {x, y}, {t, 0, 1}, InterpolationOrder -> All]]

Out[6]= InterpolatingFunction[{{0., 1.}}, <>][t]^2 + InterpolatingFunction[{{0., 1.}}, <>][t]^2
This plot shows the error in the two computed solutions. The computed solution at the time steps is indicated by black dots. The default output error is shown in gray and the dense output error in black.

In[7]:= Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
        t1 = Cases[f1[t], (if_InterpolatingFunction)[t] ->
               InterpolatingFunctionCoordinates[if], Infinity][[1, 1]];
        pode = Show[Block[{$DisplayFunction = Identity},
               {ListPlot[Transpose[{t1, fexact[t1] - f1[t1]}], ...

Out[7]= (plot of the two error curves; the default output error reaches about -8.*10^-6 between the later time steps, while the dense output error stays much smaller)
From the plot, it is quite apparent that when the time steps get large, the default solution output has much larger error between time steps. The dense output solution represents the solution computed by the solver even between time steps. Keep in mind, however, that the dense output solution takes up much more space.

This compares the sizes of the default and dense output solutions.

In[8]:= ByteCount /@ {f1[t], f1dense[t]}

Out[8]= {3560, 17648}
When the quantity you want to derive from the solution is complicated, you can ensure that it is locally kept within tolerances by giving it as an algebraic quantity, forcing the solver to keep its error under control.

This adds a dependent variable, with an algebraic equation that sets the dependent variable equal to the quantity of interest.

In[9]:= f2[t_] = First[g[t] /. NDSolve[{eqns, ics, g[t] == f[t]}, {x, y, g}, {t, 0, 1}]]

Out[9]= InterpolatingFunction[{{0., 1.}}, <>][t]

This computes the same solution with dense output.

In[10]:= f2dense[t_] = First[g[t] /. NDSolve[{eqns, ics, g[t] == f[t]}, {x, y, g}, {t, 0, 1}, InterpolationOrder -> All]]

Out[10]= InterpolatingFunction[{{0., 1.}}, <>][t]
This makes a plot comparing the error for all four solutions. The time steps for IDA are shown as blue points, the dense output from IDA is shown in blue, and the default output is shown in light blue.

In[11]:= t2 = InterpolatingFunctionCoordinates[Head[f2[t]]][[1]];
         Show[{pode, ListPlot[Transpose[{t2, fexact[t2] - f2[t2]}], ...

Out[11]= (plot of the four error curves; the errors for the algebraically tracked quantity stay between about -1.*10^-7 and 1.*10^-8)
You can see from the plot that the error is somewhat smaller when the quantity is computed algebraically along with the solution.

The remainder of this documentation focuses on suboptions of the two possible settings for the "ImplicitSolver" option, which can be "Newton" or "GMRES". With "Newton", the Jacobian, or an approximation to it, is computed, and the linear system is solved to find the Newton step. "GMRES", on the other hand, is an iterative method: rather than computing the entire Jacobian, a directional derivative is computed for each iterative step.

The "Newton" method has one method option, "LinearSolveMethod", which you can use to tell Mathematica how to solve the linear equations required to find the Newton step. There are several possible values for this option.

  Automatic   the default; automatically switches between the Mathematica LinearSolve and the "Band" method depending on the bandwidth of the Jacobian; for systems with larger bandwidth, this will automatically switch to a direct sparse solver for large systems with sparse Jacobians
  "Band"      use the IDA band method (see the IDA user manual for more information)
  "Dense"     use the IDA dense method (see the IDA user manual for more information)

Possible settings for the "LinearSolveMethod" option.
The "GMRES" method may be substantially faster, but it is typically quite a bit trickier to use, because to be really effective it usually requires a preconditioner, which can be supplied via a method option. There are also some other method options that control the Krylov subspace process. To use these, refer to the IDA user guide [HT99].

  GMRES method option name        default value
  "Preconditioner"                Automatic               a Mathematica function that returns another function that preconditions
  "OrthogonalizationType"         "ModifiedGramSchmidt"   can also be "ClassicalGramSchmidt" (see the variable gstype in the IDA user guide)
  "MaxKrylovSubspaceDimension"    Automatic               maximum subspace dimension (see the variable maxl in the IDA user guide)
  "MaxKrylovRestarts"             Automatic               maximum number of restarts (see the variable maxrs in the IDA user guide)
"GMRES" method options.

As an example problem, consider a two-dimensional Burgers' equation,

    u_t = ν (u_xx + u_yy) - (1/2) ((u^2)_x + (u^2)_y).
This can typically be solved with an ordinary differential equation solver, but in this example, two things are achieved by using the DAE solver. First, boundary conditions are enforced as algebraic conditions. Second, NDSolve is forced to use conservative differencing by using an algebraic term. For comparison, a known exact solution will be used for the initial and boundary conditions.

This defines a function that satisfies Burgers' equation.

In[12]:= Bsol[t_, x_, y_] = 1/(1 + Exp[(x + y - t)/(2 n)]);

This defines initial and boundary conditions on the unit square, consistent with the exact solution.

In[13]:= ic = u[0, x, y] == Bsol[0, x, y];
         bc = {u[t, 0, y] == Bsol[t, 0, y], u[t, 1, y] == Bsol[t, 1, y],
               u[t, x, 0] == Bsol[t, x, 0], u[t, x, 1] == Bsol[t, x, 1]};

This defines the differential equation.

In[14]:= de = D[u[t, x, y], t] ==
           n (D[u[t, x, y], x, x] + D[u[t, x, y], y, y]) -
           u[t, x, y] (D[u[t, x, y], x] + D[u[t, x, y], y]);
This sets the diffusion constant n to a value for which a solution can be found in a reasonable amount of time, and shows a plot of the exact solution at t == 1.

In[15]:= n = 0.025; Plot3D[Bsol[1, x, y], {x, 0, 1}, {y, 0, 1}]

Out[15]= (surface plot of the steep front at t == 1)

You can see from the plot that with n = 0.025 there is a fairly steep front, which moves with constant speed.

This solves the problem using the default settings for NDSolve and the IDA method, with the exception of the "DifferentiateBoundaryConditions" option for "MethodOfLines", which causes NDSolve to treat the boundary conditions as algebraic.

In[16]:= Timing[sol = NDSolve[{de, ic, bc}, u, {t, 0, 1}, {x, 0, 1}, {y, 0, 1},
           Method -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> False}]]

Out[16]= {2.233, {{u -> InterpolatingFunction[{{0., 1.}, {0., 1.}, {0., 1.}}, <>]}}}
Since there is an exact solution to compare with, the overall solution accuracy can be checked as well.

This defines a function that finds the maximum deviation between the exact and computed solutions at the grid points, over all of the time steps.

In[17]:= errfun[sol_] := Module[{ifun = First[u /. sol], grid, exvals, gvals},
           grid = InterpolatingFunctionGrid[ifun];
           gvals = InterpolatingFunctionValuesOnGrid[ifun];
           exvals = Apply[Bsol, Transpose[grid, RotateLeft[Range[ArrayDepth[grid]], 1]]];
           Max[Abs[exvals - gvals]]]

This computes the maximal error for the computed solution.

In[18]:= errfun[sol]

Out[18]= 0.000749446
In the following, a comparison will be made between different settings for the options of the IDA method. To emphasize the option settings, a function is defined that times the computation of the solution and gives the maximal error.

This defines a function for comparing different IDA option settings.

In[19]:= TimeSolution[idaopts___] := Module[{time, err, steps},
           time = First[Timing[sol = NDSolve[{de, ic, bc}, u, {t, 0, 1}, {x, 0, 1}, {y, 0, 1},
             Method -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> False,
               Method -> {"IDA", idaopts}}]]];
           ...

This times the solution with the default option settings.

In[20]:= TimeSolution[]

Out[20]= {2.184, 0.000749446, 88 Steps}
This uses the "Band" method.

In[21]:= TimeSolution["ImplicitSolver" -> {"Newton", "LinearSolveMethod" -> "Band"}]

Out[21]= {8.543, 0.000749497, 88 Steps}

The "Band" method is not very effective here, because the bandwidth of the Jacobian is relatively large, partly because of the fourth-order differences used and partly because the one-sided stencils used near the boundary add width at the top and bottom. You can specify the bandwidth explicitly.

This uses the "Band" method with the width set to include the stencil of the differences in only one direction.

In[22]:= TimeSolution["ImplicitSolver" -> {"Newton", "LinearSolveMethod" -> {"Band", "BandWidth" -> 3}}]

Out[22]= {7.441, 0.000937962, 311 Steps}

While the solution time is smaller, notice that the error is slightly greater and the total number of time steps is much greater. If the problem were stiffer, the iterations would likely not have converged, because information from the other direction is missing. Ideally, the bandwidth should not eliminate information from an entire dimension.
This computes the grids used in the x and y directions and shows the number of points used in each.

In[23]:= {X, Y} = InterpolatingFunctionCoordinates[First[u /. sol]][[{2, 3}]];
         Length /@ {X, Y}

Out[23]= {51, 51}

This uses the "Band" method with the width set to include at least part of the stencil in both directions.

In[24]:= TimeSolution["ImplicitSolver" -> {"Newton", "LinearSolveMethod" -> {"Band", "BandWidth" -> 51}}]

Out[24]= {2.273, 0.00085973, 88 Steps}

With the more appropriate setting of the bandwidth, the solution is still slightly slower than in the default case. The "Band" method can sometimes be effective on two-dimensional problems, but it is usually most effective on one-dimensional problems.

This computes the solution using the "GMRES" implicit solver without a preconditioner.

In[25]:= TimeSolution["ImplicitSolver" -> "GMRES"]

Out[25]= {26.137, 0.00435431, 672 Steps}
This is incredibly slow! Using the "GMRES" method without a preconditioner is not recommended for this very reason. However, finding a good preconditioner is not usually trivial. For this example, a diagonal preconditioner will be used.

The setting of the "Preconditioner" option should be a function f that accepts four arguments, which will be given to it by NDSolve, such that f[t, x, x', c] returns another function that applies the preconditioner to the residual vector. (See the IDA user guide [HT99] for details on how the preconditioner is used.) The arguments t, x, x', c are the current time, the solution vector, the solution derivative vector, and the constant c_n in formula (8) above. For example, if you can determine a procedure that generates an appropriate preconditioner matrix P as a function of these arguments, you could use

    "Preconditioner" -> Function[{t, x, xp, c}, LinearSolve[P[t, x, xp, c]]]

to produce a LinearSolveFunction object that effectively inverts the preconditioner matrix P. Typically, each time the preconditioner function is set up, it is applied to the residual vector several times, so using some sort of factorization, such as is contained in a LinearSolveFunction, is a good idea.
For the diagonal case, the inverse can be effected simply by using the reciprocal. The most difficult part of setting up a diagonal preconditioner is keeping in mind that values on the boundary should not be affected by it.

This finds the diagonal elements of the differentiation matrix used for computing the preconditioner.

In[26]:= diag = Diagonal[
           NDSolve`FiniteDifferenceDerivative[{2, 0}, {X, Y}]["DifferentiationMatrix"] +
           NDSolve`FiniteDifferenceDerivative[{0, 2}, {X, Y}]["DifferentiationMatrix"]];
         Short[diag]

Out[26]//Short= {18750., 6250., 3125., 3125., <<2593>>, 3125., 3125., 6250., 18750.}

This gets the positions in the flattened solution vector of the elements that lie on the boundary.

In[27]:= {nx, ny} = {Length[X], Length[Y]};
         bound = SparseArray[{{i_, 1} -> 1., {i_, ny} -> 1., {1, i_} -> 1., {nx, i_} -> 1.}, {nx, ny}, 0.];
         Short[pos = Drop[ArrayRules[Flatten[bound]], -1][[All, 1, 1]]]

Out[27]//Short= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, <<180>>, 2592,
                 2593, 2594, 2595, 2596, 2597, 2598, 2599, 2600, 2601}
This defines the function that sets up the function called to apply the effective inverse of the preconditioner. For the diagonal case, the inverse is done simply by taking the reciprocal; boundary positions are left unchanged.

In[28]:= pfun[t_, x_, xp_, c_] := Module[{d, dd},
           d = 1./(c - n diag);
           d[[pos]] = 1.;
           Function[# dd] /. dd -> d]

This uses the preconditioned "GMRES" method to compute the solution.

In[29]:= TimeSolution["ImplicitSolver" -> {"GMRES", "Preconditioner" -> pfun}]

Out[29]= {1.161, 0.000716006, 88 Steps}

Thus, even with a crude preconditioner, the "GMRES" method computes the solution faster than using the direct sparse solvers. For PDE discretizations with higher-order temporal derivatives or systems of PDEs, you may need to look at the corresponding NDSolve`StateData object to determine how the variables are ordered so that you can get the structural form of the preconditioner correct.
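The reason a diagonal preconditioner helps here is that, for stiff steps, the iteration matrix c I - ν L is strongly diagonally dominant, so scaling by the reciprocal of its diagonal makes even a simple iteration contract quickly. The Python sketch below shows this on a made-up one-dimensional analogue (a Jacobi-preconditioned Richardson iteration, not IDA's actual GMRES; the grid size and constants are invented for the illustration):

```python
# 1-D analogue: A = c I - nu T, with T the second-difference operator.
n = 50
nu, h, c = 0.025, 1.0 / (n + 1), 100.0   # c plays the role of a_(n,0)/h_n

def apply_A(x):
    y = []
    for i in range(n):
        lap = -2.0 * x[i]
        if i > 0:
            lap += x[i - 1]
        if i < n - 1:
            lap += x[i + 1]
        y.append(c * x[i] - nu * lap / h ** 2)
    return y

inv_diag = 1.0 / (c + 2.0 * nu / h ** 2)  # reciprocal of the diagonal of A

b = [1.0] * n
x = [0.0] * n
for _ in range(60):                       # diagonally preconditioned iteration
    r = [bi - axi for bi, axi in zip(b, apply_A(x))]
    x = [xi + inv_diag * ri for xi, ri in zip(x, r)]

res = max(abs(bi - axi) for bi, axi in zip(b, apply_A(x)))
```

Because the diagonal carries most of the weight of A, the scaled iteration reduces the residual by a roughly constant factor per sweep, which is the same effect the reciprocal-diagonal preconditioner has inside the Krylov iteration.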
Delay Differential Equations

A delay differential equation is a differential equation where the time derivatives at the current time depend on the solution, and possibly its derivatives, at previous times:

    X'(t) = F(t, X(t), X(t - τ_1), ..., X(t - τ_n), X'(t - σ_1), ..., X'(t - σ_m));  t >= t_0
    X(t) = φ(t);  t <= t_0

Instead of a simple initial condition, an initial history function φ(t) needs to be specified. The quantities τ_i >= 0, i = 1, ..., n and σ_i >= 0, i = 1, ..., m are called the delays or time lags. The delays may be constants, functions τ(t) and σ(t) of t (time-dependent delays), or functions τ(t, X(t)) and σ(t, X(t)) (state-dependent delays). Delay equations with delays σ of the derivatives are referred to as neutral delay differential equations (NDDEs).

The equation processing code in NDSolve has been designed so that you can input a delay differential equation in essentially mathematical notation.

  x[t - τ]                 dependent variable x with delay τ
  x[t /; t <= t0] == φ     specification of the initial history function as an expression φ for t less than t0

Inputting delays and initial history.
Currently, the implementation of DDEs in NDSolve supports only constant delays.

This solves a second-order delay differential equation.

In[1]:= sol = NDSolve[{x''[t] + x[t - 1] == 0, x[t /; t <= 0] == t^2}, x, {t, -1, 5}]

Out[1]= {{x -> InterpolatingFunction[{{-1., 5.}}, <>]}}
This plots the solution and its first two derivatives.

In[2]:= Plot[Evaluate[{x[t], x'[t], x''[t]} /. First[sol]], {t, -1, 5}, PlotRange -> All]

Out[2]= (plot of x and its first two derivatives on -1 <= t <= 5)
For simplicity, this documentation is written assuming that integration always proceeds from smaller to larger t. However, NDSolve supports integration in the other direction if the initial history function is given for values above t0 and the delays are negative.

This solves a second-order delay differential equation in the direction of negative t.

In[3]:= nsol = NDSolve[{x''[t] + x[t + 1] == 0, x[t /; t >= 0] == t^2}, x, {t, -5, 1}];
        Plot[Evaluate[x[t] /. nsol], {t, -5, 1}]

Out[3]= (plot of the solution on -5 <= t <= 1)
Comparison and Contrast with ODEs

While DDEs look a lot like ODEs, the theory for them is quite a bit more complicated, and there are some surprising differences from ODEs. This section shows a few examples of the differences.

Look at the solutions of x'(t) = x(t - 1) (1 - x(t)) for different initial history functions.

In[47]:= Manipulate[
           Module[{sol = NDSolve[{x'[t] == x[t - 1] (1 - x[t]), x[t /; t <= 0] == f}, x, {t, -2, 2}]},
             Plot[Evaluate[x[t] /. First[sol]], {t, -2, 2}]],
           {f, {E^t, Cos[t], 1 - t, 1 - Sin[t]}}]

Out[47]= (interactive plot of the solution for each choice of the initial history f; all four choices give the same solution for t > 0)
As long as the initial function satisfies φ(0) = 1, the solution for t > 0 is always 1 [Z06]. With ODEs, you could always integrate backward in time from a solution to obtain the initial condition; with this DDE, many different initial histories lead to exactly the same solution.

Investigate the solutions of x'(t) = a x(t) (1 - x(t - 1)) for different values of the parameter a.
In[1]:= Manipulate[
          Module[{T = 50, sol, x, t},
            sol = First[x /. NDSolve[{x'[t] == a x[t] (1 - x[t - 1]), x[t /; t <= 0] == 0.1}, x, {t, 0, T}]];
            Plot[sol[t], {t, 0, T}]], ...

Out[1]= (interactive plot of the solution as the parameter a varies)

For a < 1/E, the solutions are monotonic; for 1/E <= a <= π/2, the solutions oscillate; and for a > π/2, the solutions approach a limit cycle. Of course, for the corresponding scalar ODE, the solutions are monotonic independent of a.
This solves the Ikeda delay differential equation, x'(t) = sin(x(t - 20)), for two nearby constant initial functions.

In[88]:= sol1 = First[NDSolve[{x'[t] == Sin[x[t - 20]], x[t /; t <= 0] == .0001}, x, {t, 0, 500}]];
         sol2 = First[NDSolve[{x'[t] == Sin[x[t - 20]], x[t /; t <= 0] == ...}, x, {t, 0, 500}]];
This plots the two solutions.

In[90]:= Plot[Evaluate[x[t] /. {sol1, sol2}], {t, 0, 500}]

Out[90]= (plot of the two solutions, which wander apart despite the nearly identical initial histories)

This simple scalar delay differential equation has chaotic solutions, and the motion shown above looks very much like Brownian motion [S07]. As the delay τ is increased beyond τ = π/2, a limit cycle appears, followed eventually by a period-doubling cascade leading to chaos before τ = 5.
Compare solutions for t=4.9, 5.0, and 5.1 In[104]:=
Grid@Table@sol = First@NDSolve@8x ‘@tD ã Sin@x@t - tDD, x@t ê; t § 0D ã .1<, x, 8t, 100 t, 200 t<, MaxSteps Ø InfinityDD; 8ParametricPlot@Evaluate@8x@t - 1D, x@tD< ê. solD, 8t, 101 t, 200 t
Out[104]= [grid of phase plots {x(t − 1), x(t)} and time series for τ = 4.9, 5.0, and 5.1]
Stability is much more complicated for delay equations as well. It is well known that the linear ODE test equation x'(t) = λ x(t) has asymptotically stable solutions if Re(λ) < 0 and is unstable if Re(λ) > 0. The closest corresponding DDE is x'(t) = λ x(t) + μ x(t − 1). Even if you consider just real λ and μ, the situation is no longer so clear cut. Shown below are some plots of solutions indicating this.
The solution is stable with λ = 1/2 and μ = −1.

In[110]:= Block[{λ = 1/2, μ = -1, T = 25}, Plot[Evaluate[First[x[t] /. NDSolve[{x'[t] == λ x[t] + μ x[t - 1], x[t /; t <= 0] == 1 - t}, x, {t, 0, T}]]], {t, 0, T}, ...]]
Out[110]= [plot of the oscillatory, decaying solution on 0 ≤ t ≤ 25]
The solution is unstable with λ = −7/2 and μ = 4.

In[111]:= Block[{λ = -7/2, μ = 4, T = 25}, Plot[Evaluate[First[x[t] /. NDSolve[{x'[t] == λ x[t] + μ x[t - 1], x[t /; t <= 0] == 1 - t}, x, {t, 0, T}]]], {t, 0, T}, ...]]
Out[111]= [plot of the growing solution on 0 ≤ t ≤ 25]
So the solution can be stable with λ > 0 and unstable with λ < 0, depending on the value of μ. A Manipulate is set up below so that you can investigate the λ-μ plane.

Investigate by varying λ and μ.

In[113]:= Manipulate[Module[{T = 25, x, t}, Plot[Evaluate[First[x[t] /. NDSolve[{x'[t] == λ x[t] + μ x[t - 1], x[t /; t <= 0] == 1 - t}, x, {t, 0, T}]]], {t, 0, T}, ...]], ...]
Out[113]= [Manipulate output: plot of the solution for the selected λ and μ]
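Both regimes can be reproduced with a crude fixed-step Euler integrator (an illustrative sketch — the function name, step size, and assertions below are choices made here, not NDSolve's method):

```python
def linear_dde_euler(lam, mu, t_end=25.0, dt=0.001):
    """Fixed-step Euler for x'(t) = lam x(t) + mu x(t - 1) with the
    history x(t) = 1 - t for t <= 0 used in the examples above."""
    lag = int(round(1.0 / dt))
    # samples of the history 1 - t on [-1, 0]
    xs = [1.0 - (-1.0 + i * dt) for i in range(lag + 1)]
    for i in range(lag, lag + int(round(t_end / dt))):
        xs.append(xs[i] + dt * (lam * xs[i] + mu * xs[i - lag]))
    return xs

stable = linear_dde_euler(0.5, -1.0)    # lambda > 0, yet the solution decays
unstable = linear_dde_euler(-3.5, 4.0)  # lambda < 0, yet the solution grows
```

By t = 25 the first solution has decayed to a small residual oscillation while the second has grown large, confirming that the sign of λ alone does not determine stability.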
Propagation and Smoothing of Discontinuities

The way discontinuities are propagated by the delays is an important feature of DDEs and has a profound effect on numerical methods for solving them.

Solve x'(t) = x(t − 1) with x(t) = 1 for t ≤ 0.

In[3]:= sol = First[NDSolve[{x'[t] == x[t - 1], x[t /; t <= 0] == 1}, x, {t, -1, 3}]]
Out[3]= {x -> InterpolatingFunction[{{-1., 3.}}, <>]}
In[4]:= Plot[Evaluate[{x[t], x'[t], x''[t]} /. sol], {t, -1, 3}]
Out[4]= [plot of x(t), x'(t), and x''(t) on −1 ≤ t ≤ 3]
In the example above, x(t) is continuous, but there is a jump discontinuity in x'(t) at t = 0: approaching from the left, the value is 0, given by the derivative of the initial history function, x'(t) = φ'(t) = 0, while approaching from the right, the value is given by the DDE, x'(t) = x(t − 1) = φ(t − 1) = 1. Near t = 1, by the continuity of x at 0, lim_{t→1−} x'(t) = lim_{t→1−} x(t − 1) = lim_{s→0−} x(s) = lim_{s→0+} x(s) = lim_{t→1+} x'(t), and so x'(t) is continuous at t = 1. Differentiating the equation, we can conclude that x''(t) = x'(t − 1), so x''(t) has a jump discontinuity at t = 1. Using essentially the same argument as above, we can conclude that at t = 2 the second derivative is continuous. Similarly, x^(k)(t) is continuous at t = k or, in other words, at t = k, x(t) is k times differentiable. This is referred to as smoothing and holds generally for non-neutral delay equations. In some cases the smoothing can be faster than one order per interval. [Z06] For neutral delay equations the situation is quite different.

Solve x'(t) = −x'(t − 1) with x(t) = t for t ≤ 0.

In[10]:=
sol = First[NDSolve[{x'[t] == -x'[t - 1], x[t /; t <= 0] == t}, x, {t, -1, 3}]]
Out[10]= {x -> InterpolatingFunction[{{-1., 3.}}, <>]}
In[11]:=
Plot[Evaluate[{x[t], x'[t]} /. sol], {t, -1, 3}]
Out[11]= [plot of the piecewise linear x(t) and its piecewise constant derivative on −1 ≤ t ≤ 3]
It is easy to see that the solution is piecewise with x(t) continuous. However,

x'(t) = −1 for 0 < mod(t, 2) < 1 and x'(t) = 1 for 1 < mod(t, 2) < 2,

which has a discontinuity at every nonnegative integer. In general, there is no smoothing of discontinuities for neutral DDEs. The propagation of discontinuities is very important from the standpoint of numerical solvers. If the possible discontinuity points are ignored, the order of the solver will be reduced. If a discontinuity point is known, a more accurate solution can be found by integrating just up to the discontinuity point and then restarting the method just past the point with the new function values.
This way, the integration method is used on smooth parts of the solution, leading to better accuracy and fewer rejected steps. From any given discontinuity points, future discontinuity points can be determined from the delays and detected by treating them as events to be located. When there are multiple delays, the propagation of discontinuities can become quite complicated.

Solve a neutral delay differential equation with two delays.

In[109]:=
sol = NDSolve[{x'[t] == x[t] (x[t - Pi] - x'[t - 1]), x[t /; t <= 0] == Cos[t]}, x, {t, -1, 8}]
Out[109]= {{x -> InterpolatingFunction[{{-1., 8.}}, <>]}}
Plot the solution.

In[110]:= Plot[Evaluate[{x[t], x'[t]} /. First[sol]], {t, -1, 8}, PlotRange -> All]
Out[110]= [plot of x(t) and x'(t) on −1 ≤ t ≤ 8]
It is clear from the plot that there is a discontinuity at each nonnegative integer, as would be expected from the neutral delay 1. However, looking at the second and third derivatives, it is clear that there are also discontinuities associated with points like t = π, 1 + π, 2 + π, propagated from the jump discontinuities in x'(t).

Plot the second derivative.

In[111]:= Plot[Evaluate[x''[t] /. First[sol]], {t, 2.5, 5.5}, PlotRange -> All]
Out[111]= [plot of x''(t) on 2.5 ≤ t ≤ 5.5]
In fact, there is a whole tree of discontinuities that are propagated forward in time. A way of determining and displaying the discontinuity tree for a solution interval is shown in the subsection below.
Discontinuity Tree

Define a command that gives the graph for the propagated discontinuities for a DDE with the given delays.

In[112]:= DiscontinuityTree[t0_, Tend_, delays_] :=
  Module[{dt, next, ord},
   ord[t_] := Infinity; ord[t0] = 0;
   next[b_, order_, del_] := Map[dt[b, #, order, del] &, del];
   dt[t_, {d_, nq_}, order_, del_] := Module[{b = t + d},
     If[b <= Tend, o = order + Boole[! nq]; ord[b] = Min[ord[b], o];
      Sow[{t -> b, d}]; ...]]; ...]
In[113]:= tree = Tally[DiscontinuityTree[0, 8, {{1, True}, {π, False}}]]

Out[113]= {{{0, 0} -> {1, 0}, 1}, {{1, 0} -> {2, 0}, 1}, {{2, 0} -> {3, 0}, 1}, {{3, 0} -> {4, 0}, 1}, {{4, 0} -> {5, 0}, 1}, {{5, 0} -> {6, 0}, 1}, {{6, 0} -> {7, 0}, 1}, {{7, 0} -> {8, 0}, 1}, {{4, 0} -> {4 + π, 1}, π}, {{3, 0} -> {3 + π, 1}, π}, {{3 + π, 1} -> {4 + π, 1}, 1}, {{2, 0} -> {2 + π, 1}, π}, {{2 + π, 1} -> {3 + π, 1}, 1}, {{1, 0} -> {1 + π, 1}, π}, {{1 + π, 1} -> {2 + π, 1}, 1}, {{1 + π, 1} -> {1 + 2 π, 2}, π}, {{0, 0} -> {π, 1}, π}, {{π, 1} -> {1 + π, 1}, 1}, {{π, 1} -> {2 π, 2}, π}, {{2 π, 2} -> {1 + 2 π, 2}, 1}}
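The order bookkeeping in the tree can be sketched language-independently. The following hedged Python version (names are illustrative, not an NDSolve API) propagates discontinuity points forward through the delays, keeping the minimal order at each point: a neutral delay preserves the order, while a non-neutral delay increases it by one.

```python
import math

def discontinuity_tree(t0, t_end, delays):
    """Propagate discontinuity points forward through the delays.

    delays is a list of (delay, is_neutral) pairs.  A neutral delay
    preserves the discontinuity order; a non-neutral delay smooths it by
    one order.  Returns a dict mapping each discontinuity time (rounded
    for use as a dictionary key) to its minimal order."""
    orders = {round(t0, 10): 0}
    frontier = [round(t0, 10)]
    while frontier:
        t = frontier.pop()
        for d, neutral in delays:
            b = round(t + d, 10)
            if b > t_end:
                continue
            o = orders[t] + (0 if neutral else 1)
            if o < orders.get(b, float("inf")):
                orders[b] = o
                frontier.append(b)
    return orders

# delay 1 appears in a neutral (derivative) term, delay pi in a non-neutral term
tree = discontinuity_tree(0.0, 8.0, [(1.0, True), (math.pi, False)])
```

The resulting orders agree with the tree above: the integers keep order 0, π and 1 + π have order 1, and 2 π and 1 + 2 π have order 2.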
Define a command that shows a plot of x^(k)(t) and x^(k+1)(t) for a discontinuity of order k.

In[116]:= ShowDiscontinuity[{dt_, o_}, ifun_, d_] := Quiet[Plot[Evaluate[{Derivative[o][ifun][t], Derivative[o + 1][ifun][t]}], ...]]
Plot as a layered graph, showing the discontinuity plot as a tooltip for each discontinuity.

In[117]:= LayeredGraphPlot[tree, Left, VertexLabeling -> True, VertexRenderingFunction -> Function[Tooltip[{White, EdgeForm[Black], Disk[#, .3], Black, Text[#2[[1]], #1]}, ShowDiscontinuity[#2, First[x /. sol], 1]]]]
Out[117]= [layered graph of the discontinuity tree: the chain 0 → 1 → 2 → ⋯ → 8 from the delay 1, with branches π, 1 + π, 2 + π, 3 + π, 4 + π, 2 π, and 1 + 2 π from the delay π]
Storing History Data

Once the solution has advanced beyond the first discontinuity point, some of the delayed values that need to be computed are outside of the domain of the initial history function, and the computed solution needs to be used to get the values, typically by interpolating between steps previously taken. For the DDE solution to be accurate, it is essential that the interpolation be as accurate as the method. This is achieved by using dense output for the ODE integration method (the output you get if you use the option InterpolationOrder -> All in NDSolve). NDSolve has a general algorithm for obtaining dense output from most methods, so you can use just about any method as the integrator. Some methods, including the default for DDEs, have their own way of getting dense output, which is usually more efficient than the general method. Methods that are of low enough order, such as "ExplicitRungeKutta" with "DifferenceOrder" -> 3, can just use a cubic Hermite polynomial as the dense output, so there is essentially no extra cost in keeping the history.

Since the history data is accessed frequently, it needs to have a quick look-up mechanism to determine which step to interpolate within. In NDSolve, this is done with a binary search mechanism, and the search time is negligible compared with the cost of an actual function evaluation. The data for each successful step is saved before attempting the next step and is saved in a data structure that can repeatedly be expanded efficiently. When NDSolve produces the solution, it simply takes this data and restructures it into an InterpolatingFunction object, so DDE solutions are always returned with dense output.
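The storage scheme just described can be sketched in a few lines of Python (an illustrative sketch under stated assumptions, not NDSolve's actual data structure): accepted steps (t, x, x') are appended to growable arrays, a binary search locates the bracketing step, and a cubic Hermite polynomial provides the dense output.

```python
import bisect
import math

class History:
    """Stores accepted steps (t, x, x') and interpolates with a cubic
    Hermite polynomial on the step containing the query point."""

    def __init__(self):
        self.ts, self.xs, self.dxs = [], [], []

    def add_step(self, t, x, dx):
        self.ts.append(t); self.xs.append(x); self.dxs.append(dx)

    def __call__(self, t):
        i = bisect.bisect_right(self.ts, t) - 1   # binary search for the step
        i = max(0, min(i, len(self.ts) - 2))
        t0, t1 = self.ts[i], self.ts[i + 1]
        h, s = t1 - t0, (t - t0) / (t1 - t0)
        h00 = 2*s**3 - 3*s**2 + 1                 # Hermite basis functions
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return (h00*self.xs[i] + h*h10*self.dxs[i]
                + h01*self.xs[i+1] + h*h11*self.dxs[i+1])

# Store sin(t) at steps of size 0.1 and check the mid-step interpolation error
hist = History()
t = 0.0
while t <= 10.0001:
    hist.add_step(t, math.sin(t), math.cos(t))
    t += 0.1
err = max(abs(hist(0.05 + 0.1*k) - math.sin(0.05 + 0.1*k)) for k in range(99))
```

For a step size h, the cubic Hermite error is O(h^4), so with h = 0.1 the observed error is far below the default tolerances — consistent with the claim that this form of dense output costs essentially nothing in accuracy for a third-order method.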
The Method of Steps

For constant delays, it is possible to get the entire set of discontinuities at fixed times. The idea of the method of steps is to simply integrate the smooth function over these intervals and restart on the next interval, being sure to reevaluate the function from the right. As long as the intervals do not get too small, the method works quite well in practice. The method currently implemented for NDSolve is based on the method of steps.
Symbolic method of steps

This section defines a symbolic method of steps that illustrates how the method works. Note that to keep the code simpler and more to the point, it does not do any real argument checking. Also, the data structure and look-up for the history are not done in an efficient way, but for symbolic solutions this is a minor issue.

Use DSolve to integrate over an interval where the solution is smooth.

In[16]:= IntegrateSmooth[rhs_, history_, delayvars_, pfun_, dvars_, {t_, t0_, t1_}] := ...
Define a method of steps function that returns Piecewise functions.

In[21]:= DDESteps[rhsin_, phin_, dvarsin_, {t_, tinit_, tend_}] := ...
In[24]:= sol = DDESteps[x[t - 1] - x[t], Sin[t], x, {t, 0, 3}]
Out[24]= {x -> Function[{t}, Piecewise[{
  {Sin[t], t < 0},
  {-(1/2) E^-t (-Cos[1] + E^t Cos[1 - t] - Sin[1] + E^t Sin[1 - t]), 0 <= t < 1},
  {-(1/2) E^-t (E - Cos[1] - E t Cos[1] + E^t Cos[2 - t] - Sin[1] + E Sin[1] - E t Sin[1]), 1 <= t < 2},
  {-(1/4) E^-t (2 E - 2 E^2 + 2 E^2 t - 2 Cos[1] - E^2 Cos[1] - 2 E t Cos[1] + 2 E^2 t Cos[1] - E^2 t^2 Cos[1] + E^t Cos[3 - t] - 2 Sin[1] + 2 E Sin[1] - 3 E^2 Sin[1] - 2 E t Sin[1] + 4 E^2 t Sin[1] - E^2 t^2 Sin[1] - E^t Sin[3 - t]), 2 <= t <= 3}},
  Indeterminate]]}
Plot the solution.

In[25]:= Plot[Evaluate[{x[t], x'[t]} /. sol], {t, 0, 3}]
Out[25]= [plot of x(t) and x'(t) on 0 ≤ t ≤ 3]
Check the quality of the solution found by NDSolve by comparing to the exact solution.

In[26]:= ndsol = First[NDSolve[{x'[t] == -x[t] + x[t - 1], x[t /; t <= 0] == Sin[t]}, x, {t, 0, 3}]]; ...
Out[27]= [plot of the difference between the NDSolve and exact solutions, of order 10^-8, on 0 ≤ t ≤ 3]
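For the simpler propagation example x'(t) = x(t − 1) with x(t) = 1 for t ≤ 0 from the previous subsection, the method of steps can even be carried out with exact rational arithmetic in a few lines of Python (the helper names below are illustrative, not part of any library). On [k, k + 1], each polynomial piece is an antiderivative of the previous one, which also makes the one-order-per-interval smoothing visible in the coefficients.

```python
from fractions import Fraction

def method_of_steps(num_intervals):
    """Exact method of steps for x'(t) = x(t - 1) with x(t) = 1 for t <= 0.
    On [k, k+1] write s = t - k; then x'(k + s) = (previous piece)(s), so
    each new piece is an antiderivative of the previous piece.  Pieces are
    ascending coefficient lists in the local variable s, exact rationals."""
    pieces = [[Fraction(1)]]              # history piece: x = 1 on [-1, 0]
    for _ in range(num_intervals):
        prev = pieces[-1]
        left = sum(prev)                  # continuity: value of prev at s = 1
        pieces.append([left] + [c / (i + 1) for i, c in enumerate(prev)])
    return pieces

def deriv_at(p, s):
    """Evaluate p'(s) for a coefficient list p."""
    return sum(i * c * s ** (i - 1) for i, c in enumerate(p) if i > 0)

pieces = method_of_steps(3)
# pieces[1] = 1 + s, pieces[2] = 2 + s + s^2/2, pieces[3] = 7/2 + 2 s + s^2/2 + s^3/6
```

The pieces show exactly the smoothing discussed earlier: x' jumps at t = 0, is continuous at t = 1 where x'' first jumps, and so on, with one extra continuous derivative gained per interval.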
The method will also work for neutral DDEs.

Find the solution for the neutral DDE x'(t) = x'(t − 1) − x(t) with φ(t) = sin(t).

In[28]:= sol = DDESteps[x'[t - 1] - x[t], Sin[t], x, {t, 0, 3}]
Out[28]= {x -> Function[{t}, Piecewise[{
  {Sin[t], t < 0},
  {(1/2) E^-t (-Cos[1] + E^t Cos[1 - t] + Sin[1] - E^t Sin[1 - t]), 0 <= t < 1},
  {(1/2) E^-t (E - Cos[1] - 2 E Cos[1] + E t Cos[1] + E^t Cos[2 - t] + Sin[1] + E Sin[1] - E t Sin[1]), 1 <= t < 2},
  {(1/4) E^-t (2 E + 6 E^2 - 2 E^2 t - 2 Cos[1] - 4 E Cos[1] - 13 E^2 Cos[1] + 2 E t Cos[1] + 8 E^2 t Cos[1] - E^2 t^2 Cos[1] + E^t Cos[3 - t] + 2 Sin[1] + 2 E Sin[1] + 7 E^2 Sin[1] - 2 E t Sin[1] - 6 E^2 t Sin[1] + E^2 t^2 Sin[1] + E^t Sin[3 - t]), 2 <= t <= 3}},
  Indeterminate]]}
Plot the solution.

In[29]:= Plot[Evaluate[{x[t], x'[t]} /. sol], {t, 0, 3}]
Out[29]= [plot of x(t) and x'(t) on 0 ≤ t ≤ 3]
Check the quality of the solution found by NDSolve by comparing to the exact solution.

In[30]:= ndsol = First[NDSolve[{x'[t] == -x[t] + x'[t - 1], x[t /; t <= 0] == Sin[t]}, x, {t, 0, 3}]]; ...
Out[31]= [plot of the difference, of order 10^-7, on 0 ≤ t ≤ 3]
The symbolic method will also work with symbolic parameter values as long as DSolve is still able to find the solution.
Find the solution to a simple linear DDE with symbolic coefficients.

In[32]:= sol = DDESteps[λ x[t] + μ x[t - 1], t, x, {t, 0, 2}]

Out[32]= [piecewise solution in terms of the symbolic parameters λ and μ]
The reason the code was designed to take lists was so that it would work with systems.

Solve a system of DDEs.

In[33]:= ssol = DDESteps[{y[t], -x[t - 1]}, {t^2, 2 t}, {x, y}, {t, 0, 5}]
Out[33]= {x -> Function[{t}, Piecewise[{
  {t^2, t < 0},
  {(1/12) (-6 t^2 + 4 t^3 - t^4), 0 <= t < 1},
  {(1/360) (52 - 216 t + 165 t^2 - 140 t^3 + 60 t^4 - 12 t^5 + t^6), 1 <= t < 2},
  {(-3744 + 8640 t - 18088 t^2 + 11872 t^3 - 5040 t^4 + 1456 t^5 - 252 t^6 + 24 t^7 - t^8)/20160, 2 <= t < 3},
  {(1/1814400) (804654 - 2371680 t + 2210265 t^2 - 1643400 t^3 + 771120 t^4 - 236376 t^5 + 51030 t^6 - 7560 t^7 + 720 t^8 - 40 t^9 + t^10), 3 <= t < 4},
  {(1/239500800) (-168512584 + 394727040 t - 534391836 t^2 + 359788000 t^3 - 165844800 t^4 + 55576224 t^5 - 13370280 t^6 + 2347488 t^7 - 300960 t^8 + 27280 t^9 - 1650 t^10 + 60 t^11 - t^12), 4 <= t <= 5}},
  Indeterminate]],
 y -> Function[{t}, Piecewise[{
  {2 t, t < 0},
  {(1/3) (-3 t + 3 t^2 - t^3), 0 <= t < 1},
  {(1/60) (-36 + 55 t - 70 t^2 + 40 t^3 - 10 t^4 + t^5), 1 <= t < 2},
  {(1080 - 4522 t + 4452 t^2 - 2520 t^3 + 910 t^4 - 189 t^5 + 21 t^6 - t^7)/2520, 2 <= t < 3},
  {(-237168 + 442053 t - 493020 t^2 + 308448 t^3 - 118188 t^4 + 30618 t^5 - 5292 t^6 + 576 t^7 - 36 t^8 + t^9)/181440, 3 <= t < 4},
  {(1/19958400) (32893920 - 89065306 t + 89947000 t^2 - 55281600 t^3 + 23156760 t^4 - 6685140 t^5 + 1369368 t^6 - 200640 t^7 + 20460 t^8 - 1375 t^9 + 55 t^10 - t^11), 4 <= t <= 5}},
  Indeterminate]]}
Plot the solution.

In[34]:= Plot[Evaluate[{x[t], y[t]} /. ssol], {t, 0, 5}]
Out[34]= [plot of x(t) and y(t) on 0 ≤ t ≤ 5]
Check the quality of the solution found by NDSolve by comparing to the exact solution.

In[35]:= ndssol = First[NDSolve[{x'[t] == y[t], y'[t] == -x[t - 1], x[t /; t <= 0] == t^2, y[t /; t <= 0] == 2 t}, {x, y}, {t, 0, 5}]]; ...
Out[36]= [plot of the differences, of order 10^-8, on 0 ≤ t ≤ 5]
Since the method computes the discontinuity tree, it will also work for multiple constant delays. However, with multiple delays the solution may become quite complicated quickly, and DSolve can bog down with huge expressions.

Solve a nonlinear neutral DDE with two delays.

In[37]:= sol = DDESteps[x[t] (x[t - Log[2]] - x'[t - 1]), 1, x, {t, 0, 2}]
Out[37]= {x -> Function[{t}, Piecewise[{
  {1, t < 0},
  {E^t, 0 <= t < Log[2]},
  {..., Log[2] <= t < 1},
  {..., 1 <= t < 2 Log[2]},
  {..., 2 Log[2] <= t < 1 + Log[2]},
  {..., 1 + Log[2] <= t <= 2}},
  Indeterminate]]}

(The branches on the later intervals involve increasingly complicated expressions containing nested exponentials and ExpIntegralEi terms.)
Plot the solution.

In[38]:= Plot[Evaluate[{x[t], x'[t]} /. sol], {t, 0, 2}]

Out[38]= [plot of x(t) and x'(t) on 0 ≤ t ≤ 2]
Check the quality of the solution found by NDSolve by comparing to the exact solution.

In[39]:= ndsol = First[NDSolve[{x'[t] == x[t] (x[t - Log[2]] - x'[t - 1]), x[t /; t <= 0] == 1}, x, {t, 0, 2}]]; ...
Out[40]= [plot of the difference, of order 10^-7, on 0 ≤ t ≤ 2]
Examples

Lotka-Volterra equations with delay

The Lotka-Volterra system models the growth and interaction of animal species, assuming that the effect of one species on another is continuous and immediate. A delayed effect of one species on another can be modeled by introducing time lags in the interaction terms. Consider the system

Y1'(t) = Y1(t) (Y2(t − τ2) − 1), Y2'(t) = Y2(t) (2 − Y1(t − τ1)).    (9)

With no delays, τ1 = τ2 = 0, the system (9) has an invariant H(t) = 2 ln Y1 − Y1 + ln Y2 − Y2 that is constant for all t, and there is a (neutrally) stable periodic solution.
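The invariance of H for the undelayed system can be checked numerically with a short standalone RK4 integration (a hedged Python sketch, separate from the NDSolve computation below; helper names are illustrative):

```python
import math

def rhs(y):
    """Undelayed Lotka-Volterra right-hand side."""
    y1, y2 = y
    return (y1 * (y2 - 1.0), y2 * (2.0 - y1))

def rk4_step(y, h):
    def at(a, k):
        return (y[0] + a * k[0], y[1] + a * k[1])
    k1 = rhs(y)
    k2 = rhs(at(h / 2, k1))
    k3 = rhs(at(h / 2, k2))
    k4 = rhs(at(h, k3))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def invariant(y):
    """H = 2 ln Y1 - Y1 + ln Y2 - Y2, constant along exact solutions."""
    return 2 * math.log(y[0]) - y[0] + math.log(y[1]) - y[1]

y, h = (1.0, 1.0), 0.01
h0 = invariant(y)
for _ in range(2500):        # integrate to t = 25
    y = rk4_step(y, h)
drift = abs(invariant(y) - h0)
```

The drift in H over the whole interval is at the level of the integrator's truncation error, consistent with H being a conserved quantity of the undelayed system.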
Compare the solution with and without delays.

In[13]:= lvsystem[τ1_, τ2_] := {Y1'[t] == Y1[t] (Y2[t - τ1] - 1), Y1[0] == 1, Y2'[t] == Y2[t] (2 - Y1[t - τ2]), Y2[0] == 1};
lv = First[NDSolve[lvsystem[0, 0], {Y1, Y2}, {t, 0, 25}]]; ...
Out[16]= [parametric plots of (Y1, Y2) for the undelayed and delayed systems]
In this example, the effect of even a small delay is to destabilize the periodic orbit. With different parameters in the delayed Lotka-Volterra system, it has been shown that there are globally attractive equilibria. [TZ08]
Enzyme kinetics

Consider the system

y1'(t) = Is − z y1(t)
y2'(t) = z y1(t) − y2(t)
y3'(t) = y2(t) − y3(t)
y4'(t) = y3(t) − (1/2) y4(t)

with

z = k1 / (1 + a (y4(t − τ))^n)    (10)

modeling enzyme kinetics, where Is is a substrate supply maintained at a constant level and n molecules of the end product y4 inhibit the reaction step y1 → y2. [HNW93] The system has an equilibrium when {y1 = Is/z, y2 = y3 = Is, y4 = 2 Is}.
Investigate solutions of (10) starting a small perturbation away from the equilibrium.

In[43]:= Manipulate[Module[{t, y1, y2, y3, y4, z, sol}, z = k1/(1 + a y4[t - τ]^n);
  sol = First[NDSolve[{
     y1'[t] == Is - z y1[t], y1[t /; t <= 0] == Is (1 + a (2 Is)^n) + ε,
     y2'[t] == z y1[t] - y2[t], y2[t /; t <= 0] == Is,
     y3'[t] == y2[t] - y3[t], y3[t /; t <= 0] == Is,
     y4'[t] == y3[t] - y4[t]/2, y4[t /; t <= 0] == 2 Is},
    {y1, y2, y3, y4}, {t, 0, 200}]]; ...], ...]
Mackey-Glass equation

The Mackey-Glass equation x'(t) = a x(t − τ)/(1 + x(t − τ)^n) − b x(t) was proposed to model the production of white blood cells. There are both periodic and chaotic solutions.

Here is a periodic solution of the Mackey-Glass equation. The plot is only shown after t = 300 to let transients die out.

In[31]:= sol = First[NDSolve[{x'[t] == (1/4) x[t - 15]/(1 + x[t - 15]^10) - x[t]/10, x[t /; t <= 0] == 1/2}, x, {t, 0, 500}]]; ...
Out[32]= [phase-space plot of the periodic solution for 300 ≤ t ≤ 500]
Here is a chaotic solution of the Mackey-Glass equation.

In[44]:= sol = First[NDSolve[{x'[t] == (1/4) x[t - 17]/(1 + x[t - 17]^10) - x[t]/10, x[t /; t <= 0] == 1/2}, x, {t, 0, 500}]]; ...
Out[45]= [phase-space plot of the chaotic solution]
This shows an embedding of the solution above in 3D: {x(t), x(t − τ), x(t − 2 τ)}.

In[14]:= sol = First[NDSolve[{x'[t] == (1/4) x[t - 17]/(1 + x[t - 17]^10) - x[t]/10, x[t /; t <= 0] == 1/2}, x, {t, 0, 5000}, MaxSteps -> ∞]];
ParametricPlot3D[Evaluate[{x[t], x[t - 17], x[t - 34]} /. sol], {t, 500, 5000}, ...]
Out[15]= [3D parametric plot of the chaotic attractor]
It is interesting to check the accuracy of the chaotic solution.

Compute the chaotic solution with another method and plot log10(d) for the difference d between x(t) computed by the different methods.

In[16]:= solrk = First[NDSolve[{x'[t] == (1/4) x[t - 17]/(1 + x[t - 17]^10) - x[t]/10, x[t /; t <= 0] == 1/2}, x, {t, 0, 5000}, MaxSteps -> ∞, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 3}]]; ...
Out[17]= [plot of log10 of the difference, which grows to order 1 over 0 ≤ t ≤ 5000]
By the end of the interval, the difference between the methods is of order 1. Large deviation is typical in chaotic systems, and in practice it is not possible, or even necessary, to get a very accurate solution over a large interval. However, if you do want a high-quality solution, NDSolve allows you to use higher precision. For DDEs with higher precision, the "StiffnessSwitching" method is recommended.

Compute the chaotic solution with higher precision and tolerances.

In[18]:=
hpsol = First[NDSolve[{x'[t] == (1/4) x[t - 17]/(1 + x[t - 17]^10) - x[t]/10, x[t /; t <= 0] == 1/2}, x, {t, 0, 5000}, MaxSteps -> ∞, Method -> "StiffnessSwitching", WorkingPrecision -> 32]];
Plot the three solutions near the final time.

In[19]:= Plot[Evaluate[x[t] /. {hpsol, sol, solrk}], ...]
Out[19]= [plot of the three solutions for t near 5000, which differ visibly]
Norms in NDSolve

NDSolve uses norms of error estimates to determine when solutions satisfy error tolerances. In nearly all cases the norm has been weighted, or scaled, such that it is less than 1 if error tolerances have been satisfied and greater than 1 if error tolerances are not satisfied. One significant advantage of such a scaled norm is that a given method can be written without explicit reference to tolerances: the satisfaction of tolerances is found by comparing the scaled norm to 1, thus simplifying the code required for checking error estimates within methods. Suppose that v is a vector and u is a reference vector used to compute weights (typically u is an approximate solution vector). Then the scaled vector w to which the norm is applied has components:

wi = vi / (ta + tr |ui|)    (11)
where the absolute and relative tolerances ta and tr are derived respectively from the AccuracyGoal -> ag and PrecisionGoal -> pg options by ta = 10^-ag and tr = 10^-pg. The actual norm used is determined by the setting for the NormFunction option given to NDSolve.

option name    default value
NormFunction   Automatic      a function to use to compute norms of error estimates in NDSolve

NormFunction option to NDSolve.
The setting for the NormFunction option can be any function that returns a scalar for a vector argument and satisfies the properties of a norm. If you specify a function that does not satisfy the required properties of a norm, NDSolve will almost surely run into problems and give an answer, if any, which is incorrect. The default value of Automatic means that NDSolve may use different norms for different methods. Most methods use an infinity-norm, but the IDA method for DAEs uses a 2-norm because that helps maintain smoothness in the merit function for finding roots of the residual. It is strongly recommended that you use Norm with a particular value of p. For this reason, you can also use the shorthand NormFunction -> p in place of NormFunction -> (Norm[#, p]/Length[#]^(1/p) &). The most commonly used implementations for p = 1, p = 2, and p = ∞ have been specially optimized for speed.

This compares the overall error for computing the solution to the simple harmonic oscillator over 100 cycles with different norms specified.

In[1]:= Map[First[(1 - x[100 π]) /. NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0}, x, {t, 0, 100 π}, Method -> "ExplicitRungeKutta", NormFunction -> #]] &, {1, 2, ∞}]
Out[1]= {8.62652×10^-8, 7.50564×10^-8, 5.81547×10^-8}
The reason that the error decreases with increasing p is that the norms are normalized by multiplying with 1/n^(1/p), where n is the length of the vector. This is often important in NDSolve because in many cases, an attempt is being made to check the approximation to a function, where more points should give a better approximation, or less error. Consider a finite difference approximation to the first derivative of a periodic function u given by

u'_i = (u_{i+1} − u_i)/h

where u_i = u(x_i) on a grid with uniform spacing h = x_{i+1} − x_i. In Mathematica, this can easily be computed using ListCorrelate.

This computes the error of the first derivative approximation for the cosine function on a grid with 16 points covering the interval [0, 2 π].

In[2]:= h = 2 π/16.; grid = h Range[16]; err16 = Sin[grid] - ListCorrelate[{1, -1}/h, Cos[grid], {1, 1}]
Out[2]= {-0.169324, -0.11903, -0.0506158, 0.0255046, 0.0977423, 0.1551, 0.188844, 0.193839, 0.169324, 0.11903, 0.0506158, -0.0255046, -0.0977423, -0.1551, -0.188844, -0.193839}
This computes the error of the first derivative approximation for the cosine function on a grid with 32 points covering the interval [0, 2 π].

In[3]:= h = 2 π/32.; grid = h Range[32]; err32 = Sin[grid] - ListCorrelate[{1, -1}/h, Cos[grid], {1, 1}]
Out[3]= {-0.0947283, -0.0879564, -0.0778045, -0.0646625, -0.0490356, -0.0315243, -0.0128016, 0.00641315, 0.0253814, 0.0433743, 0.0597003, 0.0737321, 0.0849304, 0.0928648, 0.0972306, 0.0978598, 0.0947283, 0.0879564, 0.0778045, 0.0646625, 0.0490356, 0.0315243, 0.0128016, -0.00641315, -0.0253814, -0.0433743, -0.0597003, -0.0737321, -0.0849304, -0.0928648, -0.0972306, -0.0978598}
It is quite apparent that the pointwise error is significantly less with a larger number of points. The 2-norms of the vectors, however, are of the same order of magnitude.

In[4]:= {Norm[err16, 2], Norm[err32, 2]}

Out[4]= {0.552985, 0.392279}
The norms of the vectors are comparable because the number of components in the vector has increased, so the usual linear algebra norm does not properly reflect the convergence. Normalizing by multiplying by 1/n^(1/p) reflects the convergence in the function space properly. The normalized 2-norms of the vectors reflect the convergence to the actual function. Since the approximation is first order, doubling the number of grid points should approximately halve the error.

In[5]:= {Norm[err16, 2]/Sqrt[16], Norm[err32, 2]/Sqrt[32]}

Out[5]= {0.138246, 0.0693457}
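The same computation can be reproduced independently of Mathematica. This Python sketch rebuilds the cyclic forward-difference error by hand (helper names are choices made here) and checks both the raw and the normalized 2-norms against the values above.

```python
import math

def fd_error(n):
    """Error of the cyclic difference approximation (u_i - u_{i+1})/h,
    applied to u = cos, against the target derivative values sin(x_i),
    on the n-point grid x_i = i*h, h = 2*pi/n (matching ListCorrelate
    with kernel {1, -1}/h and cyclic overhang {1, 1})."""
    h = 2 * math.pi / n
    grid = [h * (i + 1) for i in range(n)]
    u = [math.cos(x) for x in grid]
    return [math.sin(grid[i]) - (u[i] - u[(i + 1) % n]) / h for i in range(n)]

def norm2(v):
    return math.sqrt(sum(x * x for x in v))

err16, err32 = fd_error(16), fd_error(32)
raw = (norm2(err16), norm2(err32))                         # ordinary 2-norms
scaled = (norm2(err16) / 4, norm2(err32) / math.sqrt(32))  # divided by sqrt(n)
```

The raw norms barely change as the grid is refined, while the normalized norms halve, reflecting the first-order convergence of the approximation in the function space.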
Note that if you specify a function as the option value, and you intend to use it for PDE or function approximation solutions, you should be sure to include a proper normalization in the function.
ScaledVectorNorm

Methods that have error control need to determine whether a step satisfies local error tolerances or not. To simplify the process of checking this, the utility function ScaledVectorNorm does the scaling (11) and computes the norm. The table includes the formulas for specific values of p for reference.
ScaledVectorNorm[p, {tr, ta}][v, u]
  compute the normalized p-norm of the vector v using the scaling (11) with reference vector u and relative and absolute tolerances tr and ta

ScaledVectorNorm[fun, {tr, ta}][v, u]
  compute the norm of the vector v using the scaling (11) with reference vector u, tolerances tr and ta, and the norm function fun

ScaledVectorNorm[2, {tr, ta}][v, u]
  compute sqrt((1/n) Σ_{i=1}^{n} (vi/(ta + tr |ui|))^2), where n is the length of the vectors v and u

ScaledVectorNorm[∞, {tr, ta}][v, u]
  compute max(|vi|/(ta + tr |ui|)), 1 ≤ i ≤ n, where n is the length of the vectors v and u

ScaledVectorNorm.

This sets up a scaled vector norm object with the default machine-precision tolerances used in NDSolve.

In[10]:= svn = NDSolve`ScaledVectorNorm[2, {10.^-8, 10.^-8}]

Out[10]= NDSolve`ScaledVectorNorm[2, {1.×10^-8, 1.×10^-8}]
This applies the scaled norm object with a sample error and solution reference vector.

In[11]:= svn[{9.×10^-9, 10.^-8}, {2., 1.}]

Out[11]= 0.412311
Because of the absolute tolerance term, the value comes out reasonably even if some of the components of the reference solution are zero.

In[12]:= svn[{9.×10^-9, 10.^-8, 2×10^-8}, {1., 0., 0.}]

Out[12]= 1.31688
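The scaled norm is easy to reproduce. The following Python sketch implements the p = 2 formula from the table above (the function name is illustrative, not an NDSolve API) and reproduces the two values just computed.

```python
import math

def scaled_vector_norm_2(v, u, tr=1e-8, ta=1e-8):
    """Normalized, scaled 2-norm used for error control:
    sqrt((1/n) * sum_i (v_i / (ta + tr*|u_i|))**2)."""
    n = len(v)
    return math.sqrt(sum((vi / (ta + tr * abs(ui))) ** 2
                         for vi, ui in zip(v, u)) / n)

r1 = scaled_vector_norm_2([9e-9, 1e-8], [2.0, 1.0])        # ~0.412311
r2 = scaled_vector_norm_2([9e-9, 1e-8, 2e-8], [1.0, 0.0, 0.0])  # ~1.31688
```

A value below 1 (like r1) means the error estimate satisfies the tolerances; a value above 1 (like r2) means the step would be rejected.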
When setting up a method for NDSolve, you can get the appropriate ScaledVectorNorm object to use by using the "Norm" method function of the NDSolve`StateData object.

Here is an NDSolve`StateData object.

In[13]:= state = First[NDSolve`ProcessEquations[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0}, x, t]]

Out[13]= NDSolve`StateData[<0.>]
This gets the appropriate scaled norm to use from the state data.

In[14]:= svn = state["Norm"]

Out[14]= NDSolve`ScaledVectorNorm[∞, {1.05367×10^-8, 1.05367×10^-8}, NDSolve]

This applies it to a sample error vector using the initial condition as reference vector.

In[15]:= svn[{10.^-9, 10.^-8}, state@"SolutionVector"["Forward"]]

Out[15]= 0.949063
Stiffness Detection

Overview

Many differential equations exhibit some form of stiffness, which restricts the step size and hence the effectiveness of explicit solution methods. A number of implicit methods have been developed over the years to circumvent this problem. For the same step size, implicit methods can be substantially less efficient than explicit methods, due to the overhead associated with the intrinsic linear algebra. This cost can be offset by the fact that, in certain regions, implicit methods can take substantially larger step sizes. Several attempts have been made to provide user-friendly codes that automatically attempt to detect stiffness at runtime and switch between appropriate methods as necessary. A number of strategies that have been proposed to automatically equip a code with a stiffness detection device are outlined here. Particular attention is given to the problem of estimation of the dominant eigenvalue of a matrix in order to describe how stiffness detection is implemented in NDSolve. Numerical examples illustrate the effectiveness of the strategy.
Initialization

Load some packages with predefined examples and utility functions.

In[1]:= Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
Needs["FunctionApproximations`"];
Introduction

Consider the numerical solution of initial value problems:

y'(t) = f(t, y(t)), y(0) = y0, f : ℝ × ℝ^n → ℝ^n    (12)

Stiffness is a combination of problem, solution method, initial condition, and local error tolerances. Stiffness limits the effectiveness of explicit solution methods due to restrictions on the size of steps that can be taken. Stiffness arises in many practical systems as well as in the numerical solution of partial differential equations by the method of lines.
Example

The van der Pol oscillator is a non-conservative oscillator with nonlinear damping and is an example of a stiff system of ordinary differential equations:

y1'(t) = y2(t),
ε y2'(t) = −y1(t) + (1 − y1(t)^2) y2(t),

with ε = 3/1000. Consider initial conditions y1(0) = 2, y2(0) = 0 and solve over the interval t ∈ [0, 10].

The method "StiffnessSwitching" uses a pair of extrapolation methods by default:

† Explicit modified midpoint (Gragg smoothing), double-harmonic sequence 2, 4, 6, …
† Linearly implicit Euler, sub-harmonic sequence 2, 3, 4, …
Solution

This loads the problem from a package.

In[4]:= system = GetNDSolveProblem["VanderPol"];

Solve the system numerically using a nonstiff method.

In[5]:= solns = NDSolve[system, {T, 0, 10}, Method -> "Extrapolation"];

NDSolve::ndstf : At T == 0.022920104414210326`, system appears to be stiff. Methods Automatic, BDF or StiffnessSwitching may be more appropriate.

Solve the system using a method that switches when stiffness occurs.

In[6]:= sols = NDSolve[system, {T, 0, 10}, Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];
In[7]:= Plot[Evaluate[Part[sols, 1, All, 2]], {T, 0, 10}, PlotStyle -> {{Red}, {Blue}}, Axes -> False, Frame -> True]

Out[7]= [plot of the two solution components on 0 ≤ T ≤ 10]
Stiffness can often occur in regions that follow rapid transients. This plots the step sizes taken against time.

In[8]:= StepDataPlot[sols]

Out[8]= [log plot of the step sizes taken against time]
The problem is that when the solution is changing rapidly, there is little point in using a stiff solver, since local accuracy is the dominant issue. For efficiency, it would be useful if the method could automatically detect regions where local accuracy (and not stability) is important.
Linear Stability

Linear stability theory arises from the study of Dahlquist's scalar linear test equation:

y'(t) = λ y(t),  λ ∈ ℂ,  Re(λ) < 0    (13)

as a simplified model for studying the initial value problem (12). Stability is characterized by analyzing a method applied to (13) to obtain

yn+1 = R(z) yn    (14)

where z = h λ and R(z) is the (rational) stability function. The boundary of absolute stability is obtained by considering the region |R(z)| = 1.
Explicit Euler Method

The explicit or forward Euler method:

yn+1 = yn + h f(tn, yn)

applied to (13) gives R(z) = 1 + z. The shaded region represents instability, where |R(z)| > 1.

In[9]:= OrderStarPlot[1 + z, 1, z, FrameTicks -> True]

Out[9]= [plot of the stability region of the explicit Euler method: the unit disk centered at z = −1]
The Linear Stability Boundary is often taken as the intersection with the negative real axis. For the explicit Euler method LSB = -2.
For an eigenvalue λ = -1, linear stability requires the step size to satisfy h < 2, which is a very mild restriction. For an eigenvalue λ = -10^6, however, linear stability requires h < 2×10^-6, which is a very severe restriction.
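The restriction can be checked numerically. The following sketch (illustrative code, not part of the original tutorial) applies 50 forward Euler steps to y' = -y, y(0) = 1, with step sizes on either side of the stability boundary h = 2:

```
(* Forward Euler iterates: y_{n+1} = y_n + h (-y_n) = (1 - h) y_n. *)
eulerFinal[h_, n_] := Nest[(1 - h) # &, 1., n]
{eulerFinal[1.5, 50], eulerFinal[2.5, 50]}
```

The first value decays toward zero, while the second grows without bound.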
Example
This example shows the effect of stiffness on the step-size sequence when using an explicit Runge-Kutta method to solve a stiff system. The system models a chemical reaction.
In[10]:=
system = GetNDSolveProblem["Robertson"];
The system is solved by disabling the built-in stiffness detection.
In[11]:=
sol = NDSolve[system, Method -> {"ExplicitRungeKutta", "StiffnessTest" -> False}];
In[12]:=
StepDataPlot[sol]
Out[12]= (plot of the step sizes, which remain near 0.001-0.002 throughout 0 <= T <= 0.3)
† A large number of step rejections often has a negative impact on performance.
† The large number of steps taken adversely affects the accuracy of the computed solution.
The built-in detection does an excellent job of locating when stiffness occurs.
In[13]:=
sol = NDSolve[system, Method -> {"ExplicitRungeKutta", "StiffnessTest" -> True}];
Implicit Euler Method
The implicit or backward Euler method:
y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
applied to (13) gives:
R(z) = 1/(1 - z)
The method is unconditionally stable for the entire left half-plane.
In[14]:=
OrderStarPlot[1/(1 - z), 1, z, FrameTicks -> True]
Out[14]= (plot of the instability region, a unit disk centered at z = 1; the entire left half-plane is stable)
This means that to maintain stability there is no longer a restriction on the step size. The drawback is that an implicit system of equations now has to be solved at each integration step.
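A minimal sketch of why the step size is unrestricted (illustrative code, not part of the original tutorial): for the test equation y' = lam y, one backward Euler step solves y_{n+1} = y_n + h lam y_{n+1}, giving the explicit update y_{n+1} = y_n/(1 - h lam). Even a step size of h = 1 with lam = -10^6 produces rapidly decaying iterates:

```
(* Backward Euler update for the linear test equation y' = lam y. *)
beFinal[h_, lam_, n_] := Nest[#/(1 - h lam) &, 1., n]
beFinal[1., -1.*^6, 5]
```

For a nonlinear f, this division is replaced by the solution of an algebraic system, which is the cost referred to above.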
Type Insensitivity
A type-insensitive solver recognizes and responds efficiently to stiffness at each step and so is insensitive to the (possibly changing) type of the problem. One of the most established solvers of this class is LSODA [H83], [P83]. Later generations of LSODA, such as CVODE, no longer incorporate a stiffness detection device, because LSODA uses norm bounds to estimate the dominant eigenvalue and these bounds, as will be seen later, can be quite inaccurate.
The low order of A(α)-stable BDF methods means that LSODA and CVODE are not very suitable for solving systems with high accuracy, or systems where the dominant eigenvalue has a large imaginary part. Alternative methods, such as those based on extrapolation of linearly implicit schemes, do not suffer from these issues.
Much of the work on stiffness detection was carried out in the 1980s and 1990s using standalone FORTRAN codes. New linear algebra techniques and efficient software have since become available, and these are readily accessible in Mathematica. Stiffness can be a transient phenomenon, so detecting nonstiffness is equally important [S77], [B90].
"StiffnessTest" Method Option There are several approaches that can be used to switch from a nonstiff to a stiff solver.
Direct Estimation
A convenient way of detecting stiffness is to directly estimate the dominant eigenvalue of the Jacobian J of the problem (see [S77], [P83], [S83], [S84a], [S84c], [R87], and [HW96]).
If v denotes an approximation to the eigenvector corresponding to the dominant eigenvalue of the Jacobian, with ‖v‖ sufficiently small, then by the mean value theorem a good approximation to the leading eigenvalue is:
λ̃ = ‖f(t, y + v) - f(t, y)‖ / ‖v‖
Richardson's extrapolation provides a sequence of refinements that yields a quantity of this form, as do certain explicit Runge-Kutta methods. The cost is at most two function evaluations, and often at least one of these is available as a by-product of the numerical integration, so the estimate is reasonably inexpensive.
Let LSB denote the linear stability boundary, the intersection of the linear stability region with the negative real axis.
The product h λ̃ gives an estimate that can be compared to the linear stability boundary of a method in order to detect stiffness:
|h λ̃| ≤ s |LSB|  (15)
where s is a safety factor.
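A rough sketch of the estimate (illustrative code, not part of the original tutorial; the van der Pol right-hand side with μ = 1000 and the perturbation v are hypothetical choices, and a generic small perturbation stands in for an approximate eigenvector):

```
(* Finite-difference estimate of the dominant eigenvalue at a state y. *)
f[{y1_, y2_}] := {y2, 1000 (1 - y1^2) y2 - y1};
y = {2., 0.}; v = 1.*^-6 {1., 1.};
lambdaTilde = Norm[f[y + v] - f[y]]/Norm[v]
```

The product h*lambdaTilde can then be compared against s |LSB| as in (15).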
Description
The methods "DoubleStep", "Extrapolation", and "ExplicitRungeKutta" have the option "StiffnessTest", which can be used to identify whether the method, applied with the specified AccuracyGoal and PrecisionGoal tolerances to a given problem, is stiff.
The method option "StiffnessTest" itself accepts a number of options that implement a weak form of (15) in which the test is allowed to fail a specified number of times. The reason for this is that some problems can be only mildly stiff in a certain region, and an explicit integration method may still be efficient there.
"NonstiffTest" Method Option The “StiffnessSwitching“ method has the option “NonstiffTest“, which is used to switch back from a stiff method to a nonstiff method. The following settings are allowed for the option “NonstiffTest“ † None or False (perform no test). † "NormBound". † "Direct". † "SubspaceIteration". † "KrylovIteration". † "Automatic".
Switching to a Nonstiff Solver
An approach that is independent of the stiff method is used. Given the Jacobian J (or an approximation), compute one of:
† Norm bound: ‖J‖
† Spectral radius: ρ(J) = max |λ_i|
† Dominant eigenvalue: λ_i with |λ_i| > |λ_j| for j ≠ i
Many linear algebra techniques focus on solving a single problem to high accuracy. For stiffness detection, a succession of problems with solutions accurate to one or two digits is adequate.
For a numerical discretization 0 = t_0 < t_1 < … < t_n = T, consider a sequence of k Jacobian matrices on some subinterval(s): J_{t_i}, J_{t_{i+1}}, …, J_{t_{i+k-1}}. The spectra of this succession of matrices often change very slowly from step to step.
The goal is to find a way of estimating (bounds on) the dominant eigenvalues of a succession of matrices J_{t_i} that:
† Costs less than the work carried out in the linear algebra at each step of the stiff solver.
† Takes account of the step-to-step nature of the solver.
NormBound
A simple and efficient technique for obtaining a bound on the dominant eigenvalue is to use a norm of the Jacobian, ‖J‖_p, where typically p = 1 or p = ∞.
The computation has complexity O(n^2), which is less than the work carried out in the stiff solver. This is the approach used by LSODA.
† Norm bounds for dense matrices tend to overestimate, and the bounds become worse as the dimension increases.
† Norm bounds can be tight for sparse or banded matrices of quite large dimension.
The setting "NormBound" of the option "NonstiffTest" computes ‖J‖_1 and ‖J‖_∞ and returns the smaller of the two values.
Example The following Jacobian matrix arises in the numerical solution of the van der Pol system using a stiff solver. In[18]:=
a = 880., 1.<, 82623.532160943381, - 69.56342161343568<<; Bounds based on norms overestimate the spectral radius by more than an order of magnitude.
In[19]:=
8Abs@First@Eigenvalues@aDDD, Norm@a, 1D, Norm@a, InfinityD<
Out[19]= 896.6954, 2623.53, 2693.1<
Direct Eigenvalue Computation
For small problems (n ≤ 32) it can be efficient just to compute the dominant eigenvalue directly.
† Hermitian matrices use the LAPACK function xsyevr.
† General matrices use the LAPACK function xgeev.
The setting "Direct" of the option "NonstiffTest" computes the dominant eigenvalue of J using the same LAPACK routines as Eigenvalues.
For larger problems the cost of direct eigenvalue computation is O(n^3), which becomes prohibitive when compared to the cost of the linear algebra work in a stiff solver. A number of iterative schemes have been implemented for this purpose. These effectively work by approximating the dominant eigenspace in a smaller subspace and using dense eigenvalue methods for the smaller problem.
The Power Method
Shampine has proposed the use of the power method for estimating the dominant eigenvalue of the Jacobian [S91]. The power method is perhaps not a very well-respected method, but it has received a resurgence of interest due to its use in Google's page ranking.
The power method can be used when:
† A ∈ ℂ^(n×n) has n linearly independent eigenvectors (is diagonalizable)
† The eigenvalues can be ordered in magnitude as |λ1| > |λ2| ≥ … ≥ |λn|
† λ1 is the dominant eigenvalue of A
Description
Given a starting vector v_0 ∈ ℂ^n, compute v_k = A v_{k-1}. The Rayleigh quotient is used to compute an approximation to the dominant eigenvalue:
λ1^(k) = (v*_{k-1} A v_{k-1}) / (v*_{k-1} v_{k-1}) = (v*_{k-1} v_k) / (v*_{k-1} v_{k-1})
In practice, the approximate eigenvector is scaled at each step:
v̂_k = v_k / ‖v_k‖
Properties
The power method converges linearly with rate |λ2/λ1|, which can be slow. In particular, the method does not converge when applied to a matrix with a dominant complex conjugate pair of eigenvalues.
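The scaled power method with a Rayleigh quotient can be sketched in a few lines (illustrative code, not part of the original tutorial; the matrix is the van der Pol Jacobian shown earlier, whose spectral radius is about 96.7):

```
(* Scaled power iteration for a real matrix; since v is normalized,
   the Rayleigh quotient reduces to v.(a.v). *)
powerMethod[a_, v0_, iters_] := Module[{v = v0/Norm[v0], w},
  Do[w = a.v; v = w/Norm[w], {iters}];
  v.(a.v)]
a = {{0., 1.}, {2623.532160943381, -69.56342161343568}};
powerMethod[a, {1., 1.}, 100]
```

This should return approximately -96.7, the dominant (negative real) eigenvalue; for this matrix the rate |λ2/λ1| is about 0.28, so convergence is fast.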
Generalizations
The power method can be adapted to overcome the issue of equimodular eigenvalues (e.g., NAPACK). However, the modification does not generally address the slow rate of convergence for clustered eigenvalues.
There are two main approaches to generalizing the power method:
† Subspace iteration for small to medium dimensions
† Arnoldi iteration for large dimensions
Although the methods work quite differently, there are a number of core components that can be shared and optimized.
Subspace and Krylov iteration cost O(n^2 m) operations. They project an n × n matrix to an m × m matrix, where m ≪ n. The small matrix represents the dominant eigenspace, and the approximation uses dense eigenvalue routines.
Subspace Iteration
Subspace (or simultaneous) iteration generalizes the ideas in the power method by acting on m vectors at each step. Start with an orthonormal set of vectors V^(0) ∈ ℂ^(n×m), where usually m ≪ n:
V^(0) = [v1, …, vm]
Form the product with the matrix A:
Z^(k) = A V^(k-1)
In order to prevent all vectors from converging to multiples of the same dominant eigenvector v1 of A, they are orthonormalized:
Q^(k) R^(k) = Z^(k)  (reduced QR factorization)
V^(k) = Q^(k)
The orthonormalization step is expensive compared to the matrix product.
Rayleigh-Ritz Projection
Input: matrix A and an orthonormal set of vectors V
† Compute the Rayleigh quotient S = V* A V
† Compute the Schur decomposition U* S U = T
The matrix S has small dimension m × m. Note that the Schur decomposition can be computed in real arithmetic when S ∈ ℝ^(m×m), using a quasi upper-triangular matrix T.
Convergence
SRRIT converges linearly with rate |λ_{m+1}/λ_i|, i = 1, …, m. In particular, the rate for the dominant eigenvalue is |λ_{m+1}/λ1|. Therefore it can be beneficial to take, for example, m = 3 or more even if only the dominant eigenvalue is of interest.
Error Control
A relative error test on successive approximations to the dominant eigenvalue is:
|λ1^(k) - λ1^(k-1)| / |λ1^(k)| ≤ tol
This alone is not sufficient, since it can be satisfied when convergence is slow. Moreover, if |λ_i| = |λ_{i-1}| or |λ_i| = |λ_{i+1}|, then the ith column of Q^(k) is not uniquely determined.
The residual test used in SRRIT is:
r^(k) = A q̂_i^(k) - Q̂^(k) t_i^(k),  ‖r^(k)‖_2 ≤ tol
where Q̂^(k) = Q^(k) U^(k), q̂_i^(k) is the ith column of Q̂^(k), and t_i^(k) is the ith column of T^(k). This is advantageous since it works for equimodular eigenvalues. The first column of the upper triangular matrix T^(k) is tested because an ordered Schur decomposition is used.
Implementation
There are several implementations of subspace iteration:
† LOPSI [SJ81]
† Subspace iteration with Chebyshev acceleration [S84b], [DS93]
† Schur Rayleigh-Ritz iteration ([BS97] and [SLEPc05])
The implementation used by "NonstiffTest" is based on Schur Rayleigh-Ritz (SRRIT) iteration [BS97]: "An attractive feature of SRRIT is that it displays monotonic consistency, that is, as the convergence tolerance decreases so does the size of the computed residuals" [LS96].
SRRIT makes use of an ordered Schur decomposition in which the eigenvalues of largest modulus appear in the upper-left entries. Modified Gram-Schmidt with reorthonormalization is used to form Q^(k), which is faster than Householder transformations.
The approximate dominant subspace V_{t_i}^(k) at integration time t_i is used to start the iteration at the next integration step t_{i+1}:
V_{t_{i+1}}^(0) = V_{t_i}^(k)
KrylovIteration
Given an n × m matrix V whose columns v_i comprise an orthogonal basis of a given subspace 𝒱, so that V^T V = I and span{v1, v2, …, vm} = 𝒱, the Rayleigh-Ritz procedure consists of computing H = V^T A V and solving the associated eigenproblem H y_i = θ_i y_i.
The approximate eigenpairs (λ̃_i, x̃_i) of the original problem satisfy λ̃_i = θ_i and x̃_i = V y_i; these are called Ritz values and Ritz vectors. The process works best when 𝒱 approximates an invariant subspace of A.
This process is effective when 𝒱 is equal to the Krylov subspace associated with a matrix A and a given initial vector x:
K_m(A, x) = span{x, A x, A^2 x, …, A^(m-1) x}
Description
The method of Arnoldi is a Krylov-based projection algorithm that computes an orthogonal basis of the Krylov subspace and produces a projected m × m matrix H with m ≪ n.
Input: matrix A, the number of steps m, an initial vector v1 of norm 1
Output: (V_m, H_m, f, β) with β = ‖f‖_2
For j = 1, 2, …, m - 1
  w = A v_j
  Orthogonalize w with respect to V_j to obtain h_{i,j} for i = 1, …, j
  h_{j+1,j} = ‖w‖_2 (if h_{j+1,j} = 0, stop)
  v_{j+1} = w / h_{j+1,j}
end
f = A v_m
Orthogonalize f with respect to V_m to obtain h_{i,m} for i = 1, …, m
β = ‖f‖_2
In the case of Arnoldi, H has unreduced upper Hessenberg form (upper triangular with an additional nonzero subdiagonal). Orthogonalization is usually carried out by means of a Gram-Schmidt procedure.
The quantities computed by the algorithm satisfy:
A V_m = V_m H_m + f e_m^*
The residual f gives an indication of proximity to an invariant subspace, and the associated norm β indicates the accuracy of the computed Ritz pairs:
‖A x̃_i - λ̃_i x̃_i‖_2 = ‖A V_m y_i - θ_i V_m y_i‖_2 = ‖(A V_m - V_m H_m) y_i‖_2 = β |e_m^* y_i|
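The Arnoldi process above can be sketched directly (illustrative code, not part of the original tutorial; real matrices are assumed and breakdown handling is omitted):

```
(* Arnoldi: returns the basis Vm (as columns), the Hessenberg matrix Hm,
   and beta = ||f||_2; Ritz values are then Eigenvalues[h]. *)
arnoldi[a_, v1_, m_] := Module[{v, h, w},
  v = {v1/Norm[v1]}; h = ConstantArray[0., {m, m}];
  Do[
    w = a.v[[j]];
    (* modified Gram-Schmidt against the current basis *)
    Do[h[[i, j]] = v[[i]].w; w = w - h[[i, j]] v[[i]], {i, j}];
    If[j < m,
      h[[j + 1, j]] = Norm[w];
      AppendTo[v, w/h[[j + 1, j]]]],
    {j, m}];
  {Transpose[v], h, Norm[w]}]
```

After m steps, the eigenvalues of h approximate the dominant eigenvalues of a, and the returned Norm[w] is the residual norm β.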
Restarting
The Ritz pairs converge quickly if the initial vector x is rich in the direction of the desired eigenvalues. When this is not the case, a restarting strategy is required in order to avoid excessive growth in both work and memory. There are several strategies for restarting, in particular:
† Explicit restart: a new starting vector is a linear combination of a subset of the Ritz vectors.
† Implicit restart: a new starting vector is formed from the Arnoldi process combined with an implicitly shifted QR algorithm.
Explicit restart is relatively simple to implement, but implicit restart is more efficient since it retains the relevant eigeninformation of the larger problem. However, implicit restart is difficult to implement in a numerically stable way. An alternative that is much simpler to implement, but achieves the same effect as implicit restart, is the Krylov-Schur method [S01].
Implementation
A number of software implementations are available, in particular:
† ARPACK [ARPACK98]
† SLEPc [SLEPc05]
The implementation in "NonstiffTest" is based on Krylov-Schur iteration.
Automatic Strategy
The "Automatic" setting uses an amalgamation of the methods as follows.
† For n ≤ 2 m, direct eigenvalue computation is used. Either m = min(n, m_si) or m = min(n, m_ki) is used, depending on which method is active.
† For n > 2 m, subspace iteration is used with a default basis size of m_si = 8. If the method succeeds, then the resulting basis is used to start the method at the next integration step.
† If subspace iteration fails to converge after max_si iterations, then the dominant vector is used to start the Krylov method with a default basis size of m_ki = 16. Subsequent integration steps use the Krylov method, starting with the resulting vector from the previous step.
† If Krylov iteration fails to converge after max_ki iterations, then norm bounds are used for the current step. The next integration step will again try to use Krylov iteration.
† Since they are so inexpensive, norm bounds are always computed when subspace or Krylov iteration is used, and the smaller of the absolute values is used.
Step Rejections Caching of the time of evaluation ensures that the dominant eigenvalue estimate is not recomputed for rejected steps. Stiffness detection is also performed for rejected steps since: † Step rejections often occur for nonstiff solvers when working near the stability boundary † Step rejections often occur for stiff solvers when resolving fast transients
Iterative Method Options
The iterative methods of "NonstiffTest" have options that can be modified:
In[20]:=
Options[NDSolve`SubspaceIteration]
Out[20]= {BasisSize -> Automatic, MaxIterations -> Automatic, Tolerance -> 1/10}
In[21]:=
Options[NDSolve`KrylovIteration]
Out[21]= {BasisSize -> Automatic, MaxIterations -> Automatic, Tolerance -> 1/10}
The default tolerance aims for just one correct digit, but often obtains substantially more accurate values, especially after a few successful iterations at successive steps. The default values limiting the number of iterations are:
† For subspace iteration, max_si = max(25, n/(2 m_si)).
† For Krylov iteration, max_ki = max(50, n/m_ki).
If these values are set too large, then a convergence failure becomes too costly. For difficult problems it is better to share the work of convergence across steps: since the methods effectively refine the basis vectors from the previous step, there is a reasonable chance of convergence in subsequent steps.
Latency and Switching
It is important to incorporate some form of latency in order to avoid a cycle in which the "StiffnessSwitching" method continually switches between stiff and nonstiff methods. The options "MaxRepetitions" and "SafetyFactor" of "StiffnessTest" and "NonstiffTest" are used for this purpose. The default settings allow switching to be quite reactive, which is appropriate for one-step integration methods.
† "StiffnessTest" is carried out at the end of a step taken with a nonstiff method. When either value of the option "MaxRepetitions" is reached, a step rejection occurs and the step is recomputed with a stiff method.
† "NonstiffTest" is preemptive. It is performed before a step is taken with a stiff solver, using the Jacobian matrix from the previous step.
Examples
Van der Pol
Select an example system.
In[22]:=
system = GetNDSolveProblem["VanderPol"];
StiffnessTest
The system is integrated successfully with the given method and the default option settings for "StiffnessTest".
In[23]:=
NDSolve[system, Method -> "ExplicitRungeKutta"]
Out[23]= {{Y1[T] -> InterpolatingFunction[{{0., 2.5}}, <>][T],
  Y2[T] -> InterpolatingFunction[{{0., 2.5}}, <>][T]}}
A longer integration is aborted and a message is issued when the stiffness test condition is not satisfied.
In[24]:=
NDSolve[system, {T, 0, 10}, Method -> "ExplicitRungeKutta"]
NDSolve::ndstf : At T == 4.353040548903924`, system appears to be stiff. Methods Automatic, BDF or StiffnessSwitching may be more appropriate.
Out[24]= {{Y1[T] -> InterpolatingFunction[{{0., 4.35304}}, <>][T],
  Y2[T] -> InterpolatingFunction[{{0., 4.35304}}, <>][T]}}
Using a unit safety factor and specifying that only one stiffness failure is allowed gives an effectively strict test. The specification uses the nested method option syntax.
In[25]:=
NDSolve[system, Method -> {"ExplicitRungeKutta",
   "StiffnessTest" -> {True, "MaxRepetitions" -> {1, 1}, "SafetyFactor" -> 1}}]
Out[25]= {{Y1[T] -> InterpolatingFunction[{{0., 0.}}, <>][T],
  Y2[T] -> InterpolatingFunction[{{0., 0.}}, <>][T]}}
NonstiffTest
For such a small system, direct eigenvalue computation is used. The example serves as a good test that the overall stiffness-switching framework is behaving as expected.
Set up a function to monitor the switch between stiff and nonstiff methods and the step size taken. Data for the stiff and nonstiff solvers is put in separate lists by using a different tag for Sow.
In[26]:=
SetAttributes[SowSwitchingData, HoldFirst];
SowSwitchingData[told_, t_, method_NDSolve`StiffnessSwitching] :=
  (Sow[{t, t - told}, method["ActiveMethodPosition"]]; told = t;);
Solve the system and collect the data for the method switching.
In[28]:=
T0 = 0;
data = Last[Reap[
    sol = NDSolve[system, {T, 0, 10}, Method -> "StiffnessSwitching",
      "MethodMonitor" :> (SowSwitchingData[T0, T, NDSolve`Self];)];
  ]];
Plot the step sizes taken using an explicit solver (blue) and an implicit solver (red).
In[30]:=
ListLogPlot[data, Axes -> False, Frame -> True, PlotStyle -> {Blue, Red}]
Out[30]= (log plot of the step sizes, ranging from about 0.002 to 0.05 over 0 <= T <= 10)
Compute the number of nonstiff and stiff steps taken (including rejected steps).
In[31]:=
Map[Length, data]
Out[31]= {266, 272}
CUSP
The cusp catastrophe model for the nerve impulse mechanism [Z72]:
-ε y'(t) = y(t)^3 + a y(t) + b
Combining this with the van der Pol oscillator gives rise to the CUSP system [HW96]:
∂y/∂t = -(1/ε) (y^3 + a y + b) + σ ∂²y/∂x²
∂a/∂t = b + (7/100) v + σ ∂²a/∂x²
∂b/∂t = (1 - a²) b - a - (2/5) y + (7/200) v + σ ∂²b/∂x²
where v = u/(u + 1/10), u = (y - 7/10) (y - 13/10), σ = 1/144, and ε = 10^-4.
Discretization of the diffusion terms using the method of lines is used to obtain a system of ODEs of dimension 3 n = 96. Unlike the van der Pol example, because of the size of the problem, iterative methods are used for the eigenvalue estimation.
Step Size and Order Selection
Select the problem to solve.
In[32]:=
system = GetNDSolveProblem["CUSP-Discretized"];
Set up a function to monitor the type of method used and the step size taken. Additionally, the order of the method is included as a Tooltip.
In[33]:=
SetAttributes@SowOrderData, HoldFirstD; SowOrderData@told_, t_, method_NDSolve`StiffnessSwitchingD := HSow@ Tooltip@8t, t - told<, method@“DifferenceOrder“DD, method@“ActiveMethodPosition“D D; told = t;L; Collect the data for the order of the method as the integration proceeds.
In[35]:=
T0 = 0; data = Last@ Reap@ sol = NDSolve@system, Method Ø “StiffnessSwitching“, “MethodMonitor“ ß HSowOrderData@T0, T, NDSolve`SelfD;L D; D D;
Plot the step sizes taken using an explicit solver (blue) and an implicit solver (red). A Tooltip shows the order of the method at each step.
In[37]:=
ListLogPlot[data, Axes -> False, Frame -> True, PlotStyle -> {Blue, Red}]
Out[37]= (log plot of the step sizes, ranging from about 10^-4 to 0.01 over 0 <= T <= 1)
Compute the total number of nonstiff and stiff steps taken (including rejected steps).
In[39]:=
Map[Length, data]
Out[39]= {46, 120}
Jacobian Example
Define a function to collect the first few Jacobian matrices.
In[41]:=
SetAttributes[StiffnessJacobianMonitor, HoldFirst];
StiffnessJacobianMonitor[i_, method_NDSolve`StiffnessSwitching] :=
  If[SameQ[method["ActiveMethodPosition"], 2] && i < 5,
    If[MatrixQ[#], Sow[#]; i = i + 1] & @ method["Jacobian"]];
In[43]:=
i = 0;
jacdata = Reap[
    sol = NDSolve[system, Method -> "StiffnessSwitching",
      "MethodMonitor" :> (StiffnessJacobianMonitor[i, NDSolve`Self];)];
  ][[-1, 1]];
A switch to a stiff method occurs near T ≈ 0.00113425, and the first test for nonstiffness occurs at the next step, t_k ≈ 0.00127887.
Graphical illustration of the Jacobian J_{t_k}.
In[45]:=
MatrixPlot[First[jacdata]]
Out[45]= (matrix plot of the 96 × 96 banded Jacobian)
Define a function to compute and display the first few eigenvalues of J_{t_k}, J_{t_{k+1}}, … together with the norm bounds.
In[46]:=
DisplayJacobianData[jdata_] := Module[{evdata, hlabels, vlabels},
  evdata = Map[
    Join[Eigenvalues[Normal[#], 4], {Norm[#, 1], Norm[#, Infinity]}] &,
    jdata];
  …]
In[47]:=
DisplayJacobianData@jacdataD J t1 l1 l2 l3
Out[47]=
J t2
J t3
Jt4
J t5
-56 013.2 -56 009.7 -56 000. -55 988.2 -55 959.6 -56 007.9 -56 003.8 -55 992.2 -55 978. -55 943.5 -55 671.3 -55 670.7 -55 669.1 -55 667.1 -55 662.2
-55 660.3 -55 658.3 -55 652.6 -55 645.7 -55 628.9 l4 °Jtk ¥1 56 027.5 56 024.1 56 014.4 56 002.6 55 973.9 °Jtk ¥¶
81 315.4
81 311.3
81 299.7
81 285.6
81 251.4
Norm bounds are quite sharp in this example.
Korteweg-de Vries
The Korteweg-de Vries partial differential equation is a mathematical model of waves on shallow water surfaces:
∂U/∂t + 6 U ∂U/∂x + ∂³U/∂x³ = 0
Consider the initial and boundary conditions:
U(0, x) = e^(-x²), U(t, -5) = U(t, 5)
and solve over the interval t ∈ [0, 1]. Discretization using the method of lines is used to form a system of 192 ODEs.
Step Sizes
Select the problem to solve.
In[48]:=
system = GetNDSolveProblem["Korteweg-deVries-PDE"];
The backward differentiation formula methods used in "LSODA" run into difficulties solving this problem.
In[49]:=
First[Timing[sollsoda = NDSolve[system, Method -> "LSODA"];]]
NDSolve::eerr : Warning: Scaled local spatial error estimate of 806.6079731642326` at T = 1.` in the direction of independent variable X is much greater than the prescribed error tolerance. Grid spacing with 193 points may be too large to achieve the desired accuracy or precision. A singularity may have formed or you may want to specify a smaller grid spacing using the MaxStepSize or MinPoints method options.
Out[49]= 0.971852
A plot shows that the step sizes rapidly decrease.
In[50]:=
StepDataPlot[sollsoda]
Out[50]= (log plot of rapidly decreasing step sizes)
In contrast, "StiffnessSwitching" immediately switches to the linearly implicit Euler method, which needs very few integration steps.
In[51]:=
First[Timing[sol = NDSolve[system, Method -> "StiffnessSwitching"];]]
Out[51]= 0.165974
In[52]:=
StepDataPlot[sol]
Out[52]= (log plot of the comparatively few, large steps taken)
The extrapolation methods never switch back to a nonstiff solver once the stiff solver is chosen at the beginning of the integration, so this is a form of worst-case example for the nonstiff detection. Despite this, the cost of using subspace iteration is only a few percent of the total integration time.
Compute the time taken with switching to a nonstiff method disabled.
In[53]:=
First[Timing[sol = NDSolve[system,
    Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];]]
Out[53]= 0.160974
Jacobian Example
Collect data for the first few Jacobian matrices using the previously defined monitor function.
In[54]:=
i = 0;
jacdata = Reap[
    sol = NDSolve[system, Method -> "StiffnessSwitching",
      "MethodMonitor" :> (StiffnessJacobianMonitor[i, NDSolve`Self];)];
  ][[-1, 1]];
Graphical illustration of the initial Jacobian J_{t_0}.
In[56]:=
MatrixPlot[First[jacdata]]
Out[56]= (matrix plot of the banded Jacobian)
Compute and display the first few eigenvalues of J_{t_k}, J_{t_{k+1}}, … together with the norm bounds.
In[57]:=
DisplayJacobianData[jacdata]
Out[57]=
     J_t1                        J_t2                       J_t3                       J_t4                       J_t5
λ1   1.37916×10^-8 + 32608. I    5.3745×10^-6 + 32608. I    0.0000209094 + 32608. I    0.0000428279 + 32608. I    0.0000678117 + 32608.1 I
λ2   1.37916×10^-8 - 32608. I    5.3745×10^-6 - 32608. I    0.0000209094 - 32608. I    0.0000428279 - 32608. I    0.0000678117 - 32608.1 I
λ3   5.90398×10^-8 + 32575.5 I   0.0000103621 + 32575.5 I   0.0000406475 + 32575.5 I   0.0000817789 + 32575.5 I   0.000125286 + 32575.6 I
λ4   5.90398×10^-8 - 32575.5 I   0.0000103621 - 32575.5 I   0.0000406475 - 32575.5 I   0.0000817789 - 32575.5 I   0.000125286 - 32575.6 I
‖J_tk‖_1   38928.4   38928.4   38928.4   38930.    38932.9
‖J_tk‖_∞   38928.4   38928.4   38928.4   38930.1   38933.
Norm bounds overestimate only slightly here, but more importantly they give no indication of the relative size of the real and imaginary parts.
Option Summary
StiffnessTest

option name         default value
"MaxRepetitions"    {3, 5}    specify the maximum number of successive and total times that the stiffness test (15) is allowed to fail
"SafetyFactor"      4/5       specify the safety factor to use in the right-hand side of the stiffness test (15)

Options of the method option "StiffnessTest".
NonstiffTest

option name         default value
"MaxRepetitions"    {2, ∞}    specify the maximum number of successive and total times that the stiffness test (15) is allowed to fail
"SafetyFactor"      4/5       specify the safety factor to use in the right-hand side of the stiffness test (15)

Options of the method option "NonstiffTest".
Structured Systems
Numerical Methods for Solving the Lotka-Volterra Equations
Introduction
The Lotka-Volterra system arises in mathematical biology and models the growth of animal species. Consider two species, where Y1(T) denotes the number of predators and Y2(T) denotes the number of prey. A particular case of the Lotka-Volterra differential system is:
Y1' = Y1 (Y2 - 1), Y2' = Y2 (2 - Y1),  (1)
where the prime denotes differentiation with respect to time T.
The Lotka-Volterra system (1) has an invariant H, which is constant for all T:
H(Y1, Y2) = 2 ln Y1 - Y1 + ln Y2 - Y2.  (2)
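That H is indeed a first integral can be verified symbolically (illustrative code, not part of the original tutorial; lowercase names are used to avoid clashing with the tutorial's variables):

```
(* dH/dT along solutions of (1): differentiate H and substitute the
   right-hand sides of the ODEs for the derivatives. *)
h = 2 Log[y1[t]] - y1[t] + Log[y2[t]] - y2[t];
Simplify[D[h, t] /. {y1'[t] -> y1[t] (y2[t] - 1),
    y2'[t] -> y2[t] (2 - y1[t])}]
(* simplifies to 0, confirming that H is conserved *)
```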
The level curves of the invariant (2) are closed, so the solution is periodic. It is desirable that the numerical solution of (1) is also periodic, but this is not always the case. Note that (1) is a Poisson system:
Y' = B(Y) ∇H(Y) = [ 0, -Y1 Y2 ; Y1 Y2, 0 ] [ 2/Y1 - 1 ; 1/Y2 - 1 ]  (3)
where H(Y) is defined in (2). Poisson systems and Poisson integrators are discussed in Chapter VII.2 of [HLW02] and in [MQ02].
Load some packages with predefined problems and utilities, and select the Lotka-Volterra system.
In[10]:=
Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
system = GetNDSolveProblem["LotkaVolterra"];
invts = system["Invariants"];
time = system["TimeData"];
vars = system["DependentVariables"];
step = 3/25;
Define a utility function for visualizing solutions.
In[18]:=
LotkaVolterraPlot@sol_, vars_, time_, opts___ ? OptionQD := Module@8data, data1, data2, ifuns, lplot, pplot<, ifuns = First@vars ê. solD; data1 = Part@ifuns, 1, 0D@“ValuesOnGrid“D; data2 = Part@ifuns, 2, 0D@“ValuesOnGrid“D; data = Transpose@8data1, data2
Explicit Euler
Use the explicit or forward Euler method to solve the system (1).
In[19]:=
fesol = NDSolve[system, Method -> "ExplicitEuler", StartingStepSize -> step];
LotkaVolterraPlot[fesol, vars, time]
Out[20]= (phase-plane plot of the numerical trajectory)
Backward Euler
Define the backward or implicit Euler method in terms of the RadauIIA implicit Runge-Kutta method and use it to solve (1). The resulting trajectory spirals from the initial conditions toward a fixed point at (2, 1) in a clockwise direction.
In[21]:=
BackwardEuler = {"FixedStep", Method -> {"ImplicitRungeKutta",
    "Coefficients" -> "ImplicitRungeKuttaRadauIIACoefficients",
    "DifferenceOrder" -> 1,
    "ImplicitSolver" -> {"FixedPoint", AccuracyGoal -> MachinePrecision,
      PrecisionGoal -> MachinePrecision, "IterationSafetyFactor" -> 1}}};
besol = NDSolve[system, Method -> BackwardEuler, StartingStepSize -> step];
LotkaVolterraPlot[besol, vars, time]
Out[23]= (phase-plane plot; the trajectory spirals toward the fixed point (2, 1))
Projection
Projection of the forward Euler method using the invariant (2) of the Lotka-Volterra equations gives a periodic solution.
In[24]:=
pfesol = NDSolve[system, Method -> {"Projection", Method -> "ExplicitEuler", "Invariants" -> invts}, StartingStepSize -> step];
LotkaVolterraPlot[pfesol, vars, time]
Out[25]=
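The projection step itself is simple: after each explicit Euler step, the point is pulled back to the level set of the invariant along the direction of its gradient. The following is a minimal Python sketch of this idea; the initial condition and step count are illustrative, and the real "Projection" method uses a more careful iteration with error control:

```python
import math

def lv(y1, y2):
    # Lotka-Volterra vector field: Y1' = Y1 (Y2 - 1), Y2' = Y2 (2 - Y1)
    return y1 * (y2 - 1.0), y2 * (2.0 - y1)

def H(y1, y2):
    # the invariant H(Y) = 2 ln Y1 - Y1 + ln Y2 - Y2
    return 2.0 * math.log(y1) - y1 + math.log(y2) - y2

def gradH(y1, y2):
    return 2.0 / y1 - 1.0, 1.0 / y2 - 1.0

def projected_euler_step(y1, y2, h, c):
    f1, f2 = lv(y1, y2)
    y1, y2 = y1 + h * f1, y2 + h * f2    # forward Euler predictor
    for _ in range(5):                   # Newton-like correction along grad H
        g1, g2 = gradH(y1, y2)
        lam = (H(y1, y2) - c) / (g1 * g1 + g2 * g2)
        y1, y2 = y1 - lam * g1, y2 - lam * g2
    return y1, y2

y1, y2 = 0.5, 1.0                        # illustrative initial condition
c = H(y1, y2)
for _ in range(200):
    y1, y2 = projected_euler_step(y1, y2, 0.05, c)
err = abs(H(y1, y2) - c)
# err is at roundoff level: the invariant is restored after every step
```

Because H is restored after every step, the numerical orbit stays on the closed level curve and is therefore periodic.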
Splitting
Another approach for obtaining the correct qualitative behavior is to additively split (9) into two systems:

Y1' = Y1 (Y2 - 1),   Y2' = 0
Y1' = 0,             Y2' = Y2 (2 - Y1).     (4)

By appropriately solving (4) it is possible to construct Poisson integrators. Define the equations for the splitting of the Lotka-Volterra equations.
In[26]:=
eqs = system["System"];
Y1 = eqs; Part[Y1, 2, 2] = 0;
Y2 = eqs; Part[Y2, 1, 2] = 0;
Symplectic Euler
Define the symplectic Euler method in terms of a splitting method using the backward and forward Euler methods for each system in (4).
In[31]:=
SymplecticEuler = {"Splitting", "DifferenceOrder" -> 1, "Equations" -> {Y1, Y2}, "Method" -> {BackwardEuler, "ExplicitEuler"}};
sesol = NDSolve[system, Method -> SymplecticEuler, StartingStepSize -> step];
The numerical solution using the symplectic Euler method is periodic. In[33]:=
LotkaVolterraPlot[sesol, vars, time]
Out[33]=
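For this particular splitting, the backward Euler substep can be solved in closed form, so the symplectic Euler method reduces to an explicit recurrence. A Python sketch follows; the initial condition, step size, and step count are illustrative assumptions:

```python
import math

def invariant(y1, y2):
    # H(Y) = 2 ln Y1 - Y1 + ln Y2 - Y2, conserved by the exact flow
    return 2.0 * math.log(y1) - y1 + math.log(y2) - y2

def symplectic_euler_step(y1, y2, h):
    # backward Euler on Y1' = Y1 (Y2 - 1), Y2' = 0, solved exactly for y1;
    # then forward Euler on Y1' = 0, Y2' = Y2 (2 - Y1) using the new y1
    y1 = y1 / (1.0 - h * (y2 - 1.0))
    y2 = y2 + h * y2 * (2.0 - y1)
    return y1, y2

y1, y2 = 0.5, 1.0                  # illustrative initial condition
h0 = invariant(y1, y2)
max_drift = 0.0
for _ in range(600):
    y1, y2 = symplectic_euler_step(y1, y2, h=0.05)
    max_drift = max(max_drift, abs(invariant(y1, y2) - h0))
# the invariant error oscillates but stays bounded: the orbit is periodic
```

In contrast to the forward Euler result, the error in H does not accumulate, which is the hallmark of a geometric integrator.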
Flows
Consider splitting the Lotka-Volterra equations and computing the flow (or exact solution) of each system in (4). The solutions can be found as follows, where the constants should be related to the initial conditions at each step.
In[34]:=
DSolve[Y1, vars, T]
Out[34]= {{Y2[T] -> C[1], Y1[T] -> E^(T (-1 + C[1])) C[2]}}
In[35]:=
DSolve[Y2, vars, T]
Out[35]= {{Y1[T] -> C[1], Y2[T] -> E^(T (2 - C[1])) C[2]}}
An advantage of locally computing the flow is that it yields an explicit, and hence very efficient, integration procedure. The "LocallyExact" method provides a general way of computing the flow of each splitting, using DSolve only during the initialization phase. Set up a hybrid symbolic-numeric splitting method and use it to solve the Lotka-Volterra system.
In[36]:=
SplittingLotkaVolterra = {"Splitting", "DifferenceOrder" -> 1, "Equations" -> {Y1, Y2}, "Method" -> {"LocallyExact", "LocallyExact"}};
spsol = NDSolve[system, Method -> SplittingLotkaVolterra, StartingStepSize -> step];
The numerical solution using the splitting method is periodic.
In[38]:=
LotkaVolterraPlot[spsol, vars, time]
Out[38]=
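The two flows computed above have a simple closed form, so the first-order composition of the exact flows can be written down directly. A Python sketch of this idea follows; the initial condition, step size, and step count are illustrative assumptions:

```python
import math

def flow1(y1, y2, t):
    # exact flow of Y1' = Y1 (Y2 - 1), Y2' = 0 (cf. Out[34])
    return y1 * math.exp(t * (y2 - 1.0)), y2

def flow2(y1, y2, t):
    # exact flow of Y1' = 0, Y2' = Y2 (2 - Y1) (cf. Out[35])
    return y1, y2 * math.exp(t * (2.0 - y1))

def splitting_step(y1, y2, h):
    # first-order (Lie-Trotter) composition of the two exact flows
    y1, y2 = flow1(y1, y2, h)
    return flow2(y1, y2, h)

def invariant(y1, y2):
    return 2.0 * math.log(y1) - y1 + math.log(y2) - y2

y1, y2 = 0.5, 1.0                  # illustrative initial condition
h0 = invariant(y1, y2)
max_drift = 0.0
for _ in range(500):
    y1, y2 = splitting_step(y1, y2, h=0.12)
    max_drift = max(max_drift, abs(invariant(y1, y2) - h0))
# positivity is automatic (only multiplications by exponentials occur)
# and the invariant error remains bounded
```

Each substep is an exact Poisson map, so their composition is a Poisson integrator, which explains the bounded invariant error and the periodic numerical orbit.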
Rigid Body Solvers

Introduction
The equations of motion for a free rigid body whose center of mass is at the origin are given by the following Euler equations (see [MR99]):

( y1' )   (  0        y3/I3   -y2/I2 ) ( y1 )
( y2' ) = ( -y3/I3    0        y1/I1 ) ( y2 )
( y3' )   (  y2/I2   -y1/I1    0     ) ( y3 )
Two quadratic first integrals of the system are:

I(y) = y1^2 + y2^2 + y3^2
H(y) = (1/2) (y1^2/I1 + y2^2/I2 + y3^2/I3).

The first constraint effectively confines the motion from ℝ^3 to a sphere. The second constraint represents the kinetic energy of the system and, in conjunction with the first invariant, effectively confines the motion to ellipsoids on the sphere. Numerical experiments for various methods are given in [HLW02], and a variety of NDSolve methods will now be compared.
Manifold Generation and Utility Functions
Load some useful packages.
In[6]:=
Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
Define Euler's equations for rigid body motion together with the invariants of the system.
In[8]:=
system = GetNDSolveProblem["RigidBody"];
eqs = system["System"];
vars = system["DependentVariables"];
time = system["TimeData"];
invariants = system["Invariants"];
The equations of motion evolve as closed curves on the unit sphere. This generates a three-dimensional graphics object to represent the unit sphere.
In[13]:=
UnitSphere = Graphics3D[{EdgeForm[], Sphere[]}, Boxed -> False];
This function superimposes a solution from NDSolve on a given manifold. In[14]:=
PlotSolutionOnManifold[sol_, vars_, time_, manifold_, opts___?OptionQ] :=
 Module[{solplot},
  solplot = ParametricPlot3D[Evaluate[vars /. sol], time, opts, Boxed -> False, Axes -> False];
  Show[solplot, manifold, opts]]
This function plots the various solution components.
In[15]:=
PlotSolutionComponents[sols_, vars_, time_, opts___?OptionQ] :=
 Module[{ifuns, plotopts},
  ifuns = vars /. First[sols];
  Table[
   plotopts = Sequence[PlotLabel -> StringForm["`1` vs time", Part[vars, i]], Frame -> True, Axes -> False];
   Plot[Evaluate[Part[ifuns, i]], time, opts, Evaluate[plotopts]],
   {i, Length[vars]}]]
Method Comparison
Various integration methods can be used to solve Euler's equations; they each have different associated costs and different dynamical properties.

Adams Multistep Method
Here an Adams method is used to solve the equations of motion.
In[21]:=
AdamsSolution = NDSolve[system, Method -> "Adams"];
This shows the solution trajectory by superimposing it on the unit sphere.
In[22]:=
PlotSolutionOnManifold[AdamsSolution, vars, time, UnitSphere, PlotRange -> All]
Out[22]=
The solution appears visually to give a closed curve on the sphere. However, a plot of the error reveals that neither constraint is conserved particularly well.
In[23]:=
InvariantErrorPlot[invariants, vars, T, AdamsSolution, PlotStyle -> {Red, Blue}]
Out[23]= (invariant error plot: the errors in both invariants grow to about 2.5*10^-7 over 0 <= T <= 30)
Euler and Implicit Midpoint Methods
This solves the equations of motion using Euler's method with a specified fixed step size.
In[16]:=
EulerSolution = NDSolve[system, Method -> {"FixedStep", Method -> "ExplicitEuler"}, StartingStepSize -> 1/20];
This solves the equations of motion using the implicit midpoint method with a specified fixed step size.
In[17]:=
ImplicitMidpoint = {"FixedStep", Method -> {"ImplicitRungeKutta", "Coefficients" -> "ImplicitRungeKuttaGaussCoefficients", "DifferenceOrder" -> 2, "ImplicitSolver" -> {"FixedPoint", "AccuracyGoal" -> MachinePrecision, "PrecisionGoal" -> MachinePrecision, "IterationSafetyFactor" -> 1}}};
IMPSolution = NDSolve[system, Method -> ImplicitMidpoint, StartingStepSize -> 3/10];
This shows the superimposition on the unit sphere of the numerical solution of the equations of motion for Euler's method (left) and the implicit midpoint rule (right).
In[19]:=
EulerPlotOnSphere = PlotSolutionOnManifold[EulerSolution, vars, time, UnitSphere, PlotRange -> All];
IMPPlotOnSphere = PlotSolutionOnManifold[IMPSolution, vars, time, UnitSphere, PlotRange -> All];
GraphicsArray[{EulerPlotOnSphere, IMPPlotOnSphere}]
Out[21]=
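The implicit midpoint rule does well here because, like all Gauss implicit Runge-Kutta methods, it conserves quadratic invariants exactly (up to the accuracy of the nonlinear solve), and both I(y) and H(y) are quadratic. A Python sketch with a simple fixed-point solver follows, using the moments of inertia I1 = 2, I2 = 1, I3 = 2/3 that appear in the splitting example later in this section; the initial condition is an illustrative assumption:

```python
def f(y):
    # rigid body Euler equations with I1 = 2, I2 = 1, I3 = 2/3
    y1, y2, y3 = y
    return (0.5 * y2 * y3, -y1 * y3, 0.5 * y1 * y2)

def implicit_midpoint_step(y, h, its=30):
    # solve ynew = y + h f((y + ynew)/2) by fixed-point iteration
    ynew = y
    for _ in range(its):
        mid = tuple(0.5 * (a + b) for a, b in zip(y, ynew))
        ynew = tuple(a + h * k for a, k in zip(y, f(mid)))
    return ynew

y = (0.45, 0.0, 0.89)              # illustrative initial condition
sphere0 = sum(c * c for c in y)
energy0 = 0.5 * (y[0]**2 / 2 + y[1]**2 + 1.5 * y[2]**2)
for _ in range(100):
    y = implicit_midpoint_step(y, 0.3)
sphere_err = abs(sum(c * c for c in y) - sphere0)
energy_err = abs(0.5 * (y[0]**2 / 2 + y[1]**2 + 1.5 * y[2]**2) - energy0)
# both quadratic invariants are conserved essentially to roundoff
```

With the step size h = 3/10 the fixed-point iteration is contractive, so a modest number of iterations resolves the implicit equation to machine precision.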
This shows the components of the numerical solution using Euler's method (left) and the implicit midpoint rule (right).
In[30]:=
EulerSolutionPlots = PlotSolutionComponents[EulerSolution, vars, time];
IMPSolutionPlots = PlotSolutionComponents[IMPSolution, vars, time];
GraphicsArray[Transpose[{EulerSolutionPlots, IMPSolutionPlots}]]
Out[32]= (grid of plots of the solution components Y1(T), Y2(T), and Y3(T) against time, for Euler's method in the left column and the implicit midpoint rule in the right column)
Orthogonal Projection Method
Here the "OrthogonalProjection" method is used to solve the equations.
In[33]:=
OPSolution = NDSolve[system, Method -> {"OrthogonalProjection", Dimensions -> {3, 1}, Method -> "ExplicitEuler"}, StartingStepSize -> 1/20];
Only the orthogonal constraint is conserved, so the curve is not closed.
In[34]:=
PlotSolutionOnManifold[OPSolution, vars, time, UnitSphere, PlotRange -> All]
Out[34]=
Plotting the error in the invariants against time shows that the orthogonal projection method conserves only one of the two invariants.
In[35]:=
InvariantErrorPlot[invariants, vars, T, OPSolution, PlotStyle -> {Red, Blue}]
Out[35]= (invariant error plot: the error in one invariant grows to about 0.03 over 0 <= T <= 30, while the other remains at roundoff level)
Projection Method
The "Projection" method takes a set of constraints and projects the solution onto a manifold at the end of each integration step.
Generally all the invariants of the problem should be used in the projection; otherwise, the numerical solution may actually be qualitatively worse than the unprojected solution. The following specifies the integration method and defers determination of the constraints until the invocation of NDSolve.
In[36]:=
ProjectionMethod = {"Projection", Method -> {"FixedStep", Method -> "ExplicitEuler"}, "Invariants" :> invts};
Projecting One Constraint
This projects the first constraint onto the manifold.
In[37]:=
invts = First[invariants];
projsol1 = NDSolve[system, Method -> ProjectionMethod, StartingStepSize -> 1/20];
PlotSolutionOnManifold[projsol1, vars, time, UnitSphere, PlotRange -> All]
Out[39]=
Only the first invariant is conserved.
In[40]:=
InvariantErrorPlot[invariants, vars, T, projsol1, PlotStyle -> {Red, Blue}]
Out[40]= (invariant error plot: the error in the second invariant grows to about 0.03 over 0 <= T <= 30, while the first remains at roundoff level)
This projects the second constraint onto the manifold.
In[41]:=
invts = Last[invariants];
projsol2 = NDSolve[system, Method -> ProjectionMethod, StartingStepSize -> 1/20];
PlotSolutionOnManifold[projsol2, vars, time, UnitSphere, PlotRange -> All]
Out[43]=
Only the second invariant is conserved.
In[44]:=
InvariantErrorPlot[invariants, vars, T, projsol2, PlotStyle -> {Red, Blue}]
Out[44]= (invariant error plot: the error in the first invariant grows to about 0.06 over 0 <= T <= 30, while the second remains at roundoff level)
Projecting Multiple Constraints
This projects both constraints onto the manifold.
In[45]:=
invts = invariants;
projsol = NDSolve[system, Method -> ProjectionMethod, StartingStepSize -> 1/20];
PlotSolutionOnManifold[projsol, vars, time, UnitSphere, PlotRange -> All]
Out[47]=
Now both invariants are conserved.
In[48]:=
InvariantErrorPlot[invariants, vars, T, projsol, PlotStyle -> {Red, Blue}]
Out[48]= (invariant error plot: the errors in both invariants remain at roundoff level, below about 4*10^-16, over 0 <= T <= 30)
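Projecting onto the intersection of both level sets amounts to a small least-squares Newton iteration each step: with G the 2x3 Jacobian of the constraints, solve (G G^T) lambda = g for the 2-vector lambda and correct y <- y - G^T lambda. A Python sketch with a forward Euler base method follows; the initial condition and step size are illustrative assumptions:

```python
def f(y):
    # rigid body Euler equations with I1 = 2, I2 = 1, I3 = 2/3
    y1, y2, y3 = y
    return (0.5 * y2 * y3, -y1 * y3, 0.5 * y1 * y2)

def constraints(y, c1, c2):
    y1, y2, y3 = y
    g1 = y1*y1 + y2*y2 + y3*y3 - c1                    # sphere
    g2 = 0.5 * (y1*y1/2 + y2*y2 + 1.5*y3*y3) - c2      # kinetic energy
    return g1, g2

def project(y, c1, c2, its=4):
    # least-squares Newton: solve (G G^T) lam = g, then y <- y - G^T lam
    for _ in range(its):
        y1, y2, y3 = y
        g1, g2 = constraints(y, c1, c2)
        J1 = (2*y1, 2*y2, 2*y3)        # gradient of the sphere constraint
        J2 = (0.5*y1, y2, 1.5*y3)      # gradient of the energy constraint
        a = sum(u*u for u in J1)
        b = sum(u*v for u, v in zip(J1, J2))
        c = sum(v*v for v in J2)
        det = a*c - b*b                # 2x2 normal equations
        l1 = (c*g1 - b*g2) / det
        l2 = (a*g2 - b*g1) / det
        y = tuple(w - l1*u - l2*v for w, u, v in zip(y, J1, J2))
    return y

y = (0.45, 0.0, 0.89)                  # illustrative initial condition
c1 = sum(w*w for w in y)
c2 = 0.5 * (y[0]**2/2 + y[1]**2 + 1.5*y[2]**2)
for _ in range(200):
    y = tuple(w + 0.05*k for w, k in zip(y, f(y)))   # forward Euler predictor
    y = project(y, c1, c2)
g1_err, g2_err = (abs(g) for g in constraints(y, c1, c2))
```

The iteration is well conditioned as long as the two constraint gradients are linearly independent along the orbit, which holds here away from the principal axes.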
"Splitting" Method
A splitting that yields an efficient explicit integration method was derived independently by McLachlan [M93] and Reich [R93]. Write the flow of an ODE y' = Y as y(t) = exp(t Y)(y(0)).
The differential system is split into three components, Y_H1, Y_H2, and Y_H3, each of which is Hamiltonian and can be solved exactly. The Hamiltonian systems are solved and recombined at each integration step as:

exp(t Y) ≈ exp(1/2 t Y_H1) exp(1/2 t Y_H2) exp(t Y_H3) exp(1/2 t Y_H2) exp(1/2 t Y_H1).

This defines an appropriate splitting into Hamiltonian vector fields.
In[49]:=
Grad[H_, x_?VectorQ] := Map[D[H, #] &, x];
isub = {I1 -> 2, I2 -> 1, I3 -> 2/3};
H1 = Y1[T]^2/(2 I1) /. isub;
H2 = Y2[T]^2/(2 I2) /. isub;
H3 = Y3[T]^2/(2 I3) /. isub;
JX = {{0, -Y3[T], Y2[T]}, {Y3[T], 0, -Y1[T]}, {-Y2[T], Y1[T], 0}};
YH1 = Thread[D[vars, T] == JX.Grad[H1, vars]];
YH2 = Thread[D[vars, T] == JX.Grad[H2, vars]];
YH3 = Thread[D[vars, T] == JX.Grad[H3, vars]];
Here is the differential system for Euler's equations.
In[58]:=
eqs
Out[58]= {Y1'[T] == (1/2) Y2[T] Y3[T], Y2'[T] == -Y1[T] Y3[T], Y3'[T] == (1/2) Y1[T] Y2[T]}
Here are the three split vector fields.
In[59]:=
YH1
Out[59]= {Y1'[T] == 0, Y2'[T] == (1/2) Y1[T] Y3[T], Y3'[T] == -(1/2) Y1[T] Y2[T]}
In[60]:=
YH2
Out[60]= {Y1'[T] == -Y2[T] Y3[T], Y2'[T] == 0, Y3'[T] == Y1[T] Y2[T]}
In[61]:=
YH3
Out[61]= {Y1'[T] == (3/2) Y2[T] Y3[T], Y2'[T] == -(3/2) Y1[T] Y3[T], Y3'[T] == 0}
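Each of these three fields generates a planar rotation (the remaining component is constant), so each flow can be written in closed form and the symmetric composition above can be applied directly. A Python sketch follows; the initial condition, step size, and step count are illustrative assumptions:

```python
import math

def rot(u, v, theta):
    # rotate the pair: (u, v) -> (u cos + v sin, v cos - u sin)
    c, s = math.cos(theta), math.sin(theta)
    return c * u + s * v, c * v - s * u

def flow1(y, t):
    # exact flow of Y_H1 (cf. Out[59]): y1 is constant
    y1, y2, y3 = y
    y2, y3 = rot(y2, y3, 0.5 * y1 * t)
    return (y1, y2, y3)

def flow2(y, t):
    # exact flow of Y_H2 (cf. Out[60]): y2 is constant
    y1, y2, y3 = y
    y1, y3 = rot(y1, y3, -y2 * t)
    return (y1, y2, y3)

def flow3(y, t):
    # exact flow of Y_H3 (cf. Out[61]): y3 is constant
    y1, y2, y3 = y
    y1, y2 = rot(y1, y2, 1.5 * y3 * t)
    return (y1, y2, y3)

def strang_step(y, h):
    # symmetric composition exp(h/2 Y_H1) exp(h/2 Y_H2) exp(h Y_H3) ...
    for fl, t in ((flow1, h/2), (flow2, h/2), (flow3, h),
                  (flow2, h/2), (flow1, h/2)):
        y = fl(y, t)
    return y

y = (0.45, 0.0, 0.89)              # illustrative initial condition
norm0 = sum(c * c for c in y)
for _ in range(600):
    y = strang_step(y, 0.05)
norm_err = abs(sum(c * c for c in y) - norm0)
# every substep is a rotation, so the sphere invariant holds to roundoff
```

Because the sphere invariant is preserved exactly by each rotation, only the kinetic energy invariant carries a (bounded) discretization error, which matches the behavior reported below for the "Splitting" method.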
Solution
This defines a symmetric second-order splitting method. The coefficients are automatically determined from the structure of the equations and are an extension of the Strang splitting.
In[62]:=
SplittingMethod = {"Splitting", "DifferenceOrder" -> 2, "Equations" -> {YH1, YH2, YH3, YH2, YH1}, "Method" -> {"LocallyExact"}};
This solves the system and graphically displays the solution.
In[63]:=
splitsol = NDSolve[system, Method -> SplittingMethod, StartingStepSize -> 1/20];
PlotSolutionOnManifold[splitsol, vars, time, UnitSphere, PlotRange -> All]
Out[64]=
One of the invariants is preserved up to roundoff, while the error in the second invariant remains bounded.
In[65]:=
InvariantErrorPlot[invariants, vars, T, splitsol, PlotStyle -> {Red, Blue}]
Out[65]= (invariant error plot: the error in one invariant oscillates but remains bounded below about 1.2*10^-4 over 0 <= T <= 30, while the other remains at roundoff level)
Components and Data Structures in NDSolve

Introduction
NDSolve is broken up into several basic steps:
† Equation processing and method selection
† Method initialization
† Numerical solution
† Solution processing
NDSolve performs each of these steps internally, hiding the details from a casual user. However, for advanced usage it can sometimes be advantageous to access components to carry out each of these steps separately. Here are the low-level functions that are used to break up these steps.
† NDSolve`ProcessEquations
† NDSolve`Iterate
† NDSolve`ProcessSolutions
NDSolve`ProcessEquations classifies the differential system as an initial value problem, boundary value problem, differential-algebraic problem, partial differential problem, etc. It also chooses appropriate default integration methods and constructs the main NDSolve`StateData data structure. NDSolve`Iterate advances the numerical solution. The first invocation (there can be several) initializes the numerical integration methods. NDSolve`ProcessSolutions converts numerical data into an InterpolatingFunction to represent each solution.
Note that NDSolve`ProcessEquations can take a significant portion of the overall time to solve a differential system. In such cases, it can be useful to perform this step only once and use NDSolve`Reinitialize to repeatedly solve for different options or initial conditions.
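The same separation of stages can be mimicked in any language, and the value of the design is easy to see: setup work is done once, iteration is incremental and resumable, and an interpolating function is built only on demand. Here is a minimal Python sketch of such a state object; the class, the fixed-step RK4 integrator, and the linear interpolant are illustrative assumptions, not how NDSolve works internally:

```python
import bisect
import math

class StateData:
    # Minimal analogue of the NDSolve`StateData workflow: equations are
    # processed once, iterated on demand, and an interpolating function
    # is constructed only when solutions are requested.

    def __init__(self, f, y0, t0, h=0.01):
        # "ProcessEquations": store the right-hand side and initial data
        self.f, self.h = f, h
        self.ts, self.ys = [t0], [y0]

    def iterate(self, t):
        # "Iterate": advance the stored solution with fixed-step RK4
        while t - self.ts[-1] > 1e-12:
            h = min(self.h, t - self.ts[-1])
            tn, yn = self.ts[-1], self.ys[-1]
            k1 = self.f(tn, yn)
            k2 = self.f(tn + h/2, yn + h/2 * k1)
            k3 = self.f(tn + h/2, yn + h/2 * k2)
            k4 = self.f(tn + h, yn + h * k3)
            self.ts.append(tn + h)
            self.ys.append(yn + h/6 * (k1 + 2*k2 + 2*k3 + k4))

    def process_solutions(self):
        # "ProcessSolutions": return a piecewise-linear interpolant
        ts, ys = self.ts[:], self.ys[:]
        def interp(t):
            i = min(max(bisect.bisect_right(ts, t) - 1, 0), len(ts) - 2)
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return (1 - w) * ys[i] + w * ys[i + 1]
        return interp

state = StateData(lambda t, y: -y, 1.0, 0.0)   # y' = -y, y(0) = 1
state.iterate(1.0)                             # advance to t = 1
interp = state.process_solutions()
err_end = abs(interp(1.0) - math.exp(-1.0))
err_mid = abs(interp(0.5) - math.exp(-0.5))
```

Calling `iterate` again with a larger time simply extends the stored data, mirroring the repeated NDSolve`Iterate calls shown below.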
Example
Process equations and set up data structures for solving the differential system.
In[1]:=
ndssdata = First[NDSolve`ProcessEquations[{y''[t] + y[t] == 0, y[0] == 1, y'[0] == 0}, {y, y'}, t, Method -> "ExplicitRungeKutta"]]
Out[1]= NDSolve`StateData[<0.>]
Initialize the method "ExplicitRungeKutta" and integrate the system up to time 10. The return value of NDSolve`Iterate is Null in order to avoid extra references, which would lead to undesirable copying.
In[2]:=
NDSolve`Iterate[ndssdata, 10]
Convert each set of solution data into an InterpolatingFunction.
In[3]:=
ndsol = NDSolve`ProcessSolutions[ndssdata]
Out[3]= {y -> InterpolatingFunction[{{0., 10.}}, <>], y' -> InterpolatingFunction[{{0., 10.}}, <>]}
Representing the solution as an InterpolatingFunction allows continuous output, even at points that are not part of the numerical solution grid.
In[4]:=
ParametricPlot[{y[t], y'[t]} /. ndsol, {t, 0, 10}]
Out[4]= (parametric plot of (y(t), y'(t)): a closed curve, the unit circle)
Creating NDSolve`StateData Objects

ProcessEquations
The first stage of any solution using NDSolve is processing the specified equations into a form that can be efficiently accessed by the actual integration algorithms. This stage minimally involves determining the differential order of each variable, making substitutions needed to get a first-order system, solving for the time derivatives of the functions in terms of the functions, and forming the result into a "NumericalFunction" object. If you want to save the time of repeating this process for the same set of equations, or if you want more control over the numerical integration process, the processing stage can be executed separately with NDSolve`ProcessEquations.

NDSolve`ProcessEquations[{eqn1, eqn2, ...}, {u1, u2, ...}, t]
    process the differential equations {eqn1, eqn2, ...} for the functions {u1, u2, ...} into a normal form; return a list of NDSolve`StateData objects, one for each solution for the time derivatives of the functions in terms of the functions; t may be specified in a list with a range of values as in NDSolve

NDSolve`ProcessEquations[{eqn1, eqn2, ...}, {u1, u2, ...}, {x1, x1min, x1max}, {x2, x2min, x2max}, ...]
    process the partial differential equations {eqn1, eqn2, ...} for the functions {u1, u2, ...} into a normal form; return a list of NDSolve`StateData objects, one for each solution for the time derivatives of the functions in terms of the functions; if xj is the temporal variable, it need not be specified with the boundaries xjmin, xjmax

Processing equations for NDSolve.

This creates a list of two NDSolve`StateData objects because there are two possible solutions for y' in terms of y.
In[1]:=
NDSolve`ProcessEquations[{y'[x]^2 == y[x] + x, y[0] == 1}, y, x]
Out[1]= {NDSolve`StateData[<0.>], NDSolve`StateData[<0.>]}
Reinitialize
It is not uncommon that the solution to a more sophisticated problem involves solving the same differential equation repeatedly, but with different initial conditions. In some cases, processing equations may be as time-consuming as numerically integrating the differential equations. In these situations, it is a significant advantage to be able to simply give new initial values.

NDSolve`Reinitialize[state, conditions]
    assuming the equations and variables are the same as the ones used to create the NDSolve`StateData object state, form a list of new NDSolve`StateData objects, one for each of the possible solutions of conditions for the initial values of the functions

Reusing processed equations.

This creates an NDSolve`StateData object for the harmonic oscillator.
In[2]:=
state = First[NDSolve`ProcessEquations[{x''[t] + x[t] == 0, x[0] == 0, x'[0] == 1}, x, t]]
Out[2]= NDSolve`StateData[<0.>]
This creates three new NDSolve`StateData objects, each with a different initial condition.
In[3]:=
newstate = NDSolve`Reinitialize[state, {x[1]^3 == 1, x'[1] == 0}]
Out[3]= {NDSolve`StateData[<1.>], NDSolve`StateData[<1.>], NDSolve`StateData[<1.>]}
Using NDSolve`Reinitialize may save computation time when you need to solve the same differential equation for many different initial conditions, as you might in a shooting method for boundary value problems. A subset of NDSolve options can be specified as options to NDSolve`Reinitialize. This creates a new NDSolve`StateData object, specifying a starting step size.
In[3]:=
newstate = NDSolve`Reinitialize[state, {x[0] == 0, x'[0] == 1}, StartingStepSize -> 1/10]
Out[3]= {NDSolve`StateData[<0.>]}
Iterating Solutions
One important use of NDSolve`StateData objects is to have more control of the integration. For some problems, it is appropriate to check the solution and start over or change parameters, depending on certain conditions.

NDSolve`Iterate[state, t]
    compute the solution of the differential equation in an NDSolve`StateData object that has been assigned as the value of the variable state, from the current time up to time t
Iterating solutions to differential equations.

This creates an NDSolve`StateData object that contains the information needed to solve the equation for an oscillator with a varying coefficient using an explicit Runge-Kutta method.
In[4]:=
state = First[NDSolve`ProcessEquations[{x''[t] + (1 + 4 UnitStep[Sin[t]]) x[t] == 0, x[0] == 1, x'[0] == 0}, x, t, Method -> "ExplicitRungeKutta"]]
Out[4]= NDSolve`StateData[<0.>]
Note that when you use NDSolve`ProcessEquations, you do not need to give the range of the t variable explicitly, because that information is not needed to set up the equations in a form ready to solve. (For PDEs, you do have to give the ranges of all spatial variables, since that information is essential for determining an appropriate discretization.) This computes the solution out to time t = 1.
In[5]:=
NDSolve`Iterate[state, 1]
NDSolve`Iterate does not return a value because it modifies the NDSolve`StateData object assigned to the variable state. Thus, the command affects the value of the variable in a manner similar to setting parts of a list, as described in "Manipulating Lists by Their Indices". You can see that the value of state has changed, since its output form now displays the range of times over which the solution has been integrated.
In[6]:=
state
Out[6]= NDSolve`StateData[<0.,1.>]
If you want to integrate further, you can call NDSolve`Iterate again with a larger value for time. This computes the solution out to time t = 3.
In[7]:=
NDSolve`Iterate[state, 3]
You can specify a time that is earlier than the first current time, in which case the integration proceeds backward with respect to time. This computes the solution from the initial condition backward to t = -Pi/2.
In[8]:=
NDSolve`Iterate[state, -Pi/2]
NDSolve`Iterate allows you to specify intermediate times at which to stop. This can be useful, for example, to avoid discontinuities. Typically, this strategy is more effective with so-called one-step methods, such as the explicit Runge-Kutta method used in this example, but it generally works with the default NDSolve method as well. This computes the solution out to t = 10 Pi, making sure that the solution does not have problems with the points of discontinuity in the coefficients at t = Pi, 2 Pi, ....
In[9]:=
NDSolve`Iterate[state, Pi Range[10]]
Getting Solution Functions
Once you have integrated a system up to a certain time, typically you want to be able to look at the current solution values and to generate an approximate function representing the solution computed so far. The command NDSolve`ProcessSolutions allows you to do both.

NDSolve`ProcessSolutions[state]
    give the solutions that have been computed in state as a list of rules with InterpolatingFunction objects
Getting solutions as InterpolatingFunction objects.

This extracts the solution computed in the previous section as an InterpolatingFunction object.
In[10]:=
sol = NDSolve`ProcessSolutions[state]
Out[10]= {x -> InterpolatingFunction[{{-1.5708, 31.4159}}, <>]}
This plots the solution.
In[11]:=
Plot[Evaluate[x[t] /. sol], {t, 0, 10 Pi}]
Out[11]= (plot of the oscillatory solution x(t) for 0 <= t <= 10 Pi, with values between -1 and 1)
Just as when using NDSolve directly, there will be a rule for each function you specified in the second argument to NDSolve`ProcessEquations. Only the specified components of the solutions are saved in such a way that an InterpolatingFunction object can be created.

NDSolve`ProcessSolutions[state, dir]
    give the solutions that have been most recently computed in direction dir in state, as a list of rules with values for both the functions and their derivatives
Obtaining the current solution values.

This gives the current solution values and derivatives in the forward direction.
In[12]:=
sol = NDSolve`ProcessSolutions[state, "Forward"]
Out[12]= {x[31.4159] -> 0.843755, x'[31.4159] -> -1.20016, x''[31.4159] -> -0.843755}
The choices you can give for the direction dir are "Forward" and "Backward", which refer to the integration forward and backward from the initial condition.

"Forward"     integration in the direction of increasing values of the temporal variable
"Backward"    integration in the direction of decreasing values of the temporal variable
"Active"      integration in the direction that is currently being integrated; typically, this value should only be used from method initialization during an active integration

Integration direction specifications.
The output given by NDSolve`ProcessSolutions is always given in terms of the dependent variables, either at a specific value of the independent variable or interpolated over all of the saved values. This means that when a partial differential equation is being integrated, you will get results representing the dependent variables over the spatial variables. This computes the solution to the heat equation from time t = -1/4 to t = 2.
In[13]:=
state = First[NDSolve`ProcessEquations[{D[u[t, x], t] == D[u[t, x], x, x], u[0, x] == Cos[Pi/2 x], u[t, 0] == 1, u[t, 1] == 0}, u, t, {x, 0, 1}]];
NDSolve`Iterate[state, {-1/4, 2}]
In[15]:=
NDSolve`ProcessSolutions[state, "Forward"]
Out[15]= {u[2., x] -> InterpolatingFunction[{{0., 1.}}, <>][x], u^(1,0)[2., x] -> InterpolatingFunction[{{0., 1.}}, <>][x]}
The solution is given as an InterpolatingFunction object that interpolates over the spatial variable x. This gives the solution at t = -1/4.
In[16]:=
NDSolve`ProcessSolutions[state, "Backward"]
NDSolve::eerr : Warning: Scaled local spatial error estimate of 638.6378240455119` at t = -0.25 in the direction of independent variable x is much greater than the prescribed error tolerance. Grid spacing with 15 points may be too large to achieve the desired accuracy or precision. A singularity may have formed or you may want to specify a smaller grid spacing using the MaxStepSize or MinPoints method options.
Out[16]= {u[-0.25, x] -> InterpolatingFunction[{{0., 1.}}, <>][x], u^(1,0)[-0.25, x] -> InterpolatingFunction[{{0., 1.}}, <>][x]}
When you process the current solution for partial differential equations, the spatial error estimate is checked. (It is not generally checked except when solutions are produced because doing so would be quite time consuming.) Since it is excessive, the NDSolve::eerr message is issued. The typical association of the word "backward" with the heat equation as implying instability gives a clue to what is wrong in this example.
Here is a plot of the solution at t = -1/4.
In[17]:=
Plot[Evaluate[u[-0.25, x] /. %], {x, 0, 1}]
Out[17]= (plot of u(-0.25, x) showing large spatial oscillations, with amplitudes of order 3*10^5)
The plot of the solution shows that instability is indeed the problem. Even though the heat equation example is simple enough to know in advance that the solution backward in time is problematic, monitoring the solution of a PDE with NDSolve`Iterate and NDSolve`ProcessSolutions can save computing a solution that turns out not to be as accurate as desired. Another simple form of monitoring follows.
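The instability can be reproduced in a few lines of Python: discretize u_t = u_xx by the method of lines on a 15-point grid and step with explicit Euler. Forward integration is stable for a small enough step, while backward integration amplifies the highest spatial modes. The discretization and step sizes here are illustrative assumptions:

```python
import math

n = 15
dx = 1.0 / (n - 1)

def step(u, dt):
    # explicit Euler step of the method-of-lines heat equation u_t = u_xx,
    # with the boundary values u(0) = 1 and u(1) = 0 held fixed
    lap = [0.0] * n
    for i in range(1, n - 1):
        lap[i] = (u[i-1] - 2.0*u[i] + u[i+1]) / dx**2
    return [ui + dt * li for ui, li in zip(u, lap)]

u0 = [math.cos(math.pi / 2 * i * dx) for i in range(n)]
dt = 0.4 * dx**2              # satisfies the forward stability bound dt <= dx^2/2

fwd = u0[:]
for _ in range(200):
    fwd = step(fwd, dt)       # forward in time: decays smoothly

bwd = u0[:]
for _ in range(200):
    bwd = step(bwd, -dt)      # backward in time: high modes are amplified

fwd_max = max(abs(v) for v in fwd)
bwd_max = max(abs(v) for v in bwd)
```

Each backward step multiplies the highest discrete mode by roughly 1 + 4 dt/dx^2, so even a tiny high-frequency component in the initial data grows explosively, which is exactly what the warning message and the plot above reflect.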
Entering the following commands generates a sequence of plots showing the solution of a generalization of the sine-Gordon equation as it is being computed.
In[58]:=
L = -10;
state = First[NDSolve`ProcessEquations[{D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y] - Sin[u[t, x, y]], u[0, x, y] == Exp[-(x^2 + y^2)], Derivative[1, 0, 0][u][0, x, y] == 0, u[t, -L, y] == u[t, L, y], u[t, x, -L] == u[t, x, L]}, u, t, {x, -L, L}, {y, -L, L}, Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}]]; …
Out[60]= (sequence of three-dimensional surface plots of the solution over the (x, y) domain at successive times)
When you monitor a solution in this way, it is usually possible to interrupt the computation if you see that the solution found is sufficient. You can still use the NDSolve`StateData object to get the solutions that have been computed.
NDSolve`StateData Methods
An NDSolve`StateData object contains a lot of information, but it is arranged to make iterating solutions efficient, not to make it easy to see where the information is kept. For this reason, several method functions have been defined to make accessing the information easy.
state@"TemporalVariable"    give the independent variable that the dependent variables (functions) depend on
state@"DependentVariables"    give a list of the dependent variables (functions) to be solved for
state@"VariableDimensions"    give the dimensions of each of the dependent variables (functions)
state@"VariablePositions"    give the positions in the solution vector for each of the dependent variables
state@"VariableTransformation"    give the transformation of variables from the original problem variables to the working variables
state@"NumericalFunction"    give the "NumericalFunction" object used to evaluate the derivatives of the solution vector with respect to the temporal variable t
state@"ProcessExpression"[args, expr, dims]    process the expression expr using the same variable transformations that NDSolve used to generate state, giving a "NumericalFunction" object for numerically evaluating expr; args are the arguments for the numerical function and should either be All or a list of arguments that are dependent variables of the system; dims should be Automatic or an explicit list giving the expected dimensions of the numerical function result
state@"SystemSize"    give the effective number of first-order ordinary differential equations being solved
state@"MaxSteps"    give the maximum number of steps allowed for iterating the differential equations
state@"WorkingPrecision"    give the working precision used to solve the equations
state@"Norm"    give the scaled norm used to gauge error

General method functions for an NDSolve`StateData object state.
Much of the available information depends on the current solution values. Each NDSolve`StateData object keeps solution information for both the forward and backward directions. At the initial condition these are the same, but once the problem has been iterated in either direction, they will be different.
state@"CurrentTime"[dir]    give the current value of the temporal variable in the integration direction dir
state@"SolutionVector"[dir]    give the current value of the solution vector in the integration direction dir
state@"SolutionDerivativeVector"[dir]    give the current value of the derivative with respect to the temporal variable of the solution vector in the integration direction dir
state@"TimeStep"[dir]    give the time step size for the next step in the integration direction dir
state@"TimeStepsUsed"[dir]    give the number of time steps used to get to the current time in the integration direction dir
state@"MethodData"[dir]    give the method data object used in the integration direction dir

Directional method functions for an NDSolve`StateData object state.
If the direction argument is omitted, the functions return a list with the data for both directions (a list with a single element at the initial condition). Otherwise, the direction can be "Forward", "Backward", or "Active", as specified in the previous subsection. Here is an NDSolve`StateData object for a solution of the nonlinear Schrödinger equation that has been computed up to t = 1.
In[24]:=
state = First[NDSolve`ProcessEquations[{I D[u[t, x], t] == D[u[t, x], x, x] + Abs[u[t, x]]^2 u[t, x], u[0, x] == Sech[x] Exp[Pi I x], u[t, -15] == u[t, 15]}, u, t, {x, -15, 15}, Method -> "StiffnessSwitching"]];
NDSolve`Iterate[state, 1];
state
Out[24]= NDSolve`StateData[<0.,1.>]
“Current” refers to the most recent point reached in the integration. This gives the current time in both the forward and backward directions. In[27]:=
state@"CurrentTime"
Out[27]= {0., 1.}
This gives the size of the system of ordinary differential equations being solved. In[28]:=
state@"SystemSize"
Out[28]= 400
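The directional method functions listed above can be queried for a single direction or for both at once. Here is a brief sketch continuing from the state object above; no output values are shown since they depend on the steps taken.

```mathematica
(* the step size NDSolve proposes for the next step in the forward direction *)
state@"TimeStep"["Forward"]

(* with the direction omitted, a list with the data for both directions is returned *)
state@"TimeStepsUsed"
```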
The method functions are relatively low-level hooks into the data structure; they do little processing on the data returned to you. Thus, unlike NDSolve`ProcessSolutions, the solutions given are simply vectors of data points relating to the system of ordinary differential equations NDSolve is solving.
This makes a plot of the modulus of the current solution in the forward direction.
In[29]:=
ListPlot[Abs[state@"SolutionVector"["Forward"]]]
Out[29]= (plot of the solution modulus against component index, 1 to 400)
This plot does not show the correspondence with the x-grid values correctly. To get the correspondence with the spatial grid correctly, you must use NDSolve`ProcessSolutions. There is a tremendous amount of control provided by these methods, but an exhaustive set of examples is beyond the scope of this documentation. One of the most important uses of the information from an NDSolve`StateData object is to initialize integration methods. Examples are shown in "The NDSolve Method Plug-in Framework".
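As a sketch of how the spatial correspondence can be recovered, the current forward solution can be processed into rules and plotted against x; the exact form of the rules returned by NDSolve`ProcessSolutions is assumed here, not taken from this section.

```mathematica
(* process the current forward solution into solution rules that
   carry the spatial grid information *)
fsol = NDSolve`ProcessSolutions[state, "Forward"];

(* plot the modulus against x rather than against component index;
   the pattern u[1., x] for the rule's left-hand side is an assumption *)
Plot[Abs[u[1., x] /. fsol], {x, -15, 15}]
```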
Utility Packages for Numerical Differential Equation Solving

InterpolatingFunctionAnatomy
NDSolve returns solutions as InterpolatingFunction objects. Most of the time, simply using these as functions does what is needed, but occasionally it is useful to access the data inside, which includes the actual values and points NDSolve computed when taking steps. The exact structure of an InterpolatingFunction object is arranged to make the data storage efficient and evaluation at a given point fast. This structure may change between Mathematica versions, so code that is written in terms of accessing parts of InterpolatingFunction objects may not work with new versions of Mathematica. The DifferentialEquations`InterpolatingFunctionAnatomy` package provides an interface to the data in an InterpolatingFunction object that will be maintained for future Mathematica versions.
InterpolatingFunctionDomain[ifun]
return a list with the domain of definition for each of the dimensions of the InterpolatingFunction object ifun
InterpolatingFunctionCoordinates[ifun]
return a list with the coordinates at which data is specified in each of the dimensions for the InterpolatingFunction object ifun
InterpolatingFunctionGrid[ifun]
return the grid of points at which data is specified for the InterpolatingFunction object ifun
InterpolatingFunctionValuesOnGrid[ifun]
return the values that would be returned by evaluating the InterpolatingFunction object ifun at each of its grid points
InterpolatingFunctionInterpolationOrder[ifun]
return the interpolation order used for each of the dimensions for the InterpolatingFunction object ifun
InterpolatingFunctionDerivativeOrder[ifun]
return the order of the derivative of the base function for which values are specified when evaluating the InterpolatingFunction object ifun
Anatomy of InterpolatingFunction objects.
This loads the package.
In[21]:=
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
One common situation where the InterpolatingFunctionAnatomy package is useful is when NDSolve cannot compute a solution over the full range of values that you specified, and you want to plot the part of the solution that was computed to understand better what might have gone wrong.
Here is an example of a differential equation whose solution cannot be computed up to the specified endpoint.
In[2]:=
ifun = First[x /. NDSolve[{x'[t] == Exp[x[t]] - x[t], x[0] == 1}, x, {t, 0, 10}]]
Out[2]= InterpolatingFunction[{{0., 0.516019}}, <>]
This gets the domain. In[3]:=
domain = InterpolatingFunctionDomain[ifun]
Out[3]= {{0., 0.516019}}
Once the domain has been returned in a list, it is easy to use Part to get the desired endpoints and make the plot. In[4]:=
{begin, end} = domain[[1]];
Plot[ifun[t], {t, begin, end}]
Out[5]= (plot of the solution on the computed domain, rising steeply toward t ≈ 0.516)
From the plot, it is quite apparent that a singularity has formed and it will not be possible to integrate the system any further. Sometimes it is useful to see where NDSolve took steps; getting the coordinates provides this information.
This shows the values that NDSolve computed at each step it took. It is quite apparent from this that nearly all of the steps were used to try to resolve the singularity.
In[6]:=
coords = First[InterpolatingFunctionCoordinates[ifun]];
ListPlot[Transpose[{coords, ifun[coords]}]]
Out[7]= (plot of the step values, clustered near t ≈ 0.516)
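The step sizes themselves can also be examined directly from the coordinate data. Here is a brief sketch (not from the original session) that plots the successive step sizes on a logarithmic scale, which makes the collapse of the step size near the singularity evident.

```mathematica
(* successive step sizes from the time coordinates NDSolve stepped to *)
steps = Differences[First[InterpolatingFunctionCoordinates[ifun]]];

(* the step sizes shrink rapidly as the singularity is approached *)
ListLogPlot[steps]
```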
The package is particularly useful for analyzing the computed solutions of PDEs. With this initial condition, Burgers' equation forms a steep front. In[8]:=
mdfun = First[u /. NDSolve[{D[u[x, t], t] == 0.01 D[u[x, t], x, x] - u[x, t] D[u[x, t], x],
      u[0, t] == u[1, t], u[x, 0] == Sin[2 Pi x]}, u, {x, 0, 1}, {t, 0, 0.5}]]
Out[8]= InterpolatingFunction[{{..., 0., 1., ...}, {0., 0.472151}}, <>]
This shows the number of points used in each dimension. In[9]:=
Map[Length, InterpolatingFunctionCoordinates[mdfun]]
Out[9]= {27, 312}
This shows the interpolation order used in each dimension. In[10]:=
InterpolatingFunctionInterpolationOrder[mdfun]
Out[10]= {5, 3}
This shows that the inability to resolve the front has manifested itself as numerical instability. In[11]:=
Max[Abs[InterpolatingFunctionValuesOnGrid[mdfun]]]
Out[11]= 1.14928×10^12
This shows the values computed at the spatial grid points at the endpoint of the temporal integration. In[12]:=
end = InterpolatingFunctionDomain[mdfun][[2, -1]];
X = InterpolatingFunctionCoordinates[mdfun][[1]];
ListPlot[Transpose[{X, mdfun[X, end]}]]
Out[14]= (plot of the values at the final time, oscillating between about -1 and 0.5)
It is easily seen from the point plot that the front has not been resolved. This makes a 3D plot showing the time evolution for each of the spatial grid points. The initial condition is shown in red. In[15]:=
Show[Graphics3D[{Map[Line, MapThread[Append,
      {InterpolatingFunctionGrid[mdfun], InterpolatingFunctionValuesOnGrid[mdfun]}, 2]],
    {RGBColor[1, 0, 0], Line[Transpose[{X, 0. X, mdfun[X, 0.]}]]}}]]
Out[15]= (3D plot of the time evolution at each spatial grid point, with the initial condition shown in red)
When a derivative of an InterpolatingFunction object is taken, a new InterpolatingFunction object is returned that gives the requested derivative when evaluated at a point. InterpolatingFunctionDerivativeOrder is a way of determining what derivative will be evaluated.
The derivative returns a new InterpolatingFunction object.
In[16]:=
dmdfun = Derivative[0, 1][mdfun]
Out[16]= InterpolatingFunction[{{..., 0., 1., ...}, {0., 0.472151}}, <>]
This shows what derivative will be evaluated. In[17]:=
InterpolatingFunctionDerivativeOrder[dmdfun]
Out[17]= Derivative[0, 1]
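A derivative obtained this way evaluates like any other InterpolatingFunction object. The following sketch (the evaluation point is illustrative, not from the original session) shows this, and shows that derivative orders compose in the expected way.

```mathematica
(* the time derivative of the solution at a particular point (x, t) *)
dmdfun[0.3, 0.1]

(* mixed derivatives are recorded the same way *)
InterpolatingFunctionDerivativeOrder[Derivative[1, 1][mdfun]]
```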
NDSolveUtilities
A number of utility routines have been written to facilitate the investigation and comparison of various NDSolve methods. These functions have been collected in the package DifferentialEquations`NDSolveUtilities`.
CompareMethods[sys, refsol, methods, opts]
return statistics for various methods applied to the system sys
FinalSolutions[sys, sols]
return the solution values at the end of the numerical integration for various solutions sols corresponding to the system sys
InvariantErrorPlot[invts, dvars, ivar, sol, opts]
return a plot of the error in the invariants invts for the solution sol
RungeKuttaLinearStabilityFunction[amat, bvec, var]
return the linear stability function for the Runge–Kutta method with coefficient matrix amat and weight vector bvec using the variable var
StepDataPlot[sols, opts]
return plots of the step sizes taken for the solutions sols on a logarithmic scale
Functions provided in the NDSolveUtilities package.
This loads the package.
In[18]:=
Needs["DifferentialEquations`NDSolveUtilities`"]
A useful means of analyzing Runge–Kutta methods is to study how they behave when applied to a scalar linear test problem (see the package FunctionApproximations.m).
This assigns the exact (infinitely precise) coefficients for the 2-stage implicit Runge–Kutta Gauss method of order 4.
In[19]:=
{amat, bvec, cvec} = NDSolve`ImplicitRungeKuttaGaussCoefficients[4, Infinity]
Out[19]= {{{1/4, (3 - 2 Sqrt[3])/12}, {(3 + 2 Sqrt[3])/12, 1/4}}, {1/2, 1/2}, {(3 - Sqrt[3])/6, (3 + Sqrt[3])/6}}
This computes the linear stability function, which corresponds to the (2,2) Padé approximation to the exponential at the origin. In[20]:=
RungeKuttaLinearStabilityFunction[amat, bvec, z]
Out[20]= (1 + z/2 + z^2/12)/(1 - z/2 + z^2/12)
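The Padé property can be checked directly. The following sketch (not part of the original session) compares the series expansion of the stability function with that of the exponential; since the (2,2) Padé approximant agrees with Exp[z] through order z^4, the difference is O(z^5).

```mathematica
rz = RungeKuttaLinearStabilityFunction[amat, bvec, z];

(* the series of the difference vanishes through order z^4 *)
Series[rz - Exp[z], {z, 0, 4}]
```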
Examples of the functions CompareMethods, FinalSolutions, RungeKuttaLinearStabilityFunction, and StepDataPlot can be found within "ExplicitRungeKutta Method for NDSolve". Examples of the function InvariantErrorPlot can be found within "Projection Method for NDSolve".
InvariantErrorPlot Options
The function InvariantErrorPlot has a number of options that can be used to control the form of the result.
option name / default value
InvariantDimensions
Automatic
specify the dimensions of the invariants
InvariantErrorFunction
Abs[Subtract[#1, #2]]&
specify the function to use for comparing errors
InvariantErrorSampleRate
Automatic
specify how often errors are sampled
Options of the function InvariantErrorPlot.
The default value for InvariantDimensions is to determine the dimensions from the structure of the input, Dimensions[invts]. The default value for InvariantErrorFunction is a function that computes the absolute error. The default value for InvariantErrorSampleRate is to sample all points if fewer than 1000 steps are taken; above this threshold, a logarithmic sample rate is used.
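As an illustration of how these options fit together, here is a sketch monitoring the invariant x^2 + y^2 of the harmonic oscillator; the equation and option settings are illustrative, not taken from this manual.

```mathematica
Needs["DifferentialEquations`NDSolveUtilities`"];

(* harmonic oscillator; x[t]^2 + y[t]^2 is conserved by the exact flow *)
sol = NDSolve[{x'[t] == y[t], y'[t] == -x[t], x[0] == 1, y[0] == 0},
    {x, y}, {t, 0, 100}];

(* plot the drift in the invariant, comparing errors by absolute difference *)
InvariantErrorPlot[x[t]^2 + y[t]^2, {x, y}, t, sol,
  InvariantErrorFunction -> (Abs[Subtract[#1, #2]] &)]
```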