Advanced Computational Mechanics A. Ooi October 14, 2005
Contents

1 Ordinary Differential Equations 5
1.1 Euler's method 5
1.2 Backward (Implicit) Euler method 9
1.3 Trapezoidal or Crank-Nicolson method 10
1.4 Linearization of Crank-Nicolson method 11
1.5 Runge-Kutta methods 13
1.5.1 Second Order Runge-Kutta Method 14
1.5.2 4th Order Runge-Kutta Scheme (RK-4) 16
1.6 Stability and error analysis 17
1.7 Systems of Ordinary Differential Equations 24
1.8 Runge-Kutta-Fehlberg method: Runge-Kutta with error control 28
1.9 Boundary Value problem 32

2 Fourier series 37
2.1 Some properties of the Fourier coefficients 40
2.2 Aliasing 44

3 Finite Difference 47
3.1 General Finite Difference Schemes 47
3.1.1 First Derivatives 47
3.1.2 Some popular differencing schemes 48
3.1.3 Higher order derivatives 51
3.1.4 Summary of finite difference formula 52
3.2 Centred Difference Schemes 53
3.2.1 Example 55
3.3 Solving PDEs using finite difference schemes 56
3.4 Fourier Analysis of error 60
3.4.1 Fourier analysis of central differencing scheme 65
3.5 Stability analysis using the modified wavenumber 68
3.6 Dispersion-Relation-Preserving Scheme 71
3.7 General Finite Difference Schemes For The Second Derivative 72
3.7.1 Some popular differencing schemes 73
3.8 Multidimensional problems 75
3.8.1 Steady problem 75
3.8.2 Unsteady problem 77
3.8.3 Modified wavenumber stability analysis 81
3.9 Test Problems 83
3.10 Euler equations 83

4 Differentiation: Unequally spaced data 87
4.1 Approximation of the 2nd Derivative 89
4.2 Application of Finite Difference Formulas 92

5 Galerkin Method 95
5.1 Convection equation 95
5.2 Burgers Equation 96
5.3 Aliasing error in the calculation of the nonlinear term 98

6 Collocation Method 101
6.1 Matrix operator for Fourier spectral numerical differentiation 101

7 Some numerical examples 103
7.1 Heating of an oil droplet in air 103
7.2 Blasius solution: Contributed by Mr. M. Giacobello 109
Chapter 1

Ordinary Differential Equations

In many engineering problems, you will need to solve differential equations that look something like

dx/dt = f(t, x)   (1.1)

in the domain a ≤ t ≤ b with the initial condition

x(t = a) = α
1.1 Euler's method
Euler's method is probably the simplest method used to solve Eq. (1.1). Consider Taylor's theorem,

x(t_{n+1}) = x(t_n) + (t_{n+1} − t_n) (dx/dt)(t_n) + ((t_{n+1} − t_n)^2/2) (d^2x/dt^2)(ξ_n)   (1.2)

where ξ_n is somewhere in between t_n and t_{n+1}.
Exercise 1.1: Find the numerical value of ξ if x(t) = sin(πt/2), t_n = 0, t_{n+1} = 1, hence h = 1. Note that according to Eq. (1.2), the value of ξ must lie between 0 and 1.

If we let t_{n+1} − t_n = h and substitute Eq. (1.1) into Eq. (1.2), we get

x(t_{n+1}) = x(t_n) + h f(t_n, x_n) + (h^2/2) (d^2x/dt^2)(ξ_n).   (1.3)

If we assume that h is small, then we can neglect the second order term in the equation above. Thus, we get the formula for Euler's method
x_{n+1} = x_n + h f(t_n, x_n)   (1.4)
where x_n is the numerical approximation of the exact solution x(t_n). Equation (1.4) is sometimes also called the explicit Euler formula, because an implicit version of Euler's method will be discussed later.
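As a concrete sketch of Eq. (1.4), the loop below implements the explicit Euler update (a Python illustration added here; the notes themselves use MATLAB, and the test problem dx/dt = −x is an assumption chosen only because its exact solution e^{−t} is known):

```python
import math

def euler(f, t0, x0, h, nsteps):
    """Advance dx/dt = f(t, x) with the explicit Euler formula
    x_{n+1} = x_n + h*f(t_n, x_n), Eq. (1.4)."""
    t, x = t0, x0
    for _ in range(nsteps):
        x = x + h * f(t, x)
        t = t + h
    return x

# Hypothetical test problem: dx/dt = -x, x(0) = 1, exact solution exp(-t).
x_num = euler(lambda t, x: -x, 0.0, 1.0, 0.01, 100)  # 100 steps of h = 0.01, i.e. to t = 1
print(x_num, math.exp(-1.0))
```

With h = 0.01 the two printed numbers agree to about two decimal places, consistent with the first order accuracy discussed below.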
Figure 1.1: Figure showing the solution to Exercise 1.2 for h = 2.0 (a), h = 1.0 (b), h = 0.5 (c) and h = 0.1 (d); exact solution and computed solution (◦).
Exercise 1.2: Using Euler's method, solve

dx/dt = 1 − x

for 0 ≤ t ≤ 10. Use x(t = 0) = 0 and

(a) h = 2.0
(b) h = 1.0
(c) h = 0.5
(d) h = 0.1

Compare the numerical solution with the analytical (exact) solution, which is x(t) = 1 − e^{−t}.
The solution to Exercise 1.2 is shown in Figure 1.1. Note that the numerical solution gets closer to the exact solution for smaller values of h. By comparing Eqs. (1.3) and (1.4), one can conclude that if the exact solution is known at time t = t_n, the numerical solution at time t = t_{n+1} will have an error which is O(h^2). One can say that Euler's method has a local truncation error of order 2: if we reduce h by a factor of 2, the local error can be expected to reduce by approximately a factor of 4. However, it can be shown (see [3] and [1] and Exercise 1.3) that the errors accumulate over time. This is illustrated in Fig. 1.2, which shows the error associated with Euler's method when the solution is computed using h = 0.2, 0.1 and 0.05. Figure 1.2(a) plots the error

E_h(t_n) = |x(t_n) − x_n|   (1.5)

for various values of h. The gaps between all three curves are the same, which indicates that the error is proportional to h. This fact is confirmed in Fig. 1.2(b), which shows the ratios of the errors, E_{h=0.1}/E_{h=0.2} and E_{h=0.05}/E_{h=0.2}. From the graphs, it is clear that the error is approximately halved when h is halved. Thus, the global error is shown to be O(h), i.e. Euler's method is only first order accurate. In summary, Euler's method is the easiest method to implement but it is not very accurate.
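The halving of the global error can be checked numerically. The sketch below (an illustrative Python fragment, not part of the original notes) integrates dx/dt = 1 − x with Euler's method for two step sizes and compares the errors at t = 1:

```python
import math

def euler_solve(h, t_end):
    """Explicit Euler for dx/dt = 1 - x, x(0) = 0, integrated to t_end."""
    n = round(t_end / h)
    x = 0.0
    for _ in range(n):
        x = x + h * (1.0 - x)
    return x

exact = 1.0 - math.exp(-1.0)             # exact solution at t = 1
e1 = abs(euler_solve(0.1, 1.0) - exact)  # error with h = 0.1
e2 = abs(euler_solve(0.05, 1.0) - exact) # error with h = 0.05
print(e2 / e1)  # close to 0.5, consistent with a global error that is O(h)
```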
Figure 1.2: Figure showing the error associated with the solution to Exercise 1.2. (a) the error E_h for h = 0.2, h = 0.1 and h = 0.05; (b) the ratios between the errors, E_{h=0.1}/E_{h=0.2} and E_{h=0.05}/E_{h=0.2}.
Exercise 1.3: Subtract Eq. (1.4) from Eq. (1.3) and show that

ε_{n+1} = ε_n + h ( f(t_n, x(t_n)) − f(t_n, x_n) ) + (h^2/2) x''(ξ_n),   (1.6)

where ε_n = x(t_n) − x_n is the error at time t = t_n. For the case where f(t, x) = 2t, use the above equation and show that the error at time t_n = nh is O(h).

Note: Even though you have only proven that the error is O(h) for the specific case where f(t, x) = 2t, this result is more general and applies to all cases where f(t, x) is a smooth function.
1.2 Backward (Implicit) Euler method
In many applications (see later) the Euler method described in Section 1.1 is very unstable. A more stable method is the backward Euler scheme, given by the following formula:

x_{n+1} = x_n + h f(t_{n+1}, x_{n+1})   (1.7)
The Backward Euler method is an implicit method because it has x_{n+1} on both sides of the equation. For simple problems, there is generally no real difference between the implicit Euler method and the more conventional explicit Euler method, because it is possible to obtain an explicit expression for x_{n+1} from Eq. (1.7). As an example, for the problem discussed in Exercise 1.2, one can use Eq. (1.7) to show that

x_{n+1} = (h + x_n) / (1 + h)   (1.8)
Using Eq. (1.8), one can generate the results shown in Fig. 1.3. Comparing Figs. 1.3(a) and 1.1(a), it is clear that the Backward Euler method gives a better solution for h = 2 in this situation. This is one property of the implicit Euler method: for a given value of h, it usually gives a better solution than the explicit Euler method.
Figure 1.3: Figure showing the solution to Exercise 1.2 using the Backward Euler method for h = 2.0 (a) and h = 1.0 (b); exact solution and computed solution (◦).
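For this particular problem the implicit update reduces to the closed form of Eq. (1.8), so the backward Euler solution is as cheap to compute as the explicit one. A hedged Python sketch (the function name is illustrative):

```python
import math

def backward_euler(h, nsteps):
    """Backward Euler for dx/dt = 1 - x, x(0) = 0, using the closed-form
    update of Eq. (1.8): x_{n+1} = (h + x_n) / (1 + h)."""
    x = 0.0
    for _ in range(nsteps):
        x = (h + x) / (1.0 + h)
    return x

# Even the large step h = 2 produces a monotone, stable approach to the
# steady state x = 1, unlike the oscillatory explicit Euler result.
print(backward_euler(2.0, 5))
print(1.0 - math.exp(-10.0))  # exact solution at t = 10 for comparison
```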
However, in general, it is more difficult to obtain a solution using the backward Euler method than using the (explicit) Euler method. This commonly occurs when you have a slightly more complicated expression for f(t, x), which is illustrated in Example 1.1.
Example 1.1: Consider the nonlinear first order ordinary differential equation

dx/dt − cos(x) = 0   (1.9)

with the initial condition x(0) = 0.0. The exact solution to Eq. (1.9) is

x(t) = arctan( (e^{2t} − 1) / (2e^t) )   (1.10)

We will now show how the backward Euler method can be used to obtain an approximate solution to Eq. (1.9). Applying Eq. (1.7) to Eq. (1.9) gives

x_{n+1} − h cos(x_{n+1}) − x_n = 0.   (1.11)
x_n is known from the previous time step. One then needs to find x_{n+1}. It is not easy to find an explicit expression for x_{n+1}, so root finding methods (see 2nd year computational mechanics lecture notes) must be used to find x_{n+1} numerically. If we used the Newton-Raphson method, the iterative scheme could be written as

x_{n+1}^{(k+1)} = x_{n+1}^{(k)} − ( x_{n+1}^{(k)} − h cos(x_{n+1}^{(k)}) − x_n ) / ( 1 + h sin(x_{n+1}^{(k)}) )

where x_{n+1}^{(k)} is the k'th guess for the value of x_{n+1} that satisfies Eq. (1.11). While one may use any value for x_{n+1}^{(0)}, it would be logical to use x_{n+1}^{(0)} = x_n. Figure 1.4 shows the numerical solution to Eq. (1.9) obtained using both the implicit and the explicit Euler method. For the large step h = 2.0, the solution obtained using explicit Euler is oscillatory even though the exact solution does not oscillate; the backward Euler method gives a solution that is in better agreement with the exact solution. For h = 0.5 both the implicit and explicit Euler solutions converge to the exact analytical solution.
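The Newton-Raphson loop above can be sketched as follows (a Python illustration; the tolerance and iteration cap are assumptions, not values from the notes):

```python
import math

def backward_euler_step(xn, h, tol=1e-12, maxit=50):
    """One backward Euler step for dx/dt = cos(x): solve Eq. (1.11),
    g(x) = x - h*cos(x) - xn = 0, for x = x_{n+1} by Newton-Raphson."""
    x = xn  # logical initial guess: x_{n+1}^{(0)} = x_n
    for _ in range(maxit):
        g = x - h * math.cos(x) - xn
        dg = 1.0 + h * math.sin(x)  # g'(x), the Newton denominator
        x_new = x - g / dg
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = 0.0
for n in range(20):  # 20 steps of h = 0.5, i.e. up to t = 10
    x = backward_euler_step(x, 0.5)
print(x, math.atan(math.sinh(10.0)))  # exact solution (1.10) equals arctan(sinh t)
```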
1.3 Trapezoidal or Crank-Nicolson method
The solution to Eq. (1.1) can also be obtained by integration,

x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} f(t, x(t)) dt

The integral can be approximated using the trapezoidal rule (see 2nd year computational mechanics lecture notes),

∫_{t_n}^{t_{n+1}} f(t, x(t)) dt = (h/2) ( f(t_n, x(t_n)) + f(t_{n+1}, x(t_{n+1})) ) + O(h^3)   (1.12)
Figure 1.4: Figure showing the solution to Example 1.1 for h = 2.0 (a) and h = 0.5 (b); exact solution, explicit Euler and backward Euler solution.
where h = t_{n+1} − t_n. Ignoring the error term, one obtains the following formula for the Crank-Nicolson (trapezoidal) method:

x_{n+1} = x_n + (h/2) ( f(t_n, x_n) + f(t_{n+1}, x_{n+1}) )   (1.13)

By comparing Eqs. (1.12) and (1.13), one can conclude that the local error associated with the Crank-Nicolson method is O(h^3). However, similar to Euler's method, the errors accumulate over time. Thus, the global error of the Crank-Nicolson method is O(h^2).
Exercise 1.4: Solve

dx/dt + 2x = 0

for 0 < t < 3 with x(t = 0) = 1, using

(a) the Euler method
(b) the Backward Euler method
(c) the Crank-Nicolson method
1.4 Linearization of Crank-Nicolson method
The main problem with implicit methods is that they are difficult to solve, i.e. it is difficult to obtain x_{n+1} from the resulting discretized equations. Sometimes it is possible to "linearize" implicit methods. Consider the Crank-Nicolson scheme

x_{n+1} = x_n + (h/2) ( f(t_n, x_n) + f(t_{n+1}, x_{n+1}) ) + O(h^3)   (1.14)
The difficulty comes from the term f(t_{n+1}, x_{n+1}). Let's consider a Taylor series expansion of f(t_{n+1}, x_{n+1}) about x_n,

f(t_{n+1}, x_{n+1}) = f(t_{n+1}, x_n) + (x_{n+1} − x_n) (∂f/∂x)(x_n, t_{n+1}) + (1/2)(x_{n+1} − x_n)^2 (∂^2f/∂x^2)(x_n, t_{n+1}) + HOT   (1.15)

But we know that

x_{n+1} − x_n = O(h)

Substituting into Eq. (1.15) gives

f(t_{n+1}, x_{n+1}) = f(t_{n+1}, x_n) + (x_{n+1} − x_n) (∂f/∂x)(x_n, t_{n+1}) + O(h^2)   (1.16)
Substituting Eq. (1.16) into (1.14) gives

x_{n+1} = x_n + (h/2) [ f(t_{n+1}, x_n) + (x_{n+1} − x_n) (∂f/∂x)(x_n, t_{n+1}) + f(t_n, x_n) ]   (1.17)

It is now possible to obtain an explicit expression for x_{n+1},

x_{n+1} = x_n + (h/2) ( f(x_n, t_{n+1}) + f(x_n, t_n) ) / ( 1 − (h/2) (∂f/∂x)(x_n, t_{n+1}) )   (1.18)

This method has good stability characteristics (see later). The main problem with this scheme is that you have to find the derivative of f with respect to x, which is not always possible.
Example 1.2: Consider again the example

dx/dt = cos(x)

Applying the Crank-Nicolson method gives an implicit expression for x_{n+1},

x_{n+1} − (h/2) cos(x_{n+1}) − x_n − (h/2) cos(x_n) = 0   (1.19)

Since x_n is known from the previous time step, it is possible to use root finding methods to find a solution to Eq. (1.19). On the other hand, applying the linearized Crank-Nicolson method (Eq. (1.18)) gives an explicit expression for x_{n+1},

x_{n+1} = x_n + h cos(x_n) / ( 1 + (h/2) sin(x_n) )   (1.20)
As you can probably imagine, writing a computer program to implement Eq. (1.20) is a lot simpler than writing a computer program to implement Eq. (1.19). The results for both the implicit and linearized Crank-Nicolson methods are shown in Fig. 1.5. As you can see, both methods give very similar results.
Figure 1.5: Comparing the linearized and implicit Crank-Nicolson methods for h = 1: linearized Crank-Nicolson and implicit Crank-Nicolson method.
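The linearized update of Eq. (1.20) is a one-line formula. The Python sketch below (illustrative; the notes use MATLAB) steps it with the same large h = 1 used in Fig. 1.5:

```python
import math

def lin_cn_step(xn, h):
    """One step of the linearized Crank-Nicolson scheme, Eq. (1.20),
    for dx/dt = cos(x):  x_{n+1} = x_n + h*cos(x_n) / (1 + (h/2)*sin(x_n))."""
    return xn + h * math.cos(xn) / (1.0 + (h / 2.0) * math.sin(xn))

x = 0.0
h = 1.0
for n in range(10):  # integrate to t = 10 with the (large) step h = 1
    x = lin_cn_step(x, h)
print(x, math.atan(math.sinh(10.0)))  # compare with the exact solution (1.10)
```

Even with h = 1 the scheme tracks the exact solution closely, which is consistent with the comparison shown in Fig. 1.5.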
1.5 Runge-Kutta methods
These methods are probably the most popular methods for solving initial value problems. However, many variations of the Runge-Kutta methods exist. We will derive one set below and see why the various formulae are not unique. Let's start with the problem we would like to solve,

dx/dt = f(t, x)

In general, the Runge-Kutta schemes can be written as

x_{n+1} = x_n + φ(x_n, t_n, h) h   (1.21)

where h is the interval size, i.e. h = ∆t = t_{n+1} − t_n. φ is known as the incremental function and it can be interpreted as the slope which is used to predict the new value of x. In general, φ can be written as
φ = a_1 k_1 + a_2 k_2 + a_3 k_3 + a_4 k_4 + ··· + a_N k_N   (1.22)

where

k_1 = f(t_n, x_n)
k_2 = f(t_n + p_1 h, x_n + q_11 k_1 h)
k_3 = f(t_n + p_2 h, x_n + q_21 k_1 h + q_22 k_2 h)
k_4 = f(t_n + p_3 h, x_n + q_31 k_1 h + q_32 k_2 h + q_33 k_3 h)
...
k_N = f(t_n + p_{N−1} h, x_n + q_{N−1,1} k_1 h + q_{N−1,2} k_2 h + ··· + q_{N−1,N−1} k_{N−1} h)

For N = 1, we get the first order Runge-Kutta scheme. This is just the same as the Euler integration scheme presented earlier.
1.5.1 Second Order Runge-Kutta Method
If we put N = 2 into Eq. (1.22), we get

φ = a_1 k_1 + a_2 k_2

Substituting the above into Eq. (1.21) gives

x_{n+1} = x_n + (a_1 k_1 + a_2 k_2) h
x_{n+1} = x_n + a_1 f(t_n, x_n) h + a_2 f(t_n + p_1 h, x_n + q_11 k_1 h) h   (1.23)

Thus, in order to find a numerical scheme, we need to find values for the following constants: a_1, a_2, p_1 and q_11. The last term in the above equation is a function of two variables, and the Taylor series expansion (to the linearized approximation) of a two-variable function is given by

f(t + h, x + ∆) = f(t, x) + h ∂f/∂t + ∆ ∂f/∂x + HOT

Using this relationship and ignoring the higher order terms (HOT),

f(t_n + p_1 h, x_n + q_11 k_1 h) = f(t_n, x_n) + p_1 h (∂f/∂t)(t_n, x_n) + q_11 k_1 h (∂f/∂x)(t_n, x_n)   (1.24)

We know that
k_1 = f(t_n, x_n)

Hence Eq. (1.24) can be written as

f(t_n + p_1 h, x_n + q_11 k_1 h) = f(t_n, x_n) + p_1 h ∂f/∂t + q_11 f(t_n, x_n) h ∂f/∂x   (1.25)
Substituting Eq. (1.25) into Eq. (1.23) gives

x_{n+1} = x_n + (a_1 + a_2) f(t_n, x_n) h + a_2 p_1 (∂f/∂t) h^2 + a_2 q_11 f(t_n, x_n) (∂f/∂x) h^2   (1.26)
We can also write a Taylor series expansion for x in terms of t as

x(t_{n+1}) = x(t_n) + (dx/dt)(t_n) h + (d^2x/dt^2)(t_n) h^2/2! + HOT

where HOT stands for higher order terms. Let's assume that they are small. So the above equation becomes

x_{n+1} = x_n + (dx/dt) h + (d^2x/dt^2) h^2/2!   (1.27)

The problem we are trying to solve has dx/dt = f(t, x). Substituting this into Eq. (1.27) gives

x_{n+1} = x_n + f(t_n, x_n) h + (df/dt)(t_n, x_n) h^2/2   (1.28)

where, by the chain rule,

df/dt = ∂f/∂t + (∂f/∂x)(dx/dt) = ∂f/∂t + f ∂f/∂x

Hence

x_{n+1} = x_n + f(t_n, x_n) h + (1/2)(∂f/∂t) h^2 + (1/2) f(t_n, x_n) (∂f/∂x) h^2   (1.29)
Comparing Eq. (1.29) with Eq. (1.26) gives the following three equations:

a_1 + a_2 = 1
a_2 p_1 = 1/2
a_2 q_11 = 1/2   (1.30)

We have four unknowns (a_1, a_2, p_1 and q_11) but only the above three equations, so there cannot be a unique solution; there are an infinite number of solutions. One possible solution is to set

a_2 = 1/2

and then

a_1 = 1/2
p_1 = 1
q_11 = 1   (1.31)

Hence, one possible second order Runge-Kutta time stepping scheme is

x_{n+1} = x_n + ( (1/2) k_1 + (1/2) k_2 ) h   (1.32)

where

k_1 = f(t_n, x_n)
k_2 = f(t_n + h, x_n + h k_1)   (1.33)
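The pair (1.32)-(1.33), often called Heun's method, can be sketched in a few lines of Python (an illustration added to these notes; the test problem dx/dt = −x is an assumption):

```python
import math

def rk2_step(f, t, x, h):
    """Second order Runge-Kutta step, Eqs. (1.32)-(1.33)."""
    k1 = f(t, x)
    k2 = f(t + h, x + h * k1)
    return x + 0.5 * (k1 + k2) * h

# Illustrative test problem: dx/dt = -x, x(0) = 1, exact solution exp(-t).
t, x, h = 0.0, 1.0, 0.1
for _ in range(10):  # integrate to t = 1
    x = rk2_step(lambda t, x: -x, t, x, h)
    t += h
print(x, math.exp(-1.0))
```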
1.5.2 4th Order Runge-Kutta Scheme (RK-4)
By far, the most popular numerical method for solving ODEs is the 4th order Runge-Kutta scheme,

x_{n+1} = x_n + ( (1/6) k_1 + (1/3)(k_2 + k_3) + (1/6) k_4 ) h   (1.34)

where

k_1 = f(t_n, x_n)
k_2 = f(t_n + h/2, x_n + (h/2) k_1)
k_3 = f(t_n + h/2, x_n + (h/2) k_2)
k_4 = f(t_n + h, x_n + h k_3)

This method is very accurate and has good stability properties.
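Equation (1.34) translates directly into code. A hedged Python sketch (the test problem is again the hypothetical dx/dt = −x):

```python
import math

def rk4_step(f, t, x, h):
    """Classical 4th order Runge-Kutta step, Eq. (1.34)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + (h / 2) * k1)
    k3 = f(t + h / 2, x + (h / 2) * k2)
    k4 = f(t + h, x + h * k3)
    return x + (k1 / 6 + (k2 + k3) / 3 + k4 / 6) * h

# Illustrative test problem: dx/dt = -x, x(0) = 1, exact solution exp(-t).
t, x, h = 0.0, 1.0, 0.1
for _ in range(10):  # integrate to t = 1
    x = rk4_step(lambda t, x: -x, t, x, h)
    t += h
print(abs(x - math.exp(-1.0)))  # far smaller than the Euler error at the same h
```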
1.6 Stability and error analysis
When discussing the stability of numerical methods, one usually considers the model problem

dx/dt = λx   (1.35)

where λ is a constant which can be a complex number. In most engineering problems, the real part of λ is usually negative. This means that the solution that you are after will typically decay with time. Applying the Euler method (with timestep ∆t = h) to Eq. (1.35) gives

x_{n+1} = x_n + λ h x_n   (1.36)

where x_n is the numerical solution of x(t_n) and t_n is the time t_n = nh. To determine the region of stability, simplify Eq. (1.36) to give

x_{n+1} = (1 + λh) x_n

Thus the solution at any time step n can be written as

x_n = x_0 (1 + λh)^n = x_0 (1 + (λ_R + I λ_I) h)^n = x_0 σ^n   (1.37)

where λ_R and λ_I are the real and imaginary parts of λ = λ_R + I λ_I, and

σ = 1 + h λ_R + I h λ_I
is usually called the amplification factor. If |σ| < 1 then the error will decay with time; the opposite is true if |σ| > 1. Hence, in order to ensure the stability of the numerical method, we require |σ| < 1, i.e.

|σ|^2 = (1 + h λ_R)^2 + (h λ_I)^2 < 1

The stability boundary is just a circle of radius 1 centred on (−1, 0) in the (λ_R h, λ_I h) plane. This plot is called the stability diagram and is shown in Fig. 1.6.

Figure 1.6: The stability diagram of the Euler method (axes λ_R h and λ_I h).

If λ is real and negative, then in order for the numerical method to be stable,

h ≤ 2/|λ|   (1.38)
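The bound (1.38) is easy to observe numerically. Since the Euler solution of the model problem is x_n = x_0 σ^n, its magnitude after n steps is |σ|^n; the sketch below (an added Python illustration) evaluates this for λ = −2, where Eq. (1.38) predicts stability for h ≤ 1:

```python
def euler_mag(lam, h, nsteps):
    """Magnitude of the Euler solution of dx/dt = lam*x, x(0) = 1,
    after nsteps steps: |sigma|^n with sigma = 1 + lam*h."""
    sigma = 1.0 + lam * h  # amplification factor, Eq. (1.37)
    return abs(sigma) ** nsteps

# For lam = -2, Eq. (1.38) gives the stability limit h <= 1.
print(euler_mag(-2.0, 0.8, 50))  # |sigma| = 0.6: the solution decays
print(euler_mag(-2.0, 1.2, 50))  # |sigma| = 1.4: the solution blows up
```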
Exercise 1.5: Solve

dx/dt + 2x = 0

with x(t = 0) = 1. Use h = 1.2, 0.8 and 0.3 in the domain 0 ≤ t ≤ 30.
Consider now the case where λ is purely imaginary, i.e.

λ = Iω   (1.39)
where I is the imaginary unit and ω is a real number. The model problem for stability, Eq. (1.35), becomes

dx/dt = I ω x

For illustrative purposes, let's use x(t = 0) = 1. The exact solution to this problem is

x(t) = x_0 e^{Iωt}   (1.40)

where x_0 = x(t = 0). If the Euler method is used to solve this equation, we know that the amplitude will grow with time, as the value of λ is not within the stability region. Let's now analyse the phase error. The amplification factor σ for this problem can be written as

σ = 1 + I ω h = |σ| e^{Iθ}   (1.41)

where θ = arctan(ωh). Let's now compare the exact solution at time t = nh,

x(t = nh) = x_0 e^{Iωnh},

with the approximated solution given by Euler's method,

x_n = σ^n x_0 = A^n e^{Inθ} x_0.

Dividing the two expressions and a little bit of algebra will give you

x_n = x(nh) A^n e^{In(θ − ωh)}

So the approximated solution will be amplified by A^n and its phase will be shifted by n(θ − ωh). Thus A is usually called the amplitude error and (θ − ωh) is called the phase error. At any time step h, the error associated with the phase is

ωh − θ = ωh − arctan(ωh)
       = ωh − ( ωh − (ωh)^3/3 + (ωh)^5/5 − (ωh)^7/7 + ··· )
       = (ωh)^3/3 − (ωh)^5/5 + (ωh)^7/7 − ···   (1.42)
So the phase error has a leading order term of (ωh)^3/3, which is very small. In contrast, the amplitude error grows like A^n. We can also apply the stability analysis to the model equation using the implicit Euler method,

x_{n+1} = x_n + λ x_{n+1} h

Rearranging the above equation gives

x_{n+1} = x_n / (1 − λh)

hence x_n = σ^n x_0, where

σ = 1 / (1 − λh)
Exercise 1.6: Show that the amplification factor σ for the implicit Euler scheme can be written as σ = A e^{Iθ}, where

A = 1 / √( (1 − λ_R h)^2 + (λ_I h)^2 )

and

θ = arctan( λ_I h / (1 − λ_R h) )

Since we are only interested in problems which have λ_R < 0, A is always smaller than unity. Thus the implicit Euler scheme is always stable (as long as λ_R < 0), no matter what the value of h. This means that the implicit Euler scheme is unconditionally stable, which is a property of most implicit numerical schemes.
Exercise 1.7: Compare the implicit and explicit Euler schemes and show that for the model problem

dx/dt = I ω x,

the two schemes have different amplitude errors but the same phase error. Show that the magnitude of the solution given by the explicit Euler scheme grows with time, while the magnitude of the solution given by the implicit Euler scheme decreases with time. The solution to this exercise with h = 0.1 is shown in Fig. 1.7.
Figure 1.7: Figure showing the solution to Exercise 1.7: exact solution, backward Euler method, and Euler method.
Exercise 1.8:

(a) For the model problem dx/dt = λx, perform a stability analysis on the Crank-Nicolson method and show that it is unconditionally stable, i.e. the method is stable for all values of h.

(b) Show that the linearized Crank-Nicolson and the implicit Crank-Nicolson methods have the same stability characteristics.

(c) Also, show that when using the Crank-Nicolson method to solve the model problem dx/dt = Iωx, there is no amplitude error associated with the numerical method. Show that the phase error has a leading order term that looks like

PE = (ωh)^3/12 + ···
Exercise 1.9: Perform a stability analysis for the second order Runge-Kutta method on the model equation

dx/dt = λx

and show that the boundary of the region of stability can be obtained by solving the following equation:

1 + λh + λ^2 h^2/2 − e^{iθ} = 0   (1.43)

Show also that for the 4th order Runge-Kutta method, the stability region is obtained by solving

1 + λh + λ^2 h^2/2 + λ^3 h^3/6 + λ^4 h^4/24 − e^{iθ} = 0   (1.44)

The solutions to Eqs. (1.43) and (1.44) are shown in Fig. 1.8.

Example 1.3: For the model problem
Figure 1.8: Figure showing the stability regions of the 2nd and 4th order Runge-Kutta methods. For comparison, the stability region of the Euler scheme is also shown.
dx/dt = I ω x,

the solution given by the numerical schemes can be written as x_{n+1} = σ x_n, where

|σ_Euler| = √( 1 + (ωh)^2 )   (1.45)

|σ_RK2| = √( 1 + (ωh)^4/4 )   (1.46)

|σ_RK4| = √( 1 − (ωh)^6/72 + (ωh)^8/576 )   (1.47)
for the Euler, second order Runge-Kutta (RK2) and fourth order Runge-Kutta (RK4) schemes respectively. A plot of the magnitude of σ for the various schemes is shown in Fig. 1.9. It is seen that the magnitude of σ is very close to 1 for both the RK4 and RK2 schemes, while it diverges away from 1 very quickly for the Euler scheme. Thus, the magnitude error for the RK4 and RK2 schemes (especially RK4) is very small. To get an idea how small the error is, take ωh = 0.5; then

σ_Euler = √(1 + 0.5^2) = 1.118
σ_RK2 = √(1 + 0.5^4/4) = 1.0078
σ_RK4 = √(1 − 0.5^6/72 + 0.5^8/576) = 0.9998948784

These differences might seem small, but consider this: if we carry out simulations with ωh = 0.5, then after 100 time steps the RK4 scheme produces a numerical solution whose magnitude is 98.9% of the true solution, the RK2 scheme will give you a solution that is 217% of the true solution, and Euler's method will be out by 7 000 000% (yes, 7 million percent)! It is interesting to point out here that, since σ_RK4 is given by Eq. (1.47), the stability curve for the 4th order Runge-Kutta scheme (shown in Fig. 1.8) intercepts the λ_I h axis where (ωh)^6/72 = (ωh)^8/576, i.e.

λ_I h = ωh = √8 = 2.8284
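The closed-form magnitudes (1.45)-(1.47) can be cross-checked by evaluating the stability polynomials directly with complex arithmetic (an added Python sketch; the scheme labels are illustrative):

```python
def step_factor(scheme, z):
    """Amplification factor sigma for one step applied to dx/dt = lam*x,
    evaluated at z = lam*h (the stability polynomials of Exercise 1.9)."""
    if scheme == "euler":
        return 1 + z
    if scheme == "rk2":
        return 1 + z + z**2 / 2
    if scheme == "rk4":
        return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24
    raise ValueError(scheme)

wh = 0.5                 # omega*h = 0.5, as in the worked numbers above
z = 1j * wh              # purely imaginary lam*h
print(abs(step_factor("euler", z)))  # sqrt(1 + wh^2)   = 1.118...
print(abs(step_factor("rk2", z)))    # sqrt(1 + wh^4/4) = 1.0078...
print(abs(step_factor("rk4", z)))    # 0.99989...
```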
1.7 Systems of Ordinary Differential Equations
In many engineering problems, it is essential to solve not just one, but a set of ordinary differential equations. This problem can be expressed in matrix-vector notation as

d{x}/dt = [A]{x}   (1.48)
Figure 1.9: Figure showing the magnitude of σ as a function of ωh for the Euler, RK2 and RK4 schemes.
where the curly brackets {} denote a vector and the square brackets [] represent a matrix; {x} is the vector of dependent variables and [A] is usually a matrix of constant coefficients. Equation (1.48) can be solved using the methods described above. Applying the explicit Euler method,

( {x}^{n+1} − {x}^n ) / h = [A]{x}^n

{x}^{n+1} = {x}^n + h [A] {x}^n

At every time step, one would need to perform a matrix-vector multiplication in order to obtain the values of the vector {x} at the next time step. If an implicit scheme is required to solve the problem, then one would need to solve a system of linear algebraic equations at every time step. For example, applying the trapezoidal/Crank-Nicolson scheme to Eq. (1.48) gives
( {x}^{n+1} − {x}^n ) / h = (1/2)[A]{x}^{n+1} + (1/2)[A]{x}^n

{x}^{n+1} − (h/2)[A]{x}^{n+1} = {x}^n + (h/2)[A]{x}^n

( [I] − (h/2)[A] ) {x}^{n+1} = ( [I] + (h/2)[A] ) {x}^n   (1.49)

Equation (1.49) can now be solved using methods for solving a system of linear algebraic equations (e.g. LU decomposition, Gauss-Seidel, etc.).
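A minimal sketch of Eq. (1.49) for a 2×2 system, using Cramer's rule in place of a general linear solver (the example system, the oscillator x'' + x = 0 written as x1' = x2, x2' = −x1, is an assumption chosen for illustration):

```python
import math

def solve2(M, b):
    """Solve a 2x2 linear system M y = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def cn_step(A, x, h):
    """One Crank-Nicolson step, Eq. (1.49):
    ([I] - h/2 [A]) {x}^{n+1} = ([I] + h/2 [A]) {x}^n."""
    left = [[1 - h / 2 * A[0][0], -h / 2 * A[0][1]],
            [-h / 2 * A[1][0], 1 - h / 2 * A[1][1]]]
    rhs = [x[0] + h / 2 * (A[0][0] * x[0] + A[0][1] * x[1]),
           x[1] + h / 2 * (A[1][0] * x[0] + A[1][1] * x[1])]
    return solve2(left, rhs)

A = [[0.0, 1.0], [-1.0, 0.0]]   # oscillator: x1' = x2, x2' = -x1
x = [1.0, 0.0]
h = 0.1
for _ in range(63):             # integrate to t = 6.3, roughly one period
    x = cn_step(A, x, h)
print(x[0], math.cos(6.3))
```

Because the eigenvalues of this [A] are purely imaginary, Crank-Nicolson preserves the amplitude of the solution exactly, consistent with Exercise 1.8(c).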
Exercise 1.10: Solve the following second order ordinary differential equation,

d^2x/dt^2 + ω^2 x = 0   (1.50)

with the initial conditions

x(t = 0) = p
(dx/dt)(t = 0) = 0.

Outline the numerical algorithm for the Euler and trapezoidal methods.
Multi-dimensional RK-4

In many situations, you might be required to solve a multi-dimensional system using RK-4. The RK-4 scheme can easily be extended to a multi-dimensional system. An example of using RK-4 to solve a system with 4 dependent variables is given below. In the example below, it will be assumed that the system you are interested in is autonomous, i.e. the right hand side is not a function of time. Suppose you are asked to solve a system of ordinary differential equations that looks like

da_1/dt = f_1(a_1, a_2, a_3, a_4)
da_2/dt = f_2(a_1, a_2, a_3, a_4)
da_3/dt = f_3(a_1, a_2, a_3, a_4)
da_4/dt = f_4(a_1, a_2, a_3, a_4)

Equation (1.34) can be written in multi-dimensional form as
a_1^{n+1} = a_1^n + ( (1/6) k_11 + (1/3)(k_21 + k_31) + (1/6) k_41 ) h   (1.51)
a_2^{n+1} = a_2^n + ( (1/6) k_12 + (1/3)(k_22 + k_32) + (1/6) k_42 ) h   (1.52)
a_3^{n+1} = a_3^n + ( (1/6) k_13 + (1/3)(k_23 + k_33) + (1/6) k_43 ) h   (1.53)
a_4^{n+1} = a_4^n + ( (1/6) k_14 + (1/3)(k_24 + k_34) + (1/6) k_44 ) h   (1.54)
where the approximated solution to a_i at time level n + 1 is denoted a_i^{n+1}. Note that there has been a change in notation: the subscript of the variable a now corresponds to the dependent-variable index, and the time level is denoted by a superscript. In order to calculate the a_i at various times, the k_ij need to be calculated as follows.
k_11 = f_1(a_1, a_2, a_3, a_4)
k_12 = f_2(a_1, a_2, a_3, a_4)
k_13 = f_3(a_1, a_2, a_3, a_4)
k_14 = f_4(a_1, a_2, a_3, a_4)

k_21 = f_1( a_1 + (h/2) k_11, a_2 + (h/2) k_12, a_3 + (h/2) k_13, a_4 + (h/2) k_14 )
k_22 = f_2( a_1 + (h/2) k_11, a_2 + (h/2) k_12, a_3 + (h/2) k_13, a_4 + (h/2) k_14 )
k_23 = f_3( a_1 + (h/2) k_11, a_2 + (h/2) k_12, a_3 + (h/2) k_13, a_4 + (h/2) k_14 )
k_24 = f_4( a_1 + (h/2) k_11, a_2 + (h/2) k_12, a_3 + (h/2) k_13, a_4 + (h/2) k_14 )
k_31 = f_1( a_1 + (h/2) k_21, a_2 + (h/2) k_22, a_3 + (h/2) k_23, a_4 + (h/2) k_24 )
k_32 = f_2( a_1 + (h/2) k_21, a_2 + (h/2) k_22, a_3 + (h/2) k_23, a_4 + (h/2) k_24 )
k_33 = f_3( a_1 + (h/2) k_21, a_2 + (h/2) k_22, a_3 + (h/2) k_23, a_4 + (h/2) k_24 )
k_34 = f_4( a_1 + (h/2) k_21, a_2 + (h/2) k_22, a_3 + (h/2) k_23, a_4 + (h/2) k_24 )

and

k_41 = f_1( a_1 + h k_31, a_2 + h k_32, a_3 + h k_33, a_4 + h k_34 )
k_42 = f_2( a_1 + h k_31, a_2 + h k_32, a_3 + h k_33, a_4 + h k_34 )
k_43 = f_3( a_1 + h k_31, a_2 + h k_32, a_3 + h k_33, a_4 + h k_34 )
k_44 = f_4( a_1 + h k_31, a_2 + h k_32, a_3 + h k_33, a_4 + h k_34 )

Generalization of RK-4 to a system of N dependent variables should be straightforward.
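The fourfold repetition above is just the scalar RK-4 applied componentwise, so it collapses into a short vector routine. A hedged Python sketch for an autonomous system of N variables (lists stand in for the vectors; the 2-variable test system is an assumption):

```python
import math

def rk4_system_step(f, a, h):
    """One RK-4 step for an autonomous system da/dt = f(a),
    where a and f(a) are lists of the same length."""
    def axpy(y, c, k):  # helper: y + c*k, componentwise
        return [yi + c * ki for yi, ki in zip(y, k)]
    k1 = f(a)
    k2 = f(axpy(a, h / 2, k1))
    k3 = f(axpy(a, h / 2, k2))
    k4 = f(axpy(a, h, k3))
    return [ai + (k1i / 6 + (k2i + k3i) / 3 + k4i / 6) * h
            for ai, k1i, k2i, k3i, k4i in zip(a, k1, k2, k3, k4)]

# Illustrative autonomous system: a1' = a2, a2' = -a1 (the oscillator),
# with a(0) = [1, 0], so the exact a1(t) is cos(t).
a = [1.0, 0.0]
for _ in range(100):  # step to t = 10 with h = 0.1
    a = rk4_system_step(lambda v: [v[1], -v[0]], a, 0.1)
print(a[0], math.cos(10.0))
```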
1.8 Runge-Kutta-Fehlberg method: Runge-Kutta with error control
In this method, the step size h is adjusted at every time level in order to keep the error of the numerical scheme under control. In order to see how this can be done, recall that the N'th order Runge-Kutta scheme can be written as

x_{n+1} = x_n + h φ(t_n, x_n, h)   (1.55)

and the corresponding Taylor series can be written as

x(t_{n+1}) = x(t_n) + h ψ(t_n, x(t_n)) + O(h^{N+1})   (1.56)

where φ is a series consisting of your k values and ψ is a series consisting of derivatives of x(t). Recall that in Eq. (1.55), the expression for φ is derived by requiring that it be identical to ψ in Eq. (1.56). Thus, locally, at time level t_n, the numerical values of φ and ψ must be the same. A higher order Runge-Kutta scheme can be written as
x̂_{n+1} = x̂_n + h φ̂(t_n, x_n, h)   (1.57)

and it is obtained from a higher order Taylor series,

x(t_{n+1}) = x(t_n) + h ψ̂(t_n, x(t_n)) + O(h^{N+2})   (1.58)
Let's assume that at time level t_n there is hardly any error in the numerical solutions, i.e. x_n ≈ x̂_n ≈ x(t_n). Rearranging Eq. (1.56), one obtains an expression for the local truncation error,

τ_{n+1}(h) = ( x(t_{n+1}) − x(t_n) ) / h − ψ(t_n, x(t_n))
           = ( x(t_{n+1}) − x_n ) / h − ψ(t_n, x(t_n))
           = ( x(t_{n+1}) − (x_n + h ψ(t_n, x(t_n))) ) / h.

Note that τ_{n+1}(h) is O(h^N). Since the Runge-Kutta scheme requires that the numerical value of ψ equals φ, we can continue the above as

τ_{n+1}(h) = ( x(t_{n+1}) − (x_n + h φ(t_n, x_n)) ) / h
           = (1/h) ( x(t_{n+1}) − x_{n+1} )   (1.59)

Similarly, starting from Eq. (1.58), one can obtain an expression for the local truncation error of the higher order scheme,

τ̂_{n+1}(h) = (1/h) ( x(t_{n+1}) − x̂_{n+1} )   (1.60)

which is O(h^{N+1}). Going back to Eq. (1.59) and using Eq. (1.60),

τ_{n+1}(h) = (1/h) ( x(t_{n+1}) − x_{n+1} )
           = (1/h) ( [x(t_{n+1}) − x̂_{n+1}] + [x̂_{n+1} − x_{n+1}] )
           = τ̂_{n+1}(h) + (1/h) [x̂_{n+1} − x_{n+1}]

Recall that τ_{n+1}(h) is O(h^N) and τ̂_{n+1}(h) is O(h^{N+1}). Thus, if h is small, τ_{n+1}(h) can be simply approximated as

τ_{n+1}(h) ≈ (1/h) ( x̂_{n+1} − x_{n+1} )   (1.61)
Let's now see how this information can be used to control the local truncation error. Since τ_{n+1}(h) is O(h^N), one can write

τ_{n+1}(h) = K h^N   (1.62)

If we increase or decrease the time step h by a factor of r, then the local truncation error would be

τ_{n+1}(rh) = K (rh)^N = r^N K h^N = r^N τ_{n+1}(h) ≈ (r^N / h) ( x̂_{n+1} − x_{n+1} ),

using Eq. (1.61). Thus, if we want to bound the local truncation error by a small value ε, then

r ≤ ( ε h / |x̂_{n+1} − x_{n+1}| )^{1/N}   (1.63)

In practice, one usually sets

r = ( ε h / (2 |x̂_{n+1} − x_{n+1}|) )^{1/N}   (1.64)
One popular method to implement the above algorithm is called the Runge-Kutta-Fehlberg method. In this method x^{n+1} and \hat{x}^{n+1} are approximated as

x^{n+1} = x^n + h\left( \frac{25}{216}k_1 + \frac{1408}{2565}k_3 + \frac{2197}{4104}k_4 - \frac{1}{5}k_5 \right) \qquad (1.65)

\hat{x}^{n+1} = \hat{x}^n + h\left( \frac{16}{135}k_1 + \frac{6656}{12825}k_3 + \frac{28561}{56430}k_4 - \frac{9}{50}k_5 + \frac{2}{55}k_6 \right) \qquad (1.66)

where

k_1 = f(t^n, x^n)
k_2 = f\left( t^n + \frac{h}{4},\; x^n + \frac{h}{4}k_1 \right)
k_3 = f\left( t^n + \frac{3h}{8},\; x^n + \frac{3h}{32}k_1 + \frac{9h}{32}k_2 \right)
k_4 = f\left( t^n + \frac{12h}{13},\; x^n + \frac{1932h}{2197}k_1 - \frac{7200h}{2197}k_2 + \frac{7296h}{2197}k_3 \right)
k_5 = f\left( t^n + h,\; x^n + \frac{439h}{216}k_1 - 8hk_2 + \frac{3680h}{513}k_3 - \frac{845h}{4104}k_4 \right)
k_6 = f\left( t^n + \frac{h}{2},\; x^n - \frac{8h}{27}k_1 + 2hk_2 - \frac{3544h}{2565}k_3 + \frac{1859h}{4104}k_4 - \frac{11h}{40}k_5 \right)
It can be shown that the global error associated with x^{n+1} is O(h^4) and the global error associated with \hat{x}^{n+1} is O(h^5). So N = 4 and r is calculated as
r = 0.84\left( \frac{\epsilon h}{|\hat{x}^{n+1} - x^{n+1}|} \right)^{1/4} \qquad (1.67)
Recall that the error at time level n+1 is approximated as \hat{x}^{n+1} - x^{n+1}, and assume that there is no error at time level n, i.e. x(t^n) \approx x^n \approx \hat{x}^n. So using Eqs. (1.65) and (1.66), the error at time level n+1 is approximated as

|\hat{x}^{n+1} - x^{n+1}| = h\left| \frac{1}{360}k_1 - \frac{128}{4275}k_3 - \frac{2197}{75240}k_4 + \frac{1}{50}k_5 + \frac{2}{55}k_6 \right|
The main MATLAB code to implement the Runge-Kutta-Fehlberg method would look something like that shown in Matlab Program 1.1.

Matlab Program 1.1:
%
% Main matlab loop to implement the Runge-Kutta Fehlberg method
%
while FLAG==1
  K1=h*TestFunction(t(n),x(n));
  K2=h*TestFunction(t(n)+(1/4)*h,x(n)+(1/4)*K1);
  K3=h*TestFunction(t(n)+(3/8)*h,x(n)+(3/32)*K1+(9/32)*K2);
  K4=h*TestFunction(t(n)+(12/13)*h,x(n)+(1932/2197)*K1 ...
     -(7200/2197)*K2+(7296/2197)*K3);
  K5=h*TestFunction(t(n)+h,x(n)+(439/216)*K1 ...
     -8*K2+(3680/513)*K3-(845/4104)*K4);
  K6=h*TestFunction(t(n)+(1/2)*h,x(n)-(8/27)*K1 ...
     +2*K2-(3544/2565)*K3+(1859/4104)*K4-(11/40)*K5);
  R=(1/h)*abs((1/360)*K1-(128/4275)*K3 ...
     -(2197/75240)*K4+(1/50)*K5+(2/55)*K6);
  if R<=epsilon                  % step accepted: 4th order update, Eq. (1.65)
    t(n+1)=t(n)+h;
    x(n+1)=x(n)+(25/216)*K1+(1408/2565)*K3+(2197/4104)*K4-(1/5)*K5;
    n=n+1;
  end
  delta=0.84*(epsilon/R)^(1/4);  % step size scaling factor, Eq. (1.67)
  if delta<0.1
    h=0.1*h;
  elseif delta>4.0
    h=4.0*h;
  else
    h=delta*h;
  end
  if h>hmax
    h=hmax;
  end
  if t(n)>=tmax
    FLAG=0;
  elseif t(n)+h>tmax
    h=tmax-t(n);
  elseif h<hmin
    FLAG=0;
    disp('Minimum step size exceeded')
  end
end
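For readers who want to experiment, the same adaptive loop can be sketched in Python. This is my own transcription of the algorithm above, not code from the notes; the test problem dx/dt = -x and all tolerances and step bounds are arbitrary choices.

```python
import math

def rkf45_step(f, t, x, h):
    """One Runge-Kutta-Fehlberg step: the 4th order update of Eq. (1.65)
    and the error estimate R of Eq. (1.61)."""
    k1 = h * f(t, x)
    k2 = h * f(t + h/4, x + k1/4)
    k3 = h * f(t + 3*h/8, x + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, x + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h, x + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2, x - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    x4 = x + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    R = abs(k1/360 - 128*k3/4275 - 2197*k4/75240 + k5/50 + 2*k6/55) / h
    return x4, R

def rkf45(f, t0, x0, tmax, tol, h=0.1, hmin=1e-8, hmax=0.5):
    t, x = t0, x0
    while t < tmax:
        if h < hmin:
            raise RuntimeError("minimum step size exceeded")
        h_step = min(h, tmax - t)
        x_new, R = rkf45_step(f, t, x, h_step)
        if R <= tol:                           # step accepted
            t, x = t + h_step, x_new
        delta = 0.84 * (tol / R) ** 0.25       # Eq. (1.67)
        h = h_step * min(max(delta, 0.1), 4.0) # keep step-size changes bounded
        h = min(h, hmax)
    return x

# dx/dt = -x, x(0) = 1 has the exact solution exp(-t)
x_end = rkf45(lambda t, x: -x, 0.0, 1.0, 2.0, tol=1e-8)
```

Because R estimates the error per unit step, the accumulated error over the whole interval is roughly tol times the integration length.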
1.9 Boundary Value problem
The methods introduced above can only be used to solve initial value problems; that is, all the information you are given is at time t = 0, say, and you are asked to predict the solution at a later point in time, say t = t_f. What if you are given some information at t = t_f instead? Such problems are called boundary value problems. Can the methods above be used to solve them? It turns out that it is possible to use the methods introduced in the preceding sections to solve boundary value problems. This family of methods is called shooting methods; in general they guess the missing information at t = 0 in order to produce the required conditions at t = t_f. The method will be introduced by considering a system of 3 ODEs; generalization to a system of N ODEs is straightforward. Consider the system of 3 ordinary differential equations
\frac{da}{dt} = f_a(a, b, c)
\frac{db}{dt} = f_b(a, b, c)
\frac{dc}{dt} = f_c(a, b, c) \qquad (1.68)
You are given the following conditions

a(0) = 0, \quad b(0) = 0 \quad \text{and} \quad c(t_f) = P \qquad (1.69)
You are asked to find a(t), b(t) and c(t). It is important to note that c(0) is not defined; for the purpose of the following discussion, let c(0) = \alpha. The idea behind shooting methods is that because c(0) is not defined, we are free to choose any value of \alpha that will give us c(t_f) = P. But we do not know beforehand what value of \alpha will give c(t_f) = P, so we need to iterate through different values of \alpha until the computed value of c(t_f) equals P. As you can imagine, blindly trying values is very inefficient; there are more intelligent ways of choosing \alpha, described below. Using any numerical scheme for ODEs, Eq. (1.68) can be solved with the following initial conditions

a(0) = 0, \quad b(0) = 0, \quad c(0) = \alpha \qquad (1.70)
Let's say we take N steps of step size h to approach t = t_f; the approximate value of c(t_f) is then denoted c_N. The value of c_N depends on the value of \alpha. Since we are only guessing the value of \alpha, it is very likely that c_N - P \ne 0. For the following analysis, it will be convenient to define a function

g(\alpha) = c_N(\alpha) - P. \qquad (1.71)
In order for the numerical solution to satisfy the original boundary conditions defined in Eq. (1.69), we must ensure that g(\alpha) = 0. Thus this becomes a root finding problem, i.e. we have to iteratively find the value of \alpha such that g(\alpha) = 0. Each value of \alpha gives a numerical solution; only the numerical solution computed with the value of \alpha that ensures g(\alpha) = 0 is the correct solution to the original problem. Since the problem has been recast as a root finding problem, the Secant formula is usually used to provide a better guess for \alpha,

\alpha_{i+1} = \alpha_i - \frac{g(\alpha_i)(\alpha_i - \alpha_{i-1})}{g(\alpha_i) - g(\alpha_{i-1})} \qquad (1.72)
which drives g(\alpha_{i+1}) towards zero.
Example 1.4: You are given the following ordinary differential equation

\frac{d^2x}{dt^2} = -4\cos(x) \qquad (1.73)

and the boundary conditions x(0) = 0 and x(1) = 5. You are asked to find x(t) using numerical methods. Firstly, rewrite Eq. (1.73) as a set of 2 ODEs. Set

a = x
b = \frac{dx}{dt}

so that

\frac{da}{dt} = b
\frac{db}{dt} = -4\cos(a) \qquad (1.74)

We are given that a(0) = x(0) = 0. However, we do not know the value of dx/dt(0) = b(0). Instead of a condition for b(0) = dx/dt(t = 0), we are given the condition at t = 1, i.e. a(1) = 5. So we are free to choose b(0) = \alpha. Numerical solutions computed using different values of \alpha will give different values of a_N (for simplicity, I have just used Euler's method to compute the numerical solution, but you can use any method you like). We would like to pick only the numerical solution that gives a_N = 5. Let's define a function

g(\alpha) = a_N(\alpha) - 5.
So our task now is to find the value of \alpha such that g(\alpha) = 0. Let's just guess \alpha_0 = 0 and \alpha_1 = 1. For these values of \alpha, the numerical solution of Eq. (1.74) can be computed, giving g(\alpha_0) = -6.7733 and g(\alpha_1) = -5.9639. One can then use Eq. (1.72) to compute \alpha_2. This process is repeated until g(\alpha_i) \approx 0. The table below shows numerical values of \alpha_i and g(\alpha_i) for each iteration.

i    \alpha_i    g(\alpha_i)
0    0           -6.7733
1    1           -5.9639
2    8.3682       3.2575
3    5.7654       0.6666
4    5.0957      -0.1424
5    5.2136       0.0085
6    5.2069       0.0001
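The whole procedure can be sketched in Python as follows. This is an illustration, not the notes' code: it integrates Eq. (1.74) with a fixed-step Euler method (the step count is an arbitrary choice of mine) and applies the secant update of Eq. (1.72); the iterates differ slightly from the table above because the integrator and step size differ.

```python
import math

def a_at_1(alpha, n_steps=1000):
    """Euler integration of da/dt = b, db/dt = -4 cos(a) on [0, 1],
    with a(0) = 0 and guessed slope b(0) = alpha; returns a(1)."""
    h = 1.0 / n_steps
    a, b = 0.0, alpha
    for _ in range(n_steps):
        a, b = a + h * b, b - h * 4.0 * math.cos(a)
    return a

def g(alpha):
    return a_at_1(alpha) - 5.0          # g(alpha) = aN(alpha) - 5

# secant iteration, Eq. (1.72), starting from the guesses alpha0 = 0, alpha1 = 1
a0, a1 = 0.0, 1.0
g0, g1 = g(a0), g(a1)
for _ in range(50):
    if abs(g1) < 1e-10:
        break
    a0, a1, g0 = a1, a1 - g1 * (a1 - a0) / (g1 - g0), g1
    g1 = g(a1)
alpha = a1
```

The converged alpha plays the role of the unknown initial slope b(0), and the corresponding trajectory satisfies the far boundary condition a(1) = 5 to the secant tolerance.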
The numerical solution computed using different values of α i is shown in Fig. 1.10. Note that the αi are chosen according to Eq. (1.72) and it forces the numerical
solution of x at t = 1 to approach 5. The numerical solutions computed using \alpha_5 and \alpha_6 are indistinguishable on the scale of the diagram.
Figure 1.10: Figure showing the solution to Example 1.4. The numerical solutions computed using \alpha_0, \alpha_1, \alpha_2, \alpha_3 and \alpha_4 are indicated on the diagram.
Chapter 2

Fourier series

Any periodic function f(x) can be expressed as a complex Fourier series with coefficients c_l. To prove this, recall that in Engineering Analysis, any periodic function can be expressed as a sine and cosine series

f(x) = a_0 + \sum_{l=1}^{\infty} \left[ a_l\cos(lk_0x) + b_l\sin(lk_0x) \right], \qquad (2.1)
where

k_0 = \frac{2\pi}{L},

L is the length of the domain and k_0 is the fundamental wavenumber. The terms lk_0 = k_l are the harmonics of the fundamental. The coefficients a_l and b_l in Eq. (2.1) can be found by evaluating the following integrals
a_0 = \frac{1}{L}\int_0^L f(x)\,dx
a_l = \frac{2}{L}\int_0^L f(x)\cos(lk_0x)\,dx
b_l = \frac{2}{L}\int_0^L f(x)\sin(lk_0x)\,dx \qquad (2.2)

Note that

a_{-l} = a_l
b_{-l} = -b_l
Recall that sin and cos functions can be expressed in terms of complex exponentials

\sin\rho = \frac{e^{I\rho} - e^{-I\rho}}{2I} \qquad (2.3)

\cos\rho = \frac{e^{I\rho} + e^{-I\rho}}{2} \qquad (2.4)
Substituting Eqs. (2.3) and (2.4) into Eq. (2.1) gives

f(x) = a_0 + \sum_{l=1}^{\infty} \left[ a_l\,\frac{e^{Ilk_0x} + e^{-Ilk_0x}}{2} + b_l\,\frac{e^{Ilk_0x} - e^{-Ilk_0x}}{2I} \right]
     = a_0 + \sum_{l=1}^{\infty} \left[ \frac{a_l - Ib_l}{2}\,e^{Ilk_0x} + \frac{a_l + Ib_l}{2}\,e^{-Ilk_0x} \right]

Define c_l to be

c_0 = a_0
c_l = \frac{a_l - Ib_l}{2}
c_{-l} = \frac{a_{-l} - Ib_{-l}}{2} = \frac{a_l + Ib_l}{2}

Substituting the definition for c_l into the expression for f(x) gives

f(x) = c_0 + \sum_{l=1}^{\infty} c_l e^{Ilk_0x} + \sum_{l=1}^{\infty} c_{-l} e^{-Ilk_0x}
     = \sum_{l=0}^{\infty} c_l e^{Ilk_0x} + \sum_{l=1}^{\infty} c_{-l} e^{I(-l)k_0x}

Observe that

\sum_{l=1}^{\infty} c_{-l} e^{I(-l)k_0x} = \sum_{l=-\infty}^{-1} c_l e^{Ilk_0x}

Thus

f(x) = \sum_{l=0}^{\infty} c_l e^{Ilk_0x} + \sum_{l=-\infty}^{-1} c_l e^{Ilk_0x} \qquad (2.5)
     = \sum_{l=-\infty}^{\infty} c_l e^{Ilk_0x} \qquad (2.6)

i.e. any periodic function can be written as

f(x) = \sum_{l=-\infty}^{\infty} c_l e^{Ilk_0x} \qquad (2.7)
Exercise 2.1: Use the definitions of a_l and b_l above to show that

c_l = \frac{a_l - Ib_l}{2} = \frac{1}{L}\int_0^L f(x)\,e^{-Ilk_0x}\,dx \qquad (2.8)
Solution to Exercise 2.1:

c_l = \frac{a_l - Ib_l}{2}
    = \frac{1}{2}\,\frac{2}{L}\int_0^L f(x)\cos(lk_0x)\,dx - \frac{I}{2}\,\frac{2}{L}\int_0^L f(x)\sin(lk_0x)\,dx
    = \frac{1}{L}\int_0^L f(x)\cos(lk_0x)\,dx - \frac{I}{L}\int_0^L f(x)\sin(lk_0x)\,dx
    = \frac{1}{L}\int_0^L f(x)\,\frac{e^{Ilk_0x} + e^{-Ilk_0x}}{2}\,dx - \frac{I}{L}\int_0^L f(x)\,\frac{e^{Ilk_0x} - e^{-Ilk_0x}}{2I}\,dx
    = \frac{1}{L}\int_0^L f(x)\,e^{-Ilk_0x}\,dx
There are two practical problems with this representation: we usually cannot sum to infinity in Eq. (2.7), and in general we cannot evaluate the integral specified in Eq. (2.8) exactly. The Nyquist critical wavenumber, i.e. the largest wavenumber that is resolvable by equally sampled data in physical space, is

\frac{Nk_0}{2} \qquad (2.9)
A partial solution to these problems is to introduce the discrete Fourier transform (DFT). In engineering, functions are often represented by a finite set of discrete, equally spaced values; additionally, data is often collected in, or converted to, such a discrete format. The discrete analogue of Eq. (2.7) is

f(x_j) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{+Ilk_0x_j} \qquad (2.10)

where x_j = jL/N and L is the length of the domain. The above equation can be used to find the c_l, since Eq. (2.10) is a linear system of N equations in the N unknowns c_{-N/2+1}, \ldots, c_{N/2}. We could find all the c_l by solving this linear system; however, it is much more efficient to use the orthogonality property of a Fourier series.
Exercise 2.2: Show the following orthogonality relationship

\sum_{j=0}^{N-1} e^{Ik_0(l-m)x_j} = \begin{cases} N & \text{if } l - m = pN \text{ where } p = 0, \pm 1, \pm 2, \ldots \\ 0 & \text{otherwise} \end{cases} \qquad (2.11)

Multiplying both sides of Eq. (2.10) by e^{-Ik_0mx_j} and summing from j = 0 to N-1 gives

c_l = \sum_{j=0}^{N-1} f(x_j)\,e^{-Ilk_0x_j} \qquad (2.12)
2.1 Some properties of the Fourier coefficients
• If f(x) is a real (no imaginary part) function, then c_l^* = c_{-l}. The superscript * denotes complex conjugation.

Proof:

c_l = \sum_{j=0}^{N-1} f(x_j)\cos(k_lx_j) - I\sum_{j=0}^{N-1} f(x_j)\sin(k_lx_j)
c_{-l} = \sum_{j=0}^{N-1} f(x_j)\cos(k_lx_j) + I\sum_{j=0}^{N-1} f(x_j)\sin(k_lx_j)

so c_{-l} = c_l^* when f(x) is real.

• If f(x) is a real function, then c_{-l} = c_{N-l}.

• If f(x) is a real function, then both c_0 and c_{N/2} are real numbers.
Proof:

c_l = \sum_{j=0}^{N-1} f(x_j)\,e^{-Ilk_0x_j}, \qquad (2.13)
c_{-l} = \sum_{j=0}^{N-1} f(x_j)\,e^{Ilk_0x_j} \qquad (2.14)

c_{N-l} = \sum_{j=0}^{N-1} f(x_j)\,e^{-I(N-l)k_0x_j}
        = \sum_{j=0}^{N-1} f(x_j)\,e^{-INk_0x_j}\,e^{Ilk_0x_j}

where x_j = j\Delta = jL/N. Since

e^{-INk_0x_j} = e^{-IN(2\pi/L)(Lj/N)} = e^{-I2\pi j} = \underbrace{\cos(2\pi j)}_{=1} - I\underbrace{\sin(2\pi j)}_{=0} = 1

Hence,

c_{N-l} = c_{-l}
Exercise 2.3: Show that

f(x_j) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{+Ilk_0x_j} = \frac{1}{N}\sum_{l=0}^{N-1} c_l e^{Ilk_0x_j}
Solution to Exercise 2.3:

f(x_j) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{+Ilk_0x_j}
       = \frac{1}{N}\left[ \sum_{l=0}^{N/2} c_l e^{Ilk_0x_j} + \sum_{l=-N/2+1}^{-1} c_l e^{Ilk_0x_j} \right]
       = \frac{1}{N}\left[ \sum_{l=0}^{N/2} c_l e^{Ilk_0x_j} + \sum_{l=1}^{N/2-1} c_{-l} e^{-Ilk_0x_j} \right]

But we know that c_{-l} = c_{N-l}. So

f(x_j) = \frac{1}{N}\left[ \sum_{l=0}^{N/2} c_l e^{Ilk_0x_j} + \sum_{l=1}^{N/2-1} c_{N-l} e^{I(N-l)k_0x_j} \right]
       = \frac{1}{N}\sum_{l=0}^{N-1} c_l e^{Ilk_0x_j}
Note that

e^{I(N-l)k_0x_j} = e^{INk_0x_j}\,e^{-Ilk_0x_j} = e^{INk_0(Lj/N)}\,e^{-Ilk_0x_j} = e^{-Ilk_0x_j} \qquad (2.15)

Thus the discrete Fourier transform pair can be written as

f(x_j) = \frac{1}{N}\sum_{l=0}^{N-1} c_l e^{Ilk_0x_j} \qquad (2.16)

c_l = \sum_{j=0}^{N-1} f(x_j)\,e^{-Ilk_0x_j} \qquad (2.17)
• If f(x) is an even function, the c_l are real numbers.

• If f(x) is an odd function, the c_l are imaginary numbers.

• If the Fourier transform of f(x) is c_l, then the Fourier transform of f(x + t) is c_l e^{Ik_lt}.
Proof:

f(x) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{Ik_lx}

f(x + t) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{Ik_l(x+t)}
         = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{Ik_lt}\,e^{Ik_lx}
         = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} b_l e^{Ik_lx}

where b_l = c_l e^{Ik_lt} is the Fourier image of f(x + t).
Exercise 2.4: Show that if f(x) is a real function, then

f(x_j) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{+Ik_lx_j}
       = \frac{1}{N}\left[ p_0 + 2\sum_{l=1}^{N/2} \left( p_l\cos(k_lx_j) - q_l\sin(k_lx_j) \right) \right]

where k_l = lk_0, and p_l and q_l are the real and imaginary parts of c_l respectively.

In what follows, we only consider situations where f is defined in the domain 0 < x < 2\pi, i.e. L = 2\pi, so that

k_0 = 2\pi/L = 2\pi/2\pi = 1 \qquad (2.18)

With this assumption, the discrete Fourier transform pair becomes
f(x_j) = \frac{1}{N}\sum_{l=-N/2+1}^{N/2} c_l e^{+Ilx_j} \qquad (2.19)

c_l = \sum_{j=0}^{N-1} f(x_j)\,e^{-Ilx_j} \qquad (2.20)
Exercise 2.5: Express the solution of the convection diffusion equation

\frac{\partial f}{\partial t} = -U\frac{\partial f}{\partial x} + \mu\frac{\partial^2 f}{\partial x^2} \qquad (2.21)

in terms of a complex Fourier series and show that the coefficients of the Fourier series evolve as

c_l(t) = c_l(t = 0)\,e^{-IUk_lt}\,e^{-\mu k_l^2t} \qquad (2.22)

where k_l = lk_0. If we set U = 1, the solutions at time t = 0 and t = 20 are shown in Fig. 2.1.
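Equation (2.22) is easy to check numerically. For pure convection (mu = 0) every coefficient only picks up a phase e^{-IUk_lt}, so the reconstructed solution at time t must be the initial profile translated by Ut. The sketch below (naive DFT, my own parameter choices) verifies this for f(x, 0) = sin(x):

```python
import cmath, math

N, L = 32, 2 * math.pi
U, mu, t = 1.0, 0.0, math.pi / 2            # pure convection for this check
x = [j * L / N for j in range(N)]
f0 = [math.sin(xi) for xi in x]             # initial condition

# forward transform (Eq. 2.20 convention), evolve each c_l by Eq. (2.22), invert
c = [sum(f0[j] * cmath.exp(-1j * l * x[j]) for j in range(N)) for l in range(N)]
kl = [l if l <= N // 2 else l - N for l in range(N)]     # signed wavenumbers
c_t = [c[l] * cmath.exp(-1j * U * kl[l] * t) * math.exp(-mu * kl[l] ** 2 * t)
       for l in range(N)]
f_t = [(sum(c_t[l] * cmath.exp(1j * l * x[j]) for l in range(N)) / N).real
       for j in range(N)]

exact = [math.sin(xi - U * t) for xi in x]  # f0 translated by U t
```

With mu > 0 the same code shows each mode decaying at the rate e^{-mu k_l^2 t}, which is the behaviour plotted in Fig. 2.1.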
2.2 Aliasing
For any sample interval \Delta, there is a special wavenumber k_c, called the Nyquist critical wavenumber, which is defined to be

k_c = \frac{2\pi}{2\Delta} = \frac{\pi}{\Delta}

For a domain L = 2\pi, \Delta = 2\pi/N. Thus

k_c = \frac{\pi}{2\pi/N} = \frac{N}{2}

k_c is important for the following reason: any energy at wavenumbers beyond k_c will be reflected to k < k_c, with k_c as the point of reflection.
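A concrete illustration of this reflection (my own example, not from the notes): on N = 16 equally spaced points the wave sin(13x) has wavenumber 13 > k_c = 8, and its samples coincide exactly with those of -sin(3x), so its energy appears at the reflected wavenumber 3.

```python
import math

N = 16
x = [2 * math.pi * j / N for j in range(N)]

# 13 = -3 (mod 16), so on this grid sin(13 x_j) = sin(-3 x_j) = -sin(3 x_j):
# the l = 13 wave is aliased onto wavenumber 3 < kc = N/2 = 8
high = [math.sin(13 * xj) for xj in x]
alias = [-math.sin(3 * xj) for xj in x]
```

No amount of post-processing can tell the two sampled signals apart; the high wavenumber must be filtered before sampling, not after.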
Figure 2.1: Initial condition at time t = 0, and solutions at t = 20 for \mu = 0, \mu = 0.01 and \mu = 0.1.
Chapter 3

Finite Difference

3.1 General Finite Difference Schemes
In many applications, one needs to find the derivatives of a function. This section of the notes describes a procedure that one could use to find the spatial derivatives of a function.
3.1.1 First Derivatives
We would like to express the derivative of a function, f(x), in terms of the function values at neighbouring points. The general explicit formula for doing this can be written as

f'_i = \frac{1}{\Delta}\left[ \sum_{j=1}^{M} a_{-j}f_{i-j} + a_0f_i + \sum_{j=1}^{M} a_jf_{i+j} \right] \qquad (3.1)
where f'_i is the approximation to the exact derivative df/dx(x = x_i) at x = x_i. We will assume for the moment that all data is equally spaced, i.e. \Delta = x_{i+1} - x_i = \text{constant}. In Eq. (3.1),

f_{i \pm j} = f(x_i \pm j\Delta)

Performing the Taylor series expansion,

f(x_i \pm j\Delta) = f(x_i) \pm (j\Delta)f'(x_i) + \frac{1}{2!}(j\Delta)^2f''(x_i) \pm \frac{1}{3!}(j\Delta)^3f'''(x_i) + \frac{1}{4!}(j\Delta)^4f^{iv}(x_i) \pm \text{HOT}.

Substituting into the RHS of Eq. (3.1) gives
f'_i = \frac{1}{\Delta}(a_{-M} + a_{-M+1} + \ldots + a_{-1} + a_0 + a_1 + \ldots + a_{M-1} + a_M)f(x_i)
     + \frac{1}{\Delta}(a_{-M}(-M) + \ldots + a_{-1}(-1) + a_1 + \ldots + a_MM)\,\Delta\frac{df}{dx}(x_i)
     + \frac{1}{\Delta}\left( a_{-M}\frac{(-M)^2}{2!} + \ldots + a_{-1}\frac{(-1)^2}{2!} + a_1\frac{1^2}{2!} + \ldots + a_M\frac{M^2}{2!} \right)\Delta^2\frac{d^2f}{dx^2}(x_i)
     + \frac{1}{\Delta}\left( a_{-M}\frac{(-M)^3}{3!} + \ldots + a_{-1}\frac{(-1)^3}{3!} + a_1\frac{1^3}{3!} + \ldots + a_M\frac{M^3}{3!} \right)\Delta^3\frac{d^3f}{dx^3}(x_i)
     + O(\Delta^4)

So if we want f'_i to approximate df/dx(x_i) as closely as possible, we would need to satisfy the following equations

a_{-M} + a_{-M+1} + \ldots + a_{-1} + a_0 + a_1 + \ldots + a_{M-1} + a_M = 0
a_{-M}(-M) + \ldots + a_{-1}(-1) + a_1 + \ldots + a_MM = 1
a_{-M}\frac{(-M)^2}{2!} + \ldots + a_{-1}\frac{(-1)^2}{2!} + a_1\frac{1^2}{2!} + \ldots + a_M\frac{M^2}{2!} = 0
a_{-M}\frac{(-M)^3}{3!} + \ldots + a_{-1}\frac{(-1)^3}{3!} + a_1\frac{1^3}{3!} + \ldots + a_M\frac{M^3}{3!} = 0
\vdots
The more equations we choose to satisfy, the higher the accuracy of the approximation. For example, if we only satisfy the first two equations, then the accuracy of the approximation is O(∆). If we satisfy the first three equations, then the approximation is O(∆2 ).
3.1.2 Some popular differencing schemes
Let's now see what happens when we choose M = 1. The finite differencing can be written as

f'_i = \frac{1}{\Delta}(a_{-1}f_{i-1} + a_0f_i + a_1f_{i+1})

The equations that we need to satisfy in order for f'_i to approximate df/dx(x_i) can be written as

a_{-1} + a_0 + a_1 = 0 \qquad (3.2)
-a_{-1} + a_1 = 1 \qquad (3.3)
\frac{1}{2!}a_{-1} + \frac{1}{2!}a_1 = 0 \qquad (3.4)
-\frac{1}{3!}a_{-1} + \frac{1}{3!}a_1 = 0 \qquad (3.5)
We have 3 unknowns, a_{-1}, a_0 and a_1, thus we can satisfy at most 3 equations. If we only choose to satisfy the first two equations, then we will have a numerical scheme that is first order accurate, and we will also have one free variable. Let's set a_{-1} = 0; then one possible solution would be
a_{-1} = 0
a_0 = -1
a_1 = 1

With this set of coefficients, the derivative of f can be approximated as

f'_i = \frac{1}{\Delta}(f_{i+1} - f_i) \qquad (3.6)
This is called the 1st order forward difference scheme. If we choose to set a_1 = 0, then yet another possible solution would be
a_1 = 0
a_0 = 1
a_{-1} = -1

With this set of coefficients, the derivative of f can be approximated as

f'_i = \frac{1}{\Delta}(f_i - f_{i-1}) \qquad (3.7)

This is called the 1st order backward difference scheme. If we now choose to satisfy Eqs. (3.2) to (3.4), then we will not have a free parameter. The formula we get will be second order accurate, with
a_{-1} = -\frac{1}{2}
a_0 = 0
a_1 = \frac{1}{2}

and the corresponding approximation for f',

f'_i = \frac{1}{2\Delta}(f_{i+1} - f_{i-1}). \qquad (3.8)

This scheme is called the 2nd order centred difference scheme.
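The pattern generalizes: for any stencil, the order conditions \sum_j a_j j^m/m! = \delta_{m1} form a small linear system in the a_j. The sketch below (the helper is mine, not from the notes) solves that system in exact rational arithmetic and recovers the centred scheme just derived, as well as the stencils of the two exercises that follow.

```python
from fractions import Fraction
from math import factorial

def fd_weights(offsets, deriv=1):
    """Solve sum_j a_j * j^m / m! = 1 if m == deriv else 0, for m = 0..n-1.
    Returns the stencil weights a_j (to be divided by Delta**deriv)."""
    n = len(offsets)
    # augmented matrix: one order condition per row, exact rationals throughout
    A = [[Fraction(j) ** m / factorial(m) for j in offsets]
         + [Fraction(1 if m == deriv else 0)] for m in range(n)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

central2 = fd_weights([-1, 0, 1])          # expect [-1/2, 0, 1/2], Eq. (3.8)
onesided2 = fd_weights([0, 1, 2])          # expect [-3/2, 2, -1/2]
central4 = fd_weights([-2, -1, 0, 1, 2])   # expect [1/12, -2/3, 0, 2/3, -1/12]
```

Using exact fractions avoids the round-off that plagues floating-point solves of these ill-conditioned Vandermonde-like systems.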
Exercise 3.1: A five-point stencil finite difference formula can be obtained by setting M = 2 in Eq. (3.1) to obtain

f'_i = \frac{1}{\Delta}(a_{-2}f_{i-2} + a_{-1}f_{i-1} + a_0f_i + a_1f_{i+1} + a_2f_{i+2})

Show that in order to obtain a fourth order accurate finite difference formula, one must satisfy the following 5 equations

a_{-2} + a_{-1} + a_0 + a_1 + a_2 = 0
-2a_{-2} - a_{-1} + a_1 + 2a_2 = 1
2a_{-2} + \frac{1}{2}a_{-1} + \frac{1}{2}a_1 + 2a_2 = 0
-\frac{4}{3}a_{-2} - \frac{1}{6}a_{-1} + \frac{1}{6}a_1 + \frac{4}{3}a_2 = 0
\frac{2}{3}a_{-2} + \frac{1}{24}a_{-1} + \frac{1}{24}a_1 + \frac{2}{3}a_2 = 0

Solve the 5 equations above and show that

a_0 = 0
a_1 = -a_{-1} = \frac{2}{3}
a_2 = -a_{-2} = -\frac{1}{12}
Exercise 3.2: A three-point stencil, one-sided finite difference formula can be written as

f'_i = \frac{1}{\Delta}(a_0f_i + a_1f_{i+1} + a_2f_{i+2})

Show that in order to obtain a second order accurate finite difference formula, one must satisfy the following 3 equations

a_0 + a_1 + a_2 = 0
a_1 + 2a_2 = 1
\frac{1}{2}a_1 + 2a_2 = 0

Solve the 3 equations above and show that

a_0 = -\frac{3}{2}
a_1 = 2
a_2 = -\frac{1}{2}

3.1.3 Higher order derivatives
Generally, the finite difference formula for the second derivative, f''_i \approx d^2f/dx^2(x_i), is expressed as

f''_i = \frac{1}{\Delta^2}\left[ \sum_{j=1}^{M} a_{-j}f_{i-j} + a_0f_i + \sum_{j=1}^{M} a_jf_{i+j} \right]. \qquad (3.9)
Expanding the right hand side in terms of Taylor series gives
a_{-M} + a_{-M+1} + \ldots + a_{-1} + a_0 + a_1 + \ldots + a_{M-1} + a_M = 0
a_{-M}(-M) + \ldots + a_{-1}(-1) + a_1 + \ldots + a_MM = 0
a_{-M}\frac{(-M)^2}{2!} + \ldots + a_{-1}\frac{(-1)^2}{2!} + a_1\frac{1^2}{2!} + \ldots + a_M\frac{M^2}{2!} = 1
a_{-M}\frac{(-M)^3}{3!} + \ldots + a_{-1}\frac{(-1)^3}{3!} + a_1\frac{1^3}{3!} + \ldots + a_M\frac{M^3}{3!} = 0
\vdots

As in the case of the first derivative, the more equations you satisfy, the more accurate your finite difference approximation will be.
In order to obtain a difference formula for the second derivative, consider a situation where M = 1. The equations that we need to satisfy are

a_{-1} + a_0 + a_1 = 0
-a_{-1} + a_1 = 0
\frac{1}{2!}a_{-1} + \frac{1}{2!}a_1 = 1

Solving the above equations will give you

a_{-1} = a_1 = 1
a_0 = -2

Thus a finite difference formula for the second derivative is given by

f''_i = \frac{1}{\Delta^2}(f_{i-1} - 2f_i + f_{i+1}) \qquad (3.10)

In order to find finite difference formulas for higher order derivatives of order n, one would need to try the general formula

f^{(n)}_i = \frac{1}{\Delta^n}\left[ \sum_{j=1}^{M} a_{-j}f_{i-j} + a_0f_i + \sum_{j=1}^{M} a_jf_{i+j} \right]. \qquad (3.11)

3.1.4 Summary of finite difference formula
At the interior points, it is common to use central difference schemes, because for a given stencil the central difference scheme always gives a higher order of accuracy. Some commonly used central difference schemes:

Central difference scheme | Error
f'_i = \frac{1}{2\Delta}(-f_{i-1} + f_{i+1}) | O(\Delta^2)
f'_i = \frac{1}{12\Delta}(f_{i-2} - 8f_{i-1} + 8f_{i+1} - f_{i+2}) | O(\Delta^4)

At the end points, it is common to use one-sided difference schemes. The forward difference schemes are
Forward difference scheme | Error
f'_i = \frac{1}{\Delta}(-f_i + f_{i+1}) | O(\Delta)
f'_i = \frac{1}{2\Delta}(-3f_i + 4f_{i+1} - f_{i+2}) | O(\Delta^2)
f'_i = \frac{1}{6\Delta}(-11f_i + 18f_{i+1} - 9f_{i+2} + 2f_{i+3}) | O(\Delta^3)

One can also come up with a partially one-sided scheme, for example

f'_i = \frac{1}{6\Delta}(-2f_{i-1} - 3f_i + 6f_{i+1} - f_{i+2}) \qquad (3.12)
The backward difference schemes are

Backward difference scheme | Error
f'_i = \frac{1}{\Delta}(f_i - f_{i-1}) | O(\Delta)
f'_i = \frac{1}{2\Delta}(3f_i - 4f_{i-1} + f_{i-2}) | O(\Delta^2)
f'_i = \frac{1}{6\Delta}(-2f_{i-3} + 9f_{i-2} - 18f_{i-1} + 11f_i) | O(\Delta^3)

One can also come up with a partially one-sided scheme, for example

f'_i = \frac{1}{6\Delta}(f_{i-2} - 6f_{i-1} + 3f_i + 2f_{i+1}) \qquad (3.13)
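The tabulated error orders can be confirmed empirically: halving \Delta should roughly halve the error of an O(\Delta) formula and quarter the error of an O(\Delta^2) formula. A short sketch, testing the first two forward formulas on f(x) = sin(x) at x = 1 (the test point and step sizes are my own choices):

```python
import math

def err_fwd1(d):
    """Error of f'_i = (-f_i + f_{i+1}) / d, an O(d) formula."""
    return abs((math.sin(1 + d) - math.sin(1)) / d - math.cos(1))

def err_fwd2(d):
    """Error of f'_i = (-3 f_i + 4 f_{i+1} - f_{i+2}) / (2 d), an O(d^2) formula."""
    return abs((-3 * math.sin(1) + 4 * math.sin(1 + d) - math.sin(1 + 2 * d)) / (2 * d)
               - math.cos(1))

r1 = err_fwd1(0.01) / err_fwd1(0.005)   # expect ratio near 2
r2 = err_fwd2(0.01) / err_fwd2(0.005)   # expect ratio near 4
```

The same halving test applied to any formula in the tables above reveals its order of accuracy without any Taylor-series algebra.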
3.2 Centred Difference Schemes
As you have seen in the exercises above, the most accurate schemes arise when the coefficients of the finite difference scheme (the a_j's and a_{-j}'s) are antisymmetric (i.e. a_{-j} = -a_j). These finite difference schemes are called centred difference schemes, and one would start with the following formula

\frac{df}{dx}(x_i) \approx f'_i = \frac{1}{\Delta}\sum_{j=1}^{M} a_j(f_{i+j} - f_{i-j}) \qquad (3.14)

where f_{i+j} is the numerical approximation to f(x_i + j\Delta). It is known that one can expand f(x_i \pm m\Delta) as a Taylor series
f(x_i + m\Delta) = f(x_i) + \frac{1}{1!}f'(x_i)(m\Delta) + \frac{1}{2!}f''(x_i)(m\Delta)^2 + \frac{1}{3!}f'''(x_i)(m\Delta)^3 + \frac{1}{4!}f^{iv}(x_i)(m\Delta)^4 + \frac{1}{5!}f^{v}(x_i)(m\Delta)^5 + \frac{1}{6!}f^{vi}(x_i)(m\Delta)^6 + \ldots

f(x_i - m\Delta) = f(x_i) - \frac{1}{1!}f'(x_i)(m\Delta) + \frac{1}{2!}f''(x_i)(m\Delta)^2 - \frac{1}{3!}f'''(x_i)(m\Delta)^3 + \frac{1}{4!}f^{iv}(x_i)(m\Delta)^4 - \frac{1}{5!}f^{v}(x_i)(m\Delta)^5 + \frac{1}{6!}f^{vi}(x_i)(m\Delta)^6 - \ldots

Substituting into Eq. (3.14) gives
\Delta\frac{df}{dx}(x_i) = \frac{2}{1!}f'(x_i)(1a_1 + 2a_2 + 3a_3 + \ldots + Ma_M)(\Delta)
+ \frac{2}{2!}f''(x_i)(0 + 0 + 0 + \ldots + 0)(\Delta)^2
+ \frac{2}{3!}f'''(x_i)(1a_1 + 8a_2 + 27a_3 + \ldots + M^3a_M)(\Delta)^3
+ \frac{2}{4!}f^{iv}(x_i)(0 + 0 + 0 + \ldots + 0)(\Delta)^4
+ \frac{2}{5!}f^{v}(x_i)(1a_1 + 32a_2 + 243a_3 + \ldots + M^5a_M)(\Delta)^5
+ \frac{2}{6!}f^{vi}(x_i)(0 + 0 + 0 + \ldots + 0)(\Delta)^6
+ \ldots

Thus, if we want a formula to calculate the derivative at x_i, then all we have to do is solve the following system of equations

2(1a_1 + 2a_2 + 3a_3 + \ldots + Ma_M) = 1
1a_1 + 8a_2 + 27a_3 + \ldots + M^3a_M = 0
1a_1 + 32a_2 + 243a_3 + \ldots + M^5a_M = 0
\vdots

The number of equations we need to use will depend on the number of coefficients in the formula. The higher the value of M, the more accurate the answer; the error will be of order (\Delta)^{2M}.
3.2.1 Example
Let's suppose that we want to express

\frac{df}{dx}(x_i) \approx \frac{1}{\Delta}a_1\left( f(x_i + \Delta) - f(x_i - \Delta) \right)

Taking the steps outlined above, we would require that

2a_1 = 1

thus

a_1 = \frac{1}{2}

So the formula to calculate the derivative would be

\frac{df}{dx}(x_i) \approx \frac{1}{2\Delta}\left( f(x_i + \Delta) - f(x_i - \Delta) \right)
3.3 Solving PDEs using finite difference schemes

Let's now consider using the finite difference technique to solve the convection equation

\frac{\partial f}{\partial t} = -U\frac{\partial f}{\partial x} \qquad (3.15)

with initial condition f(t = 0, x) = 0.5e^{-(\ln 2)(x/3)^2}. The domain is x \in [-20, 600]. You are also given that f(t, x = -20) = 0.0. Using analytical techniques, it is straightforward to show that the exact solution to this problem is

f(t, x) = 0.5e^{-(\ln 2)((x - Ut)/3)^2}
Let's first discretize the domain into N + 1 data points. The grid is the series of x values, x_0, x_1, \ldots, x_N, and the corresponding values of f are denoted f_0, f_1, \ldots, f_N. For this example, we will use 2nd order central spatial discretization at the interior nodes. So at the interior points,

\frac{df_1}{dt} = -U\frac{f_2 - f_0}{2\Delta}
\frac{df_2}{dt} = -U\frac{f_3 - f_1}{2\Delta}
\frac{df_3}{dt} = -U\frac{f_4 - f_2}{2\Delta}
\vdots
\frac{df_{N-1}}{dt} = -U\frac{f_N - f_{N-2}}{2\Delta}

Since the function f is only defined inside the domain, we do not have the value of the function at x = x_{-1} and x = x_{N+1}. Thus we cannot use the central difference formula at the end points, and it is common to use a lower order scheme there. For this example, we will use a 1st order one-sided scheme at the end points to approximate the spatial derivatives. Note that we only need to find the x derivative at x = x_N, since we have been given f_0 = 0 as the boundary condition to our problem:

\frac{df_N}{dt} = -U\frac{f_N - f_{N-1}}{\Delta}

The above set of N ordinary differential equations can be put into the form

\frac{d}{dt}\{x\} = [A]\{x\} + \{c\}.

For example, if N = 5, the above system will look like
\frac{d}{dt}\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \end{pmatrix} = -\frac{U}{\Delta}\begin{pmatrix} 0 & 1/2 & 0 & 0 & 0 \\ -1/2 & 0 & 1/2 & 0 & 0 \\ 0 & -1/2 & 0 & 1/2 & 0 \\ 0 & 0 & -1/2 & 0 & 1/2 \\ 0 & 0 & 0 & -1 & 1 \end{pmatrix}\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \end{pmatrix} - \frac{U}{\Delta}\begin{pmatrix} -f_0/2 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
From previous lectures, we have already seen that the eigenvalues of [A] will determine the stability of the time-integration scheme. The eigenvalues for the above example are

\lambda_{1,2} = -\frac{U}{\Delta}(0.0712 \pm 0.8331I)
\lambda_{3,4} = -\frac{U}{\Delta}(0.2500 \pm 0.4330I)
\lambda_5 = -\frac{U}{\Delta}(0.3576)

where I is the imaginary unit. Remember that from our stability analysis, the stability of a system is determined by plotting \lambda h, where h is the temporal step size. Thus the stability of our system is dependent on the parameter

\text{CFL} = \frac{Uh}{\Delta} \qquad (3.16)

This parameter is called the CFL number. For stability, we would like the CFL number to be as small as possible. Note that the error of our spatial discretization is dependent on \Delta. However, if we make \Delta small, then the CFL number becomes big and the system becomes unstable! The stability plots for N = 5 are shown in Fig. 3.1. The eigenvalues are plotted with CFL = 1.0 on both the Euler and Runge-Kutta stability diagrams. It is easily seen that the Euler scheme is unstable and the 4th order Runge-Kutta scheme would be stable.
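These eigenvalues are straightforward to reproduce (a sketch assuming NumPy is available; B below is the bracketed matrix of the N = 5 system, so \lambda = -(U/\Delta) times the eigenvalues of B):

```python
import numpy as np

# bracketed matrix of the N = 5 system; lambda = -(U/Delta) * eig(B)
B = np.array([[ 0.0,  0.5,  0.0,  0.0, 0.0],
              [-0.5,  0.0,  0.5,  0.0, 0.0],
              [ 0.0, -0.5,  0.0,  0.5, 0.0],
              [ 0.0,  0.0, -0.5,  0.0, 0.5],
              [ 0.0,  0.0,  0.0, -1.0, 1.0]])
ev = np.linalg.eigvals(B)

# all eigenvalues of B lie in the right half plane, so every lambda has a
# negative real part; the single real eigenvalue is the 0.3576 quoted above
real_ev = ev[np.abs(ev.imag) < 1e-8].real
```

Scaling these values by -(U/\Delta)h and overlaying them on the Euler and Runge-Kutta stability regions reproduces Fig. 3.1.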
The solution of the convection equation with \Delta = 1, U = 1 at time t = 400 is shown in Fig. 3.2. The solutions computed for h = 1 and h = 0.001 are plotted. It is clear that there are numerical oscillations in the computed solution. These oscillations do not go away if we decrease h; with such a small value of h = 0.001, the error in the Runge-Kutta scheme is negligible. It is clear that the oscillations come from the propagation of the error in the spatial discretisation scheme. To confirm this statement, a simulation was carried out with an even larger value of
Figure 3.1: Convection equation stability plots. Euler scheme (left) and Runge-Kutta scheme (right).
Figure 3.2: Solution of the convection equation at time t = 400. (a) computed with h = 1.0 and (b) computed with h = 0.001. The exact solution is shown as a solid line and the computed solution is shown as a dashed line.
Figure 3.3: Numerical solution at t = 400 computed with ∆ = 0.1, h = 0.01. The numerical solution is plotted with dashed line and the exact solution is shown as a solid line. They lie on top of each other.
Figure 3.4: Exact solution (solid line) and numerical solution (\circ). (a) is computed with \Delta = 1.0 and (b) with \Delta = 0.5.
h = 0.01 and a small value of \Delta = 0.1. The result at t = 400 is shown in Fig. 3.3 together with the exact solution; one cannot distinguish between the two plots. If we use the 4th order central scheme at the interior points and Kennedy and Carpenter's one-sided scheme at the end points, with the classical 4th order Runge-Kutta time stepping scheme, to solve the convection equation given above with h = 0.1, the results are shown in Fig. 3.4. The wiggles present in Fig. 3.4(a) are due to dispersion error: waves with different wavenumbers travel at different speeds, and some get left behind. If we reduce \Delta, then we recover the exact solution.
3.4 Fourier Analysis of error
The above analysis only gives you an idea of how accurate the calculation of the derivative is with respect to the grid size; it does not give all the information regarding the characteristics of the numerical scheme. Many physical processes exhibit wave-like motions, hence a Fourier analysis provides additional information about the resolution characteristics of a particular numerical scheme. Consider a domain of length L, and assume that the dependent variable f(x) is periodic. Thus f(x) can be represented as a Fourier series

f(x) = \sum_{l=-N/2+1}^{N/2} c_l e^{Ik_lx}

where
k_l = k_0l = \frac{2\pi}{L}l, \qquad k_l\Delta \in [0, \pi].

L is the length of the domain of interest. Direct differentiation of f(x) gives

\frac{df}{dx}(x) = \sum_{l=-N/2+1}^{N/2} c_l Ik_l e^{Ik_lx} \qquad (3.17)
Note that the Fourier coefficients of the derivative of a periodic function are just the Fourier coefficients of the function multiplied by Ik_l. In order to analyse the error introduced by the discretisation scheme, note that

f(x + j\Delta) = \sum_{l=-N/2+1}^{N/2} c_l e^{Ik_l(x + j\Delta)}
f(x - j\Delta) = \sum_{l=-N/2+1}^{N/2} c_l e^{Ik_l(x - j\Delta)}
Substituting into Eq. (3.1) gives

f'_n = \frac{1}{\Delta}\sum_{l=-N/2+1}^{N/2} c_l\left[ \sum_{j=1}^{M} a_{-j}e^{-Ik_lj\Delta} + a_0 + \sum_{j=1}^{M} a_je^{Ik_lj\Delta} \right]e^{Ik_lx_n}

which can be written as

f'_n = \sum_{l=-N/2+1}^{N/2} c_l Ik_l^* e^{Ik_lx_n} \qquad (3.18)
We have defined a modified wavenumber, k_l^*, such that

I\Delta k_l^* = \sum_{j=1}^{M} a_{-j}e^{-Ik_lj\Delta} + a_0 + \sum_{j=1}^{M} a_je^{Ik_lj\Delta}
             = \sum_{j=1}^{M}(a_j + a_{-j})\cos(k_lj\Delta) + a_0 + I\sum_{j=1}^{M}(a_j - a_{-j})\sin(k_lj\Delta)

i.e.

\Delta k_l^* = \sum_{j=1}^{M}(a_j - a_{-j})\sin(k_lj\Delta) - I\left[ \sum_{j=1}^{M}(a_j + a_{-j})\cos(k_lj\Delta) + a_0 \right]

If we split k_l^* into its real and imaginary parts, \alpha_l^* and -\beta_l^* respectively, then

\Delta\alpha_l^* = \sum_{j=1}^{M}(a_j - a_{-j})\sin(k_lj\Delta)
\Delta\beta_l^* = \sum_{j=1}^{M}(a_j + a_{-j})\cos(k_lj\Delta) + a_0
Example 3.1: Recall that the modified wavenumber can be written as k_l^* = \alpha_l^* - I\beta_l^*. For a second order scheme, one will have a_{-1} = -1/2, a_0 = 0 and a_1 = 1/2, so

\alpha_l^*\Delta = \sin(k_l\Delta)
\beta_l^*\Delta = 0

For a fourth order accurate scheme, a_1 = -a_{-1} = 8/12 and a_2 = -a_{-2} = -1/12. Thus the components of the modified wavenumber are given by

\alpha_l^*\Delta = \frac{4}{3}\sin(k_l\Delta) - \frac{1}{6}\sin(2k_l\Delta)
\beta_l^*\Delta = 0
Applying the finite-difference scheme to solve the convection equation

\frac{\partial f}{\partial t} = -U\frac{\partial f}{\partial x}

and substituting Eq. (3.18) into the right hand side of the above equation gives

\frac{\partial}{\partial t}\sum_{l=-N/2+1}^{N/2} c_l e^{Ik_lx_n} = -U\sum_{l=-N/2+1}^{N/2} c_l Ik_l^* e^{Ik_lx_n}

Comparing the coefficient of e^{Ik_lx_n} on both sides of the equation gives

\frac{dc_l}{dt} = -UIk_l^*c_l

If you solve this equation using analytical techniques,

c_l = c_l(0)\,e^{-IU\alpha_l^*t}\,e^{-U\beta_l^*t} \qquad (3.19)
If you now consider the convection diffusion equation

\frac{\partial f}{\partial t} = -U\frac{\partial f}{\partial x} + \mu\frac{\partial^2 f}{\partial x^2}

substituting the Fourier series for f gives

\frac{\partial}{\partial t}\sum_{l=-N/2+1}^{N/2} c_l e^{Ik_lx_n} = -U\sum_{l=-N/2+1}^{N/2} c_l Ik_l e^{Ik_lx_n} - \mu\sum_{l=-N/2+1}^{N/2} c_l k_l^2 e^{Ik_lx_n}
= -\sum_{l=-N/2+1}^{N/2} c_l\left( IUk_l + \mu k_l^2 \right)e^{Ik_lx_n}

Comparing the coefficient of e^{Ik_lx_n} on both sides of the equation gives

\frac{dc_l}{dt} = -\left( UIk_l + \mu k_l^2 \right)c_l

Solving the above equation using analytical techniques will give you

c_l = c_l(0)\,e^{-IUk_lt}\,e^{-\mu k_l^2t} \qquad (3.20)
Comparing Eq. (3.19) with Eq. (3.20) gives the numerical dissipation

\mu^* = U\frac{\beta_l^*}{k_l^2} \qquad (3.21)

One can also define a numerical phase speed

U^* = U\frac{\alpha_l^*}{k_l} \qquad (3.22)
Example 3.2: Let's look at the case where M = 1. The finite difference scheme can be written as

f'_i = \frac{1}{\Delta}(a_{-1}f_{i-1} + a_0f_i + a_1f_{i+1})

For the central difference scheme, which is 2nd order accurate,

a_{-1} = -\frac{1}{2}, \quad a_0 = 0, \quad a_1 = \frac{1}{2}

For these values of the a_i's, the values of \Delta\alpha_l^* and \Delta\beta_l^* are

\Delta\alpha_l^* = \sin(k_l\Delta)
\Delta\beta_l^* = 0.0

For the forward difference scheme, which is 1st order accurate,

a_{-1} = 0, \quad a_0 = -1, \quad a_1 = 1

thus

\Delta\alpha_l^* = \sin(k_l\Delta)
\Delta\beta_l^* = \cos(k_l\Delta) - 1 \qquad (3.23)

For the backward difference scheme, which is 1st order accurate,

a_{-1} = -1, \quad a_0 = 1, \quad a_1 = 0

thus

\Delta\alpha_l^* = \sin(k_l\Delta)
\Delta\beta_l^* = 1 - \cos(k_l\Delta) \qquad (3.24)
Plots of ∆αl∗ and ∆β l∗ for the central, forward and backward differencing schemes are shown in Fig. 3.5. Note that in Fig. 3.5 (b), it is shown that the ∆β l∗ for the forward differencing scheme is always negative. This indicates that if you solve ∂f ∂f = U , ∂t ∂x using the forward differencing scheme, then it will blow up. However, if you solve the above equation using the backward differencing scheme, then ∆β l∗ is always positive and looking at Eqs. (3.21) and (3.19) would suggest that the solution will decay with time. Figure 3.6 shows the numerical solution together with the exact solution. It is clear that the magnitude of the numerical solution is less than than the exact solution which indicates a diffusion in the numerical scheme.
−
In a domain with L = 2π, k_l = l. If at time t = 0, f(x, t = 0) = sin(2x), then the only nonzero value of k_l is 2. The numerical phase speed for this wave is

U* = U sin(2∆)/(2∆)

where ∆ = L/(N − 1) and N is the number of grid points. The values of U* corresponding to N are shown in the table below:

N    U*/U
11   0.756827
51   0.989506

Since U*/U is always less than 1, the numerical wave speed will always be slower than the true wave speed. When we use more grid points, U*/U will approach 1 and the dispersion error will decrease to zero.
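The tabulated values can be reproduced directly from the phase-speed formula above (the function name below is an illustrative assumption):

```python
import numpy as np

# Phase-speed ratio U*/U = sin(k Δ)/(k Δ) for the 2nd order central scheme,
# with Δ = L/(N−1), L = 2π and k = 2 as in the example above.
def phase_speed_ratio(N, L=2*np.pi, k=2):
    dx = L / (N - 1)
    return np.sin(k * dx) / (k * dx)

print(round(phase_speed_ratio(11), 6))   # 0.756827
print(round(phase_speed_ratio(51), 6))   # 0.989506
```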
3.4.1
Fourier analysis of central differencing scheme
Substituting the Fourier series into the central differencing formula gives

df/dx(x_i) = (1/∆) Σ_{j=1}^{M} a_j Σ_{l=−N/2+1}^{N/2} c_l ( e^{Ik_l(x_i + j∆)} − e^{Ik_l(x_i − j∆)} )
           = (1/∆) Σ_{j=1}^{M} a_j Σ_{l=−N/2+1}^{N/2} c_l e^{Ik_l x_i} ( e^{Ik_l j∆} − e^{−Ik_l j∆} )
           = (1/∆) Σ_{j=1}^{M} a_j Σ_{l=−N/2+1}^{N/2} c_l e^{Ik_l x_i} 2I sin(k_l j∆)
           = Σ_{l=−N/2+1}^{N/2} c_l I [ (1/∆) Σ_{j=1}^{M} 2a_j sin(k_l j∆) ] e^{Ik_l x_i}   (3.25)

By comparing Eq. (3.17) with Eq. (3.25), we can see that the coefficients of the Fourier expansion of the derivative have now been modified. It should be clear that
Figure 3.5: (a) Plot of ∆α*_l for the central (2nd order), forward and backward (1st order) differencing schemes. (b) Plot of ∆β*_l for the forward differencing scheme. (c) Plot of ∆β*_l for the backward differencing scheme.
Figure 3.6: Simulation carried out using the upwind differencing scheme at time t = 2 with U = 1, N = 101 and the initial condition f(x, t = 0) = sin(2x), showing the exact and the numerical solutions.
k*_l = (1/∆) Σ_{j=1}^{M} 2a_j sin(k_l j∆)

is effectively the wavenumber of the finite difference scheme. k*_l∆ is a function of k_l∆. To understand how well the finite difference scheme approximates the exact derivative, it is informative to plot k*_l∆ as a function of k_l∆.
Exercise 3.3: Show that the real part of the non-dimensional effective wavenumber for both the 1st order forward scheme and the 2nd order central difference scheme is given by

α*_l ∆ = sin(k_l ∆).   (3.26)

For the 1st order backward difference scheme, show that the imaginary part of the non-dimensional effective wavenumber is

β*_l ∆ = 1 − cos(k_l ∆).   (3.27)

Also show that for the 4th order central difference scheme, the non-dimensional effective wavenumber is

k*_l ∆ = (4/3) sin(k_l ∆) − (1/6) sin(2k_l ∆)   (3.28)

3.5 Stability analysis using the modified wavenumber
In general, when a finite difference approximation is used to solve a linear partial differential equation, the equation of motion can be written as a set of ordinary differential equations

d{f}/dt = [A]{f}.   (3.29)

Strictly speaking, the stability of the above system can be determined by finding the eigenvalues of [A]. The linear operator [A] usually contains information regarding the boundary nodes. However, it is not usually convenient to find the eigenvalues of [A]. From past experience, it is more likely for the interior nodes to go unstable, so there is really no need to worry about what is happening close to the boundary points. Thus we will confine the stability analysis to a periodic solution, thereby confining ourselves to the operator at the interior points. Consider the periodic function
f_j = Σ_{l=−N/2+1}^{N/2} c_l e^{Ik_l x_j}

The exact derivative can be written as

f'_j = Σ_{l=−N/2+1}^{N/2} c_l Ik_l e^{Ik_l x_j}

The finite difference approximation to f'_j can be written as

f'_j = (1/2∆)(f_{j+1} − f_{j−1})

If we express f_{j−1} and f_{j+1} in terms of a Fourier series, the above equation can be written as

f'_j = (1/2∆) ( Σ_l c_l e^{Ik_l(x_j + ∆)} − Σ_l c_l e^{Ik_l(x_j − ∆)} )
     = Σ_l c_l (1/2∆)( e^{Ik_l ∆} − e^{−Ik_l ∆} ) e^{Ik_l x_j}
     = Σ_l c_l I (1/∆) sin(k_l ∆) e^{Ik_l x_j}
     = Σ_l c_l Ik*_l e^{Ik_l x_j}

where

k*_l = (1/∆) sin(k_l ∆).   (3.30)

If we now use the 2nd order finite difference scheme to solve the convection equation

∂f/∂t = −U ∂f/∂x

discretising using the finite difference scheme gives

Σ_l (dc_l/dt) e^{Ik_l x_j} = −U Σ_l c_l Ik*_l e^{Ik_l x_j}

Comparing all the terms containing e^{Ik_l x_j} gives

dc_l/dt = −U c_l Ik*_l
dc_l/dt = −U c_l I (1/∆) sin(k_l ∆)
Referring back to the model problem considered when analysing the stability of ordinary differential equations (dy/dt = λy), the stability is determined by the maximum magnitude of sin(k_l ∆). Since 0 ≤ k_l ∆ ≤ π, the maximum value of the right hand side of the above equation is λ_max = IU/∆. The stability is determined by λ_max h = IUh/∆. One can see that λ is purely imaginary, so if we use the explicit Euler time stepping scheme, the system will always be unstable.
Exercise 3.4: Using a similar analysis to the above, show that

dc_l/dt = −U c_l I (1/∆) [ (4/3) sin(k_l ∆) − (1/6) sin(2k_l ∆) ]   (3.31)

if you use a higher order difference formula, say the 4th order central scheme

f'_j = (1/12∆)(f_{j−2} − 8f_{j−1} + 8f_{j+1} − f_{j+2})

to approximate the derivative (see Section 3.1.4).
Equation (3.31) shows that the stability of your time stepping scheme is dependent on

I U (1/∆) [ (4/3) sin(k_l ∆) − (1/6) sin(2k_l ∆) ].

The maximum value of (4/3) sin(k_l ∆) − (1/6) sin(2k_l ∆) occurs when k_l ∆ = 1.7974, and (4/3) sin(1.7974) − (1/6) sin(2 × 1.7974) = 1.3722. Thus the value of λ_max h for the 4th order central scheme is

λ_max h = 1.3722 I U h/∆.

In the discussion above, it was shown that the maximum value of λ_max h for the 2nd order central scheme is

λ_max h = I U h/∆.

Note that for both cases the value of λ_max h is purely imaginary. In Example 1.3 it was shown that if λ_max h is purely imaginary, then in order to keep the 4th order Runge-Kutta scheme stable,

|λ_max h| < 2.8284.

Thus if we discretise the spatial domain using the 2nd order central scheme, then
h < 2.8284 ∆/U.

Whereas if you discretise the spatial domain using the 4th order central scheme, then

h < (2.8284/1.3722) ∆/U = 2.0612 ∆/U.

So we have to take a slightly smaller time step when using a higher order spatial discretisation scheme. In general, this is usually the case.
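The two time-step limits can be verified numerically by scanning the 4th order modified wavenumber over 0 ≤ k∆ ≤ π (U = 1 and ∆ = 1 are assumed purely for illustration):

```python
import numpy as np

U, dx = 1.0, 1.0                       # illustrative values, not from the text
kdx = np.linspace(0.0, np.pi, 20001)

# 4th order central modified wavenumber (4/3)sin(kΔ) − (1/6)sin(2kΔ)
w4 = 4/3*np.sin(kdx) - 1/6*np.sin(2*kdx)
kmax = kdx[np.argmax(w4)]
print(round(kmax, 3), round(w4.max(), 4))    # maximum near kΔ ≈ 1.80, value ≈ 1.3722

# RK4 requires |λ_max h| < 2.8284 on the imaginary axis
print(round(2.8284 * dx / U, 4))             # 2nd order: h < 2.8284 Δ/U
print(round(2.8284 / w4.max() * dx / U, 4))  # 4th order: h < 2.0612 Δ/U
```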
3.6 Dispersion-Relation-Preserving Scheme

The Dispersion-Relation-Preserving (DRP) scheme is a numerical scheme introduced by [4] to minimize the dispersion error in the numerical scheme. For a 7 point stencil with 4th order accuracy, the central differencing scheme requires that

2a_1 + 4a_2 + 6a_3 = 1
(1/3)a_1 + (8/3)a_2 + 9a_3 = 0

The above equations can be solved to obtain a_2 and a_3 in terms of a_1:

a_2 = −(4/5)a_1 + 9/20   (3.32)
a_3 = (1/5)a_1 − 2/15   (3.33)
In order to ensure that the derivatives have better dispersion characteristics, a_1 will be chosen so as to minimize the integrated error E, defined as

E = ∫_{−π/2}^{π/2} | iκ − i2a_1 sin(κ) − i2a_2 sin(2κ) − i2a_3 sin(3κ) |² dκ
  = ∫_{−π/2}^{π/2} | iκ − i2a_1 sin(κ) − i2(−(4/5)a_1 + 9/20) sin(2κ) − i2((1/5)a_1 − 2/15) sin(3κ) |² dκ   (3.34)
Note that E is only a function of a_1. In order to minimize E, it is required that

dE/da_1 = 0   (3.35)

Since E is quadratic in a_1, Eq. (3.35) is linear in a_1 and can be solved to give

a_1 = 0.799266426974

Putting this value of a_1 into Eqs. (3.32) and (3.33) will give you

a_2 = −0.18941314
a_3 = 0.02651995
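The minimisation can be reproduced numerically. Since the integrand is (P(κ) − a_1 Q(κ))² once a_2 and a_3 are eliminated, the least-squares minimiser is a_1 = ∫PQ dκ / ∫Q² dκ; the sketch below evaluates the integrals by the trapezoidal rule:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, avoiding NumPy-version-specific names
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

kappa = np.linspace(-np.pi/2, np.pi/2, 200001)
# With a2 = -(4/5)a1 + 9/20 and a3 = (1/5)a1 - 2/15 substituted, the error
# integrand of Eq. (3.34) is (P - a1*Q)^2 with:
P = kappa - 0.9*np.sin(2*kappa) + (4/15)*np.sin(3*kappa)
Q = 2*np.sin(kappa) - 1.6*np.sin(2*kappa) + 0.4*np.sin(3*kappa)

a1 = trapezoid(P*Q, kappa) / trapezoid(Q*Q, kappa)
a2 = -0.8*a1 + 0.45
a3 = 0.2*a1 - 2/15
print(a1, a2, a3)   # ≈ 0.799266, -0.189413, 0.026520
```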
3.7 General Finite Difference Schemes For The Second Derivative

The general explicit formula to calculate the second derivative of a function can be written as

f''_i = (1/∆²) ( Σ_{j=1}^{M} b_{−j} f_{i−j} + b_0 f_i + Σ_{j=1}^{M} b_j f_{i+j} )   (3.36)

where f''_i is the approximation to the exact derivative d²f/dx²(x = x_i) at x = x_i. Let's do a Taylor series expansion on Eq. (3.36):
∆² f''_i = (b_{−M} + b_{−M+1} + ... + b_{−1} + b_0 + b_1 + ... + b_{M−1} + b_M) f(x_i)
         + (b_{−M}(−M) + ... + b_{−1}(−1) + b_1 + ... + b_M M) ∆ df/dx(x_i)
         + (b_{−M}(−M)²/2! + ... + b_{−1}(−1)²/2! + b_1 1²/2! + ... + b_M M²/2!) ∆² d²f/dx²(x_i)
         + (b_{−M}(−M)³/3! + ... + b_{−1}(−1)³/3! + b_1 1³/3! + ... + b_M M³/3!) ∆³ d³f/dx³(x_i)
         + O(∆⁴)

So if we want f''_i to approximate d²f/dx²(x_i) as closely as possible, we would need to satisfy the following equations:

b_{−M} + b_{−M+1} + ... + b_{−1} + b_0 + b_1 + ... + b_{M−1} + b_M = 0
b_{−M}(−M) + ... + b_{−1}(−1) + b_1 + ... + b_M M = 0
b_{−M}(−M)²/2! + ... + b_{−1}(−1)²/2! + b_1 1²/2! + ... + b_M M²/2! − 1 = 0
b_{−M}(−M)³/3! + ... + b_{−1}(−1)³/3! + b_1 1³/3! + ... + b_M M³/3! = 0
⋮

The more equations we choose to satisfy, the higher the accuracy of the approximation. For example, if we only satisfy the first three equations, then the accuracy of the approximation is O(∆). If we also satisfy the fourth equation, then the approximation is O(∆²).
3.7.1
Some popular differencing schemes
Let's now see what happens when we choose M = 2. The finite differencing scheme for the second derivative can be written as

f''_i = (1/∆²)(b_{−2} f_{i−2} + b_{−1} f_{i−1} + b_0 f_i + b_1 f_{i+1} + b_2 f_{i+2})

The equations that we need to satisfy in order for f''_i to approximate d²f/dx²(x_i) can be written as

b_{−2} + b_{−1} + b_0 + b_1 + b_2 = 0   (3.37)
−2b_{−2} − b_{−1} + b_1 + 2b_2 = 0   (3.38)
2b_{−2} + (1/2)b_{−1} + (1/2)b_1 + 2b_2 − 1 = 0   (3.39)
−(4/3)b_{−2} − (1/6)b_{−1} + (1/6)b_1 + (4/3)b_2 = 0   (3.40)
(2/3)b_{−2} + (1/24)b_{−1} + (1/24)b_1 + (2/3)b_2 = 0   (3.41)

We have 5 unknowns: b_{−2}, b_{−1}, b_0, b_1 and b_2. If we only choose to satisfy the first three equations, then we will have a numerical scheme that is first order accurate. We will also have two free variables. Let's set b_{−2} = 0 and b_{−1} = 0; then one possible set of solutions would be
b_{−2} = 0
b_{−1} = 0
b_0 = 1
b_1 = −2
b_2 = 1

With this set of coefficients, the second derivative of f can be approximated as

f''_i = (1/∆²)(f_i − 2f_{i+1} + f_{i+2})   (3.42)

This is called the 1st order forward difference scheme for the second derivative. If we choose to set b_2 = 0 and b_1 = 0, then yet another possible solution would be

b_{−2} = 1
b_{−1} = −2
b_0 = 1
b_1 = 0
b_2 = 0

With this set of coefficients, the second derivative of f can be approximated as

f''_i = (1/∆²)(f_i − 2f_{i−1} + f_{i−2})   (3.43)

This is called the 1st order backward difference scheme for the second derivative. If we now choose to satisfy all of Eqs. (3.37) to (3.41), then we will not have a free parameter. The formula we get will be fourth order accurate, with

b_{−2} = −1/12
b_{−1} = 4/3
b_0 = −5/2
b_1 = 4/3
b_2 = −1/12
with the corresponding formula for f''_i being

f''_i = (1/12∆²)(−f_{i−2} + 16f_{i−1} − 30f_i + 16f_{i+1} − f_{i+2})   (3.44)

This scheme is called the 4th order centered difference scheme for the second derivative.
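The claimed orders of accuracy of Eqs. (3.42) and (3.44) can be checked by grid refinement on a smooth test function. The sketch below uses f(x) = sin(x) with exact second derivative −sin(x) at an arbitrary point x₀ = 1 (the test function and point are assumptions for illustration):

```python
import numpy as np

f, x0 = np.sin, 1.0
d2f_exact = -np.sin(x0)

def forward(d):   # Eq. (3.42): (f_i - 2 f_{i+1} + f_{i+2}) / Δ²
    return (f(x0) - 2*f(x0 + d) + f(x0 + 2*d)) / d**2

def central4(d):  # Eq. (3.44): (-f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2}) / (12 Δ²)
    return (-f(x0 - 2*d) + 16*f(x0 - d) - 30*f(x0) + 16*f(x0 + d) - f(x0 + 2*d)) / (12*d**2)

e1a = abs(forward(0.10) - d2f_exact)
e1b = abs(forward(0.05) - d2f_exact)
e4a = abs(central4(0.10) - d2f_exact)
e4b = abs(central4(0.05) - d2f_exact)
print(e1a / e1b)   # ≈ 2: halving Δ halves the error (1st order)
print(e4a / e4b)   # ≈ 16: halving Δ divides the error by ~16 (4th order)
```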
Exercise 3.5:

1. Show that the second order centered difference scheme for the second derivative is given by

f''_j = (1/∆²)(f_{j−1} − 2f_j + f_{j+1})   (3.45)

where ∆ is the spacing between the grid points.

2. Show also that the modified wavenumber for this operator is given by

∆² k*_l² = 2(1 − cos(k_l ∆))   (3.46)

where k_l is the wavenumber and k*_l is the modified wavenumber.

3. Using the result above, obtain the condition for stability if one were to use Eq. (3.45) along with the explicit Euler scheme to solve the diffusion equation

∂f/∂t = µ ∂²f/∂x².   (3.47)

3.8 Multidimensional problems

3.8.1 Steady problem
Let's now see how we can use differentiation schemes to solve Poisson's equation

k ( ∂²φ/∂x² + ∂²φ/∂y² ) = Q(x, y)   (3.48)

The discretised form of this equation is

φ_{i−1,j} + φ_{i+1,j} + φ_{i,j−1} + φ_{i,j+1} − 4φ_{i,j} = (∆²/k) Q(x_i, y_j)

If we assume Nx = 4 and Ny = 4 intervals in the x and y directions, the system of equations that you will be required to solve is
i = 1, j = 1 :  φ_{0,1} + φ_{2,1} + φ_{1,0} + φ_{1,2} − 4φ_{1,1} = (∆²/k) Q(x_1, y_1)
i = 2, j = 1 :  φ_{1,1} + φ_{3,1} + φ_{2,0} + φ_{2,2} − 4φ_{2,1} = (∆²/k) Q(x_2, y_1)
i = 3, j = 1 :  φ_{2,1} + φ_{4,1} + φ_{3,0} + φ_{3,2} − 4φ_{3,1} = (∆²/k) Q(x_3, y_1)
i = 1, j = 2 :  φ_{0,2} + φ_{2,2} + φ_{1,1} + φ_{1,3} − 4φ_{1,2} = (∆²/k) Q(x_1, y_2)
i = 2, j = 2 :  φ_{1,2} + φ_{3,2} + φ_{2,1} + φ_{2,3} − 4φ_{2,2} = (∆²/k) Q(x_2, y_2)
i = 3, j = 2 :  φ_{2,2} + φ_{4,2} + φ_{3,1} + φ_{3,3} − 4φ_{3,2} = (∆²/k) Q(x_3, y_2)
i = 1, j = 3 :  φ_{0,3} + φ_{2,3} + φ_{1,2} + φ_{1,4} − 4φ_{1,3} = (∆²/k) Q(x_1, y_3)
i = 2, j = 3 :  φ_{1,3} + φ_{3,3} + φ_{2,2} + φ_{2,4} − 4φ_{2,3} = (∆²/k) Q(x_2, y_3)
i = 3, j = 3 :  φ_{2,3} + φ_{4,3} + φ_{3,2} + φ_{3,4} − 4φ_{3,3} = (∆²/k) Q(x_3, y_3)
In matrix form, the above equations become

[ −4  1  0  1  0  0  0  0  0 ] [ φ_{1,1} ]   [ ∆²Q(x_1, y_1)/k − φ_{1,0} − φ_{0,1} ]
[  1 −4  1  0  1  0  0  0  0 ] [ φ_{2,1} ]   [ ∆²Q(x_2, y_1)/k − φ_{2,0} ]
[  0  1 −4  0  0  1  0  0  0 ] [ φ_{3,1} ]   [ ∆²Q(x_3, y_1)/k − φ_{3,0} − φ_{4,1} ]
[  1  0  0 −4  1  0  1  0  0 ] [ φ_{1,2} ]   [ ∆²Q(x_1, y_2)/k − φ_{0,2} ]
[  0  1  0  1 −4  1  0  1  0 ] [ φ_{2,2} ] = [ ∆²Q(x_2, y_2)/k ]
[  0  0  1  0  1 −4  0  0  1 ] [ φ_{3,2} ]   [ ∆²Q(x_3, y_2)/k − φ_{4,2} ]
[  0  0  0  1  0  0 −4  1  0 ] [ φ_{1,3} ]   [ ∆²Q(x_1, y_3)/k − φ_{1,4} − φ_{0,3} ]
[  0  0  0  0  1  0  1 −4  1 ] [ φ_{2,3} ]   [ ∆²Q(x_2, y_3)/k − φ_{2,4} ]
[  0  0  0  0  0  1  0  1 −4 ] [ φ_{3,3} ]   [ ∆²Q(x_3, y_3)/k − φ_{3,4} − φ_{4,3} ]

(3.49)

One should then be able to solve Eq. (3.49) using solvers for linear systems. Because there are so many zeros on the left hand side of Eq. (3.49), it is more common to use iterative solvers to solve Eq. (3.49).
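The assembly of Eq. (3.49) can be sketched as follows; homogeneous boundary values and ∆²Q/k = 1 are assumed purely for illustration:

```python
import numpy as np

# Assemble the 9x9 system for the interior unknowns φ_{i,j}, i,j = 1..3,
# assuming zero boundary values and Δ²Q(x_i, y_j)/k = 1 everywhere.
n = 3
A = np.zeros((n*n, n*n))
rhs = np.full(n*n, 1.0)

for j in range(n):
    for i in range(n):
        row = j*n + i
        A[row, row] = -4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[row, jj*n + ii] = 1.0
            # neighbours outside the interior block are boundary values,
            # which would move to the right hand side (zero here)

phi = np.linalg.solve(A, rhs)
print(phi.reshape(n, n))
print(np.allclose(A @ phi, rhs))   # True
```

For large grids one would use a sparse matrix format or, as the text notes, an iterative solver instead of a dense direct solve.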
3.8.2
Unsteady problem
Consider the two-dimensional heat equation
∂φ/∂t = Γ ( ∂²φ/∂x² + ∂²φ/∂y² )   (3.50)

The solution at t = 20 with Γ = 0.1/1.2 and Nx = 41, Ny = 41 is shown in Fig. 3.7. The boundary conditions are

φ(x, y = 1, t) = 300
φ(x = 0, y, t) = 20(1 − y) + 300
∂φ/∂y (x, y = 0, t) = 0
∂φ/∂x (x = 1, y, t) = 0

If we use the 2nd order finite difference scheme to represent the spatial derivatives, the discretised form of Eq. (3.50) is

dφ_{i,j}/dt = Γ [ (φ_{i+1,j} − 2φ_{i,j} + φ_{i−1,j})/∆x² + (φ_{i,j+1} − 2φ_{i,j} + φ_{i,j−1})/∆y² ]   (3.51)

where φ_{i,j} is the approximated value of φ at the point (x_i, y_j). At the points close to the boundaries, we have to implement the boundary conditions. Close to the east boundary,

φ_{Nx−1,j} = φ_{Nx,j}   (3.52)

so

dφ_{Nx−1,j}/dt = Γ [ (−φ_{Nx−1,j} + φ_{Nx−2,j})/∆x² + (φ_{Nx−1,j+1} − 2φ_{Nx−1,j} + φ_{Nx−1,j−1})/∆y² ]   (3.53)

The above is just a system of ODEs that one has to solve in order to obtain a time-dependent solution. One can use Euler, Runge-Kutta etc. to get a solution to the set of equations. In order to carry out stability analysis, one has to put it into matrix-vector form. Taking into account the boundary conditions, the above system of equations can be written in matrix-vector form (assuming Nx = 5 and Ny = 4 intervals in the x and y directions respectively) as follows.
d/dt { φ_{1,1}, φ_{2,1}, φ_{3,1}, φ_{4,1}, φ_{1,2}, φ_{2,2}, φ_{3,2}, φ_{4,2}, φ_{1,3}, φ_{2,3}, φ_{3,3}, φ_{4,3} }ᵀ =

    [ b α 0 0 γ 0 0 0 0 0 0 0 ]       [ φ_{0,1}/∆x² ]
    [ α b α 0 0 γ 0 0 0 0 0 0 ]       [ 0 ]
    [ 0 α b α 0 0 γ 0 0 0 0 0 ]       [ 0 ]
    [ 0 0 α a 0 0 0 γ 0 0 0 0 ]       [ 0 ]
    [ γ 0 0 0 β α 0 0 γ 0 0 0 ]       [ φ_{0,2}/∆x² ]
Γ   [ 0 γ 0 0 α β α 0 0 γ 0 0 ] {φ} + Γ [ 0 ]
    [ 0 0 γ 0 0 α β α 0 0 γ 0 ]       [ 0 ]
    [ 0 0 0 γ 0 0 α a 0 0 0 γ ]       [ 0 ]
    [ 0 0 0 0 γ 0 0 0 β α 0 0 ]       [ φ_{0,3}/∆x² + φ_{1,4}/∆y² ]
    [ 0 0 0 0 0 γ 0 0 α β α 0 ]       [ φ_{2,4}/∆y² ]
    [ 0 0 0 0 0 0 γ 0 0 α β α ]       [ φ_{3,4}/∆y² ]
    [ 0 0 0 0 0 0 0 γ 0 0 α a ]       [ φ_{4,4}/∆y² ]

where

α = 1/∆x²
β = −2/∆x² − 2/∆y²
γ = 1/∆y²
a = −1/∆x² − 2/∆y²
b = −2/∆x² − 1/∆y²
One can then use the above system to perform stability analysis. If you want to use an implicit time-stepping scheme to compute the solution to Eq. (3.51), things get a bit complicated. Let's use the Crank-Nicolson time-stepping scheme:

(φ^{(n+1)}_{i,j} − φ^{(n)}_{i,j})/h = (Γ/2) [ (φ^{(n+1)}_{i+1,j} − 2φ^{(n+1)}_{i,j} + φ^{(n+1)}_{i−1,j})/∆x² + (φ^{(n+1)}_{i,j+1} − 2φ^{(n+1)}_{i,j} + φ^{(n+1)}_{i,j−1})/∆y² ]
                                    + (Γ/2) [ (φ^{(n)}_{i+1,j} − 2φ^{(n)}_{i,j} + φ^{(n)}_{i−1,j})/∆x² + (φ^{(n)}_{i,j+1} − 2φ^{(n)}_{i,j} + φ^{(n)}_{i,j−1})/∆y² ]

Rearranging and putting the unknowns (φ^{(n+1)}_{i,j}) on the left hand side and all the terms that are known (φ^{(n)}_{i,j}) on the right hand side gives

−(Γh/2∆y²) φ^{(n+1)}_{i,j−1} − (Γh/2∆x²) φ^{(n+1)}_{i−1,j} + (1 + Γh/∆x² + Γh/∆y²) φ^{(n+1)}_{i,j} − (Γh/2∆x²) φ^{(n+1)}_{i+1,j} − (Γh/2∆y²) φ^{(n+1)}_{i,j+1}
= (Γh/2∆y²) φ^{(n)}_{i,j−1} + (Γh/2∆x²) φ^{(n)}_{i−1,j} + (1 − Γh/∆x² − Γh/∆y²) φ^{(n)}_{i,j} + (Γh/2∆x²) φ^{(n)}_{i+1,j} + (Γh/2∆y²) φ^{(n)}_{i,j+1}

One can write the above system in matrix-vector form as

[A] {φ^{(n+1)}} = {C}

where the matrix [A] is a sparse matrix consisting of lots of zeros. The vector {C} is a linear function of φ^{(n)}_{i,j} and the boundary values of φ^{(n+1)}_{i,j}, which we know. For Nx = 5 and Ny = 4, the matrix [A] would look similar to

[ b α 0 0 γ 0 0 0 0 0 0 0 ]
[ α b α 0 0 γ 0 0 0 0 0 0 ]
[ 0 α b α 0 0 γ 0 0 0 0 0 ]
[ 0 0 α a 0 0 0 γ 0 0 0 0 ]
[ γ 0 0 0 β α 0 0 γ 0 0 0 ]
[ 0 γ 0 0 α β α 0 0 γ 0 0 ]
[ 0 0 γ 0 0 α β α 0 0 γ 0 ]
[ 0 0 0 γ 0 0 α a 0 0 0 γ ]
[ 0 0 0 0 γ 0 0 0 β α 0 0 ]
[ 0 0 0 0 0 γ 0 0 α β α 0 ]
[ 0 0 0 0 0 0 γ 0 0 α β α ]
[ 0 0 0 0 0 0 0 γ 0 0 α a ]
where

α = −Γh/(2∆x²)
β = 1 + Γh/∆x² + Γh/∆y²
γ = −Γh/(2∆y²)
a = 1 + Γh/(2∆x²) + Γh/∆y²
b = 1 + Γh/∆x² + Γh/(2∆y²)
In order to solve the system of equations, one can use point Jacobi iteration. We can re-write the rearranged Crank-Nicolson equation as

(1 + Γh/∆x² + Γh/∆y²) φ^{(n+1)}_{i,j} = (Γh/2∆y²) φ^{(n+1)}_{i,j−1}
                                      + (Γh/2∆x²) φ^{(n+1)}_{i−1,j}
                                      + (Γh/2∆x²) φ^{(n+1)}_{i+1,j}
                                      + (Γh/2∆y²) φ^{(n+1)}_{i,j+1}
                                      + (Γh/2∆y²) φ^{(n)}_{i,j−1}
                                      + (Γh/2∆x²) φ^{(n)}_{i−1,j}
                                      + (1 − Γh/∆x² − Γh/∆y²) φ^{(n)}_{i,j}
                                      + (Γh/2∆x²) φ^{(n)}_{i+1,j}
                                      + (Γh/2∆y²) φ^{(n)}_{i,j+1}

The first four terms on the right hand side (i.e. the terms containing φ^{(n+1)}_{i−1,j}, φ^{(n+1)}_{i+1,j}, φ^{(n+1)}_{i,j−1}, φ^{(n+1)}_{i,j+1}) are unknowns, so we just have to guess their values in order to solve for φ^{(n+1)}_{i,j}. The iterative formula one would use to iteratively solve for φ^{(n+1)}_{i,j} is

(1 + Γh/∆x² + Γh/∆y²) φ^{(n+1),r+1}_{i,j} = (Γh/2∆y²) φ^{(n+1),r}_{i,j−1}
                                          + (Γh/2∆x²) φ^{(n+1),r}_{i−1,j}
                                          + (Γh/2∆x²) φ^{(n+1),r}_{i+1,j}
                                          + (Γh/2∆y²) φ^{(n+1),r}_{i,j+1}
                                          + (Γh/2∆y²) φ^{(n)}_{i,j−1}
                                          + (Γh/2∆x²) φ^{(n)}_{i−1,j}
                                          + (1 − Γh/∆x² − Γh/∆y²) φ^{(n)}_{i,j}
                                          + (Γh/2∆x²) φ^{(n)}_{i+1,j}
                                          + (Γh/2∆y²) φ^{(n)}_{i,j+1}

This scheme is called the point Jacobi scheme. One would use the formula above to iteratively obtain the value of φ^{(n+1)}_{i,j} at the (r + 1)th iteration using the values of φ^{(n+1)}_{i−1,j}, φ^{(n+1)}_{i+1,j}, φ^{(n+1)}_{i,j−1}, φ^{(n+1)}_{i,j+1} at the rth iteration. If you want to use the Gauss-Seidel iteration scheme, one would instead use the latest available values φ^{(n+1),r+1} of the neighbours wherever they have already been computed.
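The Jacobi update can be written generically for any linear system A x = c: split A into its diagonal D and off-diagonal part R, and iterate x^{r+1} = D⁻¹(c − R x^r). A minimal sketch (the small test matrix is a hypothetical stand-in for the sparse Crank-Nicolson matrix [A]):

```python
import numpy as np

def jacobi(A, c, iters=200):
    # Point Jacobi: x^{r+1} = D^{-1} (c - R x^r), with A = D + R
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(c)
    for _ in range(iters):
        x = (c - R @ x) / D
    return x

# Small diagonally dominant test system (illustrative values only)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
c = np.array([1.0, 2.0, 3.0])
x = jacobi(A, c)
print(np.allclose(x, np.linalg.solve(A, c)))   # True
```

Jacobi converges here because the matrix is diagonally dominant, which the Crank-Nicolson matrix also is (the diagonal 1 + Γh/∆x² + Γh/∆y² exceeds the sum of the off-diagonal magnitudes).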
3.8.3
Modified wavenumber stability analysis
Using eigenvalue techniques to analyse the stability of a numerical algorithm can be tedious and not very informative. It is much quicker to use the modified wavenumber to determine the stability of a numerical scheme. Let's express φ_{i,j} in terms of a two-dimensional Fourier series

φ_{i,j} = Σ_l Σ_m c_{lm}(t) e^{Ik_l x_i} e^{Ik_m y_j}

where I is the imaginary unit. Therefore
Figure 3.7: Solution at time t = 20 for the 2d heat equation.
φ_{i−1,j} = Σ_l Σ_m c_{lm}(t) e^{Ik_l(x_i − ∆x)} e^{Ik_m y_j}
φ_{i+1,j} = Σ_l Σ_m c_{lm}(t) e^{Ik_l(x_i + ∆x)} e^{Ik_m y_j}
φ_{i,j−1} = Σ_l Σ_m c_{lm}(t) e^{Ik_l x_i} e^{Ik_m(y_j − ∆y)}
φ_{i,j+1} = Σ_l Σ_m c_{lm}(t) e^{Ik_l x_i} e^{Ik_m(y_j + ∆y)}

Substituting the above into Eq. (3.51) and comparing coefficients of e^{Ik_l x_i} e^{Ik_m y_j} gives

d c_{lm}(t)/dt = −2Γ [ (1 − cos(k_l ∆x))/∆x² + (1 − cos(k_m ∆y))/∆y² ] c_{lm}(t) = λ c_{lm}(t)

For stability, we would like λ to fall within the stability diagram of the particular numerical time-stepping scheme that you are using. If you are using Euler time-stepping, then

−2 < λh < 0

The worst case scenario is when cos(k_l ∆x) = cos(k_m ∆y) = −1. So if you time-step using the explicit Euler scheme, then

h ≤ 1 / [ Γ (2/∆x² + 2/∆y²) ].
Exercise 3.6: If one were to use the 2nd order central difference (for spatial discretisation) and explicit Euler time stepping schemes to solve the diffusion equation, what is the maximum value of h that one can use? Use ∆x = ∆y = ∆ = 0.01 and Γ = 0.1. Compare your 1-d and 2-d results.
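The bound derived above can be evaluated for the parameters of Exercise 3.6 (this sketch simply plugs numbers into the worst-case formula, so it only checks the 2-d bound and its 1-d counterpart h ≤ ∆²/(2Γ)):

```python
# Explicit Euler time-step bounds for the diffusion equation with the
# parameters of Exercise 3.6: Δx = Δy = Δ = 0.01, Γ = 0.1.
gamma, dx = 0.1, 0.01

h_1d = 1.0 / (gamma * (2.0/dx**2))                 # 1-d: h ≤ Δ²/(2Γ)
h_2d = 1.0 / (gamma * (2.0/dx**2 + 2.0/dx**2))    # 2-d bound is half the 1-d value
print(round(h_1d, 6), round(h_2d, 6))             # 0.0005 0.00025
```

The 2-d limit is half the 1-d limit because both spatial directions contribute to the most negative eigenvalue λ.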
3.9 Test Problems

3.10 Euler equations

Using second order central difference with 4th order Runge-Kutta time stepping with the following parameters: dt = 0.1, ∆x = ∆y = 1, Mach number M = 0.8 and the pressure forcing function given below.
Figure 3.8: Contour plot of the solution to the Euler Equations at time t = 400. The solution is computed using the 2nd order central difference scheme and Runge-Kutta time stepping. Contour levels are at p = ±0.1, ±0.05, ±0.01, ±0.005, ±0.001. Note the dispersion error and also the reflected wave. The numerical solution was obtained with dt = 0.1, ∆x = ∆y = 1.

The pressure forcing function is

sin(Ωt) e^{−((x+20)² + y²)/9}

where Ω = 0.03π. The solution at time t = 400 is shown in Fig. 3.8. Note the high wavenumber dispersion waves and the reflected waves. The dispersion error could be eliminated by conducting the simulation with more grid points. Figure 3.9 shows the results from the same simulation but carried out with a better spatial resolution of ∆x = ∆y = 0.5 with dt = 0.02. The contour plot shown is at t = 600. Note that no high wavenumber dispersive waves are visible; only the reflected waves are a source of error. The effect of the reflected waves from the external boundaries could be lessened by computing the solution on a bigger domain. The solution computed with ∆x = ∆y = 0.5, dt = 0.01 but with −300 ≤ x ≤ 300 and −300 ≤ y ≤ 300 is shown in Fig. 3.10.
Figure 3.9: Contour plot of the solution to the Euler Equations at time t = 600. The solution is computed using 2nd order central difference scheme and RungeKutta time stepping. Contour levels equally spaced with 50 contour levels between the maximum and minimum p values. The numerical solution was obtained with dt = 0.02, ∆x = ∆y = 0.5.
Figure 3.10: Contour plot of p for the solution to the Euler Equations at time t = 400. The solution is computed using the 2nd order central difference scheme and Runge-Kutta time stepping. The numerical solution was obtained with dt = 0.01, ∆x = ∆y = 0.5. The solution was computed on a bigger domain, −300 ≤ x ≤ 300 and −300 ≤ y ≤ 300.
Chapter 4

Differentiation: Unequally spaced data

In order to find a formula to differentiate a function, consider the Taylor series

f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} − x_i) + (1/2!) f''(x_i)(x_{i+1} − x_i)² + (1/3!) f'''(ξ)(x_{i+1} − x_i)³   (4.1)

where x_{i+1} = x_i + ∆_i, f'(x) = df/dx, and ξ is somewhere in between x_i and x_{i+1}. Equation (4.1) can be re-arranged to obtain

f'(x_i) = (f(x_{i+1}) − f(x_i))/(x_{i+1} − x_i) − (1/2!) f''(x_i)(x_{i+1} − x_i) − (1/3!) f'''(ξ)(x_{i+1} − x_i)²   (4.2)

Hence f'(x_i) can be approximated as

f'(x_i) = (f(x_{i+1}) − f(x_i))/(x_{i+1} − x_i)   (4.3)

Equation (4.3) is called the Forward Difference Scheme (FDS). The other terms that have been neglected in Eq. (4.2) give an approximation of the error in approximating f'(x_i) with Eq. (4.3). The leading term in the truncation error for the FDS approximation is

E_FDS = (1/2!) f''(x_i)(x_{i+1} − x_i)

Thus, in order to reduce the error, one would like to make (x_{i+1} − x_i) as small as possible. The Taylor series expansion Eq. (4.1) could also be used to obtain an expression for f(x_{i−1}).
f(x_{i−1}) = f(x_i) + f'(x_i)(x_{i−1} − x_i) + (1/2!) f''(x_i)(x_{i−1} − x_i)² + (1/3!) f'''(ξ_2)(x_{i−1} − x_i)³
           = f(x_i) − f'(x_i)(x_i − x_{i−1}) + (1/2!) f''(x_i)(x_i − x_{i−1})² − (1/3!) f'''(ξ_2)(x_i − x_{i−1})³   (4.4)

where x_{i−1} < ξ_2 < x_i. Equation (4.4) can be re-arranged to obtain another approximation for f'(x_i):

f'(x_i) = (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1})   (4.5)

Equation (4.5) is called the Backward Difference Scheme (BDS) approximation of the first derivative. If we assume that x_{i−1} is very close to x_i, then the leading term in the truncation error for the BDS approximation is

E_BDS = (1/2!) f''(x_i)(x_i − x_{i−1})
Exercise 4.1: Subtract Eq. (4.4) from Eq. (4.1) and show that, after some algebra, the first derivative can be written as

f'(x_i) = (f(x_{i+1}) − f(x_{i−1}))/(x_{i+1} − x_{i−1})
        − f''(x_i) [(x_{i+1} − x_i)² − (x_i − x_{i−1})²] / (2!(x_{i+1} − x_{i−1}))
        − f'''(ξ)(x_{i+1} − x_i)³ / (3!(x_{i+1} − x_{i−1}))
        − f'''(ξ_2)(x_i − x_{i−1})³ / (3!(x_{i+1} − x_{i−1}))

Following Ex. 4.1, yet another formula for computing f'(x_i) is

f'(x_i) = (f(x_{i+1}) − f(x_{i−1}))/(x_{i+1} − x_{i−1})   (4.6)

This formula is called the Central Difference Scheme (CDS) and its leading order error is given by

E_CDS = f''(x_i) [(x_{i+1} − x_i)² − (x_i − x_{i−1})²] / (2!(x_{i+1} − x_{i−1}))
Exercise 4.2: For the case where all the x_i's are equally spaced, i.e. x_{i+1} − x_i = ∆ = const, show that Eqs. (4.3), (4.5) and (4.6) simplify to the following expressions for the FDS, BDS and CDS:

f'(x_i) = (f(x_{i+1}) − f(x_i))/∆
f'(x_i) = (f(x_i) − f(x_{i−1}))/∆
f'(x_i) = (f(x_{i+1}) − f(x_{i−1}))/(2∆)

Also show that the leading error term in the FDS and BDS is O(∆) and the leading error term in the CDS is O(∆²).

Example 4.1: Suppose we have the function

f(x) = e^{−x²} sin(10x)

This function is shown in Fig. 4.1. Note that all the 'action' of this function is confined to the region 0 ≤ x ≤ 3. Thus, if we want to use the difference schemes (CDS, FDS or BDS) to calculate the derivatives, we would like to confine most of the grid points to the region 0 ≤ x ≤ 3 and use fewer grid points in the region x > 3. One method would be to place the grid points such that

x_{i+1} = x_i + ∆_i
∆_i = r ∆_{i−1}

with r > 1. Figures 4.2 and 4.3 show the error in calculating the derivatives using the CDS in the domain 0 ≤ x ≤ 5 using 30 grid points. The data in Fig. 4.2 was computed using r = 1.0 (i.e. equally spaced grid points) and the data in Fig. 4.3 was computed using r = 1.1. As you can clearly see, you can get much better results by simply making a small change in the location of the grid points. Numerically, the mean error for r = 1.0 using the CDS scheme is 0.5411 and for r = 1.1 it is 0.2729.
4.1 Approximation of the 2nd Derivative

A formula for the 2nd derivative can be obtained by evaluating df/dx at points halfway between x_{i+1} and x_i and between x_i and x_{i−1} (see Fig. 4.4):

d²f/dx²(x_i) = [ df/dx(x_{i+1/2}) − df/dx(x_{i−1/2}) ] / [ (x_i + (1/2)(x_{i+1} − x_i)) − (x_{i−1} + (1/2)(x_i − x_{i−1})) ]   (4.7)

Using Eq. (4.6) to approximate the first derivatives, Eq. (4.7) can be written as
Figure 4.1: Plot of f(x) = e^{−x²} sin(10x).
Figure 4.2: Analysis of the error in the calculation of the derivatives for r = 1.0. (a) shows the function and the data points x_i (◦). The analytical derivative of the function and the calculated derivative using CDS (symbols) are shown in (b). (c) shows the error, indicated by the symbols, calculated at every grid point.
d²f/dx²(x_i) = [ (f(x_{i+1}) − f(x_i))/(x_{i+1} − x_i) − (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1}) ] / [ (1/2)(x_{i+1} − x_{i−1}) ]   (4.8)
Figure 4.3: Analysis of the error in the calculation of the derivatives for r = 1.1. (a) shows the function and the data points x_i (◦). The analytical derivative of the function and the calculated derivative using CDS (symbols) are shown in (b). (c) shows the error, indicated by the symbols, calculated at every grid point.
Figure 4.4: Figure showing the data points used to calculate the second derivative
Exercise 4.3: Show that Eq. (4.8) simplifies to

d²f/dx²(x_i) = [ f(x_{i+1})(x_i − x_{i−1}) − f(x_i)(x_{i+1} − x_{i−1}) + f(x_{i−1})(x_{i+1} − x_i) ] / [ (1/2)(x_{i+1} − x_{i−1})(x_{i+1} − x_i)(x_i − x_{i−1}) ]

In addition, if one assumes that x_{i+1} − x_i = x_i − x_{i−1} = ∆, then

d²f/dx²(x_i) = (f(x_{i+1}) − 2f(x_i) + f(x_{i−1}))/∆²

Results from the above exercise illustrate that the most accurate scheme is the CDS because the error in the CDS approximation is O(∆²). This means that if we halve the grid spacing, the error will reduce to a quarter of its original value. For the FDS and
BDS approximations, halving ∆ will only halve the error. In most practical engineering calculations, it is inefficient to have an equally spaced grid. Take for example the situation below:

E_FDS = (1/2) f''(x_i) ∆_i
E_BDS = (1/2) f''(x_i) ∆_{i−1}
E_CDS = (1/2) f''(x_i)(∆_i − ∆_{i−1})

where ∆_i = x_{i+1} − x_i.

4.2 Application of Finite Difference Formulas
In many engineering problems, you will have to solve ordinary differential equations that look something like

d²y/dx² + p(x) dy/dx + q(x) y = r(x)   (4.9)

where a ≤ x ≤ b. We are also usually given the conditions at the boundaries:

y(a) = α
y(b) = β

Exercise 4.4: Using the CDS approximation for all the derivatives, show that Eq. (4.9) can be approximated as

[ 2/((x_{i+1} − x_{i−1})(x_i − x_{i−1})) − p_i/(x_{i+1} − x_{i−1}) ] y_{i−1}
+ [ q_i − 2/((x_{i+1} − x_i)(x_i − x_{i−1})) ] y_i
+ [ 2/((x_{i+1} − x_{i−1})(x_{i+1} − x_i)) + p_i/(x_{i+1} − x_{i−1}) ] y_{i+1} = r_i   (4.10)

where

y_i = y(x_i)
p_i = p(x_i)
q_i = q(x_i)
r_i = r(x_i)
From the boundary conditions, we know that

y_0 = α
y_N = β   (4.11)

Eq. (4.10) together with Eq. (4.11) represents a linear system that can be solved. It can be represented by

[A]{X} = {C}

where

[A] = [ 1    0    0    0    ···  0 ]
      [ χ_1  ε_1  η_1  0    ···  0 ]
      [ 0    χ_2  ε_2  η_2  ···  0 ]
      [ ⋮              ⋱          ⋮ ]
      [ 0    ···  χ_{N−1}  ε_{N−1}  η_{N−1} ]
      [ 0    ···  0    0    0    1 ]

{X} = { y_0, y_1, ..., y_N }ᵀ

{C} = { α, r_1, r_2, ..., r_{N−1}, β }ᵀ

and

χ_i = 2/((x_{i+1} − x_{i−1})(x_i − x_{i−1})) − p_i/(x_{i+1} − x_{i−1})
ε_i = q_i − 2/((x_{i+1} − x_i)(x_i − x_{i−1}))
η_i = 2/((x_{i+1} − x_{i−1})(x_{i+1} − x_i)) + p_i/(x_{i+1} − x_{i−1})
Chapter 5

Galerkin Method

In this chapter, we will use Fourier series to solve a few classical partial differential equations.

5.1 Convection equation

Let's use Fourier series to solve the wave equation

∂u/∂t − ∂u/∂x = 0   (5.1)
with initial conditions given by u(x, 0) = sin(π cos(x)) We will also assume that the domain of interest is x solution as a Fourier series 1 u(x, t) = N
∈ [0, 2π].
Approximate the
N/2
ck (t) eikx .
(5.2)
k=−N/2+1
Substituting Eq. (5.2) into Eq. (5.1) gives

(1/N) Σ_k dc_k(t)/dt e^{ikx} - (1/N) Σ_k c_k(t) (ik) e^{ikx} = 0.

Use the Galerkin method: multiply the equation above by (1/2π) e^{-ilx} and integrate over the domain 0 < x < 2π to give

Σ_{k=-N/2+1}^{N/2} dc_k/dt ∫_0^{2π} e^{i(k-l)x} dx = Σ_{k=-N/2+1}^{N/2} c_k ik ∫_0^{2π} e^{i(k-l)x} dx,

where ∫_0^{2π} e^{i(k-l)x} dx = 2π δ_{kl}.
Using the orthogonality condition will give

dc_l(t)/dt = i l c_l(t).   (5.3)

The equation above is an ordinary differential equation with t as the independent variable. The initial condition is obtained by Fourier transforming the initial condition u(x, 0). The solution to Eq. (5.3) can be formally written as

c_k(t) = c_k(t = 0) e^{ikt}.   (5.4)

Alternatively, one can use a Runge-Kutta or Euler integration scheme to numerically approximate c_k(t). Using the Euler integration scheme,

( c_k(t + h) - c_k(t) ) / h = i k c_k(t)

c_k(t + h) = c_k(t) + h i k c_k(t).   (5.5)
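Eqs. (5.3)-(5.4) can be checked numerically. The sketch below uses Python/NumPy (the notes themselves use Matlab): it evolves the Fourier coefficients exactly via Eq. (5.4) and recovers u(x, t) = u(x + t, 0), the left-travelling solution of Eq. (5.1).

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u0 = np.sin(np.pi * np.cos(x))        # initial condition of Eq. (5.1)

c = np.fft.fft(u0)                    # Fourier coefficients c_k(t = 0)
k = np.fft.fftfreq(N, d=1.0 / N)      # integer wavenumbers

t = 0.7
c = c * np.exp(1j * k * t)            # exact evolution, Eq. (5.4): c_k(t) = c_k(0) e^{ikt}
u = np.real(np.fft.ifft(c))           # back to physical space

# du/dt - du/dx = 0 advects the initial profile to the left: u(x, t) = u(x + t, 0)
```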
5.2 Burgers Equation
The Burgers equation is a model equation for the Navier-Stokes equation. It can be written as

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x².   (5.6)
To find a solution to Eq. (5.6), let

u(x, t) = (1/N) Σ_{k=-N/2+1}^{N/2} a_k(t) e^{ikx}.   (5.7)
All terms in Eq. (5.6) must be evaluated before one can solve it. Look at the diffusion term on the right hand side first:

∂u/∂x = (1/N) Σ_k a_k (ik) e^{ikx}

∂²u/∂x² = -(1/N) Σ_k a_k k² e^{ikx}.   (5.8)
Calculation of u ∂u/∂x is a problem (we will elaborate later), but for the moment it will be treated as just another periodic function and can be represented as

u ∂u/∂x = (1/N) Σ_k b_k(t) e^{ikx}.   (5.9)
Substituting Eqs. (5.9), (5.8) and (5.7) into Eq. (5.6) gives

(1/N) Σ_k da_k/dt e^{ikx} + (1/N) Σ_k b_k e^{ikx} = -ν (1/N) Σ_k k² a_k e^{ikx}.

Use the Galerkin formulation and multiply the above equation by (1/2π) e^{-imx} and integrate over the domain:

Σ_k da_k/dt ∫_0^{2π} e^{i(k-m)x} dx + Σ_k b_k ∫_0^{2π} e^{i(k-m)x} dx = -ν Σ_k k² a_k ∫_0^{2π} e^{i(k-m)x} dx,

where the sums run over k = -N/2 + 1, ..., N/2. Using the orthogonality condition will give the Burgers equation in Fourier space,

da_m/dt + b_m = -ν m² a_m.   (5.10)

Equation (5.10) is just an ordinary differential equation and one can use Runge-Kutta and other standard ODE solvers to time step it forward. There is another, neater way of doing things. One can rearrange Eq. (5.10) as

da_m/dt + ν m² a_m = -b_m.

Multiplying both sides by the integrating factor

e^{∫ ν m² dt} = e^{ν m² t}

gives

e^{ν m² t} da_m/dt + e^{ν m² t} ν m² a_m = d/dt ( e^{ν m² t} a_m ) = -b_m e^{ν m² t}.   (5.11)
Using the Euler integration scheme gives

( a_m(t + h) e^{ν m² (t+h)} - a_m(t) e^{ν m² t} ) / h = -b_m(t) e^{ν m² t}.

Re-arranging this equation will give us

a_m(t + h) = ( a_m(t) - b_m(t) h ) e^{-ν m² h}.   (5.12)
So, to use the Fourier spectral Galerkin method to solve the Burgers equation, do the following:
(1) Fourier transform u(x, t = 0) = sin(x) to get the Fourier coefficients, a_k(t = 0).
(2) Calculate the Fourier coefficients of ∂u/∂x: c_k = ik a_k.
(3) Inverse transform c_k to get ∂u/∂x(x_i) in physical space.
(4) Calculate u(x_i) ∂u/∂x(x_i) in physical space.
(5) Obtain b_k by taking the Fourier transform of u(x_i) ∂u/∂x(x_i). This method is commonly known as the pseudo-spectral method.
(6) Use Eq. (5.12) to get a_m(t + h).
(7) Repeat steps (2)-(6) to step the solution forward in time.
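The steps above can be sketched compactly. The code below is a Python/NumPy sketch (the notes use Matlab); the helper name `burgers_step` is ours, and no de-aliasing is applied, which is tolerable here only because the viscosity damps the high modes.

```python
import numpy as np

def burgers_step(a, k, nu, h):
    """One step of Eq. (5.12), i.e. steps (2)-(6) of the recipe above.
    a: Fourier coefficients a_k(t); k: integer wavenumbers; nu: viscosity; h: step."""
    u = np.real(np.fft.ifft(a))               # (3) u in physical space
    dudx = np.real(np.fft.ifft(1j * k * a))   # (2)-(3) du/dx from c_k = ik a_k
    b = np.fft.fft(u * dudx)                  # (4)-(5) pseudo-spectral nonlinear term
    return (a - h * b) * np.exp(-nu * k ** 2 * h)   # (6) Eq. (5.12)

N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)
nu, h = 0.1, 1.0e-3

a = np.fft.fft(np.sin(x))                     # (1) a_k(t = 0) for u(x, 0) = sin(x)
for _ in range(1000):                         # (7) march to t = 1
    a = burgers_step(a, k, nu, h)
u = np.real(np.fft.ifft(a))                   # viscously decayed, steepened profile
```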
5.3 Aliasing error in the calculation of the nonlinear term
Let's see what happens when we use the pseudo-spectral method to calculate the Fourier coefficients of the product of two functions, u(x) × v(x). Assume that both u(x) and v(x) are periodic functions:

u(x_j) = (1/N) Σ_{p=-N/2+1}^{N/2} û(p) e^{ipx_j}

v(x_j) = (1/N) Σ_{q=-N/2+1}^{N/2} v̂(q) e^{iqx_j}.   (5.13)
The product of u and v can then be written as

u(x_j) v(x_j) = (1/N²) Σ_p Σ_q û(p) v̂(q) e^{i(p+q)x_j}.

Taking the Fourier transform of the above (denoted ûv(k)) gives

ûv(k) = (1/N²) Σ_j Σ_p Σ_q û(p) v̂(q) e^{i(p+q-k)x_j}.   (5.14)

Using the orthogonality condition

(1/N) Σ_{j=0}^{N-1} e^{irx_j} = 1 if r = Nm, where m = 0, ±1, ±2, ...; 0 otherwise,
Eq. (5.14) can be written as
ûv(k) = (1/N) Σ_{k=p+q} û(p) v̂(q) + (1/N) Σ_{k=p+q±N} û(p) v̂(q)

where the second sum is the error term. The "exact" result should be

ûv_Exact(k) = (1/N) Σ_{k=p+q} û(p) v̂(q).

Example. Let
u(x) = sin(3x),   v(x) = sin(5x).

Thus

u(x) v(x) = sin(3x) sin(5x) = (1/2) cos(2x) - (1/2) cos(8x).

If N is big, this is what we will get:

û(k) = -iN/2 if k = 3;  iN/2 if k = -3;  0 otherwise

v̂(k) = -iN/2 if k = 5;  iN/2 if k = -5;  0 otherwise

ûv(k) = N/4 if k = ±2;  -N/4 if k = ±8;  0 otherwise.

If N = 14,

û(k) = -7i if k = 3;  7i if k = -3;  0 otherwise

v̂(k) = -7i if k = 5;  7i if k = -5;  0 otherwise
ûv(k) = 3.5 if k = ±2;  -3.5 if k = ±6;  0 otherwise.

The mode at k = ±6 is the aliasing error term

(1/N) Σ_{k=p+q±N} û(p) v̂(q).

The error mainly occurs because the FFT cannot differentiate between

uv(x) = (1/2) cos(2x) - (1/2) cos(8x)

and

uv(x) = (1/2) cos(2x) - (1/2) cos(6x)

at the data points (see handout).
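The aliasing of the k = ±8 mode onto k = ∓6 can be demonstrated directly with an FFT on N = 14 points; a short Python/NumPy sketch:

```python
import numpy as np

N = 14
x = 2 * np.pi * np.arange(N) / N
uv = np.sin(3 * x) * np.sin(5 * x)      # = 0.5 cos(2x) - 0.5 cos(8x)

c = np.fft.fft(uv)                      # unnormalised N-point transform of the product
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)
coef = dict(zip(k, np.round(np.real(c), 6)))

# k = 8 cannot be represented on a 14-point grid: e^{i 8 x_j} = e^{-i 6 x_j},
# so the -0.5 cos(8x) content shows up (aliased) at k = +/-6 instead of +/-8.
```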
Chapter 6
Collocation Method

In the collocation method, the approximate solution to the differential equation is required to satisfy the differential equation at discrete collocation points. Thus it is required to find the derivative at the collocation points.
6.1 Matrix operator for Fourier spectral numerical differentiation
Here, we recall that the Fourier transform pair is given by

c_k = Σ_{j=0}^{N-1} f(x_j) e^{-ikx_j}   (6.1)

f(x_j) = (1/N) Σ_{k=-N/2+1}^{N/2} c_k e^{ikx_j}.   (6.2)

It was earlier shown that the derivative at the grid points x_j can be written as

(Du)_l = (1/N) Σ_{k=-N/2+1}^{N/2} c_k ik e^{ikx_l}.   (6.3)

Substituting Eq. (6.1) into (6.3) gives
(Du)_l = (1/N) Σ_{k=-N/2+1}^{N/2} Σ_{j=0}^{N-1} f(x_j) e^{-ikx_j} ik e^{ikx_l}

       = (1/N) Σ_{j=0}^{N-1} [ Σ_{k=-N/2+1}^{N/2} ik e^{ik(x_l - x_j)} ] f(x_j).   (6.4)

Remembering that x_j = 2πj/N gives

(Du)_l = (1/N) Σ_{j=0}^{N-1} Σ_{k=-N/2+1}^{N/2} ik e^{2πik(l-j)/N} f(x_j).   (6.5)

If we define

d_{l,j} = (1/N) Σ_{k=-N/2+1}^{N/2} ik e^{2πik(l-j)/N}   (6.6)

then the derivative at every grid point can be written as

(Du)_l = Σ_{j=0}^{N-1} d_{l,j} f(x_j).   (6.7)

Note that it is possible to show that

d_{l,j} = (1/2) (-1)^{l-j} cot( π(l-j)/N )   if l ≠ j
d_{l,j} = 0                                  if l = j.

Note that Eq. (6.7) can be written in vector-matrix form as

[ du/dx(x = x_0)     ]   [ d_{0,0}     d_{0,1}   ...  d_{0,N-1}   ] [ u(x = x_0)     ]
[ du/dx(x = x_1)     ] = [ d_{1,0}     d_{1,1}   ...  d_{1,N-1}   ] [ u(x = x_1)     ]
[       ...          ]   [   ...         ...     ...    ...       ] [      ...       ]
[ du/dx(x = x_{N-1}) ]   [ d_{N-1,0}   d_{N-1,1} ...  d_{N-1,N-1} ] [ u(x = x_{N-1}) ]
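The closed-form entries d_{l,j} can be checked by building the matrix and differentiating a resolved mode. A sketch in Python/NumPy (the helper name `fourier_diff_matrix` is ours, and the double loop is written for readability rather than speed):

```python
import numpy as np

def fourier_diff_matrix(N):
    """Fourier spectral differentiation matrix with the closed-form entries
    d_{l,j} = (1/2) (-1)^(l-j) cot(pi (l-j)/N) for l != j, 0 on the diagonal."""
    D = np.zeros((N, N))
    for l in range(N):
        for j in range(N):
            if l != j:
                D[l, j] = 0.5 * (-1.0) ** (l - j) / np.tan(np.pi * (l - j) / N)
    return D

N = 16
x = 2 * np.pi * np.arange(N) / N
D = fourier_diff_matrix(N)
du = D @ np.sin(x)       # spectral differentiation is exact for resolved modes
```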
Chapter 7
Some numerical examples

In this chapter, some examples of problems and solutions written by past and present PhD students are presented. It is intended to give the reader an idea of the variety of problems that can be solved using the numerical methods presented in this set of notes.
7.1 Heating of an oil droplet in air
In this section, we will look at the problem where an oil droplet is suspended in air. This same problem has been solved by Dombrovsky and Sazhin [2] so comparisons could be made with their data. The partial differential equation governing the behaviour of the temperature inside the liquid, φ_l, is

ρ_l c_{p,l} ∂φ_l/∂t = k_l (1/r²) ∂/∂r ( r² ∂φ_l/∂r ),   (7.1)

where k_l = 0.14 W/(m K) is the thermal conductivity of the oil, c_{p,l} = 2.83 kJ/(kg K) is the specific heat capacity of the liquid and ρ_l = 600 kg/m³ is the density of the oil. We will assume that the oil has constant properties. Let's also assume that the droplet radius is 20 µm, so Eq. (7.1) is only valid for 0 < r < 2 × 10^-5 m. The initial temperature of the droplet is set to be 300 K. Because of symmetry, the boundary condition at r = 0 is

∂φ_l/∂r (0) = 0.   (7.2)

On the gas side, the governing equation for the temperature, φ_g, is

ρ_g c_{p,g} ∂φ_g/∂t = k_g (1/r²) ∂/∂r ( r² ∂φ_g/∂r ),   (7.3)

where k_g = 0.061 W/(m K) is the thermal conductivity of the air, c_{p,g} = 1.12 kJ/(kg K) is the specific heat capacity of the gas and ρ_g = 23.8 kg/m³ is the density of the air. Again, we will assume all properties are constant. Let's set the outer boundary of the calculation to be 20 times the radius of the droplet, so Eq. (7.3) is only valid for 2 × 10^-5 m < r < 4 × 10^-4 m. The initial temperature of the air is set to be 880 K. At the interface of the droplet, energy conservation requires that
k_l ∂φ_l/∂r = k_g ∂φ_g/∂r.   (7.4)

Using a one-sided scheme to approximate the derivatives in Eq. (7.4) gives

k_l ( φ_s - φ_{l,Nl-1} ) / Δ_l = k_g ( φ_{g,1} - φ_s ) / Δ_g

φ_s = [ φ_{l,Nl-1} + (k_g Δ_l)/(k_l Δ_g) φ_{g,1} ] / [ 1 + (k_g Δ_l)/(k_l Δ_g) ]   (7.5)

where it is assumed that the temperature on the liquid side is represented by the array [φ_{l,0}, φ_{l,1}, ..., φ_{l,Nl-1}, φ_{l,Nl}] and the temperature on the gas side is represented as [φ_{g,0}, φ_{g,1}, ..., φ_{g,Ng-1}, φ_{g,Ng}]. N_g and N_l are the number of grid points (including boundary points) on the gas and liquid sides respectively. Δ_l and Δ_g are the grid sizes on the liquid and gas sides respectively. φ_s = φ_{l,Nl} = φ_{g,0} is the temperature on the surface of the droplet.

The solution can be obtained by solving Eqs. (7.1) and (7.3) along with the boundary conditions given by Eqs. (7.5) and (7.2). Initially, φ_{g,i} (i = 1..N_g) is set at 880 K. The solution can be computed using a Runge-Kutta time-stepping scheme with the following numerical parameters: Δ_g = 7.7551 × 10^-6, Δ_l = 2.2222 × 10^-6, h = 10^-5. The solution at various times is shown in Fig. 7.1. The corresponding Matlab code is shown in Matlab Programs 7.1 and 7.2.
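The interface formula, Eq. (7.5), can be sanity-checked by confirming that it makes the two one-sided heat fluxes equal. A small Python sketch (the temperature values are made up purely to exercise the formula; the material properties and grid sizes are those quoted in the text):

```python
# Properties and grid sizes from the text
kl, kg = 0.14, 0.061            # W/(m K)
dl, dg = 2.2222e-6, 7.7551e-6   # m

# Hypothetical temperatures either side of the interface (illustrative only)
phi_l_Nm1 = 310.0               # liquid temperature one node inside the surface
phi_g_1 = 870.0                 # gas temperature one node outside the surface

ratio = (kg / kl) * (dl / dg)
phi_s = (phi_l_Nm1 + ratio * phi_g_1) / (1.0 + ratio)   # Eq. (7.5)

flux_l = kl * (phi_s - phi_l_Nm1) / dl                  # one-sided liquid-side flux
flux_g = kg * (phi_g_1 - phi_s) / dg                    # one-sided gas-side flux
# By construction of Eq. (7.5), flux_l == flux_g (to round-off)
```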
Figure 7.1: Numerical solution for the droplet heating problem at successive times. •, temperature inside the droplet; ◦, air temperature; and the filled symbol, temperature on the interface of the droplet.
Matlab Program 7.1:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% File to solve heat equation using
% central differencing scheme. The problem
% we are looking at is a droplet of fuel in air.
% The parameters are chosen so that results
% could be compared with the data published in
%
% "A parabolic temperature profile model for
% heating of droplets"
% L.A. Dombrovsky and S.S. Sazhin
% J. Heat Transfer, June 2003, vol 125, 535-537.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all; close all;

Nliquid=10;
Ngas=50;
h=1.0e-5;
nstep=500;
rhol=600;  %Density of liquid in kg/m^3
rhog=23.8; %Density of gas in kg/m^3
kl=0.14;   %Thermal conductivity of liquid in W/(mK)
kg=0.061;  %Thermal conductivity of gas in W/(mK)
cpl=2830;  %specific heat capacity of liquid in J/(kg K)
cpg=1120;  %specific heat capacity of gas in J/(kg K)

nuliquid=kl/(rhol*cpl); %Thermal diffusivity
nugas=kg/(rhog*cpg);    %Thermal diffusivity
L=20e-6;                % Initial size of droplet in m

rliquid=linspace(0,L,Nliquid);
rgas=linspace(L,20*L,Ngas);
rliquid=rliquid';
rgas=rgas';
Deltaliquid=rliquid(2)-rliquid(1);
Deltagas=rgas(2)-rgas(1);
Delta=Deltaliquid;
ratio=(kg/kl)*(Deltaliquid/Deltagas);

fliquid=300*ones(size(rliquid));
fgas=880*ones(size(rgas));

fgas(1)=(fliquid(Nliquid-1)+ratio*fgas(2))/(1+ratio);
fliquid(Nliquid)=fgas(1);

%
%Plotting initial solution
%
t=0;
gg=plot(rliquid(1:Nliquid-1),fliquid(1:Nliquid-1),'ko', ...
        rgas(2:Ngas),fgas(2:Ngas),'ks', ...
        rgas(1),fgas(1),'ko');  % third series (interface temperature) reconstructed
                                % from a truncated line in the source
set(gg(3),'markerfacecolor',[0 0 0],'markersize',10);
axis([0 20e-5 250 900]);
crap=sprintf('t=%f s',t);
text(1.5e-4,500,crap,'fontsize',12);
xlabel('r'); ylabel('\phi')
%
% Starting time iteration
%
for j=1:nstep

  foldgas=fgas;
  k1=CalcRHS(nugas,Deltagas,rgas,foldgas);
  temp=foldgas+k1*(h/2);
  k2=CalcRHS(nugas,Deltagas,rgas,temp);
  temp=foldgas+k2*(h/2);
  k3=CalcRHS(nugas,Deltagas,rgas,temp);
  temp=foldgas+k3*h;
  k4=CalcRHS(nugas,Deltagas,rgas,temp);
  fgas=foldgas+((1./6.)*k1+(1./3.)*(k2+k3)+(1./6.)*k4)*h;

  foldliquid=fliquid;
  k1=CalcRHS(nuliquid,Deltaliquid,rliquid,foldliquid);
  temp=foldliquid+k1*(h/2);
  k2=CalcRHS(nuliquid,Deltaliquid,rliquid,temp);
  temp=foldliquid+k2*(h/2);
  k3=CalcRHS(nuliquid,Deltaliquid,rliquid,temp);
  temp=foldliquid+k3*h;
  k4=CalcRHS(nuliquid,Deltaliquid,rliquid,temp);
  fliquid=foldliquid+((1./6.)*k1+(1./3.)*(k2+k3)+(1./6.)*k4)*h;

  %Heat flux at r=L
  fgas(1)=(fliquid(Nliquid-1)+ratio*fgas(2))/(1+ratio);
  fliquid(Nliquid)=fgas(1);
  %Symmetry condition at r=0;
  fliquid(1)=fliquid(2);

  t=t+h

  %
  % Plotting solution
  %
  figure(1)
  gg=plot(rliquid(1:Nliquid-1),fliquid(1:Nliquid-1),'ko', ...
          rgas(2:Ngas),fgas(2:Ngas),'ks', ...
          rgas(1),fgas(1),'ko');  % third series reconstructed from a truncated source line
  set(gg(3),'markerfacecolor',[0 0 0],'markersize',10);
  axis([0 20e-5 250 900]);
  crap=sprintf('t=%f s',t);
  text(1.5e-4,500,crap,'fontsize',12);
  xlabel('r'); ylabel('\phi (K)')

  figure(2)
  plot(t,fgas(1),'ko');
  xlabel('t(s)'); ylabel('\phi_s');
  hold on
  pause(0.001);
end
Matlab Program 7.2:
function d2f=CalcRHS(mu,Delta,r,f)

dr=Delta;
N=length(f);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Initializing matrix and allocating memory
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
d2f=zeros(N,1);
d2f(1)=0;
for i=2:N-1
  d2f(i)=mu*((f(i+1)-2*f(i)+f(i-1))/dr^2+(2/r(i))*(f(i+1)-f(i-1))/(2*dr));
end
d2f(N)=0;
7.2 Blasius solution: Contributed by Mr. M. Giacobello
To derive the Blasius boundary layer equation, the starting point is the steady incompressible Navier-Stokes and continuity equations. For a freestream flow that is aligned with the x-axis, these are given as

u ∂u/∂x + v ∂u/∂y = -(1/ρ) ∂p/∂x + ν ( ∂²u/∂x² + ∂²u/∂y² ),   (7.6)

u ∂v/∂x + v ∂v/∂y = -(1/ρ) ∂p/∂y + ν ( ∂²v/∂x² + ∂²v/∂y² ),   (7.7)

∂u/∂x + ∂v/∂y = 0.   (7.8)
For a boundary layer developing on a semi-infinite flat plate, these equations are subject to the boundary conditions

u = v = 0   for   x ≥ 0,  y = 0.   (7.9)
The Navier-Stokes equations can be simplified by noting that within the boundary layer u changes from zero at y = 0 to a value close to U_o at y = δ(x). Here δ(x) is a measure of the boundary layer thickness and is defined as the distance from the wall to the point where u reaches 99% of the freestream velocity, U_o. In the wall-parallel direction, the rate of change of u is much more gradual. By considering the order of magnitude of each term in the Navier-Stokes and continuity equations and retaining only the dominant terms, it can be shown that equations (7.6) and (7.8) reduce to

u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²,   (7.10)

∂u/∂x + ∂v/∂y = 0.   (7.11)
These equations must be solved within the boundary layer subject to the boundary conditions

u = v = 0   for   x ≥ 0,  y = 0,   (7.12)

and the asymptotic conditions

u → U_o,  v → 0   as   y → ∞.   (7.13)
To simplify the solution further, the stream function, ψ, is introduced, such that

u = ∂ψ/∂y,   (7.14)

v = -∂ψ/∂x.   (7.15)

A dimensionless coordinate η is defined as

η = y ( U_o / (ν x) )^{1/2},   (7.16)

and a dimensionless stream function f(η) is also defined by writing

ψ = (ν U_o x)^{1/2} f(η).   (7.17)

In terms of f(η), equation (7.10) may be re-written as

∂³f/∂η³ + (1/2) f ∂²f/∂η² = 0.   (7.18)

The boundary conditions (7.12) become

f(0) = ∂f/∂η (0) = 0,   (7.19)

and the asymptotic condition (7.13) becomes

∂f/∂η (∞) = 1.   (7.20)
Equations (7.18) to (7.20) comprise a boundary value problem for f(η). Due to its non-linearity, equation (7.18) cannot be solved in closed form and Blasius resorted to a series solution. Today it is much easier to solve this equation using a computer. Here a fourth-order Runge-Kutta method is used.

Runge-Kutta integration methods are best suited to initial value problems. To solve this boundary value problem, it is first recast as the initial value problem

∂³f/∂η³ + (1/2) f ∂²f/∂η² = 0,   (7.21)

f(0) = ∂f/∂η (0) = 0,   (7.22)

∂²f/∂η² (0) = t_k.   (7.23)

The value t_k must then be chosen to ensure that

lim_{k→∞} ∂f/∂η (∞, t_k) = 1.   (7.24)
Here the correct value of t_k is converged upon iteratively using the secant method. An initial guess of t_k is made and equation (7.21) is integrated to "infinity". At this point the computed value of ∂f/∂η is compared to the specified boundary value. If the agreement is not satisfactory, another guess is made. This is known as a shooting method. In practice it is not necessary to integrate to "infinity"; integrating from η = 0 to 20 was found to be sufficient for an accurate solution. The numerical solution for f, ∂f/∂η and ∂²f/∂η² versus η is presented in figure 7.2.

Once f(η) is obtained, u(η) and v(η) may be calculated directly from their relationship to the dimensionless stream function. That is,

u(η) = U_o ∂f/∂η,   (7.25)

v(η) = (1/2) (ν U_o / x)^{1/2} ( η ∂f/∂η - f ).   (7.26)

u(η)/U_o and v(η)/U_o are plotted versus y/δ(x) in figure 7.3 for Re_x = U_o x / ν = 1.
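The shooting procedure described above can be sketched compactly. The code below is a Python/NumPy sketch whose structure mirrors Matlab Program 7.3 (the function name `integrate_blasius` is ours); the converged t_k should approach the well-known Blasius value f''(0) ≈ 0.332.

```python
import numpy as np

def integrate_blasius(fpp0, eta_max=20.0, n=800):
    """RK4-integrate f''' = -0.5 f f'' from eta = 0 with f(0) = f'(0) = 0 and
    f''(0) = fpp0 (the shooting parameter t_k). Returns f'(eta_max)."""
    h = eta_max / n
    w = np.array([0.0, 0.0, fpp0])                      # w = [f, f', f'']
    rhs = lambda w: np.array([w[1], w[2], -0.5 * w[0] * w[2]])
    for _ in range(n):
        k1 = h * rhs(w)
        k2 = h * rhs(w + k1 / 2)
        k3 = h * rhs(w + k2 / 2)
        k4 = h * rhs(w + k3)
        w = w + (k1 + 2 * (k2 + k3) + k4) / 6
    return w[1]

# Secant iteration on t_k = f''(0) so that f'("infinity") -> 1.
t0, t1 = 0.1, 0.5                                       # two initial guesses
y0, y1 = integrate_blasius(t0) - 1.0, integrate_blasius(t1) - 1.0
for _ in range(60):
    if abs(y1) < 1e-10 or y1 == y0:
        break
    t2 = t1 - y1 * (t1 - t0) / (y1 - y0)
    t0, y0 = t1, y1
    t1, y1 = t2, integrate_blasius(t2) - 1.0
# t1 now holds the converged Blasius curvature f''(0)
```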
Figure 7.2: Numerical solution for f, ∂f/∂η and ∂²f/∂η² versus η.
Figure 7.3: Numerical solution for u(η)/U_o and v(η)/U_o versus y/δ(x) for Re_x = 1.
Matlab Program 7.3:
%-------------------------------------------------------------%
% Name: Matteo Giacobello.                                     %
% Date: 01/07/05.                                              %
% Solution to the Blasius Boundary Layer Profile.              %
% A shooting method is used to solve the BVP as an IVP.        %
% The secant method is used to accelerate towards the          %
% best guess of f''(0).                                        %
%-------------------------------------------------------------%
% Note: w1 == f(y), w2 == dfdy, w3 == d2fdy2.                  %
%-------------------------------------------------------------%
% As Chong would say, "it works like a charm".                 %
%-------------------------------------------------------------%
% Check that what I'm solving is correct.
%
function main
close all; clear all; clc;

error_tol = 1e-5;
nu = 1.0;
x  = 1.0;
Uo = 1.0;
output_tag = 1;   % 1 == output, 0 == no output.
N  = 800;   % number of grid points is N+1.
kk = 1;
M  = 100;   % maximum number of iterations.

a = 0.0;    % domain boundaries.
b = 20.0;

ya  = 0.0;  % boundary conditions: f(0) = 0.
dya = 0.0;  % f'(0) = 0.
dyb = 1.0;  % f'(infty) = 1.0.

h = (b-a)/N;              % grid spacing.
tkm2 = (dyb-dya)/(b-a);   % first guess of d2f/dx2.
for ii = 1:N+1
  eta(ii) = a + (ii-1)*h;
end

w1(1) = ya; w2(1) = dya; w3(1) = tkm2;
[w1,w2,w3] = RKutta(w1(1),w2(1),w3(1),a,h,N);
ykm2 = w2(N+1);
tkm1 = tkm2 + (dyb-ykm2)/(b-a);   % second guess of d2f/dx2.

while (kk <= M)
  w1(1) = ya; w2(1) = dya; w3(1) = tkm1;
  [w1,w2,w3] = RKutta(w1(1),w2(1),w3(1),a,h,N);
  ykm1 = w2(N+1);
  tk   = tkm1-(ykm1-dyb)*(tkm1-tkm2)/(ykm1-ykm2);
  tkm2 = tkm1;
  ykm2 = ykm1;
  tkm1 = tk;
  kk   = kk + 1;
  % Convergence test; the end of this line is truncated in the source and
  % has been reconstructed:
  if (abs(w2(N+1)-dyb) < error_tol)
    break;
  end
end
% The sprintf defining "string" was lost in extraction; reconstructed:
string = sprintf('Converged after %d secant iterations.',kk);
disp(string);
%-------------------------------------------------------------%
% return to physical space.                                    %
% Note: u = dPsi/dx is calculated using 2nd order FD.          %
%       The finite difference has been checked for             %
%       a known Psi.                                           %
%       v = 0.5*sqrt(nuUo/x)*(eta*f'-f).                       %
%-------------------------------------------------------------%
for ii = 1:N+1
  y(ii)   = sqrt(nu*x/Uo)*eta(ii);
  Psi(ii) = sqrt(nu*Uo*x)*w1(ii);   % The streamfunction.
  %Psi(ii) = y(ii)^3;
end

%ii = 1;   % forward difference.
%u(ii) = (-3*Psi(ii) + 4.0*Psi(ii+1) - Psi(ii+2))/(2*(y(ii+1)-y(ii)));
%for ii = 2:N   % central difference.
%  u(ii) = (Psi(ii+1)-Psi(ii-1))/(2*(y(ii+1)-y(ii)));
%end
%ii = N+1;   % backward difference.
%u(ii) = (Psi(ii-2)-4.0*Psi(ii-1)+3*Psi(ii))/(2*(y(ii)-y(ii-1)));

for ii = 1:N+1
  u(ii) = Uo*w2(ii);
  v(ii) = 0.5*sqrt(nu*Uo/x)*(eta(ii)*w2(ii)-w1(ii));
end

figure;
plot(u,y,'-x', v,y,'-o')
legend('u','v');
xlabel('U'); ylabel('y');
axis([0.0 1.1 0.0 10.0])
%title('Blasius ZPG Boundary Layer Profile in Physical Coordinates');
grid on;
%-------------------------------------------------------------%
% Finally normalise by U_99.                                   %
%-------------------------------------------------------------%
for ii = 1:N+1
  if (u(ii) >= 0.99*Uo)
    y99 = y(ii)
    u99 = u(ii);
    break;
  end
end

figure;
plot(u/Uo,y/y99,'-x', v/Uo,y/y99,'-o')
legend('u/Uo','v/Uo');
%title('Normalised Blasius ZPG Boundary Layer Profile');
xlabel('U/Uo'); ylabel('y/delta(x)');
axis([0.0 1.1 0.0 1.0])
grid on;
if (output_tag == 1)
  out1 = sprintf('f_vs_eta.plt');
  out2 = sprintf('u_v_vs_y.plt');
  fid1 = fopen(out1,'a');
  fid2 = fopen(out2,'a');
  fprintf(fid1,'variables="eta","f", "df", "d2f"\n');
  fprintf(fid2,'variables="y","u", "v"\n');
  for ii = 1:N+1
    fprintf(fid1,'%10.8g %10.8g %10.8g %10.8g\n', eta(ii), w1(ii), w2(ii), w3(ii));
    fprintf(fid2,'%10.8g %10.8g %10.8g\n', y(ii)/y99, u(ii), v(ii));
  end
  fclose(fid1);
  fclose(fid2);
end

string = sprintf('The boundary layer thickness is approximately Delta = %g ',y99);
disp(string);

function [w1,w2,w3] = RKutta(w1,w2,w3,a,h,N)
for ii = 1:1:N
  eta(ii) = a + (ii-1)*h;
  K11 = h*rhs1(eta(ii), w1(ii), w2(ii), w3(ii));
  K12 = h*rhs2(eta(ii), w1(ii), w2(ii), w3(ii));
  K13 = h*rhs3(eta(ii), w1(ii), w2(ii), w3(ii));
  K21 = h*rhs1(eta(ii)+h/2,w1(ii)+K11/2,w2(ii)+K12/2,w3(ii)+K13/2);
  K22 = h*rhs2(eta(ii)+h/2,w1(ii)+K11/2,w2(ii)+K12/2,w3(ii)+K13/2);
  K23 = h*rhs3(eta(ii)+h/2,w1(ii)+K11/2,w2(ii)+K12/2,w3(ii)+K13/2);
  K31 = h*rhs1(eta(ii)+h/2,w1(ii)+K21/2,w2(ii)+K22/2,w3(ii)+K23/2);
  K32 = h*rhs2(eta(ii)+h/2,w1(ii)+K21/2,w2(ii)+K22/2,w3(ii)+K23/2);
  K33 = h*rhs3(eta(ii)+h/2,w1(ii)+K21/2,w2(ii)+K22/2,w3(ii)+K23/2);
  K41 = h*rhs1(eta(ii)+h,w1(ii)+K31,w2(ii)+K32,w3(ii)+K33);
  K42 = h*rhs2(eta(ii)+h,w1(ii)+K31,w2(ii)+K32,w3(ii)+K33);
  K43 = h*rhs3(eta(ii)+h,w1(ii)+K31,w2(ii)+K32,w3(ii)+K33);

  w1(ii+1) = w1(ii) + (K11+2*(K21+K31)+K41)/6;
  w2(ii+1) = w2(ii) + (K12+2*(K22+K32)+K42)/6;
  w3(ii+1) = w3(ii) + (K13+2*(K23+K33)+K43)/6;
end

function rhs1 = rhs1(x, w1, w2, w3)
rhs1 = w2;

function rhs2 = rhs2(x, w1, w2, w3)
rhs2 = w3;

function rhs3 = rhs3(x, w1, w2, w3)
rhs3 = -0.5*w1*w3;
%-------------------------------------------------------------%
function rhs1 = rhs1(x, w1, w2, w3) rhs1 = w2; function rhs2 = rhs2(x, w1, w2, w3) rhs2 = w3; function rhs3 = rhs3(x, w1, w2, w3) rhs3 = -0.5*w1*w3; %-------------------------------------------------------------%