STRUCTURAL DYNAMICS, VOL. 9, Version 0.1

Computational Dynamics

Søren R. K. Nielsen

(Cover figure: the characteristic polynomial P(λ) with roots λ_1, λ_2, λ_3, and the secant approximation y(λ) = P(µ_k) + [(P(µ_k) − P(µ_{k−1}))/(µ_k − µ_{k−1})](λ − µ_k) through the iteration points µ_{k−1}, µ_k, µ_{k+1}.)

Aalborg tekniske Universitetsforlag
May 2003
Contents

6   LINEAR EIGENVALUE PROBLEMS  7
    6.1  Formulation of Linear Eigenvalue Problems  7
    6.2  Characteristic Polynomials  18
    6.3  Eigenvalue Separation Principle  24
    6.4  Shift  27
    6.5  Transformation of GEVP to SEVP  29
    6.6  Exercises  32

7   APPROXIMATE SOLUTION METHODS  33
    7.1  Static Condensation  33
    7.2  Rayleigh-Ritz Analysis  38
    7.3  Error Analysis  47
    7.4  Exercises  52

8   VECTOR ITERATION METHODS  53
    8.1  Introduction  53
    8.2  Inverse and Forward Vector Iteration  54
    8.3  Shift in Vector Iteration  65
    8.4  Inverse Vector Iteration with Rayleigh Quotient Shift  70
    8.5  Vector Iteration with Gram-Schmidt Orthogonalization  73
    8.6  Exercises  77

9   SIMILARITY TRANSFORMATION METHODS  79
    9.1  Introduction  79
    9.2  Special Jacobi Iteration  81
    9.3  General Jacobi Iteration  85
    9.4  Householder Reduction  90
    9.5  QR Iteration  98
    9.6  Exercises  106

10  SOLUTION OF LARGE EIGENVALUE PROBLEMS  107
    10.1  Introduction  107
    10.2  Simultaneous Inverse Vector Iteration  109
    10.3  Subspace Iteration  115
    10.4  Characteristic Polynomial Iteration  123
    10.5  Exercises  130

11  INDEX  131

A   Solutions to Exercises  133
    A.1   Exercise 6.1  134
    A.2   Exercise 6.2  138
    A.3   Exercise 6.3  141
    A.4   Exercise 6.4: Theory  142
    A.5   Exercise 7.1  146
    A.6   Exercise 7.2  149
    A.7   Exercise 7.3  151
    A.8   Exercise 8.1  153
    A.9   Exercise 8.2  156
    A.10  Exercise 9.3  158
    A.11  Exercise 9.6  164
    A.12  Exercise 10.1  169
    A.13  Exercise 10.3  172
    A.14  Exercise 10.5  175
Preface
This text has been prepared for the course on Computational Mechanics given in the 8th semester of the structural engineering programme in civil engineering at Aalborg University.

The rather weird pagination, starting with Chapter 6, reflects the fact that only the latter half of the text, dealing with eigenvalue analysis, had been completed by March 2003. The first part, dealing with subjects such as numerical analysis of Fourier series, Fourier and Laplace transforms, and numerical integration of dynamic equations of motion, will not be ready until March 2004.

Answers to all exercises given at the end of each chapter can be downloaded from the home page of the course at the address: www.civil.auc.dk/i5/engelsk/dyn/index/htm

Aalborg University, May 2003
Søren R. K. Nielsen
CHAPTER 6
LINEAR EIGENVALUE PROBLEMS

6.1 Formulation of Linear Eigenvalue Problems
The basic equation of motion for forced vibrations of a linear viscous damped n-degree-of-freedom system reads¹

\[
M\ddot{x} + C\dot{x} + Kx = f(t), \quad t > 0, \qquad x(0) = x_0, \quad \dot{x}(0) = \dot{x}_0
\tag{6-1}
\]

x(t) is the vector of displacements from the static equilibrium state, and f(t) is the dynamic load vector. K, M and C denote the stiffness, mass and damping matrices, all of dimension n × n. For any vector a ≠ 0 these fulfill the following positive definiteness and symmetry properties

\[
a^T K a > 0, \quad K = K^T, \qquad
a^T M a > 0, \quad M = M^T, \qquad
a^T C a > 0
\tag{6-2}
\]

If the structural system is not supported against stiff-body motions, the stiffness matrix is merely positive semi-definite, so a^T K a ≥ 0. Correspondingly, if some degrees of freedom do not carry kinetic energy (pseudo degrees of freedom with zero mass or zero mass moment of inertia), the mass matrix is merely positive semi-definite, so a^T M a ≥ 0. The positive definiteness of the damping matrix is a formal statement of the physical property that any non-zero velocity of the system should be related with energy dissipation. C need not fulfill any symmetry properties; however, energy dissipation is confined to its symmetric part. So-called aeroelastic loads are external dynamic loads depending on the structural deformation, which are often assumed to be proportional to the structural velocity, i.e. f(t) = −C_a ẋ(t). If the aeroelastic damping matrix C_a is absorbed in the total damping matrix C, no definiteness property can be stated for the latter matrix.
¹ S.R.K. Nielsen: Vibration Theory, Vol. 1. Linear Vibration Theory. Aalborg tekniske Universitetsforlag, 1998.
Undamped eigenvibrations (C = 0, f(t) ≡ 0) are obtained as linearly independent solutions to the homogeneous matrix differential equation

\[
M\ddot{x} + Kx = 0
\tag{6-3}
\]

Solutions are searched for in the form

\[
x(t) = \Phi^{(j)} e^{i\omega_j t}
\tag{6-4}
\]

where i = √−1 is the complex unit. Insertion of (6-4) into (6-3) provides the following homogeneous system of linear equations for the determination of the amplitude Φ^(j) and the unknown constant ω_j

\[
\left(K - \lambda_j M\right)\Phi^{(j)} = 0, \qquad \lambda_j = \omega_j^2
\tag{6-5}
\]

(6-5) is a so-called generalized eigenvalue problem (GEVP). If M = I, where I is the identity matrix, the eigenvalue problem is referred to as a special eigenvalue problem (SEVP). The necessary condition for non-trivial solutions (i.e. Φ^(j) ≠ 0) is that the determinant of the coefficient matrix vanishes. This leads to the characteristic equation

\[
P(\lambda) = \det\left(K - \lambda M\right) = 0
\tag{6-6}
\]

P(λ) is known as the characteristic polynomial. It may be expanded as

\[
P(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n
\tag{6-7}
\]

The constants a_0, a_1, …, a_n are known as the invariants of the GEVP. This designation stems from the fact that the characteristic polynomial (6-7) is invariant under any rotation of the coordinate system. Obviously, a_0 = (−1)^n det(M) and a_n = det(K). The nth order equation (6-6) determines n solutions, λ_1, λ_2, …, λ_n. Assume that either M or K is positive definite. Then all eigenvalues λ_j are non-negative and real, and may be ordered in ascending magnitude as follows

\[
0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_{n-1} \le \lambda_n \le \infty
\tag{6-8}
\]

λ_n = ∞ if det(M) = 0. Similarly, λ_1 = 0 if det(K) = 0. The eigenvalues are denoted simple if λ_1 < λ_2 < ⋯ < λ_{n-1} < λ_n. The undamped circular eigenfrequencies are related to the eigenvalues as follows

\[
\omega_j = \sqrt{\lambda_j}
\tag{6-9}
\]
The corresponding solutions for the amplitudes, Φ^(1), …, Φ^(n), are denoted the undamped eigenmodes of the system, which are real as well. The eigenvalue problems (6-5) can be assembled into the following matrix formulation

\[
K\left[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\right] =
M\left[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\right]
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\quad\Rightarrow\quad
K\Phi = M\Phi\Lambda
\tag{6-10}
\]

where

\[
\Lambda = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\tag{6-11}
\]

and Φ is the so-called modal matrix of dimension n × n, defined as

\[
\Phi = \left[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\right]
\tag{6-12}
\]
If the eigenvalues are simple, the eigenmodes fulfill the following orthogonality properties¹

\[
\Phi^{(i)T} M \Phi^{(j)} =
\begin{cases} 0, & i \ne j\\ M_i, & i = j \end{cases}
\tag{6-13}
\]

\[
\Phi^{(i)T} K \Phi^{(j)} =
\begin{cases} 0, & i \ne j\\ \omega_i^2 M_i, & i = j \end{cases}
\tag{6-14}
\]

where M_i denotes the modal mass. The orthogonality properties (6-13) can be assembled into the following matrix formulation

\[
\left[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\right]^T M \left[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\right] =
\begin{bmatrix}
M_1 & 0 & \cdots & 0\\
0 & M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & M_n
\end{bmatrix}
\quad\Rightarrow\quad
\Phi^T M \Phi = m
\tag{6-15}
\]
where

\[
m = \begin{bmatrix}
M_1 & 0 & \cdots & 0\\
0 & M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & M_n
\end{bmatrix}
\tag{6-16}
\]

The corresponding grouping of the orthogonality properties (6-14) reads

\[
\Phi^T K \Phi = k
\tag{6-17}
\]

where

\[
k = \begin{bmatrix}
\omega_1^2 M_1 & 0 & \cdots & 0\\
0 & \omega_2^2 M_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \omega_n^2 M_n
\end{bmatrix}
\tag{6-18}
\]

If the eigenvalues are all simple, the eigenmodes become linearly independent, which means that the inverse Φ⁻¹ exists. In the following it is generally assumed that the eigenmodes are normalized to unit modal mass, so m = I. For the special eigenvalue problem, where M = I, it then follows from (6-15) that

\[
\Phi^{-1} = \Phi^T
\tag{6-19}
\]

A matrix fulfilling (6-19) is known as orthonormal or unitary, and specifies a rotation of the coordinate system. All column and row vectors have length 1 and are mutually orthogonal. It follows from (6-15) and (6-17) that, in case of simple eigenvalues, a so-called similarity transformation exists, defined by the modal matrix Φ, which reduces the mass and stiffness matrices to diagonal form. In case of multiple eigenvalues the problem becomes considerably more complicated. For the standard eigenvalue problem with multiple eigenvalues it can be shown that the stiffness matrix merely reduces to the so-called Jordan normal form under the considered similarity transformation, given as follows
\[
k = \begin{bmatrix}
k_1 & 0 & \cdots & 0\\
0 & k_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & k_m
\end{bmatrix}
\tag{6-20}
\]

where m ≤ n denotes the number of different eigenvalues, and k_i signifies the so-called Jordan boxes, which are block matrices of the form

\[
k_i = \omega_i^2, \quad
\begin{bmatrix}
\omega_i^2 & 1\\
0 & \omega_i^2
\end{bmatrix}, \quad
\begin{bmatrix}
\omega_i^2 & 1 & 0\\
0 & \omega_i^2 & 1\\
0 & 0 & \omega_i^2
\end{bmatrix}, \quad
\begin{bmatrix}
\omega_i^2 & 1 & 0 & 0\\
0 & \omega_i^2 & 1 & 0\\
0 & 0 & \omega_i^2 & 1\\
0 & 0 & 0 & \omega_i^2
\end{bmatrix}, \ \ldots
\tag{6-21}
\]
The equations of motion (6-1) may be reformulated in the following state vector form of coupled 1st order differential equations

\[
A\dot{z} + Bz = F(t), \quad t > 0, \qquad z(0) = z_0
\tag{6-22}
\]

where

\[
z(t) = \begin{bmatrix} x(t)\\ \dot{x}(t) \end{bmatrix}, \quad
F(t) = \begin{bmatrix} f(t)\\ 0 \end{bmatrix}, \quad
A = \begin{bmatrix} C & M\\ M & 0 \end{bmatrix}, \quad
B = \begin{bmatrix} K & 0\\ 0 & -M \end{bmatrix}, \quad
z_0 = \begin{bmatrix} x_0\\ \dot{x}_0 \end{bmatrix}
\tag{6-23}
\]

Damped eigenvibrations are obtained as linearly independent solutions to the homogeneous matrix differential equation

\[
A\dot{z} + Bz = 0
\tag{6-24}
\]

Analogous to (6-4), solutions are searched for in the form

\[
z(t) = \Psi^{(j)} e^{\lambda_j t}
\tag{6-25}
\]

Insertion of (6-25) into (6-24) provides the following homogeneous system of linear equations for the determination of the amplitude Ψ^(j) and the unknown constant λ_j

\[
\left(\lambda_j A + B\right)\Psi^{(j)} = 0
\tag{6-26}
\]

(6-26) is a GEVP of dimension 2n. The principal difference to (6-5) is that neither A nor B is a positive definite matrix. For this reason the damped eigenvalues λ_j and the damped eigenmodes Ψ^(j) are generally complex. Upon complex conjugation of (6-26) it is seen that if (λ, Ψ) denotes an eigen-pair (solution) to (6-26), then (λ*, Ψ*) is also an eigen-pair, where * denotes complex conjugation. Hence all eigen-pairs are either real or mutually complex conjugate. Since the orthogonality conditions (6-13) and (6-14) rely on the symmetry properties of the mass and stiffness matrices, and the matrix A is generally not symmetric, similar orthogonality properties do not hold for the damped eigenmodes. Instead, the so-called adjoint eigenvalue problem to (6-26) is considered
\[
\left(\nu_i A^T + B^T\right)\Psi_a^{(i)} = 0
\tag{6-27}
\]

It can be shown that the eigenvalues of the direct and the adjoint eigenvalue problem are identical, i.e. ν_i = λ_i. Moreover, the eigenmodes of the direct and adjoint eigenvalue problems fulfill the following orthogonality properties, see the proof below

\[
\Psi_a^{(i)T} A \Psi^{(j)} =
\begin{cases} 0, & i \ne j\\ m_i, & i = j \end{cases}
\tag{6-28}
\]

\[
\Psi_a^{(i)T} B \Psi^{(j)} =
\begin{cases} 0, & i \ne j\\ -\lambda_i m_i, & i = j \end{cases}
\tag{6-29}
\]
m_i denotes the damped modal mass. (6-28) and (6-29) can be assembled into the matrix formulations

\[
\Psi_a^T A \Psi = a
\tag{6-30}
\]

\[
\Psi_a^T B \Psi = b
\tag{6-31}
\]

where Ψ and Ψ_a are modal matrices of dimension 2n × 2n, defined as

\[
\Psi = \left[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(2n)}\right]
\tag{6-32}
\]

\[
\Psi_a = \left[\Psi_a^{(1)}\ \Psi_a^{(2)}\ \cdots\ \Psi_a^{(2n)}\right]
\tag{6-33}
\]

and

\[
a = \begin{bmatrix}
m_1 & 0 & \cdots & 0\\
0 & m_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & m_{2n}
\end{bmatrix}
\tag{6-34}
\]

\[
b = \begin{bmatrix}
-\lambda_1 m_1 & 0 & \cdots & 0\\
0 & -\lambda_2 m_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & -\lambda_{2n} m_{2n}
\end{bmatrix}
\tag{6-35}
\]

It follows from (6-30) that the direct and adjoint modal matrices are related as follows

\[
\Psi_a = \left(A^{-1}\right)^T \left(\Psi^{-1}\right)^T a^T = \left(A^T\right)^{-1} \left(\Psi^T\right)^{-1} a
\tag{6-36}
\]
If A and B are symmetric, the direct and adjoint eigenvalue problems (6-26) and (6-27) become identical. Such a problem is denoted self-adjoint. Obviously, (6-5) is self-adjoint. In what follows we shall primarily consider the GEVP (6-5), i.e. the involved system matrices are symmetric and non-negative definite.
Box 6.1: Proof of orthogonality properties of damped modes

(6-26) is pre-multiplied with Ψ_a^(i)T, and (6-27) is pre-multiplied with Ψ^(j)T, leading to the identities

\[
\lambda_j \Psi_a^{(i)T} A \Psi^{(j)} + \Psi_a^{(i)T} B \Psi^{(j)} = 0
\tag{6-37}
\]

\[
\nu_i \Psi^{(j)T} A^T \Psi_a^{(i)} + \Psi^{(j)T} B^T \Psi_a^{(i)} = 0
\quad\Rightarrow\quad
\nu_i \Psi_a^{(i)T} A \Psi^{(j)} + \Psi_a^{(i)T} B \Psi^{(j)} = 0
\tag{6-38}
\]

The last statement follows from transposing the previous one. Withdrawal of (6-38) from (6-37) provides

\[
\left(\lambda_j - \nu_i\right)\Psi_a^{(i)T} A \Psi^{(j)} = 0
\tag{6-39}
\]

Since Ψ_a^(i)T A Ψ^(i) ≠ 0, (6-39) can only be fulfilled for i = j if ν_i = λ_i.

Next, presuming simple eigenvalues, so λ_i ≠ λ_j for i ≠ j, (6-39) can only be fulfilled if Ψ_a^(i)T A Ψ^(j) = 0, corresponding to (6-28).

Finally, it then follows from (6-37) that Ψ_a^(i)T B Ψ^(j) = 0 for i ≠ j, and that Ψ_a^(i)T B Ψ^(i) = −λ_i m_i for i = j, corresponding to (6-29).
Example 6.1: Verification of eigensolutions

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} \tfrac54 & 0\\ 0 & \tfrac15 \end{bmatrix}, \qquad
K = \begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}
\tag{6-40}
\]

Verify that the eigensolutions with modal masses normalized to 1 are given by

\[
\Lambda = \begin{bmatrix} \omega_1^2 & 0\\ 0 & \omega_2^2 \end{bmatrix}
= \begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix}, \qquad
\Phi = \left[\Phi^{(1)}\ \Phi^{(2)}\right]
= \begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}
\tag{6-41}
\]

Based on the proposed eigensolutions the following calculations are performed, cf. (6-10)

\[
K\Phi = \begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}
\begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 2 & 6\\ \tfrac25 & -\tfrac{24}{5} \end{bmatrix}, \qquad
M\Phi\Lambda = \begin{bmatrix} \tfrac54 & 0\\ 0 & \tfrac15 \end{bmatrix}
\begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}
\begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix}
= \begin{bmatrix} 2 & 6\\ \tfrac25 & -\tfrac{24}{5} \end{bmatrix}
\tag{6-42}
\]

This proves the validity of the proposed eigensolutions. The orthonormality follows from the following calculations, cf. (6-15) and (6-17)

\[
\Phi^T M \Phi = \begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}^T
\begin{bmatrix} \tfrac54 & 0\\ 0 & \tfrac15 \end{bmatrix}
\begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \qquad
\Phi^T K \Phi = \begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}^T
\begin{bmatrix} 5 & -2\\ -2 & 2 \end{bmatrix}
\begin{bmatrix} \tfrac45 & \tfrac25\\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 2 & 0\\ 0 & 12 \end{bmatrix}
\tag{6-43}
\]
Example 6.2: M- and K-orthogonal vectors

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12 \end{bmatrix}, \qquad
K = \begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2 \end{bmatrix}
\tag{6-44}
\]

Additionally, the following vectors are considered

\[
v_1 = \begin{bmatrix} 1\\ \tfrac{\sqrt2}{2}\\ 0 \end{bmatrix}, \qquad
v_2 = \begin{bmatrix} 1\\ -\tfrac{\sqrt2}{2}\\ 0 \end{bmatrix}
\tag{6-45}
\]

From (6-45) the following matrix is formed

\[
V = [v_1\ v_2] = \begin{bmatrix} 1 & 1\\ \tfrac{\sqrt2}{2} & -\tfrac{\sqrt2}{2}\\ 0 & 0 \end{bmatrix}
\tag{6-46}
\]

We may then perform the following calculations, cf. (6-15) and (6-17)

\[
V^T M V = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \qquad
V^T K V = \begin{bmatrix} 4-\sqrt2 & 0\\ 0 & 4+\sqrt2 \end{bmatrix}
= \begin{bmatrix} 2.5858 & 0\\ 0 & 5.4142 \end{bmatrix}
\tag{6-47}
\]

(6-47) shows that the vectors v_1 and v_2 are mutually orthogonal with weights M and K, and that both have been normalized to unit modal mass. As will be shown in Example 6.3, neither v_1 nor v_2 is an eigenmode, and the eigenvalues are different from 2.5858 and 5.4142. However, if three linearly independent vectors are mutually orthogonal with weights M and K, they will be eigenmodes of the system.
Example 6.3: Analytical calculation of eigensolutions

The mass and stiffness matrices defined in Example 6.2 are considered again. Now an analytical solution for the eigenmodes and eigenvalues is wanted. The generalized eigenvalue problem (6-5) becomes

\[
\begin{bmatrix}
2-\tfrac12\lambda_j & -1 & 0\\
-1 & 4-\lambda_j & -1\\
0 & -1 & 2-\tfrac12\lambda_j
\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}
\tag{6-48}
\]

The characteristic equation (6-6) becomes

\[
P(\lambda) = \det\begin{bmatrix}
2-\tfrac12\lambda_j & -1 & 0\\
-1 & 4-\lambda_j & -1\\
0 & -1 & 2-\tfrac12\lambda_j
\end{bmatrix}
= \left(2-\tfrac12\lambda_j\right)\left(6-4\lambda_j+\tfrac12\lambda_j^2\right) = 0
\quad\Rightarrow\quad
\lambda_j = \begin{cases} 2, & j=1\\ 4, & j=2\\ 6, & j=3 \end{cases}
\tag{6-49}
\]

Initially, the eigenmodes are normalized by setting an arbitrary component to 1. Here we shall choose Φ_3^(j) = 1. The remaining components Φ_1^(j) and Φ_2^(j) are then determined from any two of the three equations in (6-48). The first and the second equations are chosen, corresponding to

\[
\begin{bmatrix}
2-\tfrac12\lambda_j & -1\\
-1 & 4-\lambda_j
\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)} \end{bmatrix}
= \begin{bmatrix} 0\\ 1 \end{bmatrix}
\quad\Rightarrow\quad
\begin{bmatrix} \Phi_1^{(j)}\\ \Phi_2^{(j)}\\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix}
\dfrac{2}{14-8\lambda_j+\lambda_j^2}\\[2mm]
\dfrac{4-\lambda_j}{14-8\lambda_j+\lambda_j^2}\\[2mm]
1
\end{bmatrix}
\tag{6-50}
\]

The modal matrix with eigenmodes normalized as indicated in (6-50) is denoted Φ̄. This becomes

\[
\bar\Phi = \begin{bmatrix}
1 & -1 & 1\\
1 & 0 & -1\\
1 & 1 & 1
\end{bmatrix}
\tag{6-51}
\]

The modal masses become, cf. (6-15)

\[
m = \bar\Phi^T M \bar\Phi = \begin{bmatrix} 2 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2 \end{bmatrix}
\tag{6-52}
\]

Φ^(1) denotes the 1st eigenmode normalized to unit modal mass. This is related to Φ̄^(1) in the following way

\[
\Phi^{(1)} = \frac{1}{\sqrt{M_1}}\,\bar\Phi^{(1)} = \frac{1}{\sqrt2}\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}
\tag{6-53}
\]

The other modes are treated in the same manner, which results in the following eigensolutions

\[
\Lambda = \begin{bmatrix} \omega_1^2 & 0 & 0\\ 0 & \omega_2^2 & 0\\ 0 & 0 & \omega_3^2 \end{bmatrix}
= \begin{bmatrix} 2 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 6 \end{bmatrix}, \qquad
\Phi = \left[\Phi^{(1)}\ \Phi^{(2)}\ \Phi^{(3)}\right]
= \begin{bmatrix}
\tfrac{\sqrt2}{2} & -1 & \tfrac{\sqrt2}{2}\\
\tfrac{\sqrt2}{2} & 0 & -\tfrac{\sqrt2}{2}\\
\tfrac{\sqrt2}{2} & 1 & \tfrac{\sqrt2}{2}
\end{bmatrix}
\tag{6-54}
\]
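The eigensolutions above are easily checked numerically. The following MATLAB sketch (an added illustration, using only the matrices of Example 6.2 and the built-in function eig; MATLAB is also the language requested in the exercises of Section 6.6) solves the GEVP (6-5) and normalizes each mode to unit modal mass, cf. (6-15):

```matlab
% Solve the GEVP (K - lambda*M)*Phi = 0 for Examples 6.2/6.3 and
% normalize the eigenmodes to unit modal mass, cf. (6-15).
M = [1/2 0 0; 0 1 0; 0 0 1/2];
K = [2 -1 0; -1 4 -1; 0 -1 2];

[Phi, Lambda] = eig(K, M);              % generalized eigenvalue problem
[lambda, idx] = sort(diag(Lambda));     % ascending order, cf. (6-8)
Phi = Phi(:, idx);

for j = 1:size(Phi, 2)                  % scale so that Phi'*M*Phi = I
    Phi(:, j) = Phi(:, j) / sqrt(Phi(:, j)' * M * Phi(:, j));
end

disp(lambda')          % expected: 2 4 6, cf. (6-54)
disp(Phi' * M * Phi)   % expected: identity matrix
disp(Phi' * K * Phi)   % expected: diag(2, 4, 6)
```

Note that eig may return the eigenmodes with arbitrary sign, which is immaterial.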
Example 6.4: Undamped and damped eigenvibrations of 2DOF system

(Fig. 6-1 Two-degrees-of-freedom system: masses 1 kg and 2 kg with degrees of freedom x_1 and x_2, springs of stiffness 100 N/m, 200 N/m and 300 N/m, and dampers of 1 kg/s, 2 kg/s and 3 kg/s.)

The system shown in Fig. 6-1 has the indicated two degrees of freedom x_1 and x_2. The corresponding mass, damping and stiffness matrices become

\[
M = \begin{bmatrix} 1 & 0\\ 0 & 2 \end{bmatrix}\ \mathrm{kg}, \qquad
C = \begin{bmatrix} 5 & -2\\ -2 & 3 \end{bmatrix}\ \mathrm{kg/s}, \qquad
K = \begin{bmatrix} 300 & -200\\ -200 & 500 \end{bmatrix}\ \mathrm{N/m}
\tag{6-55}
\]

The eigensolutions with modal masses normalized to 1 are given as

\[
\Lambda = \begin{bmatrix} \omega_1^2 & 0\\ 0 & \omega_2^2 \end{bmatrix}
= \begin{bmatrix} 131.39 & 0\\ 0 & 418.61 \end{bmatrix}\ \mathrm{s^{-2}}, \qquad
\Phi = \left[\Phi^{(1)}\ \Phi^{(2)}\right]
= \begin{bmatrix} 0.64262 & 0.76618\\ 0.54177 & -0.45440 \end{bmatrix}
\tag{6-56}
\]

The matrices A and B defined by (6-23) become

\[
A = \begin{bmatrix}
5 & -2 & 1 & 0\\
-2 & 3 & 0 & 2\\
1 & 0 & 0 & 0\\
0 & 2 & 0 & 0
\end{bmatrix}, \qquad
B = \begin{bmatrix}
300 & -200 & 0 & 0\\
-200 & 500 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -2
\end{bmatrix}
\tag{6-57}
\]

The eigenvalues and eigenmodes become

\[
\Lambda = \mathrm{diag}\left(\lambda_1,\ \lambda_2,\ \lambda_3,\ \lambda_4\right)
= \mathrm{diag}\big({-2.4737}-20.231i,\ {-2.4737}+20.231i,\ {-0.7763}-11.480i,\ {-0.7763}+11.480i\big)
\tag{6-58}
\]

\[
\Psi = \left[\Psi^{(1)}\ \Psi^{(2)}\ \Psi^{(3)}\ \Psi^{(4)}\right] =
\begin{bmatrix}
-0.00503+0.04797i & -0.00503-0.04797i & 0.00037-0.08192i & 0.00037+0.08192i\\
0.00875+0.02658i & 0.00875-0.02658i & -0.00803-0.06908i & -0.00803+0.06908i\\
0.98300-0.01700i & 0.98300+0.01700i & -0.94070+0.05930i & -0.94070-0.05930i\\
-0.55935-0.11133i & -0.55935+0.11133i & -0.78686+0.14585i & -0.78686-0.14585i
\end{bmatrix}
\tag{6-59}
\]

Since A = A^T and B = B^T the problem is self-adjoint, and Ψ_a = Ψ. The diagonal matrices a and b become, cf. (6-30), (6-31)

\[
a = \Psi^T A \Psi = \mathrm{diag}\big({-0.05786}+0.14403i,\ {-0.05786}-0.14403i,\ 0.04957+0.36741i,\ 0.04957-0.36741i\big)
\tag{6-60}
\]

\[
b = \Psi^T B \Psi = \mathrm{diag}\big({-3.0571}-0.81439i,\ {-3.0571}+0.81439i,\ {-4.17941}+0.85432i,\ {-4.17941}-0.85432i\big)
\tag{6-61}
\]
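The damped eigensolutions can be reproduced with a few lines of MATLAB. The sketch below (an added illustration) simply restates the eigenproblem (6-26), (λA + B)Ψ = 0, as the generalized problem BΨ = −λAΨ:

```matlab
% Damped eigenproblem (6-26) for the 2DOF system of Example 6.4:
% (lambda*A + B)*Psi = 0  is solved as  B*Psi = -lambda*A*Psi.
M = [1 0; 0 2];            % kg
C = [5 -2; -2 3];          % kg/s
K = [300 -200; -200 500];  % N/m

A = [C M; M zeros(2)];     % state-space matrices, cf. (6-23)
B = [K zeros(2); zeros(2) -M];

[Psi, D] = eig(-B, A);     % eigenvalues lambda_j in diag(D)
lambda = diag(D)           % expected: -2.4737 +/- 20.231i, -0.7763 +/- 11.480i
```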
6.2 Characteristic Polynomials

Since the matrix K − λM is symmetric, it may be Gauss factorized in the form

\[
K - \lambda M = L D L^T
\tag{6-62}
\]

where L is a lower triangular matrix with units in the main diagonal, and D is a diagonal matrix, given as

\[
L = \begin{bmatrix}
1 & & & &\\
l_{21} & 1 & & &\\
l_{31} & l_{32} & 1 & &\\
\vdots & \vdots & \vdots & \ddots &\\
l_{n1} & l_{n2} & l_{n3} & \cdots & 1
\end{bmatrix}
\tag{6-63}
\]

\[
D = \begin{bmatrix}
d_{11} & 0 & \cdots & 0\\
0 & d_{22} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & d_{nn}
\end{bmatrix}
\tag{6-64}
\]

Since det(L) = det(L^T) = 1, the following representation of the characteristic polynomial (6-6) is obtained

\[
P(\lambda) = \det\left(L D L^T\right) = \det L\cdot\det D\cdot\det L^T = \det D = d_{11} d_{22} \cdots d_{nn}
\tag{6-65}
\]

At the same time (6-7) can be written in the form

\[
P(\lambda) = a_0\left(\lambda-\lambda_1\right)\left(\lambda-\lambda_2\right)\cdots\left(\lambda-\lambda_n\right)
\tag{6-66}
\]

Despite the striking similarity between (6-65) and (6-66), d_ii is very different from the corresponding factor (λ − λ_i) in (6-66), as demonstrated in Example 6.6 below. Let λ vary in the interval ]0, ∞[. From (6-7) it follows that P(0) = a_n = det(K) ≥ 0. Since no factor in (6-66) changes sign for λ ∈ ]0, λ_1[, it follows that P(λ) > 0 in this interval. For λ ∈ ]λ_1, λ_2[ the factor (λ − λ_1) has changed its sign, while the remaining factors retain their sign. Hence, P(λ) changes sign at the passage of the eigenvalue λ_1. Similar sign changes occur at the passage of the other eigenvalues. Then, similar sign changes must take place in (6-65). For λ ∈ ]0, λ_1[ all diagonal components d_ii are positive. At the passage of λ_1, so λ ∈ ]λ_1, λ_2[, exactly one diagonal component becomes negative. At the passage of λ_2 one extra component becomes negative. Generally, if m components are negative, it can be concluded that λ is placed somewhere in the interval ]λ_m, λ_{m+1}[. This property can be used to bound the eigenvalues, as demonstrated in Example 6.7 below. Actually, the principle can be used to calculate, say, the jth eigenvalue λ_j with arbitrary accuracy. The method is simply to make an initial sweep of calculations of P(λ) until j components in the main diagonal of D are negative. Successively, we can then perform additional calculations to reduce the interval where the jth sign change takes place. This procedure for calculating eigenvalues is known as the telescope method.
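The telescope method lends itself to a simple bisection implementation. The following MATLAB sketch illustrates the principle (it is not an algorithm from the text): it counts the negative diagonal components of D in the factorization (6-62), assuming that no zero pivot is encountered, and bisects until the jth eigenvalue is bracketed.

```matlab
function demo_telescope
% Telescope method: bracket the j-th eigenvalue of (K - lambda*M)*Phi = 0
% by counting negative pivots d_ii in K - lambda*M = L*D*L', cf. (6-62),
% (6-65). Illustrated on Example 6.2 (exact eigenvalues 2, 4, 6).
M = [1/2 0 0; 0 1 0; 0 0 1/2];
K = [2 -1 0; -1 4 -1; 0 -1 2];
j = 2;                          % index of the eigenvalue to be located
a = 0; b = 8;                   % initial bracket, assumed to contain lambda_j
while b - a > 1e-8
    mu = (a + b)/2;
    if negpivots(K - mu*M) >= j
        b = mu;                 % at least j eigenvalues are smaller than mu
    else
        a = mu;                 % fewer than j eigenvalues are smaller than mu
    end
end
fprintf('lambda_%d = %.6f\n', j, (a + b)/2)   % expected: 4
end

function m = negpivots(A)
% Number of negative pivots of A = L*D*L' (Gauss factorization without
% pivoting; assumes that no zero pivot is encountered).
n = size(A, 1);
for i = 1:n-1                   % sequential elimination, cf. Box 6.2
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i)*A(i, i+1:n)/A(i, i);
end
m = sum(diag(A) < 0);
end
```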
Box 6.2: Gauss factorization of a symmetric matrix

Gauss factorization reduces a symmetric matrix K of dimension n × n to an upper triangular matrix S in a sequence of n − 1 matrix multiplications. After the first i − 1 matrix multiplications the following matrix is considered

\[
K^{(i)} = L_{i-1}^{-1} L_{i-2}^{-1} \cdots L_1^{-1} K, \qquad i = 2, \ldots, n
\tag{6-67}
\]

where K^(1) = K. Sequentially, the indicated matrix multiplications produce zeros below the main diagonal of the columns j = 1, …, i − 1. Then, pre-multiplication of K^(i) with L_i^{-1} will produce zeros below the main diagonal of the ith column without affecting the zeros in the previous columns. L_i^{-1} is a lower triangular matrix with units in the principal diagonal, in which only the ith column is otherwise non-zero, given as

\[
L_i^{-1} = \begin{bmatrix}
1 & & & & & &\\
0 & 1 & & & & &\\
\vdots & \vdots & \ddots & & & &\\
0 & 0 & \cdots & 1 & & &\\
0 & 0 & \cdots & -l_{i+1,i} & 1 & &\\
\vdots & \vdots & & \vdots & \vdots & \ddots &\\
0 & 0 & \cdots & -l_{n,i} & 0 & \cdots & 1
\end{bmatrix}
\tag{6-68}
\]

The components l_{j,i} entering the ith column are given as

\[
l_{j,i} = \frac{K_{j,i}^{(i)}}{K_{i,i}^{(i)}}, \qquad j = i+1, \ldots, n
\tag{6-69}
\]

where K_{j,i}^{(i)} denotes the component in the jth row and ith column of K^(i). By insertion it is proved that the inverse of (6-68) is given as

\[
L_i = \begin{bmatrix}
1 & & & & & &\\
0 & 1 & & & & &\\
\vdots & \vdots & \ddots & & & &\\
0 & 0 & \cdots & 1 & & &\\
0 & 0 & \cdots & l_{i+1,i} & 1 & &\\
\vdots & \vdots & & \vdots & \vdots & \ddots &\\
0 & 0 & \cdots & l_{n,i} & 0 & \cdots & 1
\end{bmatrix}
\tag{6-70}
\]

Then K^(n), obtained after the (n−1)th multiplication with L_{n-1}^{-1}, has zeros below the main diagonal in all the first n − 1 columns, corresponding to an upper triangular matrix S. Hence

\[
L_{n-1}^{-1} L_{n-2}^{-1} \cdots L_1^{-1} K = S
\quad\Rightarrow\quad
K = LS, \qquad L = L_1 L_2 \cdots L_{n-1}
\tag{6-71}
\]

Since L defined by (6-71) is the product of lower triangular matrices with 1 in the main diagonal, it becomes a matrix of the same structure as indicated in (6-63). Because K is symmetric, S must have the structure

\[
S = D L^T
\tag{6-72}
\]

where D is a diagonal matrix, given by (6-64). This proves the validity of the factorization (6-62).
Example 6.5: Gauss factorization of a three-dimensional matrix

Given the symmetric matrix

\[
K = K^{(1)} = \begin{bmatrix}
5 & -4 & 1\\
-4 & 6 & -4\\
1 & -4 & 6
\end{bmatrix}
\tag{6-73}
\]

\[
L_1^{-1} = \begin{bmatrix}
1 & 0 & 0\\
\tfrac45 & 1 & 0\\
-\tfrac15 & 0 & 1
\end{bmatrix}
\;\Rightarrow\;
K^{(2)} = L_1^{-1} K^{(1)} = \begin{bmatrix}
5 & -4 & 1\\
0 & 2.8 & -3.2\\
0 & -3.2 & 5.8
\end{bmatrix}, \qquad
L^{(1)} = L_1 = \begin{bmatrix}
1 & 0 & 0\\
-\tfrac45 & 1 & 0\\
\tfrac15 & 0 & 1
\end{bmatrix}
\tag{6-74}
\]

\[
L_2^{-1} = \begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & \tfrac{3.2}{2.8} & 1
\end{bmatrix}
\;\Rightarrow\;
K^{(3)} = L_2^{-1} K^{(2)} = S = \begin{bmatrix}
5 & -4 & 1\\
0 & 2.8 & -3.2\\
0 & 0 & 2.1429
\end{bmatrix}, \qquad
L^{(2)} = L_1 L_2 = L = \begin{bmatrix}
1 & 0 & 0\\
-0.8 & 1 & 0\\
0.2 & -1.1429 & 1
\end{bmatrix}
\tag{6-75}
\]

From this follows that

\[
L = \begin{bmatrix}
1 & 0 & 0\\
-0.8 & 1 & 0\\
0.2 & -1.1429 & 1
\end{bmatrix}, \qquad
D = \begin{bmatrix}
5 & 0 & 0\\
0 & 2.8 & 0\\
0 & 0 & 2.1429
\end{bmatrix}
\tag{6-76}
\]
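The factorization of Box 6.2 may be implemented in a few lines. The following MATLAB sketch (an illustration; Exercise 6.4 asks for a fuller program) computes L and D without pivoting and checks the result against Example 6.5:

```matlab
% Gauss factorization K = L*D*L' of a symmetric matrix, cf. Box 6.2.
% No pivoting is used, so all pivots d_ii are assumed non-zero.
K = [5 -4 1; -4 6 -4; 1 -4 6];          % matrix of Example 6.5
n = size(K, 1);
L = eye(n);
A = K;
for i = 1:n-1
    L(i+1:n, i) = A(i+1:n, i) / A(i, i);            % l_ji, cf. (6-69)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) ...
        - A(i+1:n, i) * A(i, i+1:n) / A(i, i);      % eliminate column i
end
D = diag(diag(A));
disp(L)                 % expected: [1 0 0; -0.8 1 0; 0.2 -1.1429 1]
disp(D)                 % expected: diag(5, 2.8, 2.1429)
disp(norm(K - L*D*L'))  % expected: approximately 0
```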
Example 6.6: Gauss factorization of a three-dimensional generalized eigenvalue problem

Of course, for a given value of λ the matrix K − λM may be factorized numerically according to the method explained in Box 6.2. However, for smaller problems explicit expressions may be derived analytically, as demonstrated in the following. Given the mass and stiffness matrices defined in Example 6.2, the components of L and D are calculated from the following identities, cf. (6-48), (6-62)

\[
\begin{bmatrix}
2-\tfrac12\lambda & -1 & 0\\
-1 & 4-\lambda & -1\\
0 & -1 & 2-\tfrac12\lambda
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0\\
l_{21} & 1 & 0\\
l_{31} & l_{32} & 1
\end{bmatrix}
\begin{bmatrix}
d_{11} & 0 & 0\\
0 & d_{22} & 0\\
0 & 0 & d_{33}
\end{bmatrix}
\begin{bmatrix}
1 & l_{21} & l_{31}\\
0 & 1 & l_{32}\\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
d_{11} & d_{11}l_{21} & d_{11}l_{31}\\
d_{11}l_{21} & d_{22}+d_{11}l_{21}^2 & d_{11}l_{21}l_{31}+d_{22}l_{32}\\
d_{11}l_{31} & d_{11}l_{21}l_{31}+d_{22}l_{32} & d_{33}+d_{11}l_{31}^2+d_{22}l_{32}^2
\end{bmatrix}
\tag{6-77}
\]

Equating the corresponding components on the left- and right-hand sides provides the following equations for the determination of the unknown quantities, which are solved sequentially

\[
d_{11} = 2-\tfrac12\lambda = -\tfrac12\left(\lambda-4\right)
\]
\[
d_{11}\,l_{21} = -1 \;\Rightarrow\; l_{21} = \frac{2}{\lambda-4}, \qquad
d_{11}\,l_{31} = 0 \;\Rightarrow\; l_{31} = 0
\]
\[
d_{22}+d_{11}l_{21}^2 = 4-\lambda \;\Rightarrow\;
d_{22} = 4-\lambda+\frac{2}{\lambda-4} = -\frac{\lambda^2-8\lambda+14}{\lambda-4}
\]
\[
d_{11}l_{21}l_{31}+d_{22}l_{32} = d_{22}l_{32} = -1 \;\Rightarrow\;
l_{32} = \frac{\lambda-4}{\lambda^2-8\lambda+14}
\]
\[
d_{33}+d_{11}l_{31}^2+d_{22}l_{32}^2 = d_{33}-l_{32} = 2-\tfrac12\lambda \;\Rightarrow\;
d_{33} = 2-\tfrac12\lambda+\frac{\lambda-4}{\lambda^2-8\lambda+14}
= -\frac12\,\frac{\lambda^3-12\lambda^2+44\lambda-48}{\lambda^2-8\lambda+14}
= -\frac12\,\frac{(\lambda-2)(\lambda-4)(\lambda-6)}{\lambda^2-8\lambda+14}
\tag{6-78}
\]

Then the following expression for the characteristic polynomial is obtained, in agreement with (6-49)

\[
P(\lambda) = d_{11}d_{22}d_{33}
= \left(-\frac{\lambda-4}{2}\right)\left(-\frac{\lambda^2-8\lambda+14}{\lambda-4}\right)\left(-\frac12\,\frac{(\lambda-2)(\lambda-4)(\lambda-6)}{\lambda^2-8\lambda+14}\right)
= -\frac14\left(\lambda-2\right)\left(\lambda-4\right)\left(\lambda-6\right)
\tag{6-79}
\]
Example 6.7: Bounds on eigenvalues

In this example bounds on the eigenvalues of the GEVP in Example 6.2 are constructed by checking the signs of the diagonal components of the matrix D, using the observation at the end of Section 6.2. The components of the matrices L and D may be calculated from the formulas indicated in Example 6.6.

For λ = 1 we get:

\[
K - \lambda M = \begin{bmatrix}
\tfrac32 & -1 & 0\\
-1 & 3 & -1\\
0 & -1 & \tfrac32
\end{bmatrix}
\;\Rightarrow\;
L D L^T = \begin{bmatrix}
1 & 0 & 0\\
-\tfrac23 & 1 & 0\\
0 & -\tfrac37 & 1
\end{bmatrix}
\begin{bmatrix}
\tfrac32 & 0 & 0\\
0 & \tfrac73 & 0\\
0 & 0 & \tfrac{15}{14}
\end{bmatrix}
\begin{bmatrix}
1 & -\tfrac23 & 0\\
0 & 1 & -\tfrac37\\
0 & 0 & 1
\end{bmatrix}
\tag{6-80}
\]

As seen, d_11 = 3/2 > 0, d_22 = 7/3 > 0, d_33 = 15/14 > 0. Hence all three diagonal components are positive, from which it is concluded that λ_1 > λ = 1.

For λ = 8 we get:

\[
K - \lambda M = \begin{bmatrix}
-2 & -1 & 0\\
-1 & -4 & -1\\
0 & -1 & -2
\end{bmatrix}
\;\Rightarrow\;
L D L^T = \begin{bmatrix}
1 & 0 & 0\\
\tfrac12 & 1 & 0\\
0 & \tfrac27 & 1
\end{bmatrix}
\begin{bmatrix}
-2 & 0 & 0\\
0 & -\tfrac72 & 0\\
0 & 0 & -\tfrac{12}{7}
\end{bmatrix}
\begin{bmatrix}
1 & \tfrac12 & 0\\
0 & 1 & \tfrac27\\
0 & 0 & 1
\end{bmatrix}
\tag{6-81}
\]

As seen, d_11 = −2 < 0, d_22 = −7/2 < 0, d_33 = −12/7 < 0. Hence all three diagonal components are negative, from which it is concluded that λ_3 < λ = 8.

For λ = 5 we get:

\[
K - \lambda M = \begin{bmatrix}
-\tfrac12 & -1 & 0\\
-1 & -1 & -1\\
0 & -1 & -\tfrac12
\end{bmatrix}
\;\Rightarrow\;
L D L^T = \begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0\\
0 & -1 & 1
\end{bmatrix}
\begin{bmatrix}
-\tfrac12 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & -\tfrac32
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 0\\
0 & 1 & -1\\
0 & 0 & 1
\end{bmatrix}
\tag{6-82}
\]

As seen, d_11 = −1/2 < 0, d_22 = 1 > 0, d_33 = −3/2 < 0. Hence two diagonal components are negative and one is positive, from which it is concluded that λ_2 < λ = 5 < λ_3.

For λ = 3 we get:

\[
K - \lambda M = \begin{bmatrix}
\tfrac12 & -1 & 0\\
-1 & 1 & -1\\
0 & -1 & \tfrac12
\end{bmatrix}
\;\Rightarrow\;
L D L^T = \begin{bmatrix}
1 & 0 & 0\\
-2 & 1 & 0\\
0 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
\tfrac12 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & \tfrac32
\end{bmatrix}
\begin{bmatrix}
1 & -2 & 0\\
0 & 1 & 1\\
0 & 0 & 1
\end{bmatrix}
\tag{6-83}
\]

As seen, d_11 = 1/2 > 0, d_22 = −1 < 0, d_33 = 3/2 > 0. Hence two diagonal components are positive and one is negative, from which it is concluded that λ_1 < λ = 3 < λ_2.

In conclusion, the following bounds prevail

\[
1 < \lambda_1 < 3, \qquad 3 < \lambda_2 < 5, \qquad 5 < \lambda_3 < 8
\tag{6-84}
\]
6.3 Eigenvalue Separation Principle

The matrices M^(m) and K^(m) of dimension (n − m) × (n − m) are obtained from M and K by omitting the last m rows and columns of these matrices. Then consider the sequence of related characteristic polynomials of order n − m

\[
P^{(m)}\left(\lambda^{(m)}\right) = \det\left(K^{(m)} - \lambda^{(m)} M^{(m)}\right), \qquad m = 0, 1, \ldots, n-1
\tag{6-85}
\]

where M^(0) = M, K^(0) = K, λ^(0) = λ and P^(0)(λ) = P(λ). The eigenvalues corresponding to M^(m) and K^(m) are denoted λ_1^(m), λ_2^(m), …, λ_{n−m}^(m). Now, for any m = 0, 1, …, n − 1 it can be proved that the roots of P^(m+1)(λ^(m+1)) = 0 separate the roots of P^(m)(λ^(m)) = 0, i.e.

\[
0 \le \lambda_1^{(m)} \le \lambda_1^{(m+1)} \le \lambda_2^{(m)} \le \lambda_2^{(m+1)} \le \cdots \le \lambda_{n-m-1}^{(m)} \le \lambda_{n-m-1}^{(m+1)} \le \lambda_{n-m}^{(m)} \le \infty
\tag{6-86}
\]

A formal proof of (6-86) has been given by Bathe.² A sequence of polynomials such as P^(m)(λ), with roots fulfilling the property (6-86), is denoted a Sturm sequence.

Next, consider the Gauss factorization (6-62). Omitting the last m rows and columns in M and K is tantamount to omitting the last m rows and columns in L and D. Then

\[
P^{(m)}\left(\lambda^{(m)}\right) = \det\left(K^{(m)} - \lambda^{(m)} M^{(m)}\right)
= \det\left(L^{(m)} D^{(m)} L^{(m)T}\right) = \det D^{(m)}
= d_{11} d_{22} \cdots d_{n-m,n-m}
\tag{6-87}
\]

where

\[
L^{(m)} = \begin{bmatrix}
1 & & & &\\
l_{21} & 1 & & &\\
l_{31} & l_{32} & 1 & &\\
\vdots & \vdots & \vdots & \ddots &\\
l_{n-m,1} & l_{n-m,2} & l_{n-m,3} & \cdots & 1
\end{bmatrix}
\tag{6-88}
\]

\[
D^{(m)} = \begin{bmatrix}
d_{11} & 0 & \cdots & 0\\
0 & d_{22} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & d_{n-m,n-m}
\end{bmatrix}
\tag{6-89}
\]

² K.-J. Bathe: Finite Element Procedures. Prentice Hall, Inc., 1996.
The bounding property explained in Section 6.2 for the case m = 0 can then easily be generalized. Let λ^(m) = µ, and perform a Gauss factorization of the matrix K^(m) − µM^(m). Then the number of eigenvalues λ_j^(m) < µ will be equal to the number of negative diagonal components d_11, …, d_{n−m,n−m} in the matrix D^(m).

The number of negative elements in the main diagonal of the matrix D in the Gauss factorization K − λM = LDL^T, and hence the number of eigenvalues smaller than λ, can then be retrieved from the signs of the sequence P^(0)(λ), P^(1)(λ), …, P^(n−1)(λ) in the following way. Introduce P^(n)(λ) as an arbitrary positive quantity. Since P^(n−1)(λ) = d_11, the sequence P^(n)(λ), P^(n−1)(λ) has the sign sequence +, − if d_11 < 0, and the sign sequence +, + if d_11 > 0. d_11 < 0 indicates that at least one eigenvalue is smaller than λ, in which case one sign change, namely from + to −, has occurred in the indicated sign sequence. Next, P^(n−2)(λ) = d_11 d_22 is considered. d_11 < 0 ∧ d_22 < 0 indicates that two eigenvalues are smaller than λ. This in turn implies that P^(n−1)(λ) has a negative sign, and P^(n−2)(λ) a positive sign. Then one additional sign change has occurred in the sequence of signs of the characteristic polynomials, sign(P^(n)(λ)), sign(P^(n−1)(λ)), sign(P^(n−2)(λ)) = +, −, +. If d_22 > 0, then P^(n−1)(λ) and P^(n−2)(λ) have the same sign, and no additional sign change is recorded. Proceeding in this way it is seen that the number of sign changes in the sequence sign(P^(n)(λ)), sign(P^(n−1)(λ)), …, sign(P^(0)(λ)) determines the total number of eigenvalues smaller than λ. This property of the sequence of characteristic polynomials is known as a Sturm sequence check.

Example 6.8: Bounds on eigenvalues by the eigenvalue separation principle

For the mass and stiffness matrices defined in Example 6.2, the matrices M^(1) and K^(1) become

\[
M^{(1)} = \begin{bmatrix} \tfrac12 & 0\\ 0 & 1 \end{bmatrix}, \qquad
K^{(1)} = \begin{bmatrix} 2 & -1\\ -1 & 4 \end{bmatrix}
\tag{6-90}
\]

The characteristic equation (6-6) becomes

\[
\det\begin{bmatrix}
2-\tfrac12\lambda_j^{(1)} & -1\\
-1 & 4-\lambda_j^{(1)}
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases}
\lambda_1^{(1)} = 4-\sqrt2 = 2.59\\
\lambda_2^{(1)} = 4+\sqrt2 = 5.41
\end{cases}
\tag{6-91}
\]

The matrices M^(2) and K^(2) become

\[
M^{(2)} = \left[\tfrac12\right], \qquad K^{(2)} = \left[2\right]
\quad\Rightarrow\quad
\lambda_1^{(2)} = 4
\tag{6-92}
\]

The relation (6-86) becomes

\[
\begin{aligned}
&\lambda_1 \le \lambda_1^{(1)} \le \lambda_2\\
&\lambda_1^{(1)} \le \lambda_1^{(2)} \le \lambda_2^{(1)}\\
&\lambda_2 \le \lambda_2^{(1)} \le \lambda_3
\end{aligned}
\quad\Rightarrow\quad
\begin{aligned}
&0 \le \lambda_1 \le \lambda_1^{(1)}\\
&\lambda_1^{(1)} \le \lambda_2 \le \lambda_2^{(1)}\\
&\lambda_2^{(1)} \le \lambda_3 \le \infty
\end{aligned}
\quad\Rightarrow\quad
\begin{aligned}
&0 \le \lambda_1 \le 2.59\\
&2.59 \le \lambda_2 \le 5.41\\
&5.41 \le \lambda_3 \le \infty
\end{aligned}
\tag{6-93}
\]

The exact solutions are λ_1 = 2, λ_2 = 4 and λ_3 = 6, cf. Example 6.3.
Example 6.9: Physical interpretation of the eigenvalue separation principle

(Fig. 6-2 Vibrating string. a) Definitions: pre-stress force F, mass µ per unit length, nodal displacements u_1, …, u_{n−1} of u(x, t) at element length Δl. b) Undamped eigenmodes corresponding to ω_1^(0), ω_1^(1), ω_2^(0), ω_2^(1).)

Fig. 6-2a shows a vibrating string with pre-stress force F and mass µ per unit length. The string has been divided into n identical elements, each of length Δl. Hence, the total length of the string is l = nΔl. Vibrations u(x, t) of the string in the transverse direction are governed by the wave equation with homogeneous boundary conditions

\[
\mu\,\frac{\partial^2 u}{\partial t^2} - F\,\frac{\partial^2 u}{\partial x^2} = 0, \quad x \in\, ]0, l[\,, \qquad u(0, t) = u(l, t) = 0
\tag{6-94}
\]

where x is measured from the left support point. The spatial operator in (6-94) is discretized by means of a central difference operator, i.e.

\[
F\,\frac{\partial^2 u}{\partial x^2} \simeq F\,\frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta l^2}, \qquad i = 1, \ldots, n-1
\tag{6-95}
\]

where u_i(t) = u(iΔl, t). The boundary conditions imply that u_0(t) = u_n(t) = 0. Then the discretized wave equation may be represented by the matrix differential equation

\[
M^{(0)}\ddot{u} + K^{(0)} u = 0
\tag{6-96}
\]

\[
u(t) = \begin{bmatrix} u_1(t)\\ u_2(t)\\ \vdots\\ u_{n-2}(t)\\ u_{n-1}(t) \end{bmatrix}, \quad
M^{(0)} = \mu\begin{bmatrix}
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 1 & 0\\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}, \quad
K^{(0)} = \frac{F}{\Delta l^2}\begin{bmatrix}
2 & -1 & \cdots & 0 & 0\\
-1 & 2 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 2 & -1\\
0 & 0 & \cdots & -1 & 2
\end{bmatrix}
\tag{6-97}
\]

The undamped circular eigenfrequencies of the continuous system are given as

\[
\omega_j^{(0)} = \frac{j\pi}{l}\sqrt{\frac{F}{\mu}}
\tag{6-98}
\]

The corresponding eigenmodes are sine functions, shown with unbroken signature in Fig. 6-2b. The wavelength of a half-sine is equal to l/j. As seen from (6-98) the eigenfrequency increases if the wavelength of the half-sine decreases. Next, consider the system defined by the matrices M^(1) and K^(1) of dimension (n − 2) × (n − 2), where the last row and column are omitted in M^(0) and K^(0). Physically, this corresponds to constraining the displacement u_{n−1}(t) = 0, as indicated by the additional support in Fig. 6-2b. The corresponding eigenmodes of the continuous system have been shown with dashed signature. As seen in Fig. 6-2b the wavelengths related to the circular eigenfrequencies ω_1^(0), ω_1^(1), ω_2^(0) and ω_2^(1) decrease in the indicated order. Hence, the following ordering of these eigenfrequencies prevails

\[
\omega_1^{(0)} < \omega_1^{(1)} < \omega_2^{(0)} < \omega_2^{(1)}
\tag{6-99}
\]

Since λ_j^(m) = (ω_j^(m))², the corresponding ordering of the eigenvalues becomes

\[
\lambda_1^{(0)} < \lambda_1^{(1)} < \lambda_2^{(0)} < \lambda_2^{(1)}
\tag{6-100}
\]

which corresponds to (6-86).
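The discretized string also provides a convenient numerical check of (6-98). In the following MATLAB sketch (an illustration; the values of F, µ, l and n are arbitrary choices, not data from the text), the lowest eigenfrequencies of the discretized system are compared with the continuous solution:

```matlab
% Discretized vibrating string, cf. (6-96)-(6-98): compare the lowest
% eigenfrequencies of the finite difference model with the continuous
% solution. F, mu, l and n are arbitrary illustration values.
F = 100; mu = 1; l = 1; n = 50;
dl = l/n;
e  = ones(n-1, 1);
M0 = mu*eye(n-1);
K0 = (F/dl^2)*(diag(2*e) - diag(e(1:end-1), 1) - diag(e(1:end-1), -1));

omega_fd = sqrt(sort(eig(K0, M0)));     % discrete eigenfrequencies
j = (1:4)';
omega_ex = j*pi/l*sqrt(F/mu);           % continuous values, cf. (6-98)
disp([omega_fd(1:4) omega_ex])          % close agreement for the low modes
```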
6.4 Shift

Occasionally, a shift on the stiffness matrix may be used to enhance the speed of calculation of the considered GEVP. In order to explain this, the eigenvalue problem (6-5) is written in the following way

\[
\left(K - \rho M + \rho M - \lambda_j M\right)\Phi^{(j)} = 0
\tag{6-101}
\]

Obviously, we have withdrawn and added the quantity ρM inside the bracket, where ρ is a suitable real number; this affects neither the eigenvalues λ_j nor the eigenvectors Φ^(j). (6-101) is rearranged in the form

\[
\left(\hat{K} - \mu_j M\right)\Phi^{(j)} = 0
\tag{6-102}
\]

where

\[
\hat{K} = K - \rho M, \qquad \mu_j = \lambda_j - \rho
\tag{6-103}
\]

Hence, instead of the original generalized eigenvalue problem defined by the matrices K, M, the system with the matrices K̂, M is considered in the shifted system, where K̂ is calculated as indicated in (6-103). The two systems have identical eigenvectors. However, the eigenvalues of the shifted system become (λ_1 − ρ), (λ_2 − ρ), …, (λ_n − ρ), where λ_1, λ_2, …, λ_n denote the eigenvalues of the original system.

For non-supported systems (ships and aeroplanes) a stiff-body motion Φ ≠ 0 exists, which fulfills

\[
K\Phi = 0
\tag{6-104}
\]

(6-104) shows that λ = 0 is an eigenvalue for such systems. Correspondingly, det(K) = 0 for systems which possess a stiff-body motion. However, some numerical algorithms presume that det(K) ≠ 0. In such cases a preliminary shift on the stiffness matrix must be performed, because det(K − ρM) ≠ 0, even if det(K) = 0.

Example 6.10: Shift on stiffness matrix

Given the mass and stiffness matrices

\[
M = \begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}, \qquad
K = \begin{bmatrix} 3 & -3\\ -3 & 3 \end{bmatrix}
\tag{6-105}
\]

The characteristic equation (6-6) becomes

\[
\det\begin{bmatrix}
3-2\lambda & -3-\lambda\\
-3-\lambda & 3-2\lambda
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases} \lambda_1 = 0\\ \lambda_2 = 6 \end{cases}
\tag{6-106}
\]

λ_1 = 0, since det(K) = 0. Next, a shift on the stiffness matrix with ρ = −2 is performed, which provides

\[
\hat{K} = \begin{bmatrix} 3 & -3\\ -3 & 3 \end{bmatrix} + 2\begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}
= \begin{bmatrix} 7 & -1\\ -1 & 7 \end{bmatrix}
\tag{6-107}
\]

Now the characteristic equation becomes

\[
\det\begin{bmatrix}
7-2\mu & -1-\mu\\
-1-\mu & 7-2\mu
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases} \mu_1 = 2\\ \mu_2 = 8 \end{cases}
\tag{6-108}
\]
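The effect of the shift is verified directly in the following MATLAB sketch (an illustration using the matrices of Example 6.10):

```matlab
% Shift on the stiffness matrix, cf. (6-101)-(6-103), for Example 6.10.
M = [2 1; 1 2];
K = [3 -3; -3 3];
rho = -2;

Khat = K - rho*M;            % shifted stiffness matrix, cf. (6-103)
mu = sort(eig(Khat, M))      % expected: 2 and 8
lambda = mu + rho            % eigenvalues of the original GEVP: 0 and 6
```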
6.5 Transformation of GEVP to SEVP

Some eigenvalue solvers are written for the special eigenvalue problem. Hence, their use presumes an initial transformation of the generalized eigenvalue problem (6-5). Of course, this may be performed simply by pre-multiplying (6-5) with M⁻¹. However, the resulting system matrix M⁻¹K is then no longer symmetric. In this section a similarity transformation is indicated which preserves the symmetry of the system matrix. Since M = M^T, it can be factorized in the form

\[
M = S S^T
\tag{6-109}
\]

The generalized eigenvalue problem (6-5) may then be written in the form

\[
K\left(S^T\right)^{-1} S^T \Phi^{(j)} = \lambda_j S S^T \Phi^{(j)}
\quad\Rightarrow\quad
S^{-1} K \left(S^{-1}\right)^T S^T \Phi^{(j)} = \lambda_j S^T \Phi^{(j)}
\tag{6-110}
\]

where the identity (S^T)⁻¹ = (S⁻¹)^T has been used. (6-110) can then be formulated in terms of the following standard EVP

\[
\tilde{K}\tilde\Phi^{(j)} = \lambda_j\tilde\Phi^{(j)}
\tag{6-111}
\]

where

\[
\tilde{K} = S^{-1} K \left(S^{-1}\right)^T
\tag{6-112}
\]

\[
\tilde\Phi^{(j)} = S^T \Phi^{(j)}
\tag{6-113}
\]

(6-112) defines a similarity transformation with the transformation matrix S⁻¹, which diagonalizes the mass matrix. Obviously, K̃ = K̃^T. As seen from (6-111), the eigenvalues λ_1, …, λ_n are identical for the original and the transformed eigenvalue problem, whereas the eigenvectors Φ^(j) and Φ̃^(j) are related by the transformation (6-113).

The determination of a matrix S fulfilling (6-109) is not unique; actually, infinitely many solutions to this problem exist. Below, two approaches are given. In both cases it is assumed that M = M^T is positive definite.

Generally, Choleski decomposition is considered the most effective way of solving the problem. In this case a lower triangular matrix S exists, so that (6-109) is fulfilled. Obviously, S is related to the Gauss factorization as follows

\[
S = L D^{\frac12}, \qquad
D^{\frac12} = \begin{bmatrix}
\sqrt{d_{11}} & 0 & \cdots & 0\\
0 & \sqrt{d_{22}} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \sqrt{d_{nn}}
\end{bmatrix}
\tag{6-114}
\]

The diagonal matrix D^{1/2} only exists if the components d_ii of the matrix D are all positive. This is indeed the case if M is positive definite. Although S may be calculated from (6-114), a faster and more direct algorithm exists for the determination of this quantity, cf. Box 6.3.

Alternatively, a so-called spectral decomposition of M may be used. The basis of this method is the following SEVP for M

\[
M v^{(j)} = \mu_j v^{(j)}
\tag{6-115}
\]

µ_j and v^(j) denote the jth eigenvalue and eigenvector of M. Both are real, since M is symmetric. The eigenvalue problems (6-115) can be assembled into the matrix formulation, cf. (6-10)

\[
M V = V \mu
\tag{6-116}
\]

\[
\mu = \begin{bmatrix}
\mu_1 & 0 & \cdots & 0\\
0 & \mu_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \mu_n
\end{bmatrix}, \qquad
V = \left[v^{(1)}\ v^{(2)}\ \cdots\ v^{(n)}\right]
\tag{6-117}
\]

The eigenvectors are normalized to magnitude 1, i.e. v^(i)T v^(j) = δ_ij. Then the modal matrix V fulfills, cf. (6-19)

\[
V^{-1} = V^T
\tag{6-118}
\]

From (6-116) and (6-118) the following representation of M is obtained

\[
M = V \mu V^T
\tag{6-119}
\]

Finally, from (6-109) and (6-119) the following solution for S is obtained

\[
S = V \mu^{\frac12}, \qquad
\mu^{\frac12} = \begin{bmatrix}
\sqrt{\mu_1} & 0 & \cdots & 0\\
0 & \sqrt{\mu_2} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \sqrt{\mu_n}
\end{bmatrix}
\tag{6-120}
\]

The drawback of the spectral approach is that an initial SEVP must be solved before the transformed eigenvalue problem (6-111) can be analyzed.
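Both constructions of S are easily tried out numerically. The following MATLAB sketch (an illustration; it uses the built-in Choleski factorization chol rather than the algorithm of Box 6.3) transforms the GEVP of Example 6.2 into the SEVP (6-111) and recovers the eigenvalues 2, 4, 6:

```matlab
% Transformation of the GEVP (6-5) to the symmetric SEVP (6-111)
% via Choleski decomposition M = S*S', cf. (6-109), (6-112).
M = [1/2 0 0; 0 1 0; 0 0 1/2];
K = [2 -1 0; -1 4 -1; 0 -1 2];

S = chol(M, 'lower');            % lower triangular, M = S*S'
Ktilde = S \ K / S';             % S^-1 * K * (S^-1)', cf. (6-112)

[Phit, Lambda] = eig(Ktilde);    % standard symmetric EVP (6-111)
lambda = sort(diag(Lambda))      % expected: 2, 4, 6
Phi = S' \ Phit;                 % back-transformation, cf. (6-113)
```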
Box 6.3: Choleski decomposition of a symmetric positive definite matrix

Choleski decomposition factorizes a symmetric positive definite matrix M into the matrix product of a lower triangular matrix S and its transpose, as follows

\[
M = S S^T
\quad\Rightarrow\quad
\begin{bmatrix}
m_{11} & m_{21} & \cdots & m_{n1}\\
m_{21} & m_{22} & \cdots & m_{n2}\\
\vdots & \vdots & \ddots & \vdots\\
m_{n1} & m_{n2} & \cdots & m_{nn}
\end{bmatrix}
=
\begin{bmatrix}
s_{11} & 0 & \cdots & 0\\
s_{21} & s_{22} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
s_{n1} & s_{n2} & \cdots & s_{nn}
\end{bmatrix}
\begin{bmatrix}
s_{11} & s_{21} & \cdots & s_{n1}\\
0 & s_{22} & \cdots & s_{n2}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & s_{nn}
\end{bmatrix}
=
\begin{bmatrix}
s_{11}^2 & & \mathrm{symmetric} &\\
s_{21}s_{11} & s_{22}^2+s_{21}^2 & &\\
\vdots & \vdots & \ddots &\\
s_{n1}s_{11} & s_{n2}s_{22}+s_{n1}s_{21} & \cdots & s_{nn}^2+s_{n,n-1}^2+\cdots+s_{n2}^2+s_{n1}^2
\end{bmatrix}
\tag{6-121}
\]

Equating the components of the final matrix product with the components on and below the main diagonal of M, equations can be formulated for the determination of the s_ij, which are solved sequentially. First, s_11 = √m_11 is calculated. Next, s_i1, i = 2, …, n are determined from s_i1 = m_i1/s_11. Next, s_22 = √(m_22 − s_21²) is calculated, and s_i2, i = 3, …, n can be determined from s_i2 = (m_i2 − s_i1 s_21)/s_22. Next, the 3rd column can be calculated, and so forth. The general algorithm for calculating the components s_ij in the jth column reads

\[
s_{jj} = \sqrt{m_{jj} - s_{j,j-1}^2 - \cdots - s_{j1}^2}, \qquad j = 1, \ldots, n
\]
\[
s_{ij} = \left(m_{ij} - s_{i,j-1}s_{j,j-1} - \cdots - s_{i1}s_{j1}\right)/s_{jj}, \qquad i = j+1, \ldots, n
\tag{6-122}
\]
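A direct implementation of the column algorithm (6-122) could look as follows (a sketch; Exercise 6.5 asks for such a program):

```matlab
% Choleski decomposition M = S*S' of a symmetric positive definite
% matrix, implementing the column algorithm (6-122) directly.
M = [2 1; 1 2];                  % example matrix, cf. Example 6.10
n = size(M, 1);
S = zeros(n);
for j = 1:n
    S(j, j) = sqrt(M(j, j) - S(j, 1:j-1)*S(j, 1:j-1)');          % s_jj
    for i = j+1:n
        S(i, j) = (M(i, j) - S(i, 1:j-1)*S(j, 1:j-1)') / S(j, j); % s_ij
    end
end
disp(S)
disp(norm(M - S*S'))   % expected: approximately 0; S matches chol(M,'lower')
```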
6.6 Exercises

6.1 Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & \tfrac12 \end{bmatrix}, \qquad
K = \begin{bmatrix} 2 & -1 & 0\\ -1 & 2 & 0\\ 0 & 0 & 3 \end{bmatrix}
\]

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Determine two vectors that are M-orthonormal without being eigenmodes.
(c.) Show that the eigenvalue separation principle is valid for the considered example.

6.2 Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} 2 & 0\\ 0 & 0 \end{bmatrix}, \qquad
K = \begin{bmatrix} 6 & -1\\ -1 & 4 \end{bmatrix}
\]

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Perform a shift ρ = 3 on K and calculate the eigenvalues and eigenmodes of the new problem.

6.3 The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional generalized eigenvalue problem are given as

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ 0 & 4 \end{bmatrix}, \qquad
\Phi = \left[\Phi^{(1)}\ \Phi^{(2)}\right]
= \begin{bmatrix} \tfrac{\sqrt2}{2} & \tfrac{\sqrt2}{2}\\ -\tfrac{\sqrt2}{2} & \tfrac{\sqrt2}{2} \end{bmatrix}
\]

(a.) Calculate M and K.

6.4 Given a symmetric matrix K.

(a.) Write a MATLAB program which determines the matrices L and D of a Gauss factorization, as well as the matrix (S⁻¹)^T, where S is a lower triangular matrix fulfilling SS^T = K.

6.5 Given a symmetric positive definite matrix K.

(a.) Write a MATLAB program which performs Choleski decomposition.
CHAPTER 7
APPROXIMATE SOLUTION METHODS

This chapter deals with various approximate solution methods for solving the generalized eigenvalue problem. Section 7.1 considers the application of static condensation, or Guyan reduction.¹ The idea of the method is to reduce the magnitude of the generalized eigenvalue problem from n to n₁ ≪ n degrees of freedom. Next, the reduced system is solved exactly. In principle, no approximation is related to the procedure.

Section 7.2 deals with the application of Rayleigh-Ritz analysis. Similar to static condensation, this is a kind of system reduction procedure. As shown, the method can be given a formulation identical to static condensation. However, exact results are no longer obtained. Section 7.3 deals with the bounding of the error related to a certain approximate eigenvalue.
7.1 Static Condensation

The basic assumption of static condensation is that inertia is confined to the first n₁ degrees of freedom, whereas inertia effects are ignored for the remaining n₂ = n − n₁ degrees of freedom. The approximation of the method stems from ignoring these inertial couplings. This corresponds to the following partitioning of the mass and stiffness matrices

\[
M = \begin{bmatrix} M_{11} & 0\\ 0 & 0 \end{bmatrix}, \qquad
K = \begin{bmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{bmatrix}
\tag{7-1}
\]

M₁₁ and K₁₁ are sub-matrices of dimension n₁ × n₁, K₁₂ = K₂₁^T is a sub-matrix of dimension n₁ × n₂, and K₂₂ is of dimension n₂ × n₂. The eigenvalue problems for the first n₁ and the last n₂ eigenvectors can be assembled in the following partitioned matrix formulations, cf. (6-10)

¹ S.R.K. Nielsen: Vibration Theory, Vol. 1. Linear Vibration Theory. Aalborg tekniske Universitetsforlag, 1998.
"
K11 K12
#"
K21 K22 "
K11 K12
Φ11
#
=
"
M11 0 0
Φ21 #"
K21 K22
Φ12
#
=
"
M11 0
Φ22
#"
Φ11
#
0 Φ21 #" # 0 Φ12 0
Φ22
Λ1
(7–2)
Λ2
where Λ1 and Λ2 are diagonal matrices of the dimension n1 × n1 and n2 × n2 λ1 0 0 λ 2 Λ1 = . . .. .. 0 0
(j)
··· ··· ...
0 0 .. .
· · · λn1
,
0 λn1 +1 0 λn1 +2 Λ2 = . .. .. . 0 0
··· ··· ...
0 0 .. .
(7–3)
· · · λn
(j)
Φ1 and Φ2 denote sub-vectors encompassing the first n1 and the last n2 components of the jth eigenmode Φ(j) . Then, the matrices Φ11 , Φ12 , Φ21 and Φ22 entering (7-2) are defined as Φ11 =
Φ21 =
(1) Φ1 (1) Φ2
(2) Φ1
(n ) · · · Φ1 1
(2) Φ2
(n1 )
· · · Φ2
, ,
Φ12 =
(n +1) Φ1 1
(n +2) Φ1 1
(n) · · · Φ1
Φ22 =
(n +1) Φ2 1
(n +2) Φ2 1
(n) · · · Φ2
(7–4)
At first the solution for the first n1 eigenmodes is considered. From the lower lower half of the first matrix equation in (7-2) follows K21 Φ11 + K22 Φ21 = 0
⇒
Φ21 = −K−1 22 K21 Φ11
(7–5)
From the corresponding upper half of the said matrix equation, and (7-5), follows K11 Φ11 − K12 K−1 22 K21 Φ11 = M11 Φ11 Λ1 ˜ 11 Φ11 = M11 Φ11 Λ1 K
⇒ (7–6)
where ˜ 11 = K11 − K12 K−1 K21 K 22
(7–7)
(7-6) is a generalized eigenvalue problem of reduced dimension n₁, which is solved for Λ₁ and Φ₁₁. Next, the remaining components of the first n₁ eigenmodes are calculated from (7-5). The modal masses become

\[
m_1 = \begin{bmatrix} \Phi_{11}\\ \Phi_{21} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0\\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \Phi_{11}\\ \Phi_{21} \end{bmatrix}
= \Phi_{11}^T M_{11} \Phi_{11}
\tag{7-8}
\]

Hence, the total eigenmodes will be normalized to unit modal mass with respect to M if the sub-vectors in Φ₁₁ are normalized to unit modal mass with respect to M₁₁.

Next, the solution for the last n₂ eigenmodes is considered. From the last matrix equation in (7-2) follows

\[
\begin{bmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} \Phi_{12}\\ \Phi_{22} \end{bmatrix}
\Lambda_2^{-1}
=
\begin{bmatrix} M_{11}\Phi_{12}\\ 0 \end{bmatrix}
\tag{7-9}
\]

Obviously, (7-9) is fulfilled for Λ₂⁻¹ = 0 ∧ Φ₁₂ = 0. Λ₂⁻¹ = 0 implies that all n₂ eigenvalues are equal to infinity. Hence, the following eigensolutions are obtained

\[
\Lambda_2 = \begin{bmatrix}
\infty & 0 & \cdots & 0\\
0 & \infty & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \infty
\end{bmatrix}, \qquad
\begin{bmatrix} \Phi_{12}\\ \Phi_{22} \end{bmatrix}
= \begin{bmatrix} 0\\ \Phi_{22} \end{bmatrix}
\tag{7-10}
\]

The matrix Φ₂₂ is undetermined; any matrix with linearly independent column vectors will do. This quadratic matrix may then simply be chosen as an n₂ × n₂ unit matrix

\[
\Phi_{22} = I
\tag{7-11}
\]

The modal masses become

\[
m_2 = \begin{bmatrix} 0\\ \Phi_{22} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0\\ 0 & 0 \end{bmatrix}
\begin{bmatrix} 0\\ \Phi_{22} \end{bmatrix}
= 0
\tag{7-12}
\]

Generally, if the ith row and the ith column of M are equal to zero, then Φ^T = [0, …, 0, 1, 0, …, 0] is an eigenvector with the eigenvalue λ = ∞. The modal mass is 0.

In practice the calculation of K̃₁₁ is performed by means of an initial Choleski decomposition of K₂₂, cf. Box 6.3

\[
K_{22} = S S^T
\quad\Leftrightarrow\quad
K_{22}^{-1} = \left(S^{-1}\right)^T S^{-1}
\tag{7-13}
\]

where both S and S⁻¹ are lower triangular matrices. Then K̃₁₁ is determined from

\[
\tilde{K}_{11} = K_{11} - R^T R
\tag{7-14}
\]

where the n₂ × n₁ matrix R is obtained as the solution to the matrix equation

\[
S R = K_{21}
\tag{7-15}
\]

In principle, (7-15) represents n₂ linear equations with n₁ right-hand sides. Given that S is a lower triangular matrix, this is relatively easily solved. Finally, it should be noticed that the static condensation approach is only of value if n₁ ≪ n.

Example 7.1: Static condensation

Given the following mass and stiffness matrices
(7–16)
The rows and columns are interchanged the mass and stiffness matrices, so the following eigenvalue problems are obtained
2 0 0 1 −1 −1 −1 0
−1 −1 2 0
2 0 0 1 −1 −1 −1 0
−1 −1 2 0
−1 2 Φ11 0 0 = 0 0 Φ21 2 0 2 −1 Φ12 0 0 = 0 0 Φ22 0 2
0 1
0 0
0 0
0 0
0 1
0 0
0 0
0 0
0 Φ11 λ1 0 0 Φ21 0 0 0 Φ12 λ3 0 0 Φ22 0 0
0 λ2
(7–17)
0 λ4
Then
K11
" 2 = 0
# 0 1
,
K12 = K21
" # −1 −1 = −1 0
,
K22
" 2 = 0
# 0 2
,
M11
" 2 = 0
# 0 1
(7–18)
7.1 Static Condensation
37
From (7-7) follows
˜ 11 = K11 − K
" 2 = 0
K12 K−1 22 K21
# " 0 −1 − 1 −1
#" −1 2 0 0
#−1 " # " 0 −1 −1 1 = 2 −1 0 − 12
− 12 1 2
#
(7–19)
The reduced eigenvalue problem (7-6) becomes "
1 − 12
#
1 2
− 21
Φ11
" 2 = 0
# 0 Φ11 Λ1 1
(7–20)
The eigensolutions with eigenmodes normalized to modal mass 1 become " λ Λ1 = 1 0
# " √ 1 0 − 42 = 2 λ2 0
0√ + 42
1 2
#
,
Φ11 =
#
"
"
1 √2 2 2
− 12
#
(7–21)
√
2 2
From (7-5) follows
Φ21
" 2 =− 0
#−1 " 0 −1 2 −1
#" 1 −1 √2 2 0 2
− 12 √
2 2
=
√ 1 4
+
√
2 4
− 41 + − 14
1 4
2 4
#
(7–22)
From (7-10) and (7-11) follows " λ Λ2 = 3 0
# " ∞ 0 = 0 λ4
#
0 ∞
,
Φ12
" 0 = 0
0 0
#
Φ22
,
" 1 = 0
0 1
#
(7–23)
After interchanging the degrees of freedom back to the original order (the 1st and 2nd components of Φ11 and Φ12 are placed as the 2nd and 4th component of Φ(j) , the 1st and 2nd components of Φ21 and Φ22 are placed as the 3rd and 1st component Φ(j) ), the following eigensolution is obtained λ1 0 Λ= 0 0
0 λ2 0 0
0 0 λ3 0
√ 2 1 0 2 − 4 0 0 = 0 0 λ4 0
h
Φ= Φ
(1)
Φ
(2)
Φ
(3)
Φ
(4)
i
1 4 1 2√
= 1 4 +
2 √ 4 2 2
1 2
0√ + 42 0 0
0 0 0
0 0 ∞ 0
∞
− 14
1
− 12
0
√ − 41 + 42 √ 2 2
0 0
0 0 1 0
(7–24)
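The condensation of Example 7.1 is reproduced numerically by the following MATLAB sketch (an illustration; the reordering vector merely renumbers the degrees of freedom as described above):

```matlab
% Static condensation, cf. (7-5)-(7-7), for the system of Example 7.1.
% The degrees of freedom are reordered as (2,4,3,1), so that the
% massless dofs come last.
p = [2 4 3 1];
M = diag([0 2 0 1]);
K = [2 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 1];
Mp = M(p, p); Kp = K(p, p);

n1 = 2;                                  % dofs carrying inertia
M11 = Mp(1:n1, 1:n1);
K11 = Kp(1:n1, 1:n1);      K12 = Kp(1:n1, n1+1:end);
K21 = Kp(n1+1:end, 1:n1);  K22 = Kp(n1+1:end, n1+1:end);

Kt11 = K11 - K12*(K22\K21);              % condensed stiffness, cf. (7-7)
[Phi11, Lam1] = eig(Kt11, M11);          % reduced GEVP (7-6)
Phi21 = -K22\(K21*Phi11);                % recovered components, cf. (7-5)
disp(diag(Lam1)')   % expected: 1/2 -/+ sqrt(2)/4 = 0.1464 and 0.8536
```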
7.2 Rayleigh-Ritz Analysis

Consider the generalized eigenvalue problem (6-5). If M is positive definite, then v^T M v > 0 for any v ≠ 0, and the so-called Rayleigh quotient may be defined as

\[
\rho(v) = \frac{v^T K v}{v^T M v}
\tag{7-25}
\]

It can be proved that ρ(v) fulfills the bounding, see Box 7.1,

\[
\lambda_1 \le \rho(v) \le \lambda_n
\tag{7-26}
\]

where λ_1 and λ_n denote the smallest and the largest eigenvalue of the generalized eigenvalue problem. Especially, if v = Φ^(1), where Φ^(1) has been normalized to unit modal mass, it follows that Φ^(1)T M Φ^(1) = 1 and Φ^(1)T K Φ^(1) = λ_1, see (7-34) and (7-35) below. Then ρ(v) = λ_1. This property is contained in the so-called Rayleigh principle

\[
\lambda_1 = \min_{v\in\mathbb{R}^n} \rho(v)
\tag{7-27}
\]

Next, assume that v is M-orthogonal to Φ^(1), so Φ^(1)T M v = 0. Then the following bounding of the Rayleigh quotient may be proved, see Box 7.1

\[
\lambda_2 \le \rho(v) \le \lambda_n
\tag{7-28}
\]

Correspondingly, λ_2 may be evaluated by the following extension of the Rayleigh principle, where the M-orthogonality of the test vector v to the first eigenmode Φ^(1) has been included as a restriction

\[
\lambda_2 = \min_{\substack{v\in\mathbb{R}^n\\ \Phi^{(1)T} M v = 0}} \rho(v)
\tag{7-29}
\]

The corresponding optimal vector will be v = Φ^(2). Generally, if v is M-orthogonal to the first m − 1 eigenmodes Φ^(1), Φ^(2), …, Φ^(m−1), so Φ^(j)T M v = 0, j = 1, …, m − 1, the following bounding of the Rayleigh quotient may be proved, see Box 7.1

\[
\lambda_m \le \rho(v) \le \lambda_n
\tag{7-30}
\]

Correspondingly, λ_m may be evaluated by the following extension of the Rayleigh variational principle, where the restrictions of M-orthogonality of the test vector v to the eigenmodes Φ^(j), j = 1, …, m − 1, are included

\[
\lambda_m = \min_{\substack{v\in\mathbb{R}^n\\ \Phi^{(j)T} M v = 0,\ j = 1,\ldots,m-1}} \rho(v)
\tag{7-31}
\]

The corresponding optimal vector will be v = Φ^(m). The Rayleigh quotient may be used to calculate an upper bound for the lowest eigenvalue λ_1. The quality of the estimate depends on the choice of v: the better the qualitative and quantitative resemblance of v to the shape of the lowest eigenmode, the sharper the calculated upper bound.
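As a small numerical illustration (a sketch; the trial vectors are arbitrary choices, not from the text), the Rayleigh quotient for the system of Example 6.2 bounds the lowest eigenvalue λ_1 = 2 from above:

```matlab
% Rayleigh quotient (7-25) as an upper bound on the lowest eigenvalue,
% illustrated on the matrices of Example 6.2 (lambda_1 = 2).
M = [1/2 0 0; 0 1 0; 0 0 1/2];
K = [2 -1 0; -1 4 -1; 0 -1 2];
rho = @(v) (v'*K*v) / (v'*M*v);

disp(rho([1; 1; 1]))       % exact first mode shape: returns 2
disp(rho([1; 0.8; 1.1]))   % rough guess: about 2.07, slightly above 2
```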
Box 7.1: Proof of the boundings of the Rayleigh quotient

Given the linearly independent eigenmodes Φ^(1), Φ^(2), …, Φ^(n), normalized to unit modal mass. Using the eigenmodes as a vector basis, any n-dimensional vector may be written as

\[
v = q_1\Phi^{(1)} + q_2\Phi^{(2)} + \cdots + q_n\Phi^{(n)}
\tag{7-32}
\]

Insertion of (7-32) into (7-25) provides

\[
\rho(v) = \frac{\displaystyle\sum_{i=1}^n\sum_{j=1}^n q_i q_j \Phi^{(i)T} K \Phi^{(j)}}{\displaystyle\sum_{i=1}^n\sum_{j=1}^n q_i q_j \Phi^{(i)T} M \Phi^{(j)}}
= \frac{q_1^2\lambda_1 + q_2^2\lambda_2 + \cdots + q_n^2\lambda_n}{q_1^2 + q_2^2 + \cdots + q_n^2}
\tag{7-33}
\]

where the orthonormality conditions of the eigenmodes have been used in the last statement, i.e.

\[
\Phi^{(i)T} M \Phi^{(j)} = \begin{cases} 0, & i \ne j\\ 1, & i = j \end{cases}
\tag{7-34}
\]

\[
\Phi^{(i)T} K \Phi^{(j)} = \begin{cases} 0, & i \ne j\\ \lambda_i, & i = j \end{cases}
\tag{7-35}
\]

Given the following ordering of the eigenvalues

\[
0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_{n-1} \le \lambda_n \le \infty
\tag{7-36}
\]

it follows directly from (7-33) that

\[
\rho(v) \ge \frac{q_1^2\lambda_1 + q_2^2\lambda_1 + \cdots + q_n^2\lambda_1}{q_1^2 + q_2^2 + \cdots + q_n^2} = \lambda_1, \qquad
\rho(v) \le \frac{q_1^2\lambda_n + q_2^2\lambda_n + \cdots + q_n^2\lambda_n}{q_1^2 + q_2^2 + \cdots + q_n^2} = \lambda_n
\tag{7-37}
\]

which proves the bounding (7-26).

(7-32) is pre-multiplied with Φ^(j)T M. Then use of (7-34) provides the following expression for the jth modal coordinate

\[
q_j = \Phi^{(j)T} M v
\tag{7-38}
\]

Hence, if v is M-orthogonal to Φ^(j), j = 1, …, m − 1, it follows that q_1 = q_2 = ⋯ = q_{m−1} = 0. In this case (7-33) attains the form

\[
\rho(v) = \frac{q_m^2\lambda_m + q_{m+1}^2\lambda_{m+1} + \cdots + q_n^2\lambda_n}{q_m^2 + q_{m+1}^2 + \cdots + q_n^2}
\tag{7-39}
\]

Proceeding as in (7-37), it then follows that

\[
\rho(v) \ge \frac{q_m^2\lambda_m + q_{m+1}^2\lambda_m + \cdots + q_n^2\lambda_m}{q_m^2 + q_{m+1}^2 + \cdots + q_n^2} = \lambda_m, \qquad
\rho(v) \le \frac{q_m^2\lambda_n + q_{m+1}^2\lambda_n + \cdots + q_n^2\lambda_n}{q_m^2 + q_{m+1}^2 + \cdots + q_n^2} = \lambda_n
\tag{7-40}
\]

which proves the bounding (7-30).
In the so-called Ritz analysis, m linearly independent base vectors Ψ^(1), …, Ψ^(m) are defined, which span an m-dimensional subspace V_m ⊆ V_n. Often the base vectors are determined as the static deflections from m linearly independent load vectors f_1, …, f_m. This is preferred because it is often simpler to specify static loads which will produce displacements qualitatively in agreement with the eigenmodes to be determined by the analysis. The Ritz basis is determined from the equilibrium equation

\[
K\Psi = f
\quad\Rightarrow\quad
\Psi = K^{-1} f
\tag{7-41}
\]

\[
\Psi = \left[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(m)}\right], \qquad
f = \left[f_1\ f_2\ \cdots\ f_m\right]
\tag{7-42}
\]

Then any vector v ∈ V_m can be written in the form

\[
v = q_1\Psi^{(1)} + q_2\Psi^{(2)} + \cdots + q_m\Psi^{(m)}
= \left[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(m)}\right]
\begin{bmatrix} q_1\\ q_2\\ \vdots\\ q_m \end{bmatrix}
= \Psi q, \qquad
q = \begin{bmatrix} q_1\\ q_2\\ \vdots\\ q_m \end{bmatrix}
\tag{7-43}
\]

The idea in Ritz analysis is to insert (7-43) into the Rayleigh quotient (7-25) and determine the modal coordinates q_1, q_2, …, q_m which minimize this quantity. Hence, the following reformulation of the Rayleigh quotient is considered

\[
\rho(q) = \frac{(\Psi q)^T K \Psi q}{(\Psi q)^T M \Psi q} = \frac{q^T \tilde{K} q}{q^T \tilde{M} q}
\tag{7-44}
\]

where

\[
\tilde{M} = \Psi^T M \Psi = [\tilde{M}_{ij}], \quad \tilde{M}_{ij} = \Psi^{(i)T} M \Psi^{(j)}, \qquad
\tilde{K} = \Psi^T K \Psi = [\tilde{K}_{ij}], \quad \tilde{K}_{ij} = \Psi^{(i)T} K \Psi^{(j)}
\tag{7-45}
\]

M̃ and K̃ are denoted the projected mass and stiffness matrices on the subspace spanned by the Ritz basis Ψ. The approximation to λ_1 then follows from (7-27)

\[
\lambda_1 \le \rho_1 = \min_{q\in V_m}\rho(q) = \min_{q_1,\ldots,q_m}
\frac{\displaystyle\sum_{i=1}^m\sum_{j=1}^m q_i \tilde{K}_{ij} q_j}{\displaystyle\sum_{i=1}^m\sum_{j=1}^m q_i \tilde{M}_{ij} q_j}
\tag{7-46}
\]

Generally, ρ_1 is larger than λ_1, in agreement with (7-26). Only if Φ^(1) ∈ V_m, in which case modal coordinates q_1, …, q_m exist so that Φ^(1) = q_1Ψ^(1) + ⋯ + q_mΨ^(m), will ρ_1 = λ_1. The necessary condition for a minimum is that
\[
\frac{\partial}{\partial q_i}\left(\frac{q^T\tilde{K}q}{q^T\tilde{M}q}\right) = 0, \quad i = 1,\ldots,m
\quad\Rightarrow\quad
\frac{\dfrac{\partial}{\partial q_i}\left(q^T\tilde{K}q\right)\cdot q^T\tilde{M}q - q^T\tilde{K}q\cdot\dfrac{\partial}{\partial q_i}\left(q^T\tilde{M}q\right)}{\left(q^T\tilde{M}q\right)^2} = 0
\quad\Rightarrow\quad
\frac{\partial}{\partial q_i}\left(q^T\tilde{K}q\right) - \rho_1\,\frac{\partial}{\partial q_i}\left(q^T\tilde{M}q\right) = 0
\tag{7-47}
\]

Now, ∂/∂q_i (q^T K̃ q) = ∂/∂q_i Σ_j Σ_k q_j K̃_jk q_k = 2 Σ_k K̃_ik q_k, where the symmetry property K̃_jk = K̃_kj (K̃ = K̃^T) has been applied. Similarly, ∂/∂q_i (q^T M̃ q) = 2 Σ_k M̃_ik q_k. Then the minimum condition (7-47) reduces to

\[
\sum_{j=1}^m \tilde{K}_{ij} q_j - \rho_1 \sum_{j=1}^m \tilde{M}_{ij} q_j = 0
\tag{7-48}
\]

From (7-48) it follows that ρ_1 is determined as the lowest eigenvalue of the following generalized eigenvalue problem of dimension m

\[
\tilde{K}q - \rho\tilde{M}q = 0
\tag{7-49}
\]

(7-49) has m eigensolutions (ρ_i, q^(i)), i = 1, …, m. ρ_i becomes an approximation to the ith eigenvalue λ_i. The corresponding approximation to the ith eigenmode is calculated from

\[
\bar\Phi^{(i)} = q_{1,i}\Psi^{(1)} + \cdots + q_{m,i}\Psi^{(m)} = \Psi q^{(i)}, \qquad i = 1,\ldots,m
\tag{7-50}
\]

where q_{1,i}, …, q_{m,i} denote the components of q^(i). The relations (7-50) can be assembled into the matrix equation

\[
\bar\Phi = \Psi Q
\tag{7-51}
\]

\[
\bar\Phi = \left[\bar\Phi^{(1)}\ \bar\Phi^{(2)}\ \cdots\ \bar\Phi^{(m)}\right], \qquad
Q = \left[q^{(1)}\ q^{(2)}\ \cdots\ q^{(m)}\right]
\tag{7-52}
\]

We shall assume that the eigenvectors q^(i) are normalized to unit modal mass with respect to the projected mass matrix, i.e. the following orthonormality properties are fulfilled

\[
q^{(i)T}\tilde{M}q^{(j)} = \begin{cases} 0, & i \ne j\\ 1, & i = j \end{cases}
\tag{7-53}
\]

\[
q^{(i)T}\tilde{K}q^{(j)} = \begin{cases} 0, & i \ne j\\ \rho_i, & i = j \end{cases}
\tag{7-54}
\]

Then the modal masses of the eigenmodes Φ̄ become

\[
\bar\Phi^T M \bar\Phi = (\Psi Q)^T M \Psi Q = Q^T \tilde{M} Q = I
\tag{7-55}
\]

Hence, the approximate eigenmodes Φ̄^(i) will be normalized to unit modal mass if this is the case for the eigenvectors q^(i) with respect to the projected mass matrix. Φ̄ forms an alternative Ritz basis, which in addition is M-orthonormal. Similarly, the approximate eigenmodes are K-orthogonal as follows

\[
\bar\Phi^T K \bar\Phi = (\Psi Q)^T K \Psi Q = Q^T \tilde{K} Q = R
\tag{7-56}
\]

where R is an m-dimensional diagonal matrix with the eigenvalues ρ_1, …, ρ_m in the main diagonal. Obviously, the Rayleigh quotient approach corresponds to m = 1. Hence, Ritz analysis is merely a multi-dimensional generalization, for which reason the name Rayleigh-Ritz analysis has been coined for the method. As a generalization of (7-26), the following boundings can be proved²

\[
\lambda_1 \le \rho_1, \quad \lambda_2 \le \rho_2, \quad \ldots, \quad \lambda_m \le \rho_m \le \lambda_n
\tag{7-57}
\]

² K.-J. Bathe: Finite Element Procedures. Prentice Hall, Inc., 1996.
Box 7.2: Rayleigh-Ritz algorithm

1. Estimate m linearly independent static load vectors f_1, …, f_m, assembled column-wise in the n × m matrix f = [f_1 f_2 ⋯ f_m].
2. Calculate the Ritz basis from Ψ = K⁻¹f, Ψ = [Ψ^(1) Ψ^(2) ⋯ Ψ^(m)].
3. Calculate the projected mass and stiffness matrices in the m-dimensional subspace spanned by the Ritz basis: M̃ = Ψ^T M Ψ, K̃ = Ψ^T K Ψ.
4. Solve the generalized eigenvalue problem of dimension m: K̃Q = M̃QR.
5. Determine approximations to the lowest m eigenvectors from the transformation Φ̄ = ΨQ, Φ̄ = [Φ̄^(1) Φ̄^(2) ⋯ Φ̄^(m)]. The corresponding approximate eigenvalues are contained in the main diagonal of R.
Returning to the static condensation problem in Section 7.1, let us define a Ritz basis of dimension m = n₁ as

\[
\Psi_1 = \begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}
\tag{7-58}
\]

where I is a unit matrix of dimension n₁ × n₁. Given the structure of the mass and stiffness matrices in (7-1), we may then evaluate the following projected matrices

\[
\tilde{M} = \Psi_1^T M \Psi_1
= \begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0\\ 0 & 0 \end{bmatrix}
\begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}
= M_{11}
\tag{7-59}
\]

\[
\tilde{K} = \Psi_1^T K \Psi_1
= \begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}^T
\begin{bmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}
= K_{11} - K_{12}K_{22}^{-1}K_{21} = \tilde{K}_{11}
\tag{7-60}
\]

Hence, (7-49) reduces to the generalized eigenvalue problem (7-6), with Q = Φ₁₁ and R = Λ₁. Consequently, static condensation may be interpreted as merely a Rayleigh-Ritz analysis with the Ritz basis (7-58). The following identity may be proved by insertion

\[
\begin{bmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{bmatrix}^{-1}
\begin{bmatrix} I\\ 0 \end{bmatrix}
=
\begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}
\left(K_{11} - K_{12}K_{22}^{-1}K_{21}\right)^{-1}
\tag{7-61}
\]

Then we may construct an alternative Ritz basis from (7-41) with the static load given as the right-hand side of (7-61), i.e.

\[
\Psi_2 = K^{-1} f
= \begin{bmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{bmatrix}^{-1}
\begin{bmatrix} I\\ 0 \end{bmatrix}
= \begin{bmatrix} I\\ -K_{22}^{-1}K_{21} \end{bmatrix}
\left(K_{11} - K_{12}K_{22}^{-1}K_{21}\right)^{-1}
= \Psi_1\tilde{K}_{11}^{-1}
\tag{7-62}
\]

Hence, the base vectors in Ψ₂ are linear combinations of the base vectors in Ψ₁. Then Ψ₁ and Ψ₂ span the same subspace V_{n₁}, for which reason both bases will determine the same eigenvalues and eigenvectors. The projected mass and stiffness matrices become

\[
\tilde{M}_2 = \Psi_2^T M \Psi_2 = \tilde{K}_{11}^{-1}\Psi_1^T M \Psi_1\tilde{K}_{11}^{-1} = \tilde{K}_{11}^{-1} M_{11} \tilde{K}_{11}^{-1}
\tag{7-63}
\]

\[
\tilde{K}_2 = \Psi_2^T K \Psi_2 = \tilde{K}_{11}^{-1}\Psi_1^T K \Psi_1\tilde{K}_{11}^{-1} = \tilde{K}_{11}^{-1}
\tag{7-64}
\]

Then the modal matrices Q₁ and Q₂, obtained as solutions to (7-49) for the respective Ritz bases, are seen to be related as

\[
Q_1 = \tilde{K}_{11}^{-1} Q_2
\tag{7-65}
\]

(7-65) implies that Φ̄ = Ψ₁Q₁ = Ψ₂Q₂.

Example 7.2: Rayleigh-Ritz analysis

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} \tfrac12 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12 \end{bmatrix}, \qquad
K = \begin{bmatrix} 2 & -1 & 0\\ -1 & 4 & -1\\ 0 & -1 & 2 \end{bmatrix}
\tag{7-66}
\]
which have the exact eigensolutions, cf. Example 6.3 λ1 Λ=0 0
0 λ2 0
2 = 0 0 0
λ3
0
0 0
4 0 0 6
,
h
i
Φ = Φ(1) Φ(2) Φ(3) =
√2 2 √2 2 √ 2 2
√
−1 0 1
2 √2 − 22 √ 2 2
(7–67)
A two dimensional Rayleigh-Ritz analysis is performed, where the static load vectors are estimated as 1 f = 0 0
0 0 1
(7–68)
46
Chapter 7 – APPROXIMATE SOLUTION METHODS
The Ritz basis becomes
2 −1 Ψ = −1 4 0 −1
7 0 1 2 0 = 12 1 1
−1 1 0 −1 0 0 2
1 2 7
(7–69)
The projected mass and stiffness matrices become " 1 29 ˜ M= 144 11
11 29
#
" 1 7 ˜ K= 12 1
,
# 1 7
(7–70)
The eigensolutions with modal masses normalized to 1 become " ρ R= 1 0
# " 2.4 0 = 0 ρ2
# 0 4
,
i h Q = q(1) q(2) =
"
√3 5 √3 5
# 2 −2
(7–71)
The solutions for the eigenvectors become 7 i h 1 (1) ¯ (2) ¯ ¯ = Φ= Φ Φ 2 12 1
1 " 3 √ 2 35 √
7
5
# 2 −2
=
√2 15 √ 5 √2 5
1
0
(7–72)
−1
¯ (2) are calculated exact. This is so, because Φ(2) is placed in the As seen from (7-71) and (7-72) ρ2 = 4 and Φ subspace spanned by the selected Ritz basis as seen from the expansion
Φ(2) = 2Ψ(1) − 2Ψ(2)
7 1 1 2 2 = 2 − 2 = 0 12 12 1 7 −1
(7–73)
Next, a new analysis is performed, where the static load vectors are estimated as 1 f = 1 1
0 1 0
(7–74)
The Ritz basis becomes
2 −1 Ψ = −1 4 0 −1
−1 1 0 −1 1 1 2
5 0 1 1 = 4 6 5 0
1 2 1
The projected mass and stiffness matrices become
(7–75)
7.3 Error Analysis
" 1 41 ˜ = M 36 13
47
13 5
#
" 1 7 ˜ = K 3 2
,
# 2 1
(7–76)
The eigensolutions with modal masses normalized to 1 become " ρ R= 1 0
# " 2 0 = 0 ρ2
# 0 6
,
i h Q = q(1) q(2) =
"√
√
2 √2 2 2
−322
#
(7–77)
√ 9 2 2
The solutions for the eigenvectors become
h
¯ = Φ ¯ (2) ¯ (1) Φ Φ
i
5 1 = 4 6 5
1 " √2 2 √22 1
2
√ # −322 √ 9 2 2
√2 =
2 √2 2 √ 2 2
√
2 √2 2 √2 2 − 2
−
(7–78)
¯ (1) = λ1 , Φ(1) and ρ2 , Φ ¯ (2) = λ3 , Φ(3) . This is so, because Φ(1) and Φ(3) are placed In this case ρ1 , Φ in the sub-space spanned by the selected Ritz basis.
7.3
Error Analysis
¯j , Φ ¯ (j) ), the error vector is defined as Given a certain approximation to the jth eigen-pair (λ ¯j M Φ ¯ (j) (7–79) εj = K − λ Presuming that the eigenvalues have been normalized to unit modal mass, it follows from (6-15) and (6-17) that M = Φ−1
T
I Φ−1
,
K = Φ−1
T
Λ Φ−1
(7–80)
Insertion of (7-80) into (7-79) provides εj = Φ−1
T
¯ j I Φ−1 Φ ¯ (j) Λ−λ
⇒
−1 ¯j I ¯ (j) = Φ Λ − λ ΦT ε j Φ
(7–81)
We shall use the Euclidean vector norm k·kE and the Hilbert matrix norm k·kH in the following. For a definition of these quantities, see Box 7.3. The mentioned norms are compatible, so
(j) ¯
Φ
E
−1 −1
T
¯j I ¯j I ≤ Φ Λ−λ ΦT kεj kE ≤ Φ H Λ−λ
Φ H kεj kE (7–82) H
H
48
Chapter 7 – APPROXIMATE SOLUTION METHODS
The last statement of (7-82) follows from the defining properties of matrix norms, see Box 7.3. ¯ j I) is a diagonal matrix. Then, (Λ− λ ¯ j I)−1 is also a diagonal matrix with the components (Λ− λ ¯ j )−1 , k = 1, . . . , n in the main diagonal. The eigenvalues of a diagonal matrix is equal (λk − λ to the components in the main diagonal. Since, the Hilbert norm of a symmetric matrix is equal to the numerical largest eigenvalue, it follows that
−1
¯
= max
Λ − λj I H
k=1,...,n
1 ¯j | |λk − λ
=
1 ¯j | min |λk − λ
(7–83)
k=1,...,n
The Hilbert norms of Φ and ΦT are identical as stated in Box 7.3. Further, it can be shown that, see Box 7.4
kΦk2H =
1 µ1
(7–84)
where µ1 is the lowest eigenvalue of M. Then, (7-82), (7-83) and (7-84) provides the following bounding of the calculated eigenvalue ¯j λ ¯ j | ≤ 1 kεj kE = 1 |εj | min |λk − λ ¯ (j) kE ¯ (j) | k=1,...,n µ1 kΦ µ1 |Φ
(7–85)
(7-85) is only of value, if µ1 can be calculated relatively easily. This is the case for the special eigenvalue problem, where M = I, which means that µ1 = · · · = µn = 1, so (7-85) reduces to ¯j | ≤ min |λk − λ
k=1,...,n
kεj kE |εj | = ¯ (j) (j) ¯ kΦ kE |Φ |
(7–86)
7.3 Error Analysis
49
Box 7.3: Vector and matrix norms A vector norm is a real number kvk associated to any n-dimensional vector v, which fulfills the following conditions 1. kvk > 0 for v 6= 0 and k0k = 0. 2. kcvk = |c| · kvk for any complex or real number c. 3. ku + vk ≤ kuk + kvk (triangle inequality). The most common vector norms are 1/p P n 1. p-norm (p ∈]0, ∞[): kvkp = |vi |p . i=1
2. One norm (p = 1): kvk1 =
n P
|vi |.
i=1
3. Two norm (p = 2, Euclidean norm): kvk2 = |v| =
P n
|vi |2
1/2
.
i=1
4. Infinity norm (p = ∞): kvk∞ = max |vi |. i=1,...,n
where vi denotes the components of v. Given
1 v = −3 2
⇒
kvk1/2 = kvk = 1 kvk2 = kvk∞ =
√
√ 2 2 = 17.19 1+3+2 =6 1/2 12 + 32 + 22 = 3.74 max(1, 3, 2) = 3 √
1+
3+
(7–87)
A matrix norm is a real number kAk associated to any n × n matrix A, which fulfills the following conditions 1. kAk > 0 for A 6= 0 and k0k = 0. 2. kcAk = |c| · kAk for any complex or real number c. 3. kA + Bk ≤ kAk + kBk (triangle inequality). 4. kABk ≤ kAkkBk.
50
Chapter 7 – APPROXIMATE SOLUTION METHODS
The most common matrix norms are 1. One norm: kAk1 = max
n P
j=1,...,n i=1
2. Infinity norm: kAk∞ = max
|aij |. n P
i=1,...,n j=1
3. Euclidean norm: kAkE =
|aij |.
P n P n
a2ij
1/2
.
i=1 j=1
4. Hilbert norm (spectral norm): kAkH =
max λi
1/2
i=1,...,n T
, where λi is the ith eigen-
value of AAT identical to the eigenvalues of A A, so kAkH = kAT kH . aij denotes the components of A. Notice, if A = AT the eigenvalues of AAT = A2 becomes equal to the square of the eigenvalues of A. Given
"
2 −5 A= 3 −1
#
"
29 11 ⇒ AAT = 11 10
#
⇒
kAk1 = kAk∞ = kAkE = kAk = H
max(2 + 3, 5 + 1) = 6 max(2 + 5, 3 + 1) = 7 1/2 4 + 25 + 9 + 1 = 6.24 r √ 13 3 + 5 = 5.83 2 (7–88)
A matrix norm k · km is said to be compatible to a given vector norm k · kv , if kAvkv ≤ kAkm · kvkv
(7–89)
It can be shown that the Hilbert matrix norm is compatible to the Euclidean vector norm, that the one matrix norm is compatible to the one vector norm, and that the infinity matrix norm is compatible to the infinity vector norm. However, the Euclidean matrix norm is not compatible to the Euclidean vector norm.
7.3 Error Analysis
51
Box 7.4: Hilbert norm of modal matrix Presuming that the columns of the modal matrix have been normalized to unit modal mass, so m = I, it follows from (6-15) that M = ΦT
−1
Φ−1
M−1 = ΦΦT
⇒
(7–90)
From the definition of the Hilbert norm in Box 7.3 and (7-90) follows that kΦk2H becomes equal to the maximum eigenvalue of M−1 . If µ1 , µ2 , . . . , µn denote the eigenvalues of M in ascending order, then the eigenvalues in ascending order of M−1 become 1 , . . . , µ12 , µ11 , so the maximum eigenvalue of M−1 is equal to µ11 . This proves (7-84). µn
Example 7.3: Bound on calculated eigenvalue Given the mass- and stiffness matrices for the following special eigenvalue problem 1 M = 0 0
0 0 1 0 0 1
,
3 −1 0 K = −1 2 −1 0 −1 3
(7–91)
The eigensolutions with modal masses normalized to 1 are given as λ1 Λ=0
0 λ2
0
0
1 0 = 0 0
λ3
0
0 0
3 0
,
h
i
Φ = Φ(1) Φ(2) Φ(3) =
0 4
√
√
6 6 2√ 6 6 √ 6 6
√
2 2
0
√
−
2 2
3 √3 − 33 √ 3 3
(7–92)
¯2, Φ ¯ (2) ), has been calculated to the 2nd eigen-pair (λ2 , Φ(2) ) Assume that the following approximate solution , (λ
¯ 2 = 3.1 λ
,
¯ (2) Φ
1.0 = 0.2 −1.0
⇒
(2) ¯ = 1.4283 Φ
(7–93)
Then, the error vector becomes, cf. (7-79)
3 ε2 = −1 0
−1 0 1 2 −1 − 3.1 0 −1 3 0
0 0 1.0 −0.30 1 0 0.2 = −0.22 0 1 −1.0 −0.10
⇒
ε2 = 0.3852
(7–94)
Since, M = I we may use the simplified result (7-86), which provides ¯ 2 | ≤ 0.3852 = 0.26971 |λ2 − λ 1.4283 ¯ 2 | = 0.1. Actually, |λ2 − λ
(7–95)
52
7.4
Chapter 7 – APPROXIMATE SOLUTION METHODS
Exercises
7.1 Given the following mass- and stiffness matrices 6 −1 0 0 0 0 M = 0 2 1 , K = −1 4 −1 0 −1 2 0 1 1 (a.) Perform a static condensation by the conventional procedure based on (7-5), (7-6), and next by a Rayleigh-Ritz analysis with the Ritz basis given by (7-62).
7.2 Given the following mass- and stiffness matrices 6 −1 0 2 0 0 M = 0 2 1 , K = −1 4 −1 0 1 1 0 −1 2 (a.) Calculate approximate eigenvalues and eigenmodes by a Rayleigh-Ritz analysis using the following Ritz basis 1 1 Ψ = [Ψ(1) Ψ(2) ] = 1 −1 1 1 7.3 Consider the mass- and stiffness matrices in Exercise 7.2, and let 1 v = 1 1 ¯1 = ρ Φ ¯ (1) , as approximate solu¯ (1) = K−1 Mv, and next λ (a.) Calculate the vector Φ tions to the lowest eigenmode and eigenvalue. (b.) Establish the error bound for the obtained approximation to the lowest eigenvalue.
C HAPTER 8 VECTOR ITERATION METHODS 8.1
Introduction
(1) (2) , λ , In structural dynamics only a small number n 1 of the lowest eigen-pairs, λ1 , Φ 2, Φ (n1 ) . . . , λn1 , Φ , where n1 n, are of structural significance. Hence, there is a need for methods, which concentrate on the determination of the low-order modes. This is the underlying motivation for most of the methods described in the following chapters. It should be noticed that if λj is known, then Φ(j) can be determined from the linear, homogeneous equations, cf. (6-5) K − λj M Φ(j) = 0
(8–1)
If λj is an eigenvalue, the coefficient matrix K − λj M is singular. Then, Φ(j) can be determined within a common factor by solving a linear system of n − 1 equations as illustrated in Example 6.3. On the other hand, if Φ(j) is known, the eigenvalue λj can be determined from the Rayleigh quotient, cf. (7-25)
λj =
Φ(j) T KΦ(j) Φ(j) T MΦ(j)
(8–2)
Since, the eigenvalues are determined as solutions to the characteristic equation (6-7), which can only be solved analytically for n ≤ 4, all solution methods for practical problems relies implicitly or explicitly on iterative numerical schemes. Iterative numerical solution methods may be classified in the following categories Vector iteration methods operate directly on the generalized eigenvalue problem (8-1), so that a certain eigenvalue and associated eigenmode are determined iteratively with increasing accuracy. Vector iteration methods are considered both in Chapters 8 and 10.
— 53 —
54
Chapter 8 – VECTOR ITERATION METHODS
Similarity transformation methods transform the generalized eigenvalue problem via a sequence of similarity transformations, so the transformed mass and stiffness matrices eventually attain a diagonal form. These methods are considered in Chapter 9. Characteristic polynomial iteration methods operates directly or indirectly on the characteristic equation (6-7). These methods are dealt with in Section 10.4.
8.2
Inverse and Forward Vector Iteration
The principle in inverse vector iteration may be explained in the following way. Given a start vector, Φ0 . Based on the generalized eigenvalue problem (8-1), we may then calculate a new vector Φ1 as follows KΦ1 = MΦ0
⇒
Φ1 = K−1 MΦ0 = AΦ0
(8–3)
where A = K−1 M
(8–4)
Clearly, if Φ0 = Φ(j) is an eigenmode, then Φ1 = λ1j Φ0 . If not so, we may consider Φ1 as another, and hopefully better approximation to the eigenmode. Next, based on Φ1 we may proceed to calculate a still better approximation Φ2 from KΦ2 = MΦ1
⇒
Φ2 = AΦ1
This proceed may be continued until the convergence criteria Φk+1 = sufficient accuracy.
(8–5) 1 Φ λj k
is fulfilled with
The inverse vector iteration algorithm may then be summarized as follows
Box 8.1: Inverse vector iteration algorithm Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the following items for k = 0, 1, . . . ¯ k+1 = AΦk . 1. Calculate Φ 2. Normalize solution vector to unit modal mass, so ΦTk+1 MΦk+1 = 1: ¯ k+1 Φ Φk+1 = q ¯ T MΦ ¯ k+1 Φ k+1
8.2 Inverse and Forward Vector Iteration
55
Obviously, the algorithm requires that the stiffness matrix is non-singular, so the inverse K−1 exists. By contrast the mass matrix needs not be non-singular as is the case in Example 8.1 below. After convergence the lowest eigenvalue is most accurately calculated from the Rayleigh quotient (7-25). In case the lowest eigenvalue is simple, i.e. that λ1 < λ2 , the inverse iteration algorithm converges towards the lowest eigenpair (λ1 , Φ(1) ). The solution vector obtained after the kth iteration step, Φk , is an n-dimensional vector, which may be expanded in the basis formed by the n undamped eigenmodes as follows Φk = q1,k Φ(1) + q2,k Φ(2) + · · · + qn,k Φ(n) = Φqk q1,k q 2,k Φ = [Φ(1) Φ(2) · · · Φ(n) ] , qk = . . . qn,k
(8–6)
The components of the vector qk are denoted the modal coordinates of the vector Φk . The expansion (8-6) should be considered as formal, since the base vectors Φ(1) , Φ(2) , . . . , Φ(n) are unknown. Actually, the whole analysis deals with the determination of these quantities. ¯ k+1 reads Similarly, the expansion for Φ ¯ k+1 = Φ¯ qk+1 Φ
(8–7)
¯ k+1 . Insertion of (8-6) and (8-7) into the ¯ k+1 denotes a vector of modal coordinates of Φ where q iteration algorithm provides
KΦ¯ qk+1 = MΦqk
⇒
¯ k+1 = ΦT MΦ qk ΦT KΦ q Λ¯ qk+1 = qk
⇒ (8–8)
where the orthogonality properties (6-15) and (6-17) have been used, and the eigenmodes are assumed to be normalized to unit modal mass. The diagonal matrix Λ is given by (6-11). As ¯ k+1 = qk = Ψ(j) , where Ψ(j) signifies the eigenmode in k → ∞ convergence implies that λ1j q the modal space. This means that
56
Chapter 8 – VECTOR ITERATION METHODS
ΛΨ(j) = λj Ψ(j) λ1 0 0 λ 2 . . .. .. 0 0
Ψ(j)
··· ··· ...
⇒ 0 Ψ1 Ψ1 0 Ψ2 Ψ2 = λ j .. .. ... . .
· · · λn Ψn Ψ1 0 . . .. .. Ψ 0 j−1 = Ψj = 1 Ψj+1 0 .. .. . . Ψn 0
⇒
Ψn
(8–9)
The jth component of Ψ(j) is equal to 1, and the remaining components are zero. Let the start vector be given as q0 = [1, . . . , 1]T . Then, the following sequence of results may be calculated from (8-8)
1 λ1
···
1 λ2
.. .
··· ...
0
0
···
1 λn
1 λ1
0
···
0
1 λ2
.. .
··· ...
0
···
0 .. .. = .. . . .
0 q1 = Λ q0 = .. . −1
0 q2 = Λ q1 = .. . −1
0
−1
qk = Λ qk−1
1 1 λ1 1 0 1 λ2 .. .. = .. . . .
0
1 λ1
0 = .. .
0
0
···
1 λ2
.. .
··· ...
0
···
1 λ1 1 λ2
1 λn
0
1 λn
1
⇒
1 2 λ11 2 λ2
1 λn
⇒ ··· ⇒
1 λ2n
1 1 λk−1 λk1 1 1 1k 0 k−1 1 λ2 λ2 = = k .. .. .. . . . λ1
0
1 λn
1 λk−1 n
If λ1 < λ2 ≤ · · · ≤ λn it follows from (8-10) that
1 λkn
1 λ1 k λ2 .. . λ1 k λn
(8–10)
8.2 Inverse and Forward Vector Iteration
1 0 lim λk1 qk = . = Ψ(1) k→∞ ..
57
(8–11)
0
Hence, the algorithm converge to Ψ(1) in the modal space. The corresponding convergence to Φ(1) then takes place in the physical space. As seen from (8-11), |qk | → 0 if λ1 > 1, and |qk | → ∞ if λ1 < 1. This is the rationale behind the normalization to unit modal mass of the iteration vector, performed at each iteration step in the algorithm in Box 8.1. The relative error of the iteration vector after the kth iteration step is defined from
ε1,k
s k 2k 2k 2k λ1 qk − Ψ(1) λ1 λ1 λ1 k (1) = λ = q − Ψ + + · · · + = = k 1 Ψ(1) λ2 λ3 λn
λ1 λ2
k s 2k 2k λ2 λ2 1+ + ··· + λ3 λn
(8–12)
From (8-12) follows, that the relative error at large values of k has the magnitude ε1,k ' λλ12 Based on the asymptotic behavior of the relative error, the convergence rate is defined from k+1 λ1 qk+1 − Ψ(1) ε1,k+1 = = lim k r1 = lim k→∞ ε1,k k→∞ λ1 qk − Ψ(1) q 2k+2 λ2 2k+2 + · · · + λλn2 λ1 λ1 1 + λ3 q lim = 2k 2k k→∞ λ2 λ2 1 + λλ23 + · · · + λλn2
k
.
(8–13)
The last statement of (8-13) presumes that the eigenvalue λ2 is simple, i.e. that λ2 < λ3 . It follows from (8-12) that the smaller is the fraction λλ12 the faster will the convergence to the first eigenmode be. Hence, the convergence rate as defined by (8-13) should be small (despite linguistic logics suggests the opposite). An vector iteration scheme, where the convergence rate is proportional to λλ12 is said to have linear convergence. Hence, inverse vector iteration has linear convergence. The Rayleigh quotient based on Φk = Φqk becomes
ρ(qk ) =
ΦTk KΦk qTk ΦT KΦqk qTk Λqk = = ΦTk MΦk qTk ΦT MΦqk qTk qk
(8–14)
58
Chapter 8 – VECTOR ITERATION METHODS
The relative error of the Rayleigh quotient after the kth iteration step is defined from
ε2,k =
ρ(qk ) − λ1 λ1
(8–15)
From (8-10) follows that
T 1 k λ11 k λ2
λ1 0 0 λ2 qTk Λqk = . .. .. .. . . 1 0 0 λkn T 1
··· ··· ... ···
1 0 λk1 1 0 λk 1 1 1 1 2 = + + + · · · + .. .. λ2k−1 λ2k−1 λ2k−1 2k−1 λ n . . 1 2 3 1 λn k λ n
1
k
k
λ11 k λ T qk qk = .2 ..
λ11 k 1 1 1 1 λ2 . = 2k + 2k + 2k + · · · + 2k . . λ1 λn λ2 λ3
1 λkn
1 λkn
(8–16) Then, (8-15) may be written as
1 = λ1
ε2,k 1+
1
λ1 λ2 λ1 λ2
1 λ2k−1 1
+
1 λ2k 1
1 λ2k−1 2 + λ12k 2
+ +
1 1 + · · · + λ2k−1 λ2k−1 n 3 1 + · · · + λ12k λ2k n 3
λ1 2k−1 λ1 2k−1 λ1 2k−1 + + · · · + λ2 λ3 λn λ1 2k λ1 2k λ1 2k + λ2 + λ3 + · · · + λn
2k−1
1−
λ1 λ2
+ 1
2k−1
λ2 2k−1 1− λ3 2k + λλ12 +
λ1 1− + ··· λ2
λ1 λ3
−1=
−1=
+ ··· +
λ1 2k λ3
+ ···
λ2 2k−1 λn 2k + λλn1
1−
λ1 λn
=
(8–17)
where the dots denote terms, which converge to zero as k → ∞. (8-17) shows that the relative 2k−1 . Hence, error of the Rayleigh quotient at large values of k has the magnitude ε2,k ' λλ12 the relative error on the components of the eigenmode at a certain iteration step, as measured by ε2,k , is significantly larger than the relative error on the eigenvalue estimate, as determined by the Rayleigh quotient. The convergence rate of the Rayleigh quotient is defined from
8.2 Inverse and Forward Vector Iteration
ε2,k+1 r2 = lim = lim k→∞ ε2,k k→∞
λ1 2k+1 λ2 λ1 2k−1 λ2
59
1−
λ1 λ2
+ ···
1−
λ1 λ2
+ ···
=
λ1 λ2
2
(8–18)
Hence, the Rayleigh quotient has quadratic convergence in inverse vector iteration. Example 8.1: Inverse vector iteration Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 7.1. Calculate the lowest eigenvalue and eigenvector by inverse vector iteration using the inverse iteration algorithm described in Box 8.1 with the start vector 1 1 Φ0 = 1 1
(8–19)
The matrix A becomes, cf. (8-5), (7-16)
2 −1 A= 0 0
−1 −1 0 0 0 2 −1 0 0 −1 2 −1 0 0 −1 1 0
0 2 0 0
0 0 0 0
0 0 0 0 = 0 0 0 1
2 4 4 4
0 0 0 0
1 2 3 4
(8–20)
At the 1st and 2nd iteration step the following calculations are performed 0 0 ¯1 = Φ 0 0
2 4 4 4
0 0 ¯2 = Φ 0 0
2 4 4 4
0 0 0 0
3 1 1 2 1 6 = 3 1 7 4
1
⇒
¯ T MΦ ¯ 1 = 136 Φ 1
8
(8–21)
0.25725 3 1 6 0.51450 = Φ1 = √ 136 7 0.60025 0.68599 8
0 0 0 0
1.7150 1 0.25725 2 0.51450 3.4300 = 3 0.60025 4.1160 4.8020 4 0.68599
0.25126 1.7150 1 3.4300 0.50252 √ = Φ = 2 46.588 4.1160 0.60302 0.70353 4.8020
⇒
¯ T MΦ ¯ 2 = 46.588 Φ 2 (8–22)
60
Chapter 8 – VECTOR ITERATION METHODS
The Rayleigh quotient based on Φ2 provides the following estimate for λ1 , cf. (7-25) T 0.25126 2 −1 2 0.50252 −1 0.60302 0 −1 0.70353 0 0 ρ(Φ2 ) = T 0.25126 0 0 0.50252 0 2 0.60302 0 0 0.70353 0 0
0 0 0.25126 −1 0 0.50252 2 −1 0.60302 0.70353 −1 1 = 0.1464646 0 0 0.25126 0 0 0.50252 0 0 0.60302 0
(8–23)
0.70353
1
The exact solutions are given as, cf. (7-24)
√ 2 1 λ1 = − = 0.1464466 2 4
,
Φ
(1)
= 1 4
0.25000 0.50000 = + 42 0.60355 √ 2 0.70711 2 1 4 1 2√
(8–24)
The relative errors, ε1 and ε2 , on the calculation of the eigenvalue and the 1st component of Φ(1) becomes |Φ2 − Φ(1) | 0.00458 −3 = 4.22 · 10 ε1,2 = = 1.0848 |Φ(1) | ε2,2
ρ(Φ2 ) − λ1 0.1464646 − 0.1464466 = 1.23 · 10−4 = = λ1 0.1464466
(8–25)
As seen the relative error on the components of the eigenmode is significantly larger than the error on the the Rayleigh quotient.
The generalized eigenvalue problem (6-5) may be reformulated on the form MΦ(1) = λ1 MK−1 MΦ(1) Ψ(1) = λ1 MK−1 Ψ(1)
⇒
⇒
K−1 Ψ(1) = λ1 K−1 MK−1 Ψ(1)
,
Ψ(1) = MΦ(1)
(8–26)
From (8-26) the following Rayleigh quotient may be defined
ρ(v) =
vT K−1 v vT K−1 MK−1 v
(8–27)
If v = Ψ(1) = MΦ(1) then (8-4) provides the limit λ1 . An inverse vector iteration procedure based on the formulation (8-26), (8-27) has been indicated in Box 8.2. The lowest eigenmode
8.2 Inverse and Forward Vector Iteration
61
Φ(1) can only be retrieved after convergence, if M−1 exists.
Box 8.2: Alternative inverse vector iteration algorithm Given start vector Ψ0 . Repeat the following items for k = 0, 1, . . . 1. Calculate vk+1 = K−1 Ψk . ¯ k+1 = Mvk+1 . 2. Calculate Ψ 3. Calculate the Rayleigh quotient (8-29) for the test vector Ψk by T vk+1 Ψk ΨTk K−1 Ψk = T −1 ρ Ψk = T ¯ Ψk K MK−1 Ψk vk+1 Ψk+1 4. Normalize the new solution vector, so ΨTk+1 K−1 MK−1 Ψk+1 = 1 ! ¯ k+1 ¯ k+1 Ψ Ψ =p T Ψk+1 = q T Ψk K−1 MK−1 Ψk ¯ vk+1 Ψk+1 5. After convergence the lowest eigenmode at the same iteration step is calculated from Φk+1 = M−1 Ψk+1 .
Example 8.2: Alternative inverse vector iteration Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 6.2. Calculate the lowest eigenvalue and eigenvector by inverse vector iteration using the alternative inverse vector iteration algorithm in Box 8.2 with the start vector 1 Φ0 = 1 1
(8–28)
The inverse stiffness matrix becomes, cf. (6-44)
K−1
2 −1 = −1 4 0 −1
−1 7 0 1 = 2 −1 12 1 2
2 1 4 2 2 7
At the 1st and 2nd iteration steps the following calculations are performed
(8–29)
62
Chapter 8 – VECTOR ITERATION METHODS
7 2 1 1 1 v = 2 4 2 1 = 1 12 1 2 7 1 1 0 0 5 2 ¯1 = 1 Ψ 0 1 0 4 = 6 1 0 0 2 5
5 1 4 6 5 5 1 8 12 5
,
T 5 ¯1 = 1 v1T Ψ 4 6 · 12 5
5 41 8 = 36 5 (8–30)
T 5 1 T v1 Ψ 0 36 84 = 2.0488 = = ρ Ψ 4 1 = 0 ¯1 6 · 41 41 v1T Ψ 5 1 5 0.3904 ¯ Ψ1 1 q 8 = 0.6247 Ψ1 = p = T ¯ 41 v1 Ψ 1 12 · 36 5 0.3904 7 2 1 0.3904 0.3644 1 v2 = 2 4 2 0.6247 = 0.3384 12 1 2 7 0.3904 0.3644 1 0 0 0.3644 0.1822 2 ¯2 = Ψ 0 1 0 0.3384 = 0.3384 , 0 0 12 0.3644 0.1822
T 0.3644 ¯2 = v2T Ψ 0.3384 0.3644
T 0.3644 0.3904 v T Ψ1 1 ρ Ψ1 = 2T ¯ = 0.3384 0.6247 = 2.0055 0.2473 v Ψ 2 2 0.3644 0.3904 0.1822 0.3664 ¯ Ψ2 1 =√ 0.3384 = 0.6805 Ψ 2 = pv T Ψ ¯2 0.2473 2 0.1822 0.3664
0.1822 0.3384 = 0.2473 0.1822 (8–31)
The lowest eigenvector at the end of the 2nd iteration step becomes
1 2
Φ2 = M−1 Ψ2 = 0 0
0 1 0
−1 0 0.3664 0.7328 0 0.6805 = 0.6805 1 0.3664 0.7328 2
(8–32)
The exact solutions are given as, cf. (6-54) √2 Φ(1) =
2 √2 2 √ 2 2
0.7071 = 0.7071
,
λ1 = 2
(8–33)
0.7071
As for the simple formulation of the inverse vector iteration algorithm the convergence towards the exact eigenvalue takes place as a monotonously decreasing sequence of upper values, ρ(Ψ0 ), ρ(Ψ1 ), . . ..
8.2 Inverse and Forward Vector Iteration
63
The principle in forward vector iteration may also be explained based on the eigenvalue problem (8-1). Given a start vector, Φ0 , a new vector Φ1 may be calculated as follows KΦ0 = MΦ1
⇒
Φ1 = M−1 KΦ0 = BΦ0
(8–34)
where B = M−1 K
(8–35)
Clearly, if Φ0 = Φ(j) is an eigenmode, then Φ1 = λj Φ0 . If not so, a new and better approximation Φ2 may be calculated based on Φ1 as follows Φ2 = BΦ1
(8–36)
The process may be continued until converge is obtained. The forward vector iteration algorithm may be summarized as follows
Box 8.3: Forward vector iteration algorithm Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the following items for k = 0, 1, . . . ¯ k+1 = BΦk . 1. Calculate Φ 2. Normalize solution vector to unit modal mass, so ΦTk+1 MΦk+1 = 1 ¯ k+1 Φ Φk+1 = q ¯ T MΦ ¯ k+1 Φ k+1
Obviously, the algorithm requires that the mass matrix is non-singular, so the inverse M−1 exists. By contrast the stiffness matrix needs not be non-singular. After convergence the eigenvalue is calculated from the Rayleigh quotient. In case the largest eigenvalue is simple, i.e. that λn−1 < λn , the forward iteration algorithm converges towards the largest eigenpair λn , Φ(n) . The convergence rate of the eigenmode estimate is linear, and the convergence rate of the Rayleigh quotient is quadratic in the fraction λn−1 . A proof of this has been given in Section 8.3. λn
64
Chapter 8 – VECTOR ITERATION METHODS
Example 8.3: Forward vector iteration Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 6.2. Calculate the largest eigenvalue and eigenvector by forward vector iteration using the forward vector iteration algorithm in Box 8.3 with the start vector 1 Φ0 = 0 0
(8–37)
The matrix B becomes, cf. (8-35), (6-44)
1 2
B = 0 0
0 1 0
−1 0 2 0 −1 1 0 2
−1 0 4 −2 0 4 −1 = −1 4 −1 −1 2 0 −2 4
(8–38)
At the 1st and 2nd iteration step the following calculations are performed 4 ¯ Φ1 = −1 0
−2 0 1 4 4 −1 0 = −1 −2 4 0 0
4 ¯ Φ2 = −1 0
−2 0 1.3333 6.0000 4 −1 −0.3333 = −2.6667 −2 4 0 0.6667
⇒
¯1 = 9 ¯ T MΦ Φ 1 (8–39)
4 1.3333 1 Φ1 = √ −1 = −0.3333 9 0 0
⇒
¯ 2 = 25.333 ¯ T MΦ Φ 2
6.0000 1.1921 1 Φ2 = √ −2.6667 = −0.5298 25.333 0.6667 0.1325
(8–40)
The Rayleigh quotient based on Φ2 becomes
T 1.1921 2 −1 0 1.1921 −0.5298 −1 4 −1 −0.5298 0.1325 0 −1 2 0.1325 ρ(Φ2 ) = T = 5.404 1 0 0 1.1921 1.1921 2 −0.5298 0 1 0 −0.5298 0 0 12 0.1325 0.1325 The results for the iteration vector and the Rayleigh quotient in the succeeding iteration steps become
(8–41)
8.3 Shift in Vector Iteration
1.0622 Φ3 = −0.6276 0.2897
,
0.9584 Φ4 = −0.6726 0.4204
,
0.8811 Φ5 = −0.6923 0.5149
,
65
ρ(Φ3 ) = 5.697 ρ(Φ4 ) = 5.855 ρ(Φ5 ) = 5.933
(8–42)
The exact solutions becomes, cf. (6-54) Φ(3) =
√
2 2 √2 − 2 √ 2 2
0.7071
= −0.7071
,
λ3 = 6
(8–43)
0.7071
The relative slow convergence of the algorithm to the exact solution is because the fraction λλ23 = 46 is relatively high. Theoretically the relatively errors of the Rayleigh quotient after 5 iterations should be of the magnitude, cf. (8-17)
ε2,5 '
4 2·5−1 6
1−
4 = 0.0087 6
(8–44)
Actually, the error is slightly larger, namely
ε2,5 =
8.3
6 − 5.933 = 0.0112 6
(8–45)
Shift in Vector Iteration
Shift on the stiffness matrix in the eigenvalue problem (8-1) as indicated by (6-102) may be appropriate both in relation to inverse and forward vector iteration, either in order to obtain convergence to other eigen-pairs than (λ1 , Φ(1) ) or (λn , Φ(n) ), or to improve the convergence rate of the iteration process. ˆ = K − ρM. Algorithms in Box Reference til formel (6-102). K is replaced by the matrix K 8.1 and 8.3 are unchanged, if the matrices A and B in (8-6) and (8-35) are redefined as follows ˆ −1 M A=K
(8–46)
66
Chapter 8 – VECTOR ITERATION METHODS
ˆ B = M−1 K
(8–47)
ˆ is defined by (6-103). where K The Rayleigh quotient estimate of the eigenvalue λj after the kth iteration step becomes T ˆ ¯ j = ρ(Φk ) + ρ = Φk KΦk + ρ λ ΦTk MΦk
(8–48)
In the modal space the inverse vector iteration with shift on the stiffness matrix can be written as, cf. (8-8)
ˆ qk+1 = MΦqk ⇒ KΦ¯ ¯ k+1 = ΦT MΦ qk ΦT K − ρM Φ q
⇒
¯ k+1 = qk Λ − ρI q
(8–49)
(8-49) is identical to (8-8), if λj is replaced with λj − ρ. With the same start vector q0 = [1, . . . , 1]T as in (8-10), the solution vector after the kth iteration step becomes, cf. (8-10)
1 (λ1 −ρ)k
.. . (λj−11−ρ)k 1 1 = qk = k (λj −ρ) (λj − ρ)k 1 (λj+1 −ρ)k . . . 1 (λn −ρ)k
λj −ρ k λ1 −ρ
λj −ρ k λj−1 −ρ 1 λj −ρ k λj+1 −ρ .. . .. .
(8–50)
λj −ρ k λn −ρ
where the jth eigenvalue fulfills
|λj − ρ| = min |λi − ρ| i=1,...,n
It then follows from (8-50) that
(8–51)
8.3 Shift in Vector Iteration
67
0 . .. 0 k lim λj − ρ qk = 1 = Ψ(j) k→∞ 0 .. . 0
(8–52)
Hence, for a value of ρ fulfilling (8-51) the algorithm converge to Ψ(j) in the modal space. In physical space the algorithm then converge to Φ(j) . The convergence rate of the eigenmode becomes, cf. (8-13) λj−1 − ρ λj − ρ , r1 = max λj − ρ λj+1 − ρ
(8–53)
Then, the corresponding convergence rate of the Rayleigh quotient is given as r2 = r12 .
a) 0
λ1
λj−1
λj ρ
λj+1
λn−1
λn
0
ρ λ1
λj−1
λj
λj+1
λn−1
λn
0
λ1
λj−1
λj
λj+1
λn−1
λn ρ
b)
c)
λ
λ
λ
Fig. 8–1 Optimal position of shift parameter at inverse vector iteration. a) Convergence towards λj . b) Convergence towards λ1 . c) Convergence towards λn .
In case inverse vector iteration towards the jth eigenmode is attempted, the shift parameter should be place in the vicinity of λj as shown on Fig. 8.1a in order to obtain a small convergence rate. It should be emphasized that any inverse vector iteration with shift should be accompanied with a Sturm sequence check to insure that the calculated eigenvalue is indeed the λj . At inverse vector iteration towards the lowest eigenmode the convergence rate r1 = |λ1 − ρ|/|λ2 − ρ| should be minimized. Hence, ρ should be placed close to but below λ1 , as shown on Fig. 8.1b. At inverse vector iteration towards the highest eigenmode the convergence rate r1 = |λn−1 − ρ|/|λn − ρ| should be minimized. Hence, ρ should be placed close to but above λn , as shown
68
Chapter 8 – VECTOR ITERATION METHODS
on Fig. 8.1c. In case of forward with iteration with shift, (8-49) provides the solution after k iterations
k
(λ1 − ρ) .. . (λ − ρ)k j−1 k k qk = (λj − ρ) = (λj − ρ) (λj+1 − ρ)k .. . (λn − ρ)k
λ1 −ρ k λj −ρ
λj−1 −ρ k λj −ρ 1 k λj+1 −ρ λj −ρ .. . .. .
(8–54)
λn −ρ k λj −ρ
where the jth eigenvalue fulfills |λj − ρ| = max |λi − ρ|
(8–55)
i=1,...,n
Clearly, (8-55) has the solutions λj = λ1 or λj = λn . The former occurs, if ρ is closest to λn , and the latter if ρ is closest to λ1 . Then, it follows form (8-54) that
lim
k→∞
1 λj − ρ
(j) k qk = Ψ
,
j = 1, n
(8–56)
For a value of ρ fulfilling (8-55) the algorithm converge to Ψ(j) in the modal space, or to Φ(j) in the physical space. Forward iteration with shift always converge to either the lowest or the highest eigenmode depending on the magnitude of the shift parameter. The convergence rate of the iteration vector becomes λ1 − ρ λj−1 − ρ λj+1 − ρ λn − ρ ,..., , ,..., r1 = max λj − ρ λj − ρ λj − ρ λj − ρ
(8–57)
Shift in forward vector iteration is not as useful as in inverse vector iteration, because the optimal choice of the shift parameter is more difficult to specify. At forward vector iteration towards the highest eigenmode the optimal shift parameter is typically placed somewhere in the middle of the eigenvalue spectrum. Especially for ρ = 0, (8-57) becomes
r1 =
λn−1 λn
as stated in Section 8.2 on forward iteration without shift.
(8–58)
8.3 Shift in Vector Iteration
69
Example 8.4: Forward vector iteration with shift The problem in Example 8.3 is considered again. However, now a shift with ρ = 3 is performed on the stiffness matrix. ˆ becomes, cf. (6-103), (6-44) The matrix K
1 2 −1 0 2 ˆ = −1 K 4 −1 − 3 0 0 0 −1 2
0 1 0
1 0 −1 2 0 = −1 1 1 0 −1 2
0 −1
(8–59)
1 2
The matrix B becomes, cf. (8-47), (6-44)
1 2
B = 0 0
0 1 0
−1 1 0 2 0 −1 1 0 2
−1 0 1 −2 0 1 −1 = −1 1 −1 1 −1 0 −2 1 2
(8–60)
At the 1st and 2nd iteration step the following calculations are performed 1 ¯1 = Φ −1 0
−2 0 1 1 1 −1 0 = −1 −2 1 0 0
1 ¯ Φ2 = −1 0
−2 0 0.8165 2.4495 1 −1 −0.8165 = −1.6330 −2 1 0 1.6330
⇒
¯ 1 = 1.5 ¯ T MΦ Φ 1 (8–61)
1 0.8165 1 Φ1 = √ −1 = −0.8165 1.5 0 0
⇒
¯1 = 7 ¯ T MΦ Φ 1
2.4495 0.9258 1 Φ2 = √ −1.6330 = −0.6172 7 1.6330 0.6172
(8–62)
The Rayleigh quotient estimate of λ3 based on Φ2 becomes, cf. (8-48)
T 1 −1 0 0.9258 0.9258 2 1 −1 −0.6172 −0.6172 −1 1 0.6172 ¯ 3 = ρ(Φ2 ) + 3 = 0.6172 0 −1 2 λ + 3 = 2.9048 + 3 = 5.9048 T 1 0 0 0.9258 0.9258 2 −0.6172 0 1 0 −0.6172 0 0 12 0.6172 0.6172 The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become
(8–63)
70
Chapter 8 – VECTOR ITERATION METHODS
0.7318 Φ3 = −0.7318 0.6273 0.7331 Φ4 = −0.6982 0.6982
,
¯3 λ
,
¯3 λ
,
¯3 λ
0.7100 Φ5 = −0.7100 0.6983
= 5.9891 = 5.9988 = 5.9999
(8–64)
The results in (8-64) should be compared to those in (8-42). As seen the convergence of the shifted problem is much faster.
8.4
Inverse Vector Iteration with Rayleigh Quotient Shift
As demonstrated in Section 8.3 the convergence properties are improved if inverse vector a shift on the stiffness matrix is applied, where the shift parameter ρ ' λ1 . The idea in the present section is to update the shift parameter at each iteration step with the most recent estimate of ¯ 1 is known after the kth the lowest eigenvalue. Assume, that an estimate of the eigenvalue λ ¯ 1 is performed, so a new un-normalized iteration step. Then, a shift with the parameter ρk = λ eigenmode estimate is calculated at the (k + 1)th iteration step from −1 ¯ MΦk Φk+1 = K − ρk M
(8–65)
where Φk is the normalized estimate of the eigenmode after the kth iteration step. A new estimate of the eigenvalue, and hence the shift parameter, then follows from (8-48) ¯ k+1 ¯T K − ρ M Φ Φ k + ρk (8–66) ρk+1 = k+1 ¯ T ¯ k+1 Φk+1 M Φ The convergnce towards λ1 , Φ(1) is not safe, since the first shift determined by ρ1 may cause convergence towards other eigen-pairs, especially if the first and second eigenvalue are close. For this reason the first couples of iteration steps are often performed without shift. When the convergence towards the first eigen-pair takes place, the convergence rate of the Rayleigh quotient estimate of the eigenvalue will be cubic, i.e. r2 = ( λλ12 )3 . Additionally, the length of the converge process is very much dependent on the start vector, as demonstrated in the succeeding Example 8.5. Even though the convergence may be fast it should be realized that the process requires inversion of the matrix K − ρk M at each iteration step, which may be expensive for
8.4 Inverse Vector Iteration with Rayleigh Quotient Shift
71
large systems.
Box 8.4: Algorithm for inverse vector iteration with Rayleigh quotient shift Given start vector Φ0 , which needs not be normalized to unit modal mass, and set the initial shift to ρ0 = 0. Repeat the following items for k = 0, 1, . . . −1 ¯ k+1 = K − ρk M 1. Calculate Φ MΦk . 2. Calculate new shift parameter (new estimate on the eigenvalue) from the Rayleigh ¯ k+1 by quotient estimate based on Φ ¯ k+1 ¯T K − ρ M Φ Φ k estimate on λ1 ρk+1 = k+1 ¯ T + ρk ¯ Φk+1 M Φk+1 3. Normalize the new solution vector to unit modal mass Φk+1 = q
¯ k+1 Φ ¯T MΦ ¯ k+1 Φ k+1
Example 8.5: Inverse vector iteration with Rayleigh quotient shift Consider the generalized eigenvalue problem defined by the mass and stiffness matrices in Example 6.2. Calculate the lowest eigenvalue and eigenvector by inverse vector iteration with Rayleigh quotient shift with the start vector 1 Φ0 = 0 0 At the 1st and 2nd iteration step the following calculations are performed
(8–67)
72
Chapter 8 – VECTOR ITERATION METHODS
1 2 −1 0 0 0 2 −1 0 2 ˆ = K −1 4 −1 − 0 · 0 1 0 = −1 4 −1 0 −1 2 0 0 12 0 −1 2 −1 1 0 0 1 2 −1 0 0.2917 2 ¯ Φ1 = −1 4 −1 0 1 0 0 = 0.0833 ⇒ 0 0 12 0 0 −1 2 0.0417
¯ 1 = 0.05035 ¯ T MΦ Φ 1 (8–68)
T 0.2917 2 −1 0 0.2917 1 ρ1 = 0.0833 −1 4 −1 0.0833 + 0 = 2.8966 0.05035 0.0417 0 −1 2 0.0417 0.2917 1.2999 1 Φ1 = √ 0.0833 = 0.3714 0.05035 0.0417 0.1857 1 2 −1 0 0 0 0.5517 −1.0000 0.0000 2 ˆ = K −1 4 −1 − 2.8966 · 0 1 0 = −1.0000 1.1034 −1.0000 0 −1 2 0 0 12 0.0000 −1.0000 0.5517 −1 1 0 0 1.2999 0.5517 −1.0000 0.0000 −0.0567 2 ¯2 = Φ −1.0000 1.1034 −1.0000 0 1 0 0.3714 = −0.6812 ⇒ 0 0 12 0.1857 0.0000 −1.0000 0.5517 −1.0664 T −0.0567 0.5517 −1.0000 1 ρ2 = −0.6812 −1.0000 1.1034 1.0342 −1.0664 0.0000 −1.0000 −0.0567 −0.0557 1 Φ2 = √ −0.6812 = −0.6698 1.0342 −1.0664 −1.0486
¯ 2 = 1.0342 ¯ T MΦ Φ 2
0.0000 −0.0567 −1.0000 −0.6812 + 2.8966 = 2.5206 0.5517 −1.0664
(8–69) The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become 0.9011 Φ3 = 0.6830 0.5049 −0.6985 Φ4 = −0.7073 −0.7152
ρ3 = 2.0793
,
,
ρ4 = 2.0001
(8–70)
Despite the shifts the convergence is very slow during the 1st and 2nd iteration step. Not until the 3rd and 4th step a fast speed-up of the convergence takes place. This is due to the poor guess of the start vector.
8.5 Vector Iteration with Gram-Schmidt Orthogonalization
8.5
73
Vector Iteration with Gram-Schmidt Orthogonalization
Inverse vector iteration or forward vector iteration with Gram-Schmidt orthogonalization is (1) (n) or λn , Φ are wanted. used, when more than the eigen-pairs λ1 , Φ Assume, that the eigenmodes Φ(1) , Φ(2) , . . . , Φ(m) , m < n, have been determined. Next, the eigenmode Φ(m+1) is wanted using inverse vector iteration by means of the algorithm in Box 8.1. In order to prevent the algorithm to converge toward Φ(1) a cleansing of the vector Φk+1 for information about the first m eigenmodes is performed by a so-called Gram-Schmidt orthogonalization . In this respect the following modified iteration vector iteration algorithm is considered
ˆ k+1 = Φ ¯ k+1 − Φ
m X
cj Φ(j)
(8–71)
j=1
Inspired by the variational problem (7-31), where the test vector v is chosen to be M-orthogonal ˆ k+1 is chosen to be Mto the previous determined eigenmodes, the modified iteration vector Φ (1) (2) (m) orthogonal on Φ , Φ , . . . , Φ , i.e. ˆ k+1 = 0 Φ(i) T MΦ
,
i = 1, . . . , m
(8–72)
(8-71) is premultiplied with Φ(i) T M. Assuming that the calculated eigenmodes have been normalized to unit modal mass, it follows from (6-13), (8-71) and (8-72) that the expansion coefficients c1 , c2 , . . . , cm are determined from
0=Φ
(i) T
¯ k+1 − MΦ
m X
¯ k+1 − ci cj Φ(i) T MΦ(j) = Φ(i) T MΦ
⇒
j=1
¯ k+1 ci = Φ(i) T MΦ
(8–73)
ˆ k+1 is considered as the After insertion of the calculated expansion coefficients into (8-71), Φ (m+1) at the (k + 1)th iteration step. The convergence takes place with the linear estimate to Φ . convergence rate r1 = λλm+1 m+2 In principle the orthogonalization process need only to be performed after the first iteration step, since all succeeding iteration vectors then will be orthogonal to the subspace spanned by Φ(1) , Φ(2) , . . . , Φ(m) . However, round-off errors inevitable introduce information about the first eigenmode. Obviously, the use of this so-called vector deflation method becomes increasingly cumbersome as m increases. A similar orthogonalization process can be performed in relation to forward vector iteration to ensure convergence to eigenmodes somewhat lower than the highest.
74
Chapter 8 – VECTOR ITERATION METHODS
Box 8.5: Algorithm for inverse vector iteration with Gram-Schmidt orthogonalization Given start vector Φ0 , which needs not be normalized to unit modal mass. Repeat the following items for k = 0, 1, . . . ¯ k+1 = K−1 MΦk . 1. Calculate Φ 2. Orthogonalize iteration vector to previous calculated eigenmodes Φ(j) , j = 1, . . . , m ¯ k+1 − ˆ k+1 = Φ Φ
m X
cj Φ(j)
,
¯ k+1 cj = Φ(j) T MΦ
j=1
3. Normalize the orthogonalized iteration vector to unit modal mass Φk+1 = q
ˆ k+1 Φ ˆT MΦ ˆ k+1 Φ k+1
Example 8.6: Inverse and forward vector iteration with Gram-Schmidt orthogonalization Given the following mass- and stiffness matrices 2 0 M= 0 0
0 2 0 0
0 0 1 0
0 0 0
5 −4 6 −4 K= 1 −4 0 1
,
1
1 0 −4 1 6 −4 −4 5
(8–74)
Further, assume that the lowest and highest eigenmodes have been determined by inverse and forward vector iteration
Φ(1)
0.31263 0.49548 = 0.47912 0.28979
,
Φ(4)
−0.10756 0.25563 = −0.72825 0.56197
(8–75)
Calculate Φ(2) by inverse vector iteration with deflation, and Φ(3) by forward vector iteration with deflation. In both cases the following start vector is used 1 1 Φ0 = 1 1 The matrices A and B become
(8–76)
8.5 Vector Iteration with Gram-Schmidt Orthogonalization
2.4 3.2 A = K−1 M = 2.8 1.6
3.2 5.2 4.8 2.8
2.5 −2.0 B = M−1 K = 1.0 0.0
1.4 2.4 2.6 1.6
−2.0 3.0 −4.0 1.0
0.8 1.4 1.6 1.2
0.5 −2.0 6.0 −4.0
0.0 0.5 −4.0 5.0
75
(8–77)
At the 1st iteration step in the inverse iteration process towards Φ(2) the following calculations are performed 7.8 1 2.4 3.2 1.4 0.8 3.2 5.2 2.4 1.4 1 11.2 ¯1 = ¯ 1 = 24.7067 Φ = ⇒ c1 = Φ(1) T MΦ 11.8 1 2.8 4.8 2.6 1.6 7.2 1 1.6 2.8 1.6 1.2 0.07595 0.31263 7.8 11.2 0.49548 −0.04158 ˆ T MΦ ˆ 1 = 0.01801 ˆ1 = Φ ⇒ Φ = − 24.7067 1 −0.03740 0.47912 11.8 0.04016 0.28979 7.2 0.56599 0.07595 1 −0.04158 −0.30989 √ = Φ = 1 0.01801 −0.03740 −0.27871 0.29927 0.04016
(8–78)
The results for the iteration vector in the succeeding iteration steps become 0.61639 −0.14318 Φ2 = −0.42383
−0.13960
0.53412 0.02582 Φ3 = −0.48439 −0.43985 .. . 0.44527 0.12443 Φ13 = −0.48944 −0.57702
The process converged with the indicated digit after 13 iterations.
(8–79)
76
Chapter 8 – VECTOR ITERATION METHODS
At the 1st iteration step in the forward iteration process towards Φ(3) the following calculations are performed 1.0 1 2.5 −2.0 0.5 0.0 −2.0 3.0 −2.0 0.5 1 −0.5 ¯1 = ¯ 1 = 1.38144 Φ = ⇒ c4 = Φ(4) T MΦ 1.0 −4.0 6.0 −4.0 1 −1.0 2.0 1 0.0 1.0 −4.0 5.0 1.14859 −0.10756 1.0 −0.5 0.25563 −0.85314 ˆ T MΦ ˆ1 = Φ ¯ 1 − c4 Φ(4) = ˆ 1 = 5.59161 Φ ⇒ Φ = − 1.38144 1 −0.72825 0.00604 −1.0 1.22367 0.56197 2.0 0.48573 1.14859 1 −0.85314 −0.36079 √ = Φ = 1 5.59161 0.00604 0.00256 0.51748 1.22367 (8–80) The results for the iteration vector in the succeeding iteration steps become 0.44542 −0.41392 Φ2 = −0.02891 0.50962 0.44063 −0.41617 Φ3 = −0.02534 0.51445 .. . 0.43867 −0.41674 Φ9 = −0.02322 0.51696
(8–81)
The process converged with the indicated digit after 9 iterations. Based on the Rayleigh quotient estimates of the obtained eigenmodes the following eigenvalues may be calculated, cf. (8-2) λ1 0 Λ= 0 =
0
0
λ2
0
0
λ3
0
0
0.09654 0 = 0 0 0
0
λ4
0
0
0
0
1.39147
0
0
0
4.37355
0
0
0
10.6384
(8–82)
8.6 Exercises
8.6
77
Exercises
8.1 Given the following mass- and stiffness matrices 6 −1 0 2 0 0 M = 0 2 1 , K = −1 4 −1 0 −1 2 0 1 1 (a.) Perform two inverse iterations, and then calculate an approximation to λ1 . (b.) Perform two forward iterations, and then calculate an approximation to λ3 .
8.2 Given the following mass- and stiffness matrices 1 0 0 2 −1 0 2 M = 0 1 0 , K = −1 4 −1 0 0 12 0 −1 2 The eigenmodes Φ(1) are Φ(3) are known to be, cf. (6-54) √2 Φ(1) =
2 √2 2 √ 2 2
,
Φ(3) =
√
2 2 √ 2 − 2 √ 2 2
(a.) Calculate Φ(2) by means of Gram-Schmidt orthogonalization, and calculate all eigenvalues.
78
Chapter 8 – VECTOR ITERATION METHODS
C HAPTER 9 SIMILARITY TRANSFORMATION METHODS
9.1
Introduction
Iterative similarity transformation methods are based on a sequence of similarity transformations of the original generalized eigenvalue problem in order to reduce this to a simpler form. The general form of a similarity transformation is defined by the following coordinate transformation of the eigenmodes
Φ(j) = PΨ(j)
(9–1)
where P is the transformation matrix, and Φ(j) and Ψ(j) signify the old and the new coordinates of the eigenmode. Then, the eigenvalue problem (6-5) may be written ˜ (j) ˜ (j) = λj MΨ KΨ ˜ = PT KP , K
(9–2)
˜ = PT MP M
The eigenvalues λj are unchanged under a similarity transformation, whereas the eigenmodes are related by (9-1). In the iteration process the transformation matrix P is determined, so this matrix converge toward the modal matrix Φ = [Φ(1) Φ(2) · · · Φ(n) ]. On condition that the eigenmodes have been normalized to unit modal mass, it follows from (6-15) and (6-17) that ˜ = ΦT MΦ = I. Hence, after convergence of the iteration process ˜ = ΦT KΦ = Λ, and M K the eigenmodes are stored column-wise in P = Φ, and the eigenvalues are stored in the main ˜ = Λ. By contrast to vector iteration methods similarity diagonal of the diagonal matrix K transformation methods determine all eigen-pairs λj , Φ(j) , j = 1, . . . , n. The general format of the similarity iteration algorithm has been summarized in Box 9.1. — 79 —
80
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Box 9.1: Iterative similarity transformation algorithm Let M0 = M, K0 = K and Φ0 = I. Repeat the following items for k = 0, 1, . . . 1. Calculate appropriate transformation matrix Pk at the kth iteration step. 2. Calculate updated transformation matrix and transformed mass and stiffness matrices Φk+1 = Φk Pk , Mk+1 = PTk Mk Pk , Kk+1 = PTk Kk Pk After convergence: k = K∞ , m = M∞ λ j1 0 · · · 0 0 λ 0 j2 · · · Λ= . = m−1 k , . . . .. . . .. .. 0 0 · · · λ jn
1 Φ = Φ(j1 ) Φ(j2 ) · · · Φ(jn ) = Φ∞ m− 2
Orthonormal transformation matrices fulfill, cf. (6-19) T P−1 k = Pk
(9–3)
For transformation methods operating on the generalized eigenvalue problem, such as the general Jacobi iteration method considered in Section 9.2, the transformation matrices Pk are not orthonormal, in which case Mk and Kk converge towards the diagonal matrices m and k as given by (6-16) and (6-18). The eigenvalue matrix Λ and the normalized modal matrix Φ are 1 retrieved as indicated in Box 9.1, where m− 2 denotes a diagonal matrix with the components p 1/ Mj in the main diagonal. Some similarity transformation algorithms are devised for the special eigenvalue problem, as is the case for the special Jacobi iteration method in Section 9.1, and the Householder-QR iteration method in Section 9.3. Hence, application of these methods require an initial similarity transformation from a GEVP to a SEVP as explained in Section 6.5. This may be achieved by specifying the transformation matrix of the transformation k = 0 in Box 9.1 as P0 = S−1
(9–4)
where S fulfills (6-109). Then, M1 = I. If the succeeding similarity transformation matrices are orthonormal, then all transformed mass matrices become identity matrices as seen by induction from Mk+1 = PTk Mk Pk = PTk IPk = I. Moreover, Φk+1 is orthonormal at each iteration step, as seen by induction from ΦTk+1 Φk+1 = PTk ΦTk Φk Pk = PTk IPk = I.
9.2 Special Jacobi Iteration
81
Finally, it should be noticed that after convergence the sequence of eigenvalues in the main diagonal of Λ and the eigenmodes in Φ is not ordered in ascending magnitude of the corresponding eigenvalues as indicated in Box 9.1, where the set of indices (j1 , j2 , . . . , jn ) denotes an arbitrary permutation of the numbers (1, 2, . . . , n).
9.2
Special Jacobi Iteration
The special Jacobi iteration algorithm operates on the special eigenvalue problem, so M = I at the outset. The idea is to ensure during the kth transformation that the off-diagonal component Kij,k , entering the ith and jth row and column of Kk , becomes zero after the similarity transformation. The transformation matrix is given as 1 0 . .. 0 0 Pk = .. . 0 0 .. .
i 0 ··· 0 1 ··· 0 .. . . .. . . . 0 · · · cos θ 0 ··· 0 .. .. . ··· . 0 · · · sin θ 0 ··· 0 .. .. . ··· .
0 0 ···
0
j 0 ··· 0 0 ··· 0 .. .. . ··· . 0 · · · − sin θ 1 ··· 0 .. . . .. . . . 0 ··· cos θ 0 ··· 0 .. .. . ··· . 0 ···
0
0 0 .. . 0 i 0 .. . j 0 · · · 0 1 · · · 0 .. . . .. . . . 0 ··· 1 0 ··· 0 ··· .. . ··· 0 ··· 0 ··· .. . ···
(9–5)
Basically, (9-5) is a identity matrix, where only the components Pii , Pij , Pji and Pjj are differing. Obviously, (9-5) is orthonormal. The components of the updated similarity transformation matrix, and the transformed stiffness matrix become (
Φli,k+1 = Φli,k cos θ + Φlj,k sin θ , Φlj,k+1 = Φlj,k cos θ − Φli,k sin θ ,
l = 1, . . . , n l = 1, . . . , n
Kii,k+1 = Kii,k cos2 θ + Kjj,k sin2 θ + 2Kij,k cos θ sin θ 2 2 Kjj,k+1 = Kjj,k cos θ +Kii,k sin θ − 2Kij,k cos θ sin θ Kij,k+1 = Kjj,k − Kii,k cos θ sin θ + Kij,k cos2 θ − sin2 θ Kli,k+1 = Kil,k+1 = Kli,k cos θ + Klj,k sin θ , l 6= i, j Klj,k+1 = Kjl,k+1 = Klj,k cos θ − Kli,k sin θ , l 6= i, j
(9–6)
(9–7)
The remaining components of Φk+1 and Kk+1 are identical to those of Φk and Kk . Hence, only the ith and jth row and column of Kk are affected by the transformation.
82
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Box 9.2: Special Jacobi iteration algorithm Let M0 = I, K0 = K and Φ0 = I. Repeat the following items for the sweeps m = 1, 2, . . . 1. Specify omission criteria εm in the mth sweep. 2. Check, if the component Kij,k in the ith row and jth column of Kk fulfills the criteria s 2 Kij,k < εm Kii,k Kjj,k 3. If the criteria is fulfilled, then skip to the next component in the sweep. Else perform the following calculations (a.) Calculate the transformation angle θ from (9-8), and then the transformation matrix Pk as given by (9-5). (b.) Calculate the components of the updated similarity transformation matrix Φk+1 = Φk Pk , and the transformed stiffness matrix Kk+1 = PTk Kk Pk from (9-6) and (9-7). Notice that k after the mth sweep is of the magnitude 1 (n − 1)n · m. 2 After convergence: Φ∞ = Φ = [Φ(j1 ) Φ(j2 ) · · · Φ(jn ) ] ,
λ j1 0 · · · 0 0 λ 0 j2 · · · =Λ= . . . . . . . . . . . . 0 0 · · · λ jn
K∞
Next, the angle θ is determined, so the off-diagonal component Kij,k+1 becomes equal to zero 1 Kii,k − Kjj,k sin 2θ + Kij,k cos 2θ = 0 ⇒ 2 2Kij,k 1 (9–8) θ = arctan , Kii,k 6= Kjj,k 2 Kii,k − Kjj,k θ = π , Kii,k = Kjj,k 4 Notice, that even though Kij,k+1 = 0 after the transformation, a subsequent transformation involving either the ith or jth row or column may reintroduce a non-zero value at this position. Optimally, Kij,k should be selected as the numerically largest off-diagonal component in Kk . However, in practice the iteration process is often performed in so-called sweeps, where all 1 (n − 1)n components above the main diagonal in turn are selected as the critical element to 2 become zero after the transformation. In this case the method is combined with a criteria for Kij,k+1 = −
9.2 Special Jacobi Iteration
83
omission of the similarity transformation, in case the component is numerically small. The transformation is omitted, if
s
2 Kij,k < εm Kii,k Kjj,k
(9–9)
where εm is the omission value in the mth sweep. Finally, it should be noticed that if K0 has a banded structure, so non-zero components are grouped in a band around the main diagonal, the banded structure is not preserved during the transformation process as seen from Example 9.1, where the initial matrix K0 is on a three diagonal form, whereas the transformed matrix K1 is full, see (9-11) below. The special Jacobi iteration algorithm can be summarized as indicated in Box 9.2.
Example 9.1: Special Jacobi iteration Given a special eigenvalue problem with the stiffness matrix
2 −1 K = K0 = −1 4 0 −1
0 −1 2
,
1 Φ0 = 0 0
0 0 1 0 0 1
(9–10)
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) : ( 1 2 · (−1) cos θ = 0.9239 θ = arctan = 0.3927 ⇒ 2 2 − 4 sin θ = 0.3827 0.9239 −0.3827 0 P0 = 0.3827 0.9239 0 0 0 1 0.9239 −0.3827 0 1.5858 T Φ1 = Φ0 P0 = 0.3827 0.9239 0 , K1 = P0 K0 P0 = 0 0 0 1 −0.3827
0 4.4142 −0.9239
−0.3827 −0.9239 2 (9–11)
84
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Next, the calculations are performed for (i, j) = (1, 3) : ( 1 2 · (−0.3827) cos θ = 0.8591 θ = arctan = 0.5374 ⇒ 2 1.5858 − 2 sin θ = 0.5119 0.8591 0 −0.5119 P1 = 0 1 0 0.5119 0 0.8591 0.7937 −0.3827 −0.4729 1.3578 T Φ2 = Φ1 P1 = = P K P = , K 0.3287 0.9238 −0.1959 −0.4729 2 1 1 1 0.5119 0 0.8591 0
−0.4729 4.4142 −0.7937
0 −0.7937 2.2280 (9–12)
−0.4498 4.6720 0
−0.1461 0 1.9703 (9–13)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) : ( 1 2 · (−0.7937) cos θ = 0.9511 θ = arctan = −0.3140 ⇒ 2 4.4142 − 2.2280 sin θ = −0.3089 1 0 0 P2 = 0 0.9511 0.3089 0 −0.3089 0.9511 0.7937 −0.2179 −0.5680 1.3578 T Φ3 = Φ2 P2 = 0.3287 0.9392 0.0991 , K3 = P2 K2 P2 = −0.4498 0.5119 −0.2653 0.8171 −0.1461
Φ3 and K3 represents the estimates of the modal matrix Φ and Λ after the 1st sweep. As seen the K12,1 = 0, whereas K12,2 = −0.4729. This is in agreement with the statement above, that off-diagonal components set to zero in one iteration, may attain non-zero values in a later iteration. Comparison of K0 to K3 shows that the numerical maximum off-diagonal component has decreased from | − 1| = 1 to | − 0.4498| after the 1st sweep. Hence, the algorithm is converging. At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the eigenvalues 0.6276 Φ = 0.4607 6 0.6276 0.6280 = Φ 0.4597 9 0.6280
−0.3258 0.8876 −0.3258
−0.7071 0.0000 0.7071
−0.3251 0.8881 −0.3251
−0.7071 0.0000 0.7071
,
1.2680 K6 = 0.0039 −0.0000
,
1.2679 K9 = −0.0000 −0.0000
0.0039 4.7320 0 −0.0000 4.7321 0
−0.0000 0 2.0000 −0.0000 0 2.0000
(9–14)
As seen the eigenmodes are stored column-wise in Φ according to the permutation (j1 , j2 , j3 ) = (1, 3, 2).
9.3 General Jacobi Iteration
9.3
85
General Jacobi Iteration
The general Jacobi iteration method operates on the generalized eigenvalue problem, i.e. M 6= I. The idea of the transformation is to ensure that during the kth transformation the off-diagonal component Mij,k and Kij,k , entering the ith and jth row and column of Mk and Kk , simultaneous become zero after the similarity transformation.
"
− sin θ cos θ
#
xj
" # β 1
"
1
cos θ sin θ " # 1 α
1
#
xi
Fig. 9–1 Projection of ith and jth column vectors of similarity transformation matrix in the (xi , xj )-plane.
The transformation matrix is given as i 0 0 ··· 0 0 ··· .. .. . . ··· 1 0 ··· 0 1 ··· .. .. . . . . .
j 0 0 ··· 0 0 ··· .. .. . . ··· β 0 ··· 0 0 ··· .. .. . . ···
0 0 .. . 0 i 0 .. . j 0 · · · α 0 · · · 1 0 · · · 0 0 · · · 0 0 · · · 0 1 · · · 0 . .. . . . . . . · · · .. .. · · · .. .. . . .. 0 0 ··· 0 0 ··· 0 0 ··· 1
1 0 . .. 0 0 Pk = .. . 0 0 .. .
0 ··· 1 ··· .. . . . . 0 ··· 0 ··· .. . ···
(9–15)
Because we have to specify requirements for both Mij,k+1 and Kij,k+1 , we need two free parameters α and β in the transformation matrix, where only the angle θ appears in (9-5). As a consequence (9-15) is not orthonormal. Actually, the ith and jth column vectors neither have the length 1 nor are mutual orthogonal, by contrast to the corresponding vectors in (9-5), see Fig. 9-1. The components of the updated similarity transformation matrix and the transformed stiffness and mass matrices become (
Φli,k+1 = Φli,k + α Φlj,k Φlj,k+1 = Φlj,k + β Φli,k
, ,
l = 1, . . . , n l = 1, . . . , n
(9–16)
86
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Mii,k+1 = Mii,k + α2 Mjj,k + 2αMij,k 2 Mjj,k+1 = Mjj,k + β Mii,k + 2βMij,k Mij,k+1 = βMii,k + αMjj,k + Mij,k 1 + αβ Mli,k+1 = Mil,k+1 = Mli,k + αMlj,k , l 6= i, j Mlj,k+1 = Mjl,k+1 = Mlj,k + βMli,k , l 6= i, j
(9–17)
Kii,k+1 = Kii,k + α2 Kjj,k + 2αKij,k 2 Kjj,k+1 = Kjj,k + β Kii,k + 2βKij,k Kij,k+1 = βKii,k + αKjj,k + Kij,k 1 + αβ Kli,k+1 = Kil,k+1 = Kli,k + αKlj,k , l 6= i, j Klj,k+1 = Kjl,k+1 = Klj,k + βKli,k , l 6= i, j
(9–18)
The remaining components of Φk+1 , Mk+1 and Kk+1 are identical to those of Φk , Mk and Kk . Hence, only the ith and the jth row and columns of Kk and Mk are affected by the transformation. Next, the parameters α and β are determined, so the off-diagonal components Mij,k+1 and Kij,k+1 become equal to zero Mij,k+1 = βMii,k + αMjj,k + Mij,k 1 + αβ = 0 Kij,k+1 = βKii,k + αKjj,k + Kij,k 1 + αβ = 0
(9–19)
The solution of (9-19) becomes, see Box 9.3
1 α= a
1 − 2
r
1 + ab 4
!
,
Kjj,k Mij,k − Mjj,k Kij,k Kii,k Mjj,k − Mii,k Kjj,k Kii,k Mij,k − Mii,k Kij,k b= Kii,k Mjj,k − Mii,k Kjj,k a=
α=
s
Kii,k Mij,k − Mii,k Kij,k Kjj,k Mij,k − Mjj,k Kij,k
a β = − α b
, if
,
β=−
Kii,k Mjj,k 6= Mii,k Kjj,k (9–20)
1 α
, if
Kii,k Mjj,k = Mii,k Kjj,k
9.3 General Jacobi Iteration
87
Box 9.3: Proof of equation (9-20) From (9-19) follows Kij,k βKii,k + αKjj,k = βMii,k + αMjj,k Mij,k
⇒
β=−
Kjj,k Mij,k − Mjj,k Kij,k α Kii,k Mij,k − Mii,k Kij,k
(9–21)
Elimination of β in the 1st equation in (9-19) by means of (9-21) provides the following quadratic equation in α Mij,k Kjj,k Mij,k − Mjj,k Kij,k α2 − Mij,k Kii,k Mjj,k − Mii,k Kjj,k α− Mij,k Kii,k Mij,k − Mii,k Kij,k = 0
(9–22)
If Kii,k Mjj,k = Mii,k Kjj,k the coefficient in front of α cancels. Then, in combination to (9-21) the following solutions are obtained for α and β s
α=±
Kii,k Mij,k − Mii,k Kij,k Kjj,k Mij,k − Mjj,k Kij,k
,
β=−
1 α
(9–23)
If Kii,k Mjj,k 6= Mii,k Kjj,k solutions of the quadratic equation for α in combination to (9-21) provides 1 α= a
1 ± 2
r
1 + ab 4
!
,
a β=− α b
(9–24)
where a and b are as given in (9-20). Both sign combinations in (9-23) and (9-24) will do.
The transformations are performed in sweeps as for the special Jacobi method. In this case the criteria for omitting a transformation during the mth sweep may be formulated as s
2 2 Mij,k Kij,k + < εm Kii,k Kjj,k Mii,k Mjj,k
where εm is the omission value in the mth sweep. The general Jacobi iteration algorithm can be summarized as indicated in Box 9.4.
(9–25)
88
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Box 9.4: General Jacobi iteration algorithm Let M0 = M, K0 = K and Φ0 = I. Repeat the following items for the sweeps m = 1, 2, . . . 1. Specify omission criteria εm in the mth sweep. 2. Check, if the components Mij,k and Kij,k in the ith row and jth column of Mk Kk fulfill the criteria s 2 2 Mij,k Kij,k + < εm Kii,k Kjj,k Mii,k Mjj,k 3. If the criteria is fulfilled, then skip to the next component in the sweep. Else perform the following calculations (a.) Calculate the parameters α and β as given by (9-24), and then the transformation matrix Pk as given by (9-15). (b.) Calculate the components of the updated similarity transformation matrix Φk+1 = Φk Pk , and the transformed mass and stiffness matrices Mk+1 = PTk Mk Pk and Kk+1 = PTk Kk Pk from (9-16), (9-17) and (9-18). Notice that k after the mth sweep is of the magnitude 12 (n − 1)n · m. After convergence: k = K∞ , m = M∞ λ j1 0 · · · 0 0 λ 0 j2 · · · Λ= . = m−1 k , . . . . . . . . . . . 0 0 · · · λ jn
1
Φ = [Φ(j1 ) Φ(j2 ) · · · Φ(jn ) ] = Φ∞ m− 2
Example 9.2: General Jacobi iteration Given a generalized eigenvalue problem with the mass and stiffness matrices 0.5 0.5 0 2 −1 0 1 = M = M0 = 0.5 1 0.5 , K = K0 = −1 , Φ 4 −1 0 0 0 0.5 1 0 −1 2 0
0 0 1 0 0 1
(9–26)
9.3 General Jacobi Iteration
89
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) : s 2 · 0.5 − 0.5 · (−1) α = = 0.7071 4 · 0.5 − 1 · (−1) NB : K11,0 M22,0 = K22,0 M11,0 β = − 1 = −1.4142 0.7071 1 −1.4142 0 1 −1.4142 0 P0 = 0.7071 1 0 , Φ1 = Φ0 P0 = 0.7071 1 0 0 0 1 0 0 1 1.7071 0 0.3536 2.5858 0 T T = P M P = = P K P = M , K 0 0.5858 0.5 0 10.8284 1 0 0 1 0 0 0 0 0.3536 0.5 1 −0.7071 −1
−0.7071 −1 2 (9–27)
Next, the calculations are performed for (i, j) = (1, 3) : 2 · 0.3536 − 1 · (−0.7071) ( a= = −1.7071 α = 0.9664 2.5858 · 1 − 1.7071 · 2 ⇒ 2.5858 · 0.3536 − 1.7071 · (−0.7071) β = −0.6443 = −2.5607 b= 2.5858 · 1 − 1.7071 · 2 1 0 −0.6443 1 −1.4142 −0.6443 P1 = 0 1 0 , Φ2 = Φ1 P1 = 0.7071 1 −0.4556 0.9664 0 1 0.9664 0 1 3.3243 0.4832 0 3.0869 −0.9664 T T = P M P = = P K P = M , K 0.4832 0.5858 0.5 −0.9664 10.8284 2 1 1 2 1 1 1 1 0 0.5 1.2530 0 −1
0 −1 3.9844 (9–28)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) : 3.9844 · 0.5 − 1.2530 · (−1) ( a= = 0.2889 α = −0.4702 10.8284 · 1.2530 − 0.5858 · 3.9844 ⇒ 10.8284 · 0.5 − 0.5858 · (−1) β = 0.2543 = 0.5341 b= 10.8284 · 1.2530 − 0.5858 · 3.9844 1 0 0 1 −1.1113 −1.0039 P2 = 0 1 0.2543 , Φ3 = Φ2 P2 = 0.7071 1.2142 −0.2012 0 −0.4702 1 0.9664 −0.4702 1 3.3243 0.4832 0.1229 3.0869 −0.9664 T T = P M P = = P K P = M , K 0.4832 0.3926 0 −0.9664 12.6498 3 2 2 3 2 2 2 2 0.1229 0 1.5452 −0.2458 0
−0.2458 0 4.1761 (9–29)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the transformed mass and stiffness matrices
90
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
0.7494 −1.2825 −1.0742 1.0999 −0.2865 Φ6 = 0.8195 1.0376 −0.6084 0.9213 3.4931 −0.0024 0.0000 M6 = −0.0024 0.3225 0 0.0000 0 1.5517 0.7501 −1.2820 −1.0742 Φ9 = 0.8189 1.1005 −0.2865 1.0379 −0.6076 0.9213 3.4932 −0.0000 0.0000 0.3225 0 M9 = −0.0000 0.0000 0 1.5517
,
3.0336 K6 = 0.0048 −0.0000
0.0048 13.029 0
−0.0000 0 4.2464 (9–30)
,
3.0336 K9 = 0.0000 −0.0000
0.0000 13.029 0
−0.0000 0 4.2464
Presuming that the process has converged after the 3rd sweep the eigenvalues and normalized eigenmodes are next retrieved by the following calculations, cf. Box. 9.4 3.4932 −0.0000 0.0000 0.5350 0 − 12 m = M = = , m −0.0000 0.3225 0 0 1.7608 9 0.0000 0 1.5517 0 0 0 0.8684 0.0000 −0.0000 λ1 0 Λ = 0 λ3 0 = M−1 40.395 −0.0000 9 K9 = 0.0000 0 0 λ2 −0.0000 −0.0000 2.7365 0.4013 −2.2573 −0.8623 1 Φ = Φ(1) Φ(3) Φ(2) = Φ9 m− 2 = 0.4381 1.9378 −0.2300 0.5553 −1.0698 0.7396
0 0 0.8028
⇒
(9–31)
The reader should verify that the solution matrices within the indicated accuracy fulfill ΦT MΦ = I and ΦT KΦ = Λ.
9.4
Householder Reduction
The Householder reduction method operates on the standard eigenvalue problem (SEVP). Hence, a preliminary similarity transformation of the generalized eigenvalue problem (GEVP) to SEVP form must be performed as explained in Section 6.5. The Householder method reduces a symmetric matrix K1 to three diagonal form by totally n−2 consecutive similarity transformation. After the (n − 2)th transformation the stiffness matrix has the form
9.4 Householder Reduction
Kn−1
α1 β1 0 β1 α2 β2 0 β2 α3 = .. .. .. . . . 0 0 0 0 0 0
91
··· ··· ··· ...
0 0 0 .. .
· · · αn−1 · · · βn−1
0 0 0 .. .
βn−1 αn
(9–32)
During the reduction process the numbers α1 , . . . , αn and β1 , . . . , βn−1 , as well as the sequence of transformation matrices P1 , . . . , Pn−2 are determined. Since all transformation matrices become orthonormal all transformed mass matrices remain unit matrices. After completing the Householder reduction process the standard eigenvalue problem with the three diagonal matrix Kn−1 must be solved by some kind of iteration method, which preserves the three diagonal structure of the reduced system matrix, and benefits from this reduced structure in order to improve the calculation time. As mentioned in Section 9.2 this requirement rules out the special Jacobi iteration method. Since, the inverse of a three diagonal matrix is full, inverse vector iteration with Gram-Schmidt orthogonalization must also be avoided. Of the methods discussed hitherto only forward vector iteration with Gram-Schmidt orthogonalization meets the requirement. As wee shall see the requirements are also met by the QR iteration method to be discussed in Section 9.5. Finally, an initial Householder reduction is favorable in relation to characteristic polynomial iteration methods discussed in Section 10.4. The transformation matrix during the kth similarity transformation is given as follows
Pk = I − 2wk wkT
,
|wk | = 1
(9–33)
wk denotes a unit column vector to be determined below. Hence, wkT wk = 1. Obviously, Pk is symmetric, i.e. Pk = PTk . Moreover, Pk is orthonormal as seen from the following derivation T I − 2wk wk = = I− I − 2wk wkT − 2wk wkT + 4 wkT wk wk wkT = I
Pk PTk
PTk = P−1 k
2wk wkT
⇒ (9–34)
As mentioned, this means that the mass matrix remains an identity matrix during the Householder similarity transformations, because this is ensured in the initial transformation from a GEVP to a SEVP, as explained in the remarks subsequent to (9-4).
92
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
wk −2wk (wkT x) wkT x x
l
wkT x Pk x
Fig. 9–2 Geometrical interpretation of the effect of the Householder transformation matrix.
Consider a given column vector x. Then, Pk x = I − 2wk wkT x = x − 2 wkT x wk
(9–35)
Notice that wkT x is a scalar. The transformed vector, Pk x, may be interpreted as a reflection of x in the line l, which is orthogonal to the vector wk and placed in the plane spanned by x and wk as illustrated in Fig. 9-2. At the kth transformation the vector wk has the form
0 . .. k rows 0 0 wk = = wk+1 ¯ w n − k rows k .. .
(9–36)
wn
where 2 ¯ k = wk+1 ¯ kT w + · · · + wn2 = 1 wkT wk = w
(9–37)
Then, the transformation matrix may be written on the following matrix form k n−k columns columns z}|{ z}|{ ¯In−k k rows 0 Pk = ¯k n − k rows 0 P
(9–38) ,
¯ k = ¯Ik − 2w ¯ kw ¯ kT P
where ¯Ik denotes a unit matrix of dimension (n − k) × (n − k).
9.4 Householder Reduction
93
¯ k defining the transformation matrix, the stiffness matrix In order to determine the sub-vector w before the kth similarity transformation is considered, at which stage the stiffness matrix has been reduced to three diagonal form down to and including the (k − 1)th row and column. Hence, the stiffness matrix has the structure n−k columns z}|{ k 0 0 0 α1 β1 0 · · · β1 α2 β2 · · · 0 0 0 0 β2 α3 · · · 0 0 0 (9–39) .. .. . . .. .. .. .. . . . . . . . Kk = 0 0 0 · · · αk−1 βk−1 0 0 0 0 · · · βk−1 Kkk k k k o T ¯ n − k rows Kk 0 0 0 ··· 0 kk ¯ k is a symmetric matrix of the dimension kk is a row vector of the dimension (n − k), and K (n − k) × (n − k) defined as
kk = [Kk k+1 Kk k+2 · · · Kkn ]
Kk+1 k+1 Kk+1 k+2 K k+2 k+1 Kk+2 k+2 ¯k = K .. .. . . Kn k+1 Kn k+2
(9–40) · · · Kk+1 n · · · Kk+2 n .. ... . · · · Knn
(9–41)
Then, with the transformation matrix given by (9-38) the stiffness matrix after the kth transformation becomes n−k columns z}|{ k 0 0 0 α1 β1 0 · · · β1 α2 β2 · · · 0 0 0 0 β2 α3 · · · 0 0 0 .. .. . . .. .. .. .. . . . . . . . T Kk+1 = Pk Kk Pk = 0 0 0 · · · αk−1 βk−1 0 ¯ 0 0 0 · · · βk−1 Kkk k kk Pk o T T T ¯ K ¯ kP ¯k ¯ k n − k rows P 0 0 0 ··· 0 P k k k (9–42)
94
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
where
αk = Kkk
(9–43)
T ¯ k = kk ¯Ik − 2w ¯ kT = kk − 2 kk w ¯k w ¯k ¯ kw kk P
(9–44)
¯ k − 2w ¯ kw ¯TK ¯ kP ¯k = K ¯ k − 2K ¯ kw ¯ kw ¯ kT K ¯ kw ¯ kT + 4 w ¯k w ¯ kT ¯ kw ¯ kT K P k
(9–45)
¯ kw ¯ k is a scalar. Similarly, w ¯k ¯ k is a column vector, kk w ¯ kT K Since, kk is a row vector and w becomes a scalar. If the kth row and column in (9-40) should be on a three-diagonal form, it is required that
¯ T kT = βk e ¯k P k k
,
1 0 ¯k = . e ..
(9–46)
0
¯k is a unit column vector of dimension (n−k). The transformation matrix is symmetric, where e ¯ k kT . Moreover, P ¯ k kT is a reflection of the vector kT in the line l as depicted in ¯ T kT = P so P k k k k k Fig. 9-2, and hence has the length |kTk |. Hence, it follows that βk should be selected as βk = ±|kk |
(9–47)
Then it follows from (9-44) that ¯k w ¯ k = ±|kk |¯ kTk − 2 kk w ek ¯ k = a kTk ∓ |kk |¯ ek w
⇒ (9–48)
¯ k is a scalar, which may be absorbed in the coefficient a. a is where it is noticed that 2 kTk w ¯ k is of unit length. determined so the vector w
9.4 Householder Reduction
95
Box 9.5: Householder reduction algorithm T Transform the GEVP to a SEVP by the similarity transformation matrix P = S−1 , where S is a solution to M = SST , and define the initial updated transformation and stiffness matrices as T T , Φ1 = S−1 K1 = S−1 K S−1 Next, repeat the following items for k = 1, . . . , n − 2 1. Calculate the similarity transformation matrix Pk at the kth similarity transformation by (9-38), (9-50). 2. Calculate updated transformation and stiffness matrices from (9-42), (9-52) Φk+1 = Φk Pk , Kk+1 = PTk Kk Pk After completion of the reduction process the following standard eigenvalue problem is solved by some iteration method Kn−1 V = VΛ Λ is the diagonal eigenvalue matrix of the original GEVP, and V is the orthonormal eigenvector matrix of the three diagonal matrix Kn−1 . Then, the eigenmodes normalized to unit modal mass of the original GEVP are retrieved from the matrix product Φ = Φn−1 V
Both sign combinations in (9-47) and (9-48) will do. However, in order to prevent numerical ¯k the following choice of sign in the problems of the algorithm in the case, where kk ' Kk k+1 e ¯ solutions for βk and wk should be preferred βk = −sign(Kk k+1 |kk |
(9–49)
kT + sign(Kk k+1 )|kk |¯ ek ¯ k = kT w k + sign(Kk k+1 )|kk |¯ e k k
(9–50)
The updated transformation matrix before the kth transformation is partitioned as follows k n−k columns columns z}|{ z}|{ k rows Φ12 Φ11 Φk = n − k rows Φ21 Φ22
(9–51)
96
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
With the transformation matrix as given by (9-38) the transformation matrix after the kth transformation becomes
Φk+1
k n−k columns columns z}|{ z}|{ ¯k k rows Φ12 P Φ11 = ¯k n − k rows Φ21 Φ22 P
(9–52)
Finally, it should be noticed that alternative algorithms for reduction to three diagonal form have been indicated by Givens and Lanczos. Example 9.3: Householder reduction Given a generalized eigenvalue problem with the mass and stiffness matrices given by (8-74). The similarity transformation matrix transforming from a GEVP to a SEVP becomes √ 2 1 0 S = M2 = 0 0
0 √ 2 0 0
0 0 0 1
0 0 1 0
√
2 2
⇒
0 S−1 = 0 0
0
√
2 2
0 0
0 0 1 0
0 0 0
(9–53)
1
Then, the stiffness matrix and updated transformation matrix before the 1st Householder similarity transformation becomes, cf. (6-112), (6-113) √
K1 = S−1 K S−1
T
2 2
0 = 0 0 √
5 2
−2 K1 = √2 2 2 2
0 Φ1 = 0 0
√2 −2 2 6 −4
2 2
0
√
2
−2 3 √ −2 √2
0
√
2 2
0 0
0
√
2 2
0 0
5 0 0 0 0 −4 1 0 1 0 0 1
0 2 2 −4 √
5
0 0 0
0 0 1 0
1
√ 2 −4 1 0 2 6 −4 1 0 −4 6 −4 0 1 −4 5 0
0
√
2 2
0 0
0 0 1 0
0 0 0 1
⇒
(9–54)
At the Householder transformation k = 1 one has 5 α1 = 2 " k1 = −2
√ 2 2
#
0
,
Then, cf. (9-38), (9-49) and (9-50)
√ 3 2 |k1 | = 2
(9–55)
9.4 Householder Reduction
97
√ √ 3 2 3 2 β1 = −sign(−2) = = 2.1213 2 2 √ 3 2 √ 1 −2 −2 − √ 2 3 2 √ 2 w ¯ 1 = a 22 + sign(−2) 0 = a 2 2 0 0 0 −0.9828 0.3333 0 ¯ ¯ ¯ 1w ¯ 1T = 0.3333 0.9428 0 P1 = I1 − 2w 0 0 1
⇒
−0.9856 ¯ 1 = 0.1691 w 0
(9–56)
The stiffness matrix and updated transformation matrix after the Householder transmission k = 1 becomes 2.5000 2.1213 K2 = PT1 K1 P1 = 0 0 0.7071 0 Φ2 = Φ1 P1 = 0 0
2.1213 5.1111 3.1427 −2.0000
0 −0.6667 0.3333 0
0 3.1427 3.8889 −3.5355
0 −2.0000 −3.5355 5.000 (9–57)
0 0 0.2357 0 0.9428 0 0 1
where the transformed matrices are calculated by means of (9-42) and (9-52), respectively. At the Householder transformation k = 2 the following calculations are performed α2 = 5.1111
k = [3.1427 2
(9–58) − 2.0000]
,
|k2 | = 3.7251
β2 = −sign(3.1427) · 3.7251 = −3.7251 " # " #! " # 3.1427 1 6.8678 w ¯2 = a + sign(3.1427) · 3.7251 =a −2.0000 0 −2.0000 " # −0.8436 0.5369 ¯ ¯ ¯ 2w ¯ 2T = P2 = I2 − 2w 0.5369 0.8436
⇒
"
0.9601 ¯2 = w −0.2796
# (9–59)
The stiffness matrix and updated transformation matrix after the Householder transmission k = 2 becomes 2.5000 2.1213 T K3 = P2 K2 P2 = 0 0 0.7071 0 Φ3 = Φ2 P2 = 0 0
2.1213 5.1111 −3.7251 −0.0000
0 −0.6667 0.3333 0
0 −3.7251 7.4120 2.0005
0 −0.0000 2.0005 1.4769
0 0 −0.1988 0.1265 −0.7954 0.5062 0.5369 0.8436
(9–60)
98
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
The reader should verify that the solution matrices within the indicated accuracy fulfill ΦT3 MΦ3 = I and ΦT3 KΦ3 = K3 .
9.5
QR Iteration
As is the case for the Householder reduction method QR-iteration operates on the standard eigenvalue problem, so an initial similarity transformation of the GEVP to a SEVP is presumed. T Let K1 = S−1 K S−1 denote the stiffness matrix after the initial similarity transformation, where S is a solution to M = SST , cf. (6-109), (6-112). QR iteration is based on the following property that any non-singular matrix K can be factorized on the following form K = QR
(9–61)
where Q is an orthonormal matrix, and R is an upper triangular matrix. Hence, Q and R have the form Q = q1 q2 · · · qn r11 r12 r13 0 r22 r23 0 0 r33 R= 0 0 0 .. .. .. . . . 0
0
0
qTk qj = δkj r14 · · · r1n r24 · · · r2n r34 · · · r3n r44 · · · r4n .. . . .. . . . 0 · · · rnn
,
(9–62)
(9–63)
where δij denotes Kronecker’s delta. It should be noticed that the factorization (9-61) holds even for non-symmetric matrices. The orthonormality of Q, which implies that Q−1 = QT , is essential to the method. Based on K1 a sequence of transformed stiffness matrices Kk are next constructed with the QR factors Qk and Rk according to the algorithm Kk = Qk Rk
Kk+1 = QTk Kk Qk = QTk Qk Rk Qk = Rk Qk
(9–64)
Hence, Kk+1 is obtained by a similarity transformation with the transformation matrix Qk . The transformation is reduced to a evaluation of Rk Qk due to the orthonormality property of Qk . For the same reason all transformed mass matrices remain unit matrices.
9.5 QR Iteration
99
Box 9.6: Proof of equation (9-61) Let k1 k2 , . . . , kn denote the column vectors of the matrix K, i.e. K = k1 k2 · · · kn
(9–65)
Since K is non-singular, k1 k2 , . . . , kn are linearly independent, and hence form a vector basis. A new orthonormal vector basis q1 q2 · · · qn linearly dependent on k1 k2 , . . . , kn may then be constructed by a process, which resembles the Gram-Schmidt orthogonalization described in Section 8.5. (9-61) is identical to the following relations k1 = r11 q1 k2 = r12 q1 + r22 q2 .. .
kj = r1j q1 + r2j q2 + · · · + rjj qj = .. . n X k = rkn qk n
j X
(9–66)
rkj qk
k=1
k=1
(9-66) is solved sequentially downwards using the properties of orthonormality of qj . From the 1st equation follows by scalar multiplication with q1 r11 = |k1 |
⇒
q1 =
1 k1 r11
(9–67)
Now, q1 and r11 are known. Scalar multiplication of the 2nd equation with q1 , and use of the orthogonality property qT1 q2 = 0 provides r12 = qT1 k2
⇒
r22 = |k2 − r12 q1 |
⇒
q2 =
1 k2 − r12 q1 r22
(9–68)
At the determination of qj , 1 < j ≤ n, the mutually ortonormal basis vectors q1 , q2 , . . . , qj−1 have already been determined. Scalar multiplication of the jth equation with qk , k = 1, 2, . . . , j − 1, and use of the orthogonality property qTk qj = 0 provides rkj
j−1 X = qTk kj ⇒ rjj = kj − rkj qk k=1
⇒
1 qj = rjj
kj −
j−1 X
rkj qk
!
(9–69)
k=1
Hence a solution fulfilling all requirements has been obtained for the components rkj of R and the column vectors qj of Q, which proves the validity of the factorization (9-61).
100
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Box 9.7: QR iteration algorithm T Transform the GEVP to a SEVP by the similarity transformation matrix P = S−1 , where S is a solution to M = SST , and define the initial updated transformation and stiffness matrices as T T , Φ1 = S−1 K1 = S−1 K S−1 Repeat the following items for k = 1, 2, . . . 1. Perform a QR factorization of the stiffness matrix before the kth similarity transformation Kk = Qk Rk 2. Calculate updated transformation and stiffness matrices by a similarity transformation with the orthonormal transformation matrix Qk Φk+1 = Φk Qk , Kk+1 = QTk Kk Qk = Rk Qk After convergence: λn 0 ··· 0 0 λ 0 n−1 · · · Λ= . = K∞ = R∞ . . . .. . . .. .. 0 0 · · · λ1
,
Φ = [Φ(n) Φ(n−1) · · · Φ(1) ] = Φ∞
Now, it can be proved that
K∞ = R∞
0 λn 0 λ n−1 =Λ= . .. . . . 0 0
··· ··· ...
0 0 .. .
,
Φ∞ = Φ = Φ(n) Φ(n−1) · · · Φ(1) (9–70)
· · · λ1
Qk converge to a unit matrix, as a consequence of K∞ = R∞ . As seen, at convergence the eigen-pairs are order in descending order of the eigenvalues. Moreover, the algorithm converges faster to the lowest eigenmode than to the largest, as is the case for subspace iteration as describes in Section 10.3, a method which has some resemblance to QR iteration. The rate of convergence seems to be rather comparable to that of subspace iteration. These properties have been illustrated in Example 9.4 below. The proof of convergence and the associated determination of the convergence rate is rather tedious and involved, and will be omitted here.
9.5 QR Iteration
101
The general QR iteration algorithm can be summarized as indicated in Box 9.7. Usually, the QR algorithm becomes computational expensive when applied to large full matrices, due to the time consuming orthogonalization process involved in the QR factorization. However, if Kk is on the three diagonal form (9-32), it can be shown that matrices Rk and Qk have the form 0 r11 r12 r13 0 0 r 0 22 r23 r24 0 0 r33 r34 r35 0 0 r44 r45 Rk = 0 0 0 0 0 r55 .. .. .. .. .. . . . . . 0 0 0 0 0
··· ··· ··· ··· ··· ...
q11 q12 q 21 q22 0 q32 0 Qk = 0 0 0 .. .. . . 0 0
··· ··· ··· ··· ··· ...
q13 q23 q33 q42 0 .. .
q14 q24 q34 q44 q54 .. .
q15 q25 q35 q45 q55 .. .
0
0
0
0 0 0 0 0 .. .
· · · rnn
(9–71)
q1n q2n q3n q4n q5n .. .
(9–72)
· · · qnn
Hence, Rk becomes an upper three diagonal matrix with only 3n − 3 nontrivial coefficients rjk versus 12 n(n + 1) for a full matrix Kk . Similarly, Qk contains zeros below the first lower diagonal. As a consequence of the indicated structure of Rk and Qk , the matrix product Kk+1 = Rk Qk will again be a symmetric three diagonal matrix. Hence, this property is preserved for the transformed stiffness matrices during the iteration process. This motivates the application of QR iteration in combination to an initial Householder reduction of the initial generalized eigenvalue problem to three diagonal form, which is known as the HOQR method. Example 9.4: HOQR iteration QR iteration is performed on the stiffness matrix of Example 9.3, which has been reduced to three diagonal form by Householder reduction. Hence, the initial stiffness matrix and updated transformation matrix reads, cf. (9-60) 2.5000 2.1213 K1 = 0 0
2.1213 5.1111 −3.7251 −0.0000
0 −3.7251 7.4120 2.0005
0 −0.0000 2.0005 1.4769
,
0.7071 0 Φ1 = 0 0
0 −0.6667 0.3333 0
0 −0.1988 −0.7954 0.5369
0 0.1265 0.5062
0.8436 (9–73)
At the determination of q1 and r11 in the 1st QR iteration the following calculations are performed, cf. (9-67)
102
2.5000 2.1312 k = 1 0 0
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
,
r11
2.5000 2.1312 = = 3.2787 0 0
0.7625 2.5000 1 2.1312 0.6470 q = = 1 3.2787 0 0 0 0
(9–74)
q2 and r12 , r22 are determined from the following calculations, cf. (9-68) T 2.1213 0.7625 2.1213 5.1111 0.6470 5.1111 k , r = = = 4.9244 2 12 −3.7251 0 −3.7251 0 0 0 2.1213 0.7625 0.6470 5.1111 − 4.9244 · r22 = = 4.5001 −3.7251 0 0 0 −0.3630 0.7625 2.1213 1 0.6470 0.4278 5.1111 q = − 4.9244 · = 2 4.5001 −3.7251 0 −0.8278 0 0 0
(9–75)
q3 and r13 , r23 , r33 are determined from the following calculations, cf. (9-69) 0 −3.7251 k = , r13 = qT1 k3 = −2.4101 , r23 = qT2 k3 = −7.7292 3 7.4120 2.0005 r33 = k3 + 2.4101q1 + 7.7292q2 = 2.6959 −0.3590 1 0.4231 q = k = + 2.4101q + 7.7292q 3 3 1 2 2.6959 0.3761 0.7421
(9–76)
9.5 QR Iteration
103
Finally, q4 and r14 , r24 , r34 , r44 are determined from the following calculations, cf. (9-69) 0 0 k = , r14 = qT1 k4 = 0 , r24 = qT2 k4 = −1.6560 4 2.0005 1.4769 r34 = qT3 k4 = 1.8483 , r44 = k4 − 0q1 + 1.6560q2 − 1.8483q3 = 0.1571 0.3974 1 −0.4684 q4 = k4 − 0q1 + 1.6560q2 − 1.8483q3 = 0.1571 −0.4163 0.6703
(9–77)
Then, at the end of the 1st iteration the following matrices are obtained 0.7625 −0.3630 −0.3590 0.3974 0.6470 0.4278 0.4231 −0.4684 Q = 1 0 −0.8278 0.3761 −0.4163 0 0 0.7421 0.6703 3.2787 4.9244 −2.4101 0 0 4.5001 −7.7292 −1.6560 R = 1 0 0 2.6959 1.8483 0 0 0 0.1571
⇒
0.5392 −0.2567 −0.2539 0.2810 −0.4313 −0.1206 −0.2629 0.4799 Φ = Φ Q = 2 1 1 0.2157 0.8010 0.2175 0.5143 0 −0.4444 0.8280 0.3420 5.6860 2.9115 0 0 2.9115 8.3232 −2.2317 0 K = R Q = 2 1 1 0 −2.2317 2.3854 0.1166 0 0 0.1166 0.1053
(9–78)
As seen the matrices R1 and Q1 have the structure (9-71) and (9-72). Additionally, K2 has the same three diagonal structure as K1 . The corresponding matrices after the 2nd and 3rd iteration become
104
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
0.8901 −0.4279 −0.1566 0.0117 0.4558 0.8356 0.3058 −0.0229 Q2 = 0 −0.3445 0.9362 −0.0702 0 0 0.0748 0.9972 6.3881 6.3850 −1.0171 0 0 6.4780 −2.6866 −0.0402 R = 2 0 0 1.5595 0.1170 0 0 0 0.0968
⇒
0.9458 −0.3230 −0.0345 0.0002 0.3248 0.9404 0.1003 −0.0005 Q3 = 0 −0.1061 0.9943 −0.0051 0 0 0.0051 1.0000 9.0891 4.8514 −0.1745 0 0 5.0643 −0.6610 −0.0008 R = 3 0 0 1.4065 0.0077 0 0 0 0.0965
⇒
0.3629 −0.3577 −0.3795 0.3103 −0.4389 0.1744 −0.1796 0.4947 Φ3 = Φ2 Q2 = 0.5570 0.5021 0.4533 0.4818 −0.2026 −0.6566 0.6648 0.2931 8.5962 2.9525 0 0 2.9525 6.3386 −0.5372 0 K = R Q = 3 2 2 0 −0.5372 1.4687 0.0072 0 0 0.0072 0.0966
0.2270 −0.4134 −0.4242 0.3125 −0.3584 0.3248 −0.1434 0.4954 Φ4 = Φ3 Q3 = 0.6899 0.2442 0.4844 0.4793 −0.4049 −0.6226 0.6036 0.2900 10.172 1.6451 0 0 1.6451 4.8328 −0.1492 0 K = R Q = 4 3 3 0 −0.1492 1.3986 0.0005 0 0 0.0005 0.0965
(9–79)
(9–80)
As seen from R3 and K4 the terms in the main diagonal have already after the 3rd iteration grouped in descending magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in Box 9.7. Moreover, for both matrices convergence to the lowest eigenvalue λ1 = 0.0965 has occurred, illustrating the fact that the QR algorithm converge faster to the lowest eigenmode than to the highest.
9.5 QR Iteration
105
The matrices after the 14th iteration become 1.0000 −0.0000 −0.0000 0.0000 0.0000 1.0000 0.0000 −0.0000 Q14 = 0 −0.0000 1.0000 −0.0000 0 0 0.0051 1.0000 10.638 0.0003 −0.0000 0 0.0000 4.3735 −0.0000 −0.0008 R = 14 0 0 1.3915 0.0077 0 0 0 0.0965
⇒
(9–81)
0.1076 −0.4387 −0.4453 0.3126 −0.2556 0.4167 −0.1244 0.4955 Φ15 = Φ14 Q14 = 0.7283 0.0232 0.4894 0.4791 −0.5620 −0.5170 0.5770 0.2898 10.638 0.0001 0 0 0.0001 4.3735 −0.0000 0 K = R Q = 15 14 14 0 −0.0000 1.3915 0.0000 0 0 0.0000 0.0965
Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for the eigenvalues and eigenmodes of the original general eigenvalue problem λ4 0 Λ= 0 0
0 λ3 0 0
0 0 λ2 0
10.638 0 0 0 = K15 = 0 0 0 λ1
Φ = Φ(4) Φ(3) Φ(2) Φ(1) = Φ15
0 4.3735 0 0
0.1076 −0.2556 = 0.7283 −0.5620
0 0 1.3915 0
−0.4387 0.4167 0.0232 −0.5170
0 0 0 0.0965
−0.4453 −0.1244 0.4894 0.5770
0.3126 0.4955 0.4791 0.2898
(9–82)
The reader should verify that the solution matrices within the indicated accuracy fulfill ΦT MΦ = I and ΦT KΦ = Λ, where M and K are the mass and stiffness matrices given by (8-74). (9-82) agrees with the results (8-75), (879), (8-81) and (8-82) in Example 8.6.
106
9.6
Chapter 9 – SIMILARITY TRANSFORMATION METHODS
Exercises
9.1 Given a symmetric K. (a.) Write a MATLAB program, which performs special Jacobi iteration.
9.2 Given the symmetric matrices M and K. (a.) Write a MATLAB program, which performs general Jacobi iteration.
9.3 Given the following mass- and stiffness matrices 2 0 0 6 −1 0 M = 0 2 1 , K = −1 4 −1 0 1 1 0 −1 2 (a.) Perform an initial transformation to a special eigenvalue problem, and calculate the eigenvalues and eigenvectors by means of standard Jacobi iteration. (b.) Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi iteration operating on the original general eigenvalue problem.
9.4 Given the symmetric matrices M and K of dimension n ≥ 3. (a.) Write a MATLAB program, which performs a Householder reduction to three diagonal form.
9.5 Given the symmetric matrices M and K. (a.) Write a MATLAB program, which performs QR iteration.
9.6 Consider the mass- and stiffness matrices defined in Exercise 9.3. after the transformation to the special eigenvalue problem. (a.) Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.
C HAPTER 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS 10.1
Introduction
In civil engineering large numerical models with n = 104 − 106 degrees of freedom have become common practise along with the development of computer technology. However, most natural and man made loads such as wind, waves, earthquakes and traffic have spectral contents in the low frequency range. As a consequence only a relatively small number n1 n of the lowest structural modes will contribute to the global structural dynamic response. In this chapter methods will be discussed, which have been devised with this specific fact in mind. Sections 10.2 and 10.3 deals with simultaneous inverse vector iteration and socalled subspace iteration, respectively. In both cases a sequence of subspaces are defined, each of which are spanned by a specific system of basis vectors. The idea is that these subspaces at the end of the iteration process contains the n1 lowest eigenmodes Φ(1) , Φ(2) , . . . , Φ(n1 ) of the general eigenvalue problem (6-5). These eigenvalue problems may be assembled on the following matrix form, cf. (6-10), (6-11), (6-12)
λ1 0 0 λ 2 K[Φ(1) Φ(2) · · · Φ(n1 ) ] = M[Φ(1) Φ(2) · · · Φ(n1 ) ] . . . . . . 0 0 KΦ = MΦΛ λ1 0 0 λ 2 Λ=. .. .. . 0 0
··· ··· ...
0 0 .. .
· · · λn1
⇒
(10–1) ··· ··· ...
0 0 .. .
· · · λn1
(10–2)
By contrast to the formulation in Chapter 6 the modal matrix Φ is no longer quadratic, but has the dimension n × n1 , defined as — 107 —
108
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
Φ = Φ(1) Φ(2) · · · Φ(n1 )
(10–3)
V∞ (2)
(1)
Φ∞ = Φ(2)
Φ∞ = Φ(1) V0
(2)
Φ0
(1)
Φ0
Fig. 10–1 Principle of subspace iteration.
The principle of iterating through a sequence of subspaces has been illustrated iin Fig. 10-1. V0 h (1) (2) denotes a start subspace, which is spanned by the start basis Φ0 = Φ0 Φ0 . The iteration processh passes through a sequence of subspaces V1 , Vi2 , . . ., where Vk is spanned by the basis i h (1) (2) (1) (2) Φk = Φk Φk . At convergence, Φ∞ = Φ∞ Φ∞ = Φ(1) Φ(2) is spanning the limiting subspace V∞ containing the eigenmodes searched for. Simultaneous inverse inverse vector iteration is a generalization of the inverse vector iteration and inverse vector iteration with deflation described in Sections 8.2 and 8.5. The start vector basis converges towards a basis made up of the wanted eigenmodes as shown in Fig. 10-1. The subspace iteration method and socalled subspace iteration described in Section 10.2 is in principle a sequence of Rayleigh-Ritz analyses, where the Ritz base vectors are forced to converge to each of the eigenmodes. As a consequence, if the start basis contains the n1 eigenmodes the subspace iteration converge in a single step as described in Section 7.2, which is generally not the case for simultaneous inverse vector iteration. Being based on a convergence of a sequence of vector bases both methods are in fact subspace iteration methods, although this name has been coined solely for the latter method. A more informative name for this method would probably be Rayleigh-Ritz iteration. Section 10.4 deals with characteristic polynomial iteration methods, which operates on the characteristic equation (6-6). These methods form an alternative to inverse or forward vector iteration with deflation in case some specific eigenmode different from the smallest or largest is searched for. To be numerical effective these methods require that the generalized eigenvalue
10.2 Simultaneous Inverse Vector Iteration
109
problem has been reduced to a standard eigenvalue problem on three diagonal form, such as the Householder reduction described in Section 9.4. Polynomial methods may be based either on the numerical iteration of the characteristic polynomial directly, or based on a Sturm sequence iteration. Even in the first mentioned case a Sturm sequence check should be performed after the calculation to verify that the calculated n1 eigenmodes are indeed the lowest. It should be noticed that some problems in structural dynamics, such as acoustic transmission and noise emission, are governed by high frequency structural response. Additional to the numerical problems in calculating these modes, lack of accuracy of the underlying mechanical models in the high-frequency range adds to the problems in using modal analysis in such high frequency cases.
10.2
Simultaneous Inverse Vector Iteration
i h (1) (2) (n ) Let Φ0 = Φ0 Φ0 · · · Φ0 1 denote n1 arbitrary linearly independent vectors, which span an n1 dimensional start subspace. Next, the algorithm for simultaneous inverse vector iteration takes place according to the algorithm
¯ k+1 = AΦk Φ
,
k = 0, 1, . . .
(10–4)
where A = K−1 M, cf. (8-4). (10-4) is identical to the inverse vector iteration algorithm described by (8-4). The only difference is that now n1 vectors are simultaneous iterated. At convergence the iterated base vectors obtained from (10-4) will span an n1 -dimensional subspace containing the n1 lowest eigenmodes. However, due to the inherent properties of the inverse vector iteration algorithm all the iterated base vectors tend to become mutually parallel, and parallel to the lowest eigenmode Φ(1) . Hence, the vector basis becomes more and more ill conditioned. For the case shown on Fig. 10-1 this means that the subspace Vk will converge to (1) (2) the limit plane V∞ , but the iterated base vectors Φk and Φk become more and more parallel. In order to prevent this the method is combined with a Gram-Schmidt orthogonalization pro¯ k+1 cedure. Similar to the QR factorization procedure described in Box 9.6 the iterated basis Φ can be written on the following factorized form
¯ k+1 = Φk+1 Rk+1 Φ
(10–5)
where Φk+1 is an M-orthonormal basis in the iterated subspace, and Rk+1 is an upper triangular matrix. Hence, Φk+1 and Rk+1 have the properties
Φk+1 =
h
(1) Φk+1
(2) Φk+1
(n1 ) · · · Φk+1
i
,
(i) T
(j)
Φk+1 MΦk+1 = δij
(10–6)
110
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
Rk+1
r11 r12 r13 0 r22 r23 0 r33 = 0 .. .. .. . . . 0 0 0
r1n1 r2n1 r3n1 .. .
··· ··· ··· ...
(10–7)
· · · rn1 n1
i h (1) (2) (n1 ) spanning the iterated subspace The M-orthonormal base vectors Φk+1 = Φk+1 Φk+1 · · · Φk+1 Vk+1 , as well as the components of the triangular matrix Rk+1 , are determined sequentially in much the same way as the determination of the matrices Q and R in the QR factorization described by (9-66)-(9-69). At first it is noticed that (10-5) is identical to the following relations (1) (1) ¯ Φ k+1 = r11 Φk+1 ¯ (2) = r12 Φ(1) + r22 Φ(2) Φ k+1 k+1 k+1 .. . j X (j) (1) (2) (j) (i) ¯ rij Φk+1 Φk+1 = r1j Φk+1 + r2j Φk+1 + · · · + rjj Φk+1 = i=1 .. . n1 X (i) ¯ (n1 ) = rin1 Φk+1 Φ k+1
(10–8)
i=1
(10-8) is solved sequentially downwards using the M-orthonormality of the already determined (j) base vectors Φk+1 . The details of the derivation has been given in Box 10.1. After convergence the eigenvalues are obtained from the Rayleigh quotients evaluated with the calculated eigenvectors, cf. (7-25). Since each of the n1 eigenmodes have been normalized to unit modal mass the quotients become λj = Φ(j) T KΦ(j)
,
j = 1, . . . , n1
(10–9)
The Rayleigh quotients in (10-9) may be assembled in the following matrix equation Λ = ΦT KΦ
(10–10)
where λ1 0 0 λ 2 Λ=. . . . . . 0 0
··· ··· ...
0 0 .. .
· · · λn1
,
Φ = Φ(1) Φ(2) · · · Φ(n1 ) = Φ∞
(10–11)
10.2 Simultaneous Inverse Vector Iteration
111
It can be proved that the upper triangular matrix Rk+1 converges towards the diagonal matrix Λ−1 . Although the Rayleigh quotients (10-10) provides more accurate estimates, the eigenvalues may then as an alternative be retrieved from Λ = R−1 ∞
(10–12)
Box 10.1: M-orthonormalization of iterated basis Evaluating the modal mass on both sides of the 1st equation of (10-8) provides
(1) ¯ r11 = Φ k+1
(1)
Φk+1 =
⇒
1 ¯ (1) Φ r11 k+1
(10–13)
(1) ¯ represents the square root of the modal mass of Φ ¯ (1) defined as where the norm Φ k+1 k+1 12
(1) (1) T (1) ¯ = Φ ¯ ¯
Φ k+1 k+1 MΦk+1
(10–14)
(1)
Now, Φk+1 and r11 are known. Scalar pre-multiplication of the 2nd equation (1) T (1) T (2) with Φk+1 M, and use of the orthonormality properties Φk+1 MΦk+1 = 0 and (1) T (1) Φk+1 MΦk+1 = 1, provides (1) T ¯ (2) r12 = Φk+1 MΦ k+1 (2)
Φk+1 =
⇒
(2) (1) ¯ r22 = Φ − r Φ 12 k+1 k+1
⇒
1 ¯ (2) (1) Φk+1 − r12 Φk+1 r22
(10–15)
(j)
At the determination of Φk+1 , 1 < j ≤ n1 , the mutually ortonormal basis vectors (1) (2) (j−1) Φk+1 , Φk+1 , . . . , Φk+1 have already been determined. Scalar pre-multiplication of the (i) T jth equation with Φk+1 M, i = 1, 2, . . . , j − 1, and use of the orthogonality property (i) T (j) Φk+1 MΦk+1 = 0 provides rij =
(i) T ¯ (j) Φk+1 MΦ k+1
⇒
j−1
X
¯ (j) (i) rjj = Φk+1 − rij Φk+1
⇒
i=1
(j) Φk+1
1 = rjj
¯ (j) Φ k+1
−
j−1 X
(i) rij Φk+1
!
(10–16)
i=1
It is characteristic for simultaneous inverse vector method in contrast to the subspace iteration method described in Section 10.3, that eigenmodes which at one level of the iteration process
112
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
is contained in the iterated subspace, may move out of the iterated subspace at later levels as illustrated in Example 10.1.
Box 10.2: Simultaneous inverse vector iteration algorithm i h (1) (2) (n ) Given the n1 -dimensional start vector basis Φ0 = Φ0 Φ0 · · · Φ0 1 . The base vectors must be linearly independent, but need not be normalized to unit modal mass. Repeat the following items for k = 0, 1, . . . 1. Perform simultaneous inverse vector iteration: ¯ k+1 = AΦk Φ
A = K−1 M
,
2. Perform Gram-Schmidt orthogonalization Gram-Schmidt orthogonalization to obtain a new M-orthonormal iterated vector basis Φk+1 as explained by (10-12)(10-16) corresponding to the factorization: ¯ k+1 = Φk+1 Rk+1 Φ After convergence has been achieved the eigenvalues and eigenmodes normalized to unit modal mass are obtained from:
λ1 0 0 λ 2 Λ=. . . . . . 0 0
··· ··· ...
0 0 .. .
· · · λn1
= ΦT∞ KΦ∞ = R−1 ∞
,
Φ = Φ(1) Φ(2) · · · Φ(n1 ) = Φ∞
As for all kind of inverse vector iteration methods the convergence rate of the iteration vector is linear in the quantity
r1 = max
λ
1
λ2
,
λ2 λn1 ,..., λ3 λn1 +1
(10–17)
Correspondingly, the Rayleigh quotients (10-9) have quadratic convergence rate r2 = r12 . The simultaneous inverse vector iteration algorithm always converges towards the lowest n1 eigenmodes. Hence, no Sturm sequence check is needed to ensure that these modes have indeed been calculated. Further, the rate of convergence seems to be comparable for all modes
10.2 Simultaneous Inverse Vector Iteration
113
contained in the subspace, as demonstrated in Example 10.1 below. The simultaneous inverse vector iteration algorithm may be summarized as indicated in Box 10.2. Example 10.1: Simultaneous inverse vector iteration Consider the generalized eigenvalue problem defined in Example 6.2. Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis
0 i h (1) (2) Φ0 = Φ0 Φ0 = 1 2
2 1 0
(10–18)
The matrix A becomes, cf. (6-44)
2 −1 A = K−1 M = −1 4 0 −1
−1 1 0 2 −1 0 0 2
0 1 0
0 0.2917 0 = 0.0833 1 0.0417 2
0.1667 0.3333 0.1667
0.0417 0.0833 0.2917
(10–19)
Then, the 1st iterated vector basis becomes, cf. (10-4)
0.2917 0.1667 i h ¯ (2) = AΦ0 = ¯ (1) Φ ¯1 = Φ Φ 0.0833 0.3333 1 1 0.0417 0.1667
0.0417 0 0.0833 1 0.2917 2
2 0.2500 = 1 0.5000 0 0.7500
0.7500 0.5000 0.2500
(10–20)
(1)
At the determination of Φ1 and r11 in the 1st vector iteration the following calculations are performed, cf. (10-13)
0.2500 ¯ (1) = 0.5000 Φ 1 0.7500
,
r11
T 1
0.2500
¯ (1) 2 = Φ1 = 0.5000 0 0.7500 0
0 1 0
12 0 0.2500 0 0.5000 = 0.7500 1 0.7500 2
0.2500 0.3333 1 (1) Φ1 = 0.7500 0.5000 = 0.6667 0.7500 1.0000
(2)
Φ1 and r12 , r22 are determined from the following calculations, cf. (10-15)
(10–21)
114
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
T 1 0 0 0.7500 0.7500 0.3333 2 ¯ (2) = Φ 0.5000 , r12 = 0.6667 0 1 0 0.5000 = 0.5833 1 0 0 12 0.2500 0.2500 1.0000
0.7500 0.3333
r22 = 0.5000 − 0.5833 · 0.6667 = 0.4714
0.2500 1.0000 0.7500 0.3333 1.1785 1 Φ(2) 0.5000 − 0.5833 · 0.6667 = 0.2357 1 = 0.4714 0.2500 1.0000 −0.7071
(10–22)
Then, at the end of the 1st iteration the following matrices are obtained " # 0.7500 0.5833 R1 = 0 0.4714 0.3333 1.1785 Φ1 = 0.6667 0.2357 1.0000 −0.7071
(10–23)
¯ 1 . The corresponding matrices after the 2nd and 3rd iteration become The reader should verify that Φ1 R1 = Φ " # 0.4787 0.1231 R2 = 0 0.2611 0.5222 1.1078 Φ2 = 0.6963 0.1231 0.8704 −0.8616
(10–24)
" # 0.4943 0.0650 R3 = 0 0.2529 0.6163 1.0583 Φ3 = 0.7043 0.0623 0.7924 −0.9339
(10–25)
Convergence of the eigenmodes with the indicated number of digits were achieved after 14 iterations, where
10.3 Subspace Iteration
115
" # 0.5000 0.0000 R14 = 0 0.2500 0.7071 1.0000 Φ14 = 0.7071 0.0000 0.7071 −1.0000
(10–26)
Presuming that convergence has occurred after the 14th iteration the following eigenvalues are obtained from (10-10) and (10-12) " # # " 2.0000 −0.0000 λ1 0 T −1 = Φ14 KΦ14 = R∞ = Λ= −0.0000 4.0000 0 λ2 (10–27) 0.7071 1.0000 i h (1) (2) = Φ14 = 0.7071 Φ= Φ Φ 0.0000 0.7071 −1.0000 λ3 = 6, see (6-49). Then, the convergence rate of the iteration vectors becomes r1 = max λλ12 , λλ23 = max 24 , 46 = 2 3 , cf. (10-17). This is a relatively large number, which is displayed in the rather slow convergence of the iterative process. The convergence towards Φ(1) and Φ(2) occurred within the same iteration step. This suggests that the convergence rate is uniform to all considered modes in the subspace. Further it is noted that
Φ(1)
√ 1 √ √ √ √ 0 2 2 2 2 2 2 (1) (2) · 1 + · 1 = · Φ0 + · Φ0 = 1 = 2 4 4 4 4 1 2 0 0 2 1 1 1 1 1 (1) (2) = 0 = − · 1 + · 1 = − · Φ0 + · Φ0 2 2 2 2 2 0 −1
Φ(2)
(10–28)
Hence, the 1st and 2nd eigenmode are originally in the subspace spanned by the basis Φ0 . As seen during the iteration process these eigenmodes are moving out of the iterated subspace.
10.3
Subspace Iteration
As is the case for the simultaneous inverse vector iteration algorithm the subspace iteration algoi h (1) (2) (n ) rithm presumes that a start subspace V0 , spanned by the vector basis Φ0 = Φ0 Φ0 · · · Φ0 1 , has been defined.
116
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
i h (1) (2) (n ) At the kth iteration step of the iteration process a vector basis Φk = Φk Φk · · · Φk 1 , which spans the iterated subspace Vk , has been obtained. Based on this a simultaneous inverse vector iteration is performed ¯ k+1 = AΦk Φ
,
k = 0, 1, . . .
(10–29)
¯ k+1 as a where A = K−1 M, cf. (8-4). Next, a Rayleigh-Ritz analysis is performed using Φ Ritz basis, in order to obtain approximate solutions to the lowest n1 eigenmodes and eigenvalues. This requires the solution of the following reduced generalized eigenvalue problem of the dimension n1 , cf. (6-10), (7-49) ˜ k+1 Qk+1 Rk+1 ˜ k+1 Qk+1 = M K
,
k = 0, 1, . . .
(10–30)
˜ k+1 denote the mass and stiffness matrices projected on the subspace Vk+1 , cf. ˜ k+1 and K M (7-45) ˜ k+1 = Φ ¯ T MΦ ¯ k+1 M k+1
(10–31) ˜ k+1 = Φ ¯ T KΦ ¯ k+1 K k+1 i h (1) (2) (n1 ) Qk+1 = qk+1 qk+1 · · · qk+1 of the dimension n1 × n1 contains the eigenvectors of the eigen(i)
value problem (10-30). In what follows the eigenvectors qk+1 are assumed to normalized to unit modal mass with respect to the projected mass matrix, i.e.
(i) T ˜ (j) qk+1 M k+1 qk+1 =
0 ,
1 ,
i 6= j (10–32) i=j
Rk+1 is a diagonal matrix containing the corresponding eigenvalues of (10-29) in the main diagonal
Rk+1
0 ··· 0 ρ1,k+1 0 0 ρ2,k+1 · · · = . . . .. .. .. 0 0 0 · · · ρn1 ,k+1
(10–33)
The eigenvalues ρj,k+1 , j = 1, . . . , n1 indicates the estimate of the eigenvalues after the kth iteration. These are all upperbounds to the corresponding eigenvalues of the full problem, cf. (7-57). At the end of the kth iteration step a new estimate of the lowest n1 eigenvectors are determined from, cf. (7-51)
10.3 Subspace Iteration
117
¯ k+1 Qk+1 Φk+1 = Φ
(10–34)
˜ k+1 , If the column vectors in Qk+1 have been normalized to unit modal mass with respect to M the M-orthogonal column vectors of Φk+1 will automatically be normalized to unit modal mass with respect to M, cf. (7-55). Next, the calculations in (10-30), (10-31), (10-34) are repeated with the new estimate of the normalized eigenmodes Φk+1 . At convergence of the subspace iteration algorithm the lowest n1 eigenvectors and eigenvalues are retrieved from
Φ = Φ(1) Φ(2) · · · Φ(n1 ) = Φ∞
,
λ1 0 · · · 0 0 λ ··· 0 2 Λ=. = R∞ = ±Q∞ (10–35) . . . . . . . . 0 0 0 · · · λn1
At convergence, Q∞ can be shown to be a diagonal matrix, where the numerical value of the components are equal to the eigenvalue of the original problem as indicated in (10-35). It should be realized that subspace iteration involves iteration at two levels. Primary, a global simultaneous inverse vector iteration loop as defined by the index k is performed. Inside this loop a secondary iteration process is performed at the solution of the eigenvalue problem (10-30). Usually, the latter problem is solved iteratively by means of a general Jacobi iteration algorithm as described in Section 9.3. Because the applied similarity transformations in the general Jacobi (j) algorithm are not orthonormal, the eigenvectors qk are not normalized to unit modal mass at convergence. Hence, in order to fulfill the requirements (10-32) this normalization should be performed after convergence. Further, the eigenvalues will not be ordered in ascending order of magnitude as presumed in (10-35), cf. Box 9.4. The convergence rate for the components in the kth eigenmode and the kth eigenvalue, r1,k and r2,k , are defined as
r1,k = r2,k =
λk λn1 +1 λ2k λ2n1 +1
=
2 r1,k
,
k = 1, . . . , n1
(10–36)
Hence, convergence is achieved at first for the lowest mode and latest for mode k = n1 , as has been demonstrated in Example 10.2 below. This represents a marked difference from simultaneous inverse vector iteration, where as mentioned the convergence rate seems to be almost identical for all modes contained in the subspace. A rule of thumb says that approximately 10
118
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
subspace iterations are needed to obtain a solution for the components of Φ(1) with 6 correct digits.
Box 10.3: Subspace iteration algorithm (1) (2) (n ) Given the n1 -dimensional start vector basis Φ0 = Φ0 Φ0 · · · Φ0 1 . The base vectors must be linearly independent, but the base vectors need not be normalized to unit modal mass. Repeat the following items for k = 0, 1, . . . 1. Perform simultaneous inverse vector iteration: ¯ k+1 = AΦk Φ
,
A = K−1 M
2. Calculate projected mass and stiffness matrices: ˜ k+1 = Φ ¯ T MΦ ¯ k+1 M k+1
,
˜ k+1 = Φ ¯ T KΦ ¯ k+1 K k+1
3. Solve the generalized eigenvalue problem of dimension n1 by means of a general Jacobi iteration algorithm with the eigenvectors Qk+1 normalized to unit modal mass at exit: ˜ k+1 Qk+1 Rk+1 ˜ k+1 Qk+1 = M K 4. Calculate new solution to eigenvectors: ¯ k+1 Qk+1 Φk+1 = Φ After convergence has been achieved the eigenvalues and eigenmodes normalized to unit modal mass are obtained from:
λ1 0 0 λ 2 Λ=. . . . . . 0 0
··· ··· ...
0 0 .. .
· · · λn1
= R∞ = ±Q∞
,
Φ = Φ(1) Φ(2) · · · Φ(n1 ) = Φ∞
Finally, a Sturm sequence check should be performed to ensure that the lowest n1 eigenpairs have been calculated.
In order to speed up the iteration process towards the n1 modes actually wanted, the dimension of the iterated subspace is sometimes increased to n2 > n1 . Then, the convergence rate of the iteration vector the highest mode of interest decreases to
10.3 Subspace Iteration
r1,n1 =
119
λn1 λn2 +1
(10–37)
In case of an adverse choice of the start basis vector Φ0 it may happen that one of the eigenmodes searched for, Φ(j) , j = 1, . . . , n1 , is M-orthogonal to start subspace, i.e. (k)
Φ(j) T MΦ0 = 0
,
k = 1, 2, . . . , n1
(10–38)
In this case the subspace iteration algorithm converges towards the eigenmodes Φ(1) , . . . , Φ(j−1) , Φ(j+1) , . . . , Φ(n1 ) , Φ(n1 +1) . In principle a similar problem occurs in simultaneous inverse vector iteration, although round-off errors normally eliminates this possibility. Singular to subspace iteration is that eigenmodes contained in the initial basis Φ0 remain in later iterated bases. Hence, if Φ(j) , j = n1 + 1, . . . , n is contained in Φ0 , this mode will be among the calculated modes. In both cases we are left with the problem to decide whether the calculated n1 eigenmodes are in indeed the lowest n1 modes of the full system. For this reason a subspace iteration should always be followed by a Sturm sequence check. This is performed in the following way. Let µ be a number slightly larger than the largest calculated eigenvalue ρn1 ,∞ , and perform the following Gauss factorization of the matrix K − µM K − µM = LDLT
(10–39)
where L and D are given by (6-63), (6-64). Then, the number of eigenvalue less than µ is equal to the number of negative elements in the diagonal of the diagonal matrix D., cf. Section 6.2. Alternatively, the same information may be withdrawn from thenumber of sign changes in the sign (n) (n−1) (µ) , . . . , sign P (0) (µ) , where P (n−1) (µ), . . . , P (0) (µ) desequence sign P (µ) , sign P notes the Sturm sequence of characteristic polynomials, and P (n) (µ) is a dummy positive component in the sequence, cf. Section 6.3. The marked difference between the subspace iteration algorithm and and the simultaneous inverse vector iteration algorithm is that the orthonormalization process to prevent ill-conditioning of the iterated vector base in the former case is performed by an eigenvector approach related to the Rayleigh-Ritz analysis, whereas a Gram-Schmidt orthogonalization procedure is used in the latter case. There are no marked difference in the rate of convergence of the two algorithms. Example 10.2: Subspace iteration The generalized eigenvalue problem defined in Example 10.1 defined by (6-44) is considered again. Using the same initial start basis (10-18) as in Example 10.1, the problem is solved in this example by means of subspace iteration.
120
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
¯ 1 , which is At the 1st iteration step (k = 0) the simultaneous inverse vector iteration produces the vector basis Φ unchanged given by (10-20). ¯ 1 the following projected mass and stiffness matrices are calculated, cf. (6-44), (10-20), (10-31) Based on Φ 0.2500 T ¯ ¯ ˜ M1 = Φ1 MΦ1 = 0.5000 0.7500
T 1 0.7500 2 0.5000 0 0 0.2500
0.2500 T ¯ ¯ ˜ K1 = Φ1 KΦ1 = 0.5000 0.7500
T 2 0.7500 0.5000 −1 0 0.2500
0 0.2500 0 0.5000 1 0.7500 2
0 1 0
" 0.7500 0.5625 0.5000 = 0.4375 0.2500
−1 0 0.2500 4 −1 0.5000 −1 2 0.7500
0.4375 0.5625
#
" # 0.7500 1.2500 0.7500 0.5000 = 0.7500 1.2500 0.2500 (10–40)
The corresponding eigenvalue problem (10-30) becomes ˜ 1 Q1 = M ˜ 1 Q1 R1 K #
⇒
" 1.2500 0.7500
0.7500 1.2500
" 2 R1 = 0
# " 0 λ = 1 4 0
h
(1) q1
(2) q1
0 λ2
#
i
" 0.5625 = 0.4375
,
Q1 =
0.4375 0.5625
"√
2 √2 2 2
−2
#
h
(1) q1
(2) q1
i
" ρ1,1 0
0
#
ρ2,1
#
⇒
(10–41)
2
The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (10-34) 0.2500 ¯ 1 Q1 = Φ1 = Φ 0.5000 0.7500
0.7500 " √2 0.5000 √22 2 0.2500
# −2 2
√2 =
2 √2 2 √ 2 2
−1 i h (1) (2) 0 = Φ Φ
(10–42)
1
(10-41) and (10-42) indicate the exact eigenvalues and eigenmodes, cf. (6-49), (6-51). Hence, convergence is obtained in just a single iteration. This is so because the start subspace V0 , spanned by the vector basis Φ0 contains the eigenmodes Φ(1) and Φ(2) as shown by (10-28). This property is singular to the subspace iteration algorithm compared to the simultaneous inverse vector iteration technique. Next, let us perform the same calculations using the start basis 1 i h (1) (2) Φ0 = Φ0 Φ0 = 2 3
−1 2 −3
(10–43)
The simultaneous inverse vector iteration (10-29) provides, cf. (10-19) 0.2917 ¯ 1 = AΦ0 = Φ 0.0833 0.0417
0.1667 0.3333 0.1667
0.0417 1 0.0833 2 0.2917 3
−1 0.7500 2 = 1.0000 −3 1.2500
−0.0833 0.3333 −0.5833
(10–44)
10.3 Subspace Iteration
121
The projected mass and stiffness matrices become 0.7500 ˜ 1 M 1.0000 1.2500
T 1 −0.0833 2 0.3333 0 0 −0.5833
0 1 0
T
0.7500 ˜1 = K 1.0000 1.2500
0 0.7500 0 1.0000 1 1.2500 2
2 −1 −0.0833 4 0.3333 −1 0 −1 −0.5833
" −0.0833 2.0625 0.3333 = −0.0625 −0.5833
0 0.7500 −1 1.0000 2 1.2500
−0.0625 0.2847
"
−0.0833 4.2500 0.3333 = −0.2500 −0.5833
#
# −0.2500 1.5833
(10–45)
The solution of the corresponding generalized eigenvalue problem (10-30) becomes " 2.0534 R1 = 0
0 5.5656
#
" −0.6982 Q1 = −0.0851
,
−0.0254 −1.8784
#
(10–46)
The estimate of the lowest eigenmode after the 1st iteration becomes, cf. (10-34) 0.7500 ¯ 1 Q1 = Φ1 = Φ 1.0000 1.2500
−0.0833 " −0.6982 0.3333 −0.0851 −0.5833
−0.5165 −0.0254 = −0.7265 −1.8784 −0.8231 #
0.1375
−0.6516
(10–47)
1.0640
Correspondingly, after the 2nd, 7th and 14th iteration steps the following matrices are calculated " 2.0118 R2 = 0
0 5.2263
0.6195 Φ2 = 0.7241 0.7535
0.0821 0.5686 −1.1604
" 2.0000 R7 = 0
0 4.0533
−0.7067 Φ7 = −0.7074 −0.7069
#
#
,
" −2.0171 Q2 = −0.0887
0.1513 −5.3145
#
(10–48)
#
(10–49)
,
−0.8711 −0.1155 1.1020
" −2.0000 Q7 = −0.0007
0.0011 −4.0661
122
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
R14
Φ14
" 2.0000 = 0
0 4.0002
#
0.7071 = 0.7071 0.7071
0.9931 0.0068 −1.0068
,
Q14
" −2.0000 = −0.0000
0.0000 −4.0002
#
(10–50)
As seen the subspace iteration process determines the 1st eigenvalue and eigenvector after 7 iteration, whereas the 2nd eigenvector has not yet been calculated with the sufficiently accuracy even after 14 iterations. By contrast the simultaneous inverse vector iteration managed to achieve convergence for this quantity after 14 iterations, see (10-26). The 2nd calculated eigenvalue becomes ρ2,14 = 4.0002. Then, let µ = 4.05 and perform a Gauss factorization of the matrix K − 4.05M, i.e. −0.0250 K − 4.05M = −1.0000 0.0000
1 LDLT = 40 0
0 1 −0.0250
−1.0000 −0.0500 −1.0000
0.0000 −1.0000 = −0.0250
0 −0.0250 0 0 1 0
0 39.950 0
0 1 0 0 −0.0500 0
40 0 1 −0.0250 0 1
(10–51)
It follows that two components in the main diagonal of D are negative, from which is concluded that two eigenvalues are smaller than µ = 4.05. In turn this means that the two eigensolutions obtained by (10-46) are indeed the lowest two eigensolutions of the original system. Finally, consider the start vector basis 0 i h (1) (2) Φ0 = Φ0 Φ0 = −1 2
2 −1 0
(10–52)
Now,
(1)
T 1 1 2 2 = 1 0 2 0 1
(2)
T 1 √ 1 2 2 = 1 0 2 0 1
√
Φ(1) T MΦ0
Φ(1) T MΦ0
0 1 0 0 1 0
0 0 = 0 0 −1 1 2 2
0 2 = 0 0 −1 1 0 2
(10–53)
It follows that the lowest eigenmode Φ(1) is M-orthogonal to the selected start vector basis. Hence, it should be expected that the algorithm converges towards Φ(2) and Φ(3) . Moreover, in the present three dimensional case a start subspace, which is M-orthogonal to Φ(1) , must contain Φ(2) and Φ(3) . Actually, cf. (6-54)
10.4 Characteristic Polynomial Iteration
123
Φ(2)
0 2 −1 1 1 1 1 (1) (2) = 0 = − · −1 + · −1 = − · Φ0 + · Φ0 2 2 2 2 2 0 1
Φ(3)
√ √ √ √ √ 1 0 2 2 2 2 2 2 (1) (2) · · · Φ · Φ0 = + = + = −1 −1 −1 0 2 4 4 4 4 1 2 0
(10–54)
Hence, convergence towards Φ(2) and Φ(3) should take place in a single iteration step. Actually, after the 1st subspace iteration the following matrices are calculated " 4 R1 = 0
# 0 6
,
1.0000 Φ2 = 0.0000 −1.0000
" −2.0000 Q1 = −2.0000
−2.1213 −2.1213
−0.7071 0.7071 −0.7071
#
(10–55)
The 2nd calculated eigenvalue becomes ρ2,1 = 6. In order to check whether ρ2,1 = λ2 or ρ2,1 = λ3 we choose µ = 6.05, and perform a Gauss factorization of the matrix K − 6.05M, i.e. −1.0250 K − 6.05M = −1.0000 0.0000
1 LDLT = 0.9756 0
0 1 0.9308
−1.0000 −2.0500 −1.0000
0.0000 −1.0000 = −1.0250
0 −1.0250 0 0 1 0
0 −1.0744 0
0 1 0 0 −0.0942 0
0.9756 1 0
0 0.9308 1
(10–56)
It follows that three components in the main diagonal of D are negative, from which is concluded that the largest of the two calculated eigenvalues must be equal to the largest eigenvalue of the original system, i.e. ρ2,1 = λ3 . Still, we do not know whether ρ1,1 = λ1 or ρ1,1 = λ2 . In order to investigate this another calculation is performed with µ = 4.05. The Gauss factorization of the matrix K − 4.05M has already been performed as indicated by (10-51). Since this result shows that two eigenvalues exist, which are smaller than µ = 4.05, ρ1,1 = 4 must be the largest of these, and hence the 2nd eigenvalue of the original system.
10.4
Characteristic Polynomial Iteration
In this section it is assumed that the stiffness and mass matrices have been reduced to a three diagonal form through a series of similarity transformations as explained in Section 9.4, corresponding to, cf. (9-32)
124
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
α1 β1 0 β1 α2 β2 0 β2 α3 K= .. .. .. . . . 0 0 0 0 0 0 γ1 δ1 0 δ1 γ2 δ2 0 δ2 γ3 M= .. .. .. . . . 0 0 0 0 0 0
··· ··· ··· ...
0 0 0 .. .
· · · αn−1 · · · βn−1
··· ··· ··· ...
0 0 0 .. .
· · · γn−1 · · · δn−1
0 0 0 .. .
βn−1 αn
0 0 0 .. .
(10–57)
δn−1 γn
(10–58)
In principle polynomial iteration methods works equally well on fully populated stiffness and mass matrices. However, the computational efforts become too extensive to make them competitive in this case. Now, the characteristic equation of the generalized eigenvalue problem can be written in the following form, cf. (6-6) P (λ) = P (0) (λ) = det K − λM =
0 α1 − λγ1 β1 − λδ1 β1 − λδ1 α2 − λγ2 β2 − λδ2 0 β2 − λδ2 α3 − λγ3 det .. .. .. . . . 0 0 0 0 0 0
··· ··· ··· ...
0 0 0 .. .
· · · αn−1 − λγn−1 · · · βn−1 − λδn−1
2 αn − λγn · P (1) (λ) − βn−1 − λδn−1 · P (2) (λ)
0 0 0 .. .
= βn−1 − λδn−1 αn − λγn (10–59)
The last statement in (10-59) is obtained by expanding the determinant after the components in the last row. P (1) (λ) and P (2) (λ) denote the characteristic polynomials obtained by omitting the last row and column, and the last two rows and columns in the matrix K−λM, respectively, cf. (6-85). The validity of the result (10-59) has been demonstrated for a 4-dimensional case
10.4 Characteristic Polynomial Iteration
125
in Example 10.3. In turn, P (1) (λ) may be expressed in terms of P (2) (λ) and P (3) (λ) by a similar expression. Actually, the complete Sturm sequence of characteristic polynomials may be calculated recursively from the algorithm P (n−1) (λ) = α1 − λγ1
2 P (n−2) (λ) = α1 − λγ1 α2 − λγ2 − β1 − λδ1 2 P (n−m) (λ) = αm − λγm · P (n−m+1) (λ) − βm−1 − λδm−1 · P (n−m+2) (λ) , m = 3, 4, . . . , n (10–60) The effectiveness of characteristic polynomial iteration methods for matrices on three diagonal form relies on the result (10-60). Assume, that the jth eigensolution λj , Φ(j) is wanted. At first one needs to determine two figures µ0 and µ1 fulfilling λj−1 < µ0 < λj < µ1 < λj+1 . This is done based on the sequence of signs sign(P (n) (µ)), sign(P (n−1) (µ)), . . . , sign(P (1) (µ)), sign(P (0) (µ)), in which the number of sign changes indicates the total number of eigenvalues smaller than µ, and where P (n) (µ) is a dummy positive figure, cf. Section 6.3. Below, on Fig. 10-2 are marked two points µk−1 and µk on the λ-axis in the vicinity of the eigenvalue searched for, which is λ1 in the illustrated case. The values of the characteristic polynomial in these points, P (µk−1 ) and P (µk ), may easily be calculated by means of (10-60) (notice that P (µ) = P (0) (µ)). The line through the points µk−1 , P (µk−1 ) and µk , P (µk ) has the equation λ − µk y(λ) = P (µk ) + P (µk ) − P (µk−1 ) µk − µk−1
(10–61)
P (λ) λ − µk y(λ) = P (µk ) + P (µk ) − P (µk−1 ) µk − µk−1 λ1 µk−1
λ2
λ3
µk
λ
µk+1 Fig. 10–2 Secant iteration of characteristic equation towards λ1 .
The line defined by (10-61) intersects the λ-axis at the point µk+1 . It is clear that this point will be closer to λj than both µk−1 and µk . The intersection point of the line with the λ-axis is obtained as the solution to the equation y(λ) = y(µk+1 ) = 0, which is given as
µk+1 = µk −
P (µk ) µk − µk−1 P (µk ) − P (µk−1 )
(10–62)
126
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
Next, the iteration index is raised to k + 1, and a new intersection point µk+2 is obtained. The sequence µ0 , µ1 , µ2 , . . . converges relatively fast to the eigenvalue λj as demonstrated below in Example 10.4.
Box 10.4: Characteristic polynomial iteration algorithm In order to calculate the jth eigenvalue λj and the jth eigenvector Φ(j) the following items are performed 1. Based on the sequence of signs sign(P (n) (µ)), sign(P (n−1) (µ)), . . . , sign P (1) (µ)), sign P (0) (µ)) of the Sturm sequence of characteristic polynomials determine two figures µ0 and µ1 fulfilling the inequalities: λj−1 < µ0 < λj < µ1 < λj+1 2. Perform secant iteration in search for λj = µ∞ according to the algorithm: (µk ) µk − µk−1 µk+1 = µk − P (µk P)−P (µk−1 ) ¯ (j) from the algorithm (10-65). 3. Determine the un-normalized eigenmode Φ 4. Normalize the eigenmode to unit modal mass: Φ(j) =
√
¯ (j) Φ (j) ¯ ¯ (j) Φ T MΦ
Alternatively, the eigenvalue λj may be determined by means of Sturm sequence check, where the interval ]µ0 , µ1 [ is increasingly narrowed around the eigenvalue λj by bisection of the previous interval. This algorithm, which is merely telescope method described in Section 6.2, will generally converge much slower than the secant iteration algorithm. i h (j) (j) (j) Finally, the components Φ1 , Φ2 , . . . , Φn of the eigenmode Φ(j) are determined as nontrivial solutions to the linear equations det K − λj M Φ(j) = 0
⇒
0 α1 − λj γ1 β1 − λj δ1 β1 − λj δ1 α2 − λj γ2 β2 − λj δ2 0 β2 − λj δ2 α3 − λj γ3 .. .. .. . . . 0 0 0 0 0 0
··· ··· ··· ...
0 0 0 .. .
· · · αn−1 − λj γn−1 · · · βn−1 − λj δn−1
(j) Φ1 0 (j) Φ2 0 (j) Φ3 0 . = . . . . . (j) βn−1 − λj δn−1 Φn−1 0 (j) αn − λj γn Φn 0 (10–63) 0 0 0 .. .
10.4 Characteristic Polynomial Iteration
127
i h (j) (j) ¯ (j) = Φ ¯ (j) ¯ ¯ denote the eigenmode with components arbitrarily normalLet Φ , Φ , . . . , Φ n 1 2 ¯ (j) = 1 the equations (10-62) may be solved recursively from above by the ized. Setting Φ 1 following algorithm α1 − λj γ1 ¯ (j) ·1 Φ 2 = − β1 − λj δ1 β1 − λj δ1 α2 − λj γ2 ¯ (j) ¯ (j) Φ ·1− · Φ2 3 = − β2 − λj δ2 β2 − λj δ2 αm−1 − λj γm−1 ¯ (j) ¯ (j) = − βm−2 − λj δm−2 · Φ ¯ (j) Φ Φ m−2 − m βm−1 − λj δm−1 βm−1 − λj δm−1 m−1
,
m = 4, . . . , n
(10–64)
¯ (j) is almost free. Obvious, the Hence, the determination of the components of the vector Φ indicated algorithm breaks down, if any of the denominators βm−1 − λj δm−1 = 0. This means that the algorithm should be extended with alternatives to deal with such exceptions. ¯ (j) should be normalized to unit modal mass as follows Finally, the eigenmode Φ Φ(j) = √
¯ (j) Φ
(10–65)
¯ (j) ¯ (j) T MΦ Φ
Example 10.3: Evaluation of determinant The determinant of the following matrix on a three diagonal form of the dimension 4 × 4 is wanted α1 β1 K= 0 0
β1 α2 β2 0
0 β2 α3 β3
0 0 β3 α4
(10–66)
Expansion of the determinant after the components in the 4th row provides α1 det K = P (0) = α4 · det β1 0 α1 α4 · det β1 0
β1 α2 β2
β1 α2 β2
0 α1 · det − β β2 β1 3 0 α3
" 0 α1 2 β2 − β3 · det β1 α3
β1 α2
#!
β1 α2 β2
0 0 = β3
= α4 · P (1) − β32 · P (2)
(10-67) has the same recursive structure as described by (10-59).
(10–67)
128
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
Example 10.4: Characteristic polynomial iteration The generalized eigenvalue problem defined in Example 6.2 is considered again. Calculate the 3rd eigenvalue by secant iteration on the characteristic polynomial, and next determine the corresponding eigenvector. At first a calculation with µ = 2.5 is performed, which produces the following results
0.7500 K − 2.5M = −1.0000 0.0000
−1.0000 1.5000 −1.0000
0.0000 −1.0000 0.7500
⇒
(3) P (2.5) = 1 P (2) (2.5) = 0.7500 P (1) (2.5) = 0.7500 · 1.5000 − (−1)2 = 0.1250 (0) P (2.5) = 0.7500 · 0.1250 − (−1)2 · 0.7500 = −0.6563
, , , ,
sign(P (3) (2.5)) = + sign(P (2) (2.5)) = + (1) sign(P (2.5)) = + (0) sign(P (2.5)) = −
(10–68)
Hence, the sign sequence of the Sturm sequence becomes + + +−. One sign change occurs in this sequence from which is concluded that the lowest eigenvalue λ1 is smaller than µ = 2.5. Next, a calculation with µ = 5.5 is performed, which provided the results −0.7500 K − 5.5M = −1.0000 0.0000
−1.0000 −1.5000 −1.0000
0.0000 −1.0000 −0.7500
⇒
(3) P (5.5) = 1 P (2) (5.5) = −0.7500 P (1) (5.5) = (−0.7500) · − 1.5000 − (−1)2 = 0.1250 (0) P (5.5) = (−0.7500) · 0.1250 − (−1)2 · (−0.7500) = 0.6563
sign(P (3) (5.5)) = + sign(P (2) (5.5)) = − (1) sign(P (5.5)) = + (0) sign(P (5.5)) = +
, , , ,
(10–69)
Now, the sign sequence of the Sturm sequence becomes + − ++, in which two sign changes occur, from which is concluded that the lowest two eigenvalues λ1 and λ2 are both smaller than µ = 5.5. Finally a calculation with µ = 6.5 is performed, which provided the results −1.2500 K − 6.5M = −1.0000 0.0000
−1.0000 −2.5000 −1.0000
0.0000 −1.0000 −1.2500
⇒
(3) P (6.5) = 1 P (2) (6.5) = −1.2500 P (1) (6.5) = (−1.2500) · − 2.5000 − (−1)2 = 2.1250 (0) P (6.5) = (−1.2500) · 2.1250 − (−1)2 · (−1.2500) = −1.4063
, , , ,
sign(P (3) (6.5)) = + sign(P (2) (6.5)) = − (1) sign(P (6.5)) = + (0) sign(P (6.5)) = − (10–70)
In this cse the sign sequence of the Sturm sequence becomes +−+−, corresponding to three sign changes. Hence, it is concluded that all three eigenvalues λ1 , λ2 and λ3 are smaller than µ = 6.5.
10.4 Characteristic Polynomial Iteration
129
From the Sturm sequence checks it is concluded that 5.5 < λ3 < 6.5. Then, we may use the following start values, µ0 = 5.5 and µ1 = 6.5, in the secant iteration algorithm. Moreover P (5.5) = P (0) (5.5) = 0.6563 and P (6.5) = P (0) (6.5) = −1.4063, cf. (10-69) and (10-70). Then, from (10-61) it follows for k = 1
µ2 = 6.5 −
(−1.4063) (6.5 − 5.5) = 5.8182 (−1.4063) − 0.6563
(10–71)
Next, P (µ2 ) = P (5.8182) = 0.3156 is calculated by means of the algorithm (10-60), and a new value µ3 can be obtained from
µ3 = 5.8182 −
0.3156 (5.8182 − 6.5) = 5.9431 0.3156 − (−1.4063)
(10–72)
During the next 5 iterations the following results were obtained µ4 µ5 µ6 µ7 µ8
= 6.00900500472288 = 5.99960498912941 = 5.99999734553262 = 6.00000000078659 = 6.00000000000000
(10–73)
As seen the convergence of the secant iteration algorithm is very fast. The linear equation (10-63) attains the form
¯ (3) = 0 K − 6.0000M Φ
−1 −1 0
⇒
(3) ¯ Φ −1 0 0 ¯ 1(3) −2 −1 Φ2 = 0 ¯ (3) −1 −1 0 Φ 3
(10–74)
¯ (3) = 1 the algorithm (10-64) now provides Setting Φ 1 ¯ (3) = − (−1) · 1 = −1 Φ 2 (−1) ¯ (3) Φ 3
(−2) (−1) ·1− · (−1) = 1 =− (−1) (−1)
⇒
¯ (3) Φ
1 = −1 1
(10–75)
Normalization to unit modal mass provides, cf. (6-54) 1 2 = −1 2 1 √
Φ(3)
(10–76)
130
Chapter 10 – SOLUTION OF LARGE EIGENVALUE PROBLEMS
10.5
Exercises
10.1 Given the following mass- and stiffness matrices 6 −1 0 2 0 0 M = 0 2 1 , K = −1 4 −1 0 −1 2 0 1 1 (a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis 1 1 (1) (2) Φ0 = Φ0 Φ0 = 1 0 1 −1
10.2 Given the symmetric matrices M and K of dimension n. (a.) Write a MATLAB program, which for given start basis performs simultaneous inverse vector iteration for the determination of the lowest n1 eigenmodes and eigenvalues.
10.3 Consider the general eigenvalue problem in Exercise 10.1. (a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace iteration using the same start basis as in Exercise 10.1.
10.4 Given the symmetric matrices M and K of dimension n. (a.) Write a MATLAB program, which for given start basis performs subspace iteration for the determination of the lowest n1 eigenmodes and eigenvalues.
10.5 Consider the general eigenvalue problem in Exercise 10.1. (a.) Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope method).
10.6 Given the symmetric matrices M and K of dimension n on three diagonal form. (a.) Write a MATLAB program, which performs Sturm sequence check and secant iteration iteration for the determination of the jth eigenvalue, and next determines the corresponding eigenvector.
Index
adjoint eigenvalue problem, 11 aeroelastic damping matrix, 7 alternative inverse vector iteration, 60, 61
forward vector iteration with Gram-Schmidt orthogonalization, 73, 74, 91 forward vector iteration with shift, 68, 69
central difference operator, 26 characteristic equation, 8, 15, 25, 28, 53, 108, 124 characteristic polynomial, 8, 18, 24, 124 characteristic polynomial iteration, 54, 91, 108, 123, 125, 126, 128 Choleski decomposition, 29, 31, 35 compatible matrix and vector norms, 47, 50 complex unit, 8 convergence rate of iteration vector, 57, 67, 68, 73, 112, 115 convergence rate of Rayleigh quotient, 58, 67, 70, 112 convergence rate of the iteration vector, 118 coupled 1st order differential equations, 11 cubic convergence, 70
Gauss factorization, 18, 20, 24, 29, 119, 122, 123 general Jacobi iteration method, 80, 85, 88, 117, 118 generalized eigenvalue problem, 118 generalized eigenvalue problem, 8, 11, 15, 22, 23, 27, 29, 35, 42, 54, 59–61, 64, 71, 79, 80, 85, 88, 90, 95, 96, 100, 107, 109, 113, 116, 121, 124 Gram-Schmidt orthogonalization, 73, 99, 109, 119 Guyan reduction, 33 Hilbert matrix norm, 47, 50, 51 HOQR iteration method, 80, 101 Householder reduction method, 90, 95, 96, 101, 109
damped eigenmode, 11 damped eigenvalue, 11 damped modal mass, 12 damping matrix, 7 degree of freedom, 33, 107 diagonal matrix, 55, 111, 116, 117 dynamic load vector, 7
infinity matrix norm, 50 infinity vector norm, 49 inverse vector iteration, 54, 59, 65, 109 inverse vector iteration with Gram-Schmidt orthogonalization, 73, 74, 91 inverse vector iteration with Rayleigh quotient shift, 70, 71 inverse vector iteration with shift, 66, 67, 70 iterative similarity transformation method, 79, 80
eigenmode, 9, 79 eigenvalue, 8 eigenvalue separation principle, 24, 26 error analysis of calculated eigenvalues, 33, 47, 51 error vector, 47 Euclidean matrix norm, 50 Euclidean vector norm, 47, 49
Jordan boxes, 10 Jordan normal form, 10 Kronecker’s delta, 98
forward vector iteration, 54, 63–65
linear convergence, 57, 73
131
132
linear viscous damping, 7 lower triangular matrix, 18, 21, 29, 31, 36 M-orthonormalization of vector basis, 111 mass matrix, 7, 91, 123 matrix norm, 49 modal coordinate, 40, 41, 55 modal mas, 112 modal mass, 9, 16, 35, 39, 43, 47, 51, 54, 63, 71, 73, 74, 79, 110, 111, 118, 126 modal matrix, 9, 16, 51, 79, 107 modal space, 66 omission criteria for similarity transformation, 82, 83, 87, 88 one matrix norm, 50 one vector norm, 49 orthogonality property, 9, 12, 55 orthonormal matrix, 10, 80, 81, 91, 98 p vector norm, 49 partitioned matrix, 33 permutation of numbers, 81 positive definite matrix, 7, 30 positive semi-definite matrix, 7 projected mass matrix, 41, 45, 46, 116, 118, 120, 121 projected stiffness matrix, 41, 45, 46, 116, 118, 120, 121 QR iteration method, 91, 98, 100, 101 quadratic convergence, 59 Rayleigh quotient, 38, 39, 53, 55, 57, 60, 61, 63, 64, 66, 70, 71, 76, 110 Rayleigh’s principle, 38 Rayleigh-Ritz analysis, 33, 38, 40, 43, 44, 108, 116, 119 relative error of iteration vector, 57, 60 relative error of Rayleigh quotient, 58, 60 relatively errors of Rayleigh quotient, 65 Ritz basis, 40, 43, 46, 108, 116 secant iteration, 126, 128, 129 self-adjoint eigenvalue problem, 13 shift on stiffness matrix, 27, 28, 65 similarity transformation, 10, 29, 54, 79, 85, 90, 98, 100, 123 similarity transformation matrix, 29, 79, 85, 91, 95, 96, 98, 100
INDEX
simple eigenvalues, 8 simultaneous inverse vector iteration, 107–109, 111–113, 116–120 special eigenvalue problem, 8, 29, 30, 48, 51, 80, 81, 83, 95, 96, 98, 100 special Jacobi iteration method, 80–83, 91 spectral decomposition, 30 spectral matrix norm, 50 standard eigenvalue problem, 90, 91, 109 state vector formulation, 11 static condensation, 33, 36, 44 stiff-body motion, 28 stiffness matrix, 7, 93, 98, 123 Sturm sequence, 24, 119, 125, 126 Sturm sequence check, 25, 67, 109, 112, 118, 119, 126, 128, 129 Sturm sequence iteration, 109 subspace iteration, 100, 107, 108, 111, 115, 118– 120 sweep, 82, 83, 87, 89 symmetric matrix, 7, 90, 91 telescope method, 19, 126 three diagonal matrix, 90, 95, 101, 109, 123, 127 two vector norm, 49 undamped circular eigenfrequency, 8, 26 undamped eigenvibration, 8 unitary matrix, 10 upper three diagonal matrix, 101 upper triangular matrix, 21, 98, 109–111 vector basis, 39, 40, 99, 108 vector iteration method, 53 vector iteration with deflation, 73 vector iteration with Gram-Schmidt orthogonalization, 73 vector iteration with shift, 65 vector norm, 49 vibrating string, 26 wave equation, 26
A PPENDIX A Solutions to Exercises
— 133 —
134
Chapter A – Solutions to Exercises
A.1
Exercise 6.1
Given the following mass- and stiffness matrices 1 0 0 2 −1 0 M = 0 2 0 , K = −1 2 0 0 0 12 0 0 3
(1)
1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass. 2. Determine two vectors that are M-orthonormal, but are not eigenmodes. 3. Show that the eigenvalue separation principle is valid for the considered example.
SOLUTIONS: Question 1: The generalized eigenvalue problem (6-5) becomes 2 − λj −1 0
−1 2 − 2λj 0
(j) Φ1 0 (j) 0 Φ 2 = 0 (j) 0 3 − 12 λj Φ3 0
(2)
Upon evaluating the determinant of the coefficient matrix after the 3rd row, the characteristic equation (6-6) becomes 2 − λj (0) P (λ) = P (λ) = det −1 0
−1
0
2 − 2λj
0
0
3 − 12 λj
=
2 1 1 3 − λj 2 − λj 2 − 2λj − − 1 = 3 − λj 3 − 6λj + 2λ2j = 0 2 2 √ 1 3− 3 , j =1 2 √ λj = 12 3 + 3 , j = 2 6 , j=3
⇒
(3)
The largest eigenvalue λ3 = 6 is obtained when the 1st factor in (3) is equal to 0, whereas the two lowest solutions corresponds vanishing of the 2nd factor. Because the 3rd eigenmode is decoupled from the 1st and 2nd the solution method is slightly different in this case. As seen be inspection the solutions have the form
A.1 Exercise 6.1
135
¯ (j) Φ
(j) Φ 1(j) = Φ2 0
¯ (3) Φ
0 = 0 1
j = 1, 2
,
(4)
(j)
(j)
The 1st and 2nd components of the 1st and 2nd eigenmodes, Φ1 and Φ2 are determined from the two (j) (j) first equations in (2). We choose to set Φ1 = 1, and determine Φ2 from the 1st equations. Notice that (j) we may as well have determined Φ2 from the 2nd equation. Then
2 − λj · 1 −
(j) Φ2
=0
(j) Φ2
⇒
=
(
1 2 1 2
√ 3 , √ 1− 3 , 1+
j=1
(5)
j=2
The modal masses become
¯ (j) T MΦ ¯ (j) Mj = Φ
¯ (3) T MΦ ¯ (3) M3 = Φ
T 1 = Φ(j) 2 0 T 0 = 0 1
( √ 1 1 0 0 2 3+ 3 , (j) (j) = 0 2 0 Φ2 = 1 + 2 Φ2 √ 3− 3 , 1 0 0 2 0
j=1
(6)
j=2
1 0 0 0 1 0 2 0 0 = 2 0 0 12 1
(7)
¯ (1) in the following Φ(1) denotes the 1st eigenmode normalized to unit modal mass. This is related to Φ way
Φ(1) = √
1 ¯ (1) 1 Φ =p √ 12 M1 3+ 3
0.4597 √ 1 + 3 = 0.6280 0 0 1
(8)
The other modes are treated in the same manner, which results in the following eigensolutions λ1 0 Λ = 0 λ2 0
0
0
1
0=
λ3
2
3− 0
√ 3
0 √ 3 + 3 0 0
1 2
0
0
0.4597 0.8881 i h (1) (2) (3) = 0.6280 −0.3251 Φ= Φ Φ Φ 0
0
6
0 0 1.4142
(9)
136
Chapter A – Solutions to Exercises
Question 2: Consider the vectors 1 v 1 = 0 0
v2 =
,
0 √
2 2
(10)
0
Upon insertion the following relations are seen to be valid v1T Mv1 = 1 ,
v2T Mv2 = 1
v1T Mv2 = 0
,
(11)
Hence, v1 and v2 are mutually M-orthonormal. However,
√ 3 0
1
2
Kv1 = −1 6= λ1 Mv1 =
3−
2
0
0
(12)
√ − 22 0 √ √ √ 2 Kv2 = Mv = = 6 λ 3 + 3 4 2 2 2 0 0 Hence, neither v1 nor v2 are eigenmodes.
Question 3: (0)
The eigenvalues λj have been calculated by (3). Next, #! " (1) −1 2 − λ j = P (1) (λ(1) ) = det (1) −1 2 − 2λj
(1)
2 − λj
(1) λj
=
(
1 2 1 2
(1)
2 − 2λj
√ 3 , √ 3+ 3 , 3−
P (2) (λ(2) ) = det
h
2
− −1
(1) (1) 2 = 3 − 6λj ) + 2 λj =0
j=1
⇒
(13)
j=2 (2)
2 − λj
i
⇒
(2)
λj
=2
(14)
A.1 Exercise 6.1
137
Then, (6-86) attains the following forms for m = 0 and m = 1 (0)
(1)
(0)
(1)
(0)
0 ≤ λ1 ≤ λ 1 ≤ λ 2 ≤ λ 2 ≤ λ3 ≤ ∞ 0≤
√ 1 √ 1 √ 1 √ 1 3− 3 ≤ 3− 3 ≤ 3+ 3 ≤ 1+ 3 ≤6≤∞ 2 2 2 2
(1)
(2)
(1)
0 ≤ λ1 ≤ λ 1 ≤ λ 2 ≤ ∞ 0≤
⇒ (15)
⇒
√ √ 1 1 3− 3 ≤2≤ 3+ 3 ≤∞ 2 2
(16) (0)
(1)
(0)
(1)
Hence, (6-86) holds for the considered example. λ1 = λ1 and λ2 = λ2 , because of the decoupling of the 3rd eigenmode from the 1st and 2nd eigenmode.
138
Chapter A – Solutions to Exercises
A.2
Exercise 6.2
Given the following mass- and stiffness matrices "
2 0 M= 0 0
#
"
# 6 −1 K= −1 4
,
(1)
1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass. 2. Perform a shift ρ = 3 on K and calculate the eigenvalues and eigenmodes of the new problem.
SOLUTIONS: Question 1: The generalized eigenvalue problem (6-5) is written on the form "
2 0 0 0
# " # #" (j) (j) 1 6 −1 Φ1 Φ1 (j) = (j) λj −1 4 Φ2 Φ2
#"
,
j = 1, 2
(2)
Obviously, (2) has the solution
λ2 = ∞ ,
Φ(2)
# " # (2) 0 Φ1 = (2) = 1 Φ2 "
(3)
Hence, λ2 = ∞ is an eigenvalue. This is so because the mass matrix is singular, and has zeroes in the last row and column. Since, the modal mass M2 related to eigenmode Φ(2) is zero, this mode cannot be normalized in the usual manner. In Section 7.1 the problem of infinite eigenvalues will be thoroughly dealt with. The other eigensolution may be obtained by the standard approach. Then, the eigenvalue problem (2) is written on the form "
# " (1) # 6 − 2λ1 −1 Φ1 −1
4
(1)
Φ2
" # 0 = 0
(4)
The characteristic equation (6-6) becomes
det
"
#! 6 − 2λ1 −1 −1
4
2 = 4 6 − 2λ1 − − 1 = 23 − 8λ1 = 0
⇒
λ1 =
23 8
(5)
A.2 Exercise 6.2
139
(1)
(1)
We choose to set Φ1 = 1, and determine Φ2 from the 1st equations. Then (1) 6 − 2λ1 · 1 − Φ2 = 0
(1)
⇒
Φ2 =
1 4
(6)
The modal mass becomes ¯ (1) T
M1 = Φ
¯ (1)
MΦ
" #T " #" # 2 0 1 1 =2 = 1 0 0 14 4
(7)
Then, the eigenmode normalized to unit modal mass Φ(1) becomes (1)
Φ
" # " # 1 ¯ (1) 1 1 0.7071 Φ =√ 1 = =√ M1 2 4 0.1768
(8)
Hence, the following eigensolutions have been obtained
Λ=
"
#
λ1 0 0
=
0 λ0
"
#
23 8
0
0
∞
,
h
(1)
Φ= Φ
(2)
Φ
i
=
"
0.7071 0
#
(9)
0.1768 1
Question 2: (6-103) attains the form "
# " # " # 6 −1 2 0 0 −1 ˆ = K − 3M = K −3 = −1 4 0 0 −1 4
(10)
The eigenvalue problem (6-102) becomes "
" #! " (1) # " # # 2 0 Φ1 0 0 −1 = − λj (1) 0 −1 4 0 0 Φ2
(11)
For the same reason as in question 1, λ2 = ∞ is still an eigenvalue with the eigenmode given by (3). The characteristic equation for the 1st eigenvalue becomes, cf. (5)
det
"
#! −2λ1 −1 −1
4
2 = 4 −2λ1 − −1 = −1−8λ1 = 0
⇒
λ1 = −
1 8
=
23 −3 (12) 8
140
Chapter A – Solutions to Exercises (1)
(1)
Let Φ1 = 1, and determine Φ2 from the 1st equations of (11) (1) − 2λ1 · 1 − Φ2 = 0
⇒
(1)
Φ2 =
1 4
(13)
which is identical to (6). Hence Φ(1) is unaffected by the shift as expected, cf. the comments following (6-103). The eigensolutions are unchanged as given by (9), save that λ1 = − 18 .
A.3 Exercise 6.3
A.3
141
Exercise 6.3
The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional generalized eigenvalue problem are given as
Λ=
"
λ1
0
0
λ2
#
"
1 0 = 0 4
#
i
h
Φ = Φ(1) Φ(2) =
,
"√
√
2 √2 2 2
2 √2 − 22
#
(1)
1. Calculate M and K. SOLUTIONS: Question 1: From 6-15) and (6-18) follows M = Φ−1 K = Φ−1
T
T
mΦ−1
(2)
kΦ−1
(3)
Since it is known that the eigenmodes have been normalized to unit modal mass it follows from (6-16) and (6-18) that m=I ,
k=Λ
(4)
The inverse of the modal matrix becomes
−1
Φ
=
"√
2 √2 2 2
√ 2 √2 − 22
#−1
=
"√
√
2 √2 2 2
2 √2 − 22
#
(5)
Of course, (5) can be obtained by direct calculation. Alternatively, the result may be obtained from the following arguments. Notice that Φ is orthonormal, so Φ−1 = ΦT , cf. (6-19). Additionally, the modal matrix is symmetric, i.e. Φ = ΦT , from which the indicated result follows. Insertion of (4) and (5) into (1) and (2) provides
M=
K=
"√
2 √2 2 2
"√
2 √2 2 2
√ 2 √2 − 22 √ 2 √2 − 22
#T "
#T "
1 0 0 1
1 0 0 4
# "√
2 √2 2 2
# "√
2 √2 2 2
√ 2 √2 − 22 √ 2 √2 − 22
#
#
=
"
1 0
#
(6)
0 1
=
"
#
2.5 −1.5 −1.5
2.5
(7)
Actually, since M = I, the considered eigenvalue problem is of the special type, cf. the remarks subsequent to (6-5).
142
Chapter A – Solutions to Exercises
A.4
Exercise 6.4: Theory
Gauss Elimination Given a symmetric matrix K of the dimension n × n with the components Kij = Kji . Consider the static equilibrium equation
Kx = f
⇒ K12 K22 K32 .. .
K11 K 21 K31 .. .
· · · K1n f1 x1 · · · K2n x2 f2 · · · K3n x3 = f3 .. .. .. .. . . . . · · · Knn xn fn
K13 K23 K33 .. .
Kn1 Kn2 Kn3
(1)
In order to have a one at the 1st element of the main diagonal of the coefficient matrix the 1st equation is divided with K11 resulting in
(1)
1 K21 K31 .. .
Kn1
(1)
K12 K22 K32 .. .
K13 K23 K33 .. .
Kn2
Kn3
K1j K11
,
(1) (1) x1 · · · K1n f1 x · · · K2n 2 f2 x3 = f3 · · · K3n .. .. .. .. . . . . xn · · · Knn fn
(2)
where (1)
K1j = (1) f1
j = 2, . . . , n
f1 = K11
(3)
In turn, the 1st equation of (2) is multiplied with Ki1 , i = 2, . . . , n, and the resulting equation is withdrawn from the ith equation. This will produce a zero in the ith row of the 1st column, corresponding to the following system of equations 1 0 0 .. . 0
(1)
K13
(1)
K23
K12 K22
(1)
(1) (1) (1)
K32 K33 .. .. . . (1) (1) Kn2 Kn3
(1)
(1) f1 x1 (1) · · · K2n x2 f2(1) (1) (1) · · · K3n x3 = f3 . . .. . . .. . . . . (1) (1) xn fn · · · Knn · · · K1n
(4)
A.4 Exercise 6.4: Theory
143
where (1)
(1)
Kij = Kij − Ki1 K1j (1) fi
(1) Ki1 f1
= fi −
,
,
i = 2, . . . , n ,
j = 2, . . . , n
i = 2, . . . , n
(5)
(1)
Next, the 2nd equation is divided with K22 , so the coefficient in the 2nd component in the main diagonal (1) becomes equal to 1. In turn, the resulting 2nd equation is multiplied with Ki2 , i = 3, . . . , n, and the resulting equation is withdrawn from the ith equation. This will produce zeros in the ith row of the 2nd column below the main diagonal, corresponding to the system of equations (1) 1 K12 0 1 0 0 . .. . . . 0 0
(1)
(1) f1 (2) (2) · · · K2n x2 f2 (2) (2) · · · K3n x3 = f3 .. .. .. .. . . . . (2) (2) xn · · · Knn fn
(1)
K13
· · · K1n
(2)
K23
(2)
K33 .. . (2) Kn3
x1
(6)
where
(1)
(2)
K2j =
K2j
(1)
,
j = 3, . . . , n
K22 (1)
(2)
f2
=
f2
(1)
K22 (2)
(1)
(1)
(2)
Kij = Kij − Ki2 K2j (2)
fi
(1)
= fi
(1) (2)
− Ki2 f2
, ,
i = 3, . . . , n , i = 3, . . . , n
j = 3, . . . , n
(7)
The process of producing ones in the main diagonal, and zeros below the main diagonal is continued for all n columns resulting in the following system of linear equations (1) (1) 1 K12 K13 (2) 0 1 K23 0 0 1 .. .. .. . . . 0 0 0
(1)
· · · K1n
x1
(1)
f1
(2) (2) · · · K2n x2 f2 (3) (3) · · · K3n x3 = f3 .. .. .. .. . . . . (n) xn ··· 1 fn
(8)
Next, (1) are solved simultaneous with n righthand sides, where the loads form the columns in a unit matrix. The n solution vectors X = [x1 x2 x3 · · · xn ] are organized in the matrix equationi.e.
144
Chapter A – Solutions to Exercises
KX = I
K11 K21 K31 .. .
⇒ K12 K22 K32 .. .
· · · K1n x11 x12 · · · K2n x21 x22 · · · K3n x31 x32 . . .. .. .. . .. . · · · Knn xn1 xn2
K13 K23 K33 .. .
Kn1 Kn2 Kn3
x13 x23 x33 .. . xn3
· · · x1n 1 · · · x2n 0 · · · x3n = 0 . . .. .. . .. · · · xnn 0
0 0 ··· 1 0 ··· 0 1 ··· .. .. . . . . . 0 0 ···
0 0 0 .. .
(9)
1
Following the steps (2)-(8), simultaneous Gauss elimination of the coefficient matrix and the n righthand sides provides the following equivalent matrix equation (1) (1) 1 K12 K13 (2) 0 1 K23 0 0 1 .. .. .. . . . 0 0 0
(1)
· · · K1n
x11
x12
x13
· · · x1n
(1)
f11
0
0
···
0
(2) (2) (2) x21 x22 x23 · · · x2n f21 f 0 · · · 0 · · · K2n 22 (3) (3) (3) (3) 0 · · · K3n x31 x32 x33 · · · x3n = f31 f32 f33 · · · .. .. .. .. .. .. .. .. .. .. .. .. . . . . . . . . . . . . (n) (n) (n) (n) xn1 xn2 xn3 · · · xnn ··· 1 fn1 fn2 fn3 · · · fnn (10)
As indicated the identity matrix on the righthand side is transformed into a lower triangular matrix F. In the program the triangulation of the matrix K and the calculation of the matrix F is performed in a matrix A of the dimension n × 2n, which at the entry of the triangulation loop has the form A = KI
(11)
At exit from the triangulation loop the matrix A stores the triangulized stiffness at the position originally occupied by K, and the matrix F at the position occupied by the unit matrix.
Calculation of L, D and (S−1)T Using the Gauss factorization of the stiffness matrix (9) may be written, cf. (6-62) KX = LDLT X = I
⇒ (12)
LT X = ID−1 L−1 = D−1 L−1 = F Upon comparison of (10) and (12) it becomes clear that LT is stored as the coefficient matrix in (10), whereas the righthand sides store the matrix F = D−1 L−1 . Since, L−1 is a lower triangular matrix with ones in the main diagonal, the main diagonal must contain the main diagonal of D−1 . Hence,
A.4 Exercise 6.4: Theory
−1
D
(1) 0 0 f11 (2) 0 f22 0 (3) = 0 0 f33 .. .. .. . . . 0 0 0
145
···
0
···
0
··· .. .
0 .. .
···
(n) fnn
⇒
D=
1 (1) f11
0 0 .. . 0
0
0
···
1
0
···
1 (3) f33
···
(2)
f22
0 .. . 0
.. . 0
..
.
···
0
0 0 .. .
(13)
1 (n) fnn
Finally, cf. (6-114) 1
S = LD 2
⇒
1
1
1
S−1 = D− 2 L−1 = D 2 D−1 L−1 = D 2 F
(14)
The matrices D and (S−1 )T are retrieved from the righthand sides of (10) as stored in the matrix F according to the indicated relations at the end of the program.
146
Chapter A – Solutions to Exercises
A.5
Exercise 7.1
Given the following mass- and stiffness matrices 0 0 0 M = 0 2 1 0 1 1
6 −1 0 K = −1 4 −1 0 −1 2
,
(1)
1. Perform a static condensation by the conventional procedure based on (7-5), (7-6), and next by Rayleigh-Ritz analysis with the Ritz basis given by (7-62).
SOLUTIONS: Question 1: The 1st and 3rd row, and next the 1st and 3rd column of the are interchanged, which brings the matrices of the general eigenvalue problem on the following form, cf. (7-1), (7-2) M11 M= M21 K11 K= K21
1 1 M12 = 1 2 M22 0 0
0 0 0
2 −1 4 = −1 K22 0 −1 K12
0 −1 6
(2)
Notice that the interchange of two rows or two columns may change the sign, but not the numerical value, of the characteristic polynomial. However, since the characteristic polynomial is zero at the eigenvalue the determination of the eigenvalues is unaffected by the sign change. The reduced stiffness matrix (7-7) becomes
˜ 11 = K11 − K12 K−1 K21 K 22
# " # " # h i 2 −1 2 −1 0 −1 = − [6] 0 −1 = −1 23 −1 4 −1 6 "
(3)
The reduced eigenvalue problem (7-6) is solved "
# " # 2 −1 1 1 Φ11 = Φ11 Λ1 −1 23 1 2 6
(4)
A.5 Exercise 7.1
147
The eigensolutions with eigenmodes normalized to modal mass 1 with respect to M11 becomes # " # 0.7325 0 λ1 0 = Λ1 = 0 λ2 0 9.1008 "
,
Φ11 =
(1) (2) Φ1 Φ1
=
"
0.5320
# 1.3103
0.3892 −0.9212
(5)
From (7-5) follows
Φ21
" # h i h i 0.5320 (1) (2) 1.3103 = 0.0649 −0.1535 = Φ2 Φ2 = − [6]−1 0 −1 0.3892 −0.9212
(6)
From (7-10) and (7-11) follows
Λ2 = [λ3 ] = [∞]
,
Φ12
" # 0 = 0
,
Φ22 = [1]
(7)
After interchanging the degrees of freedom back to the original order (the 1st components of Φ11 and Φ12 are placed as the 3rd component of Φ(j) , and the components of Φ21 and Φ22 are placed as the 1st component Φ(j) ), the following eigensolution is obtained λ1 0 0 0.7325 0 0 Λ = 0 λ2 0 = 0 9.1008 0 0 0 λ3 0 0 ∞
0.0649 −0.1535 1 i h (1) (2) (3) = 0.3892 −0.9212 0 Φ= Φ Φ Φ 0.5320 1.3103 0
(8)
Next, the same problem is solved by means of Rayleigh-Ritz analysis. The Ritz basis is constructed from (7-62)
2 −1 −1 4 Ψ2 = 0 −1
−1 1 0 0 0.5750 0.1500 −1 0 1 = 0.1500 0.3000 0 0 6 0.0250 0.0500
The projected mass and stiffness matrices become, cf. (7-63),(7-64)
(9)
148
Chapter A – Solutions to Exercises
T 0.5750 0.1500 ˜ = 0.1500 0.3000 M 0.0250 0.0500
" # 0.5750 0.1500 1 1 0 0.548125 0.371250 0.1500 0.3000 = 1 2 0 0.371250 0.292500 0 0 0 0.0250 0.0500
T " # 0.5750 0.1500 0.5750 0.1500 2 −1 0 0.5750 0.1500 0.1500 0.3000 ˜ = 0.1500 0.3000 −1 = K 4 −1 0.1500 0.3000 0 −1 6 0.0250 0.0500 0.0250 0.0500
(10)
˜ and K ˜ with modal masses normalized to 1 The eigensolutions to the eigenvalue problem defined by M ˜ with respect to M become, cf. Box. 7.2 # " # 0.7325 0 ρ1 0 = R= 0 ρ2 0 9.1008 "
,
" # i h 0.6748 3.5418 Q = q(1) q(2) = 0.9599 −4.8415
(11)
The solutions for the eigenvectors become, cf. (7-51) # 0.5750 0.1500 " 0.5320 1.3103 i h 3.5418 ¯ (2) = 0.1500 0.3000 0.6748 ¯ (1) Φ ¯ = Φ Φ 0.9599 −4.8415 = 0.3892 −0.9212 0.0649 −0.1535 0.0250 0.0500
(12)
As seen the eigenvalues (11) are identical to the lowest two eigenvalues from the static condensation procedure (8). The two lowest eigenmodes in (8) are retrieved from (12) upon interchanging the 1st and 3rd components in the latter.
A.6 Exercise 7.2
A.6
149
Exercise 7.2
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Calculate approximate eigenvalues and eigenmodes by Rayleigh-Ritz analysis using the following Ritz basis 1 1 Ψ = [Ψ(1) Ψ(2) ] = 1 −1 1 1
SOLUTIONS: Question 1: The projected mass and stiffness matrices become, cf. (7-45) T 1 1 ˜ = M 1 −1 1 1
" # 2 0 0 1 1 7 1 0 2 1 1 −1 = 1 3 0 1 1 1 1
T " # 6 −1 0 1 1 1 1 8 4 ˜ K = 1 −1 −1 4 −1 1 −1 = 4 16 0 −1 2 1 1 1 1
(2)
˜ and K ˜ with modal masses normalized to 1 The eigensolutions to the eigenvalue problem defined by M ˜ with respect to M become, cf. Box. 7.2 # " # 1.0459 0 ρ1 0 = R= 0 ρ2 0 5.3541 "
,
" # i h −0.3864 0.0269 Q = q(1) q(2) = 0.0887 −0.5849
(3)
The solutions for the eigenvectors become, cf. (7-51) " # 1 1 −0.2976 −0.5580 i h 0.0269 −0.3864 ¯ (2) = ¯ (1) Φ ¯ = Φ = −0.4751 Φ 1 −1 0.6118 0.0887 −0.5849 1 1 −0.2976 −0.5580
(4)
150
Chapter A – Solutions to Exercises
The exact eigensolutions can be shown to be 0.7245 0 0 λ1 0 0 Λ = 0 λ2 0 = 0 2.9652 0 0 0 9.3104 0 0 λ3
−0.0853 −0.6981 −0.0458 i h (1) (2) (3) = −0.3884 −0.0486 Φ= Φ Φ Φ 0.5778 −0.5251 0.1997 −0.8149
(5)
As seen ρ1 and ρ2 are upperbounds to the exact eigenvalues λ1 and λ2 , and ρ2 is smaller than λ3 , cf. ¯ (2) (7-57). The estimates of the eigenmodes are not useful. Not even the signs of the components of Φ are correctly represented. These poor results are obtained because the chosen Ritz basis is far away from the basis spanned by Φ(1) and Φ(2) .
A.7 Exercise 7.3
A.7
151
Exercise 7.3
Consider the mass- and stiffness matrices in Exercise 7.2, and let 1 v = 1 1
(1)
¯1 = ρ Φ ¯ (1) = K−1 Mv, and next λ ¯ (1) , as approximate solutions to the 1. Calculate the vector Φ lowest eigenmode and eigenvalue. 2. Establish the error bound for the obtained approximation to the lowest eigenvalue.
SOLUTIONS: Question 1: From the given formula we calculate −1 2 0 0 6 −1 0 1 0.55 = −1 4 −1 0 2 1 1 = 1.30 0 1 1 0 −1 2 1 1.65
¯ (1) Φ
(2)
¯ (1) becomes, cf. (7-25) The Rayleigh quotient based on Φ T 0.55 6 1.30 −1 1.65 0 ¯1 = ρ Φ ¯ (1) = λ T 0.55 2 1.30 0 1.65 0
−1 0 0.55 4 −1 1.30 −1 2 1.65 = 0.7547 0 0 0.55 2 1 1.30 1 1 1.65
(3)
¯ (1) resembles Φ(1) much better than the corresponding apThe obtained un-normalized eigenmode Φ (1) ¯1 ¯ proximation for Φ indicated in eq. (4) of Exercise 7.2. As a consequence the obtained eigenvalue λ is a much better approximation to the exact eigenvalue λ1 = 0.7245 given in eq. (5) of Exercise 7.2, than the approximation ρ1 = 1.0459 obtained by the Rayleigh-Ritz analysis. The indicated formula for ¯ (1) represents the 1st iteration step in the socalled inverse vector iteration algorithm described obtaining Φ in Section 8.2
152
Chapter A – Solutions to Exercises
Question 2: From (2) follows that (1) ¯ = 2.1714 Φ
(4)
The error vector becomes, cf. (7-79)
6 −1 0 2 0 0 0.55 1.1698 ε1 = −1 4 −1 − 0.7547 · 0 2 1 1.30 = −0.2075 0 −1 2 0 1 1 1.65 −0.2264 ε1 = 1.2095
⇒
(5)
The lowest eigenvalue of M can be shown to be
µ1 = 0.3820
(6)
// Then, from (7-85) the following bound is obtained ¯1| ≤ |λ1 − λ
1.2095 1 · = 2.1714 0.3820 2.1714
(7)
(7-95) ¯ 1 | = |0.7245 − 0.7547| = 0.0302. Hence, the bounding method provides a rather crude Actually, |λ1 − λ upperbound in the present case.
A.8 Exercise 8.1
A.8
153
Exercise 8.1
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Perform two inverse iterations, and then calculate an approximation to λ1 . 2. Perform two forward iterations, and then calculate an approximation to λ3 .
SOLUTIONS: Question 1: The calculations are performed with the start vector 1 Φ0 = 1 1
(2)
The matrix A becomes, cf. (8-4) −1 2 0 0 0.350 0.125 0.075 6 −1 0 A = −1 4 −1 0 2 1 = 0.100 0.750 0.450 0 1 1 0.050 0.875 0.725 0 −1 2
(3)
At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 8.1 0.350 0.125 0.075 1 0.55 ¯ Φ1 = 0.100 0.750 0.450 1 = 1.30 0.050 0.875 0.725 1 1.65 0.55 0.16585 1 Φ1 = √ 1.30 = 0.39201 10.9975 1.65 0.49755
⇒
¯ 1 = 10.9975 ¯ T MΦ Φ 1 (4)
154
Chapter A – Solutions to Exercises
0.350 0.125 0.075 0.16585 0.14436 ¯2 = Φ 0.100 0.750 0.450 0.39201 = 0.53449 0.050 0.875 0.725 0.49755 0.71202
⇒
¯ 2 = 1.8812 ¯ T MΦ Φ 2
0.14436 0.10526 1 Φ2 = √ 0.53449 = 0.38970 1.8812 0.71202 0.51914
(5)
Since, Φ2 has been normalized to unit modal mass, so ΦT2 MΦ2 = 1, an approximation is obtained from the following Rayleigh fraction, cf. (7-25) T 6 −1 0 0.10526 0.10526 ¯ 1 = ΦT KΦ2 = λ 4 −1 0.38970 = 0.72629 0.38970 −1 2 0 −1 2 0.51914 0.51914
(6)
The exact solution is λ1 = 0.72446.
Question 2: The calculations are performed with the start vector (2). The matrix B becomes, cf. (8-35) −1 6 −1 0 2 0 0 3.0 −0.5 0.0 B = 0 2 1 −1 4 −1 = −1.0 5.0 −3.0 0 −1 2 0 1 1 1.0 −6.0 5.0
(7)
At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 8.3 3.0 −0.5 0.0 1 2.5 ¯ 5.0 −3.0 1 = 1.0 Φ1 = −1.0 1.0 −6.0 5.0 1 0.0 2.5 0.65653 1 Φ1 = √ 1.0 = 0.26261 14.5 0.0 0.00000
⇒
¯ 1 = 14.5 ¯ T MΦ Φ 1 (8)
A.8 Exercise 8.1
155
3.0 −0.5 0.0 0.65653 1.83829 ¯2 = Φ −1.0 5.0 −3.0 0.26261 = 0.65653 1.0 −6.0 5.0 0.00000 −0.91915
⇒
¯ 1 = 7.25862 ¯ T MΦ Φ 1
1.83829 0.68232 1 Φ1 = √ 0.65653 = 0.24369 7.25862 −0.91915 −0.34116
(9)
Again, Φ2 has been normalized to unit modal mass, so ΦT2 MΦ2 = 1, an approximation is obtained from the following Rayleigh fraction T 6 −1 0 0.68232 0.68232 ¯ 3 = ΦT KΦ2 = λ 4 −1 0.24369 = 3.09739 0.24369 −1 2 0 −1 2 −0.34116 −0.34116
(10)
The exact solution is λ3 = 9.31036. The poor result is obtained because Φ2 is a rather bad approximation to Φ(3) .
156
A.9
Chapter A – Solutions to Exercises
Exercise 8.2
Given the following mass- and stiffness matrices
0 0 M = 0 1 0 0 0 12
1 2
,
2 −1 0 K = −1 4 −1 0 −1 2
(1)
The eigenmodes Φ(1) are Φ(3) are known to be, cf. (6-54) √ Φ(1) =
2 √2 2 √2 2 2
,
Φ(3) =
√
2 √2 − 2 √2 2 2
(2)
1. Calculate Φ(2) by means of Gram-Schmidt orthogonalization, and calculate all eigenvalues.
SOLUTIONS: Question 1: Consider an arbitrary vector 1 x = 2 2
(3)
Since, Φ(1) , Φ(2) and Φ(3) form a vector basis, we may write x = c1 Φ(1) + c2 Φ(2) + c3 Φ(3)
(4)
In order to determine the expansion coefficient cj , (4) is premultiplied with Φ(j) T M, and the Mothonormality of the eigenmodes are used, i.e. that Φ(i) T MΦ(j) = δij . For j = 1, 3 the following results are obtained
√ T 2 1 √2 2 2 0 √2 2 0 2 (j) T Mx = √ cj = Φ T 2 1 2 2 √ 2 − √2 0 2 0 2
1 1 0 2 =
0 0 0
1 2
√ 7 2 4
0
j=1
2
0 0 1 √ 1 0 2 = − 42 1 2
,
2
(5) ,
j=3
A.9 Exercise 8.2
157
Then, from (3), (4) and (5) follows
c2 Φ(2)
√ √ 2 2 1 −0.5 √ √ 2 2 √ √ 2 7 2 2 + − 2 = = 2 − · · 0.0 4 √2 4 √2 2 2 2 0.5 2 2
T 1 −0.5 −0.5 2 0 0 c22 = 0.0 0 1 0 0.0 = 0.25 0.5
Φ(2)
0 0
1 2
⇒
⇒
0.5
−0.5 −1 1 · 0.0 = 0 = 0.5 0.5 1
(6)
Hence, the modal matrix becomes, cf. (6-54)
i h Φ = Φ(1) Φ(2) Φ(3) =
√
2 √2 2 √2 2 2
√
−1 0 1
2 √2 − 22 √ 2 2
(7)
Given that all eigenmodes have been normalized to unit modal mass the eigenvalues may be calculated from the Rayleigh quotient, cf. (7-25) 2 0 0 Λ = ΦMΦ = 0 4 0 0 0 6
(8)
Generally, if n − 1 eigenmodes to a general eigenvalue problem is known the remaining eigenmode can lways be determined solely from the M-orthonormality conditions.
158
Chapter A – Solutions to Exercises
A.10
Exercise 9.3
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
6 −1 0 K = −1 4 −1 0 −1 2
,
(1)
1. Perform an initial transformation to a special eigenvalue problem, and calculate the eigenvalues and eigenvectors by means of standard Jacobi iteration. 2. Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi iteration operating on the original general eigenvalue problem.
SOLUTIONS: Question 1: Initially, a Choleski decomposition of the mass matrix is performed, cf. (6-109). As indicated by the algorithm in Box 6.3 the following calculations are performed s11 =
√
√ m11 =
2
m21 0 = √ =0 s11 2 m31 0 = = √ =0 s11 2 q p √ = m22 − s221 = 2 − 02 = 2
s21 = s31 s22 s32 s33
√ 2 1 1 = (m32 − s31 · s21 ) = √ (1 − 0 · 0) = s22 2 2 s √ √ q 2 2 2 2 2 = m33 − s32 − s31 = 1 − − 02 = 2 2
(2)
Hence, the matrices S and S−1 become √
2 0 √ S= 0 2 √ 2 0 2 √
2 2
S−1 = 0 0
0 0 √
⇒
2 2
0
√
2 √2 − 22
0
0 √ 2
(3)
A.10 Exercise 9.3
159
The initial value of the updated similarity transformation matrix and the stiffness matrix becomes, cf. (6-112), (9-4)
√
2 2
Φ0 = (S−1 )T = 0 0
0
√
2 2
0
0
√ − 22 √ 2
˜ = S−1 K(S−1 )T = K0 = K √ √2 2 0 0 6 −1 0 2 √ 2 2 0 4 −1 0 0 −1 √2 √ 2 0 −1 2 0 2 0 − 2
0
√
2 2
0
0 3.0 −0.5 0.5 √ 2 = 2.0 −3.0 − 2 −0.5 √ 0.5 −3.0 8.0 2
(4)
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
( 2 · (−0.5) 1 cos θ = 0.9239 = −0.3927 ⇒ θ = arctan 2 3.0 − 2.0 sin θ = −0.3827 0.9239 0.3827 0 P0 = −0.3827 0.9239 0 0 0 1 0.6533 0.2706 0 Φ1 = Φ0 P0 = −0.2706 0.6533 −0.7071 0 0 1.4142 3.2071 0 1.6070 T 1.7929 −2.5803 K1 = P0 K0 P0 = 0 1.6070 −2.5803 8.0000
(5)
160
Chapter A – Solutions to Exercises
Next, the calculations are performed for (i, j) = (1, 3) : 2 · 1.6070 1 = −0.2958 ⇒ θ = arctan 2 3.2071 − 8.0000 0.9566 0 0.2915 P1 = 0 1 0 −0.2915 0 0.9566 0.6249 0.2706 0.1904 Φ2 = Φ1 P1 = −0.0527 0.6533 −0.7553 −0.4122 0 1.3528 2.7165 0.7521 0 K2 = PT1 K1 P1 = 0.7521 1.7929 −2.4682 0 −2.4682 8.4906
(
cos θ = 0.9566 sin θ = −0.2915
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) : ( 2 · (−2.4682) 1 cos θ = 0.9500 = 0.3176 ⇒ θ = arctan 2 1.7929 − 8.4906 sin θ = 0.3123 1 0 0 P2 = 0 0.9500 −0.3123 0 0.3123 0.9500 0.6249 0.3165 0.0964 Φ3 = Φ2 P2 = −0.0527 0.3848 −0.9215 −0.4122 0.4224 1.2852 2.7165 0.7145 −0.2349 K3 = PT2 K2 P2 = 0.7145 0.9816 0 −0.2349 0 9.3019
(6)
(7)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the eigenvalues 0.6980 0.0862 0.0729 Φ6 = 0.0481 0.3885 −0.9202 −0.2004 0.5249 1.2978 0.6981 0.0853 0.0729 Φ9 = 0.0486 0.3884 −0.9202 −0.1997 0.5251 1.2978
,
2.9652 0.0028 0.0000 K6 = 0.0028 0.7245 0 0.0000 0 9.3104
,
2.9652 0.0000 0.0000 K9 = 0.0000 0.7245 0 0.0000 0 9.3104
(8)
As seen the eigenmodes are stored column-wise in Φ according to the permutation (j1 , j2 , j3 ) = (2, 1, 3), cf. Box 9.2.
A.10 Exercise 9.3
161
Question 2: The following initializations are introduced, cf. Box 9.4 2 0 0 6 −1 0 M0 = M = 0 2 1 , K0 = K = −1 4 −1 0 1 1 0 −1 2
,
1 0 0 Φ0 = 0 1 0 0 0 1
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) : 4 · 0 − 2 · (−1) ( = 0.5 a= α = −0.4142 6 · 2 − 2 · 4 ⇒ 6 · 0 − 2 · (−1) β = 0.4142 b= = 0.5 6 · 2 − 2 · 4 1 0.4142 0 1 0.4142 0 P0 = −0.4142 1 0 , Φ1 = Φ0 P0 = −0.4142 1 0 0 0 1 0 0 1 2.3431 0 −0.4142 M1 = PT0 M0 P0 = 0 2.3431 1 −0.4142 1 1 7.5147 0 0.4142 K1 = PT0 K0 P0 = 0 4.2010 −1 0.4142 −1 2
(9)
(10)
162
Chapter A – Solutions to Exercises
Next, the calculations are performed for (i, j) = (1, 3) : 2 · (−0.4142) − 1 · 0.4142 ( = −0.4393 a = α = 1.0023 7.5147 · 1 − 2.3431 · 2 ⇒ 7.5147 · (−0.4142) − 2.3431 · 0.4142 β = −0.3050 b= = −1.4437 7.5147 · 1 − 2.3431 · 2 1 0 −0.3050 1 0.4142 −0.3050 P1 = 0 1 0 1 0.1263 , Φ2 = Φ1 P1 = −0.4142 1.0023 0 1 1.0023 0 1 2.5174 1.0023 0 M2 = PT1 M1 P1 = 1.0023 2.3431 1 0 1 1.4707 10.3542 −1.0023 0 K = PT K P = −1.0023 4.2010 −1 2 1 1 1 0 −1 2.4465
(11)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) : 2.4465 · 1 − 1.4707 · (−1) ( = 8.7838 a= α = −1.2369 4.2010 · 1.4707 − 2.3407 · 2.4465 ⇒ 4.2010 · 1 − 2.3431 · (−1) β = 0.7404 = 14.6745 b= 4.2010 · 1.4707 − 2.3431 · 2.4465 1 0 0 1 0.7915 0.0016 P2 = 0 1 0.7404 , Φ3 = Φ2 P2 = −0.4142 0.8437 0.8667 0 −1.2369 1 1.0023 −1.2369 1 2.5174 1.0023 0.7421 M3 = PT2 M2 P2 = 1.0023 2.1193 0 0.7421 0 4.2357 10.3542 −1.0023 −0.7421 K3 = PT2 K2 P2 = −1.0023 10.4174 0 −0.7421 0 3.2684
(12)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the transformed mass and stiffness matrices
A.10 Exercise 9.3
1.6779 Φ6 = 0.0350 −0.3741 5.7469 M6 = 0.1959 −0.0118 1.6780 Φ = 9 0.1169 −0.4801 5.7769 M = 0.0000 9 0.0000
163
−0.0129 0.1846 1.3400 0.8282 −1.9075 1.1188 0.1959 −0.0118 2.1179 0 0 4.5448
,
17.0856 −0.1959 0.0118 K6 = −0.1959 19.6067 0 0.0118 0 3.2925 (13)
−0.1060 0.1819 1.3379 0.8281 −1.8869 1.1195 0.0000 0.0000 2.1139 0 , 0 4.5448
17.1296 −0.0000 −0.0000 K9 = −0.0000 19.6810 0 −0.0000 0 3.2925
Presuming that the process has converged after the 3rd sweep the eigenvalues and normalized eigenmodes are next retrieved by the following calculations, cf. Box. 9.4 5.7769 0.0000 0.0000 0.4161 0 0 −1 m = M9 = 0.0000 2.1139 0 , m 2 = 0 0.6878 0 0.0000 0 4.5448 0 0 0.4691 2.9652 −0.0000 −0.0000 λ2 0 0 K9 = −0.0000 Λ = 0 λ3 0 = M−1 9.3104 0.0000 9 −0.0000 0.0000 0.7245 0 0 λ1 0.6981 −0.0729 0.0853 1 Φ = Φ(2) Φ(3) Φ(1) = Φ9 m− 2 = 0.0486 0.9202 0.3884 −0.1997 −1.2978 0.5251
⇒
(14)
The solutions (14) are identical to those obtained in (8) for the special Jacobi iteration algorithm. In the present case the eigenmodes are stored column-wise in Φ according to the permutation (j1 , j2 , j3 ) = (2, 3, 1), cf. Box 9.4. The convergence rates of the special nd the general Jacobi iteration algorithm seems to be rather alike.
164
Chapter A – Solutions to Exercises
A.11
Exercise 9.6
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.
SOLUTIONS: Question 1: At first, as indicated in Box 9.7 an initial similarity transformation of the indicated general eigenvalue problem into a special eigenvalue problem is performed with the similarity transformation matrix P = T S−1 , where S is a solution to M = SST . In case S is determined from an Choleski decomposition of the mass matrix the initial updated transformation and stiffness matrices have been calculated in Exercise 9.3, eq. (4). The result becomes √
2 2
Φ1 = (S−1 )T = 0 0
0
√
2 2
0
0
√ − 22 √ 2
K1 = S−1 K(S−1 )T = √ √2 2 0 0 6 −1 0 2 √ 2 2 0 4 −1 0 0 −1 √2 √ 2 0 −1 2 0 2 0 − 2
0
√
2 2
0
0 3.0 −0.5 0.5 √ 2 = 2.0 −3.0 − 2 −0.5 √ 0.5 −3.0 8.0 2
(2)
As seen the original three diagonal structure of K is destroyed by the similarity. This may be reestablished by means of a Householder transformation as described in Section 9.4. However, this will be omitted here, so the QR-iteration is performed on the full matrix K1 . At the determination of q1 and r11 in the 1st QR iteration the following calculations are performed, cf. (9-67) 3.0 k1 = −0.5 0.5
,
r11
3.0 = −0.5 = 3.0822 0.5
3.0 0.9733 1 q1 = −0.5 = −0.1622 3.0822 0.5 0.1622
(3)
A.11 Exercise 9.6
165
q2 and r12 , r22 are determined from the following calculations, cf. (9-68) T −0.5 −0.5 0.9733 , r12 = −0.1622 2.0 = −1.2978 k = 2 2.0 −3.0 −3.0 0.1622 −0.5 0.9733 r22 = 2.0 + 1.2978 · −0.1622 = 3.4009 −3.0 0.1622 −0.5 0.9733 0.2244 1 q2 = 2.0 + 1.2978 · −0.1622 = 0.5262 3.4009 −3.0 0.1622 −0.8202
(4)
q3 and r13 , r23 , r33 are determined from the following calculations, cf. (9-69) 0.5 k3 = −3.0 , r13 = qT1 k3 = 2.2711 , r23 = qT2 k3 = −8.0282 8.0 r33 = k3 − 2.2711q1 + 8.0282q2 = 1.9080 0.0477 1 k3 − 2.2711q1 + 8.0282q2 = 0.8348 q3 = 1.9080 0.5486
(5)
166
Chapter A – Solutions to Exercises
Then, at the end of the 1st iteration the following matrices are obtained
0.9733 0.2244 0.0477 Q1 = −0.1622 0.5662 0.8348 0.1622 −0.8202 0.5486 3.0822 −1.2978 2.2711 = R 0 3.4009 −8.0282 1 0 0 1.9080 0.6882 0.1587 Φ2 = Φ1 Q1 = −0.2294 0.9521 0.2294 −1.1600 3.5789 −1.8540 K2 = R1 Q1 = −1.8540 8.3744 0.3095 −1.5650
⇒
0.0337 0.2024 0.7758
(6)
0.3095 −1.5650 1.0466
The corresponding matrices after the 2nd and 3rd iteration become
0.8853 0.4648 0.0115 Q2 = −0.4586 0.8689 0.1861 0.0766 −0.1700 0.9825 4.0425 −5.6020 1.0719 = R 0 6.6809 −1.3940 2 0 0 0.7405 0.5391 0.4521 Φ3 = Φ2 Q2 = −0.6243 0.6862 0.7945 −1.0332 6.2303 −3.1708 K3 = R2 Q2 = −3.1708 6.0422 0.0567 −0.1259
⇒
0.0706 0.3734 0.5489
0.0567 −0.1259 0.7275
(7)
A.11 Exercise 9.6
167
0.8912 0.4536 0.0021 Q3 = −0.4536 0.8610 0.0219 0.0081 −0.0205 0.9998 6.9910 −5.5673 0.1135 R = 0 3.9475 −0.1014 3 0 0 0.7274 0.2760 0.6459 Φ4 = Φ3 Q3 = −0.8645 0.3206 1.1811 −0.5714 8.5763 −1.7913 K4 = R3 Q3 = −1.7913 3.5192 0.0059 −0.0148
⇒
0.0816 0.3871 0.5277
(8)
0.0059 −0.0148 0.7245
As seen from R3 and K4 the terms in the main diagonal have already after the 3rd iteration grouped in descending magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in Box 9.7. Moreover, for both matrices convergence to the lowest eigenvalue λ1 = 0.7245 has occurred, illustrating the fact that the QR algorithm converge faster to the lowest eigenmode than to the highest. The matrices after the 14th iteration become 1.0000 0.0000 0.0000 = Q −0.0000 1.0000 0.0000 14 0.0000 −0.0000 1.0000 9.3104 −0.0000 0.0000 = R 0 2.9652 −0.0000 14 0 0 0.7245 0.0729 0.6981 Φ15 = Φ14 Q14 = −0.9202 0.0486 1.2978 −0.1997 9.3104 −0.0000 = R Q = K −0.0000 2.9652 15 14 14 0.0000 −0.0000
⇒
0.0853 0.3884 0.5251
0.0000 −0.0000 0.7245
(9)
168
Chapter A – Solutions to Exercises
Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for the eigenvalues and eigenmodes of the original general eigenvalue problem λ3 0 0 9.3104 −0.0000 0.0000 Λ = 0 λ2 0 = K15 = −0.0000 2.9652 −0.0000 0.0000 −0.0000 0.7245 0 0 λ1 0.0729 0.6981 0.0853 (3) (2) (1) Φ= Φ Φ Φ = Φ15 = −0.9202 0.0486 0.3884 1.2978 −0.1997 0.5251
(10)
The solution (10) agrees with the corresponding solutions for the special and general Jacobi iteration algorithms obtained in Exercise 9.3, eq. (8) and (14), respectively.
A.12 Exercise 10.1
A.12
169
Exercise 10.1
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis 1 1 (1) (2) Φ0 = Φ0 Φ0 = 1 0 1 −1
SOLUTIONS: Question 1: The matrix A becomes, cf. (6-44) −1 2 0 0 6 −1 0 0.350 0.125 0.075 A = K−1 M = −1 4 −1 0 2 1 = 0.100 0.750 0.450 0 1 1 0 −1 2 0.050 0.875 0.725
(2)
Then, the 1st iterated vector basis becomes, cf. (10-4) 0.350 0.125 0.075 1 1 0.550 0.275 (1) (2) ¯ Φ ¯ ¯1 = Φ = AΦ0 = 0.100 0.750 0.450 1 Φ 0 = 1.300 −0.350 1 1 0.050 0.875 0.725 1 −1 1.650 −0.675
(3)
(1)
At the determination of Φ1 and r11 in the 1st vector iteration the following calculations are performed, cf. (10-13) T 0.550 0.550
¯ (1) ¯ (1) = Φ 1.300 , r11 = Φ 1 1 = 1.300 1.650 1.650 0.550 0.1659 1 (1) Φ1 = 1.300 = 0.3920 3.3162 1.650 0.4976
12 2 0 0 0.550 0 2 1 1.300 = 3.3162 0 1 1 1.650 (4)
170
Chapter A – Solutions to Exercises
(2)
Φ1 and r12 , r22 are determined from the following calculations, cf. (10-15) T 2 0 0 0.275 0.275 0.1659 ¯ (2) = Φ −0.350 , r12 = 0.3920 0 2 1 −0.350 = −0.9578 1 0 1 1 −0.675 −0.675 0.4976
0.275
0.1659
r22 = −0.350 + 0.9578 · 0.3920 = 0.6380
−0.675 0.4976 0.275 0.1659 0.6800 1 (2) Φ1 = −0.350 + 0.9578 · 0.3920 = 0.0399 0.6380 −0.675 0.4976 −0.3111
(5)
Then, at the end of the 1st iteration the following matrices are obtained " # 3.3162 −0.9578 R1 = 0 0.6380 0.1659 0.6800 Φ1 = 0.3920 0.0399 0.4976 −0.3111
(6)
¯ 1 . The corresponding matrices after the 2nd and 3rd iteration The reader should verify that Φ1 R1 = Φ become " # 1.3716 −0.1507 R2 = 0 0.3392 0.1053 0.6944 Φ2 = 0.3897 0.0492 0.5191 −0.2311
(7)
" # 1.3798 −0.0371 R3 = 0 0.3374 0.0902 0.6972 Φ3 = 0.3888 0.0496 0.5237 −0.2086
(8)
Convergence of the eigenmodes with the indicated number of digits were achieved after 9 iterations, where
A.12 Exercise 10.1
" # 1.3803 −0.0000 R14 = 0 0.3372 0.0853 0.6981 Φ9 = 0.3884 0.0486 0.5251 −0.1997
171
(9)
Presuming that convergence has occurred after the 9th iteration the following eigenvalues are obtained from (10-10) and (10-12) # " # " 0.7245 0.0000 λ1 0 T −1 = Φ9 KΦ9 = R∞ = Λ= 0 λ2 0.0000 2.9652 (10) 0.0853 0.6981 (1) (2) = Φ9 = 0.3884 Φ= Φ Φ 0.0486 0.5251 −0.1997 The solution (10) agrees with the corresponding solutions obtained in Excercises 9.3 and 9.6.
172
Chapter A – Solutions to Exercises
A.13
Exercise 10.3
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace iteration with the start vector basis 1 1 (1) (2) Φ0 = Φ0 Φ0 = 1 0 1 −1
SOLUTIONS: Question 1: The matrix A becomes, cf. (6-44) −1 2 0 0 0.350 0.125 0.075 6 −1 0 A = K−1 M = −1 4 −1 0 2 1 = 0.100 0.750 0.450 0 1 1 0.050 0.875 0.725 0 −1 2
(2)
Then, the 1st iterated vector basis becomes, cf. (10-4) 0.350 0.125 0.075 1 1 0.550 0.275 (1) (2) ¯ Φ ¯ ¯1 = Φ = AΦ0 = 0.100 0.750 0.450 1 Φ 0 = 1.300 −0.350 1 1 0.050 0.875 0.725 1 −1 1.650 −0.675
(3)
In order to perform the Rayleigh-Ritz analysis in the 1st subspace iteration the following projected mass ¯ 1 , cf. (6-44), (10-20), (10-31) and stiffness matrices are calculated based on Φ T 0.550 0.275 ¯ T MΦ ¯1 = ˜1=Φ M 1.300 −0.350 1 1.650 −0.675
" # 2 0 0 0.550 0.275 10.998 −3.1763 0 2 1 1.300 −0.350 = −3.1763 1.3244 0 1 1 1.650 −0.675
T " # 6 −1 0 0.550 0.275 0.550 0.275 8.300 −1.850 T ¯ ¯ ˜ K1 = Φ1 KΦ1 = 1.300 −0.350 −1 4 −1 1.300 −0.350 = −1.850 1.575 0 −1 2 1.650 −0.675 1.650 −0.675 (4)
A.13 Exercise 10.3
173
The corresponding eigenvalue problem (10-30) becomes ˜ 1 Q1 = M ˜ 1 Q1 R1 K "
⇒
" " # # # 8.300 −1.850 (1) (2) 10.998 −3.1763 (1) (2) ρ1,1 0 q1 q 1 = q1 q 1 0 ρ2,1 −1.850 1.575 −3.1763 1.3244 "
# 0.7246 0 R1 = 0 2.9752
,
Q1 =
"
⇒
# −0.2471 −0.4845
(5)
0.1813 −1.5569
The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (10-34) # −0.0861 −0.6947 0.550 0.275 " −0.2471 −0.4845 ¯ 1 Q1 = Φ1 = Φ = −0.3848 −0.0850 1.300 −0.350 0.1813 −1.5569 1.650 −0.675 −0.5302 0.2514
(6)
Correspondingly, after the 2nd and 9th iteration steps the following matrices are calculated "
# 0.7245 0 R2 = 0 2.9662
,
0.0854 0.6972 Φ2 = 0.3881 0.0603 0.5255 −0.2162
"
# 0.7245 0 R9 = 0 2.9652
# −0.7245 −0.0013 Q2 = 0.0004 −2.9673
(7)
# −0.7245 −0.0000 Q9 = 0.0000 −2.9652
(8)
"
,
−0.0853 −0.6981 Φ9 = −0.3884 −0.0486 −0.5251 0.1997
"
The subspace iteration process converged with the indicated accuracy after 8 iterations. Finally, it should be checked that th calculated eigenvalues are inded the lowest two by a Sturm sequence or Gauss factorization check. The 2nd calculated eigenvalue becomes ρ2,9 = 2.9652. Then, let µ = 3.1 and perform a Gauss factorization of the matrix K − 3.1M, i.e.
174
Chapter A – Solutions to Exercises
−0.2 −1.0 0 K − 3.1M = −1.0 −2.2 −4.1 = 0 −4.1 −1.1 1 0 0 −0.2 0 0 1 5 0 LDLT = 5 1 0 0 2.8 0 0 1 −1.4643 0 −1.4643 1 0 0 −7.1036 0 0 1
(9)
It follows that two components in the main diagonal of D are negative, from which is concluded that two eigenvalues are smaller than µ = 3.1. In turn this means that the two eigensolutions obtained by (8) are indeed the lowest two eigensolutions of the original system. The solution (8) agrees with the corresponding solutions obtained in Excercises 9.3, 9.6 and 10.1.
A.14 Exercise 10.5
A.14
175
Exercise 10.5
Given the following mass- and stiffness matrices 2 0 0 M = 0 2 1 0 1 1
,
6 −1 0 K = −1 4 −1 0 −1 2
(1)
1. Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope method).
SOLUTIONS: Question 1: At first a calculation with µ = 2.5 is performed, which produces the following results
1.0 −1.0 0.0 K − 2.5M = −1.0 −1.0 −3.5 0.0 −3.5 −0.5
⇒
(3) P (2.5) = 1 P (2) (2.5) = 1.0 P (1) (2.5) = 1.0 · (−1.0) − (−1.0)2 = −2.0 (0) P (2.5) = −0.5 · (−2.0) − (−3.5)2 · (1.0) = −11.25
, , , ,
sign(P (3) (2.5)) = + sign(P (2) (2.5)) = + (1) sign(P (2.5)) = − (0) sign(P (2.5)) = −
(2)
Hence, the sign sequence of the Sturm sequence becomes + + −−. corresponding to the number of sign changes nsign = 1 in the sequence. One eigenvalue is smaller than µ = 2.5. Similar calculations are performed for µ = 3.5, 4.5, . . . , 9.5
µ = 3.5 : µ = 4.5 : µ = 5.5 : µ = 6.5 : µ = 7.5 : µ = 8.5 : µ = 9.5 :
Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + −
⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒
nsign nsign nsign nsign nsign nsign nsign
=2 =2 =2 =2 =2 =2 =3
From this is concluded that the 3rd eigenvalue is placed somewhere in the interval 8.5 < λ3 < 9.5. Next, similar calculations are performed for µ = 8.6, 8.7, . . . , 9.4
(3)
176
Chapter A – Solutions to Exercises
µ = 8.6 : µ = 8.7 : µ = 8.8 : µ = 8.9 : µ = 9.0 : µ = 9.1 : µ = 9.2 : µ = 9.3 : µ = 9.4 :
Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + + Sign sequence = + − + −
⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒
nsign nsign nsign nsign nsign nsign nsign nsign nsign
=2 =2 =2 =2 =2 =2 =2 =2 =3
(4)
From this is concluded that the 3rd eigenvalue is confined to the interval 9.3 < λ3 < 9.4. Next, calculations are performed for µ = 9.31, 9.32, . . . , 9.39 µ = 9.31 : µ = 9.32 : µ = 9.33 : µ = 9.34 : µ = 9.35 : µ = 9.36 : µ = 9.37 : µ = 9.38 : µ = 9.39 :
Sign sequence = + − + + Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + − Sign sequence = + − + −
⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒
nsign nsign nsign nsign nsign nsign nsign nsign nsign
=2 =3 =3 =3 =3 =3 =3 =3 =3
(5)
From this is concluded that the 3rd eigenvalue is confined to the interval 9.31 < λ3 < 9.32. Proceeding in this manner, it may be shown after totally 52 Sturm sequence calculations that the 3rd eigenvalue is confined to the interval 9.31036 < λ3 < 9.31037. Each extra digit requires 9 calculations. Setting λ3 ' 9.310365, the linear equation (10-63) attains the form
¯ (3) = 0 K − 9.310365M Φ
⇒
(3) ¯ Φ −12.6207 −1 0 0 1 ¯ (3) −1 −14.6207 −10.3104 Φ2 = 0 ¯ (3) 0 −10.3104 −7.3104 Φ 0 3
(6)
¯ (3) = 1 the algorithm (10-64) now provides Setting Φ 1 ¯ (3) = − (−12.6207) · 1 = −12.6207 Φ 2 (−1) ¯ (3) Φ 3
(−14.6207) (−1) ·1− · (−12.6207) = 1 =− (−10.3104) (−10.3104)
⇒
¯ (3) Φ
1 = −12.6207 17.7800
(7)
A.14 Exercise 10.5
177
Normalization to unit modal mass provides, cf. (6-54)
Φ(3)
0.0729 = −0.9202 1.2978
(8)
The eigenvalue λ3 ' 9.310365 and the corresponding eigenmode Φ(3) as given by (8) agree with the corresponding results obtained in Excercises 9.3 and 9.6.