Solution Manual for Adaptive Control, Second Edition
Karl Johan Åström, Björn Wittenmark

Preface
This Solution Manual contains solutions to selected problems in the second edition of Adaptive Control, published by Addison-Wesley 1995, ISBN 0-201-55866-1.
PROBLEM SOLUTIONS
SOLUTIONS TO CHAPTER 1
1.5
Linearization of the valve shows that

    Δv = 4v₀³ Δu

The loop transfer function is then G₀(s) G_PI(s) 4v₀³, where G_PI is the transfer function of a PI controller, i.e.

    G_PI(s) = K (1 + 1/(s Tᵢ))

The characteristic equation for the closed-loop system is

    s Tᵢ (s + 1)³ + K · 4v₀³ (s Tᵢ + 1) = 0

With K = 0.15 and Tᵢ = 1 we get

    s (s + 1)³ + 0.6 v₀³ (s + 1) = (s + 1)(s³ + 2s² + s + 0.6 v₀³) = 0

The root locus of this equation with respect to v₀ is sketched in Fig. 1. According to the Routh-Hurwitz criterion the critical case is

    0.6 v₀³ = 2  ⇒  v₀ = (10/3)^(1/3) = 1.49

Since the plant G₀ has unit static gain and the controller has integral action, the steady-state output is equal to v₀ and to the set point y_r. The closed-loop system is stable for y_r = u_c = 0.3 and 1.1 but unstable for y_r = u_c = 5.1. Compare with Fig. 1.9.
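The critical operating point can be checked numerically. The following sketch (plain numpy, not part of the original solution) evaluates the closed-loop characteristic polynomial just below and above the critical v₀:

```python
import numpy as np

def closed_loop_roots(v0):
    # characteristic polynomial (s + 1)(s^3 + 2 s^2 + s + 0.6 v0^3)
    cubic = [1.0, 2.0, 1.0, 0.6 * v0**3]
    return np.roots(np.polymul([1.0, 1.0], cubic))

v0_crit = (10.0 / 3.0) ** (1.0 / 3.0)   # from 0.6 v0^3 = 2

stable   = all(r.real < 0 for r in closed_loop_roots(0.9 * v0_crit))
unstable = any(r.real > 0 for r in closed_loop_roots(1.1 * v0_crit))
```

At v₀ = v0_crit the Routh-Hurwitz boundary 2 · 1 = 0.6v₀³ is hit and the cubic factor has a pair of purely imaginary roots.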
Figure 1. Root locus in Problem 1.5.

1.6
Tune the controller using the Ziegler-Nichols closed-loop method. The frequency ω_u, where the process has 180° phase lag, is first determined from

    arg G₀(iω) = −ω/q − arctan(ω/q) = −π

where

    G₀(s) = e^(−s/q) / (1 + s/q)

The controller parameters are then given by Table 8.2 on page 382, where

    K_u = 1 / |G₀(iω_u)|

This gives

    q     ω_u   |G₀(iω_u)|   K    Tᵢ
    0.5   1.0   0.45         1    5.24
    1.0   2.0   0.45         1    2.62
    2.0   4.1   0.45         1    1.3
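The table entries can be reproduced numerically. This is a sketch assuming the process above and the standard Ziegler-Nichols closed-loop PI rules K = 0.45 K_u, Tᵢ = T_u/1.2:

```python
import math

def zn_pi(q):
    # find omega_u where arg G0 = -pi:  omega/q + arctan(omega/q) = pi  (bisection)
    f = lambda w: w / q + math.atan(w / q) - math.pi
    lo, hi = 0.0, 10.0 * q
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    wu = 0.5 * (lo + hi)
    gain = 1.0 / math.sqrt(1.0 + (wu / q) ** 2)   # |G0(i wu)|
    Ku = 1.0 / gain
    Tu = 2.0 * math.pi / wu
    return wu, gain, 0.45 * Ku, Tu / 1.2          # omega_u, |G0|, K, Ti

wu, g, K, Ti = zn_pi(0.5)
```

For q = 0.5 this gives ω_u ≈ 1.0, |G₀(iω_u)| ≈ 0.44, K ≈ 1.0 and Tᵢ ≈ 5.2, matching the first row of the table (the table rounds ω_u).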
A simulation of the system obtained when the controller is tuned for the smallest flow q = 0.5 is shown in Fig. 2. The Ziegler-Nichols method is not the best tuning method in this case. In Fig. 3 we show results for a
Figure 2. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 0.5.
controller designed for q = 1, and in Fig. 4 when the controller is designed for q = 2.

1.7
Introducing the feedback

    u₂ = −k₂ y₂

the closed-loop system becomes a third-order linear system with poles in s = −1, s = −3, and s = −(1 + k₂). The transfer function from u₁ to y₁ is

    G(s) = (s² + (4 + k₂)s + 3 + k₂) / ((s + 1)(s + 3)(s + 1 + k₂))

For k₂ = 0 the numerator factors as (s + 1)(s + 3) and G(s) reduces to 1/(s + 1).
Figure 3. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 1.
The static gain is

    G(0) = (3 + k₂) / (3(1 + k₂))
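A quick numeric check of the static gain (the value k₂ = 2.5 is an arbitrary illustration):

```python
# check G(0) = (3 + k2)/(3 (1 + k2)) directly from the transfer function
def G(s, k2):
    num = s**2 + (4 + k2) * s + 3 + k2
    den = (s + 1) * (s + 3) * (s + 1 + k2)
    return num / den

k2 = 2.5
static_gain = G(0.0, k2)
formula = (3 + k2) / (3 * (1 + k2))
```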
Figure 4. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 2.
SOLUTIONS TO CHAPTER 2
2.1
The function V can be written as

    V(x₁, …, xₙ) = Σᵢ,ⱼ₌₁ⁿ xᵢ xⱼ (aᵢⱼ + aⱼᵢ)/2 + Σᵢ₌₁ⁿ bᵢ xᵢ + c

Taking the derivative with respect to xᵢ we get

    ∂V/∂xᵢ = Σⱼ₌₁ⁿ (aᵢⱼ + aⱼᵢ) xⱼ + bᵢ

In vector notation this can be written as

    grad_x V(x) = (A + Aᵀ) x + b

2.2
The model is

    y_t = φ_tᵀ θ + e_t = (u_t   u_{t−1}) (b₀   b₁)ᵀ + e_t

The least squares estimate is given as the solution of the normal equations

    θ̂ = (ΦᵀΦ)⁻¹ ΦᵀY

(a) The input is a unit step,

    u_t = 1 for t ≥ 0,  u_t = 0 otherwise

Evaluating the sums in

    ΦᵀΦ = ( Σ u_t²          Σ u_t u_{t−1} )
          ( Σ u_t u_{t−1}   Σ u_{t−1}²    )

we get

    θ̂ = ( N      N−1 )⁻¹ ( Σ_{t=1}^N y_t )  =  (  y₁                                )
        ( N−1    N−1 )    ( Σ_{t=2}^N y_t )     ( −y₁ + (1/(N−1)) Σ_{t=2}^N y_t     )

The estimation error is

    θ̂ − θ = (ΦᵀΦ)⁻¹ ΦᵀE = (  e₁                               )
                           ( −e₁ + (1/(N−1)) Σ_{t=2}^N e_t     )

Hence

    E (θ̂ − θ)(θ̂ − θ)ᵀ = (ΦᵀΦ)⁻¹ · 1 = (  1      −1        )
                                        ( −1    N/(N−1)     )

Notice that the variances of the estimates do not go to zero as N → ∞. Consider, however, the estimate of b₀ + b₁:

    E (b̂₀ + b̂₁ − b₀ − b₁)² = (1   1) (ΦᵀΦ)⁻¹ (1   1)ᵀ = 1/(N−1)

With a step input it is thus possible to determine the combination b₀ + b₁ consistently. The individual values of b₀ and b₁ can, however, not be determined consistently.

(b) The input u is white noise with E u² = 1, and u is independent of e. Then

    E u_t² = 1,   E u_t u_{t−1} = 0

and

    cov(θ̂) = 1 · E (ΦᵀΦ)⁻¹ ≈ ( N    0  )⁻¹ = ( 1/N    0       )
                              ( 0   N−1 )     ( 0     1/(N−1)  )

In this case it is thus possible to determine both parameters consistently.

2.3
The data generating process is

    y(t) = b₀ u(t) + b₁ u(t−1) + e(t) = φᵀ(t) θ⁰ + ē(t)

where

    φᵀ(t) = u(t),   θ⁰ = b₀,   ē(t) = b₁ u(t−1) + e(t)

The model is

    ŷ(t) = b̂ u(t)

or

    y(t) = b̂ u(t) + ε(t) = φᵀ(t) θ̂ + ε(t),   ε(t) = y(t) − ŷ(t)

The least squares estimate is given by

    ΦᵀΦ (θ̂ − θ⁰) = Φᵀ E_d,   E_d = (ē(1) … ē(N))ᵀ

Here

    (1/N) ΦᵀΦ = (1/N) Σ u²(t) → E u²
    (1/N) Φᵀ E_d = (1/N) Σ u(t)(b₁ u(t−1) + e(t)) → b₁ E(u(t)u(t−1)) + E(u(t)e(t))

(a) The input is a unit step,

    u(t) = 1 for t ≥ 1,  u(t) = 0 otherwise

Then E(u²) = 1, E(u(t)u(t−1)) = 1, and E(u(t)e(t)) = 0. Hence

    b̂ = θ̂ → θ⁰ + b₁ = b₀ + b₁  as N → ∞

i.e. b̂ converges to the stationary gain.

(b) The input is white noise, u(t) ∈ N(0, σ), so that E u² = σ². Then

    E u(t)u(t−1) = 0,   E u(t)e(t) = 0

Hence

    b̂ → b₀  as N → ∞

2.6
The model is

    y_t = φ_tᵀ θ + ε_t = (−y_{t−1}   u_{t−1}) (a   b)ᵀ + ε_t,   ε_t = e_t + c e_{t−1}

The least squares estimate is given by the solution to the normal equation (2.5). The estimation error is

    θ̂ − θ = (ΦᵀΦ)⁻¹ Φᵀε
           = (  Σ y²_{t−1}          −Σ y_{t−1} u_{t−1} )⁻¹ ( −Σ y_{t−1} e_t − c Σ y_{t−1} e_{t−1} )
             ( −Σ y_{t−1} u_{t−1}    Σ u²_{t−1}        )   (  Σ u_{t−1} e_t + c Σ u_{t−1} e_{t−1} )

Notice that Φᵀ and ε are not independent: u_t and e_t are independent, but y_t depends on e_t, e_{t−1}, e_{t−2}, … and on u_{t−1}, u_{t−2}, …. Taking mean values we get approximately

    E (θ̂ − θ) = E (ΦᵀΦ)⁻¹ E (Φᵀε)

To evaluate this expression we calculate

    E (  Σ y²_{t−1}          −Σ y_{t−1} u_{t−1} ) = N ( E y_t²    0      )
      ( −Σ y_{t−1} u_{t−1}    Σ u²_{t−1}        )     ( 0        E u_t²  )

and

    E ( −Σ y_{t−1} e_t − c Σ y_{t−1} e_{t−1} ) = ( −c N E y_{t−1} e_{t−1} )
      (  Σ u_{t−1} e_t + c Σ u_{t−1} e_{t−1} )   (  0                     )

where

    E y_{t−1} e_{t−1} = E (−a y_{t−2} + b u_{t−2} + e_{t−1} + c e_{t−2}) e_{t−1} = σ²

Since

    y_t = (b/(q + a)) u_t + ((q + c)/(q + a)) e_t

and E u_t² = 1, E e_t² = σ², we get

    E y_t² = (b² + (1 − 2ac + c²) σ²) / (1 − a²)

Hence

    E (â − a) = −σ² c (1 − a²) / (b² + (1 − 2ac + c²) σ²)
    E (b̂ − b) = 0
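The bias formula can be checked by simulation. This sketch uses illustrative values a = 0.5, b = 1, c = 0.5, σ = 1 (not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, sigma, N = 0.5, 1.0, 0.5, 1.0, 200_000
u = rng.standard_normal(N)
e = sigma * rng.standard_normal(N + 1)
y = np.zeros(N + 1)
for t in range(1, N + 1):
    # y_t = -a y_{t-1} + b u_{t-1} + e_t + c e_{t-1}
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t] + c * e[t - 1]

# least squares fit of y_t on (-y_{t-1}, u_{t-1})
Phi = np.column_stack([-y[1:N], u[1:N]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

bias_pred = -sigma**2 * c * (1 - a**2) / (b**2 + (1 - 2 * a * c + c**2) * sigma**2)
```

With these numbers the predicted asymptotic bias of â is about −0.21 while b̂ stays unbiased.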
2.8
The model is

    y(t) = a + bt + e(t) = φᵀθ + e(t),   φ = (1   t)ᵀ,   θ = (a   b)ᵀ

According to Theorem 2.1 the solution is given by equation (2.6), i.e.

    θ̂ = (ΦᵀΦ)⁻¹ ΦᵀY

where

    Φᵀ = ( 1  1  1  …  1 ),   Y = (y(1)   y(2)   …   y(N))ᵀ
         ( 1  2  3  …  N )

Hence

    θ̂ = ( N     Σ t  )⁻¹ ( Σ y(t)   ) = ( (2/(N(N−1))) ((2N+1) s₀ − 3 s₁)        )
        ( Σ t   Σ t² )    ( Σ t y(t) )   ( (6/(N(N+1)(N−1))) (−(N+1) s₀ + 2 s₁)  )

where we have made use of

    Σ_{t=1}^N t = N(N+1)/2,   Σ_{t=1}^N t² = N(N+1)(2N+1)/6

and introduced

    s₀ = Σ_{t=1}^N y(t),   s₁ = Σ_{t=1}^N t y(t)
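The closed-form estimate can be checked against a generic least-squares solver (the data below, a = 0.7, b = 0.3, N = 25, are an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 25
t = np.arange(1, N + 1, dtype=float)
y = 0.7 + 0.3 * t + rng.standard_normal(N)

Phi = np.column_stack([np.ones(N), t])
theta_lstsq = np.linalg.lstsq(Phi, y, rcond=None)[0]

# closed-form expressions from above
s0, s1 = y.sum(), (t * y).sum()
a_hat = 2.0 * ((2 * N + 1) * s0 - 3 * s1) / (N * (N - 1))
b_hat = 6.0 * (-(N + 1) * s0 + 2 * s1) / (N * (N + 1) * (N - 1))
```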
The covariance of the estimate is given by

    cov(θ̂) = σ² (ΦᵀΦ)⁻¹ = (12 σ² / (N(N+1)(N−1))) ( (N+1)(2N+1)/6   −(N+1)/2 )
                                                    ( −(N+1)/2         1       )

Notice that the variance of b̂ decreases as N⁻³ for large N but the variance of â decreases as N⁻¹. The reason for this is that the regressor associated with a is 1 but the regressor associated with b is t. Notice that there are better numerical methods to solve for θ̂!

2.17
(a) The following derivation gives a formula for the asymptotic LS estimate:

    b̂ = (ΦᵀΦ)⁻¹ ΦᵀY
       = ( Σ_{k=1}^N φ(k−1)² )⁻¹ ( Σ_{k=1}^N φ(k−1) ȳ(k) )
       = ( (1/N) Σ u(k−1)² )⁻¹ ( (1/N) Σ u(k−1) ȳ(k) )
       → E(u(k−1)²)⁻¹ E(u(k−1) ȳ(k)),   as N → ∞

The equations for the closed loop system are

    u(k) = K (u_c(k) − y(k))
    ȳ(k) = y(k) + a y(k−1) = b u(k−1)

The signals u(k) and y(k) are stationary. This follows since the controller gain is chosen so that the closed loop system is stable. It then follows that E(u(k−1)²) = E(u(k)²) and E(u(k−1) ȳ(k)) = E(b u(k−1)²) = b E(u(k)²) exist, and the asymptotic LS estimate becomes b̂ = b, i.e. we have an unbiased estimate.
Figure 5. The system redrawn.
(b) Similarly to (a), we get

    (1/N) Σ_{k=1}^N u(k−1)² → (u²(k−1))⁰,   (1/N) Σ_{k=1}^N u(k−1) ȳ(k) → (u(k−1) ȳ(k))⁰,   as N → ∞

where (·)⁰ denotes the stationary value of the argument. We have

    (u²(k−1))⁰ = ((u(k))⁰)²
    (u(k−1) ȳ(k))⁰ = (u(k))⁰ · b ((u(k))⁰ + d⁰)
    (u(k))⁰ = H_ud(1) d⁰ = −(K b / (1 + a + K b)) d⁰

and the asymptotic LS estimate becomes

    b̂ = ((u²(k−1))⁰)⁻¹ (u(k−1) ȳ(k))⁰ = b (1 + d⁰/(u(k))⁰) = b (1 − (1 + a + K b)/(K b)) = −(1 + a)/K
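This biased limit is easy to reproduce in simulation. A sketch with illustrative values a = 0.3, b = 1, K = 0.5, d = 1 (the closed-loop pole −(a + bK) must lie inside the unit circle):

```python
import numpy as np

a, b, K, d, N = 0.3, 1.0, 0.5, 1.0, 2000
y = np.zeros(N + 1)
u = np.zeros(N)
for k in range(N):
    u[k] = K * (0.0 - y[k])                  # u_c = 0
    y[k + 1] = -a * y[k] + b * (u[k] + d)    # constant load disturbance d

ybar = y[1:] + a * y[:-1]                    # ybar(k+1) = b (u(k) + d)
b_hat = (u @ ybar) / (u @ u)
b_asym = -(1 + a) / K                        # predicted (biased) limit, here -2.6
```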
How do we interpret this result? The system may be redrawn as in Figure 5. Since U_c = 0 we have u = −(K q/(q + a)) ȳ, so we can regard K q/(q + a) as the controller for the system in Figure 5; its static gain is K/(1 + a). It is then obvious that we have estimated the negative inverse of the static controller gain. (c) Introduction of high-pass regressor filters as in Figure 6 eliminates, or at least reduces, the influence of the disturbance d on the estimate of b. One choice of regressor filter could be H_f(q⁻¹) = 1 − q⁻¹, i.e. a differentiator. Another possibility would be to introduce a constant in
Figure 6. Introduction of regressor filters.
the regressor and then estimate both b and bd. The regression model is in this case

    ȳ(t) = (u(t−1)   1) (b   bd)ᵀ

2.18
The model is y(t) = φᵀ(t−1) θ⁰. The equations for recursive least squares are

    ε(t) = y(t) − φᵀ(t−1) θ̂(t−1)
    θ̂(t) = θ̂(t−1) + K(t) ε(t)
    K(t) = P(t) φ(t−1) = P(t−1) φ(t−1) / (λ + φᵀ(t−1) P(t−1) φ(t−1))
    P(t) = (I − K(t) φᵀ(t−1)) P(t−1) / λ

Since the quantity P(t) φ(t−1) appears in many places it is convenient to introduce it as an auxiliary variable w = Pφ. The following computer code is then obtained:

    Input u, y : real
    Parameter lambda : real
    State phi, theta : vector
          P : symmetric matrix
    Local variables w : vector, den : real

    "Compute residual
    e = y - phi^T*theta
    "Update estimate
    w = P*phi
    den = w^T*phi + lambda
    theta = theta + w*e/den
    "Update covariance matrix
    P = (P - w*w^T/den)/lambda
    "Update regression vectors
    phi = shift(phi)
    phi(1) = -y
    phi(n+1) = u
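The same update can be written directly in Python. This sketch mirrors the pseudo code above (the shift-register bookkeeping is replaced by building φ explicitly; the FIR test system y_t = 0.7u_{t−1} + 0.3u_{t−2} is an arbitrary illustration):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam."""
    e = y - phi @ theta              # residual
    w = P @ phi                      # auxiliary variable w = P*phi
    den = w @ phi + lam
    theta = theta + w * e / den      # update estimate
    P = (P - np.outer(w, w) / den) / lam   # update covariance matrix
    return theta, P

# identify y_t = 0.7 u_{t-1} + 0.3 u_{t-2} from noise-free data
rng = np.random.default_rng(3)
u = rng.standard_normal(300)
theta = np.zeros(2)
P = 100.0 * np.eye(2)
for t in range(2, 300):
    phi = np.array([u[t - 1], u[t - 2]])
    y = 0.7 * u[t - 1] + 0.3 * u[t - 2]
    theta, P = rls_step(theta, P, phi, y)
```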
SOLUTIONS TO CHAPTER 3
3.1
Given the process

    H(z) = B(z)/A(z) = (z + 1.2) / (z² − z + 0.25)

Design specification: the closed-loop system should have poles that correspond to the following characteristic polynomial in continuous time:

    s² + 2s + 1 = (s + 1)²

This corresponds to

    A_m(z) = z² + a_m1 z + a_m2

with

    a_m1 = −(e⁻¹ + e⁻¹) = −2e⁻¹,   a_m2 = e⁻²

(a) Determine an indirect STR of minimal order. The controller should have an integrator and the stationary gain should be 1.

Solution: Choose B_m such that B_m(1) = A_m(1). The integrator condition gives

    R = R′ (z − 1)

We get the following conditions:

    (1)  B T = B_m A_o
    (2)  A R + B S = A_m A_o

As B is unstable we must have B_m = B B_m′. This makes (1) read B T = B B_m′ A_o ⇔ T = B_m′ A_o. Choose B_m′ such that

    B(1) B_m′(1) = A_m(1)  ⇒  B_m′(1) = A_m(1)/B(1)

The simplest way is to choose

    B_m′ = b_m = A_m(1)/B(1)

Further we have

    (z² + a₁z + a₂)(z − 1)(z + r) + (b₀z + b₁)(s₀z² + s₁z + s₂) = (z² + a_m1 z + a_m2)(z² + a_o1 z + a_o2)

with a₁ = −1, a₂ = 0.25, and a_o1 and a_o2 chosen so that A_o is stable. Equating coefficients gives

    a₁ − 1 + r + b₀s₀ = a_o1 + a_m1
    a₂ + a₁(r − 1) − r + b₀s₁ + b₁s₀ = a_o2 + a_m1 a_o1 + a_m2
    a₂(r − 1) − a₁r + b₀s₂ + b₁s₁ = a_m1 a_o2 + a_m2 a_o1
    −a₂r + b₁s₂ = a_m2 a_o2

or

    (  1        b₀   0    0  ) ( r  )   ( a_o1 + a_m1 + 1 − a₁              )
    ( a₁ − 1    b₁   b₀   0  ) ( s₀ ) = ( a_o2 + a_m1 a_o1 + a_m2 + a₁ − a₂ )
    ( a₂ − a₁   0    b₁   b₀ ) ( s₁ )   ( a_m1 a_o2 + a_m2 a_o1 + a₂        )
    ( −a₂       0    0    b₁ ) ( s₂ )   ( a_m2 a_o2                         )

Now choose to estimate

    θ = (a₁   a₂   b₀   b₁)ᵀ

by equation 3.22 in the textbook.

(b) As H(z) is not minimum phase, the factor B⁻ = B must be cancelled between R̄ and S̄. This is difficult, see page 118 in the textbook. An indirect STR is given by Eq. 3.24:

    A_o A_m y = R̄ u + S̄ y

with R̄ = B R, S̄ = B S, T = B_m′ A_o. Furthermore we have

    A_o A_m y_m = A_o B_m u_c = A_o B B_m′ u_c = B T u_c = T̄ u_c

Filtering with 1/(A_o A_m),

    u_f = (1/(A_o A_m)) u,   y_f = (1/(A_o A_m)) y,   u_cf = (1/(A_o A_m)) u_c

gives the error

    ε = y − y_m = R̄ u_f + S̄ y_f − T̄ u_cf

Now estimate R̄, S̄, and T̄ with a recursive method in the above equation. Then cancel B and calculate the control signal.

(c) Take a = 0 in Example 5.7, page 206 in the textbook. This gives

    dt₀/dt = −γ u_c e
    ds₀/dt = γ y e

with e = y − y_m = y − G_m u_c.
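The 4 × 4 linear system in (a) can be solved numerically and checked against the polynomial identity it came from. A sketch (the deadbeat observer A_o = z² − 0.2z is an assumption; any stable A_o works):

```python
import numpy as np

# plant A = z^2 - z + 0.25, B = z + 1.2; Am from (s + 1)^2 sampled with h = 1
a1, a2 = -1.0, 0.25
b0, b1 = 1.0, 1.2
am1, am2 = -2 * np.exp(-1.0), np.exp(-2.0)
ao1, ao2 = -0.2, 0.0

M = np.array([[1.0,      b0,  0.0, 0.0],
              [a1 - 1.0, b1,  b0,  0.0],
              [a2 - a1,  0.0, b1,  b0 ],
              [-a2,      0.0, 0.0, b1 ]])
rhs = np.array([ao1 + am1 + 1 - a1,
                ao2 + am1 * ao1 + am2 + a1 - a2,
                am1 * ao2 + am2 * ao1 + a2,
                am2 * ao2])
r, s0, s1, s2 = np.linalg.solve(M, rhs)

# verify A (z - 1)(z + r) + B (s0 z^2 + s1 z + s2) == Am Ao
lhs = np.polyadd(np.polymul([1, a1, a2], np.polymul([1, -1], [1, r])),
                 np.polymul([b0, b1], [s0, s1, s2]))
target = np.polymul([1, am1, am2], [1, ao1, ao2])
```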
3.3
The process has the transfer function

    G(s) = (b/(s + a)) · (q/(s + p))

where p and q are known. The desired closed-loop system has the transfer function

    G_m(s) = ω² / (s² + 2ζωs + ω²)

Since a discrete-time controller is used, the transfer functions are sampled. We get

    H(z) = (b₀z + b₁) / (z² + a₁z + a₂)
    H_m(z) = (b_m0 z + b_m1) / (z² + a_m1 z + a_m2)

The fact that p and q are known implies that one of the poles of H is known. This information will be disregarded. In the following we will assume

    G(s) = 1/(s(s + 1))

With h = 0.2 this gives

    H(z) = 0.0187 (z + 0.936) / ((z − 1)(z − 0.819))

Furthermore we will assume that ω = 2 and ζ = 0.7.

Indirect STR: The parameters of a general second-order pulse transfer function are estimated by recursive least squares (see page 51). We have

    θ = (b₀   b₁   a₁   a₂)ᵀ
    φ(t) = (u(t−1)   u(t−2)   −y(t−1)   −y(t−2))

The controller is calculated by solving the Diophantine equation. We look at two cases.

1. B canceled:

    (z² + a₁z + a₂) · 1 + b₀(s₀z + s₁) = z² + a_m1 z + a_m2

    z¹:  a₁ + b₀s₀ = a_m1
    z⁰:  a₂ + b₀s₁ = a_m2

so

    s₀ = (a_m1 − a₁)/b₀
    s₁ = (a_m2 − a₂)/b₀
The controller is thus given by

    R(z) = z + b₁/b₀
    S(z) = s₀z + s₁
    T(z) = t₀z

where

    t₀ = (1 + a_m1 + a_m2)/b₀

2. B not canceled: The design equation becomes

    (z² + a₁z + a₂)(z + r₁) + (b₀z + b₁)(s₀z + s₁) = (z² + a_m1 z + a_m2)(z + a_o1)

Identification of coefficients of equal powers of z gives

    z²:  a₁ + r₁ + b₀s₀ = a_m1 + a_o1
    z¹:  a₂ + a₁r₁ + b₁s₀ + b₀s₁ = a_m1 a_o1 + a_m2
    z⁰:  a₂r₁ + b₁s₁ = a_m2 a_o1

The solution to these linear equations is

    r₁ = (a_o1 a_m2 b₀² − b₀b₁n₂ − b₁²n₁) / (b₀²a₂ − a₁b₀b₁ + b₁²)
    s₀ = −(n₁ + r₁)/b₀
    s₁ = (b₁n₁ + b₀n₂ − r₁(a₁b₀ − b₁)) / b₀²

where

    n₁ = a₁ − a_m1 − a_o1
    n₂ = a_m1 a_o1 + a_m2 − a₂

The solution exists if the denominator of r₁ is different from zero, which means that there is no pole-zero cancellation. It is helpful to have access to computer algebra for these problems, e.g. Macsyma, Maple or Mathematica! Figure 7 shows a simulation of the controller obtained when the polynomial B is canceled. Notice the "ringing" of the control signal, which is typical for cancellation of a poorly damped zero; in this case the zero is z = −0.936. In Figure 8 we show a simulation of the controller with no cancellation. This is clearly the way to solve the problem.

Direct STR: To obtain a direct self-tuning regulator we start with the design equation

    A R + B S = A_m A_o B⁺

Hence

    B⁺ A_m A_o y = A R y + B S y = B R u + B S y

and

    y = R (B⁻/(A_o A_m)) u + S (B⁻/(A_o A_m)) y = R u_f + S y_f
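Before continuing, the explicit case-2 formulas for r₁, s₀, and s₁ above can be verified against the design equation. A sketch for the sampled plant with ω = 2, ζ = 0.7, h = 0.2 (the deadbeat observer a_o1 = 0 is an assumption):

```python
import numpy as np

# H(z) = 0.0187 (z + 0.936)/((z - 1)(z - 0.819))
b0, b1 = 0.0187, 0.0187 * 0.936
a1, a2 = -(1 + 0.819), 0.819
h, w, z = 0.2, 2.0, 0.7
am1 = -2 * np.exp(-z * w * h) * np.cos(w * np.sqrt(1 - z**2) * h)
am2 = np.exp(-2 * z * w * h)
ao1 = 0.0

n1 = a1 - am1 - ao1
n2 = am1 * ao1 + am2 - a2
r1 = (ao1 * am2 * b0**2 - b0 * b1 * n2 - b1**2 * n1) / (b0**2 * a2 - a1 * b0 * b1 + b1**2)
s0 = -(n1 + r1) / b0
s1 = (b1 * n1 + b0 * n2 - r1 * (a1 * b0 - b1)) / b0**2

# check A (z + r1) + B (s0 z + s1) == Am Ao
lhs = np.polyadd(np.polymul([1, a1, a2], [1, r1]), np.polymul([b0, b1], [s0, s1]))
target = np.polymul([1, am1, am2], [1, ao1])
```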
Figure 7. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is canceled.
From this model R and S can be estimated. The polynomial T is then given by

    T = t₀ A_o B_m / B⁻

where t₀ is chosen to give the correct steady-state gain. Again we separate two cases.

1. Cancel B: If the polynomial B is canceled we have B⁺ = z + b₁/b₀ and B⁻ = b₀. From the analysis of the indirect STR we know that no observer is needed in this case and that the controller has the structure deg R = deg S = 1. Hence

    y(t) = R (b₀/A_m) u(t) + S (b₀/A_m) y(t)

Since b₀ is not known we include it in the polynomials R and S and estimate it; the polynomial R is then not monic. We have

    y(t) = (r₀q + r₁)(1/A_m) u(t) + (s₀q + s₁)(1/A_m) y(t)

To obtain a direct STR we thus estimate

    θ = (r₀   r₁   s₀   s₁)ᵀ
Figure 8. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is not canceled.
by RLS. The case r₀ = 0 must be taken care of separately. Furthermore, T has the form T(q) = t₀q, where

    B T / (A R + B S) = B⁺ b₀ t₀ q / (A_m B⁺ b₀) = t₀ q / A_m

To get unit steady-state gain, choose

    t₀ = A_m(1)

A simulation of the system is shown in Fig. 9. We see the typical ringing phenomenon obtained with a controller that cancels a poorly damped zero. To avoid this we will develop an algorithm where the process zero is not canceled.

2. No cancellation of process zero: We then have B⁺ = 1 and B⁻ = b₀q + b₁. From the analysis of the indirect STR we know that a first-order observer is required, i.e. A_o = q + a_o1. We have as before

    y = R (B⁻/(A_o A_m)) u + S (B⁻/(A_o A_m)) y     (*)
Figure 9. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is canceled.
Since B⁻ is not known we cannot, however, calculate u_f and y_f. One possibility is to rewrite (*) as

    y = (R B⁻)(1/(A_o A_m)) u + (S B⁻)(1/(A_o A_m)) y

and to estimate R′ = R B⁻ and S′ = S B⁻ as second-order polynomials, and then cancel the common factor B⁻ from the estimated polynomials. This is difficult because there will not be an exact cancellation. Another possibility is to use some estimate of B⁻. A third possibility is to try to estimate B⁻R and B⁻S as a bilinear problem. In Figs. 10-13 we show simulations when the model (*) is used with

    B⁻ = 1
    B⁻ = q
    B⁻ = (q + 0.4)/1.4
    B⁻ = (q − 0.4)/0.6
Figure 10. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = 1.
3.4
The process has the transfer function

    G(s) = b/(s(s + 1))

With proportional feedback

    u = k(u_c − y)

we get the closed-loop transfer function

    G_cl(s) = kb / (s² + s + kb)

The gain k = 1/b gives the desired result. Idea for an STR: estimate b and use k = 1/b̂. To estimate b, introduce

    s(s + 1) y = b u  ⇒  (s(s + 1)/(s + a)²) y = b (1/(s + a)²) u

i.e. y_f = b φ.
Figure 11. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = q.
The equations on page 56 give

    db̂/dt = P φ e = P φ (y_f − b̂ φ)
    dP/dt = α P − P φ φᵀ P = α P − P² φ²

With b̂(0) = 1, P(0) = 100, α = 0.1, and a = 1 we get the result shown in Fig. 14.
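A forward-Euler sketch of this scalar estimator. Here the state filters are replaced by a synthetic persistently exciting regressor with noise-free y_f = bφ (an assumption made to keep the sketch self-contained):

```python
import math

b_true, bhat, P, alpha, dt = 2.0, 1.0, 100.0, 0.1, 1e-3
t = 0.0
for _ in range(200_000):                      # 200 time units
    phi = math.sin(t) + 0.5 * math.sin(0.3 * t)   # synthetic regressor
    yf = b_true * phi                             # noise-free "filtered output"
    e = yf - bhat * phi
    bhat += dt * P * phi * e                      # db/dt = P*phi*e
    P += dt * (alpha * P - (P * phi) ** 2)        # dP/dt = alpha*P - P^2 phi^2
    t += dt
```

P decays from its large initial value toward the slowly varying equilibrium α/φ², while b̂ converges to the true gain.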
Figure 12. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = (q + 0.4)/1.4.
Figure 13. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = (q − 0.4)/0.6.
Figure 14. Simulation in Problem 3.4. Process output, control signal and estimated parameter b are shown for the indirect continuous-time self-tuning regulator.
SOLUTIONS TO CHAPTER 4
4.1
The estimate b̂ may be small because of a poor estimate. One possibility is to use a projection algorithm where the estimate is restricted to a given range, b₀ ≤ b̂ ≤ b₁. This requires prior information about the values of b. Another possibility is to replace 1/b̂ by

    b̂ / (b̂² + P)

where P is the variance of the estimate. Compare with the discussion of cautious control on pages 356-358.

4.10
Using (4.21) the output can be written as

    y_{t+d} = (R*/C*) u_t + (S*/C*) y_t + (terms in e up to time t + d) = r₀ u_t + f_t     (*)

where r₀ is the leading coefficient of R* and f_t does not depend on u_t. Consider minimization of

    J = y²_{t+d} + ρ u²_t     (+)

Introducing the expression (*),

    J = (r₀u_t + f_t)² + ρu²_t
      = (r₀² + ρ) u²_t + 2 r₀ u_t f_t + f_t²
      = (r₀² + ρ) (u_t + r₀ f_t/(r₀² + ρ))² + f_t² − r₀² f_t²/(r₀² + ρ)
      = (r₀² + ρ) (u_t + r₀ f_t/(r₀² + ρ))² + (ρ/(r₀² + ρ)) f_t²

Since r₀² + ρ is a constant, we find that minimizing (+) is the same as minimizing

    J₁ = y_{t+d} + (ρ/r₀) u_t = f_t + (r₀ + ρ/r₀) u_t
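The completion of squares can be checked numerically; the values r₀ = 2, ρ = 0.5, f = 1.3 below are arbitrary illustrations:

```python
# the u minimizing J = (r0*u + f)^2 + rho*u^2 is u* = -r0*f/(r0^2 + rho),
# and at u* the bracket f + (r0 + rho/r0)*u vanishes
r0, rho, f = 2.0, 0.5, 1.3

u_star = -r0 * f / (r0**2 + rho)
J = lambda u: (r0 * u + f) ** 2 + rho * u**2
J_min = J(u_star)
residual = f + (r0 + rho / r0) * u_star          # J1 evaluated at the optimum
grid_min = min(J(u_star + 0.01 * k) for k in range(-100, 101))
```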
SOLUTIONS TO CHAPTER 5
5.1
The plant is

    G(s) = 1/(s(s + a)) = B/A

The desired response is

    G_m(s) = ω²/(s² + 2ζωs + ω²) = B_m/A_m

(a) Gradient method or MIT rule. Use formulas similar to (5.9). In this case B⁺ = 1 and A_o is of first order. The regulator structure is

    R = s + r₁,   S = s₀s + s₁,   T = t₀A_o

This gives the updating rules

    dr₁/dt = γ e (1/(A_o A_m)) u
    ds₀/dt = γ e (p/(A_o A_m)) y
    ds₁/dt = γ e (1/(A_o A_m)) y
    dt₀/dt = −γ e (1/(A_o A_m)) u_c

(b) Stability theory approach 1. First derive the error equation. If all process zeros are cancelled we have

    A_o A_m y = A R y + b₀ S y = B R u + b₀ S y = b₀ (R u + S y)

Further,

    A_o A_m y_m = A_o B_m u_c = b₀ T u_c

Hence

    A_o A_m e = A_o A_m (y − y_m) = b₀ (R u + S y − T u_c)
    e = (b₀/(A_o A_m)) (R u + S y − T u_c)

Since 1/(A_o A_m) is not SPR we introduce a polynomial D such that D/(A_o A_m) is SPR. We then get

    e = (b₀ D/(A_o A_m)) (R u_f + S y_f − T u_cf)

where

    u_f = (1/D) u,   y_f = (1/D) y,   u_cf = (1/D) u_c
(c) Stability theory approach 2. In this case we assume that all states are measurable.

Process model:

    ẋ = A_p x + B_p u,   A_p = ( −a  0 ),   B_p = ( 1 )
                               (  1  0 )          ( 0 )
    y = C x,   C = (0   1)

Control law:

    u = L_r u_c − L x = θ₃ u_c − θ₁ x₁ − θ₂ x₂

The closed-loop system is

    ẋ = (A_p − B_p L) x + B_p L_r u_c = A x + B u_c,   y = C x

where

    A(θ) = A_p − B_p L = ( −a − θ₁   −θ₂ ),   B(θ) = B_p L_r = ( θ₃ )
                         (  1          0 )                     (  0 )

The desired response is given by

    ẋ_m = A_m x_m + B_m u_c

where

    A_m = ( −2ζω   −ω² ),   B_m = ( ω² )
          (  1       0 )          (  0 )

We have

    A − A_m = ( 2ζω − a − θ₁   ω² − θ₂ ),   B − B_m = ( θ₃ − ω² )
              ( 0              0        )             ( 0       )

Introduce the state error e = x − x_m. We get

    ė = ẋ − ẋ_m = A x + B u_c − A_m x_m − B_m u_c = A_m e + (A − A_m) x + (B − B_m) u_c     (*)

The error goes to zero if A_m is stable and

    A(θ) − A_m = 0     (**)
    B(θ) − B_m = 0     (+)

It is thus necessary that A(θ) and B(θ) are such that there is a θ for which (**) and (+) hold. Introduce the Lyapunov function

    V = eᵀPe + tr[(A − A_m)ᵀ Q_a (A − A_m)] + tr[(B − B_m)ᵀ Q_b (B − B_m)]

Using

    tr(A + B) = tr A + tr B,   xᵀAx = tr(x xᵀ A) = tr(A x xᵀ),   tr(AB) = tr(BA)

and (*), we get

    dV/dt = eᵀ(A_mᵀ P + P A_m) e
            + 2 tr[(A − A_m)ᵀ (Q_a Ȧ + P e xᵀ)]
            + 2 tr[(B − B_m)ᵀ (Q_b Ḃ + P e u_cᵀ)]

Hence, if the symmetric matrix P is chosen so that

    A_mᵀ P + P A_m = −Q

where Q is positive definite (this can always be done if A_m is stable!) and the parameters are updated so that

    Q_a Ȧ + P e xᵀ = 0
    Q_b Ḃ + P e u_cᵀ = 0

we get

    dV/dt = −eᵀQe

With Q_a = I and Q_b = 1 the updating laws become

    dθ₁/dt = (p₁₁e₁ + p₁₂e₂) x₁
    dθ₂/dt = (p₁₁e₁ + p₁₂e₂) x₂
    dθ₃/dt = −(p₁₁e₁ + p₁₂e₂) u_c

where e₁ = x₁ − x_m1 and e₂ = x₂ − x_m2. It now remains to determine P such that A_mᵀ P + P A_m = −Q. Choosing ζ = 0.707, ω = 2 and

    Q = ( 41.2548   11.3137 )
        ( 11.3137   16.0000 )

we get

    P = ( 4    2 )
        ( 2   16 )

The parameter update laws become

    dθ₁/dt = (4e₁ + 2e₂) x₁
    dθ₂/dt = (4e₁ + 2e₂) x₂
    dθ₃/dt = −(4e₁ + 2e₂) u_c
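The Lyapunov condition is easy to check numerically. With the state ordering used here, Q = −(A_mᵀP + PA_m) computed from P = (4 2; 2 16) differs in detail from the matrix printed above (the OCR ordering of those numbers is uncertain), but the essential property, that Q is positive definite so that V̇ = −eᵀQe < 0, can be verified:

```python
import numpy as np

zeta, w = 0.707, 2.0
Am = np.array([[-2 * zeta * w, -w**2],
               [1.0,            0.0]])
P = np.array([[4.0,  2.0],
              [2.0, 16.0]])
Q = -(Am.T @ P + P @ Am)
eigvals = np.linalg.eigvalsh(Q)       # must all be positive
```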
A simulation of the system is given in Fig. 15.

5.2
The block diagram of the system is shown in Fig. 16. The PI version of the SPR rule is

    dθ/dt = −γ₁ d(u_c e)/dt − γ₂ u_c e     (*)
Figure 15. Simulation in Problem 5.1. Top: Process and model states, x₁ (full), x_m1 (dashed), x₂ (dotted), and x_m2 (dash-dotted). Bottom: Controller parameters θ₃ (full), θ₁ (dashed), and θ₂ (dotted).
Figure 16. Block diagram in Problem 5.2.
To derive the error equation we notice that

    dy_m/dt = θ⁰ u_c,   dy/dt = θ u_c

Hence

    de/dt = (θ − θ⁰) u_c

and

    d²e/dt² = (dθ/dt) u_c + (θ − θ⁰) du_c/dt

Inserting the parameter update law (*) into this we get

    d²e/dt² = −γ₁ (e du_c/dt + u_c de/dt) u_c − γ₂ u_c² e + (θ − θ⁰) du_c/dt

Hence

    d²e/dt² + γ₁ u_c² de/dt + (γ₁ u_c du_c/dt + γ₂ u_c²) e = (θ − θ⁰) du_c/dt

Assuming that u_c is constant we get the following error equation:

    d²e/dt² + γ₁ u_c² de/dt + γ₂ u_c² e = 0

Assuming that we want this to be a second-order system with given ω and ζ, we get

    γ₁ u_c² = 2ζω  ⇒  γ₁ = 2ζω/u_c²
    γ₂ u_c² = ω²   ⇒  γ₂ = ω²/u_c²

This gives an indication of how the parameters γ₁ and γ₂ should be selected. The analysis was based on the assumption that u_c was constant. To get some insight into what happens when u_c changes we will give a simulation where u_c is a triangular wave with varying period. The adaptation gains are chosen for different ω and ζ. Figure 17 shows what happens when the period of the triangular wave is 20 and ω = 0.5, 1 and 2, corresponding to the periods 12, 6 and 3. Figure 18 shows what happens when u_c is changed more rapidly.

5.6
The transfer function is

    G(s) = (b₀s² + b₁s + b₂) / (s² + a₁s + a₂)
The transfer function has no poles and zeros in the right half plane if a₁ ≥ 0, a₂ ≥ 0, b₀ ≥ 0, b₁ ≥ 0, and b₂ ≥ 0. Consider

    G(iω) = B(iω) Ā(iω) / (A(iω) Ā(iω))

The condition Re G(iω) ≥ 0 is equivalent to Re (B(iω) Ā(iω)) ≥ 0. But

    g(ω) = Re ((−b₀ω² + iωb₁ + b₂)(−ω² − iωa₁ + a₂))
         = b₀ω⁴ + (a₁b₁ − b₀a₂ − b₂)ω² + a₂b₂
Figure 17. Simulation in Problem 5.2 for a triangular wave of period 20. Left top: Process and model outputs. Left bottom: Estimated parameter θ when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: Process and model outputs. Right bottom: Estimated parameter θ when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.
Completing the square, the function can be written as

    g(ω) = b₀ (ω² + (a₁b₁ − b₀a₂ − b₂)/(2b₀))² + a₂b₂ − (a₁b₁ − b₀a₂ − b₂)²/(4b₀)

When b₀ = 0 the condition for g to be nonnegative is that

    a₁b₁ − b₀a₂ − b₂ ≥ 0     (i)

If b₀ > 0 the function g(ω) is nonnegative for all ω if either (i) holds, or if a₁b₁ − b₀a₂ − b₂ < 0 and

    a₂b₂ > (a₁b₁ − b₀a₂ − b₂)²/(4b₀)

Example 1. Consider

    G(s) = (s² + 6s + 8) / (s² + 4s + 3)

We have a₁b₁ − b₀a₂ − b₂ = 24 − 3 − 8 = 13 > 0. Hence the transfer function G(s) is SPR.

Example 2.

    G(s) = (3s² + s + 1) / (s² + 3s + 4)
Figure 18. Simulation in Problem 5.2 for a triangular wave of period 5. Left top: Process and model outputs. Left bottom: Estimated parameter θ when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: Process and model outputs. Right bottom: Estimated parameter θ when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.
we have

    a₁b₁ − a₂b₀ − b₂ = 3 − 12 − 1 = −10 < 0

Furthermore,

    a₂b₂ = 4,   (a₁b₁ − a₂b₀ − b₂)²/(4b₀) = 100/12

Hence the transfer function G(s) is neither PR nor SPR.
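Both examples can be checked by evaluating Re G(iω) on a frequency grid:

```python
import numpy as np

def re_G(w, b, a):
    # Re G(iw) for G = (b0 s^2 + b1 s + b2)/(s^2 + a1 s + a2), s = i w
    s = 1j * w
    num = b[0] * s**2 + b[1] * s + b[2]
    den = s**2 + a[0] * s + a[1]
    return (num / den).real

w = np.linspace(0.0, 50.0, 20001)
ex1 = re_G(w, (1.0, 6.0, 8.0), (4.0, 3.0)).min()   # Example 1: stays positive
ex2 = re_G(w, (3.0, 1.0, 1.0), (3.0, 4.0)).min()   # Example 2: goes negative
```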
5.7
Consider the system

    dx/dt = A x + B₁ u,   y = C₁ x
Consider the system dx = Ax + B1 u dt y = C1 x
where B1 =
1 0 .. .
0
Let Q be positive and let P be the solution of the Lyapunov equation AT P + PA =
Q
( ∗)
Solutions to Chapter 5
Define C1 as T
C1 = B P =
p11
p12
...
p1n
35
According to the Kalman-Yacobuvich Lemma the transfer function A) 1 B1
G1 ( s) = C1 ( sI
is then positive definite. This transfer function can, however, be written as G( s) =
p11 sn 1 + p 12 sn 2 + . . . + p1n sn + a1 sn 1 + . . . + an
Since there are good numerical algorithms for solving the Lyapunov equation we can use this result to construct transfer functions that are SPR. The method is straightforward. 1. Choose A stable. 2. Solve ( *) for given Q positive. 3. Choose B as B ( s) = p11 sn 1 + p12 sn 2 + . . . + p1n 5.11 Let
us first solve the underlying design problem for systems with known parameters. This can be done using pole placement. Let the plant dynamics be

    y = (B/A) u

and let the controller be

    R u = T u_c − S y

The basic design equation is then

    A R + B S = A_m A_o     (*)

In this case we have

    A = (s + a)(s + p),   B = bq,   A_m = s² + 2ζωs + ω²

We need an observer of at least first order. The design equation (*) then becomes

    (s + a)(s + p)(s + r₁) + bq (s₀s + s₁) = (s² + 2ζωs + ω²)(s + a_o)     (+)

where A_o = s + a_o is the observer polynomial. The controller is thus of the form

    du/dt + r₁ u = t₀ u_c − s₀ y − s₁ dy/dt

It has four parameters that have to be updated: r₁, t₀, s₀, and s₁. If no prior knowledge is available we thus have to update these four parameters. When the parameter p is known it follows from the design equation (+) that there is a relation between the parameters, and it then suffices to estimate three parameters. This is particularly easy to see when the observer polynomial is chosen as A_o(s) = s + p. Putting s = −p in (+) gives

    −s₀ p + s₁ = 0

Hence

    s₁ = p s₀     (**)

In this particular case we can thus update t₀, s₀, and r₁ and compute s₁ from (**). Notice, however, that the knowledge of q is of no value, since q always appears in combination with the unknown parameter b. The equations for updating the parameters are derived in the usual way. With a_o = p, equation (+) simplifies to

    (s + a)(s + r₁) + bq s₀ = s² + 2ζωs + ω²
Introducing

    A′(s) = s + a,   S′(s) = s₀,   T′ = t₀

and computing the sensitivity derivatives of the error e = y − y_m in the usual way (the unknown gain bq is absorbed in the adaptation gain γ), the MIT rule gives

    dr₁/dt = γ e (1/((s + p)A_m)) u
    ds₀/dt = γ e (1/A_m) y
    dt₀/dt = −γ e (1/A_m) u_c

A simulation of the controller is given in Fig. 19.
′
Solutions to Chapter 5
2
Model and process output
1 0 −1 −2 0
20
40
60
80
100
40
60
80
100
Estimated parameters 2 1.5 1 0.5 0
20
Simulation in Problem 5.11. Top: Process ( full) and model ( dashed) output. Bottom: Estimated parameters r1 ( full) , s0 ( dashed) , and t0 ( dotted) . Figure 19.
5.12
The closed-loop transfer function is

    G_cl(s) = kb / (s² + s + kb)

This is compatible with the desired dynamics. The error is

    e = y − y_m = (kb/(p² + p + kb)) u_c − y_m,   p = d/dt

Hence

    ∂e/∂k = (b/(p² + p + kb)) u_c − (b²k/(p² + p + kb)²) u_c
          = (b/(p² + p + kb)) (u_c − y)
          ≈ (b/(p² + p + 1)) (u_c − y)

where the approximation holds near the desired gain kb ≈ 1. The following adjustment rule is obtained from the MIT rule:

    dk/dt = −γ′ e ∂e/∂k = −γ (1/(p² + p + 1)) (u_c − y) e,   γ = γ′ b
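A forward-Euler sketch of this adaptive loop. The values b = 2, γ = 0.5, the square-wave reference and the initial gain k(0) = 0.2 are illustrative assumptions, not from the text:

```python
import math

b, k, gamma, dt = 2.0, 0.2, 0.5, 1e-3
y = v = ym = vm = w = dw = 0.0
t = 0.0
for _ in range(200_000):                      # 200 time units
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0   # square wave reference
    u = k * (uc - y)
    e = y - ym
    y, v = y + dt * v, v + dt * (-v + b * u)             # y'' + y' = b u
    ym, vm = ym + dt * vm, vm + dt * (-vm - ym + uc)     # ym'' + ym' + ym = uc
    w, dw = w + dt * dw, dw + dt * (-dw - w + (uc - y))  # w = (uc - y)/(p^2+p+1)
    k += dt * (-gamma * w * e)                           # MIT rule
    t += dt
```

The estimated gain drifts toward the desired value 1/b = 0.5, where kb = 1 and the closed loop matches the model.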
Figure 20. Simulation in Problem 5.12. Left: Process (full) and model (dashed) output. Right: Estimated parameter k for different values of γ.
A simulation of the system is given in Fig. 20. This shows the behavior of the system when u_c is a square wave.
SOLUTIONS TO CHAPTER 6
6.1
The process is given by b
G( s) =
s( s + a)
and the regressor filter should be 1
G f ( s) =
A f ( s)
=
1 Am ( s)
=
1 s2 + 2ζ ω s + ω 2
The controller is given by s0 s + s1 t0 ( s + ao ) Y ( s) + uc ( s) s + r1 s + r1
U ( s) =
For the estimation of process parameters we use a continuous-time RLS algorithm. The process is of second order and the controller is of first order. The regressor filter is of second order and both inputs and outputs should be filtered. Hence we need seven states in ξ . The process parameters are contained in θ , and the controller parameters in ϑ . The relation between these are given by
    ϑ = [r1, s0, s1, t0]^T, where

    r1 = 2ζω + ao − â
    s0 = (2ζω ao + ω^2 − â r1)/b̂
    s1 = ω^2 ao / b̂
    t0 = ω^2 / b̂

i.e. ϑ = χ(θ).
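The mapping χ(θ) can be checked numerically. In the sketch below the values of â, b̂, ζ, ω and ao are assumptions for illustration; the controller parameters must satisfy the pole placement identity A(s)R(s) + B(s)S(s) = Am(s)(s + ao).

```python
import numpy as np

# Assumed numerical values (not from the problem text)
a_hat, b_hat, zeta, w, ao = 1.0, 2.0, 0.7, 2.0, 3.0

# Controller parameters from the mapping chi(theta) above
r1 = 2*zeta*w + ao - a_hat
s0 = (2*zeta*w*ao + w**2 - a_hat*r1) / b_hat
s1 = w**2 * ao / b_hat
t0 = w**2 / b_hat

# Check the Diophantine equation A*R + B*S = Am*(s + ao)
lhs = np.polyadd(np.polymul([1, a_hat, 0], [1, r1]),   # A*R, A = s(s + a)
                 np.polymul([b_hat], [s0, s1]))        # B*S, S = s0*s + s1
rhs = np.polymul([1, 2*zeta*w, w**2], [1, ao])         # Am*(s + ao)
assert np.allclose(lhs, rhs)
```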
To find A(ϑ ) , B (ϑ ) , C (ϑ ) and D(ϑ ) we start by finding realizations for y, y f , u f and the controller. We get
    d/dt [ẏ; y] = [−a  0; 1  0] [ẏ; y] + [b; 0] u

and

    d/dt [ẏf; yf] = [−2ζω  −ω^2; 1  0] [ẏf; yf] + [1; 0] y
    d/dt [u̇f; uf] = [−2ζω  −ω^2; 1  0] [u̇f; uf] + [1; 0] u

and the control law can be rewritten as

    u = −s0 y + t0 uc + [(−s1 + r1 s0)/(p + r1)] y + [(ao t0 − r1 t0)/(p + r1)] uc

We need one state for the controller and it can be realized as

    ẋ = −r1 x + (−s1 + r1 s0) y + (ao t0 − r1 t0) uc
    u = x − s0 y + t0 uc
Combining the states, with ξ = [ẏ, y, ẏf, yf, u̇f, uf, x]^T, results in

    dξ/dt = [ −a   −b s0         0     0      0      0      b  ]
            [  1    0            0     0      0      0      0  ]
            [  0    1         −2ζω   −ω^2     0      0      0  ]
            [  0    0            1     0      0      0      0  ]  ξ
            [  0   −s0           0     0   −2ζω    −ω^2     1  ]
            [  0    0            0     0      1      0      0  ]
            [  0   −s1 + r1 s0   0     0      0      0     −r1 ]

          + [ b t0,  0,  0,  0,  t0,  0,  ao t0 − r1 t0 ]^T uc

which defines the relation

    dξ/dt = A(ϑ)ξ + B(ϑ) uc
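A numerical sanity check of A(ϑ) is straightforward. Since the filters are cascaded copies of 1/Am driven by y and u, the spectrum of A should consist of the roots of Am (with multiplicity three) plus the observer pole −ao. The numerical values below are assumptions, not from the problem text.

```python
import numpy as np

# Assumed values for illustration
a, b, zeta, w, ao = 1.0, 2.0, 0.7, 2.0, 3.0
r1 = 2*zeta*w + ao - a
s0 = (2*zeta*w*ao + w**2 - a*r1) / b
s1 = w**2 * ao / b
t0 = w**2 / b

# State vector xi = [y', y, yf', yf, uf', uf, x]
A = np.array([
    [-a, -b*s0,         0,     0,         0,     0,   b],
    [ 1,  0,            0,     0,         0,     0,   0],
    [ 0,  1,    -2*zeta*w, -w**2,         0,     0,   0],
    [ 0,  0,            1,     0,         0,     0,   0],
    [ 0, -s0,           0,     0, -2*zeta*w, -w**2,   1],
    [ 0,  0,            0,     0,         1,     0,   0],
    [ 0, -s1 + r1*s0,   0,     0,         0,     0, -r1],
])
B = np.array([b*t0, 0, 0, 0, t0, 0, ao*t0 - r1*t0])

eigs = np.linalg.eigvals(A)
assert np.all(eigs.real < 0)                 # closed loop is stable
assert np.min(np.abs(eigs + ao)) < 1e-6      # observer pole -ao is present
# Characteristic polynomial should be Am(s)^3 * (s + ao)
am = np.array([1, 2*zeta*w, w**2])
expected = np.polymul(np.polymul(am, am), np.polymul(am, [1, ao]))
assert np.allclose(np.poly(A), expected, rtol=1e-3, atol=1e-3)
```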
Now we need to express e and ϕ in the states so that we find the C and D matrices. The estimator tries to find the parameters in

    yf = [b/(p(p + a))] uf

which is rewritten as

    p^2 yf = [−p yf   uf] [a; b] = ϕ^T θ

Clearly

    ϕ = [−ẏf; uf] = [0  0  −1  0  0  0  0; 0  0  0  0  0  1  0] ξ

and

    e = p^2 yf − ϕ^T θ̂ = −2ζω ẏf − ω^2 yf + y + â ẏf − b̂ uf
If we use the relations â = −r1 + 2ζω + ao and b̂ = ω^2/t0 then e can be written as

    e = −2ζω ẏf − ω^2 yf + y + (−r1 + 2ζω + ao) ẏf − (ω^2/t0) uf
      = [0  1  −r1 + ao  −ω^2  0  −ω^2/t0  0] ξ

Combining the expressions for e and ϕ gives

    [e; ϕ] = [0  1  −r1 + ao  −ω^2  0  −ω^2/t0  0]
             [0  0  −1         0    0   0       0] ξ = C(ϑ)ξ
             [0  0   0         0    0   1       0]
i.e. D(ϑ) = 0. As given in the problem description, the estimator is defined by

    dθ̂/dt = P ϕ e
    dP/dt = α P − P ϕ ϕ^T P

where P is a 2 × 2 matrix and e and ϕ are given above.

6.3
The averaged equations for the parameter estimates are given by (6.54) on page 303. In this particular case we have

    G(s) = a b^2 / ((s + a)(s + b)^2)
    Gm(s) = a / (s + a)
To use the averaged equations we need

    avg((Gm uc)(G uc)) = avg( v · [b^2/(p + b)^2] v )
      = [a^2 u0^2 b^2 / (2(a^2 + ω^2)(b^2 + ω^2))] cos 2ϕ
      = [a^2 u0^2 b^2 / (2(a^2 + ω^2)(b^2 + ω^2))] (2 cos^2 ϕ − 1),   cos^2 ϕ = b^2/(ω^2 + b^2)
      = a^2 b^2 u0^2 (b^2 − ω^2) / (2(a^2 + ω^2)(b^2 + ω^2)^2)

where we have introduced

    uc = u0 sin ωt,   v = [a/(p + a)] uc,   ϕ = atan(ω/b)
Similarly we have

    avg(uc (G uc)) = (u0^2/2) |G(iω)| cos(2ϕ + ϕ1)
      = (u0^2/2) · [a b^2 / (√(a^2 + ω^2)(b^2 + ω^2))] (cos 2ϕ cos ϕ1 − sin 2ϕ sin ϕ1)
      = u0^2 a b^2 (a b^2 − ω^2(a + 2b)) / (2(a^2 + ω^2)(b^2 + ω^2)^2)

where

    ϕ = atan(ω/b),   ϕ1 = atan(ω/a)
It follows from the analysis on pages 302–304 that the MIT rule gives a stable system as long as ω < b, while the stability condition for the SPR rule is

    ω < b √(a/(a + 2b))

With b = 10a we get

    ω_MIT = 10a,   ω_SPR = 2.18a
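The averaged quantities above can be verified numerically. For two sinusoids the time average is avg(A sin(ωt + α) · B sin(ωt + β)) = (AB/2) cos(α − β), which in transfer-function form gives (u0^2/2) Re[G1(iω) conj(G2(iω))]. The sketch below checks the closed-form expressions at a few frequencies with assumed values a = 1, b = 10.

```python
import numpy as np

a, b, u0 = 1.0, 10.0, 1.0

def G(s):   # plant
    return a * b**2 / ((s + a) * (s + b)**2)

def Gm(s):  # reference model
    return a / (s + a)

for w in (0.5, 2.0, 5.0, 12.0):
    s = 1j * w
    # avg((Gm uc)(G uc)): closed form from the solution
    mit = a**2 * b**2 * u0**2 * (b**2 - w**2) / (
        2 * (a**2 + w**2) * (b**2 + w**2)**2)
    assert np.isclose(u0**2 / 2 * (Gm(s) * np.conj(G(s))).real, mit)
    # avg(uc (G uc)): closed form from the solution
    spr = u0**2 * a * b**2 * (a * b**2 - w**2 * (a + 2*b)) / (
        2 * (a**2 + w**2) * (b**2 + w**2)**2)
    assert np.isclose(u0**2 / 2 * G(s).real, spr)

# SPR stability limit: omega < b*sqrt(a/(a + 2b)) = 2.18*a for b = 10a
w_spr = b * np.sqrt(a / (a + 2*b))
```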
6.10
The adaptive system was designed for a process with the transfer function

    Ĝ(s) = b/(s + a)    (1)

The controller has the structure

    u = θ1 uc − θ2 y    (2)
The desired response is

    Gm(s) = bm/(s + am)    (3)

Combining (1) and (2) gives the closed-loop transfer function

    Gcl = b θ1 / (s + a + b θ2)

Equating this with Gm(s) given by (3) gives

    b θ1 = bm
    a + b θ2 = am
If these equations are solved for θ1 and θ2 we obtain the controller parameters that give the desired closed-loop system. Conversely, if the equations are solved for a and b we obtain the parameters of the process model that corresponds to given controller parameters. This gives

    a = am − bm θ2/θ1
    b = bm/θ1
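These relations are easy to sanity check: pick process and model parameters, compute θ1 and θ2, and recover a and b. All numerical values below are assumptions for illustration.

```python
# Round-trip check of the relations above (assumed illustrative numbers)
a, b = 1.0, 2.0        # process parameters
am, bm = 3.0, 3.0      # desired model parameters

theta1 = bm / b                # from b*theta1 = bm
theta2 = (am - a) / b          # from a + b*theta2 = am

a_back = am - bm * theta2 / theta1
b_back = bm / theta1
assert abs(a_back - a) < 1e-12 and abs(b_back - b) < 1e-12
```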
The parameters a and b can thus be interpreted as the parameters of the model the controller believes in. Inserting the expressions for θ1 and θ2 from page 318 we get

    a(ω) = (229 − 31ω^2)/(259 − ω^2)
    b(ω) = 458/(259 − ω^2)    (+)

When

    ω = √(229/31) = 2.7179

we get a(ω) = 0. The reason for this is that the value of the plant transfer function

    G(s) = 458/((s + 1)(s^2 + 30s + 229))    (∗)

is G(2.7179i) = −0.6697i. The transfer function of the plant is thus purely imaginary. The only way to obtain a purely imaginary value of the transfer function

    Ĝ = b/(s + a)    (∗∗)

is to make a = 0. Also notice that b(2.7179) = 1.8203, which gives Ĝ(2.7179i) = −0.6697i. When ω = √259 = 16.09 we get infinite values of a and b. Notice that G(i√259) = −0.0587, which is real and negative. The only way to make Ĝ(iω) negative and real is to have infinitely large values