EE221A Linear System Theory Problem Set 1 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 9/1; Due 9/8
Problem 1: Functions. Consider f : R^3 → R^3, defined as

    f(x) = Ax,   A = [ 1  0  1 ]
                     [ 0  0  0 ] ,   x ∈ R^3
                     [ 0  1  1 ]
Is f a function? Is it injective? Is it surjective? Justify your answers.

Problem 2: Fields. (a) Use the axioms of the field to show that, in any field, the additive identity and the multiplicative identity are unique. (b) Is GLn, the set of all n × n nonsingular matrices, a field? Justify your answer.

Problem 3: Vector Spaces. (a) Show that (R^n, R), the set of all ordered n-tuples of elements from the field of real numbers R, is a vector space.

(b) Show that the set of all polynomials in s of degree k or less with real coefficients is a vector space over the field R. Find a basis. What is the dimension of the vector space?

Problem 4: Subspaces. Suppose U1, U2, . . . , Um are subspaces of a vector space V. The sum of U1, U2, . . . , Um, denoted U1 + U2 + . . . + Um, is defined to be the set of all possible sums of elements of U1, U2, . . . , Um:

    U1 + U2 + . . . + Um = {u1 + u2 + . . . + um : u1 ∈ U1, . . . , um ∈ Um}

(a) Is U1 + U2 + . . . + Um a subspace of V? (b) Prove or give a counterexample: if U1, U2, W are subspaces of V such that U1 + W = U2 + W, then U1 = U2.

Problem 5: Subspaces. Consider the space F of all functions f : R+ → R which have a Laplace transform f̂(s) = ∫0^∞ f(t) e^{−st} dt defined for all Re(s) > 0. For some fixed s0 in the right half plane, is {f ∈ F : f̂(s0) = 0} a subspace of F?
Problem 6: Linear Independence. Let V be the set of 2-tuples whose entries are complex-valued rational functions. Consider two vectors in V:

    v1 = [ 1/(s+1) ]        v2 = [ (s+2)/((s+1)(s+3)) ]
         [ 1/(s+2) ] ,           [ 1/(s+3)            ]

Is the set {v1, v2} linearly independent over the field of rational functions? Is it linearly independent over the field of real numbers?

Problem 7: Bases. Let U be the subspace of R^5 defined by

    U = {[x1, x2, . . . , x5]^T ∈ R^5 : x1 = 3x2 and x3 = 7x4}

Find a basis for U.

Problem 8: Bases. Prove that if {v1, v2, . . . , vn} is linearly independent in V, then so is the set {v1 − v2, v2 − v3, . . . , v_{n−1} − vn, vn}.
EE221A Problem Set 1 Solutions - Fall 2011
Note: these solutions are somewhat more terse than what we expect you to turn in; the important thing is that you communicate the main idea of the solution.

Problem 1. Functions. It is a function; matrix multiplication is well defined. Not injective; it is easy to find a counterexample with f(x1) = f(x2) but x1 ≠ x2. Not surjective; suppose x = (x1, x2, x3)^T. Then f(x) = (x1 + x3, 0, x2 + x3)^T, so the range of f is not the whole codomain.

Problem 2. Fields. a) Suppose 0 and 0' are both additive identities. Then 0 = 0 + 0' = 0'. Suppose 1 and 1' are both multiplicative identities. Consider, for x ≠ 0, x · 1 = x = x · 1'; premultiply by x^{−1} to see that 1 = 1'.

b) We are not given what the operations + and · are, but we can assume at least that + is componentwise addition. The identity matrix I is nonsingular, so I ∈ GLn. But I + (−I) = 0 is singular, so GLn is not closed under addition and cannot be a field.

Problem 3. Vector Spaces. a) This is the most familiar kind of vector space; all the vector space axioms can be verified directly. b) First write a general vector as x(s) = ak s^k + a_{k−1} s^{k−1} + · · · + a1 s + a0. Associativity and commutativity are easy to show (just look at the operations componentwise). The additive identity is the zero polynomial (a0 = a1 = · · · = ak = 0) and the additive inverse just has each coefficient negated. The axioms of scalar multiplication are similarly easy to show, as are the distributive laws. A natural basis is B := {1, s, s², . . . , s^k}. It spans the space (we can write a general x(s) as a linear combination of the basis elements), and it is linearly independent, since only a0 = a1 = · · · = ak = 0 solves ak s^k + a_{k−1} s^{k−1} + · · · + a1 s + a0 = 0. The dimension of the vector space is thus the cardinality of B, which is k + 1.

Problem 4. Subspaces. a) Yes, it is a subspace. First, U1 + · · · + Um is a subset of V, since its elements are sums of vectors in subspaces (hence also subsets) of V, and since V is a vector space, those sums are also in V. Also, for u = u1 + · · · + um and v = v1 + · · · + vm with uk, vk ∈ Uk, a linear combination will be of the form

    αu + βv = (αu1 + βv1) + · · · + (αum + βvm) = w1 + · · · + wm ∈ U1 + · · · + Um,

where wk := αuk + βvk ∈ Uk. b) Counterexample: take U1 = {0} and U2 = W ≠ {0}. Then U1 + W = W = U2 + W, but U1 ≠ U2.

Problem 5. Subspaces. If we assume that S = {f ∈ F : f̂(s0) = 0} is a subset of F, then all that must be shown is closure under linear combinations. Let f, g ∈ S and α, β ∈ R. Then
    L(αf + βg)(s) = ∫0^∞ [αf(t) + βg(t)] e^{−st} dt
                  = α ∫0^∞ f(t) e^{−st} dt + β ∫0^∞ g(t) e^{−st} dt
                  = α f̂(s) + β ĝ(s),

and thus we have closure, since α f̂(s0) + β ĝ(s0) = α · 0 + β · 0 = 0. If on the other hand we do not assume S ⊂ F, then one could construct a counterexample: a transfer function with a zero at s0 and a pole somewhere else in the RHP will be in S but not in F; f(t) := e^{s0 t} cos(bt) is one such counterexample.

Problem 6. Linear Independence. a) Linearly dependent: take α = (s + 3)/(s + 2); then v1 = αv2. b) Linearly independent: let α, β ∈ R. Then αv1 + βv2 = 0 requires α(s + 3) + β(s + 2) = 0 for all s (clear the denominators in the first component), which requires α = β = 0.

Problem 7. Bases. B := {b1, b2, b3} = { [1, 1/3, 0, 0, 0]^T, [0, 0, 1, 1/7, 0]^T, [0, 0, 0, 0, 1]^T } is a basis. The vectors are linearly independent by inspection, and they span U, since for every u ∈ U we can find a1, a2, a3 such that u = a1 b1 + a2 b2 + a3 b3.

Problem 8. Bases. Form the usual linear combination equalling zero:
    α1 (v1 − v2) + α2 (v2 − v3) + · · · + α_{n−1} (v_{n−1} − vn) + αn vn = 0
    ⟺ α1 v1 + (α2 − α1) v2 + · · · + (α_{n−1} − α_{n−2}) v_{n−1} + (αn − α_{n−1}) vn = 0

Now, since {v1, . . . , vn} is linearly independent, this requires that α1 = 0, then α2 − α1 = α2 = 0, and so on, up to αn − α_{n−1} = αn = 0. Thus the new set is also linearly independent.
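A quick numerical cross-check of Problem 8 (a sketch using exact rational arithmetic; the `rank` helper and the sample vectors are ours, not part of the solutions): differencing an independent set as in the problem preserves independence.

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# An independent set {v1, ..., v4} in R^4 (one vector per row).
V = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
# The differenced set {v1 - v2, v2 - v3, v3 - v4, v4} from Problem 8.
W = [[a - b for a, b in zip(V[i], V[i + 1])] for i in range(3)] + [V[3]]
assert rank(V) == 4 and rank(W) == 4  # both sets have full rank
```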
EE221A Linear System Theory Problem Set 2 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 9/8; Due 9/16
All answers must be justified.
Problem 1: Linearity. Are the following maps A linear?

(a) A(u(t)) = u(−t) for u(t) a scalar function of time

(b) How about y(t) = A(u(t)) = ∫0^t e^{−σ} u(t − σ) dσ?

(c) How about the map A : as² + bs + c → ∫0^s (bt + a) dt from the space of polynomials with real coefficients to itself?
Problem 2: Nullspace of linear maps. Consider a linear map A. Prove that N(A) is a subspace.

Problem 3: Linearity. Given A, B, C, X ∈ C^{n×n}, determine if the following maps (involving matrix multiplication) from C^{n×n} → C^{n×n} are linear:

1. X → AX + XB
2. X → AX + BXC
3. X → AX + XBX

Problem 4: Solutions to linear equations (this was part of Professor El Ghaoui's prelim question last year). Consider the set S = {x : Ax = b} where A ∈ R^{m×n}, b ∈ R^m are given. What is the dimension of S? Does it depend on b?
Problem 5: Rank-Nullity Theorem. Let A be a linear map from U to V with dim U = n and dim V = m. Show that

    dim R(A) + dim N(A) = n

Problem 6: Representation of a Linear Map. Let A : (U, F) → (V, F) with dim U = n and dim V = m be a linear map with rank(A) = k. Show that there exist bases (ui)_{i=1}^n and (vj)_{j=1}^m of U, V respectively such that with respect to these bases A is represented by the block diagonal matrix

    A = [ I  0 ]
        [ 0  0 ]

What are the dimensions of the different blocks?

Problem 7: Sylvester's Inequality. In class, we've discussed the Range of a linear map, defining the rank of the map as the dimension of its range. Since all linear maps between finite dimensional vector spaces can be represented as matrix multiplication, the rank of such a linear map is the same as the rank of its matrix representation. Given A ∈ R^{m×n} and B ∈ R^{n×p}, show that

    rank(A) + rank(B) − n ≤ rank(AB) ≤ min[rank(A), rank(B)]
EE221A Problem Set 2 Solutions - Fall 2011
Problem 1. Linearity.

a) Linear: A(u(t) + v(t)) = u(−t) + v(−t) = A(u(t)) + A(v(t)).

b) Linear:

    A(u(t) + v(t)) = ∫0^t e^{−σ} (u(t − σ) + v(t − σ)) dσ
                   = ∫0^t e^{−σ} u(t − σ) dσ + ∫0^t e^{−σ} v(t − σ) dσ
                   = A(u(t)) + A(v(t))

c) Linear:

    A((a1 s² + b1 s + c1) + (a2 s² + b2 s + c2)) = ∫0^s ((b1 + b2)t + (a1 + a2)) dt
        = ∫0^s (b1 t + a1) dt + ∫0^s (b2 t + a2) dt
        = A(a1 s² + b1 s + c1) + A(a2 s² + b2 s + c2)

(Homogeneity can be checked the same way in each case.)

Problem 2. Nullspace of linear maps. Assume that A : U → V and that U is a vector space over the
field F. N(A) := {x ∈ U : A(x) = θ_V}. So by definition N(A) ⊆ U. Let x, y ∈ N(A) and α, β ∈ F. Then A(αx + βy) = αA(x) + βA(y) = α · θ_V + β · θ_V = θ_V. So N(A) is closed under linear combinations and is a subset of U; therefore it is a subspace of U.

Problem 3. Linearity. Write each map as 𝒜, to distinguish it from the matrix A.

i) Linear: 𝒜(X + Y) = A(X + Y) + (X + Y)B = AX + AY + XB + YB = (AX + XB) + (AY + YB) = 𝒜(X) + 𝒜(Y).

ii) Linear: 𝒜(X + Y) = A(X + Y) + B(X + Y)C = AX + AY + BXC + BYC = (AX + BXC) + (AY + BYC) = 𝒜(X) + 𝒜(Y).

iii) Nonlinear:

    𝒜(X + Y) = A(X + Y) + (X + Y)B(X + Y)
             = AX + AY + XBX + XBY + YBX + YBY
             = 𝒜(X) + 𝒜(Y) + XBY + YBX ≠ 𝒜(X) + 𝒜(Y) in general.

Problem 4. Solutions to linear equations. If b ∉ R(A), then there are no solutions: S = ∅ (note ∅ ≠ {0}; dim S is then 0, −1, or undefined depending on convention, though 0 is somewhat less preferable since it would make sense to reserve zero for the dimension of a singleton set). If b ∈ R(A), then A(x + z) = b for any x ∈ S, z ∈ N(A), so dim S = dim N(A).
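A concrete instance of the failure of additivity in Problem 3(iii) (a sketch; the particular 2×2 matrices are arbitrary choices of ours): the cross terms XBY + YBX do not vanish.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def L(X, A, B):
    """The map of Problem 3(iii): X -> AX + XBX."""
    return madd(matmul(A, X), matmul(matmul(X, B), X))

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
X = [[1, 0], [0, 0]]
Y = [[0, 0], [0, 1]]
# Additivity fails: L(X + Y) != L(X) + L(Y) because of the cross terms.
assert L(madd(X, Y), A, B) != madd(L(X, A, B), L(Y, A, B))
```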
Lemma. Let A : U → V be linear with dim U = n, let {uj}_{j=k+1}^n be a basis for N(A), and complete it to a basis {uj}_{j=1}^n for U (using the theorem of the incomplete basis). Then S = {A(uj)}_{j=1}^k is a basis for R(A).

Proof. R(A) = {A(u) : u ∈ U} = {A(Σ_{j=1}^n aj uj) : aj ∈ F} = {Σ_{j=1}^k aj A(uj) : aj ∈ F}, so S spans R(A). Now suppose S were not linearly independent, so that a1 A(u1) + · · · + ak A(uk) = 0 with aj ≠ 0 for some j. Then by linearity A(a1 u1 + · · · + ak uk) = 0, so a1 u1 + · · · + ak uk ∈ N(A). Since {uj}_{j=1}^n is a basis for U and {uj}_{j=k+1}^n is a basis for N(A), we must have a1 u1 + · · · + ak uk = 0, a contradiction. Thus S is linearly independent and spans R(A), so it is a basis for R(A).
Problem 5. Rank-Nullity Theorem. The theorem follows directly from the above lemma.
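The rank-nullity theorem can also be illustrated numerically (a sketch; the sample matrix and the `rank` helper are ours): for a 3×3 matrix of rank 2, the nullspace is one-dimensional, so dim R(A) + dim N(A) = 3.

```python
from fractions import Fraction

def rank(M):
    """Rank via exact Gaussian elimination (rows of M are the matrix rows)."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]   # third row = first + second
assert rank(A) == 2
# x spans N(A): A x = 0, and rank 2 forces dim N(A) = 3 - 2 = 1
x = [1, -2, 1]
assert all(sum(a * b for a, b in zip(row, x)) == 0 for row in A)
assert rank(A) + 1 == 3                  # dim R(A) + dim N(A) = n
```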
Problem 6. Representation of a Linear Map. We have from the rank-nullity theorem that dim N(A) = n − k. Let {ui}_{i=k+1}^n be a basis of N(A). Then A(ui) = θ_V for all i = k + 1, . . . , n. Since the zero vector has all its coordinates zero in any basis, this implies that the last n − k columns of A are zero. Now it remains to show that we can complete the basis for U and choose a basis for V such that the first k columns are as desired. But the lemma above gives us what we need. The form of the matrix A tells us that we want the i-th basis vector of V to be A(ui), for i = 1, . . . , k. So let the basis for U be B_U = {ui}_{i=1}^n (where the last n − k basis vectors are a basis for N(A) and the first k are arbitrarily chosen to complete the basis), and the basis for V be B_V = {vi}_{i=1}^m, where the first k basis vectors are defined by vi = A(ui) and the remaining m − k are arbitrarily chosen (but we know we can find them by the theorem of the incomplete basis). Thus the block sizes are as follows:

    A = [ I_{k×k}       0_{k×(n−k)}     ]
        [ 0_{(m−k)×k}   0_{(m−k)×(n−k)} ]
Problem 7. Sylvester's Inequality.

Let U = R^p, V = R^n, W = R^m, so B : U → V and A : V → W. Define A|R(B) : R(B) → W : v → Av, "A restricted in domain to the range of B". Clearly R(AB) = R(A|R(B)). Rank-nullity gives dim R(A|R(B)) + dim N(A|R(B)) = dim R(B), so dim R(AB) ≤ dim R(B). Now R(A|R(B)) ⊆ R(A) implies dim R(AB) = dim R(A|R(B)) ≤ dim R(A). We now have one of the inequalities: dim R(AB) ≤ min{dim R(A), dim R(B)}. Next, N(A|R(B)) ⊆ N(A) implies dim N(A|R(B)) ≤ dim N(A), so by rank-nullity, dim R(A|R(B)) + dim N(A) ≥ dim R(B) = rank(B). Finally, by rank-nullity again, dim N(A) = n − rank(A). So we have rank(AB) + n − rank(A) ≥ rank(B). Rearranging this gives the other inequality we are looking for.
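Sylvester's inequality is easy to spot-check in exact arithmetic (a sketch; the example matrices and the helpers are ours, not from the text):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(M):
    """Rank via exact Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0, 0], [0, 1, 0], [1, 1, 0]]   # A in R^{3x3}, rank 2; n = 3
B = [[1, 1], [0, 0], [0, 0]]            # B in R^{3x2}, rank 1
rA, rB, rAB = rank(A), rank(B), rank(matmul(A, B))
assert (rA, rB, rAB) == (2, 1, 1)
assert rA + rB - 3 <= rAB <= min(rA, rB)  # Sylvester's inequality
```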
EE221A Linear System Theory Problem Set 3 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 9/22; Due 9/30
Problem 1. Let A : R^3 → R^3 be a linear map. Consider two bases for R^3: E = {e1, e2, e3}, the standard basis for R^3, and

    B = { [1]   [2]   [0] }
        { [0] , [0] , [5] }
        { [2]   [1]   [1] }

Now suppose that:

    A(e1) = [  2 ]       A(e2) = [ 0 ]       A(e3) = [ 0 ]
            [ −1 ] ,             [ 0 ] ,             [ 4 ]
            [  0 ]               [ 0 ]               [ 2 ]

Write down the matrix representation of A with respect to (a) E and (b) B.

Problem 2: Representation of a Linear Map. Let A be a linear map of the n-dimensional linear space (V, F) onto itself. Assume that for some λ ∈ F and basis (vi)_{i=1}^n we have

    A vk = λ vk + v_{k+1},   k = 1, . . . , n − 1,

and

    A vn = λ vn.

Obtain a representation of A with respect to this basis.

Problem 3: Norms. Show that for x ∈ R^n,

    (1/√n) ||x||_1 ≤ ||x||_2 ≤ ||x||_1.

Problem 4. Prove that the induced matrix norm satisfies

    ||A||_{1,i} = max_{j ∈ {1,...,n}} Σ_{i=1}^m |aij|.

Problem 5. Consider an inner product space V, with x, y ∈ V. Show, using properties of the inner product, that

    ||x + y||² + ||x − y||² = 2||x||² + 2||y||²,

where ||·|| is the norm induced by the inner product.

Problem 6. Consider an inner product space (C^n, C), equipped with the standard inner product in C^n, and a map A : C^n → C^n which consists of matrix multiplication by an n × n matrix A. Find the adjoint of A.

Problem 7: Continuity and Linearity. Show that any linear map between finite dimensional vector spaces is continuous.
EE221A Problem Set 3 Solutions - Fall 2011
Problem 1.

a) A w.r.t. the standard basis is, by inspection (its columns are A(e1), A(e2), A(e3)),

    AE = [  2  0  0 ]
         [ −1  0  4 ]
         [  0  0  2 ]

b) Now consider the diagram from LN3, p. 8. We are dealing with exactly this situation; we have one matrix representation and two bases, but we are using them in both the domain and the codomain, so we have all the ingredients. The matrices P and Q for the similarity transform in this case are

    P = [e1 e2 e3]^{−1} [b1 b2 b3] = [b1 b2 b3],

since the matrix formed from the E basis vectors is just the identity, and

    Q = [b1 b2 b3]^{−1} [e1 e2 e3] = [b1 b2 b3]^{−1} = P^{−1}.

Let AB be the matrix representation of A w.r.t. B. From the diagram, we have

    AB = Q AE P = P^{−1} AE P
       = [1 2 0]^{−1} [  2  0  0 ] [1 2 0]
         [0 0 5]      [ −1  0  4 ] [0 0 5]
         [2 1 1]      [  0  0  2 ] [2 1 1]

       = (1/15) [ 16  −4  12 ]
                [  7  32  −6 ]
                [ 21   6  12 ]
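The arithmetic above can be verified without ever inverting P, by checking the equivalent identity P·AB = AE·P in exact arithmetic (a sketch; the `matmul` helper is ours):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

AE = [[2, 0, 0], [-1, 0, 4], [0, 0, 2]]
P = [[1, 2, 0], [0, 0, 5], [2, 1, 1]]   # columns are b1, b2, b3
AB = [[Fraction(v, 15) for v in row]
      for row in [[16, -4, 12], [7, 32, -6], [21, 6, 12]]]
# AB = P^{-1} AE P  is equivalent to  P AB = AE P
assert matmul(P, AB) == matmul(AE, P)
```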
Problem 2. Representation of a linear map. This is straightforward from the definition of matrix representation (the k-th column holds the coordinates of A vk): λ on the diagonal, 1 on the subdiagonal,

    A = [ λ  0  0  ⋯  0 ]
        [ 1  λ  0  ⋯  0 ]
        [ 0  1  λ  ⋱  ⋮ ]
        [ ⋮  ⋱  ⋱  ⋱  0 ]
        [ 0  ⋯  0  1  λ ]

Problem 3. Norms.
Proof. 1st inequality: Consider the Cauchy-Schwarz inequality,

    (Σ_{i=1}^n xi yi)² ≤ (Σ_{i=1}^n xi²)(Σ_{i=1}^n yi²).

Now apply it to (|x1|, . . . , |xn|) and y = 1 (the vector of all ones). Then we have ||x||_1² ≤ n ||x||_2², which is equivalent to the first inequality.

2nd inequality: Note that ||x||_2 ≤ ||x||_1 ⟺ ||x||_2² ≤ ||x||_1². Consider that

    ||x||_2² = |x1|² + · · · + |xn|²,

while

    ||x||_1² = (|x1| + · · · + |xn|)²
             = |x1|² + |x1||x2| + · · · + |x1||xn| + |x2|² + |x2||x1| + · · · + |xn||x_{n−1}| + |xn|²
             = ||x||_2² + (nonnegative cross terms),

showing the second inequality.
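A numerical spot-check of the two inequalities (a sketch; the sample vectors are arbitrary choices of ours):

```python
import math

def norm1(x):
    return sum(abs(v) for v in x)

def norm2(x):
    return math.sqrt(sum(v * v for v in x))

for x in ([3, -4], [1, 1, 1, 1], [2, 0, -1, 5, 0.5]):
    n = len(x)
    # (1/sqrt(n)) ||x||_1 <= ||x||_2 <= ||x||_1  (tolerance for roundoff)
    assert norm1(x) / math.sqrt(n) <= norm2(x) + 1e-12
    assert norm2(x) <= norm1(x) + 1e-12
```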
Problem 4. Proof. First note that the problem implies that A ∈ F^{m×n}. By definition,

    ||A||_{1,i} = sup_{u ≠ 0} ||Au||_1 / ||u||_1.

Consider

    ||Au||_1 = || Σ_{j=1}^n Aj uj ||_1,

where Aj and uj represent the j-th column of A and the j-th component of u, respectively. Then ||Au||_1 ≤ Σ_{j=1}^n ||Aj||_1 |uj|. Let ||Amax||_1 be the maximum column 1-norm; that is,

    ||Amax||_1 = max_{j ∈ {1,...,n}} Σ_{i=1}^m |aij|.

Then

    ||Au||_1 ≤ Σ_{j=1}^n ||Amax||_1 |uj| = ||Amax||_1 Σ_{j=1}^n |uj| = ||Amax||_1 ||u||_1,

so we have ||Au||_1 / ||u||_1 ≤ ||Amax||_1. Now it remains to find a u such that equality holds. Choose û = (0, . . . , 1, . . . , 0)^T, where the 1 is in the k-th component, chosen such that Aû pulls out a column of A having the maximum 1-norm. Note that ||û||_1 = 1, and we see then that

    ||Aû||_1 / ||û||_1 = ||Amax||_1.

Thus in this case the supremum is achieved and we have the desired result.

Problem 5. Proof. Straightforward; we simply use properties of the inner product at each step:
    ||x + y||² + ||x − y||² = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩
        = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩ + ⟨x, x⟩ − ⟨x, y⟩ − ⟨y, x⟩ + ⟨y, y⟩
        = 2⟨x, x⟩ + 2⟨y, y⟩
        = 2||x||² + 2||y||²
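The parallelogram law can be spot-checked with the standard inner product on C² (a sketch; the vectors are arbitrary choices of ours):

```python
# Standard inner product on C^2: <x, y> = sum conj(x_i) y_i.
def inner(x, y):
    return sum(a.conjugate() * b for a, b in zip(x, y))

def nsq(x):
    """||x||^2 = <x, x> (a real number)."""
    return inner(x, x).real

x = [1 + 2j, -3j]
y = [2 - 1j, 4 + 1j]
xp = [a + b for a, b in zip(x, y)]   # x + y
xm = [a - b for a, b in zip(x, y)]   # x - y
# ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2
assert abs(nsq(xp) + nsq(xm) - 2 * nsq(x) - 2 * nsq(y)) < 1e-12
```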
Problem 6. We will show that the adjoint map A∗ : C^n → C^n is given by matrix multiplication by the complex conjugate transpose of A. Initially we will use the notation A^a for the matrix representation of the adjoint of A, and reserve the notation v∗ for the complex conjugate transpose of v. First, we know that we can represent A (w.r.t. the standard basis of C^n) by a matrix in C^{n×n}; call this matrix A. Then we can use the defining property of the adjoint to write

    ⟨Au, v⟩ = ⟨u, A^a v⟩
    (Au)∗ v = u∗ A^a v
    u∗ A∗ v = u∗ A^a v.

Now, this must hold for all u, v ∈ C^n. Choose u = ei, v = ej (where ek is a vector that is all zeros except for 1 in the k-th entry). This will give

    (A∗)ij = (A^a)ij

for all i, j ∈ {1, . . . , n}. Thus A^a = A∗; it is no accident that we use the ∗ notation for both adjoints and complex conjugate transpose.
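A numerical check of the defining property ⟨Au, v⟩ = ⟨u, A∗v⟩ (a sketch; the sample matrix and vectors are ours):

```python
def inner(x, y):                 # standard inner product: x* y
    return sum(a.conjugate() * b for a, b in zip(x, y))

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def ctranspose(A):
    """Complex conjugate transpose of a square matrix."""
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

A = [[1 + 1j, 2], [0, 3 - 2j]]
u = [1, 1j]
v = [2 - 1j, -1]
# <Au, v> = <u, A* v>, with A* the conjugate transpose
lhs = inner(matvec(A, u), v)
rhs = inner(u, matvec(ctranspose(A), v))
assert abs(lhs - rhs) < 1e-12
```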
Problem 7. Continuity and Linearity.

Proof. Let A : (U, F) → (V, F) with dim U = n and dim V = m be a linear map. Let x, y ∈ U, x ≠ y, and z = x − y. Since A is a linear map between finite dimensional vector spaces, we can represent it by a matrix A. Now, the induced norm

    ||A||_i := sup_{z ∈ U, z ≠ 0} ||Az|| / ||z||

gives ||Az|| ≤ ||A||_i ||z||. Given some ε > 0, let

    δ = ε / ||A||_i.

So

    ||x − y|| = ||z|| < δ  ⟹  ||Az|| ≤ ||A||_i ||z|| < ||A||_i δ = ε,

and we have continuity. Alternatively, we can also use the induced matrix norm to show Lipschitz continuity:

    ∀x, y ∈ U,  ||Ax − Ay|| < K ||x − y||, where K > ||A||_i,

which shows that the map is Lipschitz continuous, and thus is continuous (LC ⟹ C; note that the reverse implication is not true!).
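The column-sum formula of Problem 4, and the bound ||Az||_1 ≤ ||A||_{1,i} ||z||_1 that the continuity argument relies on, can be checked numerically (a sketch; the example matrix and test vectors are ours):

```python
def norm1(x):
    return sum(abs(v) for v in x)

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A = [[1, -2], [3, 0], [-1, 4]]
# column-sum formula: ||A||_{1,i} = max_j sum_i |a_ij| = max(5, 6) = 6
normA = max(sum(abs(row[j]) for row in A) for j in range(2))
assert normA == 6
# the bound ||Au||_1 <= ||A||_{1,i} ||u||_1 ...
for u in ([1, 2], [-3, 1], [0.5, -0.25]):
    assert norm1(matvec(A, u)) <= normA * norm1(u) + 1e-12
# ... with equality at the basis vector that picks out the max column
assert norm1(matvec(A, [0, 1])) == normA
```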
EE221A Linear System Theory Problem Set 4 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 9/30; Due 10/7
Problem 1: Existence and uniqueness of solutions to differential equations. Consider the following two systems of differential equations:

    ẋ1 = −x1 + e^t cos(x1 − x2)
    ẋ2 = −x2 + 15 sin(x1 − x2)

and

    ẋ1 = −x1 + x1 x2
    ẋ2 = −x2

(a) Do they satisfy a global Lipschitz condition? (b) For the second system, your friend asserts that the solutions are uniquely defined for all possible initial conditions and they all tend to zero for all initial conditions. Do you agree or disagree?

Problem 2: Existence and uniqueness of solutions to linear differential equations. Let A(t) and B(t) be respectively n × n and n × ni matrices whose elements are real (or complex) valued piecewise continuous functions on R+. Let u(·) be a piecewise continuous function from R+ to R^{ni}. Show that for any fixed u(·), the differential equation

    ẋ(t) = A(t)x(t) + B(t)u(t)    (1)

satisfies the conditions of the Fundamental Theorem.

Problem 3: Local or global Lipschitz condition. Consider the pendulum equation with friction and constant input torque:

    ẋ1 = x2
    ẋ2 = −(g/l) sin x1 − (k/m) x2 + T/(ml²)    (2)

where x1 is the angle that the pendulum makes with the vertical, x2 is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. For this system (represented as ẋ = f(x)), find whether f is locally Lipschitz in x on Br = {x ∈ R² : ||x|| < r} for sufficiently small r, locally Lipschitz in x on Br for any finite r, or globally Lipschitz in x (i.e. Lipschitz for all x ∈ R²).

Problem 4: Local or global Lipschitz condition. Consider the scalar differential equation ẋ = x² for x ∈ R, with x(t0) = x0 = c, where c is a constant. (a) Is this system locally or globally Lipschitz? (b) Solve this scalar differential equation directly (using methods from undergraduate calculus) and discuss the existence of this solution (for all t ∈ R, and for c both non-zero and zero).

Problem 5: Perturbed nonlinear systems. Suppose that some physical system obeys the differential equation

    ẋ = p(x, t), x(t0) = x0, ∀t ≥ t0

where p(·, ·) obeys the conditions of the fundamental theorem. Suppose that as a result of some perturbation the equation becomes

    ż = p(z, t) + f(t), z(t0) = x0 + δx0, ∀t ≥ t0

Given that for t ∈ [t0, t0 + T], ||f(t)|| ≤ ε1 and ||δx0|| ≤ ε0, find a bound on ||x(t) − z(t)|| valid on [t0, t0 + T].
EE221A Problem Set 4 Solutions - Fall 2011
Problem 1. Existence and uniqueness of solutions to differential equations.

Call the first system ẋ = f(x, t) and the second one ẋ = g(x).

a) Construct the Jacobians:

    D1 f(x, t) = [ −1 − e^t sin(x1 − x2)    e^t sin(x1 − x2)     ]
                 [ 15 cos(x1 − x2)          −1 − 15 cos(x1 − x2) ]

    Dg(x) = [ −1 + x2   x1 ]
            [  0        −1 ]

D1 f(x, t) is bounded for all x, and f(x, t) is continuous in x, so f is globally Lipschitz continuous in x. But while Dg(x) is continuous, it is unbounded (consider the (1,1) entry as x2 → ∞, or the (1,2) entry as x1 → ∞), so g is not globally LC.

b) Agree. Note that x2 does not depend on x1; its equation satisfies the conditions of the Fundamental Theorem, and one can directly find the (unique, by the FT) solution x2(t) = x2(0) e^{−t} → 0 as t → ∞. This solution for x2 can be substituted into the first equation to get

    ẋ1 = −x1 + x1 x2(0) e^{−t} = x1 (x2(0) e^{−t} − 1),

which again satisfies the conditions of the Fundamental Theorem, and can be solved to find the unique solution

    x1(t) = x1(0) exp((1 − e^{−t}) x2(0) − t),

which also tends to zero as t → ∞, for any x1(0), x2(0).
Problem 2. Existence and uniqueness of solutions to linear differential equations.

The FT requires: i) a differential equation ẋ = f(x, t); ii) an initial condition x(t0) = x0; iii) f(x, t) piecewise continuous (PC) in t; iv) f(x, t) Lipschitz continuous (LC) in x.

We clearly have i), f(x, t) = A(t)x(t) + B(t)u(t), and any IC will do for ii). We are given that A(t), B(t), u(t) are PC in t, so clearly f is also. It remains to be shown that f is LC in x. This is easily shown:

    ||f(x, t) − f(y, t)|| = ||A(t)(x − y)|| ≤ ||A(t)||_i ||x − y||.

Let k(t) := ||A(t)||_i. Since A(t) is PC and norms are continuous, k(t) is PC. Thus f is LC in x, so all the conditions of the FT are satisfied.

Problem 3. Local or global Lipschitz condition.
Construct the Jacobian:

    Df = [ 0                1    ]
         [ −(g/l) cos x1   −k/m  ]

This is bounded for all x, so the system is globally LC in x.
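The analytic Jacobian can be cross-checked by central finite differences at a sample point (a sketch; the parameter values are arbitrary choices of ours):

```python
import math

g, l, k, m, T = 9.8, 1.0, 0.5, 2.0, 0.0   # sample parameters (arbitrary)

def f(x):
    """Pendulum vector field: [x2, -(g/l) sin x1 - (k/m) x2 + T/(m l^2)]."""
    return [x[1], -(g / l) * math.sin(x[0]) - (k / m) * x[1] + T / (m * l**2)]

def jac_fd(x, h=1e-6):
    """Finite-difference Jacobian of f at x."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

x = [0.7, -0.3]
J = jac_fd(x)
# analytic Jacobian: [[0, 1], [-(g/l) cos x1, -k/m]]
assert abs(J[0][0]) < 1e-5 and abs(J[0][1] - 1) < 1e-5
assert abs(J[1][0] + (g / l) * math.cos(x[0])) < 1e-4
assert abs(J[1][1] + k / m) < 1e-5
```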
Problem 4. Local or global Lipschitz condition.

a) It is only locally LC, since the derivative (2x) is unbounded over x ∈ R.

b) The equation is solved by x(t) = c/(1 − c(t − t0)) for c ≠ 0. (For c = 0, the solution is simply x(t) ≡ 0, defined on all of R.) We can see that x(t0) = c (the initial condition is satisfied) and

    ẋ(t) = c²/(1 − c(t − t0))² = (x(t))²

(the differential equation is satisfied). However, this solution is not defined on all of R; consider the solution value as t → t0 + 1/c.
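The finite escape time is easy to see numerically from the closed-form solution (a sketch; exact rationals avoid roundoff near the blow-up):

```python
from fractions import Fraction

def x(t, c, t0=0):
    """Exact solution x(t) = c / (1 - c (t - t0)) of xdot = x^2, x(t0) = c."""
    return c / (1 - c * (t - t0))

c = Fraction(1)
# the solution blows up as t -> t0 + 1/c = 1:
assert x(Fraction(9, 10), c) == 10
assert x(Fraction(99, 100), c) == 100
# and the ODE holds: the analytic derivative c^2/(1 - c(t - t0))^2 equals x^2
t = Fraction(1, 2)
assert c**2 / (1 - c * t) ** 2 == x(t, c) ** 2
```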
Problem 5. Perturbed nonlinear systems. Let φ be a solution of ẋ = p(x, t), x(t0) = x0, and ψ be a solution of ż = p(z, t) + f(t), z(t0) = x0 + δx0. Then we have

    φ(t) = x0 + ∫_{t0}^t p(φ(σ), σ) dσ,
    ψ(t) = x0 + δx0 + ∫_{t0}^t [p(ψ(σ), σ) + f(σ)] dσ,

so

    ||φ(t) − ψ(t)|| = || δx0 + ∫_{t0}^t [p(φ(σ), σ) − p(ψ(σ), σ) − f(σ)] dσ ||
        ≤ ||δx0|| + ∫_{t0}^t ||f(σ)|| dσ + ∫_{t0}^t ||p(φ(σ), σ) − p(ψ(σ), σ)|| dσ
        ≤ ε0 + ε1(t − t0) + ∫_{t0}^t K(σ) ||φ(σ) − ψ(σ)|| dσ,

where K(·) is the (piecewise continuous) Lipschitz function of p. Now, identify u(t) := ||φ(t) − ψ(t)||, k(t) := K(t), c1 := ε0 + ε1(t − t0), and apply Bellman-Gronwall to get

    ||φ(t) − ψ(t)|| ≤ (ε0 + ε1(t − t0)) exp(∫_{t0}^t K(σ) dσ).

Now take K̄ := sup_{σ ∈ [t0, t0+T]} K(σ); then

    ||φ(t) − ψ(t)|| ≤ (ε0 + ε1(t − t0)) exp(∫_{t0}^t K̄ dσ) = (ε0 + ε1(t − t0)) exp(K̄ (t − t0)).
EE221A Linear System Theory Problem Set 5 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 10/18; Due 10/27
Problem 1: Dynamical systems, time invariance.

Suppose that the output of a system is represented by

    y(t) = ∫_{−∞}^t e^{−(t−τ)} u(τ) dτ

Show that it is (i) a dynamical system, and that it is (ii) time invariant. You may select the input space U to be the set of bounded, piecewise continuous, real-valued functions defined on (−∞, ∞).

Problem 2: Jacobian Linearization I. Consider the now familiar pendulum equation with friction and constant input torque:

    ẋ1 = x2
    ẋ2 = −(g/l) sin x1 − (k/m) x2 + T/(ml²)    (1)

where x1 is the angle that the pendulum makes with the vertical, x2 is the angular rate of change, m is the mass of the bob, l is the length of the pendulum, k is the friction coefficient, and T is a constant torque. Considering T as the input to this system, derive the Jacobian linearized system which represents an approximate model for small angular motion about the vertical.

Problem 3: Satellite Problem, linearization, state space model.

Model the earth and a satellite as particles. The normalized equations of motion, in an earth-fixed inertial frame, simplified to 2 dimensions (from Lagrange's equations of motion, with the Lagrangian L = T − V = (1/2)ṙ² + (1/2)r²θ̇² + k/r), are:

    r̈ = rθ̇² − k/r² + u1
    θ̈ = −2θ̇ṙ/r + (1/r)u2

with u1, u2 representing the radial and tangential forces due to thrusters. The reference orbit with u1 = u2 = 0 is circular, with r(t) ≡ p and θ(t) = ωt. From the first equation it follows that p³ω² = k. Obtain the linearized equation about this orbit. (How many state variables are there?)

Problem 4: Solution of a matrix differential equation.
Let A1(·), A2(·), and F(·) be known piecewise continuous n × n matrices. Let Φi be the transition matrix of ẋ = Ai(t)x, for i = 1, 2. Show that the solution of the matrix differential equation

    Ẋ(t) = A1(t)X(t) + X(t)A2′(t) + F(t),   X(t0) = X0

is

    X(t) = Φ1(t, t0) X0 Φ2′(t, t0) + ∫_{t0}^t Φ1(t, τ) F(τ) Φ2′(t, τ) dτ.

Problem 5: State Transition Matrix, calculations.
Calculate the state transition matrix for ẋ(t) = A(t)x(t), with the following A(t):

    (a) A(t) = [ −1   2 ]     (b) A(t) = [ −2t   1 ]     (c) A(t) = [  0      ω(t) ]
               [  0  −3 ]                [  0   −1 ]                [ −ω(t)   0    ]

Hint: for part (c) above, let Ω(t) = ∫0^t ω(t′) dt′, and consider the matrix

    [  cos Ω(t)   sin Ω(t) ]
    [ −sin Ω(t)   cos Ω(t) ]
Problem 6: State transition matrix is invertible.

Consider the matrix differential equation Ẋ(t) = A(t)X(t). Show that if there exists a t0 such that det X(t0) ≠ 0, then det X(t) ≠ 0, ∀t ≥ t0. HINT: One way to do this is by contradiction. Assume that there exists some t∗ for which det X(t∗) = 0, find a non-zero vector k in N(X(t∗)), and consider the solution x(t) := X(t)k of the vector differential equation ẋ(t) = A(t)x(t).
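For Problem 5(a), a candidate transition matrix can be verified numerically before writing up the derivation; the Φ below is our candidate (not given in the text), checked against Φ(0) = I and a finite-difference test of Φ̇ = AΦ:

```python
import math

A = [[-1, 2], [0, -3]]

def Phi(t):
    """Candidate transition matrix for part (a) (our guess, to be verified)."""
    return [[math.exp(-t), math.exp(-t) - math.exp(-3 * t)],
            [0.0, math.exp(-3 * t)]]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Phi(0) = I
assert Phi(0) == [[1.0, 0.0], [0.0, 1.0]]
# finite-difference check of Phidot = A Phi at a sample time
t, h = 0.7, 1e-6
D = [[(Phi(t + h)[i][j] - Phi(t - h)[i][j]) / (2 * h) for j in range(2)]
     for i in range(2)]
AP = matmul(A, Phi(t))
assert all(abs(D[i][j] - AP[i][j]) < 1e-6 for i in range(2) for j in range(2))
```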
EE221A Problem Set 5 Solutions - Fall 2011
Problem 1. Dynamical systems, time invariance.

i) To show that this is a dynamical system we have to identify all the ingredients. First we need a differential equation of the form ẋ = f(x, u, t): let x(t) = y(t) (so h(x, u, t) = f(x, u, t)) and differentiate (using Leibniz) the given integral equation to get

    (d/dt) x(t) = −x(t) + u(t).

This is a linear time invariant dynamical system by inspection (it is of the form ẋ(t) = Ax(t) + Bu(t)), but we can show the axioms. First let us call the system D = (U, Σ, Y, s, r). The time domain is T = R. The input space U is as specified in the problem; the state space Σ and output space Y are identical and are R. The state transition function is

    s(t, t0, x0, u) = x(t) = e^{−(t−t0)} x0 + ∫_{t0}^t e^{−(t−τ)} u(τ) dτ

and the readout function is

    r(t, x(t), u(t)) = y(t) = x(t).

Now to show the axioms. The state transition axiom is easy to prove, since u(·) only enters the state transition function within the integral, where it is only evaluated on [t0, t1] (where t0 and t1 are the limits of the integral). For the semigroup axiom, let s(t1, t0, x0, u) = x(t1) be as defined above. Then

    s(t2, t1, s(t1, t0, x0, u), u)
        = e^{−(t2−t1)} ( e^{−(t1−t0)} x0 + ∫_{t0}^{t1} e^{−(t1−τ)} u(τ) dτ ) + ∫_{t1}^{t2} e^{−(t2−τ)} u(τ) dτ
        = e^{−(t2−t0)} x0 + ∫_{t0}^{t1} e^{−(t2−τ)} u(τ) dτ + ∫_{t1}^{t2} e^{−(t2−τ)} u(τ) dτ
        = e^{−(t2−t0)} x0 + ∫_{t0}^{t2} e^{−(t2−τ)} u(τ) dτ
        = s(t2, t0, x0, u),

for all t0 ≤ t1 ≤ t2, as required.

ii) To show that this dynamical system is time invariant, we need to show that the space of inputs is closed under the time shift operator Tτ (it is: clearly, if u(t) ∈ U, then u(t − τ) ∈ U). Then we need to check that

    ρ(t1, t0, x0, u) = e^{−(t1−t0)} x0 + ∫_{t0}^{t1} e^{−(t1−σ)} u(σ) dσ
        = e^{−((t1+τ)−(t0+τ))} x0 + ∫_{t0+τ}^{t1+τ} e^{−(t1+τ−σ)} u(σ − τ) dσ
        = ρ(t1 + τ, t0 + τ, x0, Tτ u).

Problem 2. Jacobian Linearization I. Let x := [x1, x2]^T. We are given

    (d/dt) x = f(x, u) = [ x2                        ]   [ 0       ]
                         [ −(g/l) sin x1 − (k/m) x2 ] + [ 1/(ml²) ] u
Note that at the desired equilibrium, the equation for ẋ2 implies that the nominal torque input is zero, so u0 = 0. The Jacobian (w.r.t. x) evaluated at x0 = [0, 0]^T is

    D1 f(x, u)|_{x0,u0} = [ 0               1    ]                   = [ 0      1    ]
                          [ −(g/l) cos x1  −k/m  ] |_{x=x0, u=u0}      [ −g/l   −k/m ]

We can see by inspection that

    D2 f(x, u) = D2 f(x, u)|_{x=x0, u=u0} = [ 0       ]
                                            [ 1/(ml²) ]

So the linearized system is

    δẋ(t) = [ 0      1    ] δx + [ 0       ] δu
             [ −g/l   −k/m ]      [ 1/(ml²) ]
(Note: If you assumed based on the wording of the question that the torque was held constant for the linearized system, i.e. δu ≡ 0, then this will also be accepted) Problem Problem 3. Satellite Satellite Problem, Problem, lineariza linearization tion,, state space model. Write as a first-order system: x1 = r, x2 = r, ˙ x3 = θ, x4 = θ˙ . In these variables the equations of motion are,
d/dt [ x1 ; x2 ; x3 ; x4 ] = [ x2 ; x1 x4^2 − k/x1^2 + u1 ; x4 ; −2 x2 x4 / x1 + (1/x1) u2 ].
The reference orbit has x1 = p, x2 = 0, x3 = ωt, x4 = ω, with u1 = u2 = 0, i.e. x0 = [p, 0, ωt, ω]^T, u0 = [0, 0]^T. Let u = u0 + δu, which produces the trajectory x = x0 + δx, and take δx(t0) = 0. So

ẋ = ẋ0 + δẋ = f(x0 + δx, u0 + δu).

We can write this in a Taylor series approximation:

ẋ0 + δẋ = f(x0 + δx, u0 + δu) = f(x0, u0) + D1 f(x, u)|_{x0,u0} · δx + D2 f(x, u)|_{x0,u0} · δu + h.o.t.
δẋ = D1 f(x, u)|_{x0,u0} · δx + D2 f(x, u)|_{x0,u0} · δu

where

D1 f(x, u)|_{x0,u0} = [ 0  1  0  0 ; x4^2 + 2k/x1^3  0  0  2 x1 x4 ; 0  0  0  1 ; 2 x2 x4/x1^2 − u2/x1^2  −2 x4/x1  0  −2 x2/x1 ]|_{x0,u0} = [ 0  1  0  0 ; 3ω^2  0  0  2pω ; 0  0  0  1 ; 0  −2ω/p  0  0 ]

(using k = p^3 ω^2, which follows from the equilibrium condition p ω^2 = k/p^2), and

D2 f(x, u)|_{x0,u0} = [ 0  0 ; 1  0 ; 0  0 ; 0  1/x1 ]|_{x0,u0} = [ 0  0 ; 1  0 ; 0  0 ; 0  1/p ].

Problem 4. Solution of a matrix differential equation.

Proof. First check that the initial condition is satisfied:

X(t0) = Φ1(t0, t0) X0 Φ2(t0, t0) + ∫_{t0}^{t0} Φ1(t0, τ) F(τ) Φ2(t0, τ) dτ = X0.

Now check that the differential equation is satisfied (taking appropriate care of differentiation under the integral sign):

(d/dt) X(t) = A1(t)Φ1(t, t0) X0 Φ2(t, t0) + Φ1(t, t0) X0 Φ2(t, t0) A2(t) + (d/dt) ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2(t, τ) dτ
= A1(t)Φ1(t, t0) X0 Φ2(t, t0) + Φ1(t, t0) X0 Φ2(t, t0) A2(t) + Φ1(t, t) F(t) Φ2(t, t) + ∫_{t0}^{t} (d/dt)( Φ1(t, τ) F(τ) Φ2(t, τ) ) dτ
= A1(t)Φ1(t, t0) X0 Φ2(t, t0) + Φ1(t, t0) X0 Φ2(t, t0) A2(t) + F(t) + ∫_{t0}^{t} ( A1(t)Φ1(t, τ) F(τ) Φ2(t, τ) + Φ1(t, τ) F(τ) Φ2(t, τ) A2(t) ) dτ
= A1(t) [ Φ1(t, t0) X0 Φ2(t, t0) + ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2(t, τ) dτ ] + [ Φ1(t, t0) X0 Φ2(t, t0) + ∫_{t0}^{t} Φ1(t, τ) F(τ) Φ2(t, τ) dτ ] A2(t) + F(t)
= A1(t) X(t) + X(t) A2(t) + F(t).
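The Problem 4 formula can also be spot-checked numerically in the constant-coefficient case, where Φi(t, τ) = e^{Ai(t−τ)}; the matrices below are arbitrary choices, not taken from the problem set:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Xdot = A1 X + X A2 + F, X(0) = X0, constant coefficients (arbitrary values).
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]])
A2 = np.array([[-1.0, 0.5], [0.0, -2.0]])
F = np.eye(2)
X0 = np.array([[1.0, 2.0], [0.0, 1.0]])

def X_closed(t, n=4001):
    # Phi1(t,0) X0 Phi2(t,0) + int_0^t Phi1(t,tau) F Phi2(t,tau) dtau (trapezoid rule)
    taus = np.linspace(0.0, t, n)
    vals = np.array([expm(A1 * (t - tau)) @ F @ expm(A2 * (t - tau)) for tau in taus])
    h = taus[1] - taus[0]
    integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return expm(A1 * t) @ X0 @ expm(A2 * t) + integral

def rhs(t, xvec):
    X = xvec.reshape(2, 2)
    return (A1 @ X + X @ A2 + F).ravel()

sol = solve_ivp(rhs, (0.0, 1.0), X0.ravel(), rtol=1e-10, atol=1e-12)
X_num = sol.y[:, -1].reshape(2, 2)
X_cf = X_closed(1.0)
assert np.allclose(X_cf, X_num, atol=1e-5)
```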
Problem 5. State transition matrix, calculations.

(a)

Φ(t, 0) = e^{At} = L^{−1}[ (sI − A)^{−1} ] = L^{−1}[ [ s+1  0 ; −2  s+3 ]^{−1} ] = L^{−1}[ (1/((s+1)(s+3))) [ s+3  0 ; 2  s+1 ] ] = [ e^{−t}  0 ; e^{−t} − e^{−3t}  e^{−3t} ]

Thus,

Φ(t, t0) = Φ(t − t0, 0) = [ e^{−(t−t0)}  0 ; e^{−(t−t0)} − e^{−3(t−t0)}  e^{−3(t−t0)} ].
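As a sanity check of (a) (not part of the original solutions), scipy's matrix exponential agrees with the closed-form Φ(t, 0), reading A = [ −1 0 ; 2 −3 ] off the sI − A above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0], [2.0, -3.0]])
t = 0.8
Phi_closed = np.array([
    [np.exp(-t), 0.0],
    [np.exp(-t) - np.exp(-3 * t), np.exp(-3 * t)],
])
E = expm(A * t)
assert np.allclose(E, Phi_closed)
```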
(b) Here our approach will be to directly solve the system of equations. Let x(t) = [x1(t), x2(t)]^T. Then we have ẋ1(t) = −2t x1(t). Recall from undergrad (or if not, from section 8) that the solution to the linear homogeneous equation ẋ(t) = a(t)x(t) with initial condition x(t0) is x(t) = e^{∫_{t0}^{t} a(s)ds} x(t0). In this case that gives

x1(t) = x1(t0) e^{∫_{t0}^{t} −2s ds} = x1(t0) exp(−s^2 |_{t0}^{t}) = x1(t0) exp(−t^2 + t0^2) = e^{−(t^2−t0^2)} x1(t0).

We also have ẋ2(t) = x1(t) − x2(t) = x1(t0) e^{−(t^2−t0^2)} − x2(t). This can be considered a linear time-invariant system (d/dt) x2(t) = −x2(t) + u(t), with state x2 and input u(t) = x1(t0) e^{−(t^2−t0^2)}, with solution x2(t) = e^{−(t−t0)} x2(t0) + x1(t0) ∫_{t0}^{t} e^{−(t−τ)} e^{−(τ^2−t0^2)} dτ. We can now write down the s.t.m.,

Φ(t, t0) = [ e^{−(t^2−t0^2)}  0 ; ∫_{t0}^{t} e^{−(t−τ)} e^{−(τ^2−t0^2)} dτ  e^{−(t−t0)} ].
(c) Let Ω(t, t0) = ∫_{t0}^{t} ω(τ)dτ. Guess that

Φ(t, t0) = [ cos Ω(t, t0)  sin Ω(t, t0) ; −sin Ω(t, t0)  cos Ω(t, t0) ].

This is the s.t.m. if it satisfies the matrix d.e. Ẋ(t) = A(t)X(t) with X(t0) = I. Note that Ω(t0, t0) = 0, so X(t0) = Φ(t0, t0) = I. First notice that (d/dt) Ω(t, t0) = (d/dt) ∫_{t0}^{t} ω(τ)dτ = ω(t). Now look at the derivative,

(d/dt) Φ(t, t0) = [ −sin Ω(t, t0) ω(t)  cos Ω(t, t0) ω(t) ; −cos Ω(t, t0) ω(t)  −sin Ω(t, t0) ω(t) ]
= ω(t) [ −sin Ω(t, t0)  cos Ω(t, t0) ; −cos Ω(t, t0)  −sin Ω(t, t0) ]
= [ 0  ω(t) ; −ω(t)  0 ] [ cos Ω(t, t0)  sin Ω(t, t0) ; −sin Ω(t, t0)  cos Ω(t, t0) ]
= A(t) Φ(t, t0).
Problem 6. State transition matrix is invertible.

Proof. By contradiction: suppose that there exists t* such that X(t*) is singular; this means that there exists k ≠ θ with X(t*)k = θ. Now let x(t) := X(t)k. Then we have that ẋ(t) = Ẋ(t)k = A(t)X(t)k = A(t)x(t), and x(t*) = X(t*)k = θ. This has the unique solution x(t) ≡ θ, for all t. But in particular this implies that x(t0) = X(t0)k = θ, which implies that X(t0) is singular, i.e. det X(t0) = 0, giving our contradiction.
EE221A Linear System Theory Problem Set 6 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 10/27; Due 11/4
Problem 1: Linear systems.

Using the definitions of linearity and time-invariance discussed in class, show that:
(a) ẋ = A(t)x + B(t)u, y = C(t)x + D(t)u, x(t0) = x0 is linear;
(b) ẋ = Ax + Bu, y = Cx + Du, x(0) = x0 is time invariant (it's clearly linear, from the above).
Here, the matrices in the above are as defined in class for multiple input multiple output systems.

Problem 2: A linear time-invariant system.
Consider a single-input, single-output, time invariant linear state equation

ẋ(t) = Ax(t) + bu(t), x(0) = x0   (1)
y(t) = cx(t)   (2)
If the nominal input is a non-zero constant, u(t) = ū, under what conditions does there exist a constant nominal solution x(t) = x0, for some x0? Under what conditions is the corresponding nominal output zero? Under what conditions do there exist constant nominal solutions that satisfy ȳ = ū for all ū?

Problem 3: Sampled Data System
You are given a linear, time-invariant system

ẋ = Ax + Bu   (3)

which is sampled every T seconds. Denote x(kT) by x(k). Further, the input u is held constant between kT and (k + 1)T, that is, u(t) = u(k) for t ∈ [kT, (k + 1)T]. Derive the state equation for the sampled data system, that is, give a formula for x(k + 1) in terms of x(k) and u(k).

Problem 4: Discrete time linear system solution.
Consider the discrete time linear system:

x(k + 1) = Ax(k) + Bu(k)   (4)
y(k) = Cx(k) + Du(k)   (5)

Here, k ∈ ℕ, A ∈ R^{n×n}, B ∈ R^{n×ni}, C ∈ R^{no×n}, D ∈ R^{no×ni}. Use induction to obtain formulae for y(k), x(k) in terms of x(k0) and the input sequence (u_{k0}, . . . , u_{k−1}).
Problem 5: Linear Quadratic Regulator.

Consider the system described by the equations ẋ = Ax + Bu, y = Cx, where

A = [ 0  1 ; 0  0 ], B = [ 0 ; 1 ], C = [1  0]

(a) Determine the optimal control u*(t) = F*x(t), t ≥ 0 which minimizes the performance index J = ∫_{0}^{∞} ( y^2(t) + ρ u^2(t) ) dt, where ρ is positive and real.
(b) Observe how the eigenvalues of the dynamic matrix of the resulting closed loop system change as a function of ρ. Can you comment on the results?
Problem 6. Preservation of Eigenvalues under Similarity Transform.

Consider a matrix A ∈ R^{n×n}, and a non-singular matrix P ∈ R^{n×n}. Show that the eigenvalues of Ā = PAP^{−1} are the same as those of A.

Remark: This important fact in linear algebra is the basis for the similarity transform: a redefinition of the state (to a new set of state variables in which the equations above may have a simpler representation) does not affect the eigenvalues of the A matrix, and thus the stability of the system. We will use this similarity transform in our analysis of linear systems.

Problem 7.

Using the dyadic expansion discussed in class (Lecture Notes 12), determine e^{At} for square, diagonalizable A (and show your work).
EE221A Problem Set 6 Solutions - Fall 2011
Problem 1. Linear systems.

a) Call this dynamical system L = (U, Σ, Y, s, r), where U = R^{ni}, Σ = R^n, Y = R^{no}. So clearly U, Σ, Y are all linear spaces over the same field (R). We also have the response map

ρ(t, t0, x0, u) = y(t) = C(t)x(t) + D(t)u(t)

and the state transition function

s(t, t0, x0, u) = x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ.

We need to check the linearity of the response map; we have that, ∀t ≥ t0, t ∈ R+:

ρ(t, t0, α1 x1 + α2 x2, α1 u1 + α2 u2)
= C(t) [ Φ(t, t0)(α1 x1 + α2 x2) + ∫_{t0}^{t} Φ(t, τ) B(τ) (α1 u1(τ) + α2 u2(τ)) dτ ] + D(t)(α1 u1(t) + α2 u2(t))
= α1 [ C(t)Φ(t, t0) x1 + C(t) ∫_{t0}^{t} Φ(t, τ) B(τ) u1(τ) dτ + D(t) u1(t) ] + α2 [ C(t)Φ(t, t0) x2 + C(t) ∫_{t0}^{t} Φ(t, τ) B(τ) u2(τ) dτ + D(t) u2(t) ]
= α1 ρ(t, t0, x1, u1) + α2 ρ(t, t0, x2, u2).

b) Using the definition of time-invariance for dynamical systems, check:

ρ(t1 + τ, t0 + τ, x0, T_τ u) = C x(t1 + τ) + D (T_τ u)(t1 + τ)
= C [ e^{A(t1+τ−(t0+τ))} x0 + ∫_{t0+τ}^{t1+τ} e^{A(t1+τ−σ)} B u(σ − τ) dσ ] + D u(t1)
= C e^{A(t1−t0)} x0 + C ∫_{t0}^{t1} e^{A(t1−s)} B u(s) ds + D u(t1)
= ρ(t1, t0, x0, u).

Problem 2. A linear time-invariant system.
a) The solution is constant exactly when ẋ(t) = 0, so 0 = A x0 + b ū. Such an x0 exists iff −b ū ∈ R(A) ⟺ b ∈ R(A) (since ū ≠ 0).

b) For the output to be zero, we also need y(t) = c x0 = 0. We can write both conditions as

[ A ; c ] x0 = [ −b ū ; 0 ] = ū [ −b ; 0 ],

which is equivalent to [ −b ; 0 ] ∈ R([ A ; c ]).

c) Now we must have ū = c x0. Similar to the above analysis, this leads to

[ A ; c ] x0 = [ −b ū ; ū ] = ū [ −b ; 1 ],

and such an x0 will exist whenever [ −b ; 1 ] ∈ R([ A ; c ]).

Problem 3. Sampled Data System.
To prevent confusion between the continuous time system and its discretization, we will use the notation x [k] := x(kT ), u [k ] := u(kT ) in the following:
x[k + 1] = x((k + 1)T) = e^{A((k+1)T − kT)} x(kT) + ∫_{kT}^{(k+1)T} e^{A((k+1)T − τ)} B u(τ) dτ
= e^{AT} x[k] + ∫_{kT}^{(k+1)T} e^{A((k+1)T − τ)} dτ B u[k]

Now, make the change of variables σ = (k + 1)T − τ in the integral, to get

x[k + 1] = e^{AT} x[k] + ∫_{0}^{T} e^{Aσ} dσ B u[k] = A_d x[k] + B_d u[k], where A_d := e^{AT}, B_d := ∫_{0}^{T} e^{Aσ} dσ B.
Remark. This is known as the 'exact discretization' of the original continuous-time system. If A is invertible, then consider (with the usual disclaimer about 'proceeding formally' where the infinite series is concerned),

∫_{0}^{T} e^{Aσ} dσ = ∫_{0}^{T} ( I + Aσ + (1/2) A^2 σ^2 + ··· ) dσ
= I ∫_{0}^{T} dσ + A ∫_{0}^{T} σ dσ + (1/2) A^2 ∫_{0}^{T} σ^2 dσ + ···
= IT + (1/2) A T^2 + (1/3!) A^2 T^3 + ···
= A^{−1} ( AT + (1/2) A^2 T^2 + (1/3!) A^3 T^3 + ··· )
= A^{−1} ( e^{AT} − I ).

So in this case we have A_d = e^{AT}, B_d = A^{−1} ( e^{AT} − I ) B.
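A numerical spot-check of B_d (the matrices are arbitrary examples, not from the problem set). The second computation of ∫_0^T e^{Aσ}dσ B exponentiates the augmented matrix [ A B ; 0 0 ], which also works when A is singular:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # invertible, so A^{-1}(e^{AT}-I)B applies
B = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(A * T)
Bd_closed = np.linalg.solve(A, Ad - np.eye(2)) @ B   # A^{-1}(e^{AT} - I) B

# Augmented-matrix route: expm([[A, B], [0, 0]] T) has int_0^T e^{As} ds B top right.
M = np.zeros((3, 3))
M[:2, :2] = A * T
M[:2, 2:] = B * T
Bd_aug = expm(M)[:2, 2:]
assert np.allclose(Bd_closed, Bd_aug)
```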
Problem 4. Discrete time linear system solution.

Assume k > k0, and let N = k − k0 (not to be confused with N in the problem statement, which might have better been printed as ℕ). Then,
x(k0 + 1) = A x(k0) + B u_{k0}
x(k0 + 2) = A( A x(k0) + B u_{k0} ) + B u_{k0+1} = A^2 x(k0) + A B u_{k0} + B u_{k0+1}
x(k0 + 3) = A( A^2 x(k0) + A B u_{k0} + B u_{k0+1} ) + B u_{k0+2} = A^3 x(k0) + A^2 B u_{k0} + A B u_{k0+1} + B u_{k0+2}
...
x(k) = x(k0 + N) = A^N x(k0) + A^{N−1} B u_{k0} + A^{N−2} B u_{k0+1} + ··· + A B u_{k−2} + B u_{k−1}
= A^N x(k0) + [ A^{N−1}B  A^{N−2}B  ···  AB  B ] [ u_{k0} ; u_{k0+1} ; ... ; u_{k−2} ; u_{k−1} ]
= A^N x(k0) + Σ_{i=1}^{N} A^{N−i} B u_{k0+i−1}
= A^{k−k0} x(k0) + Σ_{i=k0}^{k−1} A^{k−1−i} B u_i   (1)
= A^{k−k0} x(k0) + Σ_{i=1}^{k−k0} A^{k−k0−i} B u_{k0+i−1}   (alternate form)
= A^{k−k0} x(k0) + Σ_{i=0}^{k−k0−1} A^{i} B u_{k−i−1}   (alternate form)

Thus,

y(k) = C x(k) + D u(k) = C A^{k−k0} x(k0) + C Σ_{i=k0}^{k−1} A^{k−1−i} B u_i + D u(k).
Remark. Note the similarity between the form of (1) and the usual form of the analogous continuous time case,

x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} B u(τ) dτ.
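Formula (1) can be checked against straight iteration of the recursion on a random example (the dimensions and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
x0 = rng.standard_normal(3)
k0, k = 2, 7
u = {i: rng.standard_normal(2) for i in range(k0, k)}

x = x0.copy()                       # straight iteration of x(k+1) = A x(k) + B u(k)
for i in range(k0, k):
    x = A @ x + B @ u[i]

x_formula = np.linalg.matrix_power(A, k - k0) @ x0 + sum(
    np.linalg.matrix_power(A, k - 1 - i) @ B @ u[i] for i in range(k0, k)
)
assert np.allclose(x, x_formula)
```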
Problem 5. Linear Quadratic Regulator.

a) We have a cost function of the form

J = ∫_{0}^{∞} ( y^T Q y + u^T R u ) dt,

where in this case Q = 1, R = ρ. In LN11 we have a proof that the optimal control is

u = −F x(t) = −R^{−1} B^T P x(t) = −ρ^{−1} B^T P x(t),

where P is the unique positive definite solution to the (algebraic) Riccati equation

P A + A^T P − P B R^{−1} B^T P + C^T Q C = 0.

In this case the sparsity of A, B, C suggests that we may be able to determine the solution to the ARE by hand. With P = [ p11  p12 ; p21  p22 ] (p12 = p21 by symmetry), the entries of the ARE give

(1,1): 1 − p12^2/ρ = 0 ⟹ p12 = p21 = √ρ
(2,2): 2 p12 − p22^2/ρ = 0 ⟹ p22 = √2 ρ^{3/4}
(1,2): p11 − p12 p22/ρ = 0 ⟹ p11 = √2 ρ^{1/4}

⟹ P = [ √2 ρ^{1/4}  √ρ ; √ρ  √2 ρ^{3/4} ].

Thus,

u(t) = −(1/ρ) [ 0  1 ] [ √2 ρ^{1/4}  √ρ ; √ρ  √2 ρ^{3/4} ] x(t) = −[ ρ^{−1/2}  √2 ρ^{−1/4} ] x(t) = −F x(t).
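The hand-computed P, the gain, and the closed-loop eigenvalues discussed in part (b) can all be checked against scipy's ARE solver for a sample value of ρ (ρ = 2 is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rho = 2.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

P_num = solve_continuous_are(A, B, C.T @ C, np.array([[rho]]))
P_hand = np.array([
    [np.sqrt(2) * rho**0.25, np.sqrt(rho)],
    [np.sqrt(rho), np.sqrt(2) * rho**0.75],
])
assert np.allclose(P_num, P_hand)

F = (B.T @ P_num) / rho            # = [rho^{-1/2}, sqrt(2) rho^{-1/4}]
ev = np.linalg.eigvals(A - B @ F)  # should be (sqrt(2)/2) rho^{-1/4} (-1 +/- j)
assert np.allclose(ev.real, -np.sqrt(2) / 2 * rho**-0.25)
```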
b) The closed loop system is

ẋ(t) = A x(t) − B F x(t) = (A − BF) x(t) = ( [ 0  1 ; 0  0 ] − [ 0 ; 1 ] [ ρ^{−1/2}  √2 ρ^{−1/4} ] ) x(t) = [ 0  1 ; −ρ^{−1/2}  −√2 ρ^{−1/4} ] x(t),

and the closed loop dynamics A_CL = A − BF has eigenvalues

(√2/2) ρ^{−1/4} (−1 ± j),

so the poles lie on 45-degree lines from the origin in the left half plane. Since ρ appears in the denominator, small values of ρ correspond to poles far away from the origin; the system response will be faster than for larger values of ρ. However, in all cases the damping ratio ζ = √2/2 will be the same.

Problem 6. Preservation of Eigenvalues under Similarity Transform.

Recall the property of determinants that det AB = det A det B. Then,

det(sI − Ā) = det(sI − P A P^{−1}) = det(s P P^{−1} − P A P^{−1}) = det P det(sI − A) det P^{−1} = det P det P^{−1} det(sI − A) = det(sI − A).

Thus the characteristic polynomials of Ā and A are identical and so are their eigenvalues.

Problem 7.
First, consider (At)^k = A^k t^k = ( Σ_{i=1}^{n} λi ei vi^T )^k t^k = Σ_{i=1}^{n} λi^k t^k ei vi^T (using the same argument as the 2×2 case in the lecture notes). Recall that Σ_{i=1}^{n} ei vi^T = I. Then,

e^{At} = I + At + (t^2/2!) A^2 + (t^3/3!) A^3 + ···
= Σ_{i=1}^{n} ei vi^T + t Σ_{i=1}^{n} λi ei vi^T + (t^2/2!) Σ_{i=1}^{n} λi^2 ei vi^T + (t^3/3!) Σ_{i=1}^{n} λi^3 ei vi^T + ···
= Σ_{i=1}^{n} ( 1 + λi t + (t^2/2!) λi^2 + (t^3/3!) λi^3 + ··· ) ei vi^T
= Σ_{i=1}^{n} e^{λi t} ei vi^T,

where we are treating the infinite series representation of the exponential 'formally'.
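The dyadic expansion can be verified numerically for a (generically diagonalizable) random matrix; the columns of E below are the eigenvectors ei and the rows of E^{−1} are the vi^T:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))        # generically diagonalizable
lam, E = np.linalg.eig(A)
V = np.linalg.inv(E)                   # rows v_i^T, so sum_i e_i v_i^T = I
t = 0.5
dyadic = sum(np.exp(lam[i] * t) * np.outer(E[:, i], V[i, :]) for i in range(4))
assert np.allclose(dyadic, expm(A * t))
```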
EE221A Linear System Theory Problem Set 7 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 11/3; Due 11/10
Problem 1.

A has characteristic polynomial (s − λ1)^5 (s − λ2)^3, it has four linearly independent eigenvectors, the largest Jordan block associated to λ1 is of dimension 2, and the largest Jordan block associated to λ2 is of dimension 3. Write down the Jordan form J of this matrix and write down cos(e^A) explicitly.

Problem 2.
A matrix A ∈ R^{6×6} has minimal polynomial s^3. Give bounds on the rank of A.

Problem 3: Jordan Canonical Form.
Given

A = [ −3  1  0  0  0  0  0
       0 −3  1  0  0  0  0
       0  0 −3  0  0  0  0
       0  0  0 −4  1  0  0
       0  0  0  0 −4  0  0
       0  0  0  0  0  0  0
       0  0  0  0  0  0  0 ]

(a) What are the eigenvalues of A? How many linearly independent eigenvectors does A have? How many generalized eigenvectors?
(b) What are the eigenvalues of e^{At}?
(c) Suppose this matrix A were the dynamic matrix of an LTI system. What happens to the state trajectory over time (magnitude grows, decays, remains bounded...)?

Problem 4.
You are told that A : R^n → R^n and that R(A) ⊂ N(A). Can you determine A up to a change of basis? Why or why not?

Problem 6.
Let A ∈ R^{n×n} be non-singular. True or false: the nullspace of cos(log(A)) is an A-invariant subspace?

Problem 7.

Consider A ∈ R^{n×n}, b ∈ R^n. Show that span{b, Ab, . . . , A^{n−1}b} is an A-invariant subspace.
EE221A Problem Set 7 Solutions - Fall 2011
Problem 1.

With the given information, we can determine the Jordan form J = TAT^{−1} of A to be

J = diag( [ λ1  1 ; 0  λ1 ], [ λ1  1 ; 0  λ1 ], [ λ1 ], [ λ2  1  0 ; 0  λ2  1 ; 0  0  λ2 ] )

(λ1 has algebraic multiplicity 5, three blocks with largest size 2, hence sizes 2, 2, 1 and three eigenvectors; the remaining fourth eigenvector belongs to λ2, whose single block has size 3). Thus, with f(λ) := cos(e^λ), f′(λ) = −e^λ sin e^λ and f″(λ) = −e^λ sin e^λ − e^{2λ} cos e^λ,

cos e^J = diag( [ cos e^{λ1}  −e^{λ1} sin e^{λ1} ; 0  cos e^{λ1} ],
                [ cos e^{λ1}  −e^{λ1} sin e^{λ1} ; 0  cos e^{λ1} ],
                [ cos e^{λ1} ],
                [ cos e^{λ2}  −e^{λ2} sin e^{λ2}  −(1/2)( e^{λ2} sin e^{λ2} + e^{2λ2} cos e^{λ2} ) ; 0  cos e^{λ2}  −e^{λ2} sin e^{λ2} ; 0  0  cos e^{λ2} ] )

and cos e^A = T^{−1} (cos e^J) T.
Problem 2.

We know that there is a single eigenvalue λ = 0 with multiplicity 6, and that the size of the largest Jordan block is 3. We know that rank(A) = rank(T^{−1}JT) = rank(J) since T is full rank (apply Sylvester's inequality). Then J must have rank of at least 2, arising from the 1's on the superdiagonal in the Jordan block of size 3. If all the other Jordan blocks were size 1, then there would be no additional 1's on the superdiagonal, so the lower bound on rank(A) is 2. Now the most 1's on the superdiagonal that this matrix could have is 4, which would be the case if there were two Jordan blocks of size 3. So rank(A) ≤ 4. Thus the bounds are

2 ≤ rank(A) ≤ 4.

Problem 3. Jordan Canonical Form.
a) Since this matrix is upper triangular (indeed, already in Jordan form) we can read the eigenvalues from the diagonal elements: σ(A) = {−3, −4, 0}. Since there are 4 Jordan blocks, there are also 4 linearly independent eigenvectors, and 3 generalized eigenvectors (2 associated with the eigenvalue −3 and 1 with the eigenvalue −4).

b) By the spectral mapping theorem, σ(e^{At}) = e^{σ(A)t} = { e^{−3t}, e^{−4t}, 1 }.

c) Since σ(A) has an eigenvalue not in the open left half plane, it is not (internally) asymptotically stable. (Note, however, that it is (internally) stable, since the questionable eigenvalues are on the jω-axis and have Jordan blocks of size 1.) In particular, the first 5 states will decay to zero asymptotically (indeed, exponentially), and the last two will remain bounded (indeed, constant).

Problem 4.
No. The given property R(A) ⊂ N(A) is equivalent to A^2 v = θ, ∀v ∈ R^n. Clearly A0 = 0_{n×n} has this property, but so does, e.g.,

A1 = [ 0  1  0  ···  0 ; 0  0  0  ···  0 ; ...  ; 0  0  ···  0  0 ]

(a single Jordan block of size 2 at eigenvalue 0, padded with zeros), and since A0, A1 are both in Jordan form, but are not the same (even with block reordering), this means that A cannot be determined up to a change of basis.

Problem 6.
True.

Proof. Let f(x) := cos(log(x)). We can write A = T^{−1}JT, so f(A) = f(T^{−1}JT) = T^{−1}f(J)T. Now consider

N(f(A)) = N(T^{−1}f(J)T).

If x ∈ N(T^{−1}f(J)T), then T^{−1}f(J)Tx = θ ⟺ f(J)Tx = θ ⟺ Tx ∈ N(f(J)). We need to show that f(A)Ax = θ ⟺ T^{−1}f(J)T · T^{−1}JTx = θ ⟺ f(J)JTx = θ. This will be true if J and f(J) commute, because if so, then f(J)JTx = Jf(J)Tx = θ, since we have shown that Tx ∈ N(f(J)) whenever x ∈ N(f(A)). Note that the block structure of f(J) and J leads to f(J)J and Jf(J) having the same block structure, and we only need to check whether Ji and f(Ji) commute, where Ji is the i-th Jordan block. Write Ji = λi I + S where S is an "upper shift" matrix (all zeros except for 1's on the superdiagonal). So we want to know whether (λi I + S)f(Ji) = λi f(Ji) + S f(Ji) equals f(Ji)λi + f(Ji)S; in other words, does S f(Ji) = f(Ji)S? Note that when S pre-multiplies a matrix, the result is the original matrix with its entries shifted up and the last row filled with zeros; when S post-multiplies a matrix, the result is the original matrix with its entries shifted to the right and the first column filled with zeros. Since f(Ji) is an upper-triangular, banded (Toeplitz) matrix, the result is the same in either case, and so f(J) and J commute. So indeed, the nullspace of cos(log(A)) is an A-invariant subspace.
Alternate proof: Let f(x) := cos(log(x)). By the spectral mapping theorem, σ(f(A)) = f(σ(A)); since we are interested in the nullspace of f(A), this means we want to consider eigenvectors associated with eigenvalues at zero of f(A). So these are the values of x that make cos(log(x)) = 0; these are e^{π/2}, e^{3π/2}, e^{−π/2}, and so on. We have seen that for any eigenvalue λ of A, the space N(A − λI) spanned by the eigenvectors associated with that eigenvalue is A-invariant. The nullspace of f(A) is thus the direct sum of such subspaces and is hence also A-invariant. (Thanks to Roy Dong for this proof.)

Another alternate proof: Since f(x) := cos(log(x)) is analytic for x ≠ 0, and A nonsingular means 0 ∉ σ(A), f(A) = p(A) for some polynomial p of finite degree, say p(x) = c0 + c1 x + ··· + c_{n−1} x^{n−1}. Then A-invariance of the nullspace is easy to check. Let v ∈ N(f(A)), so f(A)v = θ. Then

f(A)Av = (c0 I + c1 A + ··· + c_{n−1} A^{n−1}) Av = A (c0 I + c1 A + ··· + c_{n−1} A^{n−1}) v = A f(A) v = θ,

and so Av ∈ N(f(A)).
Problem 7.

Proof. Let v ∈ Ω := span{ b, Ab, A^2 b, . . . , A^{n−1} b }. Then

v = α0 b + α1 Ab + α2 A^2 b + ··· + α_{n−1} A^{n−1} b.

Now consider Av = α0 Ab + α1 A^2 b + α2 A^3 b + ··· + α_{n−2} A^{n−1} b + α_{n−1} A^n b. Apply the Cayley–Hamilton theorem: A^n = β0 I + β1 A + ··· + β_{n−1} A^{n−1}, so we have

Av = (α_{n−1} β0) b + (α0 + α_{n−1} β1) Ab + (α1 + α_{n−1} β2) A^2 b + ··· + (α_{n−2} + α_{n−1} β_{n−1}) A^{n−1} b,

and so Av ∈ Ω.
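A numerical illustration (a sketch with arbitrary random data): take A block-diagonal and b supported on the first block, so that Ω is a proper subspace, and check that Av stays in it:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.zeros((5, 5))
A[:3, :3] = rng.standard_normal((3, 3))
A[3:, 3:] = rng.standard_normal((2, 2))
b = np.zeros(5)
b[:3] = rng.standard_normal(3)

# Krylov matrix whose columns span Omega = span{b, Ab, ..., A^4 b}
K = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(5)])
assert np.linalg.matrix_rank(K) < 5      # a proper subspace by construction

v = K @ rng.standard_normal(5)           # arbitrary element of Omega
Av = A @ v
resid = Av - K @ np.linalg.lstsq(K, Av, rcond=None)[0]
assert np.linalg.norm(resid) < 1e-8      # Av lies in Omega (zero residual)
```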
EE221A Linear System Theory Problem Set 8 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 11/10; Due 11/18
Problem 1: BIBO Stability.

[Figure 1: a simple heat exchanger with a hot compartment (flow f_H, inflow temperature T_Hi, temperature T_H, volume V_H) and a cold compartment (flow f_C, inflow temperature T_Ci, temperature T_C, volume V_C).]

Consider the simple heat exchanger shown in Figure 1, in which f_C and f_H are the flows (assumed constant) of cold and hot water, T_H and T_C represent the temperatures in the hot and cold compartments, respectively, T_Hi and T_Ci denote the temperature of the hot and cold inflow, respectively, and V_H and V_C are the volumes of hot and cold water. The temperatures in both compartments evolve according to:

V_C dT_C/dt = f_C (T_Ci − T_C) + β (T_H − T_C)   (1)
V_H dT_H/dt = f_H (T_Hi − T_H) − β (T_H − T_C)   (2)

Let the inputs to this system be u1 = T_Ci, u2 = T_Hi, the outputs y1 = T_C and y2 = T_H, and assume that f_C = f_H = 0.1 (m^3/min), β = 0.2 (m^3/min) and V_H = V_C = 1 (m^3).

(a) Write the state space and output equations for this system in modal form.
(b) In the absence of any input, determine y1(t) and y2(t).
(c) Is the system BIBO stable? Show why or why not.

Problem 2: BIBO Stability

Consider a single input single output LTI system with transfer function G(s) = 1/(s^2 + 1). Is this system BIBO stable?
Problem 3: Exponential stability of LTI systems.

Prove that if the A matrix of the LTI system ẋ = Ax has all of its eigenvalues in the open left half plane, then the equilibrium x_e = 0 is asymptotically stable.

Problem 4: Characterization of Internal (State Space) Stability for LTI systems.

(a) Show that the system ẋ = Ax is internally stable if all of the eigenvalues of A are in the closed left half of the complex plane (closed means that the jω-axis is included), and each of the jω-axis eigenvalues has a Jordan block of size 1.

(b) Given

A = [ −3  1  0  0  0  0  0
       0 −3  1  0  0  0  0
       0  0 −3  0  0  0  0
       0  0  0 −4  1  0  0
       0  0  0  0 −4  0  0
       0  0  0  0  0  0  0
       0  0  0  0  0  0  0 ]

Is the system ẋ = Ax exponentially stable? Is it stable?
EE221A Problem Set 8 Solutions - Fall 2011
Problem 1. BIBO Stability.

a) First write this LTI system in state space form, ẋ = Ax + Bu:

ẋ = [ −(β+f_C)/V_C  β/V_C ; β/V_H  −(β+f_H)/V_H ] x + [ f_C/V_C  0 ; 0  f_H/V_H ] u = [ −0.3  0.2 ; 0.2  −0.3 ] x + [ 0.1  0 ; 0  0.1 ] u,
y = Cx = [ 1  0 ; 0  1 ] x,

where x := (T_C, T_H)^T, u := (T_Ci, T_Hi)^T. This has two distinct eigenvalues (so we know it can be diagonalized): λ1 = −0.5 with eigenvector e1 = (1, −1) and λ2 = −0.1 with eigenvector e2 = (1, 1). Let T^{−1} = [ e1  e2 ] = [ 1  1 ; −1  1 ], so T = (1/2) [ 1  −1 ; 1  1 ], and the modal form is

ż = Ãz + B̃u, y = C̃z,

where

Ã = TAT^{−1} = [ −0.5  0 ; 0  −0.1 ], B̃ = TB = [ 0.05  −0.05 ; 0.05  0.05 ], C̃ = CT^{−1} = [ 1  1 ; −1  1 ].

b)

y(t) = C̃ e^{Ãt} z(0) = C̃ e^{Ãt} T x0
= [ 1  1 ; −1  1 ] [ e^{−0.5t}  0 ; 0  e^{−0.1t} ] (1/2) [ 1  −1 ; 1  1 ] x0
= (1/2) [ e^{−0.5t} + e^{−0.1t}  −e^{−0.5t} + e^{−0.1t} ; −e^{−0.5t} + e^{−0.1t}  e^{−0.5t} + e^{−0.1t} ] x0

⟹ y1(t) = (1/2) e^{−0.5t} (x_{0,1} − x_{0,2}) + (1/2) e^{−0.1t} (x_{0,1} + x_{0,2})
y2(t) = (1/2) e^{−0.5t} (x_{0,2} − x_{0,1}) + (1/2) e^{−0.1t} (x_{0,1} + x_{0,2})
c) Since all the eigenvalues are in the open left half plane, the system is (internally) exponentially stable, and since we have a minimal realization ((A, B) completely controllable and (A, C) completely observable; clear by inspection since B and C are both full rank), it is thus BIBO stable.

Problem 2. BIBO Stability.

The transfer function has poles at ±j; thus there are poles that are not in C−° (the open left half plane), therefore the system cannot be BIBO stable. Consider for example the bounded input u(t) = sin t. So û(s) = 1/(s^2 + 1) and

ŷ(s) = G(s)û(s) = 1/(s^2 + 1)^2 = (1/2) · 1/(s^2 + 1) − (1/2) · (s^2 − 1)/(s^2 + 1)^2
⟹ y(t) = (1/2) [ sin t − t cos t ],

which will clearly grow without bound as t → ∞.
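The blow-up can be seen by simulating ÿ + y = sin t (a state-space realization of G, driven by the bounded input) and comparing with the closed form above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # realization of G(s) = 1/(s^2 + 1) driven by u = sin t
    return [x[1], -x[0] + np.sin(t)]

tf = 40.0
sol = solve_ivp(f, (0.0, tf), [0.0, 0.0], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.0, tf, 401))
y = sol.y[0]
y_closed = 0.5 * (np.sin(sol.t) - sol.t * np.cos(sol.t))
assert np.allclose(y, y_closed, atol=1e-5)
assert np.max(np.abs(y)) > 15        # grows without bound although |u| <= 1
```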
Problem 3. Exponential stability of LTI systems.

We have seen that for an LTI system, Φ(t, t0) = e^{A(t−t0)}. By the spectral mapping theorem, σ(e^{A(t−t0)}) = f(σ(A)) where f(x) = e^{x(t−t0)}; thus f′(x) = (t − t0) e^{x(t−t0)}, f″(x) = (t − t0)^2 e^{x(t−t0)}, ..., f^{(k)}(x) = (t − t0)^k e^{x(t−t0)}. Note that the Jordan form of e^{A(t−t0)} will be comprised solely of entries of this form (scaled by 1/(k−1)! ≤ 1), i.e. products of polynomials in t and e^{λi(t−t0)}. When Re(λi) < 0, all these entries will go to zero as t → ∞, since any decaying exponential eventually dominates any growing polynomial. So the magnitude of the state must also go to zero. The state is also bounded, by continuity of the polynomial-matrix products. This implies that x_e = 0 is asymptotically stable. This is developed a bit more formally in LN15, p.5, but the idea is the same; and we don't need all the mechanics of that proof since we aren't trying to show that the state goes to zero exponentially fast.
Problem 4. Characterization of Internal (State Space) Stability for LTI systems.

(a) For internal stability we simply need the state to be bounded for all t ≥ t0. This implies that e^{Jt} must be bounded, where J is the Jordan form of A. By the analysis in Problem 3, this is clearly true for the subspaces of the state space corresponding to eigenvalues in the open left half plane. For subspaces corresponding to an eigenvalue λi = jω on the imaginary axis, note that the corresponding Jordan block Ji with block size 1 leads to simply e^{Ji t} = e^{λi t} = e^{jωt} = cos ωt + j sin ωt, hence |e^{Ji t}| = 1.

(b) This system is in Jordan form; the eigenvalues have either negative real part, or they are on the imaginary axis and have Jordan block size 1, so by the result of part (a) the system is (internally) stable. However, because of the eigenvalues at zero, the system is not exponentially stable.
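Numerically, ‖e^{At}‖ for the A of part (b) stays bounded but settles at 1 rather than decaying, consistent with stability without exponential stability:

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([-3.0, -3.0, -3.0, -4.0, -4.0, 0.0, 0.0])
A[0, 1] = A[1, 2] = A[3, 4] = 1.0    # the Jordan-form matrix from part (b)
norms = [np.linalg.norm(expm(A * t), 2) for t in (1.0, 10.0, 100.0)]
assert all(n <= 1.5 for n in norms)  # bounded: internally stable
assert abs(norms[-1] - 1.0) < 1e-6   # settles at 1, not 0: not exponentially stable
```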
EE221A Linear System Theory Problem Set 9 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 11/21; Due 12/1
Problem 1: Lyapunov Equation.

(a) Consider the linear map L : R^{n×n} → R^{n×n} defined by L(P) = A^T P + P A. Show that if λi + λj ≠ 0, ∀λi, λj ∈ σ(A), the equation

A^T P + P A = Q

has a unique symmetric solution for given symmetric Q.

(b) Show that if σ(A) ⊂ C−° then for given Q > 0, there exists a unique positive definite P solving

A^T P + P A = −Q.

(Hint: try P = ∫_{0}^{∞} e^{A^T t} Q e^{At} dt.)
Problem 2: Asymptotic and exponential stability.

True or False: If a linear time-varying system is asymptotically stable, it is also exponentially stable. If true, prove; if false, give a counterexample.

Problem 3: State observation problem.

Consider the linear time varying system:

ẋ(t) = A(t)x(t)
y(t) = C(t)x(t)

This system is not necessarily observable. The initial condition at time 0 is x0.

(a) Suppose the output y(t) is observed over the interval [0, T]. Under what conditions can the initial state x0 be determined? How would you determine it?
(b) Now suppose the output is subject to some error or measurement noise. Determine the "best" estimate of x0 given y(·) and the system model.
(c) Consider all initial conditions x0 such that ||x0|| = 1. Defining the energy in the output signal as < y(t), y(t) >, is it possible for the energy of the output signal to be zero?

Problem 4: State vs. Output Feedback.

Consider a dynamical system described by:
ẋ = Ax + Bu   (1)
y = Cx   (2)

where

A = [ 0  1 ; 7  −4 ], B = [ 1 ; 2 ], C = [1  3]   (3)

For each of cases (a) and (b) below, derive a state space representation of the resulting closed loop system, and determine the characteristic equation of the resulting closed loop "A" matrix (called the closed loop characteristic equation): (a) u = −[f1  f2]x, and (b) u = −ky.

Problem 5: Controllable canonical form.
Consider the linear time invariant system with state equation:

d/dt [ x1 ; x2 ; x3 ] = [ 0  1  0 ; 0  0  1 ; −α3  −α2  −α1 ] [ x1 ; x2 ; x3 ] + [ 0 ; 0 ; 1 ] u   (4)

Insert state feedback: the input to the overall closed loop system is v and u = v − kx where k is a constant row vector. Show that given any polynomial p(s) = Σ_{k=0}^{3} a_k s^{3−k} with a0 = 1, there is a row vector k such that the closed loop system has p(s) as its characteristic equation. (This naturally extends to n dimensions, and implies that any system with a representation that can be put into the form above can be stabilized by state feedback.)
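For this companion ("controllable canonical") form the required k can be read off directly: the last row of the closed-loop matrix becomes [−a3, −a2, −a1] when k = [a3 − α3, a2 − α2, a1 − α1]. A numerical spot-check (the αi and the desired polynomial below are arbitrary choices):

```python
import numpy as np

alpha1, alpha2, alpha3 = 2.0, 3.0, 4.0
a1, a2, a3 = 6.0, 11.0, 6.0          # desired p(s) = (s+1)(s+2)(s+3)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-alpha3, -alpha2, -alpha1]])
b = np.array([[0.0], [0.0], [1.0]])
k = np.array([[a3 - alpha3, a2 - alpha2, a1 - alpha1]])

eigs = np.linalg.eigvals(A - b @ k)  # closed-loop poles of the companion form
assert np.allclose(np.sort(np.real(eigs)), [-3.0, -2.0, -1.0])
```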
EE221A Linear System Theory Problem Set 10 Professor C. Tomlin Department of Electrical Engineering and Computer Sciences, UC Berkeley Fall 2011
Issued 12/2; Due 12/9
Problem 1: Feedback control design by eigenvalue placement.

Consider the dynamic system:

d^4θ/dt^4 + α1 d^3θ/dt^3 + α2 d^2θ/dt^2 + α3 dθ/dt + α4 θ = u

where u represents an input force and the αi are real scalars. Assuming that d^3θ/dt^3, d^2θ/dt^2, dθ/dt, and θ can all be measured, design a state feedback control scheme which places the closed-loop eigenvalues at s1 = −1, s2 = −1, s3 = −1 + j1, s4 = −1 − j1.

Problem 2: Controllability of Jordan Forms.

Given the Jordan Canonical Form of Problem Set 7:
A = [ −3  1  0  0  0  0  0
       0 −3  1  0  0  0  0
       0  0 −3  0  0  0  0
       0  0  0 −4  1  0  0
       0  0  0  0 −4  0  0
       0  0  0  0  0  0  0
       0  0  0  0  0  0  0 ]
Suppose this matrix A were the dynamic matrix of a system to be controlled. What is the minimum number of inputs needed for the system to be controllable?

Problem 3: Observer design. Figure 1 shows a velocity observation system where x1 is the velocity to be observed.

[Figure 1: Velocity Observation System. The input u drives an integrator 1/s whose output is the velocity x1; x1 passes through a sensor to produce the observed variable x2; an observer driven by u and x2 produces the output z1.]

An observer is to be constructed to track x1, using u and x2 as inputs. The variable x2 is obtained from x1 through a sensor having the known transfer function

x2(s)/x1(s) = (2 − s)/(2 + s)    (1)
as shown in Figure 1.

(a) Derive a set of state-space equations for the system with state variables x1 and x2, input u and output x2.
(b) Design an observer with states z1 and z2 to track x1 and x2 respectively. Choose both observer eigenvalues to be at −4. Write out the state space equations for the observer.
(c) Derive the combined state equation for the system plus observer. Take as state variables x1, x2, e1 = x1 − z1, and e2 = x2 − z2. Take u as input and z1 as the output. Is this system controllable and/or observable? Give physical reasons for any states being uncontrollable or unobservable.
(d) What is the transfer function relating u to z1? Explain your result.

Problem 4: Observer-controller for a nonlinear system. The simplified dynamics of a magnetically suspended steel ball are given by:

m y¨ = mg − c u²/y²

where the input u represents the current supplied to the electromagnet, y is the vertical position of the ball, which may be measured by a position sensor, g is gravitational acceleration, m is the mass of the ball, and c is a positive constant such that the force on the ball due to the electromagnet is c u²/y². Assume a normalization such that m = g = c = 1.
(a) Using the states x1 = y and x2 = y˙, write down a nonlinear state space description of this system.
(b) What equilibrium control input ue must be applied to suspend the ball at y = 1 m?
(c) Write the linearized state space equations for state and input variables representing perturbations away from the equilibrium of part (b).
(d) Is the linearized model stable? What can you conclude about the stability of the nonlinear system close to the equilibrium point xe?
(e) Is the linearized model controllable? Observable?
(f) Design a state feedback controller for the linearized system, to place the closed loop eigenvalues at −1, −1.
(g) Design a full order observer, so that the state estimate error dynamics has eigenvalues at −5, −5.
(h) Now, suppose that you applied this controller to the original nonlinear system; discuss how you would expect the system to behave. How would the behavior change if you had chosen controller eigenvalues at −5, −5, and observer eigenvalues at −20, −20?

Problem 5. Given a linear time varying system R(·), show that if R(·) is completely controllable on [t0′, t1′], then R is completely controllable on any [t0, t1], where t0 ≤ t0′ < t1′ ≤ t1. Show that this is no longer true when the interval [t0′, t1′] is not a subset of [t0, t1].
EE221A Problem Set 10 Solutions - Fall 2011
Problem 1. Feedback control design by eigenvalue placement.
First write the system in state space form, taking the state x = (θ, dθ/dt, d²θ/dt², d³θ/dt³)ᵀ:

x˙ = [ 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ; −α4 −α3 −α2 −α1 ] x + [ 0 ; 0 ; 0 ; 1 ] u = Ax + Bu,   y = x

We can check controllability by considering

[ sI − A | B ] = [ s −1 0 0 | 0 ; 0 s −1 0 | 0 ; 0 0 s −1 | 0 ; α4 α3 α2 s+α1 | 1 ]

which clearly has rank 4 for any s ∈ C; moreover, by inspection (A, B) is in controllable canonical form, so (A, B) is completely controllable. Now, let

u = −fᵀx = −[ f1 f2 f3 f4 ] x
The closed loop system is then

x˙ = (A − Bfᵀ)x = [ 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ; −α4−f1 −α3−f2 −α2−f3 −α1−f4 ] x = A_CL x
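The placement can be checked numerically (a sketch assuming NumPy; the αi are arbitrary test values, and f is the gain vector obtained below by matching coefficients against the desired polynomial s⁴ + 4s³ + 7s² + 6s + 2):

```python
import numpy as np

# Arbitrary test values for the plant coefficients alpha_1..alpha_4 (assumption).
al1, al2, al3, al4 = 0.5, -1.0, 2.0, 3.0

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-al4, -al3, -al2, -al1]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])

# Gains from coefficient matching: f1 = 2 - al4, f2 = 6 - al3, f3 = 7 - al2, f4 = 4 - al1
f = np.array([[2 - al4, 6 - al3, 7 - al2, 4 - al1]])

Acl = A - B @ f
print(np.poly(Acl))  # should be close to [1, 4, 7, 6, 2], i.e. s^4 + 4s^3 + 7s^2 + 6s + 2
```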
We can compute the characteristic polynomial of the closed loop system,

χ̂_{A_CL}(s) = det(sI − A_CL) = s⁴ + (α1 + f4)s³ + (α2 + f3)s² + (α3 + f2)s + (α4 + f1)

while our desired characteristic polynomial is

χ̂_des(s) = (s + 1)²(s + 1 + j)(s + 1 − j) = s⁴ + 4s³ + 7s² + 6s + 2

and by matching terms we conclude that

f1 = 2 − α4,   f2 = 6 − α3,   f3 = 7 − α2,   f4 = 4 − α1

Problem 2. Controllability of Jordan Forms.
A minimum of two inputs is needed. Proof: the PBH test shows that no B matrix with a single column can provide complete controllability, since the eigenvalue 0 has two Jordan blocks (so rank[A − 0·I | b] ≤ 6 for any single column b); it is easy to find a two-column B matrix that does provide it, for example

B = [ 0 0 ; 0 0 ; 1 0 ; 0 0 ; 1 0 ; 1 0 ; 0 1 ]
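Both claims can be checked numerically (a sketch assuming NumPy):

```python
import numpy as np

# The 7x7 Jordan form from Problem 2: blocks J3(-3), J2(-4), and two 1x1 blocks at 0.
A = np.zeros((7, 7))
A[0, 0] = A[1, 1] = A[2, 2] = -3.0
A[0, 1] = A[1, 2] = 1.0
A[3, 3] = A[4, 4] = -4.0
A[3, 4] = 1.0
# A[5, 5] = A[6, 6] = 0 (two separate Jordan blocks for eigenvalue 0)

B = np.zeros((7, 2))
B[[2, 4, 5], 0] = 1.0   # first column hits the last row of J3(-3), J2(-4), and one 0-block
B[6, 1] = 1.0           # second column hits the other 0-block

ctrb = lambda A, B: np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(A.shape[0])])
print(np.linalg.matrix_rank(ctrb(A, B)))         # 7: controllable with two inputs
print(np.linalg.matrix_rank(ctrb(A, B[:, :1])))  # 6: one input cannot control both 0-blocks
```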
Problem 3. Observer design.
a) We have x1(s) = (1/s)u(s) ⇒ sx1(s) = u(s) ⇒ x˙1(t) = u(t). Also (2 + s)x2(s) = (2 − s)x1(s) ⇒ x˙2(t) = 2x1(t) − 2x2(t) − x˙1(t) = 2x1(t) − 2x2(t) − u(t). So the system in state-space form is

[ x˙1 ; x˙2 ] = [ 0 0 ; 2 −2 ] [ x1 ; x2 ] + [ 1 ; −1 ] u
y = [ 0 1 ] [ x1 ; x2 ]
b) We want to place the eigenvalues of A − TC, where T = [ t1 ; t2 ] and C = [ 0 1 ], so that

A − TC = [ 0 −t1 ; 2 −2−t2 ]

The characteristic polynomial of A − TC is

det(sI − (A − TC)) = det [ s t1 ; −2 s+2+t2 ] = s² + (2 + t2)s + 2t1

and we want it to equal the desired characteristic polynomial, (s + 4)² = s² + 8s + 16. Thus 2 + t2 = 8 ⇒ t2 = 6 and 2t1 = 16 ⇒ t1 = 8. The observer state space equations are therefore

z˙ = (A − TC)z + Bu + Ty = [ 0 −8 ; 2 −8 ] z + [ 1 ; −1 ] u + [ 8 ; 6 ] y
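A quick numerical check of the observer design (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0.0, 0.0], [2.0, -2.0]])
C = np.array([[0.0, 1.0]])
T = np.array([[8.0], [6.0]])  # observer gain computed above

# Characteristic polynomial of A - TC should be (s + 4)^2 = s^2 + 8s + 16
print(np.poly(A - T @ C))  # close to [1, 8, 16]
```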
c) The overall dynamics are described by

[ x˙ ; e˙ ] = [ A 0 ; 0 A−TC ] [ x ; e ] + [ B ; 0 ] u
y = [ 1 0 −1 0 ] [ x ; e ]

where x = [ x1 ; x2 ] and e = [ e1 ; e2 ]. Written out,

[ x˙ ; e˙ ] = [ 0 0 0 0 ; 2 −2 0 0 ; 0 0 0 −8 ; 0 0 2 −8 ] [ x ; e ] + [ 1 ; −1 ; 0 ; 0 ] u
y = z1 = x1 − e1 = [ 1 0 −1 0 ] [ x ; e ]
The overall system is neither completely controllable nor completely observable; the controllability matrix Q = [ B | AB | A²B | A³B ] is

Q = [ 1 0 0 0 ; −1 4 −8 16 ; 0 0 0 0 ; 0 0 0 0 ]
which has rank 2, and the observability matrix is
O = [ C ; CA ; CA² ; CA³ ] = [ 1 0 −1 0 ; 0 0 0 8 ; 0 0 16 −64 ; 0 0 −128 384 ]

which has rank 3. The error states are not controllable because the observer is designed such that the error converges to zero. Intuitively it also makes sense that one should not be able to control the state estimates separately from the states that are being estimated! The state x2 is not observable. This is because the system is designed to ensure z1 → x1 independently of u, and one does not want variations in the controlled variable x2 to affect the estimate of x1.

d)
The transfer function from u to z1 is

C(sI − A)⁻¹B = [ 1 0 −1 0 ] [ s 0 0 0 ; −2 s+2 0 0 ; 0 0 s 8 ; 0 0 −2 s+8 ]⁻¹ [ 1 ; −1 ; 0 ; 0 ]

Since B excites only the plant states and the error block is unforced, the entries of the inverse involving the error states don't matter (they are multiplied by zero), so

C(sI − A)⁻¹B = [ 1 0 ] [ s 0 ; −2 s+2 ]⁻¹ [ 1 ; −1 ] = [ 1 0 ] (1/(s(s+2))) [ s+2 0 ; 2 s ] [ 1 ; −1 ] = (s+2)/(s(s+2)) = 1/s

The observer is essentially inverting the dynamics of the sensor, such that the transfer function from input to the estimated velocity is identical to the transfer function to the actual velocity.
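The rank computations in (c) and the transfer function in (d) can be checked numerically (a sketch assuming NumPy):

```python
import numpy as np

# Combined plant + observer-error dynamics from part (c).
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [2.0, -2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, -8.0],
              [0.0, 0.0, 2.0, -8.0]])
B = np.array([[1.0], [-1.0], [0.0], [0.0]])
C = np.array([[1.0, 0.0, -1.0, 0.0]])  # output z1 = x1 - e1

Q = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
print(np.linalg.matrix_rank(Q), np.linalg.matrix_rank(O))  # 2 3

# The transfer function u -> z1 should reduce to 1/s; check at a few test points.
for s in [1.0 + 0j, 2.0 + 1j, -0.5 + 3j]:
    H = (C @ np.linalg.solve(s * np.eye(4) - A, B))[0, 0]
    assert np.isclose(H, 1 / s)
```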
Problem 4. Observer-controller for a nonlinear system.

a) x˙ := [ x˙1 ; x˙2 ] = [ x2 ; 1 − u²/x1² ] = f(x, u),   y = [ 1 0 ] x = Cx

b) When u = 1 and x1 = 1, x˙2 = 0, so the equilibrium input ue = 1 will keep the system in equilibrium at y = x1 = 1 m.

c) Let A := Dx f |_{x0,u0}, B := Du f |_{x0,u0}, x0 := (1, 0), u0 = 1. Then A = [ 0 1 ; 2 0 ], B = [ 0 ; −2 ].

d) The eigenvalues of A are ±√2, so the equilibrium x0 is unstable in the linearized system. The same equilibrium will consequently also be unstable in the nonlinear system.

e) [ B | AB ] = [ 0 −2 ; −2 0 ] ⇒ controllable;   O = [ C ; CA ] = [ 1 0 ; 0 1 ] ⇒ observable
f) Let the feedback be u = −F x; thus the closed loop dynamics are x˙ = (A − BF)x. By comparing with the desired characteristic polynomial (s + 1)² = s² + 2s + 1 we can determine that F = [ −3/2 −1 ] gives the desired closed loop eigenvalues.
g) Let the observer gain matrix be T = [ t1 ; t2 ]. Then

det[ sI − (A − TC) ] = det [ s+t1 −1 ; t2−2 s ] = s² + t1·s − 2 + t2

while the desired characteristic polynomial is (s + 5)² = s² + 10s + 25. Thus T = [ 10 ; 27 ] gives the desired spectrum for the observer dynamics A − TC.

h) In principle, near the equilibrium this controller/observer system will both control and observe the nonlinear system. More aggressive eigenvalue placement leads to higher gains in the controller, potentially degrading performance (especially in the presence of measurement noise, actuator saturation, signal digitization, unmodeled disturbances, etc.).
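The gains from parts (f) and (g) can be verified numerically (a sketch assuming NumPy):

```python
import numpy as np

# Linearization about (x1, x2) = (1, 0), u_e = 1 from part (c)
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[0.0], [-2.0]])
C = np.array([[1.0, 0.0]])

F = np.array([[-1.5, -1.0]])    # state feedback gain from part (f)
T = np.array([[10.0], [27.0]])  # observer gain from part (g)

print(np.poly(A - B @ F))  # close to [1, 2, 1]: (s + 1)^2
print(np.poly(A - T @ C))  # close to [1, 10, 25]: (s + 5)^2
```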
Problem 5.

a) Let (x0, t0) be the initial phase and (x1, t1) be an arbitrary final phase. Construct a control u(·) piecewise such that u(t) = 0 for t ∈ [t0, t0′) ∪ (t1′, t1]. Then we have that x(t0′) = Φ(t0′, t0)x0 and x(t1) = Φ(t1, t1′)x(t1′). But since R(·) is c.c. on [t0′, t1′], we know there exists a control u˜ on [t0′, t1′] that will transfer x(t0′) = Φ(t0′, t0)x0 to x(t1′) = Φ(t1′, t1)x1, so that the final coast ends at x(t1) = x1. So let u(t) = u˜(t) for t ∈ [t0′, t1′].

b) Counterexample: Consider a system R(·) = (A(·), B(·), C(·), D(·)), where t0 = t0′ < t1 < t1′ and

B(t) = 0_{n×n} for t0 ≤ t ≤ t1,   B(t) = I_{n×n} for t1 < t ≤ t1′

Then clearly R(·) is c.c. on [t0′, t1′], but not on [t0, t1].
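The counterexample can be illustrated with a scalar instance (n = 1, A = 0, t0 = t0′ = 0, t1 = 1, t1′ = 2; a sketch assuming NumPy): the controllability Gramian over [t0, t1] vanishes, while over [t0′, t1′] it is positive.

```python
import numpy as np

# Scalar instance: x' = B(t) u, with B(t) = 0 on [0, 1] and B(t) = 1 on (1, 2].
t0, t1, t1p = 0.0, 1.0, 2.0
B = lambda t: 0.0 if t <= t1 else 1.0

def gramian(ta, tb, n=10000):
    # W_c[ta, tb] = int_ta^tb Phi B(s) B(s)' Phi' ds; Phi = 1 here since A = 0.
    # Midpoint-rule approximation of the integral.
    ts = np.linspace(ta, tb, n, endpoint=False) + (tb - ta) / (2 * n)
    return float(sum(B(t) ** 2 for t in ts) * (tb - ta) / n)

print(gramian(t0, t1))   # 0.0: singular Gramian, not controllable on [t0, t1]
print(gramian(t0, t1p))  # ~1.0: positive, controllable on [t0', t1']
```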
EE221A Problem Set 9 Solutions - Fall 2011
Problem 1. Lyapunov Equation.
(a) We want to show that L(P) = Q has a unique symmetric solution. So we are interested in whether L : P ↦ AᵀP + PA is injective (for uniqueness) and surjective (a solution exists for any given symmetric Q). Thus we want to know if L is bijective, or equivalently (since L maps R^{n×n} to itself), if N(L) = {θ}. A sketch of the proof is as follows: we use the (ordinary and generalized) eigenvalues and eigenvectors of A and the property that sums of eigenvalues cannot be zero, to show that P v = 0 for each (ordinary and generalized) eigenvector v of A. Since the set of all (ordinary and generalized) eigenvectors is a basis for Rⁿ, the only P that satisfies this is P = 0, hence N(L) = {θ} as desired. Let e be an eigenvector of A with eigenvalue λ. Then

L(P) = 0 ⇒ AᵀP e + P Ae = 0 ⇒ AᵀP e = −λP e,

and since σ(A) = σ(Aᵀ), this means that either: i) −λ is an eigenvalue of A, with left eigenvector eᵀP, or ii) P e = 0. But the first case is precluded by the given property on the eigenvalues of A. So we have shown that for every eigenvector e of A, P e = 0. If A happens to be diagonalizable (i.e. it has a complete set of n linearly independent eigenvectors), then we are done. However, we can't assume this. Thus, consider also a generalized eigenvector v of A of degree 1 (so Av = λv + e, where e is some eigenvector of A). Then,

L(P) = 0 ⇒ AᵀP v + P Av = 0 ⇒ AᵀP v = −λP v − P e ⇒ AᵀP v = −λP v,

where we recall that we have already shown that P e = 0. By the same reasoning as before, we now have that P v = 0 for all generalized eigenvectors of degree 1. One can continue this until all of the eigenvectors and generalized eigenvectors of A have been exhausted, with the result that P maps every eigenvector and generalized eigenvector of A to zero. But since the eigenvectors and generalized eigenvectors of A form a basis for Rⁿ, this implies that L(P) = 0 ⇒ P = 0. So we have that L(P) = Q has a unique solution. Now, to show that any solution is symmetric,

Q = Qᵀ ⇒ AᵀP + PA = PᵀA + AᵀPᵀ ⇒ L(P) = L(Pᵀ) ⇒ P = Pᵀ

(b) Note that σ(A) ⊂ C−° implies the property in part (a), so by that result we have existence of a unique, symmetric solution. Check that the hinted P is this solution:
AᵀP + PA = Aᵀ ∫₀^∞ e^{Aᵀt} Q e^{At} dt + ( ∫₀^∞ e^{Aᵀt} Q e^{At} dt ) A
         = ∫₀^∞ [ (d/dt e^{Aᵀt}) Q e^{At} + e^{Aᵀt} Q (d/dt e^{At}) ] dt
         = ∫₀^∞ d/dt ( e^{Aᵀt} Q e^{At} ) dt
         = [ e^{Aᵀt} Q e^{At} ]_{t=0}^{∞} = −Q,

since e^{At} → 0 as t → ∞ when σ(A) ⊂ C−°.
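This calculation can be checked numerically (a sketch assuming NumPy and SciPy; A is an arbitrary test matrix with spectrum in the open left half plane and Q an arbitrary symmetric positive definite matrix):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])  # arbitrary Hurwitz test matrix (assumption)
Q = np.array([[2.0, 0.5], [0.5, 1.0]])    # arbitrary symmetric positive definite Q (assumption)

# P = int_0^inf e^{A't} Q e^{At} dt, approximated by a midpoint rule on [0, 15]
n, Tend = 2000, 15.0
ts = np.linspace(0.0, Tend, n, endpoint=False) + Tend / (2 * n)
P = sum(expm(A.T * t) @ Q @ expm(A * t) for t in ts) * (Tend / n)

print(np.allclose(A.T @ P + P @ A, -Q, atol=1e-2))                    # True: solves A'P + PA = -Q
print(np.allclose(P, solve_continuous_lyapunov(A.T, -Q), atol=1e-2))  # True: matches direct solver
```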
And P is clearly positive definite because e^{At} is invertible and Q is positive definite.

Problem 2. Asymptotic and exponential stability.
False. Counterexample: Consider the system x˙ = −(1/(1 + t))x. This has solution x(t) = ((1 + t0)/(1 + t))x0, i.e. Φ(t, t0) = (1 + t0)/(1 + t). So Φ(t, 0) → 0 as t → ∞; therefore xe = 0 is asymptotically stable. But note that for any α > 0, |x(t)| exp[α(t − t0)] → ∞ as t → ∞, so we can never satisfy the requirements of exponential stability.
Problem 3. State observation problem.

(a) We have Lo x0 = y(t) = C(t)Φ(t, 0)x0. So of course it is necessary that y ∈ R(Lo); however this should be guaranteed if y is the output of our system and there are no unmodeled dynamics or noise. Since Φ(t, 0) is invertible for all t, a sufficient condition would be if there exists t ∈ [0, T] for which C⁻¹(t) exists. Generally however we don't have such a simple case. However, we know that if the observability Grammian

W_o[0, T] = ∫₀ᵀ Φ*(τ, 0)C*(τ)C(τ)Φ(τ, 0) dτ

is full rank, then the system is completely observable on [0, T], or in other words, we can determine x0 exactly from the output. We could determine it as in the derivation of the continuous time Kalman filter from lecture notes 18:

x0 = (Lo* Lo)⁻¹ Lo* y = W_o⁻¹[0, T] ∫₀ᵀ Φ*(τ, 0)C*(τ)y(τ) dτ.
Lo x0 = yR + yN =⇒ Lo Lo x0 = Lo (yR + yN ) = Lo yR Now as we have seen, if N (Lo ) = N (Lo Lo ) = {θ}, i.e. Lo is injective, then Lo Lo is invertible and we can recover a unique x0 that is the initial condition that, with no noise, would produce the output closest (in the sense we have described) to the observed output. Now consider the other cases. Note that Lo cannot be surjective, since it maps to an infinite-dimensional vector space. Now, if Lo is not injective, then at best we can define a set of possible initial conditions that would all result in the output yR , X := {x|Lo x = yR }. The The x0 obtained via the Moore-Penrose T T −1 ∗ −1 ∗ pseudoinverse, x ˜0 = V 1 Σr U 1 Lo y = V 1 Σr U 1 0 Φ (τ, 0)C (τ )y (τ )dτ , would be the solution of least (L2) norm (here, SVD Lo Lo := U 1 Σr V 1∗ ). (c) Yes; in the case that Lo is not injective, N (Lo ) is nontrivial and there exist x0 ∈ N (Lo ) with unit norm; then for any such x0 , y, y = Lo x0 , Lo x0 = θ, θ = 0. 0.
´
Problem 4. State vs. Output Feedback.
(a) We have

x˙ = Ax + B(−[ f1 f2 ]x) = (A − B[ f1 f2 ])x = A_cl x,

where A = [ 0 1 ; 7 −4 ] and B = [ 1 ; 2 ], so that

A_cl = [ −f1  1−f2 ; 7−2f1  −4−2f2 ]

with characteristic equation

χ̂_{A_cl}(s) = (s + f1)(s + 4 + 2f2) − (1 − f2)(7 − 2f1)
            = s² + 4s + 2f2·s + f1·s + 4f1 + 2f1 f2 − 7 + 2f1 + 7f2 − 2f1 f2
            = s² + (4 + 2f2 + f1)s + 6f1 + 7f2 − 7

(b) We have

x˙ = Ax + B(−ky) = Ax − kBCx = (A − kBC)x = A_cl x,

where, with C = [ 1 3 ],

A_cl = [ 0 1 ; 7 −4 ] − k [ 1 3 ; 2 6 ] = [ −k  1−3k ; 7−2k  −4−6k ]

with characteristic equation

χ̂_{A_cl}(s) = s² + (7k + 4)s + 27k − 7
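Both characteristic polynomials can be verified numerically (a sketch assuming NumPy; f1, f2, and k are arbitrary test gains):

```python
import numpy as np

A = np.array([[0.0, 1.0], [7.0, -4.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 3.0]])

f1, f2 = 0.7, -1.3  # arbitrary test state-feedback gains (assumption)
k = 0.4             # arbitrary test output-feedback gain (assumption)

# (a): characteristic polynomial should be s^2 + (4 + 2 f2 + f1) s + 6 f1 + 7 f2 - 7
print(np.poly(A - B @ np.array([[f1, f2]])))
# (b): characteristic polynomial should be s^2 + (7k + 4) s + 27k - 7
print(np.poly(A - k * B @ C))
```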
Problem 5. Controllable canonical form.

The closed loop system is

[ x˙1 ; x˙2 ; x˙3 ] = [ 0 1 0 ; 0 0 1 ; −α3−k1 −α2−k2 −α1−k3 ] [ x1 ; x2 ; x3 ] + [ 0 ; 0 ; 1 ] v

So

χ̂_{A_CL}(s) = s³ + (α1 + k3)s² + (α2 + k2)s + (α3 + k1)

The desired characteristic polynomial is

p(s) = s³ + a1 s² + a2 s + a3

So setting k such that

k1 = a3 − α3,   k2 = a2 − α2,   k3 = a1 − α1

gives the desired characteristic polynomial.
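The construction can be verified numerically (a sketch assuming NumPy; the αi are arbitrary test values and the desired polynomial is taken to be p(s) = (s + 1)(s + 2)(s + 3) = s³ + 6s² + 11s + 6):

```python
import numpy as np

al1, al2, al3 = 0.3, -2.0, 1.5  # arbitrary plant coefficients alpha_1..alpha_3 (assumption)
a1, a2, a3 = 6.0, 11.0, 6.0     # desired p(s) = s^3 + 6 s^2 + 11 s + 6

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-al3, -al2, -al1]])
B = np.array([[0.0], [0.0], [1.0]])

# k1 = a3 - alpha3, k2 = a2 - alpha2, k3 = a1 - alpha1
kvec = np.array([[a3 - al3, a2 - al2, a1 - al1]])

Acl = A - B @ kvec
print(np.sort(np.linalg.eigvals(Acl).real))  # roots of p(s): close to [-3, -2, -1]
```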