Solutions Manual for Adaptive Filter Theory, 5th Edition, by Haykin
Chapter 2
Problem 2.1

a) Let

$$w_k = x + jy, \qquad p(k) = a + jb$$

We may then write

$$f = w_k p^*(k) = (x + jy)(a - jb) = (ax + by) + j(ay - bx)$$

Letting $f = u + jv$, where

$$u = ax + by, \qquad v = ay - bx$$

we have the partial derivatives

$$\frac{\partial u}{\partial x} = a, \qquad \frac{\partial u}{\partial y} = b, \qquad \frac{\partial v}{\partial y} = a, \qquad \frac{\partial v}{\partial x} = -b$$

From these results we can immediately see that

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k p^*(k)$ satisfies the Cauchy-Riemann equations, and so this term is analytic.
b) Let

$$f = w_k^* p(k) = (x - jy)(a + jb) = (ax + by) + j(bx - ay)$$

Let $f = u + jv$, with

$$u = ax + by, \qquad v = bx - ay$$

Hence,

$$\frac{\partial u}{\partial x} = a, \qquad \frac{\partial u}{\partial y} = b, \qquad \frac{\partial v}{\partial x} = b, \qquad \frac{\partial v}{\partial y} = -a$$

From these results we immediately see that

$$\frac{\partial u}{\partial x} \neq \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} \neq -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k^* p(k)$ does not satisfy the Cauchy-Riemann equations, and so this term is not analytic.
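As an illustrative numerical check of both parts (with arbitrary assumed values for $w_k$ and $p(k)$), the following numpy sketch exploits the fact that a function is complex-differentiable at a point only if its difference quotient is independent of the direction of the increment:

```python
import numpy as np

# A function f(w) is complex-differentiable at w only if the difference
# quotient (f(w + h) - f(w)) / h approaches one value from every direction.
p = 0.7 - 0.3j                      # arbitrary assumed value of p(k)
w = 1.2 + 0.8j                      # arbitrary assumed test point w_k

cases = {
    "w p*(k), part a)": lambda t: t * np.conj(p),
    "w* p(k), part b)": lambda t: np.conj(t) * p,
}
for name, f in cases.items():
    h = 1e-6 * np.exp(1j * np.linspace(0.0, 2 * np.pi, 8, endpoint=False))
    q = (f(w + h) - f(w)) / h       # difference quotients, one per direction
    print(name, "spread =", np.max(np.abs(q - q[0])))
# part a) gives spread ~ 0 (analytic); part b) gives spread ~ 2|p| (not analytic)
```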
Problem 2.2

a) From the Wiener-Hopf equation, we have

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} \tag{1}$$

We are given that

$$\mathbf{R} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}$$

Hence the inverse of $\mathbf{R}$ is

$$\mathbf{R}^{-1} = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$$

Using Equation (1), we therefore get

$$\mathbf{w}_0 = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \frac{1}{0.75}\begin{bmatrix} 0.375 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$

b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_0 = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$
c) The eigenvalues of the matrix $\mathbf{R}$ are the roots of the characteristic equation

$$(1 - \lambda)^2 - (0.5)^2 = 0$$

That is, the two roots are

$$\lambda_1 = 0.5 \qquad \text{and} \qquad \lambda_2 = 1.5$$

The associated eigenvectors are defined by $\mathbf{R}\mathbf{q} = \lambda\mathbf{q}$. For $\lambda_1 = 0.5$, we have

$$\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix} = 0.5\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix}$$

Expanded, this becomes

$$q_{11} + 0.5q_{12} = 0.5q_{11}, \qquad 0.5q_{11} + q_{12} = 0.5q_{12}$$

Therefore, $q_{11} = -q_{12}$. Normalizing the eigenvector $\mathbf{q}_1$ to unit length, we therefore have

$$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

Similarly, for the eigenvalue $\lambda_2 = 1.5$, we may show that

$$\mathbf{q}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$\mathbf{w}_0 = \sum_{i=1}^{2}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\mathbf{p} = \left(\frac{1}{\lambda_1}\mathbf{q}_1\mathbf{q}_1^H + \frac{1}{\lambda_2}\mathbf{q}_2\mathbf{q}_2^H\right)\mathbf{p}$$

$$= \left(\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} \frac{4}{3} & -\frac{2}{3} \\ -\frac{2}{3} & \frac{4}{3} \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$
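As a quick numerical cross-check of parts a) through c) (an illustration, not part of the original solution), the following numpy sketch solves the Wiener-Hopf equation directly and again through the eigendecomposition of $\mathbf{R}$:

```python
import numpy as np

# Data given in Problem 2.2
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

# Direct Wiener solution w0 = R^{-1} p
w0 = np.linalg.solve(R, p)
print(w0)                          # -> [0.5, 0.0]

# The same solution assembled from R = sum_i lambda_i q_i q_i^H
lam, Q = np.linalg.eigh(R)         # lam = [0.5, 1.5]
w0_eig = sum((Q[:, i] @ p) / lam[i] * Q[:, i] for i in range(len(lam)))
print(np.allclose(w0, w0_eig))     # -> True
```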
Problem 2.3

a) From the Wiener-Hopf equation, we have

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} \tag{1}$$

We are given

$$\mathbf{R} = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}^T$$

Hence, the use of these values in Equation (1) yields

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} = \begin{bmatrix} 1.33 & -0.67 & 0 \\ -0.67 & 1.67 & -0.67 \\ 0 & -0.67 & 1.33 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$$
b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_0 = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$
c) The eigenvalues of the matrix $\mathbf{R}$ are

$$\lambda_1 = 0.4069, \qquad \lambda_2 = 0.75, \qquad \lambda_3 = 1.8431$$

The corresponding eigenvectors constitute the orthogonal matrix

$$\mathbf{Q} = \begin{bmatrix} 0.4544 & -0.7071 & 0.5418 \\ -0.7662 & 0 & 0.6426 \\ 0.4544 & 0.7071 & 0.5418 \end{bmatrix}$$
Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$\mathbf{w}_0 = \sum_{i=1}^{3}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\mathbf{p}$$

$$= \left(\frac{1}{0.4069}\begin{bmatrix} 0.2065 & -0.3482 & 0.2065 \\ -0.3482 & 0.5871 & -0.3482 \\ 0.2065 & -0.3482 & 0.2065 \end{bmatrix} + \frac{1}{0.75}\begin{bmatrix} 0.5 & 0 & -0.5 \\ 0 & 0 & 0 \\ -0.5 & 0 & 0.5 \end{bmatrix} + \frac{1}{1.8431}\begin{bmatrix} 0.2935 & 0.3482 & 0.2935 \\ 0.3482 & 0.4129 & 0.3482 \\ 0.2935 & 0.3482 & 0.2935 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix}$$

which again yields

$$\mathbf{w}_0 = \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$$
Problem 2.4
By definition, the correlation matrix is

$$\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$$

where

$$\mathbf{u}(n) = \begin{bmatrix} u(n) & u(n-1) & \cdots & u(0) \end{bmatrix}^T$$

Invoking the ergodicity theorem, we may estimate $\mathbf{R}$ by the time average

$$\mathbf{R}(N) = \frac{1}{N+1}\sum_{n=0}^{N}\mathbf{u}(n)\mathbf{u}^H(n)$$

Likewise, we may compute the cross-correlation vector $\mathbf{p} = E[\mathbf{u}(n)d^*(n)]$ as the time average

$$\mathbf{p}(N) = \frac{1}{N+1}\sum_{n=0}^{N}\mathbf{u}(n)d^*(n)$$

The tap-weight vector of the Wiener filter is thus defined by the matrix product

$$\mathbf{w}_0(N) = \left[\sum_{n=0}^{N}\mathbf{u}(n)\mathbf{u}^H(n)\right]^{-1}\left[\sum_{n=0}^{N}\mathbf{u}(n)d^*(n)\right]$$
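A minimal sketch of these time-average estimators, assuming a toy signal model (a white input with $d(n) = 0.9\,u(n)$, not from the text) and a fixed filter length rather than the growing window above:

```python
import numpy as np

# Estimate R and p by time averaging over one realization, then solve
# for the Wiener filter (toy model: d(n) = 0.9 u(n), white input).
rng = np.random.default_rng(0)
N, M = 10_000, 3
x = rng.standard_normal(N)
d = 0.9 * x

# Tap-input vectors u(n) = [x(n), x(n-1), ..., x(n-M+1)]^T as columns
U = np.stack([np.roll(x, k) for k in range(M)])[:, M:]   # drop wrapped samples
d = d[M:]

R_hat = U @ U.conj().T / U.shape[1]    # time-averaged correlation matrix
p_hat = U @ d.conj() / U.shape[1]      # time-averaged cross-correlation vector
w0 = np.linalg.solve(R_hat, p_hat)
print(np.round(w0, 2))                 # -> approximately [0.9, 0, 0]
```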
Problem 2.5

a) The correlation matrix is

$$\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)] = E[(\alpha(n)\mathbf{s}(n) + \mathbf{v}(n))(\alpha^*(n)\mathbf{s}^H(n) + \mathbf{v}^H(n))]$$

With $\alpha(n)$ uncorrelated with $\mathbf{v}(n)$, we have

$$\mathbf{R} = E[|\alpha(n)|^2]\mathbf{s}(n)\mathbf{s}^H(n) + E[\mathbf{v}(n)\mathbf{v}^H(n)] = \sigma_\alpha^2\,\mathbf{s}(n)\mathbf{s}^H(n) + \mathbf{R}_v \tag{1}$$

where $\mathbf{R}_v$ is the correlation matrix of $\mathbf{v}(n)$.

b) The cross-correlation vector between the input vector $\mathbf{u}(n)$ and the desired response $d(n)$ is

$$\mathbf{p} = E[\mathbf{u}(n)d^*(n)] \tag{2}$$

If $d(n)$ is uncorrelated with $\mathbf{u}(n)$, we have $\mathbf{p} = \mathbf{0}$. Hence, the tap-weight vector of the Wiener filter is

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} = \mathbf{0}$$
c) With $\sigma_\alpha^2 = 0$, Equation (1) reduces to $\mathbf{R} = \mathbf{R}_v$. With the desired response

$$d(n) = v(n - k)$$

Equation (2) yields

$$\mathbf{p} = E[(\alpha(n)\mathbf{s}(n) + \mathbf{v}(n))v^*(n-k)] = E[\mathbf{v}(n)v^*(n-k)]$$

$$= E\left[\begin{bmatrix} v(n) \\ v(n-1) \\ \vdots \\ v(n-M+1) \end{bmatrix}v^*(n-k)\right] = \begin{bmatrix} r_v(k) \\ r_v(k-1) \\ \vdots \\ r_v(k-M+1) \end{bmatrix}, \qquad 0 \le k \le M-1 \tag{3}$$

where $r_v(k)$ is the autocorrelation of $v(n)$ for lag $k$. Accordingly, the tap-weight vector of the (optimum) Wiener filter is

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} = \mathbf{R}_v^{-1}\mathbf{p}$$

where $\mathbf{p}$ is defined in Equation (3).
d) For a desired response

$$d(n) = \alpha(n)\exp(-j\omega\tau)$$

the cross-correlation vector $\mathbf{p}$ is

$$\mathbf{p} = E[\mathbf{u}(n)d^*(n)] = E[(\alpha(n)\mathbf{s}(n) + \mathbf{v}(n))\,\alpha^*(n)\exp(j\omega\tau)] = \mathbf{s}(n)\exp(j\omega\tau)\,E[|\alpha(n)|^2] = \sigma_\alpha^2\,\mathbf{s}(n)\exp(j\omega\tau)$$

With

$$\mathbf{s}(n) = \begin{bmatrix} 1 \\ \exp(-j\omega) \\ \vdots \\ \exp(-j\omega(M-1)) \end{bmatrix}$$

this becomes

$$\mathbf{p} = \sigma_\alpha^2\begin{bmatrix} \exp(j\omega\tau) \\ \exp(j\omega(\tau-1)) \\ \vdots \\ \exp(j\omega(\tau-M+1)) \end{bmatrix}$$

The corresponding value of the tap-weight vector of the Wiener filter is

$$\mathbf{w}_0 = \sigma_\alpha^2\left(\sigma_\alpha^2\,\mathbf{s}(n)\mathbf{s}^H(n) + \mathbf{R}_v\right)^{-1}\begin{bmatrix} \exp(j\omega\tau) \\ \exp(j\omega(\tau-1)) \\ \vdots \\ \exp(j\omega(\tau-M+1)) \end{bmatrix} = \left(\mathbf{s}(n)\mathbf{s}^H(n) + \frac{1}{\sigma_\alpha^2}\mathbf{R}_v\right)^{-1}\begin{bmatrix} \exp(j\omega\tau) \\ \exp(j\omega(\tau-1)) \\ \vdots \\ \exp(j\omega(\tau-M+1)) \end{bmatrix}$$
Problem 2.6

The optimum filtering solution is defined by the Wiener-Hopf equation

$$\mathbf{R}\mathbf{w}_0 = \mathbf{p} \tag{1}$$

for which the minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_0 \tag{2}$$

Combining Equations (1) and (2) into a single relation:

$$\begin{bmatrix} \sigma_d^2 & \mathbf{p}^H \\ \mathbf{p} & \mathbf{R} \end{bmatrix}\begin{bmatrix} 1 \\ -\mathbf{w}_0 \end{bmatrix} = \begin{bmatrix} J_{\min} \\ \mathbf{0} \end{bmatrix}$$

Define

$$\mathbf{A} = \begin{bmatrix} \sigma_d^2 & \mathbf{p}^H \\ \mathbf{p} & \mathbf{R} \end{bmatrix} \tag{3}$$
Since

$$\sigma_d^2 = E[d(n)d^*(n)], \qquad \mathbf{p} = E[\mathbf{u}(n)d^*(n)], \qquad \mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$$

we may rewrite Equation (3) as

$$\mathbf{A} = \begin{bmatrix} E[d(n)d^*(n)] & E[d(n)\mathbf{u}^H(n)] \\ E[\mathbf{u}(n)d^*(n)] & E[\mathbf{u}(n)\mathbf{u}^H(n)] \end{bmatrix} = E\left[\begin{bmatrix} d(n) \\ \mathbf{u}(n) \end{bmatrix}\begin{bmatrix} d^*(n) & \mathbf{u}^H(n) \end{bmatrix}\right]$$
The minimum mean-square error equals

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_0 \tag{4}$$

For an arbitrary tap-weight vector $\mathbf{w}$, the mean-square error equals

$$J(\mathbf{w}) = \sigma_d^2 - \mathbf{p}^H\mathbf{w} - \mathbf{w}^H\mathbf{p} + \mathbf{w}^H\mathbf{R}\mathbf{w}$$

Eliminating $\sigma_d^2$ between this expression and Equation (4):

$$J(\mathbf{w}) = J_{\min} + \mathbf{p}^H\mathbf{w}_0 - \mathbf{p}^H\mathbf{w} - \mathbf{w}^H\mathbf{p} + \mathbf{w}^H\mathbf{R}\mathbf{w} \tag{5}$$

Eliminating $\mathbf{p}$ between Equation (1) and Equation (5):

$$J(\mathbf{w}) = J_{\min} + \mathbf{w}_0^H\mathbf{R}\mathbf{w}_0 - \mathbf{w}_0^H\mathbf{R}\mathbf{w} - \mathbf{w}^H\mathbf{R}\mathbf{w}_0 + \mathbf{w}^H\mathbf{R}\mathbf{w} \tag{6}$$

where we have used the property $\mathbf{R}^H = \mathbf{R}$. We may rewrite Equation (6) as

$$J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_0)^H\mathbf{R}(\mathbf{w} - \mathbf{w}_0)$$

which clearly shows that $J(\mathbf{w}_0) = J_{\min}$.
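The canonical form just derived is easy to confirm numerically; the sketch below reuses the data of Problem 2.2 together with an assumed $\sigma_d^2 = 1$:

```python
import numpy as np

R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.5, 0.25])
sigma_d2 = 1.0                       # assumed variance of d(n)

w0 = np.linalg.solve(R, p)
J_min = sigma_d2 - p @ w0

w = np.array([0.3, -0.7])            # an arbitrary test weight vector
J_direct = sigma_d2 - p @ w - w @ p + w @ R @ w
J_canonical = J_min + (w - w0) @ R @ (w - w0)
print(np.isclose(J_direct, J_canonical))   # -> True
```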
Problem 2.7

The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{R}^{-1}\mathbf{p} \tag{1}$$

Using the spectral theorem, we may express the correlation matrix $\mathbf{R}$ as

$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^H = \sum_{k=1}^{M}\lambda_k\mathbf{q}_k\mathbf{q}_k^H \tag{2}$$

so that $\mathbf{R}^{-1} = \sum_{k=1}^{M}\frac{1}{\lambda_k}\mathbf{q}_k\mathbf{q}_k^H$. Substituting Equation (2) into Equation (1):

$$J_{\min} = \sigma_d^2 - \sum_{k=1}^{M}\frac{1}{\lambda_k}\mathbf{p}^H\mathbf{q}_k\mathbf{q}_k^H\mathbf{p} = \sigma_d^2 - \sum_{k=1}^{M}\frac{1}{\lambda_k}\left|\mathbf{p}^H\mathbf{q}_k\right|^2$$
Problem 2.8

When the length of the Wiener filter is greater than the model order $m$, the tail end of the tap-weight vector of the Wiener filter is zero. Therefore, the only possible solution for the case of an over-fitted model is

$$\mathbf{w}_0 = \begin{bmatrix} \mathbf{a}_m \\ \mathbf{0} \end{bmatrix}$$
Problem 2.9

a) The Wiener solution is defined by

$$\mathbf{R}_M\mathbf{a}_M = \mathbf{p}_M$$

Partitioning this equation in accordance with $\mathbf{a}_M = \begin{bmatrix} \mathbf{a}_m \\ \mathbf{0}_{M-m} \end{bmatrix}$:

$$\begin{bmatrix} \mathbf{R}_m & \mathbf{r}_{M-m} \\ \mathbf{r}_{M-m}^H & \mathbf{R}_{M-m,M-m} \end{bmatrix}\begin{bmatrix} \mathbf{a}_m \\ \mathbf{0}_{M-m} \end{bmatrix} = \begin{bmatrix} \mathbf{p}_m \\ \mathbf{p}_{M-m} \end{bmatrix}$$

Hence,

$$\mathbf{R}_m\mathbf{a}_m = \mathbf{p}_m$$

$$\mathbf{p}_{M-m} = \mathbf{r}_{M-m}^H\mathbf{a}_m = \mathbf{r}_{M-m}^H\mathbf{R}_m^{-1}\mathbf{p}_m \tag{1}$$

b) Applying the conditions of Equation (1) to the example in Section 2.7 of the textbook,

$$\mathbf{r}_{M-m}^H = \begin{bmatrix} -0.05 & 0.1 & 0.5 \end{bmatrix}, \qquad \mathbf{a}_m = \begin{bmatrix} 0.8719 \\ -0.9129 \\ 0.2444 \end{bmatrix}$$

The last entry in the 4-by-1 vector $\mathbf{p}$ is therefore

$$\mathbf{r}_{M-m}^H\mathbf{a}_m = -0.0436 - 0.0912 + 0.1222 = -0.0126$$
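The structure exploited in Problems 2.8 and 2.9 can be illustrated numerically; in the sketch below the order-$m$ model coefficients $\mathbf{a}_m = [0.6, -0.3]^T$ and the white input are assumptions made for the demonstration, not values from the text:

```python
import numpy as np

# A length-M > m Wiener filter fitted to an order-m model puts zeros in
# its tail, as asserted in Problem 2.8.
rng = np.random.default_rng(1)
m, M, N = 2, 4, 5000
a_m = np.array([0.6, -0.3])                  # assumed model coefficients

x = rng.standard_normal(N + M)
U = np.stack([x[M - k: N + M - k] for k in range(M)])  # tap-input matrix
d = a_m @ U[:m]                              # d(n) depends on the first m taps only

R = U @ U.T / N
p = U @ d / N
w0 = np.linalg.solve(R, p)
print(np.round(w0, 3))                       # -> [0.6, -0.3, 0, 0]
```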
Problem 2.10

The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_0 = \sigma_d^2 - \mathbf{p}^H\mathbf{R}^{-1}\mathbf{p}$$

When $m = 0$,

$$J_{\min} = \sigma_d^2 = 1.0$$

When $m = 1$,

$$J_{\min} = 1 - 0.5 \times \frac{1}{1.1} \times 0.5 = 0.7727$$

When $m = 2$,

$$J_{\min} = 1 - \begin{bmatrix} 0.5 & -0.4 \end{bmatrix}\begin{bmatrix} 1.1 & 0.5 \\ 0.5 & 1.1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5 \\ -0.4 \end{bmatrix} = 1 - 0.6781 = 0.3219$$

When $m = 3$,

$$J_{\min} = 1 - \begin{bmatrix} 0.5 & -0.4 & -0.2 \end{bmatrix}\begin{bmatrix} 1.1 & 0.5 & 0.1 \\ 0.5 & 1.1 & 0.5 \\ 0.1 & 0.5 & 1.1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5 \\ -0.4 \\ -0.2 \end{bmatrix} = 1 - 0.6859 = 0.3141$$

When $m = 4$,

$$J_{\min} = 1 - 0.6859 = 0.3141$$

Thus any further increase in the filter order beyond $m = 3$ does not produce any meaningful reduction in the minimum mean-square error.
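The whole sequence of $J_{\min}$ values can be reproduced in a few lines of numpy, using the autocorrelation values $r(0) = 1.1$, $r(1) = 0.5$, $r(2) = 0.1$ and the cross-correlation entries quoted above:

```python
import numpy as np

r = np.array([1.1, 0.5, 0.1])        # autocorrelation r(0), r(1), r(2)
p = np.array([0.5, -0.4, -0.2])      # cross-correlation vector
sigma_d2 = 1.0

print(0, sigma_d2)                   # m = 0: no filtering at all
for m in range(1, 4):
    R = np.array([[r[abs(i - j)] for j in range(m)] for i in range(m)])
    print(m, round(sigma_d2 - p[:m] @ np.linalg.solve(R, p[:m]), 4))
# -> 1.0, 0.7727, 0.3219, 0.3141
```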
Problem 2.11

[Block diagrams: in part (a), the noise $v_1(n)$ drives a summing junction whose output $d(n)$ is fed back through a unit delay $z^{-1}$ with coefficient 0.8458; in part (b), a similar loop with coefficient 0.9458 around a unit delay generates $x(n)$, which is added to the noise $v_2(n)$ to form the observed signal $u(n)$.]
a) From the block diagrams,

$$u(n) = x(n) + v_2(n) \tag{1}$$

$$d(n) = -0.8458\,d(n-1) + v_1(n) \tag{2}$$

$$x(n) = d(n) + 0.9458\,x(n-1) \tag{3}$$

Equation (3) rearranged to solve for $d(n)$ is

$$d(n) = x(n) - 0.9458\,x(n-1)$$

Using Equation (2) and Equation (3):

$$x(n) - 0.9458\,x(n-1) = 0.8458\left[-x(n-1) + 0.9458\,x(n-2)\right] + v_1(n)$$

Rearranging the terms produces

$$x(n) = (0.9458 - 0.8458)\,x(n-1) + 0.8\,x(n-2) + v_1(n) = 0.1\,x(n-1) + 0.8\,x(n-2) + v_1(n)$$
b) We have

$$u(n) = x(n) + v_2(n)$$

where $x(n)$ and $v_2(n)$ are uncorrelated; therefore,

$$\mathbf{R} = \mathbf{R}_x + \mathbf{R}_{v_2}$$

with

$$\mathbf{R}_x = \begin{bmatrix} r_x(0) & r_x(1) \\ r_x(1) & r_x(0) \end{bmatrix}$$

Writing the result of part a) as the second-order AR model $x(n) + a_1 x(n-1) + a_2 x(n-2) = v_1(n)$ with $a_1 = -0.1$ and $a_2 = -0.8$, the standard AR(2) relations give

$$r_x(0) = \sigma_x^2 = \left(\frac{1+a_2}{1-a_2}\right)\frac{\sigma_1^2}{(1+a_2)^2 - a_1^2} = 1$$

$$r_x(1) = \frac{-a_1}{1+a_2}\,r_x(0) = 0.5$$

Hence,

$$\mathbf{R}_x = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad \mathbf{R}_{v_2} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \qquad \mathbf{R} = \mathbf{R}_x + \mathbf{R}_{v_2} = \begin{bmatrix} 1.1 & 0.5 \\ 0.5 & 1.1 \end{bmatrix}$$

Next,

$$\mathbf{p} = \begin{bmatrix} p(0) \\ p(1) \end{bmatrix}, \qquad p(k) = E[u(n-k)d(n)], \qquad k = 0, 1$$

With $d(n) = x(n) + b_1 x(n-1)$, where $b_1 = -0.9458$:

$$p(0) = r_x(0) + b_1 r_x(-1) = 1 - 0.9458 \times 0.5 = 0.5271$$

$$p(1) = r_x(1) + b_1 r_x(0) = 0.5 - 0.9458 = -0.4458$$

Therefore,

$$\mathbf{p} = \begin{bmatrix} 0.5271 \\ -0.4458 \end{bmatrix}$$

c) The optimum weight vector is given by $\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p}$; hence,

$$\mathbf{w}_0 = \begin{bmatrix} 1.1 & 0.5 \\ 0.5 & 1.1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5271 \\ -0.4458 \end{bmatrix} = \begin{bmatrix} 0.8362 \\ -0.7854 \end{bmatrix}$$
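The numbers in parts b) and c) can be verified directly with the following sketch:

```python
import numpy as np

# Numerical solution of Problem 2.11 b)-c), with the values derived above
Rx = np.array([[1.0, 0.5], [0.5, 1.0]])
Rv2 = 0.1 * np.eye(2)
R = Rx + Rv2

b1 = -0.9458
p = np.array([1.0 + b1 * 0.5, 0.5 + b1 * 1.0])   # [0.5271, -0.4458]
w0 = np.linalg.solve(R, p)
print(np.round(w0, 4))                            # -> [0.8362, -0.7854]
```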
Problem 2.12
a) For $M = 3$ taps, the correlation matrix of the tap inputs is

$$\mathbf{R} = \begin{bmatrix} 1.1 & 0.5 & 0.85 \\ 0.5 & 1.1 & 0.5 \\ 0.85 & 0.5 & 1.1 \end{bmatrix}$$

The cross-correlation vector between the tap inputs and the desired response is

$$\mathbf{p} = \begin{bmatrix} 0.527 \\ -0.446 \\ 0.377 \end{bmatrix}$$

b) The inverse of the correlation matrix is

$$\mathbf{R}^{-1} = \begin{bmatrix} 2.334 & -0.304 & -1.666 \\ -0.304 & 1.186 & -0.304 \\ -1.666 & -0.304 & 2.334 \end{bmatrix}$$

Hence, the optimum weight vector is

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} = \begin{bmatrix} 0.738 \\ -0.803 \\ 0.138 \end{bmatrix}$$

The minimum mean-square error is

$$J_{\min} = 0.15$$
Problem 2.13

a) The correlation matrix $\mathbf{R}$ is

$$\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$$

$$= E[|A_1|^2]\begin{bmatrix} e^{-j\omega_1 n} \\ e^{-j\omega_1(n-1)} \\ \vdots \\ e^{-j\omega_1(n-M+1)} \end{bmatrix}\begin{bmatrix} e^{+j\omega_1 n} & e^{+j\omega_1(n-1)} & \cdots & e^{+j\omega_1(n-M+1)} \end{bmatrix} + E[|v(n)|^2]\,\mathbf{I}$$

$$= \sigma_1^2\,\mathbf{s}(\omega_1)\mathbf{s}^H(\omega_1) + \sigma_v^2\,\mathbf{I}$$

where $\mathbf{I}$ is the identity matrix.
b) The tap-weight vector of the Wiener filter is

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p}$$

From part a),

$$\mathbf{R} = \sigma_1^2\,\mathbf{s}(\omega_1)\mathbf{s}^H(\omega_1) + \sigma_v^2\,\mathbf{I}$$

We are given

$$\mathbf{p} = \sigma_0^2\,\mathbf{s}(\omega_0)$$

To invert the matrix $\mathbf{R}$, we use the matrix inversion lemma (see Chapter 10), as described here. If

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^H$$

then

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}(\mathbf{D} + \mathbf{C}^H\mathbf{B}\mathbf{C})^{-1}\mathbf{C}^H\mathbf{B}$$

In our case,

$$\mathbf{A} = \mathbf{R}, \qquad \mathbf{B}^{-1} = \sigma_v^2\,\mathbf{I}, \qquad \mathbf{D}^{-1} = \sigma_1^2, \qquad \mathbf{C} = \mathbf{s}(\omega_1)$$

Hence,

$$\mathbf{R}^{-1} = \frac{1}{\sigma_v^2}\left[\mathbf{I} - \frac{\mathbf{s}(\omega_1)\mathbf{s}^H(\omega_1)}{\dfrac{\sigma_v^2}{\sigma_1^2} + \mathbf{s}^H(\omega_1)\mathbf{s}(\omega_1)}\right]$$

The corresponding value of the Wiener tap-weight vector is

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{p} = \frac{\sigma_0^2}{\sigma_v^2}\left[\mathbf{s}(\omega_0) - \frac{\mathbf{s}(\omega_1)\mathbf{s}^H(\omega_1)\mathbf{s}(\omega_0)}{\dfrac{\sigma_v^2}{\sigma_1^2} + \mathbf{s}^H(\omega_1)\mathbf{s}(\omega_1)}\right]$$

We note that $\mathbf{s}^H(\omega_1)\mathbf{s}(\omega_1) = M$, which is a scalar; hence,

$$\mathbf{w}_0 = \frac{\sigma_0^2}{\sigma_v^2}\left[\mathbf{s}(\omega_0) - \frac{\sigma_1^2\,\mathbf{s}^H(\omega_1)\mathbf{s}(\omega_0)}{\sigma_v^2 + M\sigma_1^2}\,\mathbf{s}(\omega_1)\right]$$
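A quick numerical check that the closed form obtained from the matrix inversion lemma matches a direct inverse; the filter length and the values of $\omega_0$, $\omega_1$, $\sigma_0$, $\sigma_1$, $\sigma_v$ below are assumptions made for the test:

```python
import numpy as np

M = 8
omega1, omega0 = 1.2, 0.6              # assumed angular frequencies
sig1, sig0, sigv = 2.0, 1.5, 0.4       # assumed sigma_1, sigma_0, sigma_v

s = lambda w: np.exp(-1j * w * np.arange(M))   # steering vector s(omega)
R = sig1**2 * np.outer(s(omega1), s(omega1).conj()) + sigv**2 * np.eye(M)
p = sig0**2 * s(omega0)

w_direct = np.linalg.solve(R, p)
w_lemma = (sig0**2 / sigv**2) * (
    s(omega0)
    - (sig1**2 * (s(omega1).conj() @ s(omega0)) / (sigv**2 + M * sig1**2))
    * s(omega1)
)
print(np.allclose(w_direct, w_lemma))  # -> True
```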
Problem 2.14

The output of the array processor equals

$$e(n) = u(1, n) - w\,u(2, n)$$

The mean-square error equals

$$J(w) = E[|e(n)|^2] = E[(u(1,n) - w\,u(2,n))(u^*(1,n) - w^*u^*(2,n))]$$

$$= E[|u(1,n)|^2] + |w|^2E[|u(2,n)|^2] - wE[u(2,n)u^*(1,n)] - w^*E[u(1,n)u^*(2,n)]$$

Differentiating $J(w)$ with respect to $w$:

$$\frac{\partial J}{\partial w} = -2E[u(1,n)u^*(2,n)] + 2wE[|u(2,n)|^2]$$

Putting $\frac{\partial J}{\partial w} = 0$ and solving for the optimum value of $w$:

$$w_0 = \frac{E[u(1,n)u^*(2,n)]}{E[|u(2,n)|^2]}$$
Problem 2.15

Define the index of performance (i.e., the cost function)

$$J(\mathbf{w}) = E[|e(n)|^2] + \mathbf{c}^H\mathbf{s}^H\mathbf{w} + \mathbf{w}^H\mathbf{s}\mathbf{c} - 2\mathbf{c}^HD^{1/2}\mathbf{1}$$

$$= \mathbf{w}^H\mathbf{R}\mathbf{w} + \mathbf{c}^H\mathbf{s}^H\mathbf{w} + \mathbf{w}^H\mathbf{s}\mathbf{c} - 2\mathbf{c}^HD^{1/2}\mathbf{1}$$

Differentiate $J(\mathbf{w})$ with respect to $\mathbf{w}$ and set the result equal to zero:

$$\frac{\partial J}{\partial\mathbf{w}} = 2\mathbf{R}\mathbf{w} + 2\mathbf{s}\mathbf{c} = \mathbf{0}$$

Hence,

$$\mathbf{w}_0 = -\mathbf{R}^{-1}\mathbf{s}\mathbf{c}$$

But we must constrain $\mathbf{w}_0$ to satisfy

$$\mathbf{s}^H\mathbf{w}_0 = D^{1/2}\mathbf{1}$$

Therefore, the vector $\mathbf{c}$ equals

$$\mathbf{c} = -(\mathbf{s}^H\mathbf{R}^{-1}\mathbf{s})^{-1}D^{1/2}\mathbf{1}$$

Correspondingly, the optimum weight vector equals

$$\mathbf{w}_0 = \mathbf{R}^{-1}\mathbf{s}(\mathbf{s}^H\mathbf{R}^{-1}\mathbf{s})^{-1}D^{1/2}\mathbf{1}$$
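A minimal sketch of this constrained solution for the special case of a single constraint $\mathbf{s}^H\mathbf{w}_0 = D^{1/2}$ (the MVDR beamformer); the array size, the look and interferer directions, and the powers are assumed values:

```python
import numpy as np

M, D = 6, 1.0
look, jam = 0.8, 2.0                            # assumed angular frequencies
sv = lambda w: np.exp(-1j * w * np.arange(M))   # steering vector

s = sv(look)
R = 4.0 * np.outer(sv(jam), sv(jam).conj()) + 0.1 * np.eye(M)  # interferer + noise

Rinv_s = np.linalg.solve(R, s)
w0 = Rinv_s * np.sqrt(D) / (s.conj() @ Rinv_s)  # w0 = R^{-1}s (s^H R^{-1}s)^{-1} D^{1/2}

print(np.isclose(s.conj() @ w0, np.sqrt(D)))    # constraint satisfied -> True
print(abs(sv(jam).conj() @ w0))                 # interferer response strongly suppressed
```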
Problem 2.16

The weight vector $\mathbf{w}$ of the beamformer that maximizes the output signal-to-noise ratio

$$(\text{SNR})_0 = \frac{\mathbf{w}^H\mathbf{R}_s\mathbf{w}}{\mathbf{w}^H\mathbf{R}_v\mathbf{w}}$$

is derived in part b) of Problem 2.18, where it is shown that the optimum weight vector $\mathbf{w}_{SN}$ so defined is given by

$$\mathbf{w}_{SN} = \mathbf{R}_v^{-1}\mathbf{s} \tag{1}$$

where $\mathbf{s}$ is the signal component and $\mathbf{R}_v$ is the correlation matrix of the noise $\mathbf{v}(n)$. On the other hand, the optimum weight vector of the LCMV beamformer is defined by

$$\mathbf{w}_0 = \frac{g^*\,\mathbf{R}^{-1}\mathbf{s}(\phi)}{\mathbf{s}^H(\phi)\mathbf{R}^{-1}\mathbf{s}(\phi)} \tag{2}$$

where $\mathbf{s}(\phi)$ is the steering vector. In general, formulas (1) and (2) yield different values for the weight vector of the beamformer.
Problem 2.17

Let $\tau_i$ denote the propagation delay, measured from the zero-time reference to the $i$th element of a nonuniformly spaced array, for a plane wave arriving from a direction defined by angle $\theta$ with respect to the perpendicular to the array. For a signal of angular frequency $\omega$, this delay amounts to a phase shift equal to $-\omega\tau_i$. Let the phase shifts for all elements of the array be collected together in a column vector denoted by $\mathbf{d}(\omega, \theta)$. The response of a beamformer with weight vector $\mathbf{w}$ to a signal of angular frequency $\omega$ arriving from angle $\theta$ is then $\mathbf{w}^H\mathbf{d}(\omega, \theta)$. Hence, constraining the response of the array at $\omega$ and $\theta$ to some value $g$ involves the linear constraint

$$\mathbf{w}^H\mathbf{d}(\omega, \theta) = g$$

Thus, the constraint vector $\mathbf{d}(\omega, \theta)$ serves the purpose of generalizing the idea of an LCMV beamformer beyond the case of a uniformly spaced array. Everything else is the same as before, except for the fact that the correlation matrix of the received signal is no longer Toeplitz for a nonuniformly spaced array.
Problem 2.18

a) Under hypothesis $H_1$, we have

$$\mathbf{u} = \mathbf{s} + \mathbf{v}$$

The correlation matrix of $\mathbf{u}$ equals

$$\mathbf{R} = E[\mathbf{u}\mathbf{u}^T] = \mathbf{s}\mathbf{s}^T + \mathbf{R}_N, \qquad \text{where } \mathbf{R}_N = E[\mathbf{v}\mathbf{v}^T]$$

The tap-weight vector $\mathbf{w}_k$ is chosen so that $\mathbf{w}_k^T\mathbf{u}$ yields an optimum estimate of the $k$th element of $\mathbf{s}$. Thus, with $s(k)$ treated as the desired response, the cross-correlation vector between $\mathbf{u}$ and $s(k)$ equals

$$\mathbf{p}_k = E[\mathbf{u}\,s(k)] = \mathbf{s}\,s(k), \qquad k = 1, 2, \ldots, M$$

Hence, the Wiener-Hopf equation yields the optimum value of $\mathbf{w}_k$ as

$$\mathbf{w}_{k0} = \mathbf{R}^{-1}\mathbf{p}_k = (\mathbf{s}\mathbf{s}^T + \mathbf{R}_N)^{-1}\mathbf{s}\,s(k), \qquad k = 1, 2, \ldots, M \tag{1}$$

To apply the matrix inversion lemma (introduced in Problem 2.13), we let

$$\mathbf{A} = \mathbf{R}, \qquad \mathbf{B}^{-1} = \mathbf{R}_N, \qquad \mathbf{C} = \mathbf{s}, \qquad \mathbf{D} = 1$$

Hence,

$$\mathbf{R}^{-1} = \mathbf{R}_N^{-1} - \frac{\mathbf{R}_N^{-1}\mathbf{s}\mathbf{s}^T\mathbf{R}_N^{-1}}{1 + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}} \tag{2}$$

Substituting Equation (2) into Equation (1) yields

$$\mathbf{w}_{k0} = \left[\mathbf{R}_N^{-1} - \frac{\mathbf{R}_N^{-1}\mathbf{s}\mathbf{s}^T\mathbf{R}_N^{-1}}{1 + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}}\right]\mathbf{s}\,s(k) = \frac{\mathbf{R}_N^{-1}\mathbf{s}(1 + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}) - \mathbf{R}_N^{-1}\mathbf{s}\,\mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}}{1 + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}}\,s(k) = \frac{s(k)}{1 + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s}}\,\mathbf{R}_N^{-1}\mathbf{s}$$
b) The output signal-to-noise ratio is

$$\text{SNR} = \frac{E[(\mathbf{w}^T\mathbf{s})^2]}{E[(\mathbf{w}^T\mathbf{v})^2]} = \frac{\mathbf{w}^T\mathbf{s}\mathbf{s}^T\mathbf{w}}{\mathbf{w}^TE[\mathbf{v}\mathbf{v}^T]\mathbf{w}} = \frac{\mathbf{w}^T\mathbf{s}\mathbf{s}^T\mathbf{w}}{\mathbf{w}^T\mathbf{R}_N\mathbf{w}} \tag{3}$$

Since $\mathbf{R}_N$ is positive definite, we may write $\mathbf{R}_N = \mathbf{R}_N^{1/2}\mathbf{R}_N^{1/2}$. Define the vector

$$\mathbf{a} = \mathbf{R}_N^{1/2}\mathbf{w}, \qquad \text{or equivalently} \qquad \mathbf{w} = \mathbf{R}_N^{-1/2}\mathbf{a} \tag{4}$$

Accordingly, we may rewrite Equation (3) as

$$\text{SNR} = \frac{\mathbf{a}^T\mathbf{R}_N^{-1/2}\mathbf{s}\mathbf{s}^T\mathbf{R}_N^{-1/2}\mathbf{a}}{\mathbf{a}^T\mathbf{a}} \tag{5}$$

where we have used the symmetric property of $\mathbf{R}_N$. Define the normalized vector

$$\bar{\mathbf{a}} = \frac{\mathbf{a}}{\|\mathbf{a}\|}$$

where $\|\mathbf{a}\|$ is the norm of $\mathbf{a}$. Equation (5) may then be rewritten as

$$\text{SNR} = \bar{\mathbf{a}}^T\mathbf{R}_N^{-1/2}\mathbf{s}\mathbf{s}^T\mathbf{R}_N^{-1/2}\bar{\mathbf{a}} = \left(\bar{\mathbf{a}}^T\mathbf{R}_N^{-1/2}\mathbf{s}\right)^2$$

Thus the output signal-to-noise ratio equals the squared magnitude of the inner product of the two vectors $\bar{\mathbf{a}}$ and $\mathbf{R}_N^{-1/2}\mathbf{s}$. This inner product is maximized when $\bar{\mathbf{a}}$ is aligned with $\mathbf{R}_N^{-1/2}\mathbf{s}$; that is,

$$\mathbf{a}_{SN} = \mathbf{R}_N^{-1/2}\mathbf{s} \tag{6}$$

Let $\mathbf{w}_{SN}$ denote the value of the tap-weight vector that corresponds to Equation (6). The use of Equation (4) in Equation (6) yields

$$\mathbf{w}_{SN} = \mathbf{R}_N^{-1/2}\left(\mathbf{R}_N^{-1/2}\mathbf{s}\right) = \mathbf{R}_N^{-1}\mathbf{s}$$
c) Since the noise vector $\mathbf{v}(n)$ is Gaussian, its joint probability density function equals

$$f(\mathbf{v}) = \frac{1}{(2\pi)^{M/2}(\det\mathbf{R}_N)^{1/2}}\exp\left(-\frac{1}{2}\mathbf{v}^T\mathbf{R}_N^{-1}\mathbf{v}\right)$$

Under hypothesis $H_0$ we have $\mathbf{u} = \mathbf{v}$, and so

$$f(\mathbf{u}\,|\,H_0) = \frac{1}{(2\pi)^{M/2}(\det\mathbf{R}_N)^{1/2}}\exp\left(-\frac{1}{2}\mathbf{u}^T\mathbf{R}_N^{-1}\mathbf{u}\right)$$

Under hypothesis $H_1$ we have $\mathbf{u} = \mathbf{s} + \mathbf{v}$, and so

$$f(\mathbf{u}\,|\,H_1) = \frac{1}{(2\pi)^{M/2}(\det\mathbf{R}_N)^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{u} - \mathbf{s})^T\mathbf{R}_N^{-1}(\mathbf{u} - \mathbf{s})\right)$$

Hence, the likelihood ratio is defined by

$$\Lambda = \frac{f(\mathbf{u}\,|\,H_1)}{f(\mathbf{u}\,|\,H_0)} = \exp\left(-\frac{1}{2}\mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s} + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{u}\right)$$

The natural logarithm of the likelihood ratio equals

$$\ln\Lambda = -\frac{1}{2}\mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{s} + \mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{u} \tag{7}$$

The first term in (7) represents a constant. Hence, testing $\ln\Lambda$ against a threshold is equivalent to the test

$$\mathbf{s}^T\mathbf{R}_N^{-1}\mathbf{u} \underset{H_0}{\overset{H_1}{\gtrless}} \lambda$$

where $\lambda$ is some threshold. Equivalently, we may write

$$\mathbf{w}_{ML} = \mathbf{R}_N^{-1}\mathbf{s}$$

where $\mathbf{w}_{ML}$ is the maximum-likelihood weight vector. The results of parts a), b), and c) show that the three criteria discussed here yield the same optimum value for the weight vector, except for a scaling factor.
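The optimality claimed in part b) can be illustrated numerically; in this sketch the positive-definite noise matrix and the competing weight vectors are randomly generated assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 5
s = rng.standard_normal(M)
A = rng.standard_normal((M, M))
RN = A @ A.T + M * np.eye(M)          # an assumed positive-definite noise matrix

snr = lambda w: (w @ s) ** 2 / (w @ RN @ w)
w_sn = np.linalg.solve(RN, s)         # w_SN = RN^{-1} s

best_random = max(snr(rng.standard_normal(M)) for _ in range(1000))
print(snr(w_sn) >= best_random)                            # -> True
print(np.isclose(snr(w_sn), s @ np.linalg.solve(RN, s)))   # optimum SNR = s^T RN^{-1} s
```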
Problem 2.19

a) Assuming the use of a noncausal Wiener filter, we write

$$\sum_{i=-\infty}^{\infty} w_{0i}\,r(i - k) = p(-k), \qquad k = 0, \pm 1, \pm 2, \ldots \tag{1}$$

where the sum now extends from $i = -\infty$ to $i = \infty$. Define the z-transforms

$$S(z) = \sum_{k=-\infty}^{\infty} r(k)z^{-k}, \qquad H_u(z) = \sum_{k=-\infty}^{\infty} w_{0,k}z^{-k}, \qquad P(z) = \sum_{k=-\infty}^{\infty} p(k)z^{-k}$$

and note that $\sum_{k=-\infty}^{\infty} p(-k)z^{-k} = P(z^{-1})$. Hence, applying the z-transform to Equation (1):

$$H_u(z)S(z) = P(z^{-1})$$

that is,

$$H_u(z) = \frac{P(1/z)}{S(z)} \tag{2}$$
b) We are given

$$P(z) = \frac{0.36}{(1 - 0.2z^{-1})(1 - 0.2z)}$$

so that $P(1/z) = P(z)$, and

$$S(z) = 1.37\,\frac{(1 - 0.146z^{-1})(1 - 0.146z)}{(1 - 0.2z^{-1})(1 - 0.2z)}$$

Thus, applying Equation (2) yields

$$H_u(z) = \frac{0.36}{1.37(1 - 0.146z^{-1})(1 - 0.146z)} = \frac{0.36z^{-1}}{1.37(1 - 0.146z^{-1})(z^{-1} - 0.146)} = \frac{0.2685}{1 - 0.146z^{-1}} + \frac{0.0392}{z^{-1} - 0.146}$$

Clearly, this system is noncausal. Taking the region of convergence to be the annulus $0.146 < |z| < 1/0.146$, its impulse response $h_u(n)$, the inverse z-transform of $H_u(z)$, is given by

$$h_u(n) = 0.2685(0.146)^n u_{\text{step}}(n) + \frac{0.0392}{0.146}\left(\frac{1}{0.146}\right)^{n} u_{\text{step}}(-n-1)$$

where $u_{\text{step}}(n)$ is the unit-step function,

$$u_{\text{step}}(n) = \begin{cases} 1 & \text{for } n = 0, 1, 2, \ldots \\ 0 & \text{for } n = -1, -2, \ldots \end{cases}$$

and $u_{\text{step}}(-n-1)$ is its mirror image, equal to 1 for $n = -1, -2, \ldots$ and 0 for $n = 0, 1, 2, \ldots$. Simplifying, with $0.0392/0.146 = 0.2685$ and $1/0.146 = 6.849$,

$$h_u(n) = 0.2685(0.146)^n u_{\text{step}}(n) + 0.2685(6.849)^n u_{\text{step}}(-n-1) = 0.2685(0.146)^{|n|}$$
Evaluating $h_u(n)$ for varying $n$:

$$h_u(0) = 0.2685, \qquad h_u(\pm 1) = 0.0392, \qquad h_u(\pm 2) = 0.0057, \qquad h_u(\pm 3) = 0.0008$$

[Figure: stem plot of $h_u(n)$ versus time $n$, symmetric about $n = 0$, peaking at $n = 0$ and decaying by the factor 0.146 per unit lag.]
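These values can be confirmed by sampling $H_u(z)$ on the unit circle and applying an inverse FFT, which is valid here because the region of convergence contains the unit circle (the grid size below is an arbitrary choice):

```python
import numpy as np

N = 1024
z = np.exp(2j * np.pi * np.arange(N) / N)        # points on the unit circle
H = 0.36 / (1.37 * (1 - 0.146 / z) * (1 - 0.146 * z))
h = np.fft.ifft(H).real                          # h[n] for n >= 0, h[N+n] for n < 0

for n in range(4):
    print(n, round(h[n], 4), round(h[-n % N], 4))
# -> 0.2685 at n = 0; 0.0392, 0.0057, 0.0008 at |n| = 1, 2, 3
```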
c) A delay of 3 time units applied to the impulse response (with the negligible anticausal tail beyond that delay truncated) will make the system causal, and therefore realizable.