Communication Systems EE 132A Prof. Suhas Diggavi
UCLA Winter quarter 2011/2012 Handout #25, Thursday, 16th February 2012
Midterm Solution
Problem 1 (Detection in uniform noise (20 pts))

(a) The hypothesis test can be simplified as follows:

f_{H|Y}(0|y) ≷ f_{H|Y}(1|y)
p_H(0) f_{Y|H}(y|0) ≷ p_H(1) f_{Y|H}(y|1)
(1/2) f_Z(y − 1) ≷ (1/2) f_Z(y + 1)
f_Z(y − 1) ≷ f_Z(y + 1)

where each comparison decides 0 if the left-hand side is larger and 1 otherwise. From the above, it can be seen that the decision can be made according to:

M̂ = 0 if y ≥ 0
M̂ = 1 if y < 0
with the equivalent decision regions: everything to the right of 0 is decoded as M = 0 and everything to the left is decoded as M = 1.

(b) The error probability is the area of the noise pdf that lies outside the correct decision region:

P_e = p_H(0) P_e(0) + p_H(1) P_e(1)
    = (1/2) P_e(0) + (1/2) P_e(1)
    = P_e(0)    since P_e(0) = P_e(1)

P_e = P_e(0) = ∫_{−L/2}^{−1} p_Z(z) dz = ∫_{−L/2}^{−1} (1/L) dz

P_e = (1/L)(L/2 − 1)    if L/2 ≥ 1
P_e = 0                 otherwise
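As a sanity check, the piecewise formula P_e = (1/L)(L/2 − 1) for L/2 ≥ 1 (and 0 otherwise) can be compared against a quick Monte Carlo simulation of the channel. This is an illustrative sketch; the function names are ours, not part of the solution.

```python
import random

def analytic_pe(L):
    # P_e = (1/L)(L/2 - 1) when L/2 >= 1, else 0
    return (L / 2 - 1) / L if L >= 2 else 0.0

def simulate_pe(L, trials=200_000, seed=1):
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        m = rng.randint(0, 1)                # equally likely messages
        s = 1 if m == 0 else -1              # antipodal signals +/- 1
        y = s + rng.uniform(-L / 2, L / 2)   # uniform noise on [-L/2, L/2]
        m_hat = 0 if y >= 0 else 1           # threshold detector from part (a)
        errors += (m_hat != m)
    return errors / trials

print(analytic_pe(4.0))  # 0.25
print(simulate_pe(4.0))  # close to 0.25
```

For L < 2 the noise can never carry the signal across the threshold, and both functions return 0.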
Since the error probability must satisfy P_e ≤ 10⁻⁶,

(1/L)(L/2 − 1) ≤ 10⁻⁶
1/2 − 1/L ≤ 10⁻⁶
1/L ≥ 1/2 − 10⁻⁶
L ≤ 1/(1/2 − 10⁻⁶)

so L_0 = 1/(1/2 − 10⁻⁶), which is just above 2.

(c) To find SNR = E/σ_Z², we must find the energy of the signal, E, and the variance of the noise, σ_Z².
E = p_H(0) ‖s₀‖² + p_H(1) ‖s₁‖² = (1/2)(1)² + (1/2)(−1)² = 1
The power of the noise in the SNR is the variance of Z, so we use the formula for variance to find σ_Z²:

σ_Z² = E[Z²] − (E[Z])² = E[Z²]
     = ∫_{−L/2}^{L/2} z² p_Z(z) dz
     = (1/L) ∫_{−L/2}^{L/2} z² dz
     = (1/L) [z³/3]_{−L/2}^{L/2}
     = L²/12

Therefore, SNR = E/σ_Z² = 12/L².
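The L²/12 result is the standard variance of a uniform density of width L; it can be spot-checked empirically (an illustrative sketch, not part of the original solution):

```python
import random

def var_uniform(L, trials=200_000, seed=7):
    # Sample variance of uniform noise on [-L/2, L/2]
    rng = random.Random(seed)
    samples = [rng.uniform(-L / 2, L / 2) for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((z - mean) ** 2 for z in samples) / trials

L = 3.0
print(L ** 2 / 12)     # 0.75
print(var_uniform(L))  # close to 0.75
```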
(d) The transmission will be error free if L/2 ≤ 1, since at that point the noise cannot push the received value out of the correct decision region. Then L ≤ 2, so SNR = 12/L² ≥ 12/4 = 3. Since log₁₀(3) ≈ 0.477, this corresponds to 10 log₁₀(3) ≈ 4.77 dB, or approximately 5 dB.
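The dB conversion can be verified directly (a sketch; it uses only the L ≤ 2 threshold from above):

```python
import math

L = 2.0                        # largest error-free width, from L/2 <= 1
snr = 12 / L ** 2              # SNR = 12 / L^2
snr_db = 10 * math.log10(snr)  # convert to decibels
print(snr)     # 3.0
print(snr_db)  # ~4.77 dB, i.e. roughly 5 dB
```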
Problem 2 (QAM transmission (15 pts))
(a) The message-set size M is 7, which can be seen from the number of messages in the constellation or by counting the signals given in vector form. By contrast, some common incorrect answers are given below: the number of dimensions, usually denoted by N, is 2; the number of bits required to send the M messages, usually called b or k, is log₂(M) rounded up, which equals 3.

For the average energy E_x, since all signals are equally likely, p_H(i) = 1/7 ∀i. The energies are given for each signal as follows:
‖[0, 0]‖² = (0)² + (0)² = 0
‖[0, 1]‖² = (0)² + (1)² = 1
‖[0, −1]‖² = (0)² + (−1)² = 1
‖[√3/2, 1/2]‖² = (√3/2)² + (1/2)² = 1
‖[−√3/2, 1/2]‖² = (−√3/2)² + (1/2)² = 1
‖[√3/2, −1/2]‖² = (√3/2)² + (−1/2)² = 1
‖[−√3/2, −1/2]‖² = (−√3/2)² + (−1/2)² = 1
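These energies, and the bit count from part (a), can be tallied with a short script; the coordinates are the constellation points from the problem (an illustrative sketch):

```python
import math

r = math.sqrt(3) / 2  # sqrt(3)/2 coordinate used by the outer points
points = [(0, 0), (0, 1), (0, -1),
          (r, 0.5), (-r, 0.5), (r, -0.5), (-r, -0.5)]

energies = [x * x + y * y for x, y in points]
total = sum(energies)
E_avg = total / len(points)
bits = math.ceil(math.log2(len(points)))  # bits needed for M = 7 messages

print(total)  # 6.0 (up to floating-point rounding)
print(E_avg)  # 6/7, about 0.857
print(bits)   # 3
```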
So the sum of the energies is 6 and the average energy is E_x = 6/7.

(b) The constellation is approximately given by:
To draw the decision regions, draw a straight line half-way between each pair of neighboring points in the constellation. The lines are half-way between because each point is equally likely. The decision boundaries are these lines, cut off at their intersections. The boundaries should form a perfect hexagon, with lines radiating out from its corners. Since the figure is not drawn perfectly to scale, it is not exact.
(c) By symmetry, there are four possible distances in the constellation that could be the minimum:

From (0, 0) to (0, 1): √((0 − 0)² + (1 − 0)²) = 1.
From (0, 0) to (√3/2, 1/2): √((√3/2 − 0)² + (1/2 − 0)²) = 1.
From (0, 1) to (√3/2, 1/2): √((√3/2 − 0)² + (1/2 − 1)²) = 1.
From (√3/2, 1/2) to (√3/2, −1/2): √((√3/2 − √3/2)² + (1/2 − (−1/2))²) = 1.

Since all distances in the figure are equal to 1, d_min = 1.

(d) The point with the maximum number of nearest neighbors is (0, 0), which has 6, so max_i(N_i) = 6. (The problem did not ask for the average number of nearest neighbors.) From the problem statement, σ = √0.01 = 0.1. Then,
P_e ≤ 6 Q(1/(2 · 0.1)) = 6 Q(5) = 6 · (2.87 × 10⁻⁷) = 1.72 × 10⁻⁶
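The geometry and the union bound can be checked numerically; Q is evaluated through the complementary error function (an illustrative sketch):

```python
import math
from itertools import combinations

r = math.sqrt(3) / 2
points = [(0, 0), (0, 1), (0, -1),
          (r, 0.5), (-r, 0.5), (r, -0.5), (-r, -0.5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

d_min = min(dist(p, q) for p, q in combinations(points, 2))

# Nearest neighbors of the origin: points at distance d_min from (0, 0)
n_max = sum(1 for p in points[1:] if abs(dist(points[0], p) - d_min) < 1e-9)

def Q(x):
    # Gaussian tail probability: Q(x) = erfc(x / sqrt(2)) / 2
    return math.erfc(x / math.sqrt(2)) / 2

sigma = 0.1
bound = n_max * Q(d_min / (2 * sigma))
print(d_min)  # 1.0
print(n_max)  # 6
print(bound)  # ~1.72e-06
```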
Problem 3 (Autocorrelation and PSD (20 pts))
(a) The power spectral density of X, S_X(f), is the Fourier transform of the autocorrelation R_X(τ). The function R_X(τ) = e^(−2|τ|) breaks down into e^(−2|τ|) = e^(−2τ) u(τ) + e^(2τ) u(−τ). The Fourier transform is thus:

S_X(f) = ∫_{−∞}^{∞} R_X(τ) e^(−j2πfτ) dτ
       = ∫_{0}^{∞} e^(−2τ) e^(−j2πfτ) dτ + ∫_{−∞}^{0} e^(2τ) e^(−j2πfτ) dτ
       = 1/(2 + j2πf) + 1/(2 − j2πf)
       = 4/(4 + 4π²f²) = 1/(1 + π²f²)
where the solutions to the integrals were given in the instructions as:

∫_{0}^{∞} e^(−at) e^(−j2πft) dt = 1/(a + j2πf)
∫_{−∞}^{0} e^(at) e^(−j2πft) dt = 1/(a − j2πf)
It is important to simplify the expression into the final form and not leave it in a form containing imaginary numbers. The Fourier transform of a real, even function is always a real, even function. Therefore, since the autocorrelation function is real and always even (R_X(τ) = R_X(−τ)), the PSD should also be real and even, as seen from the final expression.
(b) By the same method as part (a), we obtain

S_N(f) = 6/(9 + 4π²f²)
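Both closed forms are instances of the Fourier transform of e^(−a|τ|), which equals 2a/(a² + 4π²f²); a = 2 gives S_X and a = 3 gives S_N. A numerical spot check using a midpoint Riemann sum (an illustrative sketch; the function names are ours):

```python
import math

def S_closed(a, f):
    # FT of e^{-a|tau|}: 2a / (a^2 + 4 pi^2 f^2)
    return 2 * a / (a ** 2 + (2 * math.pi * f) ** 2)

def S_numeric(a, f, T=30.0, dt=1e-3):
    # Midpoint rule for the Fourier integral over [-T, T]; the imaginary
    # part vanishes because e^{-a|tau|} is even, so only cos() is needed.
    total = 0.0
    n = int(2 * T / dt)
    for k in range(n):
        tau = -T + (k + 0.5) * dt
        total += math.exp(-a * abs(tau)) * math.cos(2 * math.pi * f * tau) * dt
    return total

for a in (2, 3):
    for f in (0.0, 0.5, 1.0):
        print(a, f, S_closed(a, f), S_numeric(a, f))  # pairs should agree
```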
(c) For the autocorrelation of Y(t), we simplify the following:

R_Y(τ) = E[Y(t) Y(t + τ)]
       = E[(X(t) + N(t))(X(t + τ) + N(t + τ))]
       = E[X(t)X(t + τ) + X(t)N(t + τ) + N(t)X(t + τ) + N(t)N(t + τ)]
       = E[X(t)X(t + τ)] + E[X(t)N(t + τ)] + E[N(t)X(t + τ)] + E[N(t)N(t + τ)]
       = E[X(t)X(t + τ)] + E[X(t)]E[N(t + τ)] + E[N(t)]E[X(t + τ)] + E[N(t)N(t + τ)]   (by independence of X and N)
       = E[X(t)X(t + τ)] + E[N(t)N(t + τ)]   (because X and N are zero-mean)
       = R_X(τ) + R_N(τ)
       = e^(−2|τ|) + e^(−3|τ|)

The independence and zero-mean properties should be specified. Independence makes the expectation of a product equal to the product of the expectations, i.e. E[A · B] = E[A] E[B]. Because X and N also have zero mean, E[X(t)] = E[N(t)] = 0, which cancels the cross-terms in the expansion, leaving just the autocorrelation of X plus the autocorrelation of N.
(d) Again, the power spectral density is the Fourier transform of the autocorrelation. By linearity of the Fourier transform,

S_Y(f) = F{R_Y(τ)} = F{R_X(τ) + R_N(τ)} = F{R_X(τ)} + F{R_N(τ)}
       = S_X(f) + S_N(f)
       = 4/(4 + 4π²f²) + 6/(9 + 4π²f²)

Problem 4 (Waveform representations (25 pts))
(a) The basis functions ϕ₁(t) and ϕ₂(t) were orthogonal, not orthonormal, but the modulated waveforms could be found by

s₀(t) = ϕ₁(t) + ϕ₂(t)
s₁(t) = 2ϕ₁(t) + ϕ₂(t)

which give the following graphs:

(b) The goal is to obtain the waveforms s₀(t) and s₁(t) of the previous part, but with a new, normalized basis, such that s₀(t) = s̃₀,₀ ϕ̃₁(t) + s̃₀,₁ ϕ̃₂(t) and s₁(t) = s̃₁,₀ ϕ̃₁(t) + s̃₁,₁ ϕ̃₂(t). The answer is

s̃₀ = [2.6/0.4714, 0]
s̃₁ = [3.9/0.4714, −1.3/0.4714]
(c) To solve this, realize that the first basis was not normalized; you must therefore use the vectors found in part (b), which are expressed in the normalized basis, to find the distance between points. In this case, the number inside the Q-function could not be easily calculated without a computer, and the answer may be left unsimplified.

The distance between the points is

d = √((3.9/0.4714 − 2.6/0.4714)² + (−1.3/0.4714 − 0)²) = √2 · (1.3/0.4714) = 3.9

Solving for the noise power in watts: 10 dBm ⇔ 10¹/1000 W = 10⁻² W = σ². Then σ = √(10⁻²) = 10⁻¹.

P_e = Q(d/(2σ)) = Q(3.9/(2 · 0.1)) = Q(19.5) = 5.91 · 10⁻⁸⁵

Without a calculator, the distance could be left as d = √2 · (1.3/0.4714), which results in the expression Q(√2 · (1.3/0.4714)/(2 · 0.1)) or similar.
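A quick check of the distance and the Q-function argument, using the part-(b) vectors (an illustrative sketch; 0.4714 is the normalization constant as given in the solution):

```python
import math

c = 0.4714                      # normalization constant from part (b)
s0 = (2.6 / c, 0.0)
s1 = (3.9 / c, -1.3 / c)

d = math.hypot(s1[0] - s0[0], s1[1] - s0[1])  # = sqrt(2) * 1.3 / c
sigma2 = 10 ** (10 / 10) / 1000               # 10 dBm -> 0.01 W
sigma = math.sqrt(sigma2)                     # 0.1
arg = d / (2 * sigma)
print(d)    # ~3.90
print(arg)  # ~19.5
```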