Homework # 7, #8
14.7. Assume that $\varphi_X(u)$ is real. Then
\[
\varphi_{-X}(u) = \varphi_X(-u) = \overline{\varphi_X(u)} = \varphi_X(u).
\]
Therefore $-X \stackrel{d}{=} X$, i.e., $X$ is symmetric.

Conversely, assume $-X \stackrel{d}{=} X$. Then
\[
\varphi_X(u) = \varphi_{-X}(u) = \overline{\varphi_X(u)},
\]
so $\varphi_X(u)$ is real.

14.8. Notice that
\[
\varphi_{X-Y}(u) = \varphi_X(u)\,\varphi_{-Y}(u) = \varphi_X(u)\,\overline{\varphi_X(u)} = |\varphi_X(u)|^2
\]
is a real function. So by 14.7, $X - Y$ is symmetric.
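As a quick numerical sanity check (not part of the required solution; the choice of $\mathrm{Exp}(1)$, the evaluation point $u = 1$, and the sample size are our own), the imaginary part of the empirical characteristic function of an asymmetric $X$ is visibly nonzero, while that of $X - Y$ for an independent copy $Y$ is essentially zero:

```python
import math
import random

def im_char(samples, u):
    """Imaginary part of the empirical characteristic function at u: mean of sin(u*x)."""
    return sum(math.sin(u * s) for s in samples) / len(samples)

rng = random.Random(7)
n = 100_000
x = [rng.expovariate(1.0) for _ in range(n)]  # X ~ Exp(1), an asymmetric distribution
y = [rng.expovariate(1.0) for _ in range(n)]  # independent copy Y
diff = [a - b for a, b in zip(x, y)]

# For Exp(1), Im phi_X(1) = 1/(1+1^2) = 0.5, far from 0;
# Im phi_{X-Y}(1) is near 0 because phi_{X-Y} = |phi_X|^2 is real.
print(im_char(x, 1.0), im_char(diff, 1.0))
```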
15.5. Write
\[
S_N = \sum_{n=0}^{\infty} S_n 1_{\{N=n\}}.
\]
We have
\[
\begin{aligned}
ES_N &= E\sum_{n=0}^{\infty} S_n 1_{\{N=n\}}
= E\lim_{m\to\infty}\sum_{n=0}^{m} S_n 1_{\{N=n\}}
= \lim_{m\to\infty} E\sum_{n=0}^{m} S_n 1_{\{N=n\}} \\
&= \lim_{m\to\infty}\sum_{n=0}^{m} E\bigl(S_n 1_{\{N=n\}}\bigr)
= \lim_{m\to\infty}\sum_{n=0}^{m} E(S_n)\,P\{N=n\}
= \sum_{n=0}^{\infty} E(S_n)\,P\{N=n\},
\end{aligned} \tag{1}
\]
where the third equality follows from dominated convergence with the dominating random variable
\[
Y = \sum_{n=1}^{\infty} |S_n|\, 1_{\{N=n\}}.
\]
This is because
\[
\Bigl|\sum_{n=0}^{m} S_n 1_{\{N=n\}}\Bigr| \le Y, \qquad m = 1, 2, \cdots
\]
and
\[
EY = \sum_{n=1}^{\infty} E\bigl(|S_n|\, 1_{\{N=n\}}\bigr)
= \sum_{n=1}^{\infty} E\Bigl(\Bigl|\sum_{j=1}^{n} X_j\Bigr|\, 1_{\{N=n\}}\Bigr)
\le \sum_{n=1}^{\infty} n\,E|X_1|\,P\{N=n\}
= E|X_1| \sum_{n=1}^{\infty} n\,P\{N=n\}
= E|X_1|\, EN < \infty. \tag{2}
\]
In addition, the fifth equality in (1) and the third step in (2) follow from the independence between $\{X_j\}$ and $N$. Further, from (1) we have
\[
ES_N = \sum_{n=0}^{\infty} n\,EX_1\, P\{N=n\} = EX_1\, EN.
\]
This is a special form of Wald’s first identity.
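The identity $ES_N = EX_1\,EN$ is easy to confirm by simulation (a sketch, not part of the solution; the choices $X_j \sim \mathrm{Uniform}(0,1)$ and $N$ uniform on $\{0,\dots,10\}$ are ours):

```python
import random

def simulate_wald(trials=200_000, seed=0):
    """Estimate E[S_N] for S_N = X_1 + ... + X_N with N independent of the X_j."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n = rng.randint(0, 10)                        # N uniform on {0,...,10}, so EN = 5
        total += sum(rng.random() for _ in range(n))  # X_j uniform on (0,1), so EX_1 = 1/2
    return total / trials

est = simulate_wald()
# Wald's first identity predicts E[S_N] = EX_1 * EN = 0.5 * 5 = 2.5
print(est)
```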
15.13. Here $X_1, \cdots, X_n$ are i.i.d. $N(\mu, \sigma^2)$, $\bar X = \frac{1}{n}\sum_{k=1}^{n} X_k$, and $Y_k = X_k - \bar X$. Write $Z = (\bar X, Y_1, \cdots, Y_n)$, so that
\[
\varphi_Z(u_0, u_1, \cdots, u_n) = E\exp\Bigl(iu_0\bar X + i\sum_{k=1}^{n} u_k Y_k\Bigr).
\]
Notice that
\[
u_0\bar X + \sum_{k=1}^{n} u_k Y_k
= \Bigl(u_0 - \sum_{k=1}^{n} u_k\Bigr)\bar X + \sum_{k=1}^{n} u_k X_k
= \sum_{k=1}^{n}\Bigl[\frac{1}{n}\Bigl(u_0 - \sum_{j=1}^{n} u_j\Bigr) + u_k\Bigr] X_k.
\]
Write $v_k = \frac{1}{n}\bigl(u_0 - \sum_{j=1}^{n} u_j\bigr) + u_k$. Since the $X_k$ are i.i.d. $N(\mu, \sigma^2)$,
\[
\varphi_Z(u_0, u_1, \cdots, u_n)
= \exp\Bigl(-\frac{\sigma^2}{2}\sum_{k=1}^{n} v_k^2 + i\mu\sum_{k=1}^{n} v_k\Bigr).
\]
Now
\[
\sum_{k=1}^{n} v_k = \Bigl(u_0 - \sum_{j=1}^{n} u_j\Bigr) + \sum_{k=1}^{n} u_k = u_0,
\]
and
\[
\sum_{k=1}^{n} v_k^2
= \frac{1}{n}\Bigl(u_0 - \sum_{j=1}^{n} u_j\Bigr)^2
+ \frac{2}{n}\Bigl(u_0 - \sum_{j=1}^{n} u_j\Bigr)\sum_{k=1}^{n} u_k
+ \sum_{k=1}^{n} u_k^2
= \frac{1}{n}u_0^2 + \sum_{k=1}^{n} u_k^2 - \frac{1}{n}\Bigl(\sum_{k=1}^{n} u_k\Bigr)^2.
\]
Therefore,
\[
\varphi_Z(u_0, u_1, \cdots, u_n)
= \exp\Bigl(-\frac{\sigma^2}{2n}u_0^2 + i\mu u_0\Bigr)
\exp\Bigl(-\frac{\sigma^2}{2}\Bigl[\sum_{k=1}^{n} u_k^2 - \frac{1}{n}\Bigl(\sum_{k=1}^{n} u_k\Bigr)^2\Bigr]\Bigr).
\]
Letting $u_0 = 0$, we obtain the characteristic function of $Y = (Y_1, \cdots, Y_n)$:
\[
\varphi_Y(u_1, \cdots, u_n) = \varphi_Z(0, u_1, \cdots, u_n)
= \exp\Bigl(-\frac{\sigma^2}{2}\Bigl[\sum_{k=1}^{n} u_k^2 - \frac{1}{n}\Bigl(\sum_{k=1}^{n} u_k\Bigr)^2\Bigr]\Bigr).
\]
Similarly, letting $u_1 = \cdots = u_n = 0$ we obtain the characteristic function of $\bar X$:
\[
\varphi_{\bar X}(u_0) = \varphi_Z(u_0, 0, \cdots, 0)
= \exp\Bigl(-\frac{\sigma^2}{2n}u_0^2 + i\mu u_0\Bigr).
\]
In particular $\bar X \sim N(\mu, \sigma^2/n)$. In addition,
\[
\varphi_Z(u_0, u_1, \cdots, u_n) = \varphi_{\bar X}(u_0)\,\varphi_Y(u_1, \cdots, u_n).
\]
This implies that $\bar X$ and $(Y_1, \cdots, Y_n)$ are independent. Since
\[
S_n^2 = \frac{1}{n}\sum_{j=1}^{n} Y_j^2
\]
is a function of $(Y_1, \cdots, Y_n)$, $\bar X$ and $S_n^2$ are independent.

Alternative solution. We may also use the Gaussian property to give a proof. First, we claim that $(\bar X, Y_1, \cdots, Y_n)$ is Gaussian. Indeed, any linear combination of $\bar X, Y_1, \cdots, Y_n$ is 1-dimensional normal, as it can be written as a linear combination of $X_1, \cdots, X_n$.
Second,
\[
\mathrm{Cov}(\bar X, Y_j)
= E\bigl[(\bar X - \mu)(X_j - \mu)\bigr] - E\bigl[(\bar X - \mu)^2\bigr]
= \frac{1}{n}\mathrm{Var}(X_j) - \frac{1}{n^2}\sum_{k=1}^{n}\mathrm{Var}(X_k) = 0
\]
for $j = 1, \cdots, n$. By Theorem 16.4, $\bar X$ and $(Y_1, \cdots, Y_n)$ are independent. Consequently, $\bar X$ and $S_n^2$ are independent.
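Independence of $\bar X$ and $S_n^2$ can be probed numerically by checking that their sample correlation over many repeated samples is essentially zero (a sketch; correlation zero is of course weaker than independence, and the parameters `trials`, `n`, `mu`, `sigma` below are our own choices):

```python
import math
import random

def sample_mean_var_corr(trials=100_000, n=5, mu=1.0, sigma=2.0, seed=1):
    """Correlation between the sample mean and S_n^2 = (1/n) sum (X_j - mean)^2
    over repeated N(mu, sigma^2) samples of size n."""
    rng = random.Random(seed)
    means, variances = [], []
    for _ in range(trials):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = sum(xs) / n
        means.append(m)
        variances.append(sum((x - m) ** 2 for x in xs) / n)
    mbar = sum(means) / trials
    vbar = sum(variances) / trials
    cov = sum((a - mbar) * (b - vbar) for a, b in zip(means, variances)) / trials
    sm = math.sqrt(sum((a - mbar) ** 2 for a in means) / trials)
    sv = math.sqrt(sum((b - vbar) ** 2 for b in variances) / trials)
    return cov / (sm * sv)

r = sample_mean_var_corr()
print(r)  # close to 0: the sample mean and sample variance are uncorrelated
```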
16.2. Let $A \subset (-\infty, \infty)$ be Borel-measurable. Since $Z = Y$ on $\{|Y| \le a\}$ and $Z = -Y$ on $\{|Y| > a\}$,
\[
P\{Z \in A\} = P\{Y \in A,\ |Y| \le a\} + P\{-Y \in A,\ |Y| > a\}.
\]
By the symmetry of $Y$, we can replace $Y$ by $-Y$ in the second term on the right-hand side:
\[
P\{-Y \in A,\ |Y| > a\} = P\{-(-Y) \in A,\ |-Y| > a\} = P\{Y \in A,\ |Y| > a\}.
\]
Thus,
\[
P\{Z \in A\} = P\{Y \in A,\ |Y| \le a\} + P\{Y \in A,\ |Y| > a\} = P\{Y \in A\}.
\]
Hence, $Z \stackrel{d}{=} Y$.

Remark. What we really need here is not the normality, but the symmetry of $Y$.
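A simulation sketch of $Z \stackrel{d}{=} Y$ (the threshold $a = 1$, the checkpoints, and the sample size are our own choices): the empirical CDF of $Z$ matches the standard normal CDF.

```python
import math
import random

def phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def truncated_flip_cdf(t, a=1.0, trials=200_000, seed=2):
    """Empirical P{Z <= t} where Z = Y on {|Y| <= a} and Z = -Y on {|Y| > a},
    with Y ~ N(0,1)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        y = rng.gauss(0.0, 1.0)
        z = y if abs(y) <= a else -y
        if z <= t:
            count += 1
    return count / trials

for t in (-1.0, 0.0, 1.5):
    print(t, truncated_flip_cdf(t), phi(t))  # the two values agree: Z has the law of Y
```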
16.7. Write $Y = \sum_{j=1}^{n} a_j X_j$. By the definition of a Gaussian random variable, $Y \sim N(\mu, \sigma^2)$, where
\[
\mu = EY = \sum_{j=1}^{n} a_j E(X_j)
\]
and
\[
\sigma^2 = \mathrm{Var}(Y) = E(Y - EY)^2
= E\Bigl(\sum_{j=1}^{n} a_j\bigl(X_j - E(X_j)\bigr)\Bigr)^2
= \sum_{j=1}^{n} a_j^2\,\mathrm{Var}(X_j) + 2\sum_{j<k} a_j a_k\,\mathrm{Cov}(X_j, X_k).
\]
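The variance formula can be checked on a concrete correlated pair (a sketch; the construction $X_1 = G_1$, $X_2 = G_1 + G_2$ and the weights are our own, giving $\mathrm{Var}(X_1)=1$, $\mathrm{Var}(X_2)=2$, $\mathrm{Cov}(X_1,X_2)=1$):

```python
import random

def var_linear_combo(a1=2.0, a2=3.0, trials=200_000, seed=3):
    """Empirical Var(a1*X1 + a2*X2) with X1 = G1, X2 = G1 + G2, G_i i.i.d. N(0,1)."""
    rng = random.Random(seed)
    ys = []
    for _ in range(trials):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        ys.append(a1 * g1 + a2 * (g1 + g2))
    m = sum(ys) / trials
    return sum((y - m) ** 2 for y in ys) / trials

# Formula: a1^2 Var(X1) + a2^2 Var(X2) + 2 a1 a2 Cov(X1, X2)
#        = 4*1 + 9*2 + 2*2*3*1 = 34
print(var_linear_combo())
```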
16.16. We need only to examine two things: first, $(X, Y - \rho X)$ is 2-dimensional Gaussian; second, $\mathrm{Cov}(X, Y - \rho X) = 0$.

For any constants $a_1, a_2$,
\[
a_1 X + a_2(Y - \rho X) = (a_1 - \rho a_2)X + a_2 Y.
\]
Since $(X, Y)$ is Gaussian, $a_1 X + a_2(Y - \rho X)$ is normal. This shows that $(X, Y - \rho X)$ is Gaussian. By linearity,
\[
\mathrm{Cov}(X, Y - \rho X) = \mathrm{Cov}(X, Y) - \rho\,\mathrm{Cov}(X, X) = \mathrm{Cov}(X, Y) - \rho\sigma_X^2.
\]
By the fact $\sigma_X^2 = \sigma_Y^2$,
\[
\rho\sigma_X^2 = \rho\sigma_X\sigma_Y = \mathrm{Cov}(X, Y).
\]
Hence,
\[
\mathrm{Cov}(X, Y - \rho X) = \mathrm{Cov}(X, Y) - \mathrm{Cov}(X, Y) = 0.
\]
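This decorrelation is easy to see numerically (a sketch; the construction of the equal-variance pair and the value $\rho = 0.6$ are ours):

```python
import math
import random

def cov_x_residual(rho=0.6, trials=100_000, seed=4):
    """Empirical Cov(X, Y - rho*X) for (X, Y) bivariate normal with
    Var(X) = Var(Y) = 1 and correlation rho."""
    rng = random.Random(seed)
    s = 0.0
    root = math.sqrt(1 - rho * rho)
    for _ in range(trials):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = g1
        y = rho * g1 + root * g2   # Corr(x, y) = rho, equal unit variances
        s += x * (y - rho * x)     # both factors have mean 0
    return s / trials

print(cov_x_residual())  # close to 0, so X and Y - rho*X are uncorrelated (hence, being jointly Gaussian, independent)
```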
17.3. By Chebyshev's inequality, for any $\epsilon > 0$,
\[
P\Bigl\{\Bigl|\frac{1}{n}\sum_{j=1}^{n} X_j\Bigr| \ge \epsilon\Bigr\}
\le \frac{1}{\epsilon^2}\,\mathrm{Var}\Bigl(\frac{1}{n}\sum_{j=1}^{n} X_j\Bigr)
= \frac{1}{n^2\epsilon^2}\sum_{j=1}^{n}\mathrm{Var}(X_j)
= \frac{\mathrm{Var}(X_1)}{n\epsilon^2}.
\]
The right-hand side tends to 0 as $n \to \infty$. So we have
\[
\lim_{n\to\infty} P\Bigl\{\Bigl|\frac{1}{n}\sum_{j=1}^{n} X_j\Bigr| \ge \epsilon\Bigr\} = 0.
\]
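The $1/n$ decay of the tail probability can be watched directly (a sketch; fair $\pm 1$ coin flips, $\epsilon = 0.1$, and the trial counts are our own choices for a convenient mean-zero, variance-one example):

```python
import random

def tail_prob(n, eps=0.1, trials=2_000, seed=5):
    """Estimate P{ |(1/n) sum X_j| >= eps } for i.i.d. X_j = +-1 fair coin flips
    (mean 0, variance 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
        if abs(s / n) >= eps:
            hits += 1
    return hits / trials

for n in (100, 1000):
    # Chebyshev bound: Var(X_1)/(n * eps^2) = 1/(n * 0.01)
    print(n, tail_prob(n), "Chebyshev bound:", 1 / (n * 0.1 ** 2))
```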
17.4. Replace $n$ by $n^2$ in the above estimate:
\[
P\Bigl\{\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| \ge \epsilon\Bigr\} \le \frac{\mathrm{Var}(X_1)}{n^2\epsilon^2}.
\]
Hence,
\[
\sum_{n=1}^{\infty} P\Bigl\{\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| \ge \epsilon\Bigr\}
\le \sum_{n=1}^{\infty} \frac{\mathrm{Var}(X_1)}{n^2\epsilon^2} < \infty.
\]
By the Borel–Cantelli lemma,
\[
P\Bigl(\bigcap_{m=1}^{\infty}\bigcup_{n=m}^{\infty}\Bigl\{\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| \ge \epsilon\Bigr\}\Bigr) = 0.
\]
Notice that
\[
\Bigl\{\limsup_{n\to\infty}\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| > \epsilon\Bigr\}
\subset \bigcap_{m=1}^{\infty}\bigcup_{n=m}^{\infty}\Bigl\{\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| \ge \epsilon\Bigr\}.
\]
Thus,
\[
\limsup_{n\to\infty}\Bigl|\frac{1}{n^2}\sum_{j=1}^{n^2} X_j\Bigr| \le \epsilon \quad \text{a.s.}
\]
Letting $\epsilon \to 0+$ on the right-hand side (along the sequence $\epsilon = 1/k$), we have
\[
\lim_{n\to\infty}\frac{1}{n^2}\sum_{j=1}^{n^2} X_j = 0 \quad \text{a.s.}
\]
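A single simulated path illustrates the almost-sure convergence along the subsequence $n^2$ (a sketch; fair $\pm 1$ coin flips and the grid of $n$ values are our own choices):

```python
import random

def avg_over_n_squared(n, seed=6):
    """One sample-path value of (1/n^2) * sum_{j=1}^{n^2} X_j, X_j = +-1 fair coins."""
    rng = random.Random(seed)
    total = sum(1 if rng.random() < 0.5 else -1 for _ in range(n * n))
    return total / (n * n)

for n in (5, 10, 20, 40):
    print(n, avg_over_n_squared(n))  # typical size about 1/n, consistent with convergence to 0
```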