Digital Signal Processing Formulary
Author: Daniel Thiem - [email protected]
Version 0.7.4 - 17.02.2013
Contents

1 Digital Processing of Continuous-Time Signals
  1.1 Periodic Sampling
    1.1.1 Reconstruction of Band-Limited Signals
    1.1.2 Discrete-time Processing of Continuous-Time Signals

2 Digital Filter Design
  2.1 Finite Impulse Response (FIR) filter
    2.1.1 Properties of Filters
  2.2 Linear Phase Filters
    2.2.1 Generalized linear phase system
  2.3 Linear Phase FIR filters
  2.4 Linear Phase FIR filter Types
    2.4.1 Type I FIR linear phase filter
    2.4.2 Type II FIR linear phase filter
    2.4.3 Type III FIR linear phase filter
    2.4.4 Type IV FIR linear phase filter
  2.5 FIR filter design
    2.5.1 Steps in FIR filter design
    2.5.2 FIR filter type choice
    2.5.3 FIR filter design by Windowing
    2.5.4 Kaiser Window Filter Design
    2.5.5 Optimal Filter Design
  2.6 Infinite Impulse Response (IIR) filter
    2.6.1 Properties of IIR filters
    2.6.2 IIR filter design from an analog or continuous-time filter (Impulse Invariance Method)
    2.6.3 The Butterworth filter
    2.6.4 Bilinear Transformation
    2.6.5 Implementation of IIR Filters
  2.7 Filter Specification
    2.7.1 Ripple and Attenuation calculation

3 Random Variables and Stochastic Processes

4 The Finite Fourier Transform
  4.1 Definition
    4.1.1 Properties
  4.2 Discrete Finite Fourier Transform
  4.3 Statistical Properties
    4.3.1 White Gaussian Process
    4.3.2 Real Stationary Random Process
  4.4 Segmentation of the Finite Fourier Transform
  4.5 Windowing of the process
    4.5.1 Power of the windowed process

5 Digital Spectral Analysis
  5.1 Consistency and Mean Square Error in terms of bias and variance
  5.2 Mean Estimation
  5.3 The Periodogram
    5.3.1 Distribution of the Periodogram
    5.3.2 Mean of the Periodogram
    5.3.3 Variance of the Periodogram
    5.3.4 Averaging Periodograms (Bartlett's method)
    5.3.5 Welch's method
    5.3.6 Smoothing the Periodogram
    5.3.7 General Class of Spectral Estimates
  5.4 The Log-Spectrum
  5.5 The Blackman-Tukey Method
  5.6 Cross-Spectrum Estimation
    5.6.1 The Cross-Periodogram
    5.6.2 Segmented Cross-Periodogram

6 Parametric Spectrum Estimation
  6.1 Auto-regressive (AR) Processes
    6.1.1 Yule-Walker Equations
    6.1.2 Extended Yule-Walker equations
  6.2 Moving Average (MA) Process
    6.2.1 Solving
  6.3 Auto-regressive Moving Average (ARMA) Process
  6.4 Model order selection
    6.4.1 Akaike's Information Criterion (AIC)
    6.4.2 Minimum Description Length (MDL)

7 Miscellaneous
  7.1 Useful mathematical equations
    7.1.1 Geometric series
    7.1.2 Conversion from Geometric series to trigonometric fraction
    7.1.3 Modulation Theorem
Preface
This formulary is based on the formulary of the course Stochastic Signals and Systems, which can be found at https://github.com/Tyde/stosigsysfs/blob/master/document.pdf?raw=true. If you find any errors or have any ideas for improvement, mail me at [email protected] or file an issue at https://github.com/Tyde/dspformulary/issues. The LaTeX source code is online at https://github.com/Tyde/dspformulary and can be improved and extended.
1 Digital Processing of Continuous-Time Signals

1.1 Periodic Sampling
Basic principles of sampling and transforming signals can be found in the Stochastic Signals and Systems formulary.
1.1.1 Reconstruction of Band-Limited Signals
Assume that H_r(jΩ) and h_r(t) are the frequency and impulse responses of an ideal low-pass filter and x(n) is the input signal. Then the output is

    x_r(t) = Σ_{n=-∞}^{∞} x(n) h_r(t - nT)                                (1.1)

Because the filter is assumed to be ideal, its impulse response is given by

    h_r(t) = sin(πt/T) / (πt/T)                                           (1.2)

where T is the sampling interval, which should fulfil the Nyquist condition Ω_s > 2Ω_B. Rules and correspondences can be found on the last page of the DSP script.
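The interpolation (1.1) with the sinc kernel (1.2) can be sketched numerically; the sampling interval, test signal and evaluation instants below are illustrative choices, not part of the formulary:

```python
import numpy as np

def sinc_reconstruct(x, T, t):
    """Evaluate x_r(t) = sum_n x(n) h_r(t - nT) with the ideal low-pass
    kernel h_r(t) = sin(pi t/T)/(pi t/T); np.sinc(u) = sin(pi u)/(pi u)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.sinc((ti - n * T) / T)) for ti in t])

T = 0.1                              # sampling interval, Omega_s = 2*pi/T
n = np.arange(64)
x = np.sin(2 * np.pi * n * T)        # 1 Hz sine, well inside the Nyquist band
t = np.array([1.23, 2.34, 3.45])     # off-grid evaluation instants
x_r = sinc_reconstruct(x, T, t)      # close to sin(2*pi*t) up to truncation error
```

Because the record is finite, the sinc tails are truncated and the reconstruction is only approximate near the record edges.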
1.1.2 Discrete-time Processing of Continuous-Time Signals
To process a continuous-time signal x_c(t) with digital signal processing techniques, the following setup is used:
2 Digital Filter Design
Let x(n) and y(n), n = 0, ±1, ... be the input and the output of the system. Linear time-invariant discrete-time systems can be characterized by linear constant-coefficient difference equations of the following type, where max(N-1, M) is the order:

    Σ_{k=0}^{M} a_k y(n-k) = Σ_{r=0}^{N-1} b_r x(n-r)                     (2.1)
2.1 Finite Impulse Response (FIR) filter
If M = 0, the impulse response of the filter is finite, so that

    y(n) = (1/a_0) Σ_{r=0}^{N-1} b_r x(n-r)                               (2.2a)

Comparison with the convolution sum yields

    h(n) = { b_n / a_0 ,  n = 0, ..., N-1
           { 0         ,  otherwise                                       (2.2b)
2.1.1 Properties of Filters
Let f_SR be the sample rate. The filter length has to fit the requirement of the filter type (odd or even).

    Filter order = N - 1                                                  (2.3)
    Δω = 2π (f_s - f_p) / f_SR                                            (2.4)
    Delay = (N - 1) / 2                                                   (2.5)
2.2 Linear Phase Filters
A digital filter is called a linear phase filter if H(e^{jω}) can be expressed as

    H(e^{jω}) = H_M(e^{jω}) e^{-jωα}                                      (2.6)

with H_M(e^{jω}) real and α (the group delay) constant.
2.2.1 Generalized linear phase system
A linear phase system with

    H(e^{jω}) = H_M(e^{jω}) e^{-jωα + jβ}                                 (2.7)

has to meet the condition

    Σ_{n=-∞}^{∞} h(n) sin[ω(n-α) + β] = 0   ∀ω ∈ R                       (2.8)
2.3 Linear Phase FIR filters
Applying the condition for generalized linear phase systems (2.2.1) to FIR filters yields two possible causal FIR systems:

    h(n) = { h(N-1-n) ,  0 ≤ n ≤ N-1
           { 0        ,  otherwise                                        (2.9a)

    H(e^{jω}) = H_e(e^{jω}) e^{-jω(N-1)/2}                                (2.9b)

and

    h(n) = { -h(N-1-n) ,  0 ≤ n ≤ N-1
           { 0         ,  otherwise                                       (2.10a)

    H(e^{jω}) = H_o(e^{jω}) e^{-jω(N-1)/2 + jπ/2}                         (2.10b)
2.4 Linear Phase FIR filter Types
The linear phase FIR filters are classified into 4 types:

• Type I: symmetric impulse response, N odd
• Type II: symmetric impulse response, N even
• Type III: antisymmetric impulse response, N odd
• Type IV: antisymmetric impulse response, N even
2.4.1 Type I FIR linear phase filter
N is odd and the impulse response is symmetric, β = 0:

    h(n) = h(N-1-n),  0 ≤ n ≤ N-1                                         (2.11a)

    H(e^{jω}) = e^{-jω(N-1)/2} Σ_{k=0}^{(N-1)/2} a(k) cos(ωk)             (2.11b)

where

    a(0) = h((N-1)/2)  and  a(k) = 2h((N-1)/2 - k),  k = 1, 2, ..., (N-1)/2
2.4.2 Type II FIR linear phase filter
N is even and the impulse response is symmetric, β = 0:

    h(n) = h(N-1-n),  0 ≤ n ≤ N-1                                         (2.12a)

    H(e^{jω}) = e^{-jω(N-1)/2} Σ_{k=1}^{N/2} b(k) cos(ω(k - 1/2))         (2.12b)

where b(k) = 2h(N/2 - k),  k = 1, 2, ..., N/2
2.4.3 Type III FIR linear phase filter
N is odd and the impulse response is antisymmetric, β = π/2:

    h(n) = -h(N-1-n),  0 ≤ n ≤ N-1                                        (2.13a)

    H(e^{jω}) = j e^{-jω(N-1)/2} Σ_{k=1}^{(N-1)/2} c(k) sin(ωk)           (2.13b)

where c(k) = 2h((N-1)/2 - k),  k = 1, 2, ..., (N-1)/2
2.4.4 Type IV FIR linear phase filter
N is even and the impulse response is antisymmetric, β = π/2:

    h(n) = -h(N-1-n),  0 ≤ n ≤ N-1                                        (2.14a)

    H(e^{jω}) = j e^{-jω(N-1)/2} Σ_{k=1}^{N/2} d(k) sin(ω(k - 1/2))       (2.14b)

where d(k) = 2h(N/2 - k),  k = 1, 2, ..., N/2
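The four types above are determined entirely by the symmetry of h(n) and the parity of the length N, which a short helper can check; the function name and the example coefficients are mine:

```python
def fir_type(h, tol=1e-12):
    """Classify a linear-phase FIR impulse response into Type I-IV
    according to symmetric/antisymmetric h(n) and odd/even length N."""
    N = len(h)
    symmetric = all(abs(h[n] - h[N - 1 - n]) <= tol for n in range(N))
    antisymmetric = all(abs(h[n] + h[N - 1 - n]) <= tol for n in range(N))
    if symmetric:
        return "I" if N % 2 == 1 else "II"
    if antisymmetric:
        return "III" if N % 2 == 1 else "IV"
    return None  # not linear phase

# e.g. a symmetric response of odd length is Type I:
t1 = fir_type([1.0, 2.0, 3.0, 2.0, 1.0])
```

Note that a Type III response necessarily has a zero-valued centre tap, since h((N-1)/2) = -h((N-1)/2).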
2.5 FIR filter design

2.5.1 Steps in FIR filter design
1. Specify the frequency response H_d(e^{jω}) of the filter and obtain the corresponding impulse response h_d(n).
2. Choose a window function that meets the desired design specification and determine the length of the filter.
3. Calculate the filter impulse response using h(n) = h_d(n) w(n).
2.5.2 FIR filter type choice
Not every FIR filter type fits every filter:

    Type | Low Pass     | High Pass    | Band Pass | Band Stop
    I    | suitable     | suitable     | suitable  | suitable
    II   | suitable     | Not suitable | suitable  | Not suitable
    III  | Not suitable | Not suitable | suitable  | Not suitable
    IV   | Not suitable | suitable     | suitable  | Not suitable
2.5.3 FIR filter design by Windowing
Most idealized systems are non-causal and have infinite impulse responses. To achieve a finite impulse response and causality, truncating the ideal response is the most straightforward approach. For the ideal impulse response

    h_d(n) = (1/2π) ∫_{-π}^{π} H_d(e^{jω}) e^{jωn} dω                     (2.15)

the truncated response would be

    h(n) = { h_d(n) ,  0 ≤ n ≤ N-1
           { 0      ,  otherwise                                          (2.16a)

or

    h(n) = h_d(n) · w(n)                                                  (2.16b)

with

    w(n) = { 1 ,  0 ≤ n ≤ N-1
           { 0 ,  otherwise
Properties of a rectangular Window
If the window function is a rectangle, the frequency response of the filter is the desired response convolved with the Fourier transform of the rectangle:

    H(e^{jω}) = H_d(e^{jω}) ∗ W(e^{jω})                                   (2.17a)
              = H_d(e^{jω}) ∗ e^{-jω(N-1)/2} · sin(ωN/2)/sin(ω/2)         (2.17b)

Thus, if H(e^{jω}) should be closest to H_d(e^{jω}), N has to go to infinity.
Main and Sidelobes
With increasing N, the main-lobe width decreases.
Peak approximation error (PAE)

    PAE = γ · max_{ω∈R} |H_d(e^{jω}) - H(e^{jω})|                         (2.18)

The PAE is also directly related to the maximum allowable specification tolerance. With ω_c as the discontinuity of H_d(e^{jω}):

    |ΔH_d(e^{jω_c})| · γ ≤ min(α_1, α_2)                                  (2.19)
Commonly used Windows

Rectangular:

    w(n) = { 1 ,  0 ≤ n ≤ N-1
           { 0 ,  otherwise                                               (2.20a)

Bartlett:

    w(n) = { 2n/(N-1)     ,  0 ≤ n ≤ (N-1)/2
           { 2 - 2n/(N-1) ,  (N-1)/2 ≤ n ≤ N-1                            (2.20b)

Hanning:

    w(n) = (1/2)(1 - cos(2πn/(N-1))),  0 ≤ n ≤ N-1                        (2.20c)

Hamming:

    w(n) = 0.54 - 0.46 cos(2πn/(N-1)),  0 ≤ n ≤ N-1                       (2.20d)

Blackman:

    w(n) = 0.42 - 0.5 cos(2πn/(N-1)) + 0.08 cos(4πn/(N-1)),  0 ≤ n ≤ N-1  (2.20e)

These windows differ in their properties, such as main-lobe width and side-lobe attenuation.
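The closed forms (2.20a)-(2.20e) can be written out directly; numpy ships the same (N-1)-denominator convention in np.bartlett, np.hanning, np.hamming and np.blackman, which makes a convenient cross-check. The length N used below is an illustrative choice:

```python
import numpy as np

def rectangular(N):
    return np.ones(N)

def bartlett(N):
    n = np.arange(N)
    return np.where(n <= (N - 1) / 2, 2 * n / (N - 1), 2 - 2 * n / (N - 1))

def hanning(N):
    n = np.arange(N)
    return 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))

def hamming(N):
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

def blackman(N):
    n = np.arange(N)
    return (0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
            + 0.08 * np.cos(4 * np.pi * n / (N - 1)))

w = hamming(21)   # peaks at 1.0 in the centre, 0.08 at the endpoints
```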
2.5.4 Kaiser Window Filter Design
The Kaiser window function adapts to the specifications. Let α = (N-1)/2 and I_0 be the zeroth-order modified Bessel function of the first kind, while β is a shape parameter:

    w(n) = { I_0( β sqrt(1 - ((n-α)/α)²) ) / I_0(β) ,  0 ≤ n ≤ N-1
           { 0                                       ,  otherwise         (2.21)
Design a filter with Kaiser window
Assume the transition region is Δω = ω_s - ω_p. With A = -20 log_10 min(α_1, α_2):

    β = { 0.1102 (A - 8.7)                         ,  A > 50
        { 0.5842 (A - 21)^0.4 + 0.07886 (A - 21)   ,  21 ≤ A ≤ 50
        { 0                                        ,  A < 21              (2.22)

and

    N = (A - 8) / (2.285 Δω) + 1                                          (2.23)
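The Kaiser design rules (2.22)/(2.23) can be sketched as a small helper; the 1% tolerances and 0.2π transition width used at the bottom are an illustrative lowpass specification, not part of the formulary:

```python
import math

def kaiser_params(alpha1, alpha2, delta_omega):
    """Kaiser design: attenuation A, shape beta (2.22), length N (2.23),
    with N rounded up to the next integer."""
    A = -20 * math.log10(min(alpha1, alpha2))
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0
    N = math.ceil((A - 8) / (2.285 * delta_omega)) + 1
    return A, beta, N

# Illustrative spec: 1% pass- and stop-band tolerances, transition 0.2*pi
A, beta, N = kaiser_params(0.01, 0.01, 0.2 * math.pi)
```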
Integrated Square Error
To optimize a windowed filter, one attempts to minimize the integrated squared error

    ε² = (1/2π) ∫_{-π}^{π} |H_d(e^{jω}) - H(e^{jω})|² dω                  (2.24)
2.5.5 Optimal Filter Design
Optimal filter design does not use the integrated square error but the maximum error, which occurs at the discontinuities of H_d(e^{jω}). This error is distributed equally across the frequency band. See pages 33-36 of the script for information about optimal filter design.
2.6 Infinite Impulse Response (IIR) filter
IIR filters are systems whose impulse response h(n) is

• real
• causal
• stable

and possesses a rational z-transform with a_0 = 1, i.e.

    H(z) = Σ_{k=0}^{N-1} b_k z^{-k} / (1 - Σ_{k=1}^{N-1} a_k z^{-k})      (2.25)
2.6.1 Properties of IIR filters
• Stability: all poles of H(z) must lie inside the unit circle.
• Linear phase: IIR filters do not have linear phase; only the magnitude response is specified.
2.6.2 IIR filter design from an analog or continuous-time filter (Impulse Invariance Method)
Steps to design:
• Step 1: Specification of the digital filter
• Step 2: Translation to a continuous-time filter specification
• Step 3: Determination of the continuous-time filter system function H_c(s)
• Step 4: Transformation of H_c(s) to a digital filter system function H(z)
Step 1: Filter parameters such as α_1, α_2, ω_p, ω_s and T_d have to be set.

Step 2: If a filter H_c(Ω) is given, applying the relation Ω = ω/T_d leads to the continuous-time filter specifications, where T_d is the design sampling interval. Then the filter has to be checked to fulfil the specifications given in Step 1.

Step 4: Steps for the transformation of H_c(s) to H(z):
1. From H_c(s) and H_c(jΩ), determine h_c(t).
2. Obtain the digital sequence by h(n) = T_d · h_c(nT_d), where T_d is the design sampling interval.
3. H(z) is obtained from h(n).
2.6.3 The Butterworth filter
With Ω_c being the 3 dB cut-off frequency:

    |H(jΩ)|² = 1 / (1 + (Ω/Ω_c)^{2N})                                     (2.26)

With the conditions |H(jΩ)| ≥ 1 - α_1 for 0 ≤ |Ω| ≤ Ω_p and |H(jΩ)| ≤ α_2 for Ω_s ≤ |Ω|:

    N ≥ [ log(1/(1-α_1)² - 1) - log(1/α_2² - 1) ] / [ 2 (log Ω_p - log Ω_s) ]   (2.27a)

    Ω_c = Ω_s / (1/α_2² - 1)^{1/(2N)}                                     (2.27b)

    Ω_c = Ω_p / (1/(1-α_1)² - 1)^{1/(2N)}                                 (2.27c)
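Formulas (2.27a)/(2.27b) translate directly into code; the tolerance and edge-frequency values below are an illustrative specification, not from the formulary:

```python
import math

def butterworth_order(alpha1, alpha2, omega_p, omega_s):
    """Minimal N from (2.27a), rounded up to an integer."""
    num = math.log10(1 / (1 - alpha1) ** 2 - 1) - math.log10(1 / alpha2 ** 2 - 1)
    den = 2 * (math.log10(omega_p) - math.log10(omega_s))
    return math.ceil(num / den)

def butterworth_cutoff_stopband(alpha2, omega_s, N):
    """Omega_c from (2.27b): the stop-band edge is met exactly."""
    return omega_s / (1 / alpha2 ** 2 - 1) ** (1 / (2 * N))

# Illustrative spec: alpha1 = alpha2 = 0.1, Omega_p = 1, Omega_s = 2
N = butterworth_order(0.1, 0.1, 1.0, 2.0)
wc = butterworth_cutoff_stopband(0.1, 2.0, N)
H_p = 1 / math.sqrt(1 + (1.0 / wc) ** (2 * N))   # magnitude at Omega_p
```

Choosing Ω_c from (2.27b) meets the stop-band constraint with equality, so the pass-band constraint is then over-fulfilled (and vice versa for (2.27c)).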
2.6.4 Bilinear Transformation
The bilinear transformation maps the imaginary axis of the s-plane onto the unit circle. Let

    s = (2/T_d) · (1 - z^{-1}) / (1 + z^{-1})                             (2.28a)

which leads to

    H(z) = H_c( (2/T_d) · (1 - z^{-1}) / (1 + z^{-1}) )                   (2.28b)

With z = e^{jω}:

    s = (2/T_d) · (1 - e^{-jω}) / (1 + e^{-jω}) = j (2/T_d) tan(ω/2)      (2.28c)

With s = σ + jΩ it follows:

    Ω = (2/T_d) tan(ω/2)  and  σ = 0                                      (2.28d)
Remarks on the bilinear transformation
• The frequency warping Ω = (2/T_d) tan(ω/2) leads to a deviation in the detailed shape of H_c(jΩ).
• The delay time is also modified: τ_d = τ_c {1 + ω_c T_d / 2}
• The bilinear transformation has no aliasing problems.
• This transformation is the most used in practice.
Conversion from continuous-time to system function
Let Ω_c be the cut-off frequency and s_k = Ω_c · e^{jπ(2k+N+1)/(2N)} for k = 0, ..., N-1 be the poles of the function. Take those in the left half-plane and:

    H(s) = Π_{k=0}^{N-1} Ω_c / (s - s_k) = Ω_c^N / Π_{k=0}^{N-1} (s - s_k)    (2.29)
2.6.5 Implementation of IIR Filters
The output of the filter can be described as

    y(n) = Σ_{k=1}^{M} a_k y(n-k) + Σ_{r=0}^{N-1} b_r x(n-r)              (2.30)
Direct Form I
A signal flow graph can be seen in Figure 5.8 on page 47 of the DSP script.

    H(z) = H_2(z) H_1(z) = [ 1 / (1 - Σ_{k=1}^{M} a_k z^{-k}) ] · ( Σ_{r=0}^{N-1} b_r z^{-r} )    (2.31)
Direct Form II
The direct form II changes the order of H_1(z) and H_2(z). A signal flow graph can be seen in Figure 5.9 on page 48 of the DSP script.

    H_1(z) H_2(z) = H(z) = H_2(z) H_1(z)                                  (2.32)
Cascade Form
A signal flow graph can be seen in Figure 5.10 on page 48 of the DSP script.

    H(z) = Σ_{r=0}^{N-1} b_r z^{-r} / (1 - Σ_{k=1}^{M} a_k z^{-k}) = A Π_k H_k(z)    (2.33a)

with

    H_k(z) = (1 + β_{1k} z^{-1} + β_{2k} z^{-2}) / (1 - α_{1k} z^{-1} - α_{2k} z^{-2})    (2.33b)
2.7 Filter Specification
Non-ideal filters have one or more pass-bands (ω_p), transition-bands (between ω_p and ω_s) and stop-bands (ω_s). For the pass- and the stop-band, a tolerance α_i has to be specified. Therefore a low-pass filter would have the following specification:
2.7.1 Ripple and Attenuation calculation
Given the pass-band attenuation ATT_P and the stop-band attenuation ATT_S of an IIR filter:

    α_1 = 1 - 10^{-ATT_P / 20}                                            (2.34a)
    α_2 = 10^{-ATT_S / 20}                                                (2.34b)

Given the pass-band ripple R_P and the stop-band ripple R_S of a FIR filter:

    α_1 = 10^{R_P / 20} - 1                                               (2.34c)
    α_2 = 10^{-R_S / 20}                                                  (2.34d)
3 Random Variables and Stochastic Processes
This is a revision of the Stochastic Signals and Systems course, which has its own formulary: https://github.com/Tyde/stosigsysfs/blob/master/document.pdf?raw=true
4 The Finite Fourier Transform

4.1 Definition
Let x(0), ..., x(N-1) be realisations of a stationary random process X(n), n = 0, 1, ..., N-1:

    X_N(e^{jω}) = Σ_{n=0}^{N-1} X(n) e^{-jωn},   -∞ < ω < ∞               (4.1)

and inverse:

    X(n) = (1/2π) ∫_{-π}^{π} X_N(e^{jω}) e^{jωn} dω,   n = 0, 1, ..., N-1    (4.2)
4.1.1 Properties
• Periodicity: X_N(e^{j(ω+2πk)}) = X_N(e^{jω})   ∀k ∈ Z
• Symmetry: X_N(e^{-jω}) = X_N(e^{jω})*   for X(n) ∈ R
• Linearity: F{a X(n) + b Y(n)} = a X_N(e^{jω}) + b Y_N(e^{jω})
4.2 Discrete Finite Fourier Transform
With ω_k = 2πk/N, k = 0, ..., N-1:

    X_N(e^{j2πk/N}) = Σ_{n=0}^{N-1} X(n) e^{-j2πkn/N},   k = 0, 1, ..., N-1    (4.3)
4.3 Statistical Properties

4.3.1 White Gaussian Process
Let X(0), ..., X(N-1) be real-valued random variables with X(n) ~ N(0, 1) and N = 2^r with r ∈ N. Then:

    Re{X_N(e^{j2πk/N})} = Σ_{n=0}^{N-1} X(n) cos(2πkn/N)                  (4.4a)

    Im{X_N(e^{j2πk/N})} = -Σ_{n=0}^{N-1} X(n) sin(2πkn/N)                 (4.4b)

The mean and variance can be determined to:

    [ Re{X_N(e^{j2πk/N})} ]      ( [0]   N  [1 0] )
    [ Im{X_N(e^{j2πk/N})} ] ~ N ( [0] ,  - [0 1] )                        (4.4c)
                                         2
4.3.2 Real Stationary Random Process
If the process has arbitrary μ_x, c_XX(κ) and C_XX(e^{jω}), the asymptotic distribution for N → ∞ is:

    [ Re X_N(e^{jω}) ]      ( [0]   N              [1 0] )
    [ Im X_N(e^{jω}) ] ~ N ( [0] ,  - C_XX(e^{jω}) [0 1] ),   ω ∈ (0, π)  (4.5a)
                                    2

For ω = 0 and ω = π the finite Fourier transform is purely real:

    Re X_N(e^{jω}) ~ N( N μ_x δ(ω), N C_XX(e^{jω}) )                      (4.5b)

For fixed frequencies 0 ≤ ω_(1) < ... < ω_(M) ≤ π the random variables X_N(e^{jω_(1)}), X_N(e^{jω_(2)}), ..., X_N(e^{jω_(M)}) are asymptotically for N → ∞ independently distributed.
4.4 Segmentation of the Finite Fourier Transform
Division of the random process X(n), n = 0, ..., N-1 into L segments of length M with N = M L yields a finite Fourier transform for each segment l which has the same asymptotic distribution as for the non-segmented process:

    X_M(e^{jω}, l) = Σ_{n=0}^{M-1} X(n + (l-1)M) e^{-jωn},   l = 1, ..., L    (4.6)

The random variables X_M(e^{jω}, l) are for l = 1, ..., L asymptotically as N → ∞ independently distributed.
4.5 Windowing of the process
Let w(n), n = 0, ..., N-1 be a window (zero outside this range) that is applied to the random process:

    X_w(e^{jω}) = Σ_{n=0}^{N-1} w(n) X(n) e^{-jωn}                        (4.7)

The window has the effect that the variance is multiplied by the factor Σ_{n=0}^{N-1} w(n)² and the mean at ω = 0 is multiplied by the factor Σ_{n=0}^{N-1} w(n).
4.5.1 Power of the windowed process
The average power of X(n) is defined by

    P_XX = lim_{N→∞} (1/N) Σ_{n=0}^{N-1} E[X(n)²]
         = lim_{N→∞} (1/2π) ∫_{-π}^{π} (1/N) E[ |X_N(e^{jω})|² ] dω       (4.8)
5 Digital Spectral Analysis

5.1 Consistency and Mean Square Error in terms of bias and variance
The MSE can be rewritten as follows:

    MSE(μ̂_X) = Var(μ̂_X) + bias(μ̂_X)²                                     (5.1)

The estimator μ̂_X is called consistent if:

    lim_{N→∞} MSE(μ̂_X) = 0                                               (5.2)
5.2 Mean Estimation
The mean of X(0), ..., X(N-1), which are each distributed as N(μ_X, σ_X²), can be estimated by:

    μ̂_X = (1/N) Σ_{n=0}^{N-1} X(n)                                       (5.3)
5.3 The Periodogram
A candidate estimator for C_XX(e^{jω}) is the periodogram:

    I_XX^N(e^{jω}) = (1/N) |X_N(e^{jω})|² = (1/N) | Σ_{n=0}^{N-1} X(n) e^{-jωn} |²    (5.4)
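The periodogram (5.4) at the DFT frequencies ω_k = 2πk/N is conveniently computed with the FFT; the white-noise record below is an illustrative input:

```python
import numpy as np

def periodogram(x):
    """I_XX^N(e^{j 2 pi k / N}) = |X_N|^2 / N via the FFT."""
    N = len(x)
    return np.abs(np.fft.fft(x)) ** 2 / N

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
I_xx = periodogram(x)
```

By Parseval's relation the periodogram ordinates average exactly to the sample power (1/N) Σ x(n)².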
5.3.1 Distribution of the Periodogram
The periodogram for data values X(n) is independently distributed as:

    I_XX^N ~ { (C_XX(e^{jω}) / 2) χ²_2 ,  ω ≠ πk
             { C_XX(e^{jω}) χ²_1       ,  ω = πk                          (5.5)
5.3.2 Mean of the Periodogram
With the use of (7.1.2):

    E[I_XX^N(e^{jω})] = (1/2πN) |Δ_N(e^{jω})|² ∗ C_XX(e^{jω}) + (|Δ_N(e^{jω})|²/N) · μ_x²    (5.6a)

where

    Δ_N(e^{jω}) = Σ_{n=0}^{N-1} e^{-jωn} = e^{-jω(N-1)/2} · sin(ωN/2) / sin(ω/2)    (5.6b)

Thus, for a zero-mean stationary process X(n), for N → ∞:

    lim_{N→∞} E[I_XX^N] = C_XX(e^{jω}),   ω ≠ 2kπ                         (5.7a)

    lim_{N→∞} E[I^N_{X-μ_x, X-μ_x}] = C_XX(e^{jω})   ∀ω                   (5.7b)
5.3.3 Variance of the Periodogram
Let X(n) be a real white Gaussian process with variance σ² and ω, λ ≠ 0:

    Var[I_XX(e^{jω})] ≈ C_XX(e^{jω})² · (1/N²) ( |Δ_N(e^{j2ω})|² + N² )   (5.8)

The variance does not vanish as N → ∞; therefore the periodogram is not a consistent estimator.
5.3.4 Averaging Periodograms (Bartlett's method)
To improve the consistency of the periodogram, it is possible to average the estimates of multiple independent measurements. With X_l(n) = X(n + (l-1)·M) and n = 0, ..., M-1:

    I_XX^M(e^{jω}, l) = (1/M) | Σ_{n=0}^{M-1} X_l(n) e^{-jωn} |²,   l = 1, ..., L    (5.9)

and estimate with:

    Ĉ_XX^B(e^{jω}) = (1/L) Σ_{l=1}^{L} I_XX^M(e^{jω}, l)                  (5.10)

The variance of Bartlett's estimator decreases when L increases.
Mean and Variance of Bartlett's method

    E[Ĉ_XX^B] = (1/L) Σ_{l=1}^{L} E[I_XX^M(e^{jω}, l)] ≈ E[I_XX^M(e^{jω})]    (5.11a)

    Var[Ĉ_XX^B] = (1/L²) Σ_{l=1}^{L} Var[I_XX^M(e^{jω}, l)] ≈ (1/L) Var[I_XX^M(e^{jω})]    (5.11b)
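Bartlett's estimator (5.9)/(5.10) can be sketched with a reshape and an FFT over the segment axis; the record length, segment count and white-noise input are illustrative:

```python
import numpy as np

def bartlett_estimate(x, L):
    """Split x into L non-overlapping segments of length M = len(x)//L and
    average the segment periodograms I_XX^M(e^{j omega}, l)."""
    M = len(x) // L
    segments = x[:L * M].reshape(L, M)
    I = np.abs(np.fft.fft(segments, axis=1)) ** 2 / M
    return I.mean(axis=0)                    # average over l = 1, ..., L

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)        # unit-variance white noise: C_XX = 1
C_hat = bartlett_estimate(x, L=32)   # 32 segments of length 128
```

Per (5.11b), increasing L divides the variance by roughly L, at the price of the coarser frequency resolution of the shorter segments.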
5.3.5 Welch's method
Welch's method is similar to Bartlett's method, but the data segments are multiplied by a window w_M(n) of length M and the segments may overlap. With X_l(n) = X(n + (l-1)·D):

    Ĉ_XX^W(e^{jω}) = (1/L) Σ_{l=1}^{L} I^M_{WX,WX}(e^{jω}, l)             (5.12a)

and

    I^M_{WX,WX}(e^{jω}, l) = (1/MA) | Σ_{n=0}^{M-1} w_M(n) · X_l(n) e^{-jωn} |²    (5.12b)

where A is a factor to get an asymptotically unbiased estimate, which can be found using Parseval's theorem:

    (1/2π) ∫_{-π}^{π} |W_M(e^{jλ})|² dλ = Σ_{n=0}^{M-1} w_M(n)² = M A     (5.12c)

Mean of Welch's estimator:

    E[Ĉ_XX^W(e^{jω})] = (1/L) Σ_{l=1}^{L} E[I^M_{WX,WX}(e^{jω}, l)] ≈ E[I^M_{WX,WX}(e^{jω})]    (5.13a)

with

    E[I^M_{WX,WX}(e^{jω})] ≈ C_XX(e^{jω}),   ω ≠ 0                        (5.13b)
5.3.6 Smoothing the Periodogram
For ω_k ≠ ω_l, X_N(e^{jω_k}) and X_N(e^{jω_l}) are asymptotically independent, and so I_XX^N(e^{jω_k}) and I_XX^N(e^{jω_l}) are also independent. Neighbouring bins can therefore be averaged:

    Ĉ_XX^S(e^{jω}) = (1/(2m+1)) Σ_{k=-m}^{m} I_XX^N(e^{j2π(k(ω)+k)/N})    (5.14)

where k(ω) denotes the DFT bin index belonging to ω.
Variance of the smoothed Periodogram

    lim_{N→∞} Var[Ĉ_XX^S(e^{jω})] = (1/(2m+1)) C_XX(e^{jω})²              (5.15)
5.3.7 General Class of Spectral Estimates
With A = (1/(2m+1)) Σ_{k=-m}^{m} W_k, a general class of estimators can be defined analogously to the smoothed periodogram:

    Ĉ_XX^{SW}(e^{jω}) = (1/((2m+1)A)) Σ_{k=-m}^{m} W_k I_XX^N(e^{j2π(k(ω)+k)/N})    (5.16)
5.4 The Log-Spectrum
Estimate 10 log C_XX(e^{jω}) from 10 log Ĉ_XX(e^{jω}) instead of estimating the spectrum itself.
5.5 The Blackman-Tukey Method
This method estimates the sample covariance function and windows it with w_{2M-1}(κ):

    w_{2M-1}(κ) = { w_{2M-1}(-κ) ,  for |κ| ≤ M-1, with M ≪ N
                  { 0            ,  otherwise                             (5.17a)

    Ĉ_XX^{BT}(e^{jω}) = Σ_{κ=-M+1}^{M-1} ĉ_XX(κ) · w_{2M-1}(κ) · e^{-jωκ}    (5.17b)

This is the same as

    Ĉ_XX^{BT}(e^{jω}) = (1/2π) ∫_{-π}^{π} I^N_{X-X̄,X-X̄}(e^{jλ}) · W_{2M-1}(e^{j(ω-λ)}) dλ    (5.17c)

with the following constraint on the window:

    W_{2M-1}(e^{jω}) ≥ 0   ∀ω                                             (5.17d)

The Blackman-Tukey estimator is unbiased for N → ∞ and its variance tends to zero if the window function has finite energy. The variance of the Blackman-Tukey method is given by:

    Var[Ĉ_XX^{BT}] ≈ (1/N) C_XX(e^{jω})² Σ_{κ=-M+1}^{M-1} w_{2M-1}(κ)²    (5.17e)
5.6 Cross-Spectrum Estimation

5.6.1 The Cross-Periodogram
For X(0), ..., X(N-1), Y(0), ..., Y(N-1) the objective is to estimate the cross-spectrum C_YX(e^{jω}) for -π < ω < π:

    I_YX^N(e^{jω}) = (1/N) Y_N(e^{jω}) X_N(e^{jω})*                       (5.18)

Asymptotically, this estimator has mean C_YX(e^{jω}) and variance C_YY(e^{jω}) C_XX(e^{jω}).
5.6.2 Segmented Cross-Periodogram
For the different segments l = 1, ..., L with length M:

    Ĉ_YX^B(e^{jω}) = (1/L) Σ_{l=1}^{L} (1/M) Y_M(e^{jω}, l) X_M(e^{jω}, l)*    (5.19a)

with

    Y_M(e^{jω}, l) = Σ_{n=0}^{M-1} Y(n + (l-1)M) e^{-jωn}                 (5.19b)

    X_M(e^{jω}, l) = Σ_{n=0}^{M-1} X(n + (l-1)M) e^{-jωn}                 (5.19c)

For N → ∞ the mean is C_YX(e^{jω}) and the variance is (1/L) C_YY(e^{jω}) C_XX(e^{jω}).
6 Parametric Spectrum Estimation

6.1 Auto-regressive (AR) Processes
With

    C_XX(e^{jω}) = σ_Z² / | 1 + Σ_{k=1}^{p} a_k e^{-jωk} |²               (6.1)

the objective is to estimate the parameters a_1, ..., a_p, which are then used to estimate the spectrum C_XX(e^{jω}).
6.1.1 Yule-Walker Equations

    c_XX(l) + Σ_{k=1}^{p} a_k c_XX(l-k) = { σ_Z² ,  l = 0
                                          { 0    ,  l = 1, ..., p         (6.2)

Using ĉ_XX(0), ..., ĉ_XX(p) leads to a unique solution for a_1, ..., a_p.
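The Yule-Walker approach (6.2) can be sketched as: estimate the sample covariances, solve the p×p Toeplitz system for a_1, ..., a_p, and read σ_Z² off the l = 0 equation. The AR(2) coefficients simulated below are an illustrative choice:

```python
import numpy as np

def sample_cov(x, max_lag):
    """Biased sample covariances c_XX(0), ..., c_XX(max_lag)."""
    N = len(x)
    x = x - x.mean()
    return np.array([np.dot(x[:N - l], x[l:]) / N for l in range(max_lag + 1)])

def yule_walker(x, p):
    c = sample_cov(x, p)
    R = np.array([[c[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -c[1:p + 1])     # c(l) + sum_k a_k c(l-k) = 0, l >= 1
    sigma2 = c[0] + np.dot(a, c[1:p + 1])   # the l = 0 equation
    return a, sigma2

# Simulate X(n) + a1 X(n-1) + a2 X(n-2) = Z(n) with Z(n) white N(0, 1)
rng = np.random.default_rng(2)
a_true = np.array([-0.75, 0.5])
z = rng.standard_normal(20000)
x = np.zeros_like(z)
for n in range(2, len(z)):
    x[n] = -a_true[0] * x[n - 1] - a_true[1] * x[n - 2] + z[n]
a_hat, sigma2_hat = yule_walker(x, p=2)
```

With a long record the estimated coefficients and innovation variance come out close to the simulated values.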
6.1.2 Extended Yule-Walker equations
Z(n) and V(n) are uncorrelated white noise processes with variances σ_Z² and σ_V². With the difference equation of the system Y(n) + aY(n-1) = Z(n) + V(n) + aV(n-1):

    c_YY(l) + Σ_{k=1}^{p} a_k c_YY(l-k) = { σ_Z² + σ_V² ,  l = 0
                                          { a_l σ_V²    ,  l = 1, ..., p
                                          { 0           ,  l > p          (6.3)
6.2 Moving Average (MA) Process
Consider a transversal filter h(n) with real-valued input Z(n) and real-valued output X(n):

    X(n) = Σ_{k=0}^{q} h(k) Z(n-k)                                        (6.4)

If Z(n) is a white noise process, X(n) is a moving average (MA) process with

    C_XX = |H(e^{jω})|² C_ZZ(e^{jω}) = |H(e^{jω})|² σ_Z²                  (6.5)

With b(m) = σ_Z h(m), m = 0, ..., q and h(0) = 1 we get

    c_XX(l) = Σ_{m=0}^{q-|l|} b(m) b(m+|l|)   for 0 ≤ |l| ≤ q             (6.6)

and

    C_XX = B(e^{jω}) B(e^{-jω})                                           (6.8)
6.2.1 Solving
1. Substitute z = e^{jω} in C_XX(e^{jω}).
2. Calculate P(z) = z^q C_XX(z).
3. Assign all roots z_i of P(z) inside the unit circle to B(z): B(z) = b(0) Π_{i=1}^{q} (1 - z_i z^{-1}) with b(0) > 0 and |z_i| ≤ 1.
4. Calculate h(m) = b(m)/b(0) for m = 1, ..., q, with b(0)² = σ_Z² = c_XX(0) / (1 + Σ_{m=1}^{q} h(m)²) and c_XX(l) replaced by ĉ_XX(l).

The spectrum can then be obtained by the relationship

    Ĉ_XX(e^{jω}) = | Σ_{n=0}^{q} b̂(n) e^{-jωn} |²                        (6.9)
6.3 Auto-regressive Moving Average (ARMA) Process
The difference equation of an ARMA process, denoted ARMA(p, q), is given by

    X(n) + Σ_{k=1}^{p} a_k X(n-k) = Z(n) + Σ_{k=1}^{q} b_k Z(n-k)         (6.10)

The modified Yule-Walker equations can be found:

    c_XX(l) + Σ_{k=1}^{p} a_k c_XX(l-k) = { σ_Z² Σ_{k=l}^{q} b_k h(k-l) ,  l = 0, ..., q
                                          { 0                           ,  l > q      (6.11)

To solve for a_1, ..., a_p, the equations with l = q+1, ..., q+p can be employed. Then, to determine b_1, ..., b_q, the equations with l = 1, ..., q are used together with (6.2.1).
6.4 Model order selection
6.4.1 Akaike's Information Criterion (AIC)

    AIC(m) = log σ̂²_{Z,m} + m (2/N),   m = 1, ..., M                     (6.12)
6.4.2 Minimum Description Length (MDL)

    MDL(m) = log σ̂²_{Z,m} + m (log N / N),   m = 1, ..., M               (6.13)
7 Miscellaneous

7.1 Useful mathematical equations
7.1.1 Geometric series

    Σ_{n=0}^{N-1} q^n = (1 - q^N) / (1 - q)   for q ≠ 1                   (7.1)
7.1.2 Conversion from Geometric series to trigonometric fraction
Let (1 - q^N)/(1 - q) with q = e^{-jω}:

    (1 - e^{-jNω}) / (1 - e^{-jω}) = e^{-jω(N-1)/2} · sin(Nω/2) / sin(ω/2)    (7.2)
7.1.3 Modulation Theorem
Let F{f(x)} = F(k) be the Fourier transform. Then:

    F{cos(2πk_0 x) f(x)} = (1/2) [ F(k - k_0) + F(k + k_0) ]              (7.3)