Lesson 9: Autoregressive-Moving Average (ARMA) models
Umberto Triacca
Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica
Università dell'Aquila,
[email protected]
Introduction
We have seen that in the class of stationary, zero mean, Gaussian processes the probabilistic structure of a stochastic process is completely characterized by the autocovariance function.
Autocovariance function

[Diagram: for a stationary, zero mean, Gaussian process, the DGP is characterized by the autocovariance function γ_x(k), to be recovered from the observed data x_1, ..., x_T.]
Introduction
However, in general, to know the autocovariance function means to know a sequence composed of an infinite number of elements. We would have to estimate an infinite number of parameters γ_x(0), γ_x(1), γ_x(2), ... from observed data. This mission is impossible.
Introduction
We introduce a very important class of stochastic processes whose autocovariance functions depend on a finite number of unknown parameters: the class of the Autoregressive-Moving Average (ARMA) processes.
Autoregressive-Moving Average (ARMA) models

Definition. The process {x_t; t ∈ Z} is an autoregressive-moving average process of order (p, q), denoted x_t ∼ ARMA(p, q), if

x_t − φ_1 x_{t−1} − ... − φ_p x_{t−p} = u_t + θ_1 u_{t−1} + ... + θ_q u_{t−q}  ∀t ∈ Z,

where u_t ∼ WN(0, σ_u²), φ_1, ..., φ_p, θ_1, ..., θ_q are p + q constants, and the polynomials φ(z) = 1 − φ_1 z − ... − φ_p z^p and θ(z) = 1 + θ_1 z + ... + θ_q z^q have no common factors.
Autoregressive-Moving Average (ARMA) models
For q = 0 the process reduces to an autoregressive process of order p, denoted x_t ∼ AR(p):

x_t − φ_1 x_{t−1} − ... − φ_p x_{t−p} = u_t  ∀t ∈ Z.

For p = 0 it reduces to a moving average process of order q, denoted x_t ∼ MA(q):

x_t = u_t + θ_1 u_{t−1} + ... + θ_q u_{t−q}  ∀t ∈ Z.
An example of Autoregressive-Moving Average (ARMA) process

The process {x_t; t ∈ Z} defined by

x_t = 0.3 x_{t−1} + u_t + 0.7 u_{t−1}  ∀t ∈ Z,

where u_t ∼ WN(0, σ_u²), is an ARMA(1,1) process. Here φ(z) = 1 − 0.3z and θ(z) = 1 + 0.7z.
An example of Autoregressive-Moving Average (ARMA) process
A realization of the ARMA(1,1) process x_t = 0.3 x_{t−1} + u_t + 0.7 u_{t−1} is presented in the following figure.
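A realization like the one in the figure can be generated with a short script. The following is a minimal sketch; the function name, burn-in length, and random seed are illustrative choices, not from the slides:

```python
import numpy as np

def simulate_arma11(phi, theta, n, sigma=1.0, burn_in=200, seed=0):
    """Simulate a realization of x_t = phi*x_{t-1} + u_t + theta*u_{t-1},
    with u_t ~ WN(0, sigma^2). A burn-in period is discarded so the
    arbitrary starting value x_0 = 0 does not affect the sample."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        x[t] = phi * x[t - 1] + u[t] + theta * u[t - 1]
    return x[burn_in:]

# The ARMA(1,1) process from the slide: phi = 0.3, theta = 0.7
x = simulate_arma11(phi=0.3, theta=0.7, n=300)
```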
An example of Autoregressive (AR) process
The process {x_t; t ∈ Z} defined by

x_t = 0.7 x_{t−1} − 0.5 x_{t−2} + u_t  ∀t ∈ Z,

where u_t ∼ WN(0, σ_u²), is an AR(2) process. Here φ(z) = 1 − 0.7z + 0.5z².
An example of Autoregressive (AR) process
A realization of the AR(2) process x_t = 0.7 x_{t−1} − 0.5 x_{t−2} + u_t is presented in the following figure.
An example of Moving Average (MA) process
The process {x_t; t ∈ Z} defined by

x_t = u_t + 0.7 u_{t−1}  ∀t ∈ Z,

where u_t ∼ WN(0, σ_u²), is an MA(1) process. Here θ(z) = 1 + 0.7z.
An example of Moving Average (MA) process
A realization of the MA(1) process x_t = u_t + 0.7 u_{t−1} is presented in the following figure.
An example of over-parameterization
Consider the process {x_t; t ∈ Z} defined by

x_t = x_{t−1} − 0.21 x_{t−2} + u_t − 0.7 u_{t−1}  ∀t ∈ Z,

where u_t ∼ WN(0, σ_u²). This process looks like an ARMA(2,1) process, but it is not.
An example of over-parameterization

Here φ(z) = 1 − z + 0.21z² = (1 − 0.7z)(1 − 0.3z) and θ(z) = 1 − 0.7z. We note that both polynomials have a common factor, namely 1 − 0.7z. Discarding the common factor in each leaves

φ*(z) = 1 − 0.3z and θ*(z) = 1.

Thus the process is an AR(1) process, defined by x_t = 0.3 x_{t−1} + u_t.
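The common factor can be detected numerically by comparing the roots of the two polynomials. A sketch, using only `numpy.roots` (which expects coefficients ordered from the highest power down):

```python
import numpy as np

# phi(z) = 1 - z + 0.21 z^2 and theta(z) = 1 - 0.7 z from the example above.
phi_roots = np.roots([0.21, -1.0, 1.0])  # roots of 0.21 z^2 - z + 1
theta_roots = np.roots([-0.7, 1.0])      # root of 1 - 0.7 z

# A root shared by both polynomials reveals the common factor 1 - 0.7z:
common = [r for r in phi_roots if np.any(np.isclose(r, theta_roots))]
print(common)  # z = 1/0.7 = 10/7 is a root of both polynomials
```

After cancelling the factor with root 1/0.7, only the root 1/0.3 of φ*(z) = 1 − 0.3z remains, confirming the AR(1) representation.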
Causal Autoregressive-Moving Average (ARMA) models

Definition. An ARMA(p, q) process {x_t; t ∈ Z} is causal (strictly, a causal function of {u_t; t ∈ Z}) if there exist constants ψ_0, ψ_1, ... such that

Σ_{j=0}^∞ |ψ_j| < ∞

and

x_t = Σ_{j=0}^∞ ψ_j u_{t−j}  ∀t.
Autoregressive-Moving Average (ARMA) models

Here, it is important to clarify the meaning of the equality

x_t = Σ_{j=0}^∞ ψ_j u_{t−j}  ∀t.

It means that

lim_{n→∞} E[(x_t − Σ_{j=0}^n ψ_j u_{t−j})²] = 0.

The equality is defined in terms of a limit in quadratic mean.
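The ψ-weights can be computed recursively from the relation ψ(z)φ(z) = θ(z), which gives ψ_j = θ_j + φ_1 ψ_{j−1} + ... + φ_p ψ_{j−p} (with θ_j = 0 for j > q). A sketch with a hypothetical helper, applied to the ARMA(1,1) example from earlier slides:

```python
import numpy as np

def psi_weights(phi, theta, n):
    """First n psi-weights of the causal MA(infinity) representation
    x_t = sum_j psi_j u_{t-j}, obtained by expanding theta(z)/phi(z).
    phi = [phi_1, ..., phi_p], theta = [theta_1, ..., theta_q]."""
    psi = [1.0]  # psi_0 = 1
    for j in range(1, n):
        val = theta[j - 1] if j <= len(theta) else 0.0
        for k in range(1, min(j, len(phi)) + 1):
            val += phi[k - 1] * psi[j - k]
        psi.append(val)
    return np.array(psi)

# ARMA(1,1) with phi(z) = 1 - 0.3z, theta(z) = 1 + 0.7z:
psi = psi_weights([0.3], [0.7], 10)
# psi_1 = phi_1 + theta_1 = 1.0, and psi_j = 0.3 * psi_{j-1} thereafter,
# so the weights decay geometrically and are absolutely summable.
```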
Autoregressive-Moving Average (ARMA) models
The following two theorems provide, respectively, characterizations of the causality and of the stationarity of an ARMA(p, q) process.
Autoregressive-Moving Average (ARMA) models

Theorem. An ARMA(p, q) process {x_t; t ∈ Z} is causal if and only if φ(z) = 1 − φ_1 z − ... − φ_p z^p ≠ 0 for all |z| ≤ 1.

Theorem. An ARMA(p, q) process {x_t; t ∈ Z} is stationary if and only if φ(z) = 1 − φ_1 z − ... − φ_p z^p ≠ 0 for all |z| = 1.
Autoregressive-Moving Average (ARMA) models
The causality and the stationarity of an ARMA process depend entirely on the autoregressive parameters and not on the moving-average ones.
Autoregressive-Moving Average (ARMA) models
Further, we note that if an ARMA(p, q) process is causal, then it is stationary; but stationarity does not imply causality.
Autoregressive-Moving Average (ARMA) models Consider, for example, the following AR(1) process: x t = 3x t −1 + u t
where u_t ∼ WN(0, σ_u²). We have that φ(z) = 1 − 3z ≠ 0 for all |z| = 1, and hence the process is stationary; but it is not causal, since φ(z) = 1 − 3z = 0 for z = 1/3, which lies inside the unit circle.
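Both conditions can be checked numerically by locating the roots of φ(z). A sketch (the helper `ar_roots` is illustrative), applied to the non-causal AR(1) example above:

```python
import numpy as np

def ar_roots(phi):
    """Roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p.
    np.roots expects coefficients from the highest power down."""
    return np.roots([-c for c in reversed(phi)] + [1.0])

# x_t = 3 x_{t-1} + u_t from the slide: phi(z) = 1 - 3z, root z = 1/3.
roots = ar_roots([3.0])
stationary = not np.any(np.isclose(np.abs(roots), 1.0))  # no root on |z| = 1
causal = bool(np.all(np.abs(roots) > 1.0))  # all roots outside |z| <= 1
print(roots, stationary, causal)  # the process is stationary but not causal
```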
Autoregressive-Moving Average (ARMA) models An important result: There is a one-to-one correspondence between the parameters of a causal ARMA( p ,q ) process and the autocovariance function.
Autoregressive-Moving Average (ARMA) models
It is important to underline that if we consider the autocorrelation function instead, there is no one-to-one correspondence between the parameters of a causal ARMA(p, q) process and the autocorrelation function.
Autoregressive-Moving Average (ARMA) models

Consider the following two MA(1) processes:

x_t = u_t + θ u_{t−1}, where u_t ∼ WN(0, σ_u²) and |θ| < 1,

and

y_t = u_t + (1/θ) u_{t−1}, where u_t ∼ WN(0, σ_u²).

Since

θ / (1 + θ²) = (1/θ) / (1 + (1/θ)²),

the two processes share the same autocorrelation function. Thus the autocorrelation function cannot be used to distinguish between the two parametrizations.
Autoregressive-Moving Average (ARMA) models
This example shows that an MA(1) process is not uniquely determined by its autocorrelation function: there is an identification problem with MA(1) models. In general (if all roots of θ(z) = 0 are real), there can be 2^q different MA(q) processes with the same autocorrelation function.
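The identity behind the example is easy to verify numerically: for an MA(1) process, ρ(1) = θ / (1 + θ²), and θ and 1/θ give the same value. A minimal check for the slides' value θ = 0.7:

```python
def ma1_acf1(theta):
    """Lag-1 autocorrelation of the MA(1) process x_t = u_t + theta*u_{t-1}."""
    return theta / (1 + theta**2)

theta = 0.7
print(ma1_acf1(theta), ma1_acf1(1 / theta))  # the two values coincide
```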
Autoregressive-Moving Average (ARMA) models

Definition. An ARMA(p, q) process {x_t; t ∈ Z} is invertible (strictly, an invertible function of {u_t; t ∈ Z}) if there exist constants π_0, π_1, ... such that

Σ_{j=0}^∞ |π_j| < ∞

and

u_t = Σ_{j=0}^∞ π_j x_{t−j}  ∀t.
Autoregressive-Moving Average (ARMA) models
The following theorem provides a necessary and sufficient condition for invertibility.

Theorem. An ARMA(p, q) process {x_t; t ∈ Z} is invertible if and only if θ(z) = 1 + θ_1 z + ... + θ_q z^q ≠ 0 for all |z| ≤ 1.
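The condition can be checked by locating the roots of θ(z), just as causality was checked through φ(z). A sketch (the helper name is illustrative), applied to the two MA(1) parametrizations from the identification example:

```python
import numpy as np

def ma_invertible(theta):
    """Check theta(z) = 1 + theta_1 z + ... + theta_q z^q != 0 for |z| <= 1,
    i.e. all roots of theta(z) lie strictly outside the unit circle."""
    roots = np.roots(list(reversed(theta)) + [1.0])
    return bool(np.all(np.abs(roots) > 1.0))

print(ma_invertible([0.7]))      # True: the root of 1 + 0.7z is -1/0.7
print(ma_invertible([1 / 0.7]))  # False: the mirrored parametrization
```

Of the two MA(1) processes sharing the same autocorrelation function, only the one with |θ| < 1 passes this test.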
Autoregressive-Moving Average (ARMA) models
We note that an AR( p ) process is always invertible, even if it is non-stationary, while an MA( q ) process is always stationary, even if it is non-invertible.
Autoregressive-Moving Average (ARMA) models
Invertibility can be used to ensure the identifiability of MA processes.
Autoregressive-Moving Average (ARMA) models

In general (if all roots of θ(z) = 0 are real), there can be 2^q different MA(q) processes with the same autocorrelation function, but only one of them is invertible.
Conclusion

In the class of the mean-zero, causal and invertible Gaussian ARMA processes, there is a one-to-one correspondence between the family of the finite-dimensional distributions of the process and the finite parametric representation of the process.
Conclusion

In the class of the mean-zero, causal and invertible Gaussian ARMA processes, the probabilistic properties of the process are completely characterized by the finite set of parameters

φ_1, φ_2, ..., φ_p, θ_1, θ_2, ..., θ_q, σ_u².
Now, we have to estimate a finite number ( p + q + 1) of parameters from observed data. This mission is possible.
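As a taste of how such an estimation can work, here is a minimal method-of-moments sketch, not the full maximum-likelihood procedure usually employed for ARMA models: for any causal ARMA(1,1), γ(2) = φ γ(1), so φ can be recovered from two sample autocovariances. The simulation setup and sample size are illustrative:

```python
import numpy as np

def sample_autocov(x, k):
    """Sample autocovariance gamma_hat(k) of a (demeaned) series."""
    x = x - x.mean()
    return np.dot(x[:-k] if k else x, x[k:]) / len(x)

# Simulate an ARMA(1,1) with phi = 0.3, theta = 0.7, sigma_u = 1 ...
rng = np.random.default_rng(1)
n = 100_000
u = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.3 * x[t - 1] + u[t] + 0.7 * u[t - 1]

# ... and recover phi from the moment condition gamma(2) = phi * gamma(1).
phi_hat = sample_autocov(x, 2) / sample_autocov(x, 1)
print(phi_hat)  # close to the true value 0.3
```

With p + q + 1 = 3 parameters and a long sample, the moment condition pins φ down accurately; θ and σ_u² can then be recovered from γ(0) and γ(1).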