Kellogg School of Management

Finance 520-1

GENERAL SEMINAR IN FINANCE

 

Basic Time Series Analysis

 

SPRING 2007

 

The purpose of the course is to introduce, primarily at an intuitive level, the basic concepts of time series analysis, as applied in finance and economics. Topics will include:

 

·      Basic concepts and models: ergodicity, stationarity, ARMA models, Vector Autoregressions, ARCH/GARCH

·      Foundational results for time series models: Wold and Spectral Decomposition Theorems

·      Filtering tools: Kalman Filter and other filters such as the band pass filter and the Hodrick-Prescott filter

·      Statistical Inference:

o    Classical asymptotic theory: laws of large numbers; central limit theorems; sampling results for Generalized Method of Moments and Maximum Likelihood.

o    Bayesian inference: priors and posteriors; the marginal likelihood as a way to assess model fit; Markov Chain Monte Carlo and Gibbs sampling; use of Bayesian methods to achieve parameter parsimony in forecasting models, and for estimating structural models.

·      Nonstationarity in economic time series: unit roots, determination of cointegration rank, deterministic trends.

·      Identification of impulse response functions.

 

 

The textbook is Hamilton, Time Series Analysis. Chapter 11 in Sargent’s Macroeconomic Theory is also very useful.

 

 

Midterm

 

Final

 

Homeworks

 

The following homeworks invite students to explore time series concepts more deeply. In addition, the homeworks provide practical applications to data.

 

Homework #1: exercises designed to clarify the proof of the Wold decomposition theorem
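
Roughly stated, the result the exercises build toward is that any zero-mean covariance-stationary process y_t can be written as

y_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j} + \kappa_t,   with   \psi_0 = 1   and   \sum_{j=0}^{\infty} \psi_j^2 < \infty,

where \varepsilon_t is the (white noise) error from the linear projection of y_t on its own past and \kappa_t is a linearly deterministic component uncorrelated with the \varepsilon's.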

 

Homework #2: questions designed to familiarize students with VARs and the inverse Fourier transform of the spectrum. The homework also asks students to use VARs to study the dynamic properties of aggregate quantities and rates of return, and to carry out a particular test of the expectations hypothesis of the long-term interest rate.
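
As a rough illustration of the inverse Fourier transform of the spectrum (not part of the assignment), the sketch below works with a scalar AR(1) rather than a VAR; the coefficient 0.8, unit innovation variance, and frequency grid are illustrative choices. It recovers autocovariances by numerically integrating the spectrum and checks them against the known AR(1) formula.

import numpy as np

phi, sigma2 = 0.8, 1.0                      # illustrative AR(1) coefficient and innovation variance
omega = np.linspace(-np.pi, np.pi, 20001)   # frequency grid on [-pi, pi]

# Population spectrum of y_t = phi*y_{t-1} + eps_t (1/(2*pi) normalization, as in Hamilton)
spectrum = sigma2 / (2 * np.pi) / (1.0 - 2.0 * phi * np.cos(omega) + phi**2)

for k in range(4):
    # gamma_k = integral over [-pi, pi] of s(omega)*exp(i*omega*k) d(omega); the sine part integrates to zero
    gamma_k = np.trapz(spectrum * np.cos(k * omega), omega)
    analytic = sigma2 * phi**k / (1.0 - phi**2)
    print(k, round(gamma_k, 4), round(analytic, 4))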

 

Homework #3: exercises that give students experience with frequency domain tools. These include the (King-Rebelo) derivation of the Hodrick-Prescott filter in the frequency domain; the derivation of a seasonal adjustment procedure using projection methods; and the derivation of the classic Sims result concerning the plim of a misspecified distributed lag regression. Substantively, we re-examine the expectations hypothesis of the long-term interest rate from the perspective of the frequency domain; derive the classic (initially counterintuitive) result that optimal seasonal adjustment results in dips at the seasonal frequencies; show that the HP filter is a 'high pass' filter; and explore conditions under which the sum of distributed lag coefficients is well estimated.
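
As background for the 'high pass' property mentioned above, the following sketch evaluates the standard King-Rebelo expression for the gain of the HP cyclical component; lambda = 1600 is the conventional quarterly smoothing parameter, and the cycle lengths below are illustrative.

import numpy as np

lam = 1600.0   # conventional quarterly HP smoothing parameter

def hp_cycle_gain(omega):
    # Gain of the HP cyclical (deviation-from-trend) component at frequency omega,
    # from the King-Rebelo frequency-domain representation of the filter
    a = 4.0 * lam * (1.0 - np.cos(omega))**2
    return a / (1.0 + a)

for period in (200, 32, 6):                  # cycle length in quarters
    w = 2 * np.pi / period
    print(f"period {period:3d} quarters: gain {hp_cycle_gain(w):.3f}")

# Long-period (low-frequency) movements are removed, while short-period movements
# pass through almost one-for-one -- the 'high pass' property.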

 

Homework #4: this homework takes students through Hamilton's motivation of the spectral decomposition theorem (an alternative to the band-pass filter approach taken in class) and gives practical experience in applying the Kalman filter. Substantively, we study seasonality in the industrial production data and consider two ways to estimate the spectrum – the periodogram and the parametric approach based on time series models – and see constructively why the periodogram is a noisy estimate of the spectrum. We also use the 'two-sided' Kalman filter (the Kalman smoother) to recover the real interest rate from data on the nominal interest rate and realized inflation.
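
The following is a minimal sketch, on simulated AR(1) data rather than the industrial production series, of the two spectrum estimators the homework compares; the sample size, AR coefficient, and seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, phi = 512, 0.7                            # illustrative sample size and AR(1) coefficient
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]
y = y - y.mean()

# Periodogram at the Fourier frequencies (1/(2*pi*T) normalization)
omega_j = 2 * np.pi * np.arange(1, T // 2) / T
periodogram = np.abs(np.fft.fft(y)[1:T // 2])**2 / (2 * np.pi * T)

# Parametric estimate: fit an AR(1) by OLS and plug into the AR(1) spectrum formula
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1]**2)
sigma2_hat = np.mean((y[1:] - phi_hat * y[:-1])**2)
parametric = sigma2_hat / (2 * np.pi) / (1 - 2 * phi_hat * np.cos(omega_j) + phi_hat**2)

# The periodogram ordinates fluctuate wildly around the smooth parametric estimate,
# which is the sense in which the periodogram is a noisy estimate of the spectrum
print(np.column_stack([omega_j, periodogram, parametric])[:5])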

 

Homework #5 will expose students to ideas in:

 

Boivin and Giannoni, DSGE Models in a Data-Rich Environment

Bernanke, Boivin and Eliasz, Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach, Quarterly Journal of Economics, 2005.

Bernanke and Boivin, Monetary Policy in a Data-Rich Environment, Journal of Monetary Economics, 2002.

Bekaert, Cho, and Moreno, New-Keynesian Macroeconomics and the Term Structure, 2006.

 

Data for homework #5

 

Homework #5: The first two questions in this three-part homework let students see once again the power of the Kalman filter by exposing them to some promising recent research that uses it. The homework summarizes recent work by Bernanke, Boivin, Eliasz and Giannoni, which exploits the fact that in practice there are many different variables measuring the same basic economic concept (e.g., there are several measures of hours worked, of inflation, of output, etc.). The homework also shows how to evaluate the term structure implications of fully specified, dynamic equilibrium models, along the lines of recent work by Bekaert, Cho and Moreno and others.
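
Schematically, the data-rich setup treats the many observed indicators X_t as noisy measurements of a small number of unobserved factors F_t,

X_t = \Lambda F_t + e_t,        F_t = \Phi F_{t-1} + u_t,

a state-space system to which the Kalman filter applies directly. (The notation here is generic rather than that of any one of the papers listed above.)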

 

Homework #6: the first part of this homework shows students another example of how useful the tools of GMM can be for deriving the asymptotic sampling distribution of an estimator. The example used is the skewness statistic when the underlying data are iid. This statistic is of substantive interest, and the exercise also allows the student to discover a surprising result: the sampling distribution of the skewness statistic is very sensitive to whether the mean of the underlying process is estimated or known. The next step is to apply GMM to testing for skewness in the data used in homework #2. Recognizing that asymptotic sampling theory may be a bad approximation in finite samples, students are asked to execute a suitably constructed Monte Carlo study to assess the accuracy of asymptotic sampling theory in small samples. The last part of the homework asks students to test for conditional heteroscedasticity in the disturbances of the VAR studied in homework #2. We apply Engle's test, as well as a suitably constructed bootstrap experiment, to verify that the outcome of the Engle test is reliable in a sample of the size at hand.
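
A minimal Monte Carlo sketch of the known-mean versus estimated-mean comparison appears below; the sample size, number of replications, and use of iid standard normal data (rather than the homework #2 data) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
T, reps = 200, 5000                          # illustrative sample size and number of replications
skew_known = np.empty(reps)
skew_est = np.empty(reps)

for r in range(reps):
    y = rng.standard_normal(T)               # iid data with true mean 0 and variance 1
    # Skewness statistic with the mean treated as known (= 0)
    skew_known[r] = np.mean(y**3) / np.mean(y**2)**1.5
    # Skewness statistic with the mean estimated by the sample mean
    d = y - y.mean()
    skew_est[r] = np.mean(d**3) / np.mean(d**2)**1.5

# The variances of sqrt(T) times the statistic differ markedly across the two treatments
print("mean known    :", np.var(np.sqrt(T) * skew_known))
print("mean estimated:", np.var(np.sqrt(T) * skew_est))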

 

Homework #7: This homework analyzes historical time series on Indian per capita GDP going back to 1884, to evaluate the Niall Ferguson hypothesis that India benefited under the British colonization that ended in 1947. This is an application of GMM. Next, the homework turns to Bayesian methods, in particular the MCMC algorithm. This algorithm is designed to approximate the posterior distribution of individual parameters in the standard case where this distribution cannot be expressed analytically. The homework assesses the accuracy of the MCMC algorithm. Data are sampled from a mixture of normals distribution, and the MCMC algorithm is used to approximate that distribution. Since the mixture of normals can be expressed analytically, the student can graph the histogram of the distribution produced by the MCMC algorithm against the actual distribution to assess accuracy. The mixture of normals is useful for this purpose because, depending on how the mixture parameter is selected, things are made difficult or easy for the MCMC algorithm. The final question in this homework explores the nature of the ARCH present in the US long-term interest rate.
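
A minimal sketch of one MCMC variant, a random-walk Metropolis sampler, targeting an assumed two-component mixture of normals and checked against the analytic distribution, appears below; the mixture weight, component means, step size, and number of draws are illustrative choices, and the homework's own algorithm and parameter settings may differ.

import numpy as np

rng = np.random.default_rng(2)
p, mu1, mu2, sd = 0.3, -2.0, 2.0, 1.0        # illustrative mixture parameters

def density(x):
    # Analytic mixture-of-normals density, available here for checking the MCMC output
    phi1 = np.exp(-0.5 * ((x - mu1) / sd)**2) / (sd * np.sqrt(2 * np.pi))
    phi2 = np.exp(-0.5 * ((x - mu2) / sd)**2) / (sd * np.sqrt(2 * np.pi))
    return p * phi1 + (1 - p) * phi2

n_draws, step = 50000, 1.5
draws = np.empty(n_draws)
x = 0.0
for i in range(n_draws):
    proposal = x + step * rng.standard_normal()
    if rng.uniform() < density(proposal) / density(x):   # Metropolis accept/reject
        x = proposal
    draws[i] = x

# Compare simulated and exact probabilities of the left component's region
# (0.9772 and 0.0228 are the standard normal cdf at +2 and -2)
print("MCMC  P(x < 0):", np.mean(draws < 0))
print("exact P(x < 0):", round(p * 0.9772 + (1 - p) * 0.0228, 4))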

 

 

Course Outline

 

The first half of the course studies time series models and their properties. The second half discusses statistical inference about those models.

 

1.    Stochastic Processes (chapter 3)

·      Stationarity, ergodicity

·      White noise, AR, MA, ARMA models

·      Yule-Walker equations

2.    Linear Projections (sections 4.1, 4.2)

·      Necessity and sufficiency of orthogonality conditions

·      Recursive property of projections

·      Law of Iterated Projections

·      Projections versus regressions

3.    Wold decomposition theorem (page 109; Sargent)

4.    Vector autoregressions (sections 10.1-10.3)

5.    Spectral analysis (Chapter 6 and Sargent)

·      Lag operator notation

·      Autocovariance generating function (section 3.6)

·      Complex numbers, key integral property of complex numbers, inverse Fourier transform

·      Filters and band pass filter

·      Spectral density

·      Spectral decomposition theorem motivated according to band-pass filter approach in Sargent (Hamilton’s alternative approach is pursued in homework)

6.    Kalman filter (sections 13.1, 13.2, 13.6)

·      State Space, Observer System

·      Derivation of the Kalman filter: ‘forward pass’, ‘backward pass’

7.    Maximum likelihood estimation (chapter 5)

·      Gaussian maximum likelihood

·      Conditional versus unconditional maximum likelihood

·      Identification of moving averages

·      Using the Kalman filter to build the Gaussian likelihood (section 13.4)

8.    Introduction to generalized method of moments (section 14.1)

·      Basic ideas from asymptotic sampling theory: convergence in probability, in mean square, and in distribution; law of large numbers; mean value theorem (chapter 7)

·      Basic setup of GMM: the ‘GMM orthogonality conditions’

·      Asymptotic sampling distribution of a sample mean; the spectral density at frequency zero of the GMM orthogonality conditions as its asymptotic variance

·      Optimal weighting matrix

·      Sampling properties of GMM estimator, hypothesis testing

9.    GMM and Maximum Likelihood (chapter 14)

·      Preliminary: martingale difference sequence

o    definition

o    ARCH processes (chapter 21)

o    Some asymptotic results for m.d.s.

·      Demonstration that ML is a special case of GMM

o    Using GMM and structure of Gaussian density to derive asymptotic sampling distribution of Gaussian ML

o    Using GMM without applying structure of the likelihood function to derive asymptotic sampling distribution of quasi-ML

·      Asymptotic efficiency of Gaussian ML (Cramer-Rao lower bound) (this discussion is not in the book, but good background can be found in an econometrics text such as Theil)

·      Robustness of GMM versus efficiency of ML: example of estimating ARCH processes (chapter 21)

10.  Bayesian inference (chapter 12)

·      Bayes rule, priors, posteriors

·      Estimating a mean using iid observations with known variance, priors and ‘padding data’

·      Markov Chain Monte Carlo methods

o    Approximating the posterior distribution

o    Approximating the marginal likelihood and using it to evaluate model fit

11.  Nonstationarity (chapters 18, 19)

·      Trend stationarity

o    Experiment: impact on long-term forecast of a shock

o    Breakdown of standard covariance stationary sampling theory

1.  some preliminary results concerning sums

2.  superconsistency of OLS slope estimator

·      Unit roots

o    Experiment: impact on long-term forecast of a shock

·      Distinction between unit roots and trend stationarity based on spectral density at frequency zero

·      Cointegration – we skipped this!

o    Defined

o    The proper handling of cointegration in VARs

12.  Very brief review of the chapter 22 discussion of time series analysis with changes in regime

·      One approach to distinguishing whether the recent change in the volatility of macroeconomic and financial data reflects a change in the transmission mechanism or heteroscedasticity in the fundamental disturbances

·      Much data (especially for emerging market economies) ‘looks’ like it reflects periodic, large regime changes, suggesting the importance of chapter 22 methods