Pierre Perron

WORKING PAPERS (all files in PDF format) (some of this work was supported by the National Science Foundation under Grants No. 0649350 and 0078492)

Inference Related to Common Breaks in a Multivariate System with Joined Segmented Trends with Applications to Global and Hemispheric Temperatures (with Dukpa Kim, Tatsushi Oka and Francisco Estrada), January 2017; Revised November 2017.

         What transpires from recent research is that temperatures and forcings seem to be characterized by a linear trend with two changes in the rate of growth. The first occurs in the early 1960s and indicates a very large increase in the rate of growth of both temperatures and radiative forcings. This was termed the "onset of sustained global warming". The second is related to the more recent so-called hiatus period, which suggests that temperatures and total radiative forcings have increased less rapidly since the mid-1990s compared to the larger rate of increase from 1960 to 1990. There are two issues that remain unresolved. The first is whether the breaks in the slope of the trend functions of temperatures and radiative forcings are common. This is important because common breaks, coupled with the basic science of climate change, would strongly suggest a causal effect from anthropogenic factors to temperatures. The second issue relates to establishing formally, via a proper testing procedure that takes into account the noise in the series, whether there was indeed a "hiatus period" for temperatures since the mid-1990s. This is important because such a test would counter the widely held view that the hiatus is the product of natural internal variability. Our paper provides tests related to both issues. The results show that the breaks in temperatures and forcings are common and that the hiatus is characterized by a significant decrease in the rate of growth of temperatures and forcings. The statistical results are of independent interest and applicable more generally.
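
As a concrete illustration of the kind of joined segmented trend involved, the following minimal Python sketch fits a continuous piecewise linear trend with two slope changes by least squares over a grid of candidate break dates. It is not the paper's inference procedure; the function name, tuning constants and simulated data are assumptions for illustration only.

```python
# Minimal sketch: fit y_t = b0 + b1*t + b2*(t - T1)_+ + b3*(t - T2)_+ by
# least squares over all admissible break-date pairs (T1, T2).
# Illustrative only; not the paper's estimation or testing procedure.
import numpy as np

def fit_joined_segmented(y, min_seg=20):
    T = len(y)
    t = np.arange(T, dtype=float)
    best_ssr, best_breaks, best_beta = np.inf, None, None
    for T1 in range(min_seg, T - 2 * min_seg):
        for T2 in range(T1 + min_seg, T - min_seg):
            X = np.column_stack([np.ones(T), t,
                                 np.maximum(t - T1, 0.0),   # slope change at T1
                                 np.maximum(t - T2, 0.0)])  # slope change at T2
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            ssr = np.sum((y - X @ beta) ** 2)
            if ssr < best_ssr:
                best_ssr, best_breaks, best_beta = ssr, (T1, T2), beta
    return best_breaks, best_beta

# Simulated example: the slope increases at t = 60, then flattens at t = 135.
rng = np.random.default_rng(0)
t = np.arange(200.0)
trend = 0.01 * t + 0.04 * np.maximum(t - 60, 0) - 0.03 * np.maximum(t - 135, 0)
print(fit_joined_segmented(trend + rng.normal(0, 0.3, 200)))
```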

Testing for Common Breaks in a Multiple Equations System (with Tatsushi Oka), July 2011; Revised May 2017 (supplementary material).

         The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness.
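
To fix ideas, here is a hedged Python sketch of the quasi-likelihood ratio logic in the simplest possible setting: two equations, one mean shift each, and a diagonal error covariance. The paper's framework (multiple breaks, trends, integrated regressors, general covariances, and the limit theory) goes far beyond this illustration.

```python
# Illustrative quasi-LR-type statistic for a common break: compare the
# Gaussian quasi-likelihood maximized with equation-specific break dates
# against the fit imposing a single common date. Assumes a diagonal error
# covariance and one mean shift per equation; not the paper's statistic.
import numpy as np

def ssr_mean_shift(y, k):
    """SSR of a one-time mean shift occurring after observation k."""
    return np.sum((y[:k] - y[:k].mean())**2) + np.sum((y[k:] - y[k:].mean())**2)

def qlr_common_break(y1, y2, trim=10):
    T = len(y1)
    grid = range(trim, T - trim)
    # Unrestricted: each equation picks its own break date.
    log_u = min(np.log(ssr_mean_shift(y1, k)) for k in grid) + \
            min(np.log(ssr_mean_shift(y2, k)) for k in grid)
    # Restricted: one common break date shared by both equations.
    log_r = min(np.log(ssr_mean_shift(y1, k)) + np.log(ssr_mean_shift(y2, k))
                for k in grid)
    return T * (log_r - log_u)   # large values argue against a common date

rng = np.random.default_rng(1)
y1 = np.r_[rng.normal(0, 1, 100), rng.normal(1, 1, 100)]
y2 = np.r_[rng.normal(0, 1, 100), rng.normal(1, 1, 100)]  # common break at 100
print(qlr_common_break(y1, y2))
```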

A Comparison of Alternative Methods to Construct Confidence Intervals for the Estimate of a Break Date in Linear Regression Models (with Seong Yeon Chang), Revised October 2015. Forthcoming in Econometric Reviews.

         This paper considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedures of Bai (1997) based on the asymptotic distribution under a shrinking shift framework, Elliott and Müller (2007) based on inverting a test locally invariant to the magnitude of the break, Eo and Morley (2014) based on inverting a likelihood ratio test, and various bootstrap procedures. On the basis of achieving an exact coverage rate that is closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes with a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and one deals with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
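
For concreteness, the following Python sketch implements one of the simplest procedures in the class compared here: a residual-bootstrap confidence interval for the date of a mean shift. It is illustrative only and abstracts from the serial correlation and regressor configurations studied in the paper.

```python
# Residual-bootstrap confidence interval for a break date in a mean-shift
# model. Purely illustrative; the paper's procedures differ in detail.
import numpy as np

def estimate_break(y, trim=5):
    T = len(y)
    ssr = [np.sum((y[:k] - y[:k].mean())**2) + np.sum((y[k:] - y[k:].mean())**2)
           for k in range(trim, T - trim)]
    return trim + int(np.argmin(ssr))   # least-squares break date estimate

def bootstrap_ci(y, B=499, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    T, k = len(y), estimate_break(y)
    mu1, mu2 = y[:k].mean(), y[k:].mean()
    resid = np.r_[y[:k] - mu1, y[k:] - mu2]
    fitted = np.r_[np.full(k, mu1), np.full(T - k, mu2)]
    # Re-estimate the break date on each resampled series.
    draws = [estimate_break(fitted + rng.choice(resid, T, replace=True))
             for _ in range(B)]
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2])
    return k, int(lo), int(hi)

rng = np.random.default_rng(2)
y = np.r_[rng.normal(0, 1, 80), rng.normal(0.8, 1, 120)]   # break at t = 80
print(bootstrap_ci(y))   # point estimate and 95% interval for the break date
```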

Combining Long Memory and Level Shifts in Modeling and Forecasting the Volatility of Asset Returns (with Rasmus T. Varneskov), Revised April 2017 (supplementary material). Forthcoming in Quantitative Finance.

         We provide a framework for modeling and forecasting the volatility of asset returns. In particular, we propose a parametric state space model with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the likelihood function after augmenting the probability of states by a mixture of normally distributed processes. A new forecasting framework for random level shift models is proposed, which utilizes the information in the Kalman recursions to generate mean- and path-corrected forecasts. We apply our model to eight daily volatility series constructed from: (1) Tick-by-tick trades on the BAC, MRK and SPY stocks, (2) one-minute returns on S&P 500 and 10-year Treasury Bond futures contracts, and (3) daily returns on the USD-AUD, USD-CHF and USD-YEN exchange rates. The full sample parameter estimates reveal that random level shifts are present in all series. A genuine long memory component is present in the measures of volatility constructed using high-frequency data. On the other hand, the residual dynamics of the volatility series proxied by log-daily absolute returns may be characterized as a combination of short memory dynamics and measurement errors. We conduct extensive out-of-sample forecast evaluations and compare the results with six popular models in the literature. Interestingly, our ARFIMA model with random level shifts is the only model that consistently belongs to the 10% Model Confidence Set of Hansen et al. (2011) across a variety of forecast periods, forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
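
The mixture-of-normals filtering device can be illustrated in a stripped-down setting. The Python sketch below filters a pure random-level-shift model (no ARFIMA dynamics, no measurement error) by propagating a "shift" and a "no-shift" branch each period and collapsing them back to a single Gaussian; all parameter values are assumptions for illustration.

```python
# Hedged sketch: mixture Kalman filtering of y_t = mu_t + e_t with
# mu_t = mu_{t-1} + s_t*eta_t, s_t ~ Bernoulli(p). The paper's state space
# is considerably richer; this keeps only the level-shift component.
import numpy as np

def rls_filter(y, p, sig_e, sig_eta, mu0=0.0, P0=10.0):
    mu, P = mu0, P0
    loglik, path = 0.0, []
    for yt in y:
        w = [1.0 - p, p]                        # prior branch probabilities
        lik, post_mu, post_P = [], [], []
        for j, Pj in enumerate([P, P + sig_eta**2]):   # no shift / shift
            S = Pj + sig_e**2                   # innovation variance
            K = Pj / S                          # Kalman gain
            v = yt - mu                         # prediction error
            lik.append(w[j] * np.exp(-0.5 * v * v / S) / np.sqrt(2*np.pi*S))
            post_mu.append(mu + K * v)
            post_P.append((1.0 - K) * Pj)
        tot = lik[0] + lik[1]
        loglik += np.log(tot)
        q = [l / tot for l in lik]              # posterior branch weights
        mu = q[0]*post_mu[0] + q[1]*post_mu[1]  # collapse: match the mean...
        P = sum(qj * (Pj + (mj - mu)**2)
                for qj, mj, Pj in zip(q, post_mu, post_P))  # ...and variance
        path.append(mu)
    return np.array(path), loglik

rng = np.random.default_rng(3)
level = np.cumsum((rng.random(500) < 0.02) * rng.normal(0, 2, 500))
filtered, ll = rls_filter(level + rng.normal(0, 1, 500), 0.02, 1.0, 2.0)
print(round(ll, 1))
```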

Forecasting in the presence of in- and out-of-sample breaks (with Jiawen Xu), Revised January 30, 2017.

         We present a frequentist-based approach to forecast time series in the presence of in-sample and out-of-sample breaks in the parameters of the forecasting model. We first model the parameters as following a random level shift process, with the occurrence of a shift governed by a Bernoulli process. In order to have a structure in which changes in the parameters are forecastable, we introduce two modifications. The first models the probability of shifts according to some covariates that can be forecasted. The second incorporates a built-in mean reversion mechanism into the time path of the parameters. Similar modifications can also be made to model changes in the variance of the error process. Our full model can be cast into a non-linear non-Gaussian state space framework. To estimate it, we use particle filtering and a Monte Carlo expectation maximization algorithm. Simulation results show that the algorithm delivers accurate in-sample estimates; in particular, the filtered estimates of the time path of the parameters closely follow their true variations. We provide a number of empirical applications and compare the forecasting performance of our approach with a variety of alternative methods. These show that substantial gains in forecasting accuracy are obtained.
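
The two modifications can be mimicked in a small simulation. The Python sketch below assumes particular functional forms that are not taken from the paper: a logistic shift probability driven by a forecastable covariate x_t, and mean reversion of the parameter toward a long-run level b_bar after each shift; all names and values are illustrative.

```python
# Simulation sketch of forecastable parameter shifts, under assumed forms:
# logistic shift probability in a covariate x_t, mean reversion toward b_bar.
import numpy as np

rng = np.random.default_rng(4)
T, b_bar, rho = 300, 1.0, 0.7
x = np.sin(np.arange(T) / 25.0)                 # a forecastable covariate
p = 1.0 / (1.0 + np.exp(-(-3.0 + 2.0 * x)))     # time-varying shift probability
beta = np.empty(T)
beta[0] = b_bar
for t in range(1, T):
    if rng.random() < p[t]:                     # a shift occurs at date t
        # built-in mean reversion: the new level is pulled toward b_bar
        beta[t] = b_bar + rho * (beta[t-1] - b_bar) + rng.normal(0, 0.5)
    else:
        beta[t] = beta[t-1]                     # parameter unchanged
y = beta + rng.normal(0, 0.2, T)                # observed series
print(int((np.diff(beta) != 0).sum()), "shifts occurred")
```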

Temporal Aggregation, Bandwidth Selection and Long Memory for Volatility Models (with Wendong Shi), June 2014.

         The effects of temporal aggregation and choice of sampling frequency are of great interest in modeling the dynamics of asset price volatility. We show how the squared low-frequency returns can be expressed in terms of the temporal aggregation of a high-frequency series. Based on the theory of temporal aggregation, we provide the link between the spectral density function of the squared low-frequency returns and that of the squared high-frequency returns. Furthermore, we analyze the properties of the spectral density function of realized volatility series, constructed from squared returns with different frequencies under temporal aggregation. Our theoretical results allow us to explain some findings reported recently and uncover new features of volatility in financial market indices. The theoretical findings are illustrated via the analysis of both low-frequency daily S&P 500 returns from 1928 to 2011 and high-frequency 1-minute S&P 500 returns from 1986 to 2007.
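
The aggregation identity underlying the analysis is easy to verify numerically: an m-period return is the sum of m high-frequency returns, so its square equals the realized volatility (the aggregated squares) plus the cross-products. The Python check below uses simulated returns and is purely illustrative.

```python
# Numerical check of the aggregation identity: (sum r_i)^2 = sum r_i^2 + cross.
import numpy as np

rng = np.random.default_rng(5)
n, m = 250, 390                         # 250 "days" of 390 one-minute returns
r_hf = rng.normal(0, 0.001, (n, m))     # simulated high-frequency returns
r_lf = r_hf.sum(axis=1)                 # low-frequency (daily) returns
rv = (r_hf**2).sum(axis=1)              # realized volatility from squares
# Cross-products computed explicitly, row by row.
cross = np.array([np.sum(np.outer(row, row)) - np.sum(row**2) for row in r_hf])
print(np.allclose(r_lf**2, rv + cross))     # the identity holds exactly
print(np.var(r_lf**2), np.var(rv))          # squared daily return is noisier
```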

Robust testing of time trend and mean with unknown integration order errors (with Jiawen Xu), March 2013.

         We provide tests to perform inference on the coefficients of a linear trend, assuming the noise to be a fractionally integrated process with memory parameter d ∈ (-0.5, 1.5), by applying a quasi-GLS procedure using d-differences of the data. After differencing, the error term is short memory, so the asymptotic distributions of the OLS estimators applied to the quasi-differenced data and of their t-statistics are unaffected by the value of d, and standard procedures have a limit normal distribution. No truncation or pre-test is needed given the continuity with respect to d. To have feasible tests, we use the Exact Local Whittle estimator of Shimotsu (2010), valid for processes with a linear trend. The finite sample size and power of the tests are investigated via simulations. We also provide a comparison with the tests of Perron and Yabu (2009), valid for a noise component that is I(0) or I(1). The results are encouraging in that our test is valid under more general conditions, yet has power similar to tests that apply to the dichotomous cases with d either 0 or 1. We apply our tests to construct confidence intervals for the growth rate of temperature series pre- and post-1960, which show that the slope is significantly higher in the post-1960 period, consistent with global warming.
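
The quasi-GLS idea can be sketched in a few lines of Python: apply the (1-L)^d filter to the data and to the trend regressors, then run OLS and a standard t-test on the filtered series. The sketch below takes d as known; a feasible version would replace it with the Exact Local Whittle estimate of Shimotsu (2010). All simulated values are illustrative.

```python
# Hedged sketch of quasi-GLS inference on a linear trend with I(d) noise.
import numpy as np

def frac_diff(x, d):
    """Truncated (1-L)^d filter; pi_0 = 1, pi_k = pi_{k-1}*(k-1-d)/k."""
    T = len(x)
    pi = np.empty(T)
    pi[0] = 1.0
    for k in range(1, T):
        pi[k] = pi[k-1] * (k - 1 - d) / k
    return np.array([pi[:s+1][::-1] @ x[:s+1] for s in range(T)])

rng = np.random.default_rng(6)
T, d = 300, 0.3
e = frac_diff(rng.normal(size=T), -d)           # I(d) noise via (1-L)^{-d}
y = 0.5 + 0.02 * np.arange(T) + e               # linear trend plus I(d) noise
X = np.column_stack([np.ones(T), np.arange(T, dtype=float)])
yd = frac_diff(y, d)                            # d-differenced data
Xd = np.column_stack([frac_diff(X[:, 0], d), frac_diff(X[:, 1], d)])
beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)  # OLS on filtered data
resid = yd - Xd @ beta
s2 = resid @ resid / (T - Xd.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
print(beta[1], beta[1] / se[1])                 # slope estimate, t-statistic
```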

Breaks, trends and the attribution of climate change: a time-series analysis (with Francisco Estrada), March 2012.

         Climate change detection and attribution have been the subject of intense research and debate over at least four decades. However, direct attribution of climate change to anthropogenic activities using observed climate and forcing variables through statistical methods has remained elusive, partly because of the difficulty of correctly identifying the time-series properties of these variables and the limited availability of methods for relating nonstationary variables. This paper provides strong evidence concerning the direct attribution of observed climate change to anthropogenic greenhouse gas emissions, first by investigating the univariate time-series properties of observed global and hemispheric temperatures and forcing variables, and then by proposing statistically adequate multivariate models. The results show that there is a clear anthropogenic fingerprint on both global and hemispheric temperatures. The signal of the well-mixed GHG forcing in all temperature series is very clear and accounts for most of their secular movement since the beginning of observations. Both temperature and forcing variables are characterized by piecewise linear trends with abrupt changes in their slopes estimated to occur at different dates. Nevertheless, their long-term movements are so closely related that the observed temperature and forcing trends cancel out. The warming experienced during the last century was mainly due to the increase in GHG forcing, which was partially offset by the effect of tropospheric aerosols. Other forcing sources, such as solar, are shown to contribute only to (shorter-term) variations around the GHG forcing trend.

Testing Jointly for Structural Changes in the Error Variance and Coefficients of a Linear Regression Model (with Jing Zhou), July 2008.

         We provide a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression involving stationary regressors. Our framework is quite general in that we allow for general mixing-type regressors, and the assumptions imposed on the errors are quite mild. The errors' distribution can be non-normal and conditional heteroskedasticity is permissible. Extensions to the case with serially correlated errors are also treated. We provide the required tools for addressing the following testing problems, among others: a) testing for given numbers of changes in the regression coefficients and the variance of the errors; b) testing for some unknown number of changes less than some pre-specified maximum; c) testing for changes in the variance (regression coefficients) allowing for a given number of changes in the regression coefficients (variance); and d) estimating the number of changes present. These testing problems are important for practical applications, as witnessed by recent interest in macroeconomics and finance in documenting structural change in the variability of shocks to simple autoregressions or vector autoregressive models.
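
As a minimal illustration of testing jointly for a break in coefficients and variance, the Python sketch below computes a sup-LR-type statistic for a single joint break in the mean and the error variance at an unknown date under i.i.d. normal errors. It omits the general regressors, multiple breaks, and robustness corrections that the paper's tests provide.

```python
# Sup-LR-type statistic for one joint break in mean and variance, assuming
# i.i.d. normal errors. Illustrative only; not the paper's test.
import numpy as np

def sup_lr_joint(y, trim=0.15):
    T = len(y)
    ll0 = -0.5 * T * np.log(np.var(y))          # no-break log-lik (up to constants)
    stat = -np.inf
    for k in range(int(trim * T), int((1 - trim) * T)):
        v1, v2 = np.var(y[:k]), np.var(y[k:])   # segment MLEs of mean, variance
        ll1 = -0.5 * (k * np.log(v1) + (T - k) * np.log(v2))
        stat = max(stat, 2.0 * (ll1 - ll0))     # LR at candidate break date k
    return stat

rng = np.random.default_rng(7)
y = np.r_[rng.normal(0, 1, 120), rng.normal(0.7, 2, 120)]  # break in both
print(sup_lr_joint(y))
```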

Testing for Breaks in Coefficients and Error Variance: Simulations and Applications (with Jing Zhou), July 2008.

         In a companion paper, Perron and Zhou (2008) provided a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression model involving stationary regressors, allowing the break dates for the two components to be different or overlap. The aim of this paper is twofold. First, we present detailed simulation analyses to document various issues related to their procedures: a) the inadequacy of the two-step procedures that are commonly applied; b) which particular version of the necessary correction factor exhibits better finite sample properties; c) whether applying a correction that is valid under more general conditions than necessary is detrimental to the size and power of the tests; d) the finite sample size and power of the various tests proposed; and e) the performance of the sequential method in determining the number and types of breaks present. Second, we apply their testing procedures to various macroeconomic time series studied by Stock and Watson (2002). Our results reinforce the prevalence of changes in the mean, persistence and variance of the shocks to these series, and the fact that for most of them an important reduction in variance occurred during the 1980s. In many cases, however, the so-called "great moderation" should instead be viewed as a "great reversion".

An Analytical Evaluation of the Log-periodogram Estimate in the Presence of Level Shifts (with Zhongjun Qu), November 2007.

         Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have shown that when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. We analyze the properties of the log-periodogram estimate of the memory parameter when the jump component is specified by a simple mixture model. Our theoretical results explain many previously reported findings and uncover new features. Simulations are presented to highlight the properties of the distributions and to assess the adequacy of our approximations. We also show the usefulness of our results to distinguish between long memory and level shifts via an application to the volatility of daily returns for wheat commodity futures.
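
The mechanism is easy to reproduce. The Python sketch below implements the standard log-periodogram (GPH) regression and applies it to a short memory series before and after contaminating it with rare level shifts; the estimate for the contaminated series is typically biased away from zero, as described above. The bandwidth and shift parameters are illustrative assumptions.

```python
# Log-periodogram (GPH) regression and a level-shift contamination demo.
import numpy as np

def gph(x, m=None):
    """Estimate d by regressing log I(lam_j) on -2*log(2*sin(lam_j/2))."""
    T = len(x)
    m = m or int(T ** 0.5)                       # common bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / T  # first m Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m+1])**2 / (2.0 * np.pi * T)
    X = np.column_stack([np.ones(m), -2.0 * np.log(2.0 * np.sin(lam / 2.0))])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]                               # slope = estimate of d

rng = np.random.default_rng(8)
T = 2048
short = rng.normal(size=T)                                  # true d = 0
shifts = np.cumsum((rng.random(T) < 0.005) * rng.normal(0, 1, T))
print(gph(short), gph(short + shifts))   # second estimate typically much larger
```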

          Note: This is a revised version of parts of a working paper entitled "An Analytical Evaluation of the Log-periodogram Estimate in the Presence of Level Shifts and its Implications for Stock Returns Volatility".