Pierre Perron

WORKING PAPERS 

 

Testing for Changes in Forecasting Performance (with Yohei Yamamoto), May 2018, submitted.

         We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests be carried out with a fixed scheme to have the best power. This ensures a maximum difference between the fitted in-sample and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi's (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in-sample and out-of-sample periods, and it accordingly has little power when the change occurs at some other date. To alleviate this problem, we consider a variety of tests: maximizing the GR test over all possible values of m within a pre-specified range; a Double sup-Wald (DSW) test which, for each m, performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald test (TLSW) and the Total Loss UDmax test (TLUD). Using extensive simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data generating processes are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and two empirical applications illustrate the relevance of our findings in practice.
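A minimal Python sketch of the DSW construction may help fix ideas. It is illustrative only: it uses a textbook homoskedastic Wald statistic and an arbitrary trimming, whereas the paper's implementation and critical values differ. For each candidate split m, a sup-Wald test for a mean change in the out-of-sample losses is computed, and the maximum over m is taken.

```python
import numpy as np

def sup_wald_mean_shift(x, trim=0.15):
    """Sup-Wald statistic for a one-time change in the mean of x.
    Textbook homoskedastic version; HAC-robust variances would be
    needed for serially correlated losses."""
    n = len(x)
    lo, hi = int(trim * n), int((1 - trim) * n)
    best = 0.0
    for k in range(max(lo, 2), min(hi, n - 2)):
        m1, m2 = x[:k].mean(), x[k:].mean()
        s2 = (np.sum((x[:k] - m1) ** 2) + np.sum((x[k:] - m2) ** 2)) / (n - 2)
        w = (m1 - m2) ** 2 / (s2 * (1.0 / k + 1.0 / (n - k)))
        best = max(best, w)
    return best

def double_sup_wald(losses, m_grid):
    """DSW: for each candidate in/out-of-sample split m, run a sup-Wald
    test on the out-of-sample losses and take the maximum over m."""
    return max(sup_wald_mean_shift(losses[m:]) for m in m_grid)

# Hypothetical example: squared forecast losses with a late breakdown
rng = np.random.default_rng(0)
losses = rng.standard_normal(200) ** 2
losses[150:] += 1.0
print(double_sup_wald(losses, m_grid=range(80, 121, 10)))
```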

 

Continuous Record Asymptotics for Structural Change Models (with Alessandro Casini), November 2017, submitted.

         For a partial structural change in a linear regression model with a single break, we develop a continuous record asymptotic framework to build inference methods for the break date. We have T observations with a sampling frequency h over a fixed time horizon [0, N], and let T → ∞ with h → 0 while keeping the time span N fixed. We impose very mild regularity conditions on an underlying continuous-time model assumed to generate the data. We consider the least-squares estimate of the break date and establish its consistency and rate of convergence. We provide a limit theory for shrinking magnitudes of shifts and locally increasing variances. The asymptotic distribution corresponds to the location of the extremum of a function of the quadratic variation of the regressors and of a centered Gaussian martingale process over a certain time interval. We can account for the asymmetric informational content provided by the pre- and post-break regimes and show how the location of the break and the magnitude of the shift are key ingredients in shaping the distribution. We consider a feasible version based on plug-in estimates, which provides a very good approximation to the finite sample distribution. We use the concept of Highest Density Region to construct confidence sets. Overall, our method is reliable and delivers accurate coverage probabilities and relatively short average lengths of the confidence sets. Importantly, it does so irrespective of the size of the break.
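In schematic notation (reconstructed here, not copied from the paper), the sampling scheme and estimator can be summarized as

\[
t_k = kh, \qquad k = 0, 1, \dots, T, \qquad h = N/T,
\]

with the continuous record limit taking $T \to \infty$ and $h \to 0$ jointly while the span $N = Th$ stays fixed. The break date is then estimated by least squares,

\[
\hat{T}_b = \arg\min_{T_b} \, \mathrm{SSR}_T(T_b),
\]

where $\mathrm{SSR}_T(T_b)$ is the sum of squared residuals from the partial structural change regression with a break at $T_b$.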

 

Continuous Record Laplace-based Inference about the Break Date in Structural Change Models (with Alessandro Casini), December 2017, submitted.

         Building upon the continuous record asymptotic framework recently introduced by Casini and Perron (2017a) for inference in structural change models, we propose a Laplace-based (Quasi-Bayes) procedure for the construction of the estimate and confidence set for the date of a structural change. The procedure relies on a Laplace-type estimator defined by an integration-based rather than an optimization-based method. A transformation of the least-squares criterion function is evaluated in order to derive a proper distribution, referred to as the Quasi-posterior. For a given choice of a loss function, the Laplace-type estimator is defined as the minimizer of the expected risk with the expectation taken under the Quasi-posterior. Besides providing an alternative estimate that is more precise (lower mean absolute error (MAE) and lower root-mean squared error (RMSE)) than the usual least-squares one, the Quasi-posterior distribution can be used to construct asymptotically valid inference using the concept of Highest Density Region. The proposed Laplace-based inferential procedure is shown to have lower MAE and RMSE, and to yield confidence sets that strike the best balance between empirical coverage rates and average lengths relative to traditional long-span methods, whether the break size is small or large.
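To fix ideas, the following Python sketch implements the Laplace-type construction over a grid of candidate break dates, taking a precomputed least-squares criterion SSR(Tb) as given. The scale parameter and the exact normalization of the paper are not reproduced; this is a schematic version only.

```python
import numpy as np

def quasi_posterior(ssr, scale):
    """Quasi-posterior over candidate break dates: a transformation of the
    least-squares criterion, p(Tb) proportional to exp(-SSR(Tb)/scale)."""
    q = np.exp(-(ssr - ssr.min()) / scale)  # shift by the min for stability
    return q / q.sum()

def laplace_estimate(dates, post):
    """Laplace-type estimate under quadratic loss: the quasi-posterior mean
    (under absolute-value loss it would be the quasi-posterior median)."""
    return float(np.sum(dates * post))

def hdr_set(dates, post, level=0.95):
    """Highest Density Region: the smallest set of candidate dates whose
    quasi-posterior mass reaches the desired level."""
    order = np.argsort(post)[::-1]
    mass, keep = 0.0, []
    for i in order:
        keep.append(int(dates[i]))
        mass += post[i]
        if mass >= level:
            break
    return sorted(keep)
```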

 

Generalized Laplace Inference in Multiple Change-Points Models (with Alessandro Casini), March 2018, submitted.

         Under the classical long-span asymptotic framework we develop a class of Generalized Laplace (GL) inference methods for the change-point dates in a linear time series regression model with multiple structural changes analyzed in, e.g., Bai and Perron (1998). The GL estimator is defined by an integration-based rather than an optimization-based method and relies on the least-squares criterion function. It is interpreted as a classical (non-Bayesian) estimator and the inference methods proposed retain a frequentist interpretation. Since inference about the change-point dates is a nonstandard statistical problem, the original insight of Laplace to interpret a certain transformation of a least-squares criterion function as a statistical belief over the parameter space provides a better description of the uncertainty in the data about the change-points than existing methods. Simulations show that the GL estimator is in general more precise than the OLS estimator. On the theoretical side, depending on some input (smoothing) parameter, the class of GL estimators exhibits a dual limiting distribution, namely, the classical shrinkage asymptotic distribution of Bai and Perron (1998) or a Bayes-type asymptotic distribution.
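Schematically, and in generic notation rather than the paper's, the GL quasi-posterior over the change-point dates and the resulting estimator under quadratic loss take the form

\[
p_\gamma(T_1,\dots,T_m) = \frac{\exp\{-\gamma\,\mathrm{SSR}(T_1,\dots,T_m)\}}{\sum_{(S_1,\dots,S_m)} \exp\{-\gamma\,\mathrm{SSR}(S_1,\dots,S_m)\}},
\qquad
\hat{T}^{\mathrm{GL}} = \mathbb{E}_{p_\gamma}\big[(T_1,\dots,T_m)\big],
\]

where $\gamma$ plays the role of the input (smoothing) parameter whose value governs which of the two limiting distributions obtains.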

 

Inference Related to Common Breaks in a Multivariate System with Joined Segmented Trends with Applications to Global and Hemispheric Temperatures (with Dukpa Kim, Tatsushi Oka and Francisco Estrada), January 2017; Revised April 2018. Forthcoming in the Journal of Econometrics.

         What transpires from recent research is that temperatures and forcings seem to be characterized by a linear trend with two changes in the rate of growth. The first occurs in the early 1960s and indicates a very large increase in the rate of growth of both temperatures and radiative forcings. This was termed the "onset of sustained global warming". The second is related to the more recent so-called hiatus period, which suggests that temperatures and total radiative forcings have increased less rapidly since the mid-1990s compared to the larger rate of increase from 1960 to 1990. There are two issues that remain unresolved. The first is whether the breaks in the slope of the trend functions of temperatures and radiative forcings are common. This is important because common breaks coupled with the basic science of climate change would strongly suggest a causal effect from anthropogenic factors to temperatures. The second issue relates to establishing formally, via a proper testing procedure that takes into account the noise in the series, whether there was indeed a "hiatus period" for temperatures since the mid-1990s. This is important because such a test would counter the widely held view that the hiatus is the product of natural internal variability. Our paper provides tests related to both issues. The results show that the breaks in temperatures and forcings are common and that the hiatus is characterized by a significant decrease in the rate of growth of temperatures and forcings. The statistical results are of independent interest and applicable more generally.
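As a rough single-series illustration of the trend specification involved (not the paper's actual multivariate procedures or tests for common breaks), a joined segmented linear trend with two slope changes can be fitted by least squares over a grid of break-date pairs:

```python
import numpy as np

def joined_segmented_fit(y, b1, b2):
    """LS fit of a continuous (joined) segmented trend with slope changes
    at b1 < b2: y_t = a + c*t + d1*max(t-b1, 0) + d2*max(t-b2, 0) + e_t."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.maximum(t - b1, 0.0),
                         np.maximum(t - b2, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(np.sum((y - X @ beta) ** 2))

def estimate_breaks(y, min_seg=10):
    """Grid search for the pair of break dates minimizing the SSR."""
    n, best = len(y), (np.inf, None)
    for b1 in range(min_seg, n - 2 * min_seg):
        for b2 in range(b1 + min_seg, n - min_seg):
            _, ssr = joined_segmented_fit(y, b1, b2)
            if ssr < best[0]:
                best = (ssr, (b1, b2))
    return best[1]
```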

 

Forecasting in the presence of in and out of sample breaks (with Jiawen Xu), Revised January 30, 2017.

         We present a frequentist-based approach to forecast time series in the presence of in-sample and out-of-sample breaks in the parameters of the forecasting model. We first model the parameters as following a random level shift process, with the occurrence of a shift governed by a Bernoulli process. In order to have a structure whereby changes in the parameters are forecastable, we introduce two modifications. The first models the probability of shifts according to some covariates that can be forecast. The second incorporates a built-in mean reversion mechanism into the time path of the parameters. Similar modifications can also be made to model changes in the variance of the error process. Our full model can be cast into a non-linear non-Gaussian state space framework. To estimate it, we use particle filtering and a Monte Carlo expectation maximization algorithm. Simulation results show that the algorithm delivers accurate in-sample estimates; in particular, the filtered estimates of the time path of the parameters follow closely their true variations. We provide a number of empirical applications and compare the forecasting performance of our approach with a variety of alternative methods. These show that substantial gains in forecasting accuracy are obtained.
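A stylized simulation of a parameter path with both modifications may clarify the mechanics. All names below (gamma, mean_rev, beta_bar) are hypothetical; the actual specification and its particle-filter/MCEM estimation are in the paper.

```python
import numpy as np

def simulate_shift_path(T, x, gamma, mean_rev, beta_bar, sigma_eta, rng):
    """Random level shift path for a forecasting-model parameter.
    Shift probability p_t is a logistic function of a forecastable
    covariate x_t; each shift pulls the parameter back toward the
    long-run level beta_bar (built-in mean reversion)."""
    beta = np.empty(T)
    beta[0] = beta_bar
    for t in range(1, T):
        p_t = 1.0 / (1.0 + np.exp(-(gamma[0] + gamma[1] * x[t])))
        if rng.random() < p_t:  # a level shift occurs at date t
            beta[t] = (beta[t - 1]
                       + mean_rev * (beta_bar - beta[t - 1])
                       + sigma_eta * rng.standard_normal())
        else:                   # otherwise the parameter stays put
            beta[t] = beta[t - 1]
    return beta

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
path = simulate_shift_path(300, x, gamma=(-3.0, 1.0), mean_rev=0.5,
                           beta_bar=1.0, sigma_eta=0.3, rng=rng)
```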

 

Temporal Aggregation, Bandwidth Selection and Long Memory for Volatility Models (with Wendong Shi), June 2014.

         The effects of temporal aggregation and the choice of sampling frequency are of great interest in modeling the dynamics of asset price volatility. We show how the squared low-frequency returns can be expressed in terms of the temporal aggregation of a high-frequency series. Based on the theory of temporal aggregation, we provide the link between the spectral density function of the squared low-frequency returns and that of the squared high-frequency returns. Furthermore, we analyze the properties of the spectral density function of realized volatility series, constructed from squared returns at different frequencies under temporal aggregation. Our theoretical results allow us to explain some recently reported findings and uncover new features of volatility in financial market indices. The theoretical findings are illustrated via the analysis of both low-frequency daily S&P 500 returns from 1928 to 2011 and high-frequency 1-minute S&P 500 returns from 1986 to 2007.
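The link operates through the standard aliasing identity for temporally aggregated series. In generic notation (ours, not necessarily the paper's), if $Y_\tau$ aggregates $m$ consecutive high-frequency observations of $y_t$, its spectral density satisfies

\[
f_Y(\omega) = \frac{1}{m} \sum_{j=0}^{m-1} \left| H\!\left(\frac{\omega + 2\pi j}{m}\right) \right|^{2} f_y\!\left(\frac{\omega + 2\pi j}{m}\right),
\qquad
|H(\lambda)|^{2} = \frac{\sin^{2}(m\lambda/2)}{\sin^{2}(\lambda/2)},
\]

where $H$ is the transfer function of the aggregation filter $1 + L + \cdots + L^{m-1}$.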

 

Testing Jointly for Structural Changes in the Error Variance and Coefficients of a Linear Regression Model (with Jing Zhou), July 2008.

         We provide a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression involving stationary regressors. Our framework is quite general in that we allow for general mixing-type regressors, and the assumptions imposed on the errors are quite mild. The errors' distribution can be non-normal and conditional heteroskedasticity is permissible. Extensions to the case with serially correlated errors are also treated. We provide the required tools for addressing the following testing problems, among others: a) testing for given numbers of changes in the regression coefficients and the variance of the errors; b) testing for some unknown number of changes less than some pre-specified maximum; c) testing for changes in variance (regression coefficients) allowing for a given number of changes in the regression coefficients (variance); and d) estimating the number of changes present. These testing problems are important for practical applications, as witnessed by the recent interest in macroeconomics and finance in documenting structural change in the variability of shocks to simple autoregressions or vector autoregressive models.
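For intuition, testing problem (a) for a single joint break can be cast, in a stripped-down form, as a sup-likelihood-ratio statistic built from Gaussian quasi-likelihoods with segment-specific variances. The sketch below ignores the correction factors and serial-correlation adjustments that the paper develops.

```python
import numpy as np

def loglik_segment(y, X):
    """Gaussian quasi-log-likelihood of an OLS segment with its own variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.mean((y - X @ beta) ** 2)
    return -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1.0)

def sup_lr_joint(y, X, trim=0.15):
    """Sup-LR statistic for a one-time joint change in the regression
    coefficients and the error variance at an unknown date."""
    n, p = X.shape
    ll0 = loglik_segment(y, X)
    lo, hi = max(int(trim * n), p + 1), min(int((1 - trim) * n), n - p - 1)
    return max(2.0 * (loglik_segment(y[:k], X[:k])
                      + loglik_segment(y[k:], X[k:]) - ll0)
               for k in range(lo, hi))
```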

 

Testing for Breaks in Coefficients and Error Variance: Simulations and Applications (with Jing Zhou), July 2008.

         In a companion paper, Perron and Zhou (2008) provided a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression model involving stationary regressors, allowing the break dates for the two components to be different or to overlap. The aim of this paper is twofold. First, we present detailed simulation analyses to document various issues related to their procedures: a) the inadequacy of the two-step procedures that are commonly applied; b) which particular version of the necessary correction factor exhibits better finite sample properties; c) whether applying a correction that is valid under more general conditions than necessary is detrimental to the size and power of the tests; d) the finite sample size and power of the various tests proposed; e) the performance of the sequential method in determining the number and types of breaks present. Second, we apply their testing procedures to various macroeconomic time series studied by Stock and Watson (2002). Our results reinforce the prevalence of changes in the mean, persistence and variance of the shocks to these series, and the fact that for most of them an important reduction in variance occurred during the 1980s. In many cases, however, the so-called "great moderation" should instead be viewed as a "great reversion".

 

An Analytical Evaluation of the Log-periodogram Estimate in the Presence of Level Shifts (with Zhongjun Qu), November 2007.

         Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have shown that when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. We analyze the properties of the log-periodogram estimate of the memory parameter when the jump component is specified by a simple mixture model. Our theoretical results explain many previously reported findings and uncover new features. Simulations are presented to highlight the properties of the distributions and to assess the adequacy of our approximations. We also show the usefulness of our results to distinguish between long memory and level shifts via an application to the volatility of daily returns for wheat commodity futures.

          Note: This is a revised version of parts of a working paper entitled "An Analytical Evaluation of the Log-periodogram Estimate in the Presence of Level Shifts and its Implications for Stock Returns Volatility". 
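The mechanism is easy to reproduce numerically. In the minimal sketch below, a GPH-style log-periodogram regression applied to short-memory noise contaminated by rare random level shifts typically returns an estimate of d well above zero; the bandwidth m and the jump probability are arbitrary illustrative choices.

```python
import numpy as np

def log_periodogram_d(x, m):
    """GPH log-periodogram regression estimate of the memory parameter d,
    using the first m Fourier frequencies."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -2.0 * np.log(2.0 * np.sin(lam / 2.0))  # slope on this regressor is d
    reg = reg - reg.mean()
    return float(np.sum(reg * np.log(I)) / np.sum(reg ** 2))

# Short-memory noise plus rare level shifts (a simple mixture model)
rng = np.random.default_rng(1)
n = 4096
jumps = (rng.random(n) < 5.0 / n) * rng.standard_normal(n)
x = np.cumsum(jumps) + rng.standard_normal(n)
print(log_periodogram_d(x, m=int(n ** 0.5)))  # typically biased away from 0
```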

 

Some of this work was supported by the National Science Foundation under Grants No. 0649350 and 0078492.