Professor Valentina Corradi


Professor of Econometrics
+441483683914

About

Biography

Valentina Corradi obtained her PhD in Economics in 1994 from the University of California, San Diego. She has held positions at the University of Pennsylvania, Queen Mary University of London, the University of Exeter and the University of Warwick.

Her work has been published in the Journal of Econometrics, Econometric Theory, the Journal of the American Statistical Association, the Review of Economic Studies, the International Economic Review and the Journal of Monetary Economics.

Valentina's current research interests include: (i) modelling and testing for jumps in financial assets; (ii) evaluation of trading strategies; (iii) financial analysts' bias; (iv) bandwidth selection for non-stationary processes; and (v) heaping and measurement error in child mortality data.

Research interests

  • Econometric theory
  • Financial econometrics
  • Time series: predictive evaluation
  • Realized measures and jumps
  • Data-driven procedures for bandwidth selection
  • Moment inequalities
  • Factor models
  • Conditional CAPM.

Teaching

  • Econometrics for PhDs.

Departmental duties

  • PhD Programme Director.

Publications

Valentina Corradi, Jack Fosten, Daniel Gutknecht (2023) Out-of-sample tests for conditional quantile coverage: an application to Growth-at-Risk, In: Journal of Econometrics 236(2), 105490. Elsevier B.V.

This paper proposes tests for out-of-sample comparisons of interval forecasts based on parametric conditional quantile models. The tests rank the distance between actual and nominal conditional coverage with respect to the set of conditioning variables from all models, for a given loss function. We propose a pairwise test to compare two models for a single predictive interval. The set-up is then extended to a comparison across multiple models and/or intervals. The limiting distribution varies depending on whether models are strictly non-nested or overlapping. In the latter case, degeneracy may occur. We establish the asymptotic validity of wild bootstrap based critical values across all cases. An empirical application to Growth-at-Risk (GaR) uncovers situations in which a richer set of financial indicators are found to outperform a commonly-used benchmark model when predicting downside risk to economic activity.
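
A minimal illustrative sketch of the kind of object the tests compare, in Python (simulated data and a hypothetical function name; the paper ranks coverage conditional on each model's conditioning variables, whereas this simplification uses unconditional coverage):

import numpy as np

def coverage_error(y, lower, upper, alpha):
    # absolute distance between empirical and nominal coverage of a (1 - alpha) interval forecast
    hits = (y >= lower) & (y <= upper)
    return abs(hits.mean() - (1.0 - alpha))

rng = np.random.default_rng(0)
y = rng.standard_normal(500)                              # stand-in for realised GDP growth
lo_a, up_a = np.full(500, -1.645), np.full(500, 1.645)    # model A: well-calibrated 90% interval
lo_b, up_b = np.full(500, -1.0), np.full(500, 1.0)        # model B: intervals that are too narrow
print(coverage_error(y, lo_a, up_a, 0.10), coverage_error(y, lo_b, up_b, 0.10))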

Zizhong Yan, Wiji Arulampalam, Valentina Corradi, Daniel Gutknecht (2020) heap: A command for fitting discrete outcome variable models in the presence of heaping at known points, In: The Stata Journal 20(2), pp. 435-467. Sage

Self-reported survey data are often plagued by the presence of heaping. Accounting for this measurement error is crucial for the identification and consistent estimation of the underlying model (parameters) from such data. In this article, we introduce two commands. The first command, heapmph, estimates the parameters of a discrete-time mixed proportional hazard model with gamma unobserved heterogeneity, allowing for fixed and individual-specific censoring and different-sized heap points. The second command, heapop, extends the framework to ordered choice outcomes, subject to heaping. We also provide suitable specification tests.

Valentina Corradi, Walter Distaso, Marcelo Fernandes (2019) Testing for Jump Spillovers without Testing for Jumps, In: Journal of the American Statistical Association. Taylor & Francis

This paper develops statistical tools for testing conditional independence among the jump components of the daily quadratic variation, which we estimate using intraday data. To avoid sequential bias distortion, we do not pretest for the presence of jumps. If the null is true, our test statistic based on daily integrated jumps weakly converges to a Gaussian random variable if both assets have jumps. If instead at least one asset has no jumps, then the statistic approaches zero in probability. We show how to compute asymptotically valid bootstrap-based critical values that result in a consistent test with asymptotic size equal to or smaller than the nominal size. Empirically, we study jump linkages between US futures and equity index markets. We find not only strong evidence of jump cross-excitation between the SPDR exchange-traded fund and E-mini futures on the S&P 500 index, but also that integrated jumps in the E-mini futures during the overnight period carry relevant information.

V Corradi, NR Swanson (2006) Predictive density and conditional confidence interval accuracy tests, In: Journal of Econometrics 135(1-2), pp. 187-228

This paper outlines testing procedures for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models. The tests that are discussed are based on either the comparison of entire conditional distributions or the comparison of predictive confidence intervals. We also briefly survey existing related methods in the area of predictive density evaluation, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. The procedures proposed in this paper are similar in many ways to the conditional Kolmogorov test of Andrews [1997. A conditional Kolmogorov test. Econometrica 65, 1097-1128] and to the reality check of White [2000. A reality check for data snooping. Econometrica 68, 1097-1126]. In particular, a predictive density test is outlined that involves comparing square (approximation) errors associated with models i, i = 1, ..., n, by constructing weighted averages over U of E[(F_i(u | Z^t, θ_i†) - F_0(u | Z^t, θ_0))^2], where F_0(·|·) and F_i(·|·) are the true and model-i distributions, u ∈ U, and U is a possibly unbounded set on the real line. A conditional confidence interval version of this test is also outlined, and appropriate bootstrap procedures for obtaining critical values when predictions used in the formation of the test statistics are obtained via rolling and recursive estimation schemes are developed. An empirical illustration comparing alternative predictive models for U.S. inflation is given for the predictive confidence interval test. © 2005 Elsevier B.V. All rights reserved.
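
Written out as display math, the accuracy measure described in the abstract reads as follows (a transcription of the in-line formula; the weight function φ over U is my notation for the "weighted averages over U"):

\mu_i \;=\; \int_{U} \mathbb{E}\!\left[\left(F_i(u \mid Z^{t}, \theta_i^{\dagger}) - F_0(u \mid Z^{t}, \theta_0)\right)^{2}\right] \phi(u)\, du, \qquad i = 1, \dots, n,

with a smaller μ_i indicating a more accurate predictive distribution, in the spirit of the reality-check comparison described above.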

Valentina Corradi, Jack Fosten, Daniel Gutknecht (2024) Predictive Ability Tests with Possibly Overlapping Models, In: Journal of Econometrics. Elsevier

This paper provides novel tests for comparing the out-of-sample predictive ability of two or more competing models that are possibly overlapping. The tests do not require pre-testing, they allow for dynamic misspecification, and they are valid under different estimation schemes and loss functions. In pairwise model comparisons, the test is constructed by adding a random perturbation to both the numerator and denominator of a standard Diebold-Mariano test statistic. This prevents degeneracy in the presence of overlapping models but becomes asymptotically negligible otherwise. The test is shown to control the Type I error probability asymptotically at the nominal level, uniformly over all null data generating processes. A similar idea is used to develop a superior predictive ability test for the comparison of multiple models against a benchmark. Monte Carlo simulations demonstrate that our tests exhibit very good size control in finite samples, reducing both over- and under-rejection relative to their competitors. Finally, an application to forecasting U.S. excess bond returns provides evidence in favour of models using macroeconomic factors.
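
A stylised Python sketch of the perturbation idea (simulated losses; the exact form and, crucially, the rate of the perturbation terms are established in the paper and are not reproduced here):

import numpy as np

def perturbed_dm(loss1, loss2, c_num=0.1, c_den=0.1, seed=0):
    # Diebold-Mariano type statistic with random perturbations added to numerator and
    # denominator, so the ratio stays well defined when the loss differential degenerates
    # (overlapping models); the paper chooses the scaling so the perturbations are
    # asymptotically negligible otherwise.
    rng = np.random.default_rng(seed)
    d = np.asarray(loss1) - np.asarray(loss2)
    P = d.size
    num = np.sqrt(P) * d.mean() + c_num * rng.standard_normal()
    den = np.sqrt(d.var(ddof=1) + c_den * abs(rng.standard_normal()))
    return num / den

rng = np.random.default_rng(1)
e = rng.standard_normal(200)
print(perturbed_dm(e**2, e**2))          # identical (overlapping) losses: still a finite number
print(perturbed_dm(e**2, (1.2 * e)**2))  # genuinely different losses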

V Corradi, EM Iglesias (2008) Bootstrap refinements for QML estimators of the GARCH(1,1) parameters, In: Journal of Econometrics 144(2), pp. 500-510

This paper reconsiders a block bootstrap procedure for Quasi Maximum Likelihood estimation of GARCH models, based on the resampling of the likelihood function, as proposed by Gonçalves and White [2004. Maximum likelihood and the bootstrap for nonlinear dynamic models. Journal of Econometrics 119, 199-219]. First, we provide necessary conditions and sufficient conditions, in terms of moments of the innovation process, for the existence of the Edgeworth expansion of the GARCH(1,1) estimator, up to the k-th term. Second, we provide sufficient conditions for higher order refinements for equally tailed and symmetric test statistics. In particular, the bootstrap estimator based on resampling the likelihood has the same higher order improvements in terms of error in the rejection probabilities as those in Andrews [2002. Higher-order improvements of a computationally attractive k-step bootstrap for extremum estimators. Econometrica 70, 119-162]. © 2008 Elsevier B.V. All rights reserved.

V Corradi, W Distaso, M Fernandes (2012) International market links and volatility transmission, In: Journal of Econometrics 170(1), pp. 117-141

This paper gauges volatility transmission between stock markets by testing conditional independence of their volatility measures. In particular, we check whether the conditional density of the volatility changes if we further condition on the volatility of another market. We employ nonparametric methods to estimate the conditional densities and model-free realized measures of volatility, allowing for both microstructure noise and jumps. We establish the asymptotic normality of the test statistic as well as the first-order validity of the bootstrap analog. Finally, we uncover significant volatility spillovers between the stock markets in China, Japan, UK and US. © 2012 Elsevier B.V. All rights reserved.

V Corradi, W Distaso, NR Swanson (2011) Predictive inference for integrated volatility, In: Journal of the American Statistical Association 106(496), pp. 1496-1512

Numerous volatility-based derivative products have been engineered in recent years. This has led to interest in constructing conditional predictive densities and confidence intervals for integrated volatility. In this article we propose nonparametric estimators of the aforementioned quantities, based on model-free volatility estimators. We establish consistency and asymptotic normality for the feasible estimators and study their finite-sample properties through a Monte Carlo experiment. Finally, using data from the New York Stock Exchange, we provide an empirical application to volatility directional predictability. © 2011 American Statistical Association.

V Corradi, W Distaso (2006) Semi-parametric comparison of stochastic volatility models using realized measures, In: Review of Economic Studies 73(3), pp. 635-667

This paper proposes a procedure to test for the correct specification of the functional form of the volatility process within the class of eigenfunction stochastic volatility models. The procedure is based on the comparison of the moments of realized volatility measures with the corresponding ones of integrated volatility implied by the model under the null hypothesis. We first provide primitive conditions on the measurement error associated with the realized measure, which allow one to construct asymptotically valid specification tests. Then we establish regularity conditions under which the considered realized measures, namely, realized volatility, bipower variation, and modified subsampled realized volatility, satisfy the given primitive assumptions. Finally, we provide an empirical illustration based on three stocks from the Dow Jones Industrial Average. © 2006 The Review of Economic Studies Limited.

V Corradi, A Fernandez, NR Swanson (2009) Information in the revision process of real-time datasets, In: Journal of Business and Economic Statistics 27(4), pp. 455-467

Rationality of early release data is typically tested using linear regressions. Thus, failure to reject the null does not rule out the possibility of nonlinear dependence. This paper proposes two tests that have power against generic nonlinear alternatives. A Monte Carlo study shows that the suggested tests have good finite sample properties. Additionally, we carry out an empirical illustration using a real-time dataset for money, output, and prices. Overall, we find evidence against data rationality for output and prices, but not for money. © 2009 American Statistical Association.

V Corradi, W Distaso (2012) Multiple Forecast Model Evaluation, In: The Oxford Handbook of Economic Forecasting

© 2011 by Oxford University Press. All rights reserved. This article focuses on recent developments in the forecasting literature on how to simultaneously control both the overall error rate and the contribution of irrelevant models. As a novel contribution, it derives a general class of superior predictive ability tests, which controls for family-wise error rate (FWER) and the contribution of irrelevant models. The article is organized as follows. Section 2 defines the setup. Section 3 reviews the approaches that control for the conservative FWER. Section 4 considers a general class of tests characterized by multiple joint inequalities. Section 5 presents results allowing for control of the less conservative false discovery rate. Section 6 considers the model confidence set approach and offers a simple alternative that reduces the influence of irrelevant models in the initial set. Section 7 briefly reviews the empirical evidence, while Section 8 concludes.

V Corradi, W Distaso, A Mele (2013) Macroeconomic determinants of stock volatility and volatility premiums, In: Journal of Monetary Economics 60(2), pp. 203-220

How does stock market volatility relate to the business cycle? We develop, and estimate, a no-arbitrage model, and find that (i) the level and fluctuations of stock volatility are largely explained by business cycle factors and (ii) some unobserved factor contributes nearly 20% to the overall variation in volatility, although not to its ups and downs. Instead, this "volatility of volatility" relates to the business cycle. Finally, volatility risk-premiums are strongly countercyclical, even more than stock volatility, and partially explain the large swings of the VIX index during the 2007-2009 subprime crisis, which our model captures in out-of-sample experiments. © 2012 Elsevier B.V.

V Corradi, NR Swanson (2011) Predictive density construction and accuracy testing with multiple possibly misspecified diffusion models, In: Journal of Econometrics 161(2), pp. 304-324

This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, we first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. We then construct tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of our tests, we also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rates are compared. © 2011 Elsevier B.V. All rights reserved.

Valentina Corradi, Sainan Jin, Norman R. Swanson (2023) Robust forecast superiority testing with an application to assessing pools of expert forecasters, In: Journal of Applied Econometrics. Wiley

We develop a forecast superiority testing methodology which is robust to the choice of loss function. Following Jin, Corradi and Swanson (JCS: 2017), we rely on a mapping between generic loss forecast evaluation and stochastic dominance principles. However, unlike JCS tests, which are not uniformly valid, and have correct asymptotic size only under the least favorable case, our tests are uniformly asymptotically valid and non-conservative. These properties are derived by first establishing uniform convergence (over error support) of HAC variance estimators. Monte Carlo experiments indicate good finite sample performance of the new tests, and an empirical illustration suggests that prior forecast accuracy matters in the Survey of Professional Forecasters. Namely, for our longest forecast horizons (4 quarters ahead), selecting pools of expert forecasters based on prior accuracy results in ensemble forecasts that are superior to those based on forming simple averages and medians from the entire panel of experts.

FM Bandi, V Corradi (2014) Nonparametric nonstationarity tests, In: Econometric Theory 30(1), pp. 127-149

We propose additive functional-based nonstationarity tests that exploit the different divergence rates of the occupation times of a (possibly nonlinear) process under the null of nonstationarity (stationarity) versus the alternative of stationarity (nonstationarity). We consider both discrete-time series and continuous-time processes. The discrete-time case covers Harris recurrent Markov chains and integrated processes. The continuous-time case focuses on Harris recurrent diffusion processes. Notwithstanding finite-sample adjustments discussed in the paper, the proposed tests are simple to implement and rely on tabulated critical values. Simulations show that their size and power properties are satisfactory. Our robustness to nonlinear dynamics provides a solution to the typical inconsistency problem between assumed linearity of a time series for the purpose of nonstationarity testing and subsequent nonlinear inference. Copyright © Cambridge University Press 2013.

G Bhardwaj, V Corradi, NR Swanson (2008) A simulation-based specification test for diffusion processes, In: Journal of Business and Economic Statistics 26(2), pp. 176-193

This article makes two contributions. First, we outline a simple simulation-based framework for constructing conditional distributions for multifactor and multidimensional diffusion processes, for the case where the functional form of the conditional density is unknown. The distributions can be used, for example, to form predictive confidence intervals for time period t + τ, given information up to period t. Second, we use the simulation-based approach to construct a test for the correct specification of a diffusion process. The suggested test is in the spirit of the conditional Kolmogorov test of Andrews. However, in the present context the null conditional distribution is unknown and is replaced by its simulated counterpart. The limiting distribution of the test statistic is not nuisance parameter-free. In light of this, asymptotically valid critical values are obtained via appropriate use of the block bootstrap. The suggested test has power against a larger class of alternatives than tests that are constructed using marginal distributions/densities. The findings of a small Monte Carlo experiment underscore the good finite sample properties of the proposed test, and an empirical illustration underscores the ease with which the proposed simulation and testing methodology can be applied. © 2008 American Statistical Association.
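
A compact Python sketch of the simulation step for a one-dimensional diffusion (hypothetical toy dynamics and parameter values; the article covers multifactor and multidimensional processes and then builds a specification test on top of the simulated distribution):

import numpy as np

def simulated_conditional_distribution(x0, drift, diffusion, tau=1.0,
                                        n_paths=20000, n_steps=100, seed=0):
    # Euler discretisation: simulate n_paths trajectories of the diffusion from X_t = x0
    # and return the simulated draws of X_{t+tau}, whose empirical distribution estimates
    # the conditional distribution when its functional form is unknown.
    rng = np.random.default_rng(seed)
    dt = tau / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        x = x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# toy mean-reverting square-root process, purely for illustration
draws = simulated_conditional_distribution(
    x0=0.05,
    drift=lambda x: 0.5 * (0.04 - x),
    diffusion=lambda x: 0.1 * np.sqrt(np.abs(x)),
)
print(np.percentile(draws, [5, 95]))   # simulated 90% predictive confidence interval for t + tau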

V Corradi, W Distaso, NR Swanson (2009) Predictive density estimators for daily volatility based on the use of realized measures, In: Journal of Econometrics 150(2), pp. 119-138

The main objective of this paper is to propose a feasible, model free estimator of the predictive density of integrated volatility. In this sense, we extend recent papers by Andersen et al. [Andersen, T.G., Bollerslev, T., Diebold, F.X., Labys, P., 2003. Modelling and forecasting realized volatility. Econometrica 71, 579-626], and by Andersen et al. [Andersen, T.G., Bollerslev, T., Meddahi, N., 2004. Analytic evaluation of volatility forecasts. International Economic Review 45, 1079-1110; Andersen, T.G., Bollerslev, T., Meddahi, N., 2005. Correcting the errors: Volatility forecast evaluation using high frequency data and realized volatilities. Econometrica 73, 279-296], who address the issue of pointwise prediction of volatility via ARMA models, based on the use of realized volatility. Our approach is to use a realized volatility measure to construct a non-parametric (kernel) estimator of the predictive density of daily volatility. We show that, by choosing an appropriate realized measure, one can achieve consistent estimation, even in the presence of jumps and microstructure noise in prices. More precisely, we establish that four well known realized measures, i.e. realized volatility, bipower variation, and two measures robust to microstructure noise, satisfy the conditions required for the uniform consistency of our estimator. Furthermore, we outline an alternative simulation based approach to predictive density construction. Finally, we carry out a simulation experiment in order to assess the accuracy of our estimators, and provide an empirical illustration that underscores the importance of using microstructure robust measures when using high frequency data. © 2009 Elsevier B.V. All rights reserved.
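
A simplified Python sketch of the kernel step (simulated persistent "realized variance" series and ad hoc bandwidths; in the paper the conditioning variable is a jump- and noise-robust realized measure and the bandwidths are chosen to deliver uniform consistency):

import numpy as np

def cond_density(v_next, v_today, grid, x, h1, h2):
    # Nadaraya-Watson estimate of f(v_{t+1} = grid | v_t = x) using Gaussian kernels
    k_x = np.exp(-0.5 * ((v_today - x) / h1) ** 2)
    k_g = np.exp(-0.5 * ((grid[:, None] - v_next[None, :]) / h2) ** 2) / (h2 * np.sqrt(2 * np.pi))
    return (k_g * k_x).sum(axis=1) / k_x.sum()

rng = np.random.default_rng(0)
log_rv = np.zeros(1000)
for t in range(1, 1000):                      # toy persistent log realized variance
    log_rv[t] = 0.9 * log_rv[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(log_rv)
grid = np.linspace(0.1, 5.0, 200)
dens = cond_density(rv[1:], rv[:-1], grid, x=1.0, h1=0.3, h2=0.2)
print(grid[dens.argmax()])                    # mode of the estimated predictive density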

V Corradi, NR Swanson (2007) Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes, In: International Economic Review 48(1), pp. 67-109

We introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting among multiple alternative forecasting models, all of which are possibly misspecified. In a Monte Carlo investigation, we compare the finite sample properties of our block bootstrap procedures with the parametric bootstrap due to Kilian (Journal of Applied Econometrics 14 (1999), 491-510), within the context of encompassing and predictive accuracy tests. In the empirical illustration, it is found that unemployment has nonlinear marginal predictive content for inflation. © 2007 by the Economics Department Of The University Of Pennsylvania And Osaka University Institute Of Social And Economic Research Association.
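
A bare-bones Python sketch of the two ingredients, using an AR(1) forecasting example assumed purely for illustration: forecast errors generated under a recursive (expanding-window) estimation scheme, and a moving-block resample of the resulting losses of the kind whose first-order validity the paper establishes:

import numpy as np

def recursive_ar1_errors(y, R):
    # one-step AR(1) forecast errors, re-estimating the slope on observations up to t
    # (expanding window) before forecasting y_{t+1}, for t = R, ..., T-2
    errs = []
    for t in range(R, len(y) - 1):
        x, z = y[:t], y[1:t + 1]
        beta = (x * z).sum() / (x * x).sum()
        errs.append(y[t + 1] - beta * y[t])
    return np.array(errs)

def moving_block_resample(x, block_len, rng):
    # resample overlapping blocks of length block_len and stitch them together
    starts = rng.integers(0, len(x) - block_len + 1, size=int(np.ceil(len(x) / block_len)))
    return np.concatenate([x[s:s + block_len] for s in starts])[:len(x)]

rng = np.random.default_rng(0)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
e = recursive_ar1_errors(y, R=200)
boot_losses = moving_block_resample(e ** 2, block_len=10, rng=rng)
print(len(e), e.var(), boot_losses.mean())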

S Jin, Valentina Corradi, NR Swanson (2016) Robust forecast comparison, In: Econometric Theory 33(6), pp. 1306-1351. Cambridge University Press

Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. In order to address this issue, a novel criterion for forecast evaluation that utilizes the entire distribution of forecast errors is introduced. In particular, we introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority; and we develop tests for GL (CL) superiority that are based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi, and Whang (2005, Review of Economic Studies 72, 735–765). Our test statistics are characterized by nonstandard limiting distributions, under the null, necessitating the use of resampling procedures to obtain critical values. Additionally, the tests are consistent and have nontrivial local power, under a sequence of local alternatives. The above theory is developed for the stationary case, as well as for the case of heterogeneity that is induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples, and an application in which we examine exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
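
A simplified Python sketch of the underlying idea (simulated errors; this is an out-of-sample Kolmogorov-Smirnov-type comparison of absolute-error distributions rather than the paper's exact GL/CL statistics and resampling scheme):

import numpy as np

def dominance_stat(abs_err_1, abs_err_2, grid):
    # sup over the grid of sqrt(P) * (F2(u) - F1(u)), where Fk is the empirical CDF of
    # model k's absolute forecast errors; values at or below zero are consistent with
    # model 1's errors being stochastically no larger than model 2's at every threshold.
    P = len(abs_err_1)
    f1 = np.array([(abs_err_1 <= u).mean() for u in grid])
    f2 = np.array([(abs_err_2 <= u).mean() for u in grid])
    return np.sqrt(P) * np.max(f2 - f1)

rng = np.random.default_rng(0)
e1 = np.abs(rng.normal(scale=1.0, size=500))   # model 1 forecast errors
e2 = np.abs(rng.normal(scale=1.3, size=500))   # model 2: larger errors at every quantile
grid = np.linspace(0.1, 4.0, 100)
print(dominance_stat(e1, e2, grid))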

W Arulampalam, V Corradi, D Gutknecht (2016) Modeling Heaped Duration Data: An Application to Neonatal Mortality, The University of Warwick - CAGE - Centre for Competitive Advantage in the Global Economy

In 2005, the Indian Government launched a conditional cash-incentive program to encourage institutional delivery. This paper studies the effects of the program on neonatal mortality using district-level household survey data. We model mortality using survival analysis, paying special attention to the substantial heaping present in the data. The main objective of this paper is to provide a set of sufficient conditions for identification and consistent estimation of the baseline hazard accounting for heaping and unobserved heterogeneity. Our identification strategy requires neither administrative data nor multiple measurements, but a correctly reported duration and the presence of some flat segments in the baseline hazard which includes this correctly reported duration point. We establish the asymptotic properties of the maximum likelihood estimator and provide a simple procedure to test whether the policy had (uniformly) reduced mortality. While our empirical findings do not confirm the latter, they do indicate that accounting for heaping matters for the estimation of the baseline hazard.

Cointegration, common cycle, and related test statistics are often constructed using logged data, even without a clear reason why logs should be used rather than levels. Unfortunately, it is also the case that standard data transformation tests, such as those based on Box-Cox transformations, cannot be shown to be consistent unless assumptions concerning whether variables are I(0) or I(1) are made. In this paper, we propose a simple randomized procedure for choosing between levels and log-levels specifications in the (possible) presence of deterministic and/or stochastic trends, and discuss the impact of incorrect data transformation on common cycle, cointegration and unit root tests. © 2005 Elsevier B.V. All rights reserved.

We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare "true" joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov type testing, and other work on the evaluation of DSGEs, aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the "benchmark" model, against which all "alternative" models are to be compared. We then test whether at least one of the alternative models provides a more "accurate" approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned non-plausible variances and/or distributional assumptions. © 2005 Elsevier B.V. All rights reserved.

V Corradi, NR Swanson (2006) Bootstrap conditional distribution tests in the presence of dynamic misspecification, In: Journal of Econometrics 133(2), pp. 779-806

In this paper, we show the first order validity of the block bootstrap for Kolmogorov-type conditional distribution tests under dynamic misspecification and parameter estimation error. Our approach is unique because we construct statistics that allow for dynamic misspecification under both hypotheses. We consider two tests: the CK test of Andrews [1997. A conditional Kolmogorov test, Econometrica 65, 1097-1128], and a version of the DGT test of Diebold, Gunther and Tay [1998a. Evaluating density forecasts with applications to finance and management. International Economic Review 39, 863-883]. Test limiting distributions are Gaussian processes with covariance kernels that reflect dynamic misspecification and parameter estimation error. Critical values are based on an extension of the empirical process version of the block bootstrap to the case of nonvanishing parameter estimation error. Monte Carlo experiments are also carried out. © 2005 Elsevier B.V. All rights reserved.

V Corradi, NR Swanson (2006) Chapter 5 Predictive Density Evaluation, In: Handbook of Economic Forecasting 1, pp. 197-284

This chapter discusses estimation, specification testing, and model selection of predictive density models. In particular, predictive density estimation is briefly discussed, and a variety of different specification and model evaluation tests due to various authors including Christoffersen and Diebold [Christoffersen, P., Diebold, F.X. (2000). "How relevant is volatility forecasting for financial risk management?". Review of Economics and Statistics 82, 12-22], Diebold, Gunther and Tay [Diebold, F.X., Gunther, T., Tay, A.S. (1998). "Evaluating density forecasts with applications to finance and management". International Economic Review 39, 863-883], Diebold, Hahn and Tay [Diebold, F.X., Hahn, J., Tay, A.S. (1999). "Multivariate density forecast evaluation and calibration in financial risk management: High frequency returns on foreign exchange". Review of Economics and Statistics 81, 661-673], White [White, H. (2000). "A reality check for data snooping". Econometrica 68, 1097-1126], Bai [Bai, J. (2003). "Testing parametric conditional distributions of dynamic models". Review of Economics and Statistics 85, 531-549], Corradi and Swanson [Corradi, V., Swanson, N.R. (2005a). "A test for comparing multiple misspecified conditional distributions". Econometric Theory 21, 991-1016; Corradi, V., Swanson, N.R. (2005b). "Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes". Working Paper, Rutgers University; Corradi, V., Swanson, N.R. (2006a). "Bootstrap conditional distribution tests in the presence of dynamic misspecification". Journal of Econometrics, in press; Corradi, V., Swanson, N.R. (2006b). "Predictive density and conditional confidence interval accuracy tests". Journal of Econometrics, in press], Hong and Li [Hong, Y.M., Li, H.F. (2003). "Nonparametric specification testing for continuous time models with applications to term structure of interest rates". Review of Financial Studies, 18, 37-84], and others are reviewed. Extensions of some existing techniques to the case of out-of-sample evaluation are also provided, and asymptotic results associated with these extensions are outlined. © 2006 Elsevier B.V. All rights reserved.

V Corradi, MJ Silvapulle, NR Swanson (2014) Consistent Pretesting for Jumps, In: Working Paper

If the intensity parameter in a jump diffusion model is identically zero, then parameters characterizing the jump size density cannot be identified. In general, this lack of identification precludes consistent estimation of identified parameters. Hence, it should be standard practice to consistently pretest for jumps, prior to estimating jump diffusions. Many currently available tests have power against the presence of jumps over a finite time span (typically a day or a week); and, as already noted by various authors, jumps may not be observed over finite time spans, even if the intensity parameter is strictly positive. Such tests cannot be consistent against non-zero intensity. Moreover, sequential application of finite time span tests usually leads to sequential testing bias, which in turn leads to jump discovery with probability one, in the limit, even if the true intensity is identically zero. This paper introduces tests for jump intensity, based on both in-fill and long-span asymptotics, which solve both the test consistency and the sequential testing bias problems discussed above, in turn facilitating consistent estimation of jump diffusion models.

B Awartani, V Corradi, W Distaso (2009) Assessing market microstructure effects via realized volatility measures with an application to the Dow Jones Industrial Average stocks, In: Journal of Business and Economic Statistics 27(2), pp. 251-265

Transaction prices of financial assets are contaminated by market microstructure effects. This is particularly relevant when estimating volatility using high frequency data. In this article, we assess statistically the effect of microstructure noise on volatility estimators, and test the hypothesis that its variance is independent of the sampling frequency. We provide evidence based on the Dow Jones Industrial Average stocks. We find that noise has a statistically significant effect on volatility estimators at frequencies of 2-3 min or higher. The independently and identically distributed specification with constant variance seems to be a plausible model for microstructure noise, except for ultra high frequencies. © 2009 American Statistical Association.
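
An illustrative Python sketch of the frequency-by-frequency calculation on simulated prices (stylised numbers; the article works with actual Dow Jones constituents and formal tests rather than this eyeballing exercise):

import numpy as np

rng = np.random.default_rng(0)
n = 23400                                                            # one trading day of second-by-second prices
efficient = np.cumsum(0.01 / np.sqrt(n) * rng.standard_normal(n))    # efficient log-price, daily vol 1%
observed = efficient + 5e-4 * rng.standard_normal(n)                 # add i.i.d. microstructure noise

for seconds in (1, 5, 30, 60, 300):
    rv = np.sum(np.diff(observed[::seconds]) ** 2)                   # realized variance at this sampling interval
    print(f"{seconds:>4}-second sampling: RV = {rv:.6f}")            # true integrated variance is 0.0001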

Valentina Corradi, Mervyn J. Silvapulle, Norman R. Swanson (2018) Testing for Jumps and Jump Intensity Path Dependence, In: Journal of Econometrics 204(2), pp. 248-267. Elsevier

In this paper, we fill a gap in the financial econometrics literature, by developing a “jump test” for the null hypothesis that the probability of a jump is zero. The test is based on realized third moments, and uses observations over an increasing time span. The test offers an alternative to standard finite time span tests, and is designed to detect jumps in the data generating process rather than detecting realized jumps over a fixed time span. More specifically, we make two contributions. First, we introduce our largely model free jump test for the null hypothesis of zero jump intensity. Second, under the maintained assumption of strictly positive jump intensity, we introduce a “self excitement test” for the null of constant jump intensity against the alternative of path dependent intensity. The latter test has power against autocorrelation in the jump component, and is a direct test for Hawkes diffusions (see e.g., Aït-Sahalia, Cacho-Diaz and Laeven (2015)). The limiting distributions of the proposed statistics are analyzed via use of a double asymptotic scheme, wherein the time span goes to infinity and the discrete interval approaches zero; and the distributions of the tests are normal and half normal, respectively. The results from a Monte Carlo study indicate that the tests have good finite sample properties.
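
A toy Python sketch of the building block (simulated jump-free intraday returns; the paper's statistics involve a specific studentisation and a joint in-fill/long-span limit theory that this simplification does not reproduce):

import numpy as np

rng = np.random.default_rng(0)
days, m = 1000, 390                                   # long time span, 390 intraday returns per day
# continuous (no-jump) returns: the daily realized third moments are then centred at zero
third_moments = np.array([np.sum((0.01 / np.sqrt(m) * rng.standard_normal(m)) ** 3)
                          for _ in range(days)])
t_stat = np.sqrt(days) * third_moments.mean() / third_moments.std(ddof=1)
print(t_stat)                                         # roughly standard normal in the absence of jumps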

Wiji Arulampalam, Valentina Corradi, Daniel Gutknecht (2023) Intercept Estimation in Nonlinear Selection Models, In: Econometric Theory. Cambridge University Press

We propose various semiparametric estimators for nonlinear selection models, where slope and intercept can be separately identified. When the selection equation satisfies a monotonic index restriction, we suggest a local polynomial estimator, using only observations for which the marginal cumulative distribution function of the instrument index is close to one. Data-driven procedures such as cross-validation may be used to select the bandwidth for this estimator. We then consider the case in which either the monotonic index restriction does not hold and/or the set of observations with a propensity score close to one is thin so that convergence occurs at a rate that is arbitrarily close to the cubic rate. We explore the finite sample behavior in a Monte Carlo study and illustrate the use of our estimator using a model for count data with multiplicative unobserved heterogeneity.

V Corradi, NR Swanson (2014) Testing for structural stability of factor augmented forecasting models, In: Journal of Econometrics 182(1), pp. 100-118. Elsevier Science SA

Laura Coroneo, Valentina Corradi, Paulo Santos Monteiro (2018) Testing for optimal monetary policy via moment inequalities, In: Journal of Applied Econometrics 33(6), pp. 780-796. Wiley

The specification of an optimizing model of the monetary transmission mechanism requires selecting a policy regime, commonly commitment or discretion. In this paper, we propose a new procedure for testing optimal monetary policy, relying on moment inequalities that nest commitment and discretion as two special cases. The approach is based on the derivation of bounds for inflation that are consistent with optimal policy under either policy regime. We derive testable implications that allow for specification tests and discrimination between the two alternative regimes. The proposed procedure is implemented to examine the conduct of monetary policy in the United States economy.

Wiji Arulampalam, Valentina Corradi, Daniel Gutknecht (2017) Modeling heaped duration data: An application to neonatal mortality, In: Journal of Econometrics 200(2), pp. 363-377. Elsevier

In 2005, the Indian Government launched a conditional cash-incentive program to encourage institutional delivery. This paper studies the effects of the program on neonatal mortality using district-level household survey data. We model mortality using survival analysis, paying special attention to substantial heaping, a form of measurement error, present in the data. The main objective of this paper is to provide a set of sufficient conditions for identification and consistent estimation of the (discretized) baseline hazard accounting for heaping and unobserved heterogeneity. Our identification strategy requires neither administrative data nor multiple measurements, but a correctly reported duration point and the presence of some flat segment(s) in the baseline hazard. We establish the asymptotic properties of the maximum likelihood estimator and derive a set of specification tests that allow, among other things, testing for the presence of heaping and comparing different heaping mechanisms. Our empirical findings do not suggest a significant reduction of mortality in treated districts. However, they do indicate that accounting for heaping matters for the estimation of the hazard parameters.