We use a sample of option prices, and the method of Bakshi, Kapadia and Madan (2003), to estimate the ex ante higher moments of the underlying individual securities’ risk-neutral returns distribution.
We specify and estimate a time-varying Markov model of COVID-19 cases for the US in 2020. We find that the estimated level of undetected infections spiked in March and remained elevated through May. Since late April, however, estimated undetected infections have generally declined, though it was not until June or July that detected cases exceeded the estimated number of undetected cases.
We apply advances in the analysis of mixed-frequency and sparse data to estimate “unsmoothed” private equity (PE) Net Asset Values (NAVs) at the weekly frequency for individual funds. Using simulations and a large sample of buyout and venture funds, we show that our method yields better estimates of fund asset values than a simple approach based on comparable public assets and as-reported NAVs.
The financial industry has eagerly adopted machine learning algorithms to improve on traditional predictive models. In this paper we caution against blindly applying such techniques. We compare the forecasting ability of machine learning methods in evaluating future payoffs on synthetic variance swaps.
Applied financial econometrics subjects are featured in this second volume, with papers that survey important research even as they make unique empirical contributions to the literature. These subjects are familiar: portfolio choice, trading volume, the risk-return tradeoff, option pricing, bond yields, and the management, supervision, and measurement of extreme and infrequent risks.
This paper evaluates the role of various volatility specifications, such as multiple stochastic volatility (SV) factors and jump components, in appropriate modeling of equity return distributions.
Many time series are sampled at different frequencies. When we study co-movements between such series we usually analyze the joint process sampled at a common low frequency.
Using a sample of the 48 contiguous United States, we consider the problem of forecasting state and local governments' revenues and expenditures in real time using models that feature mixed-frequency data. We find that single-equation mixed data sampling (MIDAS) regressions that predict low-frequency fiscal outcomes using high-frequency economic data historically outperform both traditional fiscal forecasting models and theoretically motivated multi-equation models.
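A stylized sketch of the single-equation MIDAS idea on simulated data may help fix ideas; the exponential Almon weighting scheme and all parameter values below are illustrative assumptions, not the authors' fiscal-forecasting specification:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_design(x_high, n_lags, m):
    """For each low-frequency date, stack the n_lags most recent
    high-frequency observations (sampling ratio m), newest first."""
    T_low = len(x_high) // m
    rows = [x_high[(t + 1) * m - n_lags:(t + 1) * m][::-1]
            for t in range(n_lags // m, T_low)]
    return np.array(rows)

# Simulated example: a "quarterly" outcome y driven by a weighted sum of
# "monthly" observations of x (m = 3), mimicking the MIDAS setup.
rng = np.random.default_rng(0)
m, n_lags, T_low = 3, 12, 200
x = rng.standard_normal(T_low * m)
w = exp_almon_weights(0.1, -0.05, n_lags)
X = midas_design(x, n_lags, m)
y = 0.5 + 2.0 * X @ w + 0.1 * rng.standard_normal(len(X))

# Given the weighting parameters, the high-frequency lag vector collapses
# to a single regressor and MIDAS reduces to least squares (in practice the
# weight parameters are estimated jointly by nonlinear least squares).
z = X @ w
slope, intercept = np.polyfit(z, y, 1)
```

The parsimonious lag polynomial is what lets a single equation absorb many high-frequency lags without a proliferation of free parameters.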
Simulation-based estimation methods have become more widely used in recent years. We propose a set of tests for structural change in models estimated via the simulated method of moments (see Duffie and Singleton, Econometrica 61 (1993) 929).
Prior studies attribute analysts' forecast superiority over time-series forecasting models to their access to a large set of firm, industry, and macroeconomic information (an information advantage), which they use to update their forecasts on a daily, weekly or monthly basis (a timing advantage).
We propose a class of two-factor dynamic models for duration data and related risk analysis in finance and insurance. Empirical findings suggest that the conditional mean and (under)overdispersion of times elapsed between stock trades feature various patterns of temporal dependence.
Macroeconomic data are typically subject to future revisions and released with delay. Predictive return regressions using such data therefore potentially overstate the information set available to investors in real time.
Time series regression analysis relies on heteroskedasticity- and autocorrelation-consistent (HAC) estimation of the asymptotic variance to conduct proper inference. This paper develops such inferential methods for high-dimensional time series regressions.
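For the scalar case that HAC inference builds on, the classical Newey-West (Bartlett-kernel) long-run variance estimator can be sketched as follows; the AR(1) sanity check and the lag choice are illustrative assumptions, not the paper's high-dimensional procedure:

```python
import numpy as np

def newey_west_lrv(u, n_lags):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance
    of a scalar series: gamma_0 + 2 * sum_j (1 - j/(L+1)) * gamma_j."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    lrv = u @ u / T                               # gamma_0
    for j in range(1, n_lags + 1):
        gamma_j = u[j:] @ u[:-j] / T              # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / (n_lags + 1)) * gamma_j
    return lrv

# Sanity check on an AR(1) with phi = 0.5 and unit innovation variance,
# whose true long-run variance is 1 / (1 - phi)^2 = 4.
rng = np.random.default_rng(1)
phi, T = 0.5, 100_000
e = rng.standard_normal(T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = phi * u[t - 1] + e[t]
lrv = newey_west_lrv(u, n_lags=50)
```

The Bartlett weights guarantee a nonnegative estimate, which a truncated unweighted sum of autocovariances does not.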
We examine several autoregressive-based estimators for the parameters of a moving average process, including the estimators initially proposed by Galbraith and Zinde-Walsh and by Gouriéroux, Monfort and Renault. We also propose over-identified asymptotic-least-squares-based variants of the former, and extensions of the latter based on Gallant and Tauchen's simulated method of moments.
We examine the relationship between MIDAS regressions and the estimation of state space models applied to mixed frequency data. While in some cases the binding function is known, in general it is not, and therefore indirect inference is called for. The approach is appealing when we consider state space models which feature stochastic volatility, or other non-Gaussian and nonlinear settings where maximum likelihood methods require computationally demanding approximate filters.
We provide empirical evidence for the existence, magnitude, and economic cost of stigma associated with banks borrowing from the Federal Reserve's Discount Window (DW) during the 2007-2008 financial crisis.
Time series are demeaned when sample autocorrelation functions are computed. By the same logic it would seem appealing to remove seasonal means from seasonal time series before computing sample autocorrelation functions. Yet, standard practice is only to remove the overall mean and ignore the possibility of seasonal mean shifts in the data.
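A minimal simulated illustration of this point, using hypothetical quarterly data with known seasonal mean shifts, shows how ignoring those shifts distorts the sample autocorrelation function:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation function after removing the overall mean."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c0 = x @ x / len(x)
    return np.array([(x[j:] @ x[:-j] / len(x)) / c0
                     for j in range(1, max_lag + 1)])

# Quarterly white noise (s = 4) with deterministic seasonal mean shifts.
rng = np.random.default_rng(2)
s, n_years = 4, 2_000
seasonal_means = np.array([2.0, -1.0, 0.5, -1.5])
x = np.tile(seasonal_means, n_years) + rng.standard_normal(s * n_years)

# Removing only the overall mean leaves a spurious spike at the seasonal lag...
acf_overall = sample_acf(x, max_lag=s)
# ...while removing each season's own mean recovers the white-noise ACF.
x_mat = x.reshape(n_years, s)
x_deseason = (x_mat - x_mat.mean(axis=0)).ravel()
acf_seasonal = sample_acf(x_deseason, max_lag=s)
```

Under the overall-mean convention the lag-4 autocorrelation is large even though the underlying noise is serially uncorrelated; seasonal demeaning removes the artifact.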
In this paper, we compare several approaches to producing multi-period-ahead forecasts within the GARCH and realized volatility (RV) families: iterated, direct, and scaled short-horizon forecasts. We also consider the newer class of mixed data sampling (MIDAS) methods.
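The contrast between iterated and scaled multi-period forecasts can be sketched with a GARCH(1,1); the parameter values below are hypothetical, and the direct approach (re-estimating a model at each target horizon) is left out of the sketch:

```python
import numpy as np

def garch_multiperiod(omega, alpha, beta, sigma2_next, h):
    """Cumulative h-period variance forecasts from a GARCH(1,1).
    'iterated' applies the model recursion, so each step mean-reverts
    geometrically toward the unconditional variance; 'scaled' simply
    multiplies the one-step forecast by the horizon."""
    persistence = alpha + beta
    sigma2_bar = omega / (1.0 - persistence)      # unconditional variance
    steps = sigma2_bar + persistence ** np.arange(h) * (sigma2_next - sigma2_bar)
    return steps.sum(), h * sigma2_next

# With volatility above its long-run level, scaling ignores mean reversion
# and overstates the cumulative forecast relative to the iterated one.
iterated, scaled = garch_multiperiod(omega=0.05, alpha=0.05, beta=0.90,
                                     sigma2_next=2.0, h=20)
```

When the one-step forecast equals the unconditional variance the two coincide; they diverge exactly when volatility is away from its long-run level, which is when multi-horizon forecasts matter most.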
In this paper we examine the prevalence of data, specification, and parameter uncertainty in the formation of simple rules that mimic monetary policymaking decisions. Our approach is to build real-time data sets and simulate a real-time policy-setting environment in which we assume that policy is captured by movements in the actual federal funds rate, and then to assess what sorts of policy rule models and what sorts of data best explain what the Federal Reserve actually did.
We investigate the spatial dependence between commercial and residential mortgage defaults, introducing a new class of observation-driven frailty factor models to do so. The idea of dynamic parameters embedded in the class of GAS models is used to estimate dynamic models of default risk with potentially multiple factors driven by stratified grouping of large panels of mortgage loan records. The score dynamics in the models are driven by so-called generalized residuals and therefore have a fairly intuitive, ARMA-like interpretation. The proposed models are computationally easy to implement and hence attractive in big data applications, which gives them a considerable advantage over the typical latent factor frailty models proposed in the literature.
Time series regression analysis in econometrics typically relies on a set of mixing conditions to establish consistency and asymptotic normality of parameter estimates, and on HAC-type estimators of the residual long-run variances to conduct proper inference. This article introduces structured machine learning regressions for high-dimensional time series data within this commonly used setting.
To enhance our understanding of emerging markets, we study a data set from the Casablanca stock exchange containing all transaction records over a long span. The exchange was included in the International Finance Corporation (IFC) database in 1996, roughly three years after important market reforms.