A Flexible State Space Model and its Applications
Author: Hang Qian, 2012
Abstract:
The standard state space model (SSM) treats observations as imprecise measures of the Markov latent states.
Our flexible SSM treats the states and observables symmetrically: both are determined simultaneously by historical observations and by states lagged at most one period.
The only distinction between the states and observables is that the former are latent while the latter have data.
Despite the conceptual difference, the two SSMs share the same Kalman filter. However, when the flexible SSM is applied to the ARMA model,
mixed-frequency regression and the dynamic factor model with missing data,
the state vector is not only parsimonious but also intuitive, in that low-dimensional states are constructed simply by stacking all the relevant but unobserved variables of the structural model.
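As a concrete sketch of the filter the two SSMs share, here is a minimal Kalman recursion for a generic linear-Gaussian state space model; the matrix names and the scalar AR(1)-style dimensions used below are illustrative, not the paper's notation.

```python
import numpy as np

def kalman_filter(y, T, Z, Q, H, a0, P0):
    """One pass of the standard Kalman filter.
    State:       a_t = T a_{t-1} + eta_t,  eta_t ~ N(0, Q)
    Observation: y_t = Z a_t + eps_t,      eps_t ~ N(0, H)
    Returns filtered state means and the Gaussian log-likelihood."""
    a, P = a0.copy(), P0.copy()
    loglik = 0.0
    filtered = []
    for yt in y:
        # Prediction step
        a = T @ a
        P = T @ P @ T.T + Q
        # Update step
        v = yt - Z @ a                    # one-step prediction error
        F = Z @ P @ Z.T + H               # prediction error variance
        K = P @ Z.T @ np.linalg.inv(F)    # Kalman gain
        a = a + K @ v
        P = P - K @ Z @ P
        loglik += -0.5 * (np.log(np.linalg.det(2 * np.pi * F))
                          + v @ np.linalg.solve(F, v))
        filtered.append(a.copy())
    return np.array(filtered), loglik
```

The same recursion applies whether the state vector comes from the standard or the flexible formulation; only the system matrices change.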
Author: Hang Qian, 2011
Abstract:
Three new approaches are proposed for handling the mixed-frequency vector autoregression.
The first is an explicit solution for the likelihood and the posterior distribution of the states.
The second is a parsimonious, time-invariant and invertible state space form.
The third is a parallel Gibbs sampler that dispenses with forward filtering and backward sampling.
The three methods are unified in that all of them exploit the fact that the mixed-frequency observations
impose linear constraints on the distribution of the high-frequency latent variables.
In a simulation study the approaches are compared, and the parallel Gibbs sampler outperforms the others.
A financial application to yield curve forecasting is conducted using mixed-frequency macro-finance data.
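The linear-constraint idea can be sketched in a few lines: if the high-frequency latents are jointly Gaussian and a low-frequency observation is a known linear combination of them, the latents' conditional distribution follows from standard Gaussian conditioning. The toy example below (three "monthly" latents whose sum is observed "quarterly") is an illustration under assumed numbers, not the paper's algorithm.

```python
import numpy as np

def condition_on_linear_constraint(mu, Sigma, C, b):
    """Conditional distribution of x ~ N(mu, Sigma) given C x = b.
    Returns the conditional mean and covariance."""
    S = C @ Sigma @ C.T                # variance of the constrained combination
    K = Sigma @ C.T @ np.linalg.inv(S)
    mu_c = mu + K @ (b - C @ mu)
    Sigma_c = Sigma - K @ C @ Sigma
    return mu_c, Sigma_c

# Three monthly latents, a priori N(0, I); the quarterly total is observed to be 3.
mu_c, Sigma_c = condition_on_linear_constraint(
    np.zeros(3), np.eye(3), np.array([[1.0, 1.0, 1.0]]), np.array([3.0]))
```

With a symmetric prior the conditional mean splits the quarterly total evenly across the three months, and the constrained direction has zero conditional variance.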
Sampling Variation, Monotone Instrumental Variables
and the Bootstrap Bias Correction
Author: Hang Qian, 2011
Abstract:
This paper discusses the finite sample bias of analogue bounds under the monotone instrumental variables assumption.
By analyzing the bias function, we first propose a conservative estimator which is biased downwards (upwards) when the analogue estimator is biased upwards (downwards).
Using the bias function, we then show the mechanism of the parametric bootstrap correction procedure,
which can reduce but not eliminate the bias and may even overcorrect it.
This motivates us to propose a simultaneous multi-level bootstrap procedure to further correct the remaining bias.
The procedure is justified under the assumption that the bias function can be well approximated by a polynomial.
Our multi-level bootstrap algorithm is feasible and does not suffer from the curse of dimensionality.
Monte Carlo evidence supports the usefulness of this approach and we apply it to the disability misreporting problem studied by Kreider and Pepper (2007).
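A single-level parametric bootstrap correction of the kind the abstract analyzes can be sketched for the simplest upward-biased analogue estimator, the maximum of two sample means; the Gaussian parametric model and the sample sizes below are assumptions for illustration, not the paper's MIV setting.

```python
import numpy as np

def bootstrap_bias_correct(x1, x2, B=500, rng=None):
    """Parametric bootstrap bias correction for max(mean1, mean2),
    an analogue estimator biased upwards in finite samples."""
    rng = np.random.default_rng(rng)
    est = max(x1.mean(), x2.mean())
    boots = np.empty(B)
    for b in range(B):
        # Resample from the fitted Gaussian model
        r1 = rng.normal(x1.mean(), x1.std(ddof=1), x1.size)
        r2 = rng.normal(x2.mean(), x2.std(ddof=1), x2.size)
        boots[b] = max(r1.mean(), r2.mean())
    bias = boots.mean() - est           # estimated finite-sample bias
    return est - bias                   # reduced, but not eliminated, bias
```

As the abstract notes, one such correction reduces the bias but does not remove it; iterating the idea across levels is what the multi-level procedure addresses.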
Bayesian Inference with Monotone Instrumental Variables
Author: Hang Qian, 2011
Abstract:
Sampling variations complicate the classical inference on the analogue bounds under the monotone instrumental variables assumption,
since point estimators are biased and confidence intervals are difficult to construct.
From the Bayesian perspective, a solution is offered in this paper.
Using a conjugate Dirichlet prior, we derive some analytic results on the posterior distribution of the two bounds of the conditional mean response.
The bounds of the unconditional mean response and the average treatment effect can be obtained with Bayesian simulation techniques.
Our Bayesian inference is applied to an empirical problem which quantifies the effects of taking extra classes on high school students' test scores.
The two MIVs are the education levels of the students' fathers and mothers.
The empirical results suggest that the MIV assumption, in conjunction with the monotone treatment response assumption, yields good identification power.
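The Dirichlet-based simulation step can be illustrated on a stripped-down bounds problem: worst-case bounds on P(y = 1) when y is sometimes missing, with a conjugate Dirichlet posterior over the cell probabilities. This is a simplified stand-in without the MIV refinement, and all counts are hypothetical.

```python
import numpy as np

def posterior_bound_draws(n1, n0, nm, draws=2000, alpha=1.0, rng=None):
    """Posterior draws of the worst-case bounds on P(y = 1) when y is
    sometimes missing.  Cells: y = 1 observed (n1), y = 0 observed (n0),
    y missing (nm), with a conjugate Dirichlet(alpha, ...) prior."""
    rng = np.random.default_rng(rng)
    theta = rng.dirichlet([n1 + alpha, n0 + alpha, nm + alpha], size=draws)
    lower = theta[:, 0]                  # as if every missing y were 0
    upper = theta[:, 0] + theta[:, 2]    # as if every missing y were 1
    return lower, upper
```

Each Dirichlet draw of the cell probabilities yields one draw of the pair of bounds, so the posterior of the bounds is obtained by simulation rather than in closed form.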
Linear Regression Using Both Temporally Aggregated
and Temporally Disaggregated Data: Revisited
Author: Hang Qian, 2010
Abstract:
This paper discusses regression models with aggregated covariate data.
The reparameterized likelihood function is found to be separable when each endogenous variable corresponds to exactly one instrument.
In that case, the full-information maximum likelihood estimator has an analytic form,
and thus outperforms the conventional imputed value two-step estimator in terms of both efficiency and computability.
We also propose a competing Bayesian approach implemented by the Gibbs sampler,
which is advantageous in more flexible settings where the likelihood does not have the separability property.
Author: Hang Qian, 2009
Abstract:
This paper examines a multivariate Tobit system with scale-mixture disturbances.
Three estimation methods, namely maximum simulated likelihood, the expectation-maximization algorithm and Bayesian MCMC simulators, are proposed and compared in generated-data experiments.
The chief finding is that the Bayesian approach outperforms the others in terms of accuracy, speed and stability.
The proposed model is also applied to a real data set to study high-frequency price and trading volume dynamics.
The empirical results confirm the information content of historical prices, lending support to the usefulness of technical analysis.
In addition, the scale-mixture model is extended to the sample selection SUR Tobit and finite Gaussian regime mixtures.
Author: Hang Qian, 2011
Abstract:
Departure from normality poses implementation barriers to the Markowitz mean-variance portfolio selection.
When assets are affected by common and idiosyncratic shocks, the distribution of asset returns may
exhibit Markov switching regimes and have a Gaussian mixture distribution conditional on each regime.
The model is estimated in a Bayesian framework using the Gibbs sampler.
An application to global portfolio diversification is also discussed.
Bayesian Portfolio Selection with Gaussian Mixture Returns
Author: Hang Qian, 2009
Abstract:
Markowitz portfolio selection is challenged by huge implementation barriers.
This paper addresses the parameter uncertainty and deviation from normality in a Bayesian framework.
The non-normal asset returns are modeled as finite Gaussian mixtures.
A Gibbs sampler is employed to obtain draws from the posterior predictive distribution of asset returns.
Optimal portfolio weights are then constructed so as to maximize agents' expected utility.
A simple experiment suggests that our Bayesian portfolio selection procedure performs exceedingly well.
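The final step, turning posterior predictive draws into a utility-maximizing weight, can be sketched as a grid search under CRRA utility. The two-component mixture below merely stands in for the posterior predictive draws; the regime probabilities, means, variances and risk aversion are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def optimal_weight(return_draws, gamma=3.0, grid=None):
    """Grid search for the weight on a risky asset (the remainder held in a
    riskless asset returning zero) that maximizes expected CRRA utility
    over posterior predictive return draws."""
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    best_w, best_u = 0.0, -np.inf
    for w in grid:
        wealth = 1.0 + w * return_draws          # gross portfolio return
        if (wealth <= 0).any():
            continue                             # ruin is infinitely bad under CRRA
        u = (wealth ** (1.0 - gamma) / (1.0 - gamma)).mean()
        if u > best_u:
            best_w, best_u = w, u
    return best_w

# Stand-in for posterior predictive draws: a two-component Gaussian mixture
# with a calm regime and a rare bad regime (illustrative numbers only).
rng = np.random.default_rng(0)
calm = rng.random(10_000) < 0.9
draws = np.where(calm, rng.normal(0.08, 0.15, 10_000),
                       rng.normal(-0.15, 0.10, 10_000))
w_star = optimal_weight(draws)
```

Averaging utility over predictive draws is what lets the mixture's non-normality, and parameter uncertainty, feed directly into the chosen weight.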