Results 11–20 of 40
Supply, demand and monetary policy shocks in a multicountry New Keynesian model
, 2010 (mimeo)
Abstract

Cited by 11 (6 self)
European Central Bank
, 2008
Abstract

Cited by 10 (0 self)
New Keynesian Phillips Curves (NKPC) have been extensively used in the analysis of monetary policy, yet there are a number of issues of concern about how they are estimated and then related to the underlying macroeconomic theory. The first is whether such equations are identified. Checking identification requires specifying the process for the forcing variables (typically the output gap) and solving the model for inflation in terms of the observables. In practice, the equation is estimated by GMM, relying on statistical criteria to choose instruments. This may result in failure of identification or weak instruments. Secondly, the NKPC is usually derived as part of a DSGE model, solved by log-linearising around a steady state, and the variables are then measured in terms of deviations from the steady state. In practice the steady states, e.g. for output, are usually estimated by some statistical procedure such as the Hodrick-Prescott (HP) filter, which might not be appropriate. Thirdly, there are arguments that other variables, e.g. interest rates, foreign inflation and foreign output gaps, should enter the Phillips curve. This paper examines these three issues and argues that all three benefit from a global perspective. The global perspective provides additional instruments to alleviate the weak instrument problem, yields a theoretically consistent measure of the steady state and provides a natural route for foreign inflation or output gaps to enter the NKPC. Keywords: Global VAR (GVAR), identification, New Keynesian Phillips Curve, Trend-Cycle decomposition.
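The identification problem the abstract raises can be made concrete with a just-identified instrumental-variables estimator. The sketch below is purely illustrative (the simulated data and the function name `iv_estimate` are invented here, not taken from the paper): when the instrument z is valid, beta_IV = cov(z, y) / cov(z, x) recovers the true slope even though the regressor x is correlated with the error.

```python
import random

def iv_estimate(y, x, z):
    """Just-identified IV estimator: beta_IV = cov(z, y) / cov(z, x)."""
    n = len(y)
    my, mx, mz = sum(y) / n, sum(x) / n, sum(z) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx

random.seed(0)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument, independent of the error
u = [random.gauss(0, 1) for _ in range(n)]   # structural error, correlated with x
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]  # endogenous regressor
y = [2.0 * xi + ui for xi, ui in zip(x, u)]  # true slope is 2

beta_iv = iv_estimate(y, x, z)               # close to 2; OLS would be biased up
```

A weak instrument corresponds to cov(z, x) near zero, which makes the ratio unstable; the paper's point is that a global perspective supplies additional, stronger instruments.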
Discovering Sparse Covariance Structures with the Isomap
 Journal of Computational and Graphical Statistics
Abstract

Cited by 7 (1 self)
Regularization of covariance matrices in high dimensions is usually either based on a known ordering of variables or ignores the ordering entirely. This paper proposes a method for discovering meaningful orderings of variables based on their correlations using the Isomap, a nonlinear dimension reduction technique designed for manifold embeddings. These orderings are then used to construct a sparse covariance estimator, which is block-diagonal and/or banded. Finding an ordering to which banding can be applied is desirable because banded estimators have been shown to be consistent in high dimensions. We show that in situations where the variables do have such a structure, the Isomap does very well at discovering it, and the resulting regularized estimator performs better for covariance estimation than other regularization methods that ignore variable order, such as thresholding. We also propose a bootstrap approach to constructing the neighborhood graph used by the Isomap, and show it leads to better estimation. We illustrate our method on data on protein consumption, where the variables (food types) have a structure but it cannot be easily described a priori, and on a gene expression data set.
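The banding operator the abstract relies on is simple to state: once a good variable ordering is found, zero every entry more than k positions off the diagonal. A minimal sketch (the function `band` and the example matrix are invented for illustration, not from the paper):

```python
def band(cov, k):
    """Band a covariance matrix: zero all entries more than k off the diagonal."""
    p = len(cov)
    return [[cov[i][j] if abs(i - j) <= k else 0.0 for j in range(p)]
            for i in range(p)]

# Correlations decay with distance under the (recovered) ordering,
# so banding discards only the small far-off-diagonal entries.
S = [[1.0, 0.6, 0.3, 0.1],
     [0.6, 1.0, 0.6, 0.3],
     [0.3, 0.6, 1.0, 0.6],
     [0.1, 0.3, 0.6, 1.0]]
B = band(S, 1)   # keep only the main diagonal and first off-diagonals
```

The paper's contribution is the step before this one: using the Isomap to find an ordering under which such decay actually holds.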
Financial Risk Measurement for Financial Risk Management
, 2011
Abstract

Cited by 4 (2 self)
Current practice largely follows restrictive approaches to market risk measurement, such as historical simulation or RiskMetrics. In contrast, we propose flexible methods that exploit recent developments in financial econometrics and are likely to produce more accurate risk assessments, treating both portfolio-level and asset-level analysis. Asset-level analysis is particularly challenging because the demands of real-world risk management in financial institutions – in particular, real-time risk tracking in very high-dimensional situations – impose strict limits on model complexity. Hence we stress powerful yet parsimonious models that are easily estimated. In addition, we emphasize the need for deeper understanding of the links between market risk and macroeconomic fundamentals, focusing primarily on links among equity return volatilities, real growth, and real growth volatilities. Throughout, we strive not only to deepen our scientific understanding of market risk, but also to cross-fertilize the academic and practitioner communities, promoting improved market risk measurement.
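For context, the RiskMetrics benchmark the abstract contrasts itself with is an exponentially weighted moving-average variance. A minimal sketch (the function name, the decay value 0.94, and the toy returns are illustrative assumptions, not from the paper):

```python
def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA variance recursion:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2,
    initialized at the first squared return."""
    sigma2 = returns[0] ** 2
    path = [sigma2]
    for r in returns[:-1]:
        sigma2 = lam * sigma2 + (1 - lam) * r * r
        path.append(sigma2)
    return path

rets = [0.01, -0.02, 0.015, -0.03, 0.005]   # toy daily returns
var_path = ewma_variance(rets)              # one variance forecast per period
```

Its single fixed decay parameter is exactly the kind of restriction the paper's more flexible econometric models are meant to relax.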
A Sparse Factor-Analytic Probit Model for Congressional Voting Patterns
, 2010
Abstract

Cited by 4 (0 self)
This paper adapts sparse factor models for exploring covariation in multivariate binary data, with an application to measuring latent factors in U.S. Congressional roll-call voting patterns. We focus on the advantages of using formal probability models for inference in this context, drawing parallels with the seminal findings of Poole and Rosenthal (1991). Our methodological innovation is to introduce a sparsity prior on a latent covariance matrix that describes common factors in binary and ordinal outcomes. We apply the method to analyze sixty years of roll-call votes from the United States Senate, focusing primarily on the interpretation of posterior summaries that arise from the model. We also explore two advantages of our approach over traditional factor analysis. First, patterns of sparsity in the factor-loadings matrix often have natural subject-matter interpretations. For the roll-call vote data, the sparsity prior enables one to conduct a formal hypothesis test about whether a given vote can be explained exclusively by partisanship. Moreover, the factor scores provide a novel way of ranking Senators by the partisanship of their voting patterns. Second, by introducing sparsity into existing factor-analytic probit models, we effect a favorable bias–variance tradeoff in estimating the latent covariance matrix. Our model can thus be used in situations where the number of variables is very large relative to the number of observations. Key words: covariance estimation; factor models; multivariate probit models; voting patterns
CONSISTENCY OF RESTRICTED MAXIMUM LIKELIHOOD ESTIMATORS OF PRINCIPAL COMPONENTS
Abstract

Cited by 3 (3 self)
In this paper we consider two closely related problems: estimation of eigenvalues and eigenfunctions of the covariance kernel of functional data based on (possibly) irregular measurements, and the problem of estimating the eigenvalues and eigenvectors of the covariance matrix for high-dimensional Gaussian vectors. In [A geometric approach to maximum likelihood estimation of covariance kernel from sparse irregular longitudinal data (2007)], a restricted maximum likelihood (REML) approach has been developed to deal with the first problem. In this paper, we establish consistency and derive the rate of convergence of the REML estimator for the functional data case, under appropriate smoothness conditions. Moreover, we prove that when the number of measurements per sample curve is bounded, under squared-error loss, the rate of convergence of the REML estimators of eigenfunctions is near-optimal. In the case of Gaussian vectors, asymptotic consistency and an efficient score representation of the estimators are obtained under the assumption that the effective dimension grows at a rate slower than the sample size. These results are derived through an explicit utilization of the intrinsic geometry of the parameter space, which is non-Euclidean. Moreover, the results derived in this paper suggest an asymptotic equivalence between inference on functional data with dense measurements and that on high-dimensional Gaussian vectors.
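The object being estimated here, a leading eigenvalue/eigenvector pair of a covariance matrix, can be computed for a small example by power iteration. This is a generic numerical sketch, not the paper's REML procedure (which works on the likelihood over a non-Euclidean parameter space):

```python
def leading_eigen(cov, iters=200):
    """Power iteration for the leading eigenpair of a symmetric matrix."""
    p = len(cov)
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]          # renormalize each step
    # Rayleigh quotient v' C v gives the eigenvalue (v is unit length)
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(p)) for i in range(p))
    return lam, v

C = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1, eigenvector (1,1)/sqrt(2)
lam, v = leading_eigen(C)
```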
Covariance Estimation: The GLM and Regularization Perspectives
Abstract

Cited by 2 (0 self)
Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in modeling covariance matrices from the perspectives of generalized linear models (GLM) or parsimony and use of covariates in low dimensions, regularization (shrinkage, sparsity) for high-dimensional data, and the role of various matrix factorizations. A viable and emerging regression-based setup which is suitable for both the GLM and the regularization approaches is to link a covariance matrix, its inverse or their factors to certain regression models and then solve the relevant (penalized) least squares problems. We point out several instances of this regression-based setup in the literature. A notable case is in the Gaussian graphical models where linear regressions with LASSO penalty are used to estimate the neighborhood of one node at a time (Meinshausen and Bühlmann, 2006).
Covariance Estimation
, 801
Abstract

Cited by 1 (0 self)
Abstract: The paper proposes a method for constructing a sparse estimator for the inverse covariance (concentration) matrix in high-dimensional settings. The estimator uses a penalized normal likelihood approach and forces sparsity by using a lasso-type penalty. We establish a rate of convergence in the Frobenius norm as both the data dimension p and the sample size n are allowed to grow, and show that the rate depends explicitly on how sparse the true concentration matrix is. We also show that a correlation-based version of the method exhibits better rates in the operator norm. The estimator is required to be positive definite, but we avoid having to use semidefinite programming by reparameterizing the objective function.
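The sparsity mechanism behind lasso-type penalties is the soft-thresholding operator, which shrinks small entries exactly to zero. The sketch below applies it entrywise to off-diagonal elements as a simple illustration of how sparsity arises; it is not the paper's penalized-likelihood estimator, and the function names are invented:

```python
def soft_threshold(x, t):
    """Soft-thresholding: shrink x toward zero; |x| <= t maps exactly to 0."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def threshold_off_diagonal(mat, t):
    """Soft-threshold off-diagonal entries, leaving the diagonal intact."""
    p = len(mat)
    return [[mat[i][j] if i == j else soft_threshold(mat[i][j], t)
             for j in range(p)] for i in range(p)]

M = [[1.0, 0.05], [0.05, 1.0]]
T = threshold_off_diagonal(M, 0.1)   # the weak 0.05 entries become exactly 0
```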
Predictor-dependent shrinkage for linear regression via partial factor modeling
Abstract

Cited by 1 (0 self)
In prediction problems with more predictors than observations, it can sometimes be helpful to use a joint probability model, π(Y, X), rather than a purely conditional model, π(Y | X), where Y is a scalar response variable and X is a vector of predictors. This approach is motivated by the fact that in many situations the marginal predictor distribution π(X) can provide useful information about the parameter values governing the conditional regression. However, under very mild misspecification, this marginal distribution can also lead conditional inferences astray. Here, we explore these ideas in the context of linear factor models, to understand how they play out in a familiar setting. The resulting Bayesian model performs well across a wide range of covariance structures, on real and simulated data.
Optimality and Diversifiability of Mean-Variance and Arbitrage Pricing Portfolios
, 2009
Abstract
This paper investigates the limit properties of mean-variance (mv) and arbitrage pricing (ap) trading strategies using a general dynamic factor model, as the number of assets diverges to infinity. It extends the results obtained in the literature for the exact pricing case to two other cases: asymptotic no-arbitrage and the unconstrained pricing scenario. The paper characterizes the asymptotic behaviour of the portfolio weights and establishes that in the non-exact pricing cases the ap and mv portfolio weights are asymptotically equivalent and, moreover, functionally independent of the factors' conditional moments. By implication, the paper sheds light on a number of issues of interest such as the prevalence of short-selling, the number of dominant factors and the granularity property of the portfolio weights.
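For reference, the mean-variance weights the abstract studies are proportional to Σ⁻¹μ. A two-asset sketch, with an explicit inverse and weights rescaled to sum to one (the function and numbers are an invented illustration, far from the paper's large-n asymptotics):

```python
def mv_weights(mu, cov):
    """Mean-variance weights w proportional to Sigma^{-1} mu (two assets),
    rescaled so the weights sum to one."""
    a, b = cov[0][0], cov[0][1]
    c, d = cov[1][0], cov[1][1]
    det = a * d - b * c                       # 2x2 matrix inverse by hand
    w0 = (d * mu[0] - b * mu[1]) / det
    w1 = (-c * mu[0] + a * mu[1]) / det
    s = w0 + w1
    return [w0 / s, w1 / s]

# Equal variances, no correlation: weights are proportional to expected returns.
w = mv_weights([0.10, 0.05], [[0.04, 0.0], [0.0, 0.04]])   # -> [2/3, 1/3]
```

With many assets the same formula applies with a full matrix inverse, and the paper's question is how these weights behave as the cross-section grows.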