Results 1–10 of 226
STRUCTURAL VECTOR AUTOREGRESSIONS: THEORY OF IDENTIFICATION AND ALGORITHMS FOR INFERENCE, 2007. Cited by 77 (8 self).
"SVARs are widely used for policy analysis and to provide stylized facts for dynamic general equilibrium models. Yet there have been no workable rank conditions to ascertain whether an SVAR is globally identified and no efficient algorithms for small-sample statistical inference when ident…"
Rank-Extreme Association of Gaussian Vectors and Low-Rank Detection, 2014.
"It is important to detect a low-dimensional linear dependency in high-dimensional data. We provide a perspective on this problem, called the rank-extreme (ReX) association, through studies of the maximum norm of a vector of p standard Gaussian variables that has a covariance matrix of rank d ≤ p. …"
Exploiting Feature Covariance in High-Dimensional Online Learning. Cited by 4 (3 self).
"Some online algorithms for linear classification model the uncertainty in their weights over the course of learning. Modeling the full covariance structure of the weights can provide a significant advantage for classification. However, for high-dimensional, large-scale data, even though there may be many second-order feature interactions, it is computationally infeasible to maintain this covariance structure. To extend second-order methods to high-dimensional data, we develop low-rank approximations of the covariance structure. We evaluate our approach on both synthetic and real-world data sets. …"
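The low-rank covariance idea this abstract describes can be illustrated with a minimal sketch. The code below is a generic truncated eigendecomposition, my own assumption of how such an approximation looks, not the paper's algorithm: a full d × d covariance is replaced by its k leading eigenpairs, so storage drops from O(d²) to O(dk).

```python
import numpy as np

def low_rank_covariance(X, k):
    """Rank-k approximation of the sample covariance of X (n samples x d features)."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (X.shape[0] - 1)       # full d x d sample covariance
    vals, vecs = np.linalg.eigh(cov)         # symmetric eigendecomposition (ascending)
    U = vecs[:, -k:]                         # k leading eigenvectors
    s = vals[-k:]                            # k leading eigenvalues
    return (U * s) @ U.T                     # rank-k matrix, stored via O(d*k) factors

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
C5 = low_rank_covariance(X, k=5)
```

The factors `U` and `s` alone suffice for second-order updates, so the d × d matrix never needs to be materialized in practice.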
Model selection in partially nonstationary vector autoregressive processes with reduced rank structure, Journal of Econometrics, 1999. Cited by 34 (9 self).
"… are preliminary materials circulated to stimulate discussion and critical comment. Requests for single copies of a Paper will be filled by the Cowles Foundation within the limits of the supply. References in publications to Discussion Papers (other than mere acknowledgment by a writer that he has access to such unpublished material) should be cleared with the author to …"
The Convex Geometry of Linear Inverse Problems, 2010. Cited by 189 (20 self).
"In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained … The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others. …"
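The "sum of a few atoms" viewpoint has a direct numerical reading. As a hedged sketch (standard SVD facts, not code from the paper): for the low-rank case, the atomic set is the unit-norm rank-one matrices, and the SVD exhibits a rank-3 matrix as a weighted sum of exactly three such atoms.

```python
import numpy as np

# Build a rank-3 matrix, then recover its atomic decomposition via the SVD.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))

U, s, Vt = np.linalg.svd(M)
atoms = [np.outer(U[:, i], Vt[i]) for i in range(3)]   # unit-norm rank-one atoms u v^T
recon = sum(w * a for w, a in zip(s[:3], atoms))       # weighted atomic combination
```

The sum of the weights `s[:3]` is the nuclear norm of `M`, which is exactly the atomic norm this paper associates with low-rank recovery.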
High Dimensional Structured Superposition Models.
"High dimensional superposition models characterize observations using parameters which can be written as a sum of multiple component parameters, each with its own structure, e.g., sum of low-rank and sparse matrices, sum of sparse and rotated sparse vectors, etc. In this paper, we consider …"
Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems, arXiv:1311.6239, 2013. Cited by 3 (1 self).
"The primary challenge in linear inverse problems is to design stable and robust “decoders” to reconstruct high-dimensional vectors from a low-dimensional observation through a linear operator. Sparsity, low-rank, and related assumptions are typically exploited to design decoders whose performance is …"
Low-Rank Tensors for Scoring Dependency Structures. Cited by 19 (5 self).
"Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, high-dimensional feature representations. A small subset of such features is often selected manually. This is problematic when features lack clear linguistic meaning, as in embeddings, or when the information is blended across features. In this paper, we use tensors to map high-dimensional feature vectors into low-dimensional representations. We explicitly maintain the parameters as a low-rank tensor to obtain low-dimensional representations of words in their syntactic roles. …"
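One way to see why a low-rank parameterization shrinks the model: a full bilinear head–modifier score needs a d × d table, while a rank-r factorization needs only two d × r matrices. The sketch below is hypothetical (the names `U`, `V` and the indicator-feature layout are my assumptions, not the paper's model) and shows the rank-r bilinear form computed without ever forming the d × d table.

```python
import numpy as np

d, r = 1000, 16                          # feature dimension, rank
rng = np.random.default_rng(2)
U = rng.standard_normal((d, r)) * 0.01   # hypothetical head-side factor
V = rng.standard_normal((d, r)) * 0.01   # hypothetical modifier-side factor

def arc_score(h, m):
    """Score a head->modifier arc via the rank-r bilinear form h^T (U V^T) m."""
    return float((U.T @ h) @ (V.T @ m))  # O(d*r) work; the d x d table is never built

h = np.zeros(d); h[[3, 17]] = 1.0        # sparse indicator features for the head word
m = np.zeros(d); m[42] = 1.0             # sparse indicator features for the modifier
score = arc_score(h, m)
```

For sparse indicator features the projections `U.T @ h` and `V.T @ m` reduce to summing a few rows of the factors, which is what makes this practical at parsing speed.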
O(d log N)-Quantics … in High-Dimensional Numerical Modeling, 2010.
"In the present paper, we discuss the novel concept of super-compressed tensor-structured data formats in high-dimensional applications. We describe the multifolding or quantics-based tensor approximation method of O(d log N) complexity (logarithmic scaling in the volume size), applied to the disc…"
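The quantics folding itself is easy to demonstrate. The sketch below is my own toy example, not the paper's scheme: 2^20 samples of a smooth function are folded into a tensor with d = 20 binary modes, and a matrix unfolding of an exponential turns out to be numerically rank one (exponentials separate across index splits), which is the kind of structure that enables O(d log N)-type compression.

```python
import numpy as np

d = 20
N = 2 ** d                                    # 1,048,576 grid points
x = np.exp(-np.linspace(0.0, 1.0, N))         # smooth function sampled on a fine grid

T = x.reshape((2,) * d)                       # quantics folding: d modes of size 2

# exp(-(a + b)) = exp(-a) * exp(-b), so this unfolding is exactly rank one.
A = x.reshape(2 ** 10, 2 ** 10)               # one matrix unfolding of T
s = np.linalg.svd(A, compute_uv=False)
numerical_rank = int(np.sum(s > s[0] * 1e-12))
```

A million-entry vector thus compresses to a handful of parameters once the folded tensor is stored in a low-rank format over its log2(N) tiny modes.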
Modeling Appearances with Low-Rank SVM. Cited by 15 (1 self).
"Several authors have noticed that the common representation of images as vectors is suboptimal. The process of vectorization eliminates spatial relations between some of the nearby image measurements and produces a vector of a dimension which is the product of the measurements' dimensions. It seems … Rather than separating the representation from the discriminative learning stage, we achieve both by the same method. Our framework, “Low-Rank separators”, studies the use of separating hyperplanes that are constrained to have the structure of low-rank matrices. We first prove that the low-rank constraint provides …"