Results 1 – 10 of 517,057
Hanson-Wright inequality and sub-gaussian concentration
- ELECTRONIC COMMUNICATIONS IN PROBABILITY
"... In this expository note, we give a modern proof of the Hanson-Wright inequality for quadratic forms in sub-gaussian random variables. We deduce a useful concentration inequality for sub-gaussian random vectors. Two examples are given to illustrate these results: a concentration of distances between rand ..."
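The inequality in this entry bounds deviations of a quadratic form x^T A x from its mean. As a minimal numerical illustration (not taken from the cited paper: the Rademacher entries, the matrix scaling, and all dimensions below are assumptions chosen for the demo), the sample mean of x^T A x concentrates near tr(A):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 2000
A = rng.standard_normal((n, n)) / n   # fixed matrix with ||A||_F approximately 1

# x has i.i.d. Rademacher (hence sub-gaussian) entries, so E[x^T A x] = tr(A).
vals = np.array([(x := rng.choice([-1.0, 1.0], size=n)) @ A @ x
                 for _ in range(trials)])

# The empirical mean settles near tr(A); Hanson-Wright controls the tails.
print(np.mean(vals), np.trace(A))
```

Per-sample fluctuations here are of order ||A||_F, so with 2000 draws the empirical mean sits well within the Hanson-Wright deviation scale.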
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
- Comm. Pure Appl. Math, 2004
"... We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Cited by 560 (10 self)
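The program behind this entry is min ||α||_1 subject to Φα = y. A hedged sketch of that program as a linear program (the split α = u - v is a standard reformulation; the dimensions, sparsity level, seed, and the use of scipy.optimize.linprog are illustrative assumptions, not the authors' setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 30, 80, 3                       # underdetermined: n < m; k-sparse target
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # columns normalized to unit ell_2 norm

alpha = np.zeros(m)
alpha[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha

# min ||a||_1  s.t.  Phi a = y,  via  a = u - v  with  u, v >= 0:
# minimize 1^T (u + v) subject to [Phi, -Phi] [u; v] = y
res = linprog(c=np.ones(2 * m),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None))
a_hat = res.x[:m] - res.x[m:]
print(np.max(np.abs(a_hat - alpha)))      # small when the LP recovers alpha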
The Hanson-Wright inequality and sub-gaussian concentration. arXiv:1306.2872
"... In this expository note, we give a modern proof of the Hanson-Wright inequality for quadratic forms in sub-gaussian random variables. We deduce a useful concentration inequality for sub-gaussian random vectors. Two examples are given to illustrate these results: a concentration of distances be ..."
Cited by 33 (4 self)
Inequality and Growth in a Panel of Countries
- JOURNAL OF ECONOMIC GROWTH, 1999
"... Evidence from a broad panel of countries shows little overall relation between income inequality and rates of growth and investment. However, for growth, higher inequality tends to retard growth in poor countries and encourage growth in richer places. The Kuznets curve—whereby inequality first incre ..."
Cited by 487 (4 self)
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
"... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects — discrete digital signals, images, etc.; how many linear m ..."
Cited by 1513 (20 self)
Inducing Features of Random Fields
- IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Cited by 664 (14 self)
A Simple Estimator of Cointegrating Vectors in Higher Order Cointegrated Systems
- ECONOMETRICA, 1993
"... Efficient estimators of cointegrating vectors are presented for systems involving deterministic components and variables of differing, higher orders of integration. The estimators are computed using GLS or OLS, and Wald statistics constructed from these estimators have asymptotic χ2 distributions. T ..."
Cited by 507 (3 self)
Sparse Bayesian Learning and the Relevance Vector Machine
2001
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer ..."
Cited by 958 (5 self)
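The RVM's sparsity comes from giving each weight its own precision hyperparameter and re-estimating those precisions by evidence maximization; precisions of irrelevant weights diverge and the weights are pruned. A minimal sketch of that mechanism for linear regression (MacKay-style fixed-point updates; the data, dimensions, clipping, and pruning threshold below are assumptions for illustration, not Tipping's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 20
Phi = rng.standard_normal((N, M))            # design matrix, model linear in parameters
w_true = np.zeros(M)
w_true[[2, 7]] = [1.5, -2.0]                 # sparse ground-truth weights
t = Phi @ w_true + 0.1 * rng.standard_normal(N)

alpha = np.ones(M)                           # per-weight prior precisions
beta = 1.0                                   # noise precision
for _ in range(100):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior covariance
    mu = beta * Sigma @ Phi.T @ t                               # posterior mean
    gamma = 1.0 - alpha * np.diag(Sigma)                        # well-determinedness of each weight
    alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e8)          # re-estimate precisions (clipped)
    beta = (N - gamma.sum()) / np.sum((t - Phi @ mu) ** 2)      # re-estimate noise precision

relevant = np.where(alpha < 1e3)[0]          # weights whose precision stayed finite survive
print(relevant, mu[relevant])
```

Weights whose evidence does not support them see α grow without bound and are effectively switched off, which is how the RVM ends up with "dramatically fewer" basis functions than the corresponding SVM.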