Results 11 – 20 of 1,070
Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions
, 2004
"... In this paper, we develop a robust uncertainty principle for finite signals in C^N which states that for nearly all choices T, Ω ⊂ {0, ..., N − 1} such that |T| + |Ω| ≍ (log N)^{−1/2} · N, there is no signal f supported on T whose discrete Fourier transform f̂ is supported on Ω. In fact, we can mak ..."
Abstract

Cited by 181 (17 self)
on finding the correct uncertainty relation or the optimally sparse solution for nearly all subsets but not necessarily all of them, and allows us to considerably sharpen previously known results [9, 10]. In fact, we show that the fraction of sets (T, Ω) for which the above properties do not hold can be upper
Sparse nonnegative solutions of underdetermined linear equations by linear programming
 Proceedings of the National Academy of Sciences
, 2005
"... Consider an underdetermined system of linear equations y = Ax with known d×n matrix A and known y. We seek the sparsest nonnegative solution, i.e. the nonnegative x with fewest nonzeros satisfying y = Ax. In general this problem is NP-hard. However, for many matrices A there is a threshold phenomeno ..."
Abstract

Cited by 194 (7 self)
phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. In classical convex polytope theory, a polytope P is called k-neighborly if every set of k vertices of P spans a face of P. Let aj denote the jth column of A, 1 ≤ j ≤ n, let a0 = 0 and let P denote the convex
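The threshold phenomenon described in this abstract can be tried out directly: on a small synthetic instance (the matrix, support, and values below are made up for illustration), the linear program min 1ᵀx subject to Ax = y, x ≥ 0 typically returns the sparse nonnegative solution. This is a minimal sketch using SciPy's `linprog`, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy underdetermined system y = Ax with a sparse nonnegative solution
# (all sizes and values chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
d, n = 20, 50
A = rng.standard_normal((d, n))
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [2.0, 1.0, 3.0]      # 3 nonzeros out of 50
y = A @ x_true

# LP relaxation: minimize 1'x subject to Ax = y, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None))
x_lp = res.x
print(np.flatnonzero(x_lp > 1e-6))          # support of the LP solution
```

With the solution this sparse relative to d and n, the LP vertex solution coincides with the sparsest nonnegative solution, which is the neighborliness phenomenon the abstract refers to.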
Just relax: Convex programming methods for subset selection and sparse approximation
, 2004
"... Subset selection and sparse approximation problems request a good approximation of an input signal using a linear combination of elementary signals, yet they stipulate that the approximation may only involve a few of the elementary signals. This class of problems arises throughout electrical enginee ..."
Abstract

Cited by 103 (5 self)
Subset selection and sparse approximation problems request a good approximation of an input signal using a linear combination of elementary signals, yet they stipulate that the approximation may only involve a few of the elementary signals. This class of problems arises throughout electrical
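A standard concrete instance of the convex-programming relaxation this abstract describes is ℓ1 minimization (basis pursuit), which becomes a linear program after splitting x into positive and negative parts. The sketch below is illustrative only (instance sizes and values are invented) and is not claimed to be the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance: recover a signed sparse x from y = Ax by
# minimizing ||x||_1, the classic convex relaxation of subset selection.
rng = np.random.default_rng(1)
d, n = 25, 60
A = rng.standard_normal((d, n))
x_true = np.zeros(n)
x_true[[5, 30]] = [1.5, -2.0]
y = A @ x_true

# Split x = u - v with u, v >= 0; then ||x||_1 = 1'(u + v) and the
# problem is a linear program over (u, v).
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.flatnonzero(np.abs(x_hat) > 1e-6))  # support of the l1 minimizer
```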
Neighborly Polytopes and Sparse Solutions of Underdetermined Linear Equations
, 2005
"... Consider a d × n matrix A, with d < n. The problem of solving for x in y = Ax is underdetermined, and has many possible solutions (if there are any). In several fields it is of interest to find the sparsest solution – the one with fewest nonzeros – but in general this involves combinatorial optim ..."
Abstract

Cited by 130 (13 self)
optimization. Let ai denote the ith column of A, 1 ≤ i ≤ n. Associate to A the quotient polytope P formed by taking the convex hull of the 2n points ±ai in R^d. P is centrosymmetric and is called (centrally) k-neighborly if every subset of k + 1 elements (±a_{i_l})_{l=1}^{k+1} are the vertices of a face of P. We
Dimensionality Reduction via Sparse Support Vector Machines
 Journal of Machine Learning Research
, 2003
"... We describe a methodology for performing variable ranking and selection using support vector machines (SVMs). The method constructs a series of sparse linear SVMs to generate linear models that can generalize well, and uses a subset of nonzero weighted variables found by the linear models to prod ..."
Abstract

Cited by 121 (14 self)
We describe a methodology for performing variable ranking and selection using support vector machines (SVMs). The method constructs a series of sparse linear SVMs to generate linear models that can generalize well, and uses a subset of nonzero weighted variables found by the linear models
Small subsets inherit sparse ε-regularity
 J. COMBIN. THEORY SER. B
, 2004
"... In this paper we investigate the behaviour of subgraphs of sparse ε-regular bipartite graphs G = (V1 ∪ V2, E) with vanishing density d that are induced by small subsets of vertices. In particular, we show that, with overwhelming probability, a random set S ⊆ V1 of size s ≫ 1/d contains a subset S′ ..."
Abstract

Cited by 7 (5 self)
In this paper we investigate the behaviour of subgraphs of sparse ε-regular bipartite graphs G = (V1 ∪ V2, E) with vanishing density d that are induced by small subsets of vertices. In particular, we show that, with overwhelming probability, a random set S ⊆ V1 of size s ≫ 1/d contains a subset
Learning Decision Trees using the Fourier Spectrum
, 1991
"... This work gives a polynomial time algorithm for learning decision trees with respect to the uniform distribution. (This algorithm uses membership queries.) The decision tree model that is considered is an extension of the traditional boolean decision tree model that allows linear operations in each ..."
Abstract

Cited by 207 (10 self)
node (i.e., summation of a subset of the input variables over GF(2)). This paper shows how to learn in polynomial time any function that can be approximated (in the L_2 norm) by a polynomially sparse function (i.e., a function with only polynomially many nonzero Fourier coefficients). The authors
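The sparse Fourier spectrum this abstract refers to can be seen by brute force on a tiny example. Below, the Fourier (Walsh) coefficients of a small Boolean function are enumerated directly; the function chosen (a parity of two inputs, computable by a small decision tree with GF(2) sums at its nodes) is an invented toy, and this is not the paper's learning algorithm, which avoids exhaustive enumeration.

```python
import itertools

# Brute-force Fourier coefficients of f: {0,1}^n -> {-1,+1}.
# f is the parity of x0 and x1, so its spectrum has a single nonzero.
n = 4
def f(x):
    return -1 if (x[0] ^ x[1]) else 1

coeffs = {}
subsets = itertools.chain.from_iterable(
    itertools.combinations(range(n), r) for r in range(n + 1))
for S in subsets:
    # \hat f(S) = E_x[ f(x) * chi_S(x) ],  chi_S(x) = (-1)^{sum_{i in S} x_i}
    total = sum(f(x) * (-1) ** sum(x[i] for i in S)
                for x in itertools.product([0, 1], repeat=n))
    coeffs[S] = total / 2 ** n

support = [S for S, c in coeffs.items() if abs(c) > 1e-12]
print(support)   # -> [(0, 1)]: only the parity set carries weight
```

A polynomially sparse function is one whose `support` above has polynomially many sets, which is what makes the spectrum learnable in polynomial time.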
Flat 2D Tori with Sparse Spectra
"... Abstract. We identify a class of 2D flat tori T_ω, quotients of the plane by certain lattices, on which the Laplace operator has spectrum contained in the set of integers Z, as a sparse subset, i.e., a subset of density 0. ..."
Abstract
Column Subset Selection via Sparse Approximation of SVD
, 2011
"... Given a real matrix A ∈ R^{m×n} of rank r, and an integer k < r, the sum of the outer products of the top k singular vectors scaled by the corresponding singular values provides the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approxim ..."
Abstract

Cited by 6 (0 self)
Given a real matrix A ∈ R^{m×n} of rank r, and an integer k < r, the sum of the outer products of the top k singular vectors scaled by the corresponding singular values provides the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c = Õ((k log k / ε²) η²(A)) such that ‖A − Π_C A‖
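The flavor of greedy column selection can be sketched in a few lines: repeatedly pick the column whose residual, after projecting out the span of the columns chosen so far, is largest. This generic rule is an assumption for illustration only, not necessarily the paper's exact selection criterion or its Õ(·) guarantee; the test matrix is invented.

```python
import numpy as np

def greedy_columns(A, c):
    """Pick c columns greedily by largest residual norm.
    (Generic sketch -- not necessarily the paper's exact rule.)"""
    R = A.copy()                      # residuals of all columns
    chosen = []
    for _ in range(c):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        v = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(v, v @ R)    # project residuals off the new direction
    return chosen

# Toy low-rank matrix: 30x60 of rank 8, so 8 well-chosen columns
# should capture it (generically) up to numerical error.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 8)) @ rng.standard_normal((8, 60))
cols = greedy_columns(A, 8)
C = A[:, cols]
err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A)   # ||A - Pi_C A||_F
print(err)
```

Here `C @ np.linalg.pinv(C) @ A` is the projection Π_C A of A onto the span of the chosen columns, matching the error quantity ‖A − Π_C A‖ in the abstract.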