Results 1 – 10 of 1,070
An NP Set With No coDP Sparse Subset
"... An oracle is constructed relative to which there exists an NP set that has no infinite sparse subset in coDP. ..."
Cited by 1 (0 self)
Dissimilarity-based Sparse Subset Selection
"... Finding an informative subset of a large number of data points or models is at the center of many problems in machine learning, computer vision, bio/health informatics and image/signal processing. Given pairwise dissimilarities between the elements of a ‘source set’ and a ‘target set’, we ..."
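The snippet above poses subset selection from pairwise dissimilarities between a source set and a target set. The paper's own formulation is convex; as a rough, hedged stand-in for the same selection task, the greedy facility-location-style sketch below picks `k` source rows that minimize the total dissimilarity of each target to its nearest chosen representative (the function name and greedy strategy are illustrative, not the authors' method):

```python
import numpy as np

def greedy_representatives(D, k):
    """Greedy stand-in for dissimilarity-based subset selection.
    D[i, j] is the dissimilarity between source element i and target
    element j; pick k source rows minimizing the total dissimilarity
    of each target to its nearest chosen representative."""
    chosen = []
    # best[j]: dissimilarity of target j to its nearest chosen source so far
    best = np.full(D.shape[1], np.inf)
    for _ in range(k):
        # total cost if candidate row i were added next
        gains = np.minimum(D, best).sum(axis=1)
        gains[chosen] = np.inf          # never re-pick a chosen row
        i = int(np.argmin(gains))
        chosen.append(i)
        best = np.minimum(best, D[i])
    return chosen, best.sum()
```

Greedy selection of this kind carries the usual (1 − 1/e)-style guarantees for submodular objectives, but it is only a baseline next to the convex program the abstract refers to.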
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ¹ minimization
Proc. Natl. Acad. Sci. USA 100, 2197–2202, 2002
"... Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases, and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex ..."
Cited by 633 (38 self)
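The convex program the snippet alludes to is basis pursuit: min ‖γ‖₁ subject to Dγ = S. A minimal sketch, assuming SciPy is available, casts it as a linear program via the standard split γ = u − v with u, v ≥ 0 (so ‖γ‖₁ = Σ(u + v) at the optimum):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, s):
    """min ||g||_1  subject to  D g = s, as a linear program
    with the split g = u - v, u >= 0, v >= 0."""
    n = D.shape[1]
    res = linprog(
        c=np.ones(2 * n),            # minimize sum(u) + sum(v)
        A_eq=np.hstack([D, -D]),     # D u - D v = s
        b_eq=s,
        bounds=(0, None),
        method="highs",
    )
    return res.x[:n] - res.x[n:]
```

Under the incoherence and sparsity conditions described in the abstract, the LP solution coincides with the sparsest representation; the LP itself, however, is well defined for any dictionary.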
Quasi-polynomial time approximation scheme for sparse subsets of polygons
In Proc. 30th Annu. Sympos. Comput. Geom. (SoCG), 2014
"... We describe how to approximate, in quasi-polynomial time, the largest independent set of polygons, in a given set of polygons. Our algorithm works by extending the result of Adamaszek and Wiese [AW13, AW14] to polygons of arbitrary complexity. Surprisingly, the algorithm also works for computing the largest subset of the given set of polygons that has some sparsity condition. For example, we show that one can approximate the largest subset of polygons, such that the intersection graph of the subset does not contain a cycle of length 4 (i.e., K2,2). ..."
Cited by 6 (1 self)
ON DISTRIBUTION OF THREE-TERM ARITHMETIC PROGRESSIONS IN SPARSE SUBSETS OF F_p^n
, 905
"... We prove a version of Szemerédi’s regularity lemma for subsets of a typical random set in F_p^n. As an application, a result on the distribution of three-term arithmetic progressions in sparse sets is discussed. ..."
Cited by 1 (0 self)
A Singular Value Thresholding Algorithm for Matrix Completion
, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy ..."
Cited by 555 (22 self)
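The first-order iteration the title refers to alternates a soft-thresholding of singular values with a gradient step on the observed entries. A minimal numpy sketch of that singular-value-thresholding loop is below; the function name and the parameter choices `tau` (threshold) and `delta` (step size) are illustrative, not the paper's recommended settings:

```python
import numpy as np

def svt_complete(M, mask, tau, delta, iters=100):
    """Sketch of singular value thresholding (SVT) for matrix
    completion: soft-threshold the singular values of the dual
    iterate Y, then step toward the observed entries (mask == 1)."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrinkage operator
        Y = Y + delta * mask * (M - X)            # fit observed entries
    return X
```

Each iteration touches only one SVD and the observed entries, which is what makes the method viable at the "million unknown entries" scale the snippet mentions, unlike interior-point solvers.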
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ Rp is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ... , where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability ..."
Cited by 879 (14 self)
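The estimator elided from the snippet is min ‖x̃‖₁ subject to ‖Aᵀr‖_∞ ≤ t, with r = y − Ax̃. Like basis pursuit, it is a linear program; a hedged sketch assuming SciPy (the split x = u − v and the `highs` solver are implementation choices, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, t):
    """min ||x||_1  subject to  ||A^T (y - A x)||_inf <= t,
    as an LP with the split x = u - v, u >= 0, v >= 0."""
    G, b = A.T @ A, A.T @ y
    p = A.shape[1]
    # -t <= b - G(u - v) <= t  becomes two one-sided constraint blocks
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([b + t, t - b])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]
```

The constraint bounds the correlation of every column of A with the residual, which is why the abstract's uniform uncertainty principle (on unit-normed columns) is the natural condition for recovery.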
Lasso-type recovery of sparse representations for high-dimensional data
Annals of Statistics, 2009
"... The Lasso is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables pn is potentially much larger than the number of samples n. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only ... that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the ℓ2 error with an appropriate choice of the smoothing parameter. The rate is shown to be ..."
Cited by 250 (14 self)
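The Lasso objective behind this entry is ½‖y − Ax‖² + λ‖x‖₁. A minimal sketch of one standard solver, iterative soft-thresholding (ISTA), is below; the paper analyzes the estimator's recovery properties, not this particular algorithm, and the step-size choice 1/L here is just the textbook one:

```python
import numpy as np

def lasso_ista(A, y, lam, iters=500):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative
    soft-thresholding (ISTA) with constant step 1 / L."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)          # gradient of the smooth part
        z = x - g / L
        # proximal step: soft-threshold at lam / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The smoothing parameter λ here plays the role of the abstract's tuning parameter: larger λ selects smaller subsets of variables at the cost of extra shrinkage bias.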
Learning Bayesian network structure from massive datasets: the “sparse candidate” algorithm
In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI), 1999
"... Learning Bayesian networks is often cast as an optimization problem, where the computational task is to find a structure that maximizes a statistically motivated score. By and large, existing learning tools address this optimization problem using standard heuristic search techniques. Since the search ... an algorithm that achieves faster learning by restricting the search space. This iterative algorithm restricts the parents of each variable to belong to a small subset of candidates. We then search for a network that satisfies these constraints. The learned network is then used for selecting better ..."
Cited by 247 (7 self)
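The candidate-restriction step can be sketched as follows: score pairwise dependence between variables and keep only the top-k scorers as candidate parents. The sketch below uses plain empirical mutual information on discrete data as the dependence measure, a simplification of the measures discussed in the paper; function names are illustrative:

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) of two discrete columns."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * np.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def candidate_parents(data, k):
    """For each variable (column of `data`), keep the k other variables
    with the highest pairwise mutual information as candidate parents,
    shrinking the structure-search space as in the sparse candidate idea."""
    n_vars = data.shape[1]
    cands = {}
    for i in range(n_vars):
        scores = [(mutual_information(data[:, i], data[:, j]), j)
                  for j in range(n_vars) if j != i]
        scores.sort(reverse=True)
        cands[i] = [j for _, j in scores[:k]]
    return cands
```

In the full algorithm this restriction is re-estimated each iteration using the network learned so far, so variables missed by the first pairwise pass can re-enter the candidate sets.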
Bottom-up computation of sparse and Iceberg CUBE
In Proceedings of the 5th ACM international workshop on Data Warehousing and OLAP, DOLAP ’02, 1999
"... We introduce the Iceberg-CUBE problem as a reformulation of the datacube (CUBE) problem. The Iceberg-CUBE problem is to compute only those group-by partitions with an aggregate value (e.g., count) above some minimum support threshold. The result of Iceberg-CUBE can be used (1) to answer group-by queries with a clause such as HAVING COUNT(*) >= X, where X is greater than the threshold, (2) for mining multidimensional association rules, and (3) to complement existing strategies for identifying interesting subsets of the CUBE for precomputation. We present a new algorithm (BUC) for Iceberg ..."
Cited by 187 (4 self)
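BUC's key move is to recurse bottom-up, dimension by dimension, and prune any partition whose count falls below the minimum support, since no finer group-by of a pruned partition can meet the threshold. A hedged sketch of that recursion for count aggregates (the data layout and names are assumptions, not the paper's code):

```python
from collections import defaultdict

ALL = "*"  # wildcard marking an unaggregated ("all") dimension

def buc(tuples, dims, minsup, start=0, group=None, out=None):
    """Sketch of BUC for Iceberg-CUBE with COUNT(*) >= minsup.
    `tuples` is a list of equal-length tuples of discrete attributes;
    returns {cell: count} for every group-by cell meeting the support."""
    if out is None:
        out = {}
    if group is None:
        group = (ALL,) * dims
    if len(tuples) < minsup:
        return out                     # iceberg pruning: stop this branch
    out[group] = len(tuples)
    for d in range(start, dims):
        parts = defaultdict(list)
        for t in tuples:               # partition on dimension d
            parts[t[d]].append(t)
        for v, part in parts.items():
            g = group[:d] + (v,) + group[d + 1:]
            buc(part, dims, minsup, d + 1, g, out)
    return out
```

Because each recursive call only ever refines dimensions to the right of the one just fixed, every group-by cell is produced exactly once, and pruned branches are never revisited.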