Results 1 – 10 of 581,282
On the Power of Circuits with Gates of Low L_1 Norms, 1995
"... We examine the power of Boolean functions with low L 1 norms in several settings. In large part of the recent literature, the degree of a polynomial which represents a Boolean function in some way was chosen to be the measure of the complexity of the Boolean function (see, e.g. [1], [3], [5], [3 ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
We examine the power of Boolean functions with low L 1 norms in several settings. In large part of the recent literature, the degree of a polynomial which represents a Boolean function in some way was chosen to be the measure of the complexity of the Boolean function (see, e.g. [1], [3], [5
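The L_1 norm in question is the spectral (Fourier) L_1 norm, Σ_S |f̂(S)|, of a Boolean function. A brute-force sketch for small n, assuming the usual ±1 function convention (function and variable names are mine, not the paper's):

```python
import itertools

def fourier_l1_norm(f, n):
    """Spectral L1 norm of f: {0,1}^n -> {-1,+1}.

    f_hat(S) = E_x[f(x) * chi_S(x)], where chi_S(x) = (-1)^(sum of x_i, i in S).
    Returns sum over all 2^n subsets S of |f_hat(S)|.
    """
    points = list(itertools.product([0, 1], repeat=n))
    total = 0.0
    for S in itertools.product([0, 1], repeat=n):  # subsets as 0/1 masks
        coef = sum(f(x) * (-1) ** sum(xi for xi, si in zip(x, S) if si)
                   for x in points) / 2 ** n
        total += abs(coef)
    return total

parity = lambda x: (-1) ** sum(x)        # parity concentrates on one coefficient
and3   = lambda x: -1 if all(x) else 1   # 3-bit AND in the +/-1 encoding

print(fourier_l1_norm(parity, 3))  # → 1.0
print(fourier_l1_norm(and3, 3))    # → 2.5
```

Parity has spectral L_1 norm exactly 1 (its spectrum is a single coefficient), which is the sense in which "low L_1 norm" is a nontrivial restriction on a function class.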
Robust Subspace Computation Using L1 Norm, 2003
"... Linear subspace has many important applications in computer vision, such as structure from motion, motion estimation, layer extraction, object recognition, and object tracking. Singular Value Decomposition (SVD) algorithm is a standard technique to compute the subspace from the input data. The SVD a ..."
Abstract

Cited by 21 (0 self)
 Add to MetaCart
algorithm, however, is sensitive to outliers as it uses L2 norm metric, and it can not handle missing data either. In this paper, we propose using L1 norm metric to compute the subspace. We show that it is robust to outliers and can handle missing data. We present two algorithms to optimize the L1 norm
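For intuition on why an L1 criterion resists outliers where L2 does not, here is a toy one-parameter illustration, not the paper's subspace algorithm: fitting y ≈ a·x, the L2-optimal slope is a ratio of sums (pulled by outliers), while the L1-optimal slope is a weighted median of the ratios y_i/x_i.

```python
def l2_slope(xs, ys):
    """Least-squares slope for y ~ a*x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def l1_slope(xs, ys):
    """Slope minimizing sum |y_i - a*x_i|: the weighted median of
    y_i/x_i with weights |x_i| (a standard fact for 1-D L1 fitting)."""
    pairs = sorted((y / x, abs(x)) for x, y in zip(xs, ys) if x != 0)
    half = sum(w for _, w in pairs) / 2
    acc = 0.0
    for ratio, w in pairs:
        acc += w
        if acc >= half:
            return ratio

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 50]   # true slope 2, one gross outlier at x=5

print(l2_slope(xs, ys))  # → 5.636..., dragged toward the outlier
print(l1_slope(xs, ys))  # → 2.0, recovers the true slope
```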
The L1-norm best-fit hyperplane problem, 2009
"... We present a simple and efficient algorithm for solving the L1norm bestfit hyperplane problem derived using first principles and intuitive geometric insights about L1 projections. The problem is easy to solve because the procedure relies on the solution of a small number of linear programs. We pro ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
We present a simple and efficient algorithm for solving the L1norm bestfit hyperplane problem derived using first principles and intuitive geometric insights about L1 projections. The problem is easy to solve because the procedure relies on the solution of a small number of linear programs. We
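The abstract's key point is that L1 fitting reduces to linear programming. A minimal sketch of one such LP, assuming `scipy` is available: ordinary L1 regression of a line in the plane (a simplification for illustration; not necessarily the paper's projection-based formulation).

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit_line(x, y):
    """Fit y ~ a*x + b minimizing sum |y_i - a*x_i - b| via one LP.

    Variables z = [a, b, e_1..e_n]; minimize sum(e) subject to
    |y_i - a*x_i - b| <= e_i, written as two inequalities per point.
    """
    n = len(x)
    c = np.concatenate([[0.0, 0.0], np.ones(n)])
    A = np.zeros((2 * n, n + 2))
    b_ub = np.zeros(2 * n)
    for i in range(n):
        # y_i - a*x_i - b <= e_i   ->  -a*x_i - b - e_i <= -y_i
        A[2 * i, 0], A[2 * i, 1], A[2 * i, 2 + i] = -x[i], -1.0, -1.0
        b_ub[2 * i] = -y[i]
        # a*x_i + b - y_i <= e_i   ->   a*x_i + b - e_i <=  y_i
        A[2 * i + 1, 0], A[2 * i + 1, 1], A[2 * i + 1, 2 + i] = x[i], 1.0, -1.0
        b_ub[2 * i + 1] = y[i]
    bounds = [(None, None), (None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds)
    return res.x[0], res.x[1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 1.0
y[2] += 25.0                    # gross outlier
a, b = l1_fit_line(x, y)
print(a, b)                     # → approximately 3.0, 1.0 despite the outlier
```

The L1-optimal line passes exactly through the four uncontaminated points, which is the robustness property the paper exploits.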
Dimension Reduction in the l1 norm, in The 43rd Annual Symposium on Foundations of Computer Science (FOCS'02), 2002
"... The JohnsonLindenstrauss Lemma shows that any set of n points in Euclidean space can be mapped linearly down to ) dimensions such that all pairwise distances are distorted by at most 1 + #. We study the following basic question: Does there exist an analogue of the JohnsonLindenstrauss Lemma for t ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
for the # 1 norm? Note that JohnsonLindenstrauss Lemma gives a linear embedding which is independent of the point set. For the # 1 norm, we show that one cannot hope to use linear embeddings as a dimensionality reduction tool for general point sets, even if the linear embedding is chosen as a function
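The ℓ2 case that the abstract contrasts with is easy to demonstrate: a scaled Gaussian matrix is a standard oblivious Johnson-Lindenstrauss map. A minimal numpy sketch (dimensions here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 2000, 1000            # n points in R^d, target dimension k
X = rng.normal(size=(n, d))
P = rng.normal(size=(k, d)) / np.sqrt(k)   # oblivious JL map: chosen without seeing X
Y = X @ P.T

def pdist(Z):
    """All pairwise Euclidean distances as an n x n matrix."""
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)

iu = np.triu_indices(n, 1)
ratios = pdist(Y)[iu] / pdist(X)[iu]
print(ratios.min(), ratios.max())   # all ratios concentrate near 1
```

The paper's point is that no analogous point-set-independent linear map exists for ℓ1 distances.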
Principal Component Analysis Based on L1-norm Maximization, 2008
"... A method of principal component analysis (PCA) based on a new L1norm optimization technique is proposed. Unlike conventional PCA, which is based on L2norm, the proposed method is robust to outliers because it utilizes the L1norm, which is less sensitive to outliers. It is invariant to rotations ..."
Abstract

Cited by 58 (6 self)
 Add to MetaCart
A method of principal component analysis (PCA) based on a new L1norm optimization technique is proposed. Unlike conventional PCA, which is based on L2norm, the proposed method is robust to outliers because it utilizes the L1norm, which is less sensitive to outliers. It is invariant to rotations
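The abstract does not spell out the update rule, so the following is a sketch of a common sign fixed-point iteration for the objective it describes, maximizing ||Xw||₁ over unit vectors w (names and parameters are mine):

```python
import numpy as np

def l1_pca_direction(X, iters=100, seed=0):
    """Find a unit vector w (approximately) maximizing sum_i |x_i . w|.

    Iteration: s_i = sign(x_i . w), then w <- normalize(X^T s).
    Each step cannot decrease the objective, so the iteration converges.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = np.sign(X @ w)
        s[s == 0] = 1.0            # break ties away from zero
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

rng = np.random.default_rng(1)
X = np.outer(rng.normal(size=200), [3.0, 1.0])   # data along direction (3, 1)
X += 0.01 * rng.normal(size=X.shape)
w = l1_pca_direction(X)
print(w)   # aligned (up to sign) with (3, 1)/sqrt(10)
```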
The L1-norm density estimator process, Ann. Probab., 2003
"... The notion of an L1–norm density estimator process indexed by a class of kernels is introduced. Then a functional central limit theorem and a Glivenko–Cantelli theorem are established for this process. While assembling the necessary machinery to prove these results, a body of Poissonization techniqu ..."
Abstract

Cited by 12 (5 self)
 Add to MetaCart
The notion of an L1–norm density estimator process indexed by a class of kernels is introduced. Then a functional central limit theorem and a Glivenko–Cantelli theorem are established for this process. While assembling the necessary machinery to prove these results, a body of Poissonization
L1-norm penalized least squares with SALSA, Connexions
"... Abstract. This lecture note describes an iterative optimization algorithm, ‘SALSA’, for solving L1norm penalized least squares problems. We describe the use of SALSA for sparse signal representation and approximation, especially with overcomplete Parseval transforms. We also illustrate the use of S ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Abstract. This lecture note describes an iterative optimization algorithm, ‘SALSA’, for solving L1norm penalized least squares problems. We describe the use of SALSA for sparse signal representation and approximation, especially with overcomplete Parseval transforms. We also illustrate the use
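SALSA itself is not reproduced here; as a simpler stand-in for the same objective, min ½‖y − Ax‖² + λ‖x‖₁, the proximal-gradient iteration ISTA can be sketched in a few lines (all names and parameter values below are illustrative choices, not from the note):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)     # underdetermined system
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -3.0, 1.5]           # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
print(sorted(np.argsort(np.abs(x_hat))[-3:]))    # support of the recovery
```

The L1 penalty drives most coefficients exactly to zero, which is why this objective is the workhorse of sparse approximation.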
Near-Optimal Sparse Recovery in the L1 norm
"... We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x ∈ Rn from its lowerdimensional sketch Ax ∈ Rm. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an appro ..."
Abstract

Cited by 25 (2 self)
 Add to MetaCart
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x ∈ Rn from its lowerdimensional sketch Ax ∈ Rm. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute
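Guarantees in this line of work are typically stated against the best k-term approximation error ‖x − x_k‖₁, where x_k keeps only the k largest-magnitude entries of x. A small hypothetical helper (mine, not from the paper) computing that benchmark:

```python
import numpy as np

def best_k_term(x, k):
    """Best k-sparse approximation of x: keep the k largest-magnitude
    entries and zero out the rest (optimal in every l_p norm)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

x = np.array([0.1, 5.0, -0.2, 3.0, 0.05, -4.0])
xk = best_k_term(x, 2)
err = np.linalg.norm(x - xk, 1)   # the benchmark error ||x - x_k||_1
print(xk)    # → [ 0.  5.  0.  0.  0. -4.]
print(err)   # → 3.35
```

A sparse-recovery scheme then promises ‖x − x̂‖₁ ≤ C · ‖x − x_k‖₁ for the vector x̂ decoded from the sketch Ax.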