Results 11–20 of 157
A simplified approach to recovery conditions for low rank matrices, in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), 2011
A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity, 2011
Cited by 21 (8 self)
The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off efficiency against complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and, sometimes, invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping “pictures”.
Iterative reweighted least squares for matrix rank minimization, in Proceedings of the Allerton Conference, 2010
Cited by 19 (2 self)
The classical compressed sensing problem is to find the sparsest solution to an underdetermined system of linear equations. A good convex approximation to this problem is to minimize the ℓ1 norm subject to affine constraints. The Iterative Reweighted Least Squares algorithm IRLS-p (0 < p ≤ 1) has been proposed as a method to solve the ℓp (p ≤ 1) minimization problem with affine constraints. Recently, Chartrand et al. observed that IRLS-p with p < 1 has better empirical performance than ℓ1 minimization, and Daubechies et al. gave ‘local’ linear and superlinear convergence results for IRLS-p with p = 1 and p < 1, respectively. In this paper we extend IRLS-p as a family of algorithms for the matrix rank minimization problem, and we also present a related family of algorithms, sIRLS-p. We present guarantees on recovery of low-rank matrices for IRLS-1 under the Null Space Property (NSP). We also establish that the difference between the successive iterates of IRLS-p and sIRLS-p converges to zero, and that the IRLS-0 algorithm converges to the stationary point of a nonconvex rank-surrogate minimization problem. On the numerical side, we give a few efficient implementations for IRLS-0 and demonstrate that both sIRLS-0 and IRLS-0 perform better than algorithms such as Singular Value Thresholding (SVT) on a range of ‘hard’ problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large). We also observe that sIRLS-0 performs better than the Iterative Hard Thresholding (IHT) algorithm when there is no a priori information on the low-rank solution.
Sparse recovery by nonconvex optimization: instance optimality, 2008
Cited by 18 (2 self)
In this note, we address the theoretical properties of ∆p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candès, Romberg and Tao [3] and Wojtaszczyk [30] regarding the decoder ∆1, based on ℓ1 minimization, to ∆p with p ∈ (0, 1). Our results are twofold. First, we show that under certain sufficient conditions that are weaker than the analogous sufficient conditions for ∆1, the decoders ∆p are robust to noise and stable in the sense that they are (2, p) instance optimal. Second, we extend the results of Wojtaszczyk to show that, like ∆1, the decoders ∆p are (2, 2) instance optimal in probability provided the measurement matrix is drawn from an appropriate distribution. While the extension of the results of [3] to the setting where p ∈ (0, 1) is straightforward, the extension of the instance optimality in probability result of [30] is nontrivial. In particular, we need to prove that the LQ1 property, introduced in [30] and shown to hold for Gaussian matrices and matrices whose columns are drawn uniformly from the sphere, generalizes to an LQp property for the same classes of matrices. Our proof is based on a result by Gordon and Kalton [18] about the Banach-Mazur distances of p-convex bodies to their convex hulls.
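For the vector problem these decoders address, the reweighting idea behind ℓp minimization admits a compact sketch. The toy IRLS loop below for min ‖x‖_p^p subject to Ax = b is illustrative only; the ε-annealing schedule and the problem sizes are our own assumptions, not taken from the note.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 16, 40, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

def irls_lp(A, b, p=0.5, n_iter=50, eps=1.0):
    """IRLS for min ||x||_p^p s.t. Ax = b. Each sweep solves a weighted
    least-squares problem in closed form:
        x = D A^T (A D A^T)^{-1} b,  D = diag((x_i^2 + eps)^{1 - p/2})."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm starting point
    for _ in range(n_iter):
        d = (x ** 2 + eps) ** (1.0 - p / 2.0)  # inverse weights
        x = d * (A.T @ np.linalg.solve(A @ (d[:, None] * A.T), b))
        eps = max(eps / 10.0, 1e-9)            # anneal the smoothing term
    return x

x_hat = irls_lp(A, b)
```

Gradually shrinking ε is what lets the weights concentrate on the true support without getting stuck at a poor early estimate.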
LOW-RANK MATRIX RECOVERY VIA ITERATIVELY REWEIGHTED LEAST SQUARES MINIMIZATION
Cited by 18 (4 self)
Abstract. We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which makes it possible to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature.
AMS subject classification: 65J22, 65K10, 52A41, 49M30.
Key Words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
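The Woodbury step mentioned above rests on the standard identity (D + UCV)^{-1} = D^{-1} - D^{-1} U (C^{-1} + V D^{-1} U)^{-1} V D^{-1}, which trades an n-by-n inverse for a k-by-k one when the matrix is diagonal plus low rank. A generic numerical check (not the paper's solver; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 5
d = rng.uniform(1.0, 2.0, n)              # cheap-to-invert diagonal part D
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

# Woodbury: (D + U C V)^{-1} = D^{-1} - D^{-1} U (C^{-1} + V D^{-1} U)^{-1} V D^{-1}
Dinv = np.diag(1.0 / d)
core = np.linalg.inv(np.linalg.inv(C) + V @ Dinv @ U)   # only a k-by-k inverse
woodbury = Dinv - Dinv @ U @ core @ V @ Dinv

direct = np.linalg.inv(np.diag(d) + U @ C @ V)
max_err = np.abs(woodbury - direct).max()
```

In an IRLS iteration this is what keeps the per-step cost dominated by operations of size k (the current rank estimate) rather than the ambient dimension n.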
Iterative reweighted algorithms for matrix rank minimization, in Journal of Machine Learning Research
Cited by 17 (0 self)
The problem of minimizing the rank of a matrix subject to affine constraints has many applications in machine learning, and is known to be NP-hard. One of the tractable relaxations proposed for this problem is nuclear norm (or trace norm) minimization of the matrix, which is guaranteed to find the minimum-rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1) as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, i.e., recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set.
A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework, 2010
Microgeometry Capture using an Elastomeric Sensor
Cited by 16 (2 self)
Figure 1: Our microgeometry capture system consists of an elastomeric sensor and a high-magnification camera (a). The retrographic sensor replaces the BRDF of the subject with its own (b), allowing microscopic geometry (in this case, human skin) to be accurately captured (c). The same principles can be applied to a portable system (d) that can measure surface detail rapidly and easily, again on human skin (e).
We describe a system for capturing microscopic surface geometry. The system extends the retrographic sensor [Johnson and Adelson 2009] to the microscopic domain, demonstrating spatial resolution as small as 2 microns. In contrast to existing microgeometry capture techniques, the system is not affected by the optical characteristics of the surface being measured: it captures the same geometry whether the object is matte, glossy, or transparent. In addition, the hardware design allows for a variety of form factors, including a handheld device that can be used to capture high-resolution surface geometry in the field. We achieve these results with a combination of improved sensor materials, illumination design, and reconstruction algorithm, as compared to the original sensor of Johnson and Adelson.
Efficient first order methods for linear composite regularizers, 2011
Cited by 14 (5 self)
A wide class of regularization problems in machine learning and statistics employ a regularization term which is obtained by composing a simple convex function ω with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods, and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function ω is known in advance. Our approach builds on a recent line of research on optimal first-order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient.
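A concrete instance of this setup, sketched here under our own simplifications rather than taken from the paper, is the proximity operator of the 1-D total variation ω(Bx) with ω = λ‖·‖1 and B the forward-difference operator. It can be computed by a fixed point (projected gradient) iteration on the dual variable:

```python
import numpy as np

def prox_tv1d(y, lam, n_iter=2000, tau=0.25):
    """Prox of x -> lam*||D x||_1 at y via projected gradient on the dual.

    D is the forward-difference operator; ||D||^2 <= 4, so tau = 0.25 is a
    safe step size for the dual problem min_{|v|<=lam} 0.5*||y - D^T v||^2."""
    D = lambda x: x[1:] - x[:-1]
    Dt = lambda v: np.concatenate([[-v[0]], v[:-1] - v[1:], [v[-1]]])
    v = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        v = np.clip(v + tau * D(y - Dt(v)), -lam, lam)  # dual fixed-point step
    return y - Dt(v)                                    # recover the primal point

y = np.array([0., 0., 5., 5., 5., 0., 0.])
x = prox_tv1d(y, lam=1.0)
```

Each iteration needs only matrix-vector products with B and a clipping step (the prox of the dual of ω), which is exactly the pattern that generalizes when only the proximity operator of ω is available.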
Sparse Signal Recovery and Acquisition with Graphical Models: A review of a broad set of sparse models, analysis tools, and recovery algorithms within the graphical models formalism, 2010
Cited by 14 (1 self)
Many applications in digital signal processing, machine learning, and communications feature a linear regression problem in which unknown data points, hidden variables, or code words are projected into a lower-dimensional space via y = Φx + n. (1) In the signal processing context, we refer to x ∈ R^N as the signal, y ∈ R^M as the measurements with M < N, Φ ∈ R^{M×N} as the measurement matrix, and n ∈ R^M as the noise. The measurement matrix Φ is a matrix with random entries in data streaming, an overcomplete dictionary of features in sparse Bayesian learning, or a code matrix in communications [1]–[3]. Extracting x from y in (1) is ill-posed in general since M < N and the measurement matrix Φ hence has a nontrivial null space; given any vector v in this null space, x + v defines a solution that produces the same observations y. Additional information is therefore necessary to distinguish the true x among the infinitely many possible solutions [1], [2], [4], [5]. It is now well known that sparse ...
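The null-space ambiguity described above is easy to demonstrate numerically. In the sketch below (the sizes and the sparse support are our own choices), a direction v with Φv = 0 is added to the signal without changing the measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 10, 30
Phi = rng.standard_normal((M, N))     # fat measurement matrix, M < N
x = np.zeros(N)
x[[2, 11, 25]] = [1.5, -2.0, 0.7]     # a 3-sparse signal
y = Phi @ x                           # noiseless measurements y = Phi x

# Since M < N, Phi has a nontrivial null space; take one direction from the SVD.
_, _, Vt = np.linalg.svd(Phi)
v = Vt[-1]                            # right singular vector outside the row space
resid = np.linalg.norm(Phi @ (x + 3.0 * v) - y)   # measurements are unchanged
```

Recovering x therefore requires a prior such as sparsity, which is what the graphical-model machinery surveyed in the paper encodes.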