RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images (2010)
Citations: 161 (6 self)
Citations
1887 | Robust real-time face detection
- Viola, Jones
Citation Context: ...inal images obtained using a face detector; and (b) average of the reconstructed low-rank images. We obtain an initial estimate of the transformation in each image using the Viola-Jones face detector [39]. We again align the images to an 80×60 canonical frame. For this experiment, we use affine transformations G = Aff(2) in RASL, to cope with the large pose variability in LFW. Since there is no groun...
1056 | A fast iterative shrinkage-thresholding algorithm for linear inverse problems
- Beck, Teboulle
- 2009
Citation Context: ...are essential for its practical use. Fortunately, a recent flurry of work on high-dimensional nuclear norm minimization has shown that such problems are well within the capabilities of a standard PC [19, 20, 2, 22, 17]. In this section, we show how one such fast first-order method, the Accelerated Proximal Gradient (APG) algorithm [2, 22, 17], can be adapted to efficiently solve (6). The APG approach replaces the e...
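The computational primitive behind APG-style solvers for nuclear norm problems is singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch (the function name and toy matrix are illustrative, not from the paper):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.
    Shrinks each singular value of M toward zero by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix: a small threshold shrinks its one singular value
# but does not change its rank.
A = np.outer(np.ones(4), np.arange(1.0, 5.0))
L = svt(A, 0.5)
print(np.linalg.matrix_rank(L))  # → 1
```

Inside an APG iteration, `svt` is applied to a gradient step on the smooth part of the objective, with the threshold set by the step size.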
979 | A survey of image registration techniques
- Brown
- 1992
Citation Context: ...classification. Intelligently harnessing the information encoded in these large sets of images seems to require more efficient and effective solutions to the long-standing batch image alignment task [18, 3]: given many images of an object or objects of interest, align them to a fixed canonical template. To a large extent, progress in batch image alignment has...
934 | Robust face recognition via sparse representation
- Wright, Yang, et al.
- 2009
Citation Context: ...small fraction of all pixels in an image, we can model them as sparse errors whose nonzero entries can have arbitrarily large magnitude. This model has been successfully employed in face recognition [19]. In addition to occlusions, real images typically contain some noise of small magnitude in each pixel. To keep our discussion simple, we assume here that such noise is negligible in magnitude as comp...
706 | Lucas-Kanade 20 years on: A unifying framework.
- Baker, Matthews
- 2004
Citation Context: ...ch as zooming in on a single dark pixel or a dark region in the images. This kind of iterative linearization has a long history in gradient algorithms for batch image alignment (see, e.g., [9], [20] and references therein). More recently a similar iterative convex programming approach was proposed for single-to-batch image alignment in face recognition [21]...
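As a hedged illustration of this iterative-linearization idea, here is a 1-D, translation-only toy in the spirit of Lucas-Kanade (all names and signals below are synthetic; this is not the paper's algorithm):

```python
import numpy as np

def estimate_shift(ref, sig, n_iter=30):
    """Estimate a sub-sample translation t such that sig(x + t) ~ ref(x),
    by repeatedly linearizing the warp in t and solving the
    resulting least-squares problem (Gauss-Newton style)."""
    idx = np.arange(len(ref), dtype=float)
    t = 0.0
    for _ in range(n_iter):
        warped = np.interp(idx + t, idx, sig)   # sig resampled at current shift
        grad = np.gradient(warped)              # derivative of warped w.r.t. t
        r = ref - warped                        # residual to explain
        t += (grad @ r) / (grad @ grad)         # closed-form linearized step
    return t

idx = np.arange(200, dtype=float)
ref = np.sin(2 * np.pi * idx / 100.0)           # smooth synthetic "image row"
sig = np.interp(idx - 3.0, idx, ref)            # ref shifted right by 3 samples
t_hat = estimate_shift(ref, sig)
```

Each pass re-linearizes the warp around the current estimate, which is exactly the iteration structure the excerpt refers to; RASL applies the same idea to parametric 2-D transformations inside a convex program.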
566 | Robust principal component analysis
- Candès, Li, et al.
- 2009
Citation Context: ...Recent advances in rank minimization [15], [16] have shown that it is indeed possible to efficiently and exactly recover low-rank matrices despite significant corruption, using tools from convex programming. These developments prompt us to revisit...
547 | A survey of medical image registration
- Maintz, Viergever
- 1998
Citation Context: ...assification. Intelligently harnessing the information encoded in these large sets of images seems to require more efficient and effective solutions to the long-standing batch image alignment problem [2], [3]: Given many images of an object or objects of interest, align them to a fixed canonical template...
526 | Lambertian reflectance and linear subspaces
- Basri, Jacobs
- 2003
Citation Context: ...ely low-rank. This assumption holds quite generally. For example, if the I0_i, i = 1, ..., n, are images of some convex Lambertian object under varying illumination, then a rank-9 approximation suffices [1]. Being able to correctly identify this low-dimensional structure is crucial for many vision tasks such as face recognition. Modeling corruption. In practice, however, this low-rank structure can be ea...
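The low-rank assumption is easy to sanity-check numerically. A sketch with synthetic data standing in for vectorized 80×60 images (the 9-dimensional generative model here is an assumption mirroring the Lambertian rank-9 result, not real image data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 40 linear combinations of 9 basis patterns
# plus a small amount of per-pixel noise, stacked as columns.
basis = rng.standard_normal((80 * 60, 9))
coeffs = rng.standard_normal((9, 40))
D = basis @ coeffs + 1e-3 * rng.standard_normal((80 * 60, 40))

# Fraction of the matrix's energy captured by a rank-9 truncation.
s = np.linalg.svd(D, compute_uv=False)
energy_9 = np.sum(s[:9] ** 2) / np.sum(s ** 2)
print(energy_9 > 0.999)  # → True: the stack is effectively rank 9
```

Real image stacks behave similarly when the low-rank model holds, which is what makes nuclear-norm-based recovery of the aligned stack plausible.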
448 | Labeled faces in the wild: A database for studying face recognition in unconstrained environments
- Huang, Mattar, et al.
- 2008
Citation Context: ..., and YouTube has led to a dramatic increase in the amount of visual data available online. Within the computer vision community, this has inspired a renewed interest in large, unconstrained datasets [1]. Such data pose steep challenges to existing vision algorithms: significant illumination variation, partial occlusion, as well as poor or even no alignment (see Figure 1(a) for example). This last di...
388 | On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Eckstein, Bertsekas
- 1992
Citation Context: ...en two terms has been studied extensively as the alternating direction method of multipliers in the optimization literature and its convergence has been well established for various cases [32], [33], [34]. In particular, the convergence for the Principal Component Pursuit problem – essentially problem (7) without the term associated with ∆τ – has been established in [29]. Recently, [35] obtained a con...
329 | The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010
- Lin, Chen, et al.
Citation Context: ...al for its practical use. Fortunately, a recent flurry of work on high-dimensional nuclear norm minimization has shown that such problems are well within the capabilities of a standard PC [27], [28], [29]. In this section, we show how one such fast first-order method, the Augmented Lagrange Multiplier (ALM) algorithm [29], [30], [16], can be adapted to efficiently solve (7). The basic idea of the ALM...
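For intuition, the ALM scheme for the transformation-free core problem (Principal Component Pursuit: min ‖A‖* + λ‖E‖₁ s.t. D = A + E) alternates singular value thresholding, entrywise soft-thresholding, and a dual update. A minimal sketch; the parameter heuristics below are common defaults from the inexact-ALM literature, not necessarily the authors' exact settings:

```python
import numpy as np

def pcp_alm(D, lam, n_iter=200, mu=None, rho=1.1):
    """Sketch of an inexact ALM scheme for Principal Component Pursuit:
        min ||A||_* + lam * ||E||_1   s.t.   D = A + E."""
    if mu is None:
        mu = 1.25 / np.linalg.norm(D, 2)  # common initial penalty (assumption)
    Y = np.zeros_like(D)                  # Lagrange multiplier
    E = np.zeros_like(D)
    for _ in range(n_iter):
        # A-step: singular value thresholding at level 1/mu
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-step: entrywise soft-thresholding at level lam/mu
        R = D - A + Y / mu
        E = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update and penalty increase
        Y = Y + mu * (D - A - E)
        mu *= rho
    return A, E

# Toy usage: recover a rank-1 matrix from gross sparse corruption.
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(30))
mask = rng.random((30, 30)) < 0.05
S0 = np.zeros((30, 30))
S0[mask] = 10.0 * rng.standard_normal(mask.sum())
A, E = pcp_alm(L0 + S0, lam=1.0 / np.sqrt(30))
```

RASL's inner loop extends this alternation with an extra update for the linearized transformation parameters ∆τ, which is exactly the third term whose convergence the excerpt discusses.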
301 | Mutual-information-based registration of medical images: a survey.
- Pluim, Maintz, et al.
- 2003
Citation Context: ...To a large extent, progress in batch image alignment has been driven by the introduction of increasingly sophisticated measures of image similarity [4]. Learned-Miller's influential congealing algorithm seeks an alignment that minimizes the sum of entropies of pixel values at each pixel location in the batch of aligned images [5], [6]. If we stack t...
283 | A dual algorithm for the solution of nonlinear variational problems via finite-element approximations
- Gabay, Mercier
- 1976
Citation Context: ...between two terms has been studied extensively as the alternating direction method of multipliers in the optimization literature and its convergence has been well established for various cases [32], [33], [34]. In particular, the convergence for the Principal Component Pursuit problem – essentially problem (7) without the term associated with ∆τ – has been established in [29]. Recently, [35] obtained...
229 | Rank-sparsity incoherence for matrix decomposition. See http://arxiv.org/abs/0906.2220
- Chandrasekaran, Sanghavi, et al.
- 2009
Citation Context: ...em of fitting a low-rank model to highly corrupted data [14], a problem that until recently lacked a polynomial-time algorithm with strong performance guarantees. Recent advances in rank minimization [15], [16] have shown that it is indeed possible to efficiently and exactly recover low-rank matrices despite significant corruption, using tools from convex programming. These developments prompt us to r...
183 | An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems
- Toh, Yun
- 2009
Citation Context: ...are essential for its practical use. Fortunately, a recent flurry of work on high-dimensional nuclear norm minimization has shown that such problems are well within the capabilities of a standard PC [27], [28], [29]. In this section, we show how one such fast first-order method, the Augmented Lagrange Multiplier (ALM) algorithm [29], [30], [16], can be adapted to efficiently solve (7). The basic idea...
181 | An invitation to 3-D vision
- Ma, Soatto, et al.
- 2006
Citation Context: ...transformations from a finite-dimensional group G that has a parametric representation, such as the similarity group SE(2) × R+, the 2-D affine group Aff(2), and the planar homography group GL(3) (see [18] for more details on transformation groups). Consolidating the above two models, we formulate the image alignment problem as follows. Suppose that I1, I2, ..., In represent n input images of the same obj...
177 | A framework for robust subspace learning.
- Torre, Black
- 2003
Citation Context: ...ees of robustness or convergence rate. This somewhat unsatisfactory status quo is mainly due to the extremely difficult nature of the core problem of fitting a low-rank model to highly corrupted data [14], a problem that until recently lacked a polynomial-time algorithm with strong performance guarantees. Recent advances in rank minimization...
167 | Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires, Rev. Française d'Aut
- Glowinski, Marrocco
- 1975
Citation Context: ...nating between two terms has been studied extensively as the alternating direction method of multipliers in the optimization literature and its convergence has been well established for various cases [32], [33], [34]. In particular, the convergence for the Principal Component Pursuit problem – essentially problem (7) without the term associated with ∆τ – has been established in [29]. Recently, [35] ob...
123 | Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices
- Fazel, Hindi, et al.
- 2003
108 | Towards a practical face recognition system: robust registration and illumination by sparse representation.
- Wagner, Wright, et al.
- 2009
Citation Context: ...batch image alignment (see, e.g., [9], [20] and references therein). More recently a similar iterative convex programming approach was proposed for single-to-batch image alignment in face recognition [21]. Algorithm 1 (Outer loop of RASL) INPUT: Images I1, ..., In ∈ R^(w×h), initial transformations τ1, ..., τn in a certain...
106 | Unsupervised joint alignment of complex images.
- Huang, Jain, et al.
- 2007
Citation Context: ...age similarity [4]. Learned-Miller's influential congealing algorithm seeks an alignment that minimizes the sum of entropies of pixel values at each pixel location in the batch of aligned images [5], [6]. If we stack the aligned images as the columns of a large matrix, this criterion demands that each row of this matrix be nearly constant. Conversely, the least squares congealing procedure of [7], [8...
94 | Stable principal component pursuit,
- Zhou, Li, et al.
- 2010
Citation Context: ...ontain some noise of small magnitude in each pixel. This can be easily augmented into our model by adding a noise matrix Z of bounded magnitude to the equality constraint in (7). It has been shown in [22] that sparse and low-rank matrix decomposition (without transformations) by convex optimization is stable to additive Gaussian noise of small magnitude, in addition to sparse errors. It may be possibl...
87 | Data driven image models through continuous joint alignment.
- Learned-Miller
- 2006
Citation Context: ...of image similarity [4]. Learned-Miller's influential congealing algorithm seeks an alignment that minimizes the sum of entropies of pixel values at each pixel location in the batch of aligned images [5], [6]. If we stack the aligned images as the columns of a large matrix, this criterion demands that each row of this matrix be nearly constant. Conversely, the least squares congealing procedure of [7...
77 | Recovering low-rank and sparse components of matrices from incomplete and noisy observations
- Tao, Yuan
- 2011
Citation Context: ...es [32], [33], [34]. In particular, the convergence for the Principal Component Pursuit problem – essentially problem (7) without the term associated with ∆τ – has been established in [29]. Recently, [35] obtained a convergence result for certain three-term alternation applied to the noisy principal component pursuit problem (see also [36]). However, [35] reflects a very similar theory-practice gap –...
73 | Transformation-invariant clustering using the EM algorithm.
- Frey, Jojic
- 2003
Citation Context: ...e a log-determinant measure that can be viewed as a smooth surrogate for the rank function [10]. The low-rank objective can also be directly enforced, as in Transformed Component Analysis (TCA) [11], [12], which uses an EM algorithm to fit a low-dimensional linear model, subject to domain transformations drawn from a known group. A major drawback of the above approaches is that they do not simultaneou...
54 | Transformed component analysis: joint estimation of spatial transformations and image components
- Frey, Jojic
- 1999
Citation Context: ...inimize a log-determinant measure that can be viewed as a smooth surrogate for the rank function [10]. The low-rank objective can also be directly enforced, as in Transformed Component Analysis (TCA) [11], [12], which uses an EM algorithm to fit a low-dimensional linear model, subject to domain transformations drawn from a known group. A major drawback of the above approaches is that they do not simul...
53 | Image manifolds which are isometric to euclidean space,
- Donoho, Grimes
- 2005
Citation Context: ..., are required for the algorithm to converge. It is important to realize that, in general, manifolds formed by transformed images may not be C^2 (or not even C^1), due to the presence of sharp edges [26]. However, in our case, we can view the digital images Ii ∘ τi as resampling transformations of an ideal bandlimited reconstruction Îi obtained from the digital image Ii, in which case the mapping x ↦...
52 | Robust parameterized component analysis: theory and applications to 2d facial appearance models
- Torre, Black
- 2003
Citation Context: ...ariations and gross pixel corruptions or partial occlusions that often occur in real images (e.g., shadows, hats, glasses in Figure 1). The Robust Parameterized Component Analysis (RPCA) algorithm of [13] also fits a low-rank model, and uses a robust fitting function to reduce the influence of corruption and occlusion. Unfortunately, this leads to a difficult, nonconvex optimization problem, with no t...
43 | Robust video denoising using Low rank matrix completion,”
- Ji, Liu, et al.
- 2010
Citation Context: ...ly 20% of the pixels in each image (i.e., ρ = 0.2). The results are shown in Figure 7. We observe that the output images are well-aligned with respect to each other and free of corruptions. Recently, [38] proposed an image denoising algorithm based on low-rank matrix completion. Our method differs from that work in three main aspects. Firstly, we denoise the images globally instead of in a patch-based...
30 | Least squares congealing for unsupervised alignment of images.
- Cox, Sridharan, et al.
- 2008
Citation Context: ...5], [6]. If we stack the aligned images as the columns of a large matrix, this criterion demands that each row of this matrix be nearly constant. Conversely, the least squares congealing procedure of [7], [8] seeks an alignment that minimizes the sum of squared distances between pairs of images, and hence demands that the columns be nearly constant. In both cases, if the criterion is satisfied exactl...
18 | Fast algorithms for recovering a corrupted low-rank matrix,”
- Ganesh, Lin, et al.
- 2009
Citation Context: ...ssential for its practical use. Fortunately, a recent flurry of work on high-dimensional nuclear norm minimization has shown that such problems are well within the capabilities of a standard PC [27], [28], [29]. In this section, we show how one such fast first-order method, the Augmented Lagrange Multiplier (ALM) algorithm [29], [30], [16], can be adapted to efficiently solve (7). The basic idea of th...
16 | Strong uniqueness and second order convergence in nonlinear discrete approximation
- Jittorntrum, Osborne
- 1980
Citation Context: ...lgorithms was extensively studied in the late 1970s and early 1980s, and they continue to draw attention today [23]. We draw upon this body of work, in particular results of Jittorntrum and Osborne [24] (building on work of Cromme [25]) to understand the local convergence of RASL. The result of [24] concerns the problem of minimizing the composition of a norm ‖·‖⋄ : R^n → R with a C^2 mapping f :...
16 | Strong Uniqueness: A Far-Reaching Criterion for the Convergence
- Cromme
- 1978
Citation Context: ...in the late 1970s and early 1980s, and they continue to draw attention today [23]. We draw upon this body of work, in particular results of Jittorntrum and Osborne [24] (building on work of Cromme [25]) to understand the local convergence of RASL. The result of [24] concerns the problem of minimizing the composition of a norm ‖·‖⋄ : R^n → R with a C^2 mapping f : R^p → R^n. The authors of [25],...
15 | A proximal method for composite minimization
- Lewis, Wright
- 2008
Citation Context: ...th convex function with a smooth, nonlinear mapping. The convergence behavior of such algorithms was extensively studied in the late 1970s and early 1980s, and they continue to draw attention today [23]. We draw upon this body of work, in particular results of Jittorntrum and Osborne [24] (building on work of Cromme [25]) to understand the local convergence of RASL. The result of [24] concerns the p...
10 | RASL: Robust alignment via sparse and low-rank decomposition for linearly correlated images
- Peng, Ganesh, et al.
- 2010
Citation Context: ...er alternative convex optimization methods. In particular, it is about 5-10 times faster than the accelerated proximal gradient (APG) method originally proposed in the conference version of this work [31]. Although the convergence of the ALM method (15) has been well established in the optimization literature, we currently know of no proof that its approximation (16) converges too. The main difficult...
9 | Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities
- He
- 2009
Citation Context: ...m associated with ∆τ – has been established in [29]. Recently, [35] obtained a convergence result for certain three-term alternation applied to the noisy principal component pursuit problem (see also [36]). However, [35] reflects a very similar theory-practice gap – the three-term alternation for which convergence has been established is slower in practice than an alternation in the form of algorithm...
3 | Joint Alignment up to (Lossy) Transformations
- Vedaldi, Guidi, et al.
- 2008
Citation Context: ...Fig. 1), the matrix of aligned images might have an unknown rank higher than one. In this case, it is more appropriate to search for an alignment that minimizes the rank of the aligned images. So in [9], Vedaldi et al. choose to minimize a log-determinant measure that can be viewed as a smooth surrogate for the rank function [10]. The low-rank objective can also be directly enforced, as in Transform...
1 | Least-squares congealing for large numbers of images
- Cox, Lucey, et al.
- 2009
Citation Context: ...6]. If we stack the aligned images as the columns of a large matrix, this criterion demands that each row of this matrix be nearly constant. Conversely, the least squares congealing procedure of [7], [8] seeks an alignment that minimizes the sum of squared distances between pairs of images, and hence demands that the columns be nearly constant. In both cases, if the criterion is satisfied exactly, th...