### Citations

4182 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996
Citation Context: ...m is the observation, A : R^n → R^m, and J is either the ℓ1-norm, the ℓ∞-norm, the ℓ1 − ℓ2-norm, the TV semi-norm or the nuclear norm. Example 4.1 (ℓ1-norm). For x ∈ R^n, the sparsity-promoting ℓ1-norm [8, 28] is J(x) = ∑_{i=1}^n |x_i|. It can be verified that J is a polyhedral norm, and thus J ∈ PSLS_x(T_x) for the model subspace M = T_x = { u ∈ R^n : supp(u) ⊆ supp(x) }, and e_x = sign(x). The proximity operator of...
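The proximity operator mentioned at the end of this context is entrywise soft-thresholding. A minimal sketch, with the function and variable names being ours:

```python
import numpy as np

def prox_l1(x, gamma):
    """Proximity operator of gamma * ||.||_1: entrywise soft-thresholding,
    prox(x)_i = sign(x_i) * max(|x_i| - gamma, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

Note that the output's support is always contained in supp(x), consistent with the model subspace T_x described above.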

2704 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1999
Citation Context: ...m is the observation, A : R^n → R^m, and J is either the ℓ1-norm, the ℓ∞-norm, the ℓ1 − ℓ2-norm, the TV semi-norm or the nuclear norm. Example 4.1 (ℓ1-norm). For x ∈ R^n, the sparsity-promoting ℓ1-norm [8, 28] is J(x) = ∑_{i=1}^n |x_i|. It can be verified that J is a polyhedral norm, and thus J ∈ PSLS_x(T_x) for the model subspace M = T_x = { u ∈ R^n : supp(u) ⊆ supp(x) }, and e_x = sign(x). The proximity operator of...

2264 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context: ...Let J0 be a closed convex function and D a linear operator. Popular examples are the TV semi-norm, in which case J0 = || · ||_1 and D* = D_DIF is a finite-difference approximation of the derivative [26], or the fused Lasso for D = [D_DIF, ǫId] [29]. If J0 ∈ PS_{D*x}(M0), then it is shown in [19, Theorem 4.2] that under an appropriate transversality condition, J ∈ PS_x(M) where M = { u ∈ R^n : D*u ∈ M0 }....
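One common convention for the finite-difference operator D_DIF mentioned above can be sketched as follows; this explicit construction is our illustrative assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def finite_difference_matrix(n):
    """(n-1) x n forward-difference matrix with (D x)_i = x_{i+1} - x_i.
    With this D, ||D x||_1 is the (anisotropic) 1D TV semi-norm of x."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0      # subtract current sample
    D[idx, idx + 1] = 1.0   # add next sample
    return D
```

A piecewise-constant signal then has a sparse image under D, which is exactly the structure the TV semi-norm promotes.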

1152 | Model selection and estimation in regression with grouped variables
- Yuan, Lin
- 2006
Citation Context: ...}, and e_x = sign(x). The proximity operator of the ℓ1-norm is given by simple soft-thresholding. Example 4.2 (ℓ1 − ℓ2-norm). The ℓ1 − ℓ2-norm is usually used to promote group-structured sparsity [34]. Let the support of x ∈ R^n be divided into non-overlapping blocks B such that ⋃_{b∈B} b = {1, …, n}. The ℓ1 − ℓ2-norm is given by J(x) = ||x||_B = ∑_{b∈B} ||x_b||, where x_b = (x_i)_{i∈b} ∈ R^{|b|}. || · ||_B ...
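The prox of the ℓ1 − ℓ2 norm acts block-wise as group soft-thresholding, shrinking each block toward zero by its Euclidean norm. A sketch under the partition convention above (names are ours):

```python
import numpy as np

def prox_l1l2(x, gamma, blocks):
    """Proximity operator of gamma * sum_b ||x_b||_2 (group soft-thresholding).
    `blocks` is a list of index lists partitioning {0, ..., n-1}."""
    out = np.zeros_like(x)
    for b in blocks:
        nrm = np.linalg.norm(x[b])
        if nrm > gamma:
            # shrink the whole block toward zero; blocks with norm <= gamma vanish
            out[b] = (1.0 - gamma / nrm) * x[b]
    return out
```

With singleton blocks this reduces to the entrywise soft-thresholding of the ℓ1 case.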

864 | Exact matrix completion via convex optimization
- Candes, Recht
- 2009
Citation Context: ...of x is defined as J(x) = ||x||_* = ∑_{i=1}^r (Λ_x)_i, where rank(x) = r. It has been used, for instance, as an SDP convex relaxation for many problems, including machine learning [2, 12], matrix completion [24, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [20, Example 2], M = { z ∈ R^{n1×n2} : rank(z) = r }. The tangent space to M at x and e_x are giv...
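The proximity operator of the nuclear norm is the standard singular value thresholding: soft-threshold the singular values and recompose. A minimal sketch (names are ours):

```python
import numpy as np

def prox_nuclear(X, gamma):
    """Prox of gamma * ||.||_* : soft-threshold the singular values
    (singular value thresholding); the rank can only decrease."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - gamma, 0.0)) @ Vt
```

This is the matrix analogue of ℓ1 soft-thresholding, applied to the spectrum Λ_x rather than the entries.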

561 | Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- Recht, Fazel, et al.
- 2007
Citation Context: ...of x is defined as J(x) = ||x||_* = ∑_{i=1}^r (Λ_x)_i, where rank(x) = r. It has been used, for instance, as an SDP convex relaxation for many problems, including machine learning [2, 12], matrix completion [24, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [20, Example 2], M = { z ∈ R^{n1×n2} : rank(z) = r }. The tangent space to M at x and e_x are giv...

506 | Signal recovery by proximal forward-backward splitting
- Combettes, Wajs
- 2005
Citation Context: ..., this derivative is given by DP_M(x) = P_{T_M(x)} = P_{T_x}. The case where M is linear is immediate. This concludes the proof. Proof of Theorem 3.1. (1) Classical convergence results for the FB scheme, e.g. [9], show that x_k converges to some x⋆ ∈ Argmin Φ ≠ ∅ by assumption (A.3). Assumptions (A.1)-(A.2) entail that (3.1) is equivalent to 0 ∈ ri(∂Φ(x⋆)). Since F ∈ C2 around x⋆, the smooth perturbation ...

330 | Sparsity and smoothness via the fused lasso
- Tibshirani, Saunders, et al.
- 2005
Citation Context: ... a linear operator. Popular examples are the TV semi-norm, in which case J0 = || · ||_1 and D* = D_DIF is a finite-difference approximation of the derivative [26], or the fused Lasso for D = [D_DIF, ǫId] [29]. If J0 ∈ PS_{D*x}(M0), then it is shown in [19, Theorem 4.2] that under an appropriate transversality condition, J ∈ PS_x(M) where M = { u ∈ R^n : D*u ∈ M0 }. In particular, for the case of the TV semi-n...

280 | Convex Analysis and Monotone Operator Theory in Hilbert Spaces
- Bauschke, Combettes
- 2011
Citation Context: ...erence operator, anti-sparsity regularization when J = || · ||_∞, and nuclear norm regularization when J = || · ||_*. The standard (non-relaxed) version of the Forward–Backward (FB) splitting algorithm [3] to solve (P) updates to a new iterate x_{k+1} via the rule x_{k+1} = prox_{γ_k J}(x_k − γ_k ∇F(x_k)), (1.1) starting from any point x_0 ∈ R^n, where the step sizes satisfy 0 < γ ≤ γ_k ≤ γ̄ < 2/β. The proximity operator i...
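To make rule (1.1) concrete, here is a minimal sketch for F(x) = ½||Ax − y||² and J = λ|| · ||_1 (the Lasso), with a fixed step size; all names are ours, and β = ||A||² is the Lipschitz constant of ∇F:

```python
import numpy as np

def prox_l1(x, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, gamma, lam, n_iter=500):
    """Iteration (1.1) for F(x) = 0.5 * ||A x - y||^2 and J = lam * ||.||_1,
    with a fixed step gamma assumed to lie in (0, 2 / ||A||^2)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                     # forward (explicit gradient) step
        x = prox_l1(x - gamma * grad, gamma * lam)   # backward (implicit prox) step
    return x
```

For A = Id this converges to entrywise soft-thresholding of y, the prox of the ℓ1 term alone.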

159 | A coordinate gradient descent method for nonsmooth separable minimization
- Tseng, Yun
- 2009
Citation Context: ...s, the proximal point algorithm. Their work extends that of, e.g., [33] on identifiable surfaces from the convex case to a general non-smooth setting. Using these results, [14] considered the algorithm of [30] to solve (P) where J is partly smooth, but not necessarily convex, and F is C2(R^n), and proved finite identification of the active manifold. However, the convergence rate remains an open problem in ...

154 | Convex Analysis and Minimization Algorithms, Vol. I - Hiriart-Urruty, Lemaréchal - 1996

132 | PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming
- Candes, Strohmer, et al.
Citation Context: ...||x||_* = ∑_{i=1}^r (Λ_x)_i, where rank(x) = r. It has been used, for instance, as an SDP convex relaxation for many problems, including machine learning [2, 12], matrix completion [24, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [20, Example 2], M = { z ∈ R^{n1×n2} : rank(z) = r }. The tangent space to M at x and e_x are given by T_M(x) = { z ∈ R^{n1×...

83 | Local differentiability of distance functions
- Poliquin, Rockafellar, et al.
- 2000
Citation Context: ... x, and thus x′ − x = P_{T_x}(x′ − x) + o(||x′ − x||). If J ∈ PSL_x(T_x), then x′ − x = P_{T_x}(x′ − x). Proof. Partial smoothness implies that M is a C2-manifold around x; then P_M(x′) is uniquely valued [23] and moreover C1 near x [20, Lemma 4]. Thus, continuous differentiability shows x′ − x = P_M(x′) − P_M(x) = DP_M(x)(x′ − x) + o(||x − x′||), where DP_M(x) is the derivative of P_M at x. By virtue of [20...

81 | Local extremes, runs, strings and multiresolution
- Davies, Kovac
- 2001
Citation Context: ...p(D*u) ⊆ I } and e_x = P_{T_x} D sign(D*x), where I = supp(D*x). The proximity operator for the 1D TV, though not available in closed form, can be obtained efficiently using either the taut string algorithm [11] or graph cuts [7]. Example 4.4 (ℓ∞-norm). For x ∈ R^n, the anti-sparsity promoting ℓ∞-norm is defined as J(x) = max_{1≤i≤N} |x_i|. It plays a prominent role in a variety of applications ...

75 | Consistency of trace norm minimization.
- Bach
- 2008
Citation Context: ...VD) of x. The nuclear norm of x is defined as J(x) = ||x||_* = ∑_{i=1}^r (Λ_x)_i, where rank(x) = r. It has been used, for instance, as an SDP convex relaxation for many problems, including machine learning [2, 12], matrix completion [24, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [20, Example 2], M = { z ∈ R^{n1×n2} : rank(z) = r }. The tangent spa...

68 | Fixed-point continuation for ℓ1-minimization: methodology and convergence
- Hale, Yin, et al.
- 2008
Citation Context: ...s established in [4] under either a very restrictive injectivity assumption, or a non-degeneracy assumption which is a specialization of ours (see (3.1)) to the ℓ1-norm. A similar result is proved in [13] for F a smooth convex and locally C2 function and J the ℓ1-norm, under restricted injectivity and non-degeneracy assumptions. The ℓ1-norm is a partly smooth function and hence covered by our r...

52 | Linear convergence of iterative soft-thresholding
- Bredies, Lorenz
- 2008
Citation Context: ...ort our theoretical findings. 1.3 Related work Finite support identification and local R-linear convergence of FB to solve the Lasso problem, though in an infinite-dimensional setting, is established in [4] under either a very restrictive injectivity assumption, or a non-degeneracy assumption which is a specialization of ours (see (3.1)) to the ℓ1-norm. A similar result is proved in [13], for F being a ...

48 | Active sets, nonsmoothness, and sensitivity
- Lewis
- 2003
Citation Context: ...interior relative to its affine hull. 2 Partial smoothness In addition to (A.1), our central assumption is that J is a partly smooth function. Partial smoothness of functions was originally defined in [19]. Our definition hereafter specializes it to the case of finite-valued convex functions. Definition 2.1. Let J be a finite-valued convex function. J is partly smooth at x relative to a set M containin...

48 | Alternating projections on manifolds - Lewis, Malick - 2008

37 | Variational analysis, volume 317 - Rockafellar, Wets - 1998

36 | Fast global convergence of gradient methods for high-dimensional statistical recovery
- Agarwal, Negahban, et al.
- 2011
Citation Context: ...being a smooth convex and locally C2 function and J the ℓ1-norm, under restricted injectivity and non-degeneracy assumptions. The ℓ1-norm is a partly smooth function and hence covered by our results. [1] proved Q-linear convergence of FB to solve (P) for F satisfying restricted smoothness and strong convexity assumptions, and J being a so-called convex decomposable regularizer. Again, the latter is a ...

35 | Identifying active constraints via partial smoothness and prox-regularity
- Hare, Lewis
Citation Context: ...lass of partly smooth functions, and their result is then covered by ours. For example, our framework covers the total variation (TV) semi-norm and ℓ∞-norm regularizers, which are not decomposable. In [15, 16], the authors have shown finite identification of active manifolds associated to partly smooth functions for various algorithms, including the (sub)gradient projection method, Newton-like methods, t...

29 | Trace lasso: a trace norm regularization for correlated designs.
- Grave, Obozinski, et al.
- 2011
Citation Context: ...VD) of x. The nuclear norm of x is defined as J(x) = ||x||_* = ∑_{i=1}^r (Λ_x)_i, where rank(x) = r. It has been used, for instance, as an SDP convex relaxation for many problems, including machine learning [2, 12], matrix completion [24, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [20, Example 2], M = { z ∈ R^{n1×n2} : rank(z) = r }. The tangent spa...

29 | Identifiable surfaces in constrained optimization
- Wright
- 1993
Citation Context: ...nifolds associated to partly smooth functions for various algorithms, including the (sub)gradient projection method, Newton-like methods, and the proximal point algorithm. Their work extends that of, e.g., [33] on identifiable surfaces from the convex case to a general non-smooth setting. Using these results, [14] considered the algorithm of [30] to solve (P) where J is partly smooth, but not necessarily conve...

20 | Uncertainty principles and vector quantization
- Lyubarskii, Vershynin
- 2010
Citation Context: ...rsity promoting ℓ∞-norm is defined as J(x) = max_{1≤i≤N} |x_i|. It plays a prominent role in a variety of applications including approximate nearest neighbor search [18] or vector quantization [21]; see also [27] and references therein. It can be verified that J is a polyhedral norm, and thus J ∈ PSLS_x(T_x) for the model subspace M = T_x = { α : α(I) = r s(I), r ∈ R }, and e_x = s/|I|, where s = sig...

18 | Anti-sparse coding for approximate nearest neighbor search (arXiv:1110.3767v2)
- Jégou, Furon, et al.
- 2011
Citation Context: ...). For x ∈ R^n, the anti-sparsity promoting ℓ∞-norm is defined as J(x) = max_{1≤i≤N} |x_i|. It plays a prominent role in a variety of applications including approximate nearest neighbor search [18] or vector quantization [21]; see also [27] and references therein. It can be verified that J is a polyhedral norm, and thus J ∈ PSLS_x(T_x) for the model subspace M = T_x = { α : α(I) = r s(I), r ∈ R }, an...
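Unlike the ℓ1 case, the ℓ∞ prox has no simple entrywise formula, but it follows from the Moreau decomposition together with Euclidean projection onto the ℓ1 ball. A sketch (function names are ours):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto the l1-ball of the given radius (sort-based)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]            # magnitudes in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(x, gamma):
    """Prox of gamma * ||.||_inf via the Moreau decomposition:
    prox_{gamma J}(x) = x - gamma * P_{l1-ball}(x / gamma)."""
    return x - gamma * project_l1_ball(x / gamma, 1.0)
```

The effect is to clip the largest-magnitude entries down toward a common level, which is exactly the anti-sparse behavior the context describes.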

15 | Newton methods for nonsmooth convex minimization: connections among U-Lagrangian, Riemannian Newton and SQP
- Miller, Malick
- 2005
Citation Context: ...k correctly identifies the manifold, then one can turn to geometric methods along the manifold M, where even faster convergence rates can be achieved. For instance, the Newton-like methods proposed in [22] attain local quadratic convergence for partly smooth functions, with the proviso that the gradient and Hessian along the manifold can be computed. When J ∈ PSLS_{x⋆}(T), it turns out that the restri...

9 | Identifying active manifolds
- Hare, Lewis
Citation Context: ...lass of partly smooth functions, and their result is then covered by ours. For example, our framework covers the total variation (TV) semi-norm and ℓ∞-norm regularizers, which are not decomposable. In [15, 16], the authors have shown finite identification of active manifolds associated to partly smooth functions for various algorithms, including the (sub)gradient projection method, Newton-like methods, t...

8 | Orthogonal Invariance and Identifiability
- Daniilidis, Drusvyatskiy, et al.
Citation Context: ...er, absolutely permutation-invariant convex and partly smooth functions of the singular values of a real matrix, i.e. spectral functions, are convex and partly smooth spectral functions of the matrix [10]. It then follows that all the examples discussed in Section 1, including the ℓ1, ℓ1 − ℓ2, ℓ∞, TV and nuclear norm regularizers, are partly smooth. In fact, the nuclear norm is partly smooth at a matrix x ...

5 | Identifying active manifolds in regularization problems
- Hare
- 2011
Citation Context: ...ion method, Newton-like methods, and the proximal point algorithm. Their work extends that of, e.g., [33] on identifiable surfaces from the convex case to a general non-smooth setting. Using these results, [14] considered the algorithm of [30] to solve (P) where J is partly smooth, but not necessarily convex, and F is C2(R^n), and proved finite identification of the active manifold. However, the convergence ra...

3 | Signal representations with minimum ℓ∞-norm
- Studer, Yin, et al.
- 2012
Citation Context: ... ℓ∞-norm is defined as J(x) = max_{1≤i≤N} |x_i|. It plays a prominent role in a variety of applications including approximate nearest neighbor search [18] or vector quantization [21]; see also [27] and references therein. It can be verified that J is a polyhedral norm, and thus J ∈ PSLS_x(T_x) for the model subspace M = T_x = { α : α(I) = r s(I), r ∈ R }, and e_x = s/|I|, where s = sign(x) and I = { ...

3 | Model selection with piecewise regular gauges (submitted) - Vaiter, Fadili, et al. - 2013

2 | A parametric maximum flow approach for discrete total variation regularization
- Chambolle, Darbon
- 2012
Citation Context: ...P_{T_x} D sign(D*x), where I = supp(D*x). The proximity operator for the 1D TV, though not available in closed form, can be obtained efficiently using either the taut string algorithm [11] or graph cuts [7]. Example 4.4 (ℓ∞-norm). For x ∈ R^n, the anti-sparsity promoting ℓ∞-norm is defined as J(x) = max_{1≤i≤N} |x_i|. It plays a prominent role in a variety of applications including approximate ...

1 | Partly smooth regularization of inverse problems
- Vaiter, Peyré, et al.
- 2014
Citation Context: .... In fact, the nuclear norm is partly smooth at a matrix x relative to the manifold M = { x′ : rank(x′) = rank(x) }. The first three regularizers are all part of the class PSL_x(T_x); see Section 4 and [32] for details. We now define a subclass of partly smooth functions where the active manifold is actually a subspace and the generalized sign vector e_x is locally constant. Definition 2.2. J belongs to ...