Results 11 – 20 of 115
Hessian Schatten-Norm Regularization for Linear Inverse Problems
Abstract

Cited by 11 (8 self)
Abstract — We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) seminorm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto ℓq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data. Index Terms — Eigenvalue optimization, Hessian operator, image reconstruction, matrix projections, Schatten norms.
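The vector-to-matrix projection link described in the abstract can be sketched for the Schatten 1-norm (nuclear norm) case: take an SVD, project the singular values onto an ℓ1 ball, and rebuild the matrix. A minimal NumPy sketch under that assumption (the function names and the sort-based ℓ1 projection are our choices, not the paper's implementation):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of a nonnegative vector onto the l1 ball,
    via the standard sort-and-shift construction."""
    if v.sum() <= radius:
        return v.copy()
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(X, radius=1.0):
    """Project a matrix onto the Schatten 1-norm (nuclear norm) ball by
    applying the l1-ball projection to its singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, radius)) @ Vt
```

The same pattern applies for other Schatten q-norms, with the matching ℓq ball projection on the singular values.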
Shaping Level Sets with Submodular Functions
Abstract

Cited by 11 (6 self)
We consider a class of sparsity-inducing regularization terms based on submodular functions. While previous work has focused on non-decreasing functions, we explore symmetric submodular functions and their Lovász extensions. We show that the Lovász extension may be seen as the convex envelope of a function that depends on level sets (i.e., the set of indices whose corresponding components of the underlying predictor are greater than a given constant): this leads to a class of convex structured regularization terms that impose prior knowledge on the level sets, and not only on the supports of the underlying predictors. We provide unified optimization algorithms, such as proximal operators, and theoretical guarantees (allowed level sets and recovery conditions). By selecting specific submodular functions, we give a new interpretation to known norms, such as the total variation; we also define new norms, in particular ones that are based on order statistics with application to clustering and outlier detection, and on noisy cuts in graphs with application to change point detection in the presence of outliers.
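The Lovász extension itself is cheap to evaluate with the standard sort-based formula; for the cut function of a chain graph it recovers the total variation, matching the abstract's TV interpretation. A small illustrative sketch (the helper names are ours):

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension of a set function F (with F(empty) = 0)
    at w: visit indices in decreasing order of w and accumulate w_j times
    the marginal gain of adding index j."""
    order = np.argsort(-w)
    val, prev, chosen = 0.0, F(frozenset()), set()
    for j in order:
        chosen.add(j)
        cur = F(frozenset(chosen))
        val += w[j] * (cur - prev)
        prev = cur
    return val

def chain_cut(A, n):
    """Cut function of a chain graph on n nodes: number of boundary edges."""
    return sum(1 for i in range(n - 1) if (i in A) != (i + 1 in A))
```

For `chain_cut`, the extension evaluates to the discrete total variation, the sum of |w[i+1] - w[i]|.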
Learning Separable Filters
, 2012
Abstract

Cited by 10 (2 self)
Abstract. While learned image features can achieve great accuracy on different Computer Vision problems, their use in real-world situations is still very limited as their extraction is typically time-consuming. We therefore propose a method to learn image features that can be extracted very efficiently using separable filters, by looking for low-rank filters. We evaluate our approach on both the image categorization and the pixel classification tasks and show that we obtain accuracy similar to that of state-of-the-art methods, at a fraction of the computational cost.
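The low-rank idea can be illustrated with a truncated SVD of a 2D filter: each retained singular triplet yields one column filter and one row filter, so a k×k convolution drops to two length-k passes per term. A hedged sketch (this is the classical separable approximation, not the paper's learning procedure, which learns the filter bank itself):

```python
import numpy as np

def separable_approximation(kernel, rank=1):
    """Approximate a 2D filter by a sum of `rank` separable (rank-one)
    terms via truncated SVD. Each term u_i s_i v_i^T is one column filter
    and one row filter, cutting per-pixel cost from O(k^2) to O(2k)."""
    U, s, Vt = np.linalg.svd(kernel)
    rows = [np.sqrt(s[i]) * U[:, i] for i in range(rank)]
    cols = [np.sqrt(s[i]) * Vt[i, :] for i in range(rank)]
    approx = sum(np.outer(r, c) for r, c in zip(rows, cols))
    return rows, cols, approx
```

A Gaussian kernel is exactly rank one, so a rank-1 approximation reproduces it with no error; generic learned filters need a few terms.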
Efficient Discriminative Projections for Compact Binary Descriptors
Abstract

Cited by 10 (1 self)
Abstract. Binary descriptors of image patches are increasingly popular given that they require less storage and enable faster processing. This, however, comes at the price of lower recognition performance. To boost this performance, we project the image patches to a more discriminative subspace, and threshold their coordinates to build our binary descriptor. However, applying complex projections to the patches is slow, which negates some of the advantages of binary descriptors. Hence, our key idea is to learn the discriminative projections so that they can be decomposed into a small number of simple filters for which the responses can be computed fast. We show that with as few as 32 bits per descriptor we outperform the state-of-the-art binary descriptors in terms of both accuracy and efficiency.
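The project-then-threshold construction amounts to one matrix-vector product followed by sign tests, with Hamming distance for matching. A toy sketch in which a random projection stands in for the learned discriminative one (an assumption for illustration only):

```python
import numpy as np

def binary_descriptor(patch, W, b):
    """Project a flattened patch with projections W and offsets b, then
    threshold each coordinate to obtain one bit of the descriptor."""
    return (W @ patch.ravel() + b > 0).astype(np.uint8)

def hamming(d1, d2):
    """Match binary descriptors by Hamming distance."""
    return int(np.count_nonzero(d1 != d2))
```

With 32 rows in W this yields the 32-bit descriptors the abstract mentions; the paper's contribution is learning W so it factors into a few fast filters.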
Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows
, 2011
Abstract

Cited by 9 (3 self)
We consider supervised learning problems where the features are embedded in a graph, such as gene expressions in a gene network. In this context, it is of much interest to take into account the problem structure and automatically select a subgraph with a small number of connected components. By exploiting prior knowledge, one can indeed improve the prediction performance and/or obtain more interpretable results. Regularization or penalty functions for selecting features in graphs have recently been proposed, but they raise new algorithmic challenges. For example, they typically require solving a combinatorially hard selection problem among all connected subgraphs. In this paper, we propose computationally feasible strategies to select a sparse and “well connected” subset of features sitting on a directed acyclic graph (DAG). We introduce structured sparsity penalties over paths on a DAG called “path coding” penalties. Unlike existing regularization functions, path coding penalties can both model long-range interactions between features in the graph and remain tractable. The penalties and their proximal operators involve path selection problems, which we efficiently solve by leveraging network flow optimization. We experimentally show on synthetic, image, and genomic data that our approach is scalable and leads to more connected subgraphs than other regularization functions for graphs.
Nonparametric Group Orthogonal Matching Pursuit for Sparse Learning with Multiple Kernels
Abstract

Cited by 8 (0 self)
We consider regularized risk minimization in a large dictionary of Reproducing Kernel Hilbert Spaces (RKHSs) over which the target function has a sparse representation. This setting, commonly referred to as Sparse Multiple Kernel Learning (MKL), may be viewed as the nonparametric extension of group sparsity in linear models. While the two dominant algorithmic strands of sparse learning, namely convex relaxations using the l1 norm (e.g., Lasso) and greedy methods (e.g., OMP), have both been rigorously extended for group sparsity, the sparse MKL literature has so far mainly adopted the former with mild empirical success. In this paper, we close this gap by proposing a Group-OMP based framework for sparse MKL. Unlike l1-MKL, our approach decouples the sparsity regularizer (via a direct l0 constraint) from the smoothness regularizer (via RKHS norms), which leads to better empirical performance and a simpler optimization procedure that only requires a black-box single-kernel solver. The algorithmic development and empirical studies are complemented by theoretical analyses in terms of Rademacher generalization bounds and sparse recovery conditions analogous to those for OMP [27] and Group-OMP [16].
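The greedy side of a Group-OMP-style procedure can be sketched as: add the kernel most correlated with the current residual, refit on the sum of selected kernels with a single-kernel (here, kernel ridge) solver, and repeat until the l0 budget is spent. A toy simplification (the selection score and solver are our choices, not the paper's exact criterion):

```python
import numpy as np

def group_omp_kernels(kernels, y, k, reg=1e-3):
    """Greedily select k Gram matrices: score each candidate K by the
    correlation ||K r|| with the residual r, add the best, then refit
    kernel ridge regression on the sum of the selected kernels."""
    n = len(y)
    selected, r = [], y.copy()
    alpha = np.zeros(n)
    for _ in range(k):
        cand = [j for j in range(len(kernels)) if j not in selected]
        best = max(cand, key=lambda j: np.linalg.norm(kernels[j] @ r))
        selected.append(best)
        Ksum = sum(kernels[j] for j in selected)
        alpha = np.linalg.solve(Ksum + reg * np.eye(n), y)
        r = y - Ksum @ alpha
    return selected, alpha
```

The refit step is the "black-box single-kernel solver" role: any kernel ridge or SVR routine could be substituted.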
Simple and Scalable Response Prediction for Display Advertising
Abstract

Cited by 8 (4 self)
Click-through and conversion rate estimation are two core prediction tasks in display advertising. We present in this paper a machine learning framework based on logistic regression that is specifically designed to tackle the specifics of display advertising. The resulting system has the following characteristics: it is easy to implement and deploy; it is highly scalable (we have trained it on terabytes of data); and it provides models with state-of-the-art accuracy.
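A minimal sketch of this kind of system: logistic regression trained by SGD on hashed categorical features. The hashing-trick details here (the table size D and the `field=value` encoding) are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

D = 2 ** 18  # hashed feature table size (illustrative choice)

def hash_features(pairs):
    """Map categorical (field, value) pairs to indices via the hashing trick."""
    return [hash(f + "=" + v) % D for f, v in pairs]

def predict(w, idx):
    """Predicted click probability for a sparse binary example."""
    z = sum(w[i] for i in idx)
    return 1.0 / (1.0 + np.exp(-z))

def sgd_update(w, idx, y, lr=0.1):
    """One logistic-regression SGD step on a hashed example, y in {0, 1}."""
    g = predict(w, idx) - y
    for i in idx:
        w[i] -= lr * g
```

Hashing keeps the weight vector a fixed size regardless of vocabulary, which is one standard route to the terabyte-scale training the abstract describes.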
Convex relaxations of structured matrix factorizations
, 2013
Abstract

Cited by 7 (3 self)
We consider the factorization of a rectangular matrix X into a positive linear combination of rank-one factors of the form uv⊤, where u and v belong to certain sets U and V that may encode specific structures regarding the factors, such as positivity or sparsity. In this paper, we show that computing the optimal decomposition is equivalent to computing a certain gauge function of X, and we provide a detailed analysis of these gauge functions and their polars. Since these gauge functions are typically hard to compute, we present semidefinite relaxations and several algorithms that may recover approximate decompositions with approximation guarantees. We illustrate our results with simulations on finding decompositions with elements in {0,1}. As side contributions, we present a detailed analysis of variational quadratic representations of norms as well as a new iterative basis pursuit algorithm that can deal with inexact first-order oracles.
Simultaneous low-pass filtering and total variation denoising
 IEEE Trans. Signal Process
, 2014
Abstract

Cited by 6 (3 self)
Abstract—This paper seeks to combine linear time-invariant (LTI) filtering and sparsity-based denoising in a principled way in order to effectively filter (denoise) a wider class of signals. LTI filtering is most suitable for signals restricted to a known frequency band, while sparsity-based denoising is suitable for signals admitting a sparse representation with respect to a known transform. However, some signals cannot be accurately categorized as either bandlimited or sparse. This paper addresses the problem of filtering noisy data for the particular case where the underlying signal comprises a low-frequency component and a sparse or sparse-derivative component. A convex optimization approach is presented and two algorithms derived: one based on majorization-minimization (MM), and the other based on the alternating direction method of multipliers (ADMM). It is shown that a particular choice of discrete-time filter, namely zero-phase non-causal recursive filters for finite-length data formulated in terms of banded matrices, makes the algorithms effective and computationally efficient. The efficiency stems from the use of fast algorithms for solving banded systems of linear equations. The method is illustrated using data from a physiological-measurement technique (i.e., near infrared spectroscopic time series imaging) that in many cases yields data that is well-approximated as the sum of low-frequency, sparse or sparse-derivative, and noise components. Index Terms—Total variation denoising, sparse signal, sparsity, low-pass filter, Butterworth filter, zero-phase filter.
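The signal model (low-frequency component plus sparse component plus noise) can be illustrated with a toy alternating scheme: low-pass filter the current residual with a zero-phase moving average, then soft-threshold to update the sparse part. This is a simplified stand-in for intuition only, not the paper's MM or ADMM algorithms with banded recursive filters:

```python
import numpy as np

def lowpass_ma(y, m):
    """Zero-phase moving-average low-pass filter (a symmetric FIR stand-in
    for the paper's zero-phase recursive filters)."""
    return np.convolve(y, np.ones(m) / m, mode="same")

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def decompose(y, m=9, lam=0.5, iters=20):
    """Split y into a low-frequency part f and a sparse part x by
    block-coordinate alternation (a toy scheme, for illustration only)."""
    x = np.zeros_like(y)
    for _ in range(iters):
        f = lowpass_ma(y - x, m)
        x = soft(y - f, lam)
    return f, x
```

On a slow sinusoid with one additive spike, the scheme assigns the sinusoid to f and the spike (shrunk by the threshold, as l1 penalties do) to x.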
Sparse Localized Deformation Components
Abstract

Cited by 6 (0 self)
Figure 1: Our method automatically decomposes mesh animations, such as performance-captured faces (left) or muscle deformations (right), into sparse and localized deformation modes (shown in blue). Left: a new facial expression is generated by summing deformation components. Our method automatically separates spatially confined effects like separate eyebrow motions from the data. Right: our algorithm extracts individual muscle and bone deformations. The deformation components can then be used for convenient editing of the captured animation. Here, the deformation component of the clavicle is over-exaggerated to achieve an artistically desired look.

We propose a method that extracts sparse and spatially localized deformation modes from an animated mesh sequence. To this end, we propose a new way to extend the theory of sparse matrix decompositions to 3D mesh sequence processing, and further contribute an automatic way to ensure spatial locality of the decomposition in a new optimization framework. The extracted dimensions often have an intuitive and clearly interpretable meaning. Our method optionally accepts user constraints to guide the process of discovering the underlying latent deformation space. The capabilities of our efficient, versatile, and easy-to-implement method are extensively demonstrated on a variety of data sets and application contexts. We demonstrate its power for user-friendly, intuitive editing of captured mesh animations, such as faces, full body motion, cloth animations, and muscle deformations. We further show its benefit for statistical geometry processing and biomechanically meaningful animation editing. It is further shown qualitatively and quantitatively that our method outperforms other unsupervised decomposition methods and other animation parameterization approaches in the above use cases.