Results 31–40 of 111
Tight convex relaxations for sparse matrix factorization
, 2014
Abstract

Cited by 3 (0 self)
Based on a new atomic norm, we propose a new convex formulation for sparse matrix factorization problems in which the number of nonzero elements of the factors is assumed fixed and known. The formulation counts sparse PCA with multiple factors, subspace clustering, and low-rank sparse bilinear regression among its potential applications. We compute slow rates and an upper bound on the statistical dimension (Amelunxen et al., 2013) of the proposed norm for rank-1 matrices, showing that its statistical dimension is an order of magnitude smaller than those of the usual ℓ1 norm, trace norm, and their combinations. Even though our convex formulation is hard in theory and does not lead to provably polynomial-time algorithmic schemes, we propose an active-set algorithm that leverages the structure of the convex problem to solve it, and we show promising numerical results.
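For reference, an atomic norm follows the standard gauge construction over a set of atoms 𝒜 (the paper's specific sparse atomic set is its own contribution and is not reproduced here):

```latex
\|x\|_{\mathcal{A}}
\;=\; \inf\bigl\{\, t > 0 \;:\; x \in t \cdot \mathrm{conv}(\mathcal{A}) \,\bigr\}
\;=\; \inf\Bigl\{\, \sum_{a \in \mathcal{A}} c_a \;:\; x = \sum_{a \in \mathcal{A}} c_a\, a,\;\; c_a \ge 0 \,\Bigr\}.
```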
Riemannian Similarity Learning
Abstract

Cited by 3 (0 self)
We consider a similarity-score-based paradigm to address scenarios where either the class labels are only partially revealed during learning, or the training and testing data are drawn from heterogeneous sources. The learning problem is subsequently formulated as optimization over a bilinear form of fixed rank. Our paradigm bears similarity to metric learning; the major difference lies in its aim of learning a rectangular similarity matrix instead of a proper metric. We tackle this problem in a Riemannian optimization framework. In particular, we consider its applications in pairwise-based action recognition and cross-domain image-based object recognition. In both applications, the proposed algorithm produces competitive performance on the respective benchmark datasets.
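A fixed-rank bilinear similarity score can be evaluated cheaply in factored form. A minimal sketch (variable and function names are ours, not the paper's), assuming the similarity matrix is parameterized as W = U Vᵀ with rank r:

```python
import numpy as np

# Hypothetical sketch: a rank-r bilinear similarity s(x, y) = x^T W y
# with W = U @ V.T. The score can be computed as (U^T x) . (V^T y)
# without ever forming the d1-by-d2 matrix W.
rng = np.random.default_rng(0)
d1, d2, r = 5, 4, 2
U = rng.standard_normal((d1, r))
V = rng.standard_normal((d2, r))
x = rng.standard_normal(d1)
y = rng.standard_normal(d2)

score_factored = (U.T @ x) @ (V.T @ y)   # O(r(d1 + d2)) work
score_full = x @ (U @ V.T) @ y           # O(d1 * d2) work, same value
assert np.isclose(score_factored, score_full)
```

The factored evaluation is what makes optimizing directly over the rank-r manifold attractive compared with storing a dense rectangular W.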
Poisson Image Reconstruction with Hessian Schatten-Norm Regularization
, 2013
Abstract

Cited by 2 (1 self)
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction under a regularization approach that relies on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
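The stated link between the ℓp prox and the Schatten prox of order p amounts, in the p = 1 case, to applying the vector prox (soft-thresholding) to the singular values. A minimal illustrative sketch for p = 1 (not the paper's code; function names are ours):

```python
import numpy as np

def prox_l1(v, lam):
    """Prox of lam * ||v||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_schatten1(X, lam):
    """Prox of lam * ||X||_S1 (the nuclear norm): soft-threshold the
    singular values while keeping the singular vectors unchanged."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, lam)) @ Vt

X = np.diag([3.0, 1.0])
Y = prox_schatten1(X, 0.5)
# For a diagonal PSD matrix the singular values are the diagonal entries,
# so the result is diag(2.5, 0.5).
```

The same pattern extends to other p: compute an SVD, apply the ℓp prox to the singular-value vector, and rebuild the matrix.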
Web-Scale Multi-Task Feature Selection for Behavioral Targeting
Abstract

Cited by 2 (1 self)
A typical behavioral targeting system optimizing purchase activities, called conversions, faces two main challenges: the web-scale amounts of user histories to process on a daily basis, and the relative sparsity of conversions. In this paper, we address these challenges through feature selection. We formulate a multi-task (or group) feature-selection problem among a set of related tasks (sharing a common set of features), namely advertising campaigns. We apply a group-sparse penalty consisting of a combination of an ℓ1 and an ℓ2 penalty, together with an associated fast optimization algorithm for distributed parameter estimation. Our algorithm relies on a variant of the well-known Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), a closed-form solution for mixed-norm programming, and a distributed subgradient oracle. To efficiently handle web-scale user histories, we present a distributed inference algorithm for the problem that scales to billions of instances and millions of attributes. We show the superiority of our algorithm, in terms of both sparsity and ROC performance, over baseline feature-selection methods (both single-task ℓ1 regularization and multi-task mutual-information gain).
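For the ℓ1/ℓ2 group penalty, the closed-form mixed-norm prox used inside FISTA-type iterations is the well-known group soft-thresholding operator. A minimal sketch (the function name and toy example are ours, not the paper's):

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Prox of lam * sum_g ||w_g||_2 (the l1/l2 mixed norm):
    each group is shrunk toward zero, and any group whose l2 norm
    falls below lam is zeroed out entirely, giving groupwise sparsity."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.2])
groups = [[0, 1], [2, 3]]
# First group has norm 5 -> scaled by (1 - 1/5); second has norm < 1 -> zeroed.
print(group_soft_threshold(w, groups, 1.0))
```

This is what makes whole features (groups of per-campaign weights) drop out jointly rather than coordinate by coordinate.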
Robust Spectral Learning for Unsupervised Feature Selection
Abstract

Cited by 2 (1 self)
In this paper, we consider the problem of unsupervised feature selection. Recently, spectral feature selection algorithms, which leverage both the graph Laplacian and spectral regression, have received increasing attention. However, existing spectral feature selection algorithms suffer from two major problems: 1) since the graph Laplacian is constructed from the original feature space, noisy and irrelevant features may have an adverse effect on the estimated graph Laplacian and hence degrade the quality of the induced graph embedding; 2) since the cluster labels are discrete in nature, relaxing and approximating these labels into a continuous embedding inevitably introduces noise into the estimated cluster labels. Without accounting for this noise in the cluster labels, the feature selection process may be misguided. We propose a Robust Spectral learning framework for unsupervised Feature Selection (RSFS), which jointly improves the robustness of graph embedding and sparse spectral regression. Compared with existing methods, which are sensitive to noisy features, our proposed method uses a robust local learning method to construct the graph Laplacian and a robust spectral regression method to handle the noise in the learned cluster labels. To solve the proposed optimization problem, an efficient iterative algorithm is proposed. We also show the close connection between the proposed robust spectral regression and the robust Huber M-estimator. Experimental results on different datasets show the superiority of RSFS.
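For reference, the Huber M-estimator to which the robust spectral regression is connected replaces the squared loss with a function that is quadratic for small residuals and linear for large ones, capping the influence of outliers. A minimal sketch (the parameter name `delta` is ours):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: 0.5*r^2 when |r| <= delta, and the linear
    continuation delta*(|r| - 0.5*delta) otherwise."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

print(huber(np.array([0.5, 3.0])))  # small residual vs. outlier
```

With delta = 1, a residual of 0.5 incurs loss 0.125 (quadratic regime), while a residual of 3 incurs only 2.5 rather than the 4.5 the squared loss would give.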
Epigraphical splitting for solving constrained convex formulations of inverse problems with proximal tools
, 2014
Abstract

Cited by 2 (1 self)
We propose a proximal approach to deal with a class of convex variational problems involving nonlinear constraints. A large family of constraints, proven to be effective in the solution of inverse problems, can be expressed as the lower level set of a sum of convex functions evaluated over different, but possibly overlapping, blocks of the signal. For such constraints, the associated projection operator generally does not have a simple form. We circumvent this difficulty by splitting the lower level set into as many epigraphs as there are functions involved in the sum. A closed half-space constraint is also enforced, in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set. In this paper, we focus on a family of constraints involving linear transforms of distance functions to a convex set, or ℓ1,p norms with p ∈ {1, 2, +∞}. In these cases, the projection onto the epigraph of the involved function has a closed-form expression. The proposed approach is validated in the context of image restoration with missing samples, by making use of constraints based on Non-Local Total Variation. Experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems. A second application, to a pulse-shape design problem, illustrates the flexibility of the proposed approach.
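As one example of the closed-form epigraphical projections such methods rely on, the projection onto the epigraph of the ℓ2 norm (the second-order cone) is a standard formula. A minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def project_epi_l2(x, t):
    """Euclidean projection of the point (x, t) onto the epigraph of the
    l2 norm, {(y, s) : ||y||_2 <= s}, i.e. the second-order cone."""
    nx = np.linalg.norm(x)
    if nx <= t:
        return x, t                      # already feasible
    if nx <= -t:
        return np.zeros_like(x), 0.0     # projects to the cone's apex
    alpha = 0.5 * (nx + t)               # closed-form scaling factor
    return alpha * x / nx, alpha

y, s = project_epi_l2(np.array([3.0, 4.0]), 0.0)
# ||x|| = 5 and t = 0, so alpha = 2.5, y = (1.5, 2.0), s = 2.5
```

The projected point satisfies ||y||_2 = s exactly when the input lies outside the cone, which is what the epigraphical splitting exploits.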
Style Compatibility for 3D Furniture Models
Abstract

Cited by 2 (0 self)
Figure 1: This paper proposes a method to learn a metric for stylistic compatibility between furniture in a scene. (a) The image on the left shows a plausible furniture arrangement, but with a randomly chosen mix of clashing furniture styles. The scene on the right has the same arrangement, but with furniture pieces chosen to optimize stylistic compatibility according to our metric.

This paper presents a method for learning to predict the stylistic compatibility between 3D furniture models from different object classes: e.g., how well does this chair go with that table? To do this, we collect relative assessments of style compatibility using crowdsourcing. We then compute geometric features for each 3D model and learn a mapping of them into a space where Euclidean distances represent style incompatibility. Motivated by the geometric subtleties of style, we introduce part-aware geometric feature vectors that characterize the shapes of different parts of an object separately. Motivated by the need to compute style compatibility between different object classes, we introduce a method to learn object-class-specific mappings from geometric features to a shared feature space. In experiments with these methods, we find that they are effective at predicting the style compatibility agreed upon by people. We find in user studies that the learned compatibility metric is useful for novel interactive tools that: 1) retrieve stylistically compatible models for a query, 2) suggest a piece of furniture for an existing scene, and 3) help guide an interactive 3D modeler towards scenes with compatible furniture.
Scalable Hierarchical Multitask Learning Algorithms for Conversion Optimization in Display Advertising
Abstract

Cited by 2 (0 self)
Many estimation tasks come in groups and hierarchies of related problems. In this paper we propose a hierarchical model and a scalable algorithm to perform inference for multitask learning. It infers task correlation and subtask structure in a joint sparse setting. Implementation is achieved by a distributed subgradient oracle and the successive application of prox-operators pertaining to groups and subgroups of variables. We apply this algorithm to conversion optimization in display advertising. Experimental results on over 1 TB of data, for up to 1 billion observations and 1 million attributes, show that the algorithm provides significantly better prediction accuracy while being efficiently scalable through distributed parameter synchronization.
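The successive application of prox-operators over groups and subgroups can be sketched with a two-level group soft-thresholding pass; this is our own illustrative version with made-up names and weights, not the paper's implementation (for tree-structured groups, applying the block proxes from the leaves upward is known to compose into the exact prox of the summed penalty):

```python
import numpy as np

def block_shrink(w, idx, lam):
    """Prox of lam * ||w_idx||_2 applied in place to one block; assumes
    lam > 0 (the division only runs when norm > lam)."""
    norm = np.linalg.norm(w[idx])
    w[idx] = (1.0 - lam / norm) * w[idx] if norm > lam else 0.0
    return w

def hierarchical_prox(w, subgroups, groups, lam_sub, lam_grp):
    """Successive prox application: shrink each subgroup first, then each
    enclosing group, mirroring a two-level task hierarchy."""
    w = w.copy()
    for idx in subgroups:
        w = block_shrink(w, idx, lam_sub)
    for idx in groups:
        w = block_shrink(w, idx, lam_grp)
    return w

w = np.array([3.0, 4.0, 0.1, 0.2])
out = hierarchical_prox(w, subgroups=[[0, 1], [2, 3]],
                        groups=[[0, 1, 2, 3]], lam_sub=1.0, lam_grp=1.0)
# Subgroup pass: [2.4, 3.2, 0, 0]; group pass (norm 4): [1.8, 2.4, 0, 0]
```

Weak subgroups are zeroed first, and the outer group shrinkage then couples the surviving subtasks.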
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, arXiv preprint arXiv:1406.6404v3
, 2014
Abstract

Cited by 2 (0 self)
Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in