Results 1–9 of 9
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
2013
Abstract

Cited by 14 (8 self)
We investigate two new optimization problems: minimizing a submodular function subject to a submodular lower-bound constraint (submodular cover), and maximizing a submodular function subject to a submodular upper-bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [9, 25], which is inapproximable in the worst case. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that the two problems are closely related, and that an approximation algorithm for one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, showing that our approximation factors are tight up to log factors. Finally, we empirically demonstrate the performance and good scalability of our algorithms.
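The classical special case of submodular cover, where the cost is modular, is usually illustrated with the standard greedy heuristic: repeatedly add the element with the largest marginal coverage gain until the coverage requirement is met. The sketch below is that textbook baseline, not the paper's algorithm (which handles submodular costs); the sets and target are invented for illustration.

```python
def coverage(sets, S):
    """Submodular coverage function: number of universe elements covered by S."""
    return len(set().union(*(sets[i] for i in S))) if S else 0

def greedy_cover(sets, target):
    """Classical greedy for submodular set cover: repeatedly add the set with
    the largest marginal coverage gain until the coverage target is met."""
    S = []
    while coverage(sets, S) < target:
        best = max((i for i in sets if i not in S),
                   key=lambda i: coverage(sets, S + [i]) - coverage(sets, S))
        S.append(best)
    return S

# Invented instance: cover all 6 universe elements.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
picked = greedy_cover(sets, target=6)  # picks "a" (gain 3), then "c" (gain 3)
```

Because coverage has diminishing returns, this greedy rule enjoys the classical logarithmic approximation guarantee for modular-cost cover.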
Fast Multi-Stage Submodular Maximization
2014
Abstract

Cited by 7 (3 self)
Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MULTGREED), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target submodular function but require less memory and are easier to evaluate. We theoretically analyze the performance guarantees of the multi-stage framework and give examples of how to design instances of MULTGREED for a broad range of natural submodular functions. We show that MULTGREED performs very closely to the standard greedy algorithm given appropriate surrogate functions, and argue how our framework can easily be integrated with distributed algorithms for further optimization. We complement our theory with empirical evaluations on several real-world problems, including data subset selection on millions of speech samples, where MULTGREED yields at least a thousand-fold speedup and superior results over state-of-the-art selection methods.
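The general idea behind a surrogate-based multi-stage scheme can be sketched as a toy two-stage pipeline: a cheap modular surrogate first prunes the ground set, then the true (more expensive) objective is optimized greedily over the survivors. This is a hypothetical illustration of the surrogate idea only, not the MULTGREED algorithm itself; the item names, universe weights, and stage sizes are invented.

```python
def greedy(f, ground, k):
    """Standard greedy: pick k elements by largest marginal gain of f."""
    S = []
    for _ in range(k):
        S.append(max((e for e in ground if e not in S),
                     key=lambda e: f(S + [e]) - f(S)))
    return S

# Invented target: weighted coverage (expensive to evaluate in general).
universe_weights = {1: 5.0, 2: 3.0, 3: 2.0, 4: 1.0}
items = {"a": {1}, "b": {2, 3}, "c": {3, 4}, "d": {4}}

def target(S):
    return sum(universe_weights[u] for u in set().union(*(items[e] for e in S))) if S else 0.0

# Cheap modular surrogate: score each item independently, ignoring overlap.
def surrogate(S):
    return sum(sum(universe_weights[u] for u in items[e]) for e in S)

# Stage 1: shrink the ground set with the cheap surrogate.
pruned = greedy(surrogate, list(items), k=3)
# Stage 2: run greedy with the true target over the survivors.
chosen = greedy(target, pruned, k=2)
```

The surrogate stage touches every item but only via cheap evaluations; the expensive target is evaluated only over the pruned candidate set.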
Provable submodular minimization using Wolfe’s algorithm
In NIPS, 2014
Abstract

Cited by 4 (0 self)
Owing to several applications in large-scale learning and vision problems, fast submodular function minimization (SFM) has become a critical problem. Theoretically, unconstrained SFM can be performed in polynomial time [10, 11]; however, these algorithms are typically not practical. In 1976, Wolfe [21] proposed an algorithm to find the minimum Euclidean norm point in a polytope, and in 1980, Fujishige [3] showed how Wolfe’s algorithm can be used for SFM. For general submodular functions, this Fujishige-Wolfe minimum-norm algorithm seems to have the best empirical performance. Despite its good practical performance, very little is known about Wolfe’s minimum-norm algorithm theoretically. To our knowledge, the only result is an exponential-time analysis due to Wolfe [21] himself. In this paper we give a maiden convergence analysis of Wolfe’s algorithm. We prove that in t iterations, Wolfe’s algorithm returns an O(1/t)-approximate solution to the min-norm point on any polytope. We also prove a robust version of Fujishige’s theorem, which shows that an O(1/n²)-approximate solution to the min-norm point on the base polytope implies exact submodular minimization. As a corollary, we obtain the first pseudo-polynomial time guarantee for the Fujishige-Wolfe minimum-norm algorithm for unconstrained submodular function minimization.
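The linear-optimization oracle underlying the Fujishige-Wolfe approach is Edmonds' greedy algorithm, which returns the vertex of the base polytope B(f) minimizing a linear objective; Wolfe's min-norm-point algorithm calls such an oracle at each iteration, and by Fujishige's theorem the negative coordinates of the min-norm point identify a minimizer of f. Below is a minimal sketch of the oracle alone (not Wolfe's algorithm); the example submodular function is invented.

```python
def edmonds_greedy(f, ground, w):
    """Edmonds' greedy algorithm: returns the vertex x of the base polytope
    B(f) minimizing <w, x>.  This is the linear oracle that Wolfe's
    min-norm-point algorithm invokes at every iteration."""
    order = sorted(ground, key=lambda e: w[e])  # elements by increasing weight
    x, prefix = {}, []
    for e in order:
        x[e] = f(prefix + [e]) - f(prefix)  # marginal gain along the chain
        prefix.append(e)
    return x

# Invented example: f(S) = min(|S|, 2) is concave-of-cardinality, hence submodular.
f = lambda S: min(len(S), 2)
x = edmonds_greedy(f, ["a", "b", "c"], {"a": 3.0, "b": 1.0, "c": 2.0})
```

Every vertex produced this way lies on the base polytope, so its coordinates always sum to f of the whole ground set.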
On approximate non-submodular minimization via tree-structured supermodularity
In 18th International Conference on Artificial Intelligence and Statistics (AISTATS 2015), 2015
Abstract

Cited by 1 (1 self)
We address the problem of minimizing non-submodular functions where the supermodularity is restricted to tree-structured pairwise terms. We are motivated by several real-world applications that require submodularity along with structured supermodularity; this forms a rich class of expressive models in which the non-submodularity is restricted to a tree. While this problem is NP-hard (as we show), we develop several practical algorithms to find approximate and near-optimal solutions, some of which provide lower bounds and others upper bounds, thereby allowing us to compute a tightness gap for any problem instance. We compare our algorithms on synthetic data, and also demonstrate the advantage of the formulation on the real-world application of image segmentation, where we incorporate structured supermodularity into higher-order submodular energy minimization.
Fast Multi-Stage Submodular Maximization: Extended Version
Abstract

Cited by 1 (1 self)
Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MULTGREED), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target submodular function but require less memory and are easier to evaluate. We theoretically analyze the performance guarantees of the multi-stage framework and give examples of how to design instances of MULTGREED for a broad range of natural submodular functions. We show that MULTGREED performs very closely to the standard greedy algorithm given appropriate surrogate functions, and argue how our framework can easily be integrated with distributed algorithms for further optimization. We complement our theory with empirical evaluations on several real-world problems, including data subset selection on millions of speech samples, where MULTGREED yields at least a thousand-fold speedup and superior results over state-of-the-art selection methods.
On the Reducibility of Submodular Functions
Abstract
The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. We then show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that, given the perturbation scales, the reducibility gain can be computed, and the performance loss has additive upper bounds. We further conduct empirical studies, and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions at the cost of only small performance losses.
Maximization of Approximately Submodular Functions
Abstract
We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications, as in many cases errors in the evaluation of a submodular function break submodularity. We say that F is ε-approximately submodular if there exists a submodular function f such that (1−ε)f(S) ≤ F(S) ≤ (1+ε)f(S) for all subsets S. We are interested in characterizing the query complexity of maximizing F subject to a cardinality constraint k as a function of the error level ε > 0. We provide both lower and upper bounds: for ε > n^(−1/2) we show an exponential query-complexity lower bound. In contrast, when ε < 1/k, or under a stronger bounded-curvature assumption, we give constant-factor approximation algorithms.
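The definition can be made concrete with a toy noisy oracle. In the sketch below, the perturbation that alternates sign with the parity of |S| is a hypothetical ε-approximate oracle satisfying (1−ε)f(S) ≤ F(S) ≤ (1+ε)f(S); it is not taken from the paper. For this particular small-ε instance, greedy run on the noisy oracle happens to recover the noiseless greedy solution.

```python
EPS = 0.1
items = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}  # invented coverage instance

def cover(S):
    """True submodular objective: plain coverage."""
    return len(set().union(*(items[e] for e in S))) if S else 0

def noisy(S):
    """Hypothetical eps-approximately submodular oracle: scales f(S) by
    (1 + eps) or (1 - eps) depending on the parity of |S|, so by construction
    (1 - eps) * f(S) <= noisy(S) <= (1 + eps) * f(S)."""
    return cover(S) * (1 + EPS * (1 if len(S) % 2 else -1))

def greedy(F, ground, k):
    """Cardinality-constrained greedy run directly on the (noisy) oracle F."""
    S = []
    for _ in range(k):
        S.append(max((e for e in ground if e not in S),
                     key=lambda e: F(S + [e]) - F(S)))
    return S

chosen = greedy(noisy, list(items), k=2)
```

With larger ε the perturbation can reorder marginal gains, which is exactly the regime where the query-complexity lower bounds bite.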
Submodular Point Processes with Applications to Machine Learning
Abstract
We introduce a class of discrete point processes that we call Submodular Point Processes (SPPs). These processes are characterized via a submodular (or supermodular) function, and naturally model notions of information, coverage, and diversity, as well as cooperation. Unlike log-submodular and log-supermodular distributions (log-SPPs) such as determinantal point processes (DPPs), SPPs are themselves submodular (or supermodular). In this paper, we analyze the computational complexity of probabilistic inference in SPPs. We show that computing the partition function for SPPs (and log-SPPs) requires exponential complexity in the worst case, and we also provide algorithms which approximate SPPs up to polynomial factors. Moreover, for several subclasses of interesting submodular functions that occur in applications, we show how to provide efficient closed-form expressions for the partition functions, and thereby for marginals and conditional distributions. We also show that SPPs are closed under mixtures, thus enabling maximum-likelihood-based strategies for learning mixtures of submodular functions. Finally, we argue how SPPs complement existing log-SPP distributions and are a natural model for several applications.
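The exponential cost of exact inference is visible already in the definition of the partition function, Z = Σ_{S⊆V} f(S), which sums over all 2^n subsets. For a modular f, Z collapses to a closed form (each element appears in exactly 2^(n−1) subsets), which illustrates the kind of tractable subclass the abstract alludes to. A small sketch with invented weights:

```python
from itertools import combinations

def partition_function(f, ground):
    """Brute-force partition function Z = sum of f(S) over all 2^n subsets.
    Exponential in |ground|, mirroring the worst-case hardness."""
    return sum(f(list(S)) for r in range(len(ground) + 1)
               for S in combinations(ground, r))

# Invented modular example: f(S) = sum of per-element weights.
c = {"a": 1.0, "b": 2.0, "c": 4.0}
modular = lambda S: sum(c[e] for e in S)

Z = partition_function(modular, list(c))
# Closed form for modular f: each element appears in 2^(n-1) of the subsets.
Z_closed = 2 ** (len(c) - 1) * sum(c.values())
```

The brute-force sum and the closed form agree here; for general submodular f no such collapse is available, hence the exponential worst case.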
Monotone Closure of Relaxed Constraints in Submodular Optimization: Connections Between Minimization and Maximization: Extended Version
2014
Abstract
It is becoming increasingly evident that many machine learning problems may be reduced to some form of submodular optimization. Previous work addresses generic discrete approaches and specific relaxations. In this work, we take a generic view from a relaxation perspective. We show a relaxation formulation and simple rounding strategy that, based on the monotone closure of relaxed constraints, reveals analogies between minimization and maximization problems, includes known results as special cases, and extends to a wider range of settings. Our resulting approximation factors match the corresponding integrality gaps. The results in this paper complement, in a sense explained in the paper, related discrete gradient-based methods [30], and are particularly useful given the ever-increasing need for efficient submodular optimization methods in very large-scale machine learning. For submodular maximization, a number of relaxation approaches have been proposed. A critical challenge for the practical applicability of these techniques, however, is the complexity of evaluating the multilinear extension. We show that this extension can be efficiently evaluated for a number of useful submodular functions, thus making these otherwise impractical algorithms viable for many real-world machine learning problems.
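For intuition, the multilinear extension F(x) = E[f(S)], where each element e is included in the random set S independently with probability x_e, can be evaluated exactly by enumeration on a tiny ground set; the cost is 2^n function evaluations, which is precisely the blow-up that the efficient evaluations discussed above avoid for structured functions. A minimal sketch with an invented coverage instance:

```python
from itertools import combinations

def multilinear_extension(f, ground, x):
    """Exact multilinear extension F(x) = E[f(S)], with each element e drawn
    into S independently with probability x[e].  Enumerates all 2^n subsets,
    so it is only viable for very small ground sets."""
    F = 0.0
    for r in range(len(ground) + 1):
        for S in combinations(ground, r):
            p = 1.0
            for e in ground:
                p *= x[e] if e in S else (1 - x[e])  # probability of drawing S
            F += p * f(set(S))
    return F

# Invented coverage instance on two items.
items = {"a": {1, 2}, "b": {2, 3}}
cover = lambda S: len(set().union(*(items[e] for e in S))) if S else 0
```

At integer points x the extension coincides with f itself, which is a handy sanity check for any faster evaluation scheme.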