Results 1–10 of 55
On finding dense subgraphs
In ICALP ’09, 2009
Cited by 38 (2 self)

Abstract
Abstract. Given an undirected graph G = (V, E), the density of a subgraph on vertex set S is defined as d(S) = |E(S)|/|S|, where E(S) is the set of edges in the subgraph induced by the nodes in S. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph, one notion of density, given by Kannan and Vinay [12], is as follows: given subsets S and T of vertices, the density of the subgraph
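A minimal sketch of the undirected density in this definition; the function name and edge-list representation are illustrative, not from the paper:

```python
# Illustrative sketch: d(S) = |E(S)| / |S|, where E(S) is the set of
# edges with both endpoints in S. Names here are hypothetical.
def density(edges, S):
    """edges: iterable of (u, v) pairs; S: a set of vertices."""
    S = set(S)
    e_in = sum(1 for u, v in edges if u in S and v in S)
    return e_in / len(S)

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(density(edges, {0, 1, 2}))  # a triangle: 3 edges / 3 nodes = 1.0
```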
Truncated Power Method for Sparse Eigenvalue Problems
Cited by 27 (1 self)

Abstract
This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k nonzero components. We propose a simple yet effective solution, called the truncated power method, that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.
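A hedged sketch of the core iteration the abstract describes (a power step, hard truncation to the k largest-magnitude entries, then renormalization); the fixed iteration count and other parameter names are illustrative assumptions, not the paper's:

```python
import numpy as np

def truncated_power_method(A, k, iters=200, seed=0):
    """Approximate a dominant eigenvector of the symmetric matrix A
    with at most k nonzero entries (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x                            # power-iteration step
        y[np.argsort(np.abs(y))[:-k]] = 0.0  # keep only the k largest entries
        x = y / np.linalg.norm(y)            # renormalize
    return x
```

The returned vector is unit-norm with at most k nonzero components, which is the feasible set of the sparse eigenvalue problem above.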
Lasserre hierarchy, higher eigenvalues, and approximation schemes for graph partitioning and quadratic integer programming with PSD objectives
In Proceedings of the 52nd Annual Symposium on Foundations of Computer Science (FOCS), 2011
Cited by 26 (4 self)

Abstract
We present an approximation scheme for optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes well-known graph problems such as Minimum graph bisection, Edge expansion, Uniform sparsest cut, and Small Set expansion, as well as the Unique Games problem. These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semidefinite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix. For all the above graph problems, we give an algorithm running in time n^{O(r/ε^2)} with approximation ratio (1+ε)/min{1, λ_r}, where λ_r is the r-th smallest eigenvalue of the normalized graph Laplacian L. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply a (1 + O(ε))-factor approximation in time n^{O(r
Inapproximability Results for Maximum Edge Biclique, Minimum Linear Arrangement, and Sparsest Cut, 2011
Cited by 21 (0 self)

Abstract
We consider the Minimum Linear Arrangement problem and the (Uniform) Sparsest Cut problem. So far, these two notorious NP-hard graph problems have resisted all attempts to prove inapproximability results. We show that they have no polynomial-time approximation scheme, unless NP-complete problems can be solved in randomized subexponential time. Furthermore, we show that the same techniques can be used for the Maximum Edge Biclique problem, for which we obtain a hardness factor similar to previous results, but under a more standard assumption.
Max-Sum Diversification, Monotone Submodular Functions and Dynamic Updates (Extended Abstract), 2012
Denser than the Densest Subgraph: Extracting Optimal Quasi-Cliques with Quality Guarantees
Cited by 15 (8 self)

Abstract
Finding dense subgraphs is an important graph-mining task with many applications. Given that the direct optimization of edge density is not meaningful, as even a single edge achieves maximum density, research has focused on optimizing alternative density functions. A very popular such function is the average degree, whose maximization leads to the well-known densest-subgraph notion. Surprisingly enough, however, densest subgraphs are typically large graphs, with small edge density and large diameter. In this paper, we define a novel density function, which gives subgraphs of much higher quality than densest subgraphs: the graphs found by our method are compact, dense, and with smaller diameter. We show that the proposed function can be derived from a general framework, which includes other important density functions as subcases and for which we show interesting general theoretical properties. To optimize the proposed function we provide an additive approximation algorithm and a local-search heuristic. Both algorithms are very efficient and scale well to large graphs. We evaluate our algorithms on real and synthetic datasets, and we also devise several application studies as variants of our original problem. When compared with the method that finds the subgraph of the largest average degree, our algorithms return denser subgraphs with smaller diameter. Finally, we discuss new interesting research directions that our problem leaves open.
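As an illustration of the kind of greedy optimization such density functions admit, here is a hedged sketch of greedy peeling for an edge-surplus objective of the form f(S) = |E(S)| − α·|S|(|S|−1)/2; the exact objective, the choice of α, and any guarantees are the paper's, and this sketch only assumes that general form:

```python
# Hedged sketch: greedy peeling for an assumed edge-surplus objective
# f(S) = |E(S)| - alpha * |S| * (|S| - 1) / 2.
def greedy_peel(adj, alpha=1.0):
    """adj: dict mapping each vertex to its set of neighbours.
    Repeatedly delete a minimum-degree vertex and return the subset
    with the best edge surplus seen along the way."""
    S = {v: set(nbrs) for v, nbrs in adj.items()}
    edges = sum(len(n) for n in S.values()) // 2
    surplus = lambda e, s: e - alpha * s * (s - 1) / 2
    best, best_val = set(S), surplus(edges, len(S))
    while len(S) > 1:
        v = min(S, key=lambda u: len(S[u]))  # minimum-degree vertex
        edges -= len(S[v])
        for u in S[v]:
            S[u].discard(v)
        del S[v]
        if surplus(edges, len(S)) > best_val:
            best, best_val = set(S), surplus(edges, len(S))
    return best

# A triangle {0, 1, 2} with a pendant vertex 3: peeling keeps the triangle.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(greedy_peel(adj, alpha=1.0))  # {0, 1, 2}
```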
A Revealed Preference Approach to Computational Complexity in Economics, 2010
Cited by 14 (3 self)

Abstract
One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility-maximizing. However, from a computational perspective, this model is called into question, because the task of utility maximization subject to a budget constraint is computationally hard in the worst case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility-maximizing consumer if and only if it is consistent with a utility-maximizing consumer whose utility function can be maximized in strongly polynomial time. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
Improved approximation algorithms for label cover problems
In ESA, 2009
Cited by 11 (2 self)

Abstract
In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem, for which, until now, the best approximation ratios known were O(√n). In fact, several recent papers reduced Label Cover to other problems, arguing that if better approximation algorithms for their problems existed, then an o(√n)-approximation algorithm for Label Cover would exist. We show, in fact, that there are an O(n^{1/3})-approximation algorithm for Max Rep and an O(n^{1/3} log^{2/3} n)-approximation algorithm for Min Rep. In addition, we exhibit a randomized reduction from Densest k-Subgraph to Max Rep, showing that any approximation factor for Max Rep implies the same factor (up to a constant) for Densest k-Subgraph.
Multi-skill Collaborative Teams based on Densest Subgraphs, 2011
Cited by 9 (0 self)

Abstract
We consider the problem of identifying a team of skilled individuals for collaboration, in the presence of a social network. Each node in the input social network may be an expert in one or more skills, such as theory, databases, or data mining. The edge weights specify the affinity or collaborative compatibility between the respective nodes. Given a project that requires a specified number of skilled individuals in each area of expertise, the goal is to identify a team that maximizes the collaborative compatibility. For example, the requirement may be to form a team that has at least three databases experts and at least two theory experts. We explore team formation where the collaborative-compatibility objective is measured as the density of the induced subgraph on the selected nodes. The problem of maximizing density is NP-hard even when the team requires a certain number of individuals of only one specific skill. We present a 3-approximation algorithm that improves upon a naive extension of the previously known algorithm for the densest at-least-k-subgraph problem. We further show how the same approximation can be extended to a special case of multiple skills as well. Our problem generalizes the formulation studied by Lappas et al. [KDD ’09]. Further, they measured collaborative compatibility in terms of diameter and spanning-tree costs. Our density-based objective also turns out to be more robust in certain aspects. Experiments are performed on a crawl of the DBLP graph, where individuals can be skilled in at most four areas: theory, databases, data mining, and artificial intelligence. In addition to our main algorithm, we also present heuristic extensions to trade off between the size of the solution and its induced density. These density-based algorithms outperform the diameter-based objective on several metrics for assessing the collaborative compatibility of teams. The solutions suggested are also intuitively meaningful and scale well with the increase in the number of skilled individuals required.
On the Maximum Quadratic Assignment Problem
Cited by 7 (3 self)

Abstract
Quadratic Assignment is a basic problem in combinatorial optimization, which generalizes several other problems such as Traveling Salesman, Linear Arrangement, Dense k-Subgraph, and Clustering with given sizes. The input to the Quadratic Assignment Problem consists of two n × n symmetric nonnegative matrices W = (w_{i,j}) and D = (d_{i,j}). Given matrices W, D, and a permutation π: [n] → [n], the objective function is Q(π) = Σ_{i,j ∈ [n], i≠j} w_{i,j} · d_{π(i),π(j)}. In this paper, we study the Maximum Quadratic Assignment Problem, where the goal is to find a permutation π that maximizes Q(π). We give an Õ(√n)-approximation algorithm, which is the first nontrivial approximation guarantee for this problem. The above guarantee also holds when the matrices W, D are asymmetric. An indication of the hardness of Maximum Quadratic Assignment is that it contains as a special case the Dense k-Subgraph problem, for which the best known approximation ratio is ≈ n^{1/3} (Feige et al. [8]). When one of the matrices W, D satisfies the triangle inequality, we obtain a 2e/(e−1) ≈ 3.16 approximation algorithm. This improves over the previously best-known approximation guarantee of 4 (Arkin et al. [4]) for this special case of Maximum Quadratic Assignment. The performance guarantee for Maximum Quadratic Assignment with the triangle inequality can be proved relative to an optimal solution of a natural linear programming relaxation that has been used earlier in Branch-and-Bound approaches (see e.g. Adams and Johnson [1]). It can also be shown that this LP has an integrality gap of Ω̃(√n) for general Maximum Quadratic Assignment.
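For concreteness, evaluating the objective Q(π) for a given permutation is a direct double sum; a tiny illustrative sketch, with π represented as a list and names chosen for this example:

```python
def qap_objective(W, D, pi):
    """Q(pi) = sum over i != j of W[i][j] * D[pi(i)][pi(j)]."""
    n = len(W)
    return sum(W[i][j] * D[pi[i]][pi[j]]
               for i in range(n) for j in range(n) if i != j)

W = [[0, 2], [2, 0]]
D = [[0, 3], [3, 0]]
print(qap_objective(W, D, [1, 0]))  # 2*3 + 2*3 = 12
```

The Maximum Quadratic Assignment Problem then asks for the permutation pi maximizing this value over all n! choices.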