Results 1–10 of 29
On finding dense subgraphs
 In ICALP ’09
, 2009
"... Abstract. Given an undirected graph G = (V, E), the density of a subgraph on vertex set S is defined as d(S) = E(S), where E(S) is the set of edges S in the subgraph induced by nodes in S. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion t ..."
Abstract

Cited by 15 (2 self)
Given an undirected graph G = (V, E), the density of a subgraph on vertex set S is defined as d(S) = |E(S)|/|S|, where E(S) is the set of edges in the subgraph induced by the nodes in S. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph, one notion of density, given by Kannan and Vinay [12], is as follows: given subsets S and T of vertices, the density of the subgraph
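Charikar's well-known peeling algorithm, which repeatedly deletes a minimum-degree vertex and keeps the densest intermediate subgraph, gives a 2-approximation for the undirected objective d(S) = |E(S)|/|S| above. A minimal Python sketch (the adjacency-set representation and function names are my own illustration, not from the paper):

```python
from itertools import combinations

def density(adj, S):
    """d(S) = |E(S)| / |S|: induced edges per vertex."""
    S = set(S)
    return sum(1 for u, v in combinations(S, 2) if v in adj[u]) / len(S)

def greedy_peel(adj):
    """Charikar's peeling: repeatedly delete a minimum-degree vertex and
    return the densest intermediate vertex set (a 2-approximation)."""
    live = {u: set(vs) for u, vs in adj.items()}
    best, best_d = set(live), density(adj, live)
    while len(live) > 1:
        u = min(live, key=lambda v: len(live[v]))
        for w in live.pop(u):        # remove u and its incident edges
            live[w].discard(u)
        d = density(adj, live)
        if d > best_d:
            best, best_d = set(live), d
    return best, best_d
```

On a K4 with one pendant vertex attached, peeling drops the pendant and returns the clique, whose density 6/4 = 1.5 exceeds the full graph's 7/5.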
A Revealed Preference Approach to Computational Complexity in Economics
, 2010
"... One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally ..."
Abstract

Cited by 8 (1 self)
One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility-maximizing consumer if and only if it is consistent with a utility-maximizing consumer whose utility function can be maximized in strongly polynomial time. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
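The consistency notion in this abstract is the classical revealed-preference one: by Afriat's theorem, a finite data set of prices and chosen bundles is consistent with utility maximization exactly when it satisfies the Generalized Axiom of Revealed Preference (GARP). A sketch of that purely combinatorial test (all names are mine; this illustrates the background notion, not the paper's construction):

```python
import numpy as np

def satisfies_garp(prices, bundles):
    """GARP check: bundle i is directly revealed preferred to j when
    p_i . x_i >= p_i . x_j; after taking the transitive closure, no
    revealed-preferred pair may be strictly reversed."""
    P = np.asarray(prices, float)
    X = np.asarray(bundles, float)
    n = len(P)
    cost = P @ X.T              # cost[i, j] = p_i . x_j
    own = np.diag(cost)         # p_i . x_i, the observed expenditure
    R = own[:, None] >= cost    # direct revealed preference relation
    for k in range(n):          # boolean Floyd-Warshall closure
        R = R | (R[:, [k]] & R[[k], :])
    for i in range(n):
        for j in range(n):
            if R[i, j] and own[j] > cost[j, i]:
                return False    # i revealed preferred to j, yet x_i was
                                # strictly cheaper than x_j at prices p_j
    return True
```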
Improved approximation algorithms for label cover problems
 In ESA
, 2009
"... Abstract In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem, for which, till now, the best approximation ratios known were O ( √ n). In fact, several recent papers reduced Label Cover to other problems, arguing that ..."
Abstract

Cited by 7 (1 self)
In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem, for which, until now, the best approximation ratios known were O(√n). In fact, several recent papers reduced Label Cover to other problems, arguing that if better approximation algorithms for their problems existed, then an o(√n)-approximation algorithm for Label Cover would exist. We show, in fact, that there are an O(n^{1/3})-approximation algorithm for Max Rep and an O(n^{1/3} log^{2/3} n)-approximation algorithm for Min Rep. In addition, we also exhibit a randomized reduction from Densest k-Subgraph to Max Rep, showing that any approximation factor for Max Rep implies the same factor (up to a constant) for Densest k-Subgraph.
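To make the Max Rep objective concrete: the input is a bipartite graph whose two sides are partitioned into groups, one representative must be chosen per group, and a pair of groups is "covered" when the chosen representatives are adjacent. A tiny exhaustive solver (my own illustrative encoding, feasible only for very small instances):

```python
from itertools import product

def max_rep_bruteforce(A_parts, B_parts, edges):
    """Exhaustively solve a tiny Max Rep instance: pick one representative
    per part to maximize the number of part-pairs whose chosen
    representatives are adjacent."""
    E = set(map(frozenset, edges))
    best = 0
    for a_choice in product(*A_parts):        # one rep per A-side part
        for b_choice in product(*B_parts):    # one rep per B-side part
            covered = sum(1 for a in a_choice for b in b_choice
                          if frozenset((a, b)) in E)
            best = max(best, covered)
    return best
```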
On the Maximum Quadratic Assignment Problem
"... Quadratic Assignment is a basic problem in combinatorial optimization, which generalizes several other problems such as Traveling Salesman, Linear Arrangement, Dense k Subgraph, and Clustering with given sizes. The input to the Quadratic Assignment Problem consists of two n × n symmetric nonnegativ ..."
Abstract

Cited by 6 (3 self)
Quadratic Assignment is a basic problem in combinatorial optimization, which generalizes several other problems such as Traveling Salesman, Linear Arrangement, Dense k-Subgraph, and Clustering with given sizes. The input to the Quadratic Assignment Problem consists of two n × n symmetric nonnegative matrices W = (w_{i,j}) and D = (d_{i,j}). Given matrices W, D, and a permutation π: [n] → [n], the objective function is Q(π) = Σ_{i,j∈[n], i≠j} w_{i,j} · d_{π(i),π(j)}. In this paper, we study the Maximum Quadratic Assignment Problem, where the goal is to find a permutation π that maximizes Q(π). We give an Õ(√n)-approximation algorithm, which is the first nontrivial approximation guarantee for this problem. The above guarantee also holds when the matrices W, D are asymmetric. An indication of the hardness of Maximum Quadratic Assignment is that it contains as a special case the Dense k-Subgraph problem, for which the best known approximation ratio is ≈ n^{1/3} (Feige et al. [8]). When one of the matrices W, D satisfies the triangle inequality, we obtain a 2e/(e−1) ≈ 3.16 approximation algorithm. This improves over the previously best-known approximation guarantee of 4 (Arkin et al. [4]) for this special case of Maximum Quadratic Assignment. The performance guarantee for Maximum Quadratic Assignment with triangle inequality can be proved relative to an optimal solution of a natural linear programming relaxation that has been used earlier in Branch-and-Bound approaches (see e.g. Adams and Johnson [1]). It can also be shown that this LP has an integrality gap of Ω̃(√n) for general Maximum Quadratic Assignment.
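The objective Q(π) is easy to evaluate directly, and for very small n the maximum can be found by exhausting all permutations. The sketch below (my own code, purely illustrative, n! time) mirrors the definition above:

```python
from itertools import permutations

def qap_value(W, D, perm):
    """Q(pi) = sum over i != j of w[i][j] * d[pi(i)][pi(j)]."""
    n = len(W)
    return sum(W[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n) if i != j)

def max_qap_bruteforce(W, D):
    """Exact maximum over all n! permutations; viable only for tiny n."""
    n = len(W)
    return max(qap_value(W, D, p) for p in permutations(range(n)))
```

With a single unit-weight pair in W, the brute force simply places that pair on the largest entry of D.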
Multi-skill Collaborative Teams based on Densest Subgraphs
, 2011
"... We consider the problem of identifying a team of skilled individuals for collaboration, in the presence of a social network. Each node in the input social network may be an expert in one or more skills such as theory, databases or data mining. The edge weights specify the affinity or collaborative ..."
Abstract

Cited by 5 (0 self)
We consider the problem of identifying a team of skilled individuals for collaboration, in the presence of a social network. Each node in the input social network may be an expert in one or more skills, such as theory, databases, or data mining. The edge weights specify the affinity or collaborative compatibility between the respective nodes. Given a project that requires a specified number of skilled individuals in each area of expertise, the goal is to identify a team that maximizes the collaborative compatibility. For example, the requirement may be to form a team that has at least three database experts and at least two theory experts. We explore team formation where the collaborative compatibility objective is measured as the density of the induced subgraph on the selected nodes. The problem of maximizing density is NP-hard even when the team requires a certain number of individuals of only one specific skill. We present a 3-approximation algorithm that improves upon a naive extension of the previously known algorithm for the densest at-least-k subgraph problem. We further show how the same approximation can be extended to a special case of multiple skills as well. Our problem generalizes the formulation studied by Lappas et al. [KDD ’09], who measured collaborative compatibility in terms of diameter and spanning tree costs; our density-based objective also turns out to be more robust in certain aspects. Experiments are performed on a crawl of the DBLP graph, where individuals can be skilled in at most four areas: theory, databases, data mining, and artificial intelligence. In addition to our main algorithm, we also present heuristic extensions to trade off between the size of the solution and its induced density. These density-based algorithms outperform the diameter-based objective on several metrics for assessing the collaborative compatibility of teams. The solutions suggested are also intuitively meaningful and scale well with the increase in the number of skilled individuals required.
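The optimization problem described above is compact to state: maximize induced edge-weight density over all teams meeting per-skill headcounts. A brute-force sketch (my own hypothetical encoding, exponential time, for illustration only; the paper's algorithm is the 3-approximation, not this enumeration):

```python
from itertools import chain, combinations

def best_team(adj_w, skills, demand):
    """Exhaustively find the team maximizing induced edge-weight density,
    subject to per-skill headcount requirements (tiny instances only)."""
    nodes = list(adj_w)
    best, best_d = None, -1.0
    subsets = chain.from_iterable(combinations(nodes, r)
                                  for r in range(1, len(nodes) + 1))
    for team in subsets:
        # skip teams short on any required skill
        if any(sum(s in skills[v] for v in team) < need
               for s, need in demand.items()):
            continue
        w = sum(adj_w[u].get(v, 0) for u, v in combinations(team, 2))
        d = w / len(team)          # induced edge weight per team member
        if d > best_d:
            best, best_d = set(team), d
    return best, best_d
```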
Truncated Power Method for Sparse Eigenvalue Problems
"... This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k nonzero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A st ..."
Abstract

Cited by 4 (0 self)
This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k nonzero components. We propose a simple yet effective solution called the truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.
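The core iteration is power iteration with a hard-thresholding step: after each matrix-vector product, all but the k largest-magnitude entries are zeroed and the vector is renormalized. A minimal sketch (parameter names and the optional deterministic start are my own additions, not the paper's interface):

```python
import numpy as np

def truncated_power_method(A, k, iters=100, x0=None, seed=0):
    """Approximate the dominant k-sparse eigenvector of a symmetric
    matrix A: power iteration, but after every matrix-vector product
    keep only the k largest-magnitude entries and renormalize."""
    n = A.shape[0]
    x = np.random.default_rng(seed).standard_normal(n) if x0 is None \
        else np.asarray(x0, float)
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        y[np.argsort(np.abs(y))[:-k]] = 0.0   # zero all but the top-k entries
        x = y / np.linalg.norm(y)
    return x
```

On a diagonal matrix the k = 1 iterate locks onto a single coordinate, giving a unit vector supported on one entry.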
Derandomized Parallel Repetition of Structured PCPs
"... Abstract—A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is cal ..."
Abstract

Cited by 3 (1 self)
A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications for inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the “manifold vs. point” construction, which is the only construction in the literature for this parameter range. The “manifold vs. point” PCP is based on a low degree test, while our construction is based on a direct product test. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC ’09) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
PTAS for Densest k-Subgraph in Interval Graphs
, 2011
"... Given an interval graph and integer k, we consider the problem of finding a subgraph of size k with a maximum number of induced edges, called densest ksubgraph problem in interval graphs. It has been shown that this problem is NPhard even for chordal graphs [17], and there is probably no PTAS for ..."
Abstract

Cited by 3 (0 self)
Given an interval graph and an integer k, we consider the problem of finding a subgraph on k vertices with a maximum number of induced edges, called the densest k-subgraph problem in interval graphs. It has been shown that this problem is NP-hard even for chordal graphs [17], and there is probably no PTAS for general graphs [12]. However, the exact complexity status for interval graphs is a long-standing open problem [17], and the best known approximation result is a 3-approximation algorithm [16]. We shed light on the approximation complexity of finding a densest k-subgraph in interval graphs by presenting a polynomial-time approximation scheme (PTAS); that is, we show that there is a (1 + ε)-approximation algorithm for any ε > 0, which is the first such approximation scheme for the densest k-subgraph problem in an important graph class without any further restrictions.
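For intuition about the objective (not the PTAS itself, which exploits interval structure), densest k-subgraph on tiny graphs can be solved exactly by enumeration; the helper below is my own illustrative baseline:

```python
from itertools import combinations

def densest_k_subgraph(edges, nodes, k):
    """Exact densest k-subgraph by enumeration: among all size-k vertex
    sets, return one inducing the most edges (tiny graphs only)."""
    E = set(map(frozenset, edges))
    def induced(S):
        # number of graph edges with both endpoints inside S
        return sum(1 for u, v in combinations(S, 2) if frozenset((u, v)) in E)
    return max(combinations(nodes, k), key=induced)
```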
A plant location guide for the unsure
, 2008
"... This paper studies an extension of the kmedian problem where we are given a metric space (V, d) and not just one but m client sets {Si ⊆ V} m i=1, and the goal is to open k facilities F to minimize: maxi∈[m] j∈Si d(j, F) �, i.e., the worstcase cost over all the client sets. This is a “minmax ” or ..."
Abstract

Cited by 2 (1 self)
This paper studies an extension of the k-median problem where we are given a metric space (V, d) and not just one but m client sets {S_i ⊆ V}_{i=1}^m, and the goal is to open k facilities F to minimize max_{i∈[m]} Σ_{j∈S_i} d(j, F), i.e., the worst-case cost over all the client sets. This is a “min-max” or “robust” version of the k-median problem; however, note that in contrast to previous papers on robust/stochastic problems, we have only one stage of decision-making: where should we place the facilities? We present an O(log n + log m) approximation for robust k-median. The algorithm is combinatorial and very simple, and is based on reweighting/Lagrangean-relaxation ideas. In fact, we give a general framework for (minimization) facility location problems where there is a bound on the number of open facilities. For robust and stochastic versions of such location problems, we show that if the problem satisfies a certain “projection” property, essentially the same algorithm gives a logarithmic approximation ratio in both versions. We use our framework to give the first approximation algorithms for robust/stochastic versions of k-tree, capacitated k-median, and fault-tolerant k-median.
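The robust objective max_{i∈[m]} Σ_{j∈S_i} d(j, F), with each client served by its nearest open facility, is straightforward to evaluate. The sketch below (my own illustrative code, not the paper's reweighting algorithm) evaluates it and brute-forces the optimum on tiny inputs:

```python
from itertools import combinations

def robust_kmedian_cost(d, facilities, client_sets):
    """Worst case over client sets of the summed distance of each client
    to its nearest open facility (the robust k-median objective)."""
    return max(sum(min(d[j][f] for f in facilities) for j in S)
               for S in client_sets)

def best_facilities(d, points, k, client_sets):
    """Exact robust k-median by enumerating all k-subsets (tiny inputs)."""
    return min(combinations(points, k),
               key=lambda F: robust_kmedian_cost(d, F, client_sets))
```

On the line metric over {0, 1, 2, 3} with client sets {0, 1} and {2, 3}, opening one facility near each set brings the worst-case cost down to 1.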