Results 11–20 of 166
Approximations of Weighted Independent Set and Hereditary Subset Problems
Journal of Graph Algorithms and Applications, 2000
Abstract

Cited by 73 (6 self)
The focus of this study is to clarify the approximability of weighted versions of the maximum independent set problem. In particular, we report improved performance ratios in bounded-degree graphs, inductive graphs, and general graphs, as well as for the unweighted problem in sparse graphs. Where possible, the techniques are applied to related hereditary subgraph and subset problems, obtaining ratios better than previously reported for, e.g., Weighted Set Packing, Longest Common Subsequence, and Independent Set in hypergraphs.
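The baseline this abstract improves on can be illustrated with a classical greedy rule (a standard textbook baseline, not one of the paper's algorithms): repeatedly pick the vertex maximizing w(v)/(deg(v)+1) and delete its closed neighborhood. This guarantees total weight at least Σ_v w(v)/(deg(v)+1), the weighted Caro–Wei bound. A minimal sketch, on an illustrative graph:

```python
# Greedy baseline for weighted independent set: take the vertex with the
# best weight-to-degree ratio, then delete it and all its neighbors.
def greedy_weighted_is(adj, w):
    """adj: dict vertex -> set of neighbors; w: dict vertex -> weight."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    chosen = []
    while adj:
        # vertex maximizing w(v) / (deg(v) + 1) in the remaining graph
        v = max(adj, key=lambda u: w[u] / (len(adj[u]) + 1))
        chosen.append(v)
        # remove v's closed neighborhood from the remaining graph
        dead = adj[v] | {v}
        adj = {u: nbrs - dead for u, nbrs in adj.items() if u not in dead}
    return chosen

# triangle 1-2-3 with a pendant vertex 4 attached to 3
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 2.0, 2: 1.0, 3: 5.0, 4: 1.0}
print(greedy_weighted_is(adj, w))  # [3]: weight 5.0 beats {1, 4} with 3.0
```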
Mechanism Design for Policy Routing
2006
Abstract

Cited by 58 (9 self)
The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS’s underlying utility for each route and study the problem of finding a set of routes that maximizes the overall welfare (i.e., the sum of all ASes’ utilities for their selected routes). We show that, if the utility functions are unrestricted, this problem is NP-hard even to approximate closely. We then study a natural class of restricted utilities that we call next-hop preferences. We present a strategyproof, polynomial-time computable mechanism for welfare-maximizing routing over this restricted domain. However, we show that, in contrast to earlier work on lowest-cost routing mechanism design, this mechanism appears to be incompatible with ...
CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions
2005
Abstract

Cited by 58 (9 self)
Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster—especially in cases with structure. CABOB’s search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem—the runtime distribution does not have a heavy tail.
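The search framework CABOB builds on can be sketched in miniature. The following is a toy depth-first branch-and-bound for winner determination, not CABOB itself (no decomposition, no dynamic bid-ordering heuristics); its upper bound is the crudest possible, the total value of all remaining bids:

```python
# Toy branch-and-bound for winner determination (NOT CABOB): each bid is
# (frozenset_of_items, price); a feasible allocation is a set of pairwise
# item-disjoint bids, and we maximize the sum of accepted prices.
def winner_determination(bids):
    best = [0.0, []]  # [incumbent value, winning bid indices]

    def dfs(i, taken_items, value, chosen):
        if value > best[0]:
            best[0], best[1] = value, list(chosen)
        if i == len(bids):
            return
        # crude upper bound: current value plus every remaining bid's price
        if value + sum(p for _, p in bids[i:]) <= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        items, price = bids[i]
        if not (items & taken_items):            # branch: accept bid i
            chosen.append(i)
            dfs(i + 1, taken_items | items, value + price, chosen)
            chosen.pop()
        dfs(i + 1, taken_items, value, chosen)   # branch: reject bid i

    dfs(0, frozenset(), 0.0, [])
    return best

bids = [(frozenset({"a", "b"}), 5.0),
        (frozenset({"a"}), 3.0),
        (frozenset({"b"}), 3.0),
        (frozenset({"c"}), 2.0)]
print(winner_determination(bids))  # value 8.0, winners [1, 2, 3]
```

Selling "a" and "b" separately (3.0 + 3.0) plus "c" (2.0) beats the bundle bid of 5.0 plus "c", which is exactly the kind of conflict structure the search must resolve.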
Hardness of Approximating the Minimum Distance of a Linear Code
2003
Abstract

Cited by 58 (6 self)
We show that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless NP (nondeterministic polynomial time) equals RP. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length n of the code. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2^(log^(1−ε) n), for any ε > 0. Our results hold for codes over any finite field, including binary codes. In the process we show that it is hard to find approximately nearest codewords even if the number of errors exceeds the unique decoding radius d/2 by only an arbitrarily small fraction εd. We also prove the hardness of the nearest-codeword problem for asymptotically good codes, provided the number of errors exceeds (2 ...
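For context on what these hardness results rule out: the minimum distance equals the minimum Hamming weight of a nonzero codeword, and the obvious exact algorithm enumerates all 2^k messages. A brute-force sketch (the generator matrix and the [7,4] Hamming example are illustrative):

```python
import itertools

# Exact minimum distance by brute force over all nonzero messages.  Cost is
# exponential in the dimension k -- the hardness results above say that even
# approximating d is unlikely to be doable efficiently.
def min_distance(G):
    """G: k x n generator matrix over GF(2), as a list of 0/1 rows."""
    k, n = len(G), len(G[0])
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        # codeword = msg * G over GF(2); weight = number of ones
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))
    return best

# generator matrix of the [7,4] Hamming code, which has minimum distance 3
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(G))  # 3
```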
Algorithmic and Analysis Techniques in Property Testing
Abstract

Cited by 48 (7 self)
Property testing algorithms are “ultra”-efficient algorithms that decide whether a given object (e.g., a graph) has a certain property (e.g., bipartiteness), or is significantly different from any object that has the property. To this end, property testing algorithms are given the ability to perform (local) queries to the input, though the decisions they need to make usually concern properties of a global nature. In the last two decades, property testing algorithms have been designed for many types of objects and properties, among them graph properties, algebraic properties, geometric properties, and more. In this article we survey results in property testing, with an emphasis on common analysis and algorithmic techniques. Among the techniques surveyed are the following:
• The self-correcting approach, which was mainly applied in the study of property testing of algebraic properties;
• The enforce-and-test approach, which was applied quite extensively in the analysis of algorithms for testing graph properties (in the dense-graphs model), as well as in other contexts;
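A canonical example of the algebraic, self-correcting flavour the survey mentions is the BLR linearity test: f : {0,1}^n → {0,1} is linear over GF(2) iff f(x ⊕ y) = f(x) ⊕ f(y) for all x, y, and the tester simply checks this identity at random points. A minimal sketch (the trial counts and example functions are illustrative):

```python
import random

# BLR linearity test: accept if f(x XOR y) == f(x) XOR f(y) on random
# samples.  A function far from every linear function fails some trial
# with constant probability, so repetition amplifies the detection rate.
def blr_test(f, n, trials=100, rng=random):
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x ^ y) != f(x) ^ f(y):
            return False        # witnessed a violation: reject
    return True                 # no violation found: accept

def parity(x):                  # linear: XOR of the input bits
    return bin(x).count("1") % 2

def corrupted(x):               # parity with one corrupted point
    return 1 if x == 3 else parity(x)

print(blr_test(parity, 8))                      # True
print(blr_test(corrupted, 2, trials=500))       # almost surely False
```

Note the "local queries, global property" pattern from the abstract: each trial touches only three points of f, yet acceptance certifies closeness to a globally linear function.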
Extractors from Reed-Muller Codes
In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, 2001
Abstract

Cited by 43 (6 self)
Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. This research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error-correcting codes, was noticed before. Yet, researchers had failed to build extractors directly from a good code, without using other tools from pseudorandomness. We succeed in constructing an extractor directly from a Reed-Muller code. To do this, we develop a novel proof technique. Furthermore, our construction is the first and only construction with degree close to linear. In contrast, the best previous constructions had brought the log of the degree within a constant of optimal, which gives polynomial degree. This improvement is important for certain applications. For example, it follows that approximating the VC dimension to within a factor of N ...
Evolutionary algorithm with the guided mutation for the maximum clique problem
IEEE Transactions on Evolutionary Computation, 2005
Abstract

Cited by 42 (15 self)
Estimation of distribution algorithms sample new solutions (offspring) from a probability model which characterizes the distribution of promising solutions in the search space at each generation. The location information of solutions found so far (i.e., the actual positions of these solutions in the search space) is not directly used for generating offspring in most existing estimation of distribution algorithms. This paper introduces a new operator, called guided mutation. Guided mutation generates offspring by combining global statistical information with the location information of solutions found so far. An evolutionary algorithm with guided mutation (EA/G) for the maximum clique problem is proposed in this paper. Besides guided mutation, EA/G adopts a strategy for searching different search areas in different search phases. Marchiori’s heuristic is applied to each new solution to produce a maximal clique in EA/G. Experimental results show that EA/G outperforms the heuristic genetic algorithm of Marchiori (the best evolutionary algorithm reported so far) and a MIMIC algorithm on DIMACS benchmark graphs.
Index Terms—Estimation of distribution algorithms, evolutionary algorithm, guided mutation, heuristics, hybrid genetic algorithm, maximum clique problem (MCP).
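The combination the abstract describes can be sketched as a per-gene coin flip: with probability beta the offspring gene is sampled from the learned probability vector p (global statistical information), otherwise it is copied from a parent solution (location information). The parameter values and bitstring encoding below are illustrative, not the paper's tuned settings:

```python
import random

# Sketch of a guided-mutation operator for bitstring solutions: each gene
# comes either from the model's marginal probability p[i] (with prob. beta)
# or directly from the parent (with prob. 1 - beta).
def guided_mutation(parent, p, beta, rng=random):
    child = []
    for bit, pi in zip(parent, p):
        if rng.random() < beta:
            child.append(1 if rng.random() < pi else 0)  # sample from model
        else:
            child.append(bit)                            # copy from parent
    return child

random.seed(0)
parent = [1, 0, 1, 1, 0]
p = [0.9, 0.1, 0.8, 0.7, 0.2]   # illustrative estimated marginals
print(guided_mutation(parent, p, beta=0.5))
```

At beta = 0 the operator degenerates to copying the parent; at beta = 1 it is pure model sampling, as in a plain estimation-of-distribution algorithm.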
Decomposition plans for geometric constraint systems
J. Symbolic Computation, 2001
Abstract

Cited by 41 (1 self)
A central issue in dealing with geometric constraint systems for CAD/CAM/CAE is the generation of an optimal decomposition plan that not only aids efficient solution, but also captures design intent and supports conceptual design. Though complex, this issue has evolved and crystallized over the past few years, permitting us to take the next important step: in this paper, we formalize, motivate and explain the decomposition–recombination (DR) planning problem as well as several performance measures by which DR-planning algorithms can be analyzed and compared. These measures include: generality, validity, completeness, Church–Rosser property, complexity, best- and worst-choice approximation factors, (strict) solvability preservation, ability to deal with underconstrained systems, and ability to incorporate conceptual design decompositions specified by the designer. The problem and several of the performance measures are formally defined here for the first time—they closely reflect specific requirements of CAD/CAM applications. The clear formulation of the problem and performance measures allows us to precisely analyze and compare existing DR-planners that use two well-known types of decomposition methods: SR (constraint shape recognition) and MM (generalized maximum matching) on constraint graphs. This analysis additionally serves to illustrate and provide intuitive substance to the newly formalized measures. In Part II of this article, we use the new performance measures to guide the development of a new DR-planning algorithm which excels with respect to these performance measures.
ON THE COMPLEXITY OF APPROXIMATING k-SET PACKING
Computational Complexity, 2006
Abstract

Cited by 36 (0 self)
Given a k-uniform hypergraph, the Maximum k-Set Packing problem is to find the maximum disjoint set of edges. We prove that this problem cannot be efficiently approximated to within a factor of Ω(k / ln k) unless P = NP. This improves the previous hardness-of-approximation factor of k/2^(O(√ln k)) by Trevisan. This result extends to the problem of k-Dimensional Matching.
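For scale: the trivial greedy algorithm (take any remaining edge, discard everything intersecting it) is a factor-k approximation, since each chosen k-element edge can conflict with at most k edges of an optimal packing. The Ω(k / ln k) lower bound above says no efficient algorithm can improve on this by more than roughly a ln k factor. A minimal greedy sketch:

```python
# Greedy k-Set Packing: scan the edges in any order and keep each one that
# is disjoint from everything kept so far.  This is a factor-k approximation
# for k-uniform instances.
def greedy_set_packing(sets):
    packing, used = [], set()
    for s in sets:
        if not (s & used):     # s is disjoint from all chosen sets
            packing.append(s)
            used |= s
    return packing

sets = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({6, 7, 8})]
print(greedy_set_packing(sets))  # keeps {1,2,3} and {6,7,8}
```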