Results 1–9 of 9
Randomness and Structure
Abstract

Cited by 5 (1 self)
This chapter covers research in constraint programming (CP) and related areas involving random problems. Such research has played a significant role in the development of more efficient and effective algorithms, as well as in understanding the source of hardness in solving combinatorially challenging problems. Random problems have proved useful in a number of different ways. Firstly, they provide a relatively “unbiased” sample for benchmarking algorithms. In the early days of CP, many algorithms were compared using only a limited sample of problem instances. In some cases, this may have led to premature conclusions. Random problems, by comparison, permit algorithms to be tested on statistically significant samples of hard problems. However, as we outline in the rest of this chapter, there remain pitfalls awaiting the unwary in their use. For example, random problems may not contain structures found in many real-world problems, and these structures can make problems much easier or much harder to solve. As a second example, the process of generating random problems may itself be “flawed”, giving problem instances which are not, at least asymptotically, combinatorially hard. Random problems have also provided insight into problem hardness. For example, the influential paper by Cheeseman, Kanefsky and Taylor [12] highlighted the computational difficulty of problems which are on the “knife-edge” between satisfiability and unsatisfiability [84]. There is even hope within certain quarters that random problems may be one of the links in resolving the P = NP question. Finally, insight into problem hardness provided by random problems has helped inform the design of better algorithms and heuristics. For example, the design of a number of branching heuristics for the Davis–Putnam–Logemann–Loveland (DPLL) satisfiability procedure has been heavily influenced by the hardness of random problems.
As a second example, the rapid randomization and restart (RRR) strategy [45, 44] was motivated by the discovery of heavy-tailed runtime distributions in backtracking-style search procedures on random quasigroup completion problems.
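The fixed clause-length random model alluded to in this abstract can be sketched in a few lines. The following is a minimal illustration (function names are ours, not from the chapter), with the clause-to-variable ratio set near the empirically observed random 3-SAT threshold of roughly 4.26:

```python
import random
from itertools import product

def random_3sat(n_vars, ratio=4.26, seed=0):
    """Random 3-SAT instance: each clause picks 3 distinct variables
    uniformly and negates each with probability 1/2."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vars_))
    return clauses

def brute_force_sat(n_vars, clauses):
    """Exhaustively test all 2^n assignments (only viable for tiny n)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

inst = random_3sat(12, ratio=4.26, seed=1)   # 51 clauses over 12 variables
```

Near the threshold ratio, instances are empirically hardest; well below it almost all instances are satisfiable, well above it almost none are.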
The backbone of the travelling salesperson
In: Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI'05)
Abstract

Cited by 3 (0 self)
We study the backbone of the travelling salesperson optimization problem. We prove that it is intractable to approximate the backbone with any performance guarantee, assuming that P ≠ NP and there is a limit on the number of edges falsely returned. Nevertheless, in practice, it appears that much of the backbone is present in close-to-optimal solutions. We can therefore often find much of the backbone using approximation methods based on good heuristics. We demonstrate that such backbone information can be used to guide the search for an optimal solution. However, the variance in runtimes when using a backbone-guided heuristic is large. This suggests that we may need to combine such heuristics with randomization and restarts. In addition, though backbone-guided heuristics are useful for finding optimal solutions, they are of less help in proving optimality.
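For intuition, the backbone of a small symmetric instance can be computed exactly by enumerating all tours; a brute-force sketch (names are illustrative, and the paper's approximation methods are not shown):

```python
from itertools import permutations

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tsp_backbone(dist):
    """Enumerate all tours of a small symmetric instance and return the
    set of edges common to every optimal tour (the backbone)."""
    n = len(dist)
    best, optimal_edge_sets = None, []
    for perm in permutations(range(1, n)):      # fix city 0 to break rotations
        tour = (0,) + perm
        c = tour_cost(tour, dist)
        edges = frozenset(frozenset((tour[i], tour[(i + 1) % n])) for i in range(n))
        if best is None or c < best:
            best, optimal_edge_sets = c, [edges]
        elif c == best:
            optimal_edge_sets.append(edges)
    return frozenset.intersection(*optimal_edge_sets)

dist = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
print(sorted(tuple(sorted(e)) for e in tsp_backbone(dist)))
# → [(0, 1), (0, 3), (1, 2), (2, 3)]: the unique optimal tour 0-1-2-3-0
#   contributes all four of its edges to the backbone
```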
Is Computational Complexity a Barrier to Manipulation?
Abstract

Cited by 2 (0 self)
Abstract. When agents are acting together, they may need a simple mechanism to decide on joint actions. One possibility is to have the agents express their preferences in the form of a ballot and use a voting rule to decide the winning action(s). Unfortunately, agents may try to manipulate such an election by misreporting their preferences. Fortunately, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. To address this issue, I suggest studying empirically whether computational complexity is in practice a barrier to manipulation. The basic tool used in my investigations is the identification of computational “phase transitions”. Such an approach has been fruitful in identifying hard instances of propositional satisfiability and other NP-hard problems. I show that phase transition behaviour gives insight into the hardness of manipulating voting rules, casting doubt on whether computational complexity is in practice any sort of barrier. Finally, I look at the problem of computing manipulations of other, related problems like stable marriage and tournament problems.
Backbones for Equality
Abstract
Abstract. This paper generalizes the notion of the backbone of a CNF formula to also capture equations between literals. Each such equation can be applied to remove a variable from the original formula, thus simplifying the formula without changing its satisfiability or the number of its satisfying assignments. We prove that for a formula with n variables, the generalized backbone is computed with at most n + 1 satisfiable calls and exactly one unsatisfiable call to the SAT solver. We illustrate the integration of generalized backbone computation to facilitate the encoding of finite-domain constraints to SAT. In this context, generalized backbones are computed for small groups of constraints and then propagated to simplify the entire constraint model. A preliminary experimental evaluation is provided.
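As a sketch of the classical literal backbone this abstract builds on: find one model, then for each candidate literal ask the solver whether its negation is also consistent. A brute-force stand-in plays the role of the SAT solver here (the paper's equality generalization is not shown):

```python
from itertools import product

def solve(n_vars, clauses):
    """Stand-in for a SAT solver: return a satisfying assignment or None."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def backbone(n_vars, clauses):
    """Literals fixed to the same value in every model of the formula.
    One satisfiable call yields the candidates; each survives only if
    asserting its negation makes the formula unsatisfiable."""
    model = solve(n_vars, clauses)
    if model is None:
        return None                      # unsatisfiable: backbone undefined
    candidates = {v if model[v - 1] else -v for v in range(1, n_vars + 1)}
    return {l for l in candidates if solve(n_vars, clauses + [(-l,)]) is None}

print(backbone(2, [(1,), (1, 2)]))   # → {1}: x1 is forced, x2 is free
```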
Exploiting Bounds in Operations Research and Artificial Intelligence
Abstract
Abstract. Combinatorial optimization problems are ubiquitous in scientific research, engineering, and even our daily lives. A major research focus in developing combinatorial search algorithms has been on the attainment of efficient methods for deriving tight lower and upper bounds. These bounds restrict the search space of combinatorial optimization problems and facilitate the solution of what might otherwise be intractable problems. In this paper, we survey the history of the use of bounds in both AI and OR. While research has been extensive in both domains, until very recently it has been too narrowly focused and has overlooked great opportunities to exploit bounds. In the past, the focus has been on the relaxation of constraints. We present methods for deriving bounds by tightening constraints, adding or deleting decision variables, and modifying the objective function. Then a formalization of the use of bounds as a two-step procedure is introduced. Finally, we discuss recent developments demonstrating how the use of this framework is conducive to eliciting methods that go beyond search-tree pruning.
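The role of bounds in restricting the search space can be illustrated with a textbook example; below is a minimal branch-and-bound for 0/1 knapsack (our own illustration, not a method from the paper), where the fractional relaxation supplies the upper bound used for pruning:

```python
def knapsack_bnb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack. The fractional (LP) relaxation
    gives an upper bound at each node; nodes whose bound cannot beat the
    incumbent are pruned."""
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, value, room):
        # greedy fractional fill of remaining items = LP upper bound
        for i in items[idx:]:
            if weights[i] <= room:
                room -= weights[i]; value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def branch(idx, value, room):
        nonlocal best
        best = max(best, value)
        if idx == len(items) or bound(idx, value, room) <= best:
            return                       # pruned: bound can't beat incumbent
        i = items[idx]
        if weights[i] <= room:           # branch: take item i
            branch(idx + 1, value + values[i], room - weights[i])
        branch(idx + 1, value, room)     # branch: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bnb([60, 100, 120], [10, 20, 30], 50))   # → 220
```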
A priori performance measures for arc-based formulations of the Vehicle Routing Problem
, 2006
Abstract
The Vehicle Routing Problem (VRP) is a central problem for many transportation applications, and although it is well known to be difficult to solve, how much of this difficulty is due to the formulation of the problem is less well understood. In this paper, we experimentally investigate how the solution times of a general IP solver on a VRP are affected by the formulation used. The different formulations are evaluated by examining solution efficiency as a function of several a priori performance measures based on the data parameters. Our experimental results show how solution run times are sensitive to problem parameters; in particular, the sensitivity of the formulations to the coefficient of variation of the cost matrix of travel times is explained by two interacting factors.
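The coefficient of variation used here as an a priori measure is simply the ratio of the standard deviation to the mean of the travel costs; a minimal sketch over the off-diagonal entries of the cost matrix (function name ours, not from the paper):

```python
import math

def coefficient_of_variation(cost_matrix):
    """CV of the off-diagonal travel costs: standard deviation / mean.
    A simple a priori measure of how dispersed the costs are."""
    costs = [c for i, row in enumerate(cost_matrix)
               for j, c in enumerate(row) if i != j]
    mean = sum(costs) / len(costs)
    var = sum((c - mean) ** 2 for c in costs) / len(costs)
    return math.sqrt(var) / mean

print(coefficient_of_variation([[0, 2, 2], [2, 0, 2], [2, 2, 0]]))  # → 0.0
```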
Factors that impact solution run times of arc-based formulations of the Vehicle Routing Problem
, 2005
Abstract
It is well known that the Vehicle Routing Problem (VRP) becomes more difficult to solve as the problem size increases. However, little is known about what makes a VRP difficult or easy to solve for problems of the same size. In this paper, we investigate the effect of the formulation and data parameters on the efficiency with which we can obtain exact solutions to the VRP with a general IP solver. Our results show that solution run times for arc-based formulations with exponentially many constraints are mostly insensitive to problem parameters, whereas polynomial arc-based formulations, which can solve larger problems because of their smaller memory requirements, are sensitive to problem parameters. For instance, we observe that solution times for polynomial formulations decrease significantly for larger capacities and numbers of vehicles.
Iterative patching and the asymmetric traveling salesman problem
, 2006
Abstract
Although Branch-and-Bound (BnB) methods are among the most widely used techniques for solving hard problems, it is still a challenge to make these methods smarter. In this paper, we investigate iterative patching, a technique in which a fixed patching procedure is applied at each node of the BnB search tree for the Asymmetric Traveling Salesman Problem. Computational experiments show that iterative patching generally results in search trees that are smaller than classical BnB trees, and that solution times are lower for typical random and sparse instances. Furthermore, it turns out that, on average, iterative patching with the Contract-or-Patch procedure of Glover, Gutin, Yeo and Zverovich (2001) and the Karp–Steele procedure are the fastest, and that ‘iterative’ Modified Karp–Steele patching generates the smallest search trees.