Results 1–10 of 40
Comparing trailing and copying for constraint programming
In Proceedings of the International Conference on Logic Programming, 1999
"... A central service of a constraint programming system is search. In almost all constraint programming systems search is based on trailing, which is well understood and known to be efficient. This paper compares trailing to copying. Copying offers more expressiveness as required by parallel and concur ..."
Abstract

Cited by 36 (4 self)
A central service of a constraint programming system is search. In almost all constraint programming systems search is based on trailing, which is well understood and known to be efficient. This paper compares trailing to copying. Copying offers more expressiveness, as required by parallel and concurrent systems. However, little is known about how trailing compares to copying when it comes to implementation effort, runtime efficiency, and memory requirements. This paper discusses these issues. Execution speed of a copying-based system is shown to be competitive with state-of-the-art trailing-based systems. For the first time, a detailed analysis and comparison with respect to memory usage is made. It is shown how recomputation decreases memory requirements, which can be prohibitive for large problems with copying alone. The paper introduces an adaptive recomputation strategy that is shown to speed up search while keeping memory consumption low. It is demonstrated that copying with recomputation outperforms trailing on large problems with respect to both space and time.
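The copying-versus-recomputation trade-off the abstract describes can be illustrated with a toy sketch (our own encoding, not the paper's Mozart/Oz implementation): a copying engine snapshots the state at every branch point, while a recomputation engine stores only the path of branching decisions and rebuilds states from the root on demand.

```python
# Toy illustration of copying vs. recomputation in tree search.
# All names and the list-based "state" are illustrative assumptions.
import copy

def explore_with_copying(state, depth, leaves):
    """Keep a full copy of the state before each branch (copying)."""
    if depth == 0:
        leaves.append(tuple(state))
        return
    for decision in (0, 1):
        child = copy.deepcopy(state)   # snapshot: O(state) memory per node
        child.append(decision)
        explore_with_copying(child, depth - 1, leaves)

def recompute(root, path):
    """Rebuild a node's state by replaying decisions from the root."""
    state = list(root)
    for decision in path:
        state.append(decision)         # replay each branching step
    return state

def explore_with_recomputation(root, depth, leaves, path=()):
    """Store only the decision path; recompute states on demand."""
    if depth == 0:
        leaves.append(tuple(recompute(root, path)))
        return
    for decision in (0, 1):
        explore_with_recomputation(root, depth - 1, leaves, path + (decision,))

a, b = [], []
explore_with_copying([], 3, a)
explore_with_recomputation([], 3, b)
assert a == b                          # both strategies visit the same leaves
```

The paper's adaptive strategy sits between the two extremes, copying only every few levels so that recomputation replays short paths; the sketch shows only the two pure endpoints.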
Controlling Search in Declarative Programs
In Principles of Declarative Programming (Proc. Joint International Symposium PLILP/ALP’98), 1998
"... Logic languages can deal with nondeterministic computations via builtin search facilities. However, standard search methods like global backtracking are often not sufficient and a source of many programming errors. Therefore, we propose the addition of a single primitive to logicoriented language ..."
Abstract

Cited by 29 (21 self)
Logic languages can deal with non-deterministic computations via built-in search facilities. However, standard search methods like global backtracking are often not sufficient and are a source of many programming errors. Therefore, we propose the addition of a single primitive to logic-oriented languages to control non-deterministic computation steps. Based on this primitive, a number of different search strategies can be easily implemented. These search operators can be applied if the standard search facilities are not successful, or to encapsulate search. The latter is important if logic programs interact with the (non-backtrackable) outside world. We define the search control primitive based on an abstract notion of computation steps so that it can be integrated into various logic-oriented languages, but to provide concrete examples we also present the integration of such a control primitive into the multi-paradigm declarative language Curry. The lazy evaluation strategy of Curry ...
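The idea of building different search strategies on a single step primitive can be sketched as follows. This is a hedged toy in Python, not Curry's actual primitive: a goal is modeled as a tree of `('solved', v)`, `('failed',)`, and `('choice', subgoals)` nodes (an assumed encoding), `step` stands in for one non-deterministic computation step, and depth-first and breadth-first search are then ordinary functions over it.

```python
# Toy model of a search-control primitive; encoding is our assumption.
from collections import deque

def step(goal):
    """Perform one computation step on a goal. In a real system this
    would evaluate the goal; here goals are already-expanded trees."""
    return goal

def dfs(goal):
    """Depth-first search programmed on top of the step primitive."""
    tag = goal[0]
    if tag == 'solved':
        return [goal[1]]
    if tag == 'failed':
        return []
    return [s for g in goal[1] for s in dfs(step(g))]   # 'choice' node

def bfs(goal):
    """Breadth-first search over the same primitive."""
    out, queue = [], deque([goal])
    while queue:
        g = step(queue.popleft())
        if g[0] == 'solved':
            out.append(g[1])
        elif g[0] == 'choice':
            queue.extend(g[1])
    return out

tree = ('choice', [('choice', [('solved', 1), ('failed',)]),
                   ('solved', 2)])
assert dfs(tree) == [1, 2]
assert bfs(tree) == [2, 1]   # BFS finds the shallow solution first
```

Because the strategies only ever consume what `step` returns, search stays encapsulated: no backtracking leaks into the surrounding program, which is the point the abstract makes about interacting with the outside world.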
Efficient Logic Variables for Distributed Computing
"... We define a practical algorithm for distributed rational tree unification and prove its correctness in both the offline and online cases. We derive the distributed algorithm from a centralized one, showing clearly the tradeoffs between local and distributed execution. The algorithm is used to rea ..."
Abstract

Cited by 23 (13 self)
We define a practical algorithm for distributed rational tree unification and prove its correctness in both the offline and online cases. We derive the distributed algorithm from a centralized one, showing clearly the tradeoffs between local and distributed execution. The algorithm is used to realize logic variables in the Mozart Programming System, which implements the Oz language (see ...
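For orientation, the centralized starting point of such a derivation can be sketched as classic unification over logic variables. This toy handles only finite terms at a single site; the paper's contribution is extending correctness to rational (cyclic) trees and to the distributed case. The term encoding is our assumption: capitalized strings are variables, lowercase strings are atoms, tuples are compound terms.

```python
# Toy centralized unification of logic variables (finite terms only).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Dereference a variable through the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Return an extended substitution, or None on clash."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):                     # unbound variable: bind it
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):       # unify argument lists pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                        # distinct atoms/constructors clash

# f(X, b) = f(a, Y)  binds  X -> a, Y -> b
s = unify(('f', 'X', 'b'), ('f', 'a', 'Y'), {})
assert walk('X', s) == 'a' and walk('Y', s) == 'b'
```

In the distributed setting each binding additionally has to be agreed upon across sites, which is where the paper's local-versus-distributed trade-offs arise.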
A Proof Theoretic View of Constraint Programming, 1998
"... We provide here a proof theoretic account of constraint programming that attempts to capture the essential ingredients of this programming style. We exemplify it by presenting proof rules for linear constraints over interval domains, and illustrate their use by analyzing the constraint propagation p ..."
Abstract

Cited by 23 (8 self)
We provide here a proof theoretic account of constraint programming that attempts to capture the essential ingredients of this programming style. We exemplify it by presenting proof rules for linear constraints over interval domains, and illustrate their use by analyzing the constraint propagation process for the SEND + MORE = MONEY puzzle. We also show how this approach allows one to build new constraint solvers.

1 Introduction
1.1 Motivation
One of the most interesting recent developments in the area of programming has been constraint programming. A prominent instance of it is constraint logic programming, exemplified by such programming languages as CLP(R), Prolog III or ECLiPSe. But recently imperative constraint programming languages have also emerged, such as 2LP of McAloon & Tretkoff (1995) or CLAIRE of Caseau & Laburthe (1996). (For an overview of this area and related references see Hentenryck, Saraswat et al. (1996).) The aim of this paper is to explain the essence of t...
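Interval narrowing of the kind the abstract mentions can be sketched for a single linear constraint. This is a hedged illustration of the general technique (proof rules applied to a fixpoint, as in puzzles like SEND + MORE = MONEY), not the paper's exact rule set; the encoding of domains as `(lo, hi)` pairs is our assumption.

```python
# Interval narrowing for the constraint x + y = z, iterated to a fixpoint.
def narrow_sum(x, y, z):
    """Narrow (lo, hi) domains so that x + y = z stays satisfiable."""
    xl, xh = x; yl, yh = y; zl, zh = z
    # z in [xl+yl, xh+yh]; x in [zl-yh, zh-yl]; y in [zl-xh, zh-xl]
    zl, zh = max(zl, xl + yl), min(zh, xh + yh)
    xl, xh = max(xl, zl - yh), min(xh, zh - yl)
    yl, yh = max(yl, zl - xh), min(yh, zh - xl)
    return (xl, xh), (yl, yh), (zl, zh)

def propagate(x, y, z):
    """Apply the narrowing rule until nothing changes (fixpoint)."""
    while True:
        nx, ny, nz = narrow_sum(x, y, z)
        if (nx, ny, nz) == (x, y, z):
            return x, y, z
        x, y, z = nx, ny, nz

# Digits x, y in [0,9] with x + y = z and z known to lie in [15,20]:
x, y, z = propagate((0, 9), (0, 9), (15, 20))
assert x == (6, 9) and y == (6, 9) and z == (15, 18)
```

Each narrowing step is exactly an application of a proof rule for the constraint; propagation is the repeated application of such rules until a fixpoint, which is the process the paper analyzes for the full puzzle.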
Constraint and Integer Programming in OPL
INFORMS Journal on Computing, 2002
"... In recent years, it has been increasingly recognized that constraint and integer programming have orthogonal and complementary strengths in stating and solving combinatorial optimization applications. In addition, their integration has become an active research topic. The optimization programming la ..."
Abstract

Cited by 20 (6 self)
In recent years, it has been increasingly recognized that constraint and integer programming have orthogonal and complementary strengths in stating and solving combinatorial optimization applications. In addition, their integration has become an active research topic. The optimization programming language OPL was a first attempt at integrating these technologies at both the language and the solver levels. In particular, OPL is a modeling language integrating the rich language of constraint programming and the ability to specify search procedures at a high level of abstraction. Its implementation includes both constraint and mathematical programming solvers, as well as some cooperation schemes to make them collaborate on a given problem. The purpose of this paper is to illustrate, using OPL, the constraint-programming approach to combinatorial optimization and the complementary strengths of constraint and integer programming. (Artificial Intelligence; Computer Science; Integer Programming)
Search and Strategies in OPL, 2000
"... OPL is a modeling language for mathematical programming and combinatorial optimization. It is the first language to combine highlevel algebraic and set notations from mathematical modeling languages with a rich constraint language and the ability to specify search procedures and strategies that are ..."
Abstract

Cited by 19 (2 self)
OPL is a modeling language for mathematical programming and combinatorial optimization. It is the first language to combine high-level algebraic and set notations from mathematical modeling languages with a rich constraint language and the ability to specify search procedures and strategies that are the essence of constraint programming. This paper describes the facilities available in OPL to specify search procedures. It describes the abstractions of OPL for specifying both the search tree (search) and how to explore it (strategies). The paper also illustrates how to use these high-level constructs to implement traditional search procedures in constraint programming and scheduling.
Programming Deep Concurrent Constraint Combinators
Practical Aspects of Declarative Languages, Second International Workshop, PADL 2000, volume 1753 of Lecture Notes in Computer Science, 2000
"... Constraint combination methods are essential for a flexible constraint programming system. This paper presents deep concurrent constraint combinators based on computation spaces as combination mechanism. It introduces primitives and techniques needed to program constraint combinators from computa ..."
Abstract

Cited by 19 (3 self)
Constraint combination methods are essential for a flexible constraint programming system. This paper presents deep concurrent constraint combinators based on computation spaces as the combination mechanism. It introduces the primitives and techniques needed to program constraint combinators from computation spaces. The paper applies computation spaces to a broad range of combinators: negation, generalized reification, disjunction, and implication. Even though computation spaces were conceived in the context of Oz, they are largely programming-language independent. This point is stressed by discussing them here in the context of Standard ML with concurrency features.
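The flavor of a space-based combinator can be conveyed with a minimal sketch. This is our own Python toy, not Oz's actual Space API: a "space" speculatively runs a goal and reports its outcome without letting any effect escape, and negation is then programmed as "succeed exactly when the space fails".

```python
# Toy "computation space" encapsulation; names are illustrative assumptions.
def run_space(goal, *args):
    """Speculatively run a goal and report its outcome. Encapsulation
    means failure inside the space never propagates to the caller."""
    try:
        return 'succeeded' if goal(*args) else 'failed'
    except Exception:
        return 'failed'

def negate(goal):
    """Negation combinator: succeeds exactly when the goal's space fails."""
    def negated(*args):
        return run_space(goal, *args) == 'failed'
    return negated

is_even = lambda n: n % 2 == 0
is_odd = negate(is_even)
assert is_odd(3) and not is_odd(4)
```

Real computation spaces additionally support inspecting partial results, merging a succeeded space back into the parent, and committing to alternatives, which is what makes the richer combinators (reification, disjunction, implication) programmable.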
Parallel Search Made Simple
University of Singapore, 2000
"... . Search in constraint programming is a time consuming task. Search can be speeded up by exploring subtrees of a search tree in parallel. This paper presents distributed search engines that achieve parallelism by distribution across networked computers. The main point of the paper is a simple de ..."
Abstract

Cited by 17 (2 self)
Search in constraint programming is a time-consuming task. Search can be sped up by exploring subtrees of a search tree in parallel. This paper presents distributed search engines that achieve parallelism by distribution across networked computers. The main point of the paper is a simple design of the parallel search engine. Simplicity comes as an immediate consequence of clearly separating search, concurrency, and distribution. The obtained distributed search engines are simple yet offer substantial speedup on standard network computers.

1 Introduction
Search in constraint programming is a time-consuming task. Search can be sped up by exploring several subtrees of a search tree in parallel by cooperating search engines called workers. The paper develops search engines that achieve parallelism by distributing workers across standard networked computers. The paper has two main points. The first point is to provide a simple, high-level, and reusable design for parallel s...
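The worker design can be sketched in miniature: a manager splits the search tree at the root and hands each subtree to a worker, and workers explore their subtrees independently and report solutions back. In this hedged toy the workers are threads on one machine (the paper distributes them across networked computers), and the search problem itself is a stand-in.

```python
# Toy parallel search: one worker per root subtree; all names are ours.
from concurrent.futures import ThreadPoolExecutor

def explore(path, depth, target):
    """Sequential DFS of one subtree; collects 0/1 assignments of the
    remaining `depth` variables whose total sum equals `target`."""
    if depth == 0:
        return [path] if sum(path) == target else []
    out = []
    for v in (0, 1):
        out += explore(path + (v,), depth - 1, target)
    return out

def parallel_search(depth, target, workers=2):
    """Split at the root: one subtree per top-level decision, then let
    workers explore the subtrees independently and merge the results."""
    subtrees = [(v,) for v in (0, 1)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: explore(p, depth - 1, target), subtrees)
    return [sol for subtree in results for sol in subtree]

sols = parallel_search(depth=4, target=3)
assert sorted(sols) == sorted(explore((), 4, 3))
```

The separation the paper advocates is visible even here: `explore` knows nothing about concurrency, and the manager knows nothing about the search itself; swapping the thread pool for networked workers would leave `explore` untouched.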