Results 1–10 of 148
Automatic Translation of FORTRAN Programs to Vector Form
 ACM Transactions on Programming Languages and Systems
, 1987
Cited by 293 (32 self)
This paper discusses the theoretical concepts underlying a project at Rice University to develop an automatic translator, called PFC (for Parallel FORTRAN Converter), from FORTRAN to FORTRAN 8x. The Rice project, based initially upon the research of Kuck and others at the University of Illinois [6, 17-21, 24, 32, 36], is a continuation of work begun while on leave at IBM Research in Yorktown Heights, N.Y. Our first implementation was based on the Illinois PARAFRASE compiler [20, 36], but the current version is a completely new program (although it performs many of the same transformations as PARAFRASE). Other projects that have influenced our work are the Texas Instruments ASC compiler [9, 33], the Cray-1 FORTRAN compiler [15], and the Massachusetts Computer Associates Vectorizer [22, 25]. The paper is organized into seven sections. Section 2 introduces FORTRAN 8x and gives examples of its use. Section 3 presents an overview of the translation process along with an extended translation example. Section 4 develops the concept of interstatement dependence and shows how it can be applied to the problem of vectorization. Loop-carried dependence and loop-independent dependence are introduced in this section to extend dependence to multiple statements and multiple loops. Section 5 develops dependence-based algorithms for code generation and transformations for enhancing the parallelism of a statement. Section 6 describes a method for extending the power of data dependence to control statements by the process of IF conversion. Finally, Section 7 details the current state of PFC and our plans for its continued development.
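The distinction drawn in Section 4 between loop-independent and loop-carried dependence is the heart of vectorization. As a hedged illustration (in Python rather than FORTRAN, and not PFC's actual analysis), the first loop below has only loop-independent dependences and can run all iterations at once, while the second carries a dependence from each iteration to the next:

```python
# Sketch (not PFC itself): loop-carried vs. loop-independent dependence.
b = [1, 2, 3, 4]
c = [10, 20, 30, 40]

# Loop-independent: iteration i reads only b[i] and c[i], so every
# iteration can execute simultaneously -- a vectorizable loop.
a = [0] * 4
for i in range(4):
    a[i] = b[i] + c[i]
vectorized = [bi + ci for bi, ci in zip(b, c)]  # "all at once"
assert a == vectorized

# Loop-carried: iteration i reads s[i-1], written by iteration i-1,
# so the statement depends on itself across iterations (a recurrence)
# and cannot be vectorized by simple statement-at-a-time rewriting.
s = [0] * 5
for i in range(1, 5):
    s[i] = s[i - 1] + b[i - 1]
assert s == [0, 1, 3, 6, 10]
```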
A Knowledge Compilation Map
 Journal of Artificial Intelligence Research
, 2002
Cited by 157 (22 self)
We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime.
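As a concrete (hypothetical) instance of the paper's two dimensions, consistency checking is a query that DNNF-style target languages support in polytime. A minimal sketch, assuming circuits as nested tuples and taking decomposability on faith:

```python
# A minimal sketch of a polytime query on a compiled form: consistency
# checking of a DNNF circuit (nested tuples; 'lit' holds a signed integer).
# Decomposability (AND-children share no variables) is assumed, not checked.
def consistent(node):
    tag = node[0]
    if tag == 'lit':
        return True                      # a single literal is satisfiable
    if tag == 'and':
        return all(consistent(c) for c in node[1:])
    if tag == 'or':
        return any(consistent(c) for c in node[1:])
    raise ValueError(tag)

# (x1 AND x2) OR (NOT x1 AND x3): consistent.
f = ('or',
     ('and', ('lit', 1), ('lit', 2)),
     ('and', ('lit', -1), ('lit', 3)))
assert consistent(f)
assert not consistent(('or',))           # empty disjunction: inconsistent
```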
Non-Standard Reasoning Services for the Debugging of Description Logic Terminologies
, 2003
Cited by 106 (5 self)
Current Description Logic reasoning systems provide only limited support for debugging logically erroneous knowledge bases. In this paper we propose new non-standard reasoning services which we designed and implemented to pinpoint logical contradictions when developing the medical terminology DICE. We provide complete algorithms for unfoldable ALC TBoxes based on minimisation of axioms using Boolean methods for minimal unsatisfiability-preserving sub-TBoxes, and an incomplete bottom-up method for generalised incoherence-preserving terminologies.
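The idea of pinpointing via minimal unsatisfiability-preserving sub-TBoxes can be illustrated on a propositional stand-in (this brute-force search is for intuition only; it is not the paper's complete algorithm for DL TBoxes):

```python
# Illustration only (propositional stand-in for DL axioms): find all
# minimal unsatisfiable subsets of a clause set by brute-force search,
# mirroring the idea of minimal unsatisfiability-preserving sub-TBoxes.
from itertools import combinations, product

def satisfiable(clauses):
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_unsat_subsets(clauses):
    found = []
    for k in range(1, len(clauses) + 1):
        for sub in combinations(clauses, k):
            if not satisfiable(list(sub)) and \
               not any(set(m) <= set(sub) for m in found):
                found.append(sub)           # minimal: no found subset inside
    return found

# Axioms {x}, {-x}, {y}: the only minimal conflict is {{x}, {-x}}.
cls = [(1,), (-1,), (2,)]
assert minimal_unsat_subsets(cls) == [((1,), (-1,))]
```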
Mini: A heuristic approach for logic minimization
 IBM Journal of Research and Development
, 1974
Cited by 51 (0 self)
Abstract: MINI is a heuristic logic minimization technique for many-variable problems. It accepts as input a Boolean logic specification expressed as an input-output table, thus avoiding a long list of minterms. It seeks a minimal implicant solution, without generating all prime implicants, which can be converted to prime implicants if desired. New and effective subprocesses, such as expanding, reshaping, and removing redundancy from cubes, are iterated until there is no further reduction in the solution. The process is general in that it can minimize both conventional logic and logic functions of multivalued variables.
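MINI's expand step can be caricatured in a few lines: grow a cube by dropping literals as long as the enlarged cube stays disjoint from the OFF-set. The data structures below (cubes as dicts, OFF-set as explicit minterms) are illustrative assumptions, not MINI's representation:

```python
# A toy version (assumed, not MINI's actual data structures) of MINI's
# "expand" step: enlarge a cube by dropping literals while the enlarged
# cube stays disjoint from the OFF-set.
def covers(cube, minterm):
    return all(minterm[v] == b for v, b in cube.items())

def expand(cube, off_set):
    cube = dict(cube)
    for var in list(cube):
        trial = {v: b for v, b in cube.items() if v != var}
        if not any(covers(trial, m) for m in off_set):
            cube = trial                 # literal was unnecessary: drop it
    return cube

# f(x0,x1,x2) with OFF-set {110, 111}: the minterm 101 expands to x1'.
off = [(1, 1, 0), (1, 1, 1)]
start = {0: 1, 1: 0, 2: 1}
assert expand(start, off) == {1: 0}
```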
G.: Logic-based Benders decomposition
 Mathematical Programming
, 2003
Cited by 44 (10 self)
Benders decomposition uses a strategy of “learning from one’s mistakes.” The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an “inference dual.” Solution of the inference dual takes the form of a logical deduction that yields Benders cuts. The dual is therefore very different from other generalized duals that have been proposed. The approach is illustrated by working out the details for propositional satisfiability and 0-1 programming problems. Computational tests are carried out for the latter, but the most promising contribution of logic-based Benders may be to provide a framework for combining optimization and constraint programming methods.
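The overall loop (master proposes values, the inference dual explains a failure, the explanation comes back as a cut) can be sketched on a toy 0-1 problem. Everything here, the cost table, the hidden side constraint, and the weakest possible nogood cut, is a made-up illustration, not the paper's formulation:

```python
# A deliberately tiny sketch of the logic-based Benders loop: the master
# proposes 0-1 values, the subproblem's "inference dual" explains any
# failure, and the explanation returns to the master as a nogood cut.
from itertools import product

cost = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def subproblem_feasible(x):
    # Hypothetical side constraint the master cannot see: x0 + x1 >= 1.
    return x[0] + x[1] >= 1

def solve():
    cuts = []                            # each cut forbids one assignment
    while True:
        candidates = [x for x in product([0, 1], repeat=2)
                      if x not in cuts]
        x = min(candidates, key=cost.get)    # master: cheapest survivor
        if subproblem_feasible(x):
            return x
        cuts.append(x)                   # weakest valid Benders cut

assert solve() == (0, 1)
```

A real implementation would derive stronger cuts from the subproblem's deduction; the nogood above merely excludes the single failed assignment.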
The Minimum Equivalent DNF Problem and Shortest Implicants
, 1998
Cited by 42 (4 self)
We prove that the Minimum Equivalent DNF problem is Σ^p_2-complete, resolving a conjecture due to Stockmeyer. The proof involves as an intermediate step a variant of a related problem in logic minimization, namely, that of finding the shortest implicant of a Boolean function. We also obtain certain results concerning the complexity of the Shortest Implicant problem that may be of independent interest. When the input is a formula, the Shortest Implicant problem is Σ^p_2-complete, and Σ^p_2-hard to approximate to within an n^{1/2-ε} factor. When the input is a circuit, approximation is Σ^p_2-hard to within an n^{1-ε} factor. However, when the input is a DNF formula, the Shortest Implicant problem cannot be Σ^p_2-complete unless Σ^p_2 = NP^{NP[log² n]}. 1. Introduction: Two-level (DNF) logic minimization is a central practical problem in logic synthesis and also one of the more natural problems in the polynomial hierarchy....
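To make the object of study concrete, here is a brute-force shortest-implicant search (exponential time, naturally; the paper's point is that substantially better algorithms are unlikely). The term representation is an assumption for illustration:

```python
# Brute-force illustration of the Shortest Implicant problem: search
# terms by literal count and return the first term that implies f.
from itertools import combinations, product

def shortest_implicant(f, n):
    # A term maps variable index -> required bit value.
    for k in range(n + 1):
        for vs in combinations(range(n), k):
            for bits in product([0, 1], repeat=k):
                term = dict(zip(vs, bits))
                # Implicant check: f holds on every assignment the
                # term admits (exponential in n, by design).
                if all(f(a) for a in product([0, 1], repeat=n)
                       if all(a[v] == b for v, b in term.items())):
                    return term
    return None                          # f is unsatisfiable

# f = x0 OR (x1 AND x2): the shortest implicant is the single literal x0.
f = lambda a: a[0] or (a[1] and a[2])
assert shortest_implicant(f, 3) == {0: 1}
```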
A Continuous Approach to Inductive Inference
 Mathematical Programming
, 1992
Cited by 38 (2 self)
In this paper we describe an interior point mathematical programming approach to inductive inference. We list several versions of this problem and study in detail the formulation based on hidden Boolean logic. We consider the problem of identifying a hidden Boolean function F : {0,1}^n → {0,1} using outputs obtained by applying a limited number of random inputs to the hidden function. Given this input-output sample, we give a method to synthesize a Boolean function that describes the sample. We pose the Boolean Function Synthesis Problem as a particular type of Satisfiability Problem. The Satisfiability Problem is translated into an integer programming feasibility problem, that is solved with an interior point algorithm for integer programming. A similar integer programming implementation has been used in a previous study to solve randomly generated instances of the Satisfiability Problem. In this paper we introduce a new variant of this algorithm, where the Riemannian metric used...
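The synthesis goal itself (not the paper's interior-point method) can be shown with a deliberately trivial stand-in that builds a DNF with one term per positive sample:

```python
# A stand-in (nothing like the paper's interior-point algorithm) for the
# Boolean Function Synthesis goal: given input-output samples, build a
# DNF that reproduces them. Here we simply take one full minterm per
# positive sample -- trivially consistent, maximally overfit synthesis.
def synthesize_dnf(samples):
    return [x for x, y in samples if y == 1]

def evaluate(dnf, x):
    return int(any(term == x for term in dnf))

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
dnf = synthesize_dnf(samples)
assert all(evaluate(dnf, x) == y for x, y in samples)
```

The paper instead searches for a compact consistent function by encoding the synthesis constraints as an integer programming feasibility problem.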
F.: Debugging incoherent terminologies
 In: Journal of Automated Reasoning
Cited by 36 (3 self)
Abstract. In this paper we study the diagnosis and repair of incoherent terminologies. We define a number of new non-standard reasoning services to explain incoherence through pinpointing, and we present algorithms for all of these services. For one of the core tasks of debugging, the calculation of minimal unsatisfiability-preserving subterminologies, we developed two different algorithms, one implementing a bottom-up approach using support of an external DL reasoner, the other implementing a specialized tableau-based calculus. Both algorithms have been prototypically implemented. We study the effectiveness of our algorithms in two ways: we present a realistic case study where we diagnose a terminology used in a practical application, and we perform controlled benchmark experiments to get a better understanding of the computational properties of our algorithms in particular, and of the debugging problem in general.
Solving Covering Problems Using LPR-Based Lower Bounds
 In Proceedings of the ACM/IEEE Design Automation Conference
, 1996
Cited by 34 (1 self)
Unate and binate covering problems are a special class of general integer linear programming problems with which several problems in logic synthesis, such as two-level logic minimization and technology mapping, are formulated. Previous branch-and-bound methods for exactly solving these problems use lower-bounding techniques based on finding maximal independent sets. In this paper we examine lower-bounding techniques based on linear programming relaxation (LPR) for the binate covering problem. We show that a combination of traditional reductions (essentiality and dominance) and incremental computation of LPR-based lower bounds can exactly solve difficult covering problems orders of magnitude faster than traditional methods. Keywords: Covering problems, integer linear programming. I. INTRODUCTION: Covering problems (unate and binate) are important combinatorial optimization problems with which several problems in logic synthesis (such as two-level logic minimization [12], state minimizati...
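For intuition, here is the traditional independent-set lower bound inside a small branch-and-bound for unate covering, i.e., the baseline the paper's LPR bounds improve on. Rows are sets of column indices, and the unit-cost setting is an assumed simplification:

```python
# Sketch of the *traditional* lower bound the paper improves on: in unate
# covering, pairwise-disjoint uncovered rows need distinct columns, so a
# greedy independent set of rows bounds the remaining cost from below.
def lower_bound(rows):
    chosen, used = 0, set()
    for r in rows:
        if not (r & used):               # r shares no column with the set
            chosen += 1
            used |= r
    return chosen

def min_cover(rows, picked=frozenset(), best=None):
    rows = [r for r in rows if not (r & picked)]   # drop covered rows
    if not rows:
        return picked
    if best is not None and len(picked) + lower_bound(rows) >= len(best):
        return best                                # pruned by the bound
    for col in sorted(rows[0]):                    # branch on first row
        cand = min_cover(rows, picked | {col}, best)
        if best is None or len(cand) < len(best):
            best = cand
    return best

rows = [{0, 1}, {1, 2}, {2, 3}]
result = min_cover(rows)
assert len(result) == 2 and all(r & result for r in rows)
```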
Simplification of Quantifier-free Formulas over Ordered Fields
 Journal of Symbolic Computation
, 1995
Cited by 34 (15 self)
this article is to provide a collection of practicable methods that have been implemented and extensively tested for their relevance. We further show how to combine different ideas for simplification in such a way that a formula is obtained which cannot be further simplified with any of the described methods. In other words, our simplifiers viewed as a function are idempotent. Achieving this is by no means trivial. On the algorithmic side, we introduce the concept of a background theory that is implicitly enlarged when entering a formula for simplification. Originally developed for detecting interactions between atomic formulas on different Boolean levels, it has turned out that this concept also captures other simplifiers that we had developed some time ago. These simplifiers, namely the Gröbner simplifier and the Tableau simplifiers, could even be generalized due to this new viewpoint. 1.1. Definitions: Our formulas combine atomic formulas using the Boolean connectives "∧," "∨," "→," "←," "↔," and "¬." Conjunction and disjunction are not binary but allow an arbitrary number of arguments. The atomic formulas are equations constructed with "=,"
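The idempotence requirement can be demonstrated with a toy rule-based simplifier (a sketch under assumed tuple syntax, far simpler than the article's ordered-field simplifiers): rules run to a fixed point, so simplifying twice changes nothing:

```python
# Sketch of the article's idempotence requirement: apply simplification
# rules to a fixed point, so simplify(simplify(f)) == simplify(f).
def simplify(f):
    prev = None
    while f != prev:                     # iterate rules until stable
        prev, f = f, step(f)
    return f

def step(f):
    if not isinstance(f, tuple):
        return f                         # atom or Boolean constant
    op, *args = f
    args = [step(a) for a in args]       # n-ary AND/OR, as in the article
    if op == 'and':
        if False in args:
            return False
        args = [a for a in args if a is not True]
        return args[0] if len(args) == 1 else (True if not args
                                               else ('and', *args))
    if op == 'or':
        if True in args:
            return True
        args = [a for a in args if a is not False]
        return args[0] if len(args) == 1 else (False if not args
                                               else ('or', *args))
    if op == 'not':
        return (not args[0]) if isinstance(args[0], bool) else ('not', args[0])
    return (op, *args)

f = ('and', 'x', ('or', False, ('not', False)))
s = simplify(f)
assert s == 'x' and simplify(s) == s     # idempotent
```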