Results 1–10 of 23
Models of Computation: Exploring the Power of Computing
Abstract

Cited by 57 (7 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although
On Unapproximable Versions of NP-Complete Problems
Abstract

Cited by 35 (1 self)
We prove that all of Karp's 21 original NP-complete problems have a version that's hard to approximate. These versions are obtained from the original problems by adding essentially the same, simple constraint. We further show that these problems are absurdly hard to approximate. In fact, no polynomial-time algorithm can even approximate log^(k) of the magnitude of these problems to within any constant factor, where log^(k) denotes the logarithm iterated k times, unless NP is recognized by slightly superpolynomial randomized machines. We use the same technique to improve the constant ε such that MAX CLIQUE is hard to approximate to within a factor of n^ε. Finally, we show that it is even harder to approximate two counting problems: counting the number of satisfying assignments to a monotone 2SAT formula and computing the permanent of −1,0,1 matrices. Key words. NP-complete, unapproximable, randomized reduction, clique, counting problems, permanent, 2SAT AMS subject clas...
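The iterated logarithm log^(k) used in the hardness bound above can be made concrete with a short sketch (the function name `iterated_log` is mine, not the paper's):

```python
import math

def iterated_log(x: float, k: int) -> float:
    """Apply the natural logarithm k times: log(log(...log(x)...))."""
    for _ in range(k):
        x = math.log(x)
    return x

# log^(2)(e^e) = log(log(e^e)) = log(e) = 1
assert abs(iterated_log(math.e ** math.e, 2) - 1.0) < 1e-9
```

Even for k = 2 the function grows extremely slowly, which is what makes a bound of the form "not approximable to within any constant factor of log^(k) of the magnitude" so strong.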
Dynamic Circuit Generation for Solving Specific Problem Instances of Boolean Satisfiability
 In IEEE Symposium on FPGAs for Custom Computing Machines
, 1998
Abstract

Cited by 16 (0 self)
Optimization and query problems provide the clearest opportunity for configurable computing systems to achieve a significant performance advantage over ASICs. Programmable hardware can be optimized to solve a specific problem instance that only needs to be solved once, and the circuit can be thrown away after its single execution. This paper investigates the applicability of this technology to solving a specific query problem, known as Boolean Satisfiability. We provide a system for capturing the complete execution cost of this approach, by accounting for CAD tool execution time. The key to this approach is to circumvent the standard CAD tools and directly generate circuits at runtime. A set of example circuits is presented as part of the system evaluation, and a complete implementation on the Xilinx XC6216 FPGA is presented.
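For readers unfamiliar with the underlying query problem, a minimal software formulation of Boolean Satisfiability is sketched below. This is only a reference brute-force search over assignments, not the paper's instance-specific hardware circuits; the function name and clause encoding are my own choices:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search for a satisfying assignment of a CNF formula.

    clauses: list of clauses; each clause is a list of non-zero ints,
    where literal +i means "variable i is true" and -i means "false".
    Returns a tuple of booleans, or None if the formula is unsatisfiable.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2): satisfied by x1 = False, x2 = True
assert brute_force_sat([[1, 2], [-1, 2]], 2) == (False, True)
# (x1) and (not x1): unsatisfiable
assert brute_force_sat([[1], [-1]], 1) is None
```

The exponential cost of this search over 2^n assignments is precisely what motivates compiling a given instance directly into hardware that evaluates many candidate assignments in parallel.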
The Complexity of Verifying Memory Coherence
 In Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA)
, 2003
A SAT Solver Using Reconfigurable Hardware and Virtual Logic
 Journal of Automated Reasoning
, 2000
Abstract

Cited by 13 (0 self)
In this paper, we present the architecture of a new SAT solver using reconfigurable logic and a virtual logic scheme. Our main contributions include new forms of massive fine-grain parallelism, structured design techniques based on iterative logic arrays that reduce compilation times from hours to minutes, and a decomposition technique that creates independent subproblems that may be concurrently solved by unconnected FPGAs. The decomposition technique is the basis of the virtual logic scheme, since it allows solving problems that exceed the hardware capacity. Our architecture is easily scalable. Our results show several orders of magnitude speedup compared with a state-of-the-art software implementation, and also with respect to prior SAT solvers using reconfigurable hardware.
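The abstract does not specify the decomposition itself; one standard way to create independent SAT subproblems is a Shannon-style split on a single variable, sketched here as an illustration (this is an assumption of mine, not necessarily the authors' actual technique):

```python
def split_on_variable(clauses, var):
    """Shannon-style split of a CNF formula on one variable.

    Returns the two residual formulas obtained by fixing `var` to True
    and to False. The two subproblems share no constraint on `var`, so
    they can be solved independently (e.g. on separate devices); the
    original formula is satisfiable iff either residual formula is.
    """
    def assign(clauses, lit):
        # Drop clauses satisfied by `lit`; delete the falsified literal
        # -lit from the remaining clauses. An empty residual clause
        # signals a conflict on that branch.
        return [[l for l in c if l != -lit] for c in clauses if lit not in c]
    return assign(clauses, var), assign(clauses, -var)

pos, neg = split_on_variable([[1, 2], [-1, 3]], 1)
assert pos == [[3]]   # x1 = True leaves only (x3)
assert neg == [[2]]   # x1 = False leaves only (x2)
```

Splitting recursively on several variables yields many mutually independent residual formulas, which is one plausible way a problem larger than the hardware capacity could be partitioned across devices.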
Reconfigurable hardware SAT solvers: A survey of systems
 Proceedings of the 13th International Conference on Field-Programmable Logic and Applications (FPL 2003)
, 2003
Abstract

Cited by 11 (6 self)
By adapting to computations that are not so well-supported by general-purpose processors, reconfigurable systems achieve significant increases in performance. Such computational systems use high-capacity programmable logic devices and are based on processing units customized to the requirements of a particular application. A great deal of the research effort in this area is aimed at accelerating the solution of combinatorial optimization problems. Special attention in this context was given to the Boolean satisfiability (SAT) problem, resulting in a considerable number of different architectures being proposed. This paper presents the state-of-the-art in reconfigurable hardware SAT solvers. The analysis and classification of existing systems has been performed according to such criteria as algorithmic issues, reconfiguration modes, the execution model, the programming model, logic capacity, and performance. Index Terms—Boolean satisfiability, reconfigurable computing, FPGA, hardware acceleration.
Mathematical definition of “intelligence” (and consequences)
, 2006
Abstract

Cited by 5 (0 self)
In §9 we propose an abstract mathematical definition of, and practical way to measure, “intelligence.” Before that is much motivating discussion and arguments why it is a good definition, and after it we deduce several important consequences – fundamental theorems about intelligence. The most important (theorem 5 of §12) is our construction of an algorithm that implements an “asymptotically uniformly competitive intelligence” (UACI). Although our definition of intelligence initially seems “multidimensional” – two entities would seem capable of being relatively more or less intelligent independently in each of an infinite number of “dimensions” of intelligence – the UACI is an intelligent entity that is simultaneously as intelligent as any other entity (asymptotically) in every dimension. This in a considerable sense
Relative Complexity of Algebras
, 1981
Abstract

Cited by 2 (0 self)
A simple algebraic model is proposed for measuring the relative complexity of programming systems. The appropriateness of this model is illustrated by its use as a framework for the statement and proof of results dealing with coding-independent limitations on the relative complexity of basic algebras.