Results 1 – 9 of 9
Time-Space Tradeoffs for Satisfiability
 Journal of Computer and System Sciences
, 1997
Abstract

Cited by 29 (1 self)
We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^{1+o(1)} time and n^{1-ε} space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for logspace-uniform NC^1 circuits and branching programs. Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically in a small amount of time, then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this with a result of Nepomnjascii showing that a nondeterministic computation using superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization then yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL a...
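The tradeoff claimed in this abstract can be stated compactly. The following sketch uses the standard simultaneous time-space class NTISP, which the abstract itself does not name, so the notation is our gloss rather than a quotation:

```latex
% Time-space tradeoff for SAT, as stated in the abstract:
% for every \varepsilon > 0, no random-access nondeterministic
% machine decides SAT simultaneously in time n^{1+o(1)}
% and space n^{1-\varepsilon}.
\forall \varepsilon > 0:\quad
\mathrm{SAT} \notin \mathrm{NTISP}\!\left(n^{1+o(1)},\; n^{1-\varepsilon}\right)
```

The deterministic corollary quoted above (quasilinear time with √n space) follows because deterministic machines are a special case of nondeterministic ones.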
On computation with pulses
 Information and Computation
, 1999
Abstract

Cited by 14 (0 self)
We explore the computational power of formal models for computation with pulses. Such models are motivated by realistic models for biological neurons, and by related new types of VLSI ("pulse stream VLSI"). In preceding work it was shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a computational unit have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand, several implementations of pulse stream VLSI employ pulses that are approximately piecewise constant (i.e. step functions). In this article we investigate the relevance of the shape of pulses in formal models for computation with pulses. It turns out that the computational power drops significantly if one replaces pulses with linearly rising or decreasing initial segments by piecewise constant pulses. We provide an exact characterization of the latter model in terms of a weak version of a random access machine (RAM). We also compare the language recognition capability of a recurrent version of this model with that of deterministic finite automata and Turing machines.
Subrecursion as Basis for a Feasible Programming Language
 Proceedings of CSL'94, number 933 in LNCS
, 1994
Abstract

Cited by 9 (8 self)
We are motivated by finding a good basis for the semantics of programming languages, and we investigate small classes in subrecursive hierarchies of functions. We do this with the help of pairing functions, because in this way we can explore the amazing coding powers of S-expressions of LISP within the domain of natural numbers. In the process we introduce a missing stage in Grzegorczyk-based hierarchies, which solves the long-standing open problem of the precise relation between the small recursive classes and those of complexity theory.

1 Introduction

We investigate subrecursive hierarchies based on pairing functions and solve a long-standing open problem about small recursive classes: what is the relationship between these and computational complexity classes (see [11])? The problem is solved by discovering that there is a missing stage in Grzegorczyk-based hierarchies [7, 11]. The motivation for this research comes from our search for a good programming langu...
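The coding power the abstract alludes to can be illustrated with any pairing bijection. The sketch below is not the paper's construction; it uses the classical Cantor pairing function to show how nested pairs, i.e. LISP-style S-expressions, can be coded as single natural numbers:

```python
# Illustrative sketch (not the paper's construction): a pairing
# function N x N -> N lets nested pairs -- LISP-style S-expressions --
# be encoded as single natural numbers and decoded again.

def pair(x: int, y: int) -> int:
    """Cantor pairing: a bijection from N x N onto N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of the Cantor pairing."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)  # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Encode the S-expression (1 . (2 . 3)) as one natural number
# and recover its parts.
code = pair(1, pair(2, 3))
assert unpair(code) == (1, pair(2, 3))
assert unpair(unpair(code)[1]) == (2, 3)
```

Iterating `pair` codes arbitrarily deep S-expressions; the growth rate of such codings is exactly what matters when placing them inside small subrecursive classes.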
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
, 1996
Abstract

Cited by 3 (0 self)
Recently one has started to investigate the computational power of spiking neurons (also called "integrate-and-fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than the ones traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular, they have the ability to communicate and manipulate analog variables in spatiotemporal coding, i.e. encoded in the time points when specific neurons "fire" (and thus send a "spike" to other neurons). These preceding results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are "accidental" aspects of their realization in biological "wetware". Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatiotemporal c...
Tools for Proving Zero Knowledge
 In Proc. of EUROCRYPT'92, Lecture Notes in Computer Science
, 1992
Abstract

Cited by 1 (1 self)
We develop general techniques that can be used to prove the zero-knowledge property of most of the known zero-knowledge protocols. These techniques consist in reducing the circuit indistinguishability of the output distributions of two probabilistic Turing machines to the indistinguishability of the output distributions of certain subroutines.

1 Introduction

It is an important result in the theory of zero-knowledge proofs that, assuming the existence of a circuit-secure encryption machine, every language in NP has a zero-knowledge proof. This result can be obtained by constructing a zero-knowledge proof system for the NP-complete language 3C of three-colourable graphs (see [1, 2]). In this protocol the prover and the verifier repeat a certain subprotocol a number of times which is polynomial in the length of the input. The encryption machine is called in a subroutine used in that subprotocol. The protocol can therefore be written in the form S = (MNO)^{n(x)} (1), where MNO is the subprot...
Reducing Complexity Of 3D Object Reconstruction Due To Symmetry Of Model Knowledge
, 1997
Abstract
In this paper we will examine symmetries of background knowledge for 3D object reconstruction from images, and we will analyze the reduction of computational time for matching operations obtained by using symmetry properties of background knowledge. Symmetries of objects will be classified according to symmetry operations. As an application, symmetries of buildings will be depicted and discussed. The representation of background knowledge for 3D object extraction will be briefly described. The most frequently occurring symmetry, mirror symmetry, will be used to compute building model knowledge efficiently and will be grouped with regard to its degree of compactness. A complexity estimate for building model knowledge computation will be discussed for both the worst case and the empirical case, where we use the observation that building models are much more compact than three-dimensional descriptions of buildings. Finally, the discussed complexity estimation bound wi...
Computing the Maximum Bichromatic Discrepancy, with Applications to Computer Graphics and Machine Learning
, 1995
Abstract
Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems.

1 Introduction

The main theme of this paper is to present efficient algorithms that solve the problem of computing the maximum bichromatic discrepancy for axis-oriented rectangles. This problem arises naturally in different areas of computer science, such as computational learning theory, computational geometry and computer graphics ([Ma], [DG]), and has applications in all these areas. In computational learning theory, the problem of agnostic PAC-learning with simple geometric hypotheses can be reduced to the problem of computing the maximum bichromatic discrepancy for simple geometric ra...
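To make the problem concrete, here is a deliberately naive brute-force sketch, not the paper's efficient algorithms, reading the discrepancy of a rectangle R as |#red points in R − #blue points in R|. It relies on the standard observation that a maximizing axis-aligned rectangle may be assumed to have its sides pass through input coordinates:

```python
# Naive sketch only (the paper gives far more efficient algorithms):
# maximum bichromatic discrepancy over axis-aligned rectangles,
# taken as max over rectangles R of |#red in R - #blue in R|.
# It suffices to try rectangles whose sides pass through input points.
from itertools import combinations_with_replacement

def max_bichromatic_discrepancy(red, blue):
    xs = sorted({x for x, _ in red + blue})
    ys = sorted({y for _, y in red + blue})
    best = 0
    for x1, x2 in combinations_with_replacement(xs, 2):
        for y1, y2 in combinations_with_replacement(ys, 2):
            r = sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in red)
            b = sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in blue)
            best = max(best, abs(r - b))
    return best

red = [(0, 0), (1, 1), (2, 2)]
blue = [(0, 2), (2, 0)]
print(max_bichromatic_discrepancy(red, blue))  # → 2
```

This enumeration costs O(n^4) rectangles with O(n) counting each, i.e. O(n^5) overall; the point of the paper is to do much better for rectangles and halfspaces.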
The Parallel Complexity of the AL Concept Language
Abstract
Adequateness is one of the most important issues in knowledge representation and reasoning. Roughly speaking, a reasoning system is adequate if it solves simpler problems faster than more difficult ones, where simplicity is measured with respect to all available reasoning systems. It has been shown recently that adequateness implies massive parallelism and, hence, we should be interested in reasoning problems which admit an optimal or efficient parallel computational model. Such problems must be in P and must not be the hardest problems in P unless P = NC. Terminological reasoning systems were developed to provide fast and tractable reasoning services and, in particular, to compute the subsumption relation between concept descriptions or, equivalently, to determine the unsatisfiability of a concept description. In this paper we develop an efficient parallel algorithm for checking unsatisfiability for the AL concept language and prove its correctness and completeness.
A Framework for Deductive Traders of Context Information
, 2004
Abstract
Context-aware services often need derived and higher-level information about users and their environments, while sensors and databases mostly offer low-level information only. The need to bridge this gap calls for refinement and enrichment of contextual information. We therefore adapt the concept of conventional traders known from distributed systems: their core, typically a database, is equipped with its deductive closure. A complete deductive closure is unsuitable for practical purposes because of its fairly high runtime complexity, so a representation is used which is simple enough to maintain the deductive closure but still strong enough to model our environment adequately. All three key functionalities offered at the interface of conventional traders are emulated, namely adding and withdrawing service offers and context information, as well as enquiring about the availability of a, possibly derived, service. Algorithms for adding and removal are efficient as far as time and memory accesses