Results 1–10 of 12
The Turing Machine Paradigm in Contemporary Computing
Mathematics Unlimited: 2001 and Beyond. LNCS, 2000
Cited by 30 (5 self)
Abstract: In this paper we will extend the Turing machine paradigm to include several key features of contemporary information processing systems.
General-Purpose Computation with Neural Networks: A Survey of Complexity-Theoretic Results
2003
Cited by 20 (0 self)
Abstract: We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity-theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
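The classification axes enumerated in this abstract can be pictured as independent dimensions of a small data model. The following sketch is purely illustrative; the type and field names are ours, not notation from the survey:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the survey's taxonomy axes (our own naming).
class Architecture(Enum):
    FEEDFORWARD = "feedforward"
    RECURRENT = "recurrent"

class TimeModel(Enum):
    DISCRETE = "discrete"
    CONTINUOUS = "continuous"

class StateType(Enum):
    BINARY = "binary"
    ANALOG = "analog"

@dataclass(frozen=True)
class NetworkClass:
    """One point in the taxonomy: a combination of the survey's criteria."""
    architecture: Architecture
    time: TimeModel
    state: StateType
    symmetric_weights: bool
    finite_size: bool
    deterministic: bool

# Example point: a finite, discrete-time, binary-state, symmetric
# recurrent net (a Hopfield-style model).
hopfield_like = NetworkClass(
    architecture=Architecture.RECURRENT,
    time=TimeModel.DISCRETE,
    state=StateType.BINARY,
    symmetric_weights=True,
    finite_size=True,
    deterministic=True,
)
```

Each paper classified by the survey occupies one such tuple of choices, which is what makes the taxonomy a product of independent criteria rather than a flat list of models.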
Polynomial-time quantum computation with advice
Inform. Proc. Lett., 90:195–204, 2003. ECCC
Cited by 10 (2 self)
Abstract: Advice is supplementary information that enhances the computational power of an underlying computation. This paper focuses on advice that is given in the form of a pure quantum state. The notion of advised quantum computation has a direct connection to nonuniform quantum circuits and tally languages. The paper examines the influence of such advice on the behavior of an underlying polynomial-time quantum computation with bounded-error probability, and shows both a power and a limitation of advice. Key words: computational complexity, quantum circuit, advice function
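The classical notion of advice that this abstract generalizes can be illustrated with a toy sketch: the decider receives, besides the input, supplementary information that depends only on the input length. The tally language and its advice table below are hypothetical, chosen only to show the mechanism, not taken from the paper:

```python
# Toy illustration (our own construction) of length-dependent advice.
# For a tally language L over {1}, one advice bit per input length
# suffices: advice(n) records whether the string 1^n is in L.
ADVICE = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1}  # hypothetical membership table

def decide_with_advice(x: str) -> bool:
    """Polynomial-time decider that consults advice indexed by |x|."""
    if any(c != "1" for c in x):   # reject strings outside the tally alphabet
        return False
    return ADVICE[len(x)] == 1     # one advice bit settles membership

print(decide_with_advice("111"))   # True: the advice bit for n=3 is 1
```

This is why advice classes connect so directly to tally languages, as the abstract notes: a tally set can encode, length by length, exactly the nonuniform information an advice function supplies.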
A Computational Taxonomy and Survey of Neural Network Models
2001
Cited by 2 (0 self)
Abstract: We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their computational characteristics. The criteria of classification include, e.g., the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc. The underlying results concerning the computational power of perceptron, RBF, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature.
General-Purpose Computation with Neural Networks: A Survey of Complexity-Theoretic Results
2003
Cited by 2 (0 self)
Abstract: We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity-theoretic characteristics. The criteria of classification include, e.g., the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc.
Refining Logical Characterizations of Advice Complexity Classes
In First Panhellenic Symposium on Logic, 1997
Cited by 1 (0 self)
Abstract: Numerical relations in logics are known to characterize, via the finite models of their sentences, polynomial advice nonuniform complexity classes. These are known to coincide with reduction classes of tally sets. Our contributions here are: (1) a refinement of that characterization that individualizes the reduction class of each tally set, and (2) characterizing logarithmic advice classes via numerical constants, both in the (rather easy) case of C/log and in the more complex case of Full-C/log; this proof requires extending to classes below P the technical characterizations known for the class Full-P/log.
Circuit Expressions of Low Kolmogorov Complexity
In preparation, 1999
Cited by 1 (1 self)
Abstract: We study circuit expressions of logarithmic and polylogarithmic polynomial-time Kolmogorov complexity, focusing on their complexity-theoretic characterizations and learnability properties. They provide a nontrivial circuit-like characterization for a natural nonuniform complexity class that lacked one up to now. We show that circuit expressions of this kind can be learned with membership queries in polynomial time if and only if every NE-predicate is E-solvable. Thus they are learnable given that the learner is allowed the extra use of an oracle in NP. The precise way of accessing the oracle is shown to be optimal under relativization. We present a precise characterization of the subclass defined by Kolmogorov-easy circuit expressions that can be constructed from membership queries in polynomial time, with some consequences for the structure of reduction and equivalence classes of tally sets of very low density. Preliminary, sometimes weaker versions of the results in this paper were...
Computational Power of Neural Networks: A Kolmogorov Complexity Characterization
Cited by 1 (0 self)
Abstract: The computational power of neural networks depends on properties of the real numbers used as weights. We focus on networks restricted to compute in polynomial time, operating on Boolean inputs. Previous work has demonstrated that their computational power coincides with the complexity classes P and P/poly, respectively, for networks with rational and with arbitrary real weights. Here we prove that the crucial concept characterizing this computational power is the Kolmogorov complexity of the weights, in the sense that, for each bound on this complexity, the networks can solve exactly the problems in a related nonuniform complexity class located between P and P/poly. By proving that the family of such nonuniform classes is infinite, we show that neural networks can be classified into an infinite hierarchy of different computing capabilities.
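The mechanism behind the P versus P/poly gap mentioned above can be illustrated with a toy sketch in the spirit of the standard real-weight constructions: a single weight whose binary expansion stores a nonuniform membership table, one bit per input length. The bit table and all names below are our own illustration, not the paper's construction:

```python
from fractions import Fraction

# Toy sketch (our own) of a real weight encoding nonuniform advice:
# bit n of the weight's binary expansion records membership of 1^n
# in some hypothetical tally language.
advice_bits = [0, 1, 0, 1, 1]  # hypothetical membership table
weight = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(advice_bits))

def nth_bit(w: Fraction, n: int) -> int:
    """Extract bit n (0-indexed) of w's binary expansion, for 0 <= w < 1."""
    return int(w * 2 ** (n + 1)) % 2

# A polynomial-time network deciding the tally language would, in effect,
# read bit |x| of its weight.  The Kolmogorov complexity of this bit
# sequence is what bounds the network's power: simple (e.g. rational,
# eventually periodic) sequences give P, arbitrary sequences give P/poly.
print([nth_bit(weight, i) for i in range(5)])  # [0, 1, 0, 1, 1]
```

The abstract's hierarchy result then corresponds to interpolating between these two extremes: bounding the Kolmogorov complexity of the weight's bit sequence carves out intermediate nonuniform classes between P and P/poly.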
Languages to Diagonalize Against Advice Classes
Abstract: Variants of Kannan's Theorem are given where the circuits of the original theorem are replaced by arbitrary recursively presentable classes of languages that use advice strings and satisfy certain mild conditions. Let poly_k denote those functions in O(n^k). These variants imply that DTIME(n^{k'})^{NE}/poly_k does not contain P^{NE}, DTIME(2^{n^{k'}})/poly_k does not contain EXP, SPACE(n^{k'})/poly_k does not contain PSPACE, uniform TC^0/poly_k does not contain CH, and uniform ACC/poly_k does not contain ModPH. Consequences for selective sets are also obtained. In particular, it is shown that R_T^{DTIME(n^k)}(NP-sel) does not contain P^{NE}, and R_T^{DTIME(n^k)}(L-sel) does not contain PSPACE. Finally, a circuit size hierarchy theorem is established.
On the Computational Power of Faulty and Asynchronous Neural Networks
Abstract: This paper deals with finite-size recurrent neural networks which consist of general (possibly cyclic) interconnections of evolving processors. Each neuron may assume a real activation value. We provide the first rigorous foundations for recurrent networks which are built of unreliable analog devices and exhibit asynchronicity in their updates. The first model considered incorporates unreliable devices (either neurons or the connections between them) which assume fixed error probabilities, independent of the history and the global state of the network. This model corresponds to the random-noise philosophy of Shannon. Another model allows the error probabilities to depend on both the global state and the history. Next, we change the various faulty nets to update in total asynchrony. We prove all the above models to be computationally equivalent and we express their power. In particular, we see that for some constrained models of networks, the random behavior adds nonuniformity to...