Results 1 - 10 of 35
Analog Computation via Neural Networks
Theoretical Computer Science, 1994.
"... We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn ..."
Cited by 87 (8 self).
Abstract: We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though they are still more powerful than Turing machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in previous work [20].) Moreover, there is a precise correspondence between nets and standard nonuniform circuits with equivalent resources, and as a consequence one has lower-bound constraints on what they can compute. This relationship is perhaps surprising, since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve NP-hard problems in polynomial time, as the equality ...
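To make the model concrete, here is a minimal sketch of the kind of dynamical system described above: a discrete-time recurrent net with a fixed number of neurons and real weights, using a saturated-linear activation. The dimensions, weights, and names are illustrative assumptions, not taken from the paper.

import numpy as np

def sigma(z):
    # Saturated-linear activation: identity on [0, 1], clipped outside.
    return np.clip(z, 0.0, 1.0)

def step(x, u, A, B, c):
    # One synchronous update of all neurons. The structure (A, B, c) is
    # fixed and does not grow with the input; only the number of steps varies.
    return sigma(A @ x + B @ u + c)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))   # neuron-to-neuron real weights
B = rng.normal(size=(3, 1))   # input weights (one binary input line)
c = rng.normal(size=3)        # biases
x = np.zeros(3)               # initial state
for bit in [1.0, 0.0, 1.0]:   # feed an input string one bit per step
    x = step(x, np.array([bit]), A, B, c)
print(x)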
Computational Complexity Of Neural Networks: A Survey
1994.
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks fr ..."
Cited by 23 (6 self).
Abstract: We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also briefly discuss the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation - neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes - complexity hierarchies. Key words: neural networks, computational complexity, threshold circuits, associative memory. 1. Introduction. The once-again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks...
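For readers new to the circuit model named in the classification above, the sketch below shows the basic linear threshold gate and a small depth-2 threshold circuit computing XOR. The gate and function names are illustrative, not from the survey.

def threshold_gate(weights, threshold, inputs):
    # Output 1 iff the weighted sum of the Boolean inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(x1, x2):
    # Depth-2 threshold circuit: XOR = (x1 OR x2) AND NOT (x1 AND x2).
    or_gate  = threshold_gate([1, 1], 1, [x1, x2])          # x1 OR x2
    and_gate = threshold_gate([1, 1], 2, [x1, x2])          # x1 AND x2
    return threshold_gate([1, -1], 1, [or_gate, and_gate])  # OR minus AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])          # -> [0, 1, 1, 0]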
Every linear threshold function has a low-weight approximator
In Proceedings of the 21st Conference on Computational Complexity (CCC), 2006.
"... Given any linear threshold function f on n Boolean variables, we construct a linear threshold function g which disagrees with f on at most an ɛ fraction of inputs and has integer weights each of magnitude at most √ n · 2 Õ(1/ɛ2). We show that the construction is optimal in terms of its dependence on ..."
Cited by 20 (7 self).
Abstract: Given any linear threshold function f on n Boolean variables, we construct a linear threshold function g which disagrees with f on at most an ɛ fraction of inputs and has integer weights each of magnitude at most √n · 2^{Õ(1/ɛ²)}. We show that the construction is optimal in terms of its dependence on n by proving a lower bound of Ω(√n) on the weights required to approximate a particular linear threshold function. We give two applications. The first is a deterministic algorithm for approximately counting the fraction of satisfying assignments to an instance of the zero-one knapsack problem to within an additive ±ɛ. The algorithm runs in time polynomial in n (but exponential in 1/ɛ²). In our second application, we show that any linear threshold function f is specified to within error ɛ by estimates of its Chow parameters (degree-0 and degree-1 Fourier coefficients) which are accurate to within an additive ±1/(n · 2^{Õ(1/ɛ²)}). This is the first such accuracy bound which is inverse polynomial in n (previous work of Goldberg [12] gave a 1/quasipoly(n) bound), and it gives the first polynomial bound (in terms of n) on the number of examples required for learning linear threshold functions in the "restricted focus of attention" framework.
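As a concrete illustration of the objects involved (not the paper's construction), the following sketch estimates the Chow parameters of a given threshold function by Monte Carlo sampling over the uniform distribution on the hypercube; the weight vector and sample sizes are assumptions chosen for the example.

import numpy as np

def chow_parameters(w, theta, samples=100_000, seed=0):
    # Estimate E[f(x)] and E[f(x) * x_i] for f(x) = sign(w.x - theta)
    # over uniformly random x in {-1, +1}^n: the degree-0 and degree-1
    # Fourier coefficients, i.e. the Chow parameters of f.
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(samples, len(w)))
    f = np.sign(X @ w - theta)
    f[f == 0] = 1.0                                  # break ties toward +1
    return f.mean(), (f[:, None] * X).mean(axis=0)

f_empty, f_singletons = chow_parameters(w=np.array([3.0, 2.0, 1.0]), theta=0.5)
print(f_empty, f_singletons)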
Complexity Issues in Discrete Hopfield Networks
1994.
"... We survey some aspects of the computational complexity theory of discretetime and discretestate Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfi ..."
Cited by 18 (4 self).
Abstract: We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
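The convergence behavior discussed in item 1 can be illustrated with the standard discrete Hopfield model: symmetric weights, asynchronous updates, and an energy function that never increases. The sketch below is a generic textbook instance, not code from the survey.

import numpy as np

def energy(x, W, theta):
    # Standard Hopfield energy; asynchronous updates never increase it,
    # which is what bounds convergence times in the worst case.
    return -0.5 * x @ W @ x + theta @ x

def run(x, W, theta, max_sweeps=100):
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):                   # asynchronous: one unit at a time
            new = 1.0 if W[i] @ x >= theta[i] else -1.0
            if new != x[i]:
                x[i], changed = new, True
        if not changed:                           # fixed point reached
            break
    return x

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2                                 # symmetric weights
np.fill_diagonal(W, 0.0)                          # no self-loops
theta = np.zeros(4)
x0 = rng.choice([-1.0, 1.0], size=4)
xf = run(x0, W, theta)
print(energy(x0, W, theta), ">=", energy(xf, W, theta))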
Neural Networks and Complexity Theory
In Proc. 17th International Symposium on Mathematical Foundations of Computer Science, 1992.
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. Whi ..."
Cited by 16 (4 self).
Abstract: We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1. Introduction. The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks directly from examples of their desired input-output behavior, it is nevertheless important to pay attention also to the complexity issues: first, what kinds of functions are computable by networks of a given type and size, and second, what is the complexity of the synthesis problems considered? In fact, inattention to these issues was a significant factor in the demise of the first stage of neural networks research in the late 1960s, under the criticism of Minsky and Papert [51]. The intent of this paper is to survey some of the central...
Neural Networks with Real Weights: Analog Computational Complexity
1992.
"... We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn ..."
Cited by 15 (3 self).
Abstract: We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though they are still more powerful than Turing machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in previous work [17].) Moreover, there is a precise correspondence between nets and standard nonuniform circuits with equivalent resources, and as a consequence one has lower-bound constraints on what they can compute. This relationship is perhaps surprising, since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve NP-hard problems in polynomial time, as the equality "P...
Computational Power of Neural Networks: A Characterization in Terms of Kolmogorov Complexity
IEEE Transactions on Information Theory, 1997.
"... Abstract — The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information contents, of the weights is measured by a variant of resourcebounded Kolmogorov complexity, taking into acco ..."
Cited by 15 (2 self).
Abstract: The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information content, of the weights is measured by a variant of resource-bounded Kolmogorov complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity. Index Terms: Kolmogorov complexity, neural networks, Turing machines.
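One standard way to picture "information content in a weight" (an assumption for illustration, not the paper's exact construction) is a Cantor-style base-4 encoding that packs a bit string into a single real constant and reads it back digit by digit:

def encode(bits):
    # Map a bit string to a real in [0, 1): bit b becomes digit 2b+1 in base 4,
    # so only digits 1 and 3 occur and the encoding is robust to truncation.
    w = 0.0
    for i, b in enumerate(bits, start=1):
        w += (2 * b + 1) / 4**i
    return w

def decode(w, n):
    # Recover the first n encoded bits by repeated shift-and-compare.
    out = []
    for _ in range(n):
        digit = int(w * 4)        # next base-4 digit (1 or 3 by construction)
        out.append((digit - 1) // 2)
        w = w * 4 - digit
    return out

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits), len(bits)) == bits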
Dynamic Mechanistic Explanation: Computational Modeling of Circadian Rhythms as an Exemplar for Cognitive Science
"... Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cogn ..."
Cited by 14 (9 self).
Abstract: Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists' guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations are often conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems, since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so, not by independent evidence that they describe actual operations in mental mechanisms. Cognitive modeling has been both fruitful and subject to certain limitations. A good way of exploring this is to contrast it with a different approach, one that involves more direct...
On the Power of Networks of Majority Functions
In Proc. IWANN'91, 1992.
"... : Quantization of the synaptic weights is a central problem of hardware implementation of neural networks using 0 technology. In this paper, a particular linear threshold boolean function, called majority function is considered, whose synaptic weights are restricted to only three values: \Gamma1, 0, ..."
Cited by 10 (4 self).
Abstract: Quantization of the synaptic weights is a central problem in the hardware implementation of neural networks. In this paper, a particular linear threshold Boolean function, called the majority function, is considered, whose synaptic weights are restricted to only three values: −1, 0, +1. Some results about the complexity of circuits composed of such gates are reported. They show that this simple family of functions remains powerful in terms of circuit complexity. The learning problem for this subclass of threshold functions is also studied, and numerical experiments with different algorithms are reported. Keywords: neural network, linear threshold function, circuit complexity, synaptic weight quantization, majority functions. 1. Introduction and Motivation. The work reported in the literature on artificial neural nets can be divided into two classes. On the one hand, theorists deal with the general issues of connectionism, such as machine learning, classification, optimiz...
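A minimal sketch of the gate family studied here: a linear threshold gate whose weights are restricted to {−1, 0, +1}, so its output is a majority vote over signed inputs. The helper names are illustrative, not from the paper.

def majority_gate(weights, inputs):
    # Output 1 if the signed vote sum is non-negative, else 0.
    # weights[i] in {-1, 0, +1}; inputs[i] in {0, 1}, treated as votes +/-1.
    assert all(w in (-1, 0, 1) for w in weights)
    vote = sum(w * (2 * x - 1) for w, x in zip(weights, inputs))
    return 1 if vote >= 0 else 0

# MAJ(x1, x2, x3) with all weights +1: the usual 2-out-of-3 majority.
print(majority_gate([1, 1, 1], [1, 0, 1]))   # -> 1
print(majority_gate([1, 1, 1], [0, 0, 1]))   # -> 0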
Improved approximation of linear threshold functions
In Proc. 24th Annual IEEE Conference on Computational Complexity (CCC), 2009.
"... We prove two main results on how arbitrary linear threshold functions f(x) = sign(w · x − θ) over the ndimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every nvariable threshold function f is ɛclose to a threshold function depending only ..."
Cited by 7 (4 self).
Abstract: We prove two main results on how arbitrary linear threshold functions f(x) = sign(w · x − θ) over the n-dimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every n-variable threshold function f is ɛ-close to a threshold function depending only on Inf(f)² · poly(1/ɛ) many variables, where Inf(f) denotes the total influence or average sensitivity of f. This is an exponential sharpening, for the case of threshold functions, of Friedgut's well-known theorem [Fri98], which states that every Boolean function f is ɛ-close to a function depending only on 2^{O(Inf(f)/ɛ)} many variables. We complement this upper bound by showing that Ω(Inf(f)² + 1/ɛ²) many variables are required for ɛ-approximating threshold functions. Our second result is a proof that every n-variable threshold function is ɛ-close to a threshold function with integer weights at most poly(n) · 2^{Õ(1/ɛ^{2/3})}. This is an improvement, in the dependence on the error parameter ɛ, on an earlier result of [Ser07], which gave a poly(n) · 2^{Õ(1/ɛ²)} bound. Our improvement is obtained via a new proof technique that uses strong anti-concentration bounds from probability theory. The new technique also gives a simple and modular proof of the original [Ser07] result, and it extends to give low-weight approximators for threshold functions under a range of probability distributions other than the uniform distribution.
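Since total influence Inf(f) drives the first result, the following sketch (an illustration, not the paper's method) estimates Inf(f) for a threshold function by sampling random points and counting the coordinates whose flip changes the sign; the weight vector is an assumed example.

import numpy as np

def total_influence(w, theta, samples=20_000, seed=0):
    # Monte Carlo estimate of Inf(f) = sum_i Pr[f(x) != f(x with bit i flipped)]
    # for f(x) = sign(w.x - theta) under uniform x in {-1, +1}^n.
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(samples, len(w)))
    s = X @ w - theta
    f = np.where(s >= 0, 1.0, -1.0)
    # Flipping x_i changes w.x by -2 * w_i * x_i; count sign changes per point.
    flipped = np.where(s[:, None] - 2.0 * X * w >= 0, 1.0, -1.0)
    return (flipped != f[:, None]).sum(axis=1).mean()

# Majority on 5 variables: Inf(MAJ_n) = Theta(sqrt(n)); exact value here is 1.875.
print(total_influence(np.ones(5), 0.0))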