Results 11–20 of 387
Dynamical systems, Measures and Fractals via Domain Theory
 Information and Computation
, 1995
Abstract

Cited by 68 (19 self)
We introduce domain theory in dynamical systems, iterated function systems (fractals) and measure theory. For a discrete dynamical system given by the action of a continuous map f: X → X on a metric space X, we study the extended dynamical systems (VX, Vf), (UX, Uf) and (LX, Lf), where V, U and L are respectively the Vietoris hyperspace, the upper hyperspace and the lower hyperspace functors. We show that if (X, f) is chaotic, then so is (UX, Uf). When X is locally compact, UX is a continuous bounded complete dcpo. If X is second countable as well, then UX will be ω-continuous and can be given an effective structure. We show how strange attractors, attractors of iterated function systems (fractals) and Julia sets are obtained effectively as fixed points of deterministic functions on UX or fixed points of nondeterministic functions on CUX, where C is the convex (Plotkin) power domain. We also show that the set M(X) of finite Borel measures on X can be embedded in PUX, where P is the probabilistic power domain. This provides an effective framework for measure theory. We then prove that the invariant measure of a hyperbolic iterated function system with probabilities can be obtained as the unique fixed point of an associated continuous function on PUX.
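A minimal sketch of the fixed-point idea the abstract describes, without any of the paper's domain-theoretic machinery: the attractor of a hyperbolic IFS is the fixed point of the Hutchinson operator F(A) = f1(A) ∪ f2(A) ∪ f3(A) acting on compact sets, and iterating F on a finite seed set converges to the attractor in the Hausdorff metric. The three contractions below (an illustrative choice, not from the paper) generate the Sierpinski triangle.

```python
def sierpinski_maps():
    # three contractions of ratio 1/2 toward the triangle's vertices
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    return [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
            for v in vertices]

def hutchinson(maps, points):
    # one application of the Hutchinson operator F to a finite point set
    return {f(p) for f in maps for p in points}

def approximate_attractor(maps, seed, iterations):
    # iterate F from a single seed point; the sets converge to the
    # attractor (the fixed point of F) in the Hausdorff metric
    points = {seed}
    for _ in range(iterations):
        points = hutchinson(maps, points)
    return points
```

Starting from a single point, n iterations yield 3^n distinct points of the gasket, all inside the unit box.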
Turing Computability With Neural Nets
 Applied Mathematics Letters
, 1991
Abstract

Cited by 61 (13 self)
This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 10^5 synchronously evolving processors, interconnected linearly. High-order connections are not required.

1. Introduction. This paper addresses the question: what ultimate limitations, if any, are imposed by the use of neural nets as computing devices? In particular, and ignoring issues of training and practicality of implementation, one would like to know if every problem that can be solved by a digital computer is also solvable in principle using a net. This question has been asked before in the literature. Indeed, Jordan Pollack ([7]) showed that a certain recurrent net model which he called a "neuring machine" (for "neural Turing") is universal. In his model, all neurons synchronously update their states according to a quadratic combination of past activation values. In general, one calls high-order nets those in...
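The paper's construction itself involves on the order of 10^5 processors, but a standard ingredient in such simulations (a sketch under my own assumptions, not the paper's exact network) is encoding an unbounded binary stack in the activation of a single neuron, with the saturated-linear function σ(x) = min(1, max(0, x)) reading off the top bit:

```python
def sat(x):
    # saturated-linear activation: sat(x) = min(1, max(0, x))
    return min(1.0, max(0.0, x))

# A stack of bits b1 b2 ... (b1 on top) is encoded as the activation
# s = sum over k of (2*b_k + 1) / 4**k, which lies in [1/4, 1) when nonempty.

def push(s, bit):
    return s / 4 + (2 * bit + 1) / 4

def top(s):
    # the top bit falls out of a single saturated-linear unit:
    # s in [1/4, 1/2) -> 0,  s in [3/4, 1) -> 1
    return int(sat(4 * s - 2))

def pop(s):
    return 4 * s - (2 * top(s) + 1)
```

Because the encoding uses powers of 4, these updates are exact in binary floating point, and each stack operation is a fixed affine map composed with one activation.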
Complexity of Bézout’s Theorem IV : Probability of Success, Extensions
 SIAM J. Numer. Anal
, 1996
Abstract

Cited by 60 (9 self)
We estimate the probability that a given number of projective Newton steps applied to a linear homotopy of a system of n homogeneous polynomial equations in n+1 complex variables of fixed degrees will find all the roots of the system. We also extend the framework of our analysis to cover the classical implicit function theorem and revisit the condition number in this context. Further complexity theory is developed.

1. Introduction. 1A. Bézout's Theorem Revisited. Let f: ℂ^(n+1) → ℂ^n be a system of homogeneous polynomials f = (f1, ..., fn), deg fi = di, i = 1, ..., n. The linear space of such f is denoted by H_(d), where d = (d1, ..., dn). Consider the
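A drastically simplified illustration of the linear-homotopy idea (one complex variable and plain Newton steps, rather than the projective multivariate setting of the paper; all names and step counts here are illustrative): deform a start system g with known roots into the target f along H(z, t) = (1-t)·g(z) + t·f(z), correcting with Newton's method at each t.

```python
def newton_steps(h, dh, z, iters=3):
    # a few Newton corrections toward a root of h
    for _ in range(iters):
        z = z - h(z) / dh(z)
    return z

def follow_path(f, df, g, dg, z0, steps=50):
    # track a root of g along H(z, t) = (1-t)*g(z) + t*f(z) as t goes 0 -> 1
    z = z0
    for k in range(1, steps + 1):
        t = k / steps
        h = lambda z, t=t: (1 - t) * g(z) + t * f(z)
        dh = lambda z, t=t: (1 - t) * dg(z) + t * df(z)
        z = newton_steps(h, dh, z)
    return z

# deform z^2 - 1 (roots +1, -1) into z^2 - 2 (roots +sqrt(2), -sqrt(2))
f, df = (lambda z: z * z - 2), (lambda z: 2 * z)
g, dg = (lambda z: z * z - 1), (lambda z: 2 * z)
root = follow_path(f, df, g, dg, 1.0)
```

Each start root of g is carried continuously to a root of f; the paper's analysis concerns how many such corrector steps suffice with high probability.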
Dynamical Recognizers: Real-time Language Recognition by Analog Computers
 Theoretical Computer Science
, 1996
Abstract

Cited by 57 (4 self)
We consider a model of analog computation which can recognize various languages in real time. We encode an input word as a point in R^d by composing iterated maps, and then apply inequalities to the resulting point to test for membership in the language. Each class of maps and inequalities, such as quadratic functions with rational coefficients, is capable of recognizing a particular class of languages; for instance, linear and quadratic maps can have both stack-like and queue-like memories. We use methods equivalent to the Vapnik-Chervonenkis dimension to separate some of our classes from each other, e.g. linear maps are less powerful than quadratic or piecewise-linear ones, polynomials are less powerful than elementary (trigonometric and exponential) maps, and deterministic polynomials of each degree are less powerful than their nondeterministic counterparts. Comparing these dynamical classes with various discrete language classes helps illuminate how iterated maps can...
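A toy instance of the model (my own illustration, not an example from the paper): each input symbol applies a fixed piecewise-linear map to a point in R^2, and membership is decided by equalities and inequalities on the final point. This recognizer accepts the balanced-parenthesis language in one real-time pass.

```python
def recognize_dyck(word):
    # state (c, m): c = running count, m = minimum count seen so far;
    # each symbol applies a piecewise-linear map (min is piecewise linear),
    # so the whole pass composes piecewise-linear maps applied to (0, 0)
    c, m = 0.0, 0.0
    for symbol in word:
        if symbol == "(":
            c = c + 1
        else:  # ")"
            c = c - 1
            m = min(m, c)
    # membership test: one equality and one inequality on the final point
    return c == 0.0 and m >= 0.0
```

The second coordinate is what requires piecewise linearity: a purely linear recognizer could check the count but not that it never went negative.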
A Graph-Constructive Approach to Solving Systems of Geometric Constraints
 ACM TRANSACTIONS ON GRAPHICS
, 1997
Adaptive Nonlinear Approximations
, 1994
Abstract

Cited by 54 (1 self)
The problem of optimally approximating a function with a linear expansion over a redundant dictionary of waveforms is NP-hard. The greedy matching pursuit algorithm and its orthogonalized variant produce suboptimal function expansions by iteratively choosing the dictionary waveforms which best match the function's structures. Matching pursuits provide a means of quickly computing compact, adaptive function approximations. Numerical experiments show that the approximation errors from matching pursuits initially decrease rapidly, but the asymptotic decay rate of the errors is slow. We explain this behavior by showing that matching pursuits are chaotic, ergodic maps. The statistical properties of the approximation errors of a pursuit can be obtained from the invariant measure of the pursuit. We characterize these measures using group symmetries of dictionaries and using a stochastic differential equation model. These invariant measures define a noise with respect to a given dictionary. ...
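A minimal pure-Python sketch of the greedy step (a tiny finite-dimensional dictionary rather than the paper's waveform dictionaries; atoms are assumed unit-norm): at each iteration, pick the atom with the largest inner product against the residual and subtract its projection.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def matching_pursuit(signal, dictionary, iterations):
    # dictionary atoms assumed unit-norm; greedy residual reduction
    residual = list(signal)
    expansion = []  # list of (atom index, coefficient)
    for _ in range(iterations):
        scores = [dot(residual, atom) for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        c = scores[k]
        expansion.append((k, c))
        residual = [r - c * a for r, a in zip(residual, dictionary[k])]
    return expansion, residual
```

If the signal happens to lie along a single atom, one iteration already drives the residual to (numerically) zero; the paper's point is the slow asymptotic decay on generic signals.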
Lower Bounds for the Computational Power of Networks of Spiking Neurons
 Neural Computation
, 1995
Abstract

Cited by 53 (11 self)
We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase differences between spike trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real-valued inputs. We also show that relatively weak basic assumptions about the response and threshold functions of the spiking neurons are sufficient in order to employ them for such computations.

1. Introduction and Basic Definitions. There exists substantial evidence that timing phenomena such as temporal differences between spikes and frequencies of oscillating subsystems are integral parts of various information processing mechanisms in biological neural systems (for a survey and references see e.g. Kandel et al., ...
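For reference, the target model of the first simulation result, not the spiking construction itself: a threshold gate outputs 1 exactly when its weighted input sum reaches its threshold, and small circuits of such gates compute Boolean functions (the gates and the depth-2 parity circuit below are standard textbook examples, not taken from the paper).

```python
def threshold_gate(weights, threshold):
    # returns a gate: outputs 1 iff the weighted sum of 0/1 inputs
    # reaches the threshold
    return lambda *xs: 1 if sum(w * x for w, x in zip(weights, xs)) >= threshold else 0

AND = threshold_gate([1, 1], 2)
OR = threshold_gate([1, 1], 1)
NOT = threshold_gate([-1], 0)
MAJ = threshold_gate([1, 1, 1], 2)

def XOR(x, y):
    # parity needs depth 2: no single threshold gate computes it
    return OR(AND(x, NOT(y)), AND(NOT(x), y))
```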
Complexity of Bézout's theorem V: Polynomial time
 Theoretical Computer Science
, 1994
Abstract

Cited by 52 (5 self)
This paper shows that the problem of finding approximately a zero of a polynomial system of equations can be solved in polynomial time, on the average. The number of arithmetic operations is bounded by cN
A General Approach To Removing Degeneracies
 SIAM J. Computing
, 1991
Abstract

Cited by 52 (6 self)
We wish to increase the power of an arbitrary algorithm designed for nondegenerate input, by allowing it to execute on all inputs. We concentrate on infinitesimal symbolic perturbations that do not affect the output for inputs in general position. Otherwise, if the problem mapping is continuous, the input and output space topologies are at least as coarse as the real Euclidean one, and the output space is connected, then our perturbations make the algorithm produce an output arbitrarily close or identical to the correct one. For a special class of algorithms, which includes several important algorithms in computational geometry, we describe a deterministic method that requires no symbolic computation. Ignoring polylogarithmic factors, this method increases only the worst-case bit complexity by a multiplicative factor which is linear in the dimension of the geometric space. For general algorithms, a randomized scheme with arbitrarily high probability of success is proposed; the bit complexity is then bounded by a small-degree polynomial in the original worst-case complexity. In addition to being simpler than previous ones, these are the first efficient perturbation methods.
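A one-dimensional illustration of infinitesimal symbolic perturbation (my own simplification; real geometric schemes perturb determinant predicates): replace each input x_i by x_i + i·ε for an infinitesimal ε > 0. Equal values, the degenerate case, then never occur, so any comparison-based algorithm runs as if the input were in general position, and no actual symbolic arithmetic is needed because the ε term only ever breaks exact ties.

```python
from functools import cmp_to_key

def perturbed_cmp(a, b):
    # a = (i, x), b = (j, y): compare x + i*eps with y + j*eps
    # for an infinitesimal eps > 0, without representing eps explicitly
    (i, x), (j, y) = a, b
    if x != y:
        return (x > y) - (x < y)
    # degenerate tie x == y: the eps term decides, via the indices
    return (i > j) - (i < j)

def perturbed_sort(values):
    # run the comparison-based algorithm on the symbolically perturbed input
    return sorted(enumerate(values), key=cmp_to_key(perturbed_cmp))
```

Every comparison between distinct items is decisive, so downstream code that assumes strict order (the "nondegenerate input" hypothesis) works unchanged, and the result agrees with some correct answer for the unperturbed input.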