Results 1–10 of 48
Hypercomputation: computing more than the Turing machine, 2002
Abstract

Cited by 31 (5 self)
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which allows comparisons of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines. I begin with a succinct introduction to the classical theory of computation and its place amongst some of the negative results of the 20th century. I then explain how the Church-Turing Thesis is commonly misunderstood and present new theses which better describe the possible limits on computability. Following this, I introduce ten different hypermachines (including three of my own) and discuss in some depth the manners in which they attain their power and the physical plausibility of each method. I then compare the powers of the different models using a device from recursion theory. Finally, I examine the implications of hypercomputation for mathematics, physics, computer science and philosophy. Perhaps the most important of these implications is that the negative mathematical results of Gödel, Turing and Chaitin are each dependent upon the nature of physics. This both weakens these results and provides strong links between mathematics and physics. I conclude that hypercomputation is of serious academic interest within many disciplines, opening new possibilities that were previously ignored because of long-held misconceptions about the limits of computation.
Beyond The Universal Turing Machine, 1998
Abstract

Cited by 31 (1 self)
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
Rule Inference for Financial Prediction using Recurrent Neural Networks, 1997
Abstract

Cited by 22 (0 self)
This paper considers the prediction of noisy time series data, specifically, the prediction of foreign exchange rate data. A novel hybrid neural network algorithm for noisy time series prediction is presented which exhibits excellent performance on the problem. The method is motivated by consideration of how neural networks work, and by fundamental difficulties with random correlations when dealing with small sample sizes and high noise data. The method permits the inference and extraction of rules. One of the greatest complaints against neural networks is that it is hard to figure out exactly what they are doing; this work provides one answer for the internal workings of the network. Furthermore, these rules can be used to gain insight into both the real world system and the predictor. This paper focuses on noisy time series prediction and rule inference; use of the system in trading would typically involve the utilization of other financial indicators and domain knowledge.
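Rule extraction of the kind this abstract mentions is often realized by quantizing a recurrent network's continuous hidden state into discrete regions and reading off state-transition rules. A minimal illustrative sketch (the one-unit network, its weights, and the binning grid are invented for illustration, not taken from the paper):

```python
import math

def step(h, x, w_h=0.5, w_x=1.0):
    """One recurrent update: squash a weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(w_h * h + w_x * x)))

def extract_rules(inputs, n_bins=2):
    """Run the net on a bit sequence, quantize each hidden state into
    n_bins intervals, and record (state_bin, input) -> next_bin rules."""
    rules, h = {}, 0.0
    for x in inputs:
        h_next = step(h, x)
        key = (min(int(h * n_bins), n_bins - 1), x)
        rules[key] = min(int(h_next * n_bins), n_bins - 1)
        h = h_next
    return rules

rules = extract_rules([1, 1, 0, 1, 0, 0])
```

The resulting dictionary is a small symbolic transition table that approximates what the continuous network computes; finer bins give finer rules.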
On the Computational Power of Dynamical Systems and Hybrid Systems, Theoretical Computer Science, 1996
Abstract

Cited by 22 (5 self)
We explore the simulation and computational capabilities of discrete and continuous dynamical systems. We introduce and compare several notions of simulation between discrete and continuous systems. We give a general framework that allows discrete and continuous dynamical systems to be considered as computational machines. We introduce a new discrete model of computation: the analog automaton model. We characterize the computational power of this model as P/poly in polynomial time and as unbounded in exponential time. We prove that many very simple dynamical systems from the literature are able to simulate analog automata. From these results we deduce that many dynamical systems have intrinsically super-Turing capabilities. 1 Introduction The computational power of abstract machines which compute over the reals in unbounded precision in constant time is still an open problem. We refer the reader to [18] for an up-to-date survey. Indeed, a basic model for their computations has been propose...
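One standard way a simple dynamical system can carry out discrete computation (an illustrative example, not the paper's construction): the doubling map x → 2x mod 1 emits one bit of its initial condition per iteration, so a real-valued state of unbounded precision acts as an unbounded "tape".

```python
from fractions import Fraction  # exact arithmetic avoids float round-off

def doubling_map_bits(x0, n):
    """Iterate x -> 2x mod 1 and record the bit revealed at each step."""
    x, bits = Fraction(x0), []
    for _ in range(n):
        x *= 2
        bits.append(int(x >= 1))
        if x >= 1:
            x -= 1
    return bits

# The initial condition encodes the "tape" 1,0,1,1 (x0 = 0.1011 in binary).
bits = doubling_map_bits(Fraction(11, 16), 4)
```

This is why unbounded-precision real states are the crux: with finite precision the readable tape is finite, and the super-Turing power disappears.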
Analog Computation with Dynamical Systems, Physica D, 1997
Abstract

Cited by 21 (0 self)
This paper presents a theory that enables one to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous time systems is required. In analogy with the classical discrete theory, we develop fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, Co-RP_d, NP_d
Hypercomputation and the Physical Church-Turing Thesis, 2003
Abstract

Cited by 21 (0 self)
A version of the Church-Turing Thesis states that every effectively realizable physical system can be defined by Turing Machines (`Thesis P'); in this formulation the Thesis appears an empirical, more than a logico-mathematical, proposition. We review the main approaches to computation beyond Turing definability (`hypercomputation'): supertask, non-well-founded, analog, quantum, and retrocausal computation. These models depend on infinite computation, explicitly or implicitly, and appear physically implausible; moreover, even if infinite computation were realizable, the Halting Problem would not be affected. Therefore, Thesis P is not essentially different from the standard Church-Turing Thesis.
A theory of complexity for continuous time systems, Journal of Complexity, 2002
Abstract

Cited by 16 (0 self)
We present a model of computation with ordinary differential equations (ODEs) which converge to attractors that are interpreted as the output of a computation. We introduce a measure of complexity for exponentially convergent ODEs, enabling an algorithmic analysis of continuous time flows and their comparison with discrete algorithms. We define polynomial and logarithmic continuous time complexity classes and show that an ODE which solves the maximum network flow problem has polynomial time complexity. We also analyze a simple flow that solves the Maximum problem in logarithmic time. We conjecture that a subclass of the continuous P is equivalent to the classical P. Key Words: theory of analog computation; dynamical systems.
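The flavour of "an ODE whose attractor is the answer" can be sketched with the standard replicator flow, which concentrates all mass on the largest input value (an illustrative choice of flow; the step size and horizon below are likewise illustrative, not the paper's):

```python
def replicator_argmax(f, dt=0.01, steps=5000):
    """Euler-integrate dx_i/dt = x_i * (f_i - sum_j x_j f_j).
    The flow's attractor puts all weight on the index of max(f)."""
    n = len(f)
    x = [1.0 / n] * n                         # start at the uniform point
    for _ in range(steps):
        avg = sum(xi * fi for xi, fi in zip(x, f))
        x = [xi + dt * xi * (fi - avg) for xi, fi in zip(x, f)]
    return max(range(n), key=lambda i: x[i])  # index the flow selected

idx = replicator_argmax([0.2, 0.9, 0.5])
```

Reading the output off the attractor, rather than off a halting configuration, is what makes a notion of convergence-based complexity necessary.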
Computational power of neural networks: a characterization in terms of Kolmogorov complexity, IEEE Transactions on Information Theory, 1997
Abstract

Cited by 15 (2 self)
The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information content, of the weights is measured by a variant of resource-bounded Kolmogorov complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity. Index Terms: Kolmogorov complexity, neural networks, Turing machines.
Fuzzy Finite-State Automata Can Be Deterministically Encoded into Recurrent Neural Networks, 1996
Abstract

Cited by 13 (5 self)
There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic finite-state automata (DFAs), FFAs are not in one particular state; rather, each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-tim...
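The "occupied to some degree" behaviour described above can be sketched with a max-min composition update of the state-membership vector; the two-state automaton and its fuzzy transition matrices below are invented for illustration, not taken from the paper.

```python
def ffa_step(membership, transition):
    """mu'[j] = max_i min(mu[i], T[i][j])  (max-min composition)."""
    n = len(transition[0])
    return [max(min(mu_i, row[j]) for mu_i, row in zip(membership, transition))
            for j in range(n)]

# Fuzzy transition matrices for inputs 'a' and 'b' over states {q0, q1}:
# T[s][i][j] is the degree to which input s moves state qi to state qj.
T = {
    "a": [[0.2, 0.9], [0.4, 0.6]],
    "b": [[1.0, 0.1], [0.7, 0.3]],
}
mu = [1.0, 0.0]                 # start fully in q0
for symbol in "ab":
    mu = ffa_step(mu, T[symbol])
```

After reading "ab" the machine occupies q0 to degree 0.7 and q1 to degree 0.3, which is exactly the kind of graded state a sigmoidal recurrent network can encode.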
Bio-Steps Beyond Turing, BioSystems, 2004
Abstract

Cited by 9 (0 self)
Are there `biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite: for the first time in the literature on molecular computing, we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods, which are based on nondeterminism, our results rest upon some universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and it is realised either by decreasing the size of "reactors" or by speeding up the communication channels.
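The "acceleration" mentioned above rests on Zeno-style timing arithmetic: if step k of the device takes time 2^-k, then infinitely many steps fit inside a total time of 2, which is what lets an accelerated machine survey an unbounded computation in finite time. A toy check of the partial sums (not code from the paper):

```python
def elapsed_time(n_steps):
    """Total time for n_steps steps when step k costs 2**-k.
    The partial sums 1 + 1/2 + 1/4 + ... never reach 2."""
    return sum(2.0 ** -k for k in range(n_steps))

times = [elapsed_time(n) for n in (1, 10, 50)]
```

However many steps are taken, the elapsed time stays below the bound 2, so an observer at time 2 has, in principle, seen the outcome of every step.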