Results 1-10 of 195
The induction of dynamical recognizers
 Machine Learning, 1991
Abstract

Cited by 210 (14 self)
A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network. This phase transition corresponds to the onset of the network’s capacity for generalizing to arbitrary-length strings. Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton consistent with the given exemplars, which is an NP-hard problem, the architecture does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper with a hypothesis relating linguistic generative capacity to the behavioral regimes of nonlinear dynamical systems.
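The "higher order" (second-order) architecture described above can be sketched minimally: each input symbol selects its own weight matrix, and the next state is a squashed bilinear function of the current state. Everything concrete below (the weights, the two-unit state layout, and the 0.5 acceptance threshold) is invented for illustration, not taken from the paper.

```python
import numpy as np

def run_recognizer(W, s0, string, threshold=0.5):
    """Second-order recurrent step: the weight slice for the current input
    symbol multiplies the state vector, then a sigmoid squashes the result.
    Accepts the string if the first state unit ends above `threshold`."""
    s = s0.copy()
    for sym in string:
        s = 1.0 / (1.0 + np.exp(-(W[:, :, sym] @ s)))
    return s[0] > threshold

# Hand-built weights (illustrative) recognizing "contains at least one 1":
# unit 1 stays saturated near 1, unit 0 latches once a 1 is seen.
W = np.zeros((2, 2, 2))
W[1, 1, 0] = W[1, 1, 1] = 10.0   # keep unit 1 on under both symbols
W[0, 1, 1] = 10.0                # symbol 1 sets the latch
W[0, 0, 0] = 20.0                # symbol 0 preserves a set latch ...
W[0, 1, 0] = -10.0               # ... and keeps an unset latch off
s0 = np.array([0.0, 1.0])
```

In the paper the weights are instead adjusted by gradient descent on categorized exemplars; the hand-built matrix only shows how one bilinear map per symbol can implement automaton-like state transitions.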
A multifractal wavelet model with application to TCP network traffic
 IEEE TRANS. INFORM. THEORY, 1999
Abstract

Cited by 173 (30 self)
In this paper, we develop a new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise). Using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to ensure positive results, the model provides a rapid O(N) cascade algorithm for synthesizing N-point data sets. We study both the second-order and multifractal properties of the model, the latter after a tutorial overview of multifractal analysis. We derive a scheme for matching the model to real data observations and, to demonstrate its effectiveness, apply the model to network traffic synthesis. The flexibility and accuracy of the model and fitting procedure result in a close fit to the real data statistics (variance-time plots and moment scaling) and queuing behavior. Although for illustrative purposes we focus on applications in network traffic modeling, the multifractal wavelet model could be useful in a number of other areas involving positive data, including image processing, finance, and geophysics.
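The positivity trick admits a short sketch: at each scale the Haar wavelet coefficient is a random multiplier in (-1, 1) times the scaling coefficient, so both children of the inverse Haar step stay positive. The Beta-distributed multipliers and the fixed coarse coefficient below are illustrative choices, not the paper's fitted parameters.

```python
import numpy as np

def mwm_synthesize(n_scales, beta=5.0, seed=0):
    """Multiplicative Haar cascade: starting from one coarse scaling
    coefficient, each level draws multipliers a in (-1, 1), sets the
    wavelet coefficient w = a * u, and applies the inverse Haar step
    u -> ((u + w)/sqrt(2), (u - w)/sqrt(2)).  Since |w| < u, all
    2**n_scales synthesized samples are positive, at O(N) total cost."""
    rng = np.random.default_rng(seed)
    u = np.array([1.0])
    for _ in range(n_scales):
        a = 2.0 * rng.beta(beta, beta, size=u.size) - 1.0  # in (-1, 1)
        w = a * u
        children = np.empty(2 * u.size)
        children[0::2] = (u + w) / np.sqrt(2.0)
        children[1::2] = (u - w) / np.sqrt(2.0)
        u = children
    return u
```

The multiplier distribution is the knob the fitting scheme in the paper turns: its spread per scale controls the burstiness of the synthesized traffic.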
Chaos and Nonlinear Dynamics: Application to Financial Markets
 Journal of Finance, 1991
Abstract

Cited by 106 (3 self)
After the stock market crash of October 19, 1987, interest in nonlinear dynamics, especially deterministic chaotic dynamics, has increased in both the financial press and the academic literature. This has come about because the frequency of large moves in stock markets is greater than would be expected ...
Advanced Spectral Methods for Climatic Time Series
2001
Abstract

Cited by 96 (29 self)
The analysis of uni- or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory.
Nearest-neighbor searching and metric space dimensions
 In Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2006
Abstract

Cited by 87 (0 self)
Given a set S of n sites (points) and a distance measure d, the nearest neighbor searching problem is to build a data structure so that, given a query point q, the site nearest to q can be found quickly. This paper gives a data structure for this problem; the data structure is built using the distance function as a “black box”. The structure is able to speed up nearest neighbor searching in a variety of settings, for example: points in low-dimensional or structured Euclidean space, strings under Hamming and edit distance, and bit-vector data from an OCR application. The data structures are observed to need linear space, with a modest constant factor. The preprocessing time needed per site is observed to match the query time. The data structure can be viewed as an application of a “kd-tree” approach in the metric space setting, using Voronoi regions of a subset in place of axis-aligned boxes.
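A stripped-down version of the idea, distance as a black box plus Voronoi cells of a pivot subset, can be sketched as follows. The pivot count, the single-level structure, and the pruning rule are illustrative simplifications of the paper's recursive data structure.

```python
import random

def build_index(sites, dist, n_pivots=8, seed=0):
    """Partition sites into Voronoi cells of a random pivot subset,
    caching each site's distance to its own pivot.  `dist` is used
    purely as a black box."""
    rng = random.Random(seed)
    pivots = rng.sample(sites, min(n_pivots, len(sites)))
    cells = [[] for _ in pivots]
    for s in sites:
        i = min(range(len(pivots)), key=lambda j: dist(s, pivots[j]))
        cells[i].append((s, dist(s, pivots[i])))
    return pivots, cells

def nearest(index, q, dist):
    """Scan cells in order of pivot distance; the triangle inequality
    d(q, s) >= |d(q, p) - d(s, p)| skips sites that cannot win."""
    pivots, cells = index
    dq = [dist(q, p) for p in pivots]
    best, best_d = None, float("inf")
    for i in sorted(range(len(pivots)), key=dq.__getitem__):
        for s, dsp in cells[i]:
            if abs(dq[i] - dsp) >= best_d:
                continue            # cached lower bound rules s out
            d = dist(q, s)
            if d < best_d:
                best, best_d = s, d
    return best, best_d
```

Because every cell is still visited, correctness does not depend on the metric; the cached pivot distances only reduce how often `dist` is called, which is the expensive operation this family of structures tries to economize.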
Intrinsic Dimension Estimation Using Packing Numbers
2003
Abstract

Cited by 53 (0 self)
We propose a new algorithm to estimate the intrinsic dimension of data sets. The method is based on geometric properties of the data and requires neither parametric assumptions on the data generating model nor input parameters to set. The method is compared to a similar, widely used algorithm from the same family of geometric techniques. Experiments show that our method is more robust in terms of the data generating distribution and more reliable in the presence of noise.
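The geometric idea can be sketched directly: count a greedy r-packing (a maximal subset with pairwise distances greater than r) at two radii and read the dimension off the log-log slope. This single-pass, two-scale version is only an illustration of the packing-number idea, not the paper's exact estimator.

```python
import numpy as np

def packing_number(X, r, seed=0):
    """Greedy lower bound on the r-packing number of X: scan the points
    in random order, keeping each one that is farther than r from every
    center kept so far (the result is a maximal r-packing)."""
    rng = np.random.default_rng(seed)
    centers = []
    for i in rng.permutation(len(X)):
        if all(np.linalg.norm(X[i] - X[c]) > r for c in centers):
            centers.append(i)
    return len(centers)

def packing_dimension(X, r1, r2, seed=0):
    """Two-scale capacity-dimension estimate from packing counts at
    radii r1 < r2:  D ~ -(log M(r2) - log M(r1)) / (log r2 - log r1)."""
    m1 = packing_number(X, r1, seed)
    m2 = packing_number(X, r2, seed)
    return (np.log(m1) - np.log(m2)) / (np.log(r2) - np.log(r1))
```

For data lying on a one-dimensional curve embedded in a higher-dimensional space, M(r) scales roughly like 1/r, so the slope comes out near 1 regardless of the embedding dimension.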
Generalized Relevance Learning Vector Quantization
 Neural Networks, 2002
Abstract

Cited by 49 (20 self)
We propose a new scheme for extending generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which yields a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the Iris data from the UCI repository.
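One stochastic-gradient step of such a relevance-weighted scheme can be sketched as follows, using the GLVQ-style relative-distance cost mu = (d+ - d-)/(d+ + d-). The step sizes and the clip-and-renormalize handling of the relevance factors are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def grlvq_step(x, y, protos, labels, lam, eps_w=0.05, eps_l=0.01):
    """One GRLVQ-style update: distances are weighted by the relevance
    factors `lam`; the closest correct and closest wrong prototypes move,
    and `lam` descends the same cost, then is clipped and renormalized."""
    d2 = ((protos - x) ** 2 * lam).sum(axis=1)   # relevance-weighted dists
    same = labels == y
    jp = np.flatnonzero(same)[np.argmin(d2[same])]    # closest correct
    jm = np.flatnonzero(~same)[np.argmin(d2[~same])]  # closest wrong
    dp, dm = d2[jp], d2[jm]
    xi_p = dm / (dp + dm) ** 2      # d(mu)/d(d+), up to a factor of 2
    xi_m = dp / (dp + dm) ** 2      # -d(mu)/d(d-), up to a factor of 2
    dfp, dfm = x - protos[jp], x - protos[jm]
    protos[jp] += eps_w * xi_p * lam * dfp       # attract correct prototype
    protos[jm] -= eps_w * xi_m * lam * dfm       # repel wrong prototype
    lam -= eps_l * (xi_p * dfp ** 2 - xi_m * dfm ** 2)
    np.clip(lam, 0.0, None, out=lam)             # keep factors nonnegative
    lam /= lam.sum()                             # ... and normalized
    return protos, lam
```

On data where only one input dimension carries class information, the corresponding factor grows while the noise dimension's factor shrinks, which is exactly the pruning signal the abstract describes.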
Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography
 PHYS. REV. A, 1991
Abstract

Cited by 47 (41 self)
A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful for yielding insights at the single-neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: the algebraic and numerical algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked potential data being collected to investigate genetic predispositions to alcoholism and to extract brain “signatures” of short-term memory. Using the numerical algorithm of Very Fast Simulated Reannealing, it is demonstrated that SMNI can indeed fit these data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and the quantitative modeling results are used to examine physical neocortical mechanisms to discriminate between high-risk and low-risk populations genetically predisposed to alcoholism. Since this first study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is inconclusive because of other neuronal activity which can mask such effects. However, the SMNI model is shown to be consistent ...
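The fitting engine mentioned here, Very Fast Simulated Reannealing (later distributed as ASA), can be sketched in generic form: each parameter is perturbed with a heavy-tailed generating distribution whose width tracks an exponentially falling temperature T_k = T0 * exp(-c * k**(1/D)). The constants, bounds handling, and plain Metropolis acceptance below are illustrative, not the tuned setup used for the EEG fits.

```python
import numpy as np

def vfsr_minimize(f, lo, hi, n_iter=2000, t0=1.0, c=1.0, seed=0):
    """Sketch of a VFSR/ASA-style annealer: heavy-tailed moves whose
    magnitude spans scales from ~T up to the full parameter range, with
    temperature T_k = t0 * exp(-c * k**(1/D)) for D parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    D = lo.size
    x = rng.uniform(lo, hi)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = t0 * np.exp(-c * k ** (1.0 / D))
        u = rng.uniform(size=D)
        # Ingber's generating distribution: |y| ranges from ~T to 1
        y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        cand = np.clip(x + y * (hi - lo), lo, hi)
        fc = f(cand)
        # Metropolis acceptance at the current temperature
        if fc < fx or rng.uniform() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest
```

The exponential-in-k**(1/D) schedule is what makes the method "very fast" relative to classical Boltzmann annealing, whose provable schedule cools only logarithmically.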
Interdisciplinary application of nonlinear time series methods
 Phys. Reports, 1998
Abstract

Cited by 44 (5 self)
This paper reports on the application to field measurements of time series methods developed on the basis of the theory of deterministic chaos. The major difficulties that arise when the data cannot be assumed to be purely deterministic are pointed out, and the potential that remains in this situation is discussed. For signals with weakly nonlinear structure, the presence of nonlinearity in a general sense has to be inferred statistically. The paper reviews the relevant methods and discusses the implications for deterministic modeling. Most field measurements yield nonstationary time series, which poses a severe problem for their analysis. Recent progress in the detection and understanding of nonstationarity is reported. If a clear signature of approximate determinism is found, the notions of phase space, attractors, invariant manifolds, etc. provide a convenient framework for time series analysis. Although the results have to be interpreted with great care, superior performance can be achieved for typical signal processing tasks. In particular, prediction and filtering of signals are discussed, as well as the classification of system states by means of time series recordings.
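The phase-space framework the review refers to rests on time-delay embedding, which is short enough to sketch directly (the function name and argument order are ours):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series x(t) to vectors
    (x(t), x(t + tau), ..., x(t + (dim-1)*tau)), the standard
    phase-space reconstruction of nonlinear time-series analysis."""
    x = np.asarray(x)
    n = x.size - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)
```

Choosing the embedding dimension and delay is itself one of the difficulties the review discusses; for noisy, nonstationary field data there is no single correct pair, only diagnostics for reasonable ones.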
Estimating the Intrinsic Dimension of Data with a Fractal-Based Method
2002
Abstract

Cited by 40 (2 self)
In this paper, the problem of estimating the intrinsic dimension of a data set is investigated. A fractal-based approach using the Grassberger-Procaccia algorithm is proposed. Since the Grassberger-Procaccia algorithm performs badly on sets of high dimensionality, an empirical procedure that improves the original algorithm has been developed. The procedure has been tested on data sets of known dimensionality and on time series from the Santa Fe competition.
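The basic Grassberger-Procaccia estimator that the paper builds on fits a slope to the correlation integral C(r), the fraction of point pairs closer than r. A minimal version (brute-force pairwise distances, plain least-squares slope, radii chosen by hand) looks like this:

```python
import numpy as np

def correlation_dimension(X, radii):
    """Grassberger-Procaccia estimate: compute the correlation integral
    C(r) over all pairwise Euclidean distances, then fit the slope of
    log C(r) versus log r.  O(N^2), fine for a small-N sketch."""
    X = np.asarray(X)
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    pd = d[np.triu_indices(len(X), k=1)]         # upper-triangle pairs
    c = np.array([(pd < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope
```

The bad high-dimensional behavior the abstract mentions shows up here as the shrinking range of radii over which log C(r) is actually linear; the paper's empirical procedure addresses exactly that scaling-region problem.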