Results 1–10 of 188
Ensemble forecasting at NCEP and the breeding method
 Mon. Wea. Rev.
, 1997
Cited by 78 (9 self)
The breeding method has been used to generate perturbations for ensemble forecasting at the National Centers for Environmental Prediction (formerly known as the National Meteorological Center) since December 1992. At that time a single breeding cycle with a pair of bred forecasts was implemented. In March 1994, the ensemble was expanded to seven independent breeding cycles on the Cray C90 supercomputer, and the forecasts were extended to 16 days. This provides 17 independent global forecasts valid for two weeks every day. For efficient ensemble forecasting, the initial perturbations to the control analysis should adequately sample the space of possible analysis errors. It is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere. The perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is "scaled down" at regular intervals by the use of observations. Because of this, growing errors associated with the evolving state of the atmosphere develop within the analysis cycle and dominate subsequent forecast error growth. The breeding method simulates the development of growing errors in the analysis cycle. A difference field between two nonlinear forecasts is carried forward (and scaled down at regular intervals) upon the evolving atmospheric analysis fields. By construction, the bred vectors are superpositions of the leading local (time-dependent) ...
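The rescale-and-rerun loop described in this abstract can be sketched on a toy system. Below is a minimal, illustrative breeding cycle on the Lorenz-63 equations (not the NCEP forecast system); the integrator, cycle length, and perturbation norm are arbitrary demonstration choices, not values from the paper.

```python
import numpy as np

def lorenz_step(p, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # one forward-Euler step of the Lorenz-63 system
    x, y, z = p
    return p + dt*np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def breed(p0, n_cycles=50, steps_per_cycle=8, size=1e-3, seed=0):
    """Run a control and a perturbed forecast side by side; at the end
    of each cycle, rescale their difference back to `size` (the
    'scaling down' step) and re-add it to the control."""
    rng = np.random.default_rng(seed)
    control = p0.astype(float)
    pert = control + size*rng.standard_normal(3)
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):
            control = lorenz_step(control)
            pert = lorenz_step(pert)
        diff = pert - control
        diff *= size/np.linalg.norm(diff)
        pert = control + diff
    return control, diff/size  # final state and unit bred vector

control, bv = breed(np.array([1.0, 1.0, 1.0]))
```

After enough cycles the difference field aligns with the locally fastest-growing perturbation, which is the point of the method.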
A practical method for calculating largest Lyapunov exponents from small data sets
 Physica D
, 1993
Cited by 62 (0 self)
Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close state-space trajectories and estimate the amount of chaos in a system. We present a new method for calculating the largest Lyapunov exponent from an experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate because it takes advantage of all the available data. We show that the algorithm is fast, easy to implement, and robust to changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level. Furthermore, one may use the algorithm to simultaneously calculate the correlation dimension. Thus, one sequence of computations will yield an estimate of both the level of chaos and the system complexity.
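The core idea — tracking the average logarithmic divergence of nearest neighbours and fitting a line through it — can be sketched in a few lines. This is a much-simplified stand-in for the paper's algorithm (scalar series, no delay embedding, no correlation-dimension output), applied to the logistic map at r = 4, whose exponent is known to be ln 2 ≈ 0.693; all parameters are illustrative.

```python
import numpy as np

def largest_lyapunov(x, k_max=8, min_sep=10):
    """Average log divergence of nearest neighbours over k_max steps;
    the fitted slope estimates the largest exponent (per time step)."""
    n = len(x) - k_max
    pts = x[:n]
    logdiv = np.zeros(k_max + 1)
    count = 0
    for i in range(n):
        d = np.abs(pts - pts[i])
        d[max(0, i - min_sep):i + min_sep] = np.inf  # temporal exclusion
        j = int(np.argmin(d))
        if not np.isfinite(d[j]) or d[j] == 0.0:
            continue
        sep = np.abs(x[i:i + k_max + 1] - x[j:j + k_max + 1])
        if np.any(sep == 0.0):
            continue
        logdiv += np.log(sep)
        count += 1
    return np.polyfit(np.arange(k_max + 1), logdiv/count, 1)[0]

# logistic map at r = 4: the true exponent is ln 2
x = np.empty(4000)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t+1] = 4.0*x[t]*(1.0 - x[t])
lam = largest_lyapunov(x)
```

With a short fitting window the divergence has not yet saturated at the attractor size, so the slope stays close to the true exponent.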
On The Computation Of Lyapunov Exponents For Continuous Dynamical Systems
, 1997
Cited by 48 (14 self)
In this paper, we consider discrete and continuous QR algorithms for computing all of the Lyapunov exponents of a regular dynamical system. We begin by reviewing theoretical results for regular systems and present general perturbation results for Lyapunov exponents. We then present the algorithms, give an error analysis of them, and describe their implementation. Finally, we give several numerical examples and some conclusions.
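A discrete QR iteration of the kind reviewed here is short enough to sketch. The toy below computes both exponents of the Hénon map by pushing an orthonormal frame through the tangent map and re-orthonormalizing with a QR factorization at every step; the map parameters and iteration counts are illustrative choices, not from the paper.

```python
import numpy as np

def henon(p, a=1.4, b=0.3):
    x, y = p
    return np.array([1.0 - a*x*x + y, b*x])

def henon_jac(p, a=1.4, b=0.3):
    x, _ = p
    return np.array([[-2.0*a*x, 1.0],
                     [b,        0.0]])

def lyapunov_spectrum_qr(n_iter=20000, n_skip=100):
    """Discrete QR method: the exponents are the time-averaged logs
    of the R diagonal along the orbit."""
    p = np.array([0.1, 0.1])
    for _ in range(n_skip):          # discard the transient
        p = henon(p)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(henon_jac(p) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        p = henon(p)
    return sums / n_iter

lams = lyapunov_spectrum_qr()
```

A quick consistency check: the exponents must sum to the log of the (constant) Jacobian determinant, ln 0.3, and the leading one should be near the accepted value ≈ 0.42.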
Constrained-Realization Monte Carlo Method for Hypothesis Testing
 Physica D
Cited by 42 (1 self)
We compare two theoretically distinct approaches to generating artificial (or "surrogate") data for testing hypotheses about a given data set. The first and more straightforward approach is to fit a single "best" model to the original data, and then to generate surrogate data sets that are "typical realizations" of that model. The second approach concentrates not on the model but directly on the original data; it attempts to constrain the surrogate data sets so that they exactly agree with the original data for a specified set of sample statistics. Examples of these two approaches are provided for two simple cases: a test for deviations from a Gaussian distribution, and a test for serial dependence in a time series. Additionally, we consider tests for nonlinearity in time series based on a Fourier transform (FT) method and on more conventional autoregressive moving-average (ARMA) fits to the data. The comparative performance of hypothesis testing schemes based on these two approaches...
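The constrained-realization approach for a Fourier-based nonlinearity test can be sketched directly: hold the periodogram of the data fixed and randomize the phases. This is a generic phase-randomized surrogate, not the authors' exact procedure.

```python
import numpy as np

def ft_surrogate(x, seed=None):
    """Constrained-realization surrogate: keep |FFT| of the data,
    randomize the phases, invert. The periodogram is preserved exactly."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0*np.pi, len(X))
    phases[0] = 0.0                  # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X)*np.exp(1j*phases), n)

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(512))  # a correlated test series
s = ft_surrogate(x, seed=1)
```

A test statistic computed on the original and on an ensemble of such surrogates then gives the Monte Carlo p-value discussed in the abstract.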
Interdisciplinary application of nonlinear time series methods
 Phys. Reports
, 1998
Cited by 42 (5 self)
This paper reports on the application to field measurements of time series methods developed on the basis of the theory of deterministic chaos. The major difficulties that arise when the data cannot be assumed to be purely deterministic are pointed out, and the potential that remains in this situation is discussed. For signals with weakly nonlinear structure, the presence of nonlinearity in a general sense has to be inferred statistically. The paper reviews the relevant methods and discusses the implications for deterministic modeling. Most field measurements yield nonstationary time series, which poses a severe problem for their analysis. Recent progress in the detection and understanding of nonstationarity is reported. If a clear signature of approximate determinism is found, the notions of phase space, attractors, invariant manifolds, etc., provide a convenient framework for time series analysis. Although the results have to be interpreted with great care, superior performance can be achieved for typical signal processing tasks. In particular, prediction and filtering of signals are discussed, as well as the classification of system states by means of time series recordings.
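As a concrete example of the phase-space framework mentioned above, here is a minimal zeroth-order nonlinear predictor: embed the series in delay coordinates and forecast each point as the mean next value of its nearest neighbours in the reconstructed space. The Hénon map stands in for a field measurement; the embedding dimension and neighbour count are arbitrary.

```python
import numpy as np

def nn_predict(x, m=2, k=5, n_train=800):
    """Embed in m delay coordinates; forecast each test point as the
    mean next value of its k nearest training neighbours."""
    V = np.column_stack([x[i:len(x) - m + i] for i in range(m)])
    y = x[m:]                      # value following each delay vector
    Vtr, ytr = V[:n_train], y[:n_train]
    preds = []
    for v in V[n_train:len(y)]:
        d = np.sum((Vtr - v)**2, axis=1)
        preds.append(ytr[np.argsort(d)[:k]].mean())
    return np.array(preds), y[n_train:]

# Hénon map x-coordinate as a stand-in for a measured series
a, b = 1.4, 0.3
u, w = 0.1, 0.1
x = np.empty(1000)
for t in range(len(x)):
    x[t] = u
    u, w = 1.0 - a*u*u + w, b*u

preds, truth = nn_predict(x)
```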
Equations of motion from a data series
 Complex Systems
, 1987
Cited by 41 (14 self)
Abstract. Temporal pattern learning, control and prediction, and chaotic data analysis share a common problem: deducing optimal equations of motion from observations of time-dependent behavior. Each desires to obtain models of the physical world from limited information. We describe a method to reconstruct the deterministic portion of the equations of motion directly from a data series. These equations of motion represent a vast reduction of a chaotic data set's observed complexity to a compact, algorithmic specification. This approach employs an informational measure of model optimality to guide searching through the space of dynamical systems. As corollary results, we indicate how to estimate the minimum embedding dimension, extrinsic noise level, metric entropy, and Lyapunov spectrum. Numerical and experimental applications demonstrate the method's feasibility and limitations. Extensions to estimating parametrized families of dynamical systems from bifurcation data and to spatial pattern evolution are presented. Applications to predicting chaotic data and the design of forecasting, learning, and control systems are discussed.
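A heavily stripped-down version of the reconstruction idea — fitting the coefficients of a fixed candidate model class by least squares, rather than searching model space with an information criterion as the paper does — can still recover a simple map exactly:

```python
import numpy as np

def fit_polynomial_map(x, degree=3):
    """Least-squares fit of x[t+1] = sum_k c_k * x[t]**k, a minimal
    stand-in for deducing equations of motion from a data series."""
    A = np.vander(x[:-1], degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    return coeffs

# data generated by the logistic map x -> 3.9 x (1 - x)
x = np.empty(500)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t+1] = 3.9*x[t]*(1.0 - x[t])

c = fit_polynomial_map(x)   # expect coefficients near [0, 3.9, -3.9, 0]
```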
Prediction of Chaotic Time Series with Neural Networks
 INT. J. BIFURCATION AND CHAOS
, 1992
Cited by 31 (8 self)
This paper shows that the dynamics of nonlinear systems that produce complex time series can be captured in a model system. The model system is an artificial neural network, trained with backpropagation, in a multistep prediction framework. Results from the Mackey-Glass equation (D=30) will be presented to corroborate our claim. Our final intent is to study the applicability of the method to the electroencephalogram, but first several important questions must be answered to guarantee appropriate modeling.
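A minimal sketch of the setup: integrate the Mackey-Glass delay equation, form delay vectors, and train a small tanh network with plain backpropagation. Unlike the paper, training here is single-step (multistep prediction would iterate the trained net on its own outputs); the lag structure, network size, and learning rate are arbitrary demonstration choices.

```python
import numpy as np

def mackey_glass(n=1200, tau=30, beta=0.2, gamma=0.1):
    # crude Euler integration (dt = 1) of the Mackey-Glass delay equation
    x = np.full(n + tau, 1.2)
    for t in range(tau, n + tau - 1):
        x[t+1] = x[t] + beta*x[t-tau]/(1.0 + x[t-tau]**10) - gamma*x[t]
    return x[tau:]

def make_dataset(x, lags=(0, 6, 12, 18)):
    m = max(lags)
    X = np.stack([x[m - l:len(x) - 1 - l] for l in lags], axis=1)
    return X, x[m + 1:]          # one-step-ahead targets

def train_mlp(X, y, hidden=8, lr=0.05, epochs=500, seed=0):
    """One-hidden-layer tanh network, full-batch backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = 0.5*rng.standard_normal((X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = 0.5*rng.standard_normal(hidden)
    b2 = 0.0
    losses = []
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - y
        losses.append(float(np.mean(err**2)))
        g = 2.0*err/len(y)                  # dLoss/dprediction
        gh = np.outer(g, W2)*(1.0 - h**2)   # backprop through tanh
        W2 -= lr*(h.T @ g); b2 -= lr*g.sum()
        W1 -= lr*(X.T @ gh); b1 -= lr*gh.sum(axis=0)
    return losses

X, y = make_dataset(mackey_glass())
losses = train_mlp(X, y)
```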
Generalized Redundancies for Time Series Analysis
 Physica D
, 1995
Cited by 27 (0 self)
Extensions to various information-theoretic quantities (such as entropy, redundancy, and mutual information) are discussed in the context of their role in nonlinear time series analysis. We also discuss "linearized" versions of these quantities and their use as benchmarks in tests for nonlinearity. Many of these quantities can be expressed in terms of the generalized correlation integral, and this expression permits us to more clearly exhibit the relationships of these quantities to each other and to other commonly used nonlinear statistics (such as the BDS and Green-Savit statistics). Further, numerical estimation of these quantities is found to be more accurate and more efficient when the correlation integral is employed in the computation. Finally, we consider several "local" versions of these quantities, including a local Kolmogorov-Sinai entropy, which gives an estimate of the variability of the short-term predictability.
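The simplest member of this family, the time-delayed mutual information, can be estimated with a plain histogram (the paper's correlation-integral estimators are more refined; this is only a sketch):

```python
import numpy as np

def delayed_mutual_information(x, lag, bins=16):
    """Histogram estimate of I(x_t; x_{t+lag}) in nats."""
    pxy, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
    pxy = pxy/pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0                       # avoid log(0) on empty cells
    return float(np.sum(pxy[nz]*np.log(pxy[nz]/np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(5000))   # strongly serially dependent
noise = rng.standard_normal(5000)             # i.i.d.; MI should be near zero
mi_walk = delayed_mutual_information(walk, 1)
mi_noise = delayed_mutual_information(noise, 1)
```

For independent data the estimate is not exactly zero; the small positive bias of histogram estimators is one reason the correlation-integral formulation discussed above is preferred.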
Local Dynamic Modeling with Self-Organizing Maps and Applications to Nonlinear System Identification and Control
 Proceedings of the IEEE
, 1998
Cited by 23 (2 self)
The technique of local linear models is appealing for modeling complex time series due to the weak assumptions required and its intrinsic simplicity. Here, instead of deriving the local models from the data, we propose to estimate them directly from the weights of a self-organizing map (SOM), which functions as a dynamics-preserving model of the system. We introduce one modification to the Kohonen learning to ensure good representation of the dynamics and use weighted least squares to ensure continuity among the local models. The proposed scheme is tested using synthetic chaotic time series and real-world data. The practicality of the method is illustrated in the identification and control of the NASA Langley wind tunnel during aerodynamic tests of model aircraft. Modeling the dynamics with a SOM leads to a predictive multiple-model control strategy (PMMC). Comparison of the new controller against the existing controller in test runs shows the superiority of our method.
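A toy version of the scheme — plain Kohonen learning (without the paper's modification) followed by an ordinary least-squares local model per node, fitted here to delay vectors of the logistic map:

```python
import numpy as np

def train_som(data, n_nodes=20, epochs=20, seed=0):
    """Plain 1-D Kohonen learning; `data` has shape (N, d)."""
    rng = np.random.default_rng(seed)
    W = data[rng.integers(0, len(data), n_nodes)].copy()
    idx = np.arange(n_nodes)
    for e in range(epochs):
        lr = 0.5*(1.0 - e/epochs)                       # decaying rate
        sigma = max(1.0, (n_nodes/2)*(1.0 - e/epochs))  # shrinking kernel
        for v in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.sum((W - v)**2, axis=1))
            h = np.exp(-0.5*((idx - bmu)/sigma)**2)
            W += lr*h[:, None]*(v - W)
    return W

# delay vectors (x_t, x_{t-1}) and next values from the logistic map
x = np.empty(2000)
x[0] = 0.37
for t in range(len(x) - 1):
    x[t+1] = 4.0*x[t]*(1.0 - x[t])
data = np.column_stack([x[1:-1], x[:-2]])
target = x[2:]

W = train_som(data)
bmus = np.array([np.argmin(np.sum((W - v)**2, axis=1)) for v in data])

# one ordinary least-squares linear model per SOM node
A = np.column_stack([data, np.ones(len(data))])
global_c, *_ = np.linalg.lstsq(A, target, rcond=None)  # single-model baseline
models = {}
for k in range(len(W)):
    sel = bmus == k
    if sel.sum() >= 4:
        models[k], *_ = np.linalg.lstsq(A[sel], target[sel], rcond=None)

pred = np.array([A[i] @ models.get(int(bmus[i]), global_c)
                 for i in range(len(data))])
mse_local = float(np.mean((pred - target)**2))
mse_global = float(np.mean((A @ global_c - target)**2))
```

The patchwork of local linear fits tracks the quadratic map far better than any single global linear model can.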
Is there chaos in the brain? II. Experimental evidence and related models
 C. R. Biol
, 2003
Cited by 22 (0 self)
The search for chaotic patterns has occupied numerous investigators in neuroscience, as in many other fields of science. Their results and main conclusions are reviewed in the light of the most recent criteria that need to be satisfied since the first descriptions of the surrogate strategy. The methods used in each of these studies have almost invariably combined the analysis of experimental data with simulations using formal models, often based on modified Hodgkin and Huxley equations and/or on the Hindmarsh and Rose model of bursting neurons. Due to technical limitations, the results of these simulations have prevailed over experimental ones in studies on the nonlinear properties of large cortical networks and higher brain functions. Yet, although a convincing proof of chaos (as defined mathematically) has only been obtained at the level of axons and of single and coupled cells, convergent results can be interpreted as compatible with the notion that signals in the brain are distributed according to chaotic patterns at all levels of its various forms of hierarchy. This chronological account of the main landmarks of nonlinear neuroscience follows an earlier publication [Faure, Korn, C. R. Acad. Sci. Paris, Ser. III 324 (2001) 773–793] that focused on the basic concepts of nonlinear dynamics, on methods of investigation that allow chaotic processes to be distinguished from stochastic ones, and on the rationale for envisioning their control using external perturbations. Here we present the data and main arguments that support the existence of chaos at all levels, from the simplest to the most complex forms of organization of the nervous system.
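The Hindmarsh and Rose model mentioned above is compact enough to reproduce. Below is a simple Euler integration with a commonly used parameter set (the specific values are illustrative, not taken from this review):

```python
import numpy as np

def hindmarsh_rose(n_steps=100000, dt=0.005, I=3.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Forward-Euler integration of the three-variable model:
      dx/dt = y + 3x^2 - x^3 - z + I    (membrane potential)
      dy/dt = 1 - 5x^2 - y              (fast recovery)
      dz/dt = r*(s*(x - x_rest) - z)    (slow adaptation)"""
    x, y, z = -1.0, 0.0, 2.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        dx = y + 3.0*x*x - x**3 - z + I
        dy = 1.0 - 5.0*x*x - y
        dz = r*(s*(x - x_rest) - z)
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        xs[i] = x
    return xs

v = hindmarsh_rose()
```

With these parameters the membrane variable alternates between quiescent phases and bursts of spikes, the behavior whose chaotic character the reviewed studies examine.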