Results 1-10 of 607
A Unifying Review of Linear Gaussian Models
1999
Cited by 351 (18 self)
Abstract: Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
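The single generative model behind this unification is the linear Gaussian model; in the static case with diagonal observation noise it reduces to factor analysis. A minimal sketch of that special case (the dimensions, loadings, and noise levels are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Static linear Gaussian model (factor analysis):
#   x ~ N(0, I)                     k-dimensional latent factors
#   y = C x + v,  v ~ N(0, R)       p-dimensional observation, R diagonal
# k, p, n, C, and R below are illustrative, not from the paper.
k, p, n = 2, 5, 5000
C = rng.normal(size=(p, k))              # factor loadings
R = np.diag(rng.uniform(0.1, 0.5, p))    # diagonal observation noise

x = rng.normal(size=(n, k))              # latent factors
v = rng.multivariate_normal(np.zeros(p), R, size=n)
y = x @ C.T + v                          # observed data

# The implied marginal covariance of y is C C^T + R; check it empirically.
emp_cov = np.cov(y, rowvar=False)
model_cov = C @ C.T + R
print(np.abs(emp_cov - model_cov).max())
```

Restricting R to a scalar multiple of the identity gives the paper's sensible principal component analysis, and letting x evolve in time under linear Gaussian dynamics gives the Kalman filter model.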
Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter
Physica D, 2007
Cited by 152 (11 self)
Abstract: Data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system's time evolution. Rather than solving the problem from scratch each time new observations become available, one uses the model to "forecast" the current state, using a prior state estimate (which incorporates information from past data) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. This Bayesian approach is most effective when the uncertainties in both the observations and the state estimate, as they evolve over time, are accurately quantified. In this article, I describe a practical method for data assimilation in large, spatiotemporally chaotic systems. The method is a type of "Ensemble Kalman Filter", in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. I discuss both the mathematical basis of this approach and its implementation; my primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble Kalman filtering.
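The forecast/correct cycle described above can be sketched for the classical (non-ensemble) Kalman filter; the ensemble variant replaces the covariance recursion with sample statistics over an ensemble of states. The matrices below are an illustrative constant-velocity toy system, not from the paper:

```python
import numpy as np

# Linear system x_{t+1} = A x_t + w, observation y_t = H x_t + v.
# A, H, Q, R are illustrative assumptions: a constant-velocity target
# observed through noisy position measurements.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # model (process) noise covariance
R = np.array([[0.25]])                   # observation noise covariance

def kf_step(x, P, y):
    # Forecast: propagate the prior estimate with the model.
    x_f = A @ x
    P_f = A @ P @ A.T + Q
    # Correct: weigh the forecast against the new observation.
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(2) - K @ H) @ P_f
    return x_a, P_a

x, P = np.zeros(2), np.eye(2)
for y in ([1.0], [2.1], [2.9], [4.2]):   # noisy positions of a unit-speed target
    x, P = kf_step(x, P, np.array(y))
print(x)  # position estimate near 4, velocity estimate near 1
```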
Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex
Neural Computation, 1995
Cited by 113 (20 self)
Abstract: In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict the current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field.
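A single-level caricature of this dynamic: an estimate r of the causes of an input I is updated by combining the bottom-up prediction error with a top-down expectation. The fixed gains k_bu and k_td below stand in for the gains the extended Kalman filter would supply; all quantities are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Generative (synaptic) weights and a target cause vector; both synthetic.
U = rng.normal(size=(16, 4))
r_true = np.array([1.0, -0.5, 0.3, 0.0])
I = U @ r_true                      # noiseless input, for clarity

r = np.zeros(4)                     # current recognition state
r_td = 0.5 * r_true                 # crude top-down expectation
k_bu, k_td = 0.04, 0.01             # stand-ins for Kalman-derived gains
for _ in range(300):
    err = I - U @ r                 # bottom-up (input-driven) prediction error
    r = r + k_bu * (U.T @ err) + k_td * (r_td - r)
print(np.round(r, 2))               # near r_true, slightly pulled toward r_td
```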
Storing Covariance With Nonlinearly Interacting Neurons
1977
Cited by 112 (9 self)
Abstract: A time-dependent, nonlinear model of neuronal interaction which was probabilistically analyzed in a previous article is shown here to be a natural generalization of the Hartline-Ratliff model of the Limulus retina. Although the primary physical variables in the model are the membrane potentials of neurons, the equations which govern the means and covariances of the membrane potentials are coupled through the average firing rates; as a consequence, the average firing rates control the selective storage and retrieval of covariance information. Motor learning in the cerebellar cortex is treated as a problem of covariance storage, and a prediction is made for the underlying synaptic plasticity: the change in synaptic strength between a parallel fiber and a Purkinje cell should be proportional to the covariance between discharges in the parallel fiber and the climbing fiber. Unlike previous proposals for synaptic plasticity, this prediction requires both facilitation and depression to occur (under different conditions) at the same synapse.
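The predicted plasticity can be stated in one line: the weight change is proportional to cov(parallel-fiber discharge, climbing-fiber discharge), giving facilitation when the covariance is positive and depression when it is negative. A synthetic check (the spike counts and learning rate below are invented for illustration, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10000
pf = rng.poisson(5.0, T).astype(float)           # parallel-fiber spike counts
cf_pos = 0.5 * pf + rng.poisson(2.0, T)          # climbing fiber correlated with pf
cf_neg = 5.0 - 0.5 * pf + rng.poisson(2.0, T)    # climbing fiber anti-correlated

eta = 0.01                                       # learning rate (illustrative)
dw_pos = eta * np.cov(pf, cf_pos)[0, 1]          # positive covariance: facilitation
dw_neg = eta * np.cov(pf, cf_neg)[0, 1]          # negative covariance: depression
print(dw_pos, dw_neg)                            # one positive, one negative change
```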
An Intelligent Predictive Control Approach to the High-Speed Cross-Country Autonomous Navigation Problem
1995
Cited by 80 (3 self)
CMU-RI-TR-95-33, submitted in partial fulfillment of the requirements for the degree of
Receding Horizon Control of Nonlinear Systems: A Control . . .
2000
Cited by 62 (5 self)
...n Automatic Control, pages 898-907, 1990.
J. Shamma and M. Athans. Guaranteed properties of gain scheduled control for linear parameter-varying plants. Automatica, pages 559-564, 1991.
J. Shamma and M. Athans. Gain-scheduling: Potential hazards and possible remedies. IEEE Control Systems Magazine, 12(3):101-107, June 1992.
[Sch96] A. Schwartz. Theory and Implementation of Numerical Methods Based on Runge-Kutta Integration for Optimal Control Problems. PhD Dissertation, University of California, Berkeley, 1996.
[SCH+00] M. Sznaier, J. Cloutier, R. Hull, D. Jacques, and C. Mracek. Receding horizon control Lyapunov function approach to suboptimal regulation of nonlinear systems. Journal of Guidance, Control, and Dynamics, 23(3):399-405, 2000.
[SD90] M. Sznaier and M. J. Damborg. Heuristically enhanced feedback control of constrained discrete-time linear systems. Automatica, 26:521-532, 1990.
[SMR99] P. Scokaert, D. Mayne, and J. Rawlings. Suboptimal model predictive cont...
An Overview of Nonlinear Model Predictive Control Applications
Nonlinear Predictive Control, 2000
Cited by 60 (1 self)
Abstract: This paper provides an overview of nonlinear model predictive control (NMPC) applications in industry, focusing primarily on recent applications reported by NMPC vendors. A brief summary of NMPC theory is presented to highlight issues pertinent to NMPC applications. Five industrial NMPC implementations are then discussed with reference to modeling, control, optimization, and implementation issues. Results from several industrial applications are presented to illustrate the benefits possible with NMPC technology. A discussion of future needs in NMPC theory and practice is provided to conclude the paper.
1. Introduction: The term Model Predictive Control (MPC) describes a class of computer control algorithms that control the future behavior of a plant through the use of an explicit process model. At each control interval the MPC algorithm computes an open-loop sequence of manipulated variable adjustments in order to optimize future plant behavior. The first input in the optima...
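The receding-horizon loop described in the introduction (compute an open-loop input sequence over a horizon, apply only the first input, then repeat at the next interval) can be sketched for a scalar linear plant. The plant, horizon, setpoint, and weights are illustrative assumptions, not from the paper:

```python
import numpy as np

# Scalar linear plant x_{t+1} = a x_t + b u_t; all parameters illustrative.
a, b = 0.9, 0.5
N = 5                      # prediction horizon
r = 1.0                    # setpoint to track
lam = 0.1                  # input penalty weight

def mpc_input(x):
    # Predicted states: x_k = a^k x + sum_{j<k} a^(k-1-j) b u_j.
    # Stack into least squares: min ||G u + f - r||^2 + lam ||u||^2.
    G = np.zeros((N, N))
    f = np.zeros(N)
    for k in range(1, N + 1):
        f[k - 1] = a ** k * x
        for j in range(k):
            G[k - 1, j] = a ** (k - 1 - j) * b
    A_ls = np.vstack([G, np.sqrt(lam) * np.eye(N)])
    b_ls = np.concatenate([r - f, np.zeros(N)])
    u = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]
    return u[0]            # receding horizon: apply only the first input

x = 0.0
for _ in range(30):
    x = a * x + b * mpc_input(x)
print(x)  # settles near the setpoint
```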
Detection of Stochastic Processes
IEEE Trans. Inform. Theory, 1998
Cited by 60 (7 self)
Abstract: This paper reviews two streams of development, from the 1940s to the present, in signal detection theory: the structure of the likelihood ratio for detecting signals in noise and the role of dynamic optimization in detection problems involving either very large signal sets or the joint optimization of observation time and performance. This treatment deals exclusively with basic results developed for the situation in which the observations are modeled as continuous-time stochastic processes. The mathematics and intuition behind such developments as the matched filter, the RAKE receiver, the estimator-correlator, maximum-likelihood sequence detectors, multiuser detectors, sequential probability ratio tests, and cumulative-sum quickest detectors, are described.
Index Terms: Dynamic programming, innovations processes, likelihood ratios, martingale theory, matched filters, optimal stopping, reproducing kernel Hilbert spaces, sequence detection, sequential methods, signal detection, signal estimation.
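In the simplest case reviewed (a known deterministic signal in white Gaussian noise) the likelihood-ratio test reduces to the matched filter: correlate the observation with the known signal and compare against a threshold. The signature, noise level, and threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Known signature s, to be detected in white Gaussian noise of std sigma.
s = np.sin(2 * np.pi * np.arange(100) / 20)
sigma = 0.5

# For this case the likelihood-ratio test reduces to correlating with s.
# A threshold at half the signal energy sits midway between the means of
# the statistic under H0 (noise only) and H1 (signal present).
threshold = 0.5 * (s @ s)

def detect(y):
    return (y @ s) > threshold

present = s + sigma * rng.normal(size=100)    # H1: signal plus noise
absent = sigma * rng.normal(size=100)         # H0: noise only
print(bool(detect(present)), bool(detect(absent)))
```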
Measurement and Integration of 3D Structures by Tracking Edge Lines
1992
Cited by 59 (6 self)
Abstract: This paper describes techniques for dynamically modeling the 2D appearance and 3D geometry of a scene by integrating information from a moving camera. These techniques are illustrated by the design of a system which constructs a geometric description of a scene from the motion of a camera mounted on a robot arm. A framework...
Climate change and salmon production in the Northeast Pacific Ocean
In R.J. Beamish (Ed.), Climate Change and Northern Fish Populations, 1994
Cited by 56 (6 self)
Abstract: Alaskan salmon stocks have exhibited enormous fluctuations in production during the 20th century. In this paper, we investigate our hypothesis that large-scale salmon-production variability is driven by climatic processes in the Northeast Pacific Ocean. Using a time-series analytical technique known as intervention analysis, we demonstrate that Alaskan salmonids alternate between high and low production regimes. The transition from a high (low) regime to a low (high) regime is called an intervention. To test for interventions, we first fitted the salmon time series to univariate autoregressive integrated moving average (ARIMA) models. On the basis of tentatively identified climatic regime shifts, potential interventions were then identified and incorporated into the models, and the resulting fit was compared with the non-intervention models. A highly significant positive step intervention in the late 1970s and a significant negative step intervention in the late 1940s were identified in the four major Alaska salmon stocks analyzed. We review the evidence for synchronous climatic regime shifts in the late 1940s and late 1970s that coincide with the shifts in salmon production. Potential mechanisms linking North Pacific climatic processes to salmon production are identified.
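The intervention-testing step can be illustrated in miniature: simulate an autoregressive series whose level shifts at a candidate intervention point, then compare fits with and without a step regressor. The series, AR order, and shift size below are synthetic illustrations, not the paper's salmon data:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) series around a mean that jumps by 2.0 at a known candidate point.
n, shift_at = 200, 120
step = (np.arange(n) >= shift_at).astype(float)   # 0 before, 1 after
x = np.zeros(n)
for t in range(1, n):
    x[t] = 2.0 * step[t] + 0.6 * (x[t - 1] - 2.0 * step[t - 1]) \
        + rng.normal(0, 0.5)

def rss(design, y):
    # Residual sum of squares of an ordinary least-squares fit.
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid

y, lag = x[1:], x[:-1]
base = np.column_stack([np.ones(n - 1), lag])                 # AR(1) only
with_step = np.column_stack([np.ones(n - 1), lag, step[1:]])  # plus step term
print(rss(base, y), rss(with_step, y))  # step regressor lowers the RSS
```

In the paper this comparison is done with full ARIMA models and formal significance tests rather than raw residual sums of squares.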