Results 1–9 of 9
A Robustification Approach to Stability and to Uniform Particle Approximation of Nonlinear Filters: The Example of Pseudo-Mixing Signals
, 2002
"... We propose a new approach to study the stability of the optimal filter w.r.t. its initial condition, by introducing a "robust" filter, which is exponentially stable and which approximates the optimal filter uniformly in time. The "robust" filter is obtained here by truncation of the likelihood funct ..."
Abstract

Cited by 30 (3 self)
 Add to MetaCart
We propose a new approach to study the stability of the optimal filter w.r.t. its initial condition, by introducing a "robust" filter, which is exponentially stable and which approximates the optimal filter uniformly in time. The "robust" filter is obtained here by truncation of the likelihood function, and the robustification result is proved under the assumption that the Markov transition kernel satisfies a pseudo-mixing condition (weaker than the usual mixing condition), and that the observations are "sufficiently good". This robustification approach also allows us to prove the uniform convergence of several particle approximations to the optimal filter, in some cases of non-ergodic signals.
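The robustification idea (running a particle filter whose likelihood is bounded away from zero) can be sketched as follows; the AR(1) signal model, noise levels, and truncation threshold are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_particle_filter(observations, n_particles=500, trunc=1e-3):
    """Bootstrap particle filter for a 1-D AR(1) signal observed in Gaussian
    noise, with the likelihood truncated from below at `trunc` (the
    "robustification" step).  All model parameters here are toy choices."""
    x = rng.normal(0.0, 1.0, n_particles)                # initial particles
    means = []
    for y in observations:
        x = 0.9 * x + rng.normal(0.0, 0.5, n_particles)  # signal transition
        lik = np.exp(-0.5 * (y - x) ** 2)                # Gaussian likelihood
        lik = np.maximum(lik, trunc)                     # truncation step
        w = lik / lik.sum()
        x = rng.choice(x, size=n_particles, p=w)         # multinomial resampling
        means.append(x.mean())
    return np.array(means)

ys = np.linspace(1.0, 0.0, 20) + rng.normal(0.0, 0.3, 20)  # synthetic data
est = robust_particle_filter(ys)
print(est.shape)
```

Truncation keeps every particle's weight strictly positive, which is the mechanism that makes uniform-in-time error control possible in this approach.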
Optimal Estimation And Cramér-Rao Bounds For Partial Non-Gaussian State Space Models
, 2001
"... Partial nonGaussian statespace models include many models of inter est while keeping a convenient analytical structure. In this paper, two problems related to partial nonGaussian models are addressed. First, we present an efficient sequential Monte Carlo method to perform Bayesian inference. ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
Partial non-Gaussian state-space models include many models of interest while keeping a convenient analytical structure. In this paper, two problems related to partial non-Gaussian models are addressed. First, we present an efficient sequential Monte Carlo method to perform Bayesian inference. Second, we derive simple recursions to compute posterior Cramér-Rao bounds (PCRB). An application to jump Markov linear systems (JMLS) is given.
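In the linear-Gaussian special case, a PCRB recursion of this kind reduces to a scalar fixed-point iteration on the Fisher information. The model and parameter values below are illustrative; the paper's recursions cover the more general partial non-Gaussian setting:

```python
def pcrb_recursion(F, H, Q, R, J0, n_steps):
    """Scalar posterior Cramér-Rao bound recursion for the linear-Gaussian
    model x_{k+1} = F x_k + w_k, y_k = H x_k + v_k (Var w = Q, Var v = R).
    J is the Fisher information; the PCRB on the MSE is its inverse.
    Shown only for this special case as a sketch."""
    J = J0
    bounds = []
    for _ in range(n_steps):
        J = 1.0 / Q + H * H / R - (F / Q) ** 2 / (J + F * F / Q)
        bounds.append(1.0 / J)  # lower bound on the filtering MSE
    return bounds

b = pcrb_recursion(F=0.9, H=1.0, Q=0.1, R=0.5, J0=1.0, n_steps=50)
print(b[-1] < b[0])
```

The recursion converges quickly to a steady-state bound, which in this linear-Gaussian case coincides with the stationary Kalman filtering variance.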
An Analysis of Regularized Interacting Particle Methods for Nonlinear Filtering
 Proceedings of the 3rd IEEE European Workshop on Computer-Intensive Methods in Control and Signal Processing
, 1998
"... Interacting particle methods have been recently proposed for the approximation of nonlinear filters. These are efficient recursive Monte Carlo methods, which in principle could be implemented in high dimensional problems  i.e. which could beat the curse of dimensionality  and where the particl ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
Interacting particle methods have been recently proposed for the approximation of nonlinear filters. These are efficient recursive Monte Carlo methods which, in principle, could be implemented in high-dimensional problems (i.e. which could beat the curse of dimensionality) and in which the particles automatically concentrate in regions of interest of the state space. In this paper we show that it is sometimes necessary to add a regularization step, and we analyze the approximation error for the resulting regularized interacting particle methods.

1. Introduction

We consider the following model, where the unobserved state process {X_t, t >= 0} satisfies the stochastic differential equation (SDE) on R^m

dX_t = b(X_t) dt + sigma(X_t) dW_t,   X_0 ~ mu_0,   (1)

with standard Wiener process {W_t, t >= 0}, and where d-dimensional observations {z_n, n >= 1} are available at discrete time instants 0 < t_1 < ... < t_n < ...

z_n = h(X_{t_n}) + v_n,

with additive w...
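A minimal sketch of a regularization step of this flavor, assuming a Gaussian kernel with a fixed bandwidth (both illustrative choices rather than the paper's prescription):

```python
import numpy as np

rng = np.random.default_rng(1)

def regularized_resample(particles, weights, bandwidth=0.1):
    """Resample the weighted particles, then perturb each one with Gaussian
    kernel noise.  Replacing the discrete empirical measure by a smooth
    kernel mixture counteracts particle degeneracy (all mass collapsing onto
    a few support points).  Kernel and bandwidth are toy choices."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    resampled = particles[idx]
    return resampled + bandwidth * rng.standard_normal(len(particles))

x = rng.normal(0.0, 1.0, 1000)                    # prior particle cloud
w = np.exp(-0.5 * (1.0 - x) ** 2)                 # toy likelihood weights
w /= w.sum()
x_new = regularized_resample(x, w)
print(len(x_new))
```

Without the jitter, resampling duplicates particles; with it, the support stays diverse, at the price of a small kernel-smoothing bias.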
Bayesian Methods for Neural Networks
, 1999
"... Summary The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and meas ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Summary The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and measured by probabilities. This formulation allows for a probabilistic treatment of our a priori knowledge, domain specific knowledge, model selection schemes, parameter estimation methods and noise estimation techniques. Many researchers have contributed towards the development of the Bayesian learning approach for neural networks. This thesis advances this research by proposing several novel extensions in the areas of sequential learning, model selection, optimisation and convergence assessment. The first contribution is a regularisation strategy for sequential learning based on extended Kalman filtering and noise estimation via evidence maximisation. Using the expectation maximisation (EM) algorithm, a similar algorithm is derived for batch learning. Much of the thesis is, however, devoted to Monte Carlo simulation methods. A robust Bayesian method is proposed to estimate,
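The EKF-based sequential learning idea can be illustrated on a one-weight network y ≈ tanh(w·x); the scalar setting and fixed noise variances below are simplifying assumptions (the thesis treats full weight vectors and estimates the noise by evidence maximisation):

```python
import numpy as np

def ekf_step(w, P, x, y, q=1e-4, r=0.1):
    """One extended-Kalman-filter update of a single network weight w for
    the model y = tanh(w * x) + noise.  P is the weight variance, q the
    process noise, r the observation noise (all illustrative values)."""
    P = P + q                            # time update of the weight variance
    h = np.tanh(w * x)                   # predicted network output
    H = x * (1.0 - h ** 2)               # Jacobian dh/dw (linearization)
    K = P * H / (H * P * H + r)          # Kalman gain
    w = w + K * (y - h)                  # measurement update of the weight
    P = (1.0 - K * H) * P
    return w, P

rng = np.random.default_rng(2)
w_true, w, P = 1.5, 0.0, 1.0
for _ in range(200):
    x = rng.normal()
    y = np.tanh(w_true * x) + 0.1 * rng.normal()
    w, P = ekf_step(w, P, x, y)
print(P < 1.0)
```

Each observation is absorbed in O(1) work, which is what makes the EKF attractive for sequential (online) learning compared with batch retraining.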
Convergence of Empirical Processes for Interacting Particle Systems with Applications to Nonlinear Filtering
 Journal of Theoret. Probability
, 2000
"... In this paper, we investigate the convergence of empirical processes for a class of interacting particle numerical schemes arising in biology, genetic algorithms and advanced signal processing. The GlivenkoCantelli and Donsker theorems presented in this work extend the corresponding statements in t ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
In this paper, we investigate the convergence of empirical processes for a class of interacting particle numerical schemes arising in biology, genetic algorithms and advanced signal processing. The Glivenko-Cantelli and Donsker theorems presented in this work extend the corresponding statements in the classical theory and apply to a class of genetic-type particle numerical schemes for the nonlinear filtering equation. Keywords: empirical processes, interacting particle systems, Glivenko-Cantelli and Donsker theorems. A.M.S. codes: 60G35, 92D25.

1 Introduction

1.1 Background and Motivations

Let E be a Polish space endowed with its Borel sigma-field B(E). We denote by M_1(E) the space of all probability measures on E equipped with the weak topology. We recall that the weak topology is generated by the bounded continuous functions on E, and we denote by C_b(E) the space of these functions. Let O...
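The Glivenko-Cantelli statement is easy to visualise numerically: the sup-norm distance between an empirical CDF and the true CDF shrinks as the sample grows. The sketch below uses plain i.i.d. Gaussian sampling as a stand-in for the interacting particle scheme:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def sup_cdf_error(n):
    """Sup-norm distance between the empirical CDF of n N(0,1) samples and
    the true normal CDF, evaluated at the sample points - the quantity the
    Glivenko-Cantelli theorem drives to zero."""
    x = np.sort(rng.standard_normal(n))
    ecdf = np.arange(1, n + 1) / n
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])
    return float(np.max(np.abs(ecdf - cdf)))

e_small, e_large = sup_cdf_error(100), sup_cdf_error(100000)
print(e_large < e_small)
```

The Donsker refinement then says the rescaled error sqrt(n)·sup|F_n − F| converges in distribution, i.e. the decay is of order n^{-1/2}.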
Modeling Genetic Algorithms with Interacting Particle Systems
 In Theoretical Aspects of Evolutionary Computing
, 2001
"... We present in this work a natural Interacting Particle System (IPS) approach for modeling and studying the asymptotic behavior of Genetic Algorithms (GAs). In this model, a population is seen as a distribution (or measure) on the search space, and the Genetic Algorithm as a measure valued dynamical ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
We present in this work a natural Interacting Particle System (IPS) approach for modeling and studying the asymptotic behavior of Genetic Algorithms (GAs). In this model, a population is seen as a distribution (or measure) on the search space, and the Genetic Algorithm as a measure-valued dynamical system. This model allows one to apply recent convergence results from the IPS literature for studying the convergence of genetic algorithms when the size of the population tends to infinity. We first review a number of approaches to Genetic Algorithms modeling and related convergence results. We then describe a general and abstract discrete-time Interacting Particle System model for GAs, and we propose a brief review of some recent asymptotic results about the convergence of the N-IPS approximating model (of finite N-sized-population GAs) towards the IPS model (of infinite-population GAs), including law of large numbers theorems, L^p mean and exponential bounds as well as large deviations...
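The measure-valued view of a GA can be sketched as a deterministic map on probability vectors: selection reweights the distribution by fitness, mutation applies a Markov kernel. The three-state search space, fitness values and mutation kernel below are toy assumptions:

```python
import numpy as np

def ga_measure_step(p, fitness, mutation):
    """One step of the infinite-population GA as a measure-valued dynamical
    system: Bayes-type reweighting by fitness (selection), then propagation
    through a Markov transition matrix (mutation)."""
    selected = p * fitness / np.dot(p, fitness)  # selection step
    return selected @ mutation                   # mutation step

fitness = np.array([1.0, 2.0, 4.0])
mutation = np.array([[0.8, 0.2, 0.0],
                     [0.1, 0.8, 0.1],
                     [0.0, 0.2, 0.8]])
p = np.full(3, 1.0 / 3.0)                        # uniform initial population
for _ in range(100):
    p = ga_measure_step(p, fitness, mutation)
print(np.round(p, 3))
```

The finite-N GA is the particle approximation of this map, and the convergence results cited above quantify the N → ∞ limit.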
Ergodic properties of the Nonlinear Filter
 Stochastic Processes and their Applications, 95:1–24
, 2000
"... In a recent work [5] various Markov and ergodicity properties of the nonlinear filter, for the classical model of nonlinear filtering, were studied. It was shown that under quite general conditions, when the signal is a FellerMarkov process with values in a complete separable metric space E then th ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
In a recent work [5] various Markov and ergodicity properties of the nonlinear filter, for the classical model of nonlinear filtering, were studied. It was shown that under quite general conditions, when the signal is a Feller-Markov process with values in a complete separable metric space E, the pair process (signal, filter) is also a Feller-Markov process with state space E × P(E), where P(E) is the space of probability measures on E. Furthermore, it was shown that if the signal has a unique invariant measure then, under appropriate conditions, uniqueness of the invariant measure for the above pair process holds within a certain restricted class of invariant measures. In many asymptotic problems concerning approximate filters [6, 7] it is desirable to have the uniqueness of the invariant measure hold in the class of all invariant measures. In this paper we first show that for a rich class of filtering problems, when the signal has a unique invariant measure, the property of...
Stability and Approximation of Nonlinear Filters: an Information Theoretic Approach
, 2000
"... It has recently been proved by Clark, Ocone and Coumarbatch that the relative entropy (or Kullback Leibler information distance) between two nonlinear filters with different initial conditions is a supermartingale, hence its expectation can only decrease with time. This result was obtained for a v ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
It has recently been proved by Clark, Ocone and Coumarbatch that the relative entropy (or Kullback-Leibler information distance) between two nonlinear filters with different initial conditions is a supermartingale, hence its expectation can only decrease with time. This result was obtained for a very general model, where the unknown state and observation processes jointly form a continuous-time Markov process. The purpose of this paper is (i) to extend this result to a large class of f-divergences, including the total variation distance and the Hellinger distance, and not only the Kullback-Leibler information distance, and (ii) to consider robustness not only w.r.t. the initial condition of the filter, but also w.r.t. perturbation of the state generator. On the other hand, the model considered here is much less general, and consists of a diffusion process observed in discrete time. Keywords: nonlinear filtering, stability, relative entropy, Kullback-Leibler information, Hellinger...
On the Effect of Selection in Genetic Algorithms
"... In order to study the effect of selection with respect to mutation and mating in genetic algorithms, we consider two simplified examples, in the infinite population limit. Both algorithms are modeled as measure valued dynamical systems, and designed to maximize a linear fitness on the half line. Thu ..."
Abstract
 Add to MetaCart
In order to study the effect of selection with respect to mutation and mating in genetic algorithms, we consider two simplified examples, in the infinite population limit. Both algorithms are modeled as measure-valued dynamical systems, and designed to maximize a linear fitness on the half line. Thus, they both trivially converge to infinity. We compute the rate of their growth and we show that, in both cases, selection is able to overcome a tendency to converge to zero. The first model is a mutation-selection algorithm on the integer half line, which generates mutations along a simple random walk. We prove that the system goes to infinity at a positive speed, even in cases where the random walk itself is ergodic. This holds in several strong senses, since we show a.s. convergence, L^p convergence, convergence in distribution and a large deviations principle for the sequence of measures. For the second model, we introduce a new class of matings, based upon Mandelbrot martingales. The mean fitness of the associated mating-selection algorithms on the real half line grows exponentially fast, even in cases where the Mandelbrot martingale itself converges to zero.
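The first model can be simulated directly in its infinite-population (measure-valued) form; the downward bias, reflection at 0, and grid truncation below are illustrative numerical choices, not the paper's exact specification:

```python
import numpy as np

def mutation_selection_step(p, p_down=0.6):
    """One step of the measure-valued mutation-selection dynamics on the
    integer half line {0, ..., N-1}.  Mutation: a +/-1 random walk with
    downward bias p_down (ergodic on its own), reflected at 0.  Selection:
    reweight by the linear fitness f(x) = x + 1.  Truncation at N-1 is a
    numerical device."""
    q = np.zeros_like(p)
    q[:-1] += p_down * p[1:]            # step down
    q[0] += p_down * p[0]               # reflection at 0
    q[1:] += (1.0 - p_down) * p[:-1]    # step up
    q[-1] += (1.0 - p_down) * p[-1]     # hold at the truncation boundary
    q *= np.arange(len(p)) + 1.0        # selection reweighting by fitness
    return q / q.sum()

N = 500
p = np.zeros(N)
p[0] = 1.0                              # population starts concentrated at 0
means = []
for _ in range(200):
    p = mutation_selection_step(p)
    means.append(float(np.arange(N) @ p))
print(means[-1] > means[0])
```

Even though the mutation walk alone drifts back toward 0, the repeated fitness reweighting pushes the population mean upward, consistent with the positive-speed result stated above.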