Results 1–10 of 52,899
Cramér-Rao Maximum A Posteriori Bounds on Neural Network Training Error for Non-Gaussian Signals and Parameters
 International Journal of Intelligent Control and Systems
, 1996
"... Previously, it has been shown that neural networks approximate minimum mean square estimators. In minimum mean square estimation, an estimate θ̂ of the M-dimensional random parameter vector θ is obtained from a noisy N-dimensional input vector y, where y has an additive noise component e. For the Cramér-Rao maximum a posteriori bounds on the variance of elements of θ̂ to be tight, two necessary conditions are that the mapping from y to θ is bijective and that e and θ are both Gaussian. In this paper, we relax the second condition and develop bounds on the variances of elements ..."
Cited by 1 (0 self)
Blind Signal Separation: Statistical Principles
, 2003
"... Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis, aiming at recovering unobserved signals or `sources' from observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual ..."
Cited by 522 (4 self)
Approximate Signal Processing
, 1997
"... It is increasingly important to structure signal processing algorithms and systems to allow for trading off between the accuracy of results and the utilization of resources in their implementation. In any particular context, there are typically a variety of heuristic approaches to managing these trade-offs ..."
Cited by 516 (2 self)
On Sequential Monte Carlo Sampling Methods for Bayesian Filtering
 STATISTICS AND COMPUTING
, 2000
"... In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete-time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed ..."
Cited by 1032 (76 self)
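The sequential-simulation idea this survey covers can be sketched as a minimal bootstrap particle filter. The scalar model, noise levels, and particle count below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy model (assumed for illustration):
#   x_t = 0.9 * x_{t-1} + process noise,   y_t = x_t + observation noise
rng = np.random.default_rng(0)
T, n_particles = 50, 1000
q, r = 0.5, 0.5  # process / observation noise standard deviations

# Simulate a ground-truth trajectory and its noisy observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, q)
y = x + rng.normal(0, r, T)

particles = rng.normal(0, 1, n_particles)
estimates = np.zeros(T)
for t in range(T):
    # Propagate particles through the dynamics (sampling from the prior).
    if t > 0:
        particles = 0.9 * particles + rng.normal(0, q, n_particles)
    # Importance weights: likelihood of the current observation.
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)  # weighted posterior-mean estimate
    # Multinomial resampling to combat weight degeneracy.
    particles = rng.choice(particles, n_particles, p=w)

rmse = np.sqrt(np.mean((estimates - x) ** 2))
print(f"filter RMSE: {rmse:.3f}")
```

Resampling at every step is the simplest choice; the survey discusses more refined schemes (e.g. resampling only when the effective sample size drops).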
Standard Cramér-Rao bound; Cramér-Rao bound with nuisance parameter; Bayesian Cramér-Rao bound; Other bounds
"... We assume y(n) = a(n) e^{2iπf₀n} + b(n), n = 0, ..., N−1, where y(n) is the received signal, a(n) is a zero-mean random process or a time-varying amplitude, and b(n) is circular white Gaussian stationary additive noise. Goal: estimating the frequency f₀ in multiplicative and additive noise. ..."
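The signal model above can be simulated directly. The sketch below also illustrates one classical (suboptimal) estimator for the zero-mean multiplicative-noise case: since the carrier is invisible in the mean of y(n), squaring the signal exposes a spectral line at 2f₀. The sample count, noise levels, and true frequency are assumed values, not from the abstract:

```python
import numpy as np

# y(n) = a(n) * exp(2i*pi*f0*n) + b(n), n = 0..N-1  (assumed parameter values)
rng = np.random.default_rng(0)
N, f0 = 256, 0.1
n = np.arange(N)
a = rng.normal(0.0, 1.0, N)  # zero-mean real multiplicative amplitude process
b = (rng.normal(0.0, 0.5, N) + 1j * rng.normal(0.0, 0.5, N)) / np.sqrt(2)  # circular noise
y = a * np.exp(2j * np.pi * f0 * n) + b

# E[a(n)^2] > 0, so y^2 contains a coherent line at frequency 2*f0.
spec = np.abs(np.fft.fft(y**2, 8 * N))          # zero-padded for a finer grid
freqs = np.fft.fftfreq(8 * N)                   # normalized frequencies in [-0.5, 0.5)
f0_hat = abs(freqs[np.argmax(spec)]) / 2.0
print(f"estimated f0 ~ {f0_hat:.3f}")
```

The squaring estimator is only one possibility; the bounds listed in the title quantify how far such estimators are from the best achievable variance.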
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc.; how many linear measurements ..."
Cited by 1513 (20 self)
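The question posed above can be made concrete with a small experiment: recover a sparse vector from far fewer random Gaussian measurements than its ambient dimension. The recovery routine below is orthogonal matching pursuit, used here as a simple stand-in, not the paper's algorithm; dimensions and sparsity are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, s = 200, 60, 5  # ambient dimension, measurements, sparsity (assumed)

# An s-sparse target vector f.
f = np.zeros(N)
support = rng.choice(N, s, replace=False)
f[support] = rng.normal(0, 1, s)

Phi = rng.normal(0, 1.0 / np.sqrt(m), (m, N))  # random measurement matrix
y = Phi @ f                                    # m linear measurements of f

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
idx, r = [], y.copy()
for _ in range(s):
    idx.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    r = y - Phi[:, idx] @ coef

f_hat = np.zeros(N)
f_hat[idx] = coef
rel_err = np.linalg.norm(f_hat - f) / np.linalg.norm(f)
print(f"relative recovery error: {rel_err:.2e}")
```

With m on the order of s·log N Gaussian measurements, recovery of a noiseless sparse vector succeeds with high probability, which is the flavor of result the paper makes precise.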
Optimal Estimation And Cramér-Rao Bounds For Partial Non-Gaussian State Space Models
, 2001
"... Partial non-Gaussian state-space models include many models of interest while keeping a convenient analytical structure. In this paper, two problems related to partial non-Gaussian models are addressed. First, we present an efficient sequential Monte Carlo method to perform Bayesian inference. ..."
Cited by 14 (4 self)
Posterior Cramér-Rao bounds for discrete-time nonlinear filtering
 IEEE Trans. Signal Processing
, 1998
"... A mean-square error lower bound for the discrete-time nonlinear filtering problem is derived based on the Van Trees (posterior) version of the Cramér–Rao inequality. This lower bound is applicable to multidimensional nonlinear, possibly non-Gaussian, dynamical systems and is more general than ..."
Cited by 178 (4 self)
Gaussian processes for machine learning
 in: Adaptive Computation and Machine Learning
, 2006
"... We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters ..."
Cited by 631 (2 self)
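The "simple equations for incorporating training data" mentioned above are the standard GP posterior mean and variance. A minimal sketch with an RBF kernel and fixed (assumed, untuned) hyperparameters:

```python
import numpy as np

# Squared-exponential (RBF) kernel; length scale and signal variance
# are fixed illustrative values rather than learned hyperparameters.
def rbf(a, b, ell=1.0, sigma_f=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)                 # training inputs
y = np.sin(X) + rng.normal(0, 0.1, 20)    # noisy observations of sin(x)
Xs = np.linspace(0, 5, 100)               # test inputs
sigma_n = 0.1                             # assumed observation-noise level

# Posterior via Cholesky factorization of the noisy kernel matrix.
K = rbf(X, X) + sigma_n**2 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Ks = rbf(X, Xs)
mu = Ks.T @ alpha                                    # posterior mean
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)    # posterior variance
print("max |mu - sin(x)|:", np.abs(mu - np.sin(Xs)).max())
```

Learning the hyperparameters (length scale, signal and noise variances), which the tutorial goes on to cover, would replace the fixed values above with ones maximizing the marginal likelihood.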
Survey on Independent Component Analysis
 NEURAL COMPUTING SURVEYS
, 1999
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Cited by 2241 (104 self)
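The linear-transformation idea in this survey can be illustrated with a toy ICA run: mix two non-Gaussian sources and recover one of them with a one-unit FastICA fixed-point iteration (tanh nonlinearity). The sources and mixing matrix below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), rng.laplace(size=2000)])  # non-Gaussian sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])                           # unknown mixing matrix
X = A @ S                                                        # observed mixtures

# Center and whiten the observations (standard ICA preprocessing).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d**-0.5) @ E.T @ X

# One-unit FastICA fixed-point iteration with g = tanh.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    g = np.tanh(w @ Z)
    w_new = Z @ g / Z.shape[1] - (1 - g**2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10
    w = w_new
    if converged:
        break

s_hat = w @ Z
corr = max(abs(np.corrcoef(s_hat, S[0])[0, 1]),
           abs(np.corrcoef(s_hat, S[1])[0, 1]))
print(f"best |correlation| with a true source: {corr:.3f}")
```

Which source the iteration converges to (and its sign) is indeterminate, a well-known ICA ambiguity, so the check compares against both sources.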