Results 1–10 of 50
From Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics
, 1990
Abstract

Cited by 51 (2 self)
The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The reasons for the earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by application to a simple problem based on the Gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of wea...
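The Gaussian parameter-estimation problem this abstract mentions can be sketched in a few lines. The example below is a generic conjugate-prior illustration, not the paper's own worked example: with known noise standard deviation and a Gaussian prior on the mean, the posterior is again Gaussian, with precision-weighted mean and variance.

```python
import numpy as np

# Illustrative sketch (not from the paper): Bayesian estimation of a Gaussian
# mean mu with known noise sigma and a conjugate Gaussian prior mu ~ N(mu0, tau0^2).
# The posterior precision is the sum of the prior and data precisions.

rng = np.random.default_rng(0)
true_mu, sigma = 2.0, 1.0
data = rng.normal(true_mu, sigma, size=50)

mu0, tau0 = 0.0, 10.0                      # a deliberately weak prior

n = len(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

print(f"posterior mean {post_mean:.3f}, posterior sd {np.sqrt(post_var):.3f}")
```

With such a weak prior the posterior mean is essentially the sample mean, and the posterior standard deviation is close to the familiar sigma/sqrt(n).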
Ensemble learning for independent component analysis
 in Advances in Independent Component Analysis
, 2000
Abstract

Cited by 49 (2 self)
This thesis is concerned with the problem of Blind Source Separation. Specifically, we consider the Independent Component Analysis (ICA) model in which a set of observations is modelled by x_t = A s_t, (1) where A is an unknown mixing matrix and s_t is a vector of hidden source components at time t. The ICA problem is to find the sources given only a set of observations. In Chapter 1, the blind source separation problem is introduced. In Chapter 2 the method of Ensemble Learning is explained. Chapter 3 applies Ensemble Learning to the ICA model and Chapter 4 assesses the use of Ensemble Learning for model selection. Chapters 5–7 apply the Ensemble Learning ICA algorithm to data sets from physics (a medical imaging data set consisting of images of a tooth), biology (data sets from cDNA microarrays) and astrophysics (Planck image separation and galaxy spectra separation).
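The mixing model of equation (1) can be made concrete with a small simulation. Note that this sketch is not the thesis's method: the thesis uses Ensemble Learning (a variational Bayesian approach), whereas the recovery step below uses a basic FastICA-style iteration purely because it demonstrates the model and its inversion compactly.

```python
import numpy as np

# Simulate x_t = A s_t: two independent non-Gaussian sources mixed by an
# unknown matrix A, then recovered (up to order and sign) by whitening
# followed by a symmetric FastICA iteration with a tanh nonlinearity.

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(3 * t), np.sign(np.sin(7 * t))])  # hidden sources s_t
A = np.array([[1.0, 0.6], [0.4, 1.0]])                  # unknown mixing matrix
X = A @ S                                               # observations x_t = A s_t

# Whiten: zero mean, identity covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                     # symmetric decorrelation: W <- (W W^T)^(-1/2) W

Y = W @ Z                          # recovered sources, up to permutation and sign
```

Each row of Y should match one row of S up to sign, which is the well-known identifiability limit of ICA.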
Ensemble Learning for Blind Image Separation and Deconvolution
, 2000
Abstract

Cited by 41 (0 self)
Previous work on Blind Source Deconvolution has focused mainly on the problem of deconvolving sound samples. It is assumed that the observed sound samples are temporally convolved versions of the true source samples. Blind Deconvolution algorithms have fallen into two types: those where the inverse of the convolution filter is learnt [1], [3] and those where the aim is to learn the filter itself [1]. When applying these ideas to the problem of deconvolving images, two problems become apparent. Firstly, in many real data sets (for instance the images generated by telescopes observing the sky or the power spectrum from a Nuclear Magnetic Resonance (NMR) spectrometer) the pixel values correspond to intensities, so the pixel values must be positive. The standard blind separation approaches of assuming that the sources are distributed as 1/cosh [3] or mixtures of Gaussians [2] lose this positivity of the source images. Deconvolution without a positivity constraint ...
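The positivity issue the abstract raises can be illustrated with a classical positivity-preserving scheme. This is not the paper's Ensemble Learning method; it is the Richardson–Lucy multiplicative update, shown here because its iterates stay nonnegative whenever the data and the blur kernel are nonnegative, which is exactly the property at stake.

```python
import numpy as np

# Richardson-Lucy deconvolution sketch in 1-D: a multiplicative update that
# preserves nonnegativity of the estimate at every iteration.

def richardson_lucy(y, psf, n_iter=50):
    """Deconvolve nonnegative data y blurred by a nonnegative psf."""
    x = np.full_like(y, y.mean())          # positive initial estimate
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / (blurred + 1e-12)      # guard against division by zero
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

truth = np.zeros(64)
truth[[20, 40]] = [5.0, 3.0]               # two point sources
psf = np.array([0.25, 0.5, 0.25])          # normalized blur kernel
y = np.convolve(truth, psf, mode="same")   # noiseless blurred observation

x_hat = richardson_lucy(y, psf)
print(x_hat.min() >= 0, x_hat[20], x_hat[40])
```

Because the update multiplies a positive estimate by a nonnegative correction factor, no iterate can ever go negative, unlike unconstrained least-squares deconvolution.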
Maximum Entropy Modeling with Clausal Constraints
 In Proceedings of the 7th International Workshop on Inductive Logic Programming
, 1997
Abstract

Cited by 37 (1 self)
We present the learning system Maccent which addresses the novel task of stochastic MAximum ENTropy modeling with Clausal Constraints. The maximum entropy method is a Bayesian method based on the principle that the target stochastic model should be as uniform as possible, subject to known constraints. Maccent incorporates clausal constraints that are based on the evaluation of Prolog clauses in examples represented as Prolog programs. We build on an existing maximum-likelihood approach to maximum entropy modeling, which we upgrade along two dimensions: (1) Maccent can handle larger search spaces, due to a partial ordering defined on the space of clausal constraints, and (2) it uses a richer first-order logic format. In comparison with other inductive logic programming systems, Maccent seems to be the first that explicitly constructs a conditional probability distribution p(C|I) based on an empirical distribution p̃(C|I), where p(C|I) (resp. p̃(C|I)) gives the induced (observed) probability of ...
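The principle the abstract states, that the model should be as uniform as possible subject to known constraints, has a standard closed form. The sketch below is a generic illustration, not the Maccent system: among all distributions on {0, ..., 5} with a prescribed mean, the maximum entropy one is exponential in the constrained statistic, p(x) ∝ exp(λx), and λ is found by matching the constraint.

```python
import numpy as np

# Maximum entropy under a mean constraint (generic illustration): the
# entropy-maximizing distribution on {0,...,5} with E[X] = 1.5 has the form
# p(x) proportional to exp(lam * x). Since the mean is increasing in lam,
# the multiplier can be found by bisection.

xs = np.arange(6, dtype=float)
target_mean = 1.5

def mean_at(lam):
    w = np.exp(lam * xs)
    p = w / w.sum()
    return float(p @ xs)

lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_at(mid) < target_mean:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(lam * xs)
p /= p.sum()
print(f"lam = {lam:.4f}, p = {np.round(p, 4)}")
```

Because the target mean 1.5 is below the uniform mean 2.5, the fitted λ comes out negative, tilting probability mass toward small values while remaining as uniform as the constraint allows.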
A Moment-Based Variational Approach to Tomographic Reconstruction
 IEEE Transactions on Image Processing
, 1996
Abstract

Cited by 21 (6 self)
In this paper, we describe a variational framework for the tomographic reconstruction of an image from the maximum likelihood (ML) estimates of its orthogonal moments. We show how these estimated moments and their (correlated) error statistics can be computed directly, and in a linear fashion, from given noisy and possibly sparse projection data. Moreover, thanks to the consistency properties of the Radon transform, this two-step approach (moment estimation followed by image reconstruction) can be viewed as a statistically optimal procedure. Furthermore, by focusing on the important role played by the moments of projection data, we immediately see the close connection between tomographic reconstruction of nonnegative-valued images and the problem of nonparametric estimation of probability densities given estimates of their moments. Taking advantage of this connection, our proposed variational algorithm is based on the minimization of a cost functional composed of a term measuring the divergence between a given prior estimate of the image and the current estimate of the image, and a second quadratic term based on the error incurred in the estimation of the moments of the underlying image from the noisy projection data. We show that an iterative refinement of this algorithm leads to a practical algorithm for the solution of the highly complex equality-constrained divergence minimization problem. We show that this iterative refinement results in superior reconstructions of images from very noisy data as compared with the classical filtered backprojection (FBP) algorithm.
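The consistency property the abstract leans on can be checked numerically in its simplest case. The toy below (schematic, using only the 0-degree projection) verifies that the moments of a projection are fully determined by the moments of the image itself, which is what makes moments linearly estimable from projection data.

```python
import numpy as np

# Toy check: for the projection along y (angle 0), the k-th moment of the
# projection equals the corresponding x-marginal moment of the image, because
# summation over rows and the moment weighting commute.

rng = np.random.default_rng(3)
f = rng.random((32, 32))            # nonnegative test "image"

proj = f.sum(axis=0)                # projection along y
x = np.arange(f.shape[1], dtype=float)

for k in range(4):
    moment_from_projection = (x**k * proj).sum()
    moment_from_image = (x[None, :]**k * f).sum()
    assert np.isclose(moment_from_projection, moment_from_image)
print("projection moments match image moments up to order 3")
```

At other angles the same idea holds in a slightly richer form: the k-th projection moment is a linear combination of the image's moments of total order k, which is the structure the paper's linear ML moment estimator exploits.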
Radio astronomical imaging in the presence of strong radio interference
 IEEE Trans. on Information Theory
, 2000
Abstract

Cited by 18 (7 self)
Radio-astronomical observations are increasingly contaminated by interference, and suppression techniques become essential. A powerful candidate for interference mitigation is adaptive spatial filtering. We study the effect of spatial filtering techniques on radio-astronomical imaging. Current deconvolution procedures, such as CLEAN, are shown to be unsuitable for spatially filtered data, and the necessary corrections are derived. To that end, we reformulate the imaging (deconvolution/calibration) process as a sequential estimation of the locations of astronomical sources. This not only leads to an extended CLEAN algorithm, but the formulation also allows the insertion of other array signal processing techniques for direction finding, and gives estimates of the expected image quality and the amount of interference suppression that can be achieved. Finally, a maximum-likelihood (ML) procedure for the imaging is derived, and an approximate ML image formation technique is proposed to overcome the computational burden involved. Some of the effects of the new algorithms are shown in simulated images.

Index Terms—CLEAN, interference mitigation, maximum likelihood, minimum variance, parametric imaging, radio astronomy, spatial filtering, synthesis imaging.
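The classical CLEAN procedure the paper extends is easy to sketch. The 1-D toy below is schematic only (the paper's point is precisely that this loop needs corrections once spatial filtering has altered the effective point-spread function): CLEAN repeatedly finds the brightest pixel of the residual "dirty" image and subtracts a scaled, shifted copy of the beam.

```python
import numpy as np

# Toy 1-D CLEAN: iteratively locate the residual peak, record a fraction
# (the loop gain) of it as a point-source component, and subtract the
# correspondingly scaled PSF from the residual.

def clean(dirty, psf, gain=0.2, n_iter=200, threshold=1e-3):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    center = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        amp = gain * residual[peak]
        model[peak] += amp
        for i in range(len(residual)):           # subtract shifted, scaled PSF
            j = i - peak + center
            if 0 <= j < len(psf):
                residual[i] -= amp * psf[j]
    return model, residual

truth = np.zeros(64)
truth[[10, 45]] = [1.0, 0.5]                     # two point sources
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)  # Gaussian beam, peak 1
dirty = np.convolve(truth, psf, mode="same")

model, residual = clean(dirty, psf)
print(model[10], model[45], np.abs(residual).max())
```

On this noiseless example the loop recovers both source amplitudes essentially exactly; the paper's analysis concerns what goes wrong when the PSF implied by spatially filtered data no longer matches the beam being subtracted.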
A New Look at the Entropy for Solving Linear Inverse Problems
 IEEE Transactions on Information Theory
, 1994
Abstract

Cited by 14 (4 self)
Entropy-based methods are widely used for solving inverse problems, especially when the solution is known to be positive. We address here the linear ill-posed and noisy inverse problem y = Ax + n with a more general convex constraint x ∈ C, where C is a convex set. Although projective methods are well adapted to this context, we study here alternative methods which rely heavily on some "information-based" criteria. Our goal is to highlight the role played by entropy in this framework, and to present a new and deeper point of view on the entropy, using general tools and results of convex analysis and large deviations theory. We then present a new and broad scheme of entropy-based inversion of linear noisy inverse problems. This scheme was introduced by Navaza in 1985 [48] in connection with a physical modeling for crystallographic applications, and further studied by Dacunha-Castelle and Gamboa [13]. Important features of this paper are (i) a unified presentation of many well-kno...
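The problem setting y = Ax + n with x ∈ C can be illustrated with the simplest convex-constrained solver. The sketch below uses projected gradient descent onto C = {x : x ≥ 0}, a projective method in the abstract's terminology, rather than the entropic scheme the paper itself develops.

```python
import numpy as np

# Projected gradient descent for min_x ||A x - y||^2 subject to x >= 0:
# take a gradient step on the least-squares objective, then project back
# onto the convex constraint set by clipping at zero.

rng = np.random.default_rng(4)
m, d = 40, 20
A = rng.normal(size=(m, d))
x_true = np.abs(rng.normal(size=d))         # nonnegative ground truth
y = A @ x_true + 0.01 * rng.normal(size=m)  # noisy observations y = A x + n

step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L the Lipschitz constant
x = np.zeros(d)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    x = np.clip(x - step * grad, 0.0, None)  # gradient step, then projection

print(np.linalg.norm(x - x_true))
```

Every iterate is feasible by construction, and with the 1/L step size the iteration converges to the constrained least-squares solution, which here sits close to the ground truth because the noise is small.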
Penalized Likelihood
 In Encyclopedia of Statistical Sciences, Update Volume 2
, 1996
Abstract

Cited by 13 (0 self)
... this article. The scope for the application of penalized likelihood is greatest in nonparametric and semiparametric regression, interpreting the term very broadly, and such applications will be emphasised here. A brief discussion of application to density estimation will also be given. The emphasis in this article is on methodology, not theory; for careful and illuminating accounts of the asymptotic theory of penalized likelihood estimators, we refer the reader to Cox and O'Sullivan [3], and Gu and Qiu [10].
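The nonparametric regression use of penalized likelihood admits a compact sketch. With Gaussian errors, maximizing the penalized log-likelihood reduces to minimizing ||y − f||² + λ||Df||², where D is a roughness penalty operator; the example below (an illustration, with a second-difference penalty chosen for simplicity) solves this in closed form.

```python
import numpy as np

# Penalized least squares as a penalized-likelihood sketch: the minimizer of
# ||y - f||^2 + lam * ||D f||^2 is f_hat = (I + lam * D^T D)^{-1} y,
# where D maps f to its second differences (a discrete roughness measure).

rng = np.random.default_rng(5)
n = 100
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)

# Second-difference penalty matrix, shape (n-2, n): rows are e_i - 2e_{i+1} + e_{i+2}.
D = np.diff(np.eye(n), n=2, axis=0)

lam = 50.0
f_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rough_fit = (D @ f_hat) @ (D @ f_hat)
rough_data = (D @ y) @ (D @ y)
print(f"roughness: fit {rough_fit:.4f} vs data {rough_data:.4f}")
```

The penalty trades fidelity to the data against roughness of the fit, with λ controlling the balance; λ → 0 returns the data, while λ → ∞ drives the fit toward the null space of D (here, straight lines).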
The Maximum Entropy Approach and Probabilistic IR Models
 ACM Transactions on Information Systems
, 1998
Abstract

Cited by 12 (0 self)
The Principle of Maximum Entropy is discussed, and two classic probabilistic models of information retrieval, the Binary Independence Model of Robertson and Sparck Jones and the Combination Match Model of Croft and Harper, are derived using the maximum entropy approach. The assumptions on which the classical models are based are not made. In their place, the probability distribution of maximum entropy consistent with a set of constraints is determined. It is argued that this subjectivist approach is more philosophically coherent than the frequentist conceptualization of probability that is often assumed as the basis of probabilistic modeling, and that this philosophical stance has important practical consequences with respect to the realization of information retrieval research.