Results 1–10 of 137
Comparison of Approximate Methods for Handling Hyperparameters
 Neural Computation
"... I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants and noise levels. In the 'evidence framework' the model parameters are integrated over, and the resu ..."
Abstract

Cited by 73 (1 self)
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants and noise levels. In the 'evidence framework' the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized ...
Watermarking Digital Images for Copyright Protection
1996
"... : A watermark is an invisible mark placed on an image that can be detected when the image is compared with the original. This mark is designed to identify both the source of an image as well as its intended recipient. The aims of this paper are to present an overview of watermarking techniques and t ..."
Abstract

Cited by 61 (3 self)
A watermark is an invisible mark placed on an image that can be detected when the image is compared with the original. This mark is designed to identify both the source of an image and its intended recipient. The aims of this paper are to present an overview of watermarking techniques and to demonstrate a solution to one of the key problems in image watermarking, namely how to hide robust invisible labels inside grey scale or colour digital images.
1 Introduction
Computers, printers and high rate transmission facilities are becoming less expensive and more generally available. It is now feasible and very economical to transmit images and video sequences using computer networks rather than to send hard copies by post. In addition, images may be stored in databases in digital form. A major impediment to the use of electronic distribution and storage is the ease of intercepting, copying and redistributing electronic images and documents in their exact original form. As a result, ...
From Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics
1990
"... . The Bayesian approach to probability theory is presented as an alternative to the currently used longrun relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions ..."
Abstract

Cited by 57 (2 self)
The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The reasons for the earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by application to a simple problem based on the Gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of wea...
MIMO Channel Modelling and the Principle of Maximum Entropy
2004
"... In this paper , we devise theoretical grounds for constructing channel models for Multiinput Multioutput (MIMO) systems based on information theoretic tools. The paper provides a general method to derive a channel model which is consistent with one's state of knowledge. The framework we giv ..."
Abstract

Cited by 48 (25 self)
In this paper, we devise theoretical grounds for constructing channel models for Multi-input Multi-output (MIMO) systems based on information-theoretic tools. The paper provides a general method to derive a channel model which is consistent with one's state of knowledge. The framework we give here has already been fruitfully explored in the context of Bayesian spectrum analysis and parameter estimation. For each channel model, we conduct an asymptotic analysis (in the number of antennas) of the achievable transmission rate using tools from random matrix theory. A central limit theorem is provided on the asymptotic behavior of the mutual information and validated in the finite case by simulations. The results are useful both for designing a system based on criteria such as quality of service and for optimizing transmissions in multiuser networks.
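The concentration of the mutual information described in this abstract is easy to check by Monte Carlo. The sketch below uses a hypothetical i.i.d. complex Gaussian (Rayleigh) channel, not one of the maximum-entropy models derived in the paper, and shows that the per-antenna rate log2 det(I + (SNR/nt) H H^H)/n fluctuates less as the array grows, as the central limit theorem suggests.

```python
import numpy as np

def mutual_info(H, snr):
    # log2 det(I + (snr/nt) H H^H): rate in bits/s/Hz with equal-power streams
    nr, nt = H.shape
    M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return np.linalg.slogdet(M)[1] / np.log(2)

rng = np.random.default_rng(1)

def per_antenna_rates(n, trials=200, snr=10.0):
    # assumed model: i.i.d. unit-variance complex Gaussian channel entries
    Hs = (rng.standard_normal((trials, n, n))
          + 1j * rng.standard_normal((trials, n, n))) / np.sqrt(2)
    return np.array([mutual_info(H, snr) for H in Hs]) / n

small, large = per_antenna_rates(2), per_antenna_rates(8)
# the spread of the per-antenna rate shrinks as the antenna count grows
```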
Joint Bayesian Model Selection and Estimation of Noisy Sinusoids via Reversible Jump MCMC
1999
"... In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this dist ..."
Abstract

Cited by 44 (3 self)
In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this distribution. Unfortunately, a direct evaluation of this distribution and of its features, including posterior model probabilities, requires evaluation of some complicated high-dimensional integrals. We develop an efficient stochastic algorithm based on reversible jump Markov chain Monte Carlo methods to perform the Bayesian computation. A convergence result for this algorithm is established. In simulations, detection based on posterior model probabilities outperforms conventional detection schemes.
Pitch-scaled estimation of simultaneous voiced and turbulence-noise components in speech
IEEE Trans. Speech Audio Processing, 2001
"... Abstract—Almost all speech contains simultaneous contributions from more than one acoustic source within the speaker’s vocal tract. In this paper, we propose a method—the pitchscaled harmonic filter (PSHF)—which aims to separate the voiced and turbulencenoise components of the speech signal during ..."
Abstract

Cited by 21 (3 self)
Abstract—Almost all speech contains simultaneous contributions from more than one acoustic source within the speaker's vocal tract. In this paper, we propose a method—the pitch-scaled harmonic filter (PSHF)—which aims to separate the voiced and turbulence-noise components of the speech signal during phonation, based on a maximum likelihood approach. The PSHF outputs periodic and aperiodic components that are estimates of the respective contributions of the different types of acoustic source. It produces four reconstructed time-series signals by decomposing the original speech signal, first according to amplitude, and then according to power of the Fourier coefficients. Thus, one pair of periodic and aperiodic signals is optimized for subsequent time-series analysis, and another pair for spectral analysis. The performance of the PSHF algorithm was tested on synthetic signals, using three forms of disturbance (jitter, shimmer and additive noise), and the results were used to predict the performance on real speech. Processing recorded speech examples elicited latent features from the signals, demonstrating the PSHF's potential for analysis of mixed-source speech. Index Terms—Periodic–aperiodic decomposition, speech modification, speech preprocessing.
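The core decomposition idea can be illustrated with a stripped-down harmonic filter. This is my simplification, not the published PSHF (which adds windowing, pitch optimization, and the separate amplitude- and power-based reconstructions described in the abstract): a frame spanning exactly b pitch periods is split by keeping only the DFT bins at multiples of b, which fall on the pitch harmonics.

```python
import numpy as np

def pitch_scaled_split(frame, b):
    """Split a frame whose length is exactly b pitch periods into periodic
    and aperiodic estimates. Simplified sketch of the harmonic-filter idea:
    bins 0, b, 2b, ... of the DFT sit on the pitch harmonics."""
    spec = np.fft.rfft(frame)
    harmonic = np.zeros_like(spec)
    harmonic[::b] = spec[::b]                    # keep only harmonic bins
    periodic = np.fft.irfft(harmonic, n=len(frame))
    return periodic, frame - periodic

# pitch period of 20 samples, frame of b = 4 periods
t = np.arange(80)
voiced = np.sin(2 * np.pi * t / 20)              # energy at bin 4 = b * 1
noise_like = 0.1 * np.cos(2 * np.pi * 3 * t / 80)  # bin 3: not a harmonic
periodic, aperiodic = pitch_scaled_split(voiced + noise_like, 4)
# periodic ~ voiced, aperiodic ~ noise_like
```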
Bayesian Analysis. I. Parameter Estimation Using Quadrature NMR Models
J. Magn. Reson., 1990
"... . In the analysis of magnetic resonance data, a great deal of prior information is available which is ordinarily not used. For example, considering high resolution NMR spectroscopy, one knows in general terms what functional form the signal will take (e.g., sum of exponentially decaying sinusoids) a ..."
Abstract

Cited by 19 (4 self)
In the analysis of magnetic resonance data, a great deal of prior information is available which is ordinarily not used. For example, considering high-resolution NMR spectroscopy, one knows in general terms what functional form the signal will take (e.g., a sum of exponentially decaying sinusoids) and that, for quadrature measurements, it will be the same in both channels except for a 90° phase shift. When prior information is incorporated into the analysis of time-domain data, the frequencies, decay rate constants, and amplitudes may be estimated much more precisely than by direct use of discrete Fourier transforms. Here, Bayesian probability theory is used to estimate parameters using quadrature models of NMR data. The calculation results in an interpretation of the quadrature model fitting that allows one to understand on an intuitive level what frequencies and decay rates will be estimated and why.
Introduction
Probability theory when interpreted as logic is a quantitative th...
Hyperparameters: optimize, or integrate out?
In Maximum Entropy and Bayesian Methods, Santa Barbara, 1996
"... I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants. In the `evidence framework' the model parameters are integrated over, and the resulting evidence is maximi ..."
Abstract

Cited by 18 (4 self)
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants. In the 'evidence framework' the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparameters are used to define a Gaussian approximation to the posterior distribution. In the alternative 'MAP' method, the true posterior probability is found by integrating over the hyperparameters. The true posterior is then maximized over the model parameters, and a Gaussian approximation is made. The similarities of the two approaches, and their relative merits, are discussed, and comparisons are made with the ideal hierarchical Bayesian solution. In moderately ill-posed problems, integration over hyperparameters yields a probability distribution with a skew peak, which causes significant biases to arise in the MAP method. In contrast, the evidence framework is shown to introduce negligible predictive error under straightforward conditions. General lessons are drawn concerning the distinctive properties of inference in many dimensions.
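The evidence framework lends itself to a short numerical sketch. In the illustrative setup below (my choice, not the paper's examples), a Bayesian linear regression has prior w ~ N(0, α⁻¹I) and known noise precision β; the parameters integrate out analytically, and the resulting log evidence is maximized over the hyperparameter α on a grid, as the first method in the abstract prescribes.

```python
import numpy as np

def log_evidence(X, y, alpha, beta):
    """Log marginal likelihood p(y | alpha, beta) for Bayesian linear
    regression with prior w ~ N(0, alpha^{-1} I), noise precision beta."""
    N, M = X.shape
    A = alpha * np.eye(M) + beta * X.T @ X      # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)      # posterior mean
    E = beta / 2 * np.sum((y - X @ m) ** 2) + alpha / 2 * (m @ m)
    _, logdetA = np.linalg.slogdet(A)
    return (0.5 * (M * np.log(alpha) + N * np.log(beta))
            - E - 0.5 * logdetA - N / 2 * np.log(2 * np.pi))

# synthetic data (assumed example): noise std 0.1, so beta = 100
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(50)

alphas = np.logspace(-3, 3, 61)
ev = np.array([log_evidence(X, y, a, beta=100.0) for a in alphas])
alpha_hat = alphas[int(np.argmax(ev))]          # evidence-optimal alpha
```

The optimized `alpha_hat` would then define the Gaussian posterior approximation N(m, A⁻¹) that the abstract refers to.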
Bayesian Inference in Cyclical Component Dynamic Linear Models
Journal of the American Statistical Association, 1995
"... Dynamic linear models with timevarying cyclical components are developed for the analysis of times series with persistent though timevarying cyclical behaviour. The development covers inference on wavelengths of possibly several persistent cycles in nonstationary times series, permitting explicit ..."
Abstract

Cited by 18 (8 self)
Dynamic linear models with time-varying cyclical components are developed for the analysis of time series with persistent though time-varying cyclical behaviour. The development covers inference on the wavelengths of possibly several persistent cycles in non-stationary time series, permitting explicit time variation in amplitudes and phases of component waveforms, and decomposition of stochastic inputs into purely observational noise and innovations that impact the waveform characteristics, with extensions to incorporate ranges of (time-varying) time series and regression terms within the standard DLM context. Bayesian inference via iterative stochastic simulation methods is developed and illustrated. Some indications of model extensions and generalisations are given. In addition to the specific focus on cyclical component models, the development provides the basis for Bayesian inference, via stochastic simulation, for state evolution matrix parameters and variance components in dynamic l...
A New Method for the Detection of a Periodic Signal of Unknown Shape and Period
Astrophysical Journal, 1992
"... We present a new method for the detection and measurement of a periodic signal in a data set when we haveno prior knowledge of the existence of such a signal or of its characteristics. It is applicable to data consisting of the locations or times of discrete events. We use Bayes ' theorem to ad ..."
Abstract

Cited by 18 (4 self)
We present a new method for the detection and measurement of a periodic signal in a data set when we have no prior knowledge of the existence of such a signal or of its characteristics. It is applicable to data consisting of the locations or times of discrete events. We use Bayes' theorem to address both the signal detection problem and the estimation problem of measuring the characteristics of a detected signal. To address the detection problem, we use Bayes' theorem to compare a constant-rate model for the signal to models with periodic structure. The periodic models describe the signal-plus-background rate as a stepwise distribution in m bins per period, for various values of m. The Bayesian posterior probability for a periodic model contains a term which quantifies Ockham's razor, penalizing successively more complicated periodic models for their greater complexity even though they are assigned equal prior probabilities. The calculation thus balances model simplicity with goodness-of-fit, allowing us to determine both whether there is evidence for a periodic signal and the optimum number of bins for describing the structure in the data. Unlike the results of traditional "frequentist" calculations, the outcome of the Bayesian calculation does not depend on the number of periods examined, but only on the range examined.
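For a fixed trial period and phase, the marginalization over the stepwise bin fractions described above has a closed form. Under a flat Dirichlet prior on the m bin fractions, the odds favouring the m-bin model over a constant rate come out as m^N (m−1)! Πⱼ nⱼ! / (N+m−1)!, where the nⱼ are the folded bin counts and N their total. This is my restatement of that marginal likelihood ratio; the paper additionally integrates over period, phase, and m.

```python
import numpy as np
from math import lgamma, log

def log_odds_periodic(times, period, phase, m):
    """Log odds favouring an m-bin stepwise periodic rate over a constant
    rate, with bin fractions integrated out under a flat Dirichlet prior."""
    # fold event times at the trial period and count events per phase bin
    phases = (times / period + phase) % 1.0
    counts = np.bincount((phases * m).astype(int), minlength=m)
    N = int(counts.sum())
    # log of  m^N (m-1)! prod_j n_j! / (N+m-1)!  -- concentrated counts win
    return (N * log(m) + lgamma(m)
            + sum(lgamma(n + 1) for n in counts) - lgamma(N + m))

# events piled up at one phase of a unit period versus spread uniformly
bunched = np.arange(100) + 0.05
uniform = (np.arange(100) + 0.5) / 100.0
```

Note the Ockham behaviour: when the counts are uniform across bins, the Πⱼ nⱼ! term is at its minimum and the odds penalize the extra structure; when the events concentrate in few bins, the factorials reward the periodic model.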