Results 1–10 of 12
An Introduction to MCMC for Machine Learning
, 2003
Cited by 222 (2 self)
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses interesting new research horizons.
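The Metropolis-Hastings sampler is the central building block such a survey covers. As a minimal illustration (a sketch, not the paper's own code), a random-walk variant targeting a standard normal could look like this in Python; the target density and step size are illustrative assumptions:

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose a local Gaussian move and accept
    with probability min(1, target(proposal)/target(current))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()          # symmetric random-walk proposal
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:       # Metropolis accept/reject
            x = proposal
        samples.append(x)
    return np.array(samples)

# Illustrative target: standard normal, log-density up to a constant.
samples = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
print(samples.mean(), samples.std())   # ≈ 0 and ≈ 1 for a standard normal
```

Because the proposal is symmetric, the Hastings correction cancels and only the ratio of target densities enters the acceptance probability.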
Sequential Monte Carlo Samplers
, 2002
Cited by 141 (24 self)
In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and defined on a common space. A sequence of increasingly large artificial joint distributions is built; each of these distributions admits a marginal which is a distribution of interest. To sample from these distributions, we use sequential Monte Carlo methods. We show that these methods can be interpreted as interacting particle approximations of a nonlinear Feynman-Kac flow in distribution space. One interpretation of the Feynman-Kac flow corresponds to a nonlinear Markov kernel admitting a specified invariant distribution and is a natural nonlinear extension of the standard Metropolis-Hastings algorithm. Many theoretical results have already been established for such flows and their particle approximations. We demonstrate the use of these algorithms through simulation.
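As a rough sketch of the idea (not the authors' algorithm), the following toy Python implementation moves a particle population through a sequence of tempered distributions pi_b ∝ exp(-b·U(x)): reweight with the incremental importance weight, resample, then apply one Metropolis move targeting the current distribution. The potential U, the temperature ladder, and the move size are all illustrative assumptions:

```python
import numpy as np

def smc_sampler(U, n_particles=2000, betas=np.linspace(0.1, 1.0, 19), seed=0):
    rng = np.random.default_rng(seed)
    # Particles drawn from the first tempered density pi_{beta_0} = N(0, 1/beta_0)
    # (exact here because U is quadratic).
    x = rng.normal(scale=np.sqrt(1.0 / betas[0]), size=n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw = -(b - b_prev) * U(x)                 # incremental importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        x = x[idx]
        # One random-walk Metropolis move per particle, targeting pi_b ∝ exp(-b*U)
        prop = x + 0.5 * rng.normal(size=n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < b * (U(x) - U(prop))
        x = np.where(accept, prop, x)
    return x

particles = smc_sampler(lambda x: 0.5 * x**2)   # final target: standard normal
```

The reweight-resample-move cycle is what distinguishes an SMC sampler from plain importance sampling: the MCMC moves restore particle diversity after each resampling step.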
Bayesian Harmonic Models for Musical Signal Analysis
 in Bayesian Statistics 7
, 2002
Cited by 45 (8 self)
This paper is concerned with the Bayesian analysis of musical signals. The ultimate aim is to use Bayesian hierarchical structures in order to infer quantities at the highest level, including such quantities as musical pitch, dynamics, timbre, instrument identity, etc. Analysis of real musical signals is complicated by many things, including the presence of transient sounds, noises and the complex structure of musical pitches in the frequency domain. The problem is truly Bayesian in that there is a wealth of (often subjective) prior knowledge about how musical signals are constructed, which can be exploited in order to achieve more accurate inference about the musical structure. Here we propose developments to an earlier Bayesian model which describes each component `note' at a given time in terms of a fundamental frequency, partials (`harmonics'), and amplitude. This basic model is modified for greater realism to include non-white residuals, time-varying amplitudes and partials `detuned' from the natural linear relationship. The unknown parameters of the new model are simulated using a variable dimension MCMC algorithm, leading to a highly sophisticated analysis tool. We discuss how the models and algorithms can be applied for feature extraction, polyphonic music transcription, source separation and restoration of musical sources.
Sound Source Separation in Monaural Music Signals
, 2006
Cited by 22 (3 self)
Sound source separation refers to the task of estimating the signals produced by individual sound sources from a complex acoustic mixture. It has several applications, since monophonic signals can be processed more efficiently and flexibly than polyphonic mixtures. This thesis deals with the separation of monaural, or one-channel, music recordings. We concentrate on separation methods where the sources to be separated are not known beforehand. Instead, the separation is enabled by utilizing the common properties of real-world sound sources, which are their continuity, sparseness, and repetition in time and frequency, and their harmonic spectral structures. One of the separation approaches taken here uses unsupervised learning and the other uses model-based inference based on sinusoidal modeling. Most of the existing unsupervised separation algorithms are based on a linear instantaneous signal model, where each frame of the input mixture signal is ...
Bayesian Analysis of Polyphonic Western Tonal Music
 JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
, 2006
Cited by 17 (3 self)
This paper deals with the computational analysis of musical audio from recorded audio waveforms. This general problem includes, as subtasks, music transcription, extraction of musical pitch, dynamics, timbre, instrument identity, and source separation. Analysis of real musical signals is a highly ill-posed task which is made complicated by the presence of transient sounds, background interference or the complex structure of musical pitches in the time-frequency domain. This paper focuses on models and algorithms for computer transcription of multiple musical pitches in audio, elaborated from previous work by two of the authors. The audio data are assumed to be pre-segmented into fixed pitch regimes such as individual chords. The models presented apply to pitched (tonal) music and are formulated via a Gabor representation of nonstationary signals. A Bayesian probabilistic structure is employed for representation of prior information about the parameters of the notes. This paper introduces a numerical Bayesian inference strategy for estimation of the pitches and other parameters of the waveform. The improved algorithm is much quicker, and makes the approach feasible in realistic situations. Results are ...
Multidimensional Optimisation of Harmonic Signals
 In Proc. European Conference on Signal Processing
, 1998
Cited by 12 (9 self)
Harmonic models are a common class of sinusoidal models which are of great interest in speech and musical analysis. In this paper we present a method for estimating the parameters of an unknown number of musical notes, each with an unknown number of harmonics. We pose the estimation task in a Bayesian framework which allows for the specification of (possibly subjective) a priori knowledge of the model parameters. We use indicator variables to represent implicitly the model order and employ a Metropolis-Hastings algorithm to produce approximate maximum a posteriori parameter estimates. A novel choice of transition kernels is presented to explore the parameter space, exploiting the structure of the posterior distribution. 1 INTRODUCTION Sinusoidal models are popular in analysis of musical and speech signals due to considerations of the physical basis and periodic nature of voiced speech and of many musical instruments [5, 9, 11]. The signal is modelled as a series of frames, with the p...
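A useful property of the harmonic model this abstract describes is that, for a fixed fundamental frequency f0, the partial amplitudes enter linearly and can be solved by least squares. The toy Python sketch below exploits this to recover f0 by grid search; the sample rate, note parameters, noise level, and grid are illustrative assumptions, not the paper's Bayesian procedure:

```python
import numpy as np

# Synthesize one "note": partials at integer multiples of f0_true plus noise.
fs, dur = 8000, 0.1
t = np.arange(int(fs * dur)) / fs
f0_true, n_partials = 220.0, 4
signal = sum((0.8 ** k) * np.sin(2 * np.pi * k * f0_true * t)
             for k in range(1, n_partials + 1))
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

def fit_error(f0):
    """Residual energy after least-squares fit of sine/cosine amplitudes
    at the harmonics of a candidate fundamental f0."""
    cols = [f(2 * np.pi * k * f0 * t)
            for k in range(1, n_partials + 1)
            for f in (np.sin, np.cos)]
    G = np.stack(cols, axis=1)
    amps, *_ = np.linalg.lstsq(G, signal, rcond=None)
    return np.sum((signal - G @ amps) ** 2)

grid = np.arange(100.0, 400.0, 1.0)
f0_hat = grid[np.argmin([fit_error(f) for f in grid])]
print(f0_hat)   # close to 220.0
```

The paper's method goes further, treating the number of notes and harmonics as unknown and sampling them with MCMC, but the linear-in-amplitudes structure above is what makes those conditional updates tractable.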
Computational advances for and from Bayesian analysis
 Statist. Sci
, 2004
Cited by 11 (0 self)
The emergence in recent years of Bayesian analysis in many methodological and applied fields as the solution to the modeling of complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how the advances in Bayesian analysis and statistical computation are intermingled. Key words and phrases: Monte Carlo methods, importance sampling, Markov chain Monte Carlo (MCMC) algorithms.
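Importance sampling, one of the computational tools the review names, estimates an expectation under a target p by drawing from a tractable proposal q and reweighting by p/q. A minimal self-normalised example (the densities and the quantity estimated are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(loc=0.0, scale=2.0, size=n)      # draws from proposal q = N(0, 4)

def log_p(z):
    """Target p = N(0, 1) log-density."""
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def log_q(z):
    """Proposal q = N(0, 4) log-density."""
    return -0.5 * (z / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

w = np.exp(log_p(x) - log_q(x))
w /= w.sum()                                    # self-normalised weights
estimate = np.sum(w * x**2)                     # estimates E_p[X^2] = 1
print(estimate)
```

Self-normalising the weights means p need only be known up to a constant, which is exactly the situation in most Bayesian posterior computations.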
Bayesian Methods for Neural Networks
, 1999
Cited by 9 (0 self)
The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and measured by probabilities. This formulation allows for a probabilistic treatment of our a priori knowledge, domain specific knowledge, model selection schemes, parameter estimation methods and noise estimation techniques. Many researchers have contributed towards the development of the Bayesian learning approach for neural networks. This thesis advances this research by proposing several novel extensions in the areas of sequential learning, model selection, optimisation and convergence assessment. The first contribution is a regularisation strategy for sequential learning based on extended Kalman filtering and noise estimation via evidence maximisation. Using the expectation maximisation (EM) algorithm, a similar algorithm is derived for batch learning. Much of the thesis is, however, devoted to Monte Carlo simulation methods. A robust Bayesian method is proposed to estimate, ...
Variational MCMC
, 2001
Cited by 9 (1 self)
We propose a new class of learning algorithms that combines variational approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms that use the variational approximation as proposal distribution can perform poorly because this approximation tends to underestimate the true variance and other features of the data. We solve this problem by introducing more sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH) kernel with a variational approximation as proposal distribution. The MH kernel allows one to locate regions of high probability efficiently. The Metropolis kernel allows us to explore the vicinity of these regions. This algorithm outperforms variational approximations because it yields slightly better estimates of the mean and considerably better estimates of higher moments, such as covariances. It also outperforms standard MCMC algorithms because it locates the regions of high probability quickly, thus speeding up convergence. We also present an adaptive MCMC algorithm that iterates between improving the variational approximation and improving the MCMC approximation. We demonstrate the algorithms on the problem of Bayesian parameter estimation for logistic (sigmoid) belief networks.
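A sketch of the kernel mixture described in the abstract: with some probability take an independence MH step whose proposal is a fixed Gaussian standing in for the variational approximation, otherwise take a local random-walk step. The target, the approximation (deliberately too narrow, mimicking the variance underestimation the paper warns about), and the mixing probability are illustrative assumptions:

```python
import numpy as np

def mixture_mcmc(log_target, q_mean, q_std, n_samples, rho=0.5, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = q_mean
    out = []
    log_q = lambda z: -0.5 * ((z - q_mean) / q_std) ** 2   # fixed Gaussian proposal
    for _ in range(n_samples):
        if rng.uniform() < rho:
            # Independence MH: proposal does not depend on x, so the
            # acceptance ratio includes the proposal-density correction.
            prop = q_mean + q_std * rng.normal()
            log_alpha = (log_target(prop) - log_target(x)
                         + log_q(x) - log_q(prop))
        else:
            # Random-walk Metropolis: symmetric proposal, target ratio only.
            prop = x + step * rng.normal()
            log_alpha = log_target(prop) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        out.append(x)
    return np.array(out)

# Target N(2, 1); "variational" approximation N(2, 0.5) is too narrow.
chain = mixture_mcmc(lambda z: -0.5 * (z - 2.0) ** 2,
                     q_mean=2.0, q_std=0.5, n_samples=20000)
```

Even with the narrow approximation, the chain recovers the full target variance: the independence kernel jumps quickly to high-probability regions while the random-walk kernel fills in the tails.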
Classification of Chirp Signals using Hierarchical Bayesian Learning and MCMC Methods
, 2001
Cited by 6 (5 self)
This paper addresses the problem of classifying chirp signals using hierarchical Bayesian learning together with Markov chain Monte Carlo (MCMC) methods. Bayesian learning consists of estimating the distribution of the observed data conditional upon each class from a set of training samples. Unfortunately, this estimation requires evaluating intractable multidimensional integrals. This paper studies an original implementation of hierarchical Bayesian learning which estimates the class conditional probability densities using MCMC methods. The performance of this implementation is first studied via an academic example for which the class conditional densities are known. The problem of classifying chirp signals is then addressed by using a similar hierarchical Bayesian learning implementation based on a Metropolis-within-Gibbs algorithm.
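The recipe the abstract describes, estimate each class-conditional density from training samples and classify by the highest posterior, can be sketched as follows. This toy Python version uses plug-in Gaussian class-conditionals (a deliberate simplification of the paper's hierarchical approach, which integrates the density parameters out via MCMC); the training data, priors, and test points are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two illustrative classes with well-separated one-dimensional observations.
train = {0: rng.normal(-2.0, 1.0, 200), 1: rng.normal(2.0, 1.0, 200)}

def log_class_conditional(x, samples):
    """Gaussian log-density with mean/std plugged in from the training set."""
    mu, sigma = samples.mean(), samples.std()
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

def classify(x, prior=0.5):
    """Bayes rule with equal priors: pick the class maximising
    log prior + log class-conditional likelihood."""
    scores = {c: np.log(prior) + log_class_conditional(x, s)
              for c, s in train.items()}
    return max(scores, key=scores.get)

print(classify(1.5))   # → 1
print(classify(-0.7))  # → 0
```

The hierarchical MCMC treatment replaces the plug-in step with samples from the posterior over the density parameters, which matters most when training sets are small and the plug-in estimates are unreliable.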