Results 1–10 of 146
An integrated Bayesian approach to layer extraction from image sequences
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
Cited by 109 (18 self)
Abstract: This paper describes a Bayesian approach for modeling 3D scenes as a collection of approximately planar layers that are arbitrarily positioned and oriented in the scene. In contrast to much of the previous work on layer-based motion modeling, which computes layered descriptions of 2D image motion, our work leads to a 3D description of the scene. There are two contributions within the paper. The first is to formulate the prior assumptions about the layers and scene within a Bayesian decision-making framework, which is used to automatically determine the number of layers and the assignment of individual pixels to layers. The second is algorithmic: in order to achieve the optimization, a Bayesian version of RANSAC is developed with which to initialize the segmentation; then, a generalized expectation-maximization method is used to find the MAP solution. Index Terms: Layer extraction, segmentation, stereo matching, motion estimation.
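The EM machinery this abstract describes (soft assignment of pixels to layers, alternated with refitting each layer) can be illustrated in miniature. The sketch below is not the paper's algorithm: it assigns 1-D "disparity" measurements to layers modeled as constant levels rather than 3-D planes, uses a fixed noise scale, and all names and defaults are hypothetical.

```python
import numpy as np

def em_layer_assignment(d, n_layers=2, n_iter=50, sigma=1.0):
    """Toy EM loop: soft-assign measurements d to n_layers layers,
    each modeled here as a constant level (the paper fits 3-D planes)."""
    d = np.asarray(d, float)
    # deterministic spread of initial layer levels across the data range
    mu = np.quantile(d, (np.arange(n_layers) + 0.5) / n_layers)
    pi = np.full(n_layers, 1.0 / n_layers)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each layer for each pixel
        ll = -0.5 * ((d[:, None] - mu[None, :]) / sigma) ** 2
        r = pi * np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: refit each layer to the pixels it softly owns
        mu = (r * d[:, None]).sum(axis=0) / r.sum(axis=0)
        pi = r.mean(axis=0)
    return mu, r
```

In the paper, the number of layers is itself selected by the Bayesian decision framework and initialization comes from the Bayesian RANSAC step, not from quantiles as here.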
Modelling and interpretation of architecture from several images
Cited by 83 (6 self)
Abstract: The modelling of 3-dimensional (3D) environments has become a requirement for many applications in engineering design, virtual reality, visualisation and entertainment. However, the scale and complexity demanded from such models has risen to the point where the acquisition of 3D models can require a vast amount of specialist time and equipment. Because of this, much research has been undertaken in the computer vision community into automating all or part of the process of acquiring a 3D model from a sequence of images. This thesis focuses specifically on the automatic acquisition of architectural models from short image sequences. An architectural model is defined as a set of planes corresponding to walls which contain a variety of labelled primitives such as doors and windows. As well as a label defining its type, each primitive contains parameters defining its shape and texture. The key advantage of this representation is that the model defines not only geometry and texture but also an interpretation of the scene. This is crucial as it enables reasoning about the scene; for instance, structure and texture can be inferred in areas of the model which are unseen in any …
A Bayesian approach to source separation
in Proceedings of the Independent Component Analysis Workshop, 1999
Cited by 48 (5 self)
Abstract: The problem of source separation is by its very nature an inductive inference problem. There is not enough information to deduce the solution, so one must use any available information to infer the most probable solution. We demonstrate that source separation problems are well-suited to the Bayesian approach, which provides a natural and logically consistent method by which one can incorporate prior knowledge to estimate the most probable solution given that knowledge. We derive the Bell-Sejnowski ICA algorithm from first principles, i.e. Bayes' Theorem, and demonstrate how the Bayesian methodology makes explicit the underlying assumptions. We then further demonstrate the power of the Bayesian approach by deriving two separation algorithms that …
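The Bell-Sejnowski algorithm the abstract refers to is, in its common natural-gradient (Amari) form, a short update rule. The sketch below is that standard form under an assumed super-Gaussian source prior, not the paper's own Bayesian derivation; the function name and defaults are hypothetical.

```python
import numpy as np

def ica_unmix(X, lr=0.05, n_iter=1000):
    """Maximum-likelihood ICA via the natural-gradient form of the
    Bell-Sejnowski update, assuming super-Gaussian sources (tanh score).
    X: (n_sources, n_samples) zero-mean mixtures; returns unmixing W."""
    n, m = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        g = np.tanh(Y)  # score function for a logistic source prior
        # natural-gradient ascent on the log-likelihood:
        #   dW = (I - E[g(y) y^T]) W
        W += lr * (np.eye(n) - (g @ Y.T) / m) @ W
    return W
```

The natural-gradient form multiplies the ordinary gradient by W^T W, which removes the matrix inversion of the original Bell-Sejnowski rule and speeds convergence.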
The study of correlation structures of DNA sequences: a critical review
Computers & Chemistry, 1997
Cited by 43 (8 self)
Abstract: To be published in the special issue of Computers & Chemistry.
Studies in astronomical time series analysis. V. Bayesian blocks, a new method to analyze structure in photon counting data
Astrophysical Journal, 1998
Cited by 28 (9 self)
Subject headings: numerical methods: data analysis; models; X-ray astronomy; γ-ray astronomy
Abstract: I describe a new time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events (TTE), binned counts, or time-to-spill (TTS) data. The output is the most likely segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant, i.e. has a fixed intensity without statistically significant variations. Since the analysis is based on Bayesian statistics, I call the resulting structures Bayesian Blocks. Unlike most, this method does not stipulate time bins; instead, the data themselves …
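The segmentation the abstract describes can be realized as a dynamic program over candidate change points. The sketch below is a simplified event-mode version only, using the Poisson block fitness N ln(N/T) with a fixed per-block prior penalty; the function name and penalty value are assumptions, and the paper's method also covers binned and time-to-spill data.

```python
import numpy as np

def bayesian_blocks_events(t, ncp_prior=4.0):
    """Segment sorted event times into blocks of constant rate.
    Dynamic programming over optimal partitions: best[r] is the fitness
    of the best segmentation of events 0..r. Returns block edges."""
    t = np.asarray(t, float)
    n = len(t)
    # candidate cell edges: data ends plus midpoints between events
    edges = np.concatenate([[t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]])
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        widths = edges[r + 1] - edges[: r + 1]  # block starting at j, ending at r
        counts = np.arange(r + 1, 0, -1.0)      # events in each such block
        # Poisson log-likelihood fitness of each candidate block, minus prior
        fit = counts * (np.log(counts) - np.log(widths)) - ncp_prior
        total = fit + np.concatenate([[0.0], best[:r]])
        last[r] = int(np.argmax(total))
        best[r] = total[last[r]]
    # backtrack the optimal change points
    cps = [n]
    while cps[-1] > 0:
        cps.append(last[cps[-1] - 1])
    return edges[np.array(cps[::-1])]
```

Because the edges come from the data themselves (midpoints between events), no time binning is ever imposed, which is the point the abstract emphasizes.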
Integrating experiential and distributional data to learn semantic representations
Psychological Review, 2009
Cited by 27 (2 self)
Abstract: The authors identify 2 major types of statistical data from which semantic representations can be learned. These are denoted as experiential data and distributional data. Experiential data are derived by way of experience with the physical world and comprise the sensory-motor data obtained through sense receptors. Distributional data, by contrast, describe the statistical distribution of words across spoken and written language. The authors claim that experiential and distributional data represent distinct data types and that each is a nontrivial source of semantic information. Their theoretical proposal is that human semantic representations are derived from an optimal statistical combination of these 2 data types. Using a Bayesian probabilistic model, they demonstrate how word meanings can be learned by treating experiential and distributional data as a single joint distribution and learning the statistical structure that underlies it. The semantic representations that are learned in this manner are measurably more realistic (as verified by comparison to a set of human-based measures of semantic representation) than those available from either data type individually or from both sources independently. This is not a result of merely using quantitatively more data, but rather because experiential and distributional data are qualitatively distinct, yet intercorrelated, types of data. The semantic representations that are learned are based on statistical structures that exist both within and between the experiential and distributional data types.
A progressive scheme for stereo matching
LNCS 2018: 3D Structure from Images (SMILE 2000), 2001
Cited by 22 (0 self)
Abstract: Brute-force dense matching is usually not satisfactory because the same search range is used for the entire image, yielding potentially many false matches. In this paper, we propose a progressive scheme for stereo matching which uses two fundamental concepts: the disparity gradient limit principle and the least commitment strategy. The first states that the disparity should vary smoothly almost everywhere and that the disparity gradient should not exceed a certain limit. The second states that we should first select only the most reliable matches and therefore postpone unreliable decisions until enough confidence is accumulated. Our technique starts with a few reliable point matches obtained automatically via feature correspondence or through user input. New matches are progressively added during an iterative matching process. At each stage, the current reliable matches constrain the search range for their neighbors according to the disparity gradient limit, thereby reducing potential matching ambiguities of those neighbors. Only unambiguous matches are selected and added to the set of reliable matches in accordance with the least commitment strategy. In addition, a correlation match measure that allows rotation of the match template is used to provide a more robust estimate. The entire process is cast within a Bayesian inference framework. Experimental results illustrate the robustness of our proposed dense stereo matching approach.
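The way reliable matches shrink their neighbors' search ranges under a disparity gradient limit can be sketched directly. This 1-D illustration intersects the interval each reliable match implies for a candidate pixel; the names and defaults are hypothetical, and the paper works on 2-D images inside its Bayesian framework.

```python
def disparity_search_range(p, matches, grad_limit=0.8, full_range=(0.0, 64.0)):
    """Intersect the disparity intervals implied by reliable matches.

    p:          position of the pixel being matched (1-D for illustration)
    matches:    (position, disparity) pairs already accepted as reliable
    grad_limit: maximum allowed |d(disparity)/d(position)|

    Returns (lo, hi), or None if the constraints are inconsistent.
    """
    lo, hi = full_range
    for q, d in matches:
        dist = abs(p - q)
        # the gradient limit bounds how far a neighbor's disparity
        # can drift from this reliable match's disparity
        lo = max(lo, d - grad_limit * dist)
        hi = min(hi, d + grad_limit * dist)
    return (lo, hi) if lo <= hi else None
```

Each newly accepted match tightens the intervals of its neighbors, which is exactly how ambiguity shrinks as the progressive scheme iterates.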
Uncertainty Assessment for Reconstructions Based on Deformable Geometry
1997
Cited by 17 (8 self)
Abstract: Deformable geometric models can be used in the context of Bayesian analysis to solve ill-posed tomographic reconstruction problems. The uncertainties associated with a Bayesian analysis may be assessed by generating a set of random samples from the posterior, which may be accomplished using a Markov chain Monte Carlo (MCMC) technique. We demonstrate the combination of these techniques for the reconstruction of a two-dimensional object from two orthogonal noisy projections. The reconstructed object is modeled in terms of a deformable, geometrically defined boundary with a uniform interior density, yielding a nonlinear reconstruction problem. We show how an MCMC sequence can be used to estimate uncertainties in the location of the edge of the reconstructed object.
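Turning posterior samples into uncertainty estimates, as the abstract describes, needs nothing more than an MCMC chain and summary statistics on its output. A minimal random-walk Metropolis sketch follows; the names are hypothetical, and where the paper samples deformable boundary parameters, the "posterior" here is just a 1-D Gaussian stand-in.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, post(x')/post(x))."""
    x = x0
    lp = log_post(x)
    out = np.empty(n_samples)
    for i in range(n_samples):
        xp = x + step * rng.standard_normal()
        lpp = log_post(xp)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        out[i] = x  # rejected proposals repeat the current state
    return out
```

After discarding burn-in, the sample mean and standard deviation estimate the posterior mean and uncertainty of the sampled quantity (an edge location, in the paper's setting).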
A Gibbs sampler for identification of symmetrically structured, spaced DNA motifs with improved estimation of the signal length
2005
Bayes in the sky: Bayesian inference and model selection in cosmology
Contemporary Physics
Cited by 15 (4 self)
Abstract: The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology. This review is an introduction to Bayesian methods in cosmology and astrophysics and to recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Markov chain Monte Carlo methods. I then review the theory and application of Bayesian model comparison, discussing the notions of Bayesian evidence and effective model complexity, and how to compute and interpret those quantities. Recent developments in cosmological parameter extraction and Bayesian cosmological model building are summarized, highlighting the challenges that lie ahead.
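Bayesian evidence, the quantity driving model comparison in this review, can be computed in closed form for conjugate toy models. The sketch below (hypothetical names, not from the review) compares a fair-coin point model against a model with a flat Beta(1,1) prior on the rate; the ratio of the two marginal likelihoods is the Bayes factor.

```python
from math import comb, exp, lgamma, log

def log_evidence_beta(k, n, a=1.0, b=1.0):
    """log p(k | M1): binomial likelihood with the rate integrated out
    against a Beta(a, b) prior (the beta-binomial marginal)."""
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return log(comb(n, k)) + log_beta(a + k, b + n - k) - log_beta(a, b)

def log_evidence_fair(k, n):
    """log p(k | M0): the rate is fixed at 1/2 (no free parameter)."""
    return log(comb(n, k)) + n * log(0.5)

def bayes_factor(k, n):
    """Evidence ratio p(k|M1)/p(k|M0): values > 1 favour the free-rate model."""
    return exp(log_evidence_beta(k, n) - log_evidence_fair(k, n))
```

For 9 heads in 10 tosses the flat-prior marginal is 1/(n+1) = 1/11, the fair-coin likelihood is 10/1024, and the Bayes factor is about 9.3, mildly favouring a biased coin; for 5 heads it drops below 1, illustrating the automatic Occam penalty the evidence imposes on the extra parameter.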