Results 1-10 of 13
A Computational Approach to Bayesian Inference
, 1996
"... xxx Although the Bayesian approach provides a complete solution to modelbased analysis, it is often di# cult to obtain closedform solutions for complex models. However, numerical solutions to Bayesian modeling problems are now becoming attractive because of the advent of powerful, lowcost comput ..."
Abstract

Cited by 17 (14 self)
Although the Bayesian approach provides a complete solution to model-based analysis, it is often difficult to obtain closed-form solutions for complex models. However, numerical solutions to Bayesian modeling problems are now becoming attractive because of the advent of powerful, low-cost computers and new algorithms. We describe a general-purpose implementation of the Bayesian methodology on workstations that can deal with complex nonlinear models in a very flexible way. The models are represented by a dataflow diagram that may be manipulated by the analyst through a graphical-programming environment that is based on a fully object-oriented design. Maximum a posteriori solutions are achieved using a general optimization algorithm. A new technique for estimating and visualizing the uncertainties in specific aspects of the model is incorporated.
3D Tomograph Reconstruction Using Geometrical Models
, 1997
"... We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than ..."
Abstract

Cited by 13 (6 self)
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
The Bayes inference engine
 in Maximum Entropy and Bayesian Methods
, 1995
"... Abstract. We are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully objectoriented design. The models are repr ..."
Abstract

Cited by 12 (10 self)
We are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a dataflow diagram that may be manipulated by the analyst through a graphical-programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.
Estimators for the Cauchy Distribution
 in Maximum Entropy and Bayesian Methods, edited by G. Heidbreder (Kluwer Academic
, 1996
"... We discuss the properties of various estimators of the central position of the Cauchy distribution. The performance of these estimators is evaluated for a set of simulated experiments. Estimators based on the maximum and mean the posterior density function are empirically found to be well behaved wh ..."
Abstract

Cited by 8 (4 self)
We discuss the properties of various estimators of the central position of the Cauchy distribution. The performance of these estimators is evaluated for a set of simulated experiments. Estimators based on the maximum and mean of the posterior density function are empirically found to be well behaved when more than two measurements are available. In contrast, because of the infinite variance of the Cauchy distribution, the average of the measured positions is an extremely poor estimator of the location of the source. However, the median of the measured positions is well behaved. The rms errors for the various estimators are compared to the Fisher-Cramér-Rao lower bound. We find that the square root of the variance of the posterior density function is predictive of the rms error in the mean posterior estimator.
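The mean-versus-median contrast described in this abstract is easy to reproduce numerically. The following sketch (illustrative only, not the paper's code; the sample size, trial count, and seed are arbitrary choices) draws repeated Cauchy samples and compares the typical error of the sample mean against that of the sample median:

```python
import math
import random
import statistics

# Illustrative sketch: compare the sample mean and sample median as
# estimators of the Cauchy location parameter x0 = 0. The mean inherits
# the distribution's infinite variance and is erratic; the median is
# well behaved, as the abstract notes.

random.seed(0)

def cauchy_sample(n, x0=0.0, gamma=1.0):
    # Draw n Cauchy variates via the inverse-CDF method.
    return [x0 + gamma * math.tan(math.pi * (random.random() - 0.5))
            for _ in range(n)]

trials = 2000
n = 25
mean_errs, median_errs = [], []
for _ in range(trials):
    xs = cauchy_sample(n)
    mean_errs.append(abs(statistics.fmean(xs)))
    median_errs.append(abs(statistics.median(xs)))

# Report the typical (median) absolute error of each estimator; the
# median beats the mean by a wide margin for Cauchy data.
print("typical |mean| error:  ", statistics.median(mean_errs))
print("typical |median| error:", statistics.median(median_errs))
```

Because the average of independent Cauchy variates is itself Cauchy with the same scale, the mean's error does not shrink with the sample size, whereas the median's does.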
Intelligent Machines in the 21st Century: Foundations Of Inference and Inquiry
 Soc. Lond. A
, 2003
"... The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have e ..."
Abstract

Cited by 2 (2 self)
The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in understanding the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we identified the algebra of questions as the free distributive algebra, which now allows us to work with questions just as Boolean algebra allows us to work with logical statements. In this paper we begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. We will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, experiment to perform, or measurement to take given what they have learned and what they are designed to understand.
The Effects of Magnetic Resonance Image Inhomogeneities on Automated Tissue Classification
 AAAI Spring Symposium on Applications of Computer Vision to Medical Image Processing
, 1994
"... this paper, the training data consists of hand labeled pixels from three coronal slice images. Figures 1 and 2 are one of the 57 slices used for testing. This data was collected during a single scanning run on a 1.5 Tesla GE MRI system at the University of Iowa. MR parameters were chosen to provide ..."
Abstract

Cited by 2 (2 self)
In this paper, the training data consists of hand-labeled pixels from three coronal slice images. Figures 1 and 2 show one of the 57 slices used for testing. This data was collected during a single scanning run on a 1.5 Tesla GE MRI system at the University of Iowa. MR parameters were chosen to provide the best visual separation of the classes (echo time = 32 and 96 msec with repetition time = 3,000 msec). The slices are 3 mm thick and contiguous. The gray matter in Figures 1 and 2 has also been hand segmented. These labeled testing pixels are used to quantify the performance of the classifiers discussed. In order to suppress the high-frequency noise in the images while maintaining sharp edges, variable conductance diffusion is applied [9]. This algorithm uses gradient information from both the 0
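The edge-preserving smoothing mentioned here can be sketched in one dimension. This is a minimal illustration in the spirit of Perona-Malik-style variable conductance diffusion; the conductance function, the parameter `k`, and the step sizes are assumptions for illustration, not taken from reference [9]:

```python
import math
import random

def vcd_step(x, k=0.5, dt=0.2):
    # One explicit diffusion step: conductance falls off with gradient
    # magnitude, so smoothing is strong in flat (noisy) regions and
    # suppressed across large jumps (edges). Boundary samples are fixed.
    out = x[:]
    for i in range(1, len(x) - 1):
        grad_r = x[i + 1] - x[i]
        grad_l = x[i - 1] - x[i]
        c_r = math.exp(-(grad_r / k) ** 2)
        c_l = math.exp(-(grad_l / k) ** 2)
        out[i] = x[i] + dt * (c_r * grad_r + c_l * grad_l)
    return out

# Noisy 1-D step edge: 0 on the left, 1 on the right.
random.seed(1)
signal = [0.0 + random.gauss(0, 0.05) for _ in range(20)] + \
         [1.0 + random.gauss(0, 0.05) for _ in range(20)]

smoothed = signal[:]
for _ in range(50):
    smoothed = vcd_step(smoothed)
# The plateaus flatten out while the step at index 20 stays sharp.
```

The exponential conductance means a gradient of the edge's size (about 1.0 here, with `k = 0.5`) passes almost no flux, while noise-scale gradients diffuse nearly freely.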
Adjoint Differentiation of Hydrodynamic Codes
 in CNLS Research Highlights, Center for Nonlinear Studies, Los Alamos National Laboratory
, 1998
"... Many problems in physics and modern computing are inverse problems  problems where the desired output is known, and the task is to find the set of input parameters that will best reproduce that output in a hydrodynamics code (hydrocode). Optimization methods tackle this type of problem, and a cent ..."
Abstract
Many problems in physics and modern computing are inverse problems: problems where the desired output is known, and the task is to find the set of input parameters that will best reproduce that output in a hydrodynamics code (hydrocode). Optimization methods tackle this type of problem, and a central task in applying them is determining the gradient of the output with respect to the input parameters being adjusted. Presented here is the authors' progress (through the use of adjoint differentiation) in obtaining those gradients for some relatively simple hydrocodes.

1 Introduction

When a program simulates a physical system, it does so through the use of a set of equations and mathematical relationships known as a physical model. This model will generally contain a number of parameters that influence the system. Depending on the problem of interest, there may be several situations that arise. One such situation is the inverse problem, wh...
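The adjoint pattern this abstract refers to can be shown on a toy chain of operations. This is only a schematic sketch, not a hydrocode; the three-step "forward code" is invented for illustration. The pattern is: run the forward code once while recording intermediate values, then sweep backwards applying the chain rule step by step to get the gradient of the output with respect to the input in a single pass:

```python
import math

def forward(p):
    # Toy three-step "simulation"; each intermediate value is saved on a
    # tape so the adjoint pass can reuse it.
    a = p * p          # step 1
    b = math.sin(a)    # step 2
    c = 3.0 * b        # step 3: the scalar output to be matched
    return c, (p, a, b)

def adjoint(tape):
    # Reverse sweep: seed with dc/dc = 1, then multiply by each step's
    # local derivative in reverse order (chain rule).
    p, a, b = tape
    c_bar = 1.0
    b_bar = 3.0 * c_bar          # dc/db = 3
    a_bar = math.cos(a) * b_bar  # db/da = cos(a)
    p_bar = 2.0 * p * a_bar      # da/dp = 2p
    return p_bar

p = 0.7
c, tape = forward(p)
grad = adjoint(tape)

# Sanity check against a central finite difference.
eps = 1e-6
fd = (forward(p + eps)[0] - forward(p - eps)[0]) / (2 * eps)
print(grad, fd)
```

The point of the adjoint formulation is that one forward run plus one reverse sweep yields the gradient with respect to every input parameter, where finite differences would need one extra forward run per parameter.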
Style template and guidelines for SPIE Proceedings
"... This document shows the desired format and appearance of a manuscript prepared for the Proceedings of the SPIE. It contains general formatting instructions and hints about how to use LaTeX. The LaTeX source file that produced this document, article.tex (Version 3.3), provides a template, used in con ..."
Abstract
This document shows the desired format and appearance of a manuscript prepared for the Proceedings of the SPIE. It contains general formatting instructions and hints about how to use LaTeX. The LaTeX source file that produced this document, article.tex (Version 3.3), provides a template, used in conjunction with spie.cls (Version 3.3).
Image Reconstruction from Contrast Information
 in Digital Image Computing: Techniques and Applications
"... An iterative algorithm for the reconstruction of natural images given only their contrast map is presented. The solution is neurophysiologically inspired, where the retinal cells, for the most part, transfer only the contrast information to the cortex, which at some stage performs reconstruction fo ..."
Abstract
An iterative algorithm for the reconstruction of natural images given only their contrast map is presented. The solution is neurophysiologically inspired: the retinal cells, for the most part, transfer only the contrast information to the cortex, which at some stage performs reconstruction for perception. We provide an image reconstruction algorithm based on least squares error minimization using gradient descent, as well as its corresponding Bayesian framework for the underlying problem. Starting from an initial image, we compute its contrast map using the Difference of Gaussians (DoG) operator at each iteration, which is then compared to the contrast map of the original image, generating a contrast error map. This error map is processed by a nonlinearity to deal with saturation effects. Pixel values are then updated proportionally to the resulting contrast errors. Using a least squares error measure, the result is a convex error surface with a single minimum, thus providing consistent convergence. Our experiments show that the algorithm's convergence is robust to initial conditions, though its speed is not: a good initial estimate results in faster convergence. Finally, an extension of the algorithm to colour images is presented. We test our algorithm on images from the COREL public image database. The paper provides a novel approach to manipulating an image in its contrast domain.
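The update loop this abstract describes can be sketched in one dimension. The signal, the two Gaussian sigmas, the step size, and the iteration count below are illustrative assumptions; the paper's saturation nonlinearity and colour extension are omitted:

```python
import math

def gauss_kernel(sigma, radius):
    # Normalized discrete Gaussian kernel.
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(x, k):
    # 1-D convolution with clamped (replicated) borders.
    r = len(k) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(x) - 1)
            acc += w * x[idx]
        out.append(acc)
    return out

def dog(x, s1=1.0, s2=2.0, radius=6):
    # Difference of Gaussians: band-pass "contrast map" of the signal.
    g1 = convolve(x, gauss_kernel(s1, radius))
    g2 = convolve(x, gauss_kernel(s2, radius))
    return [a - b for a, b in zip(g1, g2)]

# Target "image" (a 1-D step edge) and the only data we keep: its contrast map.
target = [0.0] * 16 + [1.0] * 16
target_contrast = dog(target)

# Start from a flat estimate; each iteration compares the estimate's
# contrast map to the target's and nudges pixels by the error.
est = [0.5] * len(target)
step = 1.0
for _ in range(500):
    err = [t - c for t, c in zip(target_contrast, dog(est))]
    est = [e + step * d for e, d in zip(est, err)]
# The reconstructed edge reappears at the target's edge location; the DC
# level is unrecoverable from contrast alone and stays at the initial 0.5.
```

Note that the DoG operator annihilates constant signals, so only the band-pass content of the image is determined by the contrast map; absolute intensity must come from the initial estimate or a prior, which is where the Bayesian framing in the paper enters.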