Results 1-6 of 6
Learning low-level vision
International Journal of Computer Vision, 2000
Abstract

Cited by 468 (25 self)
We show a learning-based method for low-level vision problems. We set up a Markov network of patches of the image and the underlying scene. A factorization approximation allows us to easily learn the parameters of the Markov network from synthetic examples of image/scene pairs, and to efficiently propagate image information. Monte Carlo simulations justify this approximation. We apply this to the "super-resolution" problem (estimating high-frequency details from a low-resolution image), showing good results. For the motion estimation problem, we show resolution of the aperture problem and filling-in arising from application of the same probabilistic machinery.
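The propagation machinery behind this abstract is belief propagation over a Markov network. As an illustrative sketch only (a tiny chain-structured MRF with hypothetical random potentials, not the paper's patch network), max-product message passing recovers the MAP labeling exactly on a tree, which we verify against brute force:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, N = 3, 4  # states per node, chain length

# Hypothetical random unary (evidence) and pairwise compatibilities.
phi = rng.random((N, K)) + 0.1
psi = rng.random((K, K)) + 0.1

# Max-product message passing along the chain (exact on trees).
fwd = np.ones((N, K))  # fwd[i]: message from node i-1 into node i
for i in range(1, N):
    fwd[i] = (psi.T * (phi[i - 1] * fwd[i - 1])).max(axis=1)
bwd = np.ones((N, K))  # bwd[i]: message from node i+1 into node i
for i in range(N - 2, -1, -1):
    bwd[i] = (psi * (phi[i + 1] * bwd[i + 1])).max(axis=1)

# Beliefs (max-marginals); their argmaxes give the MAP labeling.
map_bp = (phi * fwd * bwd).argmax(axis=1)

# Brute-force check over all K^N labelings.
def score(xs):
    unary = np.prod([phi[i, x] for i, x in enumerate(xs)])
    pair = np.prod([psi[xs[i], xs[i + 1]] for i in range(N - 1)])
    return unary * pair

best = max(product(range(K), repeat=N), key=score)
```

On loopy patch networks like the paper's, the same message updates are run iteratively as an approximation rather than an exact algorithm.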
Learning to estimate scenes from images
Adv. Neural Information Processing Systems 11, 1999
Abstract

Cited by 38 (6 self)
We seek the scene interpretation that best explains image data.
Image Segmentation and Edge Enhancement with Stabilized Inverse Diffusion Equations.
IEEE Transactions on Image Processing, 1999
Abstract

Cited by 31 (9 self)
We introduce a family of first-order multidimensional ordinary differential equations (ODEs) with discontinuous right-hand sides and demonstrate their applicability in image processing. An equation belonging to this family is an inverse diffusion everywhere except at local extrema, where some stabilization is introduced. For this reason, we call these equations "stabilized inverse diffusion equations" ("SIDEs"). Existence and uniqueness of solutions, as well as stability, are proven for SIDEs. A SIDE in one spatial dimension may be interpreted as a limiting case of a semi-discretized Perona-Malik equation [14], [15]. In an experimental section, SIDEs are shown to suppress noise while sharpening edges present in the input signal. Their application to image segmentation is also demonstrated.
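For intuition, here is a deliberately crude 1-D sketch of the stabilized-inverse-diffusion idea: run the backward (inverse) heat equation, which sharpens transitions, and zero the update at strict local extrema as a stand-in for stabilization. The paper's actual force functions and region-merging semantics are more refined; this only illustrates "inverse diffusion everywhere except at local extrema":

```python
import numpy as np

def side_step(u, dt=0.1):
    """One explicit step of a toy 1-D stabilized inverse diffusion.

    Plain inverse diffusion (the negated heat equation) sharpens edges
    but is unstable; freezing the update at strict local extrema is a
    crude stand-in for the stabilization used in SIDEs.
    """
    u = u.copy()
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    du = -dt * lap  # inverse (backward) diffusion
    mid = u[1:-1]
    extremum = ((mid > u[:-2]) & (mid > u[2:])) | \
               ((mid < u[:-2]) & (mid < u[2:]))
    du[1:-1][extremum] = 0.0  # stabilize at local extrema
    return u + du

# A unit step edge: the jump across the edge grows, then freezes
# once the two edge pixels become local extrema.
u = np.concatenate([np.zeros(10), np.ones(10)])
for _ in range(10):
    u = side_step(u)
jump = u[10] - u[9]  # sharpened contrast across the edge (> 1)
```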
An occlusion model generating scale-invariant images
in Workshop on Statistical and Computational Theories of Vision, Fort Collins, 1999
Abstract

Cited by 7 (1 self)
We present a model for scale invariance of natural images based on the idea of images as collages of statistically independent objects. The model takes occlusions into account, and produces images that show translational invariance, and approximate scale invariance under block averaging and median filtering. We compare the statistics of the simulated images with data from natural scenes, and find good agreement for short-range and middle-range statistics. Furthermore, we discuss the implications of the model on a 3D description of the world.
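The collage-of-objects idea can be sketched with a "dead leaves"-style simulation. The details below (disc-shaped objects, a power-law radius density p(r) ~ r^-3, i.i.d. gray levels) are our assumptions for illustration, not the paper's exact generative process:

```python
import numpy as np

def occlusion_image(size=64, n_objects=400, rmin=2.0, rmax=20.0, seed=0):
    """Collage image: opaque discs dropped at uniform positions,
    later discs occluding earlier ones ('dead leaves' style)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    # Inverse-CDF sampling of radii from p(r) ~ r^-3 on [rmin, rmax],
    # a common choice for approximate scale invariance.
    u = rng.random(n_objects)
    r = 1.0 / np.sqrt((1 - u) / rmin**2 + u / rmax**2)
    for radius in r:
        cy, cx = rng.random(2) * size
        gray = rng.random()  # i.i.d. gray level per object
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2
        img[mask] = gray  # occlusion: the newest object wins
    return img

img = occlusion_image()
```

Comparing pixel-difference statistics of `img` against block-averaged versions of itself is the kind of check the abstract describes.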
Algorithms from Statistical Physics for Generative Models of Images
Image and Vision Computing, 2002
Abstract

Cited by 4 (0 self)
A general framework for defining generative models of images is Markov random fields (MRFs), with shift-invariant (homogeneous) MRFs being an important special case for modeling textures and generic images. Given a dataset of natural images and a set of filters from which filter histogram statistics are obtained, a shift-invariant MRF can be defined (as in Zhu [12]) as a distribution of images whose mean filter histogram values match the empirical values obtained from the data set. Certain parameters in the MRF model, called potentials, must be determined in order for the model to match the empirical statistics. Standard methods for calculating the potentials are computationally very demanding, such as Generalized Iterative Scaling (GIS), an iterative procedure that converges to the correct potential values. We apply the Bethe-Kikuchi approximation, a standard technique from statistical physics, to speed up the GIS procedure. Results are demonstrated on a model using two filters, and we show synthetic images that have been sampled from the model. Finally, we show a connection between GIS and our previous work on the g-factor.
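The potential-fitting step can be illustrated on a toy maximum-entropy model (our toy, not the filter-histogram image MRF): GIS repeatedly adjusts the potentials until the model's feature expectations match the targets. GIS assumes the features sum to the same constant C for every state, hence the padded slack feature:

```python
import numpy as np

# Toy maximum-entropy model over 6 discrete states with 2 binary
# features, fit by Generalized Iterative Scaling (GIS) so that the
# model expectations E_p[f_j] match given targets.
f = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]], dtype=float)
target = np.array([0.5, 0.4])  # desired (achievable) expectations

# Pad each state's feature sum up to the constant C with a slack feature.
C = f.sum(axis=0).max()
F = np.vstack([f, C - f.sum(axis=0)])
T = np.concatenate([target, [C - target.sum()]])

lam = np.zeros(len(F))  # potentials (natural parameters)
for _ in range(2000):
    p = np.exp(lam @ F)
    p /= p.sum()
    E = F @ p                 # current model expectations
    lam += np.log(T / E) / C  # GIS multiplicative update

err = np.abs(f @ p - target).max()  # residual mismatch, -> 0
```

The paper's contribution is precisely that this plain iteration is slow for image MRFs, motivating the Bethe-Kikuchi speedup of the expectation step.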
Energy Minimization by Effective Jump-Diffusion Method for Range Segmentation
Abstract
This paper presents a stochastic jump-diffusion method for optimizing a Bayesian posterior probability in segmenting range data and their associated reflectance images. The algorithm works well on complex real-world scenes (indoor and outdoor), which consist of an unknown number of objects (or surfaces) of various sizes and types, such as planes, conics, smooth surfaces, and cluttered objects (like trees and bushes). Formulated in the Bayesian framework, the posterior probability is distributed over a countable number of subspaces of varying dimensions. To search for the globally optimal solution, the paper adopts a stochastic jump-diffusion process [16] to simulate a Markov chain random walk for exploring this complex solution space. A number of reversible-jump [15] dynamics realize the moves between different subspaces, such as switching surface models and changing the number of objects. The stochastic Langevin equation realizes diffusions, such as region competition [39], in each subspace. To achieve effective computation, the algorithm precomputes some importance proposal probabilities through Hough transforms, edge detection, and data clustering; these proposals are used by the Markov chains for fast mixing. For the varying sizes (scales) of objects in natural scenes, the algorithm computes in a multi-scale fashion. The algorithm is first tested against an ensemble of 1D simulated data for performance analysis. Then the algorithm is applied to three datasets of range images under the same parameter setting. The results are satisfactory in comparison with manual segmentation.
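The jump + diffusion alternation the abstract describes can be shown on a deliberately tiny toy target, nothing like the range-segmentation posterior: a discrete label k selects a Gaussian "subspace", Langevin steps diffuse x within the current subspace, and Metropolis jumps switch k. All names and parameters here are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target over a discrete model index k (which subspace) and a
# continuous variable x: p(k, x) = w[k] * N(x; mu[k], sig^2).
w = np.array([0.3, 0.7])
mu = np.array([-3.0, 3.0])
sig = 1.0

def log_p(k, x):
    return np.log(w[k]) - 0.5 * ((x - mu[k]) / sig) ** 2

k, x = 0, 0.0
eps = 0.1
labels = []
for t in range(20000):
    # Diffusion: a Langevin step within the current subspace.
    grad = -(x - mu[k]) / sig ** 2
    x += 0.5 * eps ** 2 * grad + eps * rng.standard_normal()
    # Jump: Metropolis proposal to the other subspace; x is re-centered
    # by a deterministic, volume-preserving map so the acceptance ratio
    # reduces to the subspace weights.
    k2 = 1 - k
    x2 = mu[k2] + (x - mu[k])
    if np.log(rng.random()) < log_p(k2, x2) - log_p(k, x):
        k, x = k2, x2
    labels.append(k)

frac1 = float(np.mean(labels[5000:]))  # should approach w[1] = 0.7
```

In the paper, the jump moves are the model switches and object births/deaths, the diffusion is the region-competition Langevin dynamics, and the proposals are data-driven rather than the fixed map used here.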