Results 1–10 of 35
Inducing Features of Random Fields
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 666 (14 self)
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field, and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classification.
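The iterative scaling step mentioned in the abstract can be sketched generically. The following is a minimal Generalized Iterative Scaling (GIS) loop for a maximum-entropy model over an explicitly enumerated toy state space; the features and target expectations are hypothetical illustrations, not the paper's feature-induction procedure.

```python
import numpy as np

def gis(features, empirical, n_iter=2000):
    """Generalized Iterative Scaling for a maximum-entropy model
    p(x) proportional to exp(sum_i w_i f_i(x)) over an enumerated state space."""
    C = features.sum(axis=1).max()
    # GIS requires sum_i f_i(x) to be constant in x; append a slack feature
    F = np.hstack([features, C - features.sum(axis=1, keepdims=True)])
    emp = np.append(empirical, C - empirical.sum())
    w = np.zeros(F.shape[1])
    for _ in range(n_iter):
        s = F @ w
        p = np.exp(s - s.max()); p /= p.sum()
        w += np.log(emp / (F.T @ p)) / C   # multiplicative GIS update
    s = F @ w
    p = np.exp(s - s.max()); p /= p.sum()  # final model distribution
    return w, p

# hypothetical toy problem: 4 states, 2 binary features
feats = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
target = np.array([0.5, 0.4])          # empirical expectations of the features
w, p = gis(feats, target)
model_exp = feats.T @ p                # should match the empirical expectations
```

At convergence the model expectations of the features match the empirical ones, which is exactly the fixed point the weight training in the abstract seeks.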
Distributing the Kalman filter for large-scale systems
 IEEE Trans. on Signal Processing, http://arxiv.org/pdf/0708.0242
Cited by 54 (13 self)
Abstract—This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale, n-dimensional dynamical system monitored by a network of sensors. Local Kalman filters are implemented on smaller n_l-dimensional subsystems, n_l ≪ n, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an Lth-order Gauss–Markov approximation to the centralized filter. We quantify the information loss due to this Lth-order approximation by the divergence, which decreases as L increases. The order of the approximation leads to a bound on the dimension of the subsystems, hence providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively with only local communication and low-order computation by a distributed iterate collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter: nowhere in the network is storage, communication, or computation of n-dimensional vectors and matrices required; only n_l-dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed. Index Terms—Distributed algorithms, distributed estimation, information filters, iterative methods, Kalman filtering, large-scale systems, matrix inversion, sparse matrices.
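The consensus averaging primitive named in the abstract is simple to sketch. Below, each node repeatedly nudges its value toward its neighbors' values and the whole network converges to the global average; the four-sensor path graph and the step size are hypothetical choices, not taken from the paper.

```python
import numpy as np

def consensus_average(values, neighbors, eps=0.2, n_iter=200):
    """Distributed averaging: each node replaces its value with a weighted
    average of itself and its neighbors; all nodes converge to the mean."""
    x = np.array(values, dtype=float)
    for _ in range(n_iter):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            # eps must be below 1/max_degree for this update to be stable
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# hypothetical 4-sensor path graph 0-1-2-3
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
vals = [1.0, 2.0, 3.0, 6.0]          # local sensor quantities to fuse
x = consensus_average(vals, nbrs)    # every entry approaches the mean, 3.0
```

Only neighbor-to-neighbor communication is used, which is the property that lets the filter fuse shared observations without any central node.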
Unsupervised Image Restoration and Edge Location Using Compound Gauss-Markov Random Fields and the MDL Principle
 IEEE Trans. Image Processing
, 1997
Cited by 27 (10 self)
Discontinuity-preserving Bayesian image restoration typically involves two Markov random fields: one representing the image intensities/gray levels to be recovered and another one signaling discontinuities/edges to be preserved. The usual strategy is to perform joint maximum a posteriori (MAP) estimation of the image and its edges, which requires the specification of priors for both fields. In this paper, instead of taking an edge prior, we interpret discontinuities (in fact, their locations) as deterministic unknown parameters of the compound Gauss-Markov random field (CGMRF), which is assumed to model the intensities. This strategy should allow inferring the discontinuity locations directly from the image with no further assumptions. However, an additional problem emerges: the number of parameters (edges) is unknown. To deal with it, we invoke the minimum description length (MDL) principle; according to MDL, the best edge configuration is the one that allows the shortest description of the image and its edges. Taking the other model parameters (noise and CGMRF variances) also as unknown, we propose a new unsupervised discontinuity-preserving image restoration criterion. Implementation is carried out by a continuation-type iterative algorithm which provides estimates of the number of discontinuities, their locations, the noise variance, the original image variance, and the original image itself (restored image). Experimental results with real and synthetic images are reported.
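The MDL idea invoked above, choosing the edge configuration that gives the shortest description, can be illustrated on a much simpler stand-in problem: picking the breakpoints of a piecewise-constant 1-D signal. The two-part code below (residual bits plus parameter bits) is a generic MDL score, not the paper's criterion, and the signal and candidate breakpoints are hypothetical.

```python
import numpy as np

def mdl_cost(signal, breakpoints):
    """Two-part MDL score: Gaussian code length of the residual plus a
    (log n)/2 penalty per real parameter (segment means and breakpoints)."""
    n = len(signal)
    segs = np.split(signal, breakpoints)
    rss = sum(((s - s.mean()) ** 2).sum() for s in segs)
    k = len(breakpoints)
    return 0.5 * n * np.log(rss / n + 1e-12) + 0.5 * (2 * k + 1) * np.log(n)

rng = np.random.default_rng(3)
# hypothetical signal with one true jump at index 50
signal = np.concatenate([rng.normal(0, 0.1, 50), rng.normal(2, 0.1, 50)])
candidates = {0: [], 1: [50], 2: [30, 70]}     # number of breaks -> positions
costs = {k: mdl_cost(signal, bp) for k, bp in candidates.items()}
best = min(costs, key=costs.get)               # MDL picks the true single break
```

Adding the correct breakpoint shortens the residual code far more than the extra parameters cost; adding wrong or superfluous breakpoints does not, which is exactly how MDL resolves the unknown number of edges.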
Hyperspectral Imagery: Clutter Adaptation in Anomaly Detection
 IEEE Trans. Inform. Theory
, 2000
Cited by 26 (1 self)
Abstract—Hyperspectral sensors are passive sensors that simultaneously record images for hundreds of contiguous and narrowly spaced regions of the electromagnetic spectrum. Each image corresponds to the same ground scene, thus creating a cube of images that contains both spatial and spectral information about the objects and backgrounds in the scene. In this paper, we present an adaptive anomaly detector designed assuming that the background clutter in the hyperspectral imagery is a three-dimensional Gauss–Markov random field. This model leads to an efficient and effective algorithm for discriminating man-made objects (the anomalies) in real hyperspectral imagery. The major focus of the paper is on the adaptive stage of the detector, i.e., the estimation of the Gauss–Markov random field parameters. We develop three methods: maximum likelihood, least squares, and approximate maximum likelihood. We study these approaches along three directions: estimation error performance, computational cost, and detection performance. In terms of estimation error, we derive the Cramér–Rao bounds and carry out Monte Carlo simulation studies that show that the three estimation procedures have similar performance when the fields are highly correlated, as is often the case with real hyperspectral imagery. The approximate maximum-likelihood method has a clear advantage from the computational point of view. Finally, we test extensively with real hyperspectral imagery the adaptive anomaly detector incorporating either the least squares or the approximate maximum-likelihood estimators. Its performance compares very favorably with that of the RX algorithm, an alternative detector commonly used with multispectral data, while reducing the associated computational cost by up to an order of magnitude. Index Terms—Anomaly detection, Cramér–Rao bounds, Gauss–Markov random field, hyperspectral imagery, least squares, maximum likelihood.
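For orientation, here is the benchmark named in the abstract, an RX-style anomaly score in its simplest global form: the squared Mahalanobis distance of each pixel's spectrum from the scene mean. This is a sketch of the comparison baseline, not the paper's GMRF detector, and the synthetic cube and planted anomaly are hypothetical.

```python
import numpy as np

def rx_scores(cube):
    """RX-style anomaly score: squared Mahalanobis distance of each pixel
    spectrum from the global background mean under the sample covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # mild regularization
    icov = np.linalg.inv(cov)
    d = X - mu
    # per-pixel quadratic form d_i^T icov d_i
    return np.einsum('ij,jk,ik->i', d, icov, d).reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 5))   # synthetic 16x16 scene, 5 bands
cube[8, 8] += 10.0                    # planted spectral anomaly at (8, 8)
scores = rx_scores(cube)
```

The planted pixel dominates the score map; the paper's point is that a GMRF clutter model achieves this kind of discrimination while also exploiting spatial correlation, which this global baseline ignores.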
Efficient detection in hyperspectral imagery
 IEEE Transactions on Image Processing
, 2001
Cited by 21 (1 self)
Abstract—Hyperspectral sensors collect hundreds of narrow and contiguously spaced spectral bands of data. Such sensors provide fully registered, high-resolution spatial and spectral images that are invaluable in discriminating between man-made objects and natural clutter backgrounds. The price paid for this high-resolution data is extremely large data sets, several hundred megabytes for a single scene, that make storage and transmission difficult, thus requiring fast onboard processing techniques to reduce the data being transmitted. Attempts to apply traditional maximum likelihood detection techniques for in-flight processing of these massive amounts of hyperspectral data suffer from two limitations: first, they neglect the spatial correlation of the clutter by treating it as spatially white noise; second, their computational cost renders them prohibitive without significant data reduction, such as grouping the spectral bands into clusters, with a consequent loss of spectral resolution. This paper presents a maximum likelihood detector that successfully confronts both problems: rather than ignoring the spatial and spectral correlations, our detector exploits them to its advantage; and it is computationally expedient, its complexity increasing only linearly with the number of spectral bands available. Our approach is based on a Gauss–Markov random field (GMRF) model of the clutter, which has the advantage of providing a direct parameterization of the inverse of the clutter covariance, the quantity of interest in the test statistic. We discuss in detail two alternative GMRF detectors: one based on a binary hypothesis approach, and the other on a ‘single’ hypothesis formulation. We analyze extensively with real hyperspectral imagery data (HYDICE and SEBASS) the performance of the detectors, comparing them to a benchmark detector, the RX algorithm.
Our results show that the GMRF ‘single’ hypothesis detector significantly outperforms the RX algorithm in computational cost, while delivering a noticeable improvement in detection performance. Index Terms—Gauss–Markov random field, hyperspectral sensor imagery, maximum-likelihood detection, ‘single’ hypothesis test.
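The key structural fact exploited above, that a GMRF parameterizes the inverse covariance directly and sparsely, is easy to see on a toy example. For a first-order 1-D Gauss-Markov chain (a hypothetical stand-in for the paper's 3-D field), the precision matrix is tridiagonal even though the covariance itself is dense.

```python
import numpy as np

# first-order 1-D Gauss-Markov chain: the precision (inverse covariance)
# matrix has one parameter per neighbor interaction and is tridiagonal
n, a = 8, 0.4                      # hypothetical size and interaction weight
P = np.eye(n) - a * (np.eye(n, k=1) + np.eye(n, k=-1))
assert np.all(np.linalg.eigvalsh(P) > 0)   # valid precision for |a| < 0.5

C = np.linalg.inv(P)               # the covariance, by contrast, is dense
```

A detector whose test statistic needs the inverse covariance can therefore work with O(n) GMRF parameters instead of estimating and inverting a full covariance, which is the source of the linear-in-bands complexity claimed in the abstract.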
A double Metropolis-Hastings sampler for spatial models with intractable normalizing constants
 Journal of Statistical Computation and Simulation
Cited by 19 (3 self)
The problem of simulating from distributions with intractable normalizing constants has received much attention in the recent literature. In this paper, we propose an asymptotic algorithm, the so-called double Metropolis-Hastings (MH) sampler, for tackling this problem. Unlike other auxiliary variable algorithms, the double MH sampler removes the need for exact sampling, the auxiliary variables being generated using MH kernels, and thus can be applied to a wide range of problems for which exact sampling is not available. For problems where exact sampling is available, it typically produces results as accurate as those of the exchange algorithm, but using much less CPU time. The new method is illustrated on various spatial models.
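A minimal sketch of the double MH idea on an Ising model: propose a new parameter, generate an auxiliary field with a few Gibbs (MH) sweeps at the proposed parameter, and accept with a ratio in which the intractable normalizing constants cancel. The grid size, sweep counts, true parameter, and flat prior on [0, 1] are all hypothetical illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def stat(x):
    """Ising sufficient statistic: sum of neighbor spin products."""
    return (x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum()

def gibbs_sweep(x, theta):
    """One systematic-scan Gibbs sweep of the Ising model at parameter theta."""
    n, m = x.shape
    for i in range(n):
        for j in range(m):
            s = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < n and 0 <= b < m)
            p = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
            x[i, j] = 1 if rng.random() < p else -1
    return x

# "observed" data: Gibbs sweeps at a hypothetical true theta = 0.3
x_obs = np.where(rng.random((8, 8)) < 0.5, 1, -1)
for _ in range(50):
    gibbs_sweep(x_obs, 0.3)

# double Metropolis-Hastings over theta with a flat prior on [0, 1]
theta, samples = 0.5, []
for _ in range(200):
    prop = theta + 0.1 * rng.normal()
    if 0.0 < prop < 1.0:
        # auxiliary variable: MH (Gibbs) sweeps at prop, started from x_obs
        y = x_obs.copy()
        for _ in range(3):
            gibbs_sweep(y, prop)
        # the intractable normalizing constants cancel in this log-ratio
        log_r = (prop - theta) * (stat(x_obs) - stat(y))
        if np.log(rng.random()) < log_r:
            theta = prop
    samples.append(theta)
```

Because the auxiliary field is produced by an MH kernel rather than a perfect sampler, the chain is only asymptotically correct, which is exactly the trade-off the abstract describes.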
Data assimilation in large time-varying multidimensional fields
 IEEE Trans. Image Process
, 1999
Cited by 16 (8 self)
Abstract—In the physical sciences, e.g., meteorology and oceanography, combining measurements with the dynamics of the underlying models is usually referred to as data assimilation. Data assimilation improves the reconstruction of the image fields of interest. Assimilating data with algorithms like the Kalman–Bucy filter (KBf) is challenging due to their computational cost, which for two-dimensional (2-D) fields grows as a high power of the linear dimension of the domain. In this paper, we combine the block structure of the underlying dynamical models and the sparseness of the measurements (e.g., satellite scans) to develop four efficient implementations of the KBf that greatly reduce its computational cost: the block KBf, the scalar KBf, the local block KBf (lbKBf), and the local scalar KBf (lsKBf). We illustrate the application of the lbKBf to assimilate altimetry satellite data in a Pacific equatorial basin. Index Terms—Computed imaging, data assimilation, Kalman–Bucy filter, Gauss–Markov fields, physical oceanography, satellite altimetry.
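The assimilation step itself can be sketched as one discrete-time Kalman predict/update cycle: propagate the field estimate through the dynamics, then blend in the sparse observations. The tiny 1-D field, random-walk dynamics, and two-point observation pattern below are hypothetical; the paper's contribution is making this cycle efficient for large 2-D fields, not the cycle itself.

```python
import numpy as np

def kalman_assimilate(x, P, F, Q, H, R, z):
    """One predict/update cycle of the discrete-time Kalman filter."""
    # predict: push the estimate and its covariance through the dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend in the measurements z observed through H
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# hypothetical 1-D field of 6 points, random-walk dynamics,
# with a "satellite scan" observing only points 1 and 4
n = 6
F, Q = np.eye(n), 0.01 * np.eye(n)
H = np.zeros((2, n)); H[0, 1] = H[1, 4] = 1.0
R = 0.1 * np.eye(2)
x, P = np.zeros(n), np.eye(n)
x, P = kalman_assimilate(x, P, F, Q, H, R, z=np.array([1.0, -1.0]))
```

After the update, the estimate moves toward the measured values at the observed points and the posterior variance drops there, while unobserved points are untouched in this uncorrelated toy prior.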
Clustering under prior knowledge with application to image segmentation
 Advances in Neural Information Processing Systems 19
, 2007
Cited by 10 (0 self)
This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); or as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering, which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes.
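The appeal of a closed-form E-step can be seen on the plainest possible case: EM for a two-component 1-D Gaussian mixture, where the E-step is just the posterior responsibility of each component for each point. This is the generic mixture-learning template the abstract builds on, not the paper's penalized model; the data below are synthetic.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture.
    The E-step (responsibilities) is available in closed form."""
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: closed-form posterior responsibilities
        d = x[:, None] - mu[None, :]
        logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
pi, mu, var = em_gmm_1d(x)
```

No sampling is needed at any point; each iteration is two deterministic, closed-form passes over the data, which is the property the paper preserves under its grouping prior.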
Efficient Compression Of Arbitrary Multi-View Video Signals
, 1996
Cited by 5 (1 self)
Multiple views of a scene, obtained from cameras positioned at distinct viewpoints, can provide a viewer with the benefits of added realism, selective viewing, and improved scene understanding. The importance of these signals is evidenced by the recently proposed Multi-View Profile (MVP) extension to the MPEG-2 video compression standard, and their explicit incorporation into the future MPEG-4 standard. However, multi-view compression implementations typically rely on single-view image sequence model assumptions. We hypothesize (and demonstrate) that impressive system bandwidth reduction can be achieved by utilizing displacement vector field and image intensity models tuned to the special characteristics of multi-view video signals. This thesis focuses on the predictive coding of non-periodic, i.e., arbitrary, multi-view video signals for the applications of simulated motion parallax and viewer-specified degree of stereoscopy. To facilitate their practical use, we desire algorithms that are applicable to the common waveform-based, hybrid encoder framework, which consists of a frame-based prediction followed by residual encoding. Three novel techniques are developed, which respectively improve the processes of frame-based prediction, residual encoding, and viewpoint interpolation. These are:
• a simple method to adaptively select the best possible reference frame, based on estimated occlusion percentage with the frame to be encoded;
• a low bit rate residual encoding technique that compensates for pixel intensity non-stationarities along a displacement trajectory and for the practical limitations of the prediction process; and
• an algorithm that correctly handles displacement estimation errors, occlusions, and ambiguously referenced image regions for the interpolation of subjectively pleasing “virtual” viewpoints from a noisy displacement vector field.
We demonstrate the superiority of each of these algorithms on numerous multi-view video signals through comparisons with conventional techniques, and we analyze their cost/benefit ratio in terms of increases in system complexity and storage, offset by rate-distortion improvements. Finally, we indicate the relative significance of these algorithms, and provide insight into how and when they should be combined into a complete, efficient multi-view encoder/decoder system.
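The first technique above, adaptive reference-frame selection by estimated occlusion percentage, can be sketched crudely: score each candidate reference by the fraction of blocks whose matching error against the target exceeds a threshold, and pick the candidate with the lowest fraction. The zero-motion SAD proxy, block size, threshold, and synthetic frames below are all hypothetical simplifications of the thesis's method.

```python
import numpy as np

def pick_reference(target, candidates, block=8, thresh=20.0):
    """Choose the candidate reference frame with the lowest estimated
    occlusion percentage: the fraction of blocks whose matching error
    against the target stays above a threshold (a crude occlusion proxy)."""
    def occlusion_pct(ref):
        h, w = target.shape
        occluded = total = 0
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                blk = target[i:i+block, j:j+block]
                # zero-motion SAD only, as a cheap stand-in for a full search
                sad = np.abs(blk - ref[i:i+block, j:j+block]).mean()
                occluded += sad > thresh
                total += 1
        return occluded / total
    scores = [occlusion_pct(r) for r in candidates]
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(4)
target = rng.integers(0, 256, (32, 32)).astype(float)
good = target + rng.normal(0, 2, target.shape)      # nearby viewpoint
bad = rng.integers(0, 256, (32, 32)).astype(float)  # unrelated frame
idx, scores = pick_reference(target, [bad, good])   # selects the nearby view
```

The encoder then predicts from the selected reference, so the residual it must encode is as small as the available viewpoints allow.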
Bayesian image segmentation using Gaussian field priors
 In CVPR Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition
, 2005
Cited by 5 (0 self)
Abstract. The goal of segmentation is to partition an image into a finite set of regions, homogeneous in some (e.g., statistical) sense, thus being an intrinsically discrete problem. Bayesian approaches to segmentation use priors to impose spatial coherence; the discrete nature of segmentation demands priors defined on discrete-valued fields, thus leading to difficult combinatorial problems. This paper presents a formulation which allows using continuous priors, namely Gaussian fields, for image segmentation. Our approach completely avoids the combinatorial nature of standard Bayesian approaches to segmentation. Moreover, it is completely general, i.e., it can be used in supervised, unsupervised, or semi-supervised modes, with any probabilistic observation model (intensity, multispectral, or texture features). To use continuous priors for image segmentation, we adopt a formulation which is common in Bayesian machine learning: the introduction of hidden fields to which the region labels are probabilistically related. Since these hidden fields are real-valued, we can adopt any type of spatial prior for continuous-valued fields, such as Gaussian priors. We show how, under this model, Bayesian MAP segmentation is carried out by a (generalized) EM algorithm. Experiments on synthetic and real data show that the proposed approach performs very well at a low computational cost.
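The hidden-field construction above, a real-valued field probabilistically linked to discrete labels, can be sketched with a logistic link: smoothness of the hidden field carries over to spatially coherent label probabilities, even though the labels themselves are discrete. The ramp-shaped hidden field below is a hypothetical stand-in for a sample from a Gaussian-field prior.

```python
import numpy as np

def labels_from_hidden_field(y):
    """Map a real-valued hidden field y to two-class label probabilities via
    the logistic link; the MAP label at each pixel is a simple threshold."""
    p = 1.0 / (1.0 + np.exp(-y))      # P(label = 1 | y), in closed form
    return p, (p > 0.5).astype(int)

# hypothetical smooth hidden field: a left-to-right ramp over an 8x8 image
y = np.linspace(-3, 3, 8)[None, :] * np.ones((8, 1))
p, seg = labels_from_hidden_field(y)  # coherent left/right segmentation
```

Because all the spatial modeling happens in the continuous field y, any Gaussian (or wavelet-based) prior can be used without ever optimizing over discrete label configurations.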