Results 1–6 of 6
Efficient multiscale regularization with applications to the computation of optical flow
 IEEE Trans. Image Processing
, 1994
Abstract

Cited by 98 (33 self)
Abstract—A new approach to regularization methods for image processing is introduced and developed using as a vehicle the problem of computing dense optical flow fields in an image sequence. Standard formulations of this problem require the computationally intensive solution of an elliptic partial differential equation that arises from the often used “smoothness constraint” regularization. The interpretation of the smoothness constraint as a “fractal prior” is utilized to motivate regularization based on a recently introduced class of multiscale stochastic models. The solution of the new problem formulation is computed with an efficient multiscale algorithm. Experiments on several image sequences demonstrate the substantial computational savings that can be achieved due to the fact that the algorithm is noniterative and in fact has a per-pixel computational complexity that is independent of image size. The new approach also has a number of other important advantages. Specifically, multiresolution flow field estimates are available, allowing great flexibility in dealing with the tradeoff between resolution and accuracy. Multiscale error covariance information is also available, which is of considerable use in assessing the accuracy of the estimates. In particular, these error statistics can be used as the basis for a rational procedure for determining the spatially varying optimal reconstruction resolution. Furthermore, if there are compelling reasons to insist upon a standard smoothness constraint, our algorithm provides an excellent initialization for the iterative algorithms associated with the smoothness-constraint problem formulation. Finally, the usefulness of our approach should extend to a wide variety of ill-posed inverse problems in which variational techniques seeking a “smooth” solution are generally used.
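For contrast, a minimal sketch of the baseline the abstract refers to: the standard smoothness-constraint (Horn–Schunck) formulation, whose Euler–Lagrange equations are elliptic and are typically solved by Jacobi-style sweeps. This is illustrative only, not the paper's multiscale algorithm; the function names and the parameters `lam` and `n_iter` are our own.

```python
import numpy as np

def _avg(a):
    """3x3 weighted local average with replicated borders (Neumann boundary)."""
    k = np.array([[1., 2., 1.], [2., 0., 2.], [1., 2., 1.]]) / 12.0
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    out = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def horn_schunck(Ix, Iy, It, lam=10.0, n_iter=200):
    """Jacobi-style iteration for the smoothness-constraint flow estimate:
    each sweep replaces (u, v) by their local averages, corrected toward
    the brightness-constancy constraint Ix*u + Iy*v + It = 0."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    for _ in range(n_iter):
        ub, vb = _avg(u), _avg(v)
        num = Ix * ub + Iy * vb + It
        den = lam + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

Note that the per-pixel cost here grows with the number of sweeps needed for the elliptic system to converge; the paper's point is that its multiscale estimator avoids this iteration entirely.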
Multiscale Representations of Markov Random Fields
 IEEE Transactions on Signal Processing, Vol. 41, No. 12, December 1993
, 1993
Abstract

Cited by 84 (26 self)
Recently, a framework for multiscale stochastic modeling was introduced based on coarse-to-fine scale-recursive dynamics defined on trees. This model class has some attractive characteristics which lead to extremely efficient, statistically optimal signal and image processing algorithms. In this paper, we show that this model class is also quite rich. In particular, we describe how 1-D Markov processes and 2-D Markov random fields (MRF’s) can be represented within this framework. The recursive structure of 1-D Markov processes makes them simple to analyze, and generally leads to computationally efficient algorithms for statistical inference. On the other hand, 2-D MRF’s are well known to be very difficult to analyze due to their noncausal structure, and thus their use typically leads to computationally intensive algorithms for smoothing and parameter identification. In contrast, our multiscale representations are based on scale-recursive models and thus lead naturally to scale-recursive algorithms, which can be substantially more efficient computationally than those associated with MRF models. In 1-D, the multiscale representation is a generalization of the midpoint deflection construction of Brownian motion. The representation of 2-D MRF’s is based on a further generalization to a “midline” deflection construction. The exact representations of 2-D MRF’s are used to motivate a class of multiscale approximate MRF models based on one-dimensional wavelet transforms. We demonstrate the use of these latter models in the context of texture representation and, in particular, we show how they can be used as approximations for or alternatives to well-known MRF texture models.
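The midpoint deflection construction mentioned above is easy to sketch: a path on an interval is refined coarse-to-fine by setting each midpoint to the average of its endpoints plus an independent Gaussian deflection whose variance shrinks with the interval length. A minimal illustration (function name and signature are our own; `sigma` is the diffusion coefficient):

```python
import random

def brownian_bridge_midpoint(b0, b1, t0, t1, sigma, depth, rng):
    """Midpoint deflection: given endpoint values b0 at t0 and b1 at t1,
    draw the midpoint as their average plus N(0, sigma^2 * (t1 - t0) / 4),
    then recurse on both halves.  Returns a {time: value} dict."""
    if depth == 0:
        return {t0: b0, t1: b1}
    tm = 0.5 * (t0 + t1)
    bm = 0.5 * (b0 + b1) + rng.gauss(0.0, sigma * ((t1 - t0) / 4.0) ** 0.5)
    path = brownian_bridge_midpoint(b0, bm, t0, tm, sigma, depth - 1, rng)
    path.update(brownian_bridge_midpoint(bm, b1, tm, t1, sigma, depth - 1, rng))
    return path
```

The 2-D “midline” construction described in the paper generalizes this by deflecting entire midlines of a region at once rather than single midpoints.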
Multiscale segmentation and anomaly enhancement of SAR imagery
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 26 (6 self)
Abstract—In this paper, we present efficient multiscale approaches to the segmentation of natural clutter, specifically grass and forest, and to the enhancement of anomalies in synthetic aperture radar (SAR) imagery. The methods we propose exploit the coherent nature of SAR sensors. In particular, they take advantage of the characteristic statistical differences in imagery of different terrain types, as a function of scale, due to radar speckle. We employ a recently introduced class of multiscale stochastic processes that provide a powerful framework for describing random processes and fields that evolve in scale. We build models representative of each category of terrain of interest (i.e., grass and forest) and employ them in directing decisions on pixel classification, segmentation, and anomalous behavior. The scale-autoregressive nature of our models allows extremely efficient calculation of likelihoods for different terrain classifications over windows of SAR imagery. We subsequently use these likelihoods as the basis for both image pixel classification and grass–forest boundary estimation. In addition, anomaly enhancement is possible with minimal additional computation. Specifically, the residuals produced by our models in predicting SAR imagery from coarser-scale images are theoretically uncorrelated. As a result, potentially anomalous pixels and regions are enhanced and pinpointed by noting regions whose residuals display a high level of correlation throughout scale. We evaluate the performance of our techniques through testing on 0.3-m resolution SAR data gathered with Lincoln Laboratory’s millimeter-wave SAR.
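A toy illustration of the residual idea (not the paper's SAR models): predict each scale from a 2x2 block average of itself, and flag pixels whose absolute residuals stay large across scales by multiplying the upsampled residual maps. The function names and the crude block-average predictor are our own assumptions standing in for the scale-autoregressive models.

```python
import numpy as np

def coarse_to_fine_residuals(img, levels=3):
    """At each scale, predict the image from a 2x2 block average of itself
    and record the prediction residual; return residuals, finest first."""
    residuals = []
    cur = img.astype(float)
    for _ in range(levels):
        h, w = cur.shape
        coarse = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pred = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        residuals.append(cur - pred)
        cur = coarse
    return residuals

def anomaly_score(residuals):
    """Combine |residual| maps across scales: upsample each to the finest
    grid and multiply, so only pixels anomalous at every scale stay large."""
    score = np.ones_like(residuals[0])
    for lvl, r in enumerate(residuals):
        up = np.abs(r)
        for _ in range(lvl):
            up = np.repeat(np.repeat(up, 2, axis=0), 2, axis=1)
        score = score * up
    return score
```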
A Multiscale Hypothesis Testing Approach to Anomaly Detection and Localization from Noisy Tomographic Data
 IEEE Transactions on Image Processing
, 1998
Abstract

Cited by 10 (1 self)
Abstract—In this paper, we investigate the problems of anomaly detection and localization from noisy tomographic data. These are characteristic of a class of problems that cannot be optimally solved because they involve hypothesis testing over hypothesis spaces with extremely large cardinality. Our multiscale hypothesis testing approach addresses the key issues associated with this class of problems. A multiscale hypothesis test is a hierarchical sequence of composite hypothesis tests that discards large portions of the hypothesis space with minimal computational burden and zooms in on the likely true hypothesis. For the anomaly detection and localization problems, hypothesis zooming corresponds to spatial zooming—anomalies are successively localized to finer and finer spatial scales. The key challenges we address include how to hierarchically divide a large hypothesis space and how to process the data at each stage of the hierarchy to decide which parts of the hypothesis space deserve more attention. To answer the former, we draw on [1] and [7]–[10]. For the latter, we pose and solve a nonlinear optimization problem for a decision statistic that maximally disambiguates composite hypotheses. With no increase in computational complexity, our optimized statistic shows substantial improvement over conventional approaches. We provide examples that demonstrate this and quantify how much performance is sacrificed by the use of a suboptimal method as compared to that achievable if the optimal approach were computationally feasible. Index Terms—Anomaly detection, composite hypothesis testing, hypothesis zooming, nonlinear optimization, quadratic programming, tomography.
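The hypothesis-zooming idea can be sketched on a quadtree: at each level, evaluate one aggregate statistic per quadrant of the current region, keep the best quadrant, and discard the rest of the hypothesis space. This is a simplified sketch using a plain sum statistic of our own choosing, not the paper's optimized decision statistic:

```python
def zoom_localize(score, min_size=1):
    """Hypothesis zooming on a quadtree: repeatedly halve the candidate
    region, keeping the quadrant with the largest aggregate statistic.
    score is an n x n grid of per-pixel anomaly scores, n a power of 2."""
    n = len(score)
    r0, c0, size = 0, 0, n
    while size > min_size:
        half = size // 2
        def stat(rs, cs):
            # Composite-hypothesis statistic for "the anomaly lies in this
            # quadrant": here simply the sum of per-pixel scores.
            return sum(score[i][j]
                       for i in range(rs, rs + half)
                       for j in range(cs, cs + half))
        quadrants = [(r0, c0), (r0, c0 + half),
                     (r0 + half, c0), (r0 + half, c0 + half)]
        r0, c0 = max(quadrants, key=lambda q: stat(q[0], q[1]))
        size = half
    return r0, c0, size
```

On an n x n grid this evaluates four statistics at each of O(log n) levels, instead of the n^2 single-pixel tests an exhaustive search would need.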
Local tests for consistency of support hyperplane data
 J. Math. Imaging and Vision
, 1995
Abstract

Cited by 7 (0 self)
Abstract. Support functions and samples of convex bodies in R^n are studied with regard to conditions for their validity or consistency. Necessary and sufficient conditions for a function to be a support function are reviewed in a general setting. An apparently little-known classical result of this kind for the planar case, due to Rademacher and based on a determinantal inequality, is presented, and a generalization to arbitrary dimensions is developed. These conditions are global in the sense that they involve values of the support function at widely separated points. The corresponding discrete problem of determining the validity of a set of samples of a support function is treated. Conditions similar to the continuous inequality results are given for the consistency of a set of discrete support observations. These conditions are in terms of a series of local inequality tests involving only neighboring support samples. Our results serve to generalize existing planar conditions to arbitrary dimensions by providing a generalization of the notion of nearest neighbor for plane vectors which utilizes a simple positive-cone condition on the respective support-sample normals.
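In the planar case, local tests of this kind take a simple trigonometric form: each sampled support value may not exceed the interpolant determined by its two angular neighbors. The sketch below uses the classical planar neighboring-sample inequality, not the paper's higher-dimensional generalization; the function name is our own, and we assume the angles are sorted in [0, 2*pi) with each consecutive triple spanning less than pi.

```python
import math

def support_samples_consistent(angles, h, tol=1e-9):
    """Local consistency test for planar support samples (u_i, h_i) with
    u_i = (cos a_i, sin a_i): for each consecutive angle triple
    a_prev < a < a_next (after unwrapping), require
        h_i * sin(a_next - a_prev)
          <= h_prev * sin(a_next - a) + h_next * sin(a - a_prev)."""
    n = len(angles)
    for i in range(n):
        ap, a, an = angles[(i - 1) % n], angles[i], angles[(i + 1) % n]
        if ap > a:          # unwrap across the 0 / 2*pi seam
            ap -= 2.0 * math.pi
        if an < a:
            an += 2.0 * math.pi
        lhs = h[i] * math.sin(an - ap)
        rhs = (h[(i - 1) % n] * math.sin(an - a)
               + h[(i + 1) % n] * math.sin(a - ap))
        if lhs > rhs + tol:
            return False
    return True
```

As the abstract notes, each test touches only neighboring samples, so a full pass over m samples costs O(m).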
An Efficient Region of Interest Acquisition Method for Dynamic Magnetic Resonance Imaging
 IEEE Trans. Image Processing
, 2001
Abstract

Cited by 2 (1 self)
Motivated by recent work in the area of dynamic magnetic resonance imaging (MRI), we develop a new approach to the problem of reduced-order MRI acquisition. Recent efforts in this field have concentrated on the use of Fourier and singular value decomposition (SVD) methods to obtain low-order representations of an entire image plane. We extend this work to the case of imaging an arbitrarily shaped region of interest (ROI) embedded within the full image. After developing a natural error metric for this problem, we show that determining the minimal order required to meet a prescribed error level is in general intractable, but can be solved under certain assumptions. We then develop an optimization approach to the related problem of minimizing the error for a given order. Finally, we demonstrate the utility of this approach and its advantages over existing Fourier and SVD methods on a number of MRI images.
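A naive sketch of the underlying question (not the paper's optimized acquisition method): given a full image and an ROI mask, find the smallest SVD truncation order whose error, measured on the ROI alone, meets a tolerance. The function name and the relative-error metric are our own assumptions.

```python
import numpy as np

def min_order_for_roi(img, mask, tol):
    """Return the smallest k such that the rank-k SVD truncation of img
    has relative error at most tol, measured only on the masked ROI."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    roi_norm = np.linalg.norm(img[mask])
    for k in range(1, len(s) + 1):
        approx = (U[:, :k] * s[:k]) @ Vt[:k]
        if np.linalg.norm((img - approx)[mask]) <= tol * roi_norm:
            return k
    return len(s)
```

Exhaustively checking each order against a full-image SVD is only a reference point; the paper's contribution is an acquisition scheme tailored to the ROI rather than the whole plane.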