Results 1–10 of 37
Scale-space and edge detection using anisotropic diffusion
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1990

Cited by 1267 (1 self)
Abstract—The scale-space technique introduced by Witkin involves generating coarser-resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the “semantically meaningful” edges at coarse scales. In this paper we suggest a new definition of scale-space, and introduce a class of algorithms that realize it using a diffusion process. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intra-region smoothing in preference to inter-region smoothing. It is shown that the “no new maxima should be generated at coarse scales” property of conventional scale-space is preserved. As the region boundaries in our approach remain sharp, we obtain a high-quality edge detector which successfully exploits global information. Experimental results are shown on a number of images. The algorithm involves elementary, local operations replicated over the image, making parallel hardware implementations feasible. Index Terms—Adaptive filtering, analog VLSI, edge detection, edge enhancement, nonlinear diffusion, nonlinear filtering, parallel algo…
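The diffusion process described in this abstract can be sketched numerically. The following is a minimal NumPy sketch of one such scheme; the four-neighbour discretization, the exponential conduction function, and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=20.0, lam=0.2):
    """Diffuse the image with a conduction coefficient that drops to
    zero at large gradients, so smoothing stays within regions and
    region boundaries remain sharp."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction coefficient
    for _ in range(n_iter):
        # gradients toward the four nearest neighbours (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # each neighbour's contribution is gated by g(|gradient|)
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a step image, smoothing leaves the edge essentially intact, because g evaluated at a large jump is close to zero while flat regions diffuse freely.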
Connectionist Learning Procedures
 ARTIFICIAL INTELLIGENCE
, 1989

Cited by 339 (6 self)
A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple gradient-descent learning procedures work well for small tasks, and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.
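The core idea of this abstract, that each connection adjusts its strength along the negative derivative of a global error measure, can be illustrated with a single linear unit; the toy data, learning rate, and epoch count below are hypothetical:

```python
import numpy as np

def train_linear_unit(X, y, lr=0.1, epochs=500):
    """Delta-rule gradient descent: each weight moves opposite the
    derivative of the mean squared error with respect to that weight."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        err = X @ w - y            # network output minus target
        grad = X.T @ err / len(y)  # dE/dw for every connection at once
        w -= lr * grad             # adjust strengths to decrease the error
    return w

# targets generated by w* = (2, -1), so learning should recover it
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, -1.0, 1.0])
w = train_linear_unit(X, y)
```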
Deterministic edge-preserving regularization in computed imaging
 IEEE Trans. Image Processing
, 1997

Cited by 231 (23 self)
Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic, so the optimization becomes easier. We propose a deterministic strategy, based on alternate minimizations over the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of tomography, but the method can be applied to a large number of applications in image processing.
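The alternate-minimization strategy can be sketched for 1-D denoising. The edge-preserving potential φ(t) = √(1 + t²), the auxiliary-variable formula, and the weighting below are illustrative choices, not necessarily the paper's exact analysis:

```python
import numpy as np

def half_quadratic_denoise(y, lam=1.0, n_iter=30):
    """Alternate minimization: the auxiliary variable b marks
    discontinuities (b -> 0 at large gradients, shielding edges from
    smoothing) and makes the criterion quadratic in x, so the x-step
    reduces to a linear solve."""
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # finite differences
    x = y.copy()
    for _ in range(n_iter):
        d = D @ x
        b = 1.0 / (2.0 * np.sqrt(1.0 + d ** 2))     # aux. variable for phi(t)=sqrt(1+t^2)
        A = np.eye(n) + 2.0 * lam * D.T @ (b[:, None] * D)
        x = np.linalg.solve(A, y)                   # quadratic x-step
    return x
```

On a piecewise-constant signal, b becomes small exactly at the jump, so the quadratic step smooths each plateau without blurring the edge between them.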
Bayesian Estimation Of Motion Vector Fields
 IEEE Trans. Pattern Anal. Machine Intell
, 1992

Cited by 121 (19 self)
This paper presents a new approach to the estimation of two-dimensional motion vector fields from time-varying images. The approach is stochastic, both in its formulation and in the solution method. The formulation involves the specification of a deterministic structural model, along with stochastic observation and motion field models. Two motion models are proposed: a globally smooth model based on vector Markov random fields and a piecewise smooth model derived from coupled vector-binary Markov random fields. Two estimation criteria are studied. In Maximum A Posteriori Probability (MAP) estimation, the a posteriori probability of motion given the data is maximized, while in Minimum Expected Cost (MEC) estimation, the expectation of a certain cost function is minimized. The MAP estimation is performed via simulated annealing, while the MEC algorithm performs iteration-wise averaging. Both algorithms generate sample fields by means of stochastic relaxation implemented via the Gibbs s…
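The MAP-by-simulated-annealing idea can be sketched on a toy binary field, standing in for the motion-label fields of the paper. The Gaussian data term, Ising-style smoothness prior, and cooling schedule below are all assumptions for illustration:

```python
import numpy as np

def map_anneal(obs, beta=1.0, sigma=0.2, T0=4.0, n_sweeps=30, seed=0):
    """MAP estimate of a binary field: Gibbs sampling with the
    temperature lowered toward zero (simulated annealing)."""
    rng = np.random.default_rng(seed)
    x = np.where(obs > 0.5, 1, 0)
    H, W = obs.shape
    for s in range(n_sweeps):
        T = T0 / (1 + s)                      # hypothetical cooling schedule
        for i in range(H):
            for j in range(W):
                nbrs = [x[(i + di) % H, (j + dj) % W]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                # local energies for x_ij = 0 and x_ij = 1
                e = [(obs[i, j] - v) ** 2 / (2 * sigma ** 2)
                     + beta * sum(int(v != nb) for nb in nbrs) for v in (0, 1)]
                # Gibbs probability of label 1 at temperature T
                p1 = 1.0 / (1.0 + np.exp(np.clip((e[1] - e[0]) / T, -50, 50)))
                x[i, j] = 1 if rng.random() < p1 else 0
    return x

# toy observation: a binary step corrupted by Gaussian noise
clean = np.zeros((6, 6))
clean[:, 3:] = 1.0
rng = np.random.default_rng(1)
obs = clean + 0.2 * rng.standard_normal(clean.shape)
est = map_anneal(obs)
```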
Signal Matching Through Scale Space
 International Journal of Computer Vision
, 1987

Cited by 75 (3 self)
Given a collection of similar signals that have been deformed with respect to each other, the general signal-matching problem is to recover the deformation. We formulate the problem as the minimization of an energy measure that combines a smoothness term and a similarity term. The minimization reduces to a dynamic system governed by a set of coupled, first-order differential equations. The dynamic system finds an optimal solution at a coarse scale and then tracks it continuously to a fine scale. Among the major themes in recent work on visual signal matching have been the notions of matching as constrained optimization, of variational surface reconstruction, and of coarse-to-fine matching. Our solution captures these in a precise, succinct, and unified form. Results are presented for one-dimensional signals, a motion sequence, and a stereo pair.
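The coarse-to-fine strategy can be sketched for the special case of recovering a constant integer shift between two 1-D signals; box-filter smoothing and a discrete search stand in for the paper's continuous scale-space tracking:

```python
import numpy as np

def coarse_to_fine_shift(a, b, scales=(8, 4, 2, 1)):
    """Estimate the shift d such that np.roll(b, d) best matches a:
    solve at a heavily smoothed (coarse) scale first, then refine the
    estimate in a narrow window at successively finer scales."""
    def smooth(x, w):
        if w <= 1:
            return x
        return np.convolve(x, np.ones(w) / w, mode='same')
    shift = 0
    for s in scales:
        sa, sb = smooth(a, s), smooth(b, s)
        best, best_err = shift, np.inf
        for d in range(shift - s, shift + s + 1):   # search near current estimate
            err = float(np.sum((sa - np.roll(sb, d)) ** 2))
            if err < best_err:
                best, best_err = d, err
        shift = best
    return shift
```

The coarse solve keeps the search window small at fine scales, which is the practical payoff of tracking a solution from coarse to fine.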
A distributed connectionist production system
 Cognitive Science
, 1988

Cited by 70 (0 self)
DCPS is a connectionist production system interpreter that uses distributed representations. As a connectionist model, it consists of many simple, richly interconnected neuron-like computing units that cooperate to solve problems in parallel. One motivation for constructing DCPS was to demonstrate that connectionist models are capable of representing and using explicit rules. A second motivation was to show how “coarse coding” or “distributed representations” can be used to construct a working memory that requires far fewer units than the number of different facts that can potentially be stored. The simulation we present is intended as a detailed demonstration of the feasibility of certain ideas and should not be viewed as a full implementation of production systems. Our current model has only a few of the many interesting emergent properties that we eventually hope to demonstrate: it is damage-resistant, it performs matching and variable binding by massively parallel constraint satisfaction, and the capacity of its working memory depends on the similarity of the items being stored.
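The coarse-coding claim, a working memory using far fewer units than the number of representable facts, can be illustrated directly. The unit counts, code size, and presence threshold below are arbitrary choices, not DCPS's actual parameters:

```python
import numpy as np

# each fact is assigned a small random subset of a shared pool of units;
# a fact counts as "present" when nearly all of its units are active
rng = np.random.default_rng(0)
N_UNITS, K = 200, 12
facts = [rng.choice(N_UNITS, size=K, replace=False) for _ in range(50)]

memory = np.zeros(N_UNITS)          # far fewer units than the 50 possible facts

def store(code):
    memory[code] = 1.0              # activate the fact's units

def present(code, threshold=0.9):
    return memory[code].mean() >= threshold

for code in facts[:5]:              # store five of the fifty facts
    store(code)
```

Because codes overlap, storing several facts activates a shared pool of units; an unstored fact is very unlikely to have nearly all of its units active by accident, so false positives stay rare.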
Convex halfquadratic criteria and interacting auxiliary variables for image restoration
 IEEE Trans. Image Processing
, 2001

Cited by 38 (13 self)
Abstract—This paper deals with convex half-quadratic criteria and associated minimization algorithms for the purpose of image restoration. It brings a number of original elements within a unified mathematical presentation based on convex duality. Firstly, Geman and Yang’s [1] and Geman and Reynolds’s [2] constructions are revisited, with a view to establishing convexity properties of the resulting half-quadratic augmented criteria when the original nonquadratic criterion is already convex. Secondly, a family of convex Gibbsian energies that incorporate interacting auxiliary variables is revealed as a potentially fruitful extension of Geman and Reynolds’s construction. Index Terms—Convex duality, coordinate descent algorithms, edge-preserving restoration, Gibbs–Markov models, line processes.
Analog "Neuronal" Networks in Early Vision
, 1985

Cited by 35 (8 self)
Many problems in early vision can be formulated in terms of minimizing an energy or cost function. Examples are shape-from-shading, edge detection, motion analysis, structure from motion, and surface interpolation (Poggio, Torre and Koch, 1985). It has been shown that all quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical or chemical networks (Poggio and Koch, 1985). In a variety of situations, however, the cost function is nonquadratic, for instance in the presence of discontinuities. The use of nonquadratic cost functions raises the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank (1985) have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. In this paper, we show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific network solving the problem of reconstructing a smooth surface, while preserving its discontinuities, from sparsely sampled data (Geman and Geman, 1984; Marroquin, 1984; Terzopoulos, 1984). These results suggest a novel computational strategy for solving such problems for both biological and artificial vision systems.
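A discrete caricature of such a network, gradient descent on an energy with an explicit line process that switches off smoothing at discontinuities, might look like the sketch below. The 1-D setting, the energy form, and every parameter are simplifications, not the network implemented in the paper:

```python
import numpy as np

def reconstruct_with_breaks(samples, mask, lam=0.5, alpha=2.0, lr=0.1, n_iter=2000):
    """Sparse-data reconstruction (1-D) with a binary line process l:
    energy = data term + lam*(1-l_i)*(du_i)^2 + alpha*l_i. The line
    process is eliminated in closed form each step; u then follows the
    energy gradient, as an analog network's state would."""
    n = len(samples)
    idx = np.arange(n)
    u = np.interp(idx, idx[mask], samples[mask])   # initial interpolated guess
    l = np.zeros(n - 1)
    for _ in range(n_iter):
        du = u[1:] - u[:-1]
        l = (lam * du ** 2 > alpha).astype(float)  # break iff smoothing costs more than alpha
        flux = lam * (1.0 - l) * du
        grad = np.zeros(n)
        grad[:-1] -= 2.0 * flux                    # smoothness pulls neighbours together...
        grad[1:] += 2.0 * flux                     # ...except across declared breaks
        grad += 2.0 * mask * (u - samples)         # data term at the sampled points
        u -= lr * grad
    return u, l
```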
Toward 3D Vision from Range Images: An Optimization Framework and Parallel Networks

Cited by 15 (10 self)
We propose a unified approach to solving low-, intermediate- and high-level computer vision problems for 3D object recognition from range images. All three levels of computation are cast in an optimization framework and can be implemented on a neural-network-style architecture. In the low-level computation, the task is to estimate curvature images from the input range data. Subsequent processing at the intermediate level is concerned with segmenting these curvature images into coherent curvature-sign maps. At the high level, image features are matched against model features based on an object description called the attributed relational graph (ARG). We show that the computational tasks at each of the three levels can all be formulated as optimizing a two-term energy function, where the first term encodes unary constraints and the second binary ones. These energy functions are minimized using parallel and distributed relaxation-based algorithms which are well suited for neural…
Bayesian Image Restoration And Segmentation By Constrained Optimization
 IEEE Transactions on Image Processing
, 1996

Cited by 9 (3 self)
A constrained optimization method, called the Lagrange-Hopfield (LH) method, is presented for solving Markov random field (MRF) based Bayesian image estimation problems for restoration and segmentation. The method combines the augmented Lagrangian multiplier technique with the Hopfield network to solve a constrained optimization problem into which the original Bayesian estimation problem is reformulated. The LH method effectively overcomes instabilities that are inherent in the penalty method (e.g. the Hopfield network) or the Lagrange multiplier method in constrained optimization. An additional advantage of the LH method is its suitability for neural-like analog implementation. Experimental results are presented which show that LH yields good-quality solutions at reasonable computational costs.
1. INTRODUCTION
Image restoration recovers a degraded image, and segmentation partitions an image into regions of similar image properties. Both can be posed generally as image estimation…
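The combination of augmented Lagrangian multipliers with Hopfield-style sigmoid units can be sketched on a toy winner-take-all problem (maximize c·x over x in [0,1]^n subject to sum(x) = 1). Every detail below, the energy, the update rates, and the loop structure, is an illustrative guess, not the paper's LH algorithm:

```python
import numpy as np

def lagrange_hopfield_wta(c, lr=0.1, rho=5.0, n_outer=50, n_inner=200):
    """Toy 'Lagrange + Hopfield' sketch: sigmoid units relax an
    augmented Lagrangian L = -c.x + mu*g + (rho/2)*g^2 with
    g = sum(x) - 1, while an outer loop performs multiplier ascent,
    avoiding the pure-penalty instabilities the abstract mentions."""
    u = np.zeros(len(c))                     # internal states; outputs x = sigmoid(u)
    mu = 0.0                                 # multiplier for the constraint sum(x) = 1
    for _ in range(n_outer):
        for _ in range(n_inner):             # inner network relaxation
            x = 1.0 / (1.0 + np.exp(-u))
            g = x.sum() - 1.0
            dLdx = -c + mu + rho * g         # gradient of L with respect to x
            u -= lr * dLdx * x * (1.0 - x)   # chain rule through the sigmoid
        x = 1.0 / (1.0 + np.exp(-u))
        mu += rho * (x.sum() - 1.0)          # multiplier ascent step
    return 1.0 / (1.0 + np.exp(-u))

# the unit with the largest gain should win, with the outputs summing to ~1
x = lagrange_hopfield_wta(np.array([0.1, 0.9, 0.3]))
```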