Results 1–10 of 17
Global Optimization of Statistical Functions with Simulated Annealing
 Journal of Econometrics
, 1994
Abstract

Cited by 126 (1 self)
Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.
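As background for the comparison above, a minimal simulated annealing loop for a one-dimensional objective can be sketched as follows (an illustrative Python sketch, not the algorithm or test problems from the paper; the step size, geometric cooling schedule and parameter values are arbitrary choices):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize f(x) for scalar x. Uphill moves are accepted with
    probability exp(-delta/T), which lets the search escape local optima;
    T shrinks geometrically so the walk gradually becomes greedy."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbor
        delta = f(cand) - fx
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fx + delta          # accept the move
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest

# A smooth function with many local minima; the global minimum is at x = 0.
def g(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x_star, f_star = simulated_annealing(g, x0=6.0)
```

Tracking the best point seen, rather than returning the final state, guarantees the result is never worse than the starting point even when the walk wanders.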
Gibbs Random Fields, Co-Occurrences, and Texture Modeling
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1993
Abstract

Cited by 29 (2 self)
Gibbs random field (GRF) models and co-occurrence statistics are typically considered as separate but useful tools for texture discrimination. In this paper we show an explicit relationship between co-occurrences and a large class of GRFs. This result comes from a new framework based on a set-theoretic concept called the "aura set" and on measures of this set, "aura measures". This framework is also shown to be useful for relating different texture analysis tools: we show how the aura set can be constructed with morphological dilation, how its measure yields co-occurrences, and how it can be applied to characterizing the behavior of the Gibbs model for texture. In particular, we show how the aura measure generalizes, to any number of gray levels and neighborhood order, some properties previously known for just the binary, nearest-neighbor GRF. Finally, we illustrate how these properties can guide one's intuition about the types of GRF patterns which are most likely to form.
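The co-occurrence statistics that the paper relates to GRFs are straightforward to compute; the following sketch (not from the paper; the number of gray levels and the displacement are arbitrary choices) builds a gray-level co-occurrence matrix for one offset:

```python
import numpy as np

def cooccurrence(img, offset=(0, 1), levels=4):
    """Gray-level co-occurrence matrix for one displacement vector:
    C[a, b] counts pixel pairs (p, p + offset) with gray levels a and b."""
    dr, dc = offset
    rows, cols = img.shape
    C = np.zeros((levels, levels), dtype=int)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:   # ignore pairs off the image
                C[img[r, c], img[r2, c2]] += 1
    return C

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]])
C = cooccurrence(img)  # counts over horizontal neighbor pairs
```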
A stochastic approach to stereo vision
 In AAAI
, 1986
Abstract

Cited by 21 (0 self)
A stochastic optimization approach to stereo matching is presented. Unlike conventional correlation matching and feature matching, the approach provides a dense array of disparities, eliminating the need for interpolation. First, the stereo matching problem is defined in terms of finding a disparity map that satisfies two competing constraints: (1) matched points should have similar image intensity, and (2) the disparity map should be smooth. These constraints are expressed in an "energy" function that can be evaluated locally. A simulated annealing algorithm is used to find a disparity map that has very low energy (i.e., in which both constraints have simultaneously been approximately satisfied). Annealing allows the large-scale structure of the disparity map to emerge at higher temperatures, and avoids the problem of converging too quickly on a local minimum. Results are shown for a sparse random-dot stereogram, a vertical aerial stereogram (shown in comparison to ground truth), and an oblique ground-level scene with occlusion boundaries.
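The two-term energy described above can be rendered roughly as follows (a hypothetical Python sketch, not the authors' implementation; the squared-difference data term and squared disparity-gradient smoothness term are illustrative choices):

```python
import numpy as np

def stereo_energy(left, right, disp, lam=1.0):
    """Energy of an integer disparity map under the two constraints:
    (1) data term: matched pixels should have similar intensity;
    (2) smoothness term: neighboring disparities should be similar."""
    rows, cols = left.shape
    data = 0.0
    for r in range(rows):
        for c in range(cols):
            c2 = c - disp[r, c]            # corresponding column in the right image
            if 0 <= c2 < cols:
                data += (float(left[r, c]) - float(right[r, c2])) ** 2
    smooth = float(np.sum(np.diff(disp, axis=0) ** 2) +
                   np.sum(np.diff(disp, axis=1) ** 2))
    return data + lam * smooth
```

A simulated annealing search would then propose single-pixel disparity changes and accept or reject them with the usual Metropolis rule on this energy; because both terms are sums over pixels and neighbor pairs, the energy change of one proposal can be evaluated locally.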
Principled Halftoning Based on Human Vision Models
, 1992
Abstract

Cited by 14 (1 self)
When models of human vision adequately measure the relative quality of candidate halftonings of an image, the problem of halftoning the image becomes equivalent to the search problem of finding a halftone that optimizes the quality metric. Because of the vast number of possible halftones, and the complexity of image quality measures, this principled approach has usually been put aside in favor of fast algorithms that seem to perform well. We find that the principled approach can lead to a range of useful halftoning algorithms, as we trade off speed for quality by varying the complexity of the quality measure and the thoroughness of the search. High quality halftones can be obtained reasonably quickly, for example, by using as a measure the vector length of the error image filtered by a contrast sensitivity function, and, as the search procedure, the sequential adjustment of individual pixels to improve the quality measure. If computational resources permit, simulated anne...
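The metric-plus-search idea can be illustrated with a toy version (not the paper's code; a 3x3 box blur stands in for the contrast sensitivity filter, and the search is the sequential pixel adjustment mentioned above):

```python
import numpy as np

def halftone_greedy(img, sweeps=5):
    """Sequentially flip each binary pixel and keep the flip whenever it
    lowers the filtered squared error against the original image.
    A 3x3 box blur is a crude stand-in for a contrast sensitivity filter."""
    def blur(x):
        p = np.pad(x, 1, mode='edge')
        out = np.zeros_like(x)
        for i in range(3):
            for j in range(3):
                out += p[i:i + x.shape[0], j:j + x.shape[1]] / 9.0
        return out

    def cost(h):
        return float(np.sum(blur(h - img) ** 2))

    h = (img > 0.5).astype(float)            # start from plain thresholding
    best = cost(h)
    for _ in range(sweeps):
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                h[r, c] = 1.0 - h[r, c]      # tentative flip
                trial = cost(h)
                if trial < best:
                    best = trial             # keep the improvement
                else:
                    h[r, c] = 1.0 - h[r, c]  # revert
    return h, best

img = np.full((4, 4), 0.5)                   # flat mid-gray patch
ht, err = halftone_greedy(img)
```

Replacing the greedy accept rule with a Metropolis rule turns this into the simulated annealing variant the abstract alludes to, at a further cost in speed.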
Fine Structures Preserving Markov Model for Image Processing
 Proceedings of the 9th Scandinavian Conference on Image Analysis
, 1995
Abstract

Cited by 9 (2 self)
We propose a new model based on a Markovian framework for either binary image restoration or segmentation. We first show that classical regularization with the Ising model means "minimize the length of edges", which is not equivalent to "minimize the noise". With such a model, fine structures and lines are lost during a restoration process. We therefore propose a new regularization model whose energy allows independent control of edge, line and noise penalization. The chosen set of cliques allows the definition of edges and lines in eight directions; we thus consider a model that is isotropic with respect to those directions. This leads to a characterization of images closer to reality than the Ising-type model provides. During a restoration process, noise deletion can be performed without erasing fine structures. Moreover, fine lines are preserved. The improvements over the classical Ising model are demonstrated both theoretically and by simulations.
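For reference, the classical Ising-prior restoration energy the paper criticizes can be written in a few lines (an illustrative sketch; the 4-neighborhood and unit weights are the textbook choices, not the authors' extended eight-direction clique set):

```python
import numpy as np

def ising_energy(x, y, beta=1.0, lam=1.0):
    """Restoration energy with the classical Ising prior: the prior term
    counts disagreeing 4-neighbor pairs (total edge length), the data
    term counts pixels that differ from the observed image y."""
    prior = int(np.sum(x[:-1, :] != x[1:, :]) + np.sum(x[:, :-1] != x[:, 1:]))
    data = int(np.sum(x != y))
    return beta * prior + lam * data

# A one-pixel-wide line of length L has a boundary of roughly 2L + 2, so
# for large beta the minimizer simply erases it -- the loss of fine
# structures that motivates the paper's extended clique set.
y = np.ones((3, 3), dtype=int)
x = y.copy()
x[1, 1] = 0   # one flipped pixel: prior cost 4, data cost 1
```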
Simulated Annealing and Genetic Algorithms for Shape Detection
, 1996
Abstract

Cited by 7 (0 self)
In this paper we consider the problem of recognizing simple geometric shapes in a picture corrupted by noise. The algorithmic techniques we use for its solution are simulated annealing, genetic algorithms and a constructive method based on noise filtering. Simulated annealing is a powerful stochastic technique for solving combinatorial optimization problems. One of its main drawbacks is its high computational requirements; because of this, a number of parallel implementations have been proposed [1, 5, 8, 10, 17, 23, 30]. In particular, in [10] some problem-independent parallel implementations of simulated annealing are described. Simulated annealing has also been proposed to solve image recognition problems [6, 7, 28]; in particular, [6] proposes a parallel implementation of simulated annealing for the shape detection problem. In this paper we present the results obtained using the farming implementation of simulated annealing proposed in [10] for other applications. In Section 2, the shape detection problem is formally defined and its representation as a combinatorial optimization problem is described. In Section 3 the general simulated annealing algorithm is described together with some of the parallel implementations proposed for it. In Section 4 we describe a genetic algorithm for the shape detection problem; this algorithm is inherently parallel. In Section 5 we present a constructive heuristic for the shape detection problem based on a noise filter. Performance measurements for the different algorithms are presented in Section 6.
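A minimal genetic algorithm of the kind used for such combinatorial problems can be sketched as follows (illustrative only, shown on the standard OneMax toy fitness rather than a shape detection energy; the operators and parameter values are arbitrary choices, not the paper's):

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=60, p_mut=0.02, seed=0):
    """Generational GA: tournament selection of size 2, one-point
    crossover, independent bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)                        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=20)  # OneMax: maximize the number of 1s
```

Fitness evaluation of the population is what makes such an algorithm inherently parallel: every individual can be scored independently.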
M-Lattice: A System For Signal Synthesis And Processing Based On Reaction-Diffusion
 ScD Thesis, MIT
, 1994
Abstract

Cited by 5 (3 self)
This research begins with reaction-diffusion, first proposed by Alan Turing in 1952 to account for morphogenesis: the formation of hydranth tentacles, leopard spots, zebra stripes, etc. Reaction-diffusion systems have been researched primarily by biologists working on theories of natural pattern formation and by chemists modeling the dynamics of oscillating reactions. The past few years have seen a new interest in reaction-diffusion spring up within the computer graphics and image processing communities. However, reaction-diffusion systems are generally unbounded, making them impractical for many applications. In this thesis we introduce a bounded and more flexible nonlinear system, the "M-lattice", which preserves the natural pattern-formation properties of reaction-diffusion. On the theoretical front, we establish relationships between reaction-diffusion systems and paradigms in linear systems theory and certain types of artificial "neurally inspired" systems. The M-lattice is closel...
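Reaction-diffusion pattern formation of the kind discussed above is easy to demonstrate numerically; the sketch below integrates the Gray-Scott model on a 1D ring, a common textbook example chosen here for brevity and not the M-lattice or any system from the thesis (the parameter values are standard demo choices):

```python
import numpy as np

def gray_scott_1d(steps=2000, n=100, Du=0.16, Dv=0.08, f=0.035, k=0.065,
                  dt=1.0, seed=0):
    """Explicit Euler integration of the Gray-Scott reaction-diffusion
    equations on a 1D ring: u is the substrate, v the autocatalyst."""
    rng = np.random.default_rng(seed)
    u = np.ones(n)
    v = np.zeros(n)
    v[n // 2 - 3: n // 2 + 3] = 0.5 + 0.1 * rng.random(6)  # seed a local spot
    lap = lambda x: np.roll(x, 1) + np.roll(x, -1) - 2.0 * x  # periodic Laplacian
    for _ in range(steps):
        uvv = u * v * v                          # autocatalytic reaction u + 2v -> 3v
        u = u + dt * (Du * lap(u) - uvv + f * (1.0 - u))
        v = v + dt * (Dv * lap(v) + uvv - (f + k) * v)
    return u, v

u, v = gray_scott_1d()
```

The "unbounded" difficulty the thesis addresses is visible here: nothing in the update rule itself confines u and v to a fixed range, so stability depends on the chosen parameters and time step.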
Principled Methods For Color Dithering Based On Models Of The Human Visual System
Abstract

Cited by 2 (0 self)
In this paper, we will assume that the palette is fixed, which is the case for many printers and liquid crystal displays. When the palette is one which is "separable" in the red, green and blue components (i.e. a given level of one phosphor may be displayed regardless of the states of the other phosphors), then a simple approach is to apply your favorite achromatic dithering algorithm to the red, green and blue component images. We shall refer to this as the "independent component" method, since the resulting dither image for the red component does not depend on the values in the green or blue component images. (This method may not be suitable for printers, since the inks in general will not combine additively.) A weakness of the independent component method (as well as most other standard methods) is that it does not exploit the fact that the human visual system has relatively poor acuity for chromatic signals which do not vary in luminance. Humans can s...
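The independent component method described above is direct to sketch (illustrative Python, not the paper's code; ordered dithering with a 2x2 Bayer matrix stands in for "your favorite achromatic dithering algorithm"):

```python
import numpy as np

# 2x2 Bayer threshold matrix with levels centered in (0, 1)
BAYER2 = (np.array([[0, 2],
                    [3, 1]]) + 0.5) / 4.0

def ordered_dither(channel):
    """Halftone one gray channel (values in [0, 1]) against a tiled
    Bayer threshold matrix."""
    h, w = channel.shape
    tile = np.tile(BAYER2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return (channel > tile).astype(np.uint8)

def independent_component_dither(rgb):
    """Dither R, G and B separately; the output for one channel never
    depends on the other channels (hence 'independent component')."""
    return np.stack([ordered_dither(rgb[..., i]) for i in range(3)], axis=-1)

out = independent_component_dither(np.full((4, 4, 3), 0.5))
```

Because each channel is thresholded against the same tiled matrix, the weakness the abstract points out is apparent: nothing discourages the three binary channels from changing together, so luminance error and chromatic error are traded off blindly.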
Evolutionary Approaches to Figure-Ground Separation
, 1999
Abstract

Cited by 2 (0 self)
The problem of figure-ground separation is tackled from the perspective of combinatorial optimization. Previous attempts have used deterministic optimization techniques based on relaxation and gradient-descent search, and stochastic optimization techniques based on simulated annealing and microcanonical annealing. A mathematical model encapsulating the figure-ground separation problem, which makes explicit the definition of shape in terms of attributes such as co-circularity, smoothness, proximity and contrast, is described. The model is based on the formulation of an energy function that incorporates pairwise interactions between local image features in the form of edgels, and is shown to be isomorphic to the interacting spin (Ising) system from quantum physics. This paper explores a class of stochastic optimization techniques based on evolutionary algorithms for the problem of figure-ground separation. A class of hybrid evolutionary stochastic optimization algorithms, based on a combination of evolutionary algorithms, simulated annealing and microcanonical annealing, is shown to exhibit superior performance when compared to their purely evolutionary counterparts and to classical simulated annealing and microcanonical annealing algorithms. Experimental results on synthetic edgel maps and edgel maps derived from gray scale images are presented.
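The Ising-isomorphic pairwise energy described above can be sketched as follows (a hypothetical rendering: labels are spins in {-1, +1}, and in practice the compatibility weights W would be derived from co-circularity, proximity and contrast, which are not modeled here):

```python
import numpy as np

def figure_ground_energy(s, W):
    """Ising-form energy over edgel labels s[i] in {-1, +1} (figure vs.
    ground) with symmetric pairwise compatibility weights W[i, j].
    Compatible edgels labeled alike lower the energy."""
    return -0.5 * s @ W @ s

# Three mutually compatible edgels (hypothetical unit weights).
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
aligned = figure_ground_energy(np.array([1., 1., 1.]), W)   # consistent labeling
mixed = figure_ground_energy(np.array([1., 1., -1.]), W)    # one edgel disagrees
```

Any of the optimizers the paper compares (simulated annealing, microcanonical annealing, evolutionary and hybrid algorithms) can then be read as a search over the spin vector s for a low-energy state.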