Results 1 - 5 of 5
Simulated Annealing with Extended Neighbourhood
, 1991
"... Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems for which little prior knowledge is available. It can produce very high quality solutions for hard combinatorial optimization problems. However, the computation time required by SA is very large. V ..."
Abstract

Cited by 23 (16 self)
Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems for which little prior knowledge is available. It can produce very high-quality solutions for hard combinatorial optimization problems. However, the computation time required by SA is very large. Various methods have been proposed to reduce the computation time, but they mainly deal with careful tuning of SA's control parameters. This paper first analyzes the impact of the neighbourhood on SA's performance and shows that SA with a larger neighbourhood is better than SA with a smaller one. The paper also gives a general model of SA, which has both a dynamic generation probability and a dynamic acceptance probability, and proves its convergence. All variants of SA can be unified under such a generalization. Finally, a method of extending SA's neighbourhood is proposed, which uses a discrete approximation to some continuous probability function as the generation function in SA, and several impo...
Entropic Priors
, 1991
"... : Entropic priors assign probabilities by combining in an inseparable way the information theoretic concept of entropy with the underlying Riemannian geometry of the hypothesis space. These priors form the cornerstone of a developing new and more objective Bayesian theory of inference. Contents 1 I ..."
Abstract

Cited by 7 (0 self)
Entropic priors assign probabilities by combining in an inseparable way the information-theoretic concept of entropy with the underlying Riemannian geometry of the hypothesis space. These priors form the cornerstone of a developing new and more objective Bayesian theory of inference.
Contents: 1 Introduction; 2 Background: Entropy, Geometry, and Priors (2.1 The Kullback Number; 2.2 Fisher Information Metric; 2.3 Prior Information is More Data); 3 Applications (3.1 Empirical Bayes; 3.2 Time Series (3.2.1 The Signal Manifold; 3.2.2 Separating Frequencies from Amplitudes); 3.3 Image Reconstruction (3.3.1 Digital Imaging; ...)
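The contents above lean on two standard objects of information geometry. For orientation, in generic notation (not necessarily the paper's), the Kullback number is the relative entropy between two distributions, and the Fisher information metric is the covariance of the score:

```latex
K(p \,\|\, q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx,
\qquad
g_{ij}(\theta) = \int p(x \mid \theta)\,
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}\,dx .
```

The entropic-prior construction combines these two quantities; the specific combination is developed in the paper itself.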
Annealing chaotic neural network with nonlinear self-feedback and its application to clustering problem
 Pattern Recognition
"... Chaos is a revolutionary concept, which brings a novel strategy of science for researchers. In this paper, a chaotic neural network is proposed and the simulated annealing strategy also embedded to construct an annealed chaotic neural network (ACNN) and apply to the clustering problem. In addition t ..."
Abstract

Cited by 1 (0 self)
Chaos is a revolutionary concept that offers researchers a novel scientific strategy. In this paper, a chaotic neural network is proposed and a simulated annealing strategy is embedded in it to construct an annealed chaotic neural network (ACNN), which is applied to the clustering problem. In addition to retaining the characteristics of conventional neural units, the ACNN displays a rich range of behaviour reminiscent of that observed in real neurons. Unlike a conventional neural network, the ACNN has rich and flexible dynamics, so it can be expected to have a higher ability to search for globally optimal or near-optimum results. However, a chaotic neural network alone does not settle into the global solution, because its chaotic dynamical mechanism is not well understood. In this paper, a chaotic mechanism with an annealing strategy is introduced into the Hopfield network to construct an ACNN with a better chance of converging to the optimal solution. In the experimental results, unlike fuzzy clustering methods, which get stuck in local-minimum solutions, the ACNN method can always obtain near-global optimal results. In the classification of real multispectral images, the ACNN obtains suitable results. © 2001 Pattern Recognition Society. Published by Elsevier
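To make the "annealed chaos" idea above concrete, the following sketches a single neuron whose nonlinear self-feedback strength z(t) is annealed toward zero, in the general spirit of transiently chaotic networks (Chen-Aihara style). The model form and all parameter values are illustrative assumptions, not taken from this paper:

```python
import math

def transiently_chaotic_neuron(steps=400, k=0.9, alpha=0.015, i0=0.65,
                               z0=0.08, beta=0.01, eps=0.004, bias=0.2):
    """Single neuron with decaying chaotic self-feedback z(t).

    y is the internal state; the output x passes through a steep sigmoid.
    While z is large the self-feedback can drive irregular dynamics; as z
    decays (the 'annealing'), the map becomes a contraction and the neuron
    settles toward a fixed point.
    """
    y, z = 0.1, z0
    xs = []
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-y / eps))      # steep sigmoid output
        y = k * y + alpha * bias - z * (x - i0)   # nonlinear self-feedback term
        z *= (1.0 - beta)                         # anneal the feedback away
        xs.append(x)
    return xs, z

xs, z_final = transiently_chaotic_neuron()
```

In a full network, the `alpha * bias` term would be replaced by weighted inputs from the other neurons; here a constant stands in for them.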
A hybrid approach for image halftoning combining simulated annealing and Neural Networks based techniques
"... Abstract: A classes of stochastic algorithms, which are very powerful in the case of the degraded image reconstruction, are simulated annealing based algorithms. However, the reconstruction of a degraded image using iterative stochastic process require a large number of operations and is still out o ..."
Abstract
Abstract: A class of stochastic algorithms that is very powerful for degraded-image reconstruction is the family of simulated-annealing-based algorithms. However, reconstructing a degraded image with an iterative stochastic process requires a large number of operations and remains far from real time. On the other hand, the learning and generalization capability of ANN models enables a large panel of techniques that improve on the limitations of classical techniques. We are investigating parallel implementations of image processing techniques. In this paper, we present a hybrid approach to image halftoning combining simulated annealing and neural-network-based techniques. Simulation and experimental results are reported.
Entropic Priors
, 1991
"... Abstract: Entropic priors assign probabilities by combining in an inseparable way the information theoretic concept of entropy with the underlying Riemannian geometry of the hypothesis space. These priors form the cornerstone of a developing new and more objective Bayesian theory of inference. ..."
Abstract
Abstract: Entropic priors assign probabilities by combining in an inseparable way the information theoretic concept of entropy with the underlying Riemannian geometry of the hypothesis space. These priors form the cornerstone of a developing new and more objective Bayesian theory of inference.