Results 1-10 of 11
The complexity of analog computation
 in Mathematics and Computers in Simulation 28 (1986)
Abstract

Cited by 39 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church's Thesis: that any analog computer can be simulated efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An NP-complete problem, 3SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church's Thesis, that P ≠ NP, and a certain "Downhill Principle" governing the physical behavior of the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church's Thesis for a class of analog computers described by well-behaved ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial optimization.
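The kind of simulation this abstract's Strong Church's Thesis posits can be illustrated with a minimal sketch: a digital computer stepping a well-behaved "downhill" ODE forward with a fixed-step integrator. The objective function and step size below are illustrative choices, not the paper's construction.

```python
# A minimal sketch: digitally simulating an analog system obeying a
# "Downhill Principle", dx/dt = -f'(x), via forward-Euler integration.
# The toy objective f(x) = (x - 3)^2 and the step size are hypothetical.

def grad_f(x):
    # Derivative of the toy objective f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def simulate_downhill(x0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = -f'(x)."""
    x = x0
    for _ in range(steps):
        x -= dt * grad_f(x)
    return x

x_final = simulate_downhill(x0=10.0)
print(round(x_final, 3))  # → 3.0, the minimum of the toy objective
```

Each Euler step costs a constant amount of digital work, so simulating such a well-behaved ODE for polynomially many steps uses polynomial resources, which is the flavor of claim the thesis makes.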
Learning With Preknowledge: Clustering With Point and Graph Matching Distance Measures
 Neural Computation
, 1996
Abstract

Cited by 28 (10 self)
Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2-D point sets and graphs are learned by clustering with point matching and graph matching distance measures. The point matching distance measure is approximately invariant under affine transformations (translation, rotation, scale, and shear) and permutations. It operates between noisy images with missing and spurious points. The graph matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (≈ million variables) are efficiently minimized using a combination of optimization techniques: softassign, algebraic transformations, clocked objectives, and deterministic annealing. 1 Introduction: While few biologists today would subscribe to Locke's description of the nascent mind as a tabula rasa, the nature of the inherent constraints (Kant's preknowledge) that helps org...
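The permutation invariance mentioned in the abstract can be sketched with a toy distance between small 2-D point sets that minimizes over correspondences, so reordering points costs nothing. This brute-force version is illustrative only; it is not the paper's affine-invariant point-matching measure, which also handles missing and spurious points.

```python
# A minimal sketch, assuming a toy permutation-invariant distance between
# equal-size 2-D point sets (NOT the paper's affine-invariant measure):
# minimize summed squared distances over all point correspondences.
from itertools import permutations

def perm_invariant_dist(a, b):
    """Min over correspondences of summed squared point distances."""
    assert len(a) == len(b)
    best = float("inf")
    for perm in permutations(range(len(b))):
        d = sum((ax - b[j][0]) ** 2 + (ay - b[j][1]) ** 2
                for (ax, ay), j in zip(a, perm))
        best = min(best, d)
    return best

s1 = [(0.0, 0.0), (1.0, 0.0)]
s2 = [(1.0, 0.0), (0.0, 0.0)]   # the same set, permuted
print(perm_invariant_dist(s1, s2))  # → 0.0
```

Brute force over permutations is exponential in the set size; techniques like softassign and deterministic annealing, cited in the abstract, are precisely what makes such matching tractable at scale.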
Algebraic Transformations of Objective Functions
 Neural Networks
, 1994
Abstract

Cited by 26 (11 self)
Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of...
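One transformation in the spirit of this abstract can be checked numerically: a Legendre transform replaces the term x²/2 in an objective with max over σ of (σx − σ²/2), introducing a new auxiliary variable (an "interneuron") and turning a minimization into a saddle-point problem. The specific term below is an illustrative choice, not an example taken from the paper.

```python
# A minimal sketch: the Legendre transform of the term x^2/2.
# For any fixed x, max_sigma (sigma*x - sigma^2/2) equals x^2/2,
# attained at sigma = x; here we verify this on a grid of sigma values.

def original_term(x):
    return 0.5 * x * x

def transformed_term(x, sigmas):
    # Maximize the transformed expression over candidate sigma values.
    return max(s * x - 0.5 * s * s for s in sigmas)

sigmas = [i / 100.0 for i in range(-500, 501)]  # grid over [-5, 5]
x = 1.7
print(abs(original_term(x) - transformed_term(x, sigmas)) < 1e-3)  # → True
```

The equality of the two terms at the optimum is what lets such transformations preserve the fixpoints of the objective while changing the network that implements it.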
Bayesian inference on visual grammars by neural nets that optimize
 YALE COMPUTER SCIENCE DEPARTMENT
, 1991
Abstract

Cited by 15 (3 self)
We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise removal tasks result. The neural nets that arise most directly are generalized assignment networks. Also there are transformations which naturally yield improved algorithms such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including "abstract" levels remote from the final image data, and
Analog Parallel Computational Geometry
, 1993
Abstract

Cited by 1 (0 self)
We introduce a novel approach to Parallel Computational Geometry by using networks of analog components, referred to as analog networks or analog circuits. The analog network we study here is the Analog Hopfield Net which was originally introduced by Hopfield (1983) as a simplified electronic model of human brain cells. Massively parallel Analog Hopfield Nets with large numbers of processing elements (neurons) exist in hardware and have proven to be efficient architectures for important problems (e.g. for constructing an associative memory). We demonstrate how Computational Geometry problems can be solved by exploiting the features of such analog parallel architectures. Using massively parallel analog networks requires a radically different approach to traditional parallel geometric problem solving because (i) time is continuous instead of the discretized time step used for traditional parallel (or sequential) processing, and (ii) geometric data is represented by analog components (e.g. voltages at certain positions of the circuit) instead of the usual digital representation.
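The continuous-time dynamics referred to in this abstract can be sketched by numerically integrating the standard analog Hopfield equations du/dt = −u + T·g(u) + I, where the voltages g(u) carry the analog state. The 2-neuron weights and inputs below are hypothetical, chosen only to show the net settling into a stable state; they are not from the paper.

```python
# A minimal sketch (hypothetical parameters): integrating continuous-time
# analog Hopfield dynamics du_i/dt = -u_i + sum_j T_ij g(u_j) + I_i
# with forward Euler; g is a smooth sigmoid-like activation.
import math

def g(u):
    return math.tanh(u)  # smooth neuron activation

def run_hopfield(T, I, u0, dt=0.01, steps=5000):
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        du = [-u[i] + sum(T[i][j] * g(u[j]) for j in range(n)) + I[i]
              for i in range(n)]
        u = [u[i] + dt * du[i] for i in range(n)]
    return [g(x) for x in u]

# Symmetric mutual inhibition: the two units settle to opposite states.
T = [[0.0, -2.0], [-2.0, 0.0]]
I = [0.5, -0.5]
v = run_hopfield(T, I, u0=[0.1, -0.1])
print(v[0] > 0 and v[1] < 0)  # → True
```

The contrast with the analog setting is point (i) of the abstract: the hardware evolves in continuous time, whereas this digital sketch must discretize time into Euler steps.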
Combinatorial Optimization and Image Analysis: a literature survey.
, 1995
Abstract
This report presents a selection of combinatorial problems that arise at different stages in the analysis of realistic (and militarily relevant) imagery. They may serve as a testbed to compare different optimization techniques. This report is intended for researchers specialized in combinatorial optimization. The combinatorial optimization techniques are therefore only briefly introduced. A brief introduction to some image processing terminology is also provided. Only a few comparative studies on the effectiveness of combinatorial optimization techniques for solving combinatorial problems in image analysis are currently available. This report gives a survey of the results of these studies. Combinatorial Optimization and Image Analysis: a literature survey. A. Toet. SUMMARY: The main goal of the EUCLID (EUropean Cooperation for the Long term In Defence) CALMA (Combinatorial Algorithms for Military Applications) RTP (Research and Technology Project) 6.4 project is to investigate the usability of various existing combinatorial optimization techniques for solving complex combinatorial problems. Several combinatorial optimization techniques have been developed over the last thirty years (e.g. simulated annealing, mean field annealing, neural networks, genetic algorithms, tabu search). Most of these have only been tested on strongly simplified problems (e.g. the "traveling salesman problem"). It is therefore currently not known to what extent these methods can solve complex realistic problems, such as those occurring in military environments, and what computing capacity they require. Various stages of the ... require the solution of large combinatorial prob...
AFRL-RH-WP-TR-2011-0019 AN INTELLIGENT DECISION SUPPORT SYSTEM FOR WORKFORCE FORECAST
, 2011
"... Distribution A: Approved for public release; distribution is unlimited. ..."