Results 1 – 8 of 8
The complexity of analog computation
 in Math. and Computers in Simulation, 28 (1986)
Abstract

Cited by 36 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church's Thesis: that any analog computer can be simulated efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An NP-complete problem, 3-SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church's Thesis, that P ≠ NP, and a certain "Downhill Principle" governing the physical behavior of the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church's Thesis for a class of analog computers described by well-behaved ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial optimization.
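The digital simulation of a well-behaved analog computer that the abstract describes can be illustrated with a minimal sketch: forward-Euler integration of "downhill" gradient dynamics dx/dt = -∇E(x) for a smooth objective. The objective E and its gradient below are hypothetical illustrations, not taken from the paper.

```python
def simulate_downhill(grad, x0, dt=0.01, steps=2000):
    # Forward-Euler digital simulation of the analog dynamics dx/dt = -grad E(x).
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical smooth objective E(x, y) = (x^2 - 1)^2 + y^2 (not from the paper).
def grad_E(v):
    x, y = v
    return [4 * x * (x * x - 1), 2 * y]

x_star = simulate_downhill(grad_E, [0.5, 1.0])  # settles near the local optimum (1, 0)
```

The point of the paper's simulation theorem is that, for suitably well-behaved right-hand sides, a discretization like this one tracks the analog trajectory with only polynomial overhead in the requested precision.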
Learning With Preknowledge: Clustering With Point and Graph Matching Distance Measures
 Neural Computation
, 1996
Abstract

Cited by 26 (9 self)
Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2-D point sets and graphs are learned by clustering with point-matching and graph-matching distance measures. The point-matching distance measure is approximately invariant under affine transformations (translation, rotation, scale, and shear) and under permutations. It operates between noisy images with missing and spurious points. The graph-matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (∼ million variables) are efficiently minimized using a combination of optimization techniques: softassign, algebraic transformations, clocked objectives, and deterministic annealing. 1 Introduction While few biologists today would subscribe to Locke's description of the nascent mind as a tabula rasa, the nature of the inherent constraints (Kant's preknowledge) that help org...
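The softassign and deterministic-annealing techniques named above can be sketched as follows: exponentiate a match-benefit matrix at an inverse temperature beta, Sinkhorn-normalize rows and columns toward a doubly stochastic match matrix, and raise beta so the matrix hardens toward a permutation. The 2×2 benefit matrix is a hypothetical illustration, not data from the paper.

```python
import math

def softassign(benefit, beta, iters=50):
    # Exponentiate benefits at inverse temperature beta, then Sinkhorn-normalize
    # rows and columns toward a doubly stochastic match matrix.
    n = len(benefit)
    m = [[math.exp(beta * benefit[i][j]) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        m = [[v / sum(row) for v in row] for row in m]                 # row normalize
        cols = [sum(m[i][j] for i in range(n)) for j in range(n)]
        m = [[m[i][j] / cols[j] for j in range(n)] for i in range(n)]  # column normalize
    return m

# Deterministic annealing: increase beta so the soft match hardens to a permutation.
# Hypothetical benefit matrix (not from the paper): entry [i][j] rewards matching i to j.
benefit = [[1.0, 0.2], [0.1, 0.9]]
for beta in (1.0, 5.0, 25.0):
    m = softassign(benefit, beta)
# At high beta, m is close to the identity permutation (best matches on the diagonal).
```

In the paper's setting the benefit matrix itself is re-estimated inside the annealing loop (it depends on the current pose or graph correspondence), which is where the "clocked objectives" come in.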
Algebraic Transformations of Objective Functions
 Neural Networks
, 1994
Abstract

Cited by 26 (11 self)
Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations), as well as the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of...
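A minimal numerical sketch of one such Legendre-style transformation: a squared term f(x)² in an objective can be rewritten as max over an interneuron λ of 2λf(x) − λ², which turns the minimization into a saddle-point search with the same fixpoints. The particular f below is a hypothetical example, not one from the paper.

```python
def original(x):
    # Hypothetical objective term: E(x) = f(x)^2 with f(x) = x - 3 (not from the paper).
    f = x - 3.0
    return f * f

def transformed(x, lam):
    # Legendre-style rewrite: f(x)^2 = max over lam of 2*lam*f(x) - lam^2.
    # The interneuron lam converts the minimization into a saddle-point problem.
    f = x - 3.0
    return 2.0 * lam * f - lam * lam

# At the maximizing interneuron value lam = f(x), the two objectives agree,
# so minima of the original are saddle points of the transformed objective.
x = 1.25
lam_star = x - 3.0
```

The benefit in a network setting is that the transformed objective is linear in f(x), which can make an otherwise non-implementable term realizable with simple neurons at the cost of one interneuron.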
Bayesian inference on visual grammars by neural nets that optimize
 YALE COMPUTER SCIENCE DEPARTMENT
, 1991
Abstract

Cited by 15 (3 self)
We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise removal tasks result. The neural nets that arise most directly are generalized assignment networks. There are also transformations which naturally yield improved algorithms such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including "abstract" levels remote from the final image data, and
Analog Parallel Computational Geometry
, 1993
Abstract

Cited by 1 (0 self)
We introduce a novel approach to Parallel Computational Geometry by using networks of analog components, referred to as analog networks or analog circuits. The analog network we study here is the Analog Hopfield Net, which was originally introduced by Hopfield (1983) as a simplified electronic model of human brain cells. Massively parallel Analog Hopfield Nets with large numbers of processing elements (neurons) exist in hardware and have proven to be efficient architectures for important problems (e.g. for constructing an associative memory). We demonstrate how Computational Geometry problems can be solved by exploiting the features of such analog parallel architectures. Using massively parallel analog networks requires a radically different approach from that of traditional parallel geometric problem solving because (i) time is continuous instead of the discretized time step used for traditional parallel (or sequential) processing, and (ii) geometric data is represented by analog components (e.g. voltages at certain positions of the circuit) instead of the usual digital representation.
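The continuous-time dynamics of an Analog Hopfield Net can be sketched by Euler-integrating du_i/dt = -u_i + Σ_j W_ij g(u_j) + I_i with a sigmoid activation g. The two-neuron mutually inhibiting pair below (a winner-take-all circuit) uses illustrative weights and inputs, not values from the paper.

```python
import math

def hopfield_step(u, W, I, dt=0.05, gain=2.0):
    # One forward-Euler step of the continuous Hopfield dynamics
    # du_i/dt = -u_i + sum_j W[i][j] * g(u_j) + I_i, with sigmoid activation g.
    g = [1.0 / (1.0 + math.exp(-gain * ui)) for ui in u]
    n = len(u)
    return [u[i] + dt * (-u[i] + sum(W[i][j] * g[j] for j in range(n)) + I[i])
            for i in range(n)]

# Hypothetical two-neuron net with mutual inhibition (weights/inputs illustrative).
W = [[0.0, -2.0], [-2.0, 0.0]]
I = [1.2, 0.8]
u = [0.0, 0.0]
for _ in range(400):
    u = hopfield_step(u, W, I)
# The neuron receiving the larger input wins the competition.
```

In the geometric setting the abstract describes, the "digital simulation" is only a convenience for exposition; the point of the approach is that the physical circuit settles in continuous time, with geometric quantities encoded as voltages.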
A Lagrangian Formulation of Neural Networks I: Theory and Analog Dynamics
, 1997
Abstract
We expand the mathematical apparatus for relaxation networks, which conventionally consists of an objective function E and a dynamics given by a system of differential equations along whose trajectories E is diminished. Instead we (1) retain the objective function in a standard neural network form, as the measure of the network's computational functionality; (2) derive the dynamics from a Lagrangian function which depends on both E and a measure of computational cost; and (3) tune the form of the Lagrangian according to a meta-objective which may involve measuring cost and functionality over many runs of the network. The key new features are the Lagrangian, which specifies an objective function that depends on the neural network's state over all times (analogous to Lagrangians which play a similar fundamental role in physics), and its associated functional derivative from which neural-net relaxation dynamics can be derived. It is the variation which requires the dissipation critical to optim...