Results 1 - 6 of 6
The complexity of analog computation
 in Math. and Computers in Simulation, 28 (1986)
Cited by 38 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church's Thesis: that any analog computer can be simulated efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An NP-complete problem, 3-SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church's Thesis, that P ≠ NP, and a certain "Downhill Principle" governing the physical behavior of the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church's Thesis for a class of analog computers described by well-behaved ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial optimization.
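The simulability claim in the second half of the abstract can be illustrated with a toy sketch (not the paper's actual construction): a well-behaved ODE, standing in for an analog computer's dynamics, is tracked by forward Euler integration with digital work that grows only polynomially in the desired precision.

```python
import math

def simulate_analog(f, x0, t_end, n_steps):
    """Forward-Euler digital simulation of the ODE x' = f(x).

    For a Lipschitz (well-behaved) f, the global error is O(1/n_steps),
    so each extra digit of precision costs only polynomially more steps --
    the spirit of Strong Church's Thesis for this class of analog computers.
    """
    x, dt = x0, t_end / n_steps
    for _ in range(n_steps):
        x += dt * f(x)
    return x

# The "analog computer" relaxes as x' = -x; its exact state at t = 1 is e^-1.
approx = simulate_analog(lambda x: -x, x0=1.0, t_end=1.0, n_steps=1000)
print(abs(approx - math.exp(-1)))  # small discretization error
```

With 1000 steps the Euler trajectory already agrees with the exact solution to about four decimal places.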
An Analysis of a Class of Neural Networks for Solving Linear Programming Problems
 IEEE Trans. Auto. Contr
, 1995
Cited by 5 (2 self)
Abstract—A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples. Index Terms—Invariant sets, linear programming, neural networks, nondifferentiable optimization, penalty functions, sliding modes.
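The flavor of these dynamics can be sketched in a few lines: discrete-time subgradient descent on an exact (nondifferentiable) penalty function for a tiny linear program. The LP, the penalty parameter mu, and the step size below are illustrative choices, not values from the paper.

```python
def penalty_subgrad(x, y, mu):
    """Subgradient of the exact penalty
    E(x, y) = x + y + mu*max(0, 1 - x) + mu*max(0, 2 - y)
    for the toy LP: minimize x + y subject to x >= 1, y >= 2."""
    gx = 1.0 - (mu if x < 1.0 else 0.0)
    gy = 1.0 - (mu if y < 2.0 else 0.0)
    return gx, gy

# Discrete-time analogue of the gradient dynamics (x', y') = -grad E.
x, y, step, mu = 5.0, 5.0, 0.01, 10.0
for _ in range(2000):
    gx, gy = penalty_subgrad(x, y, mu)
    x -= step * gx
    y -= step * gy

print(round(x, 1), round(y, 1))  # near the LP solution (1, 2)
```

For sufficiently large mu the penalty is exact, so the trajectory slides onto the constraint boundary and chatters within a step-size-sized band of the LP solution, mirroring the sliding-mode behavior the paper analyzes in continuous time.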
Accurate and Precise Computation using Analog VLSI, with Applications to Computer Graphics and Neural Networks
, 1993
Cited by 3 (1 self)
This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The nature of the design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multidimensional gradient estimation for a grad...
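The idea of compensation, cancelling a circuit's systematic offset and gain error by fitting correction parameters, can be illustrated with a toy least-squares calibration. The 5% gain error and 20 mV offset below are made-up numbers, and ordinary least squares stands in for the thesis's constrained-optimization parameter search.

```python
def fit_compensation(raw, reference):
    """Least-squares fit of (gain, offset) such that
    reference ~= gain * raw + offset, cancelling a block's
    systematic gain error and offset."""
    n = len(raw)
    sx, sy = sum(raw), sum(reference)
    sxx = sum(r * r for r in raw)
    sxy = sum(r * y for r, y in zip(raw, reference))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# An "uncompensated" block with a 5% gain error and a 20 mV offset.
true_vals = [0.0, 0.25, 0.5, 0.75, 1.0]
raw = [1.05 * v + 0.02 for v in true_vals]
gain, offset = fit_compensation(raw, true_vals)
print(gain * raw[2] + offset)  # ~= 0.5 after compensation
```

A compensated block corrected this way can then be composed into larger circuits, which is the layering the abstract describes.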
A dual neural network for bicriteria kinematic control of redundant manipulators
 IEEE Trans. Robot. Automat
, 2002
Cited by 1 (1 self)
Abstract—A dual neural network is presented for the bicriteria kinematic control of redundant manipulators. To diminish the discontinuity of minimum infinity-norm solutions, the kinematic-control problem is formulated in the bicriteria of the infinity and Euclidean norms. Physical constraints such as joint limits and joint velocity limits are also incorporated simultaneously into the proposed kinematic control scheme. The single-layer dual neural network model with a simple structure is developed for bicriteria redundancy resolution of redundant manipulators subject to robot physical constraints. The dual neural network is shown to be globally convergent to optimal solutions in the bicriteria sense, and is demonstrated to be effective in controlling the PA10 robot manipulator. Index Terms—Bicriteria, dual neural network, joint limits, joint velocity limits, kinematically redundant manipulators.
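One of the two criteria being blended, the minimum Euclidean-norm resolution of redundancy, has a closed form via the pseudoinverse. A minimal sketch for a single-row Jacobian (the multi-row case and the infinity-norm criterion are what the dual network itself handles):

```python
def min_norm_joint_velocity(J, v):
    """Minimum Euclidean-norm solution of J . qdot = v for a
    single-row Jacobian J: qdot = J^T v / (J . J^T), i.e. the
    pseudoinverse solution that the bicriteria scheme blends
    with the infinity-norm criterion."""
    jj = sum(j * j for j in J)
    return [j * v / jj for j in J]

# One task velocity shared by three redundant joints: the minimum-norm
# solution spreads it evenly across the joints.
qdot = min_norm_joint_velocity([1.0, 1.0, 1.0], 1.0)
print(qdot)  # [0.333..., 0.333..., 0.333...]
```

The pure infinity-norm solution, by contrast, can switch discontinuously between joints as the task changes, which is exactly the discontinuity the bicriteria formulation is designed to diminish.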
An Analog "Neural Net"-Based Suboptimal Controller for Constrained Discrete-time Linear Systems (Brief Paper)
Key Words—Analog computer control; discrete-time systems; feedback control; multivariable control systems; neural nets; on-line operations; stability; suboptimal control. Abstract—A large class of problems frequently encountered in practice involves the control of linear time-invariant systems with states and controls restricted to closed convex regions of their respective spaces. In spite of the significance of this problem, to date it has not been solved satisfactorily except in some restricted cases. In this paper we propose a suboptimal feedback control algorithm based upon on-line optimization during the sampling interval. Theoretical results are presented showing that our approach yields asymptotically stable systems. Finally, an implementation of the control algorithm using an analog circuit is discussed. This implementation provides an alternative to the use of digital computers in the feedback loop that offers advantages in terms of cost and reliability. We believe that it may prove to be especially valuable when the time available for computations is limited. A 5th-order model of an F100 jet engine is used as an example application of the controller.
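The per-sample optimization idea can be sketched for a scalar toy system: at each sampling instant, pick the admissible control closest to the unconstrained deadbeat choice. The plant, gain, and clipping rule below are illustrative stand-ins, not the paper's algorithm.

```python
def clipped_control(x, k=1.2, u_max=1.0):
    """One step of on-line optimization for a scalar constrained plant:
    choose the u in [-u_max, u_max] closest to the unconstrained
    deadbeat control -k*x (a toy analogue of optimizing within the
    sampling interval; k and the plant below are made up)."""
    u = -k * x
    return max(-u_max, min(u_max, u))

# Unstable scalar plant x+ = 1.2 x + u with |u| <= 1. The constrained
# controller still drives the state to the origin from x0 = 3.
x = 3.0
for _ in range(20):
    x = 1.2 * x + clipped_control(x)
print(x)  # driven to 0 despite the control saturation
```

While the control saturates, the state only shrinks geometrically; once the deadbeat control becomes admissible, the state is cancelled in one step, which is the asymptotic-stability behavior the abstract claims for the general algorithm.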