Results 1–10 of 104
Quantization
IEEE Trans. Inform. Theory, 1998
Abstract

Cited by 830 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
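Bennett's high-resolution analysis mentioned in this abstract predicts that, for a fine quantizer step, the quantization error is roughly uniform on [-step/2, step/2] and so has variance about step²/12. A minimal numerical sketch of that claim (the function name, step size, and test signal are ours, not from the survey):

```python
import random

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: round x to the nearest multiple of step."""
    return step * round(x / step)

# High-resolution approximation: for a Gaussian input and a small step,
# the error q(x) - x behaves like uniform noise with variance step**2 / 12.
random.seed(0)
step = 0.1
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
errors = [uniform_quantize(x, step) - x for x in samples]
noise_var = sum(e * e for e in errors) / len(errors)
# noise_var comes out close to step**2 / 12 ≈ 8.3e-4
```

Note that this approximation holds only when the step is small relative to the input's spread; for coarse quantizers the error is signal-dependent and the uniform-noise model breaks down.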
Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research
1999
Abstract

Cited by 47 (1 self)
This article briefly summarizes the work that has been done and presents the current standing of neural networks for combinatorial optimization by considering each of the major classes of combinatorial optimization problems. Areas which have not yet been studied are identified for future research.
The complexity of analog computation
Math. and Computers in Simulation 28, 1986
Abstract

Cited by 39 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church's Thesis: that any analog computer can be simulated efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An NP-complete problem, 3SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church's Thesis, that P ≠ NP, and a certain "Downhill Principle" governing the physical behavior of the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church's Thesis for a class of analog computers described by well-behaved ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial optimization.
Unsupervised Learning by Convex and Conic Coding
Advances in Neural Information Processing Systems 9, 1997
Abstract

Cited by 38 (7 self)
Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer.

1 Introduction

Vector quantization (VQ) and principal component analysis (PCA) are two widely used unsupervised learning algorithms, based on two fundamentally different ways of encoding data. In VQ, the input is encoded as the index of the closest prototype stored in memory. In PCA, the input is encoded as the coefficients of a linear superposition of a set of basis ...
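The two encodings this abstract contrasts can be sketched in a few lines (the function names, prototypes, and basis vectors below are illustrative choices of ours, not from the paper):

```python
def vq_encode(x, prototypes):
    """VQ: encode x as the index of the closest stored prototype."""
    dists = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in prototypes]
    return dists.index(min(dists))

def linear_encode(x, basis):
    """PCA-style: encode x as coefficients <x, b_k> over an orthonormal basis."""
    return [sum(xi * bi for xi, bi in zip(x, b)) for b in basis]

prototypes = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
basis = [(1.0, 0.0), (0.0, 1.0)]
x = (0.9, 1.1)

code_vq = vq_encode(x, prototypes)        # a single index
code_pca = linear_encode(x, basis)        # a coefficient vector
```

The convex and conic encoders of the paper sit between these extremes: like PCA they return a coefficient vector over basis vectors, but the coefficients are constrained (to a convex or conic combination), which is what lets them discover local models and parts-like features.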
Algebraic Transformations of Objective Functions
Neural Networks, 1994
Abstract

Cited by 27 (11 self)
Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of...
A Survey of Factory Control Algorithms which Can be Implemented in a Multi-Agent Heterarchy: Dispatching, Scheduling, and Pull
To appear in the Journal of Manufacturing Systems, ©1998 SME, 1998
Abstract

Cited by 16 (1 self)
This paper describes various multi-agent architectures including the heterarchical architecture. It reviews the claimed advantages of multi-agent heterarchies and describes the types of factories which could use this architecture. It surveys the three common types of factory control algorithms: dispatching algorithms, scheduling algorithms, and pull algorithms. It then asks the question: which of these algorithms can be implemented in a multi-agent heterarchy? This paper describes how all common factory control algorithms used in industry can be implemented in a multi-agent heterarchy. It discusses how many of the algorithms which are popular in current research can be implemented in a multi-agent heterarchy, while others will require further research.
A CMOS Analog Adaptive BAM with On-Chip Learning and Weight Refreshing
1993
Abstract

Cited by 13 (1 self)
In this paper we will extend the transconductance-mode (T-mode) approach [1] for implementing analog continuous-time neural network hardware systems to include on-chip Hebbian learning and on-chip analog weight storage capability. The demonstration vehicle used is a 5+5 neuron bidirectional associative memory (BAM) prototype fabricated in a standard 2 μm double-metal double-polysilicon CMOS process (through and thanks to MOSIS). Mismatches and non-idealities in learning neural hardware are supposed not to be critical if on-chip learning is available, because they will be implicitly compensated. However, mismatches in the learning circuits themselves cannot always be compensated. This mismatch is especially important if the learning circuits use transistors operating in weak inversion. In this paper we will estimate the expected mismatch between learning circuits in the BAM network prototype and evaluate its effect on the learning performance, using theoretical computations and Monte Carlo HSPICE simulations. Afterwards we will verify these theoretical predictions with the experimentally measured results on the test vehicle prototype.
Problem Solving with Optimization Networks
1993
Abstract

Cited by 11 (0 self)
Summary: ... previously seemed, since they can be successfully applied to only a limited number of problems exhibiting special, amenable properties.

Key words: combinatorial optimization, neural networks, mean field annealing.

Acknowledgements: I am greatly indebted to my supervisor, Richard Prager, for initially allowing me the freedom to explore various research areas, and subsequently providing invaluable support as my work progressed. Members of the Speech, Vision and Robotics Group at the Cambridge University Department of Engineering have provided a stimulating and friendly environment to work in: special thanks must go to Patrick Gosling and Tony Robinson for maintaining a superb computing service, and to Sree Aiyer for both setting me on the right course and for numerous helpful discussions since then. I would like to thank the Science and Engineering Research Council of Great Britain, the Cambridge University Department of Engineering and Queen
A Neural Architecture for a Class of Abduction Problems
IEEE Transactions on Systems, Man, and Cybernetics, 1996
Abstract

Cited by 10 (0 self)
The general task of abduction is to infer a hypothesis that best explains a set of data. A typical subtask of this is to synthesize a composite hypothesis that best explains the entire data from elementary hypotheses which can explain portions of it. The synthesis subtask of abduction is computationally expensive, more so in the presence of certain types of interactions between the elementary hypotheses. In this paper, we first formulate the abduction task as a nonmonotonic constrained-optimization problem. We then consider a special version of the general abduction task that is linear and monotonic. Next, we describe a neural network based on the Hopfield model of computation for the special version of the abduction task. The connections in this network are symmetric, the energy function contains product forms, and the minimization of this function requires a network of order greater than two. We then discuss another neural architecture which is composed of functional module...