Results 1 - 3 of 3
Adaptive CMOS: From Biological Inspiration to Systems-on-a-Chip
PROCEEDINGS OF THE IEEE, 2002
Competitive Learning With Floating-Gate Circuits
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2002
Abstract

Cited by 5 (1 self)
Competitive learning is a general technique for training clustering and classification networks. We have developed an 11-transistor silicon circuit, which we term an auto-maximizing bump circuit, that uses silicon physics to naturally implement a similarity computation, local adaptation, simultaneous adaptation and computation, and nonvolatile storage. This circuit is an ideal building block for constructing competitive-learning networks. We illustrate the adaptive nature of the auto-maximizing bump in two ways. First, we demonstrate a silicon competitive-learning circuit that clusters one-dimensional (1-D) data. We then illustrate a general architecture based on the auto-maximizing bump circuit; we show the effectiveness of this architecture, via software simulation, on a general clustering task. We corroborate our analysis with experimental data from circuits fabricated in a 0.35-µm CMOS process.
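The clustering behaviour this abstract describes can be sketched in software as a winner-take-all competitive-learning loop on 1-D data. This is a minimal software analogue, not the silicon circuit: the similarity measure here is plain squared distance rather than the bump circuit's response, and the unit count, learning rate, and evenly-spread initialization are illustrative assumptions.

```python
def competitive_learning_1d(data, n_units=3, lr=0.05, epochs=50):
    """Winner-take-all competitive learning on 1-D data.

    Software analogue of a competitive-learning network: for each
    input, the most similar unit wins and only the winner adapts.
    """
    lo, hi = min(data), max(data)
    # Illustrative choice: spread initial unit weights evenly over the data range.
    weights = [lo + i * (hi - lo) / (n_units - 1) for i in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Similarity computation: the closest unit wins the competition.
            w = min(range(n_units), key=lambda i: (x - weights[i]) ** 2)
            # Local adaptation: only the winner moves toward the input.
            weights[w] += lr * (x - weights[w])
    return sorted(weights)

# Three well-separated 1-D clusters; the units settle near the cluster means.
centers = competitive_learning_1d(
    [0.1, 0.12, 0.09, 0.5, 0.52, 0.48, 0.9, 0.88, 0.91] * 5)
```

Each winning unit performs a running-average update toward its cluster, so the final weights approximate the cluster centroids.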
Cost functions to estimate a posteriori probabilities in multiclass problems
IEEE Trans. Neural Networks, 1999
Abstract

Cited by 4 (2 self)
Abstract—The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed in this paper. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions: those which satisfy two usually desirable properties, symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions. Index Terms—Neural networks, pattern classification, probability estimation.
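As a concrete instance of the cost-function family this abstract discusses, the cross entropy (one of the symmetric, separable members it names) can be minimized by stochastic gradient descent in a single-layer softmax network; at the minimum, the outputs estimate the a posteriori class probabilities. The network shape, learning rate, and toy data below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def train_softmax(samples, n_classes, n_feats, lr=0.1, epochs=200, seed=1):
    """Single-layer network trained by stochastic gradient descent on the
    cross-entropy cost; its outputs converge to posterior estimates."""
    rng = random.Random(seed)
    samples = list(samples)
    W = [[0.0] * n_feats for _ in range(n_classes)]
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            p = predict_proba(W, x)
            for k in range(n_classes):
                # Gradient of cross-entropy w.r.t. logit k: p_k - 1{k == y}.
                g = p[k] - (1.0 if k == y else 0.0)
                for j in range(n_feats):
                    W[k][j] -= lr * g * x[j]
    return W

def predict_proba(W, x):
    """Softmax outputs: nonnegative and summing to one, as probability
    laws require."""
    z = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    m = max(z)  # subtract the max logit for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy data: input [1, 0] is class 0 with empirical probability 0.8,
# input [0, 1] is class 1 with empirical probability 0.8.
data = ([([1.0, 0.0], 0)] * 8 + [([1.0, 0.0], 1)] * 2 +
        [([0.0, 1.0], 0)] * 2 + [([0.0, 1.0], 1)] * 8)
W = train_softmax(data, n_classes=2, n_feats=2)
```

After training, `predict_proba(W, [1.0, 0.0])` approaches [0.8, 0.2], matching the empirical posterior rather than a hard 0/1 decision, which is the property the paper's conditions characterize.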