Results 1–2 of 2
Annealed Theories of Learning
In J.H., 1995
Abstract

Cited by 9 (1 self)
We study annealed theories of learning boolean functions using a concept class of finite cardinality. The naive annealed theory can be used to derive a universal learning curve bound for zero-temperature learning, similar to the inverse-square-root bound from the Vapnik–Chervonenkis theory. Tighter, non-universal learning curve bounds are also derived. A more refined annealed theory leads to still tighter bounds, which in some cases are very similar to results previously obtained using one-step replica symmetry breaking.

1. Introduction The annealed approximation [1] has proven to be an invaluable tool for studying the statistical mechanics of learning from examples. Previously it was found that the annealed approximation gave qualitatively correct results for several models of perceptrons learning realizable rules [2]. Because of its simplicity relative to the full quenched theory, the annealed approximation has since been used in studies of more complicated multilayer architectures. ...
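For context, the inverse-square-root bound alluded to above is, in its textbook form for a finite concept class, a Hoeffding-plus-union-bound statement (this is the standard VC-style result for finite classes, not the paper's refined annealed bound): with probability at least \(1-\delta\) over \(m\) i.i.d. examples,

```latex
\sup_{c \in \mathcal{C}}
\bigl|\,\varepsilon_{\mathrm{test}}(c) - \varepsilon_{\mathrm{train}}(c)\,\bigr|
\;\le\;
\sqrt{\frac{\ln(2\,|\mathcal{C}|/\delta)}{2m}}
```

so the generalization gap of a zero-temperature (training-error-minimizing) learner decays no slower than \(O(1/\sqrt{m})\) for any fixed finite class \(\mathcal{C}\).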
Stochastic Perceptron and Semiparametric Statistical Inference
, 1993
Abstract
It was reported (Kabashima and Shinomoto 1992) that estimators of a binary decision boundary show asymptotically strange behaviors when the probability model is illposed. We give a rigorous analysis of this phenomenon in a stochastic perceptron by using the estimating function method. A stochastic perceptron consists of a neuron which is excited depending on the weighted sum of inputs but its probability distribution form is unknown here. It is shown that there exists no p nconsistent estimator of the threshold value h, that is, no estimator h which converges to h in the order of 1= p n as the number n of observations increases. Therefore, the accuracy of estimation is much worse in this semiparametric case with an unspecified probability function than in the ordinary case. On the other hand, it is shown that there is a p nconsistent estimator w of the synaptic weight vector. These results elucidate strange behaviors of learning curves in a semiparametric statistical model....
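To make the notion of √n-consistency concrete, the following Monte Carlo sketch checks the defining property — RMSE shrinking like 1/√n, so quadrupling n halves the error — for the simplest √n-consistent estimator, the sample mean. This is a generic illustration of the rate, not an implementation of the paper's stochastic-perceptron model or of the estimating function method.

```python
import math
import random

def rmse_of_sample_mean(n, trials=4000, seed=0):
    """Monte Carlo RMSE of the sample mean of n standard-normal draws.

    The true mean is 0, so each trial's squared error is just the
    squared sample mean.  A sqrt(n)-consistent estimator has
    RMSE ~ c / sqrt(n).
    """
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(trials):
        est = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        sq_err += est * est
    return math.sqrt(sq_err / trials)

r100 = rmse_of_sample_mean(100)   # theoretical RMSE: 1/sqrt(100) = 0.10
r400 = rmse_of_sample_mean(400)   # theoretical RMSE: 1/sqrt(400) = 0.05
print(r100 / r400)  # close to 2 for a 1/sqrt(n)-rate estimator
```

The paper's point is that for the threshold h no estimator attains this rate when the noise distribution is unspecified, whereas the weight vector w still admits an estimator whose error ratio would behave like the one printed above.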