Results 1 – 6 of 6
Theory Refinement on Bayesian Networks
, 1991
"... Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced ..."
Abstract

Cited by 184 (5 self)
Theory refinement is the task of updating a domain theory in the light of new cases, either automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature, and so work well in both batch and incremental mode.
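The incremental refinement scheme the abstract describes can be illustrated with a minimal sketch: a single conditional probability table (CPT) of a Bayesian network maintained as Dirichlet pseudo-counts, where the expert's partial theory supplies the prior and each new case updates it. The class and variable names here are illustrative, not the paper's algorithm.

```python
from collections import defaultdict

class CPTRefiner:
    """Incrementally refine one CPT of a Bayesian network via
    Dirichlet counts. The expert's partial theory supplies prior
    pseudo-counts; each new case adds to them, so batch and
    incremental processing yield the same posterior."""

    def __init__(self, prior_counts):
        # prior_counts: {parent_state: {child_value: pseudo-count}}
        self.counts = defaultdict(lambda: defaultdict(float))
        for parents, dist in prior_counts.items():
            for value, c in dist.items():
                self.counts[parents][value] = c

    def observe(self, parents, value):
        # one new case: increment the matching Dirichlet count
        self.counts[parents][value] += 1.0

    def prob(self, parents, value):
        # posterior predictive probability = normalized counts
        dist = self.counts[parents]
        return dist[value] / sum(dist.values())

# expert prior: P(rain | cloudy) biased toward rain (3:1 pseudo-counts)
r = CPTRefiner({("cloudy",): {"rain": 3.0, "dry": 1.0}})
r.observe(("cloudy",), "rain")
print(r.prob(("cloudy",), "rain"))  # (3+1)/(4+1) = 0.8
```

Because only counts are stored, feeding cases one at a time or all at once produces identical estimates, which is the sense in which one algorithm serves both batch and incremental modes.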
Part 1: Overview of the Probably Approximately Correct (PAC) Learning Framework
, 1995
"... Here we survey some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then c ..."
Abstract

Cited by 4 (0 self)
Here we survey some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then consider some criticisms of the PAC model and the extensions proposed to address these criticisms. Finally, we look briefly at other models recently proposed in computational learning theory.
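The flavor of the PAC results this survey covers can be shown with the classic sample-complexity bound for a consistent learner over a finite hypothesis class: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice for error at most ε with probability at least 1 − δ. A small sketch (the function name is illustrative):

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Sample size sufficient for a consistent learner over a
    finite hypothesis class H in Valiant's PAC model:
        m >= (1/eps) * (ln|H| + ln(1/delta))
    guarantees error <= eps with probability >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# e.g. |H| = 2**10 hypotheses, eps = 0.1, delta = 0.05
print(pac_sample_bound(2**10, 0.1, 0.05))  # 100
```

Note the bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is why efficiency questions in the PAC model center on the accuracy parameter.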
A Statistical Perspective on Data Mining
 Future Generation Computer Systems
, 1997
"... Data mining can be regarded as a collection of methods for drawing inferences from data. The aims of data mining, and some of its methods, overlap with those of classical statistics. However, there are some philosophical and methodological differences. We examine these differences, and we describ ..."
Abstract

Cited by 1 (0 self)
Data mining can be regarded as a collection of methods for drawing inferences from data. The aims of data mining, and some of its methods, overlap with those of classical statistics. However, there are some philosophical and methodological differences. We examine these differences, and we describe three approaches to machine learning that have developed largely independently: classical statistics, Vapnik's statistical learning theory, and computational learning theory. Comparing these approaches, we conclude that statisticians and data miners can profit by studying each other's methods and using a judiciously chosen combination of them. Key words: classification, frequentist inference, PAC learning, statistical learning theory.
Two Papers on FeedForward Networks
, 1991
"... REPORT DOCUMENTATION PAGE I OMB No. 07040188 Pubtic reporting burden for this collection of information _s estimated to average 1 hour per response, including the time for reviewing instructlo=ns, searching existing data source_, gather ncj and maintaining the data needed and compieting and review ..."
Abstract
(Scanned Report Documentation Page, OMB No. 0704-0188; no readable abstract text was recovered from the OCR.)
parameters such as σ and λ, that is Pr(λ | w, σ) = Pr(λ). Posterior probabilities of network weights are as follows. For regression with Gaussian error and unknown σ,
"... This paper has covered Bayesian theory relevant to the problem of training feedforward connectionist networks. We now sketch out how this might be put together in practice, assuming a standard gradient descent algorithm as used during search ..."
Abstract
This paper has covered Bayesian theory relevant to the problem of training feedforward connectionist networks. We now sketch out how this might be put together in practice, assuming a standard gradient descent algorithm as used during search
On Information Regularization
 In Proceedings of the 19th UAI
, 2003
"... We formulate a principle for classification with the knowledge of the marginal distribution over the data points (unlabeled data). The principle is cast in terms of Tikhonov style regularization where the regularization penalty articulates the way in which the marginal density should constrain ..."
Abstract
We formulate a principle for classification with the knowledge of the marginal distribution over the data points (unlabeled data). The principle is cast in terms of Tikhonov style regularization where the regularization penalty articulates the way in which the marginal density should constrain otherwise unrestricted conditional distributions.
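The shape of such an objective can be sketched as a labeled-data fit plus a Tikhonov-style penalty that discourages the conditional P(y|x) from varying across nearby unlabeled points where the marginal density is high. This is an illustrative objective in the spirit of the abstract, not the paper's exact information regularizer; all names are hypothetical.

```python
import numpy as np

def regularized_objective(p_labeled, y, p_pairs, weights, lam):
    """Illustrative semi-supervised objective: negative log-likelihood
    on labeled points, plus a density-weighted smoothness penalty on
    the conditional probabilities at nearby unlabeled point pairs.

    p_labeled: model's P(y=1|x) at labeled points
    y:         labels in {0, 1}
    p_pairs:   (P(y=1|x_i), P(y=1|x_j)) for nearby unlabeled pairs
    weights:   marginal-density weight for each pair
    lam:       regularization strength
    """
    eps = 1e-12
    nll = -np.mean(y * np.log(p_labeled + eps)
                   + (1 - y) * np.log(1 - p_labeled + eps))
    penalty = sum(w * (pi - pj) ** 2
                  for (pi, pj), w in zip(p_pairs, weights))
    return nll + lam * penalty
```

With this structure, a conditional that changes sharply inside a dense unlabeled region is penalized more than one that changes in a sparse region, which is the sense in which the marginal density constrains the otherwise unrestricted conditionals.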