Results 1–10 of 496
Logical minimisation of metarules within Meta-Interpretive Learning
"... Abstract. Meta-Interpretive Learning (MIL) is an ILP technique which uses higher-order metarules to support predicate invention and learning of recursive definitions. In MIL the selection of metarules is analogous to the choice of refinement operators in a refinement graph search. The metarules d ..."
Cited by 2 (1 self)
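The role of a metarule can be made concrete with a small sketch: a higher-order template such as the chain metarule P(X,Y) :- Q(X,Z), R(Z,Y) is turned into a first-order clause by binding its predicate variables. A minimal Python illustration (the predicate names and string encoding are hypothetical, not Metagol's actual representation):

```python
# Sketch of metarule instantiation in MIL. The chain metarule
#   P(X,Y) :- Q(X,Z), R(Z,Y)
# is a second-order template; binding its predicate variables P, Q, R
# to concrete predicate symbols yields an ordinary first-order clause.

def chain_metarule(p, q, r):
    """Instantiate the chain metarule with predicate symbols p, q, r."""
    return f"{p}(X,Y) :- {q}(X,Z), {r}(Z,Y)."

# Hypothetical kinship example: grandparent via two parent steps.
clause = chain_metarule("grandparent", "parent", "parent")
# clause == "grandparent(X,Y) :- parent(X,Z), parent(Z,Y)."
```

Restricting the learner to a small set of such templates is what makes the analogy with refinement operators precise: each metarule admits only clauses of its shape.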
Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces
Journal of Machine Learning Research, 2004
"... We propose a novel method of dimensionality reduction for supervised learning problems. Given a regression or classification problem in which we wish to predict a response variable Y from an explanatory variable X, we treat the problem of dimensionality reduction as that of finding a low-dimensional ..."
Cited by 162 (34 self)
using covariance operators on reproducing kernel Hilbert spaces. This characterization allows us to derive a contrast function for estimation of the effective subspace. Unlike many conventional methods for dimensionality reduction in supervised learning, the proposed method requires neither assumptions
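The notion of an "effective subspace" can be illustrated with a toy search. This is not the authors' covariance-operator contrast function, only a sketch of the problem it solves: among candidate directions of a 2-D input, find the one along which a one-dimensional ordering best explains Y.

```python
import math, random

# Toy illustration of supervised dimensionality reduction: Y depends on
# the input only through a projection, and we grid-search the projection
# angle whose ordering of the data best predicts Y (a crude contrast,
# not the RKHS covariance-operator contrast of the paper).

random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
Y = [math.sin(3 * x1) for x1, x2 in X]   # Y depends on x1 only

def residual(theta):
    # Project onto direction theta, then sum squared Y-differences of
    # neighbours along the projection: small iff Y varies smoothly
    # along this direction.
    proj = [x1 * math.cos(theta) + x2 * math.sin(theta) for x1, x2 in X]
    order = sorted(range(len(X)), key=lambda i: proj[i])
    return sum((Y[order[i]] - Y[order[i + 1]]) ** 2
               for i in range(len(order) - 1))

best = min((residual(t * math.pi / 36), t * math.pi / 36) for t in range(36))
# The minimising angle should lie close to the x1 axis (theta near 0 or pi).
```

The paper's contribution is precisely a principled, assumption-light contrast function for this search in general dimension.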
Learning MetaRules Of Selection In Expert Systems
"... Rule selection in forward chaining is a critical factor in the performance of expert systems. Uninformed selection causes many rules to be fired, that are not useful in the attainment of the reasoning goal. As a result, users have to answer more questions than needed and the system's performanc ..."
The idea rests on the use of a neural network to (meta)learn dynamically (i.e., whilst the expert system is run) in which situations which rules are worth applying. Prior metarules, if they exist, may also be improved by the neural network. Empirical results demonstrate that the (meta)knowledge encoded
Incremental Online Learning in High Dimensions
Neural Computation, 2005
"... Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally e ..."
Cited by 164 (19 self)
dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it i) learns rapidly with second-order learning methods based on incremental training, ii) uses statistically sound stochastic leave-one-out cross
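The core building block, a locally linear model weighted by distance to the query, can be sketched in plain Python. This is the batch, one-dimensional special case only; LWPR itself is incremental and additionally learns local projections, and the bandwidth below is an arbitrary choice:

```python
import math

# Locally weighted (linear) regression: fit a separate weighted linear
# model around each query point, with Gaussian weights on the training
# data. This is the idea LWPR builds on.

def lwr_predict(xq, xs, ys, bandwidth=0.2):
    """Predict at query xq with a Gaussian-weighted local linear fit."""
    w = [math.exp(-((x - xq) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    xm = sum(wi * x for wi, x in zip(w, xs)) / sw      # weighted mean x
    ym = sum(wi * y for wi, y in zip(w, ys)) / sw      # weighted mean y
    cov = sum(wi * (x - xm) * (y - ym) for wi, x, y in zip(w, xs, ys))
    var = sum(wi * (x - xm) ** 2 for wi, x in zip(w, xs))
    beta = cov / var if var > 1e-12 else 0.0           # local slope
    return ym + beta * (xq - xm)

xs = [i / 20 for i in range(41)]          # grid 0.0 .. 2.0
ys = [math.sin(x) for x in xs]
# lwr_predict(1.0, xs, ys) should land close to sin(1.0)
```

LWPR replaces the batch sums with incrementally updated sufficient statistics and projects each local model onto a few learned directions, which is what keeps it tractable in high dimensions.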
A Preliminary Performance Comparison of Five Machine Learning Algorithms for Practical IP Traffic Flow Classification
Computer Communication Review, 2006
"... The identification of network applications through observation of associated packet traffic flows is vital to the areas of network management and surveillance. Currently popular methods such as port number and payload-based identification exhibit a number of shortfalls. An alternative is to use mach ..."
Cited by 113 (4 self)
machine learning (ML) techniques and identify network applications based on per-flow statistics, derived from payload-independent features such as packet length and inter-arrival time distributions. The performance impact of feature set reduction, using Consistency-based and Correlation-based feature
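A sketch of the per-flow setup, using a nearest-centroid rule as a stand-in for the five algorithms the paper actually compares (the feature values and application labels below are invented for illustration):

```python
# Each flow is summarised by payload-independent statistics; here two
# hypothetical features: mean packet length (bytes) and mean
# inter-arrival time (seconds). A nearest-centroid classifier then
# assigns the flow to the application with the closest training centroid.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(flow, centroids):
    return min(centroids,
               key=lambda app: sum((a - b) ** 2
                                   for a, b in zip(centroids[app], flow)))

training = {
    "bulk":        [(1400.0, 0.01), (1350.0, 0.02)],  # large packets, dense
    "interactive": [(80.0, 0.30), (120.0, 0.25)],     # small packets, sparse
}
centroids = {app: centroid(rows) for app, rows in training.items()}
# classify((1380.0, 0.015), centroids) → "bulk"
```

Feature-set reduction, as studied in the paper, would shrink the tuples above before training; the classification interface is unchanged.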
Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning
Journal of Machine Learning Research, 2001
"... We consider the use of two additive control variate methods to reduce the variance of performance gradient estimates in reinforcement learning problems. The first approach we consider is the baseline method, in which a function of the current state is added to the discounted value estimate. We relat ..."
Cited by 57 (1 self)
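The baseline method from the abstract can be checked on a one-step toy problem: subtracting a constant b from the return leaves the REINFORCE gradient estimate (R - b) * d/dθ log π(a|θ) unbiased but can shrink its variance dramatically (the rewards and baseline value below are arbitrary):

```python
import math, random

# One-step, two-action policy-gradient problem with a sigmoid policy.
random.seed(1)
theta = 0.0
p1 = 1 / (1 + math.exp(-theta))          # P(action 1) = 0.5
rewards = {0: 1.0, 1: 1.2}               # hypothetical toy rewards

def grad_sample(baseline):
    # Single-sample REINFORCE gradient estimate with a control variate.
    a = 1 if random.random() < p1 else 0
    score = (1 - p1) if a == 1 else -p1  # d/dtheta log pi(a)
    return (rewards[a] - baseline) * score

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

n = 20000
no_base = variance([grad_sample(0.0) for _ in range(n)])
with_base = variance([grad_sample(1.1) for _ in range(n)])  # b near mean reward
# with_base is far smaller than no_base; the mean is unchanged in expectation.
```

In this degenerate example the baseline b = 1.1 is nearly optimal, so the with-baseline estimate is almost deterministic; the paper derives and estimates such variance-minimising choices in the general sequential setting.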
Active Learning in Multilayer Perceptrons
1996
"... We propose an active learning method with hidden-unit reduction, which is devised specially for multilayer perceptrons (MLP). First, we review our active learning method, and point out that many Fisher-information-based methods applied to MLP have a critical problem: the information matrix may be si ..."
Cited by 59 (1 self)
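The singularity problem can be reproduced in miniature: if two hidden units are redundant, the gradients of their output weights coincide and the Fisher information matrix loses rank (a deliberately degenerate toy network, not the paper's experimental setup):

```python
import math, random

# Toy MLP output y = v1*tanh(w*x) + v2*tanh(w*x): both hidden units
# share the same input weight w, so they compute identical activations.
random.seed(2)
w = 0.7

def grad(x):
    h = math.tanh(w * x)     # both hidden units compute the same h
    return (h, h)            # dy/dv1, dy/dv2 coincide

# Empirical Fisher information over the output weights (v1, v2):
# average of outer products of the gradient.
xs = [random.uniform(-2, 2) for _ in range(100)]
F = [[sum(grad(x)[i] * grad(x)[j] for x in xs) / len(xs)
      for j in range(2)] for i in range(2)]
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
# det is zero: F is rank 1, so criteria that invert the information
# matrix break down, which is the critical problem the abstract names.
```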
Name tagging with word clusters and discriminative training
Proceedings of HLT, 2004
"... We present a technique for augmenting annotated training data with hierarchical word clusters that are automatically derived from a large unannotated corpus. Cluster membership is encoded in features that are incorporated in a discriminatively trained tagging model. Active learning is used to select ..."
Cited by 86 (0 self)
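Cluster-membership features of the kind described are typically bit-string prefixes from a hierarchical (Brown-style) clustering; a sketch with made-up cluster assignments:

```python
# Hierarchical clusters assign each word a bit string (its path in the
# cluster tree); prefixes of several lengths become tagging features.
# The word-to-bit-string assignments below are invented for illustration.

clusters = {
    "London": "0110",
    "Paris":  "0111",
    "ran":    "1010",
}

def cluster_features(word, prefix_lengths=(1, 2, 4)):
    bits = clusters.get(word)
    if bits is None:
        return []                       # unclustered word: no features
    return [f"cluster_prefix_{n}={bits[:n]}" for n in prefix_lengths]

# "London" and "Paris" share their short-prefix features, so a
# discriminative tagger can generalise from one city to the other.
```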
Kernel dimensionality reduction for supervised learning
2003
"... We propose a novel method of dimensionality reduction for supervised learning. Given a regression or classification problem in which we wish to predict a variable Y from an explanatory vector X, we treat the problem of dimensionality reduction as that of finding a low-dimensional “effective subspace ..."
Cited by 9 (0 self)
Instance pruning techniques
Machine Learning: Proceedings of the Fourteenth International Conference (ICML ’97), 1997
"... The nearest neighbor algorithm and its derivatives are often quite successful at learning a concept from a training set and providing good generalization on subsequent input vectors. However, these techniques often retain the entire training set in memory, resulting in large memory requirements and ..."
Cited by 79 (8 self)
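The idea of pruning stored instances can be illustrated with Hart's classic condensed nearest neighbour rule (the paper's own reduction techniques are more sophisticated): keep only the instances that the current subset misclassifies.

```python
# Condensed nearest neighbour on 1-D points: grow a subset until it
# classifies every training instance correctly by 1-NN.

def nearest_label(x, subset):
    """Label of the stored instance nearest to x."""
    return min(subset, key=lambda p: (p[0] - x) ** 2)[1]

def condense(training):
    subset = [training[0]]
    changed = True
    while changed:
        changed = False
        for x, y in training:
            if nearest_label(x, subset) != y:
                subset.append((x, y))    # keep only misclassified points
                changed = True
    return subset

# Two well-separated classes: interior points are pruned away.
training = ([(x / 10, 0) for x in range(10)] +
            [(x / 10, 1) for x in range(20, 30)])
subset = condense(training)
# subset keeps just one prototype per class here, with zero training error
```

Memory drops from 20 stored instances to 2 while generalisation on this toy set is unchanged, which is exactly the trade-off the paper evaluates at scale.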