Results 1 - 10 of 7,272

Dynamic Bayesian Networks: Representation, Inference and Learning

by Kevin Patrick Murphy, 2002
Cited by 770 (3 self)
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete ..."
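As a point of reference for the factored-state idea, here is a minimal HMM forward pass in Python: the single-discrete-state likelihood computation that DBNs generalize. The transition, emission, and initial probabilities are invented illustrative values, not anything from the thesis.

```python
import numpy as np

# Minimal HMM forward pass: the "single discrete state" model that
# DBNs generalize by factoring the state into several variables.
# All probabilities here are made-up illustrative values.
A = np.array([[0.7, 0.3],    # transition matrix P(z_t | z_{t-1})
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # emission matrix P(x_t | z_t)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward(obs):
    """Return P(x_1..x_T) by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]
    return alpha.sum()

print(forward([0, 1, 1, 0]))  # likelihood of a short observation sequence
```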

Reinforcement Learning I: Introduction

by Richard S. Sutton, Andrew G. Barto, 1998
Cited by 5614 (118 self)
"... In which we try to give a basic intuitive sense of what reinforcement learning is and how it differs and relates to other fields, e.g., supervised learning and neural networks, genetic algorithms and artificial life, control theory. Intuitively, RL is trial and error (variation and selection, search ..."
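A toy illustration of the trial-and-error framing, not taken from the book: an epsilon-greedy bandit that alternates random exploration (variation) with greedy exploitation (selection). The arm reward means are invented.

```python
import random

# Tiny epsilon-greedy bandit: trial and error in its simplest form.
# The true arm means below are invented for illustration.
true_means = [0.2, 0.5, 0.8]
Q = [0.0] * 3          # estimated value of each arm
N = [0] * 3            # pull counts
epsilon = 0.1

for t in range(1000):
    if random.random() < epsilon:
        a = random.randrange(3)                # explore (variation)
    else:
        a = max(range(3), key=lambda i: Q[i])  # exploit (selection)
    r = random.gauss(true_means[a], 1.0)       # noisy reward
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]                  # incremental mean update

print(Q)  # estimates should approach [0.2, 0.5, 0.8]
```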

Neural network ensembles, cross validation, and active learning

by Anders Krogh, Jesper Vedelsby - Neural Information Processing Systems 7, 1995
Cited by 479 (6 self)
"... Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it qua ..."
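The ambiguity decomposition the abstract alludes to can be checked numerically. The sketch below uses a synthetic regression problem and three crude bootstrap-fit linear members (all invented); it verifies that ensemble error equals average member error minus ambiguity, and that the ambiguity term needs no labels.

```python
import numpy as np

# Ambiguity decomposition for a uniformly weighted regression ensemble:
# ensemble squared error = average member error - ambiguity, where
# ambiguity is the spread of member outputs around the ensemble mean.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x)                          # hypothetical target

# Three crude "ensemble members": noisy linear fits on bootstraps.
preds = []
for _ in range(3):
    idx = rng.integers(0, len(x), len(x))
    w, b = np.polyfit(x[idx], y[idx] + rng.normal(0, 0.1, len(x)), 1)
    preds.append(w * x + b)
preds = np.array(preds)                    # shape (members, points)

f_bar = preds.mean(axis=0)                 # ensemble output
avg_err = ((preds - y) ** 2).mean()        # mean member error
ambiguity = ((preds - f_bar) ** 2).mean()  # disagreement, no labels needed
ens_err = ((f_bar - y) ** 2).mean()

print(ens_err, avg_err - ambiguity)        # the two should match
```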

A new learning algorithm for blind signal separation

by S. Amari, A. Cichocki, H. H. Yang, 1996
Cited by 622 (80 self)
"... A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural ..."
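A minimal sketch of an on-line update of the natural-gradient, equivariant form described here, W ← W + η(I − φ(y)yᵀ)W. The tanh nonlinearity and the synthetic super-Gaussian sources are stand-ins, not the activation function the paper derives from the Gram-Charlier expansion.

```python
import numpy as np

# Natural-gradient blind separation sketch: W += lr * (I - phi(y) y^T) W.
# tanh is a common stand-in nonlinearity for super-Gaussian sources;
# sources and mixing matrix below are invented for illustration.
rng = np.random.default_rng(1)
n, T = 2, 5000
S = rng.laplace(0, 1, (n, T))   # two synthetic super-Gaussian sources
A = rng.normal(size=(n, n))     # unknown mixing matrix
X = A @ S                       # observed mixtures
X = X / X.std(axis=1, keepdims=True)

W = np.eye(n)
lr = 0.002
for t in range(T):
    y = W @ X[:, t]
    # Equivariant natural-gradient update: no matrix inversion needed.
    W += lr * (np.eye(n) - np.outer(np.tanh(y), y)) @ W

Y = W @ X                       # recovered sources, up to permutation/scale
print(np.round(W @ A, 2))       # should approach a scaled permutation
```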

Gradient-based learning applied to document recognition

by Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner - Proceedings of the IEEE, 1998
Cited by 1533 (84 self)
"... Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify hi ..."
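For scale, a toy instance of gradient-based learning by back-propagation: a two-layer network fit to XOR in NumPy. This is nowhere near the convolutional architectures the paper studies; the task, initialization, and learning rate are all illustrative choices.

```python
import numpy as np

# Two-layer network trained by back-propagation on XOR: a toy
# instance of gradient-based learning of a nonlinear decision surface.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    out = 1 / (1 + np.exp(-(h @ W2 + b2))) # sigmoid output
    d_out = out - y                        # grad of cross-entropy + sigmoid
    d_h = (d_out @ W2.T) * (1 - h**2)      # backpropagate through tanh
    W2 -= lr * h.T @ d_out / 4; b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / 4;   b1 -= lr * d_h.mean(0)

print(out.round(2))  # should approach [0, 1, 1, 0]
```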

Hierarchical mixtures of experts and the EM algorithm

by Michael I. Jordan, Robert A. Jacobs, 1993
Cited by 885 (21 self)
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain."
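A flattened, one-level toy version of the idea, fit by EM: two linear experts with a shared fixed noise scale and input-independent mixing weights, just to show the E and M steps. The paper's architecture adds gating networks and a hierarchy; all data and parameter choices below are invented.

```python
import numpy as np

# EM for a mixture of two linear experts (no gating network):
# E-step computes expert responsibilities, M-step refits each expert
# by responsibility-weighted least squares and updates mixing weights.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
z = rng.random(300) < 0.5                  # hidden regime labels
y = np.where(z, 2 * x + 1, -x) + rng.normal(0, 0.1, 300)

w = np.array([1.0, -2.0]); b = np.array([0.0, 0.5])  # expert params
pi, sigma = np.array([0.5, 0.5]), 0.3

for _ in range(50):
    # E-step: posterior responsibility of each expert for each point.
    mu = np.outer(w, x) + b[:, None]                   # (2, N) predictions
    lik = pi[:, None] * np.exp(-(y - mu) ** 2 / (2 * sigma**2))
    r = lik / lik.sum(axis=0)
    # M-step: weighted least squares per expert, update mixing weights.
    for k in range(2):
        W = r[k]
        xm, ym = (W * x).sum() / W.sum(), (W * y).sum() / W.sum()
        w[k] = (W * (x - xm) * (y - ym)).sum() / (W * (x - xm) ** 2).sum()
        b[k] = ym - w[k] * xm
    pi = r.mean(axis=1)

print(w.round(2), b.round(2))  # should recover roughly (2, 1) and (-1, 0)
```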

Cognitive Radio: Brain-Empowered Wireless Communications

by Simon Haykin, 2005
Cited by 1541 (4 self)
"... Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following ..."

An evaluation of statistical approaches to text categorization

by Yiming Yang - Journal of Information Retrieval, 1999
Cited by 663 (22 self)
"... This paper focuses on a comparative evaluation of a wide-range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine th ... were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performance; except for a Naive Bayes approach, the other learning algorithms also performed relatively well."
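A bare-bones kNN text categorizer over bag-of-words cosine similarity, in the spirit of the kNN method named here; the documents, labels, and choice of k are invented examples, not the Reuters setup the paper evaluates.

```python
import numpy as np
from collections import Counter

# Toy kNN text categorization: represent documents as L2-normalized
# bag-of-words vectors, rank training docs by cosine similarity,
# and take a majority vote among the k nearest neighbours.
train = [("wheat corn harvest", "grain"),
         ("corn export wheat", "grain"),
         ("bank rate profit", "finance"),
         ("profit loss bank", "finance")]

vocab = sorted({w for doc, _ in train for w in doc.split()})

def vec(doc):
    c = Counter(doc.split())
    v = np.array([c[w] for w in vocab], float)
    return v / (np.linalg.norm(v) or 1.0)

X = np.array([vec(d) for d, _ in train])
labels = [l for _, l in train]

def classify(doc, k=3):
    sims = X @ vec(doc)                    # cosine similarities
    top = np.argsort(sims)[-k:]            # k nearest neighbours
    return Counter(labels[i] for i in top).most_common(1)[0][0]

print(classify("wheat harvest report"))    # -> "grain"
```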

Probabilistic Inference Using Markov Chain Monte Carlo Methods

by Radford M. Neal, 1993
Cited by 736 (24 self)
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. R ... from data, and Bayesian learning for neural networks."
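The basic recipe such a review covers is random-walk Metropolis: propose a perturbation of the current state, accept it with probability min(1, ratio of target densities). The unnormalized bimodal target below is an arbitrary illustrative choice.

```python
import numpy as np

# Random-walk Metropolis sampler over an unnormalized 1-D target.
# Normalization constants cancel in the acceptance ratio, which is
# exactly what makes MCMC usable for complex distributions.
def target(x):
    return np.exp(-(x - 2) ** 2) + np.exp(-(x + 2) ** 2)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(50000):
    prop = x + rng.normal(0, 1.0)              # symmetric proposal
    if rng.random() < target(prop) / target(x):
        x = prop                               # accept, else stay put
    samples.append(x)

samples = np.array(samples)
print(samples.mean(), samples.std())           # mean near 0, std near 2.2
```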

Data Mining: Concepts and Techniques

by Jiawei Han, Micheline Kamber, 2000
Cited by 3142 (23 self)
"... Our capabilities of both generating and collecting data have been increasing rapidly in the last several decades. Contributing factors include the widespread use of bar codes for most commercial products, the computerization of many business, scientific and government transactions and managements, a ... warehouses, and other massive information repositories. Data mining is a multidisciplinary field, drawing work from areas including database technology, artificial intelligence, machine learning, neural networks, statistics, pattern recognition, knowledge based systems, knowledge acquisition, information ..."