Results 1 - 10 of 16,394

Selection of relevant features and examples in machine learning

by Avrim L. Blum, Pat Langley - ARTIFICIAL INTELLIGENCE, 1997
Abstract - Cited by 606 (2 self)
In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made ...

Text Categorization with Support Vector Machines: Learning with Many Relevant Features

by Thorsten Joachims, 1998
Abstract - Cited by 2303 (9 self)
This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings. SVMs achieve substantial improvements over the currently best performing methods and they behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual parameter tuning.
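To make the setup concrete, here is a minimal sketch of the kind of system the abstract describes: a linear SVM trained on high-dimensional sparse text features. The use of scikit-learn, TF-IDF weighting, and the tiny toy corpus are illustrative assumptions, not details taken from the paper.

    # Hypothetical toy example: linear SVM text classifier on sparse TF-IDF features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = ["wheat prices rose sharply this quarter",
            "the striker scored twice in the final",
            "corn exports fell after the drought",
            "the home team won the championship match"]
    labels = ["commodities", "sport", "commodities", "sport"]

    # TF-IDF gives one dimension per term, so the feature space is large and sparse;
    # a linear SVM copes with these many features without prior feature selection.
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(docs, labels)
    print(model.predict(["grain harvest estimates rose"]))   # expected: commodities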

Selection of relevant features in machine learning.

by Pat Langley - Proceedings of the AAAI Fall Symposium on Relevance, 1994
Abstract - Cited by 175 (1 self)
In this paper, we review the problem of selecting relevant features for use in machine learning. We describe this problem in terms of heuristic search through a space of feature sets, and we identify four dimensions along which approaches to the problem can vary. We consider recent work on ...
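As one concrete point in the search space the abstract refers to, the sketch below performs greedy forward selection: start from the empty feature set, repeatedly add the single feature that most improves cross-validated accuracy, and stop when no addition helps. The choice of k-nearest-neighbour as the underlying learner and 3-fold cross-validation as the evaluation function are assumptions made purely for illustration.

    # Hypothetical sketch: feature selection as greedy hill-climbing over feature subsets.
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def forward_select(X, y):
        selected = []                          # current state: chosen feature indices
        remaining = list(range(X.shape[1]))
        best_score = 0.0
        while remaining:
            # Successor states: the current subset plus one not-yet-chosen feature.
            scores = {f: cross_val_score(KNeighborsClassifier(),
                                         X[:, selected + [f]], y, cv=3).mean()
                      for f in remaining}
            f, score = max(scores.items(), key=lambda kv: kv[1])
            if score <= best_score:            # halt when no successor improves the estimate
                break
            selected.append(f)
            remaining.remove(f)
            best_score = score
        return selected, best_score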

Simultaneous Classification and Relevant Feature . . .

by C. Bhattacharyya, et al., 2003
Abstract
Molecular profiling technologies monitor many thousands of transcripts, proteins, metabolites or other species concurrently in a biological sample of interest. Given such high-dimensional data for different types of samples, classification methods aim to assign specimens to known categories. Relevant feature identification ...

Strains Relevant features Reference

by unknown authors
"... Table S1. List of strains and plasmids used in this study. ..."
Abstract - Add to MetaCart
Table S1. List of strains and plasmids used in this study.

Wrappers for Feature Subset Selection

by Ron Kohavi, George H. John - AIJ SPECIAL ISSUE ON RELEVANCE, 1997
Abstract - Cited by 1569 (3 self)
In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set ...
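A short sketch of the wrapper idea under discussion: the subset search is wrapped around the induction algorithm itself, so each candidate subset is scored by that algorithm's cross-validated accuracy rather than by an algorithm-independent filter. scikit-learn's SequentialFeatureSelector and the iris data are used here only as convenient stand-ins.

    # Hypothetical wrapper-style selection: subsets judged by the target learner's CV accuracy.
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(random_state=0)

    # Greedy forward search; every candidate subset is evaluated by 5-fold
    # cross-validated accuracy of the very decision tree that will be deployed.
    selector = SequentialFeatureSelector(tree, n_features_to_select=2, cv=5)
    selector.fit(X, y)
    print(selector.get_support())              # boolean mask over the four iris features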

Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy

by Hanchuan Peng, Fuhui Long, Chris Ding - IEEE TRANS. PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005
Abstract - Cited by 571 (8 self)
Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a ...
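A rough sketch of the first-order incremental step named in the abstract: at each round, add the candidate feature whose mutual information with the class (relevance) minus its average mutual information with the already-selected features (redundancy) is largest. Estimating the mutual information terms with scikit-learn's mutual_info_classif / mutual_info_regression is a simplification for illustration, not the estimator used in the paper.

    # Hypothetical mRMR-style incremental selection (max-relevance, min-redundancy).
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

    def mrmr_select(X, y, k):
        relevance = mutual_info_classif(X, y)            # I(x_j; c) for every feature
        selected = [int(np.argmax(relevance))]           # start with the most relevant one
        while len(selected) < k:
            best_j, best_score = None, -np.inf
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                # Mean mutual information with already-selected features = redundancy.
                redundancy = np.mean([mutual_info_regression(X[:, [i]], X[:, j])[0]
                                      for i in selected])
                score = relevance[j] - redundancy        # the mRMR difference criterion
                if score > best_score:
                    best_j, best_score = j, score
            selected.append(best_j)
        return selected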

Irrelevant Features and the Subset Selection Problem

by George H. John, Ron Kohavi, Karl Pfleger - MACHINE LEARNING: PROCEEDINGS OF THE ELEVENTH INTERNATIONAL, 1994
Abstract - Cited by 757 (26 self)
We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features ...
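For reference, the strong/weak relevance distinction developed in this line of work is usually stated roughly as follows; the notation here (S_i for the set of all features other than X_i, Y for the class) is mine, and the exact formulation in the paper may differ slightly.

    % Strong relevance: removing X_i changes the class distribution somewhere.
    X_i \text{ is strongly relevant iff } \exists\, x_i, y, s_i:\;
      p(X_i = x_i, S_i = s_i) > 0 \;\wedge\;
      p(Y = y \mid X_i = x_i, S_i = s_i) \neq p(Y = y \mid S_i = s_i)

    % Weak relevance: not strongly relevant, but useful given some context S_i' \subseteq S_i.
    X_i \text{ is weakly relevant iff } \exists\, S_i' \subseteq S_i,\; x_i, y, s_i':\;
      p(X_i = x_i, S_i' = s_i') > 0 \;\wedge\;
      p(Y = y \mid X_i = x_i, S_i' = s_i') \neq p(Y = y \mid S_i' = s_i')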

Efficient Algorithms for Identifying Relevant Features

by Hussein Almuallim, Thomas G. Dietterich - In Proceedings of the Ninth Canadian Conference on Artificial Intelligence, 1992
Abstract - Cited by 46 (2 self)
This paper describes efficient methods for exact and approximate implementation of the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This bias is useful for learning domains where many irrelevant features are present in the training data. We first ...
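To make the bias concrete, here is a brute-force sketch of the exact version for boolean features: enumerate subsets in order of increasing size and return the first one on which the training sample is still consistent, i.e. no two examples agree on the subset yet carry different labels. This naive enumeration is exponential in the worst case; the paper is precisely about implementing the bias, and approximations to it, more efficiently.

    # Hypothetical brute-force implementation of the MIN-FEATURES bias.
    from itertools import combinations

    def min_features(examples, labels):
        n = len(examples[0])
        for size in range(n + 1):                       # smallest subsets first
            for subset in combinations(range(n), size):
                projected, consistent = {}, True
                for ex, lab in zip(examples, labels):
                    key = tuple(ex[i] for i in subset)
                    if projected.setdefault(key, lab) != lab:
                        consistent = False              # same projection, different labels
                        break
                if consistent:
                    return list(subset)                 # smallest consistent feature subset

    X = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]    # the label equals feature 2 here
    y = [0, 1, 0, 1]
    print(min_features(X, y))                           # -> [2]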

Sparse Bayesian Learning and the Relevance Vector Machine

by Michael E. Tipping, 2001
Abstract - Cited by 966 (5 self)
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' ...
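A rough sketch of the regression case, under one reading of the standard sparse Bayesian updates: each weight gets a zero-mean Gaussian prior with its own precision alpha_i, the alphas and the noise variance are re-estimated by type-II maximum likelihood, and basis functions whose alpha diverges are pruned as irrelevant. The iteration count, pruning threshold, and numerical guards below are arbitrary choices for illustration, not values from the paper.

    # Hypothetical sparse Bayesian linear regression in the relevance-vector style.
    import numpy as np

    def sparse_bayes_regression(Phi, t, n_iter=200, prune_at=1e6):
        N, M = Phi.shape
        alpha = np.ones(M)                       # one prior precision per weight
        sigma2 = 0.1 * np.var(t) + 1e-6          # initial noise variance
        active = np.arange(M)                    # basis functions still considered relevant
        for _ in range(n_iter):
            P = Phi[:, active]
            # Posterior over the active weights given the current hyperparameters.
            Sigma = np.linalg.inv(np.diag(alpha[active]) + P.T @ P / sigma2)
            mu = Sigma @ P.T @ t / sigma2
            # Re-estimate hyperparameters; gamma_i measures how well-determined w_i is.
            gamma = 1.0 - alpha[active] * np.diag(Sigma)
            alpha[active] = gamma / (mu ** 2 + 1e-12)
            sigma2 = float(np.sum((t - P @ mu) ** 2)) / max(N - gamma.sum(), 1e-12)
            active = active[alpha[active] < prune_at]   # drop "irrelevant" basis functions
        P = Phi[:, active]
        Sigma = np.linalg.inv(np.diag(alpha[active]) + P.T @ P / sigma2)
        mu = Sigma @ P.T @ t / sigma2            # posterior mean over the surviving weights
        return active, mu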