Results 1 - 10 of 13
R.: Machine learning for clinical diagnosis from functional magnetic resonance imaging
, 2005
Abstract - Cited by 16 (3 self)
Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the active human brain. fMRI provides a sequence of 3D brain images with intensities representing brain activations. Standard techniques for fMRI analysis have traditionally focused on finding the area of most significant brain activation for different sensations or activities. In this paper, we explore a new application of machine learning methods to a more challenging problem: classifying subjects into groups based on the observed 3D brain images when the subjects are performing the same task. Here we address the separation of drug-addicted subjects from healthy non-drug-using controls. We explore a number of classification approaches and introduce a novel algorithm that integrates side information into the use of boosting. Our algorithm clearly outperformed well-established classifiers, as documented in extensive experimental results. This is the first time that machine learning techniques based on 3D brain images have been applied to a clinical diagnosis that is currently performed only through patient self-report. Our tools can therefore provide information not addressed by traditional analysis methods and substantially improve diagnosis.
Conditional infomax learning: an integrated framework for feature extraction and fusion
- In Proc. European Conf. on Computer Vision
, 2006
Abstract - Cited by 10 (1 self)
Abstract. The paper introduces a new framework for feature learning in classification motivated by information theory. We first systematically study the information structure and present a novel perspective revealing the two key factors in information utilization: class-relevance and redundancy. We derive a new information decomposition model where a novel concept called class-relevant redundancy is introduced. Subsequently a new algorithm called Conditional Informative Feature Extraction is formulated, which maximizes the joint class-relevant information by explicitly reducing the class-relevant redundancies among features. To address the computational difficulties in information-based optimization, we incorporate Parzen window estimation into the discrete approximation of the objective function and propose a Local Active Region method which substantially increases the optimization efficiency. To effectively utilize the extracted feature set, we propose a Bayesian MAP formulation for feature fusion, which unifies Laplacian Sparse Prior and Multivariate Logistic Regression to learn a fusion rule with good generalization capability. Realizing the inefficiency caused by separate treatment of the extraction stage and the fusion stage, we further develop an improved design of the framework to coordinate the two stages by introducing a feedback from the fusion stage to the extraction stage, which significantly enhances the learning efficiency. The results of the comparative experiments show remarkable improvements achieved by our framework.
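The trade-off between class-relevance and class-relevant redundancy that this abstract describes can be illustrated with a minimal greedy selector over discrete features. This is an mRMR-style sketch, not the paper's Conditional Informative Feature Extraction (which handles continuous features via Parzen window estimation); the data layout and function names are invented for the example.

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def greedy_select(features, labels, k):
    """Greedily pick k features maximizing class-relevance minus the average
    redundancy with already-selected features (a relevance-redundancy trade-off
    in the spirit of the abstract, not its exact criterion)."""
    selected = []
    remaining = list(range(len(features)))
    while len(selected) < k and remaining:
        def score(i):
            relevance = mutual_information(features[i], labels)
            redundancy = (sum(mutual_information(features[i], features[j])
                              for j in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a perfectly informative feature, an exact duplicate of it, and a complementary feature, the redundancy term steers the second pick away from the duplicate.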
Pursuing Informative Projection on Grassmann Manifold
- in IEEE Conference on Computer Vision and Pattern Recognition
, 2006
Abstract - Cited by 6 (1 self)
Inspired by the underlying relationship between classification capability and mutual information, in this paper we first establish a quantitative model to describe the information transmission process from feature extraction to final classification and identify the critical channel in this propagation path, and then propose a Maximum Effective Information Criterion for pursuing the optimal subspace in the sense of preserving the maximum information that can be conveyed to the final decision. Considering the orthogonality and rotation invariance properties of the solution space, we present a Conjugate Gradient method constrained on a Grassmann manifold to exploit the geometric traits of the solution space for enhancing the efficiency of optimization. Comprehensive experiments demonstrate that the framework integrating the Maximum Effective Information Criterion and the Grassmann manifold-based optimization method significantly improves the classification performance.
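The flavour of optimization constrained to a manifold of subspaces can be shown in the simplest case, Gr(1, n), a single projection direction on the unit sphere: take the Euclidean gradient, project it onto the tangent space, step, and retract by renormalizing. This is a plain gradient-ascent sketch on an invented objective (squared class-mean separation), not the paper's conjugate-gradient method or its Maximum Effective Information Criterion.

```python
def best_projection(X0, X1, steps=200, lr=0.1):
    """Gradient ascent on the unit sphere (Gr(1, n)) for a 1-D projection w
    maximizing (w . (m1 - m0))^2, the squared separation of class means."""
    n = len(X0[0])
    m0 = [sum(x[i] for x in X0) / len(X0) for i in range(n)]
    m1 = [sum(x[i] for x in X1) / len(X1) for i in range(n)]
    d = [a - b for a, b in zip(m1, m0)]
    w = [1.0 / n ** 0.5] * n                          # start on the sphere
    for _ in range(steps):
        s = sum(wi * di for wi, di in zip(w, d))      # w . (m1 - m0)
        g = [2 * s * di for di in d]                  # Euclidean gradient
        wg = sum(wi * gi for wi, gi in zip(w, g))
        t = [gi - wg * wi for gi, wi in zip(g, w)]    # tangent-space projection
        w = [wi + lr * ti for wi, ti in zip(w, t)]    # step along the tangent
        norm = sum(wi * wi for wi in w) ** 0.5
        w = [wi / norm for wi in w]                   # retraction: renormalize
    return w
```

The tangent projection and retraction are what make this a manifold method rather than unconstrained ascent: the iterate never leaves the feasible set of unit-norm directions.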
Virtual gene: Using correlations between genes to select informative genes on microarray datasets
- LNCS Transactions on Computational Systems Biology II, LNBI 3680
, 2005
Abstract - Cited by 5 (2 self)
Abstract. Gene selection is one of the most widely used classes of data analysis algorithms for microarray datasets. The goal of gene selection algorithms is to filter out a small set of informative genes that best explains experimental variations. Traditional gene selection algorithms are mostly single-gene based: some discriminative score is calculated and sorted for each gene, and top-ranked genes are then selected as informative genes for further study. Such algorithms completely ignore correlations between genes, although such correlations are widely known to exist: genes interact with each other through various pathways and regulatory networks. In this paper, we propose to use, instead of ignoring, such correlations for gene selection. Experiments performed on three publicly available datasets show promising results.
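One simple way to use, rather than ignore, pairwise correlations is to score derived expression profiles built from gene pairs alongside the original genes. The sketch below forms pairwise difference profiles and ranks everything with a t-like single-gene score; the combination rule and all names are invented for illustration and are not necessarily the paper's virtual-gene construction.

```python
from statistics import mean, stdev

def t_score(expr, labels):
    """Two-sample t-like discriminative score for one expression profile
    (binary labels 0/1; higher means better class separation)."""
    a = [e for e, l in zip(expr, labels) if l == 0]
    b = [e for e, l in zip(expr, labels) if l == 1]
    pooled = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return abs(mean(a) - mean(b)) / pooled if pooled else 0.0

def virtual_genes(genes):
    """Derived profiles from gene pairs (here: elementwise differences), so a
    correlated pair whose *difference* separates the classes can be scored
    even when each gene alone looks uninformative."""
    out = {}
    names = sorted(genes)
    for i, g in enumerate(names):
        for h in names[i + 1:]:
            out[f"{g}-{h}"] = [x - y for x, y in zip(genes[g], genes[h])]
    return out

def rank(genes, labels):
    """Gene names sorted by discriminative score, best first."""
    scored = {g: t_score(e, labels) for g, e in genes.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

In the test, two genes share a confounding trend; subtracting them cancels it, and the derived profile outranks both originals.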
Virtual Gene: a Gene Selection Algorithm for Sample Classification on Microarray Datasets
Abstract - Cited by 1 (1 self)
Abstract. Gene selection is one of the most widely used classes of data analysis algorithms for microarray datasets. The goal of gene selection algorithms is to filter out a small set of informative genes that best explains experimental variations. Traditional gene selection algorithms are mostly single-gene based: some discriminative score is calculated and sorted for each gene, and top-ranked genes are then selected as informative genes for further study. Such algorithms completely ignore correlations between genes, although such correlations are widely known to exist: genes interact with each other through various pathways and regulatory networks. In this paper, we propose to use, instead of ignoring, such correlations for gene selection. Experiments performed on three publicly available datasets show promising results.
Boost Feature Subset Selection: A New Gene Selection Algorithm for Microarray Dataset
Abstract - Cited by 1 (0 self)
Abstract. Gene selection is usually the crucial first step in microarray data analysis. One class of typical approaches is to calculate some discriminative score using data associated with a single gene; such scores are then sorted, and top-ranked genes are selected for further analysis. However, such an approach results in a redundant gene set, since it ignores the complex relationships between genes. Recent research in feature subset selection has begun to tackle this problem by limiting the correlations within the selected feature set. In this paper, we propose BFSS, Boost Feature Subset Selection: a novel general framework that improves the performance of single-gene-based discriminative scores using bootstrapping techniques. Features are selected from dynamically adjusted bootstraps of the training dataset. We tested our algorithm on three well-known microarray datasets publicly available to the bioinformatics community. Encouraging results are reported.
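The idea of selecting features from dynamically adjusted views of the training data can be sketched with a boosting-style reweighting loop: after each pick, samples the chosen feature already separates are down-weighted, so the next score concentrates on the remaining hard samples. This is an illustrative sketch in the spirit of the abstract, not the published BFSS algorithm; the scoring and thresholding rules are invented.

```python
def weighted_means(values, labels, weights):
    """Weighted per-class means of one feature (binary labels 0/1)."""
    out = []
    for cls in (0, 1):
        num = sum(w * v for v, l, w in zip(values, labels, weights) if l == cls)
        den = sum(w for l, w in zip(labels, weights) if l == cls)
        out.append(num / den if den else 0.0)
    return out

def weighted_score(values, labels, weights):
    """Weighted difference-of-means discriminative score."""
    m0, m1 = weighted_means(values, labels, weights)
    return abs(m1 - m0)

def boosted_select(features, labels, k):
    """Pick k features; after each pick, halve the weight of samples the
    chosen feature already classifies (midpoint threshold), then renormalize,
    so later picks are scored mainly on the still-misclassified samples."""
    weights = [1.0] * len(labels)
    selected, remaining = [], list(range(len(features)))
    for _ in range(min(k, len(features))):
        best = max(remaining,
                   key=lambda i: weighted_score(features[i], labels, weights))
        selected.append(best)
        remaining.remove(best)
        m0, m1 = weighted_means(features[best], labels, weights)
        thr, hi = (m0 + m1) / 2.0, m1 > m0
        for s, (v, l) in enumerate(zip(features[best], labels)):
            pred = int(v > thr) if hi else int(v < thr)
            if pred == l:
                weights[s] *= 0.5        # down-weight already-separated samples
        total = sum(weights)
        weights = [w / total for w in weights]
    return selected
```

Unweighted top-k scoring would pick a feature and its exact duplicate; the reweighting instead favours a feature that covers the first pick's mistakes.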
An inexact Newton method combined with Hestenes multipliers scheme for the solution of Karush–Kuhn–Tucker systems
- Computer Communications, journal homepage: www.elsevier.com/locate/comcom
Abstract
In this work a Newton interior-point method for the solution of Karush–Kuhn–Tucker systems is presented. A crucial feature of this iterative method is the solution, at each iteration, of an inner subproblem. This subproblem is a linear-quadratic programming problem that can be solved approximately by an inner iterative method such as the Hestenes multipliers method. A detailed analysis of the choices of the method's parameters (the perturbation and damping parameters) is given. The global convergence of the Newton interior-point method is proved when the method is viewed as an inexact Newton method for the solution of nonlinear systems with restriction on the sign of some variables.
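The core iteration this abstract describes — a damped Newton step on the KKT residual — can be illustrated on a toy equality-constrained quadratic program. With no sign-constrained variables there is no interior-point perturbation, and the KKT system is linear, so one full Newton step solves it exactly; the problem and all names below are invented for the sketch.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kkt_newton_step(x, lam, damping=1.0):
    """One damped Newton step on the KKT system of the toy problem
        min (x1 - 1)^2 + (x2 - 2)^2   s.t.   x1 + x2 = 1.
    KKT residual: F(x, lam) = [2(x1-1) + lam, 2(x2-2) + lam, x1 + x2 - 1]."""
    F = [2 * (x[0] - 1) + lam, 2 * (x[1] - 2) + lam, x[0] + x[1] - 1]
    J = [[2.0, 0.0, 1.0],          # Jacobian of F: Hessian block + constraint
         [0.0, 2.0, 1.0],
         [1.0, 1.0, 0.0]]
    d = solve_linear(J, [-f for f in F])
    return [x[0] + damping * d[0], x[1] + damping * d[1]], lam + damping * d[2]
```

For a quadratic objective and linear constraint the residual is affine, so the undamped step lands on the exact KKT point; the damping parameter matters once nonlinearity or sign restrictions enter.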