Results 11–20 of 35
The Performance Of Statistical Pattern Recognition Methods In High Dimensional Settings
IEEE Signal Processing Workshop on Higher-Order Statistics, Caesarea, 1994
Abstract

Cited by 2 (0 self)
We report on an extensive simulation study comparing eight statistical classification methods, focusing on problems where the number of observations is less than the number of variables. Using a wide range of artificial and real data, two types of classifiers were contrasted: methods that classify using all variables, and methods that first reduce the number of dimensions to two or three. The full feature-space methods include linear, quadratic and regularized discriminant analysis, and the nearest-neighbour method. The four dimensionality-reducing classifiers are characterized by the transform they implement. The four transforms compared are the Fisher discriminant plane, the Fisher-Fukunaga-Koontz, the Fisher-radius, and the Fisher-variance transforms. The Fisher-Fukunaga-Koontz and the Fisher-radius transform based classifiers have recently been proposed for two-class classification problems. We also present an extension to these transforms such that they can be applied to classification pro...
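To make the "fewer observations than variables" complication concrete: with n < p the pooled within-class scatter matrix is singular, so a Fisher-type direction can only be formed through a generalized inverse. A minimal sketch with invented toy data (this is not the paper's eight methods or its data sets, just the shape of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "n < p" setting: 20 samples per class, 50 features (illustrative only).
n, p = 20, 50
X0 = rng.normal(0.0, 1.0, (n, p))   # class 0
X1 = rng.normal(0.5, 1.0, (n, p))   # class 1, with a shifted mean
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Pooled within-class scatter; with n < p it is singular,
# so we use the pseudoinverse instead of a plain inverse.
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.pinv(Sw) @ (m1 - m0)   # Fisher discriminant direction

def classify(x):
    # Assign to the class whose projected mean is nearer along w.
    z, z0, z1 = x @ w, m0 @ w, m1 @ w
    return 0 if abs(z - z0) < abs(z - z1) else 1
```

The Fisher discriminant plane the abstract mentions is the two-dimensional analogue: two such directions instead of one.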
An Experimental Comparison of Kernel Clustering Methods
, 2008
Abstract

Cited by 1 (0 self)
In this paper, we compare the performance of some of the most popular kernel clustering methods on several data sets. The methods are all based on central clustering and incorporate, in various ways, the concepts of fuzzy clustering and kernel machines. The data sets sample several application domains and sizes. A thorough discussion of the techniques for validating results is also presented. Results indicate that clustering in kernel space generally outperforms standard clustering, although no method can be proven to be consistently better than the others.
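The kernel central clustering the abstract compares can be sketched as kernel k-means, where distances to cluster centroids are computed entirely through a precomputed kernel matrix. A minimal stand-in (the toy blob data and the RBF bandwidth are invented for illustration; the paper's actual methods also add fuzzy memberships):

```python
import numpy as np

def kernel_kmeans(K, k, iters=50, seed=0):
    """Kernel k-means on a precomputed n x n kernel matrix K."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, n)
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf
                continue
            # Squared feature-space distance to the centroid of cluster c:
            # K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_{j,l} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Usage: RBF kernel on two toy blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq)                      # RBF kernel, gamma = 1 (arbitrary choice)
labels = kernel_kmeans(K, 2)
```

Because everything is expressed through K, the same loop clusters in any implicit feature space simply by swapping the kernel.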
Automatic Coefficient Selection in Weighted Maximum Margin Criterion
Abstract

Cited by 1 (0 self)
In this paper, we address a known problem of Linear Discriminant Analysis (LDA): it pays more attention to minimizing the within-class scatter than to maximizing the between-class scatter. Although the Weighted Maximum Margin Criterion (WMMC) with an appropriate weighting coefficient can solve this problem, selecting this coefficient automatically remains difficult, as most previous works determine it manually. To deal with this, a novel approach for determining the coefficient automatically is proposed. The approach is described and analysed in detail, and experiments are performed on the BernDBS face database and the JAFFE expression database. The results show that WMMC with the weighting coefficients determined by this novel approach outperforms traditional LDA.
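For orientation, the (weighted) maximum margin criterion seeks a projection W maximizing tr(Wᵀ(S_b − λ S_w)W), where λ is the weighting coefficient in question. A minimal sketch (here λ is fixed by hand, which is exactly the manual selection the paper aims to replace; the automatic rule itself is not reproduced):

```python
import numpy as np

def wmmc_projection(X, y, lam, dim):
    """Top-`dim` eigenvectors of S_b - lam * S_w (weighted maximum margin criterion)."""
    mean = X.mean(axis=0)
    p = X.shape[1]
    Sb = np.zeros((p, p))
    Sw = np.zeros((p, p))
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean)[:, None]
        Sb += Xc.shape[0] * (d @ d.T)                 # between-class scatter
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))  # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - lam * Sw)        # symmetric matrix, so eigh
    return vecs[:, np.argsort(vals)[::-1][:dim]]      # eigenvectors of largest eigenvalues

# Usage: project toy 4-D two-class data to one dimension.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(2, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
W = wmmc_projection(X, y, lam=1.0, dim=1)
```

Unlike classical LDA, no inversion of S_w is needed, which is why the criterion behaves better when S_w is ill-conditioned; λ trades off the two scatter terms directly.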
Pattern Recognition
Abstract

Cited by 1 (0 self)
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the author's institution and sharing with colleagues. Other uses, including reproduction and distribution, selling or licensing copies, or posting to personal, institutional or third-party websites, are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or TeX form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit:
Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables
Abstract

Cited by 1 (0 self)
Abstract. We discuss Bayesian methods for model averaging and model selection among Bayesian-network models with hidden variables. In particular, we examine large-sample approximations for the marginal likelihood of naive-Bayes models in which the root node is hidden. Such models are useful for clustering or unsupervised learning. We consider a Laplace approximation and the less accurate but more computationally efficient approximation known as the Bayesian Information Criterion (BIC), which is equivalent to Rissanen's (1987) Minimum Description Length (MDL). Also, we consider approximations that ignore some off-diagonal elements of the observed information matrix and an approximation proposed by Cheeseman and Stutz (1995). We evaluate the accuracy of these approximations using a Monte Carlo gold standard. In experiments with artificial and real examples, we find that (1) none of the approximations are accurate when used for model averaging, (2) all of the approximations, with the exception of BIC/MDL, are accurate for model selection, (3) among the accurate approximations, the Cheeseman-Stutz and Diagonal approximations are the most computationally efficient, (4) all of the approximations, with the exception of BIC/MDL, can be sensitive to the prior distribution over model parameters, and (5) the Cheeseman-Stutz approximation can be more accurate than the other approximations, including the Laplace approximation, in situations where the parameters in the maximum a posteriori configuration are near a boundary.
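The BIC/MDL score discussed above approximates the log marginal likelihood by the maximized log-likelihood minus a complexity penalty of (d/2)·log N, with d free parameters and N samples. A minimal sketch (the two Gaussian toy models are invented for illustration; the paper's models are naive-Bayes networks with a hidden root):

```python
import numpy as np

def bic(loglik_hat, n_params, n_samples):
    """BIC approximation: log p(D | model) ~ log p(D | theta_hat) - (d/2) log N."""
    return loglik_hat - 0.5 * n_params * np.log(n_samples)

def gauss_loglik(x, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

# Usage: score two competing Gaussian models for the same data.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)

# Model 1: mean fixed at 0, free sigma (1 parameter).
s1 = np.sqrt(np.mean(x**2))
bic1 = bic(gauss_loglik(x, 0.0, s1), 1, len(x))

# Model 2: free mean and sigma (2 parameters).
bic2 = bic(gauss_loglik(x, x.mean(), x.std()), 2, len(x))
```

The higher BIC score is preferred; the penalty term is what lets the simpler model win when the extra parameter buys little likelihood, which matches the paper's finding that BIC is cheap but the least accurate of the approximations it studies.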
A NEW APPROACH OF STEGANOGRAPHY USING RADIAL BASIS FUNCTION NEURAL NETWORK
Abstract
Steganographic tools and techniques are becoming more potent and widespread. Illegal use of steganography poses serious challenges to law enforcement agencies. Limited work has been carried out on supervised steganalysis using a neural network as a classifier. We present a combined method of identifying the presence of covert information in a carrier image using Fisher's linear discriminant (FLD) function followed by a radial basis function (RBF) network. Experiments show promising results when compared to existing supervised steganalysis methods, but arranging the retrieved information is still a challenging problem.
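The FLD-then-RBF pipeline can be sketched as follows, with random stand-in feature vectors in place of actual image statistics (the feature extraction, center selection, and bandwidth below are all invented for illustration, not the paper's method):

```python
import numpy as np

def fld_direction(X0, X1):
    """Fisher's linear discriminant direction for two classes."""
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.pinv(Sw) @ (X1.mean(axis=0) - X0.mean(axis=0))

class RBFNet:
    """Minimal RBF network: Gaussian hidden units, linear output fit by least squares."""
    def __init__(self, centers, gamma):
        self.centers, self.gamma = centers, gamma
    def _phi(self, X):
        d = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d)
    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self
    def predict(self, X):
        return (self._phi(X) @ self.w > 0.5).astype(int)

# Usage: stand-in "clean" vs "stego" features, projected by FLD, classified by RBF.
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (50, 8))   # stand-in clean-image features
X1 = rng.normal(1, 1, (50, 8))   # stand-in stego-image features
w = fld_direction(X0, X1)
Z = (np.vstack([X0, X1]) @ w).reshape(-1, 1)   # 1-D FLD projection
y = np.array([0] * 50 + [1] * 50)
net = RBFNet(centers=Z[::10], gamma=1.0).fit(Z, y.astype(float))
acc = (net.predict(Z) == y).mean()
```

The FLD stage compresses the high-dimensional features into a discriminative low-dimensional space, which keeps the RBF layer small; the abstract's point is that this combination outperforms using either stage alone.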
Perturbation LDA: Learning the difference between the class empirical mean ...
Abstract
LINEAR DISCRIMINANT ANALYSIS WITH A GENERALIZATION OF THE MOORE–PENROSE PSEUDOINVERSE
Abstract
The Linear Discriminant Analysis (LDA) technique is an important and well-developed area of classification, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds the number of observations. In this case the covariance estimates do not have full rank and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper we propose improving LDA in this area, presenting a new approach that uses a generalization of the Moore–Penrose pseudoinverse to remove this weakness. In addition to handling the problem of inverting the covariance matrix, the new approach significantly improves the quality of classification, even on data sets where the covariance matrix can be inverted. Experimental results on various data sets demonstrate that our improvements to LDA are efficient and that our approach outperforms LDA.
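The baseline the paper generalizes is pseudoinverse LDA: the standard linear discriminant score with the inverse covariance replaced by the Moore–Penrose pseudoinverse, which is defined even when the pooled estimate is rank-deficient. A minimal sketch of that baseline only (toy data invented; the paper's generalized pseudoinverse itself is not reproduced here):

```python
import numpy as np

def pinv_lda_fit(X, y):
    """LDA with the Moore-Penrose pseudoinverse replacing the covariance inverse."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    pooled = sum(np.cov(X[y == c], rowvar=False) for c in classes) / len(classes)
    Sinv = np.linalg.pinv(pooled)   # defined even when pooled is rank-deficient
    priors = np.array([(y == c).mean() for c in classes])
    return classes, means, Sinv, priors

def pinv_lda_predict(model, X):
    classes, means, Sinv, priors = model
    # Linear discriminant scores: x' Sinv mu_c - 0.5 mu_c' Sinv mu_c + log pi_c
    scores = (X @ Sinv @ means.T
              - 0.5 * np.einsum('ij,jk,ik->i', means, Sinv, means)
              + np.log(priors))
    return classes[scores.argmax(axis=1)]

# Usage: n = 20 observations but p = 30 features, so pooled covariance is singular.
rng = np.random.default_rng(0)
p = 30
X = np.vstack([rng.normal(0, 1, (10, p)), rng.normal(1.5, 1, (10, p))])
y = np.array([0] * 10 + [1] * 10)
pred = pinv_lda_predict(pinv_lda_fit(X, y), X)
```

Classical LDA would fail outright here because `np.linalg.inv(pooled)` does not exist for a singular matrix; the pseudoinverse restricts the discriminant to the subspace the data actually spans, which is the weakness the paper's generalization then improves on.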