Results 1–10 of 13
Statistical pattern recognition: A review
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2000
"... The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques ..."
Abstract

Cited by 904 (31 self)
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
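The statistical approach this abstract refers to is typified by the Gaussian plug-in classifier: estimate a mean per class and a pooled covariance from training samples, then assign each point to the class with the highest discriminant score. A minimal sketch on synthetic data (all names and parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Estimate a per-class mean and a pooled covariance matrix."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooling the class covariances keeps the estimate stable when
    # individual classes have few samples.
    pooled = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum()
                 for c in classes) / len(y)
    return means, pooled

def predict(X, means, cov):
    """Linear discriminant rule (equal class priors assumed)."""
    inv = np.linalg.inv(cov)
    labels = list(means)
    scores = np.stack([X @ inv @ means[c] - 0.5 * means[c] @ inv @ means[c]
                       for c in labels], axis=1)
    return np.array(labels)[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
means, cov = fit_gaussian_classes(X, y)
print((predict(X, means, cov) == y).mean())  # training accuracy, close to 1.0
```

The pooled covariance is one of several design choices the review compares; per-class covariances give the quadratic variant at the cost of more parameters to estimate.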
Nonparametric Weighted Feature Extraction for Classification
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2004
"... ..."
(Show Context)
Classification of High Dimensional Data With Limited Training Samples, 1998
"...  iiTABLE OF CONTENTS ABSTRACT.......................................................................................iv ..."
Abstract

Cited by 26 (8 self)
 Add to MetaCart
(Show Context)
 iiTABLE OF CONTENTS ABSTRACT.......................................................................................iv
Statistics enhancement in hyperspectral data analysis using spectral-spatial labeling, the EM algorithm, and the leave-one-out covariance estimator
Proc. SPIE, 1999
"... Hyperspectral data potentially contain more information than multispectral data because of higher dimensionality. Information extraction algorithm performance is strongly related to the quantitative precision with which the desired classes are defined, a characteristic which increases rapidly with d ..."
Abstract

Cited by 7 (0 self)
Hyperspectral data potentially contain more information than multispectral data because of higher dimensionality. Information extraction algorithm performance is strongly related to the quantitative precision with which the desired classes are defined, a characteristic which increases rapidly with dimensionality. Due to the limited number of training samples used in defining classes, the information extraction of hyperspectral data may not perform as well as needed. In this paper, schemes for statistics enhancement are investigated for alleviating this problem. Previous works including the EM algorithm and the Leave-One-Out covariance estimator are discussed. The HALF covariance estimator is proposed for two-class problems by using the symmetry property of the normal distribution. A spectral-spatial labeling scheme is proposed to increase the training sample sizes automatically. We also seek to combine previous works with the proposed methods so as to take full advantage of statistics enhancement. Using these techniques, improvement in classification accuracy has been observed.
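The estimators this abstract surveys all target the same failure mode: with fewer training samples than dimensions, the sample covariance is singular and cannot be inverted. A hedged sketch of the general idea (plain shrinkage toward the diagonal, illustrative only, not the paper's LOOC or HALF estimators):

```python
import numpy as np

def shrunk_covariance(X, alpha):
    """Blend the sample covariance with its own diagonal by factor alpha.

    A generic shrinkage estimator; it merely illustrates why shrinkage
    restores invertibility in the small-sample regime.
    """
    S = np.cov(X.T, bias=True)
    return (1 - alpha) * S + alpha * np.diag(np.diag(S))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))          # 5 samples in 10 dimensions
S = np.cov(X.T, bias=True)
print(np.linalg.matrix_rank(S))       # at most 4: singular, not invertible
C = shrunk_covariance(X, alpha=0.5)
print(np.linalg.matrix_rank(C))       # 10: full rank, safe to invert
```

Any positive alpha makes the estimate positive definite (the diagonal term is positive definite, the sample covariance positive semidefinite), which is the property a plug-in classifier needs.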
Feature Selection For Off-Line Recognition of Different Size Signatures, 2002
"... The aim of this work is to select a set of features, which have good performance to solve the problem of signature recognition of different sizes. The signature database was formed for three sizes of signatures per user, small, median and big. This study used structural features, pseudodynamic feat ..."
Abstract

Cited by 3 (1 self)
The aim of this work is to select a set of features that perform well on the problem of recognizing signatures of different sizes. The signature database was formed from three sizes of signatures per user: small, medium, and large. This study used structural features, pseudo-dynamic features, and five moment groups. The feature selection method chosen selects the best individual features based on classifiers such as Bayes and kNN.
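Selecting the best individual features, as described above, can be sketched by scoring each feature on its own with a leave-one-out 1-NN classifier and keeping the top scorers. The data and the exact scoring rule below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def loo_1nn_accuracy(x, y):
    """Leave-one-out 1-NN accuracy using a single feature column x."""
    dist = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(dist, np.inf)      # a sample cannot be its own neighbour
    return (y[dist.argmin(axis=1)] == y).mean()

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 30)
informative = np.where(y == 0, 0.0, 5.0) + rng.normal(size=60)
noise = rng.normal(size=60)
X = np.column_stack([noise, informative])
scores = [loo_1nn_accuracy(X[:, j], y) for j in range(X.shape[1])]
print(int(np.argmax(scores)))           # 1: the informative feature ranks first
```

Ranking features individually is cheap but ignores interactions between features, which is the usual caveat with this family of selection methods.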
Covariance Estimation For Limited Training Samples
IN: INT. GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 1998
"... ..."
(Show Context)
"Small Sample Size": A Methodological Problem in Bayes Plugin Classifier for Image Recognition
"... New technologies in the form of improved instrumentation have made it possible to take detailed measurements over recognition patterns. This increase in the number of features or parameters for each pattern of interest not necessarily generates better classification performance. In fact, in probl ..."
Abstract

Cited by 1 (1 self)
New technologies in the form of improved instrumentation have made it possible to take detailed measurements over recognition patterns. This increase in the number of features or parameters for each pattern of interest does not necessarily generate better classification performance. In fact, in problems where the number of training samples is less than the number of parameters, i.e. "small sample size" problems, not all parameters can be estimated, and traditional classifiers often used to analyse lower-dimensional data deteriorate. The Bayes plug-in classifier has been successfully applied to discriminate high-dimensional data. This classifier is based on similarity measures that involve the inverse of the sample group covariance matrices. However, these matrices are singular in "small sample size" problems. Thus, several other methods of covariance estimation have been proposed in which the sample group covariance estimate is replaced by covariance matrices of various forms. In this report, some of these approaches are reviewed and a new covariance estimator is proposed. The new estimator does not require an optimisation procedure, but an eigenvector-eigenvalue ordering process to select information from the projected sample group covariance matrices whenever possible and the pooled covariance otherwise. The effectiveness of the method is shown by some experimental results.
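The plug-in rule and the pooled-covariance fallback the abstract mentions can be sketched as follows. The numbers are toy values, and the paper's actual estimator adds an eigenvector-eigenvalue selection step not shown here:

```python
import numpy as np

def plugin_scores(x, means, covs, pooled):
    """Quadratic discriminant scores, substituting the pooled covariance
    whenever a group covariance estimate is singular."""
    scores = []
    for m, S in zip(means, covs):
        if np.linalg.matrix_rank(S) < S.shape[0]:
            S = pooled                  # fallback for the singular case
        inv = np.linalg.inv(S)
        d = x - m
        # Negative Mahalanobis distance plus the log-determinant penalty.
        scores.append(-d @ inv @ d - np.log(np.linalg.det(S)))
    return np.array(scores)

means = [np.zeros(2), np.full(2, 3.0)]
covs = [np.eye(2), np.zeros((2, 2))]    # second group estimate is singular
pooled = np.eye(2)
s = plugin_scores(np.array([3.0, 3.0]), means, covs, pooled)
print(int(s.argmax()))                  # 1: assigned to the second class
```

The point of the fallback is that classification can proceed even when one class has too few samples for a usable covariance estimate of its own.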
IMPLEMENTATION OF PATTERN RECOGNITION TECHNIQUES AND OVERVIEW OF ITS APPLICATIONS IN VARIOUS AREAS OF ARTIFICIAL INTELLIGENCE
"... A pattern is an entity, vaguely defined, that could be given a name, e.g. fingerprint image, handwritten word, human face, speech signal, DNA sequence. Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and m ..."
Abstract
A pattern is an entity, vaguely defined, that could be given a name, e.g. a fingerprint image, handwritten word, human face, speech signal, or DNA sequence. Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of the patterns. The goal of pattern recognition research is to clarify the complicated mechanisms of decision-making processes and to automate these functions using computers. Pattern recognition systems can be designed using the following main approaches: template matching, statistical methods, syntactic methods, and neural networks. This paper reviews pattern recognition, its process, design cycle, applications, and models, with a focus on the statistical approach.
(Untitled thesis)
, 2005
"... A classical problem in computer vision is to find that feature representation which is best for a given task. This problem has been typically addressed using feature extraction algorithms. Unfortunately, most feature extractors fail when different classes cannot be discriminated using unimodal dist ..."
Abstract
A classical problem in computer vision is to find the feature representation that is best for a given task. This problem has typically been addressed using feature extraction algorithms. Unfortunately, most feature extractors fail when different classes cannot be discriminated using unimodal distributions. For example, to successfully classify objects, one needs to find an image representation that is robust to within-class variations; e.g., while shape cues are generally useful to separate most pears from apples, texture is the key to separate apples from Asian pears. In the first part of this thesis we present a method based on multimodal feature extraction which can automatically discover those features that are most adequate to represent distinct classes. This multimodality can be easily formulated by dividing each of the classes into a set of subclasses. In particular, we will concentrate on the use of subclass-based AdaBoost, Principal Component Analysis and Linear Discriminant Analysis. Multimodal representations are also necessary for face and facial feature detection algorithms, although these are two-class problems (e.g. faces versus non-faces, eyes
Multimodal Oriented Discriminant Analysis, 2005
"... Linear discriminant analysis (LDA) has been an active topic of research during the last century. However, the existing algorithms have several limitations when applied to visual data. LDA is only optimal for Gaussian distributed classes with equal covariance matrices, and only classes1 features can ..."
Abstract
Linear discriminant analysis (LDA) has been an active topic of research during the last century. However, the existing algorithms have several limitations when applied to visual data. LDA is only optimal for Gaussian-distributed classes with equal covariance matrices, and only classes - 1 features can be extracted. Moreover, LDA does not scale well to high-dimensional data (overfitting), and it cannot optimally handle multimodal distributions. In this paper, we introduce Multimodal Oriented Discriminant Analysis (MODA), an LDA extension which can overcome these drawbacks. A new formulation and several novelties are proposed:
• An optimal dimensionality reduction for multimodal Gaussian classes with different covariances is derived. The new criterion allows for extracting more than classes - 1 features.
• A covariance approximation is introduced to improve generalization and avoid overfitting when dealing with high-dimensional data.
• A linear-time iterative majorization method is suggested in order to find a local
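The "classes - 1" ceiling mentioned above follows from the rank of the between-class scatter matrix that standard LDA maximizes over: with c classes the c deviations from the grand mean sum to zero, so the matrix has rank at most c - 1. A quick numerical check with synthetic class means (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
c, d, n = 3, 10, 20                      # 3 classes in 10 dimensions
means = rng.normal(size=(c, d))
grand = means.mean(axis=0)
# Between-class scatter: weighted outer products of (class mean - grand mean).
Sb = sum(n * np.outer(m - grand, m - grand) for m in means)
print(np.linalg.matrix_rank(Sb))         # 2, i.e. classes - 1
```

MODA's subclass-style multimodal formulation raises this ceiling precisely because it introduces more than one center per class into the scatter matrix.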