Results 1–10 of 19
Multiresolution image classification by hierarchical modeling with two dimensional hidden Markov models
 IEEE TRANS. INFORMATION THEORY
, 2000
Cited by 49 (9 self)
This paper treats a multiresolution hidden Markov model for classifying images. Each image is represented by feature vectors at several resolutions, which are statistically dependent as modeled by the underlying state process, a multiscale Markov mesh. Unknowns in the model are estimated by maximum likelihood, in particular by employing the expectation-maximization algorithm. An image is classified by finding the optimal set of states with maximum a posteriori probability. States are then mapped into classes. The multiresolution model enables multiscale information about context to be incorporated into classification. Suboptimal algorithms based on the model provide progressive classification that is much faster than the algorithm based on single-resolution hidden Markov models.
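As a toy illustration of the MAP state-decoding step: the paper's model is a two-dimensional multiscale Markov mesh, but the core idea can be sketched with an ordinary one-dimensional HMM and a log-domain Viterbi pass (all states and parameters below are hypothetical, not the authors'):

```python
import math

def viterbi(obs, states, log_pi, log_A, log_B):
    """Return the MAP state sequence for one observation sequence."""
    # forward pass: best log-probability of reaching each state
    V = [{s: log_pi[s] + log_B[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + log_A[p][s])
            col[s] = V[-1][prev] + log_A[prev][s] + log_B[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# hypothetical two-class model: "text" emits busy blocks, "photo" smooth ones
log = math.log
states = ["text", "photo"]
log_pi = {"text": log(0.5), "photo": log(0.5)}
log_A = {"text": {"text": log(0.8), "photo": log(0.2)},
         "photo": {"text": log(0.2), "photo": log(0.8)}}
log_B = {"text": {"busy": log(0.9), "smooth": log(0.1)},
         "photo": {"busy": log(0.1), "smooth": log(0.9)}}
```

Decoded states would then be mapped into classes, as in the paper.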
Automatic image orientation detection
 in Proc. IEEE ICIP’99
Cited by 25 (3 self)
Abstract—We present an algorithm for automatic image orientation estimation using a Bayesian learning framework. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a learning vector quantizer (LVQ) can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. We further show how principal component analysis (PCA) and linear discriminant analysis (LDA) can be used as a feature extraction mechanism to remove redundancies in the high-dimensional feature vectors used for classification. The proposed method is compared with four different commonly used classifiers, namely k-nearest neighbor, support vector machine (SVM), a mixture of Gaussians, and hierarchical discriminating regression (HDR) tree. Experiments on a database of 16,344 images have shown that our proposed algorithm achieves an accuracy of approximately 98% on the training set and over 97% on an independent test set. A slight improvement in classification accuracy is achieved by employing classifier combination techniques. Index Terms—Bayesian learning, classifier combination, expectation maximization, feature extraction, hierarchical discriminant regression, image database, image orientation, learning vector quantization, support vector machine.
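A minimal sketch of the Bayesian decision step, assuming hypothetical two-class codebooks: the class-conditional density is approximated by a kernel mixture centered on the codewords, and the class maximizing prior times likelihood is chosen (the paper's LVQ training, MDL codebook sizing, and PCA/LDA stages are omitted):

```python
import math

def gauss(x, c, sigma=1.0):
    """Isotropic Gaussian kernel centered at codeword c, evaluated at x."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, c))
    return math.exp(-d2 / (2 * sigma ** 2))

def classify(x, codebooks, priors):
    """Bayes rule with class-conditional densities approximated by a
    kernel mixture over each class's codewords."""
    scores = {}
    for cls, words in codebooks.items():
        likelihood = sum(gauss(x, w) for w in words) / len(words)
        scores[cls] = priors[cls] * likelihood
    return max(scores, key=scores.get)

# hypothetical codebooks for two orientation classes
codebooks = {"up": [(0.0, 0.0), (1.0, 0.0)], "down": [(5.0, 5.0)]}
priors = {"up": 0.5, "down": 0.5}
```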
Context-based Multiscale Classification of Document Images Using Wavelet Coefficient Distributions
, 2000
Cited by 22 (1 self)
In this paper, an algorithm is developed for segmenting document images into four classes: background, photograph, text, and graph. Features used for classification are based on the distribution patterns of wavelet coefficients in high frequency bands. Two important attributes of the algorithm are its multiscale nature (it classifies an image at different resolutions adaptively, enabling accurate classification at class boundaries as well as fast classification overall) and its use of accumulated context information for improving classification accuracy.
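The flavor of the high-band features can be shown with a one-level Haar transform on a 1-D slice (a hypothetical simplification; the paper works with 2-D subbands): smooth background yields near-zero detail energy, while text-like oscillation yields large energy:

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform:
    pairwise averages (low band) and differences (high band)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def highband_energy(signal):
    """Mean squared high-band coefficient: a simple texture feature."""
    _, detail = haar_step(signal)
    return sum(d * d for d in detail) / len(detail)
```

Flat regions give zero high-band energy; alternating strokes give large energy, which is the kind of separation the classifier exploits.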
The Enhanced LBG Algorithm
, 2001
Cited by 19 (1 self)
Clustering applications cover several fields such as audio and video data compression, pattern recognition, computer vision, medical image recognition, etc. In this paper we present a new clustering algorithm called Enhanced LBG (ELBG). It belongs to the hard and K-means vector quantization groups and derives directly from the simpler LBG. The basic idea we have developed is the concept of the utility of a codeword, a powerful instrument for overcoming one of the main drawbacks of clustering algorithms: generally, the results achieved are poor when the initial codebook is badly chosen. We will present experimental results showing that ELBG is able to find better codebooks than previous clustering techniques, while its computational complexity is virtually the same as that of the simpler LBG.
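The utility of a codeword can be sketched as its cell's distortion divided by the mean cell distortion (1-D data and hypothetical values here; low-utility codewords are the ones ELBG would relocate toward high-utility cells):

```python
def nearest(point, codebook):
    """Index of the nearest codeword to a 1-D point."""
    return min(range(len(codebook)), key=lambda i: (point - codebook[i]) ** 2)

def utilities(points, codebook):
    """Per-codeword utility: cell distortion over mean cell distortion."""
    D = [0.0] * len(codebook)
    for p in points:
        i = nearest(p, codebook)
        D[i] += (p - codebook[i]) ** 2
    mean = sum(D) / len(D)
    return [d / mean for d in D] if mean > 0 else D
```

A codeword with utility near zero (an almost empty cell) contributes little to reducing distortion; ELBG's idea is to move such codewords where they are needed.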
A Vector Quantizer for Image Restoration
 IEEE Trans. Image Processing
, 1996
Cited by 12 (5 self)
A vector quantization algorithm is presented which accomplishes image restoration concurrently with image compression. The algorithm is based on nonlinear interpolative vector quantization (NLIVQ). An efficient codebook design procedure is also presented. A theoretical discussion of the algorithm is included along with results from simulations.

1. Introduction. Vector quantization (VQ) is another name for what Shannon called block source coding subject to a fidelity criterion [1]. Coding of this type maps consecutive, usually non-overlapping, segments of input data to their best matching entry in a codebook of reproduction vectors. In the context of image coding, VQ is generally considered a data compression technique. However, VQ algorithms have been presented which perform other signal processing tasks concurrently with compression. These span the range from speech processing tasks such as speaker recognition and noise suppression, to image processing tasks like halftoning, edge det...
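The interpolative-VQ idea can be sketched as encoding the degraded vector with one codebook and decoding with a separately designed restoration codebook (both codebooks below are hypothetical; the paper's codebook design procedure is not reproduced):

```python
def nlivq_restore(x, enc_codebook, dec_codebook):
    """Encode x by nearest neighbor in the encoder codebook, then emit the
    paired entry of the decoder (restoration) codebook: restoration is
    folded into the table lookup, concurrently with compression."""
    i = min(range(len(enc_codebook)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, enc_codebook[k])))
    return dec_codebook[i]

# hypothetical paired codebooks: entry i of dec_codebook is the restored
# counterpart of entry i of enc_codebook
enc_codebook = [(0.0, 0.0), (10.0, 10.0)]
dec_codebook = [(1.0, 1.0), (9.0, 9.0)]
```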
Vector Quantization and Density Estimation
 In SEQUENCES97
, 1997
Cited by 7 (0 self)
The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords are chosen from among the ensemble of codebooks so as to minimize bit rate, then the codebook selected provides an implicit estimate of the underlying class. Less is known about the corresponding connections between lossy compression and continuous sources. Here we consider aspects of estimating conditional and unconditional densities in conjunction with Bayes-risk weighted vector quantization for joint compression and classification.
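The implicit-classification idea described above, sketched minimally with hypothetical codebooks and distortion standing in for bit rate: encode the sample with each class's codebook and report the class whose codebook fits best:

```python
def classify_by_codebook(x, codebooks):
    """Choose the class whose codebook encodes x with least distortion;
    the selected codebook is an implicit estimate of the class."""
    def distortion(cb):
        return min(sum((a - b) ** 2 for a, b in zip(x, c)) for c in cb)
    return min(codebooks, key=lambda cls: distortion(codebooks[cls]))

# hypothetical per-class codebooks
codebooks = {"class0": [(0.0, 0.0), (1.0, 1.0)], "class1": [(5.0, 5.0)]}
```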
Optimality of KLT for High-Rate Transform Coding of Gaussian Vector-Scale Mixtures: Application to Reconstruction, Estimation and Classification
Cited by 6 (0 self)
The Karhunen-Loève transform (KLT) is known to be optimal for high-rate transform coding of Gaussian vectors for both fixed-rate and variable-rate encoding. The KLT is also known to be suboptimal for some non-Gaussian models. This paper proves high-rate optimality of the KLT for variable-rate encoding of a broad class of non-Gaussian vectors: Gaussian vector-scale mixtures (GVSM), which extend the Gaussian scale mixture (GSM) model of natural signals. A key concavity property of the scalar GSM (the same as the scalar GVSM) is derived to complete the proof. Optimality holds under a broad class of quadratic criteria, which include mean squared error (MSE) as well as generalized f-divergence loss in estimation and binary classification systems. Finally, the theory is illustrated using two applications: signal estimation in multiplicative noise and joint optimization of classification/reconstruction systems.
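For context, the KLT referred to here, stated minimally: for a zero-mean random vector with covariance matrix diagonalized by an orthonormal eigenbasis, transforming into that basis decorrelates the coefficients, which the transform coder then quantizes independently:

```latex
% KLT of a zero-mean random vector x with covariance \Sigma:
\Sigma = U \Lambda U^{\mathsf{T}}, \qquad y = U^{\mathsf{T}} x,
\qquad \mathbb{E}\!\left[ y y^{\mathsf{T}} \right]
      = U^{\mathsf{T}} \Sigma U = \Lambda .
```

The coefficients $y_i$ are uncorrelated with variances $\lambda_i$ (the eigenvalues); the paper's contribution is that this transform remains high-rate optimal, for variable-rate coding and quadratic criteria, over the GVSM class rather than only for Gaussian vectors.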
Tree Structured Nonlinear Signal Modeling and Prediction
 Proc. of the IEEE 1995 International Conference on Acoustics, Speech and Signal Processing
Cited by 6 (0 self)
Abstract—In this paper, we develop a regression tree approach to identification and prediction of signals that evolve according to an unknown nonlinear state space model. In this approach, a tree is recursively constructed that partitions the d-dimensional state space into a collection of piecewise homogeneous regions utilizing a P-ary splitting rule with an entropy-based node impurity criterion. On this partition, the joint density of the state is approximately piecewise constant, leading to a nonlinear predictor that nearly attains minimum mean square error. This process decomposition is closely related to a generalized version of the thresholded AR signal model (ART), which we call piecewise constant AR (PCAR). We illustrate the method for two cases where classical linear prediction is ineffective: a chaotic "double-scroll" signal measured at the output of a Chua-type electronic circuit and a second-order ART model. We show that the prediction errors are comparable with the nearest neighbor approach to nonlinear prediction but with greatly reduced complexity. Index Terms—Chaotic signal analysis, nonlinear and nonparametric modeling and prediction, piecewise constant AR models, recursive partitioning, regression trees.
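A single node of such a regression tree can be sketched as choosing the 1-D threshold that minimizes the squared error of a piecewise-constant fit (hypothetical data; the paper's P-ary, entropy-based splitting over a d-dimensional state space is more general):

```python
def best_split(xs, ys):
    """Binary split of a 1-D state space into two piecewise-constant
    regions, chosen to minimize total squared prediction error."""
    def sse(vals):
        # squared error of predicting each value by the region mean
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(xs))[1:]:          # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]
```

Applied recursively, such splits yield the piecewise-constant predictor the abstract describes.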
Combined compression and classification with learning vector quantization
 IEEE Trans. Info. Theory
, 1999
Cited by 5 (0 self)
Abstract—Combined compression and classification problems are becoming increasingly important in many applications with large amounts of sensory data and large sets of classes. These applications range from automatic target recognition (ATR) to medical diagnosis, speech recognition, and fault detection and identification in manufacturing systems. In this paper, we develop and analyze a learning vector quantization (LVQ)-based algorithm for combined compression and classification. We show convergence of the algorithm using the ODE method from stochastic approximation. We illustrate the performance of our algorithm with some examples. Index Terms—Classification, compression, learning vector quantization, nonparametric, stochastic approximation.
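A sketch of the classical LVQ1 update rule underlying such methods (1-D, hypothetical values; the paper's combined compression/classification algorithm and its convergence analysis are not reproduced): the winning codeword moves toward the sample when labels agree and away otherwise:

```python
def lvq1_step(codebook, labels, x, y, lr=0.1):
    """One LVQ1 step on a 1-D labeled codebook: attract the winning
    codeword if its label matches y, repel it otherwise.
    Returns the winner's index; mutates codebook in place."""
    i = min(range(len(codebook)), key=lambda k: (x - codebook[k]) ** 2)
    sign = 1.0 if labels[i] == y else -1.0
    codebook[i] += sign * lr * (x - codebook[i])
    return i
```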