Results 1–6 of 6
Incremental Linear Discriminant Analysis Using Sufficient Spanning Set Approximations
Abstract

Cited by 12 (2 self)
This paper presents a new incremental learning solution for Linear Discriminant Analysis (LDA). We apply the concept of the sufficient spanning set approximation in each update step, i.e. for the between-class scatter matrix, the projected data matrix and the total scatter matrix. The algorithm yields a more general and efficient solution to incremental LDA than previous methods. It also significantly reduces the computational complexity while providing a solution which closely agrees with the batch LDA result. The proposed algorithm has a time complexity of O(Nd^2) and requires O(Nd) space, where d is the reduced subspace dimension and N the data dimension. We show two applications of incremental LDA: first, the method is applied to semi-supervised learning by integrating it into an EM framework; secondly, we apply it to the task of merging large databases which were collected during MPEG standardization for face image retrieval.
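The core trick behind this kind of incremental update can be illustrated on the total scatter matrix alone: merge an existing eigenmodel with a new batch by re-diagonalizing inside a small "sufficient spanning set" instead of recomputing from all the data. The sketch below is an illustration of that idea, not the authors' exact algorithm (it forms full d×d scatters for clarity; a practical implementation would keep everything inside the spanned subspace). The function name and interface are our own.

```python
import numpy as np

def merge_scatter_eigenmodels(U1, L1, mu1, n1, X2, rank):
    """Illustrative sketch: merge an eigenmodel (U1, L1, mu1, n1) of one
    set's total scatter with a new batch X2 (rows = samples), using a
    sufficient spanning set, then re-diagonalize in the small subspace."""
    n2 = X2.shape[0]
    mu2 = X2.mean(axis=0)
    mu3 = (n1 * mu1 + n2 * mu2) / (n1 + n2)
    Xc = X2 - mu2
    S2 = Xc.T @ Xc                      # scatter of the new batch
    dmu = (mu1 - mu2)[:, None]
    # Exact merge identity for total scatter matrices:
    S3 = U1 @ np.diag(L1) @ U1.T + S2 + (n1 * n2 / (n1 + n2)) * (dmu @ dmu.T)
    # Sufficient spanning set: old basis, new-batch directions, mean shift
    Phi, _ = np.linalg.qr(np.hstack([U1, Xc.T, dmu]))
    Z = Phi.T @ S3 @ Phi                # small projected scatter
    D, V = np.linalg.eigh(Z)
    order = np.argsort(D)[::-1][:rank]  # keep the top `rank` components
    return Phi @ V[:, order], D[order], mu3, n1 + n2
```

When the retained rank covers the data, the merged eigenvalues agree with a batch eigendecomposition of the pooled scatter; the savings come from the eigendecomposition being done in the small projected space.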
OCFS: Optimal Orthogonal Centroid Feature Selection for Text Categorization
 In Proc. of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
, 2005
Abstract

Cited by 9 (0 self)
Text categorization is an important research area in many Information Retrieval (IR) applications. To save storage space and computation time in text categorization, efficient and effective algorithms for reducing the data before analysis are highly desirable. Traditional techniques for this purpose can generally be classified into feature extraction and feature selection. For efficiency reasons, the latter is more suitable for text data such as web documents. However, many popular feature selection techniques such as Information Gain (IG) and the χ² test (CHI) are greedy in nature and thus may not be optimal according to some criterion. Moreover, the performance of these greedy methods may deteriorate when the retained data dimension is extremely low. In this paper, we propose an efficient optimal feature selection algorithm, called Orthogonal Centroid Feature Selection (OCFS), which optimizes the objective function of the Orthogonal Centroid (OC) subspace learning algorithm in a discrete solution space. Experiments on 20 Newsgroups (20NG), Reuters Corpus Volume 1 (RCV1) and Open Directory Project (ODP) data show that OCFS is consistently better than IG and CHI while requiring less computation time, especially when the reduced dimension is extremely small.
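The discrete optimization described here reduces to a simple per-feature score: weight each class centroid's squared distance from the global centroid and keep the top-k features. The sketch below follows that idea under our own function name and is not a reproduction of the paper's implementation.

```python
import numpy as np

def ocfs_select(X, y, k):
    """Sketch of the OCFS idea: score feature j by
    sum_c (n_c / n) * (m_cj - m_j)^2, where m_c is the centroid of
    class c and m the global centroid, then keep the top-k features
    (a discrete optimum over axis-aligned projections)."""
    n, d = X.shape
    m = X.mean(axis=0)
    score = np.zeros(d)
    for c in np.unique(y):
        Xc = X[y == c]
        score += (len(Xc) / n) * (Xc.mean(axis=0) - m) ** 2
    return np.argsort(score)[::-1][:k]
```

One pass over the class centroids suffices, which is where the speed advantage over greedy, per-feature statistical tests comes from on large sparse text collections.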
Mining Adaptive Ratio Rules from Distributed Data Sources
, 2005
Abstract
Abstract. Different from traditional association-rule mining, a new paradigm called Ratio Rule (RR) was proposed recently. Ratio rules aim at capturing quantitative association knowledge. We extend this framework to mining ratio rules from distributed and dynamic data sources, which is a novel and challenging problem. The traditional technique for ratio rule mining is an eigensystem analysis which can often fall victim to noise, and this has greatly limited the application of ratio rule mining. Distributed data sources impose additional constraints: the mining procedure must be robust in the presence of noise, because it is difficult to clean all the data sources in real time in real-world tasks. In addition, the traditional batch methods for ratio rule mining cannot cope with dynamic data. In this paper, we propose an integrated method for mining ratio rules from distributed and changing data sources: we first mine the ratio rules from each data source separately through a novel robust and adaptive one-pass algorithm, called Robust and Adaptive Ratio Rule (RARR), and then integrate the rules of each data source in a simple probabilistic model. In this way, we can acquire the global rules from all the local information sources adaptively. We show that the RARR technique converges to a fixed point and is robust, and that the integration of rules is efficient.
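A ratio rule is essentially a dominant eigenvector of the data's correlation structure, so a one-pass estimate can be sketched with a streaming eigenvector update. The following uses Oja's rule as a generic illustration of one-pass eigensystem analysis; it is not the RARR algorithm itself, which additionally handles robustness to noisy records.

```python
import numpy as np

def one_pass_ratio_direction(stream, lr=0.05):
    """Illustrative one-pass estimate of the dominant ratio direction
    (principal eigenvector) via Oja's rule. Each element of `stream`
    is one transaction vector; the data is seen exactly once."""
    it = iter(stream)
    w = np.asarray(next(it), dtype=float)
    w /= np.linalg.norm(w)              # start from the first record
    for x in it:
        x = np.asarray(x, dtype=float)
        y = w @ x                       # projection onto current estimate
        w += lr * y * (x - y * w)       # Oja update toward the eigenvector
        w /= np.linalg.norm(w)          # keep unit norm
    return w
```

Because each record is processed once and then discarded, the estimate adapts as the source drifts, which is the property the abstract requires of a dynamic-data miner.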
Toshiba Research Europe Ltd, Computer Vision Group
Abstract
This paper presents a new incremental learning solution for Linear Discriminant Analysis (LDA). We apply the concept of the sufficient spanning set approximation in each update step, i.e. for the between-class scatter matrix, the projected data matrix and the total scatter matrix. The algorithm yields a more general and efficient solution to incremental LDA than previous methods. It also significantly reduces the computational complexity while providing a solution which closely agrees with the batch LDA result. The proposed algorithm has a time complexity of O(Nd^2) and requires O(Nd) space, where d is the reduced subspace dimension and N the data dimension. We show two applications of incremental LDA: first, the method is applied to semi-supervised learning by integrating it into an EM framework; secondly, we apply it to the task of merging large databases which were collected during MPEG standardization for face image retrieval.
Incremental Linear Discriminant Analysis Using Sufficient Spanning Sets and Its Applications (Int J Comput Vis, DOI 10.1007/s11263-010-0381-3)
, 2009
Abstract
Abstract This paper presents an incremental learning solution for Linear Discriminant Analysis (LDA) and its applications to object recognition problems. We apply the sufficient spanning set approximation in three steps, i.e. updates of the total scatter matrix, the between-class scatter matrix and the projected data matrix, which leads to an online solution that closely agrees with the batch solution in accuracy while significantly reducing the computational complexity. The algorithm yields an efficient solution to incremental LDA even when the number of classes as well as the set size is large. The incremental LDA method has also been shown to be useful for semi-supervised online learning: label propagation is done by integrating the incremental LDA into an EM framework. The method has been demonstrated on the tasks of merging large datasets which were collected during MPEG standardization for face image retrieval, face authentication using the BANCA dataset, and object categorisation using the Caltech-101 dataset.
INCIDE the Brain of a Bee: Visualising Honeybee Brain Activity in Real Time by Semantic Segmentation
Abstract
Figure 1: The honeybee brain encodes odors by activity patterns of units called glomeruli in the antennal lobe (AL). These patterns can be observed in calcium imaging movies. For orientation, glomeruli 17 and 33 are labelled. Left: frontal view onto an AL model with a total of 160 glomeruli. Right: raw data (upper row) and visualisation result after processing with the presented method (lower row). We show consecutive images before and during odor application.

We present a software solution for processing recordings of honeybee brain activity in real time. In the honeybee brain, odors elicit spatiotemporal activity patterns that encode odor identity. These patterns of neural activity in units called glomeruli can be recorded by calcium imaging with fluorescent dyes, but so far glomerulus segmentation was only possible offline, making interactive experiments impossible. Our main contribution is an adaptive algorithm for image processing, along with a fast implementation for the graphics processing unit that enables semantic segmentation in real time. The semantics is based on the temporal dimension, relying on the fact that time series of pixels within a glomerulus are correlated. We evaluate our software on reference data, demonstrate applicability in a biological experiment, and provide free source code. This paves the way for interactive experiments where neural units can be selected online based on their past activity.
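The temporal-correlation principle the abstract relies on can be shown in a few lines: pixels whose calcium-imaging time series correlate strongly belong to the same glomerulus. The toy sketch below groups pixels by Pearson correlation with a seed pixel; the paper's method is an adaptive, GPU-accelerated algorithm, not this simple global threshold, and the function name and parameters here are our own.

```python
import numpy as np

def correlation_segment(movie, seed, thresh=0.8):
    """Toy temporal-correlation segmentation: return a boolean mask of
    pixels whose time series have Pearson correlation >= thresh with
    the seed pixel's time series. movie has shape (T, H, W)."""
    T, H, W = movie.shape
    ts = movie.reshape(T, -1)
    ts = ts - ts.mean(axis=0)                 # center each pixel's series
    ts /= np.linalg.norm(ts, axis=0) + 1e-12  # unit-normalize (flat pixels -> 0)
    seed_ts = ts[:, seed[0] * W + seed[1]]
    corr = seed_ts @ ts                       # Pearson correlation per pixel
    return (corr >= thresh).reshape(H, W)
```

Since each pixel's correlation is an independent dot product over its time series, this computation parallelizes naturally across pixels, which is what makes a real-time GPU implementation plausible.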