Scaling Clustering Algorithms to Large Databases, Microsoft Research Report, 1998
"... Practical clustering algorithms require multiple data scans to achieve convergence. For large databases, these scans become prohibitively expensive. We present a scalable clustering framework applicable to a wide class of iterative clustering. We require at most one scan of the database. In this wor ..."
Cited by 247 (5 self)
Abstract:
Practical clustering algorithms require multiple data scans to achieve convergence. For large databases, these scans become prohibitively expensive. We present a scalable clustering framework applicable to a wide class of iterative clustering algorithms; it requires at most one scan of the database. In this work, the framework is instantiated and numerically justified with the popular K-Means clustering algorithm. The method is based on identifying regions of the data that are compressible, regions that must be maintained in memory, and regions that are discardable. The algorithm operates within the confines of a limited memory buffer. Empirical results demonstrate that the scalable scheme outperforms a sampling-based approach. In our scheme, data resolution is preserved to the extent possible given the size of the allocated memory buffer and the fit of the current clustering model to the data. The framework extends naturally to updating multiple clustering models simultaneously. We evaluate it empirically on synthetic and publicly available data sets.
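The compress/retain/discard idea in the abstract can be sketched in a few lines. The sketch below is our own minimal illustration, not the authors' algorithm: a fixed distance threshold stands in for the paper's compressibility test, and the sufficient statistics kept per cluster are just a count and a sum.

```python
import numpy as np

def single_pass_kmeans(chunks, k, threshold, rng=None):
    """Illustrative single-scan clustering in the spirit of the framework:
    points near a centroid are "discardable" and folded into sufficient
    statistics (count, sum); the rest are "retained" in a memory buffer."""
    rng = np.random.default_rng(rng)
    centroids, counts, sums, retained = None, None, None, []
    for chunk in chunks:                       # one pass over the data
        chunk = np.asarray(chunk, dtype=float)
        if centroids is None:                  # seed from the first chunk
            idx = rng.choice(len(chunk), size=k, replace=False)
            centroids = chunk[idx].copy()
            counts, sums = np.zeros(k), np.zeros_like(centroids)
        d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        close = d[np.arange(len(chunk)), nearest] < threshold
        for j in range(k):                     # compress the close points
            members = chunk[close & (nearest == j)]
            counts[j] += len(members)
            sums[j] += members.sum(axis=0)
        retained.extend(chunk[~close])         # keep the rest in the buffer
        nonzero = counts > 0                   # refresh the model
        centroids[nonzero] = sums[nonzero] / counts[nonzero, None]
    return centroids, counts, np.array(retained)
```

Memory use is bounded by the chunk size plus the retained buffer; every point is either compressed into the statistics or retained, never revisited.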
Penalized Discriminant Analysis
Annals of Statistics, 1995
"... Fisher's linear discriminant analysis (LDA) is a popular dataanalytic tool for studying the relationship between a set of predictors and a categorical response. In this paper we describe a penalized version of LDA. It is designed for situations in which there are many highly correlated predictors, ..."
Cited by 131 (9 self)
Abstract:
Fisher's linear discriminant analysis (LDA) is a popular data-analytic tool for studying the relationship between a set of predictors and a categorical response. In this paper we describe a penalized version of LDA. It is designed for situations in which there are many highly correlated predictors, such as those obtained by discretizing a function, or the grey-scale values of the pixels in a series of images. In cases such as these it is natural, efficient, and sometimes essential to impose a spatial smoothness constraint on the coefficients, both for improved prediction performance and for interpretability. We cast the classification problem into a regression framework via optimal scoring. Our proposal thereby facilitates the use of any penalized regression technique in the classification setting. The technique is illustrated with examples in speech recognition and handwritten character recognition. AMS 1991 Classifications: Primary 62H30, Secondary 62G07.
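The regression recasting mentioned in the abstract can be illustrated in a toy form: regress a class-indicator matrix on the predictors with a penalized solver, then classify by the largest fitted indicator. The ridge penalty and all names below are our choices for illustration; the paper's point is that any penalized regression technique can be plugged into this slot.

```python
import numpy as np

def ridge_optimal_scoring(X, y, lam=1.0):
    """Toy sketch of classification via penalized regression on a class
    indicator matrix (ridge penalty chosen for simplicity)."""
    classes = np.unique(y)
    n, p = X.shape
    Y = (y[:, None] == classes[None, :]).astype(float)       # n x J indicators
    B = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)  # ridge fit
    return classes, B

def predict(X, classes, B):
    # assign each point to the class whose indicator fit is largest
    return classes[(X @ B).argmax(axis=1)]
```

Replacing the ridge solve with a spatially smoothing penalty matrix recovers the smoothness constraint the abstract motivates.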
Blind Separation of Mixture of Independent Sources Through a Maximum Likelihood Approach
In Proc. EUSIPCO, 1997
"... In this paper we propose two methods for separating mixtures of independent sources without any precise knowledge of their probability distribution. They are obtained by considering a maximum likelihood solution corresponding to some given distributions of the sources and relaxing this assumption af ..."
Cited by 99 (8 self)
Abstract:
In this paper we propose two methods for separating mixtures of independent sources without any precise knowledge of their probability distribution. They are obtained by considering a maximum likelihood solution corresponding to some given distributions of the sources and relaxing this assumption afterward. The first method is specially adapted to temporally independent non-Gaussian sources and is based on the use of nonlinear separating functions. The second method is specially adapted to correlated sources with distinct spectra and is based on the use of linear separating filters. A theoretical analysis of the performance of the methods is provided. A simple procedure for optimally choosing the separating functions from a given linear space of functions is proposed. Further, in the second method, a simple implementation based on the simultaneous diagonalization of two symmetric matrices is provided. Finally, some numerical and simulation results are given illustrating the performance of the methods.
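The simultaneous diagonalization of two symmetric matrices mentioned for the second method can be sketched with a generalized symmetric eigendecomposition. The two matrices used below (zero-lag and lag-1 covariances of the mixtures) and the lag choice are our illustrative assumptions, not the paper's exact estimators.

```python
import numpy as np
from scipy.linalg import eigh

def separate_by_filters(x, lag=1):
    """Sketch: jointly diagonalize the zero-lag and lagged covariances of
    the mixtures x (channels x samples); the generalized eigenvector
    matrix serves as the separating matrix for sources with distinct
    spectra."""
    x = x - x.mean(axis=1, keepdims=True)
    n = x.shape[1]
    C0 = x @ x.T / n                               # zero-lag covariance
    C1 = x[:, lag:] @ x[:, :-lag].T / (n - lag)    # lagged covariance
    C1 = (C1 + C1.T) / 2                           # symmetrize
    _, W = eigh(C1, C0)     # solves C1 w = lambda C0 w; W' C0 W = I
    return W.T @ x, W
```

`scipy.linalg.eigh(C1, C0)` diagonalizes both matrices at once, so the recovered signals are uncorrelated at both lags; separation succeeds when the generalized eigenvalues are distinct, i.e. the sources have distinct spectra.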
Bayesian Model Assessment In Factor Analysis, 2004
"... Factor analysis has been one of the most powerful and flexible tools for assessment of multivariate dependence and codependence. Loosely speaking, it could be argued that the origin of its success rests in its very exploratory nature, where various kinds of datarelationships amongst the variable ..."
Cited by 58 (8 self)
Abstract:
Factor analysis has been one of the most powerful and flexible tools for assessing multivariate dependence and codependence. Loosely speaking, it could be argued that the origin of its success rests in its very exploratory nature, whereby various kinds of data relationships amongst the variables under study can be iteratively verified and/or refuted. Bayesian inference in factor analytic models has received renewed attention in recent years, partly due to computational advances and partly due to applied work generating factor structures, as exemplified by recent work in financial time series modeling. The focus of our current work is on exploring questions of uncertainty about the number of latent factors in a multivariate factor model, combined with methodological and computational issues of model specification and model fitting. We explore reversible jump MCMC methods that build on sets of parallel Gibbs sampling-based analyses to generate suitable empirical proposal distributions and that address the challenging problem of finding efficient proposals in high-dimensional models. Alternative MCMC methods based on bridge sampling are discussed, and these fully Bayesian MCMC approaches are compared with a collection of popular model selection methods in empirical studies.
Joint Approximate Diagonalization Of Positive Definite Hermitian Matrices
"... This paper provides an iterative algorithm to jointly approximately diagonalize K Hermitian positive definite matrices Γ_1, ..., Γ_K . Specifically it calculates the matrix B which minimizes the criterion P K k=1 n k [log det diag(BC k B ) log det(BC k B )], n k being positive numbers, ..."
Cited by 57 (8 self)
Abstract:
This paper provides an iterative algorithm to jointly approximately diagonalize K Hermitian positive definite matrices C_1, ..., C_K. Specifically, it calculates the matrix B which minimizes the criterion ∑_{k=1}^{K} n_k [log det diag(B C_k B*) − log det(B C_k B*)], the n_k being positive numbers; this criterion is a measure of the deviation from diagonality of the matrices B C_k B*. The convergence of the algorithm is discussed and some numerical experiments are performed, showing the good performance of the algorithm.
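The criterion itself is easy to evaluate numerically. The snippet below computes it directly from the formula in the abstract; by Hadamard's inequality it is nonnegative and vanishes exactly when every B C_k B* is diagonal. (This only evaluates the criterion; the paper's contribution, the iterative minimizer, is not sketched here.)

```python
import numpy as np

def diagonality_criterion(B, C_list, n_list):
    """sum_k n_k [log det diag(B C_k B*) - log det(B C_k B*)],
    a measure of the joint deviation from diagonality."""
    total = 0.0
    for C, nk in zip(C_list, n_list):
        M = B @ C @ B.conj().T
        diag_term = np.sum(np.log(np.real(np.diag(M))))   # log det of diag part
        _, logdet = np.linalg.slogdet(M)                  # log det of M itself
        total += nk * (diag_term - logdet)
    return total
```

Evaluating it at the identity on already-diagonal matrices returns zero, which is the minimum.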
Accelerating Reinforcement Learning through Implicit Imitation
Journal of Artificial Intelligence Research, 2003
"... Imitation can be viewed as a means of enhancing learning in multiagent environments. It augments ..."
Cited by 52 (0 self)
Abstract:
Imitation can be viewed as a means of enhancing learning in multiagent environments. It augments ...
A Singular Evolutive Extended Kalman Filter For Data Assimilation In Oceanography
Journal of Marine Systems, 1996
"... In this work, we propose a modified form of the extended Kalman filter for assimilating oceanic data into numerical models. Its development consists essentially in approximating the error covariance matrix by a singular low rank matrix, which amounts in practice to making no correction in those dire ..."
Cited by 52 (7 self)
Abstract:
In this work, we propose a modified form of the extended Kalman filter for assimilating oceanic data into numerical models. Its development consists essentially in approximating the error covariance matrix by a singular low-rank matrix, which amounts in practice to making no correction in those directions in which the error is attenuated by the system. This not only reduces the implementation cost to an acceptable level but may also improve the stability of the filter. The "directions of correction" of the filter evolve with time according to the model evolution, which is the most original feature of this filter, distinguishing it from other sequential assimilation methods based on projection onto a fixed basis of functions. A method for initializing the filter based on empirical orthogonal functions is also described. An example of assimilation based on the quasi-geostrophic model, for a square ocean domain with a certain wind stress forcing pattern, is given. Although this is ...
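The low-rank covariance idea can be sketched as a reduced-rank analysis step. In the hypothetical sketch below (variable names and the exact update are ours, not the paper's), the forecast error covariance is approximated as P = L U Lᵀ with L an n x r basis such as leading EOFs, so the correction acts only within the span of L and all covariance arithmetic happens in r x r matrices.

```python
import numpy as np

def reduced_rank_analysis(xf, L, U, H, R, y):
    """Illustrative rank-r Kalman analysis step: xf forecast state (n,),
    L basis (n x r), U factor covariance (r x r), H observation
    operator (m x n), R observation error covariance (m x m), y obs."""
    HL = H @ L                            # basis seen through the obs operator
    S = HL @ U @ HL.T + R                 # innovation covariance (m x m)
    K = L @ U @ HL.T @ np.linalg.inv(S)   # rank-r Kalman gain
    xa = xf + K @ (y - H @ xf)            # analysis state
    # the covariance update stays rank r: only the r x r factor changes
    Ua = np.linalg.inv(np.linalg.inv(U) + HL.T @ np.linalg.inv(R) @ HL)
    return xa, Ua
```

Directions outside the span of L receive no correction, which is exactly the behavior the abstract describes for error components attenuated by the system.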
XGvis: Interactive Data Visualization with Multidimensional Scaling, 2001
"... this article. Section 2 gives an overview of how a user operates the XGvis system. Section 3 deals with algorithm animation, direct manipulation and perturbation of the con guration. Section 4 gives details about the cost functions and their interactively controlled parameters for transformation, s ..."
Cited by 47 (1 self)
Abstract:
This article is organized as follows. Section 2 gives an overview of how a user operates the XGvis system. Section 3 deals with algorithm animation, direct manipulation, and perturbation of the configuration. Section 4 gives details about the cost functions and their interactively controlled parameters for transformation, subsetting, and weighting of dissimilarities. Section 5 describes diagnostics for MDS. Section 6 covers computational and systems aspects, including coordination of windows, algorithms, and large-data problems. Finally, Section 7 gives a tour of applications, with examples of proximity analysis, dimension reduction, and graph layout in two and more dimensions.
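The weighted MDS cost functions Section 4 refers to are variants of stress. The sketch below is a simplified Kruskal-type stress with optional weights; the normalization is one common convention, not necessarily XGvis's exact formula.

```python
import numpy as np

def mds_stress(X, D, w=None):
    """Kruskal-type stress: X is an n x k configuration, D the n x n
    target dissimilarities, w an optional n x n weight matrix."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))    # configuration distances
    i, j = np.triu_indices(len(X), k=1)     # each pair counted once
    if w is None:
        w = np.ones_like(D)
    num = (w[i, j] * (d[i, j] - D[i, j]) ** 2).sum()
    den = (w[i, j] * D[i, j] ** 2).sum()
    return np.sqrt(num / den)
```

A configuration that reproduces the dissimilarities exactly has stress zero; interactive subsetting and weighting of dissimilarities correspond to manipulating `w`.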
Classification trees with unbiased multiway splits
Journal of the American Statistical Association, 2001
"... Two univariate split methods and one linear combination split method are proposed for the construction of classification trees with multiway splits. Examples are given where the trees are more compact and hence easier to interpret than binary trees. A major strength of the univariate split methods i ..."
Cited by 44 (8 self)
Abstract:
Two univariate split methods and one linear combination split method are proposed for the construction of classification trees with multiway splits. Examples are given where the trees are more compact, and hence easier to interpret, than binary trees. A major strength of the univariate split methods is that they have negligible bias in variable selection, both when the variables differ in the number of splits they offer and when they differ in the number of missing values. This is an advantage because inferences from the tree structures can be adversely affected by selection bias. The new methods are shown to be highly competitive in terms of computational speed and classification accuracy on future observations. Key words and phrases: decision tree, linear discriminant analysis, missing value, selection bias.
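One standard way to obtain the kind of negligible selection bias the abstract claims is to choose the split variable by a significance test rather than by exhaustive split search, since a p-value adjusts for the number of categories a variable offers. The sketch below is illustrative only and is not the authors' exact tests.

```python
import numpy as np
from scipy.stats import chi2_contingency

def select_split_variable(X, y):
    """Pick the column of the categorical predictor matrix X whose
    chi-squared test against the class label y is most significant.
    Because the p-value accounts for the table's degrees of freedom,
    variables offering more splits are not automatically favored."""
    pvals = []
    for col in X.T:
        cats, classes = np.unique(col), np.unique(y)
        table = np.array([[np.sum((col == c) & (y == k)) for k in classes]
                          for c in cats])        # contingency table
        _, p, _, _ = chi2_contingency(table)
        pvals.append(p)
    return int(np.argmin(pvals))                 # smallest p-value wins
```

An informative two-level predictor beats an uninformative eight-level one, which is exactly where greedy split search tends to show selection bias.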