Results 1–10 of 118
Efficient SVM training using low-rank kernel representations
 Journal of Machine Learning Research
, 2001
Abstract

Cited by 188 (3 self)
SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large-scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small-scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low-rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low-rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.
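A minimal sketch of the idea of replacing a kernel matrix by a low-rank factorization before feeding it to the optimizer. A truncated eigendecomposition stands in for the paper's factorization technique; the data, kernel width, and rank are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))          # hypothetical training data

# RBF kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

# Rank-r approximation K ~ G G^T via truncated eigendecomposition
# (eigenvalues are returned in ascending order, so take the last r)
r = 20
w, V = np.linalg.eigh(K)
G = V[:, -r:] * np.sqrt(np.clip(w[-r:], 0.0, None))
K_lr = G @ G.T

# Spectral-norm error equals the largest discarded eigenvalue
err = np.linalg.norm(K - K_lr, 2)
```

The factor G (200 x 20) rather than K (200 x 200) is what would be handed to the IPM, which is the source of the storage and complexity savings the abstract describes.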
On Beamforming with Finite Rate Feedback in Multiple Antenna Systems
, 2003
Abstract

Cited by 188 (13 self)
In this paper, we study a multiple antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the quantized information for beamforming, we derive a universal lower bound on the outage probability for any finite set of beamformers. The universal lower bound provides a concise characterization of the gain with each additional bit of feedback information regarding the channel. Using the bound, it is shown that finite information systems approach the perfect information case as 2^(-B/(t-1)), where B is the number of feedback bits and t is the number of transmit antennas. The geometrical bounding technique, used in the proof of the lower bound, also leads to a design criterion for good beamformers, whose outage performance approaches the lower bound. The design criterion minimizes the maximum inner product between any two beamforming vectors in the beamformer codebook, and is equivalent to the problem of designing unitary space-time codes under certain conditions. Finally, we show that good beamformers are good packings of 2-dimensional subspaces in a 2t-dimensional real Grassmannian manifold with chordal distance as the metric.
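The design criterion can be evaluated directly: for a codebook of unit-norm beamformers, compute the maximum absolute inner product between distinct vectors. This sketch scores a random codebook (the paper designs codebooks that minimize this quantity; the antenna count and codebook size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
t, N = 4, 16      # t transmit antennas, N = 2^B codebook entries (B = 4 bits)

# Random unit-norm complex beamformers (a hypothetical codebook)
C = rng.standard_normal((N, t)) + 1j * rng.standard_normal((N, t))
C /= np.linalg.norm(C, axis=1, keepdims=True)

# Design criterion: maximum |<c_i, c_j>| over distinct codebook vectors;
# a good codebook makes this as small as possible
G = np.abs(C @ C.conj().T)
np.fill_diagonal(G, 0.0)
max_corr = G.max()
```

A codebook search would iterate over candidate codebooks and keep the one with the smallest `max_corr`.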
Blind Separation of Mixture of Independent Sources Through a Maximum Likelihood Approach
 In Proc. EUSIPCO
, 1997
Abstract

Cited by 101 (8 self)
In this paper we propose two methods for separating mixtures of independent sources without any precise knowledge of their probability distribution. They are obtained by considering a maximum likelihood solution corresponding to some given distributions of the sources and relaxing this assumption afterward. The first method is specially adapted to temporally independent non-Gaussian sources and is based on the use of nonlinear separating functions. The second method is specially adapted to correlated sources with distinct spectra and is based on the use of linear separating filters. A theoretical analysis of the performance of the methods has been made. A simple procedure for optimally choosing the separating functions from a given linear space of functions is proposed. Further, in the second method, a simple implementation based on the simultaneous diagonalization of two symmetric matrices is provided. Finally, some numerical and simulation results are given illustrating the performan...
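The simultaneous diagonalization of two symmetric matrices reduces to a generalized symmetric eigenproblem. A sketch with two matrices built synthetically from a hypothetical mixing matrix (standing in for correlation matrices of the mixtures at two lags):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 4
A_mix = rng.standard_normal((n, n))     # hypothetical unknown mixing matrix

# Two symmetric matrices sharing the mixing structure A D A^T
D1 = np.diag(rng.uniform(1, 2, n))      # positive -> R1 positive definite
D2 = np.diag(rng.uniform(-1, 1, n))
R1 = A_mix @ D1 @ A_mix.T
R2 = A_mix @ D2 @ A_mix.T

# Generalized eigenproblem R2 v = lambda R1 v; the eigenvector matrix V
# satisfies V^T R1 V = I and V^T R2 V = diag(w), i.e. it diagonalizes both
w, V = eigh(R2, R1)

M1 = V.T @ R1 @ V
M2 = V.T @ R2 @ V
off1 = np.abs(M1 - np.diag(np.diag(M1))).max()
off2 = np.abs(M2 - np.diag(np.diag(M2))).max()
```

In a separation context, the rows of V.T would then serve as the linear separating transform, up to scaling and permutation.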
Diffusion Wavelets
, 2004
Abstract

Cited by 72 (12 self)
We present a multiresolution construction for efficiently computing, compressing and applying large powers of operators whose high powers have low numerical rank. This allows the fast computation of functions of the operator, notably the associated Green’s function, in compressed form, and their fast application. Classes of operators satisfying these conditions include diffusion-like operators, in any dimension, on manifolds, graphs, and in non-homogeneous media. In this case our construction can be viewed as a far-reaching generalization of Fast Multipole Methods, achieved through a different point of view, and of the non-standard wavelet representation of Calderón-Zygmund and pseudo-differential operators, achieved through a different multiresolution analysis adapted to the operator. We show how the dyadic powers of an operator can be used to induce a multiresolution analysis, as in classical Littlewood-Paley and wavelet theory, and we show how to construct, with fast and stable algorithms, scaling function and wavelet bases associated to this multiresolution analysis, and the corresponding downsampling operators, and use them to compress the corresponding powers of the operator. This allows us to extend multiscale signal processing to general spaces (such as manifolds and graphs) in a very natural way, with corresponding fast algorithms.
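The core observation — that dyadic powers of a diffusion operator lose numerical rank quickly — can be checked on a toy example. Here a random-walk operator on a cycle graph stands in for the paper's general setting, and repeated squaring produces the dyadic powers:

```python
import numpy as np

n = 64

# Lazy random-walk diffusion operator on a cycle graph:
# stay with prob 1/2, step to each neighbor with prob 1/4
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = 0.5
    T[i, (i - 1) % n] = 0.25
    T[i, (i + 1) % n] = 0.25

# Dyadic powers T^2, T^4, T^8, T^16 via repeated squaring;
# record the numerical rank of each at a fixed threshold
P = T.copy()
ranks = []
for _ in range(4):
    P = P @ P                                   # P is now the next dyadic power
    s = np.linalg.svd(P, compute_uv=False)
    ranks.append(int((s > 1e-6).sum()))
```

The shrinking ranks are what make it worthwhile to store each dyadic power in compressed (low-rank) form, which is the quantitative hook for the multiresolution construction.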
Conic Reconstruction and Correspondence from Two Views
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
Abstract

Cited by 48 (3 self)
Conics are widely accepted as one of the most fundamental image features together with points and line segments. The problem of space reconstruction and correspondence of two conics from two views is addressed in this paper. It is shown that there are two independent polynomial conditions on the corresponding pair of conics across two views, given the relative orientation of the two views. These two correspondence conditions are derived algebraically and one of them is shown to be fundamental in establishing the correspondences of conics. A unified closed-form solution is also developed for both projective reconstruction of conics in space from two views for uncalibrated cameras and metric reconstruction from calibrated cameras. Experiments are conducted to demonstrate the discriminability of the correspondence conditions and the accuracy and stability of the reconstruction both for simulated and real images. Keywords: conic, stereo correspondence, reconstruction.
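As background for the representation used in such work: an image conic can be written as a symmetric 3x3 matrix C acting on homogeneous points, and incidence x^T C x = 0 is preserved under a projective transform H via C' = H^{-T} C H^{-1}. A small sketch with a hypothetical homography (not the paper's two-view conditions):

```python
import numpy as np

# Unit circle x^2 + y^2 - 1 = 0 as a symmetric 3x3 matrix in
# homogeneous coordinates (x, y, 1)
C = np.diag([1.0, 1.0, -1.0])

# Hypothetical invertible homography (3I + noise keeps it well-conditioned)
rng = np.random.default_rng(4)
H = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
Hi = np.linalg.inv(H)

# Points map as x' = H x, so the conic maps as C' = H^{-T} C H^{-1}
C_t = Hi.T @ C @ Hi

x = np.array([1.0, 0.0, 1.0])        # a point on the circle
res_before = x @ C @ x               # incidence before the transform
res_after = (H @ x) @ C_t @ (H @ x)  # incidence after the transform
```

Both residuals vanish, which is the invariance that algebraic correspondence conditions on conic pairs build on.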
ALGORITHM 656: An Extended Set of Basic Linear Algebra . . .
, 1988
Abstract

Cited by 46 (9 self)
... Subprograms (Level 2 BLAS). Level 2 BLAS are targeted at matrix-vector operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers. The model implementation provides a portable set of FORTRAN 77 Level 2 BLAS for machines where specialized implementations do not exist or are not required. The test software aims to verify that specialized implementations meet the specification of Level 2 BLAS and that implementations are correctly installed.
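The flagship Level 2 BLAS operation is GEMV, y := alpha*A*x + beta*y. A minimal example through SciPy's low-level BLAS wrappers (a modern interface to the routines, not the FORTRAN 77 model implementation described above):

```python
import numpy as np
from scipy.linalg.blas import dgemv

rng = np.random.default_rng(5)
# BLAS expects column-major (Fortran-order) storage
A = np.asfortranarray(rng.standard_normal((6, 4)))
x = rng.standard_normal(4)

# y := 1.0 * A @ x  (double-precision GEMV)
y = dgemv(1.0, A, x)
```

On machines with a tuned BLAS, this same call dispatches to the vendor's optimized GEMV, which is exactly the portability-with-performance goal of the Level 2 interface.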
Approximating Matrix Multiplication for Pattern Recognition Tasks
 In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1997
Abstract

Cited by 33 (0 self)
Many pattern recognition tasks, including estimation, classification, and the finding of similar objects, make use of linear models. The fundamental operation in such tasks is the computation of the dot product between a query vector and a large database of instance vectors. Often we are interested primarily in those instance vectors which have high dot products with the query. We present a random sampling based algorithm that enables us to identify, for any given query vector, those instance vectors which have large dot products, while avoiding explicit computation of all dot products. We provide experimental results that demonstrate considerable speedups for text retrieval tasks. 1 Introduction. In pattern recognition tasks, a database of instances to be processed (images, signals, documents, ...) is commonly represented as a set of vectors x_1, ..., x_n of numeric feature values. Examples of feature values include the number of times a word occurs in a document, the coordinates...
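One standard unbiased scheme of this kind samples coordinates with probability proportional to the squared query entries and averages the rescaled products. A sketch (the paper's exact sampling rule may differ; the query, database, and sample size here are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 1000
q = np.zeros(d)
q[:20] = 5.0 * rng.standard_normal(20)   # query with concentrated weight
X = rng.standard_normal((50, d))         # database of instance vectors

# Sample coordinates with probability p_i proportional to q_i^2
p = q**2 / (q**2).sum()
s = 200
idx = rng.choice(d, size=s, p=p)

# Unbiased estimate of each dot product <q, x_j>: average of q_i x_i / p_i
# over the sampled coordinates -- no full dot products computed
est = (q[idx] * X[:, idx] / p[idx]).mean(axis=1)
exact = X @ q
```

In a retrieval setting one would keep only the instances whose estimated dot product is large and verify just those exactly.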
Web Search Via Hub Synthesis
, 2001
Abstract

Cited by 32 (0 self)
We present a model for web search that captures in a unified manner three critical components of the problem: how the link structure of the web is generated, how the content of a web document is generated, and how a human searcher generates a query. The key to this unification lies in capturing the correlations between these components in terms of proximity in a shared latent semantic space. Given such a combined model, the correct answer to a search query is well defined, and thus it becomes possible to evaluate web search algorithms rigorously. We present a new web search algorithm, based on spectral techniques, and prove that it is guaranteed to produce an approximately correct answer in our model. The algorithm assumes no knowledge of the model, and is well-defined regardless of the model's accuracy.
Document clustering via adaptive subspace iteration
 In SIGIR
, 2004
Abstract

Cited by 28 (6 self)
Document clustering has long been an important problem in information retrieval. In this paper, we present a new clustering algorithm, ASI, which explicitly models the subspace structure associated with each cluster. ASI simultaneously performs data reduction and subspace identification via an iterative alternating optimization procedure. Motivated by the optimization procedure, we then provide a novel method to determine the number of clusters. We also discuss the connections of ASI with various existing clustering approaches. Finally, extensive experimental results on real data sets show the effectiveness of the ASI algorithm.
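A minimal sketch of alternating between subspace identification (fit a direction per cluster) and data reduction (reassign points to the best-fitting subspace), in the spirit of ASI but not the paper's actual objective; the data, cluster count, and rank-1 subspaces are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data near two different 1-D subspaces (the coordinate axes)
c1 = np.outer(rng.standard_normal(50), [1.0, 0.0])
c2 = np.outer(rng.standard_normal(50), [0.0, 1.0])
X = np.vstack([c1, c2]) + 0.05 * rng.standard_normal((100, 2))

k = 2
labels = rng.integers(0, k, len(X))
for _ in range(10):
    # Subspace identification: top principal direction of each cluster
    dirs = []
    for c in range(k):
        Xc = X[labels == c] if (labels == c).any() else X  # empty-cluster guard
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        dirs.append(Vt[0])
    # Data reduction: reassign each point to the subspace with least residual
    resid = np.stack([np.linalg.norm(X - np.outer(X @ v, v), axis=1)
                      for v in dirs])
    labels = resid.argmin(axis=0)
```

Each step decreases (or leaves unchanged) the total projection residual, which is why the alternation converges.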