Results 1 - 10 of 1,010
Fast Computation of Sparse Datacubes
- In VLDB, 1997
- Cited by 106 (6 self)
"Datacube queries compute aggregates over database relations at a variety of granularities, and they constitute an important class of decision support queries. Real-world data is frequently sparse, and hence efficiently computing datacubes over large sparse relations is important. We show that current techniques for computing datacubes over sparse relations do not scale well with the number of CUBE BY attributes, especially when the relation is much larger than main memory. We propose a novel algorithm for the fast computation of datacubes over sparse relations, and demonstrate the efficiency ..."
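To make the class of queries in this entry concrete, the following sketch computes a full datacube, i.e. one aggregate table per subset of the CUBE BY attributes, by brute force over an in-memory relation. The relation, attribute names, and measure are invented for illustration; the paper's contribution is computing this result efficiently when the relation is sparse and larger than main memory, which this naive version does not attempt.

```python
from itertools import combinations
from collections import defaultdict

def datacube(rows, dims, measure):
    """Naively aggregate `measure` over every subset of the CUBE BY
    attributes `dims`, one full scan per group-by."""
    cube = {}
    for k in range(len(dims) + 1):
        for group in combinations(dims, k):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[d] for d in group)
                agg[key] += row[measure]
            cube[group] = dict(agg)
    return cube

rows = [
    {"store": "A", "item": "x", "sales": 3},
    {"store": "A", "item": "y", "sales": 5},
    {"store": "B", "item": "x", "sales": 2},
]
cube = datacube(rows, ("store", "item"), "sales")
```

With d CUBE BY attributes this materializes all 2^d group-bys, which is exactly why scaling in the number of attributes is the hard part the paper addresses.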
Sequential minimal optimization: A fast algorithm for training support vector machines
- In Advances in Kernel Methods - Support Vector Learning, 1999
- Cited by 461 (3 self)
"This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation ..."
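The "smallest possible QP problems" in the abstract involve exactly two Lagrange multipliers, and the following sketch shows that analytic inner step in isolation: update one pair while keeping the linear constraint satisfied and both multipliers inside the box [0, C]. Variable names are ours, the degenerate case eta <= 0 is omitted, and the outer loop that picks which pair to optimize is not shown.

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    """Analytic joint optimization of two multipliers (a1, a2) with labels
    (y1, y2), prediction errors (E1, E2), kernel entries K11, K12, K22,
    and box constraint C. Returns the clipped new pair."""
    # Feasible segment for a2 that preserves y1*a1 + y2*a2 = const.
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    eta = K11 + K22 - 2 * K12            # curvature along the segment
    a2_new = a2 + y2 * (E1 - E2) / eta   # unconstrained optimum
    a2_new = min(H, max(L, a2_new))      # clip into [L, H]
    a1_new = a1 + y1 * y2 * (a2 - a2_new)  # restore the linear constraint
    return a1_new, a2_new
```

Because every inner step is a closed-form formula like this, no numerical QP solver is needed, which is the efficiency claim in the abstract.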
Regularization paths for generalized linear models via coordinate descent
- 2009
- Cited by 724 (15 self)
"We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods."
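The cyclical coordinate descent the abstract describes reduces, for the lasso with squared loss, to repeatedly soft-thresholding one coefficient at a time against its partial residual. The sketch below shows only that core update at a single penalty value; the regularization path, warm starts, and sparse-matrix handling of the full algorithms are omitted, and the variable names are ours.

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=50):
    """Cyclical coordinate descent for min (1/2n)||y - X beta||^2 + lam*||beta||_1.
    Each pass updates every coordinate once via a soft-threshold."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # Residual with coordinate j's contribution added back in.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            # Exact coordinate minimizer (denominator is 1 if columns
            # are standardized, as the paper assumes).
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

X = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
y = X @ np.array([2., 0.])
beta = lasso_cd(X, y, lam=0.0)
```

Each coordinate update touches only one column of X, which is why the approach handles large and sparse problems so cheaply.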
Locality-constrained linear coding for image classification
- In IEEE Conference on Computer Vision and Pattern Recognition, 2010
- Cited by 443 (20 self)
"The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC util ..."
"... achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a ..."
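The fast approximated method the snippet describes can be sketched as follows: restrict the code to the K codebook atoms nearest the descriptor, then solve the resulting small sum-to-one-constrained least-squares problem in closed form. The codebook, descriptor, and variable names below are invented for the example, and the tiny ridge term is our addition for numerical stability.

```python
import numpy as np

def llc_fast(x, B, k=3):
    """Approximated LLC code for descriptor x over codebook B
    (num_atoms x dim): K-NN search, then an analytic local solve."""
    idx = np.argsort(np.linalg.norm(B - x, axis=1))[:k]  # K nearest atoms
    Z = B[idx] - x                       # shift so x sits at the origin
    C = Z @ Z.T                          # local covariance of the k atoms
    C += 1e-8 * np.trace(C) * np.eye(k)  # small ridge for stability
    w = np.linalg.solve(C, np.ones(k))   # closed-form unnormalized weights
    w /= w.sum()                         # enforce sum-to-one constraint
    code = np.zeros(len(B))
    code[idx] = w                        # at most k nonzero entries
    return code

B = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
code = llc_fast(np.array([0.4, 0.2]), B, k=3)
```

The resulting code is sparse by construction (at most k nonzeros), without the iterative optimization that sparse coding requires.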
Simplifying Parallel Datacube Computation
- 2002
"This thesis is dedicated to my wife, Lan. This thesis involves the design and implementation of disk-based datacube programs on IBM RS6000/SP3. Parallel and sequential programs are based on simpler data structures and algorithms than previous systems developed for disk-based datacube computation. In ..."
A Data Structure for Dynamic Trees
- 1983
- Cited by 347 (21 self)
"A data structure is proposed to maintain a collection of vertex-disjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Each operation requires O(log n) time. Using this data structure, new fast algorithms are obtained for the following problems: (1) Computing nearest common ancestors. (2) Solving various network flow problems including finding maximum flows, blocking flows, and acyclic flows. (3) Computing certain kinds of constrained minimum spanning ..."
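The link and cut operations the abstract defines can be shown with a deliberately naive parent-pointer forest. This sketch implements the interface, not the paper's data structure: here a root query can cost O(n), whereas the paper supports every operation in O(log n) time.

```python
class NaiveDynamicForest:
    """A forest of rooted trees supporting link, cut, and root queries,
    stored as bare parent pointers for readability."""

    def __init__(self, n):
        self.parent = [None] * n   # parent[v] is None when v is a root

    def link(self, v, w):
        """Combine two trees into one by adding the edge (v, w); v must
        be the root of its tree."""
        assert self.parent[v] is None, "link requires v to be a root"
        self.parent[v] = w

    def cut(self, v):
        """Divide one tree into two by deleting the edge above v."""
        self.parent[v] = None

    def root(self, v):
        """Return the root of the tree containing v (O(depth) here)."""
        while self.parent[v] is not None:
            v = self.parent[v]
        return v
```

The applications listed in the abstract (nearest common ancestors, network flows, constrained spanning trees) all reduce to sequences of exactly these three operations, which is why speeding them up to O(log n) matters.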
Optimizing Selections over Datacubes
- In Proceedings of the IEEE International Conference on Scientific and Statistical Database Management, 1998
- Cited by 4 (1 self)
"Datacube queries compute aggregates over database relations at a variety of granularities. Often one wants only datacube output tuples whose aggregate value satisfies a certain condition, such as exceeding a given threshold. We develop algorithms for processing a datacube query using the selection c ..."
Bottom-up computation of sparse and Iceberg CUBE
- In Proceedings of the 5th ACM international workshop on Data Warehousing and OLAP, DOLAP '02, 1999
- Cited by 187 (4 self)
"We introduce the Iceberg-CUBE problem as a reformulation of the datacube (CUBE) problem. The Iceberg-CUBE problem is to compute only those group-by partitions with an aggregate value (e.g., count) above some minimum support threshold. The result of Iceberg-CUBE can be used (1) to answer group-by que ..."
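The Iceberg-CUBE output defined in the abstract can be sketched directly: compute every group-by, but keep only the partitions whose count meets the minimum support threshold. This is a naive one-scan-per-group-by illustration with invented data; the paper's point is to compute the same result bottom-up, so that partitions failing the threshold prune all of their descendant group-bys.

```python
from itertools import combinations
from collections import Counter

def iceberg_cube(rows, dims, minsup):
    """For every group-by over `dims`, keep only partitions whose count
    is at least `minsup` (the iceberg condition)."""
    out = {}
    for k in range(len(dims) + 1):
        for group in combinations(dims, k):
            counts = Counter(tuple(r[d] for d in group) for r in rows)
            kept = {key: c for key, c in counts.items() if c >= minsup}
            if kept:                     # drop group-bys with no survivors
                out[group] = kept
    return out

rows = [{"a": 1, "b": 1}, {"a": 1, "b": 2}, {"a": 2, "b": 2}]
cube = iceberg_cube(rows, ("a", "b"), minsup=2)
```

Note how the finest group-by ("a", "b") vanishes entirely here: every partition at that level has count 1, which is exactly the sparsity the iceberg condition exploits.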
New spectral methods for ratio cut partition and clustering
- In IEEE Trans. on Computer-Aided Design, 1992
- Cited by 296 (17 self)
"Partitioning of circuit netlists is important in many phases of VLSI design, ranging from layout to testing and hardware simulation. The ratio cut objective function [29] has received much attention since it naturally captures both min-cut and equipartition, the two traditional goals of partitioning. In this paper, we show that the second smallest eigenvalue of a matrix derived from the netlist gives a provably good approximation of the optimal ratio cut partition cost. We also demonstrate that fast Lanczos-type methods for the sparse symmetric eigenvalue problem are a robust basis ..."
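The spectral heuristic the abstract refers to can be sketched as: form the graph Laplacian D - A, take the eigenvector of its second smallest eigenvalue (the Fiedler vector), and split vertices by its sign. The example graph is invented, and the dense eigensolver here stands in for the Lanczos-type sparse methods the paper advocates.

```python
import numpy as np

def spectral_bipartition(A):
    """Two-way split of a graph with symmetric 0/1 adjacency matrix A,
    using the signs of the Fiedler vector of the Laplacian D - A."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)   # eigenpairs in ascending eigenvalue order
    return vecs[:, 1] >= 0        # boolean side label for each vertex

# Two triangles joined by a single bridge edge (2-3): the sign split
# recovers the two natural clusters.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
side = spectral_bipartition(A)
```

The smallest Laplacian eigenvalue is always 0 with a constant eigenvector, so the second smallest is the first one that carries partition information, which is why it appears in the paper's approximation bound.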