Results 1 - 7 of 7
Efficient SVM training using low-rank kernel representations
Journal of Machine Learning Research, 2001
Abstract

Cited by 188 (3 self)
SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered a desirable quality, in large-scale problems it may make training impractical. The common techniques for handling this difficulty essentially build a solution by solving a sequence of small-scale subproblems. Our current effort concentrates on the rank of the kernel matrix as a source of further enhancement of the training procedure. We first show that for a low-rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low-rank matrix, which in turn is fed to the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.
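The low-rank approximation step can be illustrated with a greedy pivoted partial Cholesky factorization, one common way to obtain K ≈ GGᵀ; the kernel, bandwidth, and pivot rule below are illustrative choices, not necessarily the ones used in the paper:

```python
import numpy as np

def partial_cholesky(K, k):
    """Greedy pivoted partial Cholesky: K ~= G @ G.T with rank k.

    K must be symmetric positive semidefinite.  Each step pivots on the
    largest remaining diagonal entry, i.e. the point with the largest
    current approximation error.
    """
    n = K.shape[0]
    G = np.zeros((n, k))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - G @ G.T
    for j in range(k):
        p = np.argmax(d)                  # pivot index
        G[:, j] = (K[:, p] - G @ G[p]) / np.sqrt(d[p])
        d -= G[:, j] ** 2                 # update residual diagonal
    return G

# Toy RBF kernel on 1-D points (illustrative)
x = np.linspace(0, 1, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
G = partial_cholesky(K, 10)
err = np.linalg.norm(K - G @ G.T)
```

With a rank much smaller than n, the optimizer can then work with G instead of the full kernel matrix, which is the source of the storage and complexity savings the abstract describes.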
Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine Learning
PhD thesis, MIT, 2002
Abstract

Cited by 88 (6 self)
Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine Learning.
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Abstract

Cited by 75 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques, as in the revised simplex method, with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Modifying a Sparse Cholesky Factorization
, 1997
Abstract

Cited by 41 (14 self)
Given a sparse symmetric positive definite matrix AA^T and an associated sparse Cholesky factorization LL^T, we develop sparse techniques for obtaining the new factorization associated with either adding a column to A or deleting a column from A. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill, Golub, Murray, and Saunders for modifying a dense Cholesky factorization. Our algorithm involves a new sparse matrix concept, the multiplicity of an entry in L. The multiplicity is essentially a measure of the number of times an entry is modified during symbolic factorization. We show that our methods extend to the general case where an arbitrary sparse symmetric positive definite matrix is modified. Our methods are optimal in the sense that they take time proportional to the number of nonzero entries in L that change. This work was supported by National Science Foundation grants DMS-9404431 and DMS-9504974.
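In the dense case, the Gill, Golub, Murray, and Saunders idea this abstract builds on reduces to the classical rank-one Cholesky update; a minimal NumPy sketch of that dense building block follows (the sparse graph and multiplicity machinery, which is the paper's actual contribution, is not reproduced here, and the function name is mine):

```python
import numpy as np

def chol_update(L, x):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of the rank-one update A + x @ x.T, via a sweep of
    Givens-style rotations down the columns."""
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])          # new diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]   # rotation coefficients
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

Downdating, i.e. subtracting x xᵀ as when a column is deleted, follows the same pattern with hyperbolic rather than Givens rotations and is numerically more delicate.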
Singular Value Decomposition-Based Methods For Signal And Image Restoration
, 1998
Abstract

Cited by 2 (2 self)
... K is a matrix of large dimension representing the blurring phenomena, g is a vector representing the observed signal, and n is a vector representing noise. Restoration methods attempt to construct an approximation to the true signal f, given g, K, and, in some cases, statistical information about the noise. Often K is severely ill-conditioned, and both K and g are corrupted with noise. Thus, standard techniques to solve Kf = g are likely to produce solutions that are highly corrupted with noise. The large dimension of K adds to the difficulty, since it is not practical to explicitly form K. In many cases, K is a Toeplitz or block Toeplitz matrix, and this structure can be exploited. Large, structured linear least squares (LS) and total least squares ...
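One of the simplest SVD-based restoration methods of the kind surveyed here is truncated-SVD (TSVD) regularization, sketched below on a small dense problem; for a real image K would be too large to form explicitly, which is where the Toeplitz structure comes in, and the blur width and truncation level below are illustrative assumptions:

```python
import numpy as np

def tsvd_solve(K, g, k):
    """Regularized solution of K f = g: keep only the k largest
    singular values and discard the noise-dominated small ones."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

# Small Gaussian blur example (illustrative)
n = 40
t = np.arange(n)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 8.0)  # ill-conditioned blur
f = np.sin(2 * np.pi * t / n)                      # true signal
rng = np.random.default_rng(1)
g = K @ f + 1e-3 * rng.normal(size=n)              # blurred, noisy data
f_naive = np.linalg.solve(K, g)                    # amplifies the noise
f_tsvd = tsvd_solve(K, g, 10)                      # regularized estimate
```

The truncation level k trades bias against noise amplification; choosing it well is the central practical difficulty of TSVD and its relatives.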
An Efficient Algorithm For Simulating Fracture Using Large Fuse Networks
, 2005
Abstract
The high computational cost involved in modeling progressive fracture using large discrete lattice networks stems from the requirement to solve a new large set of linear equations every time a lattice bond is broken. To address this problem, we propose an algorithm that combines the multiple-rank sparse Cholesky downdating algorithm with the rank-p inverse updating algorithm based on the Sherman-Morrison-Woodbury formula for the simulation of progressive fracture in disordered quasi-brittle materials using discrete lattice networks. Using the present algorithm, the computational complexity of solving the new set of linear equations after breaking a bond reduces to the same order as that of a simple back-solve (forward elimination and backward substitution) using the already LU-factored matrix. That is, the computational cost is O(nnz(L)), where nnz(L) denotes the number of nonzeros of the Cholesky factor L of the stiffness matrix A. This algorithm using the direct sparse solver is faster than Fourier-accelerated preconditioned conjugate gradient (PCG) iterative solvers, and eliminates the critical slowing down associated with the iterative solvers that is especially severe close to the critical points. Numerical results using random resistor networks substantiate the efficiency of the present algorithm. Key words: PACS: 62.20.Mk, 46.50.+a
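The core of the updating half of such a scheme, reusing an existing factorization of A after a low-rank change, can be sketched with the Sherman-Morrison-Woodbury identity (dense, via SciPy; the sparse downdating side of the paper's algorithm is not shown, and all names here are mine):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def smw_solve(chol_A, U, V, b):
    """Solve (A + U @ V.T) x = b while reusing a Cholesky factorization
    of A, via (A + U V^T)^-1 = A^-1 - A^-1 U (I + V^T A^-1 U)^-1 V^T A^-1.
    Costs only back-solves with the old factor plus a small p x p solve."""
    Ainv_b = cho_solve(chol_A, b)
    Ainv_U = cho_solve(chol_A, U)
    p = U.shape[1]
    S = np.eye(p) + V.T @ Ainv_U        # small "capacitance" matrix
    return Ainv_b - Ainv_U @ np.linalg.solve(S, V.T @ Ainv_b)
```

Breaking a bond perturbs the stiffness matrix by a rank-p term, so each subsequent solve costs back-solves with the existing factor plus a p × p system, consistent with the O(nnz(L)) cost quoted in the abstract.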
An efficient algorithm for modelling progressive damage accumulation in disordered materials