Results 1–10 of 34
The HomeBots system and field tests: A multi-commodity market for predictive load management
 In Proceedings of the Fourth International Conference and Exhibition on The Practical Application of Intelligent Agents and Multi-Agents (PAAM'99), 1999
"... Paper submission to PAAM’99. ..."
On resource-oriented multi-commodity market computations
 In Third International Conference on Multi-Agent Systems, pp. 365–371, 1998
Abstract
Cited by 8 (4 self)
In the search for general equilibrium in multi-commodity markets, price-oriented schemes are normally used. That is, a set of prices (one price for each commodity) is updated until supply meets demand for each commodity. In some cases such an approach is very inefficient, and a resource-oriented scheme can be highly competitive. In a resource-oriented scheme the allocations are updated until the market equilibrium is found. It is well known that resource-oriented schemes are possible in a two-commodity market. In this paper we show that resource-oriented algorithms can be used for the general multi-commodity case as well, and we present and analyze a specific algorithm. The algorithm has been implemented, and some performance properties for a specific example are presented.
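As a toy illustration of the resource-oriented idea (a hypothetical sketch, not the paper's algorithm), the snippet below reallocates a fixed endowment of a single commodity between two agents with assumed concave utilities, updating the allocation, rather than a price, until marginal utilities equalize. A general equilibrium computation would enforce such conditions across all commodities simultaneously.

```python
import numpy as np

# Toy resource-oriented update (hypothetical utilities, not the paper's
# algorithm): shift allocation toward the agent with the higher marginal
# utility; the total endowment is preserved at every step.
total = 10.0
mu0 = lambda x: 1.0 / (x + 0.1)   # agent 0: marginal utility
mu1 = lambda x: 2.0 / (x + 0.1)   # agent 1: values the commodity more

x = np.array([total / 2, total / 2])   # start from an even split
step = 0.05
for _ in range(10000):
    gap = mu1(x[1]) - mu0(x[0])        # who benefits from one more unit?
    x += step * np.array([-gap, gap])  # reallocate; x.sum() stays fixed

print(round(x.sum(), 6))                   # 10.0: endowment conserved
print(abs(mu0(x[0]) - mu1(x[1])) < 1e-6)  # True: marginal utilities match
```

Note that no prices appear anywhere in the loop; the equilibrium condition is reached purely by moving resources.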
Adaptive Winograd’s Matrix Multiplications
, 2008
Abstract
Cited by 6 (3 self)
Modern architectures have complex memory hierarchies and increasing parallelism (e.g., multicores). These features make achieving and maintaining good performance across rapidly changing architectures increasingly difficult. Performance has become a complex tradeoff, not just a simple matter of counting the cost of simple CPU operations. We present a novel, hybrid, and adaptive recursive Strassen–Winograd matrix multiplication (MM) that uses automatically tuned linear algebra software (ATLAS) or GotoBLAS. Our algorithm applies to matrices of any size and shape stored in either row- or column-major layout (in double precision in this work) and is thus efficiently applicable to both C and FORTRAN implementations. In addition, our algorithm divides the computation into sub-MMs of equivalent complexity and does not require any extra computation to combine the intermediate sub-MM results. We achieve up to 22% execution-time reduction versus GotoBLAS/ATLAS alone for a single-core system and up to 19% for a system with two dual-core processors. Most importantly, even for small matrices such as 1500×1500, our approach already attains a 10% execution-time reduction, and for MM of matrices larger than 3000×3000 it delivers performance that would correspond, for a classic O(n^3) algorithm, to faster-than-processor-peak performance (i.e., our algorithm delivers the equivalent of 5 GFLOPS on a system with 4.4 GFLOPS peak performance, where GotoBLAS achieves only 4 GFLOPS). This is a result of the savings in operations (and thus FLOPS). Therefore, our algorithm is faster than any classic MM algorithm could ever be for matrices of this size. Furthermore, we present experimental evidence, based on established methodologies from the literature, that our algorithm is, for a family of matrices, as accurate as the classic algorithms.
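The hybrid structure described above, recursing until subproblems are small enough to hand off to a tuned BLAS, can be sketched as follows. This uses the classic Strassen recursion rather than the paper's Winograd variant (which saves a few additions), `np.dot`/`@` stands in for ATLAS or GotoBLAS, and sizes are assumed to be powers of two for simplicity (the paper handles arbitrary sizes and shapes).

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Recursive Strassen MM; falls back to BLAS (via @) below cutoff.
    Assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                       # tuned-BLAS base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight:
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```

Tuning the cutoff is exactly the adaptivity question the paper addresses: too small and the extra additions dominate, too large and the O(n^2.807) savings never kick in.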
Additive Preconditioning, Eigenspaces, and the Inverse Iteration ∗
Abstract
Cited by 5 (4 self)
We incorporate our recent preconditioning techniques into the classical inverse power (Rayleigh quotient) iteration for computing matrix eigenvectors. Every loop of this iteration essentially amounts to solving an ill-conditioned linear system of equations. Due to our modification we solve a well-conditioned linear system instead. We prove that this modification preserves local quadratic convergence, show experimentally that fast global convergence is preserved as well, and obtain similar results for higher-order inverse iteration, covering the cases of multiple and clustered eigenvalues.
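For reference, here is a minimal sketch of the classical Rayleigh quotient iteration the paper builds on. Each pass solves the nearly singular system (A - mu*I) y = x; that is exactly the ill-conditioned solve which the authors' additive preconditioning replaces with a well-conditioned one (the preconditioned variant itself is not reproduced here).

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12):
    """Classical inverse power (Rayleigh quotient) iteration.
    Each pass solves the nearly singular system (A - mu*I) y = x,
    the ill-conditioned step that the paper preconditions."""
    x = x0 / np.linalg.norm(x0)
    mu = x @ A @ x
    for _ in range(50):
        if np.linalg.norm(A @ x - mu * x) < tol:
            break                          # converged eigenpair
        y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        x = y / np.linalg.norm(y)
        mu = x @ A @ x                     # updated Rayleigh quotient
    return mu, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
mu, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
print(np.allclose(A @ x, mu * x))  # True: (mu, x) is an eigenpair
```

The closer mu gets to an eigenvalue, the worse the conditioning of A - mu*I, which is what makes the well-conditioned replacement valuable.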
Parallelizing Strassen's Method for Matrix Multiplication on Distributed-Memory MIMD Architectures
 In Computers & Mathematics with Applications, 1994
Abstract
Cited by 4 (0 self)
We present a parallel method for matrix multiplication on distributed-memory MIMD architectures based on Strassen's method. Our timing tests, performed on an Intel Paragon, demonstrate that our method realizes the potential of Strassen's method, with a complexity of 4.7M^2.807, at the system level rather than at the node level on which several earlier works have focused. The parallel efficiency is nearly perfect when the processor number is divisible by 7. The parallelized Strassen's method is always faster than the traditional matrix multiplication methods, whose complexity is 2M^3, coupled with the BMR method and the Ring method at the system level. The speed gain depends on the matrix order M: 20% for M ≈ 1000 and more than 100% for M ≈ 5000. Key words: matrix multiplication, parallel computation, Strassen's method. AMS (MOS) Subject Classification: 65F30, 65Y05, 68Q25.
Randomized Preprocessing of Homogeneous Linear Systems
, 2009
Abstract
Cited by 4 (4 self)
Our randomized preprocessing enables pivoting-free and orthogonalization-free solution of homogeneous linear systems of equations. In the case of Toeplitz inputs, we decrease the solution time from quadratic to nearly linear, and our tests show a dramatic decrease of the CPU time as well. We prove numerical stability of our randomized algorithms and extend our approach to solving nonsingular linear systems; inversion and generalized (Moore–Penrose) inversion of general and structured matrices by means of Newton's iteration; approximation of a matrix by a nearby matrix that has a smaller rank or a smaller displacement rank; matrix eigensolving; and root-finding for polynomial and secular equations. Some by-products and extensions of our study can be of independent technical interest, e.g., our extensions of the Sherman–Morrison–Woodbury formula for matrix inversion, our estimates for the condition numbers of randomized matrix products, preprocessing via augmentation, and the link of preprocessing to aggregation. Key words: linear systems of equations, randomized preprocessing, conditioning.
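One ingredient named above, Newton's iteration for matrix inversion, can be sketched in a few lines. The scaled start X0 = A^T / (||A||_1 * ||A||_inf) is a standard choice guaranteeing convergence for any nonsingular A; the paper's contribution lies in combining such iterations with randomized preprocessing, which this generic sketch does not attempt.

```python
import numpy as np

def newton_inverse(A, iters=12):
    """Newton's iteration X <- X (2I - A X) converging to A^{-1}.
    The scaled initial guess X0 = A.T / (||A||_1 ||A||_inf) makes the
    residual I - A X0 a contraction; convergence is then quadratic."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)   # each step squares the residual
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_inverse(A)
print(np.allclose(X @ A, np.eye(2)))  # True
```

Because each step only multiplies matrices, the iteration preserves displacement structure well, which is why it pairs naturally with the structured (e.g., Toeplitz) inputs discussed in the abstract.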
Fast Solvers and Domain Decomposition Preconditioners for Spectral Element Discretizations of Problems in H(curl)
, 2001
Abstract
Cited by 3 (3 self)
The URLs given were last checked and found valid in November 2001. To Alla, for all her help and care. To Lady Mathematics, for all the fun and moments of enlightenment. Acknowledgments. First and foremost I want to thank my advisor and friend, Olof Widlund, for proposing the thesis subject and for all his support and help in the last six years, four of them as his student. I also want to thank Yu Chen and Jonathan Goodman for their willingness to serve as readers on short notice. I thank all the faculty, staff, and students of the Courant Institute who helped create a warm and motivating atmosphere. I thank all the professors who transmitted some of their excitement about mathematics to me; I especially want to mention John Rinzel and Bud Mishra. I also want to thank all who fed my never-ending interest in all things mathematical and who were as curious as I am about mathematics, physics, and all that. Thank you, Sávio and Franz. I have had the privilege of becoming very good friends with four fantastic people during my several ...
Solving Linear Systems with Randomized Augmentation ∗
Abstract
Cited by 3 (3 self)
Our randomized preprocessing of a matrix by means of augmentation counters its degeneracy and ill-conditioning, uses neither pivoting nor orthogonalization, readily preserves matrix structure and sparseness, and leads to a dramatic speedup of the solution of general and structured linear systems of equations in terms of both estimated arithmetic time and observed CPU time.
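The augmentation idea can be illustrated numerically: bordering a rank-deficient matrix with small random blocks yields, with high probability, a nonsingular and reasonably conditioned matrix that can be factored without pivoting. This is only a toy demonstration of the conditioning effect (the sizes and the recovery of the original solution are hypothetical and omitted, respectively).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 1   # n x n input with nullity r (illustrative sizes)

# A rank-deficient (degenerate) matrix: its condition number is
# essentially infinite and Gaussian elimination on it needs pivoting.
A = rng.standard_normal((n, n - r)) @ rng.standard_normal((n - r, n))

# Randomized augmentation: border A with random blocks. With high
# probability the bordered matrix is nonsingular and well conditioned.
K = np.block([[A, rng.standard_normal((n, r))],
              [rng.standard_normal((r, n)), rng.standard_normal((r, r))]])

print(np.linalg.matrix_rank(A))   # 7: A is singular
print(np.linalg.cond(K) < 1e6)    # True with high probability
```

Because the border only adds r rows and columns, the augmented matrix inherits most of the structure and sparseness of A, which is the point the abstract emphasizes.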
Optimization Techniques for Small Matrix Multiplication
, 2010
Abstract
Cited by 3 (0 self)
The complexity of matrix multiplication has attracted a lot of attention over the last forty years. In this paper, instead of considering asymptotic aspects of this problem, we are interested in reducing the cost of multiplication for matrices of small size, say up to 30. Following previous work in a similar vein by Probert & Fischer, Smith, and Mezzarobba, we base our approach on known algorithms for small matrices, due to Strassen, Winograd, Pan, Laderman, and others, and show how to exploit these standard algorithms in an improved way. We illustrate the use of our results by generating multiplication code over various rings, such as integers, polynomials, differential operators, or linear recurrence operators. Keywords: matrix multiplication, small matrix, complexity.
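As an example of the kind of base scheme such code generators build on, Strassen's 2×2 algorithm uses 7 multiplications instead of 8 and, because no product ever swaps the order of its factors, remains valid over noncommutative rings such as matrices or differential operators. A sketch, with small matrices standing in for arbitrary ring elements:

```python
import numpy as np

def strassen_2x2(a, b):
    """Strassen's 7-multiplication scheme for a 2x2 block product.
    Valid over any (noncommutative) ring: every product keeps the
    left factor from a and the right factor from b."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Entries are themselves matrices, i.e. elements of a noncommutative ring:
rng = np.random.default_rng(1)
A = [[rng.random((3, 3)) for _ in range(2)] for _ in range(2)]
B = [[rng.random((3, 3)) for _ in range(2)] for _ in range(2)]
C = strassen_2x2(A, B)

# Check against the definition C[i][j] = A[i][0] B[0][j] + A[i][1] B[1][j]:
ok = all(np.allclose(C[i][j], A[i][0] @ B[0][j] + A[i][1] @ B[1][j])
         for i in range(2) for j in range(2))
print(ok)  # True
```

For the small fixed sizes the paper targets, the trade between saved multiplications and extra additions is exactly what makes one base scheme preferable to another in generated code.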