Results 1–10 of 60
Bundle adjustment – a modern synthesis
 Vision Algorithms: Theory and Practice, LNCS, 2000
Abstract

Cited by 386 (12 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
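The refinement the abstract describes is, at its core, damped nonlinear least squares. A minimal sketch of one Levenberg-Marquardt-style step (function names are illustrative, not the paper's implementation), applied here to a toy linear residual:

```python
import numpy as np

def gauss_newton_step(residual, jacobian, params, damping=1e-3):
    """One damped (Levenberg-Marquardt style) update for minimizing
    ||r(p)||^2, the shape of the inner loop in bundle adjustment."""
    r = residual(params)                         # stacked residuals, e.g. reprojection errors
    J = jacobian(params)                         # dr/dp at the current estimate
    H = J.T @ J + damping * np.eye(J.shape[1])   # damped normal equations
    return params + np.linalg.solve(H, -J.T @ r)

# Toy example: a consistent linear model, so the optimum satisfies A p = b exactly.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
p = np.zeros(2)
for _ in range(20):
    p = gauss_newton_step(lambda q: A @ q - b, lambda q: A, p)
# p converges to [1.0, 1.0]
```

A real bundle adjuster exploits the sparse block structure of J (cameras versus points) via a Schur-complement solve rather than forming a dense H, which is exactly the sparse Newton machinery the survey covers.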
The GrADS project: Software support for high-level grid application development
 International Journal of High Performance Computing Applications, 2001
Abstract

Cited by 137 (23 self)
Advances in networking technologies will soon make it possible to use the global information infrastructure in a qualitatively different way—as a computational resource as well as an information resource. This idea for an integrated computation and information resource, called the Computational Power Grid, has been described in the recent book entitled The Grid: Blueprint for a New Computing Infrastructure [18]. The Grid will connect the nation’s computers, databases, instruments, and people in a seamless web, supporting emerging computation-rich application concepts such as remote computing, distributed supercomputing, tele-immersion, smart instruments, and data mining. To realize this vision, significant scientific and technical obstacles must be overcome. Principal among these is usability. Because the Grid will be inherently more complex than existing computer systems, programs that execute on the Grid will reflect some of this complexity. Hence, making Grid resources useful and accessible to scientists and engineers will require new software tools that embody major advances in both the theory and practice of building Grid applications. The goal of the Grid Application Development Software (GrADS) Project is to simplify distributed heterogeneous computing in the same way that the World Wide Web simplified information sharing.
Optimizing the performance of sparse matrix-vector multiplication, 2000
"... Copyright 2000 by Eun-Jin Im ..."
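The kernel this thesis title refers to is usually written over a compressed sparse row (CSR) layout; a minimal reference version (not Im's tuned code, which applies register and cache blocking on top of this):

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """y = A @ x with A stored in CSR form: data holds the nonzeros row by
    row, indices their column numbers, indptr the start offset of each row."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0],
#      [4, 0, 5]]
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
y = csr_spmv(data, indices, indptr, np.ones(3))   # [3.0, 3.0, 9.0]
```

The indirect access `x[indices[k]]` is what makes this kernel memory-bound and hence a natural target for the performance tuning the thesis studies.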
Smoothed analysis of the condition numbers and growth factors of matrices
 SIAM J. Matrix Anal. Appl., 2002
Abstract

Cited by 43 (3 self)
Let Ā be an arbitrary matrix and let A be a slight random perturbation of Ā. We prove that it is unlikely that A has a large condition number. Using this result, we prove that it is unlikely that A has a large growth factor under Gaussian elimination without pivoting. By combining these results, we show that the smoothed precision necessary to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic. Moreover, when Ā is the all-zero square matrix, our results significantly improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997). Partially supported by NSF grant CCR-0112487.
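The growth factor the abstract bounds can be measured directly by running the elimination and tracking the largest intermediate entry. A small sketch (using the usual max-intermediate-entry convention, which may differ from the paper's exact normalization):

```python
import numpy as np

def growth_factor(A):
    """Growth factor of Gaussian elimination WITHOUT pivoting: the largest
    entry of any intermediate Schur complement divided by the largest entry
    of A. Large growth is what destroys precision when pivoting is skipped."""
    U = A.astype(float).copy()
    n = U.shape[0]
    biggest = np.abs(U).max()
    for k in range(n - 1):
        mult = U[k + 1:, k] / U[k, k]                # elimination multipliers
        U[k + 1:, k:] -= np.outer(mult, U[k, k:])    # update trailing block
        biggest = max(biggest, np.abs(U).max())
    return biggest / np.abs(A).max()

# A tiny pivot forces huge growth without pivoting (about 999 here)...
print(growth_factor(np.array([[1e-3, 1.0], [1.0, 1.0]])))
# ...while a fully random matrix, like the perturbed matrices in the paper,
# typically grows only mildly.
print(growth_factor(np.random.default_rng(0).standard_normal((50, 50))))
```

The smoothed-analysis claim is, roughly, that after a small random perturbation the first situation becomes improbable, so the precision lost to growth stays logarithmic.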
Fast linear algebra is stable
 In preparation, 2006
Abstract

Cited by 25 (15 self)
In [23] we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and in fact that if multiplication of n-by-n matrices can be done by any algorithm in O(n^(ω+η)) operations for any η > 0, then it can be done stably in O(n^(ω+η)) operations for any η > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems, and the singular value decomposition, can also be done stably (in a normwise sense) in O(n^(ω+η)) operations.
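The family of fast recursive multiplication algorithms the abstract builds on starts with Strassen's algorithm (exponent about 2.81). A compact sketch for power-of-two sizes, with a cutoff to the ordinary product at the base case:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive multiply for n-by-n matrices, n a power of two.
    Seven half-size products replace the naive eight, giving O(n^2.81)."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # base case: ordinary multiply
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The paper's point is that the rounding errors of such recursions grow only polynomially in norm, so algorithms like this can safely underlie LU, QR, and the other decompositions listed above.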
Improving data locality by chunking
 In CC’12 Intl. Conference on Compiler Construction, LNCS 2622, 2003
Abstract

Cited by 24 (10 self)
Cache memories were invented to decouple fast processors from slow memories. However, this decoupling is only partial, and many researchers have attempted to improve cache use by program optimization. Potential benefits are significant since both energy dissipation and performance depend heavily on the traffic between memory levels. But modeling the traffic is difficult; this observation has led to the use of heuristic methods for steering program transformations. In this paper, we propose another approach: we simplify the cache model and organize the target program in such a way that an asymptotic evaluation of the memory traffic is possible. This information is used by our optimization algorithm to find the best reordering of the program operations, at least in an asymptotic sense. Our method optimizes both temporal and spatial locality. It can be applied to any static control program with arbitrary dependences. The optimizer has been partially implemented and applied to nontrivial programs. We present experimental evidence that the number of cache misses is drastically reduced, with corresponding performance improvements.
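Loop tiling (blocking) is the classic instance of the chunked reordering the paper automates: the iteration space of a matrix product is cut into tiles small enough that each block being reused stays in cache. A plain-Python sketch of the transformed loop nest:

```python
def tiled_matmul(A, B, tile=32):
    """Blocked matrix multiply over lists of lists. The three outer loops
    walk tile-sized chunks; the three inner loops stay inside one chunk,
    so the touched blocks of A, B, and C fit in cache while reused."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]
                        row_c, row_b = C[i], B[k]
                        for j in range(jj, min(jj + tile, n)):
                            row_c[j] += a_ik * row_b[j]
    return C
```

Picking the chunk shape and size by hand is the heuristic step; the paper's asymptotic traffic evaluation is what lets its optimizer make that choice automatically.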
Self-adapting numerical software for next generation applications
 Int. J. High Perf. Comput. Appl., 2002
Abstract

Cited by 24 (6 self)
The challenge for the development of next generation software is the successful management of the complex grid environment while delivering to the scientist the full power of flexible compositions of the available algorithmic alternatives. Self-Adapting Numerical Software (SANS) systems are intended to meet this significant challenge. A SANS system comprises intelligent next generation numerical software that domain scientists – with disparate levels of knowledge of the algorithmic and programmatic complexities of the underlying numerical software – can use to easily express and efficiently solve their problem. The components of a SANS system are:
• A SANS agent with:
  – An intelligent component that automates method selection based on data, algorithm, and system attributes.
  – A system component that provides intelligent management of and access to the computational grid.
  – A history database that records relevant information generated by the intelligent component and maintains past performance data of the interaction (e.g., algorithmic, hardware-specific, etc.) between SANS components.
• A simple scripting language that allows a structured multilayered implementation of the SANS while ensuring portability and extensibility of the user interface and underlying libraries.
• An XML/CCA-based vocabulary of metadata to describe behavioural properties of both data and algorithms.
• System components, including a runtime adaptive scheduler, and prototype libraries that automate the process of architecture-dependent tuning to optimize performance on different platforms.
A SANS system can dramatically improve the ability of computational scientists to model complex, interdisciplinary phenomena with maximum efficiency and a minimum of extra-domain expertise. SANS innovations (and their generalizations) will provide to the scientific and engineering community a dynamic computational environment in which the most effective library components are automatically selected based on the problem characteristics, data attributes, and the state of the grid.
Model Reduction Software in the SLICOT Library
 Applied and Computational Control, Signals, and Circuits, volume 629 of The Kluwer International Series in Engineering and Computer Science, 2000
Abstract

Cited by 23 (5 self)
We describe the model reduction software developed recently for the control and systems library SLICOT. Besides a powerful collection of Fortran 77 routines implementing the latest algorithmic developments for several well-known balancing-related methods, we also describe model reduction tools developed to facilitate the use of the SLICOT routines in user-friendly environments like Matlab or Scilab. Extensive testing of the implemented tools has been done using both special benchmark problems and models of several complex industrial plants. Testing results and performance comparisons show the superiority of the SLICOT model reduction tools over existing model reduction software.
Multicategory proximal support vector machine classifiers
 Machine Learning, 2001
Abstract

Cited by 17 (0 self)
Given a dataset, each element of which is labeled by one of k labels, we construct, by a very fast algorithm, a k-category proximal support vector machine (PSVM) classifier. Proximal support vector machines and related approaches (Fung & Mangasarian, 2001; Suykens & Vandewalle, 1999) can be interpreted as ridge regression applied to classification problems (Evgeniou, Pontil, & Poggio, 2000). Extensive computational results have shown the effectiveness of PSVM for two-class classification problems, where the separating plane is constructed in time that can be as little as two orders of magnitude shorter than that of conventional support vector machines. When PSVM is applied to problems with more than two classes, the well-known one-from-the-rest approach is a natural choice in order to take advantage of its fast performance. However, there is a drawback associated with this one-from-the-rest approach: the resulting two-class problems are often very unbalanced, leading in some cases to poor performance. We propose balancing the k classes and a novel Newton refinement modification to PSVM in order to deal with this problem. Computational results indicate that these two modifications preserve the speed of PSVM while often leading to significant test set improvement over a plain PSVM one-from-the-rest application. The modified approach is considerably faster than other one-from-the-rest methods that use conventional SVM formulations, while still giving comparable test set correctness.
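The ridge-regression reading of PSVM makes two-class training a single linear solve. A minimal sketch in the spirit of the Fung & Mangasarian formulation (function and variable names here are illustrative; the paper's balancing and Newton refinement are not shown):

```python
import numpy as np

def psvm_train(X, y, nu=1.0):
    """Fit a proximal SVM: find (w, gamma) so that X @ w - gamma ~ y for
    labels y in {-1, +1}, via one regularized least-squares solve."""
    m, n = X.shape
    E = np.hstack([X, -np.ones((m, 1))])   # augment features with the offset term
    H = E.T @ E + np.eye(n + 1) / nu       # ridge-regularized normal matrix
    sol = np.linalg.solve(H, E.T @ y)      # the entire "training" step
    return sol[:n], sol[n]                 # w, gamma

def psvm_predict(X, w, gamma):
    return np.sign(X @ w - gamma)

# Tiny separable example.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, gamma = psvm_train(X, y)
```

Because training is a single (n+1)-dimensional solve rather than a quadratic program, running it k times for one-from-the-rest classification stays cheap, which is the speed advantage the abstract builds on.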
DISCRETE APPROACHES FOR SOLVING MOLECULAR DISTANCE GEOMETRY PROBLEMS USING NMR DATA
 INTERNATIONAL JOURNAL OF COMPUTATIONAL BIOSCIENCE, 2010
Abstract

Cited by 10 (9 self)
The molecular distance geometry problem (MDGP) is the problem of finding the conformation of a molecule by exploiting known distances between some pairs of its atoms. Estimates of the distances between the atoms can be obtained through nuclear magnetic resonance (NMR) spectroscopy experiments. The information on the distances, however, is usually limited, because typically only distances between hydrogens that are shorter than 6 Å are available, and this makes the solution of the MDGP quite hard. In this paper, we focus our attention on protein backbones and present a methodology for computing their full-atom conformations starting from NMR data. This task is performed by solving two MDGPs. First, only hydrogens are considered: we define an artificial backbone of hydrogens for which the particular assumptions needed for the discretization of the problem are satisfied. This allows the first MDGP to be solved with an ad hoc algorithm. Second, by exploiting the coordinates of the hydrogens and known bond lengths and bond angles, we compute the coordinates of the other atoms forming the protein backbone using a polynomial-time algorithm. Computational experiments on real proteins are presented.
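The discretization the abstract relies on works because an atom at known distances from three already-placed, non-collinear atoms has only two candidate positions. A trilateration sketch of that two-branch step (this is generic geometry, not the paper's ad hoc algorithm):

```python
import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Return the two candidate 3-D positions of a point at distances
    d1, d2, d3 from the known points p1, p2, p3. The two branches are what
    a discrete MDGP search enumerates and prunes at every step."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)       # local x axis
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)                       # local y axis
    ez = np.cross(ex, ey)                          # local z axis
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (d1**2 - d2**2 + d**2) / (2 * d)
    y = (d1**2 - d3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = np.sqrt(max(d1**2 - x**2 - y**2, 0.0))     # clamp noise below zero
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez            # the two branches
```

Chaining this step along the artificial hydrogen backbone yields a binary tree of conformations, and the available NMR distances prune the branches that are geometrically infeasible.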