Results 1 - 10 of 16
From Flop to MegaFlops: Java for Technical Computing - ACM Transactions on Programming Languages and Systems, 1998
Abstract - Cited by 52 (11 self)
Although there has been some experimentation with Java as a language for numerically intensive computing, there is a perception by many that the language is not suited for such work. In this paper we show how optimizing array bounds checks and null pointer checks creates loop nests on which aggressive optimizations can be used. Applying these optimizations by hand to a simple matrix-multiply test case leads to Java compliant programs whose performance is in excess of 500 Mflops on an RS/6000 SP 332MHz SMP node. We also report in this paper the effect that each optimization has on performance. Since all of these optimizations can be automated, we conclude that Java will soon be a serious contender for numerically intensive computing.

1 Introduction
The scientific programming community has recently demonstrated a great deal of interest in the use of Java for technical computing. There are many compelling reasons for such use of Java: a large supply of programmers, it is obj...
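The hand optimizations the abstract describes are not reproduced in the paper snippet, but the general idea of creating an exception-free loop nest can be sketched in plain Java. This is a hypothetical illustration, not the authors' code; the class and method names are invented. The guard test up front plays the role of the hoisted null and bounds checks, so the specialized loop nest below it can run without per-iteration checks:

```java
// Hypothetical illustration of "safe region" creation: the null and
// bounds tests are hoisted out of the loop nest, so the body itself
// can be treated as exception-free code and aggressively optimized.
public class SafeMatMul {
    // Multiply n x n matrices stored as arrays of rows: c += a * b.
    static void matmul(double[][] a, double[][] b, double[][] c, int n) {
        // Versioning test: if these checks pass once here, the loop
        // nest below cannot throw NullPointerException or
        // ArrayIndexOutOfBoundsException for indices in [0, n).
        if (a == null || b == null || c == null
                || a.length < n || b.length < n || c.length < n) {
            throw new IllegalArgumentException("unsafe shapes");
        }
        for (int i = 0; i < n; i++) {
            double[] ai = a[i];
            double[] ci = c[i];
            for (int k = 0; k < n; k++) {
                double aik = ai[k];
                double[] bk = b[k];
                for (int j = 0; j < n; j++) {
                    ci[j] += aik * bk[j];   // check-free inner loop
                }
            }
        }
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = new double[2][2];
        matmul(a, b, c, 2);
        System.out.println(c[0][0] + " " + c[0][1] + " "
                + c[1][0] + " " + c[1][1]);   // 19.0 22.0 43.0 50.0
    }
}
```

The paper's point is that a compiler can generate this versioned form automatically, so the programmer writes the naive triple loop and still gets the check-free inner loop.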
Java Programming for High-Performance Numerical Computing, 2000
Abstract - Cited by 52 (8 self)
Figure 5: Simple Array construction operations
// Simple 3 x 3 array of integers
intArray2D A = new intArray2D(3,3);
// This new array has a copy of the data in A,
// and the same rank and shape.
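The figure fragment shows only the constructor calls, and the copy-construction line itself is missing from the extract. A minimal sketch of what such an array class might look like (hypothetical — the real intArray2D belongs to the paper's Array package, and the names and layout here are illustrative):

```java
// Sketch of a "true" rectangular 2-D array class in the spirit of the
// paper's intArray2D: dense storage in a single 1-D Java array, so an
// element access is index arithmetic plus one bounds check, and the
// shape is immutable (no ragged rows, unlike int[][]).
public class IntArray2D {
    private final int rows, cols;
    private final int[] data;          // row-major backing store

    public IntArray2D(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new int[rows * cols];
    }

    // Copy constructor: same rank and shape, with a copy of the data.
    public IntArray2D(IntArray2D other) {
        this.rows = other.rows;
        this.cols = other.cols;
        this.data = other.data.clone();
    }

    public int get(int i, int j)         { return data[i * cols + j]; }
    public void set(int i, int j, int v) { data[i * cols + j] = v; }

    public static void main(String[] args) {
        IntArray2D a = new IntArray2D(3, 3);
        a.set(1, 2, 42);
        IntArray2D b = new IntArray2D(a);   // copy keeps shape and values
        System.out.println(b.get(1, 2));    // prints 42
    }
}
```

Because the shape is fixed at construction and rows cannot alias, a compiler can reason about such arrays much the way a Fortran compiler reasons about its arrays.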
Efficient Support for Complex Numbers in Java, 1999
Abstract - Cited by 35 (9 self)
One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such a Complex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines. We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.

1 Introduction
The Java Grande Forum has identified several critical issues related to the role of Java(TM) in numerical computing [14]. One of the key requirements is that Java must support efficient operations on complex numbers. Complex arithmetic and access to elements of complex arrays must be as efficient as the manipulation o...
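The kind of Complex class the paper starts from can be sketched as follows (a hypothetical illustration, not the authors' class). Written naively, every arithmetic operation allocates a fresh object; the paper's contribution is a compiler that recognizes these value semantics and lowers the operations to scalar double arithmetic, eliminating the allocations:

```java
// A minimal immutable Complex value class. Each plus/times allocates
// a new object, which is exactly the overhead a semantics-aware
// compiler can remove by expanding the calls into double arithmetic.
public final class Complex {
    public final double re, im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    public Complex plus(Complex o)  { return new Complex(re + o.re, im + o.im); }

    public Complex times(Complex o) {
        // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }

    public static void main(String[] args) {
        Complex i = new Complex(0, 1);
        Complex sq = i.times(i);                 // i * i = -1
        System.out.println(sq.re + " " + sq.im); // -1.0 0.0
    }
}
```

Because the class is final and immutable, replacing an object with its (re, im) pair is safe, which is what makes the Fortran-like code generation possible without changing the language.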
Automatic Loop Transformations and Parallelization for Java - In Proceedings of the 2000 International Conference on Supercomputing, 2000
Abstract - Cited by 34 (3 self)
From a software engineering perspective, the Java programming language provides an attractive platform for writing numerically intensive applications. A major drawback hampering its widespread adoption in this domain has been its poor performance on numerical codes. This paper describes a prototype Java compiler which demonstrates that it is possible to achieve performance levels approaching those of current state-of-the-art C, C++ and Fortran compilers on numerical codes. We describe a new transformation called alias versioning that takes advantage of the simplicity of pointers in Java. This transformation, combined with other techniques that we have developed, enables the compiler to perform high order loop transformations (for better data locality) and parallelization completely automatically. We believe that our compiler is the first to have such capabilities of optimizing numerical Java codes. We achieve, with Java, between 80 and 100% of the performance of highly optimized Fortra...
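The "simplicity of pointers in Java" that alias versioning exploits is that two array references either denote the same object or fully disjoint ones; partial overlap, as with C pointers, is impossible. So a single reference test can select between two loop versions (a hypothetical sketch; the names are invented and this is not the authors' transformation code):

```java
// Hypothetical illustration of alias versioning: one cheap reference
// test guards a freely reorderable loop version, with a conservative
// fallback for the aliased case.
public class AliasVersioning {
    // x[i] += s * y[i] for i in [0, n)
    static void scaleAdd(double[] x, double[] y, double s, int n) {
        if (x != y) {
            // Disjoint arrays: iterations are independent, so the
            // compiler may unroll, reorder, or parallelize this loop.
            for (int i = 0; i < n; i++) x[i] = x[i] + s * y[i];
        } else {
            // Aliased: execute the order-preserving version.
            for (int i = 0; i < n; i++) x[i] = x[i] + s * x[i];
        }
    }

    public static void main(String[] args) {
        double[] x = {1, 2};
        double[] y = {10, 20};
        scaleAdd(x, y, 2.0, 2);
        System.out.println(x[0] + " " + x[1]);   // 21.0 42.0
    }
}
```

In C the disjointness test would need per-element range checks; in Java the single `x != y` comparison settles it, which is why the transformation is cheap enough to apply automatically.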
Optimizing Java Programs in the Presence of Exceptions, 2000
Abstract - Cited by 26 (2 self)
The support for precise exceptions in Java, combined with frequent checks for runtime exceptions, leads to severe limitations on the compiler's ability to perform program optimizations that involve reordering of instructions. This paper presents a novel framework that allows a compiler to relax these constraints. We first present an algorithm using dynamic analysis, and a variant using static analysis, to identify the subset of program state that need not be preserved if an exception is thrown. This allows many spurious dependence constraints between potentially excepting instructions (PEIs) and writes into variables to be eliminated. Our dynamic algorithm is particularly suitable for dynamically dispatched methods in object-oriented languages, where static analysis may be quite conservative. We then present the first software-only solution that allows dependence constraints among PEIs to be completely ignored while applying program optimizations, with no need to execute...
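The reordering constraint the paper relaxes can be made concrete with a small example (a hypothetical sketch, not taken from the paper). Under Java's precise-exception rule, the array load below is a potentially excepting instruction (PEI), and the preceding write may not be moved past it: if the load throws, a handler must observe the updated value:

```java
// Illustration of why precise exceptions constrain reordering: the
// write to t and the array load (a PEI) below it cannot be swapped
// by the compiler, because an exception handler must see t's new
// value if the load throws.
public class PreciseExceptions {
    static int t = 0;

    static int read(int[] a, int idx) {
        t = 99;              // write to program state
        return a[idx];       // PEI: may throw ArrayIndexOutOfBoundsException
        // Reordering the load before the write would let a handler
        // observe t == 0 on the exception path, violating preciseness.
    }

    public static void main(String[] args) {
        int[] a = {7};
        try {
            read(a, 5);      // out of bounds
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println(t);   // prints 99, never 0
        }
    }
}
```

The paper's analyses identify state, like `t` here when no handler ever reads it, that need not be preserved, so many such PEI-to-write dependences can be discarded.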
An Algebraic Programming Style for Numerical Software and Its Optimization, 1998
Abstract - Cited by 21 (7 self)
The abstract mathematical theory of... may be useful for other styles and in other application domains as well.
High Performance Numerical Computing in Java: Language and Compiler Issues - 12th International Workshop on Languages and Compilers for Parallel Computing, 1999
Abstract - Cited by 20 (6 self)
Poor performance on numerical codes has slowed the adoption of Java within the technical computing community. In this paper we describe a prototype array library and a research prototype compiler that support standard Java and deliver near-Fortran performance on numerically intensive codes. We discuss in detail our implementation of: (i) an efficient Java package for true multidimensional arrays; (ii) compiler techniques to generate efficient access to these arrays; and (iii) compiler optimizations that create safe, exception-free regions of code that can be aggressively optimized. These techniques work together synergistically to make Java an efficient language for technical computing. In a set of four benchmarks, we achieve between 50 and 90% of the performance of highly optimized Fortran code. This represents a several-fold improvement compared to what can be achieved by the next best Java environment.
A Modal Model of Memory - In V.N. Alexandrov, J.J. Dongarra, Computer Science, 2001
Abstract - Cited by 15 (1 self)
We consider the problem of automatically guiding program transformations for locality, despite incomplete information due to complicated program structures, changing target architectures, and lack of knowledge of the properties of the input data. Our system, the modal model of memory, uses limited static analysis and bounded runtime experimentation to produce performance formulas that can be used to make runtime locality transformation decisions. Static analysis is performed once per program to determine its memory reference properties, using modes, a small set of parameterized, kernel reference patterns. Once per architectural system, our system automatically performs a set of experiments to determine a family of kernel performance formulas. The system can use these kernel formulas to synthesize a performance formula for any program's mode tree. Finally, with program transformations represented as mappings between mode trees, the generated performance formulas can be used to guide transformation decisions.
The NINJA Project: Making Java Work for High Performance Numerical Computing, 2001
Abstract - Cited by 6 (2 self)
... this article from being used in a dynamic compiler. Moreover, by using the quasi-static dynamic compilation model [10], the more expensive optimization and analysis techniques employed by TPO can be done off-line, sharply reducing the impact of compilation overhead.
High Performance Computing in Java: Language and Compiler Issues, 1999
Abstract - Cited by 2 (0 self)
Poor performance on numerical codes has slowed adoption of Java within the technical computing community. In this paper we describe a prototype array library and a research prototype compiler that support standard Java and deliver near-Fortran performance on numerically intensive codes. We discuss in detail our implementation of: (i) an efficient Java package for true multidimensional arrays; (ii) compiler techniques to generate efficient access to these arrays; and (iii) compiler optimizations that create safe, exception-free regions of code that can be aggressively optimized. These techniques work together synergistically to make Java an efficient language for technical computing. In a set of four benchmarks, we achieve between 50 and 90% of the performance of highly optimized Fortran code. This represents a several-fold improvement compared to what can be achieved by the next best Java environment.

1 Introduction
Despite the advantages of Java(TM) as a simple, object ori...