Results 1–10 of 17
Models and Languages for Parallel Computation
 ACM COMPUTING SURVEYS
, 1998
Abstract

Cited by 134 (4 self)
We survey parallel programming models and languages using six criteria: a model should be easy to program, have a software development methodology, be architecture-independent, be easy to understand, guarantee performance, and provide information about the cost of programs. ... We consider programming models in six categories, depending on the level of abstraction they provide.
Matching Power
 Proceedings of RTA 2001, Lecture Notes in Computer Science, Utrecht (The Netherlands)
, 2001
Abstract

Cited by 31 (20 self)
www.loria.fr/{~cirstea,~ckirchne,~lliquori} In this paper we give a simple and uniform presentation of the rewriting calculus, also called the Rho Calculus. In addition to its simplicity, this formulation explicitly allows us to encode complex structures such as lists, sets, and objects. We provide extensive examples of the calculus, and we focus on its ability to represent some object-oriented calculi, namely the Lambda Calculus of Objects of Fisher, Honsell, and Mitchell, and the Object Calculus of Abadi and Cardelli. Furthermore, the calculus allows us to obtain object-oriented constructions unreachable in other calculi. In summary, we intend to show that, because of its matching ability, the Rho Calculus represents a lingua franca for naturally encoding many paradigms of computation. This highlights the capability of the rewriting-calculus-based language ELAN to be used as a logical as well as a powerful semantic framework.
In Black and White: An Integrated Approach to Class-Level Testing of Object-Oriented Programs
 ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY
, 1998
Abstract

Cited by 20 (8 self)
Because of the growing importance of object-oriented programming, a number of testing strategies have been proposed. They are based either on pure black-box or white-box techniques. We propose in this paper a methodology to integrate the black- and white-box techniques. The black-box technique is used to select test cases. The white-box technique is mainly applied to decide whether two objects resulting from the program execution of a test case are observationally equivalent. It is also used to select test cases in some situations.
We define the concept of a fundamental pair as a pair of equivalent terms that are formed by replacing all the variables on both sides of an axiom by normal forms. We prove that an implementation is consistent with respect to all equivalent terms if and only if it is consistent with respect to all fundamental pairs. In other words, the testing coverage of fundamental pairs is as good as that of all possible term rewritings, and hence we need only concentrate on the testing of fundamental pairs. Our strategy is based on mathematical theorems. According to the strategy, we propose an algorithm for selecting a finite set of fundamental pairs as test cases.
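The notion of a fundamental pair described above can be made concrete with a toy example (hypothetical, not taken from the paper): for a stack specification with the axiom pop(push(S, X)) = S, replacing the variables S and X on both sides by normal-form terms yields the fundamental pairs.

```python
# Hypothetical sketch of "fundamental pairs" for a toy stack specification.
# Axiom: pop(push(S, X)) = S. A fundamental pair replaces each variable
# on both sides with normal forms (here: stacks built only from new/push
# up to depth 2, and the integer constants 0 and 1).

from itertools import product

# Normal-form terms for the sort Stack and for Int.
int_normals = [0, 1]
stack_normals = [("new",)]
stack_normals += [("push", s, x) for s in [("new",)] for x in int_normals]

def substitute(term, env):
    """Replace variable names (strings) by the terms bound in env."""
    if isinstance(term, str):
        return env[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(t, env) for t in term[1:])
    return term

# Axiom lhs/rhs with variables "S" (Stack) and "X" (Int).
lhs = ("pop", ("push", "S", "X"))
rhs = "S"

fundamental_pairs = [
    (substitute(lhs, {"S": s, "X": x}), substitute(rhs, {"S": s, "X": x}))
    for s, x in product(stack_normals, int_normals)
]

for left, right in fundamental_pairs:
    print(left, "==", right)
```

Each printed pair is a test case: the implemented program executes both terms and the two resulting objects should be observationally equivalent.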
Given a pair of equivalent terms as a test case, we should then determine whether the objects that result from executing the implemented program are observationally equivalent. We prove, however, that the observational equivalence of objects cannot be determined using a finite set of observable contexts (namely, operation sequences ending with an observer function) derived from any black-box technique. Hence we supplement our approach with a "relevant observable context" technique, which is a white-box technique, to determine the observational equivalence. The relevant observable contexts are constructed from a Data Member Relevance Graph, which is an abstraction of the given implementation for a given specification. A semi-automatic tool has been developed to support this technique.
A Requirements Capture Method and its use in an Air Traffic Control Application
, 1995
Abstract

Cited by 17 (8 self)
This paper describes our experience in capturing, using a formal specification language, a model of the knowledge-intensive domain of oceanic air traffic control. This model is intended to form part of the requirements specification for a decision support system for air traffic controllers. We give an overview of the methods we used in analysing the scope of the domain, choosing an appropriate formalism, developing a domain model, and validating the model in various ways. Central to the method was the development of a formal requirements engineering environment which provided automated tools for model validation and maintenance.
Typechecking Revisited: Modular Error-Handling
 In Proceedings of the Workshop on Semantics of Specification Languages
, 1993
Abstract

Cited by 8 (5 self)
Static semantics determines the validity of a program, while a typechecker provides more specific type-error information. Typecheckers are specified based on the static semantics specification, for the purpose of identifying and presenting type errors in invalid programs. We discuss a style of algebraically specifying the static semantics of a language which facilitates automatic generation of a typechecker and a language-specific error reporter. Such a specification can also be extended in a modular manner to yield human-readable error messages.

1 An Introduction
The static semantics of a language determines the validity of a program written in that language. Typechecking of a program, to be useful in practice, should not only indicate whether a given program is valid or not, but also summarize the type errors and show the location of the erroneous constructs which caused the errors. Thus, specifying a typechecker that is useful in practice results in (textually) modifying th...
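As a loose illustration of the idea (plain Python, not the paper's algebraic specification formalism), a checker can return either a type or a structured error value that records the offending construct, so the error reporter stays a separate, modularly extensible piece:

```python
# Hypothetical sketch: a typechecker whose static semantics returns either a
# type or a structured error naming the offending construct, so the error
# reporter can be extended separately from the checker.

def check(expr, env):
    """Expressions: int literals, variable names, ('+', e1, e2)."""
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, str):
        return env.get(expr, ("error", "unbound variable", expr))
    if expr[0] == "+":
        left, right = check(expr[1], env), check(expr[2], env)
        for t in (left, right):
            if isinstance(t, tuple):        # propagate inner errors
                return t
        if left == right == "int":
            return "int"
        return ("error", "'+' expects ints", expr)
    return ("error", "unknown construct", expr)

def report(result):
    """Human-readable error reporter, separate from the checker."""
    if isinstance(result, tuple):
        _, message, construct = result
        return f"type error: {message} in {construct!r}"
    return f"ok: {result}"

print(report(check(("+", 1, 2), {})))    # ok: int
print(report(check(("+", 1, "y"), {})))  # type error: unbound variable in 'y'
```

Extending the language means adding cases to `check`, and refining messages means changing only `report`, which is the modularity the abstract aims for.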
Simulation and Performance Estimation for the Rewrite Rule Machine
 In Proceedings, Fourth Symposium on the Frontiers of Massively Parallel Computation
, 1992
Abstract

Cited by 5 (3 self)
The Rewrite Rule Machine (RRM) is a massively parallel machine being developed at SRI International that combines the power of SIMD with the generality of MIMD. The RRM exploits both extremely fine-grain and coarse-grain parallelism, and is based on an abstract model of computation that eases creating and porting parallel programs. In particular, the RRM can be programmed very naturally with very high-level declarative languages featuring implicit parallelism. This paper gives an overview of the RRM's architecture and discusses performance estimates based on very detailed register-level simulations at the chip level together with more abstract simulations and modeling for higher levels.

1 Introduction
Following an overview of the Rewrite Rule Machine (RRM) architecture and model of computation, this paper discusses recent performance estimates based on simulation. The architecture is a multilevel hierarchy, which is SIMD at the lower (chip) levels, and MIMD at the higher levels. This...
Compilation Techniques for Associative-Commutative Normalisation
 In Proceedings of International Workshop on Theory and Practice of Algebraic Specifications ASF+SDF 97, Workshops in Computing
, 1997
Abstract

Cited by 5 (3 self)
We consider the problem of term normalisation modulo associative-commutative (AC) theories and describe several techniques for compiling many-to-one AC matching and reduced term construction. The proposed method, illustrated on three examples, is based on compact bipartite graphs, and is designed for working very efficiently on specific classes of AC patterns. Our experimental results provide strong evidence that compilation of many-to-one AC normalisation is a useful technique for improving the performance of algebraic programming languages.

1 Introduction
Term rewriting techniques are used in software development, compiler generation, computer algebra, theorem provers, and more recently in constraint solving [6]. Several programming languages (for example ASF+SDF [12], OBJ [7], or Maude [3]) use term rewriting as the execution engine of their programs. Algebraic structures often involve axioms like associative-commutative (AC) properties of function symbols, that cannot be used as...
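As a toy illustration of what matching modulo AC means (far simpler than the compiled bipartite-graph method the abstract describes), one can match a pattern against a term whose top symbol is commutative by trying reorderings of the term's arguments; a hypothetical sketch:

```python
# Hypothetical sketch of naive AC matching: match a linear pattern against a
# term whose top symbol is associative-commutative by trying argument
# permutations. The compiled bipartite-graph method in the paper exists
# precisely to avoid this combinatorial search.

from itertools import permutations

def match(pattern, term, env):
    """Syntactic first-order matching; variables are lowercase strings."""
    if isinstance(pattern, str) and pattern.islower():   # variable
        if pattern in env:
            return env if env[pattern] == term else None
        return {**env, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if pattern[0] != term[0] or len(pattern) != len(term):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

def ac_match(pattern, term):
    """Match modulo commutativity of the shared top AC symbol (e.g. '+')."""
    assert pattern[0] == term[0]
    for args in permutations(term[1:]):
        env = match(pattern, (term[0],) + args, {})
        if env is not None:
            return env
    return None

# '+' is AC: the pattern x + 0 matches the term 0 + a after reordering.
print(ac_match(("+", "x", 0), ("+", 0, ("a",))))   # {'x': ('a',)}
```

The cost of trying all permutations grows factorially with the number of AC arguments, which is why compiled many-to-one matching pays off in practice.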
A Model Inference System for Generic Specification with Application to Code Sharing
 In Proc. of TAPSOFT'95, Colloquium on Formal Approaches in Software Engineering, LNCS 915
, 1995
Abstract

Cited by 4 (3 self)
This paper presents a model inference system to control instantiation of generic modules. Generic parameters are specified by properties which represent classes of modules sharing some common features. Just as type checking consists in verifying that an expression is well typed, model checking allows us to detect whether a (possibly generic) instantiation of a generic module is valid, i.e. whether the instantiation module is a model of the parameterizing property. Equality of instances can be derived from a canonical representation of modules. Finally, we show how the code of generic modules can be shared across all instances of modules.

1 Introduction
Genericity is a useful feature for specification languages, and for programming languages alike, because it allows one to reuse already-written packages by instantiating them in various ways, thus limiting the risk of bugs and reducing software costs. When a generic module is instantiated and imported into another module, one has to check that the...
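A crude way to picture "the instantiation module is a model of the parameterizing property" is to check that a candidate module supplies every operation the property requires; a hypothetical sketch that reduces the check to operation names and arities (the paper's model inference is far richer):

```python
# Hypothetical sketch: checking that an instantiation module is a "model of"
# the parameterizing property, reduced here to checking that every required
# operation exists with the required arity.

def is_model(module, prop):
    """module: name -> function; prop: name -> required arity."""
    for name, arity in prop.items():
        fn = module.get(name)
        if fn is None or fn.__code__.co_argcount != arity:
            return False
    return True

# Property "Ordered": requires a binary comparison `le`.
ordered = {"le": 2}

int_module = {"le": lambda x, y: x <= y}   # valid instantiation
bad_module = {"leq": lambda x, y: x <= y}  # wrong operation name

print(is_model(int_module, ordered))   # True
print(is_model(bad_module, ordered))   # False
```

A real system would also verify that the module satisfies the property's axioms, not merely its signature; that is the part the paper's inference system addresses.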
Strategic Computation and Deduction
, 2009
Abstract

Cited by 4 (3 self)
I'd like to conclude by emphasizing what a wonderful field this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things take time. We tend to be impatient, but we need some historical perspective. The study of logic has a very long history, going back at least as far as Aristotle. During some of this time not very much progress was made. It's gratifying to realize how much has been accomplished in the less than fifty years since serious efforts to mechanize logic began.