Results 1 - 10 of 39
Generating Robust Parsers using Island Grammars
In Proceedings of the 8th Working Conference on Reverse Engineering, 2001
"... Source model extraction---the automated extraction of information from system artifacts---is a common phase in reverse engineering tools. One of the major challenges of this phase is creating extractors that can deal with irregularities in the artifacts that are typical for the reverse engineering d ..."
Abstract
-
Cited by 138 (5 self)
- Add to MetaCart
Source model extraction---the automated extraction of information from system artifacts---is a common phase in reverse engineering tools. One of the major challenges of this phase is creating extractors that can deal with irregularities in the artifacts that are typical for the reverse engineering domain (for example, syntactic errors, incomplete source code, language dialects and embedded languages). This paper ...
Semantics of Programming Languages: A Tool-Oriented Approach
ACM SIGPLAN Notices, 1999
"... By paying more attention to semantics-based tool generation, programming language semantics can significantly increase its impact. Ultimately, this may lead to "Language Design Assistants" incorporating substantial amounts of semantic knowledge. 1991 ACM Computing Classification System: ..."
Abstract
-
Cited by 40 (5 self)
- Add to MetaCart
(Show Context)
By paying more attention to semantics-based tool generation, programming language semantics can significantly increase its impact. Ultimately, this may lead to "Language Design Assistants" incorporating substantial amounts of semantic knowledge.
1991 ACM Computing Classification System: D.2.2, D.3.1, D.3.4, F.3.2
Keywords and Phrases: semantics of programming languages, tool generation, language development system, language design assistant, domain-specific language, compiler toolkit, software renovation tool
Note: Submitted to ACM SIGPLAN Notices. This research was supported in part by the Telematica Instituut under the Domain-Specific Languages project.
1 The Role of Programming Language Semantics
Programming language semantics has lost touch with large groups of potential users [39]. Among the reasons for this unfortunate state of affairs, one stands out. Semantic results are rarely incorporated in practical systems that would help language designers to implement and test a ...
New frontiers of reverse engineering
In 2007 Future of Software Engineering, IEEE Computer Society, 2007
"... interests include software maintenance and reverse engineering, service oriented software engineering, and experimental software engineering. He has co-authored more than 100 papers published in international journals and referred conferences and workshops. He was an associate editor of IEEE Transac ..."
Abstract
-
Cited by 36 (1 self)
- Add to MetaCart
(Show Context)
interests include software maintenance and reverse engineering, service-oriented software engineering, and experimental software engineering. He has co-authored more than 100 papers published in international journals and refereed conferences and workshops. He was an associate editor of IEEE Transactions on Software Engineering and he currently serves on the Editorial Board of the Journal of Software Maintenance and Evolution. He is a member of ...
A Framework for Classifying and Comparing Software Reverse Engineering and Design Recovery Techniques
1999
"... Several techniques have been suggested for supporting reverse engineering and design recovery activities. While many of these techniques have been cataloged in various collections and surveys, the evaluation of the corresponding support tools has focused primarily on their usability and supported so ..."
Abstract
-
Cited by 25 (0 self)
- Add to MetaCart
Several techniques have been suggested for supporting reverse engineering and design recovery activities. While many of these techniques have been cataloged in various collections and surveys, the evaluation of the corresponding support tools has focused primarily on their usability and supported source languages, mostly ignoring evaluation of the appropriateness of the by-products of a tool for facilitating particular types of maintenance tasks. In this paper, we describe criteria that can be used to evaluate tool by-products based on semantic quality, where the semantic quality measures the ability of a by-product to convey certain behavioral information. We use these criteria to review, compare, and contrast several representative tools and approaches.
1 Introduction
Software maintenance has long been recognized as one of the most costly phases in software development [1]. A software system is termed a legacy system if that system has a long maintenance history. Many techniques h...
Static Analysis for a Software Transformation Tool
1997
"... Software is difficult and costly to modify correctly. Automating tiresome mechanical tasks such as program restructuring is one approach to reducing the burden of software maintenance. Several restructuring tools have been proposed and prototyped, all centered on the concept of meaning-preserving tr ..."
Abstract
-
Cited by 19 (2 self)
- Add to MetaCart
(Show Context)
Software is difficult and costly to modify correctly. Automating tiresome mechanical tasks such as program restructuring is one approach to reducing the burden of software maintenance. Several restructuring tools have been proposed and prototyped, all centered on the concept of meaning-preserving transformations similar in spirit to compiler optimizations. Like optimizing compilers, these tools rely on static analysis to reason about the correctness of program changes. However, the cost (in both time and space) of static analysis serves as the limiting factor for transformation tools, resulting in slow, complex tool designs that scale poorly for use on large systems. To reduce these costs, this thesis proposes efficient, demand-driven flow analysis techniques as an alternative to traditional, compiler-based methods. These techniques operate directly on the abstract syntax tree (AST), the data structure most appropriate for use in a source-to-source tool architecture. By eliminating the need for other program representations such as the standard control flow graph (CFG) or program dependence graph (PDG), this approach greatly simplifies program modification. A key contribution of this work is the idea of virtual control flow, a method for computing the control successors or predecessors of individual AST expressions on demand. This method handles all types of structured and unstructured jumps found in an imperative programming language such as C. Virtual control flow couples well with demand-driven data flow analysis to minimize the cost of determining semantic information. To conservatively estimate data flow relationships, the effects of aliasing between memory locations can be inexpensively approximated using flow-insensitive points-to analysis based on type inference. These techniques were implemented in a prototype tool called Cstructure to support a simple restructuring transformation for reordering program statements. To check that this transformation does not change the program's behavior requires syntax, control flow and data dependence analysis. Experimental results on three programs ranging in size from 72,000 to 213,000 lines of code demonstrate the performance advantages of such aggressive demand-driven approaches. For the largest program, gcc, check times for the average statement were 50 milliseconds on a desktop workstation.
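The central idea, virtual control flow, admits a compact illustration. The following is a minimal Python sketch assuming a toy AST with parent links; the node classes and helper names are my own, not the thesis's Cstructure implementation:

from dataclasses import dataclass, field
from typing import List, Optional

# Toy AST with parent links; node shapes are illustrative, not Cstructure's.
@dataclass
class Node:
    parent: Optional["Node"] = field(default=None, repr=False)

@dataclass
class Stmt(Node):
    text: str = ""

@dataclass
class Seq(Node):
    body: List[Node] = field(default_factory=list)

@dataclass
class If(Node):
    then_branch: Optional[Node] = None
    else_branch: Optional[Node] = None

def successors(node: Node) -> List[Node]:
    """Control successors of `node`, computed on demand -- no CFG is built."""
    if isinstance(node, If):
        branches = [node.then_branch]
        if node.else_branch is not None:
            branches.append(node.else_branch)
        else:
            branches.extend(fallthrough(node))   # false edge falls through
        return branches
    return fallthrough(node)

def fallthrough(node: Node) -> List[Node]:
    """Where control goes once `node` completes: the next sibling if there
    is one, otherwise wherever the enclosing construct falls through to."""
    p = node.parent
    if p is None:
        return []                                # end of program
    if isinstance(p, Seq):
        i = next(i for i, c in enumerate(p.body) if c is node)
        if i + 1 < len(p.body):
            return [p.body[i + 1]]
    return fallthrough(p)                        # climb out of the construct

The point of the sketch is that fallthrough recurses up the tree instead of consulting a prebuilt graph, which is what makes the analysis pay-as-you-go.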
Chopping: A generalization of slicing
1994
"... A new method for extracting partial representations of a program is described. Given two sets of variable instances, source and sink, a graph is constructed showing the statements that cause definitions of source to affect uses of sink. This criterion can express a wider range of queries than the v ..."
Abstract
-
Cited by 19 (0 self)
- Add to MetaCart
A new method for extracting partial representations of a program is described. Given two sets of variable instances, source and sink, a graph is constructed showing the statements that cause definitions of source to affect uses of sink. This criterion can express a wider range of queries than the various forms of slice criteria, which it subsumes as special cases. On the standard slice criterion (backward slicing from a use or definition) it produces better results than existing algorithms. The method is modular. By treating all statements abstractly as def-use relations, it can present a procedure call as a simple statement, so that it appears in the graph as a single node whose role may be understood without looking beyond the context of the call.
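The construction can be pictured in a few lines: the chop is the set of statements on some def-use path from source to sink, i.e. the intersection of forward reachability from the source with backward reachability from the sink. A toy Python version, assuming the program has already been abstracted into a def-use graph (the dict encoding is mine, not the paper's representation):

def chop(defuse, sources, sinks):
    """defuse: dict mapping a statement to the statements its definitions
    reach (def-use edges). Returns the statements through which `sources`
    can affect `sinks`."""
    def reach(starts, edges):
        seen, work = set(starts), list(starts)
        while work:
            n = work.pop()
            for m in edges.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    work.append(m)
        return seen

    reverse = {}
    for n, succs in defuse.items():
        for m in succs:
            reverse.setdefault(m, []).append(n)
    forward = reach(sources, defuse)      # affected by a source definition
    backward = reach(sinks, reverse)      # can affect a sink use
    return forward & backward             # statements on a source-to-sink path

# Example: s1 defines x, s2 uses x to define y, s3 uses y, s4 also defines y.
graph = {"s1": ["s2"], "s2": ["s3"], "s4": ["s3"]}
print(chop(graph, {"s1"}, {"s3"}))        # {'s1', 's2', 's3'}

Note that s4 reaches the sink but is excluded: nothing flowing from the source passes through it, which is what distinguishes a chop from a plain backward slice.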
Object-Oriented Re-Architecturing
In 5th European Software Engineering Conference (ESEC '95), 1995
"... . Many organizations face the problem of improving the value of their legacy systems. Modernizing the architecture of old software helps to gain control over maintenance cost, to improve system performance, and it supports moving to a distributed or more efficient environment. We propose a re-archit ..."
Abstract
-
Cited by 16 (7 self)
- Add to MetaCart
Many organizations face the problem of improving the value of their legacy systems. Modernizing the architecture of old software helps to gain control over maintenance cost, to improve system performance, and it supports moving to a distributed or more efficient environment. We propose a re-architecturing of old procedural software to an object-oriented architecture. To overcome the limits of classical reverse engineering approaches, which build exclusively on information extractable from source code, we integrate domain knowledge in the process. The resulting object-oriented software helps reduce future maintenance cost, since modern (and more calculable) maintenance technology can then be applied. In this paper, we point out the basic concepts of the re-architecturing process, the generation of design documents at different levels of abstraction, and the necessary syntactic adaptations of the source code.
1 Introduction
Legacy systems are an increasing problem for IT groups in large organi...
Finding Reusable Software Components in Large Systems
1996
"... The extraction of reusable software components from existing systems is an attractive idea. The goal of the work in this paper is not to extract a component automatically, but to identify its tightly coupled region (subsystem) for extraction by hand or knowledge-based system. Much of our experience ..."
Abstract
-
Cited by 16 (0 self)
- Add to MetaCart
The extraction of reusable software components from existing systems is an attractive idea. The goal of the work in this paper is not to extract a component automatically, but to identify its tightly coupled region (subsystem) for extraction by hand or knowledge-based system. Much of our experience is anecdotal. Our experience with scientific systems differs from much of the work in reverse engineering that focuses on COBOL systems. Module and data interconnection was collected from three large scientific systems over a 12-year period from 1980 to 1992. The interconnection data was analyzed in an attempt to identify subsystems that correspond to domain-specific components. The difficulties of dealing with large scientific systems and their organizations are discussed. The failures and successes of various subsystem analysis methods are discussed. A simple algorithm for the identification of subsystems is presented. A pattern of object hierarchies of subsystems is briefly mentioned. The average subsystem is surprisingly large at 17,000 source lines and 35 modules. The concept of a subsystem is informally validated by developers from subsystem interconnection diagrams. The actual reusability of these identified components is not assessed.
Keywords: reuse, component, subsystem
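As a rough illustration of the genre (not the authors' algorithm, which the paper itself presents), one can threshold the module interconnection counts and take connected components over the strongly coupled pairs. The `calls` encoding and the threshold below are assumptions for the sketch:

# Illustrative subsystem identification: treat modules with many mutual
# references as tightly coupled and take connected components over the
# strong edges. A guess at the flavor of such algorithms, not the paper's
# actual method; `calls` and the threshold are assumptions.

def subsystems(calls, threshold=3):
    """calls: dict (module_a, module_b) -> number of cross-references.
    Returns a list of module sets that form tightly coupled regions."""
    strong = {}
    for (a, b), n in calls.items():
        if n >= threshold:                 # keep only strong couplings
            strong.setdefault(a, set()).add(b)
            strong.setdefault(b, set()).add(a)
    seen, groups = set(), []
    for start in strong:
        if start in seen:
            continue
        group, work = set(), [start]
        while work:                        # flood-fill one component
            m = work.pop()
            if m in group:
                continue
            group.add(m)
            work.extend(strong[m] - group)
        seen |= group
        groups.append(group)
    return groups

print(subsystems({("a", "b"): 5, ("b", "c"): 4, ("c", "d"): 1}))
# [{'a', 'b', 'c'}] -- d is only weakly coupled and joins no subsystem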
Applying Graph Transformations to Database Re-Engineering
In Ehrig et al., 1999
"... modeling concepts like inheritance, aggregation, and n-ary associations cannot be expressed in the relational data model. Additionally, many physical schemas of LDAs comprise optimizations that makei teven harder to grasp the real semantics of the data structure. Hence, the first activity in the DR ..."
Abstract
-
Cited by 15 (2 self)
- Add to MetaCart
(Show Context)
modeling concepts like inheritance, aggregation, and n-ary associations cannot be expressed in the relational data model. Additionally, many physical schemas of LDAs comprise optimizations that make it even harder to grasp the real semantics of the data structure. Hence, the first activity in the DRE process is to analyze the available sources of information about the LDA, in order to yield a semantically annotated database schema. The resulting annotated schema is translated into an equivalent object-oriented conceptual schema. Subsequently, the conceptual schema might be extended or used as the basis for further re-engineering activities. Both tasks, legacy schema analysis and conceptual translation, are considered to be human-intensive and iterative [1,5], i.e., they cannot be performed in a fully automatic, batch-oriented process. The reason for this is that legacy systems vary with respect to many technical and non-technical parameters: they use various hardware and software platforms and comprise arcane coding concepts [6]. Computer-aided DRE tools have a great potential to reduce the complexity (and risk) of re-engineering large LDAs that comprise several hundred thousand ...
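The conceptual-translation step can be pictured with a toy mapping in Python, in which a foreign-key column becomes an object-valued association; the data shapes are my illustration, and the paper itself expresses such rules as graph transformations:

# Toy rendition of one conceptual-translation step: a relational table
# with a foreign key becomes a class whose FK column is replaced by an
# object-valued association. Table/class shapes are illustrative, not
# the paper's graph-transformation rules.

def to_class(table, foreign_keys):
    """table: (name, [columns]); foreign_keys: {column: target_table}."""
    name, columns = table
    attributes, associations = [], []
    for col in columns:
        if col in foreign_keys:
            # FK column becomes an association to the target class
            associations.append((col.removesuffix("_id"), foreign_keys[col]))
        else:
            attributes.append(col)
    return {"class": name, "attributes": attributes,
            "associations": associations}

print(to_class(("Order", ["id", "total", "customer_id"]),
               {"customer_id": "Customer"}))
# {'class': 'Order', 'attributes': ['id', 'total'],
#  'associations': [('customer', 'Customer')]}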
Rapid System Understanding: Two COBOL Case Studies
1998
"... Rapid system understanding is required in the planning, feasibility assessment and cost estimating phases of a system renovation project. In this paper, we apply a number of analyses on two large legacy COBOL systems from the banking field. We describe the analyses performed, and discuss possible in ..."
Abstract
-
Cited by 15 (8 self)
- Add to MetaCart
(Show Context)
Rapid system understanding is required in the planning, feasibility assessment and cost estimating phases of a system renovation project. In this paper, we apply a number of analyses to two large legacy COBOL systems from the banking field. We describe the analyses performed, and discuss possible interpretations of these analyses. Lessons learned include: (1) the open architecture adopted is satisfactory, and can take advantage of the wide range of understanding tools available; and (2) to handle inter-system variability effectively, the flexibility of lexical analysis is required.
1991 Computing Reviews Classification System: D.2.2, D.2.7, D.3.4
Keywords and Phrases: Software visualization, lexical analysis, software reuse
Note: To appear in Proceedings of the 6th IEEE International Workshop on Program Comprehension, June 1998, Ischia.
Note: Work carried out under project SEN-1.1, Software Renovation.
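Lesson (2), the flexibility of lexical analysis, is easy to picture: a couple of regular expressions recover call and copybook structure from raw COBOL without a dialect-complete parser. A minimal sketch in Python (the patterns are simplified illustrations, not the tooling used in the paper):

# Flavor of the lexical approach: regular expressions pull program calls
# and copybook includes out of raw COBOL, tolerating dialect variation.
# Patterns are simplified illustrations, not the paper's tooling.

import re

CALL = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)
COPY = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def scan(source: str):
    """Return (called programs, included copybooks) found lexically."""
    return sorted(set(CALL.findall(source))), sorted(set(COPY.findall(source)))

cobol = """
       COPY CUSTREC.
       PROCEDURE DIVISION.
           CALL 'PAYCALC' USING WS-REC.
"""
print(scan(cobol))   # (['PAYCALC'], ['CUSTREC'])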