A unified approach to global program optimization (1973)

by G. A. Kildall
Venue: POPL
Results 1 - 10 of 371

Compositional Model Checking

by E. M. Clarke, D. E. Long, K. L. McMillan, 1999
"... We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approac ..."
Abstract - Cited by 3252 (70 self) - Add to MetaCart
We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*.
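A rough sketch of the compositional idea (not the paper's CTL* machinery): check a safety property of a component composed with a small hand-written interface process that stands in for its full environment. All processes, states, and actions below are invented for illustration.

# Minimal sketch of compositional safety checking: verify a component
# against a small "interface process" that over-approximates its real
# environment, instead of composing with the full system.
from collections import deque

def compose(t1, t2):
    """Synchronous product of two transition systems given as
    {(state, action): next_state} dicts; shared actions synchronize,
    all others interleave."""
    shared = {a for (_, a) in t1} & {a for (_, a) in t2}
    states1 = {s for (s, _) in t1} | set(t1.values())
    states2 = {s for (s, _) in t2} | set(t2.values())
    prod = {}
    for (s1, a), n1 in t1.items():
        if a in shared:
            for s2 in states2:
                if (s2, a) in t2:
                    prod[((s1, s2), a)] = (n1, t2[(s2, a)])
        else:
            for s2 in states2:
                prod[((s1, s2), a)] = (n1, s2)
    for (s2, a), n2 in t2.items():
        if a not in shared:
            for s1 in states1:
                prod[((s1, s2), a)] = (s1, n2)
    return prod

def reachable(trans, init):
    seen, work = {init}, deque([init])
    while work:
        s = work.popleft()
        for (src, _), dst in trans.items():
            if src == s and dst not in seen:
                seen.add(dst)
                work.append(dst)
    return seen

# Hypothetical component: a driver that must not 'send' before 'init'.
component = {("idle", "init"): "ready", ("ready", "send"): "ready",
             ("idle", "send"): "ERROR"}
# Interface process: a tiny abstraction of the OS that always inits first.
interface = {("boot", "init"): "up", ("up", "send"): "up"}

prod = compose(component, interface)
bad = any("ERROR" in s for s in reachable(prod, ("idle", "boot")))
print("safety holds under interface:", not bad)   # True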

Citation Context

...aught the undergraduate course on Compilers. In preparing for this course, I read a number of papers on data-flow analysis including: – G. Kildall, A Unified Approach to Global Program Optimization, [Kil73]. 1 I was unaware of the work by Basu and Yeh [BY75] until I saw it cited in Emerson’s paper in this volume. The paper shows that the weakest precondition for total correctness is the least fixed poin...

Program Analysis and Specialization for the C Programming Language

by Lars Ole Andersen, 1994
"... Software engineers are faced with a dilemma. They want to write general and wellstructured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. ..."
Abstract - Cited by 629 (0 self) - Add to MetaCart
Software engineers are faced with a dilemma. They want to write general and well-structured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the production of today’s programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the ANSI C programming language. The content of this thesis is analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results.
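Andersen's partial evaluator derives such specializations automatically for C; as a hand-written Python analogue, the sketch below shows a generating extension for the textbook power function: executed on the static part of the input (the exponent), it emits a program specialized to that exponent.

def power_gen(n):
    """Hand-written generating extension: run on the static input n,
    it produces the source of a power function specialized to n."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"
    return f"def power_{n}(x):\n    return {body}\n"

src = power_gen(3)
print(src)              # def power_3(x): return x * (x * (x * (1)))
ns = {}
exec(src, ns)           # compile the specialized program
print(ns["power_3"](2)) # 8: the loop and the exponent test are gone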

Interprocedural dataflow analysis via graph reachability

by Thomas Reps, Susan Horwitz, Mooly Sagiv, 1994
"... The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in poly-nomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow fun ..."
Abstract - Cited by 454 (34 self) - Add to MetaCart
The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
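A much-simplified, intraprocedural sketch of the reduction for one separable problem (reaching definitions): the "exploded" graph pairs program points with dataflow facts, and plain reachability from the entry's empty fact computes the solution. The three-node CFG and the definitions are invented; the paper's algorithm additionally restricts reachability to interprocedurally valid paths.

# Gen/kill reaching definitions recast as graph reachability over an
# exploded graph whose nodes pair program points with facts (0 is the
# special empty-fact node that every path carries).
from collections import deque

# Hypothetical straight-line CFG with per-node gen/kill sets.
succ = {"n1": ["n2"], "n2": ["n3"], "n3": []}
gen  = {"n1": {"d1:x=1"}, "n2": {"d2:x=2"}, "n3": set()}
kill = {"n1": set(), "n2": {"d1:x=1"}, "n3": set()}
facts = set().union(*gen.values())

def exploded_edges(n, m):
    """Edges encoding the distributive function f(S) = (S - kill) | gen."""
    yield (n, 0), (m, 0)                 # the empty fact always flows
    for d in gen[n]:
        yield (n, 0), (m, d)             # generated facts hang off 0
    for d in facts - kill[n]:
        yield (n, d), (m, d)             # facts not killed pass through

edges = {}
for n, ms in succ.items():
    for m in ms:
        for a, b in exploded_edges(n, m):
            edges.setdefault(a, []).append(b)

# Reachability from (entry, 0) answers "which definitions reach node m?"
seen, work = {("n1", 0)}, deque([("n1", 0)])
while work:
    v = work.popleft()
    for w in edges.get(v, []):
        if w not in seen:
            seen.add(w)
            work.append(w)

print({(n, d) for (n, d) in seen if d != 0})
# {('n2', 'd1:x=1'), ('n3', 'd2:x=2')}: d2 kills d1 before n3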

Citation Context

...ind precise solutions to a large class of interprocedural dataflow-analysis problems in polynomial time. In contrast with intraprocedural dataflow analysis, where “precise” means “meet-over-all-paths” [20], a precise interprocedural dataflow-analysis algorithm must provide the “meet-over-all-valid-paths” solution. (A path is valid if it respects the fact that when a procedure finishes it returns to the...

A Static Analyzer for Large Safety-Critical Software

by Bruno Blanchet, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, Xavier Rival, 2003
"... We show that abstract interpretation-based static program analysis can be made e#cient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to p ..."
Abstract - Cited by 271 (54 self) - Add to MetaCart
We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization, the symbolic manipulation of expressions to improve the precision of abstract transfer functions, ellipsoid, and decision tree abstract domains, all with sound handling of rounding errors in floating point computations, widening strategies (with thresholds, delayed) and the automatic determination of the parameters (parametrized packing).
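A sketch of one technique the abstract names, widening with thresholds, on the interval domain; the threshold set and the loop being analyzed are invented. An unstable bound jumps to the next threshold rather than straight to infinity, trading a little precision for guaranteed termination.

# Interval analysis of "i = 0; while i < 42: i = i + 1" with
# widening-with-thresholds. Intervals are (lo, hi) pairs.
THRESHOLDS = [0, 10, 100, 1000, float("inf")]

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Jump an unstable bound to the next threshold instead of to
    infinity, so iteration still terminates but loses less precision."""
    lo = old[0] if new[0] >= old[0] else -next(t for t in THRESHOLDS if t >= -new[0])
    hi = old[1] if new[1] <= old[1] else next(t for t in THRESHOLDS if t >= new[1])
    return (lo, hi)

init = (0, 0)
head = init
while True:
    guard = (head[0], min(head[1], 41))   # filter the loop guard: i < 42
    after = (guard[0] + 1, guard[1] + 1)  # body: i = i + 1
    new_head = widen(head, join(init, after))
    if new_head == head:
        break
    head = new_head

print("loop-head interval:", head)
# (0, 100): sound, and the threshold kept the bound finite; a
# subsequent narrowing pass would recover the exact invariant (0, 42).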

Citation Context

...f thousands of boolean variables (not counting hundreds of thousands of program points) is still a real challenge. Moreover, very simple static program analyses, such as Kildall’s constant propagation [24], involve an infinite abstract domain which cannot be encoded using finite boolean vectors, thus requiring the user to provide beforehand all predicates that will be indispensable to the static analysi...
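The context's point is that Kildall-style constant propagation already needs an infinite lattice: each variable maps to BOT (unreached), some concrete constant, or TOP (not a constant), and the middle layer contains every constant. A minimal sketch of the meet and a toy transfer function (assignments of +-expressions, invented for illustration):

# The flat constant-propagation lattice: BOT below every constant,
# TOP above; the constants themselves are pairwise incomparable.
BOT, TOP = "BOT", "TOP"

def meet(a, b):
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

def transfer(env, var, expr):
    """Abstractly evaluate 'var = expr'; expr is a +-expression over
    variables and integer literals (a toy subset)."""
    vals = [env.get(t, TOP) if t.isalpha() else int(t)
            for t in expr.split("+")]
    new = TOP if TOP in vals else sum(vals)
    return {**env, var: new}

# Two paths merge: x is 3 on both, y differs.
p1 = transfer(transfer({}, "x", "3"), "y", "1")
p2 = transfer(transfer({}, "x", "3"), "y", "2")
print({v: meet(p1[v], p2[v]) for v in p1})   # {'x': 3, 'y': 'TOP'}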

A System and Language for Building System-Specific, Static Analyses

by Seth Hallem, Benjamin Chelf, Yichen Xie, Dawson Engler - In Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation, 2002
"... This paper presents a novel approach to bug-finding analysis and an implementation of that approach. Our goal is to find as many serious bugs as possible. To do so, we designed a flexible, easy-to-use extension language for specifying analyses and an efficent algorithm for executing these extensions ..."
Abstract - Cited by 228 (14 self) - Add to MetaCart
This paper presents a novel approach to bug-finding analysis and an implementation of that approach. Our goal is to find as many serious bugs as possible. To do so, we designed a flexible, easy-to-use extension language for specifying analyses and an efficient algorithm for executing these extensions. The language, metal, allows the users of our system to specify a broad class of analyses in terms that resemble the intuitive description of the rules that they check. The system, xgcc, executes these analyses efficiently using a context-sensitive, interprocedural analysis.
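A sketch of the checker style the abstract describes: a small state machine driven over the events of one program path. The double-lock rule and the traces are invented, and this is not metal syntax; real metal rules compile into xgcc's context-sensitive interprocedural engine rather than a per-path loop.

# A metal-style rule as a state machine:
#   "unlocked" --lock--> "locked" --lock--> BUG, with unlock going back.
RULE = {("unlocked", "lock"): "locked",
        ("locked", "unlock"): "unlocked",
        ("locked", "lock"): "BUG: double lock"}

def check_path(events):
    state = "unlocked"
    for i, ev in enumerate(events):
        state = RULE.get((state, ev), state)   # irrelevant events keep state
        if state.startswith("BUG"):
            return f"{state} at event {i}: {ev}"
    return "ok"

print(check_path(["lock", "write", "unlock", "lock", "write"]))  # ok
print(check_path(["lock", "read", "lock"]))  # BUG: double lock at event 2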

Citation Context

...they are crucial for the interprocedural caching described in the next section. Our algorithm computes a fixed point that is similar to the meet-over-paths solution in a traditional dataflow analysis [16]. The analysis stops when the block summary (i.e., cache) at each block contains all state tuples that can reach that block along any control path (i.e., the maximal fixed-point solution). The algorith...
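A generic sketch of the fixed-point scheme this context describes: each block's cache accumulates every abstract state that reaches it, and a block is reprocessed only when a new state appears. The CFG and the transfer function are invented placeholders; the saturation at 3 keeps the state space (and hence the caches) finite.

from collections import deque

succ = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}

def transfer(block, state):
    """Toy transfer: count loop iterations, saturating at 3."""
    return min(state + 1, 3) if block == "loop" else state

cache = {b: set() for b in succ}   # per-block summaries (state tuples)
cache["entry"].add(0)              # initial state at the entry block
work = deque([("entry", 0)])
while work:
    block, state = work.popleft()
    out = transfer(block, state)
    for s in succ[block]:
        if out not in cache[s]:    # only genuinely new states make work
            cache[s].add(out)
            work.append((s, out))

print(cache)   # every state that can reach each block along any path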

The essence of command injection attacks in web applications

by Zhendong Su, 2006
"... Web applications typically interact with a back-end database to retrieve persistent data and then present the data to the user as dynamically generated output, such as HTML web pages. However, this interaction is commonly done through a low-level API by dynamically constructing query strings within ..."
Abstract - Cited by 185 (5 self) - Add to MetaCart
Web applications typically interact with a back-end database to retrieve persistent data and then present the data to the user as dynamically generated output, such as HTML web pages. However, this interaction is commonly done through a low-level API by dynamically constructing query strings within a general-purpose programming language, such as Java. This low-level interaction is ad hoc because it does not take into account the structure of the output language. Accordingly, user inputs are treated as isolated lexical entities which, if not properly sanitized, can cause the web application to generate unintended output. This is called a command injection attack, which poses a serious threat to web application security. This paper presents the first formal definition of command injection attacks in the context of web applications, and gives a sound and complete algorithm for preventing them based on context-free grammars and compiler parsing techniques. Our key observation is that, for an attack to succeed, the input that gets propagated into the database query or the output document must change the intended syntactic structure of the query or document. Our definition and algorithm are general and apply to many forms of command injection attacks. We validate our approach with SQLCHECK, an implementation for the setting of SQL command injection attacks. We evaluated SQLCHECK on real-world web applications with systematically compiled real-world attack data as input. SQLCHECK produced no false positives or false negatives, incurred low runtime overhead, and applied straightforwardly to web applications written in different languages.
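A toy rendering of the paper's key observation: an input is an injection exactly when substituting it changes the syntactic structure of the query, not merely its text. A crude tokenizer stands in for SQLCHECK's grammar-based machinery, and the query template is invented.

import re

def token_shape(query):
    """Reduce a query to its sequence of token kinds."""
    tokens = re.findall(r"'[^']*'|\w+|[=<>*();]|--", query)
    return ["str" if t.startswith("'") else
            "op" if not t[0].isalnum() else "word"
            for t in tokens]

def is_injection(template, user_input):
    benign = token_shape(template.format("0"))   # any known-benign value
    actual = token_shape(template.format(user_input))
    return benign != actual

q = "SELECT * FROM users WHERE pin = '{}'"
print(is_injection(q, "1234"))          # False: still one string literal
print(is_injection(q, "' OR '1'='1"))   # True: input escapes the literal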

Citation Context

...ecurity holes. Koved et al. study the complementary problem of statically determining the access rights required for a program or a component to run on a client machine [21] using a dataflow analysis [16, 18]. 7.4 Meta-Programming To be put in a broader context, our research can be viewed as an instance of providing runtime safety guarantees for meta-programming [39]. Macros are a very old and established ...

Undecidability of Static Analysis

by William Landi - ACM Letters on Programming Languages and Systems, 1992
"... Static Analysis of programs is indispensable to any software tool, environment, or system that requires compile time information about the semantics of programs. With the emergence of languages like C and LISP, Static Analysis of programs with dynamic storage and recursive data structures has bec ..."
Abstract - Cited by 165 (4 self) - Add to MetaCart
Static Analysis of programs is indispensable to any software tool, environment, or system that requires compile time information about the semantics of programs. With the emergence of languages like C and LISP, Static Analysis of programs with dynamic storage and recursive data structures has become a field of active research. Such analysis is difficult, and the Static Analysis community has recognized the need for simplifying assumptions and approximate solutions. However, even under the common simplifying assumptions, such analyses are harder than previously recognized. Two fundamental Static Analysis problems are May Alias and Must Alias. The former is not recursive (i.e., is undecidable) and the latter is not recursively enumerable (i.e., is uncomputable), even when all paths are executable in the program being analyzed for languages with if-statements, loops, dynamic storage, and recursive data structures.

Citation Context

...cepted by a Turing machine which may or may not halt on all inputs. Static Analysis originally concentrated on FORTRAN, and was predominantly confined to a single procedure (intraprocedural analysis) [7, 9, 15]. However, even this simple form of Static Analysis is not recursive. The difficulty lies in conditionals. There are, in general, many paths through a procedure, but not all paths correspond to an exe...

Towards automatic generation of vulnerability-based signatures

by David Brumley, James Newsome, Dawn Song, Hao Wang, Somesh Jha, 2006
"... In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exerci ..."
Abstract - Cited by 153 (28 self) - Add to MetaCart
In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest.
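A toy rendering of the paper's core objects: the vulnerability language (exactly the inputs that exploit the flaw) and a regular-expression signature representing it. The protocol and the overflow condition are invented stand-ins; the point is that the signature captures the condition, not one sample exploit.

import re

FIELD_LIMIT = 8   # hypothetical fixed-size buffer for the command argument

def vulnerable(msg):
    """Ground truth from program semantics: 'GET <arg>' overflows
    whenever the argument exceeds the buffer."""
    m = re.fullmatch(r"GET (\S*)", msg)
    return bool(m) and len(m.group(1)) > FIELD_LIMIT

# A vulnerability signature: a regex for the vulnerability language,
# so it catches every syntactic variant of the exploit.
SIGNATURE = re.compile(r"GET \S{9,}")

for msg in ["GET index", "GET " + "A" * 500, "GET " + "%n" * 40]:
    assert bool(SIGNATURE.fullmatch(msg)) == vulnerable(msg)
print("signature matches the vulnerability language exactly")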

Citation Context

...work in this area. Program analysis. We use many static analysis techniques such as symbolic execution [31], abstract interpretation [18], model checking [15], theorem proving [20], dataflow analysis [29], and program slicing [57]. Each of these areas is an active research area in which we can benefit from new or more advanced techniques. It would be impossible to note all related work in static analy...

A Schema for Interprocedural Modification Side-Effect Analysis With Pointer Aliasing

by Barbara G. Ryder, William A. Landi, Philip A. Stocks, Sean Zhang, Rita Altucher, 2001
"... The first interprocedural modification side-effects analysis for C (MODC) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MODC algorithms with t ..."
Abstract - Cited by 139 (12 self) - Add to MetaCart
The first interprocedural modification side-effects analysis for C (MODC) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MODC algorithms with two independent phases: one for determining pointer-induced aliases and a subsequent one for propagating interprocedural side effects. These MODC algorithms are parameterized by the aliasing method used. The empirical results compare the performance of two dissimilar MODC algorithms: MODC(FSAlias) uses a flow-sensitive, calling-context-sensitive interprocedural alias analysis; MODC(FIAlias) uses a flow-insensitive, calling-context-insensitive alias analysis which is much faster, but less accurate. These two algorithms were profiled on 45 programs ranging in size from 250 to 30,000 lines of C code, and the results demonstrate dramatically the possible cost-precision trade-offs. This first comparative implementation of MODC analyses offers insight into the differences between flow-/context-sensitive and flow-/context-insensitive analyses. The analysis cost versus precision trade-offs in side-effect information obtained are reported. The results show surprisingly that the precision of flow-sensitive side-effect analysis is not always prohibitive in cost, and that the precision of flow-insensitive analysis is substantially better than worst-case estimates.
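A sketch of the schema's two-phase shape: a pointer-alias result (phase 1, here simply given, standing in for FSAlias or FIAlias) feeds interprocedural side-effect propagation (phase 2). The program facts and the acyclic call graph are invented.

calls   = {"main": ["update"], "update": []}
writes  = {"main": set(), "update": {"*p"}}    # direct assignments
aliases = {"update": {"*p": {"x", "y"}}}       # phase-1 result: *p may be x or y

def mod_sets(calls, writes, aliases):
    """Phase 2: expand direct writes through the alias result, then
    propagate MOD sets bottom-up over the (acyclic, in this sketch)
    call graph with memoization."""
    mod = {}
    def visit(f):
        if f in mod:
            return mod[f]
        result = set()
        for w in writes[f]:
            result |= aliases.get(f, {}).get(w, {w})
        for c in calls[f]:           # callees' side effects are visible here
            result |= visit(c)
        mod[f] = result
        return result
    for f in calls:
        visit(f)
    return mod

print(mod_sets(calls, writes, aliases))
# {'main': {'x', 'y'}, 'update': {'x', 'y'}}: the write through p shows
# up as a side effect of calling update from main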

Call Graph Construction in Object-Oriented Languages

by David Grove, et al., 1997
"... Interprocedural analyses enable optimizing compilers to more precisely model the effects of non-inlined procedure calls, potentially resulting in substantial increases in application performance. Applying interprocedural analysis to programs written in object-oriented or functional languages is comp ..."
Abstract - Cited by 130 (3 self) - Add to MetaCart
Interprocedural analyses enable optimizing compilers to more precisely model the effects of non-inlined procedure calls, potentially resulting in substantial increases in application performance. Applying interprocedural analysis to programs written in object-oriented or functional languages is complicated by the difficulty of constructing an accurate program call graph. This paper presents a parameterized algorithmic framework for call graph construction in the presence of message sends and/or first-class functions. We use this framework to describe and to implement a number of well-known and new algorithms. We then empirically assess these algorithms by applying them to a suite of medium-sized programs written in Cecil and Java, reporting on the relative cost of the analyses, the relative precision of the constructed call graphs, and the impact of this precision on the effectiveness of a number of interprocedural optimizations.
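A sketch of one simple point in such a framework's design space, class hierarchy analysis (CHA): a message send resolves to every implementation of the method in the static receiver type's subtree. The hierarchy and methods are invented; the paper's framework parameterizes over far more precise strategies as well.

subclasses = {"Shape": ["Circle", "Square"], "Circle": [], "Square": []}
implements = {("Shape", "draw"), ("Circle", "draw"), ("Square", "draw"),
              ("Square", "area")}

def cha_targets(static_type, method):
    """All methods a send could dispatch to, per CHA."""
    targets, stack = set(), [static_type]
    while stack:
        cls = stack.pop()
        if (cls, method) in implements:
            targets.add(f"{cls}.{method}")
        stack.extend(subclasses[cls])
    return targets

# Call graph edges for 'shape.draw()' where shape has static type Shape:
print(cha_targets("Shape", "draw"))
# {'Shape.draw', 'Circle.draw', 'Square.draw'}: a more precise analysis
# tracking concrete receivers would prune edges from this set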