Results 1 - 10 of 371
Compositional Model Checking (1999)
Cited by 3252 (70 self)
Abstract
We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*.
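The core idea is concrete enough to sketch. Below is a toy illustration in plain Python rather than the paper's CTL* framework, with a hypothetical lock/client example: a component is composed with a small interface process that stands in for its full environment, and a safety property is checked on the product.

```python
# A toy sketch of the compositional idea, in plain Python rather than the
# paper's CTL* framework: compose a component with a small "interface process"
# that over-approximates its environment, then check a safety property on the
# product. The lock/client example and all state names are hypothetical.

def compose(ts_a, ts_b, shared):
    """Synchronous product of two labelled transition systems, each given as
    (initial_state, {state: [(action, successor), ...]}). Shared actions must
    fire jointly; all other actions interleave."""
    (ia, ea), (ib, eb) = ts_a, ts_b
    seen, frontier = {(ia, ib)}, [(ia, ib)]
    while frontier:
        a, b = frontier.pop()
        succs = []
        for act, na in ea.get(a, []):
            if act in shared:
                succs += [(na, nb) for act2, nb in eb.get(b, []) if act2 == act]
            else:
                succs.append((na, b))
        succs += [(a, nb) for act, nb in eb.get(b, []) if act not in shared]
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen  # all reachable product states

# Component: a lock. Interface process: "acquire and release alternate",
# a one-bit abstraction of an arbitrarily complex client.
lock   = ("unlocked", {"unlocked": [("acquire", "locked")],
                       "locked":   [("release", "unlocked")]})
client = ("idle",     {"idle": [("acquire", "busy")],
                       "busy": [("release", "idle")]})

# Invariant (an AG-style property): lock and client never disagree. Because
# the interface over-approximates the real client, a property proved on this
# small product is preserved when the component runs in the full system.
bad = [s for s in compose(lock, client, {"acquire", "release"})
       if (s[0] == "locked") != (s[1] == "busy")]
print("property holds" if not bad else f"counterexamples: {bad}")
```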
Program Analysis and Specialization for the C Programming Language (1994)
Cited by 629 (0 self)
Abstract
Software engineers are faced with a dilemma. They want to write general and well-structured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the capacity of today’s programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the ANSI C programming language. The subject of this thesis is the analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results.
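The notion of a generating extension is easy to illustrate on a toy example. The sketch below uses Python rather than C, and the standard `power` example rather than anything from the thesis, so it shows only the shape of the idea, not the thesis's analyses.

```python
# A toy illustration of a generating extension, in Python rather than C and
# far simpler than the thesis's machinery: run the parts of the program that
# depend only on the static input, and emit residual code for the rest.

def power(x, n):                      # the general program
    result = 1
    for _ in range(n):
        result *= x
    return result

def power_gen(n):
    """Generating extension for `power`: given the static input n, produce a
    specialized program in which the loop has been executed away."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)              # compile the residual program
    return namespace[f"power_{n}"], src

power_5, src = power_gen(5)
print(src)                            # the residual program: x * x * x * x * x
print(power_5(2), power(2, 5))        # 32 32: specialized and general agree
```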
Interprocedural dataflow analysis via graph reachability (1994)
Cited by 454 (34 self)
Abstract
The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes, but is not limited to, the classical separable problems (also known as "gen/kill" or "bit-vector" problems), e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy-constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
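The reduction is simple to sketch for the intraprocedural case. The toy example below (an illustrative program representation, with the paper's interprocedural call/return machinery elided) encodes possibly-uninitialized variables, one of the non-separable problems mentioned above, as reachability over an exploded graph of (program point, fact) nodes.

```python
# Sketch of the reduction: a distributive dataflow problem becomes
# reachability in an "exploded" graph whose nodes pair a program point with a
# fact. Here the facts are "variable x is possibly uninitialized". The
# interprocedural parts of the paper's construction are omitted.

from collections import deque

ZERO = "0"  # the distinguished fact: "this point is reachable at all"

# Straight-line toy program, one statement per edge between points 0..n:
# ("decl", x)      declares x without initializing it
# ("assign", x, e) assigns x from the variables listed in e
prog = [
    ("decl", "a"),
    ("decl", "b"),
    ("assign", "a", []),        # a = 7      -> a becomes initialized
    ("assign", "c", ["a"]),     # c = a + 1  -> c initialized (a is)
    ("assign", "d", ["b"]),     # d = b * 2  -> d possibly uninitialized
]

def exploded_successors(fact, stmt):
    """Edges of the exploded graph: how one fact flows across one statement.
    These per-fact edges are exactly what distributivity buys us."""
    if stmt[0] == "decl":
        _, x = stmt
        return [ZERO, x] if fact == ZERO else [fact]
    _, x, uses = stmt
    if fact == ZERO:
        return [ZERO]
    if fact == x:
        return []                       # kill: x is overwritten...
    out = [fact]
    if fact in uses:
        out.append(x)                   # ...unless an uninitialized var flows in
    return out

# Reachability from (entry, ZERO) answers the dataflow query at every point.
reach = {(0, ZERO)}
work = deque(reach)
while work:
    point, fact = work.popleft()
    if point < len(prog):
        for f in exploded_successors(fact, prog[point]):
            node = (point + 1, f)
            if node not in reach:
                reach.add(node)
                work.append(node)

exit_point = len(prog)
print(sorted(f for p, f in reach if p == exit_point and f != ZERO))
# -> ['b', 'd']: b was never initialized, and d was computed from b.
```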
A Static Analyzer for Large Safety-Critical Software (2003)
Cited by 271 (54 self)
Abstract
We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general-purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety-critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization, the symbolic manipulation of expressions to improve the precision of abstract transfer functions, the ellipsoid and decision tree abstract domains, all with sound handling of rounding errors in floating-point computations, widening strategies (with thresholds, delayed), and the automatic determination of the parameters (parametrized packing).
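One ingredient named above, widening with thresholds, can be shown in miniature. The sketch below is a generic interval-domain example, not the analyzer described in the paper; the thresholds and the analyzed loop are made up.

```python
# A miniature of widening-with-thresholds on the interval domain. Generic
# illustration only: the paper's analyzer also uses ellipsoid and decision
# tree domains, sound floating-point rounding, and much more.

import math

THRESHOLDS = [0, 1, 10, 100, 1000, math.inf]   # hypothetical threshold set

def join(a, b):                       # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """If a bound is still growing, jump it to the next threshold instead of
    straight to infinity; bounded loops then stabilize at a finite bound."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else min(t for t in THRESHOLDS if t >= new[1])
    return (lo, hi)

def meet_upper(iv, bound):            # intersect interval with (-inf, bound]
    return (iv[0], min(iv[1], bound))

# Analyze: i = 0; while (i < 1000) i = i + 1;
init = (0, 0)
inv = init
while True:
    guard = meet_upper(inv, 999)              # inside the loop: i <= 999
    body = (guard[0] + 1, guard[1] + 1)       # abstract effect of i = i + 1
    new = widen(inv, join(init, body))
    if new == inv:                            # fixpoint reached
        break
    inv = new

print("loop invariant: ", inv)                        # (0, 1000)
print("after the loop:", (max(inv[0], 1000), inv[1])) # (1000, 1000)
```

With plain widening (jumping straight to infinity) the upper bound would be lost; the 1000 threshold preserves it, which is the precision the abstract alludes to.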
A System and Language for Building System-Specific, Static Analyses (2002)
In Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation
Cited by 228 (14 self)
Abstract
This paper presents a novel approach to bug-finding analysis and an implementation of that approach. Our goal is to find as many serious bugs as possible. To do so, we designed a flexible, easy-to-use extension language for specifying analyses and an efficient algorithm for executing these extensions. The language, metal, allows the users of our system to specify a broad class of analyses in terms that resemble the intuitive description of the rules that they check. The system, xgcc, executes these analyses efficiently using a context-sensitive, interprocedural analysis.
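The paper defines metal's actual syntax, so the sketch below is only a stand-in: a generic state-machine checker in Python, driven over a hypothetical stream of per-path program events, checking a use-after-free rule of the kind such systems target.

```python
# Not metal (which the paper defines), just the same style of checker: a
# per-variable state machine driven over a toy, hypothetical event stream,
# flagging use-after-free and double-free.

TRANSITIONS = {
    ("tracked", "free"): "freed",
    ("freed",   "use"):  "error",        # use after free
    ("freed",   "free"): "error",        # double free
}

def check(events):
    state = {}
    for lineno, op, var in events:
        cur = state.get(var, "tracked")
        nxt = TRANSITIONS.get((cur, op), cur)
        if nxt == "error":
            print(f"line {lineno}: {op} of {var!r} after free")
        state[var] = nxt

# A toy event stream, standing in for what a real system would extract from
# each path through the control-flow graph:
check([
    (10, "use",  "p"),
    (11, "free", "p"),
    (12, "use",  "p"),    # flagged
    (13, "free", "p"),    # flagged
])
```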
The essence of command injection attacks in web applications (2006)
Cited by 185 (5 self)
Abstract
Web applications typically interact with a back-end database to retrieve persistent data and then present the data to the user as dynamically generated output, such as HTML web pages. However, this interaction is commonly done through a low-level API by dynamically constructing query strings within a general-purpose programming language, such as Java. This low-level interaction is ad hoc because it does not take into account the structure of the output language. Accordingly, user inputs are treated as isolated lexical entities which, if not properly sanitized, can cause the web application to generate unintended output. This is called a command injection attack, which poses a serious threat to web application security. This paper presents the first formal definition of command injection attacks in the context of web applications, and gives a sound and complete algorithm for preventing them based on context-free grammars and compiler parsing techniques. Our key observation is that, for an attack to succeed, the input that gets propagated into the database query or the output document must change the intended syntactic structure of the query or document. Our definition and algorithm are general and apply to many forms of command injection attacks. We validate our approach with SQLCHECK, an implementation for the setting of SQL command injection attacks. We evaluated SQLCHECK on real-world web applications with systematically compiled real-world attack data as input. SQLCHECK produced no false positives or false negatives, incurred low runtime overhead, and applied straightforwardly to web applications written in different languages.
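The key observation lends itself to a toy demonstration. The sketch below (hypothetical helper names, and a drastically simplified token grammar standing in for a real SQL parser) flags an input as an injection when the characters it contributes fail to remain inside a single literal of the generated query.

```python
# A toy version of the paper's key observation: an input is an injection if
# the substring it contributes does not stay inside one syntactic unit of the
# generated query (here, a single SQL string or number literal). The tokenizer
# and helper names are simplified inventions, not SQLCHECK.

import re

TOKEN = re.compile(r"\s*('(?:[^']|'')*'|\d+|\w+|<=|>=|<>|!=|[=<>*,;()])")

def tokenize(sql):
    tokens, pos = [], 0
    while pos < len(sql):
        m = TOKEN.match(sql, pos)
        if not m:
            raise ValueError(f"cannot tokenize at: {sql[pos:]!r}")
        tokens.append((m.start(1), m.end(1), m.group(1)))
        pos = m.end()
    return tokens

def is_injection(template, user_input):
    """Build the query, then check whether the user's characters fall inside
    exactly one literal token; spilling across tokens means the input changed
    the intended syntactic structure."""
    start = template.index("{}")
    query = template.replace("{}", user_input)
    end = start + len(user_input)
    covering = [t for t in tokenize(query) if t[0] < end and t[1] > start]
    literal = re.compile(r"^('(?:[^']|'')*'|\d+)$")
    return not (len(covering) == 1 and literal.match(covering[0][2]))

tmpl = "SELECT * FROM users WHERE name = '{}' ;"
print(is_injection(tmpl, "alice"))              # False: one string literal
print(is_injection(tmpl, "x' OR '1' = '1"))     # True: input spans many tokens
```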
Undecidability of Static Analysis (1992)
ACM Letters on Programming Languages and Systems
Cited by 165 (4 self)
Abstract
Static Analysis of programs is indispensable to any software tool, environment, or system that requires compile-time information about the semantics of programs. With the emergence of languages like C and LISP, Static Analysis of programs with dynamic storage and recursive data structures has become a field of active research. Such analysis is difficult, and the Static Analysis community has recognized the need for simplifying assumptions and approximate solutions. However, even under the common simplifying assumptions, such analyses are harder than previously recognized. Two fundamental Static Analysis problems are May Alias and Must Alias. The former is not recursive (i.e., is undecidable) and the latter is not recursively enumerable (i.e., is uncomputable), even when all paths are executable in the program being analyzed, for languages with if-statements, loops, dynamic storage, and recursive data structures.
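The flavor of such results can be sketched quickly. In the toy Python program below, a stand-in for the paper's C-like setting and not its actual proof construction, answering "may p alias y at the write?" precisely amounts to deciding whether an arbitrary loop terminates.

```python
# Toy illustration (not the paper's construction): wire an arbitrarily hard
# computation into the branch that decides whether two references alias, and
# a precise May Alias answer would decide that computation.

def maybe_terminates(n):
    """Stand-in for an arbitrary computation; whether this returns for a
    given n is, in general, as hard as any halting question."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2   # Collatz iteration
    return True

def program(n):
    x, y = [0], [0]            # two distinct heap objects
    p = x
    if maybe_terminates(n):
        p = y                  # reached only if the loop above returns
    p[0] = 1                   # May p alias y here? <=> does the loop halt on n?
    return y[0]

print(program(27))             # 1: for this n the loop happens to terminate
```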
Towards automatic generation of vulnerability-based signatures (2006)
Cited by 153 (28 self)
Abstract
In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and the vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show that the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures, whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., the number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest.
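A minimal instance of these definitions, with a made-up protocol and field: the vulnerability induces a language of inputs, and a regular expression denoting that language serves as the signature.

```python
# Toy instance of the paper's definitions (hypothetical protocol and buffer
# size): the vulnerability, not any one exploit, defines a language of inputs,
# and a signature is a representation of that language, here a regex.

import re

BUF = 64  # the vulnerable program copies the USER field into a 64-byte buffer

def exercises_bug(msg):
    """Reference behavior: True iff the input exploits the vulnerability."""
    m = re.match(r"USER ([^\r\n]*)\r\n", msg)
    return bool(m) and len(m.group(1)) > BUF

# Regular-expression vulnerability signature for the same language: any USER
# command whose argument exceeds the buffer, regardless of payload bytes.
SIGNATURE = re.compile(r"USER [^\r\n]{%d,}\r\n" % (BUF + 1))

exploit_a = "USER " + "A" * 200 + "\r\n"      # the sample exploit
exploit_b = "USER " + "\x90" * 65 + "\r\n"    # a polymorphic variant
benign    = "USER alice\r\n"

for msg in (exploit_a, exploit_b, benign):
    assert bool(SIGNATURE.match(msg)) == exercises_bug(msg)
print("signature matches exactly the vulnerability language on these inputs")
```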
A Schema for Interprocedural Modification Side-Effect Analysis With Pointer Aliasing (2001)
Cited by 139 (12 self)
Abstract
The first interprocedural modification side-effects analysis for C (MODC) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MODC algorithms with two independent phases: one for determining pointer-induced aliases and a subsequent one for propagating interprocedural side effects. These MODC algorithms are parameterized by the aliasing method used. The empirical results compare the performance of two dissimilar MODC algorithms: MODC(FSAlias) uses a flow-sensitive, calling-context-sensitive interprocedural alias analysis; MODC(FIAlias) uses a flow-insensitive, calling-context-insensitive alias analysis, which is much faster but less accurate. These two algorithms were profiled on 45 programs ranging in size from 250 to 30,000 lines of C code, and the results dramatically demonstrate the possible cost-precision trade-offs. This first comparative implementation of MODC analyses offers insight into the differences between flow-/context-sensitive and flow-/context-insensitive analyses. The analysis cost versus precision trade-offs in the side-effect information obtained are reported. The results show, surprisingly, that the precision of flow-sensitive side-effect analysis is not always prohibitive in cost, and that the precision of flow-insensitive analysis is substantially better than worst-case estimates.
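The two-phase schema can be outlined in a few lines. The sketch below uses a made-up program representation and a hard-coded points-to solution standing in for the pluggable alias phase; it shows only how modification side effects propagate over the call graph once aliases are known.

```python
# Schematic of the two-phase structure: phase 1 supplies an alias solution
# (here a hard-coded, flow-insensitive-style points_to map, since the schema
# is parameterized by the aliasing method); phase 2 propagates MOD sets over
# the call graph to a fixpoint. The program representation is invented.

procs = {
    # procedure -> direct assignments ('*p' = write through pointer p) + callees
    "main": {"assigns": ["g"],     "calls": ["f"]},
    "f":    {"assigns": ["*p"],    "calls": ["h"]},
    "h":    {"assigns": ["local"], "calls": []},
}
points_to = {"p": {"g", "x"}}      # phase-1 output: p may point to g or x

def mod_sets(procs, points_to):
    """MOD(f) = direct effects, expanded through aliases, plus the union of
    MOD over everything f calls, iterated to a fixpoint."""
    mod = {name: set() for name in procs}
    changed = True
    while changed:
        changed = False
        for name, info in procs.items():
            new = set()
            for tgt in info["assigns"]:
                if tgt.startswith("*"):
                    new |= points_to.get(tgt[1:], set())  # write through pointer
                else:
                    new.add(tgt)
            for callee in info["calls"]:
                new |= mod[callee]
            if new != mod[name]:
                mod[name] = new
                changed = True
    return mod

print(mod_sets(procs, points_to))
# main and f modify {g, x, local}; h modifies {local} (set order may vary)
```

A more precise alias phase would shrink points_to and hence every MOD set, which is exactly the cost-precision trade-off the paper measures.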
Call Graph Construction in Object-Oriented Languages (1997)
Cited by 130 (3 self)
Abstract
Interprocedural analyses enable optimizing compilers to more precisely model the effects of non-inlined procedure calls, potentially resulting in substantial increases in application performance. Applying interprocedural analysis to programs written in object-oriented or functional languages is complicated by the difficulty of constructing an accurate program call graph. This paper presents a parameterized algorithmic framework for call graph construction in the presence of message sends and/or first-class functions. We use this framework to describe and to implement a number of well-known and new algorithms. We then empirically assess these algorithms by applying them to a suite of medium-sized programs written in Cecil and Java, reporting on the relative cost of the analyses, the relative precision of the constructed call graphs, and the impact of this precision on the effectiveness of a number of interprocedural optimizations.
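One well-known point in such a parameterized space, class-hierarchy-style resolution of message sends, is easy to sketch. The class names and tiny program representation below are invented for illustration; the paper's framework covers far more precise algorithms as well.

```python
# Sketch of class-hierarchy-style call graph construction: a message send
# resolves to the method found by lookup in every subtype of the receiver's
# declared type. Classes, selectors, and the mini-IR are hypothetical.

classes = {"Shape": None, "Circle": "Shape", "Square": "Shape"}  # class -> superclass
methods = {("Shape", "area"), ("Circle", "area"), ("Square", "area")}
bodies = {  # each method body as a list of sends: (declared receiver type, selector)
    ("Shape", "total"): [("Shape", "area")],
}

def inherits(c, ancestor):
    while c is not None:
        if c == ancestor:
            return True
        c = classes[c]
    return False

def lookup(cls, selector):
    """Walk up the hierarchy from cls for an implementation of selector."""
    while cls is not None:
        if (cls, selector) in methods:
            return (cls, selector)
        cls = classes[cls]
    return None

def cha_call_graph(bodies):
    edges = set()
    for caller, sends in bodies.items():
        for declared, sel in sends:
            for sub in (c for c in classes if inherits(c, declared)):
                target = lookup(sub, sel)       # any subtype may flow here
                if target:
                    edges.add((caller, target))
    return edges

for edge in sorted(cha_call_graph(bodies)):
    print(edge)   # total's send resolves to Shape.area, Circle.area, Square.area
```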