Results 1–10 of 33
Design of Embedded Systems: Formal Models, Validation, and Synthesis
Proceedings of the IEEE, 1999
Cited by 113 (9 self)
Abstract:
This paper addresses the design of reactive real-time embedded systems. Such systems are often heterogeneous in implementation technologies and design styles, for example by combining hardware ASICs with embedded software. The concurrent design process for such embedded systems involves solving the specification, validation, and synthesis problems. We review the variety of approaches to these problems that have been taken.
Automated Soundness Proofs for Dataflow Analyses and Transformations Via Local Rules
In Proc. of the 32nd Symposium on Principles of Programming Languages, 2005
Cited by 68 (9 self)
Abstract:
We present Rhodium, a new language for writing compiler optimizations that can be automatically proved sound. Unlike our previous work on Cobalt, Rhodium expresses optimizations using explicit dataflow facts manipulated by local propagation and transformation rules. This new style allows Rhodium optimizations to be mutually recursively defined, to be automatically composed, to be interpreted in both flow-sensitive and flow-insensitive ways, and to be applied interprocedurally given a separate context-sensitivity strategy, all while retaining soundness. Rhodium also supports infinite analysis domains while guaranteeing termination of analysis. We have implemented a soundness checker for Rhodium and have specified and automatically proven the soundness of all of Cobalt's optimizations plus a variety of optimizations not expressible in Cobalt, including Andersen's points-to analysis, arithmetic-invariant detection, loop-induction-variable strength reduction, and redundant array load elimination.
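Outside the paper's own notation, the core idea of explicit dataflow facts driven by local rules can be sketched in Python. The toy below (hypothetical mini-IR and names, not Rhodium syntax) runs a constant-propagation analysis in which each instruction kind contributes a small local transfer rule.

```python
# Toy constant propagation driven by local per-instruction rules.
# Facts are var -> constant bindings; each rule looks only at one
# instruction and the facts flowing into it (hypothetical mini-IR).

def transfer(instr, facts):
    """Local rule: propagate facts across one instruction."""
    op, dst, src = instr
    out = {v: c for v, c in facts.items() if v != dst}  # kill old dst fact
    if op == "const":                     # dst := literal
        out[dst] = src
    elif op == "copy" and src in facts:   # dst := src, src known constant
        out[dst] = facts[src]
    return out

def analyze(program):
    """Run the local rules over a straight-line program."""
    facts, trace = {}, []
    for instr in program:
        facts = transfer(instr, facts)
        trace.append(dict(facts))
    return trace

prog = [("const", "x", 3), ("copy", "y", "x"), ("const", "x", 5)]
print(analyze(prog)[-1])  # final facts: {'y': 3, 'x': 5}
```

A real Rhodium rule would additionally carry a soundness obligation discharged automatically by a theorem prover; the sketch shows only the propagation side.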
Explaining answers from the semantic web: The inference web approach
Journal of Web Semantics, 2004
Cited by 62 (29 self)
Abstract:
The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make the answers more transparent by providing infrastructure for presenting and managing explanations. The explanations include information concerning where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase – an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML – the Proof Markup Language specification and API used for encoding portable proofs; IW browser – a tool supporting navigation and presentation of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.
Modeling and Designing Heterogeneous Systems
2002
Cited by 38 (12 self)
Abstract:
We present the modeling mechanism employed in Metropolis, a design environment for heterogeneous embedded systems, together with a design methodology based on this mechanism that we have applied to wireless communication systems. The methodology favors the reusability of components by explicitly decoupling the specification of orthogonal design aspects over a set of abstraction levels. A single model represents designs specified in this way, so that not only simulation but also analysis and synthesis algorithms can be applied to it relatively easily.
Semantic subtyping with an SMT solver
2010
Cited by 20 (1 self)
Abstract:
We study a first-order functional language with the novel combination of the ideas of refinement type (the subset of a type whose values satisfy a Boolean expression) and type-test (a Boolean expression testing whether a value belongs to a type). Our core calculus can express a rich variety of typing idioms; for example, intersection, union, negation, singleton, nullable, variant, and algebraic types are all derivable. We formulate a semantics in which expressions denote terms and types are interpreted as first-order logic formulas. Subtyping is defined as valid implication between the semantics of types. The formulas are interpreted in a specific model that we axiomatize using standard first-order theories. On this basis, we present a novel type-checking algorithm able to eliminate many dynamic tests and to detect many errors statically. The key idea is to rely on an SMT solver to compute subtyping efficiently. Moreover, interpreting types as formulas allows us to call the SMT solver at runtime to compute instances of types.
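The key move, interpreting types as formulas and deciding subtyping as validity of an implication, can be illustrated with a toy in which refinement types are integer intervals. Interval containment plays the role that the SMT solver plays in the paper; all names below are hypothetical.

```python
# Toy semantic subtyping: a refinement type {x: Int | lo <= x <= hi}
# denotes the formula lo <= x <= hi.  T1 <: T2 holds iff the
# implication sem(T1) => sem(T2) is valid, which for intervals is
# just containment.  A real implementation would discharge the
# implication with an SMT solver instead of this special case.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int  # inclusive bounds of the refinement predicate

def is_subtype(t1: Interval, t2: Interval) -> bool:
    """Valid(lo1<=x<=hi1 => lo2<=x<=hi2) iff [lo1,hi1] is inside [lo2,hi2]."""
    if t1.lo > t1.hi:        # empty type: the implication is vacuously valid
        return True
    return t2.lo <= t1.lo and t1.hi <= t2.hi

nat_small = Interval(0, 9)
nat = Interval(0, 100)
print(is_subtype(nat_small, nat))   # True: every x in [0,9] is in [0,100]
print(is_subtype(nat, nat_small))   # False
```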
Constraints Specification at Higher Levels of Abstraction
In Proceedings of the International Workshop on High Level Design Validation and Test, 2001
Cited by 18 (8 self)
Abstract:
We propose a formalism to express performance constraints at a high level of abstraction. The formalism allows design performance constraints to be specified even before all the low-level details necessary to evaluate them are known. It is based on a solid mathematical foundation, which removes any ambiguity in its interpretation, and yet it allows quite simple and natural specification of many typical constraints. Once the design details are known, the satisfaction of constraints can be checked either by simulation, by formal techniques such as theorem proving, or, in some cases, by automatic model checking.
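As one concrete reading of "checked by simulation", a performance constraint can be evaluated over a timestamped execution trace. The sketch below (hypothetical event format, not the paper's formalism) checks a maximum-latency bound between request and response events:

```python
# Checking a latency constraint over a simulation trace (toy).
# Constraint: every "req" event must be answered by a "resp" event
# within `bound` time units (hypothetical event encoding).

def check_latency(trace, bound):
    """trace: time-ordered list of (time, event) pairs."""
    pending = []  # timestamps of requests awaiting a response
    for time, event in trace:
        if event == "req":
            pending.append(time)
        elif event == "resp" and pending:
            if time - pending.pop(0) > bound:
                return False  # a response arrived too late
    return not pending        # unanswered requests also violate

trace = [(0, "req"), (3, "resp"), (5, "req"), (12, "resp")]
print(check_latency(trace, 5))   # False: the second response took 7 units
print(check_latency(trace, 10))  # True
```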
Proving optimizations correct using parameterized program equivalence
In Proceedings of the 2009 Conference on Programming Language Design and Implementation (PLDI), 2009
Cited by 15 (4 self)
Abstract:
Translation validation is a technique for checking that, after an optimization has run, the input and output of the optimization are equivalent. Traditionally, translation validation has been used to prove concrete, fully specified programs equivalent. In this paper we present Parameterized Equivalence Checking (PEC), a generalization of translation validation that can prove the equivalence of parameterized programs. A parameterized program is a partially specified program that can represent multiple concrete programs. For example, a parameterized program may contain a section of code whose only known property is that it does not modify certain variables. By proving parameterized programs equivalent, PEC can prove the correctness of transformation rules that represent complex optimizations once and for all, before they are ever run. We implemented our PEC technique in a tool that can establish the equivalence of two parameterized programs. To highlight the power of PEC, we designed a language for implementing complex optimizations using many-to-many rewrite rules, and used this language to implement a variety of optimizations including software pipelining, loop unrolling, loop unswitching, loop interchange, and loop fusion. Finally, to demonstrate the effectiveness of PEC, we used our PEC implementation to verify that all the optimizations we implemented in our language preserve program behavior.
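The baseline check that PEC generalizes, confirming that original and optimized code agree, can be sketched concretely. The toy below is much weaker than PEC (it tests concrete inputs, whereas PEC proves equivalence symbolically for every program matching a rule's parameters) and all names are hypothetical; it validates a loop unrolled by a factor of two against its original:

```python
# Toy translation validation: check that an "optimized" program
# agrees with the original on a set of test states.  PEC generalizes
# this by proving equivalence once, symbolically, for every program
# matching a rewrite rule's parameters.

def original(xs):
    total = 0
    for x in xs:
        total += 2 * x
    return total

def unrolled(xs):          # the same loop unrolled by a factor of 2
    total, i, n = 0, 0, len(xs)
    while i + 1 < n:
        total += 2 * xs[i] + 2 * xs[i + 1]
        i += 2
    if i < n:              # leftover iteration when len(xs) is odd
        total += 2 * xs[i]
    return total

def validate(p, q, inputs):
    """Translation validation on concrete inputs (a test, not a proof)."""
    return all(p(x) == q(x) for x in inputs)

tests = [[], [1], [1, 2], [3, 1, 4, 1, 5]]
print(validate(original, unrolled, tests))  # True
```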
Knowledge Representation and Classical Logic
2007
Cited by 11 (4 self)
Abstract:
Mathematical logicians had developed the art of formalizing declarative knowledge long before the advent of the computer age, but they were interested primarily in formalizing mathematics. Because of the important role of non-mathematical knowledge in AI, their emphasis was too narrow from the perspective of knowledge representation: their formal languages were not sufficiently expressive. On the other hand, most logicians were not concerned about the possibility of automated reasoning; from the perspective of knowledge representation, they were often too generous in the choice of syntactic constructs. In spite of these differences, classical mathematical logic has exerted significant influence on knowledge representation research, and it is appropriate to begin this handbook with a discussion of the relationship between these fields. The language of classical logic that is most widely used in the theory of knowledge representation is the language of first-order (predicate) formulas. These are the formulas that John McCarthy proposed to use for representing declarative knowledge in his advice taker paper [176], and that Alan Robinson proposed to prove automatically using resolution [236]. Propositional logic is, of course, the most important subset of first-order logic.
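Robinson's resolution rule, which derives a new clause from two clauses containing complementary literals, can be shown in miniature for the propositional case. The following toy (ignoring first-order unification) saturates a clause set under binary resolution and reports whether the empty clause, and hence a refutation, is derived:

```python
# Toy propositional resolution: a clause is a frozenset of literals,
# a literal is a (name, polarity) pair.  Deriving the empty clause
# shows the clause set is unsatisfiable.

def resolve(c1, c2):
    """All resolvents of two clauses on complementary literals."""
    out = []
    for name, pol in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:
                        return True   # empty clause: refutation found
                    new.add(r)
        if new <= clauses:            # fixpoint reached, no refutation
            return False
        clauses |= new

# {p}, {not p or q}, {not q} is unsatisfiable:
cnf = [frozenset({("p", True)}),
       frozenset({("p", False), ("q", True)}),
       frozenset({("q", False)})]
print(refute(cnf))  # True
```

Saturation terminates because only finitely many clauses can be built from a finite set of atoms.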