Results 1–10 of 89
Completing the Temporal Picture
1991
Cited by 76 (17 self)
The paper presents a relatively complete proof system for proving the validity of temporal properties of reactive programs. The presented proof system improves on previous temporal systems in that it reduces the validity of program properties to pure assertional reasoning, not involving additional temporal reasoning. The proof system is based on the classification of temporal properties according to the Borel hierarchy, providing appropriate proof rules for the classes of safety, response, and reactivity properties.
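Two of the property classes named above can be illustrated concretely. The following is a small sketch (not from the paper) that checks a safety property and a response property over a finite trace, with states as dictionaries and p, q as hypothetical predicates:

```python
# Illustrative sketch: checking a safety and a response property on a
# finite trace (a list of states); p and q are predicates on states.

def safety(trace, p):
    """Safety (always p): p must hold in every state of the trace."""
    return all(p(s) for s in trace)

def response(trace, p, q):
    """Response (p leads to q): every p-state is followed, possibly
    immediately, by a q-state."""
    return all(
        any(q(t) for t in trace[i:])
        for i, s in enumerate(trace) if p(s)
    )

trace = [{"req": True, "ack": False},
         {"req": False, "ack": True},
         {"req": False, "ack": False}]
p = lambda s: s["req"]
q = lambda s: s["ack"]
print(safety(trace, lambda s: not (s["req"] and s["ack"])))  # True: req and ack never coincide
print(response(trace, p, q))  # True: the request at position 0 is answered at position 1
```

Safety properties fail on a finite prefix; response properties constrain what must eventually follow, which is why the two classes need different proof rules.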
Deferred Acceptance Algorithms: History, Theory, Practice, and Open Questions
International Journal of Game Theory, Special Issue in Honor of David Gale's 85th Birthday, 2007
Cited by 52 (5 self)
The deferred acceptance algorithm proposed by Gale and Shapley (1962) has had a profound influence on market design, both directly, by being adapted into practical matching mechanisms, and, indirectly, by raising new theoretical questions. Deferred acceptance algorithms are at the basis of a number of labor market clearinghouses around the world, and have recently been implemented in school choice systems in Boston and New York City. In addition, the study of markets that have failed in ways that can be fixed with centralized mechanisms has led to a deeper understanding of some of the tasks a marketplace needs to accomplish to perform well. In particular, marketplaces work well when they provide thickness to the market, help it deal with the congestion that thickness can bring, and make it safe for participants to act effectively on their preferences. Centralized clearinghouses organized around the deferred acceptance algorithm can have these properties, and this has sometimes allowed failed markets to be reorganized.
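As a concrete illustration (a hedged sketch, not any clearinghouse's production mechanism), the deferred acceptance algorithm for one-to-one matching can be written in a few lines; the agent names and preference lists below are invented:

```python
# Sketch of Gale-Shapley deferred acceptance (proposer-proposing,
# one-to-one matching); data is illustrative.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """proposer_prefs / receiver_prefs: dicts mapping each agent to an
    ordered list of acceptable partners, most preferred first."""
    # rank[r][p] = position of proposer p in receiver r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # proposers with no tentative match
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # receiver -> tentatively held proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                     # p has exhausted its list
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                 # r holds p's offer -- acceptance is deferred
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])        # r trades up; the old holder is free again
            match[r] = p
        else:
            free.append(p)               # r rejects p outright
    return match

prefs_m = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_w = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(prefs_m, prefs_w))  # {'w1': 'm2', 'w2': 'm1'}
```

The "deferred" aspect is the key design choice: receivers never commit until the algorithm terminates, which is what yields stability of the final matching.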
A Study of The Fragile Base Class Problem
In European Conference on Object-Oriented Programming, 1998
Cited by 49 (1 self)
In this paper we study the fragile base class problem. This problem occurs in open object-oriented systems employing code inheritance as an implementation reuse mechanism. System developers unaware of extensions to the system developed by its users may produce a seemingly acceptable revision of a base class which may damage its extensions. The fragile ...
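The scenario in the abstract can be made concrete with a small, purely hypothetical example: a base class revision that looks harmless in isolation but silently breaks a subclass that relied on the base class's internal calling pattern:

```python
# Hypothetical instance of the fragile base class problem.

class Bag:
    def __init__(self):
        self.items = []
    def add(self, x):
        self.items.append(x)
    def add_all(self, xs):
        for x in xs:
            self.add(x)          # implemented via self.add -- see revision below

class CountingBag(Bag):
    """A user's extension: counts every insertion."""
    def __init__(self):
        super().__init__()
        self.count = 0
    def add(self, x):
        self.count += 1
        super().add(x)

# A "seemingly acceptable" revision of the base class: add_all now appends
# directly for efficiency, silently bypassing any overridden add.
class RevisedBag(Bag):
    def add_all(self, xs):
        self.items.extend(xs)    # no longer calls self.add

class CountingBag2(RevisedBag):
    def __init__(self):
        super().__init__()
        self.count = 0
    def add(self, x):
        self.count += 1
        super().add(x)

b = CountingBag();  b.add_all([1, 2, 3])
b2 = CountingBag2(); b2.add_all([1, 2, 3])
print(b.count, b2.count)  # 3 0 -- the revision broke the extension
```

Both versions of the base class are correct on their own; the damage only appears in the interaction with the extension, which is exactly what makes the problem hard to detect.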
Robust Semantics for Argumentation Frameworks
Journal of Logic and Computation, 1999
Cited by 39 (1 self)
We suggest a so-called "robust" semantics for a model of argumentation which represents arguments and their interactions, called "argumentation frameworks". We study a variety of additional definitions of acceptability of arguments; we explore the properties of these definitions; we describe their interrelationships: e.g. robust models can be characterized using the minimal (well-founded) models of a meta-framework. The various definitions of acceptability of argument sets can all deal with contradiction within an argumentation framework. Keywords: argumentation framework, semantics. 1 Introduction. In this paper we present semantics for a formal model of argumentation. As in other works such as [Pol94] and [Dun95], we abstract from the actual contents and form of the arguments themselves, and rather concentrate on the analysis of interactions between arguments. Argumentation-theoretic interpretations and proof procedures are applicable in practical reasoning, legal reasoning ([KT96] ...
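As a rough illustration of the fixpoint flavour of such semantics (this is the standard grounded-extension construction from the abstract-argumentation literature, not necessarily the paper's "robust" semantics), the least fixpoint of the characteristic function of an argumentation framework can be computed directly:

```python
# Sketch: grounded extension of an abstract argumentation framework,
# computed as the least fixpoint of the characteristic function.

def grounded_extension(arguments, attacks):
    """arguments: set of argument names; attacks: set of (attacker, target)."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    def defended(s):
        # arguments every attacker of which is counter-attacked by some member of s
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers_of[a])}
    s = set()
    while defended(s) != s:      # iterate the monotone characteristic function
        s = defended(s)
    return s

args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']: a is unattacked and defends c
```

The iteration starts from the empty set, first collecting unattacked arguments and then everything they defend, so it always yields the unique minimal (well-founded-style) acceptable set.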
Dynamic Congruence vs. Progressing Bisimulation for CCS
Fundamenta Informaticae, 1992
Cited by 36 (12 self)
Weak Observational Congruence (woc) defined on CCS agents is not a bisimulation, since it does not require two states reached by bisimilar computations of woc agents to be still woc; e.g. α.τ.β.nil and α.β.nil are woc, but τ.β.nil and β.nil are not. This fact prevents us from characterizing CCS semantics (when τ is considered invisible) as a final algebra, since the semantic function would induce an equivalence over the agents that is both a congruence and a bisimulation. In the paper we introduce a new behavioural equivalence for CCS agents, which is the coarsest among those bisimulations which are also congruences. We call it Dynamic Observational Congruence because it expresses a natural notion of equivalence for concurrent systems required to simulate each other in the presence of dynamic, i.e. run-time, (re)configurations. We provide an algebraic characterization of Dynamic Congruence in terms of a universal property of finality. Furthermore we introduce Progressing Bisimulation ...
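The counterexample in the abstract can be replayed mechanically. The sketch below (an ad hoc naive fixpoint, not the paper's algebraic machinery) checks weak bisimilarity on tiny transition systems encoding the agents: α.τ.β.nil and α.β.nil come out weakly bisimilar, and so do τ.β.nil and β.nil, even though the latter pair fails to be a congruence because a summation context can detect the initial invisible step:

```python
# Naive weak bisimilarity on small labelled transition systems.
# TAU marks the invisible action; the encoding is ad hoc.

TAU = "tau"
ACTS = ("alpha", "beta", TAU)

def weak_closure(trans, s):
    """States reachable from s by zero or more tau steps."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (a, v) in trans.get(u, ()):
            if a == TAU and v not in seen:
                seen.add(v); stack.append(v)
    return seen

def weak_steps(trans, s, a):
    """Targets of the weak transition s ==a==> t (tau* a tau*; for a = tau,
    zero or more tau steps)."""
    pre = weak_closure(trans, s)
    if a == TAU:
        return pre
    mids = {v for u in pre for (b, v) in trans.get(u, ()) if b == a}
    return {t for m in mids for t in weak_closure(trans, m)}

def weakly_bisimilar(trans, p, q):
    """Greatest fixpoint: start from the total relation, remove pairs that
    violate the weak transfer condition until stable."""
    states = list(trans)
    rel = {(x, y) for x in states for y in states}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(rel):
            ok = all(any((x2, y2) in rel for y2 in weak_steps(trans, y, a))
                     for a in ACTS for x2 in weak_steps(trans, x, a)) \
                 and all(any((x2, y2) in rel for x2 in weak_steps(trans, x, a))
                         for a in ACTS for y2 in weak_steps(trans, y, a))
            if not ok:
                rel.discard((x, y)); changed = True
    return (p, q) in rel

# p: alpha.tau.beta.nil   q: alpha.beta.nil   r: tau.beta.nil (vs beta.nil at q1)
trans = {
    "p0": {("alpha", "p1")}, "p1": {(TAU, "p2")}, "p2": {("beta", "p3")}, "p3": set(),
    "q0": {("alpha", "q1")}, "q1": {("beta", "q2")}, "q2": set(),
    "r0": {(TAU, "r1")},  "r1": {("beta", "r2")},  "r2": set(),
}
print(weakly_bisimilar(trans, "p0", "q0"))  # True
print(weakly_bisimilar(trans, "r0", "q1"))  # True -- weakly bisimilar, yet not woc:
                                            # the initial tau is invisible to bisimilarity
```

This is exactly the gap the paper addresses: weak bisimilarity ignores the leading τ that a congruence must observe, so the coarsest bisimulation-congruence sits strictly inside weak bisimilarity.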
A Weakest Precondition Semantics for an Object-oriented Language of Refinement
Lecture Notes in Computer Science 1708, 1999
Cited by 33 (9 self)
We define a predicate-transformer semantics for an object-oriented language that includes specification constructs from refinement calculi. The language includes recursive classes, visibility control, dynamic binding, and recursive methods. Using the semantics, we formulate basic notions of refinement, with respect to which the constructs are shown to be monotonic. Such results are a first step towards a refinement calculus. The step is not trivial, because of the number of features in the language and especially the complexity of dynamic binding.
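A predicate-transformer semantics can be sketched in miniature (far simpler than the paper's language: no classes, no dynamic binding). Commands denote functions from postconditions to preconditions, with predicates represented as Python functions on a state dictionary:

```python
# Miniature wp-semantics sketch: each command maps a postcondition Q
# (a predicate on states) to its weakest precondition.

def assign(var, expr):
    """wp(var := expr, Q) = Q[var := expr]"""
    return lambda Q: (lambda s: Q({**s, var: expr(s)}))

def seq(c1, c2):
    """wp(c1; c2, Q) = wp(c1, wp(c2, Q))"""
    return lambda Q: c1(c2(Q))

def cond(b, c1, c2):
    """wp(if b then c1 else c2, Q): take the branch b selects in state s."""
    return lambda Q: (lambda s: c1(Q)(s) if b(s) else c2(Q)(s))

# program: if x < 0 then y := -x else y := x   (absolute value)
prog = cond(lambda s: s["x"] < 0,
            assign("y", lambda s: -s["x"]),
            assign("y", lambda s: s["x"]))
post = lambda s: s["y"] >= 0
pre = prog(post)
print(pre({"x": -5}), pre({"x": 7}))  # True True: the postcondition is always established
```

Monotonicity, the property the paper establishes for its constructs, means here that weakening Q can only weaken the computed precondition, which is what licenses stepwise refinement.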
A Conservative Look at Operational Semantics with Variable Binding
Information and Computation, 1998
Cited by 31 (4 self)
We set up a formal framework to describe transition system specifications in the style of Plotkin. This framework has the power to express many-sortedness, general binding mechanisms and substitutions, among other notions such as negative hypotheses and unary predicates on terms. The framework is used to present a conservativity format in operational semantics, which states sufficient criteria to ensure that the extension of a transition system specification with new transition rules does not affect the semantics of the original terms.
Practical Verification And Synthesis Of Low Latency Asynchronous Systems
1994
Cited by 27 (12 self)
A new theory and methodology for the practical verification and synthesis of asynchronous systems is developed to aid in the rapid and correct implementation of complex control structures. Specifications are based on a simple process algebra called CCS that is concise and easy to understand and use. A software prototype CAD tool called Analyze was written as part of this dissertation to allow the principles of this work to be tested and applied. Attention to complexity, efficient algorithms, and compositional methods has resulted in a tool that can be several orders of magnitude faster than currently available tools for comparable applications. A new theory for loose specifications based on partial orders is developed for both trace and bisimulation semantics. Formal verification uses these partial orders as the foundation of conformance between a specification and its refinement. The definitions support freedom of design choices by identifying the necessary behaviors, the illegal beh...
Deterministic and Non-Deterministic Stable Models
Journal of Logic and Computation, 1997
Cited by 26 (6 self)
Stable models were first introduced in the domain of total interpretations (T-stable models), where the existence of multiple T-stable models for the same program provides a powerful mechanism to express non-determinism. Stable models were later extended to the domain of partial interpretations (P-stable models). In this paper, we show that the presence of multiple P-stable models need not be a direct manifestation of non-determinism, for it can instead be an expression of assorted degrees of undefinedness. To separate the two factors, non-determinism and undefinedness, this paper introduces the notion of deterministic stable models and strictly non-deterministic ones. Deterministic stable models form an interesting family, having a lattice structure where the well-founded model serves as the bottom; the top of the lattice, the maximum deterministic stable model, resolves differences between any two P-stable models in the family. On the other hand, every two models in a fam ...
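The non-determinism expressed by multiple T-stable models is easy to exhibit by brute force. The sketch below (an illustrative guess-and-check encoding, not the paper's formalism) enumerates the stable models of the classic two-rule program p :- not q, q :- not p:

```python
# Guess-and-check stable models of a small ground normal program.
# A rule is encoded as (head, positive_body, negative_body).
from itertools import combinations

def reduct(rules, model):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate model; strip negative literals from the rest."""
    return [(head, pos) for head, pos, neg in rules
            if not (set(neg) & model)]

def least_model(definite_rules):
    """Least Herbrand model of a negation-free program, by saturation."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= m and head not in m:
                m.add(head); changed = True
    return m

def stable_models(rules, atoms):
    """A candidate is stable iff it equals the least model of its reduct."""
    return [set(c)
            for k in range(len(atoms) + 1)
            for c in combinations(sorted(atoms), k)
            if least_model(reduct(rules, set(c))) == set(c)]

# p :- not q.   q :- not p.
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(stable_models(rules, {"p", "q"}))  # [{'p'}, {'q'}]: two (T-)stable models
```

The two models here genuinely encode a non-deterministic choice between p and q; the paper's point is that in the partial-interpretation setting, multiplicity of P-stable models can instead reflect degrees of undefinedness.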