Results 1 – 6 of 6
The reactive simulatability (RSIM) framework for asynchronous systems
Information and Computation, 2007
"... We define reactive simulatability for general asynchronous systems. Roughly, simulatability means that a real system implements an ideal system (specification) in a way that preserves security in a general cryptographic sense. Reactive means that the system can interact with its users multiple times ..."
Abstract

Cited by 37 (5 self)
We define reactive simulatability for general asynchronous systems. Roughly, simulatability means that a real system implements an ideal system (specification) in a way that preserves security in a general cryptographic sense. Reactive means that the system can interact with its users multiple times, e.g., in many concurrent protocol runs or a multi-round game. In terms of distributed systems, reactive simulatability is a type of refinement that preserves particularly strong properties, in particular confidentiality. A core feature of reactive simulatability is composability, i.e., the real system can be plugged in instead of the ideal system within arbitrary larger systems; this is shown in follow-up papers, and so is the preservation of many classes of individual security properties from the ideal to the real systems. A large part of this paper defines a suitable system model. It is based on probabilistic I/O automata (PIOA) with two main new features. One is generic distributed scheduling; important special cases are realistic adversarial scheduling, procedure-call-type scheduling among co-located system parts, and special schedulers such as for fairness, also in combinations. The other is the definition of the reactive runtime via a realization by Turing machines, such that notions like polynomial-time are composable; the simpler notion of complexity of the automata's transition functions is not composable. As specializations of this model we define security-specific concepts, in particular a separation between honest users and adversaries, and several trust models. The benefit of I/O automata as the main model, instead of only interactive Turing machines as usual in cryptographic multi-party computation, is that many cryptographic systems can be specified with an ideal system consisting of only one simple, deterministic I/O automaton without any cryptographic objects, as many follow-up papers show. This enables the use of classic formal methods and automatic proof tools for proving larger distributed protocols and systems that use these cryptographic systems.
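The real/ideal paradigm described above can be made concrete with a toy sketch (this is an illustration of the general simulatability idea, not the RSIM formalism): the ideal system is a trusted channel that delivers the message and leaks only its length, the real system is a one-time-pad channel whose ciphertext the adversary sees, and a simulator turns the ideal leakage into a view distributed like the real one. All names here are hypothetical.

```python
import os

# Ideal system: a trusted channel that delivers m and leaks only len(m).
def ideal_channel(m: bytes):
    leakage_to_adversary = len(m)   # ideal leakage: message length only
    return m, leakage_to_adversary  # message delivered untouched

# Real system: one-time-pad encryption; the adversary sees the ciphertext.
def real_channel(m: bytes):
    pad = os.urandom(len(m))
    ciphertext = bytes(a ^ b for a, b in zip(m, pad))
    delivered = bytes(a ^ b for a, b in zip(ciphertext, pad))  # receiver decrypts
    return delivered, ciphertext

# Simulator: from the ideal leakage (the length) alone it produces a view
# distributed identically to the real ciphertext (uniform random bytes).
def simulator(leaked_length: int) -> bytes:
    return os.urandom(leaked_length)

msg = b"attack at dawn"
ideal_out, leak = ideal_channel(msg)
real_out, adversary_view = real_channel(msg)
simulated_view = simulator(leak)
assert ideal_out == real_out == msg            # both systems deliver correctly
assert len(simulated_view) == len(adversary_view)  # views match in shape
```

Because a one-time pad ciphertext is uniformly random, the simulated and real adversary views here are identically distributed, which is the perfect-security special case of "the real system implements the ideal system".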
Universally composable symbolic analysis of Diffie–Hellman based key exchange. Cryptology ePrint Archive, Report 2010/303, 2010
"... Canetti and Herzog (TCC’06) show how to efficiently perform fully automated, computationally sound security analysis of key exchange protocols with an unbounded number of sessions. A key tool in their analysis is composability, which allows deducing security of the multisession case from the securi ..."
Abstract

Cited by 3 (1 self)
Canetti and Herzog (TCC’06) show how to efficiently perform fully automated, computationally sound security analysis of key exchange protocols with an unbounded number of sessions. A key tool in their analysis is composability, which allows deducing security of the multi-session case from the security of a single session. However, their framework only captures protocols that use public-key encryption as the only cryptographic primitive, and only handles static corruptions. We extend the [CH’06] modeling in two ways. First, we also handle protocols that use digital signatures and Diffie–Hellman exchange. Second, we also handle forward secrecy under fully adaptive party corruptions. This allows us to automatically analyze systems that use an unbounded number of sessions of realistic key exchange protocols such as the ISO 9798-3 or TLS protocol. A central tool in our treatment is a new abstract modeling of plain Diffie–Hellman key exchange. Specifically, we show that plain Diffie–Hellman securely realizes an idealized version of
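The plain Diffie–Hellman exchange that this abstract models can be sketched in a few lines (illustrative parameters only; real deployments use standardized groups, e.g. the RFC 3526 MODP groups, and derive keys from the shared secret rather than using it raw):

```python
import secrets

# Toy plain Diffie-Hellman key exchange over Z_p*.
p = 2**127 - 1          # a Mersenne prime; fine for a demo, far too small for practice
g = 3                   # generator chosen for illustration

a = secrets.randbelow(p - 3) + 2    # Alice's ephemeral secret exponent
b = secrets.randbelow(p - 3) + 2    # Bob's ephemeral secret exponent

A = pow(g, a, p)        # Alice sends g^a to Bob
B = pow(g, b, p)        # Bob sends g^b to Alice

k_alice = pow(B, a, p)  # Alice computes (g^b)^a = g^(ab)
k_bob   = pow(A, b, p)  # Bob computes (g^a)^b = g^(ab)
assert k_alice == k_bob  # both parties hold the same shared secret
```

The point of the abstract's idealized version is precisely that protocol analyses can treat the agreed value `k_alice` as an ideal fresh key rather than reasoning about the group arithmetic above.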
Threshold Homomorphic Encryption in the Universally Composable Cryptographic Library
"... Abstract. The universally composable cryptographic library by Backes, Pfitzmann and Waidner provides DolevYaolike, but cryptographically sound abstractions to common cryptographic primitives like encryptions and signatures. The library has been used to give the correctness proofs of various protoc ..."
Abstract

Cited by 2 (0 self)
Abstract. The universally composable cryptographic library by Backes, Pfitzmann and Waidner provides Dolev-Yao-like, but cryptographically sound, abstractions of common cryptographic primitives such as encryptions and signatures. The library has been used to give correctness proofs of various protocols; while the arguments in such proofs are similar to those made with the Dolev-Yao model, which has been researched for a couple of decades already, the conclusions that such arguments provide are cryptographically sound. Various interesting protocols, for example e-voting, make extensive use of primitives that the library currently does not provide. The library can certainly be extended, and in this paper we provide one such extension: we add threshold homomorphic encryption to the universally composable cryptographic library and demonstrate its usefulness by (re)proving the security of a well-known e-voting protocol.
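The e-voting use of threshold homomorphic encryption can be illustrated with a toy exponential-ElGamal scheme and 2-out-of-2 threshold decryption (tiny parameters for illustration only; this is a generic textbook construction, not the library's own primitive):

```python
import secrets

# Toy additively homomorphic (exponential) ElGamal with 2-of-2 threshold decryption.
q = 11                  # prime order of the subgroup
p = 2 * q + 1           # safe prime, p = 23
g = 4                   # generates the order-q subgroup of Z_p*

# Threshold key generation: the secret key x is additively shared as x1 + x2.
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
h = pow(g, (x1 + x2) % q, p)        # joint public key

def encrypt(m: int):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(h, r, p) * pow(g, m, p)) % p

def add_ciphertexts(c, d):          # homomorphic: plaintexts add under multiplication
    return (c[0] * d[0]) % p, (c[1] * d[1]) % p

votes = [1, 0, 1, 1]                # 1 = yes, 0 = no
tally_ct = encrypt(votes[0])
for v in votes[1:]:
    tally_ct = add_ciphertexts(tally_ct, encrypt(v))

# Threshold decryption: each trustee contributes a partial share; neither
# trustee ever learns the full secret key or any individual vote.
c1, c2 = tally_ct
share1, share2 = pow(c1, x1, p), pow(c1, x2, p)
s = (share1 * share2) % p           # recombined: c1^(x1+x2) = c1^x
gm = (c2 * pow(s, -1, p)) % p       # = g^(sum of votes)
tally = next(m for m in range(q) if pow(g, m, p) == gm)  # small discrete log
assert tally == sum(votes)
```

This is exactly the pattern a homomorphic-tally election uses: ciphertexts of individual ballots are combined publicly, and only the aggregate is ever decrypted, jointly by the trustees.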
Universally Composable Key-Management
"... Abstract. We present the first universally composable keymanagement functionality, formalized in the GNUC framework by Hofheinz and Shoup. It allows the enforcement of a wide range of security policies and can be extended by diverse key usage operations with no need to repeat the security proof. We ..."
Abstract

Cited by 2 (2 self)
Abstract. We present the first universally composable key-management functionality, formalized in the GNUC framework by Hofheinz and Shoup. It allows the enforcement of a wide range of security policies and can be extended by diverse key-usage operations with no need to repeat the security proof. We illustrate its use by proving an implementation of a security token secure with respect to arbitrary key-usage operations, and explore a proof technique that allows the storage of cryptographic keys externally, a novel development in simulation-based security frameworks.
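The policy-enforcement idea in this abstract can be sketched as a key store that hands out opaque handles and refuses any operation the key's usage attribute does not permit (all class and method names here are hypothetical, not the paper's interface):

```python
import os, hmac, hashlib

class KeyStore:
    # Policy: each usage attribute maps to the set of operations it allows.
    POLICY = {"mac": {"mac"}, "wrap": {"wrap"}}

    def __init__(self):
        self._keys = {}                     # handle -> (attribute, key bytes)

    def new_key(self, attribute: str) -> int:
        handle = len(self._keys)
        self._keys[handle] = (attribute, os.urandom(32))
        return handle                       # callers only ever see the handle

    def _use(self, handle: int, op: str) -> bytes:
        attribute, key = self._keys[handle]
        if op not in self.POLICY[attribute]:
            raise PermissionError(f"{op} not allowed for {attribute} keys")
        return key

    def mac(self, handle: int, msg: bytes) -> bytes:
        return hmac.new(self._use(handle, "mac"), msg, hashlib.sha256).digest()

store = KeyStore()
h_mac = store.new_key("mac")
tag = store.mac(h_mac, b"hello")            # permitted: MAC key used for MACing
h_wrap = store.new_key("wrap")
try:
    store.mac(h_wrap, b"hello")             # refused: wrap key misused for MACing
except PermissionError:
    pass
```

A composable key-management functionality generalizes this picture: new operations can be added against the same handle-and-policy interface without redoing the security proof of the store itself.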
The layered games framework for specifications and analysis of security protocols
Lecture Notes in Computer Science, 2008
"... Abstract. We establish rigorous foundations to the use of modular, layered design for building complex distributed systems, resilient to failures and attacks. Layering is key to the design of the Internet and other distributed systems. Hence, solid, theoretical foundations are essential, especially ..."
Abstract

Cited by 2 (1 self)
Abstract. We establish rigorous foundations for the use of modular, layered design in building complex distributed systems that are resilient to failures and attacks. Layering is key to the design of the Internet and other distributed systems. Hence, solid theoretical foundations are essential, especially when considering adversarial settings, such as for security and cryptographic protocols. We use games to define specifications for each layer. A protocol realizes a layer (over some lower layer) if it ‘wins’, with high probability, a specified game when running over any implementation of the lower layer. This is in contrast to existing frameworks allowing modular design of cryptographic protocols, e.g. Universal Composability [15], where protocols must emulate an ideal functionality. Ideal functionalities are a very elegant method for specifications, but we argue that game-based specifications are often more appropriate. In particular, it may be hard to design the ‘correct’ ideal functionality and avoid overspecification (‘forcing’ the protocol to follow a particular design) and underspecification (e.g., allowing protocols that work reasonably only for a worst-case adversary but poorly for realistic adversaries); see details within. Our definitions include the basic concepts for modular, layered design: protocols, systems, configurations, executions, and models. We also define three basic relations: indistinguishability (between two systems), satisfaction (of a model by a system), and realization (by a protocol, of one model over another model). We prove several basic properties, including the layering lemma and the indistinguishability lemma. The layering lemma shows that given protocols π_1, ..., π_u, if every protocol π_i realizes model M_i over model M_{i-1}, then the composite protocol π_{1...u} realizes model M_u over M_0. This allows specification, design, and analysis of each layer independently, and combining the results to ensure properties of the complete system.
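The quantitative flavor behind a layering result of this kind is a union bound: if layer i fails its game with probability at most ε_i, the composed stack fails with probability at most Σ ε_i. A small Monte Carlo sketch checks this numerically (the independence of layer failures below is only a simulation convenience, not something the lemma itself requires; the ε values are made up for illustration):

```python
import random

random.seed(0)
eps = [0.01, 0.02, 0.005]   # assumed per-layer failure probabilities
trials = 200_000

failures = 0
for _ in range(trials):
    # A composed run fails as soon as any single layer loses its game.
    if any(random.random() < e for e in eps):
        failures += 1

empirical = failures / trials
# Union bound: composed failure probability <= sum of per-layer probabilities
# (allow a small slack for sampling error).
assert empirical <= sum(eps) + 0.005
```

The exact composed failure probability here is 1 − (1−0.01)(1−0.02)(1−0.005) ≈ 0.0347, which the bound Σ ε_i = 0.035 just covers, showing why per-layer guarantees can be added up across the stack.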
2.2 Notions of Simulation-Based Security, 2009
"... For most basic cryptographic tasks, such as public key encryption, digital signatures, authentication, key exchange, and many other more sophisticated tasks, ideal functionalities have been formulated in the simulationbased security approach, along with their realizations. Surprisingly, however, no ..."
Abstract
For most basic cryptographic tasks, such as public-key encryption, digital signatures, authentication, key exchange, and many other more sophisticated tasks, ideal functionalities have been formulated in the simulation-based security approach, along with their realizations. Surprisingly, however, no such functionality exists for symmetric encryption, except for a more abstract Dolev-Yao style functionality. In this paper, we fill this gap. We propose two functionalities for symmetric encryption, an unauthenticated and an authenticated version, and show that they can be implemented based on standard cryptographic assumptions for symmetric encryption schemes, namely IND-CCA security and authenticated encryption, respectively. We also illustrate the usefulness of
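The authenticated-encryption assumption this abstract relies on can be illustrated with a toy encrypt-then-MAC construction, the generic pattern by which authenticated encryption is built from weaker pieces (a hash-based keystream stands in for a real cipher here; production code should use a vetted AEAD such as AES-GCM or ChaCha20-Poly1305):

```python
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; illustrative only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag             # encrypt-then-MAC layout

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")   # reject forgeries first
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))

k_enc, k_mac = os.urandom(32), os.urandom(32)
blob = encrypt(k_enc, k_mac, b"secret ballot")
assert decrypt(k_enc, k_mac, blob) == b"secret ballot"
```

The integrity check before decryption is what distinguishes the authenticated functionality from the unauthenticated one: any modification of the ciphertext is rejected rather than silently decrypted to garbage.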