Results 1–10 of 30
StegoTorus: A camouflage proxy for the Tor anonymity system
, 2012
Abstract

Cited by 29 (2 self)
Internet censorship by governments is an increasingly common practice worldwide. Internet users and censors are locked in an arms race: as users find ways to evade censorship schemes, the censors develop countermeasures for the evasion tactics. One of the most popular and effective circumvention tools, Tor, must regularly adjust its network traffic signature to remain usable. We present StegoTorus, a tool that comprehensively disguises Tor from protocol analysis. To foil analysis of packet contents, Tor’s traffic is steganographed to resemble an innocuous cover protocol, such as HTTP. To foil analysis at the transport level, the Tor circuit is distributed over many shorter-lived connections with per-packet characteristics that mimic cover-protocol traffic. Our evaluation demonstrates that StegoTorus improves the resilience of Tor to fingerprinting attacks and delivers usable performance.
More efficient oblivious transfer and extensions for faster secure computation
, 2013
Abstract

Cited by 25 (4 self)
Protocols for secure computation enable parties to compute a joint function on their private inputs without revealing anything but the result. A foundation for secure computation is oblivious transfer (OT), which traditionally requires expensive public-key cryptography. A more efficient way to perform many OTs is to extend a small number of base OTs using OT extensions based on symmetric cryptography. In this work we present optimizations and efficient implementations of OT and OT extensions in the semi-honest model. We propose a novel OT protocol with security in the standard model and improve OT extensions with respect to communication complexity, computation complexity, and scalability. We also provide specific optimizations of OT extensions that are tailored to the secure computation protocols of Yao and Goldreich-Micali-Wigderson and reduce the communication complexity even further. We experimentally verify the efficiency gains of our protocols and optimizations. By applying our implementation to current secure computation frameworks, we can securely compute a Levenshtein distance circuit with 1.29 billion AND gates at a rate of 1.2 million AND gates per second. Moreover, we demonstrate the importance of correctly implementing OT within secure computation protocols by presenting an attack on the FastGC framework.
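The extension idea can be made concrete with a toy sketch in the spirit of the classic IKNP-style OT extension: a few base OTs (simulated below as an ideal functionality inside one process) are stretched into many OTs using only XORs and a hash. SHA-256 stands in for the correlation-robust hash; all names and parameters here are illustrative and do not reflect the paper's actual optimized protocols.

```python
import hashlib
import secrets

K = 128  # number of base OTs (security parameter)
M = 8    # number of extended OTs produced from them

def H(j, v):
    # Correlation-robust hash stand-in: a mask for row j; v is a K-bit int.
    return hashlib.sha256(j.to_bytes(4, "big") + v.to_bytes(K // 8, "big")).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ot_extension(msgs, choice_bits):
    """msgs: M pairs (m0, m1) of 16-byte messages; choice_bits: M bits.
    Returns the receiver's output, msgs[j][choice_bits[j]] for each j."""
    # Receiver picks a random M x K bit matrix T (rows stored as K-bit ints).
    T = [secrets.randbits(K) for _ in range(M)]
    # Base OTs with roles reversed, simulated as an ideal functionality:
    # per column i, the sender's random bit s_i selects column t^i or
    # t^i XOR r, which row-wise gives q_j = t_j XOR (r_j * s).
    s = secrets.randbits(K)
    Q = [T[j] ^ (s if choice_bits[j] else 0) for j in range(M)]
    out = []
    for j, (m0, m1) in enumerate(msgs):
        # Sender masks both messages of pair j.
        y0 = xor(m0, H(j, Q[j]))
        y1 = xor(m1, H(j, Q[j] ^ s))
        # Receiver can only strip the mask of the chosen message, H(j, t_j).
        out.append(xor((y0, y1)[choice_bits[j]], H(j, T[j])))
    return out
```

The point of the construction is that only K public-key base OTs are ever needed, regardless of how large M grows; everything past them is symmetric cryptography.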
A unified approach to deterministic encryption: New constructions and a connection to computational entropy
 TCC 2012, volume 7194 of LNCS
, 2012
Abstract

Cited by 22 (1 self)
We propose a general construction of deterministic encryption schemes that unifies prior work and gives novel schemes. Specifically, its instantiations provide:
• A construction from any trapdoor function that has sufficiently many hardcore bits.
• A construction that provides “bounded” multi-message security from lossy trapdoor functions.
The security proofs for these schemes are enabled by three tools that are of broader interest:
• A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.)
• A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log₂(1/p).
• A generalization of the leftover hash lemma to correlated distributions.
We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.
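For reference, the conditioning result on computational entropy can be written out as follows. The notation (HILL entropy with quality given by distinguishing advantage ε and circuit size s) is a standard formalization assumed here, not taken verbatim from the abstract:

```latex
\text{If } H^{\mathrm{HILL}}_{\varepsilon,\,s}(X) \ge k \ \text{ and } \ \Pr[E] = p,
\quad\text{then}\quad
H^{\mathrm{HILL}}_{\varepsilon/p,\;s'}(X \mid E) \;\ge\; k - \log_2\frac{1}{p}
```

for circuit size s′ close to s: the distinguishing advantage (quality) degrades by a factor of p, and the amount of entropy (quantity) drops by log₂(1/p).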
SoK: SSL and HTTPS: Revisiting past challenges and evaluating certificate trust model enhancements
 IEEE SYMPOSIUM ON SECURITY AND PRIVACY
, 2013
Abstract

Cited by 19 (1 self)
Internet users today depend daily on HTTPS for secure communication with sites they intend to visit. Over the years, many attacks on HTTPS and the certificate trust model it uses have been hypothesized, executed, and/or evolved. Meanwhile, the number of browser-trusted (and thus, de facto, user-trusted) certificate authorities has proliferated, while the due diligence in baseline certificate issuance has declined. We survey and categorize prominent security issues with HTTPS and provide a systematic treatment of the history and ongoing challenges, intending to provide context for future directions. We also provide a comparative evaluation of current proposals for enhancing the certificate infrastructure used in practice.
Instantiating Random Oracles via UCEs
, 2013
Abstract

Cited by 9 (3 self)
This paper provides a (standard-model) notion of security for (keyed) hash functions, called UCE, that we show enables instantiation of random oracles (ROs) in a fairly broad and systematic way. Goals and schemes we consider include deterministic PKE; message-locked encryption; hardcore functions; point-function obfuscation; OAEP; encryption secure for key-dependent messages; encryption secure under related-key attack; proofs of storage; and adaptively-secure garbled circuits with short tokens. We can take existing, natural and efficient ROM schemes and show that the instantiated scheme resulting from replacing the RO with a UCE function is secure in the standard model. In several cases this results in the first standard-model schemes for these goals. The definition of UCE-security itself is quite simple, asking that outputs of the function look random given some “leakage,” even if the adversary knows the key, as long as the leakage does not permit the adversary to compute the inputs.
Overcoming weak expectations
, 2012
Abstract

Cited by 9 (2 self)
Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of (min-)entropy. From a formal point of view, such results require upper-bounding the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper-bounds such a ‘weak expectation’ by two terms, the first of which is independent of f, while the second depends only on the ‘variance’ of f under the uniform distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality, we obtain some ‘unexpected’ results, in several cases noticeably simplifying or improving prior techniques for the same problems. Examples include non-malleable extractors, leakage-resilient symmetric encryption, an alternative to the dense model theorem, seed-dependent condensers, and improved entropy loss for the leftover hash lemma.
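A bound of the shape described (first term independent of f, second depending on the variance of f under the uniform distribution) can be written as follows; treat this as an illustrative form only, with the exact conditions deferred to the paper. For X over {0,1}ⁿ with min-entropy at least n − d and Uₙ uniform:

```latex
\mathbb{E}[f(X)] \;\le\; \mathbb{E}[f(U_n)] \;+\; \sqrt{\,2^{d}\,\mathrm{Var}[f(U_n)]\,}
```

The first term is what f does on a truly uniform input, and the penalty term grows only with the entropy deficiency d, not with any property of the particular weak source.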
Confused Johnny: When Automatic Encryption Leads to Confusion and Mistakes
Abstract

Cited by 4 (1 self)
A common approach to designing usable security is to hide as many security details as possible from the user, to reduce the amount of information and actions a user must encounter. This paper gives an overview of Pwm (Private Webmail), our secure webmail system that uses security overlays to integrate tightly with existing webmail services like Gmail. Pwm’s security is mostly transparent, including automatic key management and automatic encryption. We describe a series of Pwm user studies indicating that while nearly all users can use the system without any prior training, the security details are so transparent that a small percentage of users mistakenly sent out unencrypted messages and some users were unsure whether they should trust Pwm. We then conducted user studies with an alternative prototype to Pwm that uses manual encryption. Surprisingly, users were accepting of the extra steps of cutting and pasting ciphertext themselves. They avoided mistakes and had more trust in the system with manual encryption. Our results suggest that designers may want to reconsider manual encryption as a way to reduce transparency and foster greater trust.
ANOTHER LOOK AT HMAC
Abstract

Cited by 4 (2 self)
HMAC is the most widely deployed cryptographic-hash-function-based message authentication code. First, we describe a security issue that arises because of inconsistencies in the standards and the published literature regarding key length. We prove a separation result between two versions of HMAC, which we denote HMAC_std and HMAC_Bel, the former being the real-world version standardized by Bellare et al. in 1997 and the latter being the version described in Bellare’s proof of security in his Crypto 2006 paper. Second, we describe how HMAC_NIST (the FIPS version standardized by NIST), while provably secure, succumbs to a practical attack in the multi-user setting. Third, we describe a fundamental defect, from a practice-oriented standpoint, in Bellare’s 2006 security result for HMAC, and show that because of this defect his proof gives a security guarantee that is of little value in practice. We give a new proof of NMAC security that gives a stronger result for NMAC and HMAC – and solves an “interesting open problem” from Bellare’s Crypto 2006 paper – and discuss its limitations.
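One concrete facet of the key-length question is easy to observe in any RFC 2104 implementation: a key longer than the hash's block size is first hashed down, so two distinct keys can yield identical MACs. A short demonstration with Python's standard hmac module (our own example, not taken from the paper):

```python
import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

msg = b"message"
long_key = b"k" * 100  # longer than SHA-256's 64-byte block size

# RFC 2104: an over-long key is replaced by its hash before padding,
# so the long key and its 32-byte digest produce the same MAC.
assert hmac_sha256(long_key, msg) == hmac_sha256(hashlib.sha256(long_key).digest(), msg)

# A short key is used as-is, so the same substitution changes the MAC.
short_key = b"k"
assert hmac_sha256(short_key, msg) != hmac_sha256(hashlib.sha256(short_key).digest(), msg)
```

This key-collapsing behavior is standardized and interoperable, but it is exactly the kind of discrepancy between the deployed construction and the construction analyzed in proofs that the paper's separation result is about.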
Computational extractors and pseudorandomness
 In Theory of Cryptography
, 2012
Abstract

Cited by 3 (0 self)
Computational extractors are efficient procedures that map a source of sufficiently high min-entropy to an output that is computationally indistinguishable from uniform. By relaxing the statistical closeness property of traditional randomness extractors one hopes to improve the efficiency and entropy parameters of these extractors, while keeping their utility for cryptographic applications. In this work we investigate computational extractors and consider questions of existence and inherent complexity from the theoretical and practical angles, with particular focus on the relationship to pseudorandomness.
An obvious way to build a computational extractor is via the “extract-then-PRG” method: apply a statistical extractor and use its output to seed a PRG. This approach carries with it the entropy cost inherent to implementing statistical extractors, namely, the source entropy needs to be substantially higher than the PRG’s seed length. It also requires a PRG and thus relies on one-way functions. We study the necessity of one-way functions in the construction of computational extractors and determine matching lower and upper bounds on the “black-box efficiency” of generic constructions of computational extractors that use a one-way permutation as an oracle. Under this efficiency measure we prove a direct correspondence between the complexity of computational extractors and that of pseudorandom generators, showing the optimality of the extract-then-PRG approach for generic constructions of computational extractors and confirming the intuition that to build a computational extractor via a PRG one needs to make up for the entropy gap intrinsic to statistical extractors. On the other hand, we show that with stronger cryptographic primitives one can have more entropy- and computationally-efficient constructions. In particular, we show a construction of a very practical computational extractor from any weak PRF without resorting to statistical extractors.
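The “extract-then-PRG” method itself is simple to sketch. In the toy version below, a multiply-shift universal hash plays the statistical extractor (justified by the leftover hash lemma) and SHAKE128 is a stand-in for a PRG; all names and parameters are illustrative, not from the paper:

```python
import hashlib
import secrets

N = 256      # length of the weak source, in bits
M_OUT = 128  # extracted seed length; needs source min-entropy well above this

def extract(a, x):
    # Multiply-shift universal hash with public seed a: by the leftover hash
    # lemma the output is statistically close to uniform when x has enough
    # min-entropy above M_OUT.
    return ((a * x) % (1 << N)) >> (N - M_OUT)

def prg_expand(seed, nbytes):
    # SHAKE128 standing in for a pseudorandom generator.
    return hashlib.shake_128(seed.to_bytes(M_OUT // 8, "big")).digest(nbytes)

def computational_extractor(x, a, nbytes):
    # extract-then-PRG: condense x to a short near-uniform seed, then expand.
    return prg_expand(extract(a, x), nbytes)

# The extractor seed a is public randomness, chosen once.
a = secrets.randbits(N) | 1
```

The entropy gap the abstract refers to is visible in the parameters: the source must carry noticeably more than M_OUT bits of min-entropy for the extraction step to be sound, even though the final output can be stretched arbitrarily long.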
Computational Fuzzy Extractors
, 2013
Abstract

Cited by 3 (1 self)
Fuzzy extractors derive strong keys from noisy sources. Their security is defined information-theoretically, which limits the length of the derived key, sometimes making it too short to be useful. We ask whether it is possible to obtain longer keys by considering computational security, and show the following.
• Negative Result: Noise tolerance in fuzzy extractors is usually achieved using an information reconciliation component called a “secure sketch.” The security of this component, which directly affects the length of the resulting key, is subject to lower bounds from coding theory. We show that, even when defined computationally, secure sketches are still subject to lower bounds from coding theory. Specifically, we consider two computational relaxations of the information-theoretic security requirement of secure sketches, using conditional HILL entropy and unpredictability entropy. For both cases we show that computational secure sketches cannot outperform the best information-theoretic secure sketches in the case of high-entropy Hamming metric sources.
• Positive Result: We show that the negative result can be overcome by analyzing computational fuzzy extractors directly. Namely, we show how to build a computational fuzzy extractor whose output key length equals the entropy of the source (this is impossible in the information-theoretic setting). Our construction is based on the hardness of the Learning with Errors (LWE) problem, and is secure when the noisy source is uniform or symbol-fixing (that is, each dimension is either uniform or fixed). As part of the security proof, we show a result of independent interest, namely that the decision version of LWE is secure even when a small number of dimensions has no error.
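The secure-sketch component discussed in the negative result can be illustrated with the standard code-offset construction, here instantiated with a simple repetition code (a minimal information-theoretic sketch of our own; the paper's LWE-based positive construction is not shown):

```python
import secrets

N = 16  # message blocks
R = 5   # repetition factor: majority decoding corrects < R/2 flips per block

def encode(msg_bits):  # repetition-code encoder, length N -> N*R
    return [b for b in msg_bits for _ in range(R)]

def decode(codeword):  # majority decoder, length N*R -> N
    return [int(sum(codeword[i * R:(i + 1) * R]) > R // 2) for i in range(N)]

def sketch(w):
    # Code-offset construction: publish w XOR c for a random codeword c.
    c = encode([secrets.randbits(1) for _ in range(N)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, ss):
    # Shifting the noisy reading by the sketch turns reading errors into
    # codeword errors, which decoding removes; shifting back recovers w.
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, ss)]))
    return [ci ^ si for ci, si in zip(c, ss)]
```

The entropy the sketch leaks about w is governed by the redundancy of the underlying code, which is precisely the coding-theoretic bound the abstract says survives even under the computational relaxations.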