Biometric template security
 EURASIP
, 2008
Abstract
Cited by 132 (11 self)
Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper-proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security, which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intra-user variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.
Lossy Trapdoor Functions and Their Applications
, 2007
Abstract
Cited by 126 (21 self)
We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number-theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of lattice problems. Using lossy TDFs, we develop a new approach for constructing several important cryptographic primitives, including (injective) trapdoor functions, collision-resistant hash functions, oblivious transfer, and chosen-ciphertext-secure cryptosystems. All of the constructions are simple, efficient, and black-box. These results resolve some long-standing open problems in cryptography. They give the first known injective trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on the worst-case complexity of lattice problems.
On the Foundations of Quantitative Information Flow
Abstract
Cited by 119 (10 self)
There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate “small” leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees; the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk) which measures uncertainty using Rényi's min-entropy, rather than Shannon entropy.
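The abstract's central counterexample, a random variable with large Shannon entropy that is nevertheless easy to guess, can be checked numerically. The sketch below is illustrative only; the "spiked" distribution is our own choice, not one from the paper: one outcome has probability 1/2 and the remaining mass is spread over 2^64 equally likely values.

```python
import math

def spiked_stats(n):
    """One outcome w.p. 1/2; remaining mass over n equally likely values."""
    p_top = 0.5
    p_rest = 0.5 / n
    shannon = -p_top * math.log2(p_top) - n * p_rest * math.log2(p_rest)
    vulnerability = p_top              # best single-guess success probability
    min_entropy = -math.log2(p_top)    # 1 bit, no matter how large n is
    return shannon, vulnerability, min_entropy

H, V, Hinf = spiked_stats(2 ** 64)
print(f"Shannon entropy: {H:.1f} bits")  # 33.0 bits: looks strong
print(f"vulnerability:   {V}")           # 0.5: guessed half the time
print(f"min-entropy:     {Hinf} bits")   # 1.0 bit: actually weak
```

By the Shannon measure the secret appears to hide 33 bits, yet a one-try attacker succeeds half the time; the min-entropy of 1 bit reflects that directly, which is the paper's point.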
FPGA intrinsic PUFs and their use for IP protection
, 2007
Abstract
Cited by 116 (9 self)
In recent years, IP protection of FPGA hardware designs has become a requirement for many IP vendors. In [34], Simpson and Schaumont proposed a fundamentally different approach to IP protection on FPGAs based on the use of Physical Unclonable Functions (PUFs). Their work only assumes the existence of a PUF on the FPGAs without actually proposing a PUF construction. In this paper, we propose new protocols for the IP protection problem on FPGAs and provide the first construction of a PUF intrinsic to current FPGAs, based on the SRAM memory randomness present on these devices. We analyze the statistical properties of SRAM-based PUFs and investigate the trade-offs that can be made when implementing a fuzzy extractor.
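The fuzzy-extractor step the abstract mentions can be illustrated with the standard code-offset construction. This is a toy sketch, not the paper's actual design: a 5x repetition code stands in for the stronger error-correcting codes a real SRAM PUF deployment would need, and the SRAM power-up response is simulated with random bits.

```python
import secrets

def rep_encode(bits, r=5):
    """Repetition-code encode: repeat each key bit r times."""
    return [b for b in bits for _ in range(r)]

def rep_decode(codeword, r=5):
    """Majority-vote decode each block of r bits."""
    return [int(sum(codeword[i:i + r]) > r // 2)
            for i in range(0, len(codeword), r)]

def enroll(sram_bits, key, r=5):
    """Code-offset helper data: W = SRAM response XOR encode(key)."""
    return [s ^ c for s, c in zip(sram_bits, rep_encode(key, r))]

def reproduce(sram_bits_noisy, helper, r=5):
    """Recover the key from a noisy re-read plus the helper data."""
    return rep_decode([s ^ w for s, w in zip(sram_bits_noisy, helper)], r)

# Toy run: 8-bit key, 5x repetition, SRAM power-up response of 40 cells.
key = [secrets.randbelow(2) for _ in range(8)]
sram = [secrets.randbelow(2) for _ in range(40)]
helper = enroll(sram, key)

# A re-read flips a couple of cells (noise); majority vote still recovers
# the key because each 5-bit block sees at most one flip here.
noisy = list(sram)
for i in (3, 17):
    noisy[i] ^= 1
print("key recovered:", reproduce(noisy, helper) == key)
```

The helper data W can be stored publicly: without the (noisy) SRAM response it reveals nothing useful about the key under the usual fuzzy-extractor assumptions.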
Simultaneous hardcore bits and cryptography against memory attacks
 IN TCC
, 2009
Abstract
Cited by 116 (11 self)
This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES. We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008) are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or more generally, can compute an arbitrary function of the secret key of bounded output length. This is done without increasing the size of the secret key, and without introducing any
Combining crypto with biometrics effectively
 IEEE Trans. on Computers
, 2006
Abstract
Cited by 112 (3 self)
We propose the first practical and secure way to integrate the iris biometric into cryptographic applications. A repeatable binary string, which we call a biometric key, is generated reliably from genuine iris codes. A well-known difficulty has been how to cope with the 10 to 20 percent of error bits within an iris code and derive an error-free key. To solve this problem, we carefully studied the error patterns within iris codes and devised a two-layer error-correction technique that combines Hadamard and Reed-Solomon codes. The key is generated from a subject's iris image with the aid of auxiliary error-correction data, which do not reveal the key and can be saved in a tamper-resistant token, such as a smart card. The reproduction of the key depends on two factors: the iris biometric and the token. The attacker has to procure both of them to compromise the key. We evaluated our technique using iris samples from 70 different eyes, with 10 samples from each eye. We found that an error-free key can be reproduced reliably from genuine iris codes with a 99.5 percent success rate. We can generate up to 140 bits of biometric key, more than enough for 128-bit AES. The extraction of a repeatable binary string from biometrics opens new possible applications, where a strong binding is required between a person and cryptographic operations. For example, it is possible to identify individuals without maintaining a central database of biometric templates, to which privacy objections might be raised.
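Of the two error-correction layers the abstract describes, the Hadamard (inner) layer is small enough to sketch. The toy below is our own illustration, not the authors' implementation: it encodes a k-bit block into 2^k bits and decodes by brute-force nearest-codeword search, correcting up to 2^(k-2) - 1 flipped bits, a tolerance in line with the 10 to 20 percent iris-code error rate the abstract cites.

```python
from itertools import product

def hadamard_encode(msg_bits):
    """Hadamard code: codeword[x] = <msg, x> mod 2 over all x in {0,1}^k."""
    k = len(msg_bits)
    return [sum(m & x for m, x in zip(msg_bits, xs)) % 2
            for xs in product((0, 1), repeat=k)]

def hadamard_decode(received):
    """Brute-force nearest-codeword decoding (fine for small k)."""
    k = len(received).bit_length() - 1
    best, best_dist = None, len(received) + 1
    for cand in product((0, 1), repeat=k):
        cw = hadamard_encode(list(cand))
        d = sum(a != b for a, b in zip(cw, received))
        if d < best_dist:
            best, best_dist = list(cand), d
    return best

msg = [1, 0, 1, 1]            # k = 4 -> 16-bit codeword, min distance 8
cw = hadamard_encode(msg)
noisy = list(cw)
for i in (2, 9, 14):          # flip 3 of 16 bits (~19% error rate)
    noisy[i] ^= 1
print(hadamard_decode(noisy) == msg)   # prints True
```

In the paper's construction this inner layer handles the dense random bit errors, while the outer Reed-Solomon layer corrects block-level failures; the sketch shows only the inner decoding step.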
Secure multiparty computation of approximations
, 2001
Abstract
Cited by 108 (25 self)
Approximation algorithms can sometimes provide efficient solutions when no efficient exact computation is known. In particular, approximations are often useful in a distributed setting where the inputs are held by different parties and may be extremely large. Furthermore, for some applications, the parties want to compute a function of their inputs securely, without revealing more information than necessary. In this work we study the question of simultaneously addressing the above efficiency and security concerns via what we call secure approximations. We start by extending standard definitions of secure (exact) computation to the setting of secure approximations. Our definitions guarantee that no additional information is revealed by the approximation beyond what follows from the output of the function being approximated. We then study the complexity of specific secure approximation problems. In particular, we obtain a sublinear-communication protocol for securely approximating the Hamming distance and a polynomial-time protocol for securely approximating the permanent and related #P-hard problems.
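The sublinear-communication result rests on the fact that Hamming distance can be estimated well from a small random sample of coordinates. The sketch below shows only that (insecure) approximation step, with no cryptographic protection; the sampling scheme is our own illustration, not the paper's protocol.

```python
import random

def sampled_hamming(a, b, k, rng):
    """Estimate the Hamming distance from k randomly sampled coordinates."""
    n = len(a)
    idx = [rng.randrange(n) for _ in range(k)]   # sample with replacement
    hits = sum(a[i] != b[i] for i in idx)
    return hits * n / k                          # scale up to full length

rng = random.Random(0)
n = 100_000
a = [rng.randrange(2) for _ in range(n)]
b = [x ^ (rng.random() < 0.2) for x in a]        # ~20% of bits differ

exact = sum(x != y for x, y in zip(a, b))
est = sampled_hamming(a, b, k=2_000, rng=rng)
print(exact, est)   # the estimate lands near the exact distance
```

Only k coordinates (here 2,000 out of 100,000) are inspected, which is what makes sublinear communication plausible; the secure protocol's job is to perform this sampling without either party learning the other's input.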
Efficient lattice (H)IBE in the standard model
 In EUROCRYPT 2010, LNCS
, 2010
Abstract
Cited by 98 (15 self)
We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively secure IBE and a Hierarchical IBE.
Reusable cryptographic fuzzy extractors
 ACM CCS 2004, ACM
, 2004
Abstract
Cited by 96 (2 self)
We show that a number of recent definitions and constructions of fuzzy extractors are not adequate for multiple uses of the same fuzzy secret, a major shortcoming in the case of biometric applications. We propose two particularly stringent security models that specifically address the case of fuzzy secret reuse, respectively from an outsider and an insider perspective, in what we call a chosen perturbation attack. We characterize the conditions that fuzzy extractors need to satisfy to be secure, and present generic constructions from ordinary building blocks. As an illustration, we demonstrate how to use a biometric secret in a remote error-tolerant authentication protocol that does not require any storage on the client's side.
PublicKey Cryptosystems Resilient to Key Leakage
Abstract
Cited by 89 (6 self)
Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the “cold boot attacks” of Halderman et al. (USENIX Security '08), Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of