Results 1 - 10 of 1,347
Quantum Error Correction Via Codes Over GF(4)
1997
"... The problem of finding quantum-error-correcting codes is transformed into the problem of finding additive codes over the field GF(4) which are self-orthogonal with respect to a certain trace inner product. Many new codes and new bounds are presented, as well as a table of upper and lower bounds on s ..."
Abstract - Cited by 304 (18 self)
The problem of finding quantum-error-correcting codes is transformed into the problem of finding additive codes over the field GF(4) which are self-orthogonal with respect to a certain trace inner product. Many new codes and new bounds are presented, as well as a table of upper and lower bounds on such codes of length up to 30 qubits.
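The trace inner product in question is concrete enough to compute directly: for u, v in GF(4)^n it is the GF(2) value sum_i Tr(u_i conj(v_i)), with conjugation x -> x^2 and Tr(x) = x + x^2. The sketch below (an illustration written for this listing, not code from the paper) checks a generating set for self-orthogonality; for an additive code it suffices to check all pairs of generators. The example generators are the four cyclic shifts of (1, w, w, 1, 0), which under one common Pauli-to-GF(4) convention correspond to the five-qubit code's stabilizer.

    # GF(4) = {0, 1, w, w^2} encoded as 0, 1, 2, 3 with w^2 = w + 1.
    # Addition is XOR of the codes; multiplication, conjugation (x -> x^2)
    # and the trace Tr(x) = x + x^2 are tabulated.
    MUL = [[0, 0, 0, 0],
           [0, 1, 2, 3],
           [0, 2, 3, 1],
           [0, 3, 1, 2]]
    CONJ = [0, 1, 3, 2]   # x -> x^2
    TR = [0, 0, 1, 1]     # GF(4) -> GF(2)

    def trace_ip(u, v):
        """Trace inner product sum_i Tr(u_i * conj(v_i)), a value in GF(2)."""
        return sum(TR[MUL[a][CONJ[b]]] for a, b in zip(u, v)) % 2

    def is_trace_self_orthogonal(gens):
        """An additive (GF(2)-linear) code is self-orthogonal under the
        trace inner product iff every pair of generators is orthogonal."""
        return all(trace_ip(g, h) == 0 for g in gens for h in gens)

    # Four cyclic shifts of (1, w, w, 1, 0): a self-orthogonal code of length 5.
    gens = [[1, 2, 2, 1, 0],
            [0, 1, 2, 2, 1],
            [1, 0, 1, 2, 2],
            [2, 1, 0, 1, 2]]
    assert is_trace_self_orthogonal(gens)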
A Gröbner free alternative for polynomial system solving
- Journal of Complexity, 2001
"... Given a system of polynomial equations and inequations with coefficients in the field of rational numbers, we show how to compute a geometric resolution of the set of common roots of the system over the field of complex numbers. A geometric resolution consists of a primitive element of the algebraic ..."
Abstract - Cited by 108 (19 self)
Given a system of polynomial equations and inequations with coefficients in the field of rational numbers, we show how to compute a geometric resolution of the set of common roots of the system over the field of complex numbers. A geometric resolution consists of a primitive element of the algebraic extension defined by the set of roots, its minimal polynomial, and the parametrizations of the coordinates. Such a representation of the solutions has a long history going back to Leopold Kronecker and has been revisited many times in computer algebra. We introduce a new generation of probabilistic algorithms in which all computations use only univariate or bivariate polynomials. We give a new codification of the set of solutions of a positive-dimensional algebraic variety, relying on a new global version of Newton's iterator. Roughly speaking, the complexity of our algorithm is polynomial in a certain degree of the system and in its height, and linear in its complexity of evaluation.
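A toy illustration of what a geometric resolution looks like for a zero-dimensional system: adjoin a separating linear form t = x + 2y as primitive element, and read off its minimal polynomial together with a parametrization of each coordinate in terms of t. The computation below uses a lex Gröbner basis purely for convenience, which is exactly the tool the paper's algorithm avoids; the system and the primitive element are made up for the example.

    import sympy as sp

    x, y, t = sp.symbols('x y t')

    # Toy zero-dimensional system over Q (four simple common roots).
    f1 = x**2 + y**2 - 5
    f2 = x*y - 2

    # t = x + 2*y takes distinct values on the four roots, so it is a
    # primitive (separating) element.  The reduced lex basis then has the
    # shape of a geometric resolution: q(t), x - p_x(t), y - p_y(t).
    G = sp.groebner([f1, f2, t - (x + 2*y)], x, y, t, order='lex')
    for g in G.exprs:
        print(g)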
Can Homomorphic Encryption be Practical?
"... Abstract. The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption ..."
Abstract - Cited by 82 (8 self)
The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. These concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room: can homomorphic encryption ever be efficient enough to be practical? Certainly, all known fully homomorphic encryption schemes seem to have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster and more compact than fully homomorphic encryption schemes. Second, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring-LWE) problem. The system is very efficient and has reasonably short ciphertexts. Our unoptimized implementation in Magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
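A toy, deliberately insecure sketch of the "somewhat homomorphic" idea in the Ring-LWE style: ciphertexts live over Z_q[x]/(x^n + 1), the noise is scaled by the plaintext modulus t, and adding ciphertexts adds plaintexts as long as the accumulated noise stays below q/2. The parameters are made up and only homomorphic addition is shown; this is not the Brakerski-Vaikuntanathan scheme implemented in the paper, which also supports multiplications.

    import numpy as np

    n, q, t = 16, 1 << 15, 16          # toy parameters, not secure

    def polymul(a, b):
        # multiply in Z_q[x]/(x^n + 1): full convolution, then fold x^n = -1
        c = np.convolve(a, b)
        r = c[:n].copy()
        r[:len(c) - n] -= c[n:]
        return r % q

    def small():                        # "noise" polynomial in {-1, 0, 1}
        return np.random.randint(-1, 2, n)

    def encrypt(m, s):
        a = np.random.randint(0, q, n)
        c = (polymul(a, s) + t * small() + m) % q
        return a, c

    def decrypt(ct, s):
        a, c = ct
        v = (c - polymul(a, s)) % q
        v = np.where(v > q // 2, v - q, v)   # center around 0
        return v % t                         # strip the noise term t*e

    s = small()
    m1, m2 = np.random.randint(0, t, n), np.random.randint(0, t, n)
    (a1, c1), (a2, c2) = encrypt(m1, s), encrypt(m2, s)
    ct_sum = ((a1 + a2) % q, (c1 + c2) % q)  # add ciphertexts componentwise
    assert np.array_equal(decrypt(ct_sum, s), (m1 + m2) % t)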
A new approach to the conjugacy problem in Garside groups
2008
"... The cycling operation endows the super summit set Sx of any element x of a Garside group G with the structure of a directed graph Γx. We establish that the subset Ux of Sx consisting of the circuits of Γx can be used instead of Sx for deciding conjugacy to x in G, yielding a faster and more practica ..."
Abstract - Cited by 64 (6 self)
The cycling operation endows the super summit set Sx of any element x of a Garside group G with the structure of a directed graph Γx. We establish that the subset Ux of Sx consisting of the circuits of Γx can be used instead of Sx for deciding conjugacy to x in G, yielding a faster and more practical solution to the conjugacy problem for Garside groups. Moreover, we present a probabilistic approach to the conjugacy search problem in Garside groups. The results are likely to have implications for the security of recently proposed cryptosystems based on the hardness of problems related to the conjugacy (search) problem in braid groups.
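The purely graph-theoretic step, cutting the vertex set down to the vertices lying on directed circuits, can be illustrated independently of any Garside-group machinery: they are exactly the vertices contained in a strongly connected component of size greater than one, or carrying a self-loop. A minimal sketch with a made-up graph standing in for Γx:

    import networkx as nx

    def circuit_vertices(g):
        """Vertices of a directed graph that lie on some directed cycle."""
        on_cycle = set()
        for scc in nx.strongly_connected_components(g):
            if len(scc) > 1:
                on_cycle |= scc
        on_cycle |= {v for v in g.nodes if g.has_edge(v, v)}
        return on_cycle

    # Made-up stand-in for the cycling graph: only {0, 1, 2} form a circuit.
    g = nx.DiGraph([(3, 0), (4, 1), (5, 4), (0, 1), (1, 2), (2, 0)])
    print(circuit_vertices(g))   # {0, 1, 2}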
Classical and modular approaches to exponential Diophantine equations I. Fibonacci and Lucas perfect powers
- Annals of Math
"... Abstract. This is the second in a series of papers where we combine the classical approach to exponential Diophantine equations (linear forms in logarithms, Thue equations, etc.) with a modular approach based on some of the ideas of the proof of Fermat’s Last Theorem. In this paper we use a general ..."
Abstract - Cited by 63 (15 self)
This is the second in a series of papers where we combine the classical approach to exponential Diophantine equations (linear forms in logarithms, Thue equations, etc.) with a modular approach based on some of the ideas of the proof of Fermat's Last Theorem. In this paper we use a general and powerful new lower bound for linear forms in three logarithms, together with a combination of classical, elementary and substantially improved modular methods, to solve completely the Lebesgue-Nagell equation x^2 + D = y^n (x, y integers, n ≥ 3) for D in the range 1 ≤ D ≤ 100.
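For orientation, a naive search over small x and n shows what the Lebesgue-Nagell equation asks. This is emphatically not the paper's method, whose point is precisely that ruling out all remaining solutions requires linear forms in logarithms and modular techniques; the search bounds below are arbitrary.

    from sympy import integer_nthroot

    def small_solutions(D, x_max=10_000, n_max=15):
        """Naive search for x^2 + D = y^n with y >= 2 and n >= 3,
        restricted to 0 <= x <= x_max and 3 <= n <= n_max."""
        sols = []
        for x in range(x_max + 1):
            m = x * x + D
            for n in range(3, n_max + 1):
                y, exact = integer_nthroot(m, n)
                if exact and y >= 2:
                    sols.append((x, y, n))
        return sols

    print(small_solutions(2))   # finds Fermat's classical (x, y, n) = (5, 3, 3)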
Computing the equidimensional decomposition of an algebraic closed set by means of lifting fibers
- J. Complexity, 2000
"... We present a new probabilistic method for solving systems of polynomial equations and inequations. Our algorithm computes the equidimensional decomposition of the Zariski closure of the solution set of such systems. Each equidimensional component is encoded by a generic fiber, that is a finite set o ..."
Abstract - Cited by 60 (2 self)
We present a new probabilistic method for solving systems of polynomial equations and inequations. Our algorithm computes the equidimensional decomposition of the Zariski closure of the solution set of such systems. Each equidimensional component is encoded by a generic fiber, that is, a finite set of points obtained from the intersection of the component with a generic transverse affine subspace. Our algorithm is incremental in the number of equations to be solved. Its complexity is mainly cubic in the maximum of the degrees of the solution sets of the intermediate systems, counting multiplicities. Our method is designed for coefficient fields of characteristic zero or large enough with respect to the number of solutions. If the base field is the field of rational numbers, the resolution is first performed modulo a random prime number after a random change of coordinates has been applied. We then search for coordinates with small integers and lift the solutions up to the rational numbers. Our implementation is available within our package Kronecker from version 0.166, which is written in the Magma computer algebra system.
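A toy picture of the "generic fiber" encoding: intersect a one-dimensional component with a generically chosen affine subspace of complementary dimension and keep the finitely many intersection points. The sketch below just calls sympy's solver on a made-up circle, which is nothing like the paper's modular computation and lifting, but it shows the object each component is reduced to.

    import sympy as sp

    x, y = sp.symbols('x y')

    # A one-dimensional component: the circle x^2 + y^2 = 1 in the plane.
    component = x**2 + y**2 - 1

    # A transverse affine subspace of complementary dimension: a line with
    # generically chosen coefficients.
    line = 2*x + 3*y - 1

    # The fiber: as many points as the degree of the component
    # (here 2, counted over the complex numbers).
    fiber = sp.solve([component, line], [x, y])
    print(len(fiber), fiber)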
Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups
"... Abstract. We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In p ..."
Abstract - Cited by 56 (0 self)
We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision Diffie-Hellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups. We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-Goh-Nissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai.
Efficient and generalized pairing computation on Abelian varieties
2008
"... In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the R-ate pairing. This pairing is a generalization of the Ate and Atei pairing, and also improves efficiency of the pairing computation. Using the R-ate pairing, the loop length in ..."
Abstract - Cited by 54 (3 self)
In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the R-ate pairing. This pairing is a generalization of the Ate and Ate_i pairings, and it also improves the efficiency of pairing computation. Using the R-ate pairing, the loop length in Miller's algorithm can be as small as log(r^(1/φ(k))) for some pairing-friendly elliptic curves which have not reached this lower bound; as a result, we obtain from 29% to 69% savings in overall cost compared to the Ate_i pairing. On supersingular hyperelliptic curves of genus 2, we show that this approach makes the loop length in Miller's algorithm shorter than that of the Ate pairing.
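A back-of-envelope helper for the loop-length bound quoted above: a Tate-style Miller loop runs over roughly log2(r) bits, while the bound the R-ate construction targets is log2(r^(1/φ(k))) = log2(r)/φ(k). The 256-bit group order and embedding degree k = 12 below are hypothetical, chosen only to make the ratio visible.

    from math import gcd

    def euler_phi(k):
        return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

    def miller_loop_bits(r_bits, k):
        """(Tate-style loop length, targeted lower bound), both in bits."""
        return r_bits, r_bits / euler_phi(k)

    # Hypothetical 256-bit r with embedding degree k = 12, so phi(k) = 4.
    print(miller_loop_bits(256, 12))   # (256, 64.0)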
The PHOTON Family of Lightweight Hash Functions
- CRYPTO, volume 6841 of LNCS, 2011
"... Abstract. RFID security is currently one of the major challenges cryptography has to face, often solved by protocols assuming that an on-tag hash function is available. In this article we present the PHOTON lightweight hash-function family, available in many different flavors and suitable for extrem ..."
Abstract - Cited by 52 (9 self)
RFID security is currently one of the major challenges cryptography has to face, often addressed by protocols that assume an on-tag hash function is available. In this article we present the PHOTON lightweight hash-function family, available in many different flavors and suitable for extremely constrained devices such as passive RFID tags. Our proposal uses a sponge-like construction as domain extension algorithm and an AES-like primitive as internal unkeyed permutation. This allows us to obtain the most compact hash function known so far (about 1120 GE for 64-bit collision resistance security), reaching areas very close to the theoretical optimum (derived from the minimal internal state memory size). Moreover, the speed achieved by PHOTON compares quite favorably to its competitors, mostly because, unlike previously proposed schemes, our proposal is very simple to analyze and one can derive tight AES-like bounds on the number of active S-boxes. This kind of AES-like primitive is usually not well suited for ultra-constrained environments, but we describe a new method for generating the column-mixing layer in a serial way, drastically lowering the required area. Finally, we slightly extend the sponge framework in order to offer interesting trade-offs between speed and preimage security for small messages, the classical use case in hardware.
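The sponge-style domain extension is simple enough to sketch generically: absorb message blocks into the rate portion of the state with a permutation call in between, then squeeze output blocks the same way. The permutation, sizes, and padding below are placeholders standing in for PHOTON's AES-like permutation and per-flavor parameters, not the real thing.

    import hashlib

    RATE, CAPACITY = 4, 28            # toy sizes in bytes, not PHOTON's
    STATE = RATE + CAPACITY

    def permutation(state):
        """Placeholder for the internal unkeyed AES-like permutation."""
        return bytearray(hashlib.sha256(bytes(state)).digest()[:STATE])

    def sponge_hash(msg, out_len=8):
        # pad with 0x80 then zeros up to a multiple of the rate
        msg = bytearray(msg) + b'\x80' + b'\x00' * (-(len(msg) + 1) % RATE)
        state = bytearray(STATE)
        # absorbing phase: XOR each block into the rate part, then permute
        for i in range(0, len(msg), RATE):
            for j in range(RATE):
                state[j] ^= msg[i + j]
            state = permutation(state)
        # squeezing phase: output rate-sized chunks, permuting in between
        out = bytearray()
        while len(out) < out_len:
            out += state[:RATE]
            state = permutation(state)
        return bytes(out[:out_len])

    print(sponge_hash(b"photon").hex())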