Results 1–10 of 11
Holographic Algorithms: From Art to Science
Electronic Colloquium on Computational Complexity Report, 2007
Cited by 19 (10 self)
Abstract: We develop the theory of holographic algorithms. We give characterizations of the algebraic varieties of realizable symmetric generators and recognizers on the basis manifold, and a polynomial-time decision algorithm for the simultaneous realizability problem. Using this general machinery we give unexpected holographic algorithms for some counting problems, modulo certain Mersenne-type integers; these counting problems are #P-complete without the moduli. Going beyond symmetric signatures, we define d-admissibility and d-realizability for general signatures, give a characterization of 2-admissibility, and give some general constructions of admissible and realizable families.
On the Theory of Matchgate Computations
Submitted. Also available as an Electronic Colloquium on Computational Complexity Report, 2007
Cited by 14 (6 self)
Abstract: Valiant has proposed a new theory of algorithmic computation based on perfect matchings and the Pfaffian. We study the properties of matchgates, the basic building blocks of this new theory, and give a set of algebraic identities which completely characterize these objects in terms of the Grassmann-Plücker identities. In the important case of 4 × 4 matchgate matrices, which were used in Valiant's classical simulation of a fragment of quantum computation, we further realize a group action on the character matrix of a matchgate and relate this information to its compound matrix. We then use Jacobi's theorem to prove that in this case the invertible matchgate matrices form a multiplicative group. These results are useful in establishing limitations on the ultimate capabilities of Valiant's theory of matchgate computations and his closely related theory of holographic algorithms.
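As a concrete illustration (not from the paper): the Pfaffian the abstract refers to has, for a 4 × 4 skew-symmetric matrix, an explicit three-term expansion, and Cayley's identity Pf(A)² = det(A) ties it to the determinant. The function names `pfaffian_4x4` and `det` below are illustrative, a minimal sketch only.

```python
def pfaffian_4x4(a):
    """Pfaffian of a 4x4 skew-symmetric matrix via the explicit
    three-term formula: Pf(A) = a01*a23 - a02*a13 + a03*a12."""
    return a[0][1] * a[2][3] - a[0][2] * a[1][3] + a[0][3] * a[1][2]

def det(a):
    """Determinant by cofactor expansion along the first row
    (fine for tiny matrices)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

# A skew-symmetric test matrix (A^T = -A).
A = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]

pf = pfaffian_4x4(A)       # 1*6 - 2*5 + 3*4 = 8
assert pf ** 2 == det(A)   # Cayley's identity: Pf(A)^2 = det(A)
```

The identity checked in the last line is what makes the Pfaffian efficiently computable and underlies its role as a computational primitive in matchgate theory.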
Valiant’s Holant Theorem and Matchgate Tensors (Extended Abstract)
In Proceedings of TAMC 2006, Lecture Notes in Computer Science
Cited by 13 (7 self)
Abstract: We propose matchgate tensors as a natural and proper language in which to develop Valiant's new theory of Holographic Algorithms. We give a treatment of the central theorem of this theory, the Holant Theorem, in terms of matchgate tensors. Some generalizations are presented.

Background: In a remarkable 2004 paper, Valiant [9] proposed a completely new theory of Holographic Algorithms, or Holographic Reductions. In this framework, Valiant developed a most novel methodology for designing polynomial-time (indeed NC^2) algorithms: one designs a custom-made process capable of carrying out a seemingly exponential computation with exponentially many cancellations, so that the computation can actually be done in polynomial time. The simplest analogy is perhaps with Strassen's matrix multiplication algorithm [5]. There, the algorithm computes some extraneous quantities in terms of the submatrices which do not appear directly in the answer and are canceled later; their purpose is to speed up the computation by introducing cancellations. In the several cases where such clever algorithms had been found, they tended to work in a linear-algebraic setting; in particular, the computation of the determinant figures prominently [8, 2, 6]. Valiant's new theory manages to create a process of custom-made cancellation which gives polynomial-time algorithms for combinatorial problems that do not appear to be linear-algebraic. In terms of its broader impact on complexity theory, one can view Valiant's new theory as another algorithm-design paradigm which pushes back the frontier of what is solvable in polynomial time. Admittedly, at this early stage it is still premature to say what drastic consequences it might have on the landscape of the big questions of complexity theory, such as P vs. NP. But the new theory has already been used by Valiant to devise polynomial-time algorithms for a number of problems for which no polynomial-time algorithms were known before.
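The Strassen analogy in the abstract can be made concrete. Below is a minimal sketch (not from the paper) of Strassen's seven-product scheme for 2 × 2 matrices: each product m1..m7 carries extraneous cross terms that cancel in the final sums, which is exactly the kind of engineered cancellation the analogy points to.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of 8, following Strassen's scheme. Each product m_i
    contains extraneous cross terms that cancel in the sums below."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # The cross terms in m1, m5, m7 (etc.) cancel pairwise here.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the seven-multiplication recurrence gives the O(n^log2(7)) bound; the point of the sketch is only the cancellation pattern.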
Bases Collapse in Holographic Algorithms
Electronic Colloquium on Computational Complexity Report, 2007
Cited by 7 (2 self)
Abstract: Holographic algorithms are a novel approach to designing polynomial-time computations using linear superpositions. Most holographic algorithms are designed with basis vectors of dimension 2. Recently Valiant showed that a basis of dimension 4 can be used to solve in P an interesting (restrictive SAT) counting problem mod 7. Without the modulus this problem is #P-complete, and counting mod 2 is NP-hard. We give a general collapse theorem from bases of dimension 4 to dimension 2 in the holographic algorithms framework. We also define an extension of holographic algorithms that allows more general support vectors. Finally, we give a Basis Folding Theorem showing that in a natural setting the support vectors can be simulated by bases of dimension 2.
Minimal Complete Primitives for Secure Multi-Party Computation
2001
Cited by 6 (2 self)
Abstract: The study of the minimal cryptographic primitives needed to implement secure computation among two or more players is a fundamental question in cryptography. The issue of complete primitives for the case of two players has been thoroughly studied. However, in the multi-party setting, when there are n > 2 players of whom t are corrupted, the question of what the simplest complete primitives are remained open for t ≥ n/3. We consider this question and introduce complete primitives of minimal cardinality for secure multi-party computation. The cardinality issue (the number of players accessing the primitive) is essential in settings where the primitives are implemented by some other means: the simpler the primitive, the easier it is to realize. We show that our primitives are complete and of the minimal cardinality possible.
TwoParty Computing with Encrypted Data
ASIACRYPT '07, 2007
Cited by 6 (1 self)
Abstract: We consider a new model for online secure computation on encrypted inputs in the presence of malicious adversaries. The inputs are independent of the circuit computed, in the sense that they can be contributed by separate third parties. The model attempts to emulate as closely as possible the model of "Computing with Encrypted Data" put forth in 1978 by Rivest, Adleman and Dertouzos, which involved a single online message. In our model, two parties publish their public keys in an offline stage, after which any party (i.e., either of the two, or any third party) can publish encryptions of their local inputs. Then, in an online stage, given any common input circuit C and its set of inputs from among the published encryptions, the first party sends a single message to the second party, who completes the computation.
On Valiant’s holographic algorithms
Cited by 2 (2 self)
Abstract: Leslie Valiant recently proposed a theory of holographic algorithms. These novel algorithms achieve exponential speedups for certain computational problems compared to naive algorithms for the same problems. The methodology uses Pfaffians and (planar) perfect matchings as basic computational primitives, and attempts to create exponential cancellations in computation. In this article we survey this new theory of matchgate computations and holographic algorithms.
Constrained Codes as Networks of Relations
Cited by 1 (1 self)
Abstract: We address the well-known problem of determining the capacity of constrained coding systems. While the one-dimensional case is well understood, to the extent that there are techniques for rigorously deriving the exact capacity, computing the exact capacity of a two-dimensional constrained coding system remains an elusive research challenge. The only known exception in the two-dimensional case is an exact (though not rigorous) solution for the (1, ∞)-RLL system on the hexagonal lattice. Furthermore, only exponential-time algorithms are known for the related problem of counting the exact number of constrained two-dimensional information arrays. We present the first known rigorous technique that yields the exact capacity of a two-dimensional constrained coding system. In addition, we devise an efficient (polynomial-time) algorithm for counting the exact number of constrained arrays of any given size. Our approach composes a number of ideas and techniques: describing the capacity problem as the solution to a counting problem in networks of relations, graph-theoretic tools originally developed in the field of statistical mechanics, techniques for efficiently simulating quantum circuits, and ideas from the theory of the spectral distribution of Toeplitz matrices. Using our technique we derive a closed-form solution for the capacity of the Path-Cover constraint in a two-dimensional triangular array (the resulting calculated capacity is 0.72399217...). Path-Cover is a generalization of the well-known one-dimensional (0, 1)-RLL constraint, whose capacity is known to be 0.69424...
Index Terms — capacity of constrained systems, capacity of two-dimensional constrained systems, holographic reductions, networks of relations, FKT method, spectral distribution of Toeplitz matrices
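The one-dimensional capacity 0.69424... quoted in the abstract can be recovered by the standard transfer-matrix method (an illustration, not the paper's technique): the (0, 1)-RLL constraint forbids two consecutive 0s, so admissible strings are counted by a Fibonacci recurrence, and the capacity is log2 of the largest eigenvalue of the 2 × 2 transition matrix, i.e., log2 of the golden ratio. The helper name `count_rll` is hypothetical.

```python
import math

def count_rll(n):
    """Number of (0,1)-RLL binary strings of length n (no two
    consecutive 0s), via the transfer matrix T = [[1, 1], [1, 0]]
    whose states record whether the last bit was 1 or 0."""
    ends_1, ends_0 = 1, 1  # length-1 strings: "1" and "0"
    for _ in range(n - 1):
        # A 1 may follow anything; a 0 may only follow a 1.
        ends_1, ends_0 = ends_1 + ends_0, ends_1
    return ends_1 + ends_0

capacity = math.log2(count_rll(40)) / 40    # finite-n estimate
exact = math.log2((1 + math.sqrt(5)) / 2)   # log2(golden ratio) = 0.69424...
assert abs(capacity - exact) < 0.01
```

The finite-n estimate converges to log2((1 + √5)/2) as n grows; the hard part addressed by the paper is that no such closed-form eigenvalue computation was known in two dimensions.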
Constant-Round Private Function Evaluation with Linear Complexity
Cited by 1 (0 self)
Abstract: We consider the problem of private function evaluation (PFE) in the two-party setting. Here, informally, one party holds an input x while the other holds a circuit describing a function f; the goal is for one (or both) of the parties to learn f(x) while revealing nothing more to either party. In contrast to the usual setting of secure computation, where the function being computed is known to both parties, PFE is useful in settings where the function (i.e., the algorithm) itself must remain secret, e.g., because it is proprietary or classified. It is known that PFE can be reduced to standard secure computation by having the parties evaluate a universal circuit, and this is the approach taken in most prior work. Using a universal circuit, however, introduces additional overhead and results in a more complex implementation. We show a completely new technique for PFE that avoids universal circuits and results in constant-round protocols with communication/computational complexity linear in the size of the circuit computing f. This gives the first constant-round protocol for PFE with linear complexity (without using fully homomorphic encryption), even when restricted to semi-honest adversaries.
Efficient Interconnection Schemes for VLSI and Parallel Computation
1989
Abstract: This thesis is primarily concerned with two problems of interconnecting components in VLSI technologies. In the first case, the goal is to construct efficient interconnection networks for general-purpose parallel computers. The second problem is a more specialized problem in the design of VLSI chips, namely multilayer channel routing. In addition, a final part of this thesis provides lower bounds on the area required for VLSI implementations of finite-state machines. This thesis shows that networks based on Leiserson's fat-tree architecture are nearly as good as any network built in a comparable amount of physical space. It shows that these "universal" networks can efficiently simulate competing networks by means of an appropriate correspondence between network components and efficient algorithms for routing messages on the universal network. In particular, a universal network of area A can simulate competing networks with O(lg^3 A) slowdown (in bit-times), using a very simple rando...