Results 1–10 of 50
Computational Complexity: A Modern Approach, 2009
Cited by 155 (2 self)
Abstract
Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor's algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP problem will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders. This book aims to describe such recent achievements of complexity theory in the context of more classical results. It is intended to serve both as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A. We have also provided a web site for this book at ...
Monotone Circuits for Matching Require Linear Depth
Cited by 76 (8 self)
Abstract
We prove that monotone circuits computing the perfect matching function on n-vertex graphs require Ω(n) depth. This implies an exponential gap between the depth of monotone and non-monotone circuits.
Analysis of the binary Euclidean algorithm
 Directions and Recent Results in Algorithms and Complexity, 1976
Cited by 29 (2 self)
Abstract
The binary Euclidean algorithm is a variant of the classical Euclidean algorithm. It avoids multiplications and divisions, except by powers of two, and so is potentially faster than the classical algorithm on a binary machine. We describe the binary algorithm and consider its average-case behaviour. In particular, we correct some errors in the literature, discuss some recent results of Vallée, and describe a numerical computation which supports a conjecture of Vallée.
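As a rough illustration of the algorithm this abstract describes, here is a generic binary GCD sketch in Python. It uses only shifts, parity tests, comparisons, and subtraction; it is a textbook-style sketch, not the specific version analysed in the paper:

```python
def binary_gcd(u: int, v: int) -> int:
    """Binary Euclidean algorithm: gcd of non-negative integers
    using only shifts and subtraction (no general division)."""
    if u == 0:
        return v
    if v == 0:
        return u
    # Factor out the common power of two.
    shift = 0
    while (u | v) & 1 == 0:
        u >>= 1
        v >>= 1
        shift += 1
    # Make u odd; gcd is unchanged by removing factors of 2 from one side.
    while u & 1 == 0:
        u >>= 1
    while v != 0:
        while v & 1 == 0:
            v >>= 1
        if u > v:
            u, v = v, u
        v -= u  # both were odd, so v - u is even
    return u << shift
```

For example, `binary_gcd(48, 18)` returns 6, matching the classical algorithm, while performing only shifts and subtractions along the way.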
User interface design with matrix algebra
 ACM Transactions on CHI, 2004
Cited by 23 (11 self)
Abstract
It is usually very hard, both for designers and users, to reason reliably about user interfaces. This article shows that ‘push button’ and ‘point and click’ user interfaces are algebraic structures. Users effectively do algebra when they interact, and therefore we can be precise about some important design issues and issues of usability. Matrix algebra, in particular, is useful for explicit calculation and for proof of various user interface properties. With matrix algebra, we are able to undertake with ease unusually thorough reviews of real user interfaces: this article examines a mobile phone, a handheld calculator and a digital multimeter as case studies, and draws general conclusions about the approach and its relevance to design.
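To illustrate the abstract's central claim, here is a minimal sketch of the matrix-algebra view of a push-button interface. The two-state device and the button names are invented for illustration and are not taken from the article's case studies:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Hypothetical two-state device: state 0 = off, state 1 = on.
# Each button is a 0/1 matrix B with B[j][i] = 1 iff pressing it
# in state i moves the device to state j.
ON     = [[0, 0], [1, 1]]   # always end in state 'on'
OFF    = [[1, 1], [0, 0]]   # always end in state 'off'
TOGGLE = [[0, 1], [1, 0]]   # swap the two states

# A press sequence is a matrix product (rightmost button pressed first),
# so usability properties become algebraic identities:
assert matmul(OFF, OFF) == OFF                      # 'off' is idempotent
assert matmul(TOGGLE, TOGGLE) == [[1, 0], [0, 1]]   # double toggle = identity

# Applying a sequence to a state (column vector as a 2x1 matrix):
start = [[1], [0]]                         # device is off
after = matmul(TOGGLE, matmul(ON, start))  # press 'on', then 'toggle'
assert after == [[1], [0]]                 # device ends up off again
```

The point of the encoding is that questions like "is this button idempotent?" or "does any press sequence reach state s?" reduce to explicit matrix calculations rather than informal inspection.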
Integral closure of ideals, rings, and modules
 London Mathematical Society Lecture Note Series 336, 2006
Parallel Linear Programming in Fixed Dimension Almost Surely in Constant Time, 1992
Cited by 17 (1 self)
Abstract
For any fixed dimension d, the linear programming problem with n inequality constraints can be solved on a probabilistic CRCW PRAM with O(n) processors almost surely in constant time. The algorithm always finds the correct solution. With nd/log² d processors, the probability that the algorithm will not finish within O(d² log² d) time tends to zero exponentially with n.
Fast modular composition in any characteristic, 2008
Cited by 17 (1 self)
Abstract
We give an algorithm for modular composition of degree-n univariate polynomials over a finite field F_q requiring n^(1+o(1)) log^(1+o(1)) q bit operations; this had earlier been achieved in characteristic n^(o(1)) by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree-n polynomials over F_q requiring (n^(1.5+o(1)) + n^(1+o(1)) log q) log^(1+o(1)) q bit operations, improving upon the methods of von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998). Our results also imply algorithms for irreducibility testing and computing minimal polynomials whose running times are best possible, up to lower-order terms. As in Umans (2008), we reduce modular composition to certain instances of multipoint evaluation of multivariate polynomials. We then give an algorithm that solves this problem optimally (up to lower-order terms), in arbitrary characteristic. The main idea is to lift to characteristic 0, apply a small number of rounds of multimodular reduction, and finish with a small number of multidimensional FFTs. The final evaluations are then reconstructed using the Chinese Remainder Theorem. As a bonus, we obtain a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithm uses techniques which are commonly employed in practice, so it may be competitive for real problem sizes. This contrasts with previous asymptotically fast methods relying on fast matrix multiplication. Supported by NSF DMS-0545904 (CAREER) and a Sloan Research Fellowship.
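For orientation, the problem this abstract addresses can be stated in code. The following is a deliberately naive, cubic-time sketch of modular composition over a prime field F_q (an assumption made here for simplicity); the paper's algorithm is far faster and works over any finite field:

```python
def modcomp(f, g, h, q):
    """Naive modular composition: f(g(x)) mod h(x) over the prime field F_q.
    Polynomials are coefficient lists, constant term first.  This merely
    defines the problem; the paper solves it in n^(1+o(1)) log^(1+o(1)) q
    bit operations, far below this sketch's cost."""

    def polymod(a, m):
        a = a[:]
        while len(a) >= len(m):
            c = a[-1] * pow(m[-1], -1, q) % q   # leading-coefficient ratio
            for i in range(len(m)):
                a[len(a) - len(m) + i] = (a[len(a) - len(m) + i] - c * m[i]) % q
            a.pop()                              # leading term is now zero
        while len(a) > 1 and a[-1] == 0:
            a.pop()
        return a

    def polymul(a, b):
        r = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] + ai * bj) % q
        return r

    # Horner's rule in g: ((f_n * g + f_{n-1}) * g + ...) + f_0, reducing mod h.
    res = [0]
    for c in reversed(f):
        res = polymod(polymul(res, g), h)
        res[0] = (res[0] + c) % q
    return res

# f = x^2, g = x + 1, h = x^2 + 1 over F_5: (x+1)^2 = x^2 + 2x + 1 ≡ 2x.
print(modcomp([0, 0, 1], [1, 1], [1, 0, 1], 5))  # → [0, 2]
```

Each Horner step multiplies two degree-&lt;n polynomials and reduces mod h, so the sketch costs Θ(n³) field operations; the gap between this and the paper's nearly linear bound is exactly what makes the result notable.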
Complete, exact, and efficient computations with cubic curves
 In Proc. 20th Annu. ACM Symp. Comput. Geom., 2004
Cited by 17 (6 self)
Abstract
The Bentley-Ottmann sweep-line method can be used to compute the arrangement of planar curves, provided a number of geometric primitives operating on the curves are available. We discuss the mathematics of the primitives for planar algebraic curves of degree three or less and derive efficient realizations. As a result, we obtain a complete, exact, and efficient algorithm for computing arrangements of cubic curves. Conics and cubic splines are special cases of cubic curves. The algorithm is complete in that it handles all possible degeneracies, including singularities. It is exact in that it provides the mathematically correct result. It is efficient in that it can handle hundreds of curves with a quarter million segments in the final arrangement.
Secure computation of the mean and related statistics
 in Proceedings of the Theory of Cryptography Conference, ser. Lecture Notes in Computer Science
Cited by 16 (0 self)
Abstract
In recent years there has been massive progress in the development of technologies for storing and processing data. If statistical analysis could be applied to such data when it is distributed between several organisations, there could be huge benefits. Unfortunately, in many cases, for legal or commercial reasons, this is not possible. The idea of using the theory of multiparty computation to analyse efficient algorithms for privacy-preserving data mining was proposed by Pinkas and Lindell. The point is that algorithms developed in this way can be used to overcome the apparent impasse described above: the owners of data can, in effect, pool their data while ensuring that privacy is maintained. Motivated by this, we describe how to securely compute the mean of an attribute value in a database that is shared between two parties. We also demonstrate that existing solutions in the literature that could be used to do this leak information, thereby underlining the importance of applying rigorous theoretical analysis rather than settling for ad hoc techniques.
Arithmetic Circuits: a survey of recent results and open questions
Cited by 12 (3 self)
Abstract
A large class of problems in symbolic computation can be expressed as the task of computing some polynomials, and arithmetic circuits form the most standard model for studying the complexity of such computations. This algebraic model of computation has attracted a large amount of research in the last five decades, partially due to its simplicity and elegance. Being a more structured model than Boolean circuits, one could hope that the fundamental problems of theoretical computer science, such as separating P from NP, will be easier to solve for arithmetic circuits. However, in spite of the apparent simplicity and the vast amount of mathematical tools available, no major breakthrough has been seen. In fact, all the fundamental questions are still open for this model as well. Nevertheless, there has been a lot of progress in the area and beautiful results have been found, some in the last few years. As examples we mention the connection between polynomial identity testing and lower bounds due to Kabanets and Impagliazzo, the lower bounds of Raz for multilinear formulas, and two new approaches for proving lower bounds: Geometric Complexity Theory and Elusive Functions. The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what we find to be the most interesting and accessible research directions. We aim to cover the main results and techniques, with an emphasis on works from the last two decades. In particular, we ...