Results 1 - 10 of 14
Extractors for circuit sources
2011
"... We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are: (1) We extract k(k/nd) O(1) bits with exponentially small error from n-bit sources of min-entropy k that are generated by functions f: {0, 1} ℓ → {0, 1} n where e ..."
Cited by 12 (2 self)
We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are: (1) We extract k(k/nd)^{O(1)} bits with exponentially small error from n-bit sources of min-entropy k that are generated by functions f: {0,1}^ℓ → {0,1}^n where each output bit depends on ≤ d input bits. In particular, we extract from NC^0 sources, corresponding to d = O(1). (2) We extract k(k/n^{1+γ})^{O(1)} bits with super-polynomially small error from n-bit sources of min-entropy k that are generated by poly(n)-size AC^0 circuits, for any γ > 0. As our starting point, we revisit the connection by Trevisan and Vadhan (FOCS 2000) between circuit lower bounds and extractors for sources generated by circuits. We note that such extractors (with very weak parameters) are equivalent to lower bounds for generating distributions (FOCS 2010; with Lovett, CCC 2011). Building on those bounds, we prove that the sources in (1) and (2) are (close to) a convex combination of high-entropy “bit-block” sources. Introduced here, such sources are a special case of affine ones. As extractors for (1) and (2) one can use the extractor for low-weight affine sources by Rao (CCC 2009). Along the way, we exhibit an explicit boolean function b: {0,1}^n → {0,1} such that poly(n)-size AC^0 circuits cannot generate the distribution (Y, b(Y)), solving a problem about the complexity of distributions. Independently, De and Watson (RANDOM 2011) obtain a result similar to (1) in the special case d = o(lg n). Supported by NSF grant CCF-0845003.
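To make the notion of a d-local source concrete, here is a minimal Python sketch (not the paper's extractor; the function and parameter names are illustrative only) that builds a random function f: {0,1}^ℓ → {0,1}^n in which each output bit reads at most d input bits, and measures the min-entropy k of the resulting n-bit source by brute force over all seeds.

```python
import itertools
import math
import random
from collections import Counter

def random_local_source(ell, n, d, seed=0):
    """Random d-local function f: {0,1}^ell -> {0,1}^n: each output bit is an
    arbitrary (random) function of at most d input bits, as in an NC^0 source."""
    rng = random.Random(seed)
    gates = []
    for _ in range(n):
        idx = rng.sample(range(ell), d)                      # inputs this bit reads
        table = [rng.randint(0, 1) for _ in range(2 ** d)]   # its truth table
        gates.append((idx, table))

    def f(x):
        out = []
        for idx, table in gates:
            key = 0
            for i in idx:
                key = (key << 1) | x[i]
            out.append(table[key])
        return tuple(out)

    return f

def min_entropy(f, ell):
    """Exact min-entropy of f(U_ell): -log2 of the most likely output
    (brute force over all 2^ell seeds, so keep ell small)."""
    counts = Counter(f(x) for x in itertools.product((0, 1), repeat=ell))
    p_max = max(counts.values()) / 2 ** ell
    return -math.log2(p_max)

if __name__ == "__main__":
    f = random_local_source(ell=12, n=16, d=3)
    print("min-entropy k of this NC^0-style source:", round(min_entropy(f, 12), 2))
```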
Towards an Understanding of Polynomial Calculus: New Separations and Lower Bounds
"... Abstract. During the last decade, an active line of research in proof complexity has been into the space complexity of proofs and how space is related to other measures. By now these aspects of resolution are fairly well understood, but many open problems remain for the related but stronger polynomi ..."
Cited by 6 (5 self)
Abstract. During the last decade, an active line of research in proof complexity has investigated the space complexity of proofs and how space relates to other measures. By now these aspects of resolution are fairly well understood, but many open problems remain for the related but stronger polynomial calculus (PC/PCR) proof system. For instance, the space complexity of many standard “benchmark formulas” is still open, as is the relation of space to size and degree in PC/PCR. We prove that if a formula requires large resolution width, then applying XOR substitution yields a formula requiring large PCR space, providing some circumstantial evidence that degree might be a lower bound for space. More importantly, this immediately yields formulas that are very hard for space but very easy for size, exhibiting a size-space separation similar to what is known for resolution. Using related ideas, we show that if a graph has good expansion and its edge set can be partitioned into short cycles, then the Tseitin formula over this graph requires large PCR space. In particular, Tseitin formulas over random 4-regular graphs almost surely require space at least Ω(√n). Our proofs use techniques recently introduced in [Bonacina-Galesi ’13]. Our final contribution, however, is to show that these techniques provably cannot yield non-constant space lower bounds for the functional pigeonhole principle, delineating the limitations of this framework and suggesting that we are still far from characterizing PC/PCR space.
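For readers unfamiliar with Tseitin formulas, the following Python sketch (assuming the networkx package is available; it is not part of the paper) generates the CNF encoding of a Tseitin formula over a random 4-regular graph: one variable per edge, and for each vertex a parity constraint over its incident edges. An odd total charge makes the formula unsatisfiable, which is the regime in which such space lower bounds are stated.

```python
import itertools
import networkx as nx   # assumption: networkx is available for the random regular graph

def tseitin_cnf(graph, charges):
    """Tseitin formula over `graph`: one boolean variable per edge; for each
    vertex v, the XOR of its incident edge variables must equal charges[v].
    Each parity constraint is expanded into CNF clauses."""
    edge_var = {frozenset(e): i + 1 for i, e in enumerate(graph.edges())}
    clauses = []
    for v in graph.nodes():
        vars_v = [edge_var[frozenset((v, u))] for u in graph.neighbors(v)]
        # forbid every assignment to these edge variables with the wrong parity
        for bits in itertools.product((0, 1), repeat=len(vars_v)):
            if sum(bits) % 2 != charges[v]:
                clauses.append([x if b == 0 else -x for x, b in zip(vars_v, bits)])
    return clauses

if __name__ == "__main__":
    g = nx.random_regular_graph(4, 10, seed=0)
    charges = {v: 0 for v in g.nodes()}
    charges[0] = 1                      # odd total charge => unsatisfiable
    cnf = tseitin_cnf(g, charges)
    print(len(cnf), "clauses over", g.number_of_edges(), "edge variables")
```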
Local correctability of expander codes
In Automata, Languages, and Programming, 2013
"... ar ..."
Algorithms and hardness for robust subspace recovery
2014
"... We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of m points in Rn, if many but not necessarily all of these points are contained in a d-dimensional subspace T can we find it? The points contained in T are called inliers and the remaining points ..."
Cited by 3 (0 self)
We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of m points in R^n, if many but not necessarily all of these points are contained in a d-dimensional subspace T, can we find it? The points contained in T are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds T when it contains more than a d/n fraction of the points. Hence, for say d = n/2, this estimator is both easy to compute and well-behaved when there is a constant fraction of outliers. We prove that it is Small Set Expansion hard to find T when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas, including small set expansion, matroid theory, and functional analysis, which we make use of here.
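As a point of reference for the d/n threshold, here is a hedged RANSAC-style baseline in Python (explicitly not the paper's estimator, whose point is to handle adversarial outliers that defeat such heuristics): repeatedly fit a candidate d-dimensional subspace to d randomly sampled points and keep the subspace containing the most points.

```python
import numpy as np

def ransac_subspace(points, d, trials=2000, tol=1e-8, rng=None):
    """RANSAC-style baseline (NOT the paper's estimator): fit a d-dimensional
    subspace to random samples of d points and keep the one that captures the
    most points.  Works when inliers are plentiful and outliers are benign."""
    rng = np.random.default_rng(rng)
    m, n = points.shape
    best_basis, best_count = None, -1
    for _ in range(trials):
        sample = points[rng.choice(m, size=d, replace=False)]
        q, _ = np.linalg.qr(sample.T)                   # orthonormal basis of the sampled span
        residual = points - (points @ q) @ q.T          # component orthogonal to the subspace
        count = int(np.sum(np.linalg.norm(residual, axis=1) < tol))
        if count > best_count:
            best_basis, best_count = q, count
    return best_basis, best_count

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 6, 3
    basis = np.linalg.qr(rng.standard_normal((n, d)))[0]
    inliers = rng.standard_normal((80, d)) @ basis.T    # points inside T
    outliers = rng.standard_normal((40, n))             # generic (non-adversarial) outliers
    pts = np.vstack([inliers, outliers])
    _, count = ransac_subspace(pts, d, rng=1)
    print("points captured by the recovered subspace:", count)
```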
Quantum money with classical verification
2011
"... Abstract-We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against ad ..."
Cited by 3 (0 self)
Abstract. We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, but nevertheless none of the earlier quantum money constructions is known to possess it.
Locally Computable UOWHF with Linear Shrinkage
"... We study the problem of constructing locally computable Universal One-Way Hash Functions (UOWHFs) H: {0, 1} n → {0, 1} m. A construction with constant output locality, where every bit of the output depends only on a constant number of bits of the input, was established by [Applebaum, Ishai, and Kush ..."
Cited by 1 (1 self)
We study the problem of constructing locally computable Universal One-Way Hash Functions (UOWHFs) H: {0,1}^n → {0,1}^m. A construction with constant output locality, where every bit of the output depends only on a constant number of bits of the input, was established by [Applebaum, Ishai, and Kushilevitz, SICOMP 2006]. However, this construction suffers from two limitations: (1) It can only achieve a sub-linear shrinkage of n − m = n^{1−ε}; and (2) It has super-constant input locality, i.e., some inputs influence a large super-constant number of outputs. This leaves open the question of realizing UOWHFs with constant output locality and linear shrinkage of n − m = εn, or UOWHFs with constant input locality and minimal shrinkage of n − m = 1. We settle both questions simultaneously by providing the first construction of UOWHFs with linear shrinkage, constant input locality, and constant output locality. Our construction is based on the one-wayness of “random” local functions – a variant of an assumption made by Goldreich (ECCC 2000). Using a transformation of [Ishai, Kushilevitz, Ostrovsky and Sahai, STOC 2008], our UOWHFs give rise to a digital signature scheme with a minimal additive complexity overhead: signing n-bit messages with security parameter κ takes only O(n + κ) time instead of O(nκ) as in typical constructions. Previously, such signatures were only known to exist under an exponential hardness assumption. As an additional contribution, we obtain new locally computable hardness amplification procedures for UOWHFs that preserve linear shrinkage.
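To illustrate what constant output locality and linear shrinkage mean, here is a toy Python sketch of a random local function in the spirit of Goldreich's assumption; the wiring, the specific 5-ary predicate, and the parameters are illustrative assumptions, not the paper's construction.

```python
import random

def random_local_hash(n, m, d=5, seed=0):
    """Toy random d-local function h: {0,1}^n -> {0,1}^m.  Each output bit
    applies one fixed d-ary predicate (here: parity of the first d-2 inputs
    XORed with the AND of the last two) to d randomly chosen input positions.
    Constant output locality; purely illustrative."""
    rng = random.Random(seed)
    wiring = [rng.sample(range(n), d) for _ in range(m)]

    def predicate(bits):
        return (sum(bits[:-2]) + (bits[-2] & bits[-1])) % 2

    def h(x):
        return tuple(predicate([x[i] for i in idx]) for idx in wiring)

    return h

if __name__ == "__main__":
    n, m = 100, 80                        # linear shrinkage: n - m = 0.2 * n
    h = random_local_hash(n, m)
    x = [random.Random(1).randint(0, 1) for _ in range(n)]
    print(len(h(x)), "output bits computed from", n, "input bits")
```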
The complexity of joint computation
2012
"... Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This ..."
Cited by 1 (0 self)
Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas: Improved direct product theorems for randomized query complexity: the "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability ...
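As context for the direct product question, the following small Python sketch (a generic illustration, not tied to the thesis's specific theorems) compares the empirical probability that k independent runs of a hypothetical algorithm with per-instance success probability p all succeed against the analytic baseline p^k, the naive scaling that direct product theorems refine.

```python
import random

def success_prob_on_k_instances(p, k, trials=200_000, seed=0):
    """If a (hypothetical) randomized algorithm answers one instance correctly
    with probability p, and we run it independently on k instances, the chance
    that all k answers are correct decays as p^k."""
    rng = random.Random(seed)
    wins = sum(all(rng.random() < p for _ in range(k)) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    p, k = 0.9, 10
    print("empirical:", round(success_prob_on_k_instances(p, k), 3),
          " analytic p^k:", round(p ** k, 3))
```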
SlimShot: In-Database Probabilistic Inference for Knowledge Bases
"... ABSTRACT Increasingly large Knowledge Bases are being created, by crawling the Web or other corpora of documents, and by extracting facts and relations using machine learning techniques. To manage the uncertainty in the data, these KBs rely on probabilistic engines based on Markov Logic Networks (M ..."
Abstract. Increasingly large Knowledge Bases are being created by crawling the Web or other corpora of documents, and by extracting facts and relations using machine learning techniques. To manage the uncertainty in the data, these KBs rely on probabilistic engines based on Markov Logic Networks (MLN), for which probabilistic inference remains a major challenge. Today's state-of-the-art systems use variants of MCMC, which have no theoretical error guarantees and, as we show, suffer from poor performance in practice. In this paper we describe SlimShot (Scalable Lifted Inference and Monte Carlo Sampling Hybrid Optimization Technique), a probabilistic inference engine for knowledge bases. SlimShot converts the MLN to a tuple-independent probabilistic database, then uses simple Monte Carlo-based inference with three key enhancements: (1) it combines sampling with safe query evaluation, (2) it estimates a conditional probability by jointly computing the numerator and denominator, and (3) it adjusts the proposal distribution based on the sample cardinality. In combination, these three techniques allow us to give formal error guarantees, and we demonstrate empirically that SlimShot outperforms today's state-of-the-art probabilistic inference engines used in knowledge bases.
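A minimal Python sketch of enhancement (2), estimating a conditional probability by evaluating numerator and denominator on the same sampled possible worlds. The tuple-independent toy database, tuple names, and probabilities below are illustrative assumptions; none of SlimShot's lifted or safe-plan machinery is modeled.

```python
import random

def joint_conditional_estimate(sample_world, query, evidence, trials=100_000, seed=0):
    """Monte Carlo estimate of P(query | evidence) that counts numerator and
    denominator over the same sampled worlds.  `sample_world` draws a possible
    world; `query` and `evidence` are boolean tests on a world."""
    rng = random.Random(seed)
    num = den = 0
    for _ in range(trials):
        world = sample_world(rng)
        if evidence(world):
            den += 1
            if query(world):
                num += 1
    return num / den if den else float("nan")

if __name__ == "__main__":
    # Toy tuple-independent database: each tuple is present independently.
    probs = {"R(a)": 0.6, "R(b)": 0.3, "S(a)": 0.5}
    sample = lambda rng: {t: rng.random() < p for t, p in probs.items()}
    query = lambda w: w["R(a)"] and w["S(a)"]          # Q = R(a) AND S(a)
    evidence = lambda w: w["R(a)"] or w["R(b)"]        # condition on R(a) OR R(b)
    print("P(Q | E) ~", round(joint_conditional_estimate(sample, query, evidence), 3))
```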