Results 1 - 10 of 55
On the Compressibility of NP Instances and Cryptographic Applications
"... We study compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of N P decision problems. We consider N P problems that have long instances but relatively short witnesses. The question is, can one efficientl ..."
Abstract
-
Cited by 38 (0 self)
- Add to MetaCart
(Show Context)
We study compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of NP decision problems. We consider NP problems that have long instances but relatively short witnesses. The question is: can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not? We want the length of the compressed instance to be polynomial in the length of the witness and polylogarithmic in the length of the original input. We discuss the differences between this notion and similar notions from parameterized complexity. Such compression makes it possible to store instances succinctly until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply because enough time has elapsed. We give a new classification of NP with respect to compression. This classification forms a stratification of NP that we call the VC hierarchy. The hierarchy is based on a new type of reduction called W-reduction, and there are compression-complete problems for each class. Our motivation for studying this issue stems from the vast cryptographic implications compressibility has. For example, we say that SAT is compressible if there exists a polynomial p(·, ·) so that given a SAT instance with m clauses over n variables, one can efficiently produce an equivalent instance (with respect to membership in SAT) of size p(n, log m).
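The abstract distinguishes this notion from kernelization in parameterized complexity, where an instance is shrunk to a size depending only on a parameter k rather than on the witness length. As a concrete point of reference, here is a minimal Python sketch of Buss's classical kernel for k-Vertex-Cover; this illustrates the related parameterized notion and is not a construction from the paper:

```python
def vertex_cover_kernel(edges, k):
    """Buss kernelization for k-Vertex-Cover: produce an equivalent
    instance whose size depends only on the parameter k.

    Rule 1: a vertex of degree > k must belong to any cover of size <= k,
            so take it into the cover and decrement k.
    Rule 2: once all degrees are <= k, a yes-instance can have at most
            k*k edges (each of the k cover vertices hits at most k)."""
    edges = {frozenset(e) for e in edges}
    while True:
        if k < 0:
            return None                       # budget exceeded: no-instance
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}   # take `high` into the cover
        k -= 1
    if len(edges) > k * k:
        return None                           # too many edges: no-instance
    return edges, k                           # kernel with <= k^2 edges

# A star with 5 leaves plus one disjoint edge, budget k = 2:
print(vertex_cover_kernel([(0, i) for i in range(1, 6)] + [(7, 8)], 2))
```

The kernel is an equivalent instance with at most k^2 edges, so its length depends on k alone; the paper's notion instead demands length polynomial in the witness and polylogarithmic in the original input, which is what makes it different and cryptographically consequential.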
Closed Timelike Curves Make Quantum and Classical Computing Equivalent
"... While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: b ..."
Abstract
-
Cited by 18 (2 self)
- Add to MetaCart
(Show Context)
While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.
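In the classical probabilistic case, Deutsch's causal-consistency condition reduces to finding a stationary distribution of a stochastic matrix. The numpy sketch below, a toy illustration rather than the paper's construction, makes the fixed-point requirement concrete; note that explicit eigendecomposition uses exponential resources on an n-bit CTC register, whereas the paper's theorem is that a fixed point can be computed implicitly in polynomial space.

```python
import numpy as np

def ctc_fixed_point(M):
    """Classical toy version of Deutsch's causal consistency: given a
    column-stochastic evolution matrix M for the CTC register, return a
    probability distribution p with M @ p = p.  Such a fixed point
    always exists; here we read it off the eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(M)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvector for w ~ 1
    p = np.abs(p) / np.abs(p).sum()                # normalize to a distribution
    assert np.allclose(M @ p, p, atol=1e-8)        # causal consistency holds
    return p

# Two-state example: the unique consistent distribution is (2/3, 1/3).
M = np.array([[0.8, 0.4],
              [0.2, 0.6]])
print(ctc_fixed_point(M))
```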
A survey on continuous time computations
- New Computational Paradigms
"... Abstract. We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarizing resu ..."
Abstract
-
Cited by 13 (3 self)
- Add to MetaCart
We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.
SIMPL system: on a public key variant of physical unclonable function
- Cryptology ePrint Archive
, 2009
"... This paper theoretically discusses a novel security tool termed SIMPL system, which can be regarded as a public key version of physical unclonable functions (PUFs). Like the latter, a SIMPL system S is physically unique and non-reproducible, and implements an individual function FS. In opposition to ..."
Abstract
-
Cited by 13 (3 self)
- Add to MetaCart
(Show Context)
This paper theoretically discusses a novel security tool termed a SIMPL system, which can be regarded as a public key version of physical unclonable functions (PUFs). Like the latter, a SIMPL system S is physically unique and non-reproducible, and implements an individual function FS. In contrast to a PUF, however, a SIMPL system S possesses a publicly known numerical description D(S), which allows its digital simulation and prediction. At the same time, it is required that any digital simulation of a SIMPL system S must work at a detectably lower speed than its real-time behavior. In other words, the holder of a SIMPL system S can evaluate a publicly known, publicly computable function FS faster than anyone else. This feature, we argue in this paper, enables a number of practicality and security improvements. Once implemented successfully, SIMPL systems would have specific advantages over PUFs, certificates of authenticity, physically obfuscated keys, and also over standard mathematical cryptotechniques.
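The speed gap suggests a natural identification protocol: a verifier checks a response for correctness against the public simulation and accepts only if it arrived faster than any simulation plausibly could. A minimal sketch follows, with all names (prover, simulate, time_bound) hypothetical rather than taken from the paper:

```python
import secrets
import time

def verify_simpl_holder(prover, simulate, time_bound, rounds=16):
    """Timing-based identification sketch: `prover(c)` queries the party
    claiming to hold the SIMPL system S; `simulate(c)` is the public,
    detectably slower simulation derived from D(S); `time_bound` sits
    above S's real response time but below any simulation's."""
    for _ in range(rounds):
        c = secrets.token_bytes(16)              # fresh random challenge
        t0 = time.perf_counter()
        answer = prover(c)
        elapsed = time.perf_counter() - t0
        if elapsed > time_bound:                 # slow enough to have been simulated
            return False
        if answer != simulate(c):                # wrong value of F_S(c)
            return False
    return True                                  # fast and correct on every round
```

Repeating over fresh random challenges is what rules out precomputation: an impostor without S would have to simulate F_S on an unpredictable input inside the time bound.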
Quantitative approaches to information recovery from black holes
, 2011
"... The evaporation of black holes into apparently thermal radiation poses a serious conundrum for theoretical physics: at face value, it appears that in the presence of a black hole quantum evolution is non-unitary and destroys information. This information loss paradox has its seed in the presence of ..."
Abstract
-
Cited by 11 (2 self)
- Add to MetaCart
The evaporation of black holes into apparently thermal radiation poses a serious conundrum for theoretical physics: at face value, it appears that in the presence of a black hole quantum evolution is non-unitary and destroys information. This information loss paradox has its seed in the presence of a horizon causally separating the interior and asymptotic regions in a black hole spacetime. A quantitative resolution of the paradox could take several forms: (a) a precise argument that the underlying quantum theory is unitary, and that information loss must be an artifact of approximations in the derivation of black hole evaporation, (b) an explicit construction showing how information can be recovered by the asymptotic observer, (c) a demonstration that the causal disconnection of the black hole interior from infinity is an artifact of the semiclassical approximation. This review summarizes progress on all these fronts.
Why philosophers should care about computational complexity
- In Computability: Gödel, Turing, Church, and beyond (eds
, 2012
"... One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed casethat onewouldbe wrong. In particular, I arguethat computational complexity theory—the field that ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory, the field that studies the resources (such as time, space, and randomness) needed to solve computational problems, leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing
How much can analog and hybrid systems be proved (super-)Turing
- Applied Mathematics and Computation
, 2006
"... Church thesis and its variants say roughly that all reasonable models of computation do not have more power than Turing Machines. In a contrapositive way, they say that any model with super-Turing power must have something unreasonable. Our aim is to discuss how much theoretical computer science can ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
The Church thesis and its variants say, roughly, that all reasonable models of computation have no more power than Turing machines. Contrapositively, they say that any model with super-Turing power must have something unreasonable in it. Our aim is to discuss how far theoretical computer science can quantify this, by considering several classes of continuous time dynamical systems and by studying the extent to which they can be proved Turing or super-Turing.
An algorithmic framework for compression and text indexing
"... We present a unified algorithmic framework to obtain nearly optimal space bounds for text compression and compressed text indexing, apart from lower-order terms. For a text T of n symbols drawn from an alphabet Σ, our bounds are stated in terms of the hth-order empirical entropy of the text, Hh. In ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
We present a unified algorithmic framework to obtain nearly optimal space bounds for text compression and compressed text indexing, apart from lower-order terms. For a text T of n symbols drawn from an alphabet Σ, our bounds are stated in terms of the h-th order empirical entropy of the text, H_h. In particular, we provide a tight analysis of the Burrows-Wheeler transform (BWT), establishing a bound of nH_h + M(T, Σ, h) bits, where M(T, Σ, h) denotes the asymptotic number of bits required to store the empirical statistical model for contexts of order h appearing in T. Using the same framework, we also obtain an implementation of the compressed suffix array (CSA) which achieves nH_h + M(T, Σ, h) + O(n lg lg n / lg_{|Σ|} n) bits of space while still retaining competitive full-text indexing functionality. The novelty of the proposed framework lies in its use of the finite set model instead of the empirical probability model (as in previous work), giving us new insight into the design and analysis of our algorithms. For example, we show that our analysis gives improved bounds, since M(T, Σ, h) ≤ min{g′_h lg(n/g′_h + 1), H*_h·n + lg n + g″_h}, where g′_h = O(|Σ|^(h+1)) and g″_h = O(|Σ|^(h+1) lg |Σ|^(h+1)) do not depend on the text length n, while H*_h ≥ H_h is the modified h-th order empirical entropy of T. Moreover, we show a strong relationship between a compressed full-text index and the succinct dictionary problem. We also examine the importance of lower-order terms, as these can dwarf any savings achieved by high-order entropy. We report further results and tradeoffs on high-order entropy-compressed text indexes in the paper.
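For concreteness, the two objects the bounds are stated in terms of, the BWT and the h-th order empirical entropy H_h, can be computed directly for small texts. The Python sketch below is illustrative only; practical indexes build the BWT from a suffix array in linear time rather than by sorting rotations.

```python
import math
from collections import Counter, defaultdict

def h0(s):
    """Order-0 empirical entropy of s, in bits per symbol."""
    n, counts = len(s), Counter(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def hh(text, h):
    """Order-h empirical entropy H_h: weigh, over each length-h context,
    the order-0 entropy of the symbols that follow that context."""
    if h == 0:
        return h0(text)
    follow = defaultdict(list)
    for i in range(len(text) - h):
        follow[text[i:i + h]].append(text[i + h])
    return sum(len(s) * h0(s) for s in follow.values()) / len(text)

def bwt(text, sentinel="\0"):
    """Burrows-Wheeler transform via sorted rotations: O(n^2 log n),
    fine for illustration, far too slow for a real index."""
    t = text + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(r[-1] for r in rotations)

T = "mississippi"
print(bwt(T))               # -> 'ipssm\x00pissii'
print(hh(T, 0), hh(T, 1))   # H_1 < H_0: contexts make symbols predictable
```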
Can Intelligence Explode?
, 2012
"... The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science ..."
Abstract
-
Cited by 5 (2 self)
- Add to MetaCart
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers’ (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers’ analysis and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In the course of this, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences