Results 1 - 10 of 42,710
PCP characterizations of NP: Towards a polynomially-small error-probability
 In Proc. 31st ACM Symp. on Theory of Computing
, 1999
"... This paper strengthens the low-error PCP characterization of NP, coming closer to the upper limit of the BGLR conjecture. Consider the task of verifying a witness for the membership of a given input in an NP language, using a constant number of accesses. We show that it is possible to achieve an err ..."
Cited by 27 (15 self)
an error probability exponentially small in the number of bits accessed, where the number of bits in each access is as high as log^β n, for any constant β < 1. The BGLR conjecture asserts the same for a constant β where β ≤ 1. Our results are in fact stronger, implying that the Gap
PCP Characterizations of NP: Towards a Polynomially-Small Error-Probability
, 1999
"... this paper we show a PCP characterization of NP of small (sub-constant) error probability. In addition, we impose additional structure on the local readers, namely require the tests to be quadratic equations over some finite field. We define the gap version of the Maximum Quadratic-Solvability proble ..."
Fast Sparse Superposition Codes have Exponentially Small Error Probability for R < C
"... Abstract—For the additive white Gaussian noise channel with average codeword power constraint, sparse superposition codes are developed. These codes are based on the statistical high-dimensional regression framework. The paper [IEEE Trans. Inform. Theory 55 (2012), 2541–2557] investigated decoding ..."
Cited by 4 (1 self)
decoding using the optimal maximum-likelihood decoding scheme. Here a fast decoding algorithm, called the adaptive successive decoder, is developed. For any rate R less than the capacity C, communication is shown to be reliable with exponentially small error probability. Index Terms—Gaussian channel, multiuser
Quantum complexity theory
 in Proc. 25th Annual ACM Symposium on Theory of Computing, ACM
, 1993
"... Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This constructi ..."
Cited by 574 (5 self)
BPP. The class BQP of languages that are efficiently decidable (with small error probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing
The SmallWorld Phenomenon: An Algorithmic Perspective
 in Proceedings of the 32nd ACM Symposium on Theory of Computing
, 2000
"... Long a matter of folklore, the “small-world phenomenon” — the principle that we are all linked by short chains of acquaintances — was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960s. This work was among the first to m ..."
Cited by 824 (5 self)
Solving multiclass learning problems via errorcorrecting output codes
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1995
"... Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k “classes”). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass l ..."
Cited by 726 (8 self)
that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass
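The error-correcting output code idea summarized above can be sketched in a few lines: each class is assigned a binary codeword, one binary learner predicts each bit, and a test point is assigned the class whose codeword is nearest in Hamming distance to the combined learner outputs. The code matrix and prediction vector below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical ECOC sketch: 4 classes, 6-bit codewords. One binary
# learner would be trained per bit-position; decoding picks the class
# whose codeword is closest (Hamming distance) to the learners' output.
code_matrix = np.array([
    [0, 0, 1, 1, 0, 1],   # class 0
    [0, 1, 0, 1, 1, 0],   # class 1
    [1, 0, 0, 0, 1, 1],   # class 2
    [1, 1, 1, 0, 0, 0],   # class 3
])

def ecoc_decode(bit_predictions, code_matrix):
    """Return the index of the codeword closest in Hamming distance."""
    distances = np.abs(code_matrix - bit_predictions).sum(axis=1)
    return int(np.argmin(distances))

# Class 0's codeword with one bit flipped still decodes to class 0,
# because these codewords are pairwise Hamming distance 4 apart.
noisy_output = np.array([0, 1, 1, 1, 0, 1])
predicted_class = ecoc_decode(noisy_output, code_matrix)
```

Because the minimum pairwise distance of this toy code is 4, any single wrong binary learner is corrected at decoding time, which is the redundancy effect the abstract describes.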
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
"... In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Cited by 574 (9 self)
chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero
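A toy illustration of iterative decoding on a parity-check matrix: the hard-decision "bit-flipping" decoder below is a much simpler relative of the message-passing decoders the paper analyzes. The matrix H and the codeword are invented for the example; they are not an LDPC ensemble from the paper.

```python
import numpy as np

# Small parity-check matrix (made up for illustration).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
])

def bit_flip_decode(word, H, max_iters=10):
    """Repeatedly flip the bit involved in the most unsatisfied checks."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H.dot(word) % 2
        if not syndrome.any():
            break                  # every parity check is satisfied
        fail_counts = H[syndrome == 1].sum(axis=0)
        word[np.argmax(fail_counts)] ^= 1
    return word

codeword = np.array([1, 1, 1, 0, 0, 1])   # satisfies H @ codeword = 0 (mod 2)
received = codeword.copy()
received[4] ^= 1                           # one channel error
decoded = bit_flip_decode(received, H)
```

Real message-passing decoders exchange soft probabilities along the edges of the code's factor graph rather than flipping bits, but the loop structure (compute check results, update bit estimates, repeat) is the same.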
The Evolution of Social and Economic Networks
 JOURNAL OF ECONOMIC THEORY 106, 265–295
, 2002
"... We examine the dynamic formation and stochastic evolution of networks connecting individuals. The payoff to an individual from an economic or social activity depends on the network of connections among individuals. Over time individuals form and sever links connecting themselves to other individuals ..."
Cited by 889 (37 self)
individuals based on the improvement that the resulting network offers them relative to the current network. In addition to intended changes in the network there is a small probability of unintended changes or errors. Predictions can be made regarding the likelihood that the stochastic process will lead
Divergence measures based on the Shannon entropy
 IEEE Transactions on Information theory
, 1991
"... Abstract—A new class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, ..."
Cited by 666 (0 self)
, their close relationship with the variational distance and the probability of misclassification error is established in terms of bounds. These bounds are crucial in many applications of divergence measures. The new measures are also well characterized by the properties of nonnegativity, finiteness
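The entropy-based measure this paper introduces is now commonly known as the Jensen-Shannon divergence; it can be computed directly from the Shannon entropy, as a minimal sketch shows:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p_i log2 p_i over the nonzero entries."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: H((p+q)/2) - (H(p) + H(q)) / 2."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Unlike the Kullback divergence, this stays finite even when the two
# distributions have disjoint support (no absolute-continuity needed):
d = js_divergence([1.0, 0.0], [0.0, 1.0])   # = 1 bit, the maximum
```

The finiteness and nonnegativity mentioned in the snippet are visible here: the value is 0 for identical distributions and at most 1 bit even for completely disjoint ones, a case where the Kullback divergence is infinite.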
Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods
 ADVANCES IN LARGE MARGIN CLASSIFIERS
, 1999
"... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. Howev ..."
Cited by 1051 (0 self)
. However, training with a maximum likelihood score will produce non-sparse kernel machines. Instead, we train an SVM, then train the parameters of an additional sigmoid function to map the SVM outputs into probabilities. This chapter compares classification error rate and likelihood scores for an SVM plus
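The sigmoid mapping described above takes the raw SVM margin f(x) to a probability P(y=1 | f) = 1 / (1 + exp(A·f + B)). In the actual method, A and B are fitted by regularized maximum likelihood on held-out data; the fixed values below are placeholders standing in for such a fit.

```python
import math

def platt_probability(margin, A=-1.5, B=0.0):
    """Map an SVM margin to a probability via a fitted sigmoid.

    A and B here are illustrative placeholders; in the method described
    they are chosen by regularized maximum likelihood on a held-out set.
    """
    return 1.0 / (1.0 + math.exp(A * margin + B))

p_boundary = platt_probability(0.0)   # margin 0 maps to probability 0.5
p_positive = platt_probability(3.0)   # large positive margin, near 1
```

With B = 0, points on the decision boundary get probability 0.5, and the (negative) slope A controls how quickly confidence grows with the margin, which is what the fitting step calibrates.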