Results 1 - 10 of 9,807
Simple analysis of graph tests for linearity and PCP
 IEEE Conference on Computational Complexity
"... We give a simple analysis of the PCP with low amortized query complexity of Samorodnitsky and Trevisan [16]. The analysis also applies to the linearity testing over finite fields, giving a better estimate of the acceptance probability in terms of the distance of the tested function to the closest li ..."
Abstract

Cited by 16 (2 self)
 Add to MetaCart
We give a simple analysis of the PCP with low amortized query complexity of Samorodnitsky and Trevisan [16]. The analysis also applies to the linearity testing over finite fields, giving a better estimate of the acceptance probability in terms of the distance of the tested function to the closest
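For illustration, here is a minimal Python sketch of the classic BLR-style linearity test that this line of work analyzes; the field size p, dimension n, and trial count are illustrative choices, not parameters taken from the paper.

import random

def blr_linearity_test(f, p, n, trials=100):
    """BLR-style linearity test over Z_p (a generic sketch, not the
    paper's exact test): accept iff every sampled pair satisfies
    f(x) + f(y) = f(x + y), with arithmetic coordinate-wise mod p.
    A function far from every linear function is rejected with
    probability growing in its distance."""
    for _ in range(trials):
        x = [random.randrange(p) for _ in range(n)]
        y = [random.randrange(p) for _ in range(n)]
        xy = [(a + b) % p for a, b in zip(x, y)]
        if (f(x) + f(y)) % p != f(xy) % p:
            return False  # found a violated linearity constraint
    return True

# Example: a genuinely linear function over Z_5^3 always passes.
w = [2, 0, 4]
linear = lambda x: sum(wi * xi for wi, xi in zip(w, x)) % 5
assert blr_linearity_test(linear, p=5, n=3)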
Property Testing and its connection to Learning and Approximation
"... We study the question of determining whether an unknown function has a particular property or is fflfar from any function with that property. A property testing algorithm is given a sample of the value of the function on instances drawn according to some distribution, and possibly may query the fun ..."
Abstract

Cited by 498 (68 self)
 Add to MetaCart
the function on instances of its choice. First, we establish some connections between property testing and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being kcolorable or having a aeclique (clique of density ae
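As a minimal illustration of the sampling paradigm (not the paper's algorithm, and with a hypothetical sample size rather than its query complexity): draw a small random vertex set and brute-force check the property on the induced subgraph.

import itertools, random

def sample_based_colorability_test(adj, k, sample_size=8):
    """Illustrative sampling-style tester: draw a random set of
    vertices and accept iff the induced subgraph is k-colorable,
    checked by brute force. k-colorable graphs always pass; graphs
    far from k-colorable are likely to contain a non-colorable
    witness in a large enough sample."""
    n = len(adj)
    s = random.sample(range(n), min(sample_size, n))
    for coloring in itertools.product(range(k), repeat=len(s)):
        color = dict(zip(s, coloring))
        if all(color[u] != color[v] for u in s for v in s
               if v in adj[u]):
            return True   # found a proper k-coloring of the sample
    return False          # the sample itself is not k-colorable

# Example: a triangle plus an isolated vertex is 3-colorable.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
assert sample_based_colorability_test(adj, k=3, sample_size=4)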
Statistical mechanics of complex networks
Rev. Mod. Phys.
Cited by 2083 (10 self)
Abstract: Complex networks describe a wide range of systems in nature and society; much-quoted examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems were modeled as random graphs, it is increasingly recognized that the topology and evolution of real …
A Threshold of ln n for Approximating Set Cover
 JOURNAL OF THE ACM
1998
Cited by 778 (5 self)
Abstract: Given a collection F of subsets of S = {1, …, n}, set cover is the problem of selecting as few subsets as possible from F such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 − o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial-time algorithms. This closes the gap (up to low-order terms) between the approximation ratio achievable by the greedy algorithm (which is (1 − o(1)) ln n) and previous results of Lund and Yannakakis, who showed hardness of approximation within a ratio of (log_2 n)/2 ≈ 0.72 ln n. For max k-cover we show an approximation threshold of 1 − 1/e (up to low-order terms), under the assumption that P ≠ NP.
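The greedy algorithm whose ratio this lower bound matches is simple to state; here is a standard implementation sketch in Python (the example universe and subsets are illustrative).

def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    still-uncovered elements. Achieves a harmonic-number factor,
    H(n) <= ln n + 1, which the paper shows is essentially optimal."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("subsets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

universe = range(1, 8)
subsets = [{1, 2, 3, 4}, {4, 5, 6}, {1, 5, 7}, {6, 7}]
print(greedy_set_cover(universe, subsets))
# -> [{1, 2, 3, 4}, {4, 5, 6}, {1, 5, 7}]: three sets cover 1..7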
Inducing Features of Random Fields
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
1997
Cited by 664 (14 self)
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field, and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classification …
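To make the weight-estimation step concrete, here is a toy generalized-iterative-scaling sketch for a log-linear model p(x) proportional to exp(sum_i w_i f_i(x)). This is a generic textbook variant, not the paper's implementation; the slack feature, iteration count, and example data are assumptions for the sketch.

import math

def iterative_scaling(samples, features, domain, iters=200):
    """Toy generalized iterative scaling for a log-linear model.
    Assumes binary features padded with a slack feature so feature
    counts sum to a constant C for every x (a GIS requirement)."""
    C = len(features) + 1  # slack feature makes totals constant
    def feats(x):
        vals = [f(x) for f in features]
        return vals + [C - sum(vals)]          # slack feature
    w = [0.0] * (len(features) + 1)
    # empirical expectation of each feature under the training data
    emp = [sum(feats(x)[i] for x in samples) / len(samples)
           for i in range(len(w))]
    for _ in range(iters):
        scores = [math.exp(sum(wi * fi for wi, fi in zip(w, feats(x))))
                  for x in domain]
        Z = sum(scores)
        # model expectation of each feature under current weights
        model = [sum(s * feats(x)[i] for s, x in zip(scores, domain)) / Z
                 for i in range(len(w))]
        # GIS update: move each weight toward matching expectations
        w = [wi + math.log(e / m) / C if e > 0 and m > 0 else wi
             for wi, e, m in zip(w, emp, model)]
    return w

# Toy example: fit a distribution over {0,1,2,3} to match one feature.
samples = [0, 0, 1, 3]
features = [lambda x: 1 if x < 2 else 0]   # indicator feature
w = iterative_scaling(samples, features, domain=[0, 1, 2, 3])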
Monitoring the future: National survey results on drug use
I: Secondary school students (NIH Publication No. 05-5726). Bethesda, MD: National Institute on Drug Abuse
2005
"... by ..."
Robust Characterizations of Polynomials with Applications to Program Testing
1996
Cited by 377 (42 self)
Abstract: The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here we make this notion precise and show such a characterization for polynomials. From this characterization, we get the following applications.
The PCP theorem by gap amplification
In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing
2006
Cited by 169 (9 self)
Abstract: The PCP theorem [3, 2] says that every language in NP has a witness format that can be checked probabilistically by reading only a constant number of bits from the proof. The celebrated equivalence of this theorem and inapproximability of certain optimization problems, due to [12], has placed the PC […] constraint system, with only a linear blowup in the size of the system. The amplification step causes an increase in alphabet size that is corrected by a (standard) PCP composition step. Iterative application of these two steps yields a proof for the PCP theorem. The amplification lemma relies on a new notion …
Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions
 COGNITIVE SCIENCE
1995
Cited by 366 (35 self)
Abstract: We examine the problem of generating definite noun phrases that are appropriate referring expressions: that is, noun phrases that (a) successfully identify the intended referent to the hearer whilst (b) not conveying to him or her any false conversational implicatures (Grice, 1975). We review several possible computational interpretations of the conversational implicature maxims, with different computational costs, and argue that the simplest may be the best, because it seems to be closest to what human speakers do. We describe our recommended algorithm in detail, along with a specification of the resources a host system must provide in order to make use of the algorithm, and an implementation used in the natural language generation component of the IDAS system.
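A short Python sketch in the spirit of the incremental approach this abstract recommends (illustrative, not the IDAS implementation; the attribute names and objects are invented for the example): walk attributes in a fixed preference order, keeping a property only if it rules out at least one remaining distractor.

def incremental_description(referent, distractors, preferred_attrs):
    """Incremental referring-expression generation: add a property
    only when it eliminates some distractor; stop once no
    distractors remain."""
    description = []
    remaining = list(distractors)
    for attr in preferred_attrs:
        value = referent[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            return description
    return None  # no distinguishing description under these attributes

# "the small black dog": colour alone leaves a distractor, size settles it.
referent = {"type": "dog", "colour": "black", "size": "small"}
distractors = [{"type": "dog", "colour": "white", "size": "small"},
               {"type": "cat", "colour": "black", "size": "small"},
               {"type": "dog", "colour": "black", "size": "large"}]
print(incremental_description(referent, distractors,
                              ["type", "colour", "size"]))
# -> [('type', 'dog'), ('colour', 'black'), ('size', 'small')]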