Results 1-10 of 49
Graph Approximations to Geodesics on Embedded Manifolds
, 2000
Cited by 98 (2 self)
Abstract: In this paper, we discuss some of the theoretical claims for Isomap made in [1]. In particular, we give a full proof of the asymptotic convergence theorem referred to in that paper.
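The convergence theorem concerns shortest paths in a neighborhood graph approximating true geodesic distances on the underlying manifold. A minimal illustration of that idea (not the paper's construction; the sampling density and k below are made-up choices): sample the unit circle, a 1-D manifold in R^2, build a k-nearest-neighbor graph with Euclidean edge weights, and check that the graph distance between antipodal points approaches the geodesic length pi.

```python
import heapq
import math

def knn_graph(points, k):
    """Adjacency list linking each point to its k nearest neighbours."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        nearest = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )[:k]
        for d, j in nearest:
            adj[i].append((j, d))
            adj[j].append((i, d))  # keep the graph symmetric
    return adj

def dijkstra(adj, src, dst):
    """Graph shortest-path distance from src to dst."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Dense sample of the unit circle; geodesic between antipodes is pi.
n, k = 400, 4
points = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
approx = dijkstra(knn_graph(points, k), 0, n // 2)
print(abs(approx - math.pi))  # small for dense samples
```

As the sample density grows, the chords shrink and the graph distance converges to the arc length, which is the phenomenon the asymptotic theorem makes precise.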
Tutorial on Practical Prediction Theory for Classification
, 2005
Cited by 80 (3 self)
Abstract: We discuss basic prediction theory and its impact on classification success evaluation, implications for learning algorithm design, and uses in learning algorithm execution. This tutorial is meant to be a comprehensive compilation of results which are both theoretically rigorous and practically useful. There are two important implications...
A new multilayered PCP and the hardness of hypergraph vertex cover
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing
, 2003
Cited by 53 (10 self)
Abstract: Given a k-uniform hypergraph, the Ek-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyperedge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that Ek-Vertex-Cover is NP-hard to approximate within a factor of (k − 1 − ε) for arbitrary constants ε > 0 and k ≥ 3. The result is nearly tight, as this problem can be easily approximated within factor k. Our construction makes use of the biased Long Code and is analyzed using combinatorial properties of s-wise t-intersecting families of subsets. We also give a different proof that shows an inapproximability factor of ⌊k/2⌋ − ε. In addition to being simpler, this proof also works for superconstant values of k up to (log N)^(1/c), where ...
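The factor-k approximation the abstract calls easy is the classical greedy: repeatedly pick an uncovered hyperedge and add all k of its vertices. The chosen hyperedges are pairwise disjoint, so any cover needs a distinct vertex in each of them, and the output is therefore at most k times optimal. A sketch on a made-up 3-uniform instance:

```python
def greedy_vertex_cover(hyperedges):
    """Greedy factor-k approximation for vertex cover in a
    k-uniform hypergraph."""
    cover = set()
    for edge in hyperedges:
        if not cover & set(edge):   # edge not yet covered
            cover |= set(edge)      # take all k of its vertices
    return cover

# Optimum here is the single vertex 0; greedy takes the whole
# first edge, paying the factor-k (here 3) worst case.
edges = [(0, 1, 2), (0, 3, 4), (0, 5, 6)]
cover = greedy_vertex_cover(edges)
assert all(cover & set(e) for e in edges)  # every edge is covered
print(sorted(cover))  # [0, 1, 2]
```

The hardness result in the paper says that, up to lower-order terms, no polynomial-time algorithm can beat this trivial factor by more than roughly 1.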
Sampling Algorithms: Lower Bounds and Applications (Extended Abstract)
, 2001
Cited by 52 (2 self)
Authors: Ziv Bar-Yossef (Computer Science Division, U.C. Berkeley, zivi@cs.berkeley.edu), Ravi Kumar (IBM Almaden, ravi@almaden.ibm.com), D. Sivakumar (IBM Almaden, siva@almaden.ibm.com)
Abstract: We develop a framework to study probabilistic sampling algorithms that approximate general functions of the form f : A^n → B, where A and B are arbitrary sets. Our goal is to obtain lower bounds on the query complexity of functions, namely the number of input variables x_i that any sampling algorithm needs to query to approximate f(x_1, ..., x_n). We define two quantitative properties of functions, the block sensitivity and the minimum Hellinger distance, that give us techniques to prove lower bounds on the query complexity. These techniques are quite general, easy to use, yet powerful enough to yield tight results. Our applications include the mean and higher statistical moments, the median and other selection functions, and the frequency moments, where we obtain lower bounds that are close to the corresponding upper bounds. We also point out some connections between sampling and streaming algorithms and lossy compression schemes.
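For the mean, the upper-bound side of the story is elementary: for values in [0, 1], Hoeffding's inequality says that averaging q = ln(2/δ) / (2ε²) uniformly sampled entries estimates the mean within ε with probability at least 1 − δ, so O(log(1/δ)/ε²) queries suffice. The paper's block-sensitivity and Hellinger-distance techniques supply near-matching lower bounds. A sketch (the input array and parameters below are made up):

```python
import math
import random

def sample_mean(x, eps, delta, rng):
    """Approximate the mean of x (values in [0, 1]) by querying
    q = ln(2/delta) / (2 eps^2) uniformly random positions."""
    q = math.ceil(math.log(2 / delta) / (2 * eps**2))
    queries = [x[rng.randrange(len(x))] for _ in range(q)]
    return sum(queries) / q

n = 100_000
x = [(i % 10) / 10 for i in range(n)]  # true mean 0.45
est = sample_mean(x, eps=0.01, delta=0.01, rng=random.Random(0))
print(abs(est - 0.45))  # within eps with probability >= 1 - delta
```

Note that the query count depends only on ε and δ, not on n, which is exactly why query complexity is the right cost measure for such sampling algorithms.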
A structural EM algorithm for phylogenetic inference
 Journal of Computational Biology
, 2002
Cited by 49 (7 self)
Abstract: Head of the school for Engineering and Computer Science
Extractor Codes
, 2001
Cited by 42 (6 self)
Abstract: We define new error-correcting codes based on extractors. We show that for certain choices of parameters these codes have better list-decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list-decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hard-core bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.
Robustness and Pricing with Uncertain Growth
 REV. FINANC. STUD
, 2000
Cited by 40 (6 self)
Abstract: We study how decision makers' concerns about robustness affect prices and quantities in a stochastic growth model. In the model economy, growth rates in technology are altered by infrequent large shocks and continuous small shocks. An investor observes movements in the technology level but cannot perfectly distinguish their sources. Instead the investor solves a signal extraction problem. We depart from most of the macroeconomics and finance literature by presuming that the investor treats the specification of technology evolution as an approximation. To promote a decision rule that is robust to model misspecification, an investor acts as if a malevolent player threatens to perturb the actual data-generating process relative to his approximating model. We study how a concern about robustness alters asset prices. We show that the dynamic evolution of the risk-return tradeoff is dominated by movements in the growth-state probabilities and that the evolution of the dividend-price ratio is driven primarily by the capital-technology ratio.
A Simple Shortest Path Algorithm with Linear Average Time
Cited by 34 (6 self)
Abstract: We present a simple shortest path algorithm. If the input lengths are positive and uniformly distributed, the algorithm runs in linear time. The worst-case running time of the algorithm is O(m + n log C), where n and m are the number of vertices and arcs of the input graph, respectively, and C is the ratio of the largest and the smallest nonzero arc length.
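Algorithms in this family build on bucket-based priority queues. For context, the simplest bucket scheme (Dial's algorithm) for integer arc lengths in [1, C] keeps tentative distances in buckets scanned in increasing order, which costs O(m + nC); the paper's contribution is a smarter bucket organization that brings this down to O(m + n log C) worst case and linear time on average. The sketch below is Dial's baseline, NOT the paper's algorithm:

```python
def dial_shortest_paths(n, arcs, src, C):
    """Single-source shortest paths via Dial's bucket queue.

    arcs is a list of (u, v, w) with integer weights 1 <= w <= C.
    """
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in arcs:
        adj[u].append((v, w))
    dist = [INF] * n
    dist[src] = 0
    # Finalized distances never exceed (n-1)*C, so this many
    # buckets suffice; bucket d holds nodes with tentative distance d.
    buckets = [[] for _ in range((n - 1) * C + 1)]
    buckets[0].append(src)
    for d in range(len(buckets)):
        for u in buckets[d]:
            if d > dist[u]:
                continue  # stale entry; u was settled earlier
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)
    return dist

# Tiny made-up example: shortest distances from vertex 0.
arcs = [(0, 1, 2), (0, 2, 5), (1, 2, 2), (2, 3, 1)]
print(dial_shortest_paths(4, arcs, 0, C=5))  # [0, 2, 4, 5]
```

Scanning buckets in order plays the role of the heap in Dijkstra's algorithm; the O(nC) scan over buckets is the term the paper's multi-level scheme attacks.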
PACBayesian generalization error bounds for Gaussian process classification
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2002