CiteSeerX

Results 1 - 10 of 4,822

Approximate list-decoding of direct product . . .

by Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets
"... Given a message msg ∈ {0, 1} N, its k-wise direct product encoding is the sequence of k-tuples (msg(i1),..., msg(ik)) over all possible k-tuples of indices (i1,..., ik) ∈ {1,..., N} k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, gi ..."
Abstract - Cited by 33 (8 self)
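
To make the definition concrete, here is a small Python sketch (illustrative, not taken from the paper) of the k-wise direct product encoding: one codeword position per k-tuple of indices, holding the corresponding k-tuple of message bits. Indices are 0-based here, and the codeword has N^k positions, so this is only feasible for tiny N and k.

    from itertools import product

    def direct_product_encode(msg, k):
        """k-wise direct product encoding of a bit string msg of length N.

        The codeword has one position per k-tuple of indices in {0, ..., N-1}^k,
        and that position holds the k-tuple (msg[i_1], ..., msg[i_k]).
        Returned as a dict with N**k entries, mapping index tuples to bit tuples.
        """
        N = len(msg)
        return {idx: tuple(msg[i] for i in idx)
                for idx in product(range(N), repeat=k)}

    # Tiny example: N = 3, k = 2 gives a codeword with 3**2 = 9 positions.
    codeword = direct_product_encode([0, 1, 1], k=2)
    print(codeword[(0, 2)])   # (msg[0], msg[2]) == (0, 1)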

Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming

by M. X. Goemans, D. P. Williamson - Journal of the ACM, 1995
"... We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution ..."
Abstract - Cited by 1211 (13 self)
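
The .87856 guarantee comes from solving a semidefinite relaxation and rounding its solution with a random hyperplane. Below is a hedged sketch of that technique, not the authors' code, assuming numpy and cvxpy (with its bundled SDP solver) are available; the 4-cycle test graph is an illustrative choice.

    import numpy as np
    import cvxpy as cp

    def gw_max_cut(W, seed=0):
        """Random-hyperplane rounding for MAX CUT on a symmetric weight matrix W.

        Solves the relaxation max 1/4 * sum_ij W_ij (1 - X_ij) over PSD X with
        diag(X) = 1, factors X into unit vectors, and cuts with a random hyperplane.
        """
        n = W.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.diag(X) == 1]
        cp.Problem(cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X))), constraints).solve()

        # Factor X ~= V V^T so that row i of V is the unit vector for vertex i.
        vals, vecs = np.linalg.eigh(X.value)
        V = vecs * np.sqrt(np.clip(vals, 0, None))

        # Round: the sign of the projection onto a random direction splits the vertices.
        rng = np.random.default_rng(seed)
        signs = np.where(V @ rng.normal(size=n) >= 0, 1.0, -1.0)
        cut_value = 0.25 * np.sum(W * (1 - np.outer(signs, signs)))
        return signs, cut_value

    # 4-cycle: the optimal cut has value 4.
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    print(gw_max_cut(W))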

Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms

by Thomas G. Dietterich, 1998
"... This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I err ..."
Abstract - Cited by 723 (8 self)
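
One of the five tests reviewed is McNemar's test, which only needs each classifier to be trained once. A small sketch of it, assuming scipy is installed; the function and variable names here are illustrative, not from the article.

    import numpy as np
    from scipy.stats import chi2

    def mcnemar_test(y_true, pred_a, pred_b):
        """McNemar's test for comparing two classifiers on the same test set.

        Only the discordant cases matter: n01 counts examples A got right and B got
        wrong, n10 the reverse. Under the null hypothesis of equal error rates, the
        continuity-corrected statistic (|n01 - n10| - 1)^2 / (n01 + n10) is
        approximately chi-squared with 1 degree of freedom.
        """
        y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
        a_ok, b_ok = pred_a == y_true, pred_b == y_true
        n01 = int(np.sum(a_ok & ~b_ok))
        n10 = int(np.sum(~a_ok & b_ok))
        if n01 + n10 == 0:
            return 0.0, 1.0   # the classifiers never disagree on correctness
        stat = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
        return stat, chi2.sf(stat, df=1)

    # Toy usage: reject the null hypothesis at the 5% level when p < 0.05.
    stat, p = mcnemar_test([0, 1, 1, 0, 1], [0, 1, 1, 0, 0], [1, 0, 1, 0, 1])
    print(stat, p)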

The space complexity of approximating the frequency moments

by Noga Alon, Yossi Matias, Mario Szegedy - Journal of Computer and System Sciences, 1996
"... The frequency moments of a sequence containing mi elements of type i, for 1 ≤ i ≤ n, are the numbers Fk = �n i=1 mki. We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, ..."
Abstract - Cited by 845 (12 self)
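
For the second moment F_2 in particular, the paper's estimator keeps a single counter per copy, a random ±1-weighted sum of the item counts, and combines copies by a median of means. The toy version below substitutes a stored table of pseudorandom signs for the limited-independence hash functions used in the paper, so it illustrates the estimator but not its space bound.

    import random
    from statistics import median

    def estimate_f2(stream, n, groups=9, reps=30, seed=1):
        """One-pass median-of-means estimate of F_2 = sum_i m_i^2.

        Each basic estimator keeps one counter Z = sum_i m_i * s(i), where s(i)
        is a random sign in {-1, +1}; then E[Z^2] = F_2. Items are integers in
        range(n).
        """
        rng = random.Random(seed)
        signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(groups * reps)]
        z = [0] * (groups * reps)
        for item in stream:                      # a single pass over the data
            for j, s in enumerate(signs):
                z[j] += s[item]
        means = [sum(zj * zj for zj in z[g * reps:(g + 1) * reps]) / reps
                 for g in range(groups)]
        return median(means)

    stream = [0, 1, 1, 2, 2, 2]                  # m = (1, 2, 3), so F_2 = 1 + 4 + 9 = 14
    print(estimate_f2(stream, n=3))              # prints an estimate close to 14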

Data Streams: Algorithms and Applications

by S. Muthukrishnan, 2005
"... In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerg ..."
Abstract - Cited by 533 (22 self) - Add to MetaCart
emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic
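
As one concrete example of an algorithm in this model (a single pass, memory far below the input size), the sketch below implements the classic Misra-Gries frequent-items summary; it is a generic illustration of the streaming setting the survey covers, not code taken from the survey.

    def misra_gries(stream, k):
        """One-pass frequent-items summary that keeps at most k - 1 counters.

        After the pass, every item whose true frequency exceeds len(stream) / k
        is guaranteed to appear among the counters (its count may be too low).
        """
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # Decrement every counter; drop the ones that reach zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    stream = list("abracadabra")
    print(misra_gries(stream, k=3))   # 'a' (5 of the 11 characters) must survive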

Fast approximate nearest neighbors with automatic algorithm configuration

by Marius Muja, David G. Lowe - In VISAPP International Conference on Computer Vision Theory and Applications, 2009
"... nearest-neighbors search, randomized kd-trees, hierarchical k-means tree, clustering. For many computer vision problems, the most time consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems ..."
Abstract - Cited by 455 (2 self)
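
As a generic illustration of approximate nearest-neighbor search with a single kd-tree (scipy's cKDTree, not the FLANN system this paper describes), the eps parameter below trades accuracy for speed: the reported neighbor is within a factor (1 + eps) of the true nearest distance.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 32))      # 10,000 points in 32 dimensions
    queries = rng.normal(size=(5, 32))

    tree = cKDTree(data)

    # eps = 0 gives exact search; eps > 0 allows the traversal to prune more
    # aggressively and return an approximate neighbor.
    d_exact, i_exact = tree.query(queries, k=1, eps=0.0)
    d_apprx, i_apprx = tree.query(queries, k=1, eps=0.5)
    print(i_exact, i_apprx)

In genuinely high-dimensional data a single kd-tree degrades toward brute force, which is the regime where the randomized kd-trees and hierarchical k-means trees benchmarked in this paper are reported to help.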

Factor Graphs and the Sum-Product Algorithm

by Frank R. Kschischang, Brendan J. Frey, Hans-Andrea Loeliger - IEEE Transactions on Information Theory, 1998
"... A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple c ..."
Abstract - Cited by 1791 (69 self) - Add to MetaCart
computational rule, the sum-product algorithm operates in factor graphs to compute---either exactly or approximately---various marginal functions by distributed message-passing in the graph. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can
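
To make the "one simple computational rule" concrete, the sketch below runs sum-product on a tiny chain-structured factor graph with made-up binary factor tables and checks the computed marginal against brute-force summation.

    import numpy as np

    # Chain-structured factor graph x1 -- f12 -- x2 -- f23 -- x3 over binary
    # variables, with arbitrary positive factor tables.
    f12 = np.array([[4.0, 1.0],
                    [1.0, 3.0]])          # f12[x1, x2]
    f23 = np.array([[2.0, 1.0],
                    [1.0, 5.0]])          # f23[x2, x3]

    # Sum-product messages toward x2: a leaf variable sends the all-ones message,
    # and a factor sums its table against the incoming message over its other variable.
    msg_x1_to_f12 = np.ones(2)
    msg_f12_to_x2 = (f12 * msg_x1_to_f12[:, None]).sum(axis=0)
    msg_x3_to_f23 = np.ones(2)
    msg_f23_to_x2 = (f23 * msg_x3_to_f23[None, :]).sum(axis=1)

    # The marginal of x2 is the normalized product of all messages arriving at x2.
    belief_x2 = msg_f12_to_x2 * msg_f23_to_x2
    belief_x2 /= belief_x2.sum()

    # Brute-force check: marginalize the full joint over x1 and x3.
    joint = f12[:, :, None] * f23[None, :, :]      # joint[x1, x2, x3]
    exact_x2 = joint.sum(axis=(0, 2))
    exact_x2 /= exact_x2.sum()

    print(belief_x2, exact_x2)   # identical: sum-product is exact on a cycle-free graph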

Loopy belief propagation for approximate inference: An empirical study

by Kevin P. Murphy, Yair Weiss, Michael I. Jordan - Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Abstract - Cited by 676 (15 self) - Add to MetaCart
-limit performance of "Turbo Codes" -codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something spe cial about the error-correcting code context, or does loopy propagation work as an ap proximate inference scheme
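
The same update rule can simply be iterated when the network contains loops. The toy illustration below (not the paper's experimental setup) runs loopy belief propagation on a three-variable binary cycle with made-up potentials and compares the resulting beliefs against exact enumeration.

    import itertools
    import numpy as np

    # A 3-variable binary cycle with pairwise couplings and unary evidence.
    def coupling(strength):
        return np.array([[np.exp(strength), np.exp(-strength)],
                         [np.exp(-strength), np.exp(strength)]])

    psi = {(0, 1): coupling(0.5), (1, 2): coupling(0.5), (0, 2): coupling(0.5)}
    phi = {0: np.array([2.0, 1.0]), 1: np.array([1.0, 1.0]), 2: np.array([1.0, 3.0])}
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

    def pairwise(i, j):
        """Coupling table indexed as [x_i, x_j]."""
        return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

    # Loopy BP: iterate the sum-product message update until it settles.
    msgs = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}
    for _ in range(50):
        new = {}
        for (i, j) in msgs:
            incoming = phi[i].copy()
            for k in neighbors[i]:
                if k != j:
                    incoming *= msgs[(k, i)]
            m = pairwise(i, j).T @ incoming        # sum over x_i
            new[(i, j)] = m / m.sum()
        msgs = new

    beliefs = {}
    for i in neighbors:
        b = phi[i].copy()
        for k in neighbors[i]:
            b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()

    # Exact marginals by brute force over the 2**3 joint configurations.
    exact = np.zeros((3, 2))
    for x in itertools.product([0, 1], repeat=3):
        w = np.prod([phi[i][x[i]] for i in range(3)])
        w *= np.prod([pairwise(i, j)[x[i], x[j]] for (i, j) in psi])
        for i in range(3):
            exact[i, x[i]] += w
    exact /= exact.sum(axis=1, keepdims=True)

    print(beliefs[0], exact[0])   # close but generally not identical: the graph has a loop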

Fast and robust fixed-point algorithms for independent component analysis

by Aapo Hyvärinen - IEEE Transactions on Neural Networks, 1999
"... Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s informat ..."
Abstract - Cited by 884 (34 self)
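
A minimal sketch of the kind of fixed-point iteration the paper analyzes: one-unit FastICA with a tanh nonlinearity on whitened data, with deflation between units. The data, mixing matrix, and parameter values below are illustrative choices, not the paper's.

    import numpy as np

    def fastica(X, n_components, iters=200, seed=0):
        """One-unit fixed-point ICA with deflation; X has shape (n_signals, n_samples)."""
        rng = np.random.default_rng(seed)

        # Center and whiten so the independent directions can be sought as
        # orthonormal unit vectors.
        X = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))
        X = (E / np.sqrt(d)) @ E.T @ X

        W = np.zeros((n_components, X.shape[0]))
        for p in range(n_components):
            w = rng.normal(size=X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                wx = w @ X
                g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
                w_new = (X * g).mean(axis=1) - g_prime.mean() * w   # fixed-point step
                w_new -= W[:p].T @ (W[:p] @ w_new)                  # deflate against found rows
                w_new /= np.linalg.norm(w_new)
                converged = abs(abs(w_new @ w) - 1.0) < 1e-10
                w = w_new
                if converged:
                    break
            W[p] = w
        return W @ X, W      # estimated sources (up to order and sign) and unmixing matrix

    # Toy blind source separation: mix a sine wave with a square wave, then unmix.
    t = np.linspace(0, 8, 2000)
    S = np.vstack([np.sin(3 * t), np.sign(np.sin(2 * t))])
    A = np.array([[1.0, 0.6],
                  [0.5, 1.0]])
    sources, W = fastica(A @ S, n_components=2)
    print(sources.shape)     # (2, 2000)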

List-Decodable Codes

by n.n.
"... The field of coding theory is motivated by the problem of communicating reliably over noisy channels — where the data sent over the channel may come out corrupted on the other end, but we nevertheless want the receiver to be able to correct the errors and recover the original message. There is a vas ..."
Abstract
In particular, a generalization of the notion of an error-correcting code yields a framework that we will use to unify all of the main pseudorandom objects covered in this survey (averaging samplers, expander graphs, randomness extractors, list-decodable codes, pseudorandom generators).