Results 1 – 10 of 6,019
Class-Based n-gram Models of Natural Language
 Computational Linguistics
, 1992
"... We address the problem of predicting a word from previous words in a sample of text. In particular we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find ..."
Cited by 986 (5 self)
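The class-based bigram model this snippet describes factors a word's probability through its class: P(w | w_prev) = P(w | c(w)) · P(c(w) | c(w_prev)). A minimal sketch under an assumed, hand-fixed word-to-class map (the paper induces the classes automatically from co-occurrence statistics; the toy corpus and class labels here are illustrative only):

```python
from collections import Counter

# Hypothetical toy corpus and a fixed word-to-class assignment,
# standing in for classes learned from co-occurrence counts.
corpus = "the cat sat on the mat the dog sat on the rug".split()
word_class = {"the": "DET", "cat": "N", "dog": "N", "mat": "N",
              "rug": "N", "sat": "V", "on": "P"}

# Count class bigrams, class occurrences, and word occurrences.
class_seq = [word_class[w] for w in corpus]
class_bigrams = Counter(zip(class_seq, class_seq[1:]))
class_counts = Counter(class_seq)
word_counts = Counter(corpus)

def p_next(word, prev_word):
    """Class-based bigram: P(w | w_prev) = P(w | c(w)) * P(c(w) | c(w_prev))."""
    c, c_prev = word_class[word], word_class[prev_word]
    emit = word_counts[word] / class_counts[c]          # P(w | c(w))
    trans = class_bigrams[(c_prev, c)] / class_counts[c_prev]  # P(c | c_prev)
    return emit * trans

print(p_next("cat", "the"))  # 0.25
```

Because parameters are shared across all words in a class, the model needs far fewer counts than a word-level bigram model, which is the point of the class-based formulation.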
Proof verification and hardness of approximation problems
 IN PROC. 33RD ANN. IEEE SYMP. ON FOUND. OF COMP. SCI
, 1992
"... We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probabilit ..."
Cited by 797 (39 self)
in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82] and hard problems for this class include
A first-order primal-dual algorithm for convex problems with applications to imaging
, 2010
"... In this paper we study a first-order primal-dual algorithm for convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions, which is optimal for the complete class of non-smooth problems we are considering in this paper ..."
Cited by 436 (20 self)
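The saddle-point structure referred to here is the generic form min_x max_y ⟨Kx, y⟩ + G(x) − F*(y). A sketch of the kind of primal-dual iteration such first-order schemes use (notation assumed for illustration, not quoted from the paper):

```latex
% Generic first-order primal-dual iteration for
%   \min_x \max_y \; \langle Kx, y\rangle + G(x) - F^*(y)
% with step sizes \tau, \sigma > 0 chosen so that \tau\sigma\|K\|^2 \le 1.
\begin{aligned}
y^{n+1} &= \operatorname{prox}_{\sigma F^*}\!\left(y^n + \sigma K \bar{x}^n\right) \\
x^{n+1} &= \operatorname{prox}_{\tau G}\!\left(x^n - \tau K^* y^{n+1}\right) \\
\bar{x}^{n+1} &= x^{n+1} + \theta\left(x^{n+1} - x^n\right), \qquad \theta \in [0, 1]
\end{aligned}
```

Each step only applies K, its adjoint, and two proximal maps, which is what makes schemes of this type attractive for large imaging problems where G and F* have cheap proximal operators.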
A general approximation technique for constrained forest problems
 SIAM J. COMPUT.
, 1995
"... We present a general approximation technique for a large class of graph problems. Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles, or paths satisfying certain requirements. In particular, many basic combinatorial optimization proble ..."
Cited by 414 (21 self)
A scaled conjugate gradient algorithm for fast supervised learning
 NEURAL NETWORKS
, 1993
"... A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate is introduced. The algorithm is based upon a class of optimization techniques well known in numerical analysis as the Conjugate Gradient Methods. SCG uses second order information from the neural netwo ..."
Cited by 451 (0 self)
Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I
, 1960
"... this paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address, John McCarthy, Computer Science Depa ..."
Cited by 457 (3 self)
N-gram-based text categorization
 In Proc. of SDAIR94, 3rd Annual Symposium on Document Analysis and Information Retrieval
, 1994
"... Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in ..."
Cited by 445 (0 self)
in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system
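The approach described — ranked character n-gram frequency profiles compared by a rank-order distance — can be sketched as follows. The profile size, the out-of-place penalty, and the toy category texts are illustrative assumptions, not values from the paper:

```python
from collections import Counter

def ngram_profile(text, n_max=3, top_k=300):
    """Ranked list of the most frequent character n-grams (n = 1..n_max)."""
    counts = Counter()
    padded = f" {text.lower()} "
    for n in range(1, n_max + 1):
        for i in range(len(padded) - n + 1):
            counts[padded[i:i + n]] += 1
    return [g for g, _ in counts.most_common(top_k)]

def out_of_place(doc_profile, cat_profile):
    """Sum of rank differences; n-grams absent from the category profile
    receive a fixed maximum penalty."""
    ranks = {g: r for r, g in enumerate(cat_profile)}
    max_penalty = len(cat_profile)
    return sum(abs(r - ranks[g]) if g in ranks else max_penalty
               for r, g in enumerate(doc_profile))

def categorize(doc, category_texts):
    """Assign doc to the category whose profile is closest."""
    profiles = {c: ngram_profile(t) for c, t in category_texts.items()}
    doc_prof = ngram_profile(doc)
    return min(profiles, key=lambda c: out_of_place(doc_prof, profiles[c]))
```

Because a single character error perturbs only the few n-grams that overlap it, the ranked profile degrades gracefully, which is the source of the error tolerance the snippet mentions.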
New tight frames of curvelets and optimal representations of objects with piecewise C² singularities
 COMM. ON PURE AND APPL. MATH
, 2002
"... This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along C² edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needleshap ..."
Cited by 428 (21 self)
the wavelet decomposition of the object. For instance, the n-term partial reconstruction f_n^C obtained by selecting the n largest terms in the curvelet series obeys ‖f − f_n^C‖²_{L²} ≤ C · n⁻² · (log n)³, n → ∞. This rate of convergence holds uniformly over a class of functions which are C² except
Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms
 J. ACM
, 1999
"... In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by ..."
Cited by 357 (6 self)
On the Use of Variable-Size Fuzzy Clustering for Classification
, 2006
"... Hard c-means can be used for building classifiers in supervised machine learning. For example, in an n-class problem, c clusters are built for each of the classes. This results in n · c centroids. Then, new examples can be classified according to the nearest centroid. In this work we consider th ..."
Cited by 3 (1 self)
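The scheme this snippet outlines — c clusters fit per class with hard c-means, then nearest-centroid classification over the resulting n · c centroids — can be sketched with plain Lloyd's k-means. All function names, parameters, and the data used below are illustrative assumptions:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Coordinate-wise mean of a non-empty list of points."""
    d = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(d))

def kmeans(points, c, iters=20, seed=0):
    """Plain Lloyd's algorithm (hard c-means); returns c centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, c)
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            j = min(range(c), key=lambda k: dist2(p, centroids[k]))
            clusters[j].append(p)
        # Empty clusters keep their previous centroid.
        centroids = [mean(cl) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids

def fit(train, c=2):
    """train: {label: [points]} -> list of (centroid, label), n*c entries."""
    return [(cen, lab) for lab, pts in train.items()
            for cen in kmeans(pts, c)]

def classify(model, p):
    """Assign p the label of its nearest centroid."""
    return min(model, key=lambda m: dist2(p, m[0]))[1]
```

Using several centroids per class lets this nearest-centroid rule capture classes that are not a single compact blob, which a one-centroid-per-class rule would misclassify.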