Results 1 – 10 of 945,143
Increasing the Output Length of Zero-Error Dispersers
, 2008
"... Let C be a class of probability distributions over a finite set Ω. A function D : Ω → {0, 1}^m is a disperser for C with entropy threshold k and error ɛ if for any distribution X in C such that X gives positive probability to at least 2^k elements we have that the distribution D(X) gives positive pr ..."
Cited by 8 (6 self)
in explicitly constructing zero-error dispersers (that is, dispersers with error ɛ = 0). For several interesting classes of distributions there are explicit constructions in the literature of zero-error dispersers with “small” output length m, and we give improved constructions that achieve “large” output
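The disperser property quoted above can be restated in display form. This is a sketch reconstructed from the snippet's definitions (the support-size reading of "gives positive probability to at least 2^k elements" is an assumption about the truncated text):

```latex
% D : \Omega \to \{0,1\}^m is a disperser for the class C with
% entropy threshold k and error \varepsilon if
\[
\forall X \in \mathcal{C}:\quad
|\mathrm{supp}(X)| \ge 2^{k}
\;\Longrightarrow\;
|\mathrm{supp}(D(X))| \ge (1-\varepsilon)\,2^{m},
\]
% so a zero-error disperser (\varepsilon = 0) must output every
% string in \{0,1\}^m with positive probability.
```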
Large margin methods for structured and interdependent output variables
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2005
"... Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary ..."
Cited by 612 (12 self)
Solving multiclass learning problems via error-correcting output codes
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1995
"... Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass l ..."
Cited by 730 (8 self)
output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range
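The error-correcting output code idea described in this abstract can be sketched briefly: each class is assigned a binary codeword, one binary classifier is trained per bit position, and a prediction is decoded to the class whose codeword is nearest in Hamming distance. The codewords and class names below are illustrative, not taken from the paper.

```python
# Sketch of error-correcting output code (ECOC) decoding. The codeword
# matrix here is hypothetical; in practice codewords are chosen to
# maximize pairwise Hamming distance between classes.

CODEWORDS = {          # hypothetical 7-bit codewords for 4 classes
    "A": (0, 0, 0, 0, 0, 0, 0),
    "B": (0, 1, 1, 1, 1, 0, 0),
    "C": (1, 0, 1, 1, 0, 1, 0),
    "D": (1, 1, 0, 1, 0, 0, 1),
}

def hamming(u, v):
    """Number of bit positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(predicted_bits):
    """Map the bit vector produced by the binary classifiers to a class."""
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], predicted_bits))

# Even with one flipped bit, decoding recovers the intended class:
print(decode((0, 1, 1, 1, 1, 0, 1)))  # → B
```

The error-correcting behavior is what distinguishes this from a plain one-bit-per-class encoding: a single misbehaving binary classifier need not change the decoded class.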
Insiders and Outsiders: The Choice between Informed and Arm's-Length Debt
, 1991
"... While the benefits of bank financing are relatively well understood, the costs are not. This paper argues that while informed banks make flexible financial decisions which prevent a firm's projects from going awry, the cost of this credit is that banks have bargaining power over the firm's ..."
Cited by 846 (18 self)
While the benefits of bank financing are relatively well understood, the costs are not. This paper argues that while informed banks make flexible financial decisions which prevent a firm's projects from going awry, the cost of this credit is that banks have bargaining power over the firm's profits, once projects have begun. The firm's portfolio choice of borrowing source and the choice of priority for its debt claims attempt to optimally circumscribe the powers of banks.
Okapi at TREC-3
, 1996
"... this document length correction factor is "global": it is added at the end, after the weights for the individual terms have been summed, and is independent of which terms match. ..."
Cited by 593 (5 self)
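The "global" correction described in this snippet can be sketched as follows: per-term weights are summed first, then a single length-dependent term is added once, independent of which query terms matched. The correction form k2 · nq · (avdl − dl) / (avdl + dl) follows the Okapi style, but the constant k2 and the weights below are illustrative values, not ones from the paper.

```python
# Sketch of a global document-length correction: the sum over matched
# term weights depends on which terms match; the correction does not.

def score(term_weights, doc_len, avg_doc_len, num_query_terms, k2=1.0):
    """Sum matched term weights, then add the global length correction."""
    base = sum(term_weights)                      # depends on matched terms
    correction = k2 * num_query_terms * (avg_doc_len - doc_len) / (
        avg_doc_len + doc_len)                    # depends only on lengths
    return base + correction

# A shorter-than-average document receives a positive correction:
s = score([2.0, 1.5], doc_len=50, avg_doc_len=100, num_query_terms=2)
print(round(s, 3))  # → 4.167  (3.5 + 2 * 50/150)
```

Because the correction is applied after summation, it rewards or penalizes a document's length uniformly across queries of the same size, which is exactly what "independent of which terms match" means here.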
Raptor codes
 IEEE Transactions on Information Theory
, 2006
"... LT-Codes are a new class of codes introduced in [1] for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper we introduce Raptor Codes, an extension of LT-Codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: ..."
Cited by 567 (6 self)
: for a given integer k, and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability. Each output symbol is generated using O(log(1/ε)) operations
A Trainable Document Summarizer
, 1995
"... To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focusses on document extracts, a particular kind of computed document summary. ..."
Cited by 525 (2 self)
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
"... In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Cited by 569 (9 self)
Gaussian processes for machine learning
 in: Adaptive Computation and Machine Learning
, 2006
"... Abstract. We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperpar ..."
Cited by 631 (2 self)
the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Process and end with conclusions and a look at the current trends in GP work. Supervised learning in the form of regression (for continuous outputs) and classification (for discrete outputs) is an important constituent
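The regression setting this abstract introduces can be sketched in a few lines: a zero-mean GP prior with an RBF kernel, and the standard posterior mean and variance at test inputs given noisy training observations. The kernel parameters, noise level, and data below are illustrative assumptions, not values from the book.

```python
# Minimal sketch of Gaussian Process regression: zero-mean prior,
# squared-exponential (RBF) kernel, standard posterior equations.
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """RBF covariance between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance at x_test under a zero-mean GP prior."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_ss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)        # K^{-1} y
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v)
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([0.0, 2.0]))
# The posterior mean near a training input tracks its target, and the
# predictive variance grows as the test point moves away from the data.
```

Learning the hyperparameters (lengthscale, signal variance, noise) by maximizing the marginal likelihood, as the abstract mentions, would sit on top of this: the same K matrix appears in the marginal-likelihood expression.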
A Maximum-Entropy-Inspired Parser
, 1999
"... We present a new parser for parsing down to Penn treebank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less when trained and tested on the previously established [5,9,10,15,17] "standard" se ..."
Cited by 963 (19 self)