Results 1 - 7 of 7
A survey of smoothing techniques for ME models
IEEE Transactions on Speech and Audio Processing, 2000
Cited by 86 (1 self)
Abstract—In certain contexts, maximum entropy (ME) modeling can be viewed as maximum likelihood (ML) training for exponential models, and like other ML methods is prone to overfitting of training data. Several smoothing methods for ME models have been proposed to address this problem, but previous results do not make it clear how these smoothing methods compare with smoothing methods for other types of related models. In this work, we survey previous work in ME smoothing and compare the performance of several of these algorithms with conventional techniques for smoothing n-gram language models. Because of the mature body of research in n-gram model smoothing and the close connection between ME and conventional n-gram models, this domain is well-suited to gauge the performance of ME smoothing methods. Over a large number of data sets, we find that fuzzy ME smoothing performs as well as or better than all other algorithms under consideration. We contrast this method with previous n-gram smoothing methods to explain its superior performance. Index Terms—Exponential models, language modeling, maximum entropy, minimum divergence, n-gram models, smoothing.
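The Gaussian-prior family of ME smoothing methods surveyed in this paper (of which the "fuzzy ME" method is one variant) can be sketched on a toy exponential model. Everything below is an illustrative assumption, not the paper's setup: the counts, the prior variance σ², and the use of a generic quasi-Newton optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy event counts: the third event is unseen in training data.
counts = np.array([3.0, 1.0, 0.0])
n = counts.sum()

def neg_penalized_ll(lam, sigma2):
    # Exponential model p(y) = exp(lam_y) / Z, fit by penalized ML:
    # negative log-likelihood plus a Gaussian prior term on the weights.
    logZ = np.log(np.exp(lam).sum())
    ll = counts @ lam - n * logZ
    return -ll + (lam @ lam) / (2.0 * sigma2)

def fit(sigma2):
    res = minimize(neg_penalized_ll, np.zeros(3), args=(sigma2,),
                   method="L-BFGS-B")
    p = np.exp(res.x)
    return p / p.sum()

p_mle = fit(1e8)     # near-flat prior: essentially plain maximum likelihood
p_smooth = fit(1.0)  # Gaussian prior with sigma^2 = 1 smooths the estimate

print(p_mle)     # unseen third event is starved toward zero probability
print(p_smooth)  # unseen third event keeps substantial probability mass
```

With the near-flat prior the model overfits exactly as the abstract describes (the unseen event's weight diverges toward minus infinity), while the tight prior keeps its probability bounded away from zero, which is the smoothing effect being compared against n-gram techniques.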
Maximum entropy distribution estimation with generalized regularization
Proc. Annual Conf. Computational Learning Theory, 2006
Cited by 26 (1 self)
Abstract. We present a unified and complete account of maximum entropy distribution estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we can easily derive performance guarantees for many known regularization types, including ℓ1, ℓ2, ℓ2², and ℓ1 + ℓ2² style regularization. Furthermore, our general approach enables us to use information about the structure of the feature space or about sample selection bias to derive entirely new regularization functions with superior guarantees. We propose an algorithm solving a large and general subclass of generalized maxent problems, including all discussed in the paper, and prove its convergence. Our approach generalizes techniques based on information geometry and Bregman divergences as well as those based more directly on compactness.
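As a generic illustration of the ℓ1 versus ℓ2² regularization styles this paper analyzes (not its algorithm or its guarantees), a proximal-gradient sketch on a toy least-squares problem shows the qualitative difference: an ℓ1 penalty drives irrelevant weights exactly to zero, while an ℓ2² penalty only shrinks them. The data, penalty strength, and step size below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny regression: y depends only on the first feature; the second is noise.
X = rng.normal(size=(50, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

def fit(penalty, alpha=0.5, steps=2000, lr=0.01):
    w = np.zeros(2)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        if penalty == "l1":
            # proximal step for alpha*|w|: soft-thresholding, exact zeros
            w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)
        else:
            # proximal step for (alpha/2)*w^2: shrinks toward zero, never exactly zero
            w = w / (1.0 + lr * alpha)
    return w

w_l1 = fit("l1")
w_l2 = fit("l2")
print(w_l1)  # noise-feature weight is exactly 0
print(w_l2)  # noise-feature weight is small but nonzero
```

The same proximal machinery extends to the maxent/log-loss objectives the paper treats; least squares is used here only to keep the sketch short.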
Evaluation and Extension of Maximum Entropy Models with Inequality Constraints, 2003
Cited by 25 (0 self)
A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations.
Research Fellow Award
al Examples" was the winner of the 1995 ASME Adaptive Structures "Best Paper Award in Structural Dynamics and Control." This is one of two annual awards issued in the field of Smart Structures and Materials by the ASME Committee on "Adaptive Structures and Material Systems." The paper first appeared as an ICASE report in April 1992. It addresses the computational and theoretical foundation for the actual experiments carried out at NASA Langley in 1995. The calculation predicted a 20 dB reduction in vibration and noise, versus the 18 dB reduction achieved in the experiment. Inside this Issue: Manuel Salas (page 2); Hans Mark (page 3); M.Y. Hussaini (page 4); Control volume mixed finite-element (page 6); LES of periodic shear flow
Evaluation and Extension of Maximum Entropy Models with Inequality Constraints
A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations. However, the equality constraint is inappropriate for sparse and therefore unreliable features. This study explores an ME model with box-type inequality constraints, where the equality can be violated to reflect this unreliability. We evaluate the inequality ME model using text categorization datasets. We also propose an extension of the inequality ME model, which results in a natural integration with the Gaussian MAP estimation. Experimental results demonstrate the advantage of the inequality models and the proposed extension.
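The box-type relaxation described above can be written directly as a small convex program: maximize entropy subject to the feature expectation lying within δ of its empirical value rather than matching it exactly. A minimal numerical sketch, where the outcomes, feature, target, and box width δ are all invented for illustration and a generic SLSQP solver stands in for the paper's estimation procedure:

```python
import numpy as np
from scipy.optimize import minimize

# Four outcomes and one binary feature f; the empirical expectation comes
# from sparse data and is therefore treated as unreliable.
f = np.array([1.0, 1.0, 0.0, 0.0])
target = 0.9   # empirical feature expectation (hypothetical)
delta = 0.2    # box width: require |E_p[f] - target| <= delta

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # distribution
    {"type": "ineq", "fun": lambda p: delta - (p @ f - target)},  # upper side
    {"type": "ineq", "fun": lambda p: delta + (p @ f - target)},  # lower side
]
res = minimize(neg_entropy, np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4, constraints=cons)
p = res.x
print(p, p @ f)  # entropy pulls E_p[f] to the box edge nearest uniform (0.7)
```

Setting delta = 0 recovers the usual equality-constrained ME model; a positive δ lets the solution stay closer to uniform, which is exactly the violation-to-reflect-unreliability idea in the abstract.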
Departamento de Física, 2001
We describe the Kerr black hole in the ingoing and outgoing Kerr-Schild horizon-penetrating coordinates. Starting from the null vector naturally defined in these coordinates, we construct the null tetrad for each case, as well as the corresponding geometrical quantities, allowing us to explicitly derive the field equations for the Ψ0^(1) and Ψ4^(1) perturbed scalar projections of the Weyl tensor, including arbitrary source terms. This perturbative description in horizon-penetrating coordinates, including arbitrary sources, is desirable in several lines of research on black holes, and contributes to the implementation of a formalism aimed at studying the evolution of the spacetime in the region where two black holes are close.
Building Maximum Entropy . . .
In recent years, text classification has become one of the key techniques for organizing information. Since hand-coding text classifiers is impractical and hand-labeling text is time- and labor-consuming, it is preferable to learn classifiers from a small number of labeled examples and a large amount of unlabeled data. In many cases, such as online information retrieval or database applications, such unlabeled data are easily and abundantly available. Although many learning algorithms of this kind have been designed, most of them rely on certain assumptions that are dependent on specific datasets. Consequently, this lack of generality makes these algorithms unstable across different datasets. We therefore favor an algorithm with as little dependence on such assumptions, or with assumptions as weak as possible. Maximum entropy (MaxEnt) models offer a generic framework meeting this requirement. Built upon a set of features, which makes the model equivalent to an undirected graphical model, MaxEnt provides a natural leverage for feature selection. Most importantly, the only assumption made by MaxEnt is that the average feature values on labeled data give a