Results 1 - 9 of 9
A survey of smoothing techniques for ME models
IEEE Transactions on Speech and Audio Processing, 2000
Maximum entropy distribution estimation with generalized regularization
Proc. Annual Conf. Computational Learning Theory, 2006
Abstract

Cited by 29 (1 self)
Abstract. We present a unified and complete account of maximum entropy distribution estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we can easily derive performance guarantees for many known regularization types, including ℓ1, ℓ2, ℓ2² and ℓ1 + ℓ2² style regularization. Furthermore, our general approach enables us to use information about the structure of the feature space or about sample selection bias to derive entirely new regularization functions with superior guarantees. We propose an algorithm solving a large and general subclass of generalized maxent problems, including all discussed in the paper, and prove its convergence. Our approach generalizes techniques based on information geometry and Bregman divergences as well as those based more directly on compactness.
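The ℓ1 and ℓ2² penalties named in the abstract can be made concrete on a small conditional maxent (multinomial logistic) model. The sketch below is illustrative only: the toy data, step size, and the plain proximal-gradient solver are assumptions of this example, not the paper's algorithm, which handles general convex potentials and comes with its own convergence proof.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_maxent(X, y, n_classes, penalty="l1", lam=0.05, lr=0.5, steps=1000):
    """Minimize NLL(W) + lam*||W||_1 ("l1") or NLL(W) + (lam/2)*||W||_2^2 ("l2")."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(steps):
        grad = X.T @ (softmax(X @ W) - Y) / n      # gradient of the mean NLL
        if penalty == "l2":
            W = W - lr * (grad + lam * W)          # l2^2 is smooth: plain gradient step
        else:
            W = W - lr * grad                      # l1 is non-smooth: proximal step
            W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # soft-threshold
    return W

# Toy data (an assumption of this sketch): feature 0 marks class 0, feature 1 marks class 1.
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
y = np.array([0, 0, 1, 1])
W1 = fit_maxent(X, y, 2, penalty="l1")
W2 = fit_maxent(X, y, 2, penalty="l2")
```

The practical difference is the one the paper's guarantees formalize: the ℓ1 proximal step can drive small weights to exactly zero, while the ℓ2² step only shrinks them toward zero.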
Evaluation and Extension of Maximum Entropy Models with Inequality Constraints
2003
Abstract

Cited by 25 (0 self)
A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations.
Building Maximum Entropy . . .
Abstract
Over recent years, text classification has become one of the key techniques for organizing information. Since hand-coding text classifiers is impractical and hand-labeling text is time- and labor-consuming, it is preferable to learn classifiers from a small amount of labeled examples and a large amount of unlabeled data. In many cases, such as online information retrieval or database applications, such unlabeled data are easily and abundantly available. Although many learning algorithms of this kind have been designed, most of them rely on certain assumptions that depend on specific datasets. Consequently, the lack of generality makes these algorithms unstable across different datasets. Therefore, we favor an algorithm that depends on as few, and as weak, assumptions as possible. Maximum entropy models (MaxEnt) offer a generic framework meeting this requirement. Built upon a set of features, in a form equivalent to undirected graphical models, they provide natural leverage for feature selection. Most importantly, the only assumption made by MaxEnt is that the average feature values on labeled data give a ...
Departamento de Física,
2001
Abstract
We describe the Kerr black hole in the ingoing and outgoing Kerr-Schild horizon-penetrating coordinates. Starting from the null vector naturally defined in these coordinates, we construct the null tetrad for each case, as well as the corresponding geometrical quantities, allowing us to explicitly derive the field equations for the Ψ0^(1) and Ψ4^(1) perturbed scalar projections of the Weyl tensor, including arbitrary source terms. This perturbative description in horizon-penetrating coordinates, including arbitrary sources, is desirable in several lines of research on black holes, and contributes to the implementation of a formalism aimed at studying the evolution of the spacetime in the region where two black holes are close.
Evaluation and Extension of Maximum Entropy Models with Inequality Constraints
Abstract
A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations. However, the equality constraint is inappropriate for sparse and therefore unreliable features. This study explores an ME model with box-type inequality constraints, where the equality can be violated to reflect this unreliability. We evaluate the inequality ME model using text categorization datasets. We also propose an extension of the inequality ME model, which results in a natural integration with the Gaussian MAP estimation. Experimental results demonstrate the advantage of the inequality models and the proposed extension.
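One way to picture the box-type constraint |E_model[f] - E_data[f]| ≤ B is through its commonly cited dual form, an ℓ1-style penalized likelihood in which the penalty weight plays the role of the box width. The sketch below is an illustration under that reading, not the authors' estimator; the toy data, width, and soft-thresholding solver are assumptions of this example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def inequality_maxent(X, y, n_classes, width=0.05, lr=0.5, steps=2000):
    """Soft-thresholded fit whose optimality conditions keep every model
    feature expectation within `width` of its empirical value (the box)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        grad = X.T @ (softmax(X @ W) - Y) / n     # = E_model[f] - E_data[f]
        W = W - lr * grad
        W = np.sign(W) * np.maximum(np.abs(W) - lr * width, 0.0)
    return W

# Toy stand-in for sparse text features (an assumption of this sketch).
X = np.array([[1.0, 0.0], [1.0, 0.2], [0.0, 1.0], [0.2, 1.0]])
y = np.array([0, 0, 1, 1])
W = inequality_maxent(X, y, 2)
# Expectation gap per (feature, class); at convergence it sits inside the box.
gap = X.T @ (softmax(X @ W) - np.eye(2)[y]) / len(y)
```

Unlike an equality-constrained fit, a sparse or unreliable feature whose expectation gap never needs to leave the box ends up with a weight of exactly zero, which is the behavior the abstract motivates.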
unknown title
Abstract
Maximum entropy models with inequality constraints: A case study on text categorization