Results 11-20 of 35
AUEB at TAC 2008
 In TAC 2008 Workshop
, 2008
Abstract

Cited by 6 (3 self)
This paper describes AUEB’s participation in TAC 2008. Specifically, we participated in the summarization and textual entailment recognition tracks. For the former we trained a Support Vector Regression model that is used to rank the summary’s candidate sentences; and for the latter we used a Maximum Entropy classifier along with string similarity measures applied to several abstractions of the original texts.
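The ranking step described above can be sketched with a toy linear scorer standing in for the trained Support Vector Regression model; the features, weights, and sentences below are purely hypothetical illustrations, not the system's actual feature set:

```python
# Toy sketch of scoring and ranking summary candidate sentences with a
# linear model, as a stand-in for a trained Support Vector Regression
# ranker. The features and weights are hypothetical.

def features(sentence, query_terms):
    """Two simple features: query-term overlap and normalized length."""
    tokens = sentence.lower().split()
    overlap = len(set(tokens) & set(query_terms)) / max(len(query_terms), 1)
    length = min(len(tokens) / 25.0, 1.0)   # prefer medium-length sentences
    return [overlap, length]

def score(sentence, query_terms, weights=(0.8, 0.2)):
    """Dot product of features with (hypothetical) learned weights."""
    return sum(w * f for w, f in zip(weights, features(sentence, query_terms)))

def rank(candidates, query_terms):
    """Return candidate sentences sorted by descending score."""
    return sorted(candidates, key=lambda s: -score(s, query_terms))

candidates = [
    "The cat sat on the mat.",
    "Maximum entropy models rank summary sentences by relevance.",
    "Entropy measures uncertainty.",
]
ranking = rank(candidates, ["entropy", "summary", "sentences"])
```

In the real system the weights would come from SVR training rather than being fixed by hand; the sketch only shows the rank-by-regression-score step.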
A Logically Sound Method for Uncertain Reasoning With Quantified Conditionals
 IN PROCEEDINGS ECSQARU / FAPR97, LNAI 1244
, 1997
Abstract

Cited by 4 (1 self)
Conditionals play a central part in knowledge representation and reasoning. Describing certain relationships between antecedents and consequences by "if-then" sentences, their range of expressiveness includes commonsense knowledge as well as scientific statements. In this paper, we present the principles of maximum entropy resp. minimum cross-entropy (ME-principles) as a logically sound and practicable method for representing and reasoning with quantified conditionals. First the meaning of these principles is made clear by sketching a characterization from a completely conditional-logical point of view. Then we apply the techniques presented to derive ME-deduction schemes and illustrate them by examples in the second part of this paper.
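As a minimal numeric illustration of the ME idea (not the paper's ME-deduction schemes), one can compute the maximum-entropy distribution satisfying a single quantified conditional, here (b|a)[0.8] over four possible worlds. The exponential-family parametrization is standard; the bisection solver is this sketch's own choice:

```python
import math

# Maximum-entropy distribution over four worlds {ab, a~b, ~ab, ~a~b}
# satisfying the quantified conditional (b|a)[0.8], i.e. P(b|a) = 0.8.
# The ME solution has exponential form p(w) ~ exp(lam * f(w)), where the
# constraint P(ab) - 0.8 * P(a) = 0 gives feature values
# f(ab) = 0.2, f(a~b) = -0.8, and f = 0 on the ~a-worlds.

WORLDS = ["ab", "a~b", "~ab", "~a~b"]
F = {"ab": 0.2, "a~b": -0.8, "~ab": 0.0, "~a~b": 0.0}

def distribution(lam):
    weights = {w: math.exp(lam * F[w]) for w in WORLDS}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

def expectation(lam):
    p = distribution(lam)
    return sum(p[w] * F[w] for w in WORLDS)

# Bisection on lam: the expectation of f is strictly increasing in lam.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if expectation(mid) < 0:
        lo = mid
    else:
        hi = mid
p = distribution((lo + hi) / 2)
cond = p["ab"] / (p["ab"] + p["a~b"])   # P(b|a)
```

Note the characteristic ME behavior: the constraint fixes P(b|a) at 0.8 while the two worlds outside the conditional's antecedent stay equally probable.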
Inference for Multiplicative Models
Abstract

Cited by 4 (0 self)
The paper introduces a generalization of known probabilistic models such as log-linear and graphical models, called here multiplicative models. These models, which express probabilities via a product of parameters, are shown to capture multiple forms of contextual independence between variables, including decision graphs and noisy-OR functions. An inference algorithm for multiplicative models is provided and its correctness is proved. The complexity analysis of the inference algorithm uses a more refined parameter than the treewidth of the underlying graph, and shows that the computational cost does not exceed that of the variable elimination algorithm in graphical models. The paper ends with examples where using the new models and algorithm is computationally beneficial.
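The variable-elimination baseline the abstract compares against can be sketched on a toy model whose unnormalized joint is a product of two factors (all tables below are made-up numbers, and the sketch is generic elimination, not the paper's refined algorithm):

```python
from itertools import product

# Tiny variable-elimination sketch over a product of nonnegative factors.
# Factors are dicts mapping assignment tuples to values; variables binary.

def multiply(f1, vars1, f2, vars2):
    """Pointwise product of two factors over the union of their variables."""
    out_vars = list(dict.fromkeys(vars1 + vars2))
    out = {}
    for assign in product([0, 1], repeat=len(out_vars)):
        a = dict(zip(out_vars, assign))
        k1 = tuple(a[v] for v in vars1)
        k2 = tuple(a[v] for v in vars2)
        out[assign] = f1[k1] * f2[k2]
    return out, out_vars

def sum_out(f, vars_, var):
    """Marginalize one variable out of a factor."""
    i = vars_.index(var)
    out_vars = vars_[:i] + vars_[i + 1:]
    out = {}
    for key, val in f.items():
        k = key[:i] + key[i + 1:]
        out[k] = out.get(k, 0.0) + val
    return out, out_vars

# Model: P(A,B,C) proportional to phi1(A,B) * phi2(B,C).
phi1 = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 1.0}
phi2 = {(0, 0): 0.5, (0, 1): 1.5, (1, 0): 2.0, (1, 1): 1.0}

# Eliminate A, multiply the result into phi2, then eliminate B.
tau, tau_vars = sum_out(phi1, ["A", "B"], "A")        # factor over B
joint, joint_vars = multiply(tau, ["B"], phi2, ["B", "C"])
m, m_vars = sum_out(joint, joint_vars, "B")           # unnormalized P(C)
z = sum(m.values())
p_c = {k: v / z for k, v in m.items()}
```

The elimination order (A, then B) keeps every intermediate factor small, which is the cost the paper's refined parameter bounds more tightly than treewidth.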
WHAT DOES A RANDOM CONTINGENCY TABLE LOOK LIKE?
, 2008
Abstract

Cited by 3 (1 self)
Abstract. Let R = (r_1, ..., r_m) and C = (c_1, ..., c_n) be positive integer vectors such that r_1 + ... + r_m = c_1 + ... + c_n. We consider the set Σ(R, C) of nonnegative m × n integer matrices (contingency tables) with row sums R and column sums C as a finite probability space with the uniform measure. We prove that a random table D ∈ Σ(R, C) is close with high probability to a particular matrix (“typical table”) Z defined as follows. We let g(x) = (x + 1) ln(x + 1) − x ln x for x ≥ 0 and let g(X) = ∑_{ij} g(x_ij) for a nonnegative matrix X = (x_ij). Then g(X) is strictly concave and attains its maximum on the polytope of nonnegative m × n matrices X with row sums R and column sums C at a unique point, which we call the typical table Z.
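The typical table can be computed numerically from the first-order conditions of this maximization: since g'(x) = ln((x + 1)/x), the maximizer has entries x_ij = 1/(exp(λ_i + μ_j) − 1) for row and column multipliers λ_i, μ_j. The alternating-bisection solver below is this sketch's own device, not anything from the paper:

```python
import math

# Compute the "typical table" Z for margins R, C by solving the
# stationarity condition of maximizing g(X) = sum_ij g(x_ij), where
# g(x) = (x+1)ln(x+1) - x ln x. The maximizer satisfies
# x_ij = 1 / (exp(l_i + m_j) - 1), with multipliers found by
# alternately re-solving each side via bisection.

R = [3, 4]   # row sums
C = [2, 5]   # column sums (sum(R) == sum(C))

def entry(l, m):
    return 1.0 / (math.exp(l + m) - 1.0)

def solve_multiplier(target, others):
    """Find t with sum_j entry(t, others[j]) == target (decreasing in t)."""
    lo = max(-o for o in others) + 1e-12   # keep every exponent positive
    hi = lo + 1.0
    while sum(entry(hi, o) for o in others) > target:
        hi += 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(entry(mid, o) for o in others) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

l = [1.0] * len(R)
m = [1.0] * len(C)
for _ in range(500):
    l = [solve_multiplier(R[i], m) for i in range(len(R))]
    m = [solve_multiplier(C[j], l) for j in range(len(C))]

Z = [[entry(l[i], m[j]) for j in range(len(C))] for i in range(len(R))]
```

By strict concavity of g the resulting Z is the unique typical table for these margins; its entries are positive reals, not integers.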
ACME: An Associative Classifier based on Maximum Entropy Principle
 In 16th International Conference on Algorithmic Learning Theory (ALT)
Abstract

Cited by 2 (1 self)
Abstract. Recent studies in classification have proposed ways of exploiting the association rule mining paradigm. These studies have performed extensive experiments to show their techniques to be both efficient and accurate. However, existing studies in this paradigm either do not provide any theoretical justification behind their approaches or assume independence between some parameters. In this work, we propose a new classifier based on association rule mining. Our classifier rests on the maximum entropy principle for its statistical basis and does not assume any independence not inferred from the given dataset. We use the classical generalized iterative scaling algorithm (GIS) to create our classification model. We show that GIS fails in some cases when itemsets are used as features and provide modifications to rectify this problem. We show that this modified GIS runs much faster than the original GIS. We also describe techniques to make GIS tractable for large feature spaces – we provide a new technique to divide a feature space into independent clusters each of which can be handled separately. Our experimental results show that our classifier is generally more accurate than the existing classification methods.
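Since the abstract invokes the classical generalized iterative scaling algorithm, here is a minimal GIS sketch for a conditional maximum-entropy classifier on a toy dataset; the data and feature design are illustrative stand-ins, not the ACME setup. With exactly one active (word, label) feature per example, the GIS constant C is 1, so the update is simply λ_j += log(empirical_j / expected_j):

```python
import math
from collections import defaultdict

# Minimal Generalized Iterative Scaling (GIS) for a conditional
# maximum-entropy classifier on toy one-word "documents".

data = [("free", "spam"), ("free", "spam"), ("free", "ham"),
        ("meeting", "ham"), ("meeting", "ham"), ("meeting", "spam")]
labels = ["spam", "ham"]
n = len(data)

# Empirical expectations of the indicator features f_(word, label).
empirical = defaultdict(float)
for word, label in data:
    empirical[(word, label)] += 1.0 / n

lam = {feat: 0.0 for feat in empirical}

def predict(word):
    """Conditional distribution p(y | word) under the current weights."""
    scores = {y: math.exp(lam.get((word, y), 0.0)) for y in labels}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

for _ in range(200):
    # Model expectations of each feature under p(y | x).
    expected = defaultdict(float)
    for word, _ in data:
        p = predict(word)
        for y in labels:
            expected[(word, y)] += p[y] / n
    # GIS update with feature-sum constant C = 1.
    for feat in lam:
        lam[feat] += math.log(empirical[feat] / expected[feat])

p_free = predict("free")
```

At convergence the model expectations match the empirical ones, so p(spam | "free") equals its empirical value 2/3; with itemsets as features the active-feature count varies per example, which is where the modifications the abstract describes come in.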
AUEB at TAC 2009
 In TAC 2009 Workshop, National Institute of Standards and Technology
, 2009
Abstract

Cited by 2 (0 self)
This paper describes AUEB’s participation in TAC 2009. Specifically, we participated in the textual entailment recognition track for which we used string similarity measures applied to shallow abstractions of the input sentences, and a Maximum Entropy classifier to learn how to combine the resulting features. We also exploited WordNet to detect synonyms and a dependency parser to measure similarity in the grammatical structure of T and H.
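The kind of string similarity features described here can be sketched in a few lines; the two measures below (character edit distance and token-level Jaccard) are common choices for this task and only an assumption about the system's actual feature set, as are the example T and H:

```python
# Illustrative string similarity features over a text T and hypothesis H.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def jaccard(a_tokens, b_tokens):
    """Overlap of the two token sets."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 1.0

T = "the company acquired the startup in 2008"
H = "the startup was acquired"
features = {
    "edit_sim": 1 - levenshtein(T, H) / max(len(T), len(H)),
    "jaccard": jaccard(T.split(), H.split()),
}
```

In the described system, features like these (computed over several abstractions of T and H) would be fed to the Maximum Entropy classifier rather than thresholded directly.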
Abstract

Cited by 1 (1 self)
Abstract. Kullback-Leibler relative entropy, in cases involving distributions resulting from relative-entropy minimization, has a celebrated property reminiscent of squared Euclidean distance: it satisfies an analogue of Pythagoras’ theorem. Hence, this property is referred to as the Pythagoras’ theorem of relative-entropy minimization, or triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory like information geometry. An equivalent of Pythagoras’ theorem in the generalized nonextensive formalism is established in (Dukkipati at
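The triangle equality is easy to check numerically on a small example. Below, q is the I-projection of a reference distribution r onto the linear family {p : p[0] = 0.2} (for this constraint the projection just rescales the remaining mass of r); the three-point space and the particular r and p are made up for illustration:

```python
import math

# Numeric check of the Pythagorean (triangle-equality) property of
# relative-entropy minimization: for q the I-projection of r onto a
# linear family and any p in that family, D(p||r) = D(p||q) + D(q||r).

def kl(p, q):
    """Kullback-Leibler relative entropy D(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

r = [0.5, 0.3, 0.2]
a = 0.2
# I-projection of r onto {p : p[0] = a}: pin p[0] = a, rescale the rest.
rest = sum(r[1:])
q = [a] + [ri * (1 - a) / rest for ri in r[1:]]

p = [0.2, 0.5, 0.3]          # an arbitrary member of the family
lhs = kl(p, r)
rhs = kl(p, q) + kl(q, r)
```

For this family the identity holds exactly (up to floating-point error), since log(q_i/r_i) is constant on each piece of the constraint, which is precisely the geometric "right angle" the abstract alludes to.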
Spatial allocation of agricultural production using a crossentropy approach
 Environment and Production Technology Division Discussion Paper No. 126. Washington D.C.: International Food Policy Research Institute
, 2003
Abstract

Cited by 1 (1 self)
While production statistics are reported on a geopolitical (often national) basis, we often need to know, for example, the status of production or productivity within specific subregions, watersheds, or agroecological zones. Such reaggregations are typically made using expert judgments or simple area-weighting rules. We describe a new, entropy-based approach to making plausible estimates of the spatial distribution of crop production. Using this approach, tabular crop production statistics are blended judiciously with an array of other secondary data to assess the production of specific crops within individual ‘pixels’, typically 1 to 25 square kilometers in size. The information utilized includes crop production statistics, farming system characterization, satellite-based interpretation of land cover, biophysical crop suitability assessments, and population density. An application is presented in which Brazilian state-level production statistics are used to generate pixel-level crop production data for eight crops. To validate the spatial allocation we aggregated the pixel estimates to obtain synthetic estimates of municipio-level production in Brazil, and compared those estimates with actual municipio statistics. The approach produced extremely promising results. We then examined the robustness of these results compared to shortcut approaches to spatializing crop production statistics and showed that, while computationally intensive, the cross-entropy method does provide more reliable spatial allocations.
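The cross-entropy idea behind such an allocation can be sketched on a toy crop-by-pixel table: start from prior allocation shares (e.g. from suitability maps) and minimally adjust them, in the cross-entropy sense, until crop totals and per-pixel capacities are honored. For pure margin constraints this minimization is solved by iterative proportional fitting; every number below is made up for illustration:

```python
# Toy cross-entropy allocation of crop production totals across pixels,
# solved by iterative proportional fitting (the CE solution for margin
# constraints). Rows are crops, columns are pixels; all data hypothetical.

crop_totals = [100.0, 60.0]          # production total per crop (rows)
pixel_capacity = [90.0, 50.0, 20.0]  # allocatable amount per pixel (cols)

# Prior shares per crop, e.g. derived from land cover and suitability.
prior = [[0.5, 0.3, 0.2],
         [0.4, 0.4, 0.2]]

alloc = [[prior[i][j] * crop_totals[i] for j in range(3)] for i in range(2)]

for _ in range(500):
    # Scale each row to match its crop total.
    for i in range(2):
        s = sum(alloc[i])
        alloc[i] = [v * crop_totals[i] / s for v in alloc[i]]
    # Scale each column to match its pixel capacity.
    for j in range(3):
        s = sum(alloc[i][j] for i in range(2))
        for i in range(2):
            alloc[i][j] *= pixel_capacity[j] / s
```

The real approach blends many more constraints and data layers than two margins, but the fitted table is, in the same sense, the allocation closest to the prior that satisfies all totals.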
LINEAR MODELS ANALYSIS OF INCOMPLETE MULTIVARIATE CATEGORICAL DATA
, 1972
Abstract

Cited by 1 (0 self)
This research deals with experiments or surveys producing multivariate categorical data which is incomplete, in the sense that not all variables of interest are measured on every subject or element of the sample. For the most part, incompleteness is taken to arise by design, rather than by random failure of the measurement process. In these circumstances, one can often assume that counts derived from appropriate disjoint subsets of the data arise from independent multinomial distributions with linearly related parameters. Best asymptotically normal estimates of these parameters may be determined by maximizing the likelihood of the observations or by minimizing Pearson's X², Neyman's X²,