Inference Networks for Document Retrieval
1990
Cited by 237 (8 self)
Abstract: The use of inference networks to support document retrieval is introduced. A network-based retrieval model is described and compared to conventional probabilistic and Boolean models.
Evaluation of an Inference Network-Based Retrieval Model
ACM Transactions on Information Systems, 1991
Cited by 230 (20 self)
Abstract: The use of inference networks to support document retrieval is introduced. A network-based retrieval model is described and compared to conventional probabilistic and Boolean models. The performance of a retrieval system based on the inference network model is evaluated and compared to performance with conventional retrieval models, ...
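The inference network model scores a document by combining per-term beliefs at a query node. A minimal sketch of that idea (not Turtle and Croft's actual formulation: the saturating tf-based belief estimate and the two combination operators below are simplified stand-ins for the model's link-matrix computations):

```python
def term_belief(term, doc_tokens):
    """Toy belief that `term` represents the document: tf / (tf + 1),
    a saturating stand-in for the tf-idf beliefs used in practice."""
    tf = doc_tokens.count(term)
    return tf / (tf + 1)

def score_and(query, doc_tokens):
    """Query node as a conjunction: product of term beliefs."""
    p = 1.0
    for term in query:
        p *= term_belief(term, doc_tokens)
    return p

def score_sum(query, doc_tokens):
    """Query node as an (unweighted) sum: mean of term beliefs."""
    return sum(term_belief(t, doc_tokens) for t in query) / len(query)

docs = {
    "d1": "inference networks for document retrieval".split(),
    "d2": "boolean retrieval retrieval models".split(),
}
query = ["inference", "retrieval"]
ranking = sorted(docs, key=lambda d: score_sum(query, docs[d]), reverse=True)
```

With the sum operator, d1 (matching both query terms) outranks d2; with the conjunctive operator, d2 scores zero because it never mentions "inference".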
A Logic for Reasoning about Probabilities
Information and Computation, 1990
Cited by 215 (21 self)
Abstract: We consider a language for reasoning about probability which allows us to make statements such as “the probability of E₁ is less than 1/3” and “the probability of E₁ is at least twice the probability of E₂,” where E₁ and E₂ are arbitrary events. We consider the case where all events are measurable (i.e., represent measurable sets) and the more general case, which is also of interest in practice, where they may not be measurable. The measurable case is essentially a formalization of (the propositional fragment of) Nilsson's probabilistic logic. As we show elsewhere, the general (nonmeasurable) case corresponds precisely to replacing probability measures by Dempster-Shafer belief functions. In both cases, we provide a complete axiomatization and show that the problem of deciding satisfiability is NP-complete, no worse than that of propositional logic. As a tool for proving our complete axiomatizations, we give a complete axiomatization for reasoning about Boolean combinations of linear inequalities, which is of independent interest. This proof and others make crucial use of results from the theory of linear programming. We then extend the language to allow reasoning about conditional probability and show that the resulting logic is decidable and completely axiomatizable, by making use of the theory of real closed fields.
Probabilistic Logic Programming
1992
Cited by 133 (7 self)
Abstract: Of all scientific investigations into reasoning with uncertainty and chance, probability theory is perhaps the best understood paradigm. Nevertheless, all studies conducted thus far into the semantics of quantitative logic programming (cf. van Emden [51], Fitting [18, 19, 20], Blair and Subrahmanian [5, 6, 49, 50], Kifer et al. [29, 30, 31]) have restricted themselves to non-probabilistic semantical characterizations. In this paper, we take a few steps towards rectifying this situation. We define a logic programming language that is syntactically similar to the annotated logics of [5, 6], but in which the truth values are interpreted probabilistically. A probabilistic model theory and fixpoint theory is developed for such programs. This probabilistic model theory satisfies the requirements proposed by Fenstad [16] for a function to be called probabilistic. The logical treatment of probabilities is complicated by two facts: first, that the connectives cannot be interpreted truth-function...
Control of Selective Perception Using Bayes Nets and Decision Theory
1993
Cited by 100 (1 self)
Abstract: A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules, and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision-theoretic techniques provide a general, reusable framework for constructi...
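The "worth the effort" test the abstract describes is a value-of-information computation: run a visual operator only if the expected utility gained by acting on its output exceeds its cost. A minimal sketch for a binary hypothesis and a single boolean detector (the utilities, likelihoods, and two-action setup are illustrative assumptions, not TEA-1's actual tables):

```python
def value_of_information(prior, p_pos_given_h, p_pos_given_not_h, utility):
    """Expected utility gained by running one boolean detector before
    choosing an action. `utility[action]` maps hypothesis truth -> payoff."""
    def best_eu(p_h):
        # expected utility of the best action at belief p_h
        return max(p_h * u[True] + (1 - p_h) * u[False]
                   for u in utility.values())

    eu_act_now = best_eu(prior)
    # probability of a positive reading, and the two posterior beliefs
    p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
    post_pos = prior * p_pos_given_h / p_pos
    post_neg = prior * (1 - p_pos_given_h) / (1 - p_pos)
    eu_after_obs = p_pos * best_eu(post_pos) + (1 - p_pos) * best_eu(post_neg)
    return eu_after_obs - eu_act_now

utility = {"report_object": {True: 1.0, False: -1.0},
           "stay_silent":   {True: 0.0, False: 0.0}}
voi = value_of_information(0.5, 0.9, 0.1, utility)
# run the detector only if voi exceeds its cost
```

At a 50/50 prior this detector is worth up to 0.4 utility units; as the prior approaches certainty the value of observing falls toward zero, which is why a selective system stops gathering evidence.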
Two views of belief: Belief as generalized probability and belief as evidence
1992
Cited by 72 (12 self)
Abstract: Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding the...
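Dempster's rule of combination, mentioned above, pools two bodies of evidence by intersecting their focal elements and renormalizing away the mass that falls on the empty set. A minimal sketch over frozenset-valued focal elements (illustrative notation, not taken from any of the listed papers):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass, summing to 1)."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("evidence is totally conflicting")
    k = 1.0 - conflict                     # renormalization constant
    return {s: v / k for s, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
m2 = {frozenset({"a", "b"}): 0.5, frozenset({"a", "b", "c"}): 0.5}
m12 = dempster_combine(m1, m2)
```

The combined function is again a mass function (its values sum to 1), which is exactly the evidence-combination reading of belief functions that the paper argues for.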
Computational Methods for A Mathematical Theory of Evidence
1981
Cited by 64 (1 self)
Abstract: Many knowledge-based expert systems employ numerical schemes to represent evidence, rate competing hypotheses, and guide search through the domain's problem space. This paper has two objectives: first, to introduce one such scheme, developed by Arthur Dempster and Glenn Shafer, to a wider audience; second, to present results that can reduce the computation-time complexity from exponential to linear, allowing this scheme to be implemented in many more systems. In order to enjoy this reduction, some assumptions about the structure of the type of evidence represented and combined must be made. The assumption made here is that each piece of the evidence either confirms or denies a single proposition rather than a disjunction. For any domain in which the assumption is justified, the savings are available.
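Under the stated assumption, every piece of evidence either confirms or denies a single proposition A, so only three focal sets ever arise ({A}, {not A}, and the whole frame) and each new piece can be folded in with constant work, giving a linear-time combination overall. A sketch of that idea (the update formulas follow from Dempster's rule restricted to this three-set frame; the variable names are mine, not the paper's):

```python
def combine_simple_evidence(evidence):
    """evidence: list of (kind, mass) pairs, kind in {"for", "against"},
    each a simple support function for or against one proposition A.
    Returns (belief in A, belief in not-A, uncommitted mass)."""
    m_a, m_not, m_theta = 0.0, 0.0, 1.0    # masses on {A}, {not A}, frame
    for kind, w in evidence:
        e_a = w if kind == "for" else 0.0
        e_not = w if kind == "against" else 0.0
        e_theta = 1.0 - w
        # conflict is mass where one source says A and the other not-A
        k = 1.0 - (m_a * e_not + m_not * e_a)
        m_a, m_not, m_theta = (
            (m_a * (e_a + e_theta) + m_theta * e_a) / k,
            (m_not * (e_not + e_theta) + m_theta * e_not) / k,
            (m_theta * e_theta) / k,
        )
    return m_a, m_not, m_theta

m_a, m_not, m_theta = combine_simple_evidence([("for", 0.5), ("for", 0.5)])
# two independent half-strength confirmations: belief in A is 1 - 0.5**2
```

Each piece of evidence costs O(1) to absorb, so n pieces cost O(n), versus the exponential blow-up of combining arbitrary focal sets.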
"Is This Document Relevant? ...Probably": A Survey of Probabilistic Models in Information Retrieval
2001
Cited by 63 (14 self)
Abstract: This article surveys probabilistic approaches to modeling information retrieval. The basic concepts of probabilistic approaches to information retrieval are outlined, and the principles and assumptions upon which the approaches are based are presented. The various models proposed in the development of IR are described, classified, and compared using a common formalism. New approaches that constitute the basis of future research are described.
Evidence-based Static Branch Prediction using Machine Learning
ACM Transactions on Programming Languages and Systems, 1996
Cited by 63 (6 self)
Abstract: Correctly predicting the direction that branches will take is increasingly important in today's wide-issue computer architectures. The name program-based branch prediction is given to static branch prediction techniques that base their prediction on a program's structure. In this paper, we investigate a new approach to program-based branch prediction that uses a body of existing programs to predict the branch behavior in a new program. We call this approach to program-based branch prediction evidence-based static prediction, or ESP. The main idea of ESP is that the behavior of a corpus of programs can be used to infer the behavior of new programs. In this paper, we use neural networks and decision trees to map static features associated with each branch to the probability that the branch will be taken. ESP shows significant advantages over other prediction mechanisms. Specifically, it is a program-based technique, it is effective across a range of programming languages and programming s...
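The ESP idea of inferring a new program's branch behavior from a corpus of other programs can be caricatured in a few lines: store (static-feature vector, taken?) pairs from the corpus and estimate a new branch's taken-probability from its most similar corpus branches. A toy nearest-neighbour sketch (the paper itself uses neural networks and decision trees; the feature encoding below is invented for illustration):

```python
def esp_predict(corpus, features):
    """corpus: list of (feature_tuple, taken_bool) from existing programs.
    Predict P(taken) for a new branch as the taken-frequency of its
    nearest corpus branches under Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    d_min = min(hamming(f, features) for f, _ in corpus)
    nearest = [taken for f, taken in corpus if hamming(f, features) == d_min]
    return sum(nearest) / len(nearest)

# invented static features: (is_loop_branch, compares_pointer_to_null)
corpus = [((1, 0), True), ((1, 0), True), ((1, 0), False), ((0, 1), False)]
p_taken = esp_predict(corpus, (1, 0))
```

Here the new loop-like branch is predicted taken with probability 2/3, the frequency observed for its feature class in the corpus; a real ESP model generalizes across feature classes rather than matching them exactly.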
Multivalued Logics: A Uniform Approach to Inference in Artificial Intelligence
Computational Intelligence, 1988
Cited by 56 (0 self)
Abstract: This paper describes a uniform formalization of much of the current work in AI on inference systems. We show that many of these systems, including first-order theorem provers, assumption-based truth maintenance systems (ATMSs), and unimplemented formal systems such as default logic or circumscription, can be subsumed under a single general framework. We begin by defining this framework, which is based on a mathematical structure known as a bilattice. We present a formal definition of inference using this structure, and show that this definition generalizes work involving ATMSs and some simple nonmonotonic logics. Following the theoretical description, we describe a constructive approach to inference in this setting; the resulting generalization of both conventional inference and ATMSs is achieved without incurring any substantial computational overhead. We show that our approach can also be used to implement a default reasoner, and discuss a combination of default and ATMS methods th...