Results 21–30 of 2,556
Detecting Deception in Reputation Management
, 2003
Abstract
Cited by 106 (3 self)
We previously developed a social mechanism for distributed reputation management, in which an agent combines testimonies from several witnesses to determine its ratings of another agent. However, that approach does not fully protect against spurious ratings generated by malicious agents. This paper focuses on the problem of deception in testimony propagation and aggregation. We introduce some models of deception and study how to efficiently detect deceptive agents following those models. Our approach involves a novel application of the well-known weighted majority technique to belief functions and their aggregation. We describe simulation experiments to study the number of apparently accurate witnesses found in different settings, the effect of the number of witnesses on prediction accuracy, and the evolution of trust networks.
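The weighted majority technique the abstract builds on can be illustrated with a minimal sketch (names, the penalty factor beta, and the boolean-testimony framing are assumptions, not the paper's belief-function variant): each witness carries a weight, the aggregate rating is a weighted vote, and witnesses whose testimony proves wrong are penalized multiplicatively.

```python
def weighted_vote(weights, testimonies):
    """Aggregate boolean testimonies ('trustworthy' yes/no) by weight."""
    yes = sum(w for w, t in zip(weights, testimonies) if t)
    no = sum(w for w, t in zip(weights, testimonies) if not t)
    return yes >= no

def update_weights(weights, testimonies, outcome, beta=0.5):
    """Multiplicatively penalize every witness whose testimony was wrong."""
    return [w * beta if t != outcome else w
            for w, t in zip(weights, testimonies)]

weights = [1.0, 1.0, 1.0]          # three witnesses, initially equal
testimonies = [True, True, False]  # two say "trustworthy", one says not
prediction = weighted_vote(weights, testimonies)
weights = update_weights(weights, testimonies, outcome=True)
# the dissenting witness's weight drops to 0.5; repeat offenders fade out
```

Over repeated interactions, persistently deceptive witnesses accumulate penalties until their testimony carries negligible weight, which is how the scheme isolates apparently accurate witnesses.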
Representing Default Rules in Possibilistic Logic
, 1992
Abstract
Cited by 100 (41 self)
A key issue when reasoning with default rules is how to order them so as to derive plausible conclusions according to the more specific rules applicable to the situation at hand, to make sure that default rules are not systematically inhibited by more general rules, and to cope with the problem of irrelevance of facts with respect to exceptions. Pearl's system Z enables us to rank-order default rules. In this paper we show how to encode such a rank-ordered set of defaults in possibilistic logic. We can thus take advantage of the deductive machinery available in possibilistic logic. We point out that the notion of inconsistency-tolerant inference in possibilistic logic corresponds to 1-entailment in system Z. We also show how to express defaults by means of qualitative possibility relations. Improvements to the ordering provided by system Z are also proposed.
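The system Z rank-ordering the abstract refers to can be sketched concretely (the propositional encoding and the penguin example are illustrative assumptions): a rule "if a then normally b" is *tolerated* by a rule set if some world satisfies a and b together with the material counterpart of every rule in the set; rank-0 rules are tolerated by the whole set, rank-1 by the remainder, and so on, so more specific rules receive higher ranks.

```python
from itertools import product

def z_ranking(atoms, rules):
    """Pearl's System Z ranks for defaults (a, b) = 'if a then normally b',
    with a and b given as predicates over truth assignments."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]

    def tolerated(rule, ruleset):
        a, b = rule
        return any(a(w) and b(w) and
                   all((not a2(w)) or b2(w) for _, (a2, b2) in ruleset)
                   for w in worlds)

    ranks = [None] * len(rules)
    remaining = list(enumerate(rules))
    k = 0
    while remaining:
        layer = [i for i, r in remaining if tolerated(r, remaining)]
        if not layer:
            raise ValueError("inconsistent default set")
        for i in layer:
            ranks[i] = k
        remaining = [(i, r) for i, r in remaining if ranks[i] is None]
        k += 1
    return ranks

atoms = ["p", "b", "f"]  # penguin, bird, flies
rules = [
    (lambda w: w["b"], lambda w: w["f"]),       # birds normally fly
    (lambda w: w["p"], lambda w: w["b"]),       # penguins are birds
    (lambda w: w["p"], lambda w: not w["f"]),   # penguins normally do not fly
]
ranks = z_ranking(atoms, rules)
# the general bird rule gets rank 0; the more specific penguin rules rank 1
```

The resulting ranks are exactly the ordering that possibilistic logic can then encode as necessity degrees, so the higher-ranked (more specific) penguin rules override the general bird rule.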
Perspectives on the Theory and Practice of Belief Functions
 International Journal of Approximate Reasoning
, 1990
Abstract
Cited by 91 (7 self)
The theory of belief functions provides one way to use mathematical probability in subjective judgment. It is a generalization of the Bayesian theory of subjective probability. When we use the Bayesian theory to quantify judgments about a question, we must assign probabilities to the possible answers to that question. The theory of belief functions is more flexible; it allows us to derive degrees of belief for a question from probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related. Examples of what we would now call belief-function reasoning can be found in the late seventeenth and early eighteenth centuries, well before Bayesian ideas were developed. In 1689, George Hooper gave rules for combining testimony that can be recognized as special cases of Dempster's rule for combining belief functions (Shafer 1986a). Similar rules were formulated by Jakob Bernoulli in his Ars Conjectandi, published posthumously in 1713, and by Johann Heinrich Lambert in his Neues Organon, published in 1764 (Shafer 1978). Examples of belief-function reasoning can also be found in more recent work, by authors ...
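Dempster's rule, mentioned above as the generalization of Hooper's testimony rules, can be sketched over a small finite frame (the two-witness scenario and reliability values are illustrative assumptions): focal elements are sets of answers, products of masses flow to intersections, and mass lost to empty intersections (conflict) is renormalized away.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass summing to 1)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two witnesses of reliability 0.8 and 0.6 both attest to hypothesis H;
# the unreliable remainder of each mass stays on the whole frame.
frame = frozenset({"H", "not_H"})
w1 = {frozenset({"H"}): 0.8, frame: 0.2}
w2 = {frozenset({"H"}): 0.6, frame: 0.4}
out = dempster_combine(w1, w2)
# Hooper's 1689 rule recovered: belief in H = 1 - (1-0.8)(1-0.6) = 0.92
```

With no conflicting focal elements, the combination reduces to Hooper's "at least one witness is reliable" formula, which is the special case the abstract credits to 1689.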
Toward normative expert systems: Part I. The Pathfinder project
 Methods of Information in Medicine
, 1992
Web Usage Mining: Discovery and Application of Interesting Patterns from Web Data
, 2000
Abstract
Cited by 82 (0 self)
Web Usage Mining is the application of data mining techniques to Web clickstream data in order to extract usage patterns. As Web sites continue to grow in size and complexity, the results of Web Usage Mining have become critical for a number of applications such as Web site design, business and marketing decision support, personalization, usability studies, and network traffic analysis. The two major challenges involved in Web Usage Mining are preprocessing the raw data to provide an accurate picture of how a site is being used, and filtering the results of the various data mining algorithms in order to present only the rules and patterns that are potentially interesting. This thesis develops and tests an architecture and algorithms for performing Web Usage Mining. An evidence combination framework referred to as the information filter is developed to compare and combine usage, content, and structure information about a Web site. The information filter automatically identifies the discovered ...
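The preprocessing challenge the abstract highlights typically starts with sessionization of the raw clickstream; a minimal sketch, assuming a conventional inactivity timeout (the 30-minute value and the data shape are illustrative, not the thesis's exact pipeline):

```python
def sessionize(clicks, timeout=1800):
    """Split one user's time-ordered clickstream [(timestamp, url), ...]
    into sessions wherever the gap between clicks exceeds `timeout` seconds."""
    sessions = []
    for ts, url in clicks:
        if sessions and ts - sessions[-1][-1][0] <= timeout:
            sessions[-1].append((ts, url))   # continue the current session
        else:
            sessions.append([(ts, url)])     # long gap: start a new session
    return sessions

clicks = [(0, "/"), (100, "/a"), (5000, "/b"), (5100, "/c")]
sessions = sessionize(clicks)
# the 4900 s gap splits the stream into two sessions
```

Only after such reconstruction do per-session patterns (frequent paths, association rules) become meaningful inputs to the mining and filtering stages the abstract describes.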
Static Branch Frequency and Program Profile Analysis
 In 27th International Symposium on Microarchitecture
, 1994
Abstract
Cited by 81 (1 self)
Program profiles identify frequently executed portions of a program, which are the places at which optimizations offer programmers and compilers the greatest benefit. Compilers, however, infrequently exploit program profiles, because profiling a program requires a programmer to instrument and run the program. An attractive alternative is for the compiler to statically estimate program profiles. This paper presents several new techniques for static branch prediction and profiling. The first technique combines multiple predictions of a branch's outcome into a prediction of the probability that the branch is taken. Another technique uses these predictions to estimate the relative execution frequency (i.e., profile) of basic blocks and control-flow edges within a procedure. A third algorithm uses local frequency estimates to predict the global frequency of calls, procedure invocations, and basic block and control-flow edge executions. Experiments on the SPEC92 integer benchmarks and Uni...
Plausibility Measures and Default Reasoning
 Journal of the ACM
, 1996
Abstract
Cited by 81 (12 self)
this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal Languages]
A UNIFYING FIELD IN LOGICS: NEUTROSOPHIC LOGIC. NEUTROSOPHY, NEUTROSOPHIC SET, NEUTROSOPHIC PROBABILITY AND STATISTICS (fourth edition)
, 2005
Two views of belief: Belief as generalized probability and belief as evidence
, 1992
Abstract
Cited by 78 (12 self)
Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding the...
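The "generalized probability" reading can be made concrete with the standard belief/plausibility pair computed from a mass function (the frame and mass values below are illustrative assumptions): belief counts only mass fully committed to an event, plausibility everything merely consistent with it, and the gap between the two is what a single probability cannot express.

```python
def belief(mass, event):
    """Bel(A): total mass committed to subsets of A (a lower probability)."""
    return sum(w for s, w in mass.items() if s <= event)

def plausibility(mass, event):
    """Pl(A): total mass consistent with A (an upper probability)."""
    return sum(w for s, w in mass.items() if s & event)

# Half the mass pins down answer "a"; the other half cannot distinguish
# "a" from "b", so it supports neither exclusively.
m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}
A = frozenset({"a"})
bel, pl = belief(m, A), plausibility(m, A)  # 0.5 and 1.0
```

Any probability function consistent with the evidence lies in the interval [Bel(A), Pl(A)], which is the sense in which a belief function behaves as the inner measure induced by a probability.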
Reasoning with Goal Models
, 2002
Abstract
Cited by 77 (45 self)
Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on preliminary experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.
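The qualitative label propagation the abstract describes can be sketched over an AND/OR goal decomposition (the goal names, the three-valued label scale, and the min/max semantics are illustrative assumptions, not the paper's full axiomatization): an AND goal is only as satisfied as its weakest subgoal, an OR goal as its strongest.

```python
# Labels are ordered none < partial < full.
LEVELS = {"none": 0, "partial": 1, "full": 2}
NAMES = {v: k for k, v in LEVELS.items()}

def propagate(graph, labels):
    """graph: goal -> ('and'|'or', [subgoals]); labels: leaf -> label.
    Returns the propagated label of every decomposed goal."""
    def value(goal):
        if goal in labels:                 # leaf with an assigned label
            return LEVELS[labels[goal]]
        op, subs = graph[goal]
        vals = [value(s) for s in subs]
        return min(vals) if op == "and" else max(vals)
    return {g: NAMES[value(g)] for g in graph}

graph = {"ship_product": ("and", ["build", "test"]),
         "test": ("or", ["manual_qa", "automated_qa"])}
labels = {"build": "full", "manual_qa": "partial", "automated_qa": "none"}
result = propagate(graph, labels)
# "test" is partially satisfied via manual QA, capping "ship_product" at partial
```

Soundness and completeness of such propagation with respect to an axiomatization is exactly the kind of property the paper establishes for its qualitative and numerical variants.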