Results 1–10 of 236
Trust management for the semantic web
 In ISWC
, 2003
Cited by 206 (3 self)
Abstract. Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.
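The path-based trust composition the abstract describes can be sketched as follows. This is a minimal illustrative example, not the paper's exact combination functions: it assumes trust values in [0, 1], composes them multiplicatively along a path, and merges paths by taking the maximum. The graph and user names are made up.

```python
# Illustrative sketch of web-of-trust inference (not the paper's exact
# combination functions). Trust values lie in [0, 1]; a path's trust is the
# product of its edge trusts, and paths are merged by taking the maximum.
def inferred_trust(graph, source, target, max_depth=3):
    """graph: dict mapping user -> {neighbor: direct trust in [0, 1]}."""
    best = 0.0
    stack = [(source, 1.0, {source}, 0)]
    while stack:
        node, trust, visited, depth = stack.pop()
        if node == target:
            best = max(best, trust)
            continue
        if depth == max_depth:
            continue
        for nbr, t in graph.get(node, {}).items():
            if nbr not in visited:
                stack.append((nbr, trust * t, visited | {nbr}, depth + 1))
    return best

# Hypothetical web: alice trusts bob, bob trusts carol.
web = {"alice": {"bob": 0.9}, "bob": {"carol": 0.8}}
print(inferred_trust(web, "alice", "carol"))  # ~0.72 = 0.9 * 0.8
```

Because both composition (product) and merging (max) use only a node's local view, the inferred value can be computed without any global trust authority, which matches the locality property the paper emphasizes.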
Products of Experts
, 1999
Cited by 152 (4 self)
It is possible to combine multiple probabilistic models of the same data by multiplying the probabilities together and then renormalizing. This is a very efficient way to model high-dimensional data which simultaneously satisfies many different low-dimensional constraints. Each individual expert model can focus on giving high probability to data vectors that satisfy just one of the constraints. Data vectors that satisfy this one constraint but violate other constraints will be ruled out by their low probability under the other expert models. Training a product of models appears difficult because, in addition to maximizing the probabilities that the individual models assign to the observed data, it is necessary to make the models disagree on unobserved regions of the data space: It is fine for one model to assign a high probability to an unobserved region as long as some other model assigns it a very low probability. Fortunately, if the individual models are tractable there is a fairly efficient way to train a product of models. This training algorithm suggests a biologically plausible way of learning neural population codes.
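The multiply-and-renormalize combination can be shown concretely on a small discrete space. This is a toy sketch: the expert distributions below are made-up numbers, with each "expert" enforcing one soft constraint.

```python
import numpy as np

# Toy product of experts over a 4-state discrete space. Each expert is a
# probability vector favoring the states that satisfy its constraint; the
# combined model is their elementwise product, renormalized.
def product_of_experts(experts):
    p = np.ones_like(experts[0])
    for e in experts:
        p = p * e
    return p / p.sum()

expert_a = np.array([0.4, 0.4, 0.1, 0.1])  # favors states 0 and 1
expert_b = np.array([0.1, 0.4, 0.4, 0.1])  # favors states 1 and 2
combined = product_of_experts([expert_a, expert_b])
print(combined)  # mass concentrates on state 1, which satisfies both experts
```

State 1 is the only state both experts like, so after renormalization it carries most of the probability; states that violate either constraint are suppressed, exactly as the abstract describes.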
On Combining Artificial Neural Nets
 Connection Science
, 1996
Cited by 91 (3 self)
This paper reviews research on combining artificial neural nets, and provides an overview of, and an introduction to, the papers contained in this Special Issue, and its companion (Connection Science, 9, 1). Two main approaches, ensemble-based and modular, are identified and considered. An ensemble, or committee, is made up of a set of nets, each of which is a general function approximator. The members of the ensemble are combined in order to obtain better generalisation performance than would be achieved by any of the individual nets. The main issues considered here under the heading of ensemble-based approaches are (a) how to combine the outputs of the ensemble members, (b) how to create candidate ensemble members, and (c) which methods lead to the most effective ensembles? Under the heading of modular approaches we begin by considering a divide-and-conquer approach by which a function is automatically decomposed into a number of subfunctions which are treated by specialis...
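The simplest output-combination strategy under heading (a) is plain averaging of the members' outputs, sketched here with made-up prediction vectors.

```python
import numpy as np

# Illustrative ensemble combination by simple output averaging, one of the
# basic strategies discussed under (a) above. Each member net emits a
# class-probability vector for the same input; the committee output is
# their mean. The three prediction vectors are made-up examples.
def ensemble_average(predictions):
    """predictions: list of per-model probability vectors for one input."""
    return np.mean(predictions, axis=0)

nets = [np.array([0.7, 0.3]), np.array([0.6, 0.4]), np.array([0.2, 0.8])]
print(ensemble_average(nets))  # [0.5 0.5]
```

Averaging reduces the variance of any single net's errors; more elaborate schemes (weighted averaging, voting, stacking) differ mainly in how the combination weights are chosen.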
Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation
 Journal of Prediction Markets
, 2002
Cited by 78 (5 self)
In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals.
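A logarithmic market scoring rule of the kind described here is commonly written as a cost function C(q) = b * log(sum_i exp(q_i / b)), where q_i counts outstanding shares on outcome i and b sets liquidity; a trade moving the share vector from q_old to q_new costs C(q_new) - C(q_old), and instantaneous prices are the softmax of q / b. The sketch below uses that standard formulation; the parameter values are illustrative.

```python
import math

# Logarithmic market scoring rule: C(q) = b * log(sum_i exp(q_i / b)).
# A trader pays C(q_new) - C(q_old) to change the outstanding share vector;
# prices are the softmax of q / b. b is the liquidity parameter (toy value).
def lmsr_cost(q, b=100.0):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

q0 = [0.0, 0.0]    # two-outcome market, no shares outstanding
q1 = [10.0, 0.0]   # buy 10 shares of outcome 0
print(lmsr_prices(q0))                # uniform prices [0.5, 0.5]
print(lmsr_cost(q1) - lmsr_cost(q0))  # cost of the trade, slightly above 5
```

The market maker's worst-case loss is bounded by b * log(n) for n outcomes, which is why a single subsidized market maker can serve a whole group at a fixed cost.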
Two views of belief: Belief as generalized probability and belief as evidence
, 1992
Cited by 78 (12 self)
Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding the...
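Dempster's rule of combination, mentioned above as the evidence-combining operation, can be sketched directly: intersect focal elements, discard mass assigned to empty intersections (the conflict), and renormalize. The frame of discernment and mass values below are made up.

```python
# Dempster's rule of combination for mass functions over a small frame,
# represented as dicts mapping frozenset -> mass. Focal elements are
# intersected, conflicting (empty-intersection) mass is dropped, and the
# remainder is renormalized by 1 - conflict.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two made-up pieces of evidence about a frame {F, B} (fair vs. biased).
m1 = {frozenset("F"): 0.6, frozenset("FB"): 0.4}
m2 = {frozenset("F"): 0.5, frozenset("FB"): 0.5}
print(dempster_combine(m1, m2))  # mass shifts toward the singleton {F}
```

As the abstract argues, this operation is natural when the mass functions represent independent bodies of evidence, but it has no obvious justification when they are read as generalized probabilities.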
Improved recognition of native-like protein structures using a combination of sequence-dependent and sequence-independent features of proteins
 Proteins
, 1999
Cited by 71 (20 self)
ABSTRACT We describe the development of a scoring function based on the decomposition P(structure|sequence) ∝ P(sequence|structure) * P(structure), which outperforms previous scoring functions in correctly identifying native-like protein structures in large ensembles of compact decoys. The first term captures sequence-dependent features of protein structures, such as the burial of hydrophobic residues in the core; the second term, universal sequence-independent features, such as the assembly of β-strands into β-sheets. The efficacies of a wide variety of sequence-dependent and sequence-independent features of protein structures for recognizing native-like structures were systematically evaluated using ensembles of ~30,000 compact conformations with fixed secondary structure for each of 17 small protein domains. The best results were obtained using a core scoring function with P(sequence|structure) parameterized similarly to our previous work (Simons et al., J Mol Biol 1997;268:209–225) and P(structure) focused on secondary structure packing preferences; while several additional features had some discriminatory power on their own, they did not provide any additional discriminatory power when combined with the core scoring function. Our results, on both the training set and the independent decoy set of Park and Levitt (J Mol Biol 1996;258:367–392), suggest that this scoring function should contribute to the prediction of tertiary structure from knowledge of sequence and secondary structure. Proteins 1999;34:82–95. © 1999 Wiley-Liss, Inc. Key words: protein folding; structure prediction; knowledge-based scoring functions; fold recognition
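The decomposition is Bayes' rule with the normalizer P(sequence) dropped: P(structure|sequence) ∝ P(sequence|structure) * P(structure), so candidate structures can be ranked by the summed log of the two terms. A toy sketch; the component probabilities below are invented placeholders for the paper's learned sequence-dependent and sequence-independent terms.

```python
import math

# Ranking decoys by the Bayes decomposition with the normalizer dropped:
# log P(structure | sequence) = log P(sequence | structure)
#                               + log P(structure) + const.
# The probabilities are toy numbers standing in for the paper's learned terms.
def log_score(p_seq_given_struct, p_struct):
    return math.log(p_seq_given_struct) + math.log(p_struct)

decoys = {
    "native-like": (0.30, 0.20),    # (P(seq|struct), P(struct)), toy values
    "compact-decoy": (0.10, 0.25),
}
best = max(decoys, key=lambda d: log_score(*decoys[d]))
print(best)  # prints "native-like": higher combined log-score wins
```

Because the ranking is over structures for a fixed sequence, P(sequence) is a constant and can be ignored, which is why the unnormalized product suffices for decoy discrimination.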
Uncertainty analysis of climate change and policy response
 Climatic Change
, 2003
Cited by 59 (13 self)
Abstract. To aid climate policy decisions, accurate quantitative descriptions of the uncertainty in climate outcomes under various possible policies are needed. Here, we apply an earth systems model to describe the uncertainty in climate projections under two different policy scenarios. This study illustrates an internally consistent uncertainty analysis of one climate assessment modeling framework, propagating uncertainties in both economic and climate components, and constraining climate parameter uncertainties based on observation. We find that in the absence of greenhouse gas emissions restrictions, there is a one in forty chance that global mean surface temperature change will exceed 4.9 °C by the year 2100. A policy case with aggressive emissions reductions over time lowers the temperature change to a one in forty chance of exceeding 3.2 °C, thus reducing but not eliminating the chance of substantial warming.
Statistical Methods for Eliciting Probability Distributions
 Journal of the American Statistical Association
, 2005
Cited by 52 (2 self)
Elicitation is a key task for subjectivist Bayesians. While skeptics hold that it cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter-expert colleagues. This paper reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively, and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful, i.e. what criteria should be employed? Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily “true” in some objectivistic sense, and cannot be judged that way. We see elicitation as simply part of the process of statistical modeling. Indeed in a hierarchical model it is ambiguous at which point the likelihood ends and the prior begins. Thus the same kinds of judgment that inform statistical modeling in general also inform elicitation of prior distributions.
OLAP over uncertain and imprecise data
 In VLDB
, 2005
Cited by 47 (4 self)
We extend the OLAP data model to represent data ambiguity, specifically imprecision and uncertainty, and introduce an allocation-based approach to the semantics of aggregation queries over such data. We identify three natural query properties and use them to shed light on alternative query semantics. While there is much work on representing and querying ambiguous data, to our knowledge this is the first paper to handle both imprecision and uncertainty in an OLAP setting.
Parallel consensual neural networks
 Multiple Classifiers Applied to Multisource Remote Sensing Data
, 1997
Cited by 40 (4 self)
Abstract — A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data. Index Terms — Consensus theory, wavelet packets, accuracy, classification, probability density estimation, statistical pattern
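The consensual decision step, weighting and summing the stage networks' class-probability outputs, can be sketched as follows. The weights here are fixed toy values; the paper obtains them by optimization.

```python
import numpy as np

# Consensual combination step in the spirit of the PCNN: each stage network
# emits a class-probability vector, and the consensual decision is the
# argmax of their weighted sum. Weights are fixed toy values here; the paper
# chooses them by optimization.
def consensual_decision(stage_outputs, weights):
    stage_outputs = np.asarray(stage_outputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                    # normalize weights
    combined = (weights[:, None] * stage_outputs).sum(axis=0)
    return int(np.argmax(combined)), combined

# Three hypothetical stage networks, two classes.
stages = [[0.6, 0.4], [0.5, 0.5], [0.2, 0.8]]
label, probs = consensual_decision(stages, [2.0, 1.0, 1.0])
print(label, probs)  # class index and the combined probability vector
```

Giving a more reliable stage a larger weight lets it dominate ambiguous cases, which is the role the paper's optimization step plays in choosing the weights.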