Results 1–10 of 24
Sentiment analysis of blogs by combining lexical knowledge with text classification
 In KDD
, 2009
Abstract

Cited by 35 (6 self)
The explosion of user-generated content on the Web has led to new opportunities and significant challenges for companies, which are increasingly concerned with monitoring the discussion around their products. Tracking such discussion on weblogs provides useful insight into how to improve products or market them more effectively. An important component of such analysis is to characterize the sentiment expressed in blogs about specific brands and products. Sentiment analysis focuses on this task of automatically identifying whether a piece of text expresses a positive or negative opinion about the subject matter. Most previous work in this area uses prior lexical knowledge in terms of the sentiment polarity of words. In contrast, some recent approaches treat the task as a text classification problem, learning to classify sentiment based only on labeled training data. In this paper, we present a unified framework in which one can use background lexical information in terms of word-class associations and refine this information for specific domains using any available training examples. Empirical results on diverse domains show that our approach performs better than using background knowledge or training data in isolation, as well as better than alternative approaches to combining lexical knowledge with text classification.
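As a rough illustration of the kind of framework this abstract describes, the sketch below seeds a text classifier with lexical word-class associations and refines it with labeled examples. It is not the paper's actual model: the toy lexicon, the pseudo-count scheme, and the choice of a Naive Bayes classifier are all illustrative assumptions.

```python
# Sketch (not the paper's exact model): fold a sentiment lexicon into a
# multinomial Naive Bayes classifier as pseudo-counts, then refine the
# counts with labeled examples. Lexicon words act as prior word-class
# associations; training data sharpens them for the domain.
from collections import Counter
import math

LEXICON = {"great": "pos", "love": "pos", "awful": "neg", "poor": "neg"}  # toy lexicon
PRIOR_STRENGTH = 2.0  # pseudo-count given to each lexicon word in its class

def train(labeled_docs):
    """labeled_docs: list of (tokens, label) with label in {'pos', 'neg'}."""
    counts = {"pos": Counter(), "neg": Counter()}
    # Background knowledge: seed class-conditional counts from the lexicon.
    for word, cls in LEXICON.items():
        counts[cls][word] += PRIOR_STRENGTH
    # Refinement: add counts from whatever labeled data is available.
    for tokens, label in labeled_docs:
        counts[label].update(tokens)
    return counts

def classify(tokens, counts, alpha=1.0):
    """Laplace-smoothed Naive Bayes over the combined counts."""
    vocab = set(counts["pos"]) | set(counts["neg"]) | set(tokens)
    scores = {}
    for cls in ("pos", "neg"):
        total = sum(counts[cls].values())
        scores[cls] = sum(
            math.log((counts[cls][w] + alpha) / (total + alpha * len(vocab)))
            for w in tokens
        )
    return max(scores, key=scores.get)

docs = [(["great", "battery"], "pos"), (["awful", "screen"], "neg")]
model = train(docs)
print(classify(["love", "the", "battery"], model))  # prints 'pos'
```

Here "love" is known only from the lexicon and "battery" only from the training data; the combined counts let both contribute to the decision.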
Combining probability distributions from dependent information sources
 Management Sci
, 1981
Abstract

Cited by 33 (1 self)
Information markets vs. opinion pools: An empirical comparison
 In Proceedings of the Sixth ACM Conference on Electronic Commerce (EC'05)
, 2005
Abstract

Cited by 14 (7 self)
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare them with the "market probabilities" given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same point in time ahead of a game, information markets provide predictions as accurate as pooled expert assessments. In screening pooled expert predictions, we find that the arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets and the relative merits of selecting among various opinion pooling methods.
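The two families of pooling functions the abstract compares can be sketched for a binary event as follows; the expert probabilities and equal weights are illustrative, and the "bolder" behavior of the logarithmic pool is visible directly.

```python
# Linear opinion pool (weighted arithmetic average) vs. logarithmic pool
# (normalized weighted geometric mean) for a binary event such as
# "home team wins". Equal weights are used here for simplicity.
import math

def linear_pool(probs, weights=None):
    w = weights or [1 / len(probs)] * len(probs)
    return sum(wi * p for wi, p in zip(w, probs))

def log_pool(probs, weights=None):
    w = weights or [1 / len(probs)] * len(probs)
    num = math.prod(p ** wi for wi, p in zip(w, probs))
    den = math.prod((1 - p) ** wi for wi, p in zip(w, probs))
    return num / (num + den)  # renormalize over the two outcomes

experts = [0.6, 0.7, 0.9]
print(round(linear_pool(experts), 3))  # 0.733
print(round(log_pool(experts), 3))     # larger: pushed further from 0.5
```

With these inputs the logarithmic pool returns a probability above the arithmetic average, i.e. a bolder prediction, which is the qualitative behavior the abstract reports.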
Aggregating Learned Probabilistic Beliefs
, 2001
Abstract

Cited by 14 (0 self)
We consider the task of aggregating the beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a 'true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
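A minimal sketch of the setting described above, assuming simple categorical distributions and sample-size weights; the paper's Bayesian-network representation and learning algorithm are not reproduced here.

```python
# LinOP (the linear opinion pool) for experts who each learned a categorical
# distribution from their own data subset. With weights proportional to
# sample sizes, LinOP recovers the distribution that would be learned from
# the combined samples, which is the abstract's notion of optimality.
from collections import Counter

def learned_dist(samples, support):
    c = Counter(samples)
    return {x: c[x] / len(samples) for x in support}

def linop(dists, weights):
    support = dists[0].keys()
    return {x: sum(w * d[x] for w, d in zip(weights, dists)) for x in support}

support = ["a", "b"]
s1, s2 = ["a", "a", "b"], ["b"]                  # two experts' sample subsets
d1, d2 = learned_dist(s1, support), learned_dist(s2, support)
pooled = linop([d1, d2], weights=[3 / 4, 1 / 4])  # weights = sample-size shares
print(pooled)  # ≈ {'a': 0.5, 'b': 0.5}, the distribution from the combined samples
```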
Stochastic approximation for consensus seeking: mean square and almost sure convergence
 In Proceedings of the 46th IEEE Conference on Decision and Control
Abstract

Cited by 9 (0 self)
We consider stochastic consensus problems in strongly connected directed graph models where each agent has noisy measurements of its neighbors' states. For consensus seeking, we develop stochastic approximation type algorithms with a decreasing step size and establish mean square and almost sure convergence of the agents' states to the same limit.
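A minimal simulation sketch of the class of algorithms described above. The graph, noise level, horizon, and the specific step-size choice a_k = 1/(k+1) are illustrative; the paper's conditions on step-size sequences are more general.

```python
# Stochastic-approximation consensus: each agent moves toward noisy
# measurements of its neighbors with a decreasing step size a_k = 1/(k+1),
# which averages out the measurement noise over time so the states
# converge to a common limit.
import random

random.seed(0)
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # an undirected path graph (illustrative)
x = [0.0, 5.0, 10.0]                     # initial states
sigma = 0.5                              # measurement noise std dev

for k in range(20000):
    a_k = 1.0 / (k + 1)                  # decreasing step size
    new_x = list(x)
    for i, nbrs in neighbors.items():
        # Noisy measurements of neighbors' states.
        meas = [x[j] + random.gauss(0, sigma) for j in nbrs]
        new_x[i] = x[i] + a_k * sum(m - x[i] for m in meas)
    x = new_x

print(x)  # all three states end up close to a common limit
```

With a constant step size the noise would keep the agents jittering apart; the 1/(k+1) decay is what allows the disagreement to vanish while the states settle on a (random) common limit.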
Aggregation of Imprecise Probabilities
, 1997
Abstract

Cited by 6 (1 self)
Methods to aggregate convex sets of probabilities are proposed. Source reliability is taken into account by transforming the given information and making it less precise. An important property of the aggregation is that the precision of the result depends on the initial compatibility of the sources. Special attention is paid to the particular case of probability intervals, giving adaptations of the aggregation procedures. The problem of aggregating probabilities for the same set of events assigned by different experts has received a great deal of attention in the literature; see Genest and Zidek [8] and French [7] for surveys of classical statistical methods. In general, it is assumed that there is a finite set of mutually exclusive and exhaustive hypotheses under consideration, and that each expert expresses his opinion by means of a probability distribution on the set of hypotheses. In general, the methods for aggregating probabilities calcula...
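One way to sketch the ideas in the abstract for the probability-interval case; the discounting rule and the reliability values below are illustrative stand-ins, not the paper's exact operators.

```python
# Pooling probability intervals: discount each source toward the vacuous
# interval [0, 1] according to an assumed reliability (less reliable means
# less precise), then intersect when the sources are compatible, falling
# back to the convex hull when they are not. The precision of the result
# thus depends on how compatible the sources were to begin with.
def discount(interval, reliability):
    """Epsilon-contamination style widening toward [0, 1]."""
    lo, hi = interval
    return (reliability * lo, reliability * hi + (1 - reliability))

def pool(intervals):
    lows = [lo for lo, _ in intervals]
    highs = [hi for _, hi in intervals]
    if max(lows) <= min(highs):        # sources compatible: intersect
        return (max(lows), min(highs))
    return (min(lows), max(highs))     # incompatible: fall back to the hull

a = discount((0.6, 0.8), reliability=0.9)  # ≈ (0.54, 0.82)
b = discount((0.5, 0.7), reliability=0.8)  # ≈ (0.40, 0.76)
print(pool([a, b]))                        # compatible sources → precise intersection
```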
Logarithmic Pooling of Priors Linked by a Deterministic Simulation Model
 Journal of Computational and Graphical Statistics
, 1999
Abstract

Cited by 4 (1 self)
We consider Bayesian inference when priors and likelihoods are both available for inputs and outputs of a deterministic simulation model. This problem is fundamentally related to the issue of aggregating (i.e., pooling) expert opinion. We survey alternative strategies for aggregation, then describe computational approaches for implementing pooled inference for simulation models. Our approach (1) numerically transforms all priors to the same space, (2) uses log pooling to combine the priors, and (3) then draws standard Bayesian inference. We use importance sampling methods, including an iterative, adaptive approach which is more flexible and has less bias in some instances than a simpler alternative. Our exploratory examples are the first steps toward extension of the approach for highly complex and even non-invertible models. Key Words: Prior Coherization, Adaptive Importance Sampling, Bayesian Statistics, Model Inversion.
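Step (2) of the outlined approach, logarithmic pooling of priors on a common space, can be sketched on a discretized parameter space as follows; the two Beta-shaped densities and the equal pooling weight are illustrative.

```python
# Logarithmic pooling of two priors over theta in (0, 1): take a weighted
# geometric mean of the densities on a grid, then renormalize so the
# result is again a proper (discretized) distribution.
import math

grid = [i / 100 for i in range(1, 100)]  # theta in (0, 1)

def beta_pdf(x, a, b):
    """Unnormalized Beta(a, b) density; normalization cancels in the pool."""
    return x ** (a - 1) * (1 - x) ** (b - 1)

p1 = [beta_pdf(t, 2, 5) for t in grid]   # prior from source 1
p2 = [beta_pdf(t, 4, 2) for t in grid]   # prior from source 2
w = 0.5                                  # pooling weight

pooled = [a ** w * b ** (1 - w) for a, b in zip(p1, p2)]
z = sum(pooled)
pooled = [p / z for p in pooled]         # renormalize

print(round(sum(pooled), 6))  # 1.0 — a proper (discretized) distribution
```

For Beta-shaped priors the log pool is again Beta-shaped (here proportional to Beta(3, 3.5)), which is why log pooling is attractive when priors must be combined before standard Bayesian inference.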
Stochastic Lyapunov analysis for consensus algorithms with noisy measurements
 Proc. American Control Conference
, 2007
Abstract

Cited by 4 (2 self)
This paper studies the coordination and consensus of networked agents in an uncertain environment. We consider a group of agents on an undirected graph with fixed topology, but, differing from most existing work, each agent has only noisy measurements of its neighbors' states. Traditional consensus algorithms in general cannot deal with such a scenario. For consensus seeking, we introduce stochastic approximation type algorithms with a decreasing step size. We present a stochastic Lyapunov analysis based upon the total mean potential associated with the agents. Subsequently, the so-called direction of invariance is introduced, which, combined with the decay property of the stochastic Lyapunov function, leads to mean square convergence of the consensus algorithm.
A Market Framework for Pooling Opinions
, 1998
Abstract

Cited by 4 (4 self)
Consider a group of Bayesians, each with a subjective probability distribution over a set of uncertain events. An opinion pool derives a single consensus distribution over the events, representative of the group as a whole. Several pooling functions have been proposed, each sensible under particular assumptions or measures. Many researchers over many years have failed to form a consensus on which method is best. We propose a market-based pooling procedure and analyze its properties. Participants bet on securities, each paying off contingent on an uncertain event, so as to maximize their own expected utilities. The consensus probability of each event is defined as the corresponding security's equilibrium price. The market framework provides explicit monetary incentives for participation and honesty, and allows agents to maintain individual rationality and limited privacy. "No arbitrage" arguments ensure that the equilibrium prices form legal probabilities. We show that, when events are...
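For one tractable special case (not the paper's general analysis), the market-as-pooling idea can be made concrete: it is a known equilibrium result that when every trader has logarithmic utility, the price of an Arrow security equals the wealth-weighted average of the traders' beliefs, i.e. a linear opinion pool with wealth shares as weights. The numbers below are illustrative.

```python
# Equilibrium price of an Arrow security in a market of log-utility
# traders: a wealth-weighted linear opinion pool of the traders' beliefs.
def equilibrium_price(beliefs, wealth):
    total = sum(wealth)
    return sum(w * p for w, p in zip(wealth, beliefs)) / total

beliefs = [0.2, 0.5, 0.8]      # each trader's probability for the event
wealth = [100.0, 100.0, 200.0]
price = equilibrium_price(beliefs, wealth)
print(round(price, 3))  # 0.575 — wealthier traders move the consensus more
```

Because the price of the complementary security is one minus this price, the resulting consensus values form legal probabilities, in line with the no-arbitrage argument in the abstract.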
Combining Expert Judgment By Hierarchical Modeling: An Application To Physician Staffing
, 1998
Abstract

Cited by 4 (0 self)
Expert panels are playing an increasingly important role in U.S. health policy decision making. A fundamental issue in these applications is how to synthesize the judgments of individual experts into a group judgment. In this paper we propose an approach to synthesis based on Bayesian hierarchical models, and apply it to the problem of determining physician staffing at medical centers operated by the U.S. Department of Veterans Affairs (VA). Our starting point is the so-called supra-Bayesian approach to synthesis, whose principal motivation in the present context is to generate an estimate of the uncertainty associated with a panel's evaluation of the number of physicians required under specified conditions. Hierarchical models are particularly natural in this context since variability in the experts' judgments results in part from heterogeneity in their baseline experiences at different VA medical centers. We derive alternative hierarchical Bayes synthesis distributions for the number ...
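A minimal conjugate sketch of supra-Bayesian synthesis, assuming a flat prior, known expert standard deviations, and hypothetical numbers; the paper's hierarchical models additionally learn the between-expert heterogeneity rather than taking it as given.

```python
# Supra-Bayesian synthesis in the simplest normal setting: expert i reports
# an estimate y_i with stated uncertainty s_i, modeled as y_i ~ N(theta, s_i^2)
# around the true staffing level theta under a flat prior. The synthesis
# distribution is then normal with a precision-weighted mean, so the group
# judgment comes with an uncertainty estimate, as the abstract motivates.
import math

def synthesize(estimates, sds):
    precisions = [1 / s ** 2 for s in sds]
    mean = sum(p * y for p, y in zip(precisions, estimates)) / sum(precisions)
    sd = math.sqrt(1 / sum(precisions))
    return mean, sd

# Three experts' judgments of required physicians (hypothetical numbers).
mean, sd = synthesize([12.0, 15.0, 14.0], [2.0, 3.0, 2.0])
print(round(mean, 2), round(sd, 2))  # group judgment with an uncertainty band
```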