Results 1 - 10 of 71
Sentiment analysis of blogs by combining lexical knowledge with text classification
In KDD, 2009
Cited by 35 (6 self)
Abstract:
The explosion of user-generated content on the Web has led to new opportunities and significant challenges for companies that are increasingly concerned about monitoring the discussion around their products. Tracking such discussion on weblogs provides useful insight on how to improve products or market them more effectively. An important component of such analysis is to characterize the sentiment expressed in blogs about specific brands and products. Sentiment Analysis focuses on this task of automatically identifying whether a piece of text expresses a positive or negative opinion about the subject matter. Most previous work in this area uses prior lexical knowledge in terms of the sentiment polarity of words. In contrast, some recent approaches treat the task as a text classification problem, where they learn to classify sentiment based only on labeled training data. In this paper, we present a unified framework in which one can use background lexical information in terms of word-class associations, and refine this information for specific domains using any available training examples. Empirical results on diverse domains show that our approach performs better than using background knowledge or training data in isolation, as well as alternative approaches to using lexical knowledge with text classification.
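The combination the abstract describes, blending a background lexicon with scores learned from labeled data, can be sketched roughly as follows; the lexicon entries, the log-odds scorer, and the blending weight `alpha` are illustrative assumptions, not the paper's actual model:

```python
import math
from collections import Counter

# Hypothetical background lexicon: prior word polarities in [-1, 1].
LEXICON = {"great": 1.0, "love": 0.8, "poor": -0.8, "terrible": -1.0}

def lexicon_score(tokens):
    """Average prior polarity of the tokens found in the lexicon."""
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def train_counts(labeled_docs):
    """Per-class word counts from labeled training examples (label +1/-1)."""
    counts = {1: Counter(), -1: Counter()}
    for tokens, label in labeled_docs:
        counts[label].update(tokens)
    return counts

def data_score(tokens, counts, smooth=1.0):
    """Data-driven polarity: smoothed log-odds of positive vs. negative."""
    vocab = set(counts[1]) | set(counts[-1])
    pos_total = sum(counts[1].values()) + smooth * len(vocab)
    neg_total = sum(counts[-1].values()) + smooth * len(vocab)
    return sum(math.log((counts[1][t] + smooth) / pos_total)
               - math.log((counts[-1][t] + smooth) / neg_total)
               for t in tokens)

def combined_sentiment(tokens, counts, alpha=0.5):
    """Blend background lexical knowledge with the data-driven score."""
    return alpha * lexicon_score(tokens) + (1 - alpha) * data_score(tokens, counts)
```

With no training data the lexicon term dominates; as labeled examples accumulate, the data-driven term can refine or override the prior for a specific domain.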
Evaluating, comparing and combining density forecasts using the KLIC with an application to the Bank of England and NIESR “fan” charts of inflation
2005
Optimal combination of density forecasts
NATIONAL INSTITUTE OF ECONOMIC AND SOCIAL RESEARCH DISCUSSION PAPER NO, 2005
Cited by 23 (9 self)
Abstract:
This paper brings together two important but hitherto largely unrelated areas of the forecasting literature, density forecasting and forecast combination. It proposes a simple data-driven approach to direct combination of density forecasts using optimal weights. The optimal weights are those that minimize the ‘distance’, as measured by the Kullback-Leibler information criterion, between the forecast and true but unknown density. We explain how this minimization both can and should be achieved. Comparisons with the optimal combination of point forecasts are made. An application to simple time-series density forecasts and two widely used published density forecasts for U.K. inflation, namely the Bank of England and NIESR “fan” charts, illustrates that combination can, but need not always, help.
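The weight-selection step can be illustrated with a minimal sketch. Minimizing the KL distance to the true density is equivalent, up to an additive constant, to maximizing the average log score of the combined density at realized outcomes, so for two forecast densities the optimal weight can be found by a simple grid search; the Gaussian forecasts, outcomes, and grid here are hypothetical:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density, used as a stand-in forecast density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def avg_log_score(weight, outcomes, f1, f2):
    """Mean log density of the two-forecast mixture at realized outcomes.
    Maximizing this over the weight minimizes the KL distance between the
    combined density and the unknown true density, up to a constant."""
    return sum(math.log(weight * f1(x) + (1 - weight) * f2(x))
               for x in outcomes) / len(outcomes)

def optimal_weight(outcomes, f1, f2, grid=101):
    """Grid search for the mixture weight with the best average log score."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda w: avg_log_score(w, outcomes, f1, f2))
```

For example, if outcomes cluster near the first forecaster's center, the search puts nearly all weight on that forecaster.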
Aggregating disparate estimates of chance
2004
Cited by 19 (4 self)
Abstract:
We consider a panel of experts asked to assign probabilities to events, both logically simple and complex. The events evaluated by different experts are based on overlapping sets of variables but may otherwise be distinct. The union of all the judgments will likely be probabilistically incoherent. We address the problem of revising the probability estimates of the panel so as to produce a coherent set that best represents the group’s expertise.
Information markets vs. opinion pools: An empirical comparison
In Proceedings of the Sixth ACM Conference on Electronic Commerce (EC’05), 2005
Cited by 14 (7 self)
Abstract:
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people’s subjective probability judgments on 2003 US National Football League games and compare with the “market probabilities” given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide predictions as accurate as pooled expert assessments. In screening pooled expert predictions, we find that the arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.
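The two pooling functions the abstract compares can be sketched as follows; the example expert probabilities and equal weights are illustrative, not data from the paper:

```python
import math

def linear_pool(prob_vectors, weights):
    """LinOP: weighted arithmetic mean of the experts' probabilities."""
    return [sum(w * p[i] for w, p in zip(weights, prob_vectors))
            for i in range(len(prob_vectors[0]))]

def log_pool(prob_vectors, weights):
    """LogOP: normalized weighted geometric mean. It yields bolder (more
    extreme) probabilities than the linear pool on the same inputs."""
    raw = [math.prod(p[i] ** w for w, p in zip(weights, prob_vectors))
           for i in range(len(prob_vectors[0]))]
    z = sum(raw)
    return [r / z for r in raw]
```

For two experts assigning (0.8, 0.2) and (0.6, 0.4) with equal weights, the linear pool gives 0.7 on the first outcome while the logarithmic pool gives slightly more, illustrating the "bolder predictions" finding.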
Aggregating Learned Probabilistic Beliefs
2001
Cited by 14 (0 self)
Abstract:
We consider the task of aggregating beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a 'true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
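A minimal sketch of the setting, with two hypothetical experts who each learn an empirical distribution from their own subset of nature's samples: weighting each expert by its share of the total sample makes LinOP recover exactly the distribution that would have been learned from the combined sample set.

```python
from collections import Counter

def empirical(samples, outcomes):
    """Empirical distribution of the samples over a fixed outcome list."""
    c = Counter(samples)
    return [c[o] / len(samples) for o in outcomes]

def linop(dists, weights):
    """Linear opinion pool: weighted arithmetic mean of distributions."""
    return [sum(w * d[i] for w, d in zip(weights, dists))
            for i in range(len(dists[0]))]

# Two experts each learn a distribution from their own subset of the data.
outcomes = ["a", "b"]
s1, s2 = ["a", "a", "b"], ["b", "b", "a"]
d1, d2 = empirical(s1, outcomes), empirical(s2, outcomes)

# Weighting each expert by its sample share recovers the distribution
# that would have been learned from the combined sample set.
n = len(s1) + len(s2)
pooled = linop([d1, d2], [len(s1) / n, len(s2) / n])
```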
Forecasting market prices in a supply chain game
In International Joint Conference on Autonomous Agents and Multiagent Systems, 2006
Cited by 14 (2 self)
Abstract:
Future market conditions can be a pivotal factor in making business decisions. We present and evaluate methods used by our agent, Deep Maize, to forecast market prices in the Trading Agent Competition Supply Chain Management Game. As a guiding principle we seek to exploit as many sources of available information as possible to inform predictions. Since information comes in several different forms, we integrate well-known methods in a novel way to make predictions. The core of our predictor is a nearest-neighbors machine learning algorithm that identifies historical instances with similar economic indicators. We augment this with an online learning procedure that transforms the predictions by optimizing a scoring rule. This allows us to select more relevant historical contexts using additional information available during individual games. We also explore the advantages of two different representations for predicting price distributions. One uses absolute prices, and the other uses statistics of price time series to exploit local stability. We evaluate these methods using both data from the 2005 tournament final round and additional simulations. We compare several variations of our predictor to one another and to a baseline predictor similar to those used by many other tournament agents. We show substantial improvements over the baseline predictor, and demonstrate that each element of our predictor contributes to improved performance.
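The core nearest-neighbors step can be sketched as follows; the indicator vectors, Euclidean distance, and choice of k are illustrative assumptions, not Deep Maize's actual configuration:

```python
import math

def knn_predict(history, indicators, k=3):
    """Predict a price as the mean over the k historical instances whose
    economic indicators are nearest (Euclidean) to the current ones.

    history: list of (indicator_vector, observed_price) pairs.
    """
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], indicators))[:k]
    return sum(price for _, price in nearest) / k
```

The paper's predictor additionally reweights such neighbor-based predictions online by optimizing a scoring rule; that refinement is omitted here.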
The expected value of information and the probability of surprise
Risk Anal, 1999
Cited by 11 (1 self)
Abstract:
Risk assessors attempting to use probabilistic approaches to describe uncertainty often find themselves in a data-sparse situation: available data are only partially relevant to the parameter of interest, so one needs to adjust empirical distributions, use explicit judgmental distributions, or collect new data. In determining whether or not to collect additional data, whether by measurement or by elicitation of experts, it is useful to consider the expected value of the additional information. The expected value of information depends on the prior distribution used to represent current information; if the prior distribution is too narrow, in many risk-analytic cases the calculated expected value of information will be biased downward. The well-documented tendency toward overconfidence, including the neglect of potential surprise, suggests this bias may be substantial. We examine the expected value of information, including the role of surprise, test for bias in estimating the expected value of information, and suggest procedures to guard against overconfidence and underestimation of the expected value of information when developing prior distributions and when combining distributions obtained from multiple experts. The methods are illustrated with applications to potential carcinogens in food, commercial energy demand, and global climate change. KEY WORDS: Probability; uncertainty; data; risk assessment.
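The expected value of perfect information, and its sensitivity to an overconfident prior, can be sketched with a small Monte Carlo computation; the action set, utility function, and prior samples below are hypothetical:

```python
def evpi(theta_samples, actions, utility):
    """Monte Carlo expected value of perfect information:
    E[max_a U(a, theta)] - max_a E[U(a, theta)]."""
    n = len(theta_samples)
    with_info = sum(max(utility(a, t) for a in actions)
                    for t in theta_samples) / n
    without_info = max(sum(utility(a, t) for t in theta_samples) / n
                       for a in actions)
    return with_info - without_info
```

With a go/no-go decision whose payoff equals the uncertain parameter, samples from a wide prior (say, spread over [-1, 1]) yield a much larger EVPI than samples from a narrow prior near zero, illustrating the abstract's point that an overconfident (too narrow) prior biases the calculated value of information downward.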
Active AppearanceBased Robot Localization Using Stereo Vision
2005
Cited by 10 (2 self)
Abstract:
A vision-based robot localization system must be robust: able to keep track of the position of the robot at any time even if illumination conditions change and, in the extreme case of a failure, able to efficiently recover the correct position of the robot. With this objective in mind, we enhance the existing appearance-based robot localization framework in two directions by exploiting the use of a stereo camera mounted on a pan-and-tilt device. First, we move from the classical passive appearance-based localization framework to an active one, where the robot sometimes executes actions with the sole purpose of gaining information about its location in the environment. Along this line, we introduce an entropy-based criterion for action selection that can be efficiently evaluated in our probabilistic localization system. The execution of the actions selected using this criterion allows the robot to quickly find out its position in case it gets lost. Second, we introduce the use of depth maps obtained with the stereo cameras. The information provided by depth maps is less sensitive to changes of illumination than that provided by plain images. The main drawback of depth maps is that they include missing values: points for which it is not possible to reliably determine depth information. The presence of missing values makes Principal Component Analysis (the standard method used to compress images in the appearance-based framework) unfeasible. We describe a novel Expectation-Maximization algorithm to determine the principal components of a data set including missing values and we apply it to depth maps. The experiments we present show that the combination of the active localization with the use of depth maps gives an efficient and robust appearance-based robot localization system.
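The EM idea for principal components with missing values can be sketched for the rank-1 case; this is a generic iterative-imputation variant written for illustration, not the authors' exact algorithm:

```python
def rank1_em(X, mask, iters=100):
    """EM-style rank-1 principal component for data with missing values:
    the E-step fills missing entries with the current reconstruction, the
    M-step refits the component by alternating least squares.

    X: list of rows; mask[i][j] is True where X[i][j] was observed.
    """
    n, m = len(X), len(X[0])
    filled = [row[:] for row in X]
    # Initialize missing entries with the column means of observed values.
    for j in range(m):
        obs = [X[i][j] for i in range(n) if mask[i][j]]
        mean = sum(obs) / len(obs)
        for i in range(n):
            if not mask[i][j]:
                filled[i][j] = mean
    u = [1.0] * n
    for _ in range(iters):
        # M-step: alternating least squares for filled ≈ u v^T.
        v = [sum(u[i] * filled[i][j] for i in range(n)) /
             sum(ui * ui for ui in u) for j in range(m)]
        u = [sum(v[j] * filled[i][j] for j in range(m)) /
             sum(vj * vj for vj in v) for i in range(n)]
        # E-step: overwrite only the missing entries with the reconstruction.
        for i in range(n):
            for j in range(m):
                if not mask[i][j]:
                    filled[i][j] = u[i] * v[j]
    return filled
```

On a data matrix with exact rank-1 structure and one hidden entry, the iteration recovers the hidden entry from the observed rows and columns.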
Uncertainty in Risk Analysis: Towards a General Second-Order Approach Combining Interval, Probabilistic, and Fuzzy Techniques
2002
Cited by 9 (4 self)
Abstract:
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general second-order formalism for handling different types of uncertainty.