Opinion Mining and Sentiment Analysis
, 2008
Abstract

Cited by 367 (4 self)
An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
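The lexicon-based scoring that many early sentiment-analysis systems start from can be sketched in a few lines. The word lists below are hypothetical toy examples, not taken from the survey, and real systems use curated lexicons and far richer models:

```python
# Toy lexicon-based sentiment scorer; the two word sets are illustrative
# stand-ins for a real sentiment lexicon.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment_score(text: str) -> int:
    """Count positive minus negative lexicon hits in the text."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A review scoring above zero would be labeled positive, below zero negative.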
Fast convergence to Wardrop equilibria by adaptive sampling methods
 In Proc. 38th Annual ACM Symposium on Theory of Computing (STOC)
, 2006
Abstract

Cited by 41 (6 self)
We study rerouting policies in a dynamic round-based variant of a well-known game-theoretic traffic model due to Wardrop. Previous analyses based on Wardrop’s model (mostly in the context of selfish routing) focus on the static analysis of equilibria. In this paper, we ask whether the population of agents responsible for routing the traffic can jointly compute, or better, learn a Wardrop equilibrium efficiently. The rerouting policies that we study are of the following kind. In each round, each agent samples an alternative routing path and compares the latency on this path with its current latency. If the agent observes that it can improve its latency, then it switches to the better path with some probability depending on the possible improvement. We can show various positive results based on a rerouting policy using an adaptive sampling rule that implicitly amplifies paths that carry a large amount of traffic in the Wardrop equilibrium. For general asymmetric games, we show that a simple replication protocol in which agents adopt strategies of more successful agents reaches a certain kind of bicriteria equilibrium within a time bound that is independent of the size and the structure of the network and depends only on a parameter of the latency functions that we call the relative slope. For symmetric games, this result has an intuitive interpretation: Replication approximately satisfies almost everyone very quickly. In order to achieve convergence to a Wardrop equilibrium, besides replication one also needs an exploration component discovering possibly unused strategies. We present a
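The sampling step described in the abstract can be sketched for a single agent and round as follows. The names and the uniform sampling are simplifications for illustration; the paper's actual protocol uses an adaptive sampling rule and analyzes the full population dynamics:

```python
import random

def reroute_round(paths, latency, current, max_latency):
    """One round for one agent: sample an alternative path and, if it is
    better, switch with probability proportional to the relative improvement.
    `latency` maps a path to its current latency; `max_latency` normalizes
    the improvement into a probability."""
    alt = random.choice(paths)
    cur_l, alt_l = latency(current), latency(alt)
    if alt_l < cur_l and random.random() < (cur_l - alt_l) / max_latency:
        return alt  # migrate to the better path
    return current  # keep the current path
```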
Online Decision Problems with Large Strategy Sets
, 2005
Abstract

Cited by 24 (2 self)
In an online decision problem, an algorithm performs a sequence of trials, each of which involves selecting one element from a fixed set of alternatives (the “strategy set”) whose costs vary over time. After T trials, the combined cost of the algorithm’s choices is compared with that of the single strategy whose combined cost is minimum. Their difference is called regret, and one seeks algorithms which are efficient in that their regret is sublinear in T and polynomial in the problem size. We study an important class of online decision problems called generalized multi-armed bandit problems. In the past such problems have found applications in areas as diverse as statistics, computer science, economic theory, and medical decision-making. Most existing algorithms were efficient only in the case of a small (i.e. polynomial-sized) strategy set. We extend the theory by supplying nontrivial algorithms and lower bounds for cases in which the strategy set is much larger (exponential or infinite) and
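For a small strategy set, the classical multiplicative-weights (Hedge) update already achieves regret sublinear in T; the paper's contribution is extending such guarantees to much larger sets. A minimal sketch of the small-set baseline follows, using deterministic leader-picking for illustration (the standard algorithm samples a strategy in proportion to its weight):

```python
import math

def hedge_picks(costs, eta=0.5):
    """Run a Hedge-style update over per-round cost vectors in [0, 1].
    Returns the index of the highest-weight strategy chosen each round."""
    k = len(costs[0])
    weights = [1.0] * k
    picks = []
    for round_costs in costs:
        picks.append(max(range(k), key=lambda i: weights[i]))
        # exponentially down-weight strategies in proportion to their cost
        for i, c in enumerate(round_costs):
            weights[i] *= math.exp(-eta * c)
    return picks
```

After one round of seeing strategy 0 pay cost 1 and strategy 1 pay cost 0, the weights already favor strategy 1 for all later rounds.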
DSybil: Optimal Sybil-Resistance for Recommendation Systems
, 2009
Abstract

Cited by 22 (4 self)
Recommendation systems can be attacked in various ways, and the ultimate attack form is reached with a sybil attack, where the attacker creates a potentially unlimited number of sybil identities to vote. Defending against sybil attacks is often quite challenging, and the nature of recommendation systems makes it even harder. This paper presents DSybil, a novel defense for diminishing the influence of sybil identities in recommendation systems. DSybil provides strong provable guarantees that hold even under the worst-case attack and are optimal. DSybil can defend against an unlimited number of sybil identities over time. DSybil achieves its strong guarantees by i) exploiting the heavy-tail distribution of the typical voting behavior of the honest identities, and ii) carefully identifying whether the system is already getting “enough help” from the (weighted) voters already taken into account or whether more “help” is needed. Our evaluation shows that DSybil would continue to provide high-quality recommendations even when a million-node botnet uses an optimal strategy to launch a sybil attack.
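A toy version of trust-weighted voting illustrates the "enough help" idea: identities vote with their trust weight, and trust is updated by outcomes. This generic scheme is only a sketch, not DSybil's actual growth-function-based algorithm:

```python
def weighted_vote(trust, votes):
    """Recommend an object if the trust-weighted support exceeds half the
    total trust. `votes[i]` is True if identity i voted for the object."""
    support = sum(t for t, v in zip(trust, votes) if v)
    return support > sum(trust) / 2

def update_trust(trust, votes, object_was_good, grow=2.0, shrink=0.5):
    """Multiplicatively reward identities whose vote matched the outcome
    and penalize the rest (a generic update, not DSybil's exact rule)."""
    return [t * (grow if v == object_was_good else shrink)
            for t, v in zip(trust, votes)]
```

Under any scheme of this flavor, sybils that vote wrongly see their trust decay geometrically, so their aggregate influence stays bounded over time.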
The influence limiter: Provably manipulation-resistant recommender systems
 In Proceedings of the ACM Recommender Systems Conference (RecSys'07)
, 2007
Abstract

Cited by 18 (8 self)
This appendix should be read in conjunction with the article by Resnick and Sami [1]. Here, we include the proofs that were omitted from the main article due to shortage of space. A.1 Lemma 5. Lemma 5: For the quadratic scoring rule (MSE) loss, for all q, u ∈ [0,1], GF(q‖u) ≥ D(q‖u)/2. Proof of Lemma 5: Because both D(q‖u) = D(1−q‖1−u) and GF(q‖u) = GF(1−q‖1−u), we can assume u ≥ q without loss of generality. Keeping q fixed, we want to show that the result holds for all u. Note that D(q‖q) = GF(q‖q) = 0. Thus, differentiating with respect to u, it is sufficient to prove that GF′(q‖u) ≥ D′(q‖u)/2 for all q ≤ u ≤ 1. We change variables by setting y = u − q. We use the notation D′(y) to denote D′(q‖u) at u = q + y, treating q as fixed and implicit. Likewise, we use the notation GF′(y). For brevity, we use q̄ to denote (1 − q). D(q‖u) = q[(q̄ − y)² − q̄²] + q̄[(q + y)² − q²] = q[y² − 2yq̄] + q̄[y² + 2qy] = y² ⇒ D′(y) = 2y. GF(q‖u) = q log(1 + y² − 2q̄y) + q̄ log(1 + y² + 2qy)
The information cost of manipulation-resistance in recommender systems
 In RecSys '08: Proceedings of the 2008 ACM Conference on Recommender Systems
Abstract

Cited by 13 (2 self)
Attackers may seek to manipulate recommender systems in order to promote or suppress certain items. Existing defenses based on analysis of ratings also discard useful information from honest raters. In this paper, we show that this is unavoidable and provide a lower bound on how much information must be discarded. We use an information-theoretic framework to exhibit a fundamental tradeoff between manipulation-resistance and optimal use of genuine ratings in recommender systems. We define a recommender system to be (n, c)-robust if an attacker with n sybil identities cannot cause more than c units of damage to predictions. We prove that any robust recommender system must also discard Ω((log n)/c) units of useful information from each genuine rater.
Friend or Frenemy? Predicting Signed Ties in Social Networks
Abstract

Cited by 4 (0 self)
We study the problem of labeling the edges of a social network graph (e.g., acquaintance connections in Facebook) as either positive (i.e., trust, true friendship) or negative (i.e., distrust, possible frenemy) relations. Such signed relations provide a much stronger signal for predicting the behavior of online users than the unipolar Homophily effect, yet are largely unavailable, as most social graphs only contain unsigned edges. We show the surprising fact that it is possible to infer signed social ties with good accuracy solely based on users’ decision-making behavior (or using only a small fraction of supervision information) via unsupervised and semi-supervised algorithms. This work hereby makes it possible to turn an unsigned acquaintance network (e.g. Facebook, Myspace) into a signed trust-distrust network (e.g. Epinions, Slashdot). Our results are based on a mixed effects framework that simultaneously captures users’ behavior, social interactions, and the interplay between the two. The framework includes a series of latent factor models, and it accommodates the principles of balance and status from social psychology. Experiments on Epinions and Yahoo! Pulse networks illustrate that (1) signed social ties can be predicted with high accuracy even in fully unsupervised settings, and (2) the predicted signed ties are significantly more useful for social behavior prediction than simple Homophily.
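The balance principle the framework accommodates ("the friend of my friend is my friend; the enemy of my enemy is my friend") already yields a simple heuristic predictor. The sketch below is a toy majority vote over common neighbors, far simpler than the paper's latent factor models:

```python
def predict_sign(known_signs, u, w):
    """Predict the sign of edge (u, w) by structural balance: each common
    neighbor v votes sign(u, v) * sign(v, w); the majority wins.
    `known_signs` maps frozenset({a, b}) -> +1 or -1."""
    nodes = {x for edge in known_signs for x in edge}
    votes = 0
    for v in nodes - {u, w}:
        e1, e2 = frozenset({u, v}), frozenset({v, w})
        if e1 in known_signs and e2 in known_signs:
            votes += known_signs[e1] * known_signs[e2]
    return 1 if votes >= 0 else -1
```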
A Case for Neuromorphic ISAs
Abstract

Cited by 3 (1 self)
The desire to create novel computing systems, paired with recent advances in neuroscientific understanding of the brain, has led researchers to develop neuromorphic architectures that emulate the brain. To date, such models are developed, trained, and deployed on the same substrate. However, excessive codependence between the substrate and the algorithm prevents portability, or at the very least requires reconstructing and retraining the model whenever the substrate changes. This paper proposes a well-defined abstraction layer – the Neuromorphic Instruction Set Architecture, or NISA – that separates a neural application’s algorithmic specification from the underlying execution substrate, and describes the Aivo framework, which demonstrates the concrete advantages of such an abstraction layer. Aivo consists of a NISA implementation for a rate-encoded neuromorphic system based on the cortical column
Online Collaborative Filtering with Nearly Optimal Dynamic Regret
Abstract

Cited by 1 (0 self)
We consider a model for sequential online decision-making by many diverse agents. On each day, each agent makes a decision, and pays a penalty if it is a mistake. Obviously, it would be good for agents to avoid repeating the same mistakes made by other agents; however, difficulty may arise when some agents disagree over what constitutes a mistake, perhaps maliciously. As a metric of success for this problem, we consider dynamic regret, i.e., regret versus the offline optimal sequence of decisions. Previous regret bounds usually use the much weaker notion of static regret, i.e., regret versus the best single decision in hindsight. We assume there is a set of “honest” players whose valuations for the decisions at each time step are identical. No assumptions are made about the remaining players, and the algorithm assumes no information about which are the honest players. We present an algorithm for this setting whose expected dynamic regret per honest player is optimal up to a multiplicative constant and an additive polylogarithmic term, assuming the number of options is bounded.
Abstract
An attacker can draw attention to items that don’t deserve that attention by manipulating recommender systems. We describe an influence-limiting algorithm that can turn existing recommender systems into manipulation-resistant systems. Honest reporting is the optimal strategy for raters who wish to maximize their influence. If an attacker can create only a bounded number of shills, the attacker can mislead only a small amount. However, the system eventually makes full use of information from honest, informative raters. We describe both the influence limits and the information loss incurred due to those limits in terms of information-theoretic concepts of loss functions and entropies.
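The influence-limiting idea can be caricatured as capping how far any single rater may move a prediction by that rater's earned reputation. The sketch below is only a toy cap, not the Resnick-Sami update rule, which is defined via scoring rules and reputation earned from accurate ratings:

```python
def capped_prediction(prior, ratings, reputations):
    """Fold in each rater's rating in turn, moving the running prediction
    toward it by at most that rater's reputation."""
    p = prior
    for r, rep in zip(ratings, reputations):
        step = max(-rep, min(rep, r - p))  # influence bounded by reputation
        p += step
    return p
```

Under this kind of cap, a fresh shill with near-zero reputation can shift the prediction only marginally, while a long-standing accurate rater moves it freely.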