Results 11–20 of 939
Fountain codes
 In IEE Proceedings - Communications
, 2005
"... Fountain codes are recordbreaking sparsegraph codes for channels with erasures – such as the internet, where files are transmitted in multiple small packets, each of which is either received without error or not received. Standard filetransfer protocols simply chop a file up into K packetsized p ..."
Abstract

Cited by 66 (0 self)
Fountain codes are record-breaking sparse-graph codes for channels with erasures – such as the internet, where files are transmitted in multiple small packets, each of which is either received without error or not received. Standard file-transfer protocols simply chop a file up into K packet-sized pieces, then repeatedly transmit each packet until it is successfully received. A back channel is required for the transmitter to find out which packets need retransmitting. In contrast, fountain codes make packets that are random functions of the whole file. The transmitter sprays packets at the receiver without any knowledge of which packets are received. Once the receiver has received any N packets, where N is just slightly greater than the original file size K, he can recover the whole file. In this paper I review random linear fountain codes, LT codes, and raptor codes. The computational costs of the best fountain codes are astonishingly small, scaling linearly with the file size.

1 Erasure channels

Channels with erasures are of great importance. For example, files sent over the internet are chopped into packets, and each packet is either received without error or not received. Noisy channels to which good error-correcting codes have been applied also behave like erasure channels: much of the time, the error-correcting code performs perfectly; occasionally, the decoder fails, and reports that it has failed, so the receiver knows the whole packet has been lost. A simple channel model describing this situation is a q-ary erasure channel (figure 1), which has (for all inputs in the input alphabet {0, 1, 2, ..., q − 1}) a probability 1 − f of transmitting the input without error, and probability f of delivering the output ‘?’.
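The random linear fountain the abstract describes can be sketched in a few lines. The following toy (a minimal sketch, not the paper's optimized LT/raptor construction: blocks are small integers, each packet XORs a uniformly random nonempty subset of the K source blocks, and the receiver decodes by Gaussian elimination over GF(2)):

```python
import random

def encode_packet(blocks, rng):
    """One fountain-coded packet: the XOR of a uniformly random nonempty
    subset of the K source blocks (LT/raptor codes instead use a carefully
    designed degree distribution to make decoding cheap)."""
    K = len(blocks)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(K)   # which blocks this packet combines
    payload = 0
    for i in range(K):
        if (mask >> i) & 1:
            payload ^= blocks[i]
    return mask, payload

def decode(packets, K):
    """Gaussian elimination over GF(2). Each packet is (coefficient mask,
    payload); returns the K source blocks once the packets reach full rank,
    else None (receiver keeps collecting packets)."""
    pivots = {}  # pivot column -> (mask, payload)
    for mask, payload in packets:
        for col in sorted(pivots):           # forward-reduce against pivots
            if (mask >> col) & 1:
                pmask, ppayload = pivots[col]
                mask ^= pmask
                payload ^= ppayload
        if mask:                             # new pivot at lowest set bit
            pivots[(mask & -mask).bit_length() - 1] = (mask, payload)
    if len(pivots) < K:
        return None
    out = [0] * K                            # back-substitution
    for col in sorted(pivots, reverse=True):
        mask, payload = pivots[col]
        val = payload
        for j in range(col + 1, K):
            if (mask >> j) & 1:
                val ^= out[j]
        out[col] = val
    return out
```

As in the abstract, the receiver succeeds once it has collected any N packets whose masks have full rank, with N typically only slightly larger than K.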
Dependency parsing by belief propagation
 In Proceedings of EMNLP
, 2008
"... We formulate dependency parsing as a graphical model with the novel ingredient of global constraints. We show how to apply loopy belief propagation (BP), a simple and effective tool for approximate learning and inference. As a parsing algorithm, BP is both asymptotically and empirically efficient. E ..."
Abstract

Cited by 65 (7 self)
We formulate dependency parsing as a graphical model with the novel ingredient of global constraints. We show how to apply loopy belief propagation (BP), a simple and effective tool for approximate learning and inference. As a parsing algorithm, BP is both asymptotically and empirically efficient. Even with second-order features or latent variables, which would make exact parsing considerably slower or NP-hard, BP needs only O(n³) time with a small constant factor. Furthermore, such features significantly improve parse accuracy over exact first-order methods. Incorporating additional features would increase the runtime additively rather than multiplicatively.
Parameter estimation for text analysis
, 2004
"... Abstract. Presents parameter estimation methods common with discrete probability distributions, which is of particular interest in text modeling. Starting with maximum likelihood, a posteriori and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. ..."
Abstract

Cited by 59 (0 self)
This paper presents parameter estimation methods for discrete probability distributions, which are of particular interest in text modeling. Starting with maximum likelihood, maximum a posteriori, and Bayesian estimation, central concepts such as conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail, with a full derivation of an approximate inference algorithm based on Gibbs sampling, including a discussion of Dirichlet hyperparameter estimation. Finally, analysis methods for LDA models are discussed.
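The Gibbs-sampling inference the paper derives is the standard collapsed sampler, whose full conditional is p(z=k | rest) ∝ (n_dk + α)(n_kw + β)/(n_k + Vβ). A minimal sketch (function and variable names are illustrative, not the paper's):

```python
import random

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Collapsed Gibbs sampler for LDA.

    docs: list of documents, each a list of word ids in [0, V).
    Returns the final topic assignment z for every token."""
    rng = random.Random(seed)
    ndk = [[0] * K for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(K)]    # topic-word counts
    nk = [0] * K                         # topic totals
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):       # initialise counts
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]              # remove token from the counts
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = k | rest) of the collapsed model
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(K)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k              # put token back with its new topic
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z
```

Topic and word distributions are then read off the counts (e.g. φ_kw ∝ n_kw + β), as in the paper's derivation.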
SybilInfer: Detecting Sybil Nodes using Social Networks
"... SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference ..."
Abstract

Cited by 59 (5 self)
SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage that each label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state-of-the-art algorithms, both in being more widely applicable and in providing vastly more accurate results.
A Contextual-Bandit Approach to Personalized News Article Recommendation
"... Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamic ..."
Abstract

Cited by 59 (11 self)
Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web services feature dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are fast in both learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are threefold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data are scarcer.
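The algorithm this paper proposes (LinUCB) keeps per-arm ridge-regression statistics and adds an upper-confidence bonus to each arm's predicted reward. A minimal sketch of the per-arm state, assuming d-dimensional feature vectors (class and helper names are mine, not the paper's):

```python
import math

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting;
    A here is small and well conditioned (identity plus rank-one updates)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

class LinUCBArm:
    """Ridge-regression state for one arm: A = I + sum x x^T, b = sum r x."""

    def __init__(self, d):
        self.A = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
        self.b = [0.0] * d

    def ucb(self, x, alpha):
        """Predicted reward theta.x plus exploration bonus alpha*sqrt(x^T A^-1 x)."""
        theta = solve(self.A, self.b)
        Ainv_x = solve(self.A, x)
        mean = sum(t * xi for t, xi in zip(theta, x))
        return mean + alpha * math.sqrt(sum(xi * ai for xi, ai in zip(x, Ainv_x)))

    def update(self, x, reward):
        """Fold the observed (context, click) pair into the statistics."""
        for i in range(len(x)):
            for j in range(len(x)):
                self.A[i][j] += x[i] * x[j]
            self.b[i] += reward * x[i]
```

At each round the learner computes `ucb` for every candidate article under the current user context and serves the argmax, which is exactly the "select, observe click, adapt" loop the abstract describes.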
Decentralised Coordination of Low-Power Embedded Devices Using the Max-Sum Algorithm
 In: 7th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-08)
, 2008
"... This paper considers the problem of performing decentralised coordination of lowpower embedded devices (as is required within many environmental sensing and surveillance applications). Specifically, we address the generic problem of maximising social welfare within a group of interacting agents. We ..."
Abstract

Cited by 53 (19 self)
This paper considers the problem of performing decentralised coordination of low-power embedded devices (as is required within many environmental sensing and surveillance applications). Specifically, we address the generic problem of maximising social welfare within a group of interacting agents. We propose a novel representation of the problem, as a cyclic bipartite factor graph composed of variable and function nodes (representing the agents’ states and utilities respectively). We show that this representation allows us to use an extension of the max-sum algorithm to generate approximate solutions to this global optimisation problem through local decentralised message passing. We empirically evaluate this approach on a canonical coordination problem (graph colouring), and benchmark it against state-of-the-art approximate and complete algorithms (DSA and DPOP). We show that our approach is robust to lossy communication, that it generates solutions closer to those of DPOP than DSA is able to, and that it does so with a communication cost (in terms of total message size) that scales very well with the number of agents in the system (compared to the exponential increase of DPOP). Finally, we describe a hardware implementation of our algorithm operating on low-power Chipcon CC2431 System-on-Chip sensor nodes.
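The message-passing scheme on the bipartite factor graph can be sketched as a synchronous max-sum loop (a minimal centralised sketch of the algorithm's messages; the representation and names are illustrative, not the paper's decentralised hardware implementation):

```python
import itertools

def max_sum(domains, factors, iters=10):
    """Synchronous max-sum on a bipartite factor graph.

    domains: {var: list of values}; factors: list of (vars_tuple, utility_fn)
    where utility_fn takes a {var: value} dict. Returns an assignment that
    is exact on trees and approximate on cyclic (loopy) graphs."""
    # function -> variable messages, initially zero
    msgs = {('f', fi, v): {x: 0.0 for x in domains[v]}
            for fi, (vs, _) in enumerate(factors) for v in vs}
    for _ in range(iters):
        # variable -> function: sum of messages from the *other* factors
        vmsgs = {}
        for fi, (vs, _) in enumerate(factors):
            for v in vs:
                vmsgs[(v, fi)] = {
                    x: sum(msgs[('f', fj, v)][x]
                           for fj, (ws, _) in enumerate(factors)
                           if v in ws and fj != fi)
                    for x in domains[v]}
        # function -> variable: maximise utility plus incoming messages
        new_f = {}
        for fi, (vs, util) in enumerate(factors):
            for v in vs:
                out = {x: float('-inf') for x in domains[v]}
                for values in itertools.product(*(domains[w] for w in vs)):
                    a = dict(zip(vs, values))
                    score = util(a) + sum(vmsgs[(w, fi)][a[w]]
                                          for w in vs if w != v)
                    out[a[v]] = max(out[a[v]], score)
                new_f[('f', fi, v)] = out
        msgs = new_f
    # each variable locally picks the value with the best summed message
    return {v: max(domains[v],
                   key=lambda x: sum(msgs[('f', fi, v)][x]
                                     for fi, (vs, _) in enumerate(factors)
                                     if v in vs))
            for v in domains}
```

On the paper's graph-colouring benchmark each utility function would reward neighbouring agents for choosing different colours; crucially, every message involves only a variable and its adjacent functions, which is what makes the decentralised deployment possible.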
Flux compactification
"... Contents We review recent work in which compactifications of string and M theory are constructed in which all scalar fields (moduli) are massive, and supersymmetry is broken with a small positive cosmological constant, features needed to reproduce real world physics. We explain how this work implies ..."
Abstract

Cited by 53 (2 self)
We review recent work in which compactifications of string and M theory are constructed in which all scalar fields (moduli) are massive, and supersymmetry is broken with a small positive cosmological constant, features needed to reproduce real-world physics. We explain how this work implies that there is a “landscape” of string/M theory vacua, perhaps containing many candidates
Bayesian Compressed Sensing via Belief Propagation
, 2010
"... Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, subNyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can comple ..."
Abstract

Cited by 51 (12 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
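Full CS-BP decoding is involved, but the core idea of decoding on a sparse measurement graph can be illustrated with a drastically simplified "peeling" decoder. This toy (my construction, not the paper's algorithm) assumes a nonnegative sparse signal measured by a sparse 0/1 matrix: a zero residual forces every unresolved coordinate in that measurement to zero, and a measurement with a single unresolved coordinate pins its value:

```python
def peel_decode(measurements, N):
    """Toy graph-based decoder for a nonnegative sparse signal.

    measurements: list of (indices, value) with value = sum of x[i] over
    the given indices. Returns x, or None if peeling stalls."""
    x = [None] * N
    progress = True
    while progress:
        progress = False
        for idx, val in measurements:
            unknown = [i for i in idx if x[i] is None]
            if not unknown:
                continue
            resid = val - sum(x[i] for i in idx if x[i] is not None)
            if resid == 0:               # nonnegativity: all unknowns are 0
                for i in unknown:
                    x[i] = 0
                progress = True
            elif len(unknown) == 1:      # singleton: value is pinned down
                x[unknown[0]] = resid
                progress = True
    return x if all(v is not None for v in x) else None
```

The sparse-matrix structure is what keeps each decoding step local, which is also what makes the paper's BP formulation fast.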
Detecting the overlapping and hierarchical community structure in complex networks
 New J. Phys.
, 2009
"... Abstract. Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most im ..."
Abstract

Cited by 48 (0 self)
Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most important problems in the field of complex networks. Networks often show a hierarchical organization, with communities embedded within other communities; moreover, nodes can be shared between different communities. Here we present the first algorithm that finds both overlapping communities and the hierarchical structure. The method is based on the local optimization of a fitness function. Community structure is revealed by peaks in the fitness histogram. The resolution can be tuned by a parameter, enabling investigation of different hierarchical levels of organization. Tests on real and artificial networks give excellent results. PACS numbers: 89.75.-k, 89.75.Hc, 05.40.-a, 89.75.Fb, 87.23.Ge
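The local fitness optimization can be sketched directly. Assuming the common community-fitness form f(C) = k_in / (k_in + k_out)^α, with α the resolution parameter, a greedy growth step from a seed node looks like the following (a simplified sketch: the full method also removes nodes whose contribution is negative and repeats from new seeds to find overlaps and hierarchy):

```python
def fitness(community, adj, alpha=1.0):
    """f(C) = k_in / (k_in + k_out)^alpha, where k_in counts both endpoints
    of internal edges and k_out counts edges leaving the community."""
    k_in = k_out = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                k_in += 1
            else:
                k_out += 1
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

def grow_community(seed, adj, alpha=1.0):
    """Greedy local optimization: repeatedly add the neighbour that most
    increases the fitness, stopping when no addition helps."""
    community = {seed}
    while True:
        neighbours = {v for u in community for v in adj[u]} - community
        base = fitness(community, adj, alpha)
        best, best_gain = None, 0.0
        for v in neighbours:
            gain = fitness(community | {v}, adj, alpha) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            return community
        community.add(best)
```

Sweeping α then exposes the different hierarchical levels the abstract mentions: large α favours small, tight communities, small α favours large ones.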
Fast approximate energy minimization with label costs
, 2010
"... The αexpansion algorithm [7] has had a significant impact in computer vision due to its generality, effectiveness, and speed. Thus far it can only minimize energies that involve unary, pairwise, and specialized higherorder terms. Our main contribution is to extend αexpansion so that it can simult ..."
Abstract

Cited by 44 (6 self)
The α-expansion algorithm [7] has had a significant impact in computer vision due to its generality, effectiveness, and speed. Thus far it can only minimize energies that involve unary, pairwise, and specialized higher-order terms. Our main contribution is to extend α-expansion so that it can simultaneously optimize “label costs” as well. An energy with label costs can penalize a solution based on the set of labels that appear in it. The simplest special case is to penalize the number of labels in the solution. Our energy is quite general, and we prove optimality bounds for our algorithm. A natural application of label costs is multi-model fitting, and we demonstrate several such applications in vision: homography detection, motion segmentation, and unsupervised image segmentation. Our C++/MATLAB implementation is publicly available.
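The energy with label costs can be written down directly: E(f) = Σ_p D_p(f_p) + Σ_{pq} V(f_p, f_q) + Σ_{l used} h_l, where the last sum runs over the distinct labels appearing in f. A minimal evaluator of this objective (my sketch of the energy form only; the α-expansion move-making machinery itself is a graph-cut algorithm well beyond a short example):

```python
def energy(labeling, unary, edges, smooth_cost, label_cost):
    """E(f) = sum_p D_p(f_p) + sum_{pq} V(f_p, f_q) + sum_{labels used} h_l.

    labeling: label per site; unary[p][l]: data cost D_p(l);
    edges: neighbouring site pairs; smooth_cost(l1, l2): pairwise V;
    label_cost[l]: h_l, paid once for each label present in the solution."""
    data = sum(unary[p][labeling[p]] for p in range(len(labeling)))
    smooth = sum(smooth_cost(labeling[p], labeling[q]) for p, q in edges)
    labels_used = set(labeling)          # the label-cost term is set-based
    return data + smooth + sum(label_cost[l] for l in labels_used)
```

The set-based last term is what makes the energy non-local: in multi-model fitting it charges a fixed price per model actually used, so sparse explanations are preferred.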