Results 1–10 of 265
Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules
2002
Cited by 597 (42 self)
Abstract: In a recent Physical Review Letters paper, Vicsek et al. propose a simple but compelling discrete-time model of n autonomous agents (i.e., points or particles) all moving in the plane with the same speed but with different headings. Each agent's heading is updated using a local rule based on the average of its own heading and the headings of its "neighbors." In their paper, Vicsek et al. provide simulation results demonstrating that the nearest-neighbor rule they study can cause all agents to eventually move in the same direction despite the absence of centralized coordination, and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves. This paper provides a theoretical explanation for this observed behavior. In addition, convergence results are derived for several other similarly inspired models.
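The heading-update rule described in this abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the neighborhood radius `r`, the speed `v`, and the function names are all assumptions.

```python
import numpy as np

def update_headings(pos, theta, r=1.0):
    """One step of the nearest-neighbor rule: each agent's new heading is
    the average of its own heading and those of agents within radius r."""
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        neighbors = np.linalg.norm(pos - pos[i], axis=1) <= r  # includes agent i itself
        # average headings via unit vectors to avoid wrap-around at +/- pi
        new_theta[i] = np.arctan2(np.sin(theta[neighbors]).mean(),
                                  np.cos(theta[neighbors]).mean())
    return new_theta

def step(pos, theta, v=0.1, r=1.0):
    """All agents move with the common speed v along their updated headings."""
    theta = update_headings(pos, theta, r)
    pos = pos + v * np.column_stack((np.cos(theta), np.sin(theta)))
    return pos, theta
```

Note that the neighbor sets are recomputed from the positions at every step, which is exactly why the convergence analysis is nontrivial: the averaging graph changes as the system evolves.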
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
Journal of Artificial Intelligence Research, 2004
Cited by 144 (8 self)
Abstract: We introduce a stochastic graph-based method for computing the relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked first in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set, including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank-with-threshold method outperforms the other degree-based techniques, including continuous LexRank. We also show that our approach is quite insensitive to noise in the data that may result from an imperfect topical clustering of documents.
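As a rough sketch of the idea (not the authors' implementation): build a thresholded cosine-similarity graph over sentence vectors and score sentences by the stationary distribution of a damped random walk on it, computed by power iteration as in PageRank. The threshold and damping values below are assumed for illustration.

```python
import numpy as np

def lexrank(vectors, threshold=0.1, damping=0.85, iters=100):
    """Score sentences by eigenvector centrality of a thresholded
    cosine-similarity graph (power iteration on the row-stochastic matrix)."""
    X = np.asarray(vectors, dtype=float)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = U @ U.T                               # cosine similarity matrix
    adj = (sim > threshold).astype(float)       # binary adjacency ("LexRank with threshold")
    P = adj / adj.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    n = len(X)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                      # damped power iteration
        scores = (1 - damping) / n + damping * (P.T @ scores)
    return scores / scores.sum()
```

The "continuous LexRank" variant mentioned in the abstract would instead keep the raw similarity weights rather than thresholding to a binary adjacency matrix.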
Input/Output HMMs for Sequence Processing
IEEE Transactions on Neural Networks, 1996
Cited by 97 (12 self)
Abstract: We consider problems of sequence processing and propose a solution based on a discrete-state model in order to represent past context. We introduce a recurrent connectionist architecture having a modular structure that associates a subnetwork with each state. The model has a statistical interpretation that we call the Input/Output Hidden Markov Model (IOHMM). It can be trained by the EM or GEM algorithms, treating state trajectories as missing data, which decouples temporal credit assignment from actual parameter estimation. The model presents similarities to hidden Markov models (HMMs), but it allows us to map input sequences to output sequences, using the same processing style as recurrent neural networks. IOHMMs are trained using a more discriminant learning paradigm than HMMs, while potentially taking advantage of the EM algorithm. We demonstrate that IOHMMs are well suited for solving grammatical inference problems. Experimental results are presented for a benchmark, the seven Tomita grammars, showing that these adaptive models can attain excellent generalization.
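A minimal sketch of the forward pass of a discrete IOHMM may clarify how it differs from a plain HMM: both the state-transition and output distributions are conditioned on the current input symbol. The toy parameters below are assumptions for illustration, not from the paper.

```python
import numpy as np

n_states, n_inputs, n_outputs = 2, 2, 2
# A[u]: state-transition matrix given input symbol u (rows: from-state)
A = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
# B[u]: output distribution per state given input symbol u
B = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
init = np.array([0.5, 0.5])

def iohmm_likelihood(inputs, outputs):
    """P(output sequence | input sequence), summing over state trajectories
    with the standard forward recursion."""
    alpha = init * B[inputs[0]][:, outputs[0]]
    for u, y in zip(inputs[1:], outputs[1:]):
        alpha = (alpha @ A[u]) * B[u][:, y]
    return alpha.sum()
```

In EM training, this forward pass (together with the backward pass) supplies the expected state occupancies that the abstract describes as "missing data."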
Finite Markov Chain Results in Evolutionary Computation: A Tour d'Horizon
1998
Cited by 57 (2 self)
Abstract: The theory of evolutionary computation has advanced rapidly during the last decade. This survey attempts to summarize the results regarding the limit and finite-time behavior of evolutionary algorithms with finite search spaces and a discrete time scale. Results on evolutionary algorithms beyond finite spaces and discrete time are also presented, but with reduced elaboration.
Keywords: evolutionary algorithms, limit behavior, finite-time behavior
1. Introduction. The field of evolutionary computation is mainly engaged in the development of optimization algorithms whose design is inspired by principles of natural evolution. In most cases, the optimization task is of the following type: find an element x* ∈ X such that f(x*) ≥ f(x) for all x ∈ X, where f : X → ℝ is the objective function to be maximized and X is the search set. In the terminology of evolutionary computation, an individual is represented by an element of the Cartesian product X × A, where A is a possibly ...
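The optimization task just defined can be illustrated with the simplest evolutionary algorithm on a finite search space, the (1+1)-EA on bitstrings. The OneMax objective below is a standard textbook example, not taken from the survey.

```python
import random

def one_plus_one_ea(f, n, steps=10000, seed=0):
    """(1+1)-EA: flip each bit independently with probability 1/n,
    and keep the offspring if it is not worse than the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [b ^ (rng.random() < 1 / n) for b in x]
        if f(y) >= f(x):
            x = y
    return x

# OneMax: maximize the number of ones; the optimum is the all-ones string
best = one_plus_one_ea(sum, 20)
```

Because the search space {0,1}^n is finite and time is discrete, the sequence of populations is exactly the kind of finite Markov chain whose limit behavior the survey analyzes.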
Power Control and Capacity of Spread Spectrum Wireless Networks
Automatica, 1999
Cited by 55 (5 self)
Abstract: Transmit power control is a central technique for resource allocation and interference management in spread-spectrum wireless networks. With the increasing popularity of spread spectrum as a multiple-access technique, there has been significant research in the area in recent years. While power control has traditionally been considered a means to counteract the harmful effect of channel fading, the more general emerging view is that it is a flexible mechanism for providing Quality of Service to individual users. In this paper, we review the main threads of ideas and results in the recent development of this area, with a bias towards issues that have been the focus of our own research. For different receivers of varying complexity, we study both questions about optimal power control and the problem of characterizing the resulting network capacity. Although spread-spectrum communications has traditionally been viewed as a physical-layer subject, we argue that by suitable abstr...
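A canonical example of the kind of power control discussed here is the distributed SINR-balancing iteration of Foschini and Miljanic (a standard technique in this literature, not necessarily the scheme studied in this paper): each user scales its transmit power by the ratio of its target SINR to its achieved SINR. The gain matrix and targets below are made-up numbers.

```python
import numpy as np

G = np.array([[1.0, 0.1, 0.1],      # G[i][j]: gain from transmitter j to receiver i
              [0.2, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
noise = 0.1
target = np.array([2.0, 2.0, 2.0])  # per-user target SINR

def sinr(p):
    """Signal-to-interference-plus-noise ratio of each user at power vector p."""
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    return signal / interference

p = np.ones(3)
for _ in range(200):                # Foschini-Miljanic update: p_i <- (target_i / sinr_i) * p_i
    p = target / sinr(p) * p
```

When the targets are feasible (the relevant interference matrix has spectral radius below one, as here), this iteration converges to the minimal power vector meeting all targets, using only each user's locally measured SINR.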
Exponential Stability for Nonlinear Filtering
1996
Cited by 53 (2 self)
Abstract: We study the a.s. exponential stability of the optimal filter with respect to its initial conditions. A bound is provided on the exponential rate (equivalently, on the memory length of the filter) for a general setting, both in discrete and in continuous time, in terms of Birkhoff's contraction coefficient. Criteria for exponential stability and explicit bounds on the rate are given in the specific cases of a diffusion process on a compact manifold, and of discrete-time Markov chains on both continuous and discrete countable state spaces.
Computable bounds for geometric convergence rates of Markov chains
1994
Cited by 51 (7 self)
Abstract: Recent results for geometrically ergodic Markov chains show that there exist constants R < ∞, ρ < 1 such that

  sup_{|f| ≤ V} | ∫ Pⁿ(x, dy) f(y) − ∫ π(dy) f(y) | ≤ R V(x) ρⁿ,

where π is the invariant probability measure and V is any solution of the drift inequalities

  ∫ P(x, dy) V(y) ≤ λ V(x) + b 1_C(x),

which are known to guarantee geometric convergence for λ < 1, b < ∞ and a suitable small set C. In this paper we identify for the first time computable bounds on R and ρ in terms of λ, b and the minorizing constants which guarantee the smallness of C. In the simplest case, where C is an atom α with P(α, α) ≥ δ, we can choose any ρ > ϑ, where

  [1 − ϑ]⁻¹ = (1 − λ)⁻² [ 1 − λ + b + b² + ζ_α (b(1 − λ) + b²) ]   and   ζ_α ≤ ((34 − 8δ²)/δ³) [ b/(1 − λ) ]²,

and we can then choose R ≤ ρ/[ρ − ϑ]. The bounds for general small sets C are similar but more complex. We apply these to simple queueing models and Markov chain Monte Carlo ...
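For a finite-state chain, the geometric rate appearing in bounds of this kind can be computed directly: total-variation convergence to π is governed by the second-largest eigenvalue modulus of the transition matrix. The 3-state matrix below is an arbitrary illustration, not an example from the paper.

```python
import numpy as np

# arbitrary 3-state transition matrix (rows sum to 1)
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# second-largest eigenvalue modulus governs the geometric rate rho
rho = sorted(np.abs(w))[-2]

def tv_dist(n):
    """max over starting states x of total-variation distance ||P^n(x,.) - pi||."""
    Pn = np.linalg.matrix_power(P, n)
    return max(0.5 * np.abs(Pn[x] - pi).sum() for x in range(3))
```

The point of the paper is that for general (possibly infinite) state spaces no such eigenvalue computation is available, and explicit bounds must instead come from the drift and minorization constants.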
Eigenvalues and Expansion of Regular Graphs
Journal of the ACM, 1995
Cited by 49 (1 self)
Abstract: The spectral method is the best currently known technique to prove lower bounds on expansion. Ramanujan graphs, which have asymptotically optimal second eigenvalue, are the best known explicit expanders. The spectral method yielded a lower bound of k/4 on the expansion of linear-sized subsets of k-regular Ramanujan graphs. We improve the lower bound on the expansion of Ramanujan graphs to approximately k/2. Moreover, we construct a family of k-regular graphs with asymptotically optimal second eigenvalue and linear expansion equal to k/2. This shows that k/2 is the best bound one can obtain using the second-eigenvalue method. We also show an upper bound of roughly 1 + √(k − 1) on the average degree of linear-sized induced subgraphs of Ramanujan graphs. This compares positively with the classical bound 2√(k − 1). As a byproduct, we obtain improved results on random walks on expanders and construct selection networks (resp. extrovert graphs) of smaller size (resp. degree) th...
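The spectral quantities involved are easy to compute for a concrete graph. For instance, the 3-regular Petersen graph satisfies the Ramanujan condition max_{i≥2} |λ_i| ≤ 2√(k − 1); this particular example graph is an illustration chosen here, not one from the paper.

```python
import numpy as np

# Petersen graph: outer 5-cycle, inner pentagram, spokes between them (3-regular)
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
edges += [(i, 5 + i) for i in range(5)]                 # spokes
A = np.zeros((10, 10))
for i, j in edges:
    A[i, j] = A[j, i] = 1

k = int(A.sum(axis=0)[0])                # regularity degree
lam = np.sort(np.linalg.eigvalsh(A))     # adjacency eigenvalues, ascending
second = max(abs(lam[0]), lam[-2])       # largest nontrivial eigenvalue modulus
ramanujan = second <= 2 * (k - 1) ** 0.5
```

For a k-regular graph the top eigenvalue is always k; it is this second quantity that the spectral method converts into expansion lower bounds.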
A New Cluster Algorithm for Graphs
National Research Institute for Mathematics and Computer Science in the ..., 1998
Cited by 47 (2 self)
Abstract: A new cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced.
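The MCL algorithm alternates two matrix operations on a column-stochastic matrix derived from the graph: expansion (matrix squaring, i.e., a two-step random walk) and inflation (entrywise powering followed by column renormalization). A minimal sketch follows, assuming a symmetric adjacency matrix with added self-loops; the parameter names and the cluster-extraction helper are illustrative, not from the paper.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Markov Cluster (MCL) sketch: iterate expansion and inflation until the
    walk matrix settles; clusters are read off from the converged matrix."""
    A = np.asarray(adj, dtype=float) + np.eye(len(adj))  # add self-loops
    M = A / A.sum(axis=0, keepdims=True)                 # column-stochastic
    for _ in range(iters):
        M = M @ M                                        # expansion: two-step walk
        M = M ** inflation                               # inflation: boost strong edges
        M = M / M.sum(axis=0, keepdims=True)             # renormalize columns
    return M

def clusters_from(M, tol=1e-6):
    """Group columns (nodes) that flow to the same set of attractor rows."""
    groups = {}
    for j in range(M.shape[1]):
        attractors = tuple(np.nonzero(M[:, j] > tol)[0])
        groups.setdefault(attractors, []).append(j)
    return list(groups.values())
```

Inflation is the only nonlinear step: raising entries to a power > 1 and renormalizing sharpens the walk's preference for within-cluster edges, which is what makes the process separate the graph into clusters.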