Results 11–20 of 257
Energy-efficient scheduling of packet transmissions over wireless networks
 in Proc. INFOCOM Conf.
"... Abstract—The paper develops algorithms for minimizing the energy required to transmit packets in a wireless environment. It is motivated by the following observation: In many channel coding schemes it is possible to significantly lower the transmission energy by transmitting packets over a long peri ..."
Abstract

Cited by 104 (3 self)
Abstract—The paper develops algorithms for minimizing the energy required to transmit packets in a wireless environment. It is motivated by the following observation: in many channel coding schemes it is possible to significantly lower the transmission energy by transmitting packets over a long period of time. Based on this observation, we show that for a variety of scenarios the offline energy-efficient transmission scheduling problem reduces to a convex optimization problem. Unlike for the special case of a single transmitter-receiver pair studied in [5], the problem does not, in general, admit a closed-form solution when there are multiple users. By exploiting the special structure of the problem, however, we are able to devise energy-efficient transmission schedules. For the downlink channel, with a single transmitter and multiple receivers, we devise an iterative algorithm, called MoveRight, that yields the optimal offline schedule. The MoveRight algorithm also optimally solves the downlink problem with additional constraints imposed by …
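For intuition, the single-user offline problem described in this abstract can be written as a small convex program (an illustrative formulation with generic notation, not the paper's exact model):

```latex
\min_{\tau_1,\dots,\tau_N}\; \sum_{i=1}^{N} w(\tau_i)
\qquad \text{s.t.} \qquad \sum_{i=1}^{N} \tau_i \le T, \quad \tau_i > 0,
```

where $\tau_i$ is the transmission duration of packet $i$, $T$ is the overall deadline, and $w(\tau)$ is the convex, decreasing energy needed to send one packet in time $\tau$. Convexity of $w$ is what makes slow, evenly spread transmissions energy-efficient.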
Testing that distributions are close
 In IEEE Symposium on Foundations of Computer Science
, 2000
"... Given two distributions over an n element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n 2/3 ɛ −4 log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions ..."
Abstract

Cited by 81 (16 self)
Given two distributions over an n-element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^{2/3} ε^{-4} log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the L1 distance between the distributions is small (less than max(ε²/(32·∛n), ε/(4√n))) or large (more than ε). We also give an Ω(n^{2/3} ε^{-2/3}) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
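For contrast with the sublinear tester described in this abstract, the naive plug-in approach looks like this (a minimal sketch, not the paper's algorithm; it needs on the order of n samples to be reliable, which is exactly what the sublinear tester avoids):

```python
from collections import Counter

def empirical_l1_distance(samples_p, samples_q):
    """Plug-in estimate of the L1 distance between two distributions
    from i.i.d. samples: compare the empirical frequencies directly.
    A naive baseline for intuition, not the sublinear tester."""
    cp, cq = Counter(samples_p), Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    support = set(cp) | set(cq)  # Counter returns 0 for missing keys
    return sum(abs(cp[x] / n_p - cq[x] / n_q) for x in support)

# identical sample sets have distance 0; disjoint ones have distance 2
assert empirical_l1_distance([1, 2, 2], [1, 2, 2]) == 0.0
assert empirical_l1_distance([0] * 10, [1] * 10) == 2.0
```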
Extracting all the Randomness and Reducing the Error in Trevisan's Extractors
 In Proceedings of the 31st Annual ACM Symposium on Theory of Computing
, 1999
"... We give explicit constructions of extractors which work for a source of any minentropy on strings of length n. These extractors can extract any constant fraction of the minentropy using O(log² n) additional random bits, and can extract all the minentropy using O(log³ n) addition ..."
Abstract

Cited by 79 (17 self)
We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log² n) additional random bits, and can extract all the min-entropy using O(log³ n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log³ n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) − (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term.
An information-theoretic approach to automatic query expansion
 ACM Transactions on Information Systems
, 2001
"... Techniques for automatic query expansion from top retrieved documents have shown promise for improving retrieval effectiveness on large collections; however, they often rely on an empirical ground, and there is a shortage of crosssystem comparisons. Using ideas from Information Theory, we present a ..."
Abstract

Cited by 73 (1 self)
Techniques for automatic query expansion from top retrieved documents have shown promise for improving retrieval effectiveness on large collections; however, they often rest on purely empirical grounds, and there is a shortage of cross-system comparisons. Using ideas from information theory, we present a computationally simple and theoretically justified method for assigning scores to candidate expansion terms. Such scores are used to select and weight expansion terms within Rocchio's framework for query reweighting. We compare ranking with information-theoretic query expansion versus ranking with other query expansion techniques, showing that the former achieves better retrieval effectiveness on several performance measures. We also discuss the effect on retrieval effectiveness of the main parameters involved in automatic query expansion, such as data sparseness, query difficulty, number of selected documents, and number of selected terms, pointing out interesting relationships.
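One common way to realize such an information-theoretic score is to rank a candidate term by its contribution to the Kullback-Leibler divergence between its frequency in the top-retrieved documents and in the whole collection. This is a hedged sketch of that idea; the function name and exact normalization are illustrative, not taken from the paper:

```python
import math

def kld_term_score(p_rel: float, p_coll: float) -> float:
    """Score a candidate expansion term by its contribution to the
    KL divergence between the term distribution in the pseudo-relevant
    documents (p_rel) and in the whole collection (p_coll).

    Terms far more frequent in the top-retrieved documents than in the
    collection at large get high scores."""
    if p_rel == 0.0:
        return 0.0  # absent terms contribute nothing
    return p_rel * math.log(p_rel / p_coll)

# a term twice as frequent in the relevant set scores positively
assert kld_term_score(0.02, 0.01) > 0.0
assert kld_term_score(0.0, 0.01) == 0.0
```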
Information Based Adaptive Robotic Exploration
 in Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
, 2002
"... Exploration involving mapping and concurrent localization in an unknown environment is a pervasive task in mobile robotics. In general, the accuracy of the mapping process depends directly on the accuracy of the localization process. This paper address the problem of maximizing the accuracy of the m ..."
Abstract

Cited by 66 (0 self)
Exploration involving mapping and concurrent localisation in an unknown environment is a pervasive task in mobile robotics. In general, the accuracy of the mapping process depends directly on the accuracy of the localisation process. This paper addresses the problem of maximizing the accuracy of the map building process during exploration by adaptively selecting control actions that maximize localisation accuracy. The map building and exploration task is modeled using an Occupancy Grid (OG), with concurrent localisation performed using a feature-based Simultaneous Localisation And Mapping (SLAM) algorithm. Adaptive sensing aims at maximizing the map information by simultaneously maximizing the expected Shannon information gain (mutual information) on the OG map and minimizing the uncertainty of the vehicle pose and the map features in the SLAM process. The resulting map building system is demonstrated in an indoor environment using data from a laser scanner mounted on a mobile platform.
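The Shannon information measure driving such adaptive sensing can be sketched in a few lines (an illustrative computation of occupancy-grid entropy only, not of the full SLAM/exploration system described in the paper):

```python
import math

def cell_entropy(p: float) -> float:
    """Shannon entropy (bits) of a binary occupancy-grid cell with
    occupancy probability p; maximal at p = 0.5 (fully unknown)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def map_entropy(grid) -> float:
    """Total entropy of an occupancy grid (list of rows of occupancy
    probabilities): the quantity an information-driven exploration
    strategy tries to drive down by choosing sensing actions."""
    return sum(cell_entropy(p) for row in grid for p in row)

# unknown cells (p = 0.5) carry 1 bit each; known cells carry none
assert map_entropy([[0.5, 0.5], [0.0, 1.0]]) == 2.0
```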
An Experiment in Integrated Exploration
 In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)
, 2002
"... Integrated exploration strategy advocated in this paper refers to a tight coupling between the tasks of localization, mapping, and motion control and the effect of this coupling on the overall effectiveness of an exploration strategy. Our approach to exploration calls for a balanced evaluation of al ..."
Abstract

Cited by 52 (0 self)
The integrated exploration strategy advocated in this paper refers to a tight coupling between the tasks of localization, mapping, and motion control, and to the effect of this coupling on the overall effectiveness of an exploration strategy. Our approach to exploration calls for a balanced evaluation of alternative motion actions from the point of view of information gain, localization quality, and navigation cost. To provide a uniform basis for comparing localization quality between different locations, a "localizability" metric is introduced. It is based on an estimate of the lowest vehicle pose covariance attainable from a given location.
Real-time particle filters
 Proceedings of the IEEE
, 2004
"... ctkwok,fox£ Particle filters estimate the state of dynamical systems from sensor information. In many real time applications of particle filters, however, sensor information arrives at a significantly higher rate than the update rate of the filter. The prevalent approach to dealing with such situati ..."
Abstract

Cited by 46 (2 self)
Particle filters estimate the state of dynamical systems from sensor information. In many real-time applications of particle filters, however, sensor information arrives at a significantly higher rate than the update rate of the filter. The prevalent approach to dealing with such situations is to update the particle filter as often as possible and to discard sensor information that cannot be processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update rate is below the update rate of the sensors. This is achieved by representing posteriors as mixtures of sample sets, where each mixture component integrates one observation arriving during a filter update. The weights of the mixture components are set so as to minimize the approximation error introduced by the mixture representation. Thereby, our approach focuses computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach yields strong improvements over other approaches.
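The mixture-of-sample-sets representation can be sketched as follows (a minimal illustration of drawing from such a mixture; the paper's contribution, choosing the mixture weights to minimize the approximation error, is not reproduced here):

```python
import random

def sample_from_mixture(sample_sets, weights, n, rng=random):
    """Draw n particles from a posterior represented as a weighted
    mixture of sample sets.  Each sample set plays the role of one
    mixture component that integrated a single observation arriving
    during a filter update window."""
    # pick a component per draw according to the mixture weights,
    # then draw a particle uniformly from that component's sample set
    components = rng.choices(range(len(sample_sets)), weights=weights, k=n)
    return [rng.choice(sample_sets[c]) for c in components]

# with all weight on the first component, only its particles appear
out = sample_from_mixture([[1.0, 2.0], [9.0]], weights=[1.0, 0.0], n=5)
assert len(out) == 5 and all(p in (1.0, 2.0) for p in out)
```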
Similarity-based approaches to natural language processing
, 1997
"... Statistical methods for automatically extracting information about associations between words or documents from large collections of text have the potential to have considerable impact in a number of areas, such as information retrieval and naturallanguagebased user interfaces. However, even huge ..."
Abstract

Cited by 45 (3 self)
Statistical methods for automatically extracting information about associations between words or documents from large collections of text have the potential to have considerable impact in a number of areas, such as information retrieval and natural-language-based user interfaces. However, even huge bodies of text yield highly unreliable estimates of the probability of relatively common events, and, in fact, perfectly reasonable events may not occur in the training data at all. This is known as the sparse data problem. Traditional approaches to the sparse data problem use crude approximations. We propose a different solution: if we are able to organize the data into classes of similar events, then, if information about an event is lacking, we can estimate its behavior from information about similar events. This thesis presents two such similarity-based approaches, where, in general, we measure similarity by the Kullback-Leibler divergence, an information-theoretic quantity. Our first approach is to build soft, hierarchical clusters: soft, because each event belongs to each cluster with some probability; hierarchical, because cluster centroids are iteratively split to model finer distinctions. Our clustering method, which uses the technique of deterministic annealing, …
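The Kullback-Leibler divergence used as the similarity measure here is straightforward to compute for discrete distributions (a minimal sketch; the thesis applies it to word co-occurrence distributions):

```python
import math

def kl_divergence(p, q):
    """D(p || q) for two discrete distributions given as equal-length
    probability lists.  Smaller divergence means the distributions
    (e.g. co-occurrence profiles of two words) are more similar.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# identical distributions have zero divergence
assert kl_divergence([0.5, 0.5], [0.5, 0.5]) == 0.0
# note: the divergence is asymmetric in general
assert kl_divergence([0.9, 0.1], [0.5, 0.5]) != kl_divergence([0.5, 0.5], [0.9, 0.1])
```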
Network Correlated Data Gathering with Explicit Communication: NP-Completeness and Algorithms
"... We consider the problem of correlated data gathering by a network with a sink node and a tree based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes, to the sink node. For source coding of correlated data, we ..."
Abstract

Cited by 44 (8 self)
We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes to the sink node. For source coding of correlated data, we consider a joint-entropy coding model with explicit communication, where the coding is simple but the optimization of the transmission structure is difficult. We first formulate the optimization problem in the general case, and then we study further a network setting where the entropy conditioning at nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case, the optimization problem is NP-hard. We propose efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor of the optimal.
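The objective being minimized can be sketched as follows (an illustrative computation of the total cost of a given gathering tree; the node rates would come from the conditional-entropy coding model, and finding the best tree is the NP-hard part):

```python
def gathering_cost(parent, rate, edge_cost, sink):
    """Total transmission cost of a tree gathering structure.

    parent:    dict child -> parent, defining a tree rooted at `sink`
    rate:      dict node -> data rate the node injects (e.g. its
               conditional entropy under the coding model)
    edge_cost: dict (child, parent) -> cost per unit of data on that edge

    Each node's rate is charged on every edge along its path to the sink.
    """
    total = 0.0
    for v in rate:
        u = v
        while u != sink:
            total += rate[v] * edge_cost[(u, parent[u])]
            u = parent[u]
    return total

# chain 2 -> 1 -> 0 with unit rates and unit edge costs:
# node 1 traverses 1 edge, node 2 traverses 2 edges -> total cost 3
cost = gathering_cost(parent={1: 0, 2: 1},
                      rate={1: 1.0, 2: 1.0},
                      edge_cost={(1, 0): 1.0, (2, 1): 1.0},
                      sink=0)
assert cost == 3.0
```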
Distribution of mutual information
 Advances in Neural Information Processing Systems 14: Proceedings of the 2002 Conference
, 2002
"... expectation and variance of mutual information. The mutual information of two random variables ı and j with joint probabilities {πij} is commonly used in learning Bayesian nets as well as in many other fields. The chances πij are usually estimated by the empirical sampling frequency nij/n leading to ..."
Abstract

Cited by 43 (12 self)
… expectation and variance of mutual information. The mutual information of two random variables i and j with joint probabilities {π_ij} is commonly used in learning Bayesian nets as well as in many other fields. The probabilities π_ij are usually estimated by the empirical sampling frequencies n_ij/n, leading to a point estimate I(n_ij/n) for the mutual information. To answer questions like "is I(n_ij/n) consistent with zero?" or "what is the probability that the true mutual information is much larger than the point estimate?" one has to go beyond the point estimate. In the Bayesian framework one can answer these questions by utilizing a (second-order) prior distribution p(π) comprising prior information about π. From the prior p(π) one can compute the posterior p(π|n), from which the distribution p(I|n) of the mutual information can be calculated. We derive reliable and quickly computable approximations for p(I|n). We concentrate on the mean, variance, skewness, and kurtosis, and non-informative priors. For the mean we also …
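The point estimate I(n_ij/n) referred to above is the plug-in mutual information computed from the empirical joint frequencies (a minimal sketch; the paper's contribution is the Bayesian distribution around this point estimate, which is not reproduced here):

```python
import math

def empirical_mi(counts):
    """Plug-in point estimate I(n_ij / n) of the mutual information
    (in nats) from a joint count table given as a list of rows."""
    n = sum(sum(row) for row in counts)
    p_i = [sum(row) / n for row in counts]        # row marginals
    p_j = [sum(col) / n for col in zip(*counts)]  # column marginals
    mi = 0.0
    for i, row in enumerate(counts):
        for j, n_ij in enumerate(row):
            if n_ij:
                p_ij = n_ij / n
                mi += p_ij * math.log(p_ij / (p_i[i] * p_j[j]))
    return mi

# independent (uniform) counts give zero mutual information
assert abs(empirical_mi([[5, 5], [5, 5]])) < 1e-12
# a perfectly correlated table gives log 2 nats
assert abs(empirical_mi([[5, 0], [0, 5]]) - math.log(2)) < 1e-12
```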