Results 1 – 9 of 9
LBA: Lifetime Balanced Data Aggregation in Low Duty Cycle Sensor Networks
Cited by 3 (0 self)
Abstract—This paper proposes LBA, a lifetime-balanced data aggregation scheme for asynchronous, duty-cycle sensor networks under an application-specific bound on end-to-end data delivery delay. In contrast to existing aggregation schemes that focus on reducing the energy consumption and extending the operational lifetime of each individual node, LBA has a unique design goal: to balance nodal lifetimes and thus prolong the network lifetime more effectively. To achieve this goal in a distributed manner, LBA adaptively adjusts the aggregation holding time between neighboring nodes to balance their nodal lifetimes; as such balancing takes place in all neighborhoods, nodes in the entire network gradually adjust their nodal lifetimes towards a globally balanced status. Experimental studies on a sensor network testbed show that LBA achieves this design goal, yields longer network lifetime than non-adaptive and lifetime-unaware data aggregation schemes, and approaches the theoretical upper-bound performance, especially when nodes have widely differing lifetimes.
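As a concrete illustration of the balancing idea, the sketch below splits a shared aggregation-holding delay budget between two neighboring nodes in proportion to their estimated remaining lifetimes, so the shorter-lived node spends less time awake waiting to aggregate. The proportional rule, function name, and units are assumptions for illustration, not LBA's actual adjustment law.

```python
def split_holding_time(budget_ms, lifetime_a_s, lifetime_b_s):
    """Split a shared aggregation-holding delay budget between two
    neighbors in proportion to their estimated remaining lifetimes,
    so the shorter-lived node idles less. A hypothetical proportional
    rule for illustration, not LBA's actual adjustment law."""
    total = lifetime_a_s + lifetime_b_s
    hold_a = budget_ms * lifetime_a_s / total
    return hold_a, budget_ms - hold_a
```

Repeating this split in every neighborhood, as the abstract describes, would gradually shift holding time away from weaker nodes network-wide.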
Deadline Constrained Scheduling for Data Aggregation in Unreliable Sensor Networks
Cited by 2 (1 self)
Abstract—We study the problem of maximizing the aggregated information in a wireless sensor network. We consider a sensor network with a tree topology, where the root corresponds to the sink and the rest of the network detects an event and transmits data to the sink. We formulate an integer optimization problem that maximizes the aggregated information reaching the sink under deadline and interference constraints. This framework allows a variety of error-recovery schemes to be used to tackle link unreliability. We show that the optimal solution involves solving a Job Interval Selection Problem (JISP), which is known to be MAX SNP-hard. We construct a suboptimal version of the problem and develop a low-complexity, distributed optimal solution to it. We investigate tree structures for which this solution is optimal for the original problem. Our numerical results show that the suboptimal solution outperforms existing JISP approximation algorithms even for general trees.
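The JISP structure the paper reduces to can be illustrated with the classical earliest-finish-time greedy, a known 2-approximation for unweighted JISP. This sketch shows the problem shape, not the paper's algorithm; the `(job_id, start, end)` tuple format is an assumption.

```python
def jisp_greedy(intervals):
    """Earliest-finish-time greedy for the (unweighted) Job Interval
    Selection Problem: each job offers several candidate intervals;
    select non-overlapping intervals, at most one per job. A known
    2-approximation. `intervals` holds (job_id, start, end) tuples."""
    chosen, scheduled_jobs, last_end = [], set(), float("-inf")
    for job, start, end in sorted(intervals, key=lambda t: t[2]):
        # take an interval only if it starts after the last chosen one
        # ends and its job has not been scheduled yet
        if start >= last_end and job not in scheduled_jobs:
            chosen.append((job, start, end))
            scheduled_jobs.add(job)
            last_end = end
    return chosen
```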
Maximizing aggregated information in sensor networks under deadline constraints
 IEEE Transactions on Automatic Control, 2011
Cited by 2 (2 self)
Abstract—We study the problem of maximizing the aggregated information in sensor networks with deadline constraints. Our model is that of a sensor network arranged in the form of a tree topology, where the root corresponds to the sink node and the rest of the network detects an event and transmits data to the sink over one or more hops. We assume a time-slotted synchronized system and a node-exclusive (also called primary) interference model. We formulate this problem as an integer optimization problem and show that, for unit-capacity links, the optimal solution involves solving a Bipartite Maximum Weighted Matching problem at each hop. We propose a polynomial-time algorithm that uses only local information at each hop to obtain the optimal solution. Thus, we answer the question of when a node should stop waiting to aggregate data from its predecessors and start transmitting in order to maximize the aggregated information within a deadline imposed by the sink. We extend our model to allow for practical considerations such as arbitrary link capacities and multiple overlapping events. Further, we show that our framework is general enough to be extended to a number of interesting cases, such as incorporating sleep-wake scheduling and minimizing aggregate sensing error.
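A minimal sketch of the per-hop matching step, assuming `weights[i][j]` gives the information gained by scheduling child `i` in time slot `j` and each child needs a distinct slot; brute force is used here for clarity, whereas a practical implementation would use the Hungarian algorithm.

```python
from itertools import permutations

def max_weight_assignment(w):
    """w[i][j]: information gained if child i transmits in slot j
    (a hypothetical encoding). Brute force over injective slot
    assignments -- fine for tiny per-hop instances only."""
    n, m = len(w), len(w[0])
    if n > m:                       # ensure rows <= columns
        w = [list(col) for col in zip(*w)]
        n, m = m, n
    best = 0.0
    for perm in permutations(range(m), n):
        best = max(best, sum(w[i][perm[i]] for i in range(n)))
    return best
```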
On Optimal Energy Efficient Convergecasting in Unreliable Sensor Networks with Applications to Target Tracking.
 In Proc. of ACM MobiHoc, 2011
Cited by 1 (0 self)
Abstract—In this paper, we develop a mathematical framework for studying the problem of maximizing the "information" received at the sink in a data-gathering wireless sensor network. We explicitly account for unreliable links, energy constraints, and in-network computation. The network model is that of a sensor network arranged in the form of a tree topology, where the root corresponds to the sink node and the rest of the network detects an event and transmits data to the sink over one or more hops. This problem of sending data from multiple sources to a common sink is often referred to as the convergecasting problem. We develop an integer-optimization-based framework for this problem, which allows link unreliability to be tackled using general error-recovery schemes. Even though this framework has a nonlinear objective function and cannot be relaxed to a convex programming problem, we develop a low-complexity, distributed solution. The solution involves finding a Maximum Weight Increasing Independent Set (MWIIS) in rectangle graphs over each hop of the network, and can be obtained in polynomial time. Further, we apply these techniques to a target tracking problem, optimally selecting sensors to track a given target so that the information obtained is maximized subject to constraints on the per-node sensing and communication energy. We validate our algorithms through numerical evaluations and illustrate the advantages of explicitly considering link unreliability in the optimization framework.
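One plausible reading of an increasing independent set of weighted rectangles is a chain in which each rectangle starts at or beyond the previous one's upper-right corner; under that assumption, a weighted-LIS-style dynamic program finds the maximum-weight chain in O(n^2). This is an illustrative sketch of the combinatorial object, not the paper's MWIIS routine.

```python
def max_weight_increasing_chain(rects):
    """rects: list of (x1, y1, x2, y2, weight). Finds the heaviest
    chain where each rectangle's lower-left corner dominates the
    previous rectangle's upper-right corner (an assumed reading of
    'increasing independent set'). O(n^2) DP, LIS-style."""
    rects = sorted(rects, key=lambda r: (r[0], r[1]))
    dp = [r[4] for r in rects]          # best chain ending at each rect
    for i, (x1, y1, _, _, w) in enumerate(rects):
        for j in range(i):
            if rects[j][2] <= x1 and rects[j][3] <= y1:
                dp[i] = max(dp[i], dp[j] + w)
    return max(dp, default=0)
```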
Maximizing a submodular utility for deadline constrained data collection in sensor networks
 10th IEEE International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), 2012
Cited by 1 (0 self)
Abstract—We study the utility maximization problem for data collection in sensor networks subject to a deadline constraint, where the data on a selected subset of nodes are collected through a routing tree rooted at a sink under the 1-hop interference model. Our problem can be viewed as a Network Utility Maximization (NUM) problem with binary decisions. However, instead of the separable concave form of system utility commonly seen in NUM, we consider the class of monotone submodular utility functions defined on subsets of nodes, which is more appropriate for the applications we consider. While submodular maximization subject to a cardinality constraint is well understood, our problem is more challenging due to its multi-hop data forwarding nature, even under a simple interference model. We derive efficient approximation solutions to this problem both for raw data collection and when in-network data aggregation is applied.
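The well-understood cardinality-constrained baseline the abstract contrasts with can be sketched as the standard greedy, which achieves a (1 - 1/e) approximation for monotone submodular functions; note that it ignores the routing and interference side of the paper's problem.

```python
def greedy_submodular(ground, utility, k):
    """Standard greedy for maximizing a monotone submodular set
    function under |S| <= k; achieves a (1 - 1/e) approximation.
    `ground` is a list of candidate elements, `utility` maps a set
    of elements to a number."""
    chosen = set()
    for _ in range(k):
        candidates = [e for e in ground if e not in chosen]
        if not candidates:
            break
        # pick the element with the largest marginal gain
        best = max(candidates,
                   key=lambda e: utility(chosen | {e}) - utility(chosen))
        chosen.add(best)
    return chosen
```

With a coverage-style utility (a canonical submodular example), the greedy picks complementary elements rather than individually strong but redundant ones.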
Minimum Cost Data Aggregation for Wireless Sensor Networks Computing Functions of Sensed Data
We consider a problem of minimum-cost (energy) data aggregation in wireless sensor networks computing certain functions of sensed data. We use in-network aggregation so that data can be combined at intermediate nodes en route to the sink. We consider two types of functions: first, summation-type functions, which include sum, mean, and weighted sum; and second, extreme-type functions, which include max and min. For both types the problem turns out to be NP-hard. We first show that, for sum and mean, there exist algorithms that approximate the optimal cost within a factor logarithmic in the number of sources. For weighted sum we obtain a similar result for Gaussian sources. Next we show that the problem for extreme-type functions is intrinsically different from that for summation-type functions. We then propose a novel algorithm based on the crucial tradeoff in cost reduction between local aggregation of flows and finding a low-cost path to the sink; the algorithm is shown empirically to find the best tradeoff point. We argue that the algorithm is applicable to many other similar problems. Simulation results show that the proposed algorithm achieves significant cost savings.
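The tradeoff between aggregating flows locally and taking a cheap path to the sink can be seen in a toy comparison for extreme-type functions, where a merged flow costs no more than a single flow: either route each source to the sink independently, or merge everything at one hub and forward a single aggregate. The graph encoding and the restriction to a single hub are simplifying assumptions, not the paper's algorithm.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in an undirected weighted
    graph given as {node: {neighbor: cost}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def best_single_hub_cost(adj, sources, sink):
    """For an extreme-type function (max/min) a merged flow costs the
    same as a single flow, so collecting all sources at one hub and
    forwarding one value can beat independent shortest paths.
    Returns the cheaper of the two options."""
    to_sink = dijkstra(adj, sink)
    direct = sum(dijkstra(adj, s)[sink] for s in sources)
    via_hub = min(sum(dijkstra(adj, h)[s] for s in sources) + to_sink[h]
                  for h in adj)
    return min(direct, via_hub)
```

In the test graph below, two sources sit next to a shared hub far from the sink, so merging at the hub (cost 1 + 1 + 5 = 7) beats two independent paths (cost 6 + 6 = 12).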
Submodular Utility Maximization for Deadline Constrained Data Collection in Sensor Networks
Abstract—We study the utility maximization problem for data collection in a wireless sensor network subject to a deadline constraint, where the data on a selected subset of nodes are collected through a routing tree under the 1-hop interference model. Our problem is closely related to traditional utility maximization problems in networking and communications. However, instead of the separable concave form of utility functions commonly seen in this area, we consider the class of monotone submodular utility functions defined on subsets of nodes, which is more appropriate for the applications we consider. While submodular maximization subject to a cardinality constraint is well understood, our problem is more challenging due to its multi-hop data forwarding nature, even under the simple interference model. We derive efficient approximation solutions to this problem both for raw data collection and when in-network data aggregation is applied.
Hold 'em or Fold 'em? Aggregation Queries under Performance Variations
Abstract—Systems are increasingly required to provide responses to queries, even if not exact, within stringent time deadlines. These systems parallelize computations over many processes and aggregate the results hierarchically to produce the final response (e.g., search engines and data analytics). Due to large performance variations in clusters, some processes are slower than others. Aggregators therefore face the question of how long to wait for outputs from processes before combining them and sending them upstream. Longer waits increase response quality, since the result includes outputs from more processes; however, they also increase the risk of the aggregator failing to deliver its result by the deadline, in which case all of its results are ignored and response quality degrades. Our algorithm, Cedar, resolves this quandary of choosing wait durations at aggregators. It uses an online algorithm to learn the distributions of durations at each level in the hierarchy and collectively optimizes the wait durations. Cedar's solution is theoretically sound, fully distributed, and generically applicable across systems that use aggregation trees, since it is agnostic to the causes of performance variations. Evaluation using production latency distributions from Google, Microsoft, and Facebook, in both deployment and simulation, shows that Cedar improves average response quality by over 100%.
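A toy version of the wait-duration decision, a deliberately simplified single-level model rather than Cedar's algorithm: pick the wait `w` that maximizes the fraction of child outputs collected within `w` times the probability that the aggregate still reaches the next level by the deadline, with both terms estimated from latency samples.

```python
def best_wait(child_latencies, upstream_latencies, deadline):
    """Pick the aggregator's wait duration w maximizing
    P(child output arrives <= w) * P(w + upstream latency <= deadline),
    with both probabilities estimated from empirical samples.
    A simplified single-level model -- not Cedar itself."""
    def frac_within(samples, t):
        return sum(s <= t for s in samples) / len(samples)
    candidates = sorted(set(child_latencies))
    return max(candidates,
               key=lambda w: frac_within(child_latencies, w)
                             * frac_within(upstream_latencies, deadline - w))
```

Waiting for a straggler (the latency-10 sample below) would push the aggregator past the deadline, so the model settles on collecting the two fast outputs.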