Results 1–10 of 151
A Tutorial on Decomposition Methods for Network Utility Maximization
IEEE J. Sel. Areas Commun., 2006
Cited by 185 (4 self)
A systematic understanding of the decomposability structures in network utility maximization is key to both resource allocation and functionality allocation. It helps us obtain the most appropriate distributed algorithm for a given network resource allocation problem, and quantifies the comparison across architectural alternatives of modularized network design. Decomposition theory naturally provides the mathematical language to build an analytic foundation for the design of modularized and distributed control of networks. In this tutorial paper, we first review the basics of convexity, Lagrange duality, distributed subgradient method, Jacobi and Gauss–Seidel iterations, and implication of different time scales of variable updates. Then, we introduce primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations and the meanings of primal and dual decompositions in terms of network architectures. Finally, we present recent examples on: systematic search for alternative decompositions; decoupling techniques for coupled objective functions; and decoupling techniques for coupled constraint sets that are not readily decomposable.
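The dual decomposition machinery this tutorial surveys can be illustrated with a minimal sketch for a toy NUM instance with logarithmic utilities; the topology, capacities, and step size below are invented for illustration and are not from the paper.

```python
# Hedged sketch of dual decomposition for a toy NUM instance with log
# utilities: maximize sum_s log(x_s) subject to per-link capacity
# constraints. Topology, capacities, and step size are invented examples.

routes = {"s1": ["l1", "l2"], "s2": ["l2"], "s3": ["l1"]}  # source -> path
capacity = {"l1": 1.0, "l2": 1.0}

prices = {l: 1.0 for l in capacity}  # Lagrange multipliers (link prices)
step = 0.01

for _ in range(5000):
    # Source subproblem: for U(x) = log x, maximizing log x - x * q over
    # x > 0 (q = sum of prices along the path) gives x = 1 / q.
    rates = {s: 1.0 / sum(prices[l] for l in path)
             for s, path in routes.items()}
    # Link subproblem: projected subgradient ascent on the dual.
    for l in capacity:
        load = sum(r for s, r in rates.items() if l in routes[s])
        prices[l] = max(0.0, prices[l] + step * (load - capacity[l]))
```

For this toy instance the fixed point is x_{s1} = 1/3, x_{s2} = x_{s3} = 2/3 with both link prices at 1.5; the iteration approaches it geometrically.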
Complexity and robustness
Proceedings of the National Academy of Sciences, 99(Suppl.), 2002
Cited by 156 (10 self)
Highly Optimized Tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes 1) highly structured, non-generic, self-dissimilar internal configurations, and 2) robust, yet fragile external behavior. HOT claims these are the most important features of complexity and are not accidents of evolution or artifices of engineering design, but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on both real-world examples and model systems, particularly those from Self-Organized Criticality (SOC).
Maximizing Throughput in Wireless Networks via Gossiping
2006
Cited by 146 (30 self)
A major challenge in the design of wireless networks is the need for distributed scheduling algorithms that will efficiently share the common spectrum. Recently, a few distributed algorithms for networks in which a node can converse with at most a single neighbor at a time have been presented. These algorithms guarantee 50% of the maximum possible throughput. We present the first distributed scheduling framework that guarantees maximum throughput. It is based on a combination of a distributed matching algorithm and an algorithm that compares and merges successive matching solutions. The comparison can be done by a deterministic algorithm or by randomized gossip algorithms. In the latter case, the comparison may be inaccurate. Yet, we show that if the matching and gossip algorithms satisfy simple conditions related to their performance and to the inaccuracy of the comparison (respectively), the framework attains the desired throughput. It is shown that the complexities of our algorithms, which achieve nearly 100% throughput, are comparable to those of the algorithms that achieve 50% throughput. Finally, we discuss extensions to general interference models. Even for such models, the framework provides a simple distributed throughput-optimal algorithm.
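The compare-and-merge step this abstract describes can be sketched with an exact, centralized comparison standing in for the paper's gossip-based (possibly inaccurate) one; all names and queue values below are illustrative assumptions.

```python
# Toy sketch of compare-and-merge over successive matchings. A matching is
# a set of links (u, v); its value is the queue-weighted sum. The real
# framework compares matchings distributedly via gossip; here the
# comparison is exact and centralized.

def weight(matching, queues):
    """Queue-weighted value of a matching."""
    return sum(queues[link] for link in matching)

def merge(current, candidate, queues):
    """Keep whichever of two successive matchings is heavier."""
    if weight(candidate, queues) > weight(current, queues):
        return candidate
    return current

queues = {("a", "b"): 5, ("c", "d"): 2, ("a", "c"): 4, ("b", "d"): 4}
m1 = {("a", "b"), ("c", "d")}  # weight 7
m2 = {("a", "c"), ("b", "d")}  # weight 8 -> merge keeps m2
```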
Joint congestion control, routing and MAC for stability and fairness in wireless networks
IEEE Journal on Selected Areas in Communications, 2006
Cited by 126 (23 self)
In this work, we describe and analyze a joint scheduling, routing, and congestion control mechanism for wireless networks that asymptotically guarantees stability of the buffers and fair allocation of the network resources. The queue lengths serve as common information to different layers of the network protocol stack. Our main contribution is to prove the asymptotic optimality of a primal-dual congestion controller, which is known to model different versions of TCP well.
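A minimal discretized sketch of primal-dual congestion-controller dynamics of the kind this abstract refers to, on a single link with log utility; the gain, step size, and initial conditions are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a primal-dual controller on one link with U(x) = log x.
# The rate follows the utility gradient minus the queue price; the queue
# integrates excess arrivals. Parameters are invented examples.

c = 1.0            # link capacity
k, dt = 0.5, 0.01  # controller gain and Euler step
x, q = 0.1, 0.0    # source rate and queue-based price

for _ in range(20000):
    # Primal (source) update: dx/dt = k * (U'(x) - q), with U'(x) = 1/x.
    x = max(1e-6, x + dt * k * (1.0 / x - q))
    # Dual (queue/price) update: dq/dt = x - c, projected to q >= 0.
    q = max(0.0, q + dt * (x - c))

# Equilibrium: x -> c and q -> U'(c) = 1 / c.
```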
Efficient Interference-Aware TDMA Link Scheduling for Static Wireless Networks
In ACM MobiCom, 2006
Cited by 85 (12 self)
We study efficient link scheduling for a multihop wireless network to maximize its throughput. Efficient link scheduling can greatly reduce the interference effect of close-by transmissions. Unlike previous studies, which often assume a unit disk graph model, we assume that different terminals can have different transmission ranges and different interference ranges. In our model, it is also possible that a communication link does not exist due to barriers, or is not used by a predetermined routing protocol, while the transmission of a node always results in interference at all non-intended receivers within its interference range. Using a mathematical formulation, we develop synchronized TDMA link schedules that optimize network throughput. Specifically, assuming known link capacities and link traffic loads, we study link scheduling under the RTS/CTS interference model and the protocol interference model with fixed transmission power. For both models, we present efficient centralized and distributed algorithms that use time slots within a constant factor of the optimum. We also present efficient distributed algorithms whose performance is still comparable with the optimum, but with much less communication. Our theoretical results are corroborated by extensive simulation studies.
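The slot-assignment idea behind such constant-factor schedulers can be loosely sketched as greedy first-fit coloring of a link conflict graph; this is a generic stand-in under that assumption, not the paper's actual algorithm, and the example conflict graph is invented.

```python
# Hedged sketch: greedy first-fit TDMA slot assignment. Links that
# interfere (an edge in the conflict graph) must get different slots.
# Link names and conflicts below are made-up examples.

def greedy_slots(links, conflicts):
    """conflicts: dict link -> set of interfering links.
    Returns a dict assigning each link the smallest free slot."""
    slot = {}
    for l in links:  # a fixed order; the paper orders links more carefully
        used = {slot[m] for m in conflicts[l] if m in slot}
        s = 0
        while s in used:
            s += 1
        slot[l] = s
    return slot

links = ["e1", "e2", "e3"]
conflicts = {"e1": {"e2"}, "e2": {"e1", "e3"}, "e3": {"e2"}}
```

Since e1 and e3 do not conflict, they can share slot 0 while e2 takes slot 1, giving a 2-slot schedule.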
Enabling Distributed Throughput Maximization in Wireless Mesh Networks: A Partitioning Approach
2006
Cited by 85 (4 self)
This paper considers the interaction between channel assignment and distributed scheduling in multi-channel multi-radio Wireless Mesh Networks (WMNs). Recently, a number of distributed scheduling algorithms for wireless networks have emerged. Due to their distributed operation, these algorithms can achieve only a fraction of the maximum possible throughput. As an alternative to increasing the throughput fraction by designing new algorithms, in this paper we present a novel approach that takes advantage of the inherent multi-radio capability of WMNs. We show that this capability can enable partitioning of the network into subnetworks in which simple distributed scheduling algorithms can achieve 100% throughput. The partitioning is based on the recently introduced notion of Local Pooling. Using this notion, we characterize topologies in which 100% throughput can be achieved distributedly. These topologies are used to develop a number of channel assignment algorithms based on a matroid intersection algorithm. These algorithms partition a network in a manner that not only expands the capacity regions of the subnetworks but also allows distributed algorithms to achieve these capacity regions. Finally, we evaluate the performance of the algorithms via simulation and show that they significantly increase the distributedly achievable capacity region.
Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
Cited by 47 (6 self)
For many mathematicians and physicists, the Internet has become a popular real-world domain for the application and/or development of new theories related to the organization and behavior of large-scale, complex, and dynamic systems. In some cases, the Internet has served both as inspiration and justification for the popularization of new models and mathematics within the scientific enterprise. For example, scale-free network models of the preferential attachment type [8] have been claimed to describe the Internet’s connectivity structure, resulting in surprisingly general and strong claims about the network’s resilience to random failures of its components and its vulnerability to targeted attacks against its infrastructure [2]. These models have, as their trademark, power-law type node degree distributions that drastically distinguish them from the classical Erdős–Rényi type random graph models [13]. These “scale-free” network models have attracted significant attention within the scientific community and have been partly responsible for launching and fueling the new field of network science [42, 4]. To date, the main role that mathematics has played in network science has been to put the physicists’ largely empirical findings on solid grounds.
Walter Willinger is at AT&T Labs–Research in Florham Park, NJ. His email address is walter@research.att.com.
Horizon: Balancing TCP over multiple paths in wireless mesh network
In MobiCom, 2008
Cross-Layer Latency Minimization in Wireless Networks with SINR Constraints
MobiHoc ’07, September 9–14, 2007, Montreal, Quebec, Canada
Cited by 40 (2 self)
Recently, there has been substantial interest in the design of cross-layer protocols for wireless networks. These protocols optimize certain performance metrics of interest (e.g., latency, energy, rate) by jointly optimizing the performance of multiple layers of the protocol stack. Algorithm designers often use geometric graph-theoretic models of radio interference to design such cross-layer protocols. In this paper we study the problem of designing cross-layer protocols for multihop wireless networks using a more realistic Signal to Interference plus Noise Ratio (SINR) model of radio interference. The following cross-layer latency minimization problem is studied: given a set V of transceivers and a set of source-destination pairs, (i) choose power levels for all the transceivers, (ii) choose routes for all connections, and (iii) construct an end-to-end schedule such that the SINR constraints are satisfied at each time step, so as to minimize the makespan of the schedule (the time by which all packets have reached their respective destinations). We present a polynomial-time algorithm with a provable worst-case performance guarantee for this cross-layer latency minimization problem. As corollaries of the algorithmic technique, we show that a number of variants of the cross-layer latency minimization problem can also be approximated efficiently in polynomial time. Our work extends the results of Kumar et al. (Proc. SODA, 2004) and Moscibroda et al. (Proc. MOBIHOC, 2006). Although our algorithm considers multiple layers of the protocol stack, it can naturally be viewed as a composition of tasks specific to each layer; this allows us to improve the overall performance while preserving the modularity of the layered structure.
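The per-slot SINR constraint this abstract builds on can be checked directly: a set of simultaneous transmissions is admissible only if every receiver's SINR meets the threshold. The helper name and all gains, powers, and thresholds below are made-up examples.

```python
# Hedged sketch of a per-slot SINR feasibility check.

def sinr_ok(links, power, gain, noise, beta):
    """links: list of (tx, rx) pairs; gain[(t, r)]: channel gain t -> r.
    Returns True iff every receiver's SINR is at least beta."""
    for tx, rx in links:
        signal = power[tx] * gain[(tx, rx)]
        interference = sum(power[t] * gain[(t, rx)]
                           for t, _ in links if t != tx)
        if signal / (noise + interference) < beta:
            return False
    return True

links = [("a", "b"), ("c", "d")]
power = {"a": 1.0, "c": 1.0}
# Two cross-gain scenarios: well-separated links vs. close-by links.
gain_far = {("a", "b"): 1.0, ("c", "d"): 1.0,
            ("a", "d"): 0.01, ("c", "b"): 0.01}
gain_near = {("a", "b"): 1.0, ("c", "d"): 1.0,
             ("a", "d"): 0.2, ("c", "b"): 0.2}
```

With beta = 10 and noise 0.01, the well-separated pair can share a slot while the close-by pair cannot, so the latter would need two slots.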
DiffQ: Practical Differential Backlog Congestion Control for Wireless Networks
In Proc. of INFOCOM, Rio de Janeiro, 2009
Cited by 35 (0 self)
Congestion control in wireless multihop networks is challenging and complicated for two reasons. First, interference is ubiquitous and causes loss in the shared medium. Second, wireless multihop networks are characterized by the use of diverse and dynamically changing routing paths. Traditional endpoint-based congestion control protocols are ineffective in such a setting, resulting in unfairness and starvation. This paper adapts the throughput-optimal theoretical work of Tassiulas and Ephremides [33] on cross-layer optimization of wireless networks involving congestion control, routing, and scheduling into practical solutions for congestion control in multihop wireless networks. This work is the first to implement, in real off-the-shelf radios, differential-backlog-based MAC scheduling and router-assisted backpressure congestion control for multihop wireless networks. Our adaptation, called DiffQ, is implemented between the transport and IP layers and supports legacy TCP and UDP applications. In a network of 46 IEEE 802.11 wireless nodes, we demonstrate that DiffQ far outperforms many previously proposed “practical” solutions for congestion control.
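The differential-backlog weighting that DiffQ adapts can be sketched as follows; the topology, queue values, and single-link "schedule" are toy assumptions (a real backpressure scheduler activates a max-weight feasible set of links under the interference model, not one link).

```python
# Hedged sketch of differential-backlog (backpressure) link weighting in
# the spirit of Tassiulas-Ephremides. All names and values are invented.

def backpressure_weights(links, queues):
    """Weight of link (u, v) = max(Q_u - Q_v, 0): the pressure to push
    packets from longer queues toward shorter ones."""
    return {(u, v): max(queues[u] - queues[v], 0) for u, v in links}

def pick_link(links, queues):
    """Toy single-link 'schedule': activate the heaviest link."""
    w = backpressure_weights(links, queues)
    return max(w, key=w.get)

queues = {"a": 9, "b": 4, "c": 7, "d": 0}
links = [("a", "b"), ("c", "d"), ("b", "c")]
# weights: (a,b) = 5, (c,d) = 7, (b,c) = 0 -> activate (c, d)
```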