Results 1–10 of 15
Logarithmic lower bounds in the cell-probe model
 SIAM Journal on Computing
"... Abstract. We develop a new technique for proving cellprobe lower bounds on dynamic data structures. This enables us to prove Ω(lg n) bounds, breaking a longstanding barrier of Ω(lg n/lg lg n). We can also prove the first Ω(lgB n) lower bound in the external memory model, without assumptions on the ..."
Abstract

Cited by 34 (4 self)
Abstract. We develop a new technique for proving cell-probe lower bounds on dynamic data structures. This enables us to prove Ω(lg n) bounds, breaking a long-standing barrier of Ω(lg n / lg lg n). We can also prove the first Ω(lg_B n) lower bound in the external-memory model, without assumptions on the data structure. We use our technique to prove better bounds for the partial-sums problem, dynamic connectivity, and (by reductions) other dynamic graph problems. Our proofs are surprisingly simple and clean. The bounds we obtain are often optimal, and lead to a nearly complete understanding of the problems. We also present new matching upper bounds for the partial-sums problem. Key words: cell-probe complexity, lower bounds, data structures, dynamic graph problems, partial-sums problem. AMS subject classification: 68Q17
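The partial-sums problem referenced in the abstract asks for point updates and prefix-sum queries over an array of n numbers; a Fenwick (binary indexed) tree achieves the Θ(lg n)-per-operation upper bound that the paper's Ω(lg n) result shows is optimal. A minimal sketch in Python (illustrative, not from the paper):

```python
# A minimal Fenwick (binary indexed) tree for the partial-sums problem:
# point updates and prefix-sum queries, each touching Theta(lg n) cells.
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # jump to the next cell covering index i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return s

f = Fenwick(8)
f.update(3, 5)
f.update(7, 2)
print(f.prefix_sum(7))  # 7
```

Both loops iterate once per set bit manipulated, so each operation probes O(lg n) memory cells, matching the lower bound in the cell-probe model.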
Lower bounds for dynamic connectivity
 STOC
, 2004
"... We prove an Ω(lg n) cellprobe lower bound on maintaining connectivity in dynamic graphs, as well as a more general tradeoff between updates and queries. Our bound holds even if the graph is formed by disjoint paths, and thus also applies to trees and plane graphs. The bound is known to be tight fo ..."
Abstract

Cited by 15 (0 self)
We prove an Ω(lg n) cell-probe lower bound on maintaining connectivity in dynamic graphs, as well as a more general trade-off between updates and queries. Our bound holds even if the graph is formed by disjoint paths, and thus also applies to trees and plane graphs. The bound is known to be tight for these restricted cases, proving optimality of these data structures (e.g., Sleator and Tarjan’s dynamic trees). Our trade-off is known to be tight for trees, and the best two data structures for dynamic connectivity in general graphs are points on our trade-off curve. In this sense these two data structures are optimal, and this tightness serves as strong evidence that our lower bounds are the best possible. From a more theoretical perspective, our result is the first logarithmic cell-probe lower bound for any problem in the natural class of dynamic language membership problems, breaking the long-standing record of Ω(lg n / lg lg n). In this sense, our result is the first data-structure lower bound that is “truly” logarithmic, i.e., logarithmic in the problem size counted in bits. Obtaining such a bound is listed as one of three major challenges for future research by Miltersen [13] (the other two challenges remain unsolved). Our techniques form a general framework for proving cell-probe lower bounds on dynamic data structures. We show how our framework also applies to the partial-sums problem to obtain a nearly complete understanding of the problem in cell-probe and algebraic models, solving several previously posed open problems.
Comparative Analysis of Arithmetic Coding Computational Complexity
, 2004
"... Some longheld assumptions about the most demanding computations for arithmetic coding are now obsolete due to new hardware. For instance, it is not advantageous to replace multiplicationwhich now can be done with high precision in a single CPU clock cyclewith comparisons and tablebased ap ..."
Abstract

Cited by 13 (1 self)
Some long-held assumptions about the most demanding computations for arithmetic coding are now obsolete due to new hardware. For instance, it is no longer advantageous to replace multiplication, which can now be done with high precision in a single CPU clock cycle, with comparisons and table-based approximations. A good understanding of the cost of the arithmetic coding computations is needed to design efficient implementations for current and future processors. In this work we profile these computations by comparing the running times of many implementations, changing at most one part at a time, and preventing small effects from being masked by much larger ones. For instance, we test arithmetic operations ranging from 16-bit integers to 48-bit floating point, and renormalization outputs from a single bit to 16 bits. To evaluate the complexity of adaptive coding we compare static models and different adaptation strategies. We observe that significant speed gains are possible if we do not insist on updating the code immediately after each coded symbol. The results show that the fastest techniques are those that effectively use the CPU's hardware: full-precision arithmetic, byte output, table-lookup decoding, and periodic updating.
A Hierarchical Volumetric Shadow Algorithm for Single Scattering
"... eye light rays integration samples blocker integration sample blocked sample view ray i view ray i+1 epipolar rectification view rays (incremental evluation) light rays (integration direction) integration Figure 1: Rendering volumetric shadows in participating media requires integrating scattering o ..."
Abstract

Cited by 11 (1 self)
Figure 1: Rendering volumetric shadows in participating media requires integrating scattering over view rays. Left: The visibility component of this integral has a special structure: once a light ray hits an occluder, that light ray does not contribute to the integral along any view ray past the occluder. Middle: Our method exploits this structure by computing the integrals in an epipolar coordinate system, in which light rays (dashed grey) and view rays (solid black) are orthogonal and the integration can be performed asymptotically efficiently using a partial sum tree. Right: This enables us to compute high-quality scattering integrals much faster than the previous state of the art. Volumetric effects such as beams of light through participating media are an important component of the appearance of the natural world. Many such effects can be faithfully modeled by a single-scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility must be determined for each sample using shadow rays or shadow-map lookups. We observe that in a suitably chosen coordinate system, the visibility function has a regular structure that we can exploit for significant acceleration compared to brute-force sampling. We propose an efficient algorithm based on partial sum trees for computing the scattering integrals in a single-scattering homogeneous medium. On a CPU, we achieve speedups of 17–120x over ray marching.
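The partial-sum-tree idea in the abstract can be sketched as follows: each light ray's contribution is an entry in a sum tree; when the sweep over view rays passes the blocker on a light ray, that entry is zeroed, and a view ray's scattering integral then reduces to one O(lg n) range-sum query. The class and names below are illustrative, not the paper's implementation:

```python
# Sketch: per-light-ray contributions in a sum tree. Zeroing an entry
# when its blocker is reached, and summing a range per view ray, each
# cost O(lg n). Illustrative only, not the paper's implementation.
class SumTree:
    def __init__(self, values):
        self.n = len(values)
        self.t = [0.0] * self.n + list(values)  # leaves in t[n:2n]
        for i in range(self.n - 1, 0, -1):
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def set(self, i, value):
        """O(lg n) point assignment at leaf i."""
        i += self.n
        self.t[i] = value
        while i > 1:
            i //= 2
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def range_sum(self, lo, hi):
        """O(lg n) sum over the half-open range [lo, hi)."""
        s, lo, hi = 0.0, lo + self.n, hi + self.n
        while lo < hi:
            if lo & 1:
                s += self.t[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                s += self.t[hi]
            lo //= 2
            hi //= 2
        return s

contrib = SumTree([1.0, 1.0, 1.0, 1.0])
contrib.set(2, 0.0)              # light ray 2 hits an occluder
print(contrib.range_sum(0, 4))   # 3.0
```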
A light differential download algorithm for software defined radio devices
 in IEEE Consumer Communications and Networking Conference
, 2005
"... Abstract — Radio configuration (RCFG) files for software defined radio (SDR) devices can be downloaded over the air, allowing these devices to support multimode functionality using a single transceiver. The drawback of this is that the wireless link is constrained and downloading the entire RCFG ..."
Abstract

Cited by 1 (0 self)
Abstract — Radio configuration (RCFG) files for software defined radio (SDR) devices can be downloaded over the air, allowing these devices to support multi-mode functionality using a single transceiver. The drawback is that the wireless link is constrained, and downloading the entire RCFG could take considerable time. To achieve efficiency, this paper presents a new algorithm for differential download, referred to as the light differential download algorithm (LDDA). The LDDA is the first differential download algorithm specifically designed for SDR devices. It introduces several new features that not only make RCFG differential download possible, but also allow any software to be updated by differential download. Experiments using Java 2 Micro Edition demonstrate the feasibility and superior performance of the LDDA compared with other approaches.
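The differential-download concept itself can be sketched with a generic diff tool: ship only a patch computed against the configuration the device already holds, then reconstruct the new version on the device. The sketch below uses Python's difflib purely for illustration; it is not the LDDA algorithm from the paper, and the patch format is a made-up one:

```python
# Concept sketch of differential download: instead of sending the whole
# new radio configuration, send a patch against the version the device
# already has. Uses difflib for illustration; this is NOT the LDDA.
from difflib import SequenceMatcher

def make_patch(old: bytes, new: bytes):
    """List of (op, ...) instructions turning old into new."""
    patch = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if op == "equal":
            patch.append(("copy", i1, i2))        # reuse bytes the device has
        else:
            patch.append(("insert", new[j1:j2]))  # ship only the new bytes
    return patch

def apply_patch(old: bytes, patch) -> bytes:
    """Reconstruct the new file on the device from old + patch."""
    out = bytearray()
    for ins in patch:
        if ins[0] == "copy":
            out += old[ins[1]:ins[2]]
        else:
            out += ins[1]
    return bytes(out)

old = b"RCFG v1: mode=GSM band=900"
new = b"RCFG v2: mode=UMTS band=900"
patch = make_patch(old, new)
assert apply_patch(old, patch) == new
```

Only the "insert" payloads cross the constrained wireless link; "copy" instructions reference data already on the device, which is where the bandwidth saving comes from.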
An Enhanced Distributed System to improve the Time Complexity of Binary Indexed Trees
"... Abstract—Distributed Computing Systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiuniform pr ..."
Abstract
Abstract—Distributed computing systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system has shown that it reduces the time complexity of the read query to O(log log N), and the update query to constant complexity, while the naive solution has a time complexity of O(log N) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation has shown that the overhead resulting from the wiring and communication between the system fragments can be safely neglected, which makes it possible in practice to reach the maximum speedup offered by the proposed model.
The difficulty of programming contests increases
"... Abstract. In this paper we give a detailed quantitative and qualitative analysis of the difficulty of programming contests in past years. We analyze task topics in past competition tasks, and also analyze an entire problem set in terms of required algorithm efficiency. We provide both subjective and ..."
Abstract
Abstract. In this paper we give a detailed quantitative and qualitative analysis of the difficulty of programming contests in past years. We analyze task topics in past competition tasks, and also analyze an entire problem set in terms of required algorithm efficiency. We provide both subjective and objective data on how contestants are getting better over the years and how the tasks are getting harder. We use an exact, formal method based on Item Response Theory to analyze past contest results.
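The Item Response Theory method the abstract relies on has, as its simplest instance, the one-parameter (Rasch) model, in which the probability of solving a task depends only on the gap between contestant ability and task difficulty. A minimal sketch for illustration (the paper's exact model is not specified here):

```python
# One-parameter (Rasch) IRT model: the probability that a contestant of
# ability theta solves a task of difficulty b. Illustrative sketch only.
import math

def rasch(theta: float, b: float) -> float:
    """P(solve) under the Rasch model: logistic in (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Equal ability and difficulty give a 50% solve probability;
# harder tasks (larger b) push the probability down.
assert abs(rasch(0.0, 0.0) - 0.5) < 1e-12
assert rasch(0.0, 2.0) < rasch(0.0, 1.0)
```

Fitting theta and b jointly across contests is what lets such an analysis compare task difficulty across years even as the contestant pool changes.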
A New Approach to Adaptive Encoding Data using Self-organizing Data Structures
"... Abstract—This paper demonstrates how techniques applicable for defining and maintaining a special case of binary search trees (BSTs) can be incorporated into “traditional ” compression techniques to yield enhanced superior schemes. We, specifically, demonstrate that the newly introduced data structu ..."
Abstract
Abstract—This paper demonstrates how techniques applicable for defining and maintaining a special case of binary search trees (BSTs) can be incorporated into “traditional” compression techniques to yield enhanced schemes. We specifically demonstrate that the newly introduced data structure, the Fano Binary Search Tree (FBST), can be maintained adaptively and in a self-organizing manner. The correctness and properties of the encoding and decoding procedures that update the FBST are included. We also include theoretical and empirical analysis, which shows that the number of shift operations is large for small files and tends to decrease (asymptotically towards zero) for large files. Index Terms—adaptive coding, self-organizing data structures, binary search trees.
A Fraud-Prevention Framework for Software Defined Radio Mobile Devices
, 2005
"... To my wife Celia, my parents and sister. ..."