Results 1–10 of 119
Dynamic planar convex hull
 Proc. 43rd IEEE Sympos. Found. Comput. Sci., 2002
Cited by 52 (1 self)
Abstract:
In this paper we determine the amortized computational complexity of the dynamic convex hull problem in the planar case. We present a data structure that maintains a finite set of n points in the plane under insertion and deletion of points in amortized O(log n) time per operation. The space usage of the data structure is O(n). The data structure supports extreme point queries in a given direction, tangent queries through a given point, and queries for the neighboring points on the convex hull in O(log n) time. The extreme point queries can be used to decide whether or not a given line intersects the convex hull, and the tangent queries to determine whether a given point is inside the convex hull. We give a lower bound on the amortized asymptotic time complexity that matches the performance of this data structure.
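The extreme-point query mentioned above is easy to illustrate in its simplest static form: the hull point extreme in a direction d is the vertex maximizing the dot product with d. The sketch below is a toy linear scan over a fixed hull, not the paper's O(log n) dynamic data structure:

```python
# Toy illustration (not the paper's data structure): an extreme-point
# query on a static convex polygon, answered by maximizing the dot
# product with the query direction. The dynamic structure in the paper
# supports the same query in O(log n) time under insertions/deletions.

def extreme_point(hull, direction):
    """Return the hull vertex farthest in the given direction."""
    dx, dy = direction
    return max(hull, key=lambda p: p[0] * dx + p[1] * dy)

# Vertices of a convex quadrilateral, in counterclockwise order.
hull = [(0, 0), (4, 0), (5, 3), (1, 4)]
print(extreme_point(hull, (1, 0)))  # rightmost vertex: (5, 3)
print(extreme_point(hull, (0, 1)))  # topmost vertex: (1, 4)
```

The same query decides whether a line intersects the hull: the hull crosses the line exactly when the extreme points in the two normal directions lie on opposite sides.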
Abstract versus concrete computation on metric partial algebras
 ACM Transactions on Computational Logic, 2004
Cited by 28 (17 self)
Abstract:
Data types containing infinite data, such as the real numbers, functions, bit streams and waveforms, are modelled by topological many-sorted algebras. In the theory of computation on topological algebras there is a considerable gap between so-called abstract and concrete models of computation. We prove theorems that bridge the gap in the case of metric algebras with partial operations. With an abstract model of computation on an algebra, the computations are invariant under isomorphisms and do not depend on any representation of the algebra. Examples of such models are the ‘while’ programming language and the BCSS model. With a concrete model of computation, the computations depend on the choice of a representation of the algebra and are not invariant under isomorphisms. Usually, the representations are made from the set N of natural numbers, and computability is reduced to classical computability on N. Examples of such models are computability via effective metric spaces, effective domain representations, and type-two enumerability. The theory of abstract models is stable: there are many models of computation, and ...
Local stability of ergodic averages
 Transactions of the American Mathematical Society
Cited by 26 (4 self)
Abstract:
We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesgue-measure-preserving transformation of [0, 1] and a characteristic function f = χA such that the ergodic averages Anf do not converge to a computable element of L2([0, 1]). In particular, there is no computable bound on the rate of convergence for that sequence. On the other hand, we show that, for any nonexpansive linear operator T on a separable Hilbert space, and any element f, it is possible to compute a bound on the rate of convergence of (Anf) from T, f, and the norm ‖f*‖ of the limit. In particular, if T is the Koopman operator arising from a computable ergodic measure-preserving transformation of a probability space X and f is any computable element of L2(X), then there is a computable bound on the rate of convergence of the sequence (Anf). The mean ergodic theorem is equivalent to the assertion that for every function K(n) and every ε > 0, there is an n with the property that the ergodic averages Amf are stable to within ε on the interval [n, K(n)]. Even in situations where the sequence (Anf) does not have a computable limit, one can give explicit bounds on such n in terms of K and ‖f‖/ε. This tells us how far one has to search to find an n so that the ergodic averages are “locally stable” on a large interval. We use these bounds to obtain a similarly explicit version of the pointwise ergodic theorem, and show that our bounds are qualitatively different from ones that can be obtained using upcrossing inequalities due to Bishop and Ivanov. Finally, we explain how our positive results can be viewed as an application of a body of general proof-theoretic methods falling under the heading of “proof mining.”
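The averages in question are An f = (1/n) Σ_{k<n} f(T^k x). Purely as an illustrative sketch (my example, not the paper's construction), one can sample them for an irrational rotation of [0, 1), where ergodicity forces the averages of an indicator function to converge to the measure of the set:

```python
import math

# Illustration (not from the paper): ergodic averages A_n f for the
# irrational rotation T(x) = x + alpha mod 1, with f the indicator of
# [0, 1/2). Ergodicity implies A_n f(x) -> 1/2 for every starting x.

def ergodic_average(x, alpha, n):
    total = 0
    for _ in range(n):
        if x < 0.5:            # f = indicator of [0, 1/2)
            total += 1
        x = (x + alpha) % 1.0  # apply the transformation T
    return total / n

alpha = math.sqrt(2) - 1       # an irrational rotation angle
for n in (10, 1000, 100000):
    print(n, ergodic_average(0.1, alpha, n))
```

The paper's point is about *rates*: even when such averages converge, a computable bound on how fast they do so may or may not exist, depending on the data one is given.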
Polynomial differential equations compute all real computable functions on computable compact intervals
, 2007
Estimation of Congestion Price Using Probabilistic Packet Marking
 in Proc. IEEE INFOCOM, 2002
Cited by 22 (3 self)
Abstract:
One key component of recent pricing-based congestion control schemes is an algorithm for probabilistically setting the Explicit Congestion Notification bit at routers so that a receiver can estimate the sum of link congestion prices along a path. We consider two such algorithms: a well-known algorithm called Random Exponential Marking (REM) and a novel algorithm called Random Additive Marking (RAM). We show that if link prices are unbounded, a class of REM-like algorithms are the only ones possible. Unfortunately, REM computes a biased estimate of total price and requires setting a parameter for which no uniformly good choice exists in a network setting. However, we show that if prices can be bounded and therefore normalized, then there is an alternate class of feasible algorithms, of which RAM is representative; furthermore, only the REM-like and RAM-like classes are possible. For properly normalized link prices, RAM returns an optimal price estimate (in terms of mean squared error), outperforming REM even if the REM parameter is chosen optimally. RAM does not require setting a parameter like REM, but does require a router to know its position along the path taken by a packet. We present an implementation of RAM for the Internet that exploits the existing semantics of the time-to-live field in IP to provide the necessary path position information.
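The REM scheme referred to above is commonly described as follows: a link with price p marks the ECN bit with probability 1 − φ^(−p), so a packet traverses the path unmarked with probability φ^(−Σp), and the receiver recovers the total price from the observed unmarked fraction. A toy Monte Carlo sketch of that idea (my illustration with an assumed φ = 1.1, not code from the paper):

```python
import math
import random

# Sketch of the REM marking/estimation idea (toy simulation, not the
# paper's code). Each link marks the ECN bit with probability
# 1 - phi**(-price); a packet leaves the path unmarked with probability
# phi**(-sum of prices), so the receiver inverts the unmarked fraction
# to estimate the total path price.

def simulate_rem(prices, phi=1.1, packets=200000, rng=random.random):
    unmarked = 0
    for _ in range(packets):
        marked = any(rng() < 1 - phi ** (-p) for p in prices)
        if not marked:
            unmarked += 1
    frac = unmarked / packets          # estimates phi**(-sum(prices))
    return -math.log(frac) / math.log(phi)

prices = [1.0, 2.5, 0.5]               # hypothetical per-link prices
print(sum(prices), simulate_rem(prices))  # true total vs. estimate
```

The estimator's bias and its sensitivity to the choice of φ are exactly the weaknesses the abstract attributes to REM; RAM avoids the parameter at the cost of needing each router's path position.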
Some recent developments on Shannon’s general purpose analog computer
 Mathematical Logic Quarterly
Cited by 18 (7 self)
Abstract:
This paper revisits one of the first models of analog computation, the General Purpose Analog Computer (GPAC). In particular, we restrict our attention to the improved model presented in [11] and we show that it can be further refined. With this we prove the following: (i) the previous model can be simplified; (ii) it admits extensions having close connections with the class of smooth continuous-time dynamical systems. As a consequence, we conclude that some of these extensions achieve Turing universality. Finally, it is shown that if we introduce a new notion of computability for the GPAC, based on ideas from computable analysis, then one can compute transcendentally transcendental functions such as the Gamma function or Riemann's Zeta function.
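Classically, the functions a GPAC generates are exactly the components of solutions of polynomial ODE systems; Gamma and zeta satisfy no such system, which is why a new notion of computability is needed to reach them. As a small illustration of the classical side (my sketch, not the paper's refined model), sine is GPAC-generable because it solves the polynomial system y1' = y2, y2' = −y1:

```python
import math

# Illustration of the GPAC/polynomial-ODE connection (my sketch, not
# the paper's construction): sin solves the polynomial system
#   y1' = y2,  y2' = -y1,  y1(0) = 0,  y2(0) = 1,
# so it is GPAC-generable. Here we integrate it with explicit Euler.

def integrate_sin(t, steps=100000):
    h = t / steps
    y1, y2 = 0.0, 1.0                         # y1 = sin, y2 = cos
    for _ in range(steps):
        y1, y2 = y1 + h * y2, y2 - h * y1     # one Euler step
    return y1

print(integrate_sin(1.0), math.sin(1.0))      # the two should agree
```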
Languages and tools for hybrid systems design
 Foundations and Trends in Electronic Design Automation
Cited by 16 (2 self)
Abstract:
The explosive growth of embedded electronics is bringing information and control systems of increasing complexity to every aspect of our lives. The most challenging designs are safety-critical systems, such as transportation systems (e.g., airplanes, cars, and trains), industrial plants and health care monitoring. The difficulties reside in accommodating constraints both on functionality and implementation. The correct behavior must be guaranteed under diverse states of the environment and potential failures; the implementation has to meet cost, size, and power consumption requirements. The design is therefore subject to extensive mathematical analysis and simulation. However, traditional models of information systems do not interface well with the continuously evolving nature of the environment in which these devices operate. Thus, in practice, different mathematical representations have to be mixed to analyze the overall behavior of the system. Hybrid systems are a particular class of mixed models that focus on the combination ...
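The mixing of continuous and discrete models described above is often illustrated with the bouncing ball: continuous ballistic flight punctuated by discrete impact events. The following minimal simulation is my own illustration, not an example from the text:

```python
# Toy hybrid-system simulation (my illustration): a bouncing ball mixes
# a continuous mode (ballistic flight, dh/dt = v, dv/dt = -g) with a
# discrete event (impact: v -> -c*v when h reaches 0) -- the kind of
# mixed continuous/discrete model hybrid-systems formalisms target.

def simulate_ball(h=10.0, v=0.0, g=9.81, c=0.8, dt=1e-4, t_end=5.0):
    bounces = 0
    t = 0.0
    while t < t_end:
        h += v * dt            # continuous dynamics (Euler step)
        v -= g * dt
        if h <= 0.0 and v < 0.0:
            h = 0.0
            v = -c * v         # discrete reset: inelastic bounce
            bounces += 1
        t += dt
    return bounces

print(simulate_ball())  # impacts during the first 5 seconds
```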
Computability, noncomputability and undecidability of maximal intervals of IVPs
 Trans. Amer. Math. Soc
Cited by 15 (14 self)
Abstract:
Let (α, β) ⊆ R denote the maximal interval of existence of the solution for the initial-value problem dx/dt = f(t, x), x(t0) = x0, where E is an open subset of R^(m+1), f is continuous in E and (t0, x0) ∈ E. We show that, under the natural definition of computability from the point of view of applications, there exist initial-value problems with computable f and (t0, x0) whose maximal interval of existence (α, β) is noncomputable. The fact that f may be taken to be analytic shows that this is not a lack-of-regularity phenomenon. Moreover, we get upper bounds for the “degree of noncomputability” by showing that (α, β) is r.e. (recursively enumerable) open under very mild hypotheses. We also show that the problem of determining whether the maximal interval is bounded or unbounded is in general undecidable.
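A standard textbook instance of a bounded maximal interval (not the paper's noncomputable construction) is dx/dt = x^2, x(0) = 1, whose exact solution x(t) = 1/(1 − t) exists only on (−∞, 1). A rough numerical sketch locating the right endpoint β:

```python
# Textbook illustration (not the paper's example): for dx/dt = x**2
# with x(0) = 1, the exact solution x(t) = 1/(1 - t) blows up in
# finite time, so the maximal interval of existence is (-inf, 1).

def euler_blowup(f, x0, dt=1e-5, cap=1e8):
    """Integrate forward until |x| exceeds cap; return the escape time."""
    t, x = 0.0, x0
    while abs(x) < cap:
        x += dt * f(t, x)
        t += dt
    return t

beta = euler_blowup(lambda t, x: x * x, 1.0)
print(beta)  # numerically approaches the right endpoint beta = 1
```

The paper's result is that for some computable f no such endpoint can be computed at all, even though a crude search like this one always enumerates the interval from inside (matching the r.e.-openness bound).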
An Optical Model of Computation
 Theoretical Computer Science, 2004
Cited by 14 (10 self)
Abstract:
We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2 n) unordered search algorithm with our model.
Computations via experiments with kinematic systems
, 2004
Cited by 14 (4 self)
Abstract:
Consider the idea of computing functions using experiments with kinematic systems. We prove that for any set A of natural numbers there exists a 2-dimensional kinematic system BA with a single particle P whose observable behaviour decides n ∈ A for all n ∈ N. The system is a bagatelle and can be designed to operate under (a) Newtonian mechanics or (b) relativistic mechanics. The theorem proves that valid models of mechanical systems can compute all possible functions on discrete data. The proofs show how any information (coded by some A) can be embedded in the structure of a simple kinematic system and retrieved by simple observations of its behaviour. We reflect on this undesirable situation and argue that mechanics must be extended to include a formal theory for performing experiments, which includes the construction of systems. We conjecture that in such an extended mechanics the functions computed by experiments are precisely those computed by algorithms. We set these theorems and ideas in the context of the literature on the general problem “Is physical behaviour computable?” and state some open problems.