Results 1–10 of 12
Trading off space for passes in graph streaming problems
 In ACM-SIAM SODA, 714–723
, 2006
"... Data stream processing has recently received increasing attention as a computational paradigm for dealing with massive data sets. Surprisingly, no algorithm with both sublinear space and passes is known for natural graph problems in classical readonly streaming. Motivated by technological factors o ..."
Abstract

Cited by 31 (3 self)
Data stream processing has recently received increasing attention as a computational paradigm for dealing with massive data sets. Surprisingly, no algorithm with both sublinear space and passes is known for natural graph problems in classical read-only streaming. Motivated by technological factors of modern storage systems, some authors have recently started to investigate the computational power of less restrictive models where writing streams is allowed. In this paper, we show that the use of intermediate temporary streams is powerful enough to provide effective space-passes tradeoffs for natural graph problems. In particular, for any space restriction of s bits, we show that single-source shortest paths in directed graphs with small positive integer edge weights can be solved in O((n log^{3/2} n)/√s) passes. The result can be generalized to deal with multiple sources within the same bounds. This is the first known streaming algorithm for shortest paths in directed graphs. For undirected connectivity, we devise an O((n log n)/s)-pass algorithm. Both problems require Ω(n/s) passes under the restrictions we consider. We also show that the model where intermediate temporary streams are allowed can be strictly more powerful than classical streaming for some problems, while maintaining all of its hardness for others.
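The streaming constraint in this abstract (edges readable only by sequential passes, small internal memory) can be illustrated by a minimal sketch: multi-pass Bellman-Ford relaxation over an edge stream, using O(n) internal memory and up to n − 1 passes. This is only a toy illustration of the model, not the paper's O((n log^{3/2} n)/√s)-pass algorithm; the function name and stream interface are hypothetical.

```python
# Toy sketch of the streaming model: each call to edge_stream_factory()
# simulates one sequential pass over edges stored in external memory.
# NOT the paper's algorithm -- just multi-pass Bellman-Ford relaxation.
import math

def stream_sssp(edge_stream_factory, n, source):
    """edge_stream_factory() returns a fresh iterator of (u, v, w) triples."""
    dist = [math.inf] * n          # O(n) words of internal memory
    dist[source] = 0
    for _ in range(n - 1):         # at most n-1 relaxation passes
        changed = False
        for u, v, w in edge_stream_factory():   # one sequential pass
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:            # stop early once distances settle
            break
    return dist

edges = [(0, 1, 2), (1, 2, 1), (0, 2, 5)]
print(stream_sssp(lambda: iter(edges), 3, 0))  # [0, 2, 3]
```

Each outer iteration corresponds to one pass; the paper's contribution is precisely reducing the number of such passes by writing intermediate streams.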
Optimal Lower Bounds on Regular Expression Size Using Communication Complexity
 In: Proceedings of FoSSaCS: 273–286, LNCS 4962
, 2008
"... Abstract. The problem of converting deterministic finite automata into (short) regular expressions is considered. It is known that the required expression size is 2 Θ(n) in the worst case for infinite languages, and for finite languages it is n Ω(log log n) and n O(log n) , if the alphabet size grow ..."
Abstract

Cited by 11 (7 self)
The problem of converting deterministic finite automata into (short) regular expressions is considered. It is known that the required expression size is 2^Θ(n) in the worst case for infinite languages, and for finite languages it is n^{Ω(log log n)} and n^{O(log n)}, if the alphabet size grows with the number of states n of the given automaton. A new lower bound method based on communication complexity for regular expression size is developed to show that the required size is indeed n^{Θ(log n)}. For constant alphabet size the best lower bound known to date is Ω(n²), even when allowing infinite languages and nondeterministic finite automata. As the technique developed here works equally well for deterministic finite automata over binary alphabets, the lower bound is improved to n^{Ω(log n)}.
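The conversion whose output size the paper bounds is the classical state-elimination construction. A minimal, unoptimized sketch (function name and representation are hypothetical; it performs no simplification of the resulting expression):

```python
# Sketch: DFA-to-regex conversion by state elimination. Each original
# state is removed in turn, and paths through it are replaced by a
# regex of the form  in (loop)* out.  Output is not simplified.
def dfa_to_regex(states, delta, start, finals):
    """delta: dict mapping (state, symbol) -> state."""
    S, F = "_start", "_final"                  # fresh start/final states
    R = {}                                     # R[(p, q)] = regex from p to q
    def add(p, q, r):
        R[(p, q)] = f"{R[(p, q)]}|{r}" if (p, q) in R else r
    add(S, start, "")                          # epsilon into the old start
    for (p, a), q in delta.items():
        add(p, q, a)
    for f in finals:
        add(f, F, "")                          # epsilon out of final states
    for s in states:                           # eliminate original states
        loop = R.pop((s, s), None)
        star = f"({loop})*" if loop else ""
        ins = [(p, r) for (p, q), r in list(R.items()) if q == s and p != s]
        outs = [(q, r) for (p, q), r in list(R.items()) if p == s and q != s]
        for p, _ in ins: R.pop((p, s))
        for q, _ in outs: R.pop((s, q))
        for p, rin in ins:
            for q, rout in outs:
                add(p, q, f"{rin}{star}{rout}")
    return R.get((S, F), "")                   # empty string if no path

# DFA for a*b over {a, b}: 0 --a--> 0, 0 --b--> 1 (final)
print(dfa_to_regex([0, 1], {(0, "a"): 0, (0, "b"): 1}, 0, {1}))  # (a)*b
```

The paper's point is that no matter how cleverly such a conversion is done, the output must blow up to n^{Θ(log n)} in the worst case over growing alphabets.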
Ontology-Based Data Access with Databases: A Short Course
"... Ontologybased data access (OBDA) is regarded as a key ingredient of the new generation of information systems. In the OBDA paradigm, an ontology defines a highlevel global schema of (already existing) data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries an ..."
Abstract

Cited by 4 (0 self)
Ontology-based data access (OBDA) is regarded as a key ingredient of the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema of (already existing) data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries and ontologies into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system such as a relational database management system or a datalog engine. In this chapter, we mainly focus on OBDA with the ontology language OWL 2 QL, one of the three profiles of the W3C standard Web Ontology Language OWL 2, and relational databases, although other possible languages will also be discussed. We consider different types of conjunctive query rewriting and their succinctness, different architectures of OBDA systems, and give an overview of the OBDA system Ontop.
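The rewriting step described above can be sketched on a single toy axiom. Assuming (hypothetically) the OWL 2 QL-style axiom Teacher ⊑ ∃teaches, an atom teaches(x, _) with an unbound object can also be satisfied by Teacher(x), so the query becomes a union of conjunctive queries over the source vocabulary. Real systems such as Ontop handle vastly more; all names below are invented for illustration:

```python
# Toy sketch of one OBDA rewriting rule: for an axiom  C ⊑ ∃P,
# an atom P(x, _) with existential object may be replaced by C(x),
# producing a union of conjunctive queries (UCQ).
def rewrite(query_atoms, axioms):
    """query_atoms: list of (predicate, args) tuples;
    axioms: list of (subclass, property) pairs meaning subclass ⊑ ∃property."""
    rewritings = [list(query_atoms)]
    for i, (pred, args) in enumerate(query_atoms):
        for subclass, prop in axioms:
            if pred == prop and args[-1] == "_":   # "_" marks an unbound object
                alt = list(query_atoms)
                alt[i] = (subclass, args[:-1])     # drop the existential argument
                rewritings.append(alt)
    return rewritings

ucq = rewrite([("teaches", ("x", "_"))], [("Teacher", "teaches")])
for cq in ucq:
    print(cq)
# two disjuncts: teaches(x, _)  and  Teacher(x)
```

The succinctness questions the chapter studies concern how large such unions (or equivalent nonrecursive datalog programs) must grow in the worst case.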
Letting Alice and Bob choose which problem to solve: Implications to the study of cellular automata
"... In previous works we found necessary conditions for a cellular automaton (CA) in order to be intrinsically universal (a CA is said to be intrinsically universal if it can simulate any other). The idea was to introduce different canonical communication problems, all of them parameterized by a CA. The ..."
Abstract

Cited by 1 (1 self)
In previous works we found necessary conditions for a cellular automaton (CA) in order to be intrinsically universal (a CA is said to be intrinsically universal if it can simulate any other). The idea was to introduce different canonical communication problems, all of them parameterized by a CA. The necessary condition was the following: if Ψ is an intrinsically universal CA then the communication complexity of all the canonical problems, when parameterized by Ψ, must be maximal. In this paper, instead of introducing a new canonical problem, we study the setting where they can all be used simultaneously. Roughly speaking, when Alice and Bob – the two parties of the communication complexity model – receive their inputs, they may choose online which canonical problem to solve. We give results showing that such freedom makes this new problem, which we call Ovrl, a very strong filter for ruling out CAs from being intrinsically universal. More precisely, there are some CAs having high complexity in all the canonical problems but much lower complexity in Ovrl. Keywords: communication complexity, intrinsic universality, cellular automata.
Tradeoff algorithms in streaming models
, 2006
"... In this report I will focus on the research activity carried on during the first two years of my PhD program, at the University of Rome “La Sapienza”, in the field of massive data sets, with particular emphasis on the streaming computational model: in this model data stored in external memory can be ..."
Abstract

Cited by 1 (0 self)
In this report I will focus on the research activity carried out during the first two years of my PhD program, at the University of Rome “La Sapienza”, in the field of massive data sets, with particular emphasis on the streaming computational model: in this model, data stored in external memory can be accessed only sequentially, in one or several passes, and is processed using an amount of internal memory that is small compared to the size of the external memory. First, several computational models for massive data sets will be introduced and motivated; then an overview of the results obtained within the framework of those models will be given, mainly concerning a specific but relevant class of problems (namely, graph problems). A presentation of the results obtained during these two years will follow, and then work in progress and future research directions will be outlined.
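The access discipline described in this abstract (sequential passes over external data, constant internal state) can be made concrete with a trivial two-pass example; the names and stream interface are hypothetical:

```python
# Two-pass streaming sketch: each call to stream_factory() is one
# sequential pass over external data; internal memory is O(1) words.
def max_and_count(stream_factory):
    """Pass 1 finds the maximum; pass 2 counts its occurrences."""
    best = None
    for x in stream_factory():          # pass 1
        if best is None or x > best:
            best = x
    count = 0
    for x in stream_factory():          # pass 2
        count += (x == best)
    return best, count

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 9]
print(max_and_count(lambda: iter(data)))  # (9, 2)
```

Trading more passes for less internal memory, as in this thesis work, generalizes exactly this pattern to harder (e.g. graph) problems.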
The Communication Complexity of the Universal Relation
, 1997
"... Consider the following communication problem. Alice gets a word x 2 f0; 1g n and Bob gets a word y 2 f0; 1g n . Alice and Bob are told that x 6= y. Their goal is to find an index 1 i n such that x i 6= y i (the index i should be known to both of them). This problem is one of the most basic com ..."
Abstract

Cited by 1 (0 self)
Consider the following communication problem. Alice gets a word x ∈ {0, 1}^n and Bob gets a word y ∈ {0, 1}^n. Alice and Bob are told that x ≠ y. Their goal is to find an index 1 ≤ i ≤ n such that x_i ≠ y_i (the index i should be known to both of them). This problem is one of the most basic communication problems. It arises naturally from the correspondence between circuit depth and communication complexity discovered by Karchmer and Wigderson. We present three protocols using which Alice and Bob can solve the problem by exchanging at most n + 2 bits. One of these protocols is due to Rudich and Tardos. These protocols improve the previous upper bound of n + log n, obtained by Karchmer. We also show that any protocol for solving the problem must exchange, in the worst case, at least n + 1 bits. This improves a simple lower bound of n − 1 obtained by Karchmer. Our protocols, therefore, are at most one bit away from optimality. The three n + 2 bit protocols use two completely d...
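For contrast with the paper's n + 2 bound, the naive n + ⌈log₂ n⌉ protocol mentioned in the abstract is easy to simulate: Alice sends x in full, Bob locates a differing position and sends its index back. This sketch (function name hypothetical) counts the bits exchanged rather than the paper's improved protocols:

```python
# Sketch of the trivial protocol for the universal relation:
# Alice -> Bob: all n bits of x; Bob -> Alice: a differing index,
# encoded in ceil(log2 n) bits. Total: n + ceil(log2 n) bits.
import math

def trivial_protocol(x, y):
    """x, y: equal-length bit strings with x != y.
    Returns (agreed index, total bits exchanged)."""
    assert len(x) == len(y) and x != y
    bits_sent = len(x)                                  # Alice sends x
    i = next(k for k in range(len(x)) if x[k] != y[k])  # Bob finds a mismatch
    bits_sent += math.ceil(math.log2(len(x)))           # Bob sends the index
    return i, bits_sent

print(trivial_protocol("0110", "0100"))  # (2, 6)
```

The paper's protocols shave the ⌈log n⌉-bit reply down to a 2-bit overhead, nearly matching the n + 1 lower bound.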
Query Rewriting over Shallow Ontologies
"... Abstract. We investigate the size of conjunctive query rewritings over OWL 2 QL ontologies of depth 1 and 2 by means of a new formalism, called hypergraph programs, for computing Boolean functions. Both positive and negative results are obtained. All conjunctive queries over ontologies of depth 1 ha ..."
Abstract

Cited by 1 (1 self)
We investigate the size of conjunctive query rewritings over OWL 2 QL ontologies of depth 1 and 2 by means of a new formalism, called hypergraph programs, for computing Boolean functions. Both positive and negative results are obtained. All conjunctive queries over ontologies of depth 1 have polynomial-size nonrecursive datalog rewritings; tree-shaped queries have polynomial-size positive existential rewritings; however, for some queries and ontologies of depth 1, positive existential rewritings can only be of superpolynomial size. Both positive existential and nonrecursive datalog rewritings of conjunctive queries and ontologies of depth 2 suffer an exponential blowup in the worst case, while first-order rewritings can grow superpolynomially unless NP ⊆ P/poly.