Results 11-20 of 26
Regeneration with Virtual Copies for Replicated Databases
In Proceedings of the 11th IEEE International Conference on Distributed Computing Systems, 1991
Cited by 5 (3 self)
We consider the consistency control problem for replicated data in a distributed computing system (DCS) and propose a new algorithm to dynamically regenerate copies of data objects in response to node failures and network partitioning in the system. The DCS is assumed to have strict consistency constraints for data object copies. The new algorithm combines the advantages of voting-based algorithms and regeneration mechanisms to maintain mutual consistency of replicated data objects in the case of node failures and network partitioning. Our algorithm extends the feasibility of regeneration to DCS on wide area networks, and is able to satisfy user queries as long as there is one current partition in the system.
1 Introduction
In a distributed computing environment, two types of failures may occur: the processor at a given site may fail (referred to as site failure), and communication between two sites may fail (referred to as communication link failure). When a site fails, processing at...
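The combination of voting and regeneration described in this abstract can be sketched in a few lines. The quorum rule and regeneration policy below are generic illustrations only (the function names, node labels, and thresholds are invented), not the authors' actual protocol:

```python
# Toy sketch of combining majority voting with copy regeneration
# (illustrative only; all names and policies are assumptions).

def has_write_quorum(votes_reachable, votes_total):
    """Majority voting: a partition may write only if it holds a
    strict majority of all votes."""
    return votes_reachable > votes_total / 2

def regenerate(copies, alive_nodes, target):
    """After failures, recreate copies on reachable spare nodes
    until the target degree of replication is restored."""
    spares = [n for n in alive_nodes if n not in copies]
    while len(copies) < target and spares:
        copies.append(spares.pop(0))
    return copies

# A partition holding 3 of 5 votes may accept writes...
assert has_write_quorum(3, 5)
# ...and can regenerate a lost copy onto a spare node.
assert regenerate(["A", "B"], ["A", "B", "D"], 3) == ["A", "B", "D"]
```

The point of the combination is that regeneration keeps the replication degree (and hence future quorums) intact even as nodes fail, which is what extends availability across partitions.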
Regeneration with Virtual Copies for Distributed Computing Systems
IEEE Trans. Softw. Eng., 1994
Cited by 4 (1 self)
We consider the consistency control problem for replicated data in a distributed computing system (DCS) and propose a new algorithm to dynamically regenerate copies of data objects in response to node failures and network partitioning in the system. The DCS is assumed to have strict consistency constraints for data object copies. The new algorithm combines the advantages of voting-based algorithms and regeneration mechanisms to maintain mutual consistency of replicated data objects in the case of node failures and network partitioning. Our algorithm extends the feasibility of regeneration to DCS on wide area networks, and is able to satisfy user queries as long as there is one current partition in the system. A stochastic availability analysis of our algorithm shows that it provides improved availability as compared to previously proposed dynamic voting algorithms.
1 Introduction
In a distributed computing environment, two types of failures may occur: the processor at a given site may...
Time and cost tradeoff for distributed data processing
Computers ind. Engng, 1989
Cited by 3 (1 self)
An important design issue in distributed data processing systems is to determine optimal data distribution. The problem requires a tradeoff between time and cost. For instance, quick response time conflicts with low cost. The paper addresses the data distribution problem in this conflicting environment. A formulation of the problem as a nonlinear program is developed. An algorithm employing a simple search procedure is presented, which gives an optimal data distribution. An example is solved to illustrate the method.
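The time/cost tradeoff can be made concrete with a toy objective that weighs storage cost against response time and searches over replica placements. Exhaustive enumeration below stands in for the paper's search procedure, and every number and the weight are invented:

```python
# Toy time/cost tradeoff for data distribution: pick the set of
# sites holding a copy so that storage cost plus weighted response
# time is minimized (illustrative stand-in for the paper's
# nonlinear program; all values are invented).
from itertools import combinations

sites = ["s1", "s2", "s3"]
storage_cost = {"s1": 4.0, "s2": 2.0, "s3": 3.0}   # cost per copy
response_time = {"s1": 1.0, "s2": 5.0, "s3": 2.0}  # access time at copy
WEIGHT = 1.0  # relative importance of time versus cost

def objective(placement):
    cost = sum(storage_cost[s] for s in placement)
    # every site reads from the fastest available copy
    time = sum(min(response_time[c] for c in placement) for _ in sites)
    return cost + WEIGHT * time

best = min(
    (frozenset(c) for r in range(1, len(sites) + 1)
     for c in combinations(sites, r)),
    key=objective,
)
```

Raising `WEIGHT` favors fast (possibly expensive) placements; lowering it favors cheap ones, which is exactly the conflict the abstract describes.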
A Priority-Driven, Consistency-Preserving Strategy for the Relocation Problem of Replicated Files
Proc. 11th ITG/GI Conf. on Architecture of Computing Systems, 1990
Cited by 1 (1 self)
Suppose you have an excellent dynamic file assignment algorithm. But how do you proceed dynamically from the current to the optimal file allocation? Imagine replication of your files and some sort of voting strategy; the question then is how to maintain consistency if the current and the optimal file assignment differ not only in the location of the files but also in the number of replicas. This paper tries to answer these questions and introduces the basic relocation protocols, which preserve consistency during relocation, as well as a priority-driven, storage-capacity-based approach to bring the basic relocation protocols into an optimal sequence in order to move quickly and as close as possible towards the optimal file assignment.
1 Introduction
We consider a mathematical model of an information network of |R| nodes, some of which contain copies of our |D| data files. The degree of replication need not be fixed. Within this network, every node is able to communicate with every other no...
File Allocation Algorithms to Minimize Data Transmission Time in Distributed Computing Systems
1998
Cited by 1 (0 self)
This work addresses a file allocation problem (FAP) in distributed computing systems. This FAP attempts to minimize the expected data transfer time for a specific program that must access several data files from non-perfect computer sites. We assume that communication capacity can be reserved; hence, the data transmission behavior is modeled as a many-to-one multicommodity flow problem. A new critical-cut method is proposed to solve this reduced multicommodity flow problem. Based on this method, two algorithms which use branch-and-bound are proposed for this FAP. The proposed algorithms are able to allocate data files having single copies or multiple replicated copies. Simulation results are presented to demonstrate the performance of the algorithms.
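The branch-and-bound approach for a FAP can be sketched as a generic skeleton: branch on which site receives each file, and prune any branch whose optimistic lower bound already exceeds the best complete assignment found. The paper's critical-cut bound and flow model are not reproduced here; the simple per-file bound and all costs below are invented:

```python
# Generic branch-and-bound skeleton for assigning files to sites
# (illustrative only; the paper's critical-cut lower bound is
# replaced by a naive "cheapest remaining site" bound, and the
# cost table is invented).

cost = {  # expected transfer time of placing file f at site s
    ("f1", "s1"): 3, ("f1", "s2"): 1,
    ("f2", "s1"): 2, ("f2", "s2"): 4,
}
files, sites = ["f1", "f2"], ["s1", "s2"]

best = {"value": float("inf"), "plan": None}

def bound(i, so_far):
    # optimistic bound: each unassigned file at its cheapest site
    return so_far + sum(min(cost[(f, s)] for s in sites) for f in files[i:])

def search(i=0, so_far=0, plan=()):
    if bound(i, so_far) >= best["value"]:
        return  # prune: cannot beat the incumbent
    if i == len(files):
        best["value"], best["plan"] = so_far, plan
        return
    for s in sites:
        search(i + 1, so_far + cost[(files[i], s)], plan + (s,))

search()
```

A tighter bound (such as one derived from cuts in the flow network, as the paper proposes) prunes more of the tree and is what makes branch-and-bound practical on larger instances.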
unknown title
1992
This paper deals with the issue of allocating and utilizing centers in a distributed network, in its various forms. The paper discusses the significant parameters of center allocation, defines the resulting optimization problems, and proposes several approximation algorithms for selecting centers and for distributing the users among them. We concentrate mainly on balanced versions of the problem, i.e., in which it is required that the assignment of clients to centers be as balanced as possible. The main results are constant-ratio approximation algorithms for the balanced centers and balanced weighted centers problems, and logarithmic-ratio approximation algorithms for the dominating set and the k-tolerant set problems.
A Novel Evolutionary Algorithm for Solving Static Data Allocation Problem in Distributed Database Systems
Ali Safari Mamaghani, Young Researcher Club, Islamic Azad Univers
2010 Second International Conference on Network Applications, Protocols and Services
Given a distributed database system and a set of queries from each site, the objective of a data allocation algorithm is to locate the data fragments at different sites so as to minimize the total data transfer cost incurred in executing the queries. The data allocation problem, however, is NP-complete, and thus requires fast heuristics and random approaches to generate efficient solutions. In this paper an approximate algorithm is proposed: a hybrid evolutionary algorithm obtained by combining object migration learning automata and a genetic algorithm. Experimental results show that the proposed algorithm significantly outperforms several well-known methods.
Keywords: object migration learning automata; genetic algorithms; distributed database system; data fragment allocation; evolutionary algorithm
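The genetic-algorithm half of the hybrid can be sketched on a toy instance: a chromosome assigns each fragment to a site, and fitness is the transfer cost implied by the query pattern. The learning-automata component is not modeled, and the access matrix, population size, and rates below are all invented:

```python
# Minimal genetic algorithm for data-fragment allocation
# (illustrative only; the paper's object-migration learning
# automata hybrid is not modeled, and all numbers are invented).
import random

random.seed(0)
FRAGMENTS, SITES = 4, 3
# access[q][f]: how often site q queries fragment f (invented)
access = [[2, 0, 1, 0], [0, 3, 0, 1], [1, 1, 2, 2]]

def cost(chrom):
    # one unit of transfer whenever the querying site does not
    # hold the fragment locally
    return sum(access[q][f]
               for q in range(SITES)
               for f in range(FRAGMENTS)
               if chrom[f] != q)

def evolve(pop_size=20, generations=50):
    pop = [[random.randrange(SITES) for _ in range(FRAGMENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, FRAGMENTS)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.1:          # mutation
                child[random.randrange(FRAGMENTS)] = random.randrange(SITES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

On this instance the optimum places each fragment at its heaviest-querying site (cost 4); the hybrid in the paper replaces blind mutation with learning-automata-guided moves to reach such optima faster.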
OPTIMAL LOCATION OF RESERVING FILES IN DISTRIBUTED COMPUTER SYSTEMS
In this work we study one of the possible approaches to the optimization of the distribution of reserving files at the nodes of a network based on the reliability criterion. To estimate reliability we use an approach related to the notion of a "reliability order" parameter.
Voting and Relocation Strategies Preserving Consistency among Replicated Files
Replication can enhance the availability of the data files in a distributed environment. This paper introduces a method for managing replicated data files. Unlike many others, our method provides protocols and algorithms for a more complicated scheme of replication that supports replication with location-variant files and files with a variable degree of replication. We assume the existence of a dynamic file assignment algorithm as well as a block-oriented majority consensus voting approach. This paper investigates how to maintain consistency during relocation, if the current and the new file assignment differ not only in the location of the files but also in the number of replicas. We introduce the basic relocation protocols to preserve consistency during relocation and present the read-block and write-block algorithms for accesses to data blocks of transient files. As a final result, we show that the interaction between the relocation protocols and these algorithms preserves consistency. ...
A Model and Decision Procedure for Data Storage in Cloud Computing
Cloud computing offers many possibilities for prospective users; there are, however, many different storage and compute services to choose from among all the cloud providers and their multiple datacenters. In this paper we focus on the problem of selecting the best storage services according to the application's requirements and the user's priorities. In previous work we described a capability-based matching process that filters out any service that does not meet the requirements specified by the user. In this paper we introduce a mathematical model that takes the resulting lists of compatible storage services and constructs an integer linear programming (ILP) problem. This ILP problem takes into account storage and compute cost as well as performance characteristics like latency, bandwidth, and job turnaround time; a solution to the problem yields an optimal assignment of datasets to storage services and of application runs to compute services. We show that with modern ILP solvers a reasonably sized problem can be solved in one second; even with an order-of-magnitude increase in cloud providers, number of datacenters, or storage services, the problem instances can be solved in under a minute. We finish our paper with two use cases, BLAST and MODIS. For MODIS, our recommended data allocation leverages both cloud and local resources; it incurs half the cost of a pure cloud solution, and job turnaround time is 52% faster compared to a pure local solution.
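The shape of the dataset-to-service assignment can be shown on a toy instance. The paper solves an ILP; here, because the instance is tiny, feasible assignments are simply enumerated. Every service name, cost, latency, and requirement below is invented, and a real deployment would hand the same model to an ILP solver:

```python
# Toy dataset-to-storage-service assignment: minimize storage cost
# subject to a per-dataset latency requirement (illustrative
# enumeration standing in for the paper's ILP; all data invented).
from itertools import product

datasets = ["genome", "satellite"]
services = ["cloudA", "localNFS"]
cost = {"cloudA": 5.0, "localNFS": 1.0}        # $ per dataset stored
latency = {"cloudA": 0.02, "localNFS": 0.20}   # access latency (s)
MAX_LATENCY = 0.25                              # requirement per dataset

def total_cost(assign):
    return sum(cost[s] for s in assign)

# keep only assignments meeting every dataset's latency requirement,
# then pick the cheapest
feasible = [a for a in product(services, repeat=len(datasets))
            if all(latency[s] <= MAX_LATENCY for s in a)]
best = min(feasible, key=total_cost)
```

Tightening `MAX_LATENCY` below the local service's latency would force datasets onto the faster, costlier cloud service, which is the cost/performance tension the ILP model trades off.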