Results 1 – 4 of 4
Replication under Scalable Hashing: A Family of Algorithms for Scalable Decentralized Data Distribution
In Proceedings of the 18th International Parallel & Distributed Processing Symposium (IPDPS 2004), Santa Fe, NM, 2004
Abstract

Cited by 53 (13 self)
Typical algorithms for decentralized data distribution work best in a system that is fully built before it is first used; adding or removing components results in either extensive reorganization of data or load imbalance in the system. We have developed a family of decentralized algorithms, RUSH (Replication Under Scalable Hashing), that maps replicated objects to a scalable collection of storage servers or disks. RUSH algorithms distribute objects to servers according to user-specified server weighting. While all RUSH variants support addition of servers to the system, different variants have different characteristics with respect to lookup time in petabyte-scale systems, performance with mirroring (as opposed to redundancy codes), and storage server removal. All RUSH variants redistribute as few objects as possible when new servers are added or existing servers are removed, and all variants guarantee that no two replicas of a particular object are ever placed on the same server. Because there is no central directory, clients can compute data locations in parallel, allowing thousands of clients to access objects on thousands of servers simultaneously.
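The abstract's key properties — directory-free placement computed independently by any client, weighted servers, and replicas guaranteed to land on distinct servers — can be illustrated with a small sketch. This is not the RUSH algorithm itself, but a minimal weighted rendezvous-hashing scheme with the same interface; the function and server names are hypothetical.

```python
import hashlib
import math

def _score(obj_id: str, server: str, weight: float) -> float:
    # Deterministic pseudo-random score for an (object, server) pair.
    # Every client computes the same value, so no central directory is needed.
    h = hashlib.sha256(f"{obj_id}:{server}".encode()).digest()
    u = int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)
    # Weighted rendezvous score: higher weight -> proportionally more objects.
    return -weight / math.log(u) if u > 0 else float("inf")

def place_replicas(obj_id: str, servers: list[str],
                   weights: dict[str, float], n_replicas: int) -> list[str]:
    # Rank all servers by score and take the top n. The result is a set of
    # distinct servers by construction, so no two replicas ever collide.
    ranked = sorted(servers,
                    key=lambda s: _score(obj_id, s, weights[s]),
                    reverse=True)
    return ranked[:n_replicas]
```

Because the ranking for each object is independent of the others, adding a server only pulls over the objects that now rank it first, which mirrors the "redistribute as few objects as possible" property described above.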
CRUSH: Controlled, scalable, decentralized placement of replicated data
In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC ’06), 2006
Abstract

Cited by 40 (10 self)
Emerging large-scale distributed storage systems are faced with the task of distributing petabytes of data among tens or hundreds of thousands of storage devices. Such systems must evenly distribute data and workload to efficiently utilize available resources and maximize system performance, while facilitating system growth and managing hardware failures. We have developed CRUSH, a scalable pseudo-random data distribution function designed for distributed object-based storage systems that efficiently maps data objects to storage devices without relying on a central directory. Because large systems are inherently dynamic, CRUSH is designed to facilitate the addition and removal of storage while minimizing unnecessary data movement. The algorithm accommodates a wide variety of data replication and reliability mechanisms and distributes data in terms of user-defined policies that enforce separation of replicas across failure domains.
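The last point — policies that separate replicas across failure domains — can be sketched without reproducing CRUSH's bucket hierarchy. The following is a simplified stand-in, not CRUSH itself: devices are ranked by a per-object hash, and a device is skipped if its failure domain (e.g. its rack) is already occupied by an earlier replica. The device and rack names are illustrative.

```python
import hashlib

def _hash(obj_id: str, device: str) -> int:
    # Deterministic per-(object, device) pseudo-random rank.
    return int.from_bytes(
        hashlib.sha256(f"{obj_id}:{device}".encode()).digest()[:8], "big")

def place_with_domains(obj_id: str, devices: list[str],
                       domain_of: dict[str, str],
                       n_replicas: int) -> list[str]:
    # Greedy selection: walk devices in hash order, rejecting any device
    # whose failure domain is already used, so no two replicas share a
    # domain and a single rack failure cannot destroy all copies.
    ranked = sorted(devices, key=lambda d: _hash(obj_id, d), reverse=True)
    chosen, used_domains = [], set()
    for d in ranked:
        if domain_of[d] in used_domains:
            continue
        chosen.append(d)
        used_domains.add(domain_of[d])
        if len(chosen) == n_replicas:
            break
    return chosen
```

As with the RUSH sketch above, the mapping is computed purely from hashes, so any client can evaluate it without consulting a directory.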
Different Approaches to the Distribution of Primes
Milan Journal of Mathematics, 2009
Abstract
In this lecture celebrating the 150th anniversary of the seminal paper of Riemann, we discuss various approaches to interesting questions concerning the distribution of primes, including several that do not involve the Riemann zeta-function.
HYPOTHESIS H AND AN IMPOSSIBILITY
Abstract
Dirichlet’s 1837 theorem that every coprime arithmetic progression a mod m contains infinitely many primes is often alluded to in elementary number theory courses but usually proved only in special cases (e.g., when m = 3 or m = 4), where the proofs parallel Euclid’s argument for the existence of infinitely many primes. It is natural to wonder whether Dirichlet’s theorem in its entirety can be proved by such “Euclidean” arguments. In 1912, Schur showed that one can construct an argument of this type for every progression a mod m satisfying a² ≡ 1 (mod m), and in 1988 Murty showed that these are the only progressions for which such an argument can be given. Murty’s proof uses some deep results from algebraic number theory (in particular the Chebotarev density theorem). Here we give a heuristic explanation for this result by showing how it follows from Bunyakovsky’s conjecture on prime values of polynomials. We also propose a widening of Murty’s definition of a Euclidean proof. With this definition, it appears difficult to classify the progressions for which such a proof exists. However, assuming Schinzel’s Hypothesis H, we show that again such a proof exists only when a² ≡ 1 (mod m).
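The Schur–Murty condition a² ≡ 1 (mod m) is easy to enumerate concretely, which makes the scope of the result tangible. The snippet below (an illustration, not part of the paper) lists, for a given modulus m, the coprime residues a for which a Euclidean-style proof exists in Murty's original sense.

```python
from math import gcd

def euclidean_progressions(m: int) -> list[int]:
    # Residues a coprime to m with a^2 ≡ 1 (mod m): by Schur (1912) and
    # Murty (1988), exactly these progressions a mod m admit a
    # Euclidean-style proof of Dirichlet's theorem.
    return [a for a in range(1, m) if gcd(a, m) == 1 and (a * a) % m == 1]

print(euclidean_progressions(12))  # → [1, 5, 7, 11]
print(euclidean_progressions(5))   # → [1, 4]
```

For m = 12 every coprime residue qualifies, whereas for m = 5 only a = 1 and a = 4 do: the progressions 2 mod 5 and 3 mod 5 are among those Murty showed admit no such proof.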