Results 1 – 7 of 7
Local dependency dynamic programming in the presence of memory faults
In STACS, volume 9 of LIPIcs, 2011
Dynamic programming in faulty memory hierarchies (cache-obliviously)
Abstract

Cited by 2 (2 self)
Random access memories suffer from transient errors that lead the logical state of some bits to be read differently from how they were last written. Due to technological constraints, caches in the memory hierarchy of modern computer platforms appear to be particularly prone to bit flips. Since algorithms implicitly assume data to be stored in reliable memories, they might easily exhibit unpredictable behaviors even in the presence of a small number of faults. In this paper we investigate the design of dynamic programming algorithms in faulty memory hierarchies. Previous works on resilient algorithms considered a one-level faulty memory model and, with respect to dynamic programming, could address only problems with local dependencies. Our improvement upon these works is twofold: (1) we significantly extend the class of problems that can be solved resiliently via dynamic programming in the presence of faults, settling challenging non-local problems such as all-pairs shortest paths and matrix multiplication; (2) we investigate the connection between resiliency and cache-efficiency, providing cache-oblivious implementations that incur an (almost) optimal number of cache misses. Our approach yields the first resilient algorithms that can tolerate faults at any level of the memory hierarchy, while maintaining cache-efficiency. All our algorithms are correct with high probability and match the running time and cache misses of their standard non-resilient counterparts while tolerating a large (polynomial) number of faults. Our results also extend to the Fast Fourier Transform.
1998 ACM Subject Classification: B.8 [Performance and reliability]; F.2 [Analysis of algorithms and problem complexity]; I.2.8 [Dynamic programming].
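The abstract's resilient constructions are not reproduced here, but the flavor of "correct with high probability" for matrix multiplication can be illustrated with Freivalds' classic randomized check, a generic verification technique and not the paper's own algorithm: a corrupted product matrix is caught with probability at least 1/2 per trial, at quadratic rather than cubic cost.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized check that A @ B == C for square n x n integer matrices.

    Each trial draws a random 0/1 vector r and compares A(Br) with Cr.
    A corrupted C is detected with probability >= 1 - 2**(-trials),
    using only O(n^2) work per trial instead of an O(n^3) recompute.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # a fault was certainly found
    return True  # no discrepancy seen; C is correct with high probability
```

A memory fault that flips a single entry of the stored product is thus detectable cheaply, which is the spirit (though not the mechanism) of the resilience guarantees above.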
Research Statement, 2011
Abstract
them. Most of my study is focused on understanding the algebraic structure behind combinatorial objects. The primary focus of my research is Locally Decodable Codes. A code C is said to be a Locally Decodable Code (LDC) with q queries if it is possible to recover any symbol x_i of a message x by making at most q queries to C(x), such that even if a constant fraction of C(x) is corrupted, the decoding algorithm returns the correct answer with high probability. The main reason that LDCs are important is not their obvious applications to data transmission and data storage, but their applications to complexity theory and cryptography. Many important results in these fields rely on LDCs. LDCs are closely related to such subjects as worst-case to average-case reductions, pseudorandom generators, hardness amplification, and private information retrieval schemes; see for example [PS94, Lip90, CKGS98, STV01, Tre03, Tre04, Gas04]. Locally Decodable Codes have also found applications in data structures and fault-tolerant computation; see for example [CGdW10, dW09, Rom06]. Locally Decodable Codes implicitly appeared in the PCP literature already in the early 1990s, most notably in [BFLS91, PS94, Sud92]. However, the first formal definition of LDCs was given by Katz and Trevisan [KT00] in 2000. Since then, LDCs have become widely used. The first constructions of LDCs [BIK05, KT00] were based on polynomial interpolation techniques. Later on, more involved recursive constructions were discovered [BIKR02, WY07]. All these constructions had exponential length. A tight lower bound of 2^Θ(n) on the length of 2-query LDCs was given in [KdW04, GKST06]. For many years it was conjectured (see [Gas04, Gol05]) that LDC length should depend exponentially on n for any constant number of queries, until Yekhanin's breakthrough [Yek08]: Yekhanin obtained 3-query LDCs with subexponential length. Yekhanin's construction is based on an unproven but highly plausible conjecture in number theory and is quite complicated.
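As a concrete illustration of the LDC definition above (an illustrative sketch, not part of the statement itself): the Hadamard code is the textbook 2-query LDC with exponential length 2^n. Position a of the codeword stores the parity ⟨x, a⟩ mod 2, and bit x_i is recovered by XORing two probes at random positions a and a ⊕ e_i; since each probe lands at a uniformly random position, corrupting a δ fraction of the codeword fools the decoder with probability at most 2δ.

```python
import random

def hadamard_encode(bits):
    """Hadamard code: codeword position a holds the parity <bits, a> mod 2.

    Length is 2^n for an n-bit message -- exponential, as noted above,
    but the decoder below reads only 2 positions.
    """
    n = len(bits)
    x = sum(b << i for i, b in enumerate(bits))  # message as an integer
    return [bin(a & x).count("1") % 2 for a in range(1 << n)]

def decode_bit(codeword, i, n):
    """2-query local decoding of bit i: probe random a and a XOR e_i.

    <x, a> XOR <x, a ^ e_i> = <x, e_i> = x_i when both probes are clean;
    each probe is uniform, so a delta-fraction of corruption causes an
    error with probability at most 2 * delta.
    """
    a = random.randrange(1 << n)
    return codeword[a] ^ codeword[a ^ (1 << i)]
```

Repeating the decoder and taking a majority vote drives the error probability down, which is how "correct with high probability" is achieved in practice.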
© 2013 Society for Industrial and Applied Mathematics
ERROR-CORRECTING DATA STRUCTURES
Abstract
Abstract. We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. We measure the efficiency of a data structure in terms of its length (the number of bits in its representation) and query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. The main issue is the trade-off between these two. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes (LDCs). We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of t-probe error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n such that membership queries can be answered efficiently) is approximately the optimal length of t-probe LDCs that encode strings of length s. It has been conjectured that LDCs with small t must be superpolynomially long. This bad probes-versus-length trade-off carries over to error-correcting data structures for Membership and many other data structure problems. We then circumvent this problem by defining so-called relaxed error-correcting data structures, inspired by the notion of “relaxed locally decodable codes”.
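A naive baseline helps make the length-versus-probes trade-off concrete (this sketch is illustrative only, not a construction from the paper): repeat each bit of the characteristic vector and answer queries by majority vote. This survives scattered flips, but an adversary corrupting a constant fraction of all bits can concentrate its flips on a single block, which is precisely why LDC-style encodings are needed in this model.

```python
def encode_membership(universe_size, subset, reps=5):
    """Naive 'error-correcting' membership structure: the characteristic
    bit vector of the subset with each bit repeated `reps` times.

    Length is reps * n bits with reps bit-probes per query. It tolerates
    up to reps // 2 flips inside any one block, but an adversary allowed
    a constant fraction of the total bits can wipe out one whole block.
    """
    bits = [1 if i in subset else 0 for i in range(universe_size)]
    return [b for b in bits for _ in range(reps)]

def query_membership(structure, element, reps=5):
    """Answer a membership query by majority vote over the element's block."""
    block = structure[element * reps:(element + 1) * reps]
    return sum(block) > reps // 2
```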
EFFICIENT AND ERROR-CORRECTING DATA STRUCTURES FOR MEMBERSHIP AND POLYNOMIAL EVALUATION
Abstract
ABSTRACT. We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability and, for the remaining queries, the decoder with high probability either answers correctly or declares “don't know.” Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of an error-correcting data structure model proposed recently by de Wolf, and the notion of “relaxed locally decodable codes” developed in the PCP literature. We measure the efficiency of a data structure in terms of its length (the number of bits in its representation), and query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. We obtain results for the following two data structure problems:
• (Membership) Store a subset S of size at most s from a universe of size n such that membership queries can be answered efficiently, i.e., decide if a given element from the universe is in S. We construct an error-correcting data structure for this problem with length nearly linear in s log n that answers membership queries with O(1) bit-probes. This nearly matches the asymptotically optimal parameters for the noiseless case: length O(s log n) and one bit-probe, due to
Research statement
Abstract
Over the last few decades, there have been tremendous advances in computing technology in our daily life. We are witnessing computing devices that are not only increasing in speed and storage, but also becoming more mobile, user-friendly, and widely connected. Spurred by these hardware advances, there has been an explosion of data in the last decade, due to increased digitization of information aggregated from a highly interconnected network of a growing number of users. The prevalence of these massive datasets requires a refined notion of how we measure the efficiency of an algorithm. Traditionally, algorithms that run in polynomial time are considered practical, and linear-time algorithms are the paradigm of efficiency. However, when working with huge datasets, especially those arising from the Internet, reading the input in its entirety may no longer be feasible. My research is motivated by understanding what algorithms can do when only a sublinear portion of the input is examined.
Property testing
The field of property testing, initiated by Blum, Luby, and Rubinfeld [3], is concerned with sublinear algorithms that examine the input at a few select entries and, based on the values of these entries, decide whether the input satisfies a certain property or “looks different” from any input that satisfies this property. In other words, a testing algorithm decides whether the data possesses the desired property or not.
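The canonical example from Blum, Luby, and Rubinfeld is their linearity test, which probes a Boolean function at only three points per trial. A minimal sketch (the function names and parameters here are illustrative, not from the statement itself):

```python
import random

def blr_linearity_test(f, n, trials=50):
    """Blum-Luby-Rubinfeld linearity test for f: {0,1}^n -> {0,1}.

    Each trial queries f at just 3 points and checks f(x) ^ f(y) == f(x ^ y)
    (inputs encoded as n-bit integers, ^ is bitwise XOR). A linear (parity)
    function always passes; a function far from every parity is rejected
    with constant probability per trial, so a few dozen trials suffice.
    """
    for _ in range(trials):
        x = random.randrange(1 << n)
        y = random.randrange(1 << n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # found a witness of non-linearity
    return True  # looks linear after all trials
```

Note the sublinear character: the tester never reads the whole truth table of f (size 2^n), only O(trials) entries of it.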