Results 1 - 4 of 4
Local dependency dynamic programming in the presence of memory faults
In STACS, volume 9 of LIPIcs, 2011
"... memory faults ..."
Dynamic programming in faulty memory hierarchies (cache-obliviously)
Abstract

Cited by 2 (2 self)
Random access memories suffer from transient errors that lead the logical state of some bits to be read differently from how they were last written. Due to technological constraints, caches in the memory hierarchy of modern computer platforms appear to be particularly prone to bit flips. Since algorithms implicitly assume data to be stored in reliable memories, they might easily exhibit unpredictable behaviors even in the presence of a small number of faults. In this paper we investigate the design of dynamic programming algorithms in faulty memory hierarchies. Previous works on resilient algorithms considered a one-level faulty memory model and, with respect to dynamic programming, could address only problems with local dependencies. Our improvement upon these works is twofold: (1) we significantly extend the class of problems that can be solved resiliently via dynamic programming in the presence of faults, settling challenging non-local problems such as all-pairs shortest paths and matrix multiplication; (2) we investigate the connection between resiliency and cache-efficiency, providing cache-oblivious implementations that incur an (almost) optimal number of cache misses. Our approach yields the first resilient algorithms that can tolerate faults at any level of the memory hierarchy, while maintaining cache-efficiency. All our algorithms are correct with high probability and match the running time and cache misses of their standard non-resilient counterparts while tolerating a large (polynomial) number of faults. Our results also extend to the Fast Fourier Transform. 1998 ACM Subject Classification: B.8 [Performance and reliability]; F.2 [Analysis of algorithms and problem complexity]; I.2.8 [Dynamic programming].
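A standard building block behind such resilient algorithms is to store critical values redundantly and read them back by majority vote. The sketch below is not the paper's cache-oblivious construction; it is a minimal Python illustration of that replication idea (the names `write_resilient` and `read_resilient` are mine), showing how one value survives up to `delta` corrupted copies.

```python
from collections import Counter

def write_resilient(value, delta):
    """Keep 2*delta + 1 copies so the value survives up to delta corruptions."""
    return [value] * (2 * delta + 1)

def read_resilient(copies):
    """Majority vote: correct as long as only a minority of copies were corrupted."""
    return Counter(copies).most_common(1)[0][0]

delta = 2
copies = write_resilient(42, delta)
copies[0] = 7    # simulate two transient bit-flip faults
copies[3] = 99
assert read_resilient(copies) == 42
```

The replication factor trades space for fault tolerance: with 2δ+1 copies, any δ faults still leave a strict majority of clean copies.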
Research Statement, 2011
Abstract
them. Most of my study is focused on understanding the algebraic structure behind combinatorial objects. The primary focus of my research is Locally Decodable Codes. A code C is a Locally Decodable Code (LDC) with q queries if it is possible to recover any symbol xi of a message x by making at most q queries to C(x), such that even if a constant fraction of C(x) is corrupted, the decoding algorithm returns the correct answer with high probability. The main reason LDCs are important is not their obvious applications to data transmission and data storage, but their applications to complexity theory and cryptography. Many important results in these fields rely on LDCs. LDCs are closely related to subjects such as worst-case to average-case reductions, pseudorandom generators, hardness amplification, and private information retrieval schemes; see for example [PS94, Lip90, CKGS98, STV01, Tre03, Tre04, Gas04]. Locally Decodable Codes have also found applications in data structures and fault-tolerant computation; see for example [CGdW10, dW09, Rom06]. Locally Decodable Codes implicitly appeared in the PCP literature already in the early 1990s, most notably in [BFLS91, PS94, Sud92]. However, the first formal definition of LDCs was given by Katz and Trevisan [KT00] in 2000. Since then, LDCs have become widely used. The first constructions of LDCs [BIK05, KT00] were based on polynomial interpolation techniques. Later, more involved recursive constructions were discovered [BIKR02, WY07]. All these constructions had exponential length. A tight lower bound of 2^Θ(n) on the code length was given in [KdW04, GKST06] for two-query LDCs. For many years it was conjectured (see [Gas04, Gol05]) that LDCs must have length exponential in n for any constant number of queries, until Yekhanin's breakthrough [Yek08]: Yekhanin obtained 3-query LDCs with subexponential length.
Yekhanin's construction is based on an unproven but highly plausible conjecture in number theory and is quite involved.
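The classic example of the local-decoding definition above is the Hadamard code, a 2-query LDC of exponential length: the codeword lists the inner product <x, a> mod 2 for every a in {0,1}^n, and x_i is recovered by XOR-ing two random positions. A small Python sketch (my own illustration, not from the statement):

```python
import random

def hadamard_encode(x_bits):
    """Hadamard code: one codeword bit <x, a> mod 2 for every a in {0,1}^n."""
    n = len(x_bits)
    return [sum(x_bits[j] & ((a >> j) & 1) for j in range(n)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, n, i, rng):
    """2-query local decoding: for a random a, x_i = C(x)_a XOR C(x)_{a XOR e_i}."""
    a = rng.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]

rng = random.Random(0)
x = [1, 0, 1, 1]
c = hadamard_encode(x)   # 2^4 = 16 codeword bits
c[5] ^= 1                # corrupt one of the 16 bits

# A single decode fails only if one of its two queries hits the corruption,
# so it is correct for at least 14 of the 16 possible choices of a.
correct = sum(decode_bit(c, len(x), 2, rng) == x[2] for _ in range(200))
assert correct >= 150
```

Because the two queried positions are individually uniform, corrupting a δ-fraction of the codeword fools any single decode with probability at most 2δ, matching the "constant fraction of corruption" guarantee in the definition.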
Research statement
Abstract
Over the last few decades, there have been tremendous advances in computing technology in our daily life. We are witnessing computing devices that are not only increasing in speed and storage, but also becoming more mobile, user-friendly, and widely connected. Spurred by these hardware advances, there has been an explosion of data in the last decade, due to the increased digitization of information aggregated from a highly interconnected network of a growing number of users. The prevalence of these massive datasets requires a refined notion of how we measure the efficiency of an algorithm. Traditionally, algorithms that run in polynomial time are considered practical, and linear-time algorithms are the paradigm of efficiency. However, when working with huge datasets, especially those arising from the Internet, reading the input in its entirety may no longer be feasible. My research is motivated by understanding what algorithms can do when only a sublinear portion of the input is examined.
Property testing
The field of property testing, initiated by Blum, Luby, and Rubinfeld [3], is concerned with sublinear algorithms that examine the input at a few select entries and, based on the values of these entries, decide whether the input satisfies a certain property or "looks different" from any input that satisfies this property. In other words, a testing algorithm decides whether the data possesses the desired property or is far from every input that does.
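The Blum-Luby-Rubinfeld work cited as [3] introduced the prototypical such tester, the BLR linearity test: query f at random x and y and check f(x) XOR f(y) = f(x XOR y). The Python sketch below (my own illustration of the test, not code from the statement) accepts every linear function and, with high probability over the random queries, rejects a function far from linear:

```python
import random

def blr_test(f, n, trials, rng):
    """BLR linearity test over {0,1}^n: accept iff f(x) ^ f(y) == f(x ^ y)
    holds for `trials` independently random pairs (x, y)."""
    for _ in range(trials):
        x = rng.randrange(2 ** n)
        y = rng.randrange(2 ** n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

rng = random.Random(1)
n = 8
# Parity of a fixed bit mask: a linear function, always accepted.
linear = lambda v: bin(v & 0b10110001).count("1") % 2
# Majority of the n bits: far from every linear function, rejected w.h.p.
majority = lambda v: int(bin(v).count("1") > n // 2)

assert blr_test(linear, n, 100, rng) is True
assert blr_test(majority, n, 100, rng) is False
```

Note the sublinear behavior promised in the text: the tester reads f at only 3 × trials points out of the 2^n possible inputs.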