Results 1 - 10 of 113,188

Software Protection and Simulation on Oblivious RAMs

by Oded Goldreich, Rafail Ostrovsky, 1993
"... Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper we provide theoretical treatment of software protectio ..."
Abstract - Cited by 317 (15 self) - Add to MetaCart
protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one
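
To make the obliviousness definition concrete, here is a minimal sketch (my illustration, not code from the paper) of the trivial O(n)-overhead approach: every read scans all of memory, so the sequence of accessed locations is identical for every input of the same length.

    # Trivial oblivious read in Python: touch every cell so the access
    # pattern leaks nothing about the requested index i.
    def naive_read(memory: list[int], i: int) -> int:
        return memory[i]                 # access pattern reveals i

    def oblivious_read(memory: list[int], i: int) -> int:
        result = 0
        for j in range(len(memory)):     # always scan cells 0..n-1 in order
            value = memory[j]            # same access sequence for every i
            if j == i:
                result = value
        return result

The constructions in this line of work are precisely about driving the overhead far below this trivial O(n) per access.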

Towards Practical Oblivious RAM

by Emil Stefanov, Elaine Shi, Dawn Song - In NDSS, 2012
"... We take an important step forward in making Oblivious RAM (O-RAM) practical. We pro-pose an O-RAM construction achieving an amortized overhead of 20 ∼ 35X (for an O-RAM roughly 1 terabyte in size), about 63 times faster than the best existing scheme. On the theoretic front, we propose a fundamentall ..."
Abstract - Cited by 59 (13 self) - Add to MetaCart
We take an important step forward in making Oblivious RAM (O-RAM) practical. We pro-pose an O-RAM construction achieving an amortized overhead of 20 ∼ 35X (for an O-RAM roughly 1 terabyte in size), about 63 times faster than the best existing scheme. On the theoretic front, we propose a

Wide-area cooperative storage with CFS

by Frank Dabek, M. Frans Kaashoek, David Karger, Robert Morris, Ion Stoica, 2001
"... The Cooperative File System (CFS) is a new peer-to-peer readonly storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers pr ..."
Abstract - Cited by 1009 (56 self) - Add to MetaCart
The Cooperative File System (CFS) is a new peer-to-peer readonly storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers

The Cricket Location-Support System

by Nissanka B. Priyantha, Anit Chakraborty, Hari Balakrishnan, 2000
"... This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, locationdependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze informatio ..."
Abstract - Cited by 1036 (11 self)
"... than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques ..."
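
The radio/ultrasound ranging step mentioned above reduces to one multiplication. A hedged sketch (the constant and function names are mine, not Cricket's API): both signals leave the beacon together, the RF pulse arrives essentially instantly, so the ultrasonic lag times the speed of sound gives the distance.

    # Estimate beacon distance from the RF/ultrasound arrival gap.
    SPEED_OF_SOUND_M_PER_S = 343.0   # in air at roughly 20 degrees C

    def distance_from_time_gap(rf_arrival_s: float, ultrasound_arrival_s: float) -> float:
        gap = ultrasound_arrival_s - rf_arrival_s
        return SPEED_OF_SOUND_M_PER_S * gap

    # Example: an 8.75 ms gap corresponds to roughly 3 meters.
    print(distance_from_time_gap(0.0, 0.00875))   # ~3.0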

StackGuard: Automatic adaptive detection and prevention of buffer-overflow attacks

by Crispin Cowan, Calton Pu, Dave Maier, Heather Hinton, Jonathan Walpole, Peat Bakke, Steve Beattie, Aaron Grier, Perry Wagle, Qian Zhang - In Proceedings of the 7th USENIX Security Symposium, 1998
"... 1 ..."
Abstract - Cited by 594 (16 self)
Abstract not found

A Hierarchical Internet Object Cache

by Anawat Chankhunthod, Peter B. Danzig, Chuck Neerdaels, Michael F. Schwartz, Kurt J. Worrell - In Proceedings of the 1996 USENIX Technical Conference, 1995
"... This paper discusses the design andperformance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We believe that the conventional wisdom, that the benefits of hierarch ..."
Abstract - Cited by 501 (6 self)
This paper discusses the design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We believe that the conventional wisdom, that the benefits of hierarchical file caching do not merit the costs, warrants reconsideration in the Internet environment. The cache implementation supports a highly concurrent stream of requests. We present performance measurements that show that the cache outperforms other popular Internet cache implementations by an order of magnitude under concurrent load. These measurements indicate that hierarchy does not measurably increase access latency. Our software can also be configured as a Web-server accelerator; we present data that our httpd-accelerator is ten times faster than Netscape's Netsite and NCSA 1.4 servers. Finally, we relate our experience fitting the cache into the increasingly complex and operational world of Internet information systems, including issues related to security, transparency to cache-unaware clients, and the role of file systems in support of ubiquitous wide-area information systems.
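
The hierarchical lookup the abstract describes follows a simple recursive flow. The sketch below is my own illustration of that flow, not the paper's code: a miss at one level is forwarded to the parent cache, and the object is stored on the way back down so later requests hit closer to the client.

    from typing import Optional

    class HierarchicalCache:
        def __init__(self, parent: Optional["HierarchicalCache"] = None):
            self.store: dict[str, bytes] = {}
            self.parent = parent

        def fetch_from_origin(self, url: str) -> bytes:
            # Stand-in for a real HTTP fetch at the root of the hierarchy.
            return ("<body of %s>" % url).encode()

        def get(self, url: str) -> bytes:
            if url in self.store:             # local hit
                return self.store[url]
            if self.parent is not None:       # miss: ask the parent cache
                body = self.parent.get(url)
            else:                             # root: go to the origin server
                body = self.fetch_from_origin(url)
            self.store[url] = body            # cache on the way back down
            return body

    root = HierarchicalCache()
    leaf = HierarchicalCache(parent=HierarchicalCache(parent=root))
    leaf.get("http://example.com/")   # first request fills all three levels
    leaf.get("http://example.com/")   # served from the leaf's own store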

LogP: Towards a Realistic Model of Parallel Computation

by David Culler, Richard Karp, David Patterson, Abhijit Sahay, Klaus Erik Schauser, Eunice Santos, Ramesh Subramonian, Thorsten von Eicken, 1993
"... A vast body of theoretical research has focused either on overly simplistic models of parallel computation, notably the PRAM, or overly specific models that have few representatives in the real world. Both kinds of models encourage exploitation of formal loopholes, rather than rewarding developme ..."
Abstract - Cited by 562 (15 self)
A vast body of theoretical research has focused either on overly simplistic models of parallel computation, notably the PRAM, or overly specific models that have few representatives in the real world. Both kinds of models encourage exploitation of formal loopholes, rather than rewarding development of techniques that yield performance across a range of current and future parallel machines. This paper offers a new parallel machine model, called LogP, that reflects the critical technology trends underlying parallel computers. It is intended to serve as a basis for developing fast, portable parallel algorithms and to offer guidelines to machine designers. Such a model must strike a balance between detail and simplicity in order to reveal important bottlenecks without making analysis of interesting problems intractable. The model is based on four parameters that specify abstractly the computing bandwidth, the communication bandwidth, the communication delay, and the efficiency of coupling communication and computation. Portable parallel algorithms typically adapt to the machine configuration in terms of these parameters. The utility of the model is demonstrated through examples that are implemented on the CM-5.
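
The four parameters are conventionally written L (communication latency), o (per-message processor overhead), g (minimum gap between consecutive messages, the reciprocal of per-processor bandwidth), and P (processor count). A worked sketch under those standard definitions, with made-up machine numbers:

    # Hypothetical LogP machine, times in microseconds.
    L, o, g, P = 6, 2, 4, 8

    # One small message: sender pays o, the network pays L, the receiver pays o.
    one_message = o + L + o
    print(one_message)                       # 10

    # k pipelined messages from one sender: successive sends are spaced by
    # max(o, g), and the final message still needs L + o to be absorbed.
    k = 5
    k_messages = o + (k - 1) * max(o, g) + L + o
    print(k_messages)                        # 26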

The Amoeba Distributed Operating System

by Andrew S. Tanenbaum, Gregory J. Sharp, 1992
"... INTRODUCTION Roughly speaking, we can divide the history of modern computing into the following eras: d 1970s: Timesharing (1 computer with many users) d 1980s: Personal computing (1 computer per user) d 1990s: Parallel computing (many computers per user) Until about 1980, computers were huge, e ..."
Abstract - Cited by 1070 (5 self)
INTRODUCTION: Roughly speaking, we can divide the history of modern computing into the following eras:
  • 1970s: Timesharing (1 computer with many users)
  • 1980s: Personal computing (1 computer per user)
  • 1990s: Parallel computing (many computers per user)
Until about 1980, computers were huge, expensive, and located in computer centers. Most organizations had a single large machine. In the 1980s, prices came down to the point where each user could have his or her own personal computer or workstation. These machines were often networked together, so that users could do remote logins on other people's computers or share files in various (often ad hoc) ways. Nowadays some systems have many processors per user, either in the form of a parallel computer or a large collection of CPUs shared by a small user community. Such systems are usually called parallel or distributed computer systems. This devel ...

Parallel database systems: the future of high performance database systems

by David J. DeWitt, Jim Gray - Communications of the ACM, 1992
"... Abstract: Parallel database machine architectures have evolved from the use of exotic hardware to a software parallel dataflow architecture based on conventional shared-nothing hardware. These new designs provide impressive speedup and scaleup when processing relational database queries. This paper ..."
Abstract - Cited by 638 (13 self)
Abstract: Parallel database machine architectures have evolved from the use of exotic hardware to a software parallel dataflow architecture based on conventional shared-nothing hardware. These new designs provide impressive speedup and scaleup when processing relational database queries. This paper reviews the techniques used by such systems, and surveys current commercial and research systems.
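
"Speedup" and "scaleup" here have precise meanings: speedup holds the problem fixed while the hardware grows N-fold (ideal: N times faster), while scaleup grows problem and hardware together (ideal: elapsed time stays flat). A small worked sketch with invented numbers:

    def speedup(small_sys_time: float, big_sys_time: float) -> float:
        """Same query on an N-times bigger system; ideal value is N."""
        return small_sys_time / big_sys_time

    def scaleup(small_time: float, big_time: float) -> float:
        """N-times bigger query on an N-times bigger system; ideal is 1.0."""
        return small_time / big_time

    print(speedup(100.0, 12.5))    # 8.0   -> linear speedup on 8x hardware
    print(scaleup(100.0, 105.0))   # ~0.95 -> close to ideal linear scaleup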

A survey of general-purpose computation on graphics hardware

by John D. Owens, David Luebke, Naga Govindaraju, Mark Harris, Jens Krüger, Aaron E. Lefohn, Tim Purcell, 2007
"... The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, have made graphics hardware acompelling platform for computationally demanding tasks in awide variety of application domains. In this report, we describe, summarize, and analyze the l ..."
Abstract - Cited by 545 (18 self)
The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, have made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware. We begin with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describe the hardware and software developments that have led to the recent interest in this field. We then aim the main body of this report at two separate audiences. First, we describe the techniques used in mapping general-purpose computation to graphics hardware. We believe these techniques will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques. Second, we survey and categorize the latest developments in general-purpose application development on graphics hardware.
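
The core mapping the survey covers can be reduced to one idea: phrase the computation as a kernel applied independently to each element of a large array, so the hardware can spread the elements across thousands of threads. A toy illustration of that shape (mine, not the survey's code), with the per-element kernel written in plain Python:

    def saxpy_kernel(i: int, a: float, x: list[float], y: list[float]) -> float:
        # One thread's worth of work: y[i] = a * x[i] + y[i]
        return a * x[i] + y[i]

    a = 2.0
    x = [1.0, 2.0, 3.0, 4.0]
    y = [10.0, 20.0, 30.0, 40.0]
    y = [saxpy_kernel(i, a, x, y) for i in range(len(x))]   # data-parallel map
    print(y)   # [12.0, 24.0, 36.0, 48.0]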