Results 1 - 10 of 14,651
A Fast File System for UNIX - ACM Transactions on Computer Systems, 1984
"... A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file ..."
Cited by 565 (6 self)
... to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrative control of resource usage.
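The advisory locks mentioned here survive as the flock(2) interface on modern Unix systems. A minimal sketch of how cooperating processes use them; the Python binding shown is today's standard-library API rather than the paper's 4.2BSD C interface, and /tmp/demo.lock is a hypothetical path:

    import fcntl

    # Advisory locking: only processes that also call flock() are excluded;
    # the kernel does not stop a process that ignores the lock entirely.
    with open("/tmp/demo.lock", "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until an exclusive lock is granted
        f.write("critical section\n")   # protected work goes here
        fcntl.flock(f, fcntl.LOCK_UN)   # release early; close() also releases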
Scale and performance in a distributed file system - ACM Transactions on Computer Systems, 1988
"... The Andrew File System is a location-transparent distributed tile system that will eventually span more than 5000 workstations at Carnegie Mellon University. Large scale affects performance and complicates system operation. In this paper we present observations of a prototype implementation, motivat ..."
Cited by 933 (45 self)
... motivate changes in the areas of cache validation, server process structure, name translation, and low-level storage representation, and quantitatively demonstrate Andrew’s ability to scale gracefully. We establish the importance of whole-file transfer and caching in Andrew by comparing its performance ...
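The whole-file model the snippet refers to is easy to sketch. What follows is a generic illustration, not Andrew's actual protocol (the prototype validated the cache on each open; the revised system replaced that with server callbacks), and fetch_file and remote_version are hypothetical helpers standing in for the client-server RPCs:

    import os

    class WholeFileCache:
        # On open, refetch the entire file if the cached copy is stale;
        # afterwards all reads hit the local copy rather than the server.
        def __init__(self, fetch_file, remote_version, cache_dir="/tmp/cache"):
            self.fetch_file = fetch_file          # fetch_file(name) -> (bytes, version)
            self.remote_version = remote_version  # remote_version(name) -> version
            self.cache_dir = cache_dir
            self.versions = {}                    # name -> version of cached copy
            os.makedirs(cache_dir, exist_ok=True)

        def open(self, name):
            path = os.path.join(self.cache_dir, name)
            if self.versions.get(name) != self.remote_version(name):
                data, version = self.fetch_file(name)   # whole-file transfer
                with open(path, "wb") as f:
                    f.write(data)
                self.versions[name] = version
            return open(path, "rb")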
The file drawer problem and tolerance for null results - Psychological Bulletin, 1979
"... For any given research area, one cannot tell how many studies have been con-ducted but never reported. The extreme view of the "file drawer problem " is that journals are filled with the 5 % of the studies that show Type I errors, while the file drawers are filled with the 95 % of the stud ..."
Cited by 497 (0 self)
... 95% of the studies that show non-significant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed. Both behavioral researchers and statisticians have long suspected that the studies published in the behavioral ...
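The tolerance computation the abstract refers to is, as usually stated, the fail-safe N. Given k published studies whose one-tailed p values convert to standard normal deviates Z_i, the number X of filed null studies (with mean Z of zero) that would pull the combined result down to bare significance at alpha = .05 solves

    \frac{\sum_{i=1}^{k} Z_i}{\sqrt{k + X}} = 1.645
    \qquad\Longrightarrow\qquad
    X = \frac{\bigl(\sum_{i=1}^{k} Z_i\bigr)^{2}}{2.706} - k,

where 2.706 = 1.645^2. If X is large relative to k, the published finding is tolerant of whatever sits in the file drawers.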
A Measurement Study of Peer-to-Peer File Sharing Systems, 2002
"... The popularity of peer-to-peer multimedia file sharing applications such as Gnutella and Napster has created a flurry of recent research activity into peer-to-peer architectures. We believe that the proper evaluation of a peer-to-peer system must take into account the characteristics of the peers th ..."
Cited by 1254 (15 self)
... that choose to participate. Surprisingly, however, few of the peer-to-peer architectures currently being developed are evaluated with respect to such considerations. In this paper, we remedy this situation by performing a detailed measurement study of the two popular peer-to-peer file sharing systems, namely ...
The reviewing of object files: Object-specific integration of information - Cognitive Psychology, 1992
"... A series of experiments explored a form of object-specific priming. In all experiments a preview field containing two or more letters is followed by a target letter that is to be named. The displays are designed to produce a perceptual interpretation of the target as a new state of an object that pr ..."
Cited by 462 (4 self)
A Delay-Tolerant Network Architecture for Challenged Internets, 2003
"... The highly successful architecture and protocols of today’s Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme env ..."
Cited by 953 (12 self)
The Paradyn Parallel Performance Measurement Tools - IEEE Computer, 1995
"... Paradyn is a performance measurement tool for parallel and distributed programs. Paradyn uses several novel technologies so that it scales to long running programs (hours or days) and large (thousand node) systems, and automates much of the search for performance bottlenecks. It can provide precise ..."
Cited by 447 (39 self)
How to make a decision: the analytic hierarchy process - European Journal of Operational Research, 1990
"... Policy makers at all levels of decision making in organizations use multiple criteria to analyze their complex problems. Multicriteria thinking is used formally to facilitate their decision making. Through trade-offs it clarifies the advantages and disadvantages of policy options under circumstances ..."
Cited by 411 (0 self)
... feelings and our judgments must be subjected to the acid test of deductive thinking. But experience suggests that deductive thinking is not natural. Indeed, we have to practice, and for a long time, before we can do it well. Since complex problems usually have many related factors, traditional logical ...
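The mechanics behind that multicriteria trade-off are compact enough to sketch. In the AHP as usually presented, judgments enter a reciprocal pairwise-comparison matrix on Saaty's 1-9 scale, the priorities are the normalized principal eigenvector, and a consistency ratio checks the judgments against a random-index table; the example matrix below is invented for illustration:

    import numpy as np

    # Saaty's random consistency index for matrices of size n = 1..10
    RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

    def ahp_priorities(A):
        # A[i][j] on the 1-9 scale: how strongly criterion i outranks j.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)               # principal eigenvalue lambda_max
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                              # normalized priority weights
        ci = (eigvals[k].real - n) / (n - 1)      # consistency index
        return w, ci / RI[n - 1]                  # CR below ~0.10 is conventionally acceptable

    # Three criteria, invented judgments: the first dominates the others.
    w, cr = ahp_priorities([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
    print(w, cr)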
Make: a program for maintaining computer programs - Software: Practice and Experience, 1979
"... ABSTRACT In a programming project, it is easy to lose track of which files need to be reprocessed or recompiled after a change is made in some part of the source. Make provides a simple mechanism for maintaining up-to-date versions of programs that result from many operations on a number of files. ..."
Cited by 352 (0 self)
... amount of effort. The basic operation of Make is to find the name of a needed target in the description, ensure that all of the files on which it depends exist and are up to date, and then create the target if it has not been modified since its generators were. The description file really defines ...
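That quoted sentence is essentially the whole algorithm, and it fits in a few lines. A minimal sketch, with a hard-coded Python rule table standing in for Make's description-file syntax and hypothetical files a.c and b.c:

    import os, subprocess

    # target -> (prerequisites, shell command that regenerates the target)
    rules = {
        "prog": (["a.o", "b.o"], "cc -o prog a.o b.o"),
        "a.o":  (["a.c"], "cc -c a.c"),
        "b.o":  (["b.c"], "cc -c b.c"),
    }

    def mtime(path):
        return os.path.getmtime(path) if os.path.exists(path) else -1.0

    def make(target):
        if target not in rules:                  # a source file: it must simply exist
            if mtime(target) < 0:
                raise FileNotFoundError(target)
            return
        deps, command = rules[target]
        for d in deps:
            make(d)                              # bring every prerequisite up to date first
        if mtime(target) < max(mtime(d) for d in deps):
            subprocess.run(command, shell=True, check=True)   # target missing or stale

    make("prog")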
An Empirical Study of Operating System Errors, 2001
"... We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previ-ous studies that consider errors found by manual inspec-tion of logs, testing, and surveys because static analysis is applied uniforml ..."
Cited by 363 (9 self)
... uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed.