Results 1 - 10 of 5,015
Refactoring: Improving the Design of Existing Code
, 1999
"... As the application of object technology--particularly the Java programming language--has become commonplace, a new problem has emerged to confront the software development community.
Significant numbers of poorly designed programs have been created by less-experienced developers, resulting in applic ..."
Cited by 1898 (2 self)
in applications that are inefficient and hard to maintain and extend. Increasingly, software system professionals are discovering just how difficult it is to work with these inherited, "non-optimal" applications. For several years, expert-level object programmers have employed a growing collection
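To make the entry concrete, here is a small Extract Function refactoring in the spirit of the book's catalog (the example itself is ours, written in Python, not one quoted from the book): a calculation buried inside a printing routine is pulled out into a named, reusable function, with behavior unchanged.

# Before: the totaling logic is tangled into the printing routine.
def print_invoice_before(items):
    total = 0
    for price, qty in items:
        total += price * qty
    total *= 1.08  # sales tax buried in the middle of I/O code
    print(f"Amount owed: {total:.2f}")

# After: the calculation is extracted and named, so it can be reused and tested.
TAX_RATE = 1.08

def invoice_total(items, tax_rate=TAX_RATE):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal * tax_rate

def print_invoice(items):
    print(f"Amount owed: {invoice_total(items):.2f}")

print_invoice([(9.99, 2), (3.50, 1)])  # same output as the unrefactored version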
XORs in the air: practical wireless network coding
- In Proc. ACM SIGCOMM
, 2006
"... This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our de ..."
Cited by 548 (20 self)
the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that COPE largely increases network throughput. The gains vary from a few percent to several
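The coding idea the title refers to can be sketched in a few lines (an illustration of plain XOR coding at a relay, not COPE's actual packet format or scheduling): a relay that has received packet A from Alice and packet B from Bob broadcasts A XOR B once; Alice recovers B by XORing the coded packet with the A she already knows, and Bob recovers A likewise, so two deliveries cost a single transmission.

# Minimal sketch of XOR packet coding at a relay (illustration only).
def xor_bytes(p, q):
    # Pad packets to equal length before XORing, as a coding relay would.
    n = max(len(p), len(q))
    p, q = p.ljust(n, b"\x00"), q.ljust(n, b"\x00")
    return bytes(a ^ b for a, b in zip(p, q))

packet_a = b"hello from alice"
packet_b = b"hi from bob"

coded = xor_bytes(packet_a, packet_b)      # one broadcast by the relay
recovered_b = xor_bytes(coded, packet_a)   # Alice XORs with her own packet
recovered_a = xor_bytes(coded, packet_b)   # Bob XORs with his own packet

assert recovered_a.rstrip(b"\x00") == packet_a
assert recovered_b.rstrip(b"\x00") == packet_b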
LLVM: A compilation framework for lifelong program analysis & transformation
, 2004
"... ... a compiler framework designed to support transparent, lifelong program analysis and transformation for arbitrary programs, by providing high-level information to compiler transformations at compile-time, link-time, run-time, and in idle time between runs. LLVM defines a common, low-level code re ..."
Cited by 852 (20 self)
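As an illustration of the kind of low-level code representation the abstract refers to, the sketch below builds the IR for a two-argument integer add through the third-party llvmlite binding (an assumed dependency; the paper describes LLVM's native C++ infrastructure, not this Python API).

# Build and print a tiny LLVM IR module via llvmlite (assumed installed).
from llvmlite import ir

i32 = ir.IntType(32)
module = ir.Module(name="demo")

fn = ir.Function(module, ir.FunctionType(i32, (i32, i32)), name="add")
a, b = fn.args
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
builder.ret(builder.add(a, b, name="sum"))

print(module)  # prints the module as textual LLVM IR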
How much should we trust differences-in-differences estimates?
, 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on femal ..."
Cited by 828 (1 self)
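The two-way fixed-effects differences-in-differences specification at issue can be sketched as follows (our own notation, not the paper's exact equation):

Y_{ist} = A_s + B_t + c X_{ist} + \beta I_{st} + \epsilon_{ist}

where A_s and B_t are state and year fixed effects, X_{ist} are individual controls, I_{st} indicates whether the (possibly placebo) law is in force in state s at time t, and \beta is the DD estimate. The paper's point is that serial correlation in the outcome makes the conventional OLS standard error on \beta far too small, so purely random placebo laws come out "significant" at the 5% level much more often than 5% of the time unless the inference is corrected, for example by clustering at the state level.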
KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs
"... We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the cor ..."
Cited by 557 (15 self)
the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage — on average over 90% per tool (median: over 94%) — and significantly beat the coverage
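The constraint-solving step behind this style of test generation can be sketched as follows (illustration only: KLEE explores paths of LLVM bitcode automatically, whereas here the path constraints of a toy function are written out by hand and handed to the z3-solver package, an assumed dependency, to obtain one concrete input per feasible path).

from z3 import Int, Solver, sat

def toy(x):
    # Concrete program under test.
    if x > 100:
        if x % 2 == 0:
            return "big-even"
        return "big-odd"
    return "small"

x = Int("x")
paths = [
    [x > 100, x % 2 == 0],   # reaches "big-even"
    [x > 100, x % 2 != 0],   # reaches "big-odd"
    [x <= 100],              # reaches "small"
]

for constraints in paths:
    solver = Solver()
    solver.add(*constraints)
    if solver.check() == sat:
        test_input = solver.model()[x].as_long()
        print(test_input, "->", toy(test_input))  # one generated test per path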
High confidence visual recognition of persons by a test of statistical independence
- IEEE Trans. on Pattern Analysis and Machine Intelligence
, 1993
"... A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the ..."
Cited by 621 (8 self)
of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes
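The test of independence can be sketched as a fractional Hamming distance between binary codes (illustration only; the actual method derives iris codes from 2D Gabor phase demodulation and uses occlusion masks): codes from different eyes behave like independent fair coin flips and land near 0.5, while codes from the same eye land far below it, so identity is declared precisely when the test of independence fails.

import numpy as np

def fractional_hamming(code_a: np.ndarray, code_b: np.ndarray) -> float:
    # Fraction of disagreeing bits between two equal-length binary codes.
    return np.count_nonzero(code_a != code_b) / code_a.size

rng = np.random.default_rng(0)
same_eye = rng.integers(0, 2, 2048, dtype=np.uint8)
noise = (rng.random(2048) < 0.05).astype(np.uint8)  # ~5% bit noise between captures
noisy_copy = same_eye ^ noise
other_eye = rng.integers(0, 2, 2048, dtype=np.uint8)

print(fractional_hamming(same_eye, noisy_copy))  # ~0.05: independence rejected, same eye
print(fractional_hamming(same_eye, other_eye))   # ~0.5: consistent with independence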
The Computational Brain.
, 1994
"... Keywords: reductionism, neural networks, distributed coding, Karl Pribram, computational neuroscience, receptive field 1.1 The broad goal of this book, expressed at the start, is ``to understand how neurons give rise to a mental life.'' A mental reductionism is assumed in this seductively ..."
Cited by 450 (7 self)
Control-Flow Analysis of Higher-Order Languages
, 1991
"... representing the official policies, either expressed or implied, of ONR or the U.S. Government. Keywords: data-flow analysis, Scheme, LISP, ML, CPS, type recovery, higher-order functions, functional programming, optimising compilers, denotational semantics, nonstandard Programs written in powerful, ..."
Cited by 365 (10 self)
, higher-order languages like Scheme, ML, and Common Lisp should run as fast as their FORTRAN and C counterparts. They should, but they don’t. A major reason is the level of optimisation applied to these two classes of languages. Many FORTRAN and C compilers employ an arsenal of sophisticated global
The Stanford FLASH multiprocessor
- In Proceedings of the 21st International Symposium on Computer Architecture
, 1994
"... The FLASH multiprocessor efficiently integrates support for cache-coherent shared memory and high-performance message passing, while minimizing both hardware and software overhead. Each node in FLASH contains a microprocessor, a portion of the machine’s global memory, a port to the interconnection n ..."
Cited by 349 (20 self)
, which are derived from our system-level simulator and our Verilog code, are given for several common protocol operations. The paper also describes our software strategy and FLASH’s current status.
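A heavily simplified sketch of the directory-style bookkeeping a FLASH-like node controller performs for cache-coherent shared memory (this toy is ours and is not FLASH's actual protocol; real protocol operations also cover the message-passing side and many more states): a per-line directory tracks which nodes hold a clean shared copy and which node, if any, owns a possibly dirty copy, and resolves read and write requests accordingly.

class DirectoryLine:
    def __init__(self):
        self.sharers = set()   # nodes holding a read-only copy
        self.owner = None      # node holding the only, possibly dirty, copy

    def handle_read(self, node):
        if self.owner is not None and self.owner != node:
            # Writeback: the dirty copy is fetched; both nodes keep shared copies.
            self.sharers = {self.owner, node}
            self.owner = None
        elif self.owner is None:
            self.sharers.add(node)

    def handle_write(self, node):
        # In hardware, invalidation/ownership-transfer messages would be sent
        # to the current sharers and owner here.
        self.sharers = set()
        self.owner = node

line = DirectoryLine()
line.handle_read(1)
line.handle_read(2)
line.handle_write(3)             # conceptually invalidates nodes 1 and 2
print(line.owner, line.sharers)  # 3 set()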
Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria
- In Proceedings of the International Conference on Software Engineering
, 1994
"... This paper reports an experimental study investigating the effectiveness of two code-based test adequacy criteria for identifying sets of test cases that detect faults. The alledges and all-D Us (modified all-uses) coverage criteria were applied to 130 faulty program versions derived from seven mode ..."
Cited by 313 (0 self)
moderate size base programs by seeding realistic faults. We generated several thousand test sets for each faulty program and examined the relationship between fault detection and coverage. Within the limited domain of our experiments, test sets achieving coverage levels over 90% usually showed
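A minimal sketch of what an all-edges (branch coverage) adequacy check measures (illustration only; the study used dedicated dataflow/controlflow coverage tools and also the all-DUs criterion): every branch outcome of the instrumented program must be exercised by at least one test in the set.

covered = set()

def classify(x):
    # Program under test, instrumented by hand with branch-outcome markers.
    if x < 0:
        covered.add("x<0:true")
        return "negative"
    covered.add("x<0:false")
    if x == 0:
        covered.add("x==0:true")
        return "zero"
    covered.add("x==0:false")
    return "positive"

ALL_EDGES = {"x<0:true", "x<0:false", "x==0:true", "x==0:false"}

def edge_coverage(test_set):
    covered.clear()
    for x in test_set:
        classify(x)
    return len(covered & ALL_EDGES) / len(ALL_EDGES)

print(edge_coverage([5, 7]))      # 0.5: misses both true-branch edges
print(edge_coverage([-1, 0, 3]))  # 1.0: all-edges adequate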