Results 1–10 of 4,850
A greedy algorithm for aligning DNA sequences
J. Comput. Biol., 2000
Cited by 585 (16 self)
"For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy ..."
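For intuition, the indel-only core of such greedy alignment can be sketched as Myers-style diagonal extension: for increasing edit cost d, push each diagonal as far as free character matches allow. This is a generic illustration of the greedy idea, not the paper's algorithm (which adds its own scoring and termination rules); `greedy_edit_distance` is a hypothetical helper name.

```python
def greedy_edit_distance(a, b):
    # Myers-style greedy scheme: for increasing edit cost d, push each
    # diagonal k = i - j as far as free matching characters allow.
    # Counts insertions and deletions only (a substitution costs 2).
    n, m = len(a), len(b)
    v = {1: 0}                       # furthest i reached on each diagonal
    for d in range(n + m + 1):
        for k in range(-d, d + 1, 2):
            if k == -d or (k != d and v[k - 1] < v[k + 1]):
                i = v[k + 1]         # step down: insert a character of b
            else:
                i = v[k - 1] + 1     # step right: delete a character of a
            j = i - k
            while i < n and j < m and a[i] == b[j]:
                i += 1               # greedy "snake": matches are free
                j += 1
            v[k] = i
            if i >= n and j >= m:
                return d
    return n + m
```

The dynamic-programming alternative fills an (n+1) x (m+1) table; the greedy version does work proportional to the number of differences, which is why it wins when the sequences differ only by scattered errors.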
Fast Effective Rule Induction
1995
Cited by 1274 (21 self)
"Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recently proposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error rates higher than those of C4.5 and C4.5rules. We then propose a number of modifications resulting in an algorithm RIPPERk that is very competitive with C4.5rules with respect to error rates, but much more efficient on large samples. RIPPERk obtains error rates lower than or equivalent to C4.5rules ..."
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
Cited by 653 (21 self)
"An iterative method is given for solving Ax ≈ b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ..."
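Since the abstract notes LSQR is analytically equivalent to conjugate gradients, a toy way to see what it computes is CG applied to the normal equations A^T A x = A^T b. This is a pure-Python, dense sketch for intuition only: LSQR works on A directly via Golub-Kahan bidiagonalization and avoids the conditioning penalty of forming A^T A; all function names here are illustrative.

```python
def matvec(A, x):
    # Dense matrix-vector product; a real sparse solver would exploit sparsity.
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def cg_normal_equations(A, b, iters=50, tol=1e-12):
    # Solve min ||Ax - b||_2 by running conjugate gradients on the
    # normal equations A^T A x = A^T b. LSQR is analytically equivalent
    # but numerically better behaved when A is ill-conditioned.
    At = [list(col) for col in zip(*A)]
    n = len(At)
    x = [0.0] * n
    r = matvec(At, b)                  # residual of normal equations at x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(At, matvec(A, p))  # (A^T A) p without forming A^T A
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Overdetermined 3x2 system whose least-squares solution is x = [1, 2].
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
print(cg_normal_equations(A, b))
```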
KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs
Cited by 557 (15 self)
"We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core ... of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug-finding tool, applying it to 452 applications (over 430K total lines of code), where ..."
The strength of weak learnability
Machine Learning, 1990
Cited by 871 (26 self)
"This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high prob ... of learnability are equivalent. A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition ..."
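A loose illustration of why combining weak hypotheses can help: a hypothesis that is right only 60% of the time, majority-voted over many independent copies, is right far more often. Note this is only the independent-voters intuition, not Schapire's actual boosting construction, which must cope with weak learners that are barely-better-than-chance on adversarially chosen distributions; all names below are illustrative.

```python
import random

# Toy binary task where the correct label is always 1. A weak hypothesis
# is right with probability 0.6, only slightly better than guessing.
def weak_predict(rng, accuracy=0.6):
    return 1 if rng.random() < accuracy else 0

def majority_predict(rng, n_hypotheses=101):
    # Majority vote over many independent weak hypotheses.
    votes = sum(weak_predict(rng) for _ in range(n_hypotheses))
    return 1 if 2 * votes > n_hypotheses else 0

rng = random.Random(0)
trials = 1000
weak_acc = sum(weak_predict(rng) for _ in range(trials)) / trials
strong_acc = sum(majority_predict(rng) for _ in range(trials)) / trials
print(weak_acc, strong_acc)  # the vote is far more accurate than one hypothesis
```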
The Laplacian Pyramid as a Compact Image Code
1983
Cited by 1388 (12 self)
"We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression ..."
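The decorrelation step described above, subtracting a low-pass copy so the residual "error" signal has low variance, can be sketched in one dimension. A moving-average filter stands in for the pyramid's Gaussian-like kernel here, an illustrative substitution rather than the paper's actual filter.

```python
def low_pass(signal, radius=2):
    # Moving-average smoother standing in for the pyramid's Gaussian-like
    # low-pass kernel (illustrative choice, not the paper's filter).
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius))
            for i in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# A slowly varying ramp plus a small alternation: neighboring samples are
# highly correlated, so the difference ("error") signal is much flatter
# and therefore cheaper to encode.
signal = [i + (0.5 if i % 2 else -0.5) for i in range(64)]
error = [s - l for s, l in zip(signal, low_pass(signal))]
print(variance(signal), variance(error))
```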
Loopy belief propagation for approximate inference: An empirical study
In: Proceedings of Uncertainty in AI, 1999
Cited by 676 (15 self)
"Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme ..."
Equivalent Error Bars For Neural Network Classifiers Trained By Bayesian Inference
In Proc. ESANN, 1997
Cited by 6 (1 self)
"The topic of this paper is the problem of outlier detection for neural networks trained by Bayesian inference. I will show that marginalization is not a good method to get moderated probabilities for classes in outlying regions. The reason why marginalization fails to indicate outliers is analysed, and an alternative measure that is a more reliable indicator for outliers is proposed. A simple artificial classification problem is used to visualize the differences. Finally, both methods are used to classify a real-world problem where outlier detection is mandatory. 1 Introduction: Neural networks are often used in safety-critical applications for regression or classification purposes. Since neural networks are unable to extrapolate into regions not covered by the training data (see [6]), one should not use their predictions in such regions. Consequently, methods for outlier detection have attracted a lot of attention. Outliers may be detected by assigning a confidence measure to network decisions. ..."
Symbolic Model Checking Using SAT Procedures instead of BDDs
DAC 99, 1999
Cited by 329 (28 self)
"In this paper, we study the application of propositional decision procedures in hardware verification. In particular, we apply bounded model checking, as introduced in [1], to equivalence and invariant checking. We present several optimizations that reduce the size of generated propositional formulas ..."
Confirmation, Disconfirmation, and Information in Hypothesis Testing
1987
Cited by 333 (0 self)
"Strategies for hypothesis testing in scientific investigation and everyday reasoning have interested both psychologists and philosophers. A number of these scholars stress the importance of disconfirmation in reasoning and suggest that people are instead prone to a general deleterious "confirmation ..." ... in terms of a general positive test strategy. With this strategy, there is a tendency to test cases that are expected (or known) to have the property of interest rather than those expected (or known) to lack that property. This strategy is not equivalent to confirmation bias in the first sense; we show ..."