CiteSeerX

Results 1 - 10 of 21,815

Near Shannon limit error-correcting coding and decoding

by Claude Berrou, Alain Glavieux, Punya Thitimajshima, 1993
"... Abstract- This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes and the associated ..."
Cited by 1776 (6 self)
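
The encoder the abstract describes — a systematic bit stream plus parity bits from two recursive systematic convolutional (RSC) encoders, the second fed a permuted copy of the input — can be sketched compactly. This is only an illustrative rate-1/3 sketch: the constraint-length-3 polynomials (1 + D + D^2 feedback, 1 + D^2 feedforward) and the random interleaver are my assumptions, not the configuration reported in the paper.

    import random

    def rsc_encode(bits):
        """Recursive systematic convolutional (RSC) encoder, constraint length 3.
        Feedback polynomial 1 + D + D^2, feedforward polynomial 1 + D^2.
        Returns the parity stream; the systematic stream is the input itself."""
        s1 = s2 = 0                      # shift-register contents
        parity = []
        for u in bits:
            a = u ^ s1 ^ s2              # feedback: input combined with both register taps
            parity.append(a ^ s2)        # feedforward taps 1 and D^2
            s2, s1 = s1, a               # shift the register
        return parity

    def turbo_encode(bits, interleaver):
        """Rate-1/3 parallel concatenation: systematic bits, parity from RSC1 on the
        original order, parity from RSC2 on the interleaved order."""
        p1 = rsc_encode(bits)
        p2 = rsc_encode([bits[i] for i in interleaver])
        return bits, p1, p2

    # toy usage with a hypothetical random interleaver
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    pi = list(range(len(msg)))
    random.shuffle(pi)
    print(turbo_encode(msg, pi))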

Surround-screen projection-based virtual reality: The design and implementation of the CAVE

by Carolina Cruz-Neira, Daniel J. Sandin, Thomas A. DeFanti, 1993
"... Abstract Several common systems satisfy some but not all of the VR This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches ..."
Cited by 725 (27 self)
the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing
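
The off-axis perspective projection mentioned at the end of the excerpt amounts to building an asymmetric viewing frustum from the tracked eye position and the fixed screen rectangle. The sketch below assumes a single screen lying in the z = 0 plane and an eye in front of it; the function name and parameters are illustrative, not taken from the paper.

    def off_axis_frustum(eye, screen_x, screen_y, near, far):
        """Asymmetric (off-axis) frustum for a fixed screen lying in the z = 0 plane.

        eye      -- tracked eye position (ex, ey, ez), with ez > 0 in front of the screen
        screen_x -- (left, right) extent of the screen in world x
        screen_y -- (bottom, top) extent of the screen in world y
        Returns (l, r, b, t, n, f) for a glFrustum-style projection; the matching
        view transform is just a translation by -eye.
        """
        ex, ey, ez = eye
        scale = near / ez                          # project screen edges onto the near plane
        l = (screen_x[0] - ex) * scale
        r = (screen_x[1] - ex) * scale
        b = (screen_y[0] - ey) * scale
        t = (screen_y[1] - ey) * scale
        return l, r, b, t, near, far

    # head centred in front of a 2 m x 2 m wall, then shifted up and to the right
    print(off_axis_frustum((0.0, 0.0, 1.5), (-1.0, 1.0), (-1.0, 1.0), 0.1, 100.0))
    print(off_axis_frustum((0.6, 0.2, 1.5), (-1.0, 1.0), (-1.0, 1.0), 0.1, 100.0))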

Model-Based Analysis of Oligonucleotide Arrays: Model Validation, Design Issues and Standard Error Application

by Cheng Li, Wing Hung Wong, 2001
"... Background: A model-based analysis of oligonucleotide expression arrays we developed previously uses a probe-sensitivity index to capture the response characteristic of a specific probe pair and calculates model-based expression indexes (MBEI). MBEI has standard error attached to it as a measure of ..."
Cited by 775 (28 self)
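
The model-based index the abstract refers to fits the probe-level differences PM − MM with a multiplicative model d[i, j] ≈ θ[i]·φ[j], where θ[i] is the expression index for array i and φ[j] is the sensitivity of probe pair j. The alternating-least-squares sketch below, with a rough standard error for each θ[i], illustrates the model form only; it is not the authors' implementation, and their outlier handling and exact error model are omitted.

    import numpy as np

    def fit_mbei(d, iters=50):
        """Fit d[i, j] ~ theta[i] * phi[j] by alternating least squares.

        d -- matrix of PM - MM differences, arrays (rows) x probe pairs (columns).
        Returns expression indexes theta, probe sensitivities phi (scaled so that
        sum(phi**2) equals the number of probe pairs), and a rough standard error
        for each theta."""
        n_arrays, n_probes = d.shape
        phi = np.ones(n_probes)
        for _ in range(iters):
            theta = d @ phi / (phi @ phi)            # least squares for theta given phi
            phi = theta @ d / (theta @ theta)        # least squares for phi given theta
            phi *= np.sqrt(n_probes / (phi @ phi))   # identifiability constraint
        resid = d - np.outer(theta, phi)
        sigma2 = (resid ** 2).sum(axis=1) / (n_probes - 1)
        se_theta = np.sqrt(sigma2 / (phi @ phi))     # rough per-array standard error
        return theta, phi, se_theta

    # toy data: 6 arrays x 11 probe pairs generated from the model plus noise
    rng = np.random.default_rng(0)
    d = np.outer(rng.uniform(50, 500, 6), rng.uniform(0.5, 1.5, 11))
    d += rng.normal(0, 20, size=d.shape)
    theta, phi, se = fit_mbei(d)
    print(np.round(theta), np.round(se, 1))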

A greedy algorithm for aligning DNA sequences

by Zheng Zhang, Scott Schwartz, Lukas Wagner, Webb Miller - J. Comput. Biol., 2000
"... For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy a ..."
Cited by 585 (16 self)
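
The greedy idea — follow exact matches for free along a diagonal and pay only for differences — is easiest to see in the basic furthest-reaching-point recurrence (in the spirit of Myers' O(ND) method) that this paper builds on. The sketch below counts insertions and deletions only; the paper's algorithm additionally handles affine gap scores and X-drop termination, which are omitted here.

    def greedy_differences(a, b):
        """Minimum number of insertions and deletions needed to turn a into b
        (a substitution therefore costs 2), found by greedily extending exact
        matches along diagonals.  Fast when the sequences differ only by a few
        sequencing errors."""
        n, m = len(a), len(b)
        fr = {1: 0}                          # furthest-reaching x per diagonal k = x - y
        for d in range(n + m + 1):           # d = number of differences spent so far
            for k in range(-d, d + 1, 2):
                if k == -d or (k != d and fr[k - 1] < fr[k + 1]):
                    x = fr[k + 1]            # consume a character of b (insertion)
                else:
                    x = fr[k - 1] + 1        # consume a character of a (deletion)
                y = x - k
                while x < n and y < m and a[x] == b[y]:
                    x += 1                   # greedy: exact matches are free
                    y += 1
                fr[k] = x
                if x >= n and y >= m:
                    return d
        return n + m

    print(greedy_differences("ACGTACGTTAGC", "ACGTACCTTAGC"))   # one mismatch -> 2
    print(greedy_differences("ACGTACGTTAGC", "ACGTACGTAGC"))    # one deletion -> 1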

Support-Vector Networks

by Corinna Cortes, Vladimir Vapnik - Machine Learning, 1995
"... The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special pr ..."
Cited by 3703 (35 self)
properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.
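
As a rough illustration of the two ingredients named in the abstract — an explicit non-linear feature map followed by a linear decision surface with a soft margin for non-separable data — here is a minimal hinge-loss sketch trained by subgradient descent. This is not the paper's training procedure (Cortes and Vapnik solve a constrained quadratic programme over the dual); the degree-2 feature map, learning rate, and C value are arbitrary choices for the toy example.

    import numpy as np

    def feature_map(X):
        """Explicit degree-2 polynomial map: a small stand-in for the
        high-dimensional feature space described in the abstract."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    def train_soft_margin(X, y, C=1.0, lr=0.01, epochs=200):
        """Minimise 0.5*||w||^2 + C * sum(hinge losses) by subgradient descent;
        y must take values in {-1, +1}."""
        Z = feature_map(X)
        w, b = np.zeros(Z.shape[1]), 0.0
        for _ in range(epochs):
            margins = y * (Z @ w + b)
            active = margins < 1                          # points violating the margin
            w -= lr * (w - C * (y[active, None] * Z[active]).sum(axis=0))
            b -= lr * (-C * y[active].sum())
        return w, b

    # toy non-separable data: a circular boundary plus label noise
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5, 1, -1)
    y[rng.choice(200, 10, replace=False)] *= -1
    w, b = train_soft_margin(X, y)
    print("training accuracy:", (np.sign(feature_map(X) @ w + b) == y).mean())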

An empirical comparison of voting classification algorithms: Bagging, boosting, and variants.

by Eric Bauer, Ron Kohavi - Machine Learning, 1999
"... Abstract. Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several vari ..."
Cited by 707 (2 self)
in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting
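
To make the voting and reweighting mechanism being compared concrete, here is a minimal AdaBoost sketch with decision stumps as the base learner. It illustrates the algorithm family only, not the experimental code of the study; the stump learner, number of rounds, and toy data are assumptions.

    import numpy as np

    def best_stump(X, y, w):
        """Decision stump (one feature, one threshold, one sign) with the
        smallest weighted error."""
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= thr, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def adaboost(X, y, rounds=20):
        """AdaBoost: upweight the examples the current stump misclassifies,
        then take a weighted vote over all stumps."""
        w = np.full(len(y), 1.0 / len(y))
        ensemble = []
        for _ in range(rounds):
            err, j, thr, sign = best_stump(X, y, w)
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))   # stump's voting weight
            pred = np.where(X[:, j] <= thr, sign, -sign)
            w *= np.exp(-alpha * y * pred)                      # boost the mistakes
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def vote(ensemble, X):
        score = np.zeros(len(X))
        for alpha, j, thr, sign in ensemble:
            score += alpha * np.where(X[:, j] <= thr, sign, -sign)
        return np.sign(score)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = np.where(X[:, 0] + X[:, 1] ** 2 > 0.5, 1, -1)           # non-linear concept
    model = adaboost(X, y)
    print("training error:", (vote(model, X) != y).mean())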

Imagenet classification with deep convolutional neural networks.

by Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton - In Advances in Neural Information Processing Systems, 2012
"... Abstract We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the pr ..."
Cited by 1010 (11 self)
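
The top-1 and top-5 error rates quoted in the abstract are conventionally computed as the fraction of test images whose true label is absent from the 1 or 5 highest-scoring classes. A small sketch of that bookkeeping (not the network itself, and not the paper's evaluation code):

    import numpy as np

    def top_k_error(scores, labels, k):
        """Fraction of examples whose true label is not among the k classes with
        the highest scores.  scores: (n_examples, n_classes); labels: (n_examples,)."""
        topk = np.argsort(scores, axis=1)[:, -k:]        # indices of the k best classes
        hit = (topk == labels[:, None]).any(axis=1)
        return 1.0 - hit.mean()

    # toy check: 5 examples, 10 classes, random scores
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(5, 10))
    labels = rng.integers(0, 10, size=5)
    print("top-1 error:", top_k_error(scores, labels, 1))
    print("top-5 error:", top_k_error(scores, labels, 5))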

The Quickhull algorithm for convex hulls

by C. Bradford Barber, David P. Dobkin, Hannu Huhdanpaa - ACM Transactions on Mathematical Software, 1996
"... The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull Algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algo ..."
Cited by 713 (0 self)
is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of “thick” facets that contain all possible exact convex hulls of the input. A variation
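
The planar Quickhull recursion the article starts from — pick the extreme points, find the point farthest from the current segment, and recurse on the two sub-problems it induces — can be sketched briefly. This covers only the 2-D case with plain floating point; the general-dimension algorithm and the “thick facet” handling described in the abstract are not reproduced here.

    def cross(o, a, b):
        """Twice the signed area of triangle oab; positive when b lies to the
        left of the ray from o through a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def _hull_side(p, q, pts):
        """Hull vertices strictly left of the segment p->q, ordered from p to q."""
        left = [r for r in pts if cross(p, q, r) > 0]
        if not left:
            return []
        far = max(left, key=lambda r: cross(p, q, r))    # farthest point from line pq
        return _hull_side(p, far, left) + [far] + _hull_side(far, q, left)

    def quickhull(points):
        """Convex hull of a set of 2-D points, returned in clockwise order."""
        pts = sorted(set(points))
        if len(pts) < 3:
            return pts
        p, q = pts[0], pts[-1]                           # leftmost and rightmost points
        above = _hull_side(p, q, pts)                    # hull vertices above p->q
        below = _hull_side(q, p, pts)                    # hull vertices below p->q
        return [p] + above + [q] + below

    print(quickhull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 3)]))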

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

by Christopher C. Paige, Michael A. Saunders - ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax- b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Cited by 653 (21 self)
numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate
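
LSQR is available as scipy.sparse.linalg.lsqr, and a typical call surfaces the quantities the abstract mentions: a stopping reason, a condition-number estimate, and (with calc_var=True) diagonal variance estimates that can be turned into rough standard errors for x. The sketch below assumes that SciPy interface and is a usage illustration only, not the original FORTRAN subroutine.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    # a random sparse overdetermined system A x ~= b
    rng = np.random.default_rng(0)
    A = sparse_random(500, 50, density=0.05, random_state=0)
    x_true = rng.normal(size=50)
    b = A @ x_true + 1e-3 * rng.normal(size=500)

    # calc_var=True also returns diagonal variance estimates for the solution
    result = lsqr(A, b, atol=1e-10, btol=1e-10, calc_var=True)
    x, istop, itn, r1norm = result[:4]
    acond, var = result[6], result[9]

    print("stop reason:", istop, "after", itn, "iterations")
    print("residual norm:", r1norm, "condition estimate:", acond)
    print("max coefficient error:", np.abs(x - x_true).max())
    sigma2 = r1norm ** 2 / (A.shape[0] - A.shape[1])     # residual variance estimate
    print("rough standard errors:", np.sqrt(sigma2 * var)[:5])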

Reinforcement learning: a survey

by Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore - Journal of Artificial Intelligence Research, 1996
"... This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem ..."
Cited by 1714 (25 self)
is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues
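
The trial-and-error interaction the survey describes can be made concrete with tabular Q-learning on a toy chain of states, where the agent updates its action values from each sampled transition. The environment, learning rate, and exploration schedule below are illustrative assumptions only, not taken from the survey.

    import random

    # toy chain MDP: states 0..4, actions 0 (left) / 1 (right), reward 1 on reaching state 4
    N_STATES, GOAL = 5, 4

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
        Q = [[0.0, 0.0] for _ in range(N_STATES)]
        for _ in range(episodes):
            s, done = 0, False
            while not done:
                greedy = max((0, 1), key=lambda a: Q[s][a])
                explore = random.random() < eps or Q[s][0] == Q[s][1]
                a = random.randrange(2) if explore else greedy
                s2, r, done = step(s, a)
                # trial-and-error update toward the one-step lookahead target
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q

    Q = q_learning()
    print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])   # greedy policy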