Results 1 - 10 of 4,655
Comparison of performance of variants of single-layer perceptron algorithms on non-separable data
- Neural, Parallel and Scientific Computation, 2000
"... We present a detailed experimental comparison of the pocket algorithm, thermal perceptron, and barycentric correction procedure algorithms that most commonly used algorithms for training threshold logic units (TLUs). Each of these algorithms represent stable variants of the standard perceptron learn ..."
Cited by 8 (2 self)
learning rule in that they guarantee convergence to zero classification errors on datasets that are linearly separable and attempt to classify as large a subset of the training patterns as possible for datasets that are not linearly separable. For datasets involving patterns distributed among M different
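The pocket algorithm named in this abstract is easy to sketch: it runs the ordinary perceptron update but keeps ("pockets") the weight vector with the fewest training errors seen so far, so on non-separable data the returned hypothesis is the best one visited rather than the last one. The code below is a minimal illustration under assumed conventions (labels in {-1, +1}), not the authors' implementation, and the thermal perceptron and barycentric correction procedure differ in how they modify the update.

```python
import numpy as np

def pocket_perceptron(X, y, epochs=100, rng=np.random.default_rng(0)):
    """Perceptron with a 'pocket': remember the weights with the fewest
    training errors seen so far (useful when the data are not separable).
    X: (n, d) array of patterns, y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    pocket_w = w.copy()
    pocket_errors = n + 1
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w) <= 0:          # misclassified pattern
                w = w + y[i] * X[i]             # standard perceptron update
                errors = int(np.sum(y * (X @ w) <= 0))
                if errors < pocket_errors:      # better than the pocketed weights?
                    pocket_errors, pocket_w = errors, w.copy()
    return pocket_w, pocket_errors
```

On linearly separable data this behaves like the ordinary perceptron (the pocket ends with zero errors); on non-separable data it returns the weight vector that misclassified the fewest training patterns among those visited.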
A tutorial on support vector machines for pattern recognition
- Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Cited by 3393 (12 self)
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when
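For the non-separable case the tutorial covers, the usual device is the soft-margin formulation: slack variables allow some margin violations, and the primal objective becomes ½‖w‖² plus C times the sum of hinge losses. The sketch below minimizes that objective by plain (sub)gradient descent; the data layout and parameter names are assumed, and it is not the tutorial's own notation or a production SVM solver.

```python
import numpy as np

def soft_margin_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimize 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))
    by simple (sub)gradient descent. Handles non-separable data because
    margin violations are penalized rather than forbidden."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # points violating the margin
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```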
Support-Vector Networks
- Machine Learning, 1995
"... The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special pr ..."
Cited by 3703 (35 self)
properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.
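The non-linear map to a high-dimensional feature space that the abstract describes is usually realized through a kernel function, so the map never has to be computed explicitly: a decision surface that is linear in the feature space is non-linear in the input space. The following toy sketch (a dual-form kernel perceptron with a polynomial kernel and assumed XOR-style data) only illustrates that general idea, not the paper's construction.

```python
import numpy as np

def poly_kernel(a, b, degree=2):
    """k(a, b) = (a.b + 1)^degree equals an inner product in a
    higher-dimensional feature space of monomials up to 'degree'."""
    return (a @ b + 1.0) ** degree

def kernel_perceptron(X, y, epochs=20, degree=2):
    """Dual-form perceptron: the decision surface is linear in the implicit
    feature space, f(x) = sum_i alpha_i * y_i * k(x_i, x)."""
    n = X.shape[0]
    alpha = np.zeros(n)
    K = np.array([[poly_kernel(X[i], X[j], degree) for j in range(n)]
                  for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.sign((alpha * y) @ K[:, i]) <= 0:
                alpha[i] += 1.0               # mistake-driven dual update
    return alpha

# XOR-like data: not linearly separable in the input space,
# but separable in the degree-2 polynomial feature space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
alpha = kernel_perceptron(X, y)
```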
Rendering of Surfaces from Volume Data
- IEEE COMPUTER GRAPHICS AND APPLICATIONS, 1988
"... The application of volume rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is explored. Fitting of geometric primitives to the sampled data is not required. Images are formed by directly shading each sample and projecting it onto the picture ..."
Cited by 875 (12 self)
The application of volume rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is explored. Fitting of geometric primitives to the sampled data is not required. Images are formed by directly shading each sample and projecting it onto
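The rendering the abstract describes shades each volume sample directly (for instance through colour and opacity transfer functions) and composites the shaded, partially transparent samples along viewing rays, with no geometric primitives fitted to the data. The sketch below is a heavily simplified front-to-back compositing loop over axis-aligned rays; the transfer functions and data are assumed, and real volume renderers add gradient-based shading and resampling along arbitrary rays.

```python
import numpy as np

def render_volume(volume, color_tf, opacity_tf):
    """Shade every sample with transfer functions and composite the samples
    front-to-back along axis-aligned rays; no surface geometry is fitted."""
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny))
    transmittance = np.ones((nx, ny))
    for k in range(nz):                        # march each ray one slice at a time
        s = volume[:, :, k]
        c = color_tf(s)                        # shaded sample colour (scalar here)
        a = opacity_tf(s)                      # sample opacity in [0, 1]
        image += transmittance * a * c         # front-to-back compositing
        transmittance *= (1.0 - a)
    return image

# Hypothetical transfer functions: brighter and more opaque for higher densities.
color_tf = lambda s: s
opacity_tf = lambda s: np.clip(s, 0.0, 1.0) * 0.1
img = render_volume(np.random.rand(32, 32, 64), color_tf, opacity_tf)
```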
Calibrating noise to sensitivity in private data analysis
- In Proceedings of the 3rd Theory of Cryptography Conference, 2006
"... Abstract. We continue a line of research initiated in [10, 11] on privacypreserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the datab ..."
Cited by 649 (60 self)
obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive. 1 Introduction We continue a line of research initiated in [10, 11] on privacy in statistical databases. A statistic is a quantity computed from a sample. Intuitively, if the database is a
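The calibration in the title is typically realized as the Laplace mechanism: the true answer f(D) is perturbed with Laplace noise whose scale is the global sensitivity of f (the largest change in f from altering one record) divided by the privacy parameter ε. A minimal sketch with assumed function and variable names, using a counting query whose sensitivity is 1:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=np.random.default_rng()):
    """Return the true answer plus Laplace noise with scale
    sensitivity / epsilon, i.e. noise calibrated to the query's sensitivity."""
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query: "how many records satisfy the predicate?"
# Its global sensitivity is 1: adding or removing one record changes the count by at most 1.
records = [12, 55, 7, 33, 90]
true_count = sum(1 for r in records if r > 30)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```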
The Contourlet Transform: An Efficient Directional Multiresolution Image Representation
- IEEE TRANSACTIONS ON IMAGE PROCESSING
"... The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a “true” two-dimensional transform that can capture the intrinsic geometrical structure t ..."
Cited by 513 (20 self)
-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using non-separable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a
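The separable/non-separable distinction the abstract turns on is whether a 2-D filter factors into two 1-D filters applied along rows and then columns; the contourlet transform is built from non-separable, directional filter banks precisely because separable ones cannot follow edge geometry well. The toy sketch below only illustrates that distinction with assumed example kernels; it is not the contourlet construction itself.

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)

# Separable filter: a 2-D kernel that is an outer product of two 1-D kernels,
# e.g. a 3x3 box blur.
h = np.ones(3) / 3.0
separable_kernel = np.outer(h, h)
out_separable = convolve2d(image, separable_kernel, mode='same')

# The same result via two 1-D passes, which is what a "separable extension
# of a one-dimensional transform" means computationally.
out_two_pass = convolve2d(convolve2d(image, h[None, :], mode='same'),
                          h[:, None], mode='same')
assert np.allclose(out_separable, out_two_pass)

# Non-separable filter: cannot be written as such an outer product (rank > 1),
# e.g. a kernel responding to diagonal structure; directional filter banks
# are built from filters of this kind.
nonseparable_kernel = np.array([[0., -1., -1.],
                                [1.,  0., -1.],
                                [1.,  1.,  0.]])
out_nonseparable = convolve2d(image, nonseparable_kernel, mode='same')
```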
Adaptive Regularization of Weight Vectors
- Advances in Neural Information Processing Systems 22, 2009
"... We present AROW, a new online learning algorithm that combines several useful properties: large margin training, confidence weighting, and the capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing it to perform ..."
Cited by 71 (17 self)
We present AROW, a new online learning algorithm that combines several useful properties: large margin training, confidence weighting, and the capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing
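The adaptive regularization and confidence weighting described here amount to maintaining a Gaussian over weight vectors: a mean μ used for prediction and a covariance Σ expressing per-direction confidence, both updated on examples that violate the margin. The sketch below follows the AROW-style update as commonly stated, with assumed names and a full covariance matrix; it is a compact illustration rather than the authors' reference code.

```python
import numpy as np

def arow_train(X, y, r=1.0):
    """Online AROW-style updates: keep a mean weight vector mu and a
    covariance Sigma (confidence); adapt both on margin violations,
    which lets the learner cope with non-separable streams."""
    n, d = X.shape
    mu = np.zeros(d)
    Sigma = np.eye(d)
    for x, label in zip(X, y):            # label in {-1, +1}
        margin = label * (mu @ x)
        if margin < 1.0:                  # margin violation -> adapt
            v = x @ Sigma @ x             # variance (confidence) along x
            beta = 1.0 / (v + r)
            alpha = (1.0 - margin) * beta
            mu = mu + alpha * label * (Sigma @ x)
            Sigma = Sigma - beta * np.outer(Sigma @ x, Sigma @ x)
    return mu, Sigma
```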
The Determinants of Credit Spread Changes.
- Journal of Finance, 2001
"... ABSTRACT Using dealer's quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are ..."
Cited by 422 (2 self)
, and in turn, reduces the credit spreads. This prediction is borne out in their data. Further evidence is provided by Duffee (1998), who uses a sample restricted to non-callable bonds and finds a significant, albeit weaker, negative relationship between changes in credit spreads and interest rates. Changes
Complexity of Data Flow Analysis for Non-Separable Frameworks
2006
"... Abstract — The complexity of round robin iterative data flow analysis has been traditionally defined as 1+d where d is the depth of a control flow graph. However, this bound is restricted to bit vector frameworks, which by definition, are separable. For non-separable frameworks, the complexity of an ..."
Abstract — The complexity of round robin iterative data flow analysis has been traditionally defined as 1+d where d is the depth of a control flow graph. However, this bound is restricted to bit vector frameworks, which by definition, are separable. For non-separable frameworks, the complexity
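Round robin iterative data flow analysis, whose complexity this paper studies, repeatedly sweeps the control flow graph in a fixed node order and reapplies the transfer functions until no data flow value changes; for bit vector (separable) frameworks the number of sweeps is bounded by 1 + d, where d is the depth of the graph, and the paper's point is that this bound does not carry over to non-separable frameworks. The sketch below is a small round-robin solver for a forward bit-vector problem (reaching-definitions style); the graph representation and names are assumed.

```python
def round_robin_dataflow(nodes, preds, gen, kill):
    """Forward round-robin iteration for a bit-vector framework:
    OUT[n] = gen[n] | (IN[n] - kill[n]),  IN[n] = union of OUT over preds[n].
    Sweeps the nodes in a fixed order until a full pass changes nothing."""
    IN = {n: set() for n in nodes}
    OUT = {n: set(gen[n]) for n in nodes}
    sweeps = 0
    changed = True
    while changed:
        changed = False
        sweeps += 1
        for n in nodes:                   # fixed order, e.g. reverse post-order
            IN[n] = set().union(*(OUT[p] for p in preds[n]))
            new_out = gen[n] | (IN[n] - kill[n])
            if new_out != OUT[n]:
                OUT[n] = new_out
                changed = True
    return IN, OUT, sweeps
```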