Results 1–10 of 4,758,454
Relations defined on sets
Journal of Formalized Mathematics, 1989
"... Summary. The article includes theorems concerning properties of relations defined as a subset of the Cartesian product of two sets (mode Relation of X,Y where X,Y are sets). Some notions, introduced in [4] such as domain, codomain, field of a relation, composition of relations, image and inverse ima ..."
Cited by 517 (0 self)
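The notions named in the snippet (domain, codomain, field, composition of relations, image, inverse image) are concrete enough to sketch. A minimal Python illustration, modeling a relation simply as a set of ordered pairs; the helper names below are ours for illustration, not Mizar's:

```python
# A relation between sets X and Y, modeled as a set of ordered pairs (x, y).
R = {(1, "a"), (1, "b"), (2, "a")}
S = {("a", True), ("c", False)}

def dom(R):
    """Domain: the first coordinates of the pairs."""
    return {x for x, _ in R}

def rng(R):
    """Range: the second coordinates (the codomain values actually hit)."""
    return {y for _, y in R}

def field(R):
    """Field: the union of domain and range."""
    return dom(R) | rng(R)

def compose(R, S):
    """Composition: pairs (x, z) with (x, y) in R and (y, z) in S."""
    return {(x, z) for x, y in R for y2, z in S if y == y2}

def image(R, A):
    """Image of a set A under R."""
    return {y for x, y in R if x in A}

def preimage(R, B):
    """Inverse image of a set B under R."""
    return {x for x, y in R if y in B}
```

For the toy relations above, `compose(R, S)` yields `{(1, True), (2, True)}`: only the pairs of R whose second coordinate matches a first coordinate of S contribute.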
A Framework for Defining Logics
Journal of the Association for Computing Machinery, 1993
"... The Edinburgh Logical Framework (LF) provides a means to define (or present) logics. It is based on a general treatment of syntax, rules, and proofs by means of a typed calculus with dependent types. Syntax is treated in a style similar to, but more general than, Martin-Löf's system of ariti ..."
Cited by 807 (45 self)
Minimum Error Rate Training in Statistical Machine Translation
2003
"... Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training cri ..."
Cited by 663 (7 self)
Training Support Vector Machines: an Application to Face Detection
1997
"... We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision sur ..."
Cited by 728 (1 self)
How Much Training is Needed in Multiple-Antenna Wireless Links?
IEEE Trans. Inform. Theory, 2000
Defining Virtual Reality: Dimensions Determining Telepresence
Journal of Communication, 1992
"... Virtual reality (VR) is typically defined in terms of technological hardware. This paper attempts to cast a new, variable-based definition of virtual reality that can be used to classify virtual reality in relation to other media. The definition of virtual reality is based on concepts of "presen ..."
Cited by 534 (0 self)
A training algorithm for optimal margin classifiers
Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Cited by 1848 (44 self)
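The margin-maximization idea in the snippet can be illustrated in a few lines of NumPy. The sketch below minimizes the soft-margin hinge objective by subgradient descent; this is a stand-in for the idea, not the dual quadratic program the paper actually solves, and the toy data are invented:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the soft-margin objective
       lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i (w.x_i + b)).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points on the wrong side of the margin
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data: the label is the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.5], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_linear_svm(X, y)
```

Only the "active" points (those with functional margin below 1) contribute to the hinge-loss gradient, which is the subgradient counterpart of the support-vector property: the solution is determined by the patterns nearest the boundary.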
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Cited by 641 (16 self)
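The recipe in the snippet, Viterbi decoding of training examples plus simple additive updates, is the structured perceptron. A minimal sketch with a toy two-tag set and invented emission/transition features (not the paper's feature templates):

```python
from collections import defaultdict

TAGS = ["N", "V"]  # toy tag set

def viterbi(words, w):
    """Highest-scoring tag sequence under emission (tag, word)
    and transition (prev_tag, tag) feature weights w."""
    n = len(words)
    score = [{t: w[(t, words[0])] for t in TAGS}]
    back = []
    for i in range(1, n):
        score.append({})
        back.append({})
        for t in TAGS:
            prev = max(TAGS, key=lambda p: score[i - 1][p] + w[(p, t)])
            back[-1][t] = prev
            score[i][t] = score[i - 1][prev] + w[(prev, t)] + w[(t, words[i])]
    tag = max(TAGS, key=lambda t: score[-1][t])
    seq = [tag]
    for bp in reversed(back):  # follow back-pointers to recover the sequence
        tag = bp[tag]
        seq.append(tag)
    return list(reversed(seq))

def feats(words, tags):
    """Emission features (tag, word) plus transition features (tag, next_tag)."""
    return [(t, wd) for t, wd in zip(tags, words)] + list(zip(tags, tags[1:]))

def train(data, epochs=5):
    """Structured perceptron: decode with current weights; when the decode
    is wrong, add the gold features and subtract the predicted features."""
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, w)
            if pred != gold:
                for f in feats(words, gold):
                    w[f] += 1.0
                for f in feats(words, pred):
                    w[f] -= 1.0
    return w
```

Each update is purely additive, which is what makes the algorithm so much cheaper to train than a maximum-entropy model or a CRF: no normalization over all tag sequences is ever computed.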
K-theory for operator algebras
Mathematical Sciences Research Institute Publications, 1998
"... p. XII line 5: since p. 12: I blew this simple formula: should be α = −⟨ξ, η⟩/⟨η, η⟩. p. 2 I.1.1.4: The Riesz-Fischer Theorem is often stated this way today, but neither Riesz nor Fischer (who worked independently) phrased it in terms of completeness of the orthogonal system {e^{int}}. If [a, b] is a ..."
Cited by 559 (0 self)
Neumann used the same name for Hilbert spaces in the modern sense (complete inner product spaces), which he defined in 1928. p. 3 line 6: At the end of the line, 2ε should be 4ε. p. 3 I.1.2.3: The statement that a dense subspace of a Hilbert space H contains an orthonormal basis for H can be false if H
Discriminative Training and Maximum Entropy Models for Statistical Machine Translation
2002
"... We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language senten ..."
Cited by 497 (30 self)
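The framework the snippet describes is a log-linear model: each candidate translation is scored by a weighted sum of feature functions, and the source-channel approach falls out as the special case of two features with unit weights. A toy sketch of the decision rule; the candidate texts, feature names, and weights below are invented for illustration:

```python
def log_linear_score(feats, weights):
    """Score a candidate as the weighted sum of its feature-function values."""
    return sum(weights[name] * value for name, value in feats.items())

def best_candidate(candidates, weights):
    """Decision rule: argmax over candidates of the log-linear score."""
    return max(candidates, key=lambda c: log_linear_score(c["feats"], weights))

# Toy candidates for one source sentence, with two hypothetical knowledge
# sources as feature functions: a translation-model log-score and a
# language-model log-score.
candidates = [
    {"text": "the house is small", "feats": {"log_tm": -1.2, "log_lm": -0.8}},
    {"text": "the home is little", "feats": {"log_tm": -1.0, "log_lm": -2.5}},
]
weights = {"log_tm": 1.0, "log_lm": 1.0}

best = best_candidate(candidates, weights)
```

With unit weights the score is just log(translation model) + log(language model), i.e. the source-channel decision rule; reweighting the features (for instance, trusting the translation model far more than the language model) can change which candidate wins, which is exactly the flexibility the direct maximum-entropy formulation buys.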