Results 1–10 of 1,009,708
Learning Read-Once Formulas with Queries
J. ACM, 1989
"... A read-once formula is a boolean formula in which each variable occurs at most once. Such formulas are also called µ-formulas or boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial time al ..."

Cited by 117 (19 self)
algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). Our results
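As a concrete illustration of the definition these papers share (a sketch of the object itself, not of any paper's learning algorithm), the read-once property can be checked directly on a formula tree; the nested-tuple encoding here is an assumption of this example:

```python
def variables(f):
    """Yield every variable occurrence in a formula encoded as
    ('and', sub, ...), ('or', sub, ...), ('not', sub), or a variable name."""
    if isinstance(f, str):
        yield f
    else:
        for sub in f[1:]:
            yield from variables(sub)

def is_read_once(f):
    """Read-once: each variable occurs at most once in the formula."""
    seen = list(variables(f))
    return len(seen) == len(set(seen))

def evaluate(f, assignment):
    """Evaluate the formula under a {variable: bool} assignment."""
    if isinstance(f, str):
        return assignment[f]
    op, *subs = f
    vals = [evaluate(s, assignment) for s in subs]
    if op == 'and':
        return all(vals)
    if op == 'or':
        return any(vals)
    if op == 'not':
        return not vals[0]
    raise ValueError(op)

# (x1 AND x2) OR (NOT x3): each variable labels exactly one leaf.
formula = ('or', ('and', 'x1', 'x2'), ('not', 'x3'))
```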
LEARNING ARITHMETIC READ-ONCE FORMULAS*
"... Abstract. A formula is read-once if each variable appears at most once in it. An arithmetic read-once formula is one in which the operators are addition, subtraction, multiplication, and division. We present polynomial time algorithms for exact learning of arithmetic read-once formulas over a field. ..."
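A minimal sketch of the object this abstract describes (evaluation only, not the paper's learning algorithm): an arithmetic read-once formula with operators {+, −, ×, ÷} evaluated over a prime field. The tuple encoding and the choice of field are assumptions of this example.

```python
P = 101  # a small prime, chosen arbitrarily; arithmetic is over GF(P)

def eval_rof(node, point):
    """Evaluate a formula tree over GF(P).
    node is a variable name (str), a constant (int), or (op, left, right)."""
    if isinstance(node, str):
        return point[node] % P
    if isinstance(node, int):
        return node % P
    op, l, r = node
    a, b = eval_rof(l, point), eval_rof(r, point)
    if op == '+':
        return (a + b) % P
    if op == '-':
        return (a - b) % P
    if op == '*':
        return (a * b) % P
    if op == '/':
        # Division via Fermat inverse; assumes the divisor is nonzero
        # at the evaluation point.
        return (a * pow(b, P - 2, P)) % P
    raise ValueError(op)

# (x1 + 3) * (x2 / x3): each variable appears at exactly one leaf.
f = ('*', ('+', 'x1', 3), ('/', 'x2', 'x3'))
```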
Learning Probabilistic Read-Once Formulas on Product Distributions
"... Abstract. This paper presents a polynomial-time algorithm for inferring a probabilistic generalization of the class of read-once Boolean formulas over the usual basis {AND, OR, NOT}. The algorithm effectively infers a good approximation of the target formula when provided with random examples which ..."
Interpolating Arithmetic Read-Once Formulas in Parallel
Journal of Computer and System Sciences, 1998
"... A formula is read-once if each variable appears at most once in it. An arithmetic read-once formula is one in which the operations are addition, subtraction, multiplication, and division (and constants are allowed). We present a randomized (Las Vegas) parallel algorithm for the exact interpolatio ..."

Cited by 6 (0 self)
interpolation of arithmetic read-once formulas over sufficiently large fields. More specifically, for n-variable read-once formulas, and fields of size at least 3n − 2, our algorithm runs in O(log …) time using … processors (where the field operations are charged unit cost). This complements other results
Improved polynomial identity testing for read-once formulas
In APPROX-RANDOM, 2009
"... An arithmetic read-once formula (ROF for short) is a formula (a circuit whose underlying graph is a tree) in which the operations are {+, ×} and such that every input variable labels at most one leaf. A preprocessed ROF (PROF for short) is a ROF in which we are allowed to replace each variable xi wi ..."

Cited by 21 (7 self)
Read-Once Polynomial Identity Testing
"... An arithmetic read-once formula (ROF for short) is a formula (a circuit in which the fanout of every gate is at most 1) in which the operations are {+, ×} and such that every input variable labels at most one leaf. In this paper we study the problems of identity testing and reconstruction of read-once ..."

Cited by 23 (6 self)
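For context on the identity-testing problem these two entries study: the generic randomized baseline (a standard Schwartz–Zippel-style black-box test, not the deterministic algorithms the papers develop) evaluates the polynomial at random points from a large field. A nonzero polynomial of degree d vanishes at a uniformly random point of a set S with probability at most d/|S|.

```python
import random

def probably_zero(poly, num_vars, trials=20, field_size=10**9 + 7):
    """Randomized identity test: poly is a black-box callable taking a
    tuple of ints; we work modulo field_size (assumed prime).
    Each trial errs with probability <= degree / field_size."""
    for _ in range(trials):
        point = tuple(random.randrange(field_size) for _ in range(num_vars))
        if poly(point) % field_size != 0:
            return False  # witnessed a nonzero value: definitely not zero
    return True  # zero at every sampled point: identically zero w.h.p.

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
zero = lambda p: (p[0] + p[1])**2 - p[0]**2 - 2*p[0]*p[1] - p[1]**2
nonzero = lambda p: p[0] * p[1] - 1
```

The papers above improve on this baseline by exploiting read-once structure to reduce or remove the randomness.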
Possessions and the Extended Self
Journal of Consumer Research, 1988
"... Our possessions are a major contributor to and reflection of our identities. A variety of evidence is presented supporting this simple and compelling premise. Related streams of research are identified and drawn upon in developing this concept and implications are derived for consumer behavior. Beca ..."

Cited by 544 (2 self)
Because the construct of extended self involves consumer behavior rather than buyer behavior, it appears to be a much richer construct than previous formulations positing a relationship between self-concept and consumer brand choice. Hollow hands clasp ludicrous possessions because they are links in the chain of life; if it breaks, they are truly lost. —Dichter 1964. We cannot hope to understand consumer behavior without first gaining some understanding of the meanings that consumers attach to possessions. A key to understanding what possessions mean is recognizing that, knowingly or unknowingly, intentionally or unintentionally, we regard our possessions as parts of ourselves. As Tuan argues, "Our fragile sense
Learning probabilistic relational models
In IJCAI, 1999
"... A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."

Cited by 619 (31 self)
objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction
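A small illustration of the "flattening" the snippet mentions, with a hypothetical two-table schema (the table and field names are invented for the example): joining relational data into one flat table duplicates per-object attributes and discards the link structure that PRMs are designed to model directly.

```python
# Relational form: a students table keyed by id, and a grades table
# that references students by foreign key.
students = {1: {'name': 'Ann'}, 2: {'name': 'Bob'}}
grades = [
    {'student': 1, 'course': 'ML', 'grade': 'A'},
    {'student': 1, 'course': 'DB', 'grade': 'B'},
    {'student': 2, 'course': 'ML', 'grade': 'B'},
]

# Flat form: one row per grade, with the student's attributes merged in.
# Ann's attributes now appear in two rows, and the fact that those rows
# describe the same underlying object is no longer represented.
flat = [{**students[g['student']], **g} for g in grades]
```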
Boosting a Weak Learning Algorithm By Majority
1995
"... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas pr ..."

Cited by 516 (15 self)
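The combination scheme the abstract describes can be sketched as follows. This is only the plain resampled-majority idea, not Freund's actual boosting-by-majority scheme (which reweights examples based on the mistakes of earlier hypotheses); the toy threshold learner is an assumption of the example.

```python
import random

def boost_by_majority(weak_learner, examples, rounds=11, sample_frac=0.6):
    """Train the weak learner on `rounds` resampled subsets of the
    examples and return the majority vote of the hypotheses."""
    hypotheses = []
    for _ in range(rounds):
        k = max(1, int(len(examples) * sample_frac))
        sample = random.sample(examples, k)
        hypotheses.append(weak_learner(sample))
    def majority(x):
        votes = sum(h(x) for h in hypotheses)
        return int(2 * votes > len(hypotheses))
    return majority

def threshold_learner(sample):
    """A toy weak learner: pick the best 1-D threshold on the sample."""
    best_t, best_acc = 0, -1
    for t in range(11):
        acc = sum(int(x >= t) == y for x, y in sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: int(x >= t)

random.seed(0)
data = [(x, int(x >= 5)) for x in range(10)]  # labels: 1 iff x >= 5
h = boost_by_majority(threshold_learner, data)
```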