Results 1–10 of 139
Learning a Hidden Hypergraph, 2006
"... We consider the problem of learning a hypergraph using edge-detecting queries. In this model, the learner may query whether a set of vertices induces an edge of the hidden hypergraph or not. We show that ..."
Cited by 15 (2 self)
Attribute-efficient and nonadaptive learning of parities and DNF expressions
Journal of Machine Learning Research
"... We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over {0, 1}^n. We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from low number of errors, a long-standing open problem in coding theory. An algorithm is said to use membership queries (MQs) nonadaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple nonadaptive parity learning algorithm and a modification ..."
Cited by 8 (6 self)
Attribute-Efficient and Nonadaptive Learning of Parities and DNF Expressions, 2006
"... We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over {0, 1}^n. We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from low number of errors, a long-standing open problem in coding theory. This is the first evidence that attribute-efficient learning of a natural PAC learnable concept class can be computationally hard. An algorithm is said to use membership queries (MQs) nonadaptively if the points at which ..."
Cited by 3 (0 self)
Keeping Neural Networks Simple by Minimizing the Description Length of the Weights
"... Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the tradeoff between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error ..."
Cited by 164 (1 self)
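The noisy-weights idea in the snippet above can be illustrated with a minimal sketch (variable and function names are mine, not from the paper): for a linear model, adding zero-mean Gaussian noise to the weights raises the expected squared error, and the scheme adapts the noise level to trade that error against the amount of information the weights carry.

```python
import random

random.seed(0)

# Toy data from a known linear model y = w·x (illustrative only).
xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(100)]
w_true = [1.0, -2.0, 0.5]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

ys = [dot(w_true, x) for x in xs]  # noiseless targets

def expected_sq_error(w_mean, w_std, n_samples=500):
    """Monte-Carlo estimate of E[(w_noisy · x - y)^2] over the weight noise."""
    total = 0.0
    for _ in range(n_samples):
        # Sample noisy weights: w_i ~ N(w_mean_i, w_std^2).
        w_noisy = [m + w_std * random.gauss(0, 1) for m in w_mean]
        total += sum((dot(w_noisy, x) - y) ** 2
                     for x, y in zip(xs, ys)) / len(xs)
    return total / n_samples
```

With zero noise the expected error vanishes, and larger noise levels (i.e., fewer effective bits per weight) give larger expected error; the description-length penalty governs where on that curve the learner sits.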
OPTIMAL QUERY COMPLEXITY FOR RECONSTRUCTING HYPERGRAPHS, 2010
"... In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: Let G be a weighted hidden hypergraph of constant rank with n vertices and m hyperedges. For any m there exists a nonadaptive algorithm that ..."
Cited by 2 (0 self)
Learning a hidden graph using O(log n) queries per edge
In Conference on Learning Theory, 2004
"... We consider the problem of learning a general graph using edge-detecting queries. In this model, the learner may query whether a set of vertices induces an edge of the hidden graph. This model has been studied for particular classes of graphs by Kucherov and Grebinski [7] and Alon et al. [3], motivated by problems arising in genome sequencing. We give an adaptive deterministic algorithm that learns a general graph with n vertices and m edges using O(m log n) queries, which is tight up to a constant factor for classes of non-dense graphs. Allowing randomness, we give a 5-round Las Vegas algorithm ..."
Cited by 17 (4 self)
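The edge-detecting query model in the snippet above is easy to sketch (helper names are mine, not the paper's): the oracle only reveals whether a vertex set induces at least one edge, and a single edge inside a set known to induce one can be located by halving, the idea behind O(log n)-query-per-edge bounds.

```python
def make_oracle(hidden_edges):
    """Edge-detecting oracle: does a vertex set induce an edge?"""
    edge_set = {frozenset(e) for e in hidden_edges}
    return lambda vs: any(e <= set(vs) for e in edge_set)

def find_edge(query, S):
    """Return one edge induced by S; requires query(S) to be True."""
    S = list(S)
    if len(S) == 2:
        return tuple(S)
    S1, S2 = S[:len(S) // 2], S[len(S) // 2:]
    if query(S1):
        return find_edge(query, S1)
    if query(S2):
        return find_edge(query, S2)
    # Neither half induces an edge, so some edge crosses S1-S2:
    # binary-search an endpoint on each side.
    u = _halve(query, S1, S2)
    v = _halve(query, S2, [u])
    return (u, v)

def _halve(query, A, fixed):
    # Invariant: some edge has one endpoint in A and one in `fixed`.
    while len(A) > 1:
        half = A[:len(A) // 2]
        A = half if query(half + list(fixed)) else A[len(A) // 2:]
    return A[0]
```

For example, with hidden edges {0,5}, {2,7}, {3,9} on 10 vertices, `find_edge(query, range(10))` recovers one of the hidden edges using a logarithmic number of oracle calls rather than probing all pairs.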
Using hypergraph homomorphisms to guess three secrets, 2004
"... We present the first polynomial-time strategy for solving the problem of guessing three secrets as introduced by Chung, Graham and Leighton [6]. “Guessing three secrets” is a combinatorial search problem in which an adversary holds three n-bit secret strings, and a seeker attempts to learn as much ... of the equivalence classes of mutually-homomorphic three-uniform intersecting hypergraphs. We also show a reduction from certain instances of the problem of learning a hidden subgraph (as defined in Alon and Asodi [2]) to the guessing secrets problem. ..."
Cited by 1 (0 self)
Learning a Hidden Matching: Combinatorial Identification of Hidden Matchings with Applications to Whole Genome Sequencing, 2002
"... We consider the problem of learning a matching (i.e., a graph in which all vertices have degree 0 or 1) in a model where the only allowed operation is to query whether a set of vertices induces an edge. This is motivated by a problem that arises in molecular biology. In the deterministic nonadaptive ..."
Temporal BYY Learning for State Space Approach, Hidden Markov Model, and Blind Source Separation, 2000
"... Temporal BYY (TBYY) learning has been presented for modeling signals in a general state space approach, which provides not only a unified point of view on the Kalman filter, hidden Markov model (HMM), independent component analysis (ICA), and blind source separation (BSS) with extensions, but also further ..."
Cited by 29 (21 self)
Orthogonal Transformation of Output Principal Components For Improved Tolerance To Error
"... Preprocessing of data to be learned by a neural network is typically done to improve neural network performance. Output preprocessing is especially important since it directly affects the influence of error in the hidden layers on the error in the neural network output. Principal component ..."