Results 1 – 6 of 6
Efficient Learning of One-Variable Pattern Languages from Positive Data
, 1996
Abstract

Cited by 3 (3 self)
A pattern is a finite string of constant and variable symbols. The language generated by a pattern is the set of all strings of constant symbols which can be obtained from the pattern by substituting non-empty strings for the variables. Descriptive patterns are a key concept for the inductive inference of pattern languages. A pattern π is descriptive for a given sample if the sample is contained in the language L(π) generated by π and no other pattern having this property generates a proper subset of L(π). The best previously known algorithm for computing descriptive one-variable patterns requires time O(n² log n), where n is the size of the sample. We present a simpler and more efficient algorithm solving the same problem in time O(n log n). In addition, we give a parallel version of this algorithm that requires time O(log n) and O(n²/log n) processors on an EREW-PRAM. Previously, no parallel algorithm was known for this problem. Using a
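To make the non-erasing substitution semantics in this abstract concrete, here is a minimal brute-force membership test for one-variable pattern languages. This is only an illustration of the definition, not the paper's O(n log n) descriptive-pattern algorithm; the function name `matches_one_var` and the choice of `"x"` as the variable symbol are my own.

```python
def matches_one_var(pattern, s, var="x"):
    """Check whether s belongs to the language of a one-variable pattern.

    `pattern` is a string over constant symbols plus the single variable
    symbol `var`; the variable must be replaced by one fixed NON-empty
    string at every occurrence (non-erasing semantics, as in the abstract).
    """
    k = pattern.count(var)            # number of variable occurrences
    c = len(pattern) - k              # total length of the constant part
    if k == 0:
        return pattern == s
    # the substituted string w must satisfy c + k*|w| == len(s) and |w| >= 1
    if (len(s) - c) % k != 0:
        return False
    m = (len(s) - c) // k
    if m < 1:
        return False
    # read off w at the first variable occurrence, then verify globally
    i = pattern.index(var)
    w = s[i:i + m]
    return pattern.replace(var, w) == s
```

For example, `matches_one_var("axa", "aba")` holds with w = "b", while `matches_one_var("xx", "aba")` fails because no single non-empty w can produce an odd-length string.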
Parsimony hierarchies for inductive inference
 Journal of Symbolic Logic
Abstract

Cited by 3 (1 self)
Freivalds defined an acceptable-programming-system-independent criterion for learning programs for functions in which the final programs were required to be both correct and “nearly” minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonious final programs by use of criteria which require the final, correct programs to be “not-so-nearly” minimal size, e.g., to be within a lim-computable function of actual minimal size. It is shown that some parsimony in the final program is thereby retained, yet learning power strictly increases. Considered, then, are lim-computable functions as above but for which notations for constructive ordinals are used to bound the number of mind changes allowed regarding the output. This is a variant of an idea introduced by Freivalds and Smith. For this ordinal notation complexity bounded version of lim-computability, the power of
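The "mind change" idea behind lim-computability can be illustrated with a toy driver: a function F is lim-computable when some total approximation g(x, t) converges to F(x) after finitely many changes of its guess. The names `limit_value` and `guesses` below are my own; a finite run can of course only observe the current guess, not certify convergence.

```python
def limit_value(guesses, max_steps):
    """Run a total approximation `guesses(t)` for t = 0..max_steps-1,
    returning the last guess together with the number of mind changes
    observed along the way."""
    current = guesses(0)
    changes = 0
    for t in range(1, max_steps):
        g = guesses(t)
        if g != current:
            changes += 1          # the approximation changed its mind
            current = g
    return current, changes

# toy approximation: answers 0 until it "discovers" the value at t = 5,
# then stays on 42 forever -- one mind change, limit 42
approx = lambda t: 42 if t >= 5 else 0
```

Bounding `changes` by a notation for a constructive ordinal, rather than by a fixed number, is the refinement the abstract attributes to Freivalds and Smith.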
A Polynomial Time Learner for a Subclass of Regular Patterns
Abstract
Presented is an algorithm (for learning a subclass of erasing regular pattern languages) which can be made to run with arbitrarily high probability of success on extended regular languages generated by patterns π of the form x0 α1 x1 ... αm xm, for unknown m but known c, from a number of examples polynomial in m (and exponential in c), where x0, ..., xm are variables and where α1, ..., αm are each strings of terminals of length c. This assumes that the algorithm randomly draws samples under natural and plausible assumptions on the distribution. With the aim of finding a better algorithm, we also explore computer simulations of a heuristic.
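Membership in such an erasing regular pattern language is easy to decide, which helps clarify the object being learned (the garbled "ff"s in the extracted text are the terminal strings α1, ..., αm). Since every variable may be erased or take an arbitrary string, a word belongs to L(x0 α1 x1 ... αm xm) exactly when α1, ..., αm occur in it, in order, as non-overlapping substrings; a greedy scan decides this. The function name below is my own, and this is the membership semantics only, not the paper's probabilistic learner.

```python
def matches_regular_pattern(alphas, s):
    """Decide membership of s in the erasing regular pattern language
    generated by x0 a1 x1 ... am xm, where `alphas` = [a1, ..., am] are
    the terminal strings.  Greedily match each alpha as early as
    possible; this is correct because variables absorb everything in
    between (including nothing)."""
    pos = 0
    for a in alphas:
        i = s.find(a, pos)        # earliest occurrence at or after pos
        if i < 0:
            return False
        pos = i + len(a)          # next alpha must start after this one
    return True
```

The greedy choice is safe: taking the earliest occurrence of each αi only leaves more room for the remaining terminal strings.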
Abstract
Abstract
Alice and Bob want to know if two strings of length n are almost equal. That is, do they differ on at most a bits? Let 0 ≤ a ≤ n − 1. We show that any deterministic protocol, as well as any error-free quantum protocol (C∗ version), for this problem requires at least n − 2 bits of communication. We show the same bounds for the problem of determining if two strings differ in exactly a bits. We also prove a lower bound of n/2 − 1 for error-free Q∗ quantum protocols. Our results are obtained by lower-bounding the ranks of the appropriate matrices.
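The problem itself is trivial when both strings are in one place, which is what makes the lower bound interesting: Alice and Bob, each holding one string, can do essentially no better than sending a whole string across. A minimal statement of the predicate being decided (the function name is my own):

```python
def almost_equal(x, y, a):
    """Do two equal-length bit strings differ in at most `a` positions?
    This is the centralized computation; the cited lower bounds say any
    deterministic two-party protocol for it needs at least n - 2 bits
    of communication."""
    assert len(x) == len(y)
    hamming = sum(b1 != b2 for b1, b2 in zip(x, y))
    return hamming <= a
```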
Lecture notes on Knowledge-Based and Learning Systems by Maciej Liśkiewicz
Abstract
Patterns are a very natural way to define formal languages. Suppose you are interested in the language of all strings over the alphabet A = {0, 1} starting with 11, ending with 010, and containing the substring 01011 somewhere, but otherwise arbitrary. All strings in your language then follow the pattern π1 = 11 x0 01011 x1 010, provided you are willing to allow the variables x0, x1 to be substituted by any string over {0, 1}, including the empty one. As another example, consider the set of all strings having even length 2n such that the prefix of length n is identical to the suffix starting at position n + 1. In that case, the wanted language follows the pattern π2 = x0 x0. Before continuing with the learning of pattern languages, we provide another example showing how these languages may be applied to learn data structures. Quite often, one has huge bibliographies, i.e., lists of references. For example, let us look at the following ones.
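Both example languages can be checked mechanically, reading the extracted patterns as π1 = 11 x0 01011 x1 010 and π2 = x0 x0 with erasing substitutions over {0, 1}. A sketch (the names `PI1` and `in_L_pi2` are my own):

```python
import re

# L(pi1): 11, then anything, then 01011, then anything, then 010.
# With erasing variables over {0,1} this is exactly a regular expression.
PI1 = re.compile(r"^11[01]*01011[01]*010$")

def in_L_pi2(s):
    """s follows pi2 = x0 x0 iff s is a 'square': even length 2n with
    the length-n prefix equal to the suffix starting at position n+1."""
    n, r = divmod(len(s), 2)
    return r == 0 and s[:n] == s[n:]
```

Note the contrast between the two: L(π1) is regular, but the set of squares L(π2) is not (a standard pumping-lemma argument), which is one reason pattern languages are strictly more expressive than regular expressions.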