Results 1–10 of 2,584

A comparison of Bayesian methods for haplotype reconstruction from population genotype data. Am J Hum Genet, 2003. Cited by 557 (7 self).
"... In this report, we compare and contrast three previously published Bayesian methods for inferring haplotypes from genotype data in a population sample. We review the methods, emphasizing the differences between them in terms of both the models ("priors") they use and the computational ..."

A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotype phase. American Journal of Human Genetics, 2005. Cited by 408 (10 self).
"... We present a statistical model for patterns of genetic variation in samples of unrelated individuals from natural populations. This model is based on the idea that, over short regions, haplotypes in a population tend to cluster into groups of similar haplotypes. To capture the fact that, because of ... missing genotypes and estimating haplotypic phase. For imputing missing genotypes, methods based on this model are as accurate or more accurate than existing methods. For haplotype estimation, the point estimates are slightly less accurate than those from the best existing methods (e.g., for unrelated ..."

Image analogies. 2001. Cited by 455 (8 self).
"... Figure 1 An image analogy. Our problem is to compute a new “analogous” image B′ that relates to B′ in “the same way” as A′ relates to A. Here, A, A′, and B are inputs to our algorithm, and B′ is the output. The full-size images are shown in Figures 10 and 11. This paper describes a new framework for processing images by example, called “image analogies.” The framework involves two stages: a design phase, in which a pair of images, with one image purported to be a “filtered” version of the other, is presented as “training data”; and an application phase, in which the learned filter ..."

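The two-stage framework in the snippet can be caricatured in a few lines. This is a minimal, single-scale sketch in which "features" are raw pixel intensities and matching is exact nearest-neighbor search; the actual paper uses multiscale neighborhood features and approximate search, and the arrays below are toy data, not from the paper:

```python
import numpy as np

def analogy(A, Ap, B):
    """For each pixel of B, copy the A' value whose A pixel is closest
    in intensity -- a bare-bones caricature of "B' : B :: A' : A"."""
    flatA = A.ravel()
    # pairwise |B - A| distances; pick the best-matching A pixel per B pixel
    idx = np.abs(B.ravel()[:, None] - flatA[None, :]).argmin(axis=1)
    return Ap.ravel()[idx].reshape(B.shape)

A  = np.array([[0.0, 1.0]])
Ap = np.array([[1.0, 0.0]])       # the "filter" inverts intensity
B  = np.array([[0.0, 1.0, 0.9]])
Bp = analogy(A, Ap, B)            # applies the learned inversion to B
```

The design phase here is implicit (the A/A′ pair simply is the training data); the application phase is the nearest-neighbor lookup.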
Population structure and eigenanalysis. PLoS Genet 2(12): e190, DOI: 10.1371/journal.pgen.0020190, 2006. Cited by 263 (9 self).
"... Current methods for inferring population structure from genetic data do not provide formal significance tests for population differentiation. We discuss an approach to studying population structure (principal components analysis) that was first applied to genetic data by Cavalli-Sforza and colleague ..."

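At its core, the eigenanalysis approach the snippet alludes to is PCA on a standardized genotype matrix. A minimal sketch, assuming a samples-by-SNPs matrix of 0/1/2 allele counts; the function name and the two synthetic "populations" are illustrative, and the paper's formal significance tests are not reproduced here:

```python
import numpy as np

def genotype_pca(G, k=2):
    """Project samples onto the top-k principal components of a
    genotype matrix G (samples x SNPs, entries = 0/1/2 allele counts)."""
    G = np.asarray(G, dtype=float)
    p = G.mean(axis=0) / 2.0                   # per-SNP allele frequency
    sd = np.sqrt(p * (1 - p))
    keep = sd > 0                              # drop monomorphic SNPs
    X = (G[:, keep] - 2 * p[keep]) / sd[keep]  # center and scale columns
    cov = X @ X.T / X.shape[1]                 # sample-by-sample covariance
    w, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]            # take the top-k
    return V[:, order] * np.sqrt(w[order]), w[order]

# two synthetic groups differing strongly in allele frequencies
rng = np.random.default_rng(0)
A = rng.binomial(2, 0.1, size=(20, 500))
B = rng.binomial(2, 0.9, size=(20, 500))
coords, eigvals = genotype_pca(np.vstack([A, B]))
```

With this much differentiation, the first principal component cleanly separates the two groups, which is the qualitative behavior the entry describes.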
The induction of dynamical recognizers. Machine Learning, 1991. Cited by 225 (14 self).
"... A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network. This phase transition corresponds to the onset of the network's capacity for generalizing to arbitrary ..."

Efficient learning of sparse representations with an energy-based model. Advances in Neural Information Processing Systems (NIPS 2006), 2006. Cited by 219 (15 self).
"... We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying nonlinearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model ..."

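The two-phase procedure in the snippet can be sketched in miniature: gradient descent on the code vector to approach the minimum-energy code (phase 1), then a small parameter step on the encoder and decoder (phase 2). This is a toy version under loose assumptions: the paper's sparsifying nonlinearity is omitted, the dimensions and learning rates are arbitrary, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 32                       # input dim, overcomplete code dim
We = rng.normal(0, 0.1, (m, n))     # linear encoder
Wd = rng.normal(0, 0.1, (n, m))     # linear decoder

def energy(x, z):
    # reconstruction term + code-prediction term (equal weighting, for brevity)
    return np.sum((x - Wd @ z) ** 2) + np.sum((z - We @ x) ** 2)

def infer_code(x, steps=50, lr=0.05):
    """Phase 1: gradient descent on z toward the minimum-energy code."""
    z = We @ x
    for _ in range(steps):
        grad = -2 * Wd.T @ (x - Wd @ z) + 2 * (z - We @ x)
        z -= lr * grad
    return z

def learn_step(x, lr=0.01):
    """Phase 2: nudge encoder and decoder to lower the energy at the code."""
    global We, Wd
    z = infer_code(x)
    Wd += lr * np.outer(x - Wd @ z, z)   # shrink reconstruction error
    We += lr * np.outer(z - We @ x, x)   # make the encoder predict the code

x = rng.normal(size=n)
e0 = energy(x, We @ x)
for _ in range(20):
    learn_step(x)
e1 = energy(x, infer_code(x))       # energy drops as both phases proceed
```

Since the energy is quadratic in z here, phase 1 is convex and plain gradient descent suffices; the alternation is what gives the procedure its EM-like flavor.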
Reducing the Memory Complexity of Type-Inference Algorithms. 2002. Cited by 5 (0 self).
"... In the Java Virtual Machine, the bytecode verifier checks low-level security properties that ensure that the downloaded code cannot bypass the virtual machine's security mechanisms. One of the statically ensured properties is type safety. The type-inference phase is the overwhelming resour ..."

Phase Transitions within Grammatical Inference. Cited by 1 (1 self).
"... It is now well-known that the feasibility of inductive learning is ruled by statistical properties linking the empirical risk minimization principle and the “capacity” of the hypothesis space. The discovery, a few years ago, of a phase transition phenomenon in inductive logic programming proves that other fundamental characteristics of the learning problems may similarly affect the very possibility of learning under very general conditions. Our work examines the case of grammatical inference. We show that while there is no phase transition when considering the whole hypothesis space, there is a ..."

An online self-constructing neural fuzzy inference network and its applications. IEEE Trans. Fuzzy Syst., 1998. Cited by 92 (22 self).
"... A self-constructing neural fuzzy inference network (SONFIN) with online learning ability is proposed in this paper. The SONFIN is inherently a modified Takagi–Sugeno–Kang (TSK)-type fuzzy rule-based model possessing a neural network's learning ability. There are no rules initially in the SONFIN. The ..."