Results 1 - 10 of 608

A fast learning algorithm for deep belief nets

by Geoffrey E. Hinton, Simon Osindero - Neural Computation, 2006
"... We show how to use “complementary priors” to eliminate the explaining away effects that make inference difficult in densely-connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a ..."
Abstract - Cited by 970 (49 self)

Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence

by Charles G. Lord, Lee Ross - Journal of Personality and Social Psychology, 1979
"... People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and as a result to draw und ..."
Abstract - Cited by 477 (8 self)
punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations

Learning Bayesian belief networks: An approach based on the MDL principle

by Wai Lam, Fahiem Bacchus - Computational Intelligence, 1994
"... A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being lear ..."
Abstract - Cited by 254 (7 self)
learned. In particular, our method can learn unrestricted multiply-connected belief networks. Furthermore, unlike other approaches, our method allows us to trade off accuracy and complexity in the learned model. This is important since if the learned model is very complex (highly connected) it can

Sparse deep belief net model for visual area V2

by Chaitanya Ekanadham - Advances in Neural Information Processing Systems 20, 2008
"... Motivated in part by the hierarchical organization of the neocortex, a number of recently proposed algorithms have tried to learn hierarchical, or “deep,” structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed ..."
Abstract - Cited by 164 (19 self)
some intriguing hypotheses about V2 computations. This thesis is an extended version of an earlier paper by Honglak Lee, Chaitanya Ekanadham, and Andrew Ng titled “Sparse deep belief net model for visual area V2.”

Symbolic probabilistic inference in belief nets

by Ross D. Shachter, Bruce D'Ambrosio, Brendan A. Del Favero, 1989
Abstract - Cited by 78 (1 self)
Abstract not found

Learning Deep Architectures for AI

by Yoshua Bengio
"... Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as i ..."
Abstract - Cited by 183 (30 self)
, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable

in densely connected systems

by J. P. Neirotti, D. Saad, 2005
"... PACS. 89.70.+c – Information theory and communication theory. PACS. 75.10.Nr – Spin-glass and other random models. PACS. 64.60.Cn – Order-disorder transformations; statistical mechanics of model systems. Abstract. – An improved inference method for densely connected systems is presented. The approac ..."
Abstract

3-d object recognition with deep belief nets

by Vinod Nair, Geoffrey E. Hinton - Advances in Neural Information Processing Systems 22, 2009
"... We introduce a new type of top-level model for Deep Belief Nets and evaluate it on a 3D object recognition task. The top-level model is a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients. Performance is evaluated on the NORB d ..."
Abstract - Cited by 63 (8 self)

Structural extension to logistic regression: Discriminative parameter learning of belief net classifiers

by Russell Greiner, Xiaoyuan Su, Bin Shen, Wei Zhou - In Proceedings of the Eighteenth Annual National Conference on Artificial Intelligence (AAAI-02), 2002
"... Abstract. Bayesian belief nets (BNs) are often used for classification tasks — typically to return the most likely class label for each specified instance. Many BN-learners, however, attempt to find the BN that maximizes a different objective function — viz., likelihood, rather than classification a ..."
Abstract - Cited by 76 (8 self)

Pseudo Prior Belief Propagation for Densely Connected Discrete Graphs

by Jacob Goldberger, Amir Leshem
"... Abstract—This paper proposes a new algorithm for the linear least squares problem where the unknown variables are constrained to be in a finite set. The factor graph that corresponds to this problem is very loopy; in fact, it is a complete graph. Hence, applying the Belief Propagation (BP) algorith ..."
Abstract - Cited by 3 (2 self)
information on each variable. Next we integrate this information into a loopy Belief Propagation (BP) algorithm as a pseudo prior. We show that, unlike current paradigms, the Belief Propagation (BP) algorithm can be advantageous even for dense graphs with many short loops. The performance of the proposed

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University