Results 1 - 10 of 4,307
Probabilistic Theories of Causality
- in The Oxford Handbook of Causation
, 2009
"... This chapter provides an overview of a range of probabilistic theories of causality, including those of Reichenbach, Good and Suppes, and the contemporary causal net approach. It discusses two key problems for probabilistic accounts: counterexamples to these theories and their failure to account for ..."
Cited by 8 (7 self)
A Probabilistic Theory of Clustering
, 2004
"... clustering is typically considered a subjective process, which makes it problematic. For instance, how does one make statistical inferences based onclustering The matter is di#erent with pattern classi#cation, for which two fundamental characteristics can be stated: (1) the error of a classi#er c ..."
Cited by 13 (3 self)
... the error of a classifier can be estimated using "test data," and (2) a classifier can be learned using "training data." This paper presents a probabilistic theory of clustering including both learning (training) and error estimation (testing). The theory is based on operators on random labeled point ...
Probabilistic Inference Using Markov Chain Monte Carlo Methods
, 1993
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. R ..."
Cited by 736 (24 self)
for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture
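The review above surveys Markov chain Monte Carlo algorithms. The simplest member of that family, random-walk Metropolis, can be sketched as follows; this is generic textbook code under assumed names (`metropolis`, `log_p`), not the implementation described in the paper:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler (an illustrative sketch).

    log_p : unnormalized log-density of the target distribution
    x0    : starting point of the chain
    """
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        lp_new = log_p(proposal)
        # Accept with probability min(1, p(proposal) / p(x)).
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            x, lp = proposal, lp_new
        samples.append(x)
    return samples

# Target: a standard normal, log p(x) = -x^2 / 2 up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Successive draws are correlated, which is why the complex, high-dimensional distributions the abstract mentions need the longer chains and supporting techniques the review describes.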
Inferring Probabilistic Theories from
"... When formulating a theory based on observations influenced by noise or other sources of uncertainty, it becomes necessary to decide whether the pro-posed theory agrees with the data “well enough.” This paper presents a criterion for making this judgement. The criterion is based on a gambling scenari ..."
Probabilistic Theories of the Visual Cortex
"... THE VERY EARLY VISUAL SYSTEM This lecture first briefly reviews the structural organization of V1, the properties of simple cells, and divisive normalization. The lecture also illustrated principles such as sparsity, independence, and inverting generative models. A. Review: From Retina and LGN to V1 ..."
Light is captured in the retina, transmitted to the LGN, and then to area V1 of the visual cortex. Receptive field properties of neurons in the retina and LGN are generally believed to be modelled by symmetric center-surround cells, i.e. the Laplacian of a Gaussian filter, which looks like a Mexican hat. This may be an over-simplification (e.g., see Meister for an alternative viewpoint), but Yang Dan reports that it is possible to reconstruct the input image from the responses of neurons in the retina or LGN (which would seem to be impossible if the standard models were badly wrong). There is an expansion (by a factor between 80 and 400) as we move from the LGN to V1. This is not surprising, because V1 starts the hard problem of interpreting the image, while the retina and LGN perform the simpler tasks of capturing the image and transmitting it to the cortex – at least this is the ...
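The Laplacian-of-Gaussian ("Mexican hat") receptive-field model mentioned above can be written down directly. A minimal sketch, with the function name and sign convention chosen here for illustration (negating the filter swaps ON-center for OFF-center):

```python
import math

def laplacian_of_gaussian(x, y, sigma=1.0):
    """Laplacian-of-Gaussian ("Mexican hat") filter value at (x, y).

    The center and surround have opposite signs, with a zero crossing
    at radius r = sqrt(2) * sigma; the overall sign is a convention.
    """
    r2 = x * x + y * y
    s2 = sigma * sigma
    return (-(1.0 / (math.pi * s2 * s2))
            * (1.0 - r2 / (2.0 * s2))
            * math.exp(-r2 / (2.0 * s2)))

center = laplacian_of_gaussian(0.0, 0.0)    # negative under this convention
surround = laplacian_of_gaussian(2.0, 0.0)  # opposite sign: the surround
```

The antagonistic center/surround structure is what makes the cell respond to local contrast rather than uniform illumination.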
Generalized Probabilistic Theories [1]
"... on non-local correlations from the structure of the local state space ..."