Results 11–20 of 100
Modelling Activity Global Temporal Dependencies using Time Delayed Probabilistic Graphical Model
Abstract

Cited by 15 (2 self)
We present a novel approach for detecting global behaviour anomalies in multiple disjoint cameras by learning time-delayed dependencies between activities across camera views. Specifically, we propose to model multi-camera activities using a Time Delayed Probabilistic Graphical Model (TDPGM), with different nodes representing activities in different semantically decomposed regions from different camera views, and the directed links between nodes encoding causal relationships between the activities. A novel two-stage structure learning algorithm is formulated to learn globally optimised time-delayed dependencies. A new cumulative abnormality score is also introduced to replace the conventional log-likelihood score, yielding significantly more robust and reliable real-time anomaly detection. The effectiveness of the proposed approach is validated using a camera network installed at a busy underground station.
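The cumulative abnormality score itself is not specified in this abstract; a minimal CUSUM-style sketch of the general idea — accumulating how far each observation's negative log-likelihood exceeds a baseline, instead of thresholding each log-likelihood on its own — assuming per-observation log-likelihoods from some fitted model (the function name and threshold are illustrative, not from the paper):

```python
def cumulative_abnormality(log_likelihoods, threshold):
    """CUSUM-style accumulator: sum how far each observation's negative
    log-likelihood exceeds a baseline, resetting toward zero when the
    observations look normal again."""
    score = 0.0
    scores = []
    for ll in log_likelihoods:
        # excess surprise over the baseline; normal observations pull back to 0
        score = max(0.0, score + (-ll - threshold))
        scores.append(score)
    return scores

normal = [-1.0, -1.2, -0.9]        # likely observations under the model
abnormal = [-4.0, -4.5, -5.0]      # persistently unlikely observations
scores = cumulative_abnormality(normal + abnormal, threshold=2.0)
print(scores)  # [0.0, 0.0, 0.0, 2.0, 4.5, 7.5]
```

The point of accumulation is that a single noisy frame stays near zero, while a sustained anomaly grows without bound, which is what makes such a score more robust than an instantaneous log-likelihood test.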
BNT structure learning package: documentation and experiments
 Technical Report (FRE CNRS 2645), Laboratoire PSI, Université et INSA de Rouen
, 2004
Abstract

Cited by 15 (1 self)
Bayesian networks are a formalism for probabilistic reasoning that is increasingly used for classification tasks in data mining. In some situations the network structure is given by an expert; otherwise, retrieving it from a database is an NP-hard problem, notably because of the complexity of the search space. In the last decade, many methods have been introduced to learn the network structure automatically, either by simplifying the search space (augmented naive Bayes, K2) or by using a heuristic within it (greedy search). Most of these methods deal with completely observed data, but some can handle incomplete data (SEM, MWST-EM). The Bayes Net Toolbox introduced by [Murphy, 2001a] for Matlab allows us to use and learn Bayesian networks, but it is not state of the art for structural learning; that is why we propose this package.
Learning Bayesian Network Classifiers: Searching . . .
, 2005
Abstract

Cited by 10 (4 self)
There is a commonly held opinion that algorithms for learning unrestricted types of Bayesian networks, especially those based on the score+search paradigm, are not suitable for building competitive Bayesian network-based classifiers. Several specialized algorithms that carry out the search over different types of directed acyclic graph (DAG) topologies have since been developed, most of them being extensions (using augmenting arcs) or modifications of the basic naive Bayes topology. In this paper, we present a new algorithm to induce classifiers based on Bayesian networks which obtains excellent results even when standard scoring functions are used. The method performs a simple local search in a space unlike that of unrestricted or augmented DAGs. Our search space consists of a type of partially directed acyclic graph (PDAG) which combines two concepts of DAG equivalence: classification equivalence and independence equivalence. The results of exhaustive experimentation indicate that the proposed method can compete with state-of-the-art algorithms for classification.
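The score+search paradigm mentioned in the abstract evaluates candidate structures with a decomposable score such as BIC. A toy sketch over binary variables, comparing an empty network against one with a single edge (this illustrates generic scoring only, not the paper's PDAG search space):

```python
import math
from collections import Counter

def bic(data, parents_of):
    """BIC of a discrete Bayesian network over binary variables:
    sum of per-node log-likelihoods minus a dimension penalty."""
    n = len(data)
    total = 0.0
    for node, parents in parents_of.items():
        # counts of (parent configuration, node value) and parent configuration
        joint = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
        marg = Counter(tuple(row[p] for p in parents) for row in data)
        ll = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
        total += ll - 0.5 * (2 ** len(parents)) * math.log(n)
    return total

# column 0 (X) strongly influences column 1 (Y)
data = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
empty = {0: (), 1: ()}
x_to_y = {0: (), 1: (0,)}
print(bic(data, x_to_y) > bic(data, empty))  # True: the edge improves the score
```

A score+search learner repeatedly tries single-edge additions, removals, or reversals and keeps the change that improves such a score the most.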
Finding Optimal Bayesian Network Given a Super-Structure
Abstract

Cited by 8 (0 self)
Classical approaches to learning Bayesian network structure from data have disadvantages in terms of complexity and the accuracy of their results. However, a recent empirical study has shown that a hybrid algorithm sensibly improves accuracy and speed: it learns a skeleton with an independence test (IT) approach and constrains the directed acyclic graphs (DAGs) considered during the search-and-score phase. Following this idea, we formalize the structural constraint by introducing the concept of super-structure S, an undirected graph that restricts the search to networks whose skeleton is a subgraph of S. We develop a super-structure constrained optimal search (COS) whose time complexity is upper bounded by O(γ_m^n), where γ_m < 2 depends on the maximal degree m of S. Empirically, complexity depends on the average degree m̃, and sparse structures allow larger graphs to be handled. Our algorithm is faster than an optimal search by several orders of magnitude and even finds more accurate results when given a sound super-structure. In practice, S can be approximated by IT approaches; the significance level of the tests controls its sparseness, enabling a trade-off between speed and accuracy. For incomplete super-structures, a greedily post-processed version (COS+) still significantly outperforms other heuristic searches. Keywords: Bayesian networks, structure learning, optimal search, super-structure, connected subset
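The super-structure constraint can be made concrete: during the search, a node's candidate parent sets are enumerated only among its neighbours in the undirected graph S. A minimal sketch (the function name and `max_parents` cap are illustrative, not from the paper):

```python
from itertools import combinations

def candidate_parent_sets(node, superstructure, max_parents=2):
    """Enumerate parent sets allowed by the super-structure S: only
    neighbours of `node` in the undirected graph may become its parents."""
    neighbours = sorted(superstructure.get(node, set()))
    sets = [()]
    for k in range(1, min(max_parents, len(neighbours)) + 1):
        sets.extend(combinations(neighbours, k))
    return sets

# S is the chain A - B - C, so C can never be a parent of A under this S
S = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(candidate_parent_sets("A", S))  # [(), ('B',)]
print(candidate_parent_sets("B", S))  # [(), ('A',), ('C',), ('A', 'C')]
```

This is what makes the search tractable: the number of parent sets per node depends on its degree in S rather than on the total number of variables.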
A Kernel-based Causal Learning Algorithm
Abstract

Cited by 7 (4 self)
We describe a causal learning method which measures the strength of statistical dependences in terms of the Hilbert-Schmidt norm of kernel-based cross-covariance operators. Following the common faithfulness assumption of constraint-based causal learning, our approach assumes that a variable Z is likely to be a common effect of X and Y if conditioning on Z increases the dependence between X and Y. Based on this assumption, we collect “votes” for hypothetical causal directions and orient the edges by the majority principle. In most experiments with known causal structures, our method provided plausible results and outperformed the conventional constraint-based PC algorithm.
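The voting criterion in the abstract — Z looks like a common effect of X and Y when conditioning on Z strengthens the X–Y dependence — can be sketched with plain partial correlation standing in for the paper's HSIC-based measure (a simplification; all names here are illustrative):

```python
import math, random

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def partial_corr(x, y, z):
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def collider_vote(x, y, z):
    """Vote that Z is a common effect of X and Y when conditioning on Z
    strengthens the X-Y dependence (the faithfulness-style heuristic)."""
    return abs(partial_corr(x, y, z)) > abs(corr(x, y))

random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [random.gauss(0, 1) for _ in range(2000)]
z = [a + b + random.gauss(0, 0.3) for a, b in zip(x, y)]  # Z = X + Y: a collider
print(collider_vote(x, y, z))  # True: conditioning on the collider induces dependence
```

With independent X and Y, conditioning on their sum makes them strongly (negatively) dependent, which is exactly the signature the vote looks for; the full method tallies such votes over many triples and orients edges by majority.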
A comparison of novel and state-of-the-art polynomial Bayesian network learning algorithms
 Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI)
, 2005
Abstract

Cited by 7 (4 self)
Learning the most probable a posteriori Bayesian network from data has been shown to be an NP-hard problem, and typical state-of-the-art algorithms are exponential in the worst case. However, an important open problem in the field is to identify the least restrictive set of assumptions, and corresponding algorithms, under which learning the optimal network becomes polynomial. In this paper, we present a technique for learning the skeleton of a Bayesian network, called Polynomial Max-Min Skeleton (PMMS), and compare it with Three Phase Dependency Analysis, another state-of-the-art polynomial algorithm. This analysis considers both the theoretical and empirical differences between the two algorithms, and demonstrates PMMS’s advantages in both respects. When extended with a greedy hill-climbing Bayesian-scoring search to orient the edges, the novel algorithm proved more time-efficient, scalable, and accurate in quality of reconstruction than most state-of-the-art Bayesian network learning algorithms. The results show promise of the existence of polynomial algorithms that are provably correct under minimal distributional assumptions.
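The "Max-Min" idea behind skeleton learners of this family is a selection step: for each candidate neighbour, take its weakest association with the target over all conditioning subsets found so far, then admit the candidate whose weakest association is strongest. A sketch of one such step, with a toy association table standing in for a real conditional-independence test (all names are illustrative):

```python
from itertools import chain, combinations

def max_min_step(target, candidates, cpc, assoc):
    """One Max-Min selection step over the current candidate
    parents-and-children set (CPC)."""
    subsets = list(chain.from_iterable(
        combinations(cpc, k) for k in range(len(cpc) + 1)))
    def min_assoc(x):
        # weakest association of x with the target over all conditioning sets
        return min(assoc(target, x, s) for s in subsets)
    best = max(candidates, key=min_assoc)
    return best, min_assoc(best)

table = {
    ("T", "A", ()): 0.9,  ("T", "A", ("C",)): 0.8,   # A stays associated given C
    ("T", "B", ()): 0.95, ("T", "B", ("C",)): 0.1,   # B's association vanishes given C
}
assoc = lambda t, x, s: table[(t, x, s)]
best, strength = max_min_step("T", ["A", "B"], ["C"], assoc)
print(best, strength)  # A 0.8
```

B has the strongest marginal association but is screened off by C, so the max-min rule correctly prefers A; this conservatism is what keeps false neighbours out of the skeleton.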
Semi-Supervised Learning for Facial Expression Recognition
, 2003
Abstract

Cited by 6 (1 self)
Automatic classification by machines is one of the basic tasks required in any pattern recognition and human-computer interaction application. In this paper, we discuss training probabilistic classifiers with labeled and unlabeled data. We provide an analysis that shows under what conditions unlabeled data can be used in learning to improve classification performance. We discuss the implications of this analysis for a specific type of probabilistic classifier, the Bayesian network, and propose a structure learning algorithm that can utilize unlabeled data to improve classification. Finally, we show how the resulting algorithms are successfully employed in a facial expression recognition application.
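The standard way a probabilistic classifier absorbs unlabeled data is EM: labeled points keep their labels, while unlabeled points contribute fractionally through posterior class responsibilities. A minimal one-step sketch for a two-class 1-D Gaussian classifier (this illustrates the general mechanism, not the paper's structure learning algorithm):

```python
import math

def em_step(labeled, unlabeled, means, var=1.0):
    """One EM step: E-step computes class responsibilities for unlabeled
    points; M-step re-estimates class means from hard and soft counts."""
    def pdf(x, m):
        return math.exp(-(x - m) ** 2 / (2 * var))
    # E-step: responsibility of class 1 for each unlabeled point
    resp = [pdf(x, means[1]) / (pdf(x, means[0]) + pdf(x, means[1]))
            for x in unlabeled]
    # M-step: weighted mean per class (labels count 0/1, unlabeled count r)
    new_means = []
    for c in (0, 1):
        weights = [1.0 if y == c else 0.0 for _, y in labeled] + \
                  [r if c == 1 else 1.0 - r for r in resp]
        points = [x for x, _ in labeled] + unlabeled
        new_means.append(sum(w * x for w, x in zip(weights, points)) / sum(weights))
    return new_means

labeled = [(-2.0, 0), (2.0, 1)]        # one labeled point per class
unlabeled = [-1.8, -2.2, 1.9, 2.1]     # unlabeled points near each cluster
m0, m1 = em_step(labeled, unlabeled, means=[-1.0, 1.0])
print(m0, m1)  # both means move toward the true clusters at -2 and +2
```

The paper's analysis concerns when such soft counts help rather than hurt: if the model structure is wrong, the unlabeled data can pull the parameters toward a worse classifier, which is what motivates structure learning that accounts for unlabeled data.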
A Skeleton-Based Approach to Learning Bayesian Networks From Data
 of Lecture
, 2003
Abstract

Cited by 6 (1 self)
Various algorithms for learning Bayesian networks from data have been proposed to date. In this paper, we adopt a novel approach that combines the main advantages of these algorithms yet avoids their difficulties. In our approach, first an undirected graph, termed the skeleton, is constructed from the data using zero- and first-order dependence tests. Then, a search algorithm is employed that builds upon a quality measure to find the best network in the search space defined by the skeleton. To corroborate the feasibility of our approach, we present experimental results obtained on various datasets generated from real-world networks. Within this experimental setting, we further study the reduction of the search space that is achieved by the skeleton.
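Zero- and first-order dependence tests mean: keep an edge only if the two variables look dependent marginally and when conditioning on each single other variable. A sketch using (partial) correlation as the test statistic for Gaussian data (threshold and names are illustrative; the paper's tests may differ):

```python
import math, random

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((x - mb) ** 2 for x in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def partial_corr(a, b, c):
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / math.sqrt((1 - rac ** 2) * (1 - rbc ** 2))

def skeleton(data, threshold=0.1):
    """Keep edge (i, j) only if i and j look dependent marginally
    (zero-order test) and given every single other variable (first-order)."""
    cols = list(data)
    edges = set()
    for i in cols:
        for j in cols:
            if i >= j:
                continue
            if abs(corr(data[i], data[j])) < threshold:
                continue  # zero-order independence: drop the edge
            if all(abs(partial_corr(data[i], data[j], data[k])) >= threshold
                   for k in cols if k not in (i, j)):
                edges.add((i, j))
    return edges

random.seed(1)
x = [random.gauss(0, 1) for _ in range(3000)]
z = [a + random.gauss(0, 0.5) for a in x]      # chain x -> z -> y
y = [b + random.gauss(0, 0.5) for b in z]
edges = skeleton({"x": x, "y": y, "z": z})
print(edges)  # x-y is removed because x is independent of y given z
```

Restricting the subsequent score-based search to edges in this skeleton is what shrinks the search space the abstract measures.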
High-Dimensional Gaussian Graphical Model Selection: Walk Summability and Local Separation Criterion
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2012
Abstract

Cited by 5 (2 self)
We consider the problem of high-dimensional Gaussian graphical model selection. We identify a set of graphs for which an efficient estimation algorithm exists, based on thresholding of empirical conditional covariances. Under a set of transparent conditions, we establish structural consistency (or sparsistency) for the proposed algorithm when the number of samples n = Ω(J_min^{-2} log p), where p is the number of variables and J_min is the minimum (absolute) edge potential of the graphical model. The sufficient conditions for sparsistency are based on the notion of walk-summability of the model and the presence of sparse local vertex separators in the underlying graph. We also derive novel non-asymptotic necessary conditions on the number of samples required for sparsistency.
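Thresholding conditional covariances can be sketched at the population level: declare an edge (i, j) only if the conditional covariance stays above a threshold for every small conditioning set. A minimal version with single-variable conditioners, using Σ_{ij|k} = Σ_ij − Σ_ik Σ_kj / Σ_kk (the paper's local-separator machinery and finite-sample guarantees are omitted):

```python
def cond_cov(S, i, j, k=None):
    """Conditional covariance for a single conditioning variable:
    Sigma_{ij|k} = Sigma_ij - Sigma_ik * Sigma_kj / Sigma_kk."""
    if k is None:
        return S[i][j]
    return S[i][j] - S[i][k] * S[k][j] / S[k][k]

def select_edges(S, xi):
    """Declare edge (i, j) when |conditional covariance| exceeds the
    threshold xi for the empty set and every single-variable conditioner."""
    p = len(S)
    edges = set()
    for i in range(p):
        for j in range(i + 1, p):
            conds = [None] + [k for k in range(p) if k not in (i, j)]
            if min(abs(cond_cov(S, i, j, k)) for k in conds) > xi:
                edges.add((i, j))
    return edges

# covariance matrix of a Gaussian Markov chain 0 - 1 - 2
Sigma = [[1.0, 0.5, 0.25],
         [0.5, 1.0, 0.5],
         [0.25, 0.5, 1.0]]
print(select_edges(Sigma, xi=0.1))  # {(0, 1), (1, 2)}: conditioning on 1 kills 0-2
```

In the paper's regime, S is the empirical covariance and xi scales with the sample size, which is where the n = Ω(J_min^{-2} log p) requirement enters.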