Results 1–7 of 7
Structured machine learning: the next ten years
, 2008
Abstract

Cited by 21 (2 self)
The field of inductive logic programming (ILP) has made steady progress, since the first ILP workshop in 1991, based on a balance of developments in theory, implementations and applications. More recently there has been an increased emphasis on Probabilistic ILP and the related fields of Statistical Relational Learning (SRL) and Structured Prediction. The goal of the current paper is to consider these emerging trends and chart out the strategic directions and open problems for the broader area of structured machine learning for the next 10 years.
A simple-transition model for relational sequences
 In IJCAI-05
, 2005
Abstract

Cited by 9 (1 self)
We use “nearly sound” logical constraints to infer hidden states of relational processes. We introduce a simple-transition cost model, which is parameterized by weighted constraints and a state-transition cost. Inference for this model, i.e. finding a minimum-cost state sequence, reduces to a single-state minimization (SSM) problem. For relational Horn constraints, we give a practical approach to SSM based on logical reasoning and bounded search. We present a learning method that discovers relational constraints using CLAUDIEN [De Raedt and Dehaspe, 1997] and then tunes their weights using perceptron updates. Experiments in relational video interpretation show that our learned models improve on a variety of competitors.
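As a rough illustration of the weight-tuning step this abstract mentions, a perceptron update over constraint-violation counts can be sketched as follows. This is not the paper's code: the function name, the dict-based interface, and the learning-rate parameter are illustrative assumptions; only the idea (shift penalty weights toward the labeled sequence) comes from the abstract.

```python
# Hedged sketch of a perceptron-style update for weighted logical
# constraints: each constraint's feature is its violation count in a
# state sequence, and its weight is the penalty charged per violation.

def perceptron_update(weights, gold_counts, pred_counts, lr=1.0):
    """Shift constraint weights toward the gold (labeled) sequence.

    weights:      {constraint: penalty weight}
    gold_counts:  {constraint: violations in the labeled sequence}
    pred_counts:  {constraint: violations in the model's best sequence}

    Constraints violated more by the prediction than by the gold
    labeling become more expensive, and vice versa, so the next
    minimum-cost inference is pushed toward the gold sequence.
    """
    for c in weights:
        weights[c] += lr * (pred_counts.get(c, 0) - gold_counts.get(c, 0))
    return weights
```

For example, a constraint the prediction violates twice but the gold sequence never violates gains weight, while one the gold sequence violates more often than the prediction loses weight.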
A Penalty-Logic Simple-Transition Model for Structured Sequences
Abstract

Cited by 5 (2 self)
We study the problem of learning to infer hidden state sequences of processes whose states and observations are propositionally or relationally factored. Unfortunately, standard exact inference techniques such as Viterbi and graphical model inference exhibit exponential complexity for these processes. The main motivation behind our work is to identify a restricted space of models which facilitate efficient inference, yet are expressive enough to remain useful in many applications. In particular, we present the penalty-logic simple-transition model, which utilizes a very simple transition structure where the transition cost between any two states is constant. While not appropriate for all complex processes, we argue that it is often rich enough in many applications of interest, and when it is applicable there can be inference and learning advantages compared to more general models. In particular, we show that sequential inference for this model, that is, finding a minimum-cost state sequence, efficiently reduces to a single-state minimization (SSM) problem. We then show how to define atemporal cost models in terms of penalty logic, or weighted logical constraints, and how to use this representation for practically efficient SSM computation. We present a method for learning the weights of our model from labeled training data based on perceptron updates. Finally, we give experiments in both propositional and relational video-interpretation domains showing advantages compared to more general models.
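The efficiency claim in this abstract can be made concrete with a small sketch (not the paper's code; the cost representation and function name are assumptions): when every state change costs the same constant c, the Viterbi recursion only needs each step's globally cheapest predecessor, so the usual |S|² transition scan per step drops to O(|S|).

```python
# Sketch of minimum-cost decoding under a simple-transition model:
# f[t][s] is the atemporal (observation/penalty) cost of state s at
# time t, and any change of state costs a constant c.

def ssm_viterbi(f, c):
    """Return (best state sequence, its total cost) in O(T * |S|)."""
    V = dict(f[0])                        # V[s] = best cost ending in s
    back = []                             # per-step backpointers
    for t in range(1, len(f)):
        m_state = min(V, key=V.get)       # globally cheapest predecessor
        m = V[m_state]
        newV, ptr = {}, {}
        for s, cost in f[t].items():
            stay = V.get(s, float("inf"))  # remain in the same state
            switch = m + c                 # jump from the global best
            if stay <= switch:
                newV[s], ptr[s] = cost + stay, s
            else:
                newV[s], ptr[s] = cost + switch, m_state
        back.append(ptr)
        V = newV
    # trace back the optimal sequence
    s = min(V, key=V.get)
    seq = [s]
    for ptr in reversed(back):
        s = ptr[s]
        seq.append(s)
    seq.reverse()
    return seq, min(V.values())
```

Because only the per-state "stay" cost and one global minimum are consulted at each step, the decoder never enumerates state pairs, which is the advantage over general transition matrices that the abstract points to.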
A probabilistic relational model for characterizing situations in dynamic multiagent systems
 in Proc. of the 31st Annual Conference of the German Classification Society on Data Analysis, Machine Learning, and Applications
, 2007
Abstract

Cited by 1 (1 self)
Artificial systems with a high degree of autonomy require reliable semantic information about the context they operate in. However, state interpretation is a difficult task. Interpretations may depend on a history of states, and there may be more than one valid interpretation. We propose a model for spatiotemporal situations using hidden Markov models based on relational state descriptions, which are extracted from the estimated state of an underlying dynamic system. Our model covers concurrent situations, scenarios with multiple agents, and situations of varying durations. In this work we apply our model to the concrete task of traffic analysis.
Learning the Behavior Model of a Robot
Abstract

Cited by 1 (0 self)
Complex artifacts are designed today from well-specified and well-modeled components. But most often, the models of these components cannot be composed into a global functional model of the artifact. A significant observation, modeling and identification effort is required to get such a global model, which is needed in order to better understand, control and improve the designed artifact. Robotics provides a good illustration of this need. Autonomous robots are able to achieve more and more complex tasks, relying on more advanced sensorimotor functions. To better understand their behavior and improve their performance, it becomes necessary but more difficult to characterize and to model, at the global level, how robots behave in a given environment. Low-level models of sensors, actuators and controllers cannot be easily combined into a behavior model. Sometimes high-level models of the operators used for planning are also available, but generally they are too coarse to represent the actual robot behavior. We propose here a general framework for learning, from observation data, the behavior model of a robot performing a given task. The behavior is modeled as a Dynamic Bayesian Network, a convenient structured stochastic representation. We show how such a probabilistic model can be learned and how it can be used to improve, online, the robot behavior with respect to a specific environment and user preferences. The framework and algorithms are detailed; they are substantiated by experimental results for autonomous navigation tasks.
Compressed Inference for Probabilistic Sequential Models
, 2011
Abstract
Hidden Markov models (HMMs) and conditional random fields (CRFs) are two popular techniques for modeling sequential data. Inference algorithms designed over CRFs and HMMs allow estimation of the state sequence given the observations. In several applications, estimation of the state sequence is not the end goal; instead the goal is to compute some function of it. In such scenarios, estimating the state sequence by conventional inference techniques, followed by computing the functional mapping from the estimate is not necessarily optimal. A more formal approach is to directly infer the final outcome from the observations. In particular, we consider the specific instantiation of the problem where the goal is to find the state trajectories without exact transition points and derive a novel polynomial time inference algorithm that outperforms vanilla inference techniques. We show that this particular problem arises commonly in many disparate applications and present experiments on three of them: (1) Toy robot tracking; (2) Single stroke character recognition; (3) Handwritten word recognition.
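The two-step pipeline this abstract argues against can be sketched briefly (an illustration, not the paper's code): first estimate a full state sequence by conventional inference, then collapse consecutive repeats to obtain the state trajectory without its exact transition points. The paper's point is that the most probable full sequence need not collapse to the most probable trajectory, so inferring the trajectory directly can do better.

```python
# Baseline functional mapping assumed here: "trajectory without exact
# transition points" is read as the order of distinct visited states,
# obtained by run-length-collapsing a decoded state sequence.
from itertools import groupby

def collapse_runs(state_sequence):
    """Drop repeat lengths and transition points, keeping visit order."""
    return [state for state, _ in groupby(state_sequence)]
```

For example, a Viterbi estimate `['a', 'a', 'b', 'b', 'b', 'a']` collapses to the trajectory `['a', 'b', 'a']`; direct inference would score candidate trajectories like this one against the observations without committing to transition times.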
Learning Models and Formulas of a Temporal Event Logic
, 2004