Results 1–10 of 1,722,126
A Symbolic Out-of-Core Solution Method for Markov Models
In Proc. Workshop on Parallel and Distributed Model Checking (PDMC'02), volume 68.4 of Electronic Notes in Theoretical Computer Science, 2002
"... Despite considerable effort, the state-space explosion problem remains an issue in the analysis of Markov models. Given structure, symbolic representations can result in very compact encoding of the models. However, a major obstacle for symbolic methods is the need to store the probability vector(s) e ..."
Cited by 14 (11 self)
An Efficient Symbolic Out-of-Core Solution Method for Markov Models
2003
"... In recent years, disk-based approaches to the analysis of Markov models have proved to be an effective method of combating the state-space explosion problem. Coupled with parallel and symbolic techniques, disk-based methods have demonstrated impressive performance for numerical solution. In an earli ..."
Cited by 1 (1 self)
In an earlier paper, we presented a novel, symbolic out-of-core algorithm which used MTBDD-based data structures for matrix storage in RAM and disk-based storage for solution vectors. This extended the size of models...
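The scheme this entry describes (matrix kept in RAM, solution vector streamed from disk during iterative numerical solution) can be illustrated very loosely with a dense stand-in. The paper's method stores the matrix symbolically as an MTBDD; here a plain NumPy array substitutes, and every name below is an illustrative assumption, not taken from the paper.

```python
# Loose sketch of the out-of-core idea: the transition matrix stays in
# memory while the solution vector lives on disk (np.memmap) and is
# read and updated in fixed-size blocks each iteration.
import os
import tempfile
import numpy as np

def power_iteration_ooc(P, steps=50, block=1):
    """Iterate x <- x @ P with x stored on disk as a memmap."""
    n = P.shape[0]
    path = os.path.join(tempfile.mkdtemp(), "vec.dat")
    x = np.memmap(path, dtype="float64", mode="w+", shape=(n,))
    x[:] = 1.0 / n                       # uniform initial distribution
    for _ in range(steps):
        y = np.zeros(n)
        for lo in range(0, n, block):    # stream the vector block by block
            hi = min(lo + block, n)
            y += x[lo:hi] @ P[lo:hi, :]  # this block's contribution
        x[:] = y                         # write the updated vector back
        x.flush()
    return np.array(x)

# Two-state chain whose stationary distribution is (0.6, 0.4).
P = np.array([[0.90, 0.10],
              [0.15, 0.85]])
print(power_iteration_ooc(P))
```

A real implementation would, of course, keep the matrix symbolic and batch disk I/O far more carefully; the point of the sketch is only that the vector never needs to fit in RAM as a whole.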
Maximum entropy Markov models for information extraction and segmentation
2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Cited by 554 (18 self)
Out-of-core problems
2005
"... "The only way to discover the limits of the possible is to go beyond them into the impossible." [12] Arthur C. Clarke. In this paper we present out-of-core (OOC) problems and some issues connected with them. As OOC is a very wide area, we concentrate on only some of these problems. The main part of the paper ..."
An introduction to hidden Markov models
IEEE ASSP Magazine, 1986
"... The basic theory of Markov chains has been known to ..."
Cited by 1110 (2 self)
Markov Random Field Models in Computer Vision
1994
"... A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Cited by 515 (18 self)
Symbolic Model Checking for Real-time Systems
Information and Computation, 1992
"... We describe finite-state programs over real-numbered time in a guarded-command language with real-valued clocks or, equivalently, as finite automata with real-valued clocks. Model checking answers the question which states of a real-time program satisfy a branching-time specification (given in an ..."
Cited by 574 (50 self)
Symbolic Model Checking without BDDs
1999
"... Symbolic Model Checking [3, 14] has proven to be a powerful technique for the verification of reactive systems. BDDs [2] have traditionally been used as a symbolic representation of the system. In this paper we show how Boolean decision procedures, like Stålmarck's Method [16] or the Davis ..."
Cited by 910 (74 self)
The Symbol Grounding Problem
1990
"... There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the "symbol grounding problem": How can the semantic interpretation of a formal symbol system be m ..."
Cited by 1072 (18 self)
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Cited by 641 (16 self)
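The training idea this entry names (decode each example with Viterbi, then make a simple additive weight update toward the gold sequence and away from the prediction) can be sketched minimally as follows. The tiny tag set, feature templates, and all function names are illustrative assumptions, not taken from the paper; real taggers also use transition features and weight averaging.

```python
# Minimal sketch of perceptron-style training for a sequence tagger:
# Viterbi decoding of each training example plus additive updates.
from collections import defaultdict

TAGS = ["N", "V"]

def features(words, i, tag):
    """Toy local features: a (word, tag) pair and a tag-identity feature."""
    return ["w=%s,t=%s" % (words[i], tag), "t=%s" % tag]

def score(weights, words, i, tag):
    return sum(weights[f] for f in features(words, i, tag))

def viterbi(weights, words):
    """Best tag sequence under the current weights. With no transition
    features in this sketch, the argmax factorises per position."""
    return [max(TAGS, key=lambda t: score(weights, words, i, t))
            for i in range(len(words))]

def train(examples, epochs=5):
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in examples:
            pred = viterbi(weights, words)
            if pred != gold:
                # Additive update: reward gold features, penalise predicted.
                for i in range(len(words)):
                    for f in features(words, i, gold[i]):
                        weights[f] += 1.0
                    for f in features(words, i, pred[i]):
                        weights[f] -= 1.0
    return weights

data = [(["dogs", "run"], ["N", "V"]), (["cats", "sleep"], ["N", "V"])]
w = train(data)
print(viterbi(w, ["dogs", "sleep"]))  # prints ['N', 'V']
```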