Principles and Methods of Testing Finite State Machines: A Survey. Proceedings of the IEEE, 1996.
Cited by 250 (13 self).
Abstract:
With advanced computer technology, systems are getting larger to fulfill more complicated tasks; however, they are also becoming less reliable. Consequently, testing is an indispensable part of system design and implementation; yet it has proved to be a formidable task for complex systems. This motivates the study of testing finite state machines to ensure the correct functioning of systems and to discover aspects of their behavior. A finite state machine contains a finite number of states and produces outputs on state transitions after receiving inputs. Finite state machines are widely used to model systems in diverse areas, including sequential circuits, certain types of programs, and, more recently, communication protocols. In a testing problem we have a machine about which we lack some information; we would like to deduce this information by providing a sequence of inputs to the machine and observing the outputs produced. Because of its practical importance and theoretical interest, the problem of testing finite state machines has been studied in different areas and at various times. The earliest published literature on this topic dates back to the 1950s. Activities in the 1960s and early 1970s were motivated mainly by automata theory and sequential circuit testing. The area seemed to have mostly died down until a few years ago, when the testing problem was resurrected and is now being studied anew due to its applications to conformance testing of communication protocols. While some old problems which had been open for decades were resolved recently, new concepts and more intriguing problems from new applications emerge. We review the fundamental problems in testing finite state machines and techniques for solving these problems, tracing progress in the area from its inception to the present state of the art. In addition, we discuss extensions of finite state machines and some other topics related to testing.
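The state-identification setting described above can be illustrated with a small example: a Mealy machine whose initial state we try to deduce from input-output behavior. The three-state machine below is hypothetical, and the brute-force search is only a sketch of the idea, not an algorithm from the survey.

```python
# A minimal sketch of state identification for a (hypothetical) Mealy machine:
# find a preset input sequence whose output words distinguish every state.
from itertools import product

# delta[state][input] -> next state; lam[state][input] -> output
delta = {'A': {'0': 'B', '1': 'A'}, 'B': {'0': 'A', '1': 'C'}, 'C': {'0': 'C', '1': 'B'}}
lam   = {'A': {'0': 'x', '1': 'y'}, 'B': {'0': 'y', '1': 'x'}, 'C': {'0': 'x', '1': 'x'}}

def run(state, inputs):
    """Apply an input sequence from `state`, returning the output word."""
    outputs = []
    for a in inputs:
        outputs.append(lam[state][a])
        state = delta[state][a]
    return ''.join(outputs)

def find_distinguishing_sequence(max_len=4):
    """Brute-force search: a preset sequence is distinguishing when every
    state produces a distinct output word."""
    for n in range(1, max_len + 1):
        for seq in product('01', repeat=n):
            if len({run(s, seq) for s in delta}) == len(delta):
                return ''.join(seq)
    return None

seq = find_distinguishing_sequence()  # '00' for this machine: outputs xy/yx/xx
```

Observing the output of the returned sequence then uniquely identifies which state the machine started in.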
The Complexity of Stochastic Games. Information and Computation, 1992.
Cited by 156 (2 self).
Abstract:
We consider the complexity of stochastic games, simple games of chance played by two players. We show that the problem of deciding which player has the greatest chance of winning the game is in the class NP ∩ coNP.

1 Introduction

We consider the complexity of a natural combinatorial problem, that of deciding the outcome of a special kind of stochastic game. A simple stochastic game (SSG) is a directed graph with three types of vertices, called max, min and average vertices. There is a special start vertex and two special sink vertices, called the 0-sink and the 1-sink. For simplicity, we assume that all vertices have exactly two (not necessarily distinct) neighbors, except for the sink vertices, which have no neighbors. The graph models a game between two players, 0 and 1. In the game, a token is initially placed on the start vertex, and at each step of the game the token is moved from a vertex to one of its neighbors, according to the following rules: At a min vertex, player 0 cho...
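The local rules above induce natural optimality equations: a max vertex takes the larger neighbor value, a min vertex the smaller, an average vertex the mean. Iterating them on a toy (hypothetical) game sketches how optimal winning probabilities arise; this is naive value iteration, not the paper's NP ∩ coNP argument.

```python
# Value iteration on a toy simple stochastic game. Sinks have fixed values
# 0 and 1; all other vertices repeatedly apply their local rule. The graph
# is hypothetical.
kind = {'start': 'avg', 'm': 'max', 'n': 'min'}           # non-sink vertices
succ = {'start': ('m', 'n'), 'm': ('one', 'n'), 'n': ('zero', 'm')}

def solve(iters=100):
    """Iterate the local optimality equations to an (approximate) fixed point."""
    v = {'zero': 0.0, 'one': 1.0}
    v.update({u: 0.0 for u in kind})
    for _ in range(iters):
        for u, k in kind.items():
            a, b = v[succ[u][0]], v[succ[u][1]]
            v[u] = max(a, b) if k == 'max' else min(a, b) if k == 'min' else (a + b) / 2
    return v

values = solve()
# player 1 wins from 'start' with probability 1/2 under optimal play
```

On this graph the max vertex can force the 1-sink, the min vertex can force the 0-sink, and the average start vertex splits between them, giving value 1/2.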
On Algorithms for Simple Stochastic Games. Advances in Computational Complexity Theory, volume 13 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1993.
Cited by 62 (1 self).
Abstract:
We survey a number of algorithms for the simple stochastic game problem, which is to determine the winning probability of a type of stochastic process where the transitions are partially controlled by two players. We show that four natural approaches to solving the problem are incorrect, and present two new algorithms for the problem. The first reduces the problem to that of finding a locally optimal solution to a (non-convex) quadratic program with linear constraints. The second extends a technique of Shapley called the successive approximation technique, by using linear programming to maximize the improvement at each approximation step. Finally, we analyze a randomized variant of the Hoffman-Karp strategy improvement algorithm.

1 Introduction

In this paper, we study algorithms for the simple stochastic game problem. The problem is to find the winning probability of a type of stochastic process where transitions are partially controlled by two players. A simple stochastic game is a ...
Markov Paging. 2000.
Cited by 60 (4 self).
Abstract:
This paper considers the problem of paging under the assumption that the sequence of pages accessed is generated by a Markov chain. We use this model to study the fault-rate of paging algorithms. We first draw on the theory of Markov decision processes to characterize the paging algorithm that achieves the optimal fault-rate on any Markov chain. Next, we address the problem of devising a paging strategy with low fault-rate for a given Markov chain. We show that a number of intuitive approaches fail. Our main result is a polynomial-time procedure that, on any Markov chain, will give a paging algorithm with fault-rate at most a constant times optimal. Our techniques also show that some algorithms that do poorly in practice fail in the Markov setting, despite known (good) performance guarantees when the requests are generated independently from a probability distribution.
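As a rough illustration of the model above, the fault-rate of a concrete paging policy under a Markov request source can be estimated by simulation. The three-page chain, cache size, and choice of LRU below are all illustrative assumptions, not the paper's optimal algorithm.

```python
# Monte-Carlo estimate of LRU's fault-rate when page requests follow a
# (hypothetical) three-page Markov chain; cache holds k = 2 pages.
import random

P = {0: [0.1, 0.8, 0.1],   # transition probabilities out of each page
     1: [0.1, 0.1, 0.8],
     2: [0.8, 0.1, 0.1]}

def lru_fault_rate(k=2, steps=100_000, seed=0):
    rng = random.Random(seed)
    cache, page, faults = [], 0, 0   # most recently used page kept at the end
    for _ in range(steps):
        page = rng.choices(range(3), weights=P[page])[0]
        if page in cache:
            cache.remove(page)       # hit: refresh recency
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)         # evict the least recently used page
        cache.append(page)
    return faults / steps

rate = lru_fault_rate()
```

Because this chain usually cycles to the one page not in the cache, LRU faults on most requests here, illustrating how the chain's structure, not just its entropy, drives the fault-rate.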
On the Complexity of Space Bounded Interactive Proofs (extended abstract). In Proceedings of FOCS, 1989.
A Microeconomic View of Data Mining. 1998.
Cited by 44 (2 self).
Abstract:
We present a rigorous framework, based on optimization, for evaluating data mining operations such as associations and clustering in terms of their utility in decision-making. This framework leads quickly to some interesting computational problems related to sensitivity analysis, segmentation and the theory of games.

Department of Computer Science, Cornell University, Ithaca NY 14853. Email: kleinber@cs.cornell.edu. Supported in part by an Alfred P. Sloan Research Fellowship and by NSF Faculty Early Career Development Award CCR-9701399.
Computer Science Division, Soda Hall, UC Berkeley, CA 94720. christos@cs.berkeley.edu
IBM Almaden Research Center, 650 Harry Road, San Jose CA 95120. pragh@almaden.ibm.com

1 Introduction

Data mining is about extracting interesting patterns from raw data. There is some agreement in the literature on what qualifies as a "pattern" (association rules and correlations [1, 2, 3, 5, 6, 12, 20, 21] as well as clustering of the data points [9], are ...
Distinguishing Tests for Nondeterministic and Probabilistic Machines. 1995.
Cited by 37 (4 self).
Abstract:
We study the problem of uniquely identifying the initial state of a given finite-state machine from among a set of possible choices, based on its input-output behavior. Equivalently, given a set of machines, the problem is to design a test that distinguishes among them. We consider nondeterministic machines as well as probabilistic machines. In both cases, we show that it is PSPACE-complete to decide whether there is a preset distinguishing strategy (i.e., a sequence of inputs fixed in advance), and EXPTIME-complete to decide whether there is an adaptive distinguishing strategy (i.e., when the next input can be chosen based on the outputs observed so far). The probabilistic testing is closely related to probabilistic games, or Markov decision processes, with incomplete information. We also provide optimal bounds for deciding whether such games have strategies winning with probability 1.

1 Introduction

Finite-state machines have been widely used to model systems in diverse areas o...
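The bookkeeping behind such tests can be sketched as follows: for each candidate initial state, track the set of states the nondeterministic machine could currently occupy, pruning by the observed outputs. The two-state machine below is hypothetical, and the code shows only this belief-update step, not the PSPACE/EXPTIME decision procedures.

```python
# delta[(state, input)] -> set of (output, next_state) pairs; the machine
# is hypothetical and nondeterministic on ('p', 'a').
delta = {
    ('p', 'a'): {('0', 'p'), ('1', 'q')},
    ('q', 'a'): {('1', 'q')},
}

def step(belief, inp, out):
    """States reachable in one step that are consistent with output `out`."""
    return {s2 for s in belief for (o, s2) in delta[(s, inp)] if o == out}

def consistent_initials(candidates, inputs, outputs):
    """Candidate initial states still consistent after the observed run."""
    beliefs = {c: {c} for c in candidates}
    for inp, out in zip(inputs, outputs):
        beliefs = {c: step(b, inp, out) for c, b in beliefs.items()}
    return [c for c, b in beliefs.items() if b]
```

Here, observing output '0' on input 'a' pins the initial state down to 'p', while output '1' leaves both candidates possible; a distinguishing test must succeed for every output the machine may nondeterministically produce, which is the source of the hardness.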
Using Difficulty of Prediction to Decrease Computation: Fast Sort, Priority Queue and Convex Hull on Entropy Bounded Inputs
Cited by 17 (4 self).
Abstract:
There has been an upsurge of interest recently in the Markov model, and also in more general stationary ergodic stochastic distributions, in the theoretical computer science community (e.g. see [Vitter, Krishnan 91], [Karlin, Philips, Raghavan 92], [Raghavan9 for the use of Markov models for online algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge,
Solving Simple Stochastic Games with Few Random Vertices
Cited by 16 (4 self).
Abstract:
We present a new algorithm for solving Simple Stochastic Games (SSGs). This algorithm is based on an exhaustive search of a special kind of positional optimal strategies, the f-strategies. The running time is O(|V_R|! · (|V||E| + |p|)), where |V|, |V_R|, |E| and |p| are respectively the number of vertices, random vertices and edges, and the maximum bit-length of a transition probability. Our algorithm improves on existing algorithms for solving SSGs in three respects: first, it performs well on SSGs with few random vertices; second, it does not rely on linear or quadratic programming; third, it applies to all SSGs, not only stopping SSGs.
Complexity Results for Infinite-Horizon Markov Decision Processes. 2000.
Cited by 15 (3 self).
Abstract:
Markov decision processes (MDPs) are models of dynamic decision making under uncertainty. These models arise in diverse applications and have been developed extensively in fields such as operations research, control engineering, and the decision sciences in general. Recent research, especially in artificial intelligence, has highlighted the significance of studying the computational properties of MDP problems. We address
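As a minimal illustration of the model class, here is standard value iteration for an infinite-horizon discounted MDP; the two-state MDP, action names, and discount factor are hypothetical, chosen only to make the fixed point easy to check by hand.

```python
# Value iteration for a (hypothetical) two-state, two-action discounted MDP.
# T[state][action] -> list of (probability, next_state, reward) triples.
GAMMA = 0.9

T = {
    0: {'stay': [(1.0, 0, 0.0)],
        'go':   [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {'stay': [(1.0, 1, 2.0)],
        'go':   [(1.0, 0, 0.0)]},
}

def value_iteration(eps=1e-10):
    """Iterate the Bellman optimality operator until the residual is < eps."""
    v = {s: 0.0 for s in T}
    while True:
        nv = {s: max(sum(p * (r + GAMMA * v[s2]) for p, s2, r in outs)
                     for outs in T[s].values())
              for s in T}
        if max(abs(nv[s] - v[s]) for s in T) < eps:
            return nv
        v = nv

values = value_iteration()
# state 1's optimal value is 2 / (1 - 0.9) = 20 (repeat 'stay' forever)
```

Since the Bellman operator is a gamma-contraction, the iteration converges geometrically; the polynomial-time solvability and hardness results discussed in the abstract concern exactly such fixed-point computations.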