Results 1–10 of 53
A Survey of Combinatorial Gray Codes
 SIAM Review
, 1996
Abstract
Cited by 93 (2 self)
The term combinatorial Gray code was introduced in 1980 to refer to any method for generating combinatorial objects so that successive objects differ in some prespecified, small way. This notion generalizes the classical binary reflected Gray code scheme for listing n-bit binary numbers so that successive numbers differ in exactly one bit position, as well as work in the 1960s and 1970s on minimal change listings for other combinatorial families, including permutations and combinations. The area of combinatorial Gray codes was popularized by Herbert Wilf in his invited address at the SIAM Discrete Mathematics Conference in 1988 and his subsequent SIAM monograph in which he posed some open problems and variations on the theme. This resulted in much recent activity in the area, and most of the problems posed by Wilf are now solved. In this paper, we survey the area of combinatorial Gray codes, describe recent results, variations, and trends, and highlight some open problems.
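The classical binary reflected Gray code mentioned in the abstract has a well-known closed form: the i-th codeword is i XOR (i >> 1). A minimal sketch:

```python
def gray_code(n):
    """List all n-bit binary numbers so that successive entries differ
    in exactly one bit position (binary reflected Gray code)."""
    # Standard closed form: the i-th codeword is i XOR (i >> 1).
    return [i ^ (i >> 1) for i in range(2 ** n)]

codes = gray_code(3)
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

Each adjacent pair (including the wraparound from the last code back to the first) differs in exactly one bit, which is the "prespecified, small" change the survey generalizes.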
Complexity and Algorithms for Reasoning About Time: A Graph-Theoretic Approach
, 1992
Abstract
Cited by 90 (11 self)
Temporal events are regarded here as intervals on a time line. This paper deals with problems in reasoning about such intervals when the precise topological relationship between them is unknown or only partially specified. This work unifies notions of interval algebras in artificial intelligence with those of interval orders and interval graphs in combinatorics. The satisfiability, minimal labeling, all solutions, and all realizations problems are considered for temporal (interval) data. Several versions are investigated by restricting the possible interval relationships, yielding different complexity results. We show that even when the temporal data comprises subsets of relations based on intersection and precedence only, the satisfiability question is NP-complete. On the positive side, we give efficient algorithms for several restrictions of the problem. In the process, the interval graph sandwich problem is introduced, and is shown to be NP-complete. This problem is als...
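For intervals whose endpoints are fully known, the restricted language of intersection and precedence the abstract mentions classifies any pair of intervals into one of three basic relations. A small illustrative sketch (the function name is hypothetical, not from the paper):

```python
def relation(a, b):
    """Classify two closed intervals a = (lo, hi) and b = (lo, hi) in the
    restricted language of precedence and intersection. Hard instances in
    the paper arise when endpoints are unknown, not fixed as here."""
    if a[1] < b[0]:
        return "precedes"       # a ends strictly before b starts
    if b[1] < a[0]:
        return "preceded-by"    # b ends strictly before a starts
    return "intersects"         # the intervals share at least one point

print(relation((1, 3), (5, 8)))   # precedes
print(relation((1, 6), (5, 8)))   # intersects
```

The NP-completeness result applies to the inverse problem: deciding whether some assignment of endpoints realizes a given set of such relation constraints.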
The Complexity of Counting in Sparse, Regular, and Planar Graphs
 SIAM Journal on Computing
, 1997
Abstract
Cited by 74 (0 self)
We show that a number of graph-theoretic counting problems remain NP-hard, indeed #P-complete, in very restricted classes of graphs. In particular, it is shown that the problems of counting matchings, vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree. To achieve these results, a new interpolation-based reduction technique which preserves properties such as constant degree is introduced. In addition, the problem of approximately counting minimum cardinality vertex covers is shown to remain NP-hard even when restricted to graphs of maximum degree 3. Previously, restricted-case complexity results for counting problems were elusive; we believe our techniques may help obtain similar results for many other counting problems.
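One of the counting problems the abstract names, counting independent sets, is easy to state but #P-complete even in restricted graph classes. A brute-force baseline, feasible only on tiny instances, makes the problem concrete:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count independent sets in an n-vertex graph by enumerating all
    2^n vertex subsets; exponential time, for illustration only."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            # A subset is independent if no pair of its vertices is an edge.
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                count += 1
    return count

# Path 0-1-2: independent sets are {}, {0}, {1}, {2}, {0,2}.
print(count_independent_sets(3, [(0, 1), (1, 2)]))  # 5
```

The paper's point is that no algorithm fundamentally better than exponential enumeration is expected even when the input is restricted to, say, planar bipartite graphs of bounded degree.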
Markov Chains and Polynomial Time Algorithms
, 1994
Abstract
Cited by 45 (0 self)
This paper outlines the use of rapidly mixing Markov Chains in randomized polynomial time algorithms to solve approximately certain counting problems. They fall into two classes: combinatorial problems like counting the number of perfect matchings in certain graphs and geometric ones like computing the volumes of convex sets.
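A standard example of the kind of chain the survey discusses is a random walk on the matchings of a graph: repeatedly pick a random edge and toggle it in or out of the current matching when legal. A minimal sketch, omitting the mixing-time analysis that is the paper's actual subject:

```python
import random

def sample_matching(edges, steps, seed=0):
    """Random walk on the matchings of a graph: pick a uniformly random
    edge; delete it if present, insert it if both endpoints are free,
    otherwise stay put. A sketch of a standard add/remove chain."""
    rng = random.Random(seed)
    matching = set()
    for _ in range(steps):
        e = edges[rng.randrange(len(edges))]
        if e in matching:
            matching.remove(e)                      # delete move
        else:
            matched = {v for f in matching for v in f}
            if all(v not in matched for v in e):
                matching.add(e)                     # insert move
    return matching

m = sample_matching([(0, 1), (1, 2), (2, 3)], steps=1000)
print(m)  # some matching of the path 0-1-2-3
```

If the chain is run long enough (the "rapidly mixing" question), the final state is approximately uniform over all matchings, which is what makes such walks usable inside approximate counting algorithms.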
Generating Linear Extensions Fast
Abstract
Cited by 40 (6 self)
One of the most important sets associated with a poset P is its set of linear extensions, E(P). In this paper, we present an algorithm to generate all of the linear extensions of a poset in constant amortized time; that is, in time O(e(P)), where e(P) = |E(P)|. The fastest previously known algorithm for generating the linear extensions of a poset runs in time O(n e(P)), where n is the number of elements of the poset. Our algorithm is the first constant amortized time algorithm for generating a "naturally defined" class of combinatorial objects for which the corresponding counting problem is #P-complete. Furthermore, we show that linear extensions can be generated in constant amortized time where each extension differs from its predecessor by one or two adjacent transpositions. The algorithm is practical and can be modified to efficiently count linear extensions, and to compute P(x < y), for all pairs x, y, in time O(n^2 + e(P)).
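To make the object being generated concrete, here is the textbook recursive generator: repeatedly pick a minimal element of the remaining poset. This runs in roughly O(n·e(P)) time per the naive analysis, i.e. it is the slower baseline, not the constant amortized time algorithm of the paper:

```python
def linear_extensions(elements, less_than):
    """Yield all linear extensions of a poset by repeatedly choosing a
    minimal element. Simple baseline sketch, not the CAT algorithm."""
    if not elements:
        yield []
        return
    for x in elements:
        # x is minimal if no remaining element lies strictly below it.
        if not any(less_than(y, x) for y in elements if y != x):
            rest = [y for y in elements if y != x]
            for ext in linear_extensions(rest, less_than):
                yield [x] + ext

# Poset on {a, b, c} with a < c and b < c.
lt = lambda u, v: (u, v) in {("a", "c"), ("b", "c")}
print(list(linear_extensions(["a", "b", "c"], lt)))
# [['a', 'b', 'c'], ['b', 'a', 'c']]
```

Since the number of extensions e(P) can itself be exponential in n, the interesting measure is time per extension, which is where the paper's constant amortized time bound matters.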
jPredictor: A Predictive Runtime Analysis Tool for Java
 In ICSE
, 2008
Abstract
Cited by 33 (1 self)
jPredictor is a tool for detecting concurrency errors in Java programs. The Java program is instrumented to emit property-relevant events at runtime and then executed. The resulting execution trace is collected and analyzed by jPredictor, which extracts a causality relation sliced using static analysis and refined with lock-atomicity information. The resulting abstract model, a hybrid of a partial order and atomic blocks, is then exhaustively analyzed against the property, and errors with counterexamples are reported to the user. Thus, jPredictor can "predict" errors that did not happen in the observed execution, but which could have happened under a different thread scheduling. The analysis technique employed in jPredictor is fully automatic, generic (works for any trace property), and sound (produces no false alarms), but it is incomplete (may miss errors). Two common types of errors are investigated in this paper: data races and atomicity violations. Experiments show that jPredictor is precise (in its predictions), effective, and efficient. After the code producing them was executed only once, jPredictor found all the errors reported by other tools. It also found errors missed by other tools, including static race detectors, as well as unknown errors in popular systems like Tomcat and the Apache FTP server.
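To see what trace-based race detection looks like in miniature, here is an Eraser-style lockset check over a recorded trace. This is a deliberately simplified illustration (and, unlike jPredictor's causality-based analysis, it can raise false alarms); the function and trace format are hypothetical:

```python
def lockset_races(trace):
    """Flag a shared variable when accesses from different threads share
    no common lock. Classic lockset sketch, not jPredictor's algorithm."""
    candidate = {}   # var -> set of locks held at every access so far
    first_thread = {}
    races = set()
    for thread, var, held in trace:
        if var not in candidate:
            candidate[var] = set(held)
            first_thread[var] = thread
        elif thread != first_thread[var]:
            candidate[var] &= set(held)   # intersect with locks held now
            if not candidate[var]:
                races.add(var)            # no common lock protects var
    return races

trace = [("t1", "x", {"L"}), ("t2", "x", {"L"}),   # x protected by L
         ("t1", "y", {"L"}), ("t2", "y", set())]   # y accessed unlocked
print(lockset_races(trace))  # {'y'}
```

jPredictor's contribution is doing this kind of prediction soundly, by reasoning about which reorderings of the observed trace are actually feasible.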
Complexity of Combinatorial Market Makers
Abstract
Cited by 31 (17 self)
We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson's popular logarithmic market scoring rule market maker (LMSR). Our goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We examine both permutation combinatorics, where outcomes are permutations of objects, and Boolean combinatorics, where outcomes are combinations of binary events. We look at three restrictive languages that limit what traders can bet on. Even with severely limited languages, we find that LMSR pricing is #P-hard, even when the same language admits polynomial-time matching without the market maker. We then propose an approximation technique for pricing permutation markets based on a recent algorithm for online permutation learning. The connections we draw between LMSR pricing and the vast literature on online learning with expert advice may be of independent interest.
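Hanson's LMSR is defined by the cost function C(q) = b·log Σⱼ exp(qⱼ/b), with instantaneous price pᵢ = exp(qᵢ/b) / Σⱼ exp(qⱼ/b). The hardness result concerns evaluating these sums implicitly over exponentially many outcomes; over an explicit outcome vector they are a few lines (sketched here with a log-sum-exp trick for numerical stability):

```python
import math

def lmsr_prices(q, b=100.0):
    """Instantaneous LMSR prices p_i = exp(q_i/b) / sum_j exp(q_j/b)
    for outstanding share vector q; b is the liquidity parameter."""
    m = max(x / b for x in q)                     # stabilize the softmax
    w = [math.exp(x / b - m) for x in q]
    s = sum(w)
    return [x / s for x in w]

def lmsr_cost(q, b=100.0):
    """Hanson's cost function C(q) = b * log(sum_j exp(q_j / b)).
    A trade moving the market from q to q' costs C(q') - C(q)."""
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

print(lmsr_prices([0.0, 0.0]))   # [0.5, 0.5] before any trades
```

In a combinatorial market, q has one entry per outcome, e.g. one per permutation of n objects, so these sums cannot be evaluated term by term; that is exactly where the #P-hardness bites.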
The Decision-Theoretic Video Advisor
 In: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
, 1999
Abstract
Cited by 30 (1 self)
We describe ongoing work toward development of a decision-theoretic agent to help users choose videos based on their preferences. The DIVA (Decision-Theoretic Interactive Video Advisor) system elicits user preferences using a case-based technique. Hard constraints are used to permit the user to communicate temporary deviations from his basic preferences. If the user is not happy with the system's recommendations, he can provide feedback, which is used to modify the represented preferences and generate a new set of recommendations. We describe the fundamental algorithms, the implementation, and results from initial experimentation.
Causal discovery via MML
 In: Proceedings of the Thirteenth International Conference on Machine Learning
, 1996
Abstract
Cited by 21 (10 self)
Automating the learning of causal models from sample data is a key step toward incorporating machine learning into decision-making and reasoning under uncertainty. This paper presents a Bayesian approach to the discovery of causal models, using a Minimum Message Length (MML) method. We have developed encoding and search methods for discovering linear causal models. The initial experimental results presented in this paper show that the MML induction approach can recover causal models from generated data which are quite accurate reflections of the original models, and compare favorably with those of TETRAD II (Spirtes et al. 1994) even when it is supplied with prior temporal information and MML is not.
Ranking with uncertain scores
 In ICDE
, 2009
Abstract
Cited by 17 (2 self)
Large databases with uncertain information are becoming more common in many applications including data integration, location tracking, and Web search. In these applications, ranking records with uncertain attributes needs to handle new problems that are fundamentally different from conventional ranking. Specifically, uncertainty in records' scores induces a partial order over records, as opposed to the total order that is assumed in conventional ranking settings. In this paper, we present a new probabilistic model, based on partial orders, to encapsulate the space of possible rankings originating from score uncertainty. Under this model, we formulate several ranking query types with different semantics. We describe and analyze a set of efficient query evaluation algorithms. We show that our techniques can be used to solve the problem of rank aggregation in partial orders. In addition, we design novel sampling techniques to compute approximate query answers. Our experimental evaluation uses both real and synthetic data. The experimental study demonstrates the efficiency and effectiveness of our techniques in different settings.
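The partial order the abstract refers to can be illustrated with score intervals: one record is ranked above another only when its lowest possible score beats the other's highest. A minimal sketch of this interval-dominance idea (the function name and record format are illustrative, not the paper's model):

```python
def dominance_pairs(records):
    """Build the partial order induced by uncertain scores given as
    (low, high) intervals: a is above b only when a.low > b.high.
    Records with overlapping intervals remain incomparable."""
    order = set()
    for a, (alo, ahi) in records.items():
        for b, (blo, bhi) in records.items():
            if a != b and alo > bhi:
                order.add((a, b))
    return order

recs = {"r1": (0.9, 1.0), "r2": (0.4, 0.6), "r3": (0.5, 0.8)}
print(dominance_pairs(recs))
# r1 dominates r2 and r3; r2 and r3 overlap, so they stay incomparable
```

The space of possible rankings is then the set of linear extensions of this partial order, which is what the paper's query semantics and sampling techniques operate over.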