Results 1–10 of 103
Robust Classification for Imprecise Environments
1989
Abstract
Cited by 255 (14 self)
In real-world environments it is usually difficult to specify target operating conditions precisely. This uncertainty makes building robust classification systems problematic. We present a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. We then show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and ...
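The ROC convex hull method described above lends itself to a compact sketch: each classifier becomes a point in (false positive rate, true positive rate) space, and only classifiers on the upper convex hull can be optimal for some class distribution and cost ratio. The function name and sample points below are illustrative, not from the paper:

```python
def rocch(points):
    """Upper convex hull of classifier points (fpr, tpr), anchored at
    the trivial always-negative (0, 0) and always-positive (1, 1)
    classifiers. Points strictly below the hull are never optimal for
    any class distribution or misclassification-cost ratio."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for px, py in pts:
        # pop the last hull point while it lies on or below the line
        # from its predecessor to the new point (non-clockwise turn)
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull

# hypothetical classifiers: (0.3, 0.7) and (0.6, 0.85) are dominated
print(rocch([(0.1, 0.6), (0.3, 0.7), (0.4, 0.9), (0.6, 0.85)]))
# → [(0.0, 0.0), (0.1, 0.6), (0.4, 0.9), (1.0, 1.0)]
```

Dominated classifiers are discarded incrementally as new points arrive, which matches the abstract's claim that the method is efficient and incremental.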
SCHAPIRE: Adaptive game playing using multiplicative weights
Games and Economic Behavior, 1999
Abstract
Cited by 134 (14 self)
We present a simple algorithm for playing a repeated game. We show that a player using this algorithm suffers average loss that is guaranteed to come close to the minimum loss achievable by any fixed strategy. Our bounds are nonasymptotic and hold for any opponent. The algorithm, which uses the multiplicative-weight methods of Littlestone and Warmuth, is analyzed using the Kullback–Leibler divergence. This analysis yields a new, simple proof of the min–max theorem, as well as a provable method of approximately solving a game. A variant of our game-playing algorithm is proved to be optimal in a very strong sense.
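The multiplicative-weights update at the heart of this algorithm can be sketched as follows. This is a toy version under stated assumptions (losses in [0, 1], a uniformly random opponent, hypothetical function name and learning rate), not the paper's exact procedure or analysis:

```python
import math
import random

def mw_average_loss(loss, rounds, eta=0.1):
    """Play a repeated matrix game with multiplicative weights.
    loss[i][j] in [0, 1] is the row player's loss for pure strategy i
    against column j. The opponent here plays uniformly at random for
    illustration; the paper's guarantee holds against any opponent."""
    n = len(loss)
    w = [1.0] * n                        # one weight per pure strategy
    total = 0.0
    for _ in range(rounds):
        s = sum(w)
        p = [wi / s for wi in w]         # current mixed strategy
        j = random.randrange(len(loss[0]))
        total += sum(p[i] * loss[i][j] for i in range(n))
        for i in range(n):               # exponentially punish losers
            w[i] *= math.exp(-eta * loss[i][j])
    return total / rounds

random.seed(0)
# matching pennies: the value of the game is 0.5, and the average
# loss of the multiplicative-weights player approaches it
avg = mw_average_loss([[0.0, 1.0], [1.0, 0.0]], rounds=5000)
print(round(avg, 2))
```

The bound described in the abstract says the gap between this average loss and the best fixed strategy's loss shrinks with the number of rounds, regardless of how the column player actually behaves.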
Information and Control in Gray-Box Systems
SOSP '01, Banff, Canada, 2001
Abstract
Cited by 102 (21 self)
In modern systems, developers are often unable to modify the underlying operating system. To build services in such an environment, we advocate the use of gray-box techniques. When treating ...
Knowledge, probability, and adversaries
Journal of the ACM, 1993
Abstract
Cited by 72 (24 self)
What should it mean for an agent to know or believe an assertion is true with probability 0.99? Different papers [FH88, FZ88a, HMT88] give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place the papers mentioned above differ); the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where "most appropriate" is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.
Implicit Coscheduling: Coordinated Scheduling with Implicit Information in Distributed Systems
ACM Transactions on Computer Systems, 1998
Abstract
Cited by 46 (2 self)
In this thesis, we formalize the concept of an implicitly-controlled system, also referred to as an implicit system. In an implicit system, cooperating components do not explicitly contact other components for control or state information; instead, components infer remote state by observing naturally-occurring local events and their corresponding implicit information, i.e., information available outside of a defined interface. Many systems, particularly in distributed and networked environments, have leveraged implicit control to simplify the implementation of services with autonomous components. To concretely demonstrate the advantages of implicit control, we propose and implement implicit coscheduling, an algorithm for dynamically coordinating the time ...
Computational mechanics: Pattern and prediction, structure and simplicity
Journal of Statistical Physics, 1999
Abstract
Cited by 43 (8 self)
Computational mechanics, an approach to structural complexity, defines a process’s causal states and gives a procedure for finding them. We show that the causal-state representation—an ε-machine—is the minimal one consistent with ...
A Sequential Procedure for Multihypothesis Testing
IEEE Trans. Inform. Theory, 1994
Abstract
Cited by 40 (3 self)
The sequential testing of more than two hypotheses has important applications in direct-sequence spread-spectrum signal acquisition, multiple-resolution-element radar, and other areas. A useful sequential test, which we term the MSPRT, is studied in this paper. The test is shown to be a generalization of the Sequential Probability Ratio Test. Under Bayesian assumptions, it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large. Bounds on error probabilities are derived, and asymptotic expressions for the stopping time and error probabilities are given. A design procedure is presented for determining the parameters of the MSPRT. Two examples involving Gaussian densities are included, and comparisons are made between simulation results and asymptotic expressions. Comparisons with Bayesian fixed-sample-size tests are also made, and it is found that the MSPRT requires two to three times fewer samples on average. Index Terms: sequential analysis, hypothesis testing, informational divergence, nonlinear renewal theory.
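The stopping rule the abstract studies can be sketched as follows: accumulate log-likelihoods for each candidate hypothesis and stop the first time one posterior probability crosses a threshold. This toy version (Gaussian observations with known variance, equal priors, deterministic toy data, invented function names) is in the spirit of the MSPRT, not the paper's exact procedure:

```python
import itertools
import math

def msprt(means, observations, threshold=0.99, sigma=1.0, max_n=1000):
    """Stop when the posterior probability of some hypothesized
    Gaussian mean (equal priors, known sigma) first reaches
    `threshold`. Returns (chosen hypothesis index, samples used)."""
    log_post = [0.0] * len(means)        # unnormalized log-posteriors
    for n, x in enumerate(itertools.islice(observations, max_n), start=1):
        for k, m in enumerate(means):
            log_post[k] += -((x - m) ** 2) / (2 * sigma ** 2)
        mx = max(log_post)               # shift for numerical stability
        z = sum(math.exp(v - mx) for v in log_post)
        k_best = max(range(len(means)), key=lambda k: log_post[k])
        if math.exp(log_post[k_best] - mx) / z >= threshold:
            return k_best, n             # confident: stop sampling
    return k_best, n                     # budget exhausted: best guess

# toy data oscillating around 1.0: hypothesis 1 (mean 1.0) should win
data = itertools.cycle([0.9, 1.1])
print(msprt([0.0, 1.0, 2.0], data))  # → (1, 11)
```

Raising the threshold trades longer expected stopping times for smaller error probabilities, which is the trade-off the paper's bounds and asymptotic expressions quantify.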
Hierarchical testing designs for pattern recognition
2003
Abstract
Cited by 38 (8 self)
We explore the theoretical foundations of a “twenty questions” approach to pattern recognition. The object of the analysis is the computational process itself rather than probability distributions (Bayesian inference) or decision boundaries (statistical learning). Our formulation is motivated by applications to scene interpretation in which there are a great many possible explanations for the data, one (“background”) is statistically dominant, and it is imperative to restrict intensive computation to genuinely ambiguous regions. The focus here is then on pattern filtering: given a large set Y of possible patterns or explanations, narrow down the true one, Y, to a small (random) subset Ŷ ⊂ Y of “detected” patterns to be subjected to further, more intense, processing. To this end, we consider a family of hypothesis tests for Y ∈ A versus the nonspecific alternatives Y ∈ A^c. Each test has null type I error, and the candidate sets A ⊂ Y are arranged in a hierarchy of nested partitions. These tests are then ...
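The coarse-to-fine filtering strategy described above can be illustrated on a toy hierarchy of nested intervals: a cell is refined only if its test fires, so computation concentrates on the ambiguous region around the true pattern. The binary interval hierarchy and the oracle below are invented for illustration:

```python
def coarse_to_fine(lo, hi, test, min_width=1):
    """Return the detected leaf cells [lo, hi) all of whose ancestor
    cells passed test(lo, hi). Tests are assumed never to miss the
    true pattern (null type I error), so pruning a failed cell is
    safe; effort is spent only inside cells that keep passing."""
    if not test(lo, hi):
        return []                        # whole cell rejected: prune
    if hi - lo <= min_width:
        return [(lo, hi)]                # ambiguous leaf survives
    mid = (lo + hi) // 2
    return (coarse_to_fine(lo, mid, test, min_width)
            + coarse_to_fine(mid, hi, test, min_width))

# toy oracle: the true pattern is 13; a cell passes iff it contains 13
detected = coarse_to_fine(0, 16, lambda lo, hi: lo <= 13 < hi)
print(detected)  # → [(13, 14)]
```

Only one root-to-leaf path of tests is evaluated here instead of all 16 leaves, which is the computational saving the twenty-questions framing is after.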