Results 1–10 of 43
Monotone Complexity
, 1990
"... We give a general complexity classification scheme for monotone computation, including monotone spacebounded and Turing machine models not previously considered. We propose monotone complexity classes including mAC i , mNC i , mLOGCFL, mBWBP , mL, mNL, mP , mBPP and mNP . We define a simple ..."
Abstract

Cited by 2350 (12 self)
We give a general complexity classification scheme for monotone computation, including monotone space-bounded and Turing machine models not previously considered. We propose monotone complexity classes including mAC^i, mNC^i, mLOGCFL, mBWBP, mL, mNL, mP, mBPP and mNP. We define a simple notion of monotone reducibility and exhibit complete problems. This provides a framework for stating existing results and asking new questions. We show that mNL (monotone nondeterministic logspace) is not closed under complementation, in contrast to Immerman's and Szelepcsényi's non-monotone result [Imm88, Sze87] that NL = coNL; this is a simple extension of the monotone circuit depth lower bound of Karchmer and Wigderson [KW90] for st-connectivity. We also consider mBWBP (monotone bounded-width branching programs) and study the question of whether mBWBP is properly contained in mNC^1, motivated by Barrington's result [Bar89] that BWBP = NC^1. Although we cannot answer t...
Fundamental Concepts of Dependability
, 2000
"... re unified by W. H. Pierce as the concept of failure tolerance in 1965 [8]. In 1967, A. Avizienis integrated masking with the practical techniques of error detection, fault diagnosis, and recovery into the concept of faulttolerant systems [9]. In the reliability modeling field, the major event was ..."
Abstract

Cited by 89 (1 self)
re unified by W. H. Pierce as the concept of failure tolerance in 1965 [8]. In 1967, A. Avizienis integrated masking with the practical techniques of error detection, fault diagnosis, and recovery into the concept of fault-tolerant systems [9]. In the reliability modeling field, the major event was the introduction of the coverage concept by Bouricius, Carter and Schneider [10]. Seminal work on software fault tolerance was initiated by B. Randell [11, 12]; later it was complemented by N-version programming [13]. [Figure: the dependability tree — threats (faults, errors, failures); attributes (availability, reliability, safety, confidentiality, integrity, maintainability); means (fault prevention, fault tolerance, fault removal, fault forecasting).] The formation of the IEEE CS TC on Fault-Tolerant Computing in 1970 and of IFIP WG 10.4 on Dependable Computing and Fault Tolerance in 1980 accelerated the emergence of a consistent set of concepts and terminology. Seven positio
Oracles and Queries that are Sufficient for Exact Learning
 Journal of Computer and System Sciences
, 1996
"... We show that the class of all circuits is exactly learnable in randomized expected polynomial time using weak subset and weak superset queries. This is a consequence of the following result which we consider to be of independent interest: circuits are exactly learnable in randomized expected poly ..."
Abstract

Cited by 83 (5 self)
We show that the class of all circuits is exactly learnable in randomized expected polynomial time using weak subset and weak superset queries. This is a consequence of the following result, which we consider to be of independent interest: circuits are exactly learnable in randomized expected polynomial time with equivalence queries and the aid of an NP oracle. We also show that circuits are exactly learnable in deterministic polynomial time with equivalence queries and a Σ_3 oracle. The hypothesis class for the above learning algorithms is the class of circuits of larger, but polynomially related, size. Also, the algorithms can be adapted to learn the class of DNF formulas with a hypothesis class consisting of depth-3 formulas (by the work of Angluin [A90], this is optimal in the sense that the hypothesis class cannot be reduced to DNF formulas, i.e., depth-2 formulas).
Analysis of Random Processes via And-Or Tree Evaluation
 In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms
, 1998
"... We introduce a new set of probabilistic analysis tools based on the analysis of AndOr trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random lossresilient ..."
Abstract

Cited by 73 (23 self)
We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formulas using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above.

1 Introduction

We introduce a new set of probabilistic analysis tools related to the amplification method introduced by [12] and further developed and used in [13, 5]. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including the random loss-resilient codes introduced ...
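As a minimal illustration of the And-Or tree technique (a sketch, not code from the paper), the recursion below propagates the probability that a complete binary tree with alternating AND/OR levels evaluates to 1 when each leaf is independently 1 with probability p, and a Monte Carlo simulation cross-checks it:

```python
import random

def and_or_prob(p, depth):
    """Probability that a complete binary alternating And-Or tree of the
    given depth evaluates to 1, when each leaf is independently 1 with
    probability p.  The bottom internal level is AND, then OR, and so on."""
    q = p
    for level in range(depth):
        if level % 2 == 0:            # AND level: both children must be 1
            q = q * q
        else:                         # OR level: at least one child is 1
            q = 1 - (1 - q) * (1 - q)
    return q

def simulate(p, depth, trials=200_000):
    """Monte Carlo estimate of the same probability, for cross-checking."""
    def eval_tree(level):
        if level == 0:
            return random.random() < p
        a, b = eval_tree(level - 1), eval_tree(level - 1)
        return (a and b) if level % 2 == 1 else (a or b)
    return sum(eval_tree(depth) for _ in range(trials)) / trials
```

For example, `and_or_prob(0.5, 2)` evaluates one AND level followed by one OR level: 0.5² = 0.25, then 1 − 0.75² = 0.4375. The paper's tools generalize this kind of level-by-level recursion to trees with random shapes.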
Fault Tolerance Techniques for Wireless Ad Hoc Sensor Networks
 in IEEE Sensors
, 2002
"... Embedded sensor network is a system of nodes, each equipped with a certain amount of sensing, actuating, computation, communication, and storage resources. One of the key prerequisites for effective and efficient embedded sensor systems is development of low cost, low overhead, high resilient fault ..."
Abstract

Cited by 29 (4 self)
An embedded sensor network is a system of nodes, each equipped with a certain amount of sensing, actuating, computation, communication, and storage resources. One of the key prerequisites for effective and efficient embedded sensor systems is the development of low-cost, low-overhead, highly resilient fault-tolerance techniques. Cost sensitivity implies that traditional double and triple redundancies are not adequate solutions for embedded sensor systems, due to their high cost and high energy consumption.
Optimizing group judgmental accuracy in the presence of interdependencies
 Public Choice
, 1984
"... Consider a group of people confronted with a dichotomous choice (for example, a yes or no decision). Assume that we can characterize each person by a probability, pi, of making the 'better ' of the two choices open to the group, such that we define 'better ' in terms of some linear ordering of the a ..."
Abstract

Cited by 20 (0 self)
Consider a group of people confronted with a dichotomous choice (for example, a yes or no decision). Assume that we can characterize each person by a probability, p_i, of making the 'better' of the two choices open to the group, where we define 'better' in terms of some linear ordering of the alternatives. If individual choices are independent, and if the a priori likelihood that either of the two choices is correct is one half, we show that the group decision procedure that maximizes the likelihood that the group will make the better of the two choices open to it is a weighted voting rule that assigns weights, w_i, such that w_i ∝ log(p_i / (1 − p_i)). We then examine the implications for optimal group choice of interdependencies among individual choices.
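The log-odds weighting rule stated in the abstract can be checked on a small example. The sketch below (illustrative names, assuming independent voters, a uniform prior over the two alternatives, and ties broken toward the better alternative) computes exact group accuracy by enumerating all vote patterns:

```python
from itertools import product
from math import log

def optimal_weights(ps):
    """Log-odds weights w_i = log(p_i / (1 - p_i)), as in the abstract."""
    return [log(p / (1 - p)) for p in ps]

def group_accuracy(ps, ws):
    """Probability that the weighted vote picks the better alternative,
    assuming independent voters and a uniform prior over the two choices.
    Ties (zero margin) are counted as correct for simplicity."""
    acc = 0.0
    for votes in product([0, 1], repeat=len(ps)):  # 1 = voted correctly
        prob = 1.0
        for v, p in zip(votes, ps):
            prob *= p if v else (1 - p)
        margin = sum(w if v else -w for v, w in zip(votes, ws))
        if margin >= 0:
            acc += prob
    return acc
```

With competences `ps = [0.9, 0.6, 0.6]`, the log-odds weight of the first voter (log 9 ≈ 2.20) exceeds the combined weight of the other two (2 · log 1.5 ≈ 0.81), so the optimal rule defers to the expert and achieves accuracy 0.9, whereas simple majority (equal weights) achieves only 0.792.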
Dependability and Its Threats: A Taxonomy
"... This paper gives the main definitions relating to dependability, a generic concept including as special case such attributes as reliability, availability, safety, confidentiality, integrity, maintainability, etc. Basic definitions are given first. They are then commented upon, and supplemented by ad ..."
Abstract

Cited by 14 (0 self)
This paper gives the main definitions relating to dependability, a generic concept including as special cases such attributes as reliability, availability, safety, confidentiality, integrity, maintainability, etc. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability (faults, errors, failures) and the attributes of dependability. The discussion of the attributes encompasses the relationship of dependability with security, survivability and trustworthiness.
Lower bounds on two-terminal network reliability
, 1985
"... One measure of twoterminal network reliability, termed probabilistic connectedness, is the probability that two specified communication centers can communicate. A standard model of a network is a graph in which nodes represent communications centers and edges represent links between communication c ..."
Abstract

Cited by 12 (0 self)
One measure of two-terminal network reliability, termed probabilistic connectedness, is the probability that two specified communication centers can communicate. A standard model of a network is a graph in which nodes represent communication centers and edges represent links between communication centers. Edges are assumed to have statistically independent probabilities of failing, and nodes are assumed to be perfectly reliable. Exact calculation of two-terminal reliability for general networks has been shown to be #P-complete. As a result, it is desirable to compute upper and lower bounds that avoid the exponential computation likely required by exact algorithms. Two methods are considered for computing lower bounds on two-terminal reliability
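For intuition about what such bounds approximate, here is a brute-force baseline (an illustrative sketch, not one of the paper's two methods): exact two-terminal reliability by enumerating all 2^m edge states, which makes the exponential cost explicit:

```python
from itertools import product

def two_terminal_reliability(edges, s, t):
    """Exact two-terminal reliability: probability that s can reach t,
    where each edge (u, v, p) is up independently with probability p and
    nodes are perfectly reliable.  Enumerates all 2^m edge states, so it
    is exponential in the number of edges -- the cost bounds try to avoid."""
    total = 0.0
    for states in product([False, True], repeat=len(edges)):
        prob = 1.0
        alive = []
        for up, (u, v, p) in zip(states, edges):
            prob *= p if up else (1 - p)
            if up:
                alive.append((u, v))
        # Breadth-first search over the surviving edges.
        reach, frontier = {s}, [s]
        while frontier:
            x = frontier.pop()
            for u, v in alive:
                for a, b in ((u, v), (v, u)):
                    if a == x and b not in reach:
                        reach.add(b)
                        frontier.append(b)
        if t in reach:
            total += prob
    return total
```

Two sanity checks: two parallel s–t edges of reliability 0.5 give 1 − 0.5² = 0.75, while two such edges in series give 0.5² = 0.25.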
Amplification and Percolation
, 1992
"... Moore and Shannon had shown that relays with arbitrarily high reliability can be built from relays with arbitrarily poor reliability. Valiant used similar methods to construct monotone readonce formulae of size O(n ff+2 ) (where ff = log p 5\Gamma1 2 ' 3:27) that amplify (/ \Gamma 1 n ; / + ..."
Abstract

Cited by 10 (3 self)
Moore and Shannon had shown that relays with arbitrarily high reliability can be built from relays with arbitrarily poor reliability. Valiant used similar methods to construct monotone read-once formulae of size O(n^(α+2)) (where α = log_(√5−1) 2 ≈ 3.27) that amplify (ψ − 1/n, ψ + 1/n) (where ψ = (√5−1)/2 ≈ 0.62) to (2^(−n), 1 − 2^(−n)), and deduced as a consequence the existence of monotone formulae of the same size that compute the majority of n bits. Boppana had shown that any monotone read-once formula that amplifies (p − 1/n, p + 1/n) to (1/4, 3/4) (where 0 < p < 1 is constant) has size at least Ω(n^α), and that any monotone, not necessarily read-once, contact network (and in particular any monotone formula) that amplifies (1/4, 3/4) to (2^(−n), 1 − 2^(−n)) has size at least Ω(n^2). We extend Boppana's results in two ways. We first show that his two lower bounds...
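The constants in this abstract can be checked numerically. The sketch below assumes the standard two-level amplifier A(p) = 1 − (1 − p²)², an OR of two ANDs of independent copies (the abstract itself does not name the gadget): its nontrivial fixed point is ψ = (√5 − 1)/2, each level quadruples formula size while multiplying the slope at ψ by A'(ψ) = 4ψ(1 − ψ²), and the resulting size exponent is α = log 4 / log A'(ψ) = log_(√5−1) 2 ≈ 3.27:

```python
from math import sqrt, log

def A(p):
    """One amplifier level: OR of two ANDs of independent p-inputs.
    (Assumed gadget; the abstract only states the resulting constants.)"""
    return 1 - (1 - p * p) ** 2

psi = (sqrt(5) - 1) / 2             # fixed point of A, about 0.618
slope = 4 * psi * (1 - psi * psi)   # A'(psi): per-level amplification of the slope
alpha = log(4) / log(slope)         # size exponent, about 3.27
```

The golden-ratio identity ψ² = 1 − ψ makes A(ψ) = 1 − ψ² = ψ, and simplifies the exponent to log 4 / log(4ψ²) = log 2 / log(√5 − 1), matching the value quoted in the abstract.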