Results 1-10 of 54
Predicting protein complex membership using probabilistic network reliability
Genome Res., 2004

Matching and Scheduling Algorithms for Minimizing Execution Time and Failure Probability of Applications in Heterogeneous Computing
IEEE Trans. Parallel and Distributed Systems, 2002
Cited by 46 (1 self)

Abstract: In a heterogeneous distributed computing system, machine and network failures are inevitable and can have an adverse effect on applications executing on the system. To reduce the effect of failures on an application executing on a failure-prone system, matching and scheduling algorithms which minimize not only the execution time but also the probability of failure of the application must be devised. However, because of the conflicting requirements, it is not possible to minimize both objectives at the same time. Thus, the goal of this paper is to develop matching and scheduling algorithms which account for both the execution time and the reliability of the application. This goal is achieved by modifying an existing matching and scheduling algorithm. The reliability of resources is taken into account using an incremental cost function proposed in this paper, and the new algorithm is referred to as the reliable dynamic level scheduling algorithm. The incremental cost function can be defined based on one of the three cost functions developed here. These cost functions are unique in the sense that they are not restricted to tree-based networks or a specific matching and scheduling algorithm. The simulation results confirm that the proposed incremental cost function can be incorporated into matching and scheduling algorithms to produce schedules where the effect of failures of machines and network resources on the execution of the application is reduced and the execution time of the application is minimized as well.

Index Terms: Matching and scheduling, precedence-constrained tasks, heterogeneous computing, reliability, articulation points and bridges, DLS algorithm.

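The trade-off the abstract describes, preferring machines that are both fast and unlikely to fail before a task finishes, can be illustrated with a toy selection rule. This is my own sketch under an assumed exponential failure model, not the paper's RDLS incremental cost function; the function and parameter names are invented for illustration:

```python
import math

def pick_machine(task_time, failure_rate):
    """Illustrative machine selection under a time/reliability trade-off.

    task_time[m]    -- estimated execution time of the task on machine m
    failure_rate[m] -- assumed failure rate (lambda) of machine m

    Scores each machine by exp(-lambda * t) / t: the probability the
    machine survives the task's duration (exponential failure model),
    divided by the time taken, so machines that are both fast and
    reliable score highest.
    """
    def score(m):
        t = task_time[m]
        return math.exp(-failure_rate[m] * t) / t
    return max(task_time, key=score)
```

With equal failure rates the rule degenerates to picking the fastest machine; a fast but flaky machine loses to a slightly slower, far more reliable one.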
Distance-constraint reachability computation in uncertain graphs
PVLDB
Cited by 25 (5 self)

Abstract: Driven by emerging network applications, querying and mining uncertain graphs has become increasingly important. In this paper, we investigate a fundamental problem concerning uncertain graphs, which we call the distance-constraint reachability (DCR) problem: given two vertices s and t, what is the probability that the distance from s to t is less than or equal to a user-defined threshold d in the uncertain graph? Since this problem is #P-complete, we focus on efficiently and accurately approximating DCR online. Our main results include two new estimators for the probabilistic reachability. One is a Horvitz-Thompson type estimator based on an unequal probability sampling scheme, and the other is a novel recursive sampling estimator, which effectively combines a deterministic recursive computational procedure with a sampling process to boost the estimation accuracy. Both estimators can produce much smaller variance than the direct sampling estimator, which considers each trial to be either 1 or 0. We also present methods to make these estimators more computationally efficient. A comprehensive experimental evaluation on both real and synthetic datasets demonstrates the efficiency and accuracy of our new estimators.

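The direct sampling estimator that the abstract's two variance-reduced estimators improve on is easy to sketch: sample possible worlds of the uncertain graph and count how often a bounded-depth BFS reaches t. A minimal illustration, not the paper's code; the graph encoding and names are my own:

```python
import random
from collections import deque

def sample_dcr(edges, s, t, d, trials=10000, seed=0):
    """Direct sampling estimate of distance-constraint reachability:
    the probability that dist(s, t) <= d when each undirected edge
    (u, v, p) exists independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample one possible world of the uncertain graph.
        adj = {}
        for u, v, p in edges:
            if rng.random() < p:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        # BFS from s, never expanding past depth d.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                break
            if dist[u] == d:
                continue
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        hits += t in dist  # reached within d hops in this world
    return hits / trials
```

Each trial contributes 1 or 0, exactly the high-variance behavior the paper's Horvitz-Thompson and recursive estimators are designed to reduce.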
Discovering highly reliable subgraphs in uncertain graphs
In KDD, 2011
Cited by 15 (1 self)

Abstract: In this paper, we investigate the highly reliable subgraph problem, which arises in the context of uncertain graphs. This problem attempts to identify all induced subgraphs for which the probability of connectivity being maintained under uncertainty is higher than a given threshold. The problem arises in a wide range of network applications, such as protein-complex discovery, network routing, and social network analysis. Since exact discovery may be computationally intractable, we introduce a novel sampling scheme which enables approximate discovery of highly reliable subgraphs with high probability. Furthermore, we transform the core mining task into a new frequent cohesive set problem in deterministic graphs. This transformation enables the development of an efficient two-stage approach which combines novel peeling techniques for maximal set discovery with depth-first search for further enumeration. We demonstrate the effectiveness and efficiency of the proposed algorithms on real and synthetic data sets.

Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications
2012

Distributed File Allocation with Consistency Constraints
In Proceedings of the International Conference on Distributed Computing Systems, 1992
Cited by 12 (1 self)

Abstract: We consider the resource allocation problem in distributed computing systems that have strict mutual consistency requirements. Our model incorporates the behavior of consistency control algorithms, which ensure that mutual consistency of replicated data is preserved even when communication links of the computer network and/or computers on which the files reside fail. The problem of resource allocation in these networks is significant in terms of the efficiency of operations and the reliability of the network. The constrained resource allocation problem is formulated as a mixed nonlinear integer program. An efficient algorithm is proposed to solve this problem. The performance of the algorithm is evaluated in terms of its accuracy, efficiency, and execution times, using a representative problem set.

Power indices in spanning connectivity games
In Proc. of the AAIM, 2009
Cited by 12 (2 self)

Abstract: The Banzhaf index, Shapley-Shubik index, and other voting power indices measure the importance of a player in a coalitional game. We consider a simple coalitional game called the spanning connectivity game (SCG) based on an undirected, unweighted multigraph, where edges are players. We examine the computational complexity of computing the voting power indices of edges in the SCG. It is shown that computing Banzhaf values is #P-complete and computing Shapley-Shubik indices or values is NP-hard for SCGs. Interestingly, Holler indices and Deegan-Packel indices can be computed in polynomial time. Among other results, it is proved that Banzhaf indices can be computed in polynomial time for graphs with bounded treewidth. It is also shown that for any reasonable representation of a simple game, a polynomial-time algorithm to compute the Shapley-Shubik indices implies a polynomial-time algorithm to compute the Banzhaf indices. This answers (positively) an open question of whether computing Shapley-Shubik indices for a simple game represented by the set of minimal winning coalitions is NP-hard.

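For toy instances, the Banzhaf value of an edge in the SCG can be computed by brute force: an edge's value is the fraction of coalitions of the other edges for which adding that edge turns a losing (disconnected) coalition into a winning (spanning, connected) one. A sketch under my own encoding, exponential in the number of edges, which is exactly why the paper's polynomial-time results for special classes matter:

```python
def connected_spanning(n, edge_list):
    """True if edge_list connects all n vertices 0..n-1 (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def banzhaf_values(n, edges):
    """Brute-force Banzhaf value of each edge in the spanning
    connectivity game: a coalition of edges wins iff it connects
    all n vertices. Feasible only for toy-sized edge sets."""
    m = len(edges)
    swings = [0] * m
    for mask in range(2 ** m):
        coalition = [edges[i] for i in range(m) if mask >> i & 1]
        if not connected_spanning(n, coalition):
            continue
        # An edge is a swing if removing it makes the coalition lose.
        for i in range(m):
            if mask >> i & 1:
                rest = [edges[j] for j in range(m)
                        if j != i and mask >> j & 1]
                if not connected_spanning(n, rest):
                    swings[i] += 1
    return [s / 2 ** (m - 1) for s in swings]
```

On a triangle every edge is a swing in exactly the two two-edge winning coalitions, so by symmetry each edge gets value 2 / 4 = 0.5.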
Stochastic Petri nets for the reliability analysis of communication network applications with alternate routing
1996

Combination of Conditional Monte Carlo and Approximate Zero-Variance Importance Sampling for Network Reliability Estimation
2010
Cited by 6 (2 self)

Abstract: We study the combination of two efficient rare-event Monte Carlo simulation techniques for estimating the probability that a given set of nodes in a graph is connected when links can fail: approximate zero-variance importance sampling and a conditional Monte Carlo method which conditions on the event that a pre-specified set of disjoint min-paths linking the set of nodes fails. These two methods have previously been applied separately. Here we show how their combination can be defined and implemented, we derive asymptotic robustness properties of the resulting estimator when the reliabilities of individual links go arbitrarily close to one, and we illustrate numerically the efficiency gain that can be obtained.

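The baseline that both variance-reduction techniques above improve on is the crude Monte Carlo estimator: sample link states directly and check terminal connectivity. When links are highly reliable, failure is a rare event and this estimator needs enormous sample sizes, which motivates the paper's combined scheme. A minimal sketch with my own encoding, not the paper's method:

```python
import random

def _root(parent, x):
    """Union-find root with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def crude_unreliability(n, links, terminals, trials=100000, seed=1):
    """Crude Monte Carlo estimate of the probability that the terminal
    set is NOT all connected, when each link (u, v, r) works
    independently with reliability r. Vertices are 0..n-1."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        parent = list(range(n))
        for u, v, r in links:
            if rng.random() < r:  # link is up in this replication
                parent[_root(parent, u)] = _root(parent, v)
        roots = {_root(parent, t) for t in terminals}
        fails += len(roots) > 1  # terminals split across components
    return fails / trials
```

If the true unreliability is, say, 1e-6, this estimator sees almost no failures in a feasible number of trials; importance sampling and conditional Monte Carlo reshape the sampling so failures are observed far more often without biasing the estimate.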
Metro Maps of Science
Cited by 5 (2 self)

Abstract: As the number of scientific publications soars, even the most enthusiastic reader can have trouble staying on top of the evolving literature. It is easy to focus on a narrow aspect of one's field and lose track of the big picture. Information overload is indeed a major challenge for scientists today, and is especially daunting for new investigators attempting to master a discipline and for scientists who seek to cross disciplinary borders. In this paper, we propose metrics of influence, coverage, and connectivity for scientific literature. We use these metrics to create structured summaries of information, which we call metro maps. Most importantly, metro maps explicitly show the relations between papers in a way which captures developments in the field. Pilot user studies demonstrate that our method can help researchers acquire new knowledge efficiently: map users achieved better precision and recall scores and found more seminal papers while performing fewer searches.