Secure routing for structured peer-to-peer overlay networks
, 2002
"... Structured peertopeer overlay networks provide a substrate for the construction of largescale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fract ..."
Abstract

Cited by 455 (12 self)
Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without pre-existing trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes.
Biclustering algorithms for biological data analysis: a survey
 IEEE/ACM Transactions on Computational Biology and Bioinformatics
, 2004
"... Abstract—A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of ..."
Abstract

Cited by 436 (14 self)
Abstract—A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix have been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred to in the literature as co-clustering and direct clustering, among other names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications. Index Terms—Biclustering, simultaneous clustering, co-clustering, subspace clustering, bidimensional clustering, direct clustering, block clustering, two-way clustering, two-mode clustering, two-sided clustering, microarray data analysis, biological data analysis, gene expression data.
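As a concrete illustration of what the surveyed algorithms search for, one well-known coherence measure (the mean squared residue of Cheng and Church, discussed in this literature) scores how well a submatrix is explained by row effects plus column effects; zero means a perfectly additive bicluster. A minimal sketch with an invented toy matrix:

```python
from itertools import product

def msr(M, rows, cols):
    """Mean squared residue of the submatrix of M indexed by rows x cols.
    Zero iff every entry equals (row effect + column effect), i.e. a
    perfectly additive bicluster."""
    sub = [[M[i][j] for j in cols] for i in rows]
    nI, nJ = len(rows), len(cols)
    row_mean = [sum(r) / nJ for r in sub]
    col_mean = [sum(sub[i][j] for i in range(nI)) / nI for j in range(nJ)]
    total = sum(map(sum, sub)) / (nI * nJ)
    return sum(
        (sub[i][j] - row_mean[i] - col_mean[j] + total) ** 2
        for i, j in product(range(nI), range(nJ))
    ) / (nI * nJ)

# Toy expression matrix (hypothetical data): genes 0 and 1 behave
# additively under conditions 0 and 1, forming a perfect bicluster.
data = [
    [1, 2, 9, 0],
    [3, 4, 7, 1],
    [5, 0, 2, 8],
]
```

Real biclustering algorithms avoid the exponential search over all row/column subsets; this only shows the objective they evaluate.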
StreamIt: A Language for Streaming Applications
 In International Conference on Compiler Construction
, 2001
"... We characterize highperformance streaming applications as a new and distinct domain of programs that is becoming increasingly important. ..."
Abstract

Cited by 374 (26 self)
We characterize high-performance streaming applications as a new and distinct domain of programs that is becoming increasingly important.
Vigilante: End-to-End Containment of Internet Worm Epidemics
, 2008
"... Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed networklevel techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. ..."
Abstract

Cited by 302 (6 self)
Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed network-level techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. We propose Vigilante, a new end-to-end architecture to contain worms automatically that addresses these limitations. In Vigilante, hosts detect worms by instrumenting vulnerable programs to analyze infection attempts. We introduce dynamic dataflow analysis: a broad-coverage host-based algorithm that can detect unknown worms by tracking the flow of data from network messages and disallowing unsafe uses of this data. We also show how to integrate other host-based detection mechanisms into the Vigilante architecture. Upon detection, hosts generate self-certifying alerts (SCAs), a new type of security alert that can be inexpensively verified by any vulnerable host. Using SCAs, hosts can cooperate to contain an outbreak, without having to trust each other. Vigilante broadcasts SCAs over an overlay network that propagates alerts rapidly and resiliently. Hosts receiving an SCA protect themselves by generating filters with vulnerability condition slicing: an algorithm that performs dynamic analysis of the vulnerable program to identify control-flow conditions that lead ...
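The detection step described above, tracking data that arrived from the network and disallowing unsafe uses of it, can be sketched as a toy taint analysis. This is an invented, minimal illustration of the dataflow-tracking idea, not Vigilante's binary-level instrumentation; all names are hypothetical.

```python
class SecurityError(Exception):
    pass

class Tainted:
    """Wraps a value derived from untrusted network input."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Taint propagates: anything computed from tainted data is tainted.
        ov = other.value if isinstance(other, Tainted) else other
        return Tainted(self.value + ov)

def recv_from_network(raw):
    # Every value entering from a network message starts out tainted.
    return Tainted(raw)

def jump_to(address):
    # The unsafe use being disallowed: control flow (here, a jump
    # target) must never be determined by tainted data.
    if isinstance(address, Tainted):
        raise SecurityError("infection attempt: tainted jump target")
    return address
```

A worm overwriting a return address with bytes from its own packet would trip the check, while normal control flow passes through untouched.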
Denial of Service via Algorithmic Complexity Attacks
, 2003
"... We present a new class of lowbandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "averagecase" expected running time that's far more efficient than the worst case. For examp ..."
Abstract

Cited by 141 (2 self)
We present a new class of low-bandwidth denial-of-service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "average-case" expected running time that's far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dial-up modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.
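The degeneration the abstract describes is easy to reproduce with a toy: a predictable hash lets an attacker precompute keys that all collide, turning O(1) bucket lookups into an O(n) linked-list scan, while a secret-keyed hash (standing in for the paper's universal hashing) removes the predictability. Everything here (the bucket count, the character-sum hash) is an invented stand-in, not any of the attacked implementations:

```python
import hashlib

N_BUCKETS = 64

def weak_hash(key: str) -> int:
    # Predictable and trivially invertible: sum of character codes.
    return sum(ord(c) for c in key) % N_BUCKETS

def keyed_hash(key: str, secret: bytes) -> int:
    # Stand-in for a universal/keyed hash: without the secret, an
    # attacker cannot predict which bucket a key lands in.
    digest = hashlib.sha256(secret + key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_BUCKETS

def attack_keys(n: int) -> list:
    # Two-character keys with a constant code sum (33+i) + (97-i) = 130,
    # so under weak_hash every key lands in bucket 130 % N_BUCKETS.
    return [chr(33 + i) + chr(97 - i) for i in range(n)]
```

Feeding `attack_keys(n)` to a chained hash table built on `weak_hash` makes each insert scan the one long chain, which is the quadratic blow-up the attacks exploit.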
Provably Authenticated Group Diffie-Hellman Key Exchange
, 2001
"... Group DiffieHellman protocols for Authenticated Key Exchange (AKE) are designed to provide a pool of players with a shared secret key which may later be used, for example, to achieve multicast message integrity. Over the years, several schemes have been offered. However, no formal treatment for thi ..."
Abstract

Cited by 132 (19 self)
Group Diffie-Hellman protocols for Authenticated Key Exchange (AKE) are designed to provide a pool of players with a shared secret key which may later be used, for example, to achieve multicast message integrity. Over the years, several schemes have been offered. However, no formal treatment for this cryptographic problem has ever been suggested. In this paper, we present a security model for this problem and use it to precisely define AKE (with "implicit" authentication) as the fundamental goal, and the entity-authentication goal as well. We then define in this model the execution of an authenticated group Diffie-Hellman scheme and prove its security.
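For orientation, the unauthenticated group key exchange that such protocols build on can be sketched in a few lines: members process an "up-flow" of partial exponentials, and the last member broadcasts one value per participant. The paper's actual subject, the security model and authentication, is exactly what this toy omits, and the parameters below are illustrative only (nowhere near a secure instantiation).

```python
# Toy parameters: a Mersenne prime and a small base, for illustration only.
P = 2**127 - 1
G = 3

def gdh_upflow(secrets):
    """Unauthenticated up-flow group Diffie-Hellman sketch. After member i
    is processed, partials[j] = G^(product of secrets so far except j's)
    and full = G^(product of all secrets so far), all mod P."""
    partials, full = [], G
    for x in secrets:
        partials = [pow(v, x, P) for v in partials]
        partials.append(full)   # this entry is missing only the joiner's x
        full = pow(full, x, P)
    return partials             # broadcast by the last member

def member_key(partial, secret):
    # Each member raises its broadcast value by its own secret,
    # obtaining G^(product of all secrets) mod P.
    return pow(partial, secret, P)
```

Since exponents commute, every member derives the same group key; what the broadcast flow cannot provide on its own is any guarantee about *who* contributed each exponent, which is the gap the paper's AKE definitions address.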
Bayesian Treed Gaussian Process Models with an Application to Computer Modeling
 Journal of the American Statistical Association
, 2007
"... This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian proce ..."
Abstract

Cited by 78 (18 self)
This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian processes and simple linear models can yield a more parsimonious spatial model while significantly reducing computational effort. The methodological developments and statistical computing details which make this approach efficient are described in detail. Illustrations of our model are given for both synthetic and real datasets. Key words: recursive partitioning, nonstationary spatial model, nonparametric regression, Bayesian model averaging
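The treed-partitioning ingredient can be illustrated with the simplest possible stand-in: a greedy split of a one-dimensional input that minimizes within-region squared error, fitting constants rather than Gaussian processes or linear models in the leaves. A hypothetical sketch of the partitioning idea only, not the paper's Bayesian treed GP:

```python
def best_split(xs, ys):
    """Return the split point s (or None) minimizing the total squared
    error of piecewise-constant fits on {x < s} and {x >= s} -- the
    one-step, non-Bayesian core of recursive (treed) partitioning."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_err, best_s = sse(ys), None        # baseline: no split at all
    for s in sorted(set(xs))[1:]:           # candidate boundaries
        left = [y for x, y in zip(xs, ys) if x < s]
        right = [y for x, y in zip(xs, ys) if x >= s]
        err = sse(left) + sse(right)
        if err < best_err:
            best_err, best_s = err, s
    return best_s
```

Applied recursively this yields a tree of regions; the paper replaces the constant leaf fits with full Gaussian processes or limiting linear models and averages over trees rather than picking one greedily.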
Fast concept analysis
 Working with Conceptual Structures – Contributions to ICCS 2000
, 2000
"... Formal concept analysis is increasingly used for large contexts that are built by programs. This paper presents an efficient algorithm for concept analysis that computes concepts together with their explicit lattice structure. An experimental evaluation uses randomly generated contexts to compare th ..."
Abstract

Cited by 40 (3 self)
Formal concept analysis is increasingly used for large contexts that are built by programs. This paper presents an efficient algorithm for concept analysis that computes concepts together with their explicit lattice structure. An experimental evaluation uses randomly generated contexts to compare the running time of the presented algorithm with two other algorithms. Running time increases quadratically with the number of concepts, but with a small quadratic component. At least for contexts with sparsely filled context tables, concept lattices grow quadratically with respect to the size of their base relation. The growth rate is controlled by the density of the context tables. Modest growth combined with efficient algorithms leads to fast concept analysis.
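For readers new to the topic: a formal concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent's attributes, and vice versa. A naive enumeration over an invented toy context shows the definition at work; the paper's contribution is computing the concepts and their lattice structure efficiently, which this sketch does not attempt.

```python
from itertools import combinations

def extent(attrs, ctx):
    # Objects possessing every attribute in attrs.
    return frozenset(o for o, has in ctx.items() if attrs <= has)

def intent(objs, ctx, all_attrs):
    # Attributes common to every object in objs (all attributes if empty).
    common = set(all_attrs)
    for o in objs:
        common &= ctx[o]
    return frozenset(common)

def all_concepts(ctx):
    # Naive enumeration: intent(A) is always a closed attribute set, so
    # (extent(intent(A)), intent(A)) is a formal concept for any A.
    all_attrs = frozenset().union(*ctx.values())
    objs = list(ctx)
    found = set()
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            i = intent(combo, ctx, all_attrs)
            found.add((extent(i, ctx), i))
    return found

# A tiny hypothetical context: object -> set of attributes it has.
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "b", "c"},
}
```

The enumeration is exponential in the number of objects, which is precisely why program-generated large contexts need the fast algorithm the paper presents.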
Circuits for wide-window superscalar processors
 In Proceedings of the 27th Annual International Symposium on Computer Architecture
, 2000
"... Our program benchmarks and simulations of novel circuits indicate that largewindow processors are feasible. Using our redesigned superscalar components, a largewindow processor implemented in today’s technology can achieve an increase of 10–60 % (geometric mean of 31%) in program speed compared to ..."
Abstract

Cited by 39 (5 self)
Our program benchmarks and simulations of novel circuits indicate that large-window processors are feasible. Using our redesigned superscalar components, a large-window processor implemented in today's technology can achieve an increase of 10–60% (geometric mean of 31%) in program speed compared to today's processors. The processor operates at clock speeds comparable to today's processors, but achieves significantly higher ILP. To measure the impact of a large window on clock speed, we design and simulate new implementations of the logic components that most limit the critical path of our large-window processor: the schedule logic and the wakeup logic. We use log-depth cyclic segmented prefix (CSP) circuits to reimplement these components. Our layouts and simulations of critical paths through these circuits indicate that our large-window processor could be clocked at frequencies exceeding 500MHz in today's technology. Our commit logic and rename logic can also run at these speeds. To measure the impact of a large window on ILP, we compare two microarchitectures: the first has a 128-instruction window, an 8-wide fetch unit, and 20-wide issue (four integer, branch, multiply, float, and memory units), whereas the second has a 32-instruction window and a 4-wide fetch unit and is comparable to today's processors. For each, we simulate different window reuse and bypass policies. Our simulations show that the large-window processor achieves significantly higher IPC. This performance increase comes despite the fact that the large-window processor uses a wrap-around window while the small-window processor uses a compressing window, thus effectively increasing its number of outstanding instructions. Furthermore, the large-window processor sometimes pays an extra clock cycle for bypassing.
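The prefix principle behind the redesigned select/wakeup logic can be mimicked in software: an exclusive prefix-OR, computed in logarithmically many doubling steps, tells each window slot whether any earlier slot is ready, which is enough to grant exactly the first ready instruction. A rough software analogue only; the paper's cyclic segmented prefix circuits are hardware designs with properties (segmentation, wrap-around) this sketch ignores.

```python
def select_first_ready(ready):
    """Grant exactly the first set bit of 'ready' via a log-depth
    exclusive prefix-OR (Kogge-Stone-style doubling)."""
    n = len(ready)
    pre = list(ready)        # inclusive prefix-OR, built in log2(n) steps
    step = 1
    while step < n:
        pre = [pre[i] | (pre[i - step] if i >= step else 0)
               for i in range(n)]
        step *= 2
    excl = [0] + pre[:-1]    # exclusive prefix-OR: is any earlier bit set?
    return [r & (1 - e) for r, e in zip(ready, excl)]
```

Because the doubling loop has log-depth rather than linear depth, the circuit analogue keeps the critical path short even for a 128-entry window, which is the point of the CSP reimplementation.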
Correctness of multiplicative proof nets is linear
 In LICS
, 1999
"... We reformulate Danos contractibility criterion in terms of a sort of unification. As for term unification, a direct implementation of the unification criterion leads to a quasilinear algorithm. Linearity is obtained after observing that the disjointset unionfind at the core of the unification c ..."
Abstract

Cited by 33 (3 self)
We reformulate Danos' contractibility criterion in terms of a sort of unification. As for term unification, a direct implementation of the unification criterion leads to a quasi-linear algorithm. Linearity is obtained after observing that the disjoint-set union-find at the core of the unification criterion is a special case of union-find with a real linear-time solution.
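The general-purpose structure the paper specializes looks like this: disjoint-set union-find with path compression and union by rank, which runs in near-linear (inverse-Ackermann) time. The paper's observation is that the restricted instance arising in contractibility checking falls into a known special case with a truly linear-time solution; this standard sketch shows only the general structure.

```python
class UnionFind:
    """Disjoint-set union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            # Path compression: point x at its grandparent as we climb.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False              # already in the same class
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach shorter tree under taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

In the proof-net setting, each `union` merges the equivalence classes produced by a contraction step, and the criterion holds when the merges leave the structure connected in the required way.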