Results 1–10 of 120,164
The strength of weak learnability
 Machine Learning
, 1990
"... Abstract. This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distributionfree (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a Source of examples of the unknown concept, the learner with h ..."
Abstract

Cited by 861 (24 self)
 Add to MetaCart
This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output an hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce an hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent. A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
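
The weak-to-strong conversion can be sketched in a few lines. The sketch below uses the reweight-and-vote scheme of the later AdaBoost algorithm rather than the paper's original recursive-majority construction; the `weak_learn` interface is hypothetical, standing in for any black-box learner with a slight edge over random guessing.

```python
import numpy as np

def boost(weak_learn, X, y, rounds=20):
    """Combine weak hypotheses into a strong one by reweighting.

    Hypothetical interface: weak_learn(X, y, w) returns a callable h
    with h(X) in {-1, +1}, slightly better than chance under weights w.
    This follows the later AdaBoost reweighting scheme, not the
    paper's original recursive-majority construction.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # distribution over examples
    hs, alphas = [], []
    for _ in range(rounds):
        h = weak_learn(X, y, w)
        pred = h(X)
        err = np.sum(w * (pred != y))
        if err >= 0.5:                   # no weak edge left; stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)   # upweight the mistakes
        w /= w.sum()
        hs.append(h)
        alphas.append(alpha)
    return lambda X: np.sign(sum(a * h(X) for a, h in zip(alphas, hs)))
```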
Evolving Neural Networks through Augmenting Topologies
 Evolutionary Computation
"... An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixedtopology method on a challenging benchmark reinforcement learning task ..."
Abstract

Cited by 524 (113 self)
 Add to MetaCart
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
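
A rough sketch of the speciation ingredient, which protects structural innovation by grouping genomes by topological similarity. The genome encoding, coefficient values, and threshold below are simplified assumptions, not NEAT's exact formulation.

```python
def compatibility(g1, g2, c_excess=1.0, c_disjoint=1.0, c_weight=0.4):
    """NEAT-style compatibility distance between two genomes.

    A genome here is a dict: innovation number -> connection weight
    (a simplification of NEAT's full connection genes).
    """
    innovs1, innovs2 = set(g1), set(g2)
    shared = innovs1 & innovs2
    cutoff = min(max(innovs1), max(innovs2))
    excess = sum(1 for i in innovs1 ^ innovs2 if i > cutoff)
    disjoint = len(innovs1 ^ innovs2) - excess
    w_diff = (sum(abs(g1[i] - g2[i]) for i in shared) / len(shared)
              if shared else 0.0)
    n = max(len(g1), len(g2))
    return c_excess * excess / n + c_disjoint * disjoint / n + c_weight * w_diff

def speciate(genomes, threshold=3.0):
    """Group genomes into species by distance to each species' representative."""
    species = []                         # list of lists; first member is the rep
    for g in genomes:
        for s in species:
            if compatibility(g, s[0]) < threshold:
                s.append(g)
                break
        else:
            species.append([g])
    return species
```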
A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks
, 1995
"... We present a digital signature scheme based on the computational diculty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosenmessage attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a ..."
Abstract

Cited by 985 (43 self)
 Add to MetaCart
We present a digital signature scheme based on the computational difficulty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since the properties of having forgery be equivalent to factoring and being invulnerable to an adaptive chosen-message attack were considered in the folklore to be contradictory. More generally, we show how to construct a signature scheme with such properties based on the existence of a "claw-free" pair of permutations, a potentially weaker assumption than the intractability of integer factorization. The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact.
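
The security property claimed here has a standard game formulation, sketched below. The `keygen`/`sign`/`verify` interface is a hypothetical generic scheme, not the paper's factoring-based construction.

```python
def euf_cma_experiment(keygen, sign, verify, adversary):
    """The adaptive chosen-message attack game, as code.

    keygen/sign/verify form a generic signature scheme (hypothetical
    interface); adversary(pk, sign_oracle) returns (message, signature).
    A scheme is secure in the abstract's sense if no efficient adversary
    wins this game with non-negligible probability.
    """
    pk, sk = keygen()
    queried = set()

    def sign_oracle(message):            # adversary picks messages adaptively
        queried.add(message)
        return sign(sk, message)

    message, signature = adversary(pk, sign_oracle)
    # A forgery only counts on a message never submitted to the oracle.
    return message not in queried and verify(pk, message, signature)
```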
The broadcast storm problem in a mobile ad hoc network
 ACM Wireless Networks
, 2002
"... Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm s ..."
Abstract

Cited by 1217 (15 self)
 Add to MetaCart
Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, which we refer to as the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.
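
One family of schemes the paper proposes is counter-based suppression. A simplified sketch follows; slot timing and radio details are abstracted away, and the class and parameter names are illustrative.

```python
import random

class CounterBasedHost:
    """Counter-based rebroadcast suppression, simplified.

    On first hearing a broadcast, a host waits a random number of slots
    while counting duplicate receptions; it rebroadcasts only if fewer
    than `threshold` duplicates were overheard in the meantime.
    """
    def __init__(self, threshold=3, max_wait_slots=5):
        self.threshold = threshold
        self.max_wait = max_wait_slots
        self.counters = {}               # message id -> copies heard

    def on_receive(self, msg_id):
        if msg_id not in self.counters:
            self.counters[msg_id] = 1
            return random.randint(1, self.max_wait)   # schedule a decision
        self.counters[msg_id] += 1
        return None                      # duplicate: no new timer

    def on_timer(self, msg_id):
        # Many overheard copies imply the neighborhood is already
        # covered, so rebroadcasting would only add redundancy.
        return self.counters[msg_id] < self.threshold
```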
A Theory of Objects
, 1996
"... Objectoriented languages were invented to provide an intuitive view of data and computation, by drawing an analogy between software and the physical world of objects. The detailed explanation of this intuition, however, turned out to be quite complex; there are still no standard definitions of such ..."
Abstract

Cited by 1002 (13 self)
 Add to MetaCart
Object-oriented languages were invented to provide an intuitive view of data and computation, by drawing an analogy between software and the physical world of objects. The detailed explanation of this intuition, however, turned out to be quite complex; there are still no standard definitions of such fundamental notions as objects, classes, and inheritance. Much progress was made by investigating the notion of subtyping within procedural languages and their theoretical models (lambda calculi). These studies clarified the role of subtyping in object-oriented languages, but still relied on complex encodings to model object-oriented features. Recently, in joint work with Martin Abadi, I have studied more direct models of object-oriented features: object calculi. Object calculi embody, in a minimal setting, the object-oriented model of computation, as opposed to the imperative, functional, and process models. Object calculi are based exclusively on objects and methods, not on functions or data structures. They help in classifying and explaining the features of object-oriented languages, and in designing new, more regular languages. They directly inspired my design of Obliq, an object-oriented language for network programming.
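
The flavor of an object calculus can be conveyed by a tiny Python encoding: an object is nothing but a record of methods, each taking self. This is a loose sketch of the untyped calculus, not a faithful semantics.

```python
class Obj:
    """A minimal untyped object calculus: an object is a record of
    methods, each a function of the object itself (self)."""
    def __init__(self, **methods):
        self.methods = dict(methods)

    def invoke(self, label):
        # o.l  ->  body of method l with self bound to o
        return self.methods[label](self)

    def update(self, label, method):
        # o.l <= ς(x)b  ->  a new object with method l replaced
        new = dict(self.methods)
        new[label] = method
        return Obj(**new)

# A one-cell store: `get` reads, `set(v)` yields an updated object.
cell = Obj(
    get=lambda self: 0,
    set=lambda self: lambda v: self.update("get", lambda s: v),
)
cell2 = cell.invoke("set")(7)
assert cell2.invoke("get") == 7
```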
JFlow: Practical Mostly-Static Information Flow Control
 In Proc. 26th ACM Symp. on Principles of Programming Languages (POPL)
, 1999
"... A promising technique for protecting privacy and integrity of sensitive data is to statically check information flow within programs that manipulate the data. While previous work has proposed programming language extensions to allow this static checking, the resulting languages are too restrictive f ..."
Abstract

Cited by 579 (32 self)
 Add to MetaCart
A promising technique for protecting privacy and integrity of sensitive data is to statically check information flow within programs that manipulate the data. While previous work has proposed programming language extensions to allow this static checking, the resulting languages are too restrictive for practical use and have not been implemented. In this paper, we describe the new language JFlow, an extension to the Java language that adds statically-checked information flow annotations. JFlow provides several new features that make information flow checking more flexible and convenient than in previous models: a decentralized label model, label polymorphism, runtime label checking, and automatic label inference. JFlow also supports many language features that have never been integrated successfully with static information flow control, including objects, subclassing, dynamic type tests, access control, and exceptions. This paper defines the JFlow language and presents formal rules tha...
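
A toy illustration of the lattice reasoning behind static label checking. This flat reader-set model is a crude stand-in for JFlow's decentralized label model, which tracks per-owner reader policies rather than one reader set.

```python
class Label:
    """A toy information-flow label: the set of principals who may
    learn the data. Illustrative only; not JFlow's actual model."""
    def __init__(self, readers):
        self.readers = frozenset(readers)

    def join(self, other):
        # Data combined from two sources is readable only by readers
        # both labels allow: joins move up the restrictiveness lattice.
        return Label(self.readers & other.readers)

    def flows_to(self, other):
        # An assignment is legal only if the target is at least as
        # restrictive as the source.
        return other.readers <= self.readers

secret = Label({"alice"})
public = Label({"alice", "bob", "carol"})
assert public.flows_to(secret)        # public data may become secret
assert not secret.flows_to(public)    # but not the other way around
```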
Compressive sampling
, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Abstract

Cited by 1427 (15 self)
 Add to MetaCart
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
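
The decoder at the heart of this theory is ℓ1 minimization (basis pursuit), which reduces to a linear program. Below is a minimal sketch using SciPy and the standard LP reformulation x = u − v with u, v ≥ 0; the dimensions and seed are illustrative, and real solvers use more specialized methods.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y via the standard LP
    split x = u - v, u, v >= 0 (a sketch, not production code)."""
    m, n = A.shape
    c = np.ones(2 * n)                        # minimize sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                 # enforce A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                          # ambient dim, samples, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x_hat = basis_pursuit(A, A @ x)
print("max recovery error:", np.abs(x_hat - x).max())  # near zero, m << n
```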
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
 Add to MetaCart
Suppose we are given a vector f in ℝ^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements 〈f, Xk〉, k = 1, ..., K, where the Xk are N-dimensional Gaussian
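
A small numerical illustration of this setup: a signal with power-law decaying entries, K Gaussian measurements 〈f, Xk〉, and a decoder. Orthogonal matching pursuit is used below as a short greedy stand-in for the ℓ1 program the paper actually analyzes; all sizes are illustrative.

```python
import numpy as np

def omp(X, y, n_steps):
    """Orthogonal matching pursuit: a greedy stand-in for the paper's
    l1-minimization decoder, kept short for illustration."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(n_steps):
        support.append(int(np.argmax(np.abs(X.T @ residual))))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    f_hat = np.zeros(X.shape[1])
    f_hat[support] = coef
    return f_hat

rng = np.random.default_rng(1)
N, K = 512, 120
# A compressible signal: sorted entries obey |f|(n) <= C n^(-1/p), p = 0.5
f = rng.permutation(np.arange(1, N + 1) ** -2.0) * rng.choice([-1, 1], N)
X = rng.standard_normal((K, N)) / np.sqrt(K)  # rows are the vectors X_k
y = X @ f                                     # the K measurements <f, X_k>
f_hat = omp(X, y, n_steps=40)
print("relative l2 error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```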
Scalable Application Layer Multicast
, 2002
"... We describe a new scalable applicationlayer multicast protocol, specifically designed for lowbandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the applicationlayer multicast peers and can support a number of different data deliv ..."
Abstract

Cited by 719 (21 self)
 Add to MetaCart
We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties. We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies, and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic. Finally, we present results from our wide-area testbed in which we experimented with 32 to 100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1% as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100.
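
The hierarchical clustering idea can be sketched as layered leader election. The cluster size k and the "first member leads" rule below are simplifications; the actual protocol bounds cluster sizes around k and picks the graph-theoretic center of each cluster by measured latency.

```python
def build_hierarchy(members, k=3):
    """Layered clustering in the spirit of the protocol: layer 0 holds
    all members in clusters of about k peers; each cluster's leader
    also joins the next layer, until a single top cluster remains."""
    layers = []
    current = list(members)
    while True:
        clusters = [current[i:i + k] for i in range(0, len(current), k)]
        layers.append(clusters)
        if len(clusters) == 1:
            break
        current = [c[0] for c in clusters]   # leaders ascend one layer
    return layers

# 32 members with k = 4: 8 clusters, then 2, then 1 top cluster.
for depth, layer in enumerate(build_hierarchy(list(range(32)), k=4)):
    print(f"layer {depth}: {len(layer)} clusters")
```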
A Bayesian method for the induction of probabilistic networks from data
 Machine Learning
, 1992
"... This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computerassisted hypothesis testing, automated scientific discovery, and automated construction of probabili ..."
Abstract

Cited by 1381 (32 self)
 Add to MetaCart
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
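
Such a method needs a score for how well a candidate parent set explains the data; the closed-form family score below (the marginal likelihood under uniform priors and complete discrete data, computed in log space) is the kind of quantity a greedy structure search maximizes. The data encoding is an assumption for illustration.

```python
import math
from collections import Counter

def family_score(data, child, parents, r):
    """Log marginal likelihood of `child` given `parents` under uniform
    priors: log of prod_j [(r-1)!/(N_j + r - 1)!] prod_k N_jk!, where j
    ranges over parent configurations, k over the child's r values,
    N_jk counts matching cases, and N_j = sum_k N_jk. `data` is a list
    of dicts mapping variable name -> discrete value (an assumption)."""
    counts = Counter()                   # (parent config, child value) -> N_jk
    for case in data:
        j = tuple(case[p] for p in parents)
        counts[(j, case[child])] += 1
    totals = Counter()                   # parent config -> N_j
    for (j, _), n in counts.items():
        totals[j] += n
    score = 0.0
    for n_j in totals.values():
        score += math.lgamma(r) - math.lgamma(n_j + r)   # log (r-1)!/(N_j+r-1)!
    for n_jk in counts.values():
        score += math.lgamma(n_jk + 1)                   # log N_jk!
    return score

data = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
print(family_score(data, "B", ("A",), r=2) > family_score(data, "B", (), r=2))
# True: making A a parent of B explains these cases better
```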