Results 1–10 of 52
On Kernel-Target Alignment
 Advances in Neural Information Processing Systems 14
, 2002
Abstract

Cited by 237 (8 self)
Kernel-based methods are increasingly being used for data modeling because of their conceptual simplicity and outstanding performance on many tasks. However, the kernel function is often chosen using trial-and-error heuristics. In this paper we address the problem of measuring the degree of agreement between a kernel and a learning task. A quantitative measure of agreement is important from both a theoretical and practical point of view. We propose a quantity to capture this notion, which we call Alignment. We study its theoretical properties, and derive a series of simple algorithms for adapting a kernel to the labels and vice versa. This produces a series of novel methods for clustering and transduction, kernel combination and kernel selection. The algorithms are tested on two publicly available datasets and are shown to exhibit good performance.
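The alignment quantity in this abstract is the normalized Frobenius inner product between two Gram matrices; for a labeled task the second matrix is the label kernel yyᵀ. A minimal sketch of that computation (the function names and the toy dataset below are illustrative, not from the paper):

```python
import numpy as np

def alignment(K1, K2):
    """Empirical alignment <K1, K2>_F / (||K1||_F * ||K2||_F)."""
    num = np.sum(K1 * K2)                       # Frobenius inner product
    den = np.linalg.norm(K1) * np.linalg.norm(K2)
    return num / den

def kernel_target_alignment(K, y):
    """Alignment between a kernel matrix K and the label kernel y y^T
    for labels y in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    return alignment(K, np.outer(y, y))

# Toy example: a linear kernel on 1-D points whose sign matches the labels.
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1, 1, -1, -1])
K = X @ X.T
print(kernel_target_alignment(K, y))  # high: the kernel agrees with the labels
```

A kernel perfectly adapted to the labels (K proportional to yyᵀ) attains alignment 1, which is what the paper's kernel-adaptation algorithms push toward.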
Randomized Distributed Edge Coloring via an Extension of the Chernoff-Hoeffding Bounds
 SIAM J. Comput
, 1997
Abstract

Cited by 56 (9 self)
Certain types of routing, scheduling, and resource-allocation problems in a distributed setting can be modeled as edge-coloring problems. We present fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation. Our algorithms compute an edge coloring of a graph G with n nodes and maximum degree Δ with at most 1.6Δ + O(log^{1+δ} n) colors with high probability (arbitrarily close to 1) for any fixed δ > 0; they run in polylogarithmic time. The upper bound on the number of colors improves upon the (2Δ − 1)-coloring achievable by a simple reduction to vertex coloring. To analyze the performance of our algorithms, we introduce new techniques for proving upper bounds on the tail probabilities of certain random variables. The Chernoff-Hoeffding bounds are fundamental tools that are used very frequently in estimating tail probabilities. However, they assume stochastic independence among certain random variables, which may n...
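The (2Δ − 1) baseline the abstract improves upon comes from sequential greedy coloring: each edge conflicts with at most 2(Δ − 1) already-colored neighbors, so the smallest free color is always below 2Δ − 1. A sketch of that baseline (not the paper's distributed algorithm):

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color unused at either endpoint.
    An edge sees at most 2*(Delta - 1) colored neighbors, so at most
    2*Delta - 1 colors are ever needed."""
    used = {}        # vertex -> set of colors already incident to it
    coloring = {}
    for u, v in edges:
        forbidden = used.setdefault(u, set()) | used.setdefault(v, set())
        c = 0
        while c in forbidden:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# Star plus one extra edge; vertex 0 has degree 3, so Delta = 3.
colors = greedy_edge_coloring([(0, 1), (0, 2), (0, 3), (1, 2)])
print(colors)
```

The paper's contribution is doing substantially better (1.6Δ + lower-order terms) in the distributed setting, where the greedy order is unavailable.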
A sharp threshold in proof complexity
 Proceedings of STOC 2001
, 2001
Abstract

Cited by 51 (13 self)
We give the first example of a sharp threshold in proof complexity. More precisely, we show that for any sufficiently small � and � � �, random formulas consisting of 2-clauses and 3-clauses, which are known to be unsatisfiable almost certainly, almost certainly require resolution and Davis-Putnam proofs of unsatisfiability of exponential size, whereas it is easily seen that random formulas with 2-clauses (and 3-clauses) have linear-size proofs of unsatisfiability almost certainly. A consequence of our result also yields the first proof that typical random 3-CNF formulas at ratios below the generally accepted range of the satisfiability threshold (and thus expected to be satisfiable almost certainly) cause natural Davis-Putnam algorithms to take exponential time to find satisfying assignments.
Generating Random Regular Graphs Quickly
, 1999
Abstract

Cited by 49 (2 self)
In this paper we examine an algorithm which, although it does not generate uniformly at random, is provably close to a uniform generator when the degrees are relatively small. Moreover, it is easy to implement and quite fast in practice. The most interesting case is the regular one, when all degrees are equal to d = d(n), say. Moreover, methods for the regular case of this problem usually extend to arbitrary degree sequences, although the analysis can become more complicated and it may be necessary to impose restrictions on the variation in the degrees (such as is analyzed by Jerrum et al. [4]). The first algorithm for generating d-regular graphs uniformly at random was implicit in the paper of Bollobás [2] and also in the approaches to counting regular graphs by Bender and Canfield [1] and in [13] (see also [14] for explicit algorithms).
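The Bollobás approach the abstract refers to is the pairing (configuration) model: pair up d "half-edge" points per vertex uniformly at random, and retry until the result is a simple graph. A minimal rejection-sampling sketch of that classical model, assuming small d (this is the baseline, not the faster algorithm the paper analyzes):

```python
import random

def random_regular_graph(n, d, seed=None):
    """Pairing model: shuffle the n*d half-edge points, pair consecutive
    ones, and reject any outcome with a loop or repeated edge.  The
    acceptance probability decays roughly like exp(-d^2/4), so this is
    practical only for small d."""
    assert n * d % 2 == 0, "n*d must be even"
    rng = random.Random(seed)
    while True:
        points = [v for v in range(n) for _ in range(d)]
        rng.shuffle(points)
        edges = set()
        ok = True
        for i in range(0, len(points), 2):
            u, v = points[i], points[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                ok = False          # loop or multi-edge: reject and retry
                break
            edges.add((min(u, v), max(u, v)))
        if ok:
            return edges

print(sorted(random_regular_graph(8, 3, seed=0)))
```

Conditioned on acceptance, the output is uniform over simple d-regular graphs, which is why the rejection step matters.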
Almost-Everywhere Algorithmic Stability and Generalization Error
 In UAI2002: Uncertainty in Artificial Intelligence
, 2002
Abstract

Cited by 43 (8 self)
We introduce a new notion of algorithmic stability, which we call training stability.
Risk Bounds for Statistical Learning
Abstract

Cited by 40 (1 self)
We propose a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM). We essentially focus on the binary classification framework. We extend Tsybakov's analysis of the risk of an ERM under margin-type conditions by using concentration inequalities for conveniently weighted empirical processes. This allows us to deal with other ways of measuring the "size" of a class of classifiers than entropy with bracketing as in Tsybakov's work. In particular we derive new risk bounds for the ERM when the classification rules belong to some VC-class under margin conditions and discuss the optimality of those bounds in a minimax sense.
Moment Inequalities for Functions of Independent Random Variables
Abstract

Cited by 39 (9 self)
The aim of this paper is to provide such general-purpose inequalities. Our approach is based on a generalization of Ledoux's entropy method (see [26, 28]). Ledoux's method relies on abstract functional inequalities known as logarithmic Sobolev inequalities and provides a powerful tool for deriving exponential inequalities for functions of independent random variables; see Boucheron, Massart, and Lugosi [6, 7], Bousquet [8], Devroye [14], Massart [30, 31], Rio [36] for various applications. To derive moment inequalities for general functions of independent random variables, we elaborate on the pioneering work of Latała and Oleszkiewicz [25] and describe so-called Φ-Sobolev inequalities which interpolate between Poincaré's inequality and logarithmic Sobolev inequalities (see also Beckner [4] and Bobkov's arguments in [26]).
Algorithmic Stability and Generalization Performance
, 2001
Abstract

Cited by 37 (2 self)
We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much with small changes in the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather depend on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.
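The stability notion here can be observed numerically: a regularized learner trained on S and on S with one point removed should produce nearly identical solutions, with the gap shrinking in the sample size. A small sketch using closed-form ridge regression as a stand-in for a regularization network (the data and constants below are illustrative, not from the paper):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: argmin (1/n)||Xw - y||^2 + lam ||w||^2,
    i.e. w = (X^T X + lam * n * I)^{-1} X^T y."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

w_full = ridge_fit(X, y, lam=1.0)
# Drop a single training point: a stable learner barely moves.
w_loo = ridge_fit(X[1:], y[1:], lam=1.0)
print(np.linalg.norm(w_full - w_loo))  # small change, roughly O(1/(lam * n))
```

Stronger regularization (larger `lam`) makes the learner more stable but fits the data less closely, which is exactly the trade-off the stability-based bounds quantify.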
On the Spanning Ratio of Gabriel Graphs and β-Skeletons
, 2001
Abstract

Cited by 36 (0 self)
The spanning ratio of a graph defined on n points in the Euclidean plane is the maximal ratio, over all pairs of data points (u, v), of the minimum graph distance between u and v over the Euclidean distance between u and v. A connected graph is said to be a k-spanner if the spanning ratio does not exceed k. For example, for any k, there exists a point set whose minimum spanning tree is not a k-spanner. At the other end of the spectrum, a Delaunay triangulation is guaranteed to be a 2.42-spanner [11]. For proximity graphs in between these two extremes, such as Gabriel graphs [8], relative neighborhood graphs [16] and β-skeletons [12] with β ∈ [0, 2], some interesting questions arise. We show that the spanning ratio for Gabriel graphs (which are β-skeletons with β = 1) is Θ(√n) in the worst case. For all β-skeletons with β ∈ [0, 1], we prove that the spanning ratio is at most O(n^γ) where γ = (1 − log₂(1 + β))/2.
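Both objects in this abstract are easy to compute directly for small point sets: a Gabriel edge is one whose diametral disk contains no third point, and the spanning ratio is the worst-case detour of shortest paths relative to straight-line distance. A brute-force sketch (O(n³) all-pairs shortest paths; function names are my own):

```python
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gabriel_edges(pts):
    """(i, j) is a Gabriel edge iff no third point lies in the closed disk
    having segment ij as diameter (small tolerance for boundary cases)."""
    edges = []
    for i, j in itertools.combinations(range(len(pts)), 2):
        cx = (pts[i][0] + pts[j][0]) / 2
        cy = (pts[i][1] + pts[j][1]) / 2
        r = dist(pts[i], pts[j]) / 2
        if all(dist(pts[k], (cx, cy)) > r + 1e-9
               for k in range(len(pts)) if k not in (i, j)):
            edges.append((i, j))
    return edges

def spanning_ratio(pts, edges):
    """Max over point pairs of (graph distance / Euclidean distance),
    using Floyd-Warshall with Euclidean edge weights."""
    n = len(pts)
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j in edges:
        d[i][j] = d[j][i] = dist(pts[i], pts[j])
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return max(d[i][j] / dist(pts[i], pts[j])
               for i, j in itertools.combinations(range(n), 2))

# Unit square: the Gabriel graph keeps the four sides but not the diagonals
# (each diagonal's diametral disk contains the other two corners).
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(spanning_ratio(pts, gabriel_edges(pts)))  # sqrt(2): detour around a corner
```

On the square the worst pair is a diagonal: graph distance 2 versus Euclidean distance √2, giving ratio √2, a tiny instance of the detours that grow to Θ(√n) in the worst case.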