Results 1–10 of 434,484
On the Irregularity Strength of Trees
J. Graph Theory, 2004
"... For any graph G, let ni be the number of vertices of degree i, and}. This is a general lower bound on the λ(G) = maxi≤j { ni+···+nj+i−1 j irregularity strength of graph G. All known facts suggest that for connected graphs, this is the actual irregularity strength up to an additive constant. In fact ..."

Cited by 8 (0 self)
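A quick illustration of the quoted bound (my own sketch, not from the paper; the function name is hypothetical), computing λ(G) from a degree sequence under the reconstruction above:

    from collections import Counter

    def irregularity_lower_bound(degrees):
        """lambda(G) = max over i <= j of ceil((n_i + ... + n_j + i - 1) / j),
        where n_i counts the vertices of degree i."""
        n = Counter(degrees)
        dmax = max(degrees)
        best = 0
        for i in range(1, dmax + 1):
            running = 0
            for j in range(i, dmax + 1):
                running += n[j]                                   # n_i + ... + n_j
                best = max(best, (running + i - 1 + j - 1) // j)  # ceiling division
        return best

    # Path on 4 vertices: degrees (1, 2, 2, 1); the bound gives 2,
    # matching its true irregularity strength.
    print(irregularity_lower_bound([1, 2, 2, 1]))                 # -> 2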
AN ITERATIVE APPROACH TO THE IRREGULARITY STRENGTH OF TREES
"... Abstract. An assignment of positive integer weights to the edges of a simple graph G is called irregular if the weighted degrees of the vertices are all different. The irregularity strength, s(G), is the maximal edge weight, minimized over all irregular assignments, and is set to infinity if no such ..."
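The definition just quoted can be checked by brute force on toy graphs. A sketch (exponential search, for illustration only; this is not the paper's iterative method, and the names are mine):

    from itertools import product

    def irregularity_strength(edges, n_vertices, max_k=10):
        """Smallest k admitting an irregular assignment w: E -> {1, ..., k};
        returns None ("infinity") if none exists with weights up to max_k."""
        for k in range(1, max_k + 1):
            for weights in product(range(1, k + 1), repeat=len(edges)):
                wdeg = [0] * n_vertices
                for (u, v), w in zip(edges, weights):
                    wdeg[u] += w
                    wdeg[v] += w
                if len(set(wdeg)) == n_vertices:   # all weighted degrees distinct
                    return k
        return None

    print(irregularity_strength([(0, 1), (1, 2), (2, 3)], 4))  # path P4 -> 2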
Total edge irregularity strength of trees
 Discussiones Math. Graph Theory
"... A total edgeirregular klabelling ξ: V (G) ∪ E(G) → {1, 2,..., k} of a graph G is a labelling of vertices and edges of G in such a way that for any different edges e and f their weights wt(e) and wt(f) are distinct. The weight wt(e) of an edge e = xy is the sum of the labels of vertices x and y a ..."

Cited by 4 (0 self)
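Evaluating the paper's closed form on a given tree is immediate; a minimal sketch (function name mine):

    from collections import Counter

    def tes_tree(edges):
        """tes(T) = max(ceil((p + 1) / 3), ceil((Delta + 1) / 2)) for a tree T
        on p vertices with maximum degree Delta, per the result quoted above."""
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        p, Delta = len(deg), max(deg.values())
        return max((p + 3) // 3, (Delta + 2) // 2)   # ceilings via integer division

    # Star K_{1,4}: p = 5, Delta = 4 -> max(ceil(6/3), ceil(5/2)) = 3
    print(tes_tree([(0, 1), (0, 2), (0, 3), (0, 4)]))  # -> 3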
The strength of weak learnability
Machine Learning, 1990
"... Abstract. This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distributionfree (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a Source of examples of the unknown concept, the learner with h ..."

Cited by 861 (24 self)
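The paper's own construction is a recursive majority-vote scheme; the weak-to-strong idea is easiest to see in the later, simpler AdaBoost variant, sketched here (the weak_learn callback is a stand-in for any weak learner; this is not the paper's algorithm):

    import numpy as np

    def boost(X, y, weak_learn, rounds=50):
        """Convert a weak learner into a strong one by reweighting examples.
        y must be in {-1, +1}; weak_learn(X, y, w) returns a hypothesis
        h(X) -> {-1, +1} with weighted error below 1/2."""
        w = np.full(len(y), 1.0 / len(y))      # distribution over examples
        hyps, alphas = [], []
        for _ in range(rounds):
            h = weak_learn(X, y, w)
            pred = h(X)
            err = float(np.sum(w * (pred != y)))
            if err >= 0.5:                     # weak-learning assumption violated
                break
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
            w *= np.exp(-alpha * y * pred)     # upweight the mistakes
            w /= w.sum()
            hyps.append(h)
            alphas.append(alpha)
        # weighted majority vote of all weak hypotheses
        return lambda X: np.sign(sum(a * h(X) for a, h in zip(alphas, hyps)))

Plugging in even a single-threshold decision stump as weak_learn drives the training error down exponentially in the number of rounds, which is the equivalence of the two learnability notions that the abstract asserts.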
A fast and high quality multilevel scheme for partitioning irregular graphs
SIAM Journal on Scientific Computing, 1998
"... Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. ..."

Cited by 1173 (16 self)
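A toy skeleton of the multilevel idea the snippet describes: greedy edge matching to coarsen, a naive split of the smallest graph, projection back up. All names are mine; real schemes such as METIS add refinement passes and balance constraints at each level:

    def naive_split(nodes):
        """Placeholder partitioner for the coarsest graph."""
        nodes = sorted(nodes)
        half = set(nodes[: len(nodes) // 2])
        return {u: (0 if u in half else 1) for u in nodes}

    def multilevel_bisect(adj, small=4):
        """adj: {vertex: set of neighbours}. Coarsen by collapsing matched
        vertex pairs, bisect the small graph, then uncoarsen."""
        if len(adj) <= small:
            return naive_split(adj)
        rep, used = {}, set()                   # rep maps vertex -> merged node
        for u in adj:                           # greedy matching of adjacent vertices
            if u in used:
                continue
            v = next((v for v in adj[u] if v not in used and v != u), None)
            used.add(u)
            rep[u] = u
            if v is not None:
                used.add(v)
                rep[v] = u
        if len(set(rep.values())) == len(adj):  # no edge collapsed: split directly
            return naive_split(adj)
        coarse = {}
        for u in adj:                           # build the collapsed graph
            coarse.setdefault(rep[u], set())
            coarse[rep[u]].update(rep[v] for v in adj[u] if rep[v] != rep[u])
        part = multilevel_bisect(coarse, small)
        return {u: part[rep[u]] for u in adj}   # vertices inherit their rep's side

    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(multilevel_bisect(adj, small=2))      # e.g. {0: 0, ..., 5: 1}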
Scalable Recognition with a Vocabulary Tree
In CVPR, 2006
"... A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CDcovers from a database of 40000 images of popular music CD's. The scheme ..."

Cited by 1043 (0 self)
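A toy of the underlying data structure (hierarchical k-means over descriptors, nearest-child descent to quantize); the branch factor and depth here are far smaller than the paper's, the helper names are mine, and scikit-learn plus NumPy are assumed:

    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocab_tree(desc, branch=3, depth=2):
        """Recursively k-means-cluster descriptors; each level refines its
        parent's cell. Leaves act as the visual words."""
        node = {"center": desc.mean(axis=0), "children": []}
        if depth == 0 or len(desc) < branch:
            return node
        km = KMeans(n_clusters=branch, n_init=10).fit(desc)
        for c in range(branch):
            node["children"].append(build_vocab_tree(desc[km.labels_ == c],
                                                     branch, depth - 1))
        return node

    def quantize(tree, d):
        """Descend by nearest child center; the leaf reached is d's visual word."""
        while tree["children"]:
            tree = min(tree["children"],
                       key=lambda ch: np.linalg.norm(ch["center"] - d))
        return tree["center"]

    tree = build_vocab_tree(np.random.rand(300, 8))
    print(quantize(tree, np.random.rand(8)))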
Convergent Tree-reweighted Message Passing for Energy Minimization
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2006
"... Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper we focus on the recent technique proposed by Wainwright et al. [33] treereweighted maxproduct message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy ..."

Cited by 491 (16 self)
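On a tree, min-sum (max-product in the negative log domain) messages solve the energy minimization exactly; TRW's contribution is combining such tree computations into a lower bound for loopy graphs. A chain-only sketch of the exact tree case (not the paper's TRW-S schedule; names mine):

    import numpy as np

    def chain_min_sum(unary, pairwise):
        """Exact MAP on a chain MRF. unary[i, a]: cost of label a at node i;
        pairwise[a, b]: cost of labels (a, b) on an edge. Returns labels, energy."""
        n, L = unary.shape
        cost = unary[0].copy()
        back = np.zeros((n, L), dtype=int)
        for i in range(1, n):
            cand = cost[:, None] + pairwise     # cand[a, b]: best cost via label a
            back[i] = np.argmin(cand, axis=0)
            cost = unary[i] + cand.min(axis=0)  # incoming min-sum message
        labels = [int(np.argmin(cost))]
        for i in range(n - 1, 0, -1):           # backtrack the argmins
            labels.append(int(back[i, labels[-1]]))
        return labels[::-1], float(cost.min())

    rng = np.random.default_rng(0)
    print(chain_min_sum(rng.random((5, 3)), rng.random((3, 3))))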
Random forests
Machine Learning, 2001
"... Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the fo ..."

Cited by 3433 (2 self)
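The two randomization mechanisms the abstract names map directly onto standard implementations: a bootstrap sample per tree and a random feature subset per split. A scikit-learn usage sketch (dataset chosen arbitrarily for illustration):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    # Each of the 100 trees sees a bootstrap sample; max_features="sqrt" is the
    # per-node random feature selection the abstract credits for the low error.
    forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                    random_state=0)
    print(cross_val_score(forest, X, y, cv=5).mean())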