Results 1 - 10 of 372,425
Neighbor-weighted K-nearest neighbor for unbalanced text corpus
"... Text categorization or classification is the automated assigning of text documents to predefined classes based on their contents. Many classification algorithms assume that the training examples are evenly distributed among different classes. However, unbalanced data sets often appear in ..."
in many practical applications. In order to deal with uneven text sets, we propose the neighbor-weighted K-nearest neighbor algorithm, i.e. NWKNN. The experimental results indicate that our algorithm NWKNN achieves significant classification performance improvement on imbalanced corpora.
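The abstract above describes weighting neighbors to counter class imbalance. A minimal sketch of that idea follows; the weight formula (a min-count ratio raised to an exponent) is my illustrative choice, not necessarily the exact NWKNN weighting from the paper.

```python
from collections import Counter

def nwknn_vote(neighbor_labels, class_counts, exponent=0.5):
    """Weighted k-NN vote: rarer classes get larger per-vote weights.

    neighbor_labels: labels of the k retrieved nearest neighbors
    class_counts:    number of training examples per class
    """
    smallest = min(class_counts.values())
    votes = Counter()
    for label in neighbor_labels:
        # illustrative weight: w_c = (min_count / count_c) ** exponent
        votes[label] += (smallest / class_counts[label]) ** exponent
    return votes.most_common(1)[0][0]

# Toy imbalanced corpus: class "a" has 900 training docs, "b" only 100.
counts = {"a": 900, "b": 100}
print(nwknn_vote(["a", "a", "b", "b"], counts))  # -> "b": raw votes tie, but "b" is up-weighted
```

With plain majority voting this example would be a tie; down-weighting the majority class breaks it in favor of the rare class, which is the behavior the abstract claims helps on imbalanced corpora.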
A Nearest Neighbor Weighted Measure In Classification Problems
 In VIII Simposium Nacional, 1999
"... A weighted dissimilarity measure in vectorial spaces is proposed to optimize the performance of the nearest neighbor classifier. An approach to find the required weights based on gradient descent is presented. Experiments with both synthetic and real data show the effectiveness of the proposed t ..."
Cited by 4 (2 self)
Locally weighted learning
 ARTIFICIAL INTELLIGENCE REVIEW, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 594 (53 self)
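The locally weighted linear regression this survey focuses on can be sketched as a weighted least-squares fit centered on each query point. The Gaussian weighting function and the bandwidth `h` below are illustrative choices among the options the survey discusses, not its recommendations.

```python
import numpy as np

def lwr_predict(x_train, y_train, x_query, h=0.3):
    """Fit a line by weighted least squares around x_query and evaluate it there."""
    # Gaussian weighting function: nearby training points dominate the fit
    w = np.exp(-((x_train - x_query) ** 2) / (2 * h ** 2))
    X = np.column_stack([np.ones_like(x_train), x_train])  # intercept + slope
    W = np.diag(w)
    # weighted normal equations: (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0] + beta[1] * x_query

x = np.linspace(0, 10, 50)
y = np.sin(x)
print(lwr_predict(x, y, 5.0))  # close to sin(5.0)
```

Because the model is refit per query from stored examples, this is "lazy" in the survey's sense: nothing is learned until a prediction is requested.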
Fast approximate nearest neighbors with automatic algorithm configuration
 In VISAPP International Conference on Computer Vision Theory and Applications, 2009
"... nearest-neighbors search, randomized kd-trees, hierarchical k-means tree, clustering. For many computer vision problems, the most time-consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems ..."
Cited by 448 (2 self)
Neighbors as Negatives: Relative Earnings and Well-Being
 Quarterly Journal of Economics, 2005
"... This paper investigates whether individuals feel worse off when others around them earn more. In other words, do people care about relative position, and does “lagging behind the Joneses” diminish well-being? To answer this question, I match individual-level data containing various indicators of we ..."
Cited by 401 (6 self)
of well-being to information about local average earnings. I find that, controlling for an individual’s own income, higher earnings of neighbors are associated with lower levels of self-reported happiness. The data’s panel nature and rich set of measures of well-being and behavior indicate
A distributed algorithm for minimum-weight spanning trees, 1983
"... A distributed algorithm is presented that constructs the minimum-weight spanning tree in a connected undirected graph with distinct edge weights. A processor exists at each node of the graph, knowing initially only the weights of the adjacent edges. The processors obey the same algorithm and exchange ..."
Cited by 443 (3 self)
and exchange messages with neighbors until the tree is constructed. The total number of messages required for a graph of N nodes and E edges is at most 5N log₂N + 2E, and a message contains at most one edge weight plus log₂8N bits. The algorithm can be initiated spontaneously at any node or at any subset
An Efficient Boosting Algorithm for Combining Preferences, 1999
"... The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting ..."
Cited by 707 (18 self)
Boost to nearest-neighbor and regression algorithms.
Vivaldi: A Decentralized Network Coordinate System
 In SIGCOMM, 2004
"... Large-scale Internet applications can benefit from an ability to predict round-trip times to other hosts without having to contact them first. Explicit measurements are often unattractive because the cost of measurement can outweigh the benefits of exploiting proximity information. Vivaldi is a simp ..."
Cited by 593 (5 self)
simple, lightweight algorithm that assigns synthetic coordinates to hosts such that the distance between the coordinates of two hosts accurately predicts the communication latency between the hosts.
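The synthetic-coordinate idea in this entry can be sketched as a spring relaxation: each measured round-trip time pulls a node's coordinate until inter-coordinate distance matches latency. The fixed step size and plain 2-D Euclidean space below are simplifying assumptions; Vivaldi itself adapts the step with per-node confidence estimates.

```python
import math
import random

def vivaldi_step(xi, xj, rtt, step=0.05):
    """Nudge node i's coordinate so dist(xi, xj) moves toward the measured rtt."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.hypot(*dx) or 1e-9       # avoid division by zero at identical coords
    error = dist - rtt                   # > 0: coordinates overestimate the latency
    unit = [d / dist for d in dx]
    return [a - step * error * u for a, u in zip(xi, unit)]

# Toy network whose true latencies come from hidden 2-D positions.
random.seed(0)
truth = {n: (random.uniform(0, 100), random.uniform(0, 100)) for n in range(5)}
coords = {n: [random.uniform(0, 100), random.uniform(0, 100)] for n in range(5)}
for _ in range(2000):
    i, j = random.sample(range(5), 2)
    coords[i] = vivaldi_step(coords[i], coords[j], math.dist(truth[i], truth[j]))
print(math.dist(coords[0], coords[1]), math.dist(truth[0], truth[1]))  # predicted vs. measured
```

Each update only needs one measurement and the peer's coordinate, which is what makes the scheme decentralized and lightweight in the sense the abstract describes.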
Ensemble Methods in Machine Learning
 MULTIPLE CLASSIFIER SYSTEMS, LBCS-1857, 2000
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting ..."
Cited by 607 (3 self)
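The (weighted) vote this abstract describes can be sketched directly. The weights below are assumed per-classifier scores (e.g. validation accuracies) chosen for illustration, not weights produced by Bagging, boosting, or Bayesian averaging.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine one prediction per classifier by summing each label's weights."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Three classifiers, two of which agree; the summed weights decide the outcome.
print(weighted_vote(["cat", "dog", "cat"], [0.6, 0.9, 0.7]))  # -> "cat" (1.3 vs 0.9)
```

Setting all weights equal recovers a plain majority vote, so the ensemble methods surveyed differ mainly in how the member classifiers and their weights are constructed.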
Experiments with a New Boosting Algorithm, 1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2176 (21 self)
learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
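The AdaBoost loop this entry refers to can be sketched over decision stumps. The exhaustive stump search and the {-1, +1} label encoding below are common illustrative choices, not the paper's exact experimental setup.

```python
import numpy as np

def adaboost(X, y, rounds=10):
    """AdaBoost sketch: reweight examples each round, keep the best stump."""
    n = len(y)
    w = np.full(n, 1.0 / n)             # example weights, uniform at the start
    ensemble = []                       # (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):     # exhaustive decision-stump search
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # clamp to keep alpha finite
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for a, f, t, p in ensemble)
    return np.sign(score)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost(X, y, rounds=5)
print(predict(model, X))  # recovers the labels [-1, -1, 1, 1]
```

The reweighting step is the "consistently a little better than random" mechanism the abstract mentions: each round the weak learner must beat chance on the reweighted examples, and the alphas turn those weak stumps into a strong weighted vote.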