Results 1 – 10 of 93,423
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Cited by 1513 (20 self)
law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball
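The snippet breaks off at the definition. For orientation, the weak-ℓp ball of radius R is standardly defined in this literature (and, as far as I can tell, in this paper) as the set of vectors whose decreasing rearrangement obeys a power-law decay:

\[
|f|_{(k)} \;\le\; R\, k^{-1/p} \qquad \text{for all } k = 1, \dots, N ,
\]

so the k-th largest entry of f in magnitude is bounded by R k^{-1/p}; smaller p means faster decay and hence a more compressible signal.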
Making Large-Scale Support Vector Machine Learning Practical
, 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Cited by 620 (1 self)
learning tasks with many training examples, off-the-shelf optimization techniques for general quadratic programs quickly become intractable in their memory and time requirements. SVMlight is an implementation of an SVM learner which addresses the problem of large tasks. This chapter presents
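For reference, the quadratic program the abstract refers to (bound constraints plus one linear equality constraint) is the standard soft-margin SVM dual; the notation below is the usual one, not a quotation from the chapter:

\[
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i \;-\; \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\qquad \text{s.t.}\qquad 0 \le \alpha_i \le C,\;\; \sum_{i=1}^{n} \alpha_i y_i = 0 .
\]

The n-by-n kernel matrix is what makes off-the-shelf QP solvers run out of memory for large n; the decomposition approach described in the chapter repeatedly optimizes over a small working set of the α_i while keeping the rest fixed.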
LogP: Towards a Realistic Model of Parallel Computation
, 1993
"... A vast body of theoretical research has focused either on overly simplistic models of parallel computation, notably the PRAM, or overly specific models that have few representatives in the real world. Both kinds of models encourage exploitation of formal loopholes, rather than rewarding developme ..."
Cited by 562 (15 self)
parallel algorithms and to offer guidelines to machine designers. Such a model must strike a balance between detail and simplicity in order to reveal important bottlenecks without making analysis of interesting problems intractable. The model is based on four parameters that specify abstractly
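Here L is an upper bound on message latency, o is the per-message send/receive overhead at a processor, g is the minimum gap between consecutive message transmissions or receptions at a processor, and P is the number of processors. As a back-of-envelope illustration of how the parameters are used (my own toy numbers, not an excerpt from the paper):

    # Toy LogP cost estimates; the parameter values are made up for illustration.
    L, o, g, P = 6, 2, 4, 8   # latency, overhead, gap (cycles), processor count

    def point_to_point():
        # Sender overhead + network latency + receiver overhead.
        return o + L + o

    def sequential_broadcast():
        # The root sends P-1 messages, one every max(g, o) cycles; the last
        # message is injected at (P-2)*max(g, o) and then costs o + L + o.
        return (P - 2) * max(g, o) + o + L + o

    print("single message:", point_to_point())
    print("naive broadcast:", sequential_broadcast())

Analyses in this style expose the bottleneck (here the root's gap g); the paper derives better, tree-shaped schedules in which early receivers immediately forward the message.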
Mining the Network Value of Customers
 In Proceedings of the Seventh International Conference on Knowledge Discovery and Data Mining
, 2002
"... One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected pro t from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only ..."
Cited by 562 (11 self)
can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases. Categories and Subject Descriptors H.2.8 [Database Management]: Database Applications – data mining
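The decision rule in the snippet is simple to state; as I read the abstract, the paper's point is that a customer's network value (the expected extra profit from other customers she influences) should enter that rule alongside her intrinsic value. A minimal sketch of the rule with hypothetical names and numbers:

    # Hypothetical illustration of the marketing decision rule described above.
    # intrinsic_value: expected profit from the customer's own purchases.
    # network_value:   expected extra profit from customers she influences,
    #                  the quantity the paper proposes to model from data.
    def should_market(intrinsic_value, network_value, marketing_cost):
        return intrinsic_value + network_value > marketing_cost

    print(should_market(intrinsic_value=2.0, network_value=7.5, marketing_cost=5.0))  # True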
Investor psychology and security market under- and overreactions
 Journal of Finance
, 1998
"... We propose a theory of securities market under and overreactions based on two wellknown psychological biases: investor overconfidence about the precision of private information; and biased selfattribution, which causes asymmetric shifts in investors ’ confidence as a function of their investment ..."
Cited by 661 (38 self)
We propose a theory of securities market under- and overreactions based on two well-known psychological biases: investor overconfidence about the precision of private information; and biased self-attribution, which causes asymmetric shifts in investors’ confidence as a function of their investment outcomes. We show that overconfidence implies negative long-lag autocorrelations, excess volatility, and, when managerial actions are correlated with stock mispricing, public-event-based return predictability. Biased self-attribution adds positive short-lag autocorrelations (“momentum”), short-run earnings “drift,” but negative correlation between future returns and long-term past stock market and accounting performance. The theory also offers several untested implications and implications for corporate financial policy. In recent years a body of evidence on security returns has presented a sharp challenge to the traditional view that securities are rationally priced to reflect all publicly available information. Some of the more pervasive anoma ...
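The empirical signatures the theory points to (positive return autocorrelation at short lags, i.e. momentum, and negative autocorrelation at long lags, i.e. reversal) are straightforward to measure. A minimal sketch on a plain array of periodic returns, purely illustrative and not the paper's tests:

    import numpy as np

    def return_autocorrelation(returns, lag):
        # Sample autocorrelation of a 1-D return series at the given lag.
        r = np.asarray(returns, dtype=float)
        r = r - r.mean()
        return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

    # With real data, momentum shows up as positive values at short lags and
    # long-run reversal as negative values at long lags; random data gives ~0.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.02, size=1000)
    for lag in (1, 12, 36):
        print(lag, round(return_autocorrelation(returns, lag), 3))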
Directional Statistics and Shape Analysis
, 1995
"... There have been various developments in shape analysis in the last decade. We describe here some relationships of shape analysis with directional statistics. For shape, rotations are to be integrated out or to be optimized over whilst they are the basis for directional statistics. However, various c ..."
Cited by 775 (31 self)
There have been various developments in shape analysis in the last decade. We describe here some relationships of shape analysis with directional statistics. For shape, rotations are to be integrated out or to be optimized over, whilst they are the basis for directional statistics. However, various concepts are connected. In particular, certain distributions of directional statistics have emerged in shape analysis; one such distribution is the complex Bingham distribution. This paper first gives some background to shape analysis and then goes on to directional distributions and their applications to shape analysis. Note that the idea of using the tangent space for analysis is common to both areas as well. 1 Introduction. Consider shapes of configurations of points in Euclidean space. There are various contexts in which k labelled points (or "landmarks") x_1, ..., x_k in R^m are given and interest is in the shape of (x_1, ..., x_k). Example 1 The microscopic fossil Globorotalia truncat...
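As a concrete illustration of the nuisance parameters involved, a minimal sketch (my own, not from the paper) that maps a landmark configuration to its pre-shape by removing location and scale; the rotation that remains is exactly what shape analysis integrates out or optimizes over:

    import numpy as np

    def preshape(landmarks):
        # Map a (k, m) array of landmarks to its pre-shape: centre the
        # configuration and scale it to unit norm, removing location and size.
        # Rotation is deliberately left in.
        X = np.asarray(landmarks, dtype=float)
        X = X - X.mean(axis=0)           # remove location
        return X / np.linalg.norm(X)     # remove scale

    # Two configurations differing only by translation and scaling
    # map to the same pre-shape.
    A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
    B = 3.0 * A + np.array([5.0, -2.0])
    print(np.allclose(preshape(A), preshape(B)))   # True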
Ensemble Methods in Machine Learning
 MULTIPLE CLASSIFIER SYSTEMS, LNCS 1857
, 2000
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include errorcorrecting output coding, Bagging, and boostin ..."
Cited by 607 (3 self)
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.
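The combination rule the abstract describes (a weighted vote over the base classifiers' predictions) is easy to write down. A minimal sketch, assuming each base classifier has already produced integer class labels; this is a generic illustration, not any specific algorithm from the paper:

    import numpy as np

    def weighted_vote(predictions, weights, n_classes):
        # predictions: (n_classifiers, n_samples) integer class labels
        # weights:     (n_classifiers,) non-negative classifier weights
        # Returns the weighted-majority class label for each sample.
        preds = np.asarray(predictions)
        w = np.asarray(weights, dtype=float)
        n_samples = preds.shape[1]
        scores = np.zeros((n_samples, n_classes))
        for p, wk in zip(preds, w):
            scores[np.arange(n_samples), p] += wk
        return scores.argmax(axis=1)

    # Three classifiers vote on four samples; the third gets twice the weight.
    preds = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 1],
                      [1, 1, 0, 0]])
    print(weighted_vote(preds, weights=[1.0, 1.0, 2.0], n_classes=2))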
Compressive sampling
, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Cited by 1427 (15 self)
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
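A standard toy demonstration of the claim, not taken from the survey itself, is to recover a sparse vector from far fewer random measurements than its length by ℓ1 minimization (basis pursuit: minimize ||x||_1 subject to Ax = b), here posed as a linear program and solved with scipy:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n, m, k = 200, 60, 5                       # signal length, measurements, sparsity

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
    b = A @ x_true                             # m << n linear measurements

    # Basis pursuit as an LP: write x = u - v with u, v >= 0 and
    # minimize sum(u + v) subject to A @ (u - v) = b.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    x_hat = res.x[:n] - res.x[n:]

    print("recovery error:", np.linalg.norm(x_hat - x_true))

With these dimensions the ℓ1 solution typically recovers x_true essentially exactly, which is the phenomenon the survey explains.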
Transductive Inference for Text Classification using Support Vector Machines
, 1999
"... This paper introduces Transductive Support Vector Machines (TSVMs) for text classification. While regular Support Vector Machines (SVMs) try to induce a general decision function for a learning task, Transductive Support Vector Machines take into account a particular test set and try to minimiz ..."
Cited by 887 (4 self)
This paper introduces Transductive Support Vector Machines (TSVMs) for text classification. While regular Support Vector Machines (SVMs) try to induce a general decision function for a learning task, Transductive Support Vector Machines take into account a particular test set and try to minimize misclassifications of just those particular examples. The paper presents an analysis of why TSVMs are well suited for text classification. These theoretical findings are supported by experiments on three test collections. The experiments show substantial improvements over inductive methods, especially for small training sets, cutting the number of labeled training examples down to a twentieth on some tasks. This work also proposes an algorithm for training TSVMs efficiently, handling 10,000 examples and more.
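For reference, the transductive objective sketched in the abstract jointly chooses the labels of the test documents and the separating hyperplane. In the usual soft-margin notation (a paraphrase, not a verbatim quotation of the paper's equations):

\[
\min_{y_1^*,\dots,y_k^*,\; w,\, b,\, \xi,\, \xi^*}\;
\tfrac{1}{2}\|w\|^2 \;+\; C\sum_{i=1}^{n}\xi_i \;+\; C^*\sum_{j=1}^{k}\xi_j^*
\]
subject to
\[
y_i\,(w\cdot x_i + b) \ge 1 - \xi_i,\qquad
y_j^*\,(w\cdot x_j^* + b) \ge 1 - \xi_j^*,\qquad
\xi_i \ge 0,\;\; \xi_j^* \ge 0,
\]
where the x_i are the n labeled training documents and the x_j^* are the k unlabeled test documents whose labels y_j^* are themselves variables of the optimization. The separate cost C^* for the test examples is what lets the learner trade margin on the test set against fitting the training labels.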