Results 1–10 of 772
Optimal kernel choice for large-scale two-sample tests
 Advances in Neural Information Processing Systems, 2012
"... Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between ..."
Cited by 15 (2 self)
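As a concrete illustration of the statistic this entry describes, here is a minimal sketch of the (biased) quadratic-time squared-MMD estimator with a Gaussian kernel. The function names and the fixed bandwidth are illustrative choices, not taken from the paper (whose point is precisely how to choose the kernel):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x ~ p and y ~ q:
    # mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution as x
z = rng.normal(3.0, 1.0, size=(200, 2))  # shifted mean
print(mmd2_biased(x, y))  # near zero
print(mmd2_biased(x, z))  # clearly larger
```

In a test, the statistic is compared to a null threshold (e.g. from a permutation distribution); large values lead to rejecting p = q.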
Benchmarking Least Squares Support Vector Machine Classifiers
 Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 476 (46 self)
stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LSSVM classifiers with linear, polynomial and radial basis function
B-tests: Low Variance Kernel Two-Sample Tests
 Neural Information Processing Systems, Lake Tahoe, United States, 2013
"... A family of maximum mean discrepancy (MMD) kernel two-sample tests is introduced. Members of the test family are called Block-tests or B-tests, since the test statistic is an average over MMDs computed on subsets of the samples. The choice of block size allows control over the tradeoff between test ..."
Cited by 2 (0 self)
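The block structure described in this entry can be sketched in a few lines: compute a squared-MMD estimate on each disjoint block and average. This is a self-contained illustration under assumed choices (Gaussian kernel, fixed bandwidth, biased per-block estimate); it is not the paper's exact estimator:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Squared-MMD estimate on a single block.
    return rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean() - 2 * rbf(x, y, sigma).mean()

def btest_statistic(x, y, block_size, sigma=1.0):
    # B-test statistic: average of per-block MMD^2 estimates over disjoint blocks.
    # Larger blocks lower variance per block; more blocks give a better-behaved average.
    n = min(len(x), len(y))
    blocks = [mmd2(x[i:i + block_size], y[i:i + block_size], sigma)
              for i in range(0, n - block_size + 1, block_size)]
    return float(np.mean(blocks))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(300, 2))
y = rng.normal(0.0, 1.0, size=(300, 2))  # same distribution
z = rng.normal(2.0, 1.0, size=(300, 2))  # shifted mean
print(btest_statistic(x, y, block_size=50))  # near zero
print(btest_statistic(x, z, block_size=50))  # clearly larger
```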
Correlation and Large-Scale Simultaneous Significance Testing
 Journal of the American Statistical Association
"... Large-scale hypothesis testing problems, with hundreds or thousands of test statistics z_i to consider at once, have become familiar in current practice. Applications of popular analysis methods such as false discovery rate techniques do not require independence of the z_i's, but their accuracy can ..."
Cited by 97 (8 self)
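For background on the false discovery rate techniques this entry refers to, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure. This is textbook material, not code from the paper, whose subject is how correlation among the z_i affects the accuracy of such methods:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up procedure.
    # Returns a boolean mask of rejected null hypotheses, controlling
    # the false discovery rate at level alpha under independence.
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= alpha * k / m.
    below = ranked <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True  # reject all hypotheses up to rank k
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2, 0.9]))
# rejects the two smallest p-values at alpha = 0.05
```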
Large-scale support vector learning with structural kernels
 In ECML/PKDD, 2010
"... In this paper, we present an extensive study of the cutting-plane algorithm (CPA) applied to structural kernels for advanced text classification on large datasets. In particular, we carry out a comprehensive experimentation on two interesting natural language tasks, e.g. predicate argumen ..."
Cited by 11 (4 self)
Kernels based tests with non-asymptotic bootstrap approaches for two-sample problems
, 2012
"... Considering either two independent i.i.d. samples, or two independent samples generated from a heteroscedastic regression model, or two independent Poisson processes, we address the question of testing equality of their respective distributions. We first propose single testing procedures based on a ..."
Cited by 4 (1 self)
FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test
 Neural Computation, 2015
"... The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its applicability to large-scale problems. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The cor ..."
Cited by 2 (0 self)
Recent Advances of Large-scale Linear Classification
"... Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has been shown to be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much ..."
Cited by 32 (6 self)
faster. Recently, many research works have developed efficient optimization methods to construct linear classifiers and applied them to some largescale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.
Coil sensitivity encoding for fast MRI
 In Proceedings of the ISMRM 6th Annual Meeting, 1998
"... New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementa ..."
Cited by 193 (3 self)
and sensitivity encoding. That is, no restrictions are made as to the coil configuration and the sampling pattern in k-space. Two reconstruction strategies are discussed. The first approach strictly aims at optimal voxel shape and is called strong reconstruction for convenience. In weak reconstruction, the voxel
Supervised hashing with kernels
 In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012
"... Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or oft ..."
Cited by 84 (24 self)