Results 1 – 5 of 5
Kernels and Ensembles: Perspectives on Statistical Learning
Abstract

Cited by 1 (1 self)
Since their emergence in the 1990s, the support vector machine and the AdaBoost algorithm have spawned a wave of research in statistical machine learning. Much of this new research falls into one of two broad categories: kernel methods and ensemble methods. In this expository article, I discuss the main ideas behind these two types of methods, namely how to transform linear algorithms into nonlinear ones by using kernel functions, and how to make predictions with an ensemble or a collection of models rather than a single model. I also share my personal perspectives on how these ideas have influenced and shaped my own research. In particular, I present two recent algorithms that I have invented with my collaborators: LAGO, a fast kernel algorithm for unbalanced classification and rare target detection; and Darwinian evolution in parallel universes, an ensemble method for variable selection.
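The central kernel idea this abstract describes (turning a linear algorithm nonlinear by replacing inner products with a kernel function) can be sketched with kernel ridge regression. This is a minimal illustration, not code from the article; the function names and parameter choices here are the sketch's own:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gram matrix of the RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Ordinary (linear) ridge regression, but done in kernel feature
    # space: solve (K + lam * I) alpha = y for the dual coefficients.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Predictions are kernel-weighted sums over the training points
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# A "linear" algorithm plus a kernel fits a nonlinear function
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X).ravel()
alpha = kernel_ridge_fit(X, y, lam=1e-3, gamma=2.0)
pred = kernel_ridge_predict(X, alpha, X, gamma=2.0)
```

The only change from plain ridge regression is that every inner product x'z has been replaced by the kernel evaluation k(x, z).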
Diverse Committees Vote for Dependable Profits
Abstract

Cited by 1 (1 self)
Stock selection for hedge fund portfolios is a challenging problem for Genetic Programming (GP) because the markets (the environment in which the GP solution must survive) are dynamic, unpredictable and unforgiving. How can GP be improved so that solutions are produced that are robust to nontrivial changes in the environment? We explore an approach that uses a voting committee of GP individuals with differing phenotypic behaviour.
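The committee mechanism this abstract refers to reduces, at prediction time, to majority voting over individuals with differing behaviour. A toy sketch (the member functions below are illustrative stand-ins, not evolved GP programs):

```python
from collections import Counter

def committee_vote(predictors, x):
    # Each committee member casts a vote; the most common vote wins.
    votes = [p(x) for p in predictors]
    return Counter(votes).most_common(1)[0][0]

# Three toy "individuals" with differing phenotypic behaviour:
# they disagree near the decision boundary, so the committee's
# output is more stable than any single member's.
members = [
    lambda x: "buy" if x > 0 else "sell",
    lambda x: "buy" if x > 1 else "sell",
    lambda x: "buy" if x > -1 else "sell",
]
decision = committee_vote(members, 0.5)
```

Because the members err in different places, a change in the environment that breaks one member need not flip the committee's decision.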
[Untitled entry (title garbled in source)]
Abstract
We introduce a new Bayesian approach to the variable selection problem which we term Bayesian Shrinkage Variable Selection (BSVS). This approach is inspired by the Relevance Vector Machine (RVM), which uses a Bayesian hierarchical linear setup to do variable selection and model estimation. RVM is typically applied in the context of kernel regression, although it is also suitable in the standard regression context. Extending the RVM algorithm, we include a proper prior distribution for the precisions of the regression coefficients, v_j^{-1} ∼ f(v_j^{-1} | η), where η is a scalar hyperparameter. Based upon this model, we derive the full set of conditional distributions for the parameters, as would typically be done when applying Gibbs sampling. However, instead of simulating samples from the joint posterior distribution in order to estimate the posterior means of the parameters, we use the full conditionals in order to find the joint maximum of the posterior distribution p(β, σ^2, V | y, η) given the value of the hyperparameter η. While the models with η = 0 result in an “RVM-like” solution, those with η > 0 enforce further shrinkage, leading to more parsimonious models with smaller MSE and prediction errors than traditional RVM models. η is estimated by maximizing the marginal likelihood, i.e. empirical Bayes. From the conventional viewpoint, the proposed method eliminates the need for combinatorial search techniques over a discrete model space, converting the model selection problem into the maximization of the marginal likelihood over a one-dimensional continuous space. Through a series of examples, we demonstrate the statistical accuracy of BSVS model selection in standard regression problems and provide comparisons with well-known model selection criteria.
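The core computational move in this abstract (cycle through the full conditionals and take each one's mode, rather than sample from it as Gibbs would) can be sketched as iterated conditional modes on a simplified hierarchical ridge model. The priors, hyperparameters, and data below are illustrative assumptions, not the paper's actual BSVS specification:

```python
import numpy as np

def icm_shrinkage(X, y, sigma2=1.0, a=1.0, b=1e-3, iters=50):
    """Alternate the conditional modes of beta and the per-coefficient
    variances v_j (instead of Gibbs-sampling them) to climb toward a
    joint posterior maximum. InvGamma(a, b) prior on each v_j is an
    illustrative choice, not the paper's prior."""
    n, p = X.shape
    beta = np.zeros(p)
    v = np.ones(p)
    for _ in range(iters):
        # Conditional mode of beta | v: a generalized ridge solve
        A = X.T @ X / sigma2 + np.diag(1.0 / v)
        beta = np.linalg.solve(A, X.T @ y / sigma2)
        # Conditional mode of v_j | beta_j under InvGamma(a, b):
        # v_j | beta_j ~ InvGamma(a + 1/2, b + beta_j^2 / 2),
        # whose mode is (b + beta_j^2 / 2) / (a + 3/2).
        v = (b + 0.5 * beta**2) / (a + 1.5)
    return beta

# Synthetic data: only the first two of ten variables are relevant
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true = np.zeros(10)
true[:2] = [3.0, -2.0]
y = X @ true + 0.1 * rng.standard_normal(100)
beta = icm_shrinkage(X, y, sigma2=0.01)
```

Small coefficients drive their v_j toward zero, which in turn shrinks those coefficients further on the next pass, producing the parsimonious solutions the abstract describes.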
Dr Mu Zhu visits Australia: The inaugural visit to Australia by an ...
Abstract
 Add to MetaCart
time during his 7-week visit to Australia to answer a few questions about statistics, being a statistician and what his research means to him. What inspired you to choose a career in this area? I still remember this very vividly: One day when I was still a freshman in college, my roommate and I were looking through that thick catalog trying to declare a major. I flipped through the pages and saw “statistics.” I immediately said to my roommate, “Look! There is even a major called statistics. That must be the most boring subject in the whole ...