Results 1–10 of 24
Generalized linear models with functional predictors
Journal of the Royal Statistical Society, Series B, 2002
Cited by 46 (6 self)

Abstract:
In this paper we present a technique for extending generalized linear models (GLMs) to the situation where some of the predictor variables are observations from a curve or function. The technique is particularly useful when only fragments of each curve have been observed. We demonstrate, on both simulated and real-world data sets, how this approach can be used to perform linear, logistic and censored regression with functional predictors. In addition, we show how functional principal components can be used to gain insight into the relationship between the response and functional predictors. Finally, we extend the methodology to apply GLMs and principal components to standard missing data problems.
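As a toy illustration of the reduction this abstract describes (whole curve → low-dimensional scores → GLM), here is a minimal sketch. The mean-of-curve score, the toy curves, and the identity link below are illustrative assumptions, not the paper's functional principal components construction:

```python
# Minimal sketch: summarize each observed curve by a scalar score, then
# use that score as a predictor in an identity-link GLM (ordinary least
# squares). All data and the one-basis "score" are made up.

def curve_score(curve):
    """Summarize a curve by its mean level (a crude one-basis projection)."""
    return sum(curve) / len(curve)

def fit_simple_glm(scores, y):
    """Closed-form least-squares fit of y = a + b * score (identity link)."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in scores)
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(scores, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Three toy "curves" observed on a grid; the response depends linearly
# on each curve's mean level.
curves = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
scores = [curve_score(c) for c in curves]   # [2.0, 3.0, 5.0]
y = [1 + 2 * s for s in scores]             # exact linear response
a, b = fit_simple_glm(scores, y)
print(a, b)  # recovers intercept 1.0 and slope 2.0
```

In the paper's setting the scalar score would be replaced by several functional principal component scores, but the downstream GLM step is the same idea.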
LASSO-Patternsearch Algorithm with Application to Ophthalmology and Genomic Data
2008
Cited by 29 (22 self)

Abstract:
The LASSO-Patternsearch algorithm is proposed to efficiently identify patterns of multiple dichotomous risk factors for outcomes of interest in demographic and genomic studies. The patterns considered are those that arise naturally from the log-linear expansion of the multivariate Bernoulli density. The method is designed for the case where there is a possibly very large number of candidate patterns but it is believed that only a relatively small number are important. A LASSO is used to greatly reduce the number of candidate patterns, using a novel computational algorithm that can handle an extremely large number of unknowns simultaneously. The patterns surviving the LASSO are further pruned in the framework of (parametric) generalized linear models. A novel tuning procedure based on the GACV for Bernoulli outcomes, modified to act …
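The LASSO step can be illustrated with a small coordinate-descent sketch on binary "pattern" features (main effects plus an interaction column). The squared-error loss here is a stand-in for the Bernoulli likelihood the abstract uses, and the data and penalty value are made up:

```python
# Illustrative sketch only: LASSO via coordinate descent with
# soft-thresholding, applied to candidate binary patterns.

def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for min 0.5*sum((y - Xb)^2) + lam*sum|b_j|."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(b[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            if norm > 0:
                b[j] = soft_threshold(zj, lam) / norm
    return b

# Two dichotomous risk factors; candidate patterns are the two main
# effects and their product x1*x2.
raw = [(1, 0), (0, 1), (1, 1), (0, 0), (1, 0), (0, 1)]
X = [[x1, x2, x1 * x2] for x1, x2 in raw]
y = [2.0 * row[0] for row in X]   # only the x1 pattern drives the outcome
b = lasso_cd(X, y, lam=0.6)
print(b)  # keeps the x1 pattern (coefficient 1.8), zeroes the other two
```

This mirrors the screening role LASSO plays in the algorithm: surviving patterns would then be refit and pruned in a parametric GLM.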
Variable Selection and Model Building via Likelihood Basis Pursuit
Journal of the American Statistical Association, 2002
Cited by 22 (10 self)

Abstract:
This paper presents a nonparametric penalized likelihood approach for variable selection and model building, called likelihood basis pursuit (LBP). In the setting of a tensor product reproducing kernel Hilbert space, we decompose the log likelihood into the sum of different functional components such as main effects and interactions, with each component represented by appropriate basis functions. The basis functions are chosen to be compatible with variable selection and model building in the context of a smoothing spline ANOVA model. Basis pursuit is applied to obtain the optimal decomposition in terms of having the smallest l1 norm on the coefficients. We use the functional L1 norm to measure the importance of each component and determine the "threshold" value by a sequential Monte Carlo bootstrap test algorithm. As a generalized LASSO-type method, LBP produces shrinkage estimates for the coefficients, which greatly facilitates the variable selection process, and provides highly interpretable multivariate functional estimates at the same time. To choose the regularization parameters appearing in the LBP models, generalized approximate cross validation (GACV) is derived as a tuning criterion. To make GACV widely applicable to large data sets, its randomized version is proposed as well. A technique called "slice modeling" is used to solve the optimization problem and makes the computation more efficient. LBP has great potential for a wide range of research and application areas such as medical studies, and in this paper we apply it to two large ongoing epidemiological studies: the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) and the Beaver Dam Eye Study (BDES).
Optimal Properties and Adaptive Tuning of Standard and Nonstandard Support Vector Machines
In Proceedings of the MSRI Berkeley Workshop on …, 2002
Cited by 14 (7 self)

Abstract:
We review some of the basic ideas of Support Vector Machines (SVMs) for classification, with the goal of describing how these ideas can sit comfortably inside the statistical literature on decision theory and penalized likelihood regression. We review recent work on adaptive tuning of SVMs, discussing generalizations to the nonstandard case where the training set is not representative and misclassification costs are not equal. Mention is made of recent results in the multicategory case.
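A cost-weighted hinge loss is one concrete way to sketch the "nonstandard" setting with unequal misclassification costs; the 1-D toy data, cost values, and plain subgradient descent below are illustrative assumptions, not the adaptive tuning machinery the abstract reviews:

```python
def train_weighted_svm(xs, ys, costs, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on lam/2*w^2 + sum_i c_i*max(0, 1 - y_i*(w*x_i + b)):
    a 1-D linear SVM whose hinge terms carry per-example misclassification
    costs c_i (unequal costs give the 'nonstandard' SVM)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw, gb = lam * w, 0.0
        for x, y, c in zip(xs, ys, costs):
            if y * (w * x + b) < 1:   # inside the margin: hinge is active
                gw -= c * y * x
                gb -= c * y
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [2.0, 3.0, -2.0, -3.0]
ys = [1, 1, -1, -1]
costs = [2.0, 2.0, 1.0, 1.0]      # misclassifying a positive is costlier
w, b = train_weighted_svm(xs, ys, costs)
print(w, b)  # sign(w*x + b) classifies all four training points correctly
```

With equal costs the intercept stays at zero for this symmetric data; the unequal costs pull the decision boundary toward the cheaper class.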
Penalized likelihood regression: General formulation and efficient approximation
2002
Cited by 10 (6 self)

Abstract:
The authors consider a general formulation of penalized likelihood regression, which covers canonical and noncanonical links for exponential families as well as accelerated life models with censored survival data. They present an asymptotic analysis of convergence rates to justify a simple approach to the lower-dimensional approximation of the estimates. The lower-dimensional approximation allows for much faster numerical calculation, paving the way to the development of algorithms that scale well with large data sets.
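The lower-dimensional idea, restricting the fit to a small subset of kernel basis functions instead of one per observation, can be sketched as follows. The Gaussian kernel, the two hand-picked centers, and the plain ridge penalty are illustrative assumptions, not the paper's RKHS construction:

```python
import math

def kernel(x, z):
    return math.exp(-(x - z) ** 2)   # Gaussian kernel, an assumed choice

def fit_subset_basis(xs, ys, centers, lam=0.01):
    """Ridge-penalized least squares using kernel basis functions at a
    SUBSET of the data (the lower-dimensional approximation). With two
    centers the normal equations are 2x2, solved by Cramer's rule."""
    A = [[kernel(x, c) for c in centers] for x in xs]
    a11 = sum(r[0] * r[0] for r in A) + lam
    a22 = sum(r[1] * r[1] for r in A) + lam
    a12 = sum(r[0] * r[1] for r in A)
    b1 = sum(r[0] * y for r, y in zip(A, ys))
    b2 = sum(r[1] * y for r, y in zip(A, ys))
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - a12 * b2) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return lambda x: c1 * kernel(x, centers[0]) + c2 * kernel(x, centers[1])

# Five observations, but only two basis functions in the fit.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = list(xs)                       # target: f(x) = x
f = fit_subset_basis(xs, ys, centers=[0.0, 2.0])
print([round(f(x), 2) for x in xs])
```

The point of the approximation is exactly this: a basis of size q much smaller than n makes the linear algebra O(nq^2) instead of O(n^3).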
Matching Pursuit
1993
Cited by 9 (0 self)

Abstract:
This paper presents a nonparametric penalized likelihood approach for variable selection and model building, called likelihood basis pursuit (LBP). In the setting of a tensor product reproducing kernel Hilbert space, we decompose the log likelihood into the sum of different functional components such as main effects and interactions, with each component represented by appropriate basis functions. Basis functions are chosen to be compatible with variable selection and model building in the context of a smoothing spline ANOVA model. Basis pursuit is applied to obtain the optimal decomposition in terms of having the smallest l1 norm on the coefficients. We use the functional L1 norm to measure the importance of each component and determine the “threshold” value by a sequential Monte Carlo bootstrap test algorithm. As a generalized LASSO-type method, LBP produces shrinkage estimates for the coefficients, which greatly facilitates the variable selection process, and provides highly interpretable multivariate functional estimates at the same time. To choose the regularization parameters appearing in the LBP models, generalized approximate cross validation (GACV) is derived as a tuning criterion. To make GACV widely applicable to large data sets, its randomized version is proposed as well. A technique called “slice modeling” is …
Variable selection via basis pursuit for non-Gaussian data
In Proceedings of the American Statistical Association, Biometrics Section, 2001
Cited by 4 (4 self)

Abstract:
A simultaneous flexible variable selection procedure is proposed by applying a basis pursuit method to the likelihood function. The basis functions are chosen to be compatible with variable selection in the context of smoothing spline ANOVA models. Since it is a generalized LASSO-type method, it enjoys the favorable property of shrinking coefficients and gives interpretable results. We derive a generalized approximate cross validation (GACV) function, an approximate leave-one-out cross validation function used to choose smoothing parameters. In order to apply the GACV function in a large data set situation, we propose a corresponding randomized GACV. A technique called ‘slice modeling’ is used to develop an efficient code. Our simulation study shows the effectiveness of the proposed approach in the Bernoulli case.
Key words: basis pursuit, spline ANOVA, LASSO, GACV, randomized GACV, slice modeling
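Randomized GACV-style criteria rest on a standard trick: estimating the trace of a large influence-type matrix with random probe vectors rather than forming it explicitly. A generic Hutchinson-style sketch of that trick (the matrix and probe count below are made up):

```python
import random

def hutchinson_trace(A, n_probes=2000, seed=0):
    """Randomized trace estimate: tr(A) is approximated by the average of
    eps^T A eps over random +/-1 probe vectors eps. Each probe costs one
    matrix-vector product, which is what makes this cheap at scale."""
    rng = random.Random(seed)
    d = len(A)
    total = 0.0
    for _ in range(n_probes):
        eps = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        Aeps = [sum(A[i][j] * eps[j] for j in range(d)) for i in range(d)]
        total += sum(e * ae for e, ae in zip(eps, Aeps))
    return total / n_probes

A = [[2.0, 0.3, 0.0],
     [0.3, 1.0, 0.5],
     [0.0, 0.5, 3.0]]
est = hutchinson_trace(A)
print(est)  # close to the exact trace, 6.0
```

The estimator is unbiased, so accuracy improves as the number of probes grows; in the randomized GACV only one or a few probes are typically used.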
Learning Higher-Order Graph Structure with Features by Structure Penalty
Cited by 4 (2 self)

Abstract:
In discrete undirected graphical models, the conditional independence of node labels Y is specified by the graph structure. We study the case where there is another input random vector X (e.g. observed features) such that the distribution P(Y | X) is determined by functions of X that characterize the (higher-order) interactions among the Y’s. The main contribution of this paper is to learn the graph structure and the functions conditioned on X at the same time. We prove that discrete undirected graphical models with feature X are equivalent to multivariate discrete models. The reparameterization of the potential functions in graphical models by conditional log odds ratios of the latter offers advantages in the representation of the conditional independence structure. The functional spaces can be flexibly determined by kernels. Additionally, we impose a Structure Lasso (SLasso) penalty on groups of functions to learn the graph structure. These groups with overlaps are designed to enforce hierarchical function selection. In this way, we are able to shrink higher-order interactions to obtain a sparse graph structure.
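The group-level shrinkage behind an SLasso-style penalty can be illustrated by its proximal operator: block soft-thresholding, which zeroes an entire group of coefficients when the group's Euclidean norm falls below the penalty. The vectors and penalty value below are illustrative, not from the paper:

```python
import math

def group_soft_threshold(v, lam):
    """Block soft-thresholding, the proximal step for a group-lasso-style
    penalty lam * ||v||_2: shrink the whole group vector v toward zero,
    dropping it entirely when its norm is below lam."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)       # whole group eliminated from the graph
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

weak = group_soft_threshold([0.3, 0.4], lam=1.0)    # norm 0.5 < 1: zeroed
strong = group_soft_threshold([3.0, 4.0], lam=1.0)  # norm 5: scaled by 0.8
print(weak, strong)
```

Because each group here would correspond to the functions attached to one (possibly higher-order) interaction, zeroing a group removes that edge or clique from the learned structure, which is how the penalty yields a sparse graph.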
060095. LASSO-Patternsearch Algorithm
2008
Cited by 1 (0 self)

Abstract:
The LASSO-Patternsearch Algorithm and its variant, the Grouped LASSO-Patternsearch Algorithm, are proposed to efficiently identify patterns of multiple dichotomous risk factors for outcomes of interest in demographic and genomic studies. The patterns considered are those that arise naturally from the log-linear expansion of the multivariate Bernoulli density. Both methods are designed for the case where there is a possibly very large number of candidate patterns but it is believed that only a relatively small number are important. In the LASSO-Patternsearch Algorithm, a LASSO is used to greatly reduce the number of candidate patterns, using a novel computational algorithm that can handle an extremely large number of unknowns simultaneously. The patterns surviving the LASSO are further pruned in the framework of (parametric) generalized linear models. A novel tuning procedure based on the GACV for Bernoulli outcomes, modified to act as a model selector, is used at both steps. We applied the method to myopia data from the population-based Beaver Dam Eye Study, exposing physiologically interesting interacting risk factors. We then …
Reproducing Kernel Hilbert Spaces: Two Brief Reviews
2003
Cited by 1 (0 self)

Abstract:
This TR contains two brief reviews, which will appear in the Proceedings of the 13th IFAC Symposium on System Identification (SYSID 2003), Rotterdam, August 2003. They are …