Results 1 - 10 of 90,167

On Model Selection Consistency of Lasso

by Peng Zhao, Bin Yu , 2006
"... Sparsity or parsimony of statistical models is crucial for their proper interpretations, as in sciences and social sciences. Model selection is a commonly used method to find such models, but usually involves a computationally heavy combinatorial search. Lasso (Tibshirani, 1996) is now being used ..."
Abstract - Cited by 477 (20 self)

Model selection and estimation in regression with grouped variables

by Ming Yuan, Yi Lin , 2006
"... ..."
Abstract - Cited by 1161 (9 self) - Add to MetaCart
Abstract not found

Verb Semantics And Lexical Selection

by Zhibiao Wu , 1994
"... ... structure. As Levin has addressed (Levin 1985), the decomposition of verbs is proposed for the purposes of accounting for systematic semantic-syntactic correspondences. This results in a series of problems for MT systems: inflexible verb sense definitions; difficulty in handling metaphor and new ..."
Abstract - Cited by 551 (4 self) - Add to MetaCart
and new usages; imprecise lexical selection and insufficient system coverage. It seems one approach is to apply probability methods and statistical models for some of these problems. However, the question reminds: has PSR exhausted the potential of the knowledge-based approach? If not, are there any

Regression Shrinkage and Selection Via the Lasso

by Robert Tibshirani - JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B , 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Abstract - Cited by 4212 (49 self) - Add to MetaCart
that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also
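The constrained estimator described in this abstract is equivalent to the more familiar penalized form of the lasso. A minimal sketch in Python using scikit-learn's Lasso; the simulated data, penalty level, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:3] = [3.0, -2.0, 1.5]           # only three predictors truly matter
y = X @ true_coef + 0.5 * rng.standard_normal(n)

# Penalized form: minimize ||y - Xb||^2 / (2n) + alpha * ||b||_1,
# which is equivalent to constraining the sum of absolute coefficient values.
model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_))
```

Raising alpha (i.e. tightening the constraint) drives more coefficients to exactly zero, which is the source of the interpretability the abstract refers to.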

A tutorial on hidden Markov models and selected applications in speech recognition

by Lawrence R. Rabiner - PROCEEDINGS OF THE IEEE , 1989
"... Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First the models are very rich in mathematical s ..."
Abstract - Cited by 5892 (1 self)
... of statistical modeling and show how they have been applied to selected problems in machine recognition of speech.

A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection

by Ron Kohavi - INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE , 1995
"... We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), te ..."
Abstract - Cited by 1283 (11 self)
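As a rough illustration of the two estimators compared in this abstract, the sketch below contrasts 10-fold cross-validation with a simple out-of-bag bootstrap estimate of accuracy; the dataset, classifier, and number of bootstrap replicates are arbitrary choices and not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# 10-fold cross-validation estimate of accuracy.
cv_acc = cross_val_score(clf, X, y, cv=10).mean()

# Simple bootstrap: train on a resample drawn with replacement,
# test on the out-of-bag examples that were not drawn.
boot_scores = []
for i in range(100):
    idx = resample(np.arange(len(y)), random_state=i)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    clf.fit(X[idx], y[idx])
    boot_scores.append(accuracy_score(y[oob], clf.predict(X[oob])))

print(f"10-fold CV accuracy:        {cv_acc:.3f}")
print(f"bootstrap (out-of-bag) acc: {np.mean(boot_scores):.3f}")
```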

A General Theory of Equilibrium Selection in Games.

by John C. Harsanyi, Reinhard Selten , 1988
"... Abstract This paper presents a Downsian model of political competition in which parties have incomplete but richer information than voters on policy effects. Each party can observe a private signal of the policy effects, while voters cannot. In this setting, voters infer the policy effects from the ..."
Abstract - Cited by 734 (4 self)

High dimensional graphs and variable selection with the Lasso

by Nicolai Meinshausen, Peter Bühlmann - ANNALS OF STATISTICS , 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Abstract - Cited by 736 (22 self) - Add to MetaCart
is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We
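A minimal sketch of the neighborhood-selection idea described above: lasso-regress each variable on all the others and treat nonzero coefficients as graph edges. The function name, the fixed penalty level, and the simple "AND" rule for combining neighborhoods are illustrative assumptions; the paper derives specific penalty choices and consistency conditions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1):
    """Estimate a sparse graph by lasso-regressing each variable on the rest."""
    n, p = X.shape
    adjacency = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)          # every variable except j
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        adjacency[j, others] = fit.coef_ != 0        # selected neighbours of node j
    # "AND" rule: keep an edge only if both endpoints select each other.
    return adjacency & adjacency.T
```

Applied to standardized samples from a sparse Gaussian graphical model, the returned adjacency matrix estimates the zero pattern of the inverse covariance matrix, one node at a time, without ever inverting a covariance matrix.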

Regularization and variable selection via the Elastic Net.

by Hui Zou , Trevor Hastie - J. R. Stat. Soc. Ser. B , 2005
"... Abstract We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, wher ..."
Abstract - Cited by 973 (11 self) - Add to MetaCart
, where strongly correlated predictors tend to be in (out) the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p n case
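To make the contrast with the lasso concrete, the sketch below fits both estimators to data with two strongly correlated predictors and far more predictors than observations; all parameter values and the simulated data are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
n, p = 50, 200                                       # p >> n
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)    # two almost identical predictors
y = X[:, 0] + X[:, 1] + rng.standard_normal(n)

# l1_ratio mixes the lasso (L1) and ridge (L2) penalties.
enet = ElasticNet(alpha=0.5, l1_ratio=0.5, max_iter=10000).fit(X, y)
lasso = Lasso(alpha=0.5, max_iter=10000).fit(X, y)

# The elastic net tends to keep the correlated pair in the model together;
# the lasso often selects only one of the two.
print("elastic net:", enet.coef_[:2])
print("lasso:      ", lasso.coef_[:2])
```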

The SWISS-MODEL Workspace: A web-based environment for protein structure homology modelling

by Konstantin Arnold, Lorenza Bordoli, Torsten Schwede, et al. - BIOINFORMATICS , 2005
"... Motivation: Homology models of proteins are of great interest for planning and analyzing biological experiments when no experimental three-dimensional structures are available. Building homology models requires specialized programs and up-to-date sequence and structural databases. Integrating all re ..."
Abstract - Cited by 575 (5 self)
... databases necessary for modelling are accessible from the workspace and are updated in regular intervals. Tools for template selection, model building, and structure quality evaluation can be invoked from within the workspace. Workflow and usage of the workspace are illustrated by modelling human Cyclin A1 ...