Results 1–10 of 307
Least absolute deviations estimation for the censored regression model
 Journal of Econometrics
, 1984
Abstract

Cited by 229 (5 self)
This paper proposes an alternative to maximum likelihood estimation of the parameters of the censored regression (or censored ‘Tobit’) model. The proposed estimator is a generalization of least absolute deviations estimation for the standard linear model, and, unlike estimation methods based on the assumption of normally distributed error terms, the estimator is consistent and asymptotically normal for a wide class of error distributions, and is also robust to heteroscedasticity. The paper gives the regularity conditions and proofs of these large-sample results, and proposes classes of consistent estimators of the asymptotic covariance matrix for both homoscedastic and heteroscedastic disturbances.
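The censored LAD criterion described in this abstract can be sketched numerically. The example below is an illustration, not the paper's implementation: it assumes left-censoring at zero, simulates heavy-tailed (Laplace) errors to show why a distribution-free estimator is attractive, and minimizes Powell's objective with a generic derivative-free optimizer (all variable names and tuning choices are the sketch's own).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))
beta_true = np.array([1.0, -0.5])
# Laplace (non-normal) errors: LAD remains consistent where normal-MLE Tobit may not.
y_star = x @ beta_true + rng.laplace(scale=0.5, size=n)
y = np.maximum(y_star, 0.0)          # observed data, left-censored at zero

def censored_lad(beta):
    # Powell's criterion: sum of |y_i - max(0, x_i' beta)|
    return np.abs(y - np.maximum(x @ beta, 0.0)).sum()

# The objective is piecewise linear, so a derivative-free method is a natural choice.
res = minimize(censored_lad, x0=np.zeros(2), method="Nelder-Mead")
```

On this simulated sample, `res.x` recovers the signs and rough magnitudes of `beta_true` despite the censoring and the non-normal errors.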
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
 SIAM Review, Vol. 45, No. 3, pp. 385–482
, 2003
Abstract

Cited by 198 (14 self)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
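The unifying framework this survey describes covers methods that sample the objective along a fixed pattern of directions and shrink the step on failure. A minimal member of that class, a coordinate-direction (compass) search, can be sketched as follows; the function and stopping parameters are illustrative:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    # Minimal direct search: poll the 2n coordinate directions, no derivatives.
    # Halving the step on an unsuccessful poll is the mechanism behind many
    # of the convergence results the survey discusses.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                 # simple decrease suffices
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                 # contract the pattern
            if step < tol:
                break
    return x, fx

# Example: minimize a smooth function without ever forming a gradient.
x_min, f_min = compass_search(lambda z: (z[0] - 1)**2 + 10*(z[1] + 2)**2,
                              np.zeros(2))
```

The same polling skeleton extends to bound and linear constraints by restricting the poll directions, which is the generalization the survey develops.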
Algorithms and applications for approximate nonnegative matrix factorization
 Computational Statistics and Data Analysis
, 2006
Abstract

Cited by 169 (7 self)
In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is provided, along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying datasets. Key words: nonnegative matrix factorization, text mining, spectral data analysis, email surveillance, conjugate gradient, constrained least squares.
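The basic NMF computation behind algorithms of this kind can be sketched with the classical Lee–Seung multiplicative updates (a standard baseline, not the hybrid constrained methods this paper develops); the matrix sizes and iteration count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy nonnegative "term-document" matrix, approximated as V ≈ W H with rank k.
V = rng.random((20, 30))
k = 4
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1

eps = 1e-9                     # guards against division by zero
err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    # Multiplicative updates for the Frobenius objective ||V - W H||_F^2;
    # they preserve nonnegativity because every factor in the update is >= 0.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)
```

In a text-mining setting the columns of `W` play the role of nonnegative "topic" features, which is what makes the factors directly interpretable.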
Comparison and evaluation of retrospective intermodality brain image registration techniques
 Journal of Computer Assisted Tomography
, 1997
All in the Family: Nesting Symmetric and Asymmetric GARCH Models
 Journal of Financial Economics
, 1995
Abstract

Cited by 110 (0 self)
This paper develops a parametric family of models of generalized autoregressive conditional heteroskedasticity (GARCH). The family nests the most popular symmetric and asymmetric GARCH models, thereby highlighting the relation between the models and their treatment of asymmetry. Furthermore, the structure permits nested tests of different types of asymmetry and functional forms. Daily U.S. stock return data reject all standard GARCH models in favor of a model in which, roughly speaking, the conditional standard deviation depends on the shifted absolute value of the shocks raised to the power three halves and past standard deviations.
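The volatility recursion this abstract describes can be sketched as a filter in which the conditional standard deviation, raised to a power, is driven by a shifted and tilted absolute shock. The parameterization below (`omega`, `alpha`, `beta`, shift `b`, tilt `c`, power `lam`) is an illustration of that family's shape, not the paper's exact specification:

```python
import numpy as np

def hentschel_filter(r, omega, alpha, beta, lam=1.5, b=0.0, c=0.5, sigma0=1.0):
    # Sketch of an asymmetric-GARCH-family recursion: sigma_t^lam is a linear
    # function of sigma_{t-1}^lam and of a news-impact term built from the
    # shifted (b) and tilted (c) absolute standardized shock.
    sigma = np.empty(len(r) + 1)
    sigma[0] = sigma0
    for t in range(len(r)):
        eps = r[t] / sigma[t]                        # standardized shock
        news = (abs(eps - b) - c * (eps - b)) ** lam # asymmetric news impact
        sigma[t + 1] = (omega
                        + alpha * sigma[t]**lam * news
                        + beta * sigma[t]**lam) ** (1.0 / lam)
    return sigma

# With c > 0, a negative return raises volatility more than an equal positive one.
sigma = hentschel_filter(np.array([0.01, -0.03, 0.02]),
                         omega=0.05, alpha=0.1, beta=0.85)
```

Setting `lam=2, b=0, c=0` collapses the recursion to the symmetric case, which is the nesting property the paper exploits for its tests.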
Global optimization by multilevel coordinate search
 J. Global Optimization
, 1999
Abstract

Cited by 94 (12 self)
Inspired by a method by Jones et al. (1993), we present a global optimization algorithm based on multilevel coordinate search. It is guaranteed to converge if the function is continuous in the neighborhood of a global minimizer. By starting a local search from certain good points, an improved convergence result is obtained. We discuss implementation details and give some numerical results.
Direct search methods: then and now
, 2000
Abstract

Cited by 82 (3 self)
We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivative-free algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized by the absence of the construction of a model of the objective. We then consider a number of the classical direct search methods and discuss what research in the intervening years has uncovered about these algorithms. In particular, while the original direct search methods were consciously based on straightforward heuristics, more recent analysis has shown that in most — but not all — cases these heuristics actually
Recognition confidence scoring and its use in speech understanding systems
 Computer Speech and Language
, 2002
Abstract

Cited by 78 (11 self)
In this paper we present an approach to recognition confidence scoring and a method for integrating confidence scores into the understanding and dialogue components of a speech understanding system. The system uses a multi-tiered approach where confidence scores are computed at the phonetic, word, and utterance levels. The scores are produced by extracting confidence features from the computation of the recognition hypotheses and processing these features using an accept/reject classifier for word and utterance hypotheses. The output of the confidence classifiers can then be incorporated into the parsing mechanism of the language understanding component. To evaluate the system, experiments were conducted using the JUPITER weather information system. Evaluation was performed at the understanding level using key-value pair concept error rate as the evaluation metric. When confidence scores were integrated into the understanding component of the system, the concept error rate was reduced by over 35%.
On The Convergence Of The Multidirectional Search Algorithm
, 1991
Abstract

Cited by 76 (7 self)
This paper presents the convergence analysis for the multidirectional search algorithm, a direct search method for unconstrained minimization. The analysis follows the classic lines of proofs of convergence for gradient-related methods. The novelty of the argument lies in the fact that explicit calculation of the gradient is unnecessary, although it is assumed that the function is continuously differentiable over some subset of the domain. The proof can be extended to treat most nonsmooth cases of interest; the argument breaks down only at points where the derivative exists but is not continuous. Finally, it is shown how a general convergence theory can be developed for an entire class of direct search methods (which includes such methods as the factorial design algorithm and the pattern search algorithm) that share a key feature of the multidirectional search algorithm. Key words. unconstrained optimization, convergence analysis, direct search methods, parallel optimization, mult...
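The algorithm analyzed in this paper can be sketched in a few lines. The version below is a simplified illustration of the multidirectional search idea (reflect the entire simplex through its best vertex, expand on success, contract toward the best vertex on failure); the iteration limits and test function are the sketch's own:

```python
import numpy as np

def multidirectional_search(f, simplex, max_iter=200, tol=1e-8):
    # Reflect the WHOLE simplex through its current best vertex; no gradients.
    # Expansion doubles the reflected step; contraction halves the simplex,
    # which is what drives the convergence argument.
    S = np.asarray(simplex, dtype=float)            # (n+1, n) vertex array
    for _ in range(max_iter):
        fv = np.array([f(v) for v in S])
        best = S[np.argmin(fv)]
        R = 2.0 * best - S                          # reflected simplex
        fR = np.array([f(v) for v in R])
        if fR.min() < fv.min():                     # successful reflection
            E = 3.0 * best - 2.0 * S                # expansion step
            fE = np.array([f(v) for v in E])
            S = E if fE.min() < fR.min() else R
        else:
            S = 0.5 * (S + best)                    # contract toward best
        if np.max(np.abs(S - S[0])) < tol:          # simplex has collapsed
            break
    fv = np.array([f(v) for v in S])
    return S[np.argmin(fv)]

# Toy run on a smooth convex function from a small starting simplex.
x_min = multidirectional_search(lambda z: z[0]**2 + z[1]**2,
                                [[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]])
```

Because every trial simplex is an affine image of the original, the search directions stay uniformly linearly independent, which is the key feature the convergence proof exploits.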
Direct Search Algorithms for Optimization Calculations
, 1998
Abstract

Cited by 76 (2 self)
Many different procedures have been proposed for optimization calculations when first derivatives are not available. Further, several researchers have contributed to the subject, including some who wish to prove convergence theorems, and some who wish to make any reduction in the least calculated value of the objective function. There is not even a key idea that can be used as a foundation of a review, except for the problem itself, which is the adjustment of variables so that a function becomes least, where each value of the function is returned by a subroutine for each trial vector of variables. Therefore the paper is a collection of essays on particular strategies and algorithms, in order to consider the advantages, limitations and theory of several techniques. The subjects that are addressed are line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or...