Mortality functions for north Queensland rainforests
 JOURNAL OF TROPICAL FOREST SCIENCE
, 1991
Abstract

Cited by 8 (7 self)
Subjective a priori grouping of tropical rain forest species for growth prediction may be unreliable because 1) there may be hundreds of species, many comparatively uncommon, the ecology of which may not be well known, 2) species within the same genus may have significantly different growth patterns, and 3) growth rate may not provide a reliable indication of mortality. Growth models can retain the species identity of each simulated tree, but some aggregation is necessary to enable estimation of increment and mortality functions. An objective approach aggregated 100 rain forest tree species into ten groups to enable efficient estimation of mortality functions. This strategy provided better predictions than a previous subjective grouping. Annual survival probabilities were predicted from tree size, stand density and site quality using a logistic equation fitted by maximum likelihood estimation. Additional species with insufficient data for analysis were subjectively assigned to these ten equations. Several strategies were investigated; the best approach for these species seemed to be to employ the equation which served the greatest number of species. The increment pattern did not provide a good basis for assigning such species to equations, and this suggests that different groupings may be necessary to model the various components of tree growth.
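A minimal sketch of the kind of logistic annual-survival equation the abstract describes, assuming tree diameter, stand basal area and site quality as predictors; the coefficients below are invented for illustration, not the fitted values from the paper.

```python
import math

def annual_survival_prob(dbh, stand_basal_area, site_quality,
                         b0=3.0, b1=0.02, b2=-0.03, b3=0.1):
    """Logistic annual survival probability from tree size, stand
    density and site quality. Coefficients are illustrative only."""
    eta = b0 + b1 * dbh + b2 * stand_basal_area + b3 * site_quality
    return 1.0 / (1.0 + math.exp(-eta))

p = annual_survival_prob(dbh=30, stand_basal_area=40, site_quality=5)
print(round(p, 4))
```

In a real fit the coefficients would be estimated by maximum likelihood from survival records, and the logistic form guarantees the prediction stays in (0, 1).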
A predictive view of Bayesian clustering
 J. Statist. Planning and Inference
, 2006
Abstract

Cited by 8 (0 self)
This work considers probability models for partitions of a set of n elements using a predictive approach, i.e., models that are specified in terms of the conditional probability of either joining an already existing cluster or forming a new one. The inherent structure can be motivated by resorting to hierarchical models of either parametric or nonparametric nature. Parametric examples include the product partition models (PPMs) and the model-based approach of Dasgupta and Raftery (1998), while nonparametric alternatives include the Dirichlet Process and, more generally, the Species Sampling Models (SSMs). Under exchangeability, PPMs and SSMs induce the same type of partition structure. The methods are discussed in the context of outlier detection in normal linear regression models and of (univariate) density estimation.
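The predictive "join an existing cluster or form a new one" rule is easiest to see in the Dirichlet Process special case, the Chinese restaurant process; a small simulation sketch (the alpha value and seed are arbitrary choices, not from the paper):

```python
import random

def crp_sample(n, alpha, seed=0):
    """Draw one partition of n elements from the Chinese restaurant
    process: element i joins an existing cluster of size s with
    probability s/(i + alpha), or opens a new cluster with
    probability alpha/(i + alpha)."""
    rng = random.Random(seed)
    clusters = []      # current cluster sizes
    assignments = []   # cluster label for each element
    for i in range(n):
        # weights: one per existing cluster, plus alpha for a new one
        weights = clusters + [alpha]
        r = rng.random() * (i + alpha)   # sizes sum to i, so total is i+alpha
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(clusters):
            clusters.append(1)           # form a new cluster
        else:
            clusters[k] += 1             # join existing cluster k
        assignments.append(k)
    return assignments

print(crp_sample(10, alpha=1.0))
```

Larger alpha makes new clusters more likely, which is the single knob this predictive specification exposes.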
New Trade Models, Same Old Gains?
, 2010
Abstract

Cited by 8 (1 self)
Micro-level data have had a profound influence on research in international trade over the last ten years. In many regards, this research agenda has been very successful. New stylized facts have been uncovered and new trade models have been developed to explain these facts. In this paper we investigate to what extent answers to new micro-level questions have affected answers to an old and central question in the field: How large are the gains from trade? A crude summary of our results is: “So far, not much.”
Selective Attention for Handwritten Digit Recognition
 Advances in Neural Information Processing Systems 8
, 1996
Abstract

Cited by 7 (2 self)
Completely parallel object recognition is NP-complete. Achieving a recognizer with feasible complexity requires a compromise between parallel and sequential processing where a system selectively focuses on parts of a given image, one after another. Successive fixations are generated to sample the image and these samples are processed and abstracted to generate a temporal context in which results are integrated over time. A computational model based on a partially recurrent feedforward network is proposed and made credible by testing on the real-world problem of recognition of handwritten digits with encouraging results.

1 INTRODUCTION

For all-parallel bottom-up recognition, allocating one separate unit for each possible feature combination, i.e., conjunctive encoding, implies combinatorial explosion. It has been shown that completely parallel, bottom-up visual object recognition is NP-complete (Tsotsos, 1990). By exchanging space with time, systems with much less complexity may be des...
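The fixate-and-integrate idea can be caricatured in a few lines; the glimpse size, fixation points and additive evidence score below are hypothetical stand-ins for the paper's recurrent network:

```python
# Sketch: sequential "fixations" over an image, integrating evidence
# over time instead of processing the whole image in parallel.
# Image, fixation points and the scoring rule are all hypothetical.

def glimpse(image, cx, cy, size=3):
    """Extract a size x size window centred at (cx, cy) from a 2D list."""
    half = size // 2
    return [row[cx - half: cx + half + 1]
            for row in image[cy - half: cy + half + 1]]

def recognise(image, fixations):
    """Accumulate a crude score over successive fixations; a real model
    would feed each glimpse into a recurrent network instead."""
    evidence = 0.0
    for cx, cy in fixations:              # one small glimpse at a time
        patch = glimpse(image, cx, cy)
        evidence += sum(sum(row) for row in patch)
    return evidence

image = [[(x + y) % 2 for x in range(8)] for y in range(8)]
score = recognise(image, fixations=[(2, 2), (5, 5)])
print(score)
```

The point of the trade is visible in the loop: each step touches only a size-squared patch, so total work scales with the number of fixations rather than the full image area.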
Fisher's Method Of Scoring
 Int. Stat. Rev
, 1992
Abstract

Cited by 4 (1 self)
An analysis is given of the computational properties of Fisher's method of scoring for maximizing likelihoods and solving estimating equations based on quasi-likelihoods. Consistent estimation of the true parameter vector is shown to be important if a fast rate of convergence is to be achieved, but if this condition is met then the algorithm is very attractive. This link between the performance of the scoring algorithm and the adequacy of the underlying problem modelling is stressed. The effect of linear constraints on performance is discussed, and examples of likelihood and quasi-likelihood calculations are presented.

1. Introduction

Two basic paradigms play important roles in the material developed in this paper. These are: (1) Newton's method for function minimization, and (2) the method of maximum likelihood for parameter estimation in data analysis problems. The main aim is to examine aspects of the structure and performance of Fisher's method of scoring, a minimization techniq...
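As a worked instance of the scoring update (new estimate = old estimate + expected information inverse times score), here is a sketch for the rate of an exponential distribution, a case where the observed and expected information happen to coincide; the data are made up:

```python
# Fisher scoring for the rate lam of an exponential distribution:
# score U(lam) = n/lam - sum(x), expected information I(lam) = n/lam**2,
# update lam <- lam + U(lam)/I(lam).  Data below are illustrative.

def fisher_scoring_exponential(x, lam=0.1, tol=1e-10, max_iter=100):
    n = len(x)
    s = sum(x)
    for _ in range(max_iter):
        score = n / lam - s          # dl/dlam
        info = n / lam ** 2          # E[-d2l/dlam2]
        step = score / info          # scoring step
        lam += step
        if abs(step) < tol:
            break
    return lam

x = [0.5, 1.2, 0.8, 2.0, 1.5]
lam_hat = fisher_scoring_exponential(x)
print(lam_hat)  # converges to the MLE, 1/mean(x)
```

Starting from a poor initial value still converges here, but as the abstract notes, the attractive convergence rate of scoring in general depends on the estimate being consistent for the true parameter.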
A Personal View of Statistical Packages for Linear Regression
 Vanclay (eds) Socioeconomic Research Methods in Forestry: A Training Manual. Rainforest CRC, Cairns, ISBN 0 86443 691
, 2002
Abstract

Cited by 3 (3 self)
INTRODUCTION

Two-variable and multivariate analysis are important steps in fitting relationships for use in systems models. Many statistical packages for use on personal computers are available with regression capabilities, and there is great variation in range of capabilities, ease of use and cost. This is a personal overview of statistical packages used by the author, emphasizing the utility of the package for fitting curves to data using linear regression. It is not a comprehensive review, and does not consider expensive packages such as SPSS, SAS, and S-Plus. Instead, it looks mainly at the free or cheap packages that do not require an annual license fee. Some basic concepts in regression analysis are first introduced, and then a number of packages with regression capabilities are reviewed, specifically Excel, CurveExpert, GLIM, ARC and ViSta.

2. SOME BASIC CONCEPTS IN STATISTICS

Let's begin by reexamining the principles underlying curve fitting with regression analysis. Fig...
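The least-squares computation that all of the packages above perform for a straight line is small enough to write out directly; a sketch of the closed-form normal equations on toy data (the numbers are invented):

```python
# Ordinary least squares for y = a + b*x via the closed-form
# normal equations: b = Sxy/Sxx, a = mean(y) - b*mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)            # sum of squares of x
    sxy = sum((x - mx) * (y - my)                    # cross-products
              for x, y in zip(xs, ys))
    b = sxy / sxx                                    # slope
    a = my - b * mx                                  # intercept
    return a, b

a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)
```

Any of the reviewed packages would return the same intercept and slope for these data; they differ in diagnostics, plotting and convenience rather than in the fitted line itself.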
Jemain, “Handling Overdispersion with Negative Binomial and Generalized Poisson Regression Models,” CAS Winter Forum including the Ratemaking Call Papers
, 2007
Abstract

Cited by 3 (1 self)
In actuarial literature, researchers suggested various statistical procedures to estimate the parameters in claim count or frequency models. In particular, the Poisson regression model, which is also known as the Generalized Linear Model (GLM) with Poisson error structure, has been widely used in recent years. However, it is also recognized that the count or frequency data in insurance practice often display overdispersion, i.e., a situation where the variance of the response variable exceeds the mean. Inappropriate imposition of the Poisson may underestimate the standard errors and overstate the significance of the regression parameters, consequently giving misleading inference about the regression parameters. This paper suggests the Negative Binomial and Generalized Poisson regression models as alternatives for handling overdispersion. If the Negative Binomial and Generalized Poisson regression models are fitted by the maximum likelihood method, the models are considered to be convenient and practical; they handle overdispersion, they allow the likelihood ratio and other standard maximum likelihood tests to be implemented, they have good properties, and they permit the fitting procedure to be carried out using Iteratively Weighted Least Squares (IWLS) regression similar to that of the Poisson. In this paper, two types of regression model will be discussed and applied: multiplicative and additive. The multiplicative and additive regression models for Poisson, Negative Binomial and Generalized Poisson will be fitted, tested and compared on three different sets of claim frequency data: Malaysian private motor third party property damage data, ship damage incident data from McCullagh and Nelder, and data from Bailey and Simon on Canadian private automobile liability.
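A quick way to see the overdispersion the paper targets is the sample variance-to-mean ratio of the claim counts: a Poisson model forces this ratio to 1. The data below are illustrative, not the Malaysian, McCullagh-Nelder or Bailey-Simon sets:

```python
# Overdispersion check on toy claim counts: under a Poisson model the
# variance equals the mean, so a ratio well above 1 suggests moving to
# a Negative Binomial or Generalized Poisson model.

def dispersion_ratio(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

claims = [0, 0, 1, 0, 2, 7, 0, 1, 0, 5]
ratio = dispersion_ratio(claims)
print(ratio)
```

When the ratio is near 1 the Poisson GLM is adequate; well above 1, Poisson standard errors are understated, which is exactly the misleading-inference problem the abstract describes.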
A Mortality Study Of Employees Of The Nuclear Industry In Oak Ridge, Tennessee
, 1997
Abstract

Cited by 3 (1 self)
An analysis was conducted of 27,982 deaths among 106,020 persons employed at four Federal nuclear plants in Oak Ridge, Tennessee between 1943 and 1984. The main objectives were to extend the evaluation of the health effects of employment in the nuclear industry in Oak Ridge to include most workers who were omitted from earlier studies; to compare the mortality experience among the facilities; to address methodological problems that occur when individuals employed at more than one facility are included in the analysis; and to conduct dose-response analyses for those individuals with potential exposure to external radiation. All cause mortality and all cancer mortality were in close agreement with national rates. The only notable excesses occurred for white males for lung cancer (standardized mortality ratio (SMR) = 1.18, 1849 deaths) and nonmalignant respiratory disease (SMR = 1.12, 1568 deaths). A more detailed analysis revealed substantial differences in death rates among workers a...
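The SMRs quoted above are observed deaths divided by the deaths expected from national rates applied to the cohort's person-years. A sketch with hypothetical person-years and rates (only the observed lung-cancer death count is taken from the abstract; the strata are invented):

```python
# Standardized mortality ratio: observed deaths / expected deaths,
# with expected deaths built from national rates applied to the
# cohort's person-years by age stratum.  Figures are hypothetical.

def smr(observed, person_years, national_rates):
    """person_years and national_rates are parallel lists by stratum."""
    expected = sum(py * rate
                   for py, rate in zip(person_years, national_rates))
    return observed / expected

ratio = smr(observed=1849,
            person_years=[500_000, 400_000, 300_000],
            national_rates=[0.0008, 0.0015, 0.0020])
print(round(ratio, 2))
```

An SMR of 1 means the cohort matches national mortality; values above 1, as for lung cancer here, indicate an excess relative to the comparison population.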
Modelling Regeneration and Recruitment in a Tropical Rainforest
 CANADIAN JOURNAL OF FOREST RESEARCH
, 1992
Abstract

Cited by 2 (2 self)
A two-stage model predicts recruitment of the 100 species which account for 97% of all recruitment observed on 217 permanent sample plots in the tropical rainforest of north Queensland. The first stage predicts the probability of the occurrence of any recruitment from stand basal area and the presence of that species in the existing stand. These probabilities can be implemented stochastically, or deterministically by summing for each species until unity is reached, recruitment initiated and the accumulated probability reset. The second stage indicates the expected amount of recruitment, given that it is known to occur, and employs stand basal area, the relative number of trees of that species in the stand, and site quality. This approach is easily implemented and provides good results.
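The deterministic option described (accumulate the probabilities until unity is reached, initiate recruitment, reset the accumulator) can be sketched directly; the per-step probabilities below are invented:

```python
# Deterministic implementation of probabilistic recruitment for one
# species: add each step's recruitment probability to a running total
# and trigger an event whenever the total reaches one, then reset.

def deterministic_recruitment(probs_by_step):
    """probs_by_step: per-step recruitment probability for one species.
    Returns the simulation steps at which recruitment is initiated."""
    events = []
    acc = 0.0
    for step, p in enumerate(probs_by_step):
        acc += p
        if acc >= 1.0:
            events.append(step)   # recruitment initiated this step
            acc -= 1.0            # reset the accumulated probability
    return events

print(deterministic_recruitment([0.5, 0.5, 0.5, 0.5]))  # → [1, 3]
```

Over many steps this yields the same expected number of recruitment events as stochastic sampling, but without run-to-run variation, which is convenient for deterministic stand projection.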
State Gun Policy and Cross-State Externalities: Evidence from Crime Gun Tracing
 American Economic Journal: Economic Policy
Abstract

Cited by 1 (1 self)
This paper provides a theoretical and empirical analysis of cross-state externalities associated with gun regulations in the context of the gun trafficking market. Using gun tracing data, which identify the source state for crime guns recovered in destination states, we find that firearms in this market tend to flow from states with weak gun laws to states with strict gun laws, satisfying a necessary condition for the existence of cross-state externalities in the theoretical model. We also find an important role for transportation costs in this market, with gun flows more significant between nearby states; this finding suggests that externalities are spatial in nature. Finally, we present evidence that criminal possession of guns is higher in states exposed to weak gun laws in nearby states.