Results 1–10 of 76
Ontological Semantics, 2004
"... This book introduces ontological semantics, a comprehensive approach to the treatment of text meaning by computer. Ontological semantics is an integrated complex of theories, methodologies, descriptions and implementations. In ontological semantics, a theory is viewed as a set of statements determin ..."
Abstract

Cited by 85 (27 self)
 Add to MetaCart
This book introduces ontological semantics, a comprehensive approach to the treatment of text meaning by computer. Ontological semantics is an integrated complex of theories, methodologies, descriptions and implementations. In ontological semantics, a theory is viewed as a set of statements determining the format of descriptions of the phenomena with which the theory deals. A theory is associated with a methodology used to obtain the descriptions. Implementations are computer systems that use the descriptions to solve specific problems in text processing. Implementations of ontological semantics are combined with other processing systems to produce applications, such as information extraction or machine translation. The theory of ontological semantics is built as a society of microtheories covering such diverse ground as specific language phenomena, world knowledge organization, processing heuristics and issues relating to knowledge representation and implementation system architecture. The theory briefly sketched above is a top-level microtheory, the ontological semantics theory per se. Descriptions in ontological semantics include text meaning representations, lexical entries, ontological concepts and instances as well as procedures for manipulating texts and their meanings. Methodologies in ontological semantics are sets of techniques and instructions for acquiring and
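The three layers the abstract names (ontological concepts, lexical entries, and text meaning representations) can be illustrated with a toy data structure. Everything below is invented for illustration; it is a drastically simplified stand-in for the book's actual formalism, not its implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Concept:
    """A node in a toy ontology: an IS-A parent plus property constraints."""
    name: str
    parent: Optional[str] = None
    properties: dict = field(default_factory=dict)

# World knowledge: a two-concept ontology fragment (names invented).
ontology = {
    "EVENT": Concept("EVENT"),
    "INGEST": Concept("INGEST", parent="EVENT",
                      properties={"AGENT": "ANIMAL", "THEME": "FOOD"}),
}

# Lexical entries anchor word senses in the ontology.
lexicon = {"eat": "INGEST", "devour": "INGEST"}

def meaning_representation(verb: str, agent: str, theme: str) -> dict:
    """Instantiate the concept evoked by the verb, filling its case roles --
    a minimal sketch of building a text meaning representation."""
    return {"instance-of": lexicon[verb], "AGENT": agent, "THEME": theme}

tmr = meaning_representation("eat", agent="HUMAN-1", theme="APPLE-1")
print(tmr)
```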
Adjusting for nonignorable dropout using semiparametric nonresponse models (with discussion), Journal of the American Statistical Association, 1999
"... Consider a study whose design calls for the study subjects to be followed from enrollment (time t = 0) to time t = T,at which point a primary endpoint of interest Y is to be measured. The design of the study also calls for measurements on a vector V(t) of covariates to be made at one or more times t ..."
Abstract

Cited by 39 (10 self)
 Add to MetaCart
Consider a study whose design calls for the study subjects to be followed from enrollment (time t = 0) to time t = T, at which point a primary endpoint of interest Y is to be measured. The design of the study also calls for measurements on a vector V(t) of covariates to be made at one or more times t during the interval [0, T). We are interested in making inferences about the marginal mean µ0 of Y when some subjects drop out of the study at random times Q prior to the common fixed end-of-followup time T. The purpose of this article is to show how to make inferences about µ0 when the continuous dropout time Q is modeled semiparametrically and no restrictions are placed on the joint distribution of the outcome and other measured variables. In particular, we consider two models for the conditional hazard of dropout given (V̄(T), Y), where V̄(t) denotes the history of the process V(t) through time t, t ∈ [0, T). In the first model, we assume that λQ(t | V̄(T), Y) = λ0(t | V̄(t)) exp(α0Y), where α0 is a scalar parameter and λ0(t | V̄(t)) is an unrestricted positive function of t and the process V̄(t). When the process V̄(t) is high dimensional, estimation in this model is not feasible with moderate sample sizes, due to the curse of dimensionality. For such situations, we consider a second model that imposes the additional restriction that λ0(t | V̄(t)) = λ0(t) exp(γ0′W(t)), where λ0(t) is an unspecified baseline hazard function, W(t) = w(t, V̄(t)), w(·, ·) is a known function that maps (t, V̄(t)) to R^q, and γ0 is a q × 1 unknown parameter vector. When α0 ≠ 0, dropout is nonignorable. On account of identifiability problems, joint estimation of the mean µ0 of Y and the selection bias parameter α0 may be difficult or impossible. Therefore, we propose regarding the selection bias parameter α0 as known, rather than estimating it from the data.
We then perform a sensitivity analysis to see how inference about µ0 changes as we vary α0 over a plausible range of values. We apply our approach to the analysis of ACTG 175, an AIDS clinical trial. KEY WORDS: Augmented inverse probability of censoring weighted estimators; Cox proportional hazards model; Identification;
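The sensitivity analysis described above can be illustrated with a small simulation. The sketch below is hypothetical in every detail: it uses a single dropout opportunity rather than a continuous-time hazard, treats the covariate coefficient as known, and invents all parameter values. It computes an inverse-probability-of-remaining weighted estimate of µ0 with the selection-bias parameter α0 held fixed, then sweeps α0 over a plausible range, which is the shape of the proposed sensitivity analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- Simulate a toy follow-up study (all parameters invented) ---
n = 200_000
W = rng.normal(size=n)                   # baseline covariate, stands in for V(0)
Y = 1.0 + 0.5 * W + rng.normal(size=n)   # endpoint measured at t = T; E[Y] = 1
gamma_true, alpha_true = 0.8, 0.6        # dropout depends on W *and* on Y

# Single dropout opportunity, standing in for the hazard lambda_Q:
p_drop = expit(-1.5 + gamma_true * W + alpha_true * Y)
completer = rng.random(n) > p_drop       # Y is observed only for completers

def ipw_mean(alpha0):
    """Hajek-type inverse-probability-of-remaining estimate of mu_0,
    treating alpha0 as *known* and gamma as known for simplicity."""
    p = expit(-1.5 + gamma_true * W[completer] + alpha0 * Y[completer])
    w = 1.0 / (1.0 - p)                  # weight = 1 / P(remain | W, Y)
    return np.sum(w * Y[completer]) / np.sum(w)

# Sensitivity analysis: how does the mu_0 estimate move as alpha0 varies?
for a in [0.0, 0.3, 0.6, 0.9]:
    print(f"alpha0 = {a:.1f}  ->  mu0 estimate = {ipw_mean(a):.3f}")
```

Setting α0 = 0 corresponds to assuming ignorable dropout; here that assumption visibly biases the estimate downward, because subjects with large Y drop out more often.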
The interplay of Bayesian and frequentist analysis, Statist. Sci., 2004
"... Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fi ..."
Abstract

Cited by 27 (0 self)
 Add to MetaCart
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Bayesian hypothesis testing: A reference approach, Internat. Statist. Rev., 2002
"... For any probability model M ≡{p(x  θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 ≡{θ = θ0} is best considered as a formal decision problem on whether to use (a0), ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
For any probability model M ≡ {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 ≡ {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 ≡ {p(x | θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, θ, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, ω) π(θ, ω | x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M, and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫ δ π(δ | x) dδ, of the amount of information δ(θ0, θ, ω) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is
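For a normal mean with known variance, both the reference posterior and the intrinsic statistic have simple forms, which allows a quick Monte Carlo check. The sketch below is illustrative only (data and sample size invented): it estimates the reference posterior expectation of the discrepancy δ by simulation and compares it with the closed form d = (1 + z²)/2 that holds in this model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting (all numbers invented): n observations from N(theta, sigma^2)
# with sigma known, testing H0: theta = theta0.
sigma, n, theta0 = 1.0, 25, 0.0
x = rng.normal(loc=0.4, scale=sigma, size=n)   # data generated with theta = 0.4
xbar = x.mean()

# The reference posterior for theta is N(xbar, sigma^2/n), and the
# discrepancy (expected log-likelihood ratio) for n normal observations
# is delta(theta0, theta) = n * (theta - theta0)^2 / (2 * sigma^2).
theta_draws = rng.normal(xbar, sigma / np.sqrt(n), size=200_000)
delta = n * (theta_draws - theta0) ** 2 / (2 * sigma**2)
d_mc = delta.mean()                            # Monte Carlo intrinsic statistic

# Closed form in this model: d = (1 + z^2)/2, z = sqrt(n)*(xbar - theta0)/sigma.
z = np.sqrt(n) * (xbar - theta0) / sigma
d_exact = 0.5 * (1 + z**2)
print(f"intrinsic statistic: MC = {d_mc:.2f}, exact = {d_exact:.2f}")
```

Large values of the statistic indicate that using M0 as a proxy for M would lose substantial information, i.e. evidence against H0.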
Bayesian inference procedures derived via the concept of relative surprise, Communications in Statistics, 1997
"... of least relative surprise; model checking; change of variable problem; crossvalidation. We consider the problem of deriving Bayesian inference procedures via the concept of relative surprise. The mathematical concept of surprise has been developed by I.J. Good in a long sequence of papers. We make ..."
Abstract

Cited by 17 (6 self)
 Add to MetaCart
of least relative surprise; model checking; change of variable problem; cross-validation. We consider the problem of deriving Bayesian inference procedures via the concept of relative surprise. The mathematical concept of surprise has been developed by I. J. Good in a long sequence of papers. We make a modification to this development that permits the avoidance of a serious defect; namely, the change of variable problem. We apply relative surprise to the development of estimation, hypothesis testing and model checking procedures. Important advantages of the relative surprise approach to inference include the lack of dependence on a particular loss function and complete freedom to the statistician in the choice of prior for hypothesis testing problems. Links are established with common Bayesian inference procedures such as highest posterior density regions, modal estimates and Bayes factors. From a practical perspective new inference
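The relative-surprise machinery can be illustrated in a conjugate beta-binomial setting. The sketch below is hypothetical (prior, data, and the 0.95 level are all invented) and works on a grid rather than via the paper's derivations: it computes the ratio of posterior to prior density, takes its maximizer as a least-relative-surprise estimate, and collects the highest-ratio values into a region.

```python
import numpy as np

# Beta-binomial illustration (all numbers invented): theta ~ Beta(a, b),
# s successes in n trials, so the posterior is Beta(a + s, b + n - s).
a, b, n, s = 2.0, 2.0, 20, 14

grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
dx = grid[1] - grid[0]

def beta_pdf(t, p, q):
    """Beta(p, q) density, normalized numerically on the grid."""
    f = t ** (p - 1) * (1 - t) ** (q - 1)
    return f / (f.sum() * dx)

prior_pdf = beta_pdf(grid, a, b)
post_pdf = beta_pdf(grid, a + s, b + n - s)

# Relative surprise (relative belief) ratio: how the data changed beliefs.
rb = post_pdf / prior_pdf

# Least-relative-surprise estimate: the theta maximizing the ratio. Unlike
# the posterior mode, this maximizer is invariant under reparameterization,
# which is the "change of variable" point the abstract raises.
lrse = grid[np.argmax(rb)]

# 0.95 relative-surprise region: keep the theta values with the largest
# ratios until they accumulate 0.95 posterior probability.
order = np.argsort(-rb)
keep = order[np.cumsum(post_pdf[order]) * dx <= 0.95]
print(f"LRSE = {lrse:.3f}, "
      f"region ≈ [{grid[keep].min():.3f}, {grid[keep].max():.3f}]")
```

In this conjugate example the ratio is proportional to θ¹⁴(1 − θ)⁶, so the estimate lands at s/n = 0.7, which illustrates the link to familiar procedures mentioned in the abstract.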
Bayesian Multiple Comparisons Using Dirichlet Process Priors, Journal of the American Statistical Association, 1996
"... We consider the problem of multiple comparisons from a Bayesian viewpoint. The family of Dirichlet process priors is applied in the form of baseline prior/likelihood combinations, to obtain posterior probabilities for various hypotheses. The baseline prior/likelihood combinations considered here are ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
We consider the problem of multiple comparisons from a Bayesian viewpoint. The family of Dirichlet process priors is applied in the form of baseline prior/likelihood combinations, to obtain posterior probabilities for various hypotheses. The baseline prior/likelihood combinations considered here are beta/binomial, normal/inverted gamma with equal variances and a hierarchical nonconjugate normal/inverted gamma prior on treatment means. The prior probabilities of the hypotheses depend directly on the concentration parameter of the Dirichlet process prior. The problem is analytically intractable; we use Gibbs sampling. The posterior probabilities of the hypotheses are easily obtained as a byproduct in evaluating the marginal posterior distributions of the parameters. The proposed procedure is compared with Duncan's multiple range test and shown to be more powerful under certain alternative hypotheses. Keywords: Gibbs sampling, beta/binomial prior, normal/inverted gamma prior, hierarchica...
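The Gibbs-sampling step can be sketched for the beta/binomial baseline case. What follows is a generic collapsed Polya-urn sampler in the style of Neal's Algorithm 3, not the paper's implementation; the treatment data, beta prior, and concentration parameter M are all invented. Posterior probabilities of hypotheses such as θi = θj are read off as the fraction of samples in which treatments i and j occupy the same cluster.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

# Hypothetical binomial data for 4 treatments: (successes, trials).
succ = np.array([7, 8, 15, 16])
tot = np.array([20, 20, 20, 20])
a, b = 1.0, 1.0     # beta/binomial baseline prior
M = 1.0             # Dirichlet process concentration parameter

def log_betabinom(s, n, a_, b_):
    """Log beta-binomial marginal likelihood (binomial coefficient omitted,
    as it cancels across cluster assignments)."""
    return (lgamma(a_ + s) + lgamma(b_ + n - s) + lgamma(a_ + b_)
            - lgamma(a_ + b_ + n) - lgamma(a_) - lgamma(b_))

k = len(succ)
c = np.zeros(k, dtype=int)           # cluster labels; start all together
same = np.zeros((k, k))
n_iter, burn = 4000, 500

for it in range(n_iter):
    for i in range(k):
        c[i] = -1                    # remove treatment i from its cluster
        labels = [l for l in set(c) if l >= 0]
        logp = []
        for l in labels:             # probability of joining each cluster
            mem = (c == l)
            logp.append(np.log(mem.sum())
                        + log_betabinom(succ[i], tot[i],
                                        a + succ[mem].sum(),
                                        b + (tot[mem] - succ[mem]).sum()))
        logp.append(np.log(M) + log_betabinom(succ[i], tot[i], a, b))  # new
        p = np.exp(np.array(logp) - max(logp))
        p /= p.sum()
        j = rng.choice(len(p), p=p)
        c[i] = labels[j] if j < len(labels) else (max(c) + 1)
    if it >= burn:
        same += (c[:, None] == c[None, :])

same /= (n_iter - burn)
print("P(theta_i = theta_j | data):")
print(np.round(same, 2))
```

With these toy data the sampler should report high co-clustering probability for the similar pairs (treatments 1–2 and 3–4) and low probability across the two groups, which is exactly the multiple-comparisons output the abstract describes.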
Nonaxiomatic reasoning system (version 2.2), 1993
"... NonAxiomatic Reasoning System (NARS) is an intelligent reasoning system, where intelligence means working and adapting with insu cient knowledge and resources. NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and ..."
Abstract

Cited by 13 (11 self)
 Add to MetaCart
Non-Axiomatic Reasoning System (NARS) is an intelligent reasoning system, where intelligence means working and adapting with insufficient knowledge and resources. NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network. After presenting the major components of the system, its implementation is briefly described. An example is used to show how the system works. The limitations of the system are also discussed.
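A minimal sketch of the uncertainty bookkeeping behind such a system, assuming the NARS-style representation of a truth value as evidence counts: frequency f = w⁺/w and confidence c = w/(w + K), where K is the evidential horizon. Only the revision rule is shown, and the counts are invented; the full system also defines truth functions for deduction, induction, and abduction.

```python
from dataclasses import dataclass

K = 1.0  # evidential horizon parameter

@dataclass
class Truth:
    """Truth value derived from evidence counts: w_plus positive
    evidence out of w total pieces of evidence."""
    w_plus: float
    w: float

    @property
    def f(self):          # frequency: proportion of positive evidence
        return self.w_plus / self.w

    @property
    def c(self):          # confidence: grows with total evidence
        return self.w / (self.w + K)

def revise(t1: Truth, t2: Truth) -> Truth:
    """Revision pools evidence from two independent sources for the same
    statement, so confidence increases monotonically."""
    return Truth(t1.w_plus + t2.w_plus, t1.w + t2.w)

# Two independent observations of the same statement (counts invented):
t1 = Truth(w_plus=4, w=5)     # f = 0.8,  c = 5/6
t2 = Truth(w_plus=9, w=10)    # f = 0.9,  c = 10/11
t = revise(t1, t2)            # f = 13/15, c = 15/16
print(f"revised: f = {t.f:.3f}, c = {t.c:.3f}")
```

The revised conclusion is more confident than either premise, which captures the abstract's point that the system keeps working, and keeps improving its beliefs, under insufficient knowledge.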