Results 1–10 of 735
Biological sequence analysis: probabilistic models of proteins and nucleic acids
, 1998
The beta reputation system
 In Proceedings of the 15th Bled Conference on Electronic Commerce
, 2002
Abstract

Cited by 346 (18 self)
Reputation systems can be used to foster good behaviour and to encourage adherence to contracts in e-commerce. Several reputation systems have been deployed in practical applications or proposed in the literature. This paper describes a new system called the beta reputation system, which is based on using beta probability density functions to combine feedback and derive reputation ratings. The advantage of the beta reputation system is its flexibility and simplicity, as well as its foundation in the theory of statistics.
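The combination rule the abstract describes can be sketched in a few lines. This is a minimal illustration, assuming feedback is summarized as counts r of positive and s of negative reports: the reputation is then modelled as a Beta(r + 1, s + 1) density, and a natural point rating is its expected value.

```python
# Minimal sketch of a beta-based reputation rating.
# Assumption: feedback arrives as r positive and s negative reports;
# the rating is the mean of the Beta(r + 1, s + 1) density.

def beta_reputation(r: float, s: float) -> float:
    """Expected value of Beta(r + 1, s + 1): a rating in (0, 1)."""
    return (r + 1) / (r + s + 2)

# No feedback yet: the rating is a neutral 0.5.
print(beta_reputation(0, 0))   # 0.5
# Mostly positive feedback pushes the rating toward 1.
print(beta_reputation(8, 2))   # 0.75
```

With no feedback the rating starts at 0.5 rather than an extreme, which is one reason the beta form is attractive for bootstrapping new participants.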
Experiences with an Interactive Museum Tour-Guide Robot
, 1998
Abstract

Cited by 328 (75 self)
This article describes the software architecture of an autonomous, interactive tour-guide robot. It presents a modular and distributed software architecture, which integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telepresence. At its heart, the software approach relies on probabilistic computation, online learning, and anytime algorithms. It enables robots to operate safely, reliably, and at high speeds in highly dynamic environments, and does not require any modifications of the environment to aid the robot's operation. Special emphasis is placed on the design of interactive capabilities that appeal to people's intuition. The interface provides new means for human-robot interaction with crowds of people in public places, and it also provides people all around the world with the ability to establish a "virtual telepresence" using the Web. To illustrate our approach, we report results obtained in mid...
Theoretical and Empirical Properties of Dynamic Conditional Correlation Multivariate GARCH
, 2001
Abstract

Cited by 209 (11 self)
In this paper, we develop the theoretical and empirical properties of a new class of multivariate GARCH models capable of estimating large time-varying covariance matrices, Dynamic Conditional Correlation Multivariate GARCH. We show that the problem of multivariate conditional variance estimation can be simplified by estimating univariate GARCH models for each asset, and then, using transformed residuals resulting from the first stage, estimating a conditional correlation estimator. The standard errors for the first-stage parameters remain consistent, and only the standard errors for the correlation parameters need be modified. We use the model to estimate the conditional covariance of up to 100 assets using S&P 500 Sector Indices and Dow Jones Industrial Average stocks, and conduct specification tests of the estimator using an industry-standard benchmark for volatility models. This new estimator demonstrates very strong performance, especially considering the ease of implementation of the estimator.
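The two-stage structure described in the abstract can be sketched as follows. This is a simplified illustration with fixed rather than estimated parameters: stage one would fit a univariate GARCH model per asset and standardize its residuals; here the standardized residuals eps are taken as given, and stage two runs a DCC(1,1)-style correlation recursion, rescaling each quasi-correlation matrix Q_t to a proper correlation matrix R_t.

```python
import numpy as np

# Sketch of the second (correlation) stage of a DCC model, with assumed
# parameters a and b rather than estimated ones:
#   Q_t = (1 - a - b) * S + a * eps_{t-1} eps_{t-1}' + b * Q_{t-1}
# where S is the unconditional correlation of the standardized residuals.

def dcc_correlations(eps, a=0.05, b=0.90):
    T, n = eps.shape
    S = np.corrcoef(eps, rowvar=False)     # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, n, n))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)          # rescale Q_t to a correlation matrix
        e = eps[t][:, None]
        Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
    return R

rng = np.random.default_rng(0)
eps = rng.standard_normal((200, 3))        # stand-in standardized residuals
R = dcc_correlations(eps)                  # one conditional correlation per period
```

Because each Q_t is a nonnegative combination of positive semidefinite matrices, every R_t is a valid correlation matrix, which is what makes the large-dimension case tractable.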
Correcting sample selection bias by unlabeled data
Abstract

Cited by 205 (12 self)
We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.
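The idea of producing resampling weights directly by matching distributions in feature space can be illustrated with a much-simplified moment-matching sketch (this is not the paper's procedure; the feature map (1, x, x²) and the ridge penalty are arbitrary choices for the toy): choose nonnegative weights on the training points so that the weighted training feature mean matches the test feature mean.

```python
import numpy as np

# Toy "direct reweighting" sketch: match the weighted training feature mean
# to the test feature mean, with a ridge pull toward uniform weights.
# Assumptions: 1-D data, feature map (1, x, x^2), hand-picked lam.

def mean_matching_weights(x_train, x_test, lam=1e-3):
    phi = lambda x: np.stack([np.ones_like(x), x, x ** 2])  # (d, n) features
    n = len(x_train)
    A = phi(x_train) / n                  # maps weights to a weighted feature mean
    b = phi(x_test).mean(axis=1)          # test feature mean
    # argmin_w ||A w - b||^2 + lam * ||w - 1||^2   (closed form)
    w = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * np.ones(n))
    return np.clip(w, 0.0, None)          # keep weights nonnegative

rng = np.random.default_rng(1)
x_train = rng.normal(0.0, 1.0, 500)       # training distribution
x_test = rng.normal(1.0, 1.0, 500)        # shifted test distribution
w = mean_matching_weights(x_train, x_test)
# The reweighted training mean moves toward the test mean.
```

No density is estimated at any point; the weights fall out of a single linear solve, which mirrors the abstract's point that distribution estimation can be bypassed.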
A likelihood approach to estimating phylogeny from discrete morphological character data
 Systematic Biology
, 2001
Abstract

Cited by 144 (0 self)
Abstract.—Evolutionary biologists have adopted simple likelihood models for purposes of estimating ancestral states and evaluating character independence on specified phylogenies; however, for purposes of estimating phylogenies by using discrete morphological data, maximum parsimony remains the only option. This paper explores the possibility of using standard, well-behaved Markov models for estimating morphological phylogenies (including branch lengths) under the likelihood criterion. An important modification of standard Markov models involves making the likelihood conditional on characters being variable, because constant characters are absent in morphological data sets. Without this modification, branch lengths are often overestimated, resulting in potentially serious biases in tree topology selection. Several new avenues of research are opened by an explicitly model-based approach to phylogenetic analysis of discrete morphological data, including combined-data likelihood analyses (morphology + sequence data), likelihood ratio tests, and Bayesian analyses. [Discrete morphological character; Markov model; maximum likelihood; phylogeny.] The increased availability of nucleotide and protein sequences from a diversity of both organisms and genes has stimu...
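The key modification the abstract describes can be rendered as a toy (two states, two taxa, hypothetical branch lengths): compute the ordinary Markov-model likelihood of a tip pattern, then condition on the character being variable by dividing by one minus the probability of a constant pattern.

```python
import math

# Toy two-state, two-taxon Mk-style likelihood, conditioned on variability.
# Assumptions: symmetric rates, stationary root frequencies (1/2, 1/2),
# arbitrary branch lengths t1 and t2 to the two tips.

def p_change(t):
    """Two-state symmetric CTMC: probability the state differs after time t."""
    return (1 - math.exp(-2 * t)) / 2

def pattern_likelihood(tip1, tip2, t1, t2):
    """Likelihood of observed tip states, summing over the unobserved root."""
    lik = 0.0
    for root in (0, 1):
        p1 = p_change(t1) if tip1 != root else 1 - p_change(t1)
        p2 = p_change(t2) if tip2 != root else 1 - p_change(t2)
        lik += 0.5 * p1 * p2      # stationary root frequency 1/2 per state
    return lik

def variable_conditioned_likelihood(tip1, tip2, t1, t2):
    """Divide by P(character is variable) = 1 - P(constant pattern)."""
    p_const = pattern_likelihood(0, 0, t1, t2) + pattern_likelihood(1, 1, t1, t2)
    return pattern_likelihood(tip1, tip2, t1, t2) / (1 - p_const)
```

Because constant patterns can never be observed in a morphological matrix, the unconditioned likelihood misattributes their absence to long branches; dividing by the probability of variability removes that distortion.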
SOBER: statistical model-based bug localization
 In Proc. ESEC/FSE’05
, 2005
Abstract

Cited by 141 (13 self)
Automated localization of software bugs is one of the essential issues in debugging aids. Previous studies indicated that the evaluation history of program predicates may disclose important clues about underlying bugs. In this paper, we propose a new statistical model-based approach, called SOBER, which localizes software bugs without any prior knowledge of program semantics. Unlike existing statistical debugging approaches that select predicates correlated with program failures, SOBER models evaluation patterns of predicates in both correct and incorrect runs respectively, and regards a predicate as bug-relevant if its evaluation pattern in incorrect runs differs significantly from that in correct ones. SOBER features a principled quantification of the pattern difference that measures the bug-relevance of program predicates. We systematically evaluated our approach under the same setting as previous studies. The result demonstrated the power of our approach in bug localization: SOBER can help programmers locate 68 out of 130 bugs in the Siemens suite when programmers are expected to examine no more than 10% of the code, whereas the best previously reported is 52 out of 130. Moreover, with the assistance of SOBER, we found two bugs in bc 1.06 (an arbitrary precision calculator on UNIX/Linux), one of which has never been reported before.
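The ranking idea can be sketched with a hedged toy (synthetic numbers, and a plain two-sample z statistic standing in for the paper's principled quantification): each run yields an "evaluation bias" for a predicate, the fraction of its evaluations that were true in that run, and a predicate scores as more bug-relevant when its bias distribution in failing runs differs strongly from that in passing runs.

```python
import math

# Toy bug-relevance score: compare per-run evaluation biases of a predicate
# between passing and failing runs with a two-sample z statistic.
# Assumption: this z statistic is a stand-in for SOBER's actual measure.

def bug_relevance(pass_biases, fail_biases):
    def mean_var(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)
    mp, vp = mean_var(pass_biases)
    mf, vf = mean_var(fail_biases)
    se = math.sqrt(vp / len(pass_biases) + vf / len(fail_biases) + 1e-12)
    return abs(mf - mp) / se

# A predicate whose bias shifts in failing runs outranks one that is unchanged.
benign = bug_relevance([0.5, 0.4, 0.6, 0.5], [0.5, 0.5, 0.4, 0.6])
suspect = bug_relevance([0.1, 0.2, 0.1, 0.15], [0.8, 0.9, 0.85, 0.9])
```

The point of modelling both correct and incorrect runs, rather than only failure correlation, is visible here: the benign predicate is often true in failing runs too, but its distribution is unchanged, so it scores near zero.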
A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells
 Journal of Neuroscience
, 1998
Abstract

Cited by 132 (13 self)
The problem of predicting the position of a freely foraging rat based on the ensemble firing patterns of place cells recorded from the CA1 region of its hippocampus is used to develop a two-stage statistical paradigm for neural spike train decoding. In the first, or encoding, stage, place cell spiking activity is modeled as an inhomogeneous Poisson process whose instantaneous rate is a function of the animal's position in space and the phase of its theta rhythm. The animal's path is modeled as a Gaussian random walk. In the second, or decoding, stage, a Bayesian statistical paradigm is used to derive a nonlinear recursive causal filter algorithm for predicting the position of the animal from the place cell ensemble firing patterns. The algebra of the decoding algorithm defines an explicit map of the discrete spike trains into the position prediction. The confidence regions for the position predictions quantify spike train infor...
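The two-stage recipe can be sketched in bare-bones form (illustrative tuning curves and numbers, not the paper's implementation): encode each cell as an inhomogeneous Poisson process with a Gaussian place field, then decode with a recursive Bayesian filter whose motion prior is a Gaussian random walk over a 1-D position grid.

```python
import numpy as np

# Toy grid-based Bayesian spike-train decoder.
# Assumptions: 1-D track, three cells with Gaussian place fields, and
# hand-picked rates, bin width, and random-walk width.

grid = np.linspace(0.0, 1.0, 101)                 # candidate positions
centers = np.array([0.2, 0.5, 0.8])               # place-field centers (3 cells)

def rates(x):
    """Firing rate (Hz) of each cell at positions x: baseline plus a bump."""
    return 2.0 + 20.0 * np.exp(-((x[:, None] - centers) ** 2) / (2 * 0.05 ** 2))

def decode(spike_counts, dt=0.1, sigma_walk=0.05):
    lam = rates(grid) * dt                        # expected counts per time bin
    walk = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * sigma_walk ** 2))
    posterior = np.full(len(grid), 1.0 / len(grid))
    estimates = []
    for y in spike_counts:                        # y: spike count per cell, one bin
        prior = walk @ posterior                  # random-walk prediction step
        like = np.exp((y * np.log(lam) - lam).sum(axis=1))  # Poisson, up to a const.
        posterior = prior * like
        posterior /= posterior.sum()
        estimates.append(grid[np.argmax(posterior)])
    return estimates

# A rat sitting near 0.5: the middle cell fires most, and the estimate follows.
est = decode(np.array([[0, 2, 0], [0, 3, 0], [1, 2, 0]]))
```

The normalized posterior carried between bins is what yields confidence regions as well as point predictions, which is the feature the abstract emphasizes.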
Bayesian Landmark Learning for Mobile Robot Localization
, 1998
Abstract

Cited by 132 (14 self)
To operate successfully in indoor environments, mobile robots must be able to localize themselves. Most current localization algorithms lack flexibility, autonomy, and often optimality, since they rely on a human to determine what aspects of the sensor data to use in localization (e.g., what landmarks to use). This paper describes a learning algorithm, called BaLL, that enables mobile robots to learn what features/landmarks are best suited for localization, and also to train artificial neural networks for extracting them from the sensor data. A rigorous Bayesian analysis of probabilistic localization is presented, which produces a rational argument for evaluating features, for selecting them optimally, and for training the networks that approximate the optimal solution. In a systematic experimental study, BaLL outperforms two other recent approaches to mobile robot localization. Keywords: artificial neural networks, Bayesian analysis, feature extraction, landmarks, localization, mobi...
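The feature-evaluation principle can be illustrated with a toy information-theoretic score (hypothetical sensor models, not BaLL itself, whose Bayesian utility analysis is richer): rank each candidate feature by the mutual information between its reading and the robot's position, and prefer the most informative one.

```python
import math

# Toy feature scoring for localization: I(position; feature reading).
# Assumptions: four discrete positions, binary feature readings, and
# made-up sensor models p(z | x) for two candidate features.

def mutual_information(p_z_given_x, p_x):
    """I(X; Z) in nats, for discrete position X and feature reading Z."""
    n_z = len(p_z_given_x[0])
    p_z = [sum(px * row[z] for px, row in zip(p_x, p_z_given_x)) for z in range(n_z)]
    mi = 0.0
    for px, row in zip(p_x, p_z_given_x):
        for pz, pzx in zip(p_z, row):
            if pzx > 0:
                mi += px * pzx * math.log(pzx / pz)
    return mi

prior = [0.25, 0.25, 0.25, 0.25]             # four candidate positions
door_detector = [[0.9, 0.1], [0.1, 0.9], [0.9, 0.1], [0.1, 0.9]]  # informative
brightness = [[0.5, 0.5]] * 4                # reading independent of position
```

A feature whose reading is independent of position scores exactly zero, capturing in miniature why a learned, data-driven choice of landmarks can beat a human-designated one.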
The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis
 Neural Computation
, 2001
Abstract

Cited by 126 (22 self)
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness-of-fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peri-stimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the sup...
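The theorem's use as a goodness-of-fit check can be sketched as follows (a toy with an assumed intensity, not the paper's neural examples): rescale each interspike interval by the integrated conditional intensity; under a correct model the rescaled intervals are Exp(1), so z = 1 - exp(-tau) should be Uniform(0, 1), which a Kolmogorov-Smirnov statistic can check.

```python
import math
import random

# Toy time-rescaling goodness-of-fit check.
# Assumptions: a made-up sinusoidal intensity, simulation by thinning,
# and midpoint-rule integration of the intensity.

def rescale(spike_times, intensity, dt=1e-3):
    """Integrate the intensity over each interspike interval (midpoint rule)."""
    taus, last = [], 0.0
    for s in spike_times:
        n = max(1, int((s - last) / dt))
        step = (s - last) / n
        taus.append(sum(intensity(last + (k + 0.5) * step) for k in range(n)) * step)
        last = s
    return taus

def ks_uniform(zs):
    """One-sample Kolmogorov-Smirnov distance of zs from Uniform(0, 1)."""
    zs = sorted(zs)
    n = len(zs)
    return max(max((i + 1) / n - z, z - i / n) for i, z in enumerate(zs))

# Simulate an inhomogeneous Poisson train by thinning, then test the true model.
random.seed(0)
lam = lambda t: 5.0 + 4.0 * math.sin(t)
t, spikes = 0.0, []
while t < 200.0:
    t += random.expovariate(10.0)           # candidate events at the bounding rate
    if t < 200.0 and random.random() < lam(t) / 10.0:
        spikes.append(t)                    # keep with probability lam(t) / 10
z = [1 - math.exp(-tau) for tau in rescale(spikes, lam)]
D = ks_uniform(z)                           # small D: the model fits the train
```

Testing a deliberately wrong intensity (for example, a constant rate) against the same train would inflate D, which is exactly how the rescaling turns model checking into a standard uniformity test.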