Results 1–10 of 77
End-to-End Routing Behavior in the Internet
, 1996
Abstract

Cited by 578 (15 self)
The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exceptions being Chinoy's analysis of the dynamics of Internet routing information [Ch93] and recent work, similar in spirit, by Labovitz, Malan and Jahanian [LMJ97]. We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.3%. For routing stability, we define two separate types of stability: “prevalence,” meaning the overall likelihood that a particular route is encountered, and “persistence,” the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
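The “prevalence” and “persistence” statistics defined in this abstract can be illustrated with a small sketch. The observation data and names below are ours for illustration, not the paper's: prevalence is the fraction of measurements on which the dominant route appears, and persistence is measured here by the lengths of runs over which the route stays unchanged.

```python
from collections import Counter
from itertools import groupby

# Hypothetical (timestamp, route) observations for one source-destination
# pair; a route is the tuple of hops that a traceroute reported.
observations = [
    (0,     ("A", "B", "C")),
    (3600,  ("A", "B", "C")),
    (7200,  ("A", "D", "C")),
    (10800, ("A", "B", "C")),
    (14400, ("A", "B", "C")),
]

def prevalence(obs):
    """Dominant route and the fraction of measurements on which it appeared."""
    counts = Counter(route for _, route in obs)
    dominant, n = counts.most_common(1)[0]
    return dominant, n / len(obs)

def persistence(obs):
    """Lengths (in measurements) of maximal runs of an unchanged route."""
    return [len(list(g)) for _, g in groupby(route for _, route in obs)]

route, prev = prevalence(observations)
print(route, prev)               # → ('A', 'B', 'C') 0.8
print(persistence(observations)) # → [2, 1, 2]
```

With real traceroute data one would weight runs by wall-clock duration rather than measurement count, since the paper's measurement intervals were not uniform.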
On the universality and cultural specificity of emotion recognition: a meta-analysis
 Psychological Bulletin
, 2002
Rational approximations to rational models: Alternative algorithms for category learning
Abstract

Cited by 27 (5 self)
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of “rational process models” that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson’s (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure …
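The Gibbs sampling procedure mentioned in this abstract can be sketched for a toy one-dimensional case: cluster assignments are resampled one at a time under a Chinese restaurant process prior, with a Gaussian likelihood of known variance and a conjugate Normal prior on each cluster mean. The data and hyperparameters below are illustrative assumptions, not values from the paper.

```python
import math, random

random.seed(0)

# Toy 1-D data with two clumps; purely illustrative.
data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
ALPHA, SIGMA2, MU0, TAU2 = 1.0, 0.5, 0.0, 10.0  # assumed hyperparameters

def predictive(x, members):
    """Posterior predictive density of x for a Gaussian cluster
    (known variance SIGMA2, conjugate Normal prior on the cluster mean)."""
    n, s = len(members), sum(members)
    post_var = 1.0 / (n / SIGMA2 + 1.0 / TAU2)
    post_mean = post_var * (s / SIGMA2 + MU0 / TAU2)
    var = post_var + SIGMA2
    return math.exp(-(x - post_mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gibbs_sweep(z):
    """One sweep: resample each item's cluster label given all the others."""
    for i, x in enumerate(data):
        z[i] = None
        clusters = {}
        for j, zj in enumerate(z):
            if zj is not None:
                clusters.setdefault(zj, []).append(data[j])
        # CRP prior weight (cluster size, or ALPHA for a new cluster)
        # times the predictive likelihood, then sample proportionally.
        options, weights = [], []
        for k, members in clusters.items():
            options.append(k)
            weights.append(len(members) * predictive(x, members))
        options.append(max(clusters, default=-1) + 1)
        weights.append(ALPHA * predictive(x, []))
        r, acc = random.random() * sum(weights), 0.0
        for k, w in zip(options, weights):
            acc += w
            if r <= acc:
                z[i] = k
                break

z = [0] * len(data)
for _ in range(50):
    gibbs_sweep(z)
print(z)  # cluster labels; the two clumps typically end up in separate clusters
```

This collapses the cluster means analytically; the unnormalized CRP weights suffice because the sampler normalizes when drawing.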
Co-evolving protein residues: maximum likelihood identification and relationship to structure
 J. Mol. Biol
, 1999
"... There has been a great deal of recent research on ..."
Efficient Measurement of the Percolation Threshold for Fully Penetrable Discs
, 2000
Abstract

Cited by 24 (0 self)
We study the percolation threshold for fully penetrable discs by measuring the average location of the frontier for a statistically inhomogeneous distribution of fully penetrable discs. We use two different algorithms to efficiently simulate the frontier, including the continuum analogue of an algorithm previously used for gradient percolation on a square lattice. We find that φ_c = 0.676 339 ± 0.000 004, thus providing an extra significant digit of accuracy to this constant.
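The quantity being pinned down here can be illustrated with a much cruder Monte Carlo check than the frontier method the abstract describes: drop penetrable discs in a unit square at a given filling factor φ and test for a left-to-right spanning cluster with union-find. All parameters below are illustrative; this sketch only shows that spanning probability jumps from near 0 to near 1 across the threshold.

```python
import math, random

random.seed(1)

def find(p, i):
    while p[i] != i:
        p[i] = p[p[i]]  # path halving
        i = p[i]
    return i

def union(p, a, b):
    p[find(p, a)] = find(p, b)

def spans(num_discs, radius=0.05):
    """Drop fully penetrable discs in the unit square and test whether
    overlapping discs connect the left edge to the right edge."""
    centers = [(random.random(), random.random()) for _ in range(num_discs)]
    LEFT, RIGHT = num_discs, num_discs + 1
    p = list(range(num_discs + 2))
    for i, (xi, yi) in enumerate(centers):
        if xi < radius:
            union(p, i, LEFT)
        if xi > 1.0 - radius:
            union(p, i, RIGHT)
        for j in range(i):
            xj, yj = centers[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 < (2 * radius) ** 2:
                union(p, i, j)
    return find(p, LEFT) == find(p, RIGHT)

def spanning_probability(phi, radius=0.05, trials=40):
    # Filling factor phi relates to disc count N by phi = 1 - exp(-N*pi*r^2).
    n = round(-math.log(1.0 - phi) / (math.pi * radius ** 2))
    return sum(spans(n, radius) for _ in range(trials)) / trials

print(spanning_probability(0.40))  # well below phi_c ~ 0.6763: near 0
print(spanning_probability(0.90))  # well above phi_c: near 1
```

Resolving a sixth decimal digit of φ_c this way would be hopeless; the paper's gradient/frontier approach is precisely a device for extracting the threshold far more efficiently than repeated spanning trials.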
Bayesian models of cognition
Abstract

Cited by 23 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty …
The influence of categories on perception: explaining the perceptual magnet effect as optimal statistical inference
 Psychological Review
, 2009
Abstract

Cited by 11 (6 self)
A variety of studies have demonstrated that organizing stimuli into categories can affect the way the stimuli are perceived. We explore the influence of categories on perception through one such phenomenon, the perceptual magnet effect, in which discriminability between vowels is reduced near prototypical vowel sounds. We present a Bayesian model to explain why this reduced discriminability might occur: It arises as a consequence of optimally solving the statistical problem of perception in noise. In the optimal solution to this problem, listeners’ perception is biased toward phonetic category means because they use knowledge of these categories to guide their inferences about speakers’ target productions. Simulations show that model predictions closely correspond to previously published human data, and novel experimental results provide evidence for the predicted link between perceptual warping and noise. The model unifies several previous accounts of the perceptual magnet effect and provides a framework for exploring categorical effects in other domains.
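The bias toward the category mean has a simple closed form in the single-category Gaussian case: if the target production is drawn from N(μ_c, cat_var) and the heard stimulus adds noise of variance noise_var, the posterior mean estimate of the target is a weighted average of stimulus and category mean. This sketch illustrates that shrinkage; the variable names are ours, and the full model in the paper uses multiple categories.

```python
# Single-category case: stimulus S = T + noise, noise ~ N(0, noise_var),
# target T ~ N(mu_c, cat_var).  The optimal (posterior mean) estimate of T
# shrinks the stimulus toward the category mean; more noise, more shrinkage.
def percept(stimulus, mu_c, cat_var, noise_var):
    w = cat_var / (cat_var + noise_var)
    return w * stimulus + (1 - w) * mu_c

mu_c = 0.0
for s in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(s, percept(s, mu_c, cat_var=1.0, noise_var=1.0))
# with equal variances each percept lies halfway between stimulus and mean:
# -2.0 -> -1.0, -1.0 -> -0.5, 0.0 -> 0.0, 1.0 -> 0.5, 2.0 -> 1.0
```

The predicted link between warping and noise falls out directly: raising noise_var lowers w, pulling percepts closer to μ_c, which is why discriminability shrinks near the prototype.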
Derivation of error distribution in least-squares steganalysis
 IEEE Transactions on Information Forensics and Security
, 2007
Abstract

Cited by 9 (4 self)
Abstract—This paper considers the least squares method (LSM) for estimation of the length of payload embedded by least-significant bit replacement in digital images. Errors in this estimate have already been investigated empirically, showing a slight negative bias and substantially heavy tails (extreme outliers). In this paper, (approximations for) the estimator distribution over cover images are derived: this requires analysis of the cover image assumption of the LSM algorithm and a new model for cover images which quantifies deviations from this assumption. The theory explains both the heavy tails and the negative bias in terms of cover-specific observable properties, and suggests improved detectors. It also allows the steganalyst to compute precisely, for the first time, a p-value for testing the hypothesis that a hidden payload is present. This is the first derivation of steganalysis estimator performance. Index Terms—Least-significant bit (LSB) embedding, steganography, structural steganalysis.