Results 1–10 of 275
On limits of wireless communications in a fading environment when using multiple antennas
 Wireless Personal Communications
, 1998
Abstract

Cited by 1526 (7 self)
Abstract. This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth-efficient delivery of higher bitrates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% ...
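The scaling claim above can be illustrated numerically. The sketch below is not the paper's Rayleigh-fading capacity expression (which involves the log-determinant of the channel matrix); it is a deterministic idealization of n equal parallel channels splitting a fixed total transmit power, which reproduces the "almost n bits/cycle per 3 dB" behavior and the n = 1 Shannon baseline:

```python
import math

def capacity_siso(snr_db):
    """Shannon capacity of a single-antenna AWGN link, in bits/cycle."""
    return math.log2(1 + 10 ** (snr_db / 10))

def capacity_mea_ideal(n, snr_db):
    """Idealized n x n MEA: n equal parallel channels sharing the same
    total transmit power (fading ignored; illustrative only)."""
    snr = 10 ** (snr_db / 10)
    return n * math.log2(1 + snr / n)

# The n = 2, 4, 16 cases at 21 dB mentioned in the abstract
for n in (1, 2, 4, 16):
    print(f"n={n:2d}: {capacity_mea_ideal(n, 21):6.1f} bits/cycle")
```

Note that a 3 dB SNR increase buys roughly one extra bit/cycle at n = 1, while the idealized n-element capacity grows nearly linearly in n at fixed SNR.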
A theory of memory retrieval
 Psychol. Rev
, 1978
Abstract

Cited by 377 (73 self)
A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes the search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympathetic vibrations in other tuning forks. Evidence is accumulated in parallel from each probe-memory item comparison, and each comparison is modeled by a continuous random walk process. In item recognition, the decision process is self-terminating on matching comparisons and exhaustive on nonmatching comparisons. The mathematical model produces predictions about accuracy, mean reaction time, error latency, and reaction time distributions that are in good accord with experimental data. The theory is applied to four item recognition paradigms (Sternberg, prememorized list, study-test, and continuous) and to speed-accuracy paradigms; results are found to provide a basis for comparison of these paradigms. It is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a retrieval scheme. At the present time, one of the major deficiencies in cognitive psychology is the lack of explicit theories that encompass more than a single experimental paradigm. The lack of such theories and some of the unfortunate consequences have been discussed recently by
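The continuous random walk process mentioned above can be sketched as a discretized diffusion between two decision boundaries; with positive drift, the upper ("correct") boundary is hit more often, and the model yields both accuracy and a reaction time distribution. All parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def random_walk_trial(drift, bound=1.0, sigma=1.0, dt=0.01):
    """One discretized continuous random walk between -bound and +bound.
    Returns (hit_upper_boundary, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
    return x >= bound, t

random.seed(0)
trials = [random_walk_trial(drift=0.5) for _ in range(2000)]
accuracy = sum(hit for hit, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

With drift 0.5, bound 1, and unit diffusion, the analytic hit probability is 1/(1 + e^{-1}) ≈ 0.73 and the mean decision time is 2·tanh(0.5) ≈ 0.92, which the simulation approximates.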
A Markov Model for the Term Structure of Credit Risk Spreads
 Review of Financial Studies
, 1997
Abstract

Cited by 236 (12 self)
This article provides a Markov model for the term structure of credit risk spreads. The model is based on Jarrow and Turnbull (1995), with the bankruptcy process following a discrete state space Markov chain in credit ratings. The parameters of this process are easily estimated using observable data. This model is useful for pricing and hedging corporate debt with imbedded options, for pricing and hedging OTC derivatives with counterparty risk, for pricing and hedging (foreign) government bonds subject to default risk (e.g., municipal bonds), for pricing and hedging credit derivatives, and for risk management. This article presents a simple model for valuing risky debt that explicitly incorporates a firm's credit rating as an indicator of the likelihood of default. As such, this article presents an arbitrage-free model for the term structure of credit risk spreads and their evolution through time. This model will prove useful for the pricing and hedging of corporate debt with imbedded options, for the pricing and hedging of OTC derivatives with counterparty risk, for the pricing and hedging of (foreign) government bonds subject to default risk (e.g., municipal bonds), and for the pricing and hedging of credit derivatives (e.g., credit sensitive notes and spread adjusted notes). This model can also... We would like to thank John Tierney of Lehman Brothers for providing the bond index price data, and Tal Schwartz for computational assistance. We would also like to acknowledge helpful comments received from an anonymous referee. Send all correspondence to Robert A. Jarrow, Johnson Graduate School of Management, Cornell University, Ithaca, NY 14853.
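The core mechanism, a discrete state space Markov chain over credit ratings with an absorbing default state, can be sketched as follows. The transition matrix below is made up for illustration; the paper estimates such parameters from observable data:

```python
# Illustrative one-period rating transition matrix; states: A, B, D.
# D (default) is absorbing. The probabilities are assumed, not estimated.
P = [
    [0.90, 0.08, 0.02],  # from A
    [0.10, 0.80, 0.10],  # from B
    [0.00, 0.00, 1.00],  # from D
]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cumulative_default_prob(state, periods):
    """Probability of hitting default within `periods`, via matrix powers."""
    Pt = P
    for _ in range(periods - 1):
        Pt = mat_mul(Pt, P)
    return Pt[state][2]
```

Term structures of default probabilities by rating (and hence credit spreads) follow from powers of the one-period matrix; lower ratings imply uniformly higher cumulative default probabilities.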
Testing Continuous-Time Models of the Spot Interest Rate
 Review of Financial Studies
, 1996
Abstract

Cited by 192 (7 self)
Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically. We do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals. The principal source of rejection of existing models is the strong nonlinearity of the drift. Around its mean, where the drift is essentially zero, the spot rate behaves like a random walk. The drift then mean-reverts strongly when far away from the mean. The volatility is higher when away from the mean. The continuous-time financial theory has developed extensive tools to price derivative securities when the underlying traded asset(s) or non-traded factor(s) follow stochastic differential equations [see Merton (1990) for examples]. However, as a practical matter, how to specify an appropriate stochastic differential equation is for the most part an unanswered question. For example, many different continuous-time The comments and suggestions of Kerry Back (the editor) and an anonymous referee were very helpful. I am also grateful to George Constantinides,
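The drift shape described above, essentially zero near the mean with strong mean reversion far away, can be visualized with a simple simulation. Note the paper deliberately avoids discrete approximations in its testing procedure; the Euler scheme and cubic drift below are only an illustration of the qualitative behavior, with made-up parameters:

```python
import math
import random

def drift(r, mu=0.06, a=500.0):
    """Illustrative cubic drift: essentially zero near the mean mu,
    strongly mean-reverting far from it (parameters are assumed)."""
    return -a * (r - mu) ** 3

random.seed(3)
r, path = 0.06, []
dt, sigma = 1 / 250, 0.02      # daily steps, constant vol for simplicity
for _ in range(250 * 40):      # 40 years of simulated daily rates
    r += drift(r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    path.append(r)
```

Near 6% the process wanders like a random walk, yet the cubic pull keeps the simulated path confined to a plausible range over decades.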
Transition Density, A New Measure of Activity in Digital Circuits
 IEEE Transactions on Computer-Aided Design
, 1992
Abstract

Cited by 147 (24 self)
Reliability assessment is an important part of the design process of digital integrated circuits. We observe that a common thread that runs through most causes of runtime failure is the extent of circuit activity, i.e., the rate at which its nodes are switching. We propose a new measure of activity, called the transition density, which may be defined as the "average switching rate" at a circuit node. Based on a stochastic model of logic signals, we also present an algorithm to propagate density values from the primary inputs to internal and output nodes. To illustrate the practical significance of this work, we demonstrate how the density values at internal nodes can be used to study circuit reliability by estimating (1) the average power & ground currents, (2) the average power dissipation, (3) the susceptibility to electromigration failures, and (4) the extent of hot-electron degradation. The density propagation algorithm has been implemented in a prototype density simulator. Using ...
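A common formulation of density propagation in this line of work (assumed here, since the abstract does not spell it out) weights each input's density by the probability of the gate's Boolean difference with respect to that input. A minimal sketch for 2-input gates with independent inputs, using made-up probabilities and densities:

```python
def and2(p1, d1, p2, d2):
    """Propagate signal probability and transition density through a
    2-input AND gate, assuming independent inputs:
    D(y) = P(dy/dx1)*D(x1) + P(dy/dx2)*D(x2); for AND, dy/dx1 = x2."""
    return p1 * p2, p2 * d1 + p1 * d2

def or2(p1, d1, p2, d2):
    """Same for a 2-input OR gate; here dy/dx1 = NOT x2."""
    return p1 + p2 - p1 * p2, (1 - p2) * d1 + (1 - p1) * d2

# y = (a AND b) OR c, with assumed input probabilities and densities
p_ab, d_ab = and2(0.5, 2.0, 0.5, 2.0)
p_y, d_y = or2(p_ab, d_ab, 0.5, 2.0)
```

Propagating (probability, density) pairs gate by gate in topological order gives activity estimates at every internal node, which is what the reliability analyses (1)–(4) consume.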
A comparison of sequential sampling models for two-choice reaction time
 Psychological Review
, 2004
Abstract

Cited by 112 (30 self)
The authors evaluated 4 sequential sampling models for 2-choice decisions—the Wiener diffusion, Ornstein–Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models—by fitting them to the response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented with assumptions of variability across trials in the rate of accumulation of evidence from stimuli, the values of response criteria, and the value of base RT across trials. Although there was substantial model mimicry, empirical conditions were identified under which the models make discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria, although the last was unable to produce error RTs shorter than correct RTs. The relationship between these models and 3 recent, neurally inspired models was also examined. A common feature of many tasks studied by experimental psychologists is that they involve a simple decision about some feature of a stimulus that is expressed as a choice between two alternative responses. Because decisions of this type are so fundamental to theory development and evaluation, their study has been
How Many Iterations in the Gibbs Sampler?
 In Bayesian Statistics 4
, 1992
Abstract

Cited by 96 (5 self)
When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest focuses on quantiles of functionals of the posterior distribution, we describe an easily implemented method for determining the total number of iterations required, and also the number of initial iterations that should be discarded to allow for "burn-in". The method uses only the Gibbs iterates themselves, and does not, for example, require external specification of characteristics of the posterior density. Here the method is described for the situation where one long run is generated, but it can also be easily applied if there are several runs from different starting points. It also applies more generally to Markov chain Monte Carlo schemes other than the Gibbs sampler. It can also be used when several quantiles are to be estimated, when the quantities of interest are probabilities rath...
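The paper's own procedure works on a binarized quantile indicator fitted to a two-state Markov chain; the sketch below is not that method, but a simpler diagnostic in the same spirit, estimating how many effectively independent draws a correlated chain contains from its autocorrelations (an AR(1) series stands in for Gibbs output):

```python
import random
import statistics

def autocorr(xs, lag):
    """Sample autocorrelation at a given lag."""
    m = statistics.fmean(xs)
    num = sum((xs[i] - m) * (xs[i + lag] - m) for i in range(len(xs) - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def effective_sample_size(xs, max_lag=100):
    """N / (1 + 2 * sum of leading positive autocorrelations)."""
    rho_sum = 0.0
    for lag in range(1, max_lag + 1):
        r = autocorr(xs, lag)
        if r < 0.05:            # truncate once correlation has died out
            break
        rho_sum += r
    return len(xs) / (1 + 2 * rho_sum)

# Strongly autocorrelated AR(1) chain as a stand-in for MCMC output
random.seed(1)
xs = [0.0]
for _ in range(5000):
    xs.append(0.9 * xs[-1] + random.gauss(0, 1))
ess = effective_sample_size(xs)
```

For AR(1) with coefficient 0.9, roughly 18 iterates are worth one independent draw, so a 5000-iterate run carries only a few hundred effective samples; this is the kind of gap between nominal and usable run length that motivates run-length diagnostics.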
Continuous versus discrete information processing: Modeling the accumulation of partial information
 Psychological Review
, 1988
Abstract

Cited by 73 (37 self)
David Meyer and colleagues have recently developed a new technique for examining the time course of information processing. The technique is a variant of the response signal procedure: On some trials subjects are presented with a signal that requires them to respond, whereas on other trials they respond normally. These two types of trials are randomly intermixed so subjects are unable to anticipate which kind of trial is to be presented next. For data analysis, it is assumed that on the signal trials observed reaction times are a probability mixture of regular responses and guesses based on partial information. The accuracy of guesses based on partial information can be determined by using the data from the regular trials and a simple race model to remove the contribution of fast-finishing regular trials from signal trial data. This analysis shows that the accuracy of guesses is relatively low and is either approximately constant or grows slowly over the time course of retrieval. Meyer and colleagues have argued that this pattern of results rules out most continuous models of information processing. But the analyses presented in this article show that this pattern is consistent with several stochastic reaction time models: the simple random walk, the runs, and the continuous diffusion models. The diffusion model is assessed with data from a new experiment using the study-test recognition memory procedure. Fitting the diffusion model to the data from regular trials fixes
Characterizing the Variability of Arrival Processes with Indices of Dispersion
 IEEE Journal on Selected Areas in Communications
, 1990
Abstract

Cited by 61 (0 self)
We propose to characterize the burstiness of packet arrival processes with indices of dispersion for intervals and for counts. These indices, which are functions of the variance of intervals and counts, are relatively straightforward to estimate and convey much more information than simpler indices, such as the coefficient of variation, that are often used to describe burstiness quantitatively.
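The index of dispersion for counts is the variance-to-mean ratio of the number of arrivals in windows of a given length (exactly 1 for a Poisson process, larger for bursty traffic). A minimal sketch, comparing a Poisson stream with a batched version of the same stream (all parameters are illustrative):

```python
import bisect
import random
import statistics

def idc(arrivals, window, t_end):
    """Index of dispersion for counts: Var/Mean of the number of
    arrivals in consecutive windows of fixed length."""
    k_max = int(t_end // window)
    counts = [bisect.bisect_left(arrivals, (k + 1) * window)
              - bisect.bisect_left(arrivals, k * window)
              for k in range(k_max)]
    return statistics.variance(counts) / statistics.fmean(counts)

random.seed(2)
t, poisson, bursty = 0.0, [], []
while t < 10000:
    t += random.expovariate(1.0)   # rate-1 Poisson arrival epochs
    poisson.append(t)
    bursty.extend([t] * 4)         # same epochs, but batches of 4 packets
```

The Poisson stream yields an IDC near 1 at any window length, while the batched stream, with identical epoch statistics, shows an IDC near 4: exactly the burstiness that a coefficient of variation on inter-arrival times alone would understate.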
Collective Behavior of Networks with Linear (VLSI) Integrate-and-Fire Neurons
, 1999
Abstract

Cited by 60 (19 self)
Introduction The integrate-and-fire (IF) neuron has become popular as a simplified neural element in modeling the dynamics of large-scale networks of spiking neurons. A simple version of an IF neuron integrates the input current as an RC circuit (with a leakage current proportional to the depolarization) and emits a spike when the depolarization crosses a threshold. We will refer to it as the RC neuron. Networks of neurons schematized in this way exhibit a wide variety of characteristics observed in single and multiple neuron recordings in cortex in vivo. With biologically plausible time constants and synaptic efficacies, they can maintain spontaneous activity, and when the network is subjected to Hebbian learning (subsets of cells are repeatedly activated by the external stimuli), it shows many stable states of activation, each corresponding to a different attractor of the network dynamics, in coexistence with spontaneous activity (Amit & Brunel, 1997a). These s...
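The RC neuron described above can be sketched in a few lines: the membrane potential obeys tau·dV/dt = -V + R·I, with a spike and reset at threshold. Parameter values below are generic choices for illustration, not taken from the paper:

```python
def simulate_rc_neuron(i_in, tau=0.02, r_m=1.0, theta=1.0,
                       v_reset=0.0, dt=1e-4, t_end=1.0):
    """Euler simulation of an RC (leaky) integrate-and-fire neuron:
    tau * dV/dt = -V + r_m * i_in; emit a spike and reset at threshold.
    Returns the list of spike times over [0, t_end)."""
    v, spikes = 0.0, []
    for k in range(int(t_end / dt)):
        v += (dt / tau) * (-v + r_m * i_in)
        if v >= theta:
            spikes.append(k * dt)
            v = v_reset
    return spikes
```

With r_m·i_in = 2 and threshold 1, the analytic inter-spike interval is tau·ln(2) ≈ 13.9 ms, about 72 spikes/s; a subthreshold input (r_m·i_in < theta) saturates below threshold and never fires, the key nonlinearity that lets networks of such units sustain distinct attractor states.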