Results 1–10 of 16
The Variation of Software Survival Time for Different Operational Input Profiles (or why you can wait a long time for a big bug to fail)
 in Proc. FTCS-23
, 1993
Cited by 16 (0 self)
Abstract
This paper provides experimental and theoretical evidence for the existence of contiguous failure regions in the program input space (`blob' defects). For real-time systems where successive input values tend to be similar, blob defects can have a major impact on the software survival time because the failure probability is not constant. For example, with a `random walk' input sequence, the probability of failure decreases as the time since the last failure increases. It is shown that the key factors affecting the survival time are the input `trajectory', the rate of change of the input values and the `surface area' of the defect (rather than its volume).

1 Introduction

This paper is an extension of earlier experimental studies on the failure characteristics of some known software defects [1, 2, 3]. The results of these studies cast doubt on the general validity of the assumption of a constant probability of failure for software. In conventional reliability theory, it is often assumed that...
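The effect described in the abstract is easy to reproduce in a toy simulation. The sketch below is an illustration only, not the paper's experimental setup: the blob interval, step size, and horizon are arbitrary choices. It drives a bounded random-walk input and records the time until the input first enters a contiguous failure region:

```python
import random

def survival_times(n_runs=200, steps=5000, blob=(0.45, 0.55),
                   step_size=0.01, seed=1):
    """Simulate a bounded random-walk input on [0, 1]; for each run,
    record the time of first entry into the contiguous failure region."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_runs):
        x = rng.random()
        for t in range(1, steps + 1):
            # random-walk input: successive values are similar
            x = min(1.0, max(0.0, x + rng.uniform(-step_size, step_size)))
            if blob[0] <= x <= blob[1]:
                times.append(t)  # failure: input landed inside the blob
                break
        else:
            times.append(steps)  # censored: no failure within the horizon
    return times
```

In this toy model, shrinking the step size lengthens the typical survival time, mirroring the paper's claim that the rate of change of input values, not just the defect's size, governs how long a program survives.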
Quantum Computation
 In Annual Review of Computational Physics VI, D. Stauffer, Ed., World Scientific
, 1999
Cited by 16 (0 self)
Abstract
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect the closely related topics of quantum information and quantum communication. By narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity, because any realistic model will inevitably be subject to such inaccuracies. I have tried to put all results in their context, asking what the implications for other issues in computer science and physics are. At the end of this review I make these connections explicit, discussing the possible implications of quantum computation for fundamental physical questions, such as the transition from quantum to classical physics.
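For a concrete feel of one algorithm the review covers, here is a toy state-vector simulation of Grover's search on a classical computer (plain Python, exponential in the number of qubits, purely illustrative; the function name and structure are my own):

```python
import math

def grover_search(n_qubits, marked, iterations=None):
    """Toy state-vector simulation of Grover's algorithm for one marked item.
    Returns the measurement probability of each basis state."""
    N = 2 ** n_qubits
    if iterations is None:
        # the standard choice: roughly (pi/4) * sqrt(N) iterations
        iterations = int(round(math.pi / 4 * math.sqrt(N)))
    amp = [1.0 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]           # oracle: flip the marked amplitude
        mean = sum(amp) / N                  # diffusion: inversion about the mean
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]
```

With 3 qubits (N = 8) and 2 iterations, the marked item is measured with probability about 0.945, illustrating the quadratic speedup over the N/2 classical expected queries.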
On the structure of t-designs
 SIAM J. on Algebraic and Discrete Methods
, 1980
Cited by 15 (1 self)
Abstract. It is possible to view the combinatorial structures known as (integral) t-designs as Z-modules in a natural way. In this note we introduce a polynomial associated to each such Z-module. Using this association, we quickly derive explicit bases for the important class of submodules which correspond to the so-called null designs.

Introduction. Among the most fundamental (and least understood) types of combinatorial configurations are the t-designs [2], [5], [6]. These can be defined as follows. Let v, k, and λ be positive integers satisfying t < k < v. A t-design S_λ(t, k, v) is a collection ℬ of k-subsets B (called blocks) of a v-set V with the property that every t-subset of V occurs as a subset of exactly λ blocks B. (It is not required that blocks...
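The defining property is easy to state in code. Below is a small checker (the function name is my own) together with the Fano plane, the unique 2-(7,3,1) design:

```python
from itertools import combinations

def is_t_design(blocks, v, t, lam):
    """Check that every t-subset of the point set {0,...,v-1}
    is contained in exactly lam of the given blocks."""
    for T in combinations(range(v), t):
        count = sum(1 for B in blocks if set(T) <= set(B))
        if count != lam:
            return False
    return True

# The Fano plane: 7 points, 7 lines of 3 points; every pair of points
# lies on exactly one line, so it is a 2-(7,3,1) design.
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
```

A 2-design is automatically a 1-design: here every point lies in exactly 3 blocks, so the Fano plane is also a 1-(7,3,3) design, but it is not a 3-design.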
Priority and Maximal Progress are completely axiomatisable (Extended Abstract)
, 1998
Cited by 14 (5 self)
Abstract
During the last decade, CCS has been extended in different directions, among them priority and real time. One of the most satisfactory results for CCS is Milner's complete proof system for observational congruence [28]. Observational congruence is fair in the sense that it is possible to escape divergence, reflected by the axiom recX.(τ.X + P) = recX.τ.P. In this paper we discuss observational congruence in the context of interactive Markov chains, a simple stochastic timed variant of CCS with maximal progress. This property implies that observational congruence becomes unfair, i.e. it is not always possible to escape divergence. The same problem arises in calculi with priority, so completeness results for such calculi modulo observational congruence have been unknown until now. We obtain a complete proof system by replacing the above axiom with a set of axioms that allow divergence to be escaped by means of a silent alternative. This treatment can be profitably adapted to other calculi.
Subadditivity re-examined: the case for Value-at-Risk. FMG Discussion Papers, London School of Economics
, 2005
Cited by 11 (1 self)
Abstract
This paper explores the potential for violations of VaR subadditivity both theoretically and by simulations, and finds that for most practical applications VaR is subadditive. Hence, there is no reason to choose a more complicated risk measure than VaR solely for reasons of subadditivity.

KEY WORDS: Value-at-Risk, subadditivity, regular variation, tail index, heavy-tailed distribution.
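The subadditivity property can be checked empirically. A minimal sketch (names and parameters are my own; normal losses, a light-tailed case where the paper's conclusion that VaR is subadditive is expected to hold):

```python
import random

def empirical_var(losses, alpha=0.99):
    """Empirical Value-at-Risk: the alpha order statistic of the losses."""
    s = sorted(losses)
    return s[int(alpha * len(s)) - 1]

def subadditivity_check(n=50000, alpha=0.99, seed=0):
    """Compare VaR of a merged position with the sum of stand-alone VaRs
    for two independent standard-normal loss distributions."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rng.gauss(0, 1) for _ in range(n)]
    var_joint = empirical_var([a + b for a, b in zip(x, y)], alpha)
    return var_joint, empirical_var(x, alpha) + empirical_var(y, alpha)
```

For independent standard normals the merged 99% VaR is roughly sqrt(2) * 2.33 ≈ 3.3, well below the sum of the stand-alone VaRs (≈ 4.7); the regime the paper delimits is the extremely heavy-tailed one, where this inequality can fail.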
Option Valuation with Conditional Heteroskedasticity and Non-Normality
, 2009
Cited by 8 (5 self)
Abstract
We provide results for the valuation of European-style contingent claims for a large class of specifications of the underlying asset returns. Our valuation results are obtained in a discrete-time, infinite state-space setup using the no-arbitrage principle and an equivalent martingale measure. Our approach allows for general forms of heteroskedasticity in returns, and valuation results for homoskedastic processes can be obtained as a special case. It also allows for conditionally non-normal return innovations, which is critically important because heteroskedasticity alone does not suffice to capture the option smirk. We analyze a class of equivalent martingale measures for which the resulting risk-neutral return dynamics are from the same family of distributions as the physical return dynamics. In this case, our framework nests the valuation results obtained by Duan (1995) and Heston and Nandi (2000) by allowing for a time-varying price of risk and non-normal innovations. We provide extensions of these results to more general equivalent martingale measures and to discrete-time stochastic volatility models, and we analyze the relation between our results and those obtained for continuous-time models.
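As a sketch of the nested Duan (1995) special case, the following prices a European call by Monte Carlo under risk-neutral GARCH(1,1) dynamics with normal innovations and a zero price of risk. All parameter values and names are illustrative, not taken from the paper:

```python
import math
import random

def garch_call_price(S0=100.0, K=100.0, r=0.0002, T=60,
                     omega=1e-6, alpha=0.08, beta=0.90,
                     n_paths=10000, seed=7):
    """Monte Carlo price of a European call (maturity T days) under
    risk-neutral GARCH(1,1): h_{t+1} = omega + alpha*eps_t^2 + beta*h_t,
    with the drift set so that discounted prices are martingales."""
    rng = random.Random(seed)
    h0 = omega / (1 - alpha - beta)   # start at the stationary variance
    payoff_sum = 0.0
    for _ in range(n_paths):
        logS, h = math.log(S0), h0
        for _ in range(T):
            z = rng.gauss(0.0, 1.0)
            logS += r - 0.5 * h + math.sqrt(h) * z   # martingale drift correction
            h = omega + alpha * h * z * z + beta * h  # GARCH variance update
        payoff_sum += max(math.exp(logS) - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths
```

Setting alpha = beta = 0 recovers homoskedastic (Black-Scholes-like) dynamics as a special case, mirroring the nesting described in the abstract.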
Explicit formulas for hook walks on continual Young diagrams
, 2003
Cited by 4 (0 self)
Abstract
We consider, following the work of S. Kerov, random walks which are continuous-space generalizations of the Hook Walks defined by Greene-Nijenhuis-Wilf, performed under the graph of a continual Young diagram. The limiting point of these walks is a point on the graph of the diagram. We present several explicit formulas giving the probability densities of these limiting points in terms of the shape of the diagram. This partially resolves a conjecture of Kerov concerning an explicit formula for the so-called Markov transform. We also present two inverse formulas, reconstructing the shape of the diagram in terms of the densities of the limiting point of the walks. One of the two formulas can be interpreted as an inverse formula for the Markov transform. As a corollary, some new integration identities are derived.
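For intuition, here is the discrete Greene-Nijenhuis-Wilf hook walk that the continual version generalizes (a sketch; the function name and representation are my own). Starting from a uniformly random cell, the walk repeatedly jumps to a uniformly random cell of the current cell's hook until it reaches a corner:

```python
import random

def hook_walk(shape, seed=0):
    """Discrete Greene-Nijenhuis-Wilf hook walk on a Young diagram.
    shape = list of row lengths (weakly decreasing); returns the
    corner cell (row, col) at which the walk terminates."""
    rng = random.Random(seed)
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    i, j = rng.choice(cells)                  # uniform starting cell
    while True:
        # the hook of (i, j): cells strictly to its right and strictly below
        arm = [(i, jj) for jj in range(j + 1, shape[i])]
        leg = [(ii, j) for ii in range(i + 1, len(shape)) if shape[ii] > j]
        hook = arm + leg
        if not hook:                          # (i, j) is a corner: stop
            return (i, j)
        i, j = rng.choice(hook)
```

In the original application, the probability of stopping at a given corner yields the hook-length formula; the paper studies the densities of the analogous limiting points when the diagram's boundary is a continuous curve.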
2002, Temporal resolution of uncertainty, the investment policy of levered firms and corporate debt yields, Working paper
Cited by 2 (1 self)
Abstract
as Viral Acharya and Christopher Mann for their insightful comments. This paper would not have seen the light of day (at least not in its present form) without the constant help and support of Kenneth Garbade. A special thought goes to Claudia Perlich and Jerold Weiss. The usual disclaimer applies.

Temporal Resolution of Uncertainty, the Investment Policy of Leveraged Firms and Corporate Debt Yields

This paper attempts to link the agency literature (concerned with whether debt will trigger underinvestment incentives or risk-shifting behavior) with the literature on temporal resolution of uncertainty. To the best of our knowledge, apart from one article by John and Ronen (1990), there is no research article linking the two literatures. We are concerned here with how the product/input market influences deviations from the optimal investment policy, in particular with the extent to which the speed of resolution of uncertainty in the industry in which a given firm operates affects the risk-shifting behavior of a shareholder-aligned manager. We assume that investors are risk neutral and that the return on the risky technology is normally distributed. It is then shown that the pattern of temporal resolution of uncertainty monotonically affects risk shifting as well as bond yields, even after contracts mitigating deviations from the optimal investment policy have been written; empirical implications are derived and discussed.
Comparison of Three Algorithms for Lévy Noise Generation
Cited by 2 (0 self)
Abstract
In this paper, we describe three algorithms for the generation of symmetric Lévy noise and discuss their relative performance in terms of execution time on an Intel Pentium M processor at 1500 MHz. The relative performance of the three algorithms is given as a function of the Lévy stable index α and of the number of random points produced.
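The abstract does not name the three algorithms, but a standard generator in this family is the Chambers-Mallows-Stuck transform; a sketch of its symmetric (β = 0) case is shown below, presumably of the kind the paper benchmarks (this is my own illustration, not necessarily one of the paper's three):

```python
import math
import random

def symmetric_stable(alpha, rng):
    """One symmetric alpha-stable variate (unit scale) via the
    Chambers-Mallows-Stuck method, beta = 0 case:
    U uniform on (-pi/2, pi/2), W standard exponential."""
    U = (rng.random() - 0.5) * math.pi
    W = -math.log(1.0 - rng.random())
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(U)   # alpha = 1 reduces to the Cauchy distribution
    return (math.sin(alpha * U) / math.cos(U) ** (1.0 / alpha)
            * (math.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))
```

For alpha = 2 the transform reduces to a normal distribution with variance 2, which gives a convenient sanity check on any implementation.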
Statistical Inference for Some Volterra Population Processes in a Random Environment
, 1981
Abstract
Inference in random environment

ABSTRACT: We study the problem of maximum likelihood estimation in some random-environment population models defined through Itô stochastic differential equations. It is shown that a criticality parameter can be estimated consistently. The asymptotic behavior of the estimators is analyzed and a goodness-of-fit test is proposed.

KEY WORDS: Stochastic differential equation, random environment, population process, maximum likelihood estimation. AMS (1980) Classification: Primary 62F12, Secondary 60H10.

1. Introduction

Classical population models are usually written in the form of differential equations. These deterministic models can be thought of as representing the average behavior of a large population. Volterra and D'Ancona (1935) modelled single-species population development by an equation of the form (1). Here x_t represents the population size at time t, whereas the function f, which may depend on the population history, measures the growth rate. This model does not deal with fluctuations around the average behavior. One way to incorporate randomness, i.e. to allow for individual differences and interaction between the individuals, is to replace (1) by the Itô stochastic differential equation (2) with X...
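To make the estimation problem concrete, here is a toy version for the simplest linear model dX_t = θ X_t dt + σ X_t dW_t (my own sketch, not the paper's model; its criticality parameter plays the role of θ here). The path is simulated by Euler-Maruyama and θ is recovered with the discretized Girsanov-type likelihood estimator:

```python
import math
import random

def simulate_population(theta=0.5, sigma=0.3, x0=1.0, T=50.0, n=5000, seed=3):
    """Euler-Maruyama path of dX_t = theta*X_t dt + sigma*X_t dW_t,
    a linear population model in a random environment."""
    rng = random.Random(seed)
    dt = T / n
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + theta * x * dt + sigma * x * math.sqrt(dt) * rng.gauss(0, 1))
    return xs, dt

def mle_theta(xs, dt):
    """Discretized maximum-likelihood estimator of the drift parameter:
    theta_hat = (1/T) * sum(dX / X) for this model."""
    incr = sum((xs[i + 1] - xs[i]) / xs[i] for i in range(len(xs) - 1))
    return incr / (dt * (len(xs) - 1))
```

The estimator's error here is σ W_T / T, so it is consistent as the observation horizon T grows, illustrating the kind of consistency result the paper establishes for its criticality parameter.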