Results 11–20 of 140
High-Level Fault Tolerance in Distributed Programs
, 1994
Abstract

Cited by 22 (3 self)
We have been developing high-level checkpoint and restart methods for Dome (Distributed Object Migration Environment), a C++ library of data-parallel objects that are automatically distributed using PVM. There are several levels of programming abstraction at which fault tolerance mechanisms can be designed: high-level, where the checkpoint and restart are built into our C++ objects, but the program structure is severely constrained; high-level with preprocessing, where a preprocessor inserts extra C++ statements into the code to facilitate checkpoint and restart; and low-level, where periodically an interrupt causes a memory image to be written out. Because we consider portability (both of our libraries and of the checkpoints they produce) to be an important goal, we focus on the higher-level checkpointing methods. In addition, we describe an implementation of high-level checkpointing, demonstrate it on multiple architectures, and show that it is efficient enough to provide good expect...
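The abstract contrasts portable high-level checkpointing (objects serialize their own logical state) with low-level memory images. Below is a minimal sketch of the high-level approach in Python; the class and function names are hypothetical illustrations, not Dome's actual C++ API.

```python
import json
import os
import tempfile

class CheckpointableVector:
    """Hypothetical data-parallel object that knows how to save/restore its own state."""
    def __init__(self, data):
        self.data = list(data)

    def checkpoint(self):
        # High-level checkpoint: emit a portable representation of the logical
        # state, not a raw memory image, so it can be restored elsewhere.
        return {"type": "vector", "data": self.data}

    @classmethod
    def restore(cls, state):
        assert state["type"] == "vector"
        return cls(state["data"])

def write_checkpoint(objects, path):
    # Write atomically so a crash mid-checkpoint never leaves a corrupt file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump([o.checkpoint() for o in objects], f)
    os.replace(tmp, path)

def read_checkpoint(path):
    with open(path) as f:
        return [CheckpointableVector.restore(s) for s in json.load(f)]

v = CheckpointableVector([1.0, 2.0, 3.0])
write_checkpoint([v], "ckpt.json")
restored = read_checkpoint("ckpt.json")
print(restored[0].data)  # → [1.0, 2.0, 3.0]
```

Because the checkpoint records logical state rather than a memory image, it stays valid across architectures, which is the portability argument the abstract makes.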
Proving ownership over categorical data
 In Proceedings of the IEEE International Conference on Data Engineering (ICDE)
, 2004
Abstract

Cited by 17 (5 self)
This paper introduces a novel method of rights protection for categorical data through watermarking. We discover new watermark embedding channels for relational data with categorical types. We design novel watermark encoding algorithms and analyze important theoretical bounds including mark vulnerability. While fully preserving data quality requirements, our solution survives important attacks, such as subset selection and data resorting. Mark detection is fully “blind” in that it doesn’t require the original data, an important characteristic especially in the case of massive data. We propose various improvements and alternative encoding methods. We perform validation experiments by watermarking the outsourced Wal-Mart sales data available at our institute. We prove (experimentally and by analysis) our solution to be extremely resilient to both alteration and data loss attacks, for example tolerating up to 80% data loss with a watermark alteration of only 25%.
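The key properties claimed above (keyed embedding in categorical values, blind detection without the original data) can be sketched as follows. This is an illustrative toy under assumed details, not the paper's actual encoding algorithm: a keyed hash of each tuple's primary key selects which tuples carry the mark and which valid category value encodes it.

```python
import hashlib
import hmac

KEY = b"secret-watermark-key"  # assumption: key known only to the data owner

def keyed_hash(pk: str) -> int:
    return int.from_bytes(hmac.new(KEY, pk.encode(), hashlib.sha256).digest()[:8], "big")

def embed(rows, attr_values, fraction=8):
    """Mark roughly 1/fraction of tuples: the keyed hash of the primary key
    selects the tuple and also which valid category value encodes the mark."""
    marked = []
    for pk, value in rows:
        h = keyed_hash(pk)
        if h % fraction == 0:
            value = attr_values[(h // fraction) % len(attr_values)]
        marked.append((pk, value))
    return marked

def detect(rows, attr_values, fraction=8):
    """Blind detection: recompute the selection from keys alone, no original data."""
    hits = total = 0
    for pk, value in rows:
        h = keyed_hash(pk)
        if h % fraction == 0:
            total += 1
            if value == attr_values[(h // fraction) % len(attr_values)]:
                hits += 1
    return hits, total

cats = ["red", "green", "blue"]
data = [(f"id{i}", cats[i % 3]) for i in range(1000)]
wm = embed(data, cats)
hits, total = detect(wm, cats)
print(hits == total)  # → True
```

Because detection only needs the key and the primary keys, it survives subset selection: any surviving marked tuple still matches when rechecked.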
Prediction-based energy map for wireless sensor networks
 Ad Hoc Networks Journal (Special Issue on Ad
, 2005
Abstract

Cited by 16 (3 self)
Abstract. The key challenge in the design of wireless sensor networks is maximizing their lifetime. The information about the amount of available energy in each part of the network is called the energy map and can be useful to increase the lifetime of the network. In this paper, we address the problem of constructing the energy map of a wireless sensor network using prediction-based approaches. We also present an energy dissipation model that is used to simulate the behavior of a sensor node in terms of energy consumption. Simulation results compare the performance of the prediction-based approaches with a naive one in which no prediction is used. The results show that the prediction-based approaches outperform the naive approach across a variety of parameters.
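The idea of a prediction-based energy map is that a node reports a model of its energy drain instead of frequent readings, and only sends a fresh report when reality drifts past a tolerance. A minimal sketch, assuming a simple linear drain model (the model, tolerance, and rates are illustrative, not the paper's dissipation model):

```python
class NodeModel:
    """Linear energy-drain model reported by a node: E(t) = E0 - rate * (t - t0)."""
    def __init__(self, energy, rate, t0=0.0):
        self.energy, self.rate, self.t0 = energy, rate, t0

    def predict(self, t):
        return max(0.0, self.energy - self.rate * (t - self.t0))

def simulate(initial=100.0, true_rate=0.9, reported_rate=1.0, tol=2.0, horizon=50):
    model = NodeModel(initial, reported_rate)
    updates = 0
    for t in range(1, horizon + 1):
        actual = initial - true_rate * t
        if abs(model.predict(t) - actual) > tol:
            # Prediction drifted too far: node sends one update with a corrected rate.
            model = NodeModel(actual, true_rate, t0=t)
            updates += 1
    return updates

print(simulate())  # → 1
```

Here one correction message replaces fifty periodic reports, which is the energy saving the prediction-based approach targets; the naive approach would send one message per step.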
The Fractional Advection-Dispersion Equation: Development and Application
, 1998
Abstract

Cited by 15 (11 self)
The traditional 2nd-order advection-dispersion equation (ADE) does not adequately describe the movement of solute tracers in aquifers. This study examines and rederives the governing equation. The analysis starts with a generalized notion of particle movements, since the second-order equation is trying to impart Brownian motion on a mathematical plume at any time. If particle motions with long-range spatial correlation are more favored, then the motion is described by Lévy's family of α-stable densities. The new governing (Fokker-Planck) equation of these motions is similar to the ADE except that the order (α) of the highest derivative is fractional (e.g., the 1.65th derivative). Fundamental solutions resemble the Gaussian except that they spread proportional to time^(1/α) and have heavier tails. The order of the fractional ADE (FADE) is shown to be related to the aquifer velocity autocorrelation function. The FADE derived here is used to....
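The two equations being contrasted can be written out explicitly; the following is the standard one-dimensional form (with C the concentration, v the mean velocity, and D a dispersion coefficient), not necessarily the exact notation of the paper:

```latex
% Classical second-order ADE (the special case \alpha = 2):
\frac{\partial C}{\partial t}
  = -v \frac{\partial C}{\partial x}
  + D \frac{\partial^{2} C}{\partial x^{2}}

% Fractional ADE: the highest derivative has fractional order 1 < \alpha \le 2;
% fundamental solutions are \alpha-stable densities spreading as t^{1/\alpha}
% with heavier-than-Gaussian tails:
\frac{\partial C}{\partial t}
  = -v \frac{\partial C}{\partial x}
  + D \frac{\partial^{\alpha} C}{\partial x^{\alpha}}
```

Setting α = 2 recovers the classical ADE and Gaussian plumes spreading as t^(1/2), which is why the abstract describes the second-order equation as imparting Brownian motion.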
A Stochastic Framework for Multiprocessor Soft Real-Time Scheduling
Abstract

Cited by 14 (5 self)
Prior work has shown that the global earliest-deadline-first (GEDF) scheduling algorithm ensures bounded deadline tardiness on multiprocessors with no utilization loss; therefore, GEDF may be a good candidate scheduling algorithm for soft real-time workloads. However, such workloads are often implemented assuming an average-case provisioning, and in prior tardiness-bound derivations for GEDF, worst-case execution costs are assumed. As worst-case costs can be orders of magnitude higher than average-case costs, using a worst-case provisioning may result in significant wasted processing capacity. In this paper, prior tardiness-bound derivations for GEDF are generalized so that execution times are probabilistic, and a bound on expected (mean) tardiness is derived. It is shown that, as long as the total expected utilization is strictly less than the number of available processors, the expected tardiness of every task is bounded under GEDF. The result also implies that any quantile of the tardiness distribution is also bounded.
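The gap between average-case and worst-case provisioning can be made concrete with a small numerical check of the stated condition (expected utilization strictly less than the processor count). The task set below is a made-up example, not one from the paper:

```python
# Hypothetical task set: each task has a period and an empirical distribution
# of execution costs. Expected utilization uses the mean cost; a worst-case
# provisioning would use the maximum cost instead.
tasks = [
    {"period": 10, "costs": [1, 1, 1, 9]},   # mean 3, worst 9
    {"period": 10, "costs": [2, 2, 2, 10]},  # mean 4, worst 10
    {"period": 10, "costs": [1, 3]},         # mean 2, worst 3
]

def utilization(tasks, agg):
    return sum(agg(t["costs"]) / t["period"] for t in tasks)

m = 2  # available processors
u_mean = utilization(tasks, lambda c: sum(c) / len(c))   # 0.9
u_worst = utilization(tasks, max)                        # 2.2

# Expected tardiness is bounded under GEDF iff u_mean < m, even though a
# worst-case provisioning (u_worst > m) would declare the system overloaded.
print(u_mean < m, u_worst > m)  # → True True
```

This is exactly the wasted-capacity argument: the worst-case analysis demands more than two processors, while the expected-tardiness result accepts the system on two.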
Strength of Two Data Encryption Standard Implementations under Timing Attacks
 ACM Transactions on Information and System Security
, 1998
Abstract

Cited by 14 (0 self)
We study the vulnerability of several implementations of the Data Encryption Standard (DES) cryptosystem under a timing attack. A timing attack is a method designed to break cryptographic systems that was recently proposed by Paul Kocher. It exploits the engineering aspects involved in the implementation of cryptosystems and might succeed even against cryptosystems that remain impervious to sophisticated cryptanalytic techniques. A timing attack is, essentially, a way of obtaining some user's private information by carefully measuring the time it takes the user to carry out cryptographic operations. In this work we analyze two implementations of DES. We show that a timing attack yields the Hamming weight of the key used by both DES implementations. Moreover, the attack is computationally inexpensive. We also show that all the design characteristics of the target system, necessary to carry out the timing attack, can be inferred from timing measurements.
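The attack described above recovers the key's Hamming weight from timing alone. A toy model of the mechanism, under the assumption (illustrative, not the paper's measured implementations) that each set key bit adds a fixed per-operation cost that an attacker can average out of noisy timings:

```python
import random

random.seed(42)
KEY = 0b1011010011  # secret 10-bit key, Hamming weight 6

def hamming(x):
    return bin(x).count("1")

def timed_op(key):
    # Assumed cost model: base cost plus a per-set-bit cost plus Gaussian noise.
    base, per_bit = 50.0, 3.0
    return base + per_bit * hamming(key) + random.gauss(0.0, 1.0)

# Attacker only observes timings, never the key; averaging kills the noise.
samples = [timed_op(KEY) for _ in range(10000)]
mean = sum(samples) / len(samples)
estimated_hw = round((mean - 50.0) / 3.0)
print(estimated_hw)  # → 6
```

Note how cheap the attack is computationally: one pass of averaging suffices, matching the abstract's claim that the attack is inexpensive once timing measurements are available.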
A Wavelet-Based Method for Improving Signal-to-Noise Ratio and Contrast in MR Images
, 2000
Abstract

Cited by 13 (3 self)
MR images acquired with fast measurement often display poor signal-to-noise ratio (SNR) and contrast. With the advent of high temporal resolution imaging, there is a growing need to remove these noise artifacts. The noise in magnitude MR images is signal-dependent (Rician), whereas most denoising algorithms assume additive Gaussian (white) noise. However, the Rician distribution only looks Gaussian at high SNR. Some recent work by Nowak employs a wavelet-based method for denoising the square magnitude images, and explicitly takes into account the Rician nature of the noise distribution. In this article, we apply a wavelet denoising algorithm directly to the complex image obtained as the Fourier transform of the raw k-space two-channel (real and imaginary) data. By retaining the complex image, we are able to denoise not only magnitude images but also phase images. A multiscale (complex) wavelet-domain Wiener-type filter is derived. The algorithm preserves edges better when the Haar ...
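The central move in the abstract is denoising the complex image directly, so that the additive Gaussian noise model applies and phase information survives. A one-dimensional sketch with a single-level Haar transform and soft thresholding (a simpler stand-in for the paper's multiscale Wiener-type filter; thresholds and signal are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_fwd(x):
    # One-level Haar transform: averages and details (works on complex arrays).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size, dtype=complex)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thresh):
    # Soft-threshold the detail coefficients of the complex signal directly,
    # so magnitude and phase are denoised together.
    a, d = haar_fwd(x)
    mag = np.abs(d)
    d = np.where(mag > thresh, d * (1 - thresh / np.maximum(mag, 1e-12)), 0)
    return haar_inv(a, d)

n = 256
clean = np.exp(1j * np.linspace(0, np.pi, n))  # smooth complex-valued signal
noisy = clean + 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
out = denoise(noisy, thresh=0.4)
print(np.linalg.norm(out - clean) < np.linalg.norm(noisy - clean))  # → True
```

Because the filtering happens before taking the magnitude, the denoised array still carries a usable phase image, which is what working on the two-channel complex data buys.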
CONTINUOUS TIME MARKOV CHAIN MODELS FOR CHEMICAL REACTION NETWORKS
Abstract

Cited by 13 (9 self)
A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. This chapter is devoted to the mathematical study of such stochastic models. We begin by developing much of the mathematical machinery we need to describe the stochastic models we are most interested in. We show how one can represent counting processes of the type we need in terms of Poisson processes. This random time-change representation gives a stochastic equation for continuous-time Markov chain models. We include a discussion on the relationship between this stochastic equation and the corresponding martingale problem and Kolmogorov forward (master) equation. Next, we exploit
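A continuous-time Markov chain of this kind can be simulated directly with Gillespie's algorithm: sample an exponential holding time at the total propensity, then pick a reaction with probability proportional to its propensity. A minimal sketch for a two-reaction toy network (rate constants are illustrative):

```python
import random

random.seed(7)

# Toy network:
#   R1: A -> B  with propensity c1 * #A
#   R2: B -> A  with propensity c2 * #B
# The state is the molecule count of each species; each reaction firing is
# one transition of the chain.
def gillespie(state, t_end, c1=1.0, c2=0.5):
    t = 0.0
    while True:
        a1 = c1 * state["A"]
        a2 = c2 * state["B"]
        a0 = a1 + a2
        if a0 == 0:
            return state
        t += random.expovariate(a0)  # exponential holding time at rate a0
        if t > t_end:
            return state
        if random.random() < a1 / a0:  # choose reaction by relative propensity
            state["A"] -= 1; state["B"] += 1
        else:
            state["A"] += 1; state["B"] -= 1

final = gillespie({"A": 100, "B": 0}, t_end=10.0)
print(final["A"] + final["B"])  # conversion reactions conserve the total → 100
```

The random time-change representation mentioned in the abstract is the continuous analogue of this: each reaction channel is driven by its own unit-rate Poisson process evaluated at the integrated propensity.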
Automatic Synthesis of Compression Techniques for Heterogeneous Files
, 1995
Abstract

Cited by 11 (5 self)
this paper uses a straightforward program synthesis technique: a compression plan, consisting of instructions for each block of input data, is generated, guided by the statistical properties of the input data. Because of its use of algorithms specifically suited to the types of redundancy exhibited by the particular input file, the system achieves consistent average performance throughout the file, as shown by experimental evidence.
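The per-block plan idea can be sketched simply: for each block, pick whichever candidate algorithm produces the smallest output and record the choice. This brute-force selection is an illustrative stand-in for the paper's statistics-guided synthesis:

```python
import bz2
import zlib

# Candidate per-block algorithms; "store" handles blocks no algorithm helps.
ALGOS = {
    "store": lambda b: b,
    "zlib": lambda b: zlib.compress(b, 9),
    "bz2": lambda b: bz2.compress(b, 9),
}

def synthesize_plan(data, block_size=4096):
    """Return a compression plan (one instruction per block) and the outputs."""
    plan, out = [], []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        name, best = min(((n, f(block)) for n, f in ALGOS.items()),
                         key=lambda p: len(p[1]))
        plan.append(name)
        out.append(best)
    return plan, out

# A heterogeneous file: highly redundant text followed by binary-ish data.
data = (b"abcabcabc" * 1000) + bytes(range(256)) * 16
plan, blocks = synthesize_plan(data)
print(plan[0] != "store")  # → True
```

The recorded plan is what the decompressor replays, block by block, so heterogeneous regions of one file each get the algorithm suited to their redundancy.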
Factored Edge-Valued Binary Decision Diagrams and their Application to Matrix Representation and Manipulation
 FORMAL METHODS IN SYSTEM DESIGN
, 1994
Abstract

Cited by 11 (0 self)
Factored Edge-Valued Binary Decision Diagrams form an extension to Edge-Valued Binary Decision Diagrams. By associating both an additive and a multiplicative weight with the edges, FEVBDDs can be used to represent a wider range of functions concisely. As a result, the computational complexity for certain operations can be significantly reduced compared to EVBDDs. Additionally, the introduction of multiplicative edge weights allows us to directly represent the so-called complement edges which are used in OBDDs, thus providing a one-to-one mapping of all OBDDs to FEVBDDs. Applications such as integer linear programming and logic verification that have been proposed for EVBDDs also benefit from the extension. We present a complete matrix package based on FEVBDDs and apply the package to the problem of solving the Chapman-Kolmogorov equations.
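The core idea of factored edge values is that an edge carries an additive weight a and a multiplicative weight m, and the value through the edge is a + m · value(child). A minimal evaluation sketch (the node layout and names are illustrative, not the package's actual representation, which also normalizes weights for canonicity):

```python
class Node:
    """Decision node; low/high are edges encoded as (a, m, child) triples."""
    def __init__(self, var=None, low=None, high=None):
        self.var, self.low, self.high = var, low, high

LEAF = Node()  # shared terminal node with value 0

def evaluate(edge, assignment):
    a, m, node = edge
    if node is LEAF:
        return a  # m * 0 contributes nothing at the terminal
    child = node.high if assignment[node.var] else node.low
    return a + m * evaluate(child, assignment)

# f(x0, x1) = 3*x0 + 2*x1 encoded with weighted edges sharing one child:
n1 = Node("x1", low=(0, 1, LEAF), high=(2, 1, LEAF))
n0 = Node("x0", low=(0, 1, n1), high=(3, 1, n1))
root = (0, 1, n0)

for x0 in (0, 1):
    for x1 in (0, 1):
        assert evaluate(root, {"x0": x0, "x1": x1}) == 3 * x0 + 2 * x1
print("ok")
```

Both branches of n0 point at the same child n1, with the additive weight absorbing the difference; that sharing is the source of the conciseness the abstract claims, and a multiplicative weight of -1 is how complement edges are expressed.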