Results 1 - 10 of 264
Propagation delay estimation in asynchronous direct-sequence code-division multiple access systems
- IEEE Trans. Commun
, 1996
A System for Sound Analysis/Transformation/Synthesis Based on a Deterministic Plus Stochastic Decomposition
, 1989
Abstract - Cited by 127 (8 self)
This dissertation introduces a new analysis/synthesis method. It is designed to obtain musically useful intermediate representations for sound transformations. The method's underlying model assumes that a sound is composed of a deterministic component plus a stochastic one. The deterministic component is represented by a series of sinusoids that are described by amplitude and frequency functions. The stochastic component is represented by a series of magnitude-spectrum envelopes that function as a time-varying filter excited by white noise. Together these representations make it possible for a synthesized sound to attain all the perceptual characteristics of the original sound. At the same time the representation is easily modified to create a wide variety of new sounds. This analysis/synthesis technique is based on the short-time Fourier transform (STFT). From the set of spectra returned by the STFT, the relevant peaks of each spectrum are detected and used as breakpoints in a set of frequency trajectories. The deterministic signal is obtained by synthesizing a sinusoid from each trajectory. Then, in order to obtain the stochastic component, a set of spectra of the deterministic component is computed, and these spectra are subtracted from the spectra of the original sound. The resulting spectral residuals are approximated by a series of envelopes, from which the stochastic signal is generated by performing an inverse-STFT. The result is a method that is appropriate for the manipulation of sounds. The intermediate representation is very flexible and musically useful in that it offers unlimited possibilities for transformation.
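The peak-detection step described above (pick the relevant peaks of each short-time spectrum as breakpoints for frequency trajectories) can be sketched as follows. This is a toy illustration, not the dissertation's implementation: the naive DFT, Hann window, and all names and parameters are invented for the example.

```python
import cmath
import math

def spectral_peaks(frame, sr, n_peaks=2):
    """Return the n_peaks strongest local maxima of one frame's magnitude
    spectrum, in Hz -- a bare-bones stand-in for the peak-detection stage
    (naive DFT with a Hann window; parameters are illustrative)."""
    n = len(frame)
    win = [v * (0.5 - 0.5 * math.cos(2 * math.pi * i / n))
           for i, v in enumerate(frame)]
    spec = [abs(sum(win[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n)))
            for k in range(n // 2 + 1)]
    # keep only strict local maxima, then the n_peaks largest by magnitude
    idx = [k for k in range(1, len(spec) - 1)
           if spec[k - 1] < spec[k] > spec[k + 1]]
    idx.sort(key=lambda k: spec[k], reverse=True)
    return sorted(k * sr / n for k in idx[:n_peaks])

# Two sinusoids at 440 Hz and 1000 Hz; the detected peaks should land
# within one DFT bin (31.25 Hz here) of the true frequencies.
sr = 8000
frame = [math.sin(2 * math.pi * 440 * i / sr)
         + 0.5 * math.sin(2 * math.pi * 1000 * i / sr) for i in range(256)]
peaks = spectral_peaks(frame, sr)
```

A full system would track such peaks across consecutive frames into frequency trajectories, synthesize one sinusoid per trajectory, and subtract the result from the original spectra to obtain the stochastic residual.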
Sequential ideal-observer analysis of visual discrimination.
- Psychological Review,
, 1989
A Joint Inter- and Intrascale Statistical Model for Bayesian Wavelet Based Image Denoising
- IEEE Trans. Image Proc
, 2002
Abstract - Cited by 68 (8 self)
This paper presents a new wavelet-based image denoising method, which extends a recently emerged "geometrical" Bayesian framework. The new method combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales, and spatial clustering of large coefficients near image edges. These three criteria are combined in a Bayesian framework. The spatial clustering properties are expressed in a prior model. The statistical properties concerning coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. The three main novelties with respect to related approaches are: (1) the interscale ratios of wavelet coefficients are statistically characterized, and different local criteria for distinguishing useful coefficients from noise are evaluated; (2) a joint conditional model is introduced; and (3) a novel anisotropic Markov Random Field prior model is proposed. The results demonstrate an improved denoising performance over related earlier techniques.
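The interscale "persistence" idea above (a fine-scale coefficient that is large together with its parent at the coarser scale is likely signal) can be sketched crudely. The paper's actual method uses a Bayesian prior and a joint conditional model, not the hard threshold below; all names and the toy signal are invented.

```python
def haar_level(x):
    """One Haar analysis level: (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def persistence_mask(d_fine, d_coarse, thr):
    """Mark a fine-scale coefficient as 'signal' only when both it and its
    parent at the coarser scale exceed a threshold -- a crude stand-in for
    the paper's joint inter-/intrascale significance model."""
    return [abs(c) > thr and abs(d_coarse[i // 2]) > thr
            for i, c in enumerate(d_fine)]

x = [0, 0, 0, 0, 1, 3, 0, 0]   # a small 'edge' feature at positions 4-5
a1, d1 = haar_level(x)          # d1 == [0.0, 0.0, -1.0, 0.0]
a2, d2 = haar_level(a1)         # d2 == [0.0, 1.0]
mask = persistence_mask(d1, d2, thr=0.5)
```

Only the coefficient straddling the edge survives in `mask`; in the paper this persistence cue is combined with a spatial-clustering prior instead of being applied in isolation.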
Copyright Protection for the Electronic Distribution of Text Documents
- Proceedings of the IEEE
, 1999
Abstract - Cited by 62 (2 self)
Each copy of a text document can be made to be different in a nearly invisible way by repositioning or modifying the appearance of different elements of text: lines, words, or characters. A unique copy can be registered with its recipient, so that subsequent unauthorized copies that are retrieved can be traced back to the original owner. In this paper we describe and compare several mechanisms for marking documents and several other mechanisms for decoding the marks after documents have been subjected to common types of distortion. The marks are intended to protect documents of limited value that are owned by individuals who would rather possess a legal copy than an illegal one, if the two can be distinguished. We describe attacks that remove the marks and countermeasures to those attacks. An architecture is described for distributing a large number of copies without burdening the publisher with creating and transmitting the unique documents. The architecture also allows the publisher...
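The word-shift idea can be illustrated with a toy plain-text analogue. Real systems displace typeset words by imperceptible fractions of a point, so everything below (the function names, the use of whole spaces as the displacement unit, the parameters) is illustrative only.

```python
import re

def mark_line(words, bits, base=1, delta=1):
    """Encode one bit per inter-word gap: bit 1 widens the gap by `delta`
    extra spaces, bit 0 leaves it at `base` -- a toy analogue of the
    paper's word-shift coding."""
    out = [words[0]]
    for word, bit in zip(words[1:], bits):
        out.append(" " * (base + delta * bit) + word)
    return "".join(out)

def read_marks(line, base=1):
    """Recover the embedded bits by measuring each inter-word gap."""
    return [1 if len(gap) > base else 0 for gap in re.findall(r" +", line)]

marked = mark_line(["the", "quick", "brown", "fox"], [1, 0, 1])
```

A different bit pattern per recipient makes each distributed copy unique, so a leaked copy identifies its registered owner; the decoding side must, as the paper discusses, also survive distortions such as reformatting or photocopying, which this toy version does not.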
A measure of relative entropy between individual sequences with application to universal classification
- IEEE Trans. Inf. Theory
, 1993
Abstract - Cited by 55 (4 self)
A new notion of empirical informational divergence (relative entropy) between two individual sequences is introduced. If the two sequences are independent realizations of two finite-order, finite-alphabet, stationary Markov processes, the empirical relative entropy converges to the relative entropy almost surely. This new empirical divergence is based on a version of the Lempel-Ziv data compression algorithm. A simple universal algorithm, based on the empirical divergence, for classifying individual sequences into a finite number of classes is introduced. It discriminates between the classes whenever they are distinguishable by some finite-memory classifier, for almost all given training sets and almost any test sequence from these classes. It is universal in the sense of being independent of the unknown sources. Index Terms: Lempel-Ziv algorithm, information divergence, finite-memory machines, finite-state machines, universal classification, universal hypothesis testing.
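The cross-parsing idea behind such a Lempel-Ziv based divergence can be sketched as follows: parse the test sequence against a training sequence, and the fewer phrases needed, the closer the sequences. The paper's exact parsing rule and divergence formula differ from this simplification, and the names below are invented.

```python
def cross_parse_count(z, x):
    """Greedy cross parsing of z with respect to x: repeatedly take the
    longest prefix of the remainder of z that occurs somewhere in x (or a
    single symbol when none does), and count the phrases. Fewer phrases
    indicate that z is statistically 'closer' to x."""
    count, i = 0, 0
    while i < len(z):
        length = 0
        while i + length < len(z) and z[i:i + length + 1] in x:
            length += 1
        i += length if length else 1   # advance past the phrase (min 1 symbol)
        count += 1
    return count
```

A classifier in this spirit assigns a test sequence to the class whose training sequence yields the fewest phrases, e.g. `"ababab"` parses into 1 phrase against `"abababab"` but 6 against `"cdcdcd"`.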
Multi-Modal Identity Verification Using Expert Fusion
- Information Fusion
, 2000
Abstract - Cited by 53 (0 self)
The contribution of this paper is to compare paradigms from the classes of parametric and non-parametric techniques for solving the decision fusion problem encountered in the design of a multi-modal biometric identity verification system. The multi-modal identity verification system under consideration is built of d modalities in parallel, each delivering as output a scalar number, called a score, stating how well the claimed identity is verified. A decision fusion module receiving the d scores as input has to take a binary decision: accept or reject the claimed identity. We have solved this fusion problem using parametric and non-parametric classifiers. The performance of all these fusion modules has been evaluated and compared with other approaches on a multi-modal database containing both vocal and visual biometric modalities. Keywords: Multi-modal identity verification, biometrics, decision fusion.
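The fusion module's job can be sketched with the simplest parametric rule, a weighted sum compared against a threshold. The paper evaluates more sophisticated parametric and non-parametric classifiers; the weights and threshold below are illustrative, whereas in practice they would be trained on data.

```python
def fuse_scores(scores, weights, threshold):
    """Weighted-sum fusion of the d per-modality scores followed by a
    binary accept/reject decision -- one simple parametric fusion rule
    of the kind the paper compares."""
    combined = sum(w * s for w, s in zip(weights, scores))
    return "accept" if combined >= threshold else "reject"

# d = 2 modalities (e.g. one vocal and one visual score in [0, 1])
decision = fuse_scores([0.9, 0.7], weights=[0.6, 0.4], threshold=0.75)
```

Here 0.9*0.6 + 0.7*0.4 = 0.82 clears the threshold, so the claimed identity is accepted; a weak pair of scores such as [0.3, 0.4] would be rejected by the same rule.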
Particle filter theory and practice with positioning applications
- IEEE AEROSPACE AND ELECTRONIC SYSTEMS MAGAZINE
, 2010
On the Entropy of DNA: Algorithms and Measurements based on Memory and Rapid Convergence
- In Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1994
Abstract - Cited by 38 (4 self)
We have applied the information-theoretic notion of entropy to characterize DNA sequences. We consider a genetic sequence signal that is too small for asymptotic entropy estimates to be accurate, and for which similar approaches have previously failed. We prove that the match length entropy estimator has a relatively fast convergence rate and demonstrate experimentally that by using this entropy estimator, we can indeed extract a meaningful signal from segments of DNA. Further, we derive a method for detecting certain signals within DNA -- known as splice junctions -- with significantly better performance than previously known methods. The main result of this paper is that we find that the entropy of genetic material which is ultimately expressed in protein sequences is higher than that of the material which is discarded. This is an unexpected result, since current biological theory holds that the discarded sequences ("introns") are capable of tolerating random changes to a greater degree...
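A sliding-window match-length entropy estimator of the kind discussed can be sketched as follows. The paper's exact variant and its convergence analysis differ in detail, and the window size and test strings below are illustrative.

```python
import math

def match_length_entropy(s, window):
    """Match-length entropy estimate in bits per symbol:
    H ~= log2(window) / mean(L_i), where L_i is one plus the length of
    the longest prefix of s[i:] that reoccurs in the preceding window.
    Long matches mean high redundancy and therefore low entropy."""
    lengths = []
    for i in range(window, len(s)):
        past = s[i - window:i]
        l = 0
        while i + l < len(s) and s[i:i + l + 1] in past:
            l += 1
        lengths.append(l + 1)
    return math.log2(window) / (sum(lengths) / len(lengths))

low = match_length_entropy("a" * 200, window=50)   # highly repetitive
high = match_length_entropy("".join(str(i) for i in range(100)), window=50)
```

The repetitive string yields long matches and a small estimate, while the more varied digit string yields short matches and a larger one; the paper exploits exactly this contrast to separate expressed DNA from discarded DNA on short segments.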
Feature detection and letter identification
, 2006
Abstract - Cited by 35 (3 self)
Seeking to understand how people recognize objects, we have examined how they identify letters. We expected this 26-way classification of familiar forms to challenge the popular notion of independent feature detection ("probability summation"), but find instead that this theory parsimoniously accounts for our results. We measured the contrast required for identification of a letter briefly presented in visual noise. We tested a wide range of alphabets and scripts (English, Arabic, Armenian, Chinese, Devanagari, Hebrew, and several artificial ones), three- and five-letter words, and various type styles, sizes, contrasts, durations, and eccentricities, with observers ranging widely in age (3 to 68) and experience (none to fluent). Foreign alphabets are learned quickly. In just three thousand trials, new observers attain the same proficiency in letter identification as fluent readers. Surprisingly, despite this training, the observers, like clinical letter-by-letter readers, have the same meager memory span for random strings of these characters as observers seeing them for the first time. We compare performance across tasks and stimuli that vary in difficulty by pitting the human against the ideal observer and expressing the results as efficiency. We find that efficiency for letter identification is independent of duration, overall contrast, and eccentricity, and only weakly dependent on size, suggesting that letters are identified by a similar computation across this wide range of viewing conditions. Efficiency is also independent of age and years of reading. However, efficiency does vary across alphabets and type styles, with more complex forms yielding lower efficiencies, as one might expect from Gestalt theories of perception. In fact, we find that efficiency is inversely proportional to perimetric complexity (perimeter squared over "ink" area) and nearly independent of everything else. This, and the surprisingly...
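Perimetric complexity, the measure the study finds efficiency to be inversely proportional to, is straightforward to compute on a binarized glyph. Counting exposed pixel edges, as below, only approximates the true outline length of a typeset letter, and the two tiny test glyphs are invented for illustration.

```python
def perimetric_complexity(glyph):
    """Perimeter squared over ink area for a binary pixel grid.
    The perimeter is counted as the number of ink-pixel edges that
    face a non-ink cell (4-neighbourhood)."""
    ink = {(r, c) for r, row in enumerate(glyph)
           for c, v in enumerate(row) if v}
    perimeter = sum((r + dr, c + dc) not in ink
                    for r, c in ink
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    return perimeter ** 2 / len(ink)

square = [[1, 1],
          [1, 1]]       # compact blob: perimeter 8, area 4 -> 16.0
bar = [[1, 1, 1, 1]]    # elongated stroke: perimeter 10, area 4 -> 25.0
```

With equal ink area, the elongated bar scores higher than the compact square, matching the intuition that spindlier, more complex forms yield lower identification efficiency.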