Results 1–10 of 179
A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000
2001
Abstract

Cited by 73 (0 self)
The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based on the wavelet transform, in contrast to the discrete cosine transform (DCT) used in the original JPEG standard. The significant change in coding methods between the two standards leads one to ask: What prompted the JPEG committee to adopt such a dramatic change? The answer to this question comes from considering the state of image coding at the time the original JPEG standard was being formed. At that time wavelet analysis and wavelet coding were still
Image reconstruction and enhanced resolution imaging from irregular samples
IEEE Transactions on Geoscience and Remote Sensing, 2001
Abstract

Cited by 66 (41 self)
Abstract—While high resolution, regularly gridded observations are generally preferred in remote sensing, actual observations are often not evenly sampled and have lower-than-desired resolution. Hence, there is an interest in resolution enhancement and image reconstruction. This paper discusses a general theory and techniques for image reconstruction and creating enhanced resolution images from irregularly sampled data. Using irregular sampling theory, we consider how the frequency content in aperture function-attenuated sidelobes can be recovered from oversampled data using reconstruction techniques, thus taking advantage of the high frequency content of measurements made with non-ideal aperture filters. We show that with minor modification, the algebraic reconstruction technique (ART) is functionally equivalent to Gröchenig's irregular sampling reconstruction algorithm. Using simple Monte Carlo simulations, we compare and contrast the performance of additive ART, multiplicative ART, and the scatterometer image reconstruction (SIR) (a derivative of multiplicative ART) algorithms with and without noise. The reconstruction theory and techniques have applications with a variety of sensors and can enable enhanced resolution image production from many non-imaging sensors. The technique is illustrated with ERS-2 and SeaWinds scatterometer data. Index Terms—Irregular samples, reconstruction, resolution enhancement, sampling.
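The additive ART the abstract compares is a row-action (Kaczmarz-style) iteration. A minimal sketch on a toy 1-D problem; the matrix, sizes, and scene below are invented for illustration, not taken from the paper:

```python
import numpy as np

def additive_art(A, y, n_sweeps=500, relax=1.0):
    """Additive ART (a Kaczmarz-style row-action method): cycle through
    the measurement rows, projecting the current estimate onto each
    hyperplane a_i . x = y_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, y_i in zip(A, y):
            x += relax * (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy problem: irregular, overlapping aperture measurements of a 1-D scene.
rng = np.random.default_rng(0)
truth = np.array([0.0, 1.0, 3.0, 2.0, 0.5])
A = rng.normal(size=(12, 5))   # each row: one aperture-weighted sample
y = A @ truth                  # consistent, noise-free measurements
x_hat = additive_art(A, y)
print(np.round(x_hat, 3))      # converges to the true scene
```

For consistent, oversampled data the iteration converges to the underlying scene; with noise, the relaxation parameter trades convergence speed against noise amplification.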
On the BCJR trellis for linear block codes
IEEE Trans. Inform. Theory, 1996
Abstract

Cited by 56 (0 self)
Abstract—In this semi-tutorial paper, we will investigate the computational complexity of an abstract version of the Viterbi algorithm on a trellis, and show that if the trellis has e edges, the complexity of the Viterbi algorithm is Θ(e). This result suggests that the "best" trellis representation for a given linear block code is the one with the fewest edges. We will then show that, among all trellises that represent a given code, the original trellis introduced by Bahl, Cocke, Jelinek, and Raviv in 1974, and later rediscovered by Wolf, Massey, and Forney, uniquely minimizes the edge count, as well as several other figures of merit. Following Forney and Kschischang and Sorokine, we will also discuss "trellis-oriented" or "minimal-span" generator matrices, which facilitate the calculation of the size of the BCJR trellis, as well as the actual construction of it. Index Terms—Block codes, trellis complexity.
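The Θ(e) claim follows because Viterbi performs one constant-time compare-and-update per trellis edge. A minimal generic min-cost sketch; the trellis encoding and the tiny example below are invented for illustration:

```python
def viterbi(sections, start=0):
    """Min-cost path through a trellis given as a list of sections, each a
    list of (from_state, to_state, branch_cost) edges. The forward pass does
    one constant-time update per edge, so total work is Theta(e)."""
    cost = {start: 0.0}
    back = []
    for edges in sections:
        nxt, pred = {}, {}
        for u, v, w in edges:          # exactly one update per edge
            if u in cost and (v not in nxt or cost[u] + w < nxt[v]):
                nxt[v] = cost[u] + w
                pred[v] = u
        cost = nxt
        back.append(pred)
    # Trace back from the cheapest final state.
    state = min(cost, key=cost.get)
    best, path = cost[state], [state]
    for pred in reversed(back):
        state = pred[state]
        path.append(state)
    return best, path[::-1]

# Tiny two-state trellis with two sections.
sections = [
    [(0, 0, 1.0), (0, 1, 0.2)],
    [(0, 0, 0.5), (1, 0, 0.1), (1, 1, 2.0)],
]
best, path = viterbi(sections)
print(best, path)   # cheapest route: 0 -> 1 -> 0, cost 0.3
```

Since the inner loop is the only per-edge work, minimizing edge count (as the BCJR trellis does) directly minimizes decoding effort.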
Statistical and computational methods for comparative proteomic profiling using liquid chromatography-tandem mass spectrometry
Mol. Cell. Proteomics, 2005
Abstract

Cited by 56 (2 self)
The combined method of LC-MS/MS is increasingly being used to explore differences in the proteomic composition of complex biological systems. The reliability and utility of such comparative protein expression profiling studies is critically dependent on an accurate and rigorous assessment of quantitative changes in the relative abundance of the myriad of proteins typically present in a biological sample such as blood or tissue. In this review, we provide an overview of key statistical and computational issues relevant to bottom-up shotgun global proteomic analysis, with an emphasis on methods that can be applied to improve the dependability of biological inferences drawn from large proteomic datasets. Focusing on a start-to-finish approach, we address the following topics: 1) low-level data processing steps, such as formation of a data...
Partial-Volume Bayesian Classification of Material Mixtures in MR Volume Data using Voxel Histograms
1998
Abstract

Cited by 56 (2 self)
We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Because we allow for mixtures of materials and treat voxels as regions, our technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to our approach. First, we assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; we compute the relative proportion of each material in the voxels. Second, we incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the...
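The "relative proportion of each material in the voxels" idea can be illustrated with the simplest linear partial-volume mixing model for two materials; the pure-material mean intensities below are hypothetical, and the paper's actual Bayesian voxel-histogram fit is far richer than this sketch:

```python
import numpy as np

# Hypothetical pure-material mean intensities for an MR dataset.
mu_muscle, mu_fat = 40.0, 180.0

def mixture_fraction(voxel_value, mu_a=mu_muscle, mu_b=mu_fat):
    """Fraction of material B in a voxel under a linear partial-volume
    model: v = (1 - t) * mu_a + t * mu_b, with t clipped to [0, 1]."""
    t = (voxel_value - mu_a) / (mu_b - mu_a)
    return float(np.clip(t, 0.0, 1.0))

print(mixture_fraction(110.0))   # halfway between the two means -> 0.5
```

A voxel straddling a muscle/fat boundary thus gets fractional memberships instead of a single hard label, which is what reduces the boundary artifacts the abstract mentions.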
Generalizations of the sampling theorem: Seven decades after Nyquist
IEEE Trans. Circuits and Systems, 2001
Abstract

Cited by 38 (3 self)
Abstract—The sampling theorem is one of the most basic and fascinating topics in engineering sciences. The most well-known form is Shannon's uniform sampling theorem for bandlimited signals. Extensions of this to bandpass signals and multiband signals, and to nonuniform sampling are also well-known. The connection between such extensions and the theory of filter banks in DSP has been well established. This paper presents some of the less known aspects of sampling, with special emphasis on non-bandlimited signals, pointwise stability of reconstruction, and reconstruction from nonuniform samples. Applications in multiresolution computation and in digital spline interpolation are also reviewed.
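Shannon's uniform sampling theorem, the "most well-known form" the abstract starts from, can be sketched directly as a truncated sinc-series reconstruction; the signal, rate, and evaluation grid below are invented for illustration:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon reconstruction from uniform samples x(nT):
    x(t) = sum_n x(nT) * sinc((t - nT) / T), truncated to len(samples)."""
    n = np.arange(len(samples))
    return samples @ np.sinc((t[None, :] - n[:, None] * T) / T)

T = 0.1                          # sampling period -> 5 Hz Nyquist limit
n = np.arange(64)
f = 2.0                          # 2 Hz tone, safely below Nyquist
x_n = np.sin(2 * np.pi * f * n * T)
t = np.linspace(2.0, 4.0, 200)   # interior points, away from truncation edges
x_hat = sinc_reconstruct(x_n, T, t)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * f * t)))
print(err)                       # small residual from series truncation
```

The residual error here comes purely from truncating the infinite sinc series; the paper's interest is precisely in what happens when the bandlimited and uniform-sampling assumptions behind this formula are relaxed.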
Biorthogonal partners and applications
IEEE Trans. Signal Processing, 2001
Abstract

Cited by 27 (19 self)
Abstract—Two digital filters H(z) and F(z) are said to be biorthogonal partners of each other if their cascade H(z)F(z) satisfies the Nyquist or zero-crossing property. Biorthogonal partners arise in many different contexts such as filter bank theory, exact and least squares digital interpolation, and multiresolution theory. They also play a central role in the theory of equalization, especially fractionally spaced equalizers in digital communications. In this paper, we first develop several theoretical properties of biorthogonal partners. We also develop conditions for the existence of biorthogonal partners and FIR biorthogonal pairs and establish the connections to the Riesz basis property. We then explain how these results play a role in many of the above-mentioned applications.
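The defining Nyquist (zero-crossing) property of the cascade can be checked numerically. The FIR pair below is a hypothetical example chosen so that the cascade works out to a half-band (Nyquist(2)) filter; it is not a pair from the paper:

```python
import numpy as np

def is_nyquist(c, M):
    """True if c[n] has the Nyquist(M) (zero-crossing) property about its
    center tap: c[center] = 1 and c[center + k*M] = 0 for all k != 0."""
    center = int(np.argmax(np.abs(c)))
    if not np.isclose(c[center], 1.0):
        return False
    off_taps = [c[i] for i in range(len(c))
                if i != center and (i - center) % M == 0]
    return np.allclose(off_taps, 0.0)

# Hypothetical FIR biorthogonal pair for M = 2.
h = np.array([1.0, 2.0, 1.0]) / 4
f = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 4
c = np.convolve(h, f)             # cascade H(z)F(z)
print(c * 16, is_nyquist(c, 2))   # [-1 0 9 16 9 0 -1]/16 -> True
```

Here the cascade's taps at even offsets from the center vanish, which is exactly the zero-crossing condition; a non-partner pair such as (h, h) fails the check.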
Democracy in Action: Quantization, Saturation, and Compressive Sensing
"... Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analogtodigital converters and digital imagers in certain applications. A key hallmark of CS is that it enables subNyquis ..."
Abstract

Cited by 26 (15 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another heretofore relatively unexplored hallmark, the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from conventional Shannon-Nyquist uniform sampling, then we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer's dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
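The first recovery method, "simply discarding the saturated measurements", can be sketched in a toy noise-free setup. For brevity, plain least squares on a non-sparse signal stands in for the l1/greedy recovery the paper uses, and all dimensions and thresholds below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 40                        # signal length, number of measurements
x = rng.normal(size=n)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Finite-range quantizer: anything beyond +/-G saturates (clips).
G = 0.5 * np.max(np.abs(y))         # deliberately too small a range
y_sat = np.clip(y, -G, G)
keep = np.abs(y_sat) < G            # flag the unsaturated measurements

# Saturation rejection: drop saturated rows, solve with the remainder.
x_hat, *_ = np.linalg.lstsq(Phi[keep], y_sat[keep], rcond=None)
print(keep.sum(), np.allclose(x_hat, x))
```

Democracy is what makes this work: because no single measurement is essential, the surviving rows still determine the signal, so clipping hard and discarding beats shrinking the input gain to avoid saturation.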
On flow marking attacks in wireless anonymous communication networks
In Proc. of IEEE Inter. Conf. on Distributed Computing Systems (ICDCS), 2005
Abstract

Cited by 22 (11 self)
This paper studies the degradation of anonymity in a flow-based wireless mix network under flow marking attacks, in which an adversary embeds a recognizable pattern of marks into wireless traffic flows by electromagnetic interference. We find that traditional mix technologies are not effective in defeating flow marking attacks, and it may take an adversary only a few seconds to recognize the communication relationship between hosts by tracking such artificial marks. Flow marking attacks utilize frequency domain analytical techniques and convert time domain marks into invariant feature frequencies. To counter flow marking attacks, we propose a new countermeasure based on digital filtering technology, and show that this filter-based countermeasure can effectively defend a wireless mix network from flow marking attacks.
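The frequency-domain analysis the attack relies on can be illustrated by embedding a periodic on/off mark in a synthetic packet-count series and locating it with an FFT; the rates, mark frequency, and traffic model below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 10.0                           # traffic binned at 10 samples/s
t = np.arange(0, 60, 1 / fs)        # one minute of flow
f_mark = 0.8                        # hypothetical mark frequency (Hz)

background = rng.poisson(20, size=t.size).astype(float)   # normal traffic
mark = 5 * (np.sin(2 * np.pi * f_mark * t) > 0)           # on/off interference
flow = background + mark

# The time-domain mark becomes an invariant spectral line at f_mark.
spec = np.abs(np.fft.rfft(flow - flow.mean()))
freqs = np.fft.rfftfreq(flow.size, d=1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)                         # strongest component sits at the mark
```

This is why a time-domain defence (e.g. batching alone) fails while the paper's digital-filtering countermeasure works: suppressing the narrow spectral band around the mark frequency removes the feature the adversary tracks.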