Results 1-10 of 98
High Quality Document Image Compression with DjVu
Journal of Electronic Imaging, 1998
Cited by 76 (12 self)
We present a new image compression technique called "DjVu" that is specifically geared towards the compression of high-resolution, high-quality images of scanned documents in color. This enables fast transmission of document images over low-speed connections, while faithfully reproducing the visual aspect of the document, including color, fonts, pictures, and paper texture. The DjVu compressor separates the text and drawings, which need a high spatial resolution, from the pictures and backgrounds, which are smoother and can be coded at a lower spatial resolution. Then, several novel techniques are used to maximize the compression ratio: the bilevel foreground image is encoded with AT&T's proposal to the new JBIG2 fax standard, and a new wavelet-based compression method is used for the backgrounds and pictures. Both techniques use a new adaptive binary arithmetic coder called the Z-coder. A typical magazine page in color at 300 dpi can be compressed down to between 40 and 60 KB, approx...
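The layered model the abstract describes, a bilevel foreground mask over a smooth background, can be illustrated with a toy separation. This is only a sketch using a fixed brightness threshold; `separate_layers` and its parameters are invented for illustration and bear no relation to DjVu's actual segmenter:

```python
import numpy as np

def separate_layers(page, threshold=128):
    """Toy foreground/background split in the spirit of DjVu's layered model.

    Builds a bilevel mask by thresholding a grayscale page (2-D uint8 array),
    then fills each layer from the page. NOT the DjVu segmenter.
    """
    mask = page < threshold                  # dark pixels -> foreground (text, line art)
    foreground = np.where(mask, page, 255)   # foreground layer: masked pixels only
    background = np.where(mask, 255, page)   # background layer: the smooth remainder
    return mask, foreground, background

page = np.full((4, 4), 200, dtype=np.uint8)  # light paper
page[1, 1] = 10                              # one "ink" pixel
mask, fg, bg = separate_layers(page)
```

In the real codec the mask is JBIG2-coded at full resolution while the two continuous-tone layers are wavelet-coded at reduced resolution.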
A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery
IEEE Journal of Selected Topics in Signal Processing, 2007
Cited by 48 (14 self)
Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to solve it. The convergence of the method, which is based on the Douglas-Rachford algorithm for monotone operator splitting, is obtained under general conditions. Applications to non-Gaussian image denoising in a tight frame are also demonstrated.
Index Terms — Convex optimization, denoising, Douglas-Rachford, frame, nondifferentiable optimization, Poisson noise.
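The Douglas-Rachford iteration for minimizing f + g alternates the proximity operators of the two functions. A minimal numerical sketch, assuming f is a quadratic data-fidelity term and g the indicator of the nonnegative orthant (both with closed-form prox); the paper's general Hilbert-space setting and convergence conditions are not reproduced here:

```python
import numpy as np

def prox_f(v, gamma, b):
    # prox of gamma * 0.5 * ||x - b||^2 has the closed form (v + gamma*b)/(1 + gamma)
    return (v + gamma * b) / (1.0 + gamma)

def prox_g(v):
    # prox of the indicator of {x >= 0} is the projection onto it
    return np.maximum(v, 0.0)

def douglas_rachford(b, gamma=1.0, iters=200):
    """Douglas-Rachford splitting for min_x 0.5||x - b||^2 + indicator(x >= 0)."""
    y = np.zeros_like(b)
    for _ in range(iters):
        x = prox_g(y)                              # reflected/resolvent step 1
        y = y + prox_f(2 * x - y, gamma, b) - x    # DR update of the governing sequence
    return prox_g(y)                               # the solution is prox_g of the limit

b = np.array([2.0, -3.0, 0.5])
x = douglas_rachford(b)   # solves min 0.5||x - b||^2 subject to x >= 0
```

For this toy problem the answer is simply the projection of `b` onto the orthant, which lets the iteration be checked against a known solution.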
Combining Frequency and Spatial Domain Information for Fast Interactive Image Noise Removal
1996
Cited by 44 (1 self)
Scratches on old films must be removed since these are more noticeable on higher-definition and digital televisions. Wires that suspend actors or cars must be carefully erased during post-production of special-effects shots. Both are time-consuming tasks, but can be addressed by the following image restoration process: given the locations of noisy pixels to be replaced and a prototype image, restore those noisy pixels in a natural way. We call this image noise removal, and this paper describes a fast iterative algorithm for it. Most existing algorithms for removing image noise use either frequency-domain information (e.g., low-pass filtering) or spatial-domain information (e.g., median filtering or stochastic texture generation). The few that do combine the two domains place the limitation that the image be band-limited and the band limits be known. Our algorithm works in both spatial and frequency domains
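As a point of reference for the spatial-domain techniques the abstract mentions, median-filter replacement of flagged pixels can be sketched in a few lines. This is the baseline being contrasted, not the paper's combined spatial/frequency algorithm; the function name and 3×3 neighborhood are choices made here for illustration:

```python
import numpy as np

def median_replace(image, noisy_mask):
    """Replace each flagged pixel with the median of its unflagged 3x3 neighbors.

    `image` is a 2-D array; `noisy_mask` is a boolean array marking the
    pixels to be restored (e.g., a scratch or wire location map).
    """
    out = image.astype(float).copy()
    h, w = image.shape
    for i, j in zip(*np.nonzero(noisy_mask)):
        patch = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not noisy_mask[ni, nj]:
                    patch.append(image[ni, nj])    # only trust unflagged neighbors
        if patch:
            out[i, j] = np.median(patch)
    return out

img = np.full((3, 3), 10.0)
img[1, 1] = 255.0                       # an impulse at a known noisy location
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
restored = median_replace(img, mask)
```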
Convex Set Theoretic Image Recovery by Extrapolated Iterations of Parallel Subgradient Projections
IEEE Trans. Image Process., 1997
Cited by 35 (16 self)
Solving a convex set theoretic image recovery problem amounts to finding a point in the intersection of closed and convex sets in a Hilbert space. The POCS algorithm, in which an initial estimate is sequentially projected onto the individual sets according to a periodic schedule, has been the most prevalent tool for solving such problems. Nonetheless, POCS has several shortcomings: it converges slowly, it is ill-suited for implementation on parallel processors, and it requires the computation of exact projections at each iteration. In this paper, we propose a general parallel projection method (EMOPSP) that overcomes these shortcomings. At each iteration of EMOPSP, a convex combination of subgradient projections onto some of the sets is formed and the update is obtained via relaxation. The relaxation parameter may vary over an iteration-dependent, extrapolated range that extends beyond the interval ]0,2] used in conventional projection methods. EMOPSP not only generalizes existing project...
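The classical simultaneous-projection scheme that EMOPSP generalizes can be sketched as follows, using exact projections onto two hyperplanes and a fixed relaxation. EMOPSP's subgradient projections and extrapolated relaxations beyond ]0,2] are not reproduced; this only shows the averaged-projection skeleton:

```python
import numpy as np

def project_hyperplane(x, a, b):
    # exact projection onto the hyperplane {x : a.x = b}
    return x - (a @ x - b) / (a @ a) * a

def parallel_projections(x0, projections, weights, relax=1.0, iters=100):
    """Simultaneous projection: average the projections onto the sets, then relax.

    All projections can be evaluated in parallel, in contrast with the
    sequential POCS schedule.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        avg = sum(w * P(x) for w, P in zip(weights, projections))
        x = x + relax * (avg - x)
    return x

# two lines in the plane meeting at the point (1, 1)
projs = [lambda x: project_hyperplane(x, np.array([1.0, 1.0]), 2.0),
         lambda x: project_hyperplane(x, np.array([1.0, -1.0]), 0.0)]
x = parallel_projections(np.array([5.0, -3.0]), projs, [0.5, 0.5])
```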
Interpolation and the Discrete Papoulis-Gerchberg Algorithm
IEEE Trans. Signal Processing, 1994
Cited by 32 (20 self)
In this paper we analyze the performance of an iterative algorithm, similar to the discrete Papoulis-Gerchberg algorithm, which can be used to recover missing samples in finite-length records of band-limited data. No assumptions are made regarding the distribution of the missing samples, in contrast with the often-studied extrapolation problem, in which the known samples are grouped together. Indeed, it is possible to regard the observed signal as a sampled version of the original one, and to interpret the reconstruction result studied herein as a sampling result. We show that the iterative algorithm converges if the density of the sampling set exceeds a certain minimum value, which naturally increases with the bandwidth of the data. We give upper and lower bounds for the error as a function of the number of iterations, together with the signals for which the bounds are attained. Also, we analyze the effect of a relaxation constant present in the algorithm on the spectral radius of the iteration matrix. From this analysis we infer the optimum value of the relaxation constant. We also point out, among all sampling sets with the same density, those for which the convergence rate of the recovery algorithm is maximum or minimum. For low-pass signals it turns out that the best convergence rates result when the distances among the missing samples are a multiple of a certain integer. The worst convergence rates generally occur when the missing samples are contiguous.
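The Papoulis-Gerchberg-type iteration analyzed above alternates two steps: band-limit the current estimate, then re-impose the known samples. A discrete sketch with the FFT, under the stated band-limitedness assumption; the relaxation constant `mu` mirrors, but does not claim to reproduce, the paper's exact relaxed iteration:

```python
import numpy as np

def pg_recover(signal, known_mask, band, iters=200, mu=1.0):
    """Recover missing samples of a band-limited finite-length signal.

    `signal` holds the known samples (missing positions are ignored),
    `known_mask` flags the known positions, and `band` lists the DFT bin
    indices on which the signal is assumed to live.
    """
    n = len(signal)
    keep = np.zeros(n, dtype=bool)
    keep[list(band)] = True
    x = np.where(known_mask, signal, 0.0)          # start with zeros at the gaps
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~keep] = 0.0                             # project onto band-limited signals
        y = np.fft.ifft(X).real
        x = np.where(known_mask, signal, x + mu * (y - x))  # re-impose known samples
    return x

n = 32
true = np.cos(2 * np.pi * np.arange(n) / n)        # band-limited: DFT bins 1 and 31
obs, mask = true.copy(), np.ones(n, dtype=bool)
mask[[5, 20]] = False                              # two scattered missing samples
obs[~mask] = 0.0
rec = pg_recover(obs, mask, band=(0, 1, 31))
```

With this high sampling density and narrow band the iteration contracts quickly, consistent with the paper's density condition.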
Inverse Halftoning Using Wavelets
1997
Cited by 31 (1 self)
This paper introduces a new approach to inverse halftoning using nonorthogonal wavelets. The distinct features of this wavelet-based approach are: a) edge information in the high-pass wavelet images of a halftone image is extracted and used to assist inverse halftoning; b) cross-scale correlations in the multiscale wavelet decomposition are used for removing background halftoning noise while preserving important edges in the wavelet low-pass image; c) experiments show that our simple wavelet-based approach outperforms the best results obtained from inverse halftoning methods published in the literature, which are iterative in nature [1, 2].
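The underlying idea, suppress small detail coefficients (halftone noise) while keeping large ones (edges), can be sketched with a one-level separable Haar transform. This is an assumption-laden simplification: the paper uses nonorthogonal wavelets and cross-scale correlations, neither of which appears here:

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def denoise(img, thresh=0.2):
    """Zero small detail coefficients (halftone noise); keep large ones (edges)."""
    a, h, v, d = haar2(img)
    h, v, d = (np.where(np.abs(b) > thresh, b, 0.0) for b in (h, v, d))
    return ihaar2(a, h, v, d)

img = np.arange(16.0).reshape(4, 4)
recon = ihaar2(*haar2(img))     # perfect reconstruction when nothing is thresholded
```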
Information-theoretic image formation
IEEE Transactions on Information Theory, 1998
Cited by 28 (5 self)
The emergent role of information theory in image formation is surveyed. Unlike the subject of information-theoretic communication theory, information-theoretic imaging is far from a mature subject. The possible role of information theory in problems of image formation is to provide a rigorous framework for defining the imaging problem, for defining measures of optimality used to form estimates of images, for addressing issues associated with the development of algorithms based on these optimality criteria, and for quantifying the quality of the approximations. The definition of the imaging problem consists of an appropriate model for the data and an appropriate model for the reproduction space, which is the space within which image estimates take values. Each problem statement has an associated optimality criterion that measures the overall quality of an estimate. The optimality criteria include maximizing the likelihood function and minimizing mean squared error for stochastic problems, and minimizing squared error and discrimination for deterministic problems. The development of algorithms is closely tied to the definition of the imaging problem and the associated optimality criterion. Algorithms with a strong information-theoretic motivation are obtained by the method of expectation maximization. Related alternating minimization algorithms are discussed. In quantifying the quality of approximations, global and local measures are discussed. Global measures include the (mean) squared error and discrimination between an estimate and the truth, and probability of error for recognition or hypothesis-testing problems. Local measures include Fisher information.
Index Terms — Image analysis, image formation, image processing, image reconstruction, image restoration, imaging, inverse problems, maximum-likelihood estimation, pattern recognition.
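A canonical instance of the expectation-maximization algorithms the survey discusses is the multiplicative EM (Richardson-Lucy) iteration for the Poisson imaging model y ~ Poisson(H λ). A minimal sketch on a 2 × 2 system; the matrix `H` and the data here are invented for illustration:

```python
import numpy as np

def em_poisson(y, H, iters=500):
    """EM (Richardson-Lucy) iteration maximizing the Poisson likelihood over lam >= 0.

    lam_{k+1} = (lam_k / s) * H^T (y / (H lam_k)), with s the column sums of H.
    The iterates stay nonnegative automatically.
    """
    lam = np.ones(H.shape[1])
    s = H.sum(axis=0)                 # sensitivity (column sums)
    for _ in range(iters):
        pred = H @ lam                # expected data under the current estimate
        lam = lam / s * (H.T @ (y / pred))
    return lam

H = np.array([[0.8, 0.2],
              [0.2, 0.8]])           # a toy 2-pixel blur
truth = np.array([3.0, 1.0])
y = H @ truth                        # noiseless data, so EM can recover truth exactly
lam = em_poisson(y, H)
```

With noiseless data and an invertible system the iteration converges to the true intensities; with real Poisson data it converges to a maximum-likelihood estimate instead.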
A CMOS Area Image Sensor With Pixel Level A/D Conversion
In ISSCC Digest of Technical Papers, 1995
Cited by 26 (7 self)
A CMOS 64 × 64 pixel area image sensor chip using Sigma-Delta modulation at each pixel for A/D conversion is described. The image data output is digital. The chip was fabricated in a 1.2 µm two-layer-metal, single-layer-poly, n-well CMOS process. Each pixel block consists of a phototransistor and 22 MOS transistors. Test results demonstrate a dynamic range potentially greater than 93 dB, a signal-to-noise ratio (SNR) of up to 61 dB, and dissipation of less than 1 mW with a 5 V power supply.
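The per-pixel conversion principle, first-order Sigma-Delta modulation followed by decimation, can be sketched in a few lines. This models the signal flow only, not the chip's phototransistor circuit, and the threshold value is an assumption made for the sketch:

```python
def sigma_delta(samples, threshold=0.5):
    """First-order Sigma-Delta modulation of a sequence of levels in [0, 1].

    Integrate the input minus the 1-bit feedback; emit a 1 whenever the
    integrator crosses the threshold. The density of 1s in the bitstream
    tracks the input level.
    """
    integrator = 0.0
    bits = []
    for x in samples:
        integrator += x
        bit = 1 if integrator >= threshold else 0
        integrator -= bit                  # 1-bit DAC feedback
        bits.append(bit)
    return bits

bits = sigma_delta([0.25] * 64)            # constant quarter-scale input
level = sum(bits) / len(bits)              # decimation by simple averaging
```

Averaging the oversampled bitstream recovers the pixel intensity, which is what the chip's off-array decimation stage does in principle.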
Image restoration subject to a total variation constraint
IEEE Transactions on Image Processing, 2004
Cited by 26 (2 self)
Total variation has proven to be a valuable concept in connection with the recovery of images featuring piecewise smooth components. So far, however, it has been used exclusively as an objective to be minimized under constraints. In this paper, we propose an alternative formulation in which total variation is used as a constraint in a general convex programming framework. This approach places no limitation on the incorporation of additional constraints in the restoration process, and the resulting optimization problem can be solved efficiently via block-iterative methods. Image denoising and deconvolution applications are demonstrated.
I. PROBLEM STATEMENT
The classical linear restoration problem is to find the original form of an image in a real Hilbert space from the observation of a degraded image where
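The constraint set in question is of the form {x : tv(x) ≤ η} for a discrete total variation tv. A sketch of evaluating the (anisotropic) discrete TV of an image; the paper's machinery for enforcing the constraint is not reproduced here, and the anisotropic form is one common choice among several:

```python
import numpy as np

def total_variation(img):
    """Anisotropic discrete total variation: sum of absolute differences
    between horizontally and vertically adjacent pixels."""
    dv = np.abs(np.diff(img, axis=0)).sum()   # vertical jumps
    dh = np.abs(np.diff(img, axis=1)).sum()   # horizontal jumps
    return dv + dh

img = np.array([[0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0]])
tv = total_variation(img)                     # 2 vertical + 2 horizontal unit jumps
```

A smooth image has small TV while an isolated spike like the one above contributes a jump on each adjacent edge, which is why bounding TV suppresses noise but tolerates piecewise smooth structure.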
Method of successive projections for finding a common point of sets in metric spaces
J. Opt. Theory and Appl., 1990
Cited by 22 (8 self)
Many problems in applied mathematics can be abstracted into finding a common point of a finite collection of sets. If all the sets are closed and convex in a Hilbert space, the method of successive projections (MOSP) has been shown to converge to a solution point, i.e., a point in the intersection of the sets. These assumptions, however, are not suitable for a broad class of problems. In this paper, we generalize the MOSP to collections of approximately compact sets in metric spaces. We first define a sequence of successive projections (SOSP) in such a context and then proceed to establish conditions for the convergence of a SOSP to a solution point. Finally, we demonstrate an application of the method to digital signal restoration.
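The Hilbert-space MOSP that the paper generalizes can be sketched with two sets admitting closed-form projections; the ball and halfspace below are chosen here purely for illustration, and none of the paper's metric-space machinery appears:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # projection onto the closed ball of given radius centered at the origin
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_halfspace(x, a, b):
    # projection onto the halfspace {x : a.x <= b}
    v = a @ x - b
    return x if v <= 0 else x - v / (a @ a) * a

def mosp(x0, projections, iters=100):
    """Method of successive projections: cyclically project onto each set.

    Converges to a point of the intersection when the sets are closed,
    convex, and the intersection is nonempty.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for P in projections:
            x = P(x)
    return x

projs = [lambda x: project_ball(x, 1.0),
         lambda x: project_halfspace(x, np.array([-1.0, 0.0]), -0.5)]  # x[0] >= 0.5
x = mosp(np.array([-2.0, 0.0]), projs)
```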