Results 1 - 4 of 4
Constant time O(1) bilateral filtering
- In: CVPR
Abstract - Cited by 41 (0 self)
This paper presents three novel methods that enable bilateral filtering in constant time O(1) without sampling. Constant time means that the computation time of the filtering remains the same even if the filter size becomes very large. Our first method takes advantage of integral histograms to avoid redundant operations for bilateral filters with box spatial and arbitrary range kernels. For bilateral filters constructed from polynomial range and arbitrary spatial filters, our second method provides a direct formulation using linear filters of image powers, without any approximation. Lastly, we show that Gaussian range and arbitrary spatial bilateral filters can be expressed through Taylor series as linear filter decompositions without any noticeable degradation of the filter response. All these methods drastically decrease the computation time, cutting it down to a constant (e.g., 0.06 seconds per 1 MB image) while achieving very high PSNRs of over 45 dB. In addition to the computational advantages, our methods are straightforward to implement.
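The first method described above (box spatial kernel plus integral histograms) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, bin count, and NumPy formulation are assumptions. The key property is that the per-pixel cost depends only on the number of histogram bins, not on the filter radius.

```python
import numpy as np

def bilateral_box_integral_hist(img, radius, sigma_r, bins=32):
    """Sketch of an O(1)-per-pixel bilateral filter with a box spatial
    kernel, built on integral histograms: any box histogram is recovered
    with four lookups per bin, independent of the box radius."""
    h, w = img.shape
    # Quantize intensities into histogram bins.
    edges = np.linspace(img.min(), img.max() + 1e-6, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(img, edges) - 1, 0, bins - 1)

    # One integral image per bin (zero-padded, cumulative over rows and cols).
    onehot = (idx[..., None] == np.arange(bins)).astype(np.float64)
    integral = np.pad(onehot, ((1, 0), (1, 0), (0, 0))).cumsum(0).cumsum(1)

    out = np.empty_like(img, dtype=np.float64)
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Box histogram in O(bins), independent of radius.
            hist = (integral[y1, x1] - integral[y0, x1]
                    - integral[y1, x0] + integral[y0, x0])
            # Gaussian range weights around this pixel's intensity.
            wgt = hist * np.exp(-0.5 * ((centers - img[y, x]) / sigma_r) ** 2)
            out[y, x] = (wgt * centers).sum() / wgt.sum()
    return out
```

The quantization into bins is what makes the range kernel arbitrary: any weighting of `centers` could replace the Gaussian here, at a fixed cost per pixel.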
Reshuffling: a fast algorithm for filtering with arbitrary kernels
- In: SPIE 6811, Real-Time Image Processing, 2008
Abstract - Cited by 2 (0 self)
A novel method to accelerate the application of linear filters that have multiple identical coefficients on arbitrary kernels is presented. Such filters, including Gabor filters, gray-level morphological operators, and volume smoothing functions, are widely used in many computer vision tasks. By taking advantage of the overlapping area between the kernels of neighboring points, the reshuffling technique avoids redundant multiplications when the filter response is computed. It finds the set of unique coefficients, constructs a set of relative links for each coefficient, and then sweeps through the input data, accumulating the responses at each point while applying the coefficients via their relative links. Dual solutions, single input access and single output access, that achieve a 40% performance improvement are provided. In addition to its computational advantage, the method keeps a minimal memory footprint, which makes it ideal for embedded platforms. The effects of quantization, kernel size, and symmetry on the computational savings are discussed. Results show that reshuffling is superior to the conventional approach.
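The core idea of reshuffling, as described in the abstract above, can be illustrated with a deliberately simplified 1-D sketch: group kernel taps that share the same coefficient, accumulate the corresponding samples first, and multiply only once per unique value. The function name and structure are illustrative assumptions, not the paper's dual single-access formulations.

```python
import numpy as np

def reshuffled_filter_1d(signal, kernel):
    """Valid-mode correlation that multiplies once per *unique* kernel
    coefficient: samples sharing a coefficient are summed first via their
    relative offsets ("links"), then scaled in a single multiplication."""
    n, k = len(signal), len(kernel)
    # Relative links: offsets grouped by unique coefficient value.
    links = {}
    for off, c in enumerate(kernel):
        links.setdefault(c, []).append(off)
    out = np.zeros(n - k + 1)
    for c, offsets in links.items():
        acc = np.zeros(n - k + 1)
        for off in offsets:            # accumulate samples sharing this c
            acc += signal[off:off + n - k + 1]
        out += c * acc                 # one multiply per unique coefficient
    return out
```

With a kernel of `k` taps but only `u` distinct values, the multiply count per output drops from `k` to `u`, which is where quantized or symmetric kernels (Gabor, morphological) benefit most.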
Can Run-time Reconfigurable Hardware be more Accessible?
"... Abstract — In this paper, a new project named Context ..."
TPTCGen: A Tool for Temporal Partitioning and Task Design Exploration for Massive Data Applications in High Performance Reconfigurable Computers
Abstract
Abstract—In this work we present an experimental Design Space Exploration (DSE) tool for digital integrated systems, targeting designs for reconfigurable Field Programmable Gate Array (FPGA) devices. The tool, called TPTCGen, is integrated with design specification and high-level synthesis tools, aiming at efficient mapping of applications onto FPGA-based computers. The approach is based on a temporal partitioning method that targets FPGA area improvement for computationally intensive applications, together with performance increases through the exploration of parallelism and a reusable core library. This library is composed of a collection of different hardware core implementations for each task or function. It is thus possible to explore a different space/time trade-off for each core in each new partition and so reach a good performance and design on the FPGA devices. Experiments have demonstrated the efficiency of this approach in terms of design quality and solution convergence speed. The main tool algorithm, examples, and some results are also presented.
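The temporal partitioning idea at the heart of the abstract above can be illustrated with a deliberately simplified greedy sketch: walk the tasks in dependency order and start a new partition (i.e., a reconfiguration) whenever the next task's area would exceed the device budget. The function, the area model, and the greedy heuristic are hypothetical simplifications, far cruder than the tool's actual algorithm.

```python
def temporal_partition(task_areas, fpga_area):
    """Greedy temporal partitioning sketch: pack tasks (name, area) into
    successive partitions that each fit the FPGA area budget. Each
    partition boundary corresponds to one device reconfiguration."""
    partitions, current, used = [], [], 0
    for task, area in task_areas:
        if area > fpga_area:
            raise ValueError(f"task {task} exceeds device area")
        if used + area > fpga_area:   # close partition, reconfigure
            partitions.append(current)
            current, used = [], 0
        current.append(task)
        used += area
    if current:
        partitions.append(current)
    return partitions
```

A core library as described in the abstract would extend this by letting each task choose among several (area, latency) implementations before packing, which is where the space/time exploration comes in.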