Results 1 - 10 of 36
SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
2011
"... We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) =A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization ..."
Cited by 46 (4 self)
Abstract:
We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.
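The alternating low-rank/sparse structure lends itself to a compact illustration. Below is a minimal numpy sketch of a greedy recovery loop in the spirit of SpaRCS, using simple projected-gradient updates (rank truncation and hard thresholding) rather than the paper's CoSaMP/ADMiRA-style support merging; the matrix operator A, step size, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lowrank_proj(X, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def sparse_proj(X, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    Z = np.zeros_like(X)
    idx = np.unravel_index(np.argsort(np.abs(X), axis=None)[-k:], X.shape)
    Z[idx] = X[idx]
    return Z

def recover_L_plus_S(A, y, shape, r, k, iters=300, step=None):
    """Greedy alternating recovery of M = L + S from y = A @ vec(M).
    A: (m, n1*n2) measurement matrix, y: (m,) measurements."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative gradient step
    L = np.zeros(shape)
    S = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (y - A @ (L + S).ravel())).reshape(shape)
        L = lowrank_proj(L + step * grad, r)      # low-rank update
        grad = (A.T @ (y - A @ (L + S).ravel())).reshape(shape)
        S = sparse_proj(S + step * grad, k)       # sparse update
    return L, S

# Tiny example: 600 random measurements of a rank-2 plus 10-sparse 32x32 matrix.
rng = np.random.default_rng(0)
n1 = n2 = 32
L0 = rng.standard_normal((n1, 2)) @ rng.standard_normal((2, n2))
S0 = sparse_proj(rng.standard_normal((n1, n2)), 10) * 5
A = rng.standard_normal((600, n1 * n2)) / np.sqrt(600)
y = A @ (L0 + S0).ravel()
L_hat, S_hat = recover_L_plus_S(A, y, (n1, n2), r=2, k=10)
```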
P2C2: Programmable pixel compressive camera for high speed imaging
IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011
"... We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the cam ..."
Cited by 37 (6 self)
Abstract:
We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal superresolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low-speed coded video. The imaging architecture and the reconstruction algorithm allow us to achieve temporal superresolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCOS modulator and recover several videos at 200 fps using a 25 fps camera.
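As a concrete reading of the measurement model, the sketch below integrates a short high-speed video under a per-pixel binary shutter to form one coded camera frame; the array shapes and the random shutter pattern are illustrative assumptions, not the prototype's actual modulation.

```python
import numpy as np

def p2c2_capture(video, shutter):
    """Integrate a high-speed video under a per-pixel shutter.
    video:   (T, H, W) high-speed frames within one camera exposure
    shutter: (T, H, W) binary per-pixel modulation, toggled faster than the frame rate
    returns: (H, W) single coded low-speed frame"""
    return (video * shutter).sum(axis=0)

# Example: 8 sub-frames integrated into one coded observation.
T, H, W = 8, 64, 64
rng = np.random.default_rng(0)
video = rng.random((T, H, W))
shutter = rng.integers(0, 2, size=(T, H, W)).astype(float)
coded_frame = p2c2_capture(video, shutter)
```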
Video from a single coded exposure photograph using a learned over-complete dictionary
IEEE International Conference on Computer Vision (ICCV), 2011
"... Cameras face a fundamental tradeoff between the spatial and temporal resolution – digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase ..."
Cited by 33 (2 self)
Abstract:
Cameras face a fundamental tradeoff between spatial and temporal resolution: digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical constraints on the sampling scheme imposed by the architectures of present image sensor devices. Consequently, our sampling scheme can be implemented on image sensors by making a straightforward modification to the control unit. To demonstrate the power of our approach, we have implemented a prototype imaging system with per-pixel coded exposure control using a liquid crystal on silicon (LCoS) device. Using both simulations and experiments on a wide range of scenes, we show that our method can effectively reconstruct a video from a single image while maintaining high spatial resolution.
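To illustrate the per-patch sparse-coding step such a pipeline relies on, the sketch below recovers one space-time patch from its coded-exposure measurement with orthogonal matching pursuit; the dictionary D is a random stand-in for the learned over-complete dictionary, and the patch sizes and coded-exposure operator are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
P, T = 7 * 7, 9                           # pixels per patch and frames per patch (illustrative sizes)
n = P * T                                 # length of a vectorized space-time patch
D = rng.standard_normal((n, 4 * n))       # stand-in for the learned over-complete dictionary
D /= np.linalg.norm(D, axis=0)

# Coded exposure: each pixel integrates the T frames under its own binary on/off code.
code = rng.integers(0, 2, size=(P, T)).astype(float)
M = np.zeros((P, n))
for p in range(P):
    M[p, p::P] = code[p]                  # pixel p sums its own samples across time (frame-major order)

a_true = np.where(rng.random(4 * n) < 0.01, rng.standard_normal(4 * n), 0.0)
y = M @ (D @ a_true)                      # single coded photograph of the patch

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20, fit_intercept=False)
omp.fit(M @ D, y)                         # sparse code of the patch w.r.t. the masked dictionary
patch_hat = (D @ omp.coef_).reshape(T, 7, 7)   # recovered space-time patch
```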
CS-MUVI: video compressive sensing for spatial-multiplexing cameras
IEEE International Conference on Computational Photography (ICCP), 2012
"... Compressive sensing (CS)-based spatial-multiplexing cameras (SMCs) sample a scene through a series of coded projections using a spatial light modulator and a few optical sensor elements. SMC architectures are particularly useful when imaging at wavelengths for which full-frame sensors are too cumber ..."
Cited by 31 (7 self)
Abstract:
Compressive sensing (CS)-based spatial-multiplexing cameras (SMCs) sample a scene through a series of coded projections using a spatial light modulator and a few optical sensor elements. SMC architectures are particularly useful when imaging at wavelengths for which full-frame sensors are too cumbersome or expensive. While existing recovery algorithms for SMCs perform well for static images, they typically fail for time-varying scenes (videos). In this paper, we propose a novel CS multi-scale video (CS-MUVI) sensing and recovery framework for SMCs. Our framework features a co-designed video CS sensing matrix and recovery algorithm that provide an efficiently computable low-resolution video preview. We estimate the scene's optical flow from the video preview and feed it into a convex-optimization algorithm to recover the high-resolution video. We demonstrate the performance and capabilities of the CS-MUVI framework for different scenes.
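A toy numpy sketch of the low-resolution preview idea: restrict the sensing matrix to a coarse, block-constant image model and solve a small least-squares problem. The random +/-1 patterns, preview size, and least-squares step are illustrative assumptions; the paper co-designs the sensing matrix so this coarse-scale system is well conditioned by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 64                       # native scene resolution
h = w = 8                        # preview resolution
m = h * w                        # one preview's worth of measurements

# Illustrative +/-1 modulator patterns (a random stand-in for the co-designed sensing matrix).
Phi = rng.choice([-1.0, 1.0], size=(m, H * W))

# Upsampling operator: replicate each low-res pixel over its (H/h) x (W/w) block.
U = np.zeros((H * W, h * w))
bh, bw = H // h, W // w
for i in range(h):
    for j in range(w):
        block = np.zeros((H, W))
        block[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = 1.0
        U[:, i * w + j] = block.ravel()

frame = rng.random((H, W))
y = Phi @ frame.ravel()                                # SMC measurements for this time window

preview, *_ = np.linalg.lstsq(Phi @ U, y, rcond=None)  # least-squares low-resolution preview
preview = preview.reshape(h, w)                        # optical flow is then estimated on such previews
```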
Spectral Compressed Sensing via Structured Matrix Completion
"... The paper studies the problem of recovering a spectrally sparse object from a small number of time domain samples. Specifically, the object of interest with ambient dimension n is assumed to be a mixture of r complex multi-dimensional sinusoids, while the underlying frequencies can assume any value ..."
Cited by 17 (6 self)
Abstract:
The paper studies the problem of recovering a spectrally sparse object from a small number of time-domain samples. Specifically, the object of interest with ambient dimension n is assumed to be a mixture of r complex multi-dimensional sinusoids, while the underlying frequencies can assume any value in the unit disk. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this problem, we develop a novel nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. The algorithm starts by arranging the data into a low-rank enhanced form with multi-fold Hankel structure, then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of O(r log^2 n). We also show that, in many instances, accurate completion of a low-rank multi-fold Hankel matrix is possible when the number of observed entries is proportional to the information-theoretic limit (except for a logarithmic gap). The robustness of EMaC against bounded noise and its applicability to super-resolution are further demonstrated by numerical experiments.
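To make the enhanced form concrete for the 1-D case, the sketch below arranges the samples into a Hankel matrix and runs a simple singular-value-thresholding loop as a surrogate for the nuclear-norm program; the pencil parameter, threshold, and iteration count are illustrative, and the paper's EMaC solves the constrained nuclear-norm minimization directly.

```python
import numpy as np

def hankel(x, p):
    """p x (len(x)-p+1) Hankel matrix whose anti-diagonals hold the samples of x."""
    n = len(x)
    return np.array([x[i:i + n - p + 1] for i in range(p)])

def dehankel(H_):
    """Average anti-diagonals back into a 1-D signal."""
    p, q = H_.shape
    x = np.zeros(p + q - 1, dtype=H_.dtype)
    counts = np.zeros(p + q - 1)
    for i in range(p):
        for j in range(q):
            x[i + j] += H_[i, j]
            counts[i + j] += 1
    return x / counts

def emac_svt(x_obs, mask, p, tau=1.0, iters=300):
    """Recover a spectrally sparse signal from the samples of x_obs observed where mask is True."""
    x = np.where(mask, x_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(hankel(x, p), full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
        x = dehankel((U * s) @ Vt)
        x[mask] = x_obs[mask]                 # enforce consistency with observed samples
    return x

# Example: a mixture of r = 3 complex sinusoids with off-grid frequencies, half the samples observed.
rng = np.random.default_rng(3)
n = 64
t = np.arange(n)
x_true = sum(np.exp(2j * np.pi * f * t) for f in rng.random(3))
mask = rng.random(n) < 0.5
x_hat = emac_svt(x_true, mask, p=n // 2)
```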
Flutter Shutter Video Camera for Compressive Sensing of Videos
"... Video cameras are invariably bandwidth limited and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the costs of these sensors. ..."
Cited by 12 (4 self)
Abstract:
Video cameras are invariably bandwidth limited, and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the costs of these sensors. In stark contrast, hardware improvements in temporal resolution have been modest. One solution to enhance temporal resolution is to use high-bandwidth imaging devices such as high-speed sensors and camera arrays. Unfortunately, these solutions are expensive. An alternate solution is motivated by recent advances in computational imaging and compressive sensing. Camera designs based on these principles typically modulate the incoming video using spatio-temporal light modulators and capture the modulated video at a lower bandwidth. Reconstruction algorithms, motivated by compressive sensing, are subsequently used to recover the high-bandwidth video at high fidelity. Though promising, these methods have been limited since they require complex and expensive light modulators that make the techniques difficult to realize in practice. In this paper, we show that a simple coded exposure modulation is sufficient to reconstruct high-speed videos. We propose the Flutter Shutter Video Camera (FSVC), in which each exposure of the sensor is temporally coded using an independent pseudo-random sequence. Such exposure coding is easily achieved in modern sensors and is already a feature of several machine vision cameras. We also develop two algorithms for reconstructing the high-speed video: the first based on minimizing the total variation of the spatio-temporal slices of the video, and the second based on a data-driven, dictionary-based approximation. We perform evaluations on simulated videos and real data to illustrate the robustness of our system.
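A short numpy sketch of the FSVC measurement model: each captured frame integrates a group of high-speed frames under an independent pseudo-random on/off code shared by all pixels; the sizes and random codes are illustrative assumptions, and the TV and dictionary reconstructions are not shown.

```python
import numpy as np

def fsvc_capture(video, codes):
    """video: (F, T, H, W) high-speed frames, grouped T per camera exposure
    codes: (F, T) pseudo-random binary flutter code, one independent sequence per exposure
    returns: (F, H, W) coded low-speed frames"""
    return np.einsum('ft,fthw->fhw', codes, video)

rng = np.random.default_rng(4)
F, T, H, W = 4, 16, 32, 32
video = rng.random((F, T, H, W))
codes = rng.integers(0, 2, size=(F, T)).astype(float)
coded_frames = fsvc_capture(video, codes)
# Reconstruction (omitted) would minimize total variation of spatio-temporal slices
# or fit a data-driven dictionary approximation, as the abstract describes.
```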
Compressive mechanism: Utilizing sparse representation in differential privacy.
WPES, 2011
"... Abstract Differential privacy provides the first theoretical foundation with provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important ..."
Cited by 12 (1 self)
Abstract:
Differential privacy provides the first theoretical foundation with a provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important goal in the design of a differentially private mechanism is to reduce the effect of random noise, ensuring that the noisy results can still be useful. This paper proposes the compressive mechanism, a novel solution on the basis of the state-of-the-art compression technique called compressive sensing. Compressive sensing is a decent theoretical tool for compact synopsis construction, using random projections. In this paper, we show that the amount of noise is significantly reduced from O(√n) to O(log n) when the noise insertion procedure is carried out on the synopsis samples instead of the original database. As an extension, we also apply the proposed compressive mechanism to solve the problem of continual release of statistical results. Extensive experiments using real datasets justify our accuracy claims.
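A hedged numpy sketch of the compressive mechanism's flow: project a sparse count vector onto a small random synopsis, add Laplace noise calibrated to the synopsis' sensitivity, and recover an estimate for answering queries. The sensitivity bound, projection size, and the lasso recovery step are simplifications, not the paper's exact calibration or algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, m, eps = 1024, 128, 1.0

x = np.zeros(n)                                   # a sparse histogram / count vector
x[rng.choice(n, 20, replace=False)] = rng.integers(1, 50, 20)

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)    # random projection (the synopsis)
sensitivity = np.abs(Phi).sum(axis=0).max()       # L1 change in Phi @ x when one count changes by 1
y = Phi @ x + rng.laplace(scale=sensitivity / eps, size=m)  # noisy synopsis (Laplace mechanism)

# Answer queries by first reconstructing a sparse estimate of x from the noisy synopsis.
lasso = Lasso(alpha=0.1, fit_intercept=False, max_iter=5000)
lasso.fit(Phi, y)
x_hat = lasso.coef_
```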
Video Compressive Sensing Using Gaussian Mixture Models
"... A Gaussian mixture model (GMM) based algorithm is proposed for video reconstruction from temporally-compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method ..."
Cited by 11 (5 self)
Abstract:
A Gaussian mixture model (GMM) based algorithm is proposed for video reconstruction from temporally-compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
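The analytic inversion the abstract refers to reduces, per Gaussian component, to a standard linear-Gaussian posterior mean. The sketch below reconstructs one vectorized space-time patch from its compressive measurement under an assumed, already-trained GMM; the component parameters, noise level, and binary compression operator are placeholders, not the paper's trained model or camera coding.

```python
import numpy as np

def gmm_patch_inverse(y, Phi, weights, means, covs, sigma2=1e-4):
    """MMSE estimate of a patch x from y = Phi x + noise under a GMM prior.
    weights: (K,), means: (K, n), covs: (K, n, n)."""
    K, n = means.shape
    post_means = np.zeros((K, n))
    log_like = np.zeros(K)
    for k in range(K):
        S = Phi @ covs[k] @ Phi.T + sigma2 * np.eye(len(y))   # measurement covariance under component k
        r = y - Phi @ means[k]
        post_means[k] = means[k] + covs[k] @ Phi.T @ np.linalg.solve(S, r)  # per-component posterior mean
        sign, logdet = np.linalg.slogdet(S)
        log_like[k] = np.log(weights[k]) - 0.5 * (logdet + r @ np.linalg.solve(S, r))
    resp = np.exp(log_like - log_like.max())
    resp /= resp.sum()                                        # posterior component probabilities
    return resp @ post_means                                  # mixture of posterior means

# Tiny example with made-up GMM parameters and a random stand-in for the temporal coding operator.
rng = np.random.default_rng(6)
n, m, K = 8 * 8 * 4, 8 * 8, 3                                 # patch of 4 frames compressed to 1
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
weights = np.full(K, 1.0 / K)
means = rng.standard_normal((K, n))
A = rng.standard_normal((K, n, n)) * 0.1
covs = np.einsum('kij,klj->kil', A, A) + 0.01 * np.eye(n)     # symmetric positive-definite covariances
x = means[0] + A[0] @ rng.standard_normal(n)
x_hat = gmm_patch_inverse(Phi @ x, Phi, weights, means, covs)
```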
Compressive acquisition of linear dynamical systems
2013
"... Compressive sensing (CS) enables the acquisition and recovery of sparse signals and images at sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Vid ..."
Cited by 11 (5 self)
Abstract:
Compressive sensing (CS) enables the acquisition and recovery of sparse signals and images at sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the time-varying parameters at each instant and accumulates measurements over time to estimate the time-invariant parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach and demonstrate its effectiveness with a range of experiments involving video recovery and scene classification.
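A tiny numpy sketch of the accumulate-over-time idea: once the low-dimensional state sequence has been estimated, the high-dimensional observation matrix follows from a single least-squares problem that stacks all frames. State estimation itself is omitted here, and the sizes and measurement operators are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, d, T, m = 256, 5, 100, 20             # pixels/frame, state dim, #frames, measurements/frame

C_true = rng.standard_normal((n_pix, d))      # time-invariant observation matrix (the dynamic texture)
X = rng.standard_normal((d, T))               # low-dimensional state sequence (assumed already estimated)
Phis = rng.standard_normal((T, m, n_pix)) / np.sqrt(m)   # a fresh small measurement operator per frame

# Per-frame compressive measurements of the video frames y_t = C x_t.
Z = np.stack([Phis[t] @ (C_true @ X[:, t]) for t in range(T)])               # (T, m)

# Accumulate every frame into one least-squares problem via Phi_t C x_t = (x_t^T kron Phi_t) vec(C).
A = np.concatenate([np.kron(X[:, t][None, :], Phis[t]) for t in range(T)])   # (T*m, n_pix*d)
c_hat, *_ = np.linalg.lstsq(A, Z.ravel(), rcond=None)
C_hat = c_hat.reshape(n_pix, d, order='F')    # column-major reshape to match the vec(C) convention
```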
Adaptive temporal compressive sensing for video
International Conference on Image Processing (ICIP), 2013
"... This paper introduces the concept of adaptive temporal com-pressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene’s tempo-ral complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal ..."
Cited by 5 (5 self)
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems.
Index Terms: Video compressive sensing, temporal compressive sensing ratio design, temporal superresolution, adaptive temporal compressive sensing, real-time implementation.
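One hedged sketch of how such adaptation could be driven from compressed data alone: use the energy of differences between consecutive coded measurements as a temporal-complexity proxy and map it to the temporal compression ratio for the next exposure window. The thresholds and the mapping below are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def choose_compression_ratio(coded_frames, ratios=(4, 8, 16, 32), thresholds=(0.05, 0.02, 0.01)):
    """Pick the temporal compression ratio for the next exposure window.
    coded_frames: (F, H, W) recent coded measurements straight off the sensor."""
    diffs = np.diff(coded_frames, axis=0)
    complexity = np.mean(diffs ** 2) / (np.mean(coded_frames ** 2) + 1e-12)  # normalized temporal energy
    for ratio, thr in zip(ratios, thresholds):
        if complexity > thr:               # fast-changing scene: integrate fewer frames per measurement
            return ratio
    return ratios[-1]                      # near-static scene: compress more aggressively

# Example with a few recent coded frames from the camera (random stand-ins here).
rng = np.random.default_rng(8)
frames = rng.random((3, 64, 64))
next_ratio = choose_compression_ratio(frames)
```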