Results 1 - 10 of 16
Design and Analysis of a Hardware-Efficient Compressed Sensing Architecture for Data Compression in Wireless Sensors
- IEEE Journal of Solid-State Circuits (JSSC)
"... Abstract—This work introduces the use of compressed sensing (CS) algorithms for data compression in wireless sensors to ad-dress the energy and telemetry bandwidth constraints common to wireless sensor nodes. Circuit models of both analog and dig-ital implementations of the CS system are presented t ..."
Abstract
-
Cited by 27 (5 self)
Abstract—This work introduces the use of compressed sensing (CS) algorithms for data compression in wireless sensors to address the energy and telemetry bandwidth constraints common to wireless sensor nodes. Circuit models of both analog and digital implementations of the CS system are presented that enable analysis of the power/performance costs associated with the design space for any potential CS application, including analog-to-information converters (AIC). Results of the analysis show that a digital implementation is significantly more energy-efficient for the wireless sensor space, where signals require high gain and medium to high resolutions. The resulting circuit architecture is implemented in a 90 nm CMOS process. Measured power results correlate well with the circuit models, and the test system demonstrates continuous, on-the-fly data processing, resulting in more than an order of magnitude compression for electroencephalography (EEG) signals while consuming only 1.9 μW at 0.6 V for sub-20 kS/s sampling rates. The design and measurement of the proposed architecture are presented in the context of medical sensors; however, the tools and insights are generally applicable to any sparse data acquisition.
Index Terms—Biomedical electronics, circuit analysis, compressed sensing, electroencephalography, encoding, low power electronics, sensors, wireless sensor networks.
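As a rough illustration of the encoding step this abstract describes (not the paper's measured design), a minimal Python sketch of digital CS compression with a pseudo-random ±1 measurement matrix; the block length, compression ratio, and surrogate signal are assumptions made for the example:

# Toy illustration of compressed-sensing (CS) data compression as used in
# wireless sensor nodes: the encoder only computes y = Phi @ x with a
# pseudo-random +/-1 (Bernoulli) matrix, so it is cheap enough for the
# sensor; the costly sparse reconstruction runs on the receiver side.
# The ~10x compression ratio and signal model below are illustrative,
# not the measured EEG settings from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 512          # samples per block at the sensor's native rate
M = N // 10      # roughly 10x compression, in the spirit of the EEG result above

# Surrogate "sparse-ish" sensor signal: a few low-frequency components.
t = np.arange(N)
x = (np.sin(2 * np.pi * 3 * t / N)
     + 0.5 * np.sin(2 * np.pi * 7 * t / N))

# Digital CS encoder: pseudo-random +/-1 projections (multiplier-free in
# hardware, since multiplying by +/-1 reduces to add/subtract).
Phi = rng.choice([-1.0, 1.0], size=(M, N))
y = Phi @ x      # the only data the sensor needs to transmit

print(f"transmitted {M} coefficients instead of {N} samples "
      f"({N / M:.0f}x compression)")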
A sub-Nyquist radar prototype: Hardware and algorithms
- IEEE Transactions on Aerospace and Electronic Systems, special issue on Compressed Sensing for Radar, Aug. 2012
"... Traditional radar sensing typically employs matched filtering between the received signal and the shape of the transmitted pulse. Matched filtering (MF) is conventionally carried out digitally, after sampling the received analog signals. Here, principles from classic sampling theory are generally em ..."
Abstract
-
Cited by 9 (7 self)
Traditional radar sensing typically employs matched filtering between the received signal and the shape of the transmitted pulse. Matched filtering (MF) is conventionally carried out digitally, after sampling the received analog signals. Here, principles from classic sampling theory are generally employed, requiring that the received signals be sampled at twice their baseband bandwidth. The resulting sampling rates necessary for correlation-based radar systems become quite high, as growing demands for target distinction capability and spatial resolution stretch the bandwidth of the transmitted pulse. The large amounts of sampled data also necessitate vast memory capacity. In addition, real-time data processing typically results in high power consumption. Recently, new approaches for radar sensing and estimation were introduced, based on the finite rate of innovation (FRI) and Xampling frameworks. Exploiting the parametric nature of radar signals, these techniques allow significant reduction in sampling rate, implying potential power savings, while maintaining the system’s estimation capabilities at sufficiently high signal-to-noise ratios (SNRs). Here we present for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate by real-time analog experiments that our system maintains reasonable recovery capabilities while sampling radar signals, which would conventionally require a rate of about 30 MHz, at a total rate of 1 MHz.
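A minimal numerical sketch of the finite-rate-of-innovation recovery that underlies Xampling, using an annihilating filter on a handful of Fourier coefficients; the two-target scenario and normalized values are illustrative assumptions, not the prototype's actual processing chain:

# Minimal sketch of the finite-rate-of-innovation (FRI) idea behind
# Xampling: K pulse echoes are parameterized by delays t_k and amplitudes
# a_k, so 2K+1 Fourier coefficients of the received signal suffice to
# recover them via an annihilating filter, regardless of pulse bandwidth.
# All values below are illustrative, not the prototype's.
import numpy as np

K = 2                                  # number of targets (echoes)
T = 1.0                                # repetition interval (normalized)
t_true = np.array([0.20, 0.55])        # true delays
a_true = np.array([1.00, 0.70])        # true amplitudes

# Low-rate measurements: Fourier coefficients X[m] = sum_k a_k e^{-j2πm t_k/T}.
m = np.arange(2 * K + 1)
X = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true) / T)).sum(axis=1)

# Annihilating filter h (h[0] = 1): sum_l h[l] X[m-l] = 0 for m = K..2K-1.
A = np.array([[X[1], X[0]],
              [X[2], X[1]]])
b = -np.array([X[2], X[3]])
h = np.concatenate(([1.0], np.linalg.solve(A, b)))

# Roots of the filter encode the delays: u_k = exp(-j2π t_k / T).
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) * T / (2 * np.pi), T))

# Amplitudes follow from a small Vandermonde least-squares problem.
V = np.exp(-2j * np.pi * np.outer(m, t_est) / T)
a_est = np.real(np.linalg.lstsq(V, X, rcond=None)[0])

print("estimated delays:    ", np.round(t_est, 3))
print("estimated amplitudes:", np.round(a_est, 3))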
Sparsity Order Estimation and its Application in Compressive Spectrum Sensing for Cognitive Radios
"... Abstract—Compressive sampling techniques can effectively reduce the acquisition costs of high-dimensional signals by utilizing the fact that typical signals of interest are often sparse in a certain domain. For compressive samplers, the number of samples Mr needed to reconstruct a sparse signal is d ..."
Abstract
-
Cited by 4 (0 self)
Abstract—Compressive sampling techniques can effectively reduce the acquisition costs of high-dimensional signals by utilizing the fact that typical signals of interest are often sparse in a certain domain. For compressive samplers, the number of samples Mr needed to reconstruct a sparse signal is determined by the actual sparsity order Snz of the signal, which can be much smaller than the signal dimension N. However, Snz is often unknown or dynamically varying in practice, and the practical sampling rate has to be chosen conservatively according to an upper bound Smax of the actual sparsity order in lieu of Snz, which can be unnecessarily high. To circumvent such wastage of sampling resources, this paper introduces the concept of sparsity order estimation, which aims to accurately acquire Snz prior to sparse signal recovery by using a very small number of samples Me, less than Mr. A statistical learning methodology is used to quantify ...
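To make the wasted-samples argument concrete, a back-of-the-envelope Python calculation using the common compressive-sensing rule of thumb M ≈ c·S·log(N/S); the constant c and all numbers are illustrative assumptions, and this is not the paper's learning-based estimator:

# Back-of-the-envelope illustration of why a conservative sparsity bound
# wastes samples. A common compressive-sensing rule of thumb puts the
# number of measurements needed at roughly M ~ c * S * log(N / S); the
# constant c and the numbers below are illustrative, and this is not the
# statistical-learning estimator proposed in the paper.
import math

N = 1000          # signal dimension
S_nz = 10         # actual (unknown) sparsity order
S_max = 50        # conservative upper bound used without order estimation
c = 2.0           # illustrative oversampling constant

M_actual = math.ceil(c * S_nz * math.log(N / S_nz))
M_conservative = math.ceil(c * S_max * math.log(N / S_max))

print(f"measurements sized for S_max = {S_max}: {M_conservative}")
print(f"measurements sized for S_nz  = {S_nz}: {M_actual}")
print(f"wasted fraction: {1 - M_actual / M_conservative:.0%}")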
Why Analog-to-Information Converters Suffer in High-Bandwidth Sparse Signal Applications
- IEEE Transactions on Circuits and Systems I: Regular Papers, 2013
"... Abstract—In applications where signal frequencies are high, but information bandwidths are low, analog-to-information converters (AICs) have been proposed as a potential solution to overcome the resolution and performance limitations of high-speed analog-to-digital converters (ADCs). However, the ha ..."
Abstract
-
Cited by 2 (1 self)
Abstract—In applications where signal frequencies are high but information bandwidths are low, analog-to-information converters (AICs) have been proposed as a potential solution to overcome the resolution and performance limitations of high-speed analog-to-digital converters (ADCs). However, the hardware implementation of such systems has yet to be evaluated. This paper aims to fill this gap by evaluating the impact of circuit impairments on the performance limitations and energy cost of AICs. We point out that although the AIC architecture facilitates slower ADCs, the signal encoding, typically realized with a mixer-like circuit, still occurs at the Nyquist frequency of the input to avoid aliasing. We illustrate that the jitter and aperture of this mixing stage limit the achievable AIC resolution. To do so, we designed an end-to-end system evaluation framework for examining these limitations, as well as the relative energy efficiency of AICs versus high-speed ADCs across resolution, receiver gain, and signal sparsity. The evaluation shows that the currently proposed AICs have no performance benefits over high-speed ADCs. However, AICs enable 2-10X energy savings in low to moderate resolution (ENOB), low gain applications.
Index Terms—Analog-to-digital converter (ADC), analog-to-information converter (AIC), compressed sensing (CS).
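The jitter limitation can be made concrete with the standard aperture-jitter bound SNR = 20·log10(1/(2π·f_in·σ_jitter)); the input frequencies and jitter values in this short Python sketch are illustrative assumptions, not figures from the paper:

# Why the Nyquist-rate mixing stage still limits AIC resolution: the
# standard aperture/clock-jitter bound SNR = 20*log10(1 / (2*pi*f*sigma))
# applies to the encoding clock even though the back-end ADC runs slowly.
# Frequencies and jitter values below are illustrative.
import math

def jitter_limited_enob(f_in_hz, sigma_jitter_s):
    """ENOB ceiling set by sampling/mixing clock jitter at input frequency f_in."""
    snr_db = 20.0 * math.log10(1.0 / (2.0 * math.pi * f_in_hz * sigma_jitter_s))
    return (snr_db - 1.76) / 6.02

for f_in in (100e6, 1e9, 5e9):               # input (Nyquist-band) frequencies
    for jitter in (1e-12, 100e-15):          # 1 ps and 100 fs rms clock jitter
        print(f"f_in = {f_in / 1e9:4.1f} GHz, jitter = {jitter * 1e15:5.0f} fs rms "
              f"-> ENOB ceiling ~ {jitter_limited_enob(f_in, jitter):4.1f} bits")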
A Sub-Nyquist Rate Compressive Sensing Data Acquisition Front-End
- 2012
"... This paper presents a sub-Nyquist rate data acquisition front-end based on compressive sensing theory. The front-end randomizes a sparse input signal by mixing it with pseudo-random number sequences, followed by analog-to-digital converter sampling at sub-Nyquist rate. The signal is then recon-stru ..."
Abstract
-
Cited by 2 (1 self)
This paper presents a sub-Nyquist rate data acquisition front-end based on compressive sensing theory. The front-end randomizes a sparse input signal by mixing it with pseudo-random number sequences, followed by analog-to-digital converter sampling at a sub-Nyquist rate. The signal is then reconstructed using an L1-based optimization algorithm that exploits the signal sparsity to reconstruct the signal with high fidelity. The reconstruction is based on a priori signal model information, such as a multi-tone frequency-sparse model which matches the input signal frequency support. Wideband multi-tone test signals with 4% sparsity in the 5-500 MHz band were used to experimentally verify the front-end performance. Single-tone and multi-tone tests show maximum signal-to-noise-and-distortion ratios of 40 dB and 30 dB, respectively, with an equivalent sampling rate of 1 GS/s. The analog front-end was fabricated in a 90 nm complementary metal-oxide-semiconductor process and consumes 55 mW. The front-end core occupies 0.93 mm².
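A hedged software sketch of this style of front-end: pseudo-random ±1 chipping followed by integrate-and-dump at a sub-Nyquist rate, with the L1-based recovery done here by plain ISTA over a DCT sparsity basis. The sizes, basis choice, and solver are assumptions for illustration, not the fabricated chip's parameters or its reconstruction software:

# Sketch of a CS acquisition front-end of the kind described above:
# the input is mixed with a pseudo-random +/-1 chipping sequence and then
# integrated-and-dumped at a sub-Nyquist rate; recovery exploits frequency
# sparsity via an L1-regularized fit (plain ISTA here). Signal model,
# sizes, and the DCT sparsity basis are illustrative simplifications.
import numpy as np

rng = np.random.default_rng(1)

N, M = 256, 64                      # Nyquist-rate samples vs. sub-Nyquist samples

# Orthonormal DCT-II basis (rows are basis vectors); synthesis is x = C.T @ s.
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(np.arange(N), n + 0.5) / N)
C[0, :] = np.sqrt(1.0 / N)

# Frequency-sparse test input: a few active DCT coefficients (~4% sparsity).
s_true = np.zeros(N)
support = rng.choice(N, size=10, replace=False)
s_true[support] = rng.normal(0.0, 1.0, size=10)
x = C.T @ s_true

# Front-end: chip with a +/-1 PN sequence, then sum blocks of N // M samples
# (integrate-and-dump running at 1/4 of the Nyquist rate in this example).
pn = rng.choice([-1.0, 1.0], size=N)
Phi = np.zeros((M, N))
L = N // M
for i in range(M):
    Phi[i, i * L:(i + 1) * L] = pn[i * L:(i + 1) * L]
y = Phi @ x                          # what the sub-Nyquist ADC actually digitizes

# Receiver-side sparse recovery: ISTA on min_s 0.5*||y - A s||^2 + lam*||s||_1.
A = Phi @ C.T
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
s_hat = np.zeros(N)
for _ in range(500):
    r = s_hat + step * A.T @ (y - A @ s_hat)
    s_hat = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)

x_hat = C.T @ s_hat
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.3f}")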
Digital-Assisted Asynchronous Compressive Sensing Front-End
"... Abstract—Compressive sensing (CS) is a promising technique that enables sub-Nyquist sampling, while still guaranteeing the re-liable signal recovery. However, existing mixed-signal CS front-end implementation schemes often suffer from high power con-sumption and nonlinearity. This paper presents a d ..."
Abstract
-
Cited by 1 (1 self)
Abstract—Compressive sensing (CS) is a promising technique that enables sub-Nyquist sampling while still guaranteeing reliable signal recovery. However, existing mixed-signal CS front-end implementation schemes often suffer from high power consumption and nonlinearity. This paper presents a digital-assisted asynchronous compressive sensing (DACS) front-end which offers lower power and higher reconstruction performance relative to conventional CS-based approaches. The front-end architecture leverages a continuous-time ternary encoding scheme which modulates amplitude variation to ternary timing information. Power is optimized by employing digital-assisted modules in the front-end circuit and a part-time operation strategy for high-power modules. An -member Group-based Total Variation (-GTV) algorithm is proposed for the sparse reconstruction of piecewise-constant signals. By including both the inter-group and intra-group total variation, the -GTV scheme outperforms conventional TV-based methods in terms of faster convergence rate and better sparse reconstruction performance. Analyses and simulations with a typical ECG recording system confirm that the proposed DACS front-end outperforms a conventional CS-based front-end using a random demodulator in terms of lower power consumption, higher recovery performance, and more system flexibility.
Index Terms—Asynchronous architecture, compressive sensing (CS), continuous-time ternary encoding, digital-assisted front-end, part-time randomization, total variation.
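A toy, discrete-time caricature of the ternary-encoding idea (amplitude changes mapped to +1/0/-1 events); the threshold, test signal, and simple staircase decoder are assumptions made for illustration, not the paper's asynchronous circuit or its group-based total-variation reconstruction:

# Toy, discrete-time illustration of ternary encoding: instead of
# Nyquist-rate amplitude samples, the front-end emits a +1/0/-1 symbol
# whenever the input rises or falls by one quantization step (delta).
# A real asynchronous encoder emits events at the exact crossing instants;
# this per-sample toy may need several samples to track a large jump.
import numpy as np

def ternary_encode(x, delta):
    """Emit +1/-1 when x moves one step up/down from the tracked level, else 0."""
    symbols = np.zeros(len(x), dtype=int)
    level = x[0]
    for i, sample in enumerate(x):
        if sample >= level + delta:
            symbols[i], level = +1, level + delta
        elif sample <= level - delta:
            symbols[i], level = -1, level - delta
    return symbols

def ternary_decode(symbols, x0, delta):
    """Rebuild a staircase approximation by accumulating the ternary events."""
    return x0 + delta * np.cumsum(symbols)

# Piecewise-constant test signal (e.g. an idealized baseline-hopping trace).
x = np.concatenate([np.full(50, 0.0), np.full(50, 1.0), np.full(50, 0.4)])
delta = 0.1

sym = ternary_encode(x, delta)
x_hat = ternary_decode(sym, x[0], delta)

print(f"nonzero ternary events: {np.count_nonzero(sym)} of {len(x)} samples")
print(f"error after settling:   {np.abs(x - x_hat)[-10:].max():.3f}")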