Results 1-10 of 11
ℓ1 Trend Filtering
, 2007
"... The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on HodrickPrescott (HP) filtering, a widely used method for trend estimation. The proposed ℓ1 trend filtering method substitutes a sum of absolute values (i.e., ..."
Abstract

Cited by 18 (6 self)
The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on Hodrick-Prescott (HP) filtering, a widely used method for trend estimation. The proposed ℓ1 trend filtering method substitutes a sum of absolute values (i.e., an ℓ1-norm) for the sum of squares used in HP filtering to penalize variations in the estimated trend. The ℓ1 trend filtering method produces trend estimates that are piecewise linear, and therefore is well suited to analyzing time series with an underlying piecewise linear trend. The kinks, knots, or changes in slope, of the estimated trend can be interpreted as abrupt changes or events in the underlying dynamics of the time series. Using specialized interior-point methods, ℓ1 trend filtering can be carried out with not much more effort than HP filtering; in particular, the number of arithmetic operations required grows linearly with the number of data points. We describe the method and some of its basic properties, and give some illustrative examples. We show how the method is related to ℓ1-regularization-based methods in sparse signal recovery and feature selection, and list some extensions of the basic method.
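The substitution described in this abstract can be written out concretely. HP filtering estimates a trend $x$ from data $y$ by solving

```latex
\min_x \; \tfrac{1}{2}\sum_{t=1}^{n}(y_t - x_t)^2
  \;+\; \lambda \sum_{t=2}^{n-1}\bigl(x_{t-1} - 2x_t + x_{t+1}\bigr)^2,
```

while ℓ1 trend filtering replaces the quadratic second-difference penalty with an absolute-value one,

```latex
\min_x \; \tfrac{1}{2}\sum_{t=1}^{n}(y_t - x_t)^2
  \;+\; \lambda \sum_{t=2}^{n-1}\bigl|x_{t-1} - 2x_t + x_{t+1}\bigr|,
```

whose solutions have exactly-zero second differences over long stretches, i.e., piecewise-linear trends, with the kinks appearing where the absolute-value terms are active.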
Compressed sensing with quantized measurements
, 2010
"... We consider the problem of estimating a sparse signal from a set of quantized, Gaussian noise corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function p ..."
Abstract

Cited by 17 (0 self)
We consider the problem of estimating a sparse signal from a set of quantized, Gaussian noise corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first-order method developed by Hale et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
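One form consistent with this abstract (the symbols are ours, not necessarily the paper's): if measurement $i$ reports that $a_i^T x + v_i$ fell in the interval $[l_i, u_i]$, with $v_i \sim \mathcal{N}(0, \sigma^2)$, then the penalized negative log-likelihood problem is

```latex
\min_x \;\; -\sum_{i=1}^{m} \log\!\left(
    \Phi\!\left(\frac{u_i - a_i^T x}{\sigma}\right)
  - \Phi\!\left(\frac{l_i - a_i^T x}{\sigma}\right)
\right) \;+\; \lambda \|x\|_1,
```

where $\Phi$ is the standard normal CDF. The first term is differentiable and convex in $x$ (the Gaussian measure of an interval is log-concave in its endpoints), matching the abstract's "differentiable convex function plus an ℓ1 regularization term."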
Reducing the risk of query expansion via robust constrained optimization
 Proceedings of the Eighteenth International Conference on Information and Knowledge Management (CIKM 2009). ACM. Hong
"... We introduce a new theoretical derivation, evaluation methods, and extensive empirical analysis for an automatic query expansion framework in which model estimation is cast as a robust constrained optimization problem. This framework provides a powerful method for modeling and solving complex expans ..."
Abstract

Cited by 13 (5 self)
We introduce a new theoretical derivation, evaluation methods, and extensive empirical analysis for an automatic query expansion framework in which model estimation is cast as a robust constrained optimization problem. This framework provides a powerful method for modeling and solving complex expansion problems, by allowing multiple sources of domain knowledge or evidence to be encoded as simultaneous optimization constraints. Our robust optimization approach provides a clean theoretical way to model not only expansion benefit, but also expansion risk, by optimizing over uncertainty sets for the data. In addition, we introduce risk-reward curves to visualize expansion algorithm performance and analyze parameter sensitivity. We show that a robust approach significantly reduces the number and magnitude of expansion failures for a strong baseline algorithm, with no loss in average gain. Our approach is implemented as a highly efficient post-processing step that assumes little about the baseline expansion method used as input, making it easy to apply to existing expansion methods. We provide analysis showing that this approach is a natural and effective way to do selective expansion, automatically reducing or avoiding expansion in risky scenarios, and successfully attenuating noise in poor baseline methods.
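The robust formulation this abstract sketches has the generic max-min shape (notation ours, purely illustrative, not the paper's exact model): choose expansion weights $w$ maximizing the worst-case benefit over an uncertainty set $\mathcal{U}$ of plausible data,

```latex
\max_{w \in \mathcal{W}} \;\; \min_{u \in \mathcal{U}} \;\; f(w;\, u),
```

where the constraint set $\mathcal{W}$ encodes domain knowledge about admissible expansions. Optimizing against the worst case in $\mathcal{U}$, rather than a single point estimate, is what controls expansion risk as well as average gain.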
Probabilistic Management of OCR Data using an RDBMS
, 2011
"... The digitization of scanned forms and documents is changing the data sources that enterprises manage. To integrate these new data sources with enterprise data, the current stateoftheart approach is to convert the images to ASCII text using optical character recognition (OCR) software and then to s ..."
Abstract

Cited by 4 (0 self)
The digitization of scanned forms and documents is changing the data sources that enterprises manage. To integrate these new data sources with enterprise data, the current state-of-the-art approach is to convert the images to ASCII text using optical character recognition (OCR) software and then to store the resulting ASCII text in a relational database. The OCR problem is challenging, and so the output of OCR often contains errors. In turn, queries on the output of OCR may fail to retrieve relevant answers. State-of-the-art OCR programs, e.g., the OCR powering Google Books, use a probabilistic model that captures many alternatives during the OCR process. Only when the results of OCR are stored in the database do these approaches discard the uncertainty. In this work, we propose to retain the probabilistic models produced by the OCR process in a relational database management system. A key technical challenge is that the probabilistic data produced by OCR software is very large (a single book blows up to 2GB from 400kB as ASCII). As a result, a baseline solution that integrates these models with an RDBMS is over 1000x slower than standard text processing for single-table select-project queries. However, many applications may have quality-performance needs that are in between these two extremes of ASCII and the complete model output by the OCR software. Thus, we propose a novel approximation scheme called Staccato that allows a user to trade recall for query performance. Additionally, we provide a formal analysis of our scheme's properties, and describe how we integrate our scheme with standard RDBMS text indexing.
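As a toy illustration of keeping OCR uncertainty queryable (this is not the Staccato scheme; the field layout, function name, and per-position independence assumption are all ours), one can store alternative characters with probabilities at each position and score query terms against them:

```python
def term_probability(ocr_field, term):
    """Probability that the OCR'd field equals `term`, assuming each
    character position is independent (a simplification; real OCR
    models are transducers with richer structure)."""
    if len(ocr_field) != len(term):
        return 0.0
    p = 1.0
    for alternatives, ch in zip(ocr_field, term):
        # Missing alternatives get probability zero.
        p *= alternatives.get(ch, 0.0)
    return p

# Each position maps alternative characters to their OCR probabilities.
field = [
    {"c": 0.9, "e": 0.1},
    {"a": 0.7, "o": 0.3},
    {"t": 1.0},
]

# The top-1 ASCII output would be "cat"; discarding uncertainty at
# load time makes a query for "cot" a hard miss, while the retained
# model still assigns it a nonzero score.
print(term_probability(field, "cat"))   # ~0.63
print(term_probability(field, "cot"))   # ~0.27
print(term_probability(field, "dog"))   # 0.0
```

Retaining alternatives is exactly what blows up storage (the 400kB-to-2GB figure above), which is the tradeoff Staccato's approximation targets.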
Fault Identification via Nonparametric Belief Propagation (submitted to IEEE Trans. on Signal Processing)
"... Abstract—We consider the problem of identifying a pattern of faults from a set of noisy linear measurements. Unfortunately, maximum a posteriori probability estimation of the fault pattern is computationally intractable. To solve the fault identification problem, we propose a nonparametric belief p ..."
Abstract
We consider the problem of identifying a pattern of faults from a set of noisy linear measurements. Unfortunately, maximum a posteriori probability estimation of the fault pattern is computationally intractable. To solve the fault identification problem, we propose a nonparametric belief propagation approach. We show empirically that our belief propagation solver is more accurate than recent state-of-the-art algorithms, including interior-point methods and semidefinite programming. Our superior performance is explained by the fact that we take into account both the binary nature of the individual faults and the sparsity of the fault pattern arising from their rarity. Index Terms—compressed sensing, fault identification, message passing, nonparametric belief propagation, stochastic approximation.
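Nonparametric belief propagation itself is too involved for a snippet, but the problem it approximates can be solved exactly on tiny instances (the model, matrix, and numbers below are ours, for illustration): MAP estimation of a binary, sparse fault pattern from linear measurements by brute-force enumeration. Enumeration costs $2^n$ evaluations, which is the intractability the paper's solver is designed to avoid.

```python
import itertools
import math

def map_fault_pattern(A, y, sigma, p):
    """Exact MAP over binary fault patterns x in {0,1}^n by enumeration.
    Model: y = A x + N(0, sigma^2 I), faults i.i.d. with P(x_i = 1) = p.
    Tractable only for small n; belief propagation targets large n."""
    n = len(A[0])
    best, best_obj = None, float("inf")
    for x in itertools.product((0, 1), repeat=n):
        # Squared residual of the linear measurements.
        r2 = sum((yi - sum(aij * xj for aij, xj in zip(row, x))) ** 2
                 for row, yi in zip(A, y))
        # Negative log-prior: rewards sparse patterns when p < 0.5.
        prior = sum(-math.log(p) if xi else -math.log(1 - p) for xi in x)
        obj = r2 / (2 * sigma ** 2) + prior
        if obj < best_obj:
            best, best_obj = x, obj
    return best

# Tiny noiseless demo: A contains an identity block, so it has full
# column rank and the true pattern is the unique minimizer.
n = 6
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
A.append([1.0] * n)  # one aggregate measurement on top
x_true = (1, 0, 0, 1, 0, 0)
y = [sum(aij * xj for aij, xj in zip(row, x_true)) for row in A]
print(map_fault_pattern(A, y, sigma=0.1, p=0.1))  # (1, 0, 0, 1, 0, 0)
```

Both modeling ingredients the abstract credits for accuracy appear here explicitly: the enumeration ranges only over binary vectors, and the prior term penalizes non-sparse patterns.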
Real-Time Convex Optimization … Recent advances that make it easier to design and implement algorithms
, 2010
"... Convex optimization has been used in signal processing for a long time to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total va ..."
Abstract
Convex optimization has been used in signal processing for a long time to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total variation denoising, compressed sensing, fault detection, and image classification. In both scenarios, the optimization is carried out on time scales of seconds or minutes and without strict time constraints. Convex optimization has traditionally been considered computationally expensive, so its use has been limited to applications where plenty of time is available. Such restrictions are no longer justified. The combination of dramatically increased computing power, modern algorithms, and new coding approaches has delivered an enormous speed increase, which makes it possible to solve modest-sized convex optimization problems on microsecond or millisecond time scales and with strict deadlines. This enables real-time convex optimization in signal processing.
Mixed State Estimation for a Linear Gaussian Markov Model
"... Abstract — We consider a discretetime dynamical system with Boolean and continuous states, with the continuous state propagating linearly in the continuous and Boolean state variables, and an additive Gaussian process noise, and where each Boolean state component follows a simple Markov chain. This ..."
Abstract
We consider a discrete-time dynamical system with Boolean and continuous states, with the continuous state propagating linearly in the continuous and Boolean state variables, with an additive Gaussian process noise, and where each Boolean state component follows a simple Markov chain. This model, which can be considered a hybrid or jump-linear system of very special form, or a standard linear Gauss-Markov dynamical system driven by a Boolean Markov process, arises in dynamic fault detection, in which each Boolean state component represents a fault that can occur. We address the problem of estimating the state, given Gaussian noise corrupted linear measurements. Computing the exact maximum a posteriori (MAP) estimate entails solving a mixed-integer quadratic program, which is computationally difficult in general, so we propose an approximate MAP scheme, based on a convex relaxation, followed by rounding and (possibly) further local optimization. Our method has a complexity that grows linearly in the time horizon and cubically in the state dimension, the same as a standard Kalman filter. Numerical experiments suggest that it performs very well in practice.
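Schematically (our notation, not the paper's exact statement), the relaxation this abstract describes replaces the Boolean constraint in the MAP problem

```latex
\begin{aligned}
\text{minimize}\quad & J(x_{1:T},\, s_{1:T})
  \quad \text{(a convex quadratic in the states)}\\
\text{subject to}\quad & s_t \in \{0,1\}^m, \quad t = 1,\dots,T,
\end{aligned}
```

with the box constraint $s_t \in [0,1]^m$. The relaxed problem is convex and exploits the time structure, so it can be solved in time linear in $T$; the fractional $s_t$ are then rounded to Boolean values and, optionally, polished by local optimization, giving an approximate rather than exact MAP estimate.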
Mixed Linear System Estimation and Identification
"... Abstract — We consider a mixed linear system model, with both continuous and discrete inputs and outputs, described by a coefficient matrix and a set of noise variances. When the discrete inputs and outputs are absent, the model reduces to the usual noisecorrupted linear system. With discrete input ..."
Abstract
We consider a mixed linear system model, with both continuous and discrete inputs and outputs, described by a coefficient matrix and a set of noise variances. When the discrete inputs and outputs are absent, the model reduces to the usual noise-corrupted linear system. With discrete inputs only, the model has been used in fault estimation, and with discrete outputs only, the system reduces to a probit model. We consider two fundamental problems: estimating the model input, given the model parameters and the model output; and identifying the model parameters, given a training set of input-output pairs. The estimation problem leads to a mixed Boolean-convex optimization problem, which can be solved exactly when the number of discrete variables is small enough. In other cases the estimation problem can be solved approximately, by solving a convex relaxation, rounding, and possibly carrying out a local optimization step. The identification problem is convex and so can be solved exactly. Adding ℓ1 regularization to the identification problem allows us to trade off model fit and model parsimony. We illustrate the identification and estimation methods with a numerical example.
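The fit-versus-parsimony tradeoff that ℓ1 regularization creates can be seen in closed form in the simplest special case (our illustration, not the paper's identification method): with an orthonormal design, the ℓ1-regularized least-squares solution is just soft-thresholding of the unregularized one.

```python
def soft(v, lam):
    """Soft-thresholding: the proximal operator of lam * |.|"""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

# With the identity design, min 0.5*||y - theta||^2 + lam*||theta||_1
# is solved coordinatewise by theta_i = soft(y_i, lam). Larger lam
# drives small coefficients exactly to zero (parsimony) at the cost
# of extra shrinkage on the large ones (fit).
y = [3.0, -0.2, 0.05, -1.5]
for lam in (0.0, 0.1, 1.0):
    theta = [soft(v, lam) for v in y]
    print(lam, theta)  # lam = 1.0 zeroes the two small coefficients
```

Sweeping `lam` traces out exactly the fit/parsimony tradeoff curve the abstract mentions; with a general (non-orthonormal) design the same problem has no closed form but remains convex.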
FrA11.6
, 1012
"... Abstract — This paper demonstrates a novel optimizationbased approach to estimating fault states in a DC power system. The model includes faults changing the circuit topology along with sensor faults. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear ..."
Abstract
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. The model includes faults changing the circuit topology along with sensor faults. Our approach can be considered a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed at NASA. Accurate estimates of multiple faults are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
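A schematic version of the convex problem this abstract describes (symbols ours, not the paper's): with hidden circuit states $x$, fault vector $f$, a linear circuit model given by matrices $H$ and $B$, and measurements $y$, solve

```latex
\min_{x,\, f} \;\; \|y - Hx - Bf\|_2^2 \;+\; \lambda \|f\|_1 ,
```

where the ℓ1 penalty drives most entries of $f$ exactly to zero, encoding the prior that only a few faults are active at once; the small size and convexity of such problems is what makes millisecond-scale solves feasible.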