
## Non-convex sparse optimization through deterministic annealing and applications (2008)

Venue: IEEE Int. Conf. on Image Processing

Citations: 4 (1 self)

### Citations

2682 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998
Citation Context: ...e.g., [3, 4]). Also, if we minimize the sum of the absolute values of the coefficients (ℓ1-norm) instead of the number of active vectors (ℓ0-norm), then the optimization problem becomes convex (e.g., [5, 6]). Finally, iterative algorithms have been proposed, both for convex relaxation (as iterative soft thresholding method, IST, e.g. [7, 8]) and using hard thresholding (iterative hard thresholding meth...

618 | Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition
- PATI, REZAIIFAR, et al.
- 1993
Citation Context: ...inatorial search. Some authors have explored more tractable variants. Greedy methods approximate the image by incrementally selecting those vectors best describing the part not yet represented (e.g., [3, 4]). Also, if we minimize the sum of the absolute values of the coefficients (ℓ1-norm) instead of the number of active vectors (ℓ0-norm), then the optimization problem becomes convex (e.g., [5, 6]). Fin...

347 | An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
- 2003
Citation Context: ...(ℓ0-norm), then the optimization problem becomes convex (e.g., [5, 6]). Finally, iterative algorithms have been proposed, both for convex relaxation (as iterative soft thresholding method, IST, e.g. [7, 8]) and using hard thresholding (iterative hard thresholding method, IHT, e.g. [9, 10]). They can be improved by some heuristics, like using decreasing thresholds. Here, we show that it is possible to ...

240 | A new approach to variable selection in least squares problems
- Osborne, Presnell, et al.
Citation Context: ...se approximation problem. We then use a deterministic annealing-like technique, through a homotopy [11] to avoid non-favorable local minima. We end up with a method already used as a heuristic (e.g., [9, 8]) but, up to our knowledge, we are first to propose a theoretically justified derivation. We have...

214 | Sparse representation for color image restoration
- Mairal, Elad, et al.
- 2007
Citation Context: ...hesis. Most degradation sources decrease the sparseness of the wavelet coefficients, and thus we can compensate for part of the degradation by finding sparse approximations to the observations (e.g., [1, 2]). The exact solution to this problem requires a combinatorial search. Some authors have explored more tractable variants. Greedy methods approximate the image by incrementally selecting those vectors...

177 | Matching pursuit in a time-frequency dictionary
- Mallat, Zhang
- 1993
Citation Context: ...inatorial search. Some authors have explored more tractable variants. Greedy methods approximate the image by incrementally selecting those vectors best describing the part not yet represented (e.g., [3, 4]). Also, if we minimize the sum of the absolute values of the coefficients (ℓ1-norm) instead of the number of active vectors (ℓ0-norm), then the optimization problem becomes convex (e.g., [5, 6]). Fin...

147 | Analysis versus synthesis in signal priors
- Elad, Milanfar, et al.
- 2007
Citation Context: ...not. Here we differentiate between synthesis-sense sparseness (SsS) and analysis-sense sparseness (AsS). The theoretical properties and the different uses we can give to both concepts are addressed in [15]. Although SsS methods are very powerful, AsS methods are easier to justify from an empirical Bayesian perspective, because they use statistical priors based on linear observations. Recently some auth...

47 | Fast digital image inpainting
- Oliveira, Bowen, et al.
- 2001
Citation Context: ...lem when dealing with digital images. Here we compare the performance of our method with two other ones: EM-inpainting [1], based on convex relaxation of the sparseness condition, and Fast-inpainting [16], a simple but effective PDE-based method. Given a set of indices, I, extracted from {1, ..., N}, and given y = fy(x), where all yi with i ∈ I holds that yi = xi, we define the consistency set, RI(y),...

27 | Overcomplete image coding using iterative projection-based noise shaping
- Reeves, Kingsbury
Citation Context: ...erative algorithms have been proposed, both for convex relaxation (as iterative soft thresholding method, IST, e.g. [7, 8]) and using hard thresholding (iterative hard thresholding method, IHT, e.g. [9, 10]). They can be improved by some heuristics, like using decreasing thresholds. Here, we show that it is possible to conjugate classical optimization methods with competitive results without using conve...
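The contexts above contrast soft thresholding (IST, convex ℓ1 relaxation) with hard thresholding (IHT, ℓ0 penalty). As a minimal sketch of the two shrinkage operators and the generic iteration they plug into (illustrative only, not the cited authors' implementations; names and the step-size choice are assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    # IST shrinkage: shrink magnitudes toward zero by t (prox of the l1-norm).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    # IHT shrinkage: zero out coefficients with magnitude at or below t (l0 penalty).
    return np.where(np.abs(x) > t, x, 0.0)

def iterative_thresholding(Phi, y, t, shrink, n_iter=100, step=None):
    """Generic IST/IHT iteration: x <- shrink(x + step * Phi^T (y - Phi x), t)."""
    if step is None:
        # 1 / ||Phi||_2^2 keeps the gradient step non-expansive.
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = shrink(x + step * Phi.T @ (y - Phi @ x), t)
    return x
```

Passing `shrink=soft_threshold` gives an IST-style iteration, `shrink=hard_threshold` an IHT-style one; both approximate y by a sparse coefficient vector over the dictionary Φ.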

11 | Morphological component analysis
- Starck, Moudden, et al.
- 2005
Citation Context: ...(ℓ0-norm), then the optimization problem becomes convex (e.g., [5, 6]). Finally, iterative algorithms have been proposed, both for convex relaxation (as iterative soft thresholding method, IST, e.g. [7, 8]) and using hard thresholding (iterative hard thresholding method, IHT, e.g. [9, 10]). They can be improved by some heuristics, like using decreasing thresholds. Here, we show that it is possible to ...

9 | An EM algorithm for sparse representation-based image inpainting
- Fadili, Starck
- 2005
Citation Context: ...hesis. Most degradation sources decrease the sparseness of the wavelet coefficients, and thus we can compensate for part of the degradation by finding sparse approximations to the observations (e.g., [1, 2]). The exact solution to this problem requires a combinatorial search. Some authors have explored more tractable variants. Greedy methods approximate the image by incrementally selecting those vectors...

7 | Iterative hard thresholding and l0 regularisation
- Blumensath, Yaghoobi, et al.
- 2007
Citation Context: ...erative algorithms have been proposed, both for convex relaxation (as iterative soft thresholding method, IST, e.g. [7, 8]) and using hard thresholding (iterative hard thresholding method, IHT, e.g. [9, 10]). They can be improved by some heuristics, like using decreasing thresholds. Here, we show that it is possible to conjugate classical optimization methods with competitive results without using conve...

7 | L0-based sparse approximation: two alternative methods and some applications
- Portilla, Mancera
- 2007
Citation Context: ...avorable local minima. We end up with a method already used as a heuristic (e.g., [9, 8]) but, up to our knowledge, we are first to propose a theoretically justified derivation. We have already shown [12] outstanding energy compaction performance of this method. Here we apply it to restoration, obtaining high-performance in-painting results. 2. THE SPARSE APPROXIMATION PROBLEM Let Φ be a N × M matrix ...

2 | Fast sparse representation based on smoothed
- Mohimani, Babaie-Zadeh, et al.
- 2007
Citation Context: ...d version of this consists of doing a single gradient descent step each time, slowly increasing λ^(k) at each iteration. This idea is closely related to other schemes, such as GNC [14]. Alternatively [13] minimizes a continuous function which gets closer and closer to the ℓ0-norm. Figure 1 shows the convergence trajectories of IHT for several thresholds (dashed), and two trajectories (solid and circle...
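The context above describes the annealing/homotopy scheme: a single gradient-descent step per iteration while the threshold λ^(k) slowly increases, so early iterations behave like plain least squares and later ones enforce sparseness. A hedged sketch of such a continuation loop (the linear schedule, function name, and parameters are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def annealed_iht(Phi, y, lam_final, n_iter=200):
    """Single gradient step per iteration, hard-thresholding with a threshold
    that grows from near 0 toward lam_final (a GNC-like continuation)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # non-expansive gradient step size
    x = np.zeros(Phi.shape[1])
    for k in range(n_iter):
        lam_k = lam_final * (k + 1) / n_iter      # slowly increasing threshold
        x = x + step * Phi.T @ (y - Phi @ x)      # one descent step on ||y - Phi x||^2
        x = np.where(np.abs(x) > lam_k, x, 0.0)   # hard threshold at current level
    return x
```

Starting with a small threshold keeps the early iterations close to the convex least-squares landscape, which is the homotopy argument for avoiding the non-favorable local minima of the ℓ0 objective.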

(Figure caption residue: 120 × 120 cropped degraded Barbara, 24.19 dB; Fast-inpainting, 32.71 dB)

1 | Graduated non-convexity, in Visual Reconstruction
- Blake, Zisserman
- 1987
Citation Context: ...zation). A simplified version of this consists of doing a single gradient descent step each time, slowly increasing λ^(k) at each iteration. This idea is closely related to other schemes, such as GNC [14]. Alternatively [13] minimizes a continuous function which gets closer and closer to the ℓ0-norm. Figure 1 shows the convergence trajectories of IHT for several thresholds (dashed), and two trajectori...