
## Projection onto Convex Sets (POCS) Based Signal Reconstruction Framework with an Associated Cost Function (2014)

### Citations

2559 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...42–44] that it is possible to construct the φ matrix from random numbers, which are i.i.d. Gaussian random variables. In this case, the number of measurements should be chosen as cK log(N/K) ≤ M ≪ N [42], [2]. With this choice of the measurement matrix, the optimization problem (17) can be approximated by ℓ1-norm minimization: sp = arg min ‖s‖1 such that θs = y. (18) Instead of solving the origin...
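The ℓ1 recovery problem quoted above (Eq. 18) is often attacked through its unconstrained LASSO form; below is a minimal ISTA sketch of that idea, not the paper's own solver, with the signal length, sparsity, λ, and iteration count all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 96, 8            # signal length, measurements, sparsity (toy values)

# K-sparse ground truth and an i.i.d. Gaussian measurement matrix
# (the sparsifying basis is taken as the identity here for simplicity)
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
theta = rng.standard_normal((M, N)) / np.sqrt(M)
y = theta @ s_true

# ISTA: gradient step on 0.5*||theta s - y||^2, then soft thresholding
# (the proximal operator of lam*||s||_1)
lam = 0.01
L = np.linalg.norm(theta, 2) ** 2        # Lipschitz constant of the gradient
s = np.zeros(N)
for _ in range(500):
    z = s - theta.T @ (theta @ s - y) / L
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.linalg.norm(s - s_true) / np.linalg.norm(s_true))  # small relative error
```

The soft-thresholding step is exactly the ℓ1 proximal map, so each iteration decreases the LASSO objective.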

2232 | Nonlinear total variation based noise removal algorithms - Rudin, Osher, et al. - 1992 |

1640 | Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing
- Mallat, Zhang
- 1993
Citation Context: ...in Table 4. In the last set of experiments, we compared our reconstruction results with 4 well-known CS reconstruction algorithms from the literature: CoSaMP [56], ℓ1-magic [42], Matching Pursuit (MP) [57], and ℓp optimization based CS reconstruction [49] algorithms. In comparison to the ℓp optimization based CS reconstruction algorithm, we used three different values for p: p = [0.8, 1, 1.7]. With p =...
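Matching Pursuit, one of the baselines named in the excerpt, is simple enough to sketch directly. The toy dictionary, sizes, and iteration count below are illustrative assumptions, not the experimental setup of the paper:

```python
import numpy as np

def matching_pursuit(y, D, n_iter=50):
    """Greedy MP: repeatedly pick the unit-norm atom most correlated
    with the residual and subtract its contribution."""
    r = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                  # correlations with unit-norm atoms
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        r -= corr[k] * D[:, k]          # residual shrinks each iteration
    return coeffs, r

rng = np.random.default_rng(1)
M, N = 64, 128
D = rng.standard_normal((M, N))
D /= np.linalg.norm(D, axis=0)          # normalize atoms
x = np.zeros(N); x[[3, 40, 77]] = [1.5, -2.0, 0.8]
y = D @ x

c, r = matching_pursuit(y, D)
print(np.linalg.norm(r))                # residual norm after 50 greedy steps
```

Unlike CoSaMP, plain MP never revisits earlier coefficient choices, which is why it can need many iterations on coherent dictionaries.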

1477 | Near optimal signal recovery from random projections: Universal encoding strategies?,” - Candès, Tao - 2006 |

1401 | introduction to compressive sampling - Candes, Wakin - 2008 |

747 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples.
- Needell, Tropp
- 2008
Citation Context: ...to the algorithm given in [55], as shown in Table 4. In the last set of experiments, we compared our reconstruction results with 4 well-known CS reconstruction algorithms from the literature: CoSaMP [56], ℓ1-magic [42], Matching Pursuit (MP) [57], and ℓp optimization based CS reconstruction [49] algorithms. In comparison to the ℓp optimization based CS reconstruction algorithm, we used three different...

617 | An algorithm for total variation minimization and applications
- Chambolle
Citation Context: ...al projection according to the location of the corrupted signal as shown in Fig. 2. The denoising solution w* has the lowest total variation on the line [v, w*]. In current TV-based denoising methods [40,41] the following cost function is used: min ‖v − w‖₂² + λTV(w). (13) The solution of this problem can be obtained using the method that we discussed in Section 2. One problem with this approach is the est...
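A rough illustration of the cost in Eq. (13): the sketch below minimizes a smoothed surrogate of the 1-D TV term by plain gradient descent. The ε-smoothing, step size, and λ are ad hoc choices for the demo, not the algorithms of [40,41]:

```python
import numpy as np

def tv_denoise_1d(v, lam=0.5, eps=1e-4, step=0.005, n_iter=5000):
    """Gradient descent on ||v - w||_2^2 + lam * sum_i sqrt((w[i+1]-w[i])^2 + eps),
    a differentiable surrogate of the TV cost in Eq. (13)."""
    w = v.copy()
    for _ in range(n_iter):
        d = np.diff(w)
        t = d / np.sqrt(d * d + eps)     # derivative of the smoothed |d_i|
        g_tv = np.concatenate(([-t[0]], t[:-1] - t[1:], [t[-1]]))
        w -= step * (2.0 * (w - v) + lam * g_tv)
    return w

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(100)
den = tv_denoise_1d(noisy)
# MSE of the denoised signal vs. MSE of the noisy input
print(np.mean((den - clean) ** 2), np.mean((noisy - clean) ** 2))
```

On piecewise-constant signals the TV penalty suppresses noise while keeping the jump, which is exactly the behavior the excerpt relies on.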

477 | The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming
- Bregman
- 1967
Citation Context: ...t is shown that it is possible to use a convex cost function in this framework [1–5]. Bregman developed iterative methods based on the so-called Bregman distance to solve convex optimization problems [6]. In Bregman's approach, it is necessary to perform a D-projection (or Bregman projection) at each step of the algorithm, and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In...

257 | Proximal splitting methods in signal processing. Fixed-Point Algorithms for
- Combettes, Pesquet
- 2011
Citation Context: ...al projection according to the location of the corrupted signal as shown in Fig. 2. The denoising solution w* has the lowest total variation on the line [v, w*]. In current TV-based denoising methods [40,41] the following cost function is used: min ‖v − w‖₂² + λTV(w). (13) The solution of this problem can be obtained using the method that we discussed in Section 2. One problem with this approach is the est...

237 | Compressed sensing, IEEE Transactions on Information Theory - Donoho - 2006 |

185 | Iteratively reweighted algorithms for compressive sensing
- Chartrand, Yin
- 2008
Citation Context: ...pared our reconstruction results with 4 well-known CS reconstruction algorithms from the literature: CoSaMP [56], ℓ1-magic [42], Matching Pursuit (MP) [57], and ℓp optimization based CS reconstruction [49] algorithms. In comparison to the ℓp optimization based CS reconstruction algorithm, we used three different values for p: p = [0.8, 1, 1.7]. With p = 1, the algorithm solves the problem given in (18), ...

183 | Exact reconstruction of sparse signals via nonconvex minimization
- Chartrand
- 2007
Citation Context: ...Relevance Vector Machines (RVM). Some researchers replaced the objective function of the CS optimization in (17), (18) with a new objective function to solve the sparse signal reconstruction problem [47, 48]. One popular approach is replacing the ℓ0 norm with the ℓp norm, where p ∈ (0, 1), or even with a mix of two different norms as in [47–50]. However, in these cases, the resulting optimization problems are n...
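A common concrete instance of the ℓp (p < 1) replacement mentioned in the excerpt is iteratively reweighted least squares. The sketch below follows the general reweighting idea popularized by Chartrand and Yin; the dimensions and the ε annealing schedule are illustrative assumptions:

```python
import numpy as np

def irls_lp(theta, y, p=0.8, n_iter=60):
    """IRLS sketch for min ||s||_p^p subject to theta @ s = y (0 < p <= 1):
    each step solves a weighted minimum-norm problem with weights ~ |s_i|^(2-p)."""
    s = np.linalg.lstsq(theta, y, rcond=None)[0]   # minimum-l2 initialization
    eps = 1.0
    for _ in range(n_iter):
        w = (s * s + eps) ** (1.0 - p / 2.0)       # smoothed weights |s_i|^(2-p)
        WT = w[:, None] * theta.T                  # W @ theta.T
        s = WT @ np.linalg.solve(theta @ WT, y)    # weighted min-norm solution
        eps = max(eps * 0.5, 1e-10)                # anneal the smoothing
    return s

rng = np.random.default_rng(3)
N, M, K = 100, 40, 5
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
theta = rng.standard_normal((M, N)) / np.sqrt(M)
y = theta @ s_true

s_hat = irls_lp(theta, y, p=0.8)
print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

Because each subproblem is a least-squares solve, nonconvexity shows up only through the weights; the ε schedule is what keeps the iteration from stalling at a poor local minimum.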

167 | Image restoration by the method of convex projections: Part 1—Theory
- Youla, Webb
- 1982
Citation Context: ...an projection) at each step of the algorithm and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In this article Bregman's older projections onto convex sets (POCS) framework [9,10] is used to solve convex optimization problems instead of the Bregman distance approach. In the ordinary POCS approach the goal is simply to find a vector which is in the intersection of convex sets [...

157 | Linograms in image reconstruction from projections - Edholm, Herman |

124 | The method of projections for finding the common point of convex sets
- Gubin, Polyak, et al.
- 1967
Citation Context: ...vectors of the two sets Cs and Cf. As a result we obtain lim n→∞ w2n = [w* f(w*)]T, (6) where w* is the N-dimensional vector minimizing f(w). The proof of Eq. 6 follows from Bregman's POCS theorem [9,36]. It was generalized to the non-intersecting case by Gubin et al. [13, 36], [37]. Since the two closed and convex sets Cs and Cf are closest to each other at the optimal solution, iterations oscillat...
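The alternating-projection (POCS) iteration the excerpt describes can be illustrated with two simple convex sets that admit closed-form projections. The hyperplane and ball below are arbitrary toy choices, not the paper's Cs and Cf:

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Orthogonal projection onto the hyperplane {x : a.x = b}."""
    return x - (a @ x - b) / (a @ a) * a

def proj_ball(x, c, r):
    """Orthogonal projection onto the ball ||x - c|| <= r."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

a, b = np.array([1.0, 2.0]), 4.0      # hyperplane parameters (toy)
c, r = np.zeros(2), 3.0               # ball parameters (toy); the two sets intersect

x = np.array([10.0, -7.0])
for _ in range(200):                  # POCS: alternate the two projections
    x = proj_ball(proj_hyperplane(x, a, b), c, r)

# x now lies (numerically) in both sets
print(abs(a @ x - b), np.linalg.norm(x - c) <= r + 1e-9)
```

When the sets intersect the iterates converge to a point of the intersection; in the non-intersecting case treated by Gubin et al. they instead oscillate between the two nearest points, which is the behavior the excerpt exploits.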

103 | The foundations of set theoretic estimation
- Combettes
- 1993
Citation Context: ...timal solution case, iterations oscillate between the vectors [w* f(w*)]T and [w* 0]T in R^{N+1} as n tends to infinity. It is possible to increase the speed of convergence by non-orthogonal projections [25]. If the cost function f is not convex and has more than one local minimum, then the corresponding set Cf is not convex in R^{N+1}. In this case iterates may converge to one of the local minima. Conside...

73 | Row-action methods for huge and sparse systems and their applications - Censor - 1981 |

71 | An iterative row-action method for interval convex programming - Censor, Lent - 1981 |

67 | Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
- Yin, Osher, et al.
Citation Context: ...problems [6]. In Bregman's approach, it is necessary to perform a D-projection (or Bregman projection) at each step of the algorithm, and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In this article Bregman's older projections onto convex sets (POCS) framework [9,10] is used to solve convex optimization problems instead of the Bregman distance approach. In the ordinary POCS appr...

58 | Sparse signal recovery using Markov random fields
- Cevher, Duarte, et al.
- 2008
Citation Context: ...signal length). It is important to note that the cusp signal is not sparse; however, since the coefficients in most transform domains are not zero but close to zero, it is compressible [58]. Therefore, the sparsity levels of the test signals are not known exactly beforehand. 6 Conclusion A new denoising method based on the epigraph of the TV function is developed. The solution is obtain...

53 | Image restoration subject to a total variation constraint - Combettes, Pesquet - 2004 |

49 | The Proximal Minimization Algorithm with D-functions. Working paper - Censor, Zenios - 1989 |

49 | Sparsity and persistence: Mixed norms provide simple signal models with dependent coefficients - Kowalski, Torrésani - 2010 |

47 | Low-dimensional models for dimensionality reduction and signal recovery: A geometric perspective - Baraniuk, Cevher, et al. - 2010 |

43 | Image restoration by the method of convex projections: Part 2, applications and numerical results - Sezan, Stark - 1982 |

31 | Signal recovery from wavelet transform maxima - Çetin, Ansari - 1994 |

30 | On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints
- Censor, Chen, et al.
Citation Context: ...n = [w* f(w*)]T, (6) where w* is the N-dimensional vector minimizing f(w). The proof of Eq. 6 follows from Bregman's POCS theorem [9,36]. It was generalized to the non-intersecting case by Gubin et al. [13, 36], [37]. Since the two closed and convex sets Cs and Cf are closest to each other at the optimal solution, iterations oscillate between the vectors [w* f(w*)]T and [w* 0]T in R^{N+1} as n tends to in...

30 | Adaptive learning in a world of projections - Theodoridis, Slavakis, et al. - 2011 |

26 | Finding the common point of convex sets by the method of successive projections
- Bregman
- 1965
Citation Context: ...an projection) at each step of the algorithm and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In this article Bregman's older projections onto convex sets (POCS) framework [9,10] is used to solve convex optimization problems instead of the Bregman distance approach. In the ordinary POCS approach the goal is simply to find a vector which is in the intersection of convex sets [...

25 | On some optimization techniques in image reconstruction from projections - Censor, Herman - 1987 |

23 | Equiripple FIR filter design by the FFT algorithm - Çetin, Gerek, et al. - 1997 |

20 | Block-based compressed sensing of images and video
- Fowler, Mun, et al.
- 2012
Citation Context: ...1.22 Flower 30 11.84 21.97 20.89 Flower 50 7.42 19.00 18.88 Average 30 13.11 23.18 22.84 Average 50 8.69 20.82 20.70 We compared our results with the block-based compressed sensing algorithm given in [55]. Therefore, we divided the image into blocks and reconstructed those blocks individually. Random measurements, which are 30% of the total number of points in the images, are used in tests on both the pro...
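The block-based measurement setup the excerpt describes can be sketched as follows. The 8×8 block size, the shared Gaussian matrix, and the 30% sampling ratio are illustrative assumptions, not necessarily the configuration of [55]:

```python
import numpy as np

def block_cs_measure(img, B=8, ratio=0.3, rng=None):
    """Split img into BxB blocks and apply the same random Gaussian
    measurement matrix to each block (ratio*B*B measurements per block)."""
    if rng is None:
        rng = np.random.default_rng(0)
    M = int(ratio * B * B)
    phi = rng.standard_normal((M, B * B)) / np.sqrt(M)   # shared per-block matrix
    H, W = img.shape
    blocks = (img.reshape(H // B, B, W // B, B)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, B * B))                    # one row per block
    return phi, blocks @ phi.T                           # per-block measurements

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
phi, y = block_cs_measure(img)
print(y.shape)   # (64, 19): 64 blocks of 8x8, int(0.3 * 64) = 19 measurements each
```

Sharing one small φ across blocks keeps memory low and lets each block be reconstructed independently, which is the main appeal of block-based CS.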

19 | Online Kernel-Based Classification Using Adaptive Projection Algorithms - Slavakis, Theodoridis, et al. - 2008 |

17 | The Landweber iteration and projection onto convex sets - Trussell, Civanlar - 1985 |

16 | Bayesian compressive sensing, IEEE Transactions on Signal Processing
- Ji, Xue, et al.
- 2008
Citation Context: ...18) Instead of solving the original CS problem in (17) or (18), several researchers developed methods to reformulate those problems and approximate the solution through the new formulations. For example, in [46], the authors developed a Bayesian framework and solved the CS problem using Relevance Vector Machines (RVM). Some researchers replaced the objective function of the CS optimization in (17), (18) with...

15 | Compressive Sensing [lecture notes], IEEE Signal Processing Magazine
- Baraniuk
- 2007
Citation Context: ...] that it is possible to construct the φ matrix from random numbers, which are i.i.d. Gaussian random variables. In this case, the number of measurements should be chosen as cK log(N/K) ≤ M ≪ N [42], [2]. With this choice of the measurement matrix, the optimization problem (17) can be approximated by ℓ1-norm minimization: sp = arg min ‖s‖1 such that θs = y. (18) Instead of solving the original CS...

13 | Filtered variation method for denoising and sparse signal processing - Kose, Cevher, et al. - 2012 |

12 | An iterative method for the extrapolation of bandlimited functions - Lent, Tuy - 1981 |

11 | Adaptive constrained learning in reproducing kernel hilbert spaces: the robust beamforming case - Slavakis, Theodoridis, et al. - 2009 |

11 | Minimizing the Moreau envelope of nonsmooth convex functions over the fixed point set of certain quasinonexpansive mapping - Yamada, Yukawa, et al. - 2011 |

11 | Combinatorial selection and least absolute shrinkage via the CLASH algorithm
- Kyrillidis, Cevher
- 2012
Citation Context: ...x and has more than one local minimum, then the corresponding set Cf is not convex in R^{N+1}. In this case iterates may converge to one of the local minima. Consider the standard LASSO-based denoising [39]: min (1/2)‖v − w‖₂² + λ‖w‖₁, (7) where v is the corrupted version of w. Since the cost function f(w) = (1/2)‖y − w‖₂² + λ‖w‖₁, (8) is a convex function, the framework introduced in this section can sol...
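The LASSO denoising problem in Eq. (7) has a closed-form minimizer: elementwise soft thresholding, i.e. the proximal operator of the ℓ1 norm. The test vector below is an arbitrary illustration:

```python
import numpy as np

def soft_threshold(v, lam):
    """Closed-form minimizer of (1/2)*||v - w||_2^2 + lam*||w||_1
    (the proximal operator of lam*||.||_1), applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.4, 1.2, -2.5, 0.05])
# entries with |v_i| <= lam become 0; the rest shrink toward 0 by lam
print(soft_threshold(v, 1.0))
```

This is also why the set corresponding to the convex cost (8) behaves well under the projection framework: the projection step reduces to this cheap elementwise map.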

11 | Compressive Sensing for Ultrasound RF Echoes using α-Stable Distribution, Engineering - Achim, Buxton, et al. - 2010 |

9 | Online Adaptive Decision Fusion Framework Based on Entropic Projections onto Convex Sets with Application to Wildfire Detection in Video
- Gunay, Toreyin, et al.
- 2011
Citation Context: ...problems [6]. In Bregman's approach, it is necessary to perform a D-projection (or Bregman projection) at each step of the algorithm, and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In this article Bregman's older projections onto convex sets (POCS) framework [9,10] is used to solve convex optimization problems instead of the Bregman distance approach. In the ordinary POCS appr...

8 | Reconstruction of signals from Fourier transform samples - Çetin - 1989 |

7 | Resolution Enhancement of Low Resolution Wavefields with - Çetin, Özaktaş, et al. - 2003 |

7 | Optimization of Burg’s entropy over linear constraints - Censor, Pierro, et al. - 1991 |

6 | Low-pass filtering of irregularly sampled signals using a set theoretic framework - Kose, Çetin |

5 | Optimization of "log x" entropy over linear equality constraints - Censor, Lent - 1987 |

5 | Convolution based framework for signal recovery and applications - Cetin, Ansari - 1988 |

5 | Conditions for Target Recovery in Spatial Compressive Sensing for MIMO Radar - Rossi, Haimovich, et al. - 2013 |

4 | Algorithmes proximaux pour les problèmes d'optimisation structurés ("Proximal algorithms for structured optimization problems"), 2012. [Online]: http://www.sciencesmaths-paris.fr/upload/Contenu/HM2012/04-combettes.pdf
- Combettes
Citation Context: ...(w*)]T, (6) where w* is the N-dimensional vector minimizing f(w). The proof of Eq. 6 follows from Bregman's POCS theorem [9,36]. It was generalized to the non-intersecting case by Gubin et al. [13, 36], [37]. Since the two closed and convex sets Cs and Cf are closest to each other at the optimal solution, iterations oscillate between the vectors [w* f(w*)]T and [w* 0]T in R^{N+1} as n tends to infinity...

3 | Signal and Image Processing Algorithms Using Interval Convex Programming and Sparsity
- Köse
- 2012
Citation Context: ...problems [6]. In Bregman's approach, it is necessary to perform a D-projection (or Bregman projection) at each step of the algorithm, and it may not be easy to compute the Bregman distance in general [5, 7, 8]. In this article Bregman's older projections onto convex sets (POCS) framework [9,10] is used to solve convex optimization problems instead of the Bregman distance approach. In the ordinary POCS appr...

3 | Compressive sensing using the modified entropy functional - Kose, Gunay, et al. |

2 | Greedy sparse reconstruction of nonnegative signals using symmetric alpha-stable distributions - Tzagkarakis, Tsakalides - 2010 |

1 | Sparsity aware consistent and high precision variable selection
- Rezaii, Tinati, et al.
- 2012
Citation Context: ...Relevance Vector Machines (RVM). Some researchers replaced the objective function of the CS optimization in (17), (18) with a new objective function to solve the sparse signal reconstruction problem [47, 48]. One popular approach is replacing the ℓ0 norm with the ℓp norm, where p ∈ (0, 1), or even with a mix of two different norms as in [47–50]. However, in these cases, the resulting optimization problems are n...

1 | Shrinkage rules for variational minimization problems and applications to analytical ultracentrifugation - Ehler - 2011 |