## Dependent Hierarchical Beta Process for Image Interpolation and Denoising

Citations: 7 (4 self)

### BibTeX

```bibtex
@MISC{Zhou_dependenthierarchical,
  author = {Mingyuan Zhou and Hongxia Yang and Guillermo Sapiro and David Dunson and Lawrence Carin},
  title  = {Dependent Hierarchical Beta Process for Image Interpolation and Denoising},
  year   = {}
}
```

### Abstract

A dependent hierarchical beta process (dHBP) is developed as a prior for data that may be represented in terms of a sparse set of latent features, with covariate-dependent feature usage. The dHBP is applicable to general covariates and data models, imposing that signals with similar covariates are likely to be manifested in terms of similar features. Coupling the dHBP with the Bernoulli process, and upon marginalizing out the dHBP, the model may be interpreted as a covariate-dependent hierarchical Indian buffet process. As applications, we consider interpolation and denoising of an image, with covariates defined by the location of image patches within an image. Two types of noise models are considered: (i) typical white Gaussian noise; and (ii) spiky noise of arbitrary amplitude, distributed uniformly at random. In these examples, the features correspond to the atoms of a dictionary, learned based upon the data under test (without a priori training data). State-of-the-art performance is demonstrated, with efficient inference using hybrid Gibbs, Metropolis-Hastings and slice sampling.
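As a rough illustration of the sparse-latent-feature setup the abstract describes, a finite beta-Bernoulli approximation can be sampled as follows. This is a generic sketch (no covariate dependence, so it is not the dHBP itself), and all parameter values are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_bernoulli_features(n_data, n_features, a=1.0, b=1.0):
    """Finite-K approximation of a beta-Bernoulli process:
    feature-usage probabilities pi_k ~ Beta(a/K, b(K-1)/K),
    then binary usage indicators z_ik ~ Bernoulli(pi_k)."""
    K = n_features
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)
    Z = rng.random((n_data, K)) < pi  # broadcast over the K probabilities
    return pi, Z.astype(int)

pi, Z = beta_bernoulli_features(n_data=100, n_features=50)
print(Z.shape)               # (100, 50): binary feature-usage matrix
print(Z.sum(axis=1).mean())  # average number of active features per datum
```

As K grows, most pi_k shrink toward zero, so each datum uses only a sparse subset of features, which is the behavior the beta process prior encodes.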

### Citations

1688 | A Global Geometric Framework for Nonlinear Dimensionality Reduction - Tenenbaum, Silva, et al. - 2000 |

Citation Context: "... true and it is zero elsewise, and $\sigma$ is the kernel width. We can calculate $a_{ij}$ by (1), and we have $a_{ij} \neq 0$ if and only if $j \in Q_i$. This way of defining neighborhoods is similar to that used in Isomap (Tenenbaum et al., 2000). 4.3.1 Missing Pixels: With missing pixels, we observe $y_i = \Sigma_i x_i$, where $\Sigma_i$ is the sampling matrix, constructed by selecting a subset of rows from the identity matrix $I_P$. The matrix $\Sigma_i$ is a function ..."
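The sampling-matrix construction described in this context ($y_i = \Sigma_i x_i$, with $\Sigma_i$ built from rows of $I_P$) can be sketched directly; the patch values and observed-pixel indices below are made-up illustrations:

```python
import numpy as np

def sampling_matrix(observed_idx, P):
    """Build Sigma_i by selecting the rows of the P x P identity matrix
    indexed by the observed pixels, so that y_i = Sigma_i @ x_i keeps
    exactly the observed entries of the vectorized patch x_i."""
    return np.eye(P)[observed_idx]

P = 8
x = np.arange(P, dtype=float)   # full vectorized patch (toy values)
observed = [0, 2, 5]            # indices of the pixels that were observed
Sigma = sampling_matrix(observed, P)
y = Sigma @ x                   # sub-sampled observation of the patch
print(y)                        # [0. 2. 5.]
```

In practice one would index the array directly (`x[observed]`); the explicit matrix form matches the $\Sigma_i x_i$ notation used in the snippet.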

1217 | Monte Carlo sampling methods using Markov chains and their applications - Hastings |

180 | Infinite Latent Feature Models and the Indian Buffet Process - Griffiths, Ghahramani - 2006 |

90 | Online dictionary learning for sparse coding - Mairal, Bach, et al. - 2009 |

Citation Context: "...ctionary atoms) with which data may be sparsely represented. In many applications the signal (and hence features) are dependent on observable covariates. For example, in image-processing applications (Mairal et al., 2009, 2008; Zhou et al., 2009) one often represents an image in terms of a set of local patches (each composed of a contiguous subset of pixels), and the objective is to represent each patch as a sparse l..."
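The patch-as-sparse-combination objective mentioned in this context can be illustrated with a toy matching-pursuit routine. The dictionary and patch below are synthetic, and this greedy scheme is an illustrative stand-in, not the dictionary-learning method of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def matching_pursuit(D, x, n_atoms):
    """Greedily represent patch x as a sparse combination of dictionary
    atoms (unit-norm columns of D): each step picks the atom most
    correlated with the current residual and subtracts its projection."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

P, K = 64, 128                      # patch dimension, dictionary size
D = rng.standard_normal((P, K))
D /= np.linalg.norm(D, axis=0)      # normalize atoms to unit norm
x = 2.0 * D[:, 3] - 1.5 * D[:, 40]  # a patch built from two atoms
coeffs, residual = matching_pursuit(D, x, n_atoms=10)
print(np.linalg.norm(residual) < np.linalg.norm(x))  # True: residual shrank
```

Each iteration removes the projection onto the selected atom, so the residual norm is non-increasing; with a random dictionary the chosen correlations are nonzero and the decrease is strict.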

83 | Gibbs sampling for Bayesian non-conjugate and hierarchical models by using auxiliary variables - Damien, Wakefield, et al. - 1999 |

Citation Context: "... and with Euler's reflection formula $\Gamma(1-x)\Gamma(x) = \pi/\sin(\pi x)$, we have

$$p(\eta_k \mid -) \propto \eta_k^{c_0\eta_0 - 1}\,(1-\eta_k)^{c_0(1-\eta_0)-1}\,\sin^N(\pi\eta_k)\,\exp\Big(c_1\eta_k \sum_{j=1}^{N} \log\frac{\pi^*_{jk}}{1-\pi^*_{jk}}\Big). \quad (21)$$

With slice sampling (Damien et al., 1999), we let

$$u_k \sim \mathrm{Unif}\big(0,\; \eta_k^{c_0\eta_0-1}\big), \quad v_k \sim \mathrm{Unif}\big(0,\; (1-\eta_k)^{c_0(1-\eta_0)-1}\big), \quad w_k \sim \mathrm{Unif}\big(0,\; \sin^N(\pi\eta_k)\big), \quad (22)$$

and then draw $\eta_k$ from the truncated exponential distribution $\eta_k \sim \mathrm{Exp}(-c_1 \ldots$"
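The auxiliary-variable scheme in (22) can be sketched as code. This is a minimal, illustrative sampler for a density of the form $\eta^{a-1}(1-\eta)^{b-1}\sin^N(\pi\eta)\,e^{\lambda\eta}$; the parameters `a`, `b`, `N`, `lam` are stand-ins rather than the paper's actual $c_0$, $\eta_0$, and $\pi^*_{jk}$ quantities, and the restrictions a > 1, b > 1, lam != 0 are my simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def slice_step(eta, a, b, N, lam):
    """One auxiliary-variable slice-sampling update (Damien et al. style)
    for a density proportional to
        eta^(a-1) * (1-eta)^(b-1) * sin(pi*eta)^N * exp(lam*eta).
    Assumes a > 1, b > 1, lam != 0."""
    # One auxiliary uniform per awkward factor of the density.
    u = rng.uniform(0, eta ** (a - 1))
    v = rng.uniform(0, (1 - eta) ** (b - 1))
    w = rng.uniform(0, np.sin(np.pi * eta) ** N)
    # Each uniform translates into an interval constraint on eta.
    s = np.arcsin(w ** (1.0 / N)) / np.pi
    lo = max(u ** (1.0 / (a - 1)), s)
    hi = min(1 - v ** (1.0 / (b - 1)), 1 - s)
    # Remaining factor exp(lam*eta): sample the truncated exponential on
    # [lo, hi] by inverse-CDF.
    t = rng.uniform()
    return lo + np.log1p(t * np.expm1(lam * (hi - lo))) / lam

eta = 0.5
for _ in range(100):
    eta = slice_step(eta, a=2.0, b=3.0, N=5, lam=-2.0)
print(0.0 < eta < 1.0)  # True: the chain stays inside the unit interval
```

The interval [lo, hi] always contains the current eta by construction, so the update is well defined at every step.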

74 | Hierarchical beta processes and the Indian buffet process - Thibaux, Jordan |

69 | Theoretical results on sparse representations of multiple-measurement vectors - Chen, Huo - 2006 |

Citation Context: "...or similar factors; this is related to previous work on joint sparse analysis of multiple data vectors, but in that work (Chen and Huo, 2006; Mishali and Eldar, 2008; Tropp, 2006) covariates were not explicitly employed. To address this challenge, we develop a new model, termed the dependent hierarchical beta process (dHBP), and relate it..."

60 | Reduce and boost: Recovering arbitrary sets of jointly sparse vectors - Mishali, Eldar - 2008 |

40 | Bayesian density regression - Dunson, Pillai, et al. - 2007 |

37 | Nonparametric factor analysis with beta process priors - Paisley, Carin - 2009 |

15 | The infinite hierarchical factor regression model - Rai, Daumé - 2008 |

Citation Context: "...gs and slice sampling. 1 INTRODUCTION: There has been significant recent interest in the Indian buffet process (IBP) (Griffiths and Ghahramani, 2005; Knowles and Ghahramani, 2007; Miller et al., 2008; Rai and Daumé, 2008; Williamson et al., 2010) and in the related beta process (BP) (Paisley and Carin, 2009; Appearing in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AIST..."

12 | Indian buffet processes with power-law behavior - Teh, Gorur - 2009 |

Citation Context: "...ing in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors. Teh and Gorur, 2009; Thibaux and Jordan, 2007; Zhou et al., 2009). These models have been applied to factor analysis to infer a set of factors (features/dictionary atoms) with which data may be sparsely represented. In..."

11 | Sparse Bayesian infinite factor models - Bhattacharya, Dunson - 2011 |

Citation Context: "... (e.g., the observed data density, at the layer of the image in our studies) and indeed for over-parameterized hierarchical models one often obtains excellent mixing for identifiable quantities (see (Bhattacharya and Dunson, 2011) for a discussion); we observed this behavior in the proposed model. 5.1 Images with Missing Pixels and WGN: We assume $\{x_i\}_{i=1,N}$ represent data from (overlapping) patches from a single image, with a s..."

11 | The phylogenetic Indian buffet process: a non-exchangeable nonparametric prior for latent features - Miller, Griffiths, et al. - 2008 |

8 | Dependent Indian buffet processes - Williamson, Orbanz, et al. - 2010 |

1 | Robust principal component analysis? - Candès, Li, et al. |

1 | Algorithms for simultaneous sparse approximation. Part II: Convex relaxation - Tropp - 2006 |