## An empirical bayes approach to contextual region classification (2009)

Venue: CVPR

Citations: 15 (0 self)

### BibTeX

@INPROCEEDINGS{Lazebnik09anempirical,
  author    = {Svetlana Lazebnik and Maxim Raginsky},
  title     = {An empirical Bayes approach to contextual region classification},
  booktitle = {CVPR},
  year      = {2009}
}

### Abstract

This paper presents a nonparametric approach to labeling of local image regions that is inspired by recent developments in information-theoretic denoising. The chief novelty of this approach rests in its ability to derive an unsupervised contextual prior over image classes from unlabeled test data. Labeled training data is needed only to learn a local appearance model for image patches (although additional supervisory information can optionally be incorporated when it is available). Instead of assuming a parametric prior such as a Markov random field for the class labels, the proposed approach uses the empirical Bayes technique of statistical inversion to recover a contextual model directly from the test data, either as a spatially varying or as a globally constant prior distribution over the classes in the image. Results on two challenging datasets convincingly demonstrate that useful contextual information can indeed be learned from unlabeled data.
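The key inference step in the abstract — recovering a globally constant prior over classes from unlabeled test data, given only a trained appearance model — can be sketched as a marginal-likelihood EM fit, which is the standard empirical-Bayes construction for this setting. The function name and interface below are hypothetical, not taken from the paper:

```python
import numpy as np

def estimate_global_prior(lik, n_iter=100, tol=1e-8):
    """EM estimate of a globally constant class prior from unlabeled data.

    lik : (n_patches, n_classes) array of appearance likelihoods P(f_i | x),
          learned on labeled training data and assumed given here.
    Returns the prior P(x) maximizing the marginal likelihood
    prod_i sum_x P(x) P(f_i | x).
    """
    n, k = lik.shape
    prior = np.full(k, 1.0 / k)              # uniform initialization
    for _ in range(n_iter):
        # E-step: posterior responsibilities P(x | f_i) under current prior
        post = lik * prior
        post /= post.sum(axis=1, keepdims=True)
        # M-step: new prior is the average responsibility over all patches
        new_prior = post.mean(axis=0)
        if np.abs(new_prior - prior).max() < tol:
            prior = new_prior
            break
        prior = new_prior
    return prior
```

Note that no labels are consulted: only the per-patch likelihoods and the pool of unlabeled observations drive the prior, which is exactly the "contextual information from unlabeled data" claim of the abstract.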

### Citations

5516 | Distinctive image features from scale-invariant keypoints
- Lowe
Citation Context: ... [6]. 3.1. Local likelihood model We perform feature extraction by dividing the images into non-overlapping 20 × 20 pixel patches and computing four types of features from each patch: position, SIFT [9], textons, and color. Textons are computed by convolving the images with a filter bank and recording the index of the filter with the maximum absolute response at each pixel. The texton descriptor is ...
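The texton computation described in this context — convolve with a filter bank, take the index of the maximum absolute response per pixel, then histogram indices within a patch — can be sketched in NumPy. The two tiny derivative filters here stand in for the paper's 37-filter bank and are purely illustrative:

```python
import numpy as np

def conv2_same(image, kernel):
    """Same-size 2-D convolution with edge padding (NumPy only)."""
    kh, kw = kernel.shape
    pad = ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2))
    p = np.pad(image, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(p, kernel.shape)
    # flip the kernel so this is true convolution, not correlation
    return (windows * kernel[::-1, ::-1]).sum(axis=(-2, -1))

def texton_map(image, filter_bank):
    """Per-pixel index of the filter with the maximum absolute response."""
    responses = np.stack([np.abs(conv2_same(image, f)) for f in filter_bank])
    return responses.argmax(axis=0)

def texton_histogram(indices, n_filters):
    """Normalized histogram of texton indices within a patch."""
    hist = np.bincount(indices.ravel(), minlength=n_filters).astype(float)
    return hist / hist.sum()
```

The paper additionally distinguishes positive and negative filter responses, doubling the histogram to 74 dimensions; the sketch above keeps only the absolute response for brevity.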

2591 | Latent Dirichlet allocation
- Blei, Ng, et al.
- 2003
Citation Context: ...kelihoods of words given topics are learned on a training set [5]. It has been argued that pLSA is not a “true” generative model because the prior over topics is conditioned on the “dummy” variable d [2]. But from the empirical Bayes perspective, d is actually the context, and pLSA can be thought of as a data-driven technique that uses the fact that a given group of words or observations all originat...

967 | On the statistical analysis of dirty pictures
- Besag
- 1986
Citation Context: ...mages, we can re-compute the neighborhood context using the new labels and rerun the empirical Bayes algorithm to get a further improved estimate. This strategy, similar to iterated conditional modes [1], tends to converge very quickly (i.e., in about three iterations), and typically results in a further improvement of just under 1%. 3.5. Additional evaluation The contextual priors described so far a...
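The iterate-and-relabel strategy in this context can be sketched as a small fixed-point loop. The `context_prior` callback stands in for the paper's neighborhood-context computation; its interface is hypothetical:

```python
import numpy as np

def relabel_until_stable(lik, context_prior, max_iter=10):
    """Iteratively refine labels in the spirit of iterated conditional modes:
    recompute the contextual prior from the current labeling, re-decode,
    and stop when the labeling no longer changes.

    lik           : (n_patches, n_classes) appearance likelihoods
    context_prior : callable mapping a label vector to a per-patch prior
                    array of the same shape as lik (hypothetical interface)
    """
    labels = lik.argmax(axis=1)            # initial labels: appearance only
    for _ in range(max_iter):
        prior = context_prior(labels)      # re-compute neighborhood context
        new_labels = (lik * prior).argmax(axis=1)
        if np.array_equal(new_labels, labels):
            break                          # converged (paper: ~3 iterations)
        labels = new_labels
    return labels
```

With a globally constant `context_prior` (e.g. the empirical class frequencies of the current labeling), each pass pulls ambiguous patches toward the dominant classes, matching the small further improvement the context reports.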

851 | Probabilistic Latent Semantic Indexing
- Hofmann
- 1999
Citation Context: ...t section. But first, we would like to discuss an intriguing connection that emerges between our proposed approach for computing image-level contexts and probabilistic Latent Semantic Analysis (pLSA) [5], a popular document model that has been successfully applied to images [15, 21]. This model was originally developed as an unsupervised procedure for discovering latent document structure in terms of...

747 | Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods
- Platt
- 1999
Citation Context: ...the histogram of per-patch class labels estimated using the unsupervised image-level prior. The output of the binary SVM corresponding to class x is converted to a probability in the standard fashion [10] and becomes a supervised image-level prior P (x). This prior is then used to “modulate” the unsupervised contextual prior by multiplication, as suggested in [20]. Table 3(e) shows the performance obt...
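The "standard fashion" of [10] is Platt scaling: fit a sigmoid to the SVM's raw decision values so they behave like probabilities. Below is a minimal gradient-descent sketch; Platt's original method uses a Newton-type optimizer and smoothed target labels, which are omitted here for brevity:

```python
import numpy as np

def platt_fit(scores, labels, n_iter=2000, lr=0.1):
    """Fit (A, B) so that P(y=1 | f) = 1 / (1 + exp(A*f + B)) matches the
    binary labels, by gradient descent on the logistic log-loss.
    scores : SVM decision values f_i; labels : 0/1 class indicators.
    """
    A, B = 0.0, 0.0
    f = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(A * f + B))   # current probability estimates
        gA = np.mean((y - p) * f)             # log-loss gradient w.r.t. A
        gB = np.mean(y - p)                   # log-loss gradient w.r.t. B
        A -= lr * gA
        B -= lr * gB
    return A, B

def platt_prob(score, A, B):
    """Convert a raw SVM score to a calibrated probability."""
    return 1.0 / (1.0 + np.exp(A * score + B))
```

For a well-separated classifier the fitted A is negative, so larger decision values map to probabilities closer to 1; the resulting P(x) is what multiplies the unsupervised contextual prior above.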

345 | Representing and Recognizing the Visual Appearance of Materials Using Three-Dimensional Textons - Leung, Malik - 2001 |

306 | TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation
- Shotton, Winn, et al.
- 2006
Citation Context: ...textual model that captures the probability of different classes occurring nearby, or sharing a specific spatial relationship. Recent literature contains many approaches for contextual image labeling [4, 6, 11, 18, 19, 20, 21]. Most existing contextual models must be learned from training data that contains a representative sampling of all possible inter-class interactions. Since the number of even pairwise interactions is...

187 | Semantic texton forests for image categorization and segmentation
- Shotton, Johnson, et al.
- 2008
Citation Context: ...textual model that captures the probability of different classes occurring nearby, or sharing a specific spatial relationship. Recent literature contains many approaches for contextual image labeling [4, 6, 11, 18, 19, 20, 21]. Most existing contextual models must be learned from training data that contains a representative sampling of all possible inter-class interactions. Since the number of even pairwise interactions is...

102 | Constructing models for content-based image retrieval
- Schmid
- 2001
Citation Context: ...e histogram of texton indices within the patch. We use a subset of the LM filter bank [7] consisting of 18 second-derivative-of-Gaussian and 6 Laplacian filters, and 13 filters from the S filter bank [17], for a total of 37 filters. Because we distinguish between positive and negative filter responses, the texton histogram has 74 dimensions. For color, we compute a 48-dimensional descriptor by subdivi...

81 | Universal discrete denoising: Known channel
- Weissman, Ordentlich, et al.
- 2005
Citation Context: ...servation sequence to recover the underlying clean label sequence. of being specified in advance. This methodology has recently given rise to an information-theoretic framework of universal denoising [22], which, in turn, has inspired our own work. We think of the stochastic mapping from class labels to observations as a “noisy channel,” and then we infer the underlying class label sequence by denois...

80 | Learning spatial context: Using stuff to find things
- Heitz, Koller
- 2008
Citation Context: ...textual model that captures the probability of different classes occurring nearby, or sharing a specific spatial relationship. Recent literature contains many approaches for contextual image labeling [4, 6, 11, 18, 19, 20, 21]. Most existing contextual models must be learned from training data that contains a representative sampling of all possible inter-class interactions. Since the number of even pairwise interactions is...

78 | An empirical Bayes approach to statistics
- Robbins
- 1955
Citation Context: ...sequence of image features, we should be able to go back from the features to the labels. This insight can be formally captured with the help of empirical Bayes methodology from statistics literature [13, 14], in which priors are inferred from data instead ...

64 | Region Classification with Markov Field Aspect Models - Verbeek, Triggs - 2007 |

41 | Asymptotically subminimax solutions of compound statistical decision problems
- Robbins
- 1951
Citation Context: ...by a known noisy channel (i.e., the stochastic mapping Q), where we seek to minimize the expected fraction of incorrectly recovered symbols. Denoising can be formulated as a compound decision problem [12], or a set of simultaneous statistical decision problems that have some shared structure. In our case, each separate problem is the recovery of a correct class label for a single test observation, and the...

31 | Object class segmentation using random forests - Schroff, Criminisi, et al. - 2008 |

29 | The Bayesian Choice, 2nd ed
- Robert
- 2001
Citation Context: ...sequence of image features, we should be able to go back from the features to the labels. This insight can be formally captured with the help of empirical Bayes methodology from statistics literature [13, 14], in which priors are inferred from data instead ...

9 | Discovering objects and their location in images. ICCV
- Sivic, Russell, et al.
- 2005
Citation Context: ...t emerges between our proposed approach for computing image-level contexts and probabilistic Latent Semantic Analysis (pLSA) [5], a popular document model that has been successfully applied to images [15, 21]. This model was originally developed as an unsupervised procedure for discovering latent document structure in terms of underlying “topics” that generate the observed words. In relation to the framew...

7 | Geometric context from a single image, ICCV - Hoiem, Efros, et al. - 2005 |