## Learning to model spatial dependency: Semi-supervised discriminative random fields (2007)

Venue: NIPS

Citations: 26 (7 self)

### Citations

12920 | The nature of statistical learning theory
- Vapnik
- 1995
Citation Context: ...l probability over the pixel label field given an observed image. In this sense, a DRF is equivalent to a conditional random field [12] defined over a 2-D lattice. Following the basic tenet of Vapnik [18], it is natural to anticipate that learning an accurate joint model should be more challenging than learning an accurate conditional model. Indeed, recent experimental evidence shows that DRFs tend to...

3395 | Conditional random fields: Probabilistic models for segmenting and labeling sequence data
- Lafferty, McCallum, et al.
- 2001
Citation Context: ...lds (DRFs) [11, 10], on the other hand, directly model the conditional probability over the pixel label field given an observed image. In this sense, a DRF is equivalent to a conditional random field [12] defined over a 2-D lattice. Following the basic tenet of Vapnik [18], it is natural to anticipate that learning an accurate joint model should be more challenging than learning an accurate conditiona...

2094 | Fast approximate energy minimization via graph cuts,”
- Boykov, Veksler, et al.
- 1999
Citation Context: ...rd supervised training of DRFs as well as iid logistic regression classifiers. To further accelerate the performance with respect to accuracy, we may apply loopy belief propagation [20] or graph-cuts [4] as an inference tool. Since our model is tightly coupled with inference steps during the learning, the proper choice of an inference algorithm will most likely improve segmentation tasks. Acknowledgm...

1601 | Combining labeled and unlabeled data with co-training.
- Blum, Mitchell
- 1998
Citation Context: ...ed data. Consequently, many researchers are now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label c...

1238 | On the statistical analysis of dirty pictures
- Besag
- 1986
Citation Context: ...ependencies in natural image data. The two predominant types of random field models correspond to generative versus discriminative graphical models respectively. Classical Markov random fields (MRFs) [2] follow a traditional generative approach, where one models the joint probability of the observed image along with the hidden label field over the pixels. Discriminative random fields (DRFs) [11, 10],...

1021 | Text Classification from Labeled and Unlabeled Documents Using EM.
- Mitchell
- 2000
Citation Context: ...areas due to the abundance of unlabeled data. Consequently, many researchers are now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate class...

726 | Semi-supervised learning using gaussian fields and harmonic functions,
- Zhu, Ghahramani, et al.
- 2003
Citation Context: ...rvised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label classification with a structured input [22, 23, 24]. Unfortunately, semi-supervised learn...
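The graph-based transduction with harmonic functions cited in this entry admits a very small sketch. The graph, labels, and iteration count below are illustrative assumptions, not data from the cited paper: labeled nodes are clamped, and every unlabeled node's score is repeatedly replaced by the average of its neighbours' scores until the field settles into the harmonic solution.

```python
# Minimal sketch of label propagation via harmonic functions
# (in the spirit of Zhu, Ghahramani & Lafferty, 2003).
# Labeled nodes stay clamped; unlabeled nodes average their neighbours.

def harmonic_labels(adj, labels, iters=200):
    """adj: {node: [neighbours]}; labels: {node: 0.0 or 1.0} for labeled nodes.
    Returns a score in [0, 1] per node; threshold at 0.5 to classify."""
    f = {v: labels.get(v, 0.5) for v in adj}      # unknowns start at 0.5
    for _ in range(iters):
        for v in adj:
            if v not in labels:                   # labeled nodes are clamped
                f[v] = sum(f[u] for u in adj[v]) / len(adj[v])
    return f

# A 6-node chain with node 0 labeled 0 and node 5 labeled 1: the harmonic
# solution interpolates linearly, so interior scores grade from 0 to 1.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
scores = harmonic_labels(adj, {0: 0.0, 5: 1.0})
```

On a chain the converged scores are simply `i / 5`, which makes the method easy to sanity-check before applying it to a real similarity graph.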

656 | Learning with local and global consistency.
- Zhou, Bousquet, et al.
- 2004
Citation Context: ...rvised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label classification with a structured input [22, 23, 24]. Unfortunately, semi-supervised learn...

469 | Generalized belief propagation.
- Yedidia, Freeman, et al.
- 2001
Citation Context: ...ccuracy over standard supervised training of DRFs as well as iid logistic regression classifiers. To further accelerate the performance with respect to accuracy, we may apply loopy belief propagation [20] or graph-cuts [4] as an inference tool. Since our model is tightly coupled with inference steps during the learning, the proper choice of an inference algorithm will most likely improve segmentation...

228 | Discriminative random fields: A discriminative framework for contextual interaction in classification,” in
- Kumar, Hebert
- 2003
Citation Context: ...ver, we have found that ICM yields good performance at our tasks below, and is probably one of the simplest possible alternatives. 5 Experiments Using standard supervised DRF models, Kumar and Hebert [11, 10] reported interesting experimental results for joint classification tasks on a 2-D lattice, which represents an image with a DRF model. Since labeling image data is expensive and tedious, we believe t...
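The ICM inference mentioned in this context is indeed one of the simplest alternatives, and fits in a few lines. The grid size, unary scores, and smoothness weight below are illustrative assumptions, not values from the paper: each sweep greedily resets every site to the label maximising a unary term plus a Potts-style bonus for agreeing with its 4-neighbourhood.

```python
# Minimal sketch of Iterated Conditional Modes (ICM) for a binary
# label field on a 2-D lattice. Greedy coordinate-wise maximisation:
# each site takes the label with the best local (unary + smoothness) score.

def icm(unary, beta=1.0, sweeps=5):
    """unary[i][j][k]: score for label k at site (i, j). Returns label grid."""
    h, w = len(unary), len(unary[0])
    labels = [[max((0, 1), key=lambda k: unary[i][j][k]) for j in range(w)]
              for i in range(h)]                          # start at unary argmax
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [labels[a][b]
                        for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < h and 0 <= b < w]
                labels[i][j] = max((0, 1), key=lambda k:
                                   unary[i][j][k] + beta * sum(n == k for n in nbrs))
    return labels

# A 3x3 field whose centre pixel weakly prefers label 0 while every other
# pixel strongly prefers label 1: the smoothness term flips the centre to 1.
unary = [[[0.0, 2.0] for _ in range(3)] for _ in range(3)]
unary[1][1] = [0.5, 0.0]
labels = icm(unary)
```

The example shows the characteristic ICM behaviour the context alludes to: local evidence is overridden when the neighbourhood strongly disagrees.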

195 | A classification EM algorithm for clustering and two stochastic versions.
- Celeux, Govaert
- 1992
Citation Context: ...ndance of unlabeled data. Consequently, many researchers are now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems,...

195 | Contextual models for object detection using boosted random fields
- Torralba, Murphy, et al.
- 2004
Citation Context: ...onal model. Indeed, recent experimental evidence shows that DRFs tend to produce more accurate image labeling models than MRFs, in many applications like gesture recognition [15] and object detection [11, 10, 19, 17]. Although DRFs tend to produce superior pixel labellings to MRFs, partly by relaxing the assumption of conditional independence of observed images given the labels, the approach relies more heavily o...

167 | Conditional random fields for object recognition
- Quattoni, Collins, et al.
- 2005
Citation Context: ...arning an accurate conditional model. Indeed, recent experimental evidence shows that DRFs tend to produce more accurate image labeling models than MRFs, in many applications like gesture recognition [15] and object detection [11, 10, 19, 17]. Although DRFs tend to produce superior pixel labellings to MRFs, partly by relaxing the assumption of conditional independence of observed images given the labe...

143 | Discriminative Fields for Modeling Spatial Dependencies in Natural Images. In:
- Kumar, Hebert
- 2004
Citation Context: ...MRFs) [2] follow a traditional generative approach, where one models the joint probability of the observed image along with the hidden label field over the pixels. Discriminative random fields (DRFs) [11, 10], on the other hand, directly model the conditional probability over the pixel label field given an observed image. In this sense, a DRF is equivalent to a conditional random field [12] defined over a...

140 | Accelerated training of conditional random fields with stochastic gradient methods
- Vishwanathan, Schraudolph, et al.
Citation Context: ...onal model. Indeed, recent experimental evidence shows that DRFs tend to produce more accurate image labeling models than MRFs, in many applications like gesture recognition [15] and object detection [11, 10, 19, 17]. Although DRFs tend to produce superior pixel labellings to MRFs, partly by relaxing the assumption of conditional independence of observed images given the labels, the approach relies more heavily o...

131 | Learning from labeled and unlabeled data on a directed graph.
- Zhou, Huang, et al.
- 2005
Citation Context: ...rvised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label classification with a structured input [22, 23, 24]. Unfortunately, semi-supervised learn...

99 | Semi-supervised learning by entropy minimization
- Bengio, Grandvalet
- 2005
Citation Context: ...e now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label classification with a structured input [22, 23...

78 | Semi-supervised conditional random fields for improved sequence segmentation and labeling - Jiao, Wang, et al. - 2006

65 | Maximum margin semi-supervised learning for structured variables
- Altun, McAllester, et al.
- 2006
Citation Context: ...Unfortunately, semi-supervised learning for structured classification problems, where the prediction variables are interdependent in complex ways, have not been as widely studied, with few exceptions [1, 9]. Current work on semi-supervised learning for structured predictors [1, 9] has focused primarily on simple sequence prediction tasks where learning and inference can be efficiently performed using st...

21 | Tumor segmentation from magnetic resonance imaging by learning via one-class support vector machine
- Zhang, Ma, et al.
- 2004
Citation Context: ...e) has three modalities available — T1, T2, and T1 contrast. Note that each modality for each slice has 66,564 pixels. As with much of the related work on automatic brain tumor segmentation (such as [7, 21]), our training is based on patient-specific data, where training MR images for a classifier are obtained from the patient to be tested. Note that the training sets and testing sets for a classifier a...

17 | Maximum certainty data partitioning
- Roberts, Everson, et al.
- 2000
Citation Context: ...for DRFs, which was formulated as MAP estimation with conditional entropy over unlabeled data as a data-dependent prior regularization. Our approach is motivated by the information-theoretic argument [8, 16] that unlabeled examples can provide the most benefit when classes have small overlap. We introduced a simple approximation approach for this new learning procedure that exploits the local conditional...
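The conditional-entropy regularization described in this context can be sketched on iid logistic regression rather than a full DRF. All data, weights, and the regularization strength below are illustrative assumptions: the objective is the labeled negative log-likelihood plus a multiple of the prediction entropy on unlabeled points, so confident classifiers whose boundaries avoid the unlabeled data pay a smaller penalty.

```python
# Minimal sketch of an entropy-regularised semi-supervised objective
# (in the spirit of Grandvalet & Bengio, 2005), for 1-D logistic regression.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def objective(w, b, labeled, unlabeled, lam=0.5):
    """labeled: list of (x, y) with y in {0, 1}; unlabeled: list of x."""
    # Negative log-likelihood on the labeled points.
    nll = -sum(math.log(sigmoid(w * x + b) if y == 1 else 1.0 - sigmoid(w * x + b))
               for x, y in labeled)
    # Binary entropy of the predictions on the unlabeled points.
    ent = -sum(p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
               for x in unlabeled
               for p in [sigmoid(w * x + b)])
    return nll + lam * ent

labeled = [(-2.0, 0), (2.0, 1)]
unlabeled = [-1.5, -1.0, 1.0, 1.5]
sharp = objective(3.0, 0.0, labeled, unlabeled)   # confident classifier
flat = objective(0.5, 0.0, labeled, unlabeled)    # uncertain classifier
```

Here the sharper classifier attains the lower objective, illustrating the cited information-theoretic argument that unlabeled examples help most when the classes barely overlap.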

14 | Kernel based method for segmentation and modeling of magnetic resonance images
- Garcia, Moreno
- 2004
Citation Context: ...e) has three modalities available — T1, T2, and T1 contrast. Note that each modality for each slice has 66,564 pixels. As with much of the related work on automatic brain tumor segmentation (such as [7, 21]), our training is based on patient-specific data, where training MR images for a classifier are obtained from the patient to be tested. Note that the training sets and testing sets for a classifier a...

7 | Efficient spatial classification using decoupled conditional random fields
- Lee, Greiner, et al.
- 2006
Citation Context: ...First, the value of τ² is critical to the final result, and unfortunately selecting the appropriate τ² is a non-trivial task, which in turn makes the learning procedures more challenging and costly [13]. Second, the Gaussian prior is data-independent, and is not associated with either the unlabeled or labeled observations a priori. Inspired by the work in [8] and [9], we propose a semi-supervised le...

5 | Data dependent regularization
- Corduneanu, Jaakkola
- 2006
Citation Context: ...e now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class label classification with a structured input [22, 23...
