Results 1 - 4 of 4
Fine-Grained Visual Comparisons with Local Learning
Abstract
Cited by 2 (0 self)
Given two images, we want to predict which exhibits a particular visual attribute more than the other—even when the two images are quite similar. Existing relative attribute methods rely on global ranking functions; yet rarely will the visual cues relevant to a comparison be constant for all data, nor will humans' perception of the attribute necessarily permit a global ordering. To address these issues, we propose a local learning approach for fine-grained visual comparisons. Given a novel pair of images, we learn a local ranking model on the fly, using only analogous training comparisons. We show how to identify these analogous pairs using learned metrics. With results on three challenging datasets—including a large newly curated dataset for fine-grained comparisons—our method outperforms state-of-the-art methods for relative attribute prediction.
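The local learning idea this abstract describes can be sketched roughly as follows: for each query pair, train a ranking model on the fly using only the most analogous training comparisons. In this illustrative sketch, plain Euclidean distance over concatenated pair features stands in for the paper's learned metrics, and the local ranker is approximated by a linear SVM on feature differences (a standard RankSVM reduction); all names, parameters, and modeling choices here are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import LinearSVC

def local_rank_predict(query_pair, train_pairs, train_labels, k=50):
    """Predict which image of query_pair shows the attribute more,
    using only the k most analogous training comparisons.
    Euclidean distance is a stand-in for a learned metric."""
    qa, qb = query_pair
    # Analogy between pairs: distance in concatenated-feature space
    # (illustrative choice, not the paper's learned metric).
    q = np.concatenate([qa, qb])
    P = np.array([np.concatenate([a, b]) for a, b in train_pairs])
    nearest = np.argsort(np.linalg.norm(P - q, axis=1))[:k]
    # RankSVM-style reduction: classify feature differences,
    # label +1 meaning "first image has more of the attribute".
    X = np.array([train_pairs[i][0] - train_pairs[i][1] for i in nearest])
    y = np.array([train_labels[i] for i in nearest])
    clf = LinearSVC(C=1.0).fit(X, y)
    return int(clf.predict([qa - qb])[0])
```

A new model is fit per query, which trades test-time cost for locality: only comparisons whose visual cues resemble the query's influence the decision.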
D-ITET, ETH Zurich
Abstract
In this paper we present seven techniques that everybody should know to improve example-based single image super resolution (SR): 1) augmentation of data, 2) use of large dictionaries with efficient search structures, 3) cascading, 4) image self-similarities, 5) back projection refinement, 6) enhanced prediction by consistency check, and 7) context reasoning. We validate our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial improvements. The techniques are widely applicable and require no changes or only minor adjustments of the SR methods. Moreover, our Improved A+ (IA) method sets new state-of-the-art results outperforming A+ by up to 0.9 dB on average PSNR whilst maintaining a low time complexity.
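Of the seven techniques listed, back projection refinement (5) is compact enough to sketch: the SR estimate is iteratively corrected so that, when passed back through the downsampling model, it reproduces the observed LR image. The downsampling operator (average pooling), nearest-neighbour upsampling, and step size below are illustrative assumptions, not the A+/IA implementation.

```python
import numpy as np

def downsample(img, s):
    """Average-pool by factor s (stand-in for the imaging model)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s):
    """Nearest-neighbour upsample by factor s."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

def back_project(sr, lr, s, iters=10, step=1.0):
    """Refine an SR estimate so that downsampling it reproduces
    the observed LR image (iterative back-projection)."""
    hr = sr.astype(float).copy()
    for _ in range(iters):
        residual = lr - downsample(hr, s)   # reconstruction error in LR space
        hr += step * upsample(residual, s)  # push the error back to HR space
    return hr
```

Because the refinement only enforces consistency with the LR observation, it can be bolted onto any SR method's output, which is why the abstract calls these techniques widely applicable.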
Machine Learning Thousands of Portraits
Abstract
Figure 1: We collect thousands of portraits by capturing video of a subject while they watch movie clips designed to elicit a range of positive emotions. We use crowdsourcing and machine learning to train models that can predict attractiveness scores of different expressions. These models can be used to select a subject’s best expressions across a range of emotions, from more serious professional portraits to big smiles. We describe a method for providing feedback on portrait expressions, and for selecting the most attractive expressions from large video/photo collections. We capture a video of a subject’s face while they are engaged in a task designed to elicit a range of positive emotions. We then use crowdsourcing to score the captured expressions for their attractiveness. We use these scores to train a model that can automatically predict attractiveness of different expressions of a given person. We also train a cross-subject model that evaluates portrait attractiveness of novel subjects and show how it can be used to automatically mine attractive photos from personal photo collections. Furthermore, we show how, with a little bit ($5-worth) of extra crowdsourcing, we can substantially improve the cross-subject model by “fine-tuning” it to a new individual using active learning. Finally, we demonstrate a training app that helps people learn how to mimic their best expressions.
ADAPTIVE RANKING OF FACIAL ATTRACTIVENESS
Abstract
As humans, we love to rank things. Top ten lists exist for everything from movie stars to scary animals. Ambiguities (i.e., ties) naturally occur in the process of ranking when people feel they cannot distinguish two items. Human-reported rankings derived from star ratings abound on recommendation websites such as Yelp and Netflix. However, those websites differ in star precision, which points to the need for ranking systems that adapt to an individual user’s preference sensitivity. In this work we propose an adaptive system that allows for ties when collecting ranking data. Using this system, we propose a framework for obtaining computer-generated rankings. We test our system and a computer-generated ranking method on the problem of evaluating human attractiveness. Extensive experimental evaluations and analysis demonstrate the effectiveness and efficiency of our work. Index Terms — ranking, rating, adaptive methods, facial attractiveness
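A ranking procedure that admits ties can be sketched minimally as follows, assuming a three-way comparator `compare(a, b)` that returns +1 when a is preferred, -1 when b is, and 0 when the annotator cannot distinguish them. Tied items share a group, so the output is an ordered list of tie sets. The grouping strategy and the choice to compare each new item only against a group's first element are illustrative assumptions, not the paper's adaptive system.

```python
def adaptive_rank(items, compare):
    """Rank items best-to-worst into tie groups using a
    three-way comparator (+1, -1, or 0 for a tie)."""
    groups = []  # ordered best-to-worst; each group is a set of tied items
    for item in items:
        placed = False
        for i, group in enumerate(groups):
            # One comparison per group, against its representative
            # (assumes ties behave transitively within a group).
            c = compare(item, group[0])
            if c > 0:                       # beats this group: new group above it
                groups.insert(i, [item]); placed = True; break
            if c == 0:                      # indistinguishable: join the tie set
                group.append(item); placed = True; break
        if not placed:
            groups.append([item])           # worse than everything seen so far
    return groups
```

Allowing the comparator to return 0 is what makes the collection adaptive to a rater's precision: a rater with coarse sensitivity simply produces larger tie groups instead of being forced into arbitrary orderings.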