Results 1 - 4 of 4
Posing to the Camera: Automatic Viewpoint Selection for Human Actions
Abstract. In many scenarios a scene is filmed by multiple video cameras located at different viewing positions. The difficulty of watching multiple views simultaneously raises an immediate question: which cameras capture better views of the dynamic scene? When only a single view can be displayed (e.g. in TV broadcasts), a human producer manually selects the best view. In this paper we propose a method for evaluating the quality of a view captured by a single camera, which can be used to automate viewpoint selection. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is evaluated by combining three measures that capture the visibility of the action provided by these space-time shapes. We evaluate the proposed approach both qualitatively and quantitatively.
Viewpoint Selection for Human Actions (International Journal of Computer Vision manuscript)
Abstract In many scenarios a dynamic scene is filmed by multiple video cameras located at different viewing positions. Visualizing such multi-view data on a single display raises an immediate question: which cameras capture better views of the scene? Typically (e.g. in TV broadcasts), a human producer manually selects the best view. In this paper we wish to automate this process by evaluating the quality of the view captured by every single camera. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is then evaluated based on features of the space-time shape which correspond to limb visibility. Resting on these features, two view-quality approaches are proposed. One is generic, while the other can be trained to fit any preferred action recognition method. Our experiments show that the proposed view selection provides intuitive results that match common conventions. We further show that it improves action recognition results.
Viewpoint Selection for Human Actions (Int J Comput Vis, 2010, DOI 10.1007/s11263-011-0484-5)
Abstract In many scenarios a dynamic scene is filmed by multiple video cameras located at different viewing positions. Visualizing such multi-view data on a single display raises an immediate question: which cameras capture better views of the scene? Typically (e.g. in TV broadcasts), a human producer manually selects the best view. In this paper we wish to automate this process by evaluating the quality of the view captured by every single camera. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is then evaluated based on features of the space-time shape which correspond to limb visibility. Resting on these features, two view-quality approaches are proposed. One is generic, while the other can be trained to fit any preferred action recognition method. Our experiments show that the proposed view selection provides intuitive results that match common conventions. We further show that it improves action recognition results.
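The abstract above evaluates a camera view from features of the actor's space-time silhouette shape. As a rough illustration of the idea (not the paper's actual measures, which are richer), one could score each view by a simple silhouette-visibility proxy and pick the highest-scoring camera; `view_quality` and `best_view` below are hypothetical names for such a sketch.

```python
def view_quality(silhouettes):
    """Score one camera's clip by the average normalized silhouette area.
    Each element of `silhouettes` is a per-frame binary mask (2D list of
    0/1). Larger, less degenerate silhouettes suggest better limb
    visibility from that viewpoint. Illustrative proxy only."""
    scores = []
    for mask in silhouettes:
        filled = sum(sum(row) for row in mask)   # foreground pixels
        area = len(mask) * len(mask[0])          # total pixels in the frame
        scores.append(filled / area)
    return sum(scores) / len(scores)

def best_view(views):
    """Return the index of the camera whose clip maximizes the score."""
    return max(range(len(views)), key=lambda i: view_quality(views[i]))
```

For example, a camera that sees the actor frontally (large silhouettes) would outrank one that sees a thin, self-occluded profile.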
Exact Algorithms for Non-Overlapping 2-Frame Problem with Non-Partial Coverage for Networked Robotic Cameras
Abstract. We report our algorithmic development on the 2-frame problem, which addresses the need to coordinate two networked robotic pan-tilt-zoom (PTZ) cameras for n (n > 2) competing rectangular observation requests. We assume the two camera frames have no overlap in their coverage. A request is satisfied only if it is fully covered by a camera frame. The satisfaction level for a given request is quantified by comparing its desired observation resolution with that of the camera frame that fully covers it. We propose a series of exact algorithms for the solution that maximizes the overall satisfaction. Our algorithms solve the 2-frame problem in O(n²), O(n²m), and O(n³) time for fixed, m discrete, and continuous camera resolution levels, respectively. We have implemented all the algorithms and compared them with existing work.
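The objective described above — a request scores nothing unless fully covered, and otherwise scores by comparing delivered to desired resolution — can be sketched as follows. This is an assumed reading of the abstract, not the paper's exact formulation; the `Rect`, `covers`, and `satisfaction` names and the capped resolution ratio are illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge
    y: float  # bottom edge
    w: float  # width
    h: float  # height

def covers(frame: Rect, req: Rect) -> bool:
    """True iff the camera frame fully contains the requested rectangle."""
    return (frame.x <= req.x and frame.y <= req.y and
            frame.x + frame.w >= req.x + req.w and
            frame.y + frame.h >= req.y + req.h)

def satisfaction(frame: Rect, req: Rect,
                 desired_res: float, frame_res: float) -> float:
    """A request scores 0 unless fully covered; otherwise it scores the
    ratio of delivered to desired resolution, capped at 1 (assumed form)."""
    if not covers(frame, req):
        return 0.0
    return min(frame_res / desired_res, 1.0)

def total_satisfaction(frames, requests):
    """Overall objective: each request counts its best score over the two
    (non-overlapping) camera frames. frames: list of (Rect, resolution);
    requests: list of (Rect, desired resolution)."""
    return sum(max(satisfaction(f, r, d, fr) for (f, fr) in frames)
               for (r, d) in requests)
```

The exact algorithms in the paper search over candidate frame placements to maximize this kind of objective; the sketch only fixes the scoring, not the search.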