Results 1 - 10 of 19
Real-Time Multitarget Tracking by a Cooperative Distributed Vision System
- Proceedings of the IEEE, 2002
Abstract - Cited by 36 (4 self)
Target detection and tracking are among the most important and fundamental technologies for building real-world computer vision systems such as security and traffic monitoring systems. This paper first categorizes target tracking systems based on the characteristics of scenes, tasks, and system architectures. We then present a real-time cooperative multitarget tracking system. The system consists of a group of active vision agents (AVAs), where an AVA is a logical model of a network-connected computer with an active camera. All AVAs cooperatively track their target objects by dynamically exchanging object information with each other. With this cooperative tracking capability, the system as a whole can track multiple moving objects persistently, even in the complicated dynamic environments of the real world. In this paper, we describe the technologies employed in the system and demonstrate their effectiveness. Keywords: cooperative distributed vision, cooperative tracking, fixed-viewpoint camera, multi-camera sensing, multitarget tracking, real-time cooperation by multiple agents, real-time tracking.
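The cooperative exchange this abstract describes can be pictured as peer agents broadcasting local target observations and fusing what they receive. The `ActiveVisionAgent` class and averaging fusion below are a minimal sketch of that idea, not the paper's actual AVA protocol; all names and the fusion rule are illustrative assumptions:

```python
class ActiveVisionAgent:
    """Toy stand-in for an AVA (assumed name): keeps local target
    observations and fuses them with peer reports by averaging."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.local = {}     # target_id -> (x, y) in a shared world frame
        self.received = {}  # target_id -> [(x, y), ...] reported by peers

    def observe(self, target_id, position):
        self.local[target_id] = position

    def exchange(self, peers):
        # Broadcast every local observation to every peer agent.
        for peer in peers:
            for tid, pos in self.local.items():
                peer.received.setdefault(tid, []).append(pos)

    def fused_estimate(self, target_id):
        # Average the peer reports together with the local observation.
        points = list(self.received.get(target_id, []))
        if target_id in self.local:
            points.append(self.local[target_id])
        x = sum(p[0] for p in points) / len(points)
        y = sum(p[1] for p in points) / len(points)
        return (x, y)


if __name__ == "__main__":
    a, b = ActiveVisionAgent(0), ActiveVisionAgent(1)
    a.observe(7, (1.0, 2.0))   # both agents see target 7 ...
    b.observe(7, (3.0, 4.0))   # ... from different viewpoints
    a.exchange([b])
    b.exchange([a])
    print(a.fused_estimate(7))  # both agents agree: (2.0, 3.0)
```

After the exchange, every agent holds the same fused estimate, which is the property that lets the system hand targets off across camera views.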
Learning Object Motion Patterns for Anomaly Detection and Improved Object Detection
Abstract - Cited by 35 (2 self)
We present a novel framework for learning patterns of motion and sizes of objects in static-camera surveillance. The proposed method adds a new higher-level layer to the traditional surveillance pipeline for anomalous event detection and scene model feedback. Pixel-level probability density functions (pdfs) of appearance have been used for background modelling in the past, but modelling pixel-level pdfs of object speed and size from the tracks is novel. Each pdf is modelled as a multivariate Gaussian mixture model (GMM) of the motion (destination location and transition time) and size (width and height) parameters of the objects at that location. The output of the tracking module is used to perform unsupervised EM-based learning of every GMM. We have successfully used the proposed scene model to detect local as well as global anomalies in object tracks. We also show how this scene model improves object detection through pixel-level parameter feedback of the minimum object size and background learning rate. Most object path modelling approaches first cluster the tracks into major paths in the scene, which can be a source of error. We avoid this by building local pdfs that capture the variety of tracks passing through each location. Qualitative and quantitative analysis of actual surveillance videos demonstrates the effectiveness of the proposed approach.
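A per-location scene model of this kind can be sketched as a grid of cells, each accumulating the sizes of objects whose tracks pass through it, with an anomaly score for new observations. The paper fits full multivariate GMMs with EM; the sketch below simplifies to a single diagonal Gaussian per cell (a Mahalanobis-distance score), and every name in it is an illustrative assumption:

```python
from collections import defaultdict


class SceneModel:
    """Per-grid-cell model of object size. Simplification: one diagonal
    Gaussian per cell instead of the paper's EM-trained GMMs."""

    def __init__(self, cell=32):
        self.cell = cell
        self.samples = defaultdict(list)  # (cx, cy) -> [(w, h), ...]

    def _key(self, x, y):
        return (x // self.cell, y // self.cell)

    def add_track_point(self, x, y, w, h):
        # Called with the tracker's output: object at (x, y), size (w, h).
        self.samples[self._key(x, y)].append((w, h))

    def _stats(self, vals):
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        return mean, max(var, 1e-6)  # floor the variance for stability

    def anomaly_score(self, x, y, w, h):
        pts = self.samples.get(self._key(x, y))
        if not pts:
            return float("inf")  # nothing ever seen here: maximally surprising
        mw, vw = self._stats([p[0] for p in pts])
        mh, vh = self._stats([p[1] for p in pts])
        # Squared Mahalanobis distance under the diagonal Gaussian.
        return (w - mw) ** 2 / vw + (h - mh) ** 2 / vh
```

A high score flags a local anomaly (e.g. a truck-sized blob where only pedestrians were seen), and the per-cell means are exactly the kind of statistic that could feed back a minimum-object-size parameter to the detector.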
PRISMATICA: Toward ambient intelligence in public transport environments
- IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum., 2005
Abstract - Cited by 17 (2 self)
Object Detection, Tracking and Recognition for Multiple Smart Cameras -- Efficient distributed algorithms defined for small networks of fixed cameras may be adaptable to larger networks with mobile, steerable cameras.
, 2008
Abstract - Cited by 15 (1 self)
Video cameras are among the most commonly used sensors in a large number of applications, ranging from surveillance to smart rooms for videoconferencing. There is a need to develop algorithms for tasks such as detection, tracking, and recognition of objects, specifically using distributed networks of cameras. The projective nature of imaging sensors provides ample challenges for data association across cameras. We first discuss the nature of these challenges in the context of visual sensor networks. Then, we show how real-world constraints can be favorably exploited to tackle these challenges. Examples of real-world constraints are a) the presence of a world plane, b) the presence of a three-dimensional scene model, c) consistency of motion across cameras, and d) color and texture properties. In this regard, the main focus of this paper is on highlighting the efficient use of the geometric constraints induced by the imaging devices to derive distributed algorithms for target detection, tracking, and recognition. Our discussions are supported by several examples drawn from real applications. Lastly, we also describe several potential research problems that remain to be addressed.
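The world-plane constraint (a) is the classic route to cross-camera data association: if each camera is calibrated with a homography onto a common ground plane, detections from different views can be matched by their ground-plane distance. The sketch below assumes such homographies are already known and uses a simple greedy nearest-neighbour match, which is an illustrative simplification, not the paper's algorithm:

```python
import math


def apply_homography(H, pt):
    """Map an image point to the ground plane via a 3x3 homography
    (given as nested lists), with the usual projective division."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)


def associate(dets_a, H_a, dets_b, H_b, radius=1.0):
    """Greedily pair detections from two cameras whose ground-plane
    positions lie within `radius` of each other."""
    pairs, used = [], set()
    for i, da in enumerate(dets_a):
        ga = apply_homography(H_a, da)
        best, best_d = None, radius
        for j, db in enumerate(dets_b):
            if j in used:
                continue
            gb = apply_homography(H_b, db)
            d = math.hypot(ga[0] - gb[0], ga[1] - gb[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

Because only ground-plane coordinates are exchanged, each camera can run this association locally on compact messages, which is what makes the constraint attractive for distributed networks.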
Time-bounded Distributed QoS-Aware Service Configuration in Heterogeneous Cooperative Environments
Abstract - Cited by 8 (5 self)
The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to execute services within the users' requested quality of service (QoS) levels, particularly in open real-time environments where the characteristics of the computational load cannot always be predicted in advance. Nevertheless, responses to events must still be provided within precise timing constraints to guarantee a desired level of performance. This paper proposes cooperative service execution, allowing resource-constrained devices to collectively execute services with their more powerful neighbours, meeting non-functional requirements that would not be met by an individual execution. Nodes dynamically group themselves into a new coalition, allocating resources to each new service and establishing an initial service configuration which maximises satisfaction of the QoS constraints associated with the new service and minimises the impact of the new service's arrival on global QoS. However, the increased complexity of open real-time environments may prevent computing optimal local and global resource allocations within a useful and bounded time. As such, the QoS optimisation problem is here reformulated as a heuristic-based anytime optimisation problem that can be interrupted at any time and quickly respond to environmental changes. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly
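The defining property of an anytime algorithm is that it always holds a valid best-so-far answer and only improves it until interrupted. A minimal sketch of that control structure, assuming a finite set of candidate configurations and a scalar quality function (both hypothetical; the paper's heuristics and QoS model are far richer):

```python
import random
import time


def anytime_configure(candidates, quality, deadline_s=0.02, rng=None):
    """Anytime heuristic search over service configurations: keep a
    best-so-far answer and improve it until the deadline expires, so a
    valid configuration is available whenever the caller interrupts."""
    rng = rng or random.Random(0)
    best = rng.choice(candidates)     # any candidate is a valid answer
    best_q = quality(best)
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        cand = rng.choice(candidates)  # stand-in for heuristic sampling
        cand_q = quality(cand)
        if cand_q > best_q:            # quality is monotonically non-decreasing
            best, best_q = cand, cand_q
    return best, best_q
```

The time budget, not the solution quality, terminates the loop, which is exactly the trade made when optimal allocation cannot be computed within a bounded time.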
Heterogeneous data collection and representation within a distributed smart space architecture
- presented at Adv. Concepts Intell. Vision Syst., 2002
Abstract - Cited by 5 (5 self)
This paper presents a distributed multi-sensor system developed in an Ambient Intelligence (AmI) context. The aim of this work is to give design principles for an architecture able to collect and analyse data coming from multiple sensors, to be aware of its inner state and of its users' state, and to interact with them. In particular, different features are proposed for describing the internal state of an intelligent environment, and the architecture of a distributed system able to collect these data is proposed. The communication infrastructure is based on the software-agent paradigm, which satisfies the requirements imposed by a scalable, distributed, and dynamic system. The final application is concerned with the progressive realization of an "intelligent building" that can improve both the quality of life and the security of its users.
Intelligent Perception in Virtual Sensor Networks and Space Robotics
Abstract - Cited by 4 (2 self)
Intelligent perception is a fundamental requirement of systems that exhibit sophisticated autonomous operation in complex dynamic worlds. It combines low-level, bottom-up, data-driven vision with high-level, top-down, knowledge-based processes. This thesis develops two embodied, task-oriented vision systems that exhibit autonomous, intelligent, goal-driven behavior through intelligent perception. In Part I of the thesis, we develop a prototype surveillance system featuring a visual sensor network comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom active cameras. Novel multicamera control strategies enable the camera nodes to collaborate both in tracking pedestrians of interest that move across the FOVs of different cameras and in acquiring close-up videos of pedestrians as they travel across extended areas. The impediments to deploying and experimenting with appropriately extensive camera networks in large, busy public spaces would make our research all but infeasible in the real world. However, a unique centerpiece of our approach is the virtual vision paradigm, in which we employ a visually and behaviorally realistic simulator in the design and evaluation of our surveillance systems. In particular, we employ a virtual train station populated by autonomous, lifelike virtual pedestrians,
Consolidation of a WSN and Minimax Method to Rapidly Neutralise Intruders in Strategic Installations
- Sensors, 2012
Real-time Video Content Analysis: QoS-Aware Application Composition and Parallel Processing
- ACM Transactions on Multimedia Computing, Communications, and Applications, 2006
Abstract - Cited by 2 (0 self)
Real-time content-based access to live video data requires content analysis applications that are able to process video streams in real time and with an acceptable error rate. Statements such as this express quality of service (QoS) requirements. In general, control of the QoS provided can be achieved by sacrificing application quality in one QoS dimension for better quality in another, or by controlling the allocation of processing resources to the application. However, controlling QoS in video content analysis is particularly difficult, not only because main QoS dimensions like accuracy are nonadditive, but also because both the communication- and processing-resource requirements are challenging. This article presents techniques for QoS-aware composition of applications for real-time video content analysis, based on dynamic Bayesian networks. The aim of QoS-aware composition is to determine application deployment configurations which satisfy a given set of QoS requirements. Our approach consists of: (1) an algorithm for QoS-aware selection of configurations of feature extractor and classification algorithms which balances requirements for timeliness and accuracy against available processing resources, (2) a distributed content-based publish/subscribe system which provides application scalability at multiple logical levels of distribution, and (3) scalable solutions for video streaming, filtering/transformation, feature extraction, and classification. We evaluate our approach based on experiments with an implementation of a real-time motion-vector-based object-tracking application. The evaluation shows that the application largely behaves as expected when resource availability and selections of
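The content-based publish/subscribe system mentioned in point (2) routes events by their content rather than by fixed channels: subscribers register a predicate over event attributes and receive only matching events. A minimal in-process sketch of that pattern (the class and method names are illustrative assumptions, not the article's system):

```python
class PubSub:
    """Minimal content-based publish/subscribe broker: subscribers
    register a predicate over event attributes and a callback."""

    def __init__(self):
        self.subs = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))

    def publish(self, event):
        # Deliver the event to every subscriber whose predicate matches,
        # returning the number of deliveries made.
        delivered = 0
        for predicate, callback in self.subs:
            if predicate(event):
                callback(event)
                delivered += 1
        return delivered


if __name__ == "__main__":
    bus = PubSub()
    motion_events = []
    # A classifier stage might subscribe only to motion events:
    bus.subscribe(lambda e: e.get("type") == "motion", motion_events.append)
    bus.publish({"type": "motion", "track": 3})  # delivered
    bus.publish({"type": "audio"})               # filtered out
    print(motion_events)
```

Decoupling producers (feature extractors) from consumers (classifiers) this way is what lets such a pipeline scale across multiple logical levels of distribution.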
Enabling Technologies on Hybrid Camera Networks for Behavioral Analysis of Unattended Indoor Environments and Their Surroundings
Abstract - Cited by 1 (0 self)
This paper presents a layered network architecture and the enabling technologies for accomplishing vision-based behavioral analysis of unattended environments. Specifically, the vision network covers both the attended environment and its surroundings by means of hybrid cameras. The layer overlooking the surroundings is deployed outdoors and tracks people, monitoring entrance/exit points. It recovers the geometry of the site under surveillance and communicates people's positions to a higher-level layer. The layer monitoring the unattended environment pursues similar goals, with the addition of maintaining a global mosaic of the observed scene for further understanding. Moreover, it merges information coming from sensors beyond vision to deepen the understanding or increase the reliability of the system. The behavioral analysis is delegated to a third layer that merges the information received from the two other layers and infers knowledge about what happened, what is happening, and what is likely to happen in the environment. The paper also describes a case study implemented on the Engineering Campus of the University of Modena and Reggio Emilia, where our surveillance system has been deployed in a computer laboratory that is often left unattended.