Results 1-3 of 3
A Link Quality and Geographical-aware Routing Protocol for Video Transmission in Mobile IoT
Abstract
Wireless mobile sensor networks are expanding the Internet of Things (IoT) portfolio with a large number of multimedia services for smart cities. Safety and environmental monitoring multimedia applications will be part of Smart IoT systems, which aim to reduce emergency response time while also predicting hazardous events. In such mobile and dynamic (possibly disaster) scenarios, a predefined end-to-end path is not a reliable solution; opportunistic routing instead makes routing decisions in a fully distributed manner, using hop-by-hop forwarding decisions based on protocol-specific characteristics. This enables the transmission of video flows of a monitored area/object with Quality of Experience (QoE) support to users, headquarters, or IoT platforms. However, existing approaches rely on a single metric, such as link quality or geographic information, for the candidate selection rule, which causes a high packet loss rate and reduces the perceived video quality from the human standpoint. This article proposes a cross-layer Link quality and Geographical-aware Opportunistic routing protocol (LinGO), designed for video dissemination in mobile multimedia IoT environments. LinGO improves routing decisions by using multiple metrics, including link quality, geographic location, and energy. Simulation results show the benefits of LinGO compared with well-known routing solutions for video transmission with QoE support in mobile scenarios.
Index Terms—Internet of Things, Multimedia distribution, New IoT applications, Node mobility.
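The abstract describes ranking forwarding candidates by combining several metrics (link quality, geographic progress, residual energy) rather than a single one. A minimal sketch of such a multi-metric candidate ranking is shown below; the weights, normalization, and field names are illustrative assumptions, not taken from the LinGO paper itself.

```python
# Illustrative multi-metric candidate ranking for opportunistic routing.
# All weights and metric scales here are assumptions for demonstration,
# not the actual LinGO formulation.
from dataclasses import dataclass
import math

@dataclass
class Neighbor:
    node_id: str
    link_quality: float   # normalized to [0, 1], e.g. from ETX or RSSI
    x: float
    y: float
    energy: float         # residual energy, normalized to [0, 1]

def geo_progress(sender, neighbor, dest):
    """Fraction of the sender-to-destination distance the neighbor advances."""
    d_sender = math.hypot(dest[0] - sender[0], dest[1] - sender[1])
    d_neighbor = math.hypot(dest[0] - neighbor.x, dest[1] - neighbor.y)
    return max(0.0, (d_sender - d_neighbor) / d_sender) if d_sender else 0.0

def rank_candidates(sender, dest, neighbors, w_link=0.4, w_geo=0.4, w_energy=0.2):
    """Order forwarding candidates by a weighted sum of the three metrics."""
    def score(n):
        return (w_link * n.link_quality
                + w_geo * geo_progress(sender, n, dest)
                + w_energy * n.energy)
    return sorted(neighbors, key=score, reverse=True)
```

A node would then forward the video packet to the highest-ranked candidate, falling back to the next candidate on failure; the weighted-sum form makes it easy to trade link reliability against geographic advance.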
ShotVis: Smartphone-based Visualization of OCR Information from Images
Abstract
While visualization has been widely used as a data presentation tool on both desktop and mobile devices, the rapid visualization of information from images is still underexplored. In this work, we present a smartphone image acquisition and visualization approach for text-based data. Our prototype, ShotVis, takes images of text captured from mobile devices and extracts information for visualization. First, scattered characters in the text are processed and interactively reformulated to be stored as structured data (i.e., tables of numbers, lists of words, sentences). From there, ShotVis allows users to interactively bind visual forms to the underlying data and produce visualizations of the selected forms through touch-based interactions. In this manner, ShotVis can quickly summarize text from images into word clouds, scatterplots, and various other visualizations, all through a simple click of the camera, facilitating the interactive exploration of text data captured via smartphone cameras. Several case studies and a user study are presented to demonstrate the effectiveness of our approach.
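The abstract's first step, reformulating scattered OCR characters into structured data and deriving word-cloud weights, can be sketched roughly as follows; the function names and the simple token classification are assumptions for illustration, not the actual ShotVis pipeline.

```python
# Illustrative OCR-text structuring step: split raw extracted text into
# numeric tokens and word tokens, then compute word-cloud weights.
# Function names and rules are assumptions, not the ShotVis implementation.
import re
from collections import Counter

def structure_ocr_text(raw_text):
    """Classify OCR output tokens into numbers and words."""
    tokens = raw_text.split()
    numbers = [float(t) for t in tokens if re.fullmatch(r"-?\d+(\.\d+)?", t)]
    words = [t.lower() for t in tokens if t.isalpha()]
    return {"numbers": numbers, "words": words}

def word_cloud_weights(words, top_n=10):
    """Frequency weights suitable for binding to a word-cloud visual form."""
    return Counter(words).most_common(top_n)
```

The resulting `numbers` list could feed a scatterplot binding, while `word_cloud_weights` supplies sizes for a word-cloud rendering.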
unknown title
Abstract
Smart phones, as one of the most important platforms for personal communications and mobile computing, have evolved with various embedded devices, such as cameras, Wi-Fi transceivers, Bluetooth transceivers, and sensors. In particular, the photos taken by a smart phone have approximately, or even exactly, the image quality of a professional camera. As a result, smart phones have become the first choice for people taking photos to record their daily lives. However, managing thousands of photos on a smart phone becomes a challenge. In this paper, we propose a new architecture, the Smart Photo-Tagging Framework (SPTF), to manage the large number of photos taken by smart phones. In particular, SPTF collects the ambient data obtained from various embedded sensors on a smart phone when a photo is taken. After processing and analyzing the ambient data, SPTF can accurately record both the ambient tags and the face tags of the photo, which are then used for auto-tagging and searching photos. We also implement SPTF and verify its effectiveness through a number of realistic experiments.
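The ambient-tagging idea, deriving searchable tags from sensor readings captured at photo time, can be sketched as below; the specific sensor fields, thresholds, and tag names are assumptions for illustration, not the SPTF design.

```python
# Illustrative ambient tagging: derive simple searchable tags from
# sensor readings taken when a photo is captured. Field names and
# thresholds are assumptions, not the actual SPTF framework.
from dataclasses import dataclass
from typing import List

@dataclass
class AmbientReading:
    light_lux: float          # ambient light sensor reading
    wifi_ssids: List[str]     # nearby Wi-Fi networks at capture time
    timestamp_hour: int       # local hour of capture, 0-23

def ambient_tags(reading):
    """Map raw ambient readings to human-searchable photo tags."""
    tags = []
    tags.append("outdoor" if reading.light_lux > 1000 else "indoor")
    tags.append("daytime" if 6 <= reading.timestamp_hour < 18 else "nighttime")
    tags.extend(f"near:{ssid}" for ssid in reading.wifi_ssids)
    return tags
```

Tags of this kind, combined with face tags, would let a photo-search query such as "outdoor daytime near:HomeNet" narrow thousands of photos without manual labeling.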