Results 1 - 3 of 3
Large Displacement 3D Scene Flow with Occlusion Reasoning
"... The emergence of modern, affordable and accurate RGB-D sensors increases the need for single view ap-proaches to estimate 3-dimensional motion, also known as scene flow. In this paper we propose a coarse-to-fine, dense, correspondence-based scene flow formulation that relies on explicit geometric re ..."
Abstract
The emergence of modern, affordable and accurate RGB-D sensors increases the need for single-view approaches to estimate 3-dimensional motion, also known as scene flow. In this paper we propose a coarse-to-fine, dense, correspondence-based scene flow formulation that relies on explicit geometric reasoning to account for the effects of large displacements and to model occlusion. Our methodology enforces local motion rigidity at the level of the 3D point cloud without explicitly smoothing the parameters of adjacent neighborhoods. By integrating all geometric and photometric components in a single, consistent, occlusion-aware energy model, defined over overlapping, image-adaptive neighborhoods, our method can process fast motions and large occlusion areas, as present in challenging datasets like the MPI Sintel Flow Dataset, recently augmented with depth information. By explicitly modeling large displacements and occlusion, we can handle difficult sequences which cannot currently be processed by state-of-the-art scene flow methods. We also show that by integrating depth information into the model, we can obtain correspondence fields with improved spatial support and sharper boundaries compared to state-of-the-art, large-displacement optical flow methods.
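The abstract names the ingredients of its energy model but not the formula. As a rough illustration only, an occlusion-aware correspondence energy of the kind described might combine a photometric term gated by occlusion, a local-rigidity term over image-adaptive neighborhoods, and an occlusion prior; the specific terms, symbols and weights below are assumptions, not the paper's actual model:

```latex
% Illustrative sketch, not the paper's formulation.
\begin{aligned}
E(\mathbf{f}, O) ={} & \sum_{p} (1 - O_p)\,
    \rho\!\big(I_2(p + \mathbf{f}_p) - I_1(p)\big)
    && \text{photometric, switched off where occluded} \\
 {}+{} & \lambda_r \sum_{p} \sum_{q \in \mathcal{N}(p)} w_{pq}\,
    \big\lVert \mathbf{X}_q + \mathbf{f}_q
    - (R_p \mathbf{X}_q + \mathbf{t}_p) \big\rVert^2
    && \text{local rigidity on the 3D point cloud} \\
 {}+{} & \lambda_o \sum_{p} O_p
    && \text{prior against labeling everything occluded}
\end{aligned}
```

Here \(\mathbf{f}\) is the dense 3D flow field, \(O_p \in \{0,1\}\) flags occluded pixels, \(\rho\) is a robust penalty, \(\mathbf{X}_q\) is the 3D point from the depth map at pixel \(q\), and \((R_p, \mathbf{t}_p)\) is a rigid motion fitted per image-adaptive neighborhood \(\mathcal{N}(p)\) with weights \(w_{pq}\); fitting the rigid motion per overlapping neighborhood is what enforces rigidity without directly smoothing neighboring parameters.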
Robot In a Room: Toward Perfect Object Recognition in Closed Environments
"... Figure 1. A robot that can recognize all the objects. We propose an extremely robust mechanism to reconstruct a 3D map and use crowd sourcing to collectively annotate all objects. During testing, the robot localizes its pose, recognizes all seen objects (four images on the right from four RGB-D sens ..."
Abstract
Figure 1. A robot that can recognize all the objects. We propose an extremely robust mechanism to reconstruct a 3D map and use crowd sourcing to collectively annotate all objects. During testing, the robot localizes its pose, recognizes all seen objects (four images on the right from four RGB-D sensors mounted on the robot), and identifies new ones (e.g. the backpack and the box). In most cases, the robot can recognize autonomously. It can indicate reliably when it fails, and utilize crowd sourcing to fix the problem or to annotate new objects.
While general object recognition is still far from being solved, this paper proposes a way for a robot to recognize every object at almost human-level accuracy. Our key observation is that many robots will stay in a relatively closed environment (e.g. a house or an office). By constraining a robot to stay in a limited territory, we can ensure that the robot has seen most objects before and that new objects are introduced only slowly. Furthermore, we can build a 3D map of the environment to reliably subtract the background and make recognition easier. We propose extremely robust algorithms to obtain a 3D map and enable humans to collectively annotate objects. During testing time, our algorithm can recognize all objects very reliably, and queries humans on a crowd-sourcing platform if confidence is low or new objects are identified. This paper explains the design decisions in building such a system, and constructs a benchmark for extensive evaluation. Experiments suggest that making robot vision appear to work from an end user's perspective is a reachable goal today, as long as the robot stays in a closed environment. By formulating this task, we hope to lay the foundation of a new direction in vision for robotics. Code and data will be available upon acceptance.
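To make the recognize-or-escalate control flow concrete, here is a minimal sketch under assumed interfaces; every name and the 0.9 confidence threshold are hypothetical illustrations, not the authors' system:

```python
# Hypothetical sketch of the loop described in the abstract: recognize
# each foreground segment, escalate to crowd sourcing when unsure.
from typing import Any, Callable, Dict, List, Tuple

CONF_THRESHOLD = 0.9  # assumed cutoff for autonomous labeling


def label_segments(
    segments: List[Any],
    recognize: Callable[[Any], Tuple[str, float]],
    ask_crowd: Callable[[Any], str],
) -> Dict[int, str]:
    """Label each foreground segment, querying humans when unsure.

    `segments` are the regions left after subtracting the static 3D map
    of the closed environment; `recognize` returns a (label, confidence)
    pair; `ask_crowd` posts one segment to a crowd-sourcing platform
    such as AMT and returns the human-supplied label.
    """
    labels: Dict[int, str] = {}
    for i, seg in enumerate(segments):
        label, confidence = recognize(seg)
        if confidence >= CONF_THRESHOLD:
            labels[i] = label            # robot is confident enough
        else:
            labels[i] = ask_crowd(seg)   # low confidence or new object
    return labels


if __name__ == "__main__":
    # Trivial stand-ins just to show the flow end to end.
    segs = ["mug", "unknown-box"]
    rec = lambda s: (s, 0.95 if s == "mug" else 0.3)
    crowd = lambda s: "backpack"
    print(label_segments(segs, rec, crowd))  # {0: 'mug', 1: 'backpack'}
```

The point of the design, as the abstract argues, is that the closed environment keeps the fallback rare: the robot has seen most objects before, so the crowd-sourcing branch fires only for genuinely new items or reliable self-detected failures.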
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
"... Abstract—We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. Th ..."
Abstract
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the fully convolutional network [2] architecture and its variants. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. The design of SegNet was primarily motivated by road scene understanding applications. Hence, it is efficient in terms of both memory and computational time during inference. It also has significantly fewer trainable parameters than competing architectures and can be trained end-to-end using stochastic gradient descent without complex training protocols. We also benchmark the performance of SegNet on Pascal VOC12 salient object segmentation and the recent SUN RGB-D indoor scene understanding challenge. These quantitative assessments show that SegNet provides competitive performance although it is significantly smaller than other architectures. We also provide a Caffe implementation of SegNet and a web demo at
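The index-based upsampling is easy to sketch. The following assumed PyTorch re-implementation of the idea (the authors' release is in Caffe) shows a single encoder-decoder pair where the encoder's max-pooling indices drive non-linear unpooling, followed by trainable convolutions that densify the sparse result:

```python
# Assumed PyTorch sketch of SegNet-style index-based upsampling;
# illustrative, not the authors' code.
import torch
import torch.nn as nn


class EncDecBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # return_indices=True records, per pooling window, which
        # element was the max; the decoder reuses those positions,
        # so no upsampling weights need to be learned.
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        # The unpooled maps are sparse; trainable convolutions then
        # densify them, as the abstract describes.
        self.dec = nn.Sequential(
            nn.Conv2d(out_ch, in_ch, 3, padding=1),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.enc(x)
        size = x.shape[-2:]                        # pre-pool spatial size
        x, idx = self.pool(x)                      # downsample, keep indices
        x = self.unpool(x, idx, output_size=size)  # sparse, non-linear upsampling
        return self.dec(x)                         # densify with learned filters


x = torch.randn(1, 3, 64, 64)
print(EncDecBlock(3, 16)(x).shape)  # torch.Size([1, 3, 64, 64])
```

Storing only the pooling indices, rather than whole encoder feature maps as in some competing decoders, is what yields the memory savings at inference that the abstract highlights.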