Results 1–10 of 21
Decoding binary node labels from censored edge measurements: Phase transition and efficient recovery. Available at arXiv:1404.4749 [cs.IT], 2014
Abstract
Cited by 9 (4 self)
Abstract. We consider the problem of clustering a graph G into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables Y = B_G x ⊕ Z, where B_G is the incidence matrix of a graph G, x is the vector of unknown vertex variables (with a uniform prior), and Z is a noise vector with Bernoulli(ε) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of x is possible if and only if the graph G is connected, with a sharp threshold at the edge probability log(n)/n for Erdős–Rényi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np/log(n), it is shown that exact recovery is possible if and only if α > 2/(1 − 2ε)² + o(1/(1 − 2ε)²). In other words, 2/(1 − 2ε)² is the information-theoretic threshold for exact recovery at low SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph G, defining the degree rate as α = d/log(n), where d is the minimum degree of the graph, it is shown that the proposed method achieves the rate α > 4((1 + λ)/(1 − λ)²)/(1 − 2ε)² + o(1/(1 − 2ε)²), where 1 − λ is the spectral gap of the graph G. A preliminary version of this paper appeared in ISIT 2014 [ABBS14].
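The observation model Y = B_G x ⊕ Z and recovery up to global flip can be made concrete with a small simulation. The sketch below samples an Erdős–Rényi graph with censored, noisy edge measurements and estimates the labels from the top eigenvector of a ±1 weighted adjacency matrix; this spectral step is a common relaxation of SDP-based recovery and is illustrative only, not the paper's exact algorithm (all parameter values are hypothetical).

```python
import numpy as np

def censored_observations(n, p, eps, rng):
    """Sample G(n, p), hidden labels x in {0,1}^n, and one observation
    Y_e = x_i XOR x_j XOR Z_e per edge, with Z_e ~ Bernoulli(eps)."""
    x = rng.integers(0, 2, size=n)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    y = np.array([(x[i] ^ x[j]) ^ int(rng.random() < eps) for i, j in edges])
    return x, edges, y

def spectral_recover(n, edges, y):
    """Estimate labels (up to global flip) from the top eigenvector of the
    +/-1 weighted adjacency matrix: agreement -> +1, disagreement -> -1."""
    A = np.zeros((n, n))
    for (i, j), obs in zip(edges, y):
        A[i, j] = A[j, i] = 1 - 2 * obs
    v = np.linalg.eigh(A)[1][:, -1]   # eigenvector of the largest eigenvalue
    return (v < 0).astype(int)

rng = np.random.default_rng(0)
x, edges, y = censored_observations(60, 0.5, 0.05, rng)
xhat = spectral_recover(60, edges, y)
# Compare against both x and its global flip, since the sign is unidentifiable.
mistakes = min(int(np.sum(xhat != x)), int(np.sum(xhat == x)))
```

In this dense, low-noise regime (average degree far above the log(n) threshold) the spectral estimate should recover nearly all labels.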
Near-optimal joint object matching via convex relaxation. arXiv preprint arXiv:1402.1473, 2014
Abstract
Cited by 8 (1 self)
Joint object matching aims at aggregating information from a large collection of similar instances (e.g. images, graphs, shapes) to improve the correspondences computed between pairs of objects, typically by exploiting global map compatibility. Despite some practical advances on this problem, from the theoretical point of view, the error-correction ability of existing algorithms is limited by a constant barrier — none of them can provably recover the correct solution when more than a constant fraction of input correspondences are corrupted. Moreover, prior approaches focus mostly on fully similar objects, while it is practically more demanding and realistic to match instances that are only partially similar to each other. In this paper, we propose an algorithm to jointly match multiple objects that exhibit only partial similarities, where the provided pairwise feature correspondences can be densely corrupted. By encoding a consistent partial map collection into a 0-1 semidefinite matrix, we attempt recovery via a two-step procedure, that is, a spectral technique followed by a parameter-free convex program called MatchLift. Under a natural randomized model, MatchLift exhibits near-optimal error-correction ability, i.e. it guarantees the recovery of the ground-truth maps even when a dominant fraction of the inputs are randomly corrupted. We evaluate the proposed algorithm on various benchmark data sets including synthetic examples and real-world examples, all of which confirm the practical applicability of the proposed algorithm.
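The key structural fact behind the 0-1 semidefinite encoding can be checked directly: stacking the object-to-universe maps P_i and forming the block matrix with blocks P_i P_jᵀ yields a 0-1, positive-semidefinite matrix whose rank equals the universe size, which is what the spectral step exploits. The sketch below illustrates this with toy permutation maps (the data and universe size are invented for illustration; this is not the MatchLift program itself).

```python
import numpy as np

def consistent_block_matrix(perms):
    """Encode a consistent map collection as the block matrix X with
    blocks X_ij = P_i P_j^T. Since X = P P^T for the stacked P, it is
    0-1 valued, positive semidefinite, and of rank equal to the
    universe size -- the low-rank structure the spectral step uses."""
    P = np.vstack(perms)
    return P @ P.T

# Toy collection: three objects mapped to a common universe of size 3
# via the identity and two cyclic relabelings (illustrative only).
I = np.eye(3)
perms = [I, I[[1, 2, 0]], I[[2, 0, 1]]]
X = consistent_block_matrix(perms)
```

Corrupting some blocks of X breaks this low-rank structure, which is precisely what the convex program is designed to undo.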
Estimating Image Depth Using Shape Collections
Abstract
Cited by 7 (4 self)
Figure 1: We attribute a single 2D image of an object (left) with depth by transporting information from a 3D shape deformation subspace learned by analyzing a network of related but different shapes (middle). For visualization, we color-code the estimated depth with values increasing from red to blue (right). Images, while easy to acquire, view, publish, and share, lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting.
Linear inverse problems on Erdős–Rényi graphs: Information-theoretic limits and efficient recovery
Abstract
Cited by 6 (4 self)
Abstract—This paper considers the inverse problem with observed variables Y = B_G X ⊕ Z, where B_G is the incidence matrix of a graph G, X is the vector of unknown vertex variables with a uniform prior, and Z is a noise vector with Bernoulli(ε) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery of X is possible if and only if the graph G is connected, with a sharp threshold at the edge probability log(n)/n for Erdős–Rényi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np/log(n), it is shown that exact recovery is possible if and only if α > 2/(1 − 2ε)² + o(1/(1 − 2ε)²). In other words, 2/(1 − 2ε)² is the information-theoretic threshold for exact recovery at low SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. Full version available in [1].
Creating Consistent Scene Graphs Using a Probabilistic Grammar
Abstract
Cited by 4 (3 self)
Figure 1: Our algorithm processes raw scene graphs with possible over-segmentation (a), obtained from repositories such as the Trimble Warehouse, into consistent hierarchies capturing semantic and functional groups (b, c). The hierarchies are inferred by parsing the scene geometry with a probabilistic grammar learned from a set of annotated examples. Apart from generating meaningful groupings at multiple scales, our algorithm also produces object labels with higher accuracy compared to alternative approaches. Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and/or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes,
Seamless Surface Mappings
Abstract
Cited by 2 (2 self)
Figure 1: Two bijective seamless mappings between models of two humans are shown in (c), (d), generated by our algorithm from the two different cut placements in (a), (b) (respectively), with cuts visualized as colored curves. The two maps interpolate the same set of user-given landmarks, shown as colored spheres. The maps are visualized by texturing the male model and transferring the texture to the female model using the mappings. The algorithm is not affected by the choice of cuts: the maps do not exhibit any artifacts near the cut, nor does the poor cut correspondence (e.g. the torso in (b)) affect them; in fact, for the two different cut placements, the produced maps are identical. We introduce a method for computing seamless bijective mappings between two surface meshes that interpolates a given set of correspondences. A common approach for computing a map between surfaces is to cut the surfaces to disks, flatten them to the plane, and extract the mapping from the flattenings by composing one flattening with the inverse of the other. So far, a significant drawback in this class of techniques is that the choice of cuts introduces a bias in the computation of the map that often causes visible artifacts and wrong correspondences. In this paper we develop a surface mapping technique that is indifferent to the particular cut choice. This is achieved by a novel type of surface flattening that encodes this cut invariance and, when optimized with a suitable energy functional, results in a seamless surface-to-surface map. We show that the algorithm produces high-quality seamless bijective maps for pairs of surfaces with a wide range of shape variability and from a small number of prescribed correspondences. We also use this framework to produce three-way, consistent, and seamless mappings for triplets of surfaces.
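The flatten-and-compose construction described in the abstract can be sketched abstractly: given invertible flattenings of the two cut-open surfaces into the plane, the surface-to-surface map is the composition of one flattening with the inverse of the other. The snippet below uses 1-D stand-ins for the flattenings purely for illustration; every name here is hypothetical, and real flattenings act on mesh vertices, not scalars.

```python
def compose_map(phi_a, phi_b_inv):
    """Map a point on surface A to surface B through the common plane:
    flatten with phi_a, then un-flatten with phi_b's inverse."""
    return lambda p: phi_b_inv(phi_a(p))

# 1-D stand-ins: phi_a scales its surface, phi_b translates its surface.
phi_a = lambda p: 2.0 * p
phi_b = lambda q: q + 1.0
phi_b_inv = lambda q: q - 1.0

a_to_b = compose_map(phi_a, phi_b_inv)
```

By construction, the flattened images of corresponding points agree: phi_b(a_to_b(p)) equals phi_a(p). The bias the paper addresses enters because real flattenings depend on where the surface was cut.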
FlowWeb: Joint Image Set Alignment by Weaving Consistent, Pixelwise Correspondences
Abstract
Cited by 2 (0 self)
Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.
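The transitive consistency the iterative updates enforce can be sketched on 1-D "images" where each flow field is an integer displacement per pixel. The functions below compose two flows and measure how often a direct flow agrees with the transitive composition; this is a toy measure in the spirit of the self-consistency objective, not the paper's actual update rule (array sizes and flows are invented).

```python
import numpy as np

def compose(f_ab, f_bc):
    """Compose 1-D correspondence flows: pixel p in image A maps to
    p + f_ab[p] in B, then follows B's flow on into C."""
    n = len(f_ab)
    out = np.empty_like(f_ab)
    for p in range(n):
        q = int(np.clip(p + f_ab[p], 0, n - 1))  # stay inside the image
        out[p] = f_ab[p] + f_bc[q]
    return out

def cycle_consistency(f_ab, f_bc, f_ac):
    """Fraction of pixels where the direct A->C flow agrees with the
    transitive A->B->C composition -- the quantity the iterative
    updates push toward 1."""
    return float(np.mean(compose(f_ab, f_bc) == f_ac))

# Toy 4-pixel example: B's flow is zero, so A->B->C reduces to A->B.
f_ab = np.array([1, 1, 1, 0])
f_bc = np.zeros(4, dtype=int)
```

With `f_ac = f_ab` the cycle is perfectly consistent; replacing `f_ac` with a zero flow drops the score, which is the signal the joint updates would act on.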
Controlling Singular Values with Semidefinite Programming
Abstract
Cited by 2 (1 self)
Controlling the singular values of n-dimensional matrices is often required in geometric algorithms in graphics and engineering. This paper introduces a convex framework for problems that involve singular values. Specifically, it enables the optimization of functionals and constraints expressed in terms of the extremal singular values of matrices. Towards this end, we introduce a family of convex sets of matrices whose singular values are bounded. These sets are formulated using Linear Matrix Inequalities (LMI), allowing optimization with standard convex Semidefinite Programming (SDP) solvers. We further show that these sets are optimal, in the sense that there exist no larger convex sets that bound singular values. A number of geometry processing problems are naturally described in terms of singular values. We employ the proposed framework to optimize and improve upon standard approaches. We experiment with this new framework in several applications: volumetric mesh deformations, extremal quasi-conformal mappings in three dimensions, non-rigid shape registration, and averaging of rotations. We show that in all applications the proposed approach leads to algorithms that compare favorably to state-of-the-art algorithms.
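The kind of convex set involved can be illustrated with the classical LMI characterization of a maximal-singular-value bound: σ_max(A) ≤ γ exactly when the block matrix [[γI, A], [Aᵀ, γI]] is positive semidefinite. The sketch below only tests membership numerically; in an actual application this LMI would be imposed as an SDP constraint inside a solver, and this is a generic textbook construction rather than the paper's specific family of sets.

```python
import numpy as np

def in_sigma_max_ball(A, gamma, tol=1e-9):
    """Membership test for the convex set {A : sigma_max(A) <= gamma},
    written as the LMI  [[gamma*I, A], [A^T, gamma*I]] >= 0  (PSD).
    The block matrix has eigenvalues gamma +/- sigma_i(A), so PSD-ness
    is equivalent to the singular-value bound."""
    n, m = A.shape
    M = np.block([[gamma * np.eye(n), A],
                  [A.T, gamma * np.eye(m)]])
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)
```

For example, a diagonal matrix with entries 1 and 2 has σ_max = 2, so it lies in the γ = 2 set but not in the γ = 1.5 set.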
Multiple Shape Correspondence by Dynamic Programming
Abstract
We present a multiple shape correspondence method, based on dynamic programming, that computes consistent bijective maps between all shape pairs in a given collection of initially unmatched shapes. As a fundamental distinction from previous work, our method aims to explicitly minimize the overall distortion, i.e., the average isometric distortion of the resulting maps over all shape pairs. We cast the problem as optimal path finding on a graph structure where vertices are maps between shape extremities. We exploit as much context information as possible using a dynamic programming based algorithm to approximate the optimal solution. Our method generates coarse multiple correspondences between shape extremities, as well as denser correspondences as a by-product. We assess the performance on various mesh sequences of (nearly) isometric shapes. Our experiments show that, for isometric shape collections with non-uniform triangulation and noise, our method can compute relatively dense correspondences reasonably fast and outperform the state of the art in terms of accuracy.
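The optimal-path formulation can be sketched with a generic layered-graph dynamic program: each layer holds candidate maps, and the recurrence keeps, for each candidate, the cheapest path reaching it. This toy is only in the spirit of the approach; the paper's graph, vertices, and distortion measure are more involved, and the layer/cost data below are invented.

```python
def min_total_distortion(layers, cost):
    """Toy dynamic program: choose one candidate per layer of a layered
    graph so the summed pairwise cost of consecutive choices is minimal.
    best[v] holds the cheapest cost of any path ending at candidate v."""
    best = {v: 0.0 for v in layers[0]}
    for prev, cur in zip(layers, layers[1:]):
        best = {v: min(best[u] + cost[(u, v)] for u in prev) for v in cur}
    return min(best.values())

# Hypothetical candidates and distortion costs between consecutive layers.
layers = [["a"], ["b", "c"], ["d"]]
cost = {("a", "b"): 1.0, ("a", "c"): 3.0, ("b", "d"): 1.0, ("c", "d"): 0.0}
```

Here the path a → b → d (total cost 2.0) beats a → c → d (total cost 3.0), illustrating how the recurrence trades off per-edge distortions globally rather than greedily.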