## Structure preserving embedding (2009)

Venue: Proceedings of the 26th International Conference on Machine Learning (ICML 2009)

Citations: 28 (6 self)

### Citations

2420 | A Global Geometric Framework for Nonlinear Dimensionality Reduction - Tenenbaum, Silva, et al. - 2000

2382 | Nonlinear Dimensionality Reduction by Locally Linear Embedding - Roweis - 2000

Citation context: Multidimensional scaling preserves distances between data points (Cox & Cox, 1994). Nonlinear manifold learning algorithms preserve local distances described by a graph on the data (Tenenbaum et al., 2000; Roweis & Saul, 2000; Weinberger et al., 2005). For these algorithms the input consists of high-dimensional points as well as binary connectivity. Many of these manifold learning techniques preserve local distances but n...

1491 | Spectral Graph Theory - Chung - 1997

Citation context: ...embedded in 2-dimensional Euclidean space. However, there are many uses for graph embedding that do not relate to arc crossing, and thus there exists a suite of embedding algorithms with different goals (Chung, 1997; Battista et al., 1999). One motivation for embedding graphs is to solve computationally hard problems geometrically. For example, using hyperplanes to separate points after graph embedding is useful...

1194 | Laplacian eigenmaps for dimensionality reduction and data representation - Belkin, Niyogi - 2003

590 | Multidimensional Scaling - Cox, Cox - 2000

311 | Expander flows, geometric embeddings and graph partitioning - Arora, Rao, et al. - 2004

Citation context: One motivation for embedding graphs is to solve computationally hard problems geometrically. For example, using hyperplanes to separate points after graph embedding is useful for efficiently approximating the NP-hard sparsest cut problem (Arora et al., 2004). This article will focus on graph embedding for visualization and compression. Given only the connectivity of a graph, can we efficiently recover low-dimensional point coordinates for each node such...

156 | UCI machine learning repository. http://www.ics.uci.edu/~mlearn/MLRepository.html - Asuncion, Newman - 2007

Citation context: ...other methods in terms of the accuracy of a 1-nearest-neighbor classifier on the resulting 2D embeddings. The results in Table 3 were obtained by sampling a variety of small graphs from UCI datasets (Asuncion & Newman, 2007) by randomly selecting 120 nodes from the two largest classes of each dataset. Then, the data was embedded with the variety of algorithms above and the resulting performance of a 1NN classifier w...
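The evaluation protocol in this excerpt, scoring 2D embeddings by 1-nearest-neighbor accuracy, can be sketched as follows. This is a minimal leave-one-out version; the function name and the leave-one-out choice are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def knn_1_accuracy(X2d, labels):
    """Leave-one-out 1-nearest-neighbor accuracy on 2D embedding coordinates.

    Sketch of the evaluation described in the excerpt: after a graph is
    embedded into 2D, each node is classified by the label of its nearest
    embedded neighbor.
    """
    X2d = np.asarray(X2d, dtype=float)
    labels = np.asarray(labels)
    # Pairwise squared Euclidean distances between embedded points.
    D = ((X2d[:, None, :] - X2d[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)  # exclude self-matches (leave-one-out)
    nearest = D.argmin(axis=1)
    return float((labels[nearest] == labels).mean())
```

On two well-separated point clusters this returns 1.0, since every node's nearest neighbor shares its class.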

151 | A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization - Burer, Monteiro - 2003

Citation context: ...converges when |tr(W̃Ã) − tr(W̃A)| ≤ ε, where ε is an input parameter. Table 2 summarizes the cutting-plane version of SPE. SPE is implemented in MATLAB as a semidefinite program using CSDP and SDP-LR (Burer & Monteiro, 2003) and has complexity similar to other dimensionality-reduction SDPs such as Semidefinite Embedding. The complexity is O(N^3 + C^3) (Weinberger et al., 2005), where C denotes the number of constraints (...
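The stopping criterion quoted in this excerpt, |tr(W̃Ã) − tr(W̃A)| ≤ ε, compares the weighted objective on the current solution's graph against the target adjacency. A minimal sketch, assuming W̃, Ã, and A are given as dense matrices (their construction follows the SPE paper, not this snippet):

```python
import numpy as np

def has_converged(W_tilde, A_tilde, A, eps=1e-4):
    """Cutting-plane stopping test from the excerpt: stop when the weighted
    objective on the current graph A-tilde is within eps of the objective on
    the true adjacency A, i.e. |tr(W~ A~) - tr(W~ A)| <= eps.
    Sketch only; eps is the input tolerance mentioned in the excerpt."""
    obj_current = np.trace(W_tilde @ A_tilde)
    obj_target = np.trace(W_tilde @ A)
    return abs(obj_current - obj_target) <= eps
```

When the recovered graph matches the target adjacency exactly, the two traces coincide and the test passes for any ε > 0.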

144 | Graph drawing: algorithms for the visualization of graphs - Battista, Eades, et al. - 1998

Citation context: ...embedded in 2-dimensional Euclidean space. However, there are many uses for graph embedding that do not relate to arc crossing, and thus there exists a suite of embedding algorithms with different goals (Chung, 1997; Battista et al., 1999). One motivation for embedding graphs is to solve computationally hard problems geometrically. For example, using hyperplanes to separate points after graph embedding is useful for efficiently approx...

138 | Training Structural SVMs when Exact Inference is Intractable - Finley, Joachims - 2008

Citation context: ...exponential enumeration is avoided and the most violated inequalities are introduced sequentially. It has been shown that cutting-plane optimizations such as this converge and perform well in practice (Finley & Joachims, 2008). 3. A Low-Rank Objective. The previous section showed that it is possible to force an embedding to preserve the graph structure in a given adjacency matrix A by introducing a set of linear constraints...
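The cutting-plane idea in this excerpt, introducing only the most violated inequality at each round instead of enumerating all of them, can be sketched generically. This is an illustrative skeleton, not the paper's algorithm; the representation of constraints as callables g(x) ≤ 0 is an assumption:

```python
def most_violated(constraints, x):
    """One cutting-plane step sketched from the excerpt: rather than adding
    all (exponentially many) inequalities g_i(x) <= 0 up front, evaluate the
    candidates at the current solution x and add only the single most
    violated one. Returns (index, violation) or None if x is feasible."""
    violations = [g(x) for g in constraints]
    worst = max(range(len(violations)), key=violations.__getitem__)
    if violations[worst] > 0:
        return worst, violations[worst]
    return None  # no violated inequality: current solution is feasible
```

An outer loop would re-solve the relaxed problem with the returned inequality added to the working set, repeating until `most_violated` returns None (or, as in the excerpt's SPE variant, until the objective gap falls below ε).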

74 | New outer bounds on the marginal polytope - Sontag, Jaakkola - 2008

Citation context: ...do not carry over immediately, although in practice the cutting-plane algorithm works well and has also been successfully deployed in settings beyond structured prediction and quadratic programming (Sontag & Jaakkola, 2008). 5. Experiments. We present visualization results on a variety of synthetic and real-world datasets, highlighting the improvements of SPE over purely spectral methods. Figure 2 shows a variety of cla...

27 | Minimum volume embedding - Shaw, Jebara - 2007

Citation context: ...Maximum Variance Unfolding (MVU), and Minimum Volume Embedding (MVE) begin by finding a sparse connectivity matrix A that describes local pairwise distances (Roweis & Saul, 2000; Weinberger et al., 2005; Shaw & Jebara, 2007). These methods assume that the data lies on a low-dimensional nonlinear manifold embedded in a high-dimensional ambient space. To recover the manifold, these methods preserve local distances along e...

23 | Balanced network flows, a unifying framework for design and analysis of matching algorithms - Fremuth-Paeger, Jungnickel - 1999

Citation context: ...the connectivity computed from K is exactly A, and Δ(G(K), A) = 0. 2.2. Maximum Weight Subgraphs. Maximum-weight subgraph algorithms select edges from a weighted graph to produce a subgraph of maximal weight (Fremuth-Paeger & Jungnickel, 1999). Definition 2. Given a kernel matrix K, define the weight between two points (i, j) as the negated pairwise distance between them: W_ij = −D_ij = −K_ii − K_jj + 2K_ij. Once again, the matrix W composed o...
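Definition 2 in this excerpt gives the weight matrix directly in terms of kernel entries: W_ij = −D_ij = −K_ii − K_jj + 2K_ij, where D_ij is the squared feature-space distance induced by the kernel. A minimal sketch of that computation (the function name is illustrative):

```python
import numpy as np

def weight_matrix(K):
    """Negated pairwise kernel distances, per Definition 2 in the excerpt:
    W_ij = -D_ij = -K_ii - K_jj + 2 K_ij, where D_ij is the squared
    distance between points i and j in the feature space induced by K."""
    K = np.asarray(K, dtype=float)
    d = np.diag(K)
    # Squared distances via broadcasting: D_ij = K_ii + K_jj - 2 K_ij.
    D = d[:, None] + d[None, :] - 2.0 * K
    return -D
```

For example, with K = XXᵀ for points (0, 0) and (3, 4), the squared distance between them is 25, so the off-diagonal weight is −25 and the diagonal is 0.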

12 | The political blogosphere and the 2004 U.S. election: divided they blog - Adamic, Glance - 2005

Citation context: ...e left. Given only connectivity information, SPE is able to produce coordinates for each atom that better resemble the true physical coordinates. Figure 5 shows a visualization of 981 political blogs (Adamic & Glance, 2005). The eigenspectrum shown next to each embedding reveals that both spectral embedding and... [figure panel labels: Molecule TR015, Spectral Embedding, Laplacian Eigenmaps, Normalized Laplacian Eig...]

6 | Asymptotic expansions of the k nearest neighbor risk - Snapp, Venkatesh - 1998

Citation context: ...representations of data (without any knowledge of labels or a classification task). Furthermore, these results confirm a recent finding regarding the finite-sample risk of a k-nearest-neighbor classifier (Snapp & Venkatesh, 1998), which shows that the finite-sample risk can be expressed asymptotically as R_m = R_∞ + Σ_{j=2}^{k} c_j n^(−j/d) + O(n^(−(k+1)/d)) as n → ∞, where n is the number of finite samples, d is the dimensionality of the data,...