Results **11 – 17** of **17**

### Glint: An MDS Framework for Costly Distance Functions

A. Kerren and S. Seipel (Editors)


Abstract
Previous algorithms for multidimensional scaling, or MDS, aim for scalable performance as the number of points to lay out increases. However, they either assume that the distance function is cheap to compute, and perform poorly when the distance function is costly, or they leave the precise number of distances to compute as a manual tuning parameter. We present Glint, an MDS algorithm framework that addresses both of these shortcomings. Glint is designed to automatically minimize the total number of distances computed by progressively computing a more and more densely sampled approximation of the distance matrix. We present instantiations of the Glint framework on three different classes of MDS algorithms: force-directed, analytic, and gradient-based. We validate the framework through computational benchmarks on several real-world datasets, and demonstrate substantial performance benefits without sacrificing layout quality.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Human-centered Computing]: Visualization—Visualization systems and tools
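The progressive-densification idea can be sketched in a few lines: compute distance-matrix entries in random order, in rounds of increasing density, and stop once a cheap convergence statistic stabilises. This is an illustrative reduction under stated assumptions, not Glint's actual framework; `costly_distance`, the density schedule, and the mean-based stopping test are all stand-ins for the paper's layout-convergence machinery.

```python
import random
import numpy as np

def costly_distance(a, b):
    # Stand-in for an expensive metric (e.g. an alignment score); here Euclidean.
    return float(np.linalg.norm(a - b))

def progressive_distances(points, densities=(0.1, 0.3, 0.6, 1.0), tol=1e-3, seed=0):
    """Progressively densify a sampled distance matrix (illustrative sketch).

    Entries are computed in random order, in rounds of increasing density;
    densification stops once a cheap summary statistic (the mean computed
    distance) stabilises. Returns the partial matrix (NaN = not computed)
    and the number of distance evaluations performed.
    """
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    random.Random(seed).shuffle(pairs)
    D = np.full((n, n), np.nan)
    np.fill_diagonal(D, 0.0)
    computed, prev_mean = 0, None
    for frac in densities:
        target = int(round(frac * len(pairs)))
        for i, j in pairs[computed:target]:
            D[i, j] = D[j, i] = costly_distance(points[i], points[j])
        computed = target
        mean = float(np.nanmean(D))
        if prev_mean is not None and abs(mean - prev_mean) < tol:
            break  # approximation has stabilised; stop paying for distances
        prev_mean = mean
    return D, computed
```

In the real framework the stopping test compares successive MDS layouts rather than the raw mean, but the control flow (sample, check, densify) is the same.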

### Manuscript of 6/2/2014


Abstract

…Röthlisberger, Lars Hinrichs and Koen Plevoets for helpful comments and suggestions. All remaining errors are, of course, our own.

Lectometry – a methodology that explores how various language-external dimensions shape language usage in an aggregate perspective – is underused in English-language corpus linguistics. Against this backdrop, the paper utilizes state-of-the-art lectometric analysis techniques to investigate lexical variability in written Standard English, as sampled in the well-known Brown family of corpora. We employ the following five-step procedure: (1) draw on large corpora (the British National Corpus, the American National Corpus, and the Blog Authorship Attribution Corpus) and Semantic Vector Space modeling to obtain an unbiased set of n = 303 lexical variables in a bottom-up and semi-automatic fashion; (2) determine the frequency distribution of lexical variant forms in the Brown corpora; (3) rely on the Profile-based Distance Metric to transform the distributional information into distances between the lects represented in the Brown …
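As background for step (3), a profile-based distance can be illustrated as follows: for each concept, a lect's profile is the relative-frequency vector of its variant forms, and two lects are compared by averaging a scaled city-block distance between profiles over their shared concepts. This is a hypothetical minimal sketch, not the authors' implementation; the data layout (concept mapped to variant counts) is an assumption.

```python
def profile(counts):
    # Relative frequencies of the variant forms of one concept in one lect.
    total = sum(counts.values())
    return {variant: c / total for variant, c in counts.items()}

def profile_distance(lect_a, lect_b):
    """Average scaled city-block distance between two lects.

    lect_a and lect_b map concept -> {variant form: count}. For each shared
    concept the profiles are compared; the 0.5 factor scales each per-concept
    distance into [0, 1]. Illustrative sketch only.
    """
    shared = set(lect_a) & set(lect_b)
    total = 0.0
    for concept in shared:
        pa, pb = profile(lect_a[concept]), profile(lect_b[concept])
        variants = set(pa) | set(pb)
        total += 0.5 * sum(abs(pa.get(v, 0.0) - pb.get(v, 0.0)) for v in variants)
    return total / len(shared)
```

For example, a lect using "fall" 90% of the time and one using it 10% of the time differ by 0.8 on that concept.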

### Tackling the Flip Ambiguity in Wireless Sensor Network Localization and Beyond


Abstract
There have been significant advances in range-based numerical methods for sensor network localization over the past decade. However, a few challenges remain to be resolved satisfactorily. These issues include, for example, the flip ambiguity, high levels of noise in distance measurements, and irregular topology of the network in question. Each of them, or a combination, often severely degrades the otherwise good performance of existing methods. Integrating the connectivity constraints is an effective way to deal with these issues. However, there are many such constraints, especially in a large and sparse network, which presents a challenging computational problem for existing methods. In this paper, we propose a convex optimization model based on the Euclidean Distance Matrix (EDM). In our model, the connectivity constraints can be represented simply as lower and upper bounds on the elements of the EDM, resulting in a standard 3-block quadratic conic program, which can be efficiently solved by a recently proposed 3-block alternating direction method of multipliers. Numerical experiments show that the EDM model effectively eliminates the flip ambiguity and remains robust against irregular wireless sensor network topology and high noise levels.

Index Terms—Euclidean distance matrix, range-based node localization, convex optimization, alternating direction method of multipliers, wireless sensor networks.
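The EDM viewpoint builds on a classical fact: given a complete matrix of squared pairwise distances, node coordinates can be recovered (up to rotation and translation) by double centering and an eigendecomposition. The sketch below shows only this textbook recovery step, not the paper's 3-block conic program or its ADMM solver.

```python
import numpy as np

def points_from_edm(D, dim=2):
    """Recover coordinates from a squared-distance matrix via classical MDS.

    D is an n x n matrix of squared pairwise distances. Double centering
    turns D into the Gram matrix of centered coordinates, whose leading
    eigenvectors give the embedding (up to rotation/translation).
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ D @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(G)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # take the `dim` largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

In the localization setting the EDM is incomplete and noisy, which is exactly why the paper optimizes over the matrix (with connectivity bounds on its entries) before a step like this one can be applied.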

### Reconstruction of 3D genome architecture via a two-stage algorithm

RESEARCH ARTICLE, Open Access
### Analyzing Spatial Models of Choice and Judgment with R

Reviewer: Gary Evans, Purdue University


Abstract

Owing to its often elegant mathematics and sophisticated graphical output, latent space modeling has become a popular statistical method. From its origins in psychometrics, it has been effectively applied, in one form or another, to areas as diverse as machine learning, network analysis, and general categorical data analysis. Historically, one of its earliest applications outside of psychometrics was in political science, which remains a rich source of modeling innovation. Such applications and innovations are the subject of this excellent book, which takes readers on a thoroughgoing tour bringing them right to the cutting edge of spatial modeling as it is used with political choice data, and reveals to them the considerable capabilities of R for carrying out such procedures. The book is aimed at practicing political science researchers and, to use the authors’ phrase, “expert [statistical] methodologists.” (Readers and instructors who intend to use the book should be aware that it is written at this advanced level.) For the latter audience in particular, the first chapter provides a brief but lucid discussion of latent space theory as it applies to voting and political choices and, for the former, a well-considered argument for using such models as a standard analytical practice for such data.

### Isotone optimization in R: … Set methods


Abstract

…ing case antitonic regression. The corresponding umbrella term for both cases is monotonic regression (for a compact description, see de Leeuw 2005). Suppose that $Z$ is the finite set $\{z_1, z_2, \ldots, z_n\}$ of the ordered predictors with no ties, i.e., $z_1 < z_2 < \cdots < z_n$. Let $y$ again be the observed response vector and $x = (x_1, \ldots, x_i, \ldots, x_n)$ the unknown response values to be fitted. The least squares problem in monotonic regression can be stated as

$$f(x) = \sum_{i=1}^{n} w_i (y_i - x_i)^2 \to \min \tag{2}$$

which has to be minimized over $x$ under the inequality restrictions $x_1 \le x_2 \le \cdots \le x_n$ for isotonic regression and $x_1 \ge x_2 \ge \cdots \ge x_n$ for the antitonic case. These restrictions prevent the fit from being trivially perfect. The basic theorem on which the isotonic solution of (2) rests is that if $y_i \ge y_{i+1}$, then the fitted values satisfy $\hat{x}_{i+1} = \hat{x}_i$. Correspondingly, the antitonic solution requires $y_i \le y_{i+1}$.
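Problem (2) under the isotonic restrictions is classically solved by the pool-adjacent-violators algorithm (PAVA), which repeatedly merges adjacent blocks whose weighted means violate monotonicity, exactly the pooling implied by the theorem above. A plain-Python sketch (not the package's implementation):

```python
def pava(y, w=None):
    """Weighted isotonic regression by pool-adjacent-violators (PAVA).

    Minimizes sum_i w_i * (y_i - x_i)^2 subject to x_1 <= ... <= x_n.
    For the antitonic case, run PAVA on the reversed sequence.
    """
    if w is None:
        w = [1.0] * len(y)
    # Each block stores [weighted mean, total weight, count of pooled points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool while adjacent block means violate the ordering constraint.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand each block back to one fitted value per pooled point.
    return [m for m, _, c in blocks for _ in range(c)]
```

For instance, `pava([1, 3, 2, 4])` pools the violating pair 3, 2 into their mean 2.5, yielding the nondecreasing fit [1, 2.5, 2.5, 4].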

### On the Procrustean analogue of individual differences scaling (INDSCAL)

2013


Abstract

In this paper, individual differences scaling (INDSCAL) is revisited, considering INDSCAL as being embedded within a hierarchy of individual difference scaling models. We explore the members of this family, distinguishing (i) models, (ii) the role of identification and substantive constraints, (iii) criteria for fitting models and (iv) algorithms to optimise the criteria. Model formulations may be based either on data that are in the form of proximities or on configurational matrices. In its configurational version, individual difference scaling may be formulated as a form of generalized Procrustes analysis. Algorithms are introduced for fitting the new models. An application from sensory evaluation illustrates the performance of the methods and their solutions.
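The configurational formulation rests on the orthogonal Procrustes subproblem: finding the orthogonal transformation that best aligns one configuration with another, which has a closed-form SVD solution. A minimal sketch of that building block (not the paper's INDSCAL-family fitting algorithms, which add weights and further constraints):

```python
import numpy as np

def procrustes_rotate(A, B):
    """Orthogonal matrix Q minimizing ||A Q - B||_F (orthogonal Procrustes).

    Closed-form solution: if A^T B = U S V^T is an SVD, then Q = U V^T.
    Reflections are allowed; restricting to pure rotations needs a sign fix.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```

Generalized Procrustes analysis applies this alignment repeatedly, rotating every individual configuration toward a common consensus until the fit stabilizes.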