Results 1–10 of 221
Dynamic social network analysis using latent space models
 SIGKDD Explorations, Special Issue on Link Mining
Abstract

Cited by 118 (6 self)
This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses, but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (subquadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low-dimensional KD-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for nonlinear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities, which indicate near-linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-authorship data.
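The two ingredients this abstract names, link probabilities that decay with latent distance and a penalty on large moves between time steps, can be sketched as below. The logistic kernel and the Gaussian random-walk prior are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def link_probability(x_i, x_j, radius=1.0):
    """Probability of an observed link, decaying with latent Euclidean
    distance (a logistic stand-in for the paper's kernel)."""
    d = np.linalg.norm(x_i - x_j)
    return 1.0 / (1.0 + np.exp(d - radius))

def drift_log_prior(x_prev, x_curr, sigma=0.1):
    """Gaussian random-walk log-prior: large moves in latent space
    between consecutive time steps are improbable."""
    step = x_curr - x_prev
    return -0.5 * float(np.dot(step, step)) / sigma**2

# Two entities in a 2-D latent space at consecutive time steps.
x_a_t0 = np.array([0.0, 0.0])
x_b_t0 = np.array([0.5, 0.0])
x_a_t1 = x_a_t0 + rng.normal(scale=0.1, size=2)   # small, plausible drift

p_near = link_probability(x_a_t0, x_b_t0)          # close pair: link likely
p_far = link_probability(x_a_t0, np.array([5.0, 0.0]))
```

Close pairs get higher link probability, and a small drift has a much larger log-prior than a jump across the latent space.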
A Multistage Representation of the Wiener Filter Based on Orthogonal Projections
 IEEE Transactions on Information Theory
, 1998
Abstract

Cited by 102 (5 self)
The Wiener filter is analyzed for stationary complex Gaussian signals from an information-theoretic point of view. A dual-port analysis of the Wiener filter leads to a decomposition based on orthogonal projections and results in a new multistage method for implementing the Wiener filter using a nested chain of scalar Wiener filters. This new representation of the Wiener filter provides the capability to perform an information-theoretic analysis of previous, basis-dependent, reduced-rank Wiener filters. This analysis demonstrates that the recently introduced cross-spectral metric is optimal in the sense that it maximizes mutual information between the observed and desired processes. A new reduced-rank Wiener filter is developed based on this new structure, which evolves a basis using successive projections of the desired signal onto orthogonal, lower-dimensional subspaces. The performance is evaluated using a comparative computer analysis model, and it is demonstrated that the low-complexity multistage reduced-rank Wiener filter is capable of outperforming the more complex eigendecomposition-based methods.
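The nested chain of scalar filters is beyond a short sketch, but the setting can be made concrete: the classical full-rank Wiener solution w = R⁻¹p, and an eigendecomposition-based reduced-rank filter of the kind the multistage method is compared against. The synthetic estimation problem below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic problem: estimate a scalar desired signal d from an
# N-dimensional correlated observation x, over T samples.
N, T = 8, 5000
A = rng.normal(size=(N, N))
x = A @ rng.normal(size=(N, T))              # correlated observations
w_true = rng.normal(size=N)
d = w_true @ x + 0.1 * rng.normal(size=T)    # desired signal plus noise

R = x @ x.T / T                              # observation covariance estimate
p = x @ d / T                                # cross-correlation estimate

w_full = np.linalg.solve(R, p)               # full-rank Wiener filter

def reduced_rank_wiener(R, p, k):
    """Basis-dependent reduced-rank filter: solve the Wiener equations
    restricted to the span of the k principal eigenvectors of R."""
    vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    Uk = vecs[:, -k:]                        # top-k eigenvectors
    return Uk @ np.linalg.solve(Uk.T @ R @ Uk, Uk.T @ p)

def mse(w):
    """Empirical mean squared estimation error of filter w."""
    return float(np.mean((d - w @ x) ** 2))
```

Since w_full minimizes the empirical MSE over all linear filters, any rank-constrained filter can only do as well or worse on the same data.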
Graph Drawing by High-Dimensional Embedding
 In GD02, LNCS
, 2002
Abstract

Cited by 73 (10 self)
We present a novel approach to the aesthetic drawing of undirected graphs. The method has two phases: first embed the graph in a very high dimension and then project it into the 2-D plane using PCA. Experiments we have carried out show the ability of the method to draw graphs of 10^5 nodes in a few seconds. The new method appears to have several advantages over classical methods, including a significantly better running time, a useful inherent capability to exhibit the graph in various dimensions, and an effective means for interactive exploration of large graphs.
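The two phases can be sketched directly: embed each node by its graph-theoretic distances to a few pivot nodes, then project to 2-D with PCA. Random pivot selection and the small pivot count here are simplifying assumptions for illustration:

```python
import numpy as np
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def high_dim_embed_layout(adj, m=4, seed=0):
    """Phase 1: embed each node as its vector of graph distances to m
    pivots.  Phase 2: center the m-dimensional coordinates and project
    onto the top two principal components."""
    nodes = sorted(adj)
    pivots = [int(p) for p in
              np.random.default_rng(seed).choice(nodes, size=m, replace=False)]
    dists = {p: bfs_distances(adj, p) for p in pivots}     # one BFS per pivot
    X = np.array([[dists[p][v] for p in pivots] for v in nodes], dtype=float)
    X -= X.mean(axis=0)                                    # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return dict(zip(nodes, X @ Vt[:2].T))                  # node -> 2-D point

# A 6-cycle: 0-1-2-3-4-5-0.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
layout = high_dim_embed_layout(cycle)
```

Each node ends up with a 2-D position; on large graphs the cost is dominated by the m BFS passes plus one small PCA.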
Measurements and models for radio path loss and penetration loss in and around homes and trees at 5.85 GHz
 IEEE Trans. Commun
, 1998
Abstract

Cited by 68 (1 self)
Abstract—This paper contains measured data and empirical models for 5.85-GHz radio propagation path loss in and around residential areas for the newly allocated U.S. National Information Infrastructure (NII) band. Three homes and two stands of trees were studied for outdoor path loss, tree loss, and house penetration loss in a narrowband measurement campaign that included 270 local-area path loss measurements and over 276,000 instantaneous power measurements. Outdoor transmitters at a height of 5.5 m were placed at distances between 30 and 210 m from the homes, to simulate typical neighborhood base stations mounted atop utility poles. All path loss data are presented graphically and coupled with site-specific information. We develop measurement-based path loss models for propagation prediction. The measurements and models may aid the development of futuristic outdoor-to-indoor residential communication systems for wireless internet access, wireless cable distribution, and wireless local loops. Index Terms—Residential wireless communications, in-building propagation, building penetration, path loss.
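Measurement-based path loss models of this kind are commonly fit with the standard log-distance form PL(d) = PL(d0) + 10·n·log10(d/d0); the sketch below fits that form to synthetic measurements over the 30–210 m range quoted above. The exponent and reference loss used here are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

def fit_log_distance_model(d, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10*n*log10(d/d0).
    Returns (PL(d0) in dB, path loss exponent n)."""
    x = 10.0 * np.log10(np.asarray(d, dtype=float) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db, dtype=float), rcond=None)
    return float(pl0), float(n)

# Synthetic local-area measurements: exponent n = 3, PL(d0) = 47 dB,
# with 2-dB lognormal-style scatter about the model.
rng = np.random.default_rng(2)
d = np.linspace(30.0, 210.0, 50)          # 30-210 m transmitter distances
pl = 47.0 + 30.0 * np.log10(d) + rng.normal(scale=2.0, size=d.size)

pl0_hat, n_hat = fit_log_distance_model(d, pl)
```

The fit recovers the exponent and reference loss from noisy data; with real measurements the same regression yields the site-specific model parameters.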
Distributed Clustering Using Collective Principal Component Analysis
 Knowledge and Information Systems
, 1999
Abstract

Cited by 62 (9 self)
This paper considers distributed clustering of high-dimensional heterogeneous data using a distributed Principal Component Analysis (PCA) technique called the Collective PCA. It presents the Collective PCA technique, which can be used independently of the clustering application, and shows a way to integrate the Collective PCA with a given off-the-shelf clustering algorithm in order to develop a distributed clustering technique. It also presents experimental results using different test data sets, including an application to web mining.
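A minimal sketch of the idea, under simplifying assumptions (a homogeneous row partition across two sites and a fixed component count; the paper's protocol for heterogeneous data is more involved): each site transmits only its local principal components, and a coordinator combines them into a shared subspace in which any off-the-shelf clustering algorithm can then run.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_pcs(X, k):
    """Top-k principal components (as rows) of one site's centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

# Two sites hold disjoint rows of the same dataset; the dominant variance
# lives in the first three coordinates, so local and global PCs agree.
scales = np.array([6.0, 5.0, 4.0, 1, 1, 1, 1, 1, 1, 1])
X = rng.normal(size=(200, 10)) * scales
sites = [X[:100], X[100:]]

# Each site transmits only k component vectors, not its raw rows; the
# coordinator stacks them and orthonormalizes a shared basis Q.
k = 3
stacked = np.vstack([local_pcs(S, k) for S in sites])
Q, _ = np.linalg.qr(stacked.T)            # 10 x 2k shared subspace basis
Z = (X - X.mean(axis=0)) @ Q              # low-dim data, ready for clustering

# Energy of the global top-k components captured by the shared subspace
# (equals sqrt(k) when the subspace is captured exactly).
overlap = float(np.linalg.norm(local_pcs(X, k) @ Q))
```

The communication cost is k vectors per site rather than the full data, which is the point of the collective approach.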
Updating a Rank-Revealing ULV Decomposition
, 1991
Abstract

Cited by 56 (4 self)
A ULV decomposition of a matrix A of order n is a decomposition of the form A = ULV^H, where U and V are orthogonal matrices and L is a lower triangular matrix. When A is approximately of rank k, the decomposition is rank-revealing if the last n − k rows of L are small. This paper presents algorithms for updating a rank-revealing ULV decomposition. The algorithms run in O(n²) time and can be implemented on a linear array of processors to run in O(n) time.
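The rank-revealing property can be illustrated numerically. One valid (if expensive) ULV decomposition comes from the SVD, since a diagonal middle factor is trivially lower triangular; the updating algorithms above exist precisely to avoid recomputing such a factorization from scratch. The construction below is an illustration, not the paper's updating procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

# A matrix of order n that is numerically of rank k = 3.
n, k = 6, 3
A = (rng.normal(size=(n, k)) @ rng.normal(size=(k, n))
     + 1e-8 * rng.normal(size=(n, n)))

# SVD-based ULV: A = U L V^H with L = diag(s) lower triangular.
U, s, Vh = np.linalg.svd(A)
L = np.diag(s)

# Rank-revealing: the last n - k rows of L collect only the tiny
# trailing singular values of A.
trailing = np.linalg.norm(L[k:, :])
```

A rank-revealing updating algorithm maintains this structure in O(n²) per update as rows are appended, instead of the SVD's O(n³) recomputation.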
Perturbations of orthogonal polynomials with periodic recursion coefficients
, 2007
Abstract

Cited by 47 (17 self)
We extend the results of Denisov–Rakhmanov, Szegő–Shohat–Nevai, and Killip–Simon from asymptotically constant orthogonal polynomials on the real line (OPRL) and unit circle (OPUC) to asymptotically periodic OPRL and OPUC. The key tool is a characterization of the isospectral torus that is well adapted to the study of perturbations.
Gaussian mixture clustering and imputation of microarray data
 Bioinformatics
, 2004
Abstract

Cited by 42 (1 self)
Motivation: In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Results: Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of misclustered genes measures the difference between clustering with true values and clustering with imputed values; it examines the bias introduced by imputation into clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time-series (uncorrelated) data sets. Availability: Matlab code is available on request from the authors. Contact:
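The first evaluation metric is easy to make concrete: mask known entries, impute them, and compute the root mean squared error against the truth. The sketch below uses simple row-mean imputation as a baseline comparator (one of the kinds of method the Gaussian-mixture approach is reported to beat); the synthetic expression matrix is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def rmse(true_vals, imputed_vals):
    """Root mean squared error between true and imputed entries."""
    return float(np.sqrt(np.mean((true_vals - imputed_vals) ** 2)))

# Synthetic expression matrix (genes x arrays) with correlated columns:
# each gene has a baseline level shared across arrays.
genes, arrays = 500, 8
base = rng.normal(size=(genes, 1))
X = base + 0.3 * rng.normal(size=(genes, arrays))

# Blank out 5% of entries, as chip blemishes would.
mask = rng.random(X.shape) < 0.05
X_obs = np.where(mask, np.nan, X)

# Baseline imputation: fill each gene's missing entries with the mean of
# its observed entries (its row mean).
row_means = np.nanmean(X_obs, axis=1, keepdims=True)
X_imp = np.where(mask, row_means, X_obs)

err = rmse(X[mask], X_imp[mask])
```

Because the columns are correlated through the shared baseline, even this crude imputation achieves an RMSE well below the raw entry scale; stronger methods are judged by how much further they reduce it, and by the second metric, how few genes change cluster.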
A Parallel Implementation of the Nonsymmetric QR Algorithm for Distributed Memory Architectures
 SIAM J. SCI. COMPUT
, 2002
Abstract

Cited by 38 (3 self)
One approach to solving the nonsymmetric eigenvalue problem in parallel is to parallelize the QR algorithm. Not long ago, this was widely considered to be a hopeless task. Recent efforts have led to significant advances, although the methods proposed up to now have suffered from scalability problems. This paper discusses an approach to parallelizing the QR algorithm that greatly improves scalability. A theoretical analysis indicates that the algorithm is ultimately not scalable, but the non-scalability does not become evident until the matrix dimension is enormous. Experiments on the Intel Paragon system, the IBM SP2 supercomputer, the SGI Origin 2000, and the Intel ASCI Option Red supercomputer are reported.
Adaptive Multiprecision Path Tracking
Abstract

Cited by 34 (22 self)
This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or rerun paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.
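The reactive mode can be sketched on a toy univariate homotopy using mpmath's adjustable precision. The homotopy H(x, t) = x² − (1 + 3t), the zero-order predictor, and the precision-doubling schedule are illustrative assumptions, not the article's algorithm; the point is the retry-at-higher-precision loop:

```python
from mpmath import mp, mpf

def corrector(t, x, tol, max_iter=10):
    """Newton correction onto the toy homotopy H(x, t) = x^2 - (1 + 3t) = 0
    at fixed t; returns the corrected x, or None on convergence failure."""
    for _ in range(max_iter):
        h = x * x - (1 + 3 * t)
        x = x - h / (2 * x)            # Newton step: x -= H / (dH/dx)
        if abs(h) < tol:
            return x
    return None                        # residual floor not reached: step fails

def track(steps=50, tol=mpf("1e-20")):
    """Reactive adaptive precision: start at double-like precision and,
    whenever the corrector fails, double the working digits and retry
    the same step rather than rerunning the whole path."""
    mp.dps = 15
    x, i = mpf(1), 1                   # known start: root of H(x, 0) = x^2 - 1
    while i <= steps:
        x_new = corrector(mpf(i) / steps, x, tol)   # zero-order predictor
        if x_new is None:
            mp.dps *= 2                # step failed: raise precision, retry
            continue
        x, i = x_new, i + 1
    return x, mp.dps

x_end, final_dps = track()             # path ends at the root of x^2 - 4
```

With a tight tolerance, 15-digit arithmetic cannot push the residual below tol, so the tracker raises its precision once and then completes the path, landing on x = 2 at t = 1; the proactive mode described above would instead choose the precision up front from error estimates.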