Results 1–7 of 7
Simultaneous Unsupervised Learning of Disparate Clusterings
"... Most clustering algorithms produce a single clustering for a given data set even when the data can be clustered naturally in multiple ways. In this paper, we address the difficult problem of uncovering disparate clusterings from the data in a totally unsupervised manner. We propose two new approache ..."
Abstract

Cited by 23 (0 self)
Most clustering algorithms produce a single clustering for a given data set even when the data can be clustered naturally in multiple ways. In this paper, we address the difficult problem of uncovering disparate clusterings from the data in a totally unsupervised manner. We propose two new approaches for this problem. In the first approach we aim to find good clusterings of the data that are also decorrelated with one another. To this end, we give a new and tractable characterization of decorrelation between clusterings, and present an objective function to capture it. We provide an iterative “decorrelated” k-means type algorithm to minimize this objective function. In the second approach, we model the data as a sum of mixtures and associate each mixture with a clustering. This approach leads us to the problem of learning a convolution of mixture distributions. Though the latter problem can be formulated as one of factorial learning [8, 13, 16], the existing formulations and methods do not perform well on many real high-dimensional data sets. We propose a new regularized factorial learning framework that is more suitable for capturing the notion of disparate clusterings in modern, high-dimensional data sets. The resulting algorithm does well in uncovering multiple clusterings, and is much improved over existing methods. We evaluate our methods on two real-world data sets: a music data set from the text mining domain, and a portrait data set from the computer vision domain. Our methods achieve a substantially higher accuracy than existing factorial learning as well as traditional clustering algorithms.
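The abstract's notion of disparate clusterings can be made concrete with a toy example: the same points admit two equally natural k-means partitions depending on which feature is clustered. The sketch below is illustrative only and does not reproduce the paper's decorrelated objective; `lloyd_1d` is a hypothetical helper running plain one-dimensional Lloyd iterations from fixed initial centers.

```python
import numpy as np

def lloyd_1d(values, centers, iters=10):
    """Minimal 1-D Lloyd's k-means with fixed initial centers."""
    centers = np.asarray(centers, dtype=float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned values
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Four points on a 2x2 grid: each axis induces a different natural 2-clustering.
X = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
by_x = lloyd_1d(X[:, 0], [0.0, 10.0])  # groups points 0,1 vs 2,3
by_y = lloyd_1d(X[:, 1], [0.0, 10.0])  # groups points 0,2 vs 1,3
print(by_x.tolist(), by_y.tolist())
```

Both partitions are perfectly tight, yet they disagree on every pair of points, which is exactly the situation the paper's unsupervised methods aim to uncover.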
Modified Descent Methods for Solving the Monotone Variational Inequality Problem
 Operations Research Letters
, 1998
"... Recently, Fukushima proposed a differentiable optimization framework for solving strictly monotone and continuously differentiable variational inequalities. The main result of this paper is to show that Fukushima's results can be extended to monotone (not necessarily strictly monotone) and Lipschitz ..."
Abstract

Cited by 11 (4 self)
Recently, Fukushima proposed a differentiable optimization framework for solving strictly monotone and continuously differentiable variational inequalities. The main result of this paper is to show that Fukushima's results can be extended to monotone (not necessarily strictly monotone) and Lipschitz continuous (not necessarily continuously differentiable) variational inequalities, if one is willing to modify slightly the basic algorithmic scheme. The modification applies also to a general descent scheme introduced by Zhu and Marcotte. Keywords: variational inequalities, descent methods, projection, global convergence. 1 Introduction. Let C be a nonempty, closed and convex subset of R^n and let F be a mapping from R^n into R^n. We consider the variational inequality problem (VIP): Find x* ∈ C such that ⟨F(x*), x − x*⟩ ≥ 0 for all x in C, (1) where ⟨·, ·⟩ denotes the standard Euclidean inner product in R^n. Traditionally, algorithms for solving variati...
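As background for the setting above, here is a minimal sketch of the classic projection method for a VIP, iterating x_{k+1} = P_C(x_k − γ F(x_k)). This is not the paper's modified descent scheme; the box constraint set and the strongly monotone, Lipschitz F below are illustrative assumptions under which the plain projection iteration converges.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Projection P_C onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def vip_projection(F, x0, step=0.5, iters=200):
    """Basic projection method: x_{k+1} = P_C(x_k - step * F(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - step * F(x))
    return x

a = np.array([2.0, -1.0, 0.5])
F = lambda x: x - a   # gradient of 0.5*||x - a||^2: strongly monotone, 1-Lipschitz
x_star = vip_projection(F, np.zeros(3))
print(np.round(x_star, 6))
```

For this F, the VI solution is the projection of a onto the box, here (1, 0, 0.5), and the iteration contracts toward it geometrically.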
Time and cost tradeoff for distributed data processing
 Computers & Industrial Engineering
, 1989
"... AbstractAn important design issue in distributed data processing systems is to determine optimal data distribution. The problem requires a tradeoff between time and cost. For instance, quick response time conflicts with low cost. The paper addresses the data distribution problem in this conflictin ..."
Abstract

Cited by 3 (1 self)
An important design issue in distributed data processing systems is to determine the optimal data distribution. The problem requires a tradeoff between time and cost; for instance, quick response time conflicts with low cost. The paper addresses the data distribution problem in this conflicting environment. A formulation of the problem as a nonlinear program is developed. An algorithm employing a simple search procedure is presented, which gives an optimal data distribution. An example is solved to illustrate the method.
Iterated Grid Search Algorithm on Unimodal Criteria
 Blacksburg, Virginia
, 1997
"... The unimodality of a function seems a simple concept. But in the Euclidean space Rm,m = 3,4,..., it is not easy to define. We have an easy tool to find the minimum point of a unimodal function. The goal of this project is to formalize and support distinctive strategies that typically guarantee conve ..."
Abstract
The unimodality of a function seems a simple concept, but in the Euclidean space R^m, m = 3, 4, …, it is not easy to define. We have an easy tool to find the minimum point of a unimodal function. The goal of this project is to formalize and support distinctive strategies that typically guarantee convergence. Support is given both by analytic arguments and by a simulation study. Application is envisioned in low-dimensional but nontrivial problems. The convergence of the proposed iterated grid search algorithm is presented along with the results of particular application studies. It has been recognized that derivative methods, such as Newton-type methods, are not entirely satisfactory, so a variety of other tools are being considered as alternatives. Many other tools have been rejected because of apparent manipulative difficulties. In our current research, we focus on a simple algorithm with guaranteed convergence for unimodal functions, avoiding possible chaotic behavior. Furthermore, in case the loss function to be optimized is not unimodal, we suggest a weaker condition, almost (noisy) unimodality, under which the iterated grid search finds an estimated optimum point. Subject Classification: statistical computing, nonlinear estimation, statistical optimization, statistical simulation
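A minimal sketch of the iterated-grid-search idea on a one-dimensional unimodal criterion: evaluate the function on a coarse grid, then shrink the search interval around the best grid point and repeat. For a unimodal function the minimizer always lies within one grid step of the best point, which is what guarantees the bracket is valid. The function name and zoom schedule below are illustrative, not the project's exact algorithm.

```python
def iterated_grid_search(f, lo, hi, points=11, rounds=20):
    """Refine a bracketing interval around the best grid point each round."""
    for _ in range(rounds):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=f)
        # for a unimodal f, the minimizer lies within one step of `best`
        lo, hi = max(lo, best - step), min(hi, best + step)
    return 0.5 * (lo + hi)

x_min = iterated_grid_search(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
print(round(x_min, 6))
```

Each round shrinks the interval by a constant factor, so convergence is geometric without any derivative information, matching the abstract's motivation for avoiding Newton-type methods.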
MATRIX BALANCING PROBLEM AND BINARY AHP
, 2006
"... Abstract A matrix balancing problem and an eigenvalue problem are transformed into two minimumnorm point problems whose difference is only a norm. The matrix balancing problem is solved by scaling algorithms that are as simple as the power method of the eigenvalue problem. This study gives a proof o ..."
Abstract
A matrix balancing problem and an eigenvalue problem are transformed into two minimum-norm point problems whose only difference is the norm. The matrix balancing problem is solved by scaling algorithms that are as simple as the power method for the eigenvalue problem. This study gives a proof of global convergence for the scaling algorithms and applies the algorithm to the Analytic Hierarchy Process (AHP), which traditionally derives priority weights from pairwise comparison values by the eigenvalue method (EM). The scaling algorithms provide the minimum χ² estimate from pairwise comparison values. The estimate has properties of priority weights, such as right-left symmetry and robust ranking, that are not guaranteed by the EM.
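The paper's specific scaling scheme and minimum χ² objective are not reproduced here; as a flavor of the row/column scaling iterations such methods are built from, here is a classic Sinkhorn-style balancing sketch that drives a positive matrix toward doubly stochastic form.

```python
import numpy as np

def sinkhorn_balance(A, iters=500):
    """Alternately rescale rows and columns of a positive matrix
    until it is (approximately) doubly stochastic."""
    A = np.asarray(A, dtype=float).copy()
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)  # make row sums 1
        A /= A.sum(axis=0, keepdims=True)  # make column sums 1
    return A

B = sinkhorn_balance(np.array([[1.0, 2.0], [3.0, 4.0]]))
print(np.round(B.sum(axis=0), 6), np.round(B.sum(axis=1), 6))
```

Each step is as cheap as a power-method iteration, which is the comparison the abstract draws, and for strictly positive matrices the alternation converges globally.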
Weighted sum-rate maximization for multi-user SIMO multiple access channels in cognitive radio networks
Peter He, Lian Zhao, and Jianhua Lu
Open Access
Abstract
In this article, an efficient distributed and parallel algorithm is proposed to maximize the sum-rate and optimize the input distribution policy for the multi-user single-input multiple-output multiple access channel (MU-SIMO MAC) system with concurrent access within a cognitive radio (CR) network. Single input means that every user has a single antenna; multiple output means that the base station(s) has multiple antennas. The main features are: (i) the power distribution for the users is updated using variable scale factors which effectively and efficiently maximize the objective function at each iteration; (ii) distributed and parallel computation is employed to expedite convergence of the proposed distributed algorithm; and (iii) a novel water-filling with mixed constraints is investigated and used as a fundamental building block of the proposed algorithm. Because it fully exploits the structure of the proposed model, the algorithm converges quickly. Numerical results verify that the proposed algorithm is effective and fast to converge. Using the proposed approach, for the simulated range, the required number of iterations for convergence is two, and this number is not sensitive to an increase in the number of users. This feature is quite desirable for large-scale systems with dense active users. In addition, it is also worth noting that the proposed algorithm is a monotonic feasible operator across iterations, so a stopping criterion for the computation can easily be set up.
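The article's water-filling block has mixed constraints that are not reproduced here; as background, this is a sketch of classic single-budget water-filling, p_i = max(0, μ − 1/g_i), with the water level μ found by bisection on the power budget. The channel gains and budget below are made-up values.

```python
import numpy as np

def waterfill(gains, total_power, tol=1e-10):
    """Allocate `total_power` across channels with gains g_i to maximize
    sum(log(1 + p_i * g_i)); solution is p_i = max(0, mu - 1/g_i)."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, inv.max() + total_power  # bracket for the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = np.maximum(0.0, mu - inv).sum()
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

p = waterfill([1.0, 0.5, 0.1], total_power=3.0)
print(np.round(p, 4))  # strong channels get more power; the weakest gets none
```

Here the weakest channel (gain 0.1) falls below the water level and receives zero power, while the budget is split 2:1 between the two stronger channels.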
A DUAL APPROACH TO SOLVING LINEAR INEQUALITIES BY UNCONSTRAINED QUADRATIC OPTIMIZATION
"... The method used to obtain the minimumnorm solution of a largescale system of linear inequalities proceeds by solving a finite number of smaller unconstrained subproblems. Such a subproblem has the form of optimizing a quadratic function, which is easily solved. The case when the vector b is perturb ..."
Abstract
The method used to obtain the minimum-norm solution of a large-scale system of linear inequalities proceeds by solving a finite number of smaller unconstrained subproblems. Such a subproblem has the form of optimizing a quadratic function, which is easily solved. The case when the vector b is perturbed is also included.
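The paper's exact dual method is not reproduced here; a scheme in the same spirit is Hildreth's dual coordinate ascent, which finds the minimum-norm point of {x : Ax ≤ b} by solving a closed-form one-dimensional quadratic per constraint. The two-constraint example below is made up for illustration.

```python
import numpy as np

def hildreth_min_norm(A, b, sweeps=200):
    """Dual coordinate ascent for: min 0.5*||x||^2  s.t.  A x <= b.
    Each dual update solves a tiny unconstrained quadratic in closed
    form, then clips the multiplier at zero."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    lam = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(b)):
            x = -A.T @ lam                          # primal iterate via duality
            step = (A[i] @ x - b[i]) / (A[i] @ A[i])
            lam[i] = max(0.0, lam[i] + step)        # coordinate-wise dual ascent
    return -A.T @ lam

A = np.array([[-1.0, 0.0], [-1.0, -1.0]])  # encodes x1 >= 1 and x1 + x2 >= 2
b = np.array([-1.0, -2.0])
x = hildreth_min_norm(A, b)
print(np.round(x, 4))  # minimum-norm feasible point
```

For this system the minimum-norm feasible point is (1, 1): only the second constraint is active at the solution, so its multiplier carries all the weight while the first decays to zero.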