Results 1–10 of 174
Strongly Typed Genetic Programming
 Evolutionary Computation
, 1994
"... Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of ..."
Abstract

Cited by 233 (1 self)
 Add to MetaCart
Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e., that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type, with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to generate only syntactically correct parse trees. Key concepts for STGP are generic functions, which are not true strongly typed functions but rather templates for classes of such functions, and generic data types, which are analogous. To illustrate STGP, we present four examples involving vector/matrix manip...
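The core STGP idea, that typed function and terminal sets constrain tree growth so only type-correct programs are ever created, can be sketched in a few lines. The primitive set below (`add_v`, `scale`, `dot`, and the terminals `x`, `v`) is a hypothetical example, not taken from the paper:

```python
import random

# Hypothetical typed primitive set in the spirit of STGP: each function
# declares its argument types and return type, so tree growth can only
# produce syntactically (type-) correct parse trees.
FUNCS = {
    "add_v": (("vec", "vec"), "vec"),   # vector addition
    "scale": (("num", "vec"), "vec"),   # scalar times vector
    "dot":   (("vec", "vec"), "num"),   # inner product
}
TERMINALS = {"num": ["x"], "vec": ["v"]}

def grow(return_type, depth, rng):
    """Grow a random parse tree whose root returns `return_type`."""
    options = [f for f, (_, ret) in FUNCS.items() if ret == return_type]
    if depth == 0 or not options or rng.random() < 0.3:
        return rng.choice(TERMINALS[return_type])
    f = rng.choice(options)
    arg_types, _ = FUNCS[f]
    return (f, *(grow(t, depth - 1, rng) for t in arg_types))

def tree_type(tree):
    """Type-check a tree; raises if any argument type mismatches."""
    if isinstance(tree, str):
        return "num" if tree in TERMINALS["num"] else "vec"
    f, *args = tree
    arg_types, ret = FUNCS[f]
    assert tuple(tree_type(a) for a in args) == arg_types
    return ret
```

Because `grow` only ever picks functions whose return type matches the requested type, crossover and mutation restricted the same way preserve type correctness, which is the point of the STGP construction.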
Sparse signal reconstruction from limited data using FOCUSS: A reweighted minimum norm algorithm
 IEEE Trans. Signal Processing
, 1997
"... Abstract—We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), t ..."
Abstract

Cited by 218 (12 self)
 Add to MetaCart
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
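The reweighted minimum-norm iteration at the heart of FOCUSS is compact enough to sketch. The following is a simplified basic form (without the extensions and regularization variants the paper discusses), assuming a real underdetermined system A x = b:

```python
import numpy as np

def focuss(A, b, iters=20):
    """Basic FOCUSS sketch: start from the minimum-norm solution, then
    refine it with weighted minimum-norm steps whose weights come from
    the previous iterate, driving the estimate toward a sparse solution."""
    x = np.linalg.pinv(A) @ b              # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x))             # weights from preceding solution
        x = W @ np.linalg.pinv(A @ W) @ b  # weighted minimum-norm step
    return x
```

On a toy 2x3 system the iteration concentrates energy on the dominant coordinate of the minimum-norm estimate, which is exactly the "localized energy" behavior the abstract describes.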
Numerical solution of saddle point problems
 ACTA NUMERICA
, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract

Cited by 180 (30 self)
 Add to MetaCart
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
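For small dense problems, the block structure can be exploited directly. A minimal sketch of a Schur-complement (segregated) solve, one family of methods such surveys cover, assuming A is symmetric positive definite and B has full row rank; the survey's emphasis, by contrast, is on iterative methods for large sparse systems:

```python
import numpy as np

def solve_saddle_point(A, B, f, g):
    """Solve [[A, B^T], [B, 0]] [x; y] = [f; g] by eliminating x:
    x = A^{-1}(f - B^T y), so (B A^{-1} B^T) y = B A^{-1} f - g."""
    Ainv_f = np.linalg.solve(A, f)
    Ainv_Bt = np.linalg.solve(A, B.T)
    S = B @ Ainv_Bt                        # Schur complement B A^{-1} B^T
    y = np.linalg.solve(S, B @ Ainv_f - g)
    x = Ainv_f - Ainv_Bt @ y
    return x, y
```

The indefiniteness mentioned in the abstract is confined to the outer block system; both solves here involve definite matrices, which is why segregated approaches are attractive when A is easy to invert.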
Random-walk computation of similarities between nodes of a graph, with application to collaborative recommendation
 IEEE Transactions on Knowledge and Data Engineering
, 2006
"... Abstract—This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markovchain model of random walk through the database. More precisely, we compute quantities (the average comm ..."
Abstract

Cited by 116 (14 self)
 Add to MetaCart
This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the “length” of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation of the “Fiedler vector,” widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called “statistical relational learning” framework, could also be used to compute document or word similarities and, more generally, could be applied to machine-learning and pattern-recognition tasks involving a relational database. Index Terms: Graph analysis, graph and database mining, collaborative recommendation, graph kernels, spectral clustering, Fiedler vector, proximity measures, statistical relational learning.
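For small graphs these quantities are direct to compute: the average commute time between nodes i and j follows from the Laplacian pseudoinverse via the standard identity n(i, j) = V * (Lp[i,i] + Lp[j,j] - 2 Lp[i,j]), where V is the graph volume (sum of degrees). A sketch:

```python
import numpy as np

def commute_times(W):
    """Average commute times of a weighted undirected graph with
    adjacency matrix W, from the pseudoinverse of its Laplacian."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # combinatorial Laplacian
    Lp = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse
    dg = np.diag(Lp)
    # n(i, j) = volume * (Lp_ii + Lp_jj - 2 Lp_ij)
    return d.sum() * (dg[:, None] + dg[None, :] - 2.0 * Lp)
```

The entrywise square root of this matrix is the Euclidean commute-time distance the abstract refers to; a sanity check is that the commute time across a single unit-weight edge equals 2.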
The Principal Components Analysis of a Graph, and its Relationships to Spectral Clustering
 Proceedings of the 15th European Conference on Machine Learning (ECML 2004). Lecture Notes in Artificial Intelligence
, 2004
"... This work presents a novel procedure for computing (1) distances between nodes of a weighted, undirected, graph, called the Euclidean Commute Time Distance (ECTD), and (2) a subspace projection of the nodes of the graph that preserves as much variance as possible, in terms of the ECTD  a princi ..."
Abstract

Cited by 66 (15 self)
 Add to MetaCart
This work presents a novel procedure for computing (1) distances between nodes of a weighted, undirected graph, called the Euclidean Commute Time Distance (ECTD), and (2) a subspace projection of the nodes of the graph that preserves as much variance as possible, in terms of the ECTD, i.e., a principal components analysis of the graph. It is based on a Markov-chain model of random walk through the graph. The model assigns transition probabilities to the links between nodes, so that a random walker can jump from node to node. A quantity, called the average commute time, computes the average time taken by a random walker for reaching node j when starting from node i, and coming back to node i. The square root of this quantity, the ECTD, is a distance measure between any two nodes, and has the nice property of decreasing when the number of paths connecting two nodes increases and when the "length" of any path decreases. The ECTD can be computed from the pseudoinverse of the Laplacian matrix of the graph, which is a kernel. We finally define the Principal Components Analysis (PCA) of a graph as the subspace projection that preserves as much variance as possible, in terms of the ECTD. This graph PCA has some interesting links with spectral graph theory, in particular spectral clustering.
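Under the same assumptions (connected, weighted, undirected graph), the graph PCA can be sketched as an eigendecomposition of the Laplacian's pseudoinverse: projecting onto the k dominant eigenvectors, suitably scaled, gives node coordinates whose Euclidean distances approximate the ECTD, and reproduce it exactly when all eigenpairs are kept.

```python
import numpy as np

def graph_pca(W, k):
    """Coordinates from the k dominant eigenpairs of L's pseudoinverse,
    scaled so that full-dimensional Euclidean distances equal the ECTD."""
    d = W.sum(axis=1)
    Lp = np.linalg.pinv(np.diag(d) - W)
    vals, vecs = np.linalg.eigh(Lp)              # ascending eigenvalues
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0.0) * d.sum())
```

For a single unit-weight edge the commute time is 2, so the one-dimensional embedding places the two nodes at Euclidean distance sqrt(2), matching the ECTD.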
Dynamic Core Provisioning for Quantitative Differentiated Services
 IEEE/ACM Transactions on Networking
, 2004
"... Abstract — Efficient network provisioning mechanisms that support service differentiation and automatic capacity dimensioning are essential to the realization of the Differentiated Services (DiffServ) Internet. Building on our prior work on edge provisioning, we propose a set of efficient dynamic no ..."
Abstract

Cited by 33 (2 self)
 Add to MetaCart
Efficient network provisioning mechanisms that support service differentiation and automatic capacity dimensioning are essential to the realization of the Differentiated Services (DiffServ) Internet. Building on our prior work on edge provisioning, we propose a set of efficient dynamic node and core provisioning algorithms for interior nodes and core networks, respectively. The node provisioning algorithm prevents transient violations of service level agreements by predicting the onset of service level violations based on a multiclass virtual queue measurement technique, and by automatically adjusting the service weights of weighted fair queueing schedulers at core routers. Persistent service level violations are reported to the core provisioning algorithm, which dimensions traffic aggregates at the network ingress edge. The core provisioning algorithm is designed to address the difficult problem of provisioning DiffServ traffic aggregates (i.e., rate control can only be exerted at the root of any traffic distribution tree) by taking into account fairness issues not only across different traffic aggregates but also within the same aggregate whose packets take different routes through a core IP network. We demonstrate through analysis and simulation that the proposed dynamic provisioning model is superior to static provisioning for DiffServ in providing quantitative delay bounds with differentiated loss across per-aggregate service classes under persistent congestion and device failure conditions observed in core networks.
The 3L Algorithm for Fitting Implicit Polynomial Curves and Surfaces to Data
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2000
"... Of great importance to a wide variety of computer vision and image analysis problems is the ability to represent two (2D) and threedimensional (3D) data or objects. Implicit polynomial curves and surfaces are two of the most useful representations available. Their representational power is evidenc ..."
Abstract

Cited by 32 (4 self)
 Add to MetaCart
Of great importance to a wide variety of computer vision and image analysis problems is the ability to represent two- (2D) and three-dimensional (3D) data or objects. Implicit polynomial curves and surfaces are two of the most useful representations available. Their representational power is evidenced by their ability to smooth noisy data and to interpolate through sparse or missing data. Furthermore, their associated Euclidean and affine invariants are powerful discriminators, making implicit polynomials a computationally attractive technology for recognizing objects in arbitrary positions with respect to cameras or range sensors. In this paper, we introduce a completely new approach to fitting implicit polynomials to data. The algorithm represents a significant advancement of implicit polynomial technology for three important reasons. First, it is orders of magnitude faster than existing methods. Second, it has significantly better repeatability and numerical stability than current m...
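The general idea of fitting an implicit polynomial by linear least squares can be illustrated on the simplest case, a circle x^2 + y^2 + d*x + e*y + f = 0. This toy fit deliberately omits the three-level-set construction that distinguishes the 3L algorithm itself; it only shows the linear-least-squares backbone such fits share:

```python
import numpy as np

def fit_implicit_circle(pts):
    """Least-squares estimate of (d, e, f) in x^2 + y^2 + d*x + e*y + f = 0,
    given an (n, 2) array of 2-D points."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])    # linear design matrix
    coeffs, *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)
    return coeffs                                    # (d, e, f)
```

Points sampled from the unit circle recover (d, e, f) = (0, 0, -1), i.e., the polynomial x^2 + y^2 - 1.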
Distinctness of compositions of an integer: A probabilistic analysis
 RANDOM STRUCTURES AND ALGORITHMS
, 2001
"... Compositions of integers are used as theoretical models for many applications. The degree of distinctness of a composition is a natural and important parameter. In this paper, we use as measure of distinctness the number of distinct parts (or components). We investigate, from a probabilistic point o ..."
Abstract

Cited by 30 (12 self)
 Add to MetaCart
Compositions of integers are used as theoretical models for many applications. The degree of distinctness of a composition is a natural and important parameter. In this paper, we use as a measure of distinctness the number of distinct parts (or components). We investigate, from a probabilistic point of view, the first empty part, the maximum part size, and the distribution of the number of distinct part sizes. We obtain asymptotically, for the classical composition of an integer, the moments and an expression for a continuous distribution F, the (discrete) distribution of the number of distinct part sizes being computable from F. We next analyze another composition: the Carlitz one, where two successive parts are different. We use tools such as analytical depoissonization, Mellin transforms, Markov chain potential theory, limiting hitting times, singularity analysis, and perturbation analysis.
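A composition of n corresponds bijectively to a subset of the n-1 possible cut points between unit segments, so uniform random compositions are easy to simulate. A sketch for estimating the expected number of distinct part sizes empirically (the quantity the paper studies asymptotically):

```python
import random

def random_composition(n, rng):
    """Uniform random composition of n: include each of the n-1 cut
    points independently with probability 1/2."""
    bounds = [0] + [i for i in range(1, n) if rng.random() < 0.5] + [n]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def mean_distinct_parts(n, trials=2000, seed=0):
    """Monte Carlo estimate of the expected number of distinct part sizes."""
    rng = random.Random(seed)
    return sum(len(set(random_composition(n, rng)))
               for _ in range(trials)) / trials
```

Every sampled list is a genuine composition (positive parts summing to n), and the distinct-part count is just the size of the set of part values.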
Image Processing with Multiscale Stochastic Models
, 1993
"... In this thesis, we develop image processing algorithms and applications for a particular class of multiscale stochastic models. First, we provide background on the model class, including a discussion of its relationship to wavelet transforms and the details of a twosweep algorithm for estimation. A ..."
Abstract

Cited by 29 (3 self)
 Add to MetaCart
In this thesis, we develop image processing algorithms and applications for a particular class of multiscale stochastic models. First, we provide background on the model class, including a discussion of its relationship to wavelet transforms and the details of a two-sweep algorithm for estimation. A multiscale model for the error process associated with this algorithm is derived. Next, we illustrate how the multiscale models can be used in the context of regularizing ill-posed inverse problems and demonstrate the substantial computational savings that such an approach offers. Several novel features of the approach are developed including a technique for choosing the optimal resolution at which to recover the object of interest. Next, we show that this class of models contains other widely used classes of statistical models including 1D Markov processes and 2D Markov random fields, and we propose a class of multiscale models for approximately representing Gaussian Markov random fields...
Optimal linear precoding strategies for wideband noncooperative systems based on game theory – Part II: Algorithms
 IEEE Trans. Signal Process
, 2008
"... In this twoparts paper we propose a decentralized strategy, based on a gametheoretic formulation, to find out the optimal precoding/multiplexing matrices for a multipointtomultipoint communication system composed of a set of wideband links sharing the same physical resources, i.e., time and band ..."
Abstract

Cited by 29 (3 self)
 Add to MetaCart
In this two-part paper we propose a decentralized strategy, based on a game-theoretic formulation, to find the optimal precoding/multiplexing matrices for a multipoint-to-multipoint communication system composed of a set of wideband links sharing the same physical resources, i.e., time and bandwidth. We assume, as the optimality criterion, the achievement of a Nash equilibrium and consider two alternative optimization problems: 1) the competitive maximization of mutual information on each link, given constraints on the transmit power and on the spectral mask imposed by the radio spectrum regulatory bodies; and 2) the competitive maximization of the transmission rate, using finite order constellations, under the same constraints as above, plus a constraint on the average error probability. In Part I of the paper, we start by showing that the solution set of both noncooperative games is always nonempty and contains only pure strategies. Then, we prove that the optimal precoding/multiplexing scheme for both games leads to a channel diagonalizing structure, so that both matrix-valued problems can be recast as a simpler unified vector power control game, with no performance penalty. Thus, we study this simpler game and derive sufficient conditions ensuring the uniqueness of the Nash equilibrium. Interestingly, although derived under stronger constraints, ...
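After diagonalization, each player's best response in the mutual-information game is a waterfilling allocation over its parallel subchannels. The classic single-user waterfilling step can be sketched with a bisection on the water level; this is the building block only, not the distributed iterative-waterfilling algorithms developed in Part II:

```python
import numpy as np

def waterfill(gains, total_power):
    """Waterfilling over parallel subchannels with gains g_k:
    p_k = max(0, mu - 1/g_k), with the water level mu chosen by
    bisection so that the powers sum to the total power budget."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power   # brackets the water level
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)
```

Stronger subchannels receive more power, and subchannels whose inverse gain exceeds the water level are switched off entirely, which is the behavior a spectral-mask-constrained Nash equilibrium refines further.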