On the Complexity of Computing and Learning with Multiplicative Neural Networks
 NEURAL COMPUTATION
Abstract

Cited by 37 (3 self)
In a great variety of neuron models, neural inputs are combined using a summing operation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with superlinear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these we derive some asymptotically tight bounds.
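To make the summing/multiplying distinction concrete, here is a minimal Python sketch (function names are illustrative, not from the paper). A product unit raises each input to a real-valued weight and multiplies the results, so with integer weights it reproduces the monomial interaction terms of higher-order (sigma-pi) units.

```python
import math

def summing_unit(x, w, b=0.0):
    """Standard additive unit: weighted sum of the inputs (pre-activation)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def product_unit(x, w):
    """Product unit: multiplies inputs raised to real-valued weight powers,
    letting inputs interact nonlinearly. Inputs are assumed positive here
    so that xi ** wi stays real-valued."""
    return math.prod(xi ** wi for xi, wi in zip(x, w))

# A sigma-pi style monomial term is a product unit with integer weights:
# w = (1, 1) turns (x1, x2) into the interaction x1 * x2.
x = (2.0, 3.0)
print(summing_unit(x, (1.0, 1.0)))  # 5.0: inputs combined additively
print(product_unit(x, (1.0, 1.0)))  # 6.0: inputs combined multiplicatively
```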
Living on the edge: A geometric theory of phase transitions in convex optimization
, 2013
Abstract

Cited by 35 (4 self)
Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the ℓ1 minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
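The statistical dimension of a cone C is δ(C) = E‖Π_C(g)‖², the expected squared norm of the projection of a standard Gaussian vector onto C. For the nonnegative orthant ℝ₊^d the projection simply clips negative coordinates, and δ(ℝ₊^d) = d/2 since each coordinate contributes E[max(g,0)²] = 1/2. A Monte Carlo sketch of this (function name is illustrative, not from the paper):

```python
import random

def stat_dim_orthant_mc(d, trials=20000, seed=0):
    """Monte Carlo estimate of the statistical dimension of the nonnegative
    orthant R_+^d: delta(C) = E ||Pi_C(g)||^2 for standard Gaussian g.
    Projection onto the orthant clips negative coordinates to zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]
        total += sum(gi * gi for gi in g if gi > 0.0)  # ||Pi_{R_+^d}(g)||^2
    return total / trials

# Each coordinate contributes E[max(g, 0)^2] = 1/2, so delta(R_+^d) = d / 2.
print(stat_dim_orthant_mc(10))  # ≈ 5.0
```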
Clustering for Edge-Cost Minimization
Abstract

Cited by 34 (5 self)
Leonard J. Schulman, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280. We address the problem of partitioning a set of n points into clusters, so as to minimize the sum, over all intra-cluster pairs of points, of the cost associated with each pair. We obtain a randomized approximation algorithm for this problem, for the cost functions ℓ2², ℓ1, and ℓ2, as well as any cost function isometrically embeddable in ℓ2².
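The objective being minimized is easy to state in code. A minimal sketch of the edge-cost of a partition, with squared Euclidean distance as the default pair cost (the ℓ2² case from the abstract); the function name is illustrative:

```python
def edge_cost(points, clusters,
              cost=lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))):
    """Total cost of a partition: the sum of cost(p, q) over all unordered
    intra-cluster pairs. Default pair cost is squared Euclidean distance."""
    total = 0.0
    for cluster in clusters:
        pts = [points[i] for i in cluster]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                total += cost(pts[i], pts[j])
    return total

pts = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
# Grouping nearby points keeps the intra-cluster pair costs small.
print(edge_cost(pts, [{0, 1}, {2, 3}]))  # 2.0
print(edge_cost(pts, [{0, 2}, {1, 3}]))  # 100.0
```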
Randomized approximation algorithms for set multicover problems with applications to reverse engineering of protein and gene networks
, 2007
A graph theoretical Gauss-Bonnet-Chern theorem
, 2011
Abstract

Cited by 17 (17 self)
Abstract. We prove a discrete Gauss-Bonnet-Chern theorem Σ_{g∈V} K(g) = χ(G) for finite graphs G = (V, E), where V is the vertex set and E is the edge set of the graph. The dimension of the graph, the local curvature form K, and the Euler characteristic are all defined graph theoretically.
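The theorem can be checked numerically on a small graph. In this combinatorial setting the Euler characteristic is χ(G) = Σ_k (−1)^k c_k, where c_k counts the complete subgraphs K_{k+1}, and the curvature K(v) = Σ_k (−1)^k c_k(v)/(k+1) distributes each K_{k+1} equally over its k+1 vertices, so the sum telescopes to χ(G). A brute-force sketch (function names are illustrative), verified on the octahedron graph:

```python
from fractions import Fraction
from itertools import combinations

def build_adj(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def is_clique(s, adj):
    return all(b in adj[a] for a, b in combinations(s, 2))

def curvature(v, vertices, adj, max_size=6):
    """K(v) = sum_k (-1)^k c_k(v) / (k + 1), where c_k(v) counts the
    complete subgraphs K_{k+1} containing v (brute force, small graphs)."""
    others = [u for u in vertices if u != v]
    K = Fraction(0)
    for k in range(max_size):
        c = sum(1 for extra in combinations(others, k)
                if is_clique((v,) + extra, adj))
        K += Fraction((-1) ** k * c, k + 1)
    return K

def euler_characteristic(vertices, adj, max_size=6):
    """chi(G) = sum_k (-1)^k c_k, with c_k the number of K_{k+1} subgraphs."""
    return sum((-1) ** k * sum(1 for s in combinations(vertices, k + 1)
                               if is_clique(s, adj))
               for k in range(max_size))

# Octahedron graph K_{2,2,2}: 6 vertices, 12 edges, 8 triangles, no K_4,
# so chi = 6 - 12 + 8 = 2, the Euler characteristic of the 2-sphere.
V = list(range(6))
E = [(a, b) for a, b in combinations(V, 2) if b - a != 3]  # skip antipodes
adj = build_adj(V, E)
print(sum(curvature(v, V, adj) for v in V), euler_characteristic(V, adj))  # 2 2
```

Each vertex of the octahedron has curvature 1 − 4/2 + 4/3 = 1/3, and six copies of 1/3 sum to χ = 2.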
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
Abstract

Cited by 16 (0 self)
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
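The simplest entry in such a taxonomy, a finite feedforward net of binary threshold units, already illustrates the circuit-complexity viewpoint: XOR is not computable by a single threshold unit, but a depth-2 net handles it. A minimal sketch (this example is a standard textbook construction, not taken from the survey):

```python
def threshold_unit(inputs, weights, bias):
    """Binary threshold (perceptron) unit: outputs 1 iff the weighted sum
    plus bias is nonnegative."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def xor_net(x1, x2):
    """Two-layer feedforward threshold net computing XOR, a function no
    single threshold unit can compute -- depth matters for these models."""
    h1 = threshold_unit((x1, x2), (1, 1), -1)    # OR gate
    h2 = threshold_unit((x1, x2), (-1, -1), 1)   # NAND gate
    return threshold_unit((h1, h2), (1, 1), -2)  # AND gate

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```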
Variational Principles for Circle Patterns
, 2003
Abstract

Cited by 15 (4 self)
A Delaunay cell decomposition of a surface with constant curvature gives rise to a circle pattern, consisting of the circles which are circumscribed to the facets. We treat the problem whether there exists a Delaunay cell decomposition for a given (topological) cell decomposition and given intersection angles of the circles, whether it is unique, and how it may be constructed. Somewhat more generally, we allow cone-like singularities in the centers and intersection points of the circles. We prove existence and uniqueness theorems for the solution of the circle pattern problem using a variational principle. The functionals (one for the Euclidean, one for the hyperbolic case) are convex functions of the radii of the circles. The critical points correspond to solutions of the circle pattern problem. The analogous functional for the spherical case is not convex, hence this case is treated by stereographic projection to the plane. From the existence and uniqueness of circle patterns in the sphere, we derive a strengthened version of Steinitz's theorem on the geometric realizability of abstract polyhedra. We derive the ...
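The basic object here, the circle circumscribed to a facet, is computable by the standard determinant formula for the circumcenter of three points. A minimal Euclidean sketch (the function name is illustrative; the paper's variational principle then works with the radii of such circles):

```python
def circumcircle(p, q, r):
    """Center and radius of the circle through three non-collinear points
    in the plane, i.e. the circle circumscribed to a triangular facet.
    Standard determinant formula for the circumcenter."""
    ax, ay = p
    bx, by = q
    cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0.0:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5
    return (ux, uy), radius

center, rad = circumcircle((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
print(center, rad)  # (1.0, 1.0) and sqrt(2)
```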
Small examples of nonconstructible simplicial balls and spheres
 SIAM J. Discrete Math
, 2004
Abstract

Cited by 14 (6 self)
We construct nonconstructible simplicial d-spheres with d + 10 vertices and nonconstructible, nonrealizable simplicial d-balls with d + 9 vertices for d ≥ 3.
Face Numbers of 4-Polytopes and 3-Spheres
 Proceedings of the international congress of mathematicians, ICM 2002
, 2002
Abstract

Cited by 13 (2 self)
Steinitz (1906) gave a remarkably simple and explicit description of the set of all f-vectors f(P) = (f0, f1, f2) of all 3-dimensional convex polytopes. His result also identifies the simple and the simplicial 3-dimensional polytopes as the only extreme cases. Moreover, it can be extended to strongly regular CW 2-spheres (topological objects), and further to Eulerian lattices of length 4 (combinatorial objects).
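Steinitz's description is explicit enough to code directly: (f0, f1, f2) is the f-vector of a 3-polytope iff Euler's relation f0 − f1 + f2 = 2 holds together with f2 ≤ 2f0 − 4 and f0 ≤ 2f2 − 4, with equality in these inequalities characterizing the simplicial and simple polytopes, respectively. A minimal sketch (the function name is illustrative):

```python
def is_3polytope_f_vector(f0, f1, f2):
    """Steinitz's 1906 characterization: (f0, f1, f2) is the f-vector of a
    3-dimensional convex polytope iff Euler's relation f0 - f1 + f2 = 2
    holds together with f2 <= 2*f0 - 4 and f0 <= 2*f2 - 4. Equality in the
    first inequality gives the simplicial polytopes, in the second the
    simple ones."""
    return (f0 - f1 + f2 == 2) and (f2 <= 2 * f0 - 4) and (f0 <= 2 * f2 - 4)

print(is_3polytope_f_vector(4, 6, 4))   # True: tetrahedron (simple and simplicial)
print(is_3polytope_f_vector(8, 12, 6))  # True: cube (simple)
print(is_3polytope_f_vector(5, 9, 6))   # True: triangular bipyramid (simplicial)
print(is_3polytope_f_vector(6, 10, 7))  # False: violates Euler's relation
```

No such complete description is known for 4-polytopes, which is what makes the extremal questions surveyed here hard.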