## An Analysis of Various Elastic Net Algorithms (1995)

Citations: 2 (2 self)

### BibTeX

```bibtex
@TECHREPORT{Berg95ananalysis,
  author      = {Jan Van Den Berg and Jock H. Geselschap},
  title       = {An Analysis of Various Elastic Net Algorithms},
  institution = {},
  year        = {1995}
}
```

### Abstract

The Elastic Net Algorithm (ENA) for solving the Traveling Salesman Problem is analyzed by applying statistical mechanics. Using some general properties of the free energy function of stochastic Hopfield neural networks, we argue why Simic's derivation of the ENA from a Hopfield network is incorrect. However, like the Hopfield-Lagrange method, the ENA may be considered a specific dynamic penalty method, where, in this case, the weights of the various penalty terms decrease during execution of the algorithm. This view of the ENA corresponds to the view resulting from the theory of `deformable templates', where the term stochastic penalty method seems most appropriate. Next, the ENA is analyzed both on the level of the energy function and on the level of the motion equations. It is proven and shown experimentally why a non-feasible solution is sometimes found: it can be caused either by too rapid a lowering of the temperature parameter (which is avoidable), or...

### Citations

1773 |
Introduction to the Theory of Neural Computation
- Hertz, Krogh, et al.
- 1991
Citation context (anchored at equation (6)): ...$\ln[1 + \exp(-\beta(\sum_j w_{ij} V_j + I_i))]$; (5) where $\forall i : V_i = P(S_i = 1)$. The stationary points of $F_u$ are found at points of the state space where $\forall i : V_i = 1/(1 + \exp(\beta(\sum_j w_{ij} V_j + I_i)))$ (6). Theorem 2. In mean field approximation, the free energy of constrained stochastic binary Hopfield networks, submitted to the constraint $\sum_i S_i = 1$ (7) equals $F_c(V) = -\frac{1}{2}\sum_{ij} w_{ij} V_i V_j - \ldots$
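The unconstrained mean-field equations (6) quoted in this context can be iterated directly as a fixed-point scheme. A minimal Python sketch, assuming a toy two-unit network (the weights, inputs, and $\beta$ value are illustrative, not taken from the paper):

```python
import math

def mean_field_step(V, w, I, beta):
    """One synchronous update of the mean-field equations (6):
    V_i = 1 / (1 + exp(beta * (sum_j w_ij * V_j + I_i)))."""
    n = len(V)
    return [1.0 / (1.0 + math.exp(beta * (sum(w[i][j] * V[j] for j in range(n)) + I[i])))
            for i in range(n)]

# Toy two-unit network with mutual inhibition (illustrative values).
w = [[0.0, -1.0], [-1.0, 0.0]]
I = [0.5, -0.5]
V = [0.5, 0.5]
for _ in range(200):
    V = mean_field_step(V, w, I, 0.9)  # beta = 0.9
# V now satisfies equation (6) to numerical precision, i.e. it sits at a
# stationary point of the free energy F_u.
```

At a fixed point, substituting `V` back into the right-hand side of (6) reproduces `V` unchanged.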

1572 |
Neural networks and physical systems with emergent collective computational abilities
- Hopfield
Citation context (anchored at equation (7)): ...$\forall i : V_i = 1/(1 + \exp(\beta(\sum_j w_{ij} V_j + I_i)))$ (6). Theorem 2. In mean field approximation, the free energy of constrained stochastic binary Hopfield networks, submitted to the constraint $\sum_i S_i = 1$ (7) equals $F_c(V) = -\frac{1}{2}\sum_{ij} w_{ij} V_i V_j - \frac{1}{\beta}\ln[\sum_i \exp(-\beta(\sum_j w_{ij} V_j + I_i))]$; (8) where $\forall i : V_i = P(S_i = 1 \wedge \forall j \neq i : S_j = 0)$. The stationary points of $F_c$ are found at po...

588 |
Neurons with graded response have collective computational properties like those of two-state neurons
- Hopfield
- 1984
Citation context (anchored at equation (8)): ...of constrained stochastic binary Hopfield networks, submitted to the constraint $\sum_i S_i = 1$ (7) equals $F_c(V) = -\frac{1}{2}\sum_{ij} w_{ij} V_i V_j - \frac{1}{\beta}\ln[\sum_i \exp(-\beta(\sum_j w_{ij} V_j + I_i))]$; (8) where $\forall i : V_i = P(S_i = 1 \wedge \forall j \neq i : S_j = 0)$. The stationary points of $F_c$ are found at points of the state space where $\forall i : V_i = \exp(-\beta(\sum_j w_{ij} V_j + I_i)) / \sum_l \exp(-\beta(\sum_j w_{lj} V_j + \ldots$

481 |
Neural computations of decisions in optimization problems
- Hopfield, Tank
- 1985
Citation context (anchored at equation (9)): ...$V_i = P(S_i = 1 \wedge \forall j \neq i : S_j = 0)$. The stationary points of $F_c$ are found at points of the state space where $\forall i : V_i = \exp(-\beta(\sum_j w_{ij} V_j + I_i)) / \sum_l \exp(-\beta(\sum_j w_{lj} V_j + I_l))$ (9). The first theorem can be used to solve the TSP using the `soft' approach with penalty terms. A generalization of the second theorem can be used to solve the TSP using a combination of the `strong' an...
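Equation (9) is a softmax, so every iterate automatically satisfies the constraint (7), $\sum_i V_i = 1$. A short Python sketch of iterating (9), assuming a toy three-unit network (weights and $\beta$ are illustrative, not from the paper):

```python
import math

def constrained_mean_field_step(V, w, I, beta):
    """One synchronous update of the constrained mean-field equations (9):
    V_i = exp(-beta*h_i) / sum_l exp(-beta*h_l), h_i = sum_j w_ij V_j + I_i.
    The softmax form enforces sum_i V_i = 1, i.e. constraint (7)."""
    n = len(V)
    h = [sum(w[i][j] * V[j] for j in range(n)) + I[i] for i in range(n)]
    m = min(beta * h[i] for i in range(n))        # shift for numerical stability
    e = [math.exp(-(beta * h[i] - m)) for i in range(n)]
    Z = sum(e)
    return [ei / Z for ei in e]

# Toy three-unit network (illustrative values, not from the paper).
w = [[0.0, 0.5, -0.5], [0.5, 0.0, 0.2], [-0.5, 0.2, 0.0]]
I = [0.1, -0.2, 0.3]
V = [1 / 3, 1 / 3, 1 / 3]
for _ in range(100):
    V = constrained_mean_field_step(V, w, I, 1.5)
```

Unlike the penalty ("soft") approach, no extra term is needed to keep the iterates feasible with respect to (7).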

248 |
Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing
- Aarts, Korst
- 1989
Citation context (anchored at equation (1)): ...theory using an asynchronous updating rule and binary units$^7$. Like Simic, we shall use Hopfield's energy expression multiplied by minus one, that is: $E(S) = \frac{1}{2}\sum_{ij} w_{ij} S_i S_j + \sum_i I_i S_i$; (1) where $S \in \{0,1\}^n$ is the state vector $(S_1,\ldots,S_n)$ of the neural network, $S_i$ the output value and $I_i$ the external input of neuron $i$, and where $w_{ij}$ represents the interconnection strength f...

169 |
An Analogue Approach to the Traveling Salesman Problem using an Elastic Net Method, Nature 326
- Durbin, Willshaw
- 1987
Citation context (anchored at equation (4)): ...energy $F(P)$. In this case, $F$ is considered as a function of an arbitrary probability distribution $P$ over the states of the system: $F(P) = E(P) - TS(P) = \sum_S P(S)E(S) + T\sum_S P(S)\ln P(S)$. (4) The principle states that a minimum of the free energy $F(P)$ corresponds to a (dynamic) equilibrium state of the thermodynamic system. Therefore, the free energy can be used to find such a state. Mo...

150 | A new method for mapping optimization problems onto neural networks
- Peterson, Soderberg
- 1989
Citation context (anchored at equation (12)): ...mean field approximation of the free energy may be obtained as follows: $F_{tsp2}(V) = \frac{1}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q) + \frac{\alpha}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p V^i_q + \frac{1}{\beta}\sum_{pi} V^p_i \ln V^p_i$; (12) which has the structure of equation (3), where the entropy equals $S = -\sum_{pi} V^p_i \ln V^p_i$. However, this free energy expression is not used in the rest of this paper. 2.2. The original elasti...

115 |
Neural Networks for Optimization and Signal Processing
- Cichocki, Unbehauen
- 1994
Citation context (anchored at equation (3)): ...free energy defined by $F_\beta = -T \ln(Z_\beta)$, where $T = 1/\beta$ is the `temperature' of the system. It can also be written as $F = \langle E(S)\rangle - T S_{eq} = \sum_S P_{eq}(S) E(S) + T \sum_S P_{eq}(S) \ln P_{eq}(S)$. (3) $\langle E(S)\rangle$ represents the average energy of the system at thermal equilibrium, $S_{eq}$ is the so-called entropy at thermal equilibrium, and $P_{eq}(S)$ is the probability of finding the system in state $S$ at th...

71 |
On the Stability of the Travelling Salesman Problem Algorithm of Hopfield and Tank
- Wilson, Pawley
- 1988
Citation context (anchored at equation (21)): ...result in (11) eventually yields: $F_{ap}(V) = \frac{1}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q) - \frac{\alpha}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p V^i_q - \frac{1}{\beta}\sum_p \ln \sum_i \exp(-\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q)$. (21) Simic found a slightly different expression, with the weight value $\frac{\alpha}{2}$ instead of the value $-\frac{\alpha}{4}$. He simply ignores this term, arriving at the following expression for the free energy: $F_{sim}(\ldots$

66 |
Statistical field theory, Addison–Wesley
- Parisi
- 1988
Citation context (anchored at equation (11)): ...$F_{tsp1}(V) = -\frac{1}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q) - \frac{\alpha}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p V^i_q - \frac{1}{\beta}\sum_p \ln[\sum_i \exp(-\frac{\beta}{2}\sum_q d^2_{pq} (\alpha V^i_q + V^{i+1}_q + V^{i-1}_q))]$. (11) This energy expression has been used by Simic$^{14}$ to derive the elastic net algorithm. However, we think his derivation is not correct (section 3). We note that a more `natural' mean field approximat...


45 |
Statistical Mechanics as the Underlying Theory of Elastic and Neural Optimisations
- Simic
- 1990
Citation context (anchored at equation (14)): ...the location of city $p$. Application of the gradient descent method on equation (13) yields the updating rule: $\Delta x_i = \frac{\alpha_2}{\beta}(x_{i+1} - 2x_i + x_{i-1}) + \alpha_1 \sum_p \Lambda_p(i)(x_p - x_i)$; (14) where the time-step $\Delta t = 1/\beta$ equals the current temperature $T$ and where $\Lambda_p(i) = \exp(-\frac{\beta^2}{2}|x_p - x_i|^2) / \sum_l \exp(-\frac{\beta^2}{2}|x_p - x_l|^2)$. (15) In practice, all $x_p\ldots$

30 |
Improving the Performance of the Hopfield-Tank Neural Network through Normalization and Annealing
- Bout, Miller
- 1989
Citation context (anchored at equation (20)): ...$= \frac{1}{\beta}\sum_p \ln[\sum_i \exp(a^i_p)] + \frac{1}{\beta}\sum_{ip} h^i_p \frac{\partial f}{\partial x^i_p}(a^i_p) + O(h^2)$ (19) $= \frac{1}{\beta}\sum_p \ln \sum_i \exp(-\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q) - \frac{1}{2}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q)$. (20) $^c$It is interesting to note Simic's observation that expression (11) has the `wrong' sign: indeed, the structure of that equation suggests that stationary points in that case correspond to maxima, w...


13 |
Generalized Deformable Models
- Yuille
- 1990
Citation context (anchored at equation (22)): ...res this term, arriving at the following expression for the free energy: $F_{sim}(V) = \frac{1}{4}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q) - \frac{1}{\beta}\sum_p \ln \sum_i \exp(-\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q)$. (22) However, inspection of equation (19) reveals that the chosen Taylor approximation does not hold for low values of the temperature, i.e., for high values of $\beta$. This is a fundamental objection because...


6 |
Artificial neural networks and combinatorial optimization problems
- Peterson, Söderberg
- 1992
Citation context (anchored at equation (13)): ...tion the main results. The energy function$^b$ to be minimized of the elastic net equals: $F_{en}(x) = \frac{\alpha_2}{2}\sum_i |x_{i+1} - x_i|^2 - \frac{\alpha_1}{\beta}\sum_p \ln \sum_j \exp(-\frac{\beta^2}{2}|x_p - x_j|^2)$. (13) Here, $x_i$ represents the $i$-th elastic net point (the succeeding $M$ elastic net points form a ring) and $x_p$ represents the location of city $p$. Application of the gradient descent method on equation (13)...
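The elastic-net energy (13) is straightforward to evaluate directly. A Python sketch, assuming a toy single-city layout; $\alpha_1 = 2.0$ and $\alpha_2 = 0.2$ are the values the paper reports as efficient for cities normalized to the unit square:

```python
import math

def elastic_net_energy(x, cities, alpha1, alpha2, beta):
    """Equation (13):
    F_en(x) = (alpha2/2) * sum_i |x_{i+1} - x_i|^2
            - (alpha1/beta) * sum_p ln sum_j exp(-(beta^2/2) * |x_p - x_j|^2),
    where the M elastic net points x form a ring."""
    M = len(x)
    tension = 0.5 * alpha2 * sum(
        (x[(i + 1) % M][0] - x[i][0]) ** 2 + (x[(i + 1) % M][1] - x[i][1]) ** 2
        for i in range(M))
    attraction = sum(
        math.log(sum(math.exp(-0.5 * beta ** 2 * ((cx - px) ** 2 + (cy - py) ** 2))
                     for px, py in x))
        for cx, cy in cities)
    return tension - (alpha1 / beta) * attraction

# A net hugging the city scores a lower energy than one far away (toy example).
cities = [(0.0, 0.0)]
near = [(0.0, 0.1), (0.1, 0.0)]
far = [(1.0, 1.0), (1.1, 1.0)]
```

Gradient descent on this function is what yields the updating rule (14).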


5 | Some Theorems Concerning the Free Energy of (Un)Constrained Stochastic Hopfield Neural Networks
- Berg, Bioch
- 1995
Citation context (anchored at equation (19)): ...(17) $h^i_p = -\frac{\beta}{2}\sum_q d^2_{pq} (V^{i+1}_q + V^{i-1}_q)$; (18) we found, using the mean field equation (9): $f(a+h) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(a^i_p)] + \frac{1}{\beta}\sum_{ip} h^i_p \frac{\partial f}{\partial x^i_p}(a^i_p) + O(h^2)$ (19) $= \frac{1}{\beta}\sum_p \ln \sum_i \exp(-\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q) - \frac{1}{2}\sum_i \sum_{pq} d^2_{pq} V^i_p (V^{i+1}_q + V^{i-1}_q)$. (20) $^c$It is interesting to note Simic's observation that expression (11) has the `w...

4 |
An Improved Elastic Net Method for the Traveling Salesman Problem
- Burr
- 1988
Citation context (anchored at equation (2)): ...called the Hamiltonian, denoted by $H_\alpha$, in this paper denoted by $E(S)^a$. Next, the central expression to calculate is the so-called partition function $Z_\beta$, defined by $Z_\beta = \sum_S \exp(-\beta E(S))$; (2) where the summation takes place over the set of states $S$ of the system. Related to the partition function is the thermodynamic free energy defined by $F_\beta = -T \ln(Z_\beta)$, where $T = 1/\beta$ is the `...
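For a small network, the partition function (2) and the free energy $F_\beta = -T\ln(Z_\beta)$ can be computed by brute-force enumeration of all $2^n$ states and cross-checked against the equivalent form (3), $F = \langle E\rangle - T S_{eq}$. A Python sketch (the two-unit weights are illustrative toy values, not from the paper):

```python
import math
from itertools import product

def hopfield_energy(S, w, I):
    """Equation (1): E(S) = 1/2 * sum_ij w_ij S_i S_j + sum_i I_i S_i."""
    n = len(S)
    return 0.5 * sum(w[i][j] * S[i] * S[j] for i in range(n) for j in range(n)) \
        + sum(I[i] * S[i] for i in range(n))

def free_energy(w, I, beta):
    """F_beta = -T * ln(Z_beta), with Z_beta from equation (2) and T = 1/beta."""
    n = len(I)
    Z = sum(math.exp(-beta * hopfield_energy(S, w, I))
            for S in product((0, 1), repeat=n))
    return -(1.0 / beta) * math.log(Z)

# Cross-check against equation (3): F = <E> - T*S_eq, where P_eq is the
# Boltzmann distribution. Toy two-unit network (illustrative values).
w = [[0.0, 1.0], [1.0, 0.0]]
I = [0.2, -0.3]
beta = 2.0
states = list(product((0, 1), repeat=2))
Z = sum(math.exp(-beta * hopfield_energy(S, w, I)) for S in states)
P = {S: math.exp(-beta * hopfield_energy(S, w, I)) / Z for S in states}
avg_E = sum(P[S] * hopfield_energy(S, w, I) for S in states)
neg_entropy = sum(P[S] * math.log(P[S]) for S in states)  # equals -S_eq
F_from_eq3 = avg_E + (1.0 / beta) * neg_entropy
```

The two routes agree to machine precision, which is exactly the identity relating (2) and (3).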

3 |
Een Verbeterd `Elastic Net' Algoritme (An Improved Elastic Net Algorithm)
- Geselschap
- 1994
Citation context (anchored at equation (5)): ...mean field approximation, the free energy of unconstrained stochastic binary Hopfield networks equals $F_u(V) = -\frac{1}{2}\sum_{ij} w_{ij} V_i V_j - \frac{1}{\beta}\sum_i \ln[1 + \exp(-\beta(\sum_j w_{ij} V_j + I_i))]$; (5) where $\forall i : V_i = P(S_i = 1)$. The stationary points of $F_u$ are found at points of the state space where $\forall i : V_i = 1/(1 + \exp(\beta(\sum_j w_{ij} V_j + I_i)))$ (6). Theorem 2. In mean field approximation, the...

3 | Constrained optimization with the Hopfield-Lagrange model
- Berg, Bioch
- 1994
Citation context (anchored at equation (17)): ...$TS$), Simic applies a Taylor series expansion on the last term of equation (11). We tried to do the same. Taking $f(x) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(x^i_p)]$; (16) $a^i_p = -\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q$; and (17) $h^i_p = -\frac{\beta}{2}\sum_q d^2_{pq} (V^{i+1}_q + V^{i-1}_q)$; (18) we found, using the mean field equation (9): $f(a+h) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(a^i_p)] + \frac{1}{\beta}\sum_{ip} h^i_p \frac{\partial f}{\partial x^i_p}(a^i_p) + O(h^2)$ (19...


2 | The optimal elastic net: Finding solutions to the travelling salesman problem
- Stone
- 1992
Citation context (anchored at equation (15)): ...$\Lambda_p(i)(x_p - x_i)$; (14) where the time-step $\Delta t = 1/\beta$ equals the current temperature $T$ and where $\Lambda_p(i) = \exp(-\frac{\beta^2}{2}|x_p - x_i|^2) / \sum_l \exp(-\frac{\beta^2}{2}|x_p - x_l|^2)$. (15) In practice, all $x_p$ should be normalized to points in the unit square. In that case, the following parameter values appear to be efficient$^4$: $\alpha_1 = 2.0$ and $\alpha_2 = 0.2$. The initial value of the tem...
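The updating rule (14)-(15) can be sketched directly in Python. The symbol $\Lambda_p(i)$ for the normalized weights in (15) is our reading of the garbled source, following common elastic-net notation; the city/ring layout below is a toy instance in the unit square, while $\alpha_1 = 2.0$ and $\alpha_2 = 0.2$ are the values quoted as efficient:

```python
import math

def ena_step(x, cities, alpha1, alpha2, beta):
    """One elastic-net update per equations (14)-(15):
    dx_i = (alpha2/beta) * (x_{i+1} - 2*x_i + x_{i-1})
         + alpha1 * sum_p Lambda_p(i) * (x_p - x_i),
    with Lambda_p(i) a softmax over net points of exp(-(beta^2/2)*|x_p - x_i|^2)."""
    M = len(x)
    lam = []  # lam[p][i]: normalized attraction of net point i to city p (eq. 15)
    for cx, cy in cities:
        wts = [math.exp(-0.5 * beta ** 2 * ((cx - px) ** 2 + (cy - py) ** 2))
               for px, py in x]
        s = sum(wts)
        lam.append([v / s for v in wts])
    new = []
    for i, (px, py) in enumerate(x):
        nx, ny = x[(i + 1) % M]  # ring topology: neighbours wrap around
        mx, my = x[(i - 1) % M]
        fx = sum(l[i] * (c[0] - px) for l, c in zip(lam, cities))
        fy = sum(l[i] * (c[1] - py) for l, c in zip(lam, cities))
        new.append((px + (alpha2 / beta) * (nx - 2 * px + mx) + alpha1 * fx,
                    py + (alpha2 / beta) * (ny - 2 * py + my) + alpha1 * fy))
    return new

# Cities in the unit square and a small ring of net points (toy instance).
cities = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]
ring = [(0.5 + 0.05 * math.cos(2 * math.pi * k / 6),
         0.5 + 0.05 * math.sin(2 * math.pi * k / 6)) for k in range(6)]
ring = ena_step(ring, cities, 2.0, 0.2, 1.0)
```

In the full algorithm this step is repeated while the temperature $T = 1/\beta$ is gradually lowered (annealing), sharpening the $\Lambda_p(i)$ toward hard city-to-point assignments.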

2 | On the Statistical Mechanics of (Un)Constrained Stochastic Hopfield and `Elastic' Neural Networks
- Berg, Bioch
- 1994
Citation context (anchored at equation (16)): ...ndard form of the free energy ($F = \langle E(S)\rangle - TS$), Simic applies a Taylor series expansion on the last term of equation (11). We tried to do the same. Taking $f(x) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(x^i_p)]$; (16) $a^i_p = -\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q$; and (17) $h^i_p = -\frac{\beta}{2}\sum_q d^2_{pq} (V^{i+1}_q + V^{i-1}_q)$; (18) we found, using the mean field equation (9): $f(a+h) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(a^i_p)]\ldots$

1 |
How to label nerve cells so that they can interconnect in an ordered fashion
- von der Malsburg, Willshaw
- 1977
Citation context (anchored at equation (10)): ...$d_{pq}$ is the distance between points $p$ and $q$; then the corresponding Hamiltonian may be formulated as$^{14}$: $E(S) = \frac{1}{4}\sum_i \sum_{pq} d^2_{pq} S^i_p (S^{i+1}_q + S^{i-1}_q) + \frac{\alpha}{4}\sum_i \sum_{pq} d^2_{pq} S^i_p S^i_q$. (10) The first term represents the sum of distance-squares between visited cities, while the second term is a penalty term which penalizes the simultaneous presence of the salesman at more than one positi...

1 |
On the (Free) Energy of Hopfield Networks. To appear in: World Scientific
- Berg, Bioch
- 1995
Citation context (anchored at equation (18)): ...equation (11). We tried to do the same. Taking $f(x) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(x^i_p)]$; (16) $a^i_p = -\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q$; and (17) $h^i_p = -\frac{\beta}{2}\sum_q d^2_{pq} (V^{i+1}_q + V^{i-1}_q)$; (18) we found, using the mean field equation (9): $f(a+h) = \frac{1}{\beta}\sum_p \ln[\sum_i \exp(a^i_p)] + \frac{1}{\beta}\sum_{ip} h^i_p \frac{\partial f}{\partial x^i_p}(a^i_p) + O(h^2)$ (19) $= \frac{1}{\beta}\sum_p \ln \sum_i \exp(-\frac{\beta\alpha}{2}\sum_q d^2_{pq} V^i_q) - \ldots$
