## GENERAL THEORY OF CNN; APPLICATION OF CNN IN IMAGE PROCESSING

### BibTeX

@MISC{Mátyás_generaltheory,
  author = {Brendel Mátyás},
  note = {Scientific adviser: Tamás Roska, D.Sc.},
  title = {General Theory of CNN; Application of CNN in Image Processing},
  year = {}
}


### Abstract

CNN-backpropagation, adaptive image sensing and

### Citations

966 | The organization of behavior - Hebb - 1949
Citation Context: ...neural basis of learning is connected to the synapses, and it is essentially true according to our knowledge today as well ¹. In the theory of synaptic learning a basic contribution was made by Hebb [11]. This theory has been considered the main basis of neural learning since that time. Yet Hebb's theory was based on behavioral psychology and it was only a neurobiological hypothesis explaining the... |

324 | Information-based objective functions for active data selection - MacKay - 1992
Citation Context: ...es, which are not included in the samples. The sampling is usually predefined, or the task of human design. However, there are also methods for “optimal experiment design”, also called “active learning” [73]. In these algorithms... ⁵ Special knowledge means some features of the template which influence the dynamics of CNN in a known and direct way; on the other hand, the dynamics of the network can also be r... |

312 | Image selective smoothing and edge detection by nonlinear diffusion - Alvarez, Lions, et al. - 1992 |

229 | Cellular neural networks: theory - Chua, Yang - 1988
Citation Context: ...the array is colored black; cells that fall within the sphere of influence of neighborhood radius r = 1 (the nearest neighbors) are colored gray. The paradigm of Cellular Neural Networks was introduced in [9]. The exact description of the topology of CNN is as follows. The cells lie on an M×N grid. Each cell is connected to a neighborhood of radius r (usually r=1 or r=0). The neighborhood of cell (i,j) is... |
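The neighborhood described in the excerpt above (an M×N grid, each cell connected to cells within radius r) can be sketched directly. `neighborhood` is a hypothetical helper name, and clipping the window at the grid border is an assumption, since the fragment cuts off before boundary handling is stated.

```python
def neighborhood(i, j, r, M, N):
    """Cells (k, l) within the sphere of influence of radius r of cell (i, j),
    clipped to the M x N grid (assumed border handling)."""
    return [(k, l)
            for k in range(max(0, i - r), min(M, i + r + 1))
            for l in range(max(0, j - r), min(N, j + r + 1))]
```

For r = 1 an interior cell sees the full 3×3 block of nearest neighbors (itself included), a corner cell only 4 cells, and r = 0 degenerates to the cell alone.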

184 | Digital Image Enhancement and Noise Filtering by use of Local Statistics - Lee - 1980 |

137 | Generalization of back-propagation to recurrent neural networks - Pineda - 1987
Citation Context: ...quare error concept. However, various kinds of minimization methods were used for optimization of the error function over the parameter space. These optimization methods are the gradient-based methods [26]; the evolutionary methods (see [27] and [28]) and the statistical optimization methods [29]. 4.1. Training of CNN with gradient-based methods In [30] the methods used for computing or estimating grad... |

116 | The CNN paradigm - Chua, Roska - 1993
Citation Context: ...nd on computational resources is fulfilled by a fast hardware solution. The significance of this possibility is underlined by the fact that in an analogic CNN computer, the CNN Universal Machine (see [36]), a stored program can be defined to calculate the gradients via the same hardware. Once we are able to calculate the gradients, various architectural possibilities for adaptation and plasticity are avai... |

93 | Two methods for display of high contrast images - TUMBLIN, HODGINS, et al. - 1999 |

80 | The CNN universal machine: An analogic array computer - Roska, Chua - 1993
Citation Context: ...additional extensions. The CNN executes a parallel, analog operation; this can be extended with storage of the images, logical operations and others. The design of the CNN Universal Machine (CNN-UM, [10]), a stored-program nonlinear array computer, was motivated by these considerations. The architecture is able to combine analog array operations with logic efficiently. Thus complex, so-called analogic... |

65 | Adaptation of retinal processing to image contrast and spatial scale - Smirnakis, Berry, et al. - 1997 |

65 | Mammographic feature enhancement by multiscale analysis - Laine, Schuler, et al. - 1994 |

53 | Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path - Bliss, Lømo - 1973
Citation Context: ...ated. Use-dependent long-term changes in synaptic efficacy are now thought to be a basis for learning and memory. Long-term potentiation (LTP) was first discovered in the hippocampus and published in [12]. The hippocampus is the brain area responsible for short-term memory (lasting a few days), as was presumed by preceding studies. The opposite mechanism is long-term depression (LTD), which... |

30 | Handbook of Neural Computing Applications - Maren, Harston, et al. - 1990
Citation Context: ...ochemical mechanism of this synaptic learning was not known, there was a wide space for inventing learning rules for artificial neural networks when they emerged in computer science (see for example [14]). The aim of studying artificial neural networks usually was not to develop biologically valid learning rules but to construct learning mechanisms which are efficient in applications. On the other hand... |

21 | Contrast Enhancement of Medical Images Using Multiscale Edge Representation - Lu, Healy, et al. - 1994 |

18 | The analogic cellular neural network as a bionic eye - Werblin, Roska, et al. - 1994
Citation Context: ...ce” of the retina by a complex model, but it is difficult to realize them on-chip. In the future there is the possibility of constructing a bionic eye, which is an important inspiration of CNN research [16]. On the other hand, I was not interested in modeling the whole, complex retina, only its adaptive features. Accordingly I was searching for simpler models which are realizable at this time or in the... |

17 | Design and learning with cellular neural networks - Nossek - 1994
Citation Context: ...g (CNN learning) has also been an important topic ever since the first CNN conference, CNNA’90 [23]. A comprehensive summary of the early period of template design and learning can be found in [24] or [25]. Template training has been based on basically the same mean square error concept. However, various kinds of minimization methods were used for optimization of the error function over the parameter... |

17 | Methods for image processing and pattern formation in cellular neural network: a tutorial - Crounse, Chua - 1995 |

15 | Genetic algorithms for CNN template learning - Kozek, Roska, et al.
Citation Context: ...learning tasks are investigated. As far as I know, there is no special gradient-based method for CNN training. On the other hand, several other training methods were already tested on CNN template training ([27], [28] and [29]). The topic of this dissertation is not the improvement of the gradient-based methods or other methods, but only the investigation of gradient computation for CNN. The advantage of sup... |

15 | Diagrammatic Derivation of Gradient Algorithms for Neural Networks - Wan, Beaufays - 1996
Citation Context: ...wn from general neural network theory for a long time. For recurrent networks, recurrent back-propagation was introduced, which applies to CNN as well. A general approach is network reciprocity ([33], [34] and [35]), which automates the diagrammatic derivation of the reciprocal network, computing the gradients of the original network for arbitrary network structures. This approach applies to all netwo... |

15 | Rational unsharp masking technique - Ramponi, Polesel - 1998
Citation Context: ...thod, however, is intractable with digital technologies at present. As a comparison, I present some of these methods and their relation to my first method. The first is unsharp masking. Equation (15) in [61] is related to the contrast-enhancement part of the equation of our method (30). Only the so-called control term is simplified for CNN realization. See the details later. ¹⁰ The notion “information” i... |
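Unsharp masking, mentioned in the excerpt above, can be illustrated with its classic linear form, out = x + λ·(x − blur(x)). This is a generic sketch, not the rational variant of [61] (whose control term the fragment does not reproduce); the 3×3 box blur and the gain λ are illustrative assumptions.

```python
def box_blur3(img):
    """3x3 box blur with border clipping (illustrative low-pass filter)."""
    M, N = len(img), len(img[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            vals = [img[k][l]
                    for k in range(max(0, i - 1), min(M, i + 2))
                    for l in range(max(0, j - 1), min(N, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def unsharp_mask(img, lam=1.0):
    """Classic linear unsharp masking: boost the high-pass residual x - blur(x)."""
    blur = box_blur3(img)
    return [[img[i][j] + lam * (img[i][j] - blur[i][j])
             for j in range(len(img[0]))] for i in range(len(img))]
```

A flat region passes through unchanged (the residual is zero), while an isolated bright pixel is pushed further above its surroundings, which is exactly the edge-sharpening behavior the comparison relies on.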

14 | Real-Time Adaptive Contrast Enhancement - Narendra, Fitch - 1981
Citation Context: ...i.e. monotonically decreasing. Our contrast gain (CG) is also monotonically decreasing, but we could not use division; instead we composed another function. Then the method of Narendra and Fitch from [72] is: f(i,j) = m_x(i,j) + (D / σ_x(i,j)) · [x(i,j) − m_x(i,j)], where D is a constant. Chang and Wu make a generalization by: f(i,j) = m_x(i,j) + (K(i,j) / σ_x(i,j)) · [x(i,j) − m_x(i,j)]... |
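The Narendra-Fitch rule cited above, f(i,j) = m_x(i,j) + (D/σ_x(i,j))·[x(i,j) − m_x(i,j)], can be sketched as follows. The window radius, the small `eps` guarding σ ≈ 0, and the border clipping are assumptions not fixed by the fragment.

```python
def local_stats(img, i, j, r):
    """Mean and standard deviation over the (2r+1)x(2r+1) window, clipped at borders."""
    M, N = len(img), len(img[0])
    vals = [img[k][l]
            for k in range(max(0, i - r), min(M, i + r + 1))
            for l in range(max(0, j - r), min(N, j + r + 1))]
    m = sum(vals) / len(vals)
    sd = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return m, sd

def narendra_fitch(img, D=10.0, r=1, eps=1e-6):
    """f(i,j) = m_x(i,j) + (D / sigma_x(i,j)) * [x(i,j) - m_x(i,j)].
    eps guards against division by zero in flat regions (an assumption)."""
    M, N = len(img), len(img[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            m, sd = local_stats(img, i, j, r)
            gain = D / (sd + eps)  # contrast gain: monotonically decreasing in sigma
            out[i][j] = m + gain * (img[i][j] - m)
    return out
```

Flat regions are left at their local mean, while low-variance detail is amplified, which is the sense in which the gain D/σ is "monotonically decreasing" in the local contrast.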

11 | Image Contrast Enhancement by Constrained Local Histogram Equalization - Zhu, Chan, et al. - 1999
Citation Context: ...ficiently adaptive. Contrary to global methods, adaptivity means that information is handled neither globally nor locally, but regionally. Certainly, adaptive methods have already been developed [50]. However, these methods are computationally intensive. The fact that even an interpolating technique has been proposed in [47] to grade the huge computational demand indicates the significance of th... |

9 | The computational eye - Werblin, et al. - 1996 |

8 | Texture classification and Segmentation by Cellular Neural Network using Genetic Learning - Sziranyi, Csapodi - 1998
Citation Context: ...ng tasks are investigated. As far as I know, there is no special gradient-based method for CNN training. On the other hand, several other training methods were already tested on CNN template training ([27], [28] and [29]). The topic of this dissertation is not the improvement of the gradient-based methods or other methods, but only the investigation of gradient computation for CNN. The advantage of supervise... |

8 | A 64x64 CNN universal chip with analog and digital - Espejo, Domínguez-Castro, et al. - 1998
Citation Context: ...ire method. This method can even be developed to become more complex, if more complex hardware is available. The single-step algorithm was tested on the currently available visual microprocessor (ACE4k [56]). The parameter setting was: k1=5, k2=1, k3=, k4=4, m=1, n=1. Figure 64 shows the results. The realization of a wide-range diffusion is currently a hard problem on chip. Consequently, the result of the... |

6 | Cellular Neural Networks: Foundations and Primer - Chua, Roska - 1997
Citation Context: ...4.1.1. Computing the gradient with the DT-CNN itself. The question is whether this computation can be carried out with the DT-CNN itself. Let us first introduce the following SHIFT templates (see [40]): E^{ν,µ}(k,l) = 1 if (k,l) = (ν,µ), and 0 otherwise. Equations (18), (19) and (20) are all similar to the DT-CNN equations: they all consist of an input added to the convolution A*Dp(n)... |
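The SHIFT template defined in the excerpt, E^{ν,µ}(k,l) = 1 if (k,l) = (ν,µ) and 0 otherwise, can be checked with a small template-convolution sketch. The zero-padded boundary and the sign convention (offset (ν,µ) reads the value at (i+ν, j+µ)) are assumptions the fragment does not state.

```python
def shift_template(nu, mu, r=1):
    """E^{nu,mu}: (2r+1)x(2r+1) template that is 1 at offset (nu, mu), 0 elsewhere."""
    T = [[0] * (2 * r + 1) for _ in range(2 * r + 1)]
    T[nu + r][mu + r] = 1
    return T

def apply_template(img, T, r=1):
    """out(i,j) = sum over the neighborhood of T(k,l) * img(i+k, j+l), zero padded."""
    M, N = len(img), len(img[0])
    out = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            s = 0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    k, l = i + di, j + dj
                    if 0 <= k < M and 0 <= l < N:
                        s += T[di + r][dj + r] * img[k][l]
            out[i][j] = s
    return out
```

Convolving with E^{0,0} is the identity, and E^{1,0} copies each cell's neighbor one row down, which is how a shift can be expressed with the DT-CNN's own template machinery.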

6 | Fault Tolerant Design of Analogic CNN Templates and Algorithms - Part I: The Binary Output Case - Földesy, Kék, et al. - 1999
Citation Context: ...can be defined to calculate the gradients via the same hardware. Once we are able to calculate the gradients, various architectural possibilities for adaptation and plasticity are available (see [42], [43] and [44]) to use this information. Computing the DT-CNN’s gradient: In this chapter, discrete-time cellular neural networks are considered. The DT-CNN I consider resembles the DT-CNN described in [39]... |

6 | A stored program 2nd order/3-layer complex cell - Rekeczky, Roska, Rodriguez-Vazquez, et al. - 2000
Citation Context: ...are parallel-computing analog arrays, which are suitable for most of the computation needed. Adaptive sensing is one of the ideal applications for the planned CNN-type sensor-computers (see [45] and [55]), where both the 2D parallel architecture and the high speed are utilized, and the CNN-UM architecture is also supported. Another progress can be recognized in the development of camcorders and digit... |

5 | An Exact and Direct Analytical Method for the Design of Optimally Robust CNN Templates - Hanggi, Moschytz
Citation Context: ...troduction of cellular neural networks [9], the design and training of CNN has also been a major topic of research. A summary of template design methods can be found in [18] (see [19], [21], [22] and [20]). Supervised learning is only one of the CNN template design methods. It is a general method, which can be applied without special knowledge about the CNN. There are also special design methods that... |

5 | Efficient implementation of neighborhood logic for cellular neural network universal machine - Crounse, Fung, et al. - 1997
Citation Context: ...ce the introduction of cellular neural networks [9], the design and training of CNN has also been a major topic of research. A summary of template design methods can be found in [18] (see [19], [21], [22] and [20]). Supervised learning is only one of the CNN template design methods. It is a general method, which can be applied without special knowledge about the CNN. There are also special design meth... |

5 | A Learning Algorithm for Cellular Neural Networks (CNN) Solving Nonlinear Partial Differential Equations - Puffer, Tetzlaff, et al. - 1995 |

5 | Learning Evaluation Functions for Global Optimization (doctoral dissertation) - Boyan - 1998
Citation Context: ...l points. A possibility would be to make restarts of local searches intelligent by a technique which uses special knowledge related to CNN. We tried this possibility with the method of Andrew Boyan ([59]), but the results were ambiguous. The key was to find general features of templates which are efficient in learning. The other drawback of supervised learning is the difference between the real prob... |

4 | Implementation of Arbitrary Boolean Functions on the CNN Universal Machine - Nemes, Chua, et al. - 1998
Citation Context: ...y. Since the introduction of cellular neural networks [9], the design and training of CNN has also been a major topic of research. A summary of template design methods can be found in [18] (see [19], [21], [22] and [20]). Supervised learning is only one of the CNN template design methods. It is a general method, which can be applied without special knowledge about the CNN. There are also special desig... |

4 | Cellular Neural Network Design Using a Learning Algorithm - Zou, Schwarz, et al. - 1990
Citation Context: ...rd, (ii) uncoupled, (iii) coupled, binary input–binary output templates. The study of CNN template training (CNN learning) has also been an important topic ever since the first CNN conference, CNNA’90 [23]. A comprehensive summary of the early period of template design and learning can be found in [24] or [25]. Template training has been based on basically the same mean square error concept. However,... |

4 | Adaptive simulated annealing in CNN template learning - Rekeczky, Ushida - 1999
Citation Context: ...are investigated. As far as I know, there is no special gradient-based method for CNN training. On the other hand, several other training methods were already tested on CNN template training ([27], [28] and [29]). The topic of this dissertation is not the improvement of the gradient-based methods or other methods, but only the investigation of gradient computation for CNN. The advantage of supervised learnin... |

4 | A Learning Algorithm for the Dynamics of CNN with Nonlinear Templates Part I: Discrete-Time Case - Tetzlaff, Wolf - 1996
Citation Context: ...diagrammatic derivation of the reciprocal network, computing the gradients of the original network for arbitrary network structures. This approach applies to all networks, including CNN. In [31] and [32] a unified approach was presented for computing precise gradients of multilayer discrete-time CNNs with nonlinear templates (DT-CNN) and continuous-time CNNs (CT-CNN), respectively. However, the main dr... |

4 | Computer-Sensors: spatial-temporal computers for analog array signals, dynamically integrated with sensors - Roska - 1999
Citation Context: ...fined to calculate the gradients via the same hardware. Once we are able to calculate the gradients, various architectural possibilities for adaptation and plasticity are available (see [42], [43] and [44]) to use this information. Computing the DT-CNN’s gradient: In this chapter, discrete-time cellular neural networks are considered. The DT-CNN I consider resembles the DT-CNN described in [39]. Definiti... |

4 | Adaptive histogram equalization with cellular neural networks - Csapodi, Roska
Citation Context: ...pixel is modified so that an intensification and equalization is achieved. The most common method for image enhancement is histogram equalization. CNN techniques have already been used for this task [46], but it is difficult to realize them. The current work addresses simpler methods such as contrast and intensity equalization rather than histogram equalization. I do not address histogram equalizatio... |
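For reference, global histogram equalization, the baseline the excerpt contrasts with, maps each gray level through the normalized cumulative histogram. This sketch assumes integer levels 0..levels−1 and the usual cdf_min normalization; it is not the CNN realization of [46].

```python
def hist_equalize(img, levels=256):
    """Global histogram equalization via the CDF lookup table."""
    flat = [v for row in img for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:           # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:         # constant image: nothing to equalize
        return [row[:] for row in img]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

An already-uniform image is a fixed point of the mapping, while a low-contrast image is stretched to span the full gray-level range.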

4 | CNN Based Models for Color Vision and Visual Illusions - Zarándy, Orzó, et al. - 1999 |

3 | The Art of CNN Template Design - Zarándy - 1999
Citation Context: ...ient effectively. Since the introduction of cellular neural networks [9], the design and training of CNN has also been a major topic of research. A summary of template design methods can be found in [18] (see [19], [21], [22] and [20]). Supervised learning is only one of the CNN template design methods. It is a general method, which can be applied without special knowledge about the CNN. There are al... |

3 | Network Reciprocity: A Simple Approach to Derive Gradient Algorithms for Arbitrary Neural Network Structures - Wan, Beaufays - 1994
Citation Context: ...en known from general neural network theory for a long time. For recurrent networks, recurrent back-propagation was introduced, which applies to CNN as well. A general approach is network reciprocity ([33], [34] and [35]), which automates the diagrammatic derivation of the reciprocal network, computing the gradients of the original network for arbitrary network structures. This approach applies to all... |

2 | Implementation of binary and grayscale mathematical morphology on the CNN universal machine - Zarándy, Stoffels, et al. - 1998
Citation Context: ...tively. Since the introduction of cellular neural networks [9], the design and training of CNN has also been a major topic of research. A summary of template design methods can be found in [18] (see [19], [21], [22] and [20]). Supervised learning is only one of the CNN template design methods. It is a general method, which can be applied without special knowledge about the CNN. There are also special... |

2 | Design and learning with cellular neural networks - Nossek - 1994
Citation Context: ...training (CNN learning) has also been an important topic ever since the first CNN conference, CNNA’90 [23]. A comprehensive summary of the early period of template design and learning can be found in [24] or [25]. Template training has been based on basically the same mean square error concept. However, various kinds of minimization methods were used for optimization of the error function over the pa... |

2 | A Unified Approach to Derive Gradient Algorithms for Arbitrary Neural Network Structures - Beaufays, Wan - 1994
Citation Context: ...eneral neural network theory for a long time. For recurrent networks, recurrent back-propagation was introduced, which applies to CNN as well. A general approach is network reciprocity ([33], [34] and [35]), which automates the diagrammatic derivation of the reciprocal network, computing the gradients of the original network for arbitrary network structures. This approach applies to all networks, including... |

2 | Modern Peer-to-Peer File-Sharing over the Internet - unknown authors - 1999
Citation Context: ...site cells. My work applies gradient-based training specifically to CNN (CT-CNN is also addressed) and the method of gradient computation is detailed. The third paper is the work of T. Yang and L. O. Chua [38], which recognizes the possibility of realizing gradient computation for BTT with CNN. However, their work is related to the training of SRNs. 5. Adaptive sensing with CNN: In recent years, a revolu... |

2 | The Morpheus homepage - unknown authors - 1994
Citation Context: ...[43] and [44]) to use this information. Computing the DT-CNN’s gradient: In this chapter, discrete-time cellular neural networks are considered. The DT-CNN I consider resembles the DT-CNN described in [39]. Definition: A one-layer, linear, space-invariant DT-CNN is defined by Γ=(f,A,B,z,I,X0,U), where: − f is a (continuous and monotonically increasing) function R→R, the output function; − A and B are... |
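The definition Γ=(f,A,B,z,I,X0,U) quoted above suggests the standard one-layer update x(n+1) = A∗f(x(n)) + B∗u + z. The sketch below assumes this update rule, the common piecewise-linear saturation output f(v) = (|v+1| − |v−1|)/2, and zero boundary conditions; none of these are fixed by the fragment.

```python
def sat(v):
    """Assumed CNN output nonlinearity: piecewise-linear saturation to [-1, 1]."""
    return 0.5 * (abs(v + 1) - abs(v - 1))

def dtcnn_step(x, u, A, B, z, r=1):
    """One assumed DT-CNN iteration: x(n+1) = A * f(x(n)) + B * u + z (zero padding)."""
    M, N = len(x), len(x[0])
    y = [[sat(v) for v in row] for row in x]  # output image y = f(x)

    def conv(img, T, i, j):
        s = 0.0
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                k, l = i + di, j + dj
                if 0 <= k < M and 0 <= l < N:
                    s += T[di + r][dj + r] * img[k][l]
        return s

    return [[conv(y, A, i, j) + conv(u, B, i, j) + z
             for j in range(N)] for i in range(M)]
```

With a zero feedback template A and an identity-like B (center weight 1), a single step simply copies the input u plus the bias z, a convenient sanity check on the update rule.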

2 | The Napster homepage - unknown authors - 1994
Citation Context: ...to the case of the standard CT-CNN. Gradient computation, and consequently optimization, is strictly theoretically impossible in this case, since gradients are zero on large, non-optimal convex cones [41] and infinite at some of the edges of these cones. Therefore the continuation method [39] shall be used with the gradient method, which means that λ is tuned from lower values to greater values during o... |

2 | Space Variant Adaptive CNN - Fault Tolerance and Plasticity - Roska - 1997
Citation Context: ...ogram can be defined to calculate the gradients via the same hardware. Once we are able to calculate the gradients, various architectural possibilities for adaptation and plasticity are available (see [42], [43] and [44]) to use this information. Computing the DT-CNN’s gradient: In this chapter, discrete-time cellular neural networks are considered. The DT-CNN I consider resembles the DT-CNN described in... |

2 | Analogic CNN computing: Architectural, Implementational and Algorithmic Advances – a Review - Roska
Citation Context: ...me. CNNs are parallel-computing analog arrays, which are suitable for most of the computation needed. Adaptive sensing is one of the ideal applications for the planned CNN-type sensor-computers (see [45] and [55]), where both the 2D parallel architecture and the high speed are utilized, and the CNN-UM architecture is also supported. Another progress can be recognized in the development of camcorders... |

2 | CNNUC3: A Mixed-Signal 64x64 CNN Universal Chip - Linan, Espejo, Rodriguez-Vazquez, et al. - 1999 |