## Self Organization of a Massive Document Collection

Venue: IEEE Transactions on Neural Networks

Citations: 207 (14 self)

### BibTeX

@ARTICLE{Kohonen_selforganization,
  author  = {Teuvo Kohonen and Samuel Kaski and Krista Lagus and Jarkko Salojarvi and Vesa Paatero and Antti Saarela},
  title   = {Self Organization of a Massive Document Collection},
  journal = {IEEE Transactions on Neural Networks},
  year    = {2000},
  volume  = {11},
  number  = {3},
  pages   = {574--585}
}

### Abstract

This article describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the Self-Organizing Map (SOM) algorithm. As the feature vectors for the documents we use statistical representations of their vocabularies. The main goal in our work has been to scale up the SOM algorithm to be able to deal with large amounts of high-dimensional data. In a practical experiment we mapped 6,840,568 patent abstracts onto a 1,002,240-node SOM. As the feature vectors we used 500-dimensional vectors of stochastic figures obtained as random projections of weighted word histograms.

Keywords: Data mining, exploratory data analysis, knowledge discovery, large databases, parallel implementation, random projection, Self-Organizing Map (SOM), textual documents.

### Citations

3242 | Self-Organizing Maps
- Kohonen
- 2001

Citation Context: ...or large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted to the neural-network methods, e.g., the Self-Organizing Map (SOM) [9], [10], [11] that approximate an unlimited number of input data items by a finite set of models. A further advantage achieved by the SOM mapping is then that unlike in, say, multidimensional scaling, the SOM can ...

3124 | Introduction to Modern Information Retrieval
- Salton, McGill
- 1983

Citation Context: ...SOM2 system is depicted in Fig. 1. Below we first review some attempts to describe the textual contents of documents statistically. A. The primitive vector space model In the basic vector space model [38] the stored documents are represented as real vectors in which each component corresponds to the frequency of occurrence of a particular word in the document. Obviously one should provide the differen...
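The vector space model quoted above can be illustrated with a minimal sketch (the vocabulary, tokenizer, and sample text are invented for illustration; the paper itself works with weighted histograms over a 43,222-word vocabulary):

```python
import re
from collections import Counter

def term_frequency_vector(text, vocabulary):
    """Basic vector space model: one component per vocabulary word,
    holding that word's frequency of occurrence in the document."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return [counts[word] for word in vocabulary]

vocab = ["neural", "map", "document"]
vec = term_frequency_vector(
    "A neural map orders each document; the map is neural.", vocab)
# vec == [2, 2, 1]: "neural" and "map" occur twice, "document" once
```

In practice the raw frequencies are weighted (e.g., by inverse document frequency), which is the "providing the different words with weights" step the snippet breaks off at.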

2721 | Indexing by Latent Semantic Analysis
- Deerwester, Dumais, et al.
- 1990
Citation Context: ...om the histogram of the remaining factors has then a much smaller dimensionality while as much as possible is retained of the original histograms. This method is called latent semantic indexing (LSI) [39]. C. Randomly projected histograms We have shown earlier that the dimensionality of the document vectors can be reduced radically by a much simpler method than the LSI, by the random projection method...
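The random projection idea contrasted with LSI here can be sketched as follows. This is a toy illustration with a dense Gaussian projection matrix and invented dimensions (the paper's final system uses sparse random matrices and 500-dimensional targets):

```python
import math
import random

def random_projection_matrix(d, n, rng):
    """d x n matrix with i.i.d. Gaussian entries, scaled so that
    Euclidean distances are approximately preserved in expectation."""
    s = 1.0 / math.sqrt(d)
    return [[rng.gauss(0.0, s) for _ in range(n)] for _ in range(d)]

def project(R, x):
    # y = R x: reduce an n-dimensional histogram to d dimensions
    return [sum(r * xi for r, xi in zip(row, x)) for row in R]

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

rng = random.Random(0)
n, d = 1000, 100                      # original and reduced dimensionality
R = random_projection_matrix(d, n, rng)
x = [rng.random() for _ in range(n)]
y = [rng.random() for _ in range(n)]
# Pairwise distances shrink much less than the dimensionality does:
ratio = dist(project(R, x), project(R, y)) / dist(x, y)
```

Unlike LSI, no eigendecomposition is needed; the matrix is generated once at random, which is what makes the method so much cheaper.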

1093 | Self-organized formation of topologically correct feature maps
- Kohonen
- 1982

Citation Context: ...tion [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted to the neural-network methods, e.g., the Self-Organizing Map (SOM) [9], [10], [11] that approximate an unlimited number of input data items by a finite set of models. A further advantage achieved by the SOM mapping is then that unlike in, say, multidimensional scaling, ...

645 | Vector Quantization
- Gray
- 1984
Citation Context: ...ented in [25].) In the following we formulate the Batch-Map SOM principle in such a way that it is reduced to two familiar computational steps, namely, that of the classical Vector Quantization [26], [27], [28] and that of smoothing of numerical values over a two-dimensional grid. The particular implementation in this work is based on these steps. Let V_i be the set of all x(t) that have m_i as their ...
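The two steps named in this passage can be sketched as a toy Batch-Map iteration (invented data and grid; not the paper's parallel implementation): step 1 is a plain vector quantization pass assigning each input to its nearest model, and step 2 replaces each model by the mean of all inputs whose winners lie within its grid neighborhood.

```python
import math

def batch_map_step(data, models, coords, radius=1.0):
    """One Batch-Map iteration: VQ assignment, then smoothing of the
    resulting Voronoi sets over the map grid."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Step 1 (vector quantization): collect the Voronoi set V_i of each m_i
    voronoi = [[] for _ in models]
    for x in data:
        winner = min(range(len(models)), key=lambda i: d2(x, models[i]))
        voronoi[winner].append(x)
    # Step 2 (smoothing): m_i <- mean of all x whose winner node lies
    # within `radius` of node i on the grid
    new_models = []
    for i, ci in enumerate(coords):
        pooled = [x for j, cj in enumerate(coords)
                  if math.dist(ci, cj) <= radius
                  for x in voronoi[j]]
        if pooled:
            new_models.append([sum(col) / len(pooled) for col in zip(*pooled)])
        else:
            new_models.append(models[i])   # empty neighborhood: keep m_i
    return new_models

# Three nodes on a line with 1-D model vectors; one step pulls each model
# toward the mean over its grid neighborhood (approx. [0.05], [0.5], [0.95]).
data = [[0.0], [0.1], [0.9], [1.0]]
models = [[0.0], [0.5], [1.0]]
coords = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
updated = batch_map_step(data, models, coords)
```

Iterating this step until the winner assignments stop changing is the convergence condition the surrounding text describes.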

425 | Two-level morphology: A general computational model for word-form recognition and production
- Koskenniemi
- 1983

Citation Context: ...tical symbols and numbers were converted into special "dummy" symbols. The whole vocabulary contained 733,179 different words (base forms). All words were converted to their base form using a stemmer [50]. The words occurring less than 50 times in the whole corpus, as well as a set of common words in a stopword list of 1,335 words were removed. The remaining vocabulary consisted of 43,222 words. Final...

362 | Towards optimal feature selection
- Koller, Sahami
- 1996
Citation Context: ... be described by a set of features. If the purpose were to assign the documents into prescribed classes, the selection of the features could be optimized for maximum classification accuracy (cf. e.g. [30]). The goal in our work, however, was unsupervised classification, in which the classes are not known a priori; the documents can only be clustered according to their detailed topical similarities...

288 | Multidimensional scaling
- Kruskal, Wish
- 1978

Citation Context: ...multivariate analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might b...

249 | Latent semantic indexing: a probabilistic analysis
- Papadimitriou, Raghavan, et al.
Citation Context: ...e decreasing dimensionality of the document vectors, the time needed to classify a document is decreased radically. It has recently been suggested that the random projection [42] or similar methods [43] could be used for reducing the computational complexity of the LSI as well. [January 25, 2000 DRAFT. IEEE Transactions on Neural Networks, Vol. 11, No. 3, May 2000] D. Histograms on the word category...

205 | Multidimensional scaling. I. Theory and method
- Torgerson
- 1952

Citation Context: ... and multivariate analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest mi...

200 | Asymptotically optimal block quantization
- Gersho
- 1979

Citation Context: ...n presented in [25].) In the following we formulate the Batch-Map SOM principle in such a way that it is reduced to two familiar computational steps, namely, that of the classical Vector Quantization [26], [27], [28] and that of smoothing of numerical values over a two-dimensional grid. The particular implementation in this work is based on these steps. Let V_i be the set of all x(t) that have m_i as ...

174 | A self-organizing semantic map for information retrieval
- Lin, Soergel, et al.
- 1991

Citation Context: ...teresting documents on the map using content addressing. Since 1991, there have existed attempts to apply the SOM for the organization of texts, based on word histograms regarded as the input vectors [12], [13], [14]. In order to avoid their dimensionalities from growing too large, the vocabularies were limited manually. However, to classify masses of natural texts, it is unavoidable to refer to a rat...

154 | Self-organizing semantic maps
- Ritter, Kohonen
- 1989

Citation Context: ...ressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "WEBSOM." Its later phases have been explained, e.g., in Honkela et al. [37]. Certain reasons expounded below, however, recently led us to abandon the "semantic maps" and ...

138 | The Self-Organizing Map Program Package
- Kohonen, Hynninen, et al.
- 1995
Citation Context: ...ome statistical models such as word... It has been shown that a random choice for the m_i is possible. However, faster convergence is obtained if the initial values of the m_i are even roughly ordered [29]. We do, however, monitor the classification accuracy with respect to the major patent classes (subsections) in order to be able to compare the different algorithms. Such an accuracy, however, is on...

133 | WEBSOM -- self-organizing maps of document collections
- Honkela, Kaski, et al.
Citation Context: ...heir eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a major project that was laun...

117 | Dimensionality reduction by random mapping: Fast similarity computation for clustering
- Kaski
Citation Context: ... projected histograms We have shown earlier that the dimensionality of the document vectors can be reduced radically by a much simpler method than the LSI, by the random projection method [20], [40], [41], without essentially losing the power of discrimination between the documents. Experimental results that prove this will be given below in Sec. III-E and in Table I. On the other hand, the computatio...

115 | Vector quantization in speech coding
- Makhoul, Roucos, et al.
- 1985

Citation Context: ...in [25].) In the following we formulate the Batch-Map SOM principle in such a way that it is reduced to two familiar computational steps, namely, that of the classical Vector Quantization [26], [27], [28] and that of smoothing of numerical values over a two-dimensional grid. The particular implementation in this work is based on these steps. Let V_i be the set of all x(t) that have m_i as their closes...

99 | Discussion of a set of points in terms of their mutual distances
- Young, Householder
- 1938

Citation Context: ...s [1] and multivariate analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable intere...

97 | Data Exploration Using Self-Organizing Maps
- Kaski
- 1997
Citation Context: ...ndomly projected histograms We have shown earlier that the dimensionality of the document vectors can be reduced radically by a much simpler method than the LSI, by the random projection method [20], [40], [41], without essentially losing the power of discrimination between the documents. Experimental results that prove this will be given below in Sec. III-E and in Table I. On the other hand, the comp...

86 | Clustering in large graphs and matrices
- Drineas, Kannan, et al.
- 1999
Citation Context: ...ce method, while with the decreasing dimensionality of the document vectors, the time needed to classify a document is decreased radically. It has recently been suggested that the random projection [42] or similar methods [43] could be used for reducing the computational complexity of the LSI as well. ...D. Histogr...

73 | Map displays for information retrieval
- Lin
- 1997
Citation Context: ... measure for comparing different algorithms. ...histograms or their compressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "WEBSOM." Its ...

70 | Newsgroup Exploration with WEBSOM method and browsing
- Honkela, Kaski, et al.
- 1996
Citation Context: ... of the histogram vectors by their eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of...

70 | Internet Categorization and Search: A Self-Organizing Approach
- Chen, Schuffels, et al.
- 1996

Citation Context: ...ect relative measure for comparing different algorithms. ...histograms or their compressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "W...

63 | Self-organizing maps of document collections: a new approach to interactive exploration
- Lagus, Honkela, et al.
- 1996
Citation Context: ...vectors by their eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a major project t...

55 | Self-organization of very large document collections
- Kohonen, Kaski, et al.
- 2003
Citation Context: ... the straightforward random projection of the word histograms. E. Validation of the random projection method by small-scale preliminary experiments Before describing the new encoding of the documents [44] used in this work, some preliminary experimental results that motivate its idea must be presented. Table I compares the three projection methods discussed above in which the model vectors, except in ...

54 | Keyword selection method for characterizing text document maps
- Lagus, Kaski
- 1999

Citation Context: ... maps we have used three zooming levels before reaching the documents. To provide guidance to the exploration, an automatic method has been utilized for selecting keywords to characterize map regions [51]. These keywords, to be regarded only as some kind of landmarks on the map display, serve as navigation cues during the exploration of the map, as well as provide information ...

49 | Progress with the tree-structured self-organizing map. ECAI 1994
- Koikkalainen

Citation Context: ...Fig. 3. Finding the new winner in the vicinity of the old one, whereby the old winner is directly located by a pointer. The pointer is then updated. Koikkalainen [47], [48] has suggested a similar speedup method for a search-tree structure. C.2 Initialization of the pointers When the size (number of grid nodes) of the maps is increased stepwise during learning usi...
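The speedup this context describes, locating the old winner through a stored pointer and searching for the new winner only in its grid vicinity, can be sketched as follows. The names, grid layout, and 1-D model vectors are illustrative; the shortcut is valid once the map is roughly ordered, so the winner moves little between training sweeps.

```python
def local_winner_search(x, models, neighbors, pointer):
    """Search for the winner only among the stored previous winner
    (the pointer) and its grid neighbors, instead of over all nodes.
    Returns the new winner index, which becomes the updated pointer."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    candidates = [pointer] + neighbors[pointer]
    return min(candidates, key=lambda i: d2(x, models[i]))

# Four nodes on a line with 1-D model vectors (invented values)
models = [[0.0], [1.0], [2.0], [3.0]]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
winner = local_winner_search([1.4], models, neighbors, pointer=2)
# the search moves the pointer from node 2 to the closer node 1
```

The cost per input drops from O(number of nodes) to O(neighborhood size), which matters on a million-node map.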

47 | Very large two-level SOM for the browsing of newsgroups
- Kohonen, Kaski, et al.
- 1996
Citation Context: ...ogram vectors by their eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a major pro...

45 | Exploratory Data Analysis
- Tukey
- 1977

Citation Context: ...ed exploratory data analysis or knowledge discovery in databases, often colloquially called "data mining." B. The scope of this work There exist several classical methods in exploratory data analysis [1] and multivariate analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], ...

41 | Exploration of very large databases by self-organizing maps
- Kohonen
- 1997
Citation Context: ...s by their eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a major project that wa...

40 | Creating an order in digital libraries with self-organizing maps
- Kaski, Honkela, et al.
- 1996
Citation Context: ...e histogram vectors by their eigenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a maj...

33 | Theory of multidimensional scaling
- Leeuw, Heiser
- 1982
Citation Context: ...variate analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be dev...

33 | Self organization of a massive text document collection
- Kohonen, Kaski, et al.
- 1999

Citation Context: ...alability of the SOM method. While about 5,000 documents were still mapped in our first publications in 1996 [15], [16], [17], [18], we finally increased the database to about seven million documents [22]. There do not exist many sources of freely available information of this size. In order that our work would also lead to a useful application, we decided to use the corpus of all the patent abstracts...

30 | A Scalable Self-Organizing Map Algorithm for Textual Classification: A Neural Network Approach to Automatic Thesaurus Generation
- Roussinov, Chen
- 1998
Citation Context: ...ach input vector, and thereafter consider only those components when computing the distances. Related, more complex methods have been proposed for computing Euclidean distances between sparse vectors [45]. However, the model vectors must then be stored in the original high-dimensional format, for which we have no memory capacity; we must use low-dimensional models. B. Estimation of larger maps based o...

28 | WEBSOM for textual data mining
- Lagus, Honkela, et al.
- 1999
Citation Context: ...igenvectors (the latent semantic indexing described in Sec. III-B), 2. Clustering of words into semantic categories, as was done in our earlier WEBSOM publications [15], [16], [17], [18], [19], [20], [21], 3. Reduction of the dimensionality of the histogram vectors by a random projection method, as done in this work. The present article describes the final phases of a major project that was launched i...

23 | Clustering, Taxonomy, and Topological Maps of Patterns
- Kohonen
- 1982

Citation Context: ...[8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted to the neural-network methods, e.g., the Self-Organizing Map (SOM) [9], [10], [11] that approximate an unlimited number of input data items by a finite set of models. A further advantage achieved by the SOM mapping is then that unlike in, say, multidimensional scaling, the SO...

23 | Text classification with self-organizing maps: Some lessons learned
- Merkl
- 1998

Citation Context: ...re for comparing different algorithms. ...histograms or their compressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "WEBSOM." Its later ...

23 | Things you haven't heard about the self-organizing map
- Kohonen
- 1993

Citation Context: ...esenting each component with eight bits only. If the dimensionality of the data vectors is large, the statistical accuracy of the distance computations is still sufficient as shown in earlier studies [49]. The sufficient accuracy can be maintained during the computation if a suitable amount of noise is added to each new value of a model vector before quantizing it. D. Performance evaluation of the new...

20 | Unsupervised learning and the information retrieval problem
- Scholtes
- 1991

Citation Context: ...ing documents on the map using content addressing. Since 1991, there have existed attempts to apply the SOM for the organization of texts, based on word histograms regarded as the input vectors [12], [13], [14]. In order to avoid their dimensionalities from growing too large, the vocabularies were limited manually. However, to classify masses of natural texts, it is unavoidable to refer to a rather la...

20 | Fast deterministic self-organizing maps
- Koikkalainen
- 1995

Citation Context: ...Fig. 3. Finding the new winner in the vicinity of the old one, whereby the old winner is directly located by a pointer. The pointer is then updated. Koikkalainen [47], [48] has suggested a similar speedup method for a search-tree structure. C.2 Initialization of the pointers When the size (number of grid nodes) of the maps is increased stepwise during learning using the...

16 | Comparison of SOM point densities based on different criteria
- Kohonen
- 1999

Citation Context: ...||r_i - r_c(x)|| exceeds a certain limit. It has been thought that the SOM algorithm might be derivable from some objective function that describes the average quantization error. In a recent study [23] it was shown that a different point density of the model vectors is thereby obtained. In this work we use the original SOM algorithm, which is computationally lightest of all variants. This aspect wa...

13 | New developments of learning vector quantization and the self-organizing map
- Kohonen
- 1992

Citation Context: ...hm, which is computationally lightest of all variants. This aspect was most decisive in this very large implementation. In an attempt to accelerate the computation of the SOM, the Batch Map principle [24] has turned out to be computationally very effective. The implementation of the present method is based on the Batch Map. Assuming that the convergence to some ordered state is true, we require that t...

13 | Information visualization for collaborative computing
- Chen, Nunamaker, et al.
- 1998

Citation Context: ... comparing different algorithms. ...histograms or their compressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "WEBSOM." Its later phases...

11 | Multidimensional scaling and its applications
- Wish, Carroll
- 1982

Citation Context: ...te analysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted ...

7 | Improving the learning speed in topological maps of patterns
- Rodriques, Almeida
- 1990

Citation Context: ...e low-dimensional models. B. Estimation of larger maps based on carefully constructed smaller ones Several suggestions for increasing the number of nodes of the SOM during its construction (cf., e.g. [46]) have been made. The new idea presented below is to estimate good initial values for the model vectors of a ve...

6 | Convergence and ordering of Kohonen's batch map
- Cheng
- 1997

Citation Context: ...ration cycles, after the discrete-valued indices c(x) have settled down and are no longer changed in further iterations. (The convergence proof of a slightly different Batch Map has been presented in [25].) In the following we formulate the Batch-Map SOM principle in such a way that it is reduced to two familiar computational steps, namely, that of the classical Vector Quantization [26], [27], [28] an...

5 | The representation of semantic similarity between documents by using maps: Application of an artificial neural network to organize software libraries
- Merkl, Tjoa
- 1994

Citation Context: ...cuments on the map using content addressing. Since 1991, there have existed attempts to apply the SOM for the organization of texts, based on word histograms regarded as the input vectors [12], [13], [14]. In order to avoid their dimensionalities from growing too large, the vocabularies were limited manually. However, to classify masses of natural texts, it is unavoidable to refer to a rather large vo...

5 | Neural networks and information extraction in astronomical information retrieval
- Lesteven, Ponçot, et al.
- 1996

Citation Context: ...lative measure for comparing different algorithms. ...histograms or their compressed forms [12], [13], [14], [31], [32], [33], [34], [35]. In a series of earlier works we replaced the word histograms by histograms formed over word clusters using "self-organizing semantic maps" [36]. This system was called the "WEBSOM....

3 | A nonlinear mapping for data structure analysis
- Sammon Jr.
- 1969

Citation Context: ... of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted to the neural-network methods, e.g., the Self-Organizing Map (SOM) [9], [10...

1 | Multidimensional scaling, in Encyclopedia of Statistical Sciences
- Young
- 1985

Citation Context: ...alysis that are able to form illustrative two-dimensional projections of distributions of items in high-dimensional data spaces. One of them is multidimensional scaling (MDS) [2], [3], [4], [5], [6], [7] and its frequently applied version is called Sammon's projection [8]. For large amounts of data items these mappings are computationally heavy. Therefore, considerable interest might be devoted to th...