Published in: Neural Computation 18, 446-469 (2006)
arXiv:cond-mat/0609517v1 [cond-mat.dis-nn] 20 Sep 2006

Magnification Control in Self-Organizing Maps and Neural Gas

Thomas Villmann
Clinic for Psychotherapy, University of Leipzig, 04107 Leipzig, Germany

Jens Christian Claussen
Institute of Theoretical Physics and Astrophysics, Christian-Albrecht University Kiel, 24098 Kiel, Germany

We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.

1 Introduction

Vector quantization is an important task in data processing, pattern recognition, and control (Fritzke, 1993; Haykin, 1994; Linde, Buzo, & Gray, 1980; Ripley, 1996). A large number of different types have been discussed (for an overview, refer to Haykin, 1994; Kohonen, 1995; Duda & Hart, 1973). Neural maps are a popular type of neural vector quantizer, commonly used in, for example, data visualization, feature extraction, principal component analysis, image processing, classification tasks, and acceleration of common vector quantization (de Bodt, Cottrell, Letremy, & Verleysen, 2004). Well-known approaches are the Self-Organizing Map (SOM) (Kohonen, 1995), the neural gas (NG) (Martinetz, Berkovich, & Schulten, 1993), the elastic net (EN) (Durbin & Willshaw, 1987), and the generative topographic mapping (GTM) (Bishop, Svensén, & Williams, 1998).

In vector quantization, data vectors v ∈ R^d are represented by a few codebook vectors (weight vectors) w_i, where i is an arbitrary index. Several criteria exist to evaluate the quality of a vector quantizer. The most common one is the squared reconstruction error. However, other quality criteria are also known, for instance topographic quality for neighborhood-preserving mapping approaches (Bauer & Pawelzik, 1992; Bauer, Der, & Villmann, 1999), optimization of mutual information (Linsker, 1989), and other criteria (for an overview, see Haykin, 1994). Generally, a faithful representation of the data space by the codebook is desired. This property is closely related to the so-called magnification, which describes the relation between data density and weight vector density for a given model. Knowledge of the magnification of a map is essential for a correct interpretation of its output (Hammer & Villmann, 2003). In addition, explicit magnification control is a desirable property of learning algorithms if, depending on the respective application, only sparsely covered regions of the data space have to be emphasized or, conversely, suppressed. The magnification can be explicitly expressed for several vector quantization models.
For these approaches, the magnification can usually be expressed by a power law between the codebook vector density ρ and the data density P. The respective exponent is called the magnification exponent or magnification factor. As explained in more detail below, the magnification is also related to other properties of the map, for example, the reconstruction error as well as the mutual information. Hence, controlling the magnification also influences these properties.

In biologically motivated approaches, magnification can also be seen in the context of information representation in brains, for instance, in the sensorimotor cortex (Ritter, Martinetz, & Schulten, 1992). Magnification and its control can be related to biological phenomena like the perceptual magnet effect, which refers to the fact that rarely occurring stimuli are differentiated with high precision, whereas frequent stimuli are distinguished only in a rough manner (Kuhl, 1991; Kuhl, Williams, Lacerda, Stevens, & Lindblom, 1992). It is a kind of attention-based learning with inverted magnification; that is, rarely occurring input samples are emphasized by an increased learning gain (Der & Herrmann, 1992; Herrmann, Bauer, & Der, 1994). This effect is also beneficial in technical systems. In remote-sensing image analysis, for instance, rarely occurring ground cover classes should be detected, whereas common (frequent) classes with broad variance should be suppressed (Merényi & Jain, 2004; Villmann, Merényi, & Hammer, 2003). Another technical application of magnification control is robotics, for the accurate description of dangerous navigation states (Villmann & Heinze, 2000).

In this article, we concentrate on a general framework for magnification control in SOM and NG. In this context, we briefly review the most important approaches. One approach for SOM is generalized and afterward transferred to NG. For this purpose, we first give the basic notations, followed in section 3 by a more detailed description of magnification and of early approaches related to the topic of magnification control, including a unified approach for controlling strategies. The magnification control approaches for SOM are described according to the unified framework in section 4, whereby one of them is significantly extended. The same procedure is applied to NG in section 5. Again, one of the control approaches presented in this section is new. A short discussion concludes the article.

2 Basic Concepts and Notations in SOM and NG

In general, neural maps project data vectors v from a (possibly high-dimensional) data manifold D ⊆ R^d onto a set A of neurons i, formally written as Ψ_{D→A} : D → A. Each neuron i is associated with a pointer w_i ∈ R^d, all of which establish the set W = {w_i}_{i∈A}. The mapping is a winner-take-all rule; that is, a stimulus vector v ∈ D is mapped onto the neuron s ∈ A whose pointer w_s is closest to the actually presented stimulus vector v,

    \Psi_{D \to A}: v \mapsto s(v) = \operatorname*{argmin}_{i \in A} \|v - w_i\| .    (2.1)

The neuron s is called the winner neuron. The set R_i = {v ∈ D | Ψ_{D→A}(v) = i} is called the (masked) receptive field of neuron i. The weight vectors are adapted during the learning process such that the data distribution is represented.

For further investigation, we describe SOM and NG, the neural maps in our focus, in more detail. During the adaptation process, a sequence of data points v ∈ D is presented to the map according to the data distribution P(D).
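For concreteness, the winner-take-all rule of equation (2.1) can be stated in a few lines of code. The following is a minimal sketch and not code from the paper; representing the codebook as a NumPy array of shape (N, d) is our assumption.

# Minimal sketch of the winner-take-all rule, equation (2.1).
# Assumption: the codebook W is a NumPy array with one pointer w_i per row.
import numpy as np

def winner(v, W):
    """Return s(v) = argmin_i ||v - w_i||, the index of the best-matching neuron."""
    return int(np.argmin(np.linalg.norm(W - v, axis=1)))

# usage: three codebook vectors in R^2 and one stimulus vector
W = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v = np.array([0.9, 0.2])
print(winner(v, W))  # -> 1, since w_1 = (1, 0) is closest to v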
For each presented data point, the most proximate neuron s according to equation (2.1) is determined, and the pointer w_s, as well as all pointers w_i of neurons in the neighborhood of s, are shifted toward v according to

    \Delta w_i = \epsilon\, h(i, v, W)\, (v - w_i) .    (2.2)

The property of "being in the neighborhood of s" is captured by the neighborhood function h(i, v, W). For the NG, the neighborhood function is defined as

    h_\lambda(i, v, W) = \exp\!\left(-\frac{k_i(v, W)}{\lambda}\right) ,    (2.3)

where k_i(v, W) yields the number of pointers w_j for which the relation ||v − w_j|| < ||v − w_i|| is valid (Martinetz et al., 1993); in particular, h_\lambda(s, v, W) = 1.0. In the case of the SOM, the set A of neurons has a topological structure, usually chosen as a hypercubic or hexagonal lattice. Each neuron i has a fixed position r(i), and the neighborhood function has the form

    h_\sigma(i, v, W) = \exp\!\left(-\frac{\|r(i) - r(s(v))\|_A^2}{2\sigma^2}\right) .    (2.4)

In contrast to the NG, the neighborhood function of the SOM is evaluated in the output space A according to its topological structure. This difference causes the significantly different properties of the two algorithms. For the SOM, there does not exist any energy function such that the adaptation rule follows a gradient descent (Erwin, Obermayer, & Schulten, 1992). Moreover, the convergence proofs are valid only for the one-dimensional setting (Cottrell, Fort, & Pages, 1998; Ritter et al., 1992). The introduction of an energy function leads to different dynamics, as in the EN (Durbin & Willshaw, 1987), or to a new winner determination rule (Heskes, 1999). The advantage of the SOM is the ordered topological structure of the neurons in A. In contrast, in the original NG such an order is not given. One can extend the NG to the topology representing network (TRN) such that topological relations between neurons are established during learning, although they generally do not achieve the simple structure of SOM lattices (Martinetz & Schulten, 1994). Finally, an important advantage of the NG is that the adaptation dynamics of the weight vectors follows a potential-minimizing dynamics (Martinetz et al., 1993).

3 Magnification and Magnification Control in Vector Quantization

3.1 Magnification in Vector Quantization. Usually, vector quantization aims to minimize the reconstruction error RE = \sum_i \int_{R_i} \|v - w_i\|^2 P(v)\, dv. However, other quality criteria are also known, for instance, topographic quality (Bauer & Pawelzik, 1992; Bauer et al., 1999). More generally, one can consider the generalized distortion error

    E_\gamma = \int_D \|w_s - v\|^\gamma P(v)\, dv .    (3.1)

This error is closely related to other properties of the (neural) vector quantizer. One important property is the achieved weight vector density ρ(w) after learning in relation to the data density P(D). Generally, for vector quantizers one finds the relation

    \rho(w) \propto P(w)^\alpha    (3.2)

after the converged learning process (Zador, 1982). The exponent α is called the magnification exponent or magnification factor. The magnification is coupled with the generalized distortion error (3.1) by

    \alpha = \frac{d}{d + \gamma} .    (3.3)

Table 1: Magnification of Different Neural Maps and Vector Quantization Approaches.
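To tie the preceding formulas together, the following minimal sketch, which is our own illustration and not code from the paper, performs one adaptation step (2.2) once with the NG neighborhood (2.3) and once with the SOM neighborhood (2.4), and evaluates the magnification exponent of equation (3.3). The lattice layout, parameter values, and function names are assumptions chosen only for this example.

# Minimal sketch of one adaptation step for NG and SOM, equations (2.2)-(2.4),
# plus the magnification exponent of equation (3.3). All parameter values,
# the 2 x 2 lattice, and the function names are illustrative assumptions.
import numpy as np

def ng_neighborhood(v, W, lam):
    """Equation (2.3): h_lambda(i) = exp(-k_i / lambda), with k_i the number
    of pointers w_j closer to v than w_i (the distance rank of w_i)."""
    dists = np.linalg.norm(W - v, axis=1)
    ranks = np.argsort(np.argsort(dists))  # k_i, assuming no exact ties
    return np.exp(-ranks / lam)

def som_neighborhood(s, positions, sigma):
    """Equation (2.4): h_sigma(i) = exp(-||r(i) - r(s)||^2 / (2 sigma^2)),
    evaluated in the output lattice A, not in the data space."""
    lattice_dist = np.linalg.norm(positions - positions[s], axis=1)
    return np.exp(-lattice_dist**2 / (2.0 * sigma**2))

def adapt(v, W, h, eps):
    """Equation (2.2): w_i <- w_i + eps * h_i * (v - w_i), applied to all neurons."""
    return W + eps * h[:, None] * (v - W)

def magnification_exponent(d, gamma):
    """Equation (3.3): alpha = d / (d + gamma)."""
    return d / (d + gamma)

# usage: four neurons in R^2 on a 2 x 2 SOM lattice, one stimulus vector
rng = np.random.default_rng(0)
W = rng.random((4, 2))
positions = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
v = np.array([0.3, 0.7])
s = int(np.argmin(np.linalg.norm(W - v, axis=1)))        # winner, equation (2.1)
W_ng = adapt(v, W, ng_neighborhood(v, W, lam=1.0), eps=0.1)
W_som = adapt(v, W, som_neighborhood(s, positions, sigma=1.0), eps=0.1)
print(magnification_exponent(d=2, gamma=2))              # -> 0.5

In an actual training run, this step would be repeated over many presented stimuli while λ, σ, and ε are annealed; the sketch shows only a single update and does not reproduce the convergence behavior discussed above.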
