entropy — Article

Semiotic Aggregation in Deep Learning

Bogdan Mușat 1,2,* and Răzvan Andonie 1,3

1 Department of Electronics and Computers, Transilvania University, 500036 Brașov, Romania
2 Xperi Corporation, 3025 Orchard Parkway, San Jose, CA 95134, USA
3 Department of Computer Science, Central Washington University, Ellensburg, WA 98926, USA; [email protected]
* Correspondence: [email protected]

Received: 3 November 2020; Accepted: 27 November 2020; Published: 3 December 2020

Abstract: Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can provide insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease in spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes that take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics to the analysis and interpretation of deep neural networks.

Keywords: deep learning; spatial entropy; saliency maps; semiotics; convolutional neural networks

1. Introduction

Convolutional neural networks (CNNs) were first made popular by Lecun et al.
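To make the abstract's central quantity concrete: below is a minimal sketch of measuring the entropy of a saliency map, assuming the simplest variant in which the map's activations are normalized into a probability distribution over pixels and Shannon entropy is taken over that distribution. The paper's actual spatial entropy definition may incorporate local neighborhood structure; the function name `spatial_entropy` and the toy maps are illustrative, not the authors' implementation.

```python
import numpy as np

def spatial_entropy(saliency: np.ndarray) -> float:
    """Shannon entropy (in bits) of a saliency map treated as a
    probability distribution over its cells. Simplified illustration;
    the paper's definition may additionally account for spatial layout."""
    s = np.abs(saliency).astype(np.float64)
    p = s / s.sum()            # normalize activations to a distribution
    p = p[p > 0]               # drop zero-probability cells (0 log 0 := 0)
    h = -(p * np.log2(p)).sum()
    return float(h) if h > 0 else 0.0

# A uniform map has maximal entropy; a map whose "signs" have been
# aggregated into a few strong responses has lower entropy.
flat = np.ones((8, 8))                       # uniform saliency
peaked = np.zeros((8, 8)); peaked[4, 4] = 1  # all saliency in one cell
print(spatial_entropy(flat))    # 6.0 (log2 of 64 cells)
print(spatial_entropy(peaked))  # 0.0
```

Under this reading, superization between successive layers would show up as a drop in this entropy from one layer's saliency map to the next.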