The Meaning of Things As a Concept in a Strong AI Architecture
Alexey Redozubov[0000-0002-6566-4102], Dmitry Klepikov[0000-0003-3311-0014]

TrueBrainComputing, Moscow, Russia
[email protected]

Abstract. Artificial intelligence is becoming an integral part of human life. At the same time, the modern, widely used approaches, which work successfully due to the availability of enormous computing power, are based on ideas about the work of the brain suggested more than half a century ago. The proposed model describes the general principles of information processing by the human brain, taking into account the latest achievements. The neuroscientific grounding of this model and its applicability to the creation of AGI, or Strong AI, are discussed in this article. In this model, the cortical minicolumn is the primary computing processor, which works with the semantic description of information. The minicolumn transforms incoming information into its interpretation within a specific context. In this way, parallel verification of hypothetical interpretations of information is performed by comparing them with the information in the memory of each minicolumn of a cortical zone; at the same time, each significant context defines an information transformation rule. The meaning of information is defined as an interpretation that is close to the information available in the memory of a minicolumn. Behavior is the result of modeling possible situations. Using this approach will allow the creation of a strong AI or AGI.

Keywords: Meaning of information, Artificial general intelligence, Strong AI, Brain, Cerebral cortex, Semantic memory, Information waves, Contextual semantics, Cortical minicolumns, Context processor, Hippocampus, Membrane receptors, Cluster of receptors, Dendrites.

1 Introduction

1.1 Modern neural networks and the brain - are they comparable?

There are several fundamental open questions in the field of information, the meaning of information, and intelligence.
In this paper, it is shown that, based on modern knowledge about the work of the brain [1], it is already possible to build a general theory of how the brain works. We have a model of this; it can be used to create an AGI, or strong AI, and this seems to be the best way. Arguing that strong artificial intelligence is human-like, we have to ask a further question: what does human-like mean? Many things have been invented in the field of neural networks, and they solve a lot of problems. But how close are modern neural networks to the brain? This question divides into two others. First, how close architecturally are the formal neurons used in neural networks - their configurations, contacts, connections, and training principles - to the arrangement of the neurons of the brain and its structures? Second, are the ideology of learning and the logic of neural networks comparable to the logic of the brain? Something can be reproduced: brain objects can be modeled on a computer in the form of mathematical models, and one can assume that they more or less reflect reality. Nevertheless, whether neural networks reflect any principles of brain function is still an open question.

1.2 Difficulties in determining the essence of a thing

When operating with information, we introduce terminology and determine what we are dealing with. Two and a half thousand years ago, the Greek philosophers Socrates, Plato, and Aristotle tried to storm what is probably the main problem of philosophy: to formulate what a phenomenon is in general. What are a thing and an object? Some concepts arose around the fact that when we deal with a thing, we can observe its external signs. Further, a paradigm was introduced: the essence of things. Each thing has certain external signs that seem to define it. In addition, there is a certain essence that says something more specific about the thing, but which is never expressed in words.
If the essence of a thing could be formulated in the same terms, the same words in which its external signs are formulated, then there would be no questions; everything would come down to the same thing. It turns out that without answering this question, it is impossible to answer the question of human thinking.

1.3 Inability to define essence through signs

Returning to the real world, it suddenly turns out that everything is much more complicated. The name of this trouble is well known: a combinatorial explosion. When we are dealing not with two or three phenomena but with thousands or tens of thousands of them, and when these phenomena can take an incredible number of different forms, it turns out that there is not enough computing capacity to list everything. Most importantly, if we try to apply this to a person, a whole lifetime would not be enough to train the human brain using the methods implemented by a neural network. Moreover, we know that a person does not need to be taught for a long time. Try to do this with a neural network, and it turns out that even if the neural network tries to remember all the signs, there will be a combinatorial explosion. How can this combinatorial explosion be avoided? How does the brain do it?

2 The meaning of things

2.1 The transition from convolutional networks to memory and transformation

What is meaning? It is clear that the question of the meaning of things can be formulated differently, as a question of strong generalization. It was once formulated by Frank Rosenblatt, who created the perceptron, the first neural network [2]. This question has always been very acute for neural networks [3], which now take the form of convolutional networks. Convolutional networks have a convolution kernel; this is a kind of memory that stores the letters, numbers, and images that have been remembered. It can be applied to different parts of the image.
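The idea of a convolution kernel as a small stored memory matched against every position of an input can be illustrated with a minimal sketch (not from the paper; the kernel and image values here are invented for the example):

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Slide a kernel (a small stored 'memory') over every position
    of the image and record how well it matches there."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # match score at (i, j)
    return out

# A vertical-bar "memory" and an image containing that bar twice.
kernel = np.array([[1.0], [1.0], [1.0]])
image = np.zeros((5, 5))
image[1:4, 1] = 1.0
image[1:4, 3] = 1.0

scores = convolve2d_valid(image, kernel)
# The same stored pattern is recognized at two different locations:
# the score peaks at both columns 1 and 3.
print(np.argwhere(scores == scores.max()))
```

The point of the sketch is that one and the same memory (the kernel) is reused at every image position, rather than a separate memory being learned for every location.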
In that case, perhaps, we have learned to transform, recode, and convert the information from a given place in the image so that it is expressed in the same terms in which we have formed our memory. However, the opposite is also possible: we have a memory, and we know how to transform what we see in some other place of the picture into what is stored in the memory. This may not seem such an important and key point, but in fact there is a deep insight in this small conversion. The key question is: how do we know the way to apply convolution rules to different parts of an image in a convolutional network? Why do we know that this can be done at all? The answer is obvious: it follows from the properties of the world, from its geometry. Nevertheless, it turns out that the same knowledge can also be obtained, can be learned, by observing the world. First, one can construct a space of various displacements, and then, observing the results of the displacements, construct the necessary rules. The mechanism of saccades and microsaccades [4] allows the real brain to do all this. More importantly, this principle allows knowledge to be obtained from information of any other origin.

2.2 The change of information in the presence of a phenomenon

What does a rule mean? For example, the eyes make certain movements in which the eye turns by a tiny angle; these are called microsaccades. When an eye makes a microsaccade, one image is transmitted to the brain and then a slightly displaced image is transmitted after it; the microsaccade is very fast, so that we have practically the same image but in two positions. It is possible to calculate the rules of transformation, the consistent patterns that transfer one picture into another depending on a specific shift. It turns out that if the brain controls the movement of the eyes and learns for a while, it will be able to discover these transformation rules itself.
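A toy version of this learning can be sketched as follows (an invented illustration, not the paper's algorithm): a one-dimensional "retina" observes image pairs before and after a known motor signal, and accumulates votes for which source position explains each target position, thereby recovering the transformation rule that the motor signal induces.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_shift_rule(motor_signal_shift, n=8, trials=20):
    """Observe (before, after) image pairs produced by one motor signal
    and recover the index mapping - the transformation rule - that the
    signal induces on the 1-D 'retina'."""
    votes = np.zeros((n, n))  # votes[i, j]: source i explains target j
    for _ in range(trials):
        before = rng.random(n)
        after = np.roll(before, motor_signal_shift)  # the world applies the shift
        # For each target position, find the best-matching source position.
        for j in range(n):
            i = int(np.argmin(np.abs(before - after[j])))
            votes[i, j] += 1
    # The learned rule: for each target position, the winning source.
    return votes.argmax(axis=0)

rule = learn_shift_rule(motor_signal_shift=2)
# Applying the learned rule to a new image reproduces the shift.
x = rng.random(8)
print(np.allclose(x[rule], np.roll(x, 2)))  # True
```

The rule is never told that the phenomenon is a shift; it is discovered purely from the regularity in the before/after pairs, which is the point of the argument above.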
Furthermore, if we take not only visual but also audio or any other information in general, the same logic can be applied to it. Summarizing: one can accept and observe changes of information in the presence of some phenomenon. In the case of vision training, this phenomenon is a shift. Eye saccadic movements are known to be encoded by the brain [4]: the superior colliculi give a certain signal, which is at once the code for the muscle movement and for the phenomenon in which the displacement occurs.

3 About the brain architecture

3.1 There is no grandmother cell

McCulloch and Pitts put forward a model of a formal neuron [5] as a threshold adder. Later it was suggested that the threshold function could be more complicated, with arbitrary rules allowing further operation. If you collect such neurons in a net, you get a construction whose working is difficult to explain. Nevertheless, it became possible to build such neural networks. These neural networks reflect the "grandmother cell" paradigm. The question was: if a person reacts in some way to the presence of his grandmother, then there must be a neuron somewhere in the structures of his brain that is activated when he looks at his grandmother. It was indeed possible to detect the reaction of a specific neuron to pictures of Jennifer Aniston [6]. However, this discovery brought disappointment, because the "grandmother cell" reacted not only to Jennifer Aniston but to everyone from the series "Friends," and also to cats. As a result, neurophysiologists agreed that grandmother cells cannot be detected. Also, the Nobel laureates Hubel and Wiesel postulated that the neurons of the visual cortex react to certain stimuli [7]; they described these stimuli.
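The McCulloch-Pitts formal neuron mentioned at the start of this section can be sketched as a threshold adder (a minimal illustration; the weights and thresholds here are chosen by hand, not learned):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A formal neuron as a threshold adder: fire (1) if the
    weighted sum of inputs reaches the threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Logical AND and OR realized by the same unit with different thresholds.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

Networks built from such units are exactly the constructions whose collective working, as noted above, is difficult to explain, even though each unit is trivial.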