A Standard Model of Mind: AAAI Technical Report FS-17-05

The Neural Cognitive Architecture

Christian R. Huyck
Middlesex University
London, UK NW4 4BT
[email protected]

Abstract

The development of a cognitive architecture based on neurons is currently viable. An initial architecture is proposed, based around a slow serial system and a fast parallel system, with additional subsystems for behaviours such as sensing, action and language. Current technology allows us to emulate millions of neurons in real time, supporting the development and use of relatively sophisticated systems based on the architecture. While knowledge of biological neural processing, learning rules, and cognitive behaviour is extensive, it is far from complete. This architecture provides a slowly varying neural structure that forms the framework for cognition and learning. It will provide support for exploring biological neural behaviour in functioning animals, and for the development of artificial systems based on neurons.

Introduction

The mind emerges from the behaviour of the brain, which in turn consists of neurons. This was known when the first cognitive architecture was proposed (Newell 1990), and remains uncontroversial. Due to the simplicity of developing systems of rules, most architectures are based around them. This paper proposes a cognitive architecture based on neurons: a neural cognitive architecture.

It is difficult to program things in simulated neurons but, like rule based systems, neurons are Turing complete (Byrne and Huyck 2010). Consequently, it is possible to specify, design and implement a cognitive architecture with neurons as the foundation. This can then be refined so that it converges to the actual neural cognitive architecture.

It is, of course, a long way from the certainty that minds emerge from neurons to developing working cognitive models, agents, and other high and low level behaviours in simulated and emulated neurons [1]. Consequently, the initial architecture should be relatively simple and be developed around two other, slightly more controversial systems (Kahneman 2011): a fast, parallel, implicit, subconscious system, and a slow, serial, explicit, conscious system. In this paper, these will be referred to as the slow system and the fast system, and they will need to be implemented in neurons.

Development of a neuro-cognitive standard model will not be simple. It takes advantage of functioning human neural and cognitive systems as a guide, and of other behaving animals, all of which use neurons for their minds.

[1] Emulated neurons run on neuromorphic hardware, and simulated neurons run on standard hardware.

Can We Do It Now

Since neurons are Turing complete, it is possible to build a cognitive architecture in neurons. While it is possible to reimplement, for instance, ACT-R (Anderson and Lebiere 1998) in neurons, this reimplementation might not take full advantage of neural processing. Nonetheless, it will be useful to build basic components of the initial architecture in less than biologically plausible ways. Once things are running in neurons, new, more biologically plausible systems can be developed.

We Can Develop an Architecture in Neurons

This is a viable technology now. The community knows how to make rule based systems in neurons. Though incomplete, there is a great deal of knowledge about neurons. We know about neural and cognitive learning, and can emulate at least mouse-size brains in real time now.

One of the benefits of neural processing is that it can be highly parallel. Unfortunately, most modern computers are largely serial. However, neuromorphic hardware is becoming increasingly available. For instance, the SpiNNaker system (Furber et al. 2013) emulates a wide range of neural models in 1 ms time steps in real time. One current system consists of 500,000 processors and can emulate 500 million neurons in real time. This is not enough to emulate a full human brain, but is much larger than the number of neurons in a mouse brain. Models can also be developed to run in Nest (Gewaltig and Diesmann 2007) on standard computers, though models with large numbers of neurons will run at less than real time; this allows development of systems on standard serial hardware.

Neurons communicate via spikes. This provides a degree of modularity to facilitate development. In many cases, subsystems can be combined and communicate via spikes. In other cases, subsystems will be combined and share neurons.

Other Neural Architectures

There are already artificial cognitive architectures based on simulated neurons, or on simulated neuron-like units. One of the best of these is the LEABRA model (O'Reilly 1996). It uses rate coded neurons instead of spike coded neurons, but the two types of systems can be combined. Rate coded neurons do lose information, have problems accounting for synchrony, and have issues with learning.

A second example is the global workspace theory (Shanahan 2006). This uses specialised neural-like units as the basis of its computation. While an interesting exploration of complex behaviours, such as internal simulation, it suffers from its non-biological units.

Another popular neural architecture, the Spaun model (Eliasmith et al. 2012), shows how flexible computation with neurons can be. It benefits from a vector based approach supporting the translation of a range of behaviours into a range of spike based neural models. It suffers from binding via convolution, which does not seem to be biologically plausible, and from its dependency on vector representations.

These, and others, are interesting neural systems, and show that it is possible to build a neural cognitive architecture now. However, it is not at all clear which one to build. The brain is self modifying on several time scales, but it is not at all clear how to build a model of even simple point neurons with static synapses. Consequently, we will need to develop a series of prototypes. We do not know everything: we do not know the correct neural models, learning rules, topology, or even the brain areas of particular functions.

So, an initial architecture will be based around several subsystems that function in parallel. Like rule based architectures, the subsystems may be customisable. The initial system and its instances will provide direction for the development of future versions.

An Initial Roadmap

This section proposes a relatively simple first version of the architecture. This will consist of customisable neural subsystems (descriptions of these subsystems), fully instantiated subsystems (actual neural topologies, possibly generated from the descriptions), and fully instantiated full systems (combinations of topologies from the subsystems).

The Slow and Fast Systems

[Figure 1: Brodmann Areas, lateral surface of the brain (courtesy of Mark Dubin)]

The first subsystem is the slow system. It should be relatively simple to build a relatively unconstrained rule based system in neurons. The key problem is variable binding, and there are several solutions (e.g. Shastri and Aijanagadde 1993; van der Velde and de Kamps 2006; Huyck 2009b). It should be possible to automatically translate a particular rule base to neurons. When a particular rule base is converted to neurons, this is a fully instantiated system. Note that a developer can also program the full rule based system directly in neurons, and skip the translation step. (One way to program neurons is to specify the topology using a language like PyNN (Davison et al. 2008).)

While the slow system can be approximated by a rule based system, it is appreciably less clear how the fast system works. It is proposed that it is a form of the spreading activation networks popular in the 1980s, e.g. the interactive activation model (Rumelhart and McClelland 1982).

Note that this assumes that a reasonably large amount of the computation the brain does is computation via spreading activation. Bottom up computation, from the senses, is driven by the environment, and is similar to deep nets and other common machine learning connectionist nets. Spreading activation between different sets of neurons is the basis of cognition in the fast system. At the symbolic end of this computation, the spreading activation is determined by the semantics of the symbol, and leads to active symbols (Kaplan, Weaver, and French 1990).

While there has been work on translating general continuously valued neuron models, like interactive activation models, to spiking neurons (Abbott, DePasquale, and Memmesheimer 2016), the author is not aware of a solid theory of how this works in spiking networks. The development of a solid theory of implementing spreading activation nets in spiking neurons is an important challenge for neural computation and cognitive architectures.

The initial fast system will have two largely distinct components: a planning subsystem and an associative memory to represent concepts. Active facts from the associative memory can be used in the planning subsystem to represent the environment, or an internally simulated environment.

One variant of a spreading activation system has been used for planning (Maes 1989), and versions have been built in spiking neurons. In this first component, plans are encoded in the network topology, and are run by spreading activation in response to goals and the environment.

A second component is an associative memory, a type of semantic net. Concepts and instances of those concepts can be stored in a spreading activation net. Their associations can be stored, providing spread of activation behaviour. A variant of this has been used for prepositional phrase attachment resolution, but an extended version is needed.

The fast and slow systems will be combined for the first full system. Facts that support rules will be synaptically linked to facts from the fast system.
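The spread of activation behaviour described for the fast system's associative memory can be sketched in ordinary (non-spiking) code. This is an illustrative sketch only, not the paper's implementation: the decay and spread constants, the graph encoding, and the concept names are all assumptions, and a faithful version would run in spiking neurons, which the text above identifies as an open challenge.

```python
# Minimal spreading activation sketch (illustrative; constants are assumed).
# Concepts are nodes in a weighted directed graph; on each step a node keeps
# a decayed fraction of its activation and passes some along its links.

DECAY = 0.8    # fraction of activation a node retains each step (assumed)
SPREAD = 0.5   # fraction of activation passed along each link (assumed)

def spread_activation(links, activation, steps=3):
    """links: {node: [(neighbour, weight), ...]}; activation: {node: float}."""
    for _ in range(steps):
        incoming = {node: 0.0 for node in activation}
        for node, act in activation.items():
            for neighbour, weight in links.get(node, []):
                incoming[neighbour] += SPREAD * weight * act
        activation = {node: DECAY * activation[node] + incoming[node]
                      for node in activation}
    return activation

# Toy associative memory: "dog" is associated with "animal" and "bark".
links = {"dog": [("animal", 1.0), ("bark", 0.8)],
         "animal": [("dog", 0.5)]}
activation = {"dog": 1.0, "animal": 0.0, "bark": 0.0}
result = spread_activation(links, activation, steps=1)
# After one step, activating "dog" has partially activated its associates.
```

In the architecture's terms, the initially active node plays the role of an active fact, and the nodes it activates are the associated concepts the planning subsystem could then draw on.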