
Computational Cognitive Neuroscience and Its Applications

Laurent Itti
University of Southern California

Introduction and motivation

A number of tasks which can be effortlessly achieved by humans and other animals have until recently remained seemingly intractable for computers. Striking examples of such tasks exist in the visual and auditory domains. These include recognizing the face of a friend in a photograph, understanding whether a new, never-seen object is a car or an airplane, quickly finding objects or persons of interest like one's child in a crowd, understanding fluent speech while also deciphering the emotional state of the speaker, or reading cursive handwriting. In fact, several of these tasks have become the hallmark of human intelligence, while other, seemingly more complex and more cognitively involved tasks, such as playing checkers or chess, solving differential equations, or proving theorems, have been mastered by machines to a reasonable degree (Samuel, 1959; Newell & Simon, 1972).

An everyday demonstration of this state of affairs is the use of simple image, character, or sound recognition in CAPTCHA tests (Completely Automated Public Turing Tests to Tell Computers and Humans Apart), used by many web sites to ensure that a human, rather than a software robot, is accessing the site. CAPTCHA tests, for example, are used by web sites providing free email accounts to registered users, and are a simple yet imperfect way to prevent spammers from opening thousands of email accounts using an automated script. To some extent, these machine-intractable tasks are the reason why we have fallen short of the early promises made in the 1950s by the founders of Artificial Intelligence, Computer Vision, and Robotics (Minsky, 1961).
Although tremendous progress has been made in just a half century, and one is beginning to see cars that can drive on their own or robots that vacuum the floor without human supervision, such machines have not yet reached mainstream adoption and remain highly limited in their ability to interact with the real world. Although in the early years one could blame the poor performance of machines on limitations in computing resources, rapid advances in microelectronics have now rendered such excuses less believable. The core of the problem is not only how many computing cycles one may have to perform a task, but how those cycles are used: in what kind of algorithm and computing paradigm.

For biological systems, interacting with the world, in particular through vision, audition, and other senses, is key to survival. Essential tasks like locating and identifying potential prey, predators, or mates must be performed quickly and reliably if an animal is to stay alive. Taking inspiration from nature, recent work in computational neuroscience has hence started to devise a new breed of algorithms, which can be more flexible, robust, and adaptive when confronted with the complexities of the real world. I here focus on describing recent progress with a few simple examples of such algorithms, concerned with directing attention towards interesting locations in a visual scene, so as to concentrate the deployment of computing resources primarily onto these locations.

Modeling visual attention

Positively identifying any and all interesting targets in one's visual field has prohibitive computational complexity, making it a daunting task even for the most sophisticated biological brains (Tsotsos, 1991). One solution, adopted by primates and many other animals, is to break down the visual analysis of the entire field of view into smaller regions, each of which is easier to analyze and can be processed in turn.
This serialization of visual scene analysis is operationalized through mechanisms of visual attention: a common (although somewhat inaccurate) metaphor for attention is that of a virtual ''spotlight,'' shifting towards and highlighting different sub-regions of the visual world, so that one region at a time can be subjected to more detailed visual analysis (Treisman & Gelade, 1980; Crick, 1984; Weichselgartner & Sperling, 1987). The central problem in attention research then becomes how to best direct this spotlight towards the most interesting and behaviorally relevant visual locations. Simple strategies, like constantly scanning the visual field from left to right and from top to bottom, as many computer algorithms do, may be too slow for situations where survival depends on reacting quickly. Recent progress in computational neuroscience has proposed a number of new biologically-inspired algorithms which implement more efficient strategies. These algorithms usually distinguish between a so-called ''bottom-up'' drive of attention towards conspicuous or ''salient'' locations in the visual field, and a volitional and task-dependent so-called ''top-down'' drive of attention towards behaviorally relevant scene elements (Desimone & Duncan, 1995; Itti & Koch, 2001). Simple examples of bottom-up salient and top-down relevant items are shown in Figure 1.

Figure 1: (left) Example where one item (a red and roughly horizontal bar) in an array of items is highly salient and immediately and effortlessly grabs visual attention in a bottom-up, image-driven manner. (right) Example where a similar item (a red and roughly vertical bar) is not salient but may be behaviorally relevant if your task is to find it as quickly as possible; top-down, volition-driven mechanisms must be deployed to initiate a search for the item. (Also see Treisman & Gelade, 1980.)
Koch and Ullman (1985) introduced the idea of a saliency map to accomplish preattentive selection in the primate brain. This is an explicit two-dimensional map that encodes the saliency of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the most salient object, which constitutes the next target. If this location is subsequently inhibited, the system automatically shifts to the next most salient location, endowing the search process with internal dynamics.

Later research has further elucidated the basic principles behind computing salience (Figure 2). One important principle is the detection of locations whose local visual statistics significantly differ from the surrounding image statistics, along some dimension or combination of dimensions which are currently relevant to the subjective observer (Itti et al., 1998; Itti & Koch, 2001). This significant difference could be in a number of simple visual feature dimensions which are believed to be represented in the early stages of cortical visual processing: color, edge orientation, luminance, or motion direction (Treisman & Gelade, 1980; Itti & Koch, 2001). Wolfe and Horowitz (2004) provide a very nice review of which elementary visual features may strongly contribute to visual salience and guide visual search. Two mathematical constructs can be derived from electrophysiological recordings in living brains, which shed light onto how this detection of a statistical odd-man-out may be carried out.
First, early visual neurons often have center-surround receptive field structures, by which the neuron's view of the world consists of two antagonistic sub-regions of the visual field: a small central region which drives the neuron in one direction (e.g., excites the neuron when a bright pattern is presented) and a larger, concentric surround region which drives the neuron in the opposite direction (e.g., inhibits the neuron when a bright pattern is presented; Kuffler, 1953). In later processing stages, this simple concentric center-surround receptive field structure is replaced by more complex types of differential operators, such as Gabor receptive fields sensitive to the presence of oriented line segments (Hubel & Wiesel, 1962). In addition, more recent research has unraveled how neurons with similar stimulus preferences but sensitive to different locations in the visual field may interact. In particular, neurons with sensitivities to similar patterns tend to inhibit each other, a phenomenon known as non-classical surround inhibition (Allman et al., 1985; Cannon & Fullenkamp, 1991; Sillito et al., 1995). Taken together, these basic principles suggest ways by which detecting an odd-man-out, or significantly different item in a display, can be achieved in a very economical (in terms of neural hardware) yet robust manner.

Figure 2: Architecture for computing saliency and directing attention towards conspicuous locations in a scene. Incoming visual input (top) is processed at several spatial scales along a number of basic feature dimensions, including color, luminance intensity, and local edge orientation. Center-surround operations as well as non-classical surround inhibition within each resulting ''feature map'' enhance the neural representation of locations which significantly stand out from their neighbors. All feature maps feed into a single saliency map which topographically represents salience irrespective of features.
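The center-surround principle described above can be illustrated in a few lines of code. The sketch below is a deliberately simplified, hypothetical one-dimensional version (the models cited operate on 2D images at multiple spatial scales, typically with difference-of-Gaussians or Gabor filters): each location's response is its own value (the center) minus the mean of its neighbors (the surround), so an odd-man-out produces a strong response.

```python
# Illustrative 1D sketch of a center-surround operation; the toy input
# values and the surround size are assumptions for demonstration only.

def center_surround(values, surround=2):
    """Center-minus-surround response at each position of a 1D feature array."""
    n = len(values)
    responses = []
    for i in range(n):
        lo, hi = max(0, i - surround), min(n, i + surround + 1)
        # Surround = neighbors within the window, excluding the center itself.
        neighbors = [values[j] for j in range(lo, hi) if j != i]
        responses.append(values[i] - sum(neighbors) / len(neighbors))
    return responses

# A single bright item among dim ones yields the strongest response:
feature = [0.2, 0.2, 0.9, 0.2, 0.2]
resp = center_surround(feature)
print(max(range(len(resp)), key=lambda i: resp[i]))  # prints 2: the odd item
```

Note that every uniform region produces near-zero (here, slightly negative) responses, so only the location that differs from its neighborhood stands out, which is exactly the economy the text describes.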
Finally, attention is first directed to the most salient location in the image, and subsequently to less salient locations.

Many computational models of human visual search have embraced the idea of a saliency map under different guises (Treisman, 1988; Wolfe, 1994; Niebur & Koch, 1996; Itti & Koch, 2000). The appeal of an explicit saliency map is the relatively straightforward manner in which it allows the input from multiple, quasi-independent feature maps to be combined and to give rise to a single output:
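To make the overall scheme concrete, the following minimal sketch combines several feature maps into a single saliency map and then selects attended locations by winner-take-all with inhibition of return, in the spirit of Koch and Ullman (1985). It is an illustration under assumed toy values, not the actual implementation of any of the cited models; in particular, the (max - mean) weighting is a simplified stand-in for the normalization used by Itti et al. (1998), which promotes maps containing one strong peak over maps with many comparable peaks.

```python
# Hypothetical toy sketch: feature-map combination plus winner-take-all
# selection with inhibition of return. Map values are assumed for the demo.

def normalize(m):
    """Scale a 2D map to [0, 1], then weight it by (max - mean):
    maps with a single strong peak get a high weight."""
    flat = [v for row in m for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    scaled = [[(v - lo) / span for v in row] for row in m]
    flat = [v for row in scaled for v in row]
    weight = max(flat) - sum(flat) / len(flat)
    return [[weight * v for v in row] for row in scaled]

def combine(maps):
    """Sum normalized feature maps into a single saliency map."""
    h, w = len(maps[0]), len(maps[0][0])
    out = [[0.0] * w for _ in range(h)]
    for m in maps:
        nm = normalize(m)
        for i in range(h):
            for j in range(w):
                out[i][j] += nm[i][j]
    return out

def attend(saliency, n_shifts, radius=1):
    """Winner-take-all: attend the most salient location, inhibit it
    (inhibition of return), then shift to the next most salient one."""
    s = [row[:] for row in saliency]
    h, w = len(s), len(s[0])
    winners = []
    for _ in range(n_shifts):
        y, x = max(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: s[p[0]][p[1]])
        winners.append((y, x))
        for i in range(max(0, y - radius), min(h, y + radius + 1)):
            for j in range(max(0, x - radius), min(w, x + radius + 1)):
                s[i][j] = 0.0  # suppress the winner's neighborhood
    return winners

# Toy feature maps: a unique color pop-out, and an orientation map with
# two comparable peaks (which the normalization therefore downweights).
color  = [[0.1, 0.1, 0.1, 0.1],
          [0.1, 0.9, 0.1, 0.1],
          [0.1, 0.1, 0.1, 0.1]]
orient = [[0.1, 0.1, 0.1, 0.5],
          [0.1, 0.1, 0.1, 0.1],
          [0.5, 0.1, 0.1, 0.1]]
sal = combine([color, orient])
print(attend(sal, 2))  # prints [(1, 1), (0, 3)]: color pop-out first
```

The internal dynamics described in the text emerge from the `attend` loop: inhibiting each winner is what moves the model from the most salient location to progressively less salient ones, rather than attending the same peak forever.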