CURRICULUM VITAE

Scott D. Slotnick
Department of Psychology, Boston College
McGuinn Hall, Room 330, Chestnut Hill, MA 02467
[email protected], 617-552-4188, https://sites.google.com/bc.edu/sd-slotnick

EDUCATION AND EMPLOYMENT

2017–      Boston College, Psychology Department, Professor
2011–2017  Boston College, Psychology Department, Associate Professor
2005–2011  Boston College, Psychology Department, Assistant Professor
2002–2005  Harvard University, Psychology Department, Post-doc
1999–2002  Johns Hopkins University, Psychology Department, Post-doc
1998       UC Berkeley, Vision Science Department, Ph.D.
1996       UC Berkeley, Vision Science Department, M.S.
1994       GMI Engineering & Management Institute, Electrical Engineering, B.S., Mechanical Engineering, B.S.
1989–1994  General Motors Headquarters, Student Engineer/Engineer

EDITORIAL POSITIONS

2015–      Editor-in-Chief, Cognitive Neuroscience
2015–2016  Guest Associate Editor, Attention, Perception, & Psychophysics
2014–2015  Associate Editor, Cognitive Neuroscience
2011–2012  Guest Editor, Cognitive Neuroscience

MEMBERSHIPS

Cognitive Neuroscience Society
Society for Neuroscience
Psychonomic Society (Fellow)

PUBLICATIONS

Books

Slotnick, S. D. (2017). Cognitive Neuroscience of Memory. Cambridge, UK: Cambridge University Press.
Slotnick, S. D. (2013). Controversies in Cognitive Neuroscience. Basingstoke, UK: Palgrave Macmillan.

Edited Volumes

Hopfinger, J. B., & Slotnick, S. D. (Eds.). (2020). Attentional control and executive function. Cognitive Neuroscience, Issues 1 & 2.
Slotnick, S. D. (2018). Attentional modulation of early visual areas. Cognitive Neuroscience, Issues 1 & 2.
Slotnick, S. D. (2012). The cognitive neuroscience of memory. Cognitive Neuroscience, Issues 3 & 4.

Papers (N = 82, Citations = 4809, h-index = 33)

Jeye, B. M., McCarthy, C. R., & Slotnick, S. D. (in press). Long-term memory specificity depends on inhibition of related items. Memory.
Kark, S. M., Slotnick, S. D., & Kensinger, E. A. (in press). Forgotten but not gone: fMRI evidence of implicit memory for negative stimuli 24 hours after the initial study episode. Neuropsychologia.
Hopfinger, J. B., & Slotnick, S. D. (in press). Attentional control and executive function. Cognitive Neuroscience.
Spets, D. S., & Slotnick, S. D. (2019). Similar patterns of cortical activity in females and males during item memory. Brain and Cognition, 135, 103581 (1–7).
Spets, D. S., Jeye, B. M., & Slotnick, S. D. (2019). Different patterns of cortical activity in females and males during spatial long-term memory. NeuroImage, 199, 626–634.
Karanian, J. M., & Slotnick, S. D. (2018). Confident false memories for spatial location are mediated by V1. Cognitive Neuroscience, 9, 139–150.
Jeye, B. M., MacEvoy, S. P., Karanian, J. M., & Slotnick, S. D. (2018). Distinct regions of the hippocampus are associated with memory for different spatial locations. Brain Research, 1687, 41–49.
Slotnick, S. D. (2018). Attentional modulation of early visual areas. Cognitive Neuroscience, 9, 1–3.
Slotnick, S. D. (2018). The experimental parameters that affect attentional modulation of the ERP C1 component. Cognitive Neuroscience, 9, 53–61.
Slotnick, S. D. (2018). Several studies with significant C1 attention effects survive critical analysis. Cognitive Neuroscience, 9, 75–85.
Karanian, J. M., & Slotnick, S. D. (2017). False memories for shape activate the lateral occipital complex. Learning & Memory, 24, 552–556.
Slotnick, S. D. (2017). Cluster success: fMRI inferences for spatial extent have acceptable false-positive rates. Cognitive Neuroscience, 8, 150–155.
Slotnick, S. D. (2017). Resting-state fMRI data reflects default network activity rather than null data: A defense of commonly employed methods to correct for multiple comparisons. Cognitive Neuroscience, 8, 141–143.
Karanian, J. M., & Slotnick, S. D. (2017). False memory for context and true memory for context similarly activate the parahippocampal cortex. Cortex, 91, 79–88.
Thakral, P. P., Jacobs, C. M., & Slotnick, S. D. (2017). An attention account of neural priming. Memory, 25, 856–864.
Jeye, B. M., Karanian, J. M., & Slotnick, S. D. (2017). The anterior prefrontal cortex and the hippocampus are negatively correlated during false memories. Brain Sciences, 7, 13.
Jeye, B. M., Karanian, J. M., & Slotnick, S. D. (2016). Spatial memory activity distributions indicate the hippocampus operates in a continuous manner. Brain Sciences, 6, 37.
Slotnick, S. D., Jeye, B. M., & Dodson, C. S. (2016). Recollection is a continuous process: Evidence from plurality memory receiver operating characteristics. Memory, 1, 2–11.
Thakral, P. P., Kensinger, E. A., & Slotnick, S. D. (2016). Familiarity and priming are mediated by overlapping neural substrates. Brain Research, 1632, 107–118.
Kark, S. M., Slotnick, S. D., & Kensinger, E. A. (2016). Repetition enhancement of amygdala and visual cortex functional connectivity reflects nonconscious memory for negative visual stimuli. Journal of Cognitive Neuroscience, 28, 1933–1946.
Thakral, P. P., & Slotnick, S. D. (2015). The sensory timecourses associated with conscious visual item memory and source memory. Behavioural Brain Research, 290, 143–151.
Karanian, J. M., & Slotnick, S. D. (2015). Memory for shape reactivates the lateral occipital complex. Brain Research, 1603, 124–132.
Karanian, J. M., & Slotnick, S. D. (2014). False memory for context activates the parahippocampal cortex. Cognitive Neuroscience, 5, 186–192.
Thakral, P. P., & Slotnick, S. D. (2014). Nonconscious memory for motion activates MT+. NeuroReport, 25, 1326–1330.
Karanian, J. M., & Slotnick, S. D. (2014). The cortical basis of true memory and false memory for motion. Neuropsychologia, 54, 53–58.
Slotnick, S. D. (2013). The nature of recollection in behavior and the brain. NeuroReport, 24, 663–670.
Slotnick, S. D., & Thakral, P. P. (2013). The hippocampus operates in a threshold manner during spatial source memory. NeuroReport, 24, 265–269.
Thakral, P. P., & Slotnick, S. D. (2013). The role of spatial attention during spatial encoding. Cognitive Neuroscience, 4, 73–80.
Thakral, P. P., Slotnick, S. D., & Schacter, D. L. (2013). Conscious processing during retrieval can occur in early and late visual regions. Neuropsychologia, 51, 482–487.
Slotnick, S. D., & White, R. C. (2013). The fusiform face area responds equivalently to faces and abstract shapes in the left and central visual fields. NeuroImage, 83, 408–417.
Slotnick, S. D. (2012). The cognitive neuroscience of memory. Cognitive Neuroscience, 3, 139–141.
Thakral, P. P., Moo, L. R., & Slotnick, S. D. (2012). A neural mechanism for aesthetic experience. NeuroReport, 23, 310–313.
Slotnick, S. D., Thompson, W. L., & Kosslyn, S. M. (2012). Visual memory and visual mental imagery recruit common control and sensory regions of the brain. Cognitive Neuroscience, 3, 14–20.
Slotnick, S. D., & Thakral, P. P. (2011). Memory for motion and spatial location is mediated by contralateral and ipsilateral motion processing cortex. NeuroImage, 55, 794–800.
Thakral, P. P., & Slotnick, S. D. (2011). Disruption of MT preferentially impairs motion processing. Neuroscience Letters, 490, 226–230.
Slotnick, S. D. (2010). Synchronous retinotopic frontal-temporal activity during long-term memory for spatial location. Brain Research, 1330, 89–100.
Slotnick, S. D. (2010). Does the hippocampus mediate objective binding or subjective remembering? NeuroImage, 49, 1769–1776.
Slotnick, S. D. (2010). "Remember" source memory ROCs indicate recollection is a continuous process. Memory, 18, 27–39.
Slotnick, S. D., & Schacter, D. L. (2010). Conscious and nonconscious memory effects are temporally dissociable. Cognitive Neuroscience, 1, 8–15.
Slotnick, S. D. (2010). High density event-related potential data acquisition in cognitive neuroscience. Journal of Visualized Experiments, 38.
Thakral, P. P., & Slotnick, S. D. (2010). Attentional inhibition mediates inattentional blindness. Consciousness and Cognition, 19, 636–643.
Slotnick, S. D. (2009). Rapid retinotopic reactivation during spatial memory. Brain Research, 1268, 97–111.
Slotnick, S. D. (2009). Memory for color reactivates color processing region. NeuroReport, 20, 1568–1571.
Thakral, P. P., & Slotnick, S. D. (2009). The role of parietal cortex during sustained visual spatial attention. Brain Research, 1202, 157–166.
Thompson, W. L., Slotnick, S. D., Burrage, M. S., & Kosslyn, S. M. (2009). Two forms of spatial imagery: Neuroimaging evidence. Psychological Science, 20, 1245–1253.
Ross, R. S., & Slotnick, S. D. (2008). The hippocampus is preferentially associated with memory for spatial context. Journal of Cognitive Neuroscience, 20, 432–446.
Moo, L. R., Emerton, B. C., & Slotnick, S. D. (2008). Functional MT+ lesion impairs contralateral motion processing. Cognitive Neuropsychology, 25, 677–689.
Slotnick, S. D. (2008). Imagery: Mental pictures disrupt perceptual rivalry. Current Biology, 18, R603–605.
Dodson, C. S., Bawa, S., & Slotnick, S. D. (2007). Aging, source memory, and misrecollections. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33, 169–181.
Slotnick, S. D., & Schacter, D. L. (2007). The cognitive neuroscience of memory and consciousness. In P. D. Zelazo, M. Moscovitch, & E. Thompson (Eds.), Cambridge handbook of consciousness (pp. 809–827). New York: Cambridge University Press.
Finkbeiner, M., Slotnick, S. D., Moo, L. R., & Caramazza, A. (2007). Involuntary capture of attention produces domain-specific activation.