Program for the AVA AGM, 24 April 2015
School of Psychology, University of Nottingham

10.00 Coffee and Registration, Foyer
10.55 Welcome. A1 Lecture Theatre
11.00 Invited Talk: Mark Georgeson, Aston University. Early spatial vision: a view through two eyes
11.45 Daniel Baker. Surround suppression within and between the eyes does not increase with speed
12.00 Elisa Zamboni, Timothy Ledgeway, Paul McGraw and Denis Schluppeck. Non-informative cues bias reports of visual motion direction
12.15 Fleur Corbett, Janette Atkinson and Oliver Braddick. Feedforward and feedback processing of global visual motion and form: Evidence from EEG
12.30 Lunch & Posters. Social Space and Foyer
1.30 AGM Business Meeting, A1. All members welcome.
1.45 Special Session: Vision and Camouflage. Chair: Tim Meese
Invited Talk: Innes Cuthill, University of Bristol. Matching the background: deceptively simple?
2.30 Jolyon Troscianko, Martin Stevens and John Skelhorn. Does Disruptive Camouflage Disrupt Search Image Formation?
2.45 Olivier Penacchio, P George Lovell, Simon Sanghera, Innes Cuthill, Graeme Ruxton and Julie Harris. Countershading camouflage and the efficiency of visual search
3.00 Paul George Lovell, John Egan, Ken Scott-Brown and Rebecca J. Sharman. Exploring the role of enhanced-edges in disruptive camouflage
3.15 Coffee & Posters
3.45 Geoffrey Burton Memorial Lecture, sponsored by CRS. Jan Atkinson, Visual Development Unit, London and Oxford. The Developing Visual Brain – from Newborns to Numeracy
4.30 Richard Johnston, Timothy Ledgeway, Nicola Pitchford and Neil Roach. Visual processing of motion and form by relatively good and poor adult readers
4.45 Benjamin Vincent. Bayesian models of perception: key concepts
5.00 Reception & Posters

Posters

1. Greta Vilidaite and Daniel Baker. What is the best way to measure internal noise in the visual system?
2. Louise O'Hare and Simon Durrant. Resting alpha - state or trait?
3. Darren Cunningham and Jonathan Peirce. Early Visual Evoked Potential Responses to Coherent and Incoherent Plaid Stimuli
4. Alexis Makin, Giulia Rampone, Damien Wright, Letizia Palumbo and Marco Bertamini. The Holographic weight of evidence model predicts the amplitude of the neural response to symmetry
5. Mike Long, Harriet Allen, Gary Burnett and Robert Hardy. Depth discrimination between Augmented Reality and Real-world Targets
6. Paul Hibbard, Rebecca Hornsey and Peter Scarfe. A local reference frame impairs ordinal depth judgements on smoothly curved surfaces
7. Keith Langley, Veronique Lefebvre and Joshua Solomon. Underlying Mechanisms of Repulsion, Assimilation and Enhancement in Direct Tilt After-effects: A Cascaded Bayesian View
8. Paul Tern, Anna Hughes and David Tolhurst. The effect of striped target patterning on speed perception
9. Nicholas Jones, Anna Hughes and David Tolhurst. Interactions between target and background patterning in speed perception
10. Abigail Webb, Paul Hibbard and Rick O'Gorman. Variation in female reproductive hormones does not affect contrast sensitivity
11. Frouke Hermens and Robin Walker. Comparing social and symbolic cues for visual search in clutter

Abstracts

Invited Talks

Geoffrey Burton Lecture, 2015
Professor Jan Atkinson
UCL Emeritus Professor of Psychology and Developmental Cognitive Neuroscience; Visiting Professor, University of Oxford
Visual Development Unit, London and Oxford

The Developing Visual Brain – from Newborns to Numeracy

Tracking the development of visual abilities in infants and young children, as well as revealing the roots of adult vision, provides a unique window into the developing visual brain, both typical and atypical. Our work has helped to establish the major transition from a largely subcortical visual system at birth to the characteristic mechanisms of cortical vision and their control over subcortical visual responses.
Delays in this development in the first year of life can be measured through orientation-specific responses, the cortical control of attention shifts, the timing of cortical event-related potentials, and the onset of visuomotor and visuocognitive milestones. These measures have proved to be sensitive indicators of the effects of early brain injury and premature birth in predicting later neurocognitive outcomes.

Not all parts of the cortical visual system are equally vulnerable. Tests of children’s sensitivity to global form and global motion (coherence thresholds) show that the global motion system, while developing early, is particularly impaired in a wide range of developmental disorders, both genetic and acquired (in Williams Syndrome, autism, Fragile X, congenital cataract, hemiplegia, preterm birth). This differential impairment has led us to the hypothesis of ‘dorsal stream vulnerability’, in which poor motion performance is associated with a cluster of attentional, spatial, and visuomotor deficits. Our recent work with a large cohort of typically developing children in San Diego shows that individual differences in global motion sensitivity are associated with the area of a region of the intraparietal sulcus that has been widely implicated in mathematical cognition. Consistent with this, children’s global motion performance, but not their global form performance, is correlated with measures of mathematical achievement and numerosity judgments. We can speculate on how far-reaching these differences in early visual brain development really are, and on the dynamics of alternative developmental pathways.

Professor Innes C. Cuthill
School of Biological Sciences, Life Sciences Building, 24 Tyndall Avenue, Bristol

Matching the background: deceptively simple?

Animal camouflage comprises a suite of adaptations for concealment, variously exploiting different mechanisms of perception in the species from which it is important to remain undetected or unrecognised.
However, the intuitively simplest form of camouflage, matching the background, is not without controversy, and some issues remain unexplored. What the early writers on camouflage called ‘background picturing’ or ‘generalised resemblance’ has resolved into a theory of background matching that characterises animal colour patterns as statistical samples of the background. Military camouflage is based on similar principles, with modern patterns based on the spatiochromatic statistics of natural backgrounds. However, in the biological literature, there is disagreement about whether such sampling should be random, and about what constitutes optimal camouflage when an animal lives in multiple habitats. The latter issue is paralleled in the military deployment of uniforms claimed to be ‘multi-terrain’ in functionality, as opposed to the traditional palette of habitat-specific woodland, desert, arctic or urban designs. Optimising a pattern for background matching is further complicated by the fact that real animals, or soldiers, are 3D objects, the camouflage potentially undermined by shape-from-shading cues. Most considerations of camouflage comprise a 2D animal on a 2D background, and the (historically venerable but, until recently, mathematically primitive) literature on 3D object concealment has been very confused as to what such patterns achieve. I will review these areas of debate in the light of new theory and research on background matching, pointing to new research questions that need to be answered.

Professor Mark Georgeson
School of Life & Health Sciences, Aston University, Birmingham, UK

Early spatial vision: a view through two eyes

Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined?
Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes, fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.