SPACING OUT: DISTAL ATTRIBUTION IN SENSORY SUBSTITUTION

A Thesis Presented to The Honors Tutorial College, Ohio University, in Partial Fulfillment of the Requirements for Graduation from the Honors Tutorial College with the degree of Bachelor of Arts in Philosophy

by David E. Pence

May 2013

Contents

1. Introduction: The ABCs of SSDs
1.1 More Background: Sensory Substitution and Enactivism
2. Why Action May Not Be Necessary
2.1 Seeing Without Structure
2.2 Seeing Without Corroboration?
2.3 A Thought Experiment
3. Alternative Frameworks for Sensory Substitution
3.1 The Prospects Considered
3.2 How It’s Done
3.3 Where It Happens
4. Crossmodal Plasticity
4.1 The Mechanisms of Plasticity
4.2 Where the Senses Trade
4.3 Acquired Synaesthesia?
5. The Multimodal Mind
5.1 Where It All Comes Together
5.2 Integration at Work
5.3 A Second Multimodal Proposal
6. Mental Imagery
6.1 The Step-by-Step
6.2 The Land of Imagination
6.3 Speaking For and Against
6.4 Does Reading Make a Better Metaphor?
7. How They All Fit Together
8. Conclusions: Lessons and Future Possibilities
9. References

List of Figures

Figure 1: Retinal Disparity
Figure 2: Gibsonian Ecological Perception
Figure 3: High Contrast Image
Figure 4: Resolution of Early TVSS
Figure 5: Early TVSS Device
Figure 6: Working Memory Map
Figure 7: Map of Brain Regions Involved in Sensory Substitution
Figure 8: Multisensory Reverse Hierarchy Theory
Figure 9: Resolution of The vOICe

1. Introduction: The ABCs of SSDs

One would be hard pressed to find a psychological result as surprising as sensory substitution. The basic idea is that, by translating sensory stimuli from one modality to another, one can regain lost or damaged perceptual capacities.
Sensory substitution devices (SSDs) translate images into vibrations on a tactile grid or into variations in an auditory “soundscape,” and, when suitably processed, these inputs support adaptive world-directed action, spatial navigation, and even object recognition. The blind, so many claim, can learn to see.1

Most significant among the gains shown by SSD users is the ability to perceive objects beyond their bodies, things “out there.” Sighted persons often take for granted their ability to sense and interact with the world of extrapersonal space, but for early and congenitally blind (i.e., blind from birth) users, the gain is massive. In the lab, users are often surprised to encounter depth and optical illusions (Bach-y-Rita, 1972); in everyday life, they might be surprised to “see” household objects long forgotten or distant trees on a walk outside.2

These perceptual capacities all fall under what psychologists have called “distal attribution” (Epstein et al., 1986; Auvray et al., 2003). Various definitions have been proposed, but it can be broadly understood as the ability to represent objects as occupying spatial locations beyond one’s own body.

As surprising as the result itself, however, is the fact that despite over four decades of research, sensory substitution remains so mysterious. There is no orthodox explanation of what, exactly, is going on when subjects learn to use an SSD. The most influential account to date, and the one with which we seek to draw the sharpest contrast, is that of enactivism (also known as the sensorimotor or sensorimotor contingency theory). For this radical way of modeling perception, our awareness of the world beyond is inextricably linked to our familiarity with movement and action.

1 Sensory substitution is not limited to vision, but to date the majority of research has centered on it.
2 See “What blind users say” at www.seeingwithsound.com.
In particular, enactivism holds that perception is the recognition of sensorimotor contingencies, the law-like connections between our actions and the resultant sensory input. To see a cup of tea, for example, it is necessary to understand how the visual sensations caused by the object would change as a result of environmental exploration and movement.

In sensory substitution, enactivists have found perhaps their most intuitively powerful source of evidence. First, they can explain how one modality’s sitting in for another is even possible: immediate sensation is unimportant so long as the sensorimotor contingencies that govern use of the device, that is, how movements affect video input, parallel those of vision. Second, and still more impressively, enactivism seems to predict the dependence of distal attribution upon control over visual input from the camera. Passively trained subjects were able to pick out the orientation of lines and even identify 2D shapes (albeit with high error rates; see White et al., 1970), but they never reached the astounding capacities of their actively trained counterparts. While active subjects moved freely into perception of external objects, passive subjects never described their experiences in anything but proximal, tactile terms—exactly what one would expect were understanding action a key component of perception.

We should likewise note that this has been one of the most longstanding and popular morals drawn from experiments with SSDs. The enactivists are in good company. Bach-y-Rita, for example, thought it a “plausible hypothesis” that self-guidance of input constitutes “the necessary and sufficient condition for the experienced phenomena to be attributed to a stable outside world” (1972, p. 99). The position seems to have been shared by his colleagues as well, since nearly identical language is used in White et al. (1970).
The hypothesis was discussed rather positively in Epstein et al.’s (1986) seminal paper on distal attribution, and as recently as ten years ago reviewers (Lenay et al., 2003) went so far as to claim “empirical proof” that “there is no perception without action” (p. 282).

Nevertheless, we suspect such judgments are hasty. In the following, we shall make our case. First, we contend that the purported necessity of action is overstated and that the reports upon which enactivists hinge their account have not been adequately scrutinized. In particular, the quality of visual input available to subjects in the early experiments they cite was so low that action could plausibly be transformed from a mere aid into an essential means of disambiguation. Second, there are at least three alternative mechanisms that could explain the substitution process and the empirical observations associated with it: blindness-induced neural plasticity, multimodal3 learning, and mental imagery. These three accounts, we shall argue, fit together in a clear and mutually beneficial way, making for a formidable, albeit speculative, answer to the problem of sensory substitution.

3 Throughout the paper, we shall use the terms multimodal and multisensory to mean more or less the same thing.

1.1 More Background: Sensory Substitution and Enactivism

Although the idea of sensory substitution is an odd one, it is not entirely new. The basic idea has been around at least since Descartes, who in his Dioptrics hypothesized that the blind gain a kind of sight through use of a cane:

No doubt you have had the experience of walking over rough ground without a light, and finding it necessary to use a stick in order to guide yourself. You may then have been able to notice that by means of this stick, you could feel the various objects situated around you, and that you could even tell whether there were trees or rocks, or sand, or water, or grass, or mud, or any other such thing.
It is true that this kind of sensation is somewhat confused and obscure in those who do not have long practice with it. But consider it in those born blind, who have made use of it all their lives: with them, you will find it is so perfect and so exact, that one might almost say that they see with their hands, or that their stick is the organ of some sixth sense given to them in place of sight. (Descartes, 1637/1985, p. 153)

He went on to add that, given two such rods, the blind may even be able to triangulate depth (for a recent take, see Cabe, Wright, & Wright, 2003). Another philosopher to consider the possibility was Rousseau. In Emile, he noted the possibility of auditory sensory substitution:

As our sense of feeling, when properly exercised, becomes a supplement to sight, why may it not also substitute for hearing to a certain degree, since sounds excite in resonant bodies vibrations sensible to touch? Lay a hand on the body of the cello, and you will be able, without the assistance of either eyes or ears, to distinguish, merely by the way in which the wood vibrates and trembles, whether the sound it gives is deep or shrill, whether it comes from the treble or the bass. If one were to train the senses to these differences, I do not doubt that, in time, one could become so sensitive as to be able to distinguish a whole air by means of the fingers. Now, if we concede this, it is clear that we might easily talk to deaf people by means of music; for tone and measures are no less susceptible of regular combination than voice and articulation, so they may be made use of in the same way as the elements of speech. (Rousseau, 1762, pp. 237-238, quoted in Geldard, 1966)4

4 Interestingly, both philosophers drew this conclusion from reflection on haptic touch. Some studies we shall discuss later might help explain this.

In order to reach something close to contemporary SSDs, however, one must turn to the 20th century.