2570 J. Opt. Soc. Am. A / Vol. 14, No. 10 / October 1997

Jonathan Chey, Stephen Grossberg, and Ennio Mingolla

Department of Cognitive and Neural Systems and Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, Massachusetts 02215

Received December 20, 1995; revised manuscript received March 12, 1997; accepted March 18, 1997

A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both type 1 and type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends on their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely, locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally unselective cell responses to activate directionally tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing.
Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular lateral geniculate nucleus lesions on motion perception. © 1997 Optical Society of America [S0740-3232(97)00810-7]

Key words: vision, motion perception, directional grouping, speed perception, aperture problem, attention, visual cortex, neural network.

1. INTRODUCTION: GLOBAL CAPTURE OF A MOVING OBJECT'S DIRECTION AND SPEED

When an object moves, aperture ambiguity and image or detector noise often prevent all but a small subset of its image features, such as its bounding contours, from generating unambiguous motion direction cues. Despite this fact, object percepts often seem to pop out with a well-defined motion direction and speed. The present paper develops a neural model of how this feat is accomplished by the brain.

Five general design principles motivate the model's development. The first is that unambiguous feature tracking signals capture and transform ambiguous motion signals into coherent representations of object direction. Two classic examples of feature tracking are shown in Fig. 1. A second principle is that object direction and speed are both emergent properties of this process.

These considerations lead to the model's central design problem: What type of feature tracking processes can select unambiguous direction and accurate speed signals from ambiguous motion signals? For example, consider the horizontal motion of both a vertical and a tilted line that move at the same speed. Suppose that the unambiguous feature tracking points at the line ends capture the ambiguous motion signals near the line middle. The preferred ambiguous motion direction and speed are normal to the line's orientation. For a vertical line, the speed of the feature tracking signals at the line ends equals that of the preferred ambiguous speed near the line middle. For a tilted line, the preferred ambiguous speed is less than that of the feature tracking speed. If the speed of the line is judged by using a weighted average of feature signals and ambiguous signals, then the tilted line will be perceived to move slower than the vertical line, as found by Castet et al.1; see Fig. 2(a). These data also show that ambiguous speeds have a greater effect as line length increases when the line is briefly viewed. Feature tracking signals at the line ends thus propagate inward along the line to capture ambiguous motion speeds and directions. Since capture takes longer to complete when lines are longer, ambiguous motion signals have a larger effect on longer lines.

Our model simulates data of Castet et al.1 It also simulates how the barberpole illusion is produced,2 how it is affected by configurational changes, and how plaid patterns move both coherently and incoherently. In particular, the model provides explanations of when moving plaid patterns cohere or do not,3-5 how contrast affects their perceived speed and direction,6 and why movement of type 2 patterns differs from that of type 1 patterns.7-9 Bowns10 has provided experimental evidence against the Yo-Wilson9 hypothesis that type 2 plaids move in the vector sum direction because of temporal delay between Fourier and non-Fourier information and argues in favor of a feature tracking explanation.
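The weighted-average account of perceived line speed described above can be made concrete in a small sketch. The weight given to ambiguous signals is a hypothetical free parameter here (in the data of Castet et al. its influence grows with line length under brief viewing); for a line tilted theta degrees from vertical and translating horizontally at speed v, the ambiguous speed is the normal component v cos(theta).

```python
import math

def perceived_speed(v, theta_deg, w_ambiguous):
    """Weighted average of the feature tracking speed (the true
    translation speed v, signaled at the line ends) and the ambiguous
    speed (the component of v normal to the line's orientation,
    signaled along the line's interior).

    theta_deg: line tilt from vertical.
    w_ambiguous: hypothetical weight on ambiguous interior signals.
    """
    feature_speed = v
    ambiguous_speed = v * math.cos(math.radians(theta_deg))
    return (1 - w_ambiguous) * feature_speed + w_ambiguous * ambiguous_speed

# A vertical line is perceived veridically; a tilted line is slowed,
# and more so when ambiguous signals carry more weight (longer lines).
v = 2.0
print(perceived_speed(v, 0, 0.5))    # vertical line: 2.0 (veridical)
print(perceived_speed(v, 45, 0.25))  # short tilted line
print(perceived_speed(v, 45, 0.5))   # longer tilted line: slower still
```

On this account the tilt-dependent slowing in Fig. 2 falls out of the geometry alone: larger tilt lowers the normal component, and a larger ambiguous weight pulls the average further below the true speed.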
These simulations build upon earlier ones that employ a subset of model mechanisms that do not include its motion capture process.11 The earlier model incorporates a third design principle: Perceived motion direction and speed are a collective property of multiple populations of cells with different receptive field sizes, or scales. This multiscale network models the short-range motion process of Braddick,12 including the fact that the short-range motion limit Dmax depends on the spatial frequency content of the image.13-17 In the model, larger scales preferentially process higher speeds. A key problem in designing this multiscale short-range motion filter is to keep the largest scale from winning at all motion speeds, because it has a larger receptive field that can attain a higher activation level. Once this problem is solved, the model simulates how visual speed perception, discrimination, and reaction time are affected by stimulus contrast, duration, dot density, and spatial frequency.1,7,18-29

A fourth design principle concerns how nearby contours of different orientation and contrast polarity that are moving in the same direction cooperate to generate a pooled motion direction signal. This process is modeled by a long-range motion filter that has been used to simulate data about long-range apparent motion (Kolers30), including beta, gamma, delta, reverse, split, Ternus, and reverse-contrast Ternus motion and Korte's laws.31-33

A fifth design principle concerns motion capture.

Fig. 1. Feature tracking and motion capture: Often only a small subset of image features (e.g., line ends) generate unambiguous motion direction signals, as a result of aperture ambiguity and image or detector noise. We model how these unambiguous locations generate feature tracking signals that are used to capture ambiguous signals: (a) moving line, (b) barberpole illusion.
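The scale-selection problem raised by the third design principle can be illustrated with a toy computation. The tuning function and receptive-field growth below are hypothetical stand-ins, not the paper's mechanism; they show only why raw summed activation favors the largest receptive field at every speed, and how normalizing each scale's output before the scales compete restores speed-matched selection.

```python
# Toy speed tuning: each scale prefers speed == scale, with a
# self-similar tuning width, but its raw output also grows with
# receptive-field size (hypothetical choices for illustration).

def raw_response(scale, speed):
    tuning = 1.0 / (1.0 + ((speed - scale) / scale) ** 2)
    return scale * tuning  # bigger field -> bigger summed activation

def normalized_response(scale, speed):
    """Divide out receptive-field size so scales compete fairly."""
    return raw_response(scale, speed) / scale

SCALES = [1, 2, 4, 8]

def winner(response, speed):
    """Scale whose cells respond most strongly at this speed."""
    return max(SCALES, key=lambda s: response(s, speed))

# Without normalization the largest scale dominates even at slow
# speeds; with normalization the speed-matched scale wins.
print(winner(raw_response, 1))         # largest scale wins anyway
print(winner(normalized_response, 1))  # speed-matched scale wins
```

The model's actual solution involves competitive mechanisms across scales; the sketch only makes the failure mode, and the effect of normalization, explicit.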
The key problem here is how to capture the correct motion direction without distorting its speed estimate. Surprisingly, motion capture can be achieved by a long-range grouping network that also allows attention to prime a desired motion direction. In other words, motion capture, which seems to be an automatic and preattentive process, is carried out by the same circuit that permits top-down attention to focus selectively on a desired

Fig. 2. Effects of line length and orientation on perceived speed of horizontally moving lines. Relative perceived speed for three different line orientations and lengths are shown as percentages of the perceived speed of a vertical line of the same length. Part (a) shows data from Castet et al. (Ref. 1, p. 1925). Each data line corresponds to a different line length (0.21, 0.88, and 1.76 deg). The horizontal axis shows the ratio of the speed normal to the line's orientation relative to the actual translation speed. The three data points from left to right for each line length correspond to line angles of 60, 45, and 30 deg from vertical, respectively. The horizontal dotted line indicates a veridical speed perception; results below this line indicate a bias toward the perception of slower speeds. Part (b) shows simulation results, also for three lengths and orientations. In both cases perceived relative speed decreases with line length and angle from vertical. Simulated lines use slightly different orientations from those in the experiments so that the simulated input conforms to the Cartesian grid.

2572 J. Opt. Soc. Am. A / Vol. 14, No. 10 / October 1997

through experiments showing that the direction of motion of the individual components and the plaid as a whole respond differently to stimulus parameter changes. For example, Adelson and Movshon3 showed that contrast thresholds for the detection of plaid components alone were lower than contrast thresholds for the detection of coherently moving plaids. Ferrera
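The core motion-capture idea, feature tracking signals at unambiguous locations propagating inward to dominate ambiguous interior signals, can be illustrated with a minimal relaxation sketch. This is not the model's long-range filter and grouping feedback; it is a deliberately simple averaging scheme that nevertheless reproduces the qualitative point made above: capture takes longer to complete on longer lines.

```python
# Line-end cells carry the unambiguous feature tracking direction
# (0 deg, horizontal) and stay clamped; interior cells start at the
# ambiguous normal direction and are progressively captured by
# neighborhood averaging (illustrative scheme, not the paper's).

def capture_steps(n_cells, ambiguous_dir=45.0, tol=1.0):
    """Iterations until every cell lies within tol degrees of the
    feature tracking direction (0 deg)."""
    d = [0.0] + [ambiguous_dir] * (n_cells - 2) + [0.0]
    steps = 0
    while max(abs(x) for x in d) > tol:
        steps += 1
        new = d[:]
        for i in range(1, n_cells - 1):  # endpoints stay clamped
            new[i] = (d[i - 1] + d[i] + d[i + 1]) / 3.0
        d = new
    return steps

# Capture completes later on longer lines, so ambiguous signals
# influence the percept more for them, as in the Castet et al. data.
for n in (5, 9, 17):
    print(n, capture_steps(n))
```

Because the clamped end signals must diffuse across more interior cells, the step count grows with line length, which is the qualitative signature the paper attributes to inward propagation of feature tracking signals.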