Automatic Foveation for Video Compression Using a Neurobiological Model of Visual Attention

Laurent Itti

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 10, OCTOBER 2004

Abstract—We evaluate the applicability of a biologically-motivated algorithm to select visually-salient regions of interest in video streams for multiply-foveated video compression. Regions are selected based on a nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs every frame, increasingly with distance from salient locations. Sixty-three variants of the algorithm (varying number and shape of virtual foveas, maximum blur, and saliency competition) are evaluated against an outdoor video scene, using MPEG-1 and constant-quality MPEG-4 (DivX) encoding. Additional compression ratios of 1.1 to 8.5 are achieved by foveation. Two variants of the algorithm are validated against eye fixations recorded from four to six human observers on a heterogeneous collection of 50 video clips (over 45 000 frames in total). Significantly higher overlap than expected by chance is found between human and algorithmic foveations. With both variants, foveated clips are, on average, approximately half the size of unfoveated clips, for both MPEG-1 and MPEG-4. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.

Index Terms—Bottom-up, eye movements, foveated, saliency, video compression, visual attention.

I. INTRODUCTION AND BACKGROUND

AN INCREASINGLY popular approach to reduce the size of compressed video streams is to select a small number of interesting regions in each frame and to encode them in priority. This spatial prioritization scheme relies on the highly nonuniform distribution of photoreceptors on the human retina, by which only a small region of 2°–5° of visual angle (the fovea) around the center of gaze is captured at high resolution, with logarithmic resolution falloff with eccentricity [1]. Thus, the rationale is that it may not be necessary or useful to encode each video frame with uniform quality, since human observers will crisply perceive only a very small fraction of each frame, dependent upon their current point of fixation. In a simple approach (used here), priority encoding of a small number of image regions may decrease overall compressed file size by tolerating additional degradation in exchange for increased compression outside the priority regions. In more sophisticated approaches, priority encoding may serve to temporally sequence content delivery (deliver priority regions first), or to continuously scale video quality depending on available transmission bandwidth (encode priority regions as the core of a compressed stream, with additional details transmitted only as additional bandwidth is available [2]–[4]).

The selection of priority regions remains an open problem. Recently, key advances have been achieved in at least two contexts: first, real-time interactive gaze-contingent foveation for video transmission over a bandwidth-limited communication channel, and, second, priority encoding for general-purpose noninteractive video compression. Gaze-contingent video transmission typically uses an eye-tracking device to record eye position from a human observer on the receiving end, and applies in real time a foveation filter to the video contents at the source [5]–[11]. Thus, most of the communication bandwidth is allocated to high-fidelity transmission of a small region around the viewer's current point of regard, while peripheral image regions are highly degraded and transmitted over the little remaining bandwidth. This approach is particularly effective if well matched to the observers' visual system and viewing conditions, with observers often not noticing any degradation of the signal. Even in the absence of eye tracking, this interactive approach has demonstrated usefulness, for example when there exists a set of fixed priority regions, or when observers explicitly point to priority regions [12]. Further, online analysis of the observer's patterns of eye movements may allow more sophisticated interactions than simple foveation (e.g., zooming-in and other computer interface controls [13]). However, extending this approach to general-purpose noninteractive video compression presents severe limitations.

In the context of general-purpose video compression, indeed, it is assumed that a single compressed video stream will be viewed by many observers, at variable viewing distances, and in the absence of any eye tracking or user interaction. Very high inter-observer variability then precludes recording a single eye movement scanpath from a reference observer and using it to determine priority regions in the video clip of interest. Recording from several observers and using the union of their scanpaths partially overcomes this limitation [14], but at a prohibitive cost: an eye-tracking setup, a population of observers, and time-consuming experiments are required for every new clip to be compressed.

Algorithmic methods, requiring no human testing, have the potential of making the process practical and cost-effective [15], [16]. Computer vision algorithms have thus been proposed to automatically select regions of high encoding priority. Of particular interest here, several techniques rely on known properties of the human visual system to computationally define perceptually important image regions (e.g., based on object size, contrast, shape, color, motion, or novelty [17]–[19]). This type of approach has been particularly successful, and those properties which are well defined (e.g., contrast sensitivity function, importance of motion, and temporal masking effects) are already implemented in modern video and still-image codecs [20], [21]. A limitation of these approaches, however, is that the remaining properties of human vision are difficult to algorithmically implement (e.g., evaluating object size and shape requires that object segmentation first be solved in a general manner). In contrast, an important contribution of the present study is to evaluate the applicability of a computational model that mimics the well-known response characteristics of low-level visual processing neurons in the primate brain, rather than attempting to implement less well-defined, higher-level visual properties of objects and scenes. In addition, many of the existing computational algorithms have typically been developed for specific video content (e.g., giving preference to skin color or facial features, under the assumption that human faces should always be present and given high priority [4], [22], or learning image processing algorithms that maximize overlap with human scanpaths on a specific set of images [23]) and, thus, are often not universally applicable. Instead, our model makes no assumption on video contents, but is strongly committed to the type of neuronal response properties found in early visual areas of monkeys and humans. Finally, computational algorithms have thus far typically been demonstrated on a small set of video clips and often lack validation against human eye movement data. Another important contribution of our study is, hence, to widely validate our algorithm against eye movements of human observers [24].

As the model has previously been described in detail [29], [55]–[57], we focus here on specific new model additions for the prediction of priority regions in video streams. We then proceed with a systematic investigation of 63 variants of the algorithm, demonstrating a range of achievable tradeoffs between compressed file size and visual quality. We then validate the algorithm, for two of the 63 settings, on a heterogeneous collection of 50 video clips, including synthetic stimuli, outdoor daytime and nighttime scenes, video games, television commercials, newscasts, sports, music videos, and other content. Using eye movement recordings from eight human subjects watching the unfoveated clips (four to six subjects for each clip), we show that subjects preferentially fixate locations which the model also determines to be of high priority, in a highly significant manner. We finally compute the additional compression ratios obtained on the 50 clips using the foveation centers determined by our model, demonstrating the usefulness of our approach to the fully automatic determination of priority regions in unconstrained video clips.

Manuscript received June 24, 2003; revised June 24, 2004. This work was supported by NSF, NEI, NIMA, the Zumberge Fund, and the Charles Lee Powell Foundation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Fernando M. B. Pereira. The author is with the Departments of Computer Science, Psychology and Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089-2520 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TIP.2004.834657.

II. ATTENTION MODEL AND FOVEATION

The model computes a topographic saliency map (Fig. 1), which indicates how conspicuous every location in the input image is [29], [58]. Retinal input is processed in parallel by multiscale low-level feature maps, which detect local spatial discontinuities using simulated center-surround neurons [59], [60]. Twelve neuronal features are implemented, sensitive to color contrast (red/green and blue/yellow, separately),
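The pipeline sketched above can be illustrated on a single grayscale channel: compute a crude center-surround saliency map, take its peak as a fovea center, and blur the frame increasingly with distance from that center. This is only a minimal sketch under simplifying assumptions (one feature channel, one fovea, a linear sharp/blurred blend); all function names and parameters below are illustrative, not the paper's implementation, which uses twelve feature channels, multiscale pyramids, and saliency competition.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with an edge-clamped border."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img.astype(float), radius, mode='edge')
    # Horizontal pass, then vertical pass.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def center_surround_saliency(img, center_sigmas=(1, 2), surround_scale=4):
    """Toy center-surround map: |fine blur - coarse blur|, summed over scales."""
    sal = np.zeros(img.shape, dtype=float)
    for s in center_sigmas:
        sal += np.abs(gaussian_blur(img, s) - gaussian_blur(img, s * surround_scale))
    return sal / sal.max() if sal.max() > 0 else sal

def foveate(img, fovea_yx, max_sigma=8.0):
    """Blend sharp and blurred copies, blurring more with distance from the fovea."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - fovea_yx[0], xx - fovea_yx[1])
    dist /= dist.max()                       # 0 at the fovea, 1 at the farthest pixel
    sharp = img.astype(float)
    blurred = gaussian_blur(sharp, max_sigma)
    return (1.0 - dist) * sharp + dist * blurred

# Usage: a noisy frame with one bright, salient patch.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
frame[20:28, 30:38] += 2.0                   # salient region
sal = center_surround_saliency(frame)
fovea = np.unravel_index(np.argmax(sal), sal.shape)   # peak saliency as fovea center
out = foveate(frame, fovea)                  # sharp near the patch, blurred elsewhere
```

The linear distance-based blend is a stand-in for the paper's foveation filter; degrading peripheral pixels this way is what lets the subsequent MPEG encoder spend fewer bits outside the priority region.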
