Vision Lecture 7 Notes: Depth Perception


Depth perception

Images removed due to copyright restrictions. Please refer to the lecture video.

Cues used for coding depth in the brain
- Oculomotor cues: accommodation, vergence
- Visual cues, binocular: stereopsis
- Visual cues, monocular: motion parallax, shading, interposition, size, perspective

Stereopsis, basic facts and demos

Two simple methods for creating stereo images: (1) a stereo camera, (2) a swiveled regular camera. Image is in public domain.

Demos, page 1

Images removed due to copyright restrictions. Please see lecture video or Figure 1 of Schiller, Peter H., Geoffrey L. Kendall, et al. "Depth Perception, Binocular Integration and Hand-Eye Coordination in Intact and Stereo Impaired Human Subjects." Journal of Clinical & Experimental Ophthalmology 3, (210). Demos, page 1 & 2

Autostereogram from Magic Eye. Images removed due to copyright restrictions. Please see lecture video or the autostereogram from The Magic Eye, Volume I: A New Way of Looking at the World. Andrews McMeel Publishing, 1993. Autostereograms: see Howard and Rogers, Depth Perception, Vol. 2, pp. 549-556. Demos, page 3

Retinal disparity utilized for stereopsis (figure removed: one panel shows a target that hits corresponding retinal points, lying on the Vieth-Müller circle through the fixation point; the other shows a target off the circle that hits non-corresponding retinal points).

Stereopsis, neuronal responses

V1 cell response to drifting bars at varied stereo depths (figure removed: responses plotted against depth in cm and disparity in degrees). Image by MIT OpenCourseWare.

V1 cell response to drifting bars at varied stereo depths, second example (figure removed: responses plotted against depth in cm and disparity in degrees). Image by MIT OpenCourseWare.

Basic cell types for stereo (figure removed: tuning curves for tuned excitatory far, tuned excitatory near, tuned excitatory zero, and tuned inhibitory cells; neural responses in impulses per second plotted against horizontal disparity in degrees). Image by MIT OpenCourseWare.

Basic cell types for stereo (schematic removed: neural activity of tuned excitatory and tuned inhibitory cells as a function of depth, near and far of the fixation point). Image by MIT OpenCourseWare.

The effects of V4 and MT lesions on stereoscopic depth perception

Images removed due to copyright restrictions. Please see lecture video or Figure 1 of Schiller, Peter H., Geoffrey L. Kendall, et al. "Depth Perception, Binocular Integration and Hand-Eye Coordination in Intact and Stereo Impaired Human Subjects." Journal of Clinical & Experimental Ophthalmology 3, (210).

Stereopsis: graphs removed due to copyright restrictions. Please see lecture video or Schiller, Peter H. "The Effects of V4 and Middle Temporal (MT) Area Lesions on Visual Performance in the Rhesus Monkey." Visual Neuroscience 10, no. 4 (1993): 717-46.
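As a worked illustration of the disparity geometry in the slides above: under a small-angle approximation, the angular disparity (in radians) of a target relative to the fixation point is roughly the interocular distance times (1/d_target - 1/d_fixation). The sketch below is not part of the original notes; the 6.5 cm interocular distance and the viewing distances are assumed values chosen only to show how near (crossed), far (uncrossed), and zero disparities arise, mirroring the tuned cell classes described above.

```python
import math

def binocular_disparity_deg(d_target_m, d_fix_m, iod_m=0.065):
    """Angular disparity (degrees) of a target relative to fixation.
    Small-angle vergence approximation: delta ~ IOD * (1/d_target - 1/d_fix).
    Positive = crossed (near) disparity, negative = uncrossed (far)."""
    return math.degrees(iod_m * (1.0 / d_target_m - 1.0 / d_fix_m))

if __name__ == "__main__":
    d_fix = 0.57  # fixation distance in meters (assumed)
    for d_target in (0.40, 0.57, 0.80):
        d = binocular_disparity_deg(d_target, d_fix)
        kind = "near (crossed)" if d > 0.01 else "far (uncrossed)" if d < -0.01 else "zero"
        print(f"target at {d_target * 100:.0f} cm: disparity {d:+.2f} deg, {kind}")
```

A tuned-excitatory "zero" cell of the kind shown in the figures would respond best when this value is near zero, while "near" and "far" cells prefer crossed and uncrossed values, respectively.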
Motion parallax

Motion parallax, the eye is stationary: for objects A, B, C at increasing distance, the velocity of retinal image motion is fastest for A and slowest for C, creating a velocity gradient. Under the rigidity constraint, the highest relative velocity is judged to be closest.

Motion parallax, the eye tracks: the eye tracks the circle, which therefore remains stationary on the fovea. Objects nearer than the one tracked move at greater velocities on the retinal surface than objects farther away; the farther objects actually move in the opposite direction on the retina.

Motion parallax: to derive depth information from motion parallax, neurons are needed that provide information about velocity and direction of motion, and perhaps also about differential motion. The majority of cells in V1 are direction and velocity selective; some appear also to be selective for differential motion. Cells that process motion parallax have also been found in MT. (A small numerical sketch of the velocity gradient follows these slide notes, before the summary.)

Brain activation by stereopsis and motion parallax in normal and stereoblind subjects with fMRI

Images are projected here

A: Stereo
B: Parallax
C: Stereo + Parallax

Shading

The power of shading for depth perception

Examples of the 12 different shadings used

The next five displays have identical, repeating elements which are shaded differently in each set, yielding a variety of stable and conflicting percepts.

Stereo and shading in harmony and in conflict. Demos, pages 4-7

Putting stereo, parallax and shading together

Image removed due to copyright restrictions. Please see lecture video or Display 24 of Schiller, Peter H., and Christina E. Carvey. "Demonstrations of Spatiotemporal Integration and What they Tell us About the Visual System." Perception 35, no. 11 (2006): 1521.

Separate and combined disparity, parallax and shading depth cues. Graphs removed due to copyright restrictions. Please see lecture video or Figure 3a,b of Schiller, Peter H., Warren M. Slocum, et al. "The Integration of Disparity, Shading and Motion Parallax Cues for Depth Perception in Humans and Monkeys." Brain Research 1377 (2011): 67-77.

Perspective

Image removed due to copyright restrictions. Please see lecture video or William O'Brian, "Because it's here, that's why," New Yorker, September 21, 1963, 42.

Perspective illusion

The Savage Family, by Edward Savage. Painting is in public domain.

Hand-eye coordination and binocular integration tests

Needle test

Movie of needle test (needle_binoc, needle_monoc)

Touch-panel test

Examples of tests of binocular integration

Binocular integration test, figures (panels A and B)

Binocular integration test, words: the letter strings S U D and T R Y, presented separately to the two eyes, combine to read STURDY.
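Before the summary, here is the velocity-gradient sketch referenced in the motion parallax notes above. It is not from the lecture; the observer speed and distances are assumed. Under a small-angle approximation, a sideways-moving observer whose eye tracks an object at distance d_tracked sees a stationary object at distance d move on the retina at roughly v * (1/d - 1/d_tracked) radians per second, so objects nearer than the tracked one move opposite to objects farther away, and with no tracking the expression reduces to the plain v/d velocity gradient.

```python
import math

def retinal_velocity_deg_s(d_object_m, d_tracked_m, observer_speed_m_s):
    """Approximate retinal angular velocity (deg/s) of a stationary object,
    for an observer translating sideways while the eye tracks an object at
    d_tracked_m: omega ~ v * (1/d_object - 1/d_tracked).
    The sign flips between objects nearer and farther than the tracked one."""
    return math.degrees(observer_speed_m_s * (1.0 / d_object_m - 1.0 / d_tracked_m))

if __name__ == "__main__":
    v = 1.5          # assumed walking speed, m/s
    d_tracked = 4.0  # assumed distance of the tracked object, m
    for d in (1.0, 2.0, 4.0, 10.0):
        w = retinal_velocity_deg_s(d, d_tracked, v)
        print(f"object at {d:4.1f} m: retinal velocity {w:+7.1f} deg/s")
```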
Summary, depth:
1. Numerous mechanisms for analyzing depth have been identified; they include vergence and accommodation, stereopsis, parallax, shading, and perspective.
2. Several cortical structures process stereopsis utilizing disparity information; the number of disparities represented is limited, as in the case of color coding.
3. Utilizing motion parallax for depth processing necessitates neurons specific for direction, velocity, and differential velocity; several areas, including V1 and MT, process motion parallax.
4. Area MT contributes to the analysis of motion, motion parallax, depth, and flicker; however, these analyses are also carried out by several other structures.
5. Little is known at present about the manner in which information about shading and perspective is analyzed by the brain.

MIT OpenCourseWare
http://ocw.mit.edu
9.04 Sensory Systems, Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Recommended publications
  • 3D Computational Modeling and Perceptual Analysis of Kinetic Depth Effects
    Meng-Yao Cui, Shao-Ping Lu, Miao Wang, Yong-Liang Yang, Yu-Kun Lai, and Paul L. Rosin. Computational Visual Media, research article, The Author(s) 2020, open access.
    Abstract: Humans have the ability to perceive 3D shapes from 2D projections of rotating 3D objects, which is called Kinetic Depth Effects. This process is based on a variety of visual cues such as lighting and shading effects. However, when such cues are weakened or missing, perception can become faulty, as demonstrated by the famous silhouette illusion example, the Spinning Dancer. Inspired by this, we establish objective and subjective evaluation models of rotated 3D objects by taking their projected 2D images as input. We investigate five different cues: ambient luminance, shading, rotation speed, perspective, and color difference between the objects and background. In the objective evaluation model, we first apply 3D reconstruction algorithms to obtain an objective reconstruction quality metric, and then use a quadratic stepwise regression analysis method to determine the weights among depth cues to represent the reconstruction quality. In the subjective [...]
    Fig. 1: The Spinning Dancer. Due to the lack of visual cues, it confuses humans as to whether rotation is clockwise or counterclockwise; 3 of 34 frames from the original animation are shown [21]. Image courtesy of Nobuyuki Kayahara.
    1 Introduction: The human perception mechanism of the 3D world has long been studied.
  • Interacting with Autostereograms
    William Delamare (Kochi University of Technology; University of Manitoba), Junhyeok Kim (University of Waterloo; University of Manitoba), Daichi Harada (Kochi University of Technology), Pourang Irani (University of Manitoba), and Xiangshi Ren (Kochi University of Technology). Conference paper, October 2019. DOI: 10.1145/3338286.3340141. https://www.researchgate.net/publication/336204498
    Figure 1: Illustrative examples using autostereograms. a) Password input. b) Wearable e-mail notification. c) Private space in collaborative conditions. d) 3D video game. e) Bar gamified special menu. Black elements represent the hidden 3D scene content.
    Abstract: Autostereograms are 2D images that can reveal 3D content when viewed with a specific eye convergence, without using extra-apparatus. [...] practice. This learning effect transfers across display devices (smartphone to desktop screen).
  • Stereoscopic Label Placement
    Linköping Studies in Science and Technology, Dissertations No. 1293. Stephen Daniel Peterson, Department of Science and Technology, Linköping University, SE-601 74 Norrköping, Sweden, 2009. ISBN 978-91-7393-469-5, ISSN 0345-7524. Available online through Linköping University Electronic Press: www.ep.liu.se
    Abstract: With increasing information density and complexity, computer displays may become visually cluttered, adversely affecting overall usability. Text labels can significantly add to visual clutter in graphical user interfaces, but are generally kept legible through specific label placement algorithms that seek visual separation of labels and other objects in the 2D view plane. This work studies an alternative approach: can overlapping labels be visually segregated by distributing them in stereoscopic depth? The fact that we have two forward-looking eyes yields stereoscopic disparity: each eye has a slightly different perspective on objects in the visual field. Disparity is used for depth perception by the human visual system, and is therefore also provided by stereoscopic 3D displays to produce a sense of depth. This work has shown that a stereoscopic label placement algorithm yields user performance comparable with existing algorithms that separate labels in the view plane. At the same time, such stereoscopic label placement is subjectively rated significantly less disturbing than traditional methods.
  • Stereopsis and Stereoblindness
    Exp. Brain Res. 10, 380-388 (1970) Stereopsis and Stereoblindness WHITMAN RICHARDS Department of Psychology, Massachusetts Institute of Technology, Cambridge (USA) Received December 20, 1969 Summary. Psychophysical tests reveal three classes of wide-field disparity detectors in man, responding respectively to crossed (near), uncrossed (far), and zero disparities. The probability of lacking one of these classes of detectors is about 30% which means that 2.7% of the population possess no wide-field stereopsis in one hemisphere. This small percentage corresponds to the probability of squint among adults, suggesting that fusional mechanisms might be disrupted when stereopsis is absent in one hemisphere. Key Words: Stereopsis -- Squint -- Depth perception -- Visual perception Stereopsis was discovered in 1838, when Wheatstone invented the stereoscope. By presenting separate images to each eye, a three dimensional impression could be obtained from two pictures that differed only in the relative horizontal disparities of the components of the picture. Because the impression of depth from the two combined disparate images is unique and clearly not present when viewing each picture alone, the disparity cue to distance is presumably processed and interpreted by the visual system (Ogle, 1962). This conjecture by the psychophysicists has received recent support from microelectrode recordings, which show that a sizeable portion of the binocular units in the visual cortex of the cat are sensitive to horizontal disparities (Barlow et al., 1967; Pettigrew et al., 1968a). Thus, as expected, there in fact appears to be a physiological basis for stereopsis that involves the analysis of spatially disparate retinal signals from each eye. Without such an analysing mechanism, man would be unable to detect horizontal binocular disparities and would be "stereoblind".
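    The 2.7% figure in the summary above follows if the three disparity-detector classes are assumed to be lost independently, each with probability of about 0.3, so the chance of lacking all three in one hemisphere is 0.3 cubed. A one-line check (my arithmetic, not the paper's code):

```python
p_missing_one_class = 0.30
print(p_missing_one_class ** 3)  # 0.027, i.e. about 2.7% of the population
```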
  • Algorithms for Single Image Random Dot Stereograms
    Displaying 3D Images: Algorithms for Single Image Random Dot Stereograms. Harold W. Thimbleby (University of Stirling), Stuart Inglis and Ian H. Witten (University of Waikato).
    Abstract: This paper describes how to generate a single image which, when viewed in the appropriate way, appears to the brain as a 3D scene. The image is a stereogram composed of seemingly random dots. A new, simple and symmetric algorithm for generating such images from a solid model is given, along with the design parameters and their influence on the display. The algorithm improves on previously-described ones in several ways: it is symmetric and hence free from directional (right-to-left or left-to-right) bias, it corrects a slight distortion in the rendering of depth, it removes hidden parts of surfaces, and it also eliminates a type of artifact that we call an "echo". Random dot stereograms have one remaining problem: difficulty of initial viewing. If a computer screen rather than paper is used for output, the problem can be ameliorated by shimmering, or time-multiplexing of pixel values. We also describe a simple computational technique for determining what is present in a stereogram so that, if viewing is difficult, one can ascertain what to look for. (A naive illustrative implementation follows this entry.)
    Keywords: Single image random dot stereograms, SIRDS, autostereograms, stereoscopic pictures, optical illusions
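    For readers who want to experiment with the idea described above, here is a deliberately naive single-image random-dot stereogram sketch in Python. It is not the symmetric, hidden-surface-correcting algorithm of Thimbleby, Inglis, and Witten; it only enforces the basic constraint that the two dots encoding each depth value be identical, and the depth map, eye separation in pixels, and depth scale mu are all assumed values for illustration.

```python
import numpy as np

def simple_sirds(depth, eye_sep_px=90, mu=1/3, seed=None):
    """Naive single-image random-dot stereogram.
    depth: 2D array of floats in [0, 1], larger = nearer to the viewer.
    The separation of the two matching dots for a pixel at depth z is
        s(z) = eye_sep_px * (1 - mu * z) / (2 - mu * z),
    so nearer surfaces get smaller separations. Matching dots are simply
    forced to be equal while sweeping each row from left to right."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    img = rng.integers(0, 2, size=(h, w), dtype=np.uint8)  # random 0/1 dots
    for y in range(h):
        for x in range(w):
            s = int(round(eye_sep_px * (1 - mu * depth[y, x]) / (2 - mu * depth[y, x])))
            left, right = x - s // 2, x - s // 2 + s
            if left >= 0 and right < w:
                img[y, right] = img[y, left]  # constrain the dot pair to match
    return img

if __name__ == "__main__":
    z = np.zeros((200, 320))
    z[60:140, 110:210] = 0.6          # a square floating above the background
    stereogram = simple_sirds(z, seed=0)
    print(stereogram.shape)           # save as an image (e.g. with PIL) to view
```

    Saving the array as an image and free-viewing it should reveal the raised square, along with the kinds of artifacts the paper's symmetric algorithm sets out to eliminate.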
  • Visual Secret Sharing Scheme with Autostereogram*
    Feng Yi, Daoshun Wang, and Yiqi Dai. Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China. Supported by National Natural Science Foundation of China (No. 90304014).
    Abstract: A visual secret sharing scheme (VSSS) is a secret sharing method which decodes the secret by using the contrast ability of the human visual system. An autostereogram is a single two-dimensional (2D) image which becomes a virtual three-dimensional (3D) image when viewed with proper eye convergence or divergence. Combining the two technologies via human vision, this paper presents a new visual secret sharing scheme called (k, n)-VSSS with autostereogram. In the scheme, each of the shares is an autostereogram. Stacking any k shares, the secret image is recovered visually without any equipment, but no secret information is obtained with less than k shares.
    Keywords: visual secret sharing scheme; visual cryptography; autostereogram
    1. Introduction: In 1979, Blakley and Shamir [1-2] independently invented secret sharing schemes to construct robust key management schemes. A secret sharing scheme is a method of sharing a secret among a group of participants. In 1994, Naor and Shamir [3] first introduced the visual secret sharing scheme in Eurocrypt '94 and constructed a (k, n)-threshold visual secret sharing scheme which conceals the original data in n images called shares. The original data can be recovered from the overlap of any at least k shares through human vision without any knowledge of cryptography or cryptographic computations. With the development of the field, Droste [4] provided a new (k, n)-VSSS algorithm and introduced a model to construct the (n, n)-combinational threshold scheme.
  • Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: a Computer Vision Approach
    Remote Sensing, research article. Mostafa Mansour (Tampere University; ITMO University), Pavel Davidson (Tampere University), Oleg Stepanov (ITMO University), and Robert Piché (Tampere University). Correspondence: pavel.davidson@tuni.fi. Received: 4 July 2019; Accepted: 20 August 2019; Published: 23 August 2019.
    Abstract: Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in depth estimation to stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras as well as the camera motion parameters serve as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provide better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually. (A short triangulation sketch follows this entry.)
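    As background for the comparison above, depth from a rectified stereo pair follows the standard triangulation relation Z = f * B / d, with focal length f in pixels, baseline B, and disparity d in pixels. The sketch below uses assumed camera values (not those of the article) and shows why depth resolution degrades when disparities become small, i.e., at long range.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Rectified-stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    f_px, baseline = 700.0, 0.12  # assumed focal length (px) and baseline (m)
    for d in (40.0, 10.0, 2.0, 1.0):
        z = depth_from_disparity(d, baseline, f_px)
        print(f"disparity {d:5.1f} px -> depth {z:6.1f} m")
```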
  • Chromostereo.Pdf
    ChromoStereoscopic Rendering for Trichromatic Displays. Leïla Schemali (Telecom ParisTech CNRS LTCI; XtremViz) and Elmar Eisemann (Delft University of Technology).
    Figure 1: ChromaDepth glasses act like a prism that disperses incoming light and induces a differing depth perception for different light wavelengths. As most displays are limited to mixing three primaries (RGB), the depth effect can be significantly reduced when using the usual mapping of depth to hue. Our red to white to blue mapping and shading cues achieve a significant improvement.
    Figure 2: Chromostereopsis can be due to: (a) longitudinal chromatic aberration, where the focus of blue shifts forward with respect to red, or (b) transverse chromatic aberration, where blue shifts further toward the nasal part of the retina than red. (c) The shift in position leads to a depth impression.
    Abstract: The chromostereopsis phenomenon leads to a differing depth perception of different color hues, e.g., red is perceived slightly in front of blue. In chromostereoscopic rendering, 2D images are produced that encode depth in color. While the natural chromostereopsis of our human visual system is rather low, it can be enhanced via ChromaDepth glasses, which induce chromatic aberrations in one eye by refracting light of different wavelengths differently, hereby offsetting the projected position slightly in one eye. Although it might seem natural to map depth linearly to hue, which was also the basis of previous solutions, we demonstrate that such a mapping reduces the stereoscopic effect when using standard trichromatic displays or printing systems. We propose an algorithm which enables an improved stereoscopic experience with reduced artifacts. (A minimal sketch of a red-to-white-to-blue ramp follows this entry.)
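    To make the red-to-white-to-blue mapping mentioned above concrete, here is a minimal sketch of such a depth-to-color ramp. It is an assumed linear interpolation for illustration only, not the authors' algorithm, which also adds shading cues and artifact reduction.

```python
def depth_to_rgb(z):
    """Map normalized depth z in [0, 1] (0 = near, 1 = far) to RGB in [0, 1]
    along a red -> white -> blue ramp."""
    z = min(max(z, 0.0), 1.0)
    if z < 0.5:                      # near half: red toward white
        t = z / 0.5
        return (1.0, t, t)
    t = (z - 0.5) / 0.5              # far half: white toward blue
    return (1.0 - t, 1.0 - t, 1.0)

if __name__ == "__main__":
    for z in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(z, tuple(round(c, 2) for c in depth_to_rgb(z)))
```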
  • Two Eyes See More Than One Human Beings Have Two Eyes Located About 6 Cm (About 2.4 In.) Apart
    Activity 2: Two Eyes See More Than One
    OBJECTIVES: Students discover how having two eyes helps us see in three dimensions and increases our field of vision. The students: discover that each eye sees objects from a slightly different viewpoint to give us depth perception; observe that depth perception decreases with the use of just one eye; measure their field of vision; observe that their field of vision decreases with the use of just one eye.
    SCHEDULE: Session I, about 30 minutes; Session II, about 40 minutes.
    VOCABULARY: depth perception, field of vision, peripheral vision.
    MATERIALS: For each student: 1 Activity Sheet 2, Parts A and B. Also: 1 straw; 1 index card (cut in half widthwise); 3 pennies*; 1 metric ruler*. For the class: 1 roll of string; 1 roll of masking tape; 1 pair of scissors*. (*provided by the teacher)
    PREPARATION: Session I: (1) Make a copy of Activity Sheet 2, Part A, for each student. (2) Each team of two will need a metric ruler, three paper cups, and three pennies. Students will either close or cover their eyes, or you may use blindfolds. (Students should use their own blindfold, a bandanna or long strip of cloth brought from home and stored in their science journals, for use in this and other activities.) Session II: (1) Make a copy of Activity Sheet 2, Part B, for each student. (2) For each team, cut a length of string 50 cm (about 20 in.) long. Cut enough index cards in half (widthwise) to give each team half a card. Snip the corners of the cards to eliminate sharp edges.
  • Motion-In-Depth Perception and Prey Capture in the Praying Mantis Sphodromantis Lineola
    Vivek Nityananda, Coline Joubier, Jerry Tan, Ghaith Tarawneh and Jenny C. A. Read. Research article, Journal of Experimental Biology (2019) 222, jeb198614. doi:10.1242/jeb.198614. © 2019, published by The Company of Biologists Ltd.
    Abstract: Perceiving motion-in-depth is essential to detecting approaching or receding objects, predators and prey. This can be achieved using several cues, including binocular stereoscopic cues such as changing disparity and interocular velocity differences, and monocular cues such as looming. Although these have been studied in detail in humans, only looming responses have been well characterized in insects and we know nothing about the role of stereoscopic cues and how they might interact with looming cues. We used our 3D insect cinema in a series of experiments to investigate the role of the stereoscopic cues mentioned above, as well as [...]
    From the article text: [...] prey as they come near. Just as with depth perception, several cues could contribute to the perception of motion-in-depth. Two of the motion-in-depth cues that have received the most attention in humans are binocular: changing disparity and interocular velocity differences (IOVDs) (Cormack et al., 2017). Stereoscopic disparity refers to the difference in the position of an object as seen by the two eyes. This disparity reflects the distance to an object. Thus as an object approaches, the disparity between the two views changes. This changing disparity cue suffices to create a perception of motion-in-depth for human observers, even in the absence of other cues (Cumming and Parker, 1994).
  • Construction of Autostereograms Taking Into Account Object Colors and Its Applications for Steganography
    Yusuke Tsuda, Yonghao Yue, and Tomoyuki Nishita, The University of Tokyo.
    Abstract: Information on the appearances of three-dimensional objects is transmitted via the Internet, and displaying objects plays an important role in a lot of areas such as movies and video games. An autostereogram is one of the ways to represent three-dimensional objects taking advantage of binocular parallax, by which depths of objects are perceived. In previous methods, the colors of objects were ignored when constructing autostereogram images. In this paper, we propose a method to construct autostereogram images taking into account color variation of objects, such as shading. We also propose a technique to embed color information in autostereogram images. Using our technique, autostereogram images shown on a display change, and viewers can enjoy perceiving a three-dimensional object and its colors in two stages. Color information is embedded in a monochrome autostereogram image, and colors appear when the correct color table is used.
    Figure 1: Relations on an autostereogram. E_L and E_R indicate left and right eyes, respectively. (a) SS-relation: a pair of points on the screen and a single point on the object correspond with each other. (b) OS-relation: a pair of points on the object and a single point on the screen correspond with each other.