JMVC Simulation Environment


The JVT (Joint Video Team), formed from the cooperation between ITU-T Study Group 16 (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) and responsible for the standardization of H.264, SVC (Scalable Video Coding), and MVC, provides software models used for algorithm experimentation and for standards proof of concept. The JMVC (Joint Model for MVC), currently at version 8.5, is the reference software available for experimentation on the MVC standard. Throughout this work the JMVC software, written in C++, was used and modified to implement the proposed algorithms. Initially version 6.0 was used, followed by an upgrade to version 8.5. Considering the length and complexity of the software, a high-level overview of the interaction between the main encoder classes is presented here, followed by a description of the classes modified to enable our algorithm experimentation. For in-depth details of the class structure, refer to the JMVC documentation.

A.1 JMVC Encoder Overview

The JMVC classes are hierarchically structured as shown in Fig. A.1. JMVC encodes one view at a time, requiring as many encoder calls as there are views to encode; the reference views are stored in temporary files. The class H264AVCEncoderTest represents the top encoder entity: it initializes the encoder and calls the CreateH264AVCEncoder class to initialize the other coding classes. At this level the PicEncoder is initialized and the frame-level loop is controlled. The PicEncoder loops over the slices inside each frame and resets the RDCost. The SliceEncoder controls the MB-level loop and sets the reference frames for each slice. For MB encoding there are two main classes: the MbEncoder and the MbCoder. MbEncoder encapsulates all the prediction, transform, and entropy steps. It implements the mode decision by looping over and encoding all possible coding modes (in the case of RDO-MD) and determining the minimum RDCost. At this point no MB coding data is written to the bitstream. Once the best mode is selected, the SliceEncoder calls the MbCoder to write the MB-level side information and residues to the bitstream.

Fig. A.1 JMVC encoder high-level diagram: H264AVCEncoderTest (initializes the encoder, sets up the frame buffers, loops over frames) calls CreateH264AVCEncoder, which initializes PicEncoder (inits the slice header, sets the RDCost (λ), loops over slices), SliceEncoder (inits the reference frames, loops over MBs), MbEncoder (loops over coding modes, performs the encoding for each mode, selects the best RDCost), and MbCoder (encodes the best mode and creates the bitstream)
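To make the RDO-MD loop concrete, the sketch below shows its overall shape: every candidate mode is passed through the complete coding loop and only the minimum-RDCost mode survives. This is a minimal illustration, not the actual JMVC code; the Mode and EstimateFn types and the rdoModeDecision function are hypothetical simplifications of the MbEncoder machinery.

```cpp
#include <cfloat>
#include <functional>
#include <vector>

struct Mode { int id = -1; };

// Stand-in for the Estimate<mode> methods: runs the complete coding loop
// (prediction, transform, quantization, entropy coding, reconstruction)
// for one mode and returns its RD cost J = D + lambda * R.
using EstimateFn = std::function<double(const Mode&)>;

// RDO-MD: encode every candidate mode and keep the minimum RDCost.
// Nothing is written to the bitstream here; once the best mode is known,
// MbCoder entropy encodes it and writes it out.
Mode rdoModeDecision(const std::vector<Mode>& candidates,
                     const EstimateFn& estimate)
{
    double minRdCost = DBL_MAX;
    Mode bestMode;
    for (const Mode& mode : candidates) {
        const double rdCost = estimate(mode); // full coding loop per mode
        if (rdCost < minRdCost) {
            minRdCost = rdCost;
            bestMode = mode;
        }
    }
    return bestMode;
}
```

The exhaustive loop is what makes RDO-MD precise but costly, which is why the fast mode decision modifications of Sect. A.2.3 hook in before the mode evaluation calls.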
Figure A.2 (Tech et al. 2010) depicts the hierarchical call graph of the methods inside the mode decision process implemented in the MbEncoder class. First, the SKIP and Direct modes are evaluated; throughout this monograph these modes are jointly referred to as SKIP MBs. Next, all inter-prediction block sizes are evaluated. For each partition size a call to the method MotionEstimation::estimateBlockWithStart (see the discussion of Fig. A.3) is performed, and the same happens for the sub-partitions in the case of 8×8 partitioning. EstimateMb8x8Frext is only called if the FRExt flag is set. Finally, the intra-frame coding modes, including PCM, intra 4×4, intra 8×8 (FRExt only), and intra 16×16, are evaluated. The Estimate<mode> methods call the complete coding loop for that specific mode, including prediction, transform, quantization, entropy encoding, and reconstruction. This allows a precise determination of the minimum RDCost and an optimal best mode selection, at the cost of elevated coding complexity. The MbCoder is then called to entropy encode the best mode and write the data into the bitstream output buffer.

Fig. A.2 Mode decision hierarchy in JMVC: EstimateMbSkip and EstimateMbDirect (inter P/B); the inter partitions EstimateMb16x16 (1×), EstimateMb16x8 (2×), EstimateMb8x16 (2×), and EstimateMb8x8 (4×, with sub-partitions EstimateSubMbDirect, EstimateSubMb8x8 (1×), EstimateSubMb8x4 (2×), EstimateSubMb4x8 (2×), and EstimateSubMb4x4 (4×)) plus EstimateMb8x8Frext, each calling estimateBlockWithStart; and the intra modes EstimateMbPCM, EstimateMbIntra4 (tests 9 intra modes), EstimateMbIntra8 (tests 4 intra modes), and EstimateMbIntra16 (tests 4 intra modes). The minimum RDCost over all evaluations selects the best mode

The motion and disparity estimation search itself is defined in the method estimateBlockWithStart and is composed of three basic steps; the ME/DE dataflow is represented by the arrows in Fig. A.3. Once estimateBlockWithStart is called, for instance in EstimateMb16x16, the search runs for each reference frame list (List 0 and List 1) and, if active, for an interactive B search mode that exploits both lists in an interactive fashion. From the software perspective there is no distinction between ME and DE: List 0 and List 1 store both temporal and disparity reference frames. The search for a given reference frame first finds the best candidate block among the integer pixels (ME/DE Full Pel) and then refines the result considering half and quarter pixels (ME/DE Sub Pel). The search pattern depends on the search algorithm; JMVC implements TZ Search, Full Search, Spiral Search, and Log Search. The goal is to find the candidate block that minimizes the motion cost (λMotion) in terms of SAD, SAD-YUV (considering the chroma channels), SATD, or SSE, according to user-defined coding parameters. The position of the best matching candidate block defines the motion or disparity vector.

Fig. A.3 Inter-frame search in JMVC: ME/DE Full Pel followed by ME/DE Sub Pel for Search List 0, Search List 1, and interactive B prediction; the minimum MotionCost determines the search result
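A minimal sketch of the two-stage search performed per reference frame inside estimateBlockWithStart is given below, assuming a Full Search pattern at integer-pel resolution followed by a small half/quarter-pel refinement. All names and types are hypothetical simplifications, the distortion function (SAD, SAD-YUV, SATD, or SSE in JMVC) is supplied by the caller, and the rate term is crudely approximated; this is not the actual JMVC implementation.

```cpp
#include <cmath>
#include <functional>
#include <limits>

struct MotionVector { double x = 0.0, y = 0.0; };

// Caller-supplied distortion for a candidate vector (SAD, SAD-YUV, SATD,
// or SSE); sample interpolation for fractional positions is hidden inside.
using DistortionFn = std::function<double(const MotionVector&)>;

// Motion cost J = D + lambda_motion * R(mv); the vector magnitude is a
// crude stand-in for the real bit cost of coding the vector.
static double motionCost(const MotionVector& mv, const DistortionFn& d,
                         double lambda)
{
    return d(mv) + lambda * (std::abs(mv.x) + std::abs(mv.y));
}

// Search one reference frame: full-pel search over a +/-range window
// (ME/DE Full Pel), then half- and quarter-pel refinement around the
// winner (ME/DE Sub Pel).
MotionVector searchReference(const DistortionFn& d, double lambda, int range)
{
    MotionVector best;
    double bestCost = std::numeric_limits<double>::max();

    // Step 1: integer-pel candidates (Full Search shown; TZ, Spiral, and
    // Log Search visit fewer positions in other patterns).
    for (int y = -range; y <= range; ++y)
        for (int x = -range; x <= range; ++x) {
            const MotionVector mv{double(x), double(y)};
            const double c = motionCost(mv, d, lambda);
            if (c < bestCost) { bestCost = c; best = mv; }
        }

    // Step 2: refine at half-pel, then quarter-pel, around the current best.
    for (double step : {0.5, 0.25}) {
        const MotionVector center = best;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                const MotionVector mv{center.x + dx * step,
                                      center.y + dy * step};
                const double c = motionCost(mv, d, lambda);
                if (c < bestCost) { bestCost = c; best = mv; }
            }
    }
    return best; // this position is the motion or disparity vector
}
```

In JMVC this routine would run once per reference frame in List 0 and List 1 (temporal and disparity references alike), with the interactive B mode re-running it while alternating between the two lists.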
A.2 Modifications to the JMVC Encoder

A.2.1 JMVC Encoder Tracing

In order to generate the statistics on coding modes and motion/disparity vectors, some modifications were made to the original JMVC code. The point selected for this tracing is inside the entropy encoder, guaranteeing that the extracted data is exactly what is encoded and transmitted. The entropy encoder is declared as the virtual class MbSymbolWriteIf, but the actual implementation is in CabacWriter or UvlcWriter, depending on the entropy encoder selected in the configuration file. The monitored methods are skipFlag, which encodes the SKIP (and Direct)-coded MBs, and mbMode, which encodes all other modes. Note that the MB coding mode codes (uiMbMode) vary with the slice type, as defined in Tables 7-11, 7-12, 7-13, and 7-14 of the MVC standard (JVT 2008).

A.2.2 Communication Channels in JMVC

Multiple algorithms proposed in this monograph employ information from the 3D-neighborhood. For that, there is a need to build communication channels between neighboring MBs in the spatial, temporal, and disparity domains; in other words, an infrastructure to send and receive data at the MB level, at the frame level (within the same view), and at the view level (between frames in different views). Therefore, a hierarchical communication infrastructure was designed and implemented. Figure A.4 presents the modified classes along with the new member data structures and communication methods; a simplified sketch of this infrastructure is given at the end of this appendix.

The MbDataAccess class already provides direct access to the left and upper neighbors (A, B, C, and D in Fig. A.4). This access was extended to the right and bottom neighbors (A*, B*, C*, and D*), enabling access to the data of all spatially neighboring MBs. For temporal neighboring MB access, the current MB data is sent to SliceHeader (using Send() methods), where a 2D array stores the information of the MBs belonging to the current slice. Once the slice is completely processed, the 2D array is sent to the PicEncoder class, which maintains the data for the whole current GOP. For reading, the communication channel moves the requested data from PicEncoder to SliceHeader and finally to MbDataAccess (using Receive() methods). Since the views are processed in distinct encoder calls, external temporary files are needed to transmit the disparity-neighboring information: the current MB data is written to these files from MbDataAccess, while the data from previous views is read in PicEncoder, as shown in Fig. A.4.

Fig. A.4 Communication in JMVC: MbDataAccess (spatial neighbors A, B, C, D and A*, B*, C*, D* around the current MB) exchanges data with SliceHeader (a #MBx × #MBy array per slice) via MbDataAccess::Send(MB,x,y) and MbDataAccess::Receive(MB,x,y); SliceHeader exchanges data with PicEncoder (GOP-sized storage) via SliceHeader::Send(slice,POC) and SliceHeader::Receive(slice,POC); MbDataAccess::Write(MB,x,y) and PicEncoder::Read(file,view) connect view N−1 to view N through storage files

A.2.3 Mode Decision Modification in JMVC

The mode decision is programmed in a very simple way, making it easy to find and modify. MD is handled in MbEncoder::encodeMacroblock; to find the exact point, search for the xEstimateMb methods responsible for calling the mode evaluations. Before this point, the 3D-neighborhood communication calls and the calculations required to take the fast decisions are implemented.

A.2.4 ME/DE Modification in JMVC

The modifications for fast ME/DE are inserted in two distinct classes. Modifications at a higher level, such as avoiding the interactive B search or restricting the search directions and reference frames, are done in the MbEncoder by modifying the xEstimateMb methods.
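As a concrete illustration of the hierarchy of Sect. A.2.2, the sketch below mimics the MB-to-slice-to-GOP data path and the file-based exchange between views. The class names, members, and signatures are hypothetical simplifications of the modified MbDataAccess/SliceHeader/PicEncoder classes, not the real JMVC interfaces.

```cpp
#include <fstream>
#include <map>
#include <string>
#include <vector>

// Per-MB statistics exchanged through the channel (illustrative fields).
struct MbInfo { int mode = 0; int mvX = 0; int mvY = 0; };

// Slice-level store: the role of the 2D array added to SliceHeader.
class SliceStore {
public:
    SliceStore(int mbsX, int mbsY) : mbsX_(mbsX), mbs_(mbsX * mbsY) {}
    void   send(const MbInfo& mb, int x, int y) { mbs_[y * mbsX_ + x] = mb; }
    MbInfo receive(int x, int y) const          { return mbs_[y * mbsX_ + x]; }
private:
    int mbsX_;
    std::vector<MbInfo> mbs_;
};

// GOP-level store: the role of PicEncoder, keyed by picture order count.
class GopStore {
public:
    // Called once a slice is completely processed.
    void send(int poc, const SliceStore& slice) {
        pics_.insert_or_assign(poc, slice);
    }

    // Temporal neighbor: the co-located MB in a previously coded picture.
    MbInfo receive(int poc, int x, int y) const {
        return pics_.at(poc).receive(x, y);
    }

    // Disparity neighbors: views are encoded in distinct encoder calls,
    // so the data crosses the view boundary through a temporary file.
    void writeViewFile(const std::string& path, int poc, int x, int y) const {
        std::ofstream f(path, std::ios::binary | std::ios::app);
        const MbInfo mb = receive(poc, x, y);
        f.write(reinterpret_cast<const char*>(&mb), sizeof mb);
    }
private:
    std::map<int, SliceStore> pics_;
};
```

Reading follows the reverse path: the next view's encoder call loads the file into its GOP-level store, from which slice- and MB-level Receive() calls retrieve the disparity neighbors.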