<<

3rd Dimension Veritas et Visus October 2010 Vol 6 No 1

Table of Contents

Cover features: Ecole Centrale, p18; Tampere University, p31; Thoma, p36; SD&A 2011, p45

Letter from the publisher: 3D from 2D objects… by Mark Fihn 2

News from around the world 4

Web3D 2010, July 24-25, 2010, Los Angeles, California 17

EuroITV Conference on Interactive TV and Video, June 9-11, 2010, Tampere, Finland 22

3DTV-CON 2010, June 7-9, 2010, Tampere, Finland 24

NAB 2010, April 10-15, 2010, Las Vegas, Nevada by Michael Starks 35

SD&A 2011 Advance Conference Program 45

Thar She Blows… by Fluppeteer 50

Another step in the maturation of 3D in the home by Arthur Berman 52

Where’s the Beef? by Pete Putman 54

The U-Decide Initiative by Neil Schneider 56

Last Word: Dark Country: An Interview with Thomas Jane by Lenny Lipton 59

The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief: Mark Fihn, [email protected]
Managing Editor: Phillip Hill, [email protected]
Contributors: Art Berman, Fluppeteer, Lenny Lipton, Pete Putman, Neil Schneider, and Michael Starks

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate based on location and mailing method. Copyright 2010 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.


3D from 2D objects… by Mark Fihn

In my youth, I briefly became fascinated by the traditional Japanese art form of origami – the goal of which is to create a representation of an object using geometric folds and crease patterns preferably without the use of gluing or cutting the paper, and using only one piece of paper. Well, although I quickly learned that I didn’t have the patience or talent to create 3D objects from 2D surfaces, I still have the fascination…

Won Park is a master of origami. He is also called the “money folder”, a practitioner of origami whose canvas is the US one dollar bill. http://moneygami.blogspot.com


Allen and Patty Eckman have developed a unique medium of cast paper sculpture to create amazing works of art using nothing more than acid-free paper. Their art mostly depicts American Indians, but also includes nature, women, children, and animals. http://www.eckmanfineart.com

Veritas et Visus (Truth and Vision) publishes a family of specialty newsletters about the displays industry:

Flexible Substrate • Display Standard • 3rd Dimension • High Resolution • Touch Panel

http://www.veritasetvisus.com

3D news from around the world, compiled by Phillip Hill and Mark Fihn

Mitsubishi Electric debuts 3D high-definition home theater projector
Mitsubishi Digital Electronics America's Presentation Products Division introduced its newest Diamond 3D 1080p full HD home theater projector. Mitsubishi’s new Diamond 3D projector creates 3D images that can be displayed on screens of 100 inches and larger. Powered by an SXRD reflective liquid-crystal optical engine, Mitsubishi adds its own algorithms and processing technologies to create high-brightness, high-contrast (up to 120,000:1 full on/full off) images. A key element in projector performance is its lens, and Mitsubishi has incorporated extra-low dispersion glass into its six-piece, 17-cluster lens structure to minimize chromatic aberration. The result is colors and details that are impeccably crystal clear. With its auto-iris function, Mitsubishi’s Diamond 3D home theater projector automatically sets the optimal aperture for each scene, and its 120Hz refresh rate produces ultra-smooth transitions and life-like images. An independent color management function allows individual adjustment of color characteristics such as hue, intensity and brightness for red, green, cyan, magenta and yellow, without affecting the other colors. With a wide range of powered lens shift (100% vertical and 45% horizontal), the new Mitsubishi Diamond 3D projector is easy to install, even in a complicated room configuration. In some cases there may be no need to turn the projector upside down in a ceiling mount, and its 1.8x powered-zoom range gives installers additional flexibility. At only 19dBA, the Diamond 3D projector is extremely quiet, so viewers can easily hear the movie, even in whisper-soft scenes, without an annoying projector hum. Two HDMI v1.4 inputs support 3D signals, and RS232 support offers plug-and-play connectivity with third-party remote operations. Mitsubishi's new Diamond 3D projector also offers low cost of ownership, with an estimated 4,000-hour lamp life in low conservation mode. http://www.mitsubishi-presentations.com

Consumer Electronics Association and ESPN partner to demo sports offerings to 3DTV viewers
The Consumer Electronics Association has teamed up with ESPN to show consumers all that 3DTV offers in the world of sports. ESPN has a long history of technology leadership in sports, having pioneered high-definition sports coverage seven years ago, and is doing so again by developing 3D sports content. In the course of the year ESPN will telecast nearly a hundred events in 3D, including college basketball, pro basketball, college football and the X Games from Los Angeles and Aspen. 3DTV technology, already available to consumers in 60 million households, creates an opportunity to provide a lot of people with engaging content, and is expected to grow faster than HDTV did; by contrast, when high-definition programming first launched, virtually no one could see it. In the video, Megan Pollack of the CEA and Bryan Burns from ESPN discuss the future of 3DTV and sports. You can see the video at http://yourupdate.tv/technology/3D_demo_days/.

Digital Projection launches Full HD single-chip 3D projector
Digital Projection International (DPI) announced the active-3D enabled M-Vision Cine 400-3D. Equipped with the same DLP® DarkChip technology featured in DPI’s entire product line, the M-Vision Cine 400-3D delivers immersive 3D imagery for screens up to 12 feet wide. The M-Vision Cine 400-3D will be priced below $20,000. The 5,500-lumen projector thrives in cinemas with screens up to 12 feet wide, as well as in venues with some ambient light, such as media rooms with smaller screen sizes. The compact and quiet Cine chassis, 1080p resolution, broad source connectivity and straightforward user interface make the M-Vision Cine 400-3D an optimum 3D solution for home entertainment. Serving as a complement to the existing TITAN 3D product line, the M-Vision Cine 400-3D brings a native 1080p, single-chip 3D display to home cinemas for a fraction of the cost. Enthusiasts can now experience flicker-free 3D at a full 120 frames per second. Installation is flexible thanks to the M-Vision Cine 400-3D’s compact and lightweight chassis design and lens shift range of 30% horizontal and 120% vertical. Multiple easy-to-change lens options provide further flexibility, with throw ratios ranging from 1.25 to 3.0:1. There is also a fixed lens that offers a 0.73:1 throw ratio. Connectivity includes two HDMI inputs, as well as RGB via D-15, component, composite and S-Video inputs. http://www.digitalprojection.com/


Technicolor 3D surpasses 250 screens in North America
Further supporting the successful launch of Technicolor 3D, the 35mm 3D solution, Technicolor announced it has installed its Technicolor 3D system on more than 250 screens in North America just four months after its initial debut in theatres. Technicolor 3D was launched in theatres on March 26 with “How to Train Your Dragon”, followed by “Clash of the Titans”, “Shrek Forever After”, “The Last Airbender”, “Despicable Me”, “Cats & Dogs: The Revenge of Kitty Galore”, and “Piranha 3D”. Technicolor 3D recently launched in international territories with “Shrek Forever After” from DreamWorks Animation. Other studios indicating support for the format internationally include Paramount Pictures, Universal Pictures, The Weinstein Company and Warner Bros. Countries included in the launch are the UK, Spain, Italy, Germany and Japan. http://www.technicolor3D.com

JVC introduces 3D-enabled D-ILA home theater projectors with up to 100,000:1 native contrast ratio
JVC U.S.A. unveiled six new 3D-enabled D-ILA projectors. All six boast native contrast ratios that are unmatched in the industry, with the top models delivering a 100,000:1 native contrast ratio. The new projectors are the Reference Series DLA-RS60, DLA-RS50 and DLA-RS40, to be marketed by JVC’s Professional Products Group, and the Procision Series DLA-X9, DLA-X7 and DLA-X3, to be available through JVC’s Consumer Electronics Group. For 3D content, each projector includes two HDMI 1.4a ports and supports side-by-side (broadcast), frame-packed (Blu-ray), and above-below 3D transmissions. An external 3D signal emitter (PK-EM1) syncs the projected image with JVC’s active shutter 3D glasses (PK-AG1). The external 3D signal emitter ensures solid signal transmission to the 3D glasses for a superior 3D experience, no matter what type of screen is used or how the home theater has been configured.

The new flagship projectors, the DLA-RS60 and DLA-X9, are built using hand-selected, hand-tested components and provide a 100,000:1 native contrast ratio. For 3D display, both models come with two pairs of 3D glasses along with a PK-EM1 3D signal emitter. Both projectors also have a three-year warranty. The DLA-RS50 and DLA-X7 offer a 70,000:1 native contrast ratio, while the DLA-RS40 and DLA-X3 offer a 50,000:1 native contrast ratio. All four projectors come with a two-year warranty and are compatible with JVC’s PK-AG1 active shutter 3D glasses and PK-EM1 3D signal emitter (sold separately) for 3D presentations. All six new projectors feature three 0.7” 1920 x 1080 D-ILA devices and are designed around JVC’s third-generation D-ILA high dynamic range optical engine, which is optimized to provide exceptional native contrast ratios without a dynamic iris to artificially enhance contrast specifications. A directed light integration system and wire grid polarizer ensure optimum light uniformity and minimal crosstalk in the light path. In the top four models (DLA-RS60, DLA-RS50, DLA-X9, DLA-X7) a 16-step lamp aperture system matches the 16-step lens aperture to optimize the f-number (relative aperture) of the optics system in all steps. The DLA-RS40 and DLA-X3 feature a 16-step lens aperture. With a new short arc gap, lamp brightness has been increased from earlier JVC models to 1,300 ANSI lumens. To reduce motion blur, JVC’s double-speed 120Hz clear motion drive technology uses a newly developed LSI for frame interpolation and black-frame insertion. The four top models – DLA-X7/DLA-RS50 and DLA-X9/DLA-RS60 – have been designed to achieve THX certification and are in the process of being tested by THX. These same four models also include a new seven-axis color management system (R, G, B, C, M, Y and orange) that allows precise color tuning, especially in skin tones, and a choice of color profiles, including Adobe RGB, DCI and sRGB/HDTV. They have also been designed for ISF certification and will include an ISF C3 mode for professional calibration. Ninety-nine screen correction modes help to optimize the picture for a wide range of viewing environments. http://www.jvc.com

Monster debuts universal shutter eyewear system geared toward custom integrator channel
Monster debuted Monster Vision “MAX 3D” – the world’s first and only universal wireless 3D eyewear “shutter system” that works via ZigBee RF technology and is compatible with every brand of stereoscopic 3D TV. The Monster Vision MAX 3D eyeglasses and transmitter kit are scheduled for availability at a suggested retail price of $249.95 per set. Additional glasses will carry a suggested retail price of $169.95 per pair. The glasses feature a stylish and lightweight high-gloss black design that allows users to wear them for hours without fatigue. They are sized to fit all head sizes and may even be worn over prescription glasses. As the first wireless 3D

eyewear system of its kind to offer universal compatibility with all major brands of 3D flat-screen TVs, Monster Vision MAX 3D represents a huge advancement in the realm of 3D shutter eyewear technology. MAX 3D is also the first and only 3D shutter eyewear system that utilizes RF transmission instead of the standard infrared transmission found in TV OEM glasses, allowing viewers to move freely around the room while watching and still enjoy the full 3D imaging effect. Importantly, with the ability to work with all major 3D TVs, MAX 3D overcomes a huge deficiency in the 3D world today for both consumers and retailers. For consumers, MAX 3D makes choosing accessory eyewear simple, with no worries of compatibility. For retailers, MAX 3D eliminates the need to carry multiple brands of glasses at high cost of inventory. http://www.monstercable.com

Ad Notam introduces 3D to mirror TVs
Ad Notam has combined two ideas into one LCD line. Joining its patented mirror TV technology with 3D picture technology, the “ad notam 3D LINE” presents 3D and high-resolution LCD combined with customized mirror and glass solutions for countless applications in interior décor, media rooms, living rooms, bedrooms, presentation rooms, even bathrooms and kitchens. The line incorporates the latest advances in 3D technology as well as Full HD 2D LCD. The 46.0-inch diagonal monitor has an integrated 3D formatter box for converting above & below or side-by-side 3D formats into its 3D display mode. The 2D features of the 3D LINE offer Full HD screen resolution at a 16:9 picture format. The ad notam 3D LINE supports a 3D stereo experience through horizontally interlaced 3D stereo signals. http://www.ad-notam.com

Real D reveals plans for passive glasses in the home
Josh Greer, president and co-founder of Real D, recently revealed information about his company’s new 3D home display technology. Greer announced that Real D technology licensees will be able to offer the first Full HD passive 3D HDTVs in 2011, allowing the use of inexpensive, lightweight glasses (such as those in a 3D movie theater). Current consumer 3DTVs require battery-powered active shutter 3D glasses that retail from $130-$200 each. The Real D system uses patented ZScreen technology, an electro-optical system built into the front of a flat panel that very rapidly switches the light from clockwise circular polarization to counterclockwise and back again. The Real D circularly polarized passive glasses act like shutters: the left eye receives the left image while the right eye is blacked out, and the right eye then receives the right-eye view while the left eye is blacked out. Images are displayed sequentially on the flat panel, just like on current 3DTVs. Eyeglass maker Luxottica will offer its Oakley brand of 3D-compatible passive glasses, uncorrected or with prescriptions, later this year. To date, the only passive large-screen flat panels available in the US are expensive (>$6,000) 46-inch commercial monitors from JVC and Hyundai, and these systems are only capable of one-half HD resolution at 1920x540 (versus 1920x1080 for Full HD). In other remarks, Greer said he expects that active shutter 3DTVs will continue to be sold alongside the new 3D “passive glasses” sets for the next 4-5 years. http://www.reald.com
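For readers curious about the mechanics, the time-sequential scheme Greer describes can be sketched in a few lines of Python. This is a simplified illustrative model, not Real D's implementation; the frame labels and lens assignments are invented for the example.

```python
# Hypothetical model of time-sequential circular polarization (not Real D code).
# The panel alternates left/right images; a ZScreen-style modulator switches
# handedness in lockstep, and each passive lens transmits only its own handedness.

FRAMES = ["L0", "R0", "L1", "R1", "L2", "R2"]  # alternating eye images

def modulator_state(frame_index: int) -> str:
    # even frames -> clockwise (left eye), odd frames -> counterclockwise (right eye)
    return "CW" if frame_index % 2 == 0 else "CCW"

LENS_HANDEDNESS = {"left": "CW", "right": "CCW"}  # assumed lens assignment

for i, image in enumerate(FRAMES):
    pol = modulator_state(i)
    seen_by = [eye for eye, h in LENS_HANDEDNESS.items() if h == pol]
    print(f"frame {image}: polarization {pol}, visible to {seen_by[0]} eye")
```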

Atlona Technologies releases universal digital video signal generator with 3D testing function
Atlona Technologies released its AT-HD800, an HDMI signal generator capable of testing any video resolution or timing in any digital format. The HD800 allows users to diagnose specific resolution, video timing, EDID, and color space issues in any DVI or HDMI video system. The HD800 features 20 test patterns, including 3D tests: side-by-side, top-and-bottom, and frame packing, a format that is mandatory for all HDMI 1.4 devices. This powerful pocket-sized signal generator features an HDMI output as well as an HDMI pass-through to easily switch between test patterns and content from any other digital source. The HD800 is controlled using timing-select and pattern buttons on the unit, in concert with LED indicator lights for power, digital format, pattern, and bypass options. Users can cycle through multiple patterns, or display information about connected sources or displays such as EDID

version, sink name, source name, resolution capabilities, etc. Along with the ability to be powered by AA batteries, the unit can also be powered using the included 12V power supply. The HD800 comes with a silicone sleeve that allows users to access the unit while keeping it protected on any jobsite. Atlona’s HD800 started shipping in September with an MSRP of $299.00. http://www.atlona.com

Digital Projection and Mechdyne introduce next generation 3D media servers
Digital Projection International (DPI), a manufacturer of high-performance projection systems, and Mechdyne Corporation, specialists in immersive, networked and collaborative visualization solutions, introduced two new 3D-enabled media servers under the Dimension brand. The Dimension 3D and Dimension 3D Ultra deliver two options through which home entertainment enthusiasts can enjoy 2D and 3D movies, gaming, digital pictures, video files and more, when paired with a Digital Projection 3D projector such as the TITAN Reference 1080-3D. The newly launched Dimension media servers have been completely redesigned from a user interface and navigation perspective. A custom-created interface replaces the previous Windows Media structure, enabling a more intuitive and straightforward user experience. Additionally, numerous hardware upgrades, including the latest generation graphics technology, have been built into the new Dimension servers. The Dimension 3D enters the market at less than $20,000 and the Dimension 3D Ultra starts at under $50,000 MSRP.

The Dimension 3D servers remedy a common concern among home entertainment enthusiasts – the confusion surrounding the large number of incompatible 3D file formats – by providing an easy-to-use server that focuses on the emerging standards. Whether enthusiasts create their own 3D content with now-available 3D cameras, enjoy 3D Blu-ray, or simply want to up-convert their 2D content to 3D in real time, the Dimension 3D servers act as flexible content hubs for a wide variety of 3D formats and file types. Simplified usability is a key component of the Dimension platform, so the Dimension 3D servers can be easily integrated with industry-standard control systems or controlled through a simple iTouch control interface. http://www.digitalprojection.com/

Displaybank publishes 3DTV industry trend and market forecast
The 3DTV market is expected to represent 3% of the overall TV market in 2010, with 6.2 million units; among these, 5 million units are expected to be 3D LCD TVs and 1.2 million 3D PDP TVs.

Total 3D TV market forecast (unit based)

Displaybank forecasts 6.2 million 3DTVs will be sold in 2010, growing to 33 million units in 2012 and 83 million units by 2014 to represent 31% of the overall TV market. In terms of device type, the 3D LCD TV market is expected to reach 5.1 million units in 2010, representing 81% of the total 3DTV market, applied mainly to premium products featuring Full HD and 240Hz in large TVs over 40 inches. By 2014, the 3D LCD TV market would reach about 70 million units, or 28% of the LCD TV market. In 2010, 3D is expected to penetrate 8% of the PDP TV market, but by 2014 most PDP TV makers are expected to offer 3D as a standard function, so 3D PDP TVs are expected to represent 99% of all PDP TVs. The shutter-glass type will be the mainstream 3DTV technology in 2010: with the exception of LG Electronics, major TV makers including Samsung Electronics and Panasonic are launching shutter-glass-type 3D LCD TVs, and the polarizer-glass type is expected to reach only 2% penetration in 2014. Displaybank expects glasses-free 3DTVs to be unveiled by 2014. http://www.displaybank.com
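A quick back-of-envelope check of the totals implied by Displaybank's percentages (the derived market sizes below are our arithmetic from the figures quoted above, not Displaybank's own numbers):

```python
# Back-of-envelope check of the Displaybank figures quoted above.
units_3dtv_2010 = 6.2e6        # 3DTVs forecast for 2010
share_2010 = 0.03              # 3% of all TVs
units_3dtv_2014 = 83e6         # 3DTVs forecast for 2014
share_2014 = 0.31              # 31% of all TVs

print(f"Implied total TV market 2010: {units_3dtv_2010 / share_2010 / 1e6:.0f}M units")
print(f"Implied total TV market 2014: {units_3dtv_2014 / share_2014 / 1e6:.0f}M units")

# 3D LCD share of the LCD market in 2014
lcd_3d_2014 = 70e6             # "about 70 million units"
lcd_share_2014 = 0.28
print(f"Implied LCD TV market 2014: {lcd_3d_2014 / lcd_share_2014 / 1e6:.0f}M units")
```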


AVFoundry adds 3D processing to color correction device
Looking to satisfy the growing demand for quality video during 3D video reproduction, the Seattle-based video company AVFoundry has updated its VideoEq processor to work with 3D content. The VideoEq is a device that corrects color balance, and it provides users with independent color control of all color definitions in the video signal. With the update the company says that users can apply the same color correction methods to 3D content as to 2D content to maintain quality across all platforms. “The unique architecture of the VideoEq allows it to handle 3D video in a way that will be difficult for other video processors,” claims Eric Hernes, president of AVFoundry. http://avfoundry.com

CEA ups 3DTV forecast to 2.1 million units for 2010
The Consumer Electronics Association now expects 2.1 million 3DTV sets to ship in the US in 2010, double its forecast from earlier this year, with revenue in the segment expected to exceed $2.7 billion. The trade group's predictions for the nascent market have vacillated over the last several months. In December 2009, it pegged 2.2 million unit shipments for the US in 2010, then upped that to 4 million at the Consumer Electronics Show in January before revising that down to 1.05 million after the event. Now CEA has circled back to being more bullish on 3DTV. For 2011, the association is forecasting more than 6 million units to be sold, generating more than $7 billion in revenue. http://www.cea.com

CyberLink PowerDVD 10 supports full Blu-ray 3D MVC decoding on AMD Radeon HD 6800 Graphics
CyberLink announced support in PowerDVD for full-quality stereoscopic 3D video on AMD Radeon HD 6800 series graphics. On systems with the updated version of PowerDVD combined with the new AMD Radeon HD 6800 series graphics, consumers can enjoy full-quality Blu-ray 3D disc playback on the latest 3D televisions, through an HDMI 1.4a connection. Home theater enthusiasts are able to stream full multi-channel Blu-ray audio through the same HDMI connection to their compatible Audio/Video Receiver. PowerDVD takes full advantage of the Unified Video Decoder feature of AMD Radeon HD 6800 series graphics, delivering outstanding video quality even when decoding Blu-ray 3D MVC stereoscopic video at up to 60 megabits per second. The new PowerDVD update will be released in November 2010, available to existing PowerDVD 10 users for free via PowerDVD Update. http://www.cyberlink.com

Viewsonic introduces 3D monitor
Viewsonic is the first manufacturer to announce AMD Radeon 3D support. With the launch of the Radeon 6000 series of GPUs, AMD added 3D capabilities to the Radeon line, including support for Blu-ray 3D through the addition of Multiview Video Coding (MVC) decoding in UVD3. Viewsonic released a new monitor that uses AMD's technology: its V3D241wm ships with 3D capability using active shutter glasses. The LED-backlit monitor runs at 120Hz with an ultra-fast 2ms response time. The 24-inch screen has a brightness of 300 nits and a contrast ratio of 20,000,000:1, providing a high level of detail and color reproduction. It also has full 1920x1080 resolution. The stereo speakers are rated at four watts, and a USB port is included to plug in the included active-shutter glasses. The monitors come with HDMI, DVI and VGA connections. http://www.viewsonic.com


Vishay supplies components for 3D TV glasses
Vishay Intertechnology Inc. has come out with two infrared receivers for the liquid-crystal shutter glasses used with 3D-ready television sets. The receivers are designed to be built into the glasses, where they receive infrared signals from the TV set, causing the glasses’ shutters to open and close at the times necessary to create a 3D effect. The receivers operate at a frequency and wavelength different from those used by remote controls for TVs and set-top boxes, so the two won’t interfere with each other. Liquid-crystal shutter glasses don’t have physical shutters that open and close. Instead, they have liquid-crystal layers that darken when voltage is applied to them, creating the effect of shutters closing. A 3D television shows images intended for one eye or the other in conjunction with the proper lens of the glasses darkening to create a 3D effect. The glasses retail from around $100 to around $200, depending on the model. Vishay is selling its receivers at $1 apiece in small quantities. http://www.vishay.com
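The shutter sequencing those receivers drive is simple to model. The sketch below is an illustrative simplification (the 120Hz rate and frame ordering are assumptions for the example, not Vishay specifications):

```python
# Simplified model of liquid-crystal shutter sequencing (illustration only).
# At 120Hz the panel alternates eye views; the IR sync tells the glasses which
# lens to apply voltage to (darken) so that only the other eye sees the frame.

REFRESH_HZ = 120
frame_period_ms = 1000 / REFRESH_HZ

for frame in range(4):
    showing = "left" if frame % 2 == 0 else "right"
    darkened = "right" if showing == "left" else "left"
    t = frame * frame_period_ms
    print(f"t={t:5.2f}ms  panel shows {showing:5s} image; "
          f"voltage on {darkened} lens (shutter closed)")
```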

DVS brings out CLIPSTER features in 3D
DVS has showcased the latest CLIPSTER features, which include new highlights for 3D workflows, such as depth grading, as well as support for Apple ProRes 422 in the RAW deliverable process. CLIPSTER now also comes with a convenient tool set for the logging and linking of digital material. Digital deliverables from modern RAW cameras can be created at high speed: CLIPSTER is capable of creating a single timeline from a wide range of different RAW data, e.g. material from RED, ARRI D-20/D-21, ALEXA, Phantom or SI-2K cameras. All RAW formats can be decoded and demosaiced in real time. With its comprehensive set of burn-in features and a wide range of deliverable formats, CLIPSTER covers all aspects of the deliverable workflow.

Post houses especially benefit from the DI workstation’s new DCI capabilities: CLIPSTER generates 3D Digital Cinema Packages (DCPs) in real time, and its sophisticated hardware helps accelerate DCI mastering and 3D subtitling in 4K. Numerous stereoscopic real-time workflows enable users to effortlessly handle 3D material. Thanks to the new 3D depth grading, it is now possible to fine-tune the depth of a 3D scene. Here, CLIPSTER uses a very nuanced approach that enables the creation of perfect 3D for any screen size. As of now, CLIPSTER also supports another Apple codec, generating Apple ProRes in Proxy, LT and 4444 variants. In the 4444 version, CLIPSTER offers 12-bit processing for ALEXA camera feeds. The DI workstation reads and writes the codec under the Windows operating system. http://www.dvs.com

Latest video from Slash converted to 3D by PassmoreLab
The first video from Slash’s #1 album has been converted into full 3D, San Diego-based 3D studio PassmoreLab announced. The 5-minute rock music video, “By The Sword”, is the first single off the album and the first video from Slash ever to be released in 3D. PassmoreLab began work on the project in May, and the conversion took about four weeks to complete using PassmoreLab’s proprietary conversion technology. http://www.passmorelab.com

Elemental brings out 3D streaming
Elemental Live recently featured an end-to-end solution for GPU-accelerated acquisition, processing and delivery of broadcast video. The system simultaneously processes live video streams, saving valuable formatting time and eliminating traditional hardware. Live output streams via Microsoft Smooth Streaming to several devices simultaneously, including a 3D display viewed using glasses. Elemental also demonstrated streaming of high-definition 2D and 3D content to several devices at once, including a 3D display, an HDTV via set-top box, and iPad and iPod Touch platforms. At HP’s stand, Elemental Live ingested an SDI input directly from a live camera and encoded multiple live streams with Dynamic Streaming for Flash and Apple HTTP Adaptive Streaming for the iPad and iPhone. http://www.elementaltechnologies.com


Nokia and Intel focus on 3D and virtual reality for mobile devices
Intel, Nokia and Finland's University of Oulu announced that they are establishing a joint research center to create software for 3D and virtual reality experiences on mobile devices. The software will be developed by about 24 engineers in Oulu using the open-source MeeGo operating system, which was launched in February by Intel and Nokia. An early version of the MeeGo mobile phone OS went to developers in late June. The companies said that they envision 3D and virtual reality software running on a broad range of mobile devices, including smart phones and tablets. One of the earliest practical applications for the software will be a virtual control panel on a mobile device that regulates heating and lighting in a real-world home. Creating social networks within virtual worlds for mobile devices, coupled with GPS and other location information, “will be a killer app or at least very successful”. Intel Labs said the focus of the research center will be on building open-source software that complements Intel’s chips. The software could also give rise to new Nokia devices on future Intel chip architectures.

An avatar watches YouTube in a virtual world created using the RealXtend open-source platform. Nokia, Intel and Finland’s University of Oulu are working on making similar virtual worlds possible via mobile platforms.

The research effort does not require building a custom 3D display, because some 3D experiences can be shown in two dimensions. The research group also wants to avoid creating a 3D experience that requires using special glasses, noting that small screens don’t have the same physical requirements as larger displays, such as big-screen TVs and movie theater screens, whose viewers can’t see 3D effects with the naked eye. The research center gives Nokia and Intel an opportunity to kick-start the MeeGo operating system. 3D will be most important in larger screen mobile devices, such as tablets, and less so in smart phones. http://www.nokia.com

Mitsubishi’s nationwide 3D experience tour brings largest lineup of 3DTVs to cities across America
Mitsubishi Digital Electronics America (MDEA) announced the launch of its nationwide 3D Experience Tour, which will introduce its unparalleled lineup of theater-like immersive 3D DLP home cinema televisions to US consumers. The tour packs Mitsubishi’s latest 3DTV technology into Mitsubishi’s Mobile Marketing Showroom, a 995 sq. ft. 18-wheeled home theater experience that gives viewers across the country the chance to experience the industry’s largest and broadest lineup of 60-inch-plus large-screen 3D-capable TVs available today. The tour will showcase Mitsubishi’s full lineup of very large 3D DLP home cinema TVs, including an 82-inch behemoth, which offers more than three times the viewing area of a 46-inch screen. All 3D DLP home cinema TVs use the same core DLP technology as the vast majority of 3D movie theaters to deliver a cinema-quality 3D sensory experience at home. Sharing the stage will be Mitsubishi’s flagship 3D LaserVue TV, one of the world’s most energy-efficient large-screen TVs, bettering Energy Star requirements by 50%. LaserVue is the only TV that uses advanced laser technology to deliver true cinema-like color, approximately doubling the color spectrum available from any other TV. For consumers who want fully immersive, cinema-quality 5.1 sound without the hassle of extra wires and speakers, Mitsubishi’s 3D Experience Tour will include the Unisen Immersive Sound LED TV series. The Unisen series integrates up to 18 intelligent speakers that use an advanced algorithm to delay and project perfectly balanced, independent sound waves, bringing concert-quality audio to the viewing experience without the clutter of separate audio components. Mitsubishi will tweet from all locations throughout the tour, including the cities below. Find out when it will visit an area at www.twitter.com/Mitsubishi3D. October 15-17: Boston, MA; October 22-24: San Antonio, TX; October 29-31: Dallas, TX; November 5-7: Albuquerque, NM; November 12-14: Tulsa, OK; November 19-21: Dallas, TX. http://www.mitsubishi-tv.com


STMicroelectronics unveils new high-performance digital TV system-on-chip with integrated 3DTV
STMicroelectronics announced a new TV system-on-chip (SoC) IC offering 3DTV support and advanced 120Hz MEMC (motion estimation, motion compensation). Designed for next-generation 1080p full-high-definition (FHD) integrated digital TVs (iDTVs), ST’s new FLI7525 SoC enables a new class of mid-range, 3DTV-enabled, 120Hz Internet TVs. The FLI7525 is the flagship product in ST’s new Freeman Premier series of SoCs and is the industry’s first TV SoC with integrated 3DTV support and 120Hz MEMC processing. ST implements MEMC with a proprietary technology that reduces film judder and motion blur to preserve 3D effects during fast motion. The FLI7525 is compliant with HDMI v1.4a and its required 3DTV formats, and also includes proprietary judder-reduction technology and depth control for video and on-screen displays. In addition to offering high-performance H.264/MPEG audio/video decoding and comprehensive integrated features, the FLI7525 SoC delivers an enhanced user experience with full support for broadband-Internet iDTV functionality and expanded graphics capabilities, enabling compelling services and new premium-content business models. The FLI7525 is pin- and software-compatible with the FLI7510, which was introduced at CES 2010, and is the culmination of the integration of world-class technologies from ST’s industry-leading set-top-box chips and the Emmy Award-winning Faroudja video processing technologies. http://www.st.com

Aiptek launches cheap 3D camcorder
German manufacturer Aiptek has launched a cheap-as-chips glassless 3D camcorder. Panasonic had announced that its high-end HDC-SDT750 camcorder would be first to market, and Nintendo said it would be the first company to offer a glassless 3D product with its 3DS handheld console; Aiptek beat them both to market. Aiptek’s 3D camcorder was designed with portable portrait dimensions. The 3D trickery comes courtesy of two separate lenses and image sensors for recording videos in 3D; video is recorded at 720p. The video can be viewed without glasses on the Aiptek display, which uses what sounds exactly like the same parallax-barrier technology Sharp used in its Nintendo 3D display. 3D content can be uploaded to YouTube 3D over the USB connection and can even be converted to old-school anaglyph, viewable with the 1950s retro glasses supplied. The 3D can also be displayed on an active shutter 3D TV using a mini HDMI port. The camcorder can record in HD 2D and has a 5MP still camera as well. Aiptek reckons the battery will last for half an hour, and the camcorder also supports SD cards. It sells for £199. http://www.aiptek.de
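Anaglyph conversion of the kind Aiptek bundles is straightforward channel mixing on a stereo pair. A minimal red-cyan sketch using NumPy follows; it illustrates the generic technique, not Aiptek's actual algorithm:

```python
import numpy as np

def red_cyan_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine a stereo pair (H x W x 3, uint8) into a red-cyan anaglyph:
    red channel from the left eye, green and blue from the right eye."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]      # red   <- left image
    out[..., 1] = right[..., 1]     # green <- right image
    out[..., 2] = right[..., 2]     # blue  <- right image
    return out

# Tiny synthetic example: a white square shifted between the eyes (disparity).
left = np.zeros((8, 8, 3), dtype=np.uint8);  left[2:6, 1:5] = 255
right = np.zeros((8, 8, 3), dtype=np.uint8); right[2:6, 3:7] = 255
print(red_cyan_anaglyph(left, right)[3])  # one row: red-only, white, cyan-only pixels
```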

Hillcrest Labs brings motion control to first 3D Internet-connected LED-backlit LCD HDTVs
Hillcrest Labs disclosed that LG Electronics is using its patented Freespace in-air pointing and motion control technology for LG’s first 3D-ready, Internet-connected HDTVs, which are currently shipping in the Korean market and will be available globally. LG and Hillcrest have entered into a worldwide license agreement for LG to use Hillcrest Labs’ patented Freespace in-air pointing and motion control solution in current and future products. The “Magic Wand” remote, powered by Hillcrest Labs' Freespace technology, will be included in certain models within LG’s new INFINIA line of LED LCD HDTVs. Hillcrest Labs' Freespace technology is a complete solution for in-air pointing and motion control that can be added to a wide range of CE devices, including remote controls, game controllers, mobile handsets, PC peripherals and more. The LG Magic Wand system utilizes Freespace to make a remote control with the most advanced in-air pointing and motion control capabilities on the market today. These include: high-resolution pointer accuracy (the Magic Wand is a highly precise pointing remote control that allows users to easily select icons and images, small and large, on a high-resolution screen); orientation compensation (regardless of the orientation of the Magic Wand remote in space, e.g. pointing at the ground or turned sideways, Freespace technology generates intuitive cursor motions on the screen, and MEMS sensors combined with Hillcrest’s proprietary software enable consistent control of the device from any position – standing, sitting or reclining); and adaptive tremor removal (Hillcrest's technology can distinguish between intentional and unintentional movement, including natural hand tremors). http://www.hillcrestlabs.com http://www.lg.com


Panasonic develops consumer 3D camera
Panasonic announced that it is releasing what it terms “the world’s first 3D camcorder for consumers”. The device is known as the model HDC-SDT750, and allows users to create true-to-life three-dimensional images by simply attaching a 3D conversion lens. The camcorder appeared on dealers’ shelves in Japan in August, with a rollout in other countries expected by the fall. The HDC-SDT750 is based around Panasonic’s 3MOS sensor system and records in 1080/60p (NTSC). The unit also operates in a 50-field PAL mode. It also features the company’s hybrid OIS (optical stabilization system) for steady shots without a tripod. It operates as a standard HD camcorder when used without the special conversion lens. http://www.panasonic.com

Fujifilm launches revamped “true” 3D camera
Fujifilm has unveiled its latest two-lens, two-sensor 3D-capable compact camera. The FinePix Real 3D W3 comes a year after the debut of Fuji's first 3D camera and, like that model, sports a pair of 10Mp sensors behind 3x optical zoom lenses placed 75mm apart. Fuji says that the W3 delivers more realistic stereoscopic 3D imagery than cameras that interpolate the left and right-eye images from a single lens and sensor. The W3 improves on its predecessor by upping the rear-facing LCD from 2.8 inches to 3.5 inches and retooling the controls to a more digicam-standard set of dials and buttons. The screen can show snaps in 3D without special glasses. The W3 can shoot 720p video and play it back through its HDMI 1.4 port. It will cost £399 when it goes on sale at the end of September. http://www.fujifilm.com

Samsung unveils world’s first portable Blu-ray player with 3D capability
Samsung Electronics America announced the US availability of the world’s first portable Blu-ray player with 3D capability. The Samsung BD-C8000 delivers true 1080p HD video, enables 3D playback when connected to a 3D-capable TV and used with 3D glasses, and features a brilliant 10.1-inch screen. It includes built-in Wi-Fi for easy access to advanced connectivity features, including an expanding library of content and applications via Samsung Apps, along with HDMI 1.4a support and three hours of battery life. Samsung also introduced three additional standalone Blu-ray players and three new Blu-ray home theater systems that deliver crystal-clear picture and sound quality. The Samsung BD-C7900 Blu-ray player is designed for the HD guru. With two HDMI outputs, the BD-C7900 makes it easy to connect multiple HD sources to experience a full 1080p picture and digital surround sound. The two HDMI outputs also allow for support of legacy receivers, so there is no need to upgrade to a new HDMI 1.4 receiver; the user can simply connect their receiver to the HDMI 1.3 output for rich, life-like surround sound. This sleek and stylish device offers built-in Wi-Fi for seamless connectivity to Samsung Apps. It also features Samsung’s proprietary AllShare, which allows people to wirelessly sync digital devices so that they can enjoy music, movies and photos directly from their DLNA-certified PC, camera and mobile devices on their Samsung TV screen. Samsung’s BD-C6800 Blu-ray player includes built-in Wi-Fi, enabling it to wirelessly receive web-based content via Samsung Apps and play back media files from DLNA-certified devices via Samsung’s AllShare. The entry-level BD-C5900 is designed for those looking for a high-performing yet affordable 3D Blu-ray player. All Samsung Blu-ray players support a wide range of media formats and access to an expanding library of applications through Samsung Apps, the first HDTV-based applications store. These new 3D Blu-ray players deliver a premium experience with both 2D and 3D content and allow consumers to future-proof their living rooms for when they are ready to purchase a 3D HDTV. http://www.samsung.com


Sharp plans to launch 3D smart phone this year
Sharp Corp plans to take on Apple’s iPhone by the end of the year, with the international launch of a smart phone featuring a 3D panel that can be viewed without special glasses. The new phone will also likely have a 3D-capable camera. Though Japan's biggest mobile phone maker, Sharp is a small player in global terms and has so far failed to capitalize on rapid growth in the smart phone market. A previous venture into the segment, the Kin model manufactured for Microsoft, flopped in the United States and was canceled in June after less than three months. Sharp unveiled its 3D panel technology, which is only suitable for small screens such as those on mobile phones and portable game consoles, in April. Mobile electronics that let users view 3D images without special glasses have been around for some time, since people tend to look at the display from a fixed distance and angle on personal devices, making it technologically less difficult to offer a 3D function. http://www.sharp.com

Zeon to mass-produce highly productive phase difference film for 3DTVs
Zeon Corp announced that it will start volume production of a highly productive phase difference film for LCD panels. Expecting that about 40% of 40-inch and larger TVs will be 3DTVs in 2014, Zeon aims to acquire a large share of the market with the new phase difference film. The phase difference film can be attached to a polarizing plate in a continuous manner by controlling the array direction of the molecules and using a roller, a method called “roll-to-roll lamination”. Other phase difference films are attached to a polarizing plate by punching out the films into the desired shape and size, cutting the polarizing plate and attaching them together (the batch method). The new phase difference film is expected to streamline such manufacturing processes and lower costs. Zeon will build new facilities at a manufacturing plant in Toyama Prefecture, Japan, giving it an annual production capacity of 15,000,000 m², and start volume production in October 2011. http://www.zeon.co.jp

The “roll-to-roll lamination” method, in which a phase difference film and a polarizing plate are attached together in a continuous manner

CableLabs issues 3D encoding specification
Cable Television Laboratories (CableLabs), a non-profit R&D consortium, has announced the publication of a new specification for producers and others involved in 3DTV content. “Content Encoding Profiles 3.0 Specification OC-SP-CEP3.0-I01-100827” provides details of the requirements for formatting or “panelizing” 3D content into a frame-compatible format for cable television systems. “This new CableLabs specification was developed with support from cable operators, programmers and equipment vendors and will be publicly available for any industry to use,” said Paul Liao, CableLabs president and CEO. The new document replaces a previous video-on-demand spec, building on the 2D coding framework set forth in the earlier document. It is intended to be used as a reference for both 2D and 3D CATV formats. A key part of this specification is the definitions for signaling 3D content over existing digital video infrastructure that uses either MPEG-2 or MPEG-4 (AVC/H.264) coding. This signaling is critical for the receiver/decoder to enable automatic format detection and simplified user experiences when switching between 2D and 3D programs. Cable Television Laboratories was founded in 1988 by members of the cable television industry. The new specification can be found at the organization’s website, under OpenCable Specifications (PDF). http://www.cablelabs.com
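Frame-compatible "panelizing" of the kind the specification standardizes squeezes both eye views into one ordinary video frame so existing infrastructure can carry it. A minimal side-by-side sketch using NumPy (illustrative only; the CableLabs document also defines the signaling, which is omitted here):

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two full-resolution eye views (H x W x 3) into one frame of the
    same size: each view gives up half its horizontal resolution."""
    half_l = left[:, ::2]   # naive 2:1 horizontal decimation (real encoders filter first)
    half_r = right[:, ::2]
    return np.concatenate([half_l, half_r], axis=1)

left = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
frame = pack_side_by_side(left, right)
print(frame.shape)  # (1080, 1920, 3): still an ordinary 1080p frame
```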


Panasonic announces Full HD 3D Blu-ray home theater systems
Panasonic announced the debut of its first Full HD 3D Blu-ray Disc home theater systems, providing the ultimate immersive 3D movie experience at home. The two new home theater systems, models SC-BTT750 and SC-BTT350, employ advanced proprietary technology to envelop viewers in a remarkably authentic home theater experience and bring them into the 3D world. Networking and connectivity capabilities abound in the new models through VIERA CAST compatibility, which provides access to new online content from Netflix, Video on Demand, YouTube, Picasa, Bloomberg TV, weather information and other services. The top-of-the-line SC-BTT750 is equipped with a wireless LAN adaptor, which plugs into the USB terminal to deliver online video streaming without a LAN cable, so the system can be installed without any complicated wiring. The SC-BTT350 is wireless-LAN-ready and can be used wirelessly by linking to the DY-WL10 wireless LAN adaptor (broadband Internet service required). To achieve true-to-cinema picture quality, both models incorporate the PHL Reference Chroma Processor Plus, developed at the Panasonic Hollywood Laboratory. This high-precision processing technology reproduces clear, vibrant colors that are extremely faithful to the original film.

Panasonic's Full HD 3D VIERA plasma TVs come in five screen sizes – 42-inch class (41.6 inches measured diagonally), 50-inch class (49.9 inches), 54-inch class (54.1 inches), 58-inch class (58.0 inches) and 65-inch class (64.7 inches). In addition, Panasonic also offers three professional Full HD 3D plasma monitors in 85-inch class (85.3 diagonal inches), 103-inch class (102.5 diagonal inches) and a soon-to-be-released 152-inch class (152 diagonal inches) – the world’s largest plasma screen. All Panasonic VIERA and professional Full HD 3DTV models are also Full HD TVs that display pristine 1080p content in 2D for conventional HD viewing.

In addition, Panasonic and DIRECTV recently ushered in a new age in the rapid growth of 3D entertainment for the home with the launch of n3D powered by Panasonic - a DIRECTV channel dedicated exclusively to 3D programming. n3D powered by Panasonic is now available at no additional cost to millions of DIRECTV HD customers and features a range of sports and entertainment programming exclusively in 3D. In July, Panasonic took the 3D era to a new level for consumers with the unveiling of the Panasonic HDC-SDT750 (available in October 2010), the world's first consumer 3D camcorder, which includes a 3D conversion lens that enables the camcorder to shoot powerful and true-to-life 3D video content and play it back on 3D-capable HDTVs (a TV that is capable of side-by-side method 3D playback, 3D eyewear, and HDMI cable connection are required to play the recorded 3D images). http://www.panasonic.com

Sunny Ocean Studios enables 84-inch display of sports events in 3D without need for glasses
Sunny Ocean Studios presented a world first at the Youth Olympic Games in Singapore: a public video wall with an 84-inch screen providing 3D entertainment without the need for glasses. On the display, Sunny Ocean Studios showed material it devised itself, allowing a 3D experience without additional visual aids. A total of four displays of this kind were set up in prominent locations in Singapore during the Youth Games. The 3D video walls are based on four synchronized monitors and work with 30-views technology (one image is captured from 30 different perspectives). This technology allows a glasses-free 3D experience from several points in the room and is ideally suited for large rooms or public squares. Specifically for the Youth Olympic Games, Sunny Ocean Studios converted conventional 3D films in-house into 3D video material that needs no additional aids such as glasses. The 3D wall has a diagonal of 3.5 meters and works on the basis of commercially available monitors at 25fps. http://www.sunny-ocean.com

84-inch 3D wall in Singapore
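Autostereoscopic walls of this type generally interleave the rendered views across the panel so the optics can steer each view to a different viewing zone. The column-interleaving sketch below shows the generic idea, not Sunny Ocean Studios' actual view mapping:

```python
import numpy as np

def interleave_views(views: list) -> np.ndarray:
    """Column-interleave N rendered views (each H x W x 3) into one panel image.
    Column c of the output comes from view (c mod N); a lenticular or barrier
    optic then steers each subset of columns toward a different viewing zone."""
    n = len(views)
    h, w, ch = views[0].shape
    panel = np.empty((h, w, ch), dtype=views[0].dtype)
    for c in range(w):
        panel[:, c] = views[c % n][:, c]
    return panel

# 30 flat "views", each filled with its own index, as a stand-in for renders.
views = [np.full((4, 30, 3), v, dtype=np.uint8) for v in range(30)]
print(interleave_views(views)[0, :8, 0])  # columns cycle through views 0..7
```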


Oakley innovation achieves unrivaled 3D eyewear
Oakley announced that it has engineered innovative new 3D eyewear that both complements and optimizes the technology used in the majority of 3D movie theaters around the world. The company's proprietary frame innovations have been combined with lens technologies that will maximize the 3D experience. Taking advantage of Oakley's new HDO-3D technology, premium editions in the new line will be the first 3D eyewear on Earth with optically correct lenses. Oakley 3D eyewear will be available prior to the 2010 holiday season. It will initially be sold through premium optical distribution channels in the US, followed by a global launch in 2011. Oakley is pursuing partnerships with manufacturers of home 3D systems that utilize passive polarization. This will allow consumers to use the same eyewear for home and cinema 3D entertainment. In addition to the first optically correct 3D eyewear in the world, Oakley has achieved the first 3D lenses ever made with high-wrap curvature, essential for maximizing the wearer's field of vision. A full panoramic view can be experienced without the need to turn one's head in order to see what is happening at the corners of the theater screen. http://www.oakley.com

3D movie streaming receives a boost with new standard
Movies in 3D could soon stream to PCs and TV sets with the development of a new video file compression standard. Researchers at the Heinrich Hertz Institute (HHI) are developing the new MVC (Multiview Video Coding) video compression standard, which would allow compressed 3D video to be transmitted over the Internet or by satellite without interruption, the institute said. The institute, based in Berlin, showcased streaming of 3D movies based on the standard over the Internet and satellite at the International Broadcasting Convention (IBC). The research is being done at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI).

3D movies and broadcast content are being created for new devices like 3DTVs and Blu-ray 3D players, but an efficient way to transmit them over broadband networks is missing, the institute said. The content requires considerably more bandwidth than regular video feeds, and observers have said that a large percentage of homes do not have the capacity to play streaming 3D movies. MVC could potentially resolve quality-of-service and buffering issues by squeezing 3D movies into compact files that can be transmitted over existing broadband networks. The trick is to load files quickly so 3D video can be viewed without interruption: the standard packs the two separate images, one for each eye, needed to produce the stereoscopic 3D effect, while reducing the bit rate significantly, which helps transmit the movies faster. The latest 3DTVs and Blu-ray 3D players will be able to decode the separate images from MVC-coded movies to display the 3D effect. The MVC format is a 3D add-on to the existing H.264/MPEG-4 AVC video compression standard. Movie services like Netflix are already delivering movies in HD format, but do not yet offer 3D streaming. Samsung has said it will start streaming 3D movie trailers from content providers later this year, but not full 3D movies. http://www.hhi.fraunhofer.de/ip
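The bit-rate saving comes from inter-view prediction: a right-eye image is mostly a disparity-shifted copy of the left, so coding the base view plus a residual is far cheaper than coding both views outright. The toy Python demonstration below illustrates the principle with zlib; real MVC uses block-based disparity/motion compensation, and the images and the known 4-pixel disparity here are synthetic assumptions:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
left = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # stand-in textured view
right = np.roll(left, 4, axis=1).copy()                   # right eye: left shifted 4px
right[100:120, 150:170] ^= 0x20                           # plus a small genuine difference

# Independent coding: compress each view on its own.
independent = len(zlib.compress(left.tobytes())) + len(zlib.compress(right.tobytes()))

# Inter-view prediction: predict the right view by disparity-shifting the left,
# then code only the residual (the true 4px disparity is assumed known here).
prediction = np.roll(left, 4, axis=1)
residual = right.astype(np.int16) - prediction.astype(np.int16)
predicted = len(zlib.compress(left.tobytes())) + len(zlib.compress(residual.tobytes()))

print(f"views coded independently : {independent} bytes")
print(f"base view + residual      : {predicted} bytes")   # far smaller second term
```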

3D plays important role in 2010 theater revenues
Box Office Mojo reports that 2010 US movie revenues to date continue to tell an extremely positive story for the 3D format. All-time revenue record-holder Avatar was released in December 2009, so although most of its run-up occurred in 2010, the Avatar numbers are not included in 2010 data. Toy Story 3 already rates as the #9 revenue-producing movie of all time. In addition to Toy Story 3, the 3D titles Alice in Wonderland, Despicable Me, Shrek Forever After, How to Train Your Dragon, and Clash of the Titans all are in the top 10 grossing movies for the year (so far), all of them exceeding $150 million. http://boxofficemojo.com/yearly/chart/?yr=2010&p=.htm

2010 domestic grosses (through 10/23/2010):

Rank  Movie Title                   Total Gross
1     Toy Story 3                   $412,844,168
2     Alice in Wonderland (2010)    $334,191,110
3     Iron Man 2                    $312,128,345
4     The Twilight Saga: Eclipse    $300,527,315
5     Inception                     $289,814,553
6     Despicable Me                 $247,097,470
7     Shrek Forever After           $238,395,990
8     How to Train Your Dragon      $217,581,231
9     The Karate Kid                $176,591,618
10    Clash of the Titans (2010)    $163,214,888



Web3D 2010, July 24-25, 2010, Los Angeles, California

In the first of two reports Phillip Hill covers papers from the High Performance Computing Center (HLRS)/Stellba Hydro GmbH & Co. KG, Ecole Centrale/Université de Lyon/EDF R&D, ISTI-CNR/Scuola Normale Superiore, University of Trier, Armstrong Atlantic State University, Fraunhofer IGD/TU Darmstadt, and ISTI-CNR

Collaborative Steering and Post-Processing of Simulations on HPC Resources
Florian Niebling and Andreas Kopecki, High Performance Computing Center (HLRS), Stuttgart, Germany; Martin Becker, Stellba Hydro GmbH & Co. KG, Heidenheim an der Brenz, Germany

Nowadays, most if not all work concerning the technical development of products is aided by computer technology. CAD tools are used to construct the objects, high-quality rendering is used to visualize prototype designs, and most importantly, products are optimized using computational simulation methods. Ensuring the proper and optimal functioning of a future product requires the collaboration of human experts using a multitude of different tools. Computational meshes suitable for numerical simulations have to be created from geometry by meshing experts, boundary conditions – i.e. parameters that define the different conditions a simulation should try to reproduce – have to be defined, the simulation has to be started and monitored, and the results have to be analyzed. Each of these steps typically involves a different tool and requires specialized expertise.

This paper shows how to merge those steps, usually performed by different tools, into one coherent platform, and how to present uniform access to this platform in a browser interface. This gives teams of engineers collaborative access to all parameters needed to influence the geometry creation, meshing, simulation on high-performance computing resources and post-processing of their simulation results, delivering these results directly into the web browser using modern WebGL and AJAX techniques. The web-based access introduced in this work also contributes to lowering the threshold for engineers to use remote simulation and post-processing facilities, allowing them to create and analyze data sets on their local workstation or mobile devices. In this paper, the researchers present a real-life scientific engineering workflow from the field of turbomachinery design to demonstrate the applicability of their solution for web-based collaborative interactive simulation and post-processing, including a web-based 3D rendering environment.

Collaborative post-processing and steering of a computational fluid dynamics (CFD) simulation in an HTML5 compatible web browser
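The browser-to-cluster interaction the paper describes (parameter changes pushed up, fresh results polled back via AJAX) follows a familiar pattern, and the client-side Python sketch below conveys the shape of it. The endpoint names and payloads are hypothetical, not the authors' API:

```python
import json
import time
import urllib.request

BASE = "http://simulation-host.example/api"   # hypothetical steering endpoint

def set_boundary_condition(name: str, value: float) -> None:
    """Push a changed simulation parameter to the steering server (sketch)."""
    body = json.dumps({"parameter": name, "value": value}).encode()
    req = urllib.request.Request(f"{BASE}/parameters", data=body,
                                 headers={"Content-Type": "application/json"},
                                 method="PUT")
    urllib.request.urlopen(req)

def poll_result(last_step: int) -> dict:
    """Poll until the solver has advanced past last_step, then fetch geometry
    for the browser's WebGL renderer (sketch)."""
    while True:
        with urllib.request.urlopen(f"{BASE}/status") as resp:
            status = json.load(resp)
        if status["timestep"] > last_step:
            url = f"{BASE}/mesh?step={status['timestep']}"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
        time.sleep(1.0)   # AJAX-style polling interval
```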

17 Veritas et Visus 3rd Dimension October 2010

Remote Scientific Visualization of Progressive 3D Meshes with X3D
Adrien Maglo and Céline Hudelot, Ecole Centrale, Paris, France; Guillaume Lavoué, Florent Dupont, and Ho Lee, Université de Lyon, Lyon, France; Christophe Mouton, EDF R&D, Paris, France

This paper presents a framework, integrated into the X3D file format, for the streaming of 3D content in the context of remote scientific visualization; a progressive mesh compression method is proposed that can handle 3D objects associated with attributes like colors, while producing high quality intermediate levels of detail (LOD). Efficient adaptation mechanisms are also proposed so as to optimize the LOD management of the 3D scene according to different constraints like the network bandwidth, the device graphic capability, the display resolution and the user preferences. Experiments demonstrate the efficiency of the approach in scientific visualization scenarios.

Progressive decompression of the radiator model (16,002 vertices). Its original X3D size is 953kB.

Visualization Methods for Molecular Studies on the Web Platform
Marco Callieri, Marco Di Benedetto, Monica Zoppè, and Roberto Scopigno, ISTI-CNR, Pisa, Italy; Raluca Mihaela Andrei, Scuola Normale Superiore, Pisa, Italy

This work presents a technical solution for creating visualization schemes for biological data on the web platform. The proposed technology tries to overcome the standard approach of molecular/biochemical visualization tools, which generally provide a fixed set of visualization methods. This goal is reached by exploiting the capabilities of the WebGL API and the high-level objects of the SpiderGL library. These features give users the possibility to implement an arbitrary visualization scheme while keeping the implementation process simple. To better explain the philosophy and capabilities of this technology, the researchers describe the implementation of the web version of a specific visualization method, demonstrating how it can satisfy both the requirements of scientific rigor in manipulating the data and the need to produce flexible and appealing rendering styles.
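To give a flavor of what a user-defined scheme looks like at this level (an illustrative raw GLSL fragment shader embedded in TypeScript, not SpiderGL's actual API), a per-vertex scalar such as electrostatic potential can be mapped through an arbitrary color ramp:

    // GLSL fragment shader source, kept as a string for WebGL compilation.
    const rampFragmentShader = `
      precision mediump float;
      varying float vScalar;     // property interpolated from the vertices
      uniform vec3 uLow, uHigh;  // colors at the two ends of the ramp
      void main() {
        float t = clamp(vScalar, 0.0, 1.0);
        gl_FragColor = vec4(mix(uLow, uHigh, t), 1.0); // any style can go here
      }
    `;

Swapping this string for a different shader is all it takes to change the rendering style, which is the flexibility the authors describe.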

Standard representation of molecular surface properties using color ramps and field lines (left), the same properties drawn using complex shading techniques (center) and the electrical interaction of two proteins (right), rendered on a webpage by using SpiderGL and WebGL


ChartFlight – From Spreadsheets to Computer-Animated Data Flights
Rainer Lutz, and Stephan Diehl, University of Trier, Trier, Germany

In business as well as science, a clear and professional presentation of quantitative information is often required and helps to communicate new insights efficiently. The predominant approach is to integrate charts into slide shows created with standard presentation programs. In this paper, the researchers introduce the chart flight metaphor for visualizing spatially distributed statistical data as a computer-generated three-dimensional camera flight over a map with animated charts. Their web application leverages the Blender 3D modeling and animation tool to let end users submit their data sets and easily generate chart flight videos without profound knowledge of computer graphics methods and systems. The generated videos can be included in slide-show presentations, put on web pages or shared via file-hosting sites, and even displayed on low-performance hardware devices like mobile phones or netbooks.

Selection of different real-world examples: results of German parliamentary elections by federal state (a), touches of the ball for players of a soccer team (b), migration in Germany (c), cancer incidence statistics for the most common cancer types (d and e).

Extending the Web3D: Design of Conventional GUI Libraries in X3D
Ivan Sopin, and Felix G. Hamza-Lup, Armstrong Atlantic State University, Savannah, Georgia

The Extensible 3D (X3D) modeling language is one of the leading Web3D technologies. Despite its rich functionality, the language does not currently provide tools for rapid development of conventional graphical user interfaces (GUIs). Every X3D author is responsible for building, from primitives, a purpose-specific set of required interface components, often for a single use. This paper addresses the challenge of creating consistent, efficient, interactive, and visually appealing GUIs by proposing the X3D User Interface (X3DUI) library. This library includes a wide range of cross-compatible X3D widgets with configurable appearance and behavior. With this library, the researchers attempt to standardize GUI construction across various X3D-driven projects, and improve the reusability, compatibility, adaptability, readability, and flexibility of many existing applications.

SpiderGL: A JavaScript 3D Graphics Library for Next-Generation WWW
Marco Di Benedetto, Federico Ponchio, Fabio Ganovelli, and Roberto Scopigno, ISTI-CNR, Pisa, Italy

Thanks to the WebGL graphics API specification for the JavaScript programming language, the possibility of using GPU capabilities in a web browser without the need for an ad-hoc plug-in is now becoming reality. This paper introduces SpiderGL, a JavaScript library for developing 3D graphics web applications. SpiderGL provides data structures and algorithms to ease the use of WebGL, to define and manipulate shapes, to import 3D models in various formats, and to handle asynchronous data loading. The researchers show the potential of this novel library with a number of demo applications. Furthermore, they introduce MeShade, a SpiderGL-based web application for shader material editing from within the web browser, which produces all the code needed for embedding interactive 3D model visualization capabilities inside web pages and online repositories.

The WebGL specification is still in draft and is implemented only in experimental versions of most web browsers. The researchers successfully tested their library with the latest builds of the most common web browsers on several desktop systems. The results presented here were obtained with the Chromium web browser on a Windows Vista system with an Intel i7 920 processor, 3GB RAM, a 500GB hard drive and an Nvidia GTX 260 graphics board, with screen vertical synchronization disabled. The collected results should be interpreted in light of the fact that a minimal HTML/JS page that only clears the color buffer reaches a limit of exactly 250 frames per second; the researchers suspect that some kind of temporal quantization occurs in the browser event loop.
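That exact 250fps ceiling is consistent with the roughly 4ms minimum delay browsers impose on repeated zero-delay timers (1/0.004s = 250); a redraw loop of the kind such a measurement implies (a sketch, not the authors' test code) makes the quantization easy to see:

    function measureClearRate(gl: WebGLRenderingContext, seconds = 5): void {
      let frames = 0;
      const t0 = performance.now();
      const tick = () => {
        gl.clearColor(0, 0, 0, 1);
        gl.clear(gl.COLOR_BUFFER_BIT);      // minimal page: clear only
        frames++;
        if (performance.now() - t0 < seconds * 1000) {
          setTimeout(tick, 0);              // clamped to ~4ms, capping at ~250fps
        } else {
          console.log(`${(frames / seconds).toFixed(1)} fps`);
        }
      };
      tick();
    }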

Left: rendering a BlockMap with SpiderGL. Center: architecture of SpiderGL. Right: MeShade, a SpiderGL-based tool for shader authoring.

Shadow mapping (left): a scene composed of 100K triangles is rendered at the maximum reachable speed of 250fps. The 1024x1024 shadow map has been packed into a 32-bit RGBA texture because the browser does not support depth-texture fetches in fragment programs. Right: snapshot of an adaptive multi-resolution rendering of a 4Kx4K terrain (Puget Sound model)

Interactive Textures as Spatial User Interfaces in X3D
Y. Jung, S. Webel, M. Olbrich, T. Drevensek, T. Franke, M. Roth, and D. Fellner, Fraunhofer IGD/TU Darmstadt, Darmstadt, Germany

3D applications, e.g. in the context of visualization or interactive design review, can require complex user interaction to manipulate certain elements, a typical task that calls for standard user interface elements. However, there are still no generalized methods for selecting and manipulating objects in 3D scenes, and 3D GUI elements often fail to gain support for reasons of simplicity, leaving developers to replicate interactive elements themselves. Therefore, the researchers present a set of nodes that introduce different kinds of 2D user interfaces to X3D. They define a base type for these user interfaces called "InteractiveTexture", which is a 2D texture node implementing slots for input forwarding. From this node they derive several user interface representations to enable complex user interaction suitable for both desktop and immersive settings.


EuroITV Conference on Interactive TV and Video, June 9-11, 2010, Tampere, Finland

Phillip Hill covers papers from CERTH–ITI and the Federal University of Paraiba

Incorporating 3D Technologies to the Brazilian DTV Standard
Daniel F. L. Souza, Tatiana A. Tavares, Liliane S. Machado, and Guido L. Souza Filho, Federal University of Paraiba, João Pessoa, Brazil

The support for 3D technologies in a digital television (DTV) environment extends the possibilities of entertainment and interactivity. In this paper the researchers describe an architecture for integrating 3D technologies in middleware for digital television systems. As a case study, they present the use of the proposed strategy in the Brazilian Digital Television middleware, which is called Ginga. The integration strategies are presented and compared with other studies in the literature. Finally, they present an architecture based on the proposed strategies.

In their studies the researchers developed a simplified virtual environment using a 3D library in Java and rebuilt the same environment using NCL with X3D/VRML (see figure). This activity was important for analyzing the difficulties of developing 3D applications in the two environments (procedural and declarative). They used the same application so that the development process was evaluated against the same requirements. In terms of practicality (the use of techniques for texturing, collision and event handling), one environment provides benefits over the other. In terms of architecture, the application developed in the procedural environment runs on a native implementation of the OpenGL ES API, accessing the features of this native API directly. The application developed with X3D runs in a conventional web browser that incorporates a plug-in for rendering the X3D scene; this plug-in is implemented on a standard 3D API such as OpenGL, Direct3D, SDL and so on. The use of browsers avoids the need to implement importers for 3D meshes; however, when using a low-level API like OpenGL ES, developers need to incorporate those importers into their applications, either by implementing them or by using another library.

The same environment developed using OpenGL ES (left) and X3D (right)


Improved Depth Field Estimation for Autostereoscopic 3DTV based on Graph-Cuts
Kosmas Dimitropoulos, Theodoros Semertzidis, and Nikos Grammalidis, CERTH–ITI, Thessaloniki, Greece

Three-dimensional TV is now closer than ever to becoming a reality for consumers, providing a complete life-like image experience. Recent advances in autostereoscopic displays have resulted in improved 3D viewing experience, wider viewing angles, no need for special glasses and support for multiple viewers. However, due to their content formatting requirements (2D+depth), live action content is much more difficult to create. In this paper a new content generation approach for autostereoscopic 3DTV is proposed, integrating a state-of-the-art MRF-based (Markov Random Field) depth estimation method with additional pre- and post-processing steps, such as rectification, color segmentation, and decomposition of the scene into foreground and background depth maps. Experimental results with different stereo sequences show the great potential of the proposed method.

The researchers conducted a number of experiments with real stereo video sequences in order to compare the proposed method with the initial algorithm in terms of silhouette accuracy and background stability from frame to frame. Background problems are clear in Figures 1(c) and (d), where the initial algorithm assigns different depth values to pixels representing the same background objects in two different frames of the sequence. Figures 1(e)-(f) show that the new approach clearly addresses this problem and creates better silhouettes for the foreground objects, even in the case of the ball, where motion blurring makes its accurate detection more challenging. Experiments were also conducted using a uniform green background, which, along with blue, is considered the color furthest from skin tones. Figures 2(a)-(b) present the left and right frames of a sequence with a green background. Figures 2(c)-(e)-(g) show depth maps from frames of the same sequence using the move-expansion algorithm, while Figures 2(d)-(f)-(h) present the corresponding depth maps from the proposed method. As is clear from Figure 2, the proposed method creates more accurate silhouettes for the foreground objects (Figures 2(c)-(d)), while the background problems observable in Figures 2(e) and (g) have been clearly addressed. Another significant benefit is that stationary objects in the scene (e.g. the chair arms as well as the man's body and head) keep the same depth values in all frames of the sequence.

Figure 1: (a)-(b) Two frames of the same sequence. (c)-(d) The corresponding depth maps using graph cuts. (e)-(f) The result of the proposed approach using low smoothness parameters for the background. Figure 2: (a)-(b) The left and right frame of a sequence. (c)-(e)-(g) Depth maps from different frames of the same sequence using the move-expansion algorithm directly. (d)-(f)-(h) Depth maps for the corresponding frames using the proposed method.

3DTV-CON 2010, June 7-9, 2010, Tampere, Finland

In the first of three reports from this IEEE conference, Phillip Hill covers papers from University of Tubingen, Fraunhofer Institute for Telecommunications/Heinrich-Hertz Institut (x2), Tampere University of Technology (x2), Bremer Institut fur Angewandte Strahltechnik GmbH, Delft University of Technology/Middle East Technical University, Tsinghua University, Nagoya Institute of Technology/Tokyo Institute of Technology/Nagoya University, Tianjin University/Chaozhou Chuangjia Electronic Co./Tianjin Construction Machinery Co., University of Munster, Tsinghua University, Tsinghua University/Graduate School at Shenzhen, Tampere University of Technology/FogScreen Inc., De Montfort University/Fraunhofer-Heinrich Hertz Institute, Samsung India Software Operations, Electronics and Telecommunications Research Institute/Sungkyunkwan University, University of Kingston, Electronics and Telecommunications Research Institute, Warsaw University of Technology, and Berlin University of Technology

Real-time Depth Estimation for Immersive 3D Videoconferencing
I. Feldmann, W. Waizenegger, N. Atzpadin, and O. Schreer, Fraunhofer Institute for Telecommunications/Heinrich-Hertz Institut, Berlin, Germany

The interest in immersive 3D videoconference systems has existed for many years, from the commercialization point of view as well as from a research perspective. One of the major bottlenecks in this context is the computational complexity of the required algorithmic modules. This paper discusses this problem from a hardware point of view. The researchers use new fast graphics boards, which allow high algorithmic parallelization in consumer PC environments, on the one hand, and state-of-the-art multi-core CPU processing capabilities on the other, and propose a novel scalable, high-performance 3D acquisition framework for immersive 3D videoconference systems that benefits from both. In this way they are able to integrate complex computer vision algorithms, such as visual hull, multi-view stereo matching, segmentation, image rectification, lens distortion correction and virtual view synthesis, as well as data encoding, network signaling and capturing for 16 HD cameras, in one real-time framework. This paper is based on results and experiences from the European FP7 research project 3DPresence, which aims to build a real-time three-party, multi-user 3D videoconferencing system.

3DPresence demonstrator system, 3DMedia workshop in Berlin, Germany, October 2009


SIFT vs. SOFT - A Comparison of Feature and Correlation Based Rotation Estimation
Timo Schairer, Sebastian Herholz, Benjamin Huhle, and Wolfgang Straßer, University of Tubingen, Tubingen, Germany

Orientation estimation based on image data is a key technique in many applications. Robust estimates are possible in the case of omni-directional images due to the large field of view of the camera. Traditionally, techniques based on local image features have been applied to this kind of problem. Another very efficient technique is to formulate the problem in terms of correlation on the sphere and to solve it in Fourier space. While both methods claim to provide accurate and robust estimates, a quantitative comparison had not been reported. In this paper the researchers evaluate the two approaches in terms of accuracy, image resolution and robustness to noise by comparing the estimated rotations of virtual as well as real images to ground-truth data. As can be seen in the figure, rotating an omni-directional camera not only causes variations in the position, scale and orientation of features, but also locally leads to affine transformations that are not handled explicitly by the SIFT feature detector.
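In outline (a standard formulation of the correlation approach, stated in generic notation rather than the paper's own), the rotation estimate maximizes the overlap of the two spherical images f and g over all rotations:

    \hat{R} = \arg\max_{R \in SO(3)} \int_{S^2} f(\omega)\, g(R^{-1}\omega)\, d\omega

This objective can be evaluated for all rotations at once in the spectral domain using spherical-harmonic and SO(3) Fourier transforms (the "SOFT" route), rather than by matching discrete features.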

Using virtual as well as real scenes, the performance of a feature-based approach versus a correlation-based technique was evaluated according to accuracy, image resolution and robustness to noise. The experiments showed that, for a given anticipated accuracy, both methods are comparable in terms of runtime. The SIFT-based rotation estimation leads to superior estimates on larger input images containing little noise, while its accuracy turned out to be inversely proportional to the magnitude of the rotation. By contrast, the SOFT-based approach is very robust to noise, performs well even on very small images, and is not affected by the amount of rotation. Further experiments could evaluate the sensitivity to varying lighting conditions and changes in the scene.

Examples of the effect of rotating an omni-directional camera in a real and a virtual scene: default orientation (left) and rotated around the horizontal and vertical axes (right). The black areas in the lower images are due to the limited field of view of the camera.

3D Wave Field Phase Retrieval from Multi-plane Observations
Artem Migukin, Vladimir Katkovnik, and Jaakko Astola, Tampere University of Technology (TUT), Tampere, Finland

The researchers reconstruct a spatially distributed 3D wave field from a number of intensity observations obtained in different sensor planes parallel to the object plane. The proposed algorithm can be treated as a multiple-plane iterative Gerchberg-Saxton algorithm. It is obtained from the best linear estimate of the complex-valued object distribution derived for complex-valued observations; this estimator is then modified for the intensity measurements in the sensor planes. The algorithm is studied by numerical experiments performed for amplitude and phase object distributions. It is shown that the proposed method allows reconstruction of whole 3D wave fields for different setup parameters, a technique that can be applied to 3D imaging. A comparison against the successive iterative method shows an accuracy advantage of the proposed algorithm, provided that the type of modulation in the object plane is known.
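As a sketch of the classical iteration this generalizes (the multi-plane estimator is the paper's contribution; what follows is only the textbook Gerchberg-Saxton skeleton), each pass propagates the object estimate u_0 to sensor plane r, replaces its amplitude with the measured one, and propagates back:

    u_r = \mathcal{P}_r\{u_0\}, \qquad
    u_r \leftarrow \sqrt{I_r}\,\frac{u_r}{|u_r|}, \qquad
    u_0 \leftarrow \frac{1}{K}\sum_{r=1}^{K} \mathcal{P}_r^{-1}\{u_r\}

Here \mathcal{P}_r denotes free-space propagation to plane r and I_r is the intensity measured there.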

Experienced Audiovisual Quality for Mobile 3D Television
Timo Utriainen, and Satu Jumisko-Pyykkö, Tampere University of Technology, Tampere, Finland

For mobile 3D television, substantial optimization of system resources is needed to provide adequate quality to viewers while sparing valuable and limited system resources. In the end, the experienced quality of 3D needs to outperform the quality of existing services. The goal of this paper is to explore the influence of audiovisual encoding parameters on viewers' experienced quality on mobile 3D television. The researchers conducted two extensive subjective quality evaluation experiments in which presentation modes (2D/3D), frame rates, and video and audio bit rates were varied across several content types. The experiments were carried out on a portable autostereoscopic device using parallax-barrier display technology and simulcast stereo video encoding, with relatively low total bit rates relevant for broadcasting to mobile devices. The results showed the superiority of the 2D presentation mode, the importance of visual over audio quality, and that a significant increase in bit-rate/frame-rate resources did not improve the visual quality of 3D. Further work needs to address several display techniques as part of quality evaluation studies to provide reliable comparisons of critical system factors.

Advanced Digital Lensless Fourier Holography by Means of a Spatial Light Modulator
Thomas Meeser, Christoph von Kopylow, and Claas Falldorf, Bremer Institut fur Angewandte Strahltechnik GmbH, Bremen, Germany

The paper presents an optical setup that uses a spatial light modulator (SLM) to electronically control the modification of the reference wave in digital lensless Fourier holography. The SLM provides the advantage of avoiding any mechanical adjustments when adapting the reference wave to different object positions. A rule for the complex transmittance to be generated by the SLM is presented, and experimental results demonstrate the configuration's great potential for digital holography. The discrete Fourier transform (DFT) of a digital hologram captured in a lensless Fourier configuration reconstructs the wave field in the object plane and therefore makes the object appear in focus.
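The underlying textbook relation, stated here in generic notation: the recorded hologram is the interference of the object wave O and the reference wave R,

    I = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R

and in the lensless Fourier geometry a single DFT of I separates these terms, so the object field and its twin image appear at symmetric positions in the reconstruction plane.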

Modulus of the DFT of a hologram of two Lego bricks located 55mm away from the plane of the reference source point. The reference wave is modified by the complex transmittance of the SLM and the lensless Fourier scheme is restored; the aliased frequencies disappear, hence the support of the recorded signal's band is minimized.

Utilization of Spatial Information for Point Cloud Segmentation
Oytun Akman, and Pieter Jonker, Delft University of Technology, Delft, The Netherlands; Neslihan Bayramoglu, and A. Aydın Alatan, Middle East Technical University, Ankara, Turkey

Object segmentation plays an important role in computer vision for inferring semantic information. Many applications, such as 3DTV archive systems, 3D/2D model fitting, object recognition and shape retrieval, depend strongly on the performance of the segmentation process. In this paper the researchers present a new algorithm for object localization and segmentation based on spatial information obtained via a time-of-flight (TOF) camera. 3D points obtained via the TOF camera are projected onto the major plane representing the planar surface on which the objects are placed. Afterward, the most probable regions in which an item can be placed are extracted using a kernel density estimation method, and the 3D points are segmented into objects. Several well-known segmentation algorithms are also tested on the 3D (depth) images.
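A compact sketch of the localization step (illustrative only; the paper's exact kernel and grid parameters are not reproduced here), assuming the cloud has been rotated so the supporting plane is z = 0:

    type Vec3 = [number, number, number];
    // Gaussian kernel density over a w x h grid of cell size `cell`;
    // local maxima of the returned map are candidate object locations.
    function densityMap(points: Vec3[], w: number, h: number,
                        cell: number, sigma: number): Float32Array {
      const grid = new Float32Array(w * h);
      const rad = Math.ceil(3 * sigma / cell);
      for (const [x, y, z] of points) {
        if (z < 0.01) continue;                // skip points on the plane itself
        const cx = Math.round(x / cell), cy = Math.round(y / cell);
        for (let j = cy - rad; j <= cy + rad; j++) {
          for (let i = cx - rad; i <= cx + rad; i++) {
            if (i < 0 || j < 0 || i >= w || j >= h) continue;
            const dx = i * cell - x, dy = j * cell - y;
            grid[j * w + i] += Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
          }
        }
      }
      return grid;
    }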

Test images captured with the TOF camera, and point cloud representation of the test image

Overcoming View Switching Dynamic in Multi-view Video Streaming over P2P Network
Zhibo Chen, Lifeng Sun, and Shiqiang Yang, Tsinghua University, Beijing, China

Multi-view video streaming over P2P networks has emerged as a scalable solution for providing multi-view video services on the Internet, as it utilizes users' bandwidth to reduce the streaming server's bandwidth cost when delivering data-intensive multi-view video. However, the view-switching behavior in multi-view video introduces excessive dynamics into the system, which creates two performance issues for existing solutions: large view-switching delay and poor streaming quality for views with small audiences. To address these problems, the researchers propose a novel P2P streaming framework for multi-view video that organizes viewers of different views to cooperate in view switching and content delivery, achieving reduced view-switching delay. Further, they propose a heuristic method to balance resources across all views and improve the streaming quality for views with few viewers. The experiments show that the proposed method achieves lower view-switching delay and better streaming quality than existing solutions.

Real-time Free Viewpoint Image Rendering by Using Fast Multi-pass Dynamic Programming
Norishige Fukushima, and Yutaka Ishibashi, Nagoya Institute of Technology, Nagoya, Japan; Toshiaki Fujii, Tokyo Institute of Technology, Tokyo, Japan; Tomohiro Yendo, and Masayuki Tanimoto, Nagoya University, Nagoya, Japan

In this paper, the researchers introduce a free viewpoint image generation method with an optimization method for a view-dependent depth map. Image-based rendering (IBR) can render photo-realistic images from natural images, and ray space, or light field, is an IBR method for 3D representation. To generate a free viewpoint image from light field data captured by a camera array, a disparity map on the virtual view is required. Improving the quality of the generated image requires an accurate disparity map; however, the computational cost of optimized disparity estimation is usually huge. For real-time rendering, the researchers have used a disparity optimization method called multi-pass dynamic programming (MPDP), which applies dynamic programming to the disparity map multi-directionally. In this paper, they improve the MPDP method to speed up the optimization process and add an occlusion detection step. The experimental results show that fast MPDP can interpolate the virtual view with high quality: the PSNR of the synthesized image against the actual image is 29.2dB, while a synthesized image from belief propagation, one of the best optimization algorithms, reaches 29.4dB. In addition, MPDP runs in almost real time, at 51.5ms, whereas belief propagation takes as long as 397.7ms.
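For reference, the quality figures above use the standard peak signal-to-noise ratio between the synthesized view \hat{I} and the captured view I over N pixels (MAX = 255 for 8-bit images):

    \mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \qquad
    \mathrm{MSE} = \frac{1}{N} \sum_{p} \bigl( I(p) - \hat{I}(p) \bigr)^2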

27 Veritas et Visus 3rd Dimension October 2010

Analysis of Transmission Induced Distortion for Multiview Video
Yuan Zhou, ChunPing Hou, Han Cui, and Lei Yang, Tianjin University, Tianjin, China; Rongxue Zhang, Chaozhou Chuangjia Electronic Co., Chaozhou, China; Yufeng Xue, Tianjin Construction Machinery Co., Tianjin, China

In this paper, the transmission distortion introduced in a coded multi-view video stream is analyzed mathematically. The model takes into account the interdependent coding among views, an important component of multi-view video coding, and can be used for any multi-view video coding scheme that employs motion compensation and disparity compensation, transform coding and entropy coding, with different parameter estimates. The distortion estimation formula explicitly considers both the temporal and the inter-view dependencies of multi-view video, and is applicable to any motion-compensated and disparity-compensated concealment method at the decoder. Simulations validate the analysis: the results show that the proposed formula estimates the actual amount of distortion with high accuracy.

Depth Image Based Rendering: A Faithful Approach for the Disocclusion Problem
Michael Schmeing, and Xiaoyi Jiang, University of Munster, Munster, Germany

In this paper the researchers address the disocclusion problem that occurs in depth image based rendering (DIBR). With a computed background model, areas that were occluded by foreground objects can be filled with their true color values, in contrast to the approximate values used in previous approaches. The method avoids artifacts that occur with common approaches and can additionally reduce compression artifacts at object boundaries. DIBR takes the pixels of the original view and builds a simple 3D model of the scene by projecting them into the 3D world according to the depth values specified by the depth stream. This 3D model is then projected onto the image plane of a virtual camera, a process called 3D image warping. One problem that occurs when rendering the virtual views is the disocclusion problem. It stems from the fact that a single video stream does not contain the color values of all pixels of the 3D scene. Figure 1 shows the problem: when using DIBR for 3D video, the virtual views are usually shifted horizontally with respect to the original view. This allows the virtual view to "look behind" some foreground objects and see background that is occluded in the original view. The original video stream does not provide any information about the color values of those background areas, so gaps appear in the rendered view. Figure 2 shows two frames of the Dublin sequence. The first row shows the "holes" that the 3D image warping process produces. The second row shows the artifacts that image in-painting produces when filling those holes; in-painting can only infer color values from the surrounding pixels, which leads to the observed artifacts. In contrast, the new approach, shown in the third row, produces no artifacts.
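In its simplest rectified form (a generic statement of 3D image warping, not the paper's own notation), a pixel at column u with depth Z(u, v) lands in the virtual view at

    u' = u + \frac{f\, b}{Z(u, v)}

where f is the focal length and b the (signed) horizontal baseline to the virtual camera; target pixels that receive no source pixel under this mapping are exactly the disocclusion gaps.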

Figure 1: The disocclusion problem. There is no information about the region behind the foreground object, so a disocclusion gap occurs. Figure 2: (a) Two frames of the Dublin sequence with no disocclusion handling after DIBR. (b) Disocclusion handling with in-painting. Heavy artifacts are visible. (c) The new approach produces no artifacts. The background contains less noise due to the background modeling algorithm.


Multi-view Image Denoising Based on Graphical Model of Surface Patch
Zhou Xue, Jingyu Yang, Qionghai Dai, and Naiyao Zhang, Tsinghua University, Beijing, China

The paper targets denoising of multi-view images, exploiting both intra-view and inter-view redundancy under the guidance of 3D geometry constraints. A graphical model of surface patches from each view of the multi-view image sequence is proposed to model the redundancy more effectively and efficiently. Patches are clustered according to their pairwise similarity, measured by geodesic distance on the graph, and noise is attenuated via Wiener filtering on the sparse DCT representations of these patches. The graphical model, applied here to image denoising for the first time, outperforms state-of-the-art denoising methods on the multi-view image sequence because it fits both kinds of redundancy very well. Furthermore, the 3D model reconstructed from multi-view images denoised by this method is more accurate and complete than those reconstructed from images denoised by other methods.
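The Wiener step named above commonly takes the following transform-domain shrinkage form (a generic statement; the paper's exact estimator may differ). With DCT coefficients \theta of a stack of similar patches, a pilot estimate \hat{\theta}, and noise variance \sigma^2,

    \theta^{\mathrm{filt}} = \theta \cdot \frac{|\hat{\theta}|^2}{|\hat{\theta}|^2 + \sigma^2}

after which the inverse DCT and patch aggregation return the denoised views.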

3D models reconstructed from (a) the original multi-view images and from multi-view images denoised by (b) the new method and (c) VBM3D. Within each sub-figure, the left shows the captured images and the right the reconstructed models from the same viewpoints.

Alpha Model Based Mixed Pixel Processing for View Synthesis
Hariprasad Kannan, Kiran N Iyer, Kausik Maiti, Devendra Purbiya, Ajit Bopardikar, and Anshul Sharma, Samsung India Software Operations, Bangalore, India

The commercial success and acceptability of 3D technology will depend critically on the overall visual quality of the rendered images, so view synthesis techniques like depth image based rendering (DIBR) are crucial components of the 3D system chain. In this paper, the researchers propose a fast alpha-model based approach that enables high-quality DIBR. Given an image and depth map, a point-based representation of the scene is obtained, where each point is associated with color, transparency and depth. A low-complexity method estimates these attributes, making the system suitable for deployment in consumer electronics. The researchers show considerable gains in computational time compared to other methods of comparable quality.
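The alpha model in question builds on the classical mixed-pixel compositing relation (stated generically here): an observed boundary pixel C blends a foreground color F and a background color B,

    C = \alpha F + (1 - \alpha) B, \qquad 0 \le \alpha \le 1

so estimating (F, B, \alpha) together with per-layer depth lets the renderer warp the two layers separately and recomposite them in the virtual view without halo artifacts.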

Portion of the synthesized view (Teddy): (a) without mixed pixel processing, (b) with image matting, (c) with the alpha-model based method, (d) a zoomed region.


A Novel Method for Automatic 2D-to-3D Video Conversion
Youwei Yan, Feng Xu, and Xiaodong Liu, Tsinghua University, Beijing, China; Qionghai Dai, Graduate School at Shenzhen, Shenzhen, China

In this paper, the researchers propose an efficient scheme to automatically convert existing 2D videos to 3D. The proposed method extracts motion information from two consecutive frames to estimate a depth map for each of them. They first develop a region-based graph cut method to perform motion segmentation quickly and accurately, robust to large inter-frame motions. Then a depth-assigning step for the segments is conducted to obtain a smooth depth map for each frame. Experimental results on standard test sequences demonstrate that the scheme achieves accurate motion segmentation and, accordingly, smooth depth maps. The proposed method was tested on two video sequences, and segmentation results are compared with Wills' method in terms of both segmentation quality and time complexity. All experiments were performed via Matlab and C++ hybrid programming on a computer with an Intel Core2 Duo E7500 CPU at 2.93GHz and 4.00GB of memory. Figure 1 shows segmentation results obtained by the new method and Wills' method; as can be seen, the method improves both segmentation quality and efficiency. Finally, they assigned depth to all segments, producing the smooth depth maps illustrated in Figure 2.

Figure 1: The first row is the original first and second frame extracted from a sequence. The second row is obtained by the proposed method. The third row is obtained by Wills’ method. Figure 2: Depth map for the sequence

Multi-user Glasses-free 3D Display Using an Optical Array
Rajwinder Singh Brar, Phil Surman, and Ian Sexton, De Montfort University, Leicester, England; Klaus Hopf, Fraunhofer-Heinrich Hertz Institute, Berlin, Germany

This paper describes the design and construction of a novel stereoscopic display that does not require special eyewear (auto-stereoscopic) and employs head-position tracking to enable a large degree of freedom of viewer movement while displaying the minimum amount of information. A stereo image pair is produced on a single LCD by simultaneously displaying left and right images on alternate rows of pixels. Novel steering optics, controlled by the output of a head-position tracker, direct regions referred to as exit pupils to the appropriate viewers' eyes. The display, developed in the MUTED (Multi-user Television Display) project, locates the viewers' head positions and uses this information to direct exit pupils to the positions of the viewers' eyes. An eye located in the left exit pupil region (see figure a) will see the left image over the complete screen area; similarly, a right image is seen by an eye located in the upper right exit pupil region. The display optics can produce several exit pupil pairs that move independently of each other. The display consists of a direct-view LCD whose backlight is replaced by optics that form the exit pupils under the control of a head-position tracker. An image pair is produced on a single LCD screen either by spatial multiplexing (MUX), where left and right images are produced on alternate pixel rows, or by temporal multiplexing, where left and right images are produced sequentially. Figure (b) shows that a screen located behind the LCD enables light from two separate sources to be directed to the appropriate rows. This screen could consist of either a mask with horizontal apertures or a lenticular screen with horizontally aligned lenses. In order to capture the maximum amount of light, a lenticular screen, where light is not blocked, is used. The pitch of the screen is slightly less than twice the vertical pixel pitch in order to allow for parallax.

Exit pupil formation and spatial MUX; Multi-user head tracker images captured on a 6-camera array

Feasible Mid-air Virtual Reality with the Immaterial Projection Screen Technology
Ismo Rakkolainen, Tampere University of Technology, Tampere, Finland/FogScreen Inc., Helsinki, Finland

The immaterial projection screen is an emerging display technology that enables high-quality projected images in mid-air. It can be extended to virtual reality (VR) and augmented reality (AR) mid-air displays. In order to make such mid-air VR and AR displays feasible, unobtrusive and low-cost user tracking solutions are needed; this paper presents such feasible desktop VR and AR mid-air displays. The FogScreen creates a non-turbulent particle flow enclosed within a wider airflow. The flow remains thin and planar, which enables high-quality, walk-through projections in mid-air (see photo). As it uses atomized, extremely tiny water particles as its projection medium, it feels dry to the touch. The device is suspended from the ceiling or from a truss, and several devices can be linked seamlessly together. The FogScreen is used as a special effect in many world-class events, theme parks, science museums, TV studios and trade shows.

The researchers implemented low-cost computer vision-based tracking with some advanced features, including a near-IR webcam and attenuation of the projector's hotspot. Additionally they present improved image resolution methods and informal evaluations. They did not use stereoscopic 3D, as it requires glasses, which is not a realistic use scenario for many environments such as trade shows or museums, where people just want to stroll and look around.

The FogScreen as a mid-air virtual reality screen. It is partially translucent, and the projector hotspot is blocked with a black spot


Caption Insertion Method for 3D Broadcasting Service
Kwanghee Jung, and Namho Hur, Electronics and Telecommunications Research Institute, Daejeon, South Korea; Sunghyun Choi, Hyung-Seok Kim, and Joong Kyu Kim, Sungkyunkwan University, Suwon, South Korea

In this paper, the researchers present a new caption insertion method for 3D broadcasting services. Caption insertion is very important for broadcasting because captions emphasize significant information during a program and help people with hearing impairments. In recent years, several caption insertion methods have been proposed for 3D broadcasting, the so-called next-generation broadcasting service. However, recently proposed methods suffer from problems such as the visual discomfort caused by the disparity of a 3D caption. To solve this problem, the researchers propose a new caption insertion method using histogram-based K-means clustering built on depth image based rendering (DIBR). With the proposed method, the visual discomfort caused by the disparity of a 3D caption can be relieved. Furthermore, the clustering technique lets the caption disparity be selected to suit various user viewing environments.
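A sketch of the clustering at the heart of this (an illustrative 1-D k-means over the depth histogram; the paper's initialization and caption-placement rules are not reproduced): the resulting cluster centers become the candidate caption depths offered to the viewer:

    // hist[d] = number of pixels with depth value d (0..hist.length-1)
    function kmeans1d(hist: number[], k: number, iters = 20): number[] {
      // initialize centers evenly across the depth range
      let centers = Array.from({ length: k }, (_, i) => (i + 0.5) * hist.length / k);
      for (let it = 0; it < iters; it++) {
        const sum = new Array(k).fill(0), cnt = new Array(k).fill(0);
        for (let d = 0; d < hist.length; d++) {
          if (!hist[d]) continue;
          let best = 0;                        // nearest center claims the bin
          for (let c = 1; c < k; c++)
            if (Math.abs(d - centers[c]) < Math.abs(d - centers[best])) best = c;
          sum[best] += d * hist[d]; cnt[best] += hist[d];
        }
        centers = centers.map((c, i) => (cnt[i] ? sum[i] / cnt[i] : c));
      }
      return centers.sort((a, b) => a - b);
    }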

Result of caption inserted image: (a) reference and depth clustered image; (b) depth value 59; (c) depth value 96; (d) depth value 137; (e) depth value 220

Reduced-reference Quality Metric for 3D Depth Map Transmission
Chaminda T.E.R. Hewage, and Maria G. Martini, University of Kingston, Kingston upon Thames, England

Due to the technological advancement of 3D video and the availability of supporting services such as high-bandwidth communication links, the introduction of immersive video services to the mass market is imminent. However, in order to serve demanding customers better, transmission system parameters need to be changed on the fly. Measured 3D video quality at the receiver side can be used as feedback to fine-tune the system, but measuring 3D video quality with full-reference quality metrics is not feasible, since they require the original 3D video sequence at the receiver. This paper therefore proposes a reduced-reference quality metric for 3D depth map transmission using extracted edge information. The work is motivated by the fact that the edges and contours of the depth map can represent different depth levels and hence can be used in quality evaluation. Performance is evaluated across a range of packet loss rates (PLRs) and shows acceptable results compared to the counterpart full-reference quality metric.

Improvement of Segment-based Depth Estimation using a Novel Segment Extraction
Gi-Mun Um, Gun Bang, Won-Sik Cheong, Namho Hur, and Soo In Lee, Electronics and Telecommunications Research Institute, Daejeon, South Korea

The paper presents a novel segment extraction and segment-based depth estimation technique. The proposed segment extraction technique exploits depth and motion information of segments between frames as well as color information. The researchers first divide each frame of the reference view into foreground and background areas based on initial depth information obtained from a time-of-flight (TOF) camera. They then extract segments with color information by applying an image segmentation technique to each divided area. Moreover, they track the extracted segments between frames in order to maintain depth consistency, and they set the disparity search range for local segment-based stereo matching based on the initial depth from the TOF camera. Experimental results showed the superior performance of the proposed technique over conventional ones that do not use foreground and background separation or motion tracking of segments, especially in static background areas and in regions that have depth discontinuities but similar colors.

Comparison of segment extraction results by the proposed technique and conventional technique without using depth-based foreground and background information: (a) proposed technique, (b) conventional technique (enlarged red box indicates the difference between the two results)

3D Shape Measurement System Based on Structure Light and Polarization Analysis
Piotr Garbat, and Marek Sutkowski, Warsaw University of Technology, Warsaw, Poland

The problem of 3D shape acquisition has received increasing attention in recent years. Structured-light measurement systems based on digital light projection, supported by image processing, now allow rapid acquisition of data about real 3D objects. Obtaining the 3D shape of objects in optically scattering media, however, presents a challenging set of problems: many applications for 3D imaging through fog, dust, mist, rain and turbid water require improving the quality of data obtained in scattering media. The presented 3D shape measurement system is supported by polarization image analysis, and enhancement of image quality is realized using a detector unit with a special liquid crystal filter.

Experimental results: a) object in scattering media, b) edge detection without polarization analysis, c) segmentation after contrast enhancement with polarization analysis, and d) corrected edge detection

Calibration of a Synchronized Multi-camera Setup for 3D Video Conferencing
Wolfgang Waizenegger, and Ingo Feldmann, Fraunhofer Institute for Telecommunications/Heinrich-Hertz-Institute, Berlin, Germany

In this paper the researchers present an efficient camera calibration workflow. The target application is the calibration of a 3D videoconferencing setup, but it is also well suited to almost any small-scale environment and any number of cameras. The main contribution of this work is a novel integrated high-precision pattern-based calibration workflow that accounts for channel-wise lens undistortion, chromatic aberration correction and linear camera parameter estimation simultaneously. Moreover, the workflow incorporates convenient automatic spatial camera registration for cases in which the calibration pattern is not concurrently visible to all cameras.


Rapid Radiometric Enhancement of Colored 3D Point Clouds using Color Balancing
Ulas Yilmaz, and Olaf Hellwich, Berlin University of Technology, Berlin, Germany

A rapid radiometric enhancement framework to improve the quality of point clouds obtained by 3D scanning devices is explained. Given a collection of colored point clouds of the same static scene, a linear transformation function is computed for each point cloud using the variances and means of the overlapping parts, such that after applying these transformations each point cloud becomes radiometrically similar to its neighboring point clouds. The proposed methods are applicable to structured as well as unstructured point clouds.

Devices such as laser and multi-focal systems acquire high-resolution 3D surface geometry of objects in a relatively less controlled environment. In these systems, point clouds acquired from different views of the scene are merged into one single point cloud, followed by surface reconstruction for further visualization and transmission. Although these systems are also capable of acquiring surface reflectance properties, surface geometry remains the primary concern; color information is mostly taken into account during texturing, the last step of the processing pipeline. However, due to various factors, such as environmental conditions, camera pose and scene properties, acquired point clouds have radiometric differences, as shown in the figure. The first two illustrations are the individual point clouds, whereas the third is the superimposition of one over the other; radiometric differences are visible especially at the overlapping parts. Minimizing such effects before merging point clouds improves the quality of reconstruction and avoids unexpected contrast and color variations on the resulting surface.
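A minimal sketch of the per-channel linear map described above (assuming the overlap-region statistics have already been gathered; the names are illustrative, not the authors' code):

    interface Stats { mean: number; std: number; }
    // Remaps RGB colors (RGBRGB... per point) so the source cloud's overlap
    // statistics match the reference cloud's, channel by channel.
    function balanceColors(colors: Uint8Array,
                           src: Stats[], ref: Stats[]): void {
      for (let i = 0; i < colors.length; i++) {
        const c = i % 3;                           // 0=R, 1=G, 2=B
        const gain = ref[c].std / Math.max(src[c].std, 1e-6);
        const v = (colors[i] - src[c].mean) * gain + ref[c].mean;
        colors[i] = Math.max(0, Math.min(255, Math.round(v)));
      }
    }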

Sample point clouds before color balancing: (a) and (b) show individual point clouds; (c) shows the superimposition of (b) over (a).


NAB 2010, April 10-15, 2010, Las Vegas, Nevada

Michael Starks provides detailed commentary on the exhibitions and happenings at NAB 2010, along with his insights about the burgeoning market for 3D devices. This is the third of three sections of his exhaustive coverage.

by Michael Starks

After graduate work in cell physiology at UC Berkeley, Michael Starks began studying stereoscopy in 1973 and co-founded StereoGraphics Corp (now Real D) in 1979. He was involved in all aspects of R&D, including prototype 3D videogames for the Atari and Amiga and the first versions of what evolved into CrystalEyes LCD shutter glasses, the standard for professional stereo, and is co-patentee on their first 3DTV system. In 1985 he was responsible for starting a project at UME Corp which eventually resulted in the Mattel PowerGlove, the first consumer VR system. In 1989 he started 3DTV Corp. In 1990 he began work on "Solidizing", a real-time process for converting 2D video into 3D. In 1992 3DTV created the first full-color stereoscopic CD-ROM ("3D Magic"), including games for the PC with shutter glasses. In 2007, companies to whom 3DTV supplied technology and consulting produced theatrical 3D shutter-glasses viewing systems, which were introduced worldwide in 2008. Starks has been a member of SMPTE, SID, SPIE and IEEE and has published in Proc. SPIE, Stereoscopy, American Cinematographer and Archives of Biochemistry and Biophysics. The SPIE symposia on 3D Imaging seem to have originated from his suggestion to John Merritt at a San Diego SPIE meeting some 20 years ago. Michael more or less retired in 1998 and lives in China, where he raises goldfish and is researching a book on the philosophy of Wittgenstein. http://www.3dtv.jp

Dolby bought the world cinema rights to Infitec several years ago, but for this trade show and other non-cinematic apps Infitec has the rights, and supplied flat-lens glasses much less classy than the curved-lens ones Dolby uses in theaters. Dolby is the only one using the active tech, since it was developed and patented by them (i.e., a color filter wheel inside a single projector – for flash movies of how this works see http://www.jdsu.tv/view1_3d/jdsu3dtechnology.html, and for a little more info and a dynamite Angelina Jolie car chase sequence see http://www.dolby.com/professional/solutions/cinema/3ddc-solution-flash.html). Dolby, however, has almost nothing about 3D on their page http://www.dolby.com/professional/technology/cinema/dolby-3ddigital.html. Infitec and everyone else uses twin projectors – that is, a passive system without the rotating filter wheel. In 2009 Infitec (a German company) started a new company with fancy DualColor glasses and a new page http://www.infitec-global-sales.com/german/infitec-anwender.html (i.e., not in English yet). In any case the original team from Daimler-Chrysler (which I assume owns the patents) continues to work to improve the product for better color, decreased color flicker and increased brightness (US 2010/0066813), as does Dolby (US 2010/0060857, US 2010/0073769, US 2010/0013911) and others (US 2010/0039352, US 2009/0257120). For a list of Dolby 3D theaters see http://www.dolby.com/consumer/product/movies/theater/find-a-cinema.html.

In addition to its well-known cinema audio systems, Dolby also has a line of Pro Cinema authoring tools and playback systems for 2D and 3D cinemas, such as the SCC2000 Secure Content Creator for authoring DCPs (Digital Cinema Packages), the Dolby Show Library (DSL100) for storing and managing digital films in multiplexes, and the Dolby Screen Server (DSS200) for playback.

One of their competitors in providing DCI (Digital Cinema Initiative – i.e., a Hollywood monopoly created to protect their films and guarantee quality) compliant film delivery and playback in theaters is Doremi Labs (www.doremilabs.com), a pioneer in cinema-quality JPEG2000 playback with over 6,000 installs worldwide, who were showing their latest DCP-2K4 2K/4K Digital Cinema Servers, DoremiAM and CineAsset authoring software, and broadcast hardware. They have the Dimension 3D for converting dual-link HD-SDI into the common 3D playback formats, the Nugget to stream 3D in SBS or OU formats, and the DSV-J2, which takes edited 2D or 3D from CineAsset in MXF-wrapped JPEG2000 format via Ethernet or USB and outputs it as dual HD-SDI to the Dimension 3D, or even as dual 4K streams to media blocks (i.e., cinema servers) which deliver it to dual 4K projectors such as the Sony SRXs or (for small theaters) the JVC DLA-RS4000s or the newer DLA-SH4KNLG.

The Dolby color filter wheel (above) and the glasses used for the Dolby Digital Cinema system are made by electro-optics giant JDSU.

A small Texas company has also begun using laser projection with Infitec glasses for 3D shows: http://www.prismaticmagic.com/index.php. And Ed Sandberg and longtime 3D enthusiast Bradley Nelson have recently figured out how to get polarized or triple-notch-filter anaglyphs (the type now common in Dolby 3D cinemas) from a pair of lasers (US 2009/0257120).

Correct 3D compositing of this 2D camera in vizRT's booth was achieved by feeding its input into two VizEngines, where it was keyed onto the graphics as an input cutout. The white balls are the Thoma WalkFinder (http://www.thoma.de/en/index.html) and were also shown in the Thoma booth. They have an emitter inside that flashes periodically, enabling IR cameras on the ceiling to provide x, y, z tracking info to the VizEngine virtual camera. Pan, tilt, zoom and focus (now often called lens mapping) are provided using traditional encoders mounted in the camera head and lens. Many other such systems exist, such as Intersense, which was shown by Lightcraft Technologies in their PreVizion system http://www.lightcrafttech.com/. The capture of all the camera metadata is becoming routine in 2D and 3D production, and recent lenses have sensors built in. Even cranes and dollies are sometimes encoded – e.g., with Encodacam http://www.encodacam.com/

Lightcraft Technologies' advanced camera tracking system (to the left) does much more than just tracking, including previewing camera moves with motion scaling by putting sensors on a monitor and moving around it. See an amazing sample of this at http://www.lightcrafttech.com/previzion/features/motion-scaling/


Jason Goodman of 21st Century Media USA explains the operation of his beam-splitter rig, fitted with dual REDs, to Arthur Berman, display expert and writer of many of Insight Media's exhaustively detailed reports on current display technology. Jason was solely responsible for hardware and stereography on one of the first totally live-action 3D features in recent years, Call of the Wild 3D. A fan on Fandango says it all: "Beautiful 3D throughout the whole film, not just here and there like so many other movies. You really feel like you are out in the wild for the whole movie." It has had a limited release, but I bet we can get it on Blu-ray soon, and it should do well as it's one of the few 3D live-action family films.

Gregg Wallace and Griff Partington of Netblender http://www.netblender.com were in the 3D Pavilion with their state-of-the-art 3D Blu-ray authoring software. As they say: "NetBlender's new 3D capabilities will be available options with the new DoStudio EX Edition, which will ship in Q2 2010 at an expected starting price of $4,995.00. DoStudio EX also includes workgroup productivity features and offers customers the option to include third party interactive apps and bootstrapping for BD Live." They also have BD Touch for interaction with Blu-ray via iPhone and Android phones.

Chris Chinnock and Dian Mecca of Insight Media, which produces a steady stream of exhaustive reports on display technology. Chris also created the 3D@home consortium which produces conferences and reports on the 3D industry and which had its own booth nearby.

Adam Little of Canada-based IO Industries (www.ioindustries.com), purveyors of high-performance DVRs, introduced its latest product, the DVR Express Core 3G-SDI, with dual solid-state drives (the little black box by his hand) for simultaneous twin 1080p/60Hz recording in the field. Nearly everyone at the show was touting current or coming compatibility with 3G-SDI, a single 2.970Gbit/s serial link standardized in SMPTE 424M that is replacing dual-link HD-SDI (http://en.wikipedia.org/wiki/Serial_digital_interface).

Canada-based Ross Video, http://www.rossvideo.com, has its real-time broadcast hardware in over 100 countries and is ready for 3D with its Xpression 3D-capable character and graphics generator, which did live 3D graphics on the JVC 3D panel in their booth. Their huge booth debuted the Vision Octane production switcher, whose 8 MLEs (Multi-Layered Effects) can handle multiple 3D streams. They pioneered the most flexible and advanced terminal equipment solution ever developed: the 2RU (2 rack unit) openGear modular frames.


Jens Wolf of German media company Wige (http://www.wige.de) with two of the tiny but superb Cunima MCU cameras (http://www.cunima.tv) on a Stereotec rig. Last year I had them in my NAB booth as prototypes, but now they are in wide use. Cunima's featherweight 33.5mm x 38mm x 111.5mm, 182g, full HD/SD multi-format cameras use only 3 watts: http://www.cunima.tv/sites/WIGE_factSheet_CUNIMAMCU[1].pdf

Clay Platner of Technica operating one of their beam-splitter rigs with a remote. They seem to be the dominant 3D camera rig company to date, with over 50 delivered worldwide (April 2010), but I don't know the stats for Pace, 3ality and PS Technik.

My friend, the hyperactive California-based multimedia whiz Bruce Austin, in the Technica booth (http://www.technica3d.com) showing the agility of one of their smaller beam-splitters, which he has recently used on several major projects (e.g., a week in the Amazon). Several of the rigs now flip down to side-by-side format in seconds.

The electronics arm of Japanese telecom giant NTT was showing the capabilities of its real-time codec hardware, the MPC1010-3D, for perfect GOP (Group of Pictures, an MPEG-2 coding term) and PTS (Presentation Time Stamp) sync of two full HD channels, or of one 4K x 2K stream by using four synced channels.

Also going 3D, Pennsylvania-based NEP, a 25-year industry veteran, showed its readiness with the SS3D, a van equipped with Pace Fusion 3D rigs. All Mobile Video (www.allmobilevideo.com) will soon have ready its Epic 3D 3G van with six 3ality Digital rigs (with capability up to nine) featuring Sony 1500R cams with T blocks for 3D. Onsite sports 3D has been done at least a dozen times to date, including the NBA All Stars game in 2007 (to 14K people in one arena) and 2009, but this year we are talking about worldwide broadcasts to hundreds of venues over satellite and cable.


Mobile TV Group, http://www.mobiletvgroup.com, is one of the largest onsite video producers, with a nationwide fleet of vans (currently 24) and over 4000 events/year. They had a live 3D feed from a 3D rig outside the 26HDX truck (a 53-foot HD Expando), with a full 3D record/edit/broadcast suite inside with stereo displays, including the polarized panel top center of the photo above. The live feed here is from a pair of Ikegami HDL40s with custom Canon primes, through a GVG (i.e., Grass Valley Group) Dyno 3D replay system, Chyron HyperX3 switcher, Davio SIP (stereo image processor made by CineTal), and GVG Kayenne switcher to the 46-inch CP JVC 3D monitor. One point of this setup was the ability to simultaneously originate 2D and 3D broadcasts.

Industry veteran Harmonic Inc www.harmonicinc.com of California demos the 3D capabilities of its wide range of hardware and software for IP-based encoding, processing and delivery via every avenue to every type of user. Featured at the show was the frame-based full 3D HD Electra 8000 – the world's first 1-RU (one rack unit, or 1.75 inches high) encoder with multi-resolution, multi-standard, multi-service and multi-channel capabilities. The image on the Panasonic Viera frame-sequential PDP was excellent. DirecTV will use Harmonic encoders to launch three 3D channels in June and will also carry ESPN 3D and eventually others. Customers will receive an automatic software upgrade for 3D but of course will have to buy new sets.

Jihoon Jo of Korean polarized 3D display manufacturer Zalman, http://www.zalman.com/ENG/3D/work01.asp, shows a prototype 3D laptop. They have been the only company other than Japan's Arisawa that makes its own polarizers for its panels (and by a less costly means), so theirs sell for about one-fifth the price of those by Miracube (though there is much work on this now by the giants – e.g., WO2010/044414). These have so far been smaller sizes sold to gamers, scientists and 3D video professionals, but they now have several new FHD (Full High Definition) panels up to 32 inches and may introduce the 32-inch as a TV set this year. With the 3D market booming, many manufacturers are making their own polarized panels now, but Zalman has experience and a good product at a remarkable price, so I have decided to distribute them and am showing them at InfoComm 2010.

Dr. Inge Hillestad of Norwegian pro video transport company T-Vips, http://www.t-vips.com, demonstrates their JPEG2000-over-IP codec on a Hyundai CP panel. Station-to-station links, DTT, live broadcast, and more – they let you deliver high-quality content at low cost. A pioneer in JPEG2000-over-IP origination and transport of video worldwide, they used NAB to launch their 3D-ready JPEG2000, ATSC, switching and transport solutions.


In the booth of the ATSC (the 195-member international consortium Advanced Television Systems Committee), Korean electronics giant LG showed an ATSC 2.0 NRT (Non-Real-Time – i.e., stored on USB etc.) VOD (Video On Demand) 2D/3D-compatible monitor – the LG XCanvas. This system has been tried with terrestrial Korean station SBS and will be publicly tested soon. The video is downloaded in spare bandwidth while the viewer watches other programs. This is a facet of the Korean OHTV (Open Hybrid TV) initiative, which seeks to set standards for next-generation interactive broadband and RF delivery, including VOD, EPG, NRT, CE-HTML and DAE, with enriched services such as the ability to select clips from programs and give feedback. For recent related 3D codec work by Korean research group ETRI see WO 2010/053246.

Craig Lutzer of Korean 3D CP and 3D barrier panel maker Miracube, http://www.miracube.net, shown with two of their products. Miracube (a spinoff of Pavonine) has been making these for about 8 years and I have seen them many times. Good image quality on both CP and autostereoscopic monitors. Their smaller ones are being used as on-set 3D production monitors. Supported inputs: side-by-side, top-and-bottom, interlaced and frame-sequential. A 2D/3D button lets you use the monitor in 2D mode. Crosstalk is minimal thanks to wire-grid polarization (WGP), with a 178-degree viewing angle and resolution up to 1920x1200. They also have a beam-splitter rig of their own.

Wisconsin-based Weather Central, http://www.wxc.com, introduced its 3D-ready live weather graphics program, 3D:Live Fusion, here shown on a Sony shutter-glasses monitor. Live Fusion is the world's most-viewed on-air broadcast weather platform, so we have all seen it countless times; various versions are also online, in print, and in PDAs, mobile phones, and cars.

As reported last year, Avid was one of the first to incorporate stereo capabilities into its world-famous editing tools, and they showed the current capabilities on two CP monitors from JVC. However, the RED camera was not being used to shoot live 3D.


Quantel's expensive (in the $500K range, depending on the box and options) video editing hardware has been an industry standard for about 20 years. The Pablo Neo color corrector showed its updated stereo tools by editing RED 4K displayed on CP monitors. It was extensively used on Avatar, where its Resolution Coexistence feature enabled the 2K film to be easily prepared for release in other formats such as 4K for IMAX. Most of this work was done by Modern VideoFilm for conforming, Stereo3D checking, adjustment of all parameters, QC and 3D subtitling of the Na'vi language. Find the brochure here: http://quantel.com/repository/files/brochures_Pablo_nab08.pdf. A used one is on the net as I write for a mere $275K. However, you might be able to find the SID Stereo 3D workstation or the iQ for less: http://www.quantel.com/page.php?u=179a76e77017de7ea9d5a630e40f6523 and http://www.quantel.com/page.php?u=7bb3fa7666b4fea8207089f7b70b0ebd. For a nice 29-page whitepaper on 3D see http://quantel.com/repository/files/whitepapers_s3d_aug09.pdf. They also showed the latest incarnation of another industry standard – the Enterprise sQ Server – doing Stereo 3D workflow. Here is the URL for it: http://www.quantel.com/page.php?u=81d679affb82e612a56008d06192a0af

There were other high-end S3D (now the common abbreviation for Stereo 3D) editing box options at the show, such as SGO's Mistika, http://www.sgo.es, which may cost half as much as the roughly equivalent Quantel box and now also does 4K and 3D: http://www.sgo.es/products/sgo-mistika-4k-2k-hd-sd/. Another Spanish company, camera rig maker S3D, http://www.s3dtechnologies.com/, had its beam-splitter in the SGO booth. Get a four-page brochure here: http://www.s3dtechnologies.com/docs/rigs.pdf. S3D also has a stereo calculator, CGI plug-ins for Maya and Max, and a 2D-to-3D video converter.

Screenshot of the S3D calculator; The S3D beam-splitter rig in graphic form with cameras shown as transparencies.

Motorola has been working on 3D-capable STBs (set-top boxes) for years and showed their latest one at NAB. You can find a very simple but clear guide to 3DTV and other info at http://business.motorola.com/3dtv/index.html. They feature floating 3D menus with automatic detection of 3D content and 3D format, and seamless switching between 2D and 3D channels. They support 3D TV over both MPEG-4 and MPEG-2 and are capable of 1080p 24/30 output. Upon detection of 3D, the box automatically reformats on-screen text and graphics to match the format. It supports all on-screen displays such as closed captioning, emergency alerts, application graphics and text overlays, electronic program guides and other apps. Notice the top/bottom format on the 2D monitor on the right. Arrangements like this – called over/under, side-by-side, and subfield – have been used for decades, and were almost always half vertical resolution in each eye. A top/bottom format is now officially defined as two full (no missing pixels) HD frames, one for each eye, and is termed frame packing. It is essentially the format used in Neotek's http://www.neotek.com TriD 3D video system for the last 6 years. It is one of those (including side-by-side, interleaved and nearly every 3D format you can think of, with space left for ones you can't) mandated in the new HDMI 1.4a 3D specifications. HDMI 1.4 permits handshaking between the signal origination point (e.g., DVD, STB, DVB, cable) and the TV set, so the source sends only 2D if the set is not 3D-capable, and the correct 3D format if it is… http://www.hdmi.org/manufacturer/specification.aspx
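To visualize frame packing, the sketch below stacks two full-HD eye images with the 45 lines of active space that the HDMI 1.4a structure places between them for 1080p – a minimal geometric illustration (single-channel for brevity), not driver code.

```python
import numpy as np

# HDMI 1.4a frame packing for 1080p: left frame, 45 lines of active space,
# then the right frame -> one 1920x2205 picture per video frame.
WIDTH, HEIGHT, ACTIVE_SPACE = 1920, 1080, 45

def pack_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Stack full-resolution L/R frames into one frame-packed buffer."""
    assert left.shape == right.shape == (HEIGHT, WIDTH)
    gap = np.zeros((ACTIVE_SPACE, WIDTH), dtype=left.dtype)  # blanked lines
    return np.vstack([left, gap, right])

packed = pack_frames(np.ones((HEIGHT, WIDTH), np.uint8),
                     np.zeros((HEIGHT, WIDTH), np.uint8))
print(packed.shape)  # (2205, 1920): both eyes at full HD, unlike top/bottom
```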

Korean pro monitor maker TVLogic showed the world's first 3D OLED monitor viewed with shutter glasses – the TDM-150W, with 1366x768 resolution. The image was excellent, though perhaps not as bright as one would like. This new technology is coming on fast, and we should see both polarized and shutter OLEDs becoming common.

Bart Stassen of leading Canadian hardware, production and distribution company International Datacasting www.datacast.com showing live 3D going through the up/downlink chain via their Superflex Pro Video coders/decoders, http://www.datacast.com/Media/Content/files/DataSheets/superflexSheetBrochure.pdf, and the Sensio box, whose side-by-side compression tech they use. Get the latest brochure on their Superflex Pro Cinema 3D Live Decoder here: http://www.datacast.com/Media/Content/files/DataSheets/IDC_ProCinema.pdf.

Sensio hardware was also in many other booths (e.g., in the Miranda Densité 3DX-3901 stereoscopic 3D video processor and in the Grass Valley booth), hidden inside others, and in a private suite. These coups for a tiny company were made possible by a decade of R&D that produced the needed multi-format 3D codec hardware, and they richly deserve their recent success. Those interested in their tech may see US 2010/0111195, US 7,580,463 and US 7,693,221.

Though famed broadcast video hardware company Grass Valley had an anaglyph demonstration, they also showed the 3D readiness of their products with polarized stereo displays driven by the ViBE family of contribution encoders and Elite transmitters, able to encode and transmit 3D from remote locations with user-variable codecs and compression ratios. They are owned by media giant Technicolor (the French-based company formerly known as Thomson), were previously owned by Tektronix, and are now up for sale. The company originated in Grass Valley, California in 1958, is famed for a long line of leading-edge products, and has won 22 Emmys for its video products.

Though they did not have a booth, giant (20K employees worldwide) film and media company Technicolor is deep into the 3D biz, with everything from their new over/under 3D cinema lens – i.e., for film-based 3D in non-digital theaters (150 installs as of April 2010) – to 3D Blu-ray disc authoring: http://www.technicolor.com/en/hi/about-technicolor/technicolor-news/all-news-articles/2010/technicolor-brings-3d-to-the-home-and-beyond. Since there are over 100K 35mm (i.e., non-digital) cinemas, this lens may greatly speed up 3D cinema installs and hence the growth of the whole industry. The major reasons are cost and convenience – approximately $12K, since it uses the classic over/under projection format (classic, i.e., since Warhol's Frankenstein etc. in the 70s) – and a similar lens is being promoted by at least one other entity. This format has its problems, such as vignetting and easy production of pseudoscopic images via the projector framing knob or incorrect splicing, so Oculus 3D www.oculus3d.com has recreated another classic format with side-by-side twisted images, but I doubt they can compete with the giant Technicolor.

You can get all the latest on Grass Valley’s 3D with whitepapers, 3D posters, brochures and a lovely downloadable anaglyph animation on themes (real products) in the above poster at http://www.theycamefromgrassvalley.com/. Be sure to click the 3D button at top right of the pages and have your red/cyan glasses ready.

The famous filter company Tiffen, http://www.tiffen.com, acquired the revolutionary Steadicam years ago and continues to put out new models such as the Smoothie being demonstrated above – with a PS Technik Freestyle beamsplitter rig that is specifically designed for this use http://www.pstechnik.de/en/3d.php. It stays balanced when interaxial is changed, holds cams up to 14kg/pair, can be used upside down, and works with other balancing systems such as Artemis http://www.artemis-hd.com/index.php?id=2.

Speaking of camera stabilizers, I will mention five other slick new devices any shooter will want. The Tyler MiniGyro is a handheld, battery-powered device with four gyro wheels that supports cams up to 30 lbs: http://www.tylerminigyro.com. The Eagle (L'Aigle) is a French-made system similar to the Steadicam but with its own unique features: http://www.laigleparis.fr. Polecam, http://www.polecamusa.com, now offers special versions of its widely used supports for 3D use.

I also saw the excellent, inexpensive, and highly adaptable Nano rigs for small cameras from Redrock Micro http://www.redrockmicro.com/. From Japanese company Rocket, http://www.rocketjapan.com, we have a variety of devices, of which the most impressive is the spring-loaded Spring Stabilizer XY Damper – so cool I have to show it to you (ca. $3000), and this photo does not do it justice.


SD&A 2011 Advance Conference Program

Monday-Thursday 24-27 January 2011 Hyatt Regency San Francisco Airport Hotel, San Francisco, California, USA

Monday 24th January 2011

SESSION 1 Mon. 8:30 to 10:10 am Visual Comfort and Quality: Session Chair: John O. Merritt, The Merritt Group

• Adapting stereoscopic movies to the viewing conditions using depth-preserving and artifact-free novel view synthesis, Frederic Devernay, Sylvain Duchêne, Adrian Ramos-Peon, INRIA Rhône-Alpes (France)
• Visual fatigue monitoring system based on eye-movement and eye-blink detection, Donghyun Kim, Sunghwan Choi, Jaeseob Choi, Kwanghoon Sohn, Yonsei Univ. (Korea, Republic of)
• Factors impacting quality of experience in stereoscopic images, Liyuan Xing, Junyong You, Norwegian Univ. of Science and Technology (Norway); Touradj Ebrahimi, Ecole Polytechnique Fédérale de Lausanne (Switzerland); Andrew Perkis, Norwegian Univ. of Science and Technology (Norway)
• Visual discomfort of stereoscopic images induced by local motion characteristics, Hosik Sohn, Seong-il Lee, Yong Man Ro, Hyun Wook Park, Korea Advanced Institute of Science and Technology (Korea, Republic of)
• 3D video disparity adjustment for preference and prevention of discomfort, Hao Pan, Chang Yuan, Scott Daly, Sharp Labs. of America, Inc. (United States)

SESSION 2 Mon. 10:50 to 11:30 am Combining Depth Cues: Session Chair: Vivian K. Walworth, StereoJet, Inc.

• Can the perceived depth of stereoscopic images be influenced by 3D sound? Amy Turner, Nicolas S. Holliman, Durham Univ. (United Kingdom)
• Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays, Ulrich Leiner, Fraunhofer-Institut für Nachrichtentechnik Heinrich-Hertz-Institut (Germany); Marius Braun, Fachhochschule für Technik und Wirtschaft Berlin (Germany)

SESSION 3 Mon. 11:30 am to 12:30 pm Keynote Presentation 1

The SD&A Keynote Presentation provides an opportunity to hear an eminent speaker discuss a topic of interest to the global stereoscopic community. Speaker and topic to be announced.

SESSION 4 Mon. 2:00 to 3:20 pm View Synthesis: Session Chair: Nicolas S. Holliman, Durham Univ. (United Kingdom)

• A multi-resolution multi-size windows disparity estimation approach, Judit Martinez Bauza, Qualcomm Inc. (United States); Manish P. Shiralkar, Clemson Univ. (United States)
• Warping error analysis and reduction for depth image based rendering in 3DTV, Luat Do, Svitlana Zinger, Technische Univ. Eindhoven (Netherlands); Peter H. N. de With, CycloMedia Technology B.V. (Netherlands)
• Novel view synthesis for dynamic scene using moving multi-camera array, Takanori Yokoi, Tomohiro Yendo, Mehrdad Panahpour Tehrani, Nagoya Univ. (Japan); Toshiaki Fujii, Tokyo Institute of Technology (Japan); Masayuki Tanimoto, Nagoya Univ. (Japan)
• Depth-based representations: which coding format for 3D video broadcast applications? Paul Kerbiriou, Guillaume Boisson, Korian Sidibe, Thomson R&D France (France)


SESSION 5 Mon. 3:50 to 5:10 pm Multiview Systems: Session Chair: Neil A. Dodgson, Univ. of Cambridge (United Kingdom)

• A new basis representation for multiview image using directional sampling, Takehiro Yamada, Toshiaki Fujii, Tokyo Institute of Technology (Japan)
• Design of tuneable anti-aliasing filters for multiview displays, Atanas R. Boev, Robert Bregovic, Atanas P. Gotchev, Tampere Univ. of Technology (Finland)
• Multiview image compression based on LDV scheme, Benjamin Battin, Cédric Niquin, Philippe Vautrot, Univ. de Reims Champagne-Ardenne (France); Didier G. Debons, 3DTV Solutions (France); Laurent Lucas, Univ. de Reims Champagne-Ardenne (France)
• Upsampling range camera depth maps using high-resolution vision camera and pixel-level confidence classification, Chao Tian, Vinay A. Vaishampayan, AT&T Labs. Research (United States); Yifu Zhang, Texas A&M Univ. (United States)

SD&A 3D Theatre Mon. 5:30 to 7:30 pm

Session Chairs: Andrew J. Woods, Curtin Univ. of Technology (Australia); Chris Ward, Lightspeed Design, Inc.

This ever-popular event allows attendees to see large-screen examples of 3D content from around the world. Program announced at the conference. 3D glasses provided.

SD&A Conference Annual Dinner Mon. 8:00 pm to late

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.

Tuesday 25th January 2011

SESSION 6 Tues. 9:30 to 10:30 am Applications of Stereoscopic Displays: Session Chair: Chris Ward, Lightspeed Design, Inc.

• Attack of the S. mutans! A stereoscopic-3D multi-player direct-manipulation behavior-modification serious game for improving oral health in pre-teens, Ari Hollander, Firsthand Technology Inc. (United States)
• Stereoscopic multi-perspective capture and display in the performing art, Volker Kuchelmeister, The Univ. of New South Wales (Australia)
• Machine vision and vitrectomy: three-dimensional high definition video for surgical visualization in vitreoretinal surgery, Christopher D. Riemann M.D., Cincinnati Eye Institute and MedNet Technologies, Inc. (United States)

SESSION 7 Tues. 11:10 am to 12:30 pm Stereoscopic Display Developments: Session Chair: Michael A. Weissman, TrueVision Systems

• Novel active retarder 3D displays having full resolution and high brightness with polarizer glasses, Sung-Min Jung, Young-Bok Lee, Hyung-Ju Park, Jin-Woo Park, Dong-Hoon Lee, Woo-Nam Jeong, Jeong-Hyun Kim, In-Jae Chung, LG Display (Korea, Republic of)
• High brightness film projection system for stereoscopic movies, Lenny Lipton, Oculus3D (United States)
• New generation of universal active glasses, Bernard Mendiburu, Volfoni (United States); Bertrand Caillaud, Gilles Jovene, Thierry Henkinet, Volfoni (France)
• Continuously adjustable Pulfrich spectacles, Kenneth M. Jacobs, Binghamton Univ. (United States); Ronald S. Karpf, ADDIS Inc. (United States)


SESSION 8 Tues. 2:00 to 2:40 pm Evaluating the Quality of the Stereoscopic Experience I Session Chairs: Andrew J. Woods, Curtin Univ. (Australia); Christopher W. Tyler, The Smith-Kettlewell Eye Research Institute

• Visual discomfort with stereo displays: effects of viewing distance and direction of vergence-accommodation conflict, Takashi Shibata, Univ. of California, Berkeley (United States) and Waseda Univ. (Japan); Joohwan Kim, David M. Hoffman, Martin S. Banks, Univ. of California, Berkeley (United States)
• Analysis of glossy screen reflections in causing false depth artifacts on 3DTVs, Scott J. Daly, Sachin G. Deshpande, Jon M. Speigle, Sharp Labs. of America, Inc. (United States)

Discussion Forum 1 Tues. 2:40 to 3:30 pm 3DTV Dangers: Truth or Fiction?

There has been a lot of recent discussion in the media about the potential dangers of 3DTVs and 3D movies – and yet stereoscopic images have been with us for over 150 years, 3D movies for over 50 years, and 3D viewing is also widely used in industry. 3DTV is, however, transitioning from a special event to a 24/7 experience and becoming available to a wider demographic. Where is the truth in the concerns being expressed, where are the falsehoods, and where are the gaps in our knowledge? The panelists will give their views on this important topic.

Moderator and panelists to be announced.

SESSION 9 Tues. 4:00 to 5:20 pm Evaluating the Quality of the Stereoscopic Experience II Session Chairs: Christopher W. Tyler, The Smith-Kettlewell Eye Research Institute; Andrew J. Woods, Curtin Univ. (Australia)

• Effect of image scaling on stereoscopic movie experience, Jukka P. Häkkinen, Jussi Hakala, Aalto Univ. School of Science and Technology (Finland); Miska Hannuksela, Nokia Research Ctr. (Finland); Pirkko Oittinen, Aalto Univ. School of Science and Technology (Finland)
• Examination of 3D visual attention in stereoscopic video content, Huynh-Thu Quan, Luca Schiatti, Technicolor (France)
• Relationship between perception of image resolution and peripheral visual field in stereoscopic images, Masahiko Ogawa, Kazunori Shidoji, Kyushu Univ. (Japan)
• Quantifying how the combination of blur and disparity affects the perceived depth, Junle Wang, Marcus Barkowsky, Vincent Ricordel, Patrick Le Callet, Univ. de Nantes (France)

SD&A Demonstration Session Tue. 5:30 to 8:00 pm Session Chairs: Neil A. Dodgson, Univ. of Cambridge (United Kingdom); Andrew J. Woods, Curtin Univ. of Technology (Australia)

This year's demonstration session is again a combined event with the entire Electronic Imaging Symposium. The symposium-wide demonstration session is open to all attendees. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to Electronic Imaging. The session will have a focused "Stereoscopic Displays & Applications" area.

The demonstration session hosts a vast collection of electronic stereoscopic displays - there’s no better way to witness so many stereoscopic displays with your own two eyes than at this one session!

Poster Session Tues. 5:30 to 7:00 pm

A poster session, with authors present at their posters, will be held Tuesday evening, 5:30 to 7:00 pm.

The poster session is co-located and runs concurrently with the Demonstration Session.


Wednesday 26th January 2011

SESSION 10 Wed. 9:30 to 10:30 am Autostereoscopic Displays I: Session Chair: Gregg E. Favalora, Optics for Hire

• Implementation of autostereoscopic HD projection display with dense horizontal parallax, Shoichiro Iwasawa, Masahiro Kawakita, National Institute of Information and Communications Technology (Japan); Sumio Yano, NHK Science & Technical Research Labs. (Japan); Hiroshi Ando, National Institute of Information and Communications Technology (Japan)
• Full-parallax 360 degrees horizontal viewing using anamorphic optics, Munkh-Uchral Erdenebat, Ganbat Baasantseren, Jae-Hyeung Park, Nam Kim, Ki-Chul Kwon, Chungbuk National Univ. (Korea, Republic of)
• Optical characterization of autostereoscopic 3D displays, Michael J. Sykora, 3M Co. (United States)

SESSION 11 Wed. 11:00 to 11:40 am Autostereoscopic Displays II: Session Chair: Vivian K. Walworth, StereoJet, Inc.

• Depth cube display using depth map, Byoung-Sub Song, Sung-Wook Min, Jung-Hun Jung, Kyung Hee Univ. (Korea, Republic of)
• Surface representation of 3D objects for aerial 3D display, Hiroyo Ishikawa, Hayato Watanabe, Satoshi Aoki, Hideo Saito, Keio Univ. (Japan); Satoru Shimada, Masayuki Kakehata, Yuji Tsukada, National Institute of Advanced Industrial Science and Technology (Japan); Hidei Kimura, Aerial Systems Inc. (Japan) and Burton Inc. (Japan)

SESSION 12 Wed. 11:40 am to 12:40 pm Keynote Presentation 2

The SD&A Keynote Presentation provides an opportunity to hear an eminent speaker discuss a topic of interest to the global stereoscopic community. Speaker and topic to be announced.

SESSION 13 Wed. 2:00 to 3:40 pm Crosstalk in Stereoscopic Displays: Session Chair: Takashi Kawai, Waseda Univ. (Japan)

• How are crosstalk and ghosting defined in the stereoscopic literature? Andrew J. Woods, Curtin Univ. of Technology (Australia)
• A simple method for measuring crosstalk in stereoscopic displays, Michael A. Weissman, TrueVision Systems (United States); Andrew J. Woods, Curtin Univ. of Technology (Australia)
• Ergonomic evaluation of crosstalk in stereoscopy through heart activity and forehead blood flow, Satoshi Toyosawa, Hiroyuki Morikawa, Kouichi Nakano, Takashi Kawai, Waseda Univ. (Japan); Chin-Sen Chen, Hung-Lu Chang, Jinn-Cherng Yang, Industrial Technology Research Institute (Taiwan)
• Optical characterization of shutter glasses stereoscopic 3D displays, Pierre M. Boher, Thierry R. Leroux, Veronique Collomb-Patton, ELDIM (France)
• The effect of crosstalk on perceived depth magnitude in stereoscopic displays, Inna Tsirlin, Robert S. Allison, Laurie M. Wilcox, York Univ. (Canada)

SESSION 14 Wed. 4:00 to 5:20 pm 3D Perception and Interaction: Session Chair: Hideki Kakeya, Univ. of Tsukuba (Japan)

• Effects of stereoscopic presentation on visually induced motion sickness, Hiroyasu Ujike, Hiroshi Watanabe, National Institute of Advanced Industrial Science and Technology (Japan)
• Vergence and accommodation to multiple-image-plane stereoscopic displays: 'Real world' responses with practical image-plane separations? Kevin J. MacKenzie, Ruth Dickson, Simon J. Watt, Bangor Univ. (United Kingdom)
• Both efficiency measures and perceived workload are sensitive to manipulations in binocular disparity, Maurice van Beurden, Wijnand IJsselsteijn, Technische Univ. Eindhoven (Netherlands)
• Comparison of relative (mouse-like) and absolute (tablet-like) interaction with a large stereoscopic workspace, Melinos Averkiou, Neil A. Dodgson, Univ. of Cambridge (United Kingdom)


Electronic Imaging All-Conference Reception Wed. 7:00 to 9:00 pm

The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues. Plan to join us for this relaxing and enjoyable event.

Thursday 27th January 2011

SESSION 15 Thu. 8:30 to 9:10 am 3D Content: Session Chair: Janusz Konrad, Boston Univ.

• Optimal design and critical analysis of a high resolution video plenoptic demonstrator, Valter Drazic, Jean-Jacques Sacré, Jérôme Bertrand, Arno Schubert, Technicolor (France)
• Geometric and subjective analysis of stereoscopic I3A cluster images, Mikko Kytö, Jussi Hakala, Pirkko Oittinen, Aalto Univ. School of Science and Technology (Finland)

Discussion Forum 2 Thu. 9:10 to 10:10 am

The panelists will give their views on a topic of general interest to the SD&A audience. Topic, moderator and panelists to be announced.

SESSION 16 Thu. 10:50 am to 12:30 pm Stereoscopic Production and Playback: Session Chair: Samuel Z. Zhou, IMAX Corp. (Canada)

• The Dynamic Floating Window: a new creative tool for 3D movies, Brian R. Gardner, Independent 3D Consultant (United States)
• Stereo video inpainting, Felix Raimbault, Anil Kokaram, Trinity College Dublin (Ireland)
• A modified non-local mean inpainting technique for occlusion filling in depth-image based rendering, Lucio Azzari, Federica Battisti, Univ. degli Studi di Roma Tre (Italy); Atanas P. Gotchev, Tampere Univ. of Technology (Finland); Marco Carli, Univ. degli Studi di Roma Tre (Italy); Karen Egiazarian, Tampere Univ. of Technology (Finland)
• A study on the stereoscopic codecs for non-real time 3DTV services, BongHo Lee, Electronics and Telecommunications Research Institute (Korea, Republic of)
• A modular cross-platform GPU-based approach for flexible 3D video playback, Roger Olsson, Håkan Andersson, Mårten Sjöström, Mid Sweden Univ. (Sweden)

3rd Dimension

Back issues only $7.99

http://www.veritasetvisus.com/3rd_dimension.htm

Thar She Blows… by Fluppeteer

Fluppeteer is contributing to Veritas et Visus based on a long background working as a computer graphics programmer, and a similarly long background torturing his display hardware to within an inch of its life. He uses an IBM T221 display (3840x2400 pixels) and multi-monitor setups, and the attempts to extract the best out of them have given him some insight into the issues specific to high-resolution displays. Fluppeteer holds an MA from the University of Cambridge (England) and an MSc in Advanced Computing from King's College London. His efforts to squeeze the most from monitors stretch from ASCII art to ray tracing. Laser surgery left him most comfortable 1-2 feet away from the monitor, making high resolution a necessity. He is currently ranked 18th in the world at tiddlywinks.

My computer recently blew up. Actually, that's over-stating it; it actually shut down in a relatively calm manner, but it has nonetheless passed on. Is no more. Has ceased to be. Has expired and gone to meet its maker. Is an ex-computer.

The reason for this is that the water cooler pump has died. I could probably consider replacing it, but the effort involved in plumbing a new water cooler into a computer that needs draining to be removed from its cabinet is probably beyond what the machine can reasonably justify – assuming that it did, in fact, survive the experience. Note to self: using the graphics card temperature monitor to check how hot the CPUs are getting if the water pump is suspiciously silent, on the basis that they're all plugged in to the same water cooling system, only works if the water is actually flowing. Otherwise a large heat sink with water in it can keep a CPU running for just long enough for you to leave the room before it falls over. #LFMF

Why did I have a water-cooled computer? I wasn't over-clocking it: it was a frantic attempt to stop the graphics cards over-heating. Before I fitted the water cooler, one of the graphics cards hit 105 degrees centigrade. Filling the case with fans didn't make the situation any better, just louder. What was the problem? Most computer cases assume that air is coming in the front and blowing out the back. Altering this is possible, but a pain; however, it's certainly possible to amplify the amount of air that's blowing. Allow for big fans – which are quieter – and add rounded wires for better airflow, and you have a small hurricane blowing between the front and the back.

Unfortunately, I suspect a lot of graphics card manufacturers run all their graphics cards in cases with the sides off, because the cards in this machine sucked in air at the back of the card (but still inside the case) and blew it forwards. Where this nice hot air would then get blown right back to the fan intake by the prevailing airflow of the case – I got a hot air cyclone. The only easy solution was to get rid of the air flow entirely and water cool the graphics card, which left the rest of the air in the case unobstructed.

This was some years ago. Having now, by obligation, gone shopping for parts for a replacement computer, I assumed that all would be much better. Spotting low-end graphics cards with an open fan on the side, I remained dubious about the effect on the heat of the rest of the case, so I got a pair of nice large shrouded cards that vent air out the back. All is well, yes? Well, not so much. The intake fan – something I would have noticed had I not been desperately trying to buy it for when I needed a computer in a hurry – is in the middle of the card, not (as with reference cards I'd seen elsewhere) sucking in air at the front of the case and blowing it over the card and out the back. Air does vent out the back, but only from half way down the card; the other half of the air sucked in by the fan blows towards the front of the case, right towards the 120mm fan that's cooling the rest of the system. Having the fan half way down the card also means that, because I have two cards in Crossfire configuration, the air is being sucked through a 2mm gap between the cards – so even the flow of warm air (mostly being sucked back in, having been blown out by the shroud vents above and below the card) is obstructed. I'm also mildly amused that the fan is mounted directly over the GPU, so there is no airflow over the part itself, only over the heat pipes (which is fine, but seems a bit redundant).


I could avoid the airflow problem by mounting a fan in the side of the case and blowing air onto the sides of the cards, so that cool air went directly to the card fan intakes. Which would have been fine if the modern fad of perspex sides (because looking at wires is more important than, say, RF screening) didn't turn the case into a sounding board when a fan is mounted there. Admittedly, I'm not a great fan of the case (it wasn't a cheap one and was made by a reputable manufacturer, but didn't have enough stand-offs for the motherboard, the PCI slot blanks bent the back-plate on their way out and the drive bay – while almost blocking the graphics cards – also eats screws), but it's far from alone in this feature. Silly me for hoping that computer assembly would have been getting easier over the twenty or so times I've done it; you still have to sacrifice some blood to make the computer work.

Fortunately, some modern graphics cards are over-engineered, and the airflow issues don't seem to cripple them as much as older versions: I took out the extra fans, put the air flow back the way it started, and ran some tests, and the cards seem to stay reasonably cool in moderate use despite my fears. I have to feel that it's in spite of the cooling solution rather than because of it, though, and I live in fear that at some point I'm going to run something that pushes the thermal capacity of the cooling solution off a cliff. The way to make a cool, quiet computer is to channel cooling down a nice straight path as efficiently as possible, passing heat exchange surfaces on the way. Adding in fans and relying on the right one winning is not a solution.

Maybe the manufacturers have to target the worst environment: I've seen many cases (usually pre-built systems) that are cooled entirely by the CPU and PSU fans, often stuffed with quite a lot of warm graphics cards, and blowing air in every direction is probably the way to keep a card cool if there's no air movement in the first place. Even so, it seems to me that the solution for the component manufacturers would be to require cases with decent cooling – if the cost of a 120mm fan is the only reason for a dual-slot cooling solution, it would be better for everyone if the box shifters bit the bullet.

Graphics cards have feelings too. Give them somewhere nice to live, and let them breathe easily. Then I can, too.

MultiView Veritas et Visus – Andrew Woods, volume 10: 20 articles, 62 pages

The MultiView compilation newsletters bring together the contributions of various regular contributors to the Veritas et Visus newsletters to provide a compendium of insights and observations from specific experts in the display industry. http://www.veritasetvisus.com


Another step in the maturation of 3D in the home by Arthur Berman

Arthur Berman has an extensive background in the technology and business associated with liquid crystals. He has 23 US patents and was the lead technologist in four overseas factory deployment/start-up operations. For the past several years he has been the founder and senior executive in LCoS/projection technology companies. He has a Ph.D. in physics from Kent State University. This article was reprinted with permission from Insight Media’s Display Daily on August 20, 2010. http://www.displaydaily.com

All technologies go through a series of "way stations" as the products they enable mature from prototype to consumer item. This Display Daily article will report on one such development in the life cycle of an important part of bringing 3D video into the home. More specifically, two companies have teamed to offer a certification program to assure the quality of 3D Blu-ray discs.

The companies are BluFocus Inc. (Burbank, CA) and THX Ltd. (San Rafael, CA). The partners bring a great deal of relevant experience to the task. BluFocus is an official Testing Center for the Blu-ray Disc Association (BDA). THX is well known for the quality system that assures certified theaters provide sound as close as possible to the intentions of the mixing engineer. In addition, THX has created certification processes for a number of additional products, such as HDTVs, and now, 3D Blu-ray discs.

Paulette Pantoja, CEO of BluFocus described the problem they seek to address. “The added dimension of 3D brings with it more technical challenges than traditional 2D post-production and authoring, and requires more steps in the production chain. The certification process we are creating with THX will help refine 3D post-production and authoring and help content providers minimize technical flaws long before 3D content is broadcast, streamed or authored on optical disc.”

Another aspect of certification is discussed in a press release on the THX web site: “Attention is also paid to disc interoperability, with THX and BluFocus testing Blu-ray Discs and Players to ensure they play seamlessly.” To accomplish these goals, “The new THX-BluFocus 3D certification will set authoring and production guidelines and testing procedures for evaluating 3D AV quality, as well as examining 3D Blu-ray disc and player interoperability and the physiological effects of 3D on home viewers.”

The 3D certification consists of three parts. Each part will have a separate icon. If the disc is certified, the manufacturer can include the icon on the product package and, in this way, inform consumers of the assured high quality of the 3D video. The three certification parts are as follows:

• Interoperability Certification: This certification will assure that the Blu-ray disc will play without problem on the conventional and 3D Blu-ray players offered by a wide range of major consumer electronics companies. THX has stated that each disc seeking certification will be tested on more than 100 Blu-ray players.

• Audio-Video Certification: This certification is an assurance that the image in every frame of the 3D video has been analyzed, and that both the left-eye and right-eye images have the same quality, sharpness and detail as the original master video. As a separate matter, the audio elements will also be evaluated to assure that they are true to the master. Achievement of certification assures the user that this 3D Blu-ray disc is free of digital artifacts.

• Creative Certification: This certification indicates that engineers have reviewed all 3D visual elements, including characters, menus, graphics and subtitles, to assure they are properly focused and in the accurate "action location" on screen. 2D-to-3D conversions will also be analyzed to detect creative errors and/or flaws that diverge from the director's intent or that may cause 3D-related viewing problems.

It is still very early days for 3D in the home. Incompatibilities between equipment and software provided by different companies exist and, in fact, are common. It would seem valuable to have a certification assuring that any 3D video a consumer chooses to purchase will in fact work, and be of high quality, when played on their Blu-ray player. With this in mind, the whole 3D industry can wish THX and BluFocus good luck with the certification program.

3D@Home 3D Workshop A Full-day, Hands-on Workshop Spanning the Entire 3D Eco-system November 8, 2010, 8:30 AM - 5:00 PM

Workshop Location: 3D@Home Headquarters/ SEMI Bldg. 3081 Zanker Rd. San Jose, California 95134

The 3D@Home Consortium will be hosting a full-day, hands-on workshop spanning the entire 3D eco- system. The workshop, to be held on Monday, November 8, 2010 in San Jose, CA, will be presented by Insight Media University's Chris Chinnock.

The program consists of a full day of basic and intermediate 3D workshops geared toward 3D@Home member companies, and local Bay Area companies, with a desire to know more about the 3D landscape. These four workshops offer in-depth knowledge about 3D topics.

The courses are tailored to the technical sophistication of the audience. Given the wide-range of information, there is something for everyone to learn on 3D technology from creation to distribution to display. Attendees may choose to attend the full-day of workshops, or only the AM or PM sessions.

Registration link: http://www.cvent.com/EVENTS/Info/Summary.aspx?e=cfae8041-89a1-4fa1-9f70-1c47f7443505

Where’s the Beef?

by Pete Putman

Pete Putman is an expert in the HDTV market and its associated channels, and also is a leading display testing authority. He has authored hundreds of technical articles, reviews, and columns covering a wide range of topics including front and rear projection, video format conversion, electronic cinema, DTV and HDTV reception and display, plasma and LCD display technologies, LED tiled displays, and networked AV installations. He is a member of InfoComm International (Academy Faculty), SID, and SMPTE. He was named InfoComm International's 2008 Educator of the Year. He has a BA in Communications from Seton Hall University, and an MS degree in Television/Film from Syracuse University. This article originally appeared on Pete's blog site on October 6, 2010 and is reprinted by permission: http://www.hdtvexpert.com

Yesterday I made the trek up to northern New Jersey to visit John Turner at Turner Engineering, a 40-year-old broadcast systems integration company that is well-known in the industry for having built (among other things) the video infrastructure at EPCOT, the experimental WHD TV station, control rooms and distribution facilities for companies such as AT&T and Prudential, and a host of other projects including re-wiring the Las Vegas Convention Center and dropping the sixteen HD-SDI fiber optic feeds required for NHK’s 2009 demo of 8K resolution UHDTV.

The occasion was an attempt to get a DirecTV HD receiver to “talk” to a Hyundai S465D commercial 3D LCD monitor. The S465D, which sells for about $7,000, uses built-in micro-polarizers and works with passive X-pol (circularly polarized) eyewear, such as RealD's movie theater glasses. It displays top+bottom and side-by-side frame-compatible 3D signals as half-resolution images – 1920×540 pixels to the left eye, and 1920×540 pixels to the right eye. (Sorry, the S465D doesn't support the Blu-ray frame-packing 3D format (HDMI v1.4a) – we tried that test with Samsung C6800 and C6900 BD players.)

Hyundai's 465D 3D LCD monitor. (The tropical fish are optional!)
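To see where those 1920×540 images come from, here is a minimal sketch (my illustration, not Hyundai's processing) of how a top+bottom frame-compatible signal splits into two half-vertical-resolution eye views for a row-interleaved passive display.

```python
import numpy as np

# A top+bottom frame-compatible signal squeezes each eye into half the
# vertical resolution of a single 1920x1080 frame: 1920x540 per eye.
def split_top_bottom(frame: np.ndarray):
    """Return (left_eye, right_eye) half-height views from a T+B frame."""
    half = frame.shape[0] // 2
    return frame[:half], frame[half:]

left, right = split_top_bottom(np.zeros((1080, 1920), np.uint8))
print(left.shape, right.shape)  # (540, 1920) (540, 1920)
# The micro-polarizer panel then paints one eye on even display rows and the
# other on odd rows, so each eye sees 540 lines through its filter.
```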

The problem was that the Hyundai's EDID (Extended Display Identification Data) is not supported by the DirecTV receiver, so we couldn't see any of the available 3D channels. That problem was solved by using Gefen's HDMI Detective to read and save the EDID from a Samsung UN46C7000 3D LCD TV and subsequently 'spoofing' the DirecTV receiver into delivering the desired programming.
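An EDID base block, incidentally, is 128 bytes whose values must sum to zero modulo 256, with the last byte serving as the checksum – which is why any device that captures, edits or replays an EDID has to re-balance the block. A hedged sketch of that bookkeeping (my own helper names, not Gefen's code):

```python
# EDID base block: 128 bytes; byte 127 is a checksum chosen so the whole
# block sums to 0 mod 256. Sinks and sources reject blocks that don't.
def fix_edid_checksum(block: bytes) -> bytes:
    assert len(block) == 128, "EDID base block is exactly 128 bytes"
    body = block[:127]
    return body + bytes([(-sum(body)) % 256])

def edid_is_valid(block: bytes) -> bool:
    return len(block) == 128 and sum(block) % 256 == 0

# Example: a dummy block carrying only the standard 8-byte EDID header.
header = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
print(edid_is_valid(fix_edid_checksum(header + bytes(120))))  # True
```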

Once that was set up, we donned the glasses and scanned through all three channels. The first one consisted of a DirecTV 3D logo sitting in mid-air, which was interesting to watch for all of five seconds. There was no crawl, or barker, to tell us what programming was coming up later. The second channel, n3D (sponsored by Panasonic), was running a loop of content promoting 3D concerts featuring Peter Gabriel and Jane’s Addiction from the Guitar Center, wherever that is. These were interspersed with promos for a couple of nature shows. And that was it. The third channel was running an on-demand promo for Journey to the Center of the Earth in 3D for $4.99. And that was it. Granted, we checked out the programming between 2 PM and 6 PM EST, when there probably wouldn’t be too many viewers to tap into. But why not loop entire 3D programs all day, as the old PBS HDTV demos used to do ten years ago?

Comcast isn’t any better now. I have a choice of ESPN 3D (also available on DirecTV), which is mostly a barker channel during the daytime promoting the upcoming Saturday 3D NCAA college football game. Go up one channel, and you see a graphic telling you that Comcast’s own 3D channel is “coming soon!” And that’s it.


It’s unrealistic to expect consumers to spend the extra $$ on a new 3D TV when there is so little content to choose from. Once you get through the “bundled” Blu-ray disc(s) that came with your set and watch a football game or a pay-per-view movie, you’re going to be sitting on your hands waiting for more 3D content to come along.

The TV manufacturers have to take some of the blame here. Yes, I know that Sony has a partnership with IMAX and Discovery, but that 3D channel won’t launch until next year. And the pickings on n3D are like finding a couple of cans of beans in an otherwise-empty pantry.

Want to buy a 3D Blu-ray? There’s not a lot to choose from there, either, and won’t be until we get closer to Christmas. All of the “hot” 3D movies are already tied up in bundling arrangements with TV manufacturers, and that’s just plain silly. How to Train Your Dragon, Alice in Wonderland, and Despicable Me are three of the highest-grossing 3D movies of 2010. But you have to buy a new 3D TV to get a 3D BD copy of any of them. Oh, well. There’s always that 3D DirecTV logo to look at…

Afterthought: The Hyundai S465D does a decent job displaying 3D, but a reference-grade monitor it’s not. I tried to do a basic calibration, but the monitor’s image adjustments were so limited that it wasn’t worth the effort. And the S465D has inconsistent gamma response and wanders over 1000 kelvin in color temperature from black to white. I’ve seen consumer TVs do a better job for one-third the price.

As an air check monitor, it works fine. Don’t expect any more than that from it, though. You will be disappointed.

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

The U-Decide Initiative

by Neil Schneider

Neil Schneider is the President & CEO of Meant to be Seen (mtbs3D.com) and the newly founded S-3D Gaming Alliance (s3dga.com). For years, Neil has been running the first and only stereoscopic 3D gaming advocacy group, and continues to grow the industry through demonstrated customer demand and game developer involvement. His work has earned endorsements and participation from the likes of Electronic Arts and Blitz Games Studios, and he has also developed S-3D public speaking events with Crytek, Epic Games, iZ3D, DDD, and NVIDIA. Tireless in its efforts, mtbs3D.com is by far the largest S-3D gaming site in existence, and has been featured in Game Informer, Gamecyte, Wired Online, Joystiq, Gamesindustry.biz, and countless other sites and publications.

The S-3D Gaming Alliance and its partners are pleased to reveal the preliminary results from The 2010 U-Decide Initiative. Here are some key features:

• Data collected from July 7th to October 1st, 2010
• Jointly partnered and promoted with Panasonic, Electronic Arts, Ubisoft, SteelSeries, Zalman, Blitz Games Studios, Computer Power User, and Meant to be Seen
• Purposely targeted to gamers (console and PC)
• 1,169 respondents: 735 traditional 2D gamers (non-stereoscopic) who don't own 3D equipment, and 434 experienced stereoscopic 3D gamers who do
• These preliminary results are 100% based on the 2D gamer portion, to avoid skewing
• This is a tiny sampling of the data collected. A full report will be released in November 2010
• 75% of respondents are based in North America, 15% in Europe, and the remaining 10% in the rest of the world (ROW)
• According to the Entertainment Software Association, over 50% of adults play video games

3D glasses or no 3D glasses? While pundits and general-interest 3D studies are up in arms about the need for glasses-free technology, these claims remain unfounded among gamers. According to traditional 2D gamers who don't yet own 3D equipment of their own, the use of glasses is an insignificant deterrent for 3D gaming and 3D Blu-ray movie content. Only for 3D sporting events and traditional television broadcasting do glasses mark a steep difference in the willingness to watch 3D content. It seems many of the original findings from 2009's U-Decide Initiative still hold water.

Left – Willingness to watch 3D content with and without glasses (blue – with glasses; red – no glasses needed); Right – Will HDTV owners buy a second TV? (253 don’t own an HDTV; 482 already do; 3D not specified; orange – don’t own an HDTV; blue – already own an HDTV)


When will gamers buy a second HDTV? The above chart is based on 2D gamers' purchase plans for HDTVs; it is not 3D-HDTV-specific. For pundits and analysts who believe consumers won't buy a second HDTV if they already own one, U-Decide had the advantage of a disproportionately large sample of gamers who already own an HDTV. As shown above, owning or not owning an HDTV has very limited impact on interest in making a future HDTV purchase.

3D HDTV purchase plans by 2D gamers: Stereoscopic 3D is considered a high-interest HDTV feature by 57% of respondents, while 19% were neutral and 24% reported a low interest level. The above chart is based on the 57% high-interest group combined with 50% of the neutral group.

2D Gamer 3D HDTV purchase plans; 253 don’t own an HDTV, 482 already do; orange – don’t own an HDTV, blue – already own an HDTV
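For clarity, the weighting described above works out as follows – a simple recomputation from the report's published percentages, nothing more:

```python
# The chart's base: the 57% high-interest group plus half of the 19% neutrals.
high_interest, neutral = 0.57, 0.19
chart_base = high_interest + 0.5 * neutral
print(f"{chart_base:.1%}")  # 66.5% of 2D gamers counted toward purchase plans
```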

2D gamer purchase plans for 3D games: If game developers think they can wait until the last minute to be 3D Ready, they should think again. Almost 50% of respondents say that once they own a 3D HDTV, they will give preferential treatment to 3D Ready games. In fact, 8% say they will only buy 3D games. We expect this trend to grow more and more dominant as 3D gaming awareness increases.

Left – Relationship between 3D and game purchases; Right – 3D premium for PC games

Is there money to be made for game developers? One of the leading issues for stereoscopic 3D game developers is whether there is money to be made from having a special 3D mode, and whether gamers would be willing to pay a premium for it.

• 3D premium for 3D PC games: According to traditional 2D gamers, there is a clear willingness to spend a bit more for 3D compatibility (if they owned a 3D display). Approximately 40% of respondents are willing to spend anywhere from $3 to $5 more on a $50 game title on PC. Another 12% are OK with paying as much as $10 more. Over 15% suggest premiums as high as 50% to 100% more for their video games. We are unconvinced that premiums this high would sell very well with gamers, but it's a good indicator that 3D compatibility has measurable value associated with it.

• 3D premium for 3D console games: Willingness to pay a premium for 3D compatibility is a bit less for console gamers versus PC gamers, but not by much. Just under 40% are comfortable with the $3 to $5 range, and almost 12% will go as high as $10. As with PC, we aren't big believers in the $25 to $50 premium range for console 3D games, but almost 14% of 2D gamers suggest this as a possibility.

3D premium for console games

Up until now, 3D gaming has been treated as a free feature on PC and console video games. Do you think this trend will change? Why or why not? Do you see this as a feasible way for game developers to make some extra money out of stereoscopic 3D gaming? What criteria would game developers need to meet for a premium like this to be feasible?


Last Word: Dark Country

An Interview with Thomas Jane

by Lenny Lipton

Lenny Lipton is recognized as the father of the electronic stereoscopic display industry. He invented and perfected the current state-of-the-art 3D technologies that enable today's leading filmmakers to finally realize the dream of bringing their feature films to the big screen in crisp, vivid, full cinematic-quality 3D. Lenny Lipton became a Fellow of the Society of Motion Picture and Television Engineers in 2008, and along with Peter Anderson he is the co-chair of the American Society of Cinematographers Technology Committee's subcommittee studying stereoscopic cinematography. He was the chief technology officer at RealD for several years, after founding StereoGraphics Corporation in 1980. He has been granted more than 30 patents in the area of stereoscopic displays. The image of Lenny is a self-portrait, recently done in oil.

This unedited interview was recorded a couple of years ago at the Shanghai Grill in Beverly Hills.

Lipton: What gave you the idea of shooting a 3D – I’ll call it a – horror movie?

Jane: Let’s call it a thriller.

Lipton: It’s a thriller. Because “horror movie” is wrong. Today it means gore.

Jane: Yeah, this is much more… a psychological thriller. And the idea of exploring some psychological issues in the vein of film noir, where the heroes are typically conflicted psychologically and are working out some deep personal issues… For me, shooting the film stereoscopically was an allusion… The depth in the picture gave me a chance to explore depth in filmmaking. In other words, I felt like I could heighten the symbolism that’s inherent in the dreamlike narrative of film noir, with a heightened sense of depth and using the visuals in a way that would cast them in relief, bring some of the visuals to the foreground, and allow me to explore psychological issues in a visual way.

Lipton: So you knew 3D was going to be an element in this story from the get-go?

Jane: Yes, I did.

Lipton: Just the way any director would know that color and sound were going to be part of the story, and almost take it for granted.

Jane: Yes.

Lipton: But in this case I presume you had to be more self-conscious because it’s new, or it’s a novelty.

Jane: Yeah, it is novel. So there are less filmic references that I can draw upon in stereo, which allowed me to be freer in making my associations. Whereas in the 2D world you know you’re going to be working with color, sound, and a composition that’s going to be in a 4x3 or a 16x9 frame – you know, you have your wide lenses, and your tight lenses, and you have your depth of field, and these are the tools with which we’ll tell a visual story. With 3D we add the extra element of the z-axis to our toolbox to tell a visual story – to use visuals to tell a psychological, emotional narrative.

Lipton: So when you’re on the set shooting a shot, are you able to visualize what the 3D effect is going to be like to the audience?

Jane: In creating the narrative, I try to incorporate aspects of stereoscopy. In other words, I try to plan, using storyboards and the script, where the 3D is going to have an impact on the narrative beforehand – before I get to the stage and the set. And for that reason I storyboarded the entire film – shot by shot, frame by frame – so I knew

where each shot was going to land in the film, and where that shot was going to exist on the z-axis of the 3D narrative. In other words, I created the narrative in storyboard form, and I went back through that storyboard and decided where the subject in each frame was going to exist in the z-axis of the picture. Looking at the film scene by scene, act by act, I could then map out how I wanted the stereo to play in the film – in other words, how I wanted the audience to experience the 3D narrative. For instance, I believe that if you’re watching a 3D movie and everything is put into stark 3D relief, eventually your eyes will get used to the 3D effect and the impact will be lost. The first 30 seconds of a 3D film are always the best, because your eye is visually stimulated in a new way that you’re not used to, even in real life; because stereoscopic film is still an illusion. It’s two flat images married together to create the illusion of depth. So when you’re watching it on a screen, although closer to real life, it’s still an illusion.

Lipton: That’s a very important point, because people often say that the addition of the stereoscopic element to the cinema makes it more realistic. But you’re getting at something that I think is deeper. It’s not necessarily that it’s more realistic; it’s something else that you can use to involve the audience, and manipulate.

Jane: That’s correct. Like color, like stereo sound, it is a reflection of how we perceive reality in our day-to-day lives, but it is still a manufactured, man-made illusion. And you’ve experienced that. In no way do you feel like these images are real at any time. You’re still using depth, you’re still using depth of field, you’re still using focus to tell the story. I still can pull focus from subject to subject. I’m still confined by the frame in telling the story, and 3D is just another tool to manipulate the image.

Lipton: In Dark Country your main character is trapped, and I believe that he’s even more trapped in 3D than he would be in a 2D movie. It’s a paradox. It would seem that you’d have more space, and yet it’s as if the 3D element in this film puts the guy, the poor bastard, in a cage that he can never get out of.

Jane: That’s correct. That’s absolutely correct. And that’s why I felt like stereo would be a perfect medium to explore the psychological issues that the film deals with, and those issues of being trapped. That’s why the setting is both containing and… You know, it’s contained inside of a car for a lot of the film, but it’s set in the desert, which is an extremely vast wide open space.

Lipton: Right. I knew you had something after I saw… I guess I saw an untimed rough cut. Movies go beyond intellect. For days I thought about… Not that I thought about it, but it kept coming back to me and bothered me. So your work haunted me. No matter what I thought about it intellectually, analyzed it or whatever, it was very disturbing and it stayed with me. And then I knew that it was real. That this was a worthwhile film, and entertaining at the same time. Of course nobody wants to sit through anything if it isn’t fun to look at. But it’s actually very disturbing.

Jane: That’s the idea. And you’re right: The stereo can enhance the sense of confinement by further defining the borders.

Lipton: It’s very monochromatic, and very dark. And it’s honestly easier to see in stereo than it would be in 2D.

Jane: Yes. Again, those are the monochromatic choices that we made, you know. The film noir, chiaroscuro style was chosen to enhance the 3D effect of the film. I think that, paradoxically, the less visual information you fill the frame with, the more what’s there will stand out in relief. If I shoot a character in shadows against black, that character will stand out more, and what he’s doing will take on an inherently larger-than-life meaning that it wouldn’t have if I shot the character in a much more realistic manner.


Lipton: In a sense movies have always been three-dimensional, because directors and cinematographers have always been attempting to direct the audience’s attention to a specific part of the frame – through depth of field, by adding smoke and fog, and by choice of focal lengths. So the stereoscopic element is another way to tell a story in depth. But it’s taken a long time for us to get here.

Jane: Yeah, all the tricks we use to create an image are mimicking binocular sight. So we’re trying to create depth in a 2D image, a 2D space, and we’ve become very good at it. And our visual depth cues are so inherent and instinctual that when we see a 2D image that has foreground, mid-ground and background, we automatically can extrapolate that into what that would feel like in real life – and thus we can have a visceral experience watching a 2D image. Nothing changes with the stereo image. You are still confined by the same limitations, and you still have the same tools to bring about the same effect. In other words, if you want someone to focus on something in the foreground you’re still going to have to use, like you said, depth of field, focus, and composition to achieve that. But you’re adding the stereo binocular effect when you shoot with two cameras.

Lipton: But curiously, your tormented character who, in a 2D movie, would be confined to the x and y plane, is now confined to the x, y, and z plane or space. He’s even more confined. It’s a paradox. What you did was very clever. And I think it’s very important that directors start thinking about the stereoscopic medium not as a gimmick, but as an intrinsic part of telling the story. The thing is, it isn’t that it’s any more realistic. It’s that it lets you tell the story better, if you use it right.

Jane: If you use it right, it’s another dimension – like having a sound come from the back of the theater and then roll forward to the front of the theater. It’s the same immersive technology that you can use… It’s another tool in your kit.

Lipton: What was it like to be on the set with this new gear, this new equipment? This is the first picture you directed, correct?

Jane: The first picture I directed. We used the Silicon Imaging cameras. No one had ever shot a feature with those cameras before. They’re very small. The digital live-action 3D – if we weren’t the first, we were the second film to use this digital technology to shoot a live-action stereo film. The workflow was not established and kept changing on a daily basis as the technology evolved. The RED cameras hadn’t been used very much, and that technology was evolving as we shot the film. So there were a lot of technical glitches that we came across.

Lipton: You took a lot of risk for your first outing as a director. You’re either crazy, or you like to take risks.

Jane: I think what’s exciting for me about digital filmmaking in stereo is the newness of the technology. Because I was able to put it on a Steadicam rig and shoot live action, I was using the camera in ways that hadn’t been done in stereo before, in live action.

Lipton: So there were several different cameras that you used?

Jane: There were two. We used the Silicon Imaging camera, which is a very small camera, a 2k camera; and then we used the RED camera. Both were excellent, excellent cameras.

Lipton: And these were in rigs that were put together by Max Penner of Paradise FX.

Jane: Yeah. They basically hand-built these rigs to make them work to suit our purposes, cramming them into a special Steadicam rig that can swing from high to low all in one… It was something that hadn’t been done before. I’m not saying that they hadn’t shot stereo with a Steadicam before, but the flexibility and the mobility of the camera was something that we were breaking new ground on in the film. And just the digital workflow – working with stereo and making a stereo film from inception to completion was the exciting part about making the film. It was that I got to use the stereo to tell the story in a way that was intrinsic to the story that we were telling. In other words, it wasn’t a gimmick. In my film there are very few shots where we see objects coming off the screen and

coming out at us. And when you do see it, it’s for reasons that are narrative. In other words, when that happens it forwards the narrative.

Lipton: Up until now you’ve been an actor in action pictures, mostly?

Jane: Action, drama… I’ve done dramas and action films and comedies. I’m a jack-of-all-trades in the acting world.

Lipton: So if you would use stereo in any particular genre, where do you think it would work the best? Because what I’m thinking now is that what we’re going to see in terms of live-action 3D is going to be movies that use a lot of CG backgrounds or characters. Your movie is different.

Jane: That’s right. The movies you’re going to see, of course, are going to be the horror films where you have knives and axes being thrown out at the audience, and it’s going to be used in much the same way that House of Wax used 3D in the beginning – which is for the novelty, and to explore the cheap-thrills aspect of 3D. If 3D’s going to evolve and become the tool that it is, and if we’re going to use it as the tool that it is, then we need to find ways to evolve stereoscopic filmmaking and incorporate it into the narrative of a film. I think the best use of 3D is going to be in the drama, in the sheer sort of physical, visual, filmmaking largesse – the John Ford, the 70mm, Stanley Kubrick 2001… And it needs to be used with restraint, and it needs to be used in ways that make the experience of the film deeper. In other words, it’s not just for saying, “Wow! Pretty pictures!” It’s for deepening the psychological experience of the film that you take home with you. Apocalypse Now is a film that stays with people because of the psychological experience that they take away from the film. It makes them think, it keeps them up at night, it asks serious questions about the nature of existence and humanity and why we do the things we do to each other. And we can use stereo to enhance that experience. We can use stereo to further drive home the ideas that we wish to communicate through film.

Lipton: Your vision is an avant-garde vision, because I think it’s safe to say that the great majority of studio people, and the producers, don’t see it that way. They don’t understand that movies have always been three-dimensional and that this is an evolution. They would use it as a gimmick. I think we will have arrived at a really good place for the stereoscopic medium when the kinds of dramas that you’re talking about are shot stereoscopically. But for some reason I think we’re going to have to endure a period of gimmicks. Paradoxically, and oddly, the cartoon cinema – the CG animation cinema – doesn’t do it. They’re very well modulated. The stereo supervisors at DreamWorks, and at Imageworks, and at Disney – and at Blue Sky, I’m familiar with their work… It’s not gimmicky. It’s very well controlled.

Jane: The animation guys, and the guys who are doing animation like Pixar, I think are the most technologically and dramatically evolved filmmakers of our generation, and of any other generation to come before us. They’re making – and I think this will be proved out – they’re making the best films ever made.

Lipton: And they get no respect. The Academy absolutely doesn’t respect…

Jane: But they will. And part of the reason why is because they have to – proof-of-concept, you know – they have to pre-visualize their film every frame, frame by frame. So their narrative is constantly revised so that the visual experience and the narrative become seamless. They are one. The music in these films is the best music out there. It’s brilliant. The music to Toy Story is absolutely brilliant.

Lipton: I’ve actually been listening to it on XM radio. They have a channel devoted to cinema music, so my kids and I, when we drive around we listen to a lot of the stuff.

Jane: But you’ll see it. So when they use stereo, they’re using it as a narrative function… I was thinking about the animated movies. I think that because the narrative is so important in an animated film, they just inherently understand that 3D is used to enhance the narrative. And one of the things I think is important to understand about 3D is that when you create a 3D film, it’s not just a one-size-fits-all stereoscopic effect that you put on the film. You can vary the intensity of the 3D effect. In other words, you can make it stronger or more subtle, and you can

go right down to flat 2D in a 3D film for sequences. I think that’s important, because I believe that the eye gets used to the 3D effect, and halfway through Hondo with John Wayne I stopped seeing the stereo. I took it for granted, which is a good thing, because I was no longer consciously aware that I was watching a 3D film.

Lipton: It’s the same thing for color, or sound.

Jane: Exactly. Eventually you become slightly desensitized to it. So I think that one of the things we can do is to vary the intensity of the 3D. In other words, if I have a sequence coming up where I really want to use the stereo to enhance the experience of the film, then before that sequence I might back off on the stereo effect, make it much more subtle, so that I lull you into a sense of… You feel like you’ve become completely unaware of the 3D. And then I turn up the volume on the 3D when I want you to see it, and it becomes more impactful. The 3D will suddenly come roaring off the screen at you for that sequence. Then I can back off on it again. Just like building tension and release in a film, I can also build tension and release using stereo.
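As a purely illustrative aside, Jane’s “volume” metaphor maps neatly onto a depth script that assigns each sequence a stereo strength and scales every shot’s parallax by it. The minimal Python sketch below is our own invention – the sequence names, numbers, and the scaled_parallax function are assumptions for illustration, not anything from Dark Country’s actual workflow.

# Hypothetical depth script: each sequence gets a stereo "volume"
# from 0.0 (flat 2D) to 1.0 (full-strength 3D).
stereo_volume = {
    "act1_setup":   0.4,   # gentle; let the audience settle in
    "act2_lull":    0.1,   # back off until the 3D is nearly invisible
    "act2_reveal":  1.0,   # then turn the volume all the way up
    "act3_release": 0.5,
}

def scaled_parallax(base_parallax_px, sequence):
    """Scale a shot's composed screen parallax (in pixels) by the
    stereo volume of the sequence it belongs to."""
    return base_parallax_px * stereo_volume[sequence]

# The same 24-pixel parallax plays very differently in two sequences:
print(scaled_parallax(24, "act2_lull"))    # 2.4  - barely perceptible
print(scaled_parallax(24, "act2_reveal"))  # 24.0 - full effect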

Lipton: One of the things I’ve been thinking about lately has been how to control that in post.

Jane: Right. You have a certain amount of control over it, but you want to kind of think about this stuff while you’re revising your film.

Lipton: That’s the thing: The animation people have total control over it in post. They can control the strength of the effect of an individual shot and see how it plays with adjacent shots, or they can control the strength of a sequence to make a point in their depth script. Actually, what you were describing to me earlier is the “depth script” concept that is used in making animated movies. In CG animation they make a chart that looks like a musical chart, and they chart the strength of the sequences.

Jane: I’ve done the same thing with my film. Ray Zone and I created what we call a depth chart. It’s a color-coded chart – I’ll include a copy of it in your book – that delineates the z-axis in the space. So you show the screen, you show the audience, and you show the z-space. You show the depth behind the screen and the depth in front of the screen, color coded: Yellow is neutral. That’s stuff you want to appear at the surface of the screen. As you go farther back the colors change according to the rainbow. I think orange is a little bit of depth, red is a lot of depth, and then forward you get…

Lipton: What is this? This is actually the storyboard that’s colored in that way?

Jane: Exactly. It’s a depth chart. Once I have that color code I know that things that are blue are coming out of the screen. They exist in the audience space – sort of at a midpoint, maybe. Things that are purple might exist deep into the audience space – something you’d use very rarely, I think. Then I can go look at my storyboards, and with colored markers I can color different pieces, portions, of the frame according to where I want those things in the frame to appear in the depth space of the film. And then I can show that color-coded storyboard to my cinematographer and my stereographer, and they’ll know exactly where I want elements to appear in the frame, and they can make the adjustments.
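Jane’s color-coded chart reduces, in effect, to a lookup table from color to nominal z-position. The Python sketch below is our own illustration: the z-values and shot elements are invented, and only the color-to-depth ordering follows his description – yellow at the screen plane, orange and red receding behind it, blue and purple coming forward into the audience.

# Illustrative only: nominal z-positions per chart color. 0.0 is the
# screen plane; positive values recede behind the screen; negative
# values come forward into the audience space.
DEPTH_BY_COLOR = {
    "purple": -2.0,   # deep into the audience space (used very rarely)
    "blue":   -1.0,   # out of the screen, to a midpoint
    "yellow":  0.0,   # neutral: at the surface of the screen
    "orange": +1.0,   # a little depth behind the screen
    "red":    +2.0,   # a lot of depth
}

# A color-coded storyboard frame then reduces to (element, color)
# pairs that a cinematographer and stereographer can read directly.
frame = [
    ("hero's face",    "yellow"),
    ("desert horizon", "red"),
    ("reaching hand",  "blue"),
]

for element, color in frame:
    z = DEPTH_BY_COLOR[color]
    where = ("audience space" if z < 0
             else "screen plane" if z == 0
             else "behind the screen")
    print(f"{element}: z = {z:+.1f} ({where})")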

Lipton: So you think 3D movies require a stereographer and a cinematographer?

Jane: Right now they do. I think that stereography is something that is fairly complex. It’s fairly easy to understand, but it has certain complexities, certain guidelines, with which your average cinematographer – although he can educate himself – won’t have as much experience as a stereographer.

Lipton: So the question is, is the stereographer a real creative force on the set, or is he more like a focus puller?


Jane: Well, you know, I think that a focus puller can be a creative force on the set. But the job of the stereographer is a technical job. His job is to achieve… first of all, to keep it all comfortable so we’re not diverging and making your eyes hurt, and second of all to achieve the effect that you wish to achieve shot by shot.

Lipton: So those are the two elements: You really want the image to be comfortable, and you want the image to be appropriate – and look the way it should to tell the story.

Jane: Yeah. I think comfort is your guideline, your number one rule that you only want to break when it’s intentional. And I think you can intentionally create shots that are uncomfortable. What’s the word for opposing ocular…?

Lipton: Accommodation and convergence?

Jane: Yeah, but what’s the word for…?

Lipton: Divergence.

Jane: Yeah, but the discrepancy between eye-to-eye, you know?

Lipton: You’re talking about either retinal or binocular disparity.

Jane: Retinal rivalry. Exactly. I think that retinal rivalry, when both eyes are seeing two different things, can be a tool. It’s a rule that a stereographer won’t break: There is no retinal rivalry. You can’t have it. It’s taboo. I think it’s a fantastic tool. It’s very disorienting.

Lipton: Retinal rivalry was used in Robot Monster purposely: one image in one eye, and the other in the other. And I used that years ago in work I did when I was making super-8 movies in 3D. I put two different reels of film on the projector. It can be an effective technique – and very disturbing.

Jane: I used it in my film, towards the end. The character becomes very disoriented, and I use retinal rivalry to give us a sense of the distortion that he’s perceiving.

Lipton: You are very thoughtful, and way out there. This is going in extremely strange directions. I never thought, because I don’t know you very well… but I’m very impressed.

Jane: What’s great is, it’s a whole brand new toolbox that people haven’t seen or used a lot. I mean, there are tools I was hoping to do on this film that I haven’t been able to do. One of them is that, we create the stereo window. We determine that the stereo window is going to be in a 4x3 format, a 16x9 format, a 1.33 format, and then we stick to that window. If we have an object that goes out of frame, then the stereo window is broken. One of the tools we use, and one of the things we can create, is something that… You can create a stereo window.

Lipton: I saw that in your film.

Jane: But I don’t think Sony’s going to let me do it, which is really unfortunate.

Lipton: If you project in Scope at 2.4:1 and the movie was shot at 1.85, then you can have material enter the frame. And with digital projection, you can do it.

Jane: Digital projection affords… For the first time, I can create a window. In other words, I can shoot a 1.85 image, and then I can letterbox it. And I’ve done this in the cut: I create a letterbox on the top and bottom of the frame. If you do it subtly enough, you can weave in a fake stereo window with the letterbox, and then I can have an object break the frame. What is that called? Breaking the frame.

Lipton: I literally had a filmmaker call me up this morning asking me how to do it. And I told him that you’d done it, because I saw it in one of your shots. You may be the first person to do it – I don’t know.

Jane: I am the first person to do it. And now I’m having trouble with Sony in allowing me to break the stereo window, because there are certain guidelines that you have to conform to when you deliver a film, and they’re

giving me crap about, if I create a stereo window then I’m going to be in breach of it… What it means is that people are scared, and they don’t want to break new ground.

Lipton: I saw The Dark Knight in IMAX. Some of that movie is in about 1.4:1, and some of it is about 2:1 – and you never notice the difference.

Jane: Exactly. In the first cut of my film, I vary the stereo window. When we’re dealing with tight shots that are confining and inside the car, I switch to a 2.33 frame. I have a very long, narrow frame inside the car. And then when I crash outside to a big wide shot of the desert, I can go 1.85. Nobody notices.

Lipton: We’re actually talking about two things. One is breaking the window by having material outside of the edges of the frame or the surround, and the other is something that Eisenstein called the “dynamic square.” Eisenstein wrote an essay called “The Dynamic Square” in which he said, “Why not change the aspect ratio or the shape of the shot as you need it to tell the story?”

Jane: Yes. So my goal was to create a dynamic frame where I could move the window around, and then break that frame – letting you know that this box we’ve created is not real. It can be broken. It also enhances the hell out of a 3D effect. Because when you reach out into the audience, or when you have something break the audience space, if you move and hit the edge of the frame – top or bottom, left or right – the effect is broken, because you can’t escape the box. But if you create the box, then you can break the frame. You can literally reach out and break the frame. It’s a great effect.

Lipton: Another effect that people have done is to adapt something that was done in the ‘50s – the floating or virtual window. Have you seen that effect? It was done in Beowulf, it was done in Meet the Robinsons, in which the screen surround, or the window, actually moves out into the audience and you can control what is contained within it. It increases your depth range. It’s another effect you can use. It’s very effective. And it’s not noticed by the audience.

Jane: Right. I saw it in Meet the Robinsons. Very cool. So, you know, these are new tools. These are fantastic tools. You can use them subtly, and people don’t notice – they just feel it.
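For the technically curious, the floating window Lipton describes is at bottom an asymmetric matting trick: the black surround itself is given negative parallax so the frame appears to hang in front of the screen. The toy Python sketch below is our own illustration of that standard stereoscopic geometry – nothing in it comes from Beowulf, Meet the Robinsons, or Dark Country – and Jane’s breakable letterbox applies a related matting idea to the top and bottom edges.

# Toy illustration of a floating stereo window on one 40-pixel
# scanline. '#' marks picture, '.' marks black matte. To float the
# window toward the audience we give the surround crossed (negative)
# parallax: widen the matte on the LEFT edge of the left-eye image
# and on the RIGHT edge of the right-eye image.
WIDTH = 40
SHIFT = 4  # window parallax in pixels; more shift floats it farther out

left_eye  = ["#"] * WIDTH
right_eye = ["#"] * WIDTH
for x in range(SHIFT):
    left_eye[x] = "."               # matte the left edge, left eye only
    right_eye[WIDTH - 1 - x] = "."  # matte the right edge, right eye only

print("L: " + "".join(left_eye))
print("R: " + "".join(right_eye))
# Fused by the viewer, the offset edges read as a single surround
# floating in front of the screen, extending the usable depth range.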

Lipton: The digital cinema turns out to be a wonderful opportunity.

Jane: These are the advantages of the digital cinema. It’s fantastic.

Lipton: It’s the opportunity for stereo, because without it we wouldn’t have it – not in capture, not in content creation, not in projection.

Jane: In film, if you were projecting in 4x3 you’d have to get that little window and put it in front of your projector, and that was it. That would give you your solid lines.

Lipton: For a technical reason, you don’t need masking on the screen with digital projection, because you have a hard edge. So digital projection screens can be larger than the projected frame, and you still get a very hard edge, if that’s what you want. But you have to mask or crop film projection in order to get a hard edge.

Jane: So for the first time we’re able to use Eisenstein’s dynamic frame.

Lipton: This is pretty far out. You don’t want to be too far out, because they’ll kill you.

Jane: That’s right. You want to use these tools, and not get caught.

Lipton: I was thinking of the business people.

Jane: Believe me, I tried. What’s funny is, when I screened my rough cut of the film I used the dynamic frame, and no one noticed. That’s when I knew it was effective. No one noticed. I had to point it out to them, and then they told me I couldn’t do it.


Lipton: The movie business is a tension between artists and business people. If the artists weren’t allowed to be as free as they are, people wouldn’t come to the movies, because the movies would be so dull. So subject matter and techniques are always on the edge, and the business people try to pull it in. It’s a very interesting business. I was a filmmaker, and then I got interested in 3D and started making 3D systems that were sold mostly to engineers and scientists and people like that. Now for the past four years I’ve been back in the movie business – which is much more fun. This is a great business.

Jane: Yeah. There’s always a tension that exists between the technical side, the business side, and the artistic side. And I think that tension is what allows us to create such great films.

Lipton: I must tell you, your thinking is very advanced. This is not necessarily the kind of thing that would advance a career. Even if you were a very well established director, you would have trouble promoting these ideas and getting approval from a studio because they are… Someday I think they’ll be taken for granted.

Jane: Absolutely. This stuff will someday be taken for granted. And probably in the very near future. There’s no reason why we couldn’t do it now, other than ignorance. And just the status quo, you know? This will change.

Lipton: Tom, this has really been fun. This has been a great conversation.

http://www.veritasetvisus.com

We strive to supply truth (Veritas) and vision (Visus) to the display industry. We do this by providing readers with pertinent, timely and exceptionally affordable information about the technologies, standards, products, companies and development trends in the industry. Our five flagship newsletters cover:

• 3D displays • Display standards • High resolution displays • Touch screens • Flexible displays

If you like this newsletter, we’re confident you’ll also find our other newsletters to be similarly filled with timely and useful information.
