3rd Dimension Veritas et Visus August 2009 Vol 4 No 7/8

Avatar • NASA • Shapeways • LBO

Letter from the publisher: Final Destination… by Mark Fihn

News from around the world

Conference summaries

SIGGRAPH 2009, August 3-7, 2009
Web3D Symposium 2009, June 16-17, 2009
Projection Summit, June 15-16, 2009
SID Display Week, June 2-5, 2009
FINETECH Japan, April 15-17, 2009
Electronic Displays, March 4-5, 2009
SD&A Conference, January 19-21, 2009

SpaceSpex anaglyph by Michael Starks

Is Anaglyph in Our 3D Future? by Chris Chinnock

Hologlyphics: a creative system for autostereoscopic movies by Walter Funk

The 3D Interface by Fluppeteer

The game developers have spoken, are you in? by Neil Schneider

3D game review: Trine by Eli Fihn

Looking ahead at a 3D cinema and TV future with Avatar by Alexander Lentjes

Increasing frame rates of liquid crystal displays by Adrian Travis

Showing 3D at trade shows and conferences by Bernard Mendiburu

Perceptual Paradoxes by Ray Zone

The Truth about 3D TV: Questions… by Lenny Lipton

Last Word: Half a century of stereoscopic viewing… by Mike Cook

Calendar of events

The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief: Mark Fihn [email protected]
Managing Editor: Phillip Hill [email protected]
Contributors: Chris Chinnock, Mike Cook, Eli Fihn, Fluppeteer, Walter Funk, Alexander Lentjes, Lenny Lipton, Bernard Mendiburu, Neil Schneider, Michael Starks, Adrian Travis, George Walsh, and Ray Zone

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate based on location and mailing method. Copyright 2009 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.

Final Destination…

by Mark Fihn

I learned the other day that the Cineplex in my home town of Temple, Texas now has 4 screens that are 3D-capable, so I no longer need to drive to Austin or Dallas to catch the latest 3D flick.

But this past weekend, I decided to drive to Austin anyway, so that I could check out the first-ever 3D film to be combined with D-BOX Motion Code. My 15-year-old daughter made the trip with me; she is a fan of horror films and was hoping for a good show.

Well, Final Destination is certainly not the worst film ever made, but it’s reasonably certain that no Academy Award nominations will be coming its way… Although the film was hyped as a horror film, the audience was laughing as much as cringing.

The D-BOX hype went: “This was the first time D-BOX's immersive technology was combined with 3D technology, offering moviegoers the opportunity to enjoy an unparalleled cinematic experience where they can watch the action in three dimensions and live the extraordinary sensations of onscreen action. D-BOX's motion-enhanced seating provides an unmatched, realistic and immersive experience, utilizing refined subtle motion effects that work in perfect sync with the onscreen action”. Well, I was quite impressed by the D-BOX technology – the movement in the seats was well-linked to the action on the screen – accentuating the immersive feel of the movie and punctuating the thrills of the 3D experience. Both my daughter and I came away impressed.

Twenty of the seats in the theater were equipped with the D-BOX Motion Code, requiring a special reservation. The price was a bit of a gulp. Regular prime-time prices for 2D movies at this theater are $8.75. The 3D films generally add a $2.00 premium, bumping the price to $10.75. The D-BOX seats were $19.75. Although I enjoyed the sensation, I probably won’t spring for that sort of premium too often in the future. That said, all of the D-BOX seats at this showing were sold – while perhaps 50% of the remaining seats were left empty.

Although I personally rank this film as a “horrible film” – and not as a “horror film” – I must admit that the 3D effects were spectacular. No subtlety in this one – blood and guts and sharp objects were constantly leaping out of the screen, but it worked. I walked away with no feeling of nausea and didn’t notice many horrible 3D visual violations during the showing. So from that perspective, the movie was well done.

One of the fun parts of the film is that one of the main scenes took place at a movie theater – which was showing a 3D horror film. It is an interesting perspective to actually be sitting in a 3D theater while the film’s action is set in a 3D theater…


Amazingly, in its first weekend, early estimates from the Hollywood.com Box Office projected that Final Destination bagged $28.3 million for a #1 berth. Although just over half of its 3,121 theaters were screening the flick in 3D, fully 70 percent of its box office take was from 3D theaters.

The best part of attending Final Destination was that one of the 3D previews was for Avatar – the much-ballyhooed film by James Cameron that will hit theaters in mid-December. The preview was captivating – obviously a very imaginative movie with some amazing effects. And the 3D was simply stunning.

On Friday, August 21, James Cameron and Fox debuted a 15-minute 3D trailer of Avatar in select IMAX theaters in the US and select international 3D digital and IMAX cinemas. Even for only a 15-minute glimpse, tickets were hard to come by – reservations for all showings were completely booked. A “teaser trailer” of Avatar was posted on iTunes on Thursday, August 20, and by Sunday, Fox announced that the trailer had become iTunes' most-viewed trailer ever. The teaser registered over 4 million streams in its first day on the site (more than doubling the previous record of 1.7 million).

Many are writing that Avatar will be the make-or-break movie for the fledgling 3D market. It’s easy to imagine that, if the movie is hugely successful, it will accelerate 3D in the movie theatre, and perhaps even accelerate it into the home. If the movie is a dud, I’m not sure that it will actually be that big a detriment to the future of 3D. The success points have already been demonstrated at theatre after theatre – but I’m ready for a real blockbuster to showcase the 3D format(s) and am eager to see the show.


No matter how Avatar ends up doing at the box office, I am quite comfortable predicting that it will fare better than Final Destination. The preview of Avatar unquestionably was the highlight of my Final Destination experience…

While I was at the Austin theater to see Final Destination, one of the previews promoted live showings of Monday Night Football. Although not in 3D, such ads are clearly leading people to gain comfort with the idea of event showings at the movie theater. I’ve now seen two live basketball games in 3D, as well as two pre-recorded concert events, and I am a strong believer that such events will ensure the future of 3D in the theater venue.

Although Monday Night Football (at least for the near term) is not scheduled to be shown in 3D, the marketing is clearly shifting people’s mindsets in that direction…

I should also mention that I continue to believe that the early entrée for 3D into the home will be via the gaming market – with TV following later. And Avatar will no doubt help spark that move, since Ubisoft will be introducing an enhanced 3D game version of the movie.

To help popularize 3D gaming, a bevy of companies recently banded together to form the Stereoscopic 3D (S-3D) Gaming Standards & Advocacy Group. S3DGA’s founding companies include Blitz Games Studios, DDD, iZ3D, Jon Peddie Research, TDVision Corp, and XpanD. The President and CEO of S3DGA will be Neil Schneider. You can read Neil’s article later in this newsletter, which more fully describes the activities of the new advocacy group.

An introductory S3DGA meeting will be held during the 3D Entertainment Summit. This conference is considered the largest of its type, and features industry-leading names including James Cameron, Jeffrey Katzenberg, and many more.

3D news from around the world, compiled by Mark Fihn and Phillip Hill

InFocus DepthQ HD 3D projectors used in VR room at the University of Louisiana Carolina Cruz-Neira, professor of electrical and computer engineering and co-inventor of the CAVE environment, has incorporated three InFocus DepthQ HD 3D projectors into her design of a new VR immersion room that she and her team of researchers built at the University of Louisiana at Lafayette for a collaboration with the Human Research and Engineering Directorate of the US Army Research Lab. The InFocus DepthQ HD 3D projectors create stereoscopic HD 3D images on the walls of the three-sided VR immersion room while an omni-directional treadmill, which serves as the floor, allows research subjects to travel non-stop in any direction through the virtual reality 3D world. The system was built to create software tools for studying the cognitive load on the dismounted (on-foot) soldier in highly stressful, rapidly changing scenarios. The goal is to determine what kind and how much information ground troops take in, what distracts them, and how they choose which pieces to pay attention to in life-and-death situations. Dr. Cruz-Neira and her team developed the software architecture to create the scenarios in which the soldiers are immersed, as well as to collect data from the soldiers through biosensors. http://create.louisiana.edu http://www.infocus.com

Connecticut Science Center delivers educational content in Dolby 3D Dolby announced that the Connecticut Science Center selected Dolby 3D and surround sound for its Maximilian E. and Marion O. Hoffman Foundation 3D Science Theater. The science center opened to the public on June 12, 2009. In alignment with the science center’s eco-initiative, the Dolby 3D system uses high-performance, eco-friendly passive glasses that require no batteries or charging and can be reused repeatedly, reducing the waste associated with a disposable glasses model. The Connecticut Science Center selected the Dolby 3D system not only to support its eco-initiative but also to deliver a premium viewing experience to its patrons. Dolby’s 3D solution uses a unique full-spectrum color-filter technology licensed from Infitec that provides realistic color reproduction and an extremely sharp image. The Dolby 3D system supports both 2D and 3D presentations to give the Connecticut Science Center flexibility in delivering a wide range of content. This works by retracting the full-spectrum color filter wheel out of the light path for 2D and moving it into the light path for 3D in the projector. Applying the Dolby 3D filter to the light before the image is formed delivers stable and sharp images without modulation of the actual image. As a result, there is no degradation to the picture, presenting a premium image to every visitor in the 200-seat stadium-seating theater. Dolby 3D is incorporated into an NEC STARUS NC2500S digital projector, featuring Texas Instruments DLP digital cinema technology delivering crystal clear images onto a 30x40-foot screen. http://www.dolby.com/ctsc


PureDepth rolls out “multi-layer type” 3D displays PureDepth Inc of the US announced that it established a Japanese unit at the end of April 2009 and explained the characteristics and commercialization schedule of a display based on its “MLD (multi-layer display)” technology and its potential markets. The MLD technology can realize a variety of image displays that are not possible with existing LCDs. By stacking two LCD panels with a certain space between them, the technology can enhance color and contrast and enable users to view 3D images of characters, pictures, etc. without special glasses. The 3D images are realized by displaying images that are identical except for brightness and size on the two panels. The exact reason why images look three-dimensional with this method is unclear. Nevertheless, this display has the following advantages compared with existing naked-eye 3D displays that use a lenticular lens: viewers do not experience the side effects common to 3D images, such as dizziness, headache and eyestrain; resolution is not compromised even when images are displayed in 3D; 2D images of, for example, characters can be displayed in combination with and at the same time as 3D images; and there is no distinct border to the viewable angle (the angle at which images appear in 3D). The technology has already been commercialized. For example, PureDepth signed a license contract with Sanyo Electric Co Ltd in 2006, and Sanyo Electric System Solutions Co Ltd, a subsidiary of Sanyo Electric, manufactured panels using the MLD technology based on this contract. The panels were adopted by companies including Pachislo (gambling machine) maker Abilit Corp, which used the technology in Pachislo machines the company released in November 2008. PureDepth also went into partnership with major US-based slot machine manufacturer International Game Technology (IGT) in 2006 and sold 20.1-inch displays that can be used in slot machines to casinos in Las Vegas and other facilities in 2008, according to PureDepth. Moreover, the company will start volume production of 12.1-inch displays in Korea. The production was contracted to Kortek Corp of Korea, which also manufactures the 20.1-inch display. http://www.puredepth.com

The 20.1-inch multi-layer display manufactured by IGT. The rotating parts of the slot machine appear in 3D. On the right is the 20.1-inch product (front) and the 12.1-inch prototype (back) to be mass-produced in Korea
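PureDepth has not published its rendering method, but the behavior described above (two stacked panels showing images that differ mainly in brightness) matches the well-documented “depth-fused 3D” principle, in which each pixel’s luminance is divided between the front and rear panels according to its intended depth. A minimal Python sketch of that principle, illustrative only and not PureDepth’s actual algorithm:

    import numpy as np

    def split_layers(image, depth):
        """Split one luminance image across two stacked LCD panels.

        image: HxW array of luminance values in [0, 1]
        depth: HxW array, 0.0 = front panel plane, 1.0 = rear panel plane
        Fused by the eye, each pixel appears at an intermediate depth set
        by the luminance ratio between the panels (depth-fused 3D).
        """
        front = image * (1.0 - depth)  # near content weighted toward the front panel
        rear = image * depth           # far content weighted toward the rear panel
        return front, rear

    # Toy usage: a bright square intended to float midway between the panels
    img = np.zeros((240, 320))
    img[80:160, 120:200] = 1.0
    front, rear = split_layers(img, np.full_like(img, 0.5))

A production system would presumably also scale the rear image slightly, as the article’s mention of a size difference suggests, to compensate for the perspective difference between the two panel planes.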

Disney Interactive Studios pioneers 3D consoles with G-Force and Toy Story Mania! video games Disney Interactive Studios introduced stereoscopic 3D features in two of its premier games this summer. G-Force and Toy Story Mania! both feature innovative experiences using 3D technology and are the first games for next-generation systems to fully feature three-dimensional stereoscopic technology. G-Force and Toy Story Mania! allow the player to turn on special 3D features in the games while wearing the enclosed 3D glasses. Using the anaglyphic 3D glasses, players see the action extend forward from their television screens in a three-dimensional perspective while playing the games. Two sets of 3D glasses are enclosed in each package. Both standard 3D graphics and the stereoscopic 3D experience are offered in the games, and the player can switch between them. http://www.disneyinteractivestudios.com


HoloVizio 3D display receives LG projector base The latest version of the HoloVizio 3D display provides 3D images for multiple viewers without special glasses or head tracking by incorporating several LG projectors. The 3D display technology – developed and patented by Holografika Ltd. of Hungary – in its latest form uses LG’s HS101 LED projectors. Developed by redesigning LG Electronics’ projector, the HoloVizio 3D display creates a holographic movie medium, the equivalent of a 110-cm screen, in which 3D objects and scenes appear behind and in front of the screen. Further features include an RGB input, a 2000:1 contrast ratio, and an 800x600 resolution. http://www.holografika.com

TrueVision adds 3D flat panel displays to its visualization platform for microsurgery TrueVision Systems announced its new 3D 1080p flat panel product configurations. For the first time, operating rooms and exam rooms will have 3D 1080p flat panel LCDs to display live and recorded content of the surgical view from a microscope or slit lamp. Different configurations include wall-mounted and portable ergonomic cart solutions with a selection of high-quality LCD 3D 1080p panels ranging from 24 to 46 inches. One or more displays can be mounted on the wall, from the ceiling, or on a boom arm for operating rooms, exam rooms, and offices. TrueVision is an intelligent 3D visualization system for microsurgery that enables surgeons to perform or view surgery via a heads-up display instead of looking through the microscope. It features the ability to record, edit and play back 3D 1080p operative content. The 3D video playback enables viewers to see surgery as if they were performing the surgery themselves through the microscope. The system is designed to seamlessly bring patient images and data from the exam room into the operating room. All systems include the patented TrueVision 3D Image Capture Module that attaches to the surgical microscope or slit lamp, an Image Processing and Recording Unit, and a choice of display from 24 to 46 inches on a cart or wall/ceiling mounted. Prices start at $49,900. The company also sells custom 3D Playback Systems for uses outside the operating room. The TrueVision Surgical System is registered with the FDA as a class one medical device. http://www.truevisionsys.com

RealView Innovations adds 3D capability to PSP RealView Innovations Ltd. is to launch the V-Screen in the fourth quarter of this year, and the peripheral promises to turn PSP systems into 3D viewing systems. The innovation uses special optical components in unconventional ways, according to RVI. No software, electronics or headgear is required. “The video game industry is a perfect fit for our technology. The V-Screen offers consumers a tremendous leap forward in optical enhancement,” said the company. Details will not be available until September. http://www.realviewinnovations.com


Lightspeed releases the new DepthQ family of stereoscopic HD video software Lightspeed Design announced the release of its new DepthQ family of stereoscopic software for state-of-the-art capture and playback of HD 3D media. DepthQ HD 3D software has been successfully serving hundreds of professional clients in corporate, entertainment, medical and industrial applications since 2002. That software has now evolved into the new DepthQ family of stereoscopic software: DepthQCapture and DepthQPlayer. DepthQ is the only 3D software package capable of low-latency (66ms), real-time monitoring with simultaneous capture, highly efficient GPU image processing and smooth, high-resolution media playback. Lightspeed’s experience in 3D display technology, combined with their understanding of stereography and image processing, led them to develop their own toolsets to capture, process, configure, and play back HD 3D imagery with greater flexibility, efficiency, and accuracy. DepthQCapture is a powerful software solution for the precise capture, recording and monitoring of 3D video from two simultaneous camera inputs (analog, HD-SDI, HDMI, FireWire or USB) at dual HD resolution. Features include low-latency, real-time monitoring, a camera alignment aid and visual overlays to assist in optimizing 3D effects for various target playback screens. DepthQCapture captures the two independent camera video sources, concatenates them as an above/below image to preserve sync, processes the result – applying any scaling and compression required – and then serves the final stereo data as a single data stream to any 3D (or 2D) monitoring device at the required resolution, frame rate, and standard for that display. DepthQPlayer is a feature-packed software solution for the high-quality playback of stereo 3D movies from a standard PC. Designed from the ground up as a professional product, DepthQPlayer combines efficient code architecture and superior throughput for high-bandwidth playback of either locally stored or URL-accessible stereoscopic movies, as well as 3D streaming IP video (MPEG2-TS RTP/UDP). DepthQPlayer allows cost-effective viewing of high-resolution stereoscopic media in a wide variety of environments – from remote 3D viewing stations, operating rooms and conference rooms to 3D cinemas, museum exhibits, portable theaters, motion simulators, corporate events and trade shows. http://www.lightspeeddesign.com
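The above/below packing described here is simple to illustrate: carrying both eyes inside a single frame means that any downstream scaling, compression or dropped frame affects both views identically, so the pair cannot drift out of sync. A minimal sketch in Python with NumPy (function names are illustrative, not Lightspeed’s API):

    import numpy as np

    def pack_above_below(left, right):
        """Pack a left/right frame pair into one above/below frame,
        left eye on top and right eye below."""
        assert left.shape == right.shape, "camera frames must match"
        return np.vstack((left, right))

    def unpack_above_below(frame):
        """Split an above/below frame back into (left, right) views."""
        h = frame.shape[0] // 2
        return frame[:h], frame[h:]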

Lightspeed Design and LC-Tec Displays sign agreement Lightspeed Design of the US and LC-Tec Displays of Sweden announced the signing of a new agreement that grants Lightspeed the exclusive rights to market and sell the high speed liquid crystal rotator devices manufactured by LC-Tec Displays AB for stereoscopic 3D visualization products. LC-Tec Displays AB has developed a new range of polarization rotators capable of switching the polarization of a projection-image at ultra-high speeds of 400Hz. The polarization rotator can be used together with a high-speed projector and polarization-preserving projection-screen in order to create high quality stereoscopic images using either linear or circular polarized passive viewing-glasses. Products based on the Polarization Rotator will be commercially available from August 2009. http://www.lightspeeddesign.com http://www.lctecdisplays.com

YouTube experimenting with 3D video YouTube has begun experimenting with 3D video, according to a report in the blog Search Engine Roundtable. The report is based on a posting in the YouTube Help forum, in which a YouTube engineer who goes by the moniker YouTube Pete writes that he is working on a “stereoscopic player as a 20% project” (Google engineers are encouraged to spend 20% of their work week on projects that interest them, a number of which have subsequently been launched as commercial products). The player allows viewers to choose between 10 different 3D viewing modes (e.g., Red/Cyan Glasses: Full Color, Red/Cyan Glasses: Optimized (Dubois), Red/Cyan Glasses: B&W, Amber/Blue Glasses: Full Color, etc.), as well as Left Image Only and Right Image Only modes.
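The two simplest of these anaglyph modes are easy to sketch: the “Full Color” red/cyan mode takes the red channel from the left image and green/blue from the right, while the “B&W” mode reduces each eye to luma first, avoiding the retinal rivalry caused by strongly saturated colors. (The Dubois mode instead applies least-squares-optimized color matrices, omitted here.) An illustrative Python sketch, not YouTube’s implementation, assuming float RGB arrays in [0, 1]:

    import numpy as np

    def anaglyph_full_color(left, right):
        """Red/Cyan Full Color: red from the left eye, green and blue
        from the right eye (assumes RGB channel order)."""
        out = right.copy()
        out[..., 0] = left[..., 0]
        return out

    def anaglyph_bw(left, right):
        """Red/Cyan B&W: convert each eye to luma before mixing."""
        weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma
        yl, yr = left @ weights, right @ weights
        out = np.empty(left.shape)
        out[..., 0] = yl                 # red channel from the left luma
        out[..., 1] = out[..., 2] = yr   # green and blue from the right luma
        return out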


Sky to switch on 3D TV channel in 2010 Sky in the UK is to offer a broad selection of 3D content, including movies, entertainment and sport. All content will be captured using HD cameras and broadcast over the firm’s existing HD infrastructure, making use of existing Sky+ HD set-top boxes. To see a 3D image, one must have a 3D-ready TV and wear polarizing glasses. There are currently no 3D TVs for sale in the UK, but sets from manufacturers should be available by launch. Sky said that the resolution will sit somewhere between SD and 1080p. The resolution depends on Sky’s use of its current HD infrastructure, which provides a broadcast rate for content of up to 18Mb/s. Because a 3D picture requires two images, Sky must broadcast both images simultaneously. As a result, the resolution at which an event is captured in 3D must be downgraded for it to be successfully broadcast.
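For a rough sense of the trade-off, assuming a frame-compatible packing (which Sky has not specified): carrying two views side by side in a single 1920x1080 frame leaves 1920/2 x 1080 = 960x1080 pixels per eye, which indeed falls between SD (720x576) and full 1080p.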

Channel 4 in the UK plans 3D broadcasts Following on from the news that Sky is planning 3D broadcasting, reported elsewhere in this issue, Channel 4 in the UK also has broadcast plans for a week-long series of 3D TV programs. Several movies and a Derren Brown magic hour (3D Magic Spectacular) have been slated for a 3D makeover. A rundown of the best-ever 3D moments will also be shown. Channel 4 has also obtained some rare footage of the Queen from around the time of her 1953 coronation, which the channel is planning to broadcast in 3D. The channel has opted to use Danish ColorCode technology for the broadcasts, requiring viewers to wear special glasses with amber and blue filter lenses. The amber filter relays color information and the blue filter helps create the 3D effect. The glasses will be given away through supermarket chain Sainsbury’s, with the first 3D programs set to be shown this autumn.

ESPN to provide 3D for Ohio State/USC match ESPN announced that it will use the Ohio State/USC game to test out its 3D production capabilities, creating a separate 3D telecast in addition to the 2D high- and standard-definition broadcasts. The 3D HD program will be shown in five locations, to invited audiences only. ESPN will use seven cameras for the 3D production and will place them as close to the field as possible. One 3D camera will be placed in an end zone to capture head-on shots at the goal line. ESPN is touting this broadcast as the first use of true stereoscopic graphics in a 3D telecast. http://espn.go.com

Panasonic collaborates with James Cameron to promote 3D TV Panasonic has signed a deal with James Cameron to use his film “Avatar” to promote its 3D HDTVs. According to the agreement, Panasonic will become the exclusive A/V partner of the film. Panasonic gets to promote its 3D TV technology for the home, while also promoting the film for Twentieth Century Fox Film Corp. and Lightstorm Entertainment. Panasonic has also provided some of its A/V gear to assist in the creation of the movie, which is set to debut on December 18 in both 2D and 3D formats. Panasonic plans to launch a global advertising campaign solely tied to “Avatar” and promoting its new HD 3D TVs. In the US, Panasonic will embark on a “truck tour” that features multiple customized tractor trailers outfitted with Panasonic’s 103-inch plasma displays and Blu-ray players. In Europe, the company will erect HD 3D theatres within shopping malls and other locations by the fall. Japan will see a series of TV commercials surrounding both the technology and the movie.

Mitsubishi starts 3D TV promotion with video game publication Mitsubishi Digital Electronics America (MDEA) launched a promotion of its 3D-ready home theater series TVs with Internet-based video game publication IGN.com. The promotion showcases the capabilities of 3D gaming and movies, and includes a sweepstakes for a chance to win a complete Mitsubishi 3D home entertainment package, including a 65-inch 3D-ready home theater TV, Aspen media server, graphics card, an emitter and two pairs of active-shutter 3D glasses. Mitsubishi estimates that about 400 PC games will be converted to 3D, including such titles as World of Warcraft; Spore; Warhammer Online: Age of Reckoning; and Batman: Arkham Asylum. Hollywood movie titles produced in 3D to date include “Journey to the Center of the Earth”, “Ice Age: Dawn of the Dinosaurs”, and “Avatar” (coming December 18). The IGN.com 3D entertainment promotion with MDEA will run from Aug. 28 to Sept. 30, and is open to all US residents. http://www.mitsubishi-tv.com


3D movie release schedule Here is a list of 3D movies scheduled for release through 2012. The release dates are subject to change and some of the films do not have specific release dates.

2009
Sep 18 Cloudy with a Chance of Meatballs
Oct 02 Toy Story and Toy Story 2
Oct 23 The Nightmare Before Christmas
Oct 23 Astroboy
Nov 06 A Christmas Carol
Nov 20 Planet 51
Dec 18 Avatar
The Dark Country

2010
Jan 15 Hoodwinked 2: Hood vs. Evil
Feb 08 Disney's Beauty and the Beast
Mar 05 Tim Burton's Alice in Wonderland
Mar 26 How to Train Your Dragon
Apr 16 Piranha 3D
May 21 Shrek Forever After
Jun 18 Toy Story 3
Jul 09 Despicable Me
Jul 30 Cats and Dogs 2
Aug 06 Step Up 3D
Sep 24 Guardians of Ga'Hoole
Oct 01 Alpha and Omega
Oct Saw VII
Nov 05 Oobermind
Dec Disney's Rapunzel
Dec 17 The Smurfs 3D
IMAX Hubble 3D
Joe Dante's The Hole
Alex Winter's The Gate

2011
Jan Underworld 4
Jan Blue Man Group 3D
Apr 08 Rio
Jun 03 Kung Fu Panda - The Kaboom of Doom
Jun 24 Pixar's Cars 2
Nov 04 The Guardians of Childhood
Nov 18 Happy Feet 2 in 3D
Dec Pixar's The Bear and the Bow
Dec 23 The Adventures of Tintin
Journey to the Center of the Earth 2
Captain Nemo
Flanimals
Frankenweenie

2012
Mar 03 …
Mar 27 Madagascar 3 in 3D
Jun Pixar's Newt
Nov 12 Boo U
Disney's King of the Elves
Dr. Seuss: The Lorax

Mechdyne announces new release of Conduit software for advanced real-time visualization Mechdyne Corporation announced the release of Conduit v2.6 of its real-time graphics distribution software that seamlessly displays desktop applications in 3D stereo and virtual reality environments. Compatible with popular engineering, design and geoscience applications such as CATIA, Pro/ENGINEER, NX, 3ds Max, Maya, ESRI, Schlumberger Petrel and Google Earth, Conduit enables users to view and interact with application data with no time-consuming data conversion and no resolution scaling limits. It drives graphics through PC clusters to support any advanced visualization environment, including immersive displays such as the CAVE, PowerWall, and ultra-high-resolution tiled projection environments. The new release of Conduit allows users to navigate and interact with models much faster than before. Depending upon the particular data model and application, the new version of Conduit achieves a 2x to 10x improvement in performance compared to its previous version. When working with large datasets such as aerospace, automotive and large terrain models in immersive displays, interaction and analysis become more natural. Data review in advanced visual environments can provide new insights, foster collaboration and generate better decisions. The enhanced performance of Conduit v2.6 can accelerate and improve engineering data review sessions, shortening design cycles in ways that add up to cost savings and faster time to market. http://www.mechdyne.com


Sorenson Media announces beta kit for media players on Internet video publishing platform Sorenson Media announced the availability of the beta version of the Sorenson 360 Software Developers Kit (SDK), which enables customers of the Sorenson 360 Video Delivery Network to use custom media players with the video publishing platform. With Sorenson 360, customers now have the choice of using their own proprietary media players, third-party players, or the built-in customizable Flash media player that Sorenson 360 provides. The Sorenson 360 SDK component is free to customers and developers as a beta release and can be integrated with existing media players by adding only three lines of code. For customers who do not have their own software developers, Sorenson Media offers a professional services department to provide the necessary expertise. Sorenson 360 users, including those who use their own media players, benefit from the platform’s client-side encoding, which gives the user full control of video quality and saves time by encoding before upload, providing a smooth, professional workflow experience. Once uploaded, users can employ the full spectrum of video content management features, such as detailed video analytics and comprehensive password protection to manage access to videos. http://www.sorensonmedia.com/flash-player-sdk

StudioGPU reveals next-generation 3D workflow and rendering features StudioGPU showcased the next-generation features and performance benefits of MachStudio Pro on the AMD ATI FirePro V8750 3D workstation graphics accelerator. MachStudio Pro provides a seamless way to create and interact with cinematic 3D objects and environments in a non-linear workspace, leveraging the horsepower of off-the-shelf professional graphics processing units (GPUs) to deliver real-time and near real-time workflow performance on a desktop workstation. The MachStudio Pro workflow is akin to a virtual 3D real-time studio environment, allowing artists, designers, engineers, directors, and technical directors (TDs) to work with lighting, camera views and multi-point perspectives for a real-time view of frames as they will appear in the final rendered format. Users can easily manage and interact with complex lighting, caustics, cameras, and materials for real-time shot finalizing and compositing. http://www.studiogpu.com

The Embassy uses Luxology “modo” to develop alien weaponry for Sony Pictures The modeling tools in “modo” played an essential role for artists at Vancouver-based The Embassy as they developed alien characters, weaponry and objects for the feature film “District 9”. As they worked on a buried alien ship, an alien pet, a missile launcher and an alien exo-suit for the film, visual effects artists relied heavily on modo’s advanced UV mapping tools, 3D painting and texturing capabilities. Some of their work involved models already created by WETA Digital, while other tasks involved ground-up modeling. Among the other modeling projects completed in modo was the creation of a base mesh for the alien pet, which is featured in cockfighting scenes. Designers began with a scan of a physical sculpture of the character and created a displacement map that was put onto a cage made in modo. They then used modo’s 3D paint tools to apply colors for baking in 3D. As The Embassy’s main modeling application, modo integrates seamlessly into a pipeline that includes Autodesk Softimage XSI, mental images’ mental ray, Pixologic ZBrush and Apple Shake. For District 9, the artists used LWOs to go back and forth between applications. No matter what changes were done further down the pipeline, all the modeling, texturing and look development work done in modo remained intact because of modo’s consistent stability. http://www.luxology.com http://www.theembassyvfx.com

Patent covering SENSIO 3D technology is issued by the USPTO SENSIO Technologies, inventor of the SENSIO 3D technology, announced that it has obtained its US patent number (#7,580,463). This is the final step confirming that SENSIO’s technology is now patented in the American market. Following the delays encountered with regard to issuance of the patent, the United States Patent and Trademark Office (USPTO) is granting an extension of 600 days, so that the patent will be valid for a longer period. The patent is valid from the date of filing of the patent application, in 2003, and will remain in force for twenty years from that date, not including the extension period. The patent obtained by SENSIO covers more than just the SENSIO 3D technology. It gives SENSIO exclusive rights over its whole method of compression, decompression, formatting, and playback of 3D content for various 2D and 3D screens, and applies to the markets for home theater, professional movie theaters, personal computing and mobile telephony. http://www.sensio.tv


Sensio expands its 3D live network for digital cinema in Europe Sensio Technologies announced the continued expansion of its 3D live roll-out in high definition for digital cinemas throughout Europe. The announcement was made following a flawless 3D live satellite broadcast of the Julien Clerc concert on July 16th to four locations across France. Julien Clerc is a leading French entertainer with a broad following throughout France and Europe. The concert was broadcast live to multiple digital theatres in high-definition 3D, with high-quality, crisp images. The event was broadcast by OpenSky, which purchased and installed International Datacasting Corporation’s Pro Cinema 3D live decoders equipped with Sensio 3D technology. This equipment is an add-on to the existing digital cinema broadcast network previously provided to them by IDC. As part of their digital cinema service offering, OpenSky has formed the 3D Stereoscopic group (3DSG), a partnership with dBW Communication and Eutelsat for the end-to-end production and distribution of 3D live events across Europe. http://www.sensio.tv

Sensio signs a licensing agreement with Hyundai IT Sensio Technologies announced that it has signed a licensing agreement with manufacturer Hyundai IT Corporation for integration of Sensio’s technology into Hyundai IT’s LCD HD 3D televisions, developed for the consumer electronics market. Consumers will be able to obtain televisions that integrate Sensio 3D decoding technology in the fall of 2009. Hyundai IT plans to market this television model in the American, European and Asian markets. http://www.hyundaiit.com http://www.sensio.tv

Sonic Solutions takes part in 3D “Over-the-Top” initiative Sonic Solutions announced that it is embarking on a 3D initiative, which it says will see it leveraging its various digital media applications, technologies and services, including Roxio CinemaNow, to provide consumers with a convenient way to access 3D movies at home. According to the company, its solution for the digital delivery of 3D entertainment includes professional tools for preparing content for electronic distribution and sell-through, services for delivery of device-optimized 3D content, and software for 3D playback. Sonic says that it is working with a number of prominent companies in the 3D space to ensure that 3D content it delivers can be accessed across a range of 3D-capable devices. In order to ensure what it promises will be a “stunning visual experience for PC users”, the company is working with Nvidia to optimize 3D movies from Roxio CinemaNow for Nvidia’s GeForce graphics processing units (GPUs); it says that the Roxio CinemaNow player will support Nvidia 3D Vision glasses and 3D Vision-Ready displays, including the Samsung SyncMaster 2233RZ and ViewSonic FuHzion VX2265wm displays, which are targeted at the 3D gaming market. According to Sonic, 3D movies from Roxio CinemaNow will be delivered via the PC, which will carry out the necessary decoding in order to display them both on 3D-ready PC displays and 3D-ready digital TVs and projectors. Unlike older anaglyph solutions (i.e. colored glasses), the company says, this solution will provide a “true and compelling” 3D experience. Sonic says that it is also collaborating with consumer electronics manufacturers to deliver 3D content directly to next-generation connected HDTVs and Blu-ray players. http://www.sonic.com

ArcSoft TotalMedia Theatre 3 now shipping in Aspen 3D media servers ArcSoft announced its collaboration with SENSIO to ship TotalMedia Theatre 3 in Aspen 3D media servers GL-1000 and GL-3159, allowing 3D movies to be watched at home. By incorporating the SENSIO S3D Decoder, ArcSoft TotalMedia Theatre is now equipped with 3D stereoscopic playback. ArcSoft TotalMedia Theatre can now decode and display 3D-encoded DVD movies and streaming files, and can support a wide range of 3D stereoscopic display modes such as Pageflip, Interlace, Anaglyph, and Checkerboard. Aspen’s 3D media servers, GL-3159 and GL-1000, come with a component-style case and dual-core processing with Nvidia (GL-3159) and ATI (GL-1000) support. They allow users to store and view digital photos, burn and listen to music, record and watch TV shows, and download and watch movies. Moreover, the servers come with two pairs of XpanD 3D glasses for viewing 3D PC games and movies. http://www.arcsoft.com/intouch/tmt3dpr http://www.sensio.tv/en/default.3d


Japanese researchers develop holographic GPU that renders at near real-time speeds Researchers from Chiba University, Kisarazu National College of Technology, and Tokyo University of Technology in Japan have developed the HORN-6 special-purpose computer for electroholography. The HORN-6 board handles an object image composed of one million points, and the researchers constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, they succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and a computer-generated hologram of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of a computer-generated hologram is 1920x1080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU. The work was reported in a recent paper in Optics Express. http://www.opticsinfobase.org

Optical system for electroholography used in the study
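Published HORN work computes Fresnel point-source holograms: every hologram pixel sums a spherical wavefront from every object point, an O(points x pixels) computation. At 100,000 points and 1920x1080 pixels that is roughly 2x10^11 evaluations per frame, which is why special-purpose hardware is attractive. A naive software sketch of the kernel (illustrative only, not the HORN-6 implementation):

    import numpy as np

    def point_source_hologram(points, amps, width=1920, height=1080,
                              pitch=8e-6, wavelength=633e-9):
        """Fresnel computer-generated hologram of a 3D point cloud.

        points: (N, 3) object-point coordinates in meters
        amps:   (N,) point amplitudes
        Each pixel accumulates the spherical wave arriving from every
        object point; the loop over points dominates the runtime.
        """
        xs = (np.arange(width) - width / 2) * pitch
        ys = (np.arange(height) - height / 2) * pitch
        X, Y = np.meshgrid(xs, ys)
        k = 2 * np.pi / wavelength          # wavenumber
        field = np.zeros((height, width), dtype=np.complex128)
        for (px, py, pz), a in zip(points, amps):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a / r * np.exp(1j * k * r)
        return np.angle(field)              # phase hologram for display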

Inition’s StereoBrain helps transform 3D post-production Inition is to present the latest version of the StereoBrain Processor, a 3D HD video-processing unit created for stereoscopic 3D production, post and broadcast, at IBC, September 11-14. Developed by Inition and broadcast video experts NuMedia, the StereoBrain has been helping to transform the 3D production process since its launch earlier this year. The StereoBrain (SB-1) is designed to allow live viewing of a stereoscopic camera pair or other genlocked 3D video on any of the current breed of 3D televisions. This enables live 3D monitoring, or 3D viewing in post-production environments, from any 3D pair of HD-SDI sources. Before the StereoBrain Processor, 3D viewing in the post-production environment was often not possible or affordable. Iridas, one of the leading software developers for stereoscopic playback, color correction, and workflow automation, has been using StereoBrain technology with its FrameCycler DI and SpeedGrade XR/DI products. http://www.inition.co.uk

XpanD X101 Series 3D glasses at the premiere of X Games The 3D Movie XpanD showed why the company’s X101 Series active glasses are a viewer’s choice for the full 3D experience during its recent collaboration with ESPN and Walt Disney Studios Motion Pictures at the premiere of X Games The 3D Movie, the first sports-themed 3D film, held at the NOKIA Theatre at LA Live on July 30, 2009. Slated for a limited theatrical release this summer, the film attracted an audience of more than 4,000 people while delivering non-stop action. The X Games 3D movie, filmed using groundbreaking techniques, was well paired with the XpanD brand of technologies and, more significantly, showcased the company’s newly launched X101 Series 3D glasses, regarded by many industry professionals as one of the most immersive stereoscopic image delivery systems available. An estimated 10,000-plus people viewed a sneak preview of the X Games 3D movie over the course of a few days, and audiences gave it rave reviews. With just over 1,000 installations across five continents, XpanD has become a major player in 3D, and the X101 active glasses have been a dominant choice for exhibitors seeking the most optimal guest experience. Hosting many special features, the ecology-minded reusable glasses have a durable and contemporary design that moviegoers prefer. http://www.xpandcinema.com



Samsung and Nvidia unveil 3D technology for the Middle East market Samsung Electronics and Nvidia announced the launch of 3D stereoscopic technology in the Middle East. The bundled 3D kit, comprising a Samsung 2233RZ monitor, Nvidia high-tech wireless glasses, a high-power IR emitter and advanced software, forms the foundation for a new consumer 3D stereo ecosystem for gaming and home entertainment PCs. The Samsung 2233RZ monitor is housed in a thin, glossy black bezel that provides a compelling 3D experience with incredible picture quality, and users can now set up a shortcut key to easily shuttle between 2D and 3D. Unlike many other wide-screen monitors, the Samsung 2233RZ displays 5:4 and 4:3 images at accurate aspect ratios without enlargement or distortion. The 2233RZ, in conjunction with Nvidia’s advanced software, automatically converts over 350 games to stereoscopic 3D without the need for special game patches. Active shutter glasses from Nvidia deliver double the resolution per eye and ultra-wide viewing angles compared with passive glasses. In addition, the new 120Hz LCD monitors unlock flicker-free stereoscopic 3D gaming that provides 60Hz per eye. The 3D glasses are designed so that users can simultaneously wear prescription glasses comfortably. This ensures that users with corrective eyewear can view fully-immersive stereoscopic 3D. The glasses are easily powered over a standard USB cable and can last an entire week without a recharge. In addition, Nvidia’s “The Way It’s Meant to Be Played” program ensures that future games will support 3D Vision. http://www.nvidia.com

Real D achieves 100% growth worldwide, and 400% growth in Europe in first half of 2009 Real D announced that it has doubled its installation base of Real D 3D equipped cinema screens worldwide and notched 400% growth in Europe in the first half of 2009. The Real D 3D platform now accounts for over 8,700 screens under contract and over 3,200 screens installed in more than 45 countries. http://www.realD.com

Real D adds 3D to CinemaxX Real D and the major German cinema circuit CinemaxX have partnered to equip CinemaxX theatres across Germany and Denmark with Real D 3D. This multi-screen pact builds on Real D’s momentum that saw 400% growth across Europe in the first half of 2009. The multiyear agreement kicks off with 30 CinemaxX screens in Germany and Denmark immediately upgraded to Real D 3D in time for the highly anticipated September release of Disney/Pixar’s Up. An additional 30 Real D 3D equipped screens will be added across the CinemaxX circuit by the end of the year, with an ongoing rollout of more Real D 3D enabled screens through 2010. http://www.realD.com

National Geographic introduces first 3D title National Geographic will debut its first 3D home video title with “Sea Monsters 3D: A Prehistoric Adventure”, coming to DVD ($19.97) and Blu-ray Disc ($28.99) August 11 from Warner Home Video. The releases come with four pairs of 3D glasses, as well as an interactive timeline. The film covers prehistoric marine animals of the Dinosaur Age, such as Dolichorhynchops; the lizard-like Platecarpus; the Styxosaurus, with paddle-like fins the size of humans; and the monstrous Tylosaurus. http://www.nationalgeographic.com/store


NewTek releases 3D motion graphics packs for sports NewTek announced the release of the 3D Content Pack Volume 1: Sports, a collection of network-style, ready-to-use, customizable animated 3D scenes designed for use with LightWave 3D and 3D Arsenal. The NewTek 3D Content Pack, Volume 1: Sports offers nearly 100 dynamic and animated scenes designed for football, baseball, basketball, soccer and hockey productions. Each sequence contains:

• Program opening and bumper for each sport
• Logo treatments for team and player identification with areas to customize with logos and player pictures
• Program title or team logo animations for segment introductions
• Fast-paced bumper and transition animations that include placement for an image or video

The NewTek 3D Content Pack, Volume 1: Sports is available worldwide for US$295. http://www.newtek.com

TYZX announces iRobot's selection of TYZX 3D vision systems for advanced sensing on military robots TYZX announced that iRobot Corporation has chosen the TYZX DeepSea G2 Embedded Vision System (EVS) for several military robotics research projects requiring real-time vision and depth-sensing. iRobot has demonstrated the ability to integrate the TYZX G2 EVS onto its PackBot and Warrior platforms: rugged tactical mobile robots designed to perform dangerous search, reconnaissance and bomb-disposal missions while keeping troops out of harm’s way. The TYZX G2 EVS provides:

• Enhanced Situational Awareness via 3D Visualization – Standard monocular cameras provide video footage that is “flat,” sometimes making it difficult for a robot operator to judge distance. 3D visualization provides depth information and a more detailed view of the environment. Using an operator control unit (OCU) integrated with TYZX stereo vision data, the robot operator can more easily manipulate objects such as unexploded ordnance.
• Person Detection and Person Following Capabilities – Using the TYZX system for person detection, iRobot researchers are developing advanced autonomous navigation algorithms to demonstrate person-following capabilities. Using onboard sensing from the TYZX system, iRobot’s tactical mobile robots have demonstrated the ability to detect, recognize, track, and follow specific persons of interest.
• Obstacle Detection and Obstacle Avoidance (ODOA) Capabilities for Increased Autonomy – TYZX G2 technology has enabled iRobot’s SEER payload for its PackBot and Warrior platforms to support autonomous ODOA for complex vertical structures. Whereas traditional planar LIDAR sensors provide only a 1D horizontal sweep of obstacles, TYZX technology provides range details in 2D. This allows the robot to sense how high an obstacle is and to determine if it can overcome that obstacle (see the sketch after this list).

http://www.TYZX.com
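TYZX has not published the G2’s internals, but the obstacle-height capability follows from the standard pinhole stereo relationships Z = f * B / d and H = h_px * Z / f. A minimal Python sketch with illustrative numbers:

    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Depth from stereo disparity: Z = f * B / d (pinhole model)."""
        return focal_px * baseline_m / disparity_px

    def obstacle_height(top_row, bottom_row, depth_m, focal_px):
        """Metric obstacle height from its vertical pixel extent and depth.

        A planar scan only says where an obstacle is in one horizontal
        slice; per-pixel depth also tells the robot how tall it is, and
        hence whether the obstacle can be climbed or must be avoided.
        """
        return (bottom_row - top_row) * depth_m / focal_px

    # Example: 12 px disparity, 400 px focal length, 22 cm baseline
    z = disparity_to_depth(12, 400.0, 0.22)    # ~7.3 m away
    h = obstacle_height(180, 220, z, 400.0)    # ~0.73 m tall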

Google Earth lights up 3D terrain on moon for Apollo 11 anniversary Google switched on the Moon in Google Earth to commemorate the 40th anniversary of the Apollo 11 lunar landing. The newly included moonscape for Google Earth features 3D terrain of the lunar surface for users to fly around and explore, as they already can with Mars and Earth. Users can also switch the scenery to the planning charts used by NASA for the Apollo missions, and explore links to landing sites, a guide to artifacts left behind, and panoramic “street view” imagery taken by astronauts. Those who prefer a more structured lunar excursion can also take a guided tour by original moonwalker Buzz Aldrin and A Man on the Moon author Andrew Chaikin. There's also a narrated fly-through from Apollo 17 astronaut Harrison Schmitt, the only geologist to have walked on the Moon.


STEREO Spies First Major Activity of Solar Cycle 24 NASA's Solar Terrestrial Relations Observatory (STEREO) spacecraft has spotted the first major activity of the new solar cycle. On May 5 STEREO-B observed a Type II radio burst and a bright, fast coronal mass ejection (CME) emanating from the far side of the sun. The activity originated in a solar active region that rotated into view from Earth on May 8. A Type II radio burst is a discharge of radio waves that are emitted when shocks are accelerated by a CME – the sudden eruption of energy and solar material. The active region appears well above the sun's equator, at about 30 degrees latitude, which indicates it is part of the new solar cycle. Activity from the previous solar cycle would appear nearer to the sun's equator. These regions also have a distinct magnetic organization characteristic of new cycle regions. The last years of Solar Cycle 23 marked the longest and deepest solar minimum in 100 years. Its unusually small number of active regions and sunspots has led some impatient space-weather watchers to wonder if we were entering another “Maunder minimum.” That period, in the late 17th and early 18th centuries, saw few, if any, sunspot regions, and coincided with the deepest part of the “Little Ice Age” of global cooling. STEREO, the third mission in NASA's Solar Terrestrial Probes series, launched on October 26, 2006. STEREO's mission, now in the extended phase, is to provide the first-ever stereoscopic measurements to study the sun. http://www.nasa.gov/stereo

Fraunhofer HHI to host 3D Media Workshop in Berlin, 15-16 October 2009 Recent developments in the area of 3D media naturally have an influence on the home market, because secondary distribution (DVD, Blu-ray) of movies has become an important market segment. In addition, recent advances in 3D display technology have stimulated research and development in this area, and several consortia have been founded and standardization activities initiated. In 2003, the 3D Consortium with 70 partner organizations was founded in Japan, and recently new activities have been started: the 3D@Home Consortium (USA), the 3D Fusion Industry Consortium (Korea), the 3D Interaction & Display Alliance (Taiwan), and the China 3D Industry Association have been founded, and in the field of standardization the ICDM, the ISO TC159/SC4, and the SMPTE 3D Home Entertainment Task Force are working on 3DTV. In the area of 3D display and display measurement standardization, the Rapporteur Group on 3DTV of ITU-R Study Group 6 and the TM-3D-SM group of DVB began to work. In addition, the Moving Pictures Experts Group (MPEG) of ISO/IEC is working on a new coding format for 3D video. These industry consortia and standardization bodies rely on R&D projects, which develop the new technologies that lead to new standards, products and markets. In Germany these developments take place in the PRIME project, while in Europe there are 11 projects in the 3D Media Cluster, which is a co-organizer of this workshop. In particular, the projects 3DTV, 3D4YOU, 2020 3D MEDIA, MOBILE3DTV, 3DPHONE, 3DPRESENCE, GAMES@LARGE, MUTED, and Helium3D will contribute to this workshop. The 3D Workshop will provide an international forum to discuss and give guidance to future research, standardization and development activities in 3D. It will bring together production and post-production companies, multimedia content providers, broadcast operators, regulators, manufacturers, and R&D organizations. Topics to be presented and discussed cover the complete chain from 3D production to 3D displays for 3D cinema, (mobile) 3DTV, 3D games, and other immersive applications around 3D content. http://3dmedia.hhi.fraunhofer.de


Silicon Image introduces first port processor with 3D over HDMI Silicon Image introduced the SiI9389 port processor incorporating High-Definition Multimedia Interface (HDMI) Specification Version 1.4 features including 3D over HDMI, HDMI Ethernet Channel, Audio Return Channel and Content Type Bits. In addition, the Silicon Image semiconductor product family of transmitters (SiI9334 and SiI9136) and receivers (SiI9223 and SiI9233) has been upgraded to include 3D over HDMI capabilities, resulting in one of the industry’s broadest product portfolios incorporating HDMI Specification Version 1.4 features. Manufacturers of DTV, Blu-ray Disc player, set top box, audio/video receiver and other home theatre products are now able to incorporate key features of the HDMI Specification Version 1.4 in their next-generation products. Now consumers can experience 3D cinema quality in their home, as well as enhance their gaming experience with 3D games. In addition to 3D over HDMI, the SiI9389 port processor also offers HDMI Ethernet Channel, which simplifies the connectivity infrastructure that enables personal entertainment technologies like LiquidHD to bring new services and applications to the home. Silicon Image’s LiquidHD technology is a suite of protocols that runs over standard IP networks such as those that include HDMI Ethernet Channel functionality. Designed to quickly and easily connect TVs, consumer electronics devices, personal computers, portable media devices, and home theaters into a seamless network, LiquidHD technology lets consumers enjoy their digital content from any LiquidHD-enabled source device on any LiquidHD-enabled display in the home. http://www.siliconimage.com

FinePix REAL 3D technology pioneers new dimension in imaging For the FinePix REAL 3D W1, Fujifilm has developed a ground-breaking image capture system comprising two Fujinon lenses and two CCDs, and the system is integrated in a compact body. An aluminum die-cast frame provides the solid platform for the precision alignment of the left and right lenses so users can take 3D images with an unprecedented quality of reality. Image data captured by the twin lens-CCD system is processed by the RP (Real Photo) Processor 3D – a newly developed processor that evaluates all photographic factors from focus and brightness to color tonality, and then merges the left and right images in a single 3D image. It is also the power behind 3D Auto, the function that makes point-and-shoot 3D photography a reality. This processor also controls the two capture systems independently to shoot two different images of the same subject simultaneously, each with different photographic settings. http://www.fujifilm.com

Holga introduces 120-3D The new Holga 120-3D stereo camera is based on the Holga CFN (Color Flash) body design. The Holga 120-3D features a standard tripod mount (1/4-20), a bulb setting, one mask for a 6x6cm image area, and a built-in threaded shutter release button for cable release. The Color Flash is a built-in flash that contains a spinning color filter wheel. The Holga 120-3D stereo camera is available in both standard and pinhole models. The camera retails for around $100. http://microsites.lomography.com/holga

Queensland University of Technology work allows photos in 3D New free software developed by a researcher from Queensland University of Technology and the Australasian CRC for Interaction Design (ACID), based at QUT, allows photographs to be turned into three-dimensional images. Dr David McKinnon has designed software that could revolutionize the way three-dimensional images are created. The software automatically locates and tracks common points between the images, allowing a determination of where the cameras were when the photos were taken. This information is then used to create a 3D model from the images, using graphics cards to massively accelerate the computations. http://3Dsee.net
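The article doesn’t say which libraries McKinnon’s software uses; as a hedged sketch of just the “common points” step it describes, here is the standard feature-detection-and-matching pattern in OpenCV (a present-day library chosen for illustration; filenames hypothetical):

import cv2

# Two overlapping photos of the same scene, taken from different positions
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect salient feature points and compute a descriptor for each
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors across the two images; these correspondences are the
# "common points" from which the camera positions can be estimated
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "candidate correspondences")

From such correspondences, structure-from-motion solvers recover the camera poses and triangulate 3D points, which is the part the article says is accelerated on the graphics card.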

Cyclopital3D brings out digital stereoscopic hand viewer Cyclopital3D’s digital hand viewer allows the user to show stereo photographs without a computer or making prints. Features include 800x480 resolution for each eye; a wide field of view accommodating ortho-stereo perspective for most digital stereo cameras; compatibility with the new Fuji FinePix REAL 3D camera; storage for up to 20,000 stereo pairs; six hours of continuous viewing; a focus knob that allows glasses-free viewing for most people; precision optics with large achromatic lenses and high-quality front-surface mirrors; and a standard tripod mount for table-top use. Its size is 200x120x95mm and it weighs 850 grams. http://www.cyclopital3d.com

Nihon Unisys uses 3D digital signage to study faces of passers-by Macnica Networks Corp and Nihon Unisys Ltd conducted a two-week marketing test using face recognition technology from May 15-29. This was an addition to the digital signage test using a 3D display that Nihon Unisys carried out at Central Japan International Airport (Centrair) for three months from March 30 to June 30. The 3D display is Provision Interactive Technologies Inc’s 40-inch rear-projection display, which does not require dedicated glasses to view 3D images. In addition to 3D video, it provides information on stores at the airport and issues coupons through a kiosk terminal standing next to the display. This time, the 3D display featured Macnica Networks’ face recognition technology, so it could count the number of people who looked at the display and analyze them to collect marketing data. Specifically, the display measures the directions of passers-by’s faces with a compact camera mounted on its upper part and determines whether they are viewing the content. The number, age, and gender of viewers, as well as the time of viewing, are then determined and recorded for each content item. It also counts and records the number of people who operated the kiosk terminal’s touch panel after viewing the content, and the number of issued coupons. Macnica and Nihon Unisys will comprehensively analyze these data to verify the effect of digital signage, they said. http://www.unisys.co.jp/welcome-e.html

75-foot Criss Angel lenticular mural showcased at the Luxor in Las Vegas The Cirque du Soleil theater lobby at the Luxor hotel in Las Vegas was reworked in 2009 by famed architecture firm Hamilton Anderson Associates. A component of the re-design was a 75-foot-wide back-lit lenticular mural created by Big3D, the lenticular provider of record on the project. The art features imagery of an expanse of red velvet curtain through which, via the magic of lenticular, iconic white rabbits and Criss Angel periodically peek through, then retreat back behind the curtain. Additionally, five large back-lit lenticular panels line the hallway leading into the theater itself, each a 3-flip effect featuring more eerie rabbit imagery. The mural installation was complex, made up of 50 different lenticular panels of varying heights. Since the custom light boxes were being constructed on-site, constant fine-tuning of panel sizes was required throughout the lenticular production process. Four days were required to produce the complex job. Installation went smoothly, and the end result is truly unique and stunning. http://www.big3d.com

Sony Pictures' Angels & Demons lured by lenticular A blockbuster film franchise steeped in secrets and ancient intrigue comes to life in a dramatic poster and lobby display promotion by Big3D.com, the world leader in 3D. Two different 3D posters were produced to promote the Ron Howard-directed film. Over 600 backlit 27x40-inch 3D posters and over 500 48x72-inch animated 3D lobby displays were distributed to theaters across the country. The large lobby display is a 3D image of a stone gargoyle, half angel, half demon, glaring menacingly against a dark, gloomy sky, seemingly reaching out to passersby. The theater poster is 3D and animated – as you move past the poster, the image of the gargoyle changes from good to evil, highlighting the plot of the film. http://www.big3d.com

Stratasys makes four more materials compatible with Fortus 3D production systems Stratasys announced it has made four more materials and one more support material compatible with its Fortus 900mc 3D production system. The materials include ULTEM 9085, PC-ABS, PC-ISO, and ABS-M30i. These options more than double the number of materials compatible with the Fortus 900mc, and they provide an array of mechanical properties to choose from, such as FST (flame, smoke, toxicity) compliance, heat resistance, medical-sterilization capability, strength, and flexibility for prototyping and production. Stratasys materials previously compatible with the 900mc are ABS-M30, PC, and PPSF/PPSU (polyphenylsulfone), the company says. http://www.Stratasys.com

Objet shows off examples of PolyJet technology PolyJet enables the printer to use more than one kind of material during a single print run; in other words, users might have soft and hard portions on one model. Consider the example, below left, where an entire bike was printed – including soft wheels and a hard frame (and the wheels turn). The center image shows an amazing example of the technology, where the solid bones of a foot are printed inside a “fleshy” material, producing a medical model. http://www.objet.com

Chicago Architecture Foundation presents “Chicago Model City” The Chicago Architecture Foundation is presenting “Chicago Model City” until November. It's a gigantic and highly detailed three-dimensional model of the Windy City's downtown towers; note the relative size of the individuals in the image below. City models are not a new phenomenon – but in the past their construction involved dozens, perhaps hundreds, of craftsmen toiling over teeny building parts for years. The results were impressive, but Chicago's approach was quite different. Due to budget difficulties, the foundation was led towards 3D printing, and that's how the buildings were made, more quickly and with less expense. The results are amazing: 1,000 buildings covering 400 blocks of downtown Chicago, home of many of the world's most amazing skyscrapers. The scene will be illuminated just as the actual city is by our sun. http://chicagomodelcity.org

3D scanner from Next Engine recreates real-life objects on the screen The NextEngine 3D Scanner does just what its name implies. It scans any object with a bunch of lasers and makes it into a fully workable CAD assembly right on the computer screen. The laser scans the object and takes a 3D snapshot of the face, employing wave-particle duality theories to get 400 data samples per square inch. The computer then automatically takes the data points and strings them together to make a 3D computer model that is easily imported into other CAD software. Depending on how big the item is, a couple of scans at different angles might be required. NextEngine says it takes about two minutes to complete a scan and that about 12 scans will get an article fully scanned. They also claim that, at a price of $3000, their scanner can operate better than the competition at a tenth of the cost to the consumer. Combining the technologies with 3D printing allows for objects to flow freely between the real world and that of computers. This is the beginning of an age where anybody can bring a 3D object into their computer, edit it to suit their needs, and print it back out in a fully functioning form. http://www.nextengine.com

Shapeways introduces stainless steel 3D printing Shapeways, which has traditionally worked in plastics and resins, now enables users to print objects in stainless steel. Stainless steel printing holds the promise of machines that can replicate themselves and build anything. Stainless steel printing, like most 3D printing, is accomplished by the slow layering of material on a substrate; as you watch the printing machine build a design, it’s like seeing an object grow out of thin air. For stainless steel printing, the layers are formed of steel powder held together by a binding material. When the whole model is done, the steel is infused with bronze. The price for steel is about 5-6 times greater than for traditional resin or plastic, but still amounts to only about $10 USD per cubic centimeter – a small 8 cm³ piece, for example, would run about $80. http://www.shapeways.com

Urbanscreen develops 555 KUBIK video projection facade Daniel Rossa worked with Urbanscreen to create the 555 KUBIK facade video projection at the Kunsthalle in Hamburg, Germany. Giant hands appear to manipulate the surface of the museum in a surreal sequence that is the result of Rossa asking the question: “How would it be if a house was dreaming?” http://www.urbanscreen.com

Helmholtz Zentrum München develops 3D imaging inside living material Using a process that makes light audible, bioengineers in Germany developed a technique that allows 3D optical and fluorescence imaging of tissue to a depth of several centimeters. The Helmholtz Zentrum München (German Research Center for Environmental Health) has rendered 3D images through at least six millimeters of tissue, allowing whole-body visualization of adult zebrafish. To achieve this feat, the researchers made light audible. They illuminated the fish from multiple angles using flashes of laser light that are absorbed by fluorescent pigments in the tissue of the genetically modified fish. The fluorescent pigments absorb the light, a process that causes slight local increases in temperature, which in turn result in tiny local volume expansions. This happens very quickly and creates small shock waves. In effect, the short laser pulse gives rise to an ultrasound wave that the researchers pick up with an ultrasound microphone. The real power of the technique, however, lies in specially developed mathematical formulae used to analyze the resulting acoustic patterns. An attached computer uses these formulae to evaluate and interpret the specific distortions caused by scales, muscles, bones, and internal organs to generate a 3D image. The result of this “multispectral optoacoustic tomography”, or MSOT, is an image with a spatial resolution better than 40µm. http://www.helmholtz-muenchen.de/en/

Light and ultrasound can be used to visualize the red fluorescent spinal column of a live fish. Multispectral optoacoustic tomography (MSOT) allows the investigation of subcellular processes in live organisms. Image: Helmholtz Zentrum München/TU München montage
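The summary doesn’t give the governing physics, but the effect described is the standard photoacoustic relation: in its usual formulation, the initial pressure rise after a short laser pulse is

p_0 = \Gamma \, \mu_a \, F

where \Gamma is the dimensionless Grüneisen parameter of the tissue, \mu_a its optical absorption coefficient, and F the local laser fluence. Measuring at several wavelengths – the “multispectral” part – separates absorbers, such as the fluorescent pigments, by their distinct \mu_a spectra.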

3M previews new 3D film at SID 3M previewed a new 3D film in its booth at SID Display Week in June. The three-dimensional film delivers clear, true autostereoscopic viewing at a glance, without the need for special glasses. It provides a simple 3D solution for mobile phones, gaming devices, and other handheld devices, and easily integrates into the backlight module of the LCD. Only one LCD panel is required, operating at a 120Hz refresh rate. Backlight module assembly is nearly identical to existing systems, allowing simple integration at the assembly stage. The usual optical film stack is replaced with a reflective film, a custom light guide, and the 3D film. Through directional backlight technology, left and right images are focused sequentially into the viewer’s eyes, allowing full resolution of the display panel. Since the two views are time-sequential rather than spatially interleaved, the 120Hz panel delivers 60 images per second to each eye without sacrificing resolution.

EnFuzion enables 3D resizable image rendering in the Amazon Cloud Axceleon, a company in high-performance distributed computing solutions for render farms and clusters, announced that EnFuzion is the first commercial product on the market today to enable seamless rendering with major 3D applications in the Amazon EC2 cloud. EnFuzion3D 2009 interfaces with, launches, and manages image renders in EC2, supporting major 3D applications such as Autodesk 3ds Max and Adobe After Effects with plug-in renderers like V-Ray, 3Delight, Turtle, and mental ray. EnFuzion with EC2 provides “elastic rendering” within the cloud and changes the economics of rendering by allowing studios and end users to pay only for their usage. This notion of “elastic rendering” using EnFuzion decreases the management of a studio’s hardware and software stockpile. The EnFuzion render farm in the cloud shrinks and expands on demand. It can start with one render node and rapidly expand to thousands of render nodes, transparently increasing the potential of the render farm within minutes using Amazon’s EC2 cloud. http://www.axceleon.com
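EnFuzion’s actual interface isn’t described in the announcement; purely to make “elastic rendering” concrete, here is a minimal sketch of growing and shrinking a pool of EC2 render nodes with the AWS SDK for Python (boto3, a present-day library; the AMI ID, instance type, and region are hypothetical):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def grow_render_farm(count):
    """Launch render nodes on demand; the AMI would have the renderer preinstalled."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="c5.xlarge",
        MinCount=count,
        MaxCount=count,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

def shrink_render_farm(instance_ids):
    """Terminate idle nodes so that cost tracks actual usage."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# Scale up for a big job, then back down when the frames are done
nodes = grow_render_farm(50)
# ... dispatch frames to the nodes ...
shrink_render_farm(nodes)

The pay-only-for-usage economics follow directly: no render nodes exist between jobs.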

800 teraflop real-time GPU introduced by TOPS Systems TOPS Systems Corporation announced a system that uses what is essentially a complex, 45nm ray-tracing GPU to accelerate real-time ray-traced rendering. The 800-teraflop real-time ray tracing (RTRT) system gangs together nine 73-core chips into a single system that fits inside a desktop computer form factor. The new chip, which is being jointly developed with Toyota and Unisys, is aimed at the auto industry, where designers will use it to prototype body designs and paint combinations. The bus hosts a 64-bit RISC master controller that takes work in batches and assigns it to the other eight cores, which do the work of computing the rays. http://www.topscom.co.jp

Innovative Designs, Barco, and Hoberman Associates debut expanding video screen for the U2 360° tour Hoberman Associates, a New York-based design firm, assisted Innovative Designs and its parent company Barco in creating the centerpiece for the U2 360º tour – the “Expanding Video Screen” – which debuted on June 30, 2009 during the band’s opening night in Barcelona, Spain. While large video screens are a familiar fixture of arena-style rock concerts, U2 was looking for something unprecedented for its 360º tour: a giant screen that could change its size and shape. Hoberman and Innovative Designs conceptualized this fusion of architecture, stage scenery, and extreme technology. They came up with a design for an elliptical video display, approximately the size of a tennis court, that could morph into a seven-story-high cone-shaped structure, enveloping the band as it extends. Constructed of stainless steel and aircraft aluminum, the display is made of 888 LED screens, with 500,000 pixels spanning across them, providing concertgoers with clear and visually stunning images. It has a screen area of 3,800 square feet and weighs approximately 120,000 pounds. Suspended from the center of “The Claw”, the main stage set named for its futuristic, four-legged design, the Expanding Video Screen provides U2 fans the first-ever 360-degree concert view. With a height of 164 feet, the entire U2 360º set is twice as tall as the stage from the Rolling Stones’ A Bigger Bang tour, which was, according to Rolling Stone magazine, the largest stadium set built to date. Accordingly, every seat in the U2 tour’s 75,000-plus-seat stadiums will have a completely unobstructed view of the show. The Expanding Video Screen’s development is based on Hoberman’s patented “Iris Structure”, which has been realized in other forms, including the Iris Dome at The Museum of Modern Art (New York City, 1994), the Iris Dome at the World’s Fair (Hanover, Germany, 1999), and the Olympic Arch (Salt Lake City, Utah, 2002). The screen was produced by Barco and Innovative Designs. Hoberman Associates and Innovative Designs were responsible for screen design and engineering. Buro Happold provided structural analysis. http://www.hoberman.com

Organic Motion helps create immersive digital experience at Sony Wonder Technology Lab Organic Motion announced the deployment of its unique technology in the new “Dance Motion Capture” experience at the Sony Wonder Technology Lab (SWTL) in midtown Manhattan. The new exhibit adapts Organic Motion’s technology to give SWTL visitors the unprecedented opportunity to use their own movements to bring Sony animated characters to life. Upon stepping into one of three new Dance Pods, each attendee becomes a “puppet master”, controlling a life-size film or video game character in real time. The new experience is one of 14 new interactive exhibits recently unveiled after an extensive renovation of the SWTL’s third and fourth floors. The Lab’s “Dance Motion Capture” exhibit marks the world’s first public entertainment experience to incorporate markerless motion capture, a new way to digitize human motion that is free of all markers or tracking devices. Children and parents alike can step into the “Dance Pods” and instantly have their full-body motion animated onto a character of their choice. Leveraging Organic Motion’s OpenSTAGE technology, the new attraction showcases how people can interact with the technologies used to create 3D games and movies – technologies that are transforming the way people use computers. Unlike traditional motion capture, which requires the subject to put on special tracking body suits, Organic Motion’s systems let people simply wear their regular clothing. This is especially beneficial for other applications aimed at the general public, such as sports, medicine, and entertainment, where a number of individuals can quickly pass through the system and have each person’s exact body movement digitized and streamed instantly into software. http://www.sonywondertechlab.com http://www.organicmotion.com

Jon Peddie Research says graphics will come blazing back in 2010 Jon Peddie Research (JPR) announced that estimated global graphics chip shipments for 2009 will see the worst-ever year-over-year drop in shipments. The decrease in shipments for 2009 will be even worse than in the 2000-2001 recession. However, 2010 promises an amazing comeback. Taking together shipment data, interviews with suppliers, and world economic forecast models, JPR believes the worst is over and Q3 will show recovery leading all the way through 2010, subject to seasonal adjustments. Portable devices such as notebooks, laptops, and netbooks will be the strongest, but they will not overwhelm desktops, which are still the preferred platform for power users and professionals. Architectural changes like Intel’s Nehalem and new product introductions from AMD, ATI, Intel, and Nvidia are going to be disruptive to the status quo and the traditional market shares of suppliers. The continued expansion and development of heterogeneous computing and GPU computing will stimulate growth in 2010, enabled by Apple’s and Microsoft’s new operating systems. New programming capabilities using OpenCL, DirectX 11, and Nvidia’s CUDA architecture will remove barriers to the exploitation of the GPU as a serious, economical, and powerful co-processor in all classes of PCs. The net result is a new PC environment starting in Q3, and this new environment will have a beneficial impact on computing from 2010 onward. Graphics chip shipments are a leading market indicator: the graphics chips go to the ODMs and OEMs, which then build and ship PCs. http://www.jonpeddie.com

AMD soars in Q2’09, Intel and Nvidia show gains according to Jon Peddie Research Jon Peddie Research (JPR) announced estimated graphics shipments and supplier market share for the second calendar quarter of 2009. Graphics chips (GPUs and IGPs) are the leading indicator of the PC market. After the channel stopped ordering GPUs and depleted inventory in anticipation of a long, drawn-out worldwide recession in Q3 and Q4 of 2008, expectations were hopeful, if not high, that Q1’09 would change for the better. In fact, Q1 showed improvement, but it was less than expected, or hoped. Instead, Q2 was a very good quarter for vendors, counter to normal seasonality.

                 8 yr avg.  2003     2004     2005    2006     2007    2008     2009
Change Q1 to Q2  0.83%      -5.42%   -5.18%   1.63%   -4.22%   3.13%   -0.49%   31.29%

Table 1: Growth rates from Q1 to Q2 from 2003 to 2009

Vendor    This quarter  Market share  Last quarter  Market share  Growth Qtr-Qtr  A year ago
AMD       18.13         18.4%         12.81         17.1%         41.5%           17.11
Intel     50.30         51.2%         37.20         49.7%         35.2%           44.67
Nvidia    28.74         29.2%         23.26         31.1%         23.6%           29.63
Matrox    0.06          0.1%          0.07          0.1%          -6.2%           0.10
SiS       0.40          0.4%          0.70          0.9%          -42.9%          1.90
VIA/S3    0.67          0.7%          0.84          1.1%          -19.5%          1.00
Total     98.30         100.0%        74.87         100.0%        31.3%           94.42

Table 2: Total graphics chip market for Q2’09 (unit shipments in millions)

Things probably aren’t going to get back to normal seasonality till Q3 or Q4 this year, and we won’t hit the levels of 2008 until 2010. JPR is still predicting an upturn in the PC market in Q3 and Q4, and in particular for the graphics market (which serves not just PCs but aerospace and automotive, industrial systems, medical systems, kiosks, and POS). JPR is optimistic because these are seasonally the best quarters. In Q4 there will be two new operating systems – Apple’s Snow Leopard and Windows 7 – which will help stimulate new purchases in the holiday season. ATI and Nvidia will be introducing new 40nm designs with higher performance, GPU compute, and surprisingly aggressive prices. http://www.jonpeddie.com

3D modeling and animation market shows resilience, says new study by Jon Peddie Research Jon Peddie Research (JPR), the industry's research and consulting firm for graphics and multimedia, just released a new report on the 3D modeling and animation (M&A) market. The same software that is used primarily for film/TV production and game development is also being put to work for rendering and visualization in architecture, manufacturing, and science, and is on the verge of major breakthroughs due to demand from new vertical markets as well as hobbyist and consumer sectors. Like all others, the 3D M&A industry is going through a period of contraction and consolidation. However, as difficult as it is for all participants, the JPR study points out that this is often a prelude to growth. The 3D modeling and animation market includes software tools that are used for TV and movie special effects, creating content for games, product design, and the web. Over the years the industry has grown steadily, but 3D M&A tools are still expensive and used primarily by professionals. Beyond traditional industries, new markets are also opening up for more casual users of 3D modeling and animation tools, defying the barriers posed by high cost and complexity. Free 3D M&A tools are becoming available, and millions of copies are being downloaded every year, suggesting a pent-up demand for easy-to-use 3D tools. In addition, there is a hard core of hobbyists and casual users who are using 3D tools even though the learning curve is steep. New distribution models are just now opening up, including online worlds, YouTube, MyToons, the Daz communities, and more.

Worldwide forecast, 3D Modeling and Animation ($M US dollars):
          2007   2008   2009   2010   2011   2012   2013
Revenue   219    237    221    238    262    293    329
CAGR: 6.8%

http://www.jonpeddie.com
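The report excerpt doesn’t state which span the 6.8% CAGR covers; it appears to correspond to the 2008-2013 portion of the forecast:

\mathrm{CAGR} = \left(\frac{V_{2013}}{V_{2008}}\right)^{1/5} - 1 = \left(\frac{329}{237}\right)^{1/5} - 1 \approx 6.8\%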

Creative 3D painting

I continue to be fascinated by those artists with the incredible ability to paint depth-based images on 2D surfaces. The worldwide success of sidewalk painters suggests that the fascination is rather universal. A Madonnaro in Italy may be a Strassenmaler in Germany, a sidewalk artist in the United States, or a pavement artist in Britain, but street painting and pavement art have been transformed beyond recognition. A few favorites are identified below.

Julian Beever’s sidewalk chalk paintings continue to astound In past editions of the 3rd Dimension, we’ve shown images of Julian Beever’s amazing chalk paintings (see issues #12, #13, #20, #23/24, and #31/32), which so clearly show us the importance of perspective. In the below images, the lines in the sidewalk serve to remind us that these really are 2D paintings. http://users.skynet.be/J.Beever

Tracy Lee Stum creates 3D sidewalk chalk paintings Tracy Lee Stum is an internationally recognized talent whose versatile and exceptional abilities excel in the realms of painting, drawing, street painting, and decorative design. Tracy has participated as an invited featured artist in many festivals and events in the US and internationally. Her paintings have won numerous awards and accolades, and she currently holds a Guinness World Record for the largest street painting by an individual, set in 2006. Best known for her spectacular 3D anamorphic and interactive street paintings, Tracy is actively creating commissioned works in chalk for leaders in the advertising, events, corporate, and educational sectors. Additionally, Tracy promotes arts education and has conducted street painting workshops at these festivals and other events throughout the US, including the prestigious Getty Center in Los Angeles. She is also well known for her extraordinary trompe l’oeil murals, oil paintings, and decorative design work throughout the high-end hospitality, entertainment, and luxury residential markets. Clients include casinos, resorts, and hotels in Las Vegas (Caesar’s Palace, The Venetian), Atlantic City (The Borgata), and Hong Kong. http://www.tracyleestum.com

Edgar Mueller also shows off incredible 3D street art Edgar Mueller is described as a “master of street painting”. If one looks from the right spot, his three-dimensional paintings become the perfect illusion. He paints over large areas of urban public space and gives them a new appearance, thereby challenging the perceptions of passers-by. Around the age of 25, Mueller decided to devote himself completely to street painting. He travelled all over Europe, making a living with his transitory art. He gave workshops at schools and was a co-organizer and committee member for various street painting festivals. Mueller set up the first (and so far only) Internet board for street painters in Germany – a forum designed to promote solidarity between German street painters.

Inspired by three-dimensional illusion paintings (particularly by the works of Kurt Wenner and Julian Beever), he is now pursuing this new art form and creating his own style. The observer becomes a part of the new scenery offered. While going about their daily life, people change the painting's statement just by passing through the scene.

The waterfall below is a great example of Mueller’s work. At about 270 m², this painting was Edgar Mueller’s first large-scale project. On the occasion of the Prairie Art Festival in Moose Jaw (Canada), he turned River Street into a river that ends in a huge waterfall. The bottom image clearly identifies the importance of perspective in such paintings. http://www.metanamorph.com

Kurt Wenner paints 3D chalk drawings based on the old masters Kurt Wenner is famous for inventing three-dimensional pastel drawings. Wenner explains that his art form is a type of anamorphism – frequently considered a form of illusion or trompe l’oeil, but really the logical mathematical continuation of perspective. Wenner is currently writing a Street Painting History, explaining how sidewalk art and pavement art transformed into a spectacular medium, popular in advertising, publicity, and numerous street painting festivals. While studying classical architecture and perspective, Kurt Wenner applied the principles of classical drawing and classical design to the sidewalk, completely transforming the art form. http://www.kurtwenner.com
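The “mathematical continuation of perspective” that Wenner describes can be made concrete (this formulation is ours, not Wenner’s): fix an eye point E = (e_x, e_y, e_z) and, for every point P = (p_x, p_y, p_z) of the virtual scene, draw on the pavement (the plane z = 0) the point where the line of sight through P meets the ground:

G = E + t\,(P - E), \qquad t = \frac{e_z}{e_z - p_z}

From E, the chalk point G and the virtual point P are optically indistinguishable; from any other viewpoint the drawing looks stretched, which is exactly what the “wrong angle” photographs of these works show.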

A few other examples… I recently received an e-mail with several additional examples of paintings that rely largely on perspective to convey their message. I don’t know the source of these pieces, but find them fascinating…

Twenty Interviews

Volume 4 now available!

Interviews from Veritas et Visus newsletters – Volume 4

+ 2D-3D Video, Craig Summers, Founder
+ Luminus Devices, John Langevin, VP of Sales/Marketing
+ Fox Sports Network, Mike Anastassiou, Sr. Exe. Producer
+ MacDermid Autotype, Steven Abbott,
+ Axis Films, Paul Carter, CEO
+ Merck KGaA, Roman Maisch, Sr. VP of Marketing/Sales
+ Dallas Mavericks, Dave Evans, Director of Broadcasting
+ Mitsubishi, David Naranjo, Director of Product Dev.
+ Can Communicate, David Wooster, Head of Production
+ Nouvoyance, Candice B. Elliott, CEO
+ Ceravision, Tim Reynolds, CEO
+ nVidia, Andrew Fear, Product Marketing Manager
+ Cypress Semiconductor, Darrin Vallis, Director
+ Rutherford Appleton Lab, Bob Stephens, Prin. Scientist
+ Dolby, Barath Rajagopalan, Director
+ SID, Tom Miller, Executive Director
+ Fusion Optix, Terry Yeo, CEO/Founder
+ Synaptics, John Feland, Human Interface Architect
+ LG Display, Eddie Yeo, Executive Vice President
+ Westar Display Technologies, Phil Downen, Sales Mgr

80 pages, only $12.99

http://www.veritasetvisus.com

SIGGRAPH August 3-7, 2009, New Orleans, Louisiana

In this report, George Walsh from Jon Peddie Research covers highlights from the floor of SIGGRAPH, featuring discussions about exhibits at 3DVIA, ATI, Autodesk, Autodessys, the Blender Foundation, Dimensional Imaging, Envisiontec, e-on software, Intel, Matrix Engine, Nvidia, Pixel Farm, studio|gpu, Smithmicro, SpeedTree, and Wacom

Walking the floor: SIGGRAPH product exhibits

by George Walsh

While you can count the number of classes at a convention or ask how many attendees are there, one of the easiest ways to judge an event’s size is to take a look at the number of booths on the show floor. In the past, SIGGRAPH was a massive event that took up (eyeballing of course) at least four times the booth space that was used in New Orleans this year. However, the diehards were all still on hand for the 2009 event and there was plenty to see. And, as in previous years, recruiters and educational institutions were in attendance, providing opportunities for the attendees.

This little walkabout highlights some of the flashier technologies being shown at the booths. It’s an indication of where companies are putting their money to attract attention.

3DVIA: 3DVIA, a part of Dassault Systèmes’ group, was showing off 3DVIA Virtools for Wii. Virtools is a development and deployment platform for interactive 3D content creation. The Virtools for Wii product lets you export content from a PC to a Wii. What’s interesting is that the app uses the Wii controllers to move the content from one machine to another. That’s the barebones explanation. To use the system, you need a Wii NDEV kit and an SDK. It’s a pretty unique means of moving content between two different platforms.

ATI: For designers on the move, ATI announced its newest graphics hardware for content creators and computer-aided designers. The ATI FirePro M7740 graphics accelerator is set to power Dell Precision M6400 Mobile Workstations. In addition, ATI is now offering a free OpenCL for CPU beta download as part of the ATI Stream SDK v2.0 Beta Program. The beta is intended to help programmers more easily develop parallel software programs and take further advantage of multi-core CPUs to accelerate software. AMD has submitted conformance logs from its OpenCL CPU beta releases to the Khronos Working Group for certification. The company was also showing off a number of GPU Compute applications.

Autodesk: Autodesk, the behemoth of 3D design tool providers, announced a number of product updates at SIGGRAPH. Among them was Autodesk Maya 2010, which will include advanced match-moving capabilities and high dynamic range compositing in a single package. Also included for a complete CG workflow are five additional mental ray for Maya batch rendering nodes and the Autodesk Backburner network render queue manager. In addition, the company announced a new version of its Mudbox digital sculpting and texture painting software, which lets users create 3D digital artwork as if they were working with clay and paint. The company also showed its Autodesk Softimage 2010 release, which boasts accelerated performance, greater data handling, and new tools to help manage scene complexity. Significantly, very significantly, Autodesk has incorporated the Face Robot tool into Softimage 2010; Face Robot was formerly sold as a service and software package for around $30K. MotionBuilder 2010 delivers faster overall performance, streamlined animation workflows, and improved interoperability with Autodesk Maya, 3ds Max, and Softimage software. And Autodesk announced that it would be offering new product bundles designed to help users save money while getting their work done. The Autodesk Entertainment Creation Suites offer customers a choice of either Autodesk Maya 2010 or Autodesk 3ds Max 2010 software, together with both Autodesk Mudbox 2010 and MotionBuilder 2010 software. These suites are priced at US$4,995. Autodesk Real-Time Animation Suites are designed for animation-intensive productions. These suites also offer a choice of either Maya 2010 or 3ds Max 2010 software, in addition to MotionBuilder 2010 software (no Mudbox). They’re priced at US$4,795.

Images from the 3DVIA, ATI, and Autodesk booths

Autodessys: Autodessys has long been a player in the modeling and rendering space, and this year the company was on hand to show off the latest version of its form·Z flagship product. The basic, run-of-the-mill form·Z is a general-purpose solid and surface modeler with an extensive set of 2D/3D form-manipulating and sculpting capabilities. It’s a useful tool for architects, landscape architects, urban designers, and all other design fields that deal with 3D spaces and forms. The new form·Z 6 RenderZone Plus adds photorealistic rendering based on the LightWorks rendering engine. It offers three levels of rendering: simple, z-buffer, and raytrace. You can start developing the image of a 3D model at a simple level and gradually turn on features to make it more photorealistic. That’s the “in a nutshell” explanation, but the product also has a slew of other new features, including shaders, decals, and predefined trees.

Blender 2.5: Say the words “free” and “open source” and you’re going to attract not just a large number of users but a large number of developers looking for an app that is flexible and doesn’t tie them into the constraints of a commercial product. At SIGGRAPH this year, the Blender Foundation, the organization behind the free and open source 3D creation suite Blender, announced the result of the Blender 2.5 project, bringing forth a fully customizable user interface layout system, with advanced access to all the tools and options Blender offers. All of the interface elements in Blender are now defined using a scriptable layout engine, with scripted access to the data Blender provides. Keyboard layouts for hot keys and user input can be changed on the fly, including the definition of macros and the reapplication of tools with adjusted settings. Blender 2.5 also provides an upgraded animation system: now every setting and option can be animated via function curves, higher-level actions, the non-linear animation mixer, or scripting. Blender 2.5 will be available in October of this year.
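To make the scriptable layout idea concrete, here is a hedged sketch of a custom panel written against Blender’s Python API (the API settled over the course of the 2.5 series, so details varied between builds; this reflects its eventual shape):

import bpy

class VIEW3D_PT_quick_tools(bpy.types.Panel):
    """A custom 3D View panel declared entirely in script."""
    bl_label = "Quick Tools"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'

    def draw(self, context):
        layout = self.layout
        col = layout.column(align=True)
        # Buttons that invoke built-in operators
        col.operator("mesh.primitive_cube_add", text="Add Cube")
        col.operator("object.shade_smooth", text="Shade Smooth")
        # Any animatable property can be exposed in the layout
        layout.prop(context.scene, "frame_current")

bpy.utils.register_class(VIEW3D_PT_quick_tools)

Because panels, hot keys, and macros are all defined at this level, the interface can be reshaped without touching Blender’s C code.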

Dimensional Imaging: 3D imaging seems to be progressing by leaps and bounds, and Dimensional Imaging has taken it to the point of needing only a minimal number of snapshots. The photo below shows a 3D image captured by taking three photos. Facial motion capture is accomplished by taking additional photos of the subject making common expressions. The company’s expression capture system is now being evaluated by game developer Valve.

Envisiontec: There were a number of 3D printers at the show (and some had more attractive displays), but a company that was showing off one of the most impressive technologies was Envisiontec. The company’s line of 12 different machines is capable of producing parts in 11 different media for applications as disparate as jewelry, toys, and industrial, medical, and dental parts. Envisiontec has over 20 patents and patent applications pending worldwide.

Autodessys showed off its flagship form·Z product; the image on the right is an example of the newly released Blender 2.5 from the open-source Blender Foundation.

Dimensional Imaging showed off its facial expression solution; Envisiontec demonstrated its 3D printing capabilities.

e-on software: Many say that if you want to succeed, you pick your niche and stick with it. At first glance, that seems to be the mantra of e-on software, devoted to giving developers the means to create digital nature scenes via its Vue applications. At SIGGRAPH, the company was showing off its new Vue 8 (a step up from 7) products. These include the Ozone 5 atmosphere plug-in for 3ds Max, Maya, XSI, LightWave, and Cinema 4D. Also on display was Vue 8 Infinite, a tool for architectural visualization, design, illustration, and animation. Upping the ante was Vue 8 xStream, a product for developing natural environments in 3ds Max, Maya, XSI, and Cinema 4D.

Intel: Aside from classes designed toward taking advantage of Larrabee, Intel was primarily showing off the speed of Nehalem by having content creation software companies strut their stuff on the chip. The Intel Threading Building Blocks team was also there to talk about some new things, along with members of the NASA Computational Technologies Project team, which employs advanced computers to further understanding of the earth, sun, and solar system. Software vendors demonstrating Intel’s technology included Autodesk, Luxology, Cakewalk, Mass Animation, Adobe, and Corel. Hardware demos were given by HP, BOXX, Lenovo, Sony, ATI, Mellanox, and others.

Both e-on and Intel showed off a broad range of solutions.

Matrix Engine: Looking for multiplatform 3D? MatrixEngine from Japan was showing a development environment for creating 3D animation and multimedia content and applications that run on web browsers, netbooks, older PCs, and even mobile phones, handhelds and consumer electronics. The company says it minimizes the need for programmers and programming, and shortens development projects from months to weeks. So far, it’s been used to create 3D mobile games, maps, content, product user interfaces, and a number of business and consumer applications. Demos of products created with the tools were impressive, especially on smaller devices where 3D is a challenge.

Nvidia: Nvidia showed off a number of new and not-so-new technologies, but the main focus of its showcase was its Quadro Plex scalable visualization solutions. Using Nvidia SLI Mosaic technology, Quadro Plex scalable systems let both the desktop and any professional application scale across up to eight displays and drive immersive stereoscopic 3D environments. Each Quadro Plex system uses dual Quadro FX 5800 GPUs and a combined 8GB of graphics memory. Performance can be scaled even further by connecting two Quadro Plex systems, with a total of 16GB of memory, to a single workstation. An optional rack-mount kit enables Quadro Plex systems to be mounted within any standard 19-inch rack environment while taking up 3U of vertical space.

Pixel Farm: UK-based Pixel Farm was on hand, showing off the latest version of its match-moving application. The company, however, likes to say that its PFTrack is not merely another single-purpose match-moving app. It can be used with film or video sequences regardless of their resolution. Others seem to agree that it’s a little different from its competition: it’s currently used by The Mill, UbiSoft, Riot, Stan Winston Studios, Digital Domain, Cinesite, Double Negative, The Orphanage, CORE Digital, MPC, and Animal Logic. Featured at the show was the company’s new PFPipe C API for reading PFTrack projects and obtaining their data – currently camera data and distortion data. Initially, this will allow people to establish direct integration of PFTrack data into third-party applications and in-house pipelines and technologies. The first release of PFPipe will provide access to both PFTrack’s camera data and distortion models, as well as providing all project links to footage and clips used in the project file itself. It’s free to PFTrack users.

studio|gpu: The need for speed is ever present in 3D rendering – hence the ubiquitous use of render farms that rely on multiple machines to take the burden off the user’s desktop. According to studio|gpu, that’s obsolete technology, and the company says that rendering on the desktop can be reduced from hours to minutes and minutes to seconds using its MachStudio Pro app. Showcased at SIGGRAPH 2009 were the features and performance benefits of the company’s application using the ATI FirePro V8750 3D workstation graphics accelerator. Using this combination of software and hardware, studio|gpu says that MachStudio Pro can render projects 196% faster than the previous version of the product. MachStudio Pro is now said to perform up to 900 times faster than traditional rendering packages.

Smithmicro: Smithmicro was out on the floor showing off its new Poser 8 3D character creation app. Poser 8 lets you work with avatars and all types of 3D characters. It also offers eight new, ready-to-pose, fully-textured humans of different ethnicities and body types to give you a starting point in designing your characters. The new Poser content library provides more than 2.5 gigabytes of human and animal figures accompanied by a wide range of accessories including hair, clothing, sets and real-world props and elements such as lights and cameras to produce complete scenes.

SpeedTree – build a forest, make a tree: Down the aisle, a turn around the corner and what have we here, realistic trees, weeds, and grasses, undulating on the large screen monitor. SpeedTree combines procedural algorithms with the ability to hand tool, giving developers the ability to quickly create optimized foliage. At SIGGRAPH 2009, the company announced a deal for SpeedTree Cinema with Industrial Light and Magic for work in a “high profile film.” The software also includes a huge library of plants. The company’s plants are featured in and IV among others. http://www.speedtree.com.

Wacom: Wacom has probably been making tablet devices for, well, as long as there have been tablet devices. Many artists prefer to get creative in a high-tech way that emulates the feel of using a pen and paper. At SIGGRAPH, the company was showing off its new Cintiq 12WX. This new tablet has a 12.1-inch display at 1280x800 pixels. The pen technology provides 1,024 levels of pressure sensitivity and 5,020 lines of resolution. Its video control unit provides video, USB, and power connections for compatibility with notebooks, desktops, and workstations.

SpeedTree’s software enables realistic plant-life among other things. Wacom’s pen-based solutions provide an easy input format for many 3D artists.

Education and recruiters: Since SIGGRAPH is really more a learning experience than a product showcase (though the latter is certainly an important aspect), more than 20 educational institutions were on the show floor, letting those interested take a look at what they had to offer. Not to be overshadowed, several high-end production houses were on hand looking for talent – or maybe we should say they were looking for future talent. Google, for example, had no openings but was taking names. The same was true of Rhythm and Hues, which had “possibly one opening sometime in the near future”. Seeing the glass as half full, you could say even that’s a good sign. With hiring at a standstill, it’s still worth it for these and other companies to put up a big booth to look for that one future employee.

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

This article was first published in Jon Peddie’s Tech Watch newsletter on August 17, 2009. http://www.jonpeddie.com

Web3D Symposium June 16-17, 2009, Darmstadt, Germany

In this first of two reports, Phillip Hill covers papers from Northwestern Polytechnical University/Liaoning Shihua University, TU Graz, University of Brighton, CINECA/CNR ITABC, and National University of Ireland

Web-based Presentation of Semantically Tagged 3D Content for Public Sculptures and Monuments Karina Rodriguez-Echavarria, David Morris, and David Arnold, University of Brighton, Brighton, England

The documentation and presentation of 3D digital content is a critical but non-trivial task for the cultural heritage sector. Curators are often faced with the task of cataloguing every piece of heritage and maintaining the resulting information in such a way that it is suitable for scholarly research and public dissemination; the integration of 3D content poses additional challenges. This paper introduces research conducted to integrate semantically tagged 3D content into the catalogue acquired within the Public Monuments and Sculpture Association’s National Recording Project (NRP) in the UK. This research involves the combination of graphical APIs and semantic technologies in order to integrate 3D content with semantic tags in a web browser. Although the initial results are still experimental, it is expected that they will support scholarly research and public dissemination by presenting a variety of integrated documentation on the project website: http://www.publicsculpturesofsussex.co.uk.

The mechanism for tagging was developed via an application called Tagg3D, which links parts of a 3D geometry to CIDOC-CRM URIs in order to store entry points to a metadata repository. Although this metadata is currently structured using the standard CIDOC-CRM, it is possible to use other ontologies as long as the mechanism provided for querying uses a SPARQL endpoint. The current architecture uses the D2R server to serve the CIDOC-CRM-mapped metadata, extracting the data from MySQL via a mapping file. Within the testing website, 3D models can be viewed via the 3D plug-in, which was developed using Qt4 and OpenSG 2.0. 3D content is displayed either as a single model or as a more complicated model comprised of multiple sections connected via a scene-graph. All of this combines into a simple-to-use website which, after the installation of a plug-in, allows a user to interact with a 3D model of a sculpture whilst exploring both textual and semantic information relating to it.
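The paper’s query mechanism is a SPARQL endpoint over CIDOC-CRM data; as a hedged sketch (the endpoint URL is hypothetical, and the SPARQLWrapper library is a common present-day choice rather than necessarily the authors’), a client-side query might look like:

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
    SELECT ?object ?note WHERE {
        ?object a crm:E22_Man-Made_Object ;
                crm:P3_has_note ?note .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding pairs a monument URI with one of its catalogue notes
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["object"]["value"], "-", row["note"]["value"])

E22_Man-Made_Object and P3_has_note are genuine CIDOC-CRM terms; which properties the NRP catalogue actually exposes is not stated in the summary.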

Web page showing the 3D model and semantic information

Digital Oil and Gas Pipeline Visualization using X3D Li Zhen-pei, Northwestern Polytechnical University, Xi’an, China Li Ping, and Wu Ming, Liaoning Shihua University, Liaoning, China

In this study, the researchers use Extensible 3D (X3D), a software standard for defining interactive web- and broadcast-based 3D content integrated with multimedia, to build a web-based, interactive, dynamic 3D virtual oil and gas pipeline system. The implementation process and methods for 3D terrain modeling, pipeline modeling, pipeline affiliated-facilities modeling, and real-time display of pipeline parameters through integrating X3D with Java and OPC are also introduced. The digital oil and gas pipeline visualization system provides an effective way for pipeline staff and managers to visually fetch pipeline information through the web. The system also provides useful functions such as over-standard early warning and linkage alarms for routine pipeline administration and maintenance. The researchers first generated a six-level earth model with the Rez tools, and then overlapped the 3D terrain model of the region containing the pipeline on the earth model, to show the location of this region on the earth. The 3D earth model is shown in Figure 1. The pipeline model is built with the IndexedLineSet node of the Rendering component, which represents a 3D geometry formed by constructing polylines from 3D vertices specified in the coordinate field; the vertices can also be specified by the GeoCoordinate node. The point field in the GeoCoordinate node holds the vertices of the polylines; every vertex contains actual geographical coordinates, so connecting these vertices yields a polyline endowed with geographical meaning. The coordinates of the vertices are sampled from the actual geographical coordinates of the pipeline, and the sampling frequency depends on need; the sampling frequency can be increased to obtain more points so that the pipeline model coincides more closely with the actual pipeline location. The pipeline model and the additional six-level 3D terrain model overlapping the earth model are shown in Figure 2.
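As an illustration of the pipeline geometry described above (the coordinates are invented; real ones would be sampled along the pipeline route), a small Python script can emit the corresponding X3D fragment:

# Sampled (latitude, longitude, elevation) vertices along the pipeline
samples = [
    (34.25, 108.95, 400.0),
    (34.27, 108.99, 395.0),
    (34.30, 109.04, 390.0),
]

points = " ".join(f"{lat} {lon} {h}" for lat, lon, h in samples)
index = " ".join(str(i) for i in range(len(samples))) + " -1"  # -1 ends the polyline

# IndexedLineSet draws the polyline; GeoCoordinate gives its vertices
# real geographic meaning, as the paper describes
print(f"""<Shape>
  <IndexedLineSet coordIndex="{index}">
    <GeoCoordinate point="{points}"/>
  </IndexedLineSet>
</Shape>""")

Increasing the sampling density simply adds vertices to the point field, tightening the fit between the modeled polyline and the real pipeline.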

Figure 1 (left): Six-level earth model generated by Rez, displayed in the Xj3D browser; Figure 2 (right): The additional six-level 3D terrain model overlapping the earth model. The blue line is the pipeline model built with the IndexedLineSet node.

In the exterior pipeline roaming system, they need to add scenes of important affiliated facilities such as gas stations, so users have an opportunity to access detailed information about these facilities. They build rough scenes of these facilities and place them in the 3D terrain model with the GeoLocation node according to their geographical coordinates. Users can enter the 3D virtual scenes of these facilities by clicking the rough objects in the 3D terrain scenes. This project is just one aspect of the application of X3D; programmers can expand their applications according to their requirements thanks to the high extensibility of X3D. X3D is still being improved, and the researchers believe that it will become an ever more important technology for 3D web applications and virtual reality applications.

3D Modeling in a Web Browser to Formulate Content-Based 3D Queries René Berndt, Sven Havemann, and Dieter Fellner, TU Graz, Graz, Austria

The researchers present a framework for formulating domain-dependent 3D search queries suitable for content-based 3D search over the Web. Users are typically not willing to spend much time creating a 3D query object; they expect to quickly see a result set in which they can navigate by further differentiating the query object. This system innovates by using a streamlined parametric 3D modeling engine on both the client and server side. Parametric tools have greater expressiveness: they allow shape manipulation through a few high-level parameters, as well as incremental assembly of query objects. Short command strings are sent from client to server to keep the query objects on both sides in sync. This reduces turnaround times and allows asynchronous updates of live result sets. The presented framework opens many possibilities for future research. The most promising perspective is that they can now develop and test new approaches for domain-dependent 3D search engines. Classical 3D retrieval is very good at discriminating between shape classes, while domain-dependent tools obviously work best when used within a class of similar models, so the two approaches ideally complement each other. They envisage a system where very generic 3D search formulation tools are used in the beginning of a query; once the intended domain becomes clear, the user interface switches to the appropriate set of domain-dependent modeling tools. Technically this is not a problem with this framework, and it would combine the best of the two approaches. Another interesting avenue of research is to evaluate the modeling sessions for the query objects: they give valuable insight for improving the modeling tools, and might also help solve the question of how average people deal with shape creation in general.
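The paper’s summary doesn’t spell out its command syntax, so the following sketch is purely illustrative (all names are hypothetical) of the idea that short command strings keep the client- and server-side query objects in sync:

class QueryObject:
    """A parametric query object mirrored on client and server."""
    def __init__(self):
        self.params = {}

    def apply(self, command):
        # A short command string such as "set cup.height 12.5"
        op, name, value = command.split()
        assert op == "set"
        self.params[name] = float(value)

client, server = QueryObject(), QueryObject()

def user_edit(name, value):
    cmd = f"set {name} {value}"
    client.apply(cmd)  # update the local preview immediately
    server.apply(cmd)  # in reality the string is sent over the wire, and
                       # the server re-runs the search asynchronously

user_edit("cup.height", 12.5)
assert client.params == server.params  # both sides stay in sync

Sending a few bytes per edit, rather than re-uploading the whole query geometry, is what makes the short turnaround times and live result sets feasible.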

Domain-dependent parametric query objects. Top row: configurable objects with a fixed number of parameters. Middle row: objects with configurable parts, e.g., control vertices, or hierarchies: buildings can be described on different levels of detail, which can be realized by part differentiation: rough block, type with dimensions, facade. Bottom row: free design of query objects using parametric tools, e.g., blocks on a grid, CAD-like drawing, or polygon drawing, used as manipulator for building ground polygons or castle walls.

Virtual Rome: a FOSS approach to WEB3D Luigi Calori, CINECA, Bologna, Italy; Carlo Camporesi and Sofia Pescarin, CNR ITABC, Rome, Italy

The goal of the VirtualRome project (www.virtualrome.it) is to provide a web-based infrastructure for the distribution, collection, annotation, and sharing over the web of 3D interactive content such as actual and reconstructed landscapes. For this project the researchers have developed a framework for the integration of 3D real-time applications within web browsers; the full functionality is currently available only for Firefox on Windows, and limited versions are available for Firefox on Linux and for Internet Explorer. They present a completely FOSS approach to the web deployment of a 3D database consisting of landscape data at different times and different resolutions.

The researchers found that the most time-consuming task was the browser integration and the associated testing: the amount of code itself is quite limited, but it interacts with many other aspects of the web application, such as user interaction (reload, resize, page change, tab opening, multiple instances), which tends to generate browser hangs and/or memory leaks when two distinct applications, the browser and the rendering engine, have to cooperate in the same address space. One future direction, as shown by a Google plug-in, is to execute the rendering engine in a separate process: the main rendering application would then run in a separate process that inherits windows from the browser but cannot crash the browser itself if problems occur. They also note that the integration and testing code is largely independent of the application and of the rendering engine used, so it would be a good candidate for an open source project involving several FOSS applications and rendering engines.

Renderings of the current and ancient landscape from the VirtualRome project

2LIP: Filling the Gap Between the Current and the Three-Dimensional Web Jacek Jankowski, and Stefan Decker, National University of Ireland, Galway, Ireland

In this paper, the researchers present a novel approach, the 2-Layer Interface Paradigm (2LIP), for designing simple yet interactive 3D web applications – an attempt to marry the advantages of a 3D experience with the advantages of the narrative structure of hypertext. The hypertext information, together with graphics and multimedia, is presented semi-transparently on the foreground layer. It overlays the 3D representation of the information displayed in the background of the interface. Hyperlinks are used for navigation in the 3D scenes (in both layers). The researchers introduce a reference implementation of 2LIP: “Copernicus – The Virtual 3D Encyclopedia”, which can become a model for building a 3D Wikipedia. Based on the evaluation of Copernicus, they show that designing web interfaces according to 2LIP provides users with a better experience while browsing the Web, has a positive effect on visual and associative memory, improves spatial cognition of the presented information, and increases users’ overall satisfaction without harming interaction. Images and videos that illustrate the research presented in this paper, evaluation materials, implementation details, and the user guide can be found at: http://copernicus.deri.ie.

Articles about the Irish Heritage Park in both (a) the Copernicus prototype and (b) MediaWiki


Projection Summit June 15-16, 2009, Orlando, Florida

In this report, Phillip Hill covers presentations from SBG Labs, National Taiwan University of Science and Technology/ALVIS Technologies, Microvision, Digital Projection Limited, and Flexible Picture Systems

Stereoscopic Projection: A Comparison of Approaches and Addressing the Challenge of Emerging Markets Dermot Quinn, Digital Projection Limited, Manchester, England

The presentation first summarizes the pros and cons of existing active stereo techniques and considers their applicability to different markets. It explores the technical and cost challenges to be overcome to bring active stereo within the reach of broader markets such as industrial design, e-cinema, and home theater. Quinn pointed out that all active stereo approaches are inefficient. The key to new markets is new lamp technologies: mercury in the short term, lasers and LEDs in the medium term. Glasses are cheaper and resolve the dilemma of silver screen installation. There is a synergy with solid-state light sources, which leads to faster switching. Distribution and delivery are presently too expensive; e-cinema needs a single-pipe, low-cost JPEG solution. Open architecture is critical, and a home cinema 3D standard is needed that is cross-compatible with gaming and movies and backward compatible with Blu-ray, he said.

Efficiency comparison of various 3D systems


LED Ultra-portable Projection Display Light Engine Utilizing Electrically Switchable Bragg Gratings Jonathan Waldern, SBG Labs, Sunnyvale, California

SBG Labs has developed an LED ultra-portable projection display light engine architecture utilizing electrically switchable Bragg gratings (SBGs) in which “digital optics” are recorded. This optical platform technology, called a DigiLens, is enabled by SBG Labs’ unique nano-composite material system called a “reactive monomer liquid crystal mix” (RMLCM). SBG Labs’ architecture addresses the critical packaging and efficiency requirements for a new emerging class of daylight-bright, ultra-compact projectors, while overcoming the light collection efficiency limitations of conventional optical architectures. The ability of the DigiLens to time-sequentially combine light from multiple widely separated LEDs, while separately correcting the individual R, G, B optical aberrations, provides a compact, thermally efficient method of combining light – even from two LEDs of the same color. The unique polarization management properties of the DigiLens also enable an ultra-compact, high-brightness, low-cost light engine for 3D projection displays.

The DigiLens holographically records replicas of the distorted wavefronts generated by actual condenser optics, accomplishing aberration correction and compensating distortions. This dramatically reduces light engine size, complexity, and cost whilst increasing LED collection efficiency and hence brightness.

The DigiLens LED condenser provides color sequential illumination for an ultra-portable projector


Novel Laser-MEMS Projection System Facilitates Head-up Display on Transparent Materials Chih-Hsiao Chen, National Taiwan University of Science and Technology, Taipei, Taiwan; Giora Griffel, ALVIS Technologies, Cranbury, New Jersey

This presentation describes a novel laser-MEMS-based projection system capable of displaying high-resolution video and data on transparent materials. The produced images can be viewed from virtually any position around the screen. The laser speckle phenomenon, which used to be the bottleneck of laser display technologies, is totally resolved. Proprietary optical design and scanning algorithms result in unprecedented brightness and image clarity. Applications include automobile head-up displays, infrared imaging, and large-scale projection displays.

Wavelength conversion as proposed by ALVIS Technologies

3D Content Meets 3D Laser Projector Ben J. Averch, Microvision, Redmond, Washington

3D content is proliferating rapidly. Movies, games, and increasingly live sports and concert films are being filmed in 3D. One unique challenge facing this emerging category is the lack of in-home 3D displays. 120 Hz LCD panels and plasma screens typically require shutter glasses for viewing 3D content, but next-generation display technologies employ passive 3D glasses, which are lighter weight, lower cost, and more acceptable to the wearer. Microvision’s laser scanning pico projection platform enables 3D content in the home, viewed through lightweight passive glasses, without the purchase of a new, large, expensive flat panel monitor. During this presentation, Averch addresses the burgeoning 3D content market and the unique value proposition for a mobile 3D laser projector.

A Novel Stereographic Viewing Solution Paul Carey, Flexible Picture Systems, Los Angeles, California

All projection options for stereographic viewing ultimately suffer from compromise. Active, frame-sequential systems have temporal effects. Passive eyewear and active selection can add crosstalk during the transition. Chromatic selection with current techniques is inefficient, requiring high light output. Dual-projector passive systems, whether polarized or chromatic selection, offer the best performance, providing the two projectors can truly behave as one. At infinity, both eyes see the same image, and hence the presentation argues for absolute registration. This presentation describes an electronic registration system, suitable for all applications from academic through cinema, that ensures the best stereographic viewing experience. Electronic sub-pixel alignment of two projectors fitted with passive selection filters provides a consistent stereographic viewing experience, relatively free of visual compromise.

Proper overlay achieved by correcting geometry: individual pixels mapped onto the viewing surface are rotated, translated, and scaled according to meshes defined in the alignment process

SID Display Week Symposium June 2-5, 2009, San Antonio, Texas

In this second report, Phillip Hill covers papers from Light Blue Optics/BMW, NEC Corporation, National Chiao Tung University, Kyung Hee University/Pavonine Korea, Inc., LG Display, and Samsung Electronics/Samsung Mobile Display

High Resolution Autostereoscopic 3D Display with Scanning Multi-Electrode Driving Liquid Crystal Lens Yi-Pai Huang, Chih-Wei Chen, and To-Chiang Shen, National Chiao Tung University, Hsinchu, Taiwan

An autostereoscopic display with full resolution for both 2D and 3D images and wide viewing angle potential is proposed by combining an “active scanning film” with a fast-response OLED. To realize the active scanning film that projects images to different directions sequentially, a multi-electrode driving liquid crystal (MeD-LC) lens is utilized. Compared with traditional LC lens structures, the MeD-LC lens can not only be switched on/off but also moved horizontally to “scan” and project images. Additionally, the MeD-LC lens is more competitive in terms of structural simplicity and the variety of lens curvatures it can generate. Conventionally, pixel resolution is sacrificed to yield more views; the researchers therefore use the active scanning film to exploit the time domain and “scan” the images at a high frame rate, efficiently recovering the resolution that would otherwise be traded for viewing angle. However, this high-frame-rate device needs a fast response time to display the various images sequentially; consequently, an OLED display with microsecond switching time is utilized. The final schematic is shown in Figure 1. As a result, a compact flat panel can yield a 3D image in full view without degrading the image resolution. By applying the operating voltages sequentially, the LC layer in the MeD-LC device reorients into a lens-like shape, and the lens can move (shift) in the horizontal direction to project the images to different viewing angles without degrading the resolution, as shown in Figure 2.
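A minimal scheduling sketch of this scan (the device objects and view count are hypothetical; the actual drive electronics are analog voltage sequences, not an API):

    NUM_VIEWS = 4  # assumed number of scanned directions

    def scan_cycle(lens, panel, view_images):
        # view_images: one full-resolution frame rendered per viewing direction
        for step, image in enumerate(view_images):
            lens.shift_to(step)   # reprogram the MeD-LC electrode voltages (hypothetical call)
            panel.show(image)     # fast-response OLED subframe (hypothetical call)

Because every view gets the whole panel for its subframe, resolution is traded against frame rate (NUM_VIEWS x 60 Hz) rather than against pixel count.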

Figure 1 (on the left): Scanning 3D system with active scanning film and fast response OLED; Figure 2 (on the right): Scanning MeD-LC film


Novel Human-Machine Interface (HMI) Design Enabled by Holographic Laser Projection Edward Buckley, Light Blue Optics, Colorado Springs, Colorado; Dominik Stindt, Light Blue Optics, Cambridge, England; Robert Isele, BMW, Munich, Germany

Despite the current proliferation of in-car flat panel displays, designers continue to investigate alternatives to flat, rectangular thin-film transistor (TFT) panels, principally to obtain differentiation by freedom of design using, for example, free-form shapes, round displays, flexible displays or mechanical 3D solutions. A perfect demonstration was provided at the 2008 Paris Motor Show by the BMW Mini Center Globe, a novel instrument cluster design that combines lighting, a circular flat panel and a holographic laser projector provided by Light Blue Optics (LBO) to redefine the state of the art in human-machine interface (HMI). In this paper, the authors show how the incorporation of LBO’s holographic laser projection technology allows the construction of a unique display like the Mini Center Globe, and how such a combination of technologies represents a significant advance in the current state of the art in automotive displays. Due to the diffractive nature of LBO’s technology, which by definition exerts accurate control over the optical wavefront, it is possible to correct for aberrations caused by the projector optics by appropriate modification of the hologram patterns. It is therefore possible to build a projector using simple, cost-effective optical elements and correct for any resultant aberration in software; in the same way, the optical subsystem can be designed with a far wider range of tolerances than would ordinarily be possible. Not only does this allow cost-effective assembly, but it also provides a degree of insensitivity to process tolerances, which is crucially important when integrating optical subsystems into, for example, automotive instrument clusters. A demonstration of this powerful capability is provided in Figure 1. The laser spot shape of Figure 1(a), after propagation through the projector optics, demonstrates that significant aberration is present. As a result, the quality of the projected images (b) and (c) is severely impacted. Appropriate correction for the optical aberrations, however, results in a laser spot that is almost diffraction limited (d), leading to recovery of the image fidelity in (e) and (f).
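One hedged way to write this correction down (my notation, not LBO's published formulation): if the projector optics are measured to add an aberration phase $\phi_{ab}(u,v)$ at the microdisplay plane, each computed hologram phase $h(u,v)$ can be pre-compensated as

$h_{corr}(u,v) = \left[h(u,v) - \phi_{ab}(u,v)\right] \bmod 2\pi$

so that propagation through the real optics cancels $\phi_{ab}$ and restores the near-diffraction-limited spot of Figure 1(d).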

Figure 1: Aberrations caused by the optical system result in an imperfect projected image (c). Correction is performed by appropriate modification of the hologram patterns leading to near diffraction-limited spot (d) and projected image (f).


One of the advantages of the LBO technology is the ability to realize novel projection geometries, which is one of the key requirements for next-generation automotive instrument clusters. By combining the wide depth of field, wide throw angle, distortion correction and aberration correction capabilities of the projector it is possible to realize some truly unique projection geometries. Due to the phase-modulating nature of the light engine, the luminous flux of the projector remains unchanged despite the geometry correction. This is in contrast to conventional imaging systems, which will always block light due to the unused (cropped) pixels. Figure 2 gives a succinct demonstration of these capabilities. In (a), LBO’s projection technology is demonstrated in wide-angle (90°), front-projection mode; the image diagonal is 13.5 inches and the horizontal distance from the projector aperture to the screen is approximately 6 inches. Using the same projector optics, however, it is possible to project in a table-down mode; by appropriate pre-distortion, shown in (b), the table down operation of (c) is obtained. The image has a diagonal of 9 inches and the vertical distance of the projector from the surface is approximately 3.5 inches.

Figure 2: The LBO projection technology can demonstrate truly novel projection geometries. Both front (a) and table-down projection geometries (c) can be achieved using the same projection optics.

A photograph of the Mini Globe display concept in operation, showing images produced by LBO’s laser projector. On the right is an artist’s rendition of the BMW Mini Center Globe instrument cluster, incorporating circular TFT, laser projection and lighting technologies

The authors have demonstrated that current HMI limitations imposed by flat panel displays can be overcome by combining several novel display technologies. In particular, the advantages provided by LBO’s holographic laser projector allow the construction and integration of a curved instrument cluster display. By transforming displays to real 3D components, the BMW Mini Center Globe has redefined the state of the art in human-machine interaction and created a new class of automotive display.


Liquid Crystal Privacy-Enhanced Displays for Mobile PCs Junichiro Ishii, Goro Saito, Masao Imai, and Fujio Okumura, NEC Corporation, Kanagawa, Japan

NEC has developed a 12.1-inch, 1024x768 pixel liquid crystal privacy-enhanced display (LC-PED) for mobile PCs. Only an authorized viewer with shutter glasses can see a private image, while other viewers see a public image. The privacy-enhanced display system is based on high-frame-rate image switching technology. In general, it is difficult for LCDs to achieve high-speed image switching because of their slow response. A high-speed driving scheme and a high-frame-rate overdriving method made it possible to realize this function in LCDs. The figure shows the principle of the LC-PED. The private image, the mask image and the public image are displayed sequentially at high speed. Since the mask image is an inverted version of the private image, the sum of the two produces a plain gray image. The public image is added to this plain gray image in the time domain; therefore, a viewer without the shutter glasses perceives a grayed public image. The shutter glasses are controlled to be open while the private image is displayed and closed while the other images are displayed. Only an authorized user with the shutter glasses can view the private image.
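The arithmetic behind the "plain gray" claim is easy to verify (a minimal sketch with synthetic 8-bit images; the array names are mine):

    import numpy as np

    # uint16 so the three-frame sum cannot overflow
    private = np.random.randint(0, 256, (768, 1024), dtype=np.uint16)
    public  = np.random.randint(0, 256, (768, 1024), dtype=np.uint16)

    mask = 255 - private                         # inverse of the private image
    perceived = (private + mask + public) / 3.0  # time average seen without glasses

    # private + mask is a flat 255 everywhere, so the unaided viewer sees the
    # public image compressed onto a gray pedestal and never the private one
    assert np.allclose(perceived, (255.0 + public) / 3.0)

The shutter glasses simply gate out the mask and public subframes, leaving only the private image for the authorized viewer.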

Principle of the LC-PED

Auto-stereoscopic TFT-LCD with LC Barrier on Wire Grid Polarizer Dong Han Kang, Beom Seok Oh, Jae Hwan Oh, Mi Kyung Park, Hyo Joon Kim, Sung Man Hong, Ji Ho Hur, and Jin Jang, Kyung Hee University, Seoul, South Korea; Sung Jung Lee, Kyo Hyeon Lee, and Kwang Hoon Park, Pavonine Korea, Inc., Incheon, South Korea

The researchers have developed a 5.5-inch auto-stereoscopic TFT-LCD (640x480) with a liquid crystal parallax barrier having a pitch of 58μm on the back of a wire grid polarizer substrate. The proposed structure yields a 2D/3D-convertible LCD, meaning that it can provide 3D display as well as conventional 2D display. The display has the advantages of short viewing depth and thinness. Figure 1 shows the principle of the conventional (a) and proposed (b) structures for auto-stereoscopic 3D displays with an LC parallax barrier. The proposed 3D display uses a conventional TFT-LCD panel and an LC parallax barrier formed by placing a wire grid polarizer on the back of the color filter panel, so it is not necessary to laminate a separate LC parallax barrier panel onto the TFT-LCD panel. In addition, the proposed structure has the LC material between the TFT-LCD with its wire grid polarizer and a glass plate carrying an ITO pattern for LC switching. The thickness can therefore be less than 2mm, at least one glass thickness thinner than conventional 3D TFT-LCDs; three glass substrates are used for this 3D display whereas a conventional one uses four. Figure 2 shows a photograph of an image displayed on the 5.5-inch auto-stereoscopic display. The pixel size is 174x58μm and the aperture ratio is about 41%. At a viewing distance of 360mm, the 3D images can be seen well. Because the right and left pixels are located in alternating columns, 3D resolution is reduced to half in the horizontal direction. Note that 3D display brightness is proportional to the LC slit ratio of the barrier; increasing the slit ratio also increases crosstalk, so there is a tradeoff between 3D brightness and crosstalk.
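As a rough sanity check on those numbers (a back-of-envelope sketch; the 65mm eye separation and the textbook two-view barrier relations are my assumptions, not values from the paper):

    p = 0.058   # subpixel pitch in mm (58 microns)
    D = 360.0   # design viewing distance in mm
    e = 65.0    # assumed interocular distance in mm

    g = p * D / e              # barrier-to-pixel gap: about 0.32 mm
    p_b = 2 * p * D / (D + g)  # barrier pitch, just under 2*p: about 0.116 mm
    print(round(g, 3), round(p_b, 4))

A gap of roughly a third of a millimeter is consistent with the sub-2mm overall thickness claimed for the stacked structure.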

Figure 1, (on the left): The principle of the conventional (a) and proposed (b) structures for an auto-stereoscopic 3D display with LC parallax barrier; Figure 2, (on the right): The photograph of an image displayed on a 5.5-inch auto-stereoscopic display with LC parallax barrier on wire grid polarizer

World’s First 240Hz TFT-LCD Technology for Full-HD LCD-TV and its Application to 3D Displays Bong Hyun You, Heejin Choi, Dong Gyu Kim, and Nam Deog Kim, Samsung Electronics, Chungcheongnam-Do, South Korea; Sang Soo Kim, and Brian H. Berkeley, Samsung Mobile Display, Gyeonggi-Do, South Korea

A full-HD LCD TV has been enhanced by increasing the panel’s frame rate to 240Hz, and this 240Hz driving technology has been applied to 3D TV. Compared to a 120Hz LCD, the 240Hz LCD has two challenges: 1) half of the available pixel charging time, and 2) three times as many interpolated frames. A new architecture has doubled the available pixel charging time by means of a half-gate two-data driving scheme and a charge-shared super PVA pixel structure. Additionally, a 240Hz ME/MC algorithm has been implemented on the LCD module to convert 60Hz incoming frames into 240Hz frames. Motion picture response time (MPRT) of the new LCD TV has been measured as 4.7ms, which is similar to the MPRT for a CRT TV.

Additionally, in the proposed 240Hz LCD TV, it is also possible to deliver 3D functionality without any loss of resolution by using LC shutter glasses and providing the left-eye and right-eye images by time-sequential driving, as shown in Figure 1. Conventional 120Hz LCD driving is not suitable for a 3D display using LC shutter glasses because of crosstalk due to progressive scanning. With the 240Hz LCD TV, however, it is possible to minimize the crosstalk between the left-eye and right-eye images by image processing using twice as many frames as the 120Hz driving scheme. Therefore, the 240Hz LCD TV has many attractive points for the consumer: it not only provides superior motion picture image quality, but can also provide an uncompromised 3D experience using LC shutter glasses. Pixel data are actually updated 240 times per second, thereby avoiding ghost images and other issues associated with so-called 240Hz hybrid techniques. The MPRT of the new 240Hz LCD TV has been measured at 4.7ms, providing a level of motion picture quality comparable to that of CRTs. Moreover, this 240Hz LCD driving technology makes it possible to deliver a full-resolution 3D display using LC shutter glasses, thereby enabling 3D to become a mainstream technology for television. This 240Hz LCD TV panel, the world’s first, was demonstrated at IFA’08 and at KES’08, as shown in Figure 2.
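The charging-time pressure is easy to quantify (my arithmetic, assuming 1080 gate lines and ignoring blanking):

$t_{row}(120\,\mathrm{Hz}) = \frac{1}{120 \times 1080} \approx 7.7\,\mu\mathrm{s}, \qquad t_{row}(240\,\mathrm{Hz}) = \frac{1}{240 \times 1080} \approx 3.9\,\mu\mathrm{s}$

Writing two rows at once through the half-gate two-data scheme restores roughly the 7.7µs budget at 240Hz, which is the doubling of charging time the new architecture claims.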

Figure 1: 3D display based on 240Hz driving technology with LC shutter glasses

Figure 2: World’s first 240Hz LCD TV exhibited at Korea Electronics Show 2008

A Novel Polarizer Glasses-type 3D Display with an Active Retarder Sung-Min Jung, Ju-Un Park, Seung-Chul Lee, Wook-Sung Kim, Myoung-Su Yang, In-Byeong Kang, and In-Jae Chung, LG Display, Gyeonggi-do, South Korea

In this paper, the researchers suggest a novel method for achieving high resolution and high brightness in glasses-type 3D displays, and they fabricated a 15-inch (diagonal) prototype composed of an active retarder synchronized with an image panel. The active retarder is configured in a TN mode so that it can switch the polarization of the input light. They expect the AR3D technology to give 3D users high resolution and high brightness with the convenience of simple polarizer glasses and at lower cost than the shutter-glasses-type 3D display, in which not only the image panel but also the glasses contain an LCD panel. By using an actively controlled retarder in front of the display panel and simple polarizer glasses, left and right images are field-sequentially transmitted to the left and right eyes, respectively. The active retarder panel is made of two glass plates and one liquid crystal layer. The liquid crystal layer is configured in a TN mode as an active waveguide of the input polarization states.


Figure 9 shows snapshots of the AR3D sample. In the illustration, (a) shows the display image without polarizer glasses; (b) and (c) are the images seen through the left and right polarizer glasses. Since the crosstalk level of the left-eye image is lower than that of the right-eye image, the left-eye image is clearer than the right-eye image, as the researchers expected. Although it has an asymmetric crosstalk level, the display can show three-dimensional images with a sufficient 3D volume. By attaching the active retarder in front of the image panel and using simple polarizer glasses, 3D viewers can experience finer and brighter 3D images at low cost. Although the residual crosstalk causing the ghost effect in 3D is still observed, the researchers think that it can be suppressed by developing faster liquid crystals and better structural designs. Moreover, they suggest an efficient way to measure the 3D crosstalk in a sequential-type 3D display with glasses.

A snapshot of the AR3D sample images: (a) shows the image without polarizer glasses; (b) and (c) are the left and right images seen through the polarizer glasses

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<< Forty Favorites Favorite news stories, commentaries, tutorials and insights from the Veritas et Visus catalog of newsletters. Fascinating insights into the world of display technologies…

http://www.veritasetvisus.com


FINETECH Japan April 15-17, 2009, Tokyo, Japan

Phillip Hill covers presentations from Tokyo University of Agriculture & Technology, 3D Consortium, Seiko Epson, Toshiba Matsushita Display Technology Co., Ltd., and Panasonic Electric Works Co., Ltd.

Improvement of 3D Display’s Performance and Trend of Standardization Goro Hamagishi, Seiko Epson, Tokyo, Japan

To improve 3D display performance, manufacturers must provide a wide observation range, prevent resolution degradation, control accommodation and convergence, and prevent crosstalk. The lecture described detailed development examples that address these challenges and reported ongoing approaches toward standardization. The presentation described the slanted pixel arrangement, in which a group of 3MxN subpixels (MxN subpixels for each of the R, G and B colors) corresponds to one cylindrical lens to generate MxN viewpoints. Subpixels of the same color in each 3D pixel have different horizontal positions, and the R, G, B subpixels repeat in the horizontal direction. The ray-emitting areas of the subpixels in a 3D pixel are continuous in the horizontal direction in each color.
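For readers who want to experiment, the sketch below assigns subpixels to viewpoints using the widely cited slanted-panel mapping published by van Berkel; it illustrates the general bookkeeping of slanted arrangements but is not necessarily Epson's exact layout:

    # View number for subpixel column k (R, G, B counted separately) and row l
    # on a slanted multiview panel, after van Berkel's formula.
    def view_number(k, l, k_off=0.0, tan_alpha=1.0/6.0, num_views=9):
        return (k + k_off - 3 * l * tan_alpha) % num_views

    # Example: the view fed by the green subpixel of pixel (x=10, y=20)
    print(view_number(3 * 10 + 1, 20))

In Epson's arrangement the slant is carried by the subpixel layout rather than the lens, but deciding which subpixel feeds which of the MxN viewpoints is the same kind of modular mapping.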

Leading-edge Trends and Business Strategy of 3D Display Kuniaki Izumi, 3D Consortium, Tokyo, Japan

3D display is now shifting from the research phase to the commercial phase, with a growing tendency toward home use. The leading-edge trends in 3D display were reported, together with the approaches industry manufacturers are taking toward widespread adoption.

Research and Development Trend of 3D Display: Current Problems and Future Yasuhiro Takaki, Tokyo University of Agriculture & Technology, Tokyo, Japan

Stereoscopic display technology is the most important key to ultra-realistic communications. The current technology status, as well as challenges for further development, was discussed in the presentation. In particular, the speaker described stereoscopic display technology with “people-friendly” characteristics such as reduced eye fatigue.


Panasonic Blu-ray disc 1920x1080 3D system, on the left, discussed by the 3D Consortium. Photograph of the subpixel structure of a fabricated LCD panel is shown on the right. The screen size is 2.57 inches, number of viewpoints 16, 3D resolution 256x192, pixel density 500ppi, width of subpixel 12.75 microns, and width of black matrix region 4.25 microns.

Expansion of Business and Technologies for 3D Hiroyuki Echigo, Toshiba Matsushita Display Technology Co., Ltd., Tokyo, Japan

3D display is expected to trigger a recovery of the stalled FPD industry, which currently lacks business drivers. While various applications are being developed, 3D technology is gradually penetrating the cinema market. The presentation looked at the potential expansion and development of 3D technology, which is currently attracting great attention worldwide.

Displaybank’s 3D display technology and market forecast (2008-2015). 3D is expected to take 9.2% of the total display market by 2015.

A Stereoscopic System for Medical Application: Development of Novel 3Dimensional Dome-shaped Display System for Laparoscopic Surgery Kazuya Sawada, Panasonic Electric Works Co., Ltd., Tokyo, Japan

Today’s laparoscopic surgery requires surgeons to have considerable skill and experience because they must operate from 2-dimensional images. The presentation offered a solution: a 3-dimensional dome-shaped display system under development.



Electronic Displays Conference March 4-5, 2009, Nuremberg, Germany

Phillip Hill covers presentations from this conference organized by Design&Electronik in Germany: Light Blue Optics, Bonn-Rhein-Sieg University of Applied Sciences, and Hochschule Heilbronn

Next Generation HMI using Holographic Laser Projection Dominik Stindt, Light Blue Optics, Cambridge, England

There are two common techniques for laser projection. Scanning systems use one or more mirrors to raster scan a narrow beam to form an image. Imaging systems use a projection lens to magnify and focus the image from a microdisplay onto a viewing surface. LBO’s patented projection technology is unique, forming images entirely using diffraction. Computer generated holograms are displayed rapidly on a phase modulating microdisplay. Laser light is used to illuminate the microdisplay, forming a projected image by coherent interference. There is no complex beam shaping, no blocking of light and low laser modulation frequency.

A microdisplay displaying phase-only modulation does not block any light. This leads to substantial efficiency gains, particularly when displaying sparse images. A symbology image has an average pixel intensity of less than 10%, so greater than 90% of light would be blocked by an amplitude modulating microdisplay. A phase-only microdisplay, however, would direct all of the incident laser light to the required pixels. To achieve video-rate imagery, the holograms are displayed on a liquid crystal on silicon (LCoS) microdisplay.
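To put numbers on this (my arithmetic): for a sparse image with mean pixel intensity $\bar{I} \leq 0.1$, an amplitude-modulating microdisplay transmits at most the fraction $\bar{I}$ of the incident laser light, while a phase-only hologram steers essentially all of it toward the lit pixels, a brightness gain of roughly $1/\bar{I} \geq 10\times$ for the same laser power.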

LBO’s technology removes the quantization noise from the image by displaying multiple holograms per video frame. Each hologram gives rise to a subframe containing the same image but with different and independent additive noise of variance σ². The average of N such noisy subframes has noise variance σ²/N, which satisfies the psychometrically determined condition for the display of video-style images with perceptually pleasing properties (see illustration).

Laser speckle reduction is uniquely achieved in LBO’s projector by phase averaging of multiple subframes per frame. Time-varying speckle reduction methods – such as moving diffusers or diffractive optical elements – can also be implemented inside the light engine because all pixels are formed simultaneously, laser modulation frequency is low, and an intermediate image plane is formed inside the projector. This provides speckle-reduced images, but crucially there is no resolution loss, and the long focal depth of the projected image is maintained.


LBO’s holographic laser projection technology can steer light to where it is needed in the image. This provides the ability to increase brightness and reduce power consumption, especially for sparse images where the average pixel energy is low. Dramatic improvements in efficiency over conventional LED HUDs are therefore possible.

In any real optical system, lenses introduce aberrations into the wave front. This leads to imperfections in the projected image such as reduced resolution and poor image contrast. LBO’s holographic projection technology can correct for these aberrations in software. The holograms displayed on the phase modulating microdisplay apply a wave front correction function to correct for imperfections in the optical system. This capability allows lower manufacturing tolerances of the optical system and therefore reduces cost significantly.

Laser speckle reduction results with (right) and without (left)

Immersion Square: A Cost-effective Multi-screen VR Visualization Technology R. Herpers, W. Heiden, M. Kutz, and D. Scherfgen, Bonn-Rhein-Sieg University of Applied Sciences, St. Augustin, Germany

A mobile immersive visualization environment called Immersion Square was described, which enables immersive visualizations for many application areas and offers a wide range of configuration flexibility. The standard setup of the projection environment consists of a three-wall back-projection screen system. Immersion Square’s hardware configuration is based on pure PC technology. The PC hardware components used are scalable in terms of their performance, starting from a single PC system equipped with Matrox TripleHead2Go technology up to a multiprocessor or multi-computer system wired by a local network. As soon as the spectator’s entire visual field is enveloped within the Immersion Square multi-screen visualization environment, it supports an “immersive” presence for the user as part of a computer-generated virtual environment. This virtual environment is used, for example, to simulate traffic situations or other realistic environments in which the spectator should be immersed. Within those environments, different basic visual stimuli can be specifically added and manipulated, offering the possibility to examine their impact on physical and mental performance under controlled conditions. Apart from traffic education, this offers a wide range of further interesting application areas.

Setup of the FIVISquare visualization environment with a test bicycle on static ground, as presented at the Hannover Industrial Fair 2008

The visualization system is fully modular and therefore mobile and easy to transport. A prototype of a bicycle simulator has been developed, which opens up new and very challenging application areas. Studies within this simulation environment could be used to develop a comprehensive model for occupational safety testing. Moreover, one can investigate how far the practicing behavior of professional athletes (e.g. bicycle riders) can be manipulated by visual stimuli. It is unclear so far how an operator would react if the perceived velocity is higher or lower than the expected physical velocity. In the future, FIVIS should serve for road safety education, for work safety prevention purposes, and for specific simulations of visual information under simultaneous physical exercise. Relying only on standard hardware components, the Immersion Square hardware system can easily be configured according to user requirements; 3D scenes of an average size of up to 500,000 polygons are possible.

Head-up Display with an Integrated Zoom Thomas Müller, Hochschule Heilbronn, Heilbronn, Germany

The innovation in this thesis is the ability to zoom the virtual image of a head-up display: the image distance is fixed while the magnification of the image is variable. The advantage is that the accommodation time of the eyes is shorter (older people in particular have a problem here). The concept of the optical system is to develop a method to calculate the optics for any set of system parameters and to develop a zoom system with two or three mirrors (free-form surfaces). The approach is to model the optical system in MATLAB in order to optimize the free-form surface mirrors of the system.

The optical design in MATLAB (on the left); Head-up display demonstrator (on the right)

Stereoscopic Displays & Applications Conference January 19-21, 2009, San Jose, California

In this third report, Phillip Hill covers papers published by SPIE from SeeReal Technologies, Seoul National University/Kyung Hee University, Fraunhofer Institute for Photonic Microsystems, Geola Digital, Toshiba Corporation, Industrial Technology Research Institute/National Tsing Hua University, Eldim, NHK (Japan Broadcasting Corporation), Yonsei University, Philips Research/Philips 3D Solutions, and ITRI/EOL

Autostereoscopic Projector and Display Screens Stanislovas Zacharovas, Ramunas Bakanas, and Evgenij Kuchin, Geola Digital, Vilnius, Lithuania

The researchers investigated H1-H2 transfer analogue and digitally printed reflection holograms suitable for autostereoscopic projection displays. They proved that a reflection hologram having part of its replayed image in front of its surface may be used as an autostereoscopic display. They assembled a 3D streaming-image projection device incorporating a digitally printed reflection hologram, and they conclude that digitally printed holograms can be used as 3D projection screens.

The figure shows the copying process of the master hologram, the H1-H2 transfer. During the transfer, the laser beam is again split into two parts. One of them is used to illuminate the master (H1) hologram. This beam passes through the master and forms in space the spatial light shape of the image recorded on the holographic media. If unexposed high-resolution photomaterial is placed near the formed image and illuminated with the second part of the split laser beam, the two beams interfere in the photomaterial and the H2 copy of the H1 master hologram is recorded. The unexposed photomaterial can be placed in front of, within, or behind the image formed in space; this placement determines the spatial position of the image reconstructed by the H2 copy.

Copying H1-H2 transfer using Geola’s pulsed holographic camera. Left: general view of the copying setup. Center: H1-H2 transfer when the spatial image is formed in the middle of unexposed photomaterial. Right: H1-H2 transfer when the spatial image is formed in the front of unexposed photomaterial. 1 Object beam directed to holographic media (reference beam); 2 holographic media with recorded master hologram; 3 object beam forming image in space; 4 spatial shape of the image recorded in master holographic media and reconstructed in space; 5 unexposed photomaterial; 6 reference laser beam.


Large Real-time Holographic Displays: from Prototypes to a Consumer Product R. Häussler, S. Reichelt, N. Leister, E. Zschau, R. Missbach, and A. Schwerdtner, SeeReal Technologies GmbH, Dresden, Germany

Large real-time holographic displays with full color are feasible with SeeReal’s new approach to holography and today’s technology. The display provides the information about the 3D scene in a viewing window (VW) at each observer eye, and a tracking system always locates the viewing windows at the observer’s eyes. This combination of diffractive and refractive optics leads to a significant reduction of the required display resolution and computation effort and enables holographic displays for widespread consumer applications. SeeReal tested its approach with two 20-inch prototypes that use two alternative ways to achieve full color: one prototype uses color filters and interlaced holograms to generate the colors simultaneously, while the other generates the colors sequentially. In this paper the researchers review the technology briefly, explain the two alternatives for full color and discuss the next steps toward a consumer product. After the presentation of the company’s display in 2007, SeeReal has now reached a further milestone on the roadmap toward a consumer product: they demonstrated that a full-color holographic display with a 3D scene size of 20 inches is possible with existing technology. A window is located at each eye of an observer through which the observer sees the 3D scene; the relevant information is present in these viewing windows only, and the viewing windows are tracked to the observer’s eyes in real time. This approach saves three to four orders of magnitude of SLM resolution and computation power compared to conventional holographic displays and makes large real-time holographic 3D scene reconstructions possible. They demonstrated that full color can be achieved in two ways:

• The colors may be generated sequentially. The holograms for the red, green and blue color components and the backlight colors are switched synchronously. A sophisticated control of the backlight compensates for the delayed and inhomogeneous response of the SLM. This temporal color multiplexing requires fast SLMs in order to avoid color flickering and color breakup.

• Spatial color multiplexing uses color filters. A prototype arrangement has the color filters on the beam-splitting lenticular, whereas color filters integrated into the SLM pixels would be the solution for a product. This alternative requires a higher SLM resolution than sequential color, but tolerates a lower SLM frame rate. The required SLM resolution already exists in available LCD panels, and the required amendments to existing LCD panels are moderate. Faster LCD panels with higher resolution are appearing on the market, as there is increasing demand for such panels for TV applications.
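A rough worked example of the resolution saving claimed above (illustrative numbers of mine, not SeeReal's): an SLM with pixel pitch $p$ diffracts over a half-angle $\theta \approx \arcsin(\lambda/2p)$. Addressing a ±30° observer zone at $\lambda = 532\,\mathrm{nm}$ requires $p \approx \lambda/(2\sin 30°) \approx 0.5\,\mu\mathrm{m}$, whereas steering light only into a viewing window of about 10mm at 700mm distance needs $\theta \approx 0.4°$, i.e. $p \approx 37\,\mu\mathrm{m}$. That is about 70 times coarser per axis, or several thousand times fewer pixels in area, which is where the three to four orders of magnitude come from.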

Set-up of the 20-inch prototype. The components are (from left to right): LED backlight, shutter display, Fourier lenticular, SLM and beam-splitting lenticular. The inset shows two interlaced holograms on the SLM that are separated by the beam-splitting lenticular in order to generate two VWs. On the right is an image of the full-color holographic display with a screen diagonal of 20 inches


High Definition Integral Floating Display with Multiple Spatial Light Modulators Joohwan Kim, Keehoon Hong, Jae-Hyun Jung, Gilbae Park, James Lim, Youngmin Kim, Joonku Hahn, and Byoungho Lee, Seoul National University, Seoul, Korea; Sung-Wook Min, Kyung Hee University, Seoul, Korea

In this paper, a high-definition integral floating display is implemented. An integral floating display is composed of an integral imaging system and a floating lens. The integral imaging system consists of a two-dimensional display and a lens array. The researchers substituted multiple spatial light modulators (SLMs) for the 2D display to acquire higher definition. Unlike a conventional integral floating display, there is space between the displaying regions of the SLMs; the SLMs should therefore be carefully aligned to provide a continuous viewing region and a seamless image. The implementation of the system is explained and a 3D image displayed by the system is presented.

An integral floating display is a combination of integral imaging and a large convex lens or concave mirror. Throughout this paper, the large convex lens is called a floating lens. The integral imaging provides a 3D image to the floating lens, and the floating lens forms a floating image in the vicinity of the observer. The structure of the system is illustrated in the figure. It is composed of four controllers, 12 SLMs, a lens array, and a floating lens. The SLMs were acquired by disassembling full-color LCD projectors. They are black and white, and three of them with color filters form a full-color SLM unit; the researchers implemented a black-and-white 3D display in this system. One controller can handle three SLMs, so four controllers handle 12 SLMs. The 12 SLMs are aligned on a specially made steel supporter to locate them exactly in the desired positions: six SLMs in the horizontal direction and two in the vertical direction. The distance between the centers of neighboring SLMs is designed to be the same as the distance between the centers of neighboring elemental lenses. The lens array, composed of 12 elemental lenses, is placed in front of the SLM array, with each elemental lens in front of its corresponding SLM. The SLM array and lens array combine to constitute an integral imaging system. A floating lens placed in front of the integral imaging system forms a viewing window at the focal plane and produces a 3D floating image in the image space.

Concept of the integral floating display using multiple SLMs

OLED Backlight for Autostereoscopic Displays U. Vogel, L. Kroker, K. Seidl, J. Knobbe, Ch. Grillberger, J. Amelung, and M. Scholles, Fraunhofer Institute for Photonic Microsystems, Dresden, Germany

In this contribution, Fraunhofer presents a 3.5-inch 3D 320x240 display that uses a highly efficient, patterned and controllable OLED backlight. Several major achievements are included in this technology demonstrator, such as the large-area OLED backlight, highly efficient and fast-response OLED top-emitters, the striped patterned backlight, and individual electronic driving for adaptive backlight control. A 3D mobile display application has been successfully demonstrated. The system consists of thin, individually controllable OLED stripes that are associated with the columns of the images, and the LC modulator that generates the image. In addition to the patterned OLED backlight and the LC modulator, a cylindrical micro lens array is an important element in realizing the 3D effect of the display. The micro lens array has as many cylindrical optical elements as the number of columns of the LC modulator, as well as the same pitch as the LC modulator. Therefore the described illumination principle is called “single pixel illumination”. In principle, the OLED stripes are imaged into the pupils of the eyes, whereas the eyes are accommodated on the image generated by the LC modulator (see the figure). The image of the stripes in the pupil of the eye is termed the “Eye Box”. In order to realize the 3D effect, the two images corresponding to the left and right eye have to be directed into the proper eye.

Generally there are two possibilities. The variant with the higher resolution would be the time-multiplex solution, in which the OLED areas behind one micro-lens are alternately switched on and off. To realize the time-multiplex approach one needs a very fast LC modulator, which was not available, so in the first step the researchers pursued the second variant, the space-multiplex approach. There the resolution is halved, but suitable LC modulators were available. The optical principle is generally the same for both approaches; the only difference is the time-continuous illumination of the OLED backlight in the space-multiplex case. For the space-multiplex approach, every pixel column of the LC modulator, as well as the cylindrical micro lens and the corresponding illumination area behind it, is predefined either for the left or the right eye.

Principle for imaging of the OLED stripes into the Eye Box, exemplary for one column

Coherent Spatial and Temporal Occlusion Generation R. Klein Gunnewiek, and R-P.M. Berretty, Philips Research, Eindhoven, The Netherlands; B. Barenbrug, and J.P. Magalhães, Philips 3D Solutions, Eindhoven, The Netherlands

A vastly growing number of productions from the entertainment industry are aiming at 3D movie theatres. These productions use a two-view format, primarily intended for eyewear-assisted viewing in a well-defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, and viewing distances), we need a flexible 3D format that can adjust the depth effect. Such a format is the image-plus-depth format, in which a video frame is enriched with depth information for all pixels in the video. This format can be extended with an additional layer for occluded video and associated depth that contains what is behind objects in the video. To produce 3D content in this extended format, one has to deduce what is behind objects. There are various axes along which this occluded data can be obtained. This paper presents a method to automatically detect and fill the occluded areas exploiting the temporal axis. To get visually pleasing results, it is of utmost importance to make the inpainting globally consistent. To do so, the authors start by analyzing data along the temporal axis and computing a confidence for each pixel. Then pixels from the future and the past that are not visible in the current frame are weighted and accumulated based on the computed confidences. These results are then fed to a generic multi-source framework that computes the occlusion layer based on the available confidences and occlusion data. During testing the researchers saw that the temporal inpainting algorithm gives good results for a variety of sequences. An example is depicted in the figure: the upper part (a) shows the original image-plus-depth for the Pinocchio sequence, whereas the lower part (b) shows the occlusion and associated depth map that were generated using the temporal inpainting algorithm. The nose of Pinocchio has almost completely disappeared and the inpainted data resembles the background data well. It should be noted that the body of Pinocchio has hardly any disparity with the background, so temporally generated occlusion data between Pinocchio and the floor is not really needed.
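A minimal sketch of the confidence-weighted temporal accumulation described above (the array names and the simple weighting are mine, not Philips' algorithm):

    import numpy as np

    def fill_occlusion(frames, temporal_weights, occluded_mask):
        # frames: list of (image, confidence) pairs from past and future;
        # confidence is assumed high only where that frame shows background
        # that is hidden behind a foreground object in the current frame.
        num = np.zeros(frames[0][0].shape)
        den = np.zeros_like(num)
        for (img, conf), w in zip(frames, temporal_weights):
            num += w * conf * img   # confidence-weighted accumulation
            den += w * conf
        filled = num / np.maximum(den, 1e-9)
        return np.where(occluded_mask, filled, 0.0)  # fill occluded pixels only

The global consistency the authors stress comes from computing those confidences jointly over the sequence rather than per frame, which this toy version does not attempt.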

Temporal inpainting example: (top pair) image plus depth (bottom pair) occlusion plus depth

Effect of Light Ray Overlap between Neighboring Parallax Images in Autostereoscopic 3D Displays R. Fukushima, K. Taira, T. Saishu, Y. Momonoi, M. Kashiwagi, and Y. Hirayama, Toshiba, Kawasaki, Japan

A display system with a lens array at the front of a high-resolution LCD is one method of realizing an autostereoscopic 3D display. In these displays, light ray overlap between neighboring parallax images affects the image quality. In this study, the overlap effects were investigated for the one-dimensional (horizontal parallax only) integral imaging (1D-II) method. The researchers fabricated samples of 1D-II displays with different levels of light ray overlap and evaluated the 3D images by subjective assessment. They found that a 1D-II display utilizing the proper parallax overlaps can eliminate banding artifacts and deliver good 3D image quality within a wide viewing area, and that an appropriate amount of parallax overlap plays an important role in achieving banding-free 3D images with smooth motion parallax. There is, however, a trade-off between smooth motion parallax and image quality. The 1D-II display utilizing the proper parallax overlaps has sufficient 3D image quality, as well as smooth motion parallax, for some practical applications.

Diagram illustrating the relationship between MV and II displays


Shutter Glasses Stereo LCD with a Dynamic Backlight Jian-Chiun Liou, Kuen Lee, Jui-Feng Huang, Wei-Ting Yen, and Wei-Liang Hsu, Industrial Technology Research Institute, Hsinchu, Taiwan; Fan-Gang Tseng, National Tsing Hua University, Hsinchu, Taiwan

Although a naked-eye 3D display is more convenient for the viewer, for now and in the near future the image quality of a stereo display viewed with special glasses remains much better (e.g., in viewing angle, crosstalk, and resolution). Among glasses-type stereo displays, the image performance of a time-multiplexed shutter-glasses-type 3D display should be better than that of a spatially multiplexed polarization-encoded 3D display. Shutter-glasses-type 3D displays were implemented many years ago on CRTs. However, when CRTs were superseded by LCDs, the shutter-glasses solution could not work for several years because of the long response time of LCDs. Thanks to the development of overdrive technology, the response time of LCDs is getting faster, and a 100-120Hz panel refresh rate is now possible. Therefore, 3D game fans again have a very good opportunity to watch full-resolution, large-viewing-angle, low-crosstalk stereo LCDs. In this paper, a 120Hz LCD and an LED dynamic backlight that overcomes the hold-type characteristic of an LCD are used to implement a time-multiplexed 3D display. A synchronization circuit is developed to connect the timing of the vertical synchronization signal from the display card, the scanning backlight, and the shutter glasses. The crosstalk under different scanning conditions was measured.

A New Way to Characterize Autostereoscopic 3D Displays using Fourier Optics Instrument P. Boher, T. Leroux, T. Bignon, and V. Collomb-Patton, Eldim, Herouville St Clair, France

Auto-stereoscopic 3D displays presently offer the most attractive solution for entertainment and media consumption. Despite many studies devoted to this type of technology, efficient characterization methods are still missing. Eldim presents an innovative optical method based on high-angular-resolution viewing angle measurements with a Fourier optics instrument. This type of instrument measures the full viewing angle aperture of the display very rapidly and accurately. The system used in the study has a very high angular resolution, below 0.04 degrees, which is mandatory for this type of characterization. From the luminance or color viewing-angle measurements of the different views, Eldim can predict what an observer will see at any position in front of the 3D display. Quality criteria are derived both for 3D and standard properties at any observer position, and a Qualified Stereo Viewing Space (QSVS) is determined. Using viewing angle measurements at different locations on the display surface during the observer computation gives a more realistic estimation of the QSVS and ensures its validity for the entire display surface. Optimum viewing position, viewing freedom, color shifts and standard parameters are also quantified. Simulation of moiré issues can be made, leading to a better understanding of their origin. In the paper, a new characterization system dedicated to 3D displays, the VCMaster-3D, is presented in detail. With its excellent angular resolution, it is the most advanced solution for characterizing 3D displays precisely. Calculation of 3D contrast in the observer space using multi-location measurements is useful to measure the QSVS and to verify whether the emission properties of the display are in line with the design. Different parameters such as viewing freedom, color shift, and standard contrast can be estimated. This approach is valid for any kind of auto-stereoscopic display with two views or multi-views.

Patented optical setup of the Eldim viewing angle instrument


The Development of the Integrated-Screen Autostereoscopic Display System Wei-Liang Hsu, Wu-Li Chen, Chao-Hsu Tsai, Chy-Lin Wang, Chang-Shuo Wu, Ying-Chi Chen, and Shu-Chuan Cheng, ITRI/EOL, Hsinchu, Taiwan

A novel autostereoscopic display system, the integrated-screen autostereoscopic display system, has been developed to substantially increase the total number of pixels on the screen, which in turn increases both the resolution and the number of view zones of the 3D display. In this system, a series of miniature projectors is arrayed and the projection images are tiled together seamlessly to form an image of ultra-high resolution. For displaying 3D images, a lenticular screen with a pre-designed tilt angle is used to distribute the pixels into the plural view zones. In this paper, an integrated-screen autostereoscopic display system with a 30-inch screen and 15 view zones is presented. The total resolution of the tiled image is 2930x2700 pixels, much higher than a traditional full-HD display, and the resultant 3D resolution in each view zone is 880x600 pixels. The advanced version of the integrated-screen system uses twenty miniature projectors arrayed 4x5, as shown in the figure. In this system, a lenticular plate with a specific tilt angle rearranges subpixels into each view zone, because the use of a lenticular plate gives the integrated-screen system higher light intensity and lower crosstalk. Moreover, the dead-zone problem that always occurs in lenticular-type 3D displays does not arise, since there is no black matrix in the projection screen of the integrated-screen system.

Integrated-screen system using twenty arrayed miniature projectors
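As a quick consistency check on those figures (my arithmetic): the tiled image holds 2930 x 2700, about 7.91 million pixels, while 15 view zones x (880 x 600) = 7.92 million, so essentially every projected pixel is assigned to exactly one of the 15 views.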

Effects of Sampling on Depth Control in Integral Imaging Jun Arai, Masahiro Kawakita, and Fumio Okano, NHK (Japan Broadcasting Corporation), Tokyo, Japan

In integral imaging, lens arrays are used to capture the image of the object and display the 3D image; in principle, the 3D image is reconstructed at the position where the object was. NHK has previously proposed a method for controlling the depth position of the reconstructed image by applying numerical processing to the captured image information. First, the rays from the object are regenerated numerically using the information captured from the actual object together with a first virtual lens array. Next, the regenerated rays are used to generate 3D information corresponding to a prescribed depth position by arranging a second virtual lens array. In this paper, NHK clarifies the spatial frequency relationship between the object and the depth-controlled reconstructed image, and proposes filter characteristics that can be used to avoid aliasing. They also report on experiments confirming the effectiveness of the proposed filter. Integral imaging is a technique that requires no special viewing glasses and can use natural light to capture and display images of an object. When using optical techniques to control the depth position of a reconstructed image, it is necessary to reconfigure the imaging apparatus; with a depth control method based on numerical processing, however, it is possible to control the depth position of the reconstructed image without reconfiguring the imaging apparatus. They propose a bandwidth-limiting technique for avoiding aliasing due to sampling in this numerical processing, together with a numerical processing technique based on geometrical optics for correctly reconfiguring the ray directions.


The reconstructed image and an enlarged elemental image for an ordinary object: (a) without depth control processing; (b) with depth control and without filtering processing; (c) with depth control and filtering processing. Note that there was visible deterioration in (b) (i.e. around the marked area), whereas there was no significant deterioration in (c) compared with (b). Since the depth position was far from the lens array, the resolution was degraded both in (b) and (c) compared with (a).

Compressed Stereoscopic Video Quality Metric Jungdong Seo, Donghyun Kim, and Kwanghoon Sohn, Yonsei University, Seoul, South Korea

Stereoscopic video delivers depth perception to users, unlike two-dimensional video; therefore, a new video quality assessment model is needed for stereoscopic video. In this paper, the researchers propose a new method for the objective assessment of stereoscopic video. The proposed method detects blocking artifacts and degradation in edge regions, as in a conventional video quality assessment model. In addition, it detects the video quality difference between views using depth information. They performed subjective evaluations of stereoscopic video to verify the performance of the proposed method, and confirmed that the proposed algorithm is superior to PSNR with respect to correlation with the subjective evaluation. The figure shows an example of blurring in an edge region of the processed image. Comparing (c) with (d), the Sobel value of the original image is larger than that of the processed image in the boxed region; according to the proposed algorithm, this region is regarded as blurred. Comparing (a) with (b), the blurring in the boxed region can be verified.
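A minimal sketch of the edge-degradation test suggested by the figure (the threshold and function names are mine; the paper's actual metric is more elaborate):

    import numpy as np
    from scipy import ndimage

    def blurred_regions(original, processed, drop_ratio=0.5):
        # A drop in Sobel gradient magnitude relative to the original
        # indicates that edges have been smeared by compression.
        def sobel_mag(img):
            gx = ndimage.sobel(img.astype(float), axis=1)
            gy = ndimage.sobel(img.astype(float), axis=0)
            return np.hypot(gx, gy)
        mag_o, mag_p = sobel_mag(original), sobel_mag(processed)
        return (mag_o > 0) & (mag_p < drop_ratio * mag_o)  # True where edges weakened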

(a) Aquarium sequence; (b) Processed image; (c) Sobel operation result of original image; (d) Sobel operation result of processed image
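The edge-degradation check can be illustrated in a few lines: compute Sobel edge magnitudes for the original and processed views and flag blocks where the processed view has lost a clear fraction of the original's edge energy. This is only a sketch of the idea; the block size and threshold are illustrative, not the authors' values.

import numpy as np
from scipy.ndimage import sobel

def blurred_regions(original, processed, block=16, ratio=1.5):
    """Flag block regions where the processed view lost edge energy.
    Inputs are 2D float arrays of equal size (one per view)."""
    def sobel_mag(img):
        return np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    e_orig, e_proc = sobel_mag(original), sobel_mag(processed)
    h, w = original.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            region = np.s_[by*block:(by+1)*block, bx*block:(bx+1)*block]
            # Blurred if the original's edge energy clearly exceeds the processed one.
            mask[by, bx] = e_orig[region].sum() > ratio * e_proc[region].sum()
    return mask  # True where the processed view is regarded as blurred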


SpaceSpex anaglyph: The only way to bring 3DTV to the masses

Michael Starks gives his “colorful” perspectives about anaglyph glasses alternatives…

by Michael Starks After graduate work in physiology at UC Berkeley, Michael Starks began studying 3D in 1973, and co-founded StereoGraphics Corp (now Real D) in 1979. He was involved in all aspects of R&D, including prototype 3D videogames for the Atari and Amiga and the first versions of what evolved into CrystalEyes LCD shutter glasses, the standard for professional stereo, and is co-patentee on their first 3DTV system. In 1985 he was responsible for starting a project at UME Corp which eventually resulted in the Mattel PowerGlove, the first consumer VR system. In 1989 he started 3DTV Corp. In 1990 he began work on "Solidizing", a real-time process for converting 2D video into 3D. In 1992 3DTV created the first full color stereoscopic CDROM ("3D Magic"), including games for the PC with shutter glasses. In 2007 companies to whom 3DTV supplied technology and consulting produced theatrical 3D shutter glasses viewing systems, which were introduced worldwide in 2008. Starks has been a member of SMPTE, SID, SPIE and IEEE and has published in Proc. SPIE, Stereoscopy, American and Archives of Biochemistry and Biophysics. The SPIE symposia on 3D Imaging seem to have originated from his suggestion to John Merritt at a San Diego SPIE meeting some 20 years ago. Michael more or less retired in 1998 and lives in China, where he raises goldfish and is researching a book on the philosophy of Wittgenstein. http://www.3dtv.jp

SpaceSpex is the name I applied to my versions of the orange/blue anaglyph technique in 1993. Like all the bicolor anaglyph methods it is compatible with all video equipment and displays, and I think it's the best of the methods using inexpensive paper glasses with colored lenses. Until someone comes up with a way to put hundreds of millions of new 3D TVs that can use polarized glasses or LCD shutter glasses into homes, anaglyph is going to be the only way for mass distribution of full color, high quality 3D over cable, satellite, the web, or on DVD. However, the solution I have proposed for set-top boxes, PCs, TV sets and DVD players for the last 20 years is to have user controls, so those with display hardware that permits polarized or shutter glasses or even autostereo viewing – or who want 2D – can make that choice from a single 3D video file. This is the method of the TDVision codec, of Next3D, and of Peter Wimmer's famous StereoScopic Player (all of which should appear in hardware soon), and of probably the best stereo player of all, Masuji Suto's StereoMovie Maker, and it is being incorporated in most well-known software DVD and media players.

This is a photo of SpaceSpex Model E glasses for a webzine project Starks did in 1995.

Although many have experimented with orange/blue (I recall the Marks brothers showing some at the CES show in the 80s, and their 1981 patent 4,247,177 is cited in the recent US ColorCode patent 6,687,003, which has an international filing priority of 1999), one might say that the ColorCode type of orange/blue anaglyph method of making and viewing stereo was invented by my friend Gang Li of Ningbo, China in the late 80s and described in his articles and patents. It was used at that time for TV broadcasts (both live and taped) in Xian and other cities for several years. I showed personnel associated with Xian TV how to genlock a pair of cameras and described how to make live anaglyphs when I was there in 1993. They made expensive plastic frame glasses with glass color filters. I still have a few pairs, which I got when I went there for China's first ever 3D imaging conference in 1993. The method is a direct outgrowth of the work of Ed Land (the scientific genius who founded Polaroid Corp) in the 50s. In the course of his work on color, which was motivated by his desire to create instant color photos, Land discovered that he could produce nearly perfect color images using only two primaries, and that the orange and blue
portions of the spectrum worked best. This led to his Retinex Theory of color vision. It is well known that the retina has R, G and B sensitive cones, so the production of essentially full color from Y and B is a mystery.

In 1999 Danish inventors also patented the orange/blue method and, unlike Li or myself, have promoted it heavily. Their patent is impressive, as they have worked hard to give this old and simple method a modern digital twist, and, as always in patents, the claims – which are really the only part of a patent that matters – are rather opaque and very difficult to interpret. However, the bottom line is very simple: Gang Li, working a decade earlier with just a swatch of color filters and without benefit of digital computers, digital cameras or displays, or sophisticated equations, came up with essentially identical filters. It is, after all, our eyes (and brains) that determine optimal color, depth and ghosting, and then we can make equations – not the reverse. One can indeed use a spectrophotometer to determine optimal cancellation, but brightness, depth and natural color (especially skin tone) must be simultaneously determined, and these are subjective matters the spectrophotometer cannot judge.

The Danes use the name ColorCode and the glasses are produced by APO Corp in the USA. The biggest 3D glasses order ever was filled by APO in January 2009 with the production of 130 million pairs for the Super Bowl ads. You can download these ads from YouTube and many other sites, but get the high-resolution versions, as there are many very poor low-resolution versions. The consensus is that it was not highly successful as 3D; much of the material was unsuitable due to its color (i.e., an all-white background with people in white suits), or to the fact that it was animated (i.e., the Monsters vs. Aliens trailer). Animations work less well with any stereo method due to their lack of all the rich stereo cues in real-world video.

If you want to see good 3D and to verify the clear superiority (in my view) of SpaceSpex, look at 3D content such as the Chuck 3D ad or series 2 episode 12 (http://www.youtube.com/watch?v=vNyqwgI5jic) – also available on many sites or as p2p – or at the 3D stills available on ColorCode's page (http://www.colorcode.com), with the ColorCode glasses versus SpaceSpex Model U, and you will see better color and more than double the brightness. It's like day and night, with ColorCode producing a dim image with muted colors that looks like it's been shot in the evening or on a rainy day, which turns to a sunny day when you put SpaceSpex on. No contest. To convert any 3D video for real-time viewing with SpaceSpex you can download Peter Wimmer's popular StereoScopic Player from http://www.3dtv.at. The free version times out after 5 minutes and the full version is about $50. You can play field sequential, right/left, top/bottom or separate R and L files in any stereo format including yellow/blue anaglyph (i.e., ColorCode/SpaceSpex), and you can download 3D video sample files. I recommend the Heidelberg demo. You can freeze-frame for careful comparison and alter H and V parallax with the arrow keys on your keyboard. SpaceSpex support is also being included in the Next3D and TDVision HD DVD players.

However, Masuji Suto's StereoMovie Maker has what seems to be the most sophisticated stereo player, and it's free! http://stereo.jpn.org/eng/stvmkr/index.html. Not only do you have a large number of choices of input and output formats, but you can even control the gamma of each eye independently, and the stereo window. Anyone technically adept will surmise that it should be straightforward to use edge detection and other well-known functions to create a program that automatically registers the two images for minimal ghosting by reducing H and V parallax, size (i.e., zoom correction), skew, brightness, and color. Although I mention in my other articles that such things have been done in research work many times, Suto is the only one to do this with readily available software. Use the stereo player in StereoMovie Maker and not the standalone stereo player, as the latter is older and lacks many of the advanced features. The automatic alignment currently seems to work only with still images, but they can be batch-processed with multithreading, and it should be simple to register a 3D video using the easy align or other choices in his menu. Clearly this program can be improved and put in firmware for real-time alignment by cameras, PCs, DVD players, broadcasters, set-top boxes, and TV sets, and this would be another great advance in the stereo art, and of especially great value for anaglyph viewing.
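As a sketch of the kind of automatic registration described here, the snippet below uses OpenCV's ECC alignment to estimate an affine warp (H/V shift, zoom, skew) that maps the right view onto the left before anaglyph encoding. It is only an illustration of the approach, not Suto's implementation; the function and parameter choices are mine.

import cv2
import numpy as np

def align_right_to_left(left_gray, right_gray):
    """Register the right view to the left (grayscale uint8 or float32)."""
    warp = np.eye(2, 3, dtype=np.float32)        # initial guess: identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    # ECC maximizes image correlation over affine warps (shift, zoom, skew).
    cv2.findTransformECC(left_gray, right_gray, warp,
                         cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = left_gray.shape
    return cv2.warpAffine(right_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)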


Keep in mind that all the yellow/blue images and players were created using the ColorCode software for the ColorCode filters, and the improvement of SpaceSpex is even more striking when the images are tweaked to exactly match the SpaceSpex filters (such as those below). However, all three models of SpaceSpex are 100% compatible with any images created with the excellent ColorCode Player or the 3D Suite ($200 from their page for PC and Mac), or with Wimmer's Stereoscopic Player, or with Suto's StereoMovie Maker. The elegant ColorCode software (a new version just became available in mid-2009) or the other players can easily convert video in real time for SpaceSpex.

Of course one does not get the brighter image and better colors of SpaceSpex without giving up something, and the downside is a greater brightness imbalance with Model U, which may take some time to get used to. For this reason I created the SpaceSpex Model E and Model C. Model E gives an even brighter image that is more comfortable for longer viewing, but it requires tweaking the colors of the images, and ideally reducing the horizontal parallax and adjusting your TV or monitor/projector, for best results. Model C gives a less bright image but is more tolerant of ghosting.

Sixteen years after my original work, I recently did extensive testing of the newest stereo players, 3D DVDs and LCD displays with all SpaceSpex models. My conclusion that no other method gives a bright, beautiful image with good color was confirmed. Here are a few of the tests I did. In each case I tried not only the glasses that came with the DVD but several variants on them (i.e., slightly different colored filters), all viewed over an HDMI connection from a PC to a new 23-inch HP LCD monitor with the brightness at about 3/4 maximum.

• Fly Me to the Moon is an animated feature with Red/Cyan glasses. Dim, dull image with very poor color and noticeable ghosting.

• The Stewardesses, a live action film digitally re-mastered by a team led by veteran stereoscopist and anaglyph expert Daniel Symmes, with its own unique Red/Blue glasses, is probably the best-registered stereo film ever released on video. Reasonably good, but color and brightness still modest, and some ghosting.

• Shrek 3D is an animated film with Red/Cyan glasses. Dim, dull image with poor color and ghosting.

• Shark Boy and Lava Girl – live action embedded in graphics with Red/Cyan glasses. Dim with poor color. I found that some other Red/Cyan glasses gave a brighter image with better color and no more ghosting.

• Barbie and the Magic of Pegasus is an animation with Red/Cyan glasses. Dim, poor color, ghosting – almost unwatchable.

• Friday the 13th Part 3 is live action in a new (2009) release with Red/Cyan glasses. Dim, poor color and horrible image misregistration with severe ghosting. Pretty much unwatchable. And this is from Paramount, owned by Viacom, one of the world's largest media conglomerates.

• Journey to the Center of the Earth is live action with new (for DVD releases) Magenta/Green glasses (TrioScopics). Dim, poor color, ghosting.

• The Polar Express is an animated feature with Red/Cyan glasses. Dim, poor color, ghosting.

• Amityville 3D is a live action film in frame sequential format. Using one of its few daylight sequences, ColorCode gave its usual dim image with modest color but good depth (provided of course that the monitor brightness is near max), while SpaceSpex U gave an excellent image in all respects. Surprisingly, SpaceSpex E also gave an excellent image very similar to that of SpaceSpex U, using the same yellow/blue setting, in those sequences where the parallax was minimal. This shows that subtleties of encoding/decoding the color gamut and parallax can be manipulated to make all the yellow/blue glasses types compatible and to give a 3D image which is excellent in all respects.

• Ape is an old live action 3D film in the frame sequential format which gave essentially the same results as Amityville 3D.

• Taza, Son of Cochise is a live action Technicolor film from 1953, released in 2008 in side-by-side squeezed format by Sensio Corp for full color viewing with projection using their custom hardware, but playable on a PC with various stereo players such as Wimmer's Stereoscopic Player. I chose either red/blue anaglyph, high-quality red/blue anaglyph, or yellow/blue anaglyph (i.e., ColorCode or SpaceSpex U or C). In spite of the bizarre choice of the H-squeezed format (also used by StereoGraphics Corp for many years), which eliminates half of the H pixels needed for depth, the sharpness of the original dual filmstrips and the spectacular color of the 3-strip/eye Technicolor save the day when projected or viewed in frame sequential mode on a CRT, or probably on one of the 3D-ready DLP TVs from Mitsubishi or Samsung (Wimmer and many other consumer and professional programs now have settings for these). On my LCD monitor with red/blue glasses it was dim, with very poor color and ghosting, but good depth. ColorCode gave OK depth with little ghosting but, as always, a dim image with modest color. SpaceSpex U gave a bright image with essentially full color, little ghosting and OK depth.

• Bugs 3D is a live action IMAX film released by Sensio in their side-by-side format. Results were similar to those of Taza.

The bottom line is that only the SpaceSpex give a bright, colorful 3D image. The fact that this happened even though neither the files nor the players were optimized for SpaceSpex indicates that with such optimization they are suitable for any use including the cinema. ColorCode may be feasible in situations where the brightness of the display can be very high without washing out the color and contrast.

I assume everyone knows that you have to view anaglyph DIGITALLY – i.e., with a good LCD, plasma or DLP monitor, TV or projector with a DVI or HDMI connection to the DVD player, PC/Mac or server, and NOT a CRT, and NOT with a VGA connection (i.e., not with the analog DB9 or HD15 cables)! Most consumers will not know this, but it is a testament to the sloppiness of nearly all anaglyph DVD releases that they give little or no instructions. A few mention reducing room lights and avoiding glare on the monitor, but only one I looked at (Shrek 3D) mentions that you get the best 3D from a DVI (or HDMI) connection, next best from component, etc., and not one that I have ever seen for any method mentions that keeping the glasses free of fingerprints is mandatory.

Ideally, you will adjust the brightness, contrast, sharpness, gamma, hue, saturation or color temperature on your display/server/broadcast equipment optimally.

Many anaglyph DVDs have appeared: one in ColorCode (the Japanese release of Cameron's Ghosts of the Abyss), about a dozen in red/blue or cyan/blue (SpyKids 3D, Treasure of the Four Crowns, etc.), and recently at least one (Journey to the Center of the Earth) in a magenta/green method called TrioScopics. But it seems to me that all these other methods are lacking in either color, brightness, depth or comfort, and SpaceSpex seems to me easily the best choice.

As noted above, I cannot see any possibility that any bicolor anaglyph (i.e., one color lens for each eye) is patentable. Anaglyphs have been common for well over 100 years and there are hundreds of patents. All claims relating to bicolor glasses fail the mandatory requirement that an invention must not be "obvious to one skilled in the art." The DaimlerChrysler/Infitec/Dolby Digital 3D triple notch filter system now common in cinemas is also obvious after the fact, but sufficiently inventive that it seems protectable. I used single orange and blue notch filter (i.e., multilayer interference type) glasses for SpaceSpex in 1993 but did not regard them as patentable. I got these old glasses out recently and they do give a better image than the plastic filters, but of course they are far more expensive. So far as I know a double notch filter in each eye has not been used.

It should be understood that in order for ColorCode to get their patent they had to narrowly define the filter spectra to avoid the patents by Marks and Beiser and my work with SpaceSpex (and to be unaware of Li’s papers and
patents). Consequently, even if one ignores the clear priority of Li, ColorCode cannot claim any orange/blue filters except those narrowly defined in its patent. That SpaceSpex are different is abundantly clear just from looking at them, and dramatically demonstrated by looking at the same images with the two types of glasses.

You can find sample SpaceSpex images and info on how to make them (but you can also use the ColorCode software for Model U or even Model E SpaceSpex as noted above) on our page (where they have been for 16 years) http://www.3dmagic.com/spacespex/spacespex.html. I reproduce them here for convenience.

On the left, 3D video pioneer James Butterfield showing his 3D video microscope to Takanori Okoshi, author of the classic text "3D Imaging Techniques". Photo by Susan Pinsky, circa 1985. On the right is a photo by the famous British stereo photographer David Burder, circa 1985.

On the left is Lucia – Queen of Bahia, photo by Michael Starks, 1988; on the right is a Balinese dancer, photo by Michael Starks, 1985

If you look at these images successively with the Gang Li/ColorCode glasses, then the SpaceSpex Model U and then the SpaceSpex Model E, you will see that for the converged objects (i.e., those having little or no horizontal parallax) all three glasses types show good depth and color (any differences can be largely eliminated by tweaking the images when made, or the display parameters – tint, brightness, etc.). The Li/ColorCode method gives the lowest ghosting, but at the cost of diminished brightness and color and with some eyestrain for most people with prolonged viewing, while SpaceSpex U (i.e., for 3D video not specifically edited for it) gives a bright image and good color at the cost of a brightness asymmetry which may be bothersome to some people. SpaceSpex Model E (i.e., for properly edited video) gives the best 3D image, but at the cost of ghosting on objects with significant horizontal parallax. If it is impossible to H-shift and color-adjust or ghost-bust the image to reduce ghosting, then Model U (Unedited) is best, but when possible Model E (Edited) is the choice. As with any stereo-viewing modality, ghost reduction is desirable, but the general algorithms created by Graham Street, Real D, JVC, and others will probably need to be modified for anaglyph ghost-busting. Now that Real D has released its real-time ghost-busting server software, Real D 3D EQ, this can easily be tested. Of course all methods need to be given a serious trial, and this means at least 20 minutes, and preferably repeated viewings of films on different displays over a period of time.

All anaglyphs force one eye to focus at a different plane than the other (the basis of chromostereopsis and the recent ChromaDepth method – first noted by the famous scientist Hermann von Helmholtz), and the different light levels also tend to make one pupil dilate more than the other. Stereographer Allan Silliphant has tried to ameliorate this situation with glasses that contain a low-diopter lens in one eye (http://www.anachrome.com). He has produced the best red/blue anaglyph video I have seen, but I still think SpaceSpex has an edge, so we agreed to try to combine his diopter method with the SpaceSpex colors.

Following are instructions we made 16 years ago on how to make SpaceSpex images from a stereo pair. They are of course largely made obsolete by the growing availability of programs that convert stereo formats in real time, but I present them so that one can get some idea of what is done to make anaglyphs. As noted, you just take the blue of the left image and replace it with the blue of the right, and then, if feasible, tweak it in any way possible with your particular program to optimize color and depth and to reduce ghosting. It should not be difficult to find the optimal settings in Premiere, Final Cut Pro, etc. to do this, or to set hardware such as Piranha, Pablo or da Vinci systems, or even the cameras themselves, to create SpaceSpex video in real time for live broadcasts via cable, satellite, or the net. Of course, for optimal viewing at home, the broadcaster/DVD maker should test the final result on samples of actual consumer equipment at the end of the broadcast or playback chain, and there should be some instructions and a test image so the end user can tweak their own PC or TV. The single commonest adjustment needed is brightness.

Using Adobe Photoshop to Create SpaceSpex Blue/Orange Anaglyphic Stereo Images: These instructions are based on version 3 of Photoshop for Windows (this was done 16 years ago). The details will be different for other versions, and of course there are other ways to do this, but the principle is the same: remove the blue component of the left image and replace it with the blue component of the right image. Start with a stereo pair of images of the same size and scale, preferably in 24 or 32 bit color. To minimize ghosting, avoid images with lots of horizontal parallax and high contrast (e.g., a person with a white shirt on a black background) in the extreme foreground and background (i.e., in the typical shot with convergence in the mid-ground). The color depth of your display should be at least 15 bits.

1. Open the left image in Photoshop.
2. From the Mode menu choose RGB Color.
3. Open the Layers window (right-click in the window, click "Show Layers").
4. Click the Channels tab and drag the Blue thumbnail to the trashcan.
5. Open the right image, repositioning it if needed to uncover part of the left image.
6. Choose RGB Color from the Mode menu.
7. Drag the Blue thumbnail from the Layers window and drop it on the left image. Close the right image without saving changes.
8. The left image is now selected and in Multichannel mode. Choose RGB Color from the Mode menu.
9. Choose Save As... from the File menu to save the altered left image with a new file name in a 24-bit color format.
10. Click the Blue thumbnail in the Layers window to select the Blue channel, then click to the left of the RGB thumbnail in the Layers window to display all three channels.
11. Click the reposition tool on the standard toolbar, put on your SpaceSpex, and drag the blue channel to align the right and left images. Use the zoom control if needed. Try to get the main subject of the image lined up properly, so that ghosting is minimized and confined to the background and extreme foreground.

These images do not respond well to color reduction techniques. As you might expect, reducing them to 256 colors with any dither at all mixes the color channels enough to destroy the stereoscopic effect.
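For readers without Photoshop, the same blue-channel swap is a few lines in any image library. Here is a minimal sketch using Pillow and NumPy (the file names are placeholders; the filter-specific color tweaking the article recommends is not attempted):

import numpy as np
from PIL import Image

# Minimal yellow/blue anaglyph: replace the left image's blue channel
# with the right image's, per the instructions above. Assumes the pair
# is the same size and already roughly registered.
left = np.asarray(Image.open("left.jpg").convert("RGB")).copy()
right = np.asarray(Image.open("right.jpg").convert("RGB"))

left[..., 2] = right[..., 2]          # channel 2 = blue in RGB order
Image.fromarray(left).save("anaglyph.png")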

Is Anaglyph in Our 3D Future? by Chris Chinnock

Chris Chinnock is founder and president of Insight Media, and focuses his efforts on projection systems, 3D displays, and FPD business expertise. Preceding his 10 years heading up the Insight Media efforts, Chris spent 15 years in a variety of engineering, management, and business development positions at MIT Lincoln Labs, Honeywell Electro-Optics, GE AstroSpace, and Barnes Engineering. He holds a Bachelor of Science degree in Electrical Engineering (BSEE) from the University of Colorado. This article is reprinted by permission from the Display Daily, published by Insight Media on August 12, 2009. http://www.displaydaily.com

Viewing 3D with anaglyph glasses means that each eye has a colored filter in front of it to separate the left and right eye images. The most common form is a red/blue or red/cyan combination, but dozens of variations exist in the selection of the colors. Many see anaglyph as the least desirable form of 3D, but there are a number of advantages to it. In talking with experts about this topic, it is my personal conclusion that anaglyph will be around for some time. As such, we need to better understand its strengths and weaknesses and formulate products, messages and strategies that recognize anaglyph, but place it in its proper context. That may be easier said than done.

Let’s start with some of the pros. Anaglyph is a color encoding approach that creates a 3D image within a standard 2D video frame so it can be transmitted over existing distribution channels and will display 3D on any 2D display. That means any cell phone, laptop, monitor or TV can display a 3D image that is viewable with the matched anaglyph glasses. Being able to play 3D content on this huge installed base is a tremendous advantage over most other approaches, which will require a new display that is 3D capable. This is the approach that most consumers think of as "3D" because it has been around a long time and it has wide recognition.

Anaglyph glasses are also inexpensive – often throwaways. Many forms of anaglyph carry no licensing fee to encode it, so there are few barriers to creating the content. In fact, it is quite often used by professionals in the content creation and post production process as it is easy to create and display and it provides useful feedback.

But there are many cons. The most significant problem is the wide variety of quality the process can produce. For example, the choice of the color bands can have a big impact on the color quality (some implementations can be nearly black and white) and on the 3D effect. The process reduces the resolution of the 3D image, and the encoding process must be matched up with the proper glasses. Having a variety of glasses will create confusion for the end user. And the quality of the 3D can be very scene, or content, dependent. As stereoscopic expert Peter Anderson noted in a meeting the other day, Shrek will not work well in anaglyph because of the green color dominance. Even anaglyph fan Ray Zone had to concede that point.

There is also a lot of mixed messaging about the approach from the studios. Disney and others have already released some new 3D movies in anaglyph. But other studios are choosing not to release content in anaglyph (like Monsters vs. Aliens). I have seen content that looked great in a polarized projection mode, but looked horrible when broadcast to a TV in anaglyph (Chuck episode and Sobe commercial). I have seen demos of anaglyph that can look pretty good, however. The bottom line is that not all anaglyphic 3D content is created equal. The encoding approach must be carefully matched to the content and some content will likely never look good.

So how should anaglyph be positioned? NVIDIA has one approach. In a conversation with them earlier this week, we learned of a product the company is now offering as part of its 3D Vision line. Remember, NVIDIA has been leading the charge in PC-based platforms for 3D gaming, coupled to NVIDIA certified 3D monitors, projectors and TVs. The TVs require a checkerboard encoding, while the monitors and projectors use a 120Hz page-flipping
approach. As a result, we were a little surprised to see they are now offering an anaglyph solution that they call 3D Vision Discover.

NVIDIA is positioning this approach as a "sneak peek" into the 3D experience. The solution includes:

• Custom-designed, specialized anaglyph (red/cyan) glasses
• NVIDIA software to transform over 350 standard PC games into full 3D
• NVIDIA 3D Movie and Video player software, along with free downloads of 3D movies, pictures, and game previews directly to your PC

NVIDIA says they have optimized the color bands to work best with LCD monitors. They want to use this approach as a marketing tool to get users excited about 3D, so that they then step up to the higher-quality experience offered in the 3D Vision solutions. The approach allows the installed base of 65 million PCs with GeForce graphics cards to play with 3D, which is a good thing, but it is also risky. Suppose these users find anaglyph is good enough and there is no need to step up to a better display solution? Suppose they don't find the experience very good and they get completely turned off to 3D?

NVIDIA downplayed these risks and pointed to the fact that they are at least offering a stepped solution that explains the quality level differences. As NVIDIA's Andrew Fear pointed out, "Many studios offer a great theatrical experience, then release the movie in anaglyph. The end user buys the DVD thinking they will get the same 3D theatrical experience and they don't. But the studios are not explaining the difference to consumers, which is what we are trying to do." He has a good point.

I think the key point of all of this is that anaglyph 3D content will be in the market for some time, despite its shortcomings. Therefore, the whole industry needs to do a much better job of educating consumers about anaglyph as well as the other higher quality 3D solutions that exist. Are we up to the task?

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

You are invited to attend our next 3D @ Home Member Meeting at the San Jose Marriott on Tuesday, September 1, 10:00 AM - 12:00 PM to learn more about the benefits of membership in the 3D @ Home Consortium and the Consortium's activities. The meeting will be held in conjunction with the TV Ecosystem Conference. The 3D @ Home Consortium’s mission is to speed the commercialization of 3D capability into homes around the world.

Please attend the open portion of our Quarterly meeting being held at the Downtown San Jose Marriott on Tuesday, September 1. The open portion of the meeting runs from 10:00 AM - 12:00 PM in the Blossom Hill Rooms I & II.

To attend, please RSVP your name, company and interest to me at [email protected]. To learn more about the Consortium before the meeting, please visit http://www.3DatHome.org or review the attached Consortium overview brochure. If you join the Consortium, you will be invited to participate in our Steering Team meetings later that afternoon. Tentative agenda:

10:00 AM Call to Order of Full Membership Meeting & Consortium Update

10:30 AM CableLabs Update – Dave Broberg

11:00 AM Member Profiles & Market Overview

11:45 AM Brief Overviews of Steering Team Activities

Hologlyphics: a creative system for autostereoscopic movies

by Walter Funk

Walter Funk has been producing autostereoscopic movies since 1994, pioneering the artistic use of volumetric and autostereoscopic displays for entertainment. His Hologlyphics performances involve volumetric animations and live music performed for an audience. He founded Hologlyphics (http://www.hologlyphics.com), bringing live volumetric entertainment to film & video festivals, museums, art shows, and live music events. Walter studied holography at the Holography Institute and music at the Center for New Music and Audio Technologies, UC Berkeley.

Autostereoscopic and volumetric displays, which provide three dimensional images that can be viewed without additional apparatus such as glasses, offer a huge potential for enhancing viewing experiences across the whole entertainment industry, not just in movies and television, but in music, arts exhibitions, and video games. However, this potential has not been realized due to roadblocks such as the inaccessibility of true 3D displays, the lack of content, the lack of standards for autostereoscopic movies, and especially the lack of autostereoscopic content creation tools. Most volumetric and autostereoscopic display research is focused on innovation of the technical aspects. Very little attention has been geared towards innovation of volumetric and autostereoscopic content creation and experimentation with live audiences. In addition, autostereoscopic and volumetric displays are generally not suitable for large screen theaters as yet, and the expense may not be justifiable for small audiences. This greatly hinders creative exploration and advancement of true 3D displays. In order to address this, I avoided emphasis on developing new display technologies or mass-market media and instead decided to utilize existing autostereoscopic display technology in 3D movie showings to small audiences.

I first developed multiple autostereoscopic movie content creation tools, then generated extensive content, and finally showed extensive and varied content to live audiences in numerous settings. The process of adapting traditional film and video techniques to the autostereoscopic movie system through experimentation, and employing novel techniques, allowed me to develop a flexible environment for autostereoscopic content creation, called Hologlyphics1,2.

The moving spatial images are displayed on the parallactiscope, invented by Homer B. Tilton3,4. The parallactiscope is an automultiscopic dynamic parallax barrier CRT display. It displays monochromatic 3D images with full horizontal parallax within a 90° viewing range. All views are present within that viewing range. This is made possible by using an electro-mechanical moving virtual slit filter. There are no pseudo-scopic zones when viewing the parallactiscope. Audience members can “look around” moving objects with great clarity.

I developed a system for a live performance of autostereoscopic movies along with music, and created a family of real-time spatial image synthesis and processing algorithms for artistic use. All images are automultiscopic: multiple true 3D views are seen by observers without 3D glasses.

The parallactiscope automultiscopic display has four main components: an electrostatically deflected CRT, a Parallax Scanner, a Parallax Computer, and a Scanner Driver. The Parallax Scanner is an electronically controlled direction-sensitive spatial filter placed in front of the CRT. The Parallax Scanner is constructed from two crossed linear polarizers with a vertical sliver of half-wave retarder in the center. The crossed polarizers block the light coming from the CRT; but with the 0.10-inch sliver of half-wave retarder in the center, light that passes through the first linear polarizer and the half-wave retarder is rotated 90° and is then able to pass through the subsequent linear polarizer. This forms a virtual slit which allows only particular light rays to pass into observer space while blocking all others. The sliver of half-wave retarder is mounted on a pendulum connected at its other end to a small audio loudspeaker. The speaker acts as a linear drive motor, allowing the virtual slit to scan horizontally in front of the CRT. If the slit moves side to side fast enough, it is no longer perceived as a slit.


The Parallax Scanner slit's horizontal motion is driven by the Scanner Driver with a sinusoidal signal. A second sinusoidal signal with the same frequency, but variable phase, feeds into the Parallax Computer. The Parallax Computer is a special purpose analog computer, which accepts x, y, and z deflection signals and processes them to provide the proper horizontal and vertical deflection signals for the CRT. Since the Parallax Computer receives a signal from the Parallax Scanner, it is able to keep track of the slit's instantaneous horizontal position at all times. As the CRT screen is viewed through the slit, each narrow vertical zone on the CRT screen is keyed to a unique horizontal viewing direction. This configuration allows controlled parallax: the Parallax Computer processes the x, y, and z signals in relation to the slit's instantaneous position, displaying the proper image on the CRT, which, when viewed through the Parallax Scanner, produces automultiscopic images. Autostereopsis is produced, along with multiple views allowing 90° horizontal parallax, with no pseudo-scopic zones. The displayed 3D image represents the x, y, and z deflection signals fed into the Parallax Computer. All images are directly drawn onto the display; there is no raster. The CRT must be electrostatically deflected to obtain a sufficient redraw speed, and must have a high intensity, short persistence phosphor. High intensity is required due to some loss of brightness with the slit; the short persistence helps avoid smearing. Figure 1 shows the basic large screen parallactiscope hardware.

Figure 1: Left – Parallax Computer with Scanner Driver on top; Right – large Parallax Scanner with one linear polarizer removed to show slit and pendulum.

To display moving images on a parallactiscope, synthesized via digital signal processing on a computer, the x, y, and z inputs of the Parallax Computer are fed from a multi-channel digital-to-analog converter connected to the computer and running at 192 kHz. A blanking output signal from the DAC controls the intensity modulation of the CRT. Real-time signal processing is used to run spatial image processing and sound synthesis algorithms, defining 3D images by XYZ signals. Audio is also output from the computer via the multi-channel soundcard and routed to four speakers. Using multiple outputs of the DAC, up to four sets of XYZ and intensity signals can be routed to as many as four parallactiscopes.
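To make the signal flow concrete, here is a minimal sketch of how one frame of a vector-drawn figure might be synthesized as x, y, z and blanking sample streams at that 192 kHz rate. The figure, sample counts and data layout are all illustrative; the transfer to an actual multi-channel DAC is hardware-specific and omitted.

import numpy as np

RATE = 192_000                           # DAC sample rate, as in the article

def circle_frame(angle, points=960):
    """One stroke of a unit circle rotated about the y axis.
    960 samples at 192 kHz redraws the figure 200 times per second."""
    t = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
    x, y, z = np.cos(t), np.sin(t), np.zeros_like(t)
    # Rotate about the y axis so depth (z) varies with the view angle.
    xr = x * np.cos(angle) + z * np.sin(angle)
    zr = -x * np.sin(angle) + z * np.cos(angle)
    blank = np.ones_like(t)              # beam on for the whole stroke
    return np.stack([xr, y, zr, blank], axis=1)   # shape: (points, 4)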

The moving autostereoscopic images can be controlled via human-machine interfaces in real time, as a live performance. These interfaces include a musical keyboard connected via MIDI to the computer; three motion sensors plus a control panel connected to the computer's USB port; and external audio, analyzed to control 3D image synthesis parameters, run into the ADC input of the multi-channel soundcard.

3D visuals and sound are integrated, and can be linked in unique and interesting ways. The images can also be controlled by internal modulation sources. They can share modulation sources, or variations and derivatives of specific modulators. When specific images appear, they can have their own associated sound triggering at the same time. If that same image is modulated in size, rotation, or shape, the associated sound can be modulated by multi-channel spatial audio effects at the same rate of change as the visual. This creates a synchronicity between the visuals and sounds. A basic performance setup is shown in Figure 2 (on the next page).

Once generated, the autostereoscopic images can be transformed. Classic film effects such as traveling mattes, fades, and wipes were extended to video when television began. I have further extended these basic visual special effects to autostereoscopic movies. Video wipes, originally developed in the television industry, are a process of graphically combining two video images with other video effects to generate the electronic equivalents of film
wipes, fades, superimpositions, and traveling mattes. Volume-based image wipes work on the same principle as video wipes, except that they work in the spatial realm, combining two volumetric images and splitting along a whole plane rather than a line. With the volumetric video wipe, you can either slide or rotate the splitting plane 360° along any of the three axes to create a horizontal wipe, a forwards/backwards wipe, or diagonal wipes. You can also rotate the splitting plane while sliding to create a spinning wipe effect. Volumetric wipes can be cylinders, cubes, spheres, or other arbitrary 3D shapes.
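For point-based volumetric imagery, the planar wipe reduces to a per-point side-of-plane test. The sketch below is my own illustration of that idea, not the Hologlyphics implementation; the data layout (point clouds as N-by-3 arrays) is an assumption.

import numpy as np

def volumetric_wipe(points_a, points_b, normal, offset):
    """points_*: (N, 3) xyz arrays; returns the wiped composite."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    keep_a = points_a @ n <= offset      # source A shows on one side of the plane
    keep_b = points_b @ n > offset       # source B shows on the other
    return np.concatenate([points_a[keep_a], points_b[keep_b]])

# Sliding the plane along z (normal = [0, 0, 1]) while animating `offset`
# gives a forwards/backwards wipe; rotating `normal` while sliding gives
# the spinning wipe described above.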

Figure 2: Left – Musical keyboards, microphone, the control panel and motion sensors connect to the computer. 3D audio/visual scene generation is controlled by the interfaces, then output to four spatial displays and speakers. Right – Autostereoscopic image control box.

Using real-time digital spatial image processing, dozens of other volumetric video effects have been created including spatial morphing, kaleidoscope transforms, volume bending and effects that interact with sound. With this large flexible family of spatial image processing and synthesis algorithms, a creative environment exists for the production of autostereoscopic and volumetric movies.

The Hologlyphic system has a distinct architecture for generating volumetric images with sound. Both visuals and sound can be grouped into three main processes: synthesis, processing and control. Synthesis involves the creation of a visual or sound as a digital signal, wherein processing takes the generated signal as an input and transforms it. Both forms of synthesis and processing can be controlled by the interfaces and internal modulators. This provides complete integration between the spatial visuals and sound.

To create a full spatial scene, a number of spatial images are first generated simultaneously with synthesized or sampled sound. Then both the generated visuals and sound are added together, processed, and sequenced to generate a 3D scene. The visual 3D scene is displayed on one or more automultiscopic displays and the audio is spatialized through a multi-channel sound system. After a suitable number of scenes and animations have been created, each with their associated sounds, they can
be pulled together to create a sequence of scenes that comprise a whole performance, using serial and parallel audio/image synthesis and processing techniques, parameter mapping, scene switching, and performer interface devices.

Figure 3: Volumetric wipe. A side view of a forwards/backwards wipe along the Z axis.

Control signals can be low frequency oscillators of many waveshapes, random and chaotic functions, inputs from external interfaces such as motion sensors, perceptual features from acoustic instruments, musical keyboards, or knobs and sliders. The parameters of the modulators themselves can also be controlled and modulated, either by other modulators or any of the external interfaces. The basic spatial image/sound generation, processing, control, and distribution configuration is shown in Figure 4.

Due to the change in perspective and parallax as the audience members move, there is a different "natural" pacing preferred for many images in the Hologlyphics system. For certain very detailed images, with the viewer's ability to "look around" the object, how much time you leave to look around the object needs to be taken into consideration. Imagine a series of shots of a stationary spatial object displayed on a volumetric display; each shot lasts 3 seconds. With a stationary object displayed on a volumetric display, "you don't see all you can see" for that shot as soon as your mind visualizes the object. This is not a consideration with works using only one point of view, as is the case with video, film, and even traditional 3D movies where, even though the viewer sees 3D, only one perspective is viewable at any moment. Again, while no "right" pacing exists, such techniques, rules, frameworks and preferences for the temporal pacing of multiview shots will need to incorporate that consideration. The pacing considerations have been noted not only by the artist, but by colleagues and audience members as well. Untrained people at public art events have mentioned they did not get to see all of a scene before the shot changed.

Figure 4: Volumetric animation/sound synthesis structure. Spatial visuals and sound are integrated by sharing the same control sources.

Figure 5: Left – audience member views autostereoscopic animation as live music plays. Right – large scale art installation in Hong Kong incorporates motion sensors inside colored cones hanging above the crowd, which provide input for adaptable autostereoscopic visuals.


Multiple venues for the performance of autostereoscopic movies have been explored. Recently three completely different art festivals featured Hologlyphics. The first, a movie festival, had the audience viewing recorded autostereoscopic movies. The next situation was for an international music festival with real-time generated autostereoscopic animations controlled by live music, as shown in Figure 5. The third was an art installation involving motion sensors controlling autostereoscopic animations and sound, displayed at an international sound art festival. In all three situations, each with different content, audience feedback was extremely positive. By not limiting ourselves to waiting for technology or standards, we were able to obtain useful information that has greatly accelerated our development of autostereoscopic movie techniques and content.

The adaptation of traditional film and video techniques to the autostereoscopic movie system, coupled with the development of new, unique effects exclusive to multiview displays, has resulted in compelling content shown to live audiences in multiple settings with great success. I am now working with larger scale displays in order to incorporate these techniques into projects such as a public art installation I developed in Hong Kong. It involves motion sensors inside colored cones hanging above a crowd (see Figure 5, right) that use the motion of the crowd to trigger sounds; together with the sensors already interfaced to the current Hologlyphics autostereoscopic movie system, this lays the foundation for these types of installations.

As a result of experimentation with creating autostereoscopic movies for live audiences instead of working in a vacuum, the larger displays have a strong foundation of techniques, technology, and content to build upon.

References:

1. W. Funk, "Hologlyphics: volumetric image synthesis performance system", Proc. SPIE 6803, 2008.
2. B. Hopkin, Gravikords, Whirlies & Pyrophones: Experimental Musical Instruments, p. 81, Ellipsis Arts, Roslyn, NY, 1996.
3. H. B. Tilton, "Real-time direct-viewed CRT displays having holographic properties", Proc. of the Technical Program, Electro-Optical Systems Design Conference, pp. 415-422, 1971.
4. H. B. Tilton, "Nineteen-inch parallactiscope", Proc. SPIE 902, pp. 17-23, 1988.

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

The MultiView compilation newsletters bring together the contributions of various regular contributors to the Veritas et Visus newsletters to provide a compendium of insights and observations from specific experts in the display industry. http://www.veritasetvisus.com


The 3D Interface by Fluppeteer

Fluppeteer is contributing to Veritas et Visus based on a long background working as a programmer, and a similarly long background torturing his display hardware to within an inch of its life. He uses an IBM T221 display (3840x2400 pixels) and multi-monitor setups, the attempts to extract the best out of which have given him some insight into the issues specific to high-resolution displays. Fluppeteer holds an MA from the University of Cambridge (England) and an MSc in Advanced Computing from King's College London. His efforts to squeeze the most from monitors stretch from ASCII art to ray tracing. Laser surgery left him most comfortable 1-2 feet away from the monitor, making high resolution a necessity. He is currently ranked 18th in the world at tiddlywinks.

Some years behind the game industry, the user interface with computers and electronic devices is moving to 3D. Of course, this is no new thing: attempts at 3D interfaces date back decades, but only recently is the technology starting to catch on. I'd like to discuss the merits of this trend.

The 3D look: The WIMP desktop has undergone a number of aesthetic revisions during its lifetime. The first windows were simple rectangles containing text. Rounded corners were quickly added - younger readers may be amused at the excitement that this caused – and as color displays appeared, color schemes allowed better visual cues. As the performance of the display system improved (and sometimes before it did), more effects were added: faster screen redraw allowed "solid drag" of windows (rather than positioning a representation of the window border); “3D effect” window furniture both added a “look and feel” and used common visual cues to delineate UI elements; antialiased fonts improved the smoothness of the display; textured window elements improved the unified feel of desktop items; dynamic effects such as window minimizing and maximizing gave better cues about a change in the state of the machine.

Since the prettiness of the desktop was a significant factor in winning over new users of a machine, more artistic effort was put into the display to better use the improving technology. Representing GUI elements in vector form allowed better dynamic resizing and display portability – in the early days, features shown off by NeXT and SGI. With the Aqua interface in OS X, Apple presented a particularly artistic representation of the desktop and extended the flexible UI element representation with Quartz, at some cost in performance.

Meanwhile, computers were increasingly being used for 3D modeling in CAD and for 3D gaming - which almost universally replaced the 2D sprite-based game in the years shortly after Doom, at least on systems with sufficient graphical ability. It's unsurprising that, given the increasing capabilities of hardware and the ubiquity of 3D graphics, developers should decide that the method of interaction with a computer should also take on the third dimension. Surely three dimensions should be better than two?

There are obvious limitations to how helpful the third dimension can be when the user is editing a fundamentally two-dimensional document: 3D graphics are unlikely to make word processing significantly better, for example. In contrast, some applications are inherently 3D: a CAD model or a 3D graph from a spreadsheet does not rely on the desktop itself being 3D. Here, I'm discussing manipulating windows, arranging multiple documents, and selecting files – GUI elements that are simply part of the UI rather than something to edit in themselves.

Those with an experience of CAD software will be aware that the accurate representation of a 3D object on a two- dimensional screen is a task which inherently involves compromises. In the case of a CAD application, there is no alternative: a three-dimensional object is being modeled, so some attempt to represent more than a two-dimensional view of it - typically by showing several viewpoints at once – is necessary; this approach provides a way to see the detail of the model, but is hardly natural or efficient in terms of use of the display area.


Games make an alternative compromise: it's unusual for a game to present a static representation of a 3D world which can give the user an accurate view of spatial relationships.

What games provide instead is a highly dynamic environment: the user has control over the movements of their avatar, and change in the viewpoint provides both a temporally-separated form of stereoscopic vision and the ability to change the view of elements in the environment. As graphics performance increased, the same could be said of CAD software: while the traditional multi-view representation of objects is still available, it also became possible to get a feel for the three dimensional shape of an object by manipulating the viewpoint of it.

The desktop, in contrast, is a static thing. The user's attention should be on the work they're doing - the document being edited or viewed; interaction with the desktop itself is intermittent and should require the minimum of effort. The user cannot be expected to have to maneuver the point of view in order to locate GUI elements, and having elements in constant movement is distracting and disorienting.

It's unfair to suggest that all depth cues rely on the user actively manipulating the scene. Stereoscopic vision gives a non-interactive three-dimensional view, so an autostereo display or stereo glasses can provide a natural view of the 3D relationship between desktop elements. Unfortunately, both approaches (and more exotic alternatives) have an implementation cost and inconvenience to the user. If every computer display becomes 3D capable in order to support 3D applications, it will make more sense to make the 3D interface ubiquitous; in the meantime, interactive desktop holographic interfaces are conspicuously rare, and there is little justification to buy a 3D display purely for office applications.

This is not to say that selective 3D elements don't have their place in a GUI. Apple's Cover Flow interface provides a natural representation of a way to flip through a list with a 3D representation of sorts; Windows Vista's Flip 3D task switcher provides a preview of multiple windows with a natural relationship between them. BumpTop provides a novel three-dimensional way of manipulating files. However, none of these features truly rely on a 3D representation of objects. Much of 3D graphics consists of finding cheats: ways to give the impression of a realistic image without the computational overhead of taking all aspects of a three-dimensional scene reconstruction into account. Is Cover Flow better in 3D than a similar 2D distortion of the contents? (In fact, the representation used by Cover Flow can be rendered efficiently without a full 3D renderer.) Is Flip 3D any better than other window preview mechanisms such as Apple's Exposé? The 3D effects provide a metaphor for the way in which a GUI element is distorted, in the way that a "zoom" effect represents a window being minimized; whether they are inherently more effective purely because they are 3D is debatable. Flip 3D, in particular, is an approach enabled by the implementation of the desktop rather than an interface at which someone designing from scratch would arrive.
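As an illustration of the point that a Cover Flow-style view needs no full 3D renderer, the sketch below computes the on-screen quad for a tilted cover with nothing more than a per-edge perspective scale; the geometry constants and the function itself are hypothetical, not Apple's implementation.

# Screen quad for a cover tilted about its vertical axis: the left edge
# sits at depth z_near, the right at z_far. One perspective divide per
# edge is enough; no 3D pipeline, lighting or z-buffer is involved.
def cover_quad(cx, z_near, z_far, half_w, half_h, focal=800.0):
    def project(x, y, z):
        s = focal / (focal + z)          # perspective scale at this depth
        return (cx + x * s, y * s)
    return [project(-half_w, -half_h, z_near),   # bottom-left
            project(-half_w,  half_h, z_near),   # top-left
            project( half_w,  half_h, z_far),    # top-right
            project( half_w, -half_h, z_far)]    # bottom-right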

Interacting in 3D: If the user interface is truly a 3D environment, how is the user to control it? While the mouse is universal and the (multi-)touch panel is becoming more common, 3D input devices are less common.

Nintendo have done more than most to change this with the Wii, but the limited accuracy of this device restricts its usefulness outside a gaming environment: many games deliberately provide an awkward interface as part of the challenge. Sony's Sixaxis technology for the PS3 similarly allows some 3D control (although without the camera-based absolute position sensing of the Wii); Microsoft's project Natal for the Xbox 360 will also provide 3D input. All of these input methods are energetic and require high mobility – good for games, but not so useful for regular computer interaction.

“Serious” 3D input devices are very different. While “3D mice” with much in common with the Wii remote have been available for years, actually waving an object around in 3D space is tiring and inaccurate – not an option for any kind of accurate manipulation. Even touch screens have similar disadvantages: the natural viewing position is approximately straight ahead, but touching a display in this position requires the arms to be supported against gravity in an uncomfortable posture. The much-vaunted “Minority Report” display – while allowing fast and
apparently natural interface manipulation – would be hopelessly tiring for extended use; nor is it even comfortable to switch back and forth between using such a display and using a conventional mouse or keyboard: the additional arm movement outweighs any usability benefit (one reason why hot key alternatives for menu options are useful).

Devices based on force rather than repositioning, such as the range by 3DConnexion, or based on a limited range of motion such as the products of Butterfly Haptics, allow the arm to be supported while providing 3D (or, in these cases, 6D) control. These devices provide a natural way to manipulate objects within a limited range accurately, but map poorly to most desktop metaphors. Haptic pen input devices such as those made by SensAble Technologies provide a faster interaction over a wider range of motion, with the haptic feedback complementing the visual cues for 3D; however, arm support is limited and I remain to be convinced that such a device – even if made affordable – is viable for long-term UI manipulation.

It is no coincidence that mice, keyboards and graphics tablets are all designed to sit on a desk in front of the user. The hands are comfortable in this position, and – with practice – most people seem to adapt reasonably well to mapping horizontal hand motion to vertical motion on-screen. However, a small amount of 3D motion in the hands is possible even when the arms are supported at the elbow. There has been some development of systems which use webcams to interpret hand movements for gesture control of an interface; maybe this, at last, will allow an efficient way to control a 3D desktop – if such a thing is actually wanted.

Graphics hardware and the desktop: While I've argued against the indiscriminate use of 3D graphical effects on the desktop, there is a move to use the GPU as part of the display mechanism.

There is a long history of varying kinds of hardware acceleration in the display. While the earliest displays relied solely on the CPU – and it is only relatively recently that mainstream systems started doing more than basic 2D manipulation – more esoteric systems have been able to make better use of their design. The Amiga had three ways to update window contents: by mapping the memory corresponding to the window directly to the display (plane by plane), by hardware blitting from an off-screen area to the display, or by reporting the need for a redraw to the application, which could then draw directly to the main display memory. Since this last approach did not require any additional display memory, it was often the preferred one – ironically, since modern APIs are moving away from direct drawing.

Unsurprisingly, dedicated graphics hardware is much more efficient at moving graphics around than a CPU, but the historical advantage of using separate hardware for composing the display has been to offload work from the CPU. This argument is still made for recent composited desktops: by handing as much of the screen redraw as possible to the GPU, more CPU cycles become available. Once the GPU is being used to arrange the display, it becomes possible to exploit its horsepower to enhance the look and feel – as with Quartz Extreme in OS X, or features such as the frosted-glass look of Windows Aero. By making the CPU load less significant in the scheduling of the screen redraw, and by reducing the redraw time itself, the interface can be made more responsive.
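The compositing step itself is conceptually simple. Here is a minimal generic sketch – my own illustration, not any particular OS's pipeline; the Window type and composite function are hypothetical names – in which each window renders into its own off-screen buffer and the compositor assembles the visible frame back to front:

# Generic compositing sketch (illustrative only): the screen is rebuilt
# by copying per-window off-screen buffers over it, back to front.
from dataclasses import dataclass

@dataclass
class Window:
    x: int                 # position of the window on screen
    y: int
    buffer: list           # off-screen contents: rows of pixel values

def composite(screen, windows):
    for win in windows:    # back to front; later windows overdraw earlier
        for row, pixels in enumerate(win.buffer):
            for col, px in enumerate(pixels):
                screen[win.y + row][win.x + col] = px
    return screen

screen = [[0] * 8 for _ in range(4)]             # tiny 8x4 "display"
w1 = Window(0, 0, [[1] * 4 for _ in range(2)])   # back window
w2 = Window(2, 1, [[2] * 4 for _ in range(2)])   # front window, overlaps w1
for line in composite(screen, [w1, w2]):
    print(line)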

However, the separation of window content from the screen redraw means that windows have a significant memory overhead. With modern discrete graphics boards including a relatively large amount of memory this may not be a major issue in many cases, but it is wasteful and can be a limitation in some systems. It can be more efficient to render directly to the screen, although the security benefits of composited desktops discourage this approach even when it would better suit the application.
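To put a rough number on that overhead – these are my own back-of-envelope figures, assuming 32-bit ARGB buffers – each composited window costs width × height × 4 bytes of graphics memory:

# Illustrative arithmetic (assumed pixel format): memory held by
# off-screen window buffers in a composited desktop.
def window_buffer_mb(width, height, bytes_per_pixel=4):
    # Size of one window's off-screen buffer, in megabytes.
    return width * height * bytes_per_pixel / 2**20

full_screen = window_buffer_mb(1920, 1080)       # ~7.9 MB per window
print(f"{full_screen:.1f} MB per full-screen window; "
      f"ten of them: {10 * full_screen:.0f} MB")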

More significantly, using the GPU to redraw the screen – especially in an inefficient manner, with limited knowledge of changes to window content – is only viable for as long as the GPU's performance is significantly greater than that of the CPU. This is often the case in current applications: the CPU is busy and the GPU is effectively unloaded. However, there is a move to expose more of the computational capabilities of GPUs through OpenCL, CUDA and other stream libraries. It may soon become the case that the CPU is relatively unloaded and the GPU is busy. Additionally, while a GPU is typically more efficient at copying graphics memory around than the CPU is, this assertion is disputable in portable systems once we take into account the overhead of powering up a fully-capable 3D rendering device just to shuffle some bits.

These arguments may never rule out having the GPU perform the display redraw (it is, after all, attached to the right bit of memory, and it may never be efficient for the CPU to get involved), but they do suggest that it is dangerous to treat 3D interface features as “free” and to rely on the performance of the GPU to outweigh any inefficiencies in the rendering scheme.

I'm happy to grant that a pretty user interface is a necessary part of marketing and makes a product more pleasant to use. I believe that 3D features in a GUI can have their place, although in many cases the same effect can be achieved at lower cost. Maybe, though, 3D isn't always better.

The computer screen is, at least for now, 2D. Vision is 2D, albeit stereoscopic. For most input devices, the range of comfortable motion is 2D. Most documents are 2D. The top of a desk – the basis of a user-interaction metaphor which has lasted decades – is 2D; nobody ever looks at a WIMP GUI and wants to see the table legs.

We should think before breaking the paradigm: 3D may be pretty, but what of the performance, the efficiency, the usability? 3D needs to justify its existence as the exception rather than the rule – and it certainly shouldn't be selected just because it's trendy.

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

MultiView by Adrian Travis

Now available for $12.99

http://www.veritasetvisus.com


The game developers have spoken, are you in?

by Neil Schneider

Neil Schneider is the President & CEO of Meant to be Seen (mtbs3D.com) and the newly founded S-3D Gaming Alliance (s3dga.com). For years, Neil has been running the first and only stereoscopic 3D gaming advocacy group, and continues to grow the industry through demonstrated customer demand and game developer involvement. His work has earned endorsements and participation from the likes of Electronic Arts and Blitz Games Studios, and he has also developed S-3D public speaking events with the likes of iZ3D, DDD, and NVIDIA. Tireless in its efforts, mtbs3D.com is by far the largest S-3D gaming site in existence, and has been featured in GAMEINFORMER, Gamecyte, Wired Online, Joystiq, Gamesindustry.biz, and countless more sites and publications.

I would say that August 2009 marks the month that every industry myth about stereoscopic 3D gaming was shattered. Here are the leading myths as I see them:

1. For years, it has been argued that game developers will not support stereoscopic 3D gaming. It was the classic chicken-and-egg excuse: the assumption that game developers will not participate with stereoscopic 3D technologies until X million display units have been sold.

2. Game developers will only support certain brands of technology. We have seen examples of this with Capcom and EIDOS being strongly associated with NVIDIA's GeForce 3D Vision for Resident Evil 5 and Batman: Arkham Asylum, and with Crytek's native support of iZ3D monitors.

3. Our industry is dependent on S-3D standards in the home – that S-3D gaming on Xbox and PS3 consoles will never be possible without such a standard.

4. Since stereoscopic 3D gaming requires a left and right render, current XBOX and PS3 consoles are not powerful enough to support stereoscopic 3D gaming.

5. Game developers don't care how their games look in S-3D...period.

At SIGGRAPH 2009, I had the privilege of forming and moderating a special stereoscopic 3D gaming panel. It included Habib Zargarpour, Senior Art Director for Electronic Arts, Nicolas Schultz, Graphics Engineer for Crytek, and Andrew Oliver, Co-Founder of Blitz Games Studios.

Andrew Oliver demonstrated Invincible Tiger: The Legend of Han Tao. Invincible Tiger is a Kung Fu console game that runs on the Xbox 360 and Sony PS3. Even without a standard in place, it has been natively programmed to run on the majority of modern stereoscopic 3D solutions! The audience was wowed by its immersive stereoscopic 3D effects on the Real D projection screen. Absolutely breathtaking. Watching the high-speed action also put to rest the idea that consoles don't have the processing power needed to play in S-3D... clearly incorrect!

Crytek is best known for the original Far Cry and the critically acclaimed Crysis series. Crysis was the flagship DirectX 10 game promoted by Microsoft, and is best known for its stellar visual effects and detailed environments. At SIGGRAPH, Nicolas demonstrated the new CryEngine 3 running in stereoscopic 3D with native support. Normally, a modern PC game requires an iZ3D, NVIDIA, or DDD stereoscopic 3D driver to run in stereo. This was a case where the S-3D experience was coming straight from the game engine itself, without the need for separate driver software.

According to Nicolas, Crytek's interest level in stereoscopic 3D is “8 out of 10” – very high considering what has been assumed until now. In fact, Crytek's leading frustrations have nothing to do with display sales figures. According to Nicolas, the frustrations are twofold:


First, LCD shutter glasses (e.g. XpanD, NVIDIA, E-Dimensional) cannot be supported natively on the PC: it's impossible to synchronize the glasses with the game without direct cooperation from the graphics card manufacturers (AMD, NVIDIA, Intel). Second, the level of compatibility they can currently achieve is only possible through stereoscopic 3D drivers, and with S-3D drivers the experience is only an approximation of what the game developers are after. Ultimately, Crytek would like to be able to pass on a left and right image exactly to their specifications, without any middleware getting in the way.

Finally, Habib Zargarpour from EA demonstrated the highly anticipated Need For Speed SHIFT with a never-before-seen stereoscopic 3D demo using DDD's TriDef Ignition drivers. While he had seen several S-3D solutions at FMX/09, this was the first time he had experienced Need For Speed SHIFT in this manner. “It is going to be hard to go back,” he told the audience.

As a Senior Art Director for Need for Speed, Habib Zargarpour’s game development credits also include Need for Speed: Most Wanted, James Bond 007: Everything or Nothing, and Need for Speed Underground. His diverse film background includes work with IMAX 3D, and a whole range of visual effects projects such as Star Wars: Phantom Menace, The Perfect Storm, Twister, Star Trek: First Contact, The Mask, and several more memorable films.

As an artist, his number one frustration is the lack of quality control with stereoscopic 3D games and drivers. It's not that the drivers are doing a poor job! The problem is that stereoscopic 3D gaming reveals the shortcuts developers take to get their games on the shelf. He shared an example in which an earlier Need For Speed title that originally looked elaborate suddenly looked simple in 3D: flat trees and billboards! Could you imagine his horror?

Habib would very much like to see quality control that the game developers have a say in – a working relationship that gives artists the confidence that their games will look the way they envisioned 100% of the time, in 2D and in stereoscopic 3D.

In my opinion, the most important lesson from the panel was that all the game developers want to support all the solutions in the market. They see S-3D gaming as an industry-wide technology, and they will jump on board given the opportunity – but they will play no favorites, because favoritism goes against the nature of an industry-wide technology and undermines their video game sales. This is where a standard comes in: it gives game developers the means to render once and have the result work equally well on countless solutions. Otherwise, as Blitz Games Studios found with Invincible Tiger, game developers will be forced to manually program for every S-3D solution on the market.

I have to say that I am very pleased with what was learned at SIGGRAPH. This has been the mindset that MTBS has been encouraging for over two years now, and it is clear that the decision makers, the game developers, see stereoscopic 3D as a strong and fruitful industry. It is for this reason that we launched the S-3D Gaming Alliance (S3DGA). Our goals include setting the S-3D gaming standards the developers are after and acting as a credible education resource for gamers and media alike. While 3D cinema is well covered, there currently isn't a single qualified organization in existence with an S-3D gaming focus; S3DGA is therefore both necessary and timely.

“When I saw Need for Speed SHIFT in stereoscopic 3D at SIGGRAPH, I was impressed. The audience was blown away by how well SHIFT’s immersive cockpit was represented by the 3D tech, and we see a great future for S-3D in games. We need an industry-wide standard for S-3D gaming, and S3DGA has the drive and experience to push this forward. All developers and manufacturers should participate.” – Habib Zargarpour, Senior Art Director, Electronic Arts Inc.


In addition to Habib Zargarpour from EA, our advisory board also includes Andrew Oliver from Blitz Games Studios and Neil Trevett, President of the Khronos Group and VP of NVIDIA Mobile Content. The Khronos Group is responsible for the standardization and implementation of OpenGL, one of the most widely used graphics APIs in existence. Initial members include iZ3D, DDD, Blitz Games Studios, Sensio, TDVision Corp., Jon Peddie Research, XpanD, and Electronic Arts. We will hold our first meeting at 5:30PM PST on September 17th, 2009 at the 3D Entertainment Summit. There is no expense to attend, and participants qualify for a 25% discount on summit registration by using the S3DGA discount code.

This is where the real story begins. The game developers have spoken, and they have made their wishes clear: they want S-3D gaming to happen. They want it to happen on PC, they want it to happen on console, and they want it to happen for everybody. Will the industry grab hold of this outstretched arm and support S3DGA's efforts?

[Photo: Neil Schneider, S3DGA President, with Habib Zargarpour, Senior Art Director at Electronic Arts]

Hurry up and RSVP to [email protected], because space is limited. We look forward to seeing you in September!

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

http://www.veritasetvisus.com

We strive to supply truth (Veritas) and vision (Visus) to the display industry. We do this by providing readers with pertinent, timely and exceptionally affordable information about the technologies, standards, products, companies and development trends in the industry. Our five flagship newsletters cover:

• 3D displays
• Display standards
• High resolution displays
• Touch screens
• Flexible displays

If you like this newsletter, we’re confident you’ll also find our other newsletters to be similarly filled with timely and useful information.

3D game review: Trine

by Eli Fihn

Eli Fihn is a gaming enthusiast who has become intrigued by the capabilities offered by 3D gaming. He is currently a 12-year-old who is home-schooled in central Texas. In addition to gaming, he enjoys soccer, swimming, reading, and playing with his friends.

Trine is a single-player artistic fantasy physics-based side-scroller developed by indie game developer Frozenbyte.

In Trine, you take control of three characters: The Magician, the Thief, and the Knight. The Magician takes control of the physics of the game. He can create and move various objects around the levels. The Thief is much like Altair from the game Assassin's Creed. She can rapidly move through the levels, and can get into many secret areas. The Knight is the basic fighter. He can kill enemies with ease, and his shield is useful for blocking attacks. These three characters must save the kingdom from destruction by an evil undead army.

The game-play of Trine is, sadly, slightly repetitive. You are mainly swinging and jumping through the level as the Thief, shooting enemies with your bow. Eventually, the Magician and Knight become increasingly obsolete, only used for mere seconds to get past certain puzzles. The variety of enemies was also disappointing. There are skeletons, archer skeletons, fire skeletons, and skeletons. Occasionally you will come by a few spiders, bats, or giant mini-bosses, but you will mainly be killing skeletons.

The voice acting, sounds, and music are superb. The music fit the theme of the game exceptionally well, and the sound effects and voice acting were of extremely high quality for an indie game.

Trine in stereoscopic 3D is great. The only problems I saw with it were fire effects and certain water effects. The fire and water effects tended to pop out to the front of the screen, causing a minor loss of depth perception – which isn't even important in a side-scroller. With the amazing artwork and level design, the 3D looked fantastic, bringing more depth out into the game.

The graphics in Trine are like a fairy-tale and, in stereo mode, really very beautiful



Looking ahead at a 3D film and TV future with Avatar

by Alexander Lentjes

Alexander Lentjes is producer & director at animation and stereoscopic production and consulting company 3D Revolution Productions (http://www.the3Drevolution.com). He has aided and consulted on independent 3D film and short productions, film and animation alike, since 2000. Alexander is one of the very few individuals in the animation industry specialized in 3D stereoscopic animation production, from both a creative and a technical perspective. Trained in animation and television science at the Utrecht School of Arts, the Netherlands, Alexander graduated with the very first 3D stereoscopic stop-motion/CGI animation short in film history, “The Incredible Invasion of the 20,000 Giant Robots from Outer Space”, which has played at all major 3D film and animation festivals around the world. Alexander is available as a freelance 3D director and consultant to help out with 3D stereoscopic projects at any stage of production, be it a special 3D project, short or feature film, webcast or television broadcast, in Real-D format, polarized projection, anaglyphic, field-sequential or otherwise.

It is being heralded as the 3D movie that will single-handedly save the stereoscopic industry – or, rather, kick-start it into full gear and propel it into the common man’s cinema diary and living room. Avatar. Everyone talks about it with awe and expectation, while salesmen of 3D hardware excitedly shout its name. But what is Avatar? A high-concept science-fiction film of the purest kind: space marines, alien planets, a war between men in spaceships, helicopters and machine-gun-toting exoskeletons and a jungle-bound alien race of blue giants. Some would describe it as effects for effects’ sake, but whatever the case, it is not a broad-audience movie. By all accounts Avatar is a niche-market film, appealing to young men and even younger boys. Are our mothers and wives going to want to invest emotional energy in giant blue warmongering aliens? I seriously doubt it. So in my opinion the subject matter alone is going to keep the female audience away from Avatar. And that is even before one starts talking about donning 3D glasses to see the movie stereoscopically.

A big problem being bluntly ignored by professional proponents of 3D is that the largest part of the cinema-going and movie-buying audience still needs to be convinced that wearing 3D glasses is not stupid, annoying or even uncomfortable. Dissing anaglyph glasses is certainly not the way to convince them, because it is the diss that is remembered, not the difference between dichromatic and polarized image separation. Yes, glasses could be physically more comfortable, but that does not take away the psychological barrier most people experience when faced at the box office with the prospect of wearing them. So how can we remove this inbuilt reluctance and fear? By presenting the doubters and cynics with their favorite content in really well-shot 3D and letting the power of word of mouth do its job. What could this industry-changing content be, then? Not a schoolboy’s wet dream of aliens and guns, but quite the opposite.

In my opinion the key to a 3D future lies in the romantic comedy, the costume drama and the psychological thriller. Taking real numbers as found on IMDB, fewer than 1 in 10 movies produced overall is a science-fiction or fantasy film (6%), 1 in 10 is a family film (including animation, 10%), while 5 in 10 are romantic comedies or dramas (47%). Yet of the 3D movies slated for a 2009 release, 3 out of 10 are science-fiction / horror / action films (33%), almost 4 out of 10 are animation / fantasy / music (35%) and just over 2 out of 10 are documentaries (24%). The remaining 1 out of 10 (9%) is reserved for music specials and naughty movies, while no romantic comedies or drama films are slated for 3D release. How can there possibly be proper penetration of stereo 3D as a mass-audience medium if the main types of cinematic story are not told in 3D? If Sandra Bullock and Meryl Streep don’t look good in 3D, don’t even bother trying to sell the 3D-Ready LCD screens.

In terms of broadcast TV, cooking programs and reality shows will have to work in 3D if 3D broadcasting is to make financial sense. Again, salesmen are focusing all their energy on sports broadcasts and thus targeting boys and men. But what is one football match in a sea of time-filling content such as The Apprentice, Strictly Come Dancing, As the World Turns, Oprah Winfrey and Jerry Springer? That is what the reality of a 3D stereoscopic future is all about. We all know a tomato will look fantastic in 3D and that the bargain diamond ring on QVC’s home shopping channel will sell very well when it pops off the screen, but what about book reviews and embarrassing celebrity reality filler? I, for one, will not feel enticed to don 3D glasses to watch animals do the funniest things – in 3D. But perhaps I just don’t know what I’ll be missing...

On the production side of things, the only way for a true 3D switch-over to happen is complete standardization and idiot-proofing of recording, playback and delivery hardware. Of course we stereo experts will all be out of a job when everything is standardized and built in, so we can all enjoy long weekdays in front of the TV. Fixed interaxials for studio shoots, fixed minimum distances to the camera, no more convergence control, and a pipeline that allows for previewing and editing at the final screen size of choice all the way. No more need for lookup tables and heated discussions over what to do or not do in 3D. Producers want a straight off-the-shelf stereoscopic camera and pipeline solution, and that’s what they will get. We will see a return to cameras with three fixed lens options, as was standard in the 1950s and 60s. But what is the bulk of 3D films produced with such standardized equipment going to look like? Creative 3D control will go out the window. On most productions, that should actually be a blessing: watch one movie with divergence and vertical parallax and you will agree with this point. Experience is everything in 3D shooting, and even then, with veteran stereographers at the helm, eyestrain can creep in. Automatic, real-time vertical parallax detection and correction hardware will remove the strain of having to precisely align a 3D camera rig before every shot. Miniature cameras and lenses will mean effortless, small and light 3D rigs that don’t even look like they contain two cameras or lenses. And to top everything off, image capture will happen with inbuilt retinal rivalry correction and will be dual-stream compatible all the way down the pipeline without you even noticing it.

Will you still be excited about 3D in this future? Well, no – because it will be a normal, everyday, run-of-the-mill format – unless we shoot extraordinary content in it and produce visual stories that have never been seen or experienced before. And although Avatar may be breaking new ground in terms of VFX and live-action integration, it is its story and characters that are going to determine whether we will want to see more science-fiction films about giant blue aliens in 3D.

Increasing frame rates of liquid crystal displays

by Adrian Travis

After completing his PhD on fiber-optic waveguides in 1987 at Cambridge University, Adrian Travis started working on displays which use lenses and fast-switching liquid crystals to synthesize a three dimensional image. The focal depths of the lenses made these displays bulky, so he created a way of scanning illumination from a light-guide, which evolved into a virtual image display. Next came a wedge-shaped waveguide which projects real images, but may also perform many of the other functions of a lens. Travis remains a fellow of Clare College but in autumn 2007, left his lectureship at Cambridge to work for Microsoft in Seattle.

A simple way to get 3D is to replace the backlight of a liquid crystal display with a lens and then repeatedly scan a spot source of light in the focal plane of that lens. The lens and scanning spot source act like a flashlight whose illumination makes the image on the liquid crystal panel visible from one direction at a time; if the sequence is repeated sufficiently quickly, it is possible to multiplex an autostereoscopic image.

The number of views in the autostereoscopic image is determined by the frame rate of the liquid crystal display, and conventional nematic liquid crystals switch too slowly to be useful. Ferroelectric liquid crystals switch much more quickly, and switching times of 50 microseconds are feasible at conventional drive voltages. But the effect is bistable, so either resolution or frames must be sacrificed in order to multiplex a grey scale. It does not help that the 1.5 micron cell gap of a ferroelectric cell requires more stringent cleanliness than the 3 micron cell gap of a nematic cell, so ferroelectric devices have tended to stay within the research community – with the honorable exception of Displaytech's liquid-crystal-on-silicon device. It is because the transistors are of crystalline silicon and the dimensions are small that liquid-crystal-on-silicon devices can deliver high frame rates; larger displays use transistors of lower mobility with longer addressing lines, so the RC time constant can be what constrains their frame rate. That is why the recent research activity in the area of high frame-rate liquid crystal displays is particularly interesting.
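A back-of-envelope calculation (my own illustrative numbers, not from the article) shows why the switching speed matters: the panel must display every view once per refresh cycle, so the required frame rate is the number of views multiplied by the per-view refresh rate:

# Illustrative arithmetic (assumed figures): the frame rate needed to
# time-multiplex N views, each refreshed often enough to avoid flicker.
views = 16                           # assumed number of viewing directions
per_view_hz = 60                     # assumed flicker-free refresh per view
panel_hz = views * per_view_hz       # 960 Hz panel frame rate required
frame_budget_us = 1e6 / panel_hz     # ~1042 microseconds per frame
fe_switch_us = 50                    # ferroelectric switching time from the text
print(f"{panel_hz} Hz panel, ~{frame_budget_us:.0f} us per frame, "
      f"{fe_switch_us} us LC switching")

A 50-microsecond switch fits comfortably inside a roughly one-millisecond frame; a nematic response measured in milliseconds would not.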

Liquid crystal displays with higher frame rates are sought for several reasons, but perhaps the most important is the effort to eliminate the smear which appears to follow moving images, such as footballs, as they cross the screen. One way to reduce smear is to address the liquid crystal display in half the normal time, then switch on the backlight for whatever time remains once all the pixels have settled. This pulsing of the backlight can cause perceptible flicker, so it helps to combine the scheme with a doubling of the frame rate to 120 Hz, eliminating all perception of flicker and reducing smear still further. Samsung, who achieved this recently with their 240 Hz liquid crystal display, very much deserved the prize they were awarded.
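The timing of this address-then-flash scheme is easy to sketch; the figures below are my own arithmetic following the description above:

# Illustrative timing (assumed values) for the address-then-flash scheme.
frame_hz = 120
frame_ms = 1000 / frame_hz          # ~8.33 ms per frame
address_ms = frame_ms / 2           # panel addressed in half the frame time
flash_ms = frame_ms - address_ms    # backlight pulsed once pixels settle
print(f"{frame_ms:.2f} ms frame = {address_ms:.2f} ms addressing "
      f"+ {flash_ms:.2f} ms backlight flash")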

The other reasons for increasing frame rate include the time-multiplexing of binocular stereo 3D (which Samsung may also have had in mind), the time-multiplexing of color, and the wish to totally eliminate artifacts caused by visual saccades. The latter can require frame rates of at least 1 kHz, and a paper at SID 09 (N. Koshida, Y. Dogen, E. Imaizumi, A. Nakano and A. Mochizuki, “An Over 500-Hz Frame-Rate-Drivable PSS-LCD: Its Basic Performance”, SID 09, paper 45.2) announced a matrix-addressed display with frame rates of the order of 1 kHz, which is also a considerable achievement.

It is difficult as an observer to be sure of the conditions under which such high frame rates are viable – for example, some liquid crystal effects switch off more slowly than they switch on, which can limit applicability – but it is interesting to watch the trend of gradually pushing for higher frame rates, and it would be good news for the 3D community if this trend were to push a larger fraction of the industry towards fast-switching materials such as ferroelectrics.


Showing 3D at trade shows and conferences AKA plugging a computer to a DCI projector

by Bernard Mendiburu

Bernard Mendiburu is a stereographer and digital cinema consultant working with feature animation studios in Los Angeles; his credits include “Meet The Robinsons” and “Monsters vs. Aliens”. He recently published “3D Movie Making: Stereoscopic Digital Cinema from Script to Screen” with Focal Press. His lectures and workshops on 3D cinema have been selected by Laika and CalArts' Experimental Animation program. In 2009, Bernard presented a paper on 3D workflows at the SPIE Stereoscopic Displays and Applications conference and at the NAB Digital Cinema Summit. He gave the two-day workshop on 3D post-production at Dimension3 in Paris. He recently joined the 3D@Home Consortium's Advisory Committee on 3D Quality and was an active member of the SMPTE 3D Task Force.

This year, I have been invited to present on 3D at a couple of conferences, including big events like NAB and IBC, and I was the presenters' contact person for 3D projection at Dimension3 in Paris. Since I saw the situation from both sides of the fence – as user and as provider of the 3D projection system – I can tell how bad the situation is. And this was just one occurrence of the ongoing issue of being able to feed a DCI-compliant projector from a regular computer.

Digital cinema projectors have a flock of inputs, most likely pairs of DVI and HD-SDI that act as left and right inputs. Some of them even accept active stereo on a single DVI, if you dig deep into the menus leading to some easter-egged “undocumented features”. At least, that's the case until the projector gets configured and certified as DCI-compliant. My experience is that one cannot use the DVI inputs of a 3D projector.

My first encounter with the issue was when I showed up at NAB to talk about 3D with something as unimaginable and unexpected as a 3D PowerPoint. The only thing the projection system would ingest was a DCP or a Sony SRW tape. Since I don't stock any of those in my basement, I was out of luck. Don't even think about providing JPEG 2000 still frames and expecting to be allowed to slide-show them: you need to make a 25-minute movie if you are to talk for 25 minutes. I really did not feel like speaking in front of such a large audience following a fixed-time slide show... If I had been willing to, I would have had to create HD or 2K frames: my 25-minute PowerPoint would then have become 450GB of 48fps RGB TGAs before being ingested into the DCP encoder, which would take many hours to convert them. I needed to find a terabyte disk for what used to fit on my thumb drive... I eventually presented 3D storytelling in 2D, with anaglyph images for the happy few in the audience who happened to have appropriate glasses.
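The 450GB figure is easy to sanity-check. My back-of-envelope arithmetic below assumes full-container 2K frames and uncompressed 8-bit RGB; the exact total depends on the frame geometry chosen:

# Sanity-checking the 450GB figure (assumed frame geometry).
minutes = 25
fps = 48                            # 24 fps per eye, left + right streams
frames = minutes * 60 * fps         # 72,000 frames
bytes_per_frame = 2048 * 1080 * 3   # uncompressed 8-bit RGB, 2K container
total_gb = frames * bytes_per_frame / 1e9
print(f"{frames} frames, ~{total_gb:.0f} GB")   # ~478 GB, the order of the quoted 450GB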

I came back from NAB with the firm intent of finding a way to feed a 3D projector from my PC. The canonical solution is to use SDI-capable QuadroFX cards: you need a couple of them, a genlocking add-on, and a giant motherboard with a high-end chipset to provide the adequate number of PCI slots. Who wants to buy a $20-30K workstation to do a 3D slideshow? That's the reason why many of us have a Matrox DualHead2Go and use it flawlessly for passive stereo. Otherwise, dual-DVI video cards are the norm, and they fit easily in a transportable shuttle computer. More about this later.

My second encounter with the issue was from the projection booth at Dimension3, the great French conference on 3D hosted in Paris early June (www.dimension3-expo.com) [yes, I'm shamelessly promoting what is arguably the best 3D event in the galaxy]. I was in charge of making sure that all the 3D content would find its way to the screen. Most presenters came with DCPs of trailers or work-in-progress. As expected, we had three last-minute requests for non-DCP 3D content: one full 3D presentation, one 3D computer movie, and one 3D camera feed coming from an image-processing PC that corrected retinal disparities in real time. Fortunately we had planned for such surprises and identified a solution. There were not that many: the Gefen DVI to HD-SDI Pro Scaler, the Miranda DVI-Ramp2 and the Folsom Research ImagePRO-HD. The guys at Miranda France were so kind as to loan us their little marvel and we were all set... almost. The DVI-Ramp2 is a genuine piece of magic that will present itself to the PC, via the DVI data link, as whatever set of displays you want. It then extracts the pictures and outputs them over HD-SDI, with the same format versatility. Set it up as two pairs of 1080-24p displays on both input and output and you are done. Perfect sync, flawless picture, full desktop on the theater screen.

To be perfectly honest, I should mention that the graphics card drivers did every possible thing to make things impossible and nearly drove me mad. I really wonder why, when I set up a display at 1920x1080, it is set to 960x1080 with a 2:1 pixel aspect ratio – obviously without any warning, dialog box, or explanation. The “Vista” versions of the QuadroFX drivers are such a step down in quality from the “XP” ones – and knowing how buggy the latter were, that is quite an achievement. At least the “XP” drivers showed the available display settings in one logical layout, and did what they were asked. I was told that the issue was documented and that the next generation of drivers would behave more like the “XP” ones. At this point, all I expect is a good 3D offer from Intel or ATI/AMD that will drive some competition in the field.

Let's get back to our DCI projector issue. The trouble seems to come from the fine print of the DCI specification. By requirement, the DVI inputs have to be deactivated on the projector, and my understanding is that this is part of the anti-copy protection. The other possible reason is related to the financing scheme that supports the digital cinema deployment: basically, the studios pay for the equipment, get a loan for it, and get their money back from the virtual print fee paid by the theater. If there were any chance of using the projector by hooking a regular PC to it, that would open the road to alternative uses that do not include any kick-back fee to the studios. This makes perfect business sense. But when the business and the user experience collide, who wins?

I was told by a digital projectionist that they share among themselves the cheat codes implemented at the factory to re-activate DVI inputs when they are in trouble and need to feed a picture to the projector from their laptops for testing and maintenance purposes. As you can imagine, the very same limitation is implemented in the Linux PCs that boast a “DCI Player” sticker. One should not hope to run Open Office on these computers, even if I once saw a projectionist playing a patience game on one – and I guess that was another Easter egg, for the GUI was primitive. I did not push my enquiries too far, for the limit between asking questions and reverse engineering is not the cup of tea of my legal counsel. Anyway, I would be extremely surprised if the Linux desktop were even accessible: that would be an obvious security breach. Root password, anyone?

Eventually, the best DVI-to-SDI 3D converter box is expected to come from Doremi, the world leader in DCI servers. It should input any 3D format (anamorphic, checkerboard, active, passive) and spit out the perfect SDI signal your DCI projector wants. I can't wait to see that marvel in action next month at IBC. One last request: due to its very design, the DualHead2Go generates a half-line delay between the left and right images. Could you please, guys, compensate for this in your many converter boxes?
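For a sense of scale, here is my own rough estimate of that half-line offset, assuming standard 1080p timing with about 1125 total lines per frame (the figures are mine, not from the article):

# Rough magnitude of the DualHead2Go's half-line offset (assumed timing).
fps = 60
total_lines = 1125                      # typical total line count for 1080p
line_us = 1e6 / (fps * total_lines)     # ~14.8 us per line
print(f"one line ~ {line_us:.1f} us; "
      f"half-line delay ~ {line_us / 2:.1f} us")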

One last consideration as a conclusion. We started this story with a revolutionary product, the DC projector. Then we met its downgraded twin, the DCI projector, which cannot handle the computer-ubiquitous DVI 3D signal. Filmmakers, artists, presenters and conference hosts facing this issue looked for a solution. I first asked about this at the Gefen booth at NAB 2007. Eventually, $3-6K boxes showed up and virtually re-activated the DVI input. Where are we now? In an Ubu-esque situation that will likely endure. For many years, we'll look at the webs of cables and stacks of converters in 3D projection booths knowing that all this was such a useless loss of time and money. In the meantime, the audience keeps downloading movies.


The Display Week 2010 Symposium introduces four Special Topics of Interest to address the rapid growth of the field of information display in the following areas: Touch, 3D, Lighting, and Green technologies. Submissions relating to these topics are highly encouraged.

Special Topic on Touch: Touch-screen technologies, displays, systems, subsystems, and applications.

Special Topic on 3D: Display technologies for enabling depth perception in viewers, applications for 3D displays, measurement and characterization of 3D systems, and their human factors.

Special Topic on Lighting: Advancements in the LED industry open up new opportunities to increase the perception of reality. Submissions on all aspects of lighting related to information display are encouraged.

Special Topic on Green Technologies: Submissions are encouraged on the topics of reducing energy and material consumption, reducing waste and emissions, and reducing the use of materials that are harmful to the environment or society.

See the 2010 First Call for Papers for complete details at www.sid2010.org.

Perceptual Paradoxes: A stereo-challenged journalist crashes the 3D party

by Ray Zone

Ray Zone is a 3D artist, film historian, and the author of “3-D Filmmakers: Conversations with Creators of Stereoscopic Motion Pictures” (Scarecrow Press: 2005). He recently completed a book on the origins of stereoscopic cinema to be published by the University Press of Kentucky in the Fall of 2007. Zone’s website: www.ray3dzone.com

Writing on Slate.com on April 2, 2009, in an article titled “The Problem With 3D” (http://www.slate.com/id/2215265/), internet journalist Daniel Engber pulled no punches in slamming the incipient digital 3D cinema revolution with a highly researched article. It addressed one of the classic perceptual paradoxes of stereoscopic viewing and projection: the issue of vergence and accommodation – the fact that focus must be decoupled from converging eye muscles with most stereoscopic displays, a potential source of eyestrain with 3D movies.
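The geometry behind that paradox can be stated compactly (a standard formulation, written here in my own notation): for a viewer with interocular separation $e$ seated at screen distance $D$, a point displayed with on-screen parallax $p$ (positive for points behind the screen plane) is converged at a distance

$$v = \frac{eD}{e - p},$$

while the eyes must stay focused at $D$ to keep the image sharp. The conflict, often expressed in diopters as $\left|\,1/v - 1/D\,\right|$, grows with parallax – which is precisely the stereoscopic “budget” that stereographers ration.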

Engber’s article appeared just one week into the opening theatrical run of the DreamWorks feature Monsters vs. Aliens, which was well on its way to becoming the highest-grossing 3D movie of all time after pulling in 33 million 3D dollars on its opening weekend. Engber even delivered the coup de grâce for 3D with the subtitle to his article, which admonishes: “It Hurts Your Eyes. Always has, always will.”

A Self-Styled Stereo Adversary: “Let me go on record with this now, while the 3D bubble is still inflating,” wrote Engber, “Katzenberg…and all the rest of them are wrong about three-dimensional film—wrong, wrong, wrong. I've seen just about every narrative movie in the current 3D crop, and every single one has caused me some degree of discomfort – ranging from minor eye soreness (Coraline) to intense nausea (My Bloody Valentine).”

In a previous article for Slate.com titled “I Heart 3D”, dated January 16, reviewing My Bloody Valentine and referenced within his April 2 article, Engber was a little more circumspect, even complimentary, about the use of 3D in the film. He notes that the use of the gasmask and headlamp by the villain in stereo adds a “stroke of gimmicky genius” as “an ominous beam of light extends into the audience” and that, slightly embarrassed, he found himself “ducking in the seat to avoid the pokes and splatters.” And he even reports that “there's still a rich and communal joy in having a dismembered jawbone come hurtling at the audience.”

These positive remarks, however, only serve to create a deceptive appearance of journalistic objectivity and “set up” a disheartening observation: “I watched My Bloody Valentine: 3D last night, and my eyes still feel sore. At one point during the movie, I nearly threw up.” Here is a man who is clearly challenged by stereoscopic displays and yet who insists on seeking them out (a thumbnail photo of Engber on the Slate.com site shows him wearing glasses).

[Figure: Convergence/accommodation in stereoscopic projection – the eyes focused on the screen while converging on the sign post, a condition which could lead to eye strain if too extreme. (Dewhurst: 1954)]

Ironically, both Coraline and My Bloody Valentine make a very conservative and minimal use of parallax with the stereoscopic “budget” or “real estate”. Max Penner, the Paradise FX 3D technician responsible for stereo parameters on My Bloody Valentine, informed me that the average interaxial or inter-ocular distance (IO) between the twin cameras used for stereo cinematography was 0.75 inch, a very conservative number for live-action photography.

Coraline, with director of photography Pete Kozachik at the helm, was produced with minimal parallax values and very precisely controlled IO ranging from 1-3 millimeters in close-ups to 3-10 millimeters in the wide shots. If these 3D movies, with their beautifully controlled IO values, make Engber nauseous, it is perhaps he, rather than the film-going public, who should avoid stereoscopic displays.

Yet oddly enough, in his April 2 article, Engber insists that “As much as it pains me to say this – I love 3D, I really do – these films are unpleasant to watch.” In view of his nausea, Engber’s insistence on his affection for 3D movies proves entirely baffling. His jeremiad consigns today’s digital 3D cinema to gimmick status as yet another reappearance of a fad, dismissing it as a novelty without enduring appeal in the cinematic legacy. “It's happened before and it will happen again,” he concludes. “At some point soon, 3D cinema will regain its well-earned status as a sublime and ridiculous headache.”

[Figure: The esotropic condition (Dewhurst: 1954)]

Some Intelligent Online Responses: On the [email protected] list, which includes many professional stereographers who have worked on some of the current 3D movies, there was a variety of thoughtful responses to Engber’s article.

“I must say that's, technically, the best piece on 3D so far,” wrote Bernard Mendiburu (Meet the Robinsons, Monsters vs. Aliens). “Can anyone find something factually wrong in this paper? Besides its conclusion, obviously.”

Mendiburu acknowledged that “Yes, the vergence/accommodation decorrelation is working against our natural reflex, and yes, 3D movies are more eye-straining than 2D. 3D is not free, not even cheap.” But Mendiburu summarizes with the observation that it is the job of the stereographer to make the “reward (more engaging images)” supersede the perceptual “cost” of viewing 3D films.

“I would say that when he implies that watching 3D causes vision impairment,” responded Eric Deren, “THAT is something factually wrong with this paper.” Deren is the creator of the 2008 NSA Award-winning stereoscopic video Sky Diving. “This article was written by someone who is in the minority of people who feel visual stress when stereoscopic parameters are still well within what is generally identified as being ‘comfortable’ for the majority,” observed Deren. “He believes he is speaking for the majority, but regardless of how much research he has done, he is just misinformed about what the majority of viewers' eyes can comfortably handle.”

[Photo: 5-year-old boy (left) exhibiting esotropia of the right eye. The photo was taken by the boy's mother shortly after the young boy had viewed the anaglyphic cartoon movie.]


Steven McQuinn offered an analysis of Engber’s use of rhetoric in the Slate.com article. “This is propaganda writing,” notes McQuinn, “employing tropes familiar to anyone who has crafted a corporate denial, a speech, a diatribe, or a clever piece of journalistic innuendo masquerading as a public service warning.” Acknowledging the platform from which it was published, McQuinn adds that “This is Slate, which specializes in sensational articles and pseudo-intellectual contrarian positions, the national online counter-culture weekly for readers who need to feel smugly smarter than everyone else without having to make much of an effort.”

Award-winning stereographer Boris Starosta responded reflectively that “It's a good article, and it will be good for the 3d cinema that it has been published. I hope people involved in the 3d cinema read it and do not dismiss it. To reduce the human factors ‘costs’ of the stereoscopic cinema, the product will have to be very carefully produced.” Starosta concludes that “In his own way, perhaps ironically, Engber has made a contribution towards a longer-lived stereo-cinematic renaissance this time around.”

[Figure: The anaglyphic 3D cartoon movie which generated esotropia (as reported by Tsukuda and Murai, 1988)]

Science Buttressing the Attack: One of Engber's most troubling assertions in the article states that “There's already been one published case study, from the late-1980s, of a 5-year-old child in Japan who became permanently cross-eyed after viewing an anaglyph 3D movie at a theater.” The two hot links in this assertion take one to Wikipedia pages on “esotropia” and “anaglyph”, and not to the real sources from which Engber got his information and to which he could have given attribution. A Google search (with assistance from Eric Kurland and Andrew Woods) and an email query to Engber confirmed that he got the information from the following two papers published by Kazuhiko Ukai of the School of Science and Engineering at Waseda University, Japan:

Kazuhiko Ukai (School of Science and Engineering, Waseda University), “Human Factors for Stereoscopic Images,” ICME 2006, pp. 1697-1700. http://www.ray3dzone.com/Ukai2006.pdf

Kazuhiko Ukai and Peter A. Howarth, “Visual Fatigue Caused by Viewing Stereoscopic Motion Images: Background, Theories, and Observations” Displays 29 (2008). http://www.ray3dzone.com/Ukai2008.pdf

Both of these articles by Ukai cite and discuss the following paper by Tsukuda and Murai, which provides the account of the child who became cross-eyed from viewing anaglyph: Shoichi Tsukuda and Yasuichi Murai (School of Orthoptics, National Osaka Hospital), “A case report of manifest esotropia after viewing anaglyph stereoscopic movie,” Japanese Orthoptic Journal, vol. 18, pp. 69-72, 1988. A PDF of this paper, in Japanese, can be downloaded from my website at http://www.ray3dzone.com/Tsukuda.pdf. An email from Ukai confirmed that no English translation of the Tsukuda and Murai paper exists.

In writing about the case, Ukai and Howarth observe that “Some ophthalmologists remain concerned that viewing stereoscopic images may cause strabismus in young children. Strabismus is an abnormality in binocular alignment that is usually congenital. It is influenced by accommodation and vergence. There is no evidence for or against the hypothesis that viewing stereoscopic images causes strabismus, except for a report by Tsukuda and Murai. They reported one case of a child of four years and 11 months who manifested esotropia after viewing stereoscopic animation at a cinema using an anaglyph.”


They add that “Photographs of the boy taken by his mother before and after viewing the stereoscopic movie helped in the diagnosis since the onset of the deviation of the eye can be clarified.” Glasses were prescribed for the boy, who continued to wear them, but the esotropia remained unchanged. After strabismic surgery on the deviating eye, “the patient kept orthophoria and binocular vision” in a condition that was described as “almost normal.”

Ukai and Howarth conclude that “Viewers should be careful to avoid viewing stereoscopic images for extended durations because visual fatigue might be accumulated. They should be ready to stop immediately if fusion difficulties are experienced.” They add that "Children should be cautioned about stereoscopic images because they may not subjectively perceive a problem even if an eye is deviated. Although there is little evidence that viewing stereoscopic images causes irreversible damage to health, there is also no evidence that contradicts this contention.”

After reviewing his papers I sent Mr. Ukai an email asking the following questions:

 Is the case documented by Tsukuda and Murai still the only documented case of strabismus caused by viewing stereoscopic images?  Was this case a result of a congenital condition?  What metrics, if any, were used to measure the worsened strabismic condition after stereoscopic viewing?  Do you think the strabismus caused by viewing stereoscopic images was only reversible with strabismic surgery?  Or do you think the condition could have been reversed by standard eye training exercises and therapy?"

“Clearly their case was supposed to be caused by the stereo movie,” responded Ukai. “Many pictures were taken by the patient's mother. Photographs proved that the strabismus was caused on the day, or one day before the day, of watching the stereo movie. The patient may have a congenital factor, such as weak phoriazation. The stereo movie may be a trigger. Sometimes we know of adult cases who became intermittently exotropic after one-eye occlusion; the mechanism is similar, I suppose. However, the patient could have recovered spontaneously, as in many other cases. But this cannot be proved. The authors waited a certain period. Is it enough? I have no answer.”

It seems a somewhat equivocal response despite the apparent clinical rigor of the evidence. Was this an isolated case? It seems so. In email correspondence, Martin Banks, a vision scientist who has done extensive research into vergence and accommodation, pointed out that “It's worth noting that a small percentage of children develop esotropia when they start reading. This is referred to as accommodative esotropia. It's triggered by accommodating to sharpen the image of the text. Ironically, the onset age for accommodative esotropia is the early school years, which includes 5 years of age.”

The Author’s Diagnosis: It has long been a tenet among 3D filmmakers to “first do no harm,” an attitude which shows real sensitivity to the limitations of the safe binocular viewing zone mediating convergence and focus. It is a perception that recognizes the dangers of extreme relative parallax and uncontrolled interaxial values. More than ever, stereographers like Phil McNally at DreamWorks, Max Penner at Paradise FX, Rob Engle at Sony Pictures Imageworks, Brian Gardner with Coraline and, of course, Lenny Lipton are having some say in shaping 3D movies that are easier to view and minimize eye strain.

Digital toolsets for stereoscopic production are also giving greater control to 3D storytellers. Though visually- challenged naysayers may still be heard from, the republic of stereo-hungry moviegoers is speaking loud and clear with its demand for 3D films and new forms of stereoscopic narrative in motion pictures.


The Truth about 3D TV: Questions…

by Lenny Lipton

Lenny Lipton is recognized as the father of the electronic stereoscopic display industry; he invented and perfected the current state-of-the-art 3D technologies that enable today's leading filmmakers to finally realize the dream of bringing their feature films to the big screen in crisp, vivid, full cinematic-quality 3D. Lenny Lipton became a Fellow of the Society of Motion Picture and Television Engineers in 2008, and along with Peter Anderson he is the co-chair of the American Society of Cinematographers' Technology Committee subcommittee studying stereoscopic cinematography. He was the chief technology officer at Real D for several years, after founding StereoGraphics Corporation in 1980. He has been granted more than 30 patents in the area of stereoscopic displays. The image of Lenny is a self-portrait, recently done in oil.

• Can stereoscopic TV gain a foothold in the midst of a world-wide economic catastrophe?

• Can stereoscopic TV make penetration advances before HD TV becomes fully established? By that I mean when half of the homes that have TVs have HD TV.

• Does a stereoscopic TV service need to have a signal that plays in 2D on existing digital receivers?

• Will most programming be any more interesting in 3D than in 2D?

• Will people put up with wearing 3D eyewear?

• Will people be willing to pay more money for 3D TV hardware – not just the set but whatever accessories are required?

• Is 3D gaming the killer app for home 3D viewing?

• Does the Philips withdrawal from autostereo put the kibosh on autostereo?

• How will the decline of hard media and the rise of on-demand and IP TV influence the future of 3D TV?

• What economic factors drive the introduction of 3D TV? Which manufacturers will make out the best?

• How will the rise of non-TV-set viewing of programming influence the introduction of stereoscopic TV?

• Is there a market for snapshot and camcorder 3D products?

• Will we see various stereoscopic PC applications?

• Are there significant perceptual issues with the viewing of 3D TV images? Will people find viewing to be comfortable?

• Despite the fact that there may be a couple of million DLP TVs that are so-called stereo-ready, practically nobody is using them for home 3D viewing. Is this installed base a sign of trouble or a sign of hope?

• Is it truly possible for this medium to gain traction with no content delivery format or selection device standards?

• How will the introduction of 3D TV alter programming? Will it be the same old same old, or will there be some kind of an advance in creativity sparked by the new modality?

• TV is mostly recycled content. Does that mean that this vast library will have to be converted to 3D, and is the conversion technology up to the task?

Journal of the Society for Information Display – Call for Papers on 3-D/2-D Switchable and Multiview Display Technologies, to be published in a Special Section of the Journal of the SID

The Journal of the SID is planning a Special Section to be published during the fourth quarter of 2010. We are soliciting original contributed papers describing advances in 3-D/2-D switchable and multiview display technologies. Suggested topical areas include:

• 3-D/2-D switchable display technologies
• Electrically controllable lenses for 3-D/2-D displays
• 3-D image generation from 2-D images
• Multiview display technologies
• Crosstalk in multiview display systems
• Moiré reduction in multiview display systems
• Multiview image generation from two-view images
• 3-D displays with full-panel resolution
• Camera-array systems
• Human factors of 3-D displays
• 3-D image coding

Guest Editors for this Special Section will be Prof. Byoungho Lee, Seoul National University, Korea; Prof. Yasuhiro Takaki, Tokyo University of Agriculture and Technology; and Dr. Chao-Hsu Tsai, Industrial Technology Research Institute, Taiwan. Authors should submit their complete manuscripts online to the Journal of the SID by following the instructions listed under the Information for Authors tab on the JSID web page (http://sid.aip.org/jsid), identify the manuscript as one submitted for the Special Section on 3-D/2-D Switchable and Multiview Display Technologies, and select Prof. Byoungho Lee as the guest editor. The Information for Authors document provides a complete set of guidelines for the preparation and submission of manuscripts. The deadline for the submission of manuscripts is March 1, 2010. All inquiries should be addressed to Prof. Byoungho Lee at [email protected]

Last Word: Half a century of stereoscopic viewing…

by Mike Cook

Mike Cook graduated with a B.Sc. (1963) and M.Sc. (1966) in Electronics from Southampton University, England. During his career, he designed:

• Electronics systems for 747, SST, and 737 airplanes at Boeing.
• Custom integrated circuits for calculators, terminals, and many other systems at Fairchild Semiconductor in Mountain View, California, and Wiesbaden, Germany.
• Integrated circuits for calculators, and electronics for terminals and disk drives, at HP.
• Workstations at Corvus Systems.
• LaserWriter controller electronics for Apple Computer.
• Inkjet print head electronics for Topaz printers.
• Inkjet print head testing equipment for Inkjet Technology.

Now retired, he spends time with grandchildren, travelling, and singing and recording with several community choirs.

I got fascinated by stereoscopic viewing when I saw some displays at the Science Museum in London, when I was in the equivalent of junior high school. At the time I was not able to do much about it, but later I bought a German camera which produced almost full-size stereo pairs on 35mm film. It did this by advancing one frame on one film advance and three frames on the next. Eventually I replaced it with a more conventional camera which advanced the film the same amount each time. Unfortunately, this meant that the frames were narrower, which is not a happy thing for stereoscopic images.
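As a small aside for the curious, the uneven advance makes sense if one assumes (the article does not say) that the camera's two lenses exposed frames two positions apart on the film. The Python sketch below simulates that hypothetical layout and shows that alternating advances of one and three frames tile the film with stereo pairs, exposing every frame exactly once:

# Hypothetical simulation of a stereo camera whose two lenses expose
# frames two positions apart, with the film advanced alternately by one
# frame and by three frames, as described above. The two-frame lens
# spacing is an assumption made for illustration.

def exposure_pattern(num_pairs: int, lens_gap: int = 2) -> list[tuple[int, int]]:
    """Return the (left, right) frame positions for each shutter release."""
    pairs = []
    position = 0                    # frame currently under the left lens
    for i in range(num_pairs):
        pairs.append((position, position + lens_gap))
        position += 1 if i % 2 == 0 else 3   # alternate 1-frame and 3-frame advances
    return pairs

if __name__ == "__main__":
    pairs = exposure_pattern(6)
    print(pairs)   # [(0, 2), (1, 3), (4, 6), (5, 7), (8, 10), (9, 11)]
    exposed = sorted(frame for pair in pairs for frame in pair)
    assert exposed == list(range(12))   # no gaps, no double exposures

Under that assumption the uneven advance wastes no film at all, which would explain why the camera bothered with such an odd mechanism.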

I tried projecting these slides with a double projector onto a silvered screen, with crossed polarizers on the lenses and polarized glasses. The images were adequate, but too much light was lost at the screen compared with ordinary slide projection. A worse problem was sea sickness. With a single slide, one act of focusing is required; with a double projector, not only are two separate focusing actions needed, but the images must also be centered horizontally and vertically. The eyes try to track these motions, out of correlation with what the body is doing, causing motion sickness. Flipping a depolarizer in front of the lenses helped some, but eventually this system became too much trouble.

A partial solution was to use prisms on the front of a regular 35mm camera to produce a stereo pair in a single 35mm frame. With matching prisms in front of a standard slide projector, this removed one of the focusing problems and, for portrait format, the alignment problems as well. However, I preferred landscape format, which required prisms that rotated each image by 90°. This worked tolerably well, but the projector rarely inserted the slide in exactly the right place, which resulted in the image for one eye being raised and the other dropped: again a recipe for headaches and motion sickness… At that time, hand viewers were much easier to use than projection, but of course only one person can see them at a time.

It was interesting to find that the usual rules for making photographs were there to be broken in stereo photography. One did want something in the foreground of a landscape, splitting the photograph in a way that would be unfortunate in a one-eyed image. One could photograph objects or animals behind bars or in cages: the eyes ignored the foreground and looked at the object of interest behind the bars. A favorite photograph is of a petrified log behind a wire grill. Using one eye, all one notices is the grill; with two eyes, the grill essentially disappears. Similarly with a photograph of tadpoles in a somewhat scummy pond: with one eye, all one notices is the scum; with two, all one notices is the tadpoles. One could photograph a hedge or a tree. The one-eyed version looked like wallpaper, but with two eyes the picture had texture and depth. But the most satisfying photographs were those of people. Mounting independent stereo pairs was always a problem, since each slide had to be carefully aligned so that the pair was properly in place; otherwise eyestrain resulted.


Then came holograms. One found oneself trying to move around stereoscopic photographs in the way that one could with holograms, and being frustrated because the image moved with the viewer instead of staying still and allowing itself to be looked at from different angles. However, holograms were not a subject for casual photography.

Eventually color CRTs became sufficiently inexpensive that one could experiment with stereoscopic images at home. The best results were obtained with occulting (shutter) glasses, allowing alternating frames to be presented to one eye or the other. The frame rate had to be high enough to avoid flicker-induced headaches, and the brightness sufficient to overcome the losses from the glasses and the fact that each eye had an image presented to it for less than half the time. But the system was independent of size or resolution, and there was no need to hold the head in a particular position. A problem back then was that CRTs were bulbous, and the corners of the picture did not register very well for each eye.

A later solution was to use two flat-screen monitors. The "obvious" arrangement was to put them side by side and use a pair of mirrors for each eye to make the images appear in the appropriate place. Unfortunately, with reasonably sized screens the image was too far away. The alternative of having two smaller images side by side on one screen left the resolution uncomfortably low and the raster too visible. Another alternative was to face the monitors at each other, with only one mirror between each eye and its monitor. Images had to be flipped horizontally, but excellent stereo images of a reasonable size were possible. Unfortunately, one had to place one's head carefully between the monitors, which ended up being uncomfortable and inconvenient.
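To put rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python; the shutter transmission and duty-cycle values are assumptions chosen for illustration, not figures from the article:

# Back-of-the-envelope arithmetic for frame-sequential (shutter-glasses)
# stereo as described above. All parameter values are illustrative
# assumptions.

def per_eye_rate_hz(display_hz: float) -> float:
    """Each eye sees only every other frame, so its refresh rate is halved."""
    return display_hz / 2.0

def per_eye_brightness(display_nits: float,
                       shutter_transmission: float = 0.3,
                       open_duty_cycle: float = 0.45) -> float:
    """Time-averaged luminance reaching one eye.

    shutter_transmission: fraction of light passed by an open LC shutter
    open_duty_cycle: fraction of time the shutter is open for this eye;
                     below 0.5 because of switching and blanking time
    """
    return display_nits * shutter_transmission * open_duty_cycle

if __name__ == "__main__":
    for hz in (60, 100, 120):
        print(f"{hz:>3} Hz display -> {per_eye_rate_hz(hz):>4.0f} Hz per eye")
    print(f"300-nit display -> about {per_eye_brightness(300.0):.0f} nits per eye")

A 60Hz display leaves each eye at 30Hz, well into flicker territory on a CRT, and under these assumed figures the glasses throw away the bulk of the light, which is consistent with the need described above for a high frame rate and a bright picture.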

I tried some experiments with putting lenticular screens on the front of monitors, but alignment was a big problem; similarly with prints that had lenticular screens applied to them. Some people can look cross-eyed at two images, see the stereoscopic image in the middle, and ignore the two unnecessary images to the sides, but I found this too disturbing. Similar trials with anaglyphic images and glasses with colored lenses were also unsatisfactory. A ray of hope was wearable displays, with one independent display for each eye. So far I have not found a display sufficiently free of internal reflections, and of adequate resolution, to be useful. I understand that the military and surgeons can obtain such displays, but they are far out of my price range.

Perhaps the least expensive way to make a reasonable stereoscopic display today would be to make an analog of the old-fashioned stereo viewer, replacing the cardboard-mounted images with a pair of iPod Touches. These iPods have excellent resolution and brightness, and are a convenient size. I am considering trying out a pair of these to make something that a medical student could afford, to display the motion of all the joints of the body. Various professionals, for example dancers, artists, and masseurs, need to have an appreciation of the range of action of all the joints, which is difficult to visualize under the skin. There are plenty of two-dimensional books, none of which presents a very understandable image. Of course, given an iPod, one can also make a "fly around" of a 3D object, reducing the need for stereoscopic images.

Commercially, but at prices far above what a home user or student can afford, several monitor-and-glasses sets, and even glasses-free displays, are available. But to me the most impressive are the stereoscopic IMAX movies, even though they require polarized glasses to be worn.

>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

Veritas et Visus (Truth and Vision) publishes a family of specialty newsletters about the displays industry:

Flexible Substrate • Display Standard • 3rd Dimension • High Resolution • Touch Panel

http://www.veritasetvisus.com


Display Industry Calendar

A much more complete version of this calendar is located at: http://www.veritasetvisus.com/industry_calendar.htm. Please notify [email protected] to have your future events included in the listing.

September 2009

September 1 Digital Signage 2009 San Jose, California

September 1-5 HCI 2009 Cambridge, England

September 2 TV Conference 2009 San Jose, California

September 3 UK Plastic Electronics Development Showcase Birmingham, England

September 3 Touch Conference 2009/Emerging Technology Showcase 2009 San Jose, California

September 3-4 China FPD Shanghai, China

September 4-9 IFA 2009 Berlin, Germany

September 4-9 International Symposium on Wearable Computers Linz, Austria

September 6-9 China International Optoelectronics Expo Shenzhen, China

September 7-10 Foundation in Displays Dundee, Scotland

September 8-11 electronicIndia Bangalore, India

September 9 EL Workshop Swansea, Wales

September 9-13 CEDIA Expo 2009 Atlanta, Georgia

September 9-14 International Stereoscopic Union Congress Gmunden, Austria

September 11-13 Taitronics India 2009 Chennai, India

September 11-15 IBC 2009 Amsterdam, Netherlands

September 13-16 PLASA '09 London, England

September 14-17 Eurodisplay Rome, Italy

September 16-17 3D Entertainment Summit Los Angeles, California

September 17 Successful Adoption of Current Generation Printed Electronic Devices London, England

September 20-25 International Conference on Digital Printing Technologies Louisville, Kentucky


September 20-25 Digital Fabrication 2009 Louisville, Kentucky

September 22-25 International Conference on Microelectronics and Plasma Technology Pusan, Korea

September 28-30 Organic Semiconductor Conference 2009 London, England

September 29-30 RFID Europe Cambridge, England

September 28 - October 1 Liquid Crystal Displays Oxford, England

September 29 - October 3 World Summit 2009 San Francisco, California

September 30 - October 1 Printed Electronics Asia Tokyo, Japan

September 30 - October 2 Semicon Taiwan 2009 Taipei, Taiwan

September 30 - October 2 Symposium on Applied Perception in Graphics and Visualization Chania, Crete, Greece

October 2009

October 4-7 Symposium on User Interface Software and Technology Victoria, British Columbia

October 4-8 Annual Meeting of the IEEE Photonics Society Belek-Antalya, Turkey

October 6-7 TV 3.0 Summit and Expo Beverly Hills, California

October 6-8 Semicon Europa 2009 Dresden, Germany

October 6-10 CEATEC Japan 2009 Tokyo, Japan

October 6-11 CeBIT Bilisim EurAsia Istanbul, Turkey

October 7-8 Displays Technology South Reading, England

October 7-10 ASID'09 Guangzhou, China

October 8-11 Taipei Int'l Electronics Autumn Show Taipei, Taiwan

October 12-13 Lighting Korea Seoul, Korea

October 12-13 Workshop on the Impact of Pen-based Technology on Education Blacksburg, Virginia

October 12-16 International Meeting on Information Display Seoul, Korea

October 13-14 Asian Solar/PV Summit Seoul, Korea

October 13-15 Image Sensors San Diego, California

October 13-16 ElectronicAsia 2009 Hong Kong, China

October 15-16 3D Media Workshop Berlin, Germany

October 15-16 Symposium on Vehicle Displays Dearborn, Michigan


October 18-21 AIMCAL Fall Technical Conference Amelia Island, Florida

October 19-22 SATIS 2009 Paris, France

October 20-22 LEDs 2009 San Diego, California

October 21-23 Integrated Systems Russia Moscow, Russia

October 26-29 Showeast Orlando, Florida

October 27 Green Display Expo Washington, D.C.

October 27 Smart Textiles 2009 Dresden, Germany

October 27 Printed Silicon and Hybrids 2009 Dresden, Germany

October 27-29 Plastic Electronics 2009 Dresden, Germany

October 27-29 Solar Power International Anaheim, California

October 27-29 SMPTE 2009 Hollywood, California

October 28-30 FPD International Yokohama, Japan

November 2009

November 3-5 International Workshop on 3D Geo-Information Ghent, Belgium

November 4-5 HD Expo Burbank, California

November 5-6 Workshop on Virtual Reality Interaction and Physical Simulation Karlsruhe, Germany

November 5-7 Viscom Milan, Italy

November 9-10 It's Not Easy Being Green Irvine, California

November 9-13 Color Imaging Conference 2009 Albuquerque, New Mexico

November 10-11 Digital Signage Show 2009 New York, New York

November 13 Taiwan TV Supply Chain Conference Taipei, Taiwan

November 16-18 International Workshop on Flexible and Stretchable Electronics Ghent, Belgium

November 16-19 Latin Display São Paulo, Brazil

November 16-21 FPD & LED Expo 2009 Shenzhen, China

November 23-25 Tabletops and Interactive Surfaces Banff, Canada

November 26-28 China International Touch Screen Exhibition & Seminar Shenzhen, China

November 30 - December 2 International Symposium on Visual Computing Las Vegas, Nevada


December 2009

December 2-3 Forum 'be-flexible' Munich, Germany

December 2-3 Printed Electronics US San Jose, California

December 2-4 SEMICON Japan Tokyo, Japan

December 6-10 International Conference on Organic Solar Cells Cairns, Australia

December 8-10 CineAsia Macau, China

December 9-11 International Display Workshops Miyazaki, Japan

December 14-17 Optics for Displays Cambridge, England

December 16-19 SIGGRAPH Asia Yokohama, Japan
