
3rd Dimension Veritas et Visus March 2010 Vol 5 No 3/4

Organovo, p33 Smart Holograms, p36 MIT, p55 Trinity College, p78

Letter from the publisher: Tidal Waves and Illusions… by Mark Fihn 2

News from around the world 10

Conference Summaries: 29

• Prime Time in Ottawa, February 18-19, 2010, Ottawa, Ontario by Neil Schneider 45
• TEI, January 25-27, 2010, Cambridge, Massachusetts 48
• SIGGRAPH Asia, December 16-19, 2009, Yokohama, Japan 53
• VRCAI, December 14-15, 2009, Yokohama, Japan 58
• IDW ’09, December 9-11, 2009, Miyazaki, Japan 62
• IUCS, December 3-4, 2009, Tokyo, Japan 67
• VRST, November 18-20, 2009, Kyoto, Japan 71
• ACM Multimedia 2009, October 19-24, 2009, Beijing, China 77
• Eurodisplay 2009/IDRC, September 14-17, 2009, Rome, Italy 82
• 3DTV Conference, May 4-6, 2009, Potsdam, Germany 87

Collimated backlights by Adrian Travis

How can it be? 3D Illusion by Alan Stubbs

Display of Three-Dimensional Images: A Review of the Prior Art by Robert A. Connor

Thoughts on Avatar and 3D TV Adoption by Ross Young

Film-based 3D Pushes Forward by Aldo Cugnini

Last Word: The Oculus3D Projection System by Lenny Lipton

Display Industry Calendar 131

The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief: Mark Fihn [email protected]
Managing Editor: Phillip Hill [email protected]
Contributors: Robert A. Connor, Aldo Cugnini, Lenny Lipton, Neil Schneider, Alan Stubbs, Adrian Travis, and Ross Young

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate based on location and mailing method. Copyright 2010 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.

Tidal Waves and Illusions…

by Mark Fihn

The past couple of months have been rather gratifying for me, as many friends and industry observers have come to me and apologized for previously belittling me as being overly optimistic about the future of 3D displays. I have publicly predicted over the past five years, both in this newsletter and in public forums, that 3D technologies are coming, and coming much faster than anyone expects. (And I must admit that 3D is coming much faster than even my optimistic view of the technology ever predicted). I’ve backed up my enthusiasm by acquiring a 3D-ready DLP TV about 2-1/2 years ago, followed by a 3D monitor almost 2 years ago – both of which get more than a little use. I can’t tell you how many people have shaken their heads with disbelief and told me that 3D is just a fad that will fade quickly. Even well-known industry analysts have repeatedly downplayed 3D display technologies, with the major market researchers all but ignoring the growing market trends – up until very recently.

Well, let me be bold and predict that the “tidal wave” of enthusiasm for 3D is unlikely to change any time soon. I suspect that virtually 100% of all TVs sold in 2012 will be “3D-ready”. And I’m starting to suspect that autostereoscopic displays are going to invade our homes much faster than previously expected. (Most industry observers will still tell you the technology is “at least 10 years away”, but I’m betting we’ll start seeing very credible and commercially affordable autostereoscopic solutions within 5 years, or perhaps even less.)

Hollywood will certainly continue to lead the way. Here’s the latest summary of leading 3D movie grosses (US market only). Amazingly, Alice in Wonderland rose to #3 after just 10 days of release – running at a pace similar to the recent blockbuster Avatar. Note also that almost all of these 3D movies were released in the last year – signifying a huge surge in the format that shows no sign of slowing.

Rank  Title                               Studio  Lifetime Gross  Release Date
 1    Avatar                              Fox     $736,907,957    12/18/09
 2    Up                                  BV      $293,004,164    5/29/09
 3    Alice in Wonderland (2010)          BV      $265,433,637    3/5/10
 4    Monsters vs. Aliens                 P/DW    $198,351,526    3/27/09
 5    Ice Age: Dawn of the Dinosaurs      Fox     $196,573,705    7/1/09
 6    A Christmas Carol (2009)            BV      $137,855,863    11/6/09
 7    Chicken Little                      BV      $135,386,665    11/4/05
 8    Cloudy with a Chance of Meatballs   Sony    $124,870,275    9/18/09
 9    G-Force                             BV      $119,436,770    7/24/09
10    Bolt                                BV      $114,053,579    11/21/08

http://boxofficemojo.com/genres/chart/?id=3d.htm

As a journalist covering the world of 3D displays, I can attest to the tidal wave of news that is coming out about 3D technologies and applications. It’s truly overwhelming, and I must report that I’m really struggling to keep up with it all. My friend Art Berman wrote up a listing of all the press releases issued at CES that were related to 3D, which I include on the next page:


3D Tidal Wave at CES

3D TV products
• … to unveil 14-inch OLED capable of 3D
• Samsung will integrate Real D’s 3D tech into their 3D TVs
• LG sets 3D TV target, to offer new lineup in 2010
• Mitsubishi Offers Two 3D-Ready HDTV Lines
• Plasma Panel Plant to Serve as 3D TV Supply Base

3D computers
• MSI to show 3D notebook, dual-screen e-reader at CES

3D set top boxes
• Next3D to Launch Broadband-Delivered Stereoscopic 3D Home Service in First Quarter of 2010
• STMicroelectronics Unveils 3D-Ready Set-Top-Box IC Enabling TV Providers to Realize Consumer-Friendly Content Convergence

3D on cell phones
• Get your 3D Glasses Ready, iHologram App for iPhone Coming Soon

3D encoding and transmission
• Next3D Announces Patent-pending Technology for Encoding and Universal Playback of Stereoscopic 3D Content
• Magnum Semiconductor Announces 3D Recording and Playback Solution with TDVision Systems
• TDVision Systems and CyberLink Announce Full HD Stereoscopic Video Decoding Solution Compatible with …

3D signage
• Magnetic 3D debuts new auto-stereoscopic 3D tech at CES
• The Seelinder: Cylindrical 3D display viewable from 360 degrees

3D gaming
• iZ3D Announces Premium 3D Lens Technology for Gaming at CES
• Industry’s First S3D Gaming Conference Announced!
• WebOS finally getting 3D game support?
• "Shake&Spell 3D" adds a whole new dimension to social gaming

3D projection screens
• Da-Lite’s 3D Virtual Grey Projection Screen Increases Brightness Over 40 Percent

3D photo frames
• Fuji 3D photo frame

3D broadcasting
• ESPN and DirecTV in the US are to launch stereoscopic 3D services in the first half of 2010
• Panasonic, DirecTV team up on 3D
• Next3D and Turner Broadcasting System, Inc. to Collaborate on Stereoscopic 3D HD Content Delivery to the Home
• … in Talks to Carry ESPN 3D Channels
• Discovery, Imax and Sony Form 3D Channel
• "ON" SkyLife 3D channel
• DirecTV to Launch 3D HD Channels

3D theater
• TotalMedia Theatre 3 Features Blu-ray 3D Playback Using NVIDIA GeForce and NVIDIA 3D Vision Technology
• CinemaCity at Arabian Center brings Digital technology to the UAE

3D players

• Sony confirms 3D Blu-ray, introduces iPhone app for remote control

3D Standards
• MPEGIF Kicks off 3DTV Working Group at CES 2010
• HDMI upgrade one of latest pieces in stereo 3D puzzle

2D-3D conversion
• Quartics and DDD to Demonstrate a Stunning 3D Experience on Electronics Devices at CES 2010
• New JVC Unit Offers Real-Time 2D-to-3D Conversion and 3D L/R Mixing
• 2D-3D Conversion / $40 gets you stereoscopic pseudo-3D on the PSP. No… really

3D glasses
• Bit Cauldron Fills Critical Gap in 3D Ecosystem: Demonstrates 3D Glasses for Home Theater and Gaming at 2010 International CES
• XpanD nabs Spanish rights to 3D pics

3D content creation
• Samsung, DreamWorks Animation and Technicolor Form a Global Strategic Alliance for 3D Home Entertainment in 2010
• Yogi Bear Director Eric Brevig to Tackle 3D Korean War Epic
• BSAT Labs 3D to use VFX facility in …
• Offhollywood Takes "The Mortician 3D" to Another Dimension with State-of-the-Art 3D Production Gear, Tech Support, and Talent
• Bitmanagement cooperates with 3Dconnexion for 3D Realtime Rendering

New 3D software
• Majesco Entertainment Announces "Hot and Cold: A 3-D Hidden Object Adventure" is Now Available for Download on DSiWare

3D chips
• NXP Introduces Advanced 3DTV Processor

3D test and evaluation
• Testronic Labs Debuts New 3D Test Lab

New 3D technologies
• Cobalto Cellphone: Which can create 3D images in the Air
• Welho 3DTV test leaves glasses behind

3D cameras
• Sony plans to introduce 3D-technology in its SLR cameras
• Quasar 3D rig used to capture Dave Matthews Band concert live

New 3D reports
• Goldman Sachs: 3D TV Will "Fall Short of Hype"
• 3D Display Revenues Forecast to Reach $22B by 2018; 3D-Ready TV Shipments to Reach 64M Units
• DisplaySearch forecasts that 1.2 million 3D-capable TVs will be shipped in 2010, with growth to 15.6 million sets in 2013
• Study: 3D TV Tech Growing Faster than Theatrical

The bottom line? Beyond any doubt, 3D has finally arrived. Insight Media will be fully reporting on these news stories as well as all matters 3D in our Large Display Report and Mobile Display Report newsletters.

No doubt, despite his meticulous listing, Art missed a few of the press releases from CES. What’s even more amazing is that, over the past couple of months, the rate of news about 3D display-related technologies and applications has not slowed a bit.


Illusions

Long-time readers of the 3rd Dimension and the High Resolution newsletters will know about my fascination with optical illusions. In fact, 3D displays are fundamentally an illusion – whereby we fool the brain into thinking we see a 3D image on a 2D surface. Here are a few recent favorite illusions that have to do with 3D imaging and the importance of perspective.

Artist Emma Cammack takes on the challenge of disguising a 3D image on a 2D plane. http://www.emmacammack.com

On the left is a 3D chalk drawing, called Mysterious Caves, created by world-class pavement artist Edgar Müller. His newest creation was exhibited at West India Quay’s Festival in ... It took Müller five days to finish this project. Below is the “Behind the Scenes” gallery, where you can see the work in progress. On the right is the view from the other side, showing how critical your perspective is to seeing the proper image. http://www.metanamorph.com


Gregor Wosik is primarily known as a street painter. His works are characterized as high-quality and detailed, with motifs ranging from the classical to current issues. Above is an advertisement entitled “Jack Daniels on Cafe Floor”. Again, the two pictures clearly illustrate the importance of perspective in this sort of street painting… http://www.klassiko.de

Craig Tabary is a body-painter who, in his “Leopard Illusion”, created an amazing 2D image on a 3D surface. Note that you can see the woman’s foot in the lower right corner of the image on the right. http://vfrey.tumblr.com/post/343925758/by-craig-tabary


In “Lion & Mouse”, painter René Milot captures an illusion that again showcases perspective, but also how our visual system naturally associates images. http://picture-book.com/users/ren-milot

One of the most remarkable artists who works in the third dimension is Patrick Hughes. Hughes has developed an art form he calls reverspective. Reverspectives are three-dimensional paintings that, when viewed from the front, initially give the impression of a painted flat surface showing a perspective view. However, as soon as the viewer moves their head even slightly, the three-dimensional surface that supports the perspective view accentuates the depth of the image and accelerates the shifting perspective far more than the brain normally allows. This provides a powerful and often disorienting impression of depth and movement. Patrick Hughes takes full advantage of this effect in his use of surrealist images that reinforce the altered reality of the viewer. The illusion is made possible by painting the view in reverse to the relief of the surface; that is, the bits that stick furthest out from the painting are painted with the most distant part of the scene. This is where the term reverse perspective, or reverspective, comes from.


Patrick Hughes uses all the monocular visual cues in his paintings in a creative and meticulous fashion to fool the viewer into accepting the plausibility of the scene in front of them. All the edges converge on vanishing points, the texture gradients of the flooring and walls are correct, the shadows reinforce the geometry of the surfaces, and objects are in the correct relative order and sizes from front to back. However, since the scene is painted on a three-dimensional surface that exactly reverses the apparent perspective, when we move our eyes in relation to the painting the scene appears to move as if it were real. This is the effect of motion parallax. For many first-time viewers of reverspective it can be quite disconcerting, as they can get a visceral feeling of accelerated motion or body extension when they move their head in relation to the painting. Very often viewers are so curious on their first observation of a reverspective that they move or wobble from side to side to accentuate the effect. This is often referred to by those visiting shows with a Hughes reverspective as the 'Hughes dance'. A more detailed explanation of the relationship between perspective and reverspective can be found on Hughes’ website. For the technically inclined, an in-depth discussion can be found in the various scientific papers on this site: http://www.patrickhughes.co.uk
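
For readers who like numbers, here is a rough way to see why the motion feels “accelerated” (the distances below are invented for illustration, not measured from any actual Hughes piece): the parallax the eye actually receives is set by the physical relief, while the parallax the brain expects is set by the depicted depth, and the surplus gets read as motion of the scene itself.

```python
import math

def parallax_deg(head_shift_m: float, depth_m: float) -> float:
    """Angular shift of a point when the viewer's head translates sideways."""
    return math.degrees(math.atan2(head_shift_m, depth_m))

head_shift = 0.05          # viewer sways 5 cm to one side (hypothetical)
painted_surface = 1.0      # the physical relief is ~1 m from the viewer
depicted_scene = 5.0       # the painting depicts a scene ~5 m away

actual = parallax_deg(head_shift, painted_surface)   # ~2.9 degrees
expected = parallax_deg(head_shift, depicted_scene)  # ~0.6 degrees

# The visual system, committed to the depicted 5 m depth, attributes the
# ~2.3 degree surplus to motion of the scene -- the disorienting swing
# that reverspective viewers report.
print(f"actual {actual:.2f} deg vs expected {expected:.2f} deg")
```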

It’s actually difficult to imagine that this image is from the same piece of art as the one on the previous page, just from a different perspective…

The piece by Patrick Hughes goes by the intriguing title “Paradoxymoron”


I’m not sure these are actually optical illusions, but they are clever and certainly illustrate the importance of perspective…

62 fascinating pages about optical illusions

http://www.veritasetvisus.com

3D news from around the world

compiled by Phillip Hill and Mark Fihn

Bay Area Chapter of SID to feature “Fixed-Viewpoint Volumetric Displays” On March 23, Kurt Akeley from Microsoft Research will be presenting to the Bay Area Chapter of SID on the topic of “Fixed-Viewpoint Volumetric Displays”. Abstract: Conventional stereoscopic displays force viewers to focus on a single display surface, decoupling focus distance from convergence distance and compromising (if not eliminating) cues from defocus blur. Auto-multiscopic volumetric displays largely correct these deficiencies, but significant limitations in image quality have hindered their adoption. Over the past decade we have been implementing fixed-viewpoint volumetric displays: displays which forego auto-multiscopy to achieve high image quality, nearly correct accommodation and focus blur cues, and potentially reasonable production cost. To date these displays have been used only to conduct vision-science research. After describing the capabilities, limitations, and implementations of such displays, as well as some results achieved with them, I will speculate on how they might become practical for non-research usage. http://www.sidchapters.org/ba/index.html
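
To put a number on the focus/convergence decoupling the abstract mentions, here is a quick hypothetical calculation (the distances are ours, not from the talk). Optics works in diopters, the reciprocal of distance in meters:

```python
# A conventional stereo display forces the eyes to focus on the screen
# while converging at the simulated depth. Distances are illustrative only.

screen_m = 0.5      # display surface half a meter away (e.g., a desktop monitor)
simulated_m = 2.0   # stereo disparity places the object two meters away

focus_d = 1 / screen_m        # 2.0 D: accommodation is locked to the screen
vergence_d = 1 / simulated_m  # 0.5 D: the eyes converge on the virtual object

print(f"accommodation-vergence conflict: {focus_d - vergence_d:.1f} D")  # 1.5 D
```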

“3D Fact vs. 3D Fiction” seminar to be held in Santa Clara On March 31 in Santa Clara, a seminar sponsored by the iHollywood Forum will examine which 3D technologies will win the race for consumer eyeballs – and which won't. Hype at CES aside, how realistic are projections for the expansion of 3D into the mainstream and the home? Will consumers pay for 3D TV and Blu-ray after spending so much on HDTVs? How will standards battles be resolved? Are games the next big thing? And what about mobile – where device makers are betting big that they can put content (sans glasses) on next-generation cell phones and iPods? http://3dsc-318.eventbrite.com Speakers include:

• Eric Edmeades, CEO, ILM's Kerner Studios; camera work, The Cove, this year's Oscar for best documentary
• Barry B. Sandrew, Founder & President, Legend Films; 3D for Alice in Wonderland
• Patrick Waddell, Sr. VP, Harmonic Inc.; Chair, 3D Working Group, MPEG Industry Forum
• Steve Venuti, President, HDMI Licensing, LLC
• Rick Dean, Sr. Vice President, THX Ltd; Chairman, 3D@Home Consortium
• Moderator: Michael Stroud, CEO, iHollywood Forum

Animation Magazine joins the S-3D Gaming Alliance, offers free on-line subscriptions Even with Avatar’s continued success and 3D Blu-ray just around the corner, many see gaming as the core driver that will bring 3D to the home. Founded in September of 2009, the S-3D Gaming Alliance (S3DGA) represents the interests of game developers, display manufacturers, and technology enablers in this field. Non-profit and non-proprietary, the alliance counts Blitz Games Studios, LG, Real D, DDD, iZ3D, The Game Creators, HDMI Licensing, and more among its top members. S3DGA announced its latest affiliate member, Animation Magazine. S3DGA and Meant to be Seen members will qualify for a free digital subscription. The offer is only available through links on the S3DGA and MTBS3D websites. http://www.s3dga.com http://www.mtbs3D.com http://www.animationmagazine.net

Heidi Hoffman appointed managing director of 3D@Home Consortium The FlexTech Alliance announced the appointment of Heidi Hoffman to the position of Managing Director of 3D@Home Consortium, the organization focused on accelerating the adoption of high-quality 3D entertainment into homes worldwide. FlexTech Alliance (http://www.flextech.org) provides management services to the 3D@Home Consortium (http://www.3dathome.org), including the managing director. In her new role, Hoffman will ensure that the consortium is operating effectively and providing industry-enabling services to its member companies and affiliated organizations.


OFH acquires Actuality Optical product developer Optics for Hire (OFH) has acquired the patent portfolio and other key assets of Actuality Systems Inc. for an undisclosed amount. With this transaction, OFH acquired 19 US patents and numerous patent applications, including foreign counterparts. Key assets include: Laurin Publishing's 2003 Photonics Circle of Excellence Award winner, the Perspecta spatial 3D system, as well as multiple free-eye 3D image projection patents, and a suite of software and hardware technologies for cancer treatment. The IP fortress was developed over 12 years and nearly $20 million of investment. Actuality’s flagship technology, Perspecta, is a free-eye volumetric 3D display that creates hologram-like imagery visible from every angle. Applications include medical imaging, in which a patient’s CT scan appears to float inside a transparent sphere, which can be magnified and studied by multiple clinicians around the product. Other applications include oil and gas, security, military, and molecular visualization. Actuality founder, Gregg Favalora, joined OFH on a full-time basis in November 2009, and is assisting with technology transfer. http://www.opticsforhire.com

Actuality/OFH’s Perspecta autostereoscopic 3D patents offered by Quinn Pacific Actuality’s technology covers 3D display architectures that produce full-color, autostereoscopic images without the need for special glasses. The quasi-holographic imaging techniques patented by Actuality Systems, Inc. allow higher-resolution 3D imaging with lower computational requirements compared to diffraction-limited or massively parallel alternatives. These developments evolved out of Actuality’s Perspecta product efforts, but go far beyond volumetric displays – for example, exploiting 3D technology to dramatically increase resolution by trading off color depth. These assets were acquired in 2009 by Optics for Hire, which has commissioned Quinn Pacific to make them available for purchase. http://www.quinnpacific.com

Fox announces Avatar Blu-ray and DVD release dates – but not 3D Twentieth Century Fox Home Entertainment announced that it will launch director James Cameron's history-making motion picture AVATAR on Blu-ray Disc and DVD on Thursday, April 22 – coinciding with the 40th anniversary of Earth Day. While no formal statement has been made about a 3D DVD for “Avatar,” Fox said one would be released in the future, probably in 2011. http://www.foxconnect.com

Samsung and Panasonic fighting over Avatar 3D Blu-ray bundle According to Australian tech site SmartHouse, Samsung and Panasonic may be fighting over the rights to bundle a 3D version of James Cameron's Avatar with the 3D-compatible HDTVs from their respective brands in the second half of 2010. Initially, Panasonic was the clear favorite. However, now Samsung is reportedly offering “serious money” in a bid for the rights to bundle the sci-fi blockbuster. Avatar producer Jon Landau had spoken at the Panasonic press conferences at the IFA and CES fairs, and clips from the movie had been used by the Japanese CE maker to showcase its 3D tech. Panasonic had also helped in the production of Avatar. However, Samsung scored a publicity coup Wednesday when Cameron attended the Korean corporation's 3D launch event in New York. The filmmaker took the stage after a brief concert by the Black Eyed Peas, which was filmed with 3D cameras and that footage (along with the rest of the band's tour) will be available in 3D Blu-ray exclusively for Samsung customers. Cameron noted that the concert had been filmed with the same cameras used for Avatar and added that Samsung had been “such a visionary leader in getting to market first with these sets.” http://www.smarthouse.com.au

Samsung shows On the Red Carpet in 3D Samsung Electronics America brought the first ever view of Hollywood’s red carpet arrivals live in 3D to guests at an invitation-only viewing event at the Four Seasons Hotel at Beverly Hills on Sunday, March 7, 2010. The exclusive event featured a special closed circuit 3D feed from KABC-TV’s On the Red Carpet live local pre-show at the Academy Awards Ceremony. The event was Samsung’s first 3D viewing party since it unveiled its complete 3D home entertainment solutions in January. http://www.samsung.com


Samsung's first full HD 3D LCD TV now available in the US Samsung's first Full HD 3D LCD TV set is now shipping in the US, with the 55-inch UN55C7000 TVs with 3D 240Hz motion technology in stock at a $3,299.99 price tag with free delivery. (Note: numerous blogs reported that this was Samsung’s first 3D TV to be made available in the US, but that is incorrect, as the company shipped rear-projection 3D-ready TVs starting about 3 years ago. Note also that Samsung continues to refer to their offering as “LED HDTV”, which is misleading, as the technology is still LCD, but incorporating an LED backlight rather than a CCFL backlighting system). Sears is reported to be shipping as well, including Samsung’s 46-inch UN46C7000 3D LCD TV for $2,600.

Samsung bundles Shrek series and Monsters vs. Aliens in 3D with new 3D LCD TVs and 3D Blu-ray players Samsung Electronics announced the world’s first available “Full HD 3D LED TV” (actually an LCD TV with an LED backlight) and a full line-up of 3D home entertainment products to consumers worldwide. Samsung also announced the expansion of its strategic alliance with DreamWorks Animation to feature an exclusive offering of the company’s beloved Shrek series – which to date has grossed over $2 billion in worldwide box office – in its entirety in 3D for the first time ever. Samsung unveiled a promotional program where everyone who purchases a 2010 Samsung 3D TV and 3D Blu-ray Player or Home Theater System will receive a 3D starter kit that includes two pairs of Samsung 3D active glasses and a first-time, feature-length 3D Blu-ray version of DreamWorks Animation’s 2009 release, Monsters vs. Aliens (exclusive to Samsung products). Additionally, during the second half of 2010, the most successful animated film franchise of all time – DreamWorks Animation’s Shrek film series – will be available in 3D Blu-ray for Samsung home solutions. Samsung’s 2010 3D TV line-up includes the LED 7000/8000/9000 Series, LCD 750 Series, and the Plasma 7000/8000 Series. http://www.samsung.com

Samsung unveils slim 2010 plasma HDTVs Samsung Electronics unveiled its new portfolio of ultra-slim plasma HDTVs with the 8000, 7000 and 6500 Series. Each provides technologies that deliver exceptional picture quality, advanced connectivity with the updated Internet@TV feature, and compliance with revised Energy Star 4.0 standards – all in a slim form factor. Additionally, the premium plasma TV line (8000 and 7000 Series) includes Samsung’s proprietary built-in 3D processor that adds a new dimension to TV viewing at home. Samsung’s built-in 3D technology makes images leap off the screen while innovations like Real Black Filter and Motion Judder Canceller (MJC) deliver unprecedented picture quality. The Real Black Filter reduces the onscreen glare caused by ambient light, so blacks and shadow details are as crisp and defined as possible, while MJC reduces the motion judder inherently found in fast-paced action scenes in film-based movies. The upgraded Internet@TV feature now includes Samsung Apps, providing consumers with an expanded, easy-to-navigate selection of content and applications from leading services like Accedo Broadband, AccuWeather.com, The Associated Press, Blockbuster, Fashion TV, Picasa, Pandora, Rovi, Travel Channel, Twitter, USA TODAY, and Vudu. Applications can be downloaded and viewed all while watching TV. The first batch of TV apps will launch in the spring free of charge, while premium apps will be available for purchase via the engine’s monetary transaction solution in the summer of 2010. http://www.samsung.com


Samsung’s 3D Blu-ray player now available Samsung's new 2010 Blu-ray player with 3D capabilities is available from Amazon for $359.99 (down from the pre-order price of $399). The new technologies presented at CES 2010 are becoming available to consumers. Samsung’s new 2010 Blu-ray players, featuring Internet@TV and Samsung Apps, are now available – including Samsung’s 3D-enabled BD-C6900 on Amazon.com. Samsung says its BD-C6900 Blu-ray player is compatible with the 3D Blu-ray standard, and with Samsung’s own 3D displays. http://www.samsung.com

Samsung, DreamWorks Animation and Technicolor form a global strategic alliance for 3D Samsung Electronics America, DreamWorks Animation, and Technicolor announced that they have formed a global strategic alliance for the delivery of a complete 3D home entertainment solution in 2010. The three companies have joined forces to accelerate the worldwide deployment of in-home 3D to mainstream consumers. The solution includes a broad line-up of 3D-capable HDTVs from Samsung, its new 3D Blu-ray disc player, and an exclusive promotion that includes a first-time feature-length, 3D Blu-ray version of DreamWorks Animation’s 2009 release, “Monsters vs. Aliens”. The disc will be created and produced by Technicolor. Samsung will provide customers with several 3D selections, including a short entitled “Bob’s Big Break” as well as trailers for 2010 DreamWorks Animation feature film releases “How to Train your Dragon” and “Shrek Forever After” on Samsung HDTVs with its Internet@TV feature. http://www.samsung.com http://www.technicolor.com

NineSigma seeks “Development Partners for Rewriteable 3D Hologram Applications” NineSigma, on behalf of Nitto Denko Corporation, invites proposals from collaboration partners for rewriteable 3D hologram technology that Nitto Denko has developed. More information is available in the Request for Proposal (RFP) document online at https://www.myninesigma.com/sites/public/_layouts/RFPs/NineSigma_RFP_66165.pdf. The final submission date for proposals is April 5, 2010.

Sony debuts Blu-ray 3D capable models Sony introduced its new Blu-ray Disc line featuring Blu-ray 3D playback, Wi-Fi Internet connectivity, and instant streaming of online video content from the BRAVIA Internet Video platform. The full HD 1080p line featuring Sony’s new Monolithic Design Concept includes three stand-alone models (BDP-S770, BDP-S570 and BDP-S370) and three integrated home theater systems (BDV-HZ970W, BDV-E770W, and BDV-E570), as well as a 5.1 channel matching home audio system (HT-SS370). When connected to a broadband Internet network, the models instantly stream movies, videos, music and more from Netflix, Amazon Video, YouTube, Slacker Internet Radio, Pandora, NPR, Sony Music, and over 25 total providers through the Sony BRAVIA Internet Video platform. Sony’s new Blu-ray Disc players and home theater systems feature an Entertainment Database Browser, using Gracenote technologies, that allows users to browse details like actor and production information from a Blu-ray disc and access related content found on BRAVIA Internet Video. Unique to the new models, users with an iPhone or iPod touch can control the players using a free application that can be downloaded from the Apple App Store. The application allows an iPhone/iPod touch to function as a remote control that includes the ability to access a Blu-ray Disc’s details such as jacket artwork, actor, and production information as well as search for additional video clips online. http://www.sony.com

Sony Pictures Home Entertainment to deliver Blu-ray 3D content To coincide with the rollout of 3D electronics hardware from Sony Electronics, Sony Pictures Home Entertainment (SPHE) announced the studio will begin releasing 3D content on Blu-ray Disc worldwide in 2010. The first planned SPHE Blu-ray 3D release will be the recent animated blockbuster “Cloudy With A Chance Of Meatballs”, timed to the availability of Sony Electronics' 3D compatible BRAVIA LCD TVs and 3D compatible Blu-ray disc players in the summer of 2010. More information about the upcoming Blu-ray 3D edition of “Cloudy With A Chance Of Meatballs”, as well as other future SPHE Blu-ray 3D releases, will be announced in the Spring of 2010. http://www.sonypictures.com


Sony details backlight units in its upcoming 3D LCD TVs Sony recently gave some insights into the backlighting systems utilized in its upcoming HX900, HX800, and LX900 3D LCD TVs:

• HX900 series: The backlight unit of the HX900 series consists of groups of white LEDs, and those groups of LEDs (as units) are attached to the back of the panel like tiles. The light-emitting part of each white LED is parallel to the panel, and it lights the back of the panel by using optical elements such as a light guide plate. This structure enables a slimmer LCD panel than a normal direct-type LED backlight, whose LEDs have their light-emitting part vertical to the panel. Furthermore, by controlling the light emission of each unit of LEDs, it is possible to improve the contrast.

• HX800 series: The HX800 series is equipped with Sony's edge-lit white LED backlight that has a local dimming function. White LEDs are arranged in lines at the upper and lower edges of the panel, and the light emissions of groups of white LEDs are separately controlled. Controllable areas are arranged in two lines in the vertical direction. Sony did not disclose the number in the horizontal direction, but said that it is 10 or less and that the areas cannot be controlled as minutely as those of a direct-type LED backlight.

• LX900 series: The LX900 series comes with a normal edge-lit LED backlight that does not have a local dimming function. The HX900 and LX900 series have a structure where the LCD panel unit and the glass plate on the surface are integrated by filling the gap between them with a newly-developed resin. The structure is called the “Opti Contrast Panel”. Because of the integration, reflections of the backlight and of outside light are reduced compared with panels in which the gap between the LCD panel and glass plate is filled with air. The surface of the glass plate is coated with an AR film.

The "HX900" series (right), which has a direct-type LED backlight, and the "XR1" series (left), which features RGB-color LED backlight

The "HX800" series (right), which has an edge-lit backlight with a local dimming function, and an LCD TV with a normal edge-lit LED backlight (left).


Sony releases first 3D capable Bravia HDTVs Sony Electronics introduced its 2010 BRAVIA LCD HDTV line featuring its first 3D HDTVs, a new innovative and stylish Monolithic Design Concept, and LED backlighting. It offers built-in Wi-Fi (802.11) for easy access to BRAVIA Internet video, BRAVIA Internet Widgets and personal content through Digital Living Network Alliance (DLNA) certified home networks. The line is made up of 38 models ranging in screen sizes from 60 to 22 inches. The LX900 series offers integrated 3D functionality with Sony’s 3D active shutter glasses and built-in 3D transmitter, while the HX900 and HX800-series are 3D capable using Sony 3D active shutter glasses and 3D transmitter (each sold separately). The 3D models incorporate a frame sequential display and active-shutter glasses that work together with Sony’s proprietary high frame rate technology reproducing full high-definition 3D images. http://www.sony.net/united/3D

Sony's 3D TVs available for pre-order Sony's Bravia 3D TVs, which were unveiled at International CES in January, are now being displayed at the company's Sony Style stores – and are available for preorder. Products included in the display are the Bravia XBR-52LX900, as well as the 3D-capable HX900 and HX800 series TVs. The 38 models in Sony's 3D lineup will be available starting this spring. http://www.sony.net/united/3D/

Sony and ESPN set up rival 3D networks Sony Corp. of America, Discovery Communications, and IMAX Corp. have announced plans to form a US television network entirely devoted to 3D programming. The three parties have signed a letter of intent to form the unnamed venture, which is scheduled to launch in 2012. ESPN also plans its own 3D network, potentially testing the waters for an ABC/Disney 3D network of its own down the line. ESPN 3D will show 80 or so events in 2010, including the World Cup and some NBA and college basketball and football. Sony said that it will license television rights to current and future 3D films, plus music- and game-related 3D content. Discovery, meanwhile, will provide 3D rights to its television content while promoting the new channel across its 13 properties. IMAX appears to be supplying image-enhancement and 3D technologies, while doing some cross-promotion across its own theater network. http://www.sony.com

Mitsubishi Digital Electronics America offers 82-inch 3D home entertainment system Mitsubishi Digital Electronics America, Inc. (MDEA) is showcasing its 82-inch Home Theater TV, the world’s largest mass production 3D-Ready television available. With almost four times the viewing area of a “small screen” 42-inch flat panel TV, the 82-inch Mitsubishi 3D-Ready Home Theater TV provides a high quality, cinema-like, large screen 3D experience. Mitsubishi has been selling 3D-Ready TVs since 2007. Utilizing the same core technology that is used in the vast majority of 3D movie theaters, Mitsubishi 3D-Ready TVs bring the 3D DLP Cinema experience home. In order to display 3D images, Mitsubishi’s 3D TVs require source devices to support checkerboard display formats for display of 3D gaming or 3D cinema content. http://www.mitsubishi-tv.com
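
For the curious, the checkerboard format these sets expect is easy to picture: the two eye views are interleaved pixel-by-pixel, so each eye ends up with half the pixels of one frame. A minimal sketch follows (our toy code, not Mitsubishi's implementation, and ignoring color processing):

```python
import numpy as np

def to_checkerboard(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two views pixel-by-pixel in a checkerboard pattern.

    Even (row + col) positions carry the left view, odd positions the right.
    """
    assert left.shape == right.shape
    out = right.copy()
    mask = (np.add.outer(np.arange(left.shape[0]),
                         np.arange(left.shape[1])) % 2) == 0
    out[mask] = left[mask]
    return out

left = np.zeros((4, 4), dtype=np.uint8)      # toy 4x4 frames
right = np.full((4, 4), 255, dtype=np.uint8)
print(to_checkerboard(left, right))          # alternating 0 / 255 pattern
```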

Philips selects XpanD active 3D technology Underscoring the innovation leadership of XpanD’s active 3D technology platform for cinematic, television and gaming applications, XpanD announced a partnership with Philips to provide consumers with co-branded iterations of its patented pi-cell active 3D glasses with Philips’ sets. XpanD has an estimated 90-percent 3D market share in Asian cinemas and a share of more than 50-percent in Europe, including a 75-percent share of the 3D sector in Europe’s largest cinema market. http://www.xpandcinema.com

Technicolor develops advanced 3D compression; broadcast via conventional HD channel Technicolor announced several innovations to support the consumer electronic industry’s migration to 3D, including new technologies for Blu-ray 3D, broadcast 3D, 3D subtitling, and auto-stereoscopic 3D delivery to mobile handsets. Technicolor demonstrated the following at CES:

Blu-ray 3D: Technicolor has developed the first advanced compression and authoring solution to bring the recently released Blu-ray 3D specifications to production reality. Technicolor has expanded its capacity in anticipation of increased demand for Blu-ray 3D authoring.


Broadcast 3D: Technicolor has built the first independent 3D broadcast facility, located at its Chiswick Park broadcast site in London. The 3D channel is capable of transmitting both live and pre-recorded 3D content using a conventional HD channel. The solution is currently delivering content to a Technicolor HD set-top-box. The company is now ready to offer this service to its network service provider and broadcast clients.

Automated 3D Subtitling Creation: Subtitles in 3D introduce unique challenges due to the varying depths at which objects appear on the screen, which limits where subtitles can be placed. Technicolor has developed a production tool that automatically recommends the best placement of subtitles to minimize disruption to the creative intent of the content. http://www.technicolor.com
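
Technicolor has not published how its placement tool works, but the constraint it must satisfy is easy to sketch: a subtitle has to sit at least as close to the viewer as anything it overlaps, or the eyes receive contradictory depth cues. A hypothetical baseline (the sign convention and margin are ours, purely for illustration):

```python
import numpy as np

def subtitle_disparity(disparity: np.ndarray, box: tuple) -> float:
    """Pick a disparity for a subtitle so it floats in front of the scene.

    disparity: per-pixel screen disparity in pixels (larger = closer to the
               viewer, by this sketch's sign convention).
    box: (top, bottom, left, right) region where the subtitle will be drawn.
    """
    top, bottom, left, right = box
    region = disparity[top:bottom, left:right]
    margin = 2.0  # place the text slightly in front of the nearest object
    return float(region.max() + margin)

# Toy disparity map: an object pops out near the bottom of the frame.
disp = np.zeros((1080, 1920))
disp[800:1000, 500:900] = 12.0
print(subtitle_disparity(disp, (900, 1060, 400, 1520)))  # 14.0
```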

LG 3D TV line to debut in May LG will release its 3D TV range in May, the company said today. The line-up will comprise a pair of LED TVs and a Blu-ray Disc player. The TVs are both part of LG's 32mm-thick LX9900 series, part of its micro-bezel Infinia range. Two sizes are planned: 47- and 55-inch. Both will feature 400Hz frame interpolation technology; LED array backlighting (864 LEDs on the 47-inch set and 960 on the 55-inch device); a 10 million:1 contrast ratio; two 10W “invisible” speakers; and four HDMI 1.4 ports. The LX9900s use active-shutter 3D technology, though LG is apparently not planning to bundle the glasses with the sets. http://www.lge.com

DIRECTV and Real D work together to deliver 3D content to the home DIRECTV and 3D technology company Real D are currently working together to deliver high-definition 3D movies and TV programming via satellite to DIRECTV subscribers. DIRECTV’s content providers will be able to use Real D tools to format their 3D content and deliver it to millions of homes. DIRECTV will be launching three 3D channels sponsored by Panasonic. The delivery of Real D Format content is compatible with DIRECTV's current HD satellite broadcast and on-demand systems and works with existing HD set-top boxes. As part of the agreement, Real D has delivered to DIRECTV a license to use the Real D Format and associated 3D technology patents. The Real D Format builds on the company's patented side-by-side 3D formatting technology and is capable of delivering crisp, clear, high-definition 3D to the home utilizing all channels of the existing HD broadcast infrastructure. DIRECTV has chosen to use the side-by-side method as its primary method of delivering 3D content due to its ability to deliver high-quality progressive and interlaced video over existing infrastructure, including existing HD set-top boxes and DVRs. http://directv.com http://www.RealD.com
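
The appeal of side-by-side is that the packed picture is, to every piece of existing equipment, just an ordinary HD frame; only the last device in the chain needs to know that two half-width views are inside. A minimal sketch (naive 2:1 decimation stands in for the filtering a real encoder would apply):

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Squeeze two full frames into one by halving horizontal resolution."""
    assert left.shape == right.shape
    half_l = left[:, ::2]   # crude 2:1 decimation; real encoders filter first
    half_r = right[:, ::2]
    return np.hstack([half_l, half_r])

def unpack_side_by_side(frame: np.ndarray):
    """The set-top box / TV splits and re-stretches the two halves."""
    w = frame.shape[1] // 2
    left, right = frame[:, :w], frame[:, w:]
    return np.repeat(left, 2, axis=1), np.repeat(right, 2, axis=1)

left = np.random.rand(1080, 1920)
right = np.random.rand(1080, 1920)
packed = pack_side_by_side(left, right)  # still a plain 1920x1080 frame
print(packed.shape)                      # (1080, 1920)
```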

DIRECTV to be first television provider to launch 3D in the home DIRECTV and Panasonic announced a strategic relationship that, for the first time, will bring 3D TV to the largest audience nationwide. Beginning in June 2010, millions of DIRECTV HD customers will receive a free software upgrade enabling them to have access to three dedicated 3D channels through their 3D television sets, such as Panasonic’s VIERA Full HD 3D TVs. Panasonic will be the exclusive presenting sponsor of DIRECTV's new HD 3D channels, which will deliver movies, sports and entertainment content from some of the world’s most renowned 3D producers. DIRECTV and Panasonic will leverage current relationships with programming partners and movie studios to obtain new and existing 3D content. DIRECTV is currently working with AEG/AEG Digital Media, CBS, FSN, Golden Boy Promotions, HDNet, MTV, NBC Universal and Turner Broadcasting System, Inc., to develop additional 3D programming that will debut in 2010-2011. The sponsorship will feature Panasonic branding on all DIRECTV 3D channels for a one-year period. http://www.panasonic.com/3D http://directv.com

Panasonic introduces 3D TV and goes into details Panasonic Corp announced a PDP TV that can display 3D images. The “3D Viera VT2” will hit the Japanese market April 23 along with a Blu-ray Disc (BD) recorder and a BD player that support 3D video, according to a report in Nikkei Electronics. The 3D Viera VT2 comes in two screen sizes, 54 and 50 inches, and their expected street prices are ¥530,000 ($5,900) and ¥430,000, respectively. The BD recorder will be available in three models whose hard disk drives have capacities of 2Tbytes, 1Tbyte and 750Gbytes, respectively. Their expected street prices are ¥300,000, ¥200,000 and ¥160,000. The price of the BD player is expected to be about ¥130,000. Panasonic employed a time-sharing method that combines a PDP with a drive frequency of 120Hz, a BD player, and active shutter glasses. The time-sharing method enables viewers to see 3D images with a resolution of 1920x1080 (full HD). The 3D TV will come with a pair of the special glasses, which can also be purchased separately for an expected street price of ¥10,000. As for content for the 3D TV, firms such as film companies plan to release Blu-ray discs, and trial software will be included in a limited number of the new BD recorders and players for a sales campaign. The new PDP, which was announced at the 2010 International CES, has a luminous efficiency four times higher than that of Panasonic’s TVs released in 2007. The company increased the amount of ultraviolet rays generated by electric discharge, improved the light conversion efficiency when a fluorescent material is irradiated with ultraviolet rays and emits light, and enhanced the light extraction efficiency of the emitted light. Because of the improved luminous efficiency, the phosphor’s afterglow time is about 66% shorter than that of the company’s existing TVs. As a result of the shortened afterglow time, the overlap between the images for the right and left eyes was reduced. In the active liquid crystal shutter glasses, crosstalk was reduced by enhancing the accuracy of the liquid crystal shutters so that they open and close at a more precise timing. The new BD recorder and player support the MPEG-4 MVC (Multi-view Video Coding) standard to play 3D video stored on a Blu-ray disc. Panasonic proposed that the Blu-ray Disc Association (BDA) should employ the standard, said Takuya Sugita, Panasonic’s video business unit director. The MPEG-4 MVC data of the encoded full-HD images for the left eye is recorded as “standard images”. As for the images for the right eye, the part that overlaps the standard images is not recorded (only the parts that differ from the standard images are encoded). With this method, full-HD images can be encoded 1.3 times more efficiently than in the case where the full-HD images for the right and left eyes are encoded separately, Panasonic said. To improve the accuracy in decoding the images for the right eye, not only the corresponding images for the left eye but also the preceding and following frames of the frame being decoded are referred to, Sugita said. http://www.panasonic.com

Panasonic’s 3D PDP TVs; The active liquid crystal shutter glasses weigh 63g
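
Real MVC does block-based inter-view prediction inside an H.264 framework, with disparity compensation and entropy coding; but the core idea Sugita describes – store one view whole and code only the other view's deviation from it – can be sketched in a few lines (toy data, no motion/disparity search):

```python
import numpy as np

def encode_stereo(left: np.ndarray, right: np.ndarray):
    """Store the left view whole and only the right view's deviation from it."""
    residual = right.astype(np.int16) - left.astype(np.int16)
    return left, residual  # residual is mostly near zero, so it codes cheaply

def decode_right(left: np.ndarray, residual: np.ndarray) -> np.ndarray:
    return np.clip(left.astype(np.int16) + residual, 0, 255).astype(np.uint8)

# Toy frames: the right view is nearly identical to the left, as in real
# stereo pairs, which is exactly why inter-view prediction pays off.
left = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
right = np.clip(left.astype(np.int16) + 3, 0, 255).astype(np.uint8)

base, resid = encode_stereo(left, right)
assert np.array_equal(decode_right(base, resid), right)  # lossless round-trip
```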

Panasonic 3D TVs sell out in US stores Panasonic Corp. said its 3D TVs sold out in the US in their first week. Panasonic became the first major TV maker to sell 3D sets in the U.S. when its 50-inch full high-definition plasma TV went on sale at outlets of Best Buy with a pair of glasses and a 3D Blu-ray player for $2,899.99 on March 10. Samsung Electronics Co., the world’s largest TV maker, began offering a 55-inch 3-D model there for $3,299.99 on March 14, while Sony Corp. plans to start selling 3D Bravia TVs from June. Samsung has said it aims to sell more than 2 million 3D TVs this year, while Panasonic expects to sell as many as one million globally in the year starting April 1. LG Electronics Inc. has said it’s targeting sales of 400,000 3D TV sets in 2010. Sony, which said last week it plans to sell at least 25 million TVs in the year starting April, predicts sales of 3-D sets will probably account for about 10 percent of the total.


“Avatar” and other 3D movies set the stage for 3D TV debut, In-Stat says The popularity of Avatar and other 3D movies will put 3D TV on the map for consumers, reports In-Stat. 2010 will be a big year for 3D entertainment, as movie studios release more 3D films shown in a growing number of 3D-equipped theaters. “In-Stat’s 3D consumer survey shows that 64% of consumers are at least somewhat interested in 3D in the home. For those who have seen a 3D movie in the last 12 months, the percentage increases to 76%. Exposure to 3D films is important to the debut of 3D TV, because consumers who have seen 3D films are more interested than the general population in being able to view 3D content at home,” says Michelle Abraham, In-Stat analyst. Recent research by In-Stat found the following: In-Stat projects worldwide 3D TV shipments will reach 41 million in 2014. 3D Blu-ray player shipments will track closely with 3D TVs. Pricing is a major barrier, as survey respondents are not willing to pay much of a premium for 3D TV sets and Blu-ray players. Many Pay-TV operators will use half resolution 3D as a stepping-stone and learning opportunity for full HD 3D in the future. On a regional basis, North America will be the largest market. The research, “3D TV Coming Soon to a Home Near You”, covers the worldwide market for 3D television. It includes: examination of the 3D eco-system: 3D formats, 3D content, consumer interest in 3D, transmitting 3D to the home, and 3D consumer devices; worldwide five-year forecasts for 3D channels, 3D TV set shipments, ASPs, revenues by region, and 3D Blu-ray player shipments through 2014; analysis of 3D standards and formats, and 3D content availability. http://www.in-stat.com

Sky to put 3D TVs into British pubs Sky has reportedly bought 15,000 3D TVs from LG and intends to install them in pubs throughout Britain. Sky's 3D TV service is due to launch in April, following a preview event held in January when nine UK pubs screened the Barclays Premier League match between Arsenal and Manchester United. Sky intends to show matches and other sporting events live. http://www.sky.com

Apparently, Sky sees pub viewing as a good alternative to special events cinema as a way to showcase 3D viewing. It remains to be seen whether glasses will be accepted in what is typically a social venue. And if 3D does induce nausea or headaches, perhaps the attendant beer-drinking will serve to diminish (or mask) the effects…

Technicolor and Bow Tie Cinemas pact to deploy Technicolor 3D Technicolor reached an agreement with New York City-based Bow Tie Cinemas to install Technicolor 3D in all Bow Tie locations, on 25 of its 150 screens. Technicolor 3D is a new 3D system for 35mm projectors, enabling exhibitors to equip theatres for high-quality 3D at a fraction of the cost of installing a digital projection system. The Technicolor 3D system utilizes a next-generation 3D lens for projectors and film prints created with patent-pending digital processes to optimize the motion picture image for 35mm 3D projection. Bow Tie will install Technicolor 3D in advance of the first film available in the Technicolor 3D format: How to Train Your Dragon from DreamWorks Animation SKG, Inc. on March 26. DreamWorks Animation SKG, Inc., Overture Films, Warner Bros., and other studios have announced support for Technicolor 3D; these studios represent 13 of the 19 3D films already announced for 2010 release. Technicolor 3D employs a proprietary “production to projection” system that leverages 35mm film projectors, in use today by the majority of US and international theatres, to deliver a 3D presentation to moviegoers. A patent-pending lens system splits the left and right eye images as the film runs through the projector and delivers a 3D-ready image onto a silver screen. The solution works with circular polarized glasses – identical to the ones used for existing digital 3D cinema – to “translate” the film’s content into a 3D image. The silver screen can be used for the projection of both Technicolor 3D as well as digital 3D content. Technicolor 3D is available now in the US, Canada, select European countries, and Japan. http://www.technicolor.com http://www.bowtiecinemas.com


Barco and Cinema West partner in California’s largest 3D multiplex Barco announced that Cinema West has selected Barco DP-2000 projectors for their new Palladio 16 Cinemas site located in Folsom, California. Six of the DP-2000s are 3D-equipped, using 3D technology from Real D – making the Palladio 16 the largest single-site 3D installation in California, if not on the West Coast. For the 16-theatre installation, digital servers and theatre management systems are supplied by Dolby, and the 5.1 surround-sound audio system is supplied by both QSC and Dolby. http://www.cinemawest.com

CBS to offer Final Four in 3D theaters CBS Sports announced it will provide the Final Four college basketball contests and the championship game live and in 3D at dozens of movie theaters from coast to coast – in a team effort which includes the NCAA, chief sponsor LG Electronics, and Cinedigm Digital Cinema. The 3D HD broadcasts will be provided by about 100 Cinedigm-certified Digital Cinemas. (Also, those attending the Final Four events in Indianapolis can see the games in 3D on LG LCD HD screens located throughout Lucas Oil Stadium). Both semi-final 3D games are set for Saturday, April 3 (6 p.m. and 9 p.m. EDT), with the primetime championship game on Monday, April 5 (9 p.m. EDT). The telecasts will be fully produced by CBS Sports. LG Electronics, which introduced its first 3D LCD LED sets several months ago, will use the two game nights to market their upcoming 3D HD units (and 3D Blu-ray players) heading for North America this spring. CBS Sports said it will be working with NEP on the 3D productions, along with Vince Pace. http://www.cinedigm.com

Stereo Vision launches world’s first theatrical 3D television network Stereo Vision Entertainment announced that it has launched SVTV, the world’s first theatrical 3D television network. Stereo Vision’s CEO Jack Honour stated, “For the last ten years, Stereo Vision has been a driving force in the development of the 3D entertainment industry. With Sony announcing that they will have 46,000,000 3D TVs out by 2013, and virtually every major television manufacturer following suit, the forming of SVTV was a natural evolution for Stereo Vision. While developing our own in house 3D content, we've also been busy gathering existing content, and aligning ourselves with the major studios that are now producing new 3D content. We are currently in the final phase of preparations for our 3D broadcast beta test, and expect to be broadcast ready by summer.” http://www.stereovision.com

Comcast to feed Masters in 3D The Masters golf tournament will be televised in 3D and fed for free by Comcast, the largest TV service provider in America. The live 3D coverage will be produced from the Augusta National Golf Club. The 3D production is being sponsored by Sony. Rights holders for the Masters are ESPN and CBS. The 3D production will be concentrated on the famed course's final nine holes. The production will be distributed live to those in the US with television sets and computers that are 3D capable. Two hours of live afternoon 3D coverage will be available each day, beginning during the Par-3 Contest on April 7 and continuing through the four rounds, April 8-11. Comcast and IBM, the tournament's technology partner, will combine to offer the 3D feed. http://www.masters.com

Sky Deutschland airs 3D soccer to select audience Sky Deutschland reported it made German broadcast history with the nation's first telecast in 3D HD. The event was a soccer match between Bayer Leverkusen and Hamburg SV, which was piped in for an invitation-only audience at a large Munich beer hall. Sky (until recently known as Premiere in Germany) treated the mostly celebrity crowd to the typical 3D glasses to view the match in stereoscopic 3D. Sky gave no hint when it might actually begin offering some 3D content on its DBS system, which is busy (like most other media outlets) trying to grow its HD subscriber list, much less make a partial marketing shift towards 3D. Sky is using 3D technology as a way to further promote its HD product, with one Sky Deutschland exec telling the assembled crowd in Munich, “What we've seen on this memorable evening is what will be possible with HDTV in the future. 3D is the HD experience of the future and the logical next step for HDTV.” Sky produced the 3D soccer broadcast simultaneously with its “regular” HD broadcast, with the aid of nine 3D cameras and six 3D stereo rigs from P+S Technik. http://www.sky.de


Turner Sports uses Orad’s 3DPlay for NBA All-Star Week Turner Sports deployed Orad Hi-Tec Systems’ real-time 3D graphics solutions to place virtual graphics during the NBA All-Star Game coverage. The network’s production team used Orad’s 3DPlay to place virtual graphics promoting advertisers, sponsors, and network programming onto objects around Cowboys Stadium. Using the same software and graphic controller, the team also created real-time, on-air graphics from within one of the on-site outside broadcast trucks. The 3D graphics were used throughout the four days of coverage, and during the 59th Annual NBA All-Star Game. Virtual production took place outdoors about 20 miles from Dallas atop Rangers Stadium, home of the Texas Rangers major league baseball team. Due to snow, the virtual show planned for the NBA All-Star Game was nearly canceled. However, the Orad graphics artist stationed onsite was able to build a snow filter into the graphic design application, which allowed the show to go to air. Virtual clouds were also created and then used to blend the 3D graphics into the Dallas skyline. http://www.orad.tv

Kenny Rogers to celebrate 50 years with 3D TV special Country music legend Kenny Rogers announced that he will tape an upcoming television special commemorating the first 50 years of his career. KENNY ROGERS: The First 50 Years will be filmed in high definition, digital 3D on April 10th at the MGM Grand at Foxwoods in Mashantucket, Connecticut. Dolly Parton, Lionel Richie, Alison Krauss, Wynonna, and the Oak Ridge Boys are some of the artists already slated to appear on the show. Details on the network airing the special, more guest stars and other surprises will be announced shortly. http://www.kennyrogers.com

James Cameron announces plans to re-release Titanic in 3D In a recent interview, James Cameron revealed that he plans to re-release his epic historical blockbuster, Titanic, in 3D in spring 2012. This is to coincide with the 100th anniversary of the ship’s first and final voyage. Cameron continues to criticize the mishandling of 3D and how it’s undermining the potential of the 3D movie-going experience:

“You know, everybody is an overnight expert. They think, ‘What were the takeaway lessons from Avatar? Oh, you should make more money with 3D.’ They ignore the fact that we natively authored the film in 3D, and decide that what we accomplished in several years of production could be done in an eight-week (post-production 3-D) conversion with ‘Clash of the Titans.’ It’s never going to be as good as if you shot it in 3D, but think of it as sort of 2.8-D.”

XpanD introduces new XpanD ONE solution for theaters With an announcement that extends 3D technology to cinema environments of all sizes, XpanD is introducing the XpanD ONE 3D solution. Designed for theaters with a maximum capacity of 150 seats, the XpanD ONE system gives owners of intimate cinema exhibition spaces and fully digitized multiplexes the opportunity to offer customers the same groundbreaking 3D technology found in the world’s largest and most popular multiplexes. The XpanD ONE consists of a single-cable, plug-and-play IR controller and 150 pairs of XpanD active 3D glasses. The XpanD ONE can easily be moved from one theater to another in a fully digitized multiplex, turning all the secondary halls into 3D theaters as movie programming requires. Like standard XpanD systems, XpanD ONE does not require a silver screen, provides the brightest 3D images, and works with all DLP Cinema projectors. http://www.xpandcinema.com


Sony introduces its first 3D compatible audio/video receiver Sony announced its first A/V receiver capable of supporting 3D audio and video. Featuring HDMI 1.4 3D pass-through technology, ample high-definition connectivity, and compatibility with all of the latest Blu-ray Disc audio formats, the new STR-DN1010 A/V receiver is designed to be a simple solution for controlling any high-definition or 3D-capable home theater. The 7.1-channel STR-DN1010 A/V receiver (110 watts per channel @ 8 ohms, 1kHz, 1% THD) features full high-definition 1080/24p support and seven HD inputs in total (four HDMI and three component), allowing for connection to a wide variety of HD devices. The receiver’s HDMI 3D pass-through technology accepts 3D video from connected devices and passes it through to a 3D-compatible high-definition television, while decoding high-resolution audio codecs. The STR-DN1010 is compatible with all advanced audio codecs, including Dolby TrueHD and DTS-HD Master Audio, and features wireless second-zone capabilities through Sony’s S-AIR wireless technology. With the addition of an S-AIR transmitter (model EZW-T100) and separate S-AIR speakers (sold separately), the receiver can also drive wireless audio in up to 10 additional rooms. The STR-DN1010 A/V receiver also features a Digital Media Port (DMP) input for simple connection to external sources including an iPod and iPhone (compatible DMP accessories required and sold separately) and is compatible with both Deep Color and x.v.Color. The STR-DN1010 A/V receiver will be available this June for about $500. http://www.sony.com

Onkyo introduces the world's first THX certified 3D-ready A/V receiver Onkyo announced March deliveries of its first 3D-ready home theater receivers and home-theater-in-a-box (HTiB) systems. The new models consist of three A/V receivers and three HTiB systems ranging in price from $299 to $599, and all of them support the new HDMI v1.4 connectivity standard for new 3D video displays and Audio Return Channel capabilities. The line-up includes Onkyo’s new easy-to-set-up overlaid onscreen graphical display that lets the user watch the program in the background while using the function menus. Additionally, all 2010 HDMI v1.4 models include a new feature called HDMI Thru, which allows content to pass through to the TV when the receiver is in a standby state. All of Onkyo’s receivers offer as many as six HDMI inputs, plus component and composite video, numerous stereo input jacks, optical/coaxial digital inputs, and, on many models, the popular front-panel connections. Two models include Sirius Radio connections, and all these receivers incorporate Onkyo’s proprietary Universal Port (U-Port) connector, which simplifies connections to optional HD Radio tuners and iPod docks. http://www.onkyousa.com

Pioneer adds Bluetooth-streaming, 3D-ready receivers Pioneer has introduced a line of 5.1-channel A/V receivers that can play audio streamed wirelessly via Bluetooth. The VSX-520-K and VSX-820-K are 3D-ready with the inclusion of HDMI 1.4, and both have at least three HDMI inputs. The VSX-820-K also has “Works with iPhone” certification, meaning the receiver will play back content from an iPhone, iPod, and iPod touch. An iPhone Control Button on the front panel of the VSX-820-K transfers iPod navigation control and on-screen display from the A/V receiver’s remote control back to the connected Apple device. http://www.pioneerelectronics.com


CBS and Sony Electronics unveil new 3D consumer research center Sony Electronics and CBS unveiled “The Sony 3D Experience.” This research center and screening facility will focus on consumer preferences and attitudes toward 3D programming, as well as how broadcasters and studios can best deliver 3D content for viewing both in and out of the home. The Sony 3D Experience will be located within the expanded CBS Television City research facility at the MGM Grand Hotel & Casino in Las Vegas.

The new center is also being supported by RealD, which is providing its advanced 3D filters and eyewear to help complete the 3D experience. The facility is divided into two primary zones: 3D theatrical entertainment, which will preview and promote the latest 3D motion picture releases; and 3D home entertainment, which will highlight and demonstrate the newest trends for 3D in the home, including 3D-compatible HDTVs, PlayStation 3 systems, and upcoming Blu-ray 3D players and titles. Additionally, consumers will be able to learn about the latest 3D developments, such as the upcoming launch of the ESPN 3D network and the new 3D channel resulting from a joint venture among Discovery Communications, IMAX, and Sony. The Sony 3D Experience is one of several recent Sony initiatives in the 3D arena. The company also just unveiled its new 3D Technology Center on the Sony Pictures Entertainment lot in Culver City, Calif., which will offer industry professionals the opportunity to learn more about the techniques and equipment for 3D production and content creation. http://www.sony.com

UC Berkeley and Bangor University researchers flag health concerns related to 3D Stereo 3D movies and TV could generate as many as seven different perceptual problems, said Martin Banks, a professor of optometry and vision science at the University of California at Berkeley. He gave a talk in February to a broad group of consumer and Hollywood technologists about some of his biggest concerns. “I think there are real things to be concerned about with the use of stereo displays becoming very widespread, especially if younger children are exposed to them routinely,” added Simon Watt, a lecturer in the school of psychology at Bangor University in Wales who, like Banks, has been conducting studies on eye movements and stereo 3D displays.

One of the main issues the researchers are studying is the so-called convergence-accommodation conflict. People watching stereo 3D content have to focus on one point on a flat screen while cues in the content tell them the object lies at another point in 3D space. Such adjustments are not needed in the real world, so the brain is not wired to handle them smoothly. Recent 3D movies such as Avatar did a good job of minimizing the effect, Banks said. But “as you decrease the distance [to the display] the problems created by this conflict accelerate, and it’s non-linear, so they accelerate quickly. Things you could get away with in movies, you can’t in a video game where a kid is close to the screen, so I am more troubled about stereo 3D TVs than movies,” he added. Both Banks and Watt are working on one possible solution: in separate efforts they are developing so-called multi-focal-plane displays that could reduce eye strain.
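To make Banks’s non-linearity point concrete, the conflict is often quantified in diopters (1/meters) as the mismatch between the distance the eyes must focus to (the screen) and the distance they must converge to (the simulated object). The short Python sketch below is our illustration with hypothetical viewing setups, not the researchers’ model:

```python
def conflict_diopters(screen_m: float, object_m: float) -> float:
    """Dioptric mismatch between accommodation (focused on the screen)
    and vergence (converged on the simulated object depth)."""
    return abs(1.0 / object_m - 1.0 / screen_m)

# The same 2:3 object-to-screen depth ratio at three viewing distances:
print(conflict_diopters(screen_m=10.0, object_m=6.7))  # ~0.05 D (cinema)
print(conflict_diopters(screen_m=2.5, object_m=1.7))   # ~0.19 D (living-room TV)
print(conflict_diopters(screen_m=0.6, object_m=0.4))   # ~0.83 D (desktop game)
```

For a fixed object-to-screen depth ratio the conflict grows roughly as 1/distance, so it climbs steeply as the viewer moves close, which is consistent with Banks’s greater concern about TVs and games than about cinema.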

The Eye 3D short demo from a stereoscopic 3D documentary movie The Eye 3D is a 3D documentary about the Very Large Telescope (VLT) of the European Southern Observatory (ESO), the most powerful optical telescope in the world, which has found its place in Chile’s Atacama desert, 75 miles away from any settlement. The Eye 3D is a journey: to the Atacama desert and onward to the outermost depths of the universe. A 2-1/2 minute stereo 3D demo clip was made available by Peter Wimmer, the author of the Stereoscopic Player. The video is in 16:9, 1080p HD format with WMV compression in Over/Under format, and can be viewed with the 3D Vision video player, the Stereoscopic Player, or any other player that supports Over/Under (Above/Below) S3D video. You can download the clip here: http://uploading.com/files/76dm8d1m/The-Eye-1080p-s3d.wmv/


Technicolor 3D launches in North America Technicolor announced that its next-generation 3D-on-film solution has commitments for more than 150 screens to be installed in North America by the release of Clash of the Titans from Warner Bros. on April 2. Due to overwhelming global demand for more 3D screens, Technicolor is expanding its 3D offering internationally to the UK and Italy starting immediately. Technicolor has partnered with German-based TC3D to act as its sales and marketing agent for Germany and Switzerland. The system launched with a demonstration at the Berlin Film Festival in February. In addition, Technicolor has formed a strategic alliance with FujiFilm to market the 3D solution in Japan starting in April. The Technicolor 3D system will initially debut in theatres for How to Train Your Dragon from DreamWorks Animation SKG, Inc. on March 26. http://www.technicolor.com

Panasonic announces professional 25.5-inch 3D production monitor Panasonic announced the BT-3DL2550, a 25.5-inch professional-quality 3D LCD monitor for field use, and the AG-HMX100, a professional HD digital AV mixer for live 3D event production. Panasonic will offer professional production equipment to allow video professionals to efficiently create 3D content, so consumers can enjoy 3D video using Panasonic 3D home theater systems. The monitor can be connected directly to Panasonic’s Full HD 3D and other 3D cameras via HD-SDI inputs. It can also be connected to high-end NLE systems like Quantel’s IQ and Pablo via its two HD-SDI inputs (simultaneous signal), or to an NLE system running Final Cut Pro via DVI-D (line-by-line signal), and it connects easily with common professional interfaces, so it can integrate into any production. The monitor displays 3D content using an Xpol polarizing filter, so content can be viewed with polarizing (passive) 3D eyeglasses. Display modes include left image, right image, overlay, left-and-right two-window display, and 3D. With an In-Plane Switching (IPS) panel and 10-bit processing circuit, the monitor delivers full 1920x1200 resolution with exceptionally clear detail, offers six color settings – SMPTE, EBU, ITU-R BT.709, Adobe 2.2, Adobe 1.8, and D-Cinema – for superior color range, and provides a three-dimensional look-up table (LUT) for calibration. Additional features include pre-installed calibration software, Cine-gamma Film-Rec compensation, standard markers and blue-only mode, H/V delay display, monochrome and cross-hatch overlay display, and split-screen/freeze-frame (live input vs. freeze frame). The BT-3DL2550 3D production monitor will be available this September at a suggested list price of $9,900. http://www.panasonic.com/broadcast

Alienware now shipping 23-inch OptX AW2310 1080p 3D monitor Alienware started shipping the OptX AW2310 LCD monitor, the company’s first 3D monitor. Measuring 23 inches at 1920x1080 pixels, the new device features a 3-millisecond response time, a 120Hz refresh rate, and stereoscopic support when NVIDIA’s GeForce 3D Vision Kit is utilized. It’s up for order right now at $469, but has been seen online for $449. The 3D Vision Kit, however, adds about $200 to the price tag. http://accessories.us.dell.com


XpanD introduces universal active 3D glasses The XpanD X103 glasses are designed to work seamlessly in XpanD cinemas and with almost all of the new 3D-ready TVs of all brands, allowing XpanD X103 owners to use their personal glasses with their friends’ 3D TVs, with 3D computer monitors, and in XpanD cinemas. The new XpanD X103 active 3D glasses are available in 12 different colors, allowing users unprecedented freedom of expression. XpanD is exhibiting the world’s first universally compatible active 3D glasses for 3D-ready displays. The XpanD X103 glasses are compatible with virtually any monitor capable of playing 3D-ready content, making 3D an affordable social experience. As with all XpanD models, the X103 active 3D glasses utilize a fast-switching liquid crystal cell, known as a “pi-cell” – making them, XpanD says, the fastest 3D glasses in the world. Owners of XpanD universal glasses are starting to come to the cinema with their personalized glasses. As a result, cinema owners and the studios no longer need to pay for 3D glasses, making 3D cinema distribution and exhibition less expensive. Cinemas are using their unique position to become a point of sale for universal 3D glasses, and can profit from these sales. http://www.xpandcinema.com

GUNNAR Optiks announces 3D lens technology for gaming and movies GUNNAR Optiks announced today that it will be offering a collection of 3D glasses enabled with components of its i-AMP lens technology. Versions will be available for the most widely used 3D platforms in gaming and video. For the first time, premium consumers will have a chance to view 3D content through ergonomically correct and distortion-free optics. GUNNAR is relying on components of its i-AMP technology to provide the optics. “While typical 3D eyewear is stamped from a flat sheet of plastic, GUNNAR lenses are shaped, formed and cut to provide distortion-free optics,” said Joe Croft, co-founder of GUNNAR and EVP of research, design & development. “For the amount of technology and effort that goes into the creation and delivery of the content, it is a shame that the weakest link in any 3D system today is the eyewear used to view the final product.” GUNNAR eyewear with i-AMP 3D will initially be available in Q2 of 2010 in configurations that are compatible with iZ3D gaming systems and RealD video. Ready-to-wear versions will be priced from $89.00 to $149.00. Prescription eyewear in both configurations will be available in Q3. http://www.gunnars.com

Vuzix introduces video eyewear Vuzix Corporation debuted the Wrap 920AR eyewear, complete with a stereo camera pair that “looks” into the world, bringing mixed and augmented reality content to life. With the new Wrap 920AR, users can view the real-world environment and computer-generated imagery seamlessly mixed together, allowing video game characters to jump out of the TV and come to life in your living room, or magazines and books to carry animated links back to the web in real time. The stereo camera pair delivers a single 1504x480 side-by-side image that can be viewed as 3D stereoscopic video, while the video eyewear provides an unprecedented 67-inch display as seen from 10 feet. The Wrap 920AR also includes a 6-Degree-of-Freedom Tracker, which tracks roll, pitch, and yaw as well as X, Y, and Z positioning in 3D space. The Wrap 920AR’s stereo camera assembly and 6-DoF Tracker will also be available separately for upgrading existing Wrap video eyewear. http://www.vuzix.com


Vuzix expands eyewear product line to include customized accessories Vuzix Corporation unveiled a new line of eyewear accessories for its Wrap series of video eyewear. Accessories include the Wrap VGA Adapter, Wrap 6 Degrees-of-Freedom Tracker, Wrap Power Sled, Wrap CV Recharge Pack, Wrap Style Lens options, and a Deluxe Carry Case. These new products transform the Wrap line into the first-ever pair of upgradeable video eyewear, offering consumers the ability to purchase a product that can adapt to a user’s specific needs. The new accessories line shown at CES includes the Wrap VGA Adapter, which connects any model of Wrap video eyewear to a desktop or laptop computer’s VGA port; with it, a single pair of eyewear can meet all in-home and mobile video viewing needs. The Wrap Power Sled provides power, protection, and convenience for the iPod touch or iPhone. The Wrap 6DoF Tracker with compass provides the ultimate in dead-reckoning tracking in a miniature 30x10x15-millimeter package; utilizing multiple magneto-resistive sensors, accelerometers, and gyros for high accuracy, it tells the user’s computer or mobile phone exactly where they are looking or moving, for immersive and interactive applications ranging from virtual and augmented reality to game playing. The Wrap Stereo Camera allows users to add twin cameras to their Wrap video eyewear, so they can see and record the real world in 3D. Connecting to computers and other devices via USB, users can view a combination of real-world and computer-generated data, and can even use the cameras to record their life in 3D with their computing device as the recorder. http://www.vuzix.com

Dolby reduces price for 3D glasses Dolby announced that it has reduced the price of its reusable 3D glasses. Dolby exhibitors can now purchase new 3D glasses at a list price of $17.00, reduced from US$27.50, making them even more affordable and cost-effective. Dolby reports that it has shipped more than 3,200 3D systems to over 400 exhibitor partners in 67 countries since introducing its 3D cinema technology just over two years ago. This growth in the number of Dolby 3D-equipped digital cinemas around the world has enabled the company to reduce the price of the glasses further. Dolby is also offering additional cost savings through new, bundled pricing for its standard Dolby 3D single-projector kit with up to 500 pairs of glasses, as well as a Dolby 3D bundle for its large-screen-solutions 3D kit with up to 1,000 pairs of glasses. Dolby’s 3D glasses are passive glasses that require no batteries or charging. The color-filter coating in the Dolby glasses allows specific wavelengths of light to reach each eye. http://www.dolby.com

Bit Cauldron demonstrates 3D glasses for home theater and gaming Bit Cauldron demonstrated its stereoscopic 3D shutter glasses, showcasing how consumers will soon get to enjoy high-fidelity 3D movies and games at home on their own televisions or computers. Bit Cauldron glasses incorporate fast, neutral-density lenses for a clearer, brighter picture than ever while preserving HDTV color and resolution. The glasses work together with AMD GPUs and use advanced IEEE 802.15.4 radios for a reliable connection from display to glasses. Bit Cauldron glasses will be available from major household brand names in the second half of 2010. http://www.bitcauldron.com

Khronos Group announces OpenGL 4 spec The Khronos Group announced two new OpenGL specifications. The headline release, OpenGL 4, includes a raft of new features bringing OpenGL in line with Microsoft’s Direct3D specification. OpenGL 3.3 was also released, providing as many of the new version 4 features as possible to older hardware. Direct3D 11 mandates support for complex programmable tessellation and compute-shader integration; although Khronos’ OpenCL specification provides a general API for GPGPU programming, it didn’t have the same integration into the graphics pipeline. The Khronos Group promotes OpenGL’s platform independence, in contrast to Direct3D’s Windows specificity. But even that benefit is diluted somewhat: though OpenGL is a fundamental technology in Mac OS X’s graphics stack, Apple hasn’t offered full OpenGL 3 support on its latest operating system, instead sticking to version 2.1 with a few extensions. NVIDIA promises OpenGL 4 support will coincide with the launch of its new Fermi GPUs. ATI/AMD has made no specific commitment, but support is likely to come sooner rather than later. http://www.khronos.org


HDMI Licensing releases version 1.4A, enabling 3D enhancements HDMI Licensing announced the release of HDMI Specification Version 1.4a featuring key enhancements for 3D applications including the addition of mandatory 3D formats for broadcast content as well as the addition of the 3D format referred to as Top-and-Bottom. The complete HDMI Specification Version 1.4a, along with the 1.4a version of the Compliance Test Specification (CTS), is available to Adopters on the HDMI Adopter Extranet. An extraction of the 3D portion of Specification Version 1.4a is available for public download on the HDMI Web site at http://www.hdmi.org. The purpose of the extraction document is to provide public access to the 3D portion of the HDMI Specification for those companies and organizations that are not HDMI Adopters but require access to this portion of the Specification. The HDMI Specification Version 1.4a provides a level of interoperability for devices designed to deliver 3D content over the HDMI connection. The mandatory 3D formats are:

For movie content:
- Frame Packing: 1080p @ 23.98/24Hz
For game content:
- Frame Packing: 720p @ 50 or 59.94/60Hz
For broadcast content:
- Side-by-Side Horizontal: 1080i @ 50 or 59.94/60Hz
- Top-and-Bottom: 720p @ 50 or 59.94/60Hz, or 1080p @ 23.97/24Hz

Implementing the mandatory formats of the HDMI Specification facilitates interoperability among devices, allowing devices to speak a common 3D language when transmitting and receiving 3D content. The mandatory requirements for devices implementing 3D formats are: Displays must support all mandatory formats; Sources must support at least one mandatory format; and Repeaters must be able to pass through all mandatory formats.
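That rule guarantees a common format between any compliant source and display. A minimal Python sketch of the matrix above makes the argument explicit; the data layout and function names are our own illustration, not anything from the HDMI specification or CTS:

```python
# Mandatory HDMI 1.4a 3D formats per content type, per the list above.
MANDATORY_3D = {
    "movie":     {("Frame Packing", "1080p@23.98/24Hz")},
    "game":      {("Frame Packing", "720p@50Hz"),
                  ("Frame Packing", "720p@59.94/60Hz")},
    "broadcast": {("Side-by-Side Horizontal", "1080i@50Hz"),
                  ("Side-by-Side Horizontal", "1080i@59.94/60Hz"),
                  ("Top-and-Bottom", "720p@50Hz"),
                  ("Top-and-Bottom", "720p@59.94/60Hz"),
                  ("Top-and-Bottom", "1080p@23.97/24Hz")},
}

# A compliant display must carry every mandatory format...
ALL_MANDATORY = set().union(*MANDATORY_3D.values())

def interoperate(source_formats: set, display_formats: set) -> bool:
    """True when the two devices share at least one 3D format."""
    return bool(source_formats & display_formats)

# ...so any compliant source (at least one mandatory format) always matches:
movie_player = {("Frame Packing", "1080p@23.98/24Hz")}
print(interoperate(movie_player, ALL_MANDATORY))  # True
```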

HDMI Licensing makes 3D portion of HDMI specification version 1.4 available for public download HDMI Licensing announced that it has made the 3D portion of the HDMI Specification Version 1.4 available for public download on the HDMI website at http://www.hdmi.org. The purpose of this document is to provide public access to the 3D portion of version 1.4 of the HDMI Specification for those companies and organizations that require access to this portion of the specification but have not executed an HDMI Adopter Agreement. The document available for download is extracted from version 1.4 of the HDMI Specification. The HDMI Consortium intends to release a 1.4a version of the HDMI Specification shortly which will include updates to the 3D portion of the Specification. As soon as the 1.4a version of the specification is published to adopters, an update to the 3D portion of the document, available for public download, will also be published. http://www.hdmi.org

CableLabs develops 3D test support and opens laboratory for 3D TV technology CableLabs has expanded support for development of 3D television technology. CableLabs is providing testing capabilities for 3D TV implementation scenarios over cable. These capabilities cover a full range of technologies including various frame compatible, spatial multiplexing solutions for transmission. Based upon an RFI issued by CableLabs in March 2009, CableLabs opened its test facilities for development and support to vendors and TV designers to explore interoperability with 3D cable delivery systems. As a result of these investigations, CableLabs has determined that many of the digital set-top boxes deployed by cable operators are capable of processing 3D TV signals in frame-compatible formats. Today’s new generation of 3D TV receivers is expected to support these formats using an HDMI video connection. It was through this testing that CableLabs played an influential role in the recently announced changes to the HDMI 3D specifications to add support for the “Top/Bottom” format and enable legacy STBs to signal 3D carriage. A “frame-compatible” 3D format is one that carries separate left and right video signals within the video frame used to convey a conventional (2D) high-definition signal by squeezing them to fit within the space of one picture. The advantage of such a format is that it can be delivered through existing plant and equipment as if it were a 2D HDTV signal. While the frame-compatible formats will enable support for stereoscopic 3D signaling almost immediately, work continues on an effort to define a long-term solution that will enable support for 3D content that can be delivered at resolutions and frame rates as high as 1080p60 for both eyes. http://www.cablelabs.com
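As an aside on the “squeezing” CableLabs describes, a frame-compatible encoder halves one dimension of each view so the stereo pair fits a normal 2D frame. The NumPy sketch below is a hedged illustration of side-by-side packing (plain column decimation; real encoders low-pass filter before subsampling, and a Top/Bottom packing would halve rows instead):

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Squeeze two full-resolution views into one 2D-sized frame by
    halving each view's horizontal resolution, then placing them
    side by side."""
    assert left.shape == right.shape
    half_left = left[:, ::2]    # keep every other column
    half_right = right[:, ::2]
    return np.concatenate([half_left, half_right], axis=1)

# A 1080p stereo pair packs into a single 1920x1080 frame that legacy
# set-top boxes can carry as if it were an ordinary 2D HDTV signal:
left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(pack_side_by_side(left, right).shape)  # (1080, 1920, 3)
```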


Revware Systems announces hand-held wireless 3D metrology system Revware Systems, the developer of CAD-driven reverse engineering software and manufacturer of the MicroScribe digitizer for touch and laser digitizing solutions, announced the worldwide release of MobiGage, a uniquely integrated metrology system for MicroScribe digitizers. The release is part of Revware’s strategic initiative to bring a broader catalog of affordable productivity tools to professionals who rely on real-time measurement. MobiGage is the first hand-held metrology application. Installed on an Apple iPhone or iPod touch, MobiGage uses wireless communication to manage data collection from one or more MicroScribes linked to a MobiBox Silver interface. No other computer is needed. MobiGage expands the use of MicroScribes from its base of modeling applications into the market for lower-cost metrology applications. MobiGage quickly captures a full range of part measurements or creates, edits, and runs repeatable measurement plans complete with reporting. The MicroScribe/MobiGage solution follows measurement-methodology industry standards that include NIST & PTB fitting, ANSI Y-14.5 GD&T, RPS alignments, and modifiable HTML, tabular, and AS9102 reporting. MobiGage requires an Apple iPod touch or iPhone, a MobiBox Silver wireless interface, and a MicroScribe G or M series portable digitizer. http://www.revware.net

Testronic Labs debuts new 3D test lab Testronic Laboratories announced the opening of its new 3D Test Lab. Set to launch in the first quarter of 2010, the Testronic 3D Test Lab will be housed in the company’s 1st Street Facility, located in Burbank, CA. The Blu-ray Disc Association’s (BDA) recent announcement of a finalized 3D enhancement to the Blu-ray specification has set the stage for the proliferation of 3D in the home this year. Testronic Labs has spent the past year consulting with the leaders in the field, including major studios, broadcasters, hardware manufacturers, 3D technology pioneers and authoring facilities, to prepare test plans and procedures for the new technology. With the cooperation of the company’s 3D supply chain partners, the equipment at the new Testronic 3D Test Lab will include pre-release 3D players and monitors. The facility will thoroughly test Blu-ray discs for the entire 3D and 2D viewing experience to ensure quality for the consumer. http://www.testronic.com

Verisurf unveils portable rapid 3D inspection solution Verisurf Software introduced Master3DGage – an affordable and portable rapid 3D inspection solution that enables machine shops to significantly increase production and improve part quality. The complete hardware/software solution automates the 3D inspection process and quickly verifies manufactured parts directly to 3D CAD models. Master3DGage™ integrates a Hexagon Metrology six-axis Portable CMM – one of the world’s most accurate – with Verisurf’s advanced 3D model-based inspection software. This complete solution delivers a precise, fully automated digital process to inspect directly to CAD models anywhere on the shop floor. First-article inspections are completed in minutes. http://www.Master3DGage.com

Eldim develops measurements for 3D displays with active glasses Time-sequential stereoscopic 3D displays are likely to be one of the more popular solutions for 3D TV in the near future, and optical characterization is mandatory for quality control and for comparison between the different technologies. Eldim recently presented a full working solution to characterize these displays: OPTIScope-SA allows full characterization and analysis of the temporal behavior of such displays, and UMaster with dedicated software allows full quality control. ELDIM’s OPTIScope-SA precisely measures the response times and luminance levels for each grey-level transition of 3D-ready LCD displays. Shutter-glasses transmittance and temporal behavior are also obtained. Thanks to dedicated grey-to-grey transition analysis software, the luminance levels seen by the left and right eyes through the shutter glasses can be computed, and grey-level variations due to response time and temporal synchronization can be deduced. ELDIM’s UMaster videocolorimeter – thanks to its telecentric objective, Peltier-cooled CCD sensor, and new generation of color filters – allows precise imaging measurements even in low-luminance conditions. Used with automatic software and dedicated grey-level patterns, it is the tool of choice for quality control of time-sequential stereoscopic 3D displays. Grey-level stability and temporal synchronization are quantified simultaneously. http://www.eldim.fr
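The headline quantity such instruments recover is how much of one eye’s image leaks into the other eye through the shutter glasses, usually reported as a crosstalk percentage. The sketch below uses the common generic definition, which is our assumption and not necessarily Eldim’s exact formulation:

```python
def crosstalk_percent(leak_lum: float, signal_lum: float, black_lum: float) -> float:
    """Stereo crosstalk: luminance leaking into the occluded eye,
    relative to the intended eye's signal, with the display's
    black level subtracted from both (all values in cd/m^2)."""
    return 100.0 * (leak_lum - black_lum) / (signal_lum - black_lum)

# Hypothetical shutter-glasses measurements:
print(crosstalk_percent(leak_lum=1.2, signal_lum=60.0, black_lum=0.2))  # ~1.7%
```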


3D e-book atlas, Lakes of the Sangres available on CD The “Lakes of the Sangres” in 3D is an atlas with over 200 color anaglyphs, viewable with the supplied red-cyan glasses. It is a 230-page e-book containing 206 aerial photographs and 21 USGS 7.5-minute TOPO maps, with an index of the lake and mountain names. The e-book shows the lakes in a 110-mile-long northern section of the Sangre de Cristo Mountains, entirely within Colorado. Embedded USGS topographic maps show every lake presented. Reading the files requires Adobe Acrobat Reader or any other Windows or Macintosh software that opens files in PDF format. The CD with one pair of red-cyan anaglyph glasses is available for $20, including US shipping. http://pikespeakphoto.com/sangres/sangre_lakes3d.html

UK researchers twist light into knots A branch of abstract mathematics inspired by the knots that occur in shoelaces and rope has been used by UK researchers to design holograms capable of creating knots in optical vortices. Physicists from the Universities of Bristol, Glasgow, and Southampton say that understanding how to control light in this way has important implications for laser technology used in a wide range of industries. Optical vortices can be created with holograms that direct the flow of light. The teams designed holograms using knot theory and were able to create knots in optical vortices. The hologram design required for the experimental demonstration of the knotted light shows advanced optical control, which undoubtedly can be used in future laser devices, the researchers say. The paper, “Isolated optical vortex knots”, was published online January 17 in the journal Nature Physics. http://www.bris.ac.uk

The colored circles represent the hologram out of which the knotted optical vortex emerges. Images: University of Bristol

Chinese researchers debut practical 3D holographic screen Researchers in China have designed what they believe to be the largest 3D color display with a holographic functional screen. The 1.8x1.3-meter screen provides continuous, natural 3D images, which the team hopes will not only make great viewing for consumers, but will also benefit applications in medicine, industry, the military, and advertising, reports optics.org (Optics Letters 34, 3803). Possible applications include creating realistic training environments for medical professionals and soldiers, as well as a new generation of digital billboards for outdoor advertising. The Chinese team adopts a straightforward approach to image acquisition and display. A total of 64 digital cameras are placed along a single axis to capture a 3D object from different viewing angles. This 3D data is then displayed using the same number of projectors arranged in the same configuration as the camera array. Each projector emits its data onto the holographic functional screen at its own angle. By mirroring the image-capture setup, the projectors can recreate the original 3D object. Since the magnification depends only on the ratio between the geometric sizes of the projector and camera arrays, the screen size is not limited by the usual constraints experienced by holographic technology.
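That scaling rule can be stated in one line. The sketch below is our reading of it, with made-up array spans, and shows why screen size is unconstrained: enlarging the replay only requires spreading the 64 projectors over a proportionally larger span than the 64 cameras.

```python
def replay_magnification(projector_span_m: float, camera_span_m: float) -> float:
    """Replayed scene scale, read from the quoted rule that magnification
    is the ratio of projector-array size to camera-array size."""
    return projector_span_m / camera_span_m

# 64 projectors spread over 3.2 m replaying a 1.6 m camera rig roughly
# doubles the scene (illustrative numbers, not from the paper):
print(replay_magnification(projector_span_m=3.2, camera_span_m=1.6))  # 2.0
```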


Panasonic unveils world’s first integrated full HD 3D camcorder Panasonic Corporation will release the world’s first professional, fully integrated full HD 3D camcorder in Fall 2010. The company will begin taking orders in April. In Panasonic’s new full HD 3D camcorder, the lenses, camera head, and a dual memory card recorder are integrated into a single, lightweight body. The camcorder also incorporates stereoscopic adjustment controls, making it easier to use and operate. The twin-lens system adopted in the camcorder’s optical section allows the convergence point to be adjusted. Functions for automatically correcting horizontal and vertical displacement are also provided. Conventional 3D camera systems require these adjustments to be made by means of a PC or an external video processor. This new camcorder, however, will automatically recalibrate without any need for external equipment, allowing immediate 3D image capture. The solid-state, memory-file-based recording system offers greater flexibility to produce full HD 3D videos in more challenging shooting environments. The camcorder is lighter and smaller than current 3D rigs, while providing the flexibility of handheld-style shooting. Setup and transportation are simplified, making it ideal for sports, documentary, and filmmaking projects. The right and left full HD video streams of the twin-lens 3D camcorder can be recorded as files on SDHC/SD Memory Cards, ensuring higher reliability than tape, optical disc, HDD, or other mechanical recording systems. This solid-state, no-moving-parts design will help significantly reduce maintenance costs, and the 3D camcorder will be better able to perform in extreme environments, with greater resistance to temperature extremes, shock, and vibration. http://www.panasonic.com

The world’s first “office photography machine” now shipping from Ortery Technologies Ortery Technologies introduced Photosimile 5000, the next-generation imaging device for the office. This PC-controlled desktop photography studio integrates a 28x28x28-inch light box (featuring 6500K daylight bulbs, an automated camera positioning system, and a built-in turntable) with a Canon SLR camera and powerful workflow software to simplify and automate business photography. With Photosimile 5000, anyone can create professional, shadow-free pictures ideal for web, print, and daily business communication. The camera and light box connect to a PC via USB. Photosimile 5000 software controls the studio, camera location, turntable movement, camera settings, picture taking, and processing workflows. In addition to still photos, Photosimile 5000 creates professional 360-degree and hemispherical Flash files. It’s fully automated, fast, and accurate. The turntable holds up to 25 lbs, and the system takes between 4 and 200 pictures per 360-degree rotation at 10 unique angles from 0 to 90°. Ortery Real3D, included in Photosimile 5000 for image composition, creates full spherical, hemispherical, and cylindrical animations with mouse control, image tagging, and deep-zoom capabilities to 14x. The resulting product animations are comparable to interacting with the physical product. The Custom Define feature allows users to create and re-use custom sequences of photos at different camera positions and turntable locations. http://www.ortery.com

Photosimile 5000: office photography and 3D photo machine solution

Holga releases 3D camera The Holga 120-3D Stereo camera comes with two lenses and captures two images at the same time to create a 3D image. The camera captures the images on 120 roll film (in contrast to the recent Fujifilm offering, which is digital). To look at the images in 3D, users will have to view them through a special slide viewer. Photos can also be developed from the film, though each print will have the same image printed twice, side by side. The Holga 120-3D Stereo camera is on sale now for under US$100. http://microsites.lomography.com/holga/


Boeing introduces 3D camera for military use The Boeing Company announced it has begun offering a new, compact, energy-efficient camera that provides three-dimensional images for military and commercial applications. Boeing Directed Energy Systems and wholly owned Boeing subsidiary Spectrolab have jointly developed the camera using their own research and development funding, and successfully tested it over the past two years by attaching it to mobile ground platforms and a Boeing AH-6 Little Bird helicopter. Equipped with advanced sensors that were developed by the Massachusetts Institute of Technology’s Lincoln Laboratory and transferred to Boeing under a teaming arrangement, the cube-shaped camera is one-third the size and uses one-tenth the power of most comparable 3D imaging cameras. The camera, which Boeing can customize for each customer, has many potential uses, including mapping terrain, tracking targets, and seeing through foliage. To create a 3D image, the camera fires a short pulse of laser light, then measures the pulse’s flight time to determine how far away each part of the camera’s field of view is. Boeing is currently integrating the camera into compact 3D imaging payloads on unmanned aerial vehicles and will be testing that capability this spring. The team will also add 3D video capability to the camera soon, to complement its existing still-image capability. http://www.boeing.com
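The ranging principle described is standard pulsed time-of-flight: distance is the round-trip flight time multiplied by the speed of light, divided by two. A generic illustration, not Boeing’s implementation:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_flight_time(round_trip_s: float) -> float:
    """Pulsed time-of-flight ranging: the laser pulse travels out and
    back, so the one-way distance is c * t / 2."""
    return C * round_trip_s / 2.0

# A return arriving 6.67 microseconds after the pulse fired is ~1 km away:
print(range_from_flight_time(6.67e-6))  # ~1000 m
```

Measuring this per pixel across the sensor array yields a full depth image from a single laser flash.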

Intermap Technologies’ 3D map data enables accurate planning for ICAO initiative Intermap Technologies announced that its high-resolution NEXTMap digital elevation data, currently available in numerous regions around the world including Europe and the United States, is being used to develop an advanced terrain and obstacle solution in advance of the ICAO’s eTOD (Electronic Terrain Obstacle Data) initiative. The process for collecting the NEXTMap dataset is currently undergoing certification by the US Federal Aviation Administration (FAA). NEXTMap data will ultimately enable an eTOD solution, which will help address cross-border harmonization issues in an effort to satisfy the need for uniformity and consistency in the provision of aeronautical information. Developed in concert with several European-based air navigation service providers, the solution can potentially reduce the risk of collision with terrain and other obstacles by satisfying new reporting and accuracy requirements within well-defined distances surrounding aerodromes throughout Europe. Available as early as Q2 2010, the solution will be compatible with existing planning tools and software, enable low upfront costs to the member states, and provide easy access to a seamless terrain and obstacle database that is centrally located. http://www.Intermap.com

Intermap Technologies and Hella partner to combine 3D maps and camera information Intermap Technologies announced a collaboration with Hella KGaA Hueck & Co., a leading provider of innovative driver assistance systems, regarding a predictive front-lighting system based on Intermap’s 3D road geometries. The partnership integrates Intermap’s high-resolution 3D road geometries, and information supplied by camera systems in an automobile, into Hella’s front-lighting demonstration system – ultimately providing a significant increase in visibility for drivers at night and during inclement weather by automatically directing the headlamp before the driver manually steers the vehicle into a bend or up and down a slope. Intermap has developed the world’s only database encompassing accurate 3D road geometry for every road in the United States and Europe. It has been demonstrated that certain vehicle functions, such as lighting, can be operated via an independent 3D map database, separate from onboard 2D navigation systems. This lighting application is the first of many ADAS and safety applications leveraging Intermap’s 3D maps. According to the Insurance Institute for Highway Safety, adaptive headlight systems are meant to improve visibility on curved roads. Accidents related to this type of driving account for 4% of front-to-rear, single-driver, and sideswipe same-direction crashes in the US – or approximately 143,000 per year. Adaptive headlight systems could help reduce the number of related fatal crashes from the nearly 2,500 that currently occur in the US every year. http://www.intermap.com/automotive http://www.hella.com

Magnetic 3D shows off autostereoscopic 3D display solutions at Super Bowl Magnetic 3D announced the recent deployment of glasses-free autostereoscopic 3D LCD displays in the “Suites of the Future” campaign at Sun Life Stadium for the Super Bowl held on February 7, 2010. The “Suites of the Future” showcased entertainment innovations that redefine the stadium experience for fans while generating additional advertising revenue channels for the franchise owners. The “Suites of the Future” enables sports and entertainment venues to target and deliver customized 2D and 3D video, promotional content, and relevant game-day information to virtually any display within a venue. For the deployment within Sun Life Stadium at the Super Bowl, Magnetic 3D installed its state-of-the-art “Allura” 3D Digital Signage product line to provide a sports entertainment experience unlike any seen before. The demonstration solution featured a 42-inch Allura screen, which is capable of displaying high-definition glasses-free 3D video and images while also offering backwards compatibility with traditional 2D using the versatile Magnetic 3D FuzionCast network player. Through the FuzionCast player, those in the suites were even able to watch 2D content on the same screens, providing a seamless 2D/3D viewing experience. The upgrade to glasses-free 3D content and the deployment of 3D content to the “Suites of the Future” screens was simplified using Magnetic 3D’s proprietary E3D auto-stereo file format, which uses the latest in image compression technology. http://www.magnetic3d.com

Patents for 3D home viewing boom in the past six years The Intellectual Property Solutions business of Thomson Reuters released a report that tracks unique inventions published in patent applications and granted patents from 2003 to 2009, identifying the areas showing the sharpest growth over the period. The findings include: 3D TV in the living room – between 2003 and 2008, patent activity in the 3D television space grew by 69%; breakthrough new technologies include lenticular lenses, which create a more natural 3D viewing experience without the need for special glasses. 3D photos – 3D photographic technology grew by 57% between 2003 and 2009 as the industry works to combat declines in other areas. 3D accessories – a great deal of 3D cinema innovation has less to do with movie production than with ancillary products; between 2003 and 2008, patent activity in the 3D cinema space grew by 45%, with the most attention going to projection systems, specialized glasses, cleaning apparatus, and registration systems for glasses. http://thomsonreuters.com

Patenting activity by company in 3D television category, 2008


Insight Media issues 3DTV forecast Insight Media issued a new report: 2010 3DTV Forecast Report: A Comprehensive Worldwide Forecast of 3D Television Unit Sales by Region and Technology. The report finds that nearly 50 million 3DTVs are expected to be sold in 2015, rising from 3.3 million 3DTVs in 2010. The forecast features Insight Media’s proprietary convergence method, in which Tops-Down and Bottoms-Up approaches are converged, reconciled, adjusted, and validated to produce the final forecasts. Expected, optimistic, and conservative forecasts are offered in the report, which provides 3DTV forecasts by region, with a technology breakdown for each region. The Tops-Down analysis includes a determination of the Total Available Market (TAM), adjustment for TVs over 30 inches, evaluation and selection of penetration rates to model 3DTV acceptance, and the development of intermediate, optimistic, and conservative forecasts. The Bottoms-Up approach includes a Consumer Expectations Analysis (technology dependent and technology independent), a Price-Performance-Competitiveness analysis, and a Market Development analysis. The report is available in PDF format as a company site license for $2,000. http://www.insightmedia.info/reports/20103dtvdetails.php

iSuppli estimates that 3D TVs will command a $600-700 price premium in 2010 According to iSuppli, the typical 3D TV will sell for about $1,768 in 2010. The researchers calculate that a $600-700 premium over regular LED-backlit LCD TVs will keep 3D out of the mainstream for the next few years. The limited availability of 3D content to watch, and uncertainties over the price and compatibility of the 3D glasses needed to view the material, will act as barriers to growth. iSuppli’s near-term forecast is more bullish than the one made by rival market watcher DisplaySearch earlier this year. DisplaySearch predicted that only 1.2 million 3D TVs will ship this year, rising to 15.6 million in 2013 and then to 64 million in 2018. iSuppli, on the other hand, estimates shipments will reach 78 million by 2015 after passing the 45 million mark in 2013 – a compound annual growth rate of 80 percent. The growth driver will be a sharp fall in average selling prices, according to iSuppli: from that average figure of $1,768 today, the price will pass below $1,000 in 2014 and reach $825 in 2015. http://www.isuppli.com
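The 80 percent figure is straightforward to sanity-check with the standard CAGR formula. The 2010 unit base below is our back-calculation from iSuppli’s 2015 number, not a figure from the article:

```python
def cagr(begin_units: float, end_units: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end_units / begin_units) ** (1.0 / years) - 1.0

# An 80% CAGR reaching 78M units in 2015 implies a 2010 base of roughly
# 78e6 / 1.8**5, i.e. about 4.1 million units:
print(f"{cagr(4.1e6, 78e6, 5):.0%}")  # 80%
```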


Animation with 3D printing It has been done before, with Coraline, but now a post from Creative Review walks us through how the title sequence for the Dutch TV program “Het Klokhuis” (“The Apple Core”) was created. The process involved printing numerous objects corresponding to frames of the sequence, which were then placed on a mini-stage and recorded. Animation was accomplished not only through traditional stop-motion techniques, but also through the use of ingenious, tiny motors. Agency KesselsKramer commissioned Johnny Kelly and artist Jethro Haynes to collaborate on this sequence, which recently aired for the first time. http://www.creativereview.co.uk/cr-blog/2010/january/3d-printing-in-animation

Organovo brings out first commercial 3D bioprinter Organovo has developed a research prototype of a bioprinter capable of producing very basic tissues like blood vessels. Invetech, Organovo’s strategic partner, will be providing the company with commercial versions of their device in 2010 to 2011. While it is still limited to simple tissue structures (full organs are a long way off), Organovo plans to deliver the printers to various research institutions interested in organ and tissue production. Working with these institutions, Organovo hopes to one day progress to creating a system that can print organs as easily as other 3D printers print plastic figurines. Organovo’s commercial version of the 3D bioprinter includes a design software package that would allow tissue engineers to simulate their constructions before they are printed. Two different heads on the printer allow for the scaffold (or support matrix, or hydrogel) to be applied separately from the living cells. Those cells can even be printed with micron precision thanks to a laser guidance system on the device. http://organovo.com

3D bioprinters could one day construct an organ from a person’s own cells as easily as printing a map (artist’s rendering)

Stratasys expands 3D printer line One year after introducing what has become the world’s best-selling 3D printer – the Dimension uPrint – Stratasys says it has expanded the product line with the uPrint Plus – an enhanced version with new features – while still keeping the price under $20,000. Like the Dimension uPrint personal 3D printer, the uPrint Plus has a small footprint for true desktop use (25x26 inch). uPrint Plus can print in eight colors of Stratasys ABSplus material, making it easier for designers to differentiate individual assembly components and better depict their product. The printer has a build envelope of 8x8x6 inches – 33% more volume than the uPrint, enabling larger models. uPrint Plus offers two resolution settings – 0.010 inch (0.254mm) and 0.013 inch (0.330mm) – to give users additional print options. uPrint Plus also features two support-material enhancements that reduce material consumption and modeling time. The first, Smart Supports, is a software enhancement that reduces material usage by 40%, cutting costs. The second, SR-30, is an improved soluble support material that dissolves 69% faster, to speed the modeling process. Smart Supports are available for both uPrint and uPrint Plus. uPrint Plus material colors include red, blue, olive, black, dark gray, nectarine, fluorescent yellow, and ivory. The new 3D printer will be available for shipment in March through authorized Stratasys resellers. http://www.DimensionPrinting.com

The new Dimension uPrint Plus personal 3D printer from Stratasys


HP to release a 3D printer Hewlett-Packard (HP) is creating a 3D printer with Stratasys: Stratasys will make the printer, which HP will release later in the year. The printer is designed for architecture and component prototyping. It will build 3D models layer-by-layer using ABS plastic, one of the most widely used thermoplastics in today’s injection-molded products. 3D printers allow users to evaluate design concepts and test models for functionality, form, and fit. http://www.hp.com

Axis Three enhances aesthetic consultations for facial procedures with new 3D scanner Axis Three, a pioneer of 3D surgical simulation tools for aesthetic consultations, announced the availability of its new XS-200 scanner, designed to capture high-quality, anatomically accurate 3D images of a patient’s face in order to simulate facial surgery outcomes. The XS-200 complements the company’s recently launched Face Simulation software module for nose, chin, cheek, and jaw procedures. Combined with Axis Three’s software package, the XS-200 enables surgeons and their staff to show patients, in 3D, how they will look post-surgery, redefining the aesthetic consultation experience for patients undergoing face procedures. The XS-200 is a desk-mountable unit specifically optimized to capture the face topology with a minimal hardware footprint. With a modular, plug-and-play design, the scanner is easy to install and attaches directly to the USB port of a surgeon’s computer. The unit incorporates three high-resolution imaging heads and a Color Coded Triangulation (CCT) algorithm, which enables precise 3D image capture within a compact hardware footprint. When combined with Axis Three’s 3D simulation software, the result is a more positive patient experience, greater patient satisfaction post-surgery, and a way for surgeons to quickly realize a return on their time and investment. The XS-200 hardware can address the full suite of 3D image capture required for invasive face simulations, including simulations for chin, cheek, nose, and jaw procedures, and non-invasive procedures, including dermabrasion, chemical peels, facial fillers, and Botox. The XS-200 and face simulation module, along with all planned updates and upgrades, are available for $22,750. http://www.axisthree.com

RIEGL introduces new airborne laser scanner RIEGL Laser Measurement Systems GmbH, a manufacturer of 3D laser scanners, announced its latest state-of-the-art airborne laser scanner, the LMS-Q680i, with an unmatched laser pulse repetition rate of 400kHz, providing an effective measurement rate of up to 266,000 coordinates per second. The new LMS-Q680i offers the established industry-leading echo digitization for in-depth full-waveform analysis, now smoothly combined with multiple-time-around signal processing. This combination allows the user to benefit from the high pulse rate even at high flight altitudes, and thus to achieve high measurement densities on the ground. A high scan rate of up to 200 lines per second at a constant 60-degree field of view provides an evenly distributed point pattern of highest resolution for various applications such as city modeling, power line monitoring, and even large-area and flood-plain mapping. http://www.riegl.com

Multiple-target capability shown by red dots and by the digitized echo signal. Green background: terrestrial laser scan of the same tree. On the right is a high point density (>50 pts/m²) achieved with the RIEGL LMS-Q680i flown on a fixed-wing airplane (altitude 550m AGL, speed 90 knots)


Aperio collaborates with ADCIS on advanced computer-assisted stereology module Aperio Technologies, a global leader in digital pathology for the healthcare and life sciences industry, in partnership with ADCIS, (Advanced Concepts in Imaging Software) of Normandy, France, has introduced a stereology software module for the quantitative evaluation of complex 3D biological structures that originate from 2D sections, built on Aperio’s digital pathology platform. Stereology is an effective alternative to complex image processing algorithms when structures of interest are too intricate, staining is problematic, or there is too much variability between slides or within the same slide. Stereology is used in a wide variety of applications including neuropathology, pulmonary and kidney diseases, diabetes, and cancer. The stereology module is distributed by Aperio and is fully compatible with the company’s ImageScope digital slide viewing software. Aperio currently has an installed base of more than 500 systems in 31 countries, including more than two-thirds of the top 15 rated US hospitals, leading academic medical centers and reference laboratories, and 14 of the top 15 pharmaceutical companies. http://www.aperio.com

Determine the sensitivity and specificity of your image analysis applications by comparing the image analysis results (red markup) to a quickly computed, unbiased and robust estimate using the ADCIS stereology tools (blue grid).

Better X-ray machines image fossils in 3D The world’s most powerful X-ray machines are a by-product of high-energy physics: synchrotron particle accelerators are helping shed light on some of the world’s rarest fossils. Last spring, a synchrotron helped scientists create a 3D image of a 300-million-year-old brain. Synchrotrons use magnetic and electric fields to send electrons careening along a circular path; the process radiates X-rays that can be used to illuminate structures as small as atoms. In March 2009, scientists from France and the US announced they had X-rayed the remnants of a brain inside an iniopterygian fossil from Kansas. Iniopterygians are extinct relatives of modern ratfishes, also known as “ghost sharks” or chimaeras. The team was using the synchrotron at the European Synchrotron Radiation Facility in Grenoble, France, to study the rare 3D fossil (most are squashed flat), and the researchers noticed that part of the fish’s head was denser than normal. Using X-ray holotomography – holographic mapping, basically – they realized they were looking at fossilized brain tissue. Along with studying fossils, accelerators can be used to search for priceless works of art: some art historians believe Leonardo da Vinci’s greatest painting is hidden inside a wall in Florence’s city hall.

A 3D reconstruction of the iniopterygian braincase viewed from the side after a synchrotron acquisition in absorption contrast (green = braincase; red = endocranial cavity; orange = brain).

VisiSol inks multi-year deal with Japanese consumer electronics company I-TEC VisiSol announced they have inked a multi-year exclusive deal with I-TEC for sales and distribution of the QTEC brand of consumer electronics products in North America and Europe. Under the agreement, VisiSol will handle the full-line of QTEC products including the “glasses-free” 3D display, Blu-ray players, 3D Blu-ray players, Digital Audio Players, CD Ripping Device and other consumer electronics products under development in Japan. http://www.visisol.net


Smart Holograms announces an interactive consumer authentication sensor Smart Holograms announced the commercial availability of its new Verif-EYE bio-optical, interactive visual sensor that allows consumers to “self-validate” their purchases prior to consumption or use, to ensure the product is genuine and tamper-free. It has wide application in consumer products, including prescription and over-the-counter (OTC) pharmaceuticals, food, and cosmetics, which continue to be subject to counterfeiting and tampering at an alarming rate. Verif-EYE shows a visual holographic image/color that transforms into a different image/color upon detection of human breath or a drop of water, depending on how the sensor is programmed. Holographic images or messages can be unique to each product. Smart Holograms claims to be the world’s leading technology company in the development of and applications for optically programmable holographic sensors and related materials. Based in Cambridge, UK, the company develops solutions for customers working in the consumer protection, brand & product protection, and medical solutions industries worldwide. http://www.smartholograms.com

New display from Sunny Ocean Studios enables 3D without special glasses The technology supplier Sunny Ocean Studios makes it possible for the first time to enjoy a 3D experience from 64 different perspectives without additional visual aids. The 27-inch monitor and its upstream optical system enable “glasses-free” 3D images. The company already possesses the technology needed to quickly and inexpensively make large displays, up to 100 inches, 3D-capable. In Singapore, Sunny Ocean Studios is currently developing the world’s first 3D cinema in which the audience will no longer require any special glasses. At CeBIT the company presented this technology for the cinema of the future in a mobile 27-inch format. By using 64 individual frames for the different perspectives in each 3D image for the first time, the company can achieve significantly improved 3D quality. http://www.sunny-ocean.com

Niles Creative Group creates world’s largest daylight-viewable 3D display with Barco The Comcast Experience in Philadelphia is the world's largest indoor four-millimeter LED display, composed of 6,771 Barco NX-4 LED modules. For Comcast’s 2009 Christmas Spectacular, the Niles Creative Group took the 10-million-pixel Barco LED wall into the next dimension with the production of the world’s first daylight-viewable 3D experience. The 18-minute 3D Christmas spectacular ran hourly through the end of the year. For David Niles, founder of the Niles Creative Group, the task was challenging on many levels: how to produce a show in 3D at five times the resolution of high-definition television, how to switch the LED wall between 2D and 3D playback, and how to achieve the level of brightness and contrast required to properly display 3D in broad daylight. To solve the technical challenge of compositing a dual stereoscopic image into the single anaglyphic image that the LED wall requires, Niles partnered with Sirius 3-D, the creators of the ColorCode 3-D system. “They've perfected a blue-yellow anaglyphic method of encoding which is quite spectacular, and which preserves colorimetry,” explained Niles. “Even in broad daylight, the results on Comcast’s existing, unmodified 4mm wall are beautiful.” For the challenge of switching the wall between 2D and 3D playback modes, the video system itself at The Comcast Experience accomplishes the task. “Using our content delivery system, we were able to program changes into the way the wall operates,” said Niles. “When the Christmas Spectacular is running, we increase the wall’s output level to make it optimum for anaglyphic viewing. Then, as soon as the 3D Christmas show is over, the wall returns to its normal viewing levels.” http://www.nilescreative.com http://www.comcast.com
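ColorCode’s colorimetry-preserving encoding is proprietary, but the basic compositing step Niles describes – folding a left/right stereo pair into a single anaglyph frame – can be sketched in a few lines. A minimal Python/NumPy illustration, assuming simple channel routing rather than ColorCode’s actual amber/blue math (function name and routing are illustrative only):

    import numpy as np

    def anaglyph_amber_blue(left, right):
        # left, right: HxWx3 uint8 RGB frames of the stereo pair.
        # Toy channel routing: amber (red+green) from the left eye,
        # blue from the right eye; real ColorCode encoding is proprietary.
        out = np.empty_like(left)
        out[..., 0] = left[..., 0]   # red   <- left view
        out[..., 1] = left[..., 1]   # green <- left view
        out[..., 2] = right[..., 2]  # blue  <- right view
        return out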


ITU maps out 3D TV from stereoscopic to holographic The International Telecommunications Union (ITU) has established a far-reaching roadmap for 3D TV technology, though it admits it will take at least 20 years for its more advanced notions to be realized. The ITU’s roadmap defines the next three generations of 3D TV, starting with today’s stereoscopic technology and going through to holographic images. The focus of the ITU Radiocommunications Study Group 6 is stereoscopic 3D, in which separate left- and right-eye HD images are transmitted alternately and special glasses ensure the correct eye sees the correct picture. Today’s 3D TV pictures don’t change when you move your point of view, but the ITU said the next generation of the technology will enable just that. It will require the transmission of multiple 3D images in parallel, with each view selected as the playback system tracks the motion of the viewer. Looking further ahead, the ITU defined the third generation of 3D TV technology as “systems that record the amplitude, frequency and phase of light waves, to reproduce almost completely human beings' natural viewing environment”. Since any hologram can be represented mathematically, it can also be stored and transmitted as data. http://www.itu.int

French surgeons practice virtual surgery on 3D model of a patient’s liver A new process has been developed that allows 3D scans of a patient's liver to be made prior to surgery. These 3D scans let the surgeon see the actual anatomy of the patient's liver rather than hoping the anatomy matches what was described by Couinaud in 1957, the classical anatomy of the human liver. 3D modeling has shown that as much as 50% of the human population has a liver anatomy different from what the classical description would lead a surgeon to believe. Project Odysseus was developed to form a 3D image of a person's liver and its vasculature to allow surgeons to train before a surgery; the modeling also lets the surgeon see how the liver is segmented. The software developed for the project includes virtual patient modeling, which enables patient-specific pre-operative assessment, and virtual planning software that allows navigation and tool positioning to be done in 3D on any multimedia computer. France Telecom has also developed a communication system for the project called Argonaute, software that allows doctors and specialists in several locations to consult on the same images simultaneously. The simulated surgery gear is called the unlimited laparoscopic simulator (ULIS), and the robotic surgery simulator (SEP Robot) is the part of the project that adds the physical properties of texture and tissue resistance to the virtual surgery. http://www.odysseus-project.com

Quartics and DDD demonstrate 3D on electronics devices Quartics and DDD Group are collaborating to bring the most compelling and cost-effective 3D technology yet to HDTVs and netbooks by optimizing Quartics’ Qvu video processor with DDD’s TriDef 3D software technology. The combined solution is compatible with virtually any 3D display technology, including passive polarized and active shutter glasses. It supports the decoding of a wide variety of original 3D content formats, including those used in Blu-ray and broadcast 3D applications. Devices equipped with the Qvu 3D features can automatically convert any existing 2D HD content to 3D from Blu-ray, social media sites, games consoles and more. Qvu is a programmable SoC solution that provides a “Beyond HD” level of video quality. Qvu consumes a minimal amount of power, significantly improving battery life and viewing time, while offering a single platform that can be repurposed across multiple applications to handle all HD video processing tasks in a consumer electronics device. Qvu is ideal for netbooks, laptops, IP set-top boxes and HDTVs. http://www.DDD.com http://www.quartics.com

Frog Design showcases Canesta’s 3D sensing technology To create a truly memorable experience at the official opening party for the South by Southwest (SXSW) Interactive Conference taking place now in Austin, Texas, Frog Design chose to use Canesta Inc.'s leading 3D sensor technology as a key element of the augmented reality-themed event. Frog's use of Canesta's technology for SXSW focused on an exploration of interactive artistic and video game environments. Frog created an experience incorporating large projections of dynamic visualizations of the environment's depth, as well as letting the user be the controller in a series of rotating, retro arcade-style games. Users could be turned into “Mario” in “Mario Land,” the paddle of “Breakout,” or the spaceship in “Galaga”. Thanks to projection onto a 12x9-foot screen, the user was completely immersed in the visualization. http://www.frogdesign.com


Jon Peddie Research reports astounding year-to-year growth in PC graphics Jon Peddie Research (JPR) announced its estimated graphics chip shipments and suppliers’ market shares for Q4'09. Shipments for the year came in above expectations, with 14% year-to-year growth – an amazing comeback in a year of retrenchment and recession.

Year                              2004    2005    2006    2007    2008    2009    2010    2011
Total graphics chips (millions)  239.0   269.4   316.5   351.7   373.0   425.4   544.0   600.1
Annual growth                    10.1%   12.7%   17.5%   11.1%    6.1%   14.0%   27.9%   10.3%

Table 1: Graphics chip shipments and annual growth rates, 2004 to 2011
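The growth row is simply the year-over-year ratio of the shipment row; a quick Python check, with the figures copied from Table 1:

    # Recompute the annual growth row of Table 1 from the shipment figures.
    shipments = {2004: 239.0, 2005: 269.4, 2006: 316.5, 2007: 351.7,
                 2008: 373.0, 2009: 425.4, 2010: 544.0, 2011: 600.1}
    for year in range(2005, 2012):
        growth = shipments[year] / shipments[year - 1] - 1.0
        print(f"{year}: {growth:+.1%}")   # 2009 prints +14.0%, as reported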

Intel was the leader in Q4'09, elevated by Atom sales for netbooks as well as strong growth in the desktop segment. AMD gained in the notebook integrated segment, but lost some market share in discrete in both the desktop and notebook segments due to constraints in 40nm supply. Nvidia picked up a little share overall; its increases came primarily in desktop discretes, while it slipped in desktop and notebook integrated.

Vendor    Q4’09 share   Q3’09 share   Unit growth Qtr-Qtr   Q4’08 share   Growth Yr-Yr
AMD           19.9%         20.1%            13.6%             19.3%          91.5%
Intel         55.2%         53.6%            17.9%             47.7%         114.7%
Nvidia        24.3%         25.3%            10.2%             30.6%          47.3%
Matrox         0.0%          0.0%            66.7%              0.1%         -16.7%
SiS            0.0%          0.3%           -81.8%              1.1%         -92.5%
VIA/S3         0.6%          0.7%            -3.9%              1.2%          -9.5%
Total        100.0%        100.0%            14.7%            100.0%          85.7%

AMD reported revenue of $427 million from its graphics segment for the quarter, up 40% sequentially, and the segment’s operating income of $53 million was a substantial improvement from the prior quarter. Intel reported chipset and other revenue of $1.877 billion in Q4. Nvidia’s fiscal quarter straddles the calendar quarters: the company reported revenues of $903 million for its fiscal Q3’10, which ran from August to the end of October; its next quarter ends in January. Q4’09 also saw the first shipments of a new category, CPU-integrated graphics (CIG). With the advent of new CPUs with integrated or embedded graphics, we will see a rapid decline in shipments of traditional chipset graphics, or IGPs (integrated graphics processors). http://www.jonpeddie.com

AMD updates Catalyst graphics driver AMD has introduced updates to the ATI Catalyst graphics driver meant to improve the game-running performance of ATI GPUs. The first update, version 10.2, is available now, and the second, 10.3, is scheduled for release in March. The short time between releases reflects AMD’s strategy of providing regular driver updates that deliver incremental performance boosts to Radeon graphics processors. Version 10.2 changes how Catalyst handles game profiles, making it possible for AMD to offer on its website a separate executable file providing optimal performance for a new game. In addition, the update makes it possible to apply AMD’s CrossFireX multi-GPU technology when Radeon customers are running multiple displays, a feature AMD calls Eyefinity. Version 10.3 represents the first of the monthly Catalyst updates AMD plans to release for Windows Vista and Windows 7 laptops that feature ATI Mobility Radeon HD 2000, HD 3000, HD 4000, and HD 5000 series graphics processors. The driver upgrade will offer a wizard for adjusting display layout when using multiple monitors, to compensate for the viewing area sometimes hidden at the edge of a screen. In addition, version 10.3 provides support for stereoscopic 3D gaming by allowing independent left and right images at 120Hz. AMD added Windows 7 support to ATI Catalyst in March 2009 with the release of version 9.3; that update enabled developers to fully utilize the DirectX 10.1 application programming interface in Microsoft's latest operating system. http://www.amd.com


AMD announces Open Stereo 3D Initiative ATI announced its Open Stereo 3D Initiative as a way to work with more partners to ensure support for multiple stereoscopic 3D solutions. The idea of the open initiative is to offer consumers more choice when selecting a 3D solution, encourage innovation, and help lower hardware and software costs, easing wider adoption. ATI (AMD) and its partners will reportedly soon announce a lineup of 3D products, including 3D-enabled ATI Eyefinity technology (to counter Nvidia’s 3D Vision Surround), 120Hz 3D-ready displays and notebooks, active shutter glasses and passive polarized glasses, S3D support for DirectX 9, 10 and 11, quad-buffered OpenGL, and Blu-ray 3D support. According to the presentation, the 3D-gaming middleware partners are DDD and iZ3D, with ArcSoft and CyberLink covering Blu-ray 3D support. ATI is also going to work on establishing standards that, if adopted by others, will improve compatibility and widen choices when building a computer and stereo 3D setup. http://www.amd.com

Sigma Designs integrates SENSIO 3D format decoder into new generation of 3D media processors SENSIO Technologies announced that Sigma Designs has selected the SENSIO 3D format decoder for integration into its next generation of media processors. A leading provider of highly integrated, high-performance system-on-chip (SoC) solutions for entertainment and control throughout the home, Sigma is licensing SENSIO 3D technology to allow set-top box, AV receiver, HDTV, and Blu-ray player manufacturers to provide their customers with an optimal 3D viewing experience. SENSIO 3D is a distribution technology that enables the broadcast and storage of stereoscopic (3D) content for home viewing, transforming the conventional home theater experience into a rich, immersive stereoscopic experience. Delivering true 1080p HD picture quality, the SENSIO 3D format meets the exacting demands of high-performance 3D systems, including the latest HDTVs, receivers, and set-top boxes. http://www.sensio.tv http://www.sigmadesigns.com

Sigma showcases new range of 3D technologies Sigma Designs recently demonstrated 3D video capability that takes advantage of advanced 3D video algorithms and content from RealD to show stereoscopic output on a 3D-equipped television. This 3D video platform is based on Sigma’s SMP8644 media processor combined with Sigma’s GF9452 VXP video processor, which outputs full HD left-eye/right-eye interleaved images. The demonstration provided image enhancement of the 3D content using proprietary VXP detail enhancement and adaptive contrast enhancement algorithms. In a second demonstration, Sigma showcased a robust 3D graphics platform that operates over the OpenGL ES interface to render high-performance 3D imaging. It is based on Sigma’s latest silicon, which offers a new 3D engine featuring a tile-based architecture with a universal shader engine supporting multi-threaded operations with both pixel and vertex shading. It is capable of rendering 16 million polygons/second, pixel fill rates of 500 million pixels/second, and texture element fill rates of over 100 million texels/second. http://www.sigmadesigns.com


NVIDIA introduces 3DTV Play NVIDIA launched a companion app for home theater PCs. 3DTV Play gives any PC with a modern, 3D Vision-capable GeForce card the ability to output a true 3D image to any TV with both 3D support and HDMI 1.4 inputs, including new Panasonic VIERA plasmas as well as LCDs from Samsung, Sony and others. The system works with any set's existing active shutter glasses and doesn't need a format change for games or movies. The graphics chip maker is joining Panasonic on a 3D TV tour starting today in New York City, and plans to sell 3DTV Play for $40 to those connecting directly to a TV for the added depth; those who already bought the GeForce 3D Vision kit for desktop viewing can get the software for free. http://www.nvidia.com/object/3D_TV_play.html

Latest Unreal Engine 3 will have official 3D Vision support NVIDIA and Epic Games have announced official support for NVIDIA's stereoscopic 3D technology, 3D Vision, in the latest Unreal Engine 3. The announcement means that licensees of Unreal Engine 3 will be able to take full advantage of integrated 3D Vision support, offering games that perform well when played in stereoscopic 3D mode. The popular Unreal Development Kit (UDK), a free version of Unreal Engine 3, will also offer support for 3D Vision. The updated versions of Unreal Engine 3 and the UDK will be available in the near future, allowing games developed with that revision (or later) to deliver support for 3D Vision. http://www.udk.com

RealView delivers stereoscopic pseudo-3D on PSPs for $40 The V-Screen makes pseudo-3D possible in PSP games, although the science behind the technology has long been known to PC gamers. The product will retail for around $40. When you slide a PSP into the case, apparent depth is added to the images, and the screen appears larger without sacrificing brightness or resolution. The effect is particularly apparent in racing games, and it adds an impressive dimension to both “Gran Turismo PSP” and “MotorStorm: Arctic Edge”. The marketing materials describe it as a “depth-enhancing” screen that is “similar to a 3D experience”. Similarly, by adding a Fresnel lens to a standard monitor, it is possible both to magnify the image and to add the illusion of depth to the picture. There are companies that sell oversized Fresnel lenses for this purpose, and simulation gamers have long known about the effect. This led many racing and flight simulation enthusiasts to create DIY Fresnel lens setups, including the monstrous three-monitor, three-lens setup used for racing games (see photo). The size of the lens, as well as the distance it is kept from the screen, determines the strength and efficacy of the effect. What RealView Innovations has done is create a Fresnel lens of the perfect size for a PSP.

A do-it-yourself Fresnel lens setup for racing games
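The optics behind the trick can be sketched with the thin-lens equation: placing the screen just inside the lens's focal length produces a magnified virtual image that appears to sit farther back than the physical screen, which reads as added depth. A rough Python illustration with made-up focal length and spacing (neither RealView nor the DIY builders publish these figures):

    # Thin-lens approximation of the Fresnel depth trick. With the screen at
    # distance d_obj inside the focal length f, the lens forms a magnified
    # virtual image behind the screen.
    def fresnel_effect(f, d_obj):
        d_img = f * d_obj / (d_obj - f)   # negative => virtual image
        magnification = f / (f - d_obj)
        return abs(d_img), magnification

    distance, mag = fresnel_effect(f=0.30, d_obj=0.20)   # illustrative values
    print(f"image appears {distance:.2f} m behind the lens, {mag:.1f}x larger")
    # -> image appears 0.60 m behind the lens, 3.0x larger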


PlayStation 3 to receive 3D upgrade this summer The Sony PlayStation 3 will be getting the 3D treatment for its gaming and Blu-ray platforms this summer, according to Pocket-lint.com. The summer timeframe coincides with the planned firmware upgrade rollout for Sony’s compatible Bravia LCD TVs as well as the Blu-ray players announced – the BDP-S470, BDP-S570 standalone players and BDV-E770W and BDV-E570 Blu-ray home theater systems. Sony is also developing stereoscopic 3D game titles for the PS3, to be released separately at some point this year. http://www.sony.com

ARM, Movial and ST-Ericsson use Qt in next-gen 3D demo Movial and ST-Ericsson have used the Qt application and UI framework to showcase the latest in next-generation set-top boxes. This first-ever implementation of the Qt application and UI framework on the ST-Ericsson U8500 mobile platform, using the energy-efficient high performance of the ARM Cortex-A9 and ARM Mali graphics processor technologies integrated by Movial, delivers a Web-connected and socially networked set-top box user experience. The demonstration combines communication from mobile phones and social networking from netbooks with traditional TV content, enabling users to browse the Web, connect on Facebook, use widgets and watch videos from their personalized video wall. http://www.movial.com http://www.stericsson.com http://qt.nokia.com

Disney releases 3D texture mapper source code Ptex, Walt Disney Animation Studios' cutting-edge 3D texture mapping library, which was first used on nearly every surface in the 2008 animated feature “Bolt”, has been released under the BSD license. Quoting the announcement: “We expect to follow Ptex with other open source projects that we hope the community will find beneficial. We will soon be launching a new Walt Disney Animation Studios Technology page under http://disneyanimation.com. It will include links to our open source projects as well as a library of recent publications.”

TDVision Systems and CyberLink announce full HD stereoscopic video decoding TDVision Systems Inc. and CyberLink showcased their full HD 3D solutions. CyberLink’s PowerDVD video player software, integrated with TDVision’s 2D+Delta decoding technology, delivers full HD 3D video decoding capabilities. The 2D+Delta format, invented and patented by TDVision Systems, makes use of redundancies between the left and right stereoscopic views. This is accomplished by encoding a full-resolution 2D view and only the delta difference information between the left and the right views. This difference is stored in the video stream in a format that updated decoders and 3D displays can play out in any 3D format at the highest quality possible, while legacy 2D televisions and decoders play the stream in 2D, making the system fully backward compatible and display agnostic. The 2D+Delta format is also known as TDVCodec and is a key part of the Multiview Video Coding (MVC) codec (ISO/IEC 14496-10:2008, Amendment 1), an extension to ITU-T H.264 (AVC) recently adopted by the Blu-ray Disc Association for the “Blu-ray 3D” specification. TDVision is also partnering with Nvidia to provide a premium 3D experience in households around the world. http://www.cyberlink.com http://www.tdvision.com
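Stripped of the real codec machinery (motion-compensated inter-view prediction, entropy coding, and so on), the 2D+Delta idea reduces to shipping one full view plus a residual. A toy NumPy sketch of that principle – not TDVision's actual bitstream format:

    import numpy as np

    def encode_2d_plus_delta(left, right):
        # Ship the left view as an ordinary 2D stream; legacy decoders
        # simply play it. The delta carries only what the second eye needs.
        base = left
        delta = right.astype(np.int16) - left.astype(np.int16)
        return base, delta

    def decode_3d(base, delta):
        # Updated decoders rebuild the right view from base + delta.
        right = np.clip(base.astype(np.int16) + delta, 0, 255).astype(np.uint8)
        return base, right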

Japan becomes 11th jurisdiction to grant foundation patent for PureDepth interactive MLD applications PureDepth announced that another of its foundation patent applications has been granted in Japan. Patent P002JP, which was filed by PureDepth with an international priority date of August 19, 1999, broadly covers a process for interacting with data or images on a multi-level screen display using an input device such as a mouse, keyboard, joystick, touch screen, pen, stylus, or voice activation system. The newly allowed patent is the 83rd awarded to PureDepth, and has now been granted in 11 jurisdictions, including the US, New Zealand, and select EU member states. PureDepth’s MLD displays contain two or more layers of display panels placed in front of one another in a single monitor, allowing for 3D effects. This latest patent covers moving information and visual content from plane to plane, across the various display layers of an MLD, a process that can greatly improve the usability of software for interactions such as gaming, drawing, content creation, data analysis, information access, and more. By manipulating images between screens, developers can create visual links between objects on different planes or create an illusion of three-dimensional movement. The patent covers usage with a wide array of displays, including LCDs, plasma screens, e-paper and more. http://www.puredepth.com


Microsoft shows full 3D XNA games on Windows phone Microsoft recently showed off a pair of 3D games running on a Windows Phone prototype using the new XNA Game Studio 4.0. The two titles are The Harvest, a touch-controlled dungeon crawler with destructible environments being developed by Luma Arcade, and Battle Punks. Microsoft touted the ease of its Direct3D development platform, which was built by the same folks responsible for the first-gen Xbox. XNA Game Studio 4.0 lets developers work on games for Windows Phone 7 Series and Windows PCs. The integration with Visual Studio 2010 allows developers to build a single project and then make slight modifications to let it run on each platform. Microsoft specifically mentioned that 4.0 will include hardware-accelerated 3D APIs for Windows Phone 7 Series. http://msdn.microsoft.com/en-us/library/bb203897.aspx

StudioGPU ships real-time non-linear 3D workflow and rendering software StudioGPU continues to streamline 3D workflow with MachStudio Pro 1.4 software, offering new features and functionality for real-time 3D workflow and rendering performance. The software newly features such things as physically-based cameras, the ability to export directly into layered Adobe Photoshop files, a materials ID renderer, procedural ramp texture generation on projected lights, and more. MachStudio Pro allows artists, designers, engineers, directors, and technical directors (TDs) to work with 3D lighting, camera views, and multi-point perspectives in an interactive non-linear fashion for real-time high fidelity views as they will appear in the final rendered format. http://www.studiogpu.com

Cognex launches 3D vision software library Cognex announced 3D-Locate, a library of 3D vision software tools that expands application possibilities in vision-guided robotics, assembly, and inspection. Cognex 3D-Locate delivers accurate, real-time, three-dimensional position information that enables automation equipment to work with a wider variety of parts, including items that are stacked or tilted. Using 3D-Locate can improve vision performance for challenging applications such as logistics, robot-guided de-palletizing, and precision assembly, and it can eliminate the need for expensive mechanical fixtures or measurement devices. Cognex 3D-Locate can also be used in combination with Cognex code reading, gauging, and inspection tools. Cognex 3D-Locate uses multiple sets of two-dimensional features found by Cognex’s patented geometric pattern matching tool, PatMax, to determine an object’s precise three-dimensional orientation. PatMax tolerates non-uniform lighting and remains reliable even when patterns are partly covered, ensuring accurate part location even in the most challenging settings and conditions. Cognex 3D-Locate can also use input from other Cognex location tools such as SearchMax and PatFlex to help determine part orientation. Application performance is enhanced by high-precision Cognex calibration tools that adjust for optical distortion and camera position, and synchronize cameras with moving elements like robot grippers – key to the success of any 3D application. PC-based 3D-Locate can handle high-throughput applications, and users have the option to choose from a wide range of industrial cameras. http://www.cognex.com
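Cognex does not publish PatMax's internals, but the last step it describes – turning matched feature sets into a precise three-dimensional orientation – ultimately reduces to a rigid pose fit. A generic least-squares (Kabsch) sketch in Python/NumPy, offered as an illustration of that final step rather than Cognex's actual algorithm:

    import numpy as np

    def rigid_pose(model_pts, observed_pts):
        # Find rotation R and translation t with observed ~= R @ model + t,
        # in the least-squares sense (Kabsch/Procrustes alignment).
        mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
        H = (model_pts - mc).T @ (observed_pts - oc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = oc - R @ mc
        return R, t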

Cyberhus interactive 3D installation opens in Denmark The Women’s Museum in Aarhus, Denmark, recently opened a new interactive 3D installation that deals with the inner thoughts of teenagers about love, loneliness, health and so on. The installation is based on 3D graphics projected on a large screen. The audience wears 3D glasses and navigates the three rooms that make up the content using a console controller. The installation is an extension of an online meeting place entitled Cyberhus that reaches out to vulnerable children in need of help and emotional support from peers and trustworthy adults. Navigating the 3D spaces lets the user explore common rooms such as a kitchen or a bathroom. At certain points, dialogues and monologues are presented to the user, revolving around typical problems that youngsters deal with, and sometimes the rooms transform in unrealistic ways to accent a certain mood in their thoughts. The installation was created as a collaboration between Kvindemuseet, Cyberhus, Signe Klejs, CAVI, Niels Gade, Linda Klein and several adults and youngsters. http://www.kvindemuseet.dk


Acer introduces NVIDIA 3D-ready projectors Acer America introduced two new NVIDIA 3D Vision-Ready video projectors. The three-dimensional experience is made possible by a combination of the projectors’ DLP projection capabilities, high refresh rates and NVIDIA 3D Vision technology. As a result, the flat surface of any wall can be transformed into a 3D screen.

 Home theater enthusiasts will enjoy video, game content, photos and more at an incredible new level of realism and immersion using the new Acer H5360 projector. Delivering HD-ready 720p (1280x720) resolution, the Acer H5360 boasts the latest technology for a truly unsurpassed video projection experience, with a 50-120Hz vertical refresh rate. An HDMI port provides a seamless connection to the latest digital sources, ensuring exceptional high-definition video and audio from Blu-ray Disc as well as DVD sources. The Acer H5360 projector is available now for U.S. customers at leading retailers for an MSRP of $699.00.

 Delivering bright colors and crisp images, the Acer X1261 projector features advanced lamp technology with illumination of up to 2500 ANSI lumens, a high 3700:1 contrast ratio and a vertical refresh rate of 50-120Hz. Its native XGA (1024x768) resolution and 4:3 aspect ratio are ready for presentations, photos, multimedia, and more. The projector can also be adjusted to a 16:9 aspect ratio for video content such as that from Blu-ray Disc and DVD. It is available now for U.S. customers at leading retailers for an MSRP of $579.00.

Both new Acer projectors – the Acer H5360 and Acer X1261 – deliver a realistic 3D viewing experience when combined with NVIDIA 3D Vision technology, which transforms traditional 2D images into stunning 3D. NVIDIA 3D Vision is a combination of an NVIDIA 3D Vision-compatible computer and graphics card and a 3D Vision kit that includes wireless active-shutter glasses and advanced software that can transform hundreds of PC games into a 3D experience. The lightweight glasses, which can be worn over regular eyeglasses, provide up to 40 hours of 3D entertainment on a single charge. http://www.nvidia.com/object/3D_Vision_Main.html http://www.acer-group.com

Acer launches 3D, home automation-ready projector Acer’s new S5200 projector includes a 120Hz refresh rate and full support for 3D when paired with a compatible graphics card (and 3D glasses), along with built-in support for Crestron's home automation system, which allows operation from a distance. The projector only manages a standard XGA resolution, so Acer is pitching it more at classroom use than at home theater. http://www.acer.com

ViewSonic delivers 3D-ready projection ViewSonic recently introduced the PJD6531w projector, featuring the latest Texas Instruments DLP Link technology as well as NVIDIA's 3D Vision technology to deliver what the company says is projection optimized for 2D and 3D performance. ViewSonic says the projector, with its native 1280x800 resolution, can work for commercial or home theater applications. It delivers 3,000 lumens, a 3,200:1 contrast ratio and a 120Hz frame rate, and could even work well as a secondary-room projector for the home. An Eco Mode extends lamp life to 6,000 hours, and there are integrated 10W speakers. The projector is priced at $799. http://www.viewsonic.com/


TI shows off 3D phone display At the Mobile World Congress, TI showed off a tablet-sized device with a 3D display that doesn't require glasses, running on an existing TI OMAP3 chipset. The company also promised high-def 3D movies with its new OMAP4 chips. The 3D demo showed images and video in 3D by using a standard 120Hz LCD with a special overlay film from 3M that can direct images toward either the left or the right eye. By flickering two images very quickly – each running at 60 frames per second – the display transmits a different picture to each eye, creating a simulated 3D image. The 3D picture can be created using a handheld with dual 3-megapixel cameras and an 800MHz TI OMAP 3630 chipset, all components that are available today. At any time, the display can switch back to 2D. The new OMAP4 chipset announced at the show supports “dual 720p”; with dual cameras on the front of OMAP4 phones, TI will be able to record 3D images as well. http://www.ti.com
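In time-sequential stereo like TI's demo, the arithmetic is simple: a 120Hz panel alternating eyes gives each eye 120/2 = 60 frames per second. A minimal sketch of the display ordering (the frame objects and eye-steering are stand-ins; the real panel driver and 3M film handle this in hardware):

    PANEL_HZ = 120

    def display_order(left_frames, right_frames):
        # Yield (eye, frame) pairs in the order a 120Hz panel shows them;
        # the directional film steers alternate refreshes to alternate eyes.
        for left, right in zip(left_frames, right_frames):
            yield "L", left
            yield "R", right

    per_eye_fps = PANEL_HZ / 2    # -> 60.0, matching the article's figure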

Spatial View and UVPHACTORY bring glasses-free 3D to the iPhone Spatial View and UVPHACTORY (UVPH) have joined forces to debut the first-ever glasses-free 3D music video available for the iPhone. UVPH designed and directed the video, entitled “Drown in the Now”, for The Crystal Method. “Drown in the Now” features a performance by Matisyahu and is taken from The Crystal Method’s recent Grammy-nominated album “Divided by Night”. After UVPHACTORY created the video in stereo 3D, Spatial View adapted it for use with its 3DeeSlide, a mobile iPhone and iPod Touch accessory based on Spatial View’s proprietary lenticular technology. http://www.spatialview.com/uvph

Apple shows a patent application for a 3D interface for mobile devices A recent patent application details a user interface for interacting with three-dimensional objects. The described UI may show the future of interaction on the iPhone or the Apple iPad. The patent application, filed last year but published in early January, describes a number of multi-touch gestures to manipulate objects, including icons, presented in a simulated 3D space. Such gestures could give users a simplified and intuitive way to interact with increasingly complex mobile devices. “As portable electronic devices become more compact, and the number of functions performed by a given device increase, it has become a significant challenge to design a user interface that allows users to easily interact with a multifunction device,” according to the patent application.

Prime Time in Ottawa February 18-19, 2010, Ottawa, Ontario

In this report, Neil Schneider covers the 3D-related events at the Prime Time in Ottawa Conference – a national networking event for Canada’s business leaders, decision-makers and policy experts in the television, film and interactive media production industry.

by Neil Schneider

Neil Schneider is the President & CEO of Meant to be Seen (mtbs3D.com) and the newly founded S-3D Gaming Alliance (s3dga.com). For years, Neil has been running the first and only stereoscopic 3D gaming advocacy group, and continues to grow the industry through demonstrated customer demand and game developer involvement. His work has earned endorsements and participation from the likes of Electronic Arts and Blitz Games Studios, and he has also developed S-3D public speaking events with the likes of Crytek, Epic Games, iZ3D, DDD, and NVIDIA. Thanks to these tireless efforts, mtbs3D.com is by far the largest S-3D gaming site in existence, and has been featured in GAMEINFORMER, Gamecyte, Wired Online, Joystiq, Gamesindustry.biz, and countless other sites and publications.

In mid-February, the S-3D Gaming Alliance had the honor of being a panel speaker at the Canadian Film and Television Production Association conference, Prime Time in Ottawa. This is Canada’s leading event for those working in film, television, and new media. However, this is only part of the story. One of the motivations for this year’s conference was to encourage 3D content development here in Canada, and we were invited to create a 3D showcase to demonstrate S-3D gaming.

The S-3D Gaming Demo at Prime Time in Ottawa

One of the benefits of being just a (long) drive away from Ottawa is we could bring most of our 3D equipment with us. For demonstration purposes, we brought an iZ3D monitor, an NVIDIA GeForce 3D Vision / Samsung monitor combo, and a Zalman interlaced monitor too. I regret that we didn’t unpack the Zalman, but XpanD had a spare DLP HDTV and shutter glasses combo, so we used our third computer to show some big screen gaming on PC!


Let’s just say that 3D gaming was a huge success. When people sat down in front of the computers for the first time and put on the glasses, they immediately understood what all the excitement was about.

I think the high point was the visit by the Canadian Radio-Television and Telecommunications Commission (CRTC). For those unfamiliar, the CRTC is responsible for regulating Canada’s airwaves and media pipelines. They determine everything from which channels get broadcast, to what your local telephone area code is going to be. They are one of Canada’s most influential government bodies.

After our 3D discussion panel, the event organizers told us that the head of the CRTC was so impressed with what was shared, he wanted a closed room demonstration of the 3D showcase later that day.

Well, we had some tough choices to make! Which games do you show to impress the CRTC? We went with: Fallout 3 and Unreal Tournament 3 on the DLP HDTV, Left4Dead 2 on the NVIDIA solution, and Call of Duty: Modern Warfare (the original) on the iZ3D monitor.

Before I continue this story, it’s important to remember that as diversified as the CRTC is, they have a very professional, conservative, and formal image attached to them. We really had no idea how they would react to the video games and what their expectations would be.

Once the CRTC board sat down at the computers, they all had a great time! We enjoyed listening to them give each other instructions on how to play the games ("You see? You have to SHOOT the zombies! No! You are doing it wrong. SHOOT the zombies!")

One of the CRTC delegates explained that she recently had surgery on her eyes that made one effective for distance viewing, and one for up-close viewing. I had heard of this surgery before, and it brought up an issue our industry has yet to consider – do we need special viewing options or apparatus for people with unique visual needs? Very interesting…

One of the challenges we had with the exhibit was that there was only one shutter glasses prototype available for sharing between ourselves and XpanD’s 3D education exhibit across the room. It was a non-issue, though, because everyone waited patiently to try things out. It turns out that Ray Sager, Executive Producer for Event Horizon Media, is a huge Fallout 3 fan. He came back to the exhibit a few times so he could try it out in 3D for the first time. I’m told it was worth the wait!

I regret not getting a picture, but I also met Jonathan Barker, President and CEO of SK Films. Jonathan is responsible for the movie Bugs!. For those unfamiliar, Bugs! is a “fly on the wall” capture of what it’s like to be an insect, with a focus on butterflies and the praying mantis. Narrated by Judi Dench, it is a golden film that is best seen in stereoscopic 3D.

I don’t want to ruin the movie for you, but not all the butterflies survive the film. What intrigued me was the flak SK Films received because of this. While it is very natural for a praying mantis to gobble up a butterfly, because the mantis was male and the butterfly was female, SK Films was accused of promoting the image of men preying on women. Something like that, anyway. This is just Bugs!, people!

Bugs! movie image


On the other side of the room, Michael Williams from XpanD was exhibiting samples of educational material in 3D. One of the videos was about osteoporosis. He was also showing his collection of 3D pictures taken with his Fuji S-3D camera on a DLP HDTV. It looked very good on the big screen, I’ll say that! I’ve asked XpanD to send me some pictures of the material if they can, so I will share ASAP.

One last point: The Prime Time in Ottawa conference was indeed a paperless conference. Everyone had to have either an iPhone or an iPod Touch so they could download the proceedings, delegate lists, everything. When I first heard about this, I was less than pleased because I didn’t yet own the device, and it sounded like a pain in the neck.

Now that the conference is over, I take it all back. It was awesome! You just had to download the application from iTunes, it refreshed itself as the conference moved ahead, and it really saved the inconvenience of fumbling through papers and carrying heavy books. I’m sure a few trees are thankful too.

The only caveat is that it required a specific device or brand of devices. Next time round, it would be good to see a web-based application that everyone can access via WiFi without a specific device type.

http://www.veritasetvisus.com

We strive to supply truth (Veritas) and vision (Visus) to the display industry. We do this by providing readers with pertinent, timely and exceptionally affordable information about the technologies, standards, products, companies and development trends in the industry. Our five flagship newsletters cover:

 3D displays  Display standards  High resolution displays  Touch screens  Flexible displays

If you like this newsletter, we’re confident you’ll also find our other newsletters to be similarly filled with timely and useful information.

Tangible, Embedded and Embodied Interaction Conference January 25-27, 2010, Cambridge, Massachusetts

In the first of two reports, Phillip Hill covers presentations from New York University/Umeå University, JST/Carnegie Mellon University, MIT Media Lab, MIT Media Lab/InSitu, and Rensselaer Polytechnic Institute

Relief: A Scalable Actuated Shape Display Daniel Leithinger and Hiroshi Ishii, MIT Media Lab, Cambridge, Massachusetts

“Relief” is an actuated tabletop display that can render and animate three-dimensional shapes with a malleable surface. It allows users to experience and form digital models like geographical terrain in an intuitive manner. The tabletop surface is actuated by an array of 120 motorized pins, which are controlled with a low-cost, scalable platform built upon open-source hardware and software tools (Figure 1). Each pin can be addressed individually and senses user input like pulling and pushing. Creating tangible interfaces that are not only coupled with digital information as mere static physical objects, but can also be actuated, is an emerging topic in HCI research. An example of this domain is shape displays, which can change their physical appearance to provide a haptic 3D experience in addition to graphic output. While a number of approaches to creating shape displays have been proposed, most are complex to build and therefore often unavailable to the HCI community. “Relief” uses commercially available components and open-source hardware and software to provide a comparably low-cost, scalable platform, which can be used for creating prototypes with a variety of form factors and application domains. The display is able both to render shapes and to sense user input through a malleable surface, which is actuated by an array of electric slide potentiometers. The first example application of the system features geospatial exploration. Figure 2 depicts the display rendering a 3D model of a landscape, while the landscape texture is projected onto a Lycra surface covering the pins.

Figure 1: Relief system with uncovered aluminum pins; Figure 2: Relief system with pins covered with Lycra surface and top projected landscape
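The control loop implied here is straightforward: drive each pin toward a height sampled from the model, and treat a large discrepancy between the commanded and sensed (potentiometer) position as the user pushing or pulling. A hypothetical Python sketch; the grid layout, pin travel, and input threshold are illustrative assumptions, not Relief's actual parameters:

    import numpy as np

    PIN_TRAVEL_MM = 100.0   # assumed actuation range per pin

    def pin_targets(heightmap, rows=12, cols=10):
        # Resample a 2D heightmap (values 0..1) onto the 120-pin array,
        # returning a target extension in mm for each pin.
        r = np.linspace(0, heightmap.shape[0] - 1, rows).astype(int)
        c = np.linspace(0, heightmap.shape[1] - 1, cols).astype(int)
        return heightmap[np.ix_(r, c)] * PIN_TRAVEL_MM

    def user_input(target_mm, sensed_mm, threshold_mm=3.0):
        # Pins whose potentiometer reading departs from the commanded
        # position beyond the threshold register as pushes or pulls.
        return np.abs(sensed_mm - target_mm) > threshold_mm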

Objects in Play: Virtual Environments and Tactile Learning Lillian Spina-Caza, Rensselaer Polytechnic Institute, Troy, New York

When creating technology environments for children, consideration needs to be given to how touch, gesture, and physical interaction impact play and learning. This is particularly important for video games or educational software appealing to young people with different learning styles. Children who are tactile learners are frequently

left out of the design equation. New approaches to tangible design can address this imbalance. Animal Wrangler, a prototype of a PC-platform videogame the author co-designed for an experimental game design course, demonstrates that objects children encounter in the physical world – everyday playthings – can also be used to enrich virtual play. The next step is to develop the game prototype for dissertation research and gather data to help identify potential benefits of mixed-reality play for learning, development, and children’s overall well-being. Three game levels were built for the prototype. In the tutorial, a player learns how to use physical objects or “wrangler” tools to bait, startle, round up, and capture starfish on the Great Barrier Reef in the videogame. In level one, the player must capture the Red Fox, a feral animal endangering small rodents and marsupials like the Bandicoot. When playing through level two (Figure 1), Cane Toads in the northern part of Australia can be captured and removed before they poison fish, freshwater crocodiles, and egrets. To instigate gameplay, children use 3D wooden shapes (Figure 2) designed both to appeal to tactile learners and to represent the sorts of objects that might be found in a child’s toy box. Each physical shape has a distinct purpose and correlates to a graphical object, but the two are not identical: in different game levels, physical objects become different virtual objects. For example, a red wood circle covered with netting becomes a black net in the game. In the Cane Toad level the Swiss cheese object or “lure” becomes a cookie embedded with flies to trap toads, while in the Red Fox level it is a steak. Each wooden shape is also textured and/or has materials attached to it to enhance its touchability (i.e. buttons, googly eyes, felt, etc.).

Figure 1: Cane Toad level; Figure 2: Wooden shapes

Interactive Paper Devices: End-User Design and Fabrication Greg Saul, JST, Tokyo, Japan; Cheng Xu and Mark D. Gross, Carnegie Mellon University, Pittsburgh, Pennsylvania

The paper describes a family of interactive devices made from paper and simple electronics: paper robots, paper speakers, and paper lamps. The researchers developed construction techniques for these paper devices and the Paper Factory software with which novice users can create and build their own designs. The process and materials support DIY design and could be used with low-cost production and shipment from an external service.

The paper begins by describing three varieties of paper devices that illustrate the scope of the researchers’ design and manufacturing domain. It then outlines the materials, components, and fabrication process employed to make these products. It next describes the Paper Factory computational environment, used to design 3D objects from paper by employing an evolutionary design method. It concludes with a discussion of how this particular set of materials and methods points at more general themes in DIY design and manufacture of interactive objects, and a list of directions for future work. Figure 1 shows the three kinds of paper devices made: all employ similar materials and construction, but each highlights different aspects. Paper robots emphasize interaction and physical actuation; paper speakers illustrate how non-visual attributes such as sound can be features of a paper device; and paper lamps emphasize physical form, aesthetics, and visual effects.


Figure 1: Three kinds of paper devices: Left: Paper robot “Sleepy Box”. Center: Paper speaker. Right: Paper lamp.

A designer begins with an initial 3D model loaded from an STL file or simply with a cube. At each design iteration a 3D model of a candidate design is generated (either by the user or automatically by the program), its physical properties are simulated, and it is either retained or discarded by the natural selection module. At each stage, the designer can develop the form of the design by making changes to the geometric model by hand or by applying the “evolve design” option. When the designer requests evolution, Paper Factory iteratively varies the design, placing each “child” variation next to its parent. The program subjects the design to a physics simulation to determine how it would behave, e.g., whether its structure would fail if made from paper. Additionally, the design is tested against a list of designer-specified rules such as: Am I taller than my parent? Do I use less material? Can I stand? Could I be produced? A design that passes these tests is kept and a new iteration is made. For example, one image of a lamp resulted from a fitness function set to grow a taller lamp while wasting the least paper. The designer can turn rules off and on to influence the evolution of the final form. If a design has desirable or interesting attributes the designer can select it and instruct the application to make tangent iterations or evolutions. A desirable form can be selected and the unfolded 2D pattern of the 3D object exported to a PDF file for printing.

g-stalt: a Chirocentric, Spatiotemporal, and Telekinetic Gestural Interface Jamie Zigelbaum, Alan Browning, Daniel Leithinger, and Hiroshi Ishii, MIT Media Lab, Cambridge, Massachusetts; Olivier Bau, InSitu, Orsay, France

This paper presents g-stalt, a gestural interface for interacting with video. g-stalt is built upon the g-speak spatial operating environment (SOE) from Oblong Industries. The version of g-stalt presented here is realized as a three- dimensional graphical space filled with over 60 cartoons. These cartoons can be viewed and rearranged along with their metadata using a specialized gesture set. g-stalt is designed to be chirocentric, spatio-temporal, and telekinetic. The g-stalt gestural interface allows users to navigate and manipulate a three-dimensional graphical environment filled with video media. They can play videos, seek through them, re-order them according to their metadata, structure them in various dimensions, and move around the space with 4 degrees of freedom (3 of translation, and 1 of rotation in the transverse plane). The videos are displayed on large projection screens and metadata for the videos is arranged on the projection surface of a table.

The g-stalt interface


Texturing the “Material Turn” in Interaction Design Erica Robles, New York University, New York, New York; Mikael Wiberg, Umeå University, Umeå, Sweden

The researchers say that advances in the creation of computational materials are transforming our thinking about relations between the physical and digital. This paper characterizes this transformation as a “material turn” within the field of interaction design. Central to theorizing tangibility, the researchers advocate supporting this turn by developing a vocabulary capable of articulating strategies for computational material design. By exploring the term “texture”, a material property signifying relations between surfaces, structures, and forms, they demonstrate how concepts spanning the physical and digital benefit interaction design.

They highlight the case study of the Icehotel, a spectacular frozen edifice. The site demonstrates how a mundane material can be re-imagined as precious and novel. By focusing on the texture of ice, designers craft its extension into the realm of “computational materiality”. Tracing this process of aligning the physical and digital via the material and social construction of textures speaks back to the broader field of interaction design, the researchers say. It demonstrates how the process of crafting alliances between new and old materials requires both taking seriously the “materialities” of both, and then organizing their relation in terms of commonalities rather than differences. The result is a way of speaking about computational materials through a more textured point of view. In the Icehotel X configuration, two walls covered with LED pixels formed an 8m long and 2.4m high interactive wall. These “pixel walls” contained bulbs mounted in a grid and spaced 5cm apart. The resulting low-resolution display (160x48 pixels) was designed specifically to support abstract images and animations (see Figure). The developers smoothed the raw pixels by covering them with a 4mm opaque plastic film. The resulting installation appeared as a continuous wall of light rather than an isolated piece of equipment. With two pixel walls and an ice-based interior in place, they began aligning the materials into a desired texture. An early version of the project proposed creating an immersive representation of the north, but the scheme was rejected in favor of a strategy centered on composing the LED light with the qualities of ice. An iterative design process, carried out by engineers, light designers, and film editors, produced a final composition of abstract animation and ice that communicated the spirit of Icehotel by blending ice and computational power. Bringing the raw ice to life meant crafting an environment that evoked the wilderness of the north, including common tropes like open fires, northern lights, blizzards, and glowing stars.
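The quoted wall geometry is self-consistent: resolution is just wall size divided by bulb pitch. A two-line check in Python:

    # Pixel-wall resolution follows from the wall dimensions and 5cm bulb pitch.
    wall_w_m, wall_h_m, pitch_m = 8.0, 2.4, 0.05
    cols, rows = int(wall_w_m / pitch_m), int(wall_h_m / pitch_m)
    print(cols, rows)   # -> 160 48, matching the stated 160x48-pixel display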

The paper concludes by discussing how texture generates research directions at the intersection of computing, materials science, architecture, and art. Ultimately, crossing the final divide between atoms and bits requires finding the language to describe both traditional and new materials as part of a common world. Organizing design in terms of these properties might help us understand when we need to invent something new, and when a mundane, ubiquitous substance as simple as water provides exactly what we need.

Figure 2: Animation frames from the Icehotel X installation


SIGGRAPH Asia December 16-19, 2009, Yokohama, Japan

Phillip Hill covers papers from Purdue University, MIT Media Lab/Brown University, Massachusetts Institute of Technology/University of Southern California/Adobe Systems/University of Washington/ Princeton University, IIT Delhi/KAUST/ETH Zurich, and The Hong Kong University of Science and Technology/Nanyang Technological University

The Graph Camera Voicu Popescu, Paul Rosen, and Nicoletta Adamo-Villani, Purdue University, West Lafayette, Indiana

A conventional pinhole camera captures only a small fraction of a 3D scene due to occlusions. This paper introduces the graph camera, a non-pinhole camera with rays that circumvent occluders to create a single-layer image that simultaneously shows several regions of interest in a 3D scene. The graph camera image exhibits good continuity and little redundancy. The graph camera model is literally a graph of tens of planar pinhole cameras. A fast projection operation allows rendering in feed-forward fashion, at interactive rates, which provides support for dynamic scenes. The graph camera is an infrastructure-level tool with many applications. The paper explores the graph camera's benefits in the contexts of virtual 3D scene exploration and summarization, and in the context of real-world 3D scene visualization. The graph camera allows multiple video feeds to be integrated seamlessly, which enables monitoring complex real-world spaces with a single image. Most 3D applications rely on the planar pinhole camera (PPC) model to compute images of a 3D scene. However, the PPC model has limitations such as a uniform sampling rate, a limited field of view, and a single viewpoint. In this paper the researchers address the single-viewpoint limitation. The graph camera is a graph of PPC frusta constructed from a regular PPC through a sequence of frustum bending, splitting, and merging operations. Despite the camera model's complexity, a fast 3D point projection operation allows rendering at interactive rates. The paper explores three applications. The most direct application is the enhancement of navigation in virtual 3D scenes. Instead of being limited to a single viewpoint, the user benefits from an image that integrates multiple viewpoints, which allows viewing multiple scene regions in parallel, without having to establish direct line of sight to each scene region sequentially. The enhanced navigation enabled by the graph camera promises to reduce the time needed to find static targets and to greatly increase the likelihood that dynamic targets – moving or transient – are found. In Figure 1, the user position is shown in red (bottom left). The graph camera image lets the user see up to as well as beyond the first street intersections.

Figure 1: Enhanced virtual 3D scene exploration. The graph camera image (top) samples longitudinally the current street segment as well as the three segments beyond the first intersections (bottom left). The four side streets are occluded in conventional images (bottom right).


Another graph camera application is in the context of 3D scene summarization, where the goal is to take an inventory of the representative parts of a scene in a visually eloquent composition. A graph camera can be quickly laid out so as to sample any desired set of scene parts, producing a quality summarization image at a fraction of the time costs associated with previous techniques. Finally, the graph camera is also useful for visualizing real-world scenes. A physical implementation is obtained by assigning a video camera to each of the PPCs in the graph camera. The result is a seamless integration of the video feeds, which enables monitoring complex real-world spaces with a single image. Unlike individual video feeds, the graph camera image is non-redundant and mostly continuous. In Figure 2 monitoring the hallway is facilitated by the graph camera image, which bypasses the need to monitor individual video feeds sequentially. Moreover, the moving subject is easier to follow in the graph camera image, which alleviates the jumps between individual video feeds. The graph camera is a departure from the conventional approach of using a simple and rigid camera model in favor of designing and dynamically optimizing the camera model for each application, for each 3D scene, and for each desired view. The researchers foresee that this novel paradigm will benefit many other applications in computer graphics and beyond.

Figure 2: Single-image comprehensive visualization of real-world scenes. The graph camera image (left) seamlessly integrates three video feeds (right) and shows all three branches of the T-corridor intersection.
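For readers unfamiliar with the building block, the planar pinhole camera that the graph camera bends, splits, and merges is the standard projective model. A minimal NumPy sketch under the usual intrinsics/extrinsics conventions (the paper's frustum-chaining and fast graph-camera projection are not reproduced here):

    import numpy as np

    def ppc_project(K, R, t, points_3d):
        # Project Nx3 world points through one planar pinhole camera.
        # K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
        cam = points_3d @ R.T + t          # world -> camera coordinates
        hom = cam @ K.T                    # camera -> homogeneous image coords
        return hom[:, :2] / hom[:, 2:3]    # perspective divide -> pixel coords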

3D Polyomino Puzzle Kui-Yip Lo and Hongwei Li, The Hong Kong University of Science and Technology, Hong Kong, China; Chi-Wing Fu, Nanyang Technological University, Singapore

This paper presents a computer-aided geometric design approach to realizing a new genre of 3D puzzle, the 3D Polyomino puzzle. The researchers base their puzzle pieces on the family of 2D shapes known as polyominoes in recreational mathematics, and construct the 3D puzzle model by covering its geometry with polyomino-like shapes. They first apply quad-based surface parametrization to the input solid and tile the parametrized surface with polyominoes. Then they construct a non-intersecting offset surface inside the input solid and shape the puzzle pieces to fit inside a thick shell volume. Finally, they develop a family of associated techniques for precisely constructing the geometry of individual puzzle pieces, including a ring-based ordering scheme, a motion space analysis technique, and a tab-and-blank construction method. The final completed puzzle model is guaranteed to be not only buildable, but also interlocking and maintainable. With these techniques, one can create 3D puzzle models that are not only tangible, but also buildable and playable. The researchers employed rapid prototyping to construct a physical puzzle model for the BUNNY puzzle (see illustration) as a concrete demonstration.

Left: a completed BUNNY puzzle. Right: an image sequence (a-d) showing the building of BUNNY from bottom to top.


Dynamic Shape Capture using Multi-View Photometric Stereo
Daniel Vlasic, and Ilya Baran, Massachusetts Institute of Technology, Cambridge, Massachusetts; Pieter Peers, and Paul Debevec, University of Southern California, Marina del Rey, California; Wojciech Matusik, Adobe Systems, San Jose, California; Jovan Popović, University of Washington, Seattle, Washington; Szymon Rusinkiewicz, Princeton University, Princeton, New Jersey

The paper describes a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, the researchers performed multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes that would have been impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60Hz. They employed a variant of photometric stereo to compute per-camera and per-pixel normal information. This requires an active illumination setup, for which they use the system built by Einarsson and colleagues (2006). This lighting device consists of the top two-thirds of an 8-meter, 6th-frequency geodesic sphere with 1,200 regularly spaced individually controllable light sources, of which 901 are on the sphere and the rest are placed on the floor. A central area is reserved for the subject. They capture dynamic performances at a 1024x1024 resolution with eight Vision Research V5.1 cameras. The cameras are placed on the sphere around the subject, at an approximate height of 1.7 meters relative to the central performance area. An optional ninth camera looks down onto the performer from the top of the dome. The performances are captured at a constant rate of 240fps, and the geometry is acquired at an effective rate of 60fps. Figure 1 shows the capture setup, with two selected cameras marked in red and the performance area marked in green.
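
The per-pixel normal recovery at the heart of such a system can be shown in a few lines. The sketch below is a textbook least-squares photometric stereo solve under assumed Lambertian reflectance, known directional lights, and no shadows; the dome's actual spherical lighting patterns and the authors' variant are more sophisticated.

```python
# Classic least-squares photometric stereo (a simplified stand-in).
import numpy as np

def normals_from_shading(images, lights):
    """images: (k, h, w) intensities; lights: (k, 3) unit light directions.
    Solves I = L @ (albedo * n) per pixel in the least-squares sense."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w) scaled normals
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-9)                 # unit normals
    return n.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat surface lit from three axis directions.
L = np.eye(3)
imgs = np.stack([np.full((2, 2), v) for v in (0.0, 0.0, 1.0)])
n, rho = normals_from_shading(imgs, L)   # normals come out as +z
```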

Figure 1: The acquisition setup consists of 1200 individually controllable light sources. Eight cameras (of which two are marked in red) are placed around the setup aimed at the performance area (marked in green). An additional ninth camera looks down from the top of the dome onto the performance area.

The researchers have experimented with a number of matching metrics to perform the correspondences, two based on images alone and two that rely on the integrated surfaces. They compared these four metrics on a variety of datasets, as illustrated in Figure 2. As expected, the illumination-stack metric yields better surfaces than simple image-based matching, but, due to the wide baseline, still produces wrong matches. The projected illumination metric further improves the reconstruction, but exhibits artifacts in shadowed regions. On the whole, they have found that the surface-based metric usually yields the least noisy surfaces.

Figure 2: Results (surfaces and normals) of several matching strategies on two data sets. The surface-based metric yields the overall best results, especially in regions that lack texture or are shadowed. This is visible in the first row by comparing the reconstructed legs. In the second row, only the surface-based metric does not push the waist inward.


BiDi Screen: A Thin, Depth-Sensing LCD for 3D Interaction using Light Fields
Matthew Hirsch, Henry Holtzman, and Ramesh Raskar, MIT Media Lab, Cambridge, Massachusetts
Douglas Lanman, Brown University, Providence, Rhode Island

The researchers have transformed an LCD into a display that supports both 2D multi-touch and unencumbered 3D gestures. The “BiDirectional” (BiDi) screen, capable of both image capture and display, is inspired by emerging LCDs that use embedded optical sensors to detect multiple points of contact. The paper’s key contribution is to exploit the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality. The method switches between a display mode showing traditional graphics and a capture mode in which the backlight is disabled and the LCD displays a pinhole array or an equivalent tiled-broadband code. A large-format image sensor is placed slightly behind the liquid crystal layer. Together, the image sensor and LCD form a mask-based camera, capturing an array of images equivalent to that produced by a camera array spanning the display surface. The recovered multi-view orthographic imagery is used to passively estimate the depth of scene points. Two motivating applications are described: hybrid touch-plus-gesture interaction and a light-gun mode for interacting with external light-emitting widgets. The researchers demonstrate a working prototype that simulates the image sensor with a camera and diffuser, allowing interaction up to 50cm in front of a modified 20.1-inch LCD. Light-sensing displays are emerging as research prototypes and are poised to enter the market. As this transition occurs the researchers hope to inspire the inclusion of some BiDi screen features in these devices. Many of the early prototypes enabled only multi-touch or pure relighting applications. They believe their contribution of a potentially thin device for multi-touch and 3D interaction is unique. For such interactions, it is not enough to have an embedded array of omni-directional sensors; by including an array of low-resolution cameras (through multi-view orthographic imagery in this design), the increased angular resolution directly facilitates unencumbered 3D interaction with thin displays.
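
The depth estimation from the recovered imagery can be illustrated with a basic plane sweep: shift the orthographic views into alignment for each candidate depth and keep, per pixel, the depth at which the views agree best. This is a hedged stand-in for the BiDi screen's actual processing; view count, baselines and the photo-consistency measure below are assumptions.

```python
# Plane-sweep depth from multi-view orthographic imagery (illustrative).
import numpy as np

def depth_from_views(views, offsets, depths):
    """views: (k, h, w); offsets: (k, 2) view positions on the screen plane;
    depths: candidate depths expressed as pixel shift per unit offset."""
    k, h, w = views.shape
    best_var = np.full((h, w), np.inf)
    depth_map = np.zeros((h, w))
    for d in depths:
        aligned = np.stack([
            np.roll(v, (int(round(d * oy)), int(round(d * ox))), axis=(0, 1))
            for v, (ox, oy) in zip(views, offsets)])
        var = aligned.var(axis=0)        # low variance = views agree here
        mask = var < best_var
        best_var[mask] = var[mask]
        depth_map[mask] = d
    return depth_map
```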

The researchers modified an LCD to allow co-located image capture and display. Left: Mixed on-screen 2D multi-touch and off-screen 3D interactions. Virtual models are manipulated by the user’s hand movement. Touching a model brings it forward from the menu, or puts it away. Once selected, free-space gestures control model rotation and scale. Middle: Multi-view imagery recorded in real-time using a mask displayed by the LCD. Right top: Image refocused at the depth of the hand on the right; the other hand, which is closer to the screen, is defocused. Right bottom: Real-time depth map, with near and far objects shaded green and blue, respectively.

Shadow Art
Niloy J. Mitra, IIT Delhi/KAUST, New Delhi, India/Thuwal, Saudi Arabia
Mark Pauly, ETH Zurich, Zurich, Switzerland

Shadow art is a unique form of sculptural art where the 2D shadows cast by a 3D sculpture are essential for the artistic effect. This work introduces computational tools for the creation of shadow art and proposes a design process where the user can directly specify the desired shadows by providing a set of binary images and corresponding projection information. Since multiple shadow images often contradict each other, they present a geometric optimization that computes a 3D shadow volume whose shadows best approximate the provided input images. Their analysis shows that this optimization is essential for obtaining physically realizable 3D sculptures. The resulting shadow volume can then be modified with a set of interactive editing tools that automatically respect the often intricate shadow constraints. They demonstrate the potential of the system with a number of complex 3D shadow art sculptures that go beyond what is seen in contemporary art pieces.

The visual effect of shadow art is often best appreciated in a dynamic setting where the light source or view points are in motion. The illustration shows a particularly challenging example with three concurrent shadows at a 45-degree angle. The optimization significantly deforms the images, yet preserves the intricate geometric features of the animal silhouettes. The resulting shadow hull has been edited using brush, ray, and erosion tools to create a complex geometric shape that looks substantially different from the input images from most viewpoints.

The key enabling technique is a geometric optimization for constructing a consistent shadow hull from a set of inconsistent input images using shape-preserving deformations. The researchers believe that their system can help make this art form accessible to a broader audience, and see several directions for future work, both from an artistic and a scientific point of view. Interesting questions arise in the analysis of the set of all subsets of the shadow hull that satisfy the shadow constraints. Connections to Boolean satisfiability, minimum set cover, and graph coloring problems are immediate and can potentially lead to new theoretical insights. So far they have considered only opaque materials and hence black-and-white shadow images. Semi-transparent materials offer the potential for gray-scale shadows that would pose challenging questions both for optimization and interaction. Their current system is designed for static shadow sculptures. Interesting artistic effects can also be achieved with dynamic shadow art, where multiple moving parts can create animated shadows.
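
For intuition only: in the special case of three orthogonal, parallel-light shadows, the unedited shadow hull is simply the intersection of the prisms swept from the binary images, as in the sketch below. Its projections are merely contained in the inputs rather than equal to them, which is precisely why the paper's consistency optimization is needed first.

```python
# Shadow hull as an intersection of three axis-aligned prisms (toy case).
import numpy as np

def shadow_hull(img_xy, img_xz, img_yz):
    """img_xy: (nx, ny), img_xz: (nx, nz), img_yz: (ny, nz) binary images.
    Returns an (nx, ny, nz) boolean volume whose axis projections are
    contained in the three inputs."""
    return (img_xy[:, :, None].astype(bool)
            & img_xz[:, None, :].astype(bool)
            & img_yz[None, :, :].astype(bool))

# Tiny demo: a full top shadow with two diagonal side shadows.
vol = shadow_hull(np.ones((2, 2)), np.eye(2), np.eye(2))
print(vol.any(axis=2))   # projection back onto the xy shadow plane
```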

A 3D shadow art sculpture that simultaneously casts three distinct shadows. The side columns show the desired shadow image provided by the user (left), inconsistencies due to conflicting shadow constraints (middle), and optimized images (gray) that avoid shadow conflicts with the outline of the original for comparison (right).

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

Virtual Reality Continuum and its Applications in Industry December 14-15, 2009, Yokohama, Japan

In the first of two reports, Phillip Hill covers papers from the Agency for Science Technology and Research, Linköping University/Swedish Defense Research Agency (FOI), Tokyo Institute of Technology, and Avatar Reality Inc./University of Hawaii/Samsung

Hologram Calculation for Deep 3D Scene from Multi-View Images
Koki Wakunami, and Masahiro Yamaguchi, Tokyo Institute of Technology, Tokyo, Japan

A computer-generated hologram (CGH) is usually calculated by simulating the light propagation from point sources on the object surface, but reproducing the angular reflection properties of surfaces, such as gloss, which is important for realistic 3D images, is difficult. Light-ray-reconstruction-based CGH can render the angular reflection properties from light-ray information, and it can also be applied to real objects using image-based rendering from multi-view images captured by a camera array. However, an image far from the hologram plane is blurred due to the light-ray sampling and the diffraction at the hologram surface, so the approach is not suitable for the display of deep scenes. The researchers propose a new algorithm for calculating a CGH with the use of a virtual “light-ray sampling (RS) plane”. The proposed method can be applied both to virtual objects generated by computer graphics and to real objects captured by a camera array. Even if the objects are located far from the CGH plane, the resolution of the reconstructed image is not degraded, since the long-distance light propagation is calculated by diffraction theory. The method therefore makes it possible to display deep scenes with objects far from the CGH plane. Additionally, the method can reproduce the angular reflection properties of the object surface, such as glossy or metallic characteristics, using directional light-ray information. The diagram illustrates the model for the calculation of a CGH by the proposed method. The RS plane is set near the object location and light-ray information from the object is sampled at the RS plane. This light-ray information is then transformed into a wavefront based on the principle of holographic stereograms. The wavefront propagation from the RS plane to the CGH plane is calculated by Fresnel diffraction. Finally, the hologram pattern is obtained by simulating the interference between the wavefront propagated from the RS plane and the reference wave at the CGH plane.
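
The long-distance propagation step can be sketched with a standard angular-spectrum (Fresnel-regime) propagator; the grid size, pixel pitch and wavelength below are illustrative values, not the paper's, and the toy RS-plane field stands in for the wavefront obtained from the light rays.

```python
# Angular-spectrum wavefront propagation plus interference (illustrative).
import numpy as np

def propagate(field, dist, wavelength, pitch):
    """Propagate a sampled complex wavefront by dist (same units as pitch)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dist)                      # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

rs_field = np.exp(2j * np.pi * np.random.rand(256, 256))  # toy RS-plane field
at_cgh = propagate(rs_field, dist=0.1, wavelength=633e-9, pitch=8e-6)
hologram = np.abs(at_cgh + 1.0) ** 2   # interference with unit plane reference
```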

Model of the proposed calculation

Automatic and Real-time 3D Face Synthesis
Hong Thai Nguyen, Ee Ping Ong, Arthur Niswar, Zhiyong Huang, and Susanto Rahardja, Agency for Science Technology and Research, Singapore

This paper describes a system for automatic and real-time 3D photo-realistic face synthesis from a single frontal face image. The system employs a generic 3D head model approach for 3D face synthesis that can generate the 3D mapped face in real time. The system first automatically detects face features from the input face image that correspond to landmark points on the generic 3D head model. Thereafter, the generic head model is deformed to match the detected features. Finally, the texture from the face image is mapped onto the deformed 3D head model to create a photo-realistic 3D face. The system has the advantage of being totally automatic and runs in real time. Experiments conducted show that good results can be obtained with no user intervention. Such a system is useful in many applications, such as the creation of avatars for virtual worlds by an end-user, with no need for tedious manual processes such as feature placement on the face image. The results generated are shown in the Figures. In the experiments, the researchers were able to generate these results in real time immediately after capturing a person’s face using a normal webcam. In their work they ignored the hair, and did not treat the ears and the neck in the texture mapping (the 3D model used contains a neck region attached to the head), so there are some artifacts on the ears and the neck. This could be resolved by some tweaking of the program (e.g., using a generically textured ear modulated with color taken from the face, and removing the neck from the head model).
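
The landmark-driven deformation step might look roughly like the following normalized radial-basis warp, which moves mesh vertices so that source landmarks travel to their detected positions. This is a hypothetical sketch; the paper does not disclose its exact deformation scheme, and the kernel width here is invented.

```python
# Landmark-driven mesh warp (hypothetical stand-in for the paper's step).
import numpy as np

def rbf_warp(vertices, src_landmarks, dst_landmarks, sigma=0.1):
    """Displace (n, 3) vertices so (m, 3) src landmarks map toward dst."""
    d = dst_landmarks - src_landmarks            # per-landmark offsets
    def offset(p):
        r2 = np.sum((src_landmarks - p) ** 2, axis=1)
        w = np.exp(-r2 / (2 * sigma ** 2))       # nearby landmarks dominate
        s = w.sum()
        return (w[:, None] * d).sum(axis=0) / s if s > 1e-9 else 0.0
    return np.array([v + offset(v) for v in vertices])
```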

Results generated using the proposed automatic 3D face synthesis system

A Co-located Collaborative Augmented Reality Application
Susanna Nilsson, and Arne Jönsson, Linköping University, Linköping, Sweden
Björn J.E. Johansson, Swedish Defense Research Agency (FOI), Stockholm, Sweden

This paper presents results from a study on using an augmented reality (AR) application to support collaborative command and control activities requiring the collaboration of three different civil service organizations. The technology is used to create a common ground between the organizations and allows the users to interact, plan resources and react to the ongoing events on a digital map. The AR application was developed and evaluated in a study where a forest fire scenario was simulated. Participants from the involved organizations acted as command and control teams in the simulated scenario and both quantitative and qualitative results were obtained. The results show that AR can become a useful tool in these situations in the future. In order to interact with the AR system the users had a joystick-like interaction device allowing them to choose objects and functions affecting their view of the digital map (see Figure 1). The study was conducted at a helicopter base (Figure 2). In order to create a dynamic scenario and realistic responses and reactions to the participants’ decisions in the three sessions, they used a gaming simulator, C3 Fire. C3 Fire generates a task environment where a simulated forest fire evolves over time. One important technical design feature, which was a result of the iterative design process, is the ability to point in the map. The system allows the user’s hand to be superimposed over the map image for clarification (Figure 3).

Figure 1: Joystick interaction device; Figure 2: The simulated natural setting (a helicopter base); Figure 3: The users’ display showing the digital map with symbols and pointing used in the collaborative AR application


Towards Virtual Reality Games
Andrei Sherstyuk, Avatar Reality Inc, Honolulu, Hawaii; Dale Vincent, University of Hawaii, Honolulu, Hawaii; Anton Treskunov, Samsung, Seoul, South Korea

Game engines of cinematic quality, broadband networking, and advances in virtual reality (VR) technologies are setting the stage to allow players to have shared, “better-than-life” experiences in online virtual worlds. The researchers propose a mechanism of merit-based selection of players as a solution to the long-standing problem of limited access to VR hardware. The authors preface their paper by saying that games and VR are worlds apart. Games, including online virtual worlds, populate the entertainment arena of the consumer market. They are engaging, mass-produced, inexpensive, and target wide audiences. Visual realism in games approaches cinematic quality. The amount of 3D content available online today exceeds what can be explored in a single person’s lifetime. In contrast to gaming, VR systems require expensive hardware and customized software. They are available only to limited audiences. VR systems are difficult to maintain and upgrade. Both the visual quality and the extent of virtual content are typically lower than in games. Nevertheless, VR has one feature that makes it stand out from all other platforms: the unmatched sense of presence, delivered by immersion and body tracking. The ability to make users believe that they actually “are there” has made VR a tool of choice for medical, military, and extreme-condition training applications. The paper discusses a concept of virtual reality games that will combine the best features of games and VR: large persistent worlds experienced in photo-realistic immersive settings. The researchers suggest several solutions for bridging the gap between the two platforms.

Games are already entering what used to be exclusively VR territory by incorporating elements of body tracking into user interface controls. The remarkable success of Nintendo’s Wii underscores the value of physical interactivity in game play. VR technologies are also experiencing steady growth in both high-end and inexpensive systems. Optical trackers may cost as little as a single web camera, and cover a few feet of tracked area. High-end trackers, such as PPT from WorldViz, are capable of capturing user motions in a 50x50x3 meter area with sub-millimeter precision. Head-mounted displays are also advancing rapidly. The model shown in Figure 1 has 1920x1200 pixels per eye and a 120° field of view. It weighs only 350 grams. Figure 2 shows a recent and realistic example. This golf game is currently under development by Avatar Reality, as one of the attractions on the Blue Mars Virtual World. Although Blue Mars is designed for non-VR platforms, it provides a good illustration of what VR pockets may look like. VR Golf will require tracking of the user’s head and both hands, for making a perfect swing. Travel on the golf course can be conveniently implemented with a virtual golf cart, using a “point-and-go” steering metaphor.

Figure 1: Using an HMD and motion trackers, players will be able to literally walk into this 3D scene, with 360° viewing. Image credits: Crytek GmbH (Crysis game), Sensics Inc. (xSight HMD)
Figure 2: Snapshot from the golf course on Blue Mars Virtual World. Credit: Avatar Reality Inc.


From June 1 to 3, 2010, the Seine-Saint-Denis department, just north of Paris, will host the 4th edition of Dimension 3, the International Forum on S-3D (stereoscopic) and new images, at a historic and symbolic venue: the Pullman Dock at the Plaine Saint-Denis studios, at the heart of the French audio-visual and new media industry.

A place for sharing, discovery and networking for companies that include S-3D in their development strategy, as well as for research labs, Dimension 3 2010 aims to be a must-attend event for engineers, producers, entrepreneurs, manufacturers and researchers across a variety of markets: cinema, TV, video games, communications, visualization, medicine…

Dimension 3 2010 offers a number of highlights for all professionals:

 36 conferences and workshops over 3 days, gathering some sixty international experts from all sectors (scientists, developers, producers, artists, engineers, manufacturers, technicians, directors). Three conference rooms including a 500-seat auditorium will accommodate attendees.

 A 2,000+ m² exhibition area where manufacturers, publishers, contractors and producers will offer demonstrations and real-life product presentations.

 A Campus featuring bilingual training sessions designed for an international public, with MEDIA certification in 2010. The MEDIA support will further develop the training program’s ambitions and objectives. (In 2009 the Campus gathered trainees from the USA, Europe and Asia for a series of training sessions on S-3D from filming to screening.) The training sessions will be held from May 24 to 28, 2010.

 A film festival, Dimension 3 Festival, which in 2007 was the first international film event open to 3D films. In 2010 the festival grows further, now featuring a competition with 6 categories and 3 special prizes. The competing programmes (either full versions or clips) will be looped in a screening auditorium during the Forum.

Dimension 3 also develops an S-3D learning section open to the public: 3D films will be screened as part of 3Discovery. The event will tour several cinemas in Seine-Saint-Denis throughout June (5 participating cinemas in 2009). A sponsored exhibition on the history of 3D will also be displayed in the cinemas’ halls.

One of the area’s most beautiful exhibition venues! Dimension 3 2010 will take place in the Pullman Docks, a building located at the heart of the Plaine Saint-Denis studios, where filming crews, technical contractors and producers from French TV all work together. By selecting this venue, the Forum’s organizing team opted not only for a famous and prestigious location, but also for the comfort of a 3,200 m² area with easy access.

About Dimension 3: Dimension 3, the International Forum on S-3D and new images, is organised by Avance Rapide Communication and sponsored by the General Council of Seine-Saint-Denis.

www.dimension3-expo.com

IDW ’09 December 9-11, 2009, Miyazaki, Japan

In this first report, Phillip Hill covers this conference organized by The Institute of Image Information and Television Engineers, and The Society for Information Display, with papers from the 3D session. Presentations from NHK, Communications Research Center, AU Optronics Technology Center, Chunghwa Picture Tubes, Nagoya University/Tokyo Institute of Technology, and Disney Research/Holorad

Integral 3D Television: A Real-time Imaging System based on Integral Photography
Makoto Okui, Jun Arai, Masahiro Kawakita, and Fumio Okano, NHK, Tokyo, Japan

Integral photography is highly effective at allowing observers to see natural and realistic 3D images without any special glasses. The technology is expected to be used in future broadcasting and information-communication technology. NHK has been conducting continuous research into integral 3D television based on the principle of integral photography. The most challenging issue to overcome is that the method needs a huge number of pixels for practical applications. The paper reports progress in recent years on the development of prototypes using an extremely high-resolution video system and related technologies. NHK has carried out research in vision, binocular and stereoscopic/auto-stereoscopic image systems, including stereoscopic HDTV and electro-holography, for several decades. In this paper, NHK reports research on integral 3D television as one of its most recent research topics. The method is general enough to apply beyond broadcasting alone to the entire future field of information and communication technology. Currently, they are researching the use of a high-resolution projector with integral 3D TV as a means of improving 3D picture quality. They believe that use of a projector also brings various other benefits, such as using multiple projectors for higher resolution, freedom of screen size, and moiré-free images with most projectors. They have studied a number of optical screens for projector-type integral 3D TV. One conceptual diagram of that system is presented in the diagram: a new technique in which a mirror and a converging lens are used for front projection of the elemental images. This method may eliminate the need for a large installation space.

The systems NHK is now investigating have not yet reached a practical stage, but they do have the potential for development into a future, long-lived 3D TV system. They also expect to produce high quality 3D images and widen the range of application of integral 3D TV through the development of specialized 3D devices.

Proposed front-projection arrangement. The offset projection with a mirror allows observers to see a 3D image diagonally.

62 Veritas et Visus 3rd Dimension March 2010

3D TV: Are Two Images Enough? How Depth Maps can Enhance the 3D Experience
Carlos Vázquez, Wa James Tam, and Filippo Speranza, Communications Research Center, Ottawa, Ontario

For 3D TV viewing, the two images provided by stereoscopic imaging systems offer very little control over the perceived 3D experience. However, depth maps allow for improved depth visualization, through disparity customization and new viewpoint generation. In this paper, the researchers explore the main problems associated with stereoscopic visualization and how the use of depth maps can help solve some of them. The extraction of the depth information and the generation of new views to adjust the depth of the scene to the viewing conditions are two of the topics covered. When more than one view is available, the depth information can be extracted with high confidence from the available content. This is known as the stereo problem for two views and is well documented in the literature. For the multi-view case, the researchers have developed a method for extracting the depth information in order to use it for the generation of virtual views. The proposed method is based on an exhaustive search strategy and “total-variation regularization” that leads to an accurate estimation of the depth and occlusion regions. Figure 1 shows the resulting depth map created with the proposed method for the well-known test image “Cones”. The researchers have also proposed a method for the coding of the disocclusion information based on interpolating wavelets. The method allows for the efficient transmission of the disocclusion information to help fill the newly exposed regions when rendering new virtual views. Figure 2 shows an example of disocclusion coding based on the proposed algorithm. The disocclusions are coded using an interpolating wavelet strategy that reduces the amount of information that needs to be transmitted. As can be seen from Figure 2e, the distribution of values for the wavelet coefficients in the resulting representation of the disocclusion is packed around zero, making it more efficient to transmit.
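
The role of the disocclusion data is easiest to see in a bare-bones depth-image-based rendering step: warping a view sideways with its depth map leaves exactly the holes that the coded disocclusion information must fill. The sketch below uses integer forward warping for brevity and is not the CRC system itself.

```python
# Minimal DIBR forward warp; holes mark disoccluded regions (illustrative).
import numpy as np

def render_view(image, depth, baseline):
    """image, depth: (h, w); depth normalized 0..1 (1 = near);
    baseline: maximum disparity in pixels for the new viewpoint."""
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = np.round(baseline * depth).astype(int)
    for y in range(h):
        for x in range(w):              # (a far-to-near order would be safer)
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, ~filled                 # ~filled = disocclusions to in-paint
```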

Figure 1: Depth estimated from a multi-view source; Figure 2: Example of disocclusion coding

Ray-Based Acquisition and Reproduction of 360-degree 3D Images
Tomohiro Yendo, Masayuki Tanimoto, and Mehrdad Panahpour Tehrani, Nagoya University, Aichi, Japan
Toshiaki Fujii, Tokyo Institute of Technology, Tokyo, Japan

The paper introduces a cylinder-shaped 3D display that allows viewers to see 3D images from 360 degrees, and a novel 3D image acquisition system. The acquisition system acquires multiview images from all horizontal directions around an object with a narrow view interval, which the display needs as light-ray data. The proposed method is based on a parallax panoramagram, a technique that allows different images to be seen from different viewpoints. As shown in Figure 1, a system constructed using this technique is composed of a two-dimensional imaging device and a parallax barrier, which has many vertical slits positioned in front of the imaging device. Several different types of multiview autostereoscopic display systems based on this technique have been developed. These systems are designed based on the concept of preparing two-dimensional images that correspond to each viewpoint and then choosing and showing the appropriate image. Here, the fundamental function of the parallax barrier is to independently control the color of the light ray in each direction, so that the three-dimensional image can be displayed as a cluster of rays if the resolution of the ray direction is high enough. The rays from any part of the display surface, concentrated to a point, reproduce a light spot as if it were in the air. Based on this approach, the researchers propose a 3D display technique. The basic structure of the proposed display is shown in Figure 2. It has two spinning cylinders, one inside the other. The outer cylinder is a parallax barrier with a series of vertical slits spinning rapidly, while the inner cylinder spins more slowly in the opposite direction with a series of one-dimensional LED arrays on its surface. If the slit width of the outer cylinder is sufficiently small, the light through the slit becomes a thin flux whose direction is scanned rapidly by the spinning of the outer cylinder. By synchronously changing the intensity of each LED in the arrays according to the spinning, rays of different directions are given different colors by time multiplexing. Moreover, as the inner cylinder also rotates more slowly, the LED array’s position is slightly different when the next slit comes; therefore, rays are shot in each direction from each position. In this way, a cylindrical ray-space/light-field display is realized.
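
The time multiplexing can be made concrete with a little geometry: at any instant, the ray leaving an LED column is fixed by the LED's angle, the angle of the nearest slit, and the two cylinder radii. All numbers below (radii, spin rates, slit count) are invented for illustration.

```python
# Ray direction for one LED column of the two-cylinder display (toy numbers).
import numpy as np

R_OUT, R_IN = 0.100, 0.090                     # barrier / LED radii (m)
W_OUT, W_IN = 2 * np.pi * 30, -2 * np.pi * 1   # spin rates (rad/s), opposite

def ray_at(t, slit_index, n_slits=180):
    a_slit = W_OUT * t + slit_index * 2 * np.pi / n_slits
    a_led = W_IN * t                            # reference LED column angle
    p_slit = R_OUT * np.array([np.cos(a_slit), np.sin(a_slit)])
    p_led = R_IN * np.array([np.cos(a_led), np.sin(a_led)])
    d = p_slit - p_led                          # LED-to-slit direction
    return p_led, d / np.linalg.norm(d)

# Modulating the LED intensity at each step colors one ray direction.
origin, direction = ray_at(t=1e-4, slit_index=0)
```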

Figure 1: Parallax panoramagram; Figure 2: Structure of the proposed display system

The researchers developed three prototypes, and the latest one is capable of displaying color and moving images. A photograph of the display is shown in Figure 3. The image size is 200mm in diameter and 256mm in height. The pixel count is 1254x256 with a 1mm pitch. To display color images, LED arrays of three colors are used. Each LED array has frame memory to enable playback of stored video images. The total amount of memory is 6.9GB, which stores just under 10 seconds of dynamic imagery. The refresh rate is 30Hz.

Figure 3: Photograph of the prototype; Figure 4: Schematic diagram of the proposed acquisition system


As shown in Figure 4, the proposed acquisition system consists of a scanning optics system and a high-speed camera. The scanning optics system is composed of a double-parabolic mirror shell and a rotating flat mirror slanted at 45 degrees to the horizontal plane. The mirror shell produces a real image of an object that is placed at the bottom of the shell. In previous work, Fujii used a similar double-parabolic mirror shell, known as the “3D mirage” toy. However, the real image produced by the shell is seen from obliquely downward directions only, so they modified it so that the real image can be captured from horizontal directions. The rotating mirror at the position of the real image reflects it in the camera-axis direction. The reflected image acquired by the camera varies according to the angle of the rotating mirror. This means that the camera can capture the object from various viewing directions that are determined by the angle of the rotating mirror. To acquire the time-varying reflected images, they use a high-speed camera that is synchronized with the angle of the rotating mirror.

Switchable 3D/2D Display using LC GRIN Lens
Chung Hsiang Chiu, Chih Wen Chen, Chih Hung Shih, and Wei Ming Huang, AU Optronics Technology Center, Hsinchu, Taiwan

Switchable 3D/2D displays using an LC graded-index (GRIN) lens with low crosstalk were fabricated. By optimizing the LC material parameters, the patterned ITO, and the relationship between the lens cell and the image cell, crosstalk of less than 5% for 2-view operation was achieved. The researchers demonstrated a 2.83-inch 2-view 2D/3D switchable LC GRIN lens display. Patterned ITO electrodes, material parameters, and the matching between pixels and the lens array are optimized for the LC GRIN lens effect. The driving voltage is 15V. LC GRIN lens technology can be a possible method for 2-view or multi-view displays, the researchers say.

A Novel Real-time Technique Using Depth Based Rendering
Meng-Chao Andy Kao, and Tzu-Chiang Shen, Chunghwa Picture Tubes, Taoyuan, Taiwan

A faster algorithm for generating depth maps from 2D images has been developed using grayscale analysis and spatial relative setting. This novel technology was successfully implemented in 26-inch and 37-inch barrier-type, 4-view 3D LCDs. In this paper, the proposed depth-based video processing system generates depth maps automatically to output multi-view video signals for 3D displays without any manual procedure. As long as there is a depth map, one can generate most kinds of stereoscopic images to conform to any type of 3D display format. In addition, the proposed algorithm does not require motion vector detection, so it can easily be incorporated in an ASIC.
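
Since the algorithm itself is not disclosed, the following is only a loose guess at what a "grayscale analysis plus spatial relative setting" heuristic could look like: brightness and image-row position are blended into a depth value, with every weight invented for illustration.

```python
# Invented luminance-plus-position depth heuristic (not CPT's algorithm).
import numpy as np

def quick_depth(gray):
    """gray: (h, w) uint8 image; returns a uint8 pseudo depth map."""
    h, w = gray.shape
    rows = np.linspace(0.0, 1.0, h)[:, None]   # lower rows assumed nearer
    depth = 0.5 * (gray / 255.0) + 0.5 * rows  # 50/50 blend, purely illustrative
    return (255 * depth / depth.max()).astype(np.uint8)
```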

On the left is a manufactured 3D LC GRIN panel as developed by AUO; on the right is a 3D image generated with CPT’s proposed 2D-to-3D conversion in a 26-inch barrier-type, 4-view 3D LCD


An Interactive Zoetrope for the Animation of Solid Figurines and Holographic Projections
Lanny Smoot, and Katie Bassett, Disney Research, Glendale, California
Stephen Hart, and Daniel Burman, Holorad, Salt Lake City, Utah

The paper describes an interactive zoetrope that can animate holographic images, solid figurines, or other still-frame images. Unlike previous zoetropes, it is capable of aperiodic, interactive behavior. For example, the researchers have used it to animate a talking character’s mouth in real time in response to human speech. Zoetropes preceding this one display their images sequentially, that is, in the order in which the images are physically placed. The work describes techniques to instantaneously vary the order in which the images are displayed, allowing one to change the course of the animation in real time. This allows for infinitely non-repetitive and non-trivial animation using a small, finite number of images or frames. Control can come from a predetermined script or media track. Alternatively, control can come from direct human intervention in real time. The researchers chose the animation of a character’s face to illustrate their techniques. This approach allows instantaneous interactivity with physical objects and holograms. The images used are faces with increasingly more open mouth positions (see Figure 1). The characters can be made to “talk” as these positions are lit in an order corresponding to the average level of a voice-audio signal. To achieve the interactive holograms, the researchers fabricated a holographic disc (30cm in diameter and 6mm thick) that has the several separate images of the talking head encoded within it. When viewed, each head (eight in the prototype) appears to float 15cm in front of the hologram disk (see Figure 2).
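
The speech-driven selection reduces to mapping an audio level to a frame index, roughly as below; the RMS measure and linear mapping are assumptions rather than the authors' exact control law.

```python
# Audio level -> mouth frame index (assumed mapping, eight frames as above).
import numpy as np

N_FRAMES = 8   # closed mouth ... fully open mouth

def frame_for_audio(samples, peak=1.0):
    level = np.sqrt(np.mean(np.square(samples)))   # RMS of the audio buffer
    return min(int(level / peak * N_FRAMES), N_FRAMES - 1)

print(frame_for_audio(np.full(512, 0.9)))   # loud buffer -> frame 7
```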

Figure 1: Increasingly open-mouthed characters successively positioned on a rotating platform

The research provides new ways to create constantly alterable physical animation sequences with a finite number of animation images. It also demonstrates human-interface-driven animation based on speech input. Characters’ motions, in this case speech-driven mouth movements, can be made interactive. Entertainment companies can apply these techniques to broaden the appeal and interactivity of attractions. Advertising applications include interactive signage and virtual character kiosks. The user-interface community can give computers a virtual face using interactive holographic characters. The telepresence/video-conferencing community can use holo-characters as surrogates for live remote persons, and the concept can also be applied to novelty items such as voice-responsive lip-syncing toys.

Figure 2: Photograph of the interactive holographic zoetrope with features labeled

International Universal Communication Symposium December 3-4, 2009, Tokyo, Japan

Phillip Hill covers papers from Toshiba Corporation, Advanced Telecommunications Research/National Institute of Information and Communications Technology, Korea Institute of Science and Technology/Daegu University, NICT Tokyo, and NICT Kyoto

One-Dimensional 3D Display Systems
Yuzo Hirayama, Toshiba Corporation, Kawasaki, Japan

Toshiba has developed several kinds of autostereoscopic display systems using the one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object, so the displays have continuous motion parallax. The design, fabrication, and optical evaluation of the displays have been completed. Using proprietary software, fast playback of CG movie content and real-time interaction have also been realized with the aid of a graphics card. Realizing 3D images that are safe for humans is very important; the researchers have measured the effects on visual function and evaluated the biological effects, and found that their displays show better results than those of a conventional stereoscopic display. The display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle. Mixed reality of virtual 3D objects and real objects can also be realized on a flatbed display. The new technology opens up new areas of application for 3D displays, including communications, arcade games, e-learning, simulations of buildings and landscapes, and even 3D menus in restaurants. To reproduce natural 3D images on the flatbed display, the researchers developed proprietary software that utilizes 10 or more views of an object. Their middleware supports fast playback of the images with the aid of a graphics card. The combination of advanced technologies achieves a full 3D effect when viewed at an angle as wide as 30 degrees from the center of the screen, and from distances of over 30cm. The naturalness of the image signal allows long viewing. They have applied the new technology to displays of several sizes, with 480x300 and 480x400 3D pixels, allowing viewers to see high-quality stereoscopic images. The flatbed-type display brought about a more effective stereoscopic experience than that available with the conventional upright-type display. 3D images and real objects can be seen together because these displays reproduce light rays similar to those produced by a real object. In the images, the yellow can is real; the other objects are generated by the display.

Prototype of a 24-inch flatbed-type display

Measurements of Vergence/Accommodation while Viewing a Real 3D Scene and its 2D Image on a Display
Haruki Mizushina, Takanori Kochiyama, and Shinobu Masaki, Advanced Telecommunications Research, Kyoto, Japan; Hiroshi Ando, National Institute of Information and Communications Technology, Kyoto, Japan

It is widely thought that the conflict between vergence and accommodation may be a major factor in the visual fatigue and discomfort caused by viewing stereoscopic images on 3D displays. However, few studies have measured vergence and accommodation simultaneously while viewing a real 3D scene and its 2D image on a traditional display. In this study the researchers measured vergence and accommodation responses simultaneously while viewing a real 3D object located at various distances from the observer, and its 2D image (see photograph), including background scenes, presented on a display located at a fixed distance. The results show that vergence and accommodation varied with changing target distance while viewing the real 3D object, as expected. On the other hand, changing the target distance depicted in the photographic image while viewing the 2D display evoked no systematic change in vergence and accommodation. Some participants showed noticeable accommodation lag and fixation disparity. In addition, considerable conflicts between vergence and accommodation were observed in both the 3D and 2D conditions, yet no one reported perceived defocus and/or double vision.

The instrument used for simultaneous measurements of accommodation and vergence responses (Topcon Corporation)

3D Display and Communication Technology
Min-Chul Park, Korea Institute of Science and Technology, Seoul, South Korea
Jung-Young Son, Daegu University, Gyeongbuk, South Korea

This paper describes 3D displays in terms of communication technology. 3D displays provide viewers with more accurate and realistic information than a 2D display does. This feature is an essential component of communication technology. Generally, communication technology aims at the exchange and sharing of thoughts, feelings and ideas; 3D displays are effective contact media for achieving these goals. The concept of an accessible spatial dimension of a person is used to describe 3D displays for communication, and several research results related to 3D display and communication technology are introduced based on the concept. A 3D mobile visual communication system requires integrating technologies for 3D image display, processing, and mobile networking. A slanted parallax barrier generates a viewing zone for the viewer without the moiré effect, but the complexity of the structure causes a computational burden. An autostereoscopic 3D display for mobile visual communication using a typical parallax barrier strip is therefore used in the experiment, considering the computational capability and power consumption of mobile phones. 3D mobile visual communication is realized by transmitting stereo images over CDMA (Code Division Multiple Access) networks. To test 3D mobile visual communication, stereo images are captured and transmitted to a server terminal in real time, as shown in the photos. External stereo cameras are used in the experiment because mobile phones in daily use are not yet equipped with stereo cameras. Transmitted images are compressed using a JPEG codec module, and the optimized compression ratio is determined by a QoS (Quality of Service) function. The allowable transmission bandwidth of the CDMA network is 128kbps, and the compression ratio is determined by subjective evaluation of 3D effect and image brightness. In the experiment the compression ratio is set at around 20%, and the left and right images are transmitted separately to avoid image deterioration. Combining the images for 3D display is processed on the mobile phone.
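
A quick back-of-envelope check shows why the compression ratio matters at this bandwidth; only the 128kbps figure and the separate left/right transmission come from the text, while the per-frame size is an assumption.

```python
# Stereo-pair rate estimate over the 128 kbps channel (assumed frame size).
FRAME_KB = 16                  # assumed JPEG size of one eye's image
CHANNEL_KBPS = 128             # CDMA bandwidth quoted in the paper
bits_per_pair = 2 * FRAME_KB * 8 * 1024       # left + right sent separately
fps = CHANNEL_KBPS * 1000 / bits_per_pair
print(f"~{fps:.2f} stereo pairs per second")  # ~0.49 under these assumptions
```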

Electronic Holography Generated from Integral Photography
Ryutaro Oi, Kenji Yamamoto, Tomoyuki Mishina, Takanori Senoh, and Taiichiro Kurita, National Institute of Information and Communications Technology, Tokyo, Japan

In this paper, the researchers describe an electronic holography method for non-coherent lighting environments. They used integral photography (IP) to obtain 3D information about the scene. This method demands neither laser beams nor a darkroom during recording, so living or moving objects may be captured onto a hologram. The converter hardware calculates fringe patterns from the IP at 30 frames per second using a previously proposed conversion algorithm. In an experiment, color holograms of 3840x2160 pixels were generated in real time. They observed that moving color objects captured at a distant place were converted to holograms and then successfully reconstructed as a moving color image volume with depth by irradiating the holograms with lasers. The authors believe that ultra-realistic 3D communication will be enabled in the future by further research into the proposed electronic holography method. The researchers used a 4Kx2K IP camera to obtain the 3D information of the scene. The IP camera outputs video data via four channels of high-definition serial digital interface (HD-SDI) signals (4x1.485Gbps). The converter hardware calculates fringe patterns from the IP in real time and outputs the resulting color holograms to the display hardware via four further HD-SDI channels. The display uses three panels of LCoS (liquid crystal on silicon) for the red, green and blue holograms. For the hologram reconstruction, they displayed red, green and blue fringe patterns on the reflective LCoS panels, which have a pixel pitch of 6.8 micron (horizontal) x 6.8 micron (vertical). The reconstructed three primary-color holograms are integrated by beam combiners to form a full-color image volume, as shown in the Figure. The prototype hologram display adopts three sets of JVC D-ILAs with 3840x2160 resolution. The reference beams are lasers of 633nm, 543nm and 470nm, and an in-line hologram setting was used for the reconstruction.

Reconstructed three primary color holograms are integrated by beam combiners to form a 3D image volume

An Improved Optical Device for Floating Displays
Sandor Markon, and Satoshi Maekawa, NICT, Kyoto, Japan

The paper proposes an improved design of an optical device for projecting floating images. The improved device is a modification of the original design of dihedral corner reflector arrays reported previously, improving its manufacturability while largely maintaining its image-forming capability. The paper describes the construction of the device, and shows its properties by mathematical analysis and optical simulation. Optical devices that can form real images are of particular interest for ultra-realistic communication. A recent development using dihedral corner reflector arrays made it possible to form a floating image of any object, without distortion, above its plane. The resulting image, apparently floating in mid-air, has a wide viewing angle and can be inspected from various directions. Such optical devices can be combined with sensors to create a new kind of interactive experience, where the user can manipulate the image floating in the air. To create floating images, the following properties are required of the optical device: light rays originating from the target should pass through the device with minimal attenuation; and, on the other side, the rays should behave as if refracted by a negative-index material. These properties ensure that a non-distorted, plane-symmetric, real image will be formed. A practical way of achieving these properties is by using an array of dihedral corner reflectors (DCRs) arranged in a plane. Rays that are reflected twice by a corner reflector will pass through the plane of the device, as shown in the Figure. This optical device can be manufactured using nanotechnology, by first creating a metal mold by nano-machining, then using UV imprinting or other methods to create the optical device from acrylic or other transparent materials. The dihedral mirrors are formed as the walls of rectangular plastic pillars, using total internal reflection on their surfaces. The researchers have shown that a DCR array can be designed for easy manufacture using a new optical design that exploits both total internal reflection and refraction. They plan to manufacture a device using the new geometry and experimentally verify its properties.
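
The double-reflection property is easy to verify numerically: reflecting a ray off two perpendicular vertical mirror faces negates its x and y components while preserving z, which is what sends every ray to the plane-symmetric image point.

```python
# Two reflections off perpendicular faces of a dihedral corner reflector.
import numpy as np

def reflect(d, n):
    return d - 2 * np.dot(d, n) * n       # mirror direction d about normal n

d = np.array([0.3, -0.5, 0.81])           # arbitrary incoming ray direction
d1 = reflect(d, np.array([1.0, 0.0, 0.0]))   # first mirror face
d2 = reflect(d1, np.array([0.0, 1.0, 0.0]))  # second, perpendicular face
print(d2)   # -> [-0.3, 0.5, 0.81]: x and y flipped, z preserved
```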

Imaging by a dihedral corner reflector array


Symposium on Virtual Reality Software and Technology November 18-20, 2009, Kyoto, Japan

In this first report of two, Phillip Hill covers papers from the 16th ACM symposium on VRST: University of Colorado at Boulder, Kyoto University/Osaka University, University of New South Wales, Federal University of Pernambuco/Worcester Polytechnic Institute, Universitat Politècnica de Catalunya, University of Udine, Mines ParisTech, Worcester Polytechnic Institute/University of Hamburg, and Osaka University

Wearable Imaging System for Capturing Omni-directional Movies from a First-person Perspective
Kazuaki Kondo, Kyoto University, Kyoto, Japan
Yasuhiro Mukaigawa, and Yasushi Yagi, Osaka University, Osaka, Japan

The paper proposes a novel wearable imaging system that can capture omni-directional movies from the viewpoint of the camera wearer. The imaging system solves the problems of resolution uniformity and gaze matching that conventional approaches do not address. The researchers combine cameras with curved mirrors that control the projection of the imaging system to produce uniform resolution. Use of the mirrors also enables the viewpoint to be moved closer to the eyes of the camera wearer, thus reducing gaze mismatching. The optics, including the curved mirror, have been designed to form an objective projection. The capability of the designed optics is evaluated with respect to resolution, aberration, and gaze matching. They have developed a prototype based on the designed optics for practical use. The researchers constructed an optical unit with a camera, the curved mirror, and an additional flat mirror, which aids compactness. Since flat mirrors do not affect optical properties such as focusing, their influence can safely be ignored. The total system consists of four optical units, each capturing the front, right, left, or back scene of the camera wearer. A result of a projection onto a virtual cylindrical screen (a panoramic projection) is shown in Figure 2.

A result of image unwarping (panoramic). (A)-(D) Input images for each direction, left, front, right, and back. (E) Unwarped and mosaic image.


GPU Acceleration of Stereoscopic and Multi-View Rendering for Virtual Reality Applications
Jonathan Marbach, University of Colorado at Boulder, Boulder, Colorado

Stereo and multi-view rendering of three-dimensional virtual environments can be accelerated using modern GPU features such as geometry shaders and layered rendering, allowing multiple images to be generated in a single geometry pass. These same capabilities can be used to generate the multiple views necessary for co-present multi-user projection environments. Previous work has demonstrated the feasibility of applying such techniques, but has not shown under what circumstances they increase or decrease rendering performance. This paper provides a detailed analysis of the performance of single-pass stereo and multi-view generation techniques and provides guidelines for when their application is beneficial to rendering performance. For a real-world test, the author used OpenSG 1.8 within a testbed application to load and display two different VRML models. OpenSG was chosen due to its known use in existing virtual reality applications, especially those based on VRJuggler, and for its support of the VRML 97 file format. The Balcony House model, based on a site at Mesa Verde National Park, is a superset of the model used in previous work. The second model is a digital elevation model (DEM) of the state of Colorado, including a texture map showing elevation and road locations (see photos). The Balcony House model comprises approximately 700 unique objects and in total contains only 40,000 triangles; however, it uses over twenty high-resolution (up to 4kx4k) textures requiring at least 500MB of total storage. The Colorado DEM scene, on the other hand, contains fewer than twenty unique objects, but has over 400,000 total triangles and only one medium-resolution (2kx2k) texture. Displaying the Balcony House scene puts more demands on the CPU for scene traversal, frustum culling, and graphics driver overhead than the Colorado DEM. The DEM represents the near-ideal situation for maximum GPU performance: a small number of objects with a large number of triangles per object. Applications that visualize architectural data or CAD/CAM models, which are composed of many small pieces, might see improved performance. Also, molecular visualization applications that are not already optimized to aggregate small objects into batches might improve performance through single-pass approaches, but since these scenes are not normally texture-mapped, the benefits may not be significant. Geoscience applications and terrain visualization systems are most likely not good candidates for layered rendering, since these applications tend to present fewer batches, each containing large lists of triangles. Other applications, such as medical training systems or cultural heritage environments, are difficult to predict, as their content varies from simple to very complex models.

The two scenes used in the layered rendering analysis: Balcony House Model (top) and Colorado DEM (bottom)

SparseSPOT: Using A Priori 3D Tracking for Real-Time Multi-Person Reconstruction
Anuraag Sridhar, and Arcot Sowmya, University of New South Wales, Sydney, Australia

Voxel reconstruction has received increasing interest in recent times, driven by the need for efficient reconstructions of real-world scenes from video images. The voxel model has proven useful for activity recognition and motion-capture technologies. However, most current voxel reconstruction algorithms operate on a fairly small 3D real-world volume and only allow a single person to be reconstructed. In this paper the researchers present SparseSPOT, an extension of the SPOT voxel reconstruction algorithm that enables real-time reconstruction of multiple humans within a large environment. They compare SparseSPOT to SPOT and show, by extensive experimental evaluation, that the former achieves superior real-time performance. The illustration shows a comparison of SparseSPOT against SPOT. The results show that SparseSPOT reconstruction is comparable to that of the original SPOT algorithm and additionally provides two other benefits. It eliminates some noise that SPOT picks up outside the tracked humans, by not checking those voxels at all: SparseSPOT only checks voxels that are near a tracked human, so any voxels too far from humans are removed. Another benefit is that SparseSPOT assigns voxels to tracked individuals during voxel reconstruction itself, so no computation time need be wasted assigning voxels to individuals after reconstruction, making SparseSPOT highly suitable for voxel model-based tracking applications.
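
The core idea, testing only voxels near tracked people and labeling them with a person's id in the same pass, can be sketched as follows; the bounding-box neighborhood and the occupancy callback are stand-ins for the real silhouette tests.

```python
# SparseSPOT-style reconstruction restricted to tracked neighborhoods (sketch).
import numpy as np

def sparse_reconstruct(tracks, radius, occupied, grid_shape):
    """tracks: {person_id: (x, y, z) voxel center}; occupied(p) -> bool is
    the multi-camera silhouette test; returns per-voxel person labels."""
    labels = -np.ones(grid_shape, dtype=int)     # -1 = empty space
    for pid, center in tracks.items():
        c = np.asarray(center, dtype=int)
        lo = np.maximum(c - radius, 0)
        hi = np.minimum(c + radius + 1, grid_shape)
        for idx in np.ndindex(*(hi - lo)):       # only voxels near this person
            p = lo + np.array(idx)
            if occupied(p):
                labels[tuple(p)] = pid           # label during reconstruction
    return labels
```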

SPOT vs. SparseSPOT comparison. Left: the original image; middle: the result from SPOT; right: the result from SparseSPOT.

Standalone Edge-Based Markerless Tracking of Fully 3D Objects for Handheld Augmented Reality
João P. Lima, Veronica Teichrieb, and Judith Kelner, Federal University of Pernambuco, Recife, Brazil
Robert W. Lindeman, Worcester Polytechnic Institute, Worcester, Massachusetts

This paper presents a markerless tracking technique targeted at the Windows Mobile Pocket PC platform. The primary aim of this work is to allow the development of standalone augmented reality applications for handheld devices based on natural feature tracking of fully three-dimensional objects. In order to achieve this goal, a model-based tracking approach that relies on edge information was adopted. Since it does not require high processing power, it is suitable for constrained devices such as handhelds. The OpenGL ES graphics library was used to detect the visible edges in a given frame, taking advantage of graphics hardware acceleration when available. In addition, a subset of two computer vision libraries was ported to the Pocket PC platform in order to provide some required algorithms to the markerless mobile solution. They were also adapted to use fixed-point math, with the purpose of improving the overall performance of the routines. The port of these libraries opens up the possibility of executing other computer-vision tasks on mobile platforms. An augmented reality application was created using the implemented technique, and evaluations were done regarding tracking performance, accuracy and robustness. In most of the tests, the frame rates obtained were suitable for handheld augmented reality, and a reasonable estimation of the object pose was provided. After first tests with synthetic data, the handheld edge-based tracker was evaluated using images of the real world captured by a camera. The illustration depicts some augmentation results. The tracker proved to be robust up to a certain level of occlusion of the tracked object. The cube model has 12 contour edges and was tracked at 15fps. The wood toy model has 30 contour edges and was tracked at 10fps. Tracking on the mobile device is highly dependent on the edge count, currently being suitable only for non-complex objects. It can achieve interactive frame rates (4-5fps) when the object has at most a hundred edges.
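
Fixed-point arithmetic of the kind used in such ports keeps a binary point at a fixed bit position so that FPU-less handhelds stay fast; the Q16.16 helpers below illustrate the idea only, since the libraries' actual representation is not specified in the summary.

```python
# Illustrative Q16.16 fixed-point helpers (assumed format).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fix(x: float) -> int:
    return int(round(x * ONE))

def fix_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS        # re-align the binary point

x = fix_mul(to_fix(1.5), to_fix(2.25))
print(x / ONE)                         # -> 3.375
```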

Handheld augmented reality results using the developed markerless 3D tracker


Visual Feedback Techniques for Virtual Pointing on Stereoscopic Displays Ferran Argelaguet, and Carlos Andujar, Universitat Politècnica de Catalunya, Barcelona, Spain

The act of pointing to graphical elements is one of the fundamental tasks in human-computer interaction. In this paper the researchers analyze visual feedback techniques for accurate pointing on stereoscopic displays. Visual feedback techniques must provide precise information about the pointing tool and its spatial relationship with potential targets. They show both analytically and empirically that current approaches provide poor feedback on stereoscopic displays, resulting in low user performance when accurate pointing is required. They propose a new feedback technique following a camera viewfinder metaphor. The key idea is to locally flatten the scene objects around the pointing direction to facilitate their selection. They present the results of a user study comparing cursor-based and ray-based visual feedback techniques with this approach. The results indicate that the viewfinder metaphor clearly outperforms competing techniques in terms of user performance and binocular fusion. The viewfinder metaphor technique assumes a selection ray cast from the eye. The key idea is to locally flatten potential targets in the vicinity of the pointing direction by projecting them onto a small virtual screen attached to the pointing direction itself (see figure). They call this technique viewfinder because the resulting effect is similar to looking at a small part of the scene through an LCD digital camera display. A 2D cursor on the viewfinder represents the pointing direction. Since the cursor and the objects displayed on the viewfinder are drawn at fixed parallax, they avoid selection ambiguity problems that have discouraged image-plane techniques in stereoscopic displays.
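Geometrically, the flattening is a perspective projection of nearby targets onto a plane orthogonal to the pointing ray. A minimal sketch of that step (our illustration; the function name and the fixed screen distance f are assumptions, and the real technique also handles rendering and occlusion):

import numpy as np

def project_to_viewfinder(points, eye, direction, f):
    """Project scene points onto a small virtual screen placed at distance
    f along the pointing direction (assumes all points lie in front of the
    eye, i.e. positive depth along the ray). points: (N, 3) array."""
    d = direction / np.linalg.norm(direction)
    rel = points - eye               # rays from the eye to each point
    depth = rel @ d                  # signed distance along the pointing axis
    t = f / depth                    # per-point scale that lands on the plane
    return eye + rel * t[:, None]    # flattened, fixed-parallax copies

Because every flattened copy lies on one plane, the cursor and the candidate targets render at identical parallax, which is precisely the ambiguity fix the authors describe.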

Viewfinder metaphor: viewing pyramids of L/R eyes and the viewfinder (left). User view of the viewfinder (right, not to scale).

D3: An Immersive Aided Design Deformation Method Vincent Meyrueis, Alexis Paljic, and Philippe Fuchs, Mines ParisTech, Paris, France

This paper introduces a new deformation method adapted to immersive design. The use of virtual reality (VR) in the design process implies a physical displacement of project actors and data between the virtual reality facilities and the design office. The decisions taken in the immersive environment are manually reflected on the computer-aided design (CAD) system. This increases the design time and breaks the continuity of the data workflow. On this basis, there is a clear demand in the industry for tools adapted to immersive design, but few methods exist that encompass CAD in VR. For this purpose, the researchers propose a new method, called D3, for “Draw, Deform and Design”, based on a two-step manipulation paradigm, consisting of 1) area selection, and 2) path drawing, and a final refining and fitting phase (see illustration). The use of tool “key frames” as control points of the deformation curve widens the deformation possibilities to twist or taper deformations. The difference between rotation and twist depends on the selection type. In order to perform a rotation, all the selected object points rotate by the same angle. To perform a twist deformation, points rotate by different angles. The fact that selection and deformation movements are independent is a strong advantage. The user can choose the location in space where the deformation path is. It can be close to the surface selection or distant: this way the user has the possibility of defining the deformation curve and its reference frames at the most appropriate location. For example, it could be interesting to define a rotation center far from the selected surface to make an extrusion that has a circular path. This gives the user better precision for achieving such a deformation than trying to hand-shape a perfect circular path.
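The rotation/twist distinction the authors draw reduces to whether the rotation angle is constant across the selection or varies per point. A minimal sketch, assuming a z-axis twist with a linear angle profile (both assumptions of ours):

import numpy as np

def twist_z(points, angle_per_unit):
    """Twist about the z-axis: each point rotates by an angle proportional
    to its height, so the selection deforms progressively instead of
    rotating rigidly. points: (N, 3) array."""
    theta = angle_per_unit * points[:, 2]   # per-point rotation angle
    c, s = np.cos(theta), np.sin(theta)
    x, y = points[:, 0], points[:, 1]
    return np.column_stack((c * x - s * y, s * x + c * y, points[:, 2]))

Holding theta constant for every point turns the same code into the rigid rotation; the only difference, as the paper notes, is whether the angle varies across the selected points.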

Scheme of the D3 deformation method


3D Object Arrangement for Novice Users: the Effectiveness of Combining a First-Person and a Map View Luca Chittaro, Roberto Ranon, and Lucio Ieronutti, University of Udine, Udine, Italy

Arranging 3D objects in virtual environments can be a complex, error-prone and time-consuming task, especially for users who are not familiar with interfaces for 3D navigation and object manipulation. In this paper, the researchers analyze and compare novice users' performance on 3D object arrangement tasks using three interfaces that differ in the views of the 3D environment they provide: the first one is based only on a first-person view; the second one combines the first-person view and a map view in which the zoom level is manually controlled by the user; the third one extends the second with automated assistance in controlling the map zoom level during object manipulation. The study shows that users without prior experience in 3D object arrangement prefer and actually benefit from having a map view in addition to a first-person view in object arrangement tasks. The type of manipulation (translation, rotation or scale) is chosen by clicking on the corresponding icon available in the upper part of the first-person view window (see Figure 1(a) and Figure 1(b)). To apply the currently chosen manipulation to an object, the user has to start a drag action over the object. Two visual aids highlight the possibility of interacting with an object when the mouse pointer is over it: (i) the pointer changes its shape according to the currently selected manipulation mode, and (ii) the bounding box of the object is displayed. For example, in Figure 1(a), rotation is the currently selected manipulation mode but the mouse pointer is not positioned over an object. In this case, mouse drags control the user orientation. In Figure 1(b), rotation is the currently selected manipulation mode and the mouse pointer is over a manipulable object. To make object positioning easier, a grid parallel to the xz-plane and aligned with the bottom of the bounding box of the manipulated object is displayed during the translation (see Figure 2). As the vertical position of the currently manipulated object changes, the position of the grid is updated accordingly. The grid is aimed at providing additional visual cues to understand the correct position of the selected object in 3D space, especially during vertical positioning.

Figure 1: (a) The mouse pointer is not positioned over an object; (b) when the mouse pointer is positioned over the object, the bounding box of the object and the shape of the mouse pointer indicate the possibility of manipulation; Figure 2: During translation of an object, a grid parallel to the xz-plane and aligned with the bottom of the object is displayed

Crafting Memorable VR Experiences using Experiential Fidelity Robert W. Lindeman, Worcester Polytechnic Institute, Worcester, Massachusetts Steffi Beckhaus, University of Hamburg, Hamburg, Germany

Much of virtual reality is about creating virtual worlds that are believable. But though the visual and audio experiences we provide today technically approach the limits of human sensory systems, there is still something lacking; something beyond sensory fidelity hinders us from fully buying into the worlds we experience through VR technology, according to the researchers. They introduce the notion of “Experiential Fidelity”, which is an attempt to create a deeper sense of presence by carefully designing the user experience. They suggest guiding the user’s frame of mind in a way that their expectations, attitude, and attention are aligned with the actual VR experience, and that the user’s own imagination is stimulated to complete the experience. They propose to do this by structuring the time prior to exposure to increase anticipation, expectation, and the like. The illustration gives a simplified overview of possible factors influencing the experience.


A Wide-view Parallax-free Eye-mark Recorder with a Hyperboloidal Half-silvered Mirror Erika Sumiya, Tomohiro Mashita, Kiyoshi Kiyokawa, and Haruo Takemura, Osaka University, Osaka, Japan

This paper proposes a wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror. The eye-mark recorder provides a wide field-of-view (FOV) video recording of the user’s exact view by positioning the focal point of the mirror at the user’s viewpoint. The vertical view angle of the prototype is 122 degrees (elevation and depression angles are 38 and 84 degrees, respectively), and its horizontal view angle is 116 degrees (nasal and temporal view angles are 38 and 78 degrees, respectively). They have implemented and evaluated a gaze estimation method for this eye-mark recorder. Experimental results have verified that it successfully captures a wide FOV of a user and estimates a rough gaze direction. The wide FOV eye-mark recorder is composed of a small camera and a hyperboloidal half-silvered mirror as shown in Figure 1. Every light ray going to the inner focus of a hyperboloid is reflected on its surface toward the outer focus. So the small camera placed at the outer focus will capture the wide FOV image effectively from the viewpoint of the inner focus. If the user’s eye is placed at the inner focus, the wide FOV image exactly from their viewpoint will be recorded. Also, the half-silvered mirror has a cut on it to directly capture the user’s eyeball for eye-tracking purposes. The primary advantages of the proposed design include: a wide FOV image comparable to the user’s own field-of-view can be recorded by using a convex mirror; the user’s view is parallax-free and can be recorded exactly from the user’s own viewpoint thanks to the geometric constraints of a hyperboloid; the user’s eyeball can be recorded for eye-tracking purposes; and a single camera is used for recording both the user’s view and the eyeball, eliminating the necessity of a secondary camera or a synchronization mechanism. Figure 2 shows the appearance of the helmet. A circular cut of about 20mm in diameter is made on the mirror, and the small camera captures the user’s eyeball through the cut as well as the user’s visual field through the primary mirror.
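For reference, the focal geometry behind this design is standard hyperboloidal-mirror optics (the parameters below are generic, not the prototype's):

\frac{x^{2} + y^{2}}{a^{2}} - \frac{z^{2}}{b^{2}} = -1, \qquad c = \sqrt{a^{2} + b^{2}}

A two-sheet hyperboloid of this form has foci at (0, 0, ±c), and any ray aimed at one focus is reflected through the other; placing the eye at the inner focus and the camera at the outer focus exploits exactly this property.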

Figure 1: Schematic diagram of a hyperboloidal eye-mark recorder; Figure 2: The prototype eye-mark recorder

>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

Veritas et Visus (Truth and Vision) publishes a family of specialty newsletters about the displays industry:

Flexible Substrate • Display Standard • 3rd Dimension • High Resolution • Touch Panel

http://www.veritasetvisus.com

ACM Multimedia 2009 October 19-24, 2009, Beijing, China

Phillip Hill covers papers from Trinity College, University of Advancing Technology, University of Illinois at Urbana-Champaign (x2), The Chinese University of Hong Kong/Shenzhen Institute of Advanced Technology, and University of British Columbia

Interacting with a Personal Cubic 3D Display Billy Lam, Ian Stavness, Ryan Barr, Sidney Fels, University of British Columbia, Vancouver, British Columbia

The researchers describe a demonstration of four novel interaction techniques for a cubic head-coupled 3D display. The interactions illustrated include: viewing a static scene, navigating through a large landscape, playing with colliding objects inside a box, and stylus-based manipulation of objects. Users experience new interaction techniques for 3D scene manipulation in a cubic display, the paper suggests. The pCubee display uses five 5-inch LCD panels arranged as five sides of a box. Three graphics pipelines drive the screens on the sides of the box visible to the user. A Polhemus Fastrak is used to track the pCubee and the user’s head in order to couple the user’s view of the box to the rendered perspective on each screen. This creates the illusion for the user of looking into a box with clear sides to see a small virtual diorama contained inside it. Figure 1 shows a participant playing with pCubee. The 3D scene is rendered with Open Scene Graph (OSG) and object physics and collisions are simulated with the Nvidia PhysX engine. The software architecture of pCubee could also support alternative rendering and physics engines. While 3D objects enclosed inside the box are naturally viewed by looking into different sides, larger virtual scenes that extend outside the bounds of the physical box present a problem in navigating to see distant parts of the scene. The researchers have devised an interaction method for navigating 3D landscapes in pCubee in which the viewpoint translates in the direction that the pCubee is tilted. The metaphor is of a marble rolling within the landscape, with the view into pCubee always centered on the marble. This navigation technique is similar to a “side scroller” video game in 3D, but with physical tilting of the display causing movement of the scene. They use the physics engine of pCubee to generate the tilt-to-translate action. This also gives the marble a realistic rolling and bouncing animation as it moves throughout the landscape (see Figure 2).
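Head-coupled rendering of this kind is typically implemented as an off-axis (generalized) perspective projection computed per screen, per frame. The sketch below is a generic version of that standard recipe, not pCubee's published code; the corner and head positions are assumed to come from the Fastrak in a shared world frame:

import numpy as np

def offaxis_frustum(pa, pb, pc, eye, near):
    """Head-coupled frustum for one screen of a pCubee-style display.
    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world);
    eye: tracked head position; near: near-plane distance.
    Returns (left, right, bottom, top) for a glFrustum-style projection."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                   # screen normal, toward the viewer
    va, vb, vc = pa - eye, pb - eye, pc - eye  # eye-to-corner vectors
    d = -(va @ vn)                             # eye-to-screen distance
    s = near / d                               # scale corners onto the near plane
    return (vr @ va * s, vr @ vb * s, vu @ va * s, vu @ vc * s)

Recomputing the visible screens' frusta every frame from the tracked head position is what sustains the look-into-a-box illusion as the viewer moves.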

Figure 1: A participant playing with pCubee by bouncing cows around inside the box; Figure 2: A 3D landscape in pCubee that extends outside of the box; the rolling ball is used to navigate through the scene


Kosmoscope – A Seismic Observatory Tim Redfern, and Mads Haahr, Trinity College, Dublin, Ireland

Kosmoscope, a telematic art installation by Tim Redfern, marries two technologies, the kaleidoscope and the seismograph, in order to immerse viewers within an abstract audiovisual representation of earth tremors. Kosmoscope uses state-of-the-art seismic monitoring to render live scientific data in an impressionistic way, making the worldwide network of ultra-sensitive seismic microphones audible and creating an imposing, visually fractured narrative that evokes the face of geological forces. This paper describes Kosmoscope’s influences, aims, design and realization. Kosmoscope (Greek: kosmos: universe/world/people; skopein: to look at) is a telematic art installation utilizing live computer-generated visuals and immersive sound. Kosmoscope was constructed in June 2008 and first shown at the Sebastian Guinness Gallery in Dublin, Ireland. Designed to fit the dimensions of the Georgian stairwell in which it was shown, Kosmoscope is a 5.2m high aluminum structure that contains five 2.4m long mirrors and a vinyl rear-projection screen, displaying live computer-generated imagery from an LCD projector. Altering the symmetry of the classical “box” kaleidoscope, Kosmoscope creates an illusion that is fragmented and chaotic, evoking the balance of dynamic forces within the earth. On entering, audience members find themselves within an illusory space appearing far larger than the structure that produces it, which presents a multiplicity of jumbled, fragmented reflections of the audience themselves and their surroundings stretching upwards to a dark, faceted sphere above their heads. As time passes, patterns of fractured lines course across the huge overhead globe, and resonant rumblings, percussive crashes and groaning sounds surround the audience. These sounds and drawings translate seismic readings, gathered from around the world via the Internet, into the visible and audible domain (see Figure 1).

Figure 1: Kosmoscope interior; Figure 2: CAD drawing of the Kosmoscope

Kosmoscope is a large, vertical, free-standing installation that the audience may walk straight into (see Figure 2). The lower half of the structure comprises mirrored panels, whereas the open top half positions another mirror, which bounces the beam from the projector back onto the black vinyl screen that separates the two halves of the installation. Kosmoscope differs from most box kaleidoscopes in that it uses pentagonal symmetry. Kosmoscope’s 5-sided geometry is deliberately imperfect: in effect, every reflected break between two mirrors reveals two different versions of the entire illusion. Kosmoscope was realized through an iterative process of design prototyping and visualization, using ray-tracing software to simulate the kaleidoscope effect. The final solution is a compromise between the sizes of materials used, the size of the stairway space it was constructed for, the optical capabilities of the projector, and the qualities of the illusion created.


Chiasmus Stephen Cady, University of Advancing Technology, Tempe, Arizona

Chiasmus is a responsive and dynamically reflective, two-sided volumetric projection surface that embodies the formation and reception of images. It consists of a square grid of 64 individually motorized cube elements engineered to move linearly. Each cube is controlled by custom software that analyzes video imagery for luminance values and sends these values to the motor control mechanisms to coordinate the individual movements. The individual movements allow the volume of the sculptural screen to alter dynamically, providing novel perspectives of its mobile form to an observer. The predominant physical presentation of Chiasmus is a three-dimensional sculptural form that consists of 64 distinct cube forms arranged in an eight-by-eight array. Video content is projected upon the two opposing faces. Each cube is motorized and engineered to move linearly in response to the luminance values of the video content. The continual shifting of the cubes causes the overall form to dynamically alter, reshaping itself into unique patterns and arrangements. Sixteen motors are attached to each unit and two units can be networked together to allow for the control of 32 motors over one I/O serial line from the micro-controller. The graphical programming language Max/MSP/Jitter is employed to analyze the video luminance and control the serial messaging to the micro-controller. Through this software, the video imagery is segmented into a matrix of 64 distinct sections that correspond to each individual cube form. Each segment's pixels are analyzed for an average 8-bit luminance value and this derived value is plotted onto a simple three-part scale that is used to control the three distinct motor positions: fully extended, centered or fully recessed. The darkest and lightest tones move the cubes to the extreme linear ends while middle tones provide a central position. As the video content is projected onto both opposing sides of the sculptural form, the movement in relation to the associated tonal value is ultimately dependent upon the subjective positioning of the viewer.
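The luminance-to-position mapping amounts to a three-way threshold. The cut-off values below are assumptions for illustration; the paper specifies only a three-part scale:

def cube_position(mean_luma: int) -> str:
    """Map a cell's average 8-bit luminance onto the three motor positions
    described above (the threshold values are hypothetical)."""
    if mean_luma < 85:
        return "recessed"   # darkest third: one linear extreme
    if mean_luma < 170:
        return "centered"   # middle tones: the central position
    return "extended"       # lightest third: the other extreme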

Chiasmus shows interplay between moving surfaces and a projected image as shown on the left; on the right is the Chiasmus mechanical frame

Real-Time Remote Rendering of 3D Video for Mobile Devices Shu Shi, Won J. Jeon, Klara Nahrstedt, and Roy H. Campbell, UIUC, Urbana, Illinois

The requirements of huge network bandwidth and computing resources make it a big challenge to render 3D video on mobile devices in real time. This paper presents how a remote rendering framework can be used to solve the problem. The differences between dynamic 3D video and static graphic models are analyzed. A general proxy-based framework is presented to render 3D video streams on the proxy and transmit the rendered scene to mobile devices over a wireless network. An image-based approach is proposed to enhance 3D interactivity and reduce the

interaction delay. In order to render 3D video streams on mobile devices, the researchers replace the rendering part in the TEEVE system (Tele-immersive Environments for Everybody) with mobile devices and a powerful server as the proxy. The proxy comprises four components. The input interface is used to communicate with the gateway server and receive 3D video streams. The rendering module processes the rendering of 3D video at the resolution of mobile screen size. The output interface sends the generated image frames to mobile devices. The motion module manages the rendering viewpoint according to the user interaction information collected from mobile devices. The mobile device only displays the image frames received from the proxy and updates the user interaction information to the proxy if the user tries to change the rendering viewpoint. This proxy-based framework provides two major benefits. First, the proxy offers enough network bandwidth to receive 3D video streams from the gateway server, and abundant computing resources for 3D rendering. Second, the proxy hides the details of 3D video representations from mobile devices. Although the current proxy is designed for 3D video represented by depth images, it can be easily modified to be compatible with other 3D video formats while the implementation on each mobile platform remains unchanged.

Boosting 3D Object Retrieval by Object Flexibility Boqing Gong, and Chunjing Xu, The Chinese University of Hong Kong, Hong Kong, China Jianzhuang Liu, and Xiaoou Tang, Shenzhen Institute of Advanced Technology, Shenzhen, China

This paper proposes a novel feature, called “object flexibility”, at a point of a 3D object to describe how massively the neighborhood of this point is connected to the object. This feature is stable to both linear transformations and non-linear deformations caused by an object’s articulations. Based on this object flexibility, the researchers propose a new shape descriptor for 3D object retrieval. Extensive experiments show that it outperforms a variety of existing shape descriptors in the retrieval of articulated 3D objects, which are often natural objects like animals, plants, and humans. Also, combined with existing shape descriptors, it helps to obtain better performance for retrieving generic 3D objects. Since the flexibility describes local shape characteristics, one has to select enough points of a 3D model to obtain a complete shape description. For an object represented by voxels, the method selects all its surface points, each of which has less than 13 non-zero voxels among its 26 neighbor voxels. In the researchers’ experiments, 590 surface points of a 3D model are left on average after filtering out inner points in the McGill database, where each model is represented by 128x128x128 voxels. For an object represented by meshes in the Princeton shape benchmark, 2000 surface points are sampled in the first round, and then 500 points are randomly selected from them. The research uses all the 2000 points to compute the flexibilities of the 500 points. One example of the sampled points from a mesh model is shown in Figure (b).
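The surface-point selection rule quoted above translates directly into code. A sketch of that rule (ours, not the authors' implementation):

import numpy as np
from scipy.ndimage import convolve

def surface_voxels(volume):
    """Keep occupied voxels with fewer than 13 occupied neighbors among
    their 26 neighbors, the surface criterion used in the paper.
    volume: 3D 0/1 occupancy array."""
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0                        # count the 26 neighbors only
    n = convolve(volume.astype(int), kernel, mode="constant", cval=0)
    return volume.astype(bool) & (n < 13)      # interior voxels are dropped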

(a) A 3D model of an ant. (b) Sampled points of the ant. (c) The flexibility distribution on the ant.

Immersive Environments for Rehabilitation Activities Peter Bajcsy, Kenton McHenry, Hye Jung Na, Rahul Malik, Andrew Spencer, Suk Kyu Lee, Rob Kooper, and Mike Frogley, University of Illinois at Urbana-Champaign (UIUC), Urbana, Illinois

This paper presents new technologies for real-time immersion of humans into virtual reality environments with non-invasive real-time 3D imaging; a new methodology for evaluating immersive VR spaces in rehabilitation applications; and experimental results documenting the benefits of immersive VR spaces for regaining proprioception. The work focuses on designing immersive VR spaces with non-invasive multimedia sensory inputs where real time digital clones of humans are fused with virtual scenes for rehabilitation purposes. The researchers hypothesize that humans with proprioceptive impairments can use their other senses as the proprioceptive feedback from real-time 3D+color reconstructions of their bodies in space. Their objective is to investigate this hypothesis

and quantify any benefits of immersive environments for regaining proprioception as one example of a rehabilitation application. The paper describes the portable immersive VR system for real time 3D imaging, reconstruction and rendering; a new methodology for quantitative evaluations of rehabilitation experiments in immersive VR spaces; and the experimental results obtained for validating the hypothesis with wheelchair basketball athletes. The novelty of the work lies in the first of its kind evaluation of the benefits of immersive VR spaces with multimedia cues for regaining proprioception. In the experiments, three bi-modal marker targets consisting of blue and green halves are placed on the floor and monitored by a ceiling camera. The goal is to reach the targets in such a way that one half (one color) of the target is occluded by the wheelchair front as viewed by a ceiling camera. In effect, these markers simulate a virtual wall, where the intersection between the colors represents that wall and proximity to that wall can be measured by the amount of each color seen. While the marker size is smaller than the wheelchair, the design of the wheelchair is such that the front fender and marker are of the same size and this is where the first point of intersection occurs. While presenting each cue, the subject has to move from the base location (past a black line on the floor) to one of the green/blue targets, stop at the boundary of green and blue as accurately as possible, come back to the black line and then proceed to the next target. There are three targets on the floor and three repetitions of the movements to the three targets. This is an approximation of the clover exercise used by wheelchair basketball players.
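The color-based proximity measure lends itself to a small sketch. The green/blue pixel tests below are crude assumptions for illustration; the actual system presumably calibrates its thresholds for the ceiling camera:

import numpy as np

def green_half_visible(roi_rgb):
    """Fraction of the marker's green half still visible to the ceiling
    camera, a stand-in for the wall-proximity measure described above
    (function name and color rules are our assumptions).
    roi_rgb: (H, W, 3) uint8 patch containing the bi-modal marker."""
    r = roi_rgb[..., 0].astype(int)
    g = roi_rgb[..., 1].astype(int)
    b = roi_rgb[..., 2].astype(int)
    green = (g > r + 30) & (g > b + 30)   # crude green-pixel test
    blue = (b > r + 30) & (b > g + 30)    # crude blue-pixel test
    total = green.sum() + blue.sum()
    return green.sum() / total if total else 0.0  # 0.0 = green half fully occluded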


Eurodisplay 2009/IDRC September 14-17, 2009, Rome, Italy

In the first of two reports, Phillip Hill covers presentations from Prokhorov General Physics Institute of Russian Academy of Sciences, Tohoku University, Holografika Ltd., Konan University/Nagaoka University of Technology/Shimane University, and University College London/Koç University/De Montfort University

Laser Scanning 3D Display with Dynamic Exit Pupil Hadi Baghsiahi, Eero Willman, Sally E. Day, David R. Selviah, and F. Aníbal Fernández, University College London, London, England; Kishore V.C., Erdem Erden, and Hakan Urey, Koç University, Istanbul, Turkey; Phil Surman, De Montfort University, Leicester, England

A novel auto-stereoscopic 3D display system that forms dynamic exit pupils with the aid of a head-tracker is described. Different views are displayed using time multiplexing. Visible lasers, beam combiner, shaping and scanning optics, and exit pupil control optics are discussed. It is estimated that 100fL display brightness is achievable with about 3W per color of R, G, and B lasers. The 3D display does not require the use of special glasses. Temporal multiplexing for left and right eyes is employed, and corresponding dynamic exit pupils are formed with head tracking. Figure 1 shows a schematic of the system in two parts: (a) the light engine, where a projection lens, L1, projects the light to L2, and (b) the transfer screen. An individual view is formed with a scanning laser beam, which has been formed into a column for scanning across the liquid-crystal-on-silicon (LCoS) micro-display. This view is then projected through to the transfer screen where a spatial light modulator (SLM) is used to select the angle of the exit pupil. Multiple angles may be selected for multiple viewers if they are to see the same stereoscopic view, or, if a suitably fast LCoS unit were available, separate views could be shown to different viewers. A full-color display is obtained by using light from red, green and blue lasers, which are modulated separately by the LCoS unit consisting of three microdisplays. The image is projected via L1 into the transfer screen section as in a conventional projection system. The full system will use a head-tracking system to identify which elements of the SLM should be switched in order to present the image to the correct exit pupil.

Figure 1: (a) Light engine and (b) transfer screen


A simplified prototype of the system outlined above has also been built as a proof of concept. Instead of the laser-based light engine described, two data projectors are used to produce the stereo image pair. Furthermore, a static SLM is used, fixing the exit pupil locations and effectively limiting the number of viewers to one. This set-up allows the researchers to test and evaluate the performance of the transfer screen.

Real-Time Natural 3D Content Displaying with HoloVizio Displays Péter Tamás Kovács, Zoltán Gaál, Attila Barsi, and Zoltán Megyesi, Holografika Ltd., Budapest, Hungary

This paper presents HoloVizio technology as a solution to real-time natural content display. The technology is capable of reproducing large field-of-view continuous parallax light fields, providing a realistic 3D experience of displayed scenes and objects. The paper demonstrates that the system is capable of real-time display of dense light field streams originating from multiple cameras. The researchers present the results of a real-time acquisition and display solution using the HoloVizio system and a camera array of 27 cameras. The paper summarizes the technology, describes the components of the system, and shows results and measurements with natural scenes. The patented HoloVizio technology uses a different approach from stereoscopic, multiview, volumetric and holographic systems. It uses a specially arranged array of optical modules and a holographic screen. Each point of the holographic screen emits light beams of different color and intensity to the various directions. The light beams generated in the optical modules hit the screen points at various angles and the holographic screen makes the necessary optical transformation to compose these beams into a perfectly continuous 3D view. With proper software control, light beams leaving the pixels propagate in multiple directions, as if they were emitted from the points of 3D objects at fixed spatial locations. A 50-megapixel demonstration shows a practical possible use of the system in true 3D teleconferencing (see photos). The reason for using 3D in telepresence is twofold: 3D visualization provides a more life-like experience, and 3D light-field capturing and visualization inherently solves the problem of incorrect eye gaze, which is a serious drawback of all 2D teleconferencing systems, even high-end ones. Although the eye-gaze problem can be solved with multi-view displays, such a solution is limited to a fixed number of people, carefully positioned at the calibrated locations. The solution allows people to move freely, experiencing a continuous motion parallax. The 27 cameras captured 960x720 resolution video streams. The image data was converted on-the-fly to light field format and the continuous parallax 3D image stream was displayed. Two PC clusters of three PCs (built from commercial components) were used, one on the acquisition and one on the display side. With this setup, they reached 15 frames per second playback speed for the 640x480 resolution stream and 10 frames per second for the 960x720 resolution stream. The bottleneck here was the camera acquisition speed. Future improvements are planned, building a system with a higher number of cameras with higher resolution and frame rate. A self-contained intelligent camera hardware eliminating the PCs is under development, providing 64 2-megapixel streams at 30fps, synchronized. As the image format used at the moment is sub-optimal, not being compact enough for Internet transmission, a two-layered approach for decoding and rendering is being developed, using different encodings for transmission (more compact) and light-field rendering (more GPU-friendly), which also allows an arbitrary number of cameras and arbitrary resolution and frame rate. Full GPU decoding of traditional video codecs is also under investigation.
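A back-of-envelope calculation makes the acquisition bottleneck plausible. Raw uncompressed RGB is assumed here purely for illustration; the system's actual light-field format differs:

# Raw data rate of the 27-camera capture stage, assuming uncompressed RGB.
cams, w, h, bytes_per_px, fps = 27, 960, 720, 3, 15
rate = cams * w * h * bytes_per_px * fps   # bytes per second
print(f"{rate / 1e9:.2f} GB/s")            # ~0.84 GB/s of raw pixels

At roughly 0.84 GB/s of raw pixels, it is unsurprising that camera acquisition rather than rendering limited the frame rate, and that a more compact transmission format is under development.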

Continuous motion-parallax, true 3D tele-conferencing


World’s First Full Resolution (at Each View) Auto3D/2D Planar Display Structure Vasily Ezhov, Prokhorov General Physics Institute of Russian Academy of Sciences, Moscow, Russia

The paper describes the first autostereoscopic/2D planar LCD design with full resolution Q in each view, where Q is the number of pixels in the screen. Each pixel of the display matrix carries information about both (left L and right R) image views: the sum of L and R views is presented by the value of light intensity in each pixel, and the ratio of amplitudes of L and R views is coded by the light elliptical polarization state. A phase-polarization electronically switchable parallax barrier is used for directly analyzing (decoding) the encoded polarization state of each pixel. The subsequent visualization of such polarization decoding with the help of a continuous polarization analyzer sheet corresponds to forming two separate (L and R) observations in space. The scheme is shown in the diagrams, where the top part (a) of the drawing corresponds to the (x,y) cross-section of the whole display structure illustrated by the bottom part (b). Matrix J is an intensity matrix, forming in its (m,n) pixel the sum B_L^{mn} + B_R^{mn} of the resolvable elements of both views. Matrix E is a polarization encoding matrix, performing modulation of the light polarization state according to the ratio of the two views B_L^{mn}/B_R^{mn} in each display pixel.
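Stripped of the optics, the per-pixel encoding arithmetic is simple. In the sketch below the polarization-coded quantity is represented as a plain ratio, which is our simplification for illustration; the hardware encodes that ratio as an elliptical polarization state:

import numpy as np

def encode(L, R, eps=1e-9):
    """Per-pixel encoding in the spirit of the scheme above: the intensity
    matrix J carries L+R, and the encoding matrix E carries the L/R balance
    (eps only guards against division by zero)."""
    return L + R, L / (R + eps)

def decode(intensity, ratio):
    """Invert the encoding to recover the two full-resolution views."""
    R = intensity / (ratio + 1.0)
    return intensity - R, R  # (L, R)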

Autostereoscopic display structure: (a) cross-section on the level of row m; (b) isometric view

Monocular Display Unit to Induce Accommodation for Correct 3D Takashi Hosomi, and Kunio Sakamoto, Konan University, Kobe, Japan Shusaku Nomura, Nagaoka University of Technology, Nagaoka, Japan Tetsuya Hirotomi, Kuninori Shiwaku, and Masahito Hirakawa, Shimane University, Shimane, Japan

The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. To realize natural 3D viewing, the researchers have developed a monocular vision system, which can directly project a stereoscopic image on the retina, and a 3D image generation system, which can make 3D computer graphics in accordance with accommodation. Assuming an actual object in the real world, when one perceives this object, part of the projected image on the retina may be blurred by the lens of the eye. This is a physiological response called accommodation. In the case of virtual 3D image viewing as shown in Figure 1, one can watch a correct 3D image, as if the actual object were there, if the projected retina image has the appropriate blur in compliance with the focus adjustment of the eye. One might then perceive virtual images with the same accommodation as when one watches real objects. Thus a monocular 3D vision system can provide correct 3D viewing with accommodation, vergence and binocular stereopsis, and without visual fatigue after long viewing periods, when the retina image is directly projected and external stimulation induces the focus adjustment by

changing the thickness of the eye lens. Figure 2 shows the principle of the 3D vision system using monocular stereoscopy. This display system consists of an LCD panel, an acrylic plate and an optical lens. The observers perceive parallax images at the exact point on which the optical lens converges the light. To perceive multiple parallax images with just one eye, the image-shifting optics consists of a parallel-plane acrylic plate, whose inclination causes the image to shift as shown in Figure 2. An LCD panel is used as the displaying plane of parallax images. The signal controller sends an image signal to the LCD panel in time with a control signal. The displaying plane of parallax images then creates monocular multi-viewing images.

Figure 1: Flow of generating focused image; Figure 2: Optical layout of the monocular 3D display

Glassless 3D Projection Display System using Spatially Divided Iris Plane Shutter with High Resolution Takahiro Ishinabe, Tohru Kawakami, Noriyuki Takahashi, and Tatsuo Uchida, Tohoku University, Miyagi, Japan

The paper proposes a novel glassless 3D projection display system using a spatially divided iris plane shutter. The proper pair of stereo images is sent time-sequentially to the viewers' positions by using the iris plane shutter and the viewer-tracking system. This display offers multi-view images, a wide viewing angle, glasses-free operation, 2D/3D switchability, high resolution, no crosstalk between left and right viewing zones, and a simple structure. Therefore, this display is promising for novel 3D applications such as head-up displays and digital signage. The iris plane is a plane in which all the image information of the object is kept in the optical system. The figure shows an example of the optical system using a convex lens. A convex lens forms a real image of the object. In Figure (a), all the image information of the object passes through the point O, as well as through all the points on the principal plane of the convex lens A-B; therefore the principal plane of the convex lens works as the iris plane. Even if half of the iris plane is closed by the optical shutter as shown in Figure (b), the real image of the object is still formed, though the luminance of the real image is halved.

Image formation of the object with a convex lens. (a) Iris plane shutter in the full open state; (b) half of the iris plane shutter is in the closed state.

Capture, Transmission and Display of 3D Video

June 7‐9, 2010 — Hotel Scandic Rosendahl, Tampere, Finland

CALL FOR PAPERS — 3DTV CONFERENCE 2010

3DTV‐CON 2010 is the 4th in a series of successful conferences having the objective to bring together researchers and developers from academia and industry with diverse experience and activity in distinct, yet complementary, areas to discuss the development of next generation 3DTV technologies, applications and services. The conference involves a wide range of disciplines: imaging and computer graphics, signal processing, telecommunications, electronics, optics and physics. Professionals from these areas are cordially invited to participate at 3DTV‐CON 2010.

• 3D Capture and Processing: 3D audio-visual scene capture and reconstruction techniques for static and dynamic scenes, synchronization and calibration of multiple cameras, holographic camera techniques, multi-view and multi-sensor image and 3D data processing, mixing of virtual and real worlds, 3D tracking.
• 3D Coding and Transmission: Systems, architectures and transmission for 3DTV, coding of multi-view video, 3D meshes, and holograms, audio coding for 3DTV, error resilience and error concealment of 3D video and 3D geometry, signal processing for diffraction and holographic 3DTV.
• 3D Visualization: Projection and display technology for 3D videos, stereoscopic and auto-stereoscopic display techniques and technology, reduced parallax systems, integral imaging techniques, underlying optics and VLSI technology, 3D mesh, texture, point, and volume-based representation, object-based representation and segmentation, 3D motion analysis and animation.
• 3D Quality of Experience: Subjective quality evaluation, objective quality metrics, multimodal experience, interaction with 3D content.
• 3D Applications: 3D television, cinema, games and entertainment, advanced 3D audio applications, 3D tele-immersion and remote collaboration, 3D imaging in virtual heritage and virtual archaeology, augmented reality and virtual environments, underlying technologies for 3DTV, medical and biomedical applications, 3D content-based retrieval and recognition, 3D watermarking, other applications.

Paper submission: Prospective authors are invited to submit original papers, four pages long, in double-column format, including authors' names, affiliations, and a short abstract. Papers will be collected only by electronic submission through the conference site:

http://www.3dtv-con.org

3DTV Conference May 4-6, 2009, Potsdam, Germany

In this third report of four, Phillip Hill covers this IEEE conference with presentations from Tianjin University/Zhongyuan University of Technology (x3), INRIA Rennes/IETR-INSA Rennes, INRIA Rennes/INSA/IETR Rennes/Orange Labs, Heinrich-Hertz-Institut/Technical University of Berlin, University of Grenoble/University of Bordeaux, Bremer Institut für Angewandte Strahltechnik/South Valley University, Kanagawa Institute of Technology, University of Strathclyde, Gwangju Institute of Science and Technology, NTT Cyber Space Laboratories, University of Alberta, IMEC/Hasselt University, Bilkent University, and Poznań University of Technology

A General Multiview LCD Stereo Image Composition Method Based on Optical Plate Technology Lei Yang, Chunping Hou, Jichang Guo, Sumei Li, and Yuan Zhou, Tianjin University, Tianjin, China Xiaowei Song, Zhongyuan University of Technology, Henan Zhengzhou, China

Multiview stereo image composition mainly depends on the type of the multiview device. Currently, multiview LCD optical plate autostereoscopic display devices are common, while composition methods are limited. A new general multiview LCD stereo image composition method is proposed in this paper, based on an optical plate LCD stereo display device. The proposed method mainly consists of three steps: sub-pixel judgment, sub-sampling of the sub-pixels of each view, and arrangement and composition of the sub-pixels. The proposed method covers all possible cases of the optical plate LCD stereo display device. It has good universality and applicability, the researchers say. The feasibility of the proposed method is verified on the detailed stereo display device. Whether lenticular or slit, each unit of the optical plate controls N sub-pixel rays in the horizontal direction. Each unit of the optical plate has a control region, and the judgment should be made first on which optical plate region a certain sub-pixel of the display belongs to. Then, the judgment principle should be set according to the slanted angle of the optical plate. After the judgment, sub-sampling is performed on each of the nine views. After the sub-sampling, composition is performed to form the nine-view stereo image. It should be noted that the width in the horizontal direction of each lenticular unit is 0.7225mm, while the dot pitch of the LCD display is 0.255mm. The number of sub-pixels in a lenticular unit is then 8.5, i.e., 8.5 sub-pixels can be placed into one lenticular unit. Therefore, for the nine views, the width of a lenticular unit can’t hold all sub-pixels of nine different views. To solve this problem, an interlaced arrangement with 8+9 type and 9+8 type in the vertical direction is utilized, so that an ideal stereo effect can be achieved. In the 8-sub-pixel rows of the 8+9 or 9+8 type, the sub-pixel of one view should be discarded, chosen by pseudo-random number selection. The diagram shows a schematic of this arrangement method. The stereo effect of this general composition method was verified to be good by many viewers.
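The 8.5 sub-pixels-per-lenticule figure directly produces the alternation the authors exploit, as a few lines confirm (numpy assumed, for illustration only):

import numpy as np

# 0.7225 mm lenticule pitch / (0.255 mm / 3) sub-pixel pitch = 8.5 sub-pixels
# per lenticular unit, so whole units must alternate between 9 and 8 columns.
subpix = np.arange(34)                     # sub-pixel column indices
unit = np.floor(subpix / 8.5).astype(int)  # owning lenticular unit per column
print(np.bincount(unit))                   # -> [9 8 9 8]: the 9+8/8+9 interlace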

Arrangement for a 9-view lenticular stereo display


Incremental-LDI for Multiview Coding Vincent Jantet, and Christine Guillemot, INRIA Rennes, Rennes, France Luce Morin, IETR-INSA Rennes, Rennes, France

This paper describes an incremental algorithm for layer depth image construction (I-LDI) from multiview plus depth data sets. A solution to sampling artifacts is proposed, based on pixel interpolation (in-painting) restricted to isolated unknown pixels. A solution to ghosting artifacts is also proposed, based on depth discontinuity detection, followed by a local foreground/background classification. The researchers propose a formulation of warping equations which reduces computation time, specifically for LDI warping. Tests on the Breakdancers and Ballet MVD data sets show that extra layers in I-LDI contain only 10% of first-layer pixels, compared to 50% for LDI. I-LDI layers are also more compact, with a less spread-out pixel distribution, and are thus easier to compress than LDI. Visual rendering is of similar quality. To reduce correlation between LDI layers, the method uses an incremental construction scheme, illustrated in the figure, based on residual information extraction. First, the reference view is used to create an I-LDI with only one layer (the view itself). Then, this I-LDI is warped iteratively onto every other viewpoint (in a fixed order), and a logical exclusion difference between the real view and the warped I-LDI is used to compute the residual information. This information is warped back into the reference viewpoint and inserted in the I-LDI layers. By this method, only required residual information from side views is inserted, and no pixels from already defined areas are added to the I-LDI. Conversely, not all the information present in the MVD data is inserted in the I-LDI. Compared to LDI layers, I-LDI layers contain fewer pixels, and these pixels are grouped in connected clusters.
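The incremental loop itself fits in a few lines of pseudocode. Only the control flow below comes from the paper's description; the four callables are hypothetical stand-ins, not the authors' API:

def build_ildi(side_views, ref_view, warp, difference, insert, warp_back):
    """Skeleton of I-LDI construction: start from the reference view and
    fold in only the residue each side view adds (stand-in operators)."""
    ildi = [ref_view]                      # layer 0: the reference view itself
    for view in side_views:                # fixed iteration order, as in the paper
        predicted = warp(ildi, view)       # render the current I-LDI at this viewpoint
        residual = difference(view, predicted)        # pixels the I-LDI cannot yet explain
        insert(ildi, warp_back(residual, ref_view))   # warp residue back, add as layers
    return ildi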

Step of I-LDI construction for view i, with residual information extraction

A 3D Avatar Modeling of Real World Objects Using a Depth Camera Ji-Ho Cho, Hyun Soo Kim, and Kwan H. Lee, Gwangju Institute of Science and Technology, Gwangju, South Korea

This paper proposes a novel 3D avatar generation scheme using a depth camera. The depth camera can capture both visual and depth information of moving objects at video frame rate by using an infrared light source. The method consists of two main steps: alpha matting and mesh generation. The researchers present a novel alpha matting algorithm that combines visual and range information, improving on existing natural alpha matting methods. After alpha matting is performed, a triangular mesh is created by using the RGB, depth, and alpha images. The construction of the 3D triangular mesh consists of four steps (see figure). The method can represent any real-world object, including furry ones. Experimental results show that the matting method demonstrates better results than previous approaches. In particular, the method provides a viable solution for modeling a scene with fuzzy objects.
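For readers unfamiliar with matting, the problem being inverted is the textbook compositing equation (standard material, not specific to this paper):

I = \alpha F + (1 - \alpha) B, \qquad \alpha \in [0, 1]

where the observed pixel I blends an unknown foreground F and background B with opacity \alpha; the paper's contribution is to constrain this under-determined inversion with the depth channel in addition to color.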


Compact Quad-Based Representation for 3D Video T. Colleu, and C. Labit, INRIA Rennes, Rennes, France; L. Morin, INSA/IETR Rennes, Rennes, France S. Pateux, and R. Balter, Orange Labs, Cesson-Sévigné, France

The context of this study is 3D video. Starting from a sequence of multiview video plus depth (MVD) data, the proposed quad-based representation takes into account, in a unified manner, different issues such as compactness, compression, and intermediate view synthesis. The representation is obtained in two steps. Firstly, a set of 3D quads is extracted by using a quadtree decomposition of the depth maps. Secondly, a selective elimination of the quads is performed in order to reduce inter-view redundancies and thus provide a compact representation. Experiments on two real sequences show good quality results at the rendering stage and a small data overhead compared to mono-view video. In the illustrations, Qd is the desired set of quads extracted from all the views after redundancy reduction, and Qi the set of quads from view i. The idea is to initialize Qd with the quads from a reference view Vr and iteratively complete and modify Qd with Qi, from i = 1 to N. Qd is first projected into view Vi. The resulting image contains disocclusion areas. Then the quads from Qi are added to Qd if the pixel block that they form in view Vi covers these disocclusion areas. The illustration (left) shows the projection of Qd in Vi. The disocclusion areas can be seen in white. The other illustration (right) shows the quads from Qi that are added to Qd. The large white regions show that many redundancies have been eliminated.

Redundancy reduction. Left: Projection of Qd in Vn. Right: Quads from Qi added to Qd

Real-time Free Viewpoint Viewer from Multiview Video Plus Depth Representation Shinya Shimizu, Hideaki Kimata, and Yoshimitsu Ohtani, NTT Cyber Space Laboratories, Kanagawa, Japan

This paper presents a real-time video-based rendering system that uses multiview video data with depth representation for free-viewpoint navigation. The proposed rendering algorithm not only achieves high quality rendering but also increases viewpoint flexibility to cover viewpoints that do not lie on the camera baselines. The proposed system achieves real-time decoding of multiple videos and depth maps that are encoded by the H.264/AVC Multiview Video Coding Extension on a regular CPU. The rendering process is fully implemented on a commercial GPU. A performance evaluation shows that the system can generate XGA free-viewpoint images at 30fps.

Examples of rendered images: As can be seen, visually high quality images were rendered from two adjacent and two furthest views. The changing occlusions show that this is “walking” into the scene, not a simple up-sampling.


Remote and Collaborative 3D Interactions Benjamin Petit, Jean-Denis Lesage, Edmond Boyer, and Bruno Raffin, University of Grenoble, Grenoble, France Jean-Sébastien Franco, University of Bordeaux, Bordeaux, France

This paper presents a framework for new 3D tele-immersion applications that allows collaborative and remote 3D interactions. This framework is based on a multiple-camera platform that builds, in real time, 3D models of users. Such models are embedded into a shared virtual environment where they can interact with other users or purely virtual objects. 3D models encode geometric information that is plugged into a physical simulation for interactive purposes. They also encode photometric information through the use of mapped textures to ensure a good sense of presence. Experiments were conducted with two multiple-camera platforms, and the preliminary results demonstrate the feasibility of such environments. The researchers acquired images and generated the 3D meshes at 20fps on each platform. The simulation and the rendering processes were running at 50-60fps and 50-100fps respectively, depending on the load of the system. As they run asynchronously from the 3D model and texture generation, they need to resample the mesh and the texture streams independently. In practice the mesh and texture transfer between sites oscillates between 15fps and 20fps, depending on the size of the silhouette inside the images. Meanwhile the transfer between the 3D modeling and the rendering node inside a platform and the transfer going to the simulation node always run at 20fps. They do not experience any extra connection latency between the two platforms. During execution, the application does not overload the gigabit link. From the user’s point of view the sense of presence is strong (see figure). Users do not need to learn the interaction paradigms, which are in fact the same ones people use to experience the real world.

Left: The 3D virtual environment with a “full-body” user and a “hand” user, interacting together with a virtual puppet. Right: one of the acquisition platforms.

Stereo Video Compression for Mobile 3D Services P. Merkle, H. Brust, K. Dix, and K. Müller, Heinrich-Hertz-Institut, Berlin, Germany T. Wiegand, Technical University of Berlin, Berlin, Germany

This paper presents a study of different techniques for stereo video compression and their optimization for mobile 3D services. Stereo video enables 3D television, but as mobile services are subject to various limitations, including bandwidth, memory, and processing power, efficient compression is required. Three of the currently available MPEG coding standards are applicable to stereo video coding, namely H.264/AVC with and without the stereo SEI message, and H.264/MVC. These methods are evaluated with respect to the limitations of mobile services. The results clearly indicate that for a given bit-rate, inter-view prediction as well as temporal prediction with hierarchical B-pictures leads to significantly increased subjective and objective quality. Although both techniques require more complex processing at the encoder side, their coding efficiency offers the chance to realize 3D stereo at the bit-rate of conventional video for mobile services.


Experimental Investigation of Holographic 3D TV Approach Thomas Kreis, Bremer Institut für Angewandte Strahltechnik, Bremen, Germany Mostafa Agour, South Valley University, Aswan, Egypt

A digital hologram is recorded by a 2D CCD array by superposition of the wave field reflected or scattered from a scene and a coherent reference wave. If the recorded digital hologram is fed to a spatial light modulator (SLM) and this is illuminated by the reference wave, then the whole original wave field can be reconstructed. The reconstructed wave field contains phase and intensity distributions, which means it is fully 3D, exhibiting such effects as depth and parallax. Therefore, the concept of digital holography is a promising approach to 3D TV. In order to obtain an off-axis hologram, i.e. to separate the twin image terms from each other, the orientation of the mirror M1 that reflects the reference wave is set such that the reference wave reaches the CCD target at an incident angle θ while the object wave propagates perpendicularly to the CCD (see Figure 1). Attention must be paid to the adjustment of θ, which must not exceed the maximum value for which the carrier frequency of the interferogram equals the Nyquist frequency of the CCD sensor. Figure 2 presents an intensity hologram recorded with the experimental setup. This hologram consists of 2452x2054 pixels and covers the 8.46x7.09mm sensitive chip area of the CCD. Interference fringes that are characteristic of the off-axis geometry are observed in this figure.
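The sampling constraint on θ can be made concrete (standard off-axis holography; the wavelength below is our assumption, since the laser is not specified in this summary). With pixel pitch \Delta x \approx 8.46\,\text{mm}/2452 \approx 3.45\,\mu\text{m}, the carrier frequency must satisfy

\frac{\sin\theta}{\lambda} \le \frac{1}{2\,\Delta x} \quad\Rightarrow\quad \theta_{\max} = \arcsin\frac{\lambda}{2\,\Delta x} \approx 4.4^{\circ} \quad (\lambda = 532\,\text{nm, assumed})

so mirror M1 must keep the reference-beam angle to a few degrees at most.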

Figure 1: Digital recording of the off-axis Fresnel hologram; Figure 2: Original off-axis sampled intensity hologram

Distortions of Synthesized Views Caused by Compression of Views and Depth Maps Krzysztof Klimaszewski, Krzysztof Wegner, and Marek Domański, Poznań University of Technology, Poznań, Poland

The paper deals with prospective 3D video transmission systems that would use compression of both multiview video and depth maps. The paper addresses the problem of the quality of views synthesized from other views transmitted together with depth information. For state-of-the-art depth map estimation and view synthesis techniques, the paper shows that the AVC/SVC-based Multiview Video Coding technique can be used for compression of both view pictures and depth maps. The paper reports extensive experiments in which synthesized video quality was estimated by use of both the PSNR index and subjective assessment. The critical value of the depth quantization parameter is defined as a function of the reference view quantization parameter. For smaller depth map quantization parameters, depth map compression has negligible influence on the fidelity of synthesized views, the researchers conclude.


Input System for Moving Integral Imaging Using Full HD Camcorder and Fly’s Eye Lens Kazuhisa Yanaka, and Hirokazu Motegi, Kanagawa Institute of Technology, Kanagawa, Japan

Integral imaging is one of the best methods of 3D display because not only horizontal but also vertical parallax is obtained without having to wear special glasses. Content for integral imaging can be created with either CG or live action, and the latter is advantageous in that time-consuming modeling is not necessary as long as real objects are available. Moreover, if the displayed 3D images are moving, a more realistic sensation is obtained. However, considerably large and expensive machines have been necessary to shoot moving integral images up until now. To cope with this issue, the institute developed a simple input system for moving integral imaging in which a household full-HD digital camcorder and a fly’s eye lens are combined. The video data recorded on the flash memory is then moved to a PC and converted with original software into a form that is appropriate for 3D display. The extended fractional view method that the researchers had previously developed was used for the 3D display. With it, a wide range of LCDs, normal PCs, and several fly’s eye lenses available on the market can be combined quite freely, because the ratio between the lens pitch and dot pitch is no longer restricted to integer numbers. Experiments revealed that the proposed input system could capture moving 3D images of sufficient quality. The proposed input subsystem, consisting of a full HD video camcorder and a fly’s eye lens, is shown in Figure 1. The fly’s eye lens used for the input subsystem is different from that used for the display subsystem. A large convex lens can be placed between the fly’s eye lens and the real objects to converge the light rays emitted from the objects; this lens is optional. Figure 2 shows a frame captured with a full HD video camcorder. The resolution is 1920x1080 pixels. From this image, the central 1080x1080 pixels are clipped by the software and divided into n-by-n small images; in Figure 2 the value of n is 8, so the resolution of each small image is 135x135. These small images are used for synthesizing the elemental images. The AVI file was played back with Media Player 11 on a Windows PC and observed through the fly’s eye lens. The researchers found that a full-parallax stereoscopic image could be displayed.

Figure 1: The 3D video input system; Figure 2: Frame captured with full HD video camcorder
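
The clipping and division step is straightforward to express in code. The sketch below is our illustration, not the researchers' software; n is a parameter, and each tile comes out 1080//n pixels square.

```python
# Hedged sketch of the clip-and-split step: take the central 1080x1080
# square of a 1920x1080 frame and divide it into n x n sub-images, each
# 1080 // n pixels square (135 for n = 8).
import numpy as np

def split_center(frame, n):
    """frame: H x W x 3 array from the camcorder (here 1080 x 1920 x 3)."""
    h, w = frame.shape[:2]
    x0 = (w - h) // 2
    square = frame[:, x0:x0 + h]        # central h x h crop
    tile = h // n
    return [square[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            for r in range(n) for c in range(n)]

tiles = split_center(np.zeros((1080, 1920, 3), np.uint8), n=8)
print(len(tiles), tiles[0].shape)       # 64 tiles, each 135 x 135 x 3
```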

Low Cost Multi-View Video System for Wireless Channel
Nurulfajar Abd Manap, Gaetano Di Caterina, and John Soraghan, University of Strathclyde, Glasgow, Scotland

One of the key elements in 3D TV is multi-view video coding: video obtained from a set of synchronized cameras capturing the same scene from different viewpoints. The video streams are synchronized and then coded so as to exploit the redundancy among the video sources. A multi-view video system consists of components for data acquisition, compression, transmission, and display. This paper outlines the design and implementation of a multi-view video system for transmission over a wireless channel. Synchronized video sequences are acquired from four separate cameras and coded with H.264/AVC. The video data is then transmitted over a simulated Rayleigh channel through a Digital Video Broadcasting-Terrestrial (DVB-T) system with Orthogonal Frequency Division Multiplexing (OFDM).
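
As a toy model of the channel (ours, not the authors' DVB-T simulator), the following applies flat Rayleigh fading and additive white Gaussian noise to QPSK symbols and then ideally equalizes them; the SNR value is arbitrary.

```python
# Toy flat Rayleigh fading channel: each transmitted symbol is scaled by a
# complex Gaussian fading coefficient and corrupted by white Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_channel(symbols, snr_db):
    n = len(symbols)
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)   # per real dimension
    noise = noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    received = h * symbols + noise
    return received / h          # ideal (zero-forcing) channel equalization

tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1000) / np.sqrt(2)  # QPSK
rx = rayleigh_channel(tx, snr_db=15)
```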


Optimal Pixel Aspect Ratio for Stereoscopic 3D Displays under Practical Viewing Conditions
Hossein Azari, Irene Cheng, and Anup Basu, University of Alberta, Edmonton, Alberta

In multiview 3D TVs the original 3D scene is reconstructed from the corresponding pixels of adjacent 2D views. For a conventional 2D display the highest image quality is usually achieved by a uniform distribution of pixels. However, recent studies on the 3D reconstruction process show that for a given total resolution, a non-uniform, horizontally finer resolution yields a better visual experience on 3D displays. Unfortunately, none of these studies explicitly models practical viewing conditions, such as the role of the 3D display as a medium and the behavior of the human eyes. In this paper the previous models are extended by incorporating these factors into the optimization process. Based on this extended formulation the optimal ratios are calculated for a few typical viewing configurations, and some supporting subjective studies are presented as well. To understand the behavior of the human eyes, the researchers tracked the eyes’ reaction to changes in disparity and to changes in stereo capturing vergence (see illustration), using a subset of red/blue images prepared for the test. The two upper rows show samples of disparity variations and the corresponding images of the eyes; they show that the eyes’ orientation is almost independent of the amount of disparity. The two bottom rows are the stereo pairs generated from the Bunny 3D mesh under different vergence angles, using the virtual stereo imaging system the researchers established for this purpose. The eyes reveal almost the same behavior in dealing with the stereo pairs generated under the different vergence configurations.

Orientation of the eyes in response to different disparities (top) and different stereo capturing vergences (bottom)

Real-time Transmission of High-resolution Multi-view Stereo Video over IP Networks
Yuan Zhou, ChunPing Hou, Zhigang Jin, Jiachen Yang, and Jichang Guo, Tianjin University, Tianjin, China; Lei Yang, Zhongyuan University of Technology, Zhengzhou, China

In this paper, a real-time high-resolution multi-view video transport system that can deliver multi-view video over IP networks is proposed. Video streams are encoded with H.264/AVC. Owing to the massive amount of data involved, the multi-view video is delivered in two separate IP channels. Since packet losses always occur in IP networks, a novel packet-processing method is employed in the proposed system to preserve the correlation between views for lost-data recovery. Additionally, an error concealment scheme for multi-view stereo video is exploited in this transport system to address the packet loss problem in IP networks. The experimental results show that the proposed transport system is feasible for multi-view video in IP networks. The researchers transported a Lotus multi-view sequence through the proposed system. The Lotus sequence has eight views, with 500 frames provided for each view at a resolution of 720x480. At a packet loss rate of 2%, the eight view streams were first transported over the network without the error concealment process; observation indicates that the sense of stereo of the Lotus multi-view video is seriously damaged after transmission with packet loss and without error concealment. For the multi-view video delivered over the proposed transport system at the same packet loss rate, the sense of stereo is much better.
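
The basic idea of inter-view concealment can be sketched as follows. This is our simplified illustration, not the authors' scheme, which preserves inter-view correlation more carefully; here a lost macroblock is simply replaced by the co-located block from the adjacent view.

```python
# Simplified inter-view error concealment: a macroblock lost in one view is
# replaced by the co-located block from the adjacent view. A real scheme
# would apply disparity compensation rather than a direct copy.
import numpy as np

MB = 16  # macroblock size in pixels

def conceal(frame, other_view, lost_blocks):
    """lost_blocks: list of (row, col) macroblock indices lost in `frame`."""
    out = frame.copy()
    for r, c in lost_blocks:
        ys, xs = r * MB, c * MB
        out[ys:ys + MB, xs:xs + MB] = other_view[ys:ys + MB, xs:xs + MB]
    return out
```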


Migrating Real-time Depth Image-based Rendering from Traditional to Next-Gen GPGPU
Sammy Rogmans and Gauthier Lafruit, IMEC, Leuven, Belgium; Maarten Dumont and Philippe Bekaert, Hasselt University, Diepenbeek, Belgium

This paper focuses on the current revolution in using the GPU for general-purpose computations (GPGPU), and how to maximally exploit its powerful resources. Recently, the advent of next-generation GPGPU replaced the traditional way of exploiting the graphics hardware. The researchers have migrated real-time depth image-based rendering, for use in contemporary 3D TV technology, and noticed that using both GPGPU paradigms together leads to higher performance than non-hybrid implementations. With this paper they want to encourage other researchers to reconsider before migrating their implementations completely, and to use their practical migration rules to achieve maximum performance with minimal effort. The next-gen paradigm offers random writes and flexibility by abstracting the GPU as a generic coprocessor consisting of multiple multiprocessors. Each multiprocessor contains an equal number of stream (scalar) processors and an on-chip shared memory, following a distributed-shared memory model. The shared memory therefore acts as a user-managed cache to control the data transfers from global or texture memory inside the VRAM to the GPU. The joint execution model uses blocks of threads inside a grid, executing individual blocks on a dedicated multiprocessor, while an idle multiprocessor can swap to different thread blocks. However, because the next-generation paradigm exposes the graphics hardware in a more generic and familiar way, it is not able to expose all of the specialized hardware contained inside the GPU.

Real-time Color Holographic Video Display System
Fahri Yaras, Hoonjong Kang, and Levent Onural, Bilkent University, Ankara, Turkey

In this experimental system, a real-time multi-GPU color holographic video display computes holograms from 3D video of a rigid object. The system has three main stages: client, server, and optics. The 3D model consists of discrete points in space; the 3D coordinates and texture information are kept in the client stage, and the coordinate information and color value of each point are extracted from each 3D video frame and sent online to the server through the network. In the server stage, with the help of the parallel processing ability of the GPUs and segmentation algorithms, phase-only holograms are computed in real time. The calculated fringe patterns are sent to the display unit, which consists of SLMs, LEDs, and optics (see figure). The graphics card of the server computer drives the SLMs, and the red, green, and blue channels are controlled in parallel. The received phase-only holograms are loaded to the SLMs, which are illuminated by expanded light from the corresponding LEDs. In the optics stage, the reconstructed color components are combined by using beam splitters, and the reconstructed 3D video is captured by a CCD array without any supporting optics. The proposed system can be used as a color holographic video display.

Overall setup (BE = beam expander)
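
The core of the hologram computation can be illustrated with the textbook point-source method: superpose a spherical wave from each scene point and keep only the phase for the SLM. The sketch below is ours, not the authors' GPU segmentation algorithm, and the resolution, pixel pitch, and wavelength are placeholders.

```python
# Textbook point-source computation of a phase-only hologram: superpose
# spherical waves from each 3D scene point, then keep only the phase.
import numpy as np

def phase_hologram(points, amplitudes, wavelength, pitch, res=(512, 512)):
    """points: (N, 3) array of (x, y, z) in metres; returns a phase map."""
    k = 2 * np.pi / wavelength
    ys, xs = np.indices(res)
    u = (xs - res[1] / 2) * pitch      # SLM pixel coordinates (m)
    v = (ys - res[0] / 2) * pitch
    field = np.zeros(res, dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((u - x) ** 2 + (v - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from each point
    return np.angle(field)             # phase-only hologram for the SLM
```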


An Improved Multiview Stereo Video FGS Scalable Scheme
Lei Yang, Chunping Hou, Jichang Guo, Sumei Li, and Yuan Zhou, Tianjin University, Tianjin, China; Xiaowei Song, Zhongyuan University of Technology, Zhengzhou, Henan, China

A multiview stereo video FGS (Fine Granular Scalability) scalable scheme is presented in this paper, fully utilizing the similarity among adjacent views. A tradeoff scheme is presented in order to adapt to the decoder’s differing demands of Quality First (QF) and View First (VF). The scheme covers three cases: I, P, and B frames. The middle view is encoded as the base layer, while the other views are predicted from the partly retrieved FGS enhancement layers of adjacent views, and the FGS enhancement layer of the current view is generated on that basis. Experimental results show that the presented scheme is more flexibly and extensively scalable, better adapting to different users’ demands on view image quality and stereo immersion.

5th China International 3D World Forum & Exhibition

Gathering the latest global 3D information technology and applications of 3D display, digital technology, home entertainment, and Web 3D

· China’s policies and plans on 3D display industry by the related state administrations and ministries
· Post-production and its development of 3D content
· Development of 3D information technology and digital consumer electronics industry
· 3D display transmission, storage, coding and decoding technologies and development
· 3D chip and video adapter technologies and their prospects
· 3D graphic adaptors, workstations and digital platforms
· 3D filming, coding and transmission technologies and systems
· 3D internet evolution and application development
· 3D display technologies and directions of development
· 3D information technology applications in the field of digital entertainment
· Successful applications of 3D information technology in gaming industry
· 3D panel displays and systems
· 3D display terminals and trends of display technology
· Development of 3D virtual community and business legends
· 2D to 3D display conversion solutions
· Digitalized 3D visual technologies and solutions
· Industrial applications of 3D virtual reality technology
· Creation of 3D content and the current status

http://www.c3dworld.org/english.htm

Collimated backlights

by Adrian Travis

After completing his PhD on fiber-optic waveguides in 1987 at Cambridge University, Adrian Travis started working on displays which use lenses and fast-switching liquid crystals to synthesize a three-dimensional image. The focal depths of the lenses made these displays bulky, so he created a way of scanning illumination from a light-guide, which evolved into a virtual image display. Next came a wedge-shaped waveguide which projects real images, but may also perform many of the other functions of a lens. Travis remains a fellow of Clare College but, in autumn 2007, left his lectureship at Cambridge to work for Microsoft in Seattle.

A simple concept for a 3D display is to illuminate a liquid crystal panel with collimated rays, then to display a sequence of views of a 3D image on the panel while scanning the direction of collimation of the rays. Several fast-switching liquid crystal effects are being developed at the moment, but how do we get collimated rays from a backlight? Slim backlights originally comprised a fluorescent tube placed along the thick edge of a wedge-shaped waveguide through which rays propagated until they reached the critical angle and emerged. Prismatic film was used to bend rays towards the perpendicular, but they were in general diffuse. So let us start with what we want – parallel rays emerging from all parts of the waveguide surface – and see if we can guide them backwards to a single point-source LED at the edge.
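
Before following the rays backwards, it helps to recall the critical angle that governs where guided rays escape; the quick check below is ours and assumes an acrylic guide with a refractive index of about 1.49, which the column does not specify.

```python
# Quick check of the critical angle at which guided rays escape the
# waveguide, from Snell's law: sin(theta_c) = 1/n. Acrylic is assumed.
import math

n = 1.49  # assumed refractive index of an acrylic waveguide
theta_c = math.degrees(math.asin(1 / n))
print(f"critical angle ~ {theta_c:.1f} deg from the surface normal")  # ~42.2
```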

If no changes are made, the parallel rays will all be guided back to illuminate the whole of the thick end of the wedge. Let us inspect the thick end by looking at it through the thin end, as if the wedge were a kaleidoscope. We see multiple reflections of the thick end which combine to form something approaching a curve, and we can make this curve smooth by slightly curving the thick end itself.

All our rays are almost at the critical angle at the exit surface, so take this as our plane of departure; within our kaleidoscope, these rays will all hit the thick end in parallel. If we coat the thick end with a mirror, its curvature will cause the reflected rays to converge towards a point, and we can imagine truncating the wedge so that this point is at the thin end.

This does not quite solve the problem: as things stand, all the rays will reach the critical angle and come back out of the wedge before they reach the thin end! But we can avoid this by embossing the thick end with facets angled to shift the focus to a point where the angle of the rays relative to the plane of the wedge is sufficiently small that they do reach the thin end.

At the point of focus, rays are in reality as likely to leave the LED travelling upwards as downwards, so at the thick end the facet angle should alternate like a zigzag. It follows that rays also emerge from the bottom surface, but it is a simple matter to reflect them back through the top. The result is a waveguide which produces perpendicular rays of light uniformly from all points on its surface when an LED is placed at the centre of its thin end. Details are at http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-17-22-19714 and scanning will be described in the second paper of session 16 at SID 10 in Seattle.


How can it be?

3D Illusion

by Alan Stubbs

Alan Stubbs teaches for the Psychology Department and the Art Department at the University of Maine. His area of interest in psychology is perception, and in art it is photography and digital imaging. In addition to illusions, he is interested in the design of graphs and large-format printing on a variety of papers. A current hobby is cooking better habanero cornbread. He maintains a website that features many fascinating illusions: http://perceptualstuff.org

With Avatar doing well and the buzz from the Consumer Electronics Show, 3D is in the air. Part of the appeal of 3D images is that they seem much more realistic than traditional pictures. But, like more conventional pictures, 3D pictures are also subject to illusory effects.

Although the figure looks like random dots if viewed normally, if you look with red/cyan glasses – with red for the left eye and cyan for the right – you should see two figures that appear to come off the page. Further, viewers typically see the circular figure on the left as being closer to them (or further from the background) and the circle on the right as further away (or closer to the background). So far, so good. This is another demonstration that looking at two slightly different views by means of red and cyan filters produces a strong 3D effect. What is new and different about this figure concerns the apparent size of the two circles. If you compare the two circles, the circle on the right side of the page – the circle that seems further away – looks larger. In fact, both circles are physically identical in diameter, but the one that looks psychologically further away seems larger.
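
For readers who want to experiment, here is one way a red/cyan random-dot figure of this kind can be generated. This is our illustration, not Stubbs's method; the image size, disk radius, disparity, and random seed are arbitrary, and the sign of the disparity controls whether the disk appears in front of or behind the page.

```python
# Generate a red/cyan random-dot anaglyph: the right-eye image is a copy of
# the left with the dot pattern inside a disk shifted horizontally, then the
# two views are packed into the red (left) and cyan (right) channels.
import numpy as np

def random_dot_anaglyph(size=400, radius=60, disparity=6, seed=1):
    rng = np.random.default_rng(seed)
    left = (rng.integers(0, 2, (size, size)) * 255).astype(np.uint8)
    yy, xx = np.indices((size, size))
    disk = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 < radius ** 2
    # Inside the disk the right eye sees the pattern shifted sideways;
    # edge seams are ignored in this simplified version.
    right = np.where(disk, np.roll(left, -disparity, axis=1), left)
    return np.stack([left, right, right], axis=-1)  # R = left, G+B (cyan) = right
```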

The illusory effect is new, but it relates to other illusory effects that have been known for some time. Variations on figures like that in the accompanying graphic have been around for a long time: two figures might be identical, but if one is placed so as to seem further away it will appear larger. In this example, one red ball was simply a copy of the other, but the ball that looks further away looks bigger. Psychologists would talk about depth cues and how the effect is due to the drawing context. The 3D illusion makes a more general point: if two equal objects seem to be at different distances, the one that seems further away will appear larger. This effect occurs whether the depth comes from pictorial depth cues in a drawing or photograph, or whether the objects and depth are created by fusing two separate views in the brain.

The illusion should also remind us of an important point that Wheatstone made when he first invented the stereoscope in the early 1800s. Although we think of 3D images as somehow more real, Wheatstone pointed out that seeing depth from two different drawings was in fact a very strong illusion.


Display of Three-Dimensional Images: A Review of the Prior Art

by Robert A. Connor

Robert A. Connor, Ph.D. is the President of Holovisions LLC. Holovisions LLC is an early-stage company that is evolving from a focus on marketing to the development of innovative methods to display images that appear to be three-dimensional, with binocular disparity and motion parallax. He wrote this review of the prior art for one of Holovisions' pending patent applications.

Humans use several visual cues to recognize and interpret three-dimensionality in images. Monocular cues can be seen with just one eye. Binocular cues require two eyes. Monocular cues for three-dimensional images include: the relative sizes of objects of known size; occlusion among objects; lighting and shading; linear perspective; adjusting eye muscles to focus on an object at one distance while objects at other distances are out of focus (called "accommodation"); and objects moving relative to each other when one's head moves (called "motion parallax"). Binocular cues for three-dimensional images include: seeing different images from slightly different perspectives in one's right and left eyes (called "binocular disparity" or "stereopsis"); and intersection of the viewing axes from one's right and left eyes (called "convergence"). When a method of displaying three-dimensional images provides some of these visual cues, but not others, then the conflicting signals can cause eye strain and headaches for the viewer.

The ultimate goal for methods of displaying three-dimensional moving images is to provide as many of these visual cues for three-dimensionality as possible while also: being safe; providing good image quality and color; enabling large-scale applications; being viewable simultaneously by multiple viewers in different positions; not requiring special headgear; and being reasonably priced. This goal has not yet been achieved by current methods for displaying three-dimensional moving images. In this review, we present a taxonomy of current methods for displaying three-dimensional moving images and discuss limitations of these current methods.

Binocular disparity is a good starting point for a taxonomy of methods to display three-dimensional moving images. When images seen in the right and left eyes are different perspectives of the same scene, as would be seen if one were viewing the scene in the real world, then the brain interprets the two images as a single three-dimensional image. This process is "stereoscopic" vision. This discussion of related art focuses on methods that provide at least some degree of stereoscopic vision. The first branch in the taxonomy of three-dimensional display methods is between methods that require glasses or other headgear (called "stereoscopic") vs. methods that do not require glasses or other headgear (called "autostereoscopic").

The first type of stereoscopic imaging uses glasses or other headgear and a single image display that simultaneously displays encoded images for both the right and left eyes. These simultaneous images are decoded by lenses in the glasses for each eye so that each eye sees the appropriate image. Image encoding and decoding can be done by color (such as red vs. cyan) or polarization (such as linear polarization or circular polarization). A second type of stereoscopic imaging uses glasses or other headgear and a single image display source that sequentially displays images for the right and left eyes. These sequential images are each routed to the proper eye by alternating shutter mechanisms over each eye. The third type of stereoscopic imaging uses glasses or other headgear and two different image projectors, one for each eye, so that each eye receives a different image.

The main limitation of methods that display three-dimensional moving images through the use of glasses or other headgear is the inconvenience of glasses or other headgear for people who do not normally wear glasses and potential incompatibility with regular glasses for people who do normally wear glasses. Other potential limitations of these methods include: lack of motion parallax with side-to-side head movement, up-or-down head movement, or frontward-or-backward head movement; and lack of accommodation because all image points are on the same two-dimensional plane. The conflict between accommodation and convergence can cause eye strain and headaches. There are many examples of methods using glasses or other headgear in the related art.

We now turn our attention to "autostereoscopic" methods for displaying three-dimensional moving images. Broadly defined, "autostereoscopic" refers to any method of displaying three-dimensional images with stereoscopic vision that does not require glasses or headgear. Six general methods of autostereoscopic display are as follows: (1) methods using a "parallax barrier" to direct light from a display source in different directions by selectively blocking light rays; (2) methods using lenses (such as "lenticular" lenses) to direct light from a display source in different directions by selectively bending light rays; (3) methods using an array of "micromirrors" that can be tilted in real time to direct light from a display source in different directions by selectively reflecting light rays; (4) methods using sets of "sub-pixels" within each pixel in a display to emit light in different directions at the pixel level; (5) methods using a three-dimensional display volume or a moving two-dimensional surface to create a "volumetric" image in three-dimensional space; and (6) methods using laser light interference to create animated three-dimensional "holograms." Some of these six methods can also be used together in various combinations.

We now discuss the six autostereoscopic methods in greater detail, starting with three-dimensional imaging methods using a parallax barrier. A parallax barrier selectively blocks light from a display surface (such as an LCD screen). Openings in the barrier that do not block light allow different images to reach the right and left eyes. For example, the display surface can show a composite image with vertical image stripes for right and left eye images and the parallax barrier can have vertical slits that direct the appropriate image stripes to reach the right and left eyes when the viewer is located in the right spot. When the viewer is not located in the right spot, the viewer can see "pseudoscopic" images (with reversed depth, double images, and black lines) that can cause eye strain and headaches. To partially address this problem, the system may track the location of the viewer's head and move the parallax barrier so that the viewer can see the image properly from a larger range of locations.
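
The composite image behind a vertical-slit barrier is easy to picture in code. The fragment below is a minimal sketch, not taken from any of the cited patents: alternate pixel columns carry the left- and right-eye views, so each slit exposes the correct stripe to each eye.

```python
# Minimal sketch of the striped composite image behind a vertical-slit
# parallax barrier: even columns carry the left view, odd columns the right.
import numpy as np

def interleave_columns(left, right):
    """left, right: H x W x 3 views; returns the column-striped composite."""
    composite = left.copy()
    composite[:, 1::2] = right[:, 1::2]   # odd columns carry the right view
    return composite
```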

A basic parallax barrier system with vertical slits does not provide vertical motion parallax; there is no relative movement of objects in the image with up-or-down head movement. Also, it provides only limited horizontal motion parallax (no more than a few sequential views of objects with side-to-side head movement) due to spatial constraints between the slits and the display surface. A parallax barrier using an array of pinholes (a method with its roots in "integral photography") can provide limited motion parallax in both vertical and horizontal directions, but has significant limitations in terms of image resolution and brightness.

Some distance between the light display surface and the parallax barrier is required so that the barrier can direct light rays along different angles to the right and left eyes. However, this distance causes many of the limitations of the parallax barrier method. For example, it significantly restricts the area within which the viewer must be located in order to see the three-dimensional images correctly, and it is why parallax barriers do not work well, or at all, for simultaneous viewing by multiple viewers. This distance is also the reason why the parallax barrier blocks so much of the light from the display surface, causing inefficient light use and relatively dim images.

There are several limitations on using one or more parallax barriers to display three-dimensional moving images. One of the main limitations is the restricted size of the viewing area within which an observer sees the images properly. When a viewer moves outside this restricted viewing area, the viewer can see pseudoscopic images with depth reversal, double images, and black bands that can cause eye strain or headaches. Head tracking mechanisms can be used in an effort to expand the size of the proper viewing area, but such tracking mechanisms can be inconvenient and do not work well for multiple viewers.

Another limitation of parallax barriers arises because the barrier blocks much of the light from the display. Display light is used inefficiently and the image can be dim. Also, due to spatial constraints between the display surface and openings in the parallax barrier, there are only a limited number of different views for motion parallax. For barriers with vertical slits, there is no motion parallax at all for up-and-down head movement. Use of pinholes instead of slits can provide motion parallax for vertical as well as horizontal movement, but also can have severe problems in terms of low image resolution and image dimness. Finally, lack of accommodation because all image points are on the same two-dimensional plane can result in eye strain and headaches.

Examples in the related art that appear to use one or more parallax barriers to display three-dimensional moving images include the following: U.S. Patents 5,300,942 (Dolgoff, 1994), 5,416,509 (Sombrowsky, 1995), 5,602,679 (Dolgoff et al., 1997), 5,855,425 (Hamagishi, 1999), 5,900,982 (Dolgoff et al., 1999), 5,986,804 (Mashitani et al., 1999), 6,061,083 (Aritake et al., 2000), 6,337,721 (Hamagishi et al., 2002), 6,481,849 (Martin et al., 2002), 6,791,512 (Shimada, 2004), 6,831,678 (Travis, 2004), 7,327,389 (Horimai et al., 2008), 7,342,721 (Lukyanitsa, 2008), 7,426,068 (Woodgate et al., 2008), and 7,532,225 (Fukushima et al., 2009); and U.S. Patent Applications 20030076423 (Dolgoff, Eugene, 2003), 20030107805 (Street, Graham, 2003), 20030206343 (Morishima et al., 2003), 20050219693 (Hartkop et al., 2005), 20050264560 (Hartkop et al., 2005), 20050280894 (Hartkop et al., 2005), 20060176541 (Woodgate et al., 2006), 20070058258 (Mather et al., 2007), 20080117233 (Mather et al., 2008), 20080150936 (Karman, Gerardus, 2008), and 20080231690 (Woodgate et al., 2008).

We now continue our discussion of autostereoscopic methods by discussing the use of lenses (especially arrays of lenticular lenses) for displaying three-dimensional moving images. Lenticular lenses are used to selectively bend light from a display surface to create the illusion of a three-dimensional image. These lenses may be bi-convex columns, semi-cylindrical columns, hemispheres, spheres, or other shapes. Lenticular lens column arrays may be arranged vertically or horizontally. Lenticular lenses may be configured in single or multiple layers. They may be static or move relative to the display surface or each other. There are also "active" or "dynamic" lenses whose focal length and/or curvature can be adjusted in real time.

There are several similarities between using parallax barriers and using lenticular lenses. For example, parallax barriers with parallel vertical slits allow strips of different-perspective images from a display surface to reach the right and left eyes by letting light through the vertical slits. By analogy, lenticular lenses with parallel vertical columns allow strips of different-perspective images from a display surface to reach the right and left eyes by bending light at different angles through the vertical lenses. Also, as is the case with using parallax barriers, there is a restricted area within which a viewer must be located in order to see the three-dimensional images properly when using lenticular lenses. Head tracking can be used to move the lenticular array to increase the size of this area, but the number of sequential views remains limited by spatial constraints. Analogous to the use of pinholes in a parallax barrier, spherical lenses called "fly's eye" lenses can be used in a lenticular array. Taking and displaying images with an array of small "fly's eye" lenses is called "integral photography." As is the case with parallax barriers, there is also some distance between the display surface and the light-directing layer with the use of lenticular lenses.

However, lenticular lenses have some capabilities that are different than those possible with parallax barriers. This is because there are a greater variety of ways to bend light through a lens than there are ways to pass light through an empty opening. For example, there are "active" or "dynamic" lenses whose focal length and/or curvature can be changed in real time. Different methods for changing the optical characteristics of active lenses in real time include: applying an electric potential to a polymeric or elastomeric lens; mechanically deforming a liquid lens sandwiched within a flexible casing; and changing the temperature of the lens. With imaging systems that include head tracking, the focal lengths and/or curvatures of active lenses can be changed in response to movement of an observer's head.

Many of the limitations of using lenticular lenses to display three-dimensional moving images are similar to those for using parallax barriers, and many of these common limitations come from the distance between the display surface and the light-guiding layer. As is the case with parallax barriers, display systems that use lenticular arrays have significant restrictions on the size of the viewing area and the number of observers. When viewers move outside this restricted area, they can see pseudoscopic images involving depth reversal, double images, and black bands that can cause eye strain and headaches. Using such systems for multiple viewers is difficult or impossible. Head tracking mechanisms used to try to expand the proper viewing area are often inconvenient and do not work well for multiple viewers. Further, the moving parts of head-tracking mechanisms are subject to wear and tear. Boundaries between light elements in lenticular display systems can create dark lines, graininess, and rough edges.

Due to spatial constraints between the display surface and the width of the lenticular lenses, there are a limited number of different views for motion parallax. With vertical columnar lenses, there is no vertical motion parallax at all. Fly's eye lens arrays can provide some vertical as well as horizontal motion parallax, but are expensive and can have significant problems in terms of low resolution and dim images. Using active lenses in lenticular displays can provide a wider range of motion parallax, but fluids or other moving materials may not change shape fast enough to display three-dimensional moving images. Lack of accommodation due to all image points being on the same plane can cause eye strain and headaches.

Examples in the related art that appear to use stationary lenticular lenses to display three-dimensional moving images include the following: U.S. Patents 4,829,365 (Eichenlaub, 1989), 5,315,377 (Isono et al., 1994), 5,465,175 (Woodgate et al., 1995), 5,602,679 (Dolgoff et al., 1997), 5,726,800 (Ezra et al., 1998), 5,880,704 (Takezaki, 1999), 5,943,166 (Hoshi et al., 1999), 6,118,584 (Van Berkel et al., 2000), 6,128,132 (Wieland et al., 2000), 6,229,562 (Kremen, 2001), 6,437,915 (Moseley et al., 2002), 6,462,871 (Morishima, 2002), 6,611,243 (Moseley et al., 2003), 6,795,241 (Holzbach, 2004), 6,876,495 (Street, 2005), 6,929,369 (Jones, 2005), 7,142,232 (Kremen, 2006), 7,154,653 (Kean et al., 2006), 7,268,943 (Lee, 2007), 7,375,885 (Ijzerman et al., 2008), 7,400,447 (Sudo et al., 2008), 7,423,796 (Woodgate et al., 2008), 7,492,513 (Fridman et al., 2009), and 7,506,984 (Saishu et al., 2009); and U.S. Patent Applications 20040012671 (Jones et al., 2004), 20040240777 (Woodgate et al., 2004), 20050030308 (Takaki, Yasuhiro, 2005), 20050264560 (Hartkop et al., 2005), 20050264651 (Saishu et al., 2005), 20060012542 (Alden, Ray, 2006), 20060227208 (Saishu, Tatsuo, 2006), 20060244907 (Simmons, John, 2006), 20070058127 (Mather et al., 2007), 20070058258 (Mather et al., 2007), 20070097019 (Wynne-Powell, Thomas, 2007), 20070109811 (Krijn et al., 2007), 20070201133 (Cossairt, Oliver, 2007), 20070222915 (Niioka, Shinya, 2007), 20070258139 (Tsai et al., 2007), 20080068329 (Shestak et al., 2008), 20080117233 (Mather et al., 2008), 20080231690 (Woodgate et al., 2008), 20080273242 (Woodgate et al., 2008), and 20080297670 (Tzschoppe et al., 2008).

Examples in the related art that appear to use laterally-shifting lenticular lenses to display three-dimensional moving images include the following: U.S. Patents 4,740,073 (Meacham, 1988), 5,416,509 (Sombrowsky, 1995), 5,825,541 (Imai, 1998), 5,872,590 (Aritake et al., 1999), 6,014,164 (Woodgate et al., 2000), 6,061,083 (Aritake et al., 2000), 6,483,534 (d'Ursel, 2002), 6,798,390 (Sudo et al., 2004), 6,819,489 (Harris, 2004), 7,030,903 (Sudo, 2006), 7,113,158 (Fujiwara et al., 2006), 7,123,287 (Surman, 2006), 7,250,990 (Sung et al., 2007), 7,265,902 (Lee et al., 2007), 7,375,885 (Ijzerman et al., 2008), 7,382,425 (Sung et al., 2008), and 7,432,892 (Lee et al., 2008); and U.S. Patent Applications 20030025995 (Redert et al., 2003), 20040178969 (Zhang et al., 2004), 20050041162 (Lee et al., 2005), 20050117016 (Surman, Philip, 2005), 20050219693 (Hartkop et al., 2005), 20050248972 (Kondo et al., 2005), 20050264560 (Hartkop et al., 2005), 20050270645 (Cossairt et al., 2005), 20050280894 (Hartkop et al., 2005), 20060109202 (Alden, Ray, 2006), 20060244918 (Cossairt et al., 2006), 20070165013 (Goulanian et al., 2007), 20080204873 (Daniell, Stephen, 2008), 20090040753 (Matsumoto, Shinya, 2009), 20090052027 (Yamada et al., 2009), and 20090080048 (Tsao, Che-Chih, 2009).

Examples in the related art that appear to use active lenses to display three-dimensional moving images include the following: U.S. Patents 5,493,427 (Nomura et al., 1996), 5,790,086 (Zelitt, 1998), 5,986,811 (Wohlstadter, 1999), 6,014,259 (Wohlstadter, 2000), 6,061,083 (Aritake et al., 2000), 6,437,920 (Wohlstadter, 2002), 6,533,420 (Eichenlaub, 2003), 6,683,725 (Wohlstadter, 2004), 6,714,174 (Suyama et al., 2004), 6,909,555 (Wohlstadter, 2005), 7,046,447 (Raber, 2006), 7,106,519 (Aizenberg et al., 2006), 7,167,313 (Wohlstadter, 2007), 7,297,474 (Aizenberg et al., 2007), 7,336,244 (Suyama et al., 2008), and 7,471,352 (Woodgate et al., 2008); and U.S. Patent Applications 20030058209 (Balogh, Tibor, 2003), 20040141237 (Wohlstadter, Jacob, 2004), 20040212550 (He, Zhan, 2004), 20050111100 (Mather et al., 2005), 20050231810 (Wohlstadter, Jacob, 2005), 20060158729 (Vissenberg et al., 2006), 20070058127 (Mather et al., 2007), 20070058258 (Mather et al., 2007), 20070242237 (Thomas, Clarence, 2007), 20080007511 (Tsuboi et al., 2008), 20080117289 (Schowengerdt et al., 2008), 20080192111 (Ijzerman, Willem, 2008), 20080204871 (Mather et al., 2008), 20080297594 (Hiddink et al., 2008), 20090021824 (Ijzerman et al., 2009), 20090033812 (Ijzerman et al., 2009), 20090052049 (Batchko et al., 2009), and 20090052164 (Kashiwagi et al., 2009).

Examples in the related art that appear to include head or eye tracking as part of a system to display three- dimensional moving images include the following: U.S. Patents 5,311,220 (Eichenlaub, 1994), 5,712,732 (Street, 1998), 5,872,590 (Aritake et al., 1999), 5,959,664 (Woodgate, 1999), 6,014,164 (Woodgate et al., 2000), 6,061,083 (Aritake et al., 2000), 6,115,058 (Omori et al., 2000), 6,788,274 (Kakeya, 2004), 6,798,390 (Sudo et al., 2004), and 7,450,188 (Schwerdtner, 2008); and U.S. Patent Applications 20030025995 (Redert et al., 2003), 20070258139 (Tsai et al., 2007), and 20080007511 (Tsuboi et al., 2008).

Examples in the related art that appear to use a large rotating or tilting lens or prism as part of a system to display three-dimensional moving images include the following: U.S. Patents 3,199,116 (Ross, 1965), 4,692,878 (Ciongoli, 1987), 6,061,489 (Ezra et al., 2000), 6,483,534 (d'Ursel, 2002), and 6,533,420 (Eichenlaub, 2003); and U.S. Patent Applications 20040178969 (Zhang et al., 2004), 20060023065 (Alden, Ray, 2006), and 20060203208 (Thielman et al., 2006).

Examples in the related art that appear to use multiple rotating or tilting lenses or prisms as part of a system to display three-dimensional moving images include the following: U.S. Patents 7,182,463 (Conner et al., 2007), 7,300,157 (Conner et al., 2007), and 7,446,733 (Horimai, 2008), and unpublished U.S. Patent Applications 12,317,856 (Connor, Robert, 2008) and 12,317,857 (Connor, Robert, 2008).

We now continue discussion of autostereoscopic methods by considering micromirror arrays. A micromirror array is a matrix of very tiny mirrors that can be individually controlled and tilted in real time to reflect light beams in different directions. Micromirror arrays are often used with coherent light, such as the light from lasers. Coherent light can be precisely targeted onto and reflected from moving mirrors. These redirected coherent light beams can be intersected to create a moving holographic image.

Although micromirror arrays offer some advantages over parallax barriers and lenticular arrays, they can be complicated and expensive to manufacture. They also have mechanical limitations with respect to speed and range of motion. If they are used with coherent light, then there can be expense and safety issues. If they are used with non-coherent light, then there can be issues with image quality due to the imprecision of reflecting non-coherent light from such tiny surface areas.

Examples in the related art that appear to use micromirror arrays to display three-dimensional moving images include the following: U.S. Patents 5,689,321 (Kochi, 1997), 6,061,083 (Aritake et al., 2000), 6,304,263 (Chiabrera et al., 2001), 7,182,463 (Conner et al., 2007), 7,204,593 (Kubota et al., 2007), 7,261,417 (Cho et al., 2007), 7,300,157 (Conner et al., 2007), and 7,505,646 (Katou et al., 2009); and U.S. Patent Applications 20030058209 (Balogh, Tibor, 2003), 20040252187 (Alden, Ray, 2004), and 20050248972 (Kondo et al., 2005).

We now continue further along the autostereoscopic branch of our taxonomy to discuss the use of three-dimensional (3D) pixels. Each 3D pixel contains a set of sub-pixels, in different discrete locations, that each emit light in a different direction. For example, a 3D pixel can be made from a set of sub-pixels in proximity to a pixel-level microlens, wherein the light from each sub-pixel enters and exits the microlens at a different angle. In another example, a 3D pixel can be made from a set of optical fibers that emit light at different angles.


The concept of 3D pixels has considerable appeal, but is complicated to implement. Manufacturing 3D pixels can be complex and expensive. There are spatial limits to how many discrete sub-pixels one can fit into a space the size of a pixel. This, in turn, limits image resolution and quality. Large displays can become bulky and expensive due to the enormous quantity of sub-pixels required and the complicated structures required to appropriately direct their light outputs. Microstructures (such as microdomes) to house multiple sub-pixels that protrude from the display surface can occlude the light from sub-pixels in adjacent pixels, limiting the size of the proper viewing zone.

Examples in the related art that appear to use 3D pixels containing sets of sub-pixels to display three-dimensional moving images include the following: U.S. Patents 5,132,839 (Travis, 1992), 5,550,676 (Ohe et al., 1996), 5,993,003 (McLaughlin, 1999), 6,061,489 (Ezra et al., 2000), 6,128,132 (Wieland et al., 2000), 6,201,565 (Balogh, 2001), 6,329,963 (Chiabrera et al., 2001), 6,344,837 (Gelsey, 2002), 6,606,078 (Son et al., 2003), 6,736,512 (Balogh, 2004), 6,999,071 (Balogh, 2006), 7,084,841 (Balogh, 2006), 7,204,593 (Kubota et al., 2007), 7,283,308 (Cossairt et al., 2007), 7,425,951 (Fukushima et al., 2008), 7,446,733 (Horimai, 2008), and 7,532,225 (Fukushima et al., 2009); and U.S. Patent Applications 20030071813 (Chiabrera et al., 2003), 20030103047 (Chiabrera et al., 2003), 20050053274 (Mayer et al., 2005), 20050285936 (Redert et al., 2005), 20060227208 (Saishu, Tatsuo, 2006), 20060279680 (Karman et al., 2006), 20080150936 (Karman, Gerardus, 2008), 20080266387 (Krijn et al., 2008), 20080309663 (Fukushima et al., 2008), 20090002262 (Fukushima et al., 2009), 20090046037 (Whitehead et al., 2009), 20090079728 (Sugita et al., 2009), 20090079733 (Fukushima et al., 2009), 20090096726 (Uehara et al., 2009), 20090096943 (Uehara et al., 2009), and 20090116108 (Levecq et al., 2009).

We now continue our review of autostereoscopic methods by discussing three-dimensional display volumes and moving two-dimensional surfaces that create a "volumetric" image in three-dimensional space. "Volumetric" means that the points that comprise the three-dimensional image are actually spread out in three dimensions instead of on a flat display surface. In this respect, volumetric displays are not an illusion of three-dimensionality; they are actually three-dimensional. Major types of volumetric displays are: (a) curved screen displays (such as a cylindrical or hemispherical projection surface); (b) static volumetric displays (such as an X,Y,Z matrix of light elements in 3D space or a series of parallel 2D display layers with adjustable transparency); and (c) dynamic volumetric displays with two-dimensional screens that rotate through space (such as a spinning disk or helix) while emitting, reflecting, or diffusing light.

Many planetariums use a dome-shaped projection surface as a form of volumetric display. The audience sits under the dome while light beams representing stars and planets are projected onto the dome, creating a three-dimensional image. Static volumetric displays can be made from a three-dimensional matrix of LEDs or optical fibers. Alternatively, a static volumetric display can be a volume of translucent substance (such as a gel or fog) into which light beams can be focused and intersected. One unusual version of a static volumetric display involves intersecting infrared laser beams to create a pattern of glowing plasma bubbles in mid-air. This plasma method is currently quite limited in terms of the number of display points, color, and safety, but it is one of the few current display methods that genuinely projects images in "mid-air."

There are several limitations of using volumetric methods to display three-dimensional moving images. Curved screen methods are significantly limited with respect to the shape of three-dimensional image that they can display; planetariums work because a dome-shaped display surface works as a proxy for the sky, but would not work well for projecting a 3D image of a car. Large static volumetric displays become very bulky, heavy, complex, and costly. Also, both static and dynamic volumetric displays generally create ghost-like images with no opacity, limited interposition, limited color, and low resolution. There are significant limitations on the size of dynamic volumetric displays due to the mass, inertia, and structural stress of large rapidly-spinning objects.

Examples in the related art that appear to use volumetric displays to display three-dimensional moving images include the following: U.S. Patents 5,111,313 (Shires, 1992), 5,704,061 (Anderson, 1997), 6,487,020 (Favalora, 2002), 6,720,961 (Tracy, 2004), 6,765,566 (Tsao, 2004), 6,948,819 (Mann, 2005), 7,023,466 (Favalora et al., 2006), 7,277,226 (Cossairt et al., 2007), 7,364,300 (Favalora et al., 2008), 7,490,941 (Mintz et al., 2009), 7,492,523 (Dolgoff, 2009), and 7,525,541 (Chun et al., 2009); and U.S. Patent Applications 20050117215 (Lange, Eric, 2005), 20050152156 (Favalora et al., 2005), 20050180007 (Cossairt et al., 2005), and 20060109200 (Alden, Ray, 2006). A closely related method that involves using a vibrating projection screen is disclosed in U.S. Patents 6,816,158 (Lemelson et al., 2004) and 7,513,623 (Thomas, 2009).

We now conclude our review of autostereoscopic methods by discussing holographic methods of displaying three-dimensional moving images. Holography involves recording and reconstructing the amplitude and phase distributions of an interference pattern of intersecting light beams. The light interference pattern is generally created by the intersection of two beams of coherent (laser) light: a signal beam that is reflected off (or passed through) an object and a reference beam that comes from the same source. When the interference pattern is recreated and viewed by an observer, it appears as a three-dimensional object that can be seen from multiple perspectives.

Holography has been used for many years to create three-dimensional static images and progress has been made toward using holographic images to display three-dimensional moving images, but holographic video remains quite limited. Limitations of using holographic technology for displaying three-dimensional moving images include the following: huge data requirements; display size limitations; color limitations; ghost-like images with no opacity and limited interposition; and cost and safety issues associated with using lasers.

MultiView Veritas et Visus

· Andrew Woods, volume 10: 20 articles, 62 pages, $12.99

· Mark Fihn, volume 11: 83 articles, 260 pages, $12.99

The MultiView compilation newsletters bring together the contributions of various regular contributors to the Veritas et Visus newsletters to provide a compendium of insights and observations from specific experts in the display industry. http://www.veritasetvisus.com

Thoughts on Avatar and 3D TV Adoption

by Ross Young

Ross Young is SVP, Displays and PV at IMS Research USA. Prior to joining IMS Research in November 2009, Young co-founded Young Market Research (YMR) with Barry Young in May 2009; IMS Research acquired YMR in November. Prior to forming YMR, Young was VP of New Market Creation at Samsung Electronics' LCD Business, reporting to the LCD CEO, where he tracked, analyzed and assessed the solar market and supported market intelligence efforts in notebooks and TVs. Before Samsung, Young was the founder and CEO of DisplaySearch, the leading flat panel display market research, consulting and events firm. Young ran DisplaySearch from 1996 to 2007 and launched most of its product areas and many of its most popular reports on such topics as production equipment, supply/demand, large-area displays, notebooks, monitors and TVs. He sold DisplaySearch to The NPD Group in September 2005. Young was educated at UCSD, Australia’s University of New South Wales, UCSD’s Graduate School of International Relations and Pacific Studies, and Japan’s Tohoku University.

I have now seen Avatar twice, once with my wife and once with my 9-year old son. I had heard that the 3D effects were going to be so compelling and immersive that it would create demand for that experience in the home, driving demand for 3D Blu-ray players and TVs. Now that I have seen it, I have my doubts, although it may also be related to the implementation.

The first time I saw the movie, I didn’t even notice the 3D effects. I was so taken with the story – how real and expressive the Avatars were, what an amazing invention the live-action cameras were (which convert actors with sensors into whatever you want them to be, along with human expressions and movements), how visually stunning the movie was, how they converted tiny soft corals that I only see when scuba diving into huge beautiful plants – that I didn’t pay much attention to the fact that it was in 3D. However, it may also be a function of the 3D implementation at the theater, which utilized Barco DP-2000 2K projectors with Dolby 3D technology and color-separation glasses. I did not like the glasses, which had much smaller lenses and were much more reflective than the Real D glasses.

The second time, I focused more on the 3D effects, and how immersive they were. The 3D implementation also seemed better with the Sony 4K projectors and Real D technology and glasses. Despite it being a better experience, I am not of the opinion that this one movie is going to create huge 3D TV demand. In fact, I am not quite sure 3D movies alone will drive 3D demand.

Good movies, like Avatar, are not about the effects, but the quality of the storytelling. I don’t think 3D makes or breaks a movie, or that 3D can make a movie tremendously better.

Even though I enjoyed Avatar immensely and I think 3D helped tell the story, I don’t think seeing it in 3D is critical for enjoying the movie. I now believe that it is gaming and/or sports that will drive 3D TVs. The 3D console gaming I have done so far was so much better than 2D, it would be the reason I would buy a 3D TV. That said, I think 3D-capable TV sales will do very well in 2010 as early adopters will buy the TVs since they won’t likely have much of a premium over non-3D 240Hz TVs. However, I don’t think the glasses sales will do particularly well among non-gamers next year. I think non-gamers will wait for more 3D content before buying glasses.

Film-based 3D Pushes Forward

by Aldo Cugnini

Aldo Cugnini is a consultant in the digital television industry. Prior to founding AGC Systems, he held various technical and management positions at Philips Electronics’ Research and Consumer Electronics Divisions and at interactive television developer ACTV. He had a leadership role in the development of the ATSC Digital Television System, and was a key member of the Advanced Television Research Consortium (ATRC) development team. Mr. Cugnini received his BS and MS degrees from Columbia University and has been awarded six patents in the fields of digital television and broadcasting. He served on the board of directors of the ATSC, and is the author of numerous technical papers and industry reports, and is a contributor to several trade publications. This article is revised from the Display Daily, published by Insight Media on March 8, 2010. http://www.displaydaily.com

Technicolor has announced that it has reached an agreement with New York City-based Bow Tie Cinemas to install Technicolor 3D across Bow Tie’s locations, on 25 of its 150 screens. Technicolor 3D is a new 3D system for 35mm film projectors, enabling exhibitors to equip theatres for high-quality 3D at a fraction of the cost of installing a digital 3D projection system. The company had earlier announced support from multiple major motion picture studios for the release of theatrical content presented in Technicolor 3D, as well as support from Kodak and Fuji.

Technicolor used to operate under the name of its owner, Thomson, but earlier this year that order was reversed, with Technicolor becoming the top-line company name.

Last year, Technicolor bowed its film-based 3D solution, which employs a proprietary “production to projection” system that accommodates 35mm film projectors already in use. A patented state-of-the-art Schneider lens system assembles the left- and right-eye images as the film runs through the projector, and delivers a 3D-ready image onto a silver screen. The system also optimizes color and light levels, and is usable with screens up to about 40 feet in size. The solution works with circularly polarized glasses – identical to the ones used for existing digital 3D cinema – to produce the 3D effect. The silver screen can be used for the projection of both Technicolor 3D and digital 3D content.

Technicolor President of Creative Services Joe Berchtold tells us that one hurdle for film-based 3D is the perception of an “old” technology compared with that of digital projection. However, audience polling conducted by Technicolor at actual screenings suggests “no statistical difference from digital 3D” when judged on satisfaction, quality, and movie recommendation. Berchtold won’t quote screen brightness, but says that systems “operating at spec today” should have no problem equaling the quality of digital 3D, including the stability of the image. The company plans to license the special lens to exhibitors at a rate of $2,000 per film, with a $12,000 cap per year.

We reported recently that another company, Oculus3D, had also developed a film-based 3D projection format. The primary difference between that and the Technicolor system is that the former uses a side-by-side 90-degree rotated image format, whereas the latter uses an over/under horizontal image format. Oculus3D CEO Marty Shindler tells us that his system is “brighter, sharper and steadier” than the competition, and that they are getting 10fL at the screen. He also reports that audiences seeing their process "can’t believe it’s not digital," a sentiment echoed by company co-founder Lenny Lipton.

The Oculus3D system also uses a proprietary lens, currently sourced from multiple optics vendors; some variants use off-the-shelf components, while an optimized solution from one vendor is also being evaluated. The lens will be available to theater owners for a purchase price “in the low to mid $20’s,” with quantity discounts, or for lease-to-buy. The company is holding ongoing discussions and demos with studio executives, and plans to make an announcement next week about its progress. It is currently giving private demonstrations of the system.

The 3D steamroller continues to press on. According to Technicolor, 3D is a driver of ticket sales, with 3D engagements outperforming 2D by more than 2-to-1 in attendance and by even more in box office. The company also notes that, prior to a film-based 3D solution, 90 percent of theatre screens in the U.S. were not able to provide a 3D experience to moviegoers. Performance aside, the proof may be in the agreements with film studios and services companies; these would give a company a leg up in a competitive market, provided actual deployments occur.

Could this slow digital 3D? Some industry executives say that digital growth couldn’t be slower, but that the digital players will increasingly push the business because of 3D. So digital 3D growth will continue, but with financing still problematic, it will be slow. Currently, studios are increasing capacity for 3D production, but 3D screening is installation-limited: 3D box office could be even bigger than its current record-breaking heights with greater 3D projection penetration. If the technical merits hold up to audience scrutiny, 3D film sounds like a no-brainer.

>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<

Last Word: The Oculus3D Projection System by Lenny Lipton

Lenny Lipton is recognized as the father of the electronic stereoscopic display industry. He invented and perfected the current state-of-the-art 3D technologies that enable today's leading filmmakers to finally realize the dream of bringing their feature films to the big screen in crisp, vivid, full cinematic-quality 3D. Lenny Lipton became a Fellow of the Society of Motion Picture and Television Engineers in 2008, and along with Peter Anderson he is the co-chair of the American Society of Cinematographers Technology Committee’s subcommittee studying stereoscopic cinematography. He was the chief technology officer at Real D for several years, after founding StereoGraphics Corporation in 1980. He has been granted more than 30 patents in the area of stereoscopic displays. The image of Lenny is a self-portrait, recently done in oil.

Technology follows patterns. Early on there’s a system or product that people are using, but that product is available only to well-heeled early adopters because of its price. Next somebody comes along with a product that has equal or superior performance, and its price is appealing to a broad range of customers who don’t have the deep pockets that the first guys had. While the early adopters get to use the technology first, the guys who waited may be at an advantage. I think that’s where we’re at with the Oculus3D system.

When I got started at StereoGraphics in the 1980s, the applications for stereoscopic imaging were limited. Customers were mostly in molecular modeling, aerial mapping, and a little bit of CAD. Now we’ve got a whole world wanting stereoscopic movies and other consumer applications. That’s a different situation from my early years.

I have been working on a new stereoscopic projection method, the Oculus3D system, using 35mm projectors and film. The logic behind this effort is simple: from a business perspective there is a strong demand for stereoscopic movies, and there are not enough digital projectors, the “platform” for 3D projection, in the United States, North America, or the rest of the world, for that matter. There are theatrical features, mostly adventure/fantasy films, shot in 2D, that are being converted so that they can be shown in 3D. (And there are movies in the pipeline that were planned from the start to be in 3D – something like two features a month for this year.) If I were a studio executive I would have made the same decision to convert my assets to maximize attendance and profits after the robust success of Avatar, which is a great financial success and a boost to the stereoscopic medium; in its wake there is a strong demand for other stereoscopic theatrical movies. We are seeing steps toward the ubiquity of the stereoscopic cinema on a genre-by-genre basis – first kids’ animation, then horror date movies, and now action, science fiction, and adventure films. (Comedies are next.) The 3D films that are getting made are, with the passage of time, aimed at older and older audiences.

One could make a good case for the eventual ubiquity of the theatrical stereoscopic medium since about 15% of the box office for the year came from about a score of 3D movies – even before Avatar. Part of this is because of good attendance, and another part is that 3D movies are priced as a special event. (They’re not priced at two or three bucks more because of the eyewear, as is commonly asserted; people are charged more because they are getting something special.) Also, given the history of the motion picture medium and commonly observed pricing trends, prices don’t go down; so we are likely to see a continuation of this pricing trend, which is good news for the exhibitors and the studios.

There is a shortage of digital projectors. They’re costly – averaging $70K to install, according to the February 6th LA Times – and that doesn’t include a similar sum over a few years for the 3D hardware sale or license. We are, one hopes, passing through this great recession, but the economy makes it hard to finance digital projectors. The just-announced closing of a financing deal by a major investment bank came a year late and at half its original goal. There is also a brewing crisis of confidence in the industry with regard to the future of digital projectors. Some of the problem has to do with the financing model, the so-called virtual print fee, and some of it has to do with whether the present generation of projectors will remain viable in the long run. I think digital projection is great. I stood up at an ASC technology committee meeting and said so, and was booed by those in attendance. They are film guys, and so am I. I love film, but my eyes told me that digital projection looks good. There are good things about digital projectors and good things about 35mm projectors, and you can say bad things about both. The trouble with digital projectors is that there are not enough of them and they’re costly to purchase and maintain.

My mission for 37 years has been to create advanced stereoscopic displays, and in particular to invent a viable stereoscopic cinema. I’ve done my part. The major technology components of stereoscopic DLP projection were developed by my colleagues and me at StereoGraphics many years ago, and are today being deployed in thousands of cinemas.

On a worldwide basis the lack of 3D screens is blocking the growth of the stereoscopic cinema. How to overcome the problem has been my obsession of late. I began to consider the projection of 3D movies using 35mm projectors, and I’m not the only one who had this flash. In the early 1990s I was the chairman of the SMPTE working group that recommended standards for the projection of motion pictures using the above-and-below system. It’s a deeply flawed system, and I regret seeing its return. It has three major technical shortcomings: it’s dim, it’s easy to project a pseudostereoscopic image, and it has asymmetrical vignetting for the left and right fields. The last is the only problem that can be solved, and only by adding dodging to the prints’ subframes to symmetrize the illumination – a grim prospect for a system that is already too dark.

Knowing this, as I began to think about what I could design that would be better, I hit upon a different idea; I have been working on it for some months with my colleagues at Oculus3D, and together we have gone beyond my original conception. Here is the thinking that went into this system: one of the important considerations in designing optical systems is brightness. We had to figure out how to make the system bright, because the great majority of stereoscopic digital projection systems just aren’t bright enough. The SMPTE recommendation for projecting 35mm in theatrical cinemas is 16 fL. Why shouldn’t stereoscopic movies also be projected that bright? Is there something special about stereoscopic projection that makes a dark image better? No. 3D movies are usually a lot darker than 2D movies because of two factors: the field-sequential duty cycle of digital projection, and the light lost in the selection device. When 3D movies are projected properly the colors are rich and deep, and the image has better contrast and looks sharper. Most 3D movies are being projected at light levels from 3.5 to maybe 5.5 fL. 3D movies for digital projection have to be timed specially to boost color saturation, because at such low light levels the eye starts to lose its color perception.
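
The arithmetic behind those low light levels is multiplicative, which is why the losses compound so quickly. Here is a minimal sketch in Python; the duty cycle and selection-device factor below are illustrative assumptions, not measured values (only the 16 fL reference and the 3.5 to 5.5 fL range come from the text):

    # Illustrative loss chain for single-projector, field-sequential digital 3D.
    # The two loss factors are assumed for illustration, not measurements.
    open_gate = 16.0       # fL, the SMPTE 2D reference level cited above
    duty_cycle = 0.5       # each eye sees the projector at most half the time
    selector_pass = 0.55   # assumed fraction surviving the selection device
    per_eye = open_gate * duty_cycle * selector_pass
    print(per_eye)         # 4.4 fL -- inside the 3.5 to 5.5 fL range reported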

My thinking went like this: The resurrected over/under system with its split-lens approach can’t have the brightness needed. The maximum size barrel that a 35mm projection lens can fit into is a little less than three inches. A system that tries to stuff two lenses in a small barrel just can’t pass enough light. So I had to think of something different, something better.
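
The geometry makes the point quantitatively: throughput scales roughly with aperture area, which goes as the square of lens diameter. A rough sketch, assuming a 3-inch barrel (per the figure quoted above) split equally between the two lenses of an over/under pair:

    import math

    # Light throughput scales with aperture area (proportional to diameter squared).
    def aperture_area(diameter):
        return math.pi * (diameter / 2) ** 2

    barrel = 3.0                            # inches, maximum barrel size quoted
    full_lens = aperture_area(barrel)       # one lens using the whole barrel
    split_lens = aperture_area(barrel / 2)  # each lens of an over/under pair
    print(split_lens / full_lens)           # 0.25 -- each eye gets ~1/4 the light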

I felt the new approach needed to have one projection lens to get all the light down its barrel and onto the screen. I also knew that the 1.85 format was inefficient and yet produces a great image on theater screens. Let me explain. In the early 1950s Fox introduced CinemaScope, which used virtually all of the available 35mm frame area. Over the years the size of the 35mm frame has shrunk. It shrank first when the soundtrack was added, because putting the optical track on the film meant intruding into what had been image area. In order to maintain the 1.3:1 aspect ratio the image got less wide and less high. When Scope was introduced, much more of the available area could be used in conjunction with an anamorphic lens that could fill the screen with what is today a 2.4:1 aspect ratio. When this happened in the early 1950s, Universal introduced a counter-format which, instead of being 1.3:1, was 1.85:1. They achieved the wider aspect ratio by cropping the top and bottom of the image. The newly cropped image throws away a lot of film area. But advances in film stock and the digital intermediate process have allowed for a truly fine image within the reduced area.

I thought to myself that if the new 3D format is based on 1.85, with two sideframes occupying as much area as possible, I’d have a shot at great brightness. With the right optical system it could be like having two 35mm projectors side-by-side projecting on the screen. As we designed the format (and this is work that was done with my colleague Al Mayer, Jr. in conjunction with both EFilm and FotoKem) we came up with a way to maximize the size of the format without intruding into the optical soundtrack area or perforations. The concept is to have two side frames, left and right, but rotated by 90 degrees. That works out nicely to left and right 1.85 images with room for a septum between them (or it might be called a sideframe frameline).

After many experiments we decided that the images are best positioned head-to-head. If the images are head-to-head, vignetting (corner illumination falloff due to the lamphouse and lens) can be symmetrical for the superimposed left and right fields. It is important that the point-for-point brightness of the left and right illumination fields be as close as possible. Because the sideframe’s height, which becomes its projected width after rotation, isn’t quite as wide as the normal 1.85 frame – it is 17% less wide than the 1.85 frame – the format is going to require, in addition to rotation, some optical magnification to fill the existing screen. Conceptually you can think of this as two projectors side-by-side, projecting a slightly smaller image than usual. Brightness is a function of area, so there has to be some light loss. Another light loss involves the optics required for image rotation. And the final loss comes from the requirement to polarize the images.
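
A rough bookkeeping of that area loss, assuming for illustration that both axes of the sideframe scale by the same 17% so that the 1.85 aspect ratio is preserved (the article quotes only the width figure):

    # Sideframe area relative to a full 1.85 frame (assumed isotropic scaling).
    linear_scale = 0.83             # sideframe is ~17% less wide
    area_ratio = linear_scale ** 2  # both axes shrink to keep the 1.85 aspect
    print(area_ratio)               # ~0.69 -- roughly 31% less film area, and
                                    # brightness tracks area, before the rotation
                                    # and polarization losses described below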

There are several optical functions our new lens, the OculR, must perform. First, it has to form an image. Then, for various reasons, we needed to increase the length of the optical path so that the final component has room to do its job. Lenses are typically buried in the projector, so the final component optics, responsible for rotating, converging and polarizing the left and right images, must be positioned so that there is no mechanical interference.

Reflecting surfaces used for the rotation and converging functions can be extremely efficient. But they aren’t perfect, so some light is lost. And polarizing systems have to lose light (they can lose anywhere between 65% and 70% of the light). Can the design wind up with a lot of light on the screen? According to my calculations, based on theory, these optics could produce an image having 8.4 fL per eye, as measured through both polarizers. That’s rather good. It’s not as good as having two 35mm projectors side by side using the same size format (that would be more like 11 fL), but it’s pretty good. Al Mayer Jr. and I had the help of John Rupkalvis, who joined the effort early on. He’s a well-known expert on reflection optics for such systems. John built our first proofs-of-concept and added design ideas.
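
Those two quoted figures also let us back out what the extra optical path implicitly costs. A hedged consistency check, using only numbers stated above:

    # Back out the implied combined efficiency of the rotation/convergence
    # optics from the quoted per-eye figures (both come from the article).
    two_projector_baseline = 11.0   # fL per eye, two side-by-side projectors
    oculr_theoretical = 8.4         # fL per eye, calculated for the OculR
    implied_efficiency = oculr_theoretical / two_projector_baseline
    print(implied_efficiency)       # ~0.76 -- roughly a 24% additional loss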

The resultant projected light is a satisfying 7 fL per eye. What we’ve got on the screen is probably the brightest single-projector 35mm 3D image; it’s brighter than ordinary single-projector 3D by a factor of two.

I should mention that the Oculus3D system takes advantage of the entire digital infrastructure. Infrastructure – which is a fancy word but one that everybody’s familiar with, whether it’s the electric grid or roads – has been the key to my life as an inventor. When I was a boy I read about Thomas Edison, and one of my books said that Edison’s big invention wasn’t the light bulb, it was the electrical distribution system infrastructure. In order for the Oculus3D system to succeed we had to work within the existing motion picture production, printmaking and projection infrastructure. We had to have everything drive down the existing road. So, as far as the way 3D movies are made today – no change. It’s the same pipeline. Instead of a hard drive being distributed to the digital cinemas, it’s a 35mm print – the result of a film-out and the usual way 35mm prints are made. The 35mm print runs the same length, but in the place where it had one picture it now has two. The soundtrack is in the same place.

I’ve mentioned my technology colleagues, and I also want to mention Marty and Robert Shindler who, until they became seduced by the good side of the force, had been running a think tank consulting group called The Shindler Perspective. I also want to thank the people who have believed in us at FotoKem, EFilm, Deluxe, and Pacific Theatres.

Display Industry Calendar

A much more complete version of this calendar is located at: http://www.veritasetvisus.com/industry_calendar.htm. Please notify [email protected] to have your future events included in the listing.

March 2010

March 22-26 2010 Measurement Science Conference Pasadena, California

March 23-25 Phosphors Summit San Diego, California

March 23-25 Image Sensors Europe London, England

March 24 Korea FPD Conference Seoul, Korea

March 24 Transistors on Plastic London, England

March 24-27 EHX Spring Orlando, Florida

March 25-26 Symposium on Haptic Interfaces and Virtual Environments Waltham, Massachusetts

March 31 3D: Behind the Hype Santa Clara, California

April 2010

April 7-10 International Sign Expo Orlando, Florida

April 8-9 2010 Taiwan FPD Conference Taipei, Taiwan

April 8-10 Global FPD Partners Conference Tokyo, Japan

April 9-11 China International 3D World Forum & Exhibition Shenzhen, China

April 10-15 NAB 2010 Las Vegas, Nevada

April 10-15 CHI 2010 Atlanta, Georgia

April 11-14 International Symposium on Flexible Electronics Palma de Mallorca, Spain

April 12-14 Digital Holography and Three Dimensional Imaging Miami, Florida

April 12-16 MIPTV Cannes, France

April 13-14 Printed Electronics Europe Dresden, Germany

April 13-14 Photovoltaics Europe Dresden, Germany

April 13-15 Sign UK/Digital Signage Showcase Birmingham, England

April 14-15 Digital Signage Show 2010 Las Vegas, Nevada

April 14-16 FineTech Japan & Display 2010 Tokyo, Japan

April 14-16 Touch Panel Japan Tokyo, Japan

April 14-16 Smart Fabrics 2010 Miami, Florida

April 14-16 LED/OLED Lighting Technology Expo Tokyo, Japan

April 20-22 Interactive Displays 2010 San Jose, California

April 21-22 3D Gaming Summit Universal City, California

April 27 Photovoltaic Technology Electronics Stuttgart, Germany

April 28-29 Flat Panel Displays Training Workshop Pforzheim, Germany

April 28-30 Organic Photovoltaics Philadelphia, Pennsylvania

April 29-30 Displays: An Executive Overview Nottingham, England

May 2010

May 3-6 Digital Hollywood Spring Santa Monica, California

May 4-7 International Conference on Animation, Effects, Games, and Digital Media Stuttgart, Germany

May 5-6 Screen Expo Europe London, England

May 10-11 Printed Electronics Summit San Jose, California

May 11 FPD Materials and Components Forum Tokyo, Japan

May 17-21 International Conference on Imaging Theory and Applications Angers, France

May 18-19 National Electronics Week Birmingham, England

May 18-20 SGIA Membrane Switch & Printed Electronics Symposium Phoenix, Arizona

May 19-21 SEMICON Singapore Singapore

May 19-21 Three Dimensional Systems and Applications Tokyo, Japan

May 20-21 DisplaySearch China FPD TV and HDTV Conference Shenzhen, China

May 23-26 China Optoelectronics & Display Expo Shenzhen, China

May 23-28 SID International Symposium Seattle, Washington

May 24 SID Business Conference Seattle, Washington

May 24-26 CeBIT Australia Sydney, Australia

May 25-26 3DTV World Forum London, England

May 25-29 Advanced Visual Interfaces Rome, Italy

May 26 The Future of Lighting and Backlighting Seattle, Washington

May 26-27 TV 3.0: The Future of TVs Seattle, Washington

May 27 The Future of Touch and Interactivity Seattle, Washington

May 31 - June 2 LOPE-C -- Large Area, Organic and Printed Electronics Convention Frankfurt, Germany

May 31 - June 2 Graphics Interface 2010 Ottawa, Ontario

June 2010

June 1-3 Dimension3 Expo Seine-Saint-Denis, France

June 1-5 Computex 2010 Taipei, Taiwan

June 3-6 SIIM 2010 Minneapolis, Minnesota

June 5-11 InfoComm '10 Las Vegas, Nevada

June 7-8 Projection Summit Las Vegas, Nevada

June 7-9 3DTV-CON 2010 Tampere, Finland

June 9-10 EuroLED 2010 West Midlands, England

June 9-11 3DCOMM Las Vegas, Nevada

June 9-11 Photonics Festival: OPTO Taiwan, SOLAR, LED Lighting, Optics Taipei, Taiwan

June 14 3DNext Hollywood, California

June 14-16 SEMICON Russia 2010 Moscow, Russia

June 15-17 E3 Media and Business Summit Los Angeles, California

June 15-17 Digital Signage Expo 2010 Essen, Germany

June 15-17 CEDIA Expo Europe London, England

June 21-24 Solid State and Organic Lighting Karlsruhe, Germany

June 21-24 Cinema Expo Amsterdam, Netherlands

June 21-25 Nanotech Conference & Expo Anaheim, California

June 22-25 OLED Expo 2010 Seoul, Korea

June 22-25 LED & Solid State Lighting Expo Seoul, Korea

June 22-25 International Conference on Organic Electronics Paris, France

June 23-25 Electronic Materials Conference Notre Dame, Indiana

June 29 - July 1 Plastic Electronics Asia Osaka, Japan

July 2010

July 7-9 China International Flat Panel Display Exhibition Shanghai, China

July 7-9 China International Touch Screen Exhibition & Seminar Shanghai, China

July 7-9 International Symposium on Flexible Organic Electronics Halkidiki, Greece

July 8-11 SINOCES Qingdao, China

July 11-16 International Liquid Crystal Conference Krakow, Poland

July 12-14 Nanosciences & Nanotechnologies Halkidiki, Greece

July 13-14 TV 3.0 Summit and Expo Los Angeles, California

July 13-15 Semicon West 2010 San Francisco, California

July 13-15 Intersolar North America San Francisco, California

July 14-19 National Stereoscopic Association Convention Huron, Ohio

July 16 Mobile Display Forum Taipei, Taiwan

July 25-29 SIGGRAPH 2010 Los Angeles, California

July 28-29 Japan Forum Tokyo, Japan
