
3rd Dimension, Veritas et Visus, November 2010, Vol. 6, No. 3

Cover highlights: Hatsune Miku, p. 6; Keio University, p. 27; Holorad, p. 41

Letter from the publisher: Garage doors… by Mark Fihn 2

News from around the world 4

SD&A, January 18-20, 2010, San Jose, California 15

3D and Interactivity at SIGGRAPH 2010 by Michael Starks 21

It will be awesome if they don’t screw it up: 3D Printing, IP, and the Fight over the Next Great Disruptive Technology by Michael Weinberg 44

Stereoscopic 3D Benchmarking (DDD, iZ3D, NVIDIA) by Neil Schneider 60

Last Word: The film from hell… by Lenny Lipton 67

The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10 times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603. http://www.veritasetvisus.com

Publisher & Editor-in-Chief: Mark Fihn, [email protected]
Managing Editor: Phillip Hill, [email protected]
Contributors: Lenny Lipton, Neil Schneider, Michael Starks, and Michael Weinberg

Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are available upon request, at a rate based on location and mailing method. Copyright 2010 by Veritas et Visus. All rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.


Garage doors… by Mark Fihn

Long-time readers of this newsletter will be aware of my fascination with clever efforts to create 3D images from 2D surfaces. The German company “Style your Garage” has created a fantastic solution: they make posters for your garage door. I suppose that any one of these images might get tiresome over time, but they are certainly fun for a while. Prices vary from $199 to $399 for a double door! Everything included! http://www.stylemygarage.com



3D news from around the world, compiled by Phillip Hill and Mark Fihn

China’s SARFT looks to establish S3D standards China’s State Administration of Radio, Film and Television (SARFT) is formulating standards for stereo 3D television, according to online sources. The governing body has recently been conducting a number of tests on S3D, and it is thought that, if such standards were introduced, regional broadcasters would have to receive SARFT approval for their premium channels to be broadcast. Other sources have claimed that the channels will be marketed towards high-end demographics and that the standards will enable the introduction of S3D broadcasting from February 2011. http://www.sarft.gov.cn

JVC projectors get THX 3D certification JVC announced that four D-ILA projectors have qualified as the world's first projectors to gain THX 3D Display certification. The four projectors include the DLA-RS60/DLA-X9 models, each carrying a suggested retail price of $11,995, and the DLA-RS50/DLA-X7, each at $7,995. The RS (Reference Series) models are carried by JVC's Professional Products Group, while the X7 and X9 models are carried by JVC U.S.A.'s consumer business unit. During the certification process, more than 400 laboratory tests were conducted, evaluating color accuracy, cross-talk, viewing angles and processing to ensure the high-quality 3D and 2D display performance that home theater enthusiasts demand. The JVC projectors have one-button solutions for optimized playback of 3D and 2D movies: THX Cinema Mode ensures that color reproduction, luminance, blacks, gamma and video processing match what the filmmaker intended, and THX 3D Cinema Mode extends this same level of accuracy to 3D broadcasts and Blu-ray Discs. THX 3D Cinema Mode is designed to deliver highly accurate color in 3D, while minimizing sources of cross-talk and flicker, JVC said. All THX modes on JVC projectors can be accessed by THX professional calibrators. Besides the four THX Certified models, JVC announced two additional 3D-enabled projectors, the DLA-RS40 and DLA-X3, to be available at $4,495 each. The PK-AG1 3D glasses ($179) and PK-EM1 3D signal emitter ($79) are now available. http://www.jvc.com

VIZIO licenses Real D 3D technologies Real D announced that VIZIO has licensed Real D’s 3D technologies for its upcoming line-up of 3D TVs. Current VIZIO XVT series Full HD 3D HDTVs in 42-, 47- and 55-inch class sizes will support the stereoscopic Real D Format for the delivery and display of high-definition 3D content. VIZIO active-eyewear-compatible 3D HDTVs will also utilize active shutter glasses that integrate Real D’s 3D lens and synchronization protocol. The Real D Format is based on Real D’s patented side-by-side 3D display technology. It combines left-eye and right-eye 3D image streams into a single channel for delivery of high-definition 3D video to 3D-enabled displays using today’s HD infrastructure, including existing HD set-top boxes and DVRs. http://www.reald.com
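The side-by-side idea described above is straightforward to sketch in code: each view gives up half its horizontal resolution so both fit in one ordinary HD frame that existing 2D infrastructure can carry. This is a generic illustration of frame-compatible packing, not Real D's actual (patented) processing:

```python
import numpy as np

def pack_side_by_side(left, right):
    """Pack two Full HD views into one Full HD frame: each view keeps
    every other pixel column, then the halves sit next to each other."""
    half_left = left[:, ::2, :]    # naive 2:1 horizontal subsampling
    half_right = right[:, ::2, :]  # real encoders filter before decimating
    return np.concatenate([half_left, half_right], axis=1)

# Dummy views: all-black left eye, all-white right eye
left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.full((1080, 1920, 3), 255, dtype=np.uint8)
frame = pack_side_by_side(left, right)
print(frame.shape)  # (1080, 1920, 3): same size as a single 2D HD frame
```

The receiving display reverses the split and upscales each half back to full width, which is why no new set-top hardware is needed.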

SENSIO puts SENSIO 3D technology and expertise to work for Videotron SENSIO Technologies announced that it has signed an agreement with Videotron. As part of its broadcasting activities, Videotron will launch a 3D content offering this coming December. The cable-TV distributor has selected SENSIO for its SENSIO 3D technology as well as its expertise, acquired through over ten years in the industry, in order to deliver the most immersive experience to the consumer, regardless of the type of television set owned. SENSIO 3D technology enables broadcasting of 3D content over the conventional 2D infrastructure, via cable, satellite or IPTV, while providing superior quality to common frame-compatible formats. SENSIO 3D delivers “visually lossless” 3D images, so faithful to the originally captured images that the difference is imperceptible to the eye of the viewer. For non-3D-enabled TVs, SENSIO’s know-how enables best-quality anaglyph (viewed through red-and-cyan glasses), which provides an unrivalled level of viewing comfort. Through these technologies, Videotron’s customers will be able to view diversified 3D content including movies via video-on-demand, sporting events and concerts. http://www.sensio.tv


SENSIO and MediaTek join to offer high-quality 3D-capable SoC products SENSIO Technologies announced that it is teaming up with MediaTek to integrate SENSIO 3D technology into MediaTek products. Equipped with SENSIO 3D, MediaTek SoC (system-on-chip) products targeted at the 3DTV market will enable television manufacturers to provide a high-quality, immersive 3D experience. MediaTek’s new SoC will target the worldwide TV market. The chip will accept 3D sources from multiple inputs, including HDMI 1.4 and USB. SENSIO 3D is a proprietary frame-compatible format for high-quality stereoscopic signal processing. http://www.sensio.tv

GRilli3D enables 3D stereo viewing on iPad, iPhone and iPod touch GRilli3D unveiled a new technology that allows Apple iPad, iPhone and iPod touch users to view 3D-generated content in true 3D stereo format without 3D glasses. GRilli3D offers a first-of-its-kind utility that lets users enjoy true 3D stereo depth by applying a simple and inexpensive plastic film to a 3D-enabled device. GRilli3D films are known as “GRillis”. GRillis operate by interposing a series of “barrier lines” between the eyes and the projected image, blocking the view of each eye differently and providing the signal separation that results in depth perception at the close viewing distances typical of mobile devices. GRillis will be available worldwide starting November 26th from the GRilli3D website: http://www.GRilli3D.com. GRillis also serve as a screen protector when viewing all your other 2D content.

You'll need the right screen cover to view the image on the left properly; you must align the film correctly
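The “barrier lines” mechanism lends itself to a back-of-the-envelope calculation. In a textbook two-view parallax barrier, the barrier sits a small gap in front of the pixel plane, and its slit pitch is slightly under two pixel pitches so that each eye sees alternate pixel columns. The sketch below uses the standard similar-triangles approximation with illustrative numbers (a 326 ppi panel, 65 mm eye separation, 350 mm viewing distance); none of these figures come from GRilli3D:

```python
# Textbook two-view parallax barrier geometry (similar triangles).
# All numbers are illustrative assumptions, not GRilli3D specifications.
def barrier_design(pixel_pitch_mm, eye_sep_mm=65.0, view_dist_mm=350.0):
    # Barrier-to-pixel gap so adjacent columns map to different eyes
    gap_mm = pixel_pitch_mm * view_dist_mm / eye_sep_mm
    # Slit pitch: slightly under two pixel pitches, so slits stay
    # aligned with pixel pairs across the whole screen width
    slit_pitch_mm = 2 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + gap_mm)
    return gap_mm, slit_pitch_mm

pixel_pitch = 25.4 / 326   # assumed 326 ppi panel -> ~0.078 mm pitch
gap, pitch = barrier_design(pixel_pitch)
print(f"gap = {gap:.3f} mm, slit pitch = {pitch:.4f} mm")
```

The sub-millimeter gap explains why a thin applied film can work at all, and the pitch's sensitivity to viewing distance explains why alignment and head position matter so much on handheld devices.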

Spice Mobile develops autostereoscopic 3D phone Spice Mobile, an Indian mobile handset provider, announced the release of an autostereoscopic (glasses-free) 3D mobile phone. The Spice M-67 3D features a 2.4-inch LCD display and comes with a 3D video player, image viewer, and user interface, as well as 16GB of expandable memory. The model also features Bluetooth, a 2MP camera, FM radio, and an MP3 player, and offers the user the ability to use two GSM SIM cards. Online sources claim that the device will sell for approximately $97 in over 50,000 outlets in India. http://www.spiceglobal.com


Ishikawa Optics & Arts shows movies in mid-air, spherical objects Ishikawa Optics & Arts Corp exhibited unique holographic displays that show movies floating in mid-air and inside a transparent spherical object. These displays can show any type of movie, including camera footage and computer graphics. http://www.holoart.co.jp

Space Projector: A display called the “Space Projector” shows a movie in mid-air. Its display device looks like that of a rear projector but has no screen. The light from the projector forms an image at a certain distance in front of the unit. The projected movie can be seen only from the front, but from that position it looks as if the movie is floating in mid-air. The Space Projector is being sold at a price of ¥650,000 (approx US$7,947).

Crystal-ball Display: The company also showcased the “Crystal-ball Display,” which shows a movie inside a transparent spherical object made of resin. A light-diffusing layer formed inside the object works as a screen: when light is projected onto the sphere, the movie appears on the diffusion layer. The company expects the Crystal-ball Display to be used at company entrances and similar venues. The prices of the displays using spherical objects with diameters of 100mm and 200mm are ¥250,000 and ¥800,000, respectively.

The “Space Projector”; the image of a flame can be seen only from the front.

The "Crystal-ball Display"


Panasonic to release 3D lens for mirror-less cameras Panasonic Corp released the “H-FT012,” an interchangeable lens unit that enables the Lumix G series of mirror-less cameras to shoot 3D images. Its price is ¥26,250 (approx US$310). Equipped with two optical systems inside the lens mount, the lens unit takes two pictures at the same time. “Compared with a method of creating 3D images by using images shot at different times, the H-FT012 can take an image of a moving object in 3D without distortion,” Panasonic said. The diameter and height of the H-FT012 are 57mm and 20.5mm, respectively. It weighs 45g, and the focal distance is 65mm (35mm film equivalent). The lens unit can be used with the DMC-GH2 mirror-less lens-interchangeable camera, which was announced at the same time, and the DMC-G2, which was released in April 2010 (the firmware of the DMC-G2 has to be updated). In addition to the H-FT012, Panasonic announced two other interchangeable lens units. One is the “H-H014,” a 14mm wide-angle lens (35mm film equivalent: 28mm). It is a so-called “pancake lens” with a maximum aperture of 2.5. The H-H014 will be released at a price of ¥49,875. The other is the “H-FS100300,” a zoom lens with a focal distance of 100-300mm (35mm film equivalent: 200-600mm). It will be released at a price of ¥80,850.

The DMC-GH2 mirror-less lens-interchangeable camera equipped with the “H-FT012” lens unit; the back side of the H-FT012. Two optical systems can be seen.

Microsoft plans to buy privately-held gesture chip maker Canesta Canesta and Microsoft reportedly signed a “definitive agreement” under which Canesta’s technology, IP, customer contracts, and other resources will be acquired by Microsoft. Canesta specializes in 3D sensing technology used in making Natural User Interfaces (NUI). Canesta is the inventor of a leading single-chip 3D sensing technology platform and a large body of intellectual property. With 44 patents granted to date and dozens more on file, the company has made breakthroughs in many areas critical to enabling natural user interfaces broadly across many platforms. Some of these include the invention of standard CMOS 3D sensing pixels, fundamental innovations in semiconductor device physics, mixed-signal IC chip design, optics, signal processing algorithms, and computer vision software. No details of the agreement have been disclosed. The acquisition is expected to be completed before the end of this year. http://www.canesta.com

Digital Domain Holdings acquires In-Three Digital Domain Holdings announced that it has reached an agreement to acquire 3D stereo studio In-Three, Inc., developer of the Dimensionalization process that converts 2D films into high quality, 3D stereo imagery. Digital Domain studios in California and Vancouver recently completed production on Walt Disney Studios’ TRON: Legacy, which was generated and produced in stereoscopic 3D. In-Three completed 3D stereo work on Tim Burton’s Alice in Wonderland, which grossed over $1 billion at the worldwide box office. The In-Three team will be based out of Digital Domain Holdings’ headquarters in Port St. Lucie, Florida. http://www.digitaldomain.com


J-Pop star (and 3D hologram) Hatsune Miku sells out stadiums Hatsune Miku has topped the pop charts in Japan, sold out stadium concerts and become a legitimate cultural phenomenon. The interesting thing is that Miku doesn't exist – at least not in any traditional sense of the word. Miku is a computer-generated avatar that performs songs with the help of a live band. But unlike say, Gorillaz, a cartoon band that merely serves as the public face of an artistic collective, everything about Miku comes from a computer. She is the product of a company called Crypton Future Media, which synthesizes Miku's voice using Yamaha's Vocaloid software. Creating the character – which appears as a girl with blue pigtails and a cyberpunk version of the traditional Japanese school-girl uniform – was a meticulous process. First, the creators recorded voice actress Saki Fujita making individual phonetic sounds at a specific pitch and tone. Then, they recombined the samples and fed them through the synthesis software to produce an almost endless number of words and sounds. Users can actually purchase a copy of Miku to run on their home PCs, and have her perform songs of their own creation. Despite Miku's availability for private performances on home PCs, crowds still pay for live concerts, where Miku is able to whip her legions of fans into a frenzy (as seen in the video below). At these sold-out shows, Miku is materialized as a 3D hologram. http://www.youtube.com/watch?v=-7EAQJStWso&feature=related

Japanese pop star Hatsune Miku is an electronic avatar (including her digitally created voice) who performs to live, paying audiences in Japan. Her performances are electronic light shows backed by a live band and feature quick-change routines (see the lower left and center images, in which her clothing and hairstyle change in 4 seconds).


DirecTV to present “Guy's Big Bite in 3D” DirecTV announced that it reached a deal with Scripps Networks to carry the first food show to be broadcast in full 3D for its n3D channel, powered by Panasonic. The first episode will debut at 8:00 p.m. ET/PT Dec. 4; the co-production features Food Network star Guy Fieri and will present six new episodes of “Guy's Big Bite,” produced specifically for 3D. Each will feature such signature recipes as andouille and clam crostini, Sloppy Joe's with Maui onion straws, hoisin chicken fold-ups, braised pork shoulder and garlic parmesan crab. http://www.directv.com

Plextor launches Blu-ray writer with 3D playback Plextor announced an internal Blu-ray writer with 3D playback capabilities. The PX-LB950SA is said by Plextor to be able to record a 25GB single-layer Blu-ray disc in 12 minutes at 12X speed. It also has LightScribe disc labeling and comes bundled with Plextor’s PlexUtilities and CyberLink PowerDVD 9 Blu-ray software. For Blu-ray burning, its minimum specification requirements include an Intel Pentium 4 running at 2GHz or Pentium D at 3GHz, with 1GB RAM and at least 50GB of free storage space. It also requires Windows 7, Vista, or XP with Service Pack 2. To use it as a player, the computer needs an HDCP-compliant graphics card with 256MB of RAM and 32-bit color support. Plextor recommends the AMD ATI X1600 or NVIDIA 7600GT or later. The 0.65kg internal drive has 7-pin SATA and 15-pin power connectors. The Blu-ray drive can also write to DVDs. http://www.plextor.com

Virgin Media announces a 1TB HD 3D capable set-top box Virgin Media is launching its STB alongside its TiVo-powered next-generation connected TV service. TiVo is the US cable and web TV service that has operated in the UK since 2000. Pricing and availability for Virgin Media's IPTV and STB are yet to be revealed. Virgin has reported that the service needs optical fiber cable to the property, but there will not be any impact on the customer's broadband speed. Whether customers have Virgin Media's 10Mb, 20Mb or 50Mb broadband service, their Internet connection will remain at that speed, as the STB has its own dedicated broadband link. Virgin Media also has a 100Mb broadband service that it will be rolling out this quarter. Whichever deal customers have, the STB will use its own internal modem, set at a connection speed of 10Mb, to deliver the HD video and other online services over the dedicated broadband. http://www.virginmedia.com

International 3D Society targets advertising community The International 3D Society announced an initiative to promote 3D production to the advertising community, as well as appointments to its newly formed marketing committee. Marketing committee members will support the group's mission to advance 3D technology and content by developing an education curriculum specifically targeted towards creative professionals in the advertising community. Ed Erhardt, President of Customer Sales and Marketing, ESPN, will serve as Chairman. The International 3D Society has scheduled a 3DNA Outlook Forum for Tuesday, February 8, at which marketing committee members will present ideas for uses of the technology in creative advertising. The medium is predicted to become a key marketing tool. In addition to developing educational themes for 3D advertising designers and producers, committee members will identify advertising campaigns to be awarded Lumiere honors at the Society's 2nd Annual Creative Awards, Wednesday, February 9th, 2011 at Hollywood's historic Grauman's Chinese Theatre. http://www.international3dsociety.com

Mufin announces world’s first 3D music interface Mufin announced that the latest free release of its mufin player will include the world’s first 3D music navigation feature. Mufin player 2.0 will include mufin vision, a new 3D music interface which uses three distinct song attributes to display digital music collections as a visual map. The result is that mufin vision provides music fans with a fast, intuitive way to explore the thousands of tracks on their PCs or mobile phones using either a mouse or touch screen interface. “Even casual music fans now have access to thousands of digital tracks on their PCs and mobile devices, and managing such vast collections has become a real challenge,” said Boris Löhe, mufin President. “We’ve introduced mufin vision as a unique feature in the free version of mufin player 2.0 to give music fans a truly innovative way to navigate music, and so far the feedback has been amazing.” The free mufin player 2.0 can be downloaded at http://www.mufin.com


Panasonic to sell 152-inch and 103-inch 3D plasma TVs Panasonic will offer its state-of-the-art TH-152UX1 152-inch, 3D plasma display, the world’s largest, along with the TH-103VX200 and TH-85VX200, 3D versions of its premium 103-inch and 85-inch plasma displays from January 2011. Ideal for a broad spectrum of cutting-edge applications, the three new Full HD 3D large-format plasmas are designed for utilization in homes, corporate environments, and the entertainment industry.

Several breakthrough technologies enable the 3D displays’ Full HD images and even improve 2D performance. Panasonic’s 3D ultra-high-speed drive technology and new motion prediction technology eliminate crosstalk (ghosting) between left and right images to produce super-fine, crisp images, and facilitate an overwhelming, near-infinite contrast ratio of 5,000,000:1, with around twice the tone gradation of conventional models. A 30-bit Color Processing Engine provides super-accurate color reproduction and a color management system that features high-precision pixel conversion and color signal-processing technology, high-definition broadcast-compliant color reproduction, and professional picture quality adjustment that allows displays to be set up to suit their viewing environments perfectly. The result is studio-monitor-like performance, reproducing the detail and texture of the original images with near perfection. The three displays also incorporate video processing technology to allow for the crisp, smooth playback of images originally recorded in 24p, such as movies recorded on film. This raises the video reproduction performance of the plasma display panels, allowing for true-to-life reproduction of colorful, high-precision content in either 3D or 2D.

To shorten decay time by a third in comparison to conventional models, new short-decay-time phosphors have been installed. Also, the inclusion of the world’s first motion vector prediction allows high-precision light emission controls to predict not only backward and forward movement, but also left-right and diagonal movement, heightening drive speed and allowing light to be emitted in about one-fourth the time of conventional models. This reduces crosstalk, the key to high-quality 3D video, to an absolute minimum, giving clear, high-resolution 3D images on ultra-large panels such as the 85- and 103-inch models and even the 152-inch model, which offers 8.84 million pixels (4096x2160), about quadruple the pixels of Full HD panels (1920x1080 = 2.07 million pixels). With the virtually infinite contrast of 5,000,000:1, the new displays maximize PDP’s high-quality 3D imaging capabilities across the entire surface of the ultra-large screens, creating an overwhelmingly immersive experience. http://www.panasonic.com
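The pixel arithmetic quoted above is easy to verify:

```python
full_hd = 1920 * 1080    # pixels in a Full HD panel
panel = 4096 * 2160      # pixels in the 152-inch panel
print(panel, panel / full_hd)  # 8847360 pixels, roughly 4.27x Full HD
```

So the 152-inch panel has 8,847,360 pixels, matching the quoted 8.84 million figure, and "about quadruple" Full HD is a slight understatement of the 4.27x ratio.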

Verizon rolls out 3D video-on-demand on FiOS Verizon launched its 3D on-demand movie channel according to Avail-TVN, the company supplying the service. Titles becoming available in the next few months include “The Last Airbender,” “Saw: The Final Chapter,” “Alpha and Omega” and “Piranha.” Avail-TVN says its 3D video-on-demand service “does not require any additional equipment or upgrades to service providers’ existing equipment.” That is, it works on Verizon’s FiOS encoders and set-tops, the latter with a “minor over-the-air software update.” Subscribers still need 3DTV sets and companion glasses to view content in 3D. Avail-TVN says it uses the frame-compatible format for distribution of 3D content, which is transmitted at the same rate as 2D HD files. Movies will be priced from around $1 to $2.50 more than HD titles. http://www.availmedia.com


ITRI develops simultaneous 2D/3D switchable display i2/3DW is a next-generation 2D/3D switchable display technology that can simultaneously integrate naked-eye 3D display with regular 2D information. This breakthrough solves the problems previously associated with 2D/3D displays (a lack of integration forcing viewers to switch between 2D and 3D modes) and with 3D displays (blurry text and special eyewear). With i2/3DW, 2D text is as clear as on a 2D screen and 3D images are as fascinating as on a 3D screen, but the two can now coexist on the same screen for optimal viewing quality. An i2/3DW display is comprised of three primary component layers: the conventional liquid crystal display panel (LCD panel), the dynamic backlight unit (DBLU), and the 2D/3D switching layer that lies in between the LCD and DBLU panels, allowing the 2D and 3D display modes to be switched automatically. This feature differentiates ITRI's i2/3DW technology from its competitors: to date, similar technologies have only switched the whole screen between 2D and 3D; i2/3DW is the first to make a partial switch possible. ITRI's switching component is made of two polarization films, one micro-retarder and one low-resolution LC panel, all extremely inexpensive to make, making the i2/3DW technology affordable. http://www.itri.org.tw/eng/

Gucci, Calvin Klein, and Oakley release fashionable 3D glasses A newly released pair of 3D glasses from Gucci features a high-tech multi-layered mirrored coating which allows the wearer to view themselves in a mirror without distortion. The mirrored coating not only provides a premium look along with superior viewing and contrast enhancement, but also allows over 98 percent of visible light through, so it does not affect viewing in a cinema environment. An anti-reflective coating has been applied to the back of the lens for additional overall image quality, reducing scattered light, glare and blue light. These aviator-style 3D glasses will be available at Gucci boutiques this holiday season for $225. http://www.gucci.com

Calvin Klein announced that it has partnered with Marchon3D, the company behind the M3D technology, which makes curved lenses 3D-capable. The company promises “the most technologically advanced, fashion-forward 3D sunglasses on the market.” The glasses are Real D 3D certified and feature photo-chromic lens technology for wearing inside and outside--and, assumedly, at night, Corey Hart-style. There will be six versions of the glasses in total – three for men, three for women. All will be available next month for $180. http://www.calvinklein.com

Oakley announced the release of the world’s first optically correct 3D glasses, OAKLEY 3D GASCAN. Utilizing the company’s proprietary HDO-3D™ technologies, these premium glasses are engineered for unrivaled 3D performance, superior visual clarity and signature Oakley comfort. OAKLEY 3D GASCAN will both complement and optimize the technology used in the majority of 3D movie theaters around the world. With HDO-3D, Oakley 3D lenses virtually eliminate the ghosting or “crosstalk” between images that reach each eye from one moment to the next, a potential problem with inferior 3D glasses. Lens curvature is another issue with conventional 3D glasses. Greater curvature around the eyes provides a wider field of view, but without highly precise optics, even a mild curve can cause visual distortion. Oakley technology maintains optical clarity so the wearer can enjoy a wide field of sharp vision. The curvature of Oakley 3D lenses has the added benefit of minimizing distractive glare. The new 3D glasses will be available for $120. http://www.oakley.com

Newly released designer glasses from Gucci, Calvin Klein, and Oakley


THine introduces new power management LSI for 3D television markets THine Electronics introduced its new power management LSI, the THV3058, for advanced television markets such as 3D televisions and 240 Hz refresh rate panels. The THV3058 is available now in volume production. It implements, in a single chip, the complicated power management systems required by 3D televisions and other advanced televisions. The advanced television markets of 3D televisions and 240 Hz refresh rate panels are growing rapidly. Since these televisions have higher-speed refresh rates, the driver ICs of their LCD panels need more electric current, so these advanced televisions require more complicated power management systems. The THV3058 reduces energy consumption by up to 30% compared to a usual power management LSI. In addition, the new product achieves more reliable systems by adding a new overcurrent protection function alongside the usual system protection functions. http://www.thine.co.jp

Planar announces new 3D-ready monitor Planar Systems announced the latest addition to its 3D stereoscopic product line. Leveraging six years of expertise in manufacturing 3D monitors with the highest stereoscopic image quality available, Planar is launching the new SA2311W 3D-ready desktop monitor. Planar's SA2311W 3D display will offer a single-monitor entrée for the consumer market and an added option for the professional market, supporting the company's strategy to develop innovative display solutions for unique applications. The SA2311W utilizes NVIDIA 3D Vision technology to produce full 1920x1080-resolution stereo images that can be seen with NVIDIA 3D Vision and NVIDIA 3D Vision Pro active shutter glasses. The stereo image quality is the best available for frame-sequential 3D technology. The monitor is competitively priced at $449.

NVIDIA 3D Vision consumer technology is a combination of a wireless emitter, active shutter 3D glasses, and one of many compatible NVIDIA GeForce graphics cards and software, which delivers an immersive 3D gaming, video and photo experience on the SA2311W. NVIDIA 3D Vision Pro technology employs active shutter glasses and an innovative radio frequency (RF) communication system for use with Planar's 120 Hz monitor, in conjunction with NVIDIA Quadro professional graphics cards. Professionals can design, create and explore in stereoscopic 3D, enabling collaborative workflows on the SA2311W.

Planar uses cutting-edge 120 Hz LCD technology to produce a 3D monitor with blazing fast response time. The 23-inch screen is packaged in a sleek design that looks beautiful in both home and work environments. The SA2311W also functions great as a 2D monitor, and its speed makes it ideal for gaming or geospatial image roaming; however, the 3D capability is what differentiates the product from the competition. Planar is offering the SA2311W bundled with the 3D Vision Kit and Planar's new ProGlow backlit keyboard. The keyboard features seven brightness levels including "off," using an individual LED behind each of the 105 keys. This provides excellent keyboard visibility while wearing 3D glasses and/or when working (or playing) in a low-ambient-light environment. http://www.planar.com

International 3D Society presents industry awards Panasonic, Dolby, Real D, Texas Instruments, NVIDIA and XpanD were just some of the recipients of 3D Technology Awards from the International 3D Society. Panasonic received the organization’s first “Charles Wheatstone” Award for its “advocacy, technology and consumer engagement” in developing stereoscopic 3D products, while Dolby was honored for its Dolby 3D system and Texas Instruments for its DLP Cinema technology. NVIDIA’s 3D Vision technology won an award, and XpanD won for its “Active 3D Cinema System”. http://www.international3dsociety.com


Streambox launches 3D encoder-decoder Streambox released the world’s first low-latency, full-resolution 4:2:2 HD 3D 1-RU encoder/decoder. The Full HD 3D encoder/decoder uses the company’s ACT-L3 video compression and includes all advanced video and networking features found in existing Streambox professional video products. The compact 1-RU solution is ideal for industries focusing on professional-quality 3D video acquisition, such as post-production, sports broadcasting, and government/military. The Streambox Full HD 3D video transport solution will be available to order in December 2010. Designed for low-bandwidth Full HD 3D video acquisition and transport, the Streambox 3D Encoder/Decoder enables users to capture and transmit live and file-based 3D video over IP networks. It will offer robust forward error correction and bandwidth-shaping technologies to mitigate packet loss, network jitter, and buffering. The encoder captures the full-frame left and full-frame right HD 3D video from the source and compresses it into a single synchronized transport stream or file. The single stream is received and decoded by the HD 3D Decoder as full-left and full-right play-out, with an option for side-by-side monitoring. http://www.streambox.com

Streambox Full HD 3D encoder/decoder

InvenSense announces the world’s first MotionProcessor InvenSense announced the release of its highly anticipated MPU-6000 product family. The MPU-6000 is a breakthrough in MEMS motion sensing technology, integrating a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die together with an onboard Digital Motion Processor (DMP) capable of processing complex 9-axis sensor fusion algorithms. With the increasing popularity of motion sensors in everyday consumer electronics, pioneered by Nintendo with the Wii console and later by Apple with the iPhone, motion processing is quickly expanding into smartphones, tablets, TV remotes, handheld gaming devices and gaming consoles, digital still and video cameras, and many other consumer products.

The MPU-6000 family of MotionProcessors eliminates the challenges associated with selecting and integrating many different motion sensors that can require signal conditioning, sensor fusion and factory calibration. It features integrated 9-axis sensor fusion algorithms that use an external magnetometer, read through its master I2C bus, to provide dead-reckoning functionality. The MPU-6000 offers easy integration and interfacing to various application processors through an I2C or SPI bus and its standard MotionProcessing Library (MPL) and APIs. Adoption of motion processing functions in smartphones, tablets and many other portable consumer electronic devices promises a host of new and enhanced functions and benefits to consumers, including precise sensing of hand jitter to improve image quality and video stability, GPS dead reckoning for vehicles, indoor pedestrian navigation, new motion-based user interfaces, and more immersive gaming experiences, to name a few. However, market adoption has been slow, primarily due to a lack of available off-the-shelf solutions that OEMs could adopt quickly and easily. Today, developing an integrated motion sensor solution requires using components offered by many different suppliers, adding signal conditioning, developing proprietary sensor fusion algorithms, budgeting processing overhead and resources, and understanding the complex IP challenges in this space, all of which adds cost and delays adoption by end customers. The MPU-6000 is available immediately for selected customer sampling. http://www.invensense.com
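InvenSense’s DMP firmware and MPL library are proprietary, but the basic idea behind fusing a drifting gyroscope with a noisy accelerometer can be illustrated by a classic complementary filter. The function name and blending constant below are illustrative assumptions, not InvenSense code:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse one axis of gyro and accelerometer data.

    The integrated gyro term (smooth but drifting) is trusted in the
    short term; the accelerometer-derived angle (noisy but drift-free)
    slowly pulls the estimate back, cancelling gyro drift.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a stationary device tilted 10 degrees. Starting from a wrong
# estimate of 0, repeated updates converge to the accelerometer reading.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

A real 9-axis fusion adds the magnetometer for heading and runs far more sophisticated algorithms, but the blend-two-imperfect-sensors principle is the same.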

Teranex and SENSIO enable broadcasting of premium-quality 3D television to the home Teranex Systems and SENSIO Technologies jointly announced the addition of SENSIO Technologies’ patented 3D frame-compatible stereoscopic compression technology to Teranex’s 3D encoding and decoding options, the VC1-3D-ENC and VC1-3D-DEC applications. The added capability will enable delivery of 3D programming of superior image quality over existing broadcasting infrastructure. http://www.teranex.com http://www.sensio.tv


University of Abertay Dundee develops Motus to enable users to film in any 3D environment In the creation of the film Avatar, director James Cameron invented a system called Simul-cam. It allowed him to see the video output of the cameras in real-time, but with the human actors digitally altered to look like the alien creatures they were playing. Now, researchers from the University of Abertay Dundee have built on the techniques pioneered by Simul-cam to create a new system that lets users act as their own camera operator within a 3D environment. Users of the Motus system hold two Sixense electromagnetic motion-sensitive controllers (similar to Wii controllers), and see their environment through a virtual camera – just as the environments of existing video games and animations are already seen. In this system, however, they can look around their environment simply by moving one of the controllers, as if it were a camcorder. While it’s been possible to do this in first-person video games for years, the Abertay system does so in a much more lifelike, organic fashion, and can be applied to any 3D computer model.

Motus users can advance through their environment, pan left and right, tilt up and down, zoom, and adjust their virtual iris and depth of field. The camera can be “hand held,” for a Blair Witch-like effect, or mounted on a virtual tripod or dolly, for a steadier, more professional look. The scale of the camera can also be changed on the fly, so you could start by walking through a room, then in one continuous shot proceed to squeeze through the holes in a block of Swiss cheese. There are several possible uses for the technology, besides film-making. “Within games, watching and sharing replays of the action is hugely popular,” said project associate Erin Michno. “What our development allows is replays to be edited exactly as if they were a film, zooming in, panning the camera, quickly and easily creating a whole movie based on your gaming. For online games enthusiasts, that would dramatically change what’s possible.” A commercial version of the system will be manufactured by gaming hardware company Razer, and should work on any home PC. It is expected to be available early next year for under £100. http://www.abertay.ac.uk/about/news/newsarchive/2010/name,6983,en.html

Aperio awarded patent for 3D viewing of digital cytology slides Aperio announced that the United States Patent and Trademark Office has issued the company patent No. 7,787,674, entitled “Systems and Methods for Viewing Three Dimensional Virtual Slides.” Digital pathology improves patient care by allowing physicians to access, analyze and share digital (whole microscope) slide images, enabling accurate and reliable diagnoses at lower cost. The ‘674 patent describes novel techniques for retrieving, manipulating and viewing three-dimensional (3D) image objects from 3D digital slide images. Unlike 2D digital histology slides, which contain a single focus plane sufficient for the majority of viewing situations, 3D digital slides contain multiple, effectively unlimited focus planes that enable digital focusing on specimens that are inherently three-dimensional in nature, such as cytology slides. An image library module allows a 3D image object to be sliced into horizontal and vertical views, skewed cross-layer views, and regular and irregular shaped 3D image areas for viewing by a user. http://www.aperio.com


Stereoscopic Displays and Applications Conference, January 18-20, 2010, San Jose, California

In this fifth report, Phillip Hill covers presentations from University of St. Andrews, University of Nantes, Bangor University, Samsung Advanced Institute of Technology, Qualcomm Inc/University of California, and Philips Research

Monocular Zones in Stereoscopic Scenes: A Useful Source of Information for Human Binocular Vision? Julie M. Harris, University of St. Andrews, St. Andrews, Scotland

When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This ability of the human visual system, called stereopsis, lies at the core of all binocular 3D perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye’s views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye’s view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye’s view and can thus cause problems for depth extraction algorithms. This paper discusses evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems; rather, our visual systems can extract depth from such zones. The paper reviews the relevant human perception literature in this area, and shows some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

A stereopair from the St. Andrews Binocular Image Database. The stereopair is set up for cross-fusing to reveal the depth. Monocular zones occur primarily in association with the edge of the foreground rock. Left: right eye; right: left eye. http://psy.standrews.ac.uk/people/personal/pbh2/images/


The Influence of Autostereoscopic 3D Displays on Subsequent Task Performance Marcus Barkowsky, and Patrick Le Callet, University of Nantes, Nantes, France

Viewing 3D content on an autostereoscopic display is an exciting experience. This is partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity of the left and the right view on a flat screen instead of by a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution through a subjective experiment. A search task experiment is conducted and the behavior of the participants is recorded with an eyetracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment. The performance task splits into three parts: reading the text (e.g. find all triangles bordered by green), performing the search task, and selecting the correct answer. With the help of the eyetracking data, a detailed analysis is possible. In the illustration an example is displayed for one observer and one task. The figure shows the saliency information overlaid on the task screen as seen by the subject. It can be seen that the position of the correct answer is randomized. The gaze points that were recorded by the eyetracker are displayed in green. Additionally, a fixation and saccade analysis was performed using an acceleration detection method. The fixation points are displayed with larger green circles and the number of each fixation is printed as text. The saccades are depicted with dashed lines. The points are not connected when the eye could not be tracked, e.g. due to a blink.

Example of saliency map and scanpath of one observer and one task for the four sessions, pre/post 3D, pre/post 2D

Eliminating Accommodation-Convergence Conflicts in Stereoscopic Displays: Can Multiple-focal-plane Displays Elicit Continuous and Consistent Vergence and Accommodation Responses? Kevin J. MacKenzie, and Simon J. Watt, Bangor University, Bangor, Wales

Conventional stereoscopic displays present images at a fixed focal distance. Depth variations in the depicted scene therefore result in conflicts between the stimuli to vergence and to accommodation. The resulting decoupling of accommodation and vergence responses can cause adverse consequences, including reduced stereo performance, difficulty fusing binocular images, and fatigue and discomfort. These problems could be eliminated if stereo displays could present correct focus cues. A promising approach to achieving this is to present each eye with a sum of images presented at multiple focal planes, and to approximate continuous variations in focal distance by distributing light energy across image planes, a technique referred to as depth-filtering.
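As a sketch of the depth-filtering idea: the light energy for a point at an intermediate focal distance can be split between the two nearest image planes in proportion to dioptric distance. The linear-in-diopters weighting below is the commonly described formulation and is my assumption; the authors’ exact rule may differ.

```python
def depth_filter_weights(d_target, d_near, d_far):
    """Split intensity between two adjacent focal planes (distances in
    diopters) so that their weighted sum approximates an intermediate
    focal distance. Linear interpolation in dioptric space is assumed.
    """
    if not (min(d_near, d_far) <= d_target <= max(d_near, d_far)):
        raise ValueError("target distance must lie between the two planes")
    w_near = (d_target - d_far) / (d_near - d_far)
    return w_near, 1.0 - w_near

# A point dioptrically midway between planes at 2.0 D and 1.0 D draws
# half its energy from each plane; a point on a plane uses that plane only.
mid = depth_filter_weights(1.5, 2.0, 1.0)
on_plane = depth_filter_weights(2.0, 2.0, 1.0)
```

The experiment described next suggests that, for accommodation, this approximation only holds well when the plane separation is small.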

Here the researchers describe a novel multi-plane display in which they can measure accommodation and vergence responses. They report an experiment in which they compare these oculomotor responses to real stimuli and depth-filtered simulations of the same distance. Vergence responses were generally similar across conditions. Accommodation responses to depth-filtered images were inaccurate, however, showing an overshoot of the target, particularly in response to a small step-change in stimulus distance. This is surprising because research has previously shown that blur-driven accommodation to the same stimuli, viewed monocularly, is accurate and

reliable. They speculate that an initial convergence-driven accommodation response, in combination with a weaker accommodative stimulus from depth-filtered images, leads to this overshoot. The results suggest that stereoscopic multi-plane displays can be effective, but require smaller image-plane separations than monocular accommodation responses suggest.

A schematic of the stereoscopic multiple-focal-plane display. (a) Side view of the main optical elements in each eye’s display, including the autorefractor (right eye only). (b) Photograph of an observer viewing the display. (c) Plan view of one eye’s display showing the range of focal distances that can be presented. Individual focal planes are adjustable within this range, with a minimum separation (determined by physical constraints) of 1/3 D. (d) Plan view of the layout of the left and right eye’s displays. The system is a haploscope: each eye’s display can rotate in the horizontal plane about the eye’s center of rotation. The rotation axes of the displays are adjustable, to match the observer’s inter-ocular distance.

Improving Depth Maps with Limited User Input Patrick Vandewalle, René Klein Gunnewiek, and Chris Varekamp, Philips Research, Eindhoven, The Netherlands

A vastly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear assisted viewing in a well defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High quality depth information can currently only be achieved with user assistance.
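A toy sketch of why the occlusion layer matters: rendering a new viewpoint from the image-plus-depth format by forward-shifting pixels leaves holes exactly where the occlusion layer’s hidden background would be consulted. The pixel-shift rule, gain, and function name are simplified assumptions for illustration, not the Philips renderer.

```python
import numpy as np

def render_view(image, depth, gain):
    """Synthesize a horizontally shifted viewpoint from image + depth.

    Pixels with larger depth values shift further. Output pixels that
    are never written are disocclusions; a real renderer would fill
    them from the occlusion (occluded video) layer.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(gain * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, filled

# A 1x4 "image" whose right half is nearer: shifting opens a hole at
# column 2, which only occlusion-layer data could fill correctly.
img = np.array([[1, 2, 3, 4]])
dep = np.array([[0, 0, 1, 1]])
out, filled = render_view(img, dep, 1.0)
```

Because the gain can be chosen per display and per viewer, this format adapts the depth effect to different screen sizes and viewing distances, which is the flexibility the paper argues for.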

In this paper, the researchers discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, they elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool. In the figure, the researchers

give an example of the depth map enhancement algorithm. The contour of the woman’s head is drawn in a key-frame, along with an outside local background region (see Figure (b)). A number of salient features on the head are then indicated to permit precise tracking. The enhancements to the depth map are shown in Figure (c). These corrections are then propagated through the sequence, and the result in frame 10 can be seen in Figures (d)-(f). The contour has been tracked quite accurately and in a temporally stable manner. Some inaccuracies to the left of the head are still visible in frame 10 due to small tracking errors.

Depth map enhancement using a small amount of user interaction. The user has drawn the contour of the head in frame 1 as well as some feature points. These are automatically propagated and depth maps are filtered accordingly.

Is a No-reference Necessary and Sufficient Metric for Video Frame and Stereo View Interpolation Possible? Vikas Ramachandra, Qualcomm Inc, San Diego, California Truong Q. Nguyen, University of California, San Diego, California

This paper explores a novel metric that can check the consistency and correctness of a disparity map, and hence validate an interpolated view (or video frame, for motion-compensated frame interpolation) derived from the estimated correspondences between two or more input views. The proposed reprojection error metric (REM) is shown to be sufficient for regions where the observed 3D scene has no occlusions. The metric is completely automatic, requiring no human input. The paper also explains how the metric can be extended to be useful for 3D scenes with occlusions. However, the proposed metric does not satisfy necessary conditions. The paper discusses the issues that arise in the design of a necessary metric, and argues that necessary metrics that work in finite time cannot be designed for checking the validity of a method that performs disparity estimation.
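The paper’s exact REM formulation is in the proceedings; a minimal sketch of the underlying check – warp one view using the estimated disparity and compare against the other view – might look like the following. The function name and the mean-absolute-difference error are my assumptions.

```python
import numpy as np

def reprojection_error(left, right, disparity):
    """Mean absolute error between the right view and the left view
    warped by an estimated (right-referenced) disparity map.

    Only pixels whose correspondence lands inside the image are scored;
    occluded regions need separate handling, as the paper discusses.
    """
    h, w = right.shape
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            src = x + int(disparity[y, x])  # matching column in the left view
            if 0 <= src < w:
                total += abs(float(left[y, src]) - float(right[y, x]))
                count += 1
    return total / count if count else float("inf")

# A correct disparity of 1 everywhere reprojects with zero error;
# a wrong (zero) disparity does not.
left = np.array([[1, 2, 3, 4]])
right = np.array([[2, 3, 4, 9]])
good = reprojection_error(left, right, np.ones((1, 4), int))
bad = reprojection_error(left, right, np.zeros((1, 4), int))
```

Low error is evidence of consistency, not proof of correctness in occluded regions, which is exactly the sufficiency-versus-necessity distinction the paper draws.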

2D-to-3D Conversion by Using Visual Attention Analysis Jiwon Kim, Aron Baik, Yong Ju Jung, and Dusik Park, Samsung, Yongin-Si, South Korea

This paper proposes a novel 2D-to-3D conversion system based on visual attention analysis. The system was able to generate stereoscopic video from monocular video in a robust manner with no human intervention. According to an experiment, visual attention information can be used to provide a rich 3D experience even when depth cues from the monocular view are not sufficient. Using the algorithm introduced in the paper, 3D display users can watch 2D media in 3D. In addition, the algorithm can be embedded into 3D displays in order to deliver a better viewing experience

with more immersive feeling. As far as the researchers know, this work is the first attempt to use visual attention information to create a 3D effect. Visual attention has been studied for a long time in various research areas, including physiology, psychology, neural systems and computer vision. It has been shown in several different fields that the human nervous and cognitive systems focus more on interesting objects/regions than on plain objects/regions when watching a scene. These interesting objects/regions are called salient objects/regions. Some examples are shown in the figure.

Salient objects enclosed in red rectangles


3D and Interactivity at SIGGRAPH 2010 by Michael Starks

After graduate work in cell physiology at UC Berkeley, Michael Starks began studying stereoscopy in 1973, and co-founded StereoGraphics Corp (now Real D) in 1979. He was involved in all aspects of R&D, including prototype 3D videogames for the Atari and Amiga and the first versions of what evolved into CrystalEyes LCD shutter glasses, the standard for professional stereo, and is co-patentee on their first 3DTV system. In 1985 he was responsible for starting a project at UME Corp which eventually resulted in the Mattel PowerGlove, the first consumer VR system. In 1989 he started 3DTV Corp. In 1990 he began work on “Solidizing” – a realtime process for converting 2D video into 3D. In 1992 3DTV created the first full-color stereoscopic CDROM (“3D Magic”), including games for the PC with shutter glasses. In 2007 companies to whom 3DTV supplied technology and consulting produced theatrical 3D shutter glasses viewing systems, which were introduced worldwide in 2008. Starks has been a member of SMPTE, SID, SPIE and IEEE and has published in Proc. SPIE, Stereoscopy, American Cinematographer and Archives of Biochemistry and Biophysics. The SPIE symposia on 3D Imaging seem to have originated from his suggestion to John Merritt at a San Diego SPIE meeting some 20 years ago. Michael more or less retired in 1998 and lives in China, where he raises goldfish and is researching a book on the philosophy of Wittgenstein. http://www.3dtv.jp

Every summer the ACM special interest group in computer graphics holds its annual USA conference and exhibition (there are others in Asia and Europe). It’s a wonderful high-tech circus, ranging from advanced technical papers to playful interactive art. In the evenings the winners of the juried animations are projected, and of course many are now in 3D. Both the conference proceedings and most of the animations are available in books and DVDs, so I will only cover stereoscopic-related offerings in the exhibition halls, with a sampling of student projects, interactive art, and some poster papers.

The Canon Viewer with Polhemus tracker in the center and Virtual Photocopier simulation (screen at left), which imposes the stereo CGI on the real machine for interactive training. This type of see-through HMD projects the CGI on the real object with precise alignment, thanks to real-time object detection via twin cameras and a gyro in the HMD. This system thus has more rapid response and better registration than previously available. http://www.canon-its.co.jp is opaque if you don’t read Japanese, but you can email [email protected] or [email protected] and watch http://www.youtube.com/watch?v=o2NIX7DNpvk (skip the political short). The VH-2007 is a prototype running on a dual Xeon X5570 with an NVIDIA Quadro 4800.

Another app for the Canon Mixed Reality HMD – animating a dinosaur skeleton (center) in real-time. In this case attendees see the dinosaur skeleton and then interact with a stereoscopic CGI simulation. http://www.youtube.com/watch?v=i2RqDTYYoFc, http://www.youtube.com/watch?v=xwIzRIasXto


The Canon Mixed Reality system and dual Polhemus FastTrak trackers controlling the virtual Shuriken blades being used to play Hyak-Ki Men – the Anti-Ogre Ninja’s Mask – a videogame created by a team at Prof. Ohshima’s lab at Ritsumeikan University in Kyoto. [email protected]

Philipp Bell of WorldViz – Santa Barbara, California http://www.worldviz.com – showed their VR world-building software with a $25K NVision HMD and 3D joystick. http://www.youtube.com/watch?v=MIppOTeHEBc http://www.youtube.com/watch?v=TnlGUl_P6Y4

Nemer Velasquez of CyberGlove Systems http://www.cyberglovesystems.com with their highly sophisticated glove and a stereo HMD. http://www.youtube.com/watch?v=WDad5_dnRFg&feature=related

CEO and Professor at Chonbuk University Ha Dong Kim (left) and Gyu Tae Hwang of CG Wave of Seoul Korea http://cgwave.mir9.co.kr/index_en.html with their Augmented Reality system combining stereo CGI with 2D (soon 3D) video windows. Real-time UGC (User Generated Content) in a stereoscopic VR environment. You can download a trial version of the Wave 3D VR authoring system at http://cgwave.dothome.co.kr/renew/download.htm

The irrepressible Ramesh Raskar of the MIT Medialab had his finger in many pies here including the Slow Displays (see below) and several papers. One of special interest to autostereo fans is “Content Adaptive Parallax Barriers for Auto-multiscopic 3D Display” which explains how to dynamically alter both the front and back panels of such displays to get optimal refresh rate and brightness and resolution for the given content. In his paper with lead author Daniel Lanman and three others he says “We prove that any 4D lightfield created by dual-stacked LCD’s is the tensor product of two 2D mask functions. Thus a pair of 2D masks only achieves a rank-1 approximation of a 4D light field. We demonstrate higher rank approximations using temporal multiplexing… here a high speed LCD sequentially displays a series of translated barriers. If the completed mask set is displayed faster than the flicker fusion threshold, no spatial resolution loss will be perceived….Unlike conventional barriers, we allow a flexible field of view tuned to


one or more viewers by specifying elements of the weight matrix W. General 4D light fields are handled by reordering them as 2D matrices, whereas 2D masks are reordered as vectors.” So this just might be a big step forward for autostereo. You can buy the paper online or the video http://siggraphencore.myshopify.com/products/2010-tl042. Below is a SIGGRAPH handout with some projects at MIT.
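The rank argument can be checked numerically in a few lines. Here a “light field” matrix is formed as the outer (tensor) product of a back-panel mask vector and a front-panel mask vector; one frame is rank 1, while temporally multiplexing three frames yields rank 3. The vector sizes and random masks are illustrative, not the paper’s data.

```python
import numpy as np

rng = np.random.default_rng(0)

# One barrier frame: the transmitted light field is the outer (tensor)
# product of the back-panel and front-panel mask vectors -> rank 1.
one_frame = np.outer(rng.random(8), rng.random(8))

# Temporal multiplexing: the eye averages several mask pairs shown
# faster than the flicker fusion threshold, so the perceived light
# field is a sum of rank-1 terms -> higher rank.
three_frames = sum(np.outer(rng.random(8), rng.random(8)) for _ in range(3))
```

This is the sense in which sequentially displayed mask pairs approximate a general (higher-rank) 4D light field better than any single static barrier can.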


One of the demos showed a new twist on the structured-light approach to depth mapping, using a DLP projector and cameras that analyzed the deformations of the projected straight lines.

Jan Kjallstrom (left) and Mats Johansson of Eon Reality http://www.eonreality.com with CGI on a DLP projector viewed with XpanD DLP Link glasses. The glasses performed very well (i.e., up to 20m away) in this context but not so well in others (see “The Ten Sins of DLP Link Glasses” in the FAQ on my page http://www.3dtv.jp). An Eon user in Beijing did the 8-view interactive graphics for the NewSight multiview panels, which were promoted by 3DTV Corp in Asia for several years. Eon develops interactive 3D visual content management solutions as well as its renderers and other CGI apps. In the right photo is Simon Inwood of Autodesk Canada, who showed their software in 3D on a Mitsubishi 3D Ready DLP TV with shutter glasses, running real-time and interactive, with a virtual video camera (i.e., you could put the tracker on your finger just as well) using the well-known Intersense tracker http://www.intersense.com/.

Chris Ward of Lightspeed Design with the small version of the DepthQ Modulator – now a serious competitor for the Real D XL system. See http://www.depthq.com or my “3D at Infocomm” report for more info. It is made by http://www.lctecdisplays.com/. Touch Light Through the Leaves – a Tactile Display for Light and Shadow – is a Braille-like opto-mechanical device that converts shapes and shadows into pressure on your palm. http://www.cyber.t.u-tokyo.ac.jp/~kuni/ and http://www.youtube.com/watch?v=x8jDUX7fsDg, http://www.youtube.com/watch?v=UqiShlpnBjw


Glowing Pathfinder Bugs by Anthony Rowe (Oslo School of Architecture and Design/Squidsoup) http://www.youtube.com/watch?v=DBU65ilhcWM was commissioned by Folly http://www.folly.co.uk/ and made by Squidsoup http://www.squidsoup.org/blog/ for PortablePixelPlayground http://www.portablepixelplayground.org. Projected images of depth-sensitive virtual animals seek the bottom of a sandbox. This is a demonstration of a rapidly growing field called “Appropriated Interaction Surfaces” where images are projected on the hands, floor, sidewalk, car windows etc.

Another such AIS demo was the Smart Laser Projector by Alvaro Cassinelli and colleagues, which combines a lidar beam with a projection beam for Augmented Reality. http://www.youtube.com/watch?v=B6kzu5GFhfg. It does not require calibration and can detect (and so interact with) objects such as fingers above the surface. A second demo was a two-axis MEMS mirror that can perform edge enhancement and reveal polarization or fluorescence of printed matter with perfect registration in real-time. Putative apps include dermatology (cancer cell detection and smart phototherapy), nondestructive testing and object authentication. http://www.k2.t.u-tokyo.ac.jp/perception/SLP. You can find a nice article, including some very different approaches, in the June 2010 issue of IEEE Computer for $19, or a free abstract here: http://www.computer.org/portal/web/search/simple.

The Smart Laser Projector http://www.youtube.com/watch?v=JWqgBRMkmPg enhances and transforms text, images, surfaces or objects in real-time. It has also been used to track, scan and irradiate live protozoa and sperm. Potential apps are limited only by the imagination – e.g., why not a tactile or auditory output for the vision impaired? For a related device from the same lab see e.g. this YouTube video http://www.youtube.com/user/IshikawaLab#p/u/6/Ow_RISC2S0A.

In the Line of Sight by Daniel Sauter and Fabian Winkler http://www.youtube.com/watch?v=uT8yDiB1of8 uses 100 computer-controlled flashlights to project low-resolution video of human motion on the wall, in a 10 by 10 matrix representation (the video is shown on an adjacent monitor).

For more photos and info on exhibits in the Touchpoint interactive art gallery see http://www.siggraph.org/s2010/for_attendees/art_gallery


Lauren McCarthy of UCLA in this photo from her page http://lauren-mccarthy.com/projects.html wearing her Happiness Hat http://www.youtube.com/watch?v=y_umsd5FP5Y . Her exhibit “’Tools for Improved Social Interacting’ is a set of three wearable devices that use sensors and feedback to condition the behavior of the wearer to better adapt to accepted social behaviors. The Happiness Hat trains the wearer to smile more. An enclosed bend sensor attaches to the cheek and measures smile size, affecting an attached servo with metal spike. The smaller the smile of the wearer, the further a spike is driven into the back of their neck. The Body Contact Training Suit requires the wearer to maintain frequent body contact with another person in order to hear normally; if he or she stops touching someone for too long, static noise begins to play through headphones sewn into the hood. A capacitance sensing circuit measures skin-to-skin body contact via a metal bracelet sewn into the sleeve. The Anti-Daydreaming Device is a scarf with a heat radiation sensor that detects if the wearer is engaged in conversation with another person. During conversation, the scarf vibrates periodically to remind the wearer to stop daydreaming and pay attention.”

Yan Jin of 3DTV Corp petting the irresistible ADB (After Deep Blue – a reference to IBM’s famous world-champion chess computer), a touch-responsive toy that pets you back, by Nicholas Stedman and Kerry Segal http://www.youtube.com/watch?v=pcEGh03ADyI. It also defends itself when hurt. “ADB is composed of a series of identical modules that are connected by mechanical joints. Each module contains a servo motor, a variety of sensors, including capacitive touch sensors, a rotary encoder, and a current sensor to provide information about the relationship to a person’s body. The electronics are enclosed within plastic shells fabricated on 3D printers”. No barking, no vet bills, no fleas, and it never bites the neighbor’s kid – just add some appendages and a talking head and it’s a certain billion-dollar market for somebody. http://www.youtube.com/watch?v=BXVAVHGgWoM

This image of a human abdomen in “The Lightness of Your Touch” by Henry Kauffman responds to touch by moving, and your hands leave impressions which lift off and move around. http://www.youtube.com/watch?v=uWlZ9yJI_sQ

Tachilab showed another system with fingertip capacitance-sensed control of patterns: http://tachilab.org/. A longtime researcher in VR-related technology, Professor Susumu Tachi is now working at both Keio University and the University of Tokyo. You can see the camouflage suit, 3D digital pen and other wonders in action at http://www.youtube.com/user/tachilab

Empire of Sleep: The Beach by Alan Price lets you take virtual photos of the stereoscopic animation, which causes it to zoom in on the subject of the photo. http://www.youtube.com/watch?v=SRMQJZCebz4


Dr. Patrick Baudisch and his team from the Hasso Plattner Institute created Lumino, a virtual structural engineer that uses a $12K table by Microsoft and fiber-optic blocks for interactivity. http://www.hpi.uni-potsdam.de/baudisch/projekte/lumino.html. The fiber optics transmit your finger image to cameras below the surface. The YouTube video is here: http://www.youtube.com/watch?v=tyBbLqViX7g&NR=1. The paper is free here: http://www.patrickbaudisch.com/publications/2010-Baudisch-CHI10-Lumino.pdf.

Hanahanahana, by Yasuaki Kakehi, Motoshi Chikamori, and Kyoko Kunoh of Keio University, is an interactive sculpture that alters its flowers’ transparency in response to different odors provided by scented pieces of paper offered by users. http://vimeo.com/15092350 http://muse.jhu.edu/journals/leonardo/summary/v043/43.4.kakehi.html

A dual-viewpoint rear-projected 3D tactile table with position-tracked shutter glasses by the French VR company Immersion http://www.immersion.fr. Two users each get their own 3D viewpoint, and parallax changes in response to their hand positions. You can get a nice summary of this and the other Emerging Technologies exhibits at http://www.siggraph.org/resources/international/podcasts/s2010/english/emerging-technologies/text-summary.

Also in the Emerging Technologies area was the famous robot Acroban from the French research institute Inria http://flowers.inria.fr/media.php, shown in videos on their page and on YouTube; a good one is on principal inventor Pierre-Yves Oudeyer’s page http://www.pyoudeyer.com/languageAcquisition.htm. Don’t miss this one of the Playground Experiment, in which the robot is used to investigate robotic curiosity – i.e., self-organization of language and behavior, which is his principal interest http://www.pyoudeyer.com/playgroundExperiment.htm.

Roboticist and student of language development P-Y Oudeyer is pictured with Acroban in this still from his page. This is cutting-edge AI. Acroban is not only exceptionally responsive cognitively but also physically, as shown by its ability to move naturally and preserve balance via its complex “skeleton” and control software. See the many YouTube videos such as http://www.youtube.com/watch?v=wQ9xd4sqVx0


A robot society interacts with red LED “voices” in AirTiles from Kansei-Tsukuba Design; the YouTube video is here http://www.youtube.com/watch?v=d6Wyj73MIw. Its modules allow users to create geometric shapes and interact with them. http://www.ai.iit.tsukuba.ac.jp/research/airtiles

Echidna http://www.youtube.com/watch?v=rK7ZzZ7Z6kY by UK-based Tine Bech and Tom Frame hums happily until you touch it, when it begins squeaking. It was part of the Touchpoint gallery of interactive art; you can get podcasts and a PDF here http://www.siggraph.org/resources/international/podcasts/s2010/english/touchpoint/text-summary

“Matrix LED Unit With Pattern Drawing and Extensive Connection” lets users draw patterns with a light source such as a laser pointer. LEDs sense the light and display the pattern, which users can morph via a tilt sensor in each unit. Units can be tiled, and the morphing will then scroll across connected units, giving effects similar to the Game of Life. [email protected]; the YouTube video is here http://www.youtube.com/watch?v=YyQcEqvgz0M

“FuSA2 Touch Display” by a group from Osaka University http://www-human.ist.osaka-u.ac.jp/fusa2/ uses plastic optical fibers and a camera below the fibers to alter the colors of the touch-responsive display. [email protected]; YouTube here http://www.youtube.com/watch?v=RKa-Q24q35c

“QuintPixel: Multi-Primary Color Display Systems” adds sub-pixels to red, green, and blue (RGB) to reproduce over 99% of the colors in Pointer’s dataset (all colors except those from self-luminous objects). The comparison above shows the improved high-luminance reproduction of yellows due to the added yellow and cyan sub-pixels. Though QuintPixel adds sub-pixels, it does not enlarge the overall pixel area: by shrinking the sub-pixel areas, it balances high-luminance reproduction with real-surface color reproduction. Multi-primary-color displays are now appearing in Sharp TVs, where they can also produce “pseudo-super-resolution” and reduce the problem of angular color variation in LCD panels. Only 2D for now, but this video shows the 3D versions are coming soon and will be the next must-have for anyone with the money http://www.youtube.com/watch?v=09sM7Y0jZdI&feature=channel

“Beyond the Surface: Supporting 3D Interactions for Tabletop Systems” is best understood by viewing the YouTube video http://www.youtube.com/watch?v=eplfNE5Cvzw. A tabletop with an infrared (IR) projector and a regular projector simultaneously projects display content and invisible markers. Infrared-sensitive cameras in tablets or other devices localize objects above the tabletop, and programmable marker patterns refine object location. The iView tablet computer lets you view 3D content with 6 degrees of freedom from the perspective of the camera in the tablet, while the iLamp is a projector/camera that looks like a desk lamp and projects high-resolution content on the surface; iFlashlight is a mobile version of iLamp. The SIGGRAPH paper from the group at National Taiwan University is here

http://portal.acm.org/citation.cfm?id=1836829

“Colorful Touch Palette” http://www.youtube.com/watch?v=UD3-FIbesvY, by a group from Keio University and the University of Tokyo [email protected], uses electrode arrays in fingertip covers to enable tactile sensations while painting on a PC monitor. It has three principal advances over previous force-sensitive systems: it gives degrees of roughness by controlling the intensity of each electrode in the fingertip array; it increases spatial resolution by changing the stimulus points faster than the fingertip moves, providing tactile feedback that changes with finger position and velocity; and it combines pressure and vibration for feedback of blended tactile textures.

From the University of Tsukuba’s Hoshino Lab comes Gesture-World Technology http://www.kz.tsukuba.ac.jp/~hoshino/. It achieves highly accurate non-contact hand and finger tracking using high-speed cameras for any arbitrary user by compiling a large database including bone thickness and length, joint movement ranges, and finger movements. This reduces the dimensionality to 64 or less (1/25th of the original image features), a huge advance in this art. Apps are endless but include interaction in a virtual world, video games, robotics, and virtual surgery. You can get the YouTube video here http://www.youtube.com/watch?v=ivmrBsU_XUo
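The article does not say which reduction method the lab uses; as a hypothetical illustration only, projecting high-dimensional image features onto their top principal components (PCA) is the standard way to obtain a compact 64-dimensional representation like the one described (the function name and `k` parameter are invented for the sketch):

```python
import numpy as np

def pca_project(features, k=64):
    """Project feature vectors onto their top-k principal components.

    features: (n_samples, n_dims) array of image features.
    Returns an (n_samples, k) array of low-dimensional coordinates.
    """
    # Center the data, then take right singular vectors of the
    # centered matrix; these are the principal axes.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Matching against a pose database then happens in this small 64-dimensional space instead of the raw image-feature space, which is what makes real-time lookup feasible.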

“Haptic Canvas: Dilatant Fluid-Based Haptic Interaction” is a novel haptic interaction that results from wearing a glove filled with a special fluid that is subjected to sucking, pumping, and filtering, which changes the state of the dilatant fluid from more liquid to more solid. The gloved hand is immersed in a shallow pool of water with starch added to block the view. The shear force between particles at the bottom of the pool and partially solid particles inside the rubber glove changes with hand movement.

Varying what they term the three “Haptic Primary Colors” (the RGB dots in the pool) of “stickiness”, “hardness”, and “roughness” sensations allows the user to create new “Haptic Colors”. More info here http://hapticcanvas.bpe.es.osaka-u.ac.jp/; the paper by this team from Osaka University is here http://portal.acm.org/citation.cfm?id=1836821.1836834 and the YouTube video is here http://www.youtube.com/watch?v=eu9Za4JSvNk.

“Slow Display” by Daniel Saakes and colleagues from MIT http://www.slowdisplay.com (Vimeo here http://vimeo.com/13505605) is a high-resolution, low-energy, very-low-frame-rate display that uses a laser to activate monostable or bistable light-reactive materials with variable persistence and/or reflectivity. The resolution of the display is limited by laser speed and spot size. Projection surfaces can consist of complex 3D materials, allowing objects to become low-energy, ubiquitous peripheral displays (another example of appropriated interaction surfaces). Among the display possibilities are arbitrarily shaped super-high-resolution displays, low-power reflective outdoor signage, dual day/night displays (see above photo), temporary projected decals, printing, advertising, and emergency signs. It could be done in 3D with polarization, anaglyph, or shutter glasses.

“RePro3D: Full-Parallax 3D Display Using Retro-Reflective Projection Technology” uses the old technology of retro-reflective screens in a new way to produce a full-parallax 3D display when looking at dual projected images through a semi-silvered mirror. http://www.youtube.com/watch?v=T-0OrMtlROY. Within a limited horizontal area the exit pupils are narrower than our inter-ocular distance, enabling glasses-free stereo. Until recently the most common use of these screens has been high-brightness backgrounds in movie and video production. The screens can be of arbitrary shape without image warping and can be touch-sensitive or otherwise interactive as shown above, or even moving. An infrared camera tracks the hand for manipulation of 3D objects. Smooth motion parallax is achieved via 40 projection lenses and a high-luminance LCD. http://tachilab.org/

“Shaboned Display: An Interactive Substantial Display Using Soap Bubbles” controls the size and shape of soap-bubble pixels to create an interactive display with sound. Sensors detect bubble characteristics and hand gestures or air movements, and the system can replace and break bubbles as desired. http://www.xlab.sfc.keio.ac.jp/


A group from the University of Tokyo (including Alvaro Cassinelli who also did the Smart Laser Projector above) demonstrated typing without keyboards http://www.youtube.com/watch?v=jRhpC5LiBxI. More info here http://www.k2.t.u-tokyo.ac.jp/vision/typing_system/index-e.html

“Head-Mounted Photometric Stereo for Performance Capture” by a group from the USC Institute for Creative Technologies http://gl.ict.usc.edu/Research/HeadCam/ updates a very well known technique for capturing depth by using a head-mounted rig with polarized white-light LEDs and a Point Grey Flea3 camera (see below) to capture three different lighting conditions at 30 fps each, so that subtle facial structure and movements can be used as input to facial simulation hardware or software.
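The well-known technique in question is classical photometric stereo: under a Lambertian assumption, three images lit from three known directions determine a surface normal and albedo at every pixel. A minimal sketch (ignoring the polarization and head-mounting details of the USC rig; the function name is invented):

```python
import numpy as np

def photometric_stereo(L, I):
    """Classical three-light Lambertian photometric stereo.

    L: (3, 3) matrix, one known unit light direction per row.
    I: (3, H, W) stack of intensities, one image per light.
    Returns (normals, albedo): normals is (3, H, W), albedo is (H, W).
    """
    h, w = I.shape[1:]
    # Per pixel, intensity = L @ (albedo * normal); solve for that vector.
    G = np.linalg.solve(L, I.reshape(3, -1))          # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                # vector length = albedo
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

Integrating the recovered normal field then yields the subtle depth variation (pores, wrinkles) that drives the facial simulation.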

“beacon 2+: Networked Socio-Musical Interaction” allows people to collaborate to generate sounds and play music with their feet. A musical interface (beacon) uses laser beams to change pitch and duration when they contact a foot. Multiple beacons can be networked, so distant performers can interact in real-time via the web. http://www.ai.iit.tsukuba.ac.jp/research/beacon

Thierry Henkinet of Volfoni http://www.volfoni.com, Michael Starks of 3DTV Corp, Jerome Testut of Volfoni, and Ethan Schur of stereo codec developer TDVision Systems discuss Ethan’s forthcoming book on stereoscopic video.


Volfoni, formerly one of XpanD’s largest dealers, has recently made their own 3D cinema shutter-glasses system in direct competition with XpanD. I predicted such an event in my InfoComm article only a month ago, and it seems the end is even nearer for XpanD than I thought. XpanD’s pi-cell glasses are more or less obsolete tech, and their nonstandard battery and somewhat clumsy design, coupled with high prices and a bad attitude, made them an easy target. However, the Chinese are not stupid and several companies there have already started selling shutter-glasses cinema systems, so it is not clear who will dominate.

Ubiquitous paper glasses manufacturer APO enhanced their always classy booth with a pair of CP monitors. Billions served! http://www.3dglassesonline.com/

“Non-Photorealistic Rendering in Stereoscopic 3D Visualization” by Daniel Tokunaga and his team from Interlab at the Escola Politécnica of USP (Universidade de São Paulo, where I helped install the first stereo-video operating microscope in the medical school almost 20 years ago). Get the paper from SIGGRAPH here http://portal.acm.org/citation.cfm?id=1836845.1836985. One of the aims is fast and frugal stereo CGI on low-cost PCs for education. YouTube here: http://www.youtube.com/watch?v=HiBOrcuNtcM.

The EyeTech MegaTracker http://www.eyetechds.com takes a similar approach to eyetracking, adding a tracking device to the monitor for remote non-contact tracking. A new version is due in October 2010. http://www.youtube.com/watch?v=TWK0u8nRW2o

Prof. Kyoji Matsushima http://www.laser.ee.kansai-u.ac.jp/matsu/ and a team from Osaka and Kansai Universities presented the world’s first ultra-high-resolution computer-generated holograms using fast wave-field rendering.


Each of the four sides of this display box is viewed with LCD shutter glasses. A project by a team from the lab of Professors Michitaka Hirose and Tomohiro Tanikawa at the University of Tokyo http://www.cyber.t.u-tokyo.ac.jp/. For an extremely cool related device, don’t miss the pCubee http://www.youtube.com/watch?v=xI4Kcw4uFgs&p=65E4E92216DEABE1&playnext=1&index=13


Visible interactive breadboarding by the multi-talented Yoichi Ochiai ([email protected]) http://96ochiai.ws/top.html of the University of Tsukuba. The Visible Electricity Device, or Visible Breadboard, is touch-sensitive and displays the voltage of every junction via the color and brightness of LEDs, and permits wiring by fingertip via solid-state relays. http://www.youtube.com/watch?v=nsL8t_pgPjs

“A New Multiplex Content Displaying System Compatible with Current 3D Projection Technology” by Akihiko Shirai (right) http://www.shirai.la/ and a team from the Kanagawa Institute of Technology. http://www.youtube.com/watch?v=RXUqIb7xXRc. The idea is to use dual-polarized 3D systems or shutter-glasses systems to multiplex two 2D images so people can watch two different programs, or two sets of subtitles, on the same screen. Passive polarized glasses for this have the same orientation in both eyes (RR or LL), while shutter glasses in a 120 Hz system have both lenses clear at 60 Hz on alternating frames for the two kinds of glasses (such glasses, called DualView, already exist for DLP Link monitors and projectors and are sold by Optoma).
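The scheduling trick is simply to put program A in the frame slots a 3D system reserves for the left eye and program B in the right-eye slots; each viewer's glasses then open both shutters only on "their" frames. A toy sketch of the interleaving (illustrative only, not the authors' implementation; the function name is invented):

```python
def multiplex(program_a, program_b):
    """Interleave two 60 Hz 2D frame sequences into one 120 Hz
    frame-sequential stream. Viewer A's glasses open both shutters on
    even frames, viewer B's on odd frames, so each sees one program."""
    stream = []
    for frame_a, frame_b in zip(program_a, program_b):
        stream.append(frame_a)  # slot normally used for the left eye
        stream.append(frame_b)  # slot normally used for the right eye
    return stream
```

With passive polarization the same idea holds spatially rather than temporally: both lenses of viewer A's glasses pass the "left" polarization and both of viewer B's pass the "right".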

Marcus Hammond, an Aero-Astro grad student at Stanford University, with “A Fluid Suspension, Electromagnetically Driven Eye with Video Capability for Animatronic Applications”. It is low-power and frictionless, and has a range of motion and saccade speeds exceeding those of the human eye. Saccades are the constant twitchings of our eyes (of which we are normally unaware). A stationary rear camera sees through the clear index-matching fluid of the eye from the back through the small entrance pupil and remains stationary during rotation of the eye. One signal can drive two eyes for stereo for objects at infinity, or converged from object-distance data as is commonly done now for stereo video cameras. The inner part of the eye is the only moving part and is neutrally buoyant in liquid. Due to its spherical symmetry it is the only lens used by the camera, and due to magnification by the outer sphere and liquid, the surface of the inner eye appears to be at the outside of the sphere. They imagine that a hermetically sealed version might be used as a human eye prosthesis, along with an extra-cranially mounted magnetic drive. Coauthored with Katie Bassett of Yale and Lanny Smoot of Disney.


The totally mobile Tobii eyetracker (photo above and yes this is all there is to it) http://www.tobiiglasses.com is a sensational new product which you can see in action in various videos on their page or here http://www.youtube.com/watch?v=6CdqLe9UgBs. They also have the original version embedded in monitors which you can see here http://www.vimeo.com/10345659 or on their page. http://www.tobii.com

Google Earth is widely used on the web (including stereo http://www.gearthblog.com/blog/archives/2009/03/stereo_3d_views_for_google_earth.html) and Philip Nemec shows how it is now adding 45 degree maps. Adding this angle of view greatly increases comprehensibility of the data. For Microsoft’s competing system see Bing Maps http://www.bing.com/maps/

Part of puppet pioneer Jim Henson’s legacy, the HDPS (Henson Digital Puppetry Studio) has dual handsticks with real-time stereo animation and RF wireless NVIDIA Pro shutter glasses. http://www.creatureshop.com. Headquartered in LA with branches in NYC and London. http://www.youtube.com/watch?v=m6Qdvvb1UTs

Andersson Technologies of Pennsylvania http://www.ssontech.com showed the latest version of their approx. $400 program SynthEyes which has, among its many capabilities, stereo motion tracking and stereorectification. It has been used on various scenes in Avatar such as the 3D holographic displays in the control room, the bio-lab, the holding cell, and for visor insertion. There are informative videos on their page and YouTubes at http://www.youtube.com/watch?v=C4XrnLrlu14&feature=related, http://www.youtube.com/watch?v=n-2p4HCyo2Y

Interactive CP polarized display comprising 10 JVC panels in the King Abdullah University of Science and Technology booth http://www.kaust.edu.sa was the brainchild of Andrew Prudhomme and colleagues, who use COVISE from HLRS http://www.hlrs.de/organization/av/vis/covise/ and Mac software to split the display over the panels using a Dell GeForce 285 cluster. The university http://www.calit2.net/newsroom/release.php?id=1599 has used its $10B endowment to establish one of the leading scientific visualization centers in the world. Some of its initial visualizations were developed by teams from the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego and the Electronic Visualization Laboratory (EVL) at the University of Illinois, where Andrew Prudhomme has worked. KAUST’s President, Choon Fong Shih, is former president of the National University of Singapore, and most of the 70 faculty and 400 students are foreign. There are numerous YouTube videos including http://www.youtube.com/watch?v=7i4EkINknMk


MetaCookie is a mixed-reality system in which an interactive virtual cookie is projected on a real one along with odors http://www.youtube.com/watch?v=si32CRVEvi4. Co-inventor Takuji Narumi [email protected] describes it as a “Pseudo-gustation system to change perceived taste of a cookie by overlaying visual and olfactory information onto a cookie with an AR marker.” Those interested might wish to attend DAP 3 (Devices that Alter Perception 3), held in conjunction with the IEEE Symposium on Mixed and Augmented Reality in Seoul, Korea on October 13th http://devices-alter.me/10

NVIDIA 3D Vision shutter glasses with RTT DeltaGen software on a 120 Hz Alienware LCD panel with cold cathode fluorescent backlight.

NVIDIA showed intranet or Internet real-time stereoscopic collaborative editing in Autodesk Maya using the NVIDIA 3D Vision Pro RF wireless glasses. However, the web version is subject to the usual lag and bandwidth limitations.

Another NVIDIA team shows 3D shutter glasses movie editing with Adobe and Cineform. For more info on Adobe and Cineform stereo see 3D at NAB 2010 and the detailed tutorials online including http://www.youtube.com/results?search_query=cineform+3d&aq=4

One section of their booth showed the newest NVIDIA mobile processor doing real-time 3D playback from an HP laptop in the NVIDIA-HP Innovation Zone. CXC Simulations http://www.cxcsimulations.com was showing their 3-screen MP2 simulator with custom-built PCs using NVIDIA cards and the $25K Corbeau racing chair. It gets top ratings from racecar drivers, some of whom own them. You can race 350 different cars on 750 tracks!


Andrew Page of NVIDIA’s Quadro Fermi team with the NVIDIA-developed RF wireless glasses used on a 120 Hz LCD monitor with Siemens syngo.fourSight Workplace medical imaging software showing a beating heart. The Quadro cards with the Fermi GPU cost about $1500, but the GPU is also present in their GTX 480 series cards for about $500.

Randy Martin shows Assimilate’s http://www.assimilateinc.com and http://www.youtube.com/watch?v=l1KGaMQ4hd4 Scratch 3D edit software running on a PNY NVIDIA Quadro card via a 3ality 3D Flex box which converts the image for line alternate display on an LG Xcanvas CP monitor. LG seems to have marketed these monitors only in Europe so far.

Video card maker ATI was always a distant second to NVIDIA in stereoscopic support, but since being acquired by AMD they have scurried to catch up. Here they show stereo support on the dual-FHD semi-silvered-mirror display from Planar. Cards such as the FirePro V8800 (ca. $1200) are way beyond video gaming unless you are a superpower user and, like the many NVIDIA Quadros, have the standard 3-pin mini-DIN VESA stereo plug for 3DTV Corp’s Universal Glasses Emitter, which can be used with seven different types of shutter glasses. Planar http://www.planar3d.com also had their own booth.

The Web3D Consortium of Menlo Park, CA, USA http://www.web3d.org was also present, seeking members (NASA, Schlumberger and Sun are a few of their current members) to develop the ISO X3D specifications for web-based 3D graphics. For one example of a real-time interactive app supporting multiple formats see http://www.3df33d.tv/, created by former NewSight CTO Keith Fredericks and colleagues of http://general3d.com/General3D/_.html; below are a few of the 3D videos you can stream with Firefox HTML5 from their page http://www.3df33d.tv/node/videos. On 10-10-10 they streamed live 3D from their offices in Germany – a world first for HTML5. They expect to soon support all types of displays and to derive revenue from advertising.

The alternative streamers – the newest 3DTVs, set-top boxes, and Blu-ray players – so far support only one or two formats in hardware and use high-bandwidth dual compressed images, whereas 3DF33D uses the DiBR (depth-image-based rendering) method http://iphome.hhi.de/fehn/Publications/fehn_EI2004.pdf, which is easy to modify and control via easily updated software and can accommodate real-time broadcast-quality graphics. It has huge advantages over other PC-based streamers such as the NVIDIA 3D Vision system or a live feed with a capture card and Wimmer’s software, in that it uses a normal PC with no special cards, drivers, or downloaded software. You go to any 3DF33D-compatible page and upload or download your content, or go interactive. And of course it is multiplatform and will have a robust 3D GUI in the browser.
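In DiBR, the second eye's view is synthesized from one image plus a per-pixel depth map by shifting each pixel horizontally in proportion to its disparity. A toy sketch of that warping step (illustrative only, not 3DF33D's code; the function name, z-buffer scheme, and `max_disparity` parameter are assumptions, and real systems must also in-paint the disocclusion holes):

```python
import numpy as np

def render_view(image, depth, max_disparity=16):
    """Warp a grayscale image into a second viewpoint using its depth map.

    image: (H, W) array; depth: (H, W) in [0, 1] with 1 = nearest.
    Nearer pixels win conflicts via a z-buffer; unwritten pixels are
    disocclusion holes, left at 0 here.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -1.0)                 # depth of pixel written so far
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]             # horizontal shift only
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    return out
```

The appeal for streaming is that the transmitted payload is one view plus a low-entropy depth channel, and the warp itself is cheap enough to run in software on an ordinary PC.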


However, I am not convinced that the monoscopic image plus depth data used in DiBR will retain the lustre, sparkle, texture, and shadows of a true dual compressed image, so I await a side-by-side demo. Of course it’s a relatively new codec, supported by e.g. the European ATTEST program, and will be developed continually. In any case it’s totally cool, dude, and will spread like wildfire! Think Facebook and YouTube together in 3D full-screen on any computer. Downsized versions for pads, pods, and phones to follow!

Masahito Enokido, Shinichiro Sato and Masataro Nishi (left to right) of the Lucent Pictures http://www.lpei.co.jp/en team showed some of their recent 3D film work (including their own 2D to 3D conversions) in the Japan based 3-D Consortium http://www.3dc.gr.jp booth.

Arcsoft (http://www.arcsoft.com.tw) showed the ability of their 3D Blu-ray PC playback software to give shutter or anaglyph display on a Samsung 120Hz LCD with the NVIDIA 3D Vision system. All the software Blu-ray players including PowerDVD and Roxio are starting to support 3D in multiple formats. Here’s a video of their 3D Blu-ray player in Japanese http://www.youtube.com/watch?v=bovhlMnufE8

Kiyoto Kanda, CEO of NewSight Japan http://www.newsightjapan.jp/ and http://www.youtube.com/watch?v=hhCzVqmfDR0 (in Japanese), with their 3D picture frame with content converted with 3D Magic software by two Japanese programmers. Although NewSight USA is no longer operational, Kanda-san is carrying on with his own line of autostereoscopic displays, including a made-in-Japan 70-inch model that is the world’s largest glasses-free flat panel http://www.youtube.com/watch?v=lGIX3YIKA0w. He also reps the giant LED autostereo outdoor panels made by TJ3D Corp in China.

Steve Crouch of Iridas http://www.iridas.com showing their 3D edit software in the Melrose Mac booth http://www.melrosemac.com. You can see him in action editing RED footage http://www.youtube.com/watch?v=3GtV3LNd4-s.


Blick of Korea showed a line of elegant active and passive glasses, but two months later their page http://www.blick-eyewear.com is still not working (you can try http://www.ogk.co.kr/eng/company/sub1.asp), and they have not responded to emails or voicemails, so with dozens of companies rushing into this market they will have to move faster.

Brendan Iribe CEO of Scaleform Corp http://www.scaleform.com showing their plug and play stereoscopic interface for 3D game designers. Their software includes Flash tweening and actionscript extensions. The 2D version has been used in over 700 games http://www.youtube.com/watch?v=zKDuzVbi50Q, and is being prepped for phones and tablets http://www.youtube.com/watch?v=amkwCBAqN6s

I have followed Canadian company Point Grey’s stereoscopic vision products since their first model in 1998 and they have now expanded greatly. Here Renata Sprencz demos the Bumblebee 2 machine vision camera. Some of their cams have 3 lenses for more accurate data with a wider choice of subjects. http://www.ptgrey.com. Among their numerous YouTubes is one of their spherical (360 deg) Ladybug 3 camera http://www.youtube.com/watch?v=FQaKwYRouyI.

Point Grey Bumblebee 2 which you can see a bit more about here http://www.youtube.com/watch?v=ZGujKSUAxDU

There were many MoCap (real-time motion capture) systems at the show, and XSENS http://www.xsens.com had one of the largest booths. In addition to MoCap http://www.youtube.com/watch?v=JeGflcAW_-g&feature=related and http://www.youtube.com/watch?v=TNkkLBkBSrw&feature=related, a single sensor can be used for interactive graphics http://www.youtube.com/watch?v=qM0IdPcuuxw

NaturalPoint’s OptiTrack MoCap system uses cameras and glowing light balls. The $2K Expression facial MoCap can also be used for real-time control of animations or robotics. They also make TrackIR for viewpoint control in video games and other CGI apps. http://www.youtube.com/watch?v=_AO0F5sLdVM&feature=related and http://www.naturalpoint.com.

Dr. Howard Taub of Tandent, http://www.tandentvision.com, was showing a revolutionary face recognition system which uses COTS cameras and uncontrolled lighting http://www.tandentvision.com/site/images/SIGGRAPH%20-%20PR%20(Face).pdf. You may not have heard of them before, but you will again, since it should now be feasible to ID people standing at airport security checkpoints or driving through a toll booth.

Naoya Eguchi naoya.eguchi@jp..com showed Sony’s RayModeler, a spinning LED screen which makes a volumetric display (now commonly termed a “light field display”) controlled by a PlayStation joystick. For some of the many previous manifestations of this well-traveled concept see e.g. the SIT article on the 3DTV Corp page http://www.3dtv.jp/articles/sit.html. A common problem has been that inappropriate pixels (e.g., from the other side of the object) can be seen, but this did not seem to be an issue here (probably due to the microsecond switching of LEDs), and some images of real persons were also presented (i.e., 360-degree video). “Light field display” means it approximates the light reflected from a real-world 3D object with photons originating from a volume; this term overlaps with the conventional 3D display term “volumetric”. For nice videos showing related displays see http://vodpod.com/watch/844164-research-interactive-360-light-field-display and http://www.youtube.com/watch?v=FF1vFTQOWN4&p=65E4E92216DEABE1&index=15&feature=BF.


So-called light field or plenoptic multi-lens cameras, which take simultaneous multiple images of a scene in order to have everything in focus (each lens view can be selected later by software), should reach the consumer market soon. I give some references on plenoptic imaging in my article on Stereo Camera Geometry http://www.3dtv.jp/. The ability of such cameras to provide 3D images is a free byproduct.
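The "focus later in software" step is usually synthetic refocusing: shift each sub-aperture image in proportion to its lens offset and average, so that objects on one chosen depth plane line up (sharp) while everything else smears (defocus blur). A hedged sketch, assuming an array of sub-aperture views is already extracted (the function and parameter names are invented):

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Synthetic refocus for a plenoptic camera.

    subviews: list of (H, W) sub-aperture images.
    offsets:  list of (dy, dx) lens positions relative to the array center.
    alpha:    refocus parameter; 0 keeps the nominal focal plane, other
              values shift the in-focus plane nearer or farther.
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (dy, dx) in zip(subviews, offsets):
        # Shift each view toward alignment at the chosen depth plane.
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(subviews)
```

The same per-view shifts are, in effect, disparities, which is why depth (and hence 3D) falls out of the data for free.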

Also in the Emerging Tech gallery was Stephen Hart of HoloRad, http://www.holorad.com, of Salt Lake City with an 8 frame holomovie -- each position having 42 depth planes and its own green laser at the end of the bars shown. They are doing R&D in collaboration with Disney and you can find their paper here http://portal.acm.org/citation.cfm?id=1836821.1836827. This is one of 3 exhibits of what they term “interactive zoetropes” after the 200 year old picture animation devices.

Paul Craig of 3D Rapid Prototyping http://www.3drp.com distributes 5 models of the ZScanner http://www.zcorp.com ($12K for Model 700) which takes “laser snapshots” to create solid models that can be made with any CNC device such as the Roland in the next photo which they sell for $8K. http://www.youtube.com/watch?v=6CdqLe9UgBs

4D Dynamics’ http://www.4ddynamics.com new PicoScan model-capture system costs $2K, but they have full-body-scanning Pro versions for up to $120K.


The $8k Roland Milling Machine carves a plastic model from an image captured by the Z-Scanner http://www.youtube.com/watch?v=Yir7T165RcY

Shapeways, http://www.shapeways.com, of Eindhoven lets you upload your design and make a solid model of it from a variety of materials for about half the usual cost.

The $395 MakerBot http://www.makerbot.com melts plastic from the flexible blue rods (filament) to build up the model layer by layer; among the numerous videos is http://www.youtube.com/watch?v=Hzm5dkuOAgM

The $695 DIYLILCNC, http://www.diylilcnc.org, carves wood or plastic into models, but most interesting is that you can download the open-source plans and build your own; its Creative Commons license lets you tweak and redistribute it.


It will be awesome if they don’t screw it up

3D Printing, Intellectual Property, and the Fight over the Next Great Disruptive Technology

by Michael Weinberg

Michael Weinberg joined Public Knowledge as a full-time Staff Attorney after two years as a part-time Law Clerk and Student Intern. Although he is involved in a wide range of issues at Public Knowledge, he focuses primarily on copyright and issues before the FCC. Michael received his J.D. from The George Washington University Law School where he was awarded the ABA-BNA Award for Excellence in the Study of Intellectual Property Law. Prior to GW he worked in New Delhi and Beijing, and received a B.A. with honors in History and Government from Claremont McKenna College. The article is reprinted with permission from Public Knowledge and the author. http://www.publicknowledge.org

An Opportunity, and a Warning: The next great technological disruption is brewing just out of sight. In small workshops, and faceless office parks, and garages, and basements, revolutionaries are tinkering with machines that can turn digital bits into physical atoms. The machines can download plans for a wrench from the Internet and print out a real, working wrench. Users design their own jewelry, gears, brackets, and toys with a computer program, and use their machines to create real jewelry, gears, brackets, and toys.

These machines, generically known as 3D printers, are not imported from the future or the stuff of science fiction. Home versions, imperfect but real, can be had for around $1,000. Every day they get better, and move closer to the mainstream.

In many ways, today’s 3D printing community resembles the personal computing community of the early 1990s. They are a relatively small, technically proficient group, all intrigued by the potential of a great new technology. They tinker with their machines, share their discoveries and creations, and are more focused on what is possible than on what happens after they achieve it. They also benefit from following the personal computer revolution: the connective power of the Internet lets them share, innovate, and communicate much faster than the Homebrew Computer Club could have ever imagined.

The personal computer revolution also casts light on some potential pitfalls that may be in store for the growth of 3D printing. When entrenched interests began to understand just how disruptive personal computing could be (especially massively networked personal computing), they organized in Washington, D.C. to protect their incumbent power. Rallying under the banner of combating piracy and theft, these interests pushed through laws like the Digital Millennium Copyright Act (DMCA) that made it harder to use computers in new and innovative ways. In response, the general public learned once-obscure terms like “fair use” and worked hard to defend their ability to discuss, create, and innovate. Unfortunately, this great public awakening came after Congress had already passed its restrictive laws.

Of course, computers were not the first time that incumbents welcomed new technologies by attempting to restrict them. The arrival of the printing press resulted in new censorship and licensing laws designed to slow the spread of information. The music industry claimed that home taping would destroy it. And, perhaps most memorably, the movie industry compared the VCR to the Boston Strangler preying on a woman home alone.

One of the goals of this whitepaper is to prepare the 3D printing community, and the public at large, before incumbents try to cripple 3D printing with restrictive intellectual property laws. By understanding how intellectual property law relates to 3D printing, and how changes might impact 3D printing’s future, this time we will be ready when incumbents come calling to Congress.


Figure 1: 3D printers can create ball bearings in a single print. Image from Thingiverse user RayRaywasHere

Figure 2: RepRap is an open source desktop 3D printer capable of replicating itself by printing all of the plastic parts necessary to build one. http://www.reprap.org

Figure 3: MakerBot Industries’ Cupcake 3D printer is an open source 3D printer. It cannot reproduce itself, but it can create the parts necessary to build a RepRap. Image from MakerGear

3D Printing: So what is 3D printing? Essentially, a 3D printer is a machine that can turn a blueprint into a physical object. Feed it a design for a wrench, and it produces a physical, working wrench. Scan a coffee mug with a 3D scanner, send the file to the printer, and produce thousands of identical mugs.

While even today there are a number of competing designs for 3D printers, most work in the same general way. Instead of taking a block of material and cutting away until it produces an object, a 3D printer actually builds the object up from tiny bits of material, layer by layer. Among other advantages, this allows a 3D printer to create structures that would be impossible if the designer needed to find a way to insert a cutting tool into a solid block of material. It also allows a 3D printer to form general-purpose material into a wide variety of objects.
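The layer-by-layer approach can be sketched in a few lines of code. The snippet below is a toy illustration, not any real printer’s firmware: it “slices” a triangle mesh at fixed z-heights, the same first step a real slicing program performs before generating tool paths. The mesh representation and layer height are assumptions made for the example.

```python
# Toy sketch of the "slicing" step behind layer-by-layer 3D printing.
# A mesh is cut into horizontal layers; each layer becomes one pass of
# the print head. The layer height is a typical hobbyist value.

LAYER_HEIGHT = 0.3  # millimetres per layer (illustrative assumption)

def slice_mesh(triangles, layer_height=LAYER_HEIGHT):
    """Return, for each layer z-height, the triangles that plane crosses.

    `triangles` is a list of triangles, each a tuple of three (x, y, z)
    vertices. A real slicer would also compute intersection polygons and
    tool paths; counting plane crossings is enough to show the idea.
    """
    zs = [v[2] for tri in triangles for v in tri]
    z, z_top = min(zs), max(zs)
    layers = []
    while z <= z_top:
        crossing = [tri for tri in triangles
                    if min(v[2] for v in tri) <= z <= max(v[2] for v in tri)]
        layers.append((round(z, 6), crossing))
        z += layer_height
    return layers

# A single upright triangle, 1 mm tall, sliced into four layers:
tri = [((0, 0, 0.0), (1, 0, 0.0), (0.5, 0, 1.0))]
for z, hits in slice_mesh(tri):
    print(f"layer at z={z}: {len(hits)} triangle(s) crossed")
```

In a real printer the layers are then traced in molten plastic, with each pass fusing to the one below it, which is why internal voids and pre-assembled moving parts pose no special difficulty.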


Because they create objects by building them up layer-by-layer, 3D printers can create objects with internal, movable parts. Instead of having to print individual parts and have a person assemble them, a 3D printer can print the object already assembled. Of course, a 3D printer can also print individual parts or replacement parts. In fact, some 3D printers can print a substantial number of their own parts, essentially allowing them to self-replicate.

3D printing starts with a blueprint, usually one created with a computer aided design (CAD) program running on a desktop computer. This is a virtual 3D model of an object. CAD programs are widely used today by designers, engineers, and architects to imagine physical objects before they are created in the real world.

The CAD design process replaces the need to build physical prototypes out of malleable material such as clay or styrofoam. A designer uses the CAD program to create the model, which is then saved as a file. Much as a word processor is superior to a typewriter because it allows a writer to add, delete, and edit text freely, a CAD program allows a designer to manipulate a design as she sees fit.

Figure 4: CAD programs range in price from thousands of dollars for proprietary versions made by companies such as Autodesk to this free and open source program called Blender. Image from Flickr

Alternatively, a 3D scanner can create a CAD design by scanning an existing object. Just as a flatbed scanner can create a digital file of a drawing on a piece of paper, a 3D scanner can create a digital file of a physical object. No matter how it is created, once the CAD design exists it can be widely distributed just like any other computer file. One person can create a new object, email the design to his friend across the country, and the friend can print out an identical object.
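The claim that a design can be distributed “just like any other computer file” is easy to see from the inside: ASCII STL, a common exchange format for printable models, is plain structured text. The sketch below writes a single made-up triangle to a file; the filename and geometry are invented for illustration.

```python
# A CAD design, once exported, is just a file. ASCII STL is plain text
# listing triangular facets, so a model can be emailed, posted online,
# or copied like any document. The triangle below is a made-up example.

def write_ascii_stl(path, facets, name="model"):
    """facets: list of ((nx, ny, nz), [three (x, y, z) vertices])."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for (nx, ny, nz), verts in facets:
            f.write(f"  facet normal {nx} {ny} {nz}\n")
            f.write("    outer loop\n")
            for x, y, z in verts:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One upright triangle with its normal pointing along +y:
write_ascii_stl("triangle.stl", [
    ((0.0, 1.0, 0.0), [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.0, 1.0)]),
])
```

Because the format carries nothing but geometry, the same file prints the same object on any compatible machine, which is precisely what makes designs so easy to share and so hard to control.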

3D Printing in Action: The mechanics of 3D printing are all well and good, but what can it actually be used for? This is a hard question to answer comprehensively. If in 1992, after describing the idea of computer networking, someone asked you what it could be used for, it is unlikely that you would have described Facebook or SETI@Home. Instead, you may have described early websites like craigslist, or the home pages of print newspapers, or (if you were particularly forward-thinking) a blog. While these early sites are not representative of everything that today’s maturing Internet has to offer, they do at least give someone an idea of what the Internet could be. Similarly, today’s examples of 3D printing will inevitably appear primitive in five, ten, or twenty years. However, they can help us understand exactly what we are talking about.

Figure 5: MakerBot Industries sells a 3D scanner mount called Cyclops. Users need to supply their own projector, camera, and iPod touch or iPhone (or other VGA video source). Image from MakerBot

As mentioned above, 3D printing can be used to create objects. At its most basic, 3D printing would allow you to design bookends that look like your face, or even custom action figures. 3D printing could be used to make simple machines like bicycles and skateboards. More elaborately, when combined with on-demand circuit board printing, 3D printing could be used to make simple household electronics like a custom remote control for your TV that is molded to fit your hand, with all of the buttons exactly where you want them. Industrial 3D printing is already used to make custom, fully functional prosthetic limbs.1

This ability seems amazing today. Who could resist giving out exact replicas of their face to friends and family as gifts? What child (or adult, for that matter) would not enjoy the ability to summon toys they designed out of a computer and into their hands? What is to prevent you from making a toaster that squeezes into that oddly shaped nook in your kitchen? Why shouldn’t amputees have prosthetic limbs that match the rest of their body, or that have neon stripes with alternating flashing lights if they so desire?

Yet, this amazing ability is also vulnerable to restriction through intellectual property law. Artists may fear that their copyright-protected sculptures will be replicated without permission. Toy companies will see trademark and copyright violations in toys flowing from 3D printers. The new toaster or prosthetic arm may infringe on innumerable patents.

No one suggests that these concerns are unwarranted. After all, the ability to copy and replicate is the ability to infringe on copyright, patent, and trademark. But the ability to copy and replicate is also the ability to create, expand upon, and innovate. Just as with the printing press, the copy machine, and the personal computer before it, some people will see 3D printing as a disruptive threat. Similarly, just as with the printing press, the copy machine, and the personal computer, some people will see 3D printing as a groundbreaking tool to spread creativity and knowledge. It is critical that those who fear not stop those who are inspired.

Using 3D Printing

Intellectual property law is varied and complex, as are the potential uses for 3D printing. The easiest way to consider the possible impact that intellectual property law could have on 3D printing is to consider a few different use scenarios.2

Creating Original Products: Intuitively, creating original products would create the fewest intellectual property conflicts. After all, the user is creating his or her own 3D object.

In the world of copyright law, this intuition is correct. When a child in Seattle writes an ode to his pet dog, that work is protected by copyright. If, two years later, another child in Atlanta writes an identical ode to her pet dog (unaware of the first ode), the second work is also protected by copyright. This is possible because copyright allows for independent creation, even if the same work was independently created twice (or even more than twice). While a work must be original in order to receive copyright protection, the work does not need to be unique in the world. However, and relevantly for reproducing 3D objects, patent law does have a novelty requirement. Patent law does not allow for parallel creation. Once an invention is patented every unauthorized reproduction of that invention is an infringement, whether the reproducer is aware of the original invention or not.

Historically, this distinction has not been particularly problematic. Copyright protects many works that are long and complex, and can take the form of a variety of expressions. As a result, it was relatively unlikely that two people would create exactly the same work without the second copying the first. In contrast, many people working on a practical problem at the same time may create similar solutions. For patents to be worthwhile, they had to cover all identical devices, no matter how they were developed. It was assumed that parties vying for a patent were sophisticated and would do a patent search before trying to solve a problem. Everyone playing the game understood that it was a race to file, and took necessary precautions. By democratizing the precision creation of physical objects, 3D printing may make the creation of physical objects nearly as widespread as the creation of copyright-protectable works. 3D printing also removes object creation from the realm of well-funded labs tightly integrated into the existing patent system.

1 Ashlee Vance, 3-D Printing Spurs a Manufacturing Revolution, N.Y. Times, Sep. 13, 2010.

2 This discussion is necessarily focused on United States law. For an excellent discussion of how EC and UK law apply, see S. Bradshaw, A. Bowyer and P. Haufe, “The Intellectual Property Implications of Low-Cost 3D Printing”, (2010) 7:1 SCRIPTed 5. http://www.law.ed.ac.uk/ahrc/script-ed/vol7-1/bradshaw.asp

This shift will likely increase the number of innocent patent infringers – people who infringe on a patent they do not even know exists. As 3D printing proliferates, individuals will look to solve problems by designing and creating their own solutions. In producing those solutions it is quite possible that they will unwittingly incorporate elements protected by patent. Again, unlike copyright, that type of innocent copying is still infringement.

Sharing designs on the Internet amplifies the problem. It is unlikely that a single object produced for home use would attract the attention of a patent holder. But, if the history of the Internet up to this point has taught us anything, it is that people like to share. Individuals who successfully design products that solve real world problems will share their designs online. Other people with similar problems will use (and even remix and improve) those designs. Very successful designs that happen to infringe on patents are the most likely to be targeted by patent holders.

Figure 6: Thingiverse user Skimbal created this Gothic Cathedral Playset. He describes it as the “Mount Everest of MakerBot prints” because it pushes the limits of the technology’s current capabilities. Image from Skimbal

While this type of inadvertent patent infringement has the potential to become one of the high-profile, defining conflicts of early 3D printing, it is likely to impact relatively few people. When millions of people are creating objects for 3D printing, the likelihood of someone copying a patented object or process is high. However, because patents do not cover most physical objects in the world, the likelihood that any one reproduced object infringes patent is relatively low. It is entirely possible that many (if not most) users of 3D printers will live their entire lives without inadvertently infringing on a patent.

Copying Products: Naturally, not every object produced by a 3D printer will be the result of the printing individual’s own creativity and ingenuity. As already mentioned, sometimes the object will be one downloaded and printed from another person’s original design. However, sometimes the object will simply be a copy of an existing commercial product.

This copy could come from at least two sources. The first source would be the Internet. CAD plans, like all files, are easily copied and distributed online. Once one individual creates the plan for an object and uploads that plan, it is essentially available to the world. The second source would be a 3D scanner. A 3D scanner has the capability to create a CAD file by scanning a 3D object. An individual with a 3D scanner would be able to scan a physical object, transfer the resulting file to a 3D printer, and reproduce it at will.

No matter the source of the file, copying existing commercial objects will draw the attention of the object’s original manufacturers. Although the proliferation of 3D printing will undoubtedly create opportunities for manufacturers (such as vastly reduced distribution costs and the ability to allow customers to customize objects), it will also disrupt existing business models. Depending on the type of object copied, manufacturers may turn to several different forms of intellectual property protection for relief.


Copyright: Copyright essentially attaches to every original creative work that is fixed in a tangible medium.3 This includes most things that are written, drawn, or designed. However, the copyright only protects the actual writing, drawing, or design itself, not the idea that it expresses. Networked computers are designed to reproduce things that are written, drawn, or designed. Their spread created exponentially increasing public awareness of copyright law and policy. As creations appeared online, they were copied. As items were copied, creators and those who monetized scarcity called for stronger, more aggressive copyright enforcement. Oftentimes they have sought to transfer the cost of enforcement onto service providers and the public – anyone but themselves.

In many ways, this struggle has defined the world of intellectual property law and policy for the last fifteen years. However, it has primarily been limited to the world of the intangible. The debate may manifest itself in a discussion about physical CDs, or DVDs, or books, but it really is about songs, and movies, and stories. These expressed ideas are at the core of copyright law.

The rise of 3D printing may divert some of the attention that copyright has received in recent years. While there are copyright implications for 3D printing, the fact that copyright has traditionally avoided attaching to functional objects – objects with purposes beyond their aesthetic value – may very well limit its importance. By and large, attempts to expand copyright protection to functional objects have failed. Copyright law has long avoided attaching to functional objects on the grounds that patent law should protect them (if they should be protected at all). That said, it is unavoidable that some functional objects also serve the types of decorative and creative purposes protected by copyright. Copyright deals with this by applying the “severability test.”

Classic useful articles (of the type traditionally covered by patent) are things like a new oil pump, or a hinge, or a machine to fold boxes. However, sometimes useful articles can also be decorative. A vase is a container to hold water and flowers, but it can also be a work of art in its own right. The severability test seeks to deal with the fact that sometimes an un-copyrightable object (the vase) and a copyrightable object (the decoration on the vase) can exist in the same object (the decorative vase). Under this test, any decorative elements of the object that exist outside of the scope of the useful object (or could be “severed” from the useful object) are protectable under copyright.

This has ramifications for individuals using 3D printers to reproduce physical objects. While, for the most part, the physical object itself will not be protected by copyright, decorative elements may be protected.

Users would be well served to keep this distinction in mind. Take, as a simple example, an individual who wishes to reproduce a doorstop. The individual likes this particular doorstop because it is exactly the right size and angle to keep a door in their home open. This doorstop also has decorative elements – it is covered with a lively and colorful print, and intricate designs are carved into the sides. If the individual were to reproduce the entire doorstop, including the print and carvings, the original manufacturer may be able to bring a successful claim for copyright infringement. However, if the individual simply reproduced the parts of the doorstop that he cared about (the size and angle of the doorstop), and omitted the decorative elements (the print and carving), it is unlikely that the original manufacturer would be able to successfully bring a copyright claim against the copier.

Patent: Patent is different from copyright in several key ways. First and foremost, patent protection is not granted automatically. While the mere act of writing down a story grants it copyright protection, the mere creation of an invention does not result in patent protection. An inventor must apply for a patent on her invention at the Patent and Trademark Office (PTO). The invention must be new,4 useful,5 and non-obvious.6 In making the application, the inventor must disclose information that would allow others to practice the invention.7 Finally, patent protection is significantly shorter in duration than copyright protection.8

3 “Fixed in a tangible medium” is a term of art in copyright law, and a critical prerequisite for copyright protection. A work must be “sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration.” 17 U.S.C. § 101. In practice, this requirement distinguishes a speech made up on the spot and not written down (not fixed, and therefore not protectable under copyright) from a speech that is written down and then delivered (fixed, and therefore protected under copyright).

The end result of these differences is that there are far fewer inventions protected by patent law than there are works protected by copyright law. While copyright law protects every ditty, every poem, and every home movie (no matter how trivial) for decades after its creation, most functional objects are not protected by patent law.

This dichotomy can be easily seen in the treatment of digital versus physical products. When you purchase a work that is delivered digitally to your computer, be it a song or a movie or a book, making additional unauthorized copies of that work is an infringement because the work is protected by copyright (unless it is in the public domain or the copy is a protected fair use). In contrast, when you purchase a physical object that is delivered to your home, making an additional copy of that object is unlikely to be a violation of patent because it is probably not covered by a patent. This creates an entire universe of items that can be freely replicated in a 3D printer.

Figure 7: While the decorations on this vase would likely be protected by copyright, the shape is mostly utilitarian and therefore likely would not be. Image from flickr user Hamed Saber

Though patent protects fewer objects, and protects them for a shorter amount of time, in many ways it protects them more completely. As discussed above, there is no exception for independent creation in patent law. Once an object has been patented, all copies, regardless of the copier’s knowledge of the patent, infringe upon that patent. Simply stated, if you are using a 3D printer to reproduce a patented object, you are infringing on the patent. Even using the patented device without authorization infringes on the patent. Furthermore, unlike in copyright, there is no fair use in patent. There is also no exception for home use, or for copying objects for purely personal use.

Yet, infringement is not as absolute as it might first appear. Infringement of a patented invention requires infringement of the entire invention. This flows from the nature of patents.9 One of the primary requirements for patent protection is that the invention is new.10 Often, a novel invention will consist of many existing inventions working together in a new way.11 It would be illogical if, by patenting the new combination of old inventions, the patent holder acquired a patent on the old inventions as well. Therefore, copying unpatented parts of a patented invention is not a violation of the larger patent.

4 See 35 U.S.C. § 101.
5 See 35 U.S.C. § 102.
6 See 35 U.S.C. § 103.
7 See 35 U.S.C. § 112.
8 See 35 U.S.C. § 154(a)(2).
9 See Bullock Electric & Mfg. Co. v. Westinghouse Electric & Mfg. Co., 129 F. 105, 109-10 (C.C.A. 6 1904).
10 See 35 U.S.C. § 101.
11 See Leeds and Catlin Co. v. Victor Talking Machine Co., 213 U.S. 301, 318 (1909).


Trademark: Although it is usually grouped with patents and copyright, trademark is a slightly different intellectual property animal. Unlike patent and copyright, there is no mention of trademark in the Constitution. Instead, trademark developed as a way to protect consumers, giving them confidence that a product marked with a manufacturer’s symbol was actually made and backed by that manufacturer. As a result, trademark is not designed to protect intellectual property per se. Intellectual property protection is instead a side effect of needing to protect the integrity of the mark.

Trademark could still be implicated when making exact copies of objects. If a 3D printer made a copy of an object and that copy included a trademark, the copy would infringe on the trademark. However, the specificity of 3D printing would allow an individual to replicate an object without replicating the trademark. If you like a given product, and do not feel passionately about having the logo attached to it, it will generally not be a violation of trademark law to reproduce it without the logo.

Use in Commerce: There is an additional trademark issue to consider in the case of home-based 3D printing. Because trademark protection is specifically geared towards preventing consumer confusion in the marketplace, trademark infringement is described in terms of “use in commerce.”12 Unlike patent or copyright, it is not copying a trademark that creates a trademark violation. Instead, it is using that trademark in commerce (thus potentially confusing a consumer as to the origin of the product) that results in a violation.

Over time, the understanding of “use in commerce” has expanded significantly. Trademark infringement has even been expanded to include “dilution” of famous marks, essentially making any public use of a famous mark – in commerce or not – a violation of trademark law.13

That being said, the mere existence of an unauthorized trademark in your home is not a violation of trademark law. In most cases, making products in your own home for your own personal use that include trademarks is not a violation of trademark. You know you made the product, so there is no chance that you are going to be “confused” about where it came from. However, this does not mean that just because you make a product in your home there are no trademark implications. Using a home 3D printer to churn out knockoff sunglasses to use in your back yard may not be trademark infringement, but it will be as soon as you take steps to try to sell them.14

Replacement Objects: While 3D printing could be used to create wholesale copies of manufactured goods, it could also be used to create replacement parts for worn or broken goods. Instead of scouring the Internet for that oddly shaped bracket or hinge, an individual could simply print out a perfect replacement part. In fact, the individual might decide to improve upon the original part to prevent it from breaking in the future.

As with creating and copying objects, there are ways in which manufacturers could use intellectual property law to prevent such activity. In the case of replacement objects, copyright and trademark protections will not be as important. A replacement part is, almost by definition, a “useful article” of the type under the jurisdiction of patent law.

Patent allows for the free reproduction of replacement parts in a number of ways. First, there are relatively stringent requirements for patent protection. As mentioned above, these stringent requirements mean that relatively few objects are protected by patent. Moreover, many of the objects protected by patent are, in fact, “combination” patents. Combination patents combine existing objects (some patented, some not) in a new way. Although the new combination is protected by patent, the individual elements (assuming they are not individually protected by patent) are free to be reproduced at will. As a result, there is little question that manufacturing unpatented replacement parts for a patented device would not violate the patent for that device.15 As long as you legitimately purchased the original device, you have the right to manufacture your own replacement parts.16

12 15 U.S.C. § 1114.

13 See 15 U.S.C. § 1125(c).

14 Or, if the trademark is considered appropriately famous, as soon as you wear them in public.

This right to replace has two noteworthy caveats. First, you only have the right to replace parts of a patented device. That means that a simple patented device consisting of only one part, or an individually patented part of a more complex device, cannot be reproduced without infringing.

Second, though repairing a patented device is legal, reconstructing the same device in its entirety from its constituent parts is infringement.17 The line between repair and reproduction is somewhat undefined, and may become an area of increased attention as the use of 3D printing to replace parts expands. A good rule of thumb is that if the patented item is designed to be used once, attempting to refashion it would qualify as infringement.18

If, however, there is an unpatented part of a larger patented device that has worn out, refashioning the part is not infringement.19 This holds true even if, over time, the owner of a device ends up replacing each worn out part of the patented device.20 Alternatively, replacing part of a patented device in order to give the device new or different functionality is also not infringement, because it creates a new device.21

Using Logos and Other Trade Dress: Once 3D printers become widespread, individuals will begin using them to reproduce trademarked logos and other elements of “trade dress.”22 Most exact logo reproductions, as discussed above, will likely be infringement. The look and feel of the object, often referred to as “trade dress,” is slightly more complex. Those aspects can be protected by design patent and by the trade dress subsection of trademark.

Design Patents: In addition to purely functional patents, United States law also provides patent protection for “new, original, and ornamental design for an article of manufacture.” 23 Although this expansion into ornamental design might appear to overlap with copyright, design patents are quite limited in scope.

First, the protected design must truly be novel.24 Second, design patents are strictly limited to ornamental, non-functional designs.25 Courts have reacted skeptically when manufacturers have attempted to use design patent to protect functional elements of designs.26 Finally, the design protection itself only extends to the actual design represented in the patent application, not to similar designs or designs merely derived from the original.27

In many ways this distinction between form and function clashes with the traditional goals of industrial design. In general, industrial designers achieve elegance by wedding form to function – finding a single way to meet both imperatives. Creating a hard distinction between form and function runs counter to that goal.

15 See Aro Mfg. Co. v. Convertible Top Replacement Co., 365 U.S. 336, 344 (1961) (Aro I).
16 See Aro Mfg. Co. v. Convertible Top Replacement Co., 377 U.S. 476, 480 (1964) (Aro II).
17 See Husky Injection Molding Sys. Ltd. v. R & D Tool & Eng’g Co., 291 F.3d 780, 785 (Fed. Cir. 2002).
18 Id.
19 Id. at 785-86 (quoting Aro I).
20 Id. at 786.
21 Id.
22 Trade dress is a subsection of trademark law. A classic example of protectable trade dress is the curvy Coca Cola bottle (as opposed to the protectable trademark of “Coca Cola” written in its distinctive cursive script printed onto that bottle).
23 35 U.S.C. § 171.
24 See id.
25 See Best Lock Corp. v. Ilco Unican Corp., 94 F.3d 1563, 1566 (Fed. Cir. 1996).
26 See id.
27 See id. at 1567.


In any event, users of 3D printers should often be able to work around design patents. If an element of an object is functional, and thus necessary to reproduce a machine or product, it simply cannot be protected by a design patent.28

However, there are some cases in which design patent protection may be problematic. Perhaps most famously, automobile manufacturers are increasingly using design patents to protect body panels, lights, and mirrors. This has allowed them to prevent third parties from entering the auto replacement parts market.29 Also, design patents can be used to protect designs as soon as they enter the marketplace. This can give manufacturers the ability to protect a design during the time it takes to develop the secondary meaning required to obtain more permanent trade dress protection under trademark law.30

Trade Dress: Trademark protection can extend beyond a logo affixed to a product to include the design of the product itself. However, in order to extend protection to product design, courts have required that trade dress acquire a distinct association with a specific manufacturer.31 Acquiring this type of distinctiveness takes time, and must be proven by survey results or some other proof of association in the eyes of the general public. As a result most product designs, even unique designs intended “to render the product more useful or more appealing,” will not be protected as trade dress.32

Additionally, as with design patents, trade dress protection cannot be applied to functional product elements.33 The burden of establishing non-functionality of the trade dress lies with the manufacturer, making it harder to protect functional elements behind trade dress.34 Any “essential” feature of a product – a feature that would put competitors at a “significant non-reputation-related disadvantage” if they were unable to incorporate it, or would affect the cost or quality of the device – is excluded from trade dress protection.35 As the Supreme Court wrote, trademark law “does not protect trade dress in a functional design simply because an investment has been made to encourage the public to associate a particular functional feature with a single manufacturer or seller.”36

Figure 8: While the Coca Cola script logo is protected by trademark, the look and feel of the classic Coke bottle is protectable under trade dress.

Image from flickr user KBE35

As with design patents, trade dress protection should not provide a significant barrier to the reproduction of objects with a 3D printer. If an element of the

28 See id. at 1566. 29 See Design Patents and Auto Replacement Parts: Hearing Before the H. Comm. on the Judiciary, 111th Cong. (2010). 30 Daniel Brean, Enough is Enough: Time to Eliminate Design Patent and Rely on More Appropriate Copyright and Trademark Protection for Product Design, 16 Tex. Intell. Prop. L.J. 325, 364 (2008). 31 Although simple trade dress can be “inherently distinctive” from the moment it enters the marketplace, product design trade dress cannot be inherently distinctive and must acquire distinctiveness. See Wal-Mart Stores, Inc. v. Samara Brothers, 529 U.S. 205, 215 (2000). 32 Id. at 213. 33 See Traffix Devices v. Mktg. Displays, 532 U.S. 23, 29 (2001). 34 See id. at 33. 35 Id. 36 Id. at 35.

object is required for its operation, it cannot be protected by trade dress. However, attempts to exactly copy objects with trade dress protection will run afoul of trademark law.

Remixing: What about remixing? Remix culture has been one of the richest creative results of the widespread availability of networked computing. Traditionally, remix culture has been limited to written works, visual art, and music. However, there are already examples of remixers experimenting with functional objects.

Figure 9: Designer Daan van den Berg imagined what would happen if you “infected” standard IKEA designs with the “Elephantiasis virus”

In some ways, 3D printing may usher in a new golden age of remix culture. Recall that the traditional sources of remixed works – written works, visual art, and music – are mostly protected by copyright. As a result, remix artists have needed to rely on fair use to create their works.

There are comparatively fewer intellectual property protections for tangible, everyday objects. Re-appropriating and mashing up functional objects will, in general, trigger fewer intellectual property rights issues. However, when those issues are triggered, they will be harder to resolve. Unlike copyright, patent law has no fair use doctrine. Repurposing a patented object, for whatever reason, is still a violation of the patent.

Future Issues

Thus far, this paper has largely considered how rights holders would respond with existing intellectual property law if 3D printing became widespread overnight. However, 3D printing will not emerge overnight. It will slowly improve and creep into the mainstream. As this process occurs, there will be tens, if not hundreds, of small intellectual property skirmishes. These skirmishes will attempt to wed existing intellectual property protections to new realities, and in doing so will slowly change the state of the law. While it would be easy to miss these skirmishes – an obscure lawsuit here, a small amendment to the law there – it will be critical not to. In aggregate, these changes will decide how free we will be to use disruptive new technologies like 3D printing to their fullest potential. What follows is a list of the issues most likely to be fought over.

Patent: Expansion of Contributory Infringement: Traditional patent infringement is not necessarily well suited to a world in which individuals are replicating patented items in their own homes for their own use. Unlike with copyright infringement, the mere possession or downloading of a file is not enough to create infringement

liability.37 In order to identify an infringer, the patent owner would need to find a way to determine that the device was actually replicated in the physical world by the potential defendant. This would likely be significantly more time and resource intensive than the monitoring of file trading sites used in copyright infringement cases.

In light of this, following in the wake of large copyright holders, patent owners may turn to the doctrine of contributory infringement to defend their rights.38 This would allow patent owners to go after those who enable individuals to replicate patented items in their homes. For example, they could sue manufacturers of 3D printers on the grounds that 3D printers are required to make copies. They may sue sites that host design files as havens of piracy. Instead of having to sue hundreds, or even thousands, of individuals with limited resources, patent holders could sue a handful of companies with the resources to pay judgments against them.

In addition to attacking the companies that make 3D printing possible, patent owners may try to stigmatize CAD file types in much the same way that copyright holders stigmatize the bit-torrent file transfer protocol (or even MP3 files). Successfully equating CAD files with infringement could slow the mainstream adoption of 3D printing and imply that anyone uploading CAD files to a community site is somehow infringing on rights.

Figure 10: Communities such as Thingiverse (http://www.thingiverse.com) already exist to allow designers to share, discuss, and collaborate on designs.

Evidence of Copying: However, contributory infringement will not automatically give patent owners the ability to shut down 3D printing. First and foremost, contributory infringement still requires evidence of actual infringement.39 This should prevent patent owners from inferring that Company X must be helping people infringe simply because of the nature of the product they offer. In order to successfully sue Company X, patent owners will have to prove that a user actually used a product or service offered by Company X to infringe, not just that a user could have done so. Contributory infringement gives patent holders a way to protect their patent without having to go after each and every individual who infringed, but they still have to find at least one individual who actually infringed the patent.

Staple Article of Commerce: The second hurdle for patent holders will be the “staple article of commerce” doctrine. This doctrine recognizes that inventions are made out of things, and that things can be used to make more than just the invention. For example, just because you patent a new steel mechanism does not mean that you can

37 When downloading a file, a user creates a copy of that file on her own hard drive, thus implicating copyright. 38 See 35 U.S.C. § 271(c). 39 See Enpat, Inc. v. Microsoft Corp., 6 F. Supp. 2d 537, 538 (E.D. Va. 1998) (citing Joy Technologies, Inc. v. Flakt, Inc., 6 F.3d 770, 774 (Fed. Cir. 1993)).

sue all steel manufacturers for contributory patent infringement. Even if someone did use a specific steel manufacturer’s steel to copy your mechanism, that fact alone would not allow you to sue for infringement. Steel has substantial lawful as well as unlawful uses, and the mere fact that it could be misused does not prove that it was misused.40

As long as an item is capable of substantial non-infringing uses, the fact that it could be used to infringe a patent is not enough to create liability for its creator.41 Moreover, selling general-purpose equipment that can perform a process does not infringe on a patent on that process.42 When the Supreme Court considered the fate of the VCR, it specifically borrowed this concept from patent law.43

This rule is logical. Tools like scanners and barcode readers are no doubt used in a number of patented processes – however, they are also used in any number of non-patented ways.44 Similarly, a computer, a 3D printer, and some glue have the ability to make an infringing reproduction of a patented product. However, all of these items have so many legal and non-infringing uses that outlawing them would harm society.

Knowledge: Finally, in order to sue a company that provides tools that can be used to infringe patents, a patent owner must show that the company knew or had the intent to cause someone else to infringe a patent.45 Although a patent owner does not need to uncover direct evidence of intent to contribute to infringement, the patent owner does need to provide circumstantial evidence.46 A patent holder must show that the party who allegedly induced infringement actually knew of the patent in question, or displayed deliberate indifference to the existence of such a patent.47

As with the other hurdles, this should serve to insulate the companies who merely provide the tools necessary to make 3D printing possible. The printer manufacturer, software designer, and companies that provide the materials that the printers use to make products should be able to claim that they are servicing a large, legitimate market and that any infringement is incidental to their activities.

Repair and Reproduction: Today the public is free to replicate unpatented elements of combination patents. They can repair and replace worn elements without securing an additional license or obtaining necessary replacement parts from the original manufacturer.

When creating those replacement parts or unpatented elements becomes easier, manufacturers will likely begin to see it as piracy and theft. They will likely seek to criminalize the creation of replacement parts without a license and reduce the threshold for what qualifies as a step towards infringement. This will most likely come in the form of a push for an expanded scope for patent protection (especially design patents), and the creation of some sort of protections for non-patented elements of combination patents.

Also, the somewhat ambiguous line between repair and reconstruction is likely to be explored, and potentially clarified. Users will fight to maintain the right to repair worn out parts, while manufacturers will fight to create a monopoly on replacements.

40 See, e.g., Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 545 U.S. 913, 932-33 (2005) (Grokster). 41 See In re Bill of Lading Transmiss. & Processing Sys., 695 F. Supp. 2d 680, 686-87 (S.D. Ohio 2010). See also Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417, 442 (1984). 42 See Ricoh Co., Ltd. v. Quanta Computer Inc., 550 F.3d 1325, 1334 (Fed. Cir. 2008). 43 See Sony at 442. 44 See In re Bill of Lading at 687. 45 See SEB S.A. v. Montgomery Ward & Co., 594 F.3d 1360, 1376 (Fed. Cir. 2010). 46 See DSU Medical Corp. v. JMS Co., Ltd., 471 F.3d 1293, 1306 (Fed. Cir. 2006). 47 See SEB S.A. at 1377.


Copyright: As 3D printing makes it possible to recreate physical objects, manufacturers and designers of such objects will increasingly demand “copyright” protection for their functional objects. The most likely way to achieve this type of protection is to eliminate or restrict the application of the severability test discussed above. Instead of separating design elements from functional elements, they will work to expand copyright protection to all functional items that contain design elements. We are already seeing such attempts in the call for fashion copyright, or a desire to protect functional objects such as a Dyson vacuum or an iPod as art. In some ways, this fear was realized when Congress added a special copyright protection for boat hull designs.48

This could create a type of quasi-patent system, without the requirement for novelty or the strictly limited period of protection. Useful objects could be protected for decades after creation. Mechanical and functional innovation could be frozen by fears of massive copyright infringement lawsuits. Furthermore, articles that the public is free to recreate and improve upon today (such as a simple mug or bookend) would become subject to inaccessible and restrictive licensing agreements.

Trademark: In recent years, the Supreme Court has been protective of the public’s interest in competition in the face of requests from trademark holders to increase the scope of protection.49 However, manufacturers will continue to seek expanded scope of trademark protection. Trademark is an especially attractive type of protection because it is potentially infinite in time.

With regard to trade dress, manufacturers will continue to push for “inherent distinctiveness” (or automatic trademark protection) without a requirement that a design acquire distinctiveness through public association. They will also seek to minimize the importance of the “use in commerce” clause in trademark law. At this time, “use in commerce” has not been heavily litigated because there were very few circumstances in which a defendant would be able to claim that they were not using the mark in commerce. As it becomes easier for individuals to create products at home for their own use, we can expect that to change.

The amorphous doctrine of trademark dilution is another candidate for possible expansion in scope. Unlike traditional trademark, a use that dilutes a “famous mark” does not need to be in commerce, confuse consumers, or cause direct economic harm to the mark holder. Whether or not a mark qualifies as sufficiently “famous” for dilution protection is determined by the application of a nonexclusive list of eight separate factors defined in the statute.50 This would give the courts wide latitude to gradually expand what marks qualify as famous for the purposes of dilution.

Expansion of Liability: One of the major lessons of the digital copyright battles is that it can be hard, expensive, and time consuming to find and prosecute individual infringers. In response, rights holders have increasingly sought out ways to expand liability beyond infringers to those who facilitate such infringement.51 As this effort expands further from infringing material, it becomes increasingly destructive: all computers can make copies, but if computer manufacturers and networking companies are held liable for every movie illegally downloaded from the Internet, the companies would quickly go out of business and the Internet would slow from a superhighway to an unpaved country lane.

The same will be true of 3D printing. Sophisticated 3D printers will be able to reproduce patented items, protected trade dress, and even ornamental objects protected by copyright. However, if rights holders are allowed to hold the companies that make 3D printing possible liable for copies that individuals make, those companies will be unable to continue

48 17 U.S.C. § 1301 et seq. 49 See, e.g., Wal-Mart Stores; Traffix Devices. 50 15 U.S.C. § 1125(c)(1)(A)-(H). 51 See, e.g., Grokster.

operating. If rights holders are able to force 3D printing companies to forfeit a percentage of their sales as “compensation,” or to incorporate restrictive copy controls, the industry may very well stall before it reaches a mass market audience.52 For example, rights holders could insist that, in order to avoid liability, 3D printer manufacturers incorporate restrictive DRM that would prevent their printers from reproducing CAD designs with “do not copy” watermarks.
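To make the scenario concrete, a watermark gate of the kind described here could be sketched as follows. Everything in this sketch – the marker bytes, the file layout, the function name – is a hypothetical illustration; no real CAD format, printer firmware, or DRM scheme is implied.

```python
# Purely hypothetical sketch of a "do not copy" watermark gate. The
# DO_NOT_COPY marker and header layout are invented for illustration.

DO_NOT_COPY = b"DNC1"  # invented 4-byte header marker

def printer_accepts(cad_bytes: bytes) -> bool:
    """Reject any design file whose header carries the invented marker."""
    return not cad_bytes.startswith(DO_NOT_COPY)

print(printer_accepts(b"MESH v1 ..."))           # True: unflagged design prints
print(printer_accepts(DO_NOT_COPY + b"MESH"))    # False: flagged design refused
```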

Conclusion

The ability to reproduce physical objects in small workshops and at home is potentially just as revolutionary as the ability to summon information from any source onto a computer screen. Today, the basic outlines of this revolution are just starting to come into focus: 3D scanners and accessible CAD programs to create designs. Connected computers to easily share those designs. 3D printers to bring those designs into the real world. Low-cost, easy to use, accessible tools will change the way we think about physical objects just as radically as computers have changed the way we think about ideas.

The line between a physical object and a digital description of a physical object may also begin to blur. With a 3D printer, having the bits is almost as good as having the atoms. Information control systems that are traditionally applied to digital goods could start to seep out into the physical world.

The basic outlines of this revolution have not yet been filled in. In many ways, this is a gift. Setting the tools free in the world will produce unexpected outcomes and unforeseeable changes. However, the unknowable nature of 3D printing’s future also works against it. As incumbent companies begin to see small-scale 3D printing as a threat, they will inevitably attempt to restrict it by expanding intellectual property protections. In doing so they will point to easily understood injuries to existing business models (caused by 3D printing or not) such as lost sales, lower profits, and reduced employment.

While thousands of new companies and industries will bloom in the wake of widespread 3D printing, they may not exist when the large companies start calling for increased protections. Policymakers and judges will be asked to weigh concrete losses today against future benefits that will be hard to quantify and imagine.

That is why it is critical for today’s 3D printing community, tucked away in garages, hacker spaces, and labs, to keep a vigilant eye on these policy debates as they grow. There will be a time when impacted legacy industries demand some sort of DMCA for 3D printing. If the 3D printing community waits until that day to organize, it will be too late. Instead, the community must work to educate policy makers and the public about the benefits of widespread access. That way, when legacy industries portray 3D printing as a hobby for pirates and scofflaws, their claims will fall on ears too wise to destroy the new new thing.

52 See, e.g., 17 U.S.C. §§ 1001-1010.


Stereoscopic 3D Benchmarking (DDD, iZ3D, NVIDIA)

by Neil Schneider

Neil Schneider is the President & CEO of Meant to be Seen (mtbs3D.com) and the newly founded S-3D Gaming Alliance (s3dga.com). For years, Neil has been running the first and only stereoscopic 3D gaming advocacy group, and continues to grow the industry through demonstrated customer demand and game developer involvement. His work has earned endorsements and participation from the likes of Electronic Arts and Blitz Games Studios, and he has also developed S-3D public speaking events with the likes of Crytek, Epic Games, iZ3D, DDD, and NVIDIA. Tireless in its efforts, mtbs3D.com is by far the largest S-3D gaming site in existence, and has been featured in GAMEINFORMER, Gamecyte, Wired Online, Joystiq, Gamesindustry.biz, and countless more sites and publications.

While traditional gaming media has every benchmark under the sun, it’s very hard to find ratings for modern stereoscopic 3D drivers like DDD, iZ3D, and NVIDIA’s GeForce 3D Vision. This is completely understandable: benchmarking in 3D is much more time consuming, the drivers don’t work equally well with the available measurement tools (e.g. FRAPS), and game setting expectations are different from one solution to the next.

We’ve been getting our share of graphics cards for game testing, and it got us thinking. What’s stopping us from doing some benchmarking on MTBS? Could we share information that other sites don’t? Let’s find out!

Using both an NVIDIA and AMD graphics card, combined with all three driver solutions, we wanted to determine:

- Can games be fairly benchmarked in stereoscopic 3D?
- What is a realistic game efficiency expectation in 3D?
- How does antialiasing impact S-3D game performance with the different driver solutions?
- Does deeper access to the graphics card directly impact performance?

To be fair, this article is more of a media experiment than a diverse collection of S-3D gaming performance results. Similar to M3GA’s regimented rules, http://www.mtbs3d.com/m3ga/, benchmarking should be based on a fixed platform that meets certain criteria. It’s for this reason that we limited benchmarking to just two graphics cards and a handful of tests. We want to get the process down pat, get the required benchmarking equipment, and then we will be able to follow through with a full-fledged service on MTBS.

Under normal circumstances, we would use FRAPS to benchmark a selection of video games in 3D. Unfortunately, FRAPS is not compatible with iZ3D’s stereoscopic 3D drivers – at least, not consistently. The best way around this problem is to find games with benchmarks built in. We chose Metro 2033, Resident Evil 5, Crysis Warhead, and Battleforge. These are great selections because all driver developers have been aware of these titles for some time, and most have active profiles in their software offerings.

We wish to make it clear that this is not a test of quality – just performance. So if one solution is a little faster than another with a given title, don’t jump to conclusions! Like the tortoise and the hare, it’s important to pay equal if not more attention to the quality of the 3D gaming experience – and there can be big variations from one driver developer to the next!

This round of benchmarking is only focused on DirectX 9 performance. DDD and iZ3D have just started supporting DX10 and DX11 in an official capacity, and we want all solutions to have adequate game profiles in these modes before testing. We also limited ourselves to 1280 X 720 and 1920 X 1080 resolutions for testing. Anything less or more would fall out of current generation display standards – especially in 3D. We also turned off

auto-convergence features and “virtual 3D” or “2D+depth” modes. Auto-convergence cuts down on performance, and “Virtual 3D” skews frame rates because it doesn’t actually render a second camera view and display a complete 3D experience.

We tested for three things: first, we measured 2D performance with the S-3D drivers completely deactivated. Then we tested each S-3D driver with multiple antialiasing options. Finally, based on these results, we determined the S-3D efficiency relative to the 2D performance in each benchmark.
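The efficiency grade that appears throughout these charts is simply the stereoscopic frame rate expressed as a percentage of the 2D baseline. A minimal sketch of the arithmetic (the function name and sample numbers are ours for illustration, not part of MTBS’s tooling):

```python
# Minimal sketch of the S-3D "efficiency" grade: stereoscopic frame rate
# as a percentage of the 2D baseline measured with the drivers disabled.

def s3d_efficiency(fps_2d: float, fps_3d: float) -> float:
    """Return stereoscopic performance as a percentage of 2D performance."""
    if fps_2d <= 0:
        raise ValueError("2D frame rate must be positive")
    return 100.0 * fps_3d / fps_2d

# A driver that naively rendered both eye views with no shared work would
# score about 50%; anything above that reflects per-frame savings.
print(s3d_efficiency(60.0, 42.0))  # 70.0
print(s3d_efficiency(60.0, 30.0))  # 50.0
```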

The Graphics Cards: Based on our current inventory, we tested the AMD HD 6870 and the NVIDIA GTX285. This article isn’t intended to be a comparison of two graphics cards. We are more interested in seeing how the different drivers perform with two competing architectures. With the exception of DX11 support, the AMD 6870 and GTX 285 are directly competitive in the DX9 mode we are testing today.

The Test Platform:

- AMD Phenom 9850 X4 2.5 GHz
- MSI K9A2 Platinum 790FX
- 4GB RAM
- Windows 7 64-bit
- Zalman 24-inch 3D Monitor
- Creative Labs XFI Soundcard

- The GTX285 (EVGA GTX285SC) is clocked at 675MHz. It also features 1GB of GDDR3 RAM clocked at 2,538MHz. It was tested with version 258.96 of the GeForce drivers because this is the last Zalman-compatible 3D driver on record. There is no evidence that later drivers would make a significant difference to the 2D or S-3D gaming performance.
- The AMD HD 6870 is factory clocked at 900MHz, features 1GB of GDDR5 RAM clocked at 1050MHz, and was tested with Catalyst 10.10.

- The stereoscopic 3D drivers included DDD’s TriDef Experience 4.3.2, iZ3D’s 1.12, and NVIDIA’s 258.96 Zalman release.

- Zalman 24: We acknowledge that Zalman is an interlaced solution (half vertical resolution per eye), and are open to the possibility that there could be a modest drop in performance with full-resolution page-flipping displays. Future benchmarking will use a different platform. We chose this route because there was no other way to test all driver solutions with a single benchmarking specification.

Resident Evil 5: While the Resident Evil 5 benchmark was released as a marketing tool to promote NVIDIA’s GeForce 3D Vision compatibility, it runs equally well on all three S-3D driver solutions. All settings were maxed out for the test.

We were very surprised to see that Resident Evil 5 consistently performs better in stereoscopic 3D on DDD and iZ3D drivers compared to NVIDIA’s GeForce 3D Vision on both the 6870 and GTX285. On the GTX285, the iZ3D drivers maintained a modest two to three FPS advantage over DDD’s TriDef Experience – and this gap widens at 8AA or more. In contrast, DDD did much better on the 6870 with an eight to twelve FPS lead over iZ3D.

While Resident Evil 5 was a great marketing push for NVIDIA’s GeForce 3D Vision, it isn’t a great contender for demonstrating GPU performance in 3D. Compared to 3D Vision’s competitors, it doesn’t once breach the 60% efficiency grade on the GTX285. In contrast, the DDD and iZ3D drivers are able to reach anywhere from nearly 50% to almost 80% efficiency on the GTX285. On the 6870, the DDD drivers are easily 10% more efficient than iZ3D’s.


Comparison of performance in frames per second (2D, DDD, iZ3D, and NVIDIA)

3D efficiency versus 2D performance: DDD, iZ3D, and NVIDIA


Metro 2033: Metro 2033 has also been promoted as NVIDIA GeForce 3D Vision certified, but the game and benchmark also work with the DDD and iZ3D stereoscopic 3D drivers. All settings were maxed out, but we left PhysX off because that’s a proprietary feature unique to NVIDIA.

Metro 2033 demonstrates a very modest performance advantage on the GTX285 with NVIDIA GeForce 3D Vision drivers. iZ3D is a close second, and DDD is third.

For some reason, to get the DDD drivers to run Metro 2033 on the GTX285, we had to have the NVIDIA 3D Vision drivers installed in the background. Otherwise, performance was a slideshow. This caveat was unique to Metro 2033 and isn’t a trend.

Metro 2033 has a minor efficiency advantage with NVIDIA GeForce 3D Vision drivers on the GTX285. iZ3D’s efficiency rating did very well on the 6870 – even surpassing that of NVIDIA’s at 1280x720 resolution. DDD did ok, but given their need for the NVIDIA 3D Vision drivers to be resident, there is probably room for more optimization.

Battleforge: A wildcard RTS game, Battleforge has no official recognition for being S-3D compliant. However, NVIDIA rates this title as “excellent” in their games list, and DDD already has a Battleforge profile. iZ3D also has compatibility, and they have been informed that we were looking to review this title some time ago.

The only limitation is that Battleforge doesn’t have a 1280X720 mode, so we only tested at 1920X1080 resolution. All settings were at maximum.


From a performance point of view, Battleforge consistently ran best with the NVIDIA GeForce 3D Vision drivers. On the GTX285 graphics card, DDD and iZ3D remained neck and neck, but iZ3D inched ahead once antialiasing was set to 8X or more. DDD and iZ3D performed better on the AMD 6870 than they did on the GTX285 both in frame rates and efficiency. The performance gap between solutions was practically non-existent until they hit the 8AA setting.

Crysis Warhead: While Crytek has been promoting Crysis 2 as supporting 3D without a drop in frames per second, it’s going to be a pseudo 2D+depth solution that doesn’t render a second camera view. Crysis Warhead combined with DDD, iZ3D, and NVIDIA stereoscopic 3D drivers does it the old-fashioned way – complete left/right rendering. This makes Warhead a great performance measuring tool. We ran an external benchmarking utility because it let us go through a fixed testing series in batch form. The only setting we turned off was shadows because they cause major problems on all three driver solutions.

At 1280x720, the GTX285 didn’t really have any clear performance winners. iZ3D had a one or two frame advantage here and there, but that was all. The 6870 had a similar result, though DDD held the lead this time around. On the GTX285, while DDD and iZ3D had a minor performance advantage over NVIDIA’s drivers at lower resolutions, this edge diminished at 1920X1080 as the antialiasing settings increased. DDD seemed to have some trouble with antialiasing compared to iZ3D on Crysis Warhead with the 6870. The moment antialiasing was activated, its performance dropped like a rock with each settings increase.

NVIDIA’s GeForce 3D Vision drivers have average but consistent stereoscopic 3D efficiency with Crysis Warhead. While they are easily outperformed at 1280x720 on the GTX285, their efficiency is unmatched at 1920x1080. In contrast, DDD and iZ3D do well at the beginning of the race, but peter out at high resolution on the GTX285. What is interesting is that while DDD and iZ3D performed better on the 6870, their efficiency grades both dropped at 2X AA. They faced a modest drop in efficiency at 4X AA, and became nearly useless at 8X AA.


Comparison of performance in frames per second (2D, DDD, iZ3D, and NVIDIA)

3D efficiency versus 2D performance: DDD, iZ3D, and NVIDIA


Conclusion: We think it’s fair to say that all stereoscopic 3D drivers have a competitive offering, and consumers can be confident that they won’t run into major performance gaps with any of them.

While NVIDIA was green with envy over their competitors’ scores with Resident Evil 5 (a “GeForce 3D Vision Ready” game), their 2D/3D performance was a little more efficient in three out of four titles tested on the GTX285. AMD’s 6870 combined with DDD and iZ3D drivers proved to be an even match, however.

DDD and iZ3D had nearly equal performance in most cases, and tended to perform better with the AMD HD 6870. iZ3D inched ahead of DDD’s results when antialiasing was set high enough. We are taking an educated guess that iZ3D’s memory footprint is smaller than DDD’s when in 3D mode, so antialiasing impacts DDD’s performance sooner than it does iZ3D’s. We would be interested to see if additional GPU memory would make a difference in this outcome for either graphics card maker.

It’s also interesting to point out that while 3D is often accused of cutting FPS in half, these tests show otherwise with efficiency scores ranging from 50% to 70% of the 2D equivalent. In cases where efficiency dropped below the 50% mark and the FPS was on the slow side, it’s perfectly reasonable to expect decent game speed with some eye candy reductions on mid-range graphics card equipment.

All things being equal, measuring an extra frame here and there isn’t going to determine the winning combination. Stereoscopic 3D game quality and visual experience are much more important. Please visit MTBS’ 3D Game Analyzer (http://www.mtbs3d.com/m3ga/) regularly to track quality based on member submissions. In conclusion, we plan to do more tests like this so consumers are informed of which GPUs will work best with their favorite drivers and games.



Last Word: The film from hell… by Lenny Lipton

Lenny Lipton is recognized as the father of the electronic stereoscopic display industry. He invented and perfected the current state-of-the-art 3D technologies that enable today's leading filmmakers to finally realize the dream of bringing their feature films to the big screen in crisp, vivid, full cinematic-quality 3D. Lenny Lipton became a Fellow of the Society of Motion Picture and Television Engineers in 2008, and along with Peter Anderson he is the co-chair of the American Society of Cinematographers Technology Committee’s subcommittee studying stereoscopic cinematography. He was the chief technology officer at Real D for several years, after founding StereoGraphics Corporation in 1980. He has been granted more than 30 patents in the area of stereoscopic displays. The image of Lenny is a self-portrait, recently done in oil.

The first money my new company StereoGraphics made in 1981 was from my consulting fees working on a 3-D movie called Rottweiler: Dogs from Hell. Chris Condon, the president of StereoVision International (which was a Burbank-based supplier of stereoscopic optics for the motion picture film industry), and StereoGraphics formed a venture called Future Dimensions, in which I would help market his line of lenses and provide consulting expertise.

Stereoscopic motion picture camera systems are difficult to use without a stereographer/consultant, but in the film business consultants may be perceived as a source of trouble. This may be a holdover from the days when the Technicolor Company required a color consultant who would dictate passing or failing grades on the choice of colors for costumes and set decor. One of these consultants, Natalie Kalmus (who was at one time the wife of the president of Technicolor, Herbert Kalmus), had a reputation as a tyrant. Nevertheless, a consultant is necessary if a production company is doing its first stereoscopic movie.

Condon and I put together a deal with Earl Owensby, who had a filmmaking/hardware fiefdom, E. O. Productions, in Shelby, North Carolina. He had seven large sound stages, which doubled as warehouses for his hardware supply business, and many country acres including his own motel, baronial dwelling, business center, the aforementioned sound stages, and an airstrip for his two planes.

I was hired to train his crew for two months, after Earl had negotiated a 99-year lease on Condon’s stereoscopic lenses. After Rottweiler: Dogs from Hell, that crew went on to shoot several stereoscopic movies and, although the only film I’ve seen was Rottweiler, I’ve heard that the stereo in the films, under the direction of cinematographer Earl Dickson, was pretty good.

Just before shooting was to begin I flew from Shelby to Miami with Mike Allen, who was Earl’s pilot. Mike and I got into Earl’s little Aeronca STOL craft – a plane that requires a short runway. We flew off in a thunderstorm with an Arriflex camera that needed to be properly mated to the stereoscopic lenses. For much of the day we flew through the gray unnamable. The Aeronca was outfitted with sophisticated instrumentation electronics so we could fly in zero-visibility conditions.

Mike decided to test me and I decided I would pass the test. He was a good old boy from Carolina, and I was some wise-ass fool from California. When the weather cleared, Mike showed me what the STOL aircraft could do. Mike was inhibited only by the fact that the camera was on board and he didn’t want to damage it. I did a meditation that lasted hours, relying on what I had learned from Uncle Bill, a sociable Buddhist hermit, and no matter what weird dip or roll or dive Mike pulled I remained calm, because I wasn’t even there. I used the sound of the engine as a mantra, a technique I have used to steady my nerves on commercial flights, which are more horrible than anything Mike could dream up. At the end of the trip, Mike decided I was one hell of a guy (or victim).


For much of the time we flew over the coast of Carolina, Georgia and then Florida. It is improbably beautiful – the vivid blue-green of the water and the shapes of the sandy barrier islands. At one point we flew by Cape Kennedy and saw a rocket on the launch pad. We refueled several times en route at small airports, and Mike knew people all along the way, which only gave him additional excuses to show off the plane’s aerobatic capability. At one of the airports we encountered a World War II fighter, a Hellcat. I had no notion of how immense these machines were. I thought of them as single-person small aircraft, but it ain’t so. The engine was mammoth – half the length of the plane. As a boy I watched these planes win the Second World War in triple features at the People’s Cinema in Brooklyn. Landing in Miami, Mike took me to that part of the airport where there were scores of aircraft, some of them the size of a DC-3, parked row after row, all having been seized by federal authorities from drug dealers. I suspected that Mike may have been a pilot playing in that game.

After visiting the Arri repair shop we returned the following day. The first week or so that I was on the set, a producer from CBS’s 60 Minutes showed up. He and his crew did some shooting and nosed around, and said that in a couple of weeks Morley Safer would be coming back with the 60 Minutes crew to do a segment on Earl.

There was a good Indian restaurant in Charlotte where I would eat on weekends with a girlfriend who was a news cameraperson for a local TV station. I told Earl about their terrific food, and he told me he had never eaten Indian food. It seemed to me that a movie producer ought to have eaten Indian food.

Earl told me that his desire to make films came from observations he made while watching a film crew at work. One day he decided there wasn’t that much to it, and he was going to get into the business. And his films and his studio made money. One day I ran into David Nelson coming out of an editing room – a blast from the past for me because I had been a fan of Ozzie and Harriet on both radio and TV in my youth.

Earl enjoyed acting in his pictures and he had a part in Rottweiler. He was a good-looking man who easily fit the part of the southern sheriff who liked to point his six-gun right at the lens so it would poke out of the screen in killer 3-D. Not many people get to fulfill the fantasy of being a movie producer or movie star. Earl might never have been given such an opportunity had he migrated to Hollywood, but here he was the king of the world, a smaller world than Cameron’s, but his and his alone. One day I rode through the green hills of North Carolina to our location with Earl in his Rolls, and he suggested that I stick around and direct one of his films. Although I was flattered, life would take me elsewhere.

The plot of Rottweiler involves dogs with super intelligence that escape from the U.S. Army, which was breeding them to be super weapons. After they escape they go on a killing rampage in a town. Earl’s plot has the classic science fiction motif of Frankenstein: the problem is technology, meddling in things that irk the creator. Bruce, the shark in Jaws, is not a man-made threat – he is nature itself. But the human race made the hell dogs that live to kill. The dogs could not have succeeded in harming so many people if the characters in the film had behaved with common sense (the desire to stay alive). Nobody would have been hurt if the Army had told the town the dogs were on the loose. And naturally enough, the scientist who made the dogs what they were does everything he can to protect them, no matter what price is paid in human life. The citizens of the town act in increasingly stupid ways to guarantee their doom, like characters in so many melodramas. Given a chance to get away from a dog, these people managed to get cornered on rooftops, in dead ends, or in burning buildings.

People who don’t know about working on movies may have romantic notions of the experience, but it’s a lot of hard work and mostly waiting; waiting for something or somebody to get ready. It was 110 degrees or more every day, but fortunately it was an evening shoot and it was a little cooler after dark, unless we were in one of Earl’s non-air-conditioned sound stages. Most of the show was location photography, and Earl had a large generator truck which wasn’t muffled. That meant much of the sound had to be dubbed, even though sync sound was recorded to serve as a guide track. The soundman with his boom took such pride in his work, but little of what he recorded was used. Many of the members of the crew of 40 carried side-arms. I was told they needed protection from the snakes in the kudzu. Kudzu is a leafy vine that has taken over a lot of the South. I felt it was prudent not to get into any arguments. Despite the six-day weeks, the heat and the fourteen-hour days, tempers never flared.

The director, a young man named Worth Keeter, also carried a side-arm. Worth was a competent technician who had no real interest in actors or acting. His areas of interest were action, makeup and effects, and the set would grind to a dead stop when Worth was required to add his scar and wound makeup, which were frequently called upon since the dogs were tearing hunks of flesh out of the characters. I learned from my camerawoman friend in Charlotte, who had gone to high school with Worth, that he dressed in a Dracula costume on a daily basis and slept in a coffin. Worth got his start at E.O. doing makeup and then graduated to directing. Years later Worth directed many of the Power Rangers TV shows.

My job as a stereoscopic consultant placed me between the camera and the Rottweilers. One recurring shot was to have the Rottweilers leap out of the screen – as they say on the 3-D posters, into your lap. The stereoscopic “convergence” control, which is used for placing devil dog drool onto the audience, turned out to be at the front of the lens, and it was my job to stand there (or actually hunker) working a knob, “pulling” convergence while Rottweilers jumped at the camera. Since I have nerves of steel (more like dead broke), I was suited for the job. Sometimes the Rottweilers were chained to a plywood plank so they couldn’t go too far, and sometimes they weren’t, because the shot wouldn’t allow it. And every now and then they had to chain me to the camera so that I wouldn’t run away.

The dogs were smacked with switches so they’d develop a foamy lather and look like angry dogs, because Rottweilers aren’t necessarily mean dogs. I didn’t speak out at the time because I wanted to remain gainfully employed. There were Rottweilers spread all over Earl’s domain, usually behind chain-link fences, and there was a large turnover in the population of Rottweilers and their trainers. Many of the crew wound up wearing T-shirts from various training establishments, all of which had drawings of snarling dogs.

I was given a small part in the film playing a tourist in a Hawaiian shirt tucked into my pants, wearing suspenders. My line was dubbed on a looping stage in New York many months later. The actor who spoke my dialogue decided that my character should have the voice of Groucho Marx. Since I was a Groucho fan in my youth that was kind of fun.

Finally 60 Minutes returned with Morley Safer, sporting a silk cravat. While the crew was hanging out and shooting the making of the film, the 60 Minutes producer invited me to be interviewed. But I was coached to complain about Earl Owensby: about Earl’s tyranny, about Earl’s bad treatment of the crew, about Earl’s lousy movies. Earl wasn’t a tyrant; he was a fair man who was giving people an opportunity to get into the movie business. He gave people jobs, which where I come from is a good thing. His movies, by my lights, weren’t terribly good or terribly sophisticated, but there are a lot of movies made in Hollywood that aren’t any better than Earl’s, and they’re distributed by major studios and made by people who are supposed to be talented. When I declined to cooperate with the 60 Minutes producer, my interview was canceled.

The most serious criticism I had of Earl was that he had an opportunity to have excellent food on the set, but passed it up. There was a restaurant outside of Shelby that served good Mediterranean cuisine, but Earl’s son, who was working on the crew, voted it down. Instead we got served grits and fried chicken every day, but I withheld this comment from 60 Minutes, because unlike the Rottweilers, I was not going to bite the hand that fed me. As it turned out the piece about Earl was favorable, and I have one moment on the screen standing near my favorite Rottweiler, Brutus.

On the day I left E.O. Studios a big fire scene was set up. This time those pesky Rottweilers had set fire to a hotel and were attacking the fools within. Rubber cement was used to coat the walls of the house, which was then set on fire, but the blaze got out of hand and the fire department was called. (Just like the shooting of House of Wax – the roof burned off the Warner Bros. sound stage and you can see the blue daylight behind the melting statues!) This occurred as I was on my way to the airport and was described to me later by one of the crew. Nobody was hurt that day.

A year later I sat in a screening room in the shadow of the Black Tower at Universal Studios, watching a print of Rottweiler with the head of Universal Optical, Pete Comendini, and the man who was going to be the director of Jaws 3D, Joe Alves. I was peddling my services and Condon’s lenses for use on the new production, which promised to be the biggest-budget stereoscopic film in years. Pete and Joe sat in the row directly behind me, and as the film unfolded they began to speak over the dialogue. I turned around and saw Joe removing his cardboard 3D glasses: “The 3D is great. This is the best I’ve seen. Easy on the eyes and the effects are great. Much better than anything else that has come through the door. You would be amazed what crap people have shown us. But the picture itself, it is shit,” he said.

“Yes,” Pete offered, “This is a terrible picture. I don’t think we can show it to the guys in the Black Tower.” The Black Tower is the office building on the edge of the Universal lot. They felt that given the poor quality of the picture, they could not screen it for the executives who would make the final decision about who would get the job of doing the 3D on the film.

Another year later I sat in a mammoth theater on the outskirts of Detroit, wearing cardboard 3D glasses, watching Jaws 3D. This film was just as crappy as Rottweiler, but the 3-D was a disgrace. My feeling was that I was sitting through one of the worst movies ever made, with technical mistakes so serious that it was only with an effort of will that I was able to continue to look at the screen.

Demystifying 3D: The Complete Guide to Autostereoscopic 3D Display Technology

3D Workshops at Digital Signage Expo 2011 February 22, 2011, Las Vegas

Insight Media University will be presenting the "Demystifying 3D: The Complete Guide to Autostereoscopic 3D Display Technology" workshop series as part of the upcoming Digital Signage Expo. The workshop series will be held on Tuesday, February 22, 2011 at the Las Vegas Convention Center, just prior to the opening of Digital Signage Expo. Digital Signage Expo will be held February 23-25 at the Las Vegas Convention Center.

There is a lot to know about 3D technology, including the displays, creation of the content, configuring and integrating the entire solution, and understanding which applications and venues are best suited for this opportunity. This workshop offers a comprehensive presentation of all the aspects of successful AS-3D solutions. The workshop is organized into four 90-minute modules. You can attend one, two, three, or all of them, depending on your focus and interest level. The modules are:

 Autostereoscopic and Advanced 3D Displays
 Content Creation for Stereoscopic & Autostereoscopic Displays
 System Integration for Autostereoscopic Display Solutions
 Autostereoscopic Market Opportunity

Attendees will gain the skills to understand the technology behind AS-3D, the methods to assemble a successful solution and the business opportunities that can and should be addressed: https://www.compusystems.com/servlet/ar?evt_uid=320
