
3D and Interactivity at Siggraph 2010

Michael Starks

© 3DTV Corp 2010. Permission is granted to reprint provided credit is given and nothing is changed, added or omitted.

Every summer the ACM special interest group in graphics holds its annual USA conference and exhibition (there are others in Asia and Europe). It’s a wonderful high-tech circus ranging from advanced technical papers to playful interactive art. In the evenings the winners of the juried animations are projected, and of course many are now in 3D. Both the conference proceedings and most of the animations are available in books and DVDs, so I will only cover stereoscopy-related offerings in the exhibition halls, with a sampling of student projects, interactive art, and some poster papers.

The Canon Viewer with Polhemus tracker in the center and Virtual Photocopier simulation (screen at left), which superimposes stereo CGI on the real machine for interactive training. This type of see-through HMD projects the CGI on the real object with precise alignment thanks to realtime object detection via twin cameras and a gyro in the HMD. This system thus has more rapid response and better registration than previously available. See www.canon-its.co.jp, which is opaque if you don’t read Japanese, but you can email [email protected] or [email protected] and watch http://www.youtube.com/watch?v=o2NIX7DNpvk (skip the political short at the beginning). The VH-2007 is a prototype running on a dual Xeon X5570 with an Nvidia Quadro 4800. Another app for the Canon Mixed Reality HMD animates a dinosaur skeleton (center) in realtime: visitors see the dino skeleton and then interact with a stereoscopic CGI simulation. http://www.youtube.com/watch?v=i2RqDTYYoFc , http://www.youtube.com/watch?v=xwIzRIasXto

The Canon Mixed Reality system and dual Polhemus FASTRAK trackers controlling the virtual Shuriken blades used to play Hyak-Ki Men (the Anti-Ogre Ninja’s Mask), a videogame created by a team at Prof. Ohshima’s lab at Ritsumeikan University in Kyoto. [email protected]

Philipp Bell of WorldViz (Santa Barbara, California, www.worldviz.com) showed their VR worldbuilding software with a $25K NVision HMD and 3D joystick. http://www.youtube.com/watch?v=MIppOTeHEBc http://www.youtube.com/watch?v=TnlGUl_P6Y4

Nemer Velasquez of CyberGlove Systems www.cyberglovesystems.com with their highly sophisticated glove and a stereo HMD. http://www.youtube.com/watch?v=WDad5_dnRFg&feature=related

One of the demos put a new twist on the structured-light approach to depth mapping, using a DLP projector and cameras that analyze the deformations of projected straight lines.
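The demo’s internals weren’t published at the booth, but the underlying geometry is ordinary triangulation: a bump on the surface displaces each projected line sideways in the camera image, and that displacement maps directly to depth. A minimal sketch, assuming a rectified projector-camera pair; the function name and parameters are mine, not the exhibitor’s:

```python
import numpy as np

def depth_from_line_shift(shift_px, focal_px, baseline_m):
    """Triangulate depth from the sideways shift of projected lines.

    shift_px:   per-pixel displacement (pixels) of each projected line
                relative to where it would fall on an infinitely distant
                surface, i.e. the projector-camera disparity.
    focal_px:   camera focal length expressed in pixels.
    baseline_m: projector-to-camera baseline in meters.
    Returns depth in meters (infinity where a line did not shift).
    """
    shift = np.asarray(shift_px, dtype=float)
    depth = np.full(shift.shape, np.inf)
    seen = shift > 0
    depth[seen] = focal_px * baseline_m / shift[seen]
    return depth
```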

CEO and Professor at Chonbuk University Ha Dong Kim (left) and Gyu Tae Hwang of CG Wave of Seoul, Korea http://cgwave.mir9.co.kr/index_en.html with their system combining stereo CGI with 2D (soon 3D) windows: realtime UGC (User Generated Content) in a stereoscopic VR environment. You can download a trial version of the Wave 3D VR authoring system at http://cgwave.dothome.co.kr/renew/download.htm

Jan Kjallstrom (left) and Mats Johansson of Eon Reality www.eonreality.com with CGI on a DLP projector viewed with XpanD DLP Link glasses. The glasses performed very well in this context (i.e., up to 20 m away) but not so well in others (see “The Ten Sins of DLP Link Glasses” in the FAQ on my page www.3dtv.jp). An Eon user in Beijing did the 8-view interactive graphics for the NewSight multiview panels, which were promoted by 3DTV Corp in Asia for several years. Eon develops interactive 3D visual content management solutions as well as its renderers and other CGI apps.

Simon Inwood of Autodesk Canada showed their software in 3D on a Mitsubishi 3D Ready DLP TV with shutter glasses, running realtime and interactive, with a virtual videocamera (i.e., you could put the tracker on your finger just as well) using the well known Intersense tracker http://www.intersense.com/.

The irrepressible Ramesh Raskar of the MIT Media Lab had his finger in many pies here, including the Slow Display (see below) and several papers. One of special interest to autostereo fans is “Content Adaptive Parallax Barriers for Automultiscopic 3D Display”, which explains how to dynamically alter both the front and back panels of such displays to get optimal refresh rate, brightness and resolution for the given content. In the paper, with lead author Daniel Lanman and three others, he says “We prove that any 4D light field created by dual-stacked LCDs is the tensor product of two 2D mask functions. Thus a pair of 2D masks only achieves a rank-1 approximation of a 4D light field. We demonstrate higher rank approximations using temporal multiplexing. … here a high speed LCD sequentially displays a series of translated barriers. If the completed mask set is displayed faster than the flicker fusion threshold, no spatial resolution loss will be perceived…. Unlike conventional barriers, we allow a flexible field of view tuned to one or more viewers by specifying elements of the weight matrix W. General 4D light fields are handled by reordering them as 2D matrices, whereas 2D masks are reordered as vectors.” So this just might be a big step forward for autostereo. You can buy the paper online or even the video http://siggraphencore.myshopify.com/products/2010-tl042 . Below is a Siggraph handout with some projects at MIT.
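The quoted passage compresses the key math: a static front/back mask pair can only form an outer product, hence a rank-1 approximation of the reordered light-field matrix, and showing T translated mask pairs within one flicker-fusion period raises the achievable rank to T. The authors solve a weighted nonnegative matrix factorization tuned to the viewer positions; the unweighted toy below (standard Lee-Seung multiplicative updates) is only meant to make the rank-T idea concrete, and the function name is mine:

```python
import numpy as np

def adaptive_barrier_masks(L, T, iters=200, eps=1e-9):
    """Toy rank-T nonnegative factorization of a light-field matrix.

    L: nonnegative 2D array, the 4D light field reordered as a matrix.
    T: number of time-multiplexed mask pairs (rank of the approximation);
       T = 1 reduces to a conventional static parallax barrier.
    Returns F (rows x T) and B (T x cols): T front/rear mask pairs whose
    time-averaged product F @ B approximates L.
    """
    rng = np.random.default_rng(0)
    m, n = L.shape
    F = rng.random((m, T))
    B = rng.random((T, n))
    for _ in range(iters):  # Lee-Seung multiplicative updates for NMF
        F *= (L @ B.T) / (F @ B @ B.T + eps)
        B *= (F.T @ L) / (F.T @ F @ B + eps)
    return F, B
```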

Chris Ward of Lightspeed Design with the small version of the DepthQ Modulator, now a serious competitor for the RealD XL system. See www.depthq.com or my ‘3D at Infocomm’ for more info. It is made by http://www.lctecdisplays.com/

Glowing Pathfinder Bugs by Anthony Rowe (Oslo School of Architecture and Design/Squidsoup) http://www.youtube.com/watch?v=DBU65ilhcWM was commissioned by Folly http://www.folly.co.uk/ and made by Squidsoup http://www.squidsoup.org/blog/ for PortablePixelPlayground http://www.portablepixelplayground.org. Projected images of depth-sensitive virtual animals seek the bottom of a sandbox. This is a demonstration of a rapidly growing field called “Appropriated Interaction Surfaces” where images are projected on the hands, floor, sidewalk, car windows etc.

Another such AIS demo here was the Smart Laser Projector by Alvaro Cassinelli and colleagues, which combines a Lidar beam with a projection beam for Augmented Reality. http://www.youtube.com/watch?v=B6kzu5GFhfg . It does not require calibration and can detect (and so interact with) objects such as fingers above the surface. A second demo was a two-axis MEMS mirror which can perform edge enhancement, polarization or fluorescence effects on printed matter with perfect registration in realtime. Putative apps include dermatology (cancer cell detection and smart phototherapy), nondestructive testing and object authentication. http://www.k2.t.u-tokyo.ac.jp/perception/SLP. You can find a nice article including some very different approaches in the June 2010 issue of IEEE Computer for $19, or a free abstract here http://www.computer.org/portal/web/search/simple.

The Smart Laser Projector http://www.youtube.com/watch?v=JWqgBRMkmPg enhances and transforms text, images, surfaces or objects in realtime. It has also been used to track, scan and irradiate live protozoa and sperm. Potential apps are limited only by the imagination: why not have a tactile or auditory output for the vision impaired? For a related device from the same lab see e.g. this YouTube video http://www.youtube.com/user/IshikawaLab#p/u/6/Ow_RISC2S0A.

In the Line of Sight by Daniel Sauter and Fabian Winkler http://www.youtube.com/watch?v=uT8yDiB1of8 uses 100 computer-controlled flashlights, arranged in a 10 by 10 matrix, to project low-resolution video of human motion on the wall (the source video is shown on an adjacent monitor).

For more photos and info on exhibits in the Touchpoint interactive art gallery see http://www.siggraph.org/s2010/for_attendees/art_gallery . Lauren McCarthy of UCLA, in this photo from her page http://lauren-mccarthy.com/projects.html , wearing her Happiness Hat http://www.youtube.com/watch?v=y_umsd5FP5Y . Her exhibit “’Tools for Improved Social Interacting’ is a set of three wearable devices that use sensors and feedback to condition the behavior of the wearer to better adapt to accepted social behaviors. The Happiness Hat trains the wearer to smile more. An enclosed bend sensor attaches to the cheek and measures smile size, affecting an attached servo with metal spike. The smaller the smile of the wearer, the further a spike is driven into the back of their neck. The Body Contact Training Suit requires the wearer to maintain frequent body contact with another person in order to hear normally; if he or she stops touching someone for too long, static noise begins to play through headphones sewn into the hood. A capacitance sensing circuit measures skin-to-skin body contact via a metal bracelet sewn into the sleeve. The Anti-Daydreaming Device is a scarf with a heat radiation sensor that detects if the wearer is engaged in conversation with another person. During conversation, the scarf vibrates periodically to remind the wearer to stop daydreaming and pay attention.”

Yan Jin of 3DTV Corp petting the irresistible ADB (After Deep Blue, a reference to IBM’s famous world champion chess computer), a touch-responsive toy by Nicholas Stedman and Kerry Segal that pets you back http://www.youtube.com/watch?v=pcEGh03ADyI . It also defends itself when hurt. “ADB is composed of a series of identical modules that are connected by mechanical joints. Each module contains a servo motor, a variety of sensors, including capacitive touch sensors, a rotary encoder, and a current sensor to provide information about the relationship to a person’s body. The electronics are enclosed within plastic shells fabricated on 3D printers” (see the end of this article). No barking, no vet bills, no fleas and it never bites the neighbor’s kid; just add some appendages and a talking head and it’s a certain billion dollar market for somebody. http://www.youtube.com/watch?v=BXVAVHGgWoM

This image of a human abdomen in “The Lightness of Your Touch” by Henry Kauffman responds to touch by moving, and your hands leave impressions which lift off and move around. Multiple viewers can interact simultaneously. http://www.youtube.com/watch?v=uWlZ9yJI_sQ

Touch Light Through the Leaves: a Tactile Display for Light and Shadow is a Braille-like optomechanical device that converts shapes and shadows into pressure on your palm. www.cyber.t.u-tokyo.ac.jp/~kuni/ and http://www.youtube.com/watch?v=x8jDUX7fsDg, http://www.youtube.com/watch?v=UqiShlpnBjw

Dr. Patrick Baudisch and his team from the Hasso Plattner Institute created Lumino, a Virtual Structural Engineer which uses a $12K table by Microsoft and fiber-optic blocks for interactivity. http://www.hpi.uni-potsdam.de/baudisch/projekte/lumino.html. The fiber optics transmit your finger image to cameras below the surface. The YouTube video is here http://www.youtube.com/watch?v=tyBbLqViX7g&NR=1. The paper is free here http://www.patrickbaudisch.com/publications/2010-Baudisch-CHI10-Lumino.pdf or you can purchase the delivered papers on all exhibits in the Emerging Technologies here http://portal.acm.org/toc.cfm?id=1836821&type=proceeding&coll=GUIDE&dl=GUIDE&CFID=104380878&CFTOKEN=47977647.

Hanahanahana by Yasuaki Kakehi, Motoshi Chikamori and Kyoko Kunoh from Keio University: an interactive sculpture which alters its flowers’ transparency in response to different odors provided by scented pieces of paper offered by users. http://vimeo.com/15092350 http://muse.jhu.edu/journals/leonardo/summary/v043/43.4.kakehi.html

A dual-viewpoint rear-projected 3D tactile table with position-tracked shutter glasses by the French VR company Immersion www.immersion.fr. Two users each get their own 3D viewpoint, and parallax changes in response to their hand positions. You can get a nice summary of this and the other Emerging Technologies exhibits at http://www.siggraph.org/resources/international/podcasts/s2010/english/emerging-technologies/text-summary

Also in the Emerging Technologies area was the famous robot Acroban from the French national research institute Inria http://flowers.inria.fr/media.php, which is shown on their page and on YouTube; a good video is the one on principal inventor Pierre-Yves Oudeyer’s page http://www.pyoudeyer.com/languageAcquisition.htm. Don’t miss the one of the Playground Experiment, in which it is used to investigate robotic curiosity, i.e., self-organization of language and behavior, which is his principal interest http://www.pyoudeyer.com/playgroundExperiment.htm.

Roboticist and student of language development P-Y Oudeyer with Acroban in this still from his page. This is cutting-edge AI. Acroban is not only exceptionally responsive cognitively but also physically, as shown by its ability to move naturally and preserve balance via its complex “skeleton” and control software. See the many YouTube videos such as http://www.youtube.com/watch?v=wQ9xd4sqVx0

Tachilab showed another system with fingertip capacitance-sensed control of patterns http://tachilab.org/ . A longtime researcher in VR-related tech, Professor Susumu Tachi is now working at both Keio University and the University of Tokyo. You can see the camouflage suit, 3D digital pen and other wonders in action at http://www.youtube.com/user/tachilab

Empire of Sleep: The Beach by Alan Price lets you take virtual photos of the stereoscopic animation, which causes it to zoom in on the subject of the photo. http://www.youtube.com/watch?v=SRMQJZCebz4

Robot society interacts with red LED “voices” in AirTiles from Kansei-Tsukuba Design; you can see the YouTube video here http://www.youtube.com/watch?v=d6Wyj73_MIw . Its modules allow users to create geometric shapes and interact with them. http://www.ai.iit.tsukuba.ac.jp/research/airtiles

Echidna http://www.youtube.com/watch?v=rK7ZzZ7Z6kY by UK-based Tine Bech and Tom Frame hums happily until you touch it, whereupon it begins squeaking. It was part of the Touchpoint gallery of interactive art; you can get podcasts and a pdf here http://www.siggraph.org/resources/international/podcasts/s2010/english/touchpoint/text-summary

“Matrix LED Unit With Pattern Drawing and Extensive Connection” lets users draw patterns with a light source such as a laser pointer. LEDs sense the light and display the pattern. The pattern is morphed by users via a tilt sensor in each unit. Units can be tiled, and the morphing will then scroll across connected units, giving effects similar to the well-known Game of Life. [email protected] and the YouTube video is here http://www.youtube.com/watch?v=YyQcEqvgz0M

“FuSA2 Touch Display” by a group from Osaka University http://www-human.ist.osaka-u.ac.jp/fusa2/ uses plastic optical fibers and a camera below the fibers to alter the colors of the touch-responsive display. [email protected] and YouTube here http://www.youtube.com/watch?v=RKa-Q24q35c

“Beyond the Surface: Supporting 3D Interactions for Tabletop Systems” is best understood by viewing the YouTube video http://www.youtube.com/watch?v=eplfNE5Cvzw . A tabletop with an infrared (IR) projector and a regular projector simultaneously projects display content with invisible markers. IR-sensitive cameras in tablets or other devices localize objects above the tabletop, and programmable marker patterns refine object location. The iView tablet computer lets you view 3D content with 6 degrees of freedom from the perspective of the camera in the tablet, while the iLamp is a projector/camera that looks like a desk lamp and projects high-resolution content on the surface. iFlashlight is a mobile version of iLamp. The Siggraph paper from the group at National Taiwan University is here http://portal.acm.org/citation.cfm?id=1836829&dl=GUIDE&coll=GUIDE&CFID=104423014&CFTOKEN=87764926
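The paper’s own solver isn’t reproduced here, but recovering a tablet’s 6-degree-of-freedom pose from detected markers whose positions on the tabletop are known is the classic Perspective-n-Point problem. A sketch using OpenCV’s solvePnP; every coordinate and intrinsic value below is a made-up placeholder:

```python
import numpy as np
import cv2

# Known tabletop positions of four IR markers (meters, tabletop plane z=0)
# and their detected pixel positions in the tablet's IR camera image.
object_pts = np.array([[0.0, 0.0, 0], [0.3, 0.0, 0],
                       [0.3, 0.2, 0], [0.0, 0.2, 0]], dtype=np.float32)
image_pts = np.array([[320, 400], [520, 395],
                      [515, 260], [325, 265]], dtype=np.float32)
K = np.array([[800, 0, 320],   # camera intrinsics: focal lengths and center
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# rvec/tvec give the camera's pose relative to the tabletop, which is what
# lets a device like iView render 3D content from the tablet's viewpoint.
```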

“Colorful Touch Palette” http://www.youtube.com/watch?v=UD3-FIbesvY by a group from Keio and Tokyo Universities [email protected] uses electrode arrays in fingertip covers to enable tactile sensations while painting on a PC monitor. It has 3 principal advances over previous force-sensitive systems: it gives degrees of roughness by controlling the intensity of each electrode in the fingertip array; it increases the spatial resolution by changing the stimulus points faster than the fingertip movements, thus providing tactile feedback that changes with finger position and velocity; and it combines pressure and vibration for feedback of blended tactile textures. The Siggraph paper can be purchased here http://portal.acm.org/citation.cfm?id=1836821.1836831&coll=GUIDE&dl=GUIDE&type=series&idx=SERIES382&part=series&WantType=Proceedings&title=SIGGRAPH&CFID=104380878&CFTOKEN=47977647

http://www.siggraph.org/photos/main.php?g2_itemId=466

From the University of Tsukuba Hoshino Lab comes Gesture-World Technology http://www.kz.tsukuba.ac.jp/~hoshino/. It achieves highly accurate noncontact hand and finger tracking for any arbitrary user, using high-speed cameras and a large compiled database including bone thickness and length, joint movement ranges and finger movements. This reduces the dimensionality to 64 or less, 1/25th of the original image features: a huge advance in this art. Apps are endless but include video games, robotics and virtual surgery. You can get the YouTube video here http://www.youtube.com/watch?v=ivmrBsU_XUo
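The lab’s exact reduction method isn’t spelled out beyond the 64-dimension figure, but the textbook way to squeeze raw image features down to a compact pose code for database lookup is principal component analysis. A stand-in sketch under that assumption; names are mine:

```python
import numpy as np

def fit_pca(X, k=64):
    """Learn a k-dimensional linear subspace of the hand-image features.

    X: (num_samples, num_features) matrix of raw image features.
    Returns the feature mean and the top-k principal axes.
    """
    mean = X.mean(axis=0)
    # Right singular vectors are the principal axes of the centered cloud.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(x, mean, axes):
    """Project one feature vector to the k-D code used for matching."""
    return axes @ (x - mean)
```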

“Haptic Canvas: Dilatant Fluid-Based Haptic Interaction” is a novel haptic interaction that results from wearing a glove filled with a special fluid that is subjected to sucking, pumping and filtering, which changes the state of the dilatant fluid from more liquid to more solid. The gloved hand is immersed in a shallow pool of water with starch added to block the view. The shear force between particles at the bottom of the pool and partially solid particles inside the rubber glove changes with hand movement. Varying what they term the three “Haptic Primary Colors” (the RGB dots in the pool) of "stickiness", "hardness", and "roughness" sensations allows the user to create new “Haptic Colors”. More info here http://hapticcanvas.bpe.es.osaka-u.ac.jp/ and the paper by this team from Osaka University is here http://portal.acm.org/citation.cfm?id=1836821.1836834&coll=GUIDE&dl=GUIDE&type=series&idx=SERIES382&part=series&WantType=Proceedings&title=SIGGRAPH&CFID=104380878&CFTOKEN=47977647 and the YouTube video here http://www.youtube.com/watch?v=eu9Za4JSvNk .

For abstracts and photos of some of the exhibits see also http://siggraphmediablog.blogspot.com/2010/05/siggraph-2010-emerging-technologies.html

“QuintPixel: Multi-Primary Color Display Systems” adds sub-pixels to red, green, and blue (RGB) to reproduce over 99% of the colors in Pointer’s dataset (all colors except those from self-luminous objects). The above comparison shows the improved high-luminance reproduction of yellows due to the addition of yellow and cyan sub-pixels without display enlargement. Though QuintPixel adds sub-pixels, it does not enlarge the overall pixel area; by shrinking the individual sub-pixels it balances high-luminance reproduction with real-surface color reproduction. MPCs (multi-primary color displays) are now appearing in Sharp TVs, where they can also produce “pseudo-super resolution” and reduce the problem of angular color variation in LCD panels. Only in 2D now, but this video shows the 3D versions are coming soon and will be the next must-have for anyone with the money http://www.youtube.com/watch?v=09sM7Y0jZdI&feature=channel and [email protected]
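Sharp’s actual five-primary mapping is proprietary and gamut-aware, but the basic idea of re-expressing an RGB target on extra primaries can be shown with a toy decomposition that treats a yellow sub-pixel as emitting R+G and a cyan one as emitting G+B. Purely illustrative, assuming linear light and ignoring real colorimetry:

```python
def rgb_to_rgbyc(r, g, b):
    """Toy split of a target RGB color (values in [0,1]) onto R,G,B,Y,C
    sub-pixels. Yellow absorbs the shared R+G energy, cyan the shared
    G+B energy, freeing the dedicated sub-pixels for saturated colors."""
    y = min(r, g)        # push shared red+green energy to the yellow sub-pixel
    c = min(b, g - y)    # then shared green+blue energy to the cyan sub-pixel
    return r - y, g - y - c, b - c, y, c

# Sanity check: yellow (1,1,0) comes out as pure Y; white as Y plus B.
print(rgb_to_rgbyc(1, 1, 0))  # -> (0, 0, 0, 1, 0)
print(rgb_to_rgbyc(1, 1, 1))  # -> (0, 0, 1, 1, 0)
```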

“RePro3D: Full-Parallax 3D Display Using Retro-Reflective Projection Technology” http://www.youtube.com/watch?v=T-0OrMtlROY uses the old technology of retroreflective screens in a new way to produce a full-parallax 3D display when looking at dual projected images through a semi-silvered mirror. Within a limited horizontal area the exit pupils are narrower than our interocular distance, enabling glasses-free stereo. The most common use of these screens has until recently been high-brightness backgrounds in movie and video production. The screens can be of arbitrary shape without image warping and can be touch-sensitive or otherwise interactive as shown above, or even moving. An infrared camera tracks the hand for manipulation of 3D objects. Smooth motion parallax is achieved via 40 projection lenses and a high-luminance LCD. http://tachilab.org/

“Slow Display” by Daniel Saakes and colleagues from MIT http://www.slowdisplay.com (Vimeo here http://vimeo.com/13505605) is a high-resolution, low-energy, very low frame rate display that uses a laser to activate monostable or bistable light-reactive variable-persistence and/or reflective materials. The resolution of the display is limited by laser speed and spot size. Projection surfaces can consist of complex 3D materials, allowing objects to become low-energy, ubiquitous peripheral displays (another example of appropriated interaction surfaces). Among the display possibilities are arbitrarily shaped super high-res displays, low-power reflective outdoor signage, dual day/night signs (see above photo), temporary projected decals, printing, advertising and emergency signs. It could be done in 3D with polarization, anaglyph or shutter glasses.

“Shaboned Display: An Interactive Substantial Display Using Soap Bubbles” controls the size and shape of soap-bubble pixels to create an interactive display with sound. Sensors detect bubble characteristics and hand gestures or air movements, and the system can replace and break bubbles as desired. http://www.xlab.sfc.keio.ac.jp/

A group from the University of Tokyo (including Alvaro Cassinelli, who also did the Smart Laser Projector above) demonstrated typing without keyboards http://www.youtube.com/watch?v=jRhpC5LiBxI . More info here http://www.k2.t.u-tokyo.ac.jp/vision/typing_system/index-e.html and their Siggraph paper here: http://portal.acm.org/citation.cfm?id=1836821.1836836&coll=GUIDE&dl=GUIDE&type=series&idx=SERIES382&part=series&WantType=Proceedings&title=SIGGRAPH&CFID=104380878&CFTOKEN=47977647

“Head-Mounted Photometric Stereo for Performance Capture” by a group from the USC Institute for Creative Technologies http://gl.ict.usc.edu/Research/HeadCam/ updates a very well known technique for capturing depth by using a head-mounted rig with polarized white-light LEDs and a Point Grey Flea3 camera (see below) to capture 3 different lighting conditions at 30fps each, so that subtle face structure and movements can be used as input to facial simulation hardware or software.

“beacon 2+: Networked Socio-Musical Interaction” allows people to collaborate to generate sounds and play music with their feet. A musical interface (beacon) uses laser beams whose pitch and duration change when they contact a foot. Multiple beacons can be networked, so distant performers can interact in realtime via the web. http://www.ai.iit.tsukuba.ac.jp/research/beacon

Thierry Henkinet of Volfoni www.volfoni.com , Michael Starks of 3DTV Corp, Jerome Testut of Volfoni and Ethan Schur of stereo codec developer TDVision Systems discuss Ethan’s forthcoming book on stereoscopic video. Volfoni, formerly one of XpanD’s largest dealers, has recently made their own 3D cinema shutter glasses system in direct competition with XpanD. I predicted such an event in my Infocomm article only a month ago, and it seems the end is even nearer for XpanD than I thought. XpanD’s pi-cell glasses are more or less obsolete tech, and their nonstandard battery and somewhat clumsy design, coupled with high prices and a bad attitude, made them an easy target. However, the Chinese are not stupid and several companies there have already started selling shutter glasses cinema systems, so it is not clear who will dominate.

Ubiquitous paper glasses manufacturer APO enhanced their always classy booth with a pair of CP monitors. Billions served! http://www.3dglassesonline.com/

Prof. Kyoji Matsushima http://www.laser.ee.kansai-u.ac.jp/matsu/ and a team from Osaka and Kansai Universities presented the world's first ultra high-resolution computer-generated holograms using fast wave-field rendering.

Their poster presentation.

Each of the 4 sides of this display box is viewed with LCD shutter glasses. A project by a team from the lab of Profs. Michitaka Hirose and Tomohiro Tanikawa at the University of Tokyo http://www.cyber.t.u-tokyo.ac.jp/ . For an extremely cool related device don’t miss the pCubee http://www.youtube.com/watch?v=xI4Kcw4uFgs&p=65E4E92216DEABE1&playnext=1&index=13

Visible interactive breadboarding by the multitalented Yoichi Ochiai ([email protected]) http://96ochiai.ws/top.html of the University of Tsukuba. It was a new campus when I visited it for the amazing Expo 85 (see my article http://www.3dtv.jp/pdf/21ST-CEN.PDF and http://www.3dtv.jp/3dtvpicweb/index.htm for photos and other info). The Visible Electricity Device or Visible Breadboard is touch-sensitive and displays the voltage of every junction via the color and brightness of LEDs, which permits wiring by fingertip via solid state relays. http://www.youtube.com/watch?v=nsL8t_pgPjs

“A New Multiplex Content Displaying System Compatible with Current 3D Projection Technology” by Akihiko Shirai (right) http://www.shirai.la/ and a team from Kanagawa Institute of Technology. http://www.youtube.com/watch?v=RXUqIb7xXRc . The idea is to use dual polarized 3D systems or shutter glasses systems to multiplex two 2D images so people can watch two different programs or two sets of subtitles on the same screen. Passive polarized glasses for this have the same orientation in both eyes (RR or LL), while shutter glasses in a 120hz system would have both lenses clear at 60hz on alternating frames for the two kinds of glasses (such glasses, called DualView, already exist for DLP Link monitors and projectors and are sold by Optoma). A sketch of the frame ordering is below.
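A minimal sketch of the shutter-glasses variant, assuming a 120hz display and the two kinds of glasses described above; the function name is mine:

```python
def multiplex_frames(program_a, program_b):
    """Interleave two independent 2D programs onto one 120hz '3D' channel.

    Viewers of program A wear glasses with both shutters open on even
    frames (LL rather than the usual L/R alternation); program-B glasses
    open both shutters on odd frames, so each viewer sees 60hz video.
    """
    for frame_a, frame_b in zip(program_a, program_b):
        yield frame_a  # even frame: A-glasses clear, B-glasses opaque
        yield frame_b  # odd frame:  B-glasses clear, A-glasses opaque
```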

“Non-Photorealistic Rendering in Stereoscopic 3D Visualization” by Daniel Tokunaga and his team from Interlab of the Escola Politécnica da USP (Universidade de São Paulo, where I helped install the first stereovideo operating microscope in the medical school almost 20 years ago). Get the paper from Siggraph here http://portal.acm.org/citation.cfm?id=1836845.1836985 and the YouTube video here http://www.youtube.com/watch?v=HiBOrcuNtcM . One of the aims is fast and frugal stereo CGI on low-cost PCs for education.

Marcus Hammond, an Aero-Astro grad student at Stanford University, with “A Fluid Suspension, Electromagnetically Driven Eye with Video Capability for Animatronic Applications”. It is low-power, frictionless and has a range of motion and saccade speeds exceeding those of the human eye. Saccades are the constant twitchings of our eyes (of which we are normally unaware). A stationary rear camera sees through the clear index-matching fluid of the eye from the back through the small entrance pupil and remains stationary during rotation of the eye. One signal can drive two eyes for stereo, either aimed at infinity or converged from object-distance data as is commonly done now for stereo video cameras (see the sketch below). The inner part of the eye is the only moving part and is neutrally buoyant in the liquid. Due to its spherical symmetry it is the only lens used for the camera. Due to magnification by the outer sphere and liquid, the surface of the inner eye appears to be at the outside of the sphere. They imagine that a hermetically sealed version might be used as a human eye prosthesis, along with an extra-cranially mounted magnetic drive. Coauthored with Katie Bassett of Yale and Lanny Smoot of Disney http://portal.acm.org/citation.cfm?id=1836821.1836824&coll=GUIDE&dl=GUIDE&type=series&idx=SERIES382&part=series&WantType=Proceedings&title=SIGGRAPH&CFID=104380878&CFTOKEN=47977647
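Deriving that common drive signal from object distance is simple trigonometry: each eye rotates inward from parallel by the angle subtended by half the interocular separation at the object distance. A back-of-envelope sketch (parameter values are typical, not from the paper):

```python
import math

def vergence_angle_deg(object_distance_m, ipd_m=0.065):
    """Inward rotation of each eye (degrees from parallel) needed to
    converge on an object straight ahead; 65 mm is a typical adult
    interpupillary distance."""
    if math.isinf(object_distance_m):
        return 0.0  # object at infinity: both eyes stay parallel
    return math.degrees(math.atan2(ipd_m / 2, object_distance_m))

print(vergence_angle_deg(0.5))       # ~3.7 degrees for an object 50 cm away
print(vergence_angle_deg(math.inf))  # 0.0
```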

The totally mobile Tobii eyetracker (photo above, and yes, this is all there is to it) www.tobiiglasses.com is a sensational new product which you can see in action in various videos on their page or here http://www.youtube.com/watch?v=6CdqLe9UgBs . They also have the original version embedded in monitors, which you can see here http://www.vimeo.com/10345659 or on their page. www.tobii.com

The EyeTech MegaTracker www.eyetechds.com is a similar approach to eyetracking which adds a tracking device to the monitor for remote noncontact tracking. A new version is due Oct. 2010. http://www.youtube.com/watch?v=TWK0u8nRW2o

Google Earth is widely used on the web (including in stereo http://www.gearthblog.com/blog/archives/2009/03/stereo_3d_views_for_google_earth.html ), and Philip Nemec showed how it is now adding 45-degree maps. Adding this angle of view greatly increases the comprehensibility of the data. For Microsoft’s competing system see Bing Maps http://www.bing.com/maps/

Part of puppet pioneer Jim Henson’s legacy: the HDPS (Henson Digital Puppetry Studio) has dual handsticks with realtime stereo animation and RF wireless Nvidia pro shutter glasses. www.creatureshop.com . Headquartered in LA with branches in NYC and London. http://www.youtube.com/watch?v=m6Qdvvb1UTs

Andersson Technologies of Pennsylvania http://www.ssontech.com showed the latest version of their approx. $400 program SynthEyes which has, among its many capabilities, stereo motion tracking and stereorectification. It has been used on various scenes in Avatar such as the 3-D holographic displays in the control room, the bio-lab, the holding cell, and for visor insertion. There are informative videos on their page and YouTubes at http://www.youtube.com/watch?v=C4XrnLrlu14&feature=related , http://www.youtube.com/watch?v=n-2p4HCyo2Y

An interactive CP (circularly polarized) display comprising 10 JVC panels in the King Abdullah University of Science and Technology booth www.kaust.edu.sa was the brainchild of Andrew Prudhomme and colleagues, who use COVISE from HLRS http://www.hlrs.de/organization/av/vis/covise/ and Mac software to split the display over the panels using a Dell cluster with GeForce 285 cards. The university, which opened in 2009 http://www.calit2.net/newsroom/release.php?id=1599 , has used its $10B endowment to establish one of the leading scientific visualization centers in the world. Some of its initial visualizations were developed by teams from the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego and the Electronic Visualization Laboratory (EVL) at the University of Illinois, where Andrew Prudhomme has worked. KAUST’s President, Choon Fong Shih, is former president of the National University of Singapore, and most of the 70 faculty and 400 students are foreign. There are numerous YouTube videos including http://www.youtube.com/watch?v=7i4EkINknMk

MetaCookie is a mixed reality system in which an interactive virtual cookie is projected on a real one along with odors http://www.youtube.com/watch?v=si32CRVEvi4 . Coinventor Takuji Narumi [email protected] describes it as “Pseudo-gustation system to change perceived taste of a cookie by overlaying visual and olfactory information onto a cookie with an AR marker.” Those interested might wish to attend DAP 3 (Devices that Alter Perception 3), held in conjunction with the IEEE Symposium on Mixed and Augmented Reality in Seoul, Korea, Oct 13th http://devices-alter.me/10

Nvidia 3D Vision shutter glasses with RTT DeltaGen software on a 120hz Alienware LCD panel with a cold cathode fluorescent backlight (afaik all the newer LED/LCD TVs use white LED backlights).

NVIDIA showed intranet- or internet-based realtime stereoscopic collaborative editing in Autodesk Maya using the NVIDIA 3D Vision Pro RF wireless glasses. However, the web version is subject to the usual lag and bandwidth limitations.

Another NVIDIA team showed 3D shutter glasses movie editing with Adobe and Cineform. For more info on Adobe and Cineform stereo see my ‘3D at NAB 2010’ and the detailed tutorials online, including http://www.youtube.com/results?search_query=cineform+3d&aq=4 . One section of their booth showed the newest Nvidia mobile processor doing realtime 3D playback from an HP laptop in the Nvidia-HP Innovation Zone.

CXC Simulations www.cxcsimulations.com of Santa Monica, Calif. was showing their 3-screen MP2 simulator with custom-built PCs using NVIDIA cards and the $25K Corbeau racing chair. It gets top ratings from racecar drivers, some of whom own them. You can race 350 different cars on 750 tracks!

Andrew Page of Nvidia’s Quadro Fermi team with the NVIDIA-developed RF wireless glasses used on a 120hz LCD monitor with Siemens syngo.fourSight Workplace medical imaging software showing a beating heart. The Quadro cards with the Fermi GPU cost about $1500, but the GPU is also present in their GTX 480 series cards for about $500.

Randy Martin shows Assimilate’s Scratch 3D edit software (www.assimilateinc.com and http://www.youtube.com/watch?v=l1KGaMQ4hd4) running on a PNY Nvidia Quadro card via a 3ality 3D Flex box, which converts the image for line-alternate display on an LG Xcanvas CP monitor. LG seems to have marketed these monitors only in Europe so far.

Video card maker ATI was always a distant second to Nvidia in stereoscopic support, but after being acquired by AMD they have scurried to catch up. Here they show stereo support on the dual-FHD semisilvered mirror display from Planar. Cards such as the FirePro V8800 (ca. $1200) are way beyond videogaming needs unless you are a super power user, and, like many Nvidia Quadros, they have the standard 3-pin mini-DIN VESA stereo plug for 3DTV Corp’s Universal Glasses Emitter, which can be used with 7 different types of shutter glasses. Planar www.planar3d.com also had their own booth.

The Web3D Consortium of Menlo Park, CA, USA www.web3d.org was also present, seeking members (NASA, Schlumberger and Sun are a few of their current members) to develop the ISO X3D specifications for web-based 3D graphics. For one example of a realtime interactive app supporting multiple formats see http://www.3df33d.tv/ , created by former NewSight CTO Keith Fredericks and colleagues of http://general3d.com/General3D/_.html , and below are a few of the 3D videos you can stream with Firefox HTML5 from their page http://www.3df33d.tv/node/videos . On 10-10-10 they streamed live 3D from their offices in Germany, a world first for HTML5. http://www.youtube.com/watch?v=05IODwp3fRs&feature=related . They expect to soon support all types of displays and to derive revenue from advertising.

The alternative streaming routes via the newest 3DTVs, STBs and BluRay players so far support only one or two formats in the hardware and use high-bandwidth dual compressed images, whereas 3DF33D uses the DIBR (depth-image-based rendering) method http://iphome.hhi.de/fehn/Publications/fehn_EI2004.pdf , which is easy to modify and control via easily updated software and can accommodate realtime broadcast-quality graphics. It has huge advantages over other PC-based streamers such as the Nvidia 3D Vision system or a live feed with capture card and Wimmer’s software (see the 3DTV FAQ on my page) in using a normal PC with no special cards, drivers or downloaded software. You go to any 3DF33D compatible page and upload or download your content or go interactive. And of course it is multiplatform and will have a robust 3D GUI in the browser.
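Fehn’s paper linked above gives the full treatment, but the core of DIBR is just a per-pixel horizontal warp of the single image, driven by the depth map, to synthesize the second eye’s view. A deliberately minimal sketch: no depth-map smoothing and no hole filling, both of which real systems need, and overlapping writes are resolved naively by scan order:

```python
import numpy as np

def dibr_new_view(image, depth, max_disparity_px=20):
    """Synthesize one new view by shifting pixels in proportion to depth.

    image: (H, W, 3) uint8 source view.
    depth: (H, W) floats in [0, 1], 1 = nearest to the camera.
    Disoccluded pixels are simply left black here.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (depth * max_disparity_px).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```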

However, I am not convinced that the monoscopic image plus depth data used in DIBR will retain the lustre, sparkle, texture and shadows of a true dual compressed image, so I await a side by side demo. Of course it’s a relatively new codec, supported by e.g. the European ATTEST program, and will be developed continually. In any case it’s totally cool, dude, and will spread like wildfire! Think Facebook and YouTube together in 3D fullscreen on any computer. Downsized versions for pads, pods and phones to follow! http://www.youtube.com/watch?v=CmiOO71yHQ8&sns=em I hope to demo it in the 3DTV Corp booth at CES in January 2011.

Masahito Enokido, Shinichiro Sato and Masataro Nishi (left to right) of the Lucent Pictures www.lpei.co.jp/en team showed some of their recent 3D film work (including their own 2D to 3D conversions) in the Japan based 3-D Consortium www.3dc.gr.jp booth.

Arcsoft www.arcsoft.com.tw showed the ability of their 3D BluRay PC playback software to give shutter or anaglyph display on a Samsung 120hz LCD with the Nvidia 3D Vision system (the 3DTV Corp system is compatible with such monitors and less expensive). All the software BluRay players, including PowerDVD and Roxio, are starting to support 3D in multiple formats. Here’s a video of their 3D BluRay player in Japanese http://www.youtube.com/watch?v=bovhlMnufE8

Kiyoto Kanda, CEO of NewSight Japan http://www.newsightjapan.jp/ and http://www.youtube.com/watch?v=hhCzVqmfDR0 (in Japanese), with their 3D picture frame with contents converted with 3D Magic software by two Japanese programmers. Kanda-san brought them to meet me in the USA 7 years ago, but their product was less developed then, and there was no autostereo display and little market. I introduced him to NewSight USA 5 years ago and he became their Japanese distributor with rights to the NewSight name in Japan. Now that NewSight is gone he is carrying on with his own line of autostereoscopic displays, including a made-in-Japan 70 inch model that is the world’s largest no-glasses flat panel http://www.youtube.com/watch?v=lGIX3YIKA0w . He also reps the giant LED autostereo outdoor panels made by TJ3D Corp in China (see my previous articles for info).

Steve Crouch of Iridas www.iridas.com showing their 3D edit software in the Melrose Mac booth www.melrosemac.com . You can see him in action editing RED footage http://www.youtube.com/watch?v=3GtV3LNd4-s .

Blick of Korea showed a line of elegant active and passive glasses, but two months later their page www.blick-eyewear.com is still not working (you can try http://www.ogk.co.kr/eng/company/sub1.asp ) and they have not responded to emails or voicemails; with dozens of companies rushing into this market they will have to move faster.

Brendan Iribe, CEO of Scaleform Corp www.scaleform.com, showing their plug and play stereoscopic interface for 3D game designers. Their software includes Flash tweening and ActionScript extensions. The 2D version has been used in over 700 games http://www.youtube.com/watch?v=zKDuzVbi50Q , http://www.youtube.com/watch?v=3WqoXlH1piE&feature=related , and is being prepped for phones and tablets http://www.youtube.com/watch?v=amkwCBAqN6s

I have followed the Canadian company Point Grey’s stereoscopic vision products since their first model in 1998, and they have now expanded greatly. Here Renata Sprencz demos the Bumblebee 2 machine vision camera; some of their cams have 3 lenses for more accurate data with a wider choice of subjects. www.ptgrey.com . Among their numerous YouTube videos is one of their spherical (360 deg) Ladybug 3 camera http://www.youtube.com/watch?v=FQaKwYRouyI . Uses include realtime depth mapping, ranging and 3D databasing.

Point Grey Bumblebee 2 which you can see a bit more about here http://www.youtube.com/watch?v=ZGujKSUAxDU

There were many MoCap (realtime Motion Capture) systems at the show, and XSENS www.xsens.com had one of the largest booths. In addition to MoCap http://www.youtube.com/watch?v=JeGflcAW_-g&feature=related and http://www.youtube.com/watch?v=TNkkLBkBSrw&feature=related , a single sensor can be used for interactive graphics http://www.youtube.com/watch?v=qM0IdPcuuxw

NaturalPoint’s OptiTrack MoCap system uses cameras and glowing light balls. The Expression facial MoCap costs $2K and can also be used for realtime control of animations or robotics. They also make TrackIR for viewpoint control in videogames and other CGI apps. http://www.youtube.com/watch?v=_AO0F5sLdVM&feature=related and www.naturalpoint.com .

4D Dynamics’ www.4ddynamics.com brand-new PicoScan model capture system costs $2K, but they have full-body-scanning Pro versions for up to $120K.

Dr. Howard Taub of Tandent www.tandentvision.com was showing a revolutionary face recognition system which uses COTS cameras and uncontrolled lighting http://www.tandentvision.com/site/images/SIGGRAPH%20-%20PR%20(Face).pdf . You may not have heard of them before, but you will, since it should now be feasible to ID people while they stand at airport security checkpoints or drive through a toll booth.

Naoya Eguchi [email protected] showed Sony's RayModeler, a spinning LED screen which makes a volumetric display (now commonly termed a “lightfield display”) controlled by a PlayStation joystick. For some of the many previous manifestations of this well-traveled concept see e.g. the SIT article on the 3DTV Corp page http://www.3dtv.jp/articles/sit.html . A common problem has been that inappropriate pixels (e.g., from the other side of the object) can be seen, but this did not seem to be an issue here (probably due to the microsecond switching of LEDs), and some images of real persons were also presented (i.e., 360 degree video).

“Light Field Display” means it approximates the light reflected from a real-world 3D object with photons originating from a volume. This term overlaps with the conventional 3D display term “volumetric”. For nice videos showing related displays see http://vodpod.com/watch/844164-research-interactive-360-light-field-display and http://www.youtube.com/watch?v=FF1vFTQOWN4&p=65E4E92216DEABE1&index=15&feature=BF .

So-called light field or plenoptic multilens cameras, which take simultaneous multiple images of a scene in order to have everything in focus (each lens can be selected later by software), should reach the consumer market soon. I give some references on plenoptic imaging in my article on stereo camera geometry http://www.3dtv.jp/ . The ability of such cameras to provide 3D images is a free byproduct.

Also in the Emerging Tech gallery was Stephen Hart of HoloRad www.holorad.com of Salt Lake City with an 8-frame holomovie, each position having 42 depth planes and its own green laser at the end of the bars shown. They are doing R&D in collaboration with Disney, and you can find their paper here http://portal.acm.org/citation.cfm?id=1836821.1836827 . This is one of 3 exhibits of what they term “interactive zoetropes”, after the 200-year-old picture animation devices.

Paul Craig of 3D Rapid Prototyping www.3drp.com distributes 5 models of the ZScanner www.zcorp.com ($12K for the Model 700), which takes “laser snapshots” to create digital solid models that can then be machined on any CNC device, such as the Roland in the next photo, which they sell for $8K. http://www.youtube.com/watch?v=6CdqLe9UgBs

The $8k Roland Milling Machine carves a plastic model from an image captured by the Z-Scanner http://www.youtube.com/watch?v=Yir7T165RcY

Shapeways www.shapeways.com of Eindhoven lets you upload and make a solid model of your design from a variety of materials for about half the usual cost.

The $395 MakerBot www.makerbot.com melts the flexible blue plastic rods (filament) and deposits the material to build up the model layer by layer; among the numerous videos is http://www.youtube.com/watch?v=Hzm5dkuOAgM

The $695 DIYLILCNC www.diylilcnc.org carves wood or plastic into models, but most interesting is that you can download the open-source plans, build your own, and use its Creative Commons license to tweak and redistribute the design.

Brian Taber, Lead Depth Artist of StereoD http://www.stereodllc.com/ , who did (and/or supervised) the 2D to 3D conversions for films like Thor and The Last Airbender and also did some work on Avatar. Rumor has it that such a conversion costs about $10M. There are many other 3D fakes coming, such as Gulliver’s Travels, and you can only find out they are fake by searching the net, as they are not required to say so in their advertising. This may pay off at the box office (I don’t think anyone really knows to what extent), but it is generating huge hostility among the public (myself included).

Afaik all such conversion work, as well as the realtime conversion in Samsung, Sony etc. 3DTV sets and in the 3D software players from PowerDVD and ArcSoft, uses my US Patent 6,108,005 without a license. It seems quite feasible to buy the rights to it from NewSight and litigate, as there are billions in revenues. Regarding the quality of Airbender, the famous critic Roger Ebert www.rogerebert.com had this to say: “"The Last Airbender" is an agonizing experience in every category I can think of and others still waiting to be invented. The laws of chance suggest that something should have gone right. Not here. It puts a nail in the coffin of low-rent 3D, but it will need a lot more coffins than that.” This is of course not StereoD’s fault.

Ebert does not like 3D much, even the genuine kind, and he is not alone. However, it never seems to cross the minds of the anti-3D crowd that their stereo vision is likely defective (the alternative is a psychological problem). Many people with apparently normal vision have problems perceiving depth (as some do with color, movement etc.), but very little work has been done to quantify this.

An allied claim that pops up periodically is that 3D viewing is potentially harmful, especially for children. Those who know perceptual physiology will likely take the opposite view: that it is highly therapeutic. There are millions of sufferers from amblyopia (“cross eyes”) and maybe several hundred million others who do not see 3D well but who do not have obvious amblyopia. The treatment of choice is to have them view 3D with glasses beginning as early in life as possible; if you wait longer than early childhood, it is too late. The growth of 3D is actually a giant therapeutic program, since it will force billions to see 3D from childhood onward, and I'm sure this has never crossed the minds of those who write about the "damage" from 3D viewing! Everyone should be required to watch 3D movies as children to prevent amblyopia or other stereovision defects, since amblyopia is really a blanket term for a variety of oculomotor and brain stereo-processing problems.

For proof of even transient problems from e.g. accommodation/convergence breakdown, one needs controlled blind (i.e., those who gather data don't know controls from experimental subjects) statistically valid studies that go on for, say, weeks or months. Control groups should be subject to such protocols as watching 2D TV or films for the same time in exactly the same conditions. There was lots of identical noise about 15 years ago when HMDs first appeared, and studies that purported to show persistent neurological problems, but it all faded away, and nobody gives a thought to it today even though millions of HMDs are in use by consumers every day (e.g., you can get them for your iPod for $100). And these isolated studies mean nothing. You have to look at the whole context of human visual system use and how common it is to have people report eye problems, headaches etc. after viewing 2D TV, films or videogames for the same period of time in the same contexts. The visual system, like all others, has evolved for flexibility. I recall the experiments done occasionally for over 100 years, where people wear special glasses for days or weeks that reverse the right and left eyes or turn the world upside down. After a day or two the brain adapts, things start to look normal, and one can walk around without problems! And when they finally take them off they are again totally disoriented for a few hours or days, but then everything is OK again. Riding in a car is likely a far greater stress than any kind of film viewing, and tens of millions get sick in cars (or on buses, trains, airplanes) every day. And then there are the amusement park rides and motion-seat theaters that routinely make a large percentage of the patrons a bit ill.

Watching 3D is almost certainly good exercise for our visual system, and if it bothers you just take off the glasses for a few minutes or a few days. Regarding children, they are the most adaptable; it’s the seniors who will have a harder time, but I'm 69 and quite sensitive to bad 3D (as I told Jeffrey Katzenberg after watching an eyestraining clip of Monsters vs. Aliens at 3DX two years ago; the final film, however, was corrected), and I watch these films from the front half of the theater (the best way to produce eyestrain) and feel no problems at all. Also, the recent 3D films and videos are conservative in their use of horizontal parallax and careful about avoiding binocular asymmetries, a dramatic contrast to previous 3D film practice! And the broadcasters are doing the same; just look at the 3D specs of Europe’s BSkyB satellite network, which, like theaters are supposed to do, limits the H parallax to 3% of the screen width (and prohibits 2D conversions without special permission).
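For concreteness, that 3% rule is easy to turn into a per-frame pixel budget (a back-of-envelope sketch, not BSkyB’s actual spec language):

```python
def parallax_budget_px(frame_width_px, fraction=0.03):
    """Maximum horizontal parallax, in pixels, under a 'percent of
    screen width' rule of thumb."""
    return frame_width_px * fraction

print(parallax_budget_px(1920))  # -> 57.6 pixels for a 1920-wide frame
```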

I am sure few of those who talk about this issue stop to think that millions of people every week for the last 20 years or so have looked at 3D movies and games on their TVs and PCs with shutter glasses and other 3D viewing systems, and that most of these (unlike the very well done current 3D films) have very bad stereo errors or huge parallax. In addition, there were hundreds of millions who saw the often very poorly shot and projected films from the 50's to the present. Every day for the last 50 years maybe a million people see such films at special venues where they are often part of rides where the seats are violently jerked around, an experience that makes many people sick even when the films are 2D! Even IMAX and Disney 3D theaters have for decades had notices in the lobby warning people to stop watching if they become ill (a frequent occurrence due to bad 3D!) and warning cardiac patients and the pregnant to avoid them. And it seems there has rarely been an issue in 50 years: no lawsuits, nobody falling down on the sidewalk outside the theaters, no reports of neurological damage.

It is also considered necessary to include warnings with all 3DTV sets and shutter glasses to discontinue use if a person feels bad, partly due to the rare condition of photosensitive epilepsy. The public is generally unaware that such warnings have been routine with 2D games, videos and TV sets for decades. In this regard I recall reading of children with this condition repeatedly inducing seizures by looking at a light or the sun coming through the trees while waving their fingers in front of their eyes. For many years I have sold shutter glasses to optometrists who have wired them to battery-powered sync generators so that persons with amblyopia and other conditions can wear them for hours a day while walking around observing the world with extreme 60hz flicker!

Another health issue being raised is infection from the glasses. Italian health officials recently seized a 3D theater's entire supply of shutter glasses for testing. For decades 3D glasses have commonly been reused dozens or even hundreds of times, often without cleaning (in other countries I often got them so dirty and scratched they were almost unusable), but where is the evidence that anybody got an eye infection? People's fingers are 100x more infectious than glasses, and they stick them everywhere, including on the best places possible to get an eye virus (other people, children and animals), and then touch everything in their daily life (i.e., without seeing 3D movies). And we all touch furniture, eating utensils, door handles, etc., so it's clear that even if we reuse unsterilized glasses when seeing movies, it can at worst add negligible risk to what we normally encounter.

Of course, as I noted in my other articles (see e.g. “The Future of Digital 3D Projection” at www.3dtv.jp ), it is desirable to investigate relative comfort across variations of the stereo filming and display parameters, and I suggested how this should be done. But the data don't exist, and it will be a major effort to do such studies under real-world conditions (e.g., home 3DTV and cinema viewing with normal viewing schedules and statistically valid samples followed over time).