
An Overview of Autostereoscopy as Used in Augmented and Virtual Reality Systems

Brian Valerius
University of Minnesota, Morris
600 East 4th Street
[email protected]

ABSTRACT
Displaying images in 3-dimensional form can prove useful for performing certain tasks, and also for entertainment purposes. In this paper we discuss 3-dimensional imaging without the use of external hardware for the user. This type of 3-dimensional imaging is becoming more and more popular as related technologies continue to develop. We also look at how 3-dimensional imaging can be used with augmented and virtual realities. Topics of commercial and professional use are discussed, along with how these uses work for some 3-dimensional displays.

Categories and Subject Descriptors
I.3.1 [Computer Graphics]: Hardware architecture—Three-dimensional displays; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

General Terms
Design, Human Factors

Keywords
Stereoscopy, Autostereoscopy, 3-Dimensional Imaging, Parallax Barrier, Lenticular Lenses, Augmented Reality, Virtual Reality

1. INTRODUCTION
Technology for displaying images in 3-dimensional (3D) form is becoming more widely used. Although movie theatres have been showing 3D movies for many years, the progression of technology for displaying 3D images has grown rapidly throughout recent years. There is a trend toward displaying 3D images without the use of external utilities worn by the user. There are multiple ways of achieving this type of 3D imaging, some of which are described in this paper.

Beyond entertainment, there are many practical uses for displaying 3D images without the use of external hardware. The technology of today has allowed companies to prototype virtual models through 3D displays, thus saving time and money on physically building them. Electronic communication is widely used throughout the world, and 3D imaging can help provide a more realistic feel to human interactions over a distance.

We describe how autostereoscopy can be used for several practical purposes. Background on stereoscopy and autostereoscopy is given, as well as an example of a failed attempt at creating a stereoscopic game console. We also describe how autostereoscopy can be used for virtual prototyping and live display. How autostereoscopy can be used in augmented and virtual realities is also discussed.

2. BACKGROUND

2.1 Stereoscopy
Stereoscopy creates an illusion of three-dimensional depth from images on a two-dimensional plane. It works by presenting a slightly different image to each eye. It was invented by Sir Charles Wheatstone in 1838 [8]. Many 3D displays use this method to display images.

Stereoscopy is used in many places today. It can be used to view images rendered from large multi-dimensional data sets, such as those produced by experimental data. Stereoscopy is used in photogrammetry, the practice of determining the geometric properties of objects from photographic images. Stereoscopy is also often used for entertainment purposes, such as through stereograms. A stereogram is an optical illusion of depth created from a flat, two-dimensional image or images. Stereograms were originally two images that were able to be viewed through a stereoscope. A user would look through the stereoscope and view one image through the left eye and one image through the right eye, thus creating the 3D effect. Today there are several other ways to display stereograms that are not described in this paper, such as rapidly switching between images to achieve a similar effect.
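As a concrete illustration of the "slightly different image per eye" idea, the following Python sketch (our own toy example, not something taken from the cited sources) builds a crude stereogram pair from a flat image and a depth map by shifting each pixel horizontally in opposite directions for the two eyes; the image content, the disparity scale, and the lack of occlusion handling are all simplifying assumptions.

import numpy as np

def stereo_pair(image, depth, max_disparity=8):
    # Build left/right views from a 2D grayscale image and a depth map
    # in [0, 1] (1 = nearest). Each pixel is shifted horizontally by an
    # amount proportional to its depth, in opposite directions for the
    # two eyes. Occlusion holes are simply left black, and the sign
    # convention decides whether objects float in front of or behind
    # the display plane.
    h, w = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_disparity / 2).astype(int)
        left[y, np.clip(cols - shift, 0, w - 1)] = image[y, cols]
        right[y, np.clip(cols + shift, 0, w - 1)] = image[y, cols]
    return left, right

# Example: a bright square floating in front of a flat background.
img = np.full((64, 64), 0.2)
img[24:40, 24:40] = 1.0
dep = np.zeros((64, 64))
dep[24:40, 24:40] = 1.0
left_view, right_view = stereo_pair(img, dep)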
2.2 Failed Stereoscopic System: Virtual Boy
The Virtual Boy was first introduced in July of 1995. It was the first dedicated stereoscopic console released to the public. It is considered the most famous market failure by Nintendo, a large video gaming company. The Virtual Boy console did not last a year on the market before it was pulled. Its number of unique videogames was limited to 22, with 14 games released in North America and 19 released in Japan.

The Virtual Boy worked based on oscillating mirrors that displayed a linear array of lines, powered by light emitting diodes (LEDs), one for each eye [9] (Figure 1). The user placed their face against the eyepiece, which blocked out external light, and allowed them to focus on the two internal electronic displays. The controller for the Virtual Boy held the six AA batteries required to power the console, and was connected to the underside of the unit.

Although the Virtual Boy used sophisticated technology for its time, it was limited to using only a single color for the display: red [9]. This limitation likely made the videogame console unattractive to buyers. Other popular gaming consoles at the time used many more colors.

The system was criticized for being uncomfortable to play. It required the player to hunch over, and also reportedly caused physiological effects such as motion sickness and eyestrain after extended periods of use. Some of these problems are alleviated through an autostereoscopic approach.

Figure 1: The Virtual Boy console [9].

2.3 Autostereoscopy
Autostereoscopy is defined as any method of displaying stereoscopic images without the use of special headgear or glasses on the part of the viewer [7]. This technology can include motion parallax and wide viewing angles. Motion parallax uses eye-tracking, while wider viewing angles do not require tracking where the viewer's eyes are located. Some examples of autostereoscopic displays include parallax barrier, lenticular, volumetric, electro-holographic, and light field displays. Several of these display types will be described later in this paper.

Most 3D flat-panel displays today use lenticular lenses or parallax barriers that redirect incoming imagery to several viewing regions at a lower resolution. When the viewer's head is in a certain position, a different image is seen with each eye, which gives the illusion of a 3D image. Eye tracking can be used to limit the number of displayed views to just two. One disadvantage of eye tracking is that it limits the display to a single viewer: it tracks only one viewer's eyes, and therefore cannot be used in larger viewing environments.

Many systems for autostereoscopic impression use a prism system or thin blades to bend or block pixel columns of a standard thin-film transistor (TFT) display. This works in such a way that alternating columns can be seen either with the left or with the right eye. One approach is to use a prism mask to bend the rays and a dual-camera system to track the position of the viewer's eyes [5]. An embedded processor interprets the eye position and controls the position of the prism mask (see Figure 2). A pair of stereographic images has to be rendered vertically interlaced into the frame buffer for this type of display to work. Alternative techniques are described in subsequent sections of this paper.

Figure 2: Principle of an autostereoscopic display [5].
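The vertically interlaced frame buffer mentioned above can be sketched in a few lines. The Python fragment below is a minimal illustration of column interleaving; it assumes full-resolution inputs and a fixed even/odd assignment, whereas actual drivers for such displays may interleave at sub-pixel granularity and swap the parity as the tracked prism mask moves, details we do not attempt to model here.

import numpy as np

def interleave_columns(left, right, left_on_even=True):
    # Interlace a stereo pair for a column-based autostereoscopic panel:
    # even pixel columns carry one eye's image, odd columns the other.
    # Both inputs are H x W arrays of the same shape.
    assert left.shape == right.shape
    frame = np.empty_like(left)
    if left_on_even:
        frame[:, 0::2] = left[:, 0::2]
        frame[:, 1::2] = right[:, 1::2]
    else:
        frame[:, 0::2] = right[:, 0::2]
        frame[:, 1::2] = left[:, 1::2]
    return frame

# Usage: frame = interleave_columns(left_view, right_view)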
3. MOTIVATION AND PRACTICAL USES FOR AUTOSTEREOSCOPY
Autostereoscopic displays can serve many practical purposes. Companies are able to save economically by creating virtual prototypes instead of physically building real ones. Wearing glasses to see 3D movies or images can be intrusive and uncomfortable if the user is also wearing corrective eyeglasses, while autostereo displays solve this issue. Live long-distance autostereoscopic chat could provide a more realistic experience for users on both ends of the display. There are other interesting uses for autostereo displays as well. Here, we describe two practical uses of autostereoscopy in more detail.
This in [31], technology which we can extended toexperience for users on both ends of the display. There are support also autostereoscopic displays. include motion and wide viewing angles. Motion other interesting uses for autostereo displays as well. Here, Afastandeasysolutiontoachievethealternatingdistributionof 5.2 3D and Haptic Editing parallax usesthe eye-tracking, two images is whileto use a wider matching viewing stencil angles mask, render do not the imagewe describe two practical uses of autostereoscopy in more need to trackfor where one eye, the change viewer’s the stencil eyes are test located.to the appropriate Some ex- setting anddetail.As already discussed, 3D editing using 2D input devices is quite amples of autostereoscopicrender the scene again displays for the include other eye parallax (for further barrier, details on how problematic. A Spacemouse [1] provides input with six degrees of lenticular, volumetric,to set up the camera electro-holographic, parameters for correct and stereographic light field projection3.1freedom. Virtual It has Prototyping established itself as easy to use interaction device, displays. Severalrefer to of[31]). these On some display architectures types will the rendering be described might perform Virtualbut it prototyping is too inaccurate is forincreasingly editing purposes. physical Haptic real devices, mock-ups which better when the stencil test is disabled for the first pass. This works have been developed over the past years, also provide input of 6 or later in thiswell paper. as long as one does not want to render thin lines e.g. a wireand experimentseven more DOF. in industrialSpecifically, product we use a Phantom development Desktop [5].[30] Part seen Most 3D flat-panelframe representation displays of today the car use model. lenticular Especially lenses bevel or lines willof thein process Fig. 1(c). is Thesimulation user interacts of structural with the model and by functional holding a pen-like prop- parallax barriersnot be that reproduced redirect very incoming well, as the imagery stencil mask to severalwill block someerties.device Many with aspects a button. of this Currently methodology we are investigating are based how on such finite an viewing regionspixels at (Fig. a lower 10(a)) and resolution. as a result theWhen user thewill see viewer’s an interruptedelementinput analysis. device can Finite be used element for interactive analysis and isintuitive a technique editing. The for head is in aline certain as depicted position, in Fig. a different 10(b). Our image approach is seen to solve with this prob-findingpen approximate can be represented solutions on-screen of by partial a corresponding differential virtual equa- pen each eye. Thislem, gives which the additionally illusion ofproduces a 3D muchimage. better quality when fulltions asin the well 3D as scene integral and its equations. point can be used to interact with the FE scene anti- is enabled, halves the resolution of the view- surface. Thanks to the Phantom’s many degrees of freedom it is Eye trackingport can andbe renders used narrowed to limit images the number of the scene. of displayed The two images Oneeasy example to navigate is fromto the desired the automotive node, press the industry. button and A perform sim- views to justcan two. 
3.2 Live Autostereoscopic Display
The ability to record 3D video and display it back in 3D is becoming a somewhat more realistic possibility. Advances in multi-image capture and multi-projector display make this possible. Hewlett-Packard (HP) Laboratories have created a low-cost, wide-VGA-quality, low-latency autostereoscopic display of live video on a single PC [1]. Multiple users are able to observe and interact with a life-sized display surface responsive to their positions.

Capturing, transmitting, and reconstructing enough of the lightfield is a challenge for establishing 3D video communication. Five major challenges for this work come from multi-viewpoint capture, multi-viewpoint display, the corresponding mathematics and analysis for calibration across them, efficient compression to permit reasonable transmission, and smart processing of the signals to provide an interactive experience [1]. The researchers at Hewlett-Packard Laboratories discuss the first three.

Their system works with 9 cameras and 9 projectors offering 7 discrete binocular view zones at the plane of the projectors. Video bandwidth approaches a gigabit per second. Hewlett-Packard Laboratories states that their camera system can support 8 times this number (72 imagers) [1].
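As a rough plausibility check of that figure (the per-camera resolution of 640 x 480, the 30 Hz frame rate, and the 12 bits per pixel of chroma-subsampled video below are our assumptions, not numbers reported in [1]), nine uncompressed streams come out at about the stated order of magnitude:

9 \times 640 \times 480 \times 12\ \text{bits} \times 30\ \text{Hz} \approx 9.95 \times 10^{8}\ \text{b/s} \approx 1\ \text{Gb/s}.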
4. AUTOSTEREOSCOPY IN AUGMENTED AND VIRTUAL REALITY SYSTEMS
Augmented reality (AR) refers to a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input, such as sound or graphics [6]. The technology functions by enhancing one's perception of reality. Advanced AR technology allows the information about the surrounding real world of the user to become interactive and digitally manipulable. Virtual reality (VR) works in a similar way, except that it replaces the real world with a simulated one. In this section, we describe three AR or VR systems and how they use autostereoscopic techniques to achieve 3D imagery.
4.1 ASTOR
ASTOR is an Autostereoscopic Optical See-through Augmented Reality System. See-through augmented reality displays can be either video or optical see-through. Head-mounted displays usually use video see-through technology, which is much more common than optical see-through because of compatibility and availability issues. Optical see-through displays are, however, typically the preferred choice in applications in which a "direct" view of the world is desirable.

ASTOR uses an optical see-through display in which a holographic optical element is used for autostereoscopy. It is a "walk-up-and-use" 3D augmented reality system where the user does not need to wear any external equipment. Different images are seen depending on the user's position. The holographic optical element in ASTOR produces a holographic image of a light-diffusing screen: when the holographic optical element is illuminated, the light reflects to create an image of the diffusing screen floating in front of the holographic optical element plate (see Figure 4).

Figure 4: An autostereoscopic optical see-through augmented reality system [3].

The diffusing screen can be extended in the vertical direction and narrowed in the horizontal direction to form a vertical slit. The holographic optical element will produce one slit image for each light source. Light from the left projector will pass through the right slit, and the opposite applies to light from the right projector. The slits can be lined up side by side by adjusting the projector positions. Adding more projectors will increase the number of perspective views and enlarge the viewing zone.

The technology used for the display is consumer-grade projectors. Each projector is driven by a separate computer (2.5 GHz CPU, 256 MB RAM, NVIDIA GeForce4 MX 440-SE) [3]. Java3D is used for rendering, and Java Remote Method Invocation is used for communication in a client/server architecture over TCP/IP. The display is suitable for augmented reality applications since it provides good see-through capabilities, it has more than sufficient brightness, it does not physically interfere with the real scene seen through it, and it is economic to produce [3].
case by two computer-generated elements have a bad aspectsensory ratio—yet input, puterwith (2.5 an optical GHz CPU, see-through 256 MB display RAM, where NVIDIA a holographic GeForce4 suchnot too as critical sound foror graphics a valid numerical [6]. The simulation—and technology functions one element by MXoptical 440-SE) element [3]. (HOE) Java3D is is used used for for autostereoscopy. rendering and Java The enhancingcontainsThe intrusiveness an one’s angle thatperception of is too the large. current of reality. Our preprocessing display Advanced technology tool AR is abletech- is to a Remotedetails onMethod the HOE Invocation are provided is used in for a communicationprevious paper [6].in nologyrepairproblem these allows in elementscurrent the information byAR locally systems. restructuring about While the there the surrounding FE have mesh been and real sig- au- aThis client/server is a “walk-up-and-use” architecture over 3D AR TCP/IP. system The where display the user is worldtomaticallynificant of theadvances inserting user to in new become this elements research interactive (Fig. area, 12(e)) and resulting without digitally thein manip-increas- need for suitabledoes not for have augmented to wear reality any equipment, applications since since different it provides im- ulable.remeshingingly smaller Virtual the whole head-worn-displays, reality part. (VR) Then works the mesh insuch is a smoothed similar as the waydisplays by using except ourde- goodages see-throughwill be seen capabilities,depending on it the has user’s more position. than sufficient thatrelaxationveloped it replaces by approach, Minolta the which real [7] world guaranteesand MicroOptical with thata simulated features—e.g. [14], one. we chamfers Inbelieve this brightness, it does not physically interfere with the real scene section,and sharp we edges—are describe preserved. three AR To or compare VR systems the new and mesh how with they the seenIn the through remainder it, and of it this is economic paper we to present produce related [3]. work in starting mesh one can use distance mapping as mentioned before. (e) (f) use autostereoscopic techniques to achieve 3D imagery. SectionThere are 2, followed some limitations by a description to the ASTOR of the project. technology The in We developed these methods in direct cooperation with engi- Figure 12: Various stages of repairing and editing a finite element numberSection of 3. views We discuss is limited our to prototype the number AR of system, projectors set used, up in 4.1neers at ASTOR the BMW AG. Most of our algorithms are no longer proto- andmodel: are also(a) original constrained mesh with to lie perforating along the materials; horizontal (b) axis. perforating This an industrial environment in Section 4. Finally, we provide typesASTOR and have is an been Autostereoscopic transfered to the Optical commercially See-through available Aug- crash meansparts are that textured the display ; (c) perforations suffers from and image penetrations distortions are removed; that mentedworthiness Reality preprocessing System. tool See-throughscFEMod. augmented Therefore, the reality techniques dis- are(d)conclusions common user performs toand all a future modification horizontal-parallax-only directions and erroneous in Section elements displays. 5 and are6. When marked playswe presented can be in either this application video or paper optical are already see-through. 
being used Head- widely theinteractively; viewer gets (e) closer element or errors farther are fixed from automatically; the display, (f) the relaxation depth by several German car manufacturers and their subcontractors. Due smooths mesh whilst conserving features.

4.2 Personal Varrier
Personal Varrier is a display built by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago. It is a single-screen version of its 35-panel tiled Varrier display. Personal Varrier is based on a static parallax barrier and the Varrier computational method. EVL states that Personal Varrier provides a quality 3D autostereo experience in an economical, compact form [4].

Varrier (variable or virtual barrier) is the name of both the system and a unique computational method [4]. This method utilizes a static parallax barrier, a fine-pitch alternating sequence of opaque and transparent regions printed on photographic film, as shown in Figure 5. The film is mounted on the front of the LCD display, offset from it by a small distance. This distance allows the user to see one image with the left eye and one image with the right eye: the image is divided in such a way that all of the left eye regions are visible only to the left eye and all of the right eye regions are visible only to the right eye. Two views are simultaneously presented to the brain, which fuses them into one 3D image.

Figure 5: Varrier is based on a static parallax barrier. The left eye is represented by blue stripes and the right eye with green; these image stripes change with the viewer's motion while the black barrier remains fixed [4].

There is an alternative to the static barrier called an active barrier, which is built from a solid state micro-stripe display. Lenticular displays work equivalently to barrier strip displays, except that the barrier strip film is replaced by a sheet of cylindrical lenses.
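The two-view barrier geometry described above follows from similar triangles. The relations below are the standard design equations for a barrier-strip display rather than anything taken from [4] (Varrier in fact simulates the barrier computationally instead of deriving the film layout this way). With eye separation e (roughly 65 mm), viewing distance d, and image column pitch p on the LCD, the film-to-pixel gap g and the barrier pitch b are

g = \frac{p\,d}{e + p}, \qquad b = \frac{2\,p\,(d - g)}{d},

so the barrier pitch is slightly smaller than two pixel columns, which is what makes the alternating columns converge correctly at the two eyes.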
pixels. Construction of the Varrier screen is a simple mod- ification to a standard LCD display [4]. Varrier users are able to freely move in an area approximately 1.5 ft. x 1.5 ft., centered about 2.5 feet from the display screen. Their system uses a dual OpteronTM64-bit 2.4GHz pro- cessors computer with 4GB RAM, and 1Gbps Ethernet. The system also uses auxiliary equipment. Two 640x480 UnibrainTMcameras are used to capture stereo teleconfer- encing video of the user’s face. A six-degrees-of-freedom tracked wand is included, tracked by a medium-range Flock of BirdsTMtracker. The wand includes a joystick and also three programmable buttons, for 3D navigation and inter- action. The layout of the Personal Varrier including these elements is shown in Figure 7. Cameras are needed to collect real-time knowledge of the user’s 3D head position. Being an autostereo system, Per- Figure 5: Varrier is based on a static parallax bar- sonal Varrier uses no external gear worn by the user. This rier. The left eye is represented by blue stripes and is achieved by using three high speed cameras and several the right eye with green; these image stripes change artificial neural networks. The cameras record at 120 frames with the viewer’s motion while the black barrier re- per second at a resolution of 640 x 480. Eight infrared panels mains fixed [4]. and infrared band-pass filters on the camera lenses produce independence from room and display illumination. The cen- There is an alternative to the static barrier called an ac- ter camera is used for face recognition and for x,y tracking. tive barrier. It is built from a solid state micro-stripe dis- Outer cameras serve for range finding. Three computers 4. Distributed and local scientific visualization demonstrations Camera tracking system Point sample packing and visualization engine (PPVE) Tracked autostereo requires real-time knowledge of the user’s 3d head position, for larger groups of people. They are using large arrays preferably with a tether-less system that requires no add i t i oTnhaisl agpepalri ctoat iboen wiso drnes fiognr ed to visualize highly-detailed massive data sets of small lenticular displays that are capable of serving size- tracking. In Personal Varrier, tracking is accomplished auts intge r3a chtigveh fsrpaemeed r(a1te2s0 for the Varrier system. Since most large scientific able groups. A Graphics Processing Unit(GPU)-accelerated frames per second, 640 x 480) cameras and several artifdiactiaa ls entesu arrael snteotrwedo rokns remote servers, it is inefficient to download the entire mechanism has been implemented for performing real-time (ANNs) to both recognize and detect a face in a crowdedda tsac esente a anndd s torarec ka tlhoec afla coepy every time there is a need for visualization. rendering to lenticular displays. EVL is able to synchro- in real-time. More details can be found in [2], [3]. IndepFeunrtdheenrcmeo frreo, mla rrgoeo dma tansedts may only be rendered at very low frame rates on nize the displays and superimpose autostereo functionality on them, resulting in large arrays that behave as a single display illumination is accomplished using 8 infrared (IaR l)o pcaln ceolsm apnudte IrR, i nbhainbdit-ipnags Vs arrier’s interactive VR experience. PPVE is designed to handle the discontinuity between local capability and required display [2]. filters on the camera lenses. 
The Personal Varrier also creates the possibility of real-time 3D teleconferencing. At 25 Hz rendering, end-to-end latency is approximately 0.3 s (round trip). Even with a 10% packet drop rate, the real-time display appears flawless [4]. See Figure 8 for an example of Personal Varrier real-time autostereo chat.

Figure 8: A researcher at the Electronic Visualization Laboratory chats autostereoscopically with a remote colleague on the Personal Varrier [4].

4.3 Multi-viewer Tiled Autostereo VR Display
The Electronic Visualization Laboratory at the University of Illinois at Chicago is also developing an autostereoscopic display that can provide a wider comfortable viewing area for larger groups of people. They are using large arrays of small lenticular displays that are capable of serving sizeable groups. A Graphics Processing Unit (GPU)-accelerated mechanism has been implemented for performing real-time rendering to lenticular displays. EVL is able to synchronize the displays and superimpose autostereo functionality on them, resulting in large arrays that behave as a single display [2].

Their installations use Alioscopy 24" and 42" 3DHD displays, and they have also tested their approaches on a variety of similar lenticular and parallax barrier displays. These displays are capable of presenting eight independent image channels.

Lenticular display systems pose a challenge to rendering efficiency due to the need to render the scene from multiple different view points. Despite these complexities, EVL is able to achieve real-time display using the programmable GPU. Their approach is a generalization of a GPU-based algorithm for autostereoscopic rendering that targets the Varrier parallax barrier display [2]. As a single-user system, the Varrier presents exactly two image channels using a parallax barrier; the lenticular implementation generalizes this to an arbitrary number of views, limited only by the graphics hardware.
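Where the two-view case only needs the column interleaving sketched in Section 2.3, a multi-view panoramagram must assign every display column to one of N rendered views. The fragment below is a much-simplified, hypothetical version of that assignment (one whole pixel column per view, no lens slant, no sub-pixel handling); the GPU implementation described in [2] performs the equivalent mapping per sub-pixel in a shader.

import numpy as np

def interleave_views(views):
    # Combine N rendered views (a list of H x W arrays) into one
    # panoramagram frame by assigning column c to view (c mod N).
    # Real lenticular layouts also account for lens slant and
    # per-sub-pixel assignment; this is the simplest possible case.
    n = len(views)
    h, w = views[0].shape
    frame = np.empty((h, w), dtype=views[0].dtype)
    for c in range(w):
        frame[:, c] = views[c % n][:, c]
    return frame

# Example: eight views, matching the eight channels mentioned above
# (the constant images here are just placeholder data).
views = [np.full((4, 16), float(v)) for v in range(8)]
frame = interleave_views(views)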
Normally, a lenticular display projects its channels forward, and thus two adjacent lenticular displays project their channels parallel to one another [2]. Projected channels must be brought into alignment at the plane of focus for multiple lenticular displays to appear to the user as a single continuous display. Their display accomplishes this by shifting the channels of all displays into alignment at the plane of focus.

Current research uses a camera-based tracking system that is not appropriate for public spaces. EVL states that future installations plan to use camera tracking without targets; this approach has been developed for the Personal Varrier [2].
5. CONCLUSION
To conclude, research and development of autostereoscopic displays is growing rapidly. These displays can be used for many different purposes; some examples shown have been for automobile engineers, users of video conferencing, video gaming, personal use, and commercial use.

As shown, there are currently some limitations and restrictions in these displays. Providing an economical display is one of the biggest challenges that autostereo engineers face. Depending on the implementation, multiple users may not be able to share the full experience simultaneously. Overall, as more research is completed, presumably these 3D technologies will become more economical, more usable, more useful, and therefore more widely used.

Acknowledgments
Thank you to Kristin Lamberty and Elena Machkasova for proof-reading this paper and for giving useful feedback.

6. REFERENCES
[1] H. Baker and Z. Li. Camera and projector arrays for immersive 3d video. In Proceedings of the 2nd International Conference on Immersive Telecommunications, IMMERSCOM '09, pages 23:1–23:6, Brussels, Belgium, 2009. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).
[2] R. Kooima, P. Andrew, J. Schulze, D. Sandin, and T. DeFanti. A multi-viewer tiled autostereoscopic virtual reality display. In Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, VRST '10, pages 171–174, New York, NY, USA, 2010. ACM.
[3] A. Olwal, C. Lindfors, J. Gustafsson, T. Kjellberg, and L. Mattsson. ASTOR: An autostereoscopic optical see-through augmented reality system. In Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality, ISMAR '05, pages 24–27, Washington, DC, USA, 2005. IEEE Computer Society.
[4] T. Peterka, D. J. Sandin, J. Ge, J. Girado, R. Kooima, J. Leigh, A. Johnson, M. Thiebaux, and T. A. DeFanti. Personal Varrier: autostereoscopic virtual reality display for distributed scientific visualization. Future Gener. Comput. Syst., 22:976–983, October 2006.
[5] D. Rose, K. Bidmon, and T. Ertl. Intuitive and interactive modification of large finite element models. In Proceedings of the conference on Visualization '04, VIS '04, pages 361–368, Washington, DC, USA, 2004. IEEE Computer Society.
[6] Wikipedia. Augmented reality — Wikipedia, the free encyclopedia, 2011.
[7] Wikipedia. Autostereoscopy — Wikipedia, the free encyclopedia, 2011.
[8] Wikipedia. Stereoscopy — Wikipedia, the free encyclopedia, 2011.
[9] M. Zachara and J. P. Zagal. Challenges for success in stereo gaming: a Virtual Boy case study. In Proceedings of the International Conference on Advances in Computer Entertainment Technology, ACE '09, pages 99–106, New York, NY, USA, 2009. ACM.