What Is 3D Interaction?
3D interaction is human-computer interaction in which the user's tasks are performed directly in a 3D spatial context.
- Interactive systems that display 3D graphics do not necessarily involve 3D interaction – e.g., using a 2D menu to control a 3D interactive environment.
- 3D interaction does not necessarily mean that 3D input devices are used – e.g., 2D mouse input can be used to perform 3D interaction.
- A 3D user interface is a UI that involves 3D interaction.

Course 071011-1, 3D User Interfaces, Fall 2017 semester, 11/08/2017, Kyoung Shin Park
Challenges in 3D Interaction
- Limited (or expensive) input devices.
- Too many variables needed to accurately interact with objects (i.e., 6 DOF).
- Information in the world can become overwhelming (Sestito et al., 2002).
- Users can get lost or sick while navigating the virtual environment.
- Users have difficulty moving their arms and hands precisely in free space.
- Interaction is often done with arms outstretched and/or head bent over – arm and neck fatigue.

6-DOF Input Devices for 3D Interaction on the Desktop
- Examples are the SpaceMouse, SpaceBall, etc.
- Slight push and pull pressure of the fingers on the cap of the device generates small deflections in x, y, z, which move objects dynamically along the corresponding three axes.
- Slight twisting and tilting of the cap generates rotational motions along the three axes.
3D Mice
- The "bat", developed by Colin Ware, is a mouse that flies – it is simply a 6-DOF tracking device with three buttons.
- The wand is commonly used in conjunction with surround-screen VR displays, such as the CAVE and ImmersaDesk.
- Another example is the 3D joystick modeled after a flight stick.
- The Cubic Mouse, developed at Fraunhofer, is a 3D mouse designed primarily as an interactive prop for handling 3D objects.

User-Worn 3D Mice
- Another approach to the design of 3D mice is to have the user wear them instead of holding them.
- The Ring Mouse is a small, two-button, ring-like device that uses ultrasonic tracking and generates only position information.
- The FingerSleeve is a finger-worn 3D mouse, similar to the Ring Mouse in that it is small and lightweight, but it adds more buttons.
Special-Purpose 3D User Interfaces
- Many other types of input devices are used in 3D user interfaces.
- ShapeTape is a flexible, ribbon-like tape of fiber-optic curvature sensors that comes in various lengths and sensor spacings – it senses bend and twist information.
- Interaction Slippers embed a wireless trackball device (the Logitech Trackman) into a pair of common house slippers. Touching a cloth patch on the left slipper to a cloth patch on the right slipper completes the button-press circuit.
- The CavePainting Table is used in 3D CavePainting, a system for painting 3D scenes in a virtual environment – multiple cups of paint, a single tracked paintbrush, and a paint bucket.
- Transparent Palettes allow writing and selection of objects and commands from 2D palettes, as well as 3D interaction techniques, such as volumetric selection by sweeping the device through the virtual world.
- The Control Action Table (CAT) looks like a circular tabletop and was designed for use with surround-screen VR displays. It combines 6-DOF input with 2D tablet interaction, using angular sensors on three nested orientation axes to detect orientation and a potentiometer to detect forces in any 3D direction.
- In Jeffrey Shaw's "conFiguring the CAVE", the movement of a near life-size wooden puppet's body and limbs dynamically modulates the parameters of image and sound generation.

Interaction Design Principles
- Performance: quantitative measures indicating how well the task is being done by the user and the system in cooperation – efficiency, accuracy, productivity.
- Usability: the qualitative experience of the user – ease of use, ease of learning, user comfort.
- Usefulness: interaction helps users perform work or meet system goals; users focus on tasks.
Universal Interaction Tasks
- Selection & Manipulation
  - Selection: specifying one or more objects from a set.
  - Manipulation: modifying object properties (position, orientation, scale, shape, color, texture, behavior, etc.).
- Navigation
  - Travel: physical movement from place to place; setting the position (and orientation) of the user's viewpoint.
  - Wayfinding: the cognitive process of defining a path through an environment, using and acquiring spatial knowledge, helped by (artificial) cues.
- System control: issuing a command to change the system state or the interaction mode (in 2D systems, menus & command-line interfaces).

Selection
- Goals of selection: indicate an action on an object, query an object, make an object active, travel to an object's location, set up manipulation.
- Common selection techniques: touching with a virtual hand, ray casting, occlusion/framing, naming, indirect selection.
- Variables affecting user performance in selection: object distance from the user, object size, density of objects in the area, occluders.
Implementation Issues for Selection Techniques
- How to indicate that the selection event should take place.
- Need efficient algorithms for object intersections.
- Feedback (graphical, aural, tactile) indicating which object is about to be selected.
- Virtual hand avatar (virtual representation of the user's hand).
- A list of selectable objects, to increase efficiency.

Selection Classification
- Object indication: object touching, pointing, indirect selection.
- Indication to select: gesture, button, voice.
- Feedback: graphical, tactile, audio.
Manipulation
- Goals of manipulation: object placement, design layout, grouping, tool usage, travel.
- Common manipulation techniques: simple virtual hand metaphor, ray-casting, hand-position mapping, indirect-depth mapping, HOMER, Go-Go, scaled-world grab, world-in-miniature (WIM).
- Variables affecting user performance in manipulation: required translation distance, amount of rotation, required precision of placement.

Manipulation Classification
- Exocentric metaphors: world-in-miniature, automatic scaling.
- Egocentric metaphors:
  - Virtual hand metaphor: virtual hand, Go-Go, indirect Go-Go.
  - Virtual pointer metaphor: ray-casting, aperture, flashlight, image plane.

Manipulation Metaphors
- Simple virtual hand: one-to-one mapping between the physical and virtual hands; natural but limited.
- Ray casting: pointer attached to the virtual hand; little effort required; exact positioning and orienting is very difficult.
- Hand-position mapping: natural, easy placement; limited reach, fatiguing, overshoot.
- Indirect depth mapping: infinite reach, not tiring; not natural, separates DOFs.
- HOMER (ray-casting + arm-extension): easy selection & manipulation; expressive over a range of distances; hard to move objects away from you.
- Scaled-world grab: easy, natural manipulation; user discomfort with use.
- World-in-miniature: all manipulation in reach; doesn't scale well, indirect.
Implementation Issues for Manipulation Techniques
- Integration with the selection technique.
- Disable selection and selection feedback while manipulating.
- What happens upon release?

Ray Casting Technique
- The user selects objects by intersecting them with a ray or beam.
- Shoot a ray in the hand's "forward" direction.
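A minimal sketch of ray-casting selection, assuming selectable objects are approximated by bounding spheres (the function names and the sphere representation are illustrative, not from the slides):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a normalized ray to a sphere, or None if missed."""
    oc = [c - o for o, c in zip(origin, center)]
    proj = sum(d * v for d, v in zip(direction, oc))   # closest approach along ray
    miss_sq = sum(v * v for v in oc) - proj * proj     # squared distance off-axis
    if proj < 0 or miss_sq > radius * radius:
        return None                                    # behind the hand or outside
    return proj - math.sqrt(radius * radius - miss_sq)

def ray_cast_select(hand_pos, hand_forward, objects):
    """Shoot a ray in the hand's forward direction; return the nearest hit."""
    best_name, best_t = None, math.inf
    for name, center, radius in objects:
        t = ray_sphere_hit(hand_pos, hand_forward, center, radius)
        if t is not None and t < best_t:
            best_name, best_t = name, t
    return best_name
```

Returning the nearest intersected object is one way to satisfy the "efficient algorithms for object intersections" requirement noted above; a real system would use its scene graph's own intersection tests.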
Ray Casting Technique: Advantages
- No need to move to far-away objects to select them.
- Less fatiguing (the user can "shoot from the hip").
Drawbacks:
- Cannot select occluded objects.
- Difficult to manipulate the selected object.
- Small movements of the hand result in large translations of distant objects.
- Difficult to move an object in a direction parallel to the ray.
- Rotation is relative to the user's position, not the object's origin.

Two-handed Pointing Technique
- Flexible Pointer (Olwal and Feiner, 2003).
- Simplifies pointing to fully or partially obscured objects.
- The distance between the two hands controls the length of the virtual pointer.
- By twisting the hands slightly, the user can curve the virtual pointer.
Flashlight Technique
- The spotlight or flashlight technique was developed to provide a soft selection technique that does not require the precision and accuracy of pointing at virtual objects with a ray.
- The pointing direction is defined in the same way as in simple ray-casting, but the virtual ray is replaced with a conic selection volume.
- Easy selection of small objects.
- Disambiguation of the desired object when more than one object falls into the spotlight:
  - If two objects fall into the selection volume, the object closer to the center line of the cone is selected.
  - If the angle to the center line of the selection cone is the same for both objects, the object closer to the device is selected.

Image Plane Technique
- The image-plane technique simplifies the object selection task by requiring the user to control only 2 DOF (Pierce et al., 1997).
- Users operate on objects in their image plane as if the objects were right in front of them.
- Translations and rotations are relative to the object's origin.
- Suffers from the same occlusion problems as ray casting.
- Can be fatiguing.
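The flashlight's conic selection volume and its two disambiguation rules can be sketched as follows; the cone half-angle and the object list format are illustrative assumptions:

```python
import math

def flashlight_select(apex, direction, objects, half_angle_deg=10.0):
    """Flashlight selection: among objects inside the cone, pick the one
    closest to the center line; on an angle tie, pick the one closest to
    the device (the two rules from the slide). direction must be unit length."""
    limit = math.radians(half_angle_deg)
    best_key, best_name = None, None
    for name, pos in objects:
        v = [p - a for a, p in zip(apex, pos)]
        d = math.sqrt(sum(c * c for c in v))
        if d == 0.0:
            continue
        cos_a = sum(u * c for u, c in zip(direction, v)) / d
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle > limit:
            continue                       # outside the conic selection volume
        key = (round(angle, 9), d)         # compare angle first, then distance
        if best_key is None or key < best_key:
            best_key, best_name = key, name
    return best_name
```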
Fishing-Reel Technique
- The previous approaches make it difficult to control the distance to the virtual object along the z-axis.
- An additional input device, such as a fishing reel (e.g., a mechanical slider or an up-down button), is dedicated to controlling the length of the virtual ray.
- The user selects an object with ray-casting, then reels it back and forth using the dedicated input device.

Go-Go Interaction Technique
- Poupyrev et al., 1996.
- Interactively grows the length of the user's arm.
- Allows the user to manipulate near and distant objects directly.
- Translations are relative to the user; rotations are relative to the object's origin.
Go-Go Implementation
- Requires a "torso position" t – tracked or inferred.
- Each frame:
  - Get the physical hand position h in the world coordinate system.
  - Calculate the physical distance from the torso: dp = dist(h, t).
  - Calculate the virtual hand distance: dv = gogo(dp).
  - Normalize the torso-hand vector: (h - t) / dp.
  - Virtual hand position: v = t + dv * (h - t)/dp (in world coordinates).
- (Figure: virtual hand distance vs. actual hand distance over time.)

HOMER (Hand-centered Object Manipulation Extended Ray-casting) Technique
- Bowman and Hodges, 1997.
- A combination of ray casting and Go-Go.
- The user selects an object by ray casting, then the virtual hand moves to the object for manipulation.
- Translations are done relative to the user's body.
- Rotations are done around the object's origin.
- Unlike Go-Go, there is no maximum arm length (the ray casting is infinite).
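The per-frame Go-Go loop above can be sketched as follows; the threshold D and growth coefficient K are illustrative values, not the paper's (Poupyrev et al. set the threshold at roughly 2/3 of arm length):

```python
import math

D = 0.4          # distance where the nonlinear mapping starts (assumed value)
K = 1.0 / 6.0    # growth coefficient (assumed value)

def gogo(dp):
    """Go-Go mapping: one-to-one near the body, nonlinear growth beyond D."""
    return dp if dp < D else dp + K * (dp - D) ** 2

def gogo_virtual_hand(torso, hand):
    """One frame of the Go-Go update from the slide's pseudocode."""
    th = [h - t for t, h in zip(torso, hand)]
    dp = math.sqrt(sum(c * c for c in th))     # physical torso-hand distance
    if dp == 0.0:
        return tuple(torso)
    dv = gogo(dp)                              # virtual hand distance
    unit = [c / dp for c in th]                # normalized torso-hand vector
    return tuple(t + dv * u for t, u in zip(torso, unit))
```

Within D the virtual hand tracks the physical hand exactly; beyond it, the arm "grows" quadratically, which is what lets the user reach distant objects directly.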
HOMER Implementation
- Requires a "torso position" t.
- Upon selection:
  - Detach the virtual hand from the tracker.
  - Move the virtual hand to the object position in the world coordinate system.
  - Attach the object to the virtual hand (without moving the object).
  - Get the physical hand position h and distance dh = dist(h, t).
  - Get the object position o and distance do = dist(o, t).
- Each frame:
  - Copy the hand tracker matrix to the virtual hand matrix (to set the orientation).
  - Get the current physical hand position hcurr and distance dhcurr = dist(hcurr, t).
  - Virtual hand distance: dvh = dhcurr * (do / dh).
  - Normalize the torso-hand vector.
  - Virtual hand position: vh = t + dvh * (hcurr - t)/dhcurr.

Selection Task Evaluation
- Poupyrev et al. (1998) evaluated the ray-casting and Go-Go interaction techniques with 5 levels of distance and 3 levels of object size.
- Bowman et al. (1999) evaluated the ray-casting, image-plane, and Go-Go techniques with 3 levels of distance and 2 levels of object size.
- Ray-casting and image-plane techniques are generally more effective than Go-Go, except that selection of very small objects can be more difficult with pointing.
- Ray-casting and image-plane techniques result in the same performance (2 DOF).
- The image-plane technique is less comfortable.
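The HOMER update above can be sketched as a small class that captures the scale factor do/dh at selection time and applies it to the current hand distance each frame (the class and parameter names are illustrative):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Homer:
    """Stores the torso position and the two distances captured on selection."""
    def __init__(self, torso, hand_at_selection, object_pos):
        self.t = torso
        self.dh = dist(hand_at_selection, torso)   # physical hand distance
        self.do = dist(object_pos, torso)          # object distance

    def virtual_hand(self, hand):
        """Per-frame update: scale the current hand distance by do/dh."""
        dhcurr = dist(hand, self.t)
        if dhcurr == 0.0:
            return tuple(self.t)
        dvh = dhcurr * (self.do / self.dh)         # scaled virtual hand distance
        unit = [(h - t) / dhcurr for t, h in zip(self.t, hand)]
        return tuple(t + dvh * u for t, u in zip(self.t, unit))
```

Because the scale factor is fixed at selection, moving the hand back to the body brings a distant object all the way in, and fully extending the arm returns it to its original distance.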
Object Positioning Task Evaluation
- Ray-casting is effective if the object is repositioned at a constant distance.
- Scaling techniques (HOMER, scaled-world grab) are difficult for outward positioning of objects: e.g., picking an object located within reach and moving it far away.
- If outward positioning is not needed, then scaling techniques can be effective.

Object Orientation Task Evaluation
- Setting a precise orientation can be very difficult.
- The shape of the objects is important.
- Orienting at-a-distance is harder than positioning at-a-distance.
- Techniques should be hand-centered.

Travel
- Goal of travel: setting the position (and orientation) of the user's viewpoint.
- Common travel techniques: physical locomotion, steering, target-based, route planning, manual manipulation, travel-by-scaling.
- Variables affecting user performance in travel: direction or target position, viewpoint orientation, velocity/acceleration, conditions of input (e.g., start, stop, continue).

Travel Classification
- Start to move.
- Indicate position: target-based, route-based, or continuous (position, velocity, acceleration).
- Indicate orientation.
- Velocity specification.
- Stop moving.
Travel Metaphors
Physical locomotion
- Walking & walking in place.
Physical locomotion interfaces
- Devices simulating walking (e.g., treadmills & bicycles).
- Other physical motion (e.g., magic carpet, Disney's river-raft ride).
(Figures: CirculaFloor, Sphere, and an omni-directional treadmill.)

Travel Metaphors
Steering
- Continuous control of the direction of motion.
- Gaze-directed steering.
- Pointing (a "flying" gesture).
- Physical device (e.g., steering wheel, flight stick).

Target-based
- Discrete specification of the end goal of the motion; the system then performs the movement between the two points.
- Point at an object, choose from a list, or enter coordinates.

Route-planning
- One-time specification of the path (the user specifies the end goal and also some intermediate path information).
- Place markers in the world, or move an icon on a map.
Travel Metaphors Gaze-directed Steering
Map-based Move viewpoint in direction of gaze User represented by icon on 2D map Cognitively simple, but doesn’t allow user to look to the Drag icon with stylus to new location on map side while traveling When released, viewpoint animated smoothly to new location Implementation Manual manipulation Each frame while moving: Manual manipulation of viewpoint in which the user’s hand Get head tracker information motions are mapped in some way to viewpoint motion Get gaze direction “Camera in hand” by transforming vector [0, 0, -1] in head coordinate system to v=[x,y,z] in world coordinate system Fixed object manipulation Normalize v Translate viewpoint by v * current_velocity Map-based Travel Map-based Travel
Map-based Travel
- The user is represented by an icon on a 2D map; drag the icon with a stylus to a new location; when released, the viewpoint is animated smoothly to the new location.
- Implementation – must know:
  - The map scale relative to the world: s.
  - The location of the world origin in the map coordinate system: o = (xo, yo, zo).
- On button press, if the stylus intersects the user icon, then each frame:
  - Get the stylus position in map coordinates: (x, y, z).
  - Move the icon to (x, y, 0) in map coordinates.
- On button release:
  - Get the stylus position in map coordinates (x, y, z) and move the icon to (x, y, 0).
  - Desired viewpoint: pv = (xv, yv, zv), where xv = (x - xo)/s, yv = (y - yo)/s, and zv = the desired height at (xv, yv).
  - Move vector: m = (xv - xcurr, yv - ycurr, zv - zcurr) * (velocity/distance).
  - Each frame, for (distance/velocity) frames: translate the viewpoint by m.
- (Figures: a 2D map used in the Virtual Gorilla Project; a 2D map view displayed on a PDA interface in Tangible Virtual Harlem; a 2D map control embedded on top of a table in Moyangsung.)
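The button-release math above can be sketched as follows; the height_fn parameter stands in for "the desired height at (xv, yv)" and is an illustrative assumption:

```python
import math

def map_to_world(stylus_xy, origin_xy, s, height_fn):
    """Convert a stylus position in map coordinates into the desired world
    viewpoint, given the map scale s and the world origin's map position."""
    xv = (stylus_xy[0] - origin_xy[0]) / s
    yv = (stylus_xy[1] - origin_xy[1]) / s
    return (xv, yv, height_fn(xv, yv))

def animate(current, target, velocity):
    """Translate the viewpoint by the same move vector m each frame, for
    distance/velocity frames, ending at the target; returns the path."""
    d = math.sqrt(sum((t - c) ** 2 for c, t in zip(current, target)))
    frames = max(1, round(d / velocity))
    m = tuple((t - c) / frames for c, t in zip(current, target))
    pos, path = list(current), []
    for _ in range(frames):
        pos = [p + dm for p, dm in zip(pos, m)]
        path.append(tuple(pos))
    return path
```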
Travel Task Evaluation
- Relative motion task: move to a point along the 3D line defined by the pointer.
- Spatial orientation task: remember the locations of objects within a maze while moving through it.
- Steering techniques have similar performance on absolute motion tasks.
- Non-head-coupled steering is better for relative motion.
- "Teleportation" can lead to significant disorientation.
- Environment complexity affects information gathering.
- The travel technique and the user's strategies affect spatial orientation.
- (Figures: the relative motion task; the spatial orientation task.)
- A study of "cross-task" techniques (i.e., requiring manipulation for travel) shows that manipulation-based travel techniques are efficient for relative motion.
- Steering techniques are best for naïve and primed search.
- Map-based techniques are not effective in unfamiliar environments, or when any precision is required.
- Manipulation-based techniques not requiring an object are efficient for search, but tiring (Bowman, 1999).
- (Figure: the yellow ring is the radius within which the user had to move to end the task.)
System Control
- Goals of system control: a catch-all for other types of virtual environment interaction; issuing a command to change the system mode and choose the system state; often composed of other tasks.
- Common types of system control techniques: menu systems, voice commands, gestures/postures, implicit control (e.g., picking up a new tool to switch modes), tools.

System Control Classification
- Graphical menu: 2D menu, 1-DOF menu, 3D widget, TULIP menu.
- Voice command: speech recognition, spoken dialogue system.
- Gestural command: gesture, posture.
- Tool: physical tool, virtual tool.

Floating 2D Menus
- A group of graphical menus.
- Can be seen as the 3D equivalent of 2D menus.
- Can occlude the environment.
- Uses 3D selection for a 1D task.
- Can be put away.

1-DOF Menus
- Menus using a circular object on which several items are placed.
- The correct number of DOFs for the task.
- Only one menu level at a time.
- Can be made in several forms: ring menus, sundials, spiral menus (a spiral-formed ring menu), rotary tool chooser.
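A 1-DOF ring menu can be sketched as a mapping from a single rotation angle to an item slot; the function name and angle convention are illustrative assumptions:

```python
def ring_menu_pick(items, angle_deg):
    """Map a hand-rotation angle to the ring-menu item at that position.
    The ring has one equal slot per item, so a single DOF fully controls
    which item is under the selection point."""
    slot = 360.0 / len(items)
    # Offset by half a slot so each item's slot is centered on its angle.
    index = int(((angle_deg % 360.0) + slot / 2.0) // slot) % len(items)
    return items[index]
```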
Radial Pop-Up Menus
- Menus using a circular radial layout.
- Translate the cursor to select.
- Can be hierarchical.

Fade-Up Menus
- Sundial: a pie menu with a 3D selector.
- The user rotates a "shadow stick" to occlude the desired segment.

TULIP (Three-Up, Labels In Palm) Menus
- A menu system using pinch gloves.
- The non-dominant hand controls the menus; the dominant hand controls the menu items.
- Only three items are available for selection at a time; other items appear in sets of three on the palm, with a "more" item linked to the next set.
- Goal: display all options but retain efficiency and comfort.

Menus Task Evaluation
- TULIP was compared with pull-down, pen, and pen-and-tablet menus.
- Pen-and-tablet was found to be faster.
- Users preferred TULIP, and TULIP had a higher comfort level.
2D Interaction in 3D Virtual Environments
- Quite useful for appropriate tasks (match the task and input DOFs).
- Can integrate seamlessly with 3D.
- If presence is important, the 2D interface should be embedded, not overlaid.
- Examples: interaction on the projection surface or viewplane; using a PDA for input.

Constraints
- Artificial limitations designed to help users interact more precisely or efficiently.
- Examples: snap to grid, intelligent objects, single-DOF controls.

Snap-Dragging in 3D
- Bier, 1990 – originally a desktop (2D) interface technique.
- Positions objects along gridlines.
- Allows arbitrary positioning of an object in a 3D world.
- Allows objects to be repeatedly placed in the same position precisely.
- Precision is limited to the granularity of the grid lines; smaller grid spacing leads to more finely tuned positioning.

Passive Haptic Feedback
- Props or "near-field" haptics.
- Increase presence and improve interaction.
- Examples: flight simulator controls, a pirate's steering wheel, an elevator railing.
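The snap-to-grid constraint described under Snap-Dragging above fits in a few lines; the spacing parameter is the "granularity of grid lines", and smaller values give finer positioning:

```python
def snap_to_grid(pos, spacing):
    """Snap each coordinate of a 3D position to the nearest gridline,
    so repeated placements land on exactly the same point."""
    return tuple(round(c / spacing) * spacing for c in pos)
```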
Two-handed Interaction
- Symmetric vs. asymmetric.
- Dominant vs. non-dominant hand.
- Design principles:
  - The non-dominant hand provides the frame of reference.
  - The non-dominant hand is used for coarse tasks, the dominant hand for fine-grained tasks.
  - Manipulation is initiated by the non-dominant hand.

Hand-held Indirect – Pen & Tablet Interaction
- Involves 2D interaction, two-handed interaction, constraints, and props.
- Can be put away.
- Constrained surface for input.
- Combines 2D/3D interaction.
- Handwriting input.
- Can use any type of 2D interface, not just menus.
Hand-held Indirect – Virtual Notepad
- A tool for writing, taking notes, and adding annotations or comments in virtual environments.
- Uses a spatially tracked, pressure-sensitive graphics tablet (Wacom technology).
- Handwriting as a means of interacting in immersive virtual environments.

Hand-held Indirect – Transparent Pad
- A transparent prop for the Virtual Table.
- The pad is tracked, and graphics are projected on the primary display but appear as if they are on the surface of, and even above, the pad.
- Tool and object palette; window tools; through-the-plane tool; volumetric manipulation.
Virtual Inertia
- Ruddle and Savage, 2002.
- Virtual objects are "weightless": they can be moved rapidly, but rapid movements can be error-prone.
- Inertia slows the movement of the object.
- Useful when high precision is needed (to avoid collisions).
- Increases the minimum time to complete a task.
- Not helpful when the task is straightforward and easy.

Reference
Bowman, D., Kruijff, E., LaViola, J. J., and Poupyrev, I., 3D User Interfaces: Theory and Practice. Addison-Wesley, 2004.