
A Zoomable User Interface for Presenting Hierarchical Diagrams on Large Screens


C. Geiger1, H. Reckter2, R. Dumitrescu3, S. Kahl3, and J. Berssenbrügge3

1 Düsseldorf University of Applied Sciences, Düsseldorf, Germany, [email protected]
2 Harz University of Applied Sciences, Wernigerode, Germany, [email protected]
3 Heinz Nixdorf Institute / University of Paderborn, Paderborn, Germany, {Roman.Dumitrescu, Sascha.Kahl, Jan.Berssenbruegge}@hni.uni-paderborn.de

Abstract. We present the design, implementation and initial evaluation of a zoomable interface dedicated to presenting a large hierarchical design model of a complex mechatronic system. The large hierarchical structure of the model is illustrated by means of a visual notation and consists of over 800 elements. An efficient presentation of this complex model is realized by means of a zoomable user interface that is rendered on a large wall with a high resolution (3840 x 2160). We assume that this visualization set-up, combined with dedicated interaction techniques for selection and navigation, reduces the cognitive workload of a passive audience and supports the understanding of complex hierarchical structures. To validate this assumption we have designed a small experiment that compares the traditional visualization techniques PowerPoint and paper sheets with this new presentation technique.

1 Introduction

Zoomable User Interfaces (ZUIs) organize information in space and scale, and use zooming and panning as their main interaction techniques. A number of projects have already demonstrated the successful application of ZUIs in domains like web browsing, storytelling or image viewing. User studies have examined navigation patterns and the usability of the interaction techniques of zoomable user interfaces. These are mainly based on zooming and panning techniques, because these interactions seem to be very intuitive for exploring complex information spaces. Colin Ware observed in [12] that users in real life typically zoom by moving their bodies, and thus their views, mostly forward and backward and seldom sideways; panning is mostly achieved by rotating the head sideways and up/down. Thus, ZUIs seem to support an intuitive and reality-based interaction style. While significant research on this topic exists and the first commercial and open-source applications have been released, most ZUIs concentrate on information visualization tasks following Shneiderman's InfoVis mantra "Overview first, zoom and filter, then details on demand". These tasks have largely been examined for users that actively explore the information space. On the other hand, there are fewer results on how the design of ZUIs affects a passive audience.

In this research project, we are mainly interested in how a group of people can understand a complex information structure if it is presented by large hierarchical diagrams in a guided presentation. Our approach to visualizing the huge information space is twofold: a large screen set-up with 4*HD resolution (3840 x 2160) and the use of zoomable interaction techniques for selection and navigation within the diagrams. In this paper we present the effect of dedicated zoomable interaction techniques on a large screen for the presentation of large hierarchical information spaces. We visualized a diagram comprising a very large number of elements used to model the design of an innovative railway system. This 2D diagram was visualized on a Virtual Reality power wall and used during a presentation with a passive audience and a presenter who interacts with the diagram.

2 Zoomable Interfaces: Basics and Related Work

In 2000, Raskin proposed that the zooming interface paradigm could replace the browser, the desktop metaphor, and the traditional operating system [8]. Over the past thirty years the WIMP paradigm has dominated the 2D world. Today, the advent of a new generation of user interfaces employing large multi-screen, multi-touch, multi-view or mobile 2D interfaces calls for new interface paradigms. Among them, the Zoomable User Interface (ZUI) approach, sometimes called multi-scale user interfaces, is of particular interest when the user needs to visualize large information spaces. ZUIs use the metaphor of an infinite two-dimensional plane to represent the user's workspace, together with the ability to view this plane at an arbitrarily high level of detail. In practice, the metaphor's infiniteness is often restricted by technical and conceptual constraints that limit the implemented zoom and pan interaction techniques to a suitable range of resolution. The user is able to change the view of her workspace with these techniques. Advanced interaction is provided by semantic zooming, which introduces different kinds of representation based on the level of zooming. For example, zooming into a hierarchical information structure (e.g. a system model of a complete car) presents details of each subsystem (e.g. the motor engine).

The first ZUI system, Pad, is credited to Perlin and Fox, who published their work in 1993 [6]. The Pad system embodied a single shared workspace, where any part could be visible at any time. Pad used semantic zooming as a novel concept and featured a magic lens metaphor [4], a concept that allows multiple views with different representations within a single application. Objects are positioned and scaled on a plane and the user navigates using zooming and panning. ZUIs have used these techniques ever since, proposing that this approach is close to the user's cognitive abilities. Furnas and Bederson worked on formal aspects of multi-scale interfaces and developed a visualization technique called space-scale diagrams [5]. These represent a spatial world and its different magnifications and allow the direct visualization and analysis of important scale-related issues of user interfaces. Scale is explicitly described using the vertical axis of the diagram, so an object is represented at different scales. Perlin and Meyer combined ZUIs with nested UI widgets and developed recursively nested user interfaces [7]. The major goal of this approach was to present an easy-to-navigate user interface by a layered structure of controls. With zoomable nested GUIs, widgets can be hierarchically arranged at arbitrary depth. Bederson developed the most prominent ZUI approach, the Jazz toolkit [3]. It builds on ideas of Pad and its successor Pad++ [2], and adds a scene graph structure to simplify the design of non-trivial ZUIs. Jazz was written in Java and allowed embedding a ZUI component inside regular Java program code. This way, 2.5D-space applications could be created easily. A number of applications were built using Jazz and its successor Piccolo [10]. The image browser PhotoMesa provides the user with a zoomable view into a structure containing images [1]. Using the concept of treemaps, an automated screen space management was implemented. Bederson also developed CounterPoint, a presentation tool that enhances PowerPoint with new ZUI features like slide sorting and multiple paths.
With CounterPoint it was possible to give one single presentation to different audiences or within different time constraints [9]. While many ZUI systems focus on domain-specific applications, ORRIL is a framework that supports the design of ZUIs in general. Based on a set of requirements for ZUI design that is structured into four groups (display, data, interaction, results), ORRIL implements a component-based approach using objects, regions, relations and interface logic as the basic building block types of each ZUI. The authors presented an example that used ORRIL to design a ZUI for a media player [11]. ZUIs for representing abstract graph structures like UML charts have been presented by Frisch et al. [15]. They address the problem of visualizing the global structure of complex UML diagrams as well as the detailed relationships of individual UML elements, and propose the use of semantic zooming and other intuitive interaction techniques to ease the navigation between different diagrams.

Besides a number of ZUI toolkits and dedicated ZUI applications, some authors have studied the design and usability of zoomable interaction techniques in general. Hornbæk et al. designed and evaluated navigation patterns and usability of ZUIs with and without an overview [16]. Subjects preferred the overview interface and rated it significantly higher. On the other hand, the work showed that ZUIs might eliminate the need for overviews in some cases. Recently, Jetter et al. have introduced the ZOIL user interface paradigm [14]. This approach aims at unifying all types of local and remote information objects with their functionality and their mutual relations in a single workspace. ZOIL, the zoomable object-oriented information landscape paradigm, is based on five design principles: (1) object-oriented user interfaces, (2) semantic zooming, (3) nested information visualization, (4) the information space as an information landscape, and (5) nomadic cross-platform user interfaces. The presented UI paradigm was applied to personal information management tasks.

3 Application: System Design in Mechanical Engineering

Modern mechanical engineering products are characterized by a high degree of information and communication technology. This is aptly expressed by the term "mechatronics", which denotes the symbiotic cooperation of mechanics, electronics, control engineering and software engineering to improve the behaviour of a technical system. Modern automobiles, machine tools or airplanes are prominent examples of mechatronic systems. The conceivable development of communication and information technology opens up new perspectives, which move far beyond current standards of mechatronics: today mechatronic systems have an inherent partial intelligence. The behaviour of these systems is characterized by the communication and cooperation of intelligent subsystems that can optimize themselves. This new feature enables advanced mechatronic systems that have the ability to react autonomously and flexibly to changing operating conditions [17]. The functionality of self-optimizing systems leads to an increased design complexity and requires an effective cooperation and communication between developers from different domains throughout the development process. Established design methodologies laid the foundation to meet these challenges, but need to be fundamentally extended and supported by domain-spanning methods and tools.

The development of self-optimizing systems can basically be divided into two main phases: the domain-spanning conceptual design and the domain-specific "concretization". Within the conceptual design, the basic structure and the operation of the system are defined and the system is structured into modules that form logical and functional units. These units can be independently developed, tested, maintained and, if necessary, exchanged later on. All results of the conceptual design are specified in the so-called "principle solution". Based upon the principle solution, the subsequent domain-specific "concretization" is planned and realized. The term "concretization" describes the domain-specific design of the technical system. The aim of the concretization is the complete description of the system by means of the construction structure and the component structure. This results in fairly complex system models. Communication and cooperation between all stakeholders involved is essential for a successful and efficient development of mechatronic and self-optimizing systems. Within the conceptual design phase, domain-spanning development tasks and their results have to be visualized so that developers can further elaborate the system design in a cooperative way. In the concretization phase, developers work independently of others on modules in different domains. Their specific development tasks need to be synchronized with those of other domains or modules. Therefore, domain- and module-spanning coordination processes need to be defined and communicated between developers by means of appropriate visual tools. We identified the following user tasks.

Overview and Details: Developers have to synchronize their work with other activities in the development process and need to identify which process steps relate to others and which can be refined and executed independently. This requires a visual overview of the complete development process as well as a focus on specific and detailed information of individual development tasks and their results.

Search, Filter and Results: The large number of process steps requires an efficient search mechanism. Arbitrary types of process elements should be selectable by filter operations, and results should be represented and accessed efficiently.

Interactive Navigation and Presentation: Communication between stakeholders is best realized by means of an interactive visualization of the complete system model. Users must be able to efficiently navigate through the complex model and present selected elements at an arbitrary level of detail. Moreover, it should be possible to present a sequence of user-defined views of the system model, for example to discuss a specific workflow.

We chose a complex application example by selecting the complete design model of an innovative railway prototype called "Neue Bahntechnik Paderborn/RailCab" (http://www-nbp.uni-paderborn.de). The system model defines autonomous vehicles (RailCabs) that provide transport for both passengers and cargo. RailCabs drive on demand and reduce energy consumption by forming convoys proactively. The complete development process of the RailCab comprises 850 development steps including about 900 development objects, e.g. kinematic models, dynamic models, controller models, state charts, lists of requirements, test results etc.
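To make these tasks concrete, the system model can be thought of as a tree of process steps, each carrying development objects of the types listed above. The following C# fragment is only an illustrative guess at such a data model (the paper does not describe its internal representation; all names are invented here); it sketches the kind of structure the overview and filter tasks operate on.

```csharp
// Hypothetical minimal data model for the hierarchical system model:
// ~850 process steps with ~900 attached development objects. Names are invented.
using System.Collections.Generic;

public enum ObjectType { KinematicModel, DynamicModel, ControllerModel, StateChart, Requirement, TestResult }

public sealed class DevelopmentObject
{
    public string Name;
    public ObjectType Type;
}

public sealed class ProcessStep
{
    public string Name;
    public List<ProcessStep> SubSteps = new List<ProcessStep>();          // hierarchy
    public List<DevelopmentObject> Results = new List<DevelopmentObject>();

    // Search/filter task: collect every step that owns a result of the given type.
    public IEnumerable<ProcessStep> FilterByType(ObjectType type)
    {
        if (Results.Exists(o => o.Type == type))
            yield return this;
        foreach (var sub in SubSteps)
            foreach (var hit in sub.FilterByType(type))
                yield return hit;
    }
}
```

A filter over roughly 850 steps is then a simple recursive traversal, which is cheap enough to drive an interactive result list.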

4 Visualization Techniques

Previous visualizations of the RailCab prototype model have been presented using a very large paper sheet (3.5 m x 1.5 m) or a large PowerPoint presentation. Thus, this design model is a suitable test case for a zoomable user interface application. We defined a presentation use case for this scenario: a group of 7 to 20 people participates in a presentation meeting. The presenter illustrates central ideas of the system model by zooming and panning through the model at different levels of detail. After 10 minutes of presentation the audience can ask questions, which the presenter answers interactively by navigating through the model. The complete model was too large to fit on the VR screen (4.7 x 2.64 m, see the next section about the VR set-up). Therefore, we implemented a ZUI with zoom, pan, overview, and animated transitions to arbitrary model positions. Semantic zooming with different abstraction layers was used to cope with the complexity of the diagram during the presentation. The implemented ZUI in figure 1 shows details of the selected diagram.

Fig. 1. ZUI and GUI of the presentation application

Coloured scrollbars indicate the maximum size in horizontal and vertical direction and provide an interactive overview. The position of the detailed ZUI view can be easily changed with classical zoom and pan operations as well as with the GUI scrollbars. The presenter can modify the diagram's level of abstraction by zooming in or out or by means of the GUI elements at the top right. Clicking on a diagram element centers the view and focuses on the selected element. A focus rectangle can alternatively be used to select and focus on a group of elements. Each interaction that results in a movement is automatically animated with correct transitions between abstraction levels and implements a semantic level of detail. Ease-in and ease-out at the beginning and end of each animation reduce the cognitive load of the audience when following fast movements through the diagram. The dock in the lower UI area can be switched off to focus only on the ZUI view. For filter operations the dock provides filter types at the lower left side. Clicking on an element type filters the diagram for all positions where the selected element type occurs. It is possible to pan through the results and select one of them. The ZUI view will automatically perform an animated transition to the selected point and adjust the zoom level. To further support the presenter we integrated a storyboard that allows creating and viewing a simplified linear tour through the diagram. There are two modes for this feature. The editor mode allows creating a sequence of positions that are visited in linear order.
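The eased, animated transitions can be pictured with a small sketch. The following C# fragment is our own minimal illustration, not the authors' code (all type and member names are invented); it interpolates the view between two position/zoom pairs using a smoothstep easing curve, with the zoom interpolated logarithmically so that the perceived zoom speed stays constant.

```csharp
// Hypothetical sketch of an eased camera transition between two ZUI views.
// Type and member names are invented; this is not the original implementation.
using System;

public struct ZuiView
{
    public float X, Y;    // centre of the visible diagram region (world units)
    public float Zoom;    // scale factor; larger means more detail

    public static ZuiView Lerp(ZuiView a, ZuiView b, float t) => new ZuiView
    {
        X = a.X + (b.X - a.X) * t,
        Y = a.Y + (b.Y - a.Y) * t,
        // Interpolating the zoom logarithmically keeps the perceived zoom speed constant.
        Zoom = (float)(a.Zoom * Math.Pow(b.Zoom / a.Zoom, t))
    };
}

public sealed class CameraAnimator
{
    private readonly ZuiView _from, _to;
    private readonly float _duration;   // seconds
    private float _elapsed;

    public CameraAnimator(ZuiView from, ZuiView to, float durationSeconds)
    {
        _from = from;
        _to = to;
        _duration = durationSeconds;
    }

    // Smoothstep gives the ease-in/ease-out profile described above.
    private static float EaseInOut(float t) => t * t * (3f - 2f * t);

    // Called once per rendered frame; returns the view to draw.
    public ZuiView Step(float deltaSeconds)
    {
        _elapsed = Math.Min(_elapsed + deltaSeconds, _duration);
        return ZuiView.Lerp(_from, _to, EaseInOut(_elapsed / _duration));
    }

    public bool Finished => _elapsed >= _duration;
}
```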

Fig. 2. Storyboard mode with 3 storyboard screens, showing screens 1 and 3

The presenter activates this mode by clicking on a GUI element in the dock. Then she navigates to a desired position in the ZUI view and adds a "screenshot" with position and zoom level parameters to the storyboard list, or deletes screens in the sequence. In presentation mode the presenter is able to navigate freely through the information space. However, if she wants to proceed with the previously defined storyboard she just presses the forward/backward buttons and the system calculates an animated transition to the next/previous storyboard item. Figure 4 shows a simple storyboard defined in the dock and its presentation of storyboard items. The described presentation scenario includes a presentation on a large VR wall. Therefore, it is necessary that the presenter stands in front of the audience and is able to respond to questions by pointing directly to arbitrary elements in the diagram. As a suitable input device we selected the "Go Pro" mouse from Thompson, which basically is a cable-free gyroscopic mouse. Equipped with five buttons, it was possible to map all interaction techniques, including ZUI view navigation, GUI interaction and dock manipulation, to this device. The system model was created by mechatronic engineers using Visio and dedicated shape palettes. The presentation engine imports the XML-based Visio file and automatically creates a suitable screen layout. A semantic level of detail is provided, i.e., system elements are rendered with three different levels of complexity (see figure 3). Based on the user-defined zoom level the presentation engine selects the appropriate abstraction details. The layout is computed in real time and it is possible to switch to a 3D view that presents the diagram in a perspective view (see figure 4, right). For each diagram element it is possible to link a Full-HD resolution image, an HD movie file or a transition to another diagram element. The application was implemented in OpenGL and .NET with C# as the programming language.
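The storyboard described above is essentially an ordered list of bookmarked views plus a cursor. As a rough, hypothetical sketch only (the paper does not show its data structures; all names are ours), it could look like the following; the view returned by Forward/Backward would then be fed into an eased camera transition like the one sketched earlier.

```csharp
// Illustrative storyboard logic only; types and members are invented,
// not taken from the original implementation.
using System.Collections.Generic;

public sealed class Storyboard
{
    // A "screenshot" is just a bookmarked view: centre position plus zoom level.
    public readonly record struct Screen(float X, float Y, float Zoom);

    private readonly List<Screen> _screens = new();
    private int _current = -1;

    // Editor mode: append the current view as the next screen, or remove one.
    public void AddScreen(Screen view) => _screens.Add(view);
    public void DeleteScreen(int index) => _screens.RemoveAt(index);

    // Presentation mode: forward/backward return the view the camera should
    // animate to next, or null when the start/end of the tour is reached.
    public Screen? Forward()  => Move(+1);
    public Screen? Backward() => Move(-1);

    private Screen? Move(int delta)
    {
        int next = _current + delta;
        if (next < 0 || next >= _screens.Count) return null;
        _current = next;
        return _screens[_current];
    }
}
```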

Fig. 3. The three abstract levels of the VR ZUI application
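The choice between the three abstraction levels shown in figure 3 can be pictured as a simple threshold function on the zoom factor. The following is a minimal, hypothetical sketch; the concrete threshold values are invented for illustration, since the paper does not state them.

```csharp
// Hypothetical mapping from zoom factor to the three rendering levels of figure 3.
// Threshold values are made up; the real engine derives its layout from the
// imported Visio model and the user-defined zoom level.
public enum DetailLevel { Icon, Compact, Full }

public static class SemanticZoom
{
    public static DetailLevel Select(float zoom)
    {
        if (zoom < 0.5f) return DetailLevel.Icon;     // far out: element shown as a coloured icon
        if (zoom < 2.0f) return DetailLevel.Compact;  // medium: element name plus type
        return DetailLevel.Full;                      // close up: full diagram content
    }
}
```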

5 Information Visualization on a Virtual Reality Power Wall

We run our ZUI prototype on a high-resolution multi-channel projection system consisting of four projection screens, where the left, right, and floor projections provide a resolution of 1920 x 2160 pixels each, and the center projection provides 3840 x 2160 pixels. The center screen has dimensions of 4.7 m by 2.64 m. The system can display images in mono or passive stereo. The floor, left, and right projections each use 4 Full HD beamers for display, while the center screen is fired by 2 special beamers that provide a resolution of 4 x Full HD each. This results in a resolution of roughly 20 million pixels in total for all projection screens. All 14 projectors are driven by 5 PCs in total. Each PC is connected to an Nvidia Quadro Plex system that feeds 4 Full HD channels, so we have one PC/Quadro Plex unit each for the center screen left eye, the center screen right eye, the left screen, the right screen, and the floor projection. The high-resolution projection system is used to visualize engineering data like very complex 3D CAD models. Due to the high resolution of the system, engineers can focus on small parts and sub-components while, at the same time, not losing the overview of the whole assembly. This allows more efficient design review processes in product development. Other applications, e.g. a night driving simulation, provide a highly detailed visualisation of the lighting characteristics of virtual prototypes of automotive headlights. Here, engineers can evaluate even small details in the illumination of the road ahead of a vehicle. More research-oriented applications focus on efficient randomized rendering techniques that are capable of displaying very large polygon datasets of up to 60 GB or 10^14 triangles.

To display the ZUI prototype presented in this paper, we use the center projection screen of the system in monoscopic projection (see figure 6, left). To drive this screen, we use one PC connected to one Quadro Plex system, which fires the 4 segments (each in Full HD resolution) of the center screen. With 3840 x 2160 pixels projected onto the 4.7 m by 2.64 m center screen, we end up with a pixel size of 1.2 mm by 1.2 mm. For our projection screen dimensions this means that, at 5 m viewing distance in the middle of the centre screen, the viewer's field of accurate colour perception covers the whole projection screen, while the human eye is still capable of resolving every pixel on the projection screen. There is enough room in front of the projection screen to group a small number of users in front of it and provide optimal viewing conditions for all. The material of the projection screen itself provides a large viewing angle, which ensures a good colour and contrast reproduction even under less optimal viewing angles from positions outside the sweet spot. So all users are capable of overlooking the whole projection area, while also being able to enjoy the full resolution the system provides for displaying graphical items within the ZUI.

Fig. 4. User presenting the ZUI on a VR wall

This is very important when it comes to reading small text or detecting small graphical elements within the ZUI. There is no need for the user to move closer to or further away from the projection screen in order to have a better view of the screen. Interaction and navigation within the ZUI is done using a wireless gyro mouse. The user stands in front of the projection screen, acting as a demonstrator for the audience, and navigates through the hierarchical diagrams using the ZUI. When standing in the sweet spot, the audience has an optimal view of the screen, so zooming interaction is only needed to show more details within the hierarchical diagrams that are to be presented within our application context.
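As a side note, the 1.2 mm pixel size quoted above is a straightforward consequence of the centre-screen geometry and resolution (our back-of-the-envelope check, not part of the original text):

\[ \frac{4700\ \mathrm{mm}}{3840\ \mathrm{px}} \approx 1.22\ \mathrm{mm}, \qquad \frac{2640\ \mathrm{mm}}{2160\ \mathrm{px}} \approx 1.22\ \mathrm{mm}. \]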

6 Test Design and Preliminary Evaluation Results

We designed a small experiment to validate our hypothesis that a zoomable user interface helps users to understand the presentation of a complex system model on a VR wall and reduces the cognitive effort. However, due to the prototypical state of our current system, the preliminary results of the experiment should help to guide the iterative development process and validate the overall evaluation design rather than confirm our hypothesis. The confirmative evaluation and analysis is left to future work. In the experiment, a presenter explained details of the complex RailCab system model introduced in section 3. The presenter used three alternative visualizations. A large paper sheet model illustrated the complete system design; the sheet is 3.0 m by 1.5 m and mounted on a wall. The second variant is a PowerPoint presentation that was successfully used during lectures and project meetings. The third variant is the ZUI presentation with the interaction techniques described in the previous sections. The audience, mechanical engineering students in their final year, was split up into six groups and each group was assigned one of the six possible visualization sequences. The presenter was always the same and explained the model using the available presentation techniques of each type of illustration. After each presentation the students were asked to fill out a questionnaire. This includes a short test with questions about the system model and evaluates whether the subjects have understood the technical content of the presentation. Additionally, the students filled out a short qualitative review with questions on the suitability of each medium to present complex information, the presentation style of the teacher and their opinion about the best way to present the model. We selected six groups of two students each (11 male, 1 female), bachelor students of mechanical engineering, aged between 23 and 35. The experiment was designed as a within-groups experiment and each group participated in all three presentations, but in a different order. After each presentation the subjects answered the test questions and filled out the qualitative review. Although not statistically significant, we observed that test results (the number of correct answers) were better if subjects participated in the ZUI power wall presentation earlier. The qualitative review showed that participants had some problems following the unusual, non-linear presentation with animated transitions. However, the ZUI power wall presentation was rated as the best medium, and the amount of detailed information provided was significantly larger than with the other presentation variants. The presenter's style was rated equally as "good" in all treatments. Based on the preliminary results we significantly refined and extended our test design. Summarizing, the preliminary results indicated that even with a reduced data set as used in this test, the new presentation medium VR wall performed similarly to the well-known presentation forms of PowerPoint and paper sheet. Given the fact that the VR wall is designed to present large information spaces with deep hierarchical structures, we plan to validate our hypothesis in future experiments.

7 Conclusion, Future Work, Acknowledgement

The use of a zoomable user interface and dedicated interaction techniques helped to present an overview of a complex system model, to zoom to different detail levels with additional support for semantic abstraction, and to filter and retrieve interesting elements. A storyboard mode allows defining a sequential presentation path through the complete model. The application has been implemented on a VR wall and runs efficiently in real time. We have designed an experiment that should validate our hypothesis that ZUI interaction provides benefits for presenting complex hierarchical information. Future work will complete the prototypical implementation and conduct a full usability study to evaluate the hypothesis about the suitability of ZUIs for complex system design. We thank Stefanie Müller, Brian Schimmel, Stefan Schulze and Peter Weinreich for their excellent work on the implementation of the ZUI project. This contribution was partially developed and published in the course of the Collaborative Research Centre 614 "Self-Optimizing Concepts and Structures in Mechanical Engineering" funded by the German Research Foundation (DFG) under grant number SFB 614. One author is supported by the International Graduate School of Dynamic Intelligent Systems.

References

1. Bederson, B.B.: PhotoMesa: A Zoomable Image Browser Using Quantum Treemaps and Bubblemaps. In: Proceedings of User Interface Software and Technology (UIST 2001), pp. 71–80. ACM Press, New York (2001)
2. Bederson, B.B., Hollan, J.D., Perlin, K., Meyer, J., Bacon, D., Furnas, G.W.: Pad++: A Zoomable Graphical Sketchpad for Exploring Alternate Interface Physics. Journal of Visual Languages and Computing 7, 3–31 (1996)
3. Bederson, B.B., Meyer, J., Good, L.: Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java. In: Proceedings of User Interface Software and Technology (UIST 2000), pp. 171–180. ACM Press, New York (2000)
4. Bier, E., Stone, M., Pier, K., Buxton, W., DeRose, T.: Toolglass and Magic Lenses: The See-Through Interface. In: Proceedings of Computer Graphics (SIGGRAPH 1993), pp. 73–80. ACM Press, New York (1993)
5. Furnas, G.W., Bederson, B.B.: Space-Scale Diagrams: Understanding Multiscale Interfaces. In: Proceedings of Human Factors in Computing Systems (CHI 1995), pp. 234–241. ACM Press, New York (1995)
6. Perlin, K., Fox, D.: Pad: An Alternative Approach to the Computer Interface. In: Proceedings of Computer Graphics (SIGGRAPH 1993). ACM Press, New York (1993)
7. Perlin, K., Meyer, J.: Nested User Interface Components. In: Proceedings of User Interface Software and Technology (UIST 1999), pp. 11–18. ACM Press, New York (1999)
8. Raskin, J.: The Humane Interface: New Directions for Designing Interactive Systems. Addison-Wesley, Reading (2000)
9. Good, L., Bederson, B.B.: Zoomable User Interfaces as a Medium for Slide Show Presentations. In: IEEE Int. Symposium on Information Visualization, p. 1 (2002)
10. Bederson, B.B., Grosjean, J., Meyer, J.: Toolkit Design for Interactive Structured Graphics. IEEE Transactions on Software Engineering 30(8) (2004)
11. Bennet, M., Cummins, F.: ORRIL: A Simple Building Block Approach to Zoomable User Interfaces. In: 8th Int. Conf. on Information Visualisation (IV 2004) (2004)
12. Ware, C.: Information Visualization: Perception for Design. Morgan Kaufmann, San Francisco (2004)
13. Gundelsweiler, F., Memmel, T., Reiterer, H.: ZEUS - Zoomable Explorative User Interface for Searching and Object Presentation. In: HCI International 2007, Beijing (2007)
14. Jetter, H.-C., König, W.A., Gerken, J., Reiterer, H.: ZOIL - A Cross-Platform User Interface Paradigm for Personal Information Management. In: Personal Information Management (PIM 2008), CHI 2008 Workshop, Florence, Italy (2008)
15. Frisch, M., Dachselt, R., Brückmann, T.: Towards Seamless Semantic Zooming Techniques for UML Diagrams. In: Proceedings of the 4th ACM Symposium on Software Visualization (SoftVis 2008). ACM Press, New York (2008)
16. Hornbæk, K., Bederson, B.B., Plaisant, C.: Navigation Patterns and Usability of Zoomable User Interfaces with and without an Overview. ACM Transactions on Computer-Human Interaction 9(4) (2002)
17. Henkler, S., Hirsch, M., Kahl, S., Schmidt, A.: Development of Self-optimizing Systems: Domain-spanning and Domain-specific Models Exemplified by an Air Gap Adjustment System for Autonomous Vehicles. In: ASME Int. Design Engineering Technical Conferences and Computers and Information in Engineering Conference, New York, USA, August 3-6 (2008)