Eindhoven University of Technology

MASTER

Interactive printer simulation visualization

Li, S.H.

Award date: 2010

Link to publication

Disclaimer This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.

Interactive Printer Simulation Visualization

Siu Hong Li

June 2010

Abstract

For developing and testing embedded control software at Océ, Software-in-the-Loop simulations are used to allow testing in earlier phases of development. These simulations produce large log files in an unstructured format, which have to be examined to analyze the behavior of the control software and to test the control software itself.

In a previous project, a toolbox for visualization of the simulation data was developed. In this project, we go beyond visualization alone and propose a computational steering application, which makes it possible to manipulate the simulation from the visualization environment. This thesis describes the architecture and design of the visualization, extended with the ability for the user to instantly send feedback back to the simulation, thus creating a computational steering environment. While computational steering is generally used for computationally intensive and distributed systems, we apply it here to embedded software testing.

We demonstrate the effectiveness of our approach with a number of realistic scenarios. The Software-in-the-Loop simulations can be manipulated, in real time, using an interactive visualization of the simulation. This results in a visual, responsive way of testing embedded software, applicable in early phases of embedded software development, and shows how computational steering can be used for embedded software testing.

Océ Technologies BV – Technische Universiteit Eindhoven

Acknowledgements

I would like to thank my project supervisors Michel Westenberg and Lou Somers for their input, guidance and support during this project.

I would also like to thank Klemens Schindler and Amar Kalloe for their input during the project.

Furthermore I would like to thank the following persons for help and expertise offered during this project: Werner de Kort, Ruud Jacobs, Hank van Bekkem, Harald Schwindt, Joost Janse, Christian Lamit, Dennis Ebben and Sander Hulsenboom.

Siu Hong Li

Contents

1 Introduction 6

2 Analysis 7
2.1 Domain Analysis ...... 7
2.1.1 About Océ Technologies R&D ...... 7
2.1.2 Printer development ...... 7
2.2 Current Situation ...... 10
2.3 Software Environment ...... 10
2.3.1 Software In the Loop (SIL) ...... 11
2.3.2 Embedded software (ESW) ...... 11
2.3.3 Remote Control (RC) ...... 11
2.3.4 Test Executor (TE) ...... 11
2.3.5 Universal Tester (UT) ...... 11
2.3.6 MoBasE Data Model ...... 11
2.3.7 Visualization ...... 12
2.3.8 High Bandwidth Logger ...... 12
2.4 Stakeholders ...... 12
2.4.1 Embedded Software Engineer ...... 12
2.4.2 Integrator ...... 12
2.4.3 Test Engineer ...... 13
2.4.4 Timing Designer ...... 13
2.4.5 Machine Architect ...... 13
2.4.6 Physical Model Designer ...... 14
2.5 Previous work ...... 14
2.5.1 Computational Steering Environment ...... 14
2.5.2 SILVis ...... 14
2.6 Goal ...... 15

3 Requirements 16
3.1 Functional requirements ...... 16
3.1.1 Time-related requirements ...... 16
3.1.2 Part-related requirements ...... 16
3.1.3 Simulated Concept requirements ...... 17
3.1.4 Configuration/storage requirements ...... 17
3.2 Non-functional requirements ...... 18

4 Architecture 19
4.1 Used technology ...... 19
4.2 Components ...... 19
4.2.1 Recorder ...... 19
4.2.2 Data Model ...... 20
4.2.3 VisualizationHandlerCollection ...... 20


4.2.4 VisualizationMainFrame ...... 21
4.2.5 CEGUIHandlerCollection ...... 21
4.2.6 Feedback ...... 21
4.2.7 Configuration ...... 21
4.3 Architectural Design Decisions ...... 22
4.3.1 Dynamic data in the data model ...... 22
4.3.2 Picking Mechanism ...... 22

5 Design 24
5.1 ...... 24
5.1.1 Views ...... 24
5.1.2 wxWidgets Layer ...... 25
5.1.3 CEGUI Layer ...... 25
5.1.4 3D scene Layer ...... 28
5.1.5 Graphical User Interface Design Decisions ...... 28

6 Results 32
6.1 Use cases ...... 32
6.1.1 Starting the visualization with simulation and tool environment ...... 34
6.1.2 Configuration of selectable/viewable items and GUI ...... 34
6.1.3 Heat model testing ...... 34
6.1.4 Error handling test ...... 35
6.1.5 Error recovery test ...... 35
6.2 Verification ...... 36
6.2.1 Functional requirements verification ...... 36
6.2.2 Non-functional requirements verification ...... 38

7 Conclusion 39

8 Future Work 40
8.1 Recording and replaying saved sessions ...... 40
8.2 Extend the interface with the ability to send feedback to the steering interface of the process simulation ...... 40
8.3 Add an edit mode that allows the user to change the simulated machine configuration ...... 40
8.4 Create a more direct mechanism to communicate with the Test Executor and load/save test cases ...... 40
8.5 POI management system for more complex scenarios ...... 41

Chapter 1

Introduction

Océ is a company where high performance printing systems are developed and manufactured. These systems consist of both embedded software and hardware, which are developed concurrently once a design has been made. It is desirable to test the software during development, but the hardware on which it should run is not always available for testing. Therefore, the testing platforms called "Software in the Loop" and "Hardware in the Loop" have been developed for testing the software. These platforms provide a virtual machine (the part called the "plant" in control theory) for the software to communicate with. In "Software in the Loop" the plant is simulated in software, whereas in "Hardware in the Loop" this is done in hardware.

The state of the plant is not represented in a way that is easy to understand. During tests, the states of elements of the platform are dumped in long log files that are hard to read. For a better understanding of what is happening during a simulation, a visualization framework is being developed which directly visualizes the state of the plant while the simulation is running.

By adding the ability to manipulate the state of the plant, simulations can be done more efficiently. In the Software-in-the-Loop simulation used at Océ, this has been partially implemented in an interface which allows an external party to change certain variables in the Software-in-the-Loop simulation. Currently, this is a rather user-unfriendly command-based interface. The goal of this project is to design and implement a rich graphical user interface on top of this interface and to extend the visualization framework for this purpose.

In the following chapters we describe the current situation and domain in more detail. We describe the stakeholders and goals, and from those we work out the requirements. Furthermore, we describe the architecture, the design and the design decisions made during this project, and discuss the final implementation. Finally, we discuss the results of this project and provide some points for future work.

Chapter 2

Analysis

2.1 Domain Analysis

2.1.1 About Océ Technologies R&D

The activities of Océ R&D are to research and develop new products that are used in document systems. These products are primarily printers and copiers and may consist of multiple modules, such as an external PIM (paper input), a printing module and a finisher (post processing such as folding or stapling), as can be seen in Figure 2.1. Océ specializes in high end, durable machines. Most of the equipment is high speed and high volume, and therefore has to last for many prints. Due to the high quality and reliability requirements, developing the products is an intensive process in which speed of development is important. Designing these printers and copiers requires expertise in many disciplines, including computer science.

2.1.2 Printer development

Printers and copiers have development cycles like any other software intensive system. A typical development cycle comprises requirements acquisition and analysis, design, implementation and testing, until the requirements are validated and the product is ready for release. This is done for both hardware and software, which are integrated during implementation. The development of hardware and software is further decomposed into smaller components or processes. A printing device is decomposed into a paper handling part and a printing/fusing part, which we call the process, where toner images are formed and actually pressed onto the paper.

Paper handling

The paper handling transfers the paper from the paper tray to the warm process for printing and transfers printed papers out of the system. If necessary it also flips printed papers and transfers them to the warm process again for duplex printing. Paper travels along a path called the paper path. The paper handling contains sensors that detect whether a sheet of paper is located on the paper path at the sensor location. Some of these sensors are used to detect the actual paper size, which might differ from the size indicated by the tray. In the paper tray, sensors are used to detect the paper size indicated by the user, and some detect the paper level. Sensors are also used to synchronize the transport of paper with the other processes. The sheets of paper are moved with pinches. A pinch consists of two rollers that grip a sheet of paper and turn, thereby moving the sheet down the paper path, where another pinch catches it and moves it further. An actuator is used to set the distance and pressure between the two rollers to control grip. The pinches are driven by a motor whose speed is controlled using actuators.


Figure 2.1: An example of an Océ high volume printer consisting of an external PIM, a printing module and a finisher


Sometimes the paper path splits into two paths. Then a solenoid is used to guide the paper to the right path. The solenoid is also controlled using an actuator.

Cold Process

The cold process does the toner placement on the transfer belt. It moves toner from the bulk supply to the work supply. There the toner is picked up and filtered, after which it is applied to the image medium, which can be a belt or a drum. There a toner image is formed and transferred to the fuse belt. Before the toner is transferred to the fuse belt, excess toner is removed using another roller. Fans remove toner particles floating in the air and cool the parts if necessary. There is also a cleaner which cleans the transfer medium of glue that might be transferred from the fuse belt.

There are many sensors in this system. A heat sensor is used to check whether the image medium is too warm. Position sensors are used to determine whether a roller is aligned properly. Boolean sensors, also called "simple" sensors, are used in several places in the system. In the toner supply units they are used to measure the level of the toner supply; another simple sensor is used to check whether the toner supply cover is closed. Rollers have sensors that measure the speed of the roller, and some rollers, where applicable, have sensors that measure the voltage on the roller.

All rollers, drums and belts are powered by motors, which again are controlled using actuators. The position of some rollers can be changed using actuators for alignment. To attract toner, some rollers have actuators that control the voltage that is put on the roller. The cleaner is controlled using an actuator. The toner bulk supply has a motor that shakes the toner so it flows down the toner feed better. There is also a component called the toner remover, which removes too coarse toner particles from the stock; it is controlled using actuators. If the temperature becomes too high, a fan can be activated to cool the parts.

Warm Process

The warm process is where the toner gets attached to the paper and the transfer belt is cleaned. The toner image is put on the transfer belt by the cold process and a sheet of paper is delivered by the paper path. Then the toner on the belt is heated and the sheet of paper is preheated. A fuse roller and a pressure roller then push the toner image and paper together, fusing the toner onto the paper. The paper is led back to the paper handler and the transfer belt is cleaned.

When heating paper, moisture from the paper evaporates and the humidity of the air in the printer increases. This might lead to dew, so a dew fan that reduces humidity is used. Fans also reduce the temperature if it gets too hot. Heaters are used to heat the components that need to be heated; they have actuators to control their power. The transfer belt has sensors to determine the temperature on the belt. The belt is driven by a motor, while the pressure roller is not. The motors are controlled using actuators and their speed is determined using sensors, just like in the other processes. Actuators are used to change the distance between the pressure roller and the belt, and thereby the pressure the roller puts on the belt. A homing sensor is used to determine the position of the roller for alignment. A groove cleaner is used to clean the transfer belt, removing toner and paper dust. It is like a roller but with grooves, and it is also driven by a motor like the normal rollers.

Finisher

The finisher receives the printed sheets and does post processing. This can mean simple actions like stacking or turning them properly, but depending on the finisher more complex operations can be done; for instance, some finishers can fold and staple the printed material.


2.2 Current Situation

For testing embedded printer software and for design exploration, simulation of a printer is interesting. The behavior of the system using certain versions of the software can be determined, and if needed the software can be changed according to the requirements. Errors can be introduced in the simulation (error injection) to test error handling software. This is done by setting values to what the software considers an error situation.

The simulation of printer parts can be done by using real parts connected to a hardware board called a BBO (Besturings Bank Opstelling) that communicates with the software, while elements like sheets of paper in the printer are simulated in software. Errors, like motors breaking down, are then introduced in the system using a physical control board that directly accesses the sensor/actuator I/O. The board also functions as a visualization of what is happening in the simulation. This method of simulation is called Hardware-in-the-Loop (HIL).

Using BBOs for testing is quite limited. BBOs are often not up to date; changes to hardware take quite some time to implement on a BBO and are expensive. Also, not everything is currently adjustable, so workarounds are needed to test everything.

To solve this, another approach is to simulate everything in software, which is called Software-in-the-Loop (SIL). SIL offers more possibilities than HIL: it does not use an expensive physical control board, and it can run on any computer, while for HIL only one device is available. The downside of SIL is that there is no visual/physical interface to control and manipulate the simulation. Currently there is a visualization which gives a graphical representation of the paper path simulation based on the physical layout of a device, but no feedback through a graphical user interface is possible.
To support both forms of simulation, a framework has been proposed that consists of multiple tools able to work with both simulations. The visualization is one of those tools. HIL also works with the visualization through a component that receives the hardware signals, translates them and sends system state information to the visualization.

2.3 Software Environment

Figure 2.2: The SIL Simulation environment


2.3.1 Software In the Loop (SIL)

Unlike in HIL, in SIL all components are simulated in software. Properties and behavior of the hardware are represented by values in software. It is therefore possible to change certain values in the SIL simulation, which is not possible in HIL, as internal hardware properties are not changed easily. Similar to the development, the printer simulation is divided into multiple processes: the paper handling, the warm process and the cold process. The environment that SIL operates in is depicted in Figure 2.2.

SIL simulates all parts in a printer. It uses the MoBasE [11] data model for the machine layout, paper paths and parts used in the simulation. Currently only the paper handling (SILPPH) is simulated; the warm and cold process simulation (SILProcess) is still in development by Kalloe [8]. SIL connects to the embedded software using a simulated I/O layer which emulates a real machine. In turn, SIL is controlled by another tool sending commands to a socket SIL provides; such tools can send print commands and other data. SIL provides a socket interface to which the visualization can connect for data on the state of the simulated machine. There is also a command interface which can be used to manipulate simulation variables. For SILProcess a separate interface is used, as SILProcess contains elements that are not supported by SIL.

2.3.2 Embedded software (ESW)

The embedded software is what is to be tested. It can be divided into multiple levels: a main node to which many sub nodes are connected. The high level main node software communicates with the printer user interface; in this case it receives commands from the Remote Control (RC). The lower level sub node software communicates with the (simulated) parts. If some functionality is not available, stubs can be used.

2.3.3 Remote Control (RC)

Remote Control is used to give commands to the embedded software. It is used as a replacement of the physical user interface of a printer; for instance, it is used to start print jobs and to set the type of paper that should be used.

2.3.4 Test Executor (TE)

Test Executor is used for automated testing. It can control RC and send error injections to SIL. Test scripts are made with optional error injection. TE sends the predetermined error injections in the script to SIL using the socket interface and starts a print job using Remote Control.

2.3.5 Universal Tester (UT)

Universal Tester can be used to test the lower level code of the software. The embedded software has a layered architecture; Universal Tester can send commands directly to the lower layers of the embedded software, bypassing the top layer. Thus procedures can be called and actuators can be set from the software side to test the I/O channels.

2.3.6 MoBasE Data Model

The MoBasE (Model Based Engineering) data model is used for maintaining a minimalistic data model with all the (shared) information about a printer. It is a multidisciplinary model that allows people from different disciplines to work on a printer concurrently and to communicate with each other using the model. Because the model is multidisciplinary, many different aspects are modeled; in this case the parts of the printer with their properties, how they are related (topology), and position information of the parts. This data model is used by the simulation and the visualization.


Currently the model only contains information about the paper path part, but it will be extended as the warm/cold process simulation is developed.

2.3.7 Visualization

The visualization component visualizes the machine state during a simulation. It shows the inner mechanism of the printer on the screen with the involved parts and their properties. The visualization processes a large amount of abstract data and makes it more understandable by mapping it to a representation closer to the physical situation. The simulation data is received from SIL through a socket interface and visualized using the MoBasE data model for the physical configuration. The format of the received data has been fixed in a specification. The visualization component is still in development.

2.3.8 High Bandwidth Logger

Another tool, the High Bandwidth Logger, is often used by the engineers to visualize the simulation data in a table-like manner. The High Bandwidth Logger not only shows information about the current state but also shows the state in previous samples.

2.4 Stakeholders

There are several parties interested in interactive visualization. Here we describe their views and the aspects they are most interested in.

2.4.1 Embedded Software Engineer

The Embedded Software Engineer produces embedded software for the printer that can be tested using SIL together with the visualization.

View of the paper handling Software Engineer on interactivity

The paper handling software engineer was already using the visualization to find points in the system where there might be problems with timing and error handling. Different test cases have to be run to find the conditions under which errors occur and have to be handled. He is also interested in being able to interact with the simulation, as that can be used to create certain situations directly. With design exploration in mind, the interactivity can be used to check designs, and in combination with design tools it can be used to create better designs.

View of the Process Software Engineer on interactivity

The process software engineer is not much interested in the visualization, as the BBO provides enough insight into the system. For more detailed information, logs are preferred, as they give information about a range of data instead of only the latest state. He is interested in interactivity, as it allows changing values and injecting errors in ways that are not possible in hardware. The process software engineer is primarily interested in dynamic information of parts. This information consists of "low level" sensor readings, as those are used to trigger certain subroutines in the embedded software.

2.4.2 Integrator

The integrator does white box tests on the code produced by the embedded software engineers. This means that the integrator has access to the code and can devise tests for certain parts of the code.


View of the integrator on interactivity

The integrator's view on interactive simulation visualization is that it should allow him to test without any hardware and give him full control over the system. Currently some signals cannot be altered, so they have to be "simulated" in the software that is to be tested. The interest lies primarily in being able to interact "live" with the simulation, and not so much in being able to visualize past simulation runs. It would be best to have a complete simulation, including the process, with visualization of everything. He would like to see the behavior of sheets locally and globally, and also erroneous sheets being marked. He is primarily interested in errors like faulty motors and in creating situations where certain handlers will be called. The level of abstraction in the visualization does not have to go down to the level of actuators: setting engine speeds is sufficient, unlike for the process software engineer, who wants to set the multiple actuators driving a motor.

Testing is done by instinct. There are some standard scenarios that are tested every time a new release is made, but setting up scenarios takes quite some time: scenarios have to be worked out entirely, predicting the events that will happen. Therefore, for testing handlers and software components, tests are often devised on the fly using the experience of the tester. For more thorough and regression tests, tools like the Test Executor can be used. That is why the interest lies in interactive testing.

2.4.3 Test Engineer

The Test Engineer uses TE to do automated testing on the embedded software.

View of the Test Engineer on interactivity

The test engineer sees added interactivity as a good way to set up automated tests. The visualization helps with picking the right segments on which actions can be performed. Adding the option to perform actions in the visualization and to save these actions to a test case would speed up the entire process of test case design, as it would be much easier to determine action and point of interest (POI) placement.

2.4.4 Timing Designer

The Timing Designer designs the timing and control of the paper path. Using a timing tool, the paper is timed to be at the correct point.

View of the Timing Designer on interactivity

The Timing Designer is primarily interested in the visualization of the timing design. Adding interactive feedback would also be interesting, but that would require quite some work, as the timing tool needs to be adjusted to accept input from the visualization. Perhaps in the future it will be possible to design the timing interactively.

2.4.5 Machine Architect

The Machine Architect is responsible for the overall machine design.

View of the Machine Architect on interactivity

Like the test engineer, the Machine Architect is interested in the overall picture of the workings and performance of the machine. Using the visualization, the internal workings of the machine are easily understood, and adding interactivity would allow the user to directly see the impact of malfunctions or environmental changes on the machine.


2.4.6 Physical Model Designer

The Physical Model Designer designs models that approximate reality, for instance temperature models. These models are used to design the software.

View of the Physical Model Designer on interactivity

The Physical Model Designer was quite interested in the visualization. The physical model that is designed has to be checked and verified. Currently this is done using the High Bandwidth Logger tool. The visualization is much easier to understand than the High Bandwidth Logger, though the latter is still needed to analyze a situation exactly, as it provides exact values and a history for comparison. Interactivity would be most welcome, as it allows the user to bring the system into a certain state at will. This gives the designer a quick and easy way to test the physical model.

2.5 Previous work

This project uses and continues the work on the framework and demonstrator created by Schindler [12] and extends it with a richer GUI and the ability to send commands and manipulate the visualization. By creating a loop back to the simulation to interactively manipulate it, the tool becomes a computational steering application. This section describes the previous work related to this project.

2.5.1 Computational Steering Environment

The Computational Steering Environment (CSE) is a software environment designed by van Liere et al. [14] that allows researchers to create computational steering applications by themselves. CSE is designed to fit between the user/researcher and the simulation, as an interface to steer the simulation and to interpret the data. CSE is designed to work with different simulations, which can run separately. The architecture, shown in Figure 2.3, is set up modularly so that it is easier to integrate with different simulations. Also, as different types of data can be used, CSE has a flexible data model.

The center of the CSE architecture is the Data Manager, which contains a state of the simulation that is continuously updated as the simulation runs. The Data Manager is surrounded by so-called satellite processes that connect to it and can exchange information with it. The Data Manager functions as a central database but also supports subscriptions: satellite processes can subscribe to fields of data and are notified when those fields change. Communication with simulations is also handled by satellite processes. Visualizing the data and handling the feedback of the user are done by separate satellite processes, where the interfaces use so-called Parameterized Graphics Objects (PGOs). PGOs are graphical objects whose properties represent data in the model. Actions inside CSE are based on the data in the data model and the mutations of that data by the user and the simulation.

While CSE provides a flexible environment that can be coupled to simulations, it is not entirely suitable for this project: CSE uses PGOs for visualization, while in our case a visualization designed specifically for visualizing our target machine is used.
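The subscription mechanism of the Data Manager can be illustrated with a small sketch. This is our own illustration of the publish/subscribe idea, not CSE code; the class, method and field names are hypothetical:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of a CSE-style Data Manager: satellites subscribe to named
// fields and are notified on every write. Names and types are illustrative
// assumptions, not taken from CSE.
class DataManager {
public:
    using Callback = std::function<void(const std::string& field, double value)>;

    // A satellite subscribes to a field; the callback fires on each update.
    void subscribe(const std::string& field, Callback cb) {
        subscribers_[field].push_back(std::move(cb));
    }

    // A simulation (or steering satellite) writes a field; all subscribers
    // of that field are notified with the new value.
    void write(const std::string& field, double value) {
        state_[field] = value;
        for (auto& cb : subscribers_[field]) cb(field, value);
    }

    double read(const std::string& field) const { return state_.at(field); }

private:
    std::map<std::string, double> state_;
    std::map<std::string, std::vector<Callback>> subscribers_;
};
```

A visualization satellite would subscribe to the fields it draws, while a steering satellite writes fields in response to user input; the simulation process does both.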

Figure 2.3: The CSE architecture, where everything is based around the Data Manager

2.5.2 SILVis

SILVis is the demonstrator that was created as part of the implementation of the visualization framework by Schindler [12]. The architecture of the framework, shown in Figure 2.4, was influenced by the prospect of extension to a computational steering application. The framework consists of reader components, an internal data model, the visualization and the renderer. The visualization translates objects in the data model to display primitives. Display primitives in the scene are divided into Rigid Bodies and MultiCurveBodies. Rigid Bodies are 3D objects defined by a mesh. MultiCurveBodies are more complex: as the name implies, each consists of multiple curves, interpolated from a set of points. Each MultiCurveBody has its own coordinate system, which is used to determine 3D coordinates and rotation from the curve offsets used by the simulation. With this, any type of object can be placed at any location on a curve, which is very useful for indicating positions on curves. The visualization primitives are in turn translated to drawing primitives, which are rendered by the renderer to a render window. Using a simple GUI the user can move the camera through the scene.

Figure 2.4: The pipeline architecture of SILVis
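The mapping from a simulation curve offset to a 3D position that MultiCurveBodies provide can be sketched with a simplified, piecewise-linear version. This is our own hypothetical illustration: the real framework interpolates smooth curves from the point set, and the `Point` and `pointAtOffset` names are assumptions:

```cpp
#include <cmath>
#include <vector>

// A curve approximated as a polyline of control points; the simulation
// addresses positions by an offset (arc length) along the curve.
struct Point { double x, y, z; };

// Return the 3D point at arc length 'offset' along the polyline 'pts'.
// Offsets beyond the curve are clamped to its endpoints.
Point pointAtOffset(const std::vector<Point>& pts, double offset) {
    if (offset <= 0.0) return pts.front();
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        const Point &a = pts[i], &b = pts[i + 1];
        double seg = std::sqrt((b.x - a.x) * (b.x - a.x) +
                               (b.y - a.y) * (b.y - a.y) +
                               (b.z - a.z) * (b.z - a.z));
        if (offset <= seg) {
            double t = offset / seg;  // fraction along this segment
            return {a.x + t * (b.x - a.x),
                    a.y + t * (b.y - a.y),
                    a.z + t * (b.z - a.z)};
        }
        offset -= seg;  // move on to the next segment
    }
    return pts.back();
}
```

With such a mapping, a sheet of paper reported by the simulation as "offset 12 on curve 3" can be drawn at its correct 3D location without the simulation knowing any 3D geometry.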

2.6 Goal

The goal of the project is to add interactivity to the simulation/visualization, creating an interactive simulation steered through the visualization. This is also known as computational steering, as the user steers the simulation by interacting with elements in the visualization. Interaction mechanisms have to be added to the visualization component in such a way that the user can easily, accurately and quickly make adjustments to the simulation while looking at its visualization. This has to be done while keeping possible extensions in mind, such as simulation components for the other processes.

The user interface should make use of the larger amount of freedom SIL has compared to HIL, so a more powerful interface can be made. Also, the ability to save the interactions done by the user, so that the same scenario can be executed again to reproduce bugs or compare code, is desirable. Such a scenario can then be used as a base to create tests in the Test Executor. With design exploration in mind, the interactive visualization can be used to test designs when combined with a design tool. If deemed necessary, adding interactivity might involve changes in the existing environment, but that is to be determined after analysis and evaluation of the current situation and of the possible approaches that can be taken to reach the goal.

With this functionality added, the user will have much more direct control over the simulation. Testing that was done on the control board in HIL can then be done using SIL, and tests that were not possible on HIL can then be executed on SIL. The deliverable is a well documented interactive extension of the visualization that transforms the simulation with visualization into a computational steering application.

Océ Technologies BV – Technische Universiteit Eindhoven

Chapter 3

Requirements

3.1 Functional requirements

The functional requirements are divided into a couple of categories.

3.1.1 Time-related requirements

First there are the time-related requirements. By time, the simulated time within SIL is meant. The running speed of the simulation highly depends on the speed of the machine on which the code runs. On high-end machines the simulation may run too fast for the user to follow the behavior. Therefore it is desirable to be able to manipulate the time or even pause the simulation.

• Regulating the speed of the simulation

• Stopping time when a certain condition holds in the system

3.1.2 Part-related requirements

The main goal of the project is to add a user interface that makes it easier to manipulate values/properties of the I/O of parts. The user should be able to either set a value only once, allowing the simulation to change it afterwards, or override the value so that the simulation can no longer change it.

The modifiable values/properties of a part are divided into separate categories. First there are sensor values, or outputs of a part. These are values that are read by the embedded software to determine what to do. Secondly there are values of inputs of a part. These are used by the embedded software to change properties of a part. Finally there are conceptual values. These are high-level values that cannot be read from a part but are useful for the user to understand the state of the system; examples are motor speed and environment temperature.

Typically, the following happens in the simulation. Changing an actuator value changes a simulation value. The values/properties of parts read from the simulation are at embedded-software level. Before displaying these values, they are converted to a more understandable format. The simulation also converts values to a format the embedded software understands in order to communicate with it.
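The conversion between embedded-software-level values and a human-readable format can be sketched as follows. This is an illustrative example only: the actual capture-value encoding used by the embedded software is not specified here, so a simple linear mapping over a hypothetical 10-bit range is assumed.

```python
# Hypothetical conversion between a raw sensor "capture value" (as read by
# the embedded software) and a human-readable temperature. A linear mapping
# over a 10-bit range is assumed purely for illustration.
def capture_to_celsius(capture, bits=10, max_celsius=250.0):
    """Convert a raw capture value to a temperature in degrees Celsius."""
    return capture / float(2 ** bits - 1) * max_celsius

def celsius_to_capture(celsius, bits=10, max_celsius=250.0):
    """Inverse conversion, used when sending a user-set value back to the
    simulation in the format the embedded software understands."""
    return round(celsius / max_celsius * (2 ** bits - 1))
```

The visualization would apply the first function before displaying a sensor value, and the second when feeding a user-entered temperature back to the simulation.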

Direct Manipulation

The user interface should provide a mechanism to allow the user to modify the value directly.

• Setting sensor values of a part (e.g. temperature as a capture value)

• Setting actuator values of a part (e.g. power of a heater)


• Setting conceptual values of a part (e.g. temperature in °C)

• Overriding sensor values of a part (e.g. temperature as a capture value)

• Overriding actuator values of a part (e.g. power of a heater)

• Overriding conceptual values of a part (e.g. motor speed)

• Setting delay of parts

Manipulation by condition: POI (point of interest)

The user interface should also allow the user to modify values by setting a condition called a point of interest. Only when the condition is satisfied will the value be modified.

• Setting sensor values of a part (e.g. temperature as a capture value)

• Setting actuator values of a part (e.g. power of a heater)

• Setting conceptual values of a part (e.g. temperature in °C)

• Overriding sensor values of a part (e.g. temperature as a capture value)

• Overriding actuator values of a part (e.g. power of a heater)

• Overriding conceptual values of a part (e.g. motor speed)

3.1.3 Simulated concept requirements

Besides parts, there are also simulated concepts in the simulation, like sheets, toner images and temperature points. The user should be able to manipulate these concepts as well.

Direct Manipulation

These concepts can be manipulated directly.

• Stopping sheets

• Removing sheets

Manipulation by condition: POI (point of interest)

These concepts can also be manipulated via a POI.

• Stopping sheets

• Removing sheets

3.1.4 Configuration/storage requirements

The visualization should also provide functionality to save the actions the user performs on parts, and the POIs that are set, to a file. From this file, test scenarios can be generated. Besides the manipulation of parts/concepts, the visualization should be extended to be able to save and load various kinds of user preferences.

• Ability to store the interactions/manipulations done on the simulation

• Ability to save configurations of the workspace/user interface in a configuration file

• Ability to load configurations of the workspace/user interface from a configuration file


3.2 Non-functional requirements

• Usability: low learning curve, but fast to use compared to the current system

• Reliability: stable

• Performance: able to run on the computers at R&D

• Supportability: flexible, supporting multiple data and feedback interfaces

Chapter 4

Architecture

The interactive part was originally intended as an extension to the visualization. But as both were developed in parallel, the architecture of the entire application was designed in accordance with the needs of both the visualization and the interactive part.

4.1 Used technology

• Ogre3D [1], a 3D rendering engine

• CEGUI [2], a widget toolkit for Ogre3D

• wxWidgets [3], a cross-platform widget toolkit

• Boost [4], a collection of C++ libraries used for threading, signalling and I/O

4.2 Components

This section gives a short overview of the architecture of the visualization with interaction. The architecture is set up similarly to the one used in the Computational Steering Environment by van Liere et al. [14]. A data manager is used but, unlike the Computational Steering Environment, where feedback is done through the data manager in the visualization, we explicitly keep the feedback part separate from the data manager. This ensures that the data that is visualized is 100% simulated data and none of it has to be derived from other data, thereby separating data manipulation from the visualization.

The visualization extended with the interactive part has several external interfaces. The interface with the simulation is a socket, and the interface with the user is a GUI. The socket interface uses text-based messages containing either the state of the system or commands for the simulation. The user interface relies on a keyboard and a pointing device for input. The pointing device used during the project is a mouse, but other pointing devices can also be used. An overview of the proposed architecture is shown in Fig. 4.1. The remainder of this section describes the components/elements in the architecture.

4.2.1 Recorder

The Recorder component reads data about the simulated machine from the configuration file in MoBasE format and fills the data model with the corresponding objects. The Recorder component also functions as a receiver of dynamic simulation data. Due to the design of the environment, Recorder runs a couple of servers to receive connections from simulation clients; currently these are SIL and the process simulation interface. The client sends dynamic information


Figure 4.1: The general architecture and data flow of the Visualization extended with Interaction

about parts inside the machine. Recorder receives this data, updates the respective parts in the data model and signals the corresponding handlers. The number of servers and interfaces can easily be extended in the future to interface with more simulations.

4.2.2 Data Model

The Data Model contains a list of all parts and simulated concepts, like the paper path segments. The data model can be seen as a snapshot of the state of a machine. Each part is an object in the data model that corresponding handlers in the VisualizationHandlerCollection can subscribe to for updates. In the observer-observable design pattern, the parts are the observables and the handlers in the VisualizationHandlerCollection are the observers. Using the data from Recorder, the state of the simulated machine is kept internally. When data has changed inside the data model, a signal can be sent to the subscribed handlers to update the visualization. The data model functions somewhat like the Data Manager of the Computational Steering Environment [14]. The difference is that the functionality is split up: Recorder is used as an interface to receive simulation data, and the feedback part has been separated. This is done to ensure that what is visualized is what is simulated, and not a combination of the simulation and the user interaction.
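The observer-observable relation between parts and handlers can be sketched as follows. The class and method names are illustrative, not the actual API of the C++ implementation; the sketch only shows the mechanism: Recorder pushes a value into an observable part, and every subscribed handler is notified.

```python
# Minimal sketch of the observer-observable pattern between the data model
# (observable parts) and the VisualizationHandlerCollection (observers).
class Part:
    """An observable part in the data model."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self._observers = []

    def subscribe(self, handler):
        """A visualization handler registers itself for updates."""
        self._observers.append(handler)

    def update(self, value):
        """Called by Recorder when new simulation data arrives."""
        self.value = value
        for handler in self._observers:
            handler(self)          # signal each subscribed handler

updates = []
motor = Part("motor1")
motor.subscribe(lambda part: updates.append((part.name, part.value)))
motor.update(42)   # Recorder pushes new data; the handler would redraw
```

The one-way data flow noted above is visible here: handlers only receive notifications; they never write back into the part.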

4.2.3 VisualizationHandlerCollection

The VisualizationHandlerCollection contains a number of handlers which take objects from the data model and create corresponding drawing primitives in the form of Rigid Bodies or MultiCurve Bodies. Rigid Bodies are constructed from a mesh file, while MultiCurve Bodies are constructed from curves interpolated from control points. The primitives are created and rendered using the visualization library framework developed earlier. The handlers are subscribed to the relevant objects in the data model. The VisualizationHandlerCollection accesses the configuration to determine the visibility and colors of drawn components. The multitude of different handlers

function like the satellites in the Computational Steering Environment [13], but the data goes only one way.

4.2.4 VisualizationMainFrame

The VisualizationMainFrame is the main GUI and contains several widgets and their handlers. The widgets are for manipulating the GUI or for manipulating simulation values that are not specific to the simulated machine, like time manipulation. These widgets send their commands to the Feedback component, which then sends them to the respective simulation interfaces. From the VisualizationMainFrame a configuration widget can be opened, with which the user can change the configuration: the visibility of parts, their labels, and whether a part type can be selected. The VisualizationMainFrame contains the render window in which the renderer draws the 3D scene; Ogre3D is initialized there.

Connected to the VisualizationMainFrame is a RenderWindowListener, which catches and processes all user input. In the RenderWindowListener the input is processed and, depending on the context, the respective handler is called. The input is also injected into the CEGUI toolkit, as it does not detect any user input on its own. The RenderWindowListener handles the keyboard input, so shortcut keys are also implemented there. The RenderWindowListener is also the component where the object picking mechanism is implemented.

4.2.5 CEGUIHandlerCollection

When there is user interaction within the render window, it is handled by a handler in the CEGUIHandlerCollection. These handlers take care of all GUI elements inside the render window; all GUI widgets in the render window are handled by CEGUI. The CEGUIHandlerCollection mainly contains functions for creating and filling windows with content. Besides some standard functions for opening status windows, the CEGUIHandlerCollection provides functionality to create menus such as the setPOI menu. For setting POIs, markers are created and placed in the Data Model. The window management also resides here and consists of a couple of handlers:

• A window movement tracker, which keeps windows inside the render window and allows windows to push each other around

• A window position generator, which calculates the location of a new window

• A window tracker, which tracks open windows and the order in which they were opened

• A window tiler, which tiles all windows

• A window destroy procedure, which closes all subscriptions and cleanly destroys a window

4.2.6 Feedback

The Feedback component handles all communication back to the simulation. Unlike the Computational Steering Environment, where feedback is also handled by a satellite process, feedback is kept separate from the data model to ensure that the feedback does not alter the simulation data and that all visualized data comes from the simulation. This ensures the correctness of the visualization with regard to the simulation. Feedback sets up servers to which the respective feedback interfaces of the simulation can connect. The servers support more than one connection; therefore multiple interfaces and external tools can receive the commands from Feedback. This can be used to steer multiple simulations or to store the sent commands.
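The fan-out of one user command to several connected interfaces can be sketched as follows. This is a simplification under stated assumptions: the real component uses sockets, and the text command shown is invented for illustration (the actual command syntax is defined by the SIL socket interface).

```python
# Sketch of the Feedback component's fan-out: a single user command is
# delivered to every connected interface (a simulation, a scenario
# recorder, an external tool). Connections are modelled as callables
# instead of sockets to keep the sketch self-contained.
class Feedback:
    def __init__(self):
        self._connections = []

    def attach(self, send):
        """Register a connected interface (in reality, a socket client)."""
        self._connections.append(send)

    def send_command(self, command):
        """Broadcast one text command to all connected interfaces."""
        for send in self._connections:
            send(command)

received = []
fb = Feedback()
fb.attach(received.append)   # e.g. the SIL feedback interface
fb.attach(received.append)   # e.g. a tool that stores sent commands
fb.send_command("setActuator heater1 0.5")   # hypothetical command text
```

Because every connection receives the same command stream, storing the commands for later replay (the scenario requirement from Chapter 3) is just another attached connection.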

4.2.7 Configuration

The configuration is kept in a separate, central Configuration component. The configuration is loaded at startup from a configuration file. The components read from the configuration and

determine what to visualize, along with several other configurable options. While the program is running, the configuration can also be changed and saved.

4.3 Architectural Design Decisions

4.3.1 Dynamic data in the data model

Dynamic attributes of parts may differ in type and have to be declared in their part class. This requires some "maintenance" on the data model when parts and system attributes are added, and this has to be done for every attribute of a part. As the number of part types increases, this requires quite some work and makes the data model more complex. Looking at the code for the status window of a part, code has to be produced to handle each type of attribute differently in terms of display and interaction. In this case the number of attribute types is limited; therefore it is possible to generalize over the typing of parts and reduce the work that has to be done when the data model is changed. In the original visualization this generalization was not done, and properties of parts were defined statically in the data model component.

Choice: To reduce the complexity of the data model and of the code handling the changes in the different parts, one has to generalize over the type and number of attributes. Therefore the use of a generic type container was chosen, for which boost::any was used. One can copy a value of any type into a boost::any and extract it after a type check; using the type check, the correct procedure can be chosen. Attribute values are put inside an any container, which in turn is put inside a string-to-any mapping, the string being the name of the attribute. At the user interface end, one only has to loop through the object mapping and, depending on the type of the object, fill the interface and call the appropriate functions. With this mechanism, also known as naked objects [10], changes to the input are automatically updated in the status window and require no more work in the code. Also, when new fields are added, not much work is required, as new part information propagates through the components automatically.
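The mechanism can be sketched in Python, where a dictionary plays the role of the string-to-any mapping and a runtime type check plays the role of the boost::any type check. The widget names are illustrative; the point is only that the UI code dispatches on the runtime type, so new attributes need no data-model changes.

```python
# Sketch of the generic attribute container: attribute values of arbitrary
# type live in a name-to-value mapping. The UI loops over the mapping and
# dispatches on the runtime type of each value (in the C++ code this is a
# type check on boost::any; here isinstance plays that role).
part_attributes = {"temperature": 61.5, "enabled": True, "label": "fuser"}

def widget_for(name, value):
    """Choose a UI widget purely from the runtime type of the value."""
    if isinstance(value, bool):        # note: bool before int/float check
        return (name, "checkbox")
    if isinstance(value, (int, float)):
        return (name, "spinner")
    return (name, "textfield")

# The status window is filled generically, with no per-part code:
widgets = [widget_for(n, v) for n, v in sorted(part_attributes.items())]
```

Adding a new attribute to `part_attributes` automatically produces a widget for it, which is the naked-objects effect described above.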

4.3.2 Picking Mechanism

Picking is used when the user clicks or hovers the cursor somewhere on the render window. When the user hovers over an object, the label of the object should be displayed; when the user clicks on an object, the corresponding menu should be opened. Multiple techniques are possible for item picking. They primarily differ in the amount of work and in accuracy.

Via Ogre3D

The picking mechanism provided by Ogre3D uses bounding boxes of objects to simplify the picking algorithm. This causes an object to be selected when the user clicks on what appears to be empty space but actually lies inside the bounding box of an object. This is especially the case for concave objects like segments, which are considered one object. Segments consist of smaller curves, and picking accuracy can be improved by considering not entire segments but the individual curves as objects. This, though, would require a significant change in the system and the data model, as sheets are segment and segment-position based.

Custom Algorithm

A more fine-grained solution would be to investigate a segment object further, checking whether the segment is really hit after a hit on the bounding box of the object is determined. This requires a custom ray casting algorithm that does more than just check bounding boxes. This would

be heavier on the system, but as it is only executed on a mouse click, this should not affect performance much. If the algorithm has to run every time the mouse moves (to check for mouseovers), however, it might become more expensive.

Choice: We chose the mechanism via Ogre3D, which uses rough bounding boxes of components, as it required less work. If necessary, it can be replaced with a custom algorithm in the future.
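The bounding-box test underlying this kind of picking can be sketched with the standard slab method for ray/axis-aligned-box intersection. This illustrates the accuracy trade-off discussed above (everything inside the box counts as a hit); it is not Ogre3D's actual implementation.

```python
# Slab-method ray vs. axis-aligned bounding box (AABB) intersection, the
# kind of test behind bounding-box picking. A hit anywhere inside the box
# selects the object, even if the visible geometry is not under the cursor.
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) hits the box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):
                return False       # ray parallel to this slab and outside it
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0
```

A custom algorithm as described above would run this cheap test first and only then do the expensive per-curve intersection for segments.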

Chapter 5

Design

5.1 Graphical User Interface

5.1.1 Views

The application has different users, with different interests and requirements for the interface. To accommodate all these wishes, a customizable view that can mix different views is created, instead of a separate view for each user. This is done because there are many types of users and their interests overlap. The GUI will be able to display everything, and users will be able to hide unnecessary elements. Depending on the level of customizability, users will get at least what they wished for. A high level of customizability might be confusing for new users, but in this case most users know what they want to see. Even though the users are experts, ease of learning still strongly influences the design of the user interface.

Figure 5.1: A mockup of the GUI. It is divided into three layers, shown here using different colors and numbering. 1: wxWidgets layer. 2: CEGUI layer. 3: 3D scene layer.

The main window of the GUI can be divided into three parts, or layers, as shown in Figure 5.1. First we have the main widget layer, which we call the wxWidgets layer. Besides the render window and a menu bar at the top of the screen, it also contains widgets related to time manipulation, manual command sending and a part list in the form of a tree. The visibility of these widgets can

be configured using the menu bar, so that more space can be freed for the render window. The widgets in this layer are created using the wxWidgets toolkit.

The second layer, the CEGUI layer, is in the render window. Initially it is empty, but when the user interacts with the third layer, widgets are placed inside this layer. As this layer only exists inside the render window, widgets cannot move out of its bounds. The widgets in this layer are made using the CEGUI toolkit.

The third and bottom layer is the 3D scene layer. When the user clicks on an object in the scene, corresponding menus or windows are opened in the second layer. When hovering over an object in the scene, the corresponding label is shown if it is not shown already.

5.1.2 wxWidgets Layer

The wxWidgets layer contains a couple of widgets that are not much related to the parts inside the scene, or that are related to a different view on the scene. The widgets made using the wxWidgets toolkit feel familiar, as they use the same look and feel as the OS. The wxWidgets layer contains at least a menu bar at the top and the render window. The menu bar contains standard options like exiting the application, help, and visibility options of widgets. Using shortcuts or the menu bar, four other widgets can be opened:

• A simulation time manipulation widget

• A direct command field

• A parts tree

• A configuration window

The time manipulation widget is a slider with a text field that allows the user to set the time modifier. The time modifier can only slow down the simulation: the simulation itself will try to run as fast as possible but can be slowed down using the modifier. A value smaller than 1 means faster than real time, 1 means real time, and a value x > 1 means x times slower. For the slider, a variable scale has been chosen: for values between zero and one, a standard linear scale; for values larger than one, a logarithmic scale.

The direct command field is a text field in which the user can send commands directly to the simulation. For this, the user needs to know the exact syntax of the interface, which is defined in the socket interface of SIL. The parts tree functions as a large list of all parts in the simulation, allowing the user to look up parts by name. The configuration window contains configuration options categorized by type, with a tab per type. These options are:

• Selectability of types of parts

• Visibility of types of parts

• Visibility of labels of types of parts
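The variable slider scale for the time modifier described above can be sketched as follows. The split point and maximum modifier are hypothetical parameters, not values from the actual tool; the sketch only shows the linear-below-one, logarithmic-above-one mapping.

```python
import math

# Sketch of the variable slider scale for the time modifier: the lower part
# of the slider maps linearly onto [0, 1] (faster than real time), the
# upper part logarithmically onto [1, max_mod] (1 = real time, x = x times
# slower). The split point and max_mod are illustrative assumptions.
def slider_to_modifier(s, split=0.5, max_mod=100.0):
    """Map slider position s in [0, 1] to a time modifier."""
    if s <= split:
        return s / split                        # linear: 0 .. 1
    frac = (s - split) / (1.0 - split)          # 0 .. 1 over the upper part
    return 10 ** (frac * math.log10(max_mod))   # logarithmic: 1 .. max_mod
```

The logarithmic upper half gives fine control near real time while still reaching large slow-down factors at the end of the slider.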

5.1.3 CEGUI Layer

The CEGUI layer contains widgets that appear through interaction with the objects in the scene or by using shortcut keys. Depending on what the user clicks on, an object selection menu may appear; this is the case when the user clicks on a point that lies in multiple objects. Similarly, when the user clicks on an object or has selected an object, an action selection menu may appear. This happens when a part has been selected that has several possible actions.


Action Widgets

Each type of action has its own widget. The possible actions for the paper handling are:

• setSensor

• setActuator

• setMotorSpeed

• stopSheet

• removeSheet

• removeAllSheets

The actions are categorized in this fashion because the error injection interface of the paper handling is defined in such a fashion. From the viewpoint of interaction, one can generalize this to setting either a boolean or an analog value. Widget-wise this means two types of widgets: a window with buttons to toggle the value, for boolean values, or a window with spinners and possibly sliders, for analog values. Sliders are only possible when maximum and minimum values are available for the value. For the actions where values are set, a check box is added to determine whether to override the value or set it only once.

For the manipulation interface of the process simulation, called the Steering Interface, the actions are more generic. There the user has to specify, in an XML format, the process element name and the attribute that has to be set. Using the command override or setVal, the value can be overridden or set. The Steering Interface also has commands to set conditions for when to execute commands: using a when or whenever element as a condition on parts, predicates can be set. This is powerful, as predicates can be nested to create more complex conditions. In turn, this would make a GUI for the Steering Interface much more complex. Such a GUI was not made, as the formulae were too generic and complex to work out an interaction method in time, but this can be done in the future. Currently the user can still input these commands manually.

setPOI

Besides direct actions, the user should also be able to set points of interest. These are conditions which, when fulfilled, can trigger a specified action. A point of interest requires some arguments:

• SegmentId, the segment in the paper path the POI is located on

• SegmentPosition, the position on the segment the POI is located at

• POIType, the value that is compared with the condition value; in this case EdgeRising or EdgeFalling

• ConditionValue, the value at which the command should be executed

• Command

The possible commands are the same as the actions listed in the action widgets.
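The edge-triggered behaviour of a POI can be sketched as follows. The field names follow the argument list above; the exact matching logic (how the simulation decides that a rising or falling edge has crossed the condition value) is an assumption for illustration.

```python
# Sketch of a point of interest: when the watched value crosses the
# condition value in the configured direction, the stored command fires.
# The edge-detection logic here is an illustrative assumption.
class POI:
    def __init__(self, poi_type, condition_value, command):
        self.poi_type = poi_type              # "EdgeRising" or "EdgeFalling"
        self.condition_value = condition_value
        self.command = command
        self._last = None                     # previous observed value

    def check(self, value):
        """Return the command when the edge condition is met, else None."""
        fired = None
        if self._last is not None:
            rising = self._last < self.condition_value <= value
            falling = self._last > self.condition_value >= value
            if (self.poi_type == "EdgeRising" and rising) or \
               (self.poi_type == "EdgeFalling" and falling):
                fired = self.command
        self._last = value
        return fired

poi = POI("EdgeRising", 10.0, "stopSheet")
```

Each simulation update would feed the watched value through `check`, and the returned command would be sent via the Feedback component.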

Context Menus

When the user clicks on multiple parts, or selects a part for which multiple actions are possible, a menu pops up. As the contents of the menu differ depending on what is clicked, it is called a context menu. Normally this is a rectangular menu that lists all possibilities vertically, also known as a linear context menu. While a linear context menu would work, we looked into more efficient context menus, as context menus are used very often.


Instead of this standard context menu, a different kind of context menu was used, which we call the matrix menu. Elements were borrowed from the pie menu, introduced by Hopkins [7], and from marking menus, the extension by Kurtenbach [9]. Pie menus differ from linear context menus in that, instead of ordering the buttons linearly below the cursor, the buttons are put in a circle around the cursor. In the case of matrix menus, as the name gives away, the buttons are placed in a 3-by-3 matrix, as shown in Figure 5.2. For reference, the buttons are numbered using the same ordering as the numbers on the keypad. The center button is reserved for closing the menu, which leaves eight buttons that can be filled with choices. In case there are more than eight choices, the top button is reserved for opening a new matrix menu containing the remaining choices.

The menu opens when the user clicks, popping up with the cursor on the center element. We provide two ways of selecting a choice using the mouse. One resembles the pie menu: the user releases the mouse button and can then click on the button with the desired choice. The other is more in line with marking menus: the user keeps the mouse button pressed while moving to the desired choice. Alternatively, the user can press the corresponding key on the keypad, if that method is preferred.

Figure 5.2: The basic view of a matrix menu
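The layout rule of the matrix menu can be sketched as follows. The fill order of the surrounding buttons is an assumption (the thesis only fixes the keypad numbering, the reserved centre button, and the top button as overflow); the labels are illustrative.

```python
# Sketch of the 3-by-3 matrix menu layout. Buttons are numbered like the
# keypad (7 8 9 / 4 5 6 / 1 2 3); button 5 (centre) closes the menu, and
# with more than eight choices the top button (8) opens an overflow menu.
# The fill order of the remaining buttons is an illustrative assumption.
FULL_ORDER = [7, 8, 9, 4, 6, 1, 2, 3]       # 5 (centre) is reserved
OVERFLOW_ORDER = [7, 9, 4, 6, 1, 2, 3]      # 8 becomes the "more" button

def layout_matrix_menu(choices):
    """Map choice labels to keypad buttons; returns {button: label}."""
    menu = {5: "close"}
    if len(choices) > 8:
        menu[8] = "more..."                 # opens the next matrix menu
        order, visible = OVERFLOW_ORDER, choices[:7]
    else:
        order, visible = FULL_ORDER, choices[:8]
    for button, label in zip(order, visible):
        menu[button] = label
    return menu
```

Because the buttons share the keypad numbering, the same mapping serves both mouse selection and keypad shortcuts.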

Status windows

Besides widgets for setting properties of parts, a status window is introduced. The status window shows the newest property information about a part. This information can be static (loaded from MoBasE) or dynamic (part information received from simulations).

Window management

To manage the status windows, widgets and menus, some kind of window management mechanism was required, so a very simple window manager was created. The window manager tracks all top-level windows (menu items are contained inside an invisible top-level window). Following the behavior of a so-called tiling window manager, status windows are not allowed to overlap, so no occlusion occurs. The window manager also provides a window placement algorithm. This can be a simple algorithm that places new windows at a default location, or a more complex one that calculates where there is free space and allows efficient use of screen space. Window placement algorithms use many solutions found in the computational geometry field (like collision detection and ranged object retrieval) and can

be quite complex. In this case only a simple algorithm is used, which places a window directly at the cursor. The window management also offers functionality to tile the windows: the windows are placed next to each other without any overlap, starting from the top left, until the screen is full. For placement convenience, windows snap to the edge of the screen, and to maintain tiling behavior, status windows cannot overlap each other when dragged and moved. Instead, the window being moved can push other windows away.
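The tiling step can be sketched as a simple row-filling placement: windows are laid out left to right from the top left and wrap to a new row when the screen width is exhausted. The dimensions are illustrative; the actual tiler's wrap rule may differ.

```python
# Sketch of the tiling behaviour: place windows next to each other
# left-to-right starting from the top left, wrapping to a new row when the
# screen width is exhausted. Window sizes are (width, height) tuples.
def tile_windows(sizes, screen_w):
    """Return top-left (x, y) positions for windows of the given sizes."""
    positions, x, y, row_h = [], 0, 0, 0
    for w, h in sizes:
        if x + w > screen_w and x > 0:     # row full: wrap to the next row
            x, y, row_h = 0, y + row_h, 0
        positions.append((x, y))
        x += w
        row_h = max(row_h, h)              # row height = tallest window
    return positions
```

Since no two assigned rectangles share a row span and a horizontal span, the resulting layout is overlap-free, matching the tiling requirement above.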

5.1.4 3D Scene Layer

The 3D scene layer contains the visualized parts. These parts can be picked, and are all provided and modified by the visualization. The placement of objects raises problems with occlusion, an issue that becomes even more evident when labels for the objects are added. To reduce the occlusion, elements and their labels can be hidden using the configuration window. Alternatively, only certain parts can be made selectable. Though this does not reduce the amount of occlusion on the screen, selecting certain elements becomes easier, as the other elements can no longer be selected. The advantage of this approach is that no information is hidden.

5.1.5 Graphical User Interface Design Decisions

Here we describe the major considerations during the design of the GUI when choosing which widgets to use and which features to include, taking usability into account.

Interaction input interfaces

The simulation has a text-based socket interface. Therefore a text box to send commands to the simulation would suffice. For expert users this might be enough, but for users who do not have a complete picture of the system and have not memorized the possible commands, it is more challenging to use. Though the application will be used by expert users, ease of learning is still emphasized. As the printer model is simulated and visualized, it is not a big step to use the visualization as the starting point for sending feedback back to the simulation directly. While a text input device is enough to send commands back to the simulation, using the visualization requires a method to select visualized elements. Looking at the standard input devices available, the possibilities are the keyboard and the mouse as pointing device. With these two devices we have three possible variations:

• keyboard only

• mouse only

• mouse and keyboard

Using the keyboard only would mean the user has to use the keyboard to navigate through the objects in the visualization and perform actions on them. As an alternative, the user can manually send commands by typing them in the respective field. Using the mouse only means the user can click on a visualized object and manipulate it through menus. This method relies on the graphical user interface to show the possible commands. Clicking through menus might be tedious when many clicks are needed for one action; on the other hand, using only the mouse leaves one hand free. By using both keyboard and mouse the other hand is used too, which can improve performance when combined correctly, for instance by quickly selecting menu items using a single key corresponding to a menu item. Performance is not improved in all cases: it does not apply to typing, as typing needs both hands, forcing the user to switch from the mouse to the keyboard.

After some consideration, the mouse-and-keyboard option was chosen. The graphical nature of the interface gives the mouse a big advantage for quickly selecting items on the screen. Using two

different input devices is not always efficient, as it takes time to switch from keyboard to mouse and vice versa. To overcome this, the user interface is designed in such a way that it can be used without using the keyboard for typing.

GUI styling

For the look and feel of the GUI, different widget toolkits could be used. In the beginning, the wxWidgets toolkit was used for a basic interface containing an OGRE render window for the visualization. As the GUI had to be extended to support feedback, multiple options were possible for the extension:

• using wxWidgets

• using a widget toolkit for OGRE

wxWidgets makes use of the windowing API of the operating system to provide a native look. This gives a familiar look which the user is used to, and makes the application easier to learn. A familiar look was determined to be a factor in the acceptance of the application by new users. Using a widget toolkit for OGRE allows for creating widgets "inside the render window". This gives the user more of a feeling that he is manipulating something inside the 3D window. A widget toolkit for OGRE also allows for more visual effects, like transparency of the widgets. Widgets can be placed inside the render window, bringing the information inside the widgets much closer to the visualization. When viewing or manipulating information closely related to a part, the widget can be placed close to the selected part, which puts less strain on the user.

We chose to use both the wxWidgets toolkit and the CEGUI toolkit for OGRE. The wxWidgets toolkit is used for creating the window and some widgets that do not immediately manipulate parts in the simulation. The CEGUI toolkit is used for creating a GUI inside the OGRE window, giving the user the feeling of manipulating and viewing from inside the visualization. Having a different look and feel for each toolkit also visually separates what is being manipulated: a part inside the simulation, or something about the application itself.

Window management type For the window management of the information windows, several types of window managers were possible. In general, window managers are divided into different types:

• stacking window manager

• tiling window manager

• compositing window manager

A tiling window manager does not allow windows to overlap. Often the windows are tiled to fill the entire screen, and the window manager does not allow the user to move a window to a specific location. Not allowing windows to overlap ensures there is no occlusion and all windows are entirely visible.

A stacking window manager is a 2D window manager that allows windows to be stacked on top of each other; it is the type most users are used to. The user can move windows at will and they can overlap; the last used window is on top.

A compositing window manager allows windows to be manipulated in 3D and supports more advanced techniques such as transparency, bending, rotating and scaling.

We used a combination: a compositing window manager whose window behavior resembles that of a tiling window manager. As the windows are opened inside the render window, they occlude a part of the screen, which decreases the amount of data that can be visualized. By using semi-transparent windows, occlusion still occurs, but it is possible to see what is behind a window. Following the behavior of tiling window managers, windows are not allowed to overlap, as windows are meant to show necessary information; if they are not needed, they can simply be closed. Unlike in tiling window managers, moving windows is allowed, so the user can move a window that is placed inconveniently. To cope with collisions during window movement, the window being moved pushes other windows away. This is done by detecting collisions and moving the other windows accordingly. A side effect of the pushable windows is that the placement of windows can be changed by pushing them away with one window instead of moving them all separately.

Context menu type When several options are possible, a menu is used to make the choice. Specifically, when the user clicks on a point where multiple objects are located or multiple actions can be taken, a context menu is appropriate. While the standard is a linear context menu, it is not very efficient compared with newer techniques. As alternatives to the standard linear context menu, different kinds of menus were investigated. Menus that were explored more deeply were:

• Pie menus

• Marking menus

Pie menus were researched by Hopkins et al. [5] as an alternative context menu. Pie menus differ from standard context menus in the layout of the menu items: the items are placed around the cursor. When the user moves the mouse in the direction of a menu item, the item is highlighted and will be selected when the user clicks. This technique reduces the distance the mouse has to travel to make a selection and therefore should be more efficient. This is explained by Fitts's law [6], which models the time required to point at a target as a function of the distance to the target and its size (in its original form, MT = a + b·log2(2D/W)). Fitts's law also explains another efficiency-improving property: the wedge-shaped menu items grow larger as the distance to the center increases, making them easier to select.

Marking menus are an extension of the pie menu suggested by Kurtenbach et al. [9]. Marking menus do not appear instantly when the user clicks, but after a short delay, during which the menu is invisible. While beginners still need the visual representation of the menu, advanced users who have made a movement often enough to memorize it can perform it before the menu appears. This approach turns the menu into a more gesture-based interaction method.

Instead of the familiar linear context menu, an implementation similar to the pie menu was chosen, as it was determined to be more efficient while not being as different as marking menus (no concept of markings is introduced). Due to implementation restrictions, a menu different from a pie menu was implemented. We call it a matrix menu: a 3 × 3 matrix appears at the mouse position, with the center cell centered on the cursor. The menu items still appear around the mouse, and because a matrix is used, the placement of the menu items is fixed.

Send command interface One major point in the requirements is an easy-to-use interface to quickly send commands to the simulation. Two interaction mechanisms, which differ in the order in which things are done, were explored. In the first, the user selects what type of action is to be executed; after that, the part is chosen, after which the variables can be set. In this case the user has to select the part from a list of parts on which that action can be performed, to ensure the action is actually possible. In the second mechanism, the user first picks the part where changes are to be applied; then the action is chosen, after which the variables can be set. Both mechanisms have their own advantages; it mostly depends on the order in which the user works. The user can choose the action first or the part first, depending on the situation. In both situations

the user has to fill in the variables. Only in the case where the action is chosen first does the part have to be selected as well. Therefore the form can be generalized to a standard form in which the part can be selected, but is preselected if the user selected the part before the action.

setPOI interface Another major point in the requirements was the ability to execute actions when the system satisfies certain conditions. Different GUI and interaction schemes were explored. A POI command consists of two parts: the condition and the action that should be executed.

First we explore interfaces for setting the condition. For setting a condition we need the following:

• SegmentId: the segment the POI is located on

• SegmentPosition: the position on the given segment

• ConditionType: whether the sheet detection is on the leading edge or the trailing edge

• Number of sheets: the x-th sheet that passes the point on the segment

Selecting the SegmentId and the SegmentPosition can be done by clicking directly on a segment. This would be the most visual way of selecting these values. As segments can be hard to click on, some closest-segment/position detection would be desirable. After selecting a segment, the ConditionType and the number of sheets are left. These values are not present anywhere in the visualization, so a separate widget would have to be created. Another way would be to use a form to set all the values. To give more reference to the visualization, selected segments can be highlighted and the position can be indicated by a marker. By using a form, all necessary condition values can be put on the same widget, which confuses the user less than having separate widgets for each condition value.

For setting the action, similar alternatives were explored. One possibility was to extend the interface with a POI mode. An interaction mechanism for giving a command is already available; therefore, after setting a POI condition, the application would go into POI mode and the next command would be the POI action. This two-stage interface, switching from setting the POI to setting the action, might be confusing for the user. Another possibility is to use a form in which the user can choose both the condition and the command. Using this method, both parts of the POI command are kept together in one consistent form, which confuses the user less.

In the end, to keep things simple and easy to learn, the form variant was chosen. Implementation-wise, the form is also simpler, as no setPOI selection mode had to be added. The form is generalized so that it is usable even when no part has been selected.
The part of the form for setting the action is the same as the one the user would get when performing an action normally.

Chapter 6

Results

The design was implemented, resulting in an application that can be used in combination with the simulation. Emphasis was put on a simple and easy-to-use GUI, while keeping it powerful and flexible enough for expert users. The application was tested using use cases of typical workflows.

The application works as follows. Layouts of printers are loaded at startup using a layout file. Then, using socket connections, the application can connect to both the paper path and the process simulation to receive simulation data and send back feedback. Using this feedback, simulated values can be altered once or even locked permanently. For both simulations the most important parts are visualized and their values can be viewed through the visualization. The speed of the simulations can be altered using the simulation time manipulation mechanism, which was developed as a plug-in to the simulation to which the visualization can connect.

Visualized items can be shown or hidden through a configuration menu or by editing a configuration file that is loaded on startup. The GUI configuration can also be adjusted through configuration options. For frequently used options, keyboard shortcuts are available.

Commands can be sent back to the simulation directly using a command interface represented by a textbox, or alternatively through the GUI. The speed of the simulation can be set with a slider. Names of parts in the simulation can be looked up quickly through a tree menu that sorts the parts by type and by name. Visualized aspects of the simulated machine can be manipulated and examined by simply clicking on a part and selecting the appropriate action.

A GUI built on top of the visualization makes it easier to send commands back to the simulations. Parts can be selected by pointing and clicking directly on them. When parts overlap, a matrix menu pops up allowing the user to pick the correct object. An example is shown in Figure 6.1.
When a part has been picked, a matrix menu appears showing the possible actions on that part, as shown in Figure 6.2. The matrix menu is a non-standard type of menu, but it was chosen specifically because it performs better than linear menus in most cases and allows learning through muscle memory. Muscle memory is a form of memory for actions that are performed often and repeatedly; in this case, moving the mouse in a certain direction and clicking. Direction is remembered better than distance, which is what has to be remembered in the case of linear menus.

6.1 Use cases

Typical workflows through the application and the simulation framework tools are described below as use cases.


Figure 6.1: Here, two segments overlap and when the user clicks on the overlapping area, a matrix menu appears under the mouse.

Figure 6.2: The matrix menu after selecting a segment, showing the possible actions on that segment.


6.1.1 Starting the visualization with simulation and tool environment

The user starts the visualization and the remote control; the simulation is started last. Once started, the simulated machine boots up and goes to standby. Once in standby, a test run script can be loaded in the remote control. The user can configure the test run script by adjusting parameters (number of jobs, number of sheets per job, tray number, sheet weight, sheet size, etc.). By adjusting the parameters, specific parts of the software can be tested. Once configuration is done, the script can be executed.

6.1.2 Configuration of selectable/viewable items and GUI

With the visualization open, the user can open the configuration window by going to "File" in the menu bar and selecting "Workspace", or by using the key combination ALT-W. Using the checkboxes in the viewable tab, the parts that are to be shown can be selected or deselected. In the same tab, the labels of those parts can also be shown or hidden. Using the checkboxes in the selectable tab, parts can be made unselectable, as shown in Figure 6.3. The part will still be shown (if checked in the viewable tab), but nothing will happen when it is picked with the mouse.

Figure 6.3: The configuration window showing the selectable tab, which contains checkboxes indicating whether each part is selectable.

The wxWidgets GUI layer can also be changed. Using the options in the view menu in the menu bar, three widgets can be shown or hidden: the parts tree widget, the time manipulation widget and the command interface widget. Hiding them when unused gives more display space to the visualization.

6.1.3 Heat model testing

As mentioned before, the physical model designer is interested in testing heat models. This can be done by starting the system with the process simulation coupled to a heat model. Then, during a run, using the command interface shown at the bottom of Figure 6.4, the simulated temperature values can be changed to a certain state; for instance, the environment temperature can be changed. After that, the user can observe and check whether the temperature model is correct. To observe the exact temperature values, the user can click on a part and choose to show its more detailed status.

Figure 6.4: The GUI with all the widgets. On the left is a tree containing all the parts in the simulation. Below are the Time Manipulation and Command widgets. The widgets can be hidden for more screen space.

6.1.4 Error handling test

To test whether certain errors are handled correctly by the software, sensor values can be set to values that the system should recognize as unacceptable. Setting sensors and actuators is done with their respective windows, shown in Figure 6.5 and Figure 6.6. The system should respond by going to an error state using a certain routine. The user can observe whether this is done correctly.

Figure 6.5: The window for setting a simple (boolean) sensor value. The sensor can be selected by name from the combobox, and its value can be set using the radio buttons; the current value is highlighted. Using the override checkbox, one can choose whether the sensor is held at a constant value or set only once, to be changed by the simulation afterwards.

6.1.5 Error recovery test

Similar to error handling, the embedded software can be tested on whether, when it gets a paper jam, it can recover after the "operator" performs the required actions. The system can be brought into a paper-jam state by stopping a sheet of paper somewhere on the paper path. During a run the paper moves quite fast, so the user may decide to pause the simulation. Then a POI is set at a location where the paper should jam; different locations trigger different error handling routines. By clicking on the segment where the POI should be set, a matrix menu appears and the setPOI option can be selected. If multiple parts overlap at this location, the correct part should be chosen first.

Figure 6.6: The window for setting an actuator value. The actuator can be selected by name from the combobox. The actuator value can be set using the radio buttons, as it is a simple actuator in this case. Additionally, a delay can be set, meaning how long the simulation waits before setting the actuator value after receiving the command. Using the override checkbox, one can choose whether the set value is permanent or set only once.

After the setPOI option is chosen, the setPOI window appears, as shown in Figure 6.7. The user then sets the POI condition values: the POIType, the SegmentPosition and the POI condition number. While changing the SegmentPosition value, a marker is placed on the segment to indicate the location. For a simple case, the POIType can be set to leading edge and the POI condition number to 2. This means that when two leading edges have passed position SegmentPosition on segment SegmentId, the command will be executed. The segment has already been filled in, so it is not necessary to set it.

As the objective is to simulate a paper jam, the action that does this is the stopSheet action. After the stopSheet action has been selected, the menu expands to include the parameters for the stopSheet action. The expanded part is the same as the contents of the stopSheet window shown in Figure 6.8: the segment number and the segment position. These are the same as the setPOI position by default, but can be changed if desired. After setting the parameters for the stopSheet action, everything is set and the POI can be submitted by clicking the send setPOI button.

Now a POI has been set such that when the second leading edge of a sheet passes the POI, the stopSheet action will be executed at the same location. The simulation can now be resumed. After a while, the condition set in the POI becomes valid and the command is executed. This results in a sheet being stopped, and the simulated system should go to a paper-jam error state.
Now an error recovery procedure can be performed by the "operator", meaning the user has to carry out the actions that simulate the operator. The print job should be stopped and the front door of the simulated printer should be unlocked. These actions are simulated by sending the appropriate commands in the remote control. Next, the "operator" has to open the front door to access the printer internals; the opening is simulated by toggling the front door sensor. After this, the RemoveAllSheets command is sent to remove all sheets from the system, and the front door can be closed again by toggling the front door sensor once more. If everything is correct, the system will go back to standby mode and another run of the test script can be performed.

6.2 Verification

Functionality and requirements have been verified by running the use cases described in the previous section.

6.2.1 Functional requirements verification

• Regulation of the system speed can be done using the time manipulation widget.

• Stopping time on a set condition has not been implemented, as the simulation does not offer any conditional time-regulating commands.


Figure 6.7: Window for setting a POI. The POI window contains widgets for all condition values: the SegmentId is set through a combobox, the position using a slider, the POIType again through a combobox, the SegmentConditionValue through a spinner, and the CommandType through a combobox. After a CommandType is selected, the bottom part of the window changes.

Figure 6.8: Window for stopping a sheet. The user has to pick a SegmentId and a position on that segment. The same widget set appears in the setPOI window when the stopSheet CommandType has been selected.


• Manipulation of sensor/actuator values can be done using their respective set-value windows with the correct sensor/actuator name.

• Overriding can be done by checking the override checkbox in the set-value windows.

• Manipulation of values by POIs can be done using the setPOI window.

• Manipulation of sheets can be done through the stopSheet/removeSheet windows.

• Manipulation of sheets by POIs can be done using the setPOI window.

• The interactions and manipulations done and the commands sent during a run are written to a log file.

• Workspace configuration is loaded from a configuration file. The configuration can be changed using the configuration window, but it is not saved back to the configuration file; for now, the configuration file has to be edited manually.

6.2.2 Non-functional requirements verification

• The visualization provides an intuitive representation of the system, and manipulation is done through the same intuitive image. This intuitiveness lowers the learning curve, as no commands have to be learned and no logs have to be read. The direct manipulation and live visualization of the simulation speed up testing, as one no longer has to read the logs to create test cases.

• Stability has been tested by performing multiple long test runs.

• The application runs on an average workstation at Océ R&D. It is also possible to run the simulation and the visualization on different computers.

• The modular architecture allows simple addition of data and feedback interfaces without affecting existing functionality.

The requirements for the basic functionality of the application have been fulfilled. With this, the application can be used by the intended end users.

Chapter 7

Conclusion

During this project, a rich user interface was designed and implemented on top of a 3D visualization framework for Océ-specific visualizations. Users are often intimidated by new tools, as they are unfamiliar and require learning; this is why it is hard to convince developers to adopt new tools. In this project, focus was put on the ease of use and learnability of the user interface, so that the gap between what the user already knows and what is new is kept small, while ensuring new functionality is added. By using a visualization as a base, the user interface is kept strongly connected to what is simulated. Efficiency was emphasized by placing widgets in such a way that users have to perform fewer and smaller actions to achieve what they want. With learnability as a point of focus, widgets were used that are self-explanatory and exploit elements like muscle memory.

The user interface provides a much easier way to construct the command that needs to be sent to the steering interface. While some might argue that typing the command is faster for expert users, this is definitely not the case for normal users.

The final application has clearly shown its benefits for testing embedded software. Before, a simulation had to be run and the log files inspected to check whether everything was correct. Now, with the visualization and steering, the user can see directly what is happening, slow down or pause the simulation, and change values on the fly when desired.

Computational steering has traditionally been applied to very complex simulations that require grids or supercomputers, where it is essential to be able to see and interact with the simulation because a run is time-consuming. This project has shown that computational steering is also useful in testing. By giving the user control over the simulation, we utilize the power of Software-in-the-Loop: during a test, situations that occur rarely or are even thought impossible can be exercised.
By providing an interface that is built on top of the visualization of the simulation, the actions and their effects are strongly connected. This shows how important the user interface is to a steering interface. The final implementation was judged useful, and it will be used now and in future projects. The implementation was built with priority on the functionality considered directly useful by the stakeholders. There are many points where extensions are possible that would increase the usefulness and functionality; these give ideas for continuation and for future projects built on this as a foundation.

Chapter 8

Future Work

Here we discuss possible extensions that are considered future work.

8.1 Recording and replaying saved sessions

By recording all states of the data model, a previously run simulation can be replayed. This is useful for communicating error cases that happen rarely. When the user misses events because they happen unexpectedly, the session can be replayed to view them in more detail. With this ability, the user interface of course needs to be extended to allow the user to navigate precisely through a session.

8.2 Extend the interface with ability to send feedback to the steering interface of the process simulation

Currently the application only offers a GUI for the paper path simulation feedback; for the process simulation feedback it still relies on the command-line interface widget. The process simulation is not entirely disjoint from the paper path simulation, so a mapping should be made of what is simulated in the process simulation and what is done in the paper path simulation.

8.3 Add an edit modus that allows the user to change the simulated machine configuration

Currently, the layout of the simulated printer is only partly generated; much of it has to be edited by hand in an XML layout file, which, as one can imagine, is tedious and time-consuming. An edit mode would greatly improve the tooling by providing a WYSIWYG editor for the layout. This would also encourage design exploration, as changes could be made much more easily and tested immediately.

8.4 Create a more direct mechanism to communicate with the Test Executor and load/save test cases

Test cases for the Test Executor are currently designed by hand and scripted into the Test Executor, which again is time-consuming. Connecting the feedback component with the Test Executor to generate test cases would give an easier-to-use interface. Also, unexpected or rare errors discovered during a session could be loaded as test cases into the Test Executor to be tested more thoroughly.


8.5 POI management system for more complex scenarios

For a simple test case it is still easy to keep track of the actions and POIs that have been set, but when test cases become more complex it is easy to lose the overview. Also, once a POI has been set, it currently cannot be removed if the user made a mistake. A solution would be a POI management system that tracks all POIs and allows POI removal and editing. It would also be desirable for a set of POIs to be saved and loaded again for later use or for communication purposes.

Bibliography

[1] Object Oriented Graphics Rendering Engine (OGRE) home page. http://www.ogre.org.

[2] CEGUI GUI toolkit home page. http://www.cegui.org.uk.

[3] wxWidgets home page. http://www.wxwidgets.org.

[4] Boost library home page. http://www.boost.org.

[5] J. Callahan, D. Hopkins, M. Weiser, and B. Shneiderman. An empirical comparison of pie vs. linear menus. In CHI '88: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 95–100, New York, NY, USA, 1988. ACM.

[6] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. 1954. J Exp Psychol Gen, 121(3):262–269, September 1992.

[7] Don Hopkins. The design and implementation of pie menus. Dr. Dobb's J., 16(12):16–26, 1991.

[8] Amar Kalloe. Simulating a printing process for verifying embedded control software. Master's thesis, Eindhoven University of Technology, 2009.

[9] Gordon P. Kurtenbach, Abigail J. Sellen, and William A. S. Buxton. An empirical evaluation of some articulatory and cognitive aspects of marking menus. Hum.-Comput. Interact., 8(1):1–23, 1993.

[10] Richard Pawson. Naked objects. IEEE Software, 19(4):81–83, 2002.

[11] Eugen Schindler. A multidisciplinary model based engineering framework for development of production printers. Master's thesis, Eindhoven University of Technology, 2008.

[12] Klemens Schindler. Visualization of a mechatronic software-in-the-loop printer simulation system. Master's thesis, Eindhoven University of Technology, 2009.

[13] Robert van Liere and Jarke J. van Wijk. CSE: a modular architecture for computational steering. In Proceedings of the Eurographics Workshop on Virtual Environments and Scientific Visualization '96, pages 257–266, London, UK, 1996. Springer-Verlag.

[14] Jarke J. van Wijk, Robert van Liere, and Jurriaan D. Mulder. Bringing computational steering to the user. Technical report, CWI (Centre for Mathematics and Computer Science), Amsterdam, The Netherlands, 1997.
