2016 International Conference on Control and Automation (ICCA 2016) ISBN: 978-1-60595-329-8

PP-Menus: Localizing Pie Menus by Pressure

Xiao-ming WANG1,a, Pei WANG1 and Chuan-yi LIU1,b,*
1School of Information Science & Engineering, Lanzhou University, Lanzhou, China
[email protected], [email protected]
*Corresponding author

Keywords: Pressure input, Pen-based interfaces, Tracking menus, Pie menus, PP-Menus.

Abstract. We present PP-Menus, a novel technique for enhancing the performance of pen-based interfaces. PP-Menus are pressure-based pie menus, implemented by using pen input pressure values to localize traditional pie menus. A PP-Menu possesses two states, i.e. a stationary state and a tracking state, and a variation of pen input pressure switches it between the two. A quantitative experiment shows that PP-Menus significantly outperformed tracking menus in task speed, with no significant difference between the two types of menus in accuracy.

Introduction

When users perform a series of operations in a common interface, they typically have to move the cursor repeatedly between targets and commands. Repeated trips between targets and the command zone are usually time-consuming and monotonous. It is therefore valuable to localize command selection in the workspace. In a desktop system, context-aware menus are widgets for localizing commands [1]; they can be activated by a click of the right mouse button. Another way to invoke a local command quickly is via shortcut keys. But there is usually no mouse or keyboard in a pen-based interface. A barrel button can also be used to activate a popup menu, but it is awkward for a user to press the barrel button on a pen while holding it. Buttons on the bezel of a display can serve as partial shortcut keys, but pressing such keys is also awkward, since we usually hold a pen-based device in one hand and its pen in the other. Furthermore, shortcut keys impose memory burdens on users. The tracking menu [2] is a valuable widget for offering local commands in pen-based interfaces; it can be used in a working place without barrel buttons or shortcut keys. A tracking menu has two states, stationary and tracking, which can only be switched automatically when the cursor touches the menu's edge. In other words, a tracking menu can be pushed by the cursor at its edge and then move together with the cursor. To keep the tracking menu in the tracking state, the pen tip cannot be lifted above the height represented by the menu's edge. A tracking menu's edge is a metaphor of the pen tip's sensing range above a tablet or display; this range is approximately 15 millimeters for a typical pen-based device (e.g. a Wacom tablet). The height constraint limits a tracking menu's movement, especially over large distances. Furthermore, the cursor cannot escape from the tracking menu until the menu is dismissed.
Other methods of making menus instantly close at hand on demand should therefore be investigated for pen-based systems. On the other hand, there are extra input channels, e.g. pressure, tilt and azimuth, that can be exploited for command-localizing interfaces. In this paper, we propose a novel widget that utilizes pressure input to localize command selection. We need not change the pen's posture to utilize the pressure input channel, which means we can exploit pen input pressure in a natural way. Human beings can manipulate pressure input effectively [3], and pressure has performed rather well in mode switching [4]; therefore, the pressure input channel was chosen to switch PP-Menus' state. Pressure variation marks [5] can be mapped to operation commands. We need not lift the pen tip from the surface of a pen-based device to change pressure input, so we can keep operation fluid on the device's surface while switching a PP-Menu's state. And we can feel the variation of pressure input through our fingers' touch.

This process is therefore eyes-free, and we can keep our visual focus mainly on the targets most of the time. PP-Menus are pressure-based pie menus [6, 7] (see Fig. 1). PP-Menus have two states, i.e. a tracking state and a stationary state: a PP-Menu tracks the cursor in the tracking state and remains stationary in the stationary state. A pressure variation switches a PP-Menu's state. A PP-Menu can usually dock at any position in the workspace. When we need to select a command from a PP-Menu, we can change its state from stationary to tracking through a simple pressure variation mark, and the PP-Menu instantly comes near the cursor. We press the pen tip on the surface harder than the normal input level and then relieve the pressure back to the normal level; this makes a pressure variation mark and changes the PP-Menu's state. We change a PP-Menu's state from tracking back to stationary in the same way. A PP-Menu has a normal size, like common interface widgets, with a collection of tools arranged in a radial pattern [7]. Thus, the functions of a PP-Menu can be explored like those of other common GUI widgets. Unlike a tracking menu, a PP-Menu is visible all the time, and the cursor can travel freely between the menu and the workspace when the menu is in the stationary state. When the PP-Menu is in the tracking state, the cursor always stays in the workspace and the PP-Menu keeps tracking it. To avoid visual occlusion, the PP-Menu is usually located at the bottom left of the cursor (for right-handed users) or at the bottom right (for left-handed users). To remain visible in the display, the PP-Menu automatically relocates when the cursor touches any display edge except the top one. A quantitative experiment was conducted to compare PP-Menus and tracking menus [2]. The results show that tasks were performed significantly faster with PP-Menus than with tracking menus, while task error rates between the two types of menus were not significantly different.
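The paper gives no code for the placement rule (dock at the cursor's lower left for right-handed users, relocate near display edges); the following is a minimal sketch in which the menu size, screen size and exact offsets are our assumptions:

```python
# Hedged sketch of the tracking-state placement rule: the menu hangs at the
# cursor's lower left (right-handed users) or lower right (left-handed users)
# and is clamped so it stays fully visible near the left, right and bottom
# display edges. All sizes and offsets here are illustrative assumptions.

def menu_position(cursor_x, cursor_y, menu_size=120,
                  screen_w=1280, screen_h=1024, right_handed=True):
    """Return the top-left corner of the menu for the current cursor position."""
    dx = -menu_size if right_handed else 0   # lower-left vs. lower-right
    x, y = cursor_x + dx, cursor_y           # menu sits just below the cursor
    # Relocate (clamp) so the menu never leaves the display.
    x = max(0, min(x, screen_w - menu_size))
    y = max(0, min(y, screen_h - menu_size))
    return x, y
```

For a right-handed user with the cursor at (640, 500) this yields (520, 500); near the left or bottom edge the position is clamped so the menu remains on screen instead of disappearing.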

Related Work

Pop-up menus [1] appear near the cursor, affording command selection in the work place; users need not move back and forth between a fixed command zone and the work place, which reduces operation time. But a pop-up menu is usually activated by a mouse event, and a mouse is usually unavailable in a pen-based system. Gestures are effective for localizing UI, and there have been several gesture-based systems for pen input [8-11]. But gestures can be confused with inking input and are sometimes falsely recognized. Furthermore, gestures impose learning and memory burdens on users, and gesture-based systems usually give users little support for exploring functions. Motion gesture interactions [12-14] are useful for smartphone input, especially distracted smartphone input. These interactions treat smartphone motions as gestures, and the motions can be sensed by users via proprioception. Motion gestures could arguably also offer local commands, but they are not suitable for some other pen-based devices, e.g. tablet PCs, because of those devices' sizes and conditions of use. Marking menus [15, 16] and their descendants, e.g. control menus [17], can afford localized commands. These techniques are suitable both for desktop systems with a mouse and keyboard and for pen-only systems. But they make it difficult for novice users to explore functions, since the menus are initially invisible: to explore the functions, users must wait a threshold of time for a marking menu to become visible, which decreases interaction efficiency. Hover widgets [18] are valuable for pen-based systems. They utilize tracking-state movements to create a command layer clearly distinct from the input layer of a pen interface, and localized UI is achieved through this command layer. However, hover widgets are also gesture-based techniques, which impose learning and memory burdens on users.
HandyWidgets [19] are another type of widget that can be localized around users' hands. HandyWidgets are invoked by a bimanual multi-touch gesture, which is only suitable for multi-touch surfaces. Fitzmaurice et al. [2] proposed tracking menus for pen-based interfaces. Tracking menus evolved from pie menus [6]. A tracking menu has two states, i.e. stationary and tracking, and its state can be switched automatically when the cursor travels across its edge. A tracking menu has to be dismissed when the cursor leaves it and enters the workspace. Fitzmaurice et al. [20] shrank a tracking menu to the size of a cursor, producing the PieCursor. The PieCursor technique merges the normal cursor function of pointing with command selection into a single action, so it can afford localized UI in a more efficient way; but its minimal size makes it difficult for users to explore its functions. The pressure input channel has been widely exploited in pen-based interfaces [3, 5, 21, 22]. Ramos and Balakrishnan [5] proposed using pressure variation marks as commands so that selection and action happen seamlessly. Li et al. [4] investigated utilizing pressure for mode switching; in their experiment, pressure performed rather well.

PP-Menus

PP-Menus were developed from pie menus [6, 7]. We defined a pressure threshold much higher than the normal pressure values that might appear in inking tasks; the threshold was intentionally set at an abnormal level to avoid accidental switches. Whenever the pen input pressure varies across the given threshold, a switch between the two states of the PP-Menu is triggered. In the experimental interface, the PP-Menu was initially positioned in the top left corner of the workspace. There was a pushpin button (see Fig. 1) in the PP-Menu; clicking on it could also change the PP-Menu's state, but only while the PP-Menu was in the stationary state.
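The threshold rule can be sketched as a small detector over the pen's stream of pressure samples. This is an illustrative sketch only: the threshold value, class name and toggle-on-release behavior are our assumptions, chosen to match the description of pressing harder than normal and then relieving the pressure.

```python
# Sketch of detecting a pressure variation mark: a press above a high
# threshold followed by a release back below it. One complete mark toggles
# the PP-Menu between its stationary and tracking states. The threshold
# value (800 of the pen's 1024 levels) is an assumption, not from the paper.

class PressureMarkDetector:
    def __init__(self, threshold=800):
        self.threshold = threshold   # well above normal inking pressure
        self.above = False           # currently pressing harder than threshold?
        self.tracking = False        # PP-Menu state: tracking vs. stationary

    def on_pressure(self, p):
        """Feed one pressure sample (0..1023 on the pen used in the paper).
        Returns True when the menu state was just toggled."""
        if not self.above and p >= self.threshold:
            self.above = True        # rising edge: pressed hard
            return False
        if self.above and p < self.threshold:
            self.above = False       # falling edge: pressure relieved
            self.tracking = not self.tracking  # complete mark toggles state
            return True
        return False
```

Toggling only on the falling edge means an ordinary hard press that is held does not flip the state back and forth; the switch fires once per complete press-and-relieve mark.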

Figure 1. A PP-Menu used in a 3D drawing system.¹
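The radial layout in Fig. 1 implies that an item is chosen by the cursor's direction from the menu center. A minimal sketch of such angle-to-slice hit testing follows; the slice numbering and orientation (item 0 centered on east, indices increasing counter-clockwise) are our assumptions, not from the paper.

```python
import math

def pie_item(center_x, center_y, cursor_x, cursor_y, n_items=8):
    """Map a cursor position to a pie slice index (0..n_items-1).
    Item 0 is centered on 'east'; indices increase counter-clockwise."""
    # Screen y grows downward, so negate the y difference for a math angle.
    angle = math.atan2(center_y - cursor_y, cursor_x - center_x)
    slice_width = 2 * math.pi / n_items
    # Shift by half a slice so each item is centered on its direction.
    return int(((angle + slice_width / 2) % (2 * math.pi)) // slice_width)
```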

Experiment

The experiment investigated the performance of the two kinds of menus, PP-Menus and tracking menus, in terms of task speed, accuracy and participants' subjective comments.

Apparatus and Participants

We used a Wacom tablet with a pressure-sensitive pen as the input device. The pen provided 1024 levels of pressure and had a binary button on its barrel. The tablet's active area was mapped onto the screen's visual area in absolute mode. The experiment ran in full-screen mode on a Samsung 18-inch flat panel LCD monitor at a resolution of 1280 by 1024 pixels. The experimental software was programmed in Java on a 3.01 GHz Core Duo PC. Eight volunteers (6 male and 2 female), aged from 22 to 35 and recruited from our university, participated in the experiment. All of them were right-handed and familiar with the operation of computers, but none of them had experience with pie menus or tracking menus.

¹This figure is taken from our 3D drawing prototype system, which was mentioned in [23].

Task and Stimuli

Figure 2. Experimental logical interface. The dashed lines did not appear in the experiment.

The target object was a 50 × 50 pixel square with a green border (see Fig. 2). The target object had a 60 × 60 pixel sensible area centered on it. The target object appeared in 8 pie slices, each covering 45 degrees and oriented in one of 8 directions. There were three distances from the center of the experimental interface to the center of the target object, i.e. 100, 200 and 300 pixels. The experimental PP-Menu and tracking menu both had 8 buttons, each labeled with a number from 1 to 8; the buttons stood for abstract commands. The 8 numbers were likewise located in 8 pie slices, each covering 45 degrees and oriented in one of 8 directions. A red start button was located at the center of the experimental interface, and the subjects clicked it to begin each trial. In each trial, the participants were asked to 1) move the cursor from the start button into the target shape, 2) activate one of the two types of menus and move it into the sensible area of the target shape, and 3) select the given target menu item. If a subject pressed the pen tip inside the target shape, the shape was filled with green and a pleasant sound was played to signal a successful first step; otherwise, the target shape was filled with red and an alert sound signaled a failed operation. In the second step, the operation failed if the target menu was not activated or did not intersect the target object's sensible area. In the last step, the operation failed if the target menu item was missed. If the operation failed in any step, an error was recorded and the trial was repeated from the first step. The experiment did not proceed to the next trial until the current trial was completed successfully.
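The target layout described above (8 orientations in 45-degree steps at distances of 100, 200 and 300 pixels from the interface center) can be sketched as follows. The convention that orientation 0 points east with indices running counter-clockwise is our assumption:

```python
import math

def target_center(center_x, center_y, orientation, distance):
    """Center of the target for orientation 0..7 (45-degree steps) at a
    distance of 100, 200 or 300 pixels from the interface center."""
    angle = orientation * math.pi / 4
    x = center_x + distance * math.cos(angle)
    y = center_y - distance * math.sin(angle)   # screen y grows downward
    return round(x), round(y)
```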
Move time was recorded from when the start button was pressed until the cursor entered the target menu. Selection time was computed from when the cursor entered the target menu until a menu item was selected.

Procedure and Design

A within-subjects full factorial design with repeated measures was used. The dependent variables were trial time and error rate. The control factors included menu type (2 levels), block (3 levels), target object orientation (8 levels), target object distance (3 levels) and target menu item (8 levels). Each trial was repeated 5 times. In summary, the experiment consisted of 8 participants × 2 menus × 3 blocks × 8 orientations (N, NW, W, SW, S, SE, E, NE) × 3 distances (short, medium, long) × 8 target menu items × 5 repetitions = 46,080 trials. To counterbalance order effects, half of the subjects began with the PP-Menu tasks and the other half with the tracking menu tasks. The orders of the target object's direction and distance and of the target menu item's orientation were randomized across subjects. Before the formal experiment, the subjects were given a detailed description and explanation of the experimental task. They had five minutes of practice to familiarize themselves with the menus and the tasks, and were requested to complete each trial as quickly and accurately as possible. Participants could take breaks between blocks, and the experiment lasted approximately three and a half hours per participant. A questionnaire was used to gather participants' subjective opinions.
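The full factorial above multiplies out to the reported total; a one-line check with the factor counts taken from the design:

```python
# Factor counts from the experimental design; multiplying them reproduces
# the reported 46,080 trials.

def total_trials(participants=8, menus=2, blocks=3, orientations=8,
                 distances=3, items=8, repetitions=5):
    return (participants * menus * blocks * orientations
            * distances * items * repetitions)
```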

Results

We conducted a 2 (menu) × 3 (block) × 8 (orientation) × 3 (distance) × 8 (target menu item) RM-ANOVA on trial time and error, with Bonferroni-corrected post-hoc comparisons. We first analyzed trial time and error rates across the main factors of menu type, block, distance, orientation and target menu item, and then analyzed participants' subjective comments.

Average Task Time

There was a significant main effect of menu type on total time (F(1, 7) = 18.13, p < .01) (see Fig. 3). Fig. 3 illustrates that the move time of the PP-Menu was much shorter than that of the tracking menu; this might be caused by the different trigger modes of the two types of menus. The PP-Menu allowed the participants to activate a state switch in a more natural way, and according to participants' subjective comments, the tracking menu's method of state switching was more fatigue-prone. All of these factors might slow down the move process with the tracking menu.

Figure 3. Average time across menu type. Figure 4. Average time across menu type and distance.

There was also a significant interaction effect of menu type × distance on total time (F(2, 14) = 9.20, p < .01). As Fig. 4 shows, at every distance the PP-Menu was faster than the tracking menu in total task time. For the PP-Menu, total time differed little across the three distances; for the tracking menu, by contrast, total time clearly increased with distance. So the longer the distance, the greater the PP-Menu's advantage (see Fig. 4).

Figure 5. Average time across menu type and target object orientation. Figure 6. Average time across menu type and menu item orientation.

Fig. 5 illustrates that there was no interaction effect of menu type × target object orientation on total time (F(7, 49) = 0.16, n.s.). As expected, the performance of each type of menu was almost the same across the target object's orientations on the screen. The different trigger modes of the two menu types might lead to different hand postures, which might in turn affect the speed of choosing a menu item; we therefore analyzed the interaction effect of menu type × menu item orientation. As Fig. 6 shows, there was no significant effect of menu type × menu item on total time (F(7, 49) = 1.92, p > 0.05).

Error Rate

There was no significant main effect of menu type on error rate (F(1, 7) = 1.40, p > 0.05) (see Fig. 7). Among the three types of errors, only the menu activation error rate of the PP-Menu was a little higher than that of the tracking menu; this might be because the participants had little experience manipulating the pressure values on which a PP-Menu's operation relies.

Figure 7. Average error rate across menu type. Figure 8. Average error rate across menu type and menu item orientation.

As Fig. 8 illustrates, there was no significant interaction effect of menu type × target menu item orientation on error rate (F(7, 49) = 0.89, n.s.).

Learning Effect

There was a significant interaction effect of menu type × block on total time (F(2, 14) = 8.51, p < 0.01) (see Fig. 9). There was also a significant interaction effect of menu type × block on error rate (F(2, 14) = 4.10, p < 0.05) (see Fig. 10). From these results, we can see that the learning effects for each menu type were significant for both task speed and accuracy.

Figure 9. Average time across block and menu type. Figure 10. Average error rate across block and menu type.

Subjective Comments

After the quantitative experiment, the participants were asked to complete a subjective questionnaire covering each menu type's usability, fatigue, personal preference and open-ended advice. Most of the participants thought PP-Menus were a little better than tracking menus in usability and fatigue. Some of them believed that visual feedback of pen input pressure would make PP-Menus perform better.

Discussion

In traditional GUIs, menus are usually stationary. When we want to choose an operation command, we have to move the cursor to a fixed menu area, click on a menu there, and then move the cursor back to the workspace. In a series of operations, repeated trips between tools and workspace are boring and time-consuming. There are many techniques for localizing command selection. In desktop systems, we can choose commands in the workspace with pop-up menus or shortcut keys. We can also use gestures to localize command selection, but it is sometimes difficult to distinguish gestures from inking strokes, since they share the same input channel. Tracking menus are valuable for localizing command selection in pen-based systems: we can push a tracking menu at its edge and let it track the cursor. However, we can push a tracking menu only under a constraint, i.e. the pen tip cannot rise above the tracking menu's edge. Of course, that is only a metaphor of the real constraint, i.e. the pen tip cannot leave the tablet's or display's sensing range. A typical pen-based device's sensing range is constrained to a rather small space; for example, a Wacom tablet can only sense its pen tip within approximately 15 millimeters. Moreover, a tracking menu must be dismissed to allow the cursor to enter the workspace. There are no such constraints for PP-Menus: we can move the pen in any way, and a PP-Menu can be instantly relocated near the cursor by a pressure variation. This might be why PP-Menus performed much faster than tracking menus in the move process. We can rank the aforementioned widgets by their activeness. Traditional menus are the laziest; they do not track the cursor in any way. Tracking menus are more active: we can push a tracking menu and let it track the cursor. PP-Menus are the most active: a PP-Menu actively tracks the cursor in its tracking state.
Error rates of the task with PP-Menus were a little higher than those with tracking menus. This may be because none of the participants was familiar with the pressure input channel; it was a little difficult for them to manipulate pressure input at the beginning. After they became more familiar with manipulating pressure input, they performed better in the last two blocks. The participants could feel the variation of pressure input through their fingers' touch. This means that manipulation of a PP-Menu is eyes-free, which is valuable for keeping users' visual focus mainly on the operated targets.

Conclusions and Future Work

We compared two localizing menu techniques in this paper. Both tracking menus and PP-Menus are valuable in pen-based systems. PP-Menus are pressure-based pie menus; through the pressure input channel, a PP-Menu's state can be switched between tracking and stationary on the spot. In traditional interfaces, we operate in a cursor-to-menu manner, i.e. we move the cursor to a menu to select an operation command. In this paper, we proposed the opposite, menu-to-cursor manner, i.e. making a menu move to the cursor instantly to offer command selection. The quantitative experiment illustrated that this novel manner is more efficient for pen-based interfaces than the traditional one. In future work, we will investigate exploiting other pen input channels for PP-Menus.

Acknowledgement

This work was supported by the Fundamental Research Funds for the Central Universities (No. lzujbky-2012-37).

References

[1] D.R. Airth, Navigation in pop-up menus, INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems, ACM, Amsterdam, The Netherlands, 1993, pp. 115-116.
[2] G. Fitzmaurice, A. Khan, R. Pieké, B. Buxton, G. Kurtenbach, Tracking menus, Proceedings of UIST 2003, ACM, 2003, pp. 71-80.
[3] G. Ramos, M. Boulos, R. Balakrishnan, Pressure widgets, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Vienna, Austria, 2004, pp. 487-494.
[4] Y. Li, K. Hinckley, Z. Guan, J.A. Landay, Experimental analysis of mode switching techniques in pen-based user interfaces, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Portland, Oregon, USA, 2005, pp. 461-470.
[5] G.A. Ramos, R. Balakrishnan, Pressure marks, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, San Jose, California, USA, 2007, pp. 1375-1384.
[6] J. Callahan, D. Hopkins, M. Weiser, B. Shneiderman, An empirical comparison of pie vs. linear menus, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Washington, D.C., USA, 1988, pp. 95-100.
[7] D. Hopkins, The design and implementation of pie menus, Dr. Dobb's Journal, 16 (1991) 16-26.
[8] R.C. Zeleznik, K.P. Herndon, J.F. Hughes, SKETCH: An interface for sketching 3D scenes, Proceedings of ACM SIGGRAPH 96, ACM, 1996, pp. 163-170.
[9] T.P. Moran, P. Chiu, W. van Melle, Pen-based interaction techniques for organizing material on an electronic whiteboard, Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, ACM, Banff, Alberta, Canada, 1997, pp. 45-54.
[10] M.W. Newman, J. Lin, J.I. Hong, J.A. Landay, DENIM: An informal web site design tool inspired by observations of practice, Human-Computer Interaction, 18 (2003) 259-324.
[11] E. Saund, E. Lank, Stylus input and editing without prior selection of mode, Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, ACM, Vancouver, Canada, 2003, pp. 213-216.
[12] D. Ashbrook, T. Starner, MAGIC: a motion gesture design tool, Proceedings of the 28th International Conference on Human Factors in Computing Systems, ACM, Atlanta, Georgia, USA, 2010, pp. 2159-2168.
[13] J. Ruiz, Y. Li, E. Lank, User-defined motion gestures for mobile interaction, Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, ACM, Vancouver, BC, Canada, 2011, pp. 197-206.
[14] M. Negulescu, J. Ruiz, Y. Li, E. Lank, Tap, swipe, or move: attentional demands for distracted smartphone input, International Working Conference on Advanced Visual Interfaces, 2012, pp. 173-180.
[15] G. Kurtenbach, W. Buxton, User learning and performance with marking menus, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Boston, Massachusetts, USA, 1994, pp. 258-264.
[16] G.P. Kurtenbach, A.J. Sellen, W.A.S. Buxton, An empirical evaluation of some articulatory and cognitive aspects of marking menus, Human-Computer Interaction, 8 (1993) 1-23.
[17] S. Pook, E. Lecolinet, G. Vaysseix, E. Barillot, Control menus: execution and control in a single interactor, CHI '00 Extended Abstracts on Human Factors in Computing Systems, ACM, The Hague, The Netherlands, 2000, pp. 263-264.
[18] T. Grossman, K. Hinckley, P. Baudisch, M. Agrawala, R. Balakrishnan, Hover widgets: using the tracking state to extend the capabilities of pen-operated devices, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Montréal, Québec, Canada, 2006, pp. 861-870.
[19] T. Yoshikawa, B. Shizuki, J. Tanaka, HandyWidgets: local widgets pulled-out from hands, Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, 2012, pp. 197-200.
[20] G. Fitzmaurice, J. Matejka, A. Khan, M. Glueck, G. Kurtenbach, PieCursor: merging pointing and command selection for rapid in-place tool switching, Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, ACM, Florence, Italy, 2008, pp. 1361-1370.
[21] D. Mandalapu, S. Subramanian, Exploring pressure as an alternative to multi-touch based interaction, Proceedings of the 3rd International Conference on Human Computer Interaction, ACM, Bangalore, India, 2011, pp. 88-92.
[22] C.F. Herot, G. Weinzapfel, One-point touch input of vector information for computer displays, Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, ACM, 1978, pp. 210-216.
[23] C. Liu, X. Ren, Fluid and natural pen interaction techniques by utilizing multiple input parameters, International Journal of Innovative Computing, Information and Control, 6 (2010) 2103-2111.