MIAMI: A Taxonomy of Multimodal Interaction in the Human Information Processing System


MIAMI: A Taxonomy of Multimodal Interaction in the Human Information Processing System
A Report of the ESPRIT Project 8579 MIAMI, WP 1, February 1995

Written by:
L. Schomaker*, J. Nijtmans (NICI)
A. Camurri, F. Lavagetto, P. Morasso (DIST)
C. Benoît, T. Guiard-Marigny, B. Le Goff, J. Robert-Ribes, A. Adjoudani (ICP)
I. Defée (RIIT)
S. Münch* (UKA)
K. Hartung*, J. Blauert (RUB)
*Editors

Abstract: This document has been prepared in the ESPRIT BRA No. 8579, Multimodal Integration for Advanced Multimedia Interfaces (in the following referred to as MIAMI), in order to serve as a basis for future work. The basic terms which will be used in MIAMI will be defined, and an overview of man-machine interfaces will be given. The term "taxonomy" is used in the following sense, adapted from [217]: "1: the study of the general principles of scientific classification: SYSTEMATICS; 2: CLASSIFICATION; specif: orderly classification of plants and animals according to their presumed natural relationships"; but instead of plants and animals, we attempt to classify input and output modalities.

@techreport{Schomaker-et-al-Taxonomy1995,
  author      = {L. Schomaker and J. Nijtmans and A. Camurri and F. Lavagetto and P. Morasso and C. Benoit and T. Guiard-Marigny and B. Le Goff and J. Robert-Ribes and A. Adjoudani and I. Defee and S. Munch and K. Hartung and J. Blauert},
  institution = {Nijmegen University, NICI},
  keywords    = {handwriting-recognition},
  title       = {A Taxonomy of Multimodal Interaction in the Human Information Processing System: Report of the Esprit Project 8579 MIAMI},
  year        = {1995}
}

Contents

1 Introduction  1
  1.1 Definitions of Basic Terms  1
    1.1.1 The basic model for human-computer interaction  1
    1.1.2 Levels of observation  3
    1.1.3 (Multi-) Modality  5
    1.1.4 Multimodal vs. multimedia vs. virtual reality system  6
    1.1.5 Communication channels  8
  1.2 Additional Notes and Caveats  10
    1.2.1 An extra information loop?  10
    1.2.2 Notes with respect to intention  11
    1.2.3 Topics which had to be excluded from this report  12
  1.3 Structure of the Document  12
2 Perception  15
  2.1 Human Input Channels  15
    2.1.1 Input modalities  17
    2.1.2 Vision  18
    2.1.3 Hearing  19
    2.1.4 Somatic senses  21
  2.2 Computer Output Media  22
    2.2.1 Output modalities  22
    2.2.2 Visual output  22
    2.2.3 Acoustical output  23
    2.2.4 Tactile/haptic output  26
  2.3 Bi- and Multimodal Perception  30
    2.3.1 Visual-acoustical perception  30
    2.3.2 Visual-speech perception  31
3 Control and Manipulation  39
  3.1 Human Output Channels  39
    3.1.1 Cybernetics: Closed-loop control  40
    3.1.2 Open-loop models  40
    3.1.3 Coordinative structure models  42
    3.1.4 Relevance for multimodal interaction  43
  3.2 Computer Input Modalities  44
    3.2.1 Keyboards  46
    3.2.2 Mice  47
    3.2.3 Pens  48
    3.2.4 Cameras  52
    3.2.5 Microphones  52
    3.2.6 3D input devices  52
    3.2.7 Other input devices  53
    3.2.8 Generalized input devices  54
  3.3 Event Handling Architectures in CIM  56
    3.3.1 Within-application event loops: GEM, X11  56
    3.3.2 Event-routine binding: Motif, Tcl/Tk  57
  3.4 Bi- and Multimodal Control  57
    3.4.1 Visual-gestural control  57
    3.4.2 Handwriting-visual control  59
    3.4.3 Handwriting-speech control  60
    3.4.4 Visual-motoric control  66
4 Interaction  73
  4.1 Architectures and Interaction Models  73
  4.2 Input/Output Coupling  86
  4.3 Synchronization  87
    4.3.1 Object synchronization  87
    4.3.2 Complexity of information  88
  4.4 Virtual Reality?  89
  4.5 Analysis of Interaction  91
5 Cognition  93
  5.1 Cognition in Humans  93
    5.1.1 Symbolic, subsymbolic, and analogical  96
    5.1.2 High-level representations  96
    5.1.3 Human learning and adaptation  97
    5.1.4 Hybrid interactive systems  98
  5.2 (Intelligent) Agents and Multimedia  99
    5.2.1 Application Scenarios  100
6 Scenarios & Dreams  103
  6.1 The Multimodal Orchestra  103
  6.2 Multimodal Mobile Robot Control  104
A An Introduction to Binaural Technology  109
  A.1 The Ears-and-Head Array: Physics of Binaural Hearing  111
    A.1.1 Binaural recording and authentic reproduction  112
    A.1.2 Binaural measurement and evaluation  112
    A.1.3 Binaural simulation and displays  113
  A.2 Psychophysics of Binaural Hearing  115
    A.2.1 Spatial hearing  120
    A.2.2 Binaural psychoacoustic descriptors  120
    A.2.3 Binaural signal enhancement  120
  A.3 Psychology of Binaural Hearing  121
B Audio-Visual Speech Synthesis  126
  B.1 Visual Speech Synthesis from Acoustics  126
    B.1.1 Articulatory description  130
    B.1.2 Articulatory synthesis  133
  B.2 Audio-Visual Speech Synthesis from Text  135
    B.2.1 Animation of synthetic faces  135
    B.2.2 Audio-visual speech synthesis  137
C Audio-Visual Speech Recognition  140
  C.1 Integration Models of Audio-Visual Speech by Humans  140
    C.1.1 General principles for integration  141
    C.1.2 Five models of audio-visual integration  142
    C.1.3 Conclusion  145
    C.1.4 Taxonomy of the integration models  146
  C.2 Audio-Visual Speech Recognition by Machines  147
    C.2.1 Audio-visual speech perception by humans  148
    C.2.2 Automatic visual speech recognition  148
    C.2.3 Automatic audio-visual speech recognition  149
    C.2.4 Current results obtained at ICP  150
    C.2.5 Forecast for future works  153
D Gesture Taxonomies  156
  D.1 Hand Gestures Taxonomy  156
E Two-dimensional Movement in Time  158
  E.1 The Pen-based CIM/HOC  159
  E.2 Textual Data Input  160
    E.2.1 Conversion to ASCII  160
    E.2.2 Graphical text storage  162
  E.3 Command Entry  162
    E.3.1 Widget selection  162
    E.3.2 Drag-and-drop operations  163
    E.3.3 Pen gestures  163
    E.3.4 Continuous control  163
  E.4 Handwriting and Pen Gestures COM  163
  E.5 Graphical Pattern Input  164
    E.5.1 Free-style drawings  164
    E.5.2 Flow charts and schematics  165
    E.5.3 Miscellaneous symbolic input  165
  E.6 Known Bimodal Experiments in Handwriting  166
    E.6.1 Speech command recognition and pen input  166
    E.6.2 Handwriting recognition and speech synthesis  166
Bibliography  167

Chapter 1 Introduction

In this chapter we introduce our underlying model of human-computer interaction, which will influence the whole structure of this document as well as our research in MIAMI. The next step will be the definition of basic terms (more specific terms will follow in later sections) and a statement of what will be included in and excluded from the project, respectively. Some 'philosophical' considerations follow in section 1.2, and finally we give an overview of the structure of this report.

1.1 Definitions of Basic Terms

One thing most publications on multimodal systems have in common is each author's own usage of the basic terms of the topic. Therefore, we want to state at the beginning of this report what our basic model looks like, how we are going to use several terms within this document, and which levels of perception and control we are considering in MIAMI.

1.1.1 The basic model for human-computer interaction

In order to depict a taxonomy of multimodal human-computer interaction we have to clarify a number of concepts and issues. The first assumption is that there are minimally two separate agents involved, one human and one machine. They are physically separated, but are able to exchange information through a number of information channels. As schematically shown in figure 1.1, we will make the following definitions.
Recommended publications
  • Motion and Context Sensing Techniques for Pen Computing
    Motion and Context Sensing Techniques for Pen Computing
    Ken Hinckley (Microsoft Research, Redmond, WA, USA), Xiang 'Anthony' Chen (Microsoft Research and Carnegie Mellon University Dept. of Computer Science), and Hrvoje Benko (Microsoft Research)

    ABSTRACT: We explore techniques for a slender and untethered stylus prototype enhanced with a full suite of inertial sensors (three-axis accelerometer, gyroscope, and magnetometer). We present a taxonomy of enhanced stylus input techniques and consider a number of novel possibilities that combine motion sensors with pen stroke and touchscreen inputs on a pen + touch slate. These inertial sensors enable motion-gesture inputs, as well as sensing the context of how the user is holding or using the stylus, even when the pen is not in contact with the tablet screen. Our initial results suggest that sensor-enhanced stylus input offers a potentially rich modality to augment interaction with slate computers.

    Fig. 1: Our wireless prototype has accelerometer, gyro, and magnetometer sensors in a ~19 cm × 11.5 mm diameter stylus.

    Keywords: Stylus, motion sensing, sensors, pen+touch, pen input. Index Terms: H.5.2 Information Interfaces & Presentation: Input.

    1 INTRODUCTION
    Our system employs a custom pen augmented with inertial sensors (accelerometer, gyro, and magnetometer, each a 3-axis sensor, for nine total sensing dimensions) as well as a low-power radio. The stylus prototype thus supports fully untethered operation in a slender profile with no protrusions (Fig. 1). This allows us to explore numerous interactive possibilities that were cumbersome in previous systems: our prototype supports direct input on tablet displays, allows pen tilting and other motions far from the digitizer, and uses a thin, light, and wireless stylus.
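As a loose illustration of the motion-gesture sensing the excerpt describes (not the authors' actual technique; the threshold and sample readings below are invented for the sketch), a detector can flag a gesture whenever the 3-axis accelerometer magnitude deviates sharply from gravity:

```python
import math

GRAVITY = 9.81          # m/s^2
SHAKE_THRESHOLD = 6.0   # deviation from 1 g; an assumed tuning value

def is_shake(accel_samples, threshold=SHAKE_THRESHOLD):
    """Flag a motion gesture when any 3-axis accelerometer sample's
    magnitude deviates strongly from gravity (pen at rest reads ~1 g)."""
    for ax, ay, az in accel_samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > threshold:
            return True
    return False

resting = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7)]     # pen lying still
shaken = [(0.1, 0.2, 9.8), (12.0, -8.0, 15.0)]   # vigorous motion
print(is_shake(resting), is_shake(shaken))  # False True
```

A real sensor-enhanced stylus would fuse all nine sensing dimensions with far more robust filtering, but the gravity-deviation test shows the basic signal being exploited.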
  • An Empirical Study in Pen-Centric User Interfaces: Diagramming
    EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (2008), C. Alvarado and M.-P. Cani (Editors)

    An Empirical Study in Pen-Centric User Interfaces: Diagramming
    Andrew S. Forsberg (Brown University, Providence, RI, USA), Andrew Bragdon (Brown University), Joseph J. LaViola Jr. (University of Central Florida, Orlando, FL, USA), Sashi Raghupathy (Microsoft Corporation, Redmond, WA, USA), Robert C. Zeleznik (Brown University)

    Abstract: We present a user study aimed at helping understand the applicability of pen computing in desktop environments. The study applied three mouse-and-keyboard-based and three pen-based interaction techniques to six variations of a diagramming task. We ran 18 subjects from a general population, and the key finding was that while the mouse-and-keyboard techniques generally were comparable to or faster than the pen techniques, subjects ranked the pen techniques higher and enjoyed them more. Our contribution is the results from a formal user study that suggest a broader applicability of, and subjective preference for, pen user interfaces than the niche PDA and mobile market they currently serve.

    Categories and Subject Descriptors (according to ACM CCS): H.5.2 [User Interfaces]: Evaluation/Methodology

    1. Introduction
    Research on pen computing can be traced back at least to the early 60's. Curiously though, there is little formal understanding of when, where, and for whom pen computing is the user interface of choice. … superficially appears pen-centric, users will in fact derive a significant benefit from using a pen-based interface. Our approach is to quantify formally, through head-to-head evaluation, user performance and relative preference for a representative sampling of both keyboard-and-mouse and pen-based …
  • Pen Interfaces
    Understanding the Pen Input Modality
    Presented at the Workshop on W3C MMI Architecture and Interfaces, Nov 17, 2007
    Sriganesh "Sri-G" Madhvanath, Hewlett-Packard Labs, Bangalore, India ([email protected])

    Objective:
    - Briefly describe different aspects of pen input
    - Provide some food for thought

    Unimodal input in the context of multimodal interfaces:
    - Multimodal interfaces are frequently used unimodally, based on the perceived suitability of the modality to the task, and on user experience, expertise, and preference.
    - It is important that a multimodal interface provide full support for individual modalities; "multimodality" cannot be a substitute for incomplete/immature support for individual modalities.

    Pen computing:
    - Very long history that predates most other input modalities: the light pen was invented in 1957, the mouse in 1963.
    - Several well-studied aspects: hardware, interfaces, handwriting recognition, applications.
    - Many famous failures (GO, Newton, CrossPad).
    - Enjoying a resurgence since the 90s because of PDAs and Tablet PCs; new technologies such as digital paper (e.g. Anoto) and touch allow more natural and "wow" experiences.

    Pen/digitizer hardware:
    - Objective: detect pen position, maybe more.
    - Various technologies, each with its own limitations and characteristics (and new ones still being developed):
      - Passive stylus: touchscreens on PDAs and some tablets; capacitive touchpads on laptops (Synaptics); vision techniques; IR sensors in the bezel (NextWindow).
      - Active stylus: IR + ultrasonic (Pegasus, Mimio); electromagnetic (Wacom); camera in pen tip and dots on paper (Anoto).
    - Wide variation in form; scale from mobile phone to whiteboard (e.g.
  • Handwriting Recognition Systems: an Overview
    Handwriting Recognition Systems: An Overview
    Avi Drissman, Dr. Sethi, CSC 496, February 26, 1997

    Committing words to paper in handwriting is a uniquely human act, performed daily by millions of people. If you were to present the idea of "decoding" handwriting to most people, perhaps the first idea to spring to mind would be graphology, which is the analysis of handwriting to determine its authenticity (or perhaps also the more non-scientific determination of some psychological character traits of the writer). But the more mundane, and more frequently overlooked, "decoding" of handwriting is handwriting recognition: the process of figuring out what words and letters the scribbles and scrawls on the paper represent.

    Handwriting recognition is far from easy. A common complaint and excuse of people is that they can't read their own handwriting. That makes us ask ourselves the question: if people sometimes can't read their own handwriting, with which they are quite familiar, what chance does a computer have? Fortunately, there are powerful tools that are easily implementable on a computer. A very useful one for handwriting recognition, and one that is used in several recognizers, is a neural network. "Neural networks are richly connected networks of simple computational elements. The fundamental tenet of neural computation (or computation with [neural networks]) is that such networks can carry out complex cognitive and computational tasks." [9] In addition, one of the tasks at which neural networks excel is the classification of input data into one of several groups or categories. This ability is one of the main reasons neural networks are used for this purpose.
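To make the classification idea concrete, here is a minimal sketch of the simplest neural-style classifier, a single-layer perceptron, separating two hypothetical letter shapes by hand-picked stroke features. This toy is not from the overview; real recognizers use far richer features and multilayer networks.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: +1 or -1.
    Classic perceptron rule: nudge weights toward misclassified samples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:              # wrong (or on boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Invented toy features: (curvature, aspect ratio) for 'o'-like (+1)
# versus 'l'-like (-1) strokes.
samples = [(0.9, 1.0), (0.8, 1.1), (0.1, 3.0), (0.2, 2.8)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
print(classify(w, b, (0.85, 1.05)))  # an 'o'-like stroke -> 1
```

The networks cited in the overview are larger and learn their features from ink, but the train-then-classify loop is the same in spirit.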
  • Pen Computing History
    The Past and Future of Pen Computing
    Conrad H. Blickenstorfer, Editor-in-Chief, Pen Computing Magazine
    [email protected], http://www.pencomputing.com

    To build the future, we must learn from the past.

    History of pen computing:
    - 1914: Goldberg gets a US patent for recognition of handwritten numbers to control machines
    - 1938: Hansel gets a US patent for machine recognition of handwriting
    - 1956: RAND Corporation develops a digitizing tablet for handwriting recognition
    - 1957-62: Handwriting recognition projects achieve accuracies of 97-99%
    - 1963: Bell Labs develops a cursive recognizer
    - 1966: RAND creates GRAIL, similar to Graffiti

    Pioneer: Alan Kay
    Utah State University; Stanford University; Xerox PARC (GUI, SmallTalk, OOL); Apple Computer Research Fellow; Disney. Envisioned the Dynabook in 1968: the Dynabook will be a "dynamic medium for creative thought, capable of synthesizing all media – pictures, animation, sound, and text – through the intimacy and responsiveness of the personal computer."

    History of pen computing (continued):
    - 1970s: Commercial products, including a kana/romaji billing machine
    - 1980s: Handwriting recognition companies: Nestor, Communication Intelligence Corporation, Lexicus, and several others

    Pioneers: Apple
    1987 Apple prototype "Knowledge Navigator": speech recognition, intelligent agents, camera, folding display, video conferencing, wireless communication, personal information manager.
  • Pen Computer Technology
    Pen Computer Technology
    Fujitsu PC Corporation, www.fujitsupc.com. For more information: [email protected]. © 2002 Fujitsu PC Corporation. All rights reserved.

    This paper is intended to educate the reader about the technologies involved in a pen computer. After reading this paper, the reader should be better equipped to make intelligent purchasing decisions about pen computers.

    Types of Pen Computers
    In this white paper, "pen computer" refers to a portable computer that supports a pen as a user interface device, and whose LCD screen measures at least six inches diagonally. This product definition encompasses five generally recognized categories of standard products, listed in Table 1 below.

    Table 1: Categories of Pen Computers with LCD Displays of Six Inches or Larger
    - Webpad (example: Honeywell WebPAD II). Target market: consumer & enterprise. PC user interface: no (standard browser). Storage: Flash memory. Operating system: Windows CE, Linux, QNX. Runs local programs: only via browser plug-ins.
    - CE Tablet (example: Fujitsu PenCentra). Target market: enterprise. PC user interface: no (specialized applications). Storage: Flash memory. Operating system: Windows CE. Runs local programs: yes.
    - Pen Tablet (example: Fujitsu Stylistic). Target market: enterprise. PC user interface: yes (Windows & specialized applications). Storage: hard drive. Operating system: Windows 9x, NT 4, 2000, XP. Runs local programs: yes.
    - Pen-Enabled (example: Fujitsu LifeBook B Series). Target market: consumer & enterprise. PC user interface: yes (Windows). Storage: hard drive. Operating system: Windows 9x, 2000, XP. Runs local programs: yes.
    - Tablet PC (examples: many under development). Target market: consumer & enterprise. PC user interface: yes (Windows). Storage: hard drive. Operating system: Windows XP Tablet PC Edition. Runs local programs: yes.

    Since the different types of pen computers are often confused, the following paragraphs are intended to help explain the key distinguishing characteristics of each product category.

    Pen Computers Contrasted
    Webpad: A Webpad's primary characteristic is that its only user interface is a Web browser.
  • Enabling Freehand Sketching Through Improved Primitive Recognition
    Rethinking Pen Input Interaction: Enabling Freehand Sketching Through Improved Primitive Recognition
    A Dissertation by Brandon Chase Paulson, B.S., Baylor University. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2010. Major subject: Computer Science.
    Approved by: Chair of Committee, Tracy Hammond; Committee Members, Yoonsuck Choe, Ricardo Gutierrez-Osuna, Vinod Srinivasan; Head of Department, Valerie E. Taylor.

    Abstract: Online sketch recognition uses machine learning and artificial intelligence techniques to interpret markings made by users via an electronic stylus or pen. The goal of sketch recognition is to understand the intention and meaning of a particular user's drawing. Diagramming applications have been the primary beneficiaries of sketch recognition technology, as it is commonplace for the users of these tools to first create a rough sketch of a diagram on paper before translating it into a machine-understandable model, using computer-aided design tools, which can then be used to perform simulations or other meaningful tasks. Traditional methods for performing sketch recognition can be broken down into three distinct categories: appearance-based, gesture-based, and geometric-based.
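As a toy illustration of the geometric-based category mentioned at the end of the excerpt (not the dissertation's recognizer; the thresholds are invented), a stroke can be tested against primitive shapes with simple geometry: a line's ink length barely exceeds the distance between its endpoints, while a circle's endpoints nearly coincide.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_stroke(points):
    """Geometric test on an (x, y) point sequence: 'line' if the ink length
    is close to the endpoint-to-endpoint distance, 'circle' if the stroke
    closes on itself; otherwise 'unknown'. Thresholds are illustrative."""
    length = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    endpoint_gap = dist(points[0], points[-1])
    if endpoint_gap > 0 and length / endpoint_gap < 1.1:
        return "line"
    if endpoint_gap < 0.1 * length:       # start and end nearly coincide
        return "circle"
    return "unknown"

line = [(0, 0), (1, 0.05), (2, 0), (3, 0.02)]
circle = [(math.cos(t / 10 * 2 * math.pi), math.sin(t / 10 * 2 * math.pi))
          for t in range(11)]             # closed 10-segment polygon
print(classify_stroke(line), classify_stroke(circle))  # line circle
```

Geometric recognizers of the kind the dissertation studies fit many more primitives (arcs, ellipses, polylines) and score fits statistically, but each test is a comparison of measured stroke geometry against an ideal shape, as above.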
  • A Context for Pen-Based Mathematical Computing
    A Context for Pen-Based Mathematical Computing
    Elena Smirnova, Stephen M. Watt. Ontario Research Centre for Computer Algebra, Department of Computer Science, University of Western Ontario, London, Ontario, Canada N6A 5B7. {elena,watt}@orcca.on.ca

    Abstract: We report on an investigation to determine an architectural framework for pen-based mathematical computing. From the outset, we require that the framework be integrated with suitable elements for pen computing, document processing and computer algebra. We demonstrate that this architecture is able to provide these interfaces and allows a high degree of platform independence for the development of pen-based mathematical software.

    Keywords: Pen computing, computer algebra, Java, .Net, Maple, Microsoft Office.

    1. Introduction
    1.1. Motivation
    With the recent widespread availability of pen-enabled devices (such as Pocket PCs, Tablet PCs and interactive whiteboards), there is an opportunity for a new generation of natural interfaces for mathematical software packages. The use of the pen to enter, edit and manipulate mathematical expressions can lead to a qualitative improvement in the ease of use of computer algebra systems. On the other hand, mathematical input on pen-enabled devices goes well beyond ordinary handwritten mathematics on paper or chalkboard, simply because it can enjoy the rich functionality of the software behind the ink-capturing hardware. We should point out from the beginning that pen-based software for mathematics has unique aspects that distinguish it from text-based applications, and these drive our software architecture with different considerations: first, the set of symbols used in mathematics is much larger than the usual alphabets or syllabaries encountered in natural languages.
  • Presentation (PDF)
    The Past and Future of Pen Computing
    Conrad H. Blickenstorfer, Editor-in-Chief, Pen Computing Magazine
    [email protected], http://www.pencomputing.com

    Technology has become the international language of progress, of building things rather than destroying them.

    PC Market: Cloudy Future
    - After 20 years of growth, demand is leveling off.
    - IDC and Dataquest say shipments are down for the first time ever; they predict 6% down from 2000.
    - Still 30 million each in Q2 and Q3 2001, but:
      - commodity components make it difficult to make a profit;
      - PC prices have come down: in 1981 a 4.77 MHz PC cost US$4,000 ($7,767 in 2001 money); in 2001 a 1.8 GHz PC costs US$1,000.
    - The notebook market is a bit better: an estimated 26 million units for 2001, the same as for 2000.

    It is clear that PCs and notebooks as we know them represent the past and the present of computing, but not necessarily the future of computing. Many people agree that PDAs and pen tablets or web tablets are a technology with a very promising future.

    PDA Projections (1)
    - IDC said that Asia-Pacific (without Japan) PDA sales were about two million in 2000.
    - Dataquest said there were 2.1 million PDAs sold in Europe in 2000, with Palm and Pocket PC each having a market share of about 40% in Q2/2001.
    - The US PDA market is 7-8 million units this year, and represents 60-70% of worldwide PDA sales right now.
    - Microsoft said in May 2001 that 1.25 million Pocket PCs had been sold since the April 2000 introduction. At an August Microsoft conference in Seattle, Washington, Microsoft said that two million Pocket PCs had been sold worldwide.
  • Active Pen Input and the Android Input Framework
    Active Pen Input and the Android Input Framework
    A Thesis Presented to the Faculty of California Polytechnic State University, San Luis Obispo, in Partial Fulfillment of the Requirements for the Degree Master of Science in Electrical Engineering, by Andrew Hughes, June 2011. © 2011 Andrew Hughes.
    Committee Chair: Chris Lupo, Ph.D. Committee Members: Hugh Smith, Ph.D.; John Seng, Ph.D.

    Abstract: User input has taken many forms since the conception of computers. In the past ten years, Tablet PCs have provided a natural writing experience for users with the advent of active pen input. Unfortunately, pen-based input has yet to be adopted as an input method by any modern mobile operating system. This thesis investigates the addition of active pen-based input to the Android mobile operating system. The Android input framework was evaluated and modified to allow for active pen input events. Since active pens allow their position to be detected without making contact with the screen, an on-screen pointer was implemented to provide a digital representation of the pen's position. Extensions to the Android Software Development Kit (SDK) were made to expose the additional functionality provided by active pen input to developers. Pen-capable hardware was used to test the Android framework and SDK additions and show that active pen input is a viable input method for Android-powered devices.
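The hover-tracking idea in the abstract can be sketched schematically: the input framework routes pen events to an on-screen pointer that stays visible while the pen hovers in range, and additionally records ink while the pen touches. The event names and the `Pointer` class below are invented for illustration; they are not the Android APIs the thesis modifies.

```python
# Schematic pen-event dispatcher: hover events only move the pointer,
# contact events both move the pointer and draw ink.

class Pointer:
    def __init__(self):
        self.x = self.y = 0.0
        self.visible = False
        self.ink = []                      # points drawn while in contact

    def handle(self, event):
        kind, x, y = event
        self.x, self.y = x, y
        if kind == "hover":                # pen in range, not touching
            self.visible = True
        elif kind in ("down", "move"):     # pen touching the screen
            self.visible = True
            self.ink.append((x, y))
        elif kind == "exit":               # pen left the sensing range
            self.visible = False

p = Pointer()
for ev in [("hover", 5, 5), ("down", 5, 6), ("move", 6, 7),
           ("hover", 8, 8), ("exit", 0, 0)]:
    p.handle(ev)
print(p.ink, p.visible)  # [(5, 6), (6, 7)] False
```

The key distinction an active pen adds, and the one the thesis exploits, is exactly this third state between "touching" and "absent": position is known while hovering, so a pointer can be drawn before the pen ever contacts the screen.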
  • Charles C. Tappert, Ph.D
    Charles C. Tappert, Ph.D.
    CSIS, Pace University, Goldstein Center, 861 Bedford Road, Pleasantville NY 10570
    E-mail: [email protected]. Phone: 914-773-3989; Fax: x3533. Google Scholar publication citations.

    Extensive experience in computer science, specializing in artificial intelligence, big data analytics, quantum computing, pattern recognition, machine learning, deep learning, brain modeling, biometrics, pen computing and speech applications, cybersecurity, forensics, and project management. Taught graduate and undergraduate courses, supervised dissertations, and secured government contracts. Over 100 peer-reviewed publications in prestigious international journals and conferences.

    Education: Ph.D. Electrical Engineering, Cornell (ECE); Fulbright Scholar, Royal Institute of Technology, Stockholm; M.S. Electrical Engineering, Cornell (ECE); B.S. Engineering Sciences, Swarthmore.

    Academic Experience:
    - Pace University, Seidenberg School of CSIS, Computer Science Professor, 2000-present. Director, Doctor of Professional Studies (D.P.S.) in Computing Program; Director, Pervasive Computing Lab; supervised three M.S. theses and thirty doctoral dissertations; teaches Quantum Computing, Machine Learning, Big Data Analytics, Capstone Projects, Emerging IT.
    - U.S. Military Academy, Computer Science Associate Professor, 1993-2000. Taught courses in Computer Graphics, Languages, Databases, Capstone Projects, Intro to Computing.
    - SUNY Purchase and Pace University, Adjunct Associate Professor, 1990-1993. Taught undergraduate courses in Computer Operating Systems and Data Structures.
    - North Carolina
  • Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops
    Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops
    DISS. ETH No. 21686. A dissertation submitted to ETH Zurich for the degree of Doctor of Sciences, presented by Fabrice Matulic, Diplôme d'Ingénieur ENSIMAG, Diplom Informatik, Universität Karlsruhe (TH). Accepted on the recommendation of Prof. Dr. Moira C. Norrie (examiner), Prof. (FH) Dr. Michael Haller (co-examiner), and Dr. Ken Hinckley (co-examiner). © 2014 Fabrice Matulic.

    Abstract: In the last few years, the consumer device landscape has been significantly transformed by the rapid proliferation of smartphones and tablets, whose sales are fast overtaking those of traditional PCs. The highly user-friendly multitouch interfaces combined with ever more efficient electronics have made it possible to perform an increasingly wide range of activities on those mobile devices. One category of tasks for which the desktop PC with mouse and keyboard remains the platform of choice, however, is office work. In particular, productivity document-engineering tasks are still more likely to be carried out on a regular computer and with software whose interfaces are based on the time-honoured "windows, icons, menus, pointer" (WIMP) paradigm. With the recent emergence of larger touch-based devices such as wall screens and digital tabletops (which overcome the limitations of mobile devices in terms of available screen real estate) as well as the resurgence of the stylus as an additional input instrument, the vocabulary of interactions at the disposal of interface designers has vastly broadened. Hybrid pen and touch input is a very new and promising interaction model that practitioners have begun to tap, but it has yet to be applied in office applications and document-centric productivity environments.