Enabling Freehand Sketching Through Improved Primitive Recognition

Total Pages: 16

File Type: PDF, Size: 1020 KB

RETHINKING PEN INPUT INTERACTION: ENABLING FREEHAND SKETCHING THROUGH IMPROVED PRIMITIVE RECOGNITION

A Dissertation by BRANDON CHASE PAULSON

Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, May 2010. Major Subject: Computer Science

Approved by:
Chair of Committee: Tracy Hammond
Committee Members: Yoonsuck Choe, Ricardo Gutierrez-Osuna, Vinod Srinivasan
Head of Department: Valerie E. Taylor

ABSTRACT

Rethinking Pen Input Interaction: Enabling Freehand Sketching Through Improved Primitive Recognition. (May 2010)
Brandon Chase Paulson, B.S., Baylor University
Chair of Advisory Committee: Dr. Tracy Hammond

Online sketch recognition uses machine learning and artificial intelligence techniques to interpret markings made by users via an electronic stylus or pen. The goal of sketch recognition is to understand the intention and meaning of a particular user's drawing. Diagramming applications have been the primary beneficiaries of sketch recognition technology, as it is commonplace for the users of these tools to first create a rough sketch of a diagram on paper before translating it into a machine-understandable model, using computer-aided design tools, which can then be used to perform simulations or other meaningful tasks.

Traditional methods for performing sketch recognition can be broken down into three distinct categories: appearance-based, gesture-based, and geometric-based. Although each approach has its advantages and disadvantages, geometric-based methods have proven to be the most generalizable for multi-domain recognition. Tools such as the LADDER symbol description language have been shown to be capable of recognizing sketches from over 30 different domains using generalizable, geometric techniques. The LADDER system is limited, however, in that it uses a low-level recognizer that supports only a few primitive shapes, the building blocks for describing higher-level symbols. Systems which support a larger number of primitive shapes have been shown to have questionable accuracies as the number of primitives increases, or they place constraints on how users must input shapes (e.g., circles can only be drawn in a clockwise motion; rectangles must be drawn starting at the top-left corner).

This dissertation allows for significant growth in the possibility of free-sketch recognition systems, those which place little to no drawing constraints on users. In this dissertation, we describe multiple techniques to recognize upwards of 18 primitive shapes while maintaining high accuracy. We also provide methods for producing confidence values and generating multiple interpretations, and explore the difficulties of recognizing multi-stroke primitives. In addition, we show the need for a standardized data repository for sketch recognition algorithm testing and propose SOUSA (sketch-based online user study application), our online system for performing and sharing user study sketch data.
Finally, we will show how the principles we have learned through our work extend to other domains, including activity recognition using trained hand posture cues.

ACKNOWLEDGMENTS

The writing of this dissertation would not be possible without the strength and guidance given to me by my Lord and Savior, Jesus Christ. In addition, this achievement would not have been completed without the love and support I have received from my wife, Stacy, as well as the rest of my entire family. I would also like to thank my family at Beacon Baptist Church for their encouragement during this process. Thank you to my committee members, Dr. Yoonsuck Choe, Dr. Ricardo Gutierrez-Osuna, and Dr. Vinod Srinivasan, for all that you have taught me, both in the classroom and through your helpful critiques of this work. Thanks to the many members of the Sketch Recognition Lab (past and present) for your help in proofreading, critiquing, and editing this work. Thank you also to Greg Sparks for taking time to proofread this document as well. Last, but not least, I would like to acknowledge my advisor, Dr. Tracy Hammond, for all of her hard work and the investment that she has made in me to further my academic career.

TABLE OF CONTENTS

I. INTRODUCTION
   A. Sketch Recognition in Design
   B. Sketch Recognition in Engineering & Education
   C. Proposal

II. PREVIOUS WORK IN PEN-BASED INTERFACES
   A. Hardware Research
   B. Human-computer Interaction
      1. Sketch Editing
      2. Incorporating Gestures
      3. Beautification
      4. Toolkits
      5. Multimodal Systems
   C. Recognition
   D. High-level Recognition
   E. Low-level Recognition
      1. Motion-based Recognition
      2. Appearance-based Recognition
      3. Geometric-based Recognition
         a. Corner Finding
         b. Primitive Recognition
         c. Limitations of Geometric-based Approach
      4. Hybrid/Combination Techniques
   F. Applications for Sketch-based Interfaces
      1. Sketching in Design
         a. Storyboards
         b. 3D Modeling
      2. Sketching in Engineering and Education
         a. Sketching in Engineering
         b. Sketching in Math & Science
         c. Sketching in Language & Fine Arts
      3. Search by Sketch

III. PALEOSKETCH
   A. Introduction
      1. First Version of PaleoSketch
      2. Determining What Primitives to Support
      3. Symbol Description Experiment
   B. Data Set
   C. The First Part of PaleoSketch: Pre-recognition
   D. Features
      1. New Geometric Features
         a. General Features
         b. Line Features
         c. Ellipse Features
         d. Circle Features
         e. Arc Features
         f. Curve Features
         g. Polyline Features
         h. Spiral & Helix Features
   E. Classifier
   F. Handling Complex Shapes
      1. Confidence Modification
   G. Experiment & Results
      1. Accuracy of Complex Fits
   H. Discussion
      1. Complex Fits
      2. Using PaleoSketch in a Real-world Setting
      3. Limitations of PaleoSketch
   I. Chapter Summary

IV. MILITARY COURSE OF ACTION
   A. Introduction
   B. Previous Work in COA Systems
   C. Methodology
      1. Features
         a. Rectangle & Diamond Features
         b. Arrow Features
         c. Dot Features
         d. Wave Features
         e. Gull Features
         f. NBC Features
      2. Classifier
      3. Data
   D. Results
   E. Discussion
   F. Chapter Summary

V. MULTI-STROKE PRIMITIVES
   A. Introduction
   B. Implementation
      1. Graph Building
      2. Graph Searching
      3. Stroke Combination
      4. False Positive Removal
      5. Arrow Detection
   C. Experiment
   D. Results
      1. Single-stroke
      2. Multi-stroke
      3. Complex Shapes
   E. Discussion
      1. Comparison to Other Methods
      2. Multiple Interpretations
      3. Improving Complex Interpretations
      4. Improving Grouping
      5. Additional Improvements
   F. Chapter Summary

VI. SOUSA
   A. Introduction
   B. Previous Efforts
   C. The SOUSA System
      1. Collection Studies
      2. Verification Studies
   D. Evaluation
   E. Future Work
   F. Chapter Summary

VII. OFFICE ACTIVITY RECOGNITION
   A. Preface
   B. Introduction
   C. Related Work in Activity Recognition
   D. Experiment
      1. Classifiers
      2. Feature Spaces
   E. Results
      1. User-Independent System
      2. User-Dependent System
   F. Discussion
   G. Future Work
   H. Chapter Summary

VIII. CONCLUSION

REFERENCES

VITA

LIST OF TABLES

Table I: Accuracy results of existing recognizers on our collected set of primitive data.

Table II: Accuracy results of the different feature sets using a multi-layer perceptron classifier. The first five columns represent the accuracies of the five individual feature sets. "Combined" refers to the combined feature set of CALI, HHReco, and Long. "All" refers to the combination of the Combined feature set plus the Paleo features. "Modified" uses the same feature set as All, but also utilizes the complex confidence modification algorithm.

Table III: Number of occurrences of each primitive shape in our data sets, as well as the accuracy of 10-fold cross-validation on each set. The final column shows the accuracy results when we train with dataset A and test with dataset B. "Total" accuracy represents the flat average of all shape accuracies, while "weighted" average shows the average accuracy of each shape weighted by its number of occurrences. *Dataset B consisted of more lines because it included "anticipated" unit symbols, which are drawn as dashed rectangles or diamonds.

Table IV: Number of each type of sketched primitive present in the collected data. Complex and polyline shapes cannot be drawn with multiple strokes.

Table V: Accuracy results after 10-fold cross-validation. "Average" represents flat averages, while "weighted averages" […]
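The abstract above is about geometric, primitive-level recognition: deciding whether a raw pen stroke is a line, circle, arc, polyline, and so on by measuring how well each ideal shape fits the sampled points. As a minimal, hedged sketch of that idea only (this is not PaleoSketch; the two fit tests and the thresholds below are invented for illustration), the following Java program scores a stroke against a line fit and a circle fit and keeps the better interpretation.

```java
import java.util.List;

/**
 * Minimal illustration of geometric primitive testing: given a stroke
 * (a list of sampled pen points), measure how well a straight line and a
 * circle explain it, then report the better interpretation. Thresholds
 * and features here are invented for the example, not taken from PaleoSketch.
 */
public class PrimitiveSketch {

    record Point(double x, double y) {}

    /** Line fit error: mean orthogonal distance of the points to the line
     *  through the endpoints, normalized by the endpoint distance. */
    static double lineFitError(List<Point> pts) {
        Point a = pts.get(0), b = pts.get(pts.size() - 1);
        double dx = b.x() - a.x(), dy = b.y() - a.y();
        double len = Math.hypot(dx, dy);
        if (len < 1e-9) return Double.MAX_VALUE;
        double sum = 0;
        for (Point p : pts) {
            // distance from p to the infinite line through a and b
            sum += Math.abs(dy * (p.x() - a.x()) - dx * (p.y() - a.y())) / len;
        }
        return (sum / pts.size()) / len;
    }

    /** Circle fit error: spread of point-to-centroid distances
     *  relative to the mean radius. */
    static double circleFitError(List<Point> pts) {
        double cx = 0, cy = 0;
        for (Point p : pts) { cx += p.x(); cy += p.y(); }
        cx /= pts.size(); cy /= pts.size();
        double meanR = 0;
        for (Point p : pts) meanR += Math.hypot(p.x() - cx, p.y() - cy);
        meanR /= pts.size();
        double var = 0;
        for (Point p : pts) {
            double d = Math.hypot(p.x() - cx, p.y() - cy) - meanR;
            var += d * d;
        }
        return Math.sqrt(var / pts.size()) / meanR;
    }

    static String classify(List<Point> pts) {
        double line = lineFitError(pts), circle = circleFitError(pts);
        if (line < 0.02 && line < circle) return "line";
        if (circle < 0.15) return "circle";
        return "unknown (would fall through to polyline/curve tests)";
    }

    public static void main(String[] args) {
        // A noisy horizontal stroke and a rough circle, as fake pen samples.
        List<Point> strokeA = List.of(new Point(0, 0), new Point(25, 1),
                new Point(50, -1), new Point(75, 2), new Point(100, 0));
        List<Point> strokeB = new java.util.ArrayList<>();
        for (int i = 0; i < 36; i++) {
            double t = 2 * Math.PI * i / 36;
            strokeB.add(new Point(50 + 40 * Math.cos(t), 50 + 41 * Math.sin(t)));
        }
        System.out.println("stroke A -> " + classify(strokeA));
        System.out.println("stroke B -> " + classify(strokeB));
    }
}
```

A recognizer like the one described in the dissertation layers many more geometric features, a trained classifier, confidence values, and multiple interpretations on top of raw fit errors like these.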
Recommended publications
  • Motion and Context Sensing Techniques for Pen Computing
Motion and Context Sensing Techniques for Pen Computing
Ken Hinckley (1), Xiang 'Anthony' Chen (1,2), and Hrvoje Benko (1)
(1) Microsoft Research, Redmond, WA, USA; (2) Carnegie Mellon University, Dept. of Computer Science

ABSTRACT: We explore techniques for a slender and untethered stylus prototype enhanced with a full suite of inertial sensors (three-axis accelerometer, gyroscope, and magnetometer). We present a taxonomy of enhanced stylus input techniques and consider a number of novel possibilities that combine motion sensors with pen stroke and touchscreen inputs on a pen + touch slate. These inertial sensors enable motion-gesture inputs, as well as sensing the context of how the user is holding or using the stylus, even when the pen is not in contact with the tablet screen. Our initial results suggest that sensor-enhanced stylus input offers a potentially rich modality to augment interaction with slate computers.

Keywords: Stylus, motion sensing, sensors, pen+touch, pen input
Index Terms: H.5.2 Information Interfaces & Presentation: Input

Fig. 1: Our wireless prototype has accelerometer, gyro, and magnetometer sensors in a ~19 cm × 11.5 mm diameter stylus.

1 INTRODUCTION: Our system employs a custom pen augmented with inertial sensors (accelerometer, gyro, and magnetometer, each a 3-axis sensor, for nine total sensing dimensions) as well as a low-power radio. Our stylus prototype thus supports fully untethered operation in a slender profile with no protrusions (Fig. 1). This allows us to explore numerous interactive possibilities that were cumbersome in previous systems: our prototype supports direct input on tablet displays, allows pen tilting and other motions far from the digitizer, and uses a thin, light, and wireless stylus.
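The prototype described above senses pen motion through a nine-axis inertial suite even when the stylus is away from the screen. As a hedged illustration of how a motion gesture can be derived from raw sensor samples (the sample rate, axis convention, and threshold are assumptions for this sketch, not details from the paper), the Java snippet below integrates gyroscope readings about the pen's long axis to detect a "barrel roll" gesture:

```java
/**
 * Toy detector for a "barrel roll" pen gesture: integrate angular velocity
 * about the stylus's long axis and fire when the accumulated rotation
 * exceeds half a turn. Sample rate and threshold are illustrative guesses.
 */
public class RollGestureDetector {
    private static final double SAMPLE_DT = 1.0 / 100.0;   // assume 100 Hz gyro
    private static final double TRIGGER_RAD = Math.PI;     // half a turn

    private double accumulated = 0;

    /** Feed one gyro sample (rad/s about the pen's long axis);
     *  returns true when a roll gesture is recognized. */
    public boolean onGyroSample(double rollRateRadPerSec) {
        accumulated += rollRateRadPerSec * SAMPLE_DT;
        if (Math.abs(accumulated) >= TRIGGER_RAD) {
            accumulated = 0;          // reset so the gesture can repeat
            return true;
        }
        // decay slowly so sensor jitter does not build into a false positive
        accumulated *= 0.995;
        return false;
    }

    public static void main(String[] args) {
        RollGestureDetector detector = new RollGestureDetector();
        // Simulate 1.5 s of the user twirling the pen at ~4 rad/s.
        for (int i = 0; i < 150; i++) {
            if (detector.onGyroSample(4.0)) {
                System.out.println("Roll gesture at sample " + i);
            }
        }
    }
}
```

In a real system such a detector would run alongside grip and contact sensing so that, for example, rolls performed while writing are ignored.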
  • An Empirical Study in Pen-Centric User Interfaces: Diagramming
EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (2008), C. Alvarado and M.-P. Cani (Editors)
An Empirical Study in Pen-Centric User Interfaces: Diagramming
Andrew S. Forsberg (1), Andrew Bragdon (1), Joseph J. LaViola Jr. (2), Sashi Raghupathy (3), Robert C. Zeleznik (1)
(1) Brown University, Providence, RI, USA; (2) University of Central Florida, Orlando, FL, USA; (3) Microsoft Corporation, Redmond, WA, USA

Abstract: We present a user study aimed at helping understand the applicability of pen computing in desktop environments. The study applied three mouse-and-keyboard-based and three pen-based interaction techniques to six variations of a diagramming task. We ran 18 subjects from a general population and the key finding was that while the mouse and keyboard techniques generally were comparable or faster than the pen techniques, subjects ranked pen techniques higher and enjoyed them more. Our contribution is the results from a formal user study that suggests there is a broader applicability and subjective preference for pen user interfaces than the niche PDA and mobile market they currently serve.

Categories and Subject Descriptors (according to ACM CCS): H.5.2 [User Interfaces]: Evaluation/Methodology

1. Introduction: Research on pen computing can be traced back at least to the early 60's. Curiously though, there is little formal understanding of when, where, and for whom pen computing is the user interface of choice. […] superficially appears pen-centric, users will in fact derive a significant benefit from using a pen-based interface. Our approach is to quantify formally, through head-to-head evaluation, user performance and relative preference for a representative sampling of both keyboard and mouse, and pen-based […]
  • Pen Interfaces
Understanding the Pen Input Modality
Presented at the Workshop on W3C MMI Architecture and Interfaces, Nov 17, 2007
Sriganesh "Sri-G" Madhvanath, Hewlett-Packard Labs, Bangalore, India. [email protected]
© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Objective:
• Briefly describe different aspects of pen input
• Provide some food for thought …

Unimodal input in the context of multimodal interfaces:
• Multimodal interfaces are frequently used unimodally, based on perceived suitability of the modality to the task, and on user experience, expertise and preference
• It is important that a multimodal interface provide full support for individual modalities; "multimodality" cannot be a substitute for incomplete/immature support for individual modalities

Pen computing:
• Very long history … predates most other input modalities (the light pen was invented in 1957, the mouse in 1963!)
• Several well-studied aspects: hardware, interface, handwriting recognition, applications
• Many famous failures (Go, Newton, CrossPad)
• Enjoying a resurgence since the 90s because of PDAs and TabletPCs; new technologies such as Digital Paper (e.g. Anoto) and Touch allow more natural and "wow" experiences

Pen/digitizer hardware:
• Objective: detect pen position, maybe more
• Various technologies with their own limitations and characteristics (and new ones still being developed!)
  − Passive stylus: touchscreens on PDAs and some tablets; capacitive touchpads on laptops (Synaptics); vision techniques; IR sensors in the bezel (NextWindow)
  − Active stylus: IR + ultrasonic (Pegasus, Mimeo); electromagnetic (Wacom); camera in pen tip & dots on paper (Anoto)
• Wide variation in form, from mobile-phone scale to whiteboard (e.g. […]
  • Handwriting Recognition Systems: an Overview
Handwriting Recognition Systems: An Overview
Avi Drissman
Dr. Sethi, CSC 496, February 26, 1997

Committing words to paper in handwriting is a uniquely human act, performed daily by millions of people. If you were to present the idea of "decoding" handwriting to most people, perhaps the first idea to spring to mind would be graphology, which is the analysis of handwriting to determine its authenticity (or perhaps also the more non-scientific determination of some psychological character traits of the writer). But the more mundane, and more frequently overlooked, "decoding" of handwriting is handwriting recognition—the process of figuring out what words and letters the scribbles and scrawls on the paper represent.

Handwriting recognition is far from easy. A common complaint and excuse of people is that they couldn't read their own handwriting. That makes us ask ourselves the question: If people sometimes can't read their own handwriting, with which they are quite familiar, what chance does a computer have? Fortunately, there are powerful tools that can be used that are easily implementable on a computer. A very useful one for handwriting recognition, and one that is used in several recognizers, is a neural network.

Neural networks are richly connected networks of simple computational elements. The fundamental tenet of neural computation (or computation with [neural networks]) is that such networks can carry out complex cognitive and computational tasks. [9]

In addition, one of the tasks at which neural networks excel is the classification of input data into one of several groups or categories. This ability is one of the main reasons neural networks are used for this purpose.
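The overview's key claim is that neural networks, as richly connected networks of simple computational elements, are well suited to classifying input into categories, which is why several handwriting recognizers use them. Purely to make that concrete (this is a generic toy network with made-up weights, not any recognizer referenced in the overview), the Java snippet below runs one forward pass over a small stroke-feature vector and picks the highest-scoring character class:

```java
/**
 * One forward pass of a tiny fixed-weight neural network that maps a
 * 3-value stroke feature vector to scores over three character classes.
 * Weights are arbitrary; a real recognizer would learn them from data.
 */
public class TinyNetDemo {

    static double[] layer(double[] in, double[][] w, double[] bias) {
        double[] out = new double[w.length];
        for (int j = 0; j < w.length; j++) {
            double sum = bias[j];
            for (int i = 0; i < in.length; i++) sum += w[j][i] * in[i];
            out[j] = Math.tanh(sum);          // simple squashing nonlinearity
        }
        return out;
    }

    public static void main(String[] args) {
        // Pretend features: [curvature, aspect ratio, number of pen-up gaps]
        double[] features = {0.1, 0.9, 0.0};

        double[][] w1 = {{1.5, -0.5, 0.2}, {-1.0, 2.0, 0.3}, {0.4, 0.4, -2.0}};
        double[] b1 = {0.0, -0.1, 0.1};
        double[][] w2 = {{2.0, -1.0, 0.5}, {-0.5, 1.5, 0.0}, {0.1, 0.2, 1.8}};
        double[] b2 = {0.0, 0.0, -0.2};

        double[] hidden = layer(features, w1, b1);
        double[] scores = layer(hidden, w2, b2);

        String[] classes = {"l", "o", "i"};
        int best = 0;
        for (int k = 1; k < scores.length; k++) if (scores[k] > scores[best]) best = k;
        System.out.printf("class %s (scores %.2f / %.2f / %.2f)%n",
                classes[best], scores[0], scores[1], scores[2]);
    }
}
```

Training replaces the hand-picked weights with values fitted to labeled samples, which is where the classification power the essay describes actually comes from.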
  • Pen Computing History
The Past and Future of Pen Computing
Conrad H. Blickenstorfer, Editor-in-Chief, Pen Computing Magazine
[email protected] http://www.pencomputing.com

To build the future, we must learn from the past.

History of pen computing:
1914: Goldberg gets US patent for recognition of handwritten numbers to control machines
1938: Hansel gets US patent for machine recognition of handwriting
1956: RAND Corporation develops digitizing tablet for handwriting recognition
1957-62: Handwriting recognition projects with accuracies of 97-99%
1963: Bell Labs develops cursive recognizer
1966: RAND creates GRAIL, similar to Graffiti

Pioneer: Alan Kay
Utah State University, Stanford University, Xerox PARC (GUI, SmallTalk, OOL), Apple Computer Research Fellow, Disney. Envisioned Dynabook in 1968: The Dynabook will be a "dynamic medium for creative thought, capable of synthesizing all media – pictures, animation, sound, and text – through the intimacy and responsiveness of the personal computer."

History of pen computing (continued):
1970s: Commercial products, including kana/romanji billing machine
1980s: Handwriting recognition companies – Nestor, Communication Intelligence Corporation, Lexicus, several others

Pioneers: Apple
1987 Apple prototype ("Knowledge Navigator"): speech recognition, intelligent agents, camera, folding display, video conferencing, wireless communication, Personal Information Manager
  • Pen Computer Technology
Pen Computer Technology
Educates the reader about the technologies involved in a pen computer.
Fujitsu PC Corporation, www.fujitsupc.com. For more information: [email protected]
© 2002 Fujitsu PC Corporation. All rights reserved.

This paper is intended to educate the reader about the technologies involved in a pen computer. After reading this paper, the reader should be better equipped to make intelligent purchasing decisions about pen computers.

Types of Pen Computers. In this white paper, "pen computer" refers to a portable computer that supports a pen as a user interface device, and whose LCD screen measures at least six inches diagonally. This product definition encompasses five generally recognized categories of standard products, listed in Table 1 below.

Table 1: Categories of Pen Computers with LCD Displays of Six Inches or Larger
• Webpad: Target market: Consumer & Enterprise; PC user: No; Interface: Standard browser; Storage: Flash memory; Operating system: Windows CE, Linux, QNX; Runs local programs: Only via browser plug-ins; Example: Honeywell WebPAD II
• CE Tablet: Target market: Enterprise; PC user: No; Interface: Specialized applications; Storage: Flash memory; Operating system: Windows CE; Runs local programs: Yes; Example: Fujitsu PenCentra
• Pen Tablet: Target market: Enterprise; PC user: Yes; Interface: Windows & specialized applications; Storage: Hard drive; Operating system: Windows 9x, NT-4, 2000, XP; Runs local programs: Yes; Example: Fujitsu Stylistic
• Pen-Enabled: Target market: Consumer & Enterprise; PC user: Yes; Interface: Windows; Storage: Hard drive; Operating system: Windows 9x, 2000, XP; Runs local programs: Yes; Example: Fujitsu LifeBook B Series
• Tablet PC: Target market: Consumer & Enterprise; PC user: Yes; Interface: Windows; Storage: Hard drive; Operating system: Windows XP Tablet PC Edition; Runs local programs: Yes; Example: Many under development

Since the different types of pen computers are often confused, the following paragraphs are intended to help explain the key distinguishing characteristics of each product category.

Pen Computers Contrasted. Webpad: A Webpad's primary characteristic is that its only user interface is a Web browser.
  • A Context for Pen-Based Mathematical Computing
A Context for Pen-Based Mathematical Computing
Elena Smirnova, Stephen M. Watt
Ontario Research Centre for Computer Algebra, Department of Computer Science, University of Western Ontario, London, Ontario, Canada N6A 5B7
{elena,watt}@orcca.on.ca

Abstract: We report on an investigation to determine an architectural framework for pen-based mathematical computing. From the outset, we require that the framework be integrated with suitable elements for pen computing, document processing and computer algebra. We demonstrate that this architecture is able to provide these interfaces and allows a high degree of platform independence for the development of pen-based mathematical software.

Keywords: Pen computing, computer algebra, Java, .Net, Maple, Microsoft Office.

1. Introduction
1.1. Motivation
With the recent widespread availability of pen-enabled devices (such as Pocket PCs, Tablet PCs and interactive whiteboards), there is an opportunity for a new generation of natural interfaces for mathematical software packages. The use of the pen to enter, edit and manipulate mathematical expressions can lead to a qualitative improvement in the ease of use of computer algebra systems. On the other hand, mathematical input on pen-enabled devices goes well beyond ordinary handwritten mathematics on paper or chalkboard, simply because it can enjoy the rich functionality of software behind the ink-capturing hardware. We should point out from the beginning that pen-based software for mathematics has unique aspects that distinguish it from text-based applications and that these drive our software architecture with different considerations: First, the set of symbols used in mathematics is much larger than the usual alphabets or syllabaries encountered in natural languages.
  • Presentation (PDF)
The Past and Future of Pen Computing
Conrad H. Blickenstorfer, Editor-in-Chief, Pen Computing Magazine
[email protected] http://www.pencomputing.com

Technology has become the international language of progress, of building things rather than destroying them.

PC Market: Cloudy Future
After 20 years of growth, demand is leveling off. IDC and Dataquest say shipments are down for the first time ever, and predict 6% down from 2000. Still 30 million each in Q2 and Q3 2001, but:
– Commodity components make it difficult to make a profit
– PC prices have come down:
– 1981: 4.77MHz PC costs US$4,000 ($7,767 in 2001 money)
– 2001: 1.8GHz PC costs US$1,000
The notebook market is a bit better. Estimate: 26 million units for 2001, same as for 2000. It is clear that PCs and notebooks as we know them represent the past and the present of computing, but not necessarily the future of computing. Many people agree that PDAs and pen tablets or web tablets are a technology with a very promising future.

PDA Projections (1)
IDC said that Asia Pacific (without Japan) PDA sales were about two million in 2000. Dataquest said there were 2.1 million PDAs sold in Europe in 2000, with Palm and Pocket PC each having a market share of about 40% in Q2/2001. The US PDA market is 7-8 million units this year, and represents 60-70% of worldwide PDA sales right now. Microsoft said in May 2001 that 1.25 million Pocket PCs have sold since the April 2000 introduction. At an August Microsoft conference in Seattle, Washington, Microsoft said that two million Pocket PCs have been sold worldwide.
  • Active Pen Input and the Android Input Framework
ACTIVE PEN INPUT AND THE ANDROID INPUT FRAMEWORK

A Thesis Presented to the Faculty of California Polytechnic State University, San Luis Obispo, in Partial Fulfillment of the Requirements for the Degree Master of Science in Electrical Engineering, by Andrew Hughes, June 2011.
© 2011 Andrew Hughes. ALL RIGHTS RESERVED.

COMMITTEE MEMBERSHIP
TITLE: Active Pen Input and the Android Input Framework
AUTHOR: Andrew Hughes
DATE SUBMITTED: June 2011
COMMITTEE CHAIR: Chris Lupo, Ph.D.
COMMITTEE MEMBER: Hugh Smith, Ph.D.
COMMITTEE MEMBER: John Seng, Ph.D.

Abstract
Active Pen Input and the Android Input Framework, Andrew Hughes. User input has taken many forms since the conception of computers. In the past ten years, Tablet PCs have provided a natural writing experience for users with the advent of active pen input. Unfortunately, pen based input has yet to be adopted as an input method by any modern mobile operating system. This thesis investigates the addition of active pen based input to the Android mobile operating system. The Android input framework was evaluated and modified to allow for active pen input events. Since active pens allow for their position to be detected without making contact with the screen, an on-screen pointer was implemented to provide a digital representation of the pen's position. Extensions to the Android Software Development Kit (SDK) were made to expose the additional functionality provided by active pen input to developers. Pen capable hardware was used to test the Android framework and SDK additions and show that active pen input is a viable input method for Android powered devices.
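The thesis describes adding active-pen events to the Android input framework and exposing them through SDK extensions, including hover position while the pen is in range but not touching the screen. Stylus tool types and hover events later became part of the standard Android SDK; as a hedged sketch of how an application consumes them with today's public API (standard MotionEvent calls, not code from the thesis), a custom View can separate stylus strokes from finger touches and track hover motion:

```java
import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

/**
 * A View that logs active-pen input: contact strokes with pressure, and
 * hover motion while the pen is above the digitizer but not touching it.
 */
public class StylusView extends View {

    public StylusView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getToolType(0) == MotionEvent.TOOL_TYPE_STYLUS) {
            Log.d("StylusView", "pen at (" + event.getX() + ", " + event.getY()
                    + ") pressure=" + event.getPressure());
            return true;                  // consume stylus strokes
        }
        return super.onTouchEvent(event); // leave finger touches alone
    }

    @Override
    public boolean onHoverEvent(MotionEvent event) {
        // Hover events arrive while an active pen is in range but not in
        // contact, which is what makes an on-screen pointer possible.
        if (event.getActionMasked() == MotionEvent.ACTION_HOVER_MOVE) {
            Log.d("StylusView", "hover at (" + event.getX() + ", " + event.getY() + ")");
        }
        return super.onHoverEvent(event);
    }
}
```

Tilt and orientation can be read the same way via event.getAxisValue(MotionEvent.AXIS_TILT) on hardware that reports them.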
  • Charles C. Tappert, Ph.D
Charles C. Tappert, Ph.D.
CSIS, Pace University, Goldstein Center, 861 Bedford Road, Pleasantville NY 10570
E-mail: [email protected] | Office phone: 914-773-3989 | Fax: x3533
Google Scholar Publication Citations

Extensive experience in computer science, specializing in artificial intelligence, big data analytics, quantum computing, pattern recognition, machine learning, deep learning, brain modeling, biometrics, pen computing and speech applications, cybersecurity, forensics, and project management. Taught graduate and undergraduate courses, supervised dissertations, and secured government contracts. Over 100 peer reviewed publications in prestigious international journals and conferences.

Education
Ph.D. Electrical Eng., Cornell, ECE
Fulbright Scholar, Royal Inst. Tech., Stockholm
M.S. Electrical Eng., Cornell, ECE
B.S. Engineering Sciences, Swarthmore

Academic Experience
Pace University, Seidenberg School of CSIS, Computer Science Professor, 2000-present. Director, Doctor of Professional Studies (D.P.S.) in Computing Program; Director, Pervasive Computing Lab; supervised three M.S. theses and thirty doctoral dissertations; teaches Quantum Computing, Machine Learning, Big Data Analytics, Capstone Projects, Emerging IT.
U.S. Military Academy, Computer Science Associate Professor, 1993-2000. Taught courses in Computer Graphics, Languages, Databases, Capstone Projects, Intro to Computing.
SUNY Purchase and Pace University, Adjunct Associate Professor, 1990-1993. Taught undergraduate courses in Computer Operating Systems and Data Structures.
North Carolina
  • Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops
DISS. ETH NO. 21686
Towards Document Engineering on Pen and Touch-Operated Interactive Tabletops
A dissertation submitted to ETH ZURICH for the degree of Doctor of Sciences, presented by FABRICE MATULIC, Diplôme d'Ingénieur ENSIMAG, Diplom Informatik, Universität Karlsruhe (TH), accepted on the recommendation of Prof. Dr. Moira C. Norrie (examiner), Prof. (FH) Dr. Michael Haller (co-examiner), and Dr. Ken Hinckley (co-examiner), 2014.
© 2014 Fabrice Matulic

Abstract
In the last few years, the consumer device landscape has been significantly transformed by the rapid proliferation of smartphones and tablets, whose sales are fast overtaking that of traditional PCs. The highly user-friendly multitouch interfaces combined with ever more efficient electronics have made it possible to perform an increasingly wide range of activities on those mobile devices. One category of tasks, for which the desktop PC with mouse and keyboard remains the platform of choice, however, is office work. In particular, productivity document engineering tasks are still more likely to be carried out on a regular computer and with software, whose interfaces are based on the time-honoured "windows, icons, menus, pointer" (WIMP) paradigm. With the recent emergence of larger touch-based devices such as wall screens and digital tabletops (which overcome the limitations of mobile devices in terms of available screen real estate) as well as the resurgence of the stylus as an additional input instrument, the vocabulary of interactions at the disposal of interface designers has vastly broadened. Hybrid pen and touch input is a very new and promising interaction model that practitioners have begun to tap, but it has yet to be applied in office applications and document-centric productivity environments.
  • The Method of Computer User Interface Using Tablet and Pen Instead of Mouse and Key Board Is Known As Pen Computing
The method of computer user interface that uses a tablet and pen instead of a mouse and keyboard is known as pen computing. The use of touch-screen devices such as mobile phones, GPS units, and PDAs with a pen as the input is also referred to as pen computing. Pen computing essentially involves a stylus, commonly known as the pen, and a handwriting-detection device called the tablet. The first stylus, called the Styalator, was used by Tom Dimond in 1957 as a computer input. A stylus and a digital pen are slightly different even though both perform pointing functions. A stylus is a small pen-shaped device used to give inputs to the touch screen of a mobile phone, PC tablet, etc., while a digital pen is a somewhat larger device with programmable buttons, pressure sensitivity, and erasing features. The stylus is mainly used as the input device of PDAs (personal digital assistants), smartphones, etc. Finger-touch devices, such as the iPhone, are becoming popular as a replacement for the stylus. The digital pen is similar to a writing pen in shape and size, with internal electronics. The digital pen consists of:
1. Pointer: used to make markings on the tablet; similar to the writing point of a ballpoint pen.
2. Camera device: to sense the markings as inputs.
3. Force sensor with ink cartridge: detects the pressure applied to the pen and converts it into corresponding inputs.
4. Processor: a microprocessor-based system to analyze and process the input data.
5. Memory: to store the input data.
6. Communication system: to transfer the data to a mobile phone or PC through Bluetooth technology.
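The component list above implies a simple data path: the camera and force sensor produce position and pressure samples, the on-pen processor packs them, memory buffers them, and the Bluetooth radio ships them to a phone or PC. As a rough, hedged model of that flow (the 12-byte packet layout and field choices are invented for illustration, not any real pen's protocol), the Java sketch below packs one sample for transmission:

```java
import java.nio.ByteBuffer;

/**
 * Toy model of the digital pen's data path: the camera and force sensor
 * produce one sample, the on-pen processor packs it, and the packed bytes
 * are what would be handed to the Bluetooth radio. The 12-byte layout is
 * invented for illustration; real digital pens define their own formats.
 */
public class PenSample {
    final float x, y;        // position decoded from the camera / dot pattern
    final float pressure;    // normalized force-sensor reading, 0..1

    PenSample(float x, float y, float pressure) {
        this.x = x;
        this.y = y;
        this.pressure = pressure;
    }

    /** Pack one sample into a fixed-size buffer for the radio link. */
    byte[] toBytes() {
        return ByteBuffer.allocate(12)
                .putFloat(x)
                .putFloat(y)
                .putFloat(pressure)
                .array();
    }

    public static void main(String[] args) {
        PenSample sample = new PenSample(105.5f, 42.0f, 0.73f);
        System.out.println("packed " + sample.toBytes().length + " bytes for transmission");
    }
}
```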