eGovernance Augmented Reality Application


Reality Sensing, Mining and Augmentation for Mobile Citizen–Government Dialogue (FP7-288815)

D3.2 – eGovernance augmented reality application

Dissemination level: Public (PU)
Contractual date of delivery: Month 33, 2014-10-31
Actual date of delivery: 2014-10-31
Workpackage: WP3 – Mobile Augmented Reality
Task: T3.2 – eGovernance augmentation layer
Type: Prototype
Approval Status: Final
Version: v0.7
Number of pages: 126
Filename: D3.2-eGovernance augmented reality

Abstract

D3.2 reports on the activities performed during the second phase of the project in the context of WP3. During this period there were three main activity tracks constituting the WP3 contribution to Live+Gov, namely: a) the enhancement of the platform for mobile augmented reality and the generation of the eGovernance augmentation layers; b) the development of a mechanism for personalized content delivery; and c) the review of methods and tools for data aggregation and visualization, together with a thorough presentation of the analytic tools that have been developed to help decision makers reach more informed decisions. All three activity tracks are presented in detail, with emphasis on the reasons that motivated our work and on the concrete outcomes of our developments.

The information in this document reflects only the authors' views, and the European Community is not liable for any use that may be made of the information contained therein. The information in this document is provided as is, and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability.

This work was supported by the EU 7th Framework Programme under grant number IST-FP7-288815 in project Live+Gov (www.liveandgov.eu).

Copyright

© Copyright 2013 Live+Gov Consortium, consisting of:

1. Universität Koblenz-Landau
2. Centre for Research and Technology Hellas
3. Yucat BV
4. Mattersoft OY
5. Fundacion BiscayTIK
6. EuroSoc GmbH

This document may not be copied, reproduced, or modified in whole or in part for any purpose without written permission from the Live+Gov Consortium. In addition to such written permission to copy, reproduce, or modify this document in whole or in part, an acknowledgement of the authors of the document and all applicable portions of the copyright notice must be clearly referenced. All rights reserved.

History

Version | Date | Reason | Revised by
0.1 | 2014-05-27 | Draft skeleton – alpha version | S. Nikolopoulos and Y. Kompatsiaris
0.2 | 2014-09-03 | Version for collecting input | S. Nikolopoulos, D. Ververidis, G. Liaros, A. Papazoglou-Chalkias, E. Chatzilari, A. Maronidis, G. Chantas
0.3 | 2014-09-27 | Version incorporating the first round of inputs | L. Niittylä, C. Schaefer
0.4 | 2014-10-15 | Version incorporating the second round of inputs – beta version | L. Niittylä, C. Schaefer, D. Ververidis, G. Liaros, P. Minnigh, M. Terpstra
0.5 | 2014-10-24 | Version incorporating the comments from internal review | S. Nikolopoulos
0.6 | 2014-10-30 | Added a section on the sentiment analysis of social content | S. Nikolopoulos and A. Papazoglou-Chalkias
0.7 | 2014-10-31 | Proof editing and additional refinements – final version | S. Nikolopoulos and Y. Kompatsiaris

Author list

Organization | Name | Contact Information
CERTH | Dimitrios Ververidis | [email protected]
CERTH | Elisavet Chatzilari | [email protected]
CERTH | Giorgos Liaros | [email protected]
CERTH | Tasos Papazoglou Chalikias | [email protected]
CERTH | Spiros Nikolopoulos | [email protected]
CERTH | Yiannis Kompatsiaris | [email protected]
CERTH | Giannis Chantas | [email protected]
CERTH | Anastasios Maronidis | [email protected]
YCT | Pieter Minnigh | [email protected]
YCT | Marjolein Terpstra | [email protected]
UKOB | Christoph Schaefer | [email protected]
MTS | Laura Niittylä | [email protected]
BIZ | Maite de Arana | [email protected]

Executive Summary

D3.2 is organized in four main sections (Sections 2–5) that report on the activities that have taken place during the second phase of the project. These sections are complemented by an introductory section that provides an overview of the deliverable and a summary section that concludes with some important insights gained during the Live+Gov experience. As regards the main sections, Section 2 reports on the improvements made to address the project's revised requirements, as well as on the major developments undertaken to serve content automatically across platforms and vendors. Section 3 describes the personalization mechanism developed to deliver traffic-related information in a personalized manner. Section 4 reviews some of the most widely established methods and tools for data visualization, while Section 5 provides a thorough analysis of the analytic tools developed to facilitate the extraction of valuable insights for each of the Live+Gov use cases. Finally, D3.2 incorporates a number of appendices that detail the technical investigations undertaken to select the most appropriate libraries and services for implementing the analytic tools.
Abbreviations and Acronyms

API – Application Programming Interface
AR – Augmented Reality
AREL – Augmented Reality Experience Language
ARML – Augmented Reality Markup Language
CMS – Content Management System
ESA – Explicit Semantic Analysis
HTTP – Hypertext Transfer Protocol
IBS – Image BaSed augmented reality
IST – Information Society Technologies
JSON – JavaScript Object Notation
KML – Keyhole Markup Language
LBS – Location BaSed augmented reality
MVC – Model-View-Controller
NGD – Normalized Google Distance
NRD – Normalized Relevance Distance
REST – Representational State Transfer
SMS – Short Messaging Service
VR – Virtual Reality
W3C – World Wide Web Consortium
XML – eXtensible Markup Language

Table of Contents

1. INTRODUCTION  13
2. MOBILE AUGMENTED REALITY FOR EGOVERNANCE  15
  2.1 Overview  15
  2.2 Revised requirements and extensions compared to the first prototype  15
  2.3 Improving the augmented reality experience  17
  2.4 Serving AR content automatically across platforms and vendors  20
    2.4.1 Motivation  21
    2.4.2 Existing solutions for creating and publishing AR content  23
    2.4.3 Urban Planning AR directory  25
    2.4.4 FastAR on Sobipro  26
    2.4.5 Conclusions  32
  2.5 eGovernance Augmentation Layers  33
    2.5.1 Urban planning  33
    2.5.2 Mobility  35
3. PERSONALIZED CONTENT DELIVERY  37
  3.1 Motivation  37
  3.2 Personalized traffic-related information  37
    3.2.1 Extracting the commuting habits  37
    3.2.2 Building the user's profile  38
    3.2.3 Generating personalized suggestions  39
  3.3 Privacy  40
  3.4 Related apps and how Live+Gov goes beyond  40
4. VISUALIZATION TOOLS FOR DATA COMPREHENSION  43
  4.1 Introduction  43
  4.2 Map based visualisations  45
    4.2.1 Types of map-based visualizations  45
    4.2.2 Map visualizations on the web  48
    4.2.3 Map visualizations on mobile devices  49
  4.3 Image-based visualisations  49
    4.3.1 Types of image-based visualizations …
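Of the acronyms listed above, NGD (Normalized Google Distance) refers to a standard relatedness measure computed from search-engine hit counts. As a minimal illustrative sketch of the published formula (the function name `ngd` and its hit-count inputs are assumptions for illustration, not code from this deliverable):

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google Distance over hit counts.

    fx, fy : number of pages containing term x (resp. term y)
    fxy    : number of pages containing both terms
    n      : total number of indexed pages
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Terms that always co-occur have distance 0; the rarer the
# co-occurrence relative to the individual counts, the larger
# the distance.
print(ngd(1000, 1000, 1000, 10**9))            # 0.0
print(round(ngd(1000, 1000, 10, 10**9), 3))    # 0.333
```

The related NRD (Normalized Relevance Distance) follows the same frequency-based scheme; the deliverable body describes how these measures are actually applied.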