Database Benchmarking for Supporting Real-Time Interactive Querying of Large Data


Leilani Battle, Philipp Eichmann, Marco Angelini, Tiziana Catarci, Giuseppe Santucci, Yukun Zheng, Carsten Binnig, Jean-Daniel Fekete, Dominik Moritz

To cite this version: Leilani Battle, Philipp Eichmann, Marco Angelini, Tiziana Catarci, Giuseppe Santucci, et al. Database Benchmarking for Supporting Real-Time Interactive Querying of Large Data. SIGMOD '20 - International Conference on Management of Data, Jun 2020, Portland, OR, United States. pp. 1571-1587, 10.1145/3318464.3389732. hal-02556400

HAL Id: hal-02556400
https://hal.inria.fr/hal-02556400
Submitted on 28 Apr 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Affiliations: Leilani Battle and Yukun Zheng, University of Maryland, College Park, Maryland; Philipp Eichmann, Brown University, Providence, Rhode Island; Marco Angelini, Tiziana Catarci, and Giuseppe Santucci, University of Rome "La Sapienza", Rome, Italy; Carsten Binnig, Technical University of Darmstadt, Darmstadt, Germany; Jean-Daniel Fekete, Inria, Univ. Paris-Saclay, CNRS, Orsay, France; Dominik Moritz, University of Washington, Seattle, Washington.

ABSTRACT

In this paper, we present a new benchmark to validate the suitability of database systems for interactive visualization workloads. While there exist proposals for evaluating database systems on interactive data exploration workloads, none rely on real user traces for database benchmarking. To this end, our long term goal is to collect user traces that represent workloads with different exploration characteristics. In this paper, we present an initial benchmark that focuses on "crossfilter"-style applications, which are a popular interaction type for data exploration and a particularly demanding scenario for testing database system performance.
We make our benchmark materials, including input datasets, interaction sequences, corresponding SQL queries, and analysis code, freely available as a community resource, to foster further research in this area: https://osf.io/9xerb/?view_only=81de1a3f99d04529b6b173a3bd5b4d23.

CCS CONCEPTS

• Information systems → Data management systems; Data analytics; • Human-centered computing → Visualization systems and tools.

1 INTRODUCTION

Motivation. The data science process often begins with users (i.e., analysts, data scientists) exploring possibly massive amounts of data through interactions with a graphical user interface, oftentimes a data visualization tool [4, 9, 12, 66]. However, each time a user interacts with a visualization interface, the underlying data must be processed (filtered, aggregated, etc.) such that the interface can quickly provide a visual response [39, 42, 56]. To meet this growing demand for interactive and real-time performance, the database and visualization communities have developed a variety of techniques, including approximate query processing [2, 14], online aggregation/progressive visualization [3, 19, 26], data cubes [8, 36, 38], spatial indexing [63], speculative query execution [8, 30], and lineage tracking [53].

However, we still lack adequate benchmarks to empirically assess which of these resulting systems provide satisfactory performance, and which systems are actually better than others for interactive, real-time querying scenarios. This issue is exacerbated for the most demanding yet popular visualization scenarios such as crossfilter [41, 58, 67], where one interaction may generate hundreds of queries per second, with an expectation of near-immediate results. Unfortunately, existing database benchmarks such as TPC-H [65], TPC-DS [64], or the Star Schema Benchmark (SSB) [47] are insufficient for making these comparisons. One main reason is that the workloads modeled in these benchmarks are not representative of how database queries are generated through user interactions, such as with tools like Tableau [59] or Spotfire [58]. In contrast, visualization benchmarks provide realistic scenarios, but lack the configurability, precision, and automation afforded by database benchmarks when testing performance [7]. Furthermore, there exists no evaluation platform to test database systems under a range of interactive analysis conditions, such as different dataset characteristics, exploration scenarios, interaction paradigms, or user profiles. Therefore, our community is still woefully unequipped to answer the question "are database systems truly capable of supporting real-time interactive data exploration at scale?" Here, real-time means with a latency < 100 ms, or supporting at least ten frames per second [15, 41, 42]. Unfortunately, our study reveals more work is needed from the database community to answer positively.

Compared to other interaction designs (e.g., mouse hovers or check-box selection), crossfilter-style applications are arguably the most demanding use case for database systems [41]. With a single interaction, a user can generate hundreds of database queries per second when moving a single crossfilter range-slider. For example, Figure 1 shows information about airline delays; changing the selection of the flight distance using the crossfilter range-slider of the histogram in the upper row (center) causes all other histograms to update at each mouse drag. Crossfilter interfaces are instances of dynamic query interfaces [57], which require live updates from the underlying database system as the slider moves in real time.

[Figure 1: An example crossfilter application from our user study, visualizing the Flights dataset, composed of six numerical attributes. Each attribute is visualized as a histogram that can be filtered using a range-slider. When the user interacts with a range slider, the entire dataset is filtered, the bins in each histogram are recomputed, and the visualizations are updated.]
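To make this workload concrete, the following is a minimal sketch of the query stream a single range-slider drag can generate: every slider event re-issues one filtered histogram aggregation per linked view. The schema (a flights table with distance and arr_delay columns), the bin width, and the drag positions are illustrative assumptions rather than the paper's actual benchmark queries, and Python's built-in sqlite3 merely stands in for the system under test.

```python
# A minimal sketch (not the paper's benchmark code) of the query stream
# produced by dragging a crossfilter range-slider over flight distance.
# Assumed schema: flights(distance, arr_delay) -- illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (distance REAL, arr_delay REAL)")
conn.executemany(
    "INSERT INTO flights VALUES (?, ?)",
    [((100 * i) % 2500, (7 * i) % 120 - 30) for i in range(10_000)],
)

LINKED_VIEWS = {"arr_delay": 10.0}  # attribute -> histogram bin width

def queries_for_slider_event(lo, hi):
    """One slider event re-bins every linked histogram under the new filter."""
    for attr, width in LINKED_VIEWS.items():
        yield (
            f"SELECT CAST({attr} / ? AS INT) AS bin, COUNT(*) "
            f"FROM flights WHERE distance BETWEEN ? AND ? GROUP BY bin",
            (width, lo, hi),
        )

# A short drag emits a dense sequence of range updates; each update
# re-queries all linked views before the next event arrives.
for lo, hi in [(0, 2500), (100, 2400), (200, 2300), (300, 2200)]:
    for sql, params in queries_for_slider_event(lo, hi):
        histogram = conn.execute(sql, params).fetchall()
```

Because each event fans out to every linked view, a drag emitting tens of events per second across several histograms quickly reaches the hundreds of queries per second described above.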
analysis tasks using a crossfilter visualization tool, across three different datasets, producing a total of 128 different Here, real-time means with a latency < 100 ms, or support- traces for further study. Using the log data and video record- ing at least ten frames per second [15, 41, 42]. Unfortunately, ings collected from the study, we analyze and summarize our study reveals more work is needed from the database our findings regarding the interaction patterns observed, and community to answer positively. their effects on the resulting query workload. This characteri- Contributions. While there exist proposals [7, 18, 28, 29, zation analysis provides useful insights into how interaction 61] for evaluating database systems on interactive data explo- patterns can be taken into account when evaluating and ration workloads, none of these approaches derive a bench- optimizing queries on a database system. mark from real user traces, introducing potential artifacts As a second contribution, we present the results of and possibly missing key behavioral patterns. To validate our analysis of the 128 collected traces and character- the suitability of database systems for interactive and visual ize the behavior of typical users on interactive tasks. data exploration workloads, this paper thus presents a bench- Although continuous interactions, such as dragging a slider, mark constructed from real traces of users interacting with require a latency < 100 ms to provide continuous feedback, an information visualization system. In this scenario, every other events such as mouse clicks only require a 1 s latency, query corresponds to an actual interaction from a real user. allowing for pre-computation strategies. Based on our analy- Given the innumerable analysis scenarios supported by vi- sis, we elaborate on design implications for database systems sualization tools [23, 33], it would be unrealistic to have one to support highly interactive use cases (e.g., crossfilter). benchmark represent all possible use cases. Instead, we be- As a third contribution, we present a new benchmark lieve that a database benchmark ecosystem is required based on our user traces. As mentioned before this bench- for evaluating interactive data exploration workloads, mark aims to measure the ability of a DBMS to sustain similar to the TPC benchmarks for classical database work- the challenging workloads of interactive and real-time data loads (OLTP, OLAP, etc.). Our long term goal is to collect user exploration applications. Contrary to traditional database traces, representing workloads with different exploration benchmarks focused on returning exact results without la- characteristics. As a first step,