BRKCLD-2501: Architecting and Delivering Big Data in Cloud
Abhi Singh, Technical Solutions Architect, Cisco

Agenda
• Business use case
• Ecosystem and technology
• Data models and architectures
• Enterprise grade
• Hadoop and OpenStack
• Summary, further learning, Q&A

Demonstrations
• Demo 1: Import and query structured data in Hadoop
• Demo 2: Import and analyze semi-structured data in Hadoop
• Demo 3: Spark for analytics on object relationships
• Demo 4: Big Data as a service in cloud

From data to information to actionable intelligence
DATA feeds LOGIC, which drives the OUTCOME, running on bare-metal and virtualized infrastructure in private and public clouds.

Big Data is …
Data sets too large or complex for traditional data processing applications, characterized by the four Vs: Volume, Velocity, Variety, and Veracity. Big-Data-driven spending for 2012-2016 was forecast at $232B (Gartner, 2012), while data quality problems cost U.S. businesses more than $600 billion a year (DW Institute, 2002).

Big Data is an enabler
• Decrease cost: open source technology; integrate different data sources
• Streamline business: a landing zone for all data; data-driven business decisions; improved outcomes of campaigns
• Increase revenue: improve customer satisfaction; reduce customer churn; understand patterns; preventive maintenance

Big Data in action - Waze
Real-time GPS data, location-based ads, map archives, ad campaign budgets, social media activity, and community-shared gas prices all feed a single application.

Ecosystem and Technology

Data lives in RDBMS, NoSQL, and Hadoop, on bare-metal and virtualized infrastructure in private and public clouds.

Conventional - databases and data warehouses
An RDBMS (relational database) is a set of tables containing data fitted into predefined categories (a schema, or blueprint) and supports ACID properties. Relational systems scale vertically.
• Operational databases: OLTP; data retrieval/update; relational; current data
• Data warehouse: OLAP; data analysis and decision making; relational and multi-dimensional; historic data

Distribution across a cluster is challenging (CAP)
Under a network partition, a distributed store must trade Consistency ("what's the account balance?") against Availability/latency ("is that hotel room still available?").

NoSQL (non-relational) databases
• Schema-less(?), cluster-friendly (sharding), open source
• Aggregates as the transaction boundary; developer-friendly
• Flavors - aggregate-oriented: key-value, document, columnar; relationship-oriented: graph
NoSQL systems scale horizontally.

Hadoop history - reinventing the Google wheel
From Google to Yahoo to open source (Apache Hadoop) to monetized:
• 2003-2004: Google publishes papers on GFS (the Google File System) and MapReduce (data processing on large clusters)
• 2006: Apache Hadoop is born at Yahoo; the name came from a plush toy elephant belonging to the son of Doug Cutting, Hadoop's creator
• 2008: Cloudera is founded; Doug Cutting becomes Chief Architect at Cloudera; Intel later invested $740M for an 18% stake
• 2009: MapR is founded; its CTO, Srivas, came from Google, where he worked on the GFS and BigTable team; Google invested $110M
• 2011: Hortonworks is founded by 24 engineers from the Hadoop team at Yahoo; HDP reached a market cap of about $850M

Hadoop ecosystem
Around core Hadoop (MapReduce/YARN and HDFS) sits a stack of tools: highly reliable distributed messaging (Kafka), workflow scheduling, real-time computation, distributed coordination, faster computation engines, HDFS <-> RDBMS transfer, machine learning, streaming event data ingestion, data analytics, NoSQL databases, and SQL-like querying via HiveQL compiled to MapReduce.

NoSQL and Hadoop are now widely accepted.

Data Models and Architectures

New data models have emerged (NoSQL)
• Key-value: a key maps to an opaque value (Tom -> 123456, London; Scott -> 999999, Malvern)
• Document: one self-contained record holds the customer's name, phone, and email, the address (city, state), and the payment details (credit card number, expiry)
• Column-based: data such as customer orders is stored by column rather than by row
• Graph: relationships are first-class (Jon -> pancakes, Dan -> Football)
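To make the aggregate idea concrete, here is a minimal, hypothetical Python sketch (names and values echo the slide's examples; the structure is invented for illustration) of the same kind of data in key-value, document, and graph form:

import json

# Key-value: the store sees only an opaque value behind a key.
kv_store = {"customer:123456": json.dumps({"name": "Tom", "city": "London"})}

# Document: one aggregate carries everything a relational schema
# would split across customer, address, and payment tables.
customer = {
    "name": "Scott",
    "phone": "999999",
    "address": {"city": "Malvern"},
    "payment": {"credit_card": {"number": "4111-...", "expiry": "01/20"}},
}

# Graph: the relationships themselves are the data.
edges = [("Jon", "likes", "pancakes"), ("Dan", "likes", "Football")]

print(json.loads(kv_store["customer:123456"])["city"])  # -> London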
Core Hadoop: MapReduce/YARN and HDFS
• MapReduce: parallel processing for large data sets - the ability to take a dataset, divide it, and run it over parallel nodes. Input data is processed and transformed into an intermediate stage, then summarized into the final stage.
• YARN (MapReduce 2.0): job scheduling and cluster resource management via a ResourceManager, per-node NodeManagers, an ApplicationMaster, and containers.
• HDFS: a distributed file system giving high-throughput access to application data via a NameNode and DataNodes.

MapReduce 2.0 (YARN)
• A global ResourceManager handles job tracking and resource management.
• A per-application (MapReduce or DAG) ApplicationMaster handles job scheduling and monitoring: it negotiates resources from the ResourceManager and works with the NodeManager(s) to execute and monitor the tasks.
https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html

HDFS - Hadoop Distributed File System
Built for large data sets, HDFS runs on clusters, and failure is assumed (hadoop.apache.org):
• a DFS written in Java that runs on Linux
• write-once-read-many access model
• hierarchical file organization
• stores each file as a sequence of blocks
• block size and replication factor are configurable per file
• the NameNode maintains the file system namespace
• communication protocols are layered on top of TCP/IP

Demo - Import and query structured data in Hadoop

Data is getting complex
Across the four Vs, variety spans sensor data, social media activity, videos, audio, images, archives, and databases, while velocity ranges from macrobatch (>15 min) and microbatch (2-15 min) through near real-time (2 sec-2 min), near real-time decision support (100 millisec-2 min), and real-time event processing (<100 millisec).

Data serialization and deserialization
Serialization is the process of translating an object into a stream of bytes in order to store it or transmit it to memory, a database, or a file.
• Avro: a schema-based system; uses JSON for defining data types and protocols, and serializes data in a compact binary format
• Parquet: a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language
• Others: Thrift (Facebook), Protocol Buffers (Google), JSON, BSON

Demo - Import and analyze semi-structured data in Hadoop

Big Data solutions need a lot more than just Hadoop
• Lambda architecture: http://nathanmarz.com/blog/how-to-beat-the-cap-theorem.html
• Kappa architecture: http://radar.oreilly.com/2014/07/questioning-the-lambda-architecture.html

Apache Spark deserves special attention
10x to 100x faster than MapReduce (spark.apache.org): a DAG execution engine supporting cyclic data flow and in-memory computing, with APIs in Java, Scala, Python, and R. IBM committed $300M and 3,500 people to Spark.

Demo - Spark for analytics on object relationships
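The map-to-intermediate-to-reduce flow described above, and Spark's in-memory reuse, are easy to see in a small example. A minimal PySpark sketch of a word count, assuming a local Spark installation (the input path is a placeholder):

from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")

# The classic MapReduce flow, expressed as a Spark DAG:
counts = (
    sc.textFile("hdfs:///tmp/sample.txt")   # placeholder input, split across nodes
      .flatMap(lambda line: line.split())   # map: emit words
      .map(lambda word: (word, 1))          # intermediate: (key, 1) pairs
      .reduceByKey(lambda a, b: a + b)      # reduce: summarize per key
)
counts.cache()          # keep the result in memory for reuse across actions
print(counts.take(10))
sc.stop()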
When to use NoSQL vs Hadoop for Big Data
Both are cluster-friendly, scale out horizontally, handle large data sets, offer flexible data formats, and are developer-friendly.
• NoSQL: read/write/modify workloads; interactive; real-time (with random, real-time read/write access layered on HDFS)
• Hadoop: batch processing; processing large data sets; historic data (data lakes)
The open question: do you maintain two different copies of the data?

Industry acceptance: NoSQL and Hadoop
Both now appear in Gartner's Magic Quadrant for Operational Database Management Systems.

Enterprise Grade

Simplified view of a Hadoop cluster and add-ons

Open source is free … if your time has no value
The gaps you pay for in time: management, skills and operations, scale, fault tolerance, security, and governance.

Management

Scalability and fault tolerance
In the world of cluster computing, Hadoop was designed to be scalable and robust: fault tolerance is built into HDFS and MapReduce, and a centralized coordination service (ZooKeeper) maintains configuration information and naming and provides distributed synchronization and group services.

Security
By default, Hadoop runs in non-secure mode.
• Authentication: Kerberos principals for Hadoop daemons and users; the Hadoop Key Management Server
• Data encryption: encryption of RPC, block data transfer, and HTTP; transparent encryption in HDFS
• Authorization: Apache Sentry (sentry.apache.org, started by Cloudera) provides role-based authorization to data and metadata stored on a Hadoop cluster, and service-level authorization ensures clients are authorized to access each Hadoop service

Data governance in Hadoop - ready for compliance?
Audit data access, track data lineage, and manage the data/metadata lifecycle.

Hadoop and OpenStack

Hadoop can run on OpenStack
The stack is the same either way: Big Data software (Hadoop) on an operating system (Linux) on physical or virtual infrastructure.
• Traditional Hadoop: designed for bare metal (lack of agility); underutilized resources; maintenance (operations can become a bottleneck); fixed capacity; setting up a POC takes too much time
• Hadoop on OpenStack with Sahara: agile, automated deployment on both physical (Ironic) and virtual infrastructure; a shared platform; repeatable operations with template-based provisioning (Hadoop on demand); scale up and down; quick POCs

Big Data needs infrastructure, and it can be virtual
Virtual infrastructure benefits: clone and repeat (lower operations costs); on-demand, short-lived Hadoop clusters; reuse and sharing of physical infrastructure; flexible cluster scaling (grow/shrink); pay per usage in public cloud.
Hadoop was designed for physical servers, so plan around that:
• Understand the virtualization overhead
• Local on-server HDD storage gives fast sequential reads: use the block device driver for Cinder, utilize the full disk for the Cinder volume, and keep the Cinder volume on the same host as the VM
• Dedicated CPUs and homogeneous hardware: keep VM flavors consistent
• Dedicated network: isolate the physical networks supporting tenant networks
• Replication and redundant services (ZooKeeper, Journal manager, NameNode) should land on different data nodes for fault tolerance: use anti-affinity
• For Spark (memory-intensive), do NOT oversubscribe RAM

Summary
From DATA through LOGIC to OUTCOME: Cisco Data & Analytics and Cisco Data Virtualization pull together data from XML, packaged apps, RDBMS, Excel, files, and data warehouses.
Recommended publications
  • Annex 2: List of tested and analyzed data sharing tools (non-exhaustive)

Annex 2: List of tested and analyzed data sharing tools (non-exhaustive). Below are the specifications of the tools surveyed, as of February 2015, with some updates from April 2016. The tools selected in the context of EU BON are available in the main text of the publication and are described in more detail. This list is also available on the EU BON Helpdesk website, where it will be regularly updated as needed. Additional lists are available through the GBIF resources page, the DataONE software tools catalogue, the BioVeL BiodiversityCatalogue, and the BDTracker.

A.1 GBIF Integrated Publishing Toolkit (IPT)
Main usage, purpose, selected examples: The Integrated Publishing Toolkit is a free open source software tool written in Java which is used to publish and share biodiversity data sets and metadata through the GBIF network. Designed for interoperability, it enables the publishing of content in databases or text files using open standards, namely the Darwin Core and the Ecological Metadata Language. It also provides a 'one-click' service to convert data set metadata into a draft data paper manuscript for submission to a peer-reviewed journal. Currently, the IPT supports three core types of data: checklists, occurrence datasets, and sample-based data (plus datasets at metadata level only). The IPT is a community-driven tool. Core development happens at the GBIF Secretariat, but the coding, documentation, and internationalization are a community effort. New versions incorporate the feedback from the people who actually use the IPT, so users can help get the features they want by becoming involved. The user interface of the IPT has so far been translated into six languages: English, French, Spanish, Traditional Chinese, Brazilian Portuguese, and Japanese (Robertson et al., 2014).
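The Darwin Core standard the IPT publishes is, at heart, a controlled vocabulary of column names. A minimal occurrence file can be sketched in Python; the three terms used below are real Darwin Core terms, while the values are invented:

import csv

# occurrenceID, scientificName, and eventDate are standard Darwin Core
# terms; a real IPT dataset would typically carry many more columns.
fieldnames = ["occurrenceID", "scientificName", "eventDate"]
rows = [{"occurrenceID": "obs-001",
         "scientificName": "Apis mellifera",
         "eventDate": "2015-02-14"}]

with open("occurrence.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)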
  • System and Organization Controls (SOC) 3 Report Over the Google Cloud Platform System Relevant to Security, Availability, and Confidentiality

System and Organization Controls (SOC) 3 Report over the Google Cloud Platform System Relevant to Security, Availability, and Confidentiality, for the period 1 May 2020 to 30 April 2021. Google LLC, 1600 Amphitheatre Parkway, Mountain View, CA 94043.

Management's Report of Its Assertions on the Effectiveness of Its Controls Over the Google Cloud Platform System Based on the Trust Services Criteria for Security, Availability, and Confidentiality.

We, as management of Google LLC ("Google" or "the Company"), are responsible for:
• Identifying the Google Cloud Platform System (System) and describing the boundaries of the System, which are presented in Attachment A
• Identifying our service commitments and system requirements
• Identifying the risks that would threaten the achievement of the service commitments and system requirements that are the objectives of our System, which are presented in Attachment B
• Identifying, designing, implementing, operating, and monitoring effective controls over the System to mitigate risks that threaten the achievement of the service commitments and system requirements
• Selecting the trust services categories that are the basis of our assertion

We assert that the controls over the System were effective throughout the period 1 May 2020 to 30 April 2021, to provide reasonable assurance that the service commitments and system requirements were achieved based on the criteria relevant to security, availability, and confidentiality set forth in the AICPA's
  • F1 Query: Declarative Querying at Scale

F1 Query: Declarative Querying at Scale. Bart Samwel, John Cieslewicz, Ben Handy, Jason Govig, Petros Venetis, Chanjun Yang, Keith Peters, Jeff Shute, Daniel Tenedorio, Himani Apte, Felix Weigel, David Wilhite, Jiacheng Yang, Jun Xu, Jiexing Li, Zhan Yuan, Craig Chasseur, Qiang Zeng, Ian Rae, Anurag Biyani, Andrew Harn, Yang Xia, Andrey Gubichev, Amr El-Helw, Orri Erling, Zhepeng Yan, Mohan Yang, Yiqun Wei, Thanh Do, Colin Zheng, Goetz Graefe, Somayeh Sardashti, Ahmed M. Aly, Divy Agrawal, Ashish Gupta, Shiv Venkataraman. Google LLC, [email protected].

Abstract: F1 Query is a stand-alone, federated query processing platform that executes SQL queries against data stored in different file-based formats as well as different storage systems at Google (e.g., Bigtable, Spanner, Google Spreadsheets, etc.). F1 Query eliminates the need to maintain the traditional distinction between different types of data processing workloads by simultaneously supporting: (i) OLTP-style point queries that affect only a few records; (ii) low-latency OLAP querying of large amounts of data; and (iii) large ETL pipelines. F1 Query has also significantly reduced the

1. Introduction: The data processing and analysis use cases in large organizations like Google exhibit diverse requirements in data sizes, latency, data sources and sinks, freshness, and the need for custom business logic. As a result, many data processing systems focus on a particular slice of this requirements space, for instance on either transactional-style queries, medium-sized OLAP queries, or huge Extract-Transform-Load (ETL) pipelines. Some systems are highly extensible, while others are not. Some systems function mostly as a closed silo, while others can easily pull in data from other sources.
  • Containers at Google

Build What's Next: A Google Cloud Perspective. Thomas Lichtenstein, Customer Engineer, Google Cloud ([email protected]). Seven Google Cloud products each have 1 billion users. Google Cloud in DACH: a new cloud region in Germany with 3 zones (>50% latency reduction), a commitment to GDPR compliance, and local partnerships (map slide residue: Google Cloud offices in Hamburg, Berlin, Frankfurt, Munich, Vienna, and Zurich).

Customer story - Conrad Electronic (Industry: Retail; Region: EMEA). Conrad is disrupting online retail with new services for mobility and IoT-enabled devices; its IoT platform connects nearly 50 brands with thousands of smart products, manages 250M+ data sets per week and 3.5M searches per month, and supports 5x the IoT connections vs. competitors. "We found that Google Ads has the best system for precisely targeting customer segments in both the B2B and B2C spaces. It used to be hard to gain the right insights to accurately measure our marketing spend and impacts. With Google Analytics, we can better connect the omnichannel customer journey." - Aleš Drábek, Chief Digital and Disruption Officer, Conrad Electronic. Solution: as Conrad transitions from a B2C retailer to an advanced B2B and B2C platform for electronic products, it is using Google solutions to grow its customer base, develop on a reliable cloud infrastructure, and digitize its workplaces and retail stores, with a mobile-first strategy and a push to automate everything. Products used: G Suite, Google Ads, Google Analytics, Google Chrome Enterprise, Google Chromebooks, Google Cloud Translation API, Google Cloud Vision API, Google Home, Apigee.
  • Beginning Java Google App Engine

apress.com. Kyle Roche, Jeff Douglas: Beginning Java Google App Engine (1st ed., 264 p.; available in softcover and eBook editions). A book on the popular Google App Engine that is focused specifically on the huge number of Java developers who are interested in it. Kyle Roche is a practicing developer and user of Java technologies and the Google App Engine for building Java-based Cloud applications or Software as a Service (SaaS). Cloud computing is a hot technology concept and growing marketplace that drives interest in this title as well.

Google App Engine is one of the key technologies to emerge in recent years to help you build scalable web applications even if you have limited previous experience. If you are a Java programmer, this book offers you a Java approach to beginning Google App Engine. You will explore the runtime environment, front-end technologies like Google Web Toolkit, Adobe Flex, and the datastore behind App Engine. You'll also explore Java support on App Engine from end to end. The journey begins with a look at the Google Plugin for Eclipse and finishes with a working web application that uses Google Web Toolkit, Google Accounts, and Bigtable. Along the way, you'll dig deeply into the services that are available to access the datastore, with a focus on Java Data Objects (JDO), JDOQL, and other aspects of Bigtable. With this solid foundation in place, you'll then be ready to tackle some of the more advanced topics like integration with other cloud platforms such as Salesforce.com and Google Wave.
  • Migrating Your Databases to Managed Services on Google Cloud

White paper, June 2021: Migrating your databases to managed services on Google Cloud.

Table of contents: Introduction; Why choose cloud databases; The benefits of Google Cloud's managed database services; Maximum compatibility for your workloads; For Oracle workloads: Bare Metal Solution for Oracle; For SQL Server workloads: Cloud SQL for SQL Server; For MySQL workloads: Cloud SQL for MySQL; For PostgreSQL workloads: Cloud SQL for PostgreSQL; For Redis and Memcached workloads: Memorystore; For Redis: Redis Enterprise Cloud; For Apache HBase workloads: Cloud Bigtable; For MongoDB workloads: MongoDB Atlas; For Apache Cassandra workloads: Datastax Astra; For Neo4j workloads: Neo4j Aura; For InfluxDB workloads: InfluxDB Cloud; Migrations that are simple, reliable, and secure; Assessing and planning your migration; Google Cloud streamlines your migrations; Google Cloud services and tools; Self-serve migration resources; Database migration technology partners; Google Professional Services; Systems integrators; Get started today.

Introduction: This paper is for technology decision makers, developers, architects, and DBAs. It focuses on modernizing database deployments with database services on Google Cloud. These services prioritize compatibility and simplicity of management, and include options for Oracle, SQL Server, MySQL, PostgreSQL, Redis, MongoDB, Cassandra, Neo4j, and other popular databases. To transform your business applications, consider Google Cloud native databases. For strategic applications that don't go down, and that need on-demand and unlimited scalability, advanced security, and accelerated application development, Google provides the same cloud native database services that power thousands of applications at Google, including services like Google Search, Gmail, and YouTube with billions of users across the globe.
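The compatibility point above means standard drivers keep working unchanged. A minimal, hypothetical sketch of connecting to a Cloud SQL for MySQL instance with the stock PyMySQL driver (the host, user, password, and database names are placeholders):

import pymysql

# Placeholders only - substitute the instance's IP and real credentials.
conn = pymysql.connect(host="203.0.113.10", user="appuser",
                       password="CHANGE_ME", database="inventory")
try:
    with conn.cursor() as cur:
        # The same query you would run against a self-managed MySQL server.
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
finally:
    conn.close()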
  • A Bridge to the Cloud (Damien Contreras, Customer Engineer Specialist, Data Analytics, Google Cloud)

A Bridge to the Cloud. Damien Contreras, Customer Engineer Specialist, Data Analytics, Google Cloud. (Slides originally in Japanese; translated here.) Agenda: introduction; preparing before the migration; migrating the DWH; integrating with GCP; visualizing the data.

01 Introduction. Drawbacks of a traditional data warehouse: it cannot absorb real-time loads; data keeps growing; cost; unsupported data formats; self-service analytics is difficult; fear of vendor lock-in; everything must be forced into star schemas with dimension and fact tables. Drawbacks of a data lake: cost; balancing cluster resources; version upgrades; multiple data lakes end up being built; partners and talent are hard to find. The value of Google Cloud: good cost performance; an elastic architecture; security; no silos; serverless and no-ops; ANSI SQL-2011.

02 Things to consider before migrating. Partners offer cloud planning and cloud deployment. If the skills are not available in-house: Google support, collaboration with partners, and the BigQuery starter pack (https://cloud.google.com/partners/?hl=ja). TCO & ROI: total cost of ownership can be estimated just by filling out a questionnaire. Building in the cloud: (1) build the foundation for the data and its related data; (2) discover the siloed datasets; (3) proof of concept; (4) build tools for migrating the source data and communicating with surrounding systems; (5) migrate; (6) machine learning, so that anyone can use ML.

03 About the DWH migration. Supported sources include Teradata, IBM Netezza, AWS Redshift, Azure SQL, Hadoop, Oracle, Snowflake, Vertica, SAP BW, and others, all landing in BigQuery. Starting with Teradata and IBM Netezza.

IBM Netezza architecture: a Linux host server fronts S-Blades (each with FPGA, CPU, and memory) connected over a network fabric to disk enclosures. NZSQL commands cover DML and data dumps, and a JDBC connector handles SQL queries. The design combines Symmetric Multiprocessing (SMP, multiple microprocessors) on the host with the AMPP (Asymmetric Massively Parallel Processing, MPP) architecture for large-scale parallel processing.

IBM Netezza data types: all 31 Netezza types can be mapped to BigQuery. Examples:
• Netezza VARCHAR -> BigQuery STRING
• Netezza BOOLEAN/BOOL (True/False, 1/0, yes/no, on/off) -> BigQuery BOOL (TRUE/FALSE)
• Netezza TIME / TIMETZ / TIME_WITH_TIME_ZONE -> BigQuery TIME (which carries no time zone)
• Netezza ARRAY: stored in a VARCHAR data type
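The type mapping lends itself to a simple lookup table. A hedged Python sketch of a schema-conversion helper, covering only the mappings documented on the slides (a real migration would handle all 31 Netezza types):

# Only the four documented mappings; unknown types fall back to STRING
# as a deliberately conservative, illustrative choice.
NETEZZA_TO_BIGQUERY = {
    "VARCHAR": "STRING",
    "BOOLEAN": "BOOL",
    "TIME": "TIME",                 # BigQuery TIME carries no time zone
    "TIMETZ": "TIME",
    "TIME_WITH_TIME_ZONE": "TIME",
}

def convert_column(name: str, netezza_type: str) -> str:
    bq_type = NETEZZA_TO_BIGQUERY.get(netezza_type.upper(), "STRING")
    return f"{name} {bq_type}"

print(convert_column("order_date", "TIMETZ"))  # -> order_date TIME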
  • NoSQL Database Comparison: Bigtable, Cassandra and MongoDB

NoSQL Database Comparison: Bigtable, Cassandra and MongoDB. CJ Campbell, Brigham Young University, October 16, 2015.

Contents: Introduction; The Systems; Google Bigtable (History; Data model & operations; Physical storage; ACID properties; Scalability); Apache Cassandra (History; Data model & operations; Physical storage; ACID properties; Scalability); MongoDB (History; Data model & operations; Physical storage; ACID properties; Scalability); Differences; Conclusion; References.

Introduction: As distributed systems are adopted and grown to scale, the need for scalable database solutions which meet the application's exact need has become increasingly important. In the early days of computing, databases were almost entirely relational. Today, new breeds of database have emerged, called NoSQL databases. They are a common element in the grand design for most distributed software platforms. Each database is suited to a slightly different purpose from its peers. This paper discusses the features, similarities, and differences of three NoSQL databases: Google Bigtable, Apache Cassandra, and MongoDB.

The Systems: In this section, each of the three NoSQL databases is analyzed in depth, starting with Google Bigtable, then Apache Cassandra, and finally MongoDB. The analysis includes their history, data model, accepted operations, physical storage schema, ACID properties, and scalability.

Google Bigtable - History: Bigtable was designed within Google to meet their internal data processing needs at scale. It began development in 2004 as part of their effort to handle large amounts of data across applications such as web indexing, Google Earth, Google Finance and more (Google, Inc., 2006). It first went into production use in April 2005.
  • Google Cloud Security Whitepapers

Google Cloud Security Whitepapers, March 2018. The collection bundles four papers: Google Cloud Infrastructure Security Design Overview; Encryption at Rest in Google Cloud; Encryption in Transit in Google Cloud; and Application Layer Transport Security in Google Cloud.

Contents of the first paper (a technical whitepaper from Google Cloud): Introduction; Secure Low Level Infrastructure (Security of Physical Premises; Hardware Design and Provenance; Secure Boot Stack and Machine Identity); Secure Service Deployment (Service Identity, Integrity, and Isolation; Inter-Service Access Management; Encryption of Inter-Service Communication; Access Management of End User Data); Secure Data Storage (Encryption at Rest; Deletion of Data); Secure Internet Communication (Google Front End Service; Denial of Service (DoS) Protection; User Authentication); Operational Security (Safe Software Development; Keeping Employee Devices and Credentials Safe; Reducing Insider Risk; Intrusion Detection); Securing the Google Cloud Platform (GCP); Conclusion; Additional Reading.

The content contained herein is correct as of January 2017, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers.

CIO-level summary: Google has a global scale technical infrastructure designed to provide security through the entire information processing lifecycle at Google. This infrastructure provides secure deployment of services, secure storage of data with end user privacy safeguards, secure communications between services, secure and private communication with customers over the internet, and safe operation by administrators.
  • Building Applications with Google APIs (Ray Cromwell)

Building Applications with Google APIs. Ray Cromwell, June 1, 2009.

"There's an API for that": code.google.com shows 60+ APIs, covering the full spectrum (client, server, mobile, cloud) and application-oriented platforms (Android, OpenSocial). Does Google have a platform?

Application ecosystem (diagram residue; recoverable labels only): Client (REST/JSON, GWT); Server (Protocol Buffers, PHP, Java, Python, Ruby, JPA/JDO/other, MySQL); app services and media (Earth, O3D, Docs, Blogger, Spreadsheets, Maps/Geo, Translate, Base, Datastore, GViz, Search); social (OpenSocial, FriendConnect, GData, Contacts); utility and auth; monetization (AdSense, Checkout).

Timefire: store and index a large number of time series; a scalable charting engine; social collaboration; storytelling plus video/audio sync; like "Google Maps", but for "Time".

Android version: 98% shared code with the web version. Android offers a full API stack, tight integration with the WebKit browser, a local database, 2D and 3D APIs, and an external XML UI/layout system, which makes separating presentation from logic easier and benefits code sharing.

How was this done? Google Web Toolkit is the foundation: target the GWT JRE as the lowest common denominator, use Guice dependency injection for platform-specific APIs, and leverage the GWT 1.6 event system. The example app wires device/service JRE interfaces through Guice to Android-specific and browser-specific implementations. Shared widget events go through interfaces such as HasClickHandler with addClickHandler(injectedHandler), and Gin binds GwtHandlerImpl
  • Before We Start…

Before we start… This is the Introduction to Databases Design and Implementation workshop.
• Download material: dartgo.org/db-design
• Poll / interactive questions: dartgo.org/poll
• Optional software: https://dev.mysql.com/downloads/workbench/
• More info: rc.dartmouth.edu

Introduction to Database Design and Implementation. Christian Darabos, Ph.D. ([email protected]). Slides download: dartgo.org/db-design

Overview:
• introduction to databases and this workshop
• development/production environments
• tools (admin, browse, query, etc.)
• DB design, UML and a case study (http://www.datanamic.com/support/lt-dez005-introduction-db-modeling.html)
• porting the model into MySQL Workbench

Introduction: the Research Computing service offering, a definition of a relational database, and an overview of this workshop.

Definition of a relational database (SQL): a database type structured to recognize relations among stored items of information; designed to store text, dates/times, integers, and floating-point numbers; implemented as a series of tables.

Mental model: think of a database as a set of spreadsheets, where each spreadsheet (or table) represents a type of entity (person, object, concept, etc.). It is better than Excel because it also models the relationships between the entities.

Why use a relational database: concurrent (simultaneous) read and write; powerful selecting, filtering, and sorting with cross-referencing of tables; large quantities of structured storage with standardized distribution; minimal post-processing (simple analytics tools pre-implemented); automation from any scripting and programming language (R, Matlab, Python, C++, Java, PHP); web-proof. SQL vs.
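The spreadsheet mental model maps directly onto tables joined by a key column. A minimal, self-contained sketch using Python's built-in sqlite3 module (the schema is invented for illustration):

import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE pet (id INTEGER PRIMARY KEY,
                      owner_id INTEGER REFERENCES person(id),
                      name TEXT);
    INSERT INTO person VALUES (1, 'Ada');
    INSERT INTO pet VALUES (1, 1, 'Rex');
""")

# The relationship between the two "spreadsheets" is the owner_id column.
for row in con.execute("""
        SELECT person.name, pet.name
        FROM person JOIN pet ON pet.owner_id = person.id"""):
    print(row)  # -> ('Ada', 'Rex')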
  • web2py Dojo @PyCon 2009

web2py Dojo @PyCon 2009. Goal: write an application that allows you to post news (in markdown syntax), attach and link documents to news, restrict access, provide login, logout, and registration, and expose the posted news as multiple services. 29 (models) + 47 (controllers) + 45 (views) = 121 total lines of code. Type only the stuff in blue.

Part 0: Download web2py 1.59 from http://www.web2py.com, start it with "$ python web2py.py", and create an application called "news".

Part 1: Create a table "news_item" with fields title, body, and posted_on, then a table "document" with fields news_id (which references news_item), name, and an uploaded file. Both live in FILE: applications/news/models/db.py (the slides build the file up in two steps; it is shown once here):

try:
    from gluon.contrib.gql import *     # succeeds on Google App Engine
except:
    db = SQLDB('sqlite://storage.db')   # connect to SQLite
else:
    db = GQLDB()                        # connect to Google BigTable
session.connect(request, response, db=db)

db.define_table('news_item',
    db.Field('title', length=128),
    db.Field('body', 'text'),
    db.Field('posted_on', 'datetime'))

db.define_table('document',
    db.Field('news_id', db.news_item),
    db.Field('name', length=128),
    db.Field('file', 'upload'))

Try appadmin at http://127.0.0.1:8000/news/appadmin: try inserting some records, and try selecting some.
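The "try selecting some" step can also be done in code rather than through appadmin. A hypothetical sketch of a controller action using the same db object and the web2py DAL select syntax of that era (the file and function names are invented for illustration):

# applications/news/controllers/default.py (hypothetical)
def index():
    # Select every news_item row, oldest first, and hand it to the view.
    items = db(db.news_item.id > 0).select(orderby=db.news_item.posted_on)
    return dict(items=items)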