Betriebliche Informationssysteme: Grid-Basierte Integration Und Orchestrierung


Wilhelm Hasselbring (Ed.)

Final Report

The project underlying this report was funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01IG07005. Responsibility for the content of this publication lies with the author.

Preface

BIS-Grid started in May 2007 as one of the first purely commercially oriented projects in the second phase of the BMBF's D-Grid initiative. The goal of offering an integration and orchestration service via grid providing was highly innovative at the time and remains so today. During the project's lifetime, the concept of "cloud computing", not yet well established at the project's start, attracted increasing attention, and it became apparent that BIS-Grid belongs precisely to this new category of services. Traditionally, the grid focuses on scientific computation, while the cloud is operated predominantly by commercial providers. Meanwhile, the first commercial services have appeared that correspond to the "Orchestration as a Service" (OaaS) approach coined in the BIS-Grid project; examples are the Azure .NET Workflow Services, Iceberg on Demand, and Appian Anywhere.

In BIS-Grid we worked successfully across disciplines on the technical level (the BIS-Grid engine), the organizational level (cooperation and business models), and the empirical level (evaluation in industrial application scenarios). At the three annual Grid Workflow Workshops we disseminated our results and exchanged ideas with other projects. At the end of the project, the BIS-Grid engine is available as open source software. The conceptual, technical, and empirical results are documented for the professional community in this final report.
At this point, brief thanks go to all project participants from no fewer than eight companies and research institutions. Listing every contributor would exceed the scope of a preface; as their representative, however, Stefan Gudenkauf deserves mention, who always handled the not always easy organizational coordination of the joint project with great competence. Special thanks also go to the members of our industrial advisory board, who supported us with constructive advice in numerous meetings: Stephan Bögner, Franz Haberhauer, Gerd Kachel, Ingo Laue, Daniel Rolli, Frank Schulz, and Jörg Strebel. Finally, thanks are due to Roland Mader of the project management agency DLR-PT for his consistently competent and critical technical supervision, and to Ramona Schulze of DLR-PT for the consistently smooth administrative handling throughout the three-year project. Coordinating the BIS-Grid project scientifically was a great pleasure!

Kiel, September 2010
Wilhelm Hasselbring

Contents

1. Introduction
2. Short Description
   2.1. Objectives
   2.2. Prerequisites for Carrying Out the Project
   2.3. Planning and Course of the Project
   2.4. State of Science and Technology
   2.5. Cooperation with Third Parties
3. Detailed Presentation
   3.1. Results Achieved
   3.2. Advances in the Field by Third Parties That Became Known During the Project
      3.2.1. Cloud Computing
      3.2.2. Execution Environment for Workflows
   3.3. Publications
4. Software and Documentation
5. Project Deliverables
   5.1. Cooperation Models
   5.2. Business Model for Commercial Grid Providing
   5.3. Market Analysis
   5.4. Catalogue of WS-BPEL Design Patterns
   5.5. Specification of Security and Service Level Agreement Requirements in the Context of WS-BPEL
   5.6. WS-BPEL Engine Specification
   5.7. BIS-Grid Workflow Engine Prototype
   5.8. …
   5.9. Summary of the Evaluation Results for the CeWe Color Application Scenario
   5.10. Summary of the Evaluation Results for the KIESELSTEIN Application Scenario
   5.11. Integration into D-Grid
   5.12. Project Glossary

1. Introduction

Business information systems (BIS) support, among other things, the management of a company's resources (ERP systems), efficient customer relationship management (CRM systems), and the actual production or service delivery (PDM systems), to name only the typical main tasks. Building business information systems essentially means adapting and integrating existing and, where necessary, newly procured subsystems. Effective and efficient integration requires that the business processes be supported by workflow systems: the services of the subsystems are orchestrated. Enterprise application integration (EAI) has therefore gained great importance for implementing business processes, which within or between companies typically involve several information systems, through so-called orchestration in service-oriented architectures (SOA). Traditional EAI has so far relied primarily on commercial middleware systems such as Microsoft BizTalk or IBM WebSphere. Orchestration in SOA aims in particular at connecting the business domain effectively with the technology.

Since the introduction of grid services through corresponding standards (OGSA, WSRF), which build on web services technologies, grid technologies fit into service-oriented architectures (SOA). SOA is also of growing importance for EAI. On closer inspection, grid technologies and EAI have much in common, because both address integration problems in a heterogeneous environment: grid technologies at the resource level, EAI at the application level. For the integration of business information systems, the grid offers an efficient, secure, and reliable technology with which required resources can easily be included on demand and virtual organizations can be formed efficiently, something that has so far been difficult in this domain.

In the course of the project it became apparent that Orchestration as a Service (OaaS) is the most suitable approach for realizing the orchestration service in BIS-Grid. Making grid technologies available to the commercial world via OaaS therefore also became a goal of BIS-Grid. This goal is essentially shaped by the challenge of adapting a technology proven in science to the requirements of commercial actors. It quickly became clear that the terms grid (science) and cloud (industry) require closer examination, beyond marketing phrases, in order to identify points of contact between the two worlds. Over the last year, an intensive discussion of grid and cloud computing has taken place in both industry and science, which among other things has led to a consensus on the different conceptual layers of cloud computing, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). OaaS can be regarded here as a form of PaaS and SaaS. Previous work made particularly clear that in the commercial world the term grid computing is either regarded as a purely scientific term (in the sense of high-performance computing) or used as a synonym for Infrastructure as a Service (IaaS), i.e. one layer of cloud computing. Grid computing is not used there as a technological umbrella term, so in the context of commercial application, as in the market analysis for BIS-Grid, only the term cloud computing is used.

2. Short Description

2.1. Objectives

The project's task is to develop extensions for a horizontal service grid in the application domain of business information systems, on both the conceptual and the technical level. The aim is to use grid technologies to integrate decentralized information systems. On the conceptual level, new forms of cooperation and new business models are to be developed
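To make the notion of service orchestration described above concrete, the following is a minimal WS-BPEL 2.0 sketch of a process that receives a client request, invokes one subsystem service, and replies with the result. All names (namespaces, partner links, operations, message types) are illustrative assumptions for this example and are not taken from the BIS-Grid engine; a deployable process would additionally need a WSDL interface and a deployment descriptor.

```xml
<!-- Minimal WS-BPEL 2.0 process: receive a request, invoke one
     subsystem service, and reply with its result. All names are
     hypothetical placeholders. -->
<process name="OrderIntegration"
         targetNamespace="http://example.org/bis/order"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
         xmlns:tns="http://example.org/bis/order">
  <partnerLinks>
    <!-- The client that starts the orchestration -->
    <partnerLink name="client" partnerLinkType="tns:clientPLT"
                 myRole="orchestrator"/>
    <!-- One orchestrated subsystem, e.g. an ERP service -->
    <partnerLink name="erpSystem" partnerLinkType="tns:erpPLT"
                 partnerRole="provider"/>
  </partnerLinks>
  <variables>
    <variable name="request"  messageType="tns:orderRequest"/>
    <variable name="response" messageType="tns:orderResponse"/>
  </variables>
  <sequence>
    <!-- Instantiate the process on an incoming client message -->
    <receive partnerLink="client" operation="processOrder"
             variable="request" createInstance="yes"/>
    <!-- Call the subsystem service with the request payload -->
    <invoke partnerLink="erpSystem" operation="checkStock"
            inputVariable="request" outputVariable="response"/>
    <!-- Return the subsystem's answer to the client -->
    <reply partnerLink="client" operation="processOrder"
           variable="response"/>
  </sequence>
</process>
```

In a real integration scenario the sequence would contain several invoke activities against different subsystems, which is exactly the orchestration role the BIS-Grid engine takes on.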