Final Version of Technical White Paper

Total pages: 16 | File type: PDF | Size: 1020 KB

Big Data Technical Working Groups White Paper — BIG 318062

Project Acronym: BIG
Project Title: Big Data Public Private Forum (BIG)
Project Number: 318062
Instrument: CSA
Thematic Priority: ICT-2011.4.4

Deliverable: D2.2.2 Final Version of Technical White Paper
Work Package: WP2 Strategy & Operations
Due Date: 28/02/2014
Submission Date: 14/05/2014
Start Date of Project: 01/09/2012
Duration of Project: 26 Months
Organisation Responsible of Deliverable: NUIG
Version: 1.0
Status: Final

Author name(s): Edward Curry (NUIG), Panayotis Kikiras (AGT), Andre Freitas (NUIG), John Domingue (STIR), Andreas Thalhammer (UIBK), Nelia Lasierra (UIBK), Anna Fensel (UIBK), Marcus Nitzschke (INFAI), Axel Ngonga (INFAI), Michael Martin (INFAI), Ivan Ermilov (INFAI), Mohamed Morsey (INFAI), Klaus Lyko (INFAI), Philipp Frischmuth (INFAI), Martin Strohbach (AGT), Sarven Capadisli (INFAI), Herman Ravkin (AGT), Sebastian Hellmann (INFAI), Mario Lischka (AGT), Tilman Becker (DFKI), Jörg Daubert (AGT), Tim van Kasteren (AGT), Amrapali Zaveri (INFAI), Umair Ul Hassan (NUIG)

Reviewer(s): Amar Djalil Mezaour (EXALEAD), Helen Lippell (PA), Marcus Nitzschke (INFAI), Axel Ngonga (INFAI), Michael Hausenblas (NUIG), Klaus Lyko (INFAI), Tim Van Kasteren (AGT)

Nature: R – Report | P – Prototype | D – Demonstrator | O – Other
Dissemination level: PU – Public | CO – Confidential, only for members of the consortium (including the Commission) | RE – Restricted to a group specified by the consortium (including the Commission Services)

Project co-funded by the European Commission within the Seventh Framework Programme (2007-2013).

Revision history

Version | Date       | Modified by                                                  | Comments
0.1     | 25/04/2013 | Andre Freitas, Aftab Iqbal, Umair Ul Hassan, Nur Aini (NUIG) | Finalized the first version of the whitepaper
0.2     | 27/04/2013 | Edward Curry (NUIG)                                          | Review and content modification
0.3     | 27/04/2013 | Helen Lippell (PA)                                           | Review and corrections
0.4     | 27/04/2013 | Andre Freitas, Aftab Iqbal (NUIG)                            | Fixed corrections
0.5     | 20/12/2013 | Andre Freitas (NUIG)                                         | Major content improvement
0.6     | 20/02/2014 | Andre Freitas (NUIG)                                         | Major content improvement
0.7     | 15/03/2014 | Umair Ul Hassan                                              | Content contribution (human computation, case studies)
0.8     | 10/03/2014 | Helen Lippell (PA)                                           | Review and corrections
0.91    | 20/03/2014 | Edward Curry (NUIG)                                          | Review and content modification
0.92    | 06/05/2014 | Andre Freitas, Edward Curry (NUIG)                           | Added Data Usage and minor corrections
0.93    | 11/05/2014 | Axel Ngonga, Klaus Lyko, Marcus Nitzschke (INFAI)            | Final review
1.0     | 13/05/2014 | Edward Curry (NUIG)                                          | Corrections from final review

Copyright © 2012, BIG Consortium. The BIG Consortium (http://www.big-project.eu/) grants third parties the right to use and distribute all or parts of this document, provided that the BIG project and the document are properly referenced.

THIS DOCUMENT IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENT, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Table of Contents

1. Executive Summary
   1.1. Understanding Big Data
   1.2. The Big Data Value Chain
   1.3. The BIG Project
   1.4. Key Technical Insights
2. Data Acquisition
   2.1. Executive Summary
   2.2. Big Data Acquisition Key Insights
   2.3. Social and Economic Impact
   2.4. State of the Art
        2.4.1. Protocols
        2.4.2. Software Tools
   2.5. Future Requirements & Emerging Trends for Big Data Acquisition
        2.5.1. Future Requirements/Challenges
        2.5.2. Emerging Paradigms
   2.6. Sector Case Studies for Big Data Acquisition
        2.6.1. Health Sector
        2.6.2. Manufacturing, Retail, Transport
        2.6.3. Government, Public, Non-profit
        2.6.4. Telco, Media, Entertainment
        2.6.5. Finance and Insurance
   2.7. Conclusion
   2.8. References
   2.9. Useful Links
   2.10. Appendix
3. Data Analysis
   3.1. Executive Summary
   3.2. Introduction
   3.3. Big Data Analysis Key Insights
        3.3.1. General
        3.3.2. New Promising Areas for Research
        3.3.3. Features to Increase Take-up
        3.3.4. Communities and Big Data
        3.3.5. New Business Opportunities
   3.4. Social & Economic Impact
   3.5. State of the Art
        3.5.1. Large-scale: Reasoning, Benchmarking and Machine Learning
        3.5.2. Stream Data Processing
        3.5.3. Use of Linked Data and Semantic Approaches to Big Data Analysis
   3.6. Future Requirements & Emerging Trends for Big Data Analysis
        3.6.1. Future Requirements
        3.6.2. Emerging Paradigms
   3.7. Sector Case Studies for Big Data Analysis
        3.7.1. Public Sector
        3.7.2. Traffic
        3.7.3. Emergency Response
        3.7.4. Health
        3.7.5. Retail
        3.7.6. Logistics
        3.7.7. Finance
   3.8. Conclusions
   3.9. Acknowledgements
   3.10. […]
Recommended publications
  • Large-Scale Learning from Data Streams with Apache SAMOA
    Large-Scale Learning from Data Streams with Apache SAMOA
    Nicolas Kourtellis (Telefonica Research, Spain, [email protected]), Gianmarco De Francisci Morales (Qatar Computing Research Institute, Qatar, [email protected]), and Albert Bifet (LTCI, Télécom ParisTech, France, [email protected])

    Abstract. Apache SAMOA (Scalable Advanced Massive Online Analysis) is an open-source platform for mining big data streams. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage, and analyze, due to the time and memory complexity. Apache SAMOA provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Apache Flink, Apache Storm, and Apache Samza. Apache SAMOA is written in Java and is available at https://samoa.incubator.apache.org under the Apache Software License version 2.0.

    1 Introduction
    Big data are "data whose characteristics force us to look beyond the traditional methods that are prevalent at the time" [18]. For instance, social media are one of the largest and most dynamic sources of data. These data are not only very large due to their fine grain, but are also produced continuously. Furthermore, such data are nowadays produced by users in different environments and via a multitude of devices. For these reasons, data from social media and ubiquitous environments are perfect examples of the challenges posed by big data.
  • DSP Frameworks
    Università degli Studi di Roma "Tor Vergata", Dipartimento di Ingegneria Civile e Ingegneria Informatica — DSP Frameworks, Systems and Architectures for Big Data course, A.Y. 2017/18, Valeria Cardellini.

    DSP frameworks we consider:
    • Apache Storm (with lab)
    • Twitter Heron – from Twitter, like Storm and compatible with Storm
    • Apache Spark Streaming (lab) – reduces the size of each stream and processes streams of data (micro-batch processing)
    • Apache Flink
    • Apache Samza
    • Cloud-based frameworks: Google Cloud Dataflow, Amazon Kinesis Streams

    Apache Storm:
    • Open-source, real-time, scalable streaming system
    • Provides an abstraction layer to execute DSP applications
    • Initially developed by Twitter
    • Topology: a DAG of spouts (sources of streams) and bolts (operators and data sinks)

    Stream grouping in Storm — data parallelism in Storm: how are streams partitioned among multiple tasks (threads of execution)?
    • Shuffle grouping – randomly partitions the tuples
    • Field grouping – hashes on a subset of the tuple attributes
    • All grouping (i.e., broadcast) – replicates the entire stream to all the consumer tasks
    • Global grouping – sends the entire stream to a single task of a bolt
    • Direct grouping – the producer of the tuple decides which task of the consumer will receive the tuple

    Storm architecture: master-worker architecture.
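The grouping options above map directly onto Storm's Java topology API. The following is a minimal sketch, assuming the org.apache.storm package layout of Storm 1.x/2.x: TestWordSpout is the testing spout bundled with Storm, while WordCountBolt and all component names are illustrative choices for this example, not part of the slides or of Storm itself.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class GroupingExample {

    // Counts words. With fieldsGrouping on "word", every occurrence of a given
    // word is routed to the same task, so each task's local map stays consistent.
    public static class WordCountBolt extends BaseBasicBolt {
        private final Map<String, Long> counts = new HashMap<>();

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String word = input.getStringByField("word");
            long count = counts.merge(word, 1L, Long::sum);
            collector.emit(new Values(word, count));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word", "count"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // TestWordSpout ships with Storm and emits random words on the field "word".
        builder.setSpout("words", new TestWordSpout(), 2);

        // fieldsGrouping: hash-partition on "word" so the same word always reaches
        // the same WordCountBolt task. shuffleGrouping("words") would spread tuples
        // randomly instead, and allGrouping("words") would broadcast every tuple.
        builder.setBolt("count", new WordCountBolt(), 4)
               .fieldsGrouping("words", new Fields("word"));

        // Run the topology in-process for a short while; a real deployment would
        // use StormSubmitter.submitTopology against a cluster.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("grouping-demo", new Config(), builder.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```

Swapping fieldsGrouping for shuffleGrouping (or allGrouping) changes only how tuples are routed between spout and bolt tasks, which is exactly the design decision the grouping slides describe.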
  • Empirical Study on the Usage of Graph Query Languages in Open Source Java Projects
    Empirical Study on the Usage of Graph Query Languages in Open Source Java Projects
    Philipp Seifer, Johannes Härtel, Ralf Lämmel (University of Koblenz-Landau, Software Languages Team, Koblenz, Germany); Martin Leinberger (University of Koblenz-Landau, Institute WeST, Koblenz, Germany); Steffen Staab (University of Koblenz-Landau, Koblenz, Germany, and University of Southampton, Southampton, United Kingdom)

    Abstract. Graph data models are interesting in various domains, in part because of the intuitiveness and flexibility they offer compared to relational models. Specialized query languages, such as Cypher for property graphs or SPARQL for RDF, facilitate their use. In this paper, we present an empirical study on the usage of graph-based query languages in open-source Java projects on GitHub. We investigate the usage of SPARQL, Cypher, Gremlin and GraphQL in terms of popularity and their development over time. We select repositories based on dependencies related to these technologies and employ various popularity and source-code based filters and ranking features for a targeted selection of projects, including project and domain specific ones. Common application domains are management systems and data visualization tools.

    CCS Concepts: • General and reference → Empirical studies; • Information systems → Query languages; • Software and its engineering → Software libraries and repositories.
    Keywords: Empirical Study, GitHub, Graphs, Query Languages, SPARQL, Cypher, Gremlin, GraphQL.
    ACM Reference Format: Philipp Seifer, Johannes Härtel, Martin Leinberger, Ralf Lämmel, […]
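As a concrete illustration of how these query languages show up as library dependencies in Java code, here is a small hedged sketch that embeds a SPARQL query using Apache Jena (assuming Jena 3.x or later); the DBpedia endpoint and the queried resource are arbitrary examples chosen for this sketch, not data from the study.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class SparqlFromJava {
    public static void main(String[] args) {
        // A small SPARQL SELECT query; the queried resource is just an example.
        String queryString =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT ?label WHERE { " +
            "  <http://dbpedia.org/resource/Apache_Storm> rdfs:label ?label " +
            "} LIMIT 5";

        // Jena is one of the typical Java dependencies through which projects
        // embed SPARQL; the public DBpedia endpoint serves as a stand-in here.
        try (QueryExecution qexec = QueryExecutionFactory.sparqlService(
                "https://dbpedia.org/sparql", queryString)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getLiteral("label").getString());
            }
        }
    }
}
```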
  • Oracle Metadata Management V12.2.1.3.0 New Features Overview
    An Oracle White Paper, October 12th, 2018 — Oracle Metadata Management v12.2.1.3.0 New Features Overview

    Disclaimer: This document is for informational purposes. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole discretion of Oracle. This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. This document and information contained herein may not be disclosed, copied, reproduced, or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.

    Table of Contents: Executive Overview; Oracle Metadata Management 12.2.1.3.0; Metadata Manager vs Metadata Explorer UI; Metadata Home Pages; Metadata Quick Access; Metadata Reporting; […]
  • Apache Log4j 2 v. 2.4.1 User's Guide
    Apache Log4j 2 v. 2.4.1 User's Guide — The Apache Software Foundation, 2015-10-08

    Table of Contents: 1. Table of Contents; 2. Introduction; 3. Architecture; 4. Log4j 1.x Migration; 5. API; 6. Configuration; 7. Web Applications and JSPs; 8. Plugins; 9. Lookups; 10. Appenders; 11. Layouts; 12. Filters; 13. Async Loggers; 14. JMX; 15. Logging Separation; 16. Extending Log4j; 17. Programmatic Log4j Configuration; 18. Custom Log Levels

    1 Introduction
    1.1 Welcome to Log4j 2!
    1.1.1 Introduction
    Almost every large application includes its own logging or tracing API. In conformance with this rule, the E.U. […]
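To make the guide's subject concrete, here is a minimal usage sketch of the Log4j 2 API. It assumes log4j-api and log4j-core are on the classpath; with no log4j2.xml provided, Log4j 2 falls back to its default configuration, which prints ERROR-level messages to the console.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4j2Example {
    private static final Logger LOGGER = LogManager.getLogger(Log4j2Example.class);

    public static void main(String[] args) {
        // Parameterized messages avoid string concatenation when a level is disabled.
        LOGGER.info("Application started with {} argument(s)", args.length);

        try {
            throw new IllegalStateException("demo failure");
        } catch (IllegalStateException e) {
            // Logs the message together with the stack trace of the throwable.
            LOGGER.error("Something went wrong", e);
        }

        // Only emitted if the configured level is DEBUG or lower.
        LOGGER.debug("Debug detail");
    }
}
```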
  • Apache Sentry
    Apache Sentry
    Prasad Mujumdar, [email protected], [email protected]

    Agenda: various aspects of data security; Apache Sentry for authorization; key concepts of Apache Sentry; Sentry features; Sentry architecture; integration with the Hadoop ecosystem; Sentry administration; future plans; demo; questions.

    Who am I: software engineer at Cloudera; committer and PPMC member of Apache Sentry, also of Apache Hive and Apache Flume; part of the original team that started the Sentry work.

    Aspects of security:
    • Perimeter – Authentication (Kerberos, LDAP/AD)
    • Access – Authorization (what a user can do with data)
    • Visibility – Audit, lineage (data origin, usage)
    • Data – Encryption, masking (data access)

    Access/Authorization (what a user can do with data): provide user access to data, manage access policies, and provide role-based access.

    Apache Sentry (Incubating): a unified authorization module for Hadoop that unlocks key RBAC requirements — secure, fine-grained, role-based authorization; multi-tenant administration; enforcement of a common set of policies across multiple data access paths in Hadoop.

    Key capabilities of Sentry:
    • Fine-grained authorization – permissions on object hierarchies, e.g. database, table, columns
    • Role-based authorization – support for role templates to manage authorization for a large set of users and data objects
    • Multi-tenant administration
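Sentry's role-based policies for Hive are typically administered with SQL-style statements issued against HiveServer2. The sketch below is a hypothetical example using the Hive JDBC driver; the endpoint, credentials, role, group, and table names are placeholders, and the exact GRANT statements available depend on the Hive and Sentry versions in use.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SentryRoleSetup {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 endpoint; the Hive JDBC driver (hive-jdbc) must
        // be on the classpath and is discovered automatically by JDBC 4+.
        String url = "jdbc:hive2://hive.example.com:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "admin", "");
             Statement stmt = conn.createStatement()) {
            // Role-based model: roles collect privileges, groups collect users.
            stmt.execute("CREATE ROLE analyst");
            // Fine-grained privilege on a single table of the sales database.
            stmt.execute("GRANT SELECT ON TABLE sales.orders TO ROLE analyst");
            // Users inherit the role through their group membership.
            stmt.execute("GRANT ROLE analyst TO GROUP analysts");
        }
    }
}
```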
  • Integration of Tools for Decision Making in Vehicular Congestion
    DYNA (ISSN 0012-7353, Universidad Nacional de Colombia), vol. 85, no. 205, April–June 2018, pp. 363-370. DOI: https://doi.org/10.15446/dyna.v85n205.67745. Available at: https://www.redalyc.org/articulo.oa?id=49657889045

    Nelson Iván-Herrera-Herrera, Estevan Ricardo Gómez-Torres (Facultad de Ciencias de la Ingeniería e Industrias, Universidad Tecnológica Equinoccial, Quito, Ecuador); Sergio Luján-Mora (Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Alicante, Spain)

    Received: September 15th, 2017. Received in revised form: March 15th, 2018. Accepted: March 21st, 2018.

    Abstract. The purpose of this study is to present an analysis of the use and integration of technological tools that support decision making in situations of vehicular congestion, taking the city of Quito, Ecuador as a case study. The research is presented through the development of an application that uses Big Data tools (Apache Flume, Apache Hadoop, Apache Pig) to collect, store and process the large volumes of information involved.
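As a rough illustration of the kind of processing step the article attributes to Apache Pig, the following hedged sketch drives a small Pig Latin aggregation from Java via PigServer; the input file, its schema, and the use of local execution mode are assumptions made for this example, not details taken from the paper.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class TrafficAggregation {
    public static void main(String[] args) throws Exception {
        // Local mode keeps the example self-contained; ExecType.MAPREDUCE would
        // run the same script against a Hadoop cluster (e.g. data landed by Flume).
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Hypothetical CSV of traffic events: sensor id, timestamp, measured speed.
        pig.registerQuery(
            "events = LOAD 'traffic_events.csv' USING PigStorage(',') " +
            "AS (sensor_id:chararray, ts:chararray, speed:double);");
        pig.registerQuery("by_sensor = GROUP events BY sensor_id;");
        pig.registerQuery(
            "avg_speed = FOREACH by_sensor GENERATE group AS sensor_id, " +
            "AVG(events.speed) AS mean_speed;");

        // Writes one (sensor_id, mean_speed) record per sensor to the output directory.
        pig.store("avg_speed", "avg_speed_by_sensor");

        pig.shutdown();
    }
}
```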
  • Tracking Known Security Vulnerabilities in Third-Party Components
    Tracking Known Security Vulnerabilities in Third-Party Components
    Master's Thesis by Mircea Cadariu (student id 4252373, [email protected]), born in Brasov, Romania, submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. Software Engineering Research Group, Department of Software Technology, Faculty EEMCS, Delft University of Technology, Delft, the Netherlands (www.ewi.tudelft.nl); Software Improvement Group, Rembrandt Tower, 15th floor, Amstelplein 1 - 1096HA, Amsterdam, the Netherlands (www.sig.eu). © 2014 Mircea Cadariu. All rights reserved.

    Abstract. Known security vulnerabilities are introduced in software systems as a result of depending on third-party components. These documented software weaknesses are hiding in plain sight and represent the lowest hanging fruit for attackers. Despite the risk they introduce for software systems, it has been shown that developers consistently download vulnerable components from public repositories. We show that these downloads indeed find their way into many industrial and open-source software systems. In order to improve the status quo, we introduce the Vulnerability Alert Service, a tool-based process to track known vulnerabilities in software projects throughout the development process. Its usefulness has been empirically validated in the context of the external software product quality monitoring service offered by the Software Improvement Group, a software consultancy company based in Amsterdam, the Netherlands.

    Thesis Committee — Chair: Prof. Dr. A. van Deursen, Faculty EEMCS, TU Delft; University supervisor: Prof. Dr. A. […]
  • Vimal Daga, Chief Technical Officer (CTO) – LinuxWorld Informatics Pvt Ltd: Professional Experience & Certifications
    Vimal Daga, Chief Technical Officer (CTO) – LinuxWorld Informatics Pvt Ltd
    Professional Experience & Certifications

    I. Professional Experience
    During this period, he has been engaged with various corporate clients across different domains, delivering corporate training programmes and consultancy covering the following technologies:

    A. Sr. Machine Learning / Deep Learning / Data Science / NLP Consultant and Researcher
    • Expertise in artificial intelligence, deep learning, and computer vision, with the ability to solve problems such as face detection, face recognition and object detection using deep neural networks (CNN, DNN, RNN, convolutional networks, etc.) and optical character detection and recognition (OCD & OCR).
    • Worked with tools such as TensorFlow, Caffe/Caffe2, Keras, Theano, PyTorch, etc.; built prototypes for deep learning problems in the field of computer vision.
    • Publications at top international conferences/journals in fields related to computer vision, deep learning, machine learning and AI.
    • Experience with tools and frameworks such as Microsoft Azure ML, Chat Bot Framework/LUIS, IBM Watson Conversation Service, Google TensorFlow, Python for machine learning (e.g. scikit-learn), and open-source ML libraries and tools like Apache Spark.
    • Worked extensively on data science, big data, data structures, statistics, and algorithms such as regression and classification; working knowledge of supervised and unsupervised learning (decision trees, logistic regression, SVMs, GBM, etc.).
    • Expertise in sentiment analysis, entity extraction, natural language understanding (NLU) and intent recognition.
    • Strong understanding of text pre-processing and normalization techniques such as tokenization, POS tagging and parsing, how they work at a basic level, and NLP toolkits such as NLTK, Gensim, SpaCy, Apache UIMA, etc.
    • Hands-on experience with datasets including text, images and other logs or clickstreams.
  • Assessment of Multiple Ingest Strategies for Accumulo Key-Value Store
    Assessment of Multiple Ingest Strategies for Accumulo Key-Value Store
    by Hai Pham. A thesis submitted to the Graduate Faculty of Auburn University in partial fulfillment of the requirements for the Degree of Master of Science. Auburn, Alabama, May 7, 2016. Keywords: Accumulo, noSQL, ingest. Copyright 2016 by Hai Pham.
    Approved by: Weikuan Yu, Co-Chair, Associate Professor of Computer Science, Florida State University; Saad Biaz, Co-Chair, Professor of Computer Science and Software Engineering, Auburn University; Sanjeev Baskiyar, Associate Professor of Computer Science and Software Engineering, Auburn University.

    Abstract. In recent years, the emergence of heterogeneous data, especially of the unstructured type, has been extremely rapid. Data growth happens concurrently in three dimensions: volume (size), velocity (growth rate) and variety (many types). This emerging trend has opened a broad new area of research, widely accepted as Big Data, which focuses on how to acquire, organize and manage huge amounts of data effectively and efficiently. When coping with such Big Data, the traditional approach using RDBMSs has been inefficient; this problem motivated a more efficient class of systems known as noSQL. This thesis gives an overview of noSQL systems and then delves into a more specific instance of them, the Accumulo key-value store. Furthermore, since Accumulo is not designed with an ingest interface for users, this thesis focuses on investigating various methods for ingesting data, improving ingest performance, and dealing with the numerous parameters affecting this process.

    Acknowledgments. First and foremost, I would like to express my profound gratitude to Professor Yu, who with great kindness and patience has guided me through not only every aspect of computer science research but also many great directions towards my personal issues.
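For orientation, here is a hedged sketch of the most basic ingest path such a study can compare against: writing mutations through a client-side BatchWriter. It assumes the Accumulo 1.x client API, and the instance name, ZooKeeper host, table name, and credentials are placeholders.

```java
import java.nio.charset.StandardCharsets;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

public class BatchWriterIngest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for an Accumulo 1.x instance.
        ZooKeeperInstance instance = new ZooKeeperInstance("accumulo", "zk1.example.com:2181");
        Connector connector = instance.getConnector("ingest_user", new PasswordToken("secret"));

        String table = "events";
        if (!connector.tableOperations().exists(table)) {
            connector.tableOperations().create(table);
        }

        // The BatchWriter buffers mutations client-side and flushes them to tablet
        // servers in batches; buffer size and write threads are the main knobs
        // that ingest benchmarks tune.
        BatchWriterConfig config = new BatchWriterConfig()
                .setMaxMemory(16 * 1024 * 1024L)
                .setMaxWriteThreads(4);

        BatchWriter writer = connector.createBatchWriter(table, config);
        for (int i = 0; i < 1_000; i++) {
            Mutation m = new Mutation(String.format("row_%06d", i));
            m.put("cf", "payload", new Value(("value-" + i).getBytes(StandardCharsets.UTF_8)));
            writer.addMutation(m);
        }
        writer.close(); // flushes any remaining buffered mutations
    }
}
```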
  • ChainSys Platform – Technical Architecture (Bots)
    Technical Architecture

    Objectives. ChainSys' Smart Data Platform enables the business to achieve these critical needs: (1) empower the organization to be data-driven; (2) solve all your data management problems; (3) world-class innovation at an accessible price.

    Subash Chandar Elango, Chief Product Officer, ChainSys Corporation. Subash's expertise in the data management sphere is unparalleled. As the creative and technical brain behind ChainSys' products, no problem is too big for Subash, and he has been part of hundreds of data projects worldwide.

    Introduction. This document describes the technical architecture of the ChainSys Platform.

    Purpose. The purpose of this technical architecture is to define the technologies, products, and techniques necessary to develop and support the system, and to ensure that the system components are compatible and comply with the enterprise-wide standards and direction defined by the Agency.

    Scope. The document's scope is to identify and explain the advantages and risks inherent in this technical architecture. This document is not intended to address the installation and configuration details of the actual implementation; installation and configuration details are provided in technology guides produced during the project.

    Audience. The intended audience for this document is project stakeholders, technical architects, and deployment architects.

    Architecture goals. The system's overall architecture goals are to provide a highly available, scalable, and flexible data management platform. A key architectural goal is to leverage industry best practices to design and develop a scalable, enterprise-wide J2EE application and to follow industry-standard development guidelines. All aspects of security must be developed and built within the application and be based on best practices.
  • Return of Organization Exempt From Income Tax (Form 990)
    Form 990 — Return of Organization Exempt From Income Tax (OMB No. 1545-0047)
    Under section 501(c), 527, or 4947(a)(1) of the Internal Revenue Code (except black lung benefit trust or private foundation). Department of the Treasury, Internal Revenue Service. Open to Public Inspection. The organization may have to use a copy of this return to satisfy state reporting requirements.

    For the 2011 calendar year, or tax year beginning 5/1/2011 and ending 4/30/2012
    Name of organization: The Apache Software Foundation
    Employer identification number: 47-0825376
    Number and street: 1901 Munsey Drive
    City or town, state or country, and ZIP + 4: Forest Hill, MD 21050-2747
    Telephone number: (909) 374-9776
    Gross receipts: $554,439
    Name and address of principal officer: Jim Jagielski, 1901 Munsey Drive, Forest Hill, MD 21050-2747
    Is this a group return for affiliates? No
    Tax-exempt status: 501(c)(3)
    Website: http://www.apache.org/
    Form of organization: Corporation
    Year of formation: 1999
    State of legal domicile: MD

    Part I — Summary
    1. Briefly describe the organization's mission or most significant activities: to provide open source software to the public that we sponsor free of charge.
    2. Check this box if the organization discontinued its operations or disposed of more than 25% of its net assets.