Porm Documentation Release 0.0


José Manuel González del Campo

Dec 25, 2017

Contents

1 Enter Porm
2 Why Porm?
3 Project status
4 Documentation
5 License

CHAPTER 1
Enter Porm

Porm is an Object Relational Mapper for Python with a focus on performance. It derives from the Storm ORM originally developed by Canonical. Development of Storm stalled in 2013, and its last version was only compatible with Python 2. Porm is an effort to resurrect the project and give it an updated, modern and cozy new home on GitHub and Read the Docs. I believe the project deserves it: it was one of the most performance-focused ORMs Python had and, furthermore, it has been thoroughly tested in production, having been used in Launchpad, among other Canonical projects.

CHAPTER 2
Why Porm?

You may know that there are already several ORMs, SQLAlchemy and Peewee probably being the most recognized among them. So the first and obvious question is: why Porm? My aim here is not a marketing exercise, but a transparent source of information.

The main reason I took on the work of rescuing, evolving and maintaining Porm is that I needed an ORM for a project, and when I looked around, I concluded that the best supported and documented options were the two ORMs mentioned above. But SQLAlchemy was overkill, I found its performance nothing to shout about, and its codebase was too large for me to get a feel for what was happening behind the scenes in a reasonable amount of time (you may argue that doing so would have taken me less time than this effort, but it would have been less exciting, at least for me).

Then I looked at Peewee. I have to admit that I admire the project: it has a lean one-file (plus extensions) codebase, great documentation, support for almost anything you may want and a very easy API to work with. But although it includes a good deal of Cython optimization, I was almost certain that more could be done.

And then I found the Storm ORM, an old project by Canonical, implemented in Python 2, that the company used in all of its projects. The codebase looked clean, and although I realized it lacked some functionality that I had found in Peewee and would like to have, by the time I might have turned around and started working with Peewee, I had already translated half of the Storm codebase and was very familiar with it. One of the things I liked most, and which weighed heavily in my decision, was the size and quality of its pure C optimization code. So I decided to commit myself not only to translating and maintaining it, but also to improving it where I can.

• Porm is fast, thanks to its extension written in pure C.
• Its clean and lightweight API offers a short learning curve and long-term maintainability.
• Porm is developed in a test-driven manner; an untested line of code is considered a bug. This means roughly 14,000 lines of application code versus 22,000 lines of test code.
• Porm needs no special class constructors, nor imperative base classes.
• It is very easy to write and support backends for Porm (the current backends have around 100 lines of code).
• Porm handles relationships between objects even before they are added to a database, using a cache system that improves performance (at least in theory, and where network connections are needed).
• Porm flushes changes to the database automatically when needed, so that queries affect recently modified objects.
• Porm can handle "obj.attr = <SQL expression>" assignments when that is really needed (the expression is executed at INSERT/UPDATE time).
• Porm lets you fall back to SQL if needed (or if you just prefer it), allowing you to mix "old school" code and ORM code.
• Porm handles composite primary keys with ease (no need for surrogate keys).
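To make the list above concrete, here is a minimal sketch of a Porm model and store session. It is not taken from the Porm documentation itself: it assumes that Porm preserves the public API of the Storm ORM it derives from, and the import path porm.locals in particular is a guess. Treat it as an illustration of the features above, not as authoritative Porm code.

    # A minimal sketch, assuming Porm keeps Storm's public API;
    # the import path "porm.locals" is a guess.
    from porm.locals import create_database, Store, Int, Unicode, Reference

    class Company:
        # No special base class or constructor is required.
        __storm_table__ = "company"
        id = Int(primary=True)
        name = Unicode()

    class Employee:
        __storm_table__ = "employee"
        # Composite primary key: no surrogate key needed.
        __storm_primary__ = "company_id", "badge"
        company_id = Int()
        badge = Int()
        name = Unicode()
        company = Reference(company_id, Company.id)

    db = create_database("sqlite:")  # in-memory SQLite database
    store = Store(db)
    store.execute("CREATE TABLE company "
                  "(id INTEGER PRIMARY KEY, name TEXT)")
    store.execute("CREATE TABLE employee "
                  "(company_id INTEGER, badge INTEGER, name TEXT, "
                  "PRIMARY KEY (company_id, badge))")

    # Relationships work before either object is in the database:
    # on flush, the company is inserted first and its generated id
    # is propagated to employee.company_id.
    company = Company()
    company.name = "ACME"
    employee = Employee()
    employee.badge = 1
    employee.name = "Ada"
    employee.company = company
    store.add(company)
    store.add(employee)

    # find() flushes pending changes automatically, so the new row is
    # already visible to this query.
    ada = store.find(Employee, Employee.name == "Ada").one()

    # Falling back to raw SQL is always possible.
    total = store.execute("SELECT COUNT(*) FROM employee").get_one()[0]

Note how neither class needs a constructor or a base class, and how the composite key (company_id, badge) is declared directly, both points from the list above.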
CHAPTER 3
Project status

The current version is 0.3 and, until the release of version 0.4, Porm has beta status. That means there may be some loose ends here and there, and that some modules do not work completely. However, Porm follows a test-oriented development approach (inherited from the original Storm ORM project), under which every line of code is tested, with very few exceptions. The code released as 0.3 passes all the unit tests. So, what are those loose ends I was talking about? Well, check it out:

• The MySQL and PostgreSQL backends have been translated to Python 3 and pass all the necessary tests, but no further checks have been made to determine backend compatibility (and the last update of Storm ORM was five years ago). So it may happen that some new functionality is not supported, or that changes in the underlying drivers have broken these backends (SQLite, however, is fully tested and up to date).
• The documentation is not complete. Apart from the automatically generated API documentation and the tutorial (which are complete), the rest of the documentation is still in the making, so you may have trouble finding some answers. In any case, you can contact me directly through the GitHub page: just open an issue.
• Django integration is in a similar situation to the MySQL and PostgreSQL backends: it passes the tests, but it has not been checked against the current Django API.
• There is no integration with Flask yet, nor with any other web framework.

CHAPTER 4
Documentation

As stated, the main source of information is this tutorial, and although the manual is not ready yet, you may already find what you are looking for there. Other useful links are:

• The module structure: something you may want to look at before diving into the API or the code.
• The API documentation.
• Infoheritance: a common Porm design pattern.
• The FAQ.
• The roadmap.
• Contributing guidelines.
• The code.

CHAPTER 5
License

DISTRIBUTED UNDER THE GNU LESSER GENERAL PUBLIC LICENSE V2.1

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Note: the original project had a clause whereby "all contributions must have copyright assigned to Canonical".
Obvi- ously, that’s not longer required. You can find its full text here. Note: My first language is not English, and that may be depicted with more or less intensity along the documentation, depending on how much inspired and focused I am at the moment of writing. If you, during lecture, feel aggrieved for a neglected and maybe insulting use of the English language and you have some spare time, please, it would be highly appreciated if you could issue a pull request to fix all the problems you have detected. 9.