An Evaluation of Key-Value Stores in Scientific Applications

Total Pages: 16

File Type: PDF, Size: 1020 KB

An Evaluation of Key-Value Stores in Scientific Applications

A Thesis Presented to the Faculty of the Department of Computer Science, University of Houston, in Partial Fulfillment of the Requirements for the Degree Master of Science. By Sonia Shirwadkar, May 2017.

APPROVED:
Dr. Edgar Gabriel, Chairman, Dept. of Computer Science, University of Houston
Dr. Weidong Shi, Dept. of Computer Science, University of Houston
Dr. Dan Price, Honors College, University of Houston
Dean, College of Natural Sciences and Mathematics

Acknowledgments

"No one who achieves success does so without the help of others. The wise acknowledge this help with gratitude." - Alfred North Whitehead

Although I have a long way to go before I am wise, I would like to take this opportunity to express my deepest gratitude to all the people who have helped me in this journey.

First and foremost, I would like to thank Dr. Gabriel for being a great advisor. I appreciate the time, effort, and ideas that you have invested to make my graduate experience productive and stimulating. The joy and enthusiasm you have for research were contagious and motivational for me, even during tough times. You have been an inspiring teacher and mentor, and I would like to thank you for the patience, kindness, and humor that you have shown. Thank you for guiding me at every step and for the incredible understanding you showed when I came to you with my questions. It has indeed been a privilege working with you.

I would like to thank Dr. Shi and Dr. Price for agreeing to be my committee members. I truly appreciate the time and effort you spent in reviewing my thesis and providing valuable feedback.

A special thanks to my PSTL lab-mates Shweta, Youcef, Tanvir, and Raafat. You have contributed immensely to my personal and professional time at the University of Houston. The last nine months have been a joy mainly because of the incredible work environment in the lab. Thank you for being great friends and for all the encouragement that you have given me.

A big thanks to Hope Queener and Jason Marsack at the College of Optometry for teaching me the value of teamwork and work ethic. I truly enjoyed working with you.

I have been extremely fortunate to have the constant support, guidance, and faith of my friends. A big thank you to all my friends in India for constantly motivating me to follow my dreams. Thank you for the late-night calls, care packages, and all the love that you have given me in the time that I have been away from home. I would like to thank my friends Omkar, Tejus, Sneha, Sonal, Aditya, and Shweta for being my family away from home. I will forever be grateful for the constant assurance and encouragement that you gave me. I would also like to thank my friends, classmates, and roommates here in Houston for all their help and support.

A special thanks to all my teachers. I would not be here if not for the wisdom that you have shared. You have empowered me to chase my dreams. Each one of you has taught me important life lessons that have always guided me. I will be eternally grateful to have been your student.

Last but by no means least, I would like to thank my family for always being there for me. I would like to start by thanking my Mom and Dad for their unconditional love and support. A very big thank you to Kaka and Kaku for all their love, concern, and advice.
You all have taught me the beauty of hard work and perseverance, and this thesis would never have been possible without you. Finally, I would like to thank Parikshit for being my greatest source of motivation. You inspire me every day to be a better version of myself, and I would never have made it without you.

Abstract

Big data analytics is a rapidly evolving multidisciplinary field that involves the use of computing capacity, tools, techniques, and theories to solve scientific and engineering problems. With the big data boom, scientific applications now have to analyze huge volumes of data. NoSQL [1] databases are gaining popularity for these types of applications due to their scalability and flexibility. There are various types of NoSQL databases available in the market today [2], including key-value databases. Key-value databases [3] are the simplest NoSQL databases: every item is stored as a key-value pair. In-memory key-value stores are specialized key-value databases that maintain data in main memory instead of on disk. Hence, they are well suited for applications with high frequencies of alternating read and write cycles.

The focus of this thesis is to analyze popular in-memory key-value stores and compare their performance. We have performed the comparisons based on parameters such as in-memory caching support, supported programming languages, scalability, and utilization from parallel applications. Based on the initial comparisons, we evaluated two key-value stores in detail, namely Memcached [4] and Redis [5]. To perform an extensive analysis of these two data stores, a set of micro-benchmarks has been developed and evaluated for both Memcached and Redis. Tests were performed to evaluate scalability, responsiveness, and data-load handling capacity, and Redis outperformed Memcached in all test cases. To further analyze the in-memory caching ability of Redis, we integrated it as a caching layer into an air quality simulation [6] based on Hadoop [7] MapReduce [8], which calculates the eight-hour rolling average of ozone concentration at various sites in Houston, TX. Our aim was to compare the performance of the original air-quality application, which uses the disk for data storage, to our application, which uses in-memory caching. Initial results show that there is no performance gain achieved by integrating Redis as a caching layer. Further optimizations and configurations of the code are reserved for future work.

Contents

1 Introduction
  1.1 Brief Overview of Key-Value Data Stores
  1.2 Goals of this Thesis
  1.3 Organization of this Document
2 Background
  2.1 In-memory Key-value Stores
    2.1.1 Redis
    2.1.2 Memcached
    2.1.3 Riak
    2.1.4 Hazelcast
    2.1.5 MICA (Memory-store with Intelligent Concurrent Access)
      2.1.5.1 Parallel Data Access
      2.1.5.2 Network Stack
      2.1.5.3 Key-value Data Structures
    2.1.6 Aerospike
    2.1.7 Comparison of Key-Value Stores
  2.2 Brief Overview of Message Passing Interface (MPI)
  2.3 Brief Overview of MapReduce Programming and Hadoop Eco-system
    2.3.1 Integration of Key-Value Stores in Hadoop
3 Analysis and Results
  3.1 MPI Micro-benchmark
    3.1.1 Description of the Micro-benchmark Applications
      3.1.1.1 Technical Data
    3.1.2 Comparison of Memcached and Redis using our Micro-benchmark
      3.1.2.1 Varying the Number of Client Processes
        3.1.2.1.1 Using Values of Size 1 KB
        3.1.2.1.2 Using Values of Size 32 KB
      3.1.2.2 Varying the Number of Server Instances
      3.1.2.3 Varying the Size of the Value
      3.1.2.4 Observations and Final Conclusions
  3.2 Air-quality Simulation Application
  3.3 Integration of Redis in Hadoop
    3.3.1 Technical Data
  3.4 Results and Comparison
4 Conclusions and Outlook
Bibliography

List of Figures

1.1 Key-value pairs
2.1 Redis Cluster
2.2 Redis in a Master-Slave Architecture
2.3 Memcached Architecture
2.4 Riak Ring Architecture
2.5 Hazelcast In-memory Computing Architecture
2.6 Hazelcast Architecture
2.7 MICA Approach
2.8 Aerospike Architecture
2.9 Word Count Using Hadoop MapReduce
3.1 Time Taken to Store and Retrieve Data When the Number of Client Processes is Varied
3.2 Time Taken to Retrieve Data When the Number of Client Processes is Varied
3.3 Time Taken to Store and Retrieve Data When the Number of Servers is Varied
3.4 Time Taken to Store and Retrieve Data When the Value Size is Varied
3.5 Customized RecordWriter to Read in Data from Redis
3.6 Customized RecordReader to Write Data to Redis
3.7 Comparison of Execution Times (in minutes) for Air-quality Applications Using HDFS and Redis

List of Tables

2.1 Summary of features of key-value stores
3.1 Time taken to store and retrieve data when the number of client processes is varied
3.2 Time taken to store and retrieve data when the number of client processes is varied
3.3 Time taken to store and retrieve data when the number of servers is varied
3.4 Time taken to store and retrieve data when the size of the value is varied
3.5 Time taken to execute original air-quality application …
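The micro-benchmarks summarized in the abstract time how long client processes take to store values into and retrieve them from a key-value store. As a rough, hypothetical illustration of that store/retrieve pattern (not the thesis code, which coordinates many clients through MPI and also targets Memcached), here is a minimal single-client sketch against Redis using the redis-py client; key names, value size, and key count are arbitrary choices:

    import time
    import redis  # assumes the redis-py client and a reachable Redis server

    def store_retrieve_benchmark(host="localhost", port=6379,
                                 num_keys=10_000, value_size=1024):
        # Time bulk SET and then bulk GET operations against one Redis instance.
        client = redis.Redis(host=host, port=port)
        value = b"x" * value_size  # e.g. 1 KB values, as in one of the thesis test cases

        start = time.perf_counter()
        for i in range(num_keys):
            client.set(f"bench:key:{i}", value)
        store_time = time.perf_counter() - start

        start = time.perf_counter()
        for i in range(num_keys):
            client.get(f"bench:key:{i}")
        retrieve_time = time.perf_counter() - start
        return store_time, retrieve_time

    if __name__ == "__main__":
        store, retrieve = store_retrieve_benchmark()
        print(f"store: {store:.3f}s  retrieve: {retrieve:.3f}s")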
Recommended publications
  • Redis and Memcached
Redis and Memcached. Speaker: Vladimir Zivkovic, Manager, IT. June 2019.

Problem Scenario: • Web site users wanting to access data extremely quickly (< 200 ms) • Data being shared between different layers of the stack • Cache web page sessions • Research and test the feasibility of using Redis as a solution for storing and retrieving data quickly • Load data into Redis to test ETL feasibility and performance • Goal: get sub-second response for API calls retrieving data

Why Redis: • In-memory key-value store, with persistence • Open source • Written in C • "It can handle up to 2^32 keys, and was tested in practice to handle at least 250 million of keys per instance." - http://redis.io/topics/faq • Most popular key-value store - http://db-engines.com/en/ranking

History: • REmote DIctionary Server • Released in 2009 • Built in order to scale a website: http://lloogg.com/ • The web application of lloogg was an Ajax app to show the site traffic in real time. It needed a DB handling fast writes and a fast "get latest N items" operation.

Redis data types: • Strings • Bitmaps • Lists • HyperLogLogs • Sets • Geospatial indexes • Sorted sets • Hashes

Redis protocol: • redis["key"] = "value" • Values can be strings, lists or sets • Push and pop elements (atomic) • Fetch arbitrary set and array elements • Sorting • Data is written to disk asynchronously

Memory footprint: • An empty instance uses ~3 MB of memory. • 1 million small key => string value pairs use ~85 MB of memory. • 1 million keys => hash value, representing an object with 5 fields,
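The data types and protocol operations listed in this excerpt map directly onto client commands. A small illustrative sketch using the redis-py client (version 3.5 or later is assumed; the key names and values are invented for the example):

    import redis  # assumes redis-py >= 3.5 and a local Redis server

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Strings: the basic redis["key"] = "value" mapping
    r.set("page:home:title", "Welcome")
    print(r.get("page:home:title"))

    # Lists: push and pop elements atomically, fetch arbitrary ranges
    r.lpush("recent:visits", "user:1", "user:2")
    print(r.lrange("recent:visits", 0, -1))

    # Sets and sorted sets
    r.sadd("online:users", "alice", "bob")
    r.zadd("leaderboard", {"alice": 42, "bob": 17})
    print(r.zrange("leaderboard", 0, -1, withscores=True))

    # Hashes: an object with a few fields
    r.hset("user:1", mapping={"name": "Alice", "city": "Houston"})
    print(r.hgetall("user:1"))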
  • Modeling and Analyzing Latency in the Memcached System
Modeling and Analyzing Latency in the Memcached System. Wenxue Cheng, Fengyuan Ren, Tong Zhang (Tsinghua National Laboratory for Information Science and Technology, and Department of Computer Science and Technology, Tsinghua University, Beijing, China), Wanchun Jiang (School of Information Science and Engineering, Central South University, Changsha, China). March 27, 2017.

Abstract: Memcached is a widely used in-memory caching solution in large-scale searching scenarios. The most pivotal performance metric in Memcached is latency, which is affected by various factors including the workload pattern, the service rate, the unbalanced load distribution and the cache miss ratio. To quantify the impact of each factor on latency, we establish a theoretical model for the Memcached system. Specifically, we formulate the unbalanced load distribution among Memcached servers by a set of probabilities, capture the burst and concurrent key arrivals at Memcached servers in the form of batching blocks, and add a cache miss processing stage. Based on this model, algebraic derivations are conducted to estimate latency in Memcached. The latency estimation is validated by intensive experiments. Moreover, we obtain a quantitative understanding of how much improvement of latency performance can be achieved by optimizing each factor, and we provide several useful recommendations for optimizing latency in Memcached. Keywords: Memcached, Latency, Modeling, Quantitative Analysis.

1 Introduction. Memcached [1] has been adopted in many large-scale websites, including Facebook, LiveJournal, Wikipedia, Flickr, Twitter and YouTube. In Memcached, a web request will generate hundreds of Memcached keys that will be further processed in the memory of parallel Memcached servers. With this parallel in-memory processing method, Memcached can extensively speed up and scale up searching applications [2].
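The abstract above identifies service rate, unbalanced load distribution, and cache miss ratio as the main latency factors. Purely as a generic back-of-envelope illustration of how such factors can combine - this is a textbook M/M/1 approximation, not the queueing model derived in the paper - one could estimate mean latency like this:

    def expected_latency(arrival_rate, service_rate, server_probs,
                         miss_ratio, backend_penalty):
        # Generic M/M/1-style estimate, for illustration only (not the paper's model).
        # arrival_rate: total key arrival rate (keys/s); service_rate: per-server rate (keys/s)
        # server_probs: routing probabilities modelling an unbalanced load distribution
        # miss_ratio / backend_penalty: cache miss fraction and the extra time a miss costs (s)
        latency = 0.0
        for p in server_probs:
            load = arrival_rate * p
            if load >= service_rate:
                raise ValueError("server overloaded; approximation not applicable")
            latency += p * (1.0 / (service_rate - load))  # M/M/1 mean response time
        return latency + miss_ratio * backend_penalty

    # Example: 4 servers with skewed routing and a 5% miss ratio
    print(expected_latency(arrival_rate=200_000, service_rate=100_000,
                           server_probs=[0.4, 0.3, 0.2, 0.1],
                           miss_ratio=0.05, backend_penalty=0.002))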
  • Beyond Relational Databases
EXPERT ANALYSIS BY MARCOS ALBE, SUPPORT ENGINEER, PERCONA

Beyond Relational Databases: A Focus on Redis, MongoDB, and ClickHouse. Many of us use and love relational databases… until we try and use them for purposes which aren't their strong point. Queues, caches, catalogs, unstructured data, counters, and many other use cases can be solved with relational databases, but are better served by alternative options. In this expert analysis, we examine the goals, pros and cons, and the good and bad use cases of the most popular alternatives on the market, and look into some modern open source implementations.

Beyond Relational Databases: Developers frequently choose the backend store for the applications they produce. Amidst dozens of options, buzzwords, industry preferences, and vendor offers, it's not always easy to make the right choice… even with a map!

[Figure: a "map" graphic of the database and data-platform landscape; the product names are not recoverable from the extracted text.]
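One of the use cases called out above - queues and counters - is easy to sketch with a key-value store instead of a relational table. A minimal, illustrative example using the redis-py client (key and queue names are invented):

    import redis  # assumes redis-py and a local Redis server

    r = redis.Redis(decode_responses=True)

    # Counter: one atomic increment instead of an UPDATE ... SET n = n + 1 round trip
    r.incr("page:home:views")

    # Work queue: producers LPUSH jobs, workers block on BRPOP
    r.lpush("jobs", "resize-image:42")
    item = r.brpop("jobs", timeout=5)   # (queue_name, payload), or None on timeout
    if item:
        _, job = item
        print("processing", job)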
  • Celery Documentation, Release 2.5.5 - Ask Solem & Contributors
Celery Documentation, Release 2.5.5. Ask Solem & Contributors. February 04, 2014.

Getting Started. Release 2.5, Date: February 04, 2014.

1.1 Introduction. Version 2.5.5. Web: http://celeryproject.org/ Download: http://pypi.python.org/pypi/celery/ Source: http://github.com/celery/celery/ Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis, python, webhooks, queue, distributed.

Contents: • Synopsis • Overview • Example • Features • Documentation • Installation (Bundles; Downloading and installing from source; Using the development version)

1.1.1 Synopsis. Celery is an open source asynchronous task queue/job queue based on distributed message passing. Focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on one or more worker nodes using multiprocessing, Eventlet or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready). Celery is used in production systems to process millions of tasks every hour.

Celery is written in Python, but the protocol can be implemented in any language. It can also operate with other languages using webhooks. There's also RCelery for the Ruby programming language, and a PHP client. The recommended message broker is RabbitMQ, but support for Redis, MongoDB, Beanstalk, Amazon SQS, CouchDB and databases (using SQLAlchemy or the Django ORM) is also available. Celery is easy to integrate with web frameworks, some of which even have integration packages: Django (django-celery), Pyramid (pyramid_celery), Pylons (celery-pylons), Flask (flask-celery), web2py (web2py-celery).

1.1.2 Overview. This is a high level overview of the architecture.
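As a concrete illustration of the task-queue idea described in this excerpt, here is a minimal Celery application that uses Redis as its broker and result backend. Note that this sketch uses the current Celery API rather than the 2.5-era configuration style documented above, and the module and task names are invented:

    # tasks.py - minimal Celery app with a Redis broker (illustrative sketch)
    from celery import Celery

    app = Celery("tasks",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task
    def add(x, y):
        # A trivial task; real workloads would do I/O or heavy computation here.
        return x + y

    # Caller side:
    #   result = add.delay(2, 3)   # executes asynchronously on a worker
    #   result.get(timeout=10)     # -> 5
    # Start a worker with:  celery -A tasks worker --loglevel=info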
  • An End-To-End Application Performance Monitoring Tool
Applications Manager - an end-to-end application performance monitoring tool.

For enterprises to stay ahead of the technology curve in today's competitive IT environment, it is important for their business-critical applications to perform smoothly. Applications Manager - ManageEngine's on-premise application performance monitoring solution - offers real-time application monitoring capabilities that provide deep visibility into not only the performance of various server and infrastructure components but also other technologies used in the IT ecosystem, such as databases, cloud services, virtual systems, application servers, web servers/services, ERP systems, big data, middleware and messaging components, etc.

Highlights of Applications Manager:

Wide range of supported apps: Monitor 150+ application types and get deep visibility into all the components of your application environment. Ensure optimal performance of all your applications by visualizing real-time data via comprehensive dashboards and reports.

Easy installation and setup: Get Applications Manager running in under two minutes. Applications Manager is an agentless monitoring tool and it requires no complex installation procedures.

Automatic discovery and dependency mapping: Automatically discover all applications and servers in your network and easily categorize them based on their type (apps, servers, databases, VMs, etc.). Get comprehensive insight into your business infrastructure, drill down to IT application relationships, map them effortlessly, and understand how applications interact with each other with the help of these dependency maps.

Deep dive performance metrics: Track key performance metrics of your applications like response times, requests per minute, lock details, session details, CPU and disk utilization, error states, etc. Measure the efficiency of your applications by visualizing the data at regular periodic intervals.
  • Redis - a Flexible Key/Value Datastore: An Introduction
Redis - a Flexible Key/Value Datastore: An Introduction. Alexandre Dulaunoy, AIMS 2011.

MapReduce and Network Forensics: • MapReduce is an old concept in computer science ◦ The map stage to perform isolated computation on independent problems ◦ The reduce stage to combine the computation results • Network forensic computations can easily be expressed in map and reduce steps: ◦ parsing, filtering, counting, sorting, aggregating, anonymizing, shuffling...

Concurrent Network Forensic Processing: • To allow concurrent processing, a non-blocking data store is required • To allow flexibility, a schema-free data store is required • To allow fast processing, you need to scale horizontally and to know the cost of querying the data store • To allow streaming processing, write cost versus read cost should be equivalent

Redis: a key-value/tuple store • Redis is a key store written in C with an extended set of data types like lists, sets, ranked sets, hashes, queues • Redis usually runs in memory, with persistence achieved by regularly saving to disk • The Redis API is simple (telnet-like) and supported by a multitude of programming languages • http://www.redis.io/

Redis installation: • Download Redis 2.2.9 (stable version) • tar xvfz redis-2.2.9.tar.gz • cd redis-2.2.9 • make

Keys: • Keys are free text values (up to 2^31 bytes) - newline not allowed • Short keys are usually better (to save memory) • Naming conventions are used, like keys separated by colons

Value and data types: • binary-safe strings • lists of binary-safe strings • sets of binary-safe strings • hashes (dictionary-like) • pubsub channels

Running redis and talking to redis...
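A tiny sketch of the "counting and sorting" style of forensic computation mentioned above, using a Redis sorted set as the shared counter. The redis-py client (3.x) and the toy log records are assumptions made for the example:

    import redis  # assumes redis-py >= 3.0 and a local Redis server

    r = redis.Redis(decode_responses=True)

    # "map"-like step: each worker increments a per-source-IP counter while parsing records
    def count_source(ip):
        r.zincrby("forensics:src_ip_counts", 1, ip)

    for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1", "192.168.1.5"]:
        count_source(ip)

    # "reduce"/query step: the top talkers, already kept sorted by Redis
    print(r.zrevrange("forensics:src_ip_counts", 0, 9, withscores=True))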
  • WEB2PY Enterprise Web Framework (2nd Edition)
WEB2PY Enterprise Web Framework / 2nd Ed. Massimo Di Pierro. Copyright ©2009 by Massimo Di Pierro. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Copyright owner for permission should be addressed to: Massimo Di Pierro, School of Computing, DePaul University, 243 S Wabash Ave, Chicago, IL 60604 (USA). Email: [email protected]

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data: WEB2PY: Enterprise Web Framework. Printed in the United States of America.
  • Performance at Scale with Amazon ElastiCache
Performance at Scale with Amazon ElastiCache. July 2019.

Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents: Introduction; ElastiCache Overview; Alternatives to ElastiCache; Memcached vs. Redis; ElastiCache for Memcached; Architecture with ElastiCache for Memcached …
  • High Performance with Distributed Caching
High Performance with Distributed Caching: Key Requirements for Choosing the Right Solution.

Table of Contents: Executive summary; Companies are choosing Couchbase for their caching layer, and much more; Memory-first; Persistence; Elastic scalability; Replication; More than caching; About this guide; Memcached and Oracle Coherence - two popular caching solutions; Oracle Coherence; Memcached; Why cache? Better performance, lower costs; Common caching use cases; Key requirements for an effective distributed caching solution; Problems with Oracle Coherence: cost, complexity, capabilities; Memcached: A simple, powerful open source cache; Lack of enterprise support, built-in management, and advanced features; Couchbase Server as a high-performance distributed cache; General-purpose NoSQL database with Memcached roots; Meets key requirements for distributed caching; Develop with agility; Perform at any scale; Manage with ease; Benchmarks: Couchbase performance under caching workloads; Simple migration from Oracle Coherence or Memcached to Couchbase; Drop-in replacement for Memcached: No code changes required; Migrating from Oracle Coherence to Couchbase Server; Beyond caching: Simplify IT infrastructure, reduce costs with Couchbase; About Couchbase.

[Sidebar: "Caching has become a de facto technology to boost application performance as well as reduce costs."]

Executive Summary. For many web, mobile, and Internet of Things (IoT) applications that run in clustered or cloud environments, distributed caching is a key requirement, for reasons of both performance and cost. By caching frequently accessed data in memory - rather than making round trips to the backend database - applications can deliver highly responsive experiences that today's users expect.
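The pattern the executive summary describes - check the in-memory cache first and fall back to the backend database only on a miss - is the classic cache-aside pattern. A generic sketch follows; the client library, key naming, TTL, and the db.load_user_profile call are all assumptions for illustration, and a Memcached or Couchbase client could fill the same role as the Redis client used here:

    import json
    import redis  # stand-in cache client for the sketch

    cache = redis.Redis(decode_responses=True)
    CACHE_TTL_SECONDS = 300  # arbitrary freshness window

    def get_user_profile(user_id, db):
        # Cache-aside read: try the cache, fall back to the database on a miss.
        key = f"user:profile:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit: no database round trip

        profile = db.load_user_profile(user_id)  # cache miss: hypothetical backend call
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(profile))
        return profile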
  • Thesis Template for Old Word Versions (original title in Finnish: Opinnäytetyön malli vanhoille Word-versioille)
Bachelor's Thesis, Information Technology, 2011. Kenneth Emeka Odoh. DESIGN AND IMPLEMENTATION OF A WEB-BASED AUCTION SYSTEM.

BACHELOR'S THESIS (UAS) | ABSTRACT. TURKU UNIVERSITY OF APPLIED SCIENCES. Information Technology | Networking. Date of completion of the thesis: 12th January 2012. Instructor: Ferm Tiina.

Electronic auction has been a popular means of goods distribution. The number of items sold through internet auction sites has grown in the past few years. Evidently, this has become the medium of choice for customers. This project entails the design and implementation of a web-based auction system for users to trade in goods. The system was implemented in the Django framework. Because trade over the Internet lacks any means of ascertaining the quality of goods, there is a need to implement a feedback system to rate the seller's credibility in order to increase customer confidence in a given business. The feedback system is based on the history of the customer's ratings of the previous seller's transactions. As a result, the auction system has a built-in feedback system to enhance the credibility of the auction system. The project was designed using a modular approach to ensure maintainability. A number of engines were implemented in order to enhance the functionality of the auction system. They include the following: commenting engine, search engine, business intelligence (user analytics and statistics), graph engine, advertisement engine and recommendation engine. As a result of this thesis undertaking, a full-fledged system robust enough to handle small or medium-sized traffic has been developed to specification.
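The feedback mechanism described above - deriving a seller's credibility from the history of ratings on past transactions - amounts to a simple aggregate. A minimal Django-flavoured sketch (the model and field names are hypothetical, not taken from the thesis):

    # Illustrative Django model sketch; names are invented, not from the thesis.
    from django.db import models
    from django.db.models import Avg

    class Feedback(models.Model):
        seller_username = models.CharField(max_length=100)
        rating = models.PositiveSmallIntegerField()  # e.g. 1-5 stars
        created_at = models.DateTimeField(auto_now_add=True)

    def seller_credibility(seller_username):
        # Average rating over the seller's past transactions (None if no history).
        return (Feedback.objects
                .filter(seller_username=seller_username)
                .aggregate(avg=Avg("rating"))["avg"])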
  • Scaling Uber with Node.js - Amos Barreto (@amos_barreto)
Scaling Uber with Node.js. Amos Barreto, @amos_barreto.

Uber is everyone's private driver. REQUEST: Tap to select location. RIDE: Sit back and relax, tell your driver your destination. RATE: Help us maintain a quality service by rating your experience.

Your Drivers: UBER QUALIFIED - Uber only partners with drivers who have a keen eye for customer service and a passion for the trade. RIDER RATED - Tell us what you think. Your feedback helps us work with drivers to constantly improve the Uber experience. LICENSED & INSURED - From insurance to background checks, every driver meets or beats local regulations.

LOGISTICS: #OMGUBERICECREAM, UberChopper #OMGUBERCHOPPER, #UBERVALENTINES, #ICANHASUBERKITTENS.

Trip State Machine (Simplified): Request -> Dispatch -> Accept -> Arrive -> Begin -> End. Trip State Machine (Extended): adds Expire and Reject transitions around the Dispatch state.

OUR STORY. Version 1: • PHP dispatch • Outsourced to remote contractors in the Midwest • Half the code in Spanish • Flat file • Lifetime: 6-9 months. [Slide diagram: PHP, Cron, Flat file]

"I read an article on HackerNews about a new framework called Node.js" - Jason Roberts

Tradeoffs: • Learning curve • Database drivers • Scalability • Documentation • Performance • Monitoring • Library ecosystem • Production operations.

Version 2: • Lifetime: 9 months • Developed in house • Node.js application • Prototyped on 0.2 • Launched in production with 0.4 • MongoDB datastore.

"I really don't see dispatch changing much in the next three years" Expect the …
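The trip state machine sketched in these slides (Request, Dispatch, Accept, Arrive, Begin, End, with expire/reject edges around Dispatch whose exact targets are only partly recoverable from the flattened slide text) can be expressed as a small transition table. The sketch below is illustrative only, not Uber's dispatch code, and it is written in Python for consistency with the other examples even though the talk is about Node.js:

    # Illustrative trip state machine; the states follow the slide, the code is not Uber's.
    TRANSITIONS = {
        "request":  {"dispatch"},
        "dispatch": {"accept", "dispatch", "request"},  # accept, re-dispatch on reject, or expire back to request
        "accept":   {"arrive"},
        "arrive":   {"begin"},
        "begin":    {"end"},
        "end":      set(),                              # terminal state
    }

    class Trip:
        def __init__(self):
            self.state = "request"

        def advance(self, new_state):
            if new_state not in TRANSITIONS[self.state]:
                raise ValueError(f"illegal transition {self.state} -> {new_state}")
            self.state = new_state

    trip = Trip()
    for step in ["dispatch", "accept", "arrive", "begin", "end"]:
        trip.advance(step)
    print(trip.state)  # -> end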
  • redis-lua Documentation, Release 2.0.8
redis-lua Documentation, Release 2.0.8. Julien Kauffmann. October 12, 2016.

Contents: 1 Quick start (1.1 Step-by-step analysis); 2 What's the magic at play here?; 3 One step further; 4 What happens when I make a mistake?; 5 What's next?; 6 Table of contents (6.1 Basic usage; 6.2 Advanced usage; 6.3 API); 7 Indices and tables.

redis-lua is a pure-Python library that eases usage of Lua scripts with Redis. It provides script loading and parsing abilities as well as testing primitives.

1 Quick start. A code sample is worth a thousand words:

from redis_lua import load_script

# Loads the 'create_foo.lua' in the 'lua' directory.
script = load_script(name='create_foo', path='lua/')

# Run the script with the specified arguments.
foo = script.get_runner(client=redis_client)(
    members={'john', 'susan', 'bob'},
    size=5,
)

1.1 Step-by-step analysis. Let's go through the code sample step by step. First we have:

from redis_lua import load_script

We import the only function we need. Nothing too specific here. The next lines are:

# Loads the 'create_foo.lua' in the 'lua' directory.
script = load_script(name='create_foo', path='lua/')

These lines look for a file named create_foo.lua in the lua directory, relative to the current working directory. This example actually considers that using the current directory is correct. In production code, you likely want to make sure to use a more reliable or absolute path.