uWSGI Documentation Release 1.9


uWSGI, February 08, 2016

Contents

1 Included components (updated to latest stable release)
2 Quickstarts
3 Table of Contents
4 Tutorials
5 Articles
6 uWSGI Subsystems
7 Scaling with uWSGI
8 Securing uWSGI
9 Keeping an eye on your apps
10 Async and loop engines
11 Web Server support
12 Language support
13 Release Notes
14 Contact
15 Donate
16 Indices and tables
Python Module Index

The uWSGI project aims at developing a full stack for building (and hosting) clustered/distributed network applications. Mainly targeted at the web and its standards, it has been successfully used in a lot of different contexts. Thanks to its pluggable architecture it can be extended without limits to support more platforms and languages. Currently, you can write plugins in C, C++ and Objective-C. The "WSGI" part in the name is a tribute to the namesake Python standard, as it was the first plugin developed for the project. Versatility, performance, low resource usage and reliability are the strengths of the project (and the only rules followed).

CHAPTER 1: Included components (updated to latest stable release)

• The Core (implements configuration, process management, socket creation, monitoring, logging, shared memory areas, IPC, cluster membership and the uWSGI Subscription Server)
• Request plugins (implement application server interfaces for various languages and platforms: WSGI, PSGI, Rack, Lua WSAPI, CGI, PHP, Go ...)
• Gateways (implement load balancers, proxies and routers)
• The Emperor (implements massive instance management and monitoring)
• Loop engines (implement concurrency; components can be run in preforking, threaded, asynchronous/evented and green thread/coroutine modes. Various technologies are supported, including uGreen, Greenlet, Stackless, Gevent, Coro::AnyEvent, Goroutines and Fibers)

Note: With a large open source project such as uWSGI the code and the documentation may not always be in sync. The mailing list is the best source for help regarding uWSGI.

CHAPTER 2: Quickstarts

2.1 Quickstart for python/WSGI applications

This quickstart will show you how to deploy simple WSGI applications and common frameworks. Python here means CPython; for PyPy you need to use the specific plugin (see The PyPy plugin), and Jython support is a work in progress.

2.1.1 Installing uWSGI with python support

Suggestion: when you start learning uWSGI, try to build from official sources; using distro-supplied packages could bring you a lot of headaches. When things are clear you can use modular builds (like the ones available in your distro).

uWSGI is a (big) C application, so you need a C compiler (like gcc or clang) and the Python development headers.
On a Debian-based distro, apt-get install build-essential python-dev will be enough.

You have various ways to install uWSGI for Python:

• via pip: pip install uwsgi
• using the network installer: curl http://uwsgi.it/install | bash -s default /tmp/uwsgi (this will install the uWSGI binary in /tmp/uwsgi, feel free to change it)
• or simply downloading a source tarball and 'making' it:

    wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
    tar zxvf uwsgi-latest.tar.gz
    cd <dir>
    make

After the build you will have a 'uwsgi' binary in the current directory.

Installing via your package distribution is not covered (it would be impossible to make everyone happy), but all the general rules apply. One thing you may want to take into account when testing this quickstart with distro-supplied packages is that very probably your distribution has built uWSGI in a modular way (every feature is a different plugin that must be loaded). To complete the quickstart you have to prepend --plugin python,http to the first series of examples, and --plugin python once the HTTP router is removed (if this makes no sense to you, just continue reading).

2.1.2 The first WSGI application

Let's start with the simple hello world example (this is for Python 2.x; Python 3.x requires the returned string to be bytes, see later):

    def application(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return "Hello World"

Save it as foobar.py. As you can see it is composed of a single Python function. It is called "application" as this is the default function that the uWSGI Python loader will search for (but you can obviously customize it).

The Python 3.x version is the following:

    def application(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return b"Hello World"

2.1.3 Deploy it on HTTP port 9090

Now start uwsgi to run an HTTP server/router passing requests to your WSGI application:

    uwsgi --http :9090 --wsgi-file foobar.py

That's all.

2.1.4 Adding concurrency and monitoring

The first tuning you would like to make is adding concurrency (by default uWSGI starts with a single process and a single thread). You can add more processes with the --processes option or more threads with the --threads option (or you can have both).

    uwsgi --http :9090 --wsgi-file foobar.py --master --processes 4 --threads 2

This will spawn 4 processes (each with 2 threads), a master process that will respawn your processes when they die, and the HTTP router seen before.

One important task is monitoring. Understanding what is going on is vital in production deployments. The stats subsystem allows you to export uWSGI internal statistics as JSON:

    uwsgi --http :9090 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

Make some requests to your app and then telnet to port 9191: you will get a lot of fun information. There is a top-like tool for monitoring instances, named 'uwsgitop' (just pip install it).

Pay attention: bind the stats socket to a private address (unless you know what you are doing), otherwise everyone could access it!
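Since the stats server simply writes one JSON document per connection, you can also read it programmatically instead of using telnet. A minimal sketch for Python 3, assuming the instance above is running with --stats 127.0.0.1:9191 (the exact keys in the document depend on your uWSGI version):

    import json
    import socket

    def read_uwsgi_stats(host='127.0.0.1', port=9191):
        # The stats server sends a single JSON document and closes the connection.
        with socket.create_connection((host, port)) as s:
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return json.loads(b''.join(chunks).decode('utf-8'))

    stats = read_uwsgi_stats()
    # 'workers' is one of the top-level keys exposed by the stats subsystem.
    print(len(stats.get('workers', [])), 'workers reported')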
2.1.5 Putting behind a full webserver

Even if the uWSGI HTTP router is solid and high-performance, you may want to put your application behind a fully capable webserver. uWSGI natively speaks HTTP, FastCGI, SCGI and its specific protocol named "uwsgi" (yes, wrong naming choice). The best performing protocol is obviously the uwsgi one, already supported by nginx and Cherokee (while various Apache modules are available).

A common nginx config is the following:

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }

This means "pass every request to the server bound to port 3031 speaking the uwsgi protocol". Now we can spawn uWSGI to natively speak the uwsgi protocol:

    uwsgi --socket 127.0.0.1:3031 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

If you run ps aux you will see one process less. The HTTP router has been removed, as our "workers" (the processes assigned to uWSGI) natively speak the uwsgi protocol.

2.1.6 Automatically starting uWSGI on boot

If you think about writing some init.d script for spawning uWSGI, just sit down and realize that it is no longer 1995. Each distribution has chosen its startup system (Upstart, systemd...) and there are tons of process managers available (supervisord, god...). uWSGI will integrate very well with all of them (we hope), but if you plan to deploy a big number of apps check the uWSGI Emperor: it is the dream of every devops.

2.1.7 Deploying Django

Django is very probably the most used Python web framework around. Deploying it is pretty easy (we continue our configuration with 4 processes and 2 threads each). We suppose the Django project is in /home/foobar/myproject:

    uwsgi --socket 127.0.0.1:3031 --chdir /home/foobar/myproject/ --wsgi-file myproject/wsgi.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

With --chdir we move to a specific directory. In Django this is required to correctly load modules.

If the file /home/foobar/myproject/myproject/wsgi.py (or whatever you have called your project) does not exist, you are very probably using an old (<1.4) Django version. In such a case you need a little bit more configuration:

    uwsgi --socket 127.0.0.1:3031 --chdir /home/foobar/myproject/ --pythonpath .. --env DJANGO_SETTINGS_MODULE=myproject.settings --module "django.core.handlers.wsgi.WSGIHandler()" --processes 4 --threads 2 --stats 127.0.0.1:9191

ARGH!!! What the hell is this??? Yes, you are right, dealing with such long command lines is basically impractical (and foolish). uWSGI supports various configuration styles. In this quickstart we will use .ini files:

    [uwsgi]
    socket = 127.0.0.1:3031
    chdir = /home/foobar/myproject/
    pythonpath = ..
    env = DJANGO_SETTINGS_MODULE=myproject.settings
    module = django.core.handlers.wsgi:WSGIHandler()
    processes = 4
    threads = 2
    stats = 127.0.0.1:9191

...a lot better. Just run it:

    uwsgi yourfile.ini

Older (<1.4) Django releases need to set env, module and the pythonpath (note the .. that allows us to reach the myproject.settings module).

2.1.8 Deploying Flask

Flask is another popular Python web microframework:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return "<span style='color:red'>I am app 1</span>"

Flask exports its WSGI function (the one we called 'application' at the start of the page) as 'app', so we need to instruct uWSGI to use it. We still continue to use the 4 processes/2 threads and the uwsgi socket as the base:

    uwsgi --socket 127.0.0.1:3031 --wsgi-file myflaskapp.py --callable app --processes 4 --threads 2 --stats 127.0.0.1:9191

The only addition is the --callable option.
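The Flask command line above can of course be expressed as an .ini file too, following the same pattern used for Django. A minimal sketch, assuming the same file layout (the file name is illustrative):

    [uwsgi]
    socket = 127.0.0.1:3031
    wsgi-file = myflaskapp.py
    callable = app
    processes = 4
    threads = 2
    stats = 127.0.0.1:9191

Then start it with uwsgi myflaskapp.ini.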
Recommended publications
  • The Essentials of Stackless Python Tuesday, 10 July 2007 10:00 (30 Minutes)
    EuroPython 2007, Contribution ID: 62, Type: not specified. The Essentials of Stackless Python, Tuesday, 10 July 2007, 10:00 (30 minutes).
    This is a re-worked, actualized and improved version of my talk at PyCon 2007. Repeating the abstract: as a surprise for people who think they know Stackless, we present the new Stackless implementation for PyPy, which has led to a significant amount of new insight about parallel programming and its possible implementations. We will isolate the known Stackless as a special case of a general concept. This is a Stackless talk, not a PyPy talk, but the insights presented here would not exist without PyPy's existence.
    Summary: Stackless has been around for a long time now. After several versions with different goals in mind, the basic concepts of channels and tasklets turned out to be useful abstractions, and for many versions now Stackless has only been ported from release to release, without fundamental changes to the principles. As a spin-off, Armin Rigo invented Greenlets at a Stackless sprint. They are a kind of coroutine with a bit of special semantics; the major benefit is that Greenlets can run on unmodified CPython. In parallel to that, the PyPy project is in its fourth year now, and one of its goals was Stackless integration as an option. And of course, Stackless has been integrated into PyPy in a very nice and elegant way, much nicer than expected. During the design of the Stackless extension to PyPy, it turned out that tasklets, greenlets and coroutines are not that different in principle, and it was possible to base all known parallel paradigms on one simple coroutine layout, which is as minimalistic as possible.
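    For readers unfamiliar with the tasklet and channel abstractions mentioned above, here is a minimal producer/consumer sketch; it assumes a Stackless Python interpreter (the stackless module is not available in plain CPython):

        import stackless

        channel = stackless.channel()

        def producer():
            for i in range(3):
                channel.send(i)      # blocks this tasklet until a receiver is ready

        def consumer():
            for _ in range(3):
                print(channel.receive())

        stackless.tasklet(producer)()
        stackless.tasklet(consumer)()
        stackless.run()              # schedule tasklets until all have finished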
  • Method, 224 Vs. Typing Module Types
    Index A migration metadata, 266 running migrations, 268, 269 Abstract base classes, 223 setting up a new project, 264 @abc.abstractmethod decorator, 224 using constants in migrations, 267 __subclasshook__(...) method, 224 __anext__() method, 330 vs. typing module types, 477 apd.aggregation package, 397, 516, 524 virtual subclasses, 223 clean functions (see clean functions) Actions, 560 database, 254 analysis process, 571, 572 get_data_by_deployment(...) config file, 573, 575 function, 413, 416 DataProcessor class, 561 get_data(...) function, 415 extending, 581 plot_sensor(...) function, 458 IFTTT service, 566 plotting data, 429 ingesting data, 567, 571 plotting functions, 421 logs, 567 query functions, 417 trigger, 563, 564 apd.sensors package, 106 Adafruit, 42, 486 APDSensorsError, 500 Adapter pattern, 229 DataCollectionError, 500 __aenter__() method, 331, 365 directory structure, 108 __aexit__(...) method, 331, 365 extending, 149 Aggregation process, 533 IntermittentSensorFailureError, 500 aiohttp library, 336 making releases, 141 __aiter__() method, 330 sensors script, 32, 36, 130, 147, 148, Alembic, 264 153, 155, 156, 535 ambiguous changes, 268 UserFacingCLIError, 502 creating a new revision, 265 apd.sunnyboy_solar package, 148, current version, 269 155, 173 downgrading, 268, 270 API design, 190 irreversible, migrations, 270 authentication, 190 listing migrations, 269 versioning, 240, 241, 243 merging, migrations, 269 589 © Matthew Wilkes 2020 M. Wilkes, Advanced Python Development, https://doi.org/10.1007/978-1-4842-5793-7 INDEX AssertionError,
  • Python Concurrency
    Python Concurrency: Threading, parallel and GIL adventures. Chris McCafferty, SunGard Global Services.
    Overview: the free lunch is over (Herb Sutter); concurrency is traditionally challenging; threading; the Global Interpreter Lock (GIL); multiprocessing; parallel processing; wrap-up: the Pythonic way.
    Reminder - The Free Lunch Is Over. How do we get our free lunch back? See Herb Sutter's paper at http://www.gotw.ca/publications/concurrency-ddj.htm. Clock speed increase is stalled but the number of cores is increasing. Parallel paths of execution will reduce the time to perform computationally intensive tasks, but multi-threaded development has typically been difficult and fraught with danger.
    Threading: use the threading module, not thread. It offers the usual helpers for making concurrency a bit less risky: Threads, Locks, Semaphores... Use logging, not print(). Don't start a thread at module import (bad). Be careful importing from daemon threads. (Traditional management view of threads; image: baby pile of snakes, Justin Guyer.)
    Managing Locks with 'with': the with keyword is your friend (compare with the 'with file' idiom):

        import threading

        rlock = threading.RLock()
        with rlock:
            print "code that can only be executed while we acquire rlock"
        # lock is released at end of code block, regardless of exceptions

    Atomic Operations in Python: some operations can be pre-empted by another thread, which can lead to bad data or deadlocks. Some languages offer constructs to help. CPython has a set of atomic operations due to the operation of something called the GIL and the way the underlying C code is implemented. This is a fortuitous implementation detail; ideally use RLocks to future-proof your code.
    CPython Atomic Operations: reading or replacing a single instance attribute; reading or replacing a single global variable; fetching an item from a list; modifying a list in place (e.g.
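    As a concrete illustration of why per-operation atomicity under the GIL is not enough, here is a small sketch (plain CPython) that guards a read-modify-write sequence with a Lock:

        import threading

        counter = 0
        lock = threading.Lock()

        def worker(iterations=100000):
            global counter
            for _ in range(iterations):
                # counter += 1 is a read-modify-write sequence, not a single
                # atomic operation, so it must be protected even under the GIL.
                with lock:
                    counter += 1

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter)  # with the lock this always prints 400000; without it, updates can be lost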
  • Thread-Level Parallelism II
    Great Ideas in Computer Architecture (a.k.a. Machine Structures): Thread-Level Parallelism II. UC Berkeley Teaching Professor Dan Garcia and Professor Bora Nikolić, cs61c.org.
    Languages Supporting Parallel Programming: ActorScript, Ada, Afnix, Alef, Alice, APL, Axum, Chapel, Cilk, Clean, Clojure, Concurrent C, Concurrent Pascal, Concurrent ML, Concurrent Haskell, Curry, CUDA, E, Eiffel, Erlang, Fortran 90, Go, Io, Janus, JoCaml, Join, Java, Joule, Joyce, LabVIEW, Limbo, Linda, MultiLisp, Modula-3, Occam, occam-π, Orc, Oz, Pict, Reia, SALSA, Scala, SISAL, SR, Stackless Python, SuperPascal, VHDL, XC. Which one to pick?
    Why So Many Parallel Programming Languages? Why "intrinsics"? (To Intel: fix your #()&$! compiler, thanks...) It's happening, but: SIMD features are continually added to compilers (Intel, gcc), and it is an intense area of research. Research progress: 20+ years to translate C into good (fast!) assembly. How long to translate C into good (fast!) parallel code? The general problem is very hard to solve; the present state is specialized solutions for specific cases. Your opportunity to become famous!
    Parallel Programming Languages: the number of choices is an indication that there is no universal solution and that needs are very problem specific. E.g., scientific computing/machine learning (matrix multiply); a webserver handles many unrelated requests simultaneously; input/output: it's all happening simultaneously! There are specialized languages for different tasks, and some are easier to use (for some problems).
  • NGINX-Conf-2018-Slides Rawdat
    Performance Tuning NGINX. Amir Rawdat, Technical Marketing Engineer at NGINX Inc.; previously Customer Applications Engineer at Nokia Inc.
    Multi-Process Architecture with QPI Bus. Web server topology: wrk -> nginx. Reverse proxy topology: wrk -> nginx -> nginx.
    Technical specifications:
    - Client: 2 sockets, 22 cores per socket, 2 threads per core, Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz, 128 GB RAM, Ubuntu Xenial, 40GbE QSFP+ NIC
    - Web Server & Reverse Proxy: 2 sockets, 24 cores per socket, 2 threads per core, Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz, 192 GB RAM, Ubuntu Xenial, 40GbE QSFP+ NIC
    Multi-Processor Architecture #1: Duplicate NGINX Configurations.
    NGINX configuration (instance 1):

        user root;
        worker_processes 48;
        worker_cpu_affinity auto 000000000000000000000000111111111111111111111111000000000000000000000000111111111111111111111111;
        worker_rlimit_nofile 1024000;
        error_log /home/ubuntu/access.error error;
        ...

    NGINX configuration (instance 2):

        user root;
        worker_processes 48;
        worker_cpu_affinity auto 111111111111111111111111000000000000000000000000111111111111111111111111000000000000000000000000;
        worker_rlimit_nofile 1024000;
        error_log /home/ubuntu/access.error error;
        ...

    Deploying NGINX instances:

        $ nginx -c /path/to/configuration/instance-1
        $ nginx -c /path/to/configuration/instance-2
        $ ps aux | grep nginx
        nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx_0.conf
        nginx: worker process
        nginx: worker process
        nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx_1.conf
        nginx: worker process
        nginx: worker process
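    Not from the slides, but a quick way to check that worker_cpu_affinity pinned the workers as intended is to look at each worker's current processor and allowed CPU list; a sketch, assuming the two instances above are running (<worker-pid> is a placeholder):

        $ ps -eo pid,psr,args | grep 'nginx: worker'
        # PSR shows the CPU each worker process is currently scheduled on
        $ taskset -cp <worker-pid>
        # prints the list of CPUs the worker is allowed to run on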
  • Bepasty Documentation Release 0.3.0
    bepasty Documentation, Release 0.3.0. The Bepasty Team (see AUTHORS file). Jul 02, 2019.
    Contents: 1 Contents; 1.1 bepasty; 1.2 Using bepasty's web interface; 1.3 Using bepasty with non-web clients; 1.4 Quickstart; 1.5 Installation tutorial with Debian, NGinx and gunicorn; 1.6 ChangeLog; 1.7 The bepasty software Project; 1.8 License; 1.9 Authors; Index.
    bepasty is like a pastebin for every kind of file (text, image, audio, video, documents, ...). You can upload multiple files at once, simply by drag and drop.
    CHAPTER 1: Contents. 1.1 bepasty. bepasty is like a pastebin for all kinds of files (text, image, audio, video, documents, ..., binary). The documentation is there: http://bepasty-server.readthedocs.org/en/latest/
    1.1.1 Features. Generic: you can upload multiple files at once, simply by drag and drop; after upload, you get a unique link to a view of each file; on that view, we show actions you can do with the file, metadata of the file and, if possible, we also render the file contents; if you uploaded multiple files, you can create a pastebin with the list
  • Next Generation Web Scanning Presentation
    Next generation web scanning. New Zealand: a case study. First presented at KIWICON III 2009 by Andrew Horton aka urbanadventurer (www.morningstarsecurity.com).
    NZ Web Recon. Goal: to scan all of New Zealand's web-space to see what's there. Requirements: targets, scanning, analysis. Sounds easy, right?
    Targets. What does 'NZ web-space' mean? It could mean: geographically within NZ regardless of the TLD; the .nz TLD hosted anywhere; or all of the above. For this scan it means IPs geographically within NZ.
    Finding Targets. We need creative methods to find targets: DNS zone transfer; finding IP addresses on IRC and by resolving lots of NZ websites: 58.*.*.*, 60.*.*.*, 65.*.*.*, 91.*.*.*, 110.*.*.*, 111.*.*.*, 113.*.*.*, 114.*.*.*, 115.*.*.*, 116.*.*.*, 117.*.*.*, 118.*.*.*, 119.*.*.*, 120.*.*.*, 121.*.*.*, 122.*.*.*, 123.*.*.*, 124.*.*.*, 125.*.*.*, 130.*.*.*, 131.*.*.*, 132.*.*.*, 138.*.*.*, 139.*.*.*, 143.*.*.*, 144.*.*.*, 146.*.*.*, 150.*.*.*, 153.*.*.*, 156.*.*.*, 161.*.*.*, 162.*.*.*, 163.*.*.*, 165.*.*.*, 166.*.*.*, 167.*.*.*, 192.*.*.*, 198.*.*.*, 202.*.*.*, 203.*.*.*, 210.*.*.*, 218.*.*.*, 219.*.*.*, 222.*.*.*. That is 729,580,500 IPs, more than we want to try.
    IP address blocks in the IANA IPv4 Address Space Registry: Prefix, Designation, Date, Whois, Status [1]
  • Goless Documentation Release 0.6.0
    goless Documentation, Release 0.6.0. Rob Galanakis, July 11, 2014.
    Contents: 1 Intro; 2 Goroutines; 3 Channels; 4 The select function; 5 Exception Handling; 6 Examples; 7 Benchmarks; 8 Backends; 9 Compatibility Details (PyPy, Python 2 (CPython), Python 3 (CPython), Stackless Python); 10 goless and the GIL; 11 References; 12 Contributing; 13 Miscellany; 14 Indices and tables.
    CHAPTER 1: Intro. The goless library provides Go programming language semantics built on top of gevent, PyPy, or Stackless Python. For an example of what goless can do, here is the Go program at https://gobyexample.com/select reimplemented with goless:

        c1 = goless.chan()
        c2 = goless.chan()

        def func1():
            time.sleep(1)
            c1.send('one')
        goless.go(func1)

        def func2():
            time.sleep(2)
            c2.send('two')
        goless.go(func2)

        for i in range(2):
            case, val = goless.select([goless.rcase(c1), goless.rcase(c2)])
            print(val)

    It is surely a testament to Go's style that it isn't much less Python code than Go code, but I quite like this. Don't you?
    CHAPTER 2: Goroutines. The goless.go() function mimics Go's goroutines by, unsurprisingly, running the routine in a tasklet/greenlet.
  • Load Balancing for Heterogeneous Web Servers
    Load Balancing for Heterogeneous Web Servers. Adam Piórkowski (1), Aleksander Kempny (2), Adrian Hajduk (1), and Jacek Strzelczyk (1). (1) Department of Geoinformatics and Applied Computer Science, AGH University of Science and Technology, Cracow, Poland, {adam.piorkowski,jacek.strzelczyk}@agh.edu.pl, http://www.agh.edu.pl. (2) Adult Congenital and Valvular Heart Disease Center, University of Muenster, Muenster, Germany, [email protected], http://www.ukmuenster.de.
    Abstract. A load balancing issue for heterogeneous web servers is described in this article. A review of algorithms and solutions is shown. The selected Internet service for on-line echocardiography training is presented. The independence of simultaneous requests for this server is proved. Results of experimental tests are presented. Key words: load balancing, scalability, web server, minimum response time, throughput, on-line simulator.
    1 Introduction. Modern web servers can handle millions of queries, although the performance of a single node is limited. Performance can be continuously increased if the services are designed so that they can be scaled. The concept of scalability is closely related to load balancing. This technique has been used since the beginning of the first distributed systems, including rich client architecture. Most complex web systems use load balancing to improve performance, availability and security [1-4].
    2 Load Balancing in a Cluster of Web Servers. Clustering of web servers is a method of constructing scalable Internet services. The basic idea behind the construction of such a service is to set the relay server
    (Footnote: this is the accepted version of: Piorkowski, A., Kempny, A., Hajduk, A., Strzelczyk, J.: Load Balancing for Heterogeneous Web Servers.)
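    The sketch below is not the paper's own algorithm, but the general idea of spreading requests across heterogeneous backends can be illustrated with a weighted nginx upstream block; backend addresses and weights here are made up for illustration:

        upstream echo_training {
            least_conn;                      # prefer the backend with the fewest active connections
            server 10.0.0.11:8080 weight=3;  # faster node gets a larger share of requests
            server 10.0.0.12:8080 weight=1;  # slower node gets a smaller share
        }

        server {
            listen 80;
            location / {
                proxy_pass http://echo_training;
            }
        }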
  • Zope Documentation Release 5.3
    Zope Documentation, Release 5.3. The Zope developer community, Jul 31, 2021.
    Contents:
    1 What's new in Zope: 1.1 What's new in Zope 5; 1.2 What's new in Zope 4.
    2 Installing Zope: 2.1 Prerequisites; 2.2 Installing Zope with zc.buildout; 2.3 Installing Zope with pip; 2.4 Building the documentation with Sphinx.
    3 Configuring and Running Zope: 3.1 Creating a Zope instance; 3.2 Filesystem Permissions; 3.3 Configuring Zope; 3.4 Running Zope; 3.5 Running Zope (plone.recipe.zope2instance install); 3.6 Logging In To Zope; 3.7 Special access user accounts; 3.8 Troubleshooting; 3.9 Using alternative WSGI server software; 3.10 Debugging Zope applications under WSGI; 3.11 Zope configuration reference.
    4 Migrating between Zope versions: 4.1 From Zope 2 to Zope 4 or 5; 4.2 Migration from Zope 4 to Zope 5.0.
  • Continuations and Stackless Python
    Continuations and Stackless Python, or "How to change a Paradigm of an existing Program". Christian Tismer, Virtual Photonics GmbH, mailto:[email protected]
    Abstract. In this paper, an implementation of "Stackless Python" (a Python which does not keep state on the C stack) is presented. Surprisingly, the necessary changes affect just a small number of C modules, and a major rewrite of the C library can be avoided. The key idea in this approach is a paradigm change for the Python code interpreter that is not easy to understand in the first place. Recursive interpreter calls are turned into tail recursion, which allows deferring evaluation by pushing frames to the frame stack, without the C stack involved. By decoupling the frame stack from the C stack, we now have the ability to keep references to frames and to do non-local jumps. This turns the frame stack into a tree, and every leaf of the tree can now be a jump target.
    2 Continuations. 2.1 What is a Continuation? Many attempts to explain continuations can be found in the literature [5-10], more or less hard to understand. The following is due to Jeremy Hylton, and I like it the best. Imagine a very simple series of statements:

        x = 2; y = x + 1; z = x * 2

    In this case, the continuation of x=2 is y=x+1; z=x*2. You might think of the second and third assignments as a function (forgetting about variable scope for the moment). That function is the continuation. In essence, every single line of code has
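    Not from the paper itself, but the "rest of the computation as a function" idea can be written out directly in Python; a small sketch in continuation-passing style for the same three statements:

        # Continuation-passing style: each step receives "the rest of the program"
        # as a function and calls it with its result.
        def stmt1(k):
            x = 2
            return k(x)          # k is the continuation of "x = 2"

        def continuation_of_stmt1(x):
            y = x + 1            # y = x + 1
            z = x * 2            # z = x * 2
            return x, y, z

        print(stmt1(continuation_of_stmt1))  # prints (2, 3, 4)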
  • Thesis.Pdf (5.857Mb)
    Faculty of Science and Technology, Department of Computer Science. Metadata state and history service for datasets: enable extracting, storing and access to metadata about a dataset over time. Roberth Hansen, INF-3990 Master's Thesis in Computer Science, May 2018. This thesis document was typeset using the UiT Thesis LaTeX Template. © 2018, http://github.com/egraff/uit-thesis. To Maria. Thank you very much.
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." (R. Buckminster Fuller)
    "The most important property of a program is whether it accomplishes the intention of its user." (C.A.R. Hoare)
    Abstract. Distributed Arctic Observatory (DAO) aims to automate, streamline and improve the collection, storage and analysis of images, video and weather measurements taken on the arctic tundra. Automating the process means that there are no human users that need to be involved in the process. This leads to a loss of monitoring capabilities of the process. There are insufficient tools that allow the human user to monitor the process and analyze the collected volume of data. This dissertation presents a prototype of a system to aid researchers in monitoring and analyzing metadata about a dataset. The approach is a system that collects metadata over time, stores it in-memory and visualizes the metadata to a human user. The architecture comprises three abstractions: Dataset, Instrument and Visualization. The Dataset contains metadata. The Instrument extracts the metadata.