Introduction to Python

Part 3: Advanced Topics

Michael Kraus ([email protected])

Max-Planck-Institut für Plasmaphysik, Garching

1 December 2011

Advanced Topics

calling and embedding C and Fortran code:
- Weave: inline C/C++ code, translation of Python code to C++
- Cython: extension of the Python language providing statically typed functions and variables, generating efficient C code for fast computations
- ctypes: call functions in C libraries
- f2py: wrap Fortran code

parallelisation:
- threading
- multiprocessing
- Parallel Python (PP)
- mpi4py

GUI programming with PyQt and PySide

symbolic computing with Sage

Calling and Embedding C and Fortran Code in Python

Python is very fast when writing code, but not necessarily so fast when executing code (especially for numerical applications)
implement time-consuming parts of your program in C/C++/Fortran
example: solving the 2D Laplace equation by an iterative finite difference scheme (500x500 grid, 100 iterations)

Type of Solution    Time Taken (secs)
Python                  1500.0
Numpy                     29.3
Weave (Blitz)              9.5
Weave (Inline)             4.3
f2py                       2.9
Cython                     2.5
Matlab                    29.0
Pure C++                   2.2

[Numbers from the Beginner's Guide to Using Python for Performance Computing: http://www.scipy.org/PerformancePython]

Sample Problem: Laplace Equation

solving the 2D Laplace equation using an iterative finite difference scheme (four point averaging, Gauss-Seidel or Gauss-Jordan)
solve for some unknown function u(x, y) such that ∇²u = 0 with some boundary condition specified

discretise the domain into an (nx × ny) grid of points
the function u can be represented as a two-dimensional array u(nx, ny)
the values of u along the sides of the domain are given (and stay fixed)
the solution can be obtained by iterating in the following manner:

for i in range(1, nx-1):
    for j in range(1, ny-1):
        u[i,j] = ( (u[i-1, j] + u[i+1, j])*dy**2 + \
                   (u[i, j-1] + u[i, j+1])*dx**2 \
                 ) / (2.0*(dx**2 + dy**2))

Sample Problem: Laplace Equation in NumPy

the for loop of the Laplace solver can be readily expressed by a much simpler NumPy expression:

u[1:-1, 1:-1] = ( (u[0:-2, 1:-1] + u[2:, 1:-1])*dy**2 + \
                  (u[1:-1, 0:-2] + u[1:-1, 2:])*dx**2 \
                ) / (2.0*(dx**2 + dy**2))

the advantage of this expression is that it is completely done in C
speedup of a factor of 50x over the pure Python loop (another factor of 5 or so if you link NumPy with Intel MKL or ATLAS)
(slight) drawback: this expression uses temporary arrays
- during one iteration, the computed values at an already computed location will not be used
- in the original for loop, once the value of u[1,1] is computed, the next value for u[1,2] will use the newly computed u[1,1] and not the old one
- since the NumPy expression uses temporary arrays internally, only the old value of u[1,1] will be used
- the algorithm will still converge, but in twice as much time, reducing the benefit by a factor of 2 (see the sketch below)
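for reference, the expression embedded in a complete, runnable solver might look as follows (a minimal sketch; the function name, grid size and boundary values are illustrative, not taken from the benchmark code):

import numpy as np

def laplace_numpy(u, dx, dy, niter=100):
    # Jacobi-style iteration: each sweep effectively uses only values
    # from the previous sweep, because the right-hand side is evaluated
    # into temporaries before u is updated
    for _ in range(niter):
        u[1:-1, 1:-1] = ( (u[0:-2, 1:-1] + u[2:, 1:-1])*dy**2
                        + (u[1:-1, 0:-2] + u[1:-1, 2:])*dx**2
                        ) / (2.0*(dx**2 + dy**2))
    return u

u = np.zeros((500, 500))
u[0] = 1.0                       # fixed boundary values along one side
u = laplace_numpy(u, 0.1, 0.1)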

Weave

Weave is a subpackage of SciPy and has two modes of operation:
- weave.blitz accelerates Python code by translating it to C++ code which it compiles into a Python module
- weave.inline allows you to embed C/C++ code directly into Python code
mainly used to speed up calculations on arrays
fast/efficient: directly operates on NumPy arrays (no temporary copies)
the first time you run a blitz or inline function, it gets compiled into a Python module; on subsequent calls it runs immediately

References:
http://www.scipy.org/Weave
http://www.scipy.org/Cookbook/Weave

Sample Problem: Laplace Equation in weave.blitz

to use weave.blitz, the accelerated code has to be put into a string which is passed to the weave.blitz function:

from scipy import weave

expr = """
u[1:-1, 1:-1] = ( (u[0:-2, 1:-1] + u[2:, 1:-1])*dy**2 +
                  (u[1:-1, 0:-2] + u[1:-1, 2:])*dx**2
                ) / (2.0*(dx**2 + dy**2))
"""
weave.blitz(expr, check_size=0)

the first time the code is called, weave.blitz converts the NumPy expression into C++ code, builds a Python module, and invokes it
for the array expressions, weave.blitz uses Blitz++
speedup of 100-200x over the Python loop
weave.blitz does not use temporary arrays for the computation (the computed values are re-used immediately) and therefore behaves more like the original for loop

Sample Problem: Laplace Equation in weave.inline

in weave.inline the C/C++ code has to be put into a string which is passed to the weave.inline function, together with the variables used:

from scipy.weave import converters, inline

code = """
for (int i=1; i<nx-1; i++) {
    for (int j=1; j<ny-1; j++) {
        u(i,j) = ( (u(i-1,j) + u(i+1,j))*dy*dy +
                   (u(i,j-1) + u(i,j+1))*dx*dx
                 ) / (2.0*(dx*dx + dy*dy));
    }
}
"""
# the blitz type converters let the C++ code index the NumPy array
# with the u(i,j) syntax
inline(code, ['u', 'nx', 'ny', 'dx', 'dy'],
       type_converters=converters.blitz)

Cython

Cython is a programming language based on Python
provides extra syntax allowing for static type declarations (remember: Python is generally dynamically typed)
the source code gets translated into optimised C/C++ code and compiled as Python extension modules
Cython can compile (most) regular Python code, but the optional static type declarations usually achieve major speed improvements
allows for very fast program execution and tight integration with external C libraries
combines the benefits of Python with the speed of C

References:
http://cython.org/
http://docs.cython.org/

Cython: Compilation

Cython code must, unlike Python, be compiled:
- a .pyx source file is compiled by Cython to a .c file, containing the code of a Python extension module
- the .c file is compiled by a C compiler to a .so (shared object library) file which can be imported directly into a Python session
several ways to build Cython code:
- use pyximport, importing Cython .pyx files as if they were .py files (using distutils to compile and build in the background)
- write a distutils setup.py
- run the cython command-line utility manually to produce the .c file from the .pyx file, then manually compile the .c file into a shared object library
- use the Sage notebook which allows Cython code inline

Cython: Compilation

imagine a simple "hello world" script:

hello.pyx
def say_hello_to(name):
    print("Hello %s!" % name)

implicit compilation using pyximport:

>>> import pyximport
>>> pyximport.install()
>>> import hello
>>> hello.say_hello_to("Mike")
Hello Mike!

this automatically runs Cython on every .pyx file that Python is trying to import
use this for simple Cython builds where no extra C libraries and no special building setup is needed

Cython: Compilation

the following could be a corresponding setup.py script:

setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("hello", ["hello.pyx"])]

setup(
    name = 'Hello World App',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)

you can specify additional build settings (include dirs, linker flags, ...):

ext_modules = [Extension("hello", ["hello.pyx"],
                         include_dirs = [numpy.get_include()],
                         extra_compile_args = ['-fopenmp'],
                         extra_link_args = ['-fopenmp'])]

Cython: Compilation

the script can be called with:

> python setup.py build_ext --inplace

after compilation, Cython modules can be imported like every other Python module:

>>> import hello
>>> hello.say_hello_to("Mike")
Hello Mike!

Cython: Static Types

Cython can compile normal Python code without changes (with a few exceptions of some as-yet unsupported language features), but for performance-critical code it is often helpful to add static type declarations
they allow Cython to step out of the dynamic nature of the Python code and generate simpler and faster C code (sometimes faster by orders of magnitude)
however, type declarations can make the source code more verbose and thus less readable
using them without good reason is discouraged, i.e. use them only in performance-critical sections where they really make the code substantially faster

Cython: Static Types

consider the following pure Python code:

def f(x):
    return x**2 - x

def integrate_f(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i*dx)
    return s*dx

simply compiling this in Cython gives a 35% speedup
adding some static types can make a much larger difference

Cython: Static Types

with additional type declarations, the former example might look like:

def f(double x):
    return x**2 - x

def integrate_f(double a, double b, int N):
    cdef int i
    cdef double s, dx
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i*dx)
    return s*dx

typing the iterator variable i with C semantics tells Cython to compile the for-loop to pure C code
typing a, s and dx is important as they are involved in arithmetic within the for-loop
this results in a 4x speedup over the pure Python version

Cython: Static Types

Python function calls can be expensive: one needs to convert to and from Python objects to do the call
in our example, the argument is assumed to be a C double both inside f() and in the call to it, yet a Python float object must be constructed around the argument in order to pass it
Cython provides the cdef keyword for declaring a C-style function:

cdef double f(double x):
    return x**2 - x

speedup: 150x over pure Python
but: now the function is no longer available from Python-space, as Python wouldn't know how to call it
using the cpdef keyword instead, a Python wrapper is created as well (see the sketch below)
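a minimal sketch of the cpdef variant (same function as above):

# cpdef generates both a fast C entry point and a Python wrapper,
# so the function stays importable from Python-space
cpdef double f(double x):
    return x**2 - x

calls from other Cython code go through the C entry point at full speed, while >>> from module import f still works from a Python session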

Cython: Static Types and NumPy Arrays

you can use NumPy from Cython exactly the same as in regular Python, but by doing so you are losing potentially high speedups
Cython has support for fast access to NumPy arrays:

import numpy as np
cimport numpy as np

def update(np.ndarray[double, ndim=2] f):
    cdef unsigned int i, j

    cdef np.ndarray[double, ndim=2] h = \
        np.zeros([xmax, ymax], dtype=DTYPE)

    for i in range(0, f.shape[0]):
        for j in range(0, f.shape[1]):
            ...

typing the contents of ndarray by specifying the datatype and the number of dimensions enables access to the data buffer directly at C speed, otherwise the [] operator still uses full Python operations

Sample Problem: Laplace Equation in Cython

a Cython implementation of the Laplace solver using static types:

laplace_cython.pyx
cimport numpy as np

def laplace_cython(np.ndarray[double, ndim=2] u,
                   double dx, double dy):
    cdef unsigned int i, j
    for i in xrange(1, u.shape[0]-1):
        for j in xrange(1, u.shape[1]-1):
            u[i,j] = ( (u[i+1, j] + u[i-1, j]) * dy**2
                     + (u[i, j+1] + u[i, j-1]) * dx**2
                     ) / (2.0*(dx**2 + dy**2))

looks very similar to the original pure Python implementation except for the additional type declarations
using cimport numpy, you have to add the NumPy includes to setup.py:

ext_modules = [Extension("laplace_cython", ["laplace_cython.pyx"],
                         include_dirs = [numpy.get_include()])]

Cython: External Libraries

Cython provides declarations for many functions from the standard C library, e.g. for the C math library:

cython_cmath.pyx
from libc.math cimport sin

cdef double f(double x):
    return sin(x*x)

calling C's sin() function is substantially faster than Python's math.sin() function as there's no wrapping of arguments, etc.
the math library is not linked by default
in addition to cimporting the declarations, you must configure your build system to link against the shared library m, e.g. in setup.py:

ext_modules = [Extension("cython_cmath", ["cython_cmath.pyx"],
                         libraries=["m"])]

Cython: External Libraries

if you want to access C code for which Cython does not provide a ready-to-use declaration, you must declare it yourself:

cdef extern from "math.h":
    double sin(double)

this declares the sin() function in a way that makes it available to Cython code and instructs Cython to generate C code that includes the math.h header file
the C compiler will see the original declaration in math.h at compile time, but Cython does not parse math.h and thus requires a separate definition
you can declare and call into any C library as long as the module that Cython generates is properly linked against the shared or static library (see the sketch below)
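as an illustration of this pattern, a sketch wrapping a made-up function from a hypothetical C library (mylib.h, my_function and the library name are invented for the example):

# mylib_wrapper.pyx -- hypothetical example
cdef extern from "mylib.h":
    double my_function(double x, int n)

def py_my_function(double x, int n):
    # thin Python-callable wrapper around the C function
    return my_function(x, n)

the corresponding Extension would then list libraries=["mylib"] so the linker can resolve my_function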

Cython: Faster Code

if you're confident you've done everything right, there are a few directives that can further speed up your code:
- boundscheck=False disables array boundary checks
- wraparound=False disables negative indexing of NumPy arrays
there are several ways of setting these compiler directives
globally, with a special header comment at the top of the file:

#cython: boundscheck=False
#cython: wraparound=False

locally, as function decorators or in a with statement:

cimport cython

@cython.boundscheck(False)
def update(np.ndarray[double, ndim=2] f):
    ...
    with cython.boundscheck(True):
        ...

Cython: Parallelisation

Cython supports native parallelism via OpenMP (more backends might be supported in the future)
by default, Python's Global Interpreter Lock (GIL) prevents several threads from using the Python interpreter simultaneously
to use this kind of parallelism, the GIL must be released
parallel loops can be created by using:

prange([start], stop[, step], nogil=False, schedule=None)

OpenMP automatically starts a thread pool and distributes the work according to the schedule used:
- static: approx. equally sized chunks, one for each thread
- dynamic: iterations are distributed as threads request them, chunk size of 1
- guided: iterations are distributed as threads request them, the chunk size is proportional to the number of unassigned iterations divided by the number of threads, decreasing to 1
- auto: the decision regarding scheduling is delegated to the compiler or runtime system

Cython: Parallelisation

thread-locality and reductions are automatically inferred for variables
example with a reduction: the values from the thread-local copies of the variable will be reduced with the operator and assigned to the original variable after the loop:

from cython.parallel import prange

cdef int i
cdef int sum = 0

for i in prange(n, nogil=True):
    sum += i

example with a shared NumPy array:

from cython.parallel import prange

def func(np.ndarray[double] x, double alpha):
    cdef unsigned int i
    for i in prange(x.shape[0], nogil=True):
        x[i] = alpha * x[i]

Sample Problem: Laplace Equation in Parallel Cython

laplace_cython_parallel.pyx
#cython: boundscheck=False
from cython.parallel import prange
cimport numpy as np

def laplace_cython_parallel(np.ndarray[double, ndim=2] u,
                            double dx, double dy):
    cdef unsigned int i, j
    for i in prange(1, u.shape[0]-1, nogil=True):
        for j in xrange(1, u.shape[1]-1):
            u[i,j] = ( (u[i+1, j] + u[i-1, j]) * dy*dy
                     + (u[i, j+1] + u[i, j-1]) * dx*dx
                     ) / (2.0*(dx**2 + dy**2))

you have to add the NumPy includes and OpenMP flags to setup.py:

ext_modules = [Extension("laplace_cython_parallel",
                         ["laplace_cython_parallel.pyx"],
                         include_dirs = [numpy.get_include()],
                         extra_compile_args = ['-fopenmp'],
                         extra_link_args = ['-fopenmp'])]

ctypes

the ctypes module of the Python Standard Library provides C compatible data types and allows calling functions in shared libraries
ctypes can be used to wrap these libraries in pure Python
to use ctypes to access C code you need to know some details about the underlying C library (names, calling arguments, types, etc.), but you do not have to write C extension wrapper code or compile anything with a C compiler (like in Cython)
simple example: libc.rand()

>>> import ctypes
>>> libc = ctypes.CDLL("/usr/lib/libc.so")
>>> libc.rand()
16807

ctypes provides functionality to take care of correct datatype handling,  automatic type casting, passing values by reference, pointers, etc.
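a small illustration of these features, assuming a Linux system where the C math library is available as libm.so.6 (the library name and path differ on other platforms):

import ctypes

libm = ctypes.CDLL("libm.so.6")

# declare argument and return types so ctypes converts correctly;
# without this, a double return value would be misread as an int
libm.atan2.argtypes = [ctypes.c_double, ctypes.c_double]
libm.atan2.restype = ctypes.c_double
print(libm.atan2(1.0, 1.0))        # 0.7853981... (pi/4)

# passing a value by reference: frexp(x, &exp) splits x into
# mantissa and exponent, writing the exponent through a pointer
exp = ctypes.c_int(0)
libm.frexp.argtypes = [ctypes.c_double, ctypes.POINTER(ctypes.c_int)]
libm.frexp.restype = ctypes.c_double
mantissa = libm.frexp(8.0, ctypes.byref(exp))
print(mantissa, exp.value)         # 0.5 4  (8.0 == 0.5 * 2**4)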

Reference: http://docs.python.org/library/ctypes.html

f2py

f2py is a NumPy module that lets you easily call Fortran functions from Python
the f2py command builds a shared library and creates Python wrapper code that makes the Fortran routine look like a native Python module:

> f2py -c laplace.f -m laplace_fortran

in Python you only have to import it like every other Python module:

>>> import laplace_fortran
>>> laplace_fortran.laplace(...)

when passing arrays, f2py automatically takes care of the right layout,  i.e. row-major in Python and C vs. column-major in Fortran

References:
http://cens.ioc.ee/projects/f2py2e/
http://www.f2py.com/
http://www.scipy.org/Cookbook/F2Py

f2py

in the Fortran routine you have to include some additional directives telling f2py the intent of the parameters:

subroutine laplace(u, n, m, dx, dy)
    real*8, dimension(1:n,1:m) :: u
    real*8 :: dx, dy
    integer :: n, m, i, j

!f2py intent(in,out) :: u
!f2py intent(in) :: dx, dy
!f2py intent(hide) :: n, m

    ...
end subroutine

the dimensions of the array are passed implicitly by Python:

>>> dx = dy = 0.1
>>> u = np.zeros( (100,100) )
>>> u[0] = 1.
>>> laplace_fortran.laplace(u, dx, dy)

Sample Problem: Laplace Solver with f2py

laplace.f90
subroutine laplace(u, nx, ny, dx, dy)
    real*8, dimension(1:nx,1:ny) :: u
    real*8 :: dx, dy
    integer :: nx, ny, i, j

!f2py intent(in,out) :: u
!f2py intent(in) :: dx, dy
!f2py intent(hide) :: nx, ny

    do i = 2, nx-1
        do j = 2, ny-1
            u(i,j) = ( (u(i-1,j) + u(i+1,j))*dy*dy &
                     + (u(i,j-1) + u(i,j+1))*dx*dx &
                     ) / (2.0*(dx*dx + dy*dy))
        enddo
    enddo

end subroutine

Performance Python Benchmark Reloaded

2D Laplace Solver: 500x500 grid, 100 iterations

Type of Solution      Time GNU (ms)   Time Intel (ms)
Numpy                     895             907
Weave (Blitz)             286             n/a
Weave (Inline)            291             n/a
Cython                    289             196
Cython (fast)             287             194
Cython (2 threads)        196             100
Cython (4 threads)        147              52
ctypes                    195             143
f2py                      409             353
Pure C                    228             136
Pure Fortran              200             136

(Benchmarked on an Intel Xeon E5440 @ 2.83GHz)

Parallel Programming

Python's Standard Library includes a multithreading and a multiprocessing package

multithreading is seriously limited by the Global Interpreter Lock (GIL), which allows only one thread to interact with the interpreter at a time
this restricts Python programs to run on a single processor, regardless of how many CPU cores you have and how many threads you create (see the sketch below)
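a small experiment that makes the GIL limitation visible (a sketch; absolute timings are machine-dependent):

import threading, time

def count(n):
    # CPU-bound work: holds the GIL the whole time
    while n > 0:
        n -= 1

N = 10**7

t0 = time.time()
count(N); count(N)
print("sequential:", time.time() - t0)

t0 = time.time()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# despite two threads, this is typically no faster (often slower)
# than the sequential version, because of the GIL
print("threaded:  ", time.time() - t0)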

multiprocessing allows spawning subprocesses which run on different cores but are completely independent entities
communication is only possible by message passing (see the sketch below), which makes parallelisation an effort that is probably not justified by the gain (we're talking about Python! your code won't run on 1000s of cores)
however, you can:
- compile NumPy and SciPy with threaded libraries like ATLAS or MKL
- use Cython's prange for very simple parallelisation of loops via OpenMP
- use Parallel Python (PP)
- use mpi4py (again, message passing, but quite common in C and Fortran)
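a minimal sketch of the multiprocessing style (the worker function and pool size are illustrative):

import multiprocessing as mp

def work(x):
    # runs in a separate process, so the GIL is not a bottleneck
    return x*x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)     # one subprocess per core
    # arguments and results are pickled and passed between processes
    results = pool.map(work, range(100))
    pool.close()
    pool.join()
    print(sum(results))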

Parallel Python (PP)

the PP module provides a mechanism for parallel execution of Python code on systems with multiple cores and clusters connected via network
simple-to-implement, job-based parallelisation technique
internally PP uses processes and Inter Process Communication (IPC) to organise parallel computations
all the details and complexity are hidden from you and your application, it just submits jobs and retrieves their results
very simple way to write parallel Python applications
cross-platform portability (Linux, MacOSX, Windows), interoperability, dynamic load-balancing
software written with PP works in parallel also on many computers connected via local network or internet, even if they run different operating systems (it's pure Python!)

Reference: http://www.parallelpython.com/

Parallel Python (PP)

import the pp module:

import pp

start the pp execution server with the number of workers set to the number of processors in the system:

job_server = pp.Server()

submit all the tasks for parallel execution:

f1 = job_server.submit(func1, args1, depfuncs1, modules1)
f2 = job_server.submit(func1, args2, depfuncs1, modules1)
f3 = job_server.submit(func2, args3, depfuncs2, modules2)
...

retrieve the results as needed:

r1 = f1()
r2 = f2()
r3 = f3()
...

Parallel Python (PP)

import math
import pp

def sum_primes(nstart, nfinish):
    sum = 0
    for n in xrange(nstart, nfinish+1):
        if isprime(n):  # checks if n is a prime
            sum += n
    return sum

nprimes = 100001
job_server = pp.Server()
ncpus = job_server.get_ncpus()

np_cpu, np_add = divmod(nprimes, ncpus)

ranges = [ (i*np_cpu+1, (i+1)*np_cpu) for i in range(0, ncpus) ]
ranges[ncpus-1] = (ranges[ncpus-1][0], ranges[ncpus-1][1]+np_add)

sum = 0
jobs = [ job_server.submit(sum_primes, input, (isprime,), ("math",))
         for input in ranges ]
for job in jobs:
    sum += job()

Parallel Python (PP)

the task object, returned by a submit call, has an attribute finished which indicates the status of the task and can be used to check if it has been completed:

task = job_server.submit(f1, (a,b,c))
...
if task.finished:
    print("The task is done!")
else:
    print("Still working on it...")

you can perform an action at the time of completion of each individual task by setting the callback argument of the submit method:

sum = 0

def add_sum(n):
    global sum   # the callback modifies the module-level variable
    sum += n

...
task = job_server.submit(sum_primes, (nstart, nend), callback=add_sum)

mpi4py

MPI for Python provides full-featured bindings of the Message Passing Interface standard for the Python programming language
allows any Python program to exploit multiple processors
point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communications of any picklable Python object, e.g. ndarray (see the sketch below)
(pickling: conversion of a Python object hierarchy into a byte stream)
provides an object-oriented interface which closely follows the MPI-2 C++ bindings and works with most of the MPI implementations
any user of the standard C/C++ MPI bindings should be able to use this module without the need of learning a new interface
mpi4py also allows wrapping of C/C++ and Fortran code that uses MPI with Cython and f2py
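a minimal point-to-point sketch (assuming the script is launched with something like mpirun -n 2 python script.py; the message content is illustrative):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = np.arange(10, dtype=np.float64)
    # send a picklable Python object (here an ndarray) to rank 1
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print("rank 1 received:", data)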

Reference: http://mpi4py.scipy.org/

GUI Programming

lots of options:

- Tkinter: Python's default GUI toolkit, included in the Standard Library
- wxPython: Python wrapper for wxWidgets
  http://www.wxpython.org/
- PyGTK: Python wrapper for GTK
  http://www.pygtk.org/
- PyQt and PySide: Python wrappers for Qt (not just a GUI library!)
  http://www.riverbankcomputing.co.uk/software/
  http://www.pyside.org/
- Traits and TraitsUI: development model that comes with automatically created user interfaces
  http://code.enthought.com/projects/traits/
  http://code.enthought.com/projects/traits_ui/

PyQt and PySide

PyQt and PySide are Python bindings for the Qt application framework
run on all platforms supported by Qt (Linux, MacOSX, Windows)
the interface of both modules is almost identical (PySide is slightly cleaner, see http://developer.qt.nokia.com/wiki/Differences_Between_PySide_and_PyQt)
main difference: license (PyQt: GPL and commercial, PySide: LGPL), and PySide is supported by Nokia (who develops Qt)
generate Python code from Qt Designer
add new GUI controls written in Python to Qt Designer

Documentation:
http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/index.html
http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/classes.html
http://zetcode.com/tutorials/pyqt4/
http://doc.qt.nokia.com/
http://qt.nokia.com/learning

PySide

simple example: only shows a small window

pyside_simple_example.py
import sys
from PySide import QtGui

def main():
    app = QtGui.QApplication(sys.argv)

    w = QtGui.QWidget()
    w.resize(250, 150)
    w.move(300, 300)
    w.setWindowTitle('Simple Example')
    w.show()

    sys.exit(app.exec_())

if __name__ == '__main__':
    main()

we can do a lot with this window: resize it, maximise it, minimise it

PySide

necessary imports: basic GUI widgets are located in the QtGui module

from PySide import QtGui

every PySide application must create an application object; the sys.argv parameter is a list of arguments from the command line:

app = QtGui.QApplication(sys.argv)

QtGui.QWidget is the base class of all user interface objects in PySide; the default constructor has no parent and creates a window:

w = QtGui.QWidget()

resize the window, move it around on the screen, and set a title:

w.resize(250, 150)
w.move(300, 300)
w.setWindowTitle('Simple Example')

PySide

make the window visible:

w.show()

finally, enter the main loop of the application:

sys.exit(app.exec_())

the event handling starts from this point: the main loop receives events from the window system and dispatches them to the application widgets
the main loop ends if we call the exit() method or the main widget is destroyed (e.g. by clicking the little x on top of the window)
the sys.exit() method ensures a clean exit

PySide

you could also set an application icon:

w.setWindowIcon(QtGui.QIcon('my_app.png'))

the QtGui.QIcon is initialised by providing it with a (path and) filename
you can move and resize at once:

w.setGeometry(300, 300, 250, 150)

the first two parameters are the x and y positions of the window
the latter two parameters are the width and height of the window

PySide

this was a procedural example, but in PySide you're really writing objects:

pyside_object_example.py
class Example(QtGui.QWidget):

    def __init__(self):
        super(Example, self).__init__()
        self.initUI()

    def initUI(self):
        self.setGeometry(300, 300, 250, 150)
        self.setWindowTitle('PySide Object Example')
        self.setWindowIcon(QtGui.QIcon('my_app.png'))
        self.show()

def main():
    app = QtGui.QApplication(sys.argv)
    ex = Example()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()

PySide

create a new class called Example that inherits from QtGui.QWidget:

class Example(QtGui.QWidget):

we must call two constructors: for the Example class and for the inherited class:

def __init__(self):
    super(Example, self).__init__()

the super() method returns the parent object of the Example class
the constructor method is always called __init__() in Python
the creation of the GUI is delegated to the initUI() method:

self.initUI()

PySide

our Example class inherits lots of methods from the QtGui.QWidget class:

self.setGeometry(300, 300, 250, 150)
self.setWindowTitle('PySide Object Example')
self.setWindowIcon(QtGui.QIcon('my_app.png'))

PySide

add a button and some tooltips:

QtGui.QToolTip.setFont(QtGui.QFont('SansSerif', 10))
self.setToolTip('This is a QWidget widget')

btn = QtGui.QPushButton('Button', self)
btn.setToolTip('This is a QPushButton widget')
btn.resize(btn.sizeHint())
btn.move(50, 50)

PySide

QtGui provides static methods to set default properties like fonts:

QtGui.QToolTip.setFont(QtGui.QFont('SansSerif', 10))

set a tooltip for our Example class:

self.setToolTip('This is a QWidget widget')

create a button which is placed within our Example class’ main widget:

btn = QtGui.QPushButton('Button', self)

set a tooltip for the button, resize it and move it somewhere:

btn.setToolTip('This is a QPushButton widget')
btn.resize(btn.sizeHint())
btn.move(50, 50)

GUI elements can give a size hint corresponding to their content (e.g. button text, picture size)

PySide

bring the button to life by connecting it to a slot:

btn = QtGui.QPushButton('Quit', self)
btn.clicked.connect(QtCore.QCoreApplication.instance().quit)
btn.resize(btn.sizeHint())
btn.move(50, 50)

now we can close our window programmatically (not only by clicking x)
you have to import QtCore for this to work:

from PySide import QtCore

the event processing system in PySide uses the signal & slot mechanism
if we click on the button, the signal clicked is emitted
it can be connected to any Qt slot or any Python callable
QtCore.QCoreApplication is created with the QtGui.QApplication
it contains the main event loop and processes and dispatches all events
its instance() method returns the current instance
the quit() method terminates the application

PySide

add a QtGui.QLineEdit where the user can enter some text that is displayed in a popup window when he clicks the OK button:

def initUI(self):
    ...
    self.inputle = QtGui.QLineEdit(self)
    self.inputle.resize(120, 20)
    self.inputle.move(10, 50)

    okbtn = QtGui.QPushButton('OK', self)
    okbtn.clicked.connect(self.showMessage)
    okbtn.resize(okbtn.sizeHint())
    okbtn.move(150, 50)
    ...

we also have to define a function that serves as slot:

def showMessage(self):
    QtGui.QMessageBox.information(self, "Information",
                                  self.inputle.text())

PySide

the QtGui.QLineEdit has to be an attribute of the class so that the slot can access it:

self.inputle = QtGui.QLineEdit(self)

the OK button is connected to the method showMessage:

okbtn.clicked.connect(self.showMessage)

showMessage reads the content of the QtGui.QLineEdit via its text() method and creates a QtGui.QMessageBox:

QtGui.QMessageBox.information(self, "Information",
                              self.inputle.text())

the title of the QtGui.QMessageBox is set to "Information"
the message it displays is the content of the QtGui.QLineEdit

PySide

this just gives you a taste of how PySide (and PyQt) works
there are quite a few other important basic topics:
- menus, toolbars, statusbar
- layout: absolute positioning vs. layout classes
- signals & slots: emit signals yourself (see the sketch below)
- event models: catch events and ask to ignore or accept them ("Do you really want to quit?")
- dialogs: input, file, print, ...
- widgets: combo box, check box, toggle button, slider, progress bar, ...
- custom widgets
- threading
- building UIs with the Designer
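as a taste of emitting signals yourself, a minimal sketch of a custom signal in PySide (the class and signal names are invented for the example):

from PySide import QtCore

class Communicator(QtCore.QObject):
    # a custom signal carrying a string payload (hypothetical name)
    speak = QtCore.Signal(str)

def on_speak(text):
    print(text)

comm = Communicator()
comm.speak.connect(on_speak)               # connect to any Python callable
comm.speak.emit("Hello from a custom signal!")   # emit it yourself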

the aforementioned tutorial is a good place to start:
http://zetcode.com/tutorials/pyqt4/

Symbolic Computing with Sage

Sage is an open-source mathematics software system based on Python
combines nearly 100 packages under a unified interface
includes a huge range of mathematics, including basic algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, exact linear algebra and much more
the user interface is a notebook in a web browser or the command line
it's a viable, free alternative to Maple, Mathematica, and MATLAB (a sample session is sketched below)

References:
http://www.sagemath.org/

Craig Finch, Sage Beginner's Guide (2011)
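to give a flavour, a short symbolic session at the Sage command line (the specific expressions are just illustrative):

sage: x = var('x')                # declare a symbolic variable
sage: f = x^2 * sin(x)
sage: diff(f, x)                  # symbolic differentiation
x^2*cos(x) + 2*x*sin(x)
sage: integrate(f, x)             # symbolic integration
-(x^2 - 2)*cos(x) + 2*x*sin(x)
sage: solve(x^2 - 2 == 0, x)      # exact roots, no floating point
[x == -sqrt(2), x == sqrt(2)]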