Wednesday, January 1, 2025

Welcome!

This blog accompanies A Student’s Guide to Python for Physical Modeling by Jesse M. Kinder and Philip Nelson.

A Student’s Guide provides an introduction to the Python computer language and a few libraries (NumPy, SciPy, and PyPlot) that will enable students to get started in physical modeling. Some of the topics covered include the following:

  • basic Python programming
  • importing and exporting data
  • numerical arrays
  • 2D and 3D plotting
  • Monte Carlo simulations
  • numerical integration
  • solving ordinary differential equations
  • symbolic mathematics
  • animation
  • image processing

You can have a look at the Table of Contents.

On this Web site, you will find data sets, code samples, errata, additional resources, and extended discussions of the topics introduced in the book.

Enjoy!

Plots created with NumPy and PyPlot.

Thursday, April 28, 2016

Make Your Own GUI with Python

Suppose you have written a Python script that carries out a simulation based on a physical model and creates a nice plot of the results. Now you wish to explore the model and run the calculation on many different sets of input parameters. What is the best way to proceed?

You could write a script that runs the same calculation on a list of inputs that you select ahead of time:

inputs = [(x0, y0, z0), (x1, y1, z1), (x2, y2, z2)]
for x, y, z in inputs:
    pass  # Insert simulation code here.

You could place your script inside a function and call the function repeatedly from the IPython command line with different inputs:

def run_simulation(x, y, z):
    pass  # Insert simulation code here.

Both methods work, but neither is ideal. If you use a script with an input list, you have to select the inputs ahead of time. If you find an interesting region of parameter space, you have to modify the script and run it again. If you embed your simulation within a function, you have to do a lot of typing at the command line.

A simple graphical user interface (GUI) that runs the script would be a convenient way to explore the model. You could enter your inputs in the appropriate boxes — or even set them with slider bars — and run the simulation; change one, and run it again. Such an interface would also make it much easier for someone else to run your code.

This post will describe how to create a simple graphical user interface (GUI) for your own functions and scripts using the Tkinter module. The goal is not to provide an introduction to GUI development or to create a beautiful user interface. Instead, this post will focus on building a minimal GUI wrapper for a working Python program. You can learn how to add the bells and whistles later.

Anatomy of a GUI

Take a look at the screenshot of the GUI below. This post will describe how to construct it from scratch.

A simple GUI.

A GUI is an interactive window that controls a program or series of programs. It allows the user to provide input to and receive feedback from those programs. Before getting into the details of creating this particular GUI, let’s examine a few of the key ideas behind GUI programming.

A GUI is built from widgets.

A widget is, in essence, anything you can put inside a GUI. These are the fundamental units from which a GUI is built:

  • Text Boxes: These are inactive widgets that simply display a text message.

  • Entry Boxes: These are interactive text boxes. The user can enter text or numbers. The data in an entry box can be extracted and used within programs.

  • Buttons: These are interactive regions of the GUI. When the user clicks on a button, it usually causes something to happen. The button can display explanatory text.

Widgets are packed into frames.

A frame is an abstract region of the GUI occupied by a widget or collection of widgets. The GUI program will arrange frames within the GUI window and try to adjust the size of the frames to fill the available space.

We will pack widgets into frames by creating a widget, then specifying which frame to put it in. We can provide additional information about where it is to be placed, like “top”, “bottom”, “left”, or “right”.

We can also pack frames within frames. For instance, in the GUI above, the third line of the window contains a text box, an entry box, a button, and another text box that displays the result of the calculation. All of these widgets were packed into a single frame, then this frame was packed within the application window frame.

Events trigger actions.

When we interact with a GUI, we expect something to happen. An event is something that happens within the GUI: you press a key or click on a button. GUI programming links events like these to specific actions. The great thing about building your own GUIs is that you get to choose both the events and the actions.

  • A program runs when you click on a <Run> button.

  • A variable is updated when you click the mouse in a certain region of the window.

  • A calculation is carried out after you press <Enter> in a text box.

  • The window closes when you press the <Escape> key.

Events are bound to widgets.

When creating a GUI, we bind events to specific widgets. For instance, if you create a <Press Here> button, you get to assign the event “click mouse on <Press Here> button” to anything you like:

  • do nothing
  • display a message
  • make noise
  • evaluate a numerical calculation
  • run a Monte Carlo simulation
  • create a plot
  • exit the program

The same event can have different effects with different widgets: pressing <Escape> in an entry box may clear the entry, but pressing <Escape> outside of the entry box may close the window.

A Word on GUI Design

It is easy to get distracted by all of the options available in GUI programming, by all of the features you could add, by all of the fine-tuning you can do to the appearance and interface. To avoid getting sidetracked, you should plan before you write any code:

  1. Decide which events are necessary. If your goal is to enter the value of two parameters, then run a simulation, stick to that. Anything else is unnecessary.

  2. Decide which widgets you need to carry out your task. For the example above, we might use two entry boxes and a <Run> button. Perhaps some explanatory text boxes would also be helpful. The 3D control knobs for adjusting parameter values can wait …

  3. Make a quick sketch of the layout. This will help you in packing your minimal set of widgets into the window.

Now you are ready to build a GUI!

The Tkinter Module

To actually construct a GUI, we need to choose a GUI programming library. Tkinter is the standard GUI library for Python, and it is included with almost every Python distribution. It has the benefits of being widely available and platform-independent. (I.e., you can create a GUI for Windows, Mac OS X, or Linux with the same Python code.) There are many other options, but this post will focus exclusively on Tkinter.

Tkinter provides an object-oriented framework for GUI programming. There are objects like Frame, Label, Entry, and Button that implement the widgets we need. We will build a GUI by creating a collection of these objects, then using their methods to adjust their properties and pack them together into a single window.

To gain access to the Tkinter module, we must import it. In my scripts I use the following lines:

try:
    # This will work in Python 2.7
    import Tkinter
except ImportError:
    # This will work in Python 3.5
    import tkinter as Tkinter

For some reason, Python 3 uses the lower-case tkinter while Python 2.7 uses the upper-case Tkinter. The lines above will work with either environment and allow us to access the module as Tkinter.

There is one more caveat for those who use the Anaconda distribution of Python. If you are going to use PyPlot and Tkinter in the same program, you need to instruct PyPlot to use a Tk-based back end for displaying plots:

import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

If you do not set the back end before importing PyPlot, you may see a bunch of error messages instead of your GUI, even if you never call a PyPlot command.

Some Simple GUIs

The first few GUIs we build will be simple and not very useful, but they will illustrate the basic properties of any GUI.

Your First GUI

Together with the import lines above, the following two lines will create the simplest GUI possible: an empty window that does absolutely nothing.

# Create main window.
root = Tkinter.Tk()

# Activate the window.
root.mainloop()

The first command creates an object for the window that will contain the entire GUI. According to the docstrings of Tkinter.Tk, this object is a

Toplevel widget of Tk which represents mostly the main window
of an application. It has an associated Tcl interpreter.

This command should probably come near the top of your program. Later, we will pack all of our widgets and frames into this master widget.

The second command should come near the end of your program. It launches the window. Any code you type after this command will not have any effect until the window closes.

Try the import command and the two commands above to make sure you can create a Tkinter window on your system. You should see a blank application window with the title “tk”. The window you create may not appear in the foreground. You can close it by clicking on the close button, as you would any other application.

Adding a Widget

Now let’s add the simplest possible widget to our GUI — a text box — and pack it into the main window.

# Create main window.
root = Tkinter.Tk()

# Create a text box and pack it in.
greeting = Tkinter.Label(text="Hello, world!")
greeting.pack()

# Activate the window.
root.mainloop()

Note how the widget was created. I created a variable greeting and assigned it to a Tkinter.Label object. When I created the object, I used the keyword argument text="Hello, world!" to set the message. Then, I packed this newly created widget into the main window.

If you run this script, you will see the effect of packing a single widget. The resulting window is tiny — just large enough to contain the text message. The widgets are packed into the smallest amount of space that will contain them.

Binding an Event to a Keystroke

Next, we will bind an event to a keystroke within the main window. With the following script, you can now close the window by pressing the <Escape> key as long as the window is active.

# Create main window.
root = Tkinter.Tk()

# Create a text box and pack it in.
greeting = Tkinter.Label(text="Hello, world!")
greeting.pack()

# Define a function to close the window.
def quit(event=None):
    root.destroy()

# Cause pressing <Esc> to close the window.
root.bind('<Escape>', quit)

# Activate the window.
root.mainloop()

Notice the procedure used here. First, I defined a function that carries out some action. In this case, it calls the destroy() method of the root window, closing the window. Next, I used the bind() method of the root window to associate this function with a particular event: “user presses the <Escape> key.” Note that you pass only the function name to the bind() method — no arguments and no parentheses. Adding parentheses would call quit() immediately and bind its return value rather than the function itself, so the following would have produced an error:

root.bind('<Escape>', quit() )

Functions bound to events with bind() receive an event object as an argument, while functions attached through a widget's command option are called with no arguments. To accommodate both cases, I gave my function an optional argument with a default value.
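
Here is a small sketch of the idea, assuming root has been created as in the examples above (the function name on_run and its message are placeholders of my own):

def on_run(event=None):
    # Called with no arguments by the Button, and with an event object by bind().
    print("Running simulation...")

run_button = Tkinter.Button(root, text="Run", command=on_run)
run_button.pack()
root.bind('<Return>', on_run)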

If you want to bind an event to a mouse click instead of a keystroke, use '<Button-1>' as the “key”. Add the following lines to the script above, just before the root.mainloop() command:

def ring(event=None):
    root.bell()

root.bind('<Button-1>', ring)

Now you will hear a beep any time you click the mouse inside the main window.

Binding an Event to a Widget

You can also bind events to specific widgets within the main window. Let’s add a button to close the window.

# Create main window.
root = Tkinter.Tk()

# Create a text box and pack it in.
greeting = Tkinter.Label(text="Hello, world!")
greeting.pack()

# Define a function to close the window.
def quit(event=None):
    root.destroy()

# Cause pressing <Esc> to close the window.
root.bind('<Escape>', quit)

# Create a button that will close the window.
button = Tkinter.Button(text="Exit", command=quit)
button.pack(side='bottom', fill='both')

# Activate the window.
root.mainloop()

In this example, I have created a button with some text and bound it to the quit() command that closes the window. Notice, also, that I have provided instructions for where the button is to be placed: side='bottom'. I also specified that the button should be expanded to fill all of the available space around it in the frame.

Entry Boxes, Variables, and Frames

Now, let’s look at the last few elements we will need to create a useful GUI.

An entry box allows you to pass information to programs called by the GUI. In order for the GUI to keep track of the variables it contains, we need to assign the data in an entry box to a Tkinter variable. Tkinter recognizes several types: IntVar for integers, DoubleVar for floats, StringVar for strings. I prefer to use strings to store exactly what the user types, then convert this to other types as needed.
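
As a minimal sketch (assuming the main window root already exists), a StringVar shuttles data between an entry box and your code like this:

x = Tkinter.StringVar()          # holds whatever the user types, as a string
x.set('1.0')                     # default value shown in the entry box
x_entry = Tkinter.Entry(root, width=8, textvariable=x)
x_entry.pack()

# Inside a callback, convert the string to the type you need:
# value = float(x.get())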

If you provide an entry box, it is often useful to provide a text box that indicates what the user is entering. This can create a problem in packing: You want the text box and entry box to be side by side, but the program that arranges all of our widgets in the main window might not arrange things in an aesthetically pleasing manner. The solution is to pack the text box and entry box into a separate frame, then pack this frame into the application window.

The script logarithm.py below illustrates all of these ideas. Liberal comments explain all of the steps in the construction of this GUI, which computes the logarithm of a number entered by the user. You can evaluate the logarithm by pressing the “is” button or by pressing <Enter> in the entry box.

A Useful GUI

Now it is time to assemble everything into a useful application. The script ‘interference.py’ below creates a GUI wrapper that allows the user to set the amplitude and frequency of two waves. The function it calls adds the two waves together and displays the resulting interference pattern.

It uses one new construct: a grid for arranging the input text boxes and entry boxes. Instead of calling widget.pack() to place a widget, one calls widget.grid(row=I, column=J) to place the widget in cell (I, J) of a grid. The upper left corner of the grid is cell (0, 0).

This script is not a sterling example of GUI programming. The functions that do the numerical calculation and create the graph should be defined in a separate module so that they can be run with or without the GUI. The GUI wrapper should import the function that creates the plot from this module. However, ‘interference.py’ has the benefit of being self-contained: you can copy and paste it into your own editor and run it without further modification.

Once you understand the basics of constructing a GUI wrapper for a program, you can simplify the process by writing functions to help create the GUI! For instance, you could write a function that takes a list of variable names and automatically generates a grid of text and entry boxes.
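
A hypothetical sketch of such a helper (the name make_entries is mine, not from the book):

def make_entries(frame, names, defaults):
    """
    Create a label and an entry box for each variable name.
    Return a dictionary mapping each name to its StringVar.
    """
    variables = {}
    for row, (name, default) in enumerate(zip(names, defaults)):
        var = Tkinter.StringVar()
        var.set(default)
        Tkinter.Label(frame, text=name).grid(row=row, column=0)
        Tkinter.Entry(frame, width=8, textvariable=var).grid(row=row, column=1)
        variables[name] = var
    return variables

# Usage sketch:
# inputs = make_entries(frame, ['Amplitude:', 'Frequency:'], ['1.0', '1.0'])
# amplitude = float(inputs['Amplitude:'].get())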

And there are always embellishments. You could replace entry boxes with sliders for some variables. (Look up Tkinter.Scale.) You can add check boxes for Boolean variables. (Look up Tkinter.Checkbutton.) You can add menus and save dialogs and … well, you get the picture. Just don’t spend so much time building a fancy interface that you have none left to actually run the simulation!
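
Here is a minimal sketch of those two widgets, assuming root has been created as in the earlier examples:

# A slider for a numerical parameter.
amplitude = Tkinter.DoubleVar()
slider = Tkinter.Scale(root, variable=amplitude, label='Amplitude',
                       from_=0.0, to=5.0, resolution=0.1, orient='horizontal')
slider.pack()

# A check box for a Boolean option.
show_grid = Tkinter.IntVar()
check = Tkinter.Checkbutton(root, text='Show grid', variable=show_grid)
check.pack()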

Summary

This post covered a lot of ground quickly, but I hope it has provided enough information for you to create a GUI window to run your own scripts whenever this is a useful thing to do. (Many programs do not benefit from a GUI at all, and it is not useful to create a GUI for a program that does not yet run properly …)

Design a GUI before you start building it. Decide what it should do first. Then identify the widgets that will accomplish your goal and sketch the layout of the application before you write any GUI code. The Tkinter module available with most Python distributions provides a suite of tools for building GUIs in Python.

This post has only scratched the surface of the Tkinter module and has completely ignored other GUI programming libraries. A Web search for “Tkinter” or “GUI programming with Python” will reveal a wealth of resources for more advanced GUI programming.

Code Samples

The logarithm.py Module

# -----------------------------------------------------------------------------
# logarithm.py
# ----------------------------------------------------------------------------- 
"""
Create a GUI application to compute logarithms using the Tkinter module.
"""
try:
    # This will work in Python 2.7
    import Tkinter
except ImportError:
    # This will work in Python 3.5
    import tkinter as Tkinter

# -----------------------------------------------------------------------------
# Create main window.
# ----------------------------------------------------------------------------- 
root = Tkinter.Tk()

# Create two text boxes and pack them in.
greeting = Tkinter.Label(text="Hello, world!")
greeting.pack(side='top')

advertisement = Tkinter.Label(text="I am a logarithm-computing GUI.")
advertisement.pack(side='top')

# Define a function to close the window.
def quit(event=None):
    root.destroy()

# Cause pressing <Esc> to close the window.
root.bind('<Escape>', quit)

# Create a button that will close the window.
button = Tkinter.Button(text="Exit", command=quit)
button.pack(side='bottom', fill='both')

# -----------------------------------------------------------------------------
# Create a frame within the main window.
# ----------------------------------------------------------------------------- 
# The frame will contain the widgets needed to do a calculation.
# Each widget in "frame" is created with "frame" as its first argument.
frame = Tkinter.Frame(root)
frame.pack(side='top')

# Create a text box that explains the calculation.
invitation = Tkinter.Label(frame, text="The natural logarithm of")
invitation.pack(side='left')

# Define an input variable and add an entry box so the user can change its value.
x = Tkinter.StringVar()
x.set('2.71828')
x_entry = Tkinter.Entry(frame, width=8, textvariable=x)
x_entry.pack(side='left')

# Define an output variable and a function to compute its value.
y = Tkinter.StringVar()

def compute_y(event=None):
    from math import log
    # Get x and y from outside the function.
    global x, y

    # Get the string value of the x StringVar and convert it to a float.
    x_value = float(x.get())

    # Compute the floating point value of y.
    y_value = log(x_value)

    # Convert this to a formatted string, and store it in the y StringVar.
    y.set('%.6f' % y_value)

# Bind an event to the x_entry box: pressing <Enter> will calculate the
# logarithm of whatever number the user has typed.
x_entry.bind('<Return>', compute_y)

# Create a button to perform the calculation and pack it into the frame.
compute = Tkinter.Button(frame, text=' is ', command=compute_y)
compute.pack(side='left')

# Create a text box that displays the value of the y StringVar.
y_label = Tkinter.Label(frame, textvariable=y, width=8)
y_label.pack(side='left')

# -----------------------------------------------------------------------------
# Activate the window.
# ----------------------------------------------------------------------------- 
root.mainloop()

The interference.py Module

# -----------------------------------------------------------------------------
# interference.py
# ----------------------------------------------------------------------------- 
"""
Author:     Jesse M. Kinder
Created:    2016 Apr 15
Modified:   2016 Apr 15

Description
-----------
Build a GUI wrapper to explore the interference pattern of two waves.
"""
try:
    # This will work in Python 2.7
    import Tkinter
except ImportError:
    # This will work in Python 3.5
    import tkinter as Tkinter

# -----------------------------------------------------------------------------
# To use matplotlib together with Tkinter, select the TkAgg backend before
# importing pyplot; otherwise, a long string of inexplicable error messages may ensue.
# ----------------------------------------------------------------------------- 
import matplotlib
matplotlib.use('TkAgg')
import numpy as np
import matplotlib.pyplot as plt

# Define a bold font:
BOLD = ('Courier', '24', 'bold')

# Create main application window.
root = Tkinter.Tk()

# Create a text box explaining the application.
greeting = Tkinter.Label(text="Create an Interference Pattern", font=BOLD)
greeting.pack(side='top')

# Create a frame for variable names and entry boxes for their values.
frame = Tkinter.Frame(root)
frame.pack(side='top')

# Variables for the calculation, and default values.
amplitudeA = Tkinter.StringVar()
amplitudeA.set('1.0')
frequencyA = Tkinter.StringVar()
frequencyA.set('1.0')

amplitudeB = Tkinter.StringVar()
amplitudeB.set('1.0')
frequencyB = Tkinter.StringVar()
frequencyB.set('1.0')

deltaPhi = Tkinter.StringVar()
deltaPhi.set('0.0')

# Create text boxes and entry boxes for the variables.
# Use grid geometry manager instead of packing the entries in.
row_counter = 0
aa_text = Tkinter.Label(frame, text='Amplitude of 1st wave:') 
aa_text.grid(row=row_counter, column=0)

aa_entry = Tkinter.Entry(frame, width=8, textvariable=amplitudeA)
aa_entry.grid(row=row_counter, column=1)

row_counter += 1
fa_text = Tkinter.Label(frame, text='Frequency of 1st wave:') 
fa_text.grid(row=row_counter, column=0)

fa_entry = Tkinter.Entry(frame, width=8, textvariable=frequencyA)
fa_entry.grid(row=row_counter, column=1)

row_counter += 1
ab_text = Tkinter.Label(frame, text='Amplitude of 2nd wave:') 
ab_text.grid(row=row_counter, column=0)

ab_entry = Tkinter.Entry(frame, width=8, textvariable=amplitudeB)
ab_entry.grid(row=row_counter, column=1)

row_counter += 1
fb_text = Tkinter.Label(frame, text='Frequency of 2nd wave:') 
fb_text.grid(row=row_counter, column=0)

fb_entry = Tkinter.Entry(frame, width=8, textvariable=frequencyB)
fb_entry.grid(row=row_counter, column=1)

row_counter += 1
dp_text = Tkinter.Label(frame, text='Phase Difference:') 
dp_text.grid(row=row_counter, column=0)

dp_entry = Tkinter.Entry(frame, width=8, textvariable=deltaPhi)
dp_entry.grid(row=row_counter, column=1)

# Define a function to create the desired plot.
def make_plot(event=None):
    # Get these variables from outside the function, and update them.
    global amplitudeA, frequencyA, amplitudeB, frequencyB, deltaPhi

    # Convert StringVar data to numerical data.
    aa = float(amplitudeA.get())
    fa = float(frequencyA.get())
    ab = float(amplitudeB.get())
    fb = float(frequencyB.get())
    phi = float(deltaPhi.get())

    # Define the range of the plot.
    t_min = -10
    t_max = 10
    dt = 0.01
    t = np.arange(t_min, t_max+dt, dt)

    # Create the two waves and find the combined intensity.
    waveA = aa * np.cos(fa * t)
    waveB = ab * np.cos(fb * t + phi)
    intensity = (waveA + waveB)**2

    # Create the plot.
    plt.figure()
    plt.plot(t, intensity, lw=3)
    plt.title('Interference Pattern')
    plt.xlabel('Time')
    plt.ylabel('Intensity')
    plt.show()


# Add a button to create the plot.
MakePlot = Tkinter.Button(root, command=make_plot, text="Create Plot")
MakePlot.pack(side='bottom', fill='both')

# Allow pressing <Return> to create plot.
root.bind('<Return>', make_plot)

# Allow pressing <Esc> to close the window.
# (bind passes an event object, so wrap root.destroy in a small function.)
root.bind('<Escape>', lambda event: root.destroy())

# Activate the window.
root.mainloop()

Monday, January 4, 2016

Jupyter Notebooks

Happy New Year!

Let’s start off 2016 by learning something new: Jupyter Notebooks, formerly known as IPython Notebooks.

A Jupyter notebook is an interactive document that incorporates text, math, graphics, and code. It can be viewed in a Web browser. Unlike most documents, however, you can modify and execute the code inside the document. In this sense, a Jupyter notebook is similar to a session in Mathematica or Maple. The difference is that the interpreter running behind the scenes is not Mathematica or Maple. Jupyter notebooks were designed to run Julia, Python, and R, but they support over 40 languages at present.

The best way to learn about Jupyter notebooks is to take a look at one. This notebook is a simple example from a greenhorn, but you can find many more on the Web.

Introduction.ipynb

Click on the link to view the notebook. You can download the notebook using the Save icon at the upper right corner of the notebook Web page. You can then open the notebook, run it, and see the commands used to create it.

In my very limited experience, I find that notebooks are excellent for sharing code but not for developing code. An IDE like Spyder or working from the command line is more efficient for writing and debugging code, running scripts repeatedly, and generating complex graphics. However, once the scripts and modules are working properly, you can call them within a notebook.

Jupyter notebooks are great for exploring new ideas and solving problems. A Jupyter notebook allows you to integrate formatted text, mathematical formulas, Python code, and graphics into a single document. You can write about your ideas, do mathematical analyses, write and run code, view graphics, and reflect on the results without switching applications.

Even if you do not like to do all of your work on your laptop or in the computer lab — I still prefer pencil and paper for many tasks — Jupyter notebooks are a great option for writing up your work and sharing it with others. A Jupyter notebook is like a word processor that can also run Python code. You can keep detailed notes with working code for your own personal files. You can write up homework assignments. You can write reports to share your work with other scientists. You can publish your work on the Web. (The procedure for sharing a notebook over the Internet can be found here.) A notebook can be exported in several formats, including HTML. If you have LaTeX installed on your system, you can also convert the notebook to a PDF file.

If you would like to see more examples of notebooks, explore this gallery. Enjoy!

Thursday, October 1, 2015

Paths in Python

How do you get your Python interpreter to find modules that are not located in your current working directory? The answer is … you tell it where to look.

When you type something like

from my_module import my_function

Python searches a collection of directories (i.e., folders) on your computer. If the directory containing <my_module.py> is not in this collection, you will receive an ImportError. This can be frustrating if you know the file exists, and even more so if you know where it exists.

In this post, we will take a brief look at how to add paths to the collection of directories searched by Python.

Paths

A path is essentially a set of directions to a file:

/Users/username/modules/my_module.py        [Mac OS X, Linux]
C:\modules\my_module.py                     [Windows]

It tells your operating system how to navigate from a fixed starting point — the “root directory” / in Unix-based systems, or C:\ in Windows — through a collection of folders to the desired file.

In this case, there is a folder called <modules/> that contains the desired file, <my_module.py>. Let’s suppose that you are writing a script called <my_script.py> in your working directory:

/Users/username/scratch/my_script.py        [Mac OS X, Linux]
C:\scratch\my_script.py                     [Windows]

You want Python to import <my_module.py> when you run the script, but it cannot find the file.

Below are 4 options for adding the path for your module to Python’s collection of paths. None are extremely complicated, but the last two require working with the command line and editing startup files on your operating system. If you are unfamiliar or uncomfortable with this, you might prefer the first two methods.

You will need to replace directory names like /Users/username/ or C:\scratch\ with the correct directories on your own system for these examples to work.

Using The sys Module

There is a module called sys that lets you modify the collection of paths from within Python. According to its documentation,

This module provides access to some objects used or maintained by the
interpreter and to functions that interact strongly with the interpreter.

One of these objects is a list called sys.path. Again referring to the documentation,

path -- module search path

All we need to do is add the path to <my_module.py> to sys.path. On Unix-like systems, the commands are

import sys
sys.path.append('/Users/username/modules')

On Windows, it is easier to use a raw string to specify the path:

import sys
sys.path.append(r'c:\modules')

This will only affect the path in the current Python session. If you quit and restart, you will need to add the path again.
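
To avoid adding the same directory more than once, you can check sys.path first (a small sketch; replace the directory with your own):

import sys

module_dir = '/Users/username/modules'
if module_dir not in sys.path:
    sys.path.append(module_dir)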

Using Spyder

If you are using the Spyder IDE, you can permanently add a directory to its collection of paths. Choose the “PYTHONPATH manager” from the menu (python > PYTHONPATH manager) as shown below:

Managing paths within Spyder.

This will open a window that shows a list of directories added to Spyder’s collection of paths. If you have not modified this list already, it will probably contain your working directory and nothing else. At the bottom of this window, there is an icon for “Add Path”. Click on this. Now you can find the directory you want to add, as if you were opening a file.

In this case, we navigate to

/Users/username/modules

and select “Choose”.

Once you have selected a directory, it will become a permanent addition to Python’s paths within Spyder. You will need to quit Spyder and restart or “Restart Kernel” for the changes to take effect.

This is a convenient way to access modules that are not part of the Anaconda distribution from within Spyder.

Using the Command Line

If you are working from a command line, you can add your module directory to Python’s collection of paths by setting the PYTHONPATH variable before you start a Python session or run a script:

$ export PYTHONPATH=${PYTHONPATH}:/Users/username/modules
$ python my_script.py

No matter what directory you are working in, Python now knows where to find <my_module.py>.

If you want to permanently add your module directory to PYTHONPATH, you can add the export command above to your <.profile> or <.bashrc> file.

Startup Files

Another option is to create a Python startup file — a Python script that will be run at the beginning of every interactive Python session (but not when you run scripts from the command line). First, create a file such as <python_startup.py> in your home directory. Then, enter Python commands exactly as you would type them at the command prompt:

# python_startup.py
# Startup script for interactive Python sessions.

import sys
sys.path.append('/Users/username/modules')

Next, edit your <.profile> or <.bashrc> file and add the line

export PYTHONSTARTUP=/Users/username/python_startup.py

And that’s it. Now, every time you start a new Python session, you can access your own modules.

IPython does not use the PYTHONSTARTUP variable. If you want to add a startup script for IPython, place a file like <python_startup.py> in IPython’s directory of startup scripts:

/Users/username/.ipython/profile_default/startup/

Now your modules will be available in every IPython session, including Spyder.

Summary

There are several ways to show Python where to find modules. For a single session, appending a path to the sys.path variable is an easy solution. For a permanent change, you can use Spyder’s PYTHONPATH manager or edit startup files from the command line. These methods will allow you to access modules without copying or moving files into your working directory, and you can easily access modules you write yourself or modules you download from the Web.

Monday, September 28, 2015

Speeding Up Python — Part 2: Optimization

The goal of this post and its predecessor is to provide some tools and tips for improving the performance of Python programs. In the previous post, we examined profiling tools — sophisticated stopwatches for timing programs as they execute. In this post, we will use these tools to demonstrate some general principles that make Python programs run faster.

Remember: If your program already runs fast enough, you do not need to worry about profiling and optimization. Faster is not always better, especially if you end up with code that is difficult to read, modify, or maintain.

Overview

We can summarize our principles for optimizing performance as follows:

  1. Debug first. Never attempt to optimize a program that does not work.
  2. Focus on bottlenecks. Find out what takes the most time, and work on that.
  3. Look for algorithmic improvements. A different theoretical approach might speed up your program by orders of magnitude.
  4. Use library functions. The routines in NumPy, SciPy, and PyPlot have already been optimized.
  5. Eliminate Python overhead. Iterating over an index takes more time than you think.

If fast code is important to you, you can start writing new code with these guidelines in mind and apply them to existing code to improve performance.

Debug first.

Your primary objective in programming is to produce a working program. Unless the program already does what it is supposed to do, there is no point in trying to speed it up. Profiling and optimization are not part of debugging. They come afterward.

Focus on bottlenecks.

The greatest gains in performance come from improving the most time-consuming portions of your program. The profiling tools described in Part 1 of this two-part post can help you identify bottlenecks — portions of a program that limit its overall performance. They can also help you identify faster alternatives. Once you have identified the bottlenecks in your program, apply the other principles of optimization to mitigate or eliminate them.

Look for algorithmic improvements.

The greatest gains in speed often come from using a different algorithm to solve your problem.

When comparing algorithms, one analyzes how the computation time scales with some measure of the size of the problem. Some number \(N\) usually characterizes the size of a problem — e.g., the number of elements in a vector or the number of items in a list. The time required for a program to run is expressed as a function of \(N\). If an algorithm scales as \(N^3\), it means that when \(N\) is large enough, doubling \(N\) will cause the algorithm to take roughly 8 times as long to run. If you can find an algorithm with better scaling, you can often improve the performance of your program by orders of magnitude.
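
If you want to see scaling in action, you can time an \(N^3\) operation, such as multiplying two \(N \times N\) matrices, for a few values of \(N\). This is a rough sketch; the actual timings depend on your machine and on NumPy's linear algebra libraries:

import timeit
import numpy as np

for N in (200, 400, 800):
    A = np.random.random((N, N))
    B = np.random.random((N, N))
    t = timeit.timeit(lambda: np.dot(A, B), number=5) / 5
    # Each doubling of N should take roughly 8 times longer.
    print(N, t)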

As an example, consider the classic problem of sorting a jumbled list of numbers. You might consider using the following approach:

  1. Create an empty list to store the sorted numbers.
  2. Find the smallest item in the jumbled list.
  3. Remove the smallest item from the jumbled list and place it at the end of the sorted list.
  4. Repeat steps 2 and 3 until there are no more items in the jumbled list.

This method is called selection sort. It works, and it is fast enough for sorting small lists. You can prove that sorting a jumbled list of \(N\) elements using a selection sort will require on the order of \(N^2\) operations. That means that sorting a list with 1 million entries will take 1 million times longer than sorting a list of 1000 entries. That may take too long, no matter how much you optimize!

Delving into a computer science textbook, you might discover merge sort, a different algorithm that requires on the order of \(N \log N\) operations to sort a jumbled list of \(N\) elements. That means that sorting a list of 1 million entries will take roughly 2000 times longer than sorting a list of 1000 entries — not 1 million. For big lists, this algorithm is orders of magnitude faster, no matter what programming language you are using.
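
You can see the difference without writing merge sort yourself, because Python's built-in sorted() already uses an \(N \log N\) algorithm (Timsort). Here is a sketch comparing it with the \(N^2\) method described above; the function slow_sort is my own illustration:

import random

def slow_sort(jumbled):
    """Sort a list by repeatedly removing its smallest item (N**2 scaling)."""
    jumbled = list(jumbled)          # work on a copy
    result = []
    while jumbled:
        smallest = min(jumbled)
        jumbled.remove(smallest)
        result.append(smallest)
    return result

data = [random.random() for n in range(2000)]
# At an IPython prompt, compare:
# %timeit slow_sort(data)
# %timeit sorted(data)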

As another example, you might need to determine a vector \(\mathbf{x}\) in the linear algebra problem \(A \cdot \mathbf{x} = \mathbf{b}\). If \(A\) is an \(N \times N\) matrix, then inverting \(A\) and multiplying the inverse with \(\mathbf{b}\) takes on the order of \(N^3\) operations. You quickly reach the limits of what your computer can handle around \(N = 10,000\). However, if your matrix has some special structure, there may be an algorithm that takes advantage of that structure and solves the problem much faster. For example, if \(A\) is sparse (most of its elements are zero), you can reduce the scaling to \(N^2\) — or even \(N\) — instead of \(N^3\). That is a huge speedup if \(N = 10,000\)! These kinds of algorithmic improvements make an “impossible” calculation “trivial”.
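
SciPy provides routines that exploit sparsity. Here is a sketch (assuming SciPy is installed) that builds a tridiagonal matrix in sparse form and solves the system with scipy.sparse.linalg.spsolve; the matrix and sizes are my own example:

import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as sla

N = 2000
# A tridiagonal matrix (a discrete Laplacian) stored in sparse form.
diagonals = [np.ones(N - 1), -2 * np.ones(N), np.ones(N - 1)]
A = sparse.diags(diagonals, offsets=[-1, 0, 1], format='csc')
b = np.random.random(N)

x = sla.spsolve(A, b)    # exploits the sparse structure
# Compare the run time with a dense solve: np.linalg.solve(A.toarray(), b)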

Unfortunately, there is no algorithm for discovering better algorithms. It is an active branch of computer science. You need to understand your problem, delve into texts and journal articles, talk to people who do research in the field, and think really hard. The payoff is worth the effort.

Use library functions.

Once you have a working program and you have identified the bottlenecks, you are ready to start optimizing your code. If you are already using the best available algorithms, the simplest way to improve the performance of Python code in most scientific applications is to replace homemade Python functions with library functions.

It’s not that you are a bad coder! It’s just that someone else took the time to rewrite a time-consuming operation in C or FORTRAN, compile and optimize it, then provide you with a simple Python function to access it. You are simply taking advantage of work that someone else has already done.

Let’s use the profiling tool %timeit to look at some examples of the speedup possible with library functions.

Building an array of random numbers

Suppose you need an array of random numbers for a simulation. Here is a perfectly acceptable function for generating the array:

import numpy as np

def Random(N):
    """
    Return an array of N random numbers in the range [0,1).
    """
    result = np.zeros(N)
    for i in range(N):
        result[i] = np.random.random()
    return result

This code works and it is easy to understand, but suppose my program does not run fast enough. I need to increase the size of the simulation by a factor of 10, but it already takes a while. After debugging and profiling, I see that the line of my program that calls Random(N) takes 99 percent of the execution time. That is a bottleneck worth optimizing!

I can start by profiling this function:

$ %timeit Random(1000)
1000 loops, best of 3: 465 us per loop

When I look at the documentation for np.random.random, I discover it is capable of doing more than generate a single random number. Maybe I should just use it to generate the entire array …

$ %timeit np.random.random(1000)
10000 loops, best of 3: 28.9 us per loop

I can generate an equivalent array in just 6 percent of the time!

In this hypothetical example, my entire program will run almost 20 times faster. I can increase the size of my calculation by a factor of 10 and reduce the overall calculation time simply by replacing Random(N) by np.random.random(N).

Array operations

Let’s look at an even more dramatic example. In the previous post, I introduced a library to add, multiply, and take the square root of arrays. (The library, calculator.py, is included at the bottom of this post.) Suppose, once again, I have identified the bottleneck in a working program, and it involves the functions in the calculator.py module.

NumPy has equivalent operations with the same names. (When you write x+y for two arrays, it is shorthand for np.add(x,y).) Let’s see how much we can speed the operations up by switching to the NumPy equivalents. First, import the modules and create two random two-dimensional arrays to act on:

import calculator as calc
import numpy as np
x = np.random.random((100,100))
y = np.random.random((100,100))

Now, time the operations in calculator.py and NumPy:

$ %timeit calc.add(x,y)
100 loops, best of 3: 9.76 ms per loop

$ %timeit np.add(x,y)
100000 loops, best of 3: 18.8 us per loop

Here we see an even more significant difference. The addition function written in pure Python takes 500 times longer to add the two arrays. Use %timeit to compare calc.multiply with np.multiply, and calc.sqrt with np.sqrt, and you will see similar results.

When to write your own functions

The implications of these examples are clear: NumPy array operations are much, much faster than the equivalent Python code. The same is true of special functions in the SciPy and PyPlot packages and many other Python libraries. To speed up your code, use functions from existing libraries when possible. This can save time in writing code, optimizing code, and running code.

So should you ever write your own functions?

I was once told, “Never write your own linear algebra routines. Somebody already wrote a faster one in the 1970s.” That may be generally true, but it is bad advice nonetheless. If you never write your own routine to invert a matrix, it is difficult to fully understand how these routines work and when they can fail, and you will certainly never discover a better algorithm.

If speed of execution is not important or if your goal is to understand how an algorithm works, you should write your own functions. If you need to speed up a working Python program, look to library functions.

Eliminate Python overhead.

Why are library functions so much faster than their Python equivalents? The answer is a point we discussed in Chapter 2 of A Student’s Guide to Python for Physical Modeling: In Python, everything is an object. When you type “x = 1”, Python does not just store the value 1 in a memory cell. It creates an object endowed with many attributes and methods, one of which is the value 1. Type dir(1) to see all of the attributes and methods of an integer.

What’s more, Python has no way of knowing what type of objects are involved in a simple statement like z = x+y. First, it has to determine what kind of object x is and what kind of object y is (type-checking). Then it has to figure out how to interpret “+” for these two objects. If the operation makes sense, Python then has to create a new object to store the result of x+y. Finally, it has to assign this new object to the name z. This gives Python a lot of flexibility: x and y can be integers, floats, arrays, lists, strings, or just about anything else. This flexibility makes it easy to write programs, but it also adds to the total computation time.

To speed up programs, eliminate this overhead. In other words, make Python do as little as possible.

Using library functions from NumPy, SciPy, and PyPlot eliminates overhead, and this is the main reason they run so much faster. In the example above, np.add(x,y) is not doing anything fundamentally different than calc.add(x,y); it simply does addition and iteration in the background, without Python objects. Recall from the previous post that calc.add(x,y) spent almost 30 percent of its time iterating over the index j in the inner for loop.

Other ways to eliminate overhead are

  1. Use in-place operations. Operations like +=, -=, *=, and /= operate on an existing object instead of creating a new one.
  2. Use built-in methods. These methods are often optimized.
  3. Use list comprehensions and generators instead of for loops. Initializing a list and accessing its elements take time.
  4. Vectorize your code. (Section 2.5.1 of A Student’s Guide to Python for Physical Modeling)

Use %timeit to compare the performance of these functions. They use the principles above to eliminate some of the Python overhead in square_list0.

def square_list0(N):
    """
    Return a list of squares from 0 to N-1.
    """
    squares = []
    for n in range(N):
        squares = squares + [n**2]
    return squares


def square_list1(N):
    """
    Return a list of squares from 0 to N-1.
    """
    squares = []
    for n in range(N):
        # In-place operations: Replace "x = x + ..." with "x += ..."
        squares += [n**2]
    return squares
    

def square_list2(N):
    """
    Return a list of squares from 0 to N-1.
    """
    squares = []
    for n in range(N):
        # Built-in methods: Replace "x = x + ..." with "x.append(...)"
        squares.append(n**2)
    return squares
    

def square_list3(N):
    """
    Return a list of squares from 0 to N-1.
    """
    # Use list comprehension instead of for loop.
    return [n**2 for n in range(N)]
    

def square_array(N):
    """
    Return an array of squares from 0 to N-1.
    """
    # Vectorize the entire operation.
    from numpy import arange
    return arange(N)**2

In my tests, square_list3(1000) ran about 18 times faster than square_list0(1000), and square_array(1000) was about 350 times faster. The last function virtually eliminates Python overhead by using NumPy arrays in vectorized code.

More Options

If performance is still not satisfactory after attempting the optimizations described here, you can try compiling your Python code. Compiling is beyond the scope of this post. You can find out more about Numba (which is included in the Anaconda distribution) or Cython by following these links. Numba allows you to compile pure Python code. Cython allows you to write fast C extensions for Python without learning C.
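
As a small taste of what Numba can do (a sketch, assuming Numba is installed), decorating a pure-Python function with @jit asks Numba to compile it to machine code the first time it is called:

import numpy as np
from numba import jit

@jit(nopython=True)
def add_arrays(x, y):
    """Element-by-element addition of two 2D arrays, compiled by Numba."""
    m, n = x.shape
    z = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            z[i, j] = x[i, j] + y[i, j]
    return z

# After the first call (which triggers compilation), this loop runs at speeds
# comparable to np.add(x, y), even though it is written as plain Python loops.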

For users of the Anaconda distribution of Python, there is an optional add-on called Accelerate. This add-on will replace the standard NumPy, SciPy, and other scientific libraries with equivalent libraries that use Intel’s MKL routines for linear algebra. On many machines, this will improve performance without any effort on your part beyond installing the package. Accelerate also includes NumbaPro, a proprietary version of the Numba package. Accelerate is free to academic users.

Summary

To summarize, there are a few simple ways to speed up a Python program. Once you have a working program and you have identified its bottlenecks, you can look for library functions to replace the slowest functions in your program, you can rewrite your code to eliminate Python’s overhead, and you can search for faster algorithms to solve your problem. As you develop your programming skills, you will start to incorporate these principles automatically. Happily, this means less time profiling and optimizing!

Code Samples

The calculator.py Module

# -----------------------------------------------------------------------------
# calculator.py
# ----------------------------------------------------------------------------- 
"""
This module uses NumPy arrays for storage, but executes array math using Python
loops.
"""

import numpy as np

def add(x,y):
    """
    Add two arrays using a Python loop.
    x and y must be two-dimensional arrays of the same shape.
    """
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = x[i,j] + y[i,j]
    return z


def multiply(x,y):
    """
    Multiply two arrays using a Python loop.
    x and y must be two-dimensional arrays of the same shape.
    """
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = x[i,j] * y[i,j]
    return z


def sqrt(x):
    """
    Take the square root of the elements of an array using a Python loop.
    """
    from math import sqrt
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = sqrt(x[i,j])
    return z


def hypotenuse(x,y):
    """
    Return sqrt(x**2 + y**2) for two arrays, x and y.
    x and y must be two-dimensional arrays of the same shape.
    """
    xx = multiply(x,x)
    yy = multiply(y,y)
    zz = add(xx, yy)
    return sqrt(zz)

Thursday, September 24, 2015

Speeding Up Python — Part 1: Profiling

When people argue about programming languages, a common critique of Python is, “It’s slow.” This is occasionally followed by, “A program written in C will run a thousand times faster.” Such generalizations carry little weight. Python is often fast enough, and a well-written Python program can run significantly faster than a poorly written C program. Plus, Moore’s Law implies that computers today are over a thousand times faster than those of 15 years ago: You can do with Python today what was only possible with a highly optimized, compiled program in 2000.

It is also important to consider development time. Suppose a C program takes a week to write and debug and 1 minute to run, and an equivalent Python program takes a day to write and debug and 1000 minutes (about a day) to run. The “slow” Python program will finish running five days earlier than the “fast” C program! If you already know Python and don’t know any C, then the time difference will be even greater.

In short, you need not avoid Python or learn some other programming language just because someone tells you Python is slow. Of course, sometimes there is a need for speed. If you want to eke out the best performance from the available hardware, you may need to learn a compiled language. However, you might want to see how much you can improve a Python program first.

The goal of this post and its sequel is to provide some tools and tips for improving the performance of Python programs. In this post, we will look at some profiling tools — sophisticated stopwatches for timing programs as they execute. In the next post, we will use these tools to demonstrate some general principles that will help you speed up your Python programs.

Before proceeding, I offer this advice: If your program already runs fast enough, do not bother with profiling and optimization. There are an endless number of interesting problems waiting to be solved, and the question of how to improve the performance of a particular program by 20 percent is probably not one of them.

I have included a sample module <calculator.py> and a sample script <test.py> at the end of this post, which I will use to illustrate some of the profiling tools. You can copy and paste these into your own working directory to replicate the examples, or you can try the profiling tools on some of your own modules and scripts.

How Long Does It Really Take?

To improve the performance of a program, it is useful to gather quantitative data on how long it takes to run. This is called profiling. If the program takes a long time to run, you may be able to use your wristwatch to time it. For more accurate measurements, IPython provides some useful “magic” commands. These are commands preceded by a percent sign that must be entered at the IPython command prompt. (See Section 1.2.1 of A Student’s Guide to Python for Physical Modeling.)

All the commands that follow should be entered at the IPython command prompt.

The %time Command

%time is a basic stopwatch. It will tell you how much time elapses on your computer’s internal clock while a command is executed.

Try the following commands at an IPython command prompt and investigate the information provided by %time:

%time 2**100
%time pow(2,100)

You should see something like this:

CPU times: user 6 us, sys: 2 us, total: 8 us
Wall time: 14.1 us

The output includes several times: user, sys, total, and Wall time. Wall time is the time you would have measured with your stopwatch, were your reflexes fast enough. It is not a very good metric of how efficient a program is because it includes the time your job spent waiting in line to run as well as interruptions by other processes that your operating system thought were more important. user measures how much time your CPU spent running your code. sys is the amount of time devoted to such processes as memory access, reading and writing data, gathering input, and displaying output. total is the sum of user and sys. It is the best measure of performance, and it may be significantly less than Wall time.

Run the commands above several times. You may notice minor differences in the elapsed times, as well as a few significant variations. To get an accurate measure of performance, it is best to average over many repetitions of the same command. This is what the %timeit magic command does.

The %timeit Command

Try the same operations as before, but use the %timeit command instead of %time:

%timeit 2**100
%timeit pow(2,100)

The output should be something like this:

$ %timeit 2**100
10000000 loops, best of 3: 45.4 ns per loop

This means that Python inserted the command 2**100 inside a loop and carried out the operation ten million times. It evaluated 3 such loops. It recorded the total time for each loop, and then divided by 10 million. The best result from the 3 loops was an average execution time of 45.4 ns. (This is less than the result of %time, which includes the time required to transform the string "2**100" into instructions your CPU understands.)

You can already see the potential benefits of profiling. While 2**100 takes a mere 45 ns, pow(2,100) takes 1,230 ns — 27 times as long. If I am repeatedly computing large powers of integers, I can save time by using x**y instead of pow(x,y).

You may notice that %timeit does not execute different commands the same number of times. It uses an adaptive method to get as many iterations of a command as possible without taking too long. Its default is to do three loops of a large number of iterations, but you can modify this. For example, to force %timeit to use 10 loops of 1 million iterations, you would type

%timeit -r 10 -n 1000000 pow(2,100)

This method of specifying options will look strange if you have not worked at a UNIX command line. The hyphens and letters like “-r” are called option flags. -r tells Python to set the number of repetitions to whatever number comes next. Likewise, the -n tells Python to set the number of iterations in each loop to whatever number comes next. The command to time comes last. It may look jumbled and confusing, but don’t worry — Python knows what to do!

You can find out more about %time and %timeit at the IPython command prompt:

%time?
%timeit?

The %run -t Command

You can time the evaluation of an entire script by supplying an option flag to the %run magic command:

%run -t test.py

This will run the script and report the time it took to execute. You can repeat a script several times by supplying an additional option flag:

%run -t -N 10 test.py

This will run the script 10 times and report the total and average time of execution. Note that you must use a capital N here. Lower case n means something different to the %run command.

Which Part Takes the Longest?

You can accomplish a lot with the profiling tools mentioned so far. With %timeit, you can profile individual functions and commands. With %run -t, you can assess the effects of changes to a script. However, neither of these tools provides information on how time is divided among functions within a script, or subroutines within a function. You could try stepping through the program and using %timeit on each line to see which ones take the longest, but there are better ways. Spyder and the profile module allow you to see how much time Python spends executing individual functions, and the line_profiler.py module can measure the time spent on each line of a program!

This allows you to identify the elements of a program that take the most time — the bottlenecks of your program. You can then focus your efforts on optimizing those portions of your code that will have the greatest impact on overall performance.

To run the examples below, first execute the following commands (or run the test.py script) to import NumPy and the calculator module (included at the end of this post) and create two random arrays A and B.

import numpy as np
import calculator as calc

M = 10**3
N = 10**3

A = np.random.random((M,N))
B = np.random.random((M,N))

The profile Module

The profile module can be accessed from the command line or within scripts. Its output is not always easy to understand, but it is useful for identifying which functions are consuming the most time.

To use the module, import it and use its run method.

import profile
profile.run('calc.hypotenuse(A,B)', sort='tottime')

This command will execute the command calc.hypotenuse(A,B) and display profiling statistics on the screen. I have used the optional keyword argument sort to instruct the method to display the most time-consuming functions at the top of the output. The default is to sort by function or method name. The output is plain text:

In [10]: profile.run('calc.hypotenuse(A,B)', sort='tottime')
          1000014 function calls in 3.943 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    2.010    1.005    2.018    1.009 calculator.py:16(multiply)
        1    0.999    0.999    1.003    1.003 calculator.py:3(add)
        1    0.743    0.743    0.917    0.917 calculator.py:29(sqrt)
  1000000    0.170    0.000    0.170    0.000 {built-in method sqrt}
        4    0.016    0.004    0.016    0.004 {built-in method zeros}
        1    0.004    0.004    3.943    3.943 <string>:1(<module>)
        1    0.000    0.000    3.943    3.943 {built-in method exec}
        1    0.000    0.000    3.938    3.938 calculator.py:42(hypotenuse)
        1    0.000    0.000    0.000    0.000 <frozen importlib._bootstrap>:2264(_handle_fromlist)
        1    0.000    0.000    0.000    0.000 {built-in method hasattr}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

The results show us which functions and modules were used during the execution of a script, how much time was spent on each, how many times each was called, and the file and line number where each function is defined.

This tool is useful for identifying which functions consume the most time. It does not provide a detailed analysis of individual functions, so you may still need to use %lprun, described below.

The cProfile module contains the same methods and produces the same output as the profile module, but it takes less time to run. (Many of the Python routines in profile are rewritten as C routines in cProfile.) If you are doing a lot of profiling, you can speed up the process by replacing import profile with

import cProfile as profile
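
You can also save profiling results to a file and inspect them later with the standard pstats module. Here is a minimal sketch, assuming the calculator module and the arrays A and B from above; the file name hypot.prof is an arbitrary choice:

import cProfile
import pstats

# Run the command and write the raw statistics to a file instead of printing them.
cProfile.run('calc.hypotenuse(A,B)', 'hypot.prof')

# Load the saved statistics, sort by internal time, and show the five slowest entries.
stats = pstats.Stats('hypot.prof')
stats.sort_stats('tottime').print_stats(5)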

Profiling with Spyder

All of the tools mentioned so far are command-line tools that can be used with IPython, whether or not you are using Spyder. Spyder offers an additional option. You can use the menu command Run > Profile or the shortcut key <F10> to run a script using Spyder’s profiler. It will produce output like this:

Screen shot illustrating output from Spyder’s profiler.
Screen shot illustrating output from Spyder’s profiler.

Here we see information similar to the output of profile.run, but in a format that is easier to interpret.

The %lprun Command

If you have successfully identified the function that consumes the most time, you may wish to dissect it further to find out why. The line_profiler package, written by Robert Kern, does exactly this. It allows you to see how much time Python spends on each individual line of code.

The module is part of the Anaconda distribution. If you used the Miniconda installer, you may need to manually install this package from the command line:

$ conda install line_profiler

The package can do quite a lot, but we are only going to look at one of its tools — an IPython magic command for profiling. To make this command available in IPython, we need to load it using another magic command:

%load_ext line_profiler

If the line_profiler package is installed correctly, this gives you access to a magic command called %lprun.

There are two modes available with %lprun. The first is “function mode”. This allows you to designate a specific function to be analyzed when you execute a command or series of commands. The second is “module mode”, which will analyze all of the functions in a module you designate.

To profile the add function in the calculator module with %lprun type the following:

%lprun -f calc.add calc.add(A,B)

The -f option indicates function mode. The next item is the name of the function to analyze. (Be sure you provide the name, and not a function call. Do not include parentheses or arguments.) The last item is the Python statement to execute. I have instructed Python to gather information on the add function in the calculator module (imported as calc) while it evaluates the statement calc.add(A,B). Here is the output:

Timer unit: 1e-06 s

Total time: 2.94468 s
File: calculator.py
Function: add at line 3

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     3                                           def add(x,y):
     4                                              """
     5                                              Add two arrays using a Python loop.
     6                                              x, y: 2D arrays with the same shape.
     7                                              """
     8         1            7      7.0      0.0     m,n = x.shape
     9         1         5704   5704.0      0.2     z = np.zeros((m,n))
    10      1001         1044      1.0      0.0     for i in range(m):
    11   1001000       872878      0.9     29.6         for j in range(n):
    12   1000000      2065045      2.1     70.1             z[i,j] = x[i,j] + y[i,j]
    13         1            1      1.0      0.0     return z

The total time is not that useful. It includes some of the overhead of analyzing the code line-by-line. If you are interested in the total execution time, use %timeit. The most useful information here is in the “% Time” column. This is the percentage of the total execution time spent on each line. Here, we see that most of the time (70.1 percent) is spent adding the elements of the arrays. However, it may surprise you to see that almost 30 percent of the time is spent on Line 11, evaluating the statement “for j in range(n)”.

Just seeing how time is spent during the function call can suggest ways to speed up the code. For example, if so much time is spent iterating over the values of the index, maybe a Python loop is a poor method for adding arrays …
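
You can test that suspicion directly with %timeit. The comparison below is a quick sketch using the arrays A and B defined earlier; NumPy's vectorized addition is typically far faster than the element-by-element loop, though the exact numbers depend on your machine:

%timeit calc.add(A, B)    # Loop-based addition from the calculator module.
%timeit A + B             # NumPy's vectorized addition.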

It is also possible to use %lprun to analyze all of the functions in a module at once. This will print out a lot of information, but sometimes this is what you want.

%lprun -m calculator calc.hypotenuse(A,B)

The -m option indicates module mode, the next item is the name of the module to analyze, and the last item is the Python statement to execute. I have instructed Python to gather information on all of the functions in the calculator module while it evaluates the statement calc.hypotenuse(A,B).

Tips for Profiling

If Carl von Clausewitz were a computer programmer rather than a military strategist, he might have said, “The enemy of a good program is the dream of a perfect program.” The most important rules of profiling are

  • Avoid unnecessary profiling.

  • Avoid premature profiling.

Profiling is time-consuming. Unless you need a working program to run faster — or you simply want to learn about profiling — skip it. When you do use profiling tools, analyze only a working program. Remember, the goal is to identify and eliminate bottlenecks. You cannot diagnose the most time-consuming step of a program until the entire program works. Profiling and “optimizing” code too early slows down development and often produces unintended consequences.

Profiling tools can provide a glut of information that is difficult to digest. If you are trying to speed up a program (for example, the test.py script at the end of this post), you might try the following procedure:

  1. Use %lprun in function mode, the profile module, or Spyder’s profiler to analyze the primary function (e.g., hypotenuse(A,B)) and identify bottlenecks.

  2. Use %lprun in function mode to dissect the secondary functions that consume the most time (e.g., multiply(x,y)).

  3. Use %timeit to find faster alternatives to the most time-consuming operations.

  4. Repeat steps 1–3 until your program is fast enough.

Analyzing the primary function is important. You might be able to speed up a secondary function by a factor of 1,000; however, if that function only takes 1 percent of the total run time of your program, you haven’t gained much. On the other hand, if another function takes 90 percent of the run time and you speed it up by a factor of 2, you have made a significant improvement.
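
To make the procedure concrete, here is one way steps 1–3 might look for the test.py example, using the commands introduced above. (Which secondary function you dissect in step 2 depends on what the first profile reveals; for this module it is multiply.)

# Step 1: Profile the primary function to locate the bottlenecks.
%lprun -f calc.hypotenuse calc.hypotenuse(A,B)

# Step 2: Dissect the most time-consuming secondary function.
%lprun -f calc.multiply calc.hypotenuse(A,B)

# Step 3: Compare the slow operation with a faster alternative.
%timeit calc.multiply(A,B)
%timeit A * B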

There are many more profiling tools available in Python. Delve into the timeit, profile, and line_profiler modules if you need to go beyond the techniques described here.

Summary

The first step in improving the performance of your code is quantifying its performance. IPython provides several tools that allow you to time statements, functions, code fragments, and scripts. These tools will help you identify the portions of your program that consume the most time — the bottlenecks. By focusing on these, you can get the most out of your optimization efforts. Once your program is fast enough, you can move on to something more interesting!






Code Samples

The calculator.py Module

This module uses NumPy arrays for storage, but executes array math using Python loops.

# -----------------------------------------------------------------------------
# calculator.py
# ----------------------------------------------------------------------------- 
import numpy as np

def add(x,y):
    """
    Add two arrays using a Python loop.
    x and y must be two-dimensional arrays of the same shape.
    """
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = x[i,j] + y[i,j]
    return z


def multiply(x,y):
    """
    Multiply two arrays using a Python loop.
    x and y must be two-dimensional arrays of the same shape.
    """
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = x[i,j] * y[i,j]
    return z


def sqrt(x):
    """
    Take the square root of each element of an array using a Python loop.
    """
    from math import sqrt
    m,n = x.shape
    z = np.zeros((m,n))
    for i in range(m):
        for j in range(n):
            z[i,j] = sqrt(x[i,j])
    return z


def hypotenuse(x,y):
    """
    Return sqrt(x**2 + y**2) for two arrays, x and y.
    x and y must be two-dimensional arrays of the same shape.
    """
    xx = multiply(x,x)
    yy = multiply(y,y)
    zz = add(xx, yy)
    return sqrt(zz)

The test.py Script

This is a short script that creates some arrays and uses the calculator module.

# -----------------------------------------------------------------------------
# test.py
# ----------------------------------------------------------------------------- 
import numpy as np
import calculator as calc

M = 10**3
N = 10**3

A = np.random.random((M,N))
B = np.random.random((M,N))

calc.hypotenuse(A,B)

Friday, August 14, 2015

Lists, Comprehensions, and Generators

In A Student’s Guide to Python for Physical Modeling, we emphasized NumPy arrays and paid less attention to Python lists. The reason is simple: In most scientific computing applications, NumPy arrays store data more efficiently and speed up mathematical calculations, sometimes a thousandfold.

However, there are some applications where a Python list is the better choice. There are also times when the choice between a list and an array has little or no effect on performance. In such cases a list can make your code easier to read and understand, and that is always a good thing.

In this post, I will describe Python lists and explain a special Python construct for creating lists called a list comprehension. I will also describe a similar construct called a generator expression.

Lists

A list is an ordered collection of items. You may have made a “To Do” list this morning or a grocery list for a recent trip to the store. In computer science, a list is a data structure that supports a few basic methods like insert, remove, append, and find. You probably used several of these operations with your own list. Perhaps you penciled in a new task later in the day (append), then crossed tasks off the list as you completed them (remove).

An array is a rigid data structure that stores a fixed number of identical elements (integers, floats, eight-character strings, etc.). If operations like insert, remove, or append are important parts of a computational task, a more flexible data structure like a list may be appropriate. The type of data may also suggest that a Python list is a better choice than a NumPy array. For instance, how would you initialize an array to store a grocery list? Furthermore, if the number of items to be stored is not known at the outset, it may be easier to store the data in a list and convert it to an array later. (Perhaps you are taking data at regular intervals but do not know how long an experiment will run.) Finally, if you are not worried about performance and scaling, a Python list might be a simpler option than a NumPy array. If you just need the first 20 perfect cubes, do you really want to import all of NumPy?
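
For example, if measurements arrive one at a time and you do not know in advance how many there will be, you can append them to a list as you go and convert the list to an array once the collection is complete. Here is a minimal sketch in which a random number stands in for a real measurement and the stopping condition is not known ahead of time:

import random
import numpy as np

measurements = []                       # Start with an empty list.
while True:
    value = random.gauss(0.0, 1.0)      # Stand-in for a real measurement.
    measurements.append(value)
    if abs(value) > 2.5:                # Stop when an unusual value appears.
        break

data = np.array(measurements)           # Convert to an array for analysis.
print(len(data), data.mean())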

Let’s use the example of a grocery list to create and transform a simple list. A more useful example in scientific computing might be managing a collection of data files to process into stunning figures for your latest report, but the principles are the same.

To create a Python list, just enclose a comma-separated list of elements inside square brackets.

groceries = ['milk', 'eggs', 'orange juice']

Some functions also return a list. For example, I can build a longer grocery list from a single string using the split method. (Splitting on ', ', a comma followed by a space, keeps stray spaces out of the item names.)

groceries = "milk, eggs, apples, orange juice".split(', ')

To find an item in a list, use the index method. It will return the index of the first occurrence of the item you request if it is in the list and raise a ValueError if it is not.

groceries.index('eggs')
groceries.index('bread')

It looks like I forgot the bread! I can add it to the list using the append method:

groceries.append('bread')
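
If you would rather avoid the ValueError altogether, you can test for membership with the in operator before calling index or append. A small sketch:

if 'bread' not in groceries:
    groceries.append('bread')       # Only add bread if it is not already on the list.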

Later, I see some orange juice in the back of the refrigerator (not yet expired …), so I will delete that item from the list using the remove method:

groceries.remove('orange juice')

Since I am headed to ABC Grocery, where everything is organized alphabetically, I will sort the list and review it:

groceries.sort()
print(groceries)

One more useful operation is joining lists, or concatenation. In Python, the addition operator joins two lists, and the extend method of a list does the same. Calling append with a list as its argument, however, does not concatenate the two lists: it makes the argument list the final element of the calling list, so you end up with a list inside a list rather than a single combined list.

If I had two separate grocery lists, I could join them into a single list in a variety of ways:

old_list = ['bananas', 'coffee']
new_list = groceries + old_list         # Addition creates a new list.
groceries += old_list                   # In-place addition extends original list.
old_list.extend(groceries)              # extend also extends original list.

bad_list = ['bananas', 'coffee']
bad_list.append(groceries)              # append does NOT concatenate lists.

After this set of commands, groceries and new_list contain the same elements. old_list contains 'bananas' and 'coffee' twice, since the commands append old_list to groceries first, and then append the new groceries to the original old_list. As you can see, bad_list did not merge the two lists properly.

In case you are skeptical of the usefulness of Python lists in scientific endeavors, here is a function that builds a list of the Fibonacci numbers up through the Nth.

def fibonacci(N):
    if N == 0: return [1]       # Handle unusual request for 0th number.
    fib = [1, 1]                # Create a list for all other values of N.
    for k in range(1,N):
        # Next Fibonacci number is the sum of the previous two.
        fib.append(fib[-1] + fib[-2])
    return fib

If you are still skeptical of the possible utility of a Python list, try writing the same function using NumPy arrays and using it to compute fibonacci(100). I’ve included a solution at the end of the post.

The command inside the loop could also have been written using list addition. Either of the following commands will work:

fib += [fib[-1] + fib[-2]]
fib = fib + [fib[-1] + fib[-2]]

The second approach is less efficient because it makes a copy of the list every time a new element is added, but it is syntactically correct. Be aware that you can only use list addition to join two lists — not a list and some other object you would like to put inside it. The following alternatives would result in errors:

fib += fib[-1] + fib[-2]
fib += (fib[-1] + fib[-2])
fib = fib + (fib[-1] + fib[-2])

List Comprehensions

Sometimes a list is not a collection of random elements; it has a logical structure instead. For instance, suppose you want to find the sum of the first 20 perfect cubes. You could create an array or a list of cubes, then add them up. The familiar procedure using a NumPy array is

import numpy as np
cubes = np.arange(1,21)**3
cubes.sum()

This does not require too much typing, and the purpose of the code is fairly clear. However, compare it with the following code:

cubes = [n**3 for n in range(1,21)]
sum(cubes)

This is an example of a list comprehension: a Python expression inside of the square brackets that denote a list. The statement is similar to the notation used in mathematics to define a set. It also clearly describes what the list contains. Note the similarity of the list comprehension to the following loop, which creates the same list:

cubes = []                  # Initialize empty list.
for n in range(1,21):
    cubes.append(n**3)      # Append perfect cubes.

The list comprehension effectively compresses all of this into a single Python statement.

A list comprehension defines a new list from another collection of objects via an expression like “for n in ...”. Rather than using range, you can build one list from another:

poly = [(x+1)*(x-1) for x in cubes]

You can also apply conditional statements in a list comprehension:

even_squares = [n**2 for n in range(1,51) if n%2 == 0]
odd_squares  = [n**2 for n in range(1,51) if n%2 == 1]

You can cram quite a lot of code into a list comprehension, but it is not always advisable:

pythagoras = [(a,b,c)   for a in range(1,31) for b in range(a,31) \
                        for c in range(1,31) if a**2 + b**2 == c**2]

Despite the length and complexity of this single expression, its meaning is still fairly clear.

Returning to our original task, we can do even better than adding up the first 20 perfect cubes. Using a nested list comprehension, we can make a list of sums of cubes!

sums_of_cubes = [sum([n**3 for n in range(1,N+1)]) for N in range(1,21)]

Generators

A list comprehension creates a Python list that stores all of the elements in a single data structure. Sometimes this is exactly what you need. Other times, you simply want to iterate over all of the items in a list. If you never need all of the items in the list at once, you can use a generator expression instead. A generator expression looks like a list comprehension, except that you enclose the expression in round parentheses instead of square brackets — (...) instead of [...]. Despite the round parentheses, a generator expression does not create a tuple, and there is no such thing as a “tuple comprehension”.

cube_list = [n**3 for n in range(1,101)]
cube_generator = (n**3 for n in range(1,101))

A generator is simpler than a list. You cannot insert, remove, or append items, nor can you search or sort a generator. A generator knows how to produce the next item in a sequence, and little else. Once it has reached the end of its sequence, it does even less.

for x in cube_list: print(x)            # Prints numbers stored in list.
for x in cube_list: print(x)            # Prints numbers stored in list again.

for x in cube_generator: print(x)       # Prints numbers provided by generator.
for x in cube_generator: print(x)       # Prints nothing.  Generator is finished.
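
Because a generator produces its items on demand, you can also pass a generator expression directly to a function like sum. The earlier sum of the first 20 cubes can therefore be written without ever storing the full list. (When the generator expression is the only argument, the extra parentheses can be dropped.)

total = sum(n**3 for n in range(1, 21))
print(total)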

The advantages of a generator over a list are size and speed. Compare the output of the __sizeof__() method for the following lists and generators. This method returns the size of the object in bytes.

cube_list = [n**3 for n in range(1,10)]
cube_generator = (n**3 for n in range(1,10))
print(cube_list.__sizeof__())
print(cube_generator.__sizeof__())

cube_list = [n**3 for n in range(1,10**3)]
cube_generator = (n**3 for n in range(1,10**3))
print(cube_list.__sizeof__())
print(cube_generator.__sizeof__())

cube_list = [n**3 for n in range(1,10**6)]
cube_generator = (n**3 for n in range(1,10**6))
print(cube_list.__sizeof__())
print(cube_generator.__sizeof__())

The list grows from 168 bytes to 9 kB to 8.7 MB, while the generator remains a constant 48 bytes. Also, you may have noticed a delay while Python created the large list during the last set of commands.

I generally prefer a generator when I iterate over a large sequence of items once — especially if the program might exit the loop before reaching the end of the sequence.
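
For instance, to find the first cube greater than one million, a generator lets you stop as soon as the condition is met instead of building a list of a million cubes first. A minimal sketch:

cubes = (n**3 for n in range(1, 10**6))
for x in cubes:
    if x > 10**6:
        print(x)        # First cube greater than one million.
        break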

Summary

NumPy arrays are often the most efficient data structure for numerical work in Python. However, there are some tasks for which a Python list is a better choice — often when organizing data rather than processing data. Python offers a compact syntax for creating lists called a list comprehension. A generator expression is similar, but creates an object that can produce a sequence without storing all of its elements. A generator is often a better choice than a list or an array when iterating over a large sequence of items.




NumPy version of fibonacci(N)

Here is a version of the fibonacci(N) function above that uses NumPy arrays.

import numpy as np

def Fibonacci(N):
    if N == 0: return np.array(1)   # Handle unusual request for 0th number.
    fib = np.zeros(N+1, dtype=int)  # Initialize array for all other values of N.
    fib[0], fib[1] = 1, 1
    for k in range(2,N+1):
        # Next Fibonacci number is the sum of the previous two.
        fib[k] = fib[k-1] + fib[k-2]
    return fib

Perhaps you came up with a more elegant solution. I find this version more difficult to code and more confusing to read. Plus, using a NumPy array forces a compromise: Either use floating-point numbers and lose significant digits for N > 78, or use fixed-width integers and overflow (silently wrapping around) for N > 91. In either case, you cannot generate the 100th Fibonacci number!
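
A quick way to see the compromise is to compare the last element returned by the two versions. The list version uses Python’s arbitrary-precision integers, while the array version is limited by NumPy’s fixed-width integers, so its result wraps around for large N:

print(fibonacci(100)[-1])   # Exact: Python integers have arbitrary precision.
print(Fibonacci(100)[-1])   # Wrong for large N: fixed-width NumPy integers overflow.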