I gave a talk on debugging Django applications at Monday’s inaugural meeting of DJUGL, the London Django Users Group. I wanted to talk about something that wasn’t particularly well documented elsewhere, so I pitched the talk as “Bug Driven Development”—what happens when Test Driven Development goes the way of this unfortunate pony. The slides are up on SlideShare, but they don’t provide quite enough context, so I’m going to cover the tips in full here.
Making the most of the error page
Django’s default error page is great—it provides a detailed traceback with local variables, lets you expand out the lines of code around the problem, provides a plain text exception suitable for e-mailing to colleagues and even a one-click button to send details to http://dpaste.com/ so you can go and talk about the error on IRC. It also serves the same purpose as phpinfo()—it shows you your application’s settings, the GET, POST and COOKIE data from the request and the all-important META fields assembled from the HTTP environment (great for remembering how to misspell HTTP_REFERER).
Useful tip number one is that you can trigger the error page from any view just by adding the following line:

assert False

You can serve up an expression with the assertion as well; it will be displayed at the top of the error page:

assert False, request.GET
One particularly useful place to use this is when you are building a complex form. If you want to see the data that was submitted, drop an assert False into the view that the form targets and use the error page to inspect the data.
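To make the trick concrete, here is a minimal sketch of a form-targeting view with the assertion dropped in. The view name and the FakeRequest class are invented so the example runs outside a Django project; in real code you would use your actual view and Django’s own request object.

```python
# Hypothetical stand-in for Django's HttpRequest, so this sketch
# runs outside a Django project.
class FakeRequest:
    def __init__(self, post_data):
        self.POST = post_data

def contact_submit(request):
    # Temporary debugging aid: Django's error page will display
    # request.POST at the top. Remove this line before deploying!
    assert False, request.POST
```

Hitting the view then raises an AssertionError carrying the submitted data, which the error page renders at the top.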
Logging to the development server console
If you want to be a bit more sophisticated with your logging, it’s worth turning to Python’s logging module (part of the standard library). You can configure it in your settings.py:
import logging

logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(message)s',
)
Then call it from any of your views:
def my_view(request):
    import logging
    logging.debug("A log message")
    ...
Again, this will log things to the terminal where the development server is running. If you want to log things to a file you can do so by extending the basicConfig call:
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(message)s',
    filename = '/tmp/myapp.log',
    filemode = 'w'
)
You can then use tail -f /tmp/myapp.log to see log lines being appended to that file in real-time. This can be used in production as well as development.
The above just scratches the surface of Python’s logging module; with a bit of digging around in the documentation you can use it to rotate log files, send log messages over the network and even POST log events to an HTTP server somewhere.
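As a small taste of that, here is a sketch of log rotation using the standard library’s RotatingFileHandler. The file path, size limit and logger name are all arbitrary choices for the example:

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate the log once it reaches 1 MB, keeping up to 3 old copies
# (myapp.log.1, myapp.log.2, myapp.log.3).
handler = RotatingFileHandler('/tmp/myapp.log',
                              maxBytes=1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.debug('A log message')
```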
Often you find yourself dealing with an error that only occurs in certain circumstances—a function might be called from dozens of different places in your program but only runs into trouble in a very specific case. You can use the traceback module to log the current stack, which will allow you to tell how a function was called when something went wrong:
import logging, traceback, pprint

def my_buggy_function(arg):
    ...
    if error_condition:
        stack = pprint.pformat(traceback.extract_stack())
        logging.debug('An error occurred: %s' % stack)
The list of tuples returned by traceback.extract_stack() includes line numbers, function names and paths to Python files, so you can use it to reconstruct a good amount of information about your program.
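A quick illustration of what those tuples contain (the function names here are invented for the example):

```python
import traceback

def outer():
    return inner()

def inner():
    # extract_stack() returns one (filename, line number, function
    # name, source line) entry per frame, oldest frame first.
    return traceback.extract_stack()

stack = outer()
# The last two entries record the call path that got us here:
assert stack[-1][2] == 'inner'
assert stack[-2][2] == 'outer'
```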
Using the debugger
By far the most powerful weapon in my debugging toolkit is the Python debugger, pdb. Again, this ships with the standard library so there’s nothing extra to install. pdb is a command line debugger (if you want a GUI, options include PyEclipse and Komodo, but I haven’t used either myself). There are a bunch of ways to activate pdb, but the most straightforward is to simply drop the following line directly into a Django view function:
import pdb; pdb.set_trace()
If you try to load that page in your browser, the browser will hang—the page will appear to be loading extremely slowly. What’s actually happened is the development server has paused execution and thrown up the pdb interface—you can switch over to your console and start interacting directly with the server mid-view.
Did I mention you should never, ever leave this on in production?
So, you’ve got a hung development server and a pdb prompt. What can you do with it? The answer is pretty much anything. I won’t provide a full pdb tutorial here (this is a good introduction), but the commands I find most useful are the following:
- l: shows the lines of source code around your current point of execution. You can run it multiple times to increase the amount of source code displayed.
- n: executes the next line
- s: same as n, but steps into any functions that are called. You can quickly get lost in a twisty maze of code with this command, but that’s OK because...
- r: continues execution until the current function returns
- u: goes UP one level in the stack, so you can see the function that called the function you are currently in
- d: goes DOWN again
- locals(): not a pdb command, but handy for seeing what’s in your current scope
The pdb docs have a full list of commands.
The pdb prompt doubles as a full Python interactive shell, so you can not only access variables but also modify them, call functions and generally mess around with the internals of your application as much as you like, while it’s running. It’s kind of a poor man’s imitation of being a Smalltalk developer.
Remember though, the whole time you’re messing around in pdb your browser is still stuck there, waiting for the HTTP request to come back. If you hit “c” (for continue) your application will kick in again, the request will be served and your browser will breathe a sigh of relief.
Thankfully you don’t have to use pdb in a way that freezes your development server; it also works great in the interactive shell. If you’ve got a buggy function, one way to explore it is to run it interactively, then use the following idiom:
>>> def function_that_raises_an_exception():
...     assert False
...
>>> function_that_raises_an_exception()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in function_that_raises_an_exception
AssertionError
>>> import pdb; pdb.pm()
> <stdin>(2)function_that_raises_an_exception()
(Pdb)
pdb.pm() stands for post-mortem, and is probably my favourite feature of the debugger—it lets you jump back in to debug the most recently raised exception, even if you hadn’t imported pdb at the time the exception was raised.
One last pdb tip: you can use it to debug Python command line scripts such as Django’s custom ./manage.py commands. The trick is to run the script like this:
python -i manage.py buggy_command
The -i argument causes Python to drop into the interactive prompt after executing the script. If the script raised an exception, you can then use pdb.pm() to debug it.
Handling errors in production
Django’s default behaviour in production (that is, when the DEBUG setting is set to False) is to e-mail exception reports to everyone listed in the ADMINS setting. You can also turn on e-mail reports for every 404 error with the SEND_BROKEN_LINK_EMAILS setting, which will send them to the addresses in the MANAGERS setting. As far as I know these settings don’t do anything else—they’re a pretty ancient bit of Django.
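In settings.py terms that looks something like the following sketch (the names and addresses are placeholders):

```python
# settings.py (sketch): error reporting in production.
DEBUG = False

# Unhandled exceptions are e-mailed to these people:
ADMINS = (
    ('Alice Admin', 'alice@example.com'),
)

# 404 reports go to MANAGERS rather than ADMINS:
MANAGERS = ADMINS
SEND_BROKEN_LINK_EMAILS = True
```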
On a high traffic site you probably don’t want to be e-mailed on every server error. One neat alternative is David Cramer’s django-db-log, which logs exceptions to a database table. It cleverly uses an MD5 hash of the traceback to aggregate many reports of the same error. More importantly though, it acts as a really straightforward example of how to use Django middleware’s process_exception hook to roll your own error reporting. Take a look at the code to see how simple this is.
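The aggregation trick is easy to imitate. Here is a rough sketch of the idea (my own illustration, not django-db-log’s actual code): hash the formatted traceback, and repeated occurrences of the same error collapse to the same key.

```python
import hashlib
import sys
import traceback

def aggregation_key():
    # Hash the formatted traceback of the current exception so that
    # repeated occurrences of the same error share one key.
    text = ''.join(traceback.format_exception(*sys.exc_info()))
    return hashlib.md5(text.encode('utf-8')).hexdigest()

def capture(func):
    # Run func, returning an aggregation key if it raised.
    try:
        func()
    except Exception:
        return aggregation_key()
```

In real middleware you would compute the key inside process_exception and increment a counter on the matching database row.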
More useful middleware
In the talk I demoed a couple of other handy pieces of middleware. The first was the ProfilerMiddleware (one of several profiling tools on Django Snippets) which allows you to add ?prof to the end of any URL to see the output of Python’s cProfile module run against that request. The second is one that I’ve just released: DebugFooter, which adds a footer showing exactly which templates were loaded from where (handy for debugging complex template paths) as well as every executed SQL query and how long each one took.
Abusing the test client
A final tip for exploring your application interactively is to learn to use Django’s TestClient. Although designed for use in unit tests, this tool is equally useful at the interactive prompt. It allows you to simulate an in-process request against your application from within your Python code. Here’s an example:
>>> from django.test.client import Client
>>> c = Client()
>>> response = c.get("/") # The homepage
>>> response
<django.http.HttpResponse object at 0x2300470>
>>> print response
Vary: Cookie
Content-Type: text/html; charset=utf-8

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
The response object you get back is the HttpResponse returned by the view, ready to be explored interactively.
There’s another function from the unit testing tools that can help with interactively exploring an application: setup_test_environment(). This function monkey-patches in some additional hooks used by the unit tests, including one that intercepts template render calls and adds information on them to the request object. Here’s an example:
>>> from django.test.utils import setup_test_environment
>>> setup_test_environment()
>>> from django.test.client import Client
>>> c = Client()
>>> response = c.get("/")
>>> response.template
[<django.template.Template object at 0x2723dd0>,
 <django.template.Template object at 0x2723f30>,
 <django.template.Template object at 0x273ee10>]
>>> response.context
[ list of Context objects ]
This allows you to explore not just the HTML returned by a view, but also the templates and contexts that were used to render it.
Your tips welcome
If you have any useful tips on debugging Django applications, please share them in the comments on this entry.