Simon Willison’s Weblog

Using XPath to mine XHTML

This morning, I finally decided to install libxml2 and see what all the fuss was about, in particular with respect to XPath. What followed is best described as an enlightening experience.

XPath is a beautifully elegant way of addressing “nodes” within an XML document. XPath expressions look a little like file paths, for example:

/first/second matches any <second> elements that occur inside a <first> element that is the root element of the document
//second matches all <second> elements irrespective of their place in the document
//second[@hi] matches all <second> elements with a 'hi' attribute
//second[@hi='there'] matches all <second> elements with a 'hi' attribute that equals "there"

A full XPath tutorial is available.
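To see those four expressions in action without installing libxml2, here is a small sketch using Python's standard library ElementTree module, which supports a limited subset of XPath (the tiny document and its contents are invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<first>'
    '<second hi="there">one</second>'
    '<second hi="folks">two</second>'
    '<second>three</second>'
    '</first>'
)

# ElementTree paths are evaluated relative to the element you call them
# on, so './second' from the root plays the role of '/first/second'.
print(len(doc.findall('./second')))                # 3: <second> children of the root
print(len(doc.findall('.//second')))               # 3: <second> elements anywhere
print(len(doc.findall('.//second[@hi]')))          # 2: those with a 'hi' attribute
print(len(doc.findall(".//second[@hi='there']")))  # 1: 'hi' equal to "there"
```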

The Python libxml2 bindings make running XPath expressions incredibly simple. Here’s some code that extracts the titles of all of the entries on my Kansas blog from the site’s RSS feed:

>>> import libxml2
>>> import urllib
>>> # feed_url holds the address of the site's RSS feed (elided in the original)
>>> rss = libxml2.parseDoc(urllib.urlopen(feed_url).read())
>>> rss.xpathEval('//item/title')
[<xmlNode (title) object at 0xb4b260>, <xmlNode (title) object at 0xa99968>, 
<xmlNode (title) object at 0x10dce68>]
>>> [node.content for node in rss.xpathEval('//item/title')]
['Music and Brunch', 'House hunting', 'Arrival']
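The same //item/title query can be reproduced with the standard library's ElementTree, for anyone without the libxml2 bindings installed. This is a sketch: the feed is inlined as a string rather than fetched over HTTP, with just the three titles from the example above:

```python
import xml.etree.ElementTree as ET

feed = ET.fromstring(
    '<rss><channel>'
    '<item><title>Music and Brunch</title></item>'
    '<item><title>House hunting</title></item>'
    '<item><title>Arrival</title></item>'
    '</channel></rss>'
)

# './/item/title' is the ElementTree spelling of '//item/title'
titles = [node.text for node in feed.findall('.//item/title')]
print(titles)  # ['Music and Brunch', 'House hunting', 'Arrival']
```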

Why is this so exciting? I’ve been saying for over a year that XHTML is an ideal format for storing pieces of content in a database or content management system. Serving content to browsers as HTML 4 makes perfect sense, but storing your actual content as XML gives you the ability to process that content in the future using XML tools.

So far, the best-known tool for manipulating this stored XML has been XSLT. XSLT has its fans, but it is also often criticised as unintuitive and as having a steep learning curve. XPath is a far better example of a powerful, easy-to-use tool that can be brought to bear on XHTML content.

Enough talk, here’s an example of what I mean. The following code snippet creates a Python dictionary of all of the acronyms currently visible on the front page of my blog, mapping their shortened version to the expanded text (extracted from the title attribute):

>>> # page_url holds the address of the blog's front page (elided in the original)
>>> blog = libxml2.parseDoc(urllib.urlopen(page_url).read())
>>> ctxt = blog.xpathNewContext()
>>> ctxt.xpathRegisterNs('xhtml', 'http://www.w3.org/1999/xhtml')
>>> acronyms = dict([(a.content, a.prop('title'))
...     for a in ctxt.xpathEval('//xhtml:acronym')])
>>> for acronym, fulltext in acronyms.items():
...     print acronym, ':', fulltext

DHTML : Dynamic HyperText Markup Language
URL : Universal Republic of Love
HTML : HyperText Markup Language
SIG : Special Interest Group
PHP : PHP: Hypertext Preprocessor
CSS : Cascading Style Sheets

The above code is slightly more complicated than the first example, as using XPath with a document that uses XML namespaces requires some extra work to register the namespace with the XPath evaluation context. Still, it’s a pretty short piece of code considering what it does.
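ElementTree handles the same namespace wrinkle by accepting a prefix-to-URI mapping as a second argument to findall. Here is the acronym-harvesting trick sketched that way, against an invented inline XHTML fragment rather than the live page:

```python
import xml.etree.ElementTree as ET

page = ET.fromstring(
    '<html xmlns="http://www.w3.org/1999/xhtml"><body>'
    '<p><acronym title="HyperText Markup Language">HTML</acronym> and '
    '<acronym title="Cascading Style Sheets">CSS</acronym></p>'
    '</body></html>'
)

# Map a prefix of our choosing to the XHTML namespace URI, then use it
# in the expression, just as xpathRegisterNs does for libxml2.
ns = {'xhtml': 'http://www.w3.org/1999/xhtml'}
acronyms = {a.text: a.get('title')
            for a in page.findall('.//xhtml:acronym', ns)}
print(acronyms)
# {'HTML': 'HyperText Markup Language', 'CSS': 'Cascading Style Sheets'}
```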

For an example of how powerful XPath can be on a much larger scale, take a look at Sam Ruby’s XPath enabled blog search feature.

This is Using XPath to mine XHTML by Simon Willison, posted on 21st October 2003.
