Mechanize the web
3rd February 2003
Via Keith Devens, Screen-scraping with WWW::Mechanize describes how Perl’s WWW::Mechanize module can be used to grab information from sites that require a user login. I’ve always dismissed screen scraping as something of a wasted effort, given that a major rewrite of the scraper is required whenever the target site tweaks its HTML. This article has encouraged me to reconsider; some of the functionality in WWW::Mechanize is fantastic:
We create a WWW::Mechanize object and tell it the address of the site we’ll be working from. The Radio Times’ front page has an image link with an ALT text of “My Diary”, so we can use that to get to the right section of the site:
    my $agent = WWW::Mechanize->new();
    $agent->get("http://www.radiotimes.beeb.com/");
    $agent->follow("My Diary");
The returned page contains two forms: one to allow you to choose from a list box of program types, and then a login form for the diary function. We tell WWW::Mechanize to use the second form for input. (Something to remember here is that WWW::Mechanize’s list of forms, unlike an array in Perl, is indexed starting at 1 rather than 0. Our index is, therefore, '2'.)
    $agent->form(2);
Now we can fill in our e-mail address for the ’<INPUT name="email" type="text">’ field, and click the submit button. Nothing too complicated.
    $agent->field("email", $email);
    $agent->click();
I’m still not quite impressed enough to learn Perl, but I’m very tempted to borrow some of the ideas and re-implement them in PHP or Python.
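As a rough illustration of what that might look like, here is a minimal sketch using Python's mechanize library, which is itself a port of WWW::Mechanize. The URL, link text and email field name come from the quoted example; the library choice and the placeholder address are my own assumptions, not anything from the article.

    import mechanize

    # Sketch of the same flow in Python's mechanize (a WWW::Mechanize port).
    browser = mechanize.Browser()
    browser.set_handle_robots(False)  # skip robots.txt checks for this sketch

    browser.open("http://www.radiotimes.beeb.com/")

    # Follow the "My Diary" link; this assumes mechanize can match the link
    # by its text (an image link might need url_regex matching instead).
    browser.follow_link(text="My Diary")

    # The second form on the page is the login form. Python's mechanize
    # counts forms from 0, so the Perl example's form(2) becomes nr=1.
    browser.select_form(nr=1)
    browser["email"] = "someone@example.com"  # placeholder address

    response = browser.submit()
    print(response.read()[:500])

The appeal is the near one-to-one mapping: get/follow/form/field/click in Perl versus open/follow_link/select_form/submit here, so the scraper reads like a description of what a person would do in a browser.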