HTML Screen Scraping: A How-To Document

Author: Dave Kuhlman
Address:
dkuhlman@rexx.com
http://www.rexx.com/~dkuhlman
Revision: 1.0a
Date: Jan. 9, 2004
Copyright: Copyright (c) 2004 Dave Kuhlman. This documentation is covered by The MIT License: http://www.opensource.org/licenses/mit-license.

Abstract

This document explains how to do HTML screen scraping. In effect, it shows how to treat the Web as a data resource by retrieving and extracting data from HTML Web pages.

Contents

1   Introduction
2   Command-line Use
3   From Within Quixote
4   Sgrep Patterns
5   Running sgrep
6   Sgrep plus regular expressions
7   Sgrep plus HTMLParser
8   An HTML Scraping Development Methodology
    8.1   Unit tests
9   Summary
    9.1   Hints and suggestions
    9.2   Motivation and tools
10  See Also

1   Introduction

The Web contains a huge amount of information. This document shows how to use the Web as a back-end resource behind your Quixote Web applications.

This document explains how to do this behind your Quixote application (although similar techniques could be used in other environments as well). In particular, we will describe:

  • How to retrieve a Web page with the urllib module.
  • How to extract chunks of text from the page with sgrep.
  • How to pick individual data items out of those chunks with Python's re (regular expression) module and with the HTMLParser module.

There is a distribution file containing the source code from which the samples in this document were selected. You can find it here: http://www.rexx.com/~dkuhlman/quixote_htmlscraping.zip.

The Web is just one of a number of back-end resources that you can put behind your Quixote application. Other possible back-end resources are, for example, relational databases, XML-RPC and SOAP services, and Python extension modules. For more information on how to access several other back-end resources under Quixote, see Special Tasks -- Back-end Resources in A Quixote Application: Getting Started.

2   Command-line Use

It is often useful to be able to make quick tests of your patterns by running them from the command line. Here is a simple function that does so. It uses the urllib module to retrieve the Web page and the popen2 module to run sgrep:

import urllib
import popen2

#
# Scan the contents of a URL.
# Retrieve the URL, then feed the contents (a string) to sgrep.
#
def search_url(pattern, url, addOptions):
    if not addOptions:
        addOptions = ''
    options = "-g html -o '%r:::' -T " + addOptions
    cmd = "sgrep %s '%s' -" % (options, pattern)
    print 'cmd:', cmd
    try:
        instream = urllib.urlopen(url)
    except IOError:
        print '*** bad url: %s' % url
        return
    content = instream.read()
    instream.close()
    print 'len(content): %d' % len(content)
    # Feed the content through sgrep and collect the results.
    outfile, infile = popen2.popen2(cmd)
    infile.write(content)
    infile.close()
    results = outfile.read()
    outfile.close()
    print 'results:\n========\n'
    resultlist = results.split(':::')
    for result in resultlist:
        if result.strip():
            print result
            print '---------------'

Explanation:

  • The sgrep command line is built from the pattern and any additional options; the -o '%r:::' option makes sgrep print each matching region followed by the delimiter ":::".
  • urllib.urlopen() retrieves the page; if the URL is bad, an error message is printed and the function returns.
  • popen2.popen2() starts sgrep; the page contents are written to sgrep's standard input, and the results are read from its standard output.
  • The results are split on the ":::" delimiter, and each non-empty chunk is printed.
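
A hypothetical invocation (the pattern and URL are only illustrations; any element of the target page could be selected in the same way):

search_url('(stag("TITLE") .. etag("TITLE"))', 'http://www.python.org/', '')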

3   From Within Quixote

OK, I'll admit it. Doing this inside Quixote is basically the same as doing it outside of Quixote. Again, you are going to use urllib to retrieve a Web page, then use sgrep to extract chunks of text from the page, and finally use the Python regular expression module re (or some other Python parsing technique) to extract data items from the results returned by sgrep.

One concern that we might have is latency. In particular, we might worry that:

  1. Our server might block while our Quixote application is waiting for a request (e.g. through urllib) to be completed.
  2. The client might notice an extra delay while our Quixote application is retrieving a Web page.

Here is an attempt to ease your worries:

  1. The possibility of blocking during a request is eliminated by using the SCGI server for Quixote. This server creates a new process for each request if an existing process is not available to handle the request. Because these request handlers are running in separate processes, they do not block each other. This should reassure even those of you who worry about the Python global interpreter lock (GIL). And, the maximum number of processes can be increased at server start-up time.
  2. A noticeable delay for the client can't be helped. Some things take time. In some applications, though, you may be able to cache results, as sketched below.
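
Here, for instance, is a very simple in-memory cache (a minimal sketch; the cache lifetime and the name cached_job_search are arbitrary choices for illustration, and jobservice is an instance of the JobService class developed later in this document):

import time

_cache = {}
_MAX_AGE = 300    # cache lifetime in seconds -- an arbitrary choice

def cached_job_search(jobservice, query):
    # Return a recent cached result for this query if we have one;
    # otherwise perform the search and remember the result.
    entry = _cache.get(query)
    if entry is not None:
        timestamp, result = entry
        if time.time() - timestamp < _MAX_AGE:
            return result
    result = jobservice.job_search(query)
    _cache[query] = (time.time(), result)
    return result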

4   Sgrep Patterns

This section gives help with writing sgrep patterns that select data within an HTML document. It contains examples of sgrep patterns for typical data extraction tasks that you are likely to want to perform.

A few examples (these are illustrative patterns, modeled on the queries used later in this document; the element and attribute names are assumptions about the page being scraped, and page.html stands for a captured HTML file):
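
# Each whole anchor (A) element, tags included:
sgrep -g html -o '%r:::' '(stag("A") .. etag("A"))' page.html

# Just the text inside each anchor element:
sgrep -g html -o '%r:::' '(stag("A") __ etag("A"))' page.html

# The value of each HREF attribute:
sgrep -g html -o '%r:::' 'attribute("HREF")' page.html

# Each table cell (TD element) that contains a link:
sgrep -g html -o '%r:::' '(stag("TD") .. etag("TD")) containing stag("A")' page.html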

A few additional comments and notes:

  • The -g html option tells sgrep to scan the input as HTML, so that regions such as stag("TD"), etag("TD"), and attribute("HREF") are recognized.
  • The -o '%r:::' output option prints each matching region followed by the delimiter ":::", which makes it easy to split sgrep's output into a Python list.
  • In the patterns, element and attribute names are written in upper case (for example stag("TD") and attribute("HREF")).

5   Running sgrep

There are two techniques for running sgrep from Python:

  • through pysgrep, the Python extension module that wraps sgrep; or
  • by piping text through the sgrep executable with the popen2 module.

The use of pysgrep is described in PySgrep - Python Wrappers for Sgrep, so I will not repeat that here.

Using popen2 is simple; it is the technique used in the examples below.
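
In outline, the popen2 technique looks like this (a minimal sketch; the pattern and the sample content are only illustrations):

import popen2

# An sgrep command and some content, purely for illustration.
cmd = "sgrep -g html -o '%r:::' '(stag(\"A\") __ etag(\"A\"))' -"
content = '<HTML><BODY><A HREF="http://www.python.org/">Python</A></BODY></HTML>'

outfile, infile = popen2.popen2(cmd)   # start sgrep, connected by pipes
infile.write(content)                  # feed the document to sgrep's standard input
infile.close()
results = outfile.read()               # collect sgrep's standard output
outfile.close()
print results.split(':::')             # one chunk per matching region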

Comparison -- PySgrep vs. popen2:

  • pysgrep calls sgrep through a compiled extension module, so it avoids starting a separate sgrep process for each query, but it must be built and installed as a Python extension.
  • popen2 requires nothing beyond the sgrep executable and a few lines of Python, at the cost of one new process per query.

6   Sgrep plus regular expressions

sgrep does not (yet) have the capability to use regular expressions. You can get some of the effect of regular expressions by using Python's re module and applying it to results produced by sgrep.

Basically we are going to use a regular expression to extract pieces of data from each of the chunks returned by sgrep.

An example -- Suppose that we want to extract the server and domain name from URLs. For example, suppose sgrep returns something like the following:

http://www.python.org/doc/current/tut/tut.html

And, we want to extract:

www.python.org
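
The regular expression itself is simple; for example (an illustrative pattern -- the group captures everything between "http://" and the next "/"):

import re

expr = re.compile(r'http://([^/]+)/')
mo = expr.search('http://www.python.org/doc/current/tut/tut.html')
if mo:
    print mo.group(1)    # prints: www.python.org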

Here is how we might combine such a regular expression with sgrep:

import popen2
import re

#
# Scan files.
# Read the files, then feed the file contents (a string) to sgrep.
# If a regular expression is given, apply it to each chunk that sgrep returns.
#
def search_files(pattern, filenames, addOptions, regex):
    if not addOptions:
        addOptions = ''
    options = "-g html -o '%r:::' " + addOptions
    #
    # Compile the regular expression if there is one.
    expr = None
    if regex:
        expr = re.compile(regex)
    cmd = "sgrep %s '%s' -" % (options, pattern)
    for filename in filenames:
        inputfile = file(filename, 'r')
        outfile, infile = popen2.popen2(cmd)
        infile.write(inputfile.read())
        infile.close()
        results = outfile.read()
        outfile.close()
        print '=' * 50
        s1 = 'file: %s' % filename
        print s1
        print '=' * len(s1)
        resultlist = results.split(':::')
        for result in resultlist:
            if result.strip():
                print result
                #
                # If there is a regular expression, use it to
                #   search the result.
                if expr:
                    matchobject = expr.search(result)
                    if matchobject:
                        print 'match: %s' % matchobject.group(1)
                    else:
                        print 'no match'
                print '---------------'

Explanation:

  • For each file, the contents are piped through sgrep (via popen2) just as in the command-line example above, and sgrep's output is split on the ":::" delimiter.
  • If a regular expression was supplied, it is compiled once and then applied to each chunk returned by sgrep; group(1) of the match (the parenthesized part of the expression) is printed.
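
A hypothetical call that performs the extraction described above (the pattern, file name, and regular expression are only illustrations):

search_files('attribute("HREF")', ['sample.html'], '', r'http://([^/]+)/')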

7   Sgrep plus HTMLParser

In some cases the chunks of text returned by sgrep may contain HTML mark-up that is sufficiently complex so that it becomes very awkward to use regular expressions to analyze it. In other cases, it just may be easier to ask sgrep to return such chunks of HTML mark-up. This section describes how to use the Python HTMLParser module to analyze these chunks of text.

This technique is limited to some extent by the need to give HTMLParser.feed() chunks of mark-up that are "complete", that is, chunks that contain balanced tags. However, this requirement is easy to satisfy with sgrep, because any query of the form "(stag("tag") .. etag("tag")) parenting ..." will return a chunk of HTML mark-up that we can give to the HTMLParser.feed(data) method. And, note that there is no requirement to feed a "complete" chunk of mark-up to HTMLParser.feed(data) in a single call; we can call feed multiple times in order to do so.
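
As a minimal illustration of that last point (a throwaway parser, not part of the job-search example below), the following feeds one balanced element to a parser in two pieces:

import HTMLParser

class TextCollector(HTMLParser.HTMLParser):
    # Collect all character data seen by the parser.
    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.text = ''
    def handle_data(self, data):
        self.text += data

parser = TextCollector()
parser.feed('<TD>Acme ')       # first piece of the chunk
parser.feed('Widgets</TD>')    # completes the balanced element
print parser.text              # prints: Acme Widgets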

Here is a reasonably simple example. This example searches http://jobs.com with a query, then extracts (1) a brief job description, (2) a URL, (3) the company name, and (4) the job location, then formats a Web page with this extracted information.

Here is the code that does the query and data extraction:

import urllib
import popen2
import HTMLParser


class JobService:

    #
    # Process a single query.
    # Return a tuple: (urlList, descriptionList, companyList, locationList).
    # The query is a sequence of words separated by spaces.
    #
    def job_search(self, query):
        if query:
            q1 = query.replace(' ', '.')
            q2 = query.replace(' ', '%26')
            q3 = query.replace(' ', '+')
        else:
            return [], [], [], []
        req = 'http://%s.jobs.com/jobsearch.asp?re=9&vw=b&pg=1&cy=US&sq=%s&aj=%s' \
            % (q1, q2, q3)
        f = urllib.urlopen(req)
        content = f.read()
        f.close()
        resultTuple = self.job_parse(content)
        return resultTuple

    def job_parse(self, content):
        # Extract the URLs.
        cmd = "sgrep -g html -o '%r:::' 'attribute(\"HREF\") in " \
              "((stag(\"A\") .. etag(\"A\")) childrening " \
              "(stag(\"TD\") .. etag(\"TD\")) containing " \
              "attribute(\"HREF\") containing \"getjob\")' -"
        urlList = self.extract(cmd, content)
        # Extract the descriptions.
        cmd = "sgrep -g html -o '%r:::' '(stag(\"A\") __ etag(\"A\")) in " \
              "((stag(\"A\") .. etag(\"A\")) childrening " \
              "(stag(\"TD\") .. etag(\"TD\")) containing " \
              "attribute(\"HREF\") containing \"getjob\")' -"
        descriptionList = self.extract(cmd, content)
        # Extract the company names and locations.
        cmd = "sgrep -g html -o '%r:::' '(stag(\"TR\") .. etag(\"TR\")) containing " \
              "(stag(\"TD\") .. etag(\"TD\")) parenting " \
              "stag(\"A\") containing " \
              "attribute(\"HREF\") containing \"getjob\"' -"
        companyList, locationList = self.extract_with_htmlparser(cmd, content)
        return urlList, descriptionList, companyList, locationList

    def extract(self, cmd, content):
        outfile, infile = popen2.popen2(cmd)
        infile.write(content)
        infile.close()
        results = outfile.read()
        outfile.close()
        resultList = results.split(':::')
        return resultList

    def extract_with_htmlparser(self, cmd, content):
        parser = LocationHTMLParser()
        outfile, infile = popen2.popen2(cmd)
        infile.write(content)
        infile.close()
        results = outfile.read()
        outfile.close()
        resultList = results.split(':::')
        companyList = []
        locationList = []
        for result in resultList:
            parser.clear()
            parser.feed(result)
            companyList.append(parser.getCompany())
            locationList.append(parser.getLocation())
        return companyList, locationList


class LocationHTMLParser(HTMLParser.HTMLParser):

    def __init__(self):
        HTMLParser.HTMLParser.__init__(self)
        self.count = 0
        self.company = ''
        self.location = ''

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.count += 1

##    def handle_endtag(self, tag):
##        pass

    def handle_data(self, data):
        if self.count == 4:
            self.company += data
        elif self.count == 5:
            self.location += data

    #
    # Note to self:  Do not use name "reset".  HTMLParser
    #   defines and uses that.
    def clear(self):
        self.count = 0
        self.company = ''
        self.location = ''

    def getCompany(self):
        return self.company

    def getLocation(self):
        return self.location

Explanation:

  • job_search() builds the query URL from the user's query string, retrieves the result page with urllib, and hands the page contents to job_parse().
  • job_parse() runs three sgrep queries against the page: one for the HREF attributes of the "getjob" links (the URLs), one for the text of those links (the job descriptions), and one for the table rows that contain those links (the company names and locations).
  • extract() pipes the page through sgrep with popen2 and splits the output on the ":::" delimiter.
  • extract_with_htmlparser() feeds each table-row chunk to a LocationHTMLParser, which counts TD elements and collects the character data of the fourth cell (the company name) and the fifth cell (the location).
  • Note the comment in LocationHTMLParser: the method that clears the parser's own state is named clear() rather than reset(), because HTMLParser already defines and uses reset().

And, here is the code that provides the Quixote Web user interface and that generates the Web page:

class ServicesUI:
    o
    o
    o

    def do_job_search [html] (self, request):
        queryString = widget.StringWidget('query_string')
        submit = widget.SubmitButtonWidget(value='Search')
        if request.form:
            queryStringValue = queryString.parse(request)
        else:
            queryStringValue = ''
        jobservice = services.JobService()
        urlList, descriptionList, companyList, locationList = jobservice.job_search(
            str(queryStringValue))
        header('Jobs')
        '<form method="POST" action="job_search">\n'
        '<p>Query string:'
        queryString.render(request)
        '</p>\n'
        '<p>'
        submit.render(request)
        '</p>\n'
        '</form>\n'
        '<hr/>\n'
        '<ul>\n'
        re1 = re.compile(str('href="([^"]*)"'))
        q1 = queryStringValue.replace(str(' '), str('.'))
        for idx in range(len(urlList)):
            result = urlList[idx]
            description = descriptionList[idx]
            company = companyList[idx]
            location = locationList[idx]
            if result.strip():
                mo = re1.search(result)
                if mo:
                    url = mo.group(1)
                    '<li><a href="http://%s.jobs.com%s">%s</a> at %s in %s</li>' % \
                        (q1, url, description, company, location)
        '</ul>'
        footer()

Explanation:

  • do_job_search is a Quixote PTL [html] template: the value of each bare expression statement, including the string literals, is appended to the HTML output.
  • The StringWidget and SubmitButtonWidget render the query field and the Search button; when the form has been submitted, the query string is parsed out of the request.
  • The lists returned by job_search() are walked in parallel; a regular expression pulls the href value out of each URL chunk, and a list item linking to the job page is written for each result.

8   An HTML Scraping Development Methodology

This section is part summary/review and part suggestion for a sequence of steps to follow for this kind of work.

For each HTML scraping operation, do the following:

  1. Determine and capture the URL -- Use your Web browser to visit the page containing the data you want. Then copy the contents of the address field. Note that in your Python code, you may have to fill arguments (for example, CGI variables) into the URL.

  2. Capture a sample of the Web page in a file. Here is a simple script that retrieves a Web page and writes it to stdout, which you can pipe to the file of your choice:

    import sys
    import urllib

    # Retrieve the page at the given URL and write it to stdout, for example
    # (assuming you save this script as get_page.py):
    #     python get_page.py http://www.python.org/ > sample.html
    def get_page(url):
        f = urllib.urlopen(url)
        content = f.read()
        f.close()
        print content

    if __name__ == '__main__':
        get_page(sys.argv[1])
    
  3. Off-line (i.e. outside of Quixote), write and test a script that extracts the data items you want from the sample page you have captured in a file. Here is a harness that you can use to test your data extraction scripts; it is taken from a unit test, and it tests the code in the example above:

    import unittest
    from test2.services import jobsearchservices
    
    class TestDataExtraction(unittest.TestCase):
        o
        o
        o
        def test_retrieve_and_parse(self):
            jobservice = jobsearchservices.JobService()
            urlList, descriptionList, companyList, locationList = \
                jobservice.job_search('python internet')
            self.assert_(len(urlList) > 0)
            self.assert_(len(urlList) == len(descriptionList))
            self.assert_(len(urlList) == len(companyList))
            self.assert_(len(urlList) == len(locationList))
    
  4. Copy and paste the data extraction function that you have just tested into your Quixote application. Or, with a little prior planning, you could put these data extraction functions into a Python module where they can be used both during off-line testing and from within your Quixote application.

  5. Use the Python unittest framework to set up tests for your data extraction functions.

8.1   Unit tests

Good development methodology is -- First, documentation. Next, unit tests. Then code.

Here is a sample unit test that you can use to get started:

#!/usr/bin/env python

import unittest
from test2.services import jobsearchservices

class TestDataExtraction(unittest.TestCase):
    
    def setUp(self):
        pass

    def test_retrieve_and_parse(self):
        jobservice = jobsearchservices.JobService()
        urlList, descriptionList, companyList, locationList = \
            jobservice.job_search('python internet')
        self.assert_(len(urlList) > 0)
        self.assert_(len(urlList) == len(descriptionList))
        self.assert_(len(urlList) == len(companyList))
        self.assert_(len(urlList) == len(locationList))

if __name__ == '__main__':
    unittest.main()

Explanation:

  • This sample runs one simple test and verifies that we extract at least one data item and that the lengths of the returned lists of extracted data items are equal.
  • Because our HTML data retrieval and extraction code is in a separate module (from the rest of our Quixote application), we can import and test it outside of the Quixote application.
  • In order to add additional tests, add additional methods whose names begin with the letters "test".
  • For more information on the Python unit testing framework, see: http://www.python.org/doc/current/lib/module-unittest.html.

9   Summary

9.1   Hints and suggestions

Here are several suggestions for the implementation of your HTML data access:

  • Separate the user interface from the model -- Try to put your back-end access code in a separate module that contains no Quixote code. There are two reasons for doing this: (1) it enforces a clean separation of your user interface from your model (the application logic), and (2) it allows you to run that code outside of Quixote, for example from unit tests written with the Python unittest framework.
  • Define an API -- Try to write your code in a way that makes the access to a set of resources well-defined. One way to accomplish this is (1) to implement a separate function or method for each Web resource access and (2) to specify, for each function/method, both the URL and the data returned. Specifying the URL may require describing arguments that are filled into the URL, for example CGI variables. Specifying the data returned may require describing a complex structure, for example a list of lists. A sketch of this convention follows.
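
For example, an access function written under this convention might look like the following (a hypothetical function that simply wraps the JobService example from earlier in this document; the URL description and return value are taken from that example):

def get_job_listings(query):
    """Retrieve job listings matching query (words separated by spaces).

    URL:     http://<query>.jobs.com/jobsearch.asp, with the query words
             filled into the sq and aj CGI variables.
    Returns: a tuple of four parallel lists --
             (urlList, descriptionList, companyList, locationList).
    """
    return JobService().job_search(query)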

9.2   Motivation and tools

The Web is a huge resource. All we lack is the motivation and tools to make use of it. Or do we ...

10   See Also

http://www.mems-exchange.org/software/quixote/: The Quixote support Web site.

Sgrep home page

sgrep - search a file for a structured pattern: The sgrep man page.

re -- Regular expression operations: Documentation on Python's regular expression module.

PySgrep - Python Wrappers for Sgrep: Detailed information on the Python extension module for sgrep.

unittest: The Python unit testing framework.