Tokenizers

1   Overview

Tokenizers divide strings into lists of substrings. For example, tokenizers can be used to find the list of sentences or words in a string. Tokenizers are implemented in NLTK as subclasses of the nltk.tokenize.TokenizerI interface, which defines the tokenize() method. Here's an example of their use:

 
>>> from nltk.tokenize import *
>>> tokenizer = WordPunctTokenizer()
>>> tokenizer.tokenize("She said 'hello'.")
['She', 'said', "'", 'hello', "'."]
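
Since the interface only requires a tokenize() method, defining a new tokenizer is straightforward. Here is a minimal sketch; the CommaTokenizer class is a made-up illustration, not part of NLTK:

>>> from nltk.tokenize import TokenizerI
>>> class CommaTokenizer(TokenizerI):  # hypothetical example class
...     """Split a string at commas."""
...     def tokenize(self, s):
...         return s.split(',')
>>> CommaTokenizer().tokenize('to be,or not,to be')
['to be', 'or not', 'to be']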

Many of the tokenization methods are also available as simple functions, which save you the trouble of creating a tokenizer object every time you want to tokenize a string. For example:

 
>>> wordpunct_tokenize("She said 'hello'.")
['She', 'said', "'", 'hello', "'."]

2   Simple Tokenizers

The following tokenizers, defined in nltk.tokenize.simple, just divide the string using the string split() method.

 
>>> s = ("Good muffins cost $3.88\nin New York.  Please buy me\n"
...      "two of them.\n\nThanks.")
 
>>> # same as s.split():
>>> WhitespaceTokenizer().tokenize(s) 
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.',
 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']
 
>>> # same as s.split(' '):
>>> SpaceTokenizer().tokenize(s) 
['Good', 'muffins', 'cost', '$3.88\nin', 'New', 'York.', '',
 'Please', 'buy', 'me\ntwo', 'of', 'them.\n\nThanks.']
 
>>> # same as s.split('\n'):
>>> LineTokenizer(blanklines='keep').tokenize(s) 
['Good muffins cost $3.88', 'in New York.  Please buy me',
 'two of them.', '', 'Thanks.']
 
>>> # same as [l for l in s.split('\n') if l.strip()]:
>>> LineTokenizer(blanklines='discard').tokenize(s) 
['Good muffins cost $3.88', 'in New York.  Please buy me',
 'two of them.', 'Thanks.']
 
>>> # same as s.split('\t'):
>>> TabTokenizer().tokenize('a\tb c\n\t d') 
['a', 'b c\n', ' d']

The simple tokenizers are not available as separate functions; instead, you should just use the string split() method directly:

 
>>> s.split() 
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.',
 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']
>>> s.split(' ') 
['Good', 'muffins', 'cost', '$3.88\nin', 'New', 'York.', '',
 'Please', 'buy', 'me\ntwo', 'of', 'them.\n\nThanks.']
>>> s.split('\n') 
['Good muffins cost $3.88', 'in New York.  Please buy me',
 'two of them.', '', 'Thanks.']

The simple tokenizers are mainly useful because they follow the standard TokenizerI interface, and so can be used with any code that expects a tokenizer. For example, these tokenizers can be used to specify the tokenization conventions when building a CorpusReader.
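
A sketch of this use, with TokenizerI instances supplying the tokenization conventions (the path 'corpus_dir' is a placeholder for a directory of plain-text files):

>>> from nltk.corpus.reader import PlaintextCorpusReader
>>> # 'corpus_dir' is a hypothetical directory of .txt files.
>>> reader = PlaintextCorpusReader('corpus_dir', r'.*\.txt',
...     word_tokenizer=WhitespaceTokenizer(),
...     sent_tokenizer=LineTokenizer(blanklines='discard'))

The reader's words() and sents() methods will then apply these tokenizers to the files it reads.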

3   Regular Expression Tokenizers

RegexpTokenizer splits a string into substrings using a regular expression. By default, any substrings matched by this regexp will be returned as tokens. For example, the following tokenizer selects out only capitalized words, and throws everything else away:

 
>>> capword_tokenizer = RegexpTokenizer(r'[A-Z]\w+')
>>> capword_tokenizer.tokenize(s)
['Good', 'New', 'York', 'Please', 'Thanks']

The following tokenizer forms tokens out of alphabetic sequences, money expressions, and any other non-whitespace sequences:

 
>>> tokenizer = RegexpTokenizer(r'\w+|\$[\d\.]+|\S+')
>>> tokenizer.tokenize(s)  
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York', '.',
 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

RegexpTokenizers can be told to use their regexp pattern to match separators between tokens, using gaps=True:

 
>>> tokenizer = RegexpTokenizer(r'\s+', gaps=True)
>>> tokenizer.tokenize(s)  
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.',
 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']

The nltk.tokenize.regexp module contains several subclasses of RegexpTokenizer that use pre-defined regular expressions:

 
>>> # Uses '\w+':
>>> WordTokenizer().tokenize(s) 
['Good', 'muffins', 'cost', '3', '88', 'in', 'New', 'York', 'Please',
 'buy', 'me', 'two', 'of', 'them', 'Thanks']
 
>>> # Uses '\w+|[^\w\s]+':
>>> WordPunctTokenizer().tokenize(s) 
['Good', 'muffins', 'cost', '$', '3', '.', '88', 'in', 'New', 'York',
 '.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']
 
>>> # Uses '\s*\n\s*\n\s*':
>>> BlanklineTokenizer().tokenize(s) 
['Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.',
 'Thanks.']

All of the regular expression tokenizers are also available as simple functions:

 
>>> regexp_tokenize(s, pattern=r'\w+|\$[\d\.]+|\S+')
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York', '.',
'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']
>>> word_tokenize(s) 
['Good', 'muffins', 'cost', '3', '88', 'in', 'New', 'York', 'Please',
 'buy', 'me', 'two', 'of', 'them', 'Thanks']
>>> wordpunct_tokenize(s) 
['Good', 'muffins', 'cost', '$', '3', '.', '88', 'in', 'New', 'York',
 '.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']
>>> blankline_tokenize(s) 
['Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.',
 'Thanks.']

Warning

The function regexp_tokenize() takes the text as its first argument, and the regular expression pattern as its second argument. This differs from the convention used by Python's re functions, where the pattern is always the first argument. But regexp_tokenize() is primarily a tokenization function, so we chose to follow the convention used by the other tokenization functions, which always take the text as their first argument.
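
For comparison (re.findall() puts the pattern first, regexp_tokenize() puts the text first):

>>> import re
>>> re.findall(r'\w+', 'pattern comes first')
['pattern', 'comes', 'first']
>>> regexp_tokenize('text comes first', r'\w+')
['text', 'comes', 'first']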

4   S-Expression Tokenizers

SExprTokenizer is used to find parenthesized expressions in a string. In particular, it divides a string into a sequence of substrings that are either parenthesized expressions (including any nested parenthesized expressions), or other whitespace-separated tokens.

 
>>> SExprTokenizer().tokenize('(a b (c d)) e f (g)')
['(a b (c d))', 'e', 'f', '(g)']

By default, SExprTokenizer will raise a ValueError exception if used to tokenize an expression with non-matching parentheses:

 
>>> SExprTokenizer().tokenize('c) d) e (f (g')
Traceback (most recent call last):
  ...
ValueError: Un-matched close paren at char 1

But the strict argument can be set to False to allow for non-matching parentheses. Any unmatched close parentheses will be listed as their own s-expressions, and the last partial s-expression with unmatched open parentheses will be listed as its own s-expression:

 
>>> SExprTokenizer(strict=False).tokenize('c) d) e (f (g')
['c', ')', 'd', ')', 'e', '(f (g']

The characters used for open and close parentheses may be customized using the parens argument to the SExprTokenizer constructor:

 
>>> SExprTokenizer(parens='{}').tokenize('{a b {c d}} e f {g}')
['{a b {c d}}', 'e', 'f', '{g}']

The s-expression tokenizer is also available as a function:

 
>>> sexpr_tokenize('(a b (c d)) e f (g)')
['(a b (c d))', 'e', 'f', '(g)']

5   Punkt Tokenizer

The PunktSentenceTokenizer divides a text into a list of sentences by using an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It must be trained on a large collection of plaintext in the target language before it can be used. The algorithm for this tokenizer is described in Kiss & Strunk (2006):

Kiss, Tibor and Strunk, Jan (2006): Unsupervised Multilingual Sentence
  Boundary Detection.  Computational Linguistics 32: 485-525.

The NLTK data package includes a pre-trained Punkt tokenizer for English.

 
>>> import nltk.data
>>> text = """
... Punkt knows that the periods in Mr. Smith and Johann S. Bach
... do not mark sentence boundaries.  And sometimes sentences
... can start with non-capitalized words.  i is a good variable
... name.
... """
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> print '\n-----\n'.join(tokenizer.tokenize(text.strip()))
Punkt knows that the periods in Mr. Smith and Johann S. Bach
do not mark sentence boundaries.
-----
And sometimes sentences
can start with non-capitalized words.
-----
i is a good variable
name.

(Note that whitespace from the original text, including newlines, is retained in the output.)
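
Training a new model requires only plain text in the target language. A minimal sketch, assuming training_text holds such a corpus as a single string (it is a placeholder, so the example is not run here):

>>> from nltk.tokenize.punkt import PunktSentenceTokenizer
>>> # training_text is a placeholder for a large plaintext corpus in the
>>> # target language; passing it to the constructor runs the unsupervised
>>> # training step described in Kiss & Strunk (2006).
>>> custom_tokenizer = PunktSentenceTokenizer(training_text)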

The nltk.tokenize.punkt module also defines PunktWordTokenizer, which uses a series of regular expressions to divide a text into word tokens:

 
>>> PunktWordTokenizer().tokenize(s) 
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.', 'Please',
 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']

This word tokenizer is also available as a function:

 
>>> punkt_word_tokenize(s) 
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.', 'Please',
 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']

6   Regression Tests: Regexp Tokenizer

Some additional test strings.

 
>>> s = ("Good muffins cost $3.88\nin New York.  Please buy me\n"
...      "two of them.\n\nThanks.")
>>> s2 = ("Alas, it has not rained today. When, do you think, "
...       "will it rain again?")
>>> s3 = ("<p>Although this is <b>not</b> the case here, we must "
...       "not relax our vigilance!</p>")
 
>>> print regexp_tokenize(s2, r'[,\.\?!"]\s*', gaps=False)
[', ', '. ', ', ', ', ', '?']
>>> print regexp_tokenize(s2, r'[,\.\?!"]\s*', gaps=True)
['Alas', 'it has not rained today', 'When', 'do you think',
 'will it rain again']

Make sure that grouping parentheses don't confuse the tokenizer:

 
>>> print regexp_tokenize(s3, r'</?(b|p)>', gaps=False)
['<p>', '<b>', '</b>', '</p>']
>>> print regexp_tokenize(s3, r'</?(b|p)>', gaps=True)
['Although this is ', 'not',
 ' the case here, we must not relax our vigilance!']

Make sure that named groups don't confuse the tokenizer:

 
>>> print regexp_tokenize(s3, r'</?(?P<named>b|p)>', gaps=False)
['<p>', '<b>', '</b>', '</p>']
>>> print regexp_tokenize(s3, r'</?(?P<named>b|p)>', gaps=True)
['Although this is ', 'not',
 ' the case here, we must not relax our vigilance!']

Make sure that nested groups don't confuse the tokenizer:

 
>>> print regexp_tokenize(s2, r'(h|r|l)a(s|(i|n0))', gaps=False)
['las', 'has', 'rai', 'rai']
>>> print regexp_tokenize(s2, r'(h|r|l)a(s|(i|n0))', gaps=True)
['A', ', it ', ' not ', 'ned today. When, do you think, will it ',
 'n again?']

The tokenizer should reject any patterns with backreferences:

 
>>> print regexp_tokenize(s2, r'(.)\1')
Traceback (most recent call last):
   ...
ValueError: Regular expressions with back-references are
not supported: '(.)\\1'
>>> print regexp_tokenize(s2, r'(?P<foo>)(?P=foo)')
Traceback (most recent call last):
   ...
ValueError: Regular expressions with back-references are
not supported: '(?P<foo>)(?P=foo)'

A simple sentence tokenizer, using the pattern '\.(\s+|$)':

 
>>> print regexp_tokenize(s, pattern=r'\.(\s+|$)', gaps=True)
['Good muffins cost $3.88\nin New York',
 'Please buy me\ntwo of them', 'Thanks']