This HOWTO contains a variety of examples relating to the Portuguese language. It is intended to be read in conjunction with the NLTK book (http://nltk.org/book). For instructions on running the Python interpreter, please see the section Getting Started with Python, in Chapter 1.
Chapter 1 of the NLTK book contains many elementary programming examples, all with English texts. In this section, we'll see some corresponding examples using Portuguese. Please refer to the chapter for full discussion. Vamos!
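The example texts are not bundled with this document, so as a stand-in we can build one directly from NLTK's machado corpus. A minimal sketch, assuming the corpus has been installed (e.g. nltk.download('machado')) and using the fileid romance/marm05.txt (Memórias Póstumas de Brás Cubas):

    >>> import nltk
    >>> from nltk.corpus import machado
    >>> from nltk.text import Text
    >>> # a Text object over Memórias Póstumas de Brás Cubas (1881)
    >>> ptext1 = Text(machado.words('romance/marm05.txt'))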
Any time we want to find out about these texts, we just have to enter their names at the Python prompt:
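For example, with ptext1 as defined above:

    >>> ptext1                  # the repr shows the first few tokens
    <Text: ...>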
A concordance permits us to see words in context.
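For instance, every occurrence of the word 'olhos' ('eyes') with its surrounding context (output elided):

    >>> ptext1.concordance('olhos')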
For a given word, we can find words with a similar text distribution:
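For example, words that occur in contexts similar to those of 'chegar' ('to arrive'):

    >>> ptext1.similar('chegar')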
We can search for the statistically significant collocations in a text:
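For instance (collocations() prints the top-scoring bigrams):

    >>> ptext1.collocations()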
We can search for words in context, with the help of regular expressions, e.g.:
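For example, finding what follows 'olhos'; in Text.findall() patterns, angle brackets delimit whole tokens:

    >>> ptext1.findall('<olhos> (<.*>)')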
We can automatically generate random text based on a given text, e.g.:
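A sketch; Text.generate() is only available in NLTK versions that include language-model-based generation, and its output is random:

    >>> ptext1.generate()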
A few sentences have been defined for you.
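The predefined sentences are not included here either, so we declare two tokenized Portuguese sentences by hand as stand-ins (the particular sentences are illustrative choices, not the originals):

    >>> psent1 = ['o', 'amor', 'da', 'glória', 'era', 'a', 'coisa', 'mais',
    ...           'verdadeiramente', 'humana', 'que', 'há', 'no', 'homem']
    >>> psent2 = ['Não', 'consultes', 'dicionários', '.']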
Notice that the sentence has been tokenized. Each token is represented as a string, shown in quotes, e.g. 'coisa'. Under Python 2, accented characters appeared in such output as escape sequences like \xf3, the internal representation for ó; in Python 3, strings are Unicode and ó displays directly. The tokens are combined in the form of a list. How long is this list?
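With the stand-in definition above:

    >>> len(psent1)
    14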
What is the vocabulary of this sentence?
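The vocabulary is the sorted set of distinct tokens. Note that Python sorts by Unicode code point, so accented characters such as 'á' sort after all unaccented letters (hence 'humana' precedes 'há'):

    >>> sorted(set(psent1))
    ['a', 'amor', 'coisa', 'da', 'era', 'glória', 'homem', 'humana', 'há',
     'mais', 'no', 'o', 'que', 'verdadeiramente']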
Let's iterate over each item in psent2, and print information for each:
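For each token we print the token itself, its length, and its last character:

    >>> for w in psent2:
    ...     print(w, len(w), w[-1])
    ...
    Não 3 o
    consultes 9 s
    dicionários 11 s
    . 1 .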
Notice that we accessed the last character of the string w using w[-1]. (Under Python 2, a decode() call was needed to make a human-readable version of a byte string; in Python 3 this is unnecessary, since strings are already Unicode.)
We just saw a for loop above. Another useful control structure is a list comprehension.
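For example, upper-casing every token of psent2 in a single expression:

    >>> [w.upper() for w in psent2]
    ['NÃO', 'CONSULTES', 'DICIONÁRIOS', '.']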
We can examine the relative frequency of words in a text, using FreqDist:
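A sketch (the actual counts depend on the text):

    >>> fd = nltk.FreqDist(w.lower() for w in ptext1 if w.isalpha())
    >>> fd['olhos']          # absolute count of 'olhos'
    >>> fd.freq('olhos')     # relative frequency
    >>> fd.most_common(10)   # the ten most frequent words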
NLTK includes the complete works of Machado de Assis.
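A sketch, again assuming the corpus has been installed with nltk.download('machado'):

    >>> from nltk.corpus import machado
    >>> machado.fileids()    # one fileid per work, organised by genre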
Each file corresponds to one of the works of Machado de Assis. To see a complete list of works, you can look at the corpus README file: print(machado.readme()). Let's access the text of the Posthumous Memoirs of Brás Cubas (Memórias Póstumas de Brás Cubas).
We can access the text as a sequence of characters, taking, say, the 200 characters starting at position 10,000.
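Using machado.raw() with the fileid assumed earlier:

    >>> raw_text = machado.raw('romance/marm05.txt')
    >>> raw_text[10000:10200]    # a 200-character slice of the raw text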
However, this is not a very useful way to work with a text. We generally think of a text as a sequence of words and punctuation, not characters:
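For example:

    >>> text1 = machado.words('romance/marm05.txt')
    >>> text1[:10]    # the first ten tokens
    >>> len(text1)    # the total number of tokens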
Here's a program that finds the most common ngrams that contain a particular target word.
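A sketch of one way to write it, using nltk.ngrams(); here we look for the most common 5-grams containing the illustrative target word 'olhos':

    >>> from nltk import ngrams, FreqDist
    >>> target_word = 'olhos'
    >>> fd = FreqDist(ng for ng in ngrams(text1, 5) if target_word in ng)
    >>> for ngram, freq in fd.most_common(10):
    ...     print(' '.join(ngram), freq)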
NLTK includes the MAC-MORPHO Brazilian Portuguese POS-tagged news text: over a million words of journalistic text drawn from ten sections of the daily newspaper Folha de São Paulo in 1994.
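For instance (the corpus is installed with nltk.download('mac_morpho')):

    >>> from nltk.corpus import mac_morpho
    >>> mac_morpho.words()           # plain word tokens
    >>> mac_morpho.tagged_words()    # (word, tag) pairs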
We can also access it in sentence chunks.
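For example:

    >>> mac_morpho.sents()           # sentences as lists of words
    >>> mac_morpho.tagged_sents()    # sentences as lists of (word, tag) pairs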
This data can be used to train taggers (examples below for the Floresta treebank).
The NLTK data distribution includes the "Floresta Sintá(c)tica Corpus" version 7.4, available from http://www.linguateca.pt/Floresta/.
We can access this corpus as a sequence of words or tagged words as follows:
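A sketch (the corpus is installed with nltk.download('floresta')):

    >>> from nltk.corpus import floresta
    >>> floresta.words()
    >>> floresta.tagged_words()    # tags of the form '>N+art', 'H+n', etc.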
The tags consist of some syntactic information, followed by a plus sign, followed by a conventional part-of-speech tag. Let's strip off the material before the plus sign:
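A minimal helper that implements this, then applies it to the tagged words (tags without a plus sign are kept as they are):

    >>> def simplify_tag(t):
    ...     if '+' in t:
    ...         return t[t.index('+') + 1:]
    ...     else:
    ...         return t
    ...
    >>> twords = floresta.tagged_words()
    >>> twords = [(w.lower(), simplify_tag(t)) for (w, t) in twords]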
Pretty printing the tagged words:
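For example, the first ten tagged words in word/tag notation:

    >>> print(' '.join(word + '/' + tag for (word, tag) in twords[:10]))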
Count the word tokens and types, and determine the most common word:
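A sketch (the numbers depend on the corpus version):

    >>> words = floresta.words()
    >>> len(words)         # number of word tokens
    >>> len(set(words))    # number of word types
    >>> fd = nltk.FreqDist(words)
    >>> fd.max()           # the most common word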
List the 20 most frequent tags, in order of decreasing frequency:
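Reusing simplify_tag() from above:

    >>> tags = [simplify_tag(tag) for (word, tag) in floresta.tagged_words()]
    >>> fd = nltk.FreqDist(tags)
    >>> [tag for (tag, count) in fd.most_common(20)]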
We can also access the corpus grouped by sentence:
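For example:

    >>> floresta.sents()           # sentences as lists of words
    >>> floresta.tagged_sents()    # sentences as lists of (word, tag) pairs
    >>> floresta.parsed_sents()    # sentences as parse trees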
To view a parse tree, use the draw() method, e.g.:
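For example (draw() opens a graphical window, so this requires a display):

    >>> ptrees = floresta.parsed_sents()
    >>> ptrees[5].draw()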
Python understands the common character encoding used for Portuguese, ISO 8859-1 (ISO Latin 1).
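A small self-contained illustration: the byte 0xf3 is ó in ISO 8859-1, so decoding Latin-1 bytes yields the accented string:

    >>> data = b'Mem\xf3rias P\xf3stumas'
    >>> data.decode('iso-8859-1')
    'Memórias Póstumas'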
For more information about character encodings and Python, please see section 3.3 of the book.
Here's a function that takes a word and a specified amount of context (measured in characters), and generates a concordance for that word.
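A sketch of such a function, searching the Floresta sentences (the field widths and the query word 'dar' are illustrative choices):

    >>> def concordance(word, context=30):
    ...     for sent in floresta.sents():
    ...         if word in sent:
    ...             pos = sent.index(word)
    ...             left = ' '.join(sent[:pos])
    ...             right = ' '.join(sent[pos + 1:])
    ...             print('%*s %s %-*s' %
    ...                   (context, left[-context:], word, context, right[:context]))
    ...
    >>> concordance('dar')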
Let's begin by getting the tagged sentence data, and simplifying the tags as described earlier.
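A sketch, reusing simplify_tag() and holding out the first 100 sentences for evaluation:

    >>> tsents = floresta.tagged_sents()
    >>> tsents = [[(w.lower(), simplify_tag(t)) for (w, t) in sent]
    ...           for sent in tsents]
    >>> train = tsents[100:]
    >>> test = tsents[:100]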
We already know that n is the most common tag, so we can set up a default tagger that tags every word as a noun, and see how well it does:
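A sketch (in recent NLTK releases, evaluate() has been renamed accuracy()):

    >>> tagger0 = nltk.DefaultTagger('n')
    >>> tagger0.evaluate(test)    # roughly 0.17, i.e. about one word in six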
Evidently, about one in every six words is a noun. Let's improve on this by training a unigram tagger:
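Using the default tagger as a backoff for unknown words:

    >>> tagger1 = nltk.UnigramTagger(train, backoff=tagger0)
    >>> tagger1.evaluate(test)    # substantially better than the baseline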
Next, a bigram tagger:
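Again with backoff, so that unseen bigram contexts fall back to the unigram tagger:

    >>> tagger2 = nltk.BigramTagger(train, backoff=tagger1)
    >>> tagger2.evaluate(test)    # typically a further improvement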
Punkt is a language-neutral sentence segmentation tool. We can use it to split raw Portuguese text into sentences.
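A sketch using NLTK's pretrained Portuguese model (installed with nltk.download('punkt')) on the raw Machado text from earlier:

    >>> sent_tokenizer = nltk.data.load('tokenizers/punkt/portuguese.pickle')
    >>> sentences = sent_tokenizer.tokenize(machado.raw('romance/marm05.txt'))
    >>> for sent in sentences[1000:1005]:
    ...     print('<<', sent, '>>')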
The sentence tokenizer can be trained and evaluated on other text. The source text (from the Floresta Portuguese Treebank) contains one sentence per line. We read the text, split it into its lines, and then join these lines together using spaces. Now the information about sentence breaks has been discarded. We split this material into training and testing data:
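A sketch, assuming the source text is available locally as floresta.txt, one sentence per line, in ISO 8859-1 (both the filename and the encoding are assumptions):

    >>> text = open('floresta.txt', encoding='iso-8859-1').read()
    >>> lines = text.split('\n')
    >>> train_text = ' '.join(lines[10:])    # sentence breaks discarded
    >>> test_text = ' '.join(lines[:10])     # the first ten sentences, held out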
Now we train the sentence segmenter (or sentence tokenizer) and use it on our test sentences:
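Constructing a PunktSentenceTokenizer from raw text trains it on that text:

    >>> from nltk.tokenize.punkt import PunktSentenceTokenizer
    >>> stok = PunktSentenceTokenizer(train_text)
    >>> for sent in stok.tokenize(test_text):
    ...     print(sent)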
NLTK's data collection includes a trained model for Portuguese sentence segmentation, which can be loaded as follows. It is faster to load a trained model than to retrain it.
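For example:

    >>> stok = nltk.data.load('tokenizers/punkt/portuguese.pickle')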
NLTK includes the RSLP Portuguese stemmer. Here we use it to stem some Portuguese text:
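A sketch (the stemmer's rule file is installed with nltk.download('rslp'); here we stem the tokens of psent1 from earlier):

    >>> stemmer = nltk.stem.RSLPStemmer()
    >>> [stemmer.stem(w) for w in psent1]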
NLTK includes Portuguese stopwords:
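For example (the list is installed with nltk.download('stopwords')):

    >>> stopwords = nltk.corpus.stopwords.words('portuguese')
    >>> stopwords[:10]    # the first few entries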
Now we can use these to filter text. Let's find the most frequent words (other than stopwords) and print them in descending order of frequency:
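A sketch combining the stopword list with a frequency distribution over ptext1:

    >>> fd = nltk.FreqDist(w.lower() for w in ptext1
    ...                    if w.isalpha() and w.lower() not in stopwords)
    >>> for word, freq in fd.most_common(20):
    ...     print(word, freq)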