nltk.sentiment package
Submodules
nltk.sentiment.sentiment_analyzer module
A SentimentAnalyzer is a tool to implement and facilitate Sentiment Analysis tasks using NLTK features and classifiers, especially for teaching and demonstration purposes.
class nltk.sentiment.sentiment_analyzer.SentimentAnalyzer(classifier=None)
Bases: object
A Sentiment Analysis tool based on machine learning approaches.
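A minimal end-to-end sketch of typical usage (assuming the subjectivity corpus has been downloaded; variable names are illustrative):
>>> from nltk.classify import NaiveBayesClassifier
>>> from nltk.corpus import subjectivity
>>> from nltk.sentiment import SentimentAnalyzer
>>> from nltk.sentiment.util import extract_unigram_feats, mark_negation
>>> subj_docs = [(sent, 'subj') for sent in subjectivity.sents(categories='subj')[:100]]
>>> obj_docs = [(sent, 'obj') for sent in subjectivity.sents(categories='obj')[:100]]
>>> train_docs = subj_docs[:80] + obj_docs[:80]
>>> test_docs = subj_docs[80:] + obj_docs[80:]
>>> analyzer = SentimentAnalyzer()
>>> all_words_neg = analyzer.all_words([mark_negation(doc) for doc in train_docs])
>>> unigram_feats = analyzer.unigram_word_feats(all_words_neg, min_freq=4)
>>> analyzer.add_feat_extractor(extract_unigram_feats, unigrams=unigram_feats)
>>> training_set = analyzer.apply_features(train_docs)
>>> test_set = analyzer.apply_features(test_docs)
>>> classifier = analyzer.train(NaiveBayesClassifier.train, training_set)
>>> results = analyzer.evaluate(test_set)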
add_feat_extractor(function, **kwargs)
Add a new function to extract features from a document. This function will be used in extract_features(). Important: in this step kwargs represents only the extractor's additional parameters, NOT the document to be parsed. The document is always the first parameter in the parameter list, and it is supplied by extract_features().
Parameters: - function – the extractor function to add to the list of feature extractors.
- kwargs – additional parameters required by function.
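For example, the same extractor can be registered more than once, each time with a different set of keyword arguments; every registration results in a separate call inside extract_features(). A sketch (the word lists are illustrative):
>>> from nltk.sentiment import SentimentAnalyzer
>>> from nltk.sentiment.util import extract_unigram_feats
>>> analyzer = SentimentAnalyzer()
>>> analyzer.add_feat_extractor(extract_unigram_feats, unigrams=['great', 'awful'])
>>> analyzer.add_feat_extractor(extract_unigram_feats, unigrams=['boring'], handle_negation=True)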
all_words(documents, labeled=None)
Return all words/tokens from the documents (with duplicates).
Parameters: - documents – a list of (words, label) tuples.
- labeled – if True, assume that each document is represented by a (words, label) tuple: (list(str), str). If False, each document is considered as being a simple list of strings: list(str).
Return type: list(str)
Returns: a list of all words/tokens in documents.
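A small sketch (in the current implementation, (words, label) tuples are detected automatically when labeled is None):
>>> from nltk.sentiment import SentimentAnalyzer
>>> docs = [(['good', 'movie'], 'pos'), (['bad', 'plot'], 'neg')]
>>> SentimentAnalyzer().all_words(docs)
['good', 'movie', 'bad', 'plot']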
apply_features(documents, labeled=None)
Apply all feature extractor functions to the documents. This is a wrapper around nltk.classify.util.apply_features.
- If labeled=False, return featuresets as: [feature_func(doc) for doc in documents]
- If labeled=True, return featuresets as: [(feature_func(tok), label) for (tok, label) in toks]
Parameters: - documents – a list of documents. If labeled=True, the method expects a list of (words, label) tuples.
- labeled – if True, documents are treated as (words, label) tuples; if None, the document type is detected automatically.
Return type: LazyMap
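A sketch, assuming analyzer is a SentimentAnalyzer with at least one feature extractor registered (as in the class-level example above):
>>> train_docs = [(['good', 'movie'], 'pos'), (['awful', 'plot'], 'neg')]
>>> training_set = analyzer.apply_features(train_docs)  # lazy: nothing is computed yet
>>> featureset, label = training_set[0]  # features are extracted on access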
bigram_collocation_feats(documents, top_n=None, min_freq=3, assoc_measure=BigramAssocMeasures.pmi)
Return top_n bigram features (using assoc_measure). Note that this method is based on bigram collocation measures, not on simple bigram frequency.
Parameters: - documents – a list (or iterable) of tokens.
- top_n – number of best words/tokens to use, sorted by association measure.
- assoc_measure – bigram association measure to use as score function.
- min_freq – the minimum number of occurrences of bigrams to take into consideration.
Returns: top_n ngrams scored by the given association measure.
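A hedged sketch, assuming documents is an iterable of tokenized documents from which the collocation finder is built (the tiny input is for illustration only):
>>> from nltk.sentiment import SentimentAnalyzer
>>> analyzer = SentimentAnalyzer()
>>> docs = [['global', 'warming', 'is', 'real'], ['global', 'warming', 'again']]
>>> top_bigrams = analyzer.bigram_collocation_feats(docs, top_n=5, min_freq=1)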
classify(instance)
Classify a single instance applying the features that have already been stored in the SentimentAnalyzer.
Parameters: instance – a list (or iterable) of tokens.
Returns: the classification result given by applying the classifier.
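A sketch, assuming analyzer has been trained as in the class-level example above:
>>> label = analyzer.classify('an intelligent and moving film'.split())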
evaluate(test_set, classifier=None, accuracy=True, f_measure=True, precision=True, recall=True, verbose=False)
Evaluate and print classifier performance on the test set.
Parameters: - test_set – a list of (tokens, label) tuples to use as gold set.
- classifier – a classifier instance (previously trained).
- accuracy – if True, evaluate classifier accuracy.
- f_measure – if True, evaluate classifier f-measure.
- precision – if True, evaluate classifier precision.
- recall – if True, evaluate classifier recall.
Returns: evaluation results.
Return type: dict(str): float
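Individual measures can be switched off. A sketch, reusing the trained analyzer and test_set from the class-level example above (assuming, as in the current implementation, that the accuracy score is stored under the 'Accuracy' key):
>>> results = analyzer.evaluate(test_set, f_measure=False, precision=False, recall=False)
>>> 'Accuracy' in results
True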
extract_features(document)
Apply extractor functions (and their parameters) to the present document. The document is passed as the first argument of each extractor function. To apply the same extractor function multiple times, register it with add_feat_extractor once per set of parameters (one for each intended call of the extractor function).
Parameters: document – the document that will be passed as argument to the feature extractor functions.
Returns: a dictionary of populated features extracted from the document.
Return type: dict
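A small sketch (the word list is illustrative):
>>> from nltk.sentiment import SentimentAnalyzer
>>> from nltk.sentiment.util import extract_unigram_feats
>>> analyzer = SentimentAnalyzer()
>>> analyzer.add_feat_extractor(extract_unigram_feats, unigrams=['great', 'awful'])
>>> sorted(analyzer.extract_features('the movie was great'.split()).items())
[('contains(awful)', False), ('contains(great)', True)]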
train(trainer, training_set, save_classifier=None, **kwargs)
Train classifier on the training set, optionally saving the output in the file specified by save_classifier. Additional arguments depend on the specific trainer used. For example, a MaxentClassifier accepts a max_iter parameter to specify the number of iterations, while a NaiveBayesClassifier does not.
Parameters: - trainer – train method of a classifier, e.g. NaiveBayesClassifier.train.
- training_set – the training set to be passed as argument to the classifier train method.
- save_classifier – the filename of the file where the classifier will be stored (optional).
- kwargs – additional parameters that will be passed as arguments to the classifier train method.
Returns: a classifier instance trained on the training set.
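A sketch showing optional pickling of the trained classifier (the filename is illustrative; training_set as in the class-level example above):
>>> from nltk.classify import NaiveBayesClassifier
>>> classifier = analyzer.train(NaiveBayesClassifier.train, training_set,
...                             save_classifier='sa_subjectivity.pickle')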
unigram_word_feats(words, top_n=None, min_freq=0)
Return most common top_n word features.
Parameters: - words – a list of words/tokens.
- top_n – number of best words/tokens to use, sorted by frequency.
- min_freq – only words occurring more than min_freq times are included.
Return type: list(str)
Returns: a list of top_n words/tokens (with no duplicates) sorted by frequency.
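A small deterministic sketch:
>>> from nltk.sentiment import SentimentAnalyzer
>>> SentimentAnalyzer().unigram_word_feats(['good', 'bad', 'good', 'good'], top_n=1)
['good']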
nltk.sentiment.util module
Utility methods for Sentiment Analysis.
nltk.sentiment.util.demo_liu_hu_lexicon(sentence, plot=False)
Basic example of sentiment classification using the Liu and Hu opinion lexicon. This function simply counts the number of positive, negative and neutral words in the sentence and classifies it depending on which polarity is more represented. Words that do not appear in the lexicon are considered neutral.
Parameters: - sentence – a sentence whose polarity has to be classified.
- plot – if True, plot a visual representation of the sentence polarity.
nltk.sentiment.util.demo_movie_reviews(trainer, n_instances=None, output=None)
Train classifier on all instances of the Movie Reviews dataset. The corpus has been preprocessed using the default sentence tokenizer and WordPunctTokenizer. Features are composed of:
- most frequent unigrams
Parameters: - trainer – train method of a classifier.
- n_instances – the number of total reviews that have to be used for training and testing. Reviews will be equally split between positive and negative.
- output – the output file where results have to be reported.
nltk.sentiment.util.demo_sent_subjectivity(text)
Classify a single sentence as subjective or objective using a stored SentimentAnalyzer.
Parameters: text – a sentence whose subjectivity has to be classified.
nltk.sentiment.util.demo_subjectivity(trainer, save_analyzer=False, n_instances=None, output=None)
Train and test a classifier on instances of the Subjectivity Dataset by Pang and Lee. The dataset is made of 5000 subjective and 5000 objective sentences. All tokens (words and punctuation marks) are separated by whitespace, so we use the basic WhitespaceTokenizer to parse the data.
Parameters: - trainer – train method of a classifier.
- save_analyzer – if True, store the SentimentAnalyzer in a pickle file.
- n_instances – the number of total sentences that have to be used for training and testing. Sentences will be equally split between subjective and objective.
- output – the output file where results have to be reported.
nltk.sentiment.util.demo_tweets(trainer, n_instances=None, output=None)
Train and test a Naive Bayes classifier on 10000 tweets, tokenized using TweetTokenizer. Features are composed of:
- 1000 most frequent unigrams
- 100 top bigrams (using BigramAssocMeasures.pmi)
Parameters: - trainer – train method of a classifier.
- n_instances – the number of total tweets that have to be used for training and testing. Tweets will be equally split between positive and negative.
- output – the output file where results have to be reported.
nltk.sentiment.util.demo_vader_instance(text)
Output polarity scores for a text using the VADER approach.
Parameters: text – a text whose polarity has to be evaluated.
nltk.sentiment.util.demo_vader_tweets(n_instances=None, output=None)
Classify 10000 positive and negative tweets using the VADER approach.
Parameters: - n_instances – the number of total tweets that have to be classified.
- output – the output file where results have to be reported.
nltk.sentiment.util.extract_bigram_feats(document, bigrams)
Populate a dictionary of bigram features, reflecting the presence/absence in the document of each of the tokens in bigrams. This extractor function only considers contiguous bigrams obtained by nltk.bigrams.
Parameters: - document – a list of words/tokens.
- bigrams – a list of bigrams whose presence/absence has to be checked in document.
Returns: a dictionary of bigram features {bigram : boolean}.
>>> bigrams = [('global', 'warming'), ('police', 'prevented'), ('love', 'you')]
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_bigram_feats(document, bigrams).items())
[('contains(global - warming)', True), ('contains(love - you)', False), ('contains(police - prevented)', False)]
nltk.sentiment.util.extract_unigram_feats(document, unigrams, handle_negation=False)
Populate a dictionary of unigram features, reflecting the presence/absence in the document of each of the tokens in unigrams.
Parameters: - document – a list of words/tokens.
- unigrams – a list of words/tokens whose presence/absence has to be checked in document.
- handle_negation – if True, apply mark_negation to document before checking for unigram presence/absence.
Returns: a dictionary of unigram features {unigram : boolean}.
>>> words = ['ice', 'police', 'riot']
>>> document = 'ice is melting due to global warming'.split()
>>> sorted(extract_unigram_feats(document, words).items())
[('contains(ice)', True), ('contains(police)', False), ('contains(riot)', False)]
nltk.sentiment.util.json2csv_preprocess(json_file, outfile, fields, encoding='utf8', errors='replace', gzip_compress=False, skip_retweets=True, skip_tongue_tweets=True, skip_ambiguous_tweets=True, strip_off_emoticons=True, remove_duplicates=True, limit=None)
Convert a json file of tweets to a csv file, preprocessing each row to obtain a suitable dataset for tweet Sentiment Analysis.
Parameters: - json_file – the original json file containing tweets.
- outfile – the output csv filename.
- fields – a list of fields that will be extracted from the json file and kept in the output csv file.
- encoding – the encoding of the files.
- errors – the error handling strategy for the output writer.
- gzip_compress – if True, create a compressed GZIP file.
- skip_retweets – if True, remove retweets.
- skip_tongue_tweets – if True, remove tweets containing “:P” and “:-P” emoticons.
- skip_ambiguous_tweets – if True, remove tweets containing both happy and sad emoticons.
- strip_off_emoticons – if True, strip off emoticons from all tweets.
- remove_duplicates – if True, remove tweets appearing more than once.
- limit – an integer to set the number of tweets to convert. After the limit is reached the conversion will stop. It can be useful to create subsets of the original tweets json data.
nltk.sentiment.util.mark_negation(document, double_neg_flip=False, shallow=False)
Append _NEG suffix to words that appear in the scope between a negation and a punctuation mark.
Parameters: - document – a list of words/tokens, or a tuple (words, label).
- shallow – if True, the method will modify the original document in place.
- double_neg_flip – if True, double negation is considered affirmation (negation scope is activated/deactivated every time a negation is found).
Returns: if shallow == True the method will modify the original document and return it. If shallow == False the method will return a modified document, leaving the original unmodified.
>>> sent = "I didn't like this movie . It was bad .".split()
>>> mark_negation(sent)
['I', "didn't", 'like_NEG', 'this_NEG', 'movie_NEG', '.', 'It', 'was', 'bad', '.']
nltk.sentiment.util.output_markdown(filename, **kwargs)
Write the output of an analysis to a file.
nltk.sentiment.util.parse_tweets_set(filename, label, word_tokenizer=None, sent_tokenizer=None, skip_header=True)
Parse a csv file containing tweets and return the data as a list of (text, label) tuples.
Parameters: - filename – the input csv filename.
- label – the label to be appended to each tweet contained in the csv file.
- word_tokenizer – the tokenizer instance that will be used to tokenize each sentence into tokens (e.g. WordPunctTokenizer() or BlanklineTokenizer()). If no word_tokenizer is specified, tweets will not be tokenized.
- sent_tokenizer – the tokenizer that will be used to split each tweet into sentences.
- skip_header – if True, skip the first line of the csv file (which usually contains headers).
Returns: a list of (text, label) tuples.
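A hedged sketch (the csv filename is hypothetical):
>>> from nltk.tokenize import TweetTokenizer
>>> from nltk.sentiment.util import parse_tweets_set
>>> pos_tweets = parse_tweets_set('positive_tweets.csv', label='pos',
...                               word_tokenizer=TweetTokenizer())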
nltk.sentiment.util.save_file(content, filename)
Store content in filename. Can be used to store a SentimentAnalyzer.
nltk.sentiment.util.split_train_test(all_instances, n=None)
Randomly split n instances of the dataset into train and test sets.
Parameters: - all_instances – a list of instances (e.g. documents) that will be split.
- n – the number of instances to consider (in case we want to use only a subset).
Returns: two lists of instances. Train set is 8/10 of the total and test set is 2/10 of the total.
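A small sketch of the fixed 80/20 split:
>>> from nltk.sentiment.util import split_train_test
>>> train_set, test_set = split_train_test(list(range(10)))
>>> len(train_set), len(test_set)
(8, 2)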
nltk.sentiment.vader module
If you use the VADER sentiment analysis tools, please cite:
Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
class nltk.sentiment.vader.SentiText(text)
Bases: object
Identify sentiment-relevant string-level properties of input text.
class nltk.sentiment.vader.SentimentIntensityAnalyzer(lexicon_file='sentiment/vader_lexicon.zip/vader_lexicon/vader_lexicon.txt')
Bases: object
Give a sentiment intensity score to sentences.
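A minimal sketch (requires the vader_lexicon resource; exact scores depend on the lexicon version, so only the keys of the result are shown):
>>> from nltk.sentiment.vader import SentimentIntensityAnalyzer
>>> sia = SentimentIntensityAnalyzer()
>>> scores = sia.polarity_scores("VADER is smart, handsome, and funny!")
>>> sorted(scores)
['compound', 'neg', 'neu', 'pos']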
nltk.sentiment.vader.allcap_differential(words)
Check whether just some words in the input are ALL CAPS.
Parameters: words (list) – the words to inspect.
Returns: True if some but not all items in words are ALL CAPS.
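For example:
>>> from nltk.sentiment.vader import allcap_differential
>>> allcap_differential(['I', 'AM', 'shouting'])
True
>>> allcap_differential(['ALL', 'CAPS'])
False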
nltk.sentiment.vader.negated(input_words, include_nt=True)
Determine if input contains negation words.
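For example:
>>> from nltk.sentiment.vader import negated
>>> negated(['this', 'is', 'not', 'good'])
True
>>> negated(['this', 'is', 'good'])
False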
Module contents
NLTK Sentiment Analysis Package