Package nltk :: Module evaluate

Module evaluate

Utility functions for evaluating processing modules.

Classes
  ConfusionMatrix
The confusion matrix between a list of reference values and a corresponding list of test values.
Functions

accuracy(reference, test)
    Given a list of reference values and a corresponding list of test values, return the percentage of corresponding values that are equal.

precision(reference, test) → float or None
    Given a set of reference values and a set of test values, return the percentage of test values that appear in the reference set.

recall(reference, test) → float or None
    Given a set of reference values and a set of test values, return the percentage of reference values that appear in the test set.

f_measure(reference, test, alpha=0.5) → float or None
    Given a set of reference values and a set of test values, return the f-measure of the test values, when compared against the reference values.

log_likelihood(reference, test)
    Given a list of reference values and a corresponding list of test probability distributions, return the average log likelihood of the reference values, given the probability distributions.

approxrand(a, b, **kwargs) → tuple
    Returns an approximate significance level between two lists of independently generated test values.

windowdiff(seg1, seg2, k, boundary='1') → int
    Compute the windowdiff score for a pair of segmentations.

_edit_dist_init(len1, len2)

_edit_dist_step(lev, i, j, c1, c2)

edit_dist(s1, s2) → int
    Calculate the Levenshtein edit-distance between two strings.

demo()
Variables
  betai = None
Function Details

accuracy(reference, test)


Given a list of reference values and a corresponding list of test values, return the percentage of corresponding values that are equal. In particular, return the fraction of indices 0 <= i < len(test) such that test[i] == reference[i].

Parameters:
  • reference (list) - An ordered list of reference values.
  • test (list) - A list of values to compare against the corresponding reference values.
Raises:
  • ValueError - If reference and test do not have the same length.
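The behavior described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the actual nltk.evaluate source:

```python
def accuracy(reference, test):
    # Fraction of positions i where test[i] == reference[i].
    if len(reference) != len(test):
        raise ValueError("Lists must have the same length.")
    if not reference:
        return None  # assumption: None for empty input
    equal = sum(1 for r, t in zip(reference, test) if r == t)
    return equal / len(reference)
```

For example, comparing a predicted tag sequence against a gold sequence that agrees at three of four positions yields 0.75.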

precision(reference, test)


Given a set of reference values and a set of test values, return the percentage of test values that appear in the reference set. In particular, return |reference ∩ test| / |test|. If test is empty, then return None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None

recall(reference, test)


Given a set of reference values and a set of test values, return the percentage of reference values that appear in the test set. In particular, return |reference ∩ test| / |reference|. If reference is empty, then return None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None
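The two set-based scores above can be sketched together; this is a minimal re-implementation of the documented behavior, not the nltk.evaluate source:

```python
def precision(reference, test):
    # |reference ∩ test| / |test|; None when test is empty.
    if not test:
        return None
    return len(reference & test) / len(test)

def recall(reference, test):
    # |reference ∩ test| / |reference|; None when reference is empty.
    if not reference:
        return None
    return len(reference & test) / len(reference)
```

For example, with reference = {'DT', 'NN', 'VB'} and test = {'NN', 'VB', 'JJ'}, both scores are 2/3: two of the three test values are correct, and two of the three reference values were found.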

f_measure(reference, test, alpha=0.5)


Given a set of reference values and a set of test values, return the f-measure of the test values, when compared against the reference values. The f-measure is the harmonic mean of the precision and recall, weighted by alpha. In particular, given the precision p and recall r defined by:

  • p = |reference ∩ test| / |test|
  • r = |reference ∩ test| / |reference|

The f-measure is:

  • 1/(alpha/p + (1-alpha)/r)

If either reference or test is empty, then f_measure returns None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None
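The weighted harmonic mean above can be sketched as follows. The handling of an empty intersection (returning 0.0 rather than dividing by zero) is an assumption, not documented behavior:

```python
def f_measure(reference, test, alpha=0.5):
    # Alpha-weighted harmonic mean of precision and recall.
    if not reference or not test:
        return None
    overlap = len(reference & test)
    if overlap == 0:
        return 0.0  # assumption: F = 0 when p = r = 0
    p = overlap / len(test)       # precision
    r = overlap / len(reference)  # recall
    return 1.0 / (alpha / p + (1 - alpha) / r)
```

With alpha=0.5 the score is the ordinary harmonic mean; alpha=1.0 reduces it to precision, and alpha=0.0 to recall.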

log_likelihood(reference, test)


Given a list of reference values and a corresponding list of test probability distributions, return the average log likelihood of the reference values, given the probability distributions.

Parameters:
  • reference (list) - A list of reference values
  • test (list of ProbDistI) - A list of probability distributions over values to compare against the corresponding reference values.
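A sketch of the averaging described above, using plain dicts as stand-ins for ProbDistI objects and base-2 logs (NLTK's distributions report base-2 log probabilities):

```python
import math

def log_likelihood(reference, test):
    # Average base-2 log probability that each distribution assigns
    # to the corresponding reference value.
    total = sum(math.log2(dist[value])
                for value, dist in zip(reference, test))
    return total / len(reference)
```

A distribution that concentrates more probability mass on the correct values yields an average log likelihood closer to zero.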

approxrand(a, b, **kwargs)


Returns an approximate significance level between two lists of independently generated test values.

Approximate randomization calculates significance by randomly drawing from a sample of the possible permutations. In the limit of the number of possible permutations, the significance level is exact. The approximate significance level is the fraction of shuffles in which the statistic of the permuted lists differs from the actual statistic of the unpermuted argument lists by at least as much.

Parameters:
  • a (list) - a list of test values
  • b (list) - another list of independently generated test values
Returns: tuple
a tuple containing an approximate significance level, the count of the number of times the pseudo-statistic varied from the actual statistic, and the number of shuffles
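The procedure can be sketched as below. The choice of statistic (absolute difference of means) and the `shuffles` and `seed` keyword arguments are assumptions for illustration; the documented signature only shows **kwargs:

```python
import random

def approxrand(a, b, shuffles=1000, seed=0):
    # Approximate randomization test: shuffle the pooled values,
    # re-split them, and count how often the permuted statistic is
    # at least as extreme as the observed one.
    rng = random.Random(seed)
    actual = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(shuffles):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= actual:
            count += 1
    significance = (count + 1) / (shuffles + 1)
    return significance, count, shuffles
```

Two clearly separated samples yield a small significance level, while identical samples yield a level of 1.0 (every permutation matches or exceeds a zero difference).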

windowdiff(seg1, seg2, k, boundary='1')


Compute the windowdiff score for a pair of segmentations. A segmentation is any sequence over a vocabulary of two items (e.g. "0", "1"), where the specified boundary value is used to mark the edge of a segmentation.

Parameters:
  • seg1 (string or list) - a segmentation
  • seg2 (string or list) - a segmentation
  • k (int) - window width
  • boundary (string or int or bool) - boundary value
Returns: int
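The score can be sketched as below: slide a width-k window over both segmentations and count the windows where they disagree on the number of boundaries. The exact window range is an assumption (published definitions of windowdiff vary on this off-by-one detail):

```python
def windowdiff(seg1, seg2, k, boundary="1"):
    # Count width-k windows in which the two segmentations place
    # a different number of boundary marks.
    if len(seg1) != len(seg2):
        raise ValueError("Segmentations must have equal length.")
    count = 0
    for i in range(len(seg1) - k + 1):
        if seg1[i:i + k].count(boundary) != seg2[i:i + k].count(boundary):
            count += 1
    return count
```

Identical segmentations score 0; the score grows as boundaries are missed or misplaced.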

edit_dist(s1, s2)


Calculate the Levenshtein edit-distance between two strings. The edit distance is the number of characters that need to be substituted, inserted, or deleted, to transform s1 into s2. For example, transforming "rain" to "shine" requires three steps, consisting of two substitutions and one insertion: "rain" -> "sain" -> "shin" -> "shine". These operations could have been done in other orders, but at least three steps are needed.

Parameters:
  • s1 (string) - The first string to be analysed
  • s2 (string) - The second string to be analysed
Returns: int
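The classic dynamic-programming computation can be sketched as follows (a self-contained illustration, not the module's `_edit_dist_init`/`_edit_dist_step` helpers):

```python
def edit_dist(s1, s2):
    # lev[i][j] holds the edit distance between s1[:i] and s2[:j].
    m, n = len(s1), len(s2)
    lev = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        lev[i][0] = i  # i deletions turn s1[:i] into ""
    for j in range(n + 1):
        lev[0][j] = j  # j insertions turn "" into s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            lev[i][j] = min(lev[i - 1][j] + 1,         # deletion
                            lev[i][j - 1] + 1,         # insertion
                            lev[i - 1][j - 1] + cost)  # substitution
    return lev[m][n]
```

This reproduces the example from the description: transforming "rain" into "shine" takes three operations.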