Class TabTokenizer

object --+
         |
api.TokenizerI --+
                 |
                TabTokenizer
A tokenizer that divides a string into substrings by treating any
single tab character as a separator. If you are performing the
tokenization yourself (rather than building a tokenizer to pass to some
other piece of code), consider using the string's split() method
instead:
>>> words = s.split('\t')
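
For a fuller, self-contained version of that comparison (assuming
TabTokenizer can be imported from the nltk.tokenize package), both
calls below return the same list:

>>> from nltk.tokenize import TabTokenizer  # import path assumed
>>> s = 'alpha\tbeta gamma\tdelta'
>>> s.split('\t')
['alpha', 'beta gamma', 'delta']
>>> TabTokenizer().tokenize(s)
['alpha', 'beta gamma', 'delta']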
Methods and properties not listed here are inherited from
api.TokenizerI and from object.
tokenize(s): Divide the given string into a list of substrings.
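
As a rough sketch of the documented contract (not the library's actual
source), a tokenizer with this behaviour could be written as:

# Hypothetical illustration of the documented behaviour; the class name
# TabTokenizerSketch is made up and is not part of NLTK.
class TabTokenizerSketch:
    def tokenize(self, s):
        # Every single tab character acts as a separator, so consecutive
        # tabs produce empty strings in the result.
        return s.split('\t')

For example, tokenize('a\t\tb') returns ['a', '', 'b'], because each of
the two adjacent tabs is treated as its own separator.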