SubTokenizer

Subword tokenizer based on Google's code from tensor2tensor. In addition to the Google tokenizer's behavior, it supports tags and combined tokens.

  • Tags are tokens starting with @; they are never split into parts.
  • The no-break symbol ¬ ('\xac') allows joining several words into one token.
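
For example, as plain strings (illustrative only):

tag = "@city"             # a tag token: kept whole, never split into subwords
joined = "New\xacYork"    # the no-break symbol ¬ marks these two words as one token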

The tokenizer performs Unicode normalization and escapes control characters. It can also encode rare symbols so that the subword algorithm can split them into parts.

Original Google subword tokenizer: https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/text_encoder.py

By default, before learning and tokenizing, SubTokenizer encodes all control characters as well as the @ and ¬ symbols. To use tags, run the encoding first, then add the tags, and finally learn/tokenize with encode_controls=False (or --no_encode_controls in command-line mode), as sketched below.
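
A minimal sketch of that workflow in Python (the encode_controls option is named above, but treating it as a keyword argument of tokenize is an assumption; check the library if the signature differs):

from subtokenizer import SubTokenizer

tokenizer = SubTokenizer.load("bpe.file")

# The text was already control-encoded, after which a @speaker tag was added.
line = "@speaker hello world"

# Tokenize without re-encoding, so @speaker survives as a single tag token.
# Assumption: tokenize() accepts the encode_controls option mentioned above.
tokens = tokenizer.tokenize(line, encode_controls=False)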

Install:

 pip install subtokenizer

Usage:

# learn a subword vocabulary from stdin and write it to bpe.file
cat text_file.txt | subtokenizer learn -o bpe.file -s 1000 -r reserved_tokens.txt
# tokenize stdin using the learned subwords file
cat text_file.txt | subtokenizer tokenize -s bpe.file > tokenized_file.txt
# invert the tokenization
cat tokenized_file.txt | subtokenizer detokenize -s bpe.file > text_file.txt

Or:

from subtokenizer import SubTokenizer

# words_count: a word -> frequency mapping (e.g. collections.Counter) built from the corpus
tokenizer = SubTokenizer.learn(words_count)
tokenizer.save(subwords_filename)

tokenizer = SubTokenizer.load(subwords_filename)
tokens = tokenizer.tokenize(line)     # list of subword tokens
line = tokenizer.detokenize(tokens)   # reconstructs the original line
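
Putting it together, a minimal end-to-end sketch (assuming learn expects a word-to-frequency mapping, which the argument name suggests but this README does not state):

from collections import Counter
from subtokenizer import SubTokenizer

# Assumption: learn() takes a word -> frequency mapping such as a Counter.
corpus = ["the quick brown fox", "the quick dog"]
words_count = Counter(word for sentence in corpus for word in sentence.split())

tokenizer = SubTokenizer.learn(words_count)
tokenizer.save("subwords.txt")

tokenizer = SubTokenizer.load("subwords.txt")
tokens = tokenizer.tokenize("the quick fox")
print(tokens)                        # subword tokens
print(tokenizer.detokenize(tokens))  # should round-trip back to the input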
