filter_tokens calls compactify automatically (see issue #326 and commit …4863040), so I fixed that point in the tutorial.
VorontsovIE committed Jul 25, 2017
1 parent da383bf commit 3199aa1
Showing 1 changed file with 1 addition and 4 deletions.
docs/notebooks/Corpora_and_Vector_Spaces.ipynb
@@ -354,7 +354,7 @@
"source": [
"Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.\n",
"\n",
"We are going to create the dictionary from the mycorpus.txt file without loading the entire file into memory. Then, we will generate the list of token ids to remove from this dictionary by querying the dictionary for the token ids of the stop words, and by querying the document frequencies dictionary (dictionary.dfs) for token ids that only appear once. Finally, we will filter these token ids out of our dictionary and call dictionary.compactify() to remove the gaps in the token id series."
"We are going to create the dictionary from the mycorpus.txt file without loading the entire file into memory. Then, we will generate the list of token ids to remove from this dictionary by querying the dictionary for the token ids of the stop words, and by querying the document frequencies dictionary (`dictionary.dfs`) for token ids that only appear once. Finally, we will filter these token ids out of our dictionary. Keep in mind that `dictionary.filter_tokens` (and some other functions such as `dictionary.add_document`) will call `dictionary.compactify()` to remove the gaps in the token id series thus enumeration of remaining tokens can be changed."
]
},
{
@@ -385,9 +385,6 @@
"\n",
"# remove stop words and words that appear only once\n",
"dictionary.filter_tokens(stop_ids + once_ids)\n",
"\n",
"# remove gaps in id sequence after words that were removed\n",
"dictionary.compactify()\n",
"print(dictionary)"
]
},
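For reference, here is a minimal sketch of the full workflow the updated tutorial cell describes, assuming a `mycorpus.txt` file with one document per line and an illustrative stop word list (the exact stoplist used earlier in the notebook may differ). Note that no explicit `compactify()` call is needed after `filter_tokens`:

```python
from gensim import corpora

# illustrative stop word list (an assumption, not part of this commit)
stoplist = set('for a of the and to in'.split())

# build the dictionary by streaming mycorpus.txt one line at a time,
# so the whole file never has to be loaded into memory
dictionary = corpora.Dictionary(
    line.lower().split() for line in open('mycorpus.txt')
)

# token ids of stop words that actually occur in the corpus
stop_ids = [dictionary.token2id[word] for word in stoplist
            if word in dictionary.token2id]

# token ids whose document frequency (dictionary.dfs) is exactly 1
once_ids = [token_id for token_id, doc_freq in dictionary.dfs.items()
            if doc_freq == 1]

# filter_tokens calls compactify() internally, so the surviving tokens
# are renumbered into a gapless id sequence right here
dictionary.filter_tokens(stop_ids + once_ids)
print(dictionary)
```

Because `filter_tokens` renumbers the surviving tokens internally, any ids collected before the call (such as `stop_ids` or `once_ids`) are stale afterwards and should not be reused.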
