Combine Tokenizers for better results #54
Hi, I want to make sure I understand the problem a little better. Here is my take on the issue you laid out:
From what I understand, in your case you have all these variations of data in the same element, so switching tokenizers is not an option, but if you add all the token types you have a better chance of matching. Is that correct? I do see a concatTokenizer as a useful addition for such scenarios. Just a caution on performance: as your data size grows, the additional tokens will slow down the match. But in any case, I am open to adding this to the library. If you would like to open a pull request with some unit tests, I can have this out in our next release.
Thanks for the reply.
Yes, I have a single data set where I have to match a mix of exact matches, typos, and other variations like missing spaces, so a single tokenizer was skipping some matches. Using a combination of two or three tokenizers got all the cases covered in a single run.
I understand. I have a small set of ~200 docs matched against ~500, so I hadn't considered performance, but I will keep that in mind.
Sure, I hope to do it fairly soon.
Adds support for chaining tokenizers. Fixes #54
Hi, I had problems using a single tokenizer for matching names:
- The wordSoundexEncodeTokenizer matched two different names as equal: searching for "Caputo", the MatchService returned "Caputo" and "Chabot" with the same score.
- The wordTokenizer skipped "Nikolau", whose correct match was "Nikolaou".
- The triGramTokenizer skipped "Leao", even though there was a direct match with "Rafael Leao".
I found a temporary solution by concatenating the tokenizers with a custom method:
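Roughly along these lines (a minimal sketch, assuming the tokenizers are plain `Function<Element<String>, Stream<Token<String>>>` instances as in fuzzy-matcher's generic API; the name `concatTokenizers` is just illustrative):

```java
import java.util.Arrays;
import java.util.function.Function;
import java.util.stream.Stream;

import com.intuit.fuzzymatcher.domain.Element;
import com.intuit.fuzzymatcher.domain.Token;

public class TokenizerUtils {

    // Concatenates the token streams produced by every supplied tokenizer,
    // so an element is indexed by all of them at once (exact words, soundex
    // codes, trigrams, ...). Duplicate tokens are not filtered here.
    @SafeVarargs
    public static Function<Element<String>, Stream<Token<String>>> concatTokenizers(
            Function<Element<String>, Stream<Token<String>>>... tokenizers) {
        return element -> Arrays.stream(tokenizers)
                .flatMap(tokenizer -> tokenizer.apply(element));
    }
}
```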
and using it like this:
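(Again a sketch; the `Element.Builder` method names follow the library's builder API as I understand it.)

```java
import com.intuit.fuzzymatcher.domain.Element;
import com.intuit.fuzzymatcher.domain.ElementType;
import com.intuit.fuzzymatcher.function.TokenizerFunction;

// Build an element that is indexed by all three tokenizers at once.
Element<String> name = new Element.Builder<String>()
        .setType(ElementType.NAME)
        .setValue("Nikolaou")
        .setTokenizerFunction(TokenizerUtils.concatTokenizers(
                TokenizerFunction.wordTokenizer(),
                TokenizerFunction.wordSoundexEncodeTokenizer(),
                TokenizerFunction.triGramTokenizer()))
        .createElement();
```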
I'm not sure if this is the correct approach, but I hope the function is helpful to others with the same problem. If the approach is correct, I would like to have it added to the library.
The results after using the function were as expected and all items matched perfectly. Still, I would suggest a further development: if possible, either give each tokenizer a weight, or try the tokenizers in order and fall back to the next one only when the previous gives no results, so that exact matches are prioritized over like-sounding ones that may carry the same score. A rough sketch of that fallback idea follows.
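Something like this could approximate the ordered fallback outside the library (a rough, untested sketch: `buildDocuments` and the pass loop are hypothetical helpers of mine, and it assumes `applyMatchByDocId` returns a map keyed by document id):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import com.intuit.fuzzymatcher.component.MatchService;
import com.intuit.fuzzymatcher.domain.*;
import com.intuit.fuzzymatcher.function.TokenizerFunction;

public class FallbackMatchDemo {

    // Hypothetical helper: one single-element Document per value, with the
    // document id set to the value itself and the given tokenizer applied.
    static List<Document> buildDocuments(Collection<String> values,
            Function<Element<String>, Stream<Token<String>>> tokenizer) {
        return values.stream()
                .map(v -> new Document.Builder(v)
                        .addElement(new Element.Builder<String>()
                                .setType(ElementType.NAME)
                                .setValue(v)
                                .setTokenizerFunction(tokenizer)
                                .createElement())
                        .createDocument())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Tokenizers ordered from strictest to fuzziest.
        List<Function<Element<String>, Stream<Token<String>>>> ordered = List.of(
                TokenizerFunction.wordTokenizer(),
                TokenizerFunction.wordSoundexEncodeTokenizer(),
                TokenizerFunction.triGramTokenizer());

        Set<String> unmatched = new LinkedHashSet<>(
                List.of("Caputo", "Chabot", "Nikolau", "Nikolaou", "Leao", "Rafael Leao"));
        Map<String, List<Match<Document>>> results = new HashMap<>();

        // Each pass re-runs only the values that found no match so far, so an
        // exact word match wins before soundex or trigrams ever get a say.
        for (Function<Element<String>, Stream<Token<String>>> tokenizer : ordered) {
            if (unmatched.isEmpty()) break;
            Map<String, List<Match<Document>>> pass = new MatchService()
                    .applyMatchByDocId(buildDocuments(unmatched, tokenizer));
            results.putAll(pass);
            unmatched.removeAll(pass.keySet());
        }

        results.forEach((id, matches) -> System.out.println(id + " -> " + matches));
    }
}
```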