Combine Tokenizers for better results #54

Closed
dogeweb opened this issue Sep 16, 2021 · 2 comments

dogeweb commented Sep 16, 2021

Hi, I had problems using a single tokenizer for matching names.
The wordSoundexEncodeTokenizer was matching two different names as equal: I was matching "Caputo" and the MatchService returned both "Caputo" and "Chabot" with an equal score.
The wordTokenizer was skipping "Nikolau" when the correct match was "Nikolaou".
The triGramTokenizer was skipping "Leao", even though there was a direct match with "Rafael Leao".

I found a temporary solution by concatenating the Tokenizers with a custom method:

import java.util.Arrays;
import java.util.function.Function;
import java.util.stream.Stream;

@SafeVarargs
public static <T> Function<Element<T>, Stream<Token<T>>> concatTokenizers(Function<Element<T>, Stream<Token<T>>>... funct) {
    // Apply every tokenizer to the element and merge all of the resulting token streams
    return element -> Arrays.stream(funct).flatMap(fun -> fun.apply(element));
}

and using it like this:

                .setTokenizerFunction(concatTokenizers(
                        TokenizerFunction.wordTokenizer(),
                        TokenizerFunction.wordSoundexEncodeTokenizer(),
                        TokenizerFunction.triGramTokenizer()
                ))

I'm not sure if this is the correct approach, but I hope the function is helpful to others with the same problem.
If the solution is correct, I would like to see it added to the library.

The results after using the function were as expected and all items matched perfectly. As a further development I would suggest, if possible, giving a weight to each tokenizer, or listing the tokenizers in priority order so that when one gives no results the next one is used, in order to prioritize exact matches over like-sounding matches that may end up with the same score (see the sketch below).
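
A minimal sketch of that fallback idea (hypothetical, not part of the library; it reuses the same Element/Token/Function types as concatTokenizers above, plus java.util.List and java.util.stream.Collectors):

@SafeVarargs
public static <T> Function<Element<T>, Stream<Token<T>>> fallbackTokenizers(Function<Element<T>, Stream<Token<T>>>... funct) {
    // Try the tokenizers in the given order and return the tokens from the
    // first one that produces any; later tokenizers are not evaluated.
    return element -> Arrays.stream(funct)
            .map(fun -> fun.apply(element).collect(Collectors.toList()))
            .filter(tokens -> !tokens.isEmpty())
            .findFirst()
            .map(List::stream)
            .orElseGet(Stream::empty);
}

Note this only falls through when a tokenizer produces no tokens at all; prioritizing based on whether the tokens actually produced a match, or weighting them, would need support inside the match engine itself.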

@manishobhatia
Contributor

Hi,
Thanks for taking an interest in this project. I like the idea of concatTokenizer, and I think it can be a useful addition.

I do want to make sure I understand the problem a little better. Here is my take on the issues you laid out:

  • "Caputo" and "Chabot" matched equal with wordSoundexEncodeTokenizer : This relies on Apache Soundex library which tries to find words with similar phonemes. I can see in this case it might have generate the same code for both these words
  • "Nikolau" and "Nikolaou" did not match with wordTokenizer : This again is expected, as it is doing a string equals. But "wordSoundexEncodeTokenizer" would been a better fit here
  • "Leao" and "Rafael Leao" did not match with triGramTokenizer: I think it would have matched, but did not go above the default threshold. This function breaks the words in tokens of 3 , and since the "Leao" will generate less tokens [lea, eao] (2) compared with "Rafael Leao" [Raf, afa, fae, ael, el , l L, Le, Lea, eao] (9). This match score will only be around 2/9 . I think in this case either changing the threshold or using the above two tokenizer will be better

But from what I understand, in your case you probably have all these variations of data in the same element, and changing Tokenizers is not an option, but if you add all types of tokens you have a better chance of matching. Is that correct?

I do see concatTokenizer as a useful addition for such scenarios. Just a caution on performance: as your data size grows large, the additional tokens will slow down the match.

But in any case, I am open to adding this to the library. If you would like to open a Pull Request with some unit tests, I can have this out in our next release.


dogeweb commented Sep 18, 2021

Thanks for the reply.

But from what I understand, in your case you probably have all these variations of data in the same element, and changing Tokenizers is not an option, but if you add all types of tokens you have a better chance of matching. Is that correct?

Yes, I have a single data set where I have to match a mix of exact matches, typos, and other variations like missing spaces, so using a single tokenizer was skipping some matches. Using a combination of two or three tokenizers covered all the cases in a single run.

Just a caution on performance: as your data size grows large, the additional tokens will slow down the match.

I understand; I have a small set of ~200 docs matched against ~500, so I didn't consider performance. I will keep that in mind.

If you would like to open a Pull Request with some unit tests, I can have this out in our next release.

Sure, I hope to do it relatively soon.

manishobhatia added a commit that referenced this issue Oct 14, 2021
Adds support for chaining tokenizers. Fixes #54