Hi, I'm interested in your project, and I'm wondering how you handle the variable sequence length issue.
I assume each sign language sample has a different length, so the data would need to be padded and masked before training the LSTM. Is that right?
But I can't find a padding or masking step in the code.
Is there a padding/masking step somewhere that I missed? Could you give me some insight on this?
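
For reference, this is roughly the kind of padding and masking step I expected to see before the LSTM. It's a minimal Keras sketch with assumed shapes and names (`num_features`, `num_classes`, random data), not your actual code:

```python
# A minimal sketch (assumed names and shapes, not this repo's actual code)
# of the usual Keras approach: pad every sample to a common length, then use
# a Masking layer so the LSTM ignores the padded timesteps.
import numpy as np
from tensorflow.keras import layers, models

num_features = 126   # e.g. flattened per-frame keypoints (assumption)
num_classes = 10     # number of sign classes (assumption)

# Hypothetical data: each sample is a (timesteps, num_features) array,
# and timesteps varies from sample to sample.
sequences = [np.random.rand(np.random.randint(20, 60), num_features) for _ in range(8)]
labels = np.random.randint(0, num_classes, size=len(sequences))

# Pad all samples with zeros up to the longest length.
max_len = max(s.shape[0] for s in sequences)
padded = np.zeros((len(sequences), max_len, num_features), dtype="float32")
for i, s in enumerate(sequences):
    padded[i, : s.shape[0], :] = s

model = models.Sequential([
    layers.Input(shape=(None, num_features)),
    # mask_value must match the padding value so padded timesteps are skipped.
    layers.Masking(mask_value=0.0),
    layers.LSTM(64),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(padded, labels, epochs=1, verbose=0)
```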