Issue in dataloader -- total perplexity in paper incorrect #4

Open
UsmannK opened this issue Apr 28, 2021 · 1 comment
UsmannK commented Apr 28, 2021

Hi, your dataloader uses the default collate_fn from PyTorch 0.4.1, which truncates every batch to the length of the shortest sentence in that batch. As a result, your experiments only operate on about 20% of each dataset.

When fixed, this increases perplexity to about 300 for both FedAtt and FedAvg on the Penn Treebank dataset.

UsmannK commented Apr 28, 2021

Can you provide the parameters used to reproduce the results in the paper? Epochs, local epochs, learning rate, etc.
