
Fix Failed tests with mobile bert resize tokens embedding #33950

Merged
merged 13 commits into huggingface:main
Oct 9, 2024

Conversation

abuelnasr0
Contributor

@abuelnasr0 abuelnasr0 commented Oct 4, 2024

Fixes the failures introduced by #33325

The tests failed with mobilebert because of a missing transpose of the old_lm_head; this PR fixes that. I have run the two failing tests locally.
It's weird that all tests passed before merging. EDIT: I see now, some tests were skipped.
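To illustrate the transpose issue: mobilebert stores its decoder weight as (hidden_size, vocab_size), the transpose of the usual (vocab_size, hidden_size) layout, so a resize helper that copies rows directly silently mismatches shapes. Below is a minimal numpy sketch of the idea, not the actual transformers code; the shapes and the `resize_lm_head` helper are hypothetical.

```python
import numpy as np

hidden, old_vocab, new_vocab = 8, 10, 12
# Hypothetical mobilebert-style weight stored as (hidden, vocab).
old_lm_head = np.arange(hidden * old_vocab, dtype=np.float32).reshape(hidden, old_vocab)

def resize_lm_head(old_w: np.ndarray, new_vocab: int, transposed: bool = True) -> np.ndarray:
    """Resize an LM head, transposing first when the weight is stored (hidden, vocab)."""
    w = old_w.T if transposed else old_w          # -> (vocab, hidden)
    new_w = np.zeros((new_vocab, w.shape[1]), dtype=w.dtype)
    n = min(new_vocab, w.shape[0])
    new_w[:n] = w[:n]                             # copy the overlapping token rows
    return new_w.T if transposed else new_w       # restore the original layout

resized = resize_lm_head(old_lm_head, new_vocab)
assert resized.shape == (hidden, new_vocab)
assert np.array_equal(resized[:, :old_vocab], old_lm_head)
```

Without the initial transpose, the row copy would run along the hidden dimension instead of the vocabulary dimension, which is the class of failure the PR describes.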

I have also changed the logic for when the covariance matrix is not positive definite: in that case, the new embeddings are simply initialized with the mean.
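The fallback described above can be sketched as follows. This is a minimal numpy illustration of the idea (fit a multivariate normal to the old embeddings, and fall back to the plain mean when the scaled covariance is not positive definite), not the transformers implementation; the function name and the `sigma_scale` default are illustrative.

```python
import numpy as np

def init_new_embeddings(old_embeddings: np.ndarray, num_new: int,
                        sigma_scale: float = 1e-9) -> np.ndarray:
    """Sample rows for new tokens from a normal fitted to the old embeddings.

    Falls back to the plain mean when the scaled covariance is not
    positive definite, mirroring the fallback described in the PR.
    """
    mean = old_embeddings.mean(axis=0)
    covariance = sigma_scale * np.cov(old_embeddings, rowvar=False)
    try:
        # Cholesky succeeds only for positive-definite matrices.
        np.linalg.cholesky(covariance)
        rng = np.random.default_rng(0)
        return rng.multivariate_normal(mean, covariance, size=num_new)
    except np.linalg.LinAlgError:
        # Not positive definite: just replicate the mean.
        return np.tile(mean, (num_new, 1))

# Degenerate case: identical rows give a zero covariance, so the mean is used.
flat = np.ones((5, 4))
assert np.allclose(init_new_embeddings(flat, 3), 1.0)
```

The small `sigma_scale` keeps the sampled rows close to the mean, which relates to the outlier issue discussed later in the thread.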

c.c. @ArthurZucker

@abuelnasr0 abuelnasr0 changed the title Fix Failed tests with mobile bert Fix Failed tests with mobile bert resize tokens embedding Oct 4, 2024
Collaborator

@ArthurZucker ArthurZucker left a comment


Sounds good to me; why were the tests skipped?

@ArthurZucker
Collaborator

Ah, could you update `tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py::RecurrentGemmaModelTest::test_resize_tokens_embeddings` as well?

@ArthurZucker
Collaborator

Ah, and `tests/models/git/test_modeling_git.py::GitModelTest::test_resize_tokens_embeddings` (`AssertionError: Padding_idx must be within num_embeddings`) as well.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@abuelnasr0
Contributor Author

@ArthurZucker I have addressed the mobilebert test, the GitModel test, and the recurrent_gemma test.

The recurrent_gemma test failed because an outlier was sampled, so I multiplied the covariance by 1e-9 instead of 1e-5.
The Git model tests failed because the config was overwritten after the first resizing test and then reused to initialize the model again, so I created a new copy of the config for the new model initialization.
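The config-overwrite fix can be sketched with plain stdlib code: copy the config before the test mutates it in place, so a later model initialization still sees the original values. The `DummyConfig` class and `resize_test` helper here are hypothetical stand-ins for the transformers objects.

```python
import copy

class DummyConfig:
    """Hypothetical stand-in for a transformers config object."""
    def __init__(self, vocab_size: int = 99, pad_token_id: int = 98):
        self.vocab_size = vocab_size
        self.pad_token_id = pad_token_id

def resize_test(config: DummyConfig) -> None:
    # The resizing test mutates the config in place, like the GIT test did.
    config.vocab_size = 4
    config.pad_token_id = 3

original = DummyConfig()
# Without a deep copy, a second model built from `original` would
# inherit the mutated values from the first test run.
fresh = copy.deepcopy(original)
resize_test(fresh)
assert original.vocab_size == 99   # untouched by the test
assert fresh.vocab_size == 4       # only the copy was mutated
```

A shallow copy would also work for flat attribute-only configs, but `deepcopy` is the safer default when configs hold nested objects.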

@abuelnasr0
Contributor Author

abuelnasr0 commented Oct 4, 2024

I actually feel bad about those test failures; I wanted to deliver good code, but the tests didn't help me. 😅

The recurrent_gemma outlier got sampled only after the code was merged, haha.

And I am not sure whether the other tests were actually skipped before merging; it's weird. Do you know why the mobilebert and GitModelTest failures appeared only after merging?

@ArthurZucker
Collaborator

Yep! They are not part of the important models, and the test fetcher seems to be misbehaving; it should have discovered the whole dependency chain!

No worries, we are the ones who set you up for failure in that case!

cc @ydshieh if you can have a look at the reasons why this was not fetched when you have time!

Collaborator

@ArthurZucker ArthurZucker left a comment


thanks 🤗

```python
config.vocab_size = 4
config.pad_token_id = 3
```
Collaborator


Why was this failing?

Contributor Author


The pad_token_id in the config is 98 for GitModel. This results in an error in the embedding layer because it is higher than the vocab_size.
The error appeared in GitModel only after fixing the bug caused by overwriting the configuration.

@ydshieh
Collaborator

ydshieh commented Oct 7, 2024

> Yep! They are not part of the important model, the test fetcher seems to badly behave! It should have found out the whole dependencies!
>
> No worries, we are the ones who set you up for failure in that case!
>
> cc @ydshieh if you can have a look at the reasons why this was not fetched when you have time!

Could one of you provide a link to the previous (PR) job run page where you believe something was missed by the test fetcher?

@ArthurZucker
Collaborator

In the meantime @abuelnasr0, could you commit with the message `test_all`? 🤗

@ArthurZucker
Collaborator

This way all model tests should be run!

@ArthurZucker
Collaborator

[screenshot of CI results] A few failing tests still!

@ArthurZucker
Collaborator

Use `RUN_SLOW=1 pytest -n 4 tests/models/*/test_modeling_* -k test_resize_tokens_embeddings` to filter!

@gante
Member

gante commented Oct 9, 2024

I hope you don't mind @abuelnasr0 -- this PR is blocking other PRs, so I'm taking care of the rest of the fixes 🤗

@gante gante merged commit cdee528 into huggingface:main Oct 9, 2024
24 checks passed
@gante
Member

gante commented Oct 9, 2024

(@ArthurZucker -- all tests in `pytest tests/models/ -k test_resize_tokens_embeddings` were green)

@abuelnasr0
Contributor Author

@gante No problem, that is completely fine.
I am sorry for blocking other PRs; thanks for the fixes!

@ArthurZucker
Collaborator

@gante thanks for fixing it!

BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
…e#33950)

* Fix Failed tests with mobile bert

* Cast to the correct dtype

* Code fixup

* Fix padding_idx larger that embedding_size

* Reduce covariance more. use 1e-7 instead of 1e-5

* Comment fix

* Reduce covariance more. use 1e-9 instead of 1e-7

* Copy new config

* all but MRA fixed

* fix mra

* very flaky

* skip instead

* make fixup

---------

Co-authored-by: Joao Gante <[email protected]>