stack_llama: update instructions in README, fix broken _get_submodules and save tokenizer #358
Conversation
This looks good to me! Indeed, in the latest release of peft the _get_submodules method has been moved to peft.utils. Can you maybe add a note to the README you modified that the script needs to be run with peft>=0.3.0?
Wow, you are super quick! Thanks
I'd like to say thank you for putting this repo together. I have been concerned about the likes of ChatGPT taking away the ability to fine-tune models for specific tasks, and this is a great piece of work towards democratizing NLP.
Thanks again!
Same problem, but my peft==0.3.0
The instructions for merge_peft_adapter.py in the README are out of date; _get_submodules has moved in newer peft releases.
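For context on the "save tokenizer" part of the PR title, below is a hedged sketch of merging a LoRA adapter and saving the tokenizer next to the merged weights. The model name, adapter path, and output directory are placeholders, and merge_and_unload is the generic peft merge helper, not necessarily the exact approach the updated script takes.

```python
# Hedged sketch of a merge-and-save flow; names and paths are placeholders,
# and this is not necessarily the exact approach merge_peft_adapter.py takes.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "huggyllama/llama-7b"  # placeholder
adapter_path = "path/to/lora-adapter"    # placeholder
output_dir = "merged-model"              # placeholder

base = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()  # fold the LoRA weights into the base model

model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)  # save the tokenizer alongside the merged model
```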