Is there an existing issue for this?

Current Behavior

The P-Tuning method is used to fine-tune on the ADGEN dataset with the default hyperparameters. How can I fix this?

Deleting `--quantization_bit 4` works for me, but I need to train with quantization.

Expected Behavior

No response

Steps To Reproduce

bash train.sh

Environment

- OS: Linux
- Python: 3.8
- Transformers: 4.30.2
- PyTorch: 2.0.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True

Anything else?

No response
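The traceback itself is not shown above, but as the reply below points out, the usual cause of a failure with `--quantization_bit 4` is a missing cpm_kernels package. A minimal diagnostic sketch (only the package name is assumed here):

# Check whether cpm_kernels can be imported; without it,
# ChatGLM's quantized training path errors out.
try:
    import cpm_kernels  # noqa: F401
    print("cpm_kernels is available; quantized training should work.")
except ImportError:
    print("cpm_kernels is missing; run `pip install cpm_kernels`.")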
Quantization relies on cpm_kernels. Install it with:

pip install cpm_kernels
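Once cpm_kernels is installed, quantization can be sanity-checked outside of train.sh before rerunning the full fine-tune. A minimal sketch, assuming the THUDM/chatglm-6b checkpoint and the quantize() helper exposed by its remote modeling code:

# Sanity-check 4-bit quantization in isolation.
# Assumes the THUDM/chatglm-6b checkpoint; quantize() comes from the
# model's remote code, loaded via trust_remote_code=True.
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = model.quantize(4).half().cuda()  # mirrors --quantization_bit 4
print("4-bit quantization initialized without errors.")

If this runs cleanly, rerunning `bash train.sh` with `--quantization_bit 4` should no longer fail on the quantization step.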
It works. Thanks a lot!
Thanks!
Quantization relies on cpm_kernels. Install it with `pip install cpm_kernels`.
It really works. Thanks.