Where does the pre-trained BERT model get cached on my system by default? #2323
Comments
AFAIK, the cache folder is hidden. You can download the files manually and save them to a location of your choice; the two files to download are config.json and the .bin weights file. You can then instantiate BERT (or any other model) from that directory via from_pretrained.
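A minimal sketch of the manual route described above, assuming the weights file is named pytorch_model.bin (the conventional name, but an assumption here) and both files sit in one local directory:

```python
from pathlib import Path

# Assumption: config.json plus a pytorch_model.bin weights file are the two
# files from_pretrained needs when pointed at a local directory.
REQUIRED = {"config.json", "pytorch_model.bin"}

def is_loadable(model_dir: str) -> bool:
    """Return True if the directory holds the files from_pretrained expects."""
    path = Path(model_dir)
    present = {p.name for p in path.iterdir()} if path.is_dir() else set()
    return REQUIRED <= present

# With the files in place, load from the directory instead of the model name:
# from transformers import BertModel
# model = BertModel.from_pretrained("./bert-base-uncased-local")
```

The directory path above is illustrative; any folder containing both files should work.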
Each file in the cache comes with a .json file describing what's inside. This isn't part of transformers' public API and may change at any time in the future. Anyway, here's how you can locate a specific file:
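A hedged sketch of such a lookup, assuming the old-style cache layout where each cached blob sits next to a .json sidecar whose "url" field records the download source (an internal detail that may differ between transformers versions):

```python
import json
from pathlib import Path

# Assumption: each cached blob <hash> has a sidecar <hash>.json containing a
# "url" key. This mirrors the old cache layout and is not a stable API.
def find_cached_file(cache_dir: str, needle: str):
    """Return the cached blob whose sidecar URL contains `needle`, else None."""
    for sidecar in Path(cache_dir).glob("*.json"):
        try:
            meta = json.loads(sidecar.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        if needle in meta.get("url", ""):
            return sidecar.with_suffix("")  # blob shares the sidecar's stem
    return None
```

For example, `find_cached_file(cache_dir, "bert-base-uncased")` would surface the blob downloaded for that model, if present.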
The discussion in #2157 could be useful too.
For anyone who landed here wondering if one can globally change the cache directory: set |
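The variable name was stripped from the comment above. In recent transformers releases the cache can be redirected with the TRANSFORMERS_CACHE environment variable (HF_HOME relocates the whole Hugging Face home); treat the exact names as version-dependent. A sketch, with an illustrative target path:

```python
import os

# Assumption: transformers reads TRANSFORMERS_CACHE at import time, so this
# must run before `import transformers`. The destination path is illustrative.
os.environ["TRANSFORMERS_CACHE"] = os.path.expanduser("~/my_models_cache")

# import transformers  # will now cache downloads under ~/my_models_cache
```

Setting the variable in your shell profile instead achieves the same effect process-wide.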
You can find it the same way transformers does:
|
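A hedged, stdlib-only sketch of the resolution logic the library itself applies; the defaults assumed here (~/.cache/huggingface as the Hugging Face home, with a "hub" subfolder) match recent versions but have changed over time:

```python
import os

# Assumption: TRANSFORMERS_CACHE wins if set; otherwise the cache lives under
# HF_HOME (default ~/.cache/huggingface) in a "hub" subfolder.
def default_transformers_cache() -> str:
    """Resolve the cache directory the way recent transformers versions do."""
    hf_home = os.environ.get(
        "HF_HOME", os.path.join(os.path.expanduser("~"), ".cache", "huggingface")
    )
    return os.environ.get("TRANSFORMS_CACHE".replace("TRANSFORMS", "TRANSFORMERS"),
                          os.path.join(hf_home, "hub"))
```

Printing the result of this function shows where downloads will land on your machine.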
For me, Hugging Face changed the default cache folder to:
|
Thank you, this worked for me! Note that I had to remove the |
Note that hf_bucket_url has since been removed, so that import no longer works: ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (see #22390).
❓ Questions & Help
I used model_class.from_pretrained('bert-base-uncased') to download and use the model. The next time I use this command, it picks up the model from the cache. But when I look in the cache, I see several files over 400 MB with long random names. How do I know which is the bert-base-uncased or distilbert-base-uncased model? Maybe I am looking in the wrong place.
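One way to sidestep the opaque hashed names entirely: from_pretrained accepts a cache_dir argument, so each model can be given its own human-readable directory. The helper and paths below are illustrative, not part of the library:

```python
from pathlib import Path

# Hypothetical helper: map a model name to its own readable cache folder,
# so files are identifiable by path rather than by hashed file names.
def cache_dir_for(model_name: str, root: str = "model_cache") -> str:
    """Return a per-model cache sub-directory under `root`."""
    return str(Path(root) / model_name.replace("/", "--"))

# model = model_class.from_pretrained("bert-base-uncased",
#                                     cache_dir=cache_dir_for("bert-base-uncased"))
```

With this, everything under model_cache/bert-base-uncased belongs to that one model.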