Releases: huggingface/huggingface_hub
v0.28.1: FIX path in `HF_ENDPOINT` discarded
Release 0.28.0 introduced a bug that made it impossible to set the `HF_ENDPOINT` environment variable to a value containing a subpath. This has been fixed in #2807.
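As a minimal sketch of the fixed behavior (the mirror URL is hypothetical), the variable must be set before importing huggingface_hub since the endpoint is read at import time:
import os

os.environ["HF_ENDPOINT"] = "https://mirror.example.com/hf-proxy"  # hypothetical endpoint with a subpath

from huggingface_hub import hf_hub_download

config_path = hf_hub_download(repo_id="gpt2", filename="config.json")  # resolved against the custom endpoint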
Full Changelog: v0.28.0...v0.28.1
[v0.28.0]: Third-party Inference Providers on the Hub & multiple quality of life improvements and bug fixes
⚡️Unified Inference Across Multiple Inference Providers
The `InferenceClient` now supports third-party providers, offering a unified interface to run inference across multiple services while leveraging models from the Hugging Face Hub. This update enables developers to:
- 🌐 Switch providers seamlessly - Transition between inference providers with a single interface.
- 🔗 Unified model IDs - Always reference Hugging Face Hub model IDs, even when using external providers.
- 🔑 Simplified billing and access management - You can use your Hugging Face Token for routing to third-party providers (billed through your HF account).
A list of supported third-party providers can be found here.
Example of text-to-image inference with Replicate:
>>> from huggingface_hub import InferenceClient
>>> replicate_client = InferenceClient(
...     provider="replicate",
...     api_key="my_replicate_api_key",  # Using your personal Replicate key
... )
>>> image = replicate_client.text_to_image(
...     "A cyberpunk cat hacking neural networks",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("cybercat.png")
Another example of chat completion with Together AI:
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
... provider="together", # Use Together AI provider
... api_key="<together_api_key>", # Pass your Together API key directly
... )
>>> client.chat_completion(
... model="deepseek-ai/DeepSeek-R1",
... messages=[{"role": "user", "content": "How many r's are there in strawberry?"}],
... )
When using external providers, you can choose between two access modes: either use the provider's native API key, as shown in the examples above, or route calls through Hugging Face infrastructure (billed to your HF account):
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",
...     token="hf_****",  # Your Hugging Face token
... )
🔜 New providers/models/tasks will be added iteratively in the future.
👉 You can find a list of supported tasks per provider and more details here.
- [InferenceClient] Add third-party providers support by @hanouticelina in #2757
- Unified `prepare_request` method + class-based providers by @Wauplin in #2777
- [InferenceClient] Support proxy calls for 3rd party providers by @hanouticelina in #2781
- [InferenceClient] Add `text-to-video` task and update supported tasks and models by @hanouticelina in #2786
- Add type hints for providers by @Wauplin in #2788
- [InferenceClient] Update inference documentation by @hanouticelina in #2776
- Add text-to-video to supported tasks by @Wauplin in #2790
✨ HfApi
The following change aligns the client with server-side updates by adding the new repository properties `usedStorage` and `resourceGroup`.
[HfApi] update list of repository properties following server side updates by @hanouticelina in #2728
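A minimal sketch, assuming the new properties can be requested through `model_info`'s `expand` parameter (the repo id is hypothetical):
from huggingface_hub import HfApi

api = HfApi()
# Assumed usage: explicitly request the new server-side properties
info = api.model_info("username/my-model", expand=["usedStorage", "resourceGroup"])
print(info)  # the returned ModelInfo should now carry the usedStorage and resourceGroup properties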
Extends empty commit prevention to file copy operations, preserving clean version histories when no changes are made.
[HfApi] prevent empty commits when copying files by @hanouticelina in #2730
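A minimal sketch (repo id and file paths are hypothetical): a copy operation that would not change anything no longer produces an empty commit.
from huggingface_hub import CommitOperationCopy, HfApi

api = HfApi()
api.create_commit(
    repo_id="username/my-model",  # hypothetical repo
    operations=[
        CommitOperationCopy(
            src_path_in_repo="weights.safetensors",
            path_in_repo="backup/weights.safetensors",
        )
    ],
    commit_message="Copy weights to backup/",  # no commit is created if the copy changes nothing
)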
🌐 📚 Documentation
Thanks to @WizKnight, the Hindi translation is much better!
Improved Hindi Translation in Documentation📝 by @WizKnight in #2697
💔 Breaking changes
The `like` endpoint has been removed to prevent misuse. You can still remove existing likes using the `unlike` endpoint.
[HfApi] remove `like` endpoint by @hanouticelina in #2739
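A minimal sketch of removing an existing like (the repo id is hypothetical):
from huggingface_hub import HfApi

HfApi().unlike("username/some-model")  # removes your like from the repo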
🛠️ Small fixes and maintenance
😌 QoL improvements
- [InferenceClient] flag `chat_completion()`'s `logit_bias` as UNUSED by @hanouticelina in #2724
- Remove unused parameters from method's docstring by @hanouticelina in #2738
- Add optional rejection_reason when rejecting a user access token by @Wauplin in #2758
- Add `py.typed` to be compliant with PEP-561 again by @hanouticelina in #2752
🐛 Bug and typo fixes
- Fix super_squash_history revision not urlencoded by @Wauplin in #2795
- Replace model repo with repo in docstrings by @albertvillanova in #2715
- [BUG] Fix 404 NOT FOUND issue caused by endpoint tail slash by @Mingqi2 in #2721
- Fix `typing.get_type_hints` call on a `ModelHubMixin` by @aliberts in #2729
- fix typo by @qwertyforce in #2762
- rejection reason docstring by @Wauplin in #2764
- Add timeout to WeakFileLock by @Wauplin in #2751
- Fix `CardData.get()` to respect default values when `None` by @hanouticelina in #2770
- Fix RepoCard.load when passing a repo_id that is also a dir path by @Wauplin in #2771
- Fix filename too long when downloading to local folder by @Wauplin in #2789
🏗️ internal
- Migrate to new Ruff "2025 style guide" formatter by @hanouticelina in #2749
- remove org tokens tests by @hanouticelina in #2759
- Fix `RepoCard` test on Windows by @hanouticelina in #2774
- [Bot] Update inference types by @HuggingFaceInfra in #2712
[v0.27.1]: Fix `typing.get_type_hints` call on a `ModelHubMixin`
Full Changelog: v0.27.0...v0.27.1
See #2729 for more details.
[v0.27.0] DDUF tooling, torch model loading helpers & multiple quality of life improvements and bug fixes
📦 Introducing DDUF tooling
DDUF (DDUF's Diffusion Unified Format) is a single-file format for diffusion models that aims to unify the different model distribution methods and weight-saving formats by packaging all model components into a single file. Detailed documentation will be available soon.
The `huggingface_hub` library now provides tooling to handle DDUF files in Python. It includes helpers to read and export DDUF files, and built-in rules to validate file integrity.
How to write a DDUF file?
>>> from huggingface_hub import export_folder_as_dduf
# Export "path/to/FLUX.1-dev" folder as a DDUF file
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
How to read a DDUF file?
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file
# Read DDUF metadata (only metadata is loaded, lightweight operation)
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")
# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)
# Load the `model_index.json` content
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}
# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
... state_dict = safetensors.torch.load(mm)
👉 More details about the API in the documentation here.
💾 Serialization
Following the introduction of the torch serialization module in `0.22.*` and the support of saving torch state dicts to disk in `0.24.*`, we now provide helpers to load torch state dicts from disk.
By centralizing these functionalities in `huggingface_hub`, we ensure a consistent implementation across the HF ecosystem while allowing external libraries to benefit from standardized weight handling.
>>> from huggingface_hub import load_torch_model, load_state_dict_from_file
# load state dict from a single file
>>> state_dict = load_state_dict_from_file("path/to/weights.safetensors")
# Directly load weights into a PyTorch model
>>> model = ... # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
More details in the serialization package reference.
[Serialization] support loading torch state dict from disk by @hanouticelina in #2687
We added a flag to the `save_torch_state_dict()` helper to properly handle model saving in distributed environments, aligning with existing implementations across the Hugging Face ecosystem:
[Serialization] Add is_main_process argument to save_torch_state_dict() by @hanouticelina in #2648
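A minimal sketch for a distributed setup; `rank` is assumed to come from your launcher (e.g. torch.distributed):
import torch
from huggingface_hub import save_torch_state_dict

rank = 0  # assumed to be provided by your distributed setup
state_dict = {"layer.weight": torch.zeros(16, 16)}
save_torch_state_dict(
    state_dict,
    save_directory="path/to/checkpoint",
    is_main_process=(rank == 0),  # only the main process actually writes the files
)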
A bug with shared tensor handling reported in transformers#35080 has been fixed:
add argument to pass shared tensors keys to discard by @hanouticelina in #2696
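A minimal sketch, assuming the new argument is named `shared_tensors_to_discard` (tensor names are hypothetical):
import torch
from huggingface_hub import save_torch_state_dict

shared = torch.zeros(16, 16)
state_dict = {"decoder.embed.weight": shared, "lm_head.weight": shared}  # tied weights share storage
save_torch_state_dict(
    state_dict,
    save_directory="path/to/checkpoint",
    shared_tensors_to_discard=["lm_head.weight"],  # drop the duplicate key instead of raising
)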
✨ HfApi
The following changes align the client with server-side updates in how security metadata is handled and exposed in the API responses. In particular, the repository security status returned by `HfApi().model_info()` is now available in the `security_repo_status` field:
from huggingface_hub import HfApi
api = HfApi()
model = api.model_info("your_model_id", securityStatus=True)
# get security status info of your model
- security_info = model.securityStatus
+ security_info = model.security_repo_status
- Update how file's security metadata is retrieved following changes in the API response by @hanouticelina in #2621
- Expose repo security status field in ModelInfo by @hanouticelina in #2639
🌐 📚 Documentation
Thanks to @miaowumiaomiaowu, more documentation is now available in Chinese! And thanks to @13579606 for reviewing these PRs. Check out the result here.
📝Translating docs to Simplified Chinese by @miaowumiaomiaowu in #2689, #2704 and #2705.
💔 Breaking changes
A few breaking changes have been introduced:
- `RepoCardData` serialization now preserves `None` values in nested structures.
- `InferenceClient.image_to_image()` now takes a `target_size` argument instead of `height` and `width` arguments. This has been reflected in the InferenceClient async equivalent as well.
- `InferenceClient.table_question_answering()` no longer accepts a `parameter` argument. This has been reflected in the InferenceClient async equivalent as well.
- Due to low usage, `list_metrics()` has been removed from `HfApi`.
⏳ Deprecations
Some deprecations have been introduced as well:
- Legacy token permission checks are deprecated as they are no longer relevant with fine-grained tokens. This includes `is_write_action` in `build_hf_headers()` and `write_permission=True` in login methods. `get_token_permission` has been deprecated as well.
- The `labels` argument is deprecated in `InferenceClient.zero_shot_classification()` and `InferenceClient.image_zero_shot_classification()`. This has been reflected in the InferenceClient async equivalent as well.
🛠️ Small fixes and maintenance
😌 QoL improvements
- Add utf8 encoding to read_text to avoid Windows charmap crash by @tomaarsen in #2627
- Add user CLI unit tests by @hanouticelina in #2628
- Update consistent error message (we can't do much about it) by @Wauplin in #2641
- Warn about upload_large_folder if really large folder by @Wauplin in #2656
- Support context manager in commit scheduler by @Wauplin in #2670
- Fix autocompletion not working with ModelHubMixin by @Wauplin in #2695
- Enable tqdm progress in cloud environments by @cbensimon in #2698
🐛 Bug and typo fixes
- bugfix huggingface-cli command execution in python3.8 by @PineApple777 in #2620
- Fix documentation link formatting in README_cn by @BrickYo in #2615
- Update hf_file_system.md by @SwayStar123 in #2616
- Fix download local dir edge case (remove lru_cache) by @Wauplin in #2629
- Fix typos by @omahs in #2634
- Fix ModelCardData's datasets typing by @hanouticelina in #2644
- Fix HfFileSystem.exists() for deleted repos and update documentation by @hanouticelina in #2643
- Fix max tokens default value in text generation and chat completion by @hanouticelina in #2653
- Fix sorting properties by @hanouticelina in #2655
- Don't write the ref file unless necessary by @d8ahazard in #2657
- update attribute used in delete_collection_item docstring by @davanstrien in #2659
- 🐛: Fix bug by ignoring specific files in cache manager by @johnmai-dev in #2660
- Bug in model_card_consistency_reminder.yml by @deanwampler in #2661
- [Inference Client] fix zero_shot_image_classification's parameters by @hanouticelina in #2665
- Use asyncio.sleep in AsyncInferenceClient (not time.sleep) by @Wauplin in #2674
- Make sure create_repo respect organization privacy settings by @Wauplin in #2679
- Fix timestamp parsing to always include milliseconds by @hanouticelina in #2683
- will be used by @julien-c in #2701
- remove context manager when loading shards and handle mlx weights by @hanouticelina in #2709
🏗️ internal
- prepare for release v0.27 by @hanouticelina in #2622
- Support python 3.13 by @hanouticelina in #2636
- Add CI to auto-generate inference types by @Wauplin in #2600
- [InferenceClient] Automatically handle outdated task parameters by @hanouticelina in #2633
- Fix logo in README when dark mode is on by @hanouticelina in #2669
- Fix lint after ruff update by @Wauplin in #2680
- Fix test_list_spaces_linked by @Wauplin in #2707
[v0.26.5]: Serialization: Add argument to pass shared tensors names to drop when saving
Full Changelog: v0.26.3...v0.26.5
See #2696 for more details.
[v0.26.3]: Fix timestamp parsing to always include milliseconds
Full Changelog: v0.26.2...v0.26.3
See #2683 for more details.
[v0.26.2] Fix: Reflect API response changes in file and repo security status fields
This patch release includes updates to align with recent API response changes:
- Update how file's security metadata is retrieved following changes in the API response (#2621).
- Expose repo security status field in ModelInfo (#2639).
Full Changelog: v0.26.1...v0.26.2
[v0.26.1] Hot-fix: fix Python 3.8 support for `huggingface-cli` commands
Full Changelog: v0.26.0...v0.26.1
See #2620 for more details.
v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements
🔐 Multiple access tokens support
Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.
To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:
- Store multiple tokens on your machine by simply logging in with the `login()` command with each token:
huggingface-cli login
- Switch between tokens and choose the one that will be used for all interactions with the Hub:
huggingface-cli auth switch
- List available access tokens on your machine:
huggingface-cli auth list
- Delete a specific token from your machine with:
huggingface-cli logout [--token-name TOKEN_NAME]
✅ Nothing changes if you are using the `HF_TOKEN` environment variable, as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
- Support multiple tokens locally by @hanouticelina in #2549
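A minimal sketch of the programmatic counterparts, assuming `auth_list` and `auth_switch` are exported from huggingface_hub (the token name is hypothetical):
from huggingface_hub import auth_list, auth_switch

auth_list()                     # print the access tokens stored on this machine
auth_switch("my-fine-grained")  # make this stored token the active one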
⚡️ InferenceClient improvements
🖼️ Conversational VLMs support
Conversational vision-language model inference is now supported with `InferenceClient`'s chat completion!
from huggingface_hub import InferenceClient
# works with remote url or base64 encoded url
image_url ="https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {"url": image_url},
},
{
"type": "text",
"text": "Describe this image in one sentence.",
},
],
},
],
)
print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
🔧 More complete support for inference parameters
You can now pass additional inference parameters to more task methods in the `InferenceClient`, including `image_classification`, `text_classification`, `image_segmentation`, `object_detection`, `document_question_answering` and more!
For more details, visit the `InferenceClient` reference guide.
✅ Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗
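A minimal sketch, assuming `top_k` is among the newly exposed task parameters (model id and image path are hypothetical):
from huggingface_hub import InferenceClient

client = InferenceClient()
predictions = client.image_classification(
    "cat.png",  # local path, URL or raw bytes
    model="google/vit-base-patch16-224",
    top_k=3,  # extra task parameter forwarded to the model
)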
- Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
- [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
- Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569
✨ HfApi
`update_repo_settings` can now be used to switch the visibility status of a repo. This is a drop-in replacement for `update_repo_visibility`, which is deprecated and will be removed in version `v0.29.0`.
- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
- Feature: switch visibility with update_repo_settings by @WizKnight in #2541
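A minimal example of the new call (the repo id is hypothetical):
from huggingface_hub import HfApi

HfApi().update_repo_settings("username/my-model", private=True)  # switch the repo to private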
📄 The Daily Papers API is now supported in `huggingface_hub`, enabling you to search for papers on the Hub and retrieve detailed paper information.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
🌐 📚 Documentation
Thanks to efforts from the Tamil-speaking community, guides and package references are being translated to Tamil! Check out the result here.
💔 Breaking changes
A few breaking changes have been introduced:
- `cached_download()`, `url_to_filename()` and `filename_to_url()` methods are now completely removed. From now on, you will have to use `hf_hub_download()` to benefit from the new cache layout.
- The `legacy_cache_layout` argument from `hf_hub_download()` has been removed as well.
These breaking changes have been announced with a regular deprecation cycle.
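A minimal migration sketch: where `cached_download(url)` was used before, `hf_hub_download()` now resolves the file through the new cache layout.
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(repo_id="gpt2", filename="config.json")  # returns a path inside the HF cache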
Also, any templating-related utility has been removed from `huggingface_hub`. Client-side templating is no longer necessary now that all conversational text-generation models in the Inference API are served with TGI.
Prepare for release 0.26 by @hanouticelina in #2579
Remove templating utility by @Wauplin in #2611
🛠️ Small fixes and maintenance
😌 QoL improvements
- docs: move translations to `i18n` by @SauravMaheshkar in #2566
- Preserve card metadata format/ordering on load->save by @hlky in #2570
- Remove raw HTML from error message content and improve request ID capture by @hanouticelina in #2584
- [Inference Client] Factorize inference payload build by @hanouticelina in #2601
- Use proper logging in auth module by @hanouticelina in #2604
🐛 fixes
- Use repo_type in HfApi.grant_access url by @albertvillanova in #2551
- Raise error if encountered in chat completion SSE stream by @Wauplin in #2558
- Add 500 HTTP Error to retry list by @farzadab in #2567
- Add missing documentation by @adiaholic in #2572
- Serialization: take into account meta tensor when splitting the `state_dict` by @SunMarc in #2591
- Fix snapshot download when `local_dir` is provided by @hanouticelina in #2592
- Fix PermissionError while creating '.no_exist/' directory in cache by @Wauplin in #2594
- Fix 2609 - Import packaging by default by @Wauplin in #2610
🏗️ internal
- Fix test by @Wauplin in #2582
- Make SafeTensorsInfo.parameters a Dict instead of List by @adiaholic in #2585
- Fix tests listing text generation models by @Wauplin in #2593
- Skip flaky Repository test by @Wauplin in #2595
- Support python 3.12 by @hanouticelina in #2605
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @SauravMaheshkar
  - docs: move translations to `i18n` (#2566)
- @WizKnight
  - Feature: switch visibility with update_repo_settings (#2541)
- @hlky
  - Preserve card metadata format/ordering on load->save (#2570)
- @Raghul-M
  - Translated index.md and installation.md to Tamil (#2555)
[v0.25.2]: Fix snapshot download when `local_dir` is provided
Full Changelog: v0.25.1...v0.25.2
For more details, refer to the related PR #2592
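A minimal sketch of the fixed code path (repo id and target directory are hypothetical):
from huggingface_hub import snapshot_download

snapshot_download(repo_id="gpt2", local_dir="./gpt2-local")  # downloads the full repo into ./gpt2-local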