Releases: huggingface/huggingface_hub
v0.19.1 - Hot-fix: ignore TypeError when listing models with corrupted ModelCard
Full Changelog: v0.19.0...v0.19.1
Fixes a regression (PR #1821) introduced in 0.19.0 that made looping over models with list_models fail. The problem came from the fact that we now parse the data returned by the server into Python objects. However, for some models the metadata in the model card is not valid. This is usually checked by the server, but some models created before we started to enforce correct metadata are invalid. This hot-fix solves the issue by ignoring the corrupted data, if any.
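The fix follows a common defensive pattern: when deserializing a list of server items, skip entries whose metadata cannot be parsed instead of failing the whole iteration. A minimal sketch of the idea (the `parse_items` helper is hypothetical, not the library's actual code):

```python
def parse_items(raw_items, parse):
    """Yield parsed items, silently skipping entries that fail to parse."""
    for raw in raw_items:
        try:
            yield parse(raw)
        except TypeError:
            # Corrupted metadata (e.g. an invalid model card): ignore and continue.
            continue

# The second entry has invalid metadata (tags=None) and is skipped.
items = [{"id": "a", "tags": ["x"]}, {"id": "b", "tags": None}, {"id": "c", "tags": []}]
parsed = list(parse_items(items, lambda d: (d["id"], sorted(d["tags"]))))
```

The loop over models keeps going and only the corrupted entry is dropped.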
v0.19.0: Inference Endpoints and robustness!
(Discuss the release in our Community Tab. Feedback welcome!! 🤗)
🚀 Inference Endpoints API
Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. With the huggingface_hub>=0.19.0 integration, you can now manage your Inference Endpoints programmatically. Combined with the InferenceClient, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!
Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch, and pause the endpoint again. All of this in a few lines of code! For more details, please check out our dedicated guide.
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint
# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()
# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])
# Pause endpoint
>>> endpoint.pause()
- Implement API for Inference Endpoints by @Wauplin in #1779
- Fix inference endpoints docs by @Wauplin in #1785
⏬ Improved download experience
huggingface_hub
is a library primarily used to transfer (huge!) files to and from the Hugging Face Hub. Our goal is to keep improving the experience for this core part of the library. In this release, we introduce a more robust download mechanism for slow/limited connections while improving the UX for users with high bandwidth available!
More robust downloads
Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if a connection gets closed or a ReadTimeout error is raised. The download restarts exactly where it stopped, without having to re-download any bytes.
- Retry on ConnectionError/ReadTimeout when streaming file from server by @Wauplin in #1766
- Reset nb_retries if data has been received from the server by @Wauplin in #1784
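The resume-and-retry logic described above can be sketched as follows. This is a toy illustration, not the library's implementation: `fetch_range(start)` stands in for an HTTP request with a `Range: bytes={start}-` header, and the retry budget is reset whenever data is actually received (as in #1784):

```python
def download_with_resume(fetch_range, total_size, max_retries=5):
    """Download total_size bytes, resuming from the last received byte on error."""
    buf = bytearray()
    retries = 0
    while len(buf) < total_size:
        try:
            chunk = fetch_range(len(buf))  # resume from the current offset
            buf += chunk
            retries = 0  # reset the retry budget once data flows again
        except ConnectionError:
            retries += 1
            if retries > max_retries:
                raise
    return bytes(buf)

# Simulate a flaky server: 4 bytes per request, connection reset every other call.
DATA = b"hello world!"
calls = {"n": 0}

def flaky_fetch(start):
    calls["n"] += 1
    if calls["n"] % 2 == 0:
        raise ConnectionError("connection reset")
    return DATA[start:start + 4]

result = download_with_resume(flaky_fetch, len(DATA))
```

Despite two dropped connections, the full payload is reassembled without re-downloading any byte.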
In addition to this, it is possible to configure huggingface_hub with higher timeouts, thanks to @Shahafgo. This should help get around some issues on slower connections.
- Adding the ability to configure the timeout of get request by @Shahafgo in #1720
- Fix a bug to respect the HF_HUB_ETAG_TIMEOUT. by @Shahafgo in #1728
Progress bars while using hf_transfer
hf_transfer
is a Rust-based library focused on improving upload and download speed on machines with a high bandwidth available. Once installed (pip install -U hf_transfer
), it can transparently be used with huggingface_hub
simply by setting HF_HUB_ENABLE_HF_TRANSFER=1
as an environment variable. The counterpart of higher performance is the lack of some user-friendly features such as better error handling and a retry mechanism, meaning it is recommended only for power users. In this release, we still ship a new feature to improve UX: progress bars. No need to update any existing code; a simple library upgrade is enough.
- hf-transfer progress bar by @cbensimon in #1792
- Add support for progress bars in hf_transfer uploads by @Wauplin in #1804
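Enabling hf_transfer amounts to two steps, sketched below (using the `huggingface-cli download` command introduced in v0.17; `gpt2 config.json` is just an example target):

```shell
# Install the Rust-based backend and opt in via the environment variable;
# downloads/uploads made through huggingface_hub then go through hf_transfer,
# now with progress bars.
pip install -U hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download gpt2 config.json
```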
📚 Documentation
huggingface-cli
guide
huggingface-cli
is the CLI tool shipped with huggingface_hub
. It recently got some nice improvements, especially commands to download and upload files directly from the terminal. All of this needed a guide, so here it is!
Environment variables
Environment variables are useful to configure how huggingface_hub
should work. Historically we had some inconsistencies on how those variables were named. This is now improved, with a backward compatible approach. Please check the package reference for more details. The goal is to propagate those changes to the whole HF-ecosystem, making configuration easier for everyone.
- Harmonize environment variables by @Wauplin in #1786
- Ensure backward compatibility for HUGGING_FACE_HUB_TOKEN env variable by @Wauplin in #1795
- Do not promote HF_ENDPOINT environment variable by @Wauplin in #1799
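The backward-compatible approach can be illustrated with the token variable: assuming HF_TOKEN is the harmonized name and HUGGING_FACE_HUB_TOKEN the legacy one (per the PRs above), a sketch of the lookup order (the real resolution logic lives inside huggingface_hub):

```python
import os

def resolve_token():
    """Prefer the new HF_TOKEN variable, falling back to the legacy name."""
    return os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN")

os.environ.pop("HF_TOKEN", None)
os.environ["HUGGING_FACE_HUB_TOKEN"] = "hf_legacy"
legacy = resolve_token()      # legacy name is still honored

os.environ["HF_TOKEN"] = "hf_new"
preferred = resolve_token()   # new name takes precedence when both are set
```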
Hindi translation
Hindi documentation landed on the Hub thanks to @aneeshd27! Checkout the Hindi version of the quickstart guide here.
- Added translation of 3 files as mentioned in issue by @aneeshd27 in #1772
Minor docs fixes
- Added [[autodoc]] for ModelStatus by @jamesbraza in #1758
- Expanded docstrings on post and ModelStatus by @jamesbraza in #1740
- Fix document link for manage-cache by @liuxueyang in #1774
- Minor doc fixes by @pcuenca in #1775
💔 Breaking changes
Legacy ModelSearchArguments
and DatasetSearchArguments
have been completely removed from huggingface_hub
. This shouldn't cause problems as they were already not in use (and unusable in practice).
- Removed GeneralTags, ModelTags and DatasetTags by @VictorHugoPilled in #1761
Classes containing details about a repo (ModelInfo
, DatasetInfo
and SpaceInfo
) have been refactored by @mariosasko to be more Pythonic and aligned with the other classes in huggingface_hub
. In particular, those objects are now based on the dataclasses module instead of a custom ReprMixin class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it asap.
- Replace ReprMixin with dataclasses by @mariosasko in #1788
- Fix SpaceInfo initialization + add test by @Wauplin in #1802
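What the move to dataclasses buys can be shown with a simplified stand-in (this is not the actual ModelInfo definition, just an illustration of the pattern):

```python
from dataclasses import asdict, dataclass, field
from typing import List, Optional

# Illustrative stand-in for the refactored info classes: a plain dataclass
# provides a readable repr, equality, and dict conversion for free.
@dataclass
class MiniModelInfo:
    id: str
    author: Optional[str] = None
    tags: List[str] = field(default_factory=list)

info = MiniModelInfo(id="gpt2", author="hf-user", tags=["text-generation"])
info_dict = asdict(info)  # easy serialization, no custom ReprMixin needed
```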
The legacy Repository
and InferenceAPI
classes are now deprecated but will not be removed before the next major release (v1.0
).
Instead of the git-based Repository, we advise using the HTTP-based HfApi. Check out this guide explaining the reasons behind it. For InferenceAPI, we recommend switching to InferenceClient, which is much more feature-complete and will keep getting improved.
⚙️ Miscellaneous improvements, fixes and maintenance
InferenceClient
- Adding InferenceClient.get_recommended_model by @jamesbraza in #1770
- Fix InferenceClient.text_generation when pydantic is not installed by @Wauplin in #1793
- Supporting pydantic<3 by @jamesbraza in #1727
HfFileSystem
- [hffs] Raise NotImplementedError on transaction commits by @Wauplin in #1736
- Fix huggingface filesystem repo_type not forwarded by @Wauplin in #1791
- Fix HfFileSystemFile when init fails + improve error message by @Wauplin in #1805
FIPS compliance
Misc fixes
- Fix UnboundLocalError when using commit context manager by @hahunavth in #1722
- Fixed improperly configured 'every' leading to test_sync_and_squash_history failure by @jamesbraza in #1731
- Testing WEBHOOK_PAYLOAD_EXAMPLE deserialization by @jamesbraza in #1732
- Keep lock files in a /locks folder to prevent rare concurrency issue by @beeender in #1659
- Fix Space runtime on static Space by @Wauplin in #1754
- Clearer error message on unprocessable entity. by @Wauplin in #1755
- Do not warn in ModelHubMixin on missing config file by @Wauplin in #1776
- Update SpaceHardware enum by @Wauplin in #1798
- change prop name by @julien-c in #1803
Internal
- Bump version to 0.19 by @Wauplin in #1723
- Make @retry_endpoint a default for all test by @Wauplin in #1725
- Retry test on 502 Bad Gateway by @Wauplin in #1737
- Consolidated mypy type ignores in InferenceClient.post by @jamesbraza in #1742
- fix: remove useless token by @rtrompier in #1765
- Fix CI (typing-extensions minimal requirement) by @Wauplin in #1781
- remove black formatter to use only ruff by @Wauplin in #1783
- Separate test and prod cache (+ ruff formatter) by @Wauplin in #1789
- fix 3.8 tensorflow in ci by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @VictorHugoPilled
- Removed GeneralTags, ModelTags and DatasetTags (#1761)
- @aneeshd27
- Added translation of 3 files as mentioned in issue (#1772)
v0.18.0: Collection API, translated documentation and more!
(Discuss the release and provide feedback in the Community Tab!)
Collection API 🎉
Collection API is now fully supported in huggingface_hub
!
A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this guide to understand in more detail what collections are and this guide to learn how to build them programmatically.
Create/get/update/delete collection:
- get_collection
- create_collection: title, description, namespace, private
- update_collection_metadata: title, description, position, private, theme
- delete_collection
Add/update/remove item from collection:
- add_collection_item: item id, item type, note
- update_collection_item: note, position
- delete_collection_item
Usage
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem: {
  '_id': '6507f6d5423b46492ee1413e',
  'id': 'TheBloke/TigerBot-70B-Chat-GPTQ',
  'author': 'TheBloke',
  'item_type': 'model',
  'lastModified': '2023-09-19T12:55:21.000Z',
  (...)
}
>>> from huggingface_hub import create_collection
# Create collection
>>> collection = create_collection(
... title="ICCV 2023",
... description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
# Add item with a note
>>> add_collection_item(
... collection_slug=collection.slug, # e.g. "davanstrien/climate-64f99dc2a5067f6b65531bab"
... item_id="datasets/climate_fever",
... item_type="dataset",
... note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
- Add Collection API by @Wauplin in #1687
- Add url attribute to Collection class by @Wauplin in #1695
- [Fix] Add collections guide to overview page by @Wauplin in #1696
📚 Translated documentation
Documentation is now available in both German and Korean thanks to community contributions! This is an important milestone for Hugging Face in its mission to democratize good machine learning.
- 🌐 [i18n-DE] Translate docs to German by @martinbrose in #1646
- 🌐 [i18n-KO] Translated README, landing docs to Korean by @wonhyeongseo in #1667
- Update i18n template by @Wauplin in #1680
- Add German concepts guide by @martinbrose in #1686
Preupload files before committing
(Disclaimer: this is a power-user feature. It is not expected to be used directly by end users.)
When using create_commit
(or upload_file
/upload_folder
), the internal workflow has 3 main steps:
1. List the files to upload and check if those are regular files (text) or LFS files (binaries or huge files)
2. Upload the LFS files to S3
3. Create a commit on the Hub (upload regular files + reference S3 urls at once). The LFS upload is important to avoid large payloads during the commit call.
In this release, we introduce preupload_lfs_files
to perform step 2 independently of step 3. This is useful for libraries like datasets
that generate huge files "on-the-fly" and want to preupload them one by one before making one commit with all the files. For more details, please read this guide.
- Preupload lfs files before committing by @Wauplin in #1699
- Hide CommitOperationAdd's internal attributes by @mariosasko in #1716
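The overall pattern, upload each large file as soon as it is generated, then make a single commit referencing all of them, can be sketched with hypothetical stand-ins (the real API uses CommitOperationAdd objects with preupload_lfs_files and create_commit):

```python
# Toy sketch of the preupload workflow: `preupload` and `create_commit` are
# injected stand-ins, not the huggingface_hub functions.
def generate_and_commit(generate_files, preupload, create_commit):
    uploaded = []
    for path, content in generate_files():  # files produced "on-the-fly"
        preupload(path, content)            # step 2: upload the LFS file immediately
        uploaded.append(path)               # keep a reference for the final commit
    return create_commit(uploaded)          # step 3: one commit for all files

log = []
result = generate_and_commit(
    lambda: [("shard-0.bin", b"..."), ("shard-1.bin", b"...")],
    lambda path, content: log.append(("upload", path)),
    lambda paths: log.append(("commit", tuple(paths))) or "done",
)
```

Each shard is uploaded as soon as it exists, and a single commit closes the workflow.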
Miscellaneous improvements
❤️ List repo likers
Similarly to list_user_likes
(listing all likes of a user), we now introduce list_repo_likers
to list all users who liked a given repo, thanks to @issamarabi.
>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> len(likers)
204
>>> likers
[User(username=..., fullname=..., avatar_url=...), ...]
- Add list_repo_likers method to HfApi by @issamarabi in #1715
Refactored Dataset Card template
Template for the Dataset Card has been updated to be more aligned with the Model Card template.
- Dataset card template overhaul by @mariosasko in #1708
QOL improvements
This release also adds a few QOL improvements for the users:
- Suggest to check firewall/proxy settings + default to local file by @Wauplin in #1670
- debug logs to debug level by @Wauplin (direct commit on main)
- Change TimeoutError => asyncio.TimeoutError by @matthewgrossman in #1666
- Handle refs/convert/parquet and PR revision correctly in hffs by @Wauplin in #1712
- Document hf_transfer more prominently by @Wauplin in #1714
Breaking change
A breaking change has been introduced in CommitOperationAdd in order to implement preupload_lfs_files in a way that is convenient for users. The main change is that CommitOperationAdd is no longer a static object but is modified internally by preupload_lfs_files and create_commit. This means that you cannot reuse a CommitOperationAdd object once it has been committed to the Hub. If you do so, an explicit exception is raised. You can still reuse the operation objects if the commit call failed and you retry it. We hope that it will not affect any users, but please open an issue if you encounter any problems.
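The "consume once" behavior can be illustrated with a toy class (not the actual CommitOperationAdd implementation): an operation refuses to be committed twice, but remains reusable as long as no commit has succeeded.

```python
# Toy illustration of a one-shot operation object.
class OneShotOperation:
    def __init__(self, path):
        self.path = path
        self._committed = False

    def mark_committed(self):
        """Called after a successful commit; raises if reused afterwards."""
        if self._committed:
            raise ValueError(f"Operation on {self.path!r} was already committed")
        self._committed = True

op = OneShotOperation("model.safetensors")
op.mark_committed()          # first (successful) commit: fine
try:
    op.mark_committed()      # reusing the same object: explicit error
    reused_ok = True
except ValueError:
    reused_ok = False
```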
⚙️ Small fixes and maintenance
Docs fixes
- Move repo size limitations to Hub docs by @Wauplin in #1660
- Correct typo in upload guide by @martinbrose in #1677
- Fix broken tips in login reference by @Wauplin in #1688
Misc fixes
- Fixes filtering by tags with list_models and adds test case by @martinbrose in #1673
- Add default user-agent to huggingface-cli by @Wauplin in #1664
- Automatically retry on create_repo if '409 conflicting op in progress' by @Wauplin in #1675
- Fix upload CLI when pushing to Space by @Wauplin in #1669
- longer pbar descr, drop D-word by @poedator in #1679
- Pin fsspec to use default expand_path by @mariosasko in #1681
- Address failing _check_disk_space() when path doesn't exist yet by @martinbrose in #1692
- Handle TGI error when streaming tokens by @Wauplin in #1711
Internal
- bump version to 0.18.0.dev0 by @Wauplin in #1658
- sudo apt update in CI by @Wauplin (direct commit on main)
- fix CI tests by @Wauplin (direct commit on main)
- Skip flaky InferenceAPI test by @Wauplin (direct commit on main)
- Respect HTTPError spec by @Wauplin in #1693
- skip flaky test by @Wauplin (direct commit on main)
- Fix LFS tests after password auth deprecation by @Wauplin in #1713
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @martinbrose
- @wonhyeongseo
- 🌐 [i18n-KO] Translated README, landing docs to Korean (#1667)
v0.17.3 - Hot-fix: ignore errors when checking available disk space
Full Changelog: v0.17.2...v0.17.3
Fixes a bug when downloading files to a non-existent directory. In #1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder doesn't exist yet, as reported in #1690. This hot-fix solves it thanks to #1692, which recursively checks the parent directories if the full path doesn't exist. If the check keeps failing (for any OSError), we silently ignore the error and keep going: missing the warning is less harmful than breaking the download for legitimate users.
Check out these release notes to learn more about the v0.17 release.
v0.17.2 - Hot-fix: make `huggingface-cli upload` work with Spaces
Full Changelog: v0.17.1...v0.17.2
Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create a repo (even if it already exists) and was failing because space_sdk
was not found in that case. More details in #1669.
Also updated the user-agent when using huggingface-cli upload
. See #1664.
Check out these release notes to learn more about the v0.17 release.
v0.17.0: Inference, CLI and Space API
InferenceClient
All tasks are now supported! 💥
Thanks to a massive community effort, all inference tasks are now supported in InferenceClient
. Newly added tasks are:
- Object detection by @dulayjm in #1548
- Text classification by @martinbrose in #1606
- Token classification by @martinbrose in #1607
- Translation by @martinbrose in #1608
- Question answering by @martinbrose in #1609
- Table question answering by @martinbrose in #1612
- Fill mask by @martinbrose in #1613
- Tabular classification by @martinbrose in #1614
- Tabular regression by @martinbrose in #1615
- Document question answering by @martinbrose in #1620
- Visual question answering by @martinbrose in #1621
- Zero shot classification by @Wauplin in #1644
Documentation, including examples, for each of these tasks can be found in this table.
All those methods also support async mode using AsyncInferenceClient
.
Get InferenceAPI status
Sometimes knowing which models are available or not on the Inference API service can be useful. This release introduces two new helpers:
- list_deployed_models aims to help users discover which models are currently deployed, listed by task.
- get_model_status aims to get the status of a specific model. This is useful if you already know which model you want to use.
Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]
# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
- Add get_model_status function by @sifisKoen in #1558
- Add list_deployed_models to inference client by @martinbrose in #1622
Few fixes
- Send Accept: image/png as header for image tasks by @Wauplin in #1567
- FIX text_to_image and image_to_image parameters by @Wauplin in #1582
- Distinguish _bytes_to_dict and _bytes_to_list + fix issues by @Wauplin in #1641
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
Download and upload files... from the CLI 🔥 🔥 🔥
This is a long-awaited feature finally implemented! huggingface-cli
now offers two new commands to easily transfer files from/to the Hub. The goal is to use them as a replacement for git clone
, git pull
and git push
. Despite being less feature-complete than git
(no .git/
folder, no notion of local commits), it offers the flexibility required when working with large repositories.
Download
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json
# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
Upload
# Upload single file
huggingface-cli upload my-cool-model model.safetensors
# Upload entire directory
huggingface-cli upload my-cool-model ./models
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
Docs
For more examples, check out the documentation:
- Implemented CLI download functionality by @martinbrose in #1617
- Implemented CLI upload functionality by @martinbrose in #1618
🚀 Space API
Some new features have been added to the Space API to:
- request persistent storage for a Space
- set a description to a Space's secrets
- set variables on a Space
- configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
... repo_id=repo_id,
... repo_type="space",
... space_sdk="gradio",
... space_hardware="t4-medium",
... space_sleep_time="3600",
... space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
A special thanks to @martinbrose, who largely contributed to these new features.
- Request Persistent Storage by @freddyaboulton in #1571
- Support factory reboot when restarting a Space by @Wauplin in #1586
- Added support for secret description by @martinbrose in #1594
- Added support for space variables by @martinbrose in #1592
- Add settings for creating and duplicating spaces by @martinbrose in #1625
📚 Documentation
A new section has been added to the upload guide with some tips about how to upload large models and datasets to the Hub, and what the limits are when doing so.
- Tips to upload large models/datasets by @Wauplin in #1565
- Add the hard limit of 50GB on LFS files by @severo in #1624
🗺️ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!
- Add translation guide + update repo structure by @Wauplin in #1602
- Fix i18n issue template links by @Wauplin in #1627
Breaking change
The behavior of InferenceClient.feature_extraction
has been updated to fix a bug happening with certain models. The shape of the returned array for transformers
models has changed from (sequence_length, hidden_size)
to (1, sequence_length, hidden_size), which is the breaking change.
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
QOL improvements
HfApi
helpers:
Two new helpers have been added to check if a file or a repo exists on the Hub:
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
- Check if repo or file exists by @martinbrose in #1591
Also, hf_hub_download
and snapshot_download
are now part of HfApi
(keeping the same syntax and behavior).
Download improvements:
- When a user tries to download a model but the disk is full, a warning is triggered.
- When a user tries to download a model but an HTTP error happens, we still check locally if the file exists.
- Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by @jiamings in #1561
- Implemented check_disk_space function by @martinbrose in #1590
Small fixes and maintenance
⚙️ Doc fixes
- Fix table by @stevhliu in #1577
- Improve docstrings for text generation by @osanseviero in #1597
- Fix superfluous-typo by @julien-c in #1611
- minor missing paren by @julien-c in #1637
- update i18n template by @Wauplin (direct commit on main)
- Add documentation for modelcard Metadata. Resolves by @sifisKoen in #1448
⚙️ Other fixes
- Add missing_ok option in delete_repo by @Wauplin in #1640
- Implement super_squash_history in HfApi by @Wauplin in #1639
- 1546 fix empty metadata on windows by @Wauplin in #1547
- Fix tqdm by @NielsRogge in #1629
- Fix bug #1634 (drop finishing spaces and EOL) by @GBR-613 in #1638
⚙️ Internal
- Prepare for 0.17 by @Wauplin in #1540
- update mypy version + fix issues + remove deprecatedlist helper by @Wauplin in #1628
- mypy traceck by @Wauplin (direct commit on main)
- pin pydantic version by @Wauplin (direct commit on main)
- Fix ci tests by @Wauplin in #1630
- Fix test in contrib CI by @Wauplin (direct commit on main)
- skip gated repo test on contrib by @Wauplin (direct commit on main)
- skip failing test by @Wauplin (direct commit on main)
- Fix fsspec tests in ci by @Wauplin in #1635
- FIX windows CI by @Wauplin (direct commit on main)
- FIX style issues by pinning black version by @Wauplin (direct commit on main)
- forgot test case by @Wauplin (direct commit on main)
- shorter is better by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @dulayjm
- Add object detection to inference client (#1548)
- @martinbrose
- Added support for s...
v0.16.4 - Hot-fix: Do not share request.Session between processes
Full Changelog: v0.16.3...v0.16.4
Hotfix to avoid sharing requests.Session
between processes. More information in #1545. Internally, we create a Session object per thread to benefit from the HTTPSConnectionPool (i.e. connections are not reopened between calls). Due to an implementation bug, the Session object from the main thread was shared if the main process was forked. The shared Session got corrupted in the child process, leading to random ConnectionErrors on rare occasions.
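The "one Session per thread" pattern can be sketched with `threading.local` (a plain object stands in for requests.Session here; this is an illustration of the mechanism, not the library's code):

```python
import threading

# Each thread lazily creates its own instance, so a connection pool is never
# shared across threads (and a fresh one is created in any new thread/process).
_local = threading.local()

def get_session():
    if not hasattr(_local, "session"):
        _local.session = object()  # stand-in for requests.Session()
    return _local.session

main_session = get_session()
same_thread = get_session() is main_session  # cached within the same thread

result = {}
t = threading.Thread(target=lambda: result.setdefault("session", get_session()))
t.start()
t.join()
other_thread_same = result["session"] is main_session  # a new thread gets its own
```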
Check out these release notes to learn more about the v0.16 release.
v0.16.3: Hotfix - More verbose ConnectionError
Full Changelog: v0.16.2...v0.16.3
Hotfix to print the request ID if any RequestException
happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.
Check out these release notes to learn more about the v0.16 release.
v0.16.2: Inference, CommitScheduler and Tensorboard
Inference
Introduced in the v0.15
release, the InferenceClient
got a big update in this one. The client is now reaching a stable point in terms of features. The next updates will be focused on continuing to add support for new tasks.
Async client
Asyncio calls are supported thanks to AsyncInferenceClient
. Based on asyncio
and aiohttp
, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by InferenceClient
is supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutine.
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
Text-generation
Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client, initially implemented by @OlivierDehaene.
Text generation has 4 modes depending on details
(bool) and stream
(bool) values. By default, a raw string is returned. If details=True
, more information about the generated tokens is returned. If stream=True
, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
id=25,
text='.',
logprob=-0.5703125,
special=False),
generated_text='100% open source and built to be easy to use.',
details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
Of course, the async client also supports text-generation (see docs):
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
- prepare for tgi by @Wauplin in #1511
- Support text-generation in InferenceClient by @Wauplin in #1513
Zero-shot-image-classification
InferenceClient
now supports zero-shot-image-classification (see docs). Both sync and async clients support it. It lets you classify an image based on a list of labels passed as input.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
... labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
Thanks to @dulayjm for your contribution on this task!
Other
When using InferenceClient
's task methods (text_to_image, text_generation, image_classification, ...), you don't have to pass a model id. By default, the client selects a model recommended for the task and runs it on the free public Inference API. This is useful for quickly prototyping and testing models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially resulting in different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.
It is now possible to configure headers and cookies to be sent when initializing the client: InferenceClient(headers=..., cookies=...)
. All calls made with this client will then use these headers/cookies.
Commit API
CommitScheduler
The CommitScheduler
is a new class that can be used to regularly push commits to the Hub. It watches changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.
>>> from huggingface_hub import CommitScheduler
# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
... repo_id="report-translation-feedback",
... repo_type="dataset",
... folder_path=feedback_folder,
... path_in_repo="data",
... every=10,
... )
Check out this guide to understand how to use the CommitScheduler
. It comes with a Space to showcase how to use it in 4 practical examples.
- CommitScheduler: upload folder every 5 minutes by @Wauplin in #1494
- Encourage to overwrite CommitScheduler.push_to_hub by @Wauplin in #1506
- FIX Use token by default in CommitScheduler by @Wauplin in #1509
- safer commit scheduler by @Wauplin (direct commit on main)
HFSummaryWriter (tensorboard)
The Hugging Face Hub offers nice support for Tensorboard data. It automatically detects when TensorBoard traces (such as tfevents
) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration in your team when training models. In fact, more than 42k models are already using it!
With the HFSummaryWriter
you can now take full advantage of the feature for your training, simply by updating a single line of code.
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
`HFSummaryWriter` inherits from `SummaryWriter` and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. every 15 minutes) it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If an upload crashes, the logs are kept locally and the training continues.
For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.
CommitOperationCopy
It is now possible to copy a file in a repo on the Hub. The copy can only happen within a single repo and only with LFS files. Files can also be copied between different revisions. More information here.
- add CommitOperationCopy by @lhoestq in #1495
- Use CommitOperationCopy in hffs by @Wauplin in #1497
- Batch fetch_lfs_files_to_copy by @lhoestq in #1504
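As a sketch, a copy operation is described with `CommitOperationCopy` and applied through a regular commit (the repo id and file paths below are placeholders):

```python
from huggingface_hub import CommitOperationCopy

# Describe a copy of an LFS file to a new path within the same repo.
# Constructing the operation makes no network request.
op = CommitOperationCopy(
    src_path_in_repo="model.safetensors",
    path_in_repo="backup/model.safetensors",
)

# The operation is applied through a regular commit (requires network
# access and authentication, hence commented out here):
# from huggingface_hub import HfApi
# HfApi().create_commit(
#     repo_id="username/my-model",  # placeholder repo id
#     operations=[op],
#     commit_message="Back up the model weights",
# )
```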
Breaking changes
`ModelHubMixin` got updated (after a deprecation cycle):
- Force to use kwargs instead of passing everything as positional args
- It is no longer possible to pass `model_id` as `username/repo_name@revision` in `ModelHubMixin`. The revision must be passed as a separate `revision` argument if needed.
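The revision change can be sketched as follows (`MyModel` is a hypothetical minimal subclass; the `from_pretrained` calls are shown as comments since they would hit the Hub):

```python
from huggingface_hub import ModelHubMixin

class MyModel(ModelHubMixin):  # hypothetical minimal subclass
    ...

# No longer supported: revision embedded in the model id
# MyModel.from_pretrained("username/repo_name@v1.0")

# Instead, pass the revision as an explicit keyword argument:
# MyModel.from_pretrained("username/repo_name", revision="v1.0")
```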
Bug fixes and small improvements
Doc fixes
- [doc build] Use secrets by @mishig25 in #1501
- Migrate doc files to Markdown by @Wauplin in #1522
- fix doc example by @Wauplin (direct commit on main)
- Update readme and contributing guide by @Wauplin in #1534
HTTP fixes
An `x-request-id` header is now sent by default with every request made to the Hub. This should help with debugging user issues.
Three PRs and three commits later, the default timeout ultimately did not change: the problem was solved server-side instead.
- Set 30s timeout on downloads (instead of 10s) by @Wauplin in #1514
- Set timeout to 60 instead of 30 when downloading files by @Wauplin in #1523
- Set timeout to 10s by @ydshieh in #1530
Misc
- Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
- update stats by @Wauplin (direct commit on main)
- Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
- update tip by @Wauplin (direct commit on main)
- make repo_info public by @Wauplin (direct commit on main)
Significant community contributions
The following contributors have made significant changes to the library over the last ...
v0.15.1: InferenceClient and background uploads!
InferenceClient
We introduce `InferenceClient`, a new client to run inference on the Hub. The objective is to:
- support both InferenceAPI and Inference Endpoints services in a single client.
- offer a nice interface with:
  - 1 method per task (e.g. `summary = client.summarization("this is a long text")`)
  - 1 default model per task (i.e. easy to prototype)
  - explicit and documented parameters
  - convenient binary inputs (from url, path, file-like object, ...)
- be flexible and support custom requests if needed
Check out the Inference guide to get a complete overview.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
The short-term goal is to add support for more tasks (here is the current list), especially text-generation, and to handle `asyncio` calls. The mid-term goal is to deprecate and replace `InferenceAPI`.
Non-blocking uploads
It is now possible to run `HfApi` calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during a training. This was previously possible when using `Repository` but is now available for HTTP-based methods like `upload_file`, `upload_folder` and `create_commit`. If `run_as_future=True` is passed:
- the job is queued in a background thread. Only 1 worker is spawned to ensure no race condition. The goal is NOT to speed up a process by parallelizing concurrent calls to the Hub.
- a `Future` object is returned to check the job status
- the main thread is not interrupted, even if an exception occurs during the upload
In addition to this parameter, a run_as_future(...) method is available to queue any other calls to the Hub. More details in this guide.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(...) # takes Xs
# URL to upload file
>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL to upload file
- Run `HfApi` methods in the background (`run_as_future`) by @Wauplin in #1458
- fix docs for run_as_future by @Wauplin (direct commit on main)
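The generic `run_as_future(...)` method queues any callable on the single background worker. A minimal offline sketch (a plain lambda is used here instead of a real Hub call so the example runs without network access):

```python
from huggingface_hub import HfApi

api = HfApi()

# Queue an arbitrary call on the single background worker; a
# concurrent.futures.Future is returned immediately. In practice you
# would queue Hub calls such as api.upload_file(...).
future = api.run_as_future(lambda x: x * 2, 21)
result = future.result()  # blocks until the job completes
```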
Breaking changes
Some (announced) breaking changes have been introduced:
- `list_models`, `list_datasets` and `list_spaces` return an iterable instead of a list (lazy-loading of paginated results)
- The parameter `cardData` in `list_datasets` has been removed in favor of the parameter `full`.

Both changes went through a deprecation cycle over the past few releases.
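The iterable change can be sketched as follows; creating the iterable makes no request, and results are fetched page by page only once you start iterating (the filter value is illustrative):

```python
from itertools import islice
from huggingface_hub import list_models

# Lazy iterable: no HTTP request is sent at this point.
models = list_models(filter="text-classification")

# Materialize explicitly if you need indexing/slicing
# (this triggers the paginated network calls, hence commented out):
# first_five = list(islice(models, 5))
```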
Bugfixes and small improvements
Token permission
New parameters in `login()`:
- `new_session`: skip login if `new_session=False` and user is already logged in
- `write_permission`: write permission is required (login fails otherwise)

Also added a new `HfApi().get_token_permission()` method that returns `"read"` or `"write"` (or `None` if not logged in).
- Add new_session, write_permission args by @aliabid94 in #1476
List files with details
New parameter to get more details when listing files: `list_repo_files(..., expand=True)`. The API call is slower but the `lastCommit` and `security` fields are returned as well.
Docs fixes
- Resolve broken link to 'filesystem' by @tomaarsen in #1461
- Fix broken link in docs to hf_file_system guide by @albertvillanova in #1469
- Remove hffs from docs by @albertvillanova in #1468
Misc
- Fix consistency check when downloading a file by @Wauplin in #1449
- Fix discussion URL on datasets and spaces by @Wauplin in #1465
- FIX user agent not passed in snapshot_download by @Wauplin in #1478
- Avoid `ImportError` when importing `WebhooksServer` and Gradio is not installed by @mariosasko in #1482
- add utf8 encoding when opening files for windows by @abidlabs in #1484
- Fix incorrect syntax in `_deprecation.py` warning message for `_deprecate_list_output()` by @x11kjm in #1485
- Update _hf_folder.py by @SimonKitSangChu in #1487
- fix pause_and_restart test by @Wauplin (direct commit on main)
- Support image-to-image task in InferenceApi by @Wauplin in #1489