📢 [v0.23.0]: LLMs with tools, seamless downloads, and much more!
Note: pre-release 0.23.0.rc0 is available on PyPI. The official release will occur in the coming days.
EDIT: 0.23.0 is out!
📁 Seamless download to local dir
The 0.23.0 release comes with a big revamp of the download process, especially when downloading to a local directory. Previously, the process still involved the cache directory and symlinks, which led to misconceptions and a suboptimal user experience. The new workflow uses a .cache/huggingface/ folder, similar to the .git/ one, that keeps track of the progress of a download. The main features are:
- no symlinks
- no local copy
- don't re-download when not necessary
- same behavior on both Unix and Windows
- unrelated to cache-system
Example: download the q4 GGUF file from microsoft/Phi-3-mini-4k-instruct-gguf:
# Download the q4 GGUF file from the repo to a local directory
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir=data/phi3
With this addition, interrupted downloads are now resumable! This applies to downloads in both local and cache directories, which should greatly improve UX for users with slow or unreliable connections. In this regard, the resume_download parameter is now deprecated (it is no longer relevant).
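The same download can also be scripted from Python. A minimal sketch using hf_hub_download, with the repo and file taken from the CLI example above (the target directory is arbitrary):
from huggingface_hub import hf_hub_download

# Download the q4 GGUF file straight into data/phi3 (no symlinks, no cache copy)
path = hf_hub_download(
    repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
    local_dir="data/phi3",
)
print(path)  # e.g. data/phi3/Phi-3-mini-4k-instruct-q4.gguf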
- Revamp download to local dir process by @Wauplin in #2223
- Rename .huggingface/ folder to .cache/huggingface/ by @Wauplin in #2262
💡 Grammar and Tools in InferenceClient
It is now possible to provide a list of tools when chatting with a model using the InferenceClient! This major improvement has been made possible thanks to TGI, which handles tools natively.
>>> from huggingface_hub import InferenceClient
# Ask for the weather in the next 3 days using tools
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
... {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."},
... {"role": "user", "content": "What's the weather like the next 3 days in San Francisco, CA?"},
... ]
>>> tools = [
... {
... "type": "function",
... "function": {
... "name": "get_current_weather",
... "description": "Get the current weather",
... "parameters": {
... "type": "object",
... "properties": {
... "location": {
... "type": "string",
... "description": "The city and state, e.g. San Francisco, CA",
... },
... "format": {
... "type": "string",
... "enum": ["celsius", "fahrenheit"],
... "description": "The temperature unit to use. Infer this from the users location.",
... },
... },
... "required": ["location", "format"],
... },
... },
... },
... ...
... ]
>>> response = client.chat_completion(
... model="meta-llama/Meta-Llama-3-70B-Instruct",
... messages=messages,
... tools=tools,
... tool_choice="auto",
... max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
arguments={
'location': 'San Francisco, CA',
'format': 'fahrenheit',
'num_days': 3
},
name='get_n_day_weather_forecast',
description=None
)
It is also possible to provide grammar rules to the text_generation task. This ensures that the output follows a precise JSON Schema specification or matches a regular expression. For more details, check out the Guidance guide in the Text-Generation-Inference docs.
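As an illustration, here is a minimal sketch of constraining text_generation with a regex grammar, assuming a TGI-backed model with grammar support and the dict form of the grammar parameter (the prompt and pattern are illustrative):
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
# Force the output to match a simple HH:MM time format
output = client.text_generation(
    "What time is it in San Francisco when it is noon in New York? Answer:",
    grammar={"type": "regex", "value": "[0-9]{2}:[0-9]{2}"},
    max_new_tokens=10,
)
print(output)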
⚙️ Other
The documentation now refers to the chat-completion task rather than conversational.
chat-completion now relies on server-side template rendering in all cases, including when the model is transformers-backed. Previously, this was only the case for TGI-backed models; otherwise, templates were rendered client-side.
The logic to determine whether a model is served via TGI or transformers has been improved.
🇰🇷 Korean community is on fire!
The PseudoLab team is a non-profit dedicated to making AI more accessible to the Korean-speaking community. In the past few weeks, their team of contributors translated (almost) the entire huggingface_hub documentation. Huge shout-out for the coordination on this task! The documentation can be accessed here.
- 🌐 [i18n-KO] Translated guides/webhooks_server.md to Korean by @nuatmochoi in #2145
- 🌐 [i18n-KO] Translated reference/login.md to Korean by @SeungAhSon in #2151
- 🌐 [i18n-KO] Translated package_reference/tensorboard.md to Korean by @fabxoe in #2173
- 🌐 [i18n-KO] Translated package_reference/inference_client.md to Korean by @cjfghk5697 in #2178
- 🌐 [i18n-KO] Translated reference/inference_endpoints.md to Korean by @harheem in #2180
- 🌐 [i18n-KO] Translated package_reference/file_download.md to Korean by @seoyoung-3060 in #2184
- 🌐 [i18n-KO] Translated package_reference/cache.md to Korean by @nuatmochoi in #2191
- 🌐 [i18n-KO] Translated package_reference/collections.md to Korean by @boyunJang in #2214
- 🌐 [i18n-KO] Translated package_reference/inference_types.md to Korean by @fabxoe in #2171
- 🌐 [i18n-KO] Translated guides/upload.md to Korean by @junejae in #2139
- 🌐 [i18n-KO] Translated reference/repository.md to Korean by @junejae in #2189
- 🌐 [i18n-KO] Translated package_reference/space_runtime.md to Korean by @boyunJang in #2213
- 🌐 [i18n-KO] Translated guides/repository.md to Korean by @cjfghk5697 in #2124
- 🌐 [i18n-KO] Translated guides/model_cards.md to Korean by @SeungAhSon in #2128
- 🌐 [i18n-KO] Translated guides/community.md to Korean by @seoulsky-field in #2126
- 🌐 [i18n-KO] Translated guides/cli.md to Korean by @harheem in #2131
- 🌐 [i18n-KO] Translated guides/search.md to Korean by @seoyoung-3060 in #2134
- 🌐 [i18n-KO] Translated guides/inference.md to Korean by @boyunJang in #2130
- 🌐 [i18n-KO] Translated guides/manage-spaces.md to Korean by @boyunJang in #2220
- 🌐 [i18n-KO] Translating guides/hf_file_system.md to Korean by @heuristicwave in #2146
- 🌐 [i18n-KO] Translated package_reference/hf_api.md to Korean by @fabxoe in #2165
- 🌐 [i18n-KO] Translated package_reference/mixins.md to Korean by @fabxoe in #2166
- 🌐 [i18n-KO] Translated guides/inference_endpoints.md to Korean by @usr-bin-ksh in #2164
- 🌐 [i18n-KO] Translated package_reference/utilities.md to Korean by @cjfghk5697 in #2196
- fix ko docs by @Wauplin (direct commit on main)
- 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean by @seoyoung-3060 in #2233
- 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean by @SeungAhSon in #2174
🛠️ Misc improvements
User API
@bilgehanertan added support for 2 new routes:
- get_user_overview to retrieve high-level information about a user: username, avatar, number of models/datasets/Spaces, number of likes and upvotes, number of interactions in discussions, etc. (see the sketch below)
- User API endpoints by @bilgehanertan in #2147
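A minimal sketch of calling the new route (the username is arbitrary; inspect the returned object for the exact set of attributes):
from huggingface_hub import HfApi

api = HfApi()
# High-level public information about a user (models, datasets, Spaces, likes, ...)
overview = api.get_user_overview(username="Wauplin")
print(overview)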
CLI tag
@bilgehanertan added a new command to the CLI to handle tags. It is now possible to:
- tag a repo
>>> huggingface-cli tag Wauplin/my-cool-model v1.0
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model
- retrieve the list of tags for a repo
>>> huggingface-cli tag Wauplin/gradio-space-ci -l --repo-type space
Tags for space Wauplin/gradio-space-ci:
0.2.2
0.2.1
0.2.0
0.1.2
0.0.2
0.0.1
- delete a tag on a repo
>>> huggingface-cli tag -d Wauplin/my-cool-model v1.0
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
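The same operations can also be scripted with HfApi: create_tag and delete_tag manage tags, and list_repo_refs returns the existing ones. A minimal sketch reusing the repos from the examples above:
from huggingface_hub import HfApi

api = HfApi()
# Tag a model repo
api.create_tag("Wauplin/my-cool-model", tag="v1.0")
# List the tags of a Space repo
refs = api.list_repo_refs("Wauplin/gradio-space-ci", repo_type="space")
print([tag.name for tag in refs.tags])
# Delete a tag on a model repo
api.delete_tag("Wauplin/my-cool-model", tag="v1.0")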
For more details, check out the CLI guide.
- CLI Tag Functionality by @bilgehanertan in #2172
🧩 ModelHubMixin
The ModelHubMixin got a set of nice improvements to generate model cards and handle custom data types in the config.json file. More info in the integration guide.
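As a reminder of how the mixin is used, here is a minimal sketch with the PyTorch flavor: the __init__ values are saved to config.json and injected back at loading time (the model and values are illustrative):
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

model = MyModel(hidden_size=32)
model.save_pretrained("my-model")               # writes weights + config.json (hidden_size=32)
reloaded = MyModel.from_pretrained("my-model")  # config is passed back to __init__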
- ModelHubMixin: more metadata + arbitrary config types + proper guide by @Wauplin in #2230
- Fix ModelHubMixin when class is a dataclass by @Wauplin in #2159
- Do not document private attributes of ModelHubMixin by @Wauplin in #2216
- Add support for pipeline_tag in ModelHubMixin by @Wauplin in #2228
⚙️ Other
In a shared environment, it is now possible to set a custom path as the HF_TOKEN_PATH environment variable so that each user of the cluster has their own access token.
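A minimal sketch, assuming the variable is set before huggingface_hub is first imported (the path is hypothetical):
import os

# Each user points to their own token file instead of the default location
os.environ["HF_TOKEN_PATH"] = "/shared/tokens/alice/token"  # hypothetical path

from huggingface_hub import whoami

print(whoami())  # authenticated with the token read from the custom path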
Thanks to @Y4suyuki and @lappemic, most custom errors defined in huggingface_hub are now aggregated in the same module. This makes it very easy to import them with from huggingface_hub.errors import ....
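A sketch of the new import path; the exact set of aggregated classes is best checked in the module itself (TextGenerationError here is an assumption):
# Assumption: TextGenerationError is among the errors aggregated in errors.py
from huggingface_hub.errors import TextGenerationError

try:
    ...  # call that may raise a text-generation error
except TextGenerationError as e:
    print(f"Generation failed: {e}")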
Fixed HFSummaryWriter (a class to seamlessly log tensorboard events to the Hub) to work with either the tensorboardX or torch.utils implementation, depending on the user's setup.
The speed of listing files with HfFileSystem has been drastically improved, thanks to @awgr. The values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. Users who want to modify values returned by HfFileSystem now need to copy them beforehand. This is expected to be a very limited drawback.
Progress bars in huggingface_hub got some flexibility! It is now possible to give a name to a tqdm bar (similar to logging.getLogger) and to enable/disable only some progress bars. More details in this guide.
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")
# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
... pass
# But progress bars are still shown for `peft`
>>> for _ in tqdm(range(5), name="peft"):
... pass
100%|โโโโโโโโโโโโโโโโโ| 5/5 [00:00<00:00, 117817.53it/s]
- Implement hierarchical progress bar control in huggingface_hub by @lappemic in #2217
💔 Breaking changes
--local-dir-use-symlinks and --resume-download
As part of the download process revamp, some breaking changes have been introduced. However, we believe that the benefits outweigh the cost of the change. Breaking changes include:
- A .cache/huggingface/ folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should then expect a longer recovery time if you re-run your download command.
- --local-dir-use-symlinks is not used anymore and will be ignored. It is no longer possible to symlink your local dir with the cache directory. Thanks to the .cache/huggingface/ folder, it shouldn't be needed anyway.
- --resume-download has been deprecated and will be ignored. Resuming failed downloads is now always enabled by default. If you need to force a new download, use --force-download (see the sketch below).
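In Python, the counterpart of --force-download is the force_download argument. A minimal sketch, reusing the GGUF example from above:
from huggingface_hub import hf_hub_download

# Ignore cached and partially downloaded data; fetch the file again from scratch
path = hf_hub_download(
    repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
    local_dir="data/phi3",
    force_download=True,
)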
Inference Types
As part of #2237 (Grammar and Tools support), we've updated the return values of InferenceClient.chat_completion and InferenceClient.text_generation to match the TGI output exactly. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had from huggingface_hub import TextGenerationOutput in your code. This is however not a common usage, since those objects are instantiated by huggingface_hub directly.
Expected breaking changes
Some other breaking changes were expected (and announced since 0.19.x):
- list_files_info is definitively removed in favor of get_paths_info and list_repo_tree
- WebhookServer.run is definitively removed in favor of WebhookServer.launch
- api_endpoint in the ModelHubMixin push_to_hub method is definitively removed in favor of the HF_ENDPOINT environment variable
Check #2156 for more details.
Small fixes and maintenance
⚙️ CI optimization
⚙️ fixes
- Fix HF_ENDPOINT not handled correctly by @Wauplin in #2155
- Fix proxy if dynamic endpoint by @Wauplin (direct commit on main)
- Update the note message when logging in to make it easier to understand and clearer by @lh0x00 in #2163
- Fix URL when uploading to proxy by @Wauplin in #2167
- Fix SafeTensorsInfo initialization by @Wauplin in #2190
- Doc cli download timeout by @zioalex in #2198
- Fix Typos in CONTRIBUTION.md and Formatting in README.md by @lappemic in #2201
- change default model card by @Wauplin (direct commit on main)
- Add returns documentation for save_pretrained by @alexander-soare in #2226
- Update cli.md by @QuinnPiers in #2242
- add warning tip that list_deployed_models only searches over cache by @MoritzLaurer in #2241
- Respect default timeouts in hf_file_system by @Wauplin in #2253
- Update harmonized token param desc and type def by @lappemic in #2252
- Better document download attribute by @Wauplin in #2250
- Correctly check inference endpoint is ready by @Wauplin in #2229
- Add support for updatedRefs in WebhookPayload by @Wauplin in #2169
⚙️ internal
- prepare for 0.23 by @Wauplin in #2156
- lint by @Wauplin (direct commit on main)
- quick fix by @Wauplin (direct commit on main)
- Fix CI (inference tests, dataset viewer user, mypy) by @Wauplin in #2208
- link by @Wauplin (direct commit on main)
- Fix circular imports in eager mode? by @Wauplin in #2211
- Drop generic from InferenceAPI framework list by @Wauplin in #2240
- Remove test sort by acsending likes by @Wauplin in #2243
- Delete legacy tests in TestHfHubDownloadRelativePaths + implicit delete folder is ok by @Wauplin in #2259
- small doc clarification by @julien-c in #2261
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @lappemic
- @bilgehanertan
- @cjfghk5697
- @SeungAhSon
- @seoulsky-field
  - 🌐 [i18n-KO] Translated guides/community.md to Korean (#2126)
- @Y4suyuki
  - Define errors in errors.py (#2170)
- @harheem
- @seoyoung-3060
- @boyunJang
- @nuatmochoi
- @fabxoe
  - 🌐 [i18n-KO] Translated package_reference/tensorboard.md to Korean (#2173)
  - 🌐 [i18n-KO] Translated package_reference/inference_types.md to Korean (#2171)
  - 🌐 [i18n-KO] Translated package_reference/hf_api.md to Korean (#2165)
  - 🌐 [i18n-KO] Translated package_reference/mixins.md to Korean (#2166)
- @junejae
- @heuristicwave
  - 🌐 [i18n-KO] Translating guides/hf_file_system.md to Korean (#2146)
- @usr-bin-ksh
  - 🌐 [i18n-KO] Translated guides/inference_endpoints.md to Korean (#2164)