Albert Villanova

albertvillanova

albertvillanova's activity

posted an update 4 days ago
πŸš€ New feature of the Comparator of the πŸ€— Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

πŸ› οΈ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
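If you'd rather explore a model tree programmatically, the same base/derivative relationship is declared in each repo's metadata on the Hub. A minimal sketch with huggingface_hub (the repo id is just an illustrative example, and the exact tag format is an assumption):

from huggingface_hub import model_info

# Fetch Hub metadata for a derived model (illustrative repo id)
info = model_info("meta-llama/Llama-3.1-70B-Instruct")

# Derivatives declare their origin via "base_model:*" tags,
# e.g. "base_model:finetune:meta-llama/Llama-3.1-70B" (assumed tag format)
print([tag for tag in info.tags if tag.startswith("base_model")])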

Ready to dive in? πŸ† Try the πŸ€— Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
posted an update 11 days ago
πŸš€ Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! πŸ“Š

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
posted an update 16 days ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss! Why? Can they follow the format shown in the few-shot examples? πŸ“Š Compare models: open-llm-leaderboard/comparator
posted an update 18 days ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #SmolLM to run locally? With so many models available, how can you decide which one suits your needs best? πŸ€”

If the models you’re interested in have been evaluated on the Hugging Face Open LLM Leaderboard, there’s an easy way to compare them: the model Comparator tool: open-llm-leaderboard/comparator
Let’s walk through an example πŸ‘‡

Let’s compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.6B params)

For an assistant, you want a model that’s great at instruction following. So, how do these two models stack up on the IFEval task?
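The Comparator reads the leaderboard's published results, so you can also check this in a few lines of Python. A rough sketch, assuming the aggregated scores live in the open-llm-leaderboard/contents dataset and that the fullname/IFEval column names below are correct:

from datasets import load_dataset

# Aggregated leaderboard table (dataset and column names are assumptions)
table = load_dataset("open-llm-leaderboard/contents", split="train")

models = {"Qwen/Qwen2.5-1.5B-Instruct", "google/gemma-2-2b-it"}
for row in table:
    if row.get("fullname") in models:
        print(row["fullname"], "IFEval:", row.get("IFEval"))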

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.6B Gemma in many areas, even though it's smaller! πŸ“Š

This is a great example of how parameter size isn’t everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.

Looking for other comparisons? Drop your model suggestions below! πŸ‘‡
posted an update 23 days ago
🚨 We’ve just released a new tool to compare the performance of models in the πŸ€— Open LLM Leaderboard: the Comparator πŸŽ‰
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let’s walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. πŸ¦™πŸ§΅πŸ‘‡

1/ Load the Models' Results
- Go to the πŸ€— Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab πŸ“Š
- Head over to the Results tab.
- Here, you’ll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab βš™οΈ
- To ensure you’re comparing apples to apples, head to the Configs tab.
- Review both models’ evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it’s good to know before drawing conclusions! βœ…

4/ Compare Predictions by Sample in the Details Tab πŸ”
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.

5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.
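By the way, the per-sample details shown in the Details tab can also be loaded with the datasets library. A sketch under the assumption that each model's details are published as open-llm-leaderboard/<org>__<model>-details with one config per subtask (the repo id, config name, and split below are illustrative guesses):

from datasets import load_dataset

# Per-sample predictions for one subtask (all names here are assumptions)
details = load_dataset(
    "open-llm-leaderboard/meta-llama__Llama-3.1-70B-Instruct-details",
    "meta-llama__Llama-3.1-70B-Instruct__leaderboard_musr_murder_mysteries",
    split="latest",
)
print(details[0])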

πŸš€ Try the πŸ€— Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
posted an update 6 months ago
Easily convert your script-based datasets to Parquet and explore them in the dataset viewer. 🌟

πŸ› οΈ Use @huggingface Datasets CLI:
$ datasets-cli convert_to_parquet USERNAME/DATASET_NAME

Learn more: https://huggingface.co/docs/datasets/main/en/cli#convert-to-parquet
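Once converted, the dataset loads like any other Hub dataset, with no loading script involved. For example (keeping the placeholder repo id from above):

from datasets import load_dataset

# Loads the Parquet files directly from the Hub; no remote code needed
ds = load_dataset("USERNAME/DATASET_NAME", split="train")
print(ds)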
#Data #AI
posted an update 6 months ago
Recently, the Hugging Face πŸ€— datasets team met with the Language Technologies team led by Marta Villegas (@mvillegas) at the Barcelona Supercomputing Center (@BSC-LT). We're eager to collaborate to promote AI across the Catalan, Spanish, Basque, and Galician languages and to share open-source datasets and models. 🀝 #AI #LanguageTech #OpenSource
posted an update 6 months ago
πŸš€ We recently released datasets 2.19.0! πŸ“¦

πŸ”₯ What's New:
- Polars integration πŸ»β€β„οΈ
- fsspec support for conversion to JSON, CSV, and Parquet
- Mode parameter for Image feature
- CLI function to convert script-based datasets to Parquet
- Dataset.take and Dataset.skip
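As a quick taste of the new APIs, here's a minimal sketch (it assumes the Polars integration exposes Dataset.to_polars, and uses a toy in-memory dataset):

from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# New in 2.19.0: take/skip on regular (map-style) datasets
head = ds.take(2)   # first two rows
rest = ds.skip(2)   # everything after the first two rows

# Polars integration: convert to a polars DataFrame (assumed API name)
df = ds.to_polars()
print(head["text"], rest["text"], df.shape)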

Plus, a bunch of general improvements & bug fixes!

Check out the release notes: https://github.com/huggingface/datasets/releases/tag/2.19.0

Upgrade now and power up your data workflows! πŸ’₯
reacted to Wauplin's post with πŸš€ 6 months ago
πŸš€ Just released version 0.23.0 of the huggingface_hub Python library!

Exciting updates include:
πŸ“ Seamless download to local dir!
πŸ’‘ Grammar and Tools in InferenceClient!
🌐 Documentation fully translated to Korean!
πŸ‘₯ User API: get likes, upvotes, number of repos, etc.!
🧩 Better model cards and encoding for ModelHubMixin!
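For instance, the local-dir download is a one-liner (the repo id and filename here are just illustrative):

from huggingface_hub import hf_hub_download

# New in 0.23.0: download a file straight into a local directory
path = hf_hub_download(
    repo_id="google-bert/bert-base-uncased",
    filename="config.json",
    local_dir="./bert-base-uncased",
)
print(path)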

Check out the full release notes for more details:
Wauplin/huggingface_hub#6
πŸ‘€
reacted to julien-c's post with πŸ€—β€οΈ 9 months ago
πŸ“£ NEW on HF

the Dataset Viewer is now available on *private datasets* too

You need to be a PRO or an Enterprise Hub user. πŸ”₯

Great work from our Datasets team πŸ₯°: @lhoestq @severo @polinaeterna @asoria @albertvillanova and the whole team πŸ₯°