Alessandro Ercolani

giux78

AI & ML interests

NLP, Reinforcement Learning, Semantics, Computational Neuroscience

Recent Activity

liked a dataset 3 days ago
ReDiX/QA-ita-200k
liked a dataset 5 days ago
microsoft/orca-agentinstruct-1M-v1
updated a dataset 5 days ago
mii-llm/pinocchio-results


giux78's activity

reacted to their post with 🚀 4 months ago
posted an update 4 months ago
We at https://mii-llm.ai just released a new Italian LLM benchmark and a set of evaluations: MMLU-PRO-ITA

Thanks to @efederici, who released efederici/MMLU-Pro-ita, a machine-translated version of MMLU-PRO, and thanks to a community-shared computational effort, we published the results for Italian open-source LLMs in the "Eval Aggiuntive" ("Additional Evals") tab of https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard.

To dig deeper, read the blog article on HF: https://huggingface.co/blog/giux78/mmlu-pro-ita
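If you want to poke at the benchmark yourself, here is a minimal loading sketch using the datasets library; the split name is an assumption, so check the dataset card:

```python
from datasets import load_dataset

# Load the machine-translated Italian MMLU-Pro benchmark.
# The "test" split name is an assumption; check the dataset card.
ds = load_dataset("efederici/MMLU-Pro-ita", split="test")
print(ds[0])  # one question with its options and gold answer
```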
reacted to dvilasuero's post with ❤️🤗🚀🔥 5 months ago
Today is a huge day in Argilla’s history. We couldn’t be more excited to share this with the community: we’re joining Hugging Face!

We’re embracing a larger mission, becoming part of a brilliant and kind team, and sharing a vision about the future of AI.

Over the past year, we’ve been collaborating with Hugging Face on countless projects: becoming a launch partner of Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr’s learnings, running the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference-tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.

To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We’re now part of something bigger, a larger team with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!
reacted to their post with ❤️🚀 6 months ago
posted an update 6 months ago
@FinancialSupport and I just released a new version of the Italian LLMs leaderboard https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
using the super useful https://huggingface.co/demo-leaderboard template from @clefourrier.
We’ve evaluated over 50 models (base, merged, fine-tuned, etc.) from:
- Major companies like Meta, Mistral, and Google
- University groups such as https://huggingface.co/sapienzanlp and https://huggingface.co/swap-uniba
- Italian companies like https://huggingface.co/MoxoffSpA, https://huggingface.co/FairMind, and https://huggingface.co/raicrits
- Various communities and individuals
All models were tested on Italian benchmarks (#mmlu, #arc-c, #hellaswag), which we contributed to the open-source lm-evaluation-harness library from https://huggingface.co/EleutherAI (see the sketch below).
Plus, you can now submit your model for automatic evaluation, thanks to computation sponsored by https://huggingface.co/seeweb.
Curious about the top Italian models? Check out the leaderboard and submit your model!
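An evaluation along these lines can be scripted with the lm-evaluation-harness Python API. This is a sketch only: the Italian task names and the example checkpoint are assumptions, so verify the exact task identifiers in your harness version.

```python
import lm_eval

# Evaluate a Hugging Face checkpoint on the Italian benchmarks.
# Task names are assumptions; list available ones with `lm_eval --tasks list`.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mii-llm/some-italian-model",  # hypothetical model id
    tasks=["arc_it", "hellaswag_it", "m_mmlu_it"],
    num_fewshot=0,
)
print(results["results"])
```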

https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

  • 1 reply
·
reacted to efederici's post with 🔥 6 months ago
Finally, I can post! 🚀

I created a Capybara-inspired Italian dataset by translating the initial instructions and running them through a pipeline to generate conversations. I used Claude Sonnet for translation and instruction generation, and Opus for generating the answers.

I hope this dataset proves useful for people working on 🇮🇹 language models.

⛁ Open sourcing the dataset here: efederici/capybara-claude-15k-ita
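For readers curious about how such a pipeline might be wired up, here is a rough sketch using the anthropic Python SDK; the prompts and model IDs are illustrative assumptions, not the exact ones used for the dataset.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def translate_to_italian(text: str) -> str:
    # Sonnet for translation, as described above; the prompt is illustrative.
    msg = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": f"Translate to Italian:\n{text}"}],
    )
    return msg.content[0].text

def generate_answer(instruction_it: str) -> str:
    # Opus for the assistant turn of the conversation.
    msg = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": instruction_it}],
    )
    return msg.content[0].text

# One synthetic turn: translate an instruction, then answer it in Italian.
print(generate_answer(translate_to_italian("Explain photosynthesis simply.")))
```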
reacted to their post with 🚀 7 months ago
posted an update 7 months ago
@mik3ml just released ReDiX/wikipediaQA-ita, an interesting synthetic dataset generated from Wikipedia using a fine-tuned version of Mistral-7B specialized for the Italian language 🇮🇹.

reacted to tomaarsen's post with 🔥 7 months ago
I've just stumbled upon some excellent work on (🇫🇷 French) retrieval models by @antoinelouis. Kudos to him!

- French Embedding Models: https://huggingface.co/collections/antoinelouis/dense-single-vector-bi-encoders-651523c0c75a3d4c44fc864d
- French Reranker Models: antoinelouis/cross-encoder-rerankers-651523f16efa656d1788a239
- French Multi-vector Models: https://huggingface.co/collections/antoinelouis/dense-multi-vector-bi-encoders-6589a8ee6b17c06872e9f075
- Multilingual Models: https://huggingface.co/collections/antoinelouis/modular-retrievers-65d53d0db64b1d644aea620c

A lot of these models use the MS MARCO Hard Negatives dataset, which I'm currently reformatting to be more easily usable. Notably, they should work out of the box without any pre-processing for training embedding models in the upcoming Sentence Transformers v3.
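If you want to try one of these retrievers, here is a minimal sketch with Sentence Transformers; the checkpoint name is an assumption taken from the linked collection, and any bi-encoder from it should behave the same way.

```python
from sentence_transformers import SentenceTransformer, util

# Checkpoint name is an assumption taken from the linked collection.
model = SentenceTransformer("antoinelouis/biencoder-camembert-base-mmarcoFR")

query = "Quelle est la capitale de la France ?"
docs = ["Paris est la capitale de la France.", "Le Louvre est un musée."]

# Embed and rank documents by cosine similarity to the query.
scores = util.cos_sim(model.encode(query), model.encode(docs))
print(scores)
```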
reacted to gsarti's post with 🚀 7 months ago
🔍 Today's (self-serving) pick in Interpretability & Analysis of LMs:

A Primer on the Inner Workings of Transformer-based Language Models
by @javifer @gsarti @arianna-bis and M. R. Costa-jussà
( @mt-upc , @GroNLP , @facebook )

This primer can serve as a comprehensive introduction to recent advances in interpretability for Transformer-based LMs for a technical audience, employing a unified notation to introduce network modules and present state-of-the-art interpretability methods.

Interpretability methods are presented with detailed formulations and categorized as either localizing the inputs or model components responsible for a particular prediction, or decoding information stored in learned representations. Various insights on the role of specific model components are then summarized, alongside recent work using model internals to direct editing and mitigate hallucinations.

Finally, the paper provides a detailed picture of the open-source interpretability tools landscape, supporting the need for open-access models to advance interpretability research.

📄 Paper: A Primer on the Inner Workings of Transformer-based Language Models (2405.00208)

🔍 All daily picks: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-ofc-lms-65ae3339949c5675d25de2f9
reacted to danielhanchen's post with 🚀 7 months ago
Yay we got 500K+ monthly HF downloads on our Unsloth HF repo! :) Super appreciate everyone in the OSS community - and thanks for using Unsloth!!
reacted to HugoLaurencon's post with ❤️ 7 months ago
The Cauldron is a massive collection of 50 high-quality datasets, all converted to the user/assistant format, and ready to use to fine-tune any Vision Language Model.

The Cauldron covers a wide range of tasks, including general visual question answering, counting, captioning, text transcription, document understanding, chart/figure understanding, table understanding, visual reasoning, geometry, spotting differences between two images, and converting a screenshot to code.

HuggingFaceM4/the_cauldron
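A minimal loading sketch, assuming the datasets library; the subset name "ai2d" and the field name are assumptions, so pick any of the 50 configs listed on the dataset card.

```python
from datasets import load_dataset

# The Cauldron is split into named task subsets; "ai2d" is one assumed example.
ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train")

sample = ds[0]
print(sample["texts"])  # user/assistant turns; field name per the dataset card
```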
reacted to thomwolf's post with 🧠🚀🔥 7 months ago
Is it time for the open-source AI robot revolution 🚀?

With @haixuantao and @Leyo we’ve been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS, all Apache 2.0) and orchestrated by dora-rs.

Links to find all the hardware/software we used in the demo:
- robot control framework – dora-rs: https://github.com/dora-rs/dora
- speech-to-text model – whisper: openai/whisper-base
- vision-text model – Idefics2: HuggingFaceM4/idefics2-8b-AWQ
- text-to-speech model – ParlerTTS mini: parler-tts/parler_tts_mini_v0.1
- robot: https://dji.com/robomaster-s1
- code gist: https://gist.github.com/haixuanTao/860e1740245dc2c8dd85b496150a9320
- Larger codebase: dora-rs/dora-idefics2
- laptop/pc: any with a recent GPU card (ours has an RTX 4090)

Enjoy!
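As a taste of the speech-to-text piece, here is a minimal sketch that transcribes a voice command with the same Whisper checkpoint; the audio file path is a placeholder.

```python
from transformers import pipeline

# Transcribe a spoken command with the Whisper checkpoint from the demo.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")
print(asr("command.wav"))  # placeholder path; returns {"text": "..."}
```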
reacted to their post with ❤️ 7 months ago
🎉 Super @DeepMount00 just released Gemma_QA_ITA_v3, leading the RAG task on the Italian LLM_ITA_LEADERBOARD. The model is a fine-tuned version of Gemma 2B.
Model details: DeepMount00/Gemma_QA_ITA_v3
Explore the full RAG section rankings here: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard, under the "Classifica RAG" (RAG ranking) section.
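A minimal usage sketch with transformers; the context/question prompt template is an assumption, so check the model card for the exact format the fine-tune expects.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepMount00/Gemma_QA_ITA_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The context/question template below is an assumption; check the model card.
prompt = "Contesto: Roma è la capitale d'Italia.\nDomanda: Qual è la capitale d'Italia?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```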