Louis Brulé Naudet PRO

louisbrulenaudet

AI & ML interests

Research in business taxation and development, University Dauphine-PSL 📖 | Backed by the Microsoft for Startups Hub program and Google Cloud Platform for startups program | Hugging Face for Legal 🤗

Recent Activity

updated a dataset about 9 hours ago
louisbrulenaudet/bofip
updated a dataset about 9 hours ago
louisbrulenaudet/code-voirie-routiere
updated a dataset about 9 hours ago
louisbrulenaudet/code-travail


louisbrulenaudet's activity

posted an update 7 days ago
I've published a new dataset to simplify model merging 🤗

This dataset facilitates the search for compatible architectures for model merging with @arcee_ai's mergekit, streamlining the automation of high-performance merge searches 📖

Dataset: louisbrulenaudet/mergekit-configs
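
To explore these configurations programmatically, the dataset can be pulled with the datasets library. A minimal sketch, assuming a default "train" split (the column layout is not specified in the post):

from datasets import load_dataset

# Load the published mergekit configuration dataset from the Hub.
configs = load_dataset("louisbrulenaudet/mergekit-configs", split="train")

# Inspect one record to see which architectures it pairs for merging.
print(configs[0])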
reacted to m-ric's post with 🔥 8 days ago
๐—ค๐˜„๐—ฒ๐—ป๐Ÿฎ.๐Ÿฑ-๐—–๐—ผ๐—ฑ๐—ฒ๐—ฟ-๐Ÿฏ๐Ÿฎ๐—•: ๐—ป๐—ฒ๐˜„ ๐—ฏ๐—ฒ๐˜€๐˜-๐—ถ๐—ป-๐—ฐ๐—น๐—ฎ๐˜€๐˜€ ๐—ผ๐—ฝ๐—ฒ๐—ป ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น, ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ ๐—ผ๐—ป ๐—บ๐—ผ๐˜€๐˜ ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐—ฏ๐—ฒ๐—ป๐—ฐ๐—ต๐—บ๐—ฎ๐—ฟ๐—ธ๐˜€!๐Ÿ’ฅ

💪 It's the first time an open-source coding model of this size class clearly matches GPT-4o's coding capabilities!

✨ Completes the previous two Qwen 2.5 Coder releases with four new sizes: 0.5B, 3B, 14B, and 32B
📚 Supports long context up to 128K (for the 14B and 32B models)
✅ Drop-in replacement for GPT-4o as a coding assistant on Cursor or for Artifacts!
🤗 Models available right now on the Hub, under an Apache 2.0 license!
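
Trying one of the new checkpoints takes a few lines with transformers. A minimal sketch; the instruct-variant id below follows the usual Qwen naming, so treat it as an assumption:

from transformers import pipeline

# Smallest of the new sizes; swap in the 14B/32B ids for stronger results.
coder = pipeline("text-generation", model="Qwen/Qwen2.5-Coder-3B-Instruct")

prompt = "Write a Python function that reverses a string."
print(coder(prompt, max_new_tokens=64)[0]["generated_text"])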

They have set up a crazy Artifacts demo; you should go have a look!
👉 Qwen/Qwen2.5-Coder-Artifacts
reacted to m-ric's post with 👀 8 days ago
A non-Instruct LLM assistant is mostly useless. 🧐

Since it's mostly a model trained to complete text, when you ask it a question like "What to do during a stopover in Paris?", it can just go on and on adding details to your question instead of answering it: a valid way to complete text from its training corpus, but not a valid answer.

โžก๏ธ So the post-training stage includes an important Instruction tuning step where you teach your model how to be useful : answer questions, be concise, be polite... RLHF is a well known technique for this.

For people interested in understanding how this step works, the folks at Adaptive ML have made a great guide!

Read it here 👉 https://www.adaptive-ml.com/post/from-zero-to-ppo
reacted to prithivMLmods's post with 🤝 10 days ago
New Style, New Mix, New Drop 🧤

🧨 Flux LoRA DLC: prithivMLmods/FLUX-LoRA-DLC

🎆 Glowing-Body: prithivMLmods/Glowing-Body-Flux-LoRA
🎆 Electric-Blue: prithivMLmods/Electric-Blue-Flux-LoRA
🎆 Intense-Red: prithivMLmods/Intense-Red-Flux-LoRA
🎆 Clouds-Illusion: prithivMLmods/Clouds-Illusion-Flux-LoRA
🎆 Digital-Yellow: prithivMLmods/Digital-Yellow-Flux-LoRA

🧨 Flux LoRA Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

@prithivMLmods
reacted to m-ric's post with 🚀 10 days ago
๐—”๐—ป๐—ฑ๐—ฟ๐—ผ๐—ถ๐—ฑ๐—Ÿ๐—ฎ๐—ฏ: ๐—™๐—ถ๐—ฟ๐˜€๐˜ ๐—ฒ๐˜ƒ๐—ฒ๐—ฟ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐—ฎ๐˜๐—ถ๐—ฐ ๐—ฏ๐—ฒ๐—ป๐—ฐ๐—ต๐—บ๐—ฎ๐—ฟ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—ป๐—ฑ๐—ฟ๐—ผ๐—ถ๐—ฑ ๐—บ๐—ผ๐—ฏ๐—ถ๐—น๐—ฒ ๐—ฎ๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐˜€๐—ต๐—ผ๐˜„๐˜€ ๐˜๐—ต๐—ฎ๐˜ ๐˜€๐—บ๐—ฎ๐—น๐—น, ๐—ณ๐—ถ๐—ป๐—ฒ-๐˜๐˜‚๐—ป๐—ฒ๐—ฑ ๐—ผ๐—ฝ๐—ฒ๐—ป ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ฐ๐—ฎ๐—ป ๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ฎ ๐—๐—”๐—ฅ๐—ฉ๐—œ๐—ฆ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ ๐—ผ๐—ป ๐˜†๐—ผ๐˜‚๐—ฟ ๐˜€๐—บ๐—ฎ๐—ฟ๐˜๐—ฝ๐—ต๐—ผ๐—ป๐—ฒ ๐Ÿ“ฑ๐Ÿ”ฅ

A team from Tsinghua University just released AndroidLab, the first systematic framework to evaluate and train Android mobile agents that works with both text-only and multimodal models.

They show that fine-tuning small open-source models can significantly boost performance, matching that of much bigger closed models like GPT-4o.

The team built:

📊 A reproducible benchmark with 138 tasks across 9 apps to evaluate mobile agents systematically

📝📱 A framework supporting both text-only (via XML) and visual (via marked screenshots) interfaces

✅ An instruction dataset of 10.5k operation traces for training mobile agents

Key insights:

- 📈 Fine-tuning improves performance BY A LOT: open-source model Llama-3.1-8B improves from 2% to 24% success rate after training, nearly reaching GPT-4o performance although it's much smaller
- ⚙️ Text-only agents match multimodal ones: XML-based agents achieve similar performance to screenshot-based multimodal agents.

Read their paper here 👉 AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents (2410.24024)
reacted to abhishek's post with 🔥 10 days ago
INTRODUCING Hugging Face AutoTrain Client 🔥
Fine-tuning models got even easier!!!!
Now you can fine-tune SOTA models on all compatible dataset-model pairs on Hugging Face Hub using Python on Hugging Face Servers. Choose from a number of GPU flavors, millions of model and dataset pairs, and 10+ tasks 🤗

To try it, install autotrain-advanced using pip. You can skip dependency resolution by installing with --no-deps, but then you'd need to install some dependencies by hand.

pip install autotrain-advanced

GitHub repo: https://github.com/huggingface/autotrain-advanced
reacted to prithivMLmods's post with ❤️ 10 days ago
Style flo : : 🎉🤗

{ Try Now on Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC

-- Undersea
{ Red Fluid } : prithivMLmods/Red-Undersea-Flux-LoRA

-- 3D Realmix
{ 3D Portrait Render } : prithivMLmods/3D-Render-Flux-LoRA

-- Pop
{ Yellow Pop } : prithivMLmods/Yellow-Pop-Flux-Dev-LoRA

-- Grid
{ Purple Grid } : prithivMLmods/Purple-Grid-Flux-LoRA

{ collections : : }

🚀 Flux LoRA :
prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

🚀 Collection zero: prithivMLmods/collection-zero-and-demo-recently-updated-65e48a7dd8212873836ceca2


@prithivMLmods 🧨
reacted to yagilb's post with 👀 14 days ago
reacted to singhsidhukuldeep's post with 👀 16 days ago
Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!

The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.

The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.

Data Preparation
- Convert questions into existence claims using rule-based transformations.
- Combine retrieval data with NLI data from SNLI and MNLI datasets.
- Unify the format of both data types using a consistent prompting framework.

Entailment Tuning Process
- Initialize the model using pre-trained language models like BERT or RoBERTa.
- Apply aggressive masking (β=0.8) specifically to the hypothesis components while preserving premise information.
- Train the model to predict the masked hypothesis tokens from the premise content.
- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.
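
As a rough illustration of the masking step, here is a sketch with a BERT-style tokenizer (the function name and details are assumptions; the paper's implementation may differ):

import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def mask_hypothesis(premise: str, hypothesis: str, beta: float = 0.8):
    # Encode as a sentence pair; token_type_ids == 1 marks the hypothesis side.
    enc = tokenizer(premise, hypothesis, return_tensors="pt")
    ids = enc["input_ids"][0].clone()
    labels = ids.new_full(ids.shape, -100)  # -100 is ignored by the MLM loss
    for pos in (enc["token_type_ids"][0] == 1).nonzero().flatten().tolist():
        # Mask hypothesis tokens with probability beta, keeping [SEP] intact.
        if ids[pos] != tokenizer.sep_token_id and random.random() < beta:
            labels[pos] = ids[pos]
            ids[pos] = tokenizer.mask_token_id
    return ids, labels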

Training Arguments for Entailment Tuning (Yes! They Shared Them)
- Use a learning rate of 2e-5 with 100 warmup steps.
- Set batch size to 128.
- Apply weight decay of 0.01.
- Utilize the Adam optimizer with beta values (0.9, 0.999).
- Maintain maximum gradient norm at 1.0.
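
In transformers terms, those hyperparameters map onto TrainingArguments roughly as follows (a sketch; the per-device batch size assumes the 128 total is split evenly across the 8 GPUs):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="entailment-tuning",
    num_train_epochs=10,
    learning_rate=2e-5,
    warmup_steps=100,
    per_device_train_batch_size=16,  # 16 per device x 8 GPUs = 128 total
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    max_grad_norm=1.0,
)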

Deployment
- Index passages using FAISS for efficient retrieval.
- Shard vector store across multiple GPUs.
- Enable sub-millisecond retrieval of the top-100 passages per query.
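
A minimal sketch of that indexing step with FAISS (dimension and data are placeholders; sharding across GPUs would use the faiss GPU utilities instead):

import faiss
import numpy as np

d = 768  # embedding dimension of the tuned encoder (assumed)
passage_embeddings = np.random.rand(10_000, d).astype("float32")

index = faiss.IndexFlatIP(d)   # inner-product index for dense retrieval
index.add(passage_embeddings)  # index all passage vectors

query = np.random.rand(1, d).astype("float32")
scores, ids = index.search(query, 100)  # retrieve the top-100 passages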

Integration with Existing Systems
- Insert entailment tuning between pre-training and fine-tuning stages.
- Maintain compatibility with current dense retrieval methods.
- Preserve existing contrastive learning approaches during fine-tuning.

Simple, intuitive, and effective!

This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks.
reacted to reach-vb's post with 🚀 18 days ago
Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM and TinyLlama on MT-Bench and AlpacaEval - Apache 2.0 licensed 🔥

> Trained with 1.3 trillion (dolma 1.7) tokens on 16 nodes, each with 4 MI250 GPUs

> Three checkpoints:

- AMD OLMo 1B: Pre-trained model
- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets
- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset

Key Insights:
> Pre-trained with less than half the tokens of OLMo-1B
> Post-training steps include two-phase SFT and DPO alignment
> Data for SFT:
- Phase 1: Tulu V2
- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback

> Model checkpoints on the Hub & Integrated with Transformers ⚡️

Congratulations & kudos to AMD on a brilliant smol model release! 🤗

amd/amd-olmo-6723e7d04a49116d8ec95070
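
Given the Transformers integration, loading a checkpoint should look like the sketch below (the exact model id inside the collection is an assumption):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B-SFT"  # assumed id from the amd-olmo collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Smol models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))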
replied to their post 25 days ago

Hello,

Thank you for reaching out. I'm interested in learning more about its potential applications and dataset specifics. To ensure we're aligned on objectives and timelines, would you mind providing a bit more detail on the following in the Tally form? (https://tally.so/r/w2xe0A)

  • Project Goals: What are the primary objectives for your model, and how do you envision deploying it?
  • Data and Compute Requirements: Could you outline the volume and nature of data you'd like to process and any specific requirements for H100 access?
  • Finetuning Method: I'd be interested to hear more about your finetuning approach. Do you have a plan for iterations or specific benchmarks in mind?

Please submit your responses via the form to streamline our discussion. Once we have the foundational details clarified, we can determine the next steps and see how best to leverage the Azure credits together.

Looking forward to exploring the possibilities.

Best regards, Louis

replied to their post 25 days ago

Hello @Siddartha10 ,

Thank you for reaching out! I'm excited to hear about your work and the potential for collaboration.

To help assess how best to support your project, could you please share a bit more detail? Specifically:

  • Project Overview: A brief description of your project and its objectives.
  • Data Preparedness: Whether your data is ready for immediate use and the nature of this data.
  • Expected Outcomes: The goals or deliverables you anticipate achieving with this additional compute power.

Feel free to submit your details via this Tally form (https://tally.so/r/w2xe0A) so we can proceed efficiently.

Looking forward to learning more about your project and potentially collaborating!

Best regards,
Louis

replied to their post 25 days ago

Hi @Pankaj8922 ,

Thank you for reaching out and sharing your project concept! For this collaboration, I'm specifically seeking projects that already have data prepared and ready for immediate use, as the Azure credits are limited and focused on applications that can be initiated without additional data generation steps.

If you have any projects with data fully prepared, feel free to submit details through the form here: https://tally.so/r/w2xe0A.

Best of luck with your synthetic dataset project!

posted an update 25 days ago
Introducing Lemone-router, a series of classification models designed to produce an optimal multi-agent system for different branches of tax law.

Trained on a base of 49k lines comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, refined through evol-instruction tuning and manual curation, together with authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:

label2id = {
    "Bénéfices professionnels": 0,
    "Contrôle et contentieux": 1,
    "Dispositifs transversaux": 2,
    "Fiscalité des entreprises": 3,
    "Patrimoine et enregistrement": 4,
    "Revenus particuliers": 5,
    "Revenus patrimoniaux": 6,
    "Taxes sur la consommation": 7
}

id2label = {
    0: "Bénéfices professionnels",
    1: "Contrôle et contentieux",
    2: "Dispositifs transversaux",
    3: "Fiscalité des entreprises",
    4: "Patrimoine et enregistrement",
    5: "Revenus particuliers",
    6: "Revenus patrimoniaux",
    7: "Taxes sur la consommation"
}

It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191

Link to the collection: louisbrulenaudet/lemone-router-671cce21d6410f3570514762
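
For inference, any checkpoint from the collection drops into a text-classification pipeline. A minimal sketch (the model id below is a placeholder; substitute a real checkpoint from the collection):

from transformers import pipeline

# Placeholder id: pick an actual model from the lemone-router collection.
router = pipeline("text-classification", model="louisbrulenaudet/lemone-router-l")

print(router("Quel est le taux de TVA applicable aux livres ?"))
# Returns a list like [{'label': '<one of the 8 categories>', 'score': ...}]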
reacted to albertvillanova's post with 👍 29 days ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery) and then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.

5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
reacted to Taylor658's post with 🔥 29 days ago
The Mystery Bot 🕵️‍♂️ saga I posted about earlier this week has been solved... 🤗

Cohere for AI has just announced its open-source Aya Expanse multilingual model. The initial release supports 23 languages, with more on the way soon. 🌌 🌍

You can also try Aya Expanse via SMS on your mobile phone using the global WhatsApp number or one of the initial set of country-specific numbers listed below. ⬇️

🌍 WhatsApp - +14313028498
Germany - (+49) 1771786365
USA - +18332746219
United Kingdom - (+44) 7418373332
Canada - (+1) 2044107115
Netherlands - (+31) 97006520757
Brazil - (+55) 11950110169
Portugal - (+351) 923249773
Italy - (+39) 3399950813
Poland - (+48) 459050281
reacted to malhajar's post with 🔥 29 days ago
🇫🇷 Official launch of the OpenLLM French Leaderboard: an open-source initiative to benchmark the evaluation of French-language LLMs

After a lot of effort and sweat with Alexandre Lavallee, we are thrilled to announce that the OpenLLMFrenchLeaderboard is live on Hugging Face (space url: le-leadboard/OpenLLMFrenchLeaderboard), the very first platform dedicated to evaluating large language models (LLMs) in French. 🇫🇷✨

This long-haul project is above all a labor of passion, but more than anything an absolute necessity. It is becoming urgent and vital to work toward more transparency in the strategic field of so-called multilingual LLMs. The first building block is therefore a systematic and systemic evaluation of current and future models.

Is your French AI model ready to stand out? Submit it in our space and see how you compare against the other models.

❓ How it works:
Submit your French LLM for evaluation, and we will test it on reference benchmarks specifically adapted for the French language; our benchmark suite includes:

- BBH-fr: complex reasoning
- IFEval-fr: instruction following
- GPQA-fr: advanced knowledge
- MUSR-fr: narrative reasoning
- MATH_LVL5-fr: mathematical abilities
- MMMLU-fr: multitask understanding

The process is still manual, but we are working on automating it with the support of the Hugging Face community.

@clem, shall we get ready for a space upgrade? 😍👀

This is not just about numbers; it is about building an AI that truly reflects our language, our culture, and our values. OpenLLMFrenchLeaderboard is our personal contribution to shaping the future of LLMs in France.
reacted to m-ric's post with 👀 about 1 month ago
โšก๏ธ ๐“๐ก๐ข๐ฌ ๐ฆ๐จ๐ง๐ญ๐ก'๐ฌ ๐ฆ๐จ๐ฌ๐ญ ๐ข๐ฆ๐ฉ๐จ๐ซ๐ญ๐š๐ง๐ญ ๐›๐ซ๐ž๐š๐ค๐ญ๐ก๐ซ๐จ๐ฎ๐ ๐ก: ๐ƒ๐ข๐Ÿ๐Ÿ๐ž๐ซ๐ž๐ง๐ญ๐ข๐š๐ฅ ๐“๐ซ๐š๐ง๐ฌ๐Ÿ๐จ๐ซ๐ฆ๐ž๐ซ ๐ฏ๐š๐ฌ๐ญ๐ฅ๐ฒ ๐ข๐ฆ๐ฉ๐ซ๐จ๐ฏ๐ž๐ฌ ๐š๐ญ๐ญ๐ž๐ง๐ญ๐ข๐จ๐ง โ‡’ ๐›๐ž๐ญ๐ญ๐ž๐ซ ๐ซ๐ž๐ญ๐ซ๐ข๐ž๐ฏ๐š๐ฅ ๐š๐ง๐ ๐Ÿ๐ž๐ฐ๐ž๐ซ ๐ก๐š๐ฅ๐ฅ๐ฎ๐œ๐ข๐ง๐š๐ญ๐ข๐จ๐ง๐ฌ!

Thought that self-attention could not be improved anymore?

Microsoft researchers have dropped a novel "differential attention" mechanism that amplifies focus on relevant context while canceling out noise. It sounds like a free lunch, but it does really seem to vastly improve LLM performance!

๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:

๐Ÿง  Differential attention computes the difference between two separate softmax attention maps, canceling out noise and promoting sparse attention patterns

๐Ÿ”ฅ DIFF Transformer outperforms standard Transformers while using 35-40% fewer parameters or training tokens

๐Ÿ“ Scales well to long contexts up to 64K tokens, leveraging increasing context length more effectively

๐Ÿ”Ž Dramatically improves key information retrieval, enhancing in-context learning, and possibly reducing risk of hallucinations ๐Ÿคฏ

๐Ÿ”ข Reduces activation outliers, potentially enabling lower-bit quantization without performance drop!

โš™๏ธ Can be directly implemented using existing FlashAttention kernels

This new architecture could lead to much more capable LLMs, with vastly improved strengths in long-context understanding and factual accuracy.

But they didn't release weights on the Hub: let's wait for the community to train the first open-weights DiffTransformer! 🚀

Read their paper 👉 Differential Transformer (2410.05258)
reacted to thomwolf's post with 🚀 about 1 month ago
Is it time for the open-source AI robots revolution 🚀?

With @haixuantao and @Leyo we've been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS - all Apache2) and orchestrated by dora-rs.

Links to find all the hardware/software we used in the demo:
- robot control framework โ€“ dora-rs: https://github.com/dora-rs/dora
- speech-to-text model โ€“ whisper: openai/whisper-base
- vision-text model โ€“ Idefics2: HuggingFaceM4/idefics2-8b-AWQ
- text-to-speech model โ€“ ParlerTTS mini: parler-tts/parler_tts_mini_v0.1
- robot: https://dji.com/robomaster-s1
- code gist: https://gist.github.com/haixuanTao/860e1740245dc2c8dd85b496150a9320
- Larger codebase: dora-rs/dora-idefics2
- laptop/pc: any with a recent GPU card (ours has an RTX 4090)
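
As a taste of the speech-to-text piece, whisper-base runs locally in a few lines (a sketch; the audio file is a placeholder and ffmpeg must be installed for decoding):

from transformers import pipeline

# Local ASR with the same checkpoint used in the demo.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")

print(asr("voice_command.wav")["text"])  # placeholder audio file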

Enjoy!
reacted to alielfilali01's post with 👀 about 1 month ago
I feel like this incredible resource hasn't gotten the attention it deserves in the community!

@clefourrier and, more generally, the Hugging Face evaluation team put together a fantastic guidebook covering a lot about EVALUATION, from basics to advanced tips.

link: https://github.com/huggingface/evaluation-guidebook

I haven't finished it yet, but I'm enjoying every piece of it so far. Huge thanks to @clefourrier and the team for this invaluable resource!