Hugging Face presents FineVideo 🔥! Unlocking the next generation of video understanding
🤯 3,400 hours of annotated Creative Commons videos with rich character descriptions, scene splits, mood, and content descriptions per scene, as well as QA pairs. 🔥 @mfarre processed over 2M YouTube-CC videos to make this incredibly powerful selection.
Sorry judge, my lawyer hallucinated? If you get an AI lawyer, you'd want it to be hallucination-free!
New @Stanford-@Yale research reveals surprising findings about leading AI legal research tools. Here's what you need to know:
>> Key Findings
The study tested LexisNexis (Lexis+ AI), Thomson Reuters (Westlaw AI & Ask Practical Law AI), and GPT-4, finding hallucination rates between 17% and 33% despite claims of being "hallucination-free".
>> Technical Deep Dive
The tools evaluated are built on a Retrieval-Augmented Generation (RAG) architecture, which operates in two crucial steps:
1. Retrieval System:
- Uses neural text embeddings to capture semantic meaning
- Employs both lexical and semantic search mechanisms
- Implements document filtering and extraction
- Retrieves relevant legal documents from vast databases
2. Generation Pipeline:
- Processes retrieved documents alongside original queries
- Synthesizes information from multiple legal sources
- Generates responses based on retrieved context
- Includes citation verification mechanisms
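The two-step flow above can be sketched end to end in a few lines. Everything here is illustrative: the three-document "database", the bag-of-words scoring standing in for neural text embeddings, and the `generate` stub standing in for the LLM call.

```python
from collections import Counter
import math

# Toy legal "database" standing in for a real document store.
DOCS = [
    "The statute of limitations for breach of contract is six years.",
    "Punitive damages require proof of malice or gross negligence.",
    "A hearsay statement is generally inadmissible unless an exception applies.",
]

def embed(text):
    # Bag-of-words vector; a production system would use neural embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Step 1: score every document against the query, keep the top-k.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query, context):
    # Step 2: a real system would prompt an LLM with the retrieved context;
    # here we only template the answer to show the data flow.
    return f"Q: {query}\nBased on: {' '.join(context)}"

query = "What is the limitations period for contract claims?"
print(generate(query, retrieve(query)))
```

The hallucination risk the study measures lives in step 2: even with the right documents retrieved, the generator can still misstate or miscite them.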
>> Why This Matters
This research exposes critical vulnerabilities in AI legal tools that lawyers increasingly rely on. It's essential for legal professionals to understand these limitations when incorporating AI into their practice.
reacted to prithivMLmods's post with ❤️🤗 1 day ago
Glif App's Remixes feature allows you to slap a logo onto anything, seamlessly integrating the input image (logo) into various contexts. The result is stunning remixes that blend the input logo with generated images (img2img logo mapping) for incredible outcomes.
Build datasets for AI on the Hugging Face Hub: 10x easier than ever!
Today, I'm excited to share our biggest feature since we joined Hugging Face.
Here's how it works:
1. Pick a dataset: upload your own or choose from 240K open datasets.
2. Paste the Hub dataset ID into Argilla and set up your labeling interface.
3. Share the URL with your team or the whole community!
And the best part? It's:
- No code: no Python needed
- Integrated: all within the Hub
- Scalable: from solo labeling to 100s of contributors
I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.
Let's make this sentence obsolete: "Everyone wants to do the model work, not the data work."
Import any dataset from the Hub and configure your labeling tasks without needing any code!
Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows. We'd love to hear your feedback and ideas, so don't be shy and reach out 🫶🏽
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator open-llm-leaderboard/comparator
Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵
1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!
2/ Compare Metric Results in the Results Tab
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, color-coded with a gradient to highlight performance differences: greener is better!
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.
3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions!
4/ Compare Predictions by Sample in the Details Tab
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.
5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.
Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
reacted to m-ric's post about 1 month ago
By far the coolest release of the day!
> The Open LLM Leaderboard, the most comprehensive suite for comparing open LLMs on many benchmarks, just released a comparator tool that lets you dig into the details of the differences between any models.
Here's me checking how the new Llama-3.1-Nemotron-70B that we've heard so much about compares to the original Llama-3.1-70B.
You can now build a custom text classifier without days of human labeling!
- LLMs work reasonably well as text classifiers, but they are expensive to run at scale and their performance drops in specialized domains.
- Purpose-built classifiers have low latency and can potentially run on CPU, but they require labeled training data.
Combine the best of both worlds: the automatic labeling capabilities of LLMs and the high-quality annotations from human experts to train and deploy a specialized model.
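A rough sketch of that combination. The toy texts, labels, and the `TinyNaiveBayes` class are all invented for illustration; in practice the weak labels would come from an LLM pass, with a human correcting a subset before training the small model.

```python
from collections import Counter, defaultdict
import math

# Weakly-labeled training data: in a real workflow these labels would be
# produced by an LLM and spot-checked by human annotators.
LABELED = [
    ("refund my order it arrived broken", "complaint"),
    ("love this product works great", "praise"),
    ("item damaged want my money back", "complaint"),
    ("fantastic quality very happy", "praise"),
]

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes: a stand-in for the small,
    CPU-friendly purpose-built classifier described above."""

    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        for text, label in pairs:
            self.class_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_prob(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            lp = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            for w in text.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score.
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.class_counts, key=log_prob)

clf = TinyNaiveBayes().fit(LABELED)
print(clf.predict("this is fantastic"))  # leans "praise"
```

Once trained, the small model serves predictions without any LLM in the request path, which is where the latency and cost savings come from.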
Big news! You can now build strong ML models without days of human labelling
You simply:
- Define your dataset, including annotation guidelines, labels, and fields.
- Optionally label some records manually.
- Use an LLM to auto-label your data with a human (you? your team?) in the loop!
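A minimal sketch of that human-in-the-loop flow. The `llm_label` stub and the 0.8 confidence threshold are invented for illustration; a real setup would call an actual LLM and route the low-confidence queue to human annotators for review.

```python
def llm_label(text):
    """Stub LLM labeler returning (label, confidence).
    A real implementation would call a hosted model."""
    if "error" in text.lower():
        return "bug-report", 0.95
    if "feature" in text.lower():
        return "feature-request", 0.9
    return "other", 0.4  # unsure -> low confidence

def auto_label(records, threshold=0.8):
    accepted, needs_review = [], []
    for text in records:
        label, conf = llm_label(text)
        # High-confidence predictions are accepted automatically;
        # everything else is queued for a human annotator.
        (accepted if conf >= threshold else needs_review).append((text, label))
    return accepted, needs_review

done, queue = auto_label([
    "App crashes with an error on startup",
    "Please add a dark-mode feature",
    "How do I export my data?",
])
print(f"{len(done)} auto-labeled, {len(queue)} for human review")
```

The human only sees the records the model is unsure about, which is what makes the approach so much faster than labeling everything by hand.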
Open-source AI creates healthy competition in a field where natural tendencies lead to extreme concentration of power. Imagine a world where only one or two companies could build software. This is the biggest risk and ethical challenge of them all IMO. Let's fight this!
Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, Enhanced Hugging Face Hub imports, and more!
🖼️ Image Field: Seamlessly work with multimodal datasets
🌙 Dark Mode: Reduce eye strain with our sleek new look
🤗 Enhanced Hugging Face Hub import with the SDK
🇪🇸 Spanish UI: Breaking language barriers
Plus more improvements to supercharge your model curation workflow!