Vlad Bogolin

vladbogo

AI & ML interests

LLMs, Computer Vision

Recent Activity

upvoted a paper about 23 hours ago
updated the AI Paper of the Day collection about 23 hours ago
upvoted a paper about 23 hours ago

vladbogo's activity

posted an update 8 months ago
SwapAnything is a new method that allows swapping any object in an image with personalized concepts given by a reference image.

Key points:
1️⃣ It uses pre-trained diffusion models to enable precise and high-fidelity object swapping in images.
2️⃣ Targeted variable swapping ensures perfect background preservation while swapping specific areas.
3️⃣ SwapAnything achieves good results in single-object, multi-object, partial-object, and cross-domain swapping tasks.
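
For intuition, here is a minimal sketch of the masked latent blending idea behind targeted variable swapping, assuming precomputed source and concept latents plus an object mask; it is an illustration of the idea, not the authors' implementation:

```python
import torch

def masked_latent_swap(source_latent, concept_latent, object_mask):
    """Blend personalized-concept latents into the source latents only inside
    the object mask, leaving the background variables untouched.
    Assumed shapes: (C, H, W) for latents, (1, H, W) for the mask."""
    return object_mask * concept_latent + (1 - object_mask) * source_latent

# Hypothetical usage with random tensors standing in for diffusion latents.
src = torch.randn(4, 64, 64)
concept = torch.randn(4, 64, 64)
mask = (torch.rand(1, 64, 64) > 0.5).float()
swapped = masked_latent_swap(src, concept, mask)
```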

Paper: SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing (2404.05717)
Project page: https://swap-anything.github.io

Congrats to the authors for their work!
reacted to mlabonne's post with 🔥 8 months ago
🔁 AutoMerger created the best 7B model on the Open LLM Leaderboard

By randomly combining top models from the Open LLM Leaderboard, AutoMerger created YamshadowExperiment28-7B. The model is three weeks old and has been at the top of the leaderboard for a week now. It was created through a simple SLERP merge of:

- automerger/YamShadow-7B (another top model created by AutoMerger)
- yam-peleg/Experiment28-7B (a top model from @yam-peleg )
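
For readers curious what a SLERP merge actually does, here is a minimal sketch of spherically interpolating two models' weights, assuming two state dicts with matching keys and shapes; real merges (e.g. with mergekit) also handle per-layer interpolation factors and tokenizer details:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    v0_n = v0_flat / (v0_flat.norm() + eps)
    v1_n = v1_flat / (v1_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(v0_n, v1_n), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        out = (1 - t) * v0_flat + t * v1_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * v0_flat + (torch.sin(t * omega) / so) * v1_flat
    return out.reshape(v0.shape).to(v0.dtype)

def slerp_merge(state_dict_a, state_dict_b, t=0.5):
    """Merge two models parameter-by-parameter; keys are assumed to match."""
    return {k: slerp(t, state_dict_a[k], state_dict_b[k]) for k in state_dict_a}
```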

1/ On the Open LLM Leaderboard, it managed to outperform the excellent M7-7b model, which has been the #1 7B model for a while now.

2/ On the YALL leaderboard, YamshadowExperiment28-7B is ranked as the 9th best-performing automerge (but note that the scores are very close to each other). Compared to others, it does not perform particularly well on AGIEval or Bigbench.

3/ Thanks to @sam-paech , I have scores on EQ-Bench, where it managed to outperform all of my previous models. It even surpasses recent models such as DBRX instruct, Qwen1.5 32B Chat, and Cohere's Command R+.

Surprisingly, it does not support ChatML or Mistral Instruct, unlike my other merges (which are part of its family tree). Alpaca works well 99% of the time, but the model can sometimes produce a lot of "INST" tokens for no reason.

In my experiments, YamshadowExperiment28-7B doesn't seem smarter than other successful merges like AlphaMonarch. On the contrary, I found several mathematical or reasoning problems where it fails.

Considering these results, it looks like it might overfit the Open LLM Leaderboard. I guess it's anything but surprising when you randomly merge 156 models.

🤗 Model: automerger/YamshadowExperiment28-7B
🔁 AutoMerger: mlabonne/AutoMerger
reacted to akhaliq's post with 🚀❤️ 8 months ago
No "Zero-Shot" Without Exponential Data

Pretraining Concept Frequency Determines Multimodal Model Performance

No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance (2404.04125)

Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation. However, it is unclear how meaningful the notion of "zero-shot" generalization is for such multimodal models, as it is not known to what extent their pretraining datasets encompass the downstream concepts targeted for during "zero-shot" evaluation. In this work, we ask: How is the performance of multimodal models on downstream concepts influenced by the frequency of these concepts in their pretraining datasets? We comprehensively investigate this question across 34 models and five standard pretraining datasets (CC-3M, CC-12M, YFCC-15M, LAION-400M, LAION-Aesthetics), generating over 300GB of data artifacts. We consistently find that, far from exhibiting "zero-shot" generalization, multimodal models require exponentially more data to achieve linear improvements in downstream "zero-shot" performance, following a sample inefficient log-linear scaling trend. This trend persists even when controlling for sample-level similarity between pretraining and downstream datasets, and testing on purely synthetic data distributions. Furthermore, upon benchmarking models on long-tailed data sampled based on our analysis, we demonstrate that multimodal models across the board perform poorly. We contribute this long-tail test set as the "Let it Wag!" benchmark to further research in this direction. Taken together, our study reveals an exponential need for training data which implies that the key to "zero-shot" generalization capabilities under large-scale training paradigms remains to be found.
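
As a toy illustration of the log-linear trend the paper reports, the snippet below fits accuracy against log concept frequency using made-up numbers (the arrays are placeholders, not the paper's data):

```python
import numpy as np

# Hypothetical per-concept pretraining frequencies and downstream accuracies.
freq = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
acc = np.array([0.12, 0.21, 0.33, 0.41, 0.52])

# Log-linear scaling: accuracy ~ a * log10(frequency) + b, i.e. linear gains
# require exponentially more examples of a concept in the pretraining data.
a, b = np.polyfit(np.log10(freq), acc, deg=1)
print(f"accuracy ≈ {a:.3f} * log10(freq) + {b:.3f}")
```
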
reacted to akhaliq's post with ❤️ 8 months ago
Advancing LLM Reasoning Generalists with Preference Trees

Advancing LLM Reasoning Generalists with Preference Trees (2404.02078)

We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 tests covering five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins more than 13.3%. The strong performance of Eurus can be primarily attributed to UltraInteract, our newly-curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks compared to their effectiveness in general conversations. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model.
reacted to MoritzLaurer's post with 🔥 8 months ago
🆕 Releasing a new series of 8 zeroshot classifiers: better performance, fully commercially usable thanks to synthetic data, up to 8192 tokens, run on any hardware.

Summary:
🤖 The zeroshot-v2.0-c series replaces commercially restrictive training data with synthetic data generated with mistralai/Mixtral-8x7B-Instruct-v0.1 (Apache 2.0). All models are released under the MIT license.
🦾 The best model performs 17 percentage points better across 28 tasks vs. facebook/bart-large-mnli (the most downloaded commercially-friendly baseline).
🌏 The series includes a multilingual variant fine-tuned from BAAI/bge-m3 for zeroshot classification in 100+ languages and with a context window of 8192 tokens.
🪶 The models are only 0.2-0.6B parameters, so they run on any hardware. The base-size models are more than 2x faster than bart-large-mnli while performing significantly better.
🤏 The models are not generative LLMs; they are efficient encoder-only models specialized in zeroshot classification through the universal NLI task (a minimal usage sketch follows this list).
🤑 For users where commercially restrictive training data is not an issue, I've also trained variants with even more human data for improved performance.
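
As referenced above, here is a minimal usage sketch with the transformers zero-shot-classification pipeline; the model ID below is illustrative, substitute any checkpoint from the collection linked further down in this post:

```python
from transformers import pipeline

# Illustrative model ID; swap in any checkpoint from the zeroshot-v2.0 collection.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0",
)

text = "The new GPU driver crashes whenever I launch the game."
labels = ["software bug report", "feature request", "billing question"]

result = classifier(text, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])
```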

Next steps:
✍️ I'll publish a blog post with more details soon
🔮 There are several improvements I'm planning for v2.1. Especially the multilingual model has room for improvement.

All models are available for download in this Hugging Face collection: MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f

These models are an extension of the approach explained in this paper, but with additional synthetic data: https://arxiv.org/abs/2312.17543
posted an update 8 months ago
Anthropic introduces "Many-shot Jailbreaking" (MSJ), a new attack on large language models! MSJ exploits long context windows to override safety constraints.

Key Points:
* Prompts LLMs with hundreds of examples of harmful behavior formatted as a dialogue
* Generates malicious examples using an uninhibited "helpful-only" model
* Effective at jailbreaking models like Claude 2.0, GPT-3.5, GPT-4
* Standard alignment techniques provide limited protection against long context attacks

Paper: https://www.anthropic.com/research/many-shot-jailbreaking
More details in my blog: https://huggingface.co/blog/vladbogo/many-shot-jailbreaking

Congrats to the authors for their work!
reacted to osanseviero's post with ❤️ 8 months ago
Diaries of Open Source. Part 13 🤗

🤏Two different BitNet 1.58-bit open-source replications
Original paper: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (2402.17764)
1bitllm experiment: https://hf.co/blog/joey00072/experiments-with-bitnet-1-5
NousResearch experiment NousResearch/OLMo-Bitnet-1B

🥳Tiny and large multimodal models great for embeddings
GitHub: https://github.com/unum-cloud/uform
Encoders: https://hf.co/collections/unum-cloud/multimodal-encoders-660553903617c5297eb16838
ONNX weights: https://hf.co/collections/unum-cloud/uform-vl-english-large-onnx-66055a57c182d846f3bc1949

📜 SMPLer-X: Expressive Human Pose and Shape Estimation
Project website: https://caizhongang.com/projects/SMPLer-X/
Demo: caizhongang/SMPLer-X
Paper: SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation (2309.17448)

🧙GeoWizard: 3D Geometry Estimation
Project website: https://fuxiao0719.github.io/projects/geowizard/
Demo: lemonaddie/geowizard

Misc models and datasets
- Dolphin-2.8-mistral-7b-v0.2 cognitivecomputations/dolphin-2.8-mistral-7b-v02
- Hermes-2-Pro-11B, a self-frankenmerge 11B variant mattshumer/Hermes-2-Pro-11B
- Large conversational dataset based on Usenet data in the Italian language mii-community/UsenetArchiveIT-conversations
reacted to thomwolf's post with 🔥 8 months ago
Little-known gem: the Open-source Cookbook

A collection of notebooks for building practical AI applications using open-source tools and models: https://lnkd.in/e6m6Jmwu

Doc: https://lnkd.in/e3FE6TUq

Currently contains 16 notebooks in English (and some in Chinese):
1. Using LLM-as-a-judge 🧑‍⚖️ for an automated and versatile evaluation
2. Create a legal preference dataset
3. Suggestions for Data Annotation with SetFit in Zero-shot Text Classification
4. Implementing semantic cache to improve a RAG system
5. Building A RAG Ebook “Librarian” Using LlamaIndex
6. Stable Diffusion Interpolation
7. Building A RAG System with Gemma, MongoDB and Open Source Models
8. Prompt Tuning with PEFT Library
9. Migrating from OpenAI to Open LLMs Using TGI’s Messages API
10. Automatic Embeddings with TEI through Inference Endpoints
11. Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain
12. Embedding multimodal data for similarity search using 🤗 transformers, 🤗 datasets and FAISS
13. Fine-tuning a Code LLM on Custom Code on a single GPU
14. RAG Evaluation Using Synthetic data and LLM-As-A-Judge
15. Advanced RAG on HuggingFace documentation using LangChain
16. Detecting Issues in a Text Dataset with Cleanlab
reacted to akhaliq's post with ❤️ 8 months ago
Jamba

A Hybrid Transformer-Mamba Language Model

Jamba: A Hybrid Transformer-Mamba Language Model (2403.19887)

We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
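
As a rough illustration of the interleaving idea (not the paper's actual configuration), the toy schedule below alternates mostly-Mamba blocks with periodic attention blocks and MoE feed-forward layers:

```python
def jamba_layer_schedule(n_layers: int = 32, attn_period: int = 8, moe_period: int = 2):
    """Toy layer schedule for a hybrid stack: mostly Mamba mixers, an attention
    (Transformer) mixer every `attn_period` layers, and an MoE feed-forward on
    every `moe_period`-th layer. The ratios here are illustrative only."""
    schedule = []
    for i in range(n_layers):
        mixer = "attention" if (i + 1) % attn_period == 0 else "mamba"
        ffn = "moe" if (i + 1) % moe_period == 0 else "dense-mlp"
        schedule.append((mixer, ffn))
    return schedule

for idx, (mixer, ffn) in enumerate(jamba_layer_schedule(8)):
    print(idx, mixer, ffn)
```
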
reacted to Symbol-LLM's post with 🚀 8 months ago
posted an update 8 months ago
Google DeepMind introduces Gecko, a new text embedding model! Gecko uses a two-step process that leverages synthetic data generation and reranking.

Key points:
* Uses an LLM to generate diverse synthetic queries and tasks from web passages
* Refines the data by retrieving candidate passages and relabeling positives/negatives using the same LLM (see the sketch after this list)
* Achieves very good results on the Massive Text Embedding Benchmark, where compact 256D Gecko outperforms 768D models.
* 768D Gecko achieves state-of-the-art performance, competing with far larger models.
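
A high-level sketch of the two-step recipe referenced in the list above, using hypothetical llm and retrieve helpers; it only illustrates the data flow, not the authors' pipeline:

```python
def build_gecko_style_pair(passage, corpus, llm, retrieve, top_k=20):
    """Two-step recipe, schematically. `llm` (prompt -> text) and
    `retrieve` (query, corpus, top_k -> list of passages) are hypothetical.
    Step 1: the LLM writes a query for a seed web passage.
    Step 2: neighbours are retrieved for that query and the same LLM relabels
    the best positive and a hard negative, which may differ from the seed."""
    query = llm(f"Write a search query that this passage answers:\n{passage}")
    candidates = retrieve(query, corpus, top_k=top_k)
    positive = llm(f"Pick the passage that best answers '{query}':\n{candidates}")
    negative = llm(f"Pick a hard negative passage for '{query}':\n{candidates}")
    return {"query": query, "positive": positive, "negative": negative}
```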

Paper: Gecko: Versatile Text Embeddings Distilled from Large Language Models (2403.20327)
More details in my blog: https://huggingface.co/blog/vladbogo/gecko

Congrats to the authors for their work!
posted an update 8 months ago
A new paper titled "Long-Form Factuality in Large Language Models" proposes a new approach to evaluate the long-form factuality of large language models using an AI agent! They introduce SAFE (Search-Augmented Factuality Evaluator) which leverages an LLM to break down responses into individual facts, query Google to verify each fact, and perform multi-step reasoning.

Key points:
* SAFE (Search-Augmented Factuality Evaluator) is an automated method that uses an LLM agent to evaluate factuality (see the sketch after this list)
* It also introduces LongFact, a 2,280-prompt set spanning 38 topics to test open-domain factual knowledge
* SAFE achieves 72% agreement with human annotators while being 20x cheaper. It also wins 76% of the disagreements in a small-scale experiment where a more thorough human procedure (researchers + full internet search) was used.
* Larger models like GPT-4, Claude Opus and Gemini Ultra tend to exhibit better long-form factuality.
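
A rough sketch of the SAFE loop referenced above, with hypothetical llm and web_search callables; it mirrors the described steps (split into facts, check relevance, verify against search results) rather than the authors' code:

```python
def safe_evaluate(response, question, llm, web_search):
    """Search-Augmented Factuality Evaluator, very roughly: split a long-form
    response into atomic facts, keep the ones relevant to the question, and ask
    an LLM to judge each fact against search results. `llm` (prompt -> text)
    and `web_search` (query -> text) are hypothetical callables."""
    facts = llm(f"List the individual factual claims in:\n{response}").splitlines()
    supported = not_supported = 0
    for fact in facts:
        if llm(f"Is '{fact}' relevant to '{question}'? yes/no").strip() != "yes":
            continue
        evidence = web_search(llm(f"Write a search query to verify: {fact}"))
        verdict = llm(f"Claim: {fact}\nEvidence: {evidence}\nSupported? yes/no")
        if verdict.strip() == "yes":
            supported += 1
        else:
            not_supported += 1
    return {"supported": supported, "not_supported": not_supported}
```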

Paper: Long-form factuality in large language models (2403.18802)
Code and data: https://github.com/google-deepmind/long-form-factuality

Congrats to the authors for their work!
reacted to osanseviero's post with 🔥 8 months ago
Diaries of Open Source. Part 11 🚀

🚀Databricks releases DBRX, potentially the best open-access model! A 132B Mixture of Experts with 36B active params, trained on 12 trillion tokens
Blog: https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
Base and instruct models: databricks/dbrx-6601c0852a0cdd3c59f71962
Demo: databricks/dbrx-instruct

🤏1-bit and 2-bit quantization exploration using HQQ+
Blog post: https://mobiusml.github.io/1bit_blog/
Models: https://hf.co/collections/mobiuslabsgmbh/llama2-7b-hqq-6604257a96fc8b9c4e13e0fe
GitHub: https://github.com/mobiusml/hqq

📚Cosmopedia: a large-scale synthetic dataset for pre-training - it includes 25 billion tokens and 30 million files
Dataset: HuggingFaceTB/cosmopedia
Blog: https://hf.co/blog/cosmopedia

⭐Mini-Gemini: multi-modal VLMs, from 2B to 34B
Models: https://hf.co/collections/YanweiLi/mini-gemini-6603c50b9b43d044171d0854
Paper: Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models (2403.18814)
GitHub: https://github.com/dvlab-research/MiniGemini

🔥VILA - On Pre-training for VLMs
Models: Efficient-Large-Model/vila-on-pre-training-for-visual-language-models-65d8022a3a52cd9bcd62698e
Paper: VILA: On Pre-training for Visual Language Models (2312.07533)

Misc
👀 FeatUp: a framework for image features at any resolution: mhamilton723/FeatUp FeatUp: A Model-Agnostic Framework for Features at Any Resolution (2403.10516)
🍞ColBERTus Maximus, a ColBERT-style embedding model mixedbread-ai/mxbai-colbert-large-v1
🖌️Semantic Palette, a new drawing paradigm ironjr/SemanticPalette
🧑‍⚕️HistoGPT, a vision model that generates accurate pathology reports marr-peng-lab/histogpt https://www.medrxiv.org/content/10.1101/2024.03.15.24304211v1
posted an update 8 months ago
A new paper introduces Visual CoT, a new approach that enhances multi-modal large language models with visual chain-of-thought reasoning capabilities. This allows language models to dynamically identify and focus on specific regions within images that are most relevant for answering questions, mimicking human-like efficient visual reasoning.

Key points:
* Introduces the 373k Visual CoT dataset with bounding box annotations highlighting essential image regions
* Proposes a multi-turn pipeline for focusing on relevant visual inputs (a rough sketch follows this list)
* Achieves strong results on multi-modal benchmarks
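
A rough sketch of the multi-turn pipeline referenced above, with a hypothetical vlm callable: first ask the model for the relevant bounding box, then answer using a crop of that region alongside the full image:

```python
from PIL import Image

def visual_cot_answer(image: Image.Image, question: str, vlm):
    """Two-turn visual chain of thought, schematically. `vlm` is a hypothetical
    multimodal model call taking (images, prompt) and returning text."""
    # Turn 1: ask which region of the image matters for this question.
    box_text = vlm([image], "Return the bounding box (x1, y1, x2, y2) of the "
                            f"region needed to answer: {question}")
    x1, y1, x2, y2 = [int(v) for v in box_text.strip("() ").split(",")]
    crop = image.crop((x1, y1, x2, y2))
    # Turn 2: answer using both the full image and the zoomed-in crop.
    return vlm([image, crop], question)
```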

Paper: Visual CoT: Unleashing Chain-of-Thought Reasoning in Multi-Modal Language Models (2403.16999)
Code, data and other resources: https://github.com/deepcs233/Visual-CoT

Congrats to the authors for their work!
posted an update 8 months ago
xAI releases the weights for Grok-1. Apparently it's a 314B MoE with 25% of the weights active on a given token.

Blog: https://x.ai/blog/grok-os
Code: https://github.com/xai-org/grok
Model: xai-org/grok-1
Weights: magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
reacted to DmitryRyumin's post with 🔥 8 months ago
🚀💃🏻🌟 New Research Alert - CVPR 2024! 🌟🕺 🚀
📄 Title: Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling 🌟🚀

📝 Description: Animatable Gaussians - a novel method for creating lifelike human avatars from RGB videos, utilizing 2D CNNs and 3D Gaussian splatting to capture pose-dependent garment details and dynamic appearances with high fidelity.

👥 Authors: Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu

📅 Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA 🇺🇸

🔗 Paper: Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling (2311.16096)

🌐 Github Page: https://animatable-gaussians.github.io
📁 Repository: https://github.com/lizhe00/AnimatableGaussians

📺 Video: https://www.youtube.com/watch?v=kOmZxD0HxZI

📚 More Papers: more cutting-edge research presented at other conferences can be found in the DmitryRyumin/NewEraAI-Papers collection curated by @DmitryRyumin

🚀 Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

🔍 Keywords: #AnimatableGaussians #HumanAvatars #3DGaussianSplatting #CVPR2024 #DeepLearning #Animation #Innovation
posted an update 8 months ago
"Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts" is a new framework designed to animate specific regions within an image through user inputs.

Key points:
* Enables precise animation of selected image regions with just a user click and a concise motion description.
* Achieves promising results for generating localized animations.

Paper: Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts (2403.08268)

Congrats to the authors for their work!
posted an update 8 months ago
Synth^2 is a new approach that leverages large language models and text-to-image generators to create synthetic image-caption data for boosting visual-language model performance.

Key Points:
* Overcomes data limitations by generating high-quality synthetic image-caption pairs, reducing reliance on costly human annotations.
* Achieves competitive results on image captioning tasks using 40x less paired data than state-of-the-art methods.
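
Schematically, the synthetic data loop might look like the sketch below, with hypothetical llm_caption_generator and text_to_image callables; note that the paper works with synthetic captions and image embeddings (per its title) rather than raw pixels:

```python
def make_synthetic_pairs(seed_topics, llm_caption_generator, text_to_image, n_per_topic=4):
    """Generate synthetic (image, caption) training pairs: an LLM writes diverse
    captions per topic, a text-to-image model renders each caption, and the
    resulting pairs augment VLM pretraining data. Both callables are
    hypothetical placeholders."""
    pairs = []
    for topic in seed_topics:
        captions = llm_caption_generator(topic, n=n_per_topic)
        for caption in captions:
            image = text_to_image(caption)  # or an image embedding, as in the paper
            pairs.append({"image": image, "caption": caption})
    return pairs
```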

Paper: Synth²: Boosting Visual-Language Models with Synthetic Captions and Image Embeddings (2403.07750)

Congrats to the authors for their work!
posted an update 9 months ago
A recent paper titled "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect" proposes a simple and effective approach to pruning Large Language Models (LLMs) by removing redundant layers.

Key points:
* Discovers significant redundancy across layers in LLMs, with some layers playing a negligible role for the final performance.
* Defines a new metric called Block Influence (BI) to quantify the importance of each layer in an LLM (a toy version is sketched after this list).
* Removes layers with low BI scores, achieving up to 25% reduction in parameters and computation while maintaining 92% of the LLM's performance.
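
A toy version of the Block Influence metric mentioned above, assuming you have captured a layer's input and output hidden states; this follows the common "1 minus cosine similarity" formulation:

```python
import torch

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> torch.Tensor:
    """Block Influence as 1 minus the average cosine similarity between a
    layer's input and output hidden states. A low BI means the layer barely
    changes its input and is a candidate for removal.
    Assumed shapes: (tokens, hidden_dim)."""
    cos = torch.nn.functional.cosine_similarity(hidden_in, hidden_out, dim=-1)
    return 1.0 - cos.mean()

# Toy usage with random hidden states standing in for a real forward pass.
x_in, x_out = torch.randn(128, 4096), torch.randn(128, 4096)
print(float(block_influence(x_in, x_out)))
```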

Congrats to the authors for their work!

Paper: ShortGPT: Layers in Large Language Models are More Redundant Than You Expect (2403.03853)