Alexander Visheratin
visheratin's activity
🔥Sakana releases Evolutionary Model Merge
Blog post: https://sakana.ai/evolutionary-model-merge/
Paper: Evolutionary Optimization of Model Merging Recipes (2403.13187)
Models and demo: https://hf.co/SakanaAI
🍞MixedBread releases a new SoTA sentence embedding model
Announcement: https://www.mixedbread.ai/blog/mxbai-embed-large-v1
Model: mixedbread-ai/mxbai-embed-large-v1
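For quick experimentation, here is a minimal retrieval sketch with sentence-transformers. The query prefix is the retrieval prompt recommended on the model card (double-check it there); the texts are placeholders.

```python
# Minimal retrieval sketch for mxbai-embed-large-v1 (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# Queries use a retrieval prompt; plain documents are embedded as-is.
query = "Represent this sentence for searching relevant passages: what is a Matryoshka embedding?"
docs = [
    "Matryoshka representation learning trains embeddings whose prefixes are themselves good embeddings.",
    "Mamba is a state-space model architecture for sequence modeling.",
]

query_emb = model.encode(query)
doc_embs = model.encode(docs)
print(cos_sim(query_emb, doc_embs))  # cosine similarity of the query to each document
```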
🎥VideoMamba, a Mamba-based model for video understanding
Blog: https://hf.co/blog/vladbogo/video-mamba
Demo: OpenGVLab/VideoMamba
Model: OpenGVLab/VideoMamba
🔍 MathVerse, a visual math benchmark for multimodal LLMs
Project page: https://mathverse-cuhk.github.io/
Dataset: AI4Math/MathVerse
Paper: MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? (2403.14624)
🧠GraphWiz, a family of instruct-tuned LLMs to solve graph problems
Repos: https://hf.co/GraphWiz
Paper: GraphWiz: An Instruction-Following Language Model for Graph Problems (2402.16029)
🪆NLLB-SigLIP-MRL: a combination of NLLB and SigLIP trained with Matryoshka representation learning
Model: visheratin/nllb-siglip-mrl-large
Tweet: https://twitter.com/visheratin/status/1766643219909984734?s=46
🧍HDM and ProciGen: Template-free reconstruction of human-object interactions
Project page: https://virtualhumans.mpi-inf.mpg.de/procigen-hdm/
Demo: xiexh20/HDM-interaction-recon
Models: xiexh20/HDM-models
🌎Models and data around the world
EagleX 7B, a multilingual RNN-based model https://hf.co/spaces/recursal/EagleX-7B-1.7T-Gradio-Demo
Tamil LLM mervinpraison/tamil-large-language-model-7b-v1.0
🌏Cohere and Cohere For AI release Command-R, a multilingual, RAG-optimized 35B model that supports tool use!
Model: CohereForAI/c4ai-command-r-v01
Blog post: https://txt.cohere.com/command-r/
🧑🍳StarChat2: a powerful conversational code model
Try it out: HuggingFaceH4/starchat2-playground
Repos: HuggingFaceH4/starchat2-15b-65f068417b330fafad751fce
Training code: https://github.com/huggingface/alignment-handbook/tree/main/recipes/starchat2-15b
🐲Yi-9B: trained on 3 trillion tokens, this English-Chinese LLM performs very well and comes with a detailed technical report!
Model: 01-ai/Yi-9B
Paper: Yi: Open Foundation Models by 01.AI (2403.04652)
🐋DeepSeek-VL, 1.3B and 7B VLMs
Paper: DeepSeek-VL: Towards Real-World Vision-Language Understanding (2403.05525)
Large model: deepseek-ai/deepseek-vl-7b-chat
✍️Writer releases OmniACT, a dataset and benchmark for multimodal autonomous agents on desktop and web.
Dataset: Writer/omniact
Paper: OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web (2402.17553)
🍎Apple releases MobileCLIP: fast image-text models! https://github.com/apple/ml-mobileclip
🦙💪LlamaGym - fine-tune LLM agents with RL in just a few lines of code! https://github.com/KhoomeiK/LlamaGym
🖼️New multimodal leaderboard ConTextual https://huggingface.co/blog/leaderboard-contextual
🎁 Design2Code: a benchmark for evaluating multimodal LLMs on automating front-end development.
Dataset: SALT-NLP/Design2Code
Paper: Design2Code: How Far Are We From Automating Front-End Engineering? (2403.03163)
Project: https://salt-nlp.github.io/Design2Code/
You can find the previous part at https://huggingface.co/posts/osanseviero/633758457910104
It uses the same vision encoder, so I expect that nothing changes.
The large model is finally SoTA for both image and text multilingual retrieval!
The models are available on the hub:
- visheratin/nllb-siglip-mrl-base
- visheratin/nllb-siglip-mrl-large
💻 OpenCodeInterpreter, a family of very powerful code generation models
Models: m-a-p/opencodeinterpreter-65d312f6f88da990a64da456
Paper: OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement (2402.14658)
Demo: m-a-p/OpenCodeInterpreter_demo
🔷🔶Zephyr 7B Gemma, Gemma fine-tuned with the Zephyr recipe
Model: HuggingFaceH4/zephyr-7b-gemma-v0.1
Demo: HuggingFaceH4/zephyr-7b-gemma-chat
GH Repo: https://github.com/huggingface/alignment-handbook
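If you want to try it locally, here is a minimal sketch using the standard transformers chat-template API; the prompt and sampling settings are placeholders, and the model card has the canonical snippet.

```python
# Minimal chat sketch for Zephyr 7B Gemma using the text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain the Zephyr recipe in two sentences."}]
# Render the conversation with the tokenizer's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```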
🪆The MixedBread folks released a 2D Matryoshka text embedding model, which means you can dynamically shrink both the embedding size and the number of layers used (see the sketch below)
Model: mixedbread-ai/mxbai-embed-2d-large-v1
Release blog post: https://www.mixedbread.ai/blog/mxbai-embed-2d-large-v1
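The embedding-size half of this is plain Matryoshka truncation: keep a prefix of the vector and re-normalize. A minimal sketch, assuming the model loads through sentence-transformers like its v1 sibling (512 is an arbitrary target size; shrinking the layer count needs the model-side 2D support described in the blog post):

```python
# Truncate Matryoshka embeddings to a smaller dimension after encoding.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mixedbread-ai/mxbai-embed-2d-large-v1")

emb = model.encode(["2D Matryoshka embeddings can be shortened after the fact."])
small = emb[:, :512]  # keep only the first 512 dimensions
small = small / np.linalg.norm(small, axis=1, keepdims=True)  # re-normalize for cosine similarity
print(small.shape)
```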
🐋Microsoft released Orca-Math, a dataset of 200K grade-school math word problems
Dataset: microsoft/orca-math-word-problems-200k
🥷IBM quietly released Merlinite, a cool model trained on Mixtral-generated synthetic data using the novel LAB method: ibm/merlinite-7b
🌚 Moondream2, a small vision-language model that runs on-device!
Model: vikhyatk/moondream2
Demo: vikhyatk/moondream2
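A minimal sketch of querying it, assuming the custom encode_image/answer_question API that the repo ships via trust_remote_code (check the model card for the exact interface):

```python
# Ask Moondream2 a question about a local image.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repo ships its own modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("vikhyatk/moondream2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2")

image = Image.open("photo.jpg")  # placeholder path
enc = model.encode_image(image)  # custom method from the repo code
print(model.answer_question(enc, "What is in this image?", tokenizer))  # custom method as well
```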
🏙️CityDreamer: 3D City Generation
Demo: hzxie/city-dreamer
Repo: https://github.com/hzxie/city-dreamer
Model: hzxie/city-dreamer
🌏ML in all languages
Sailor, a family of models for South-East Asian languages: sail/sailor-language-models-65e19a749f978976f1959825
Samvaad dataset, which includes 140k QA pairs in Hindi, Bengali, Marathi, Tamil, Telugu, Oriya, Punjabi, and Gujarati: GenVRadmin/Samvaad-Mixed-Language-2
You can see the previous part at https://huggingface.co/posts/osanseviero/674644082063278
I used 8xA100 80GB. With LoRA and smaller batch size, it should be possible to train on smaller GPUs, but it is still very resource-intensive.
You are right. The method requires multiple passes through the vision encoder, which increases memory usage. This is not a big problem during inference, but it makes training harder because the activations of every pass have to be stored for the backward pass. At the moment, I don't have a solution to make it more efficient. But this is an ongoing project, so maybe I will find one =)
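To make the memory point concrete, here is a rough illustration (not the actual MC-LLaVA code) of multi-crop encoding: every crop is a separate forward pass through the vision tower, so training has to keep the activations of all passes for backprop.

```python
# Illustration: multi-crop image encoding with a SigLIP vision tower.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipVisionModel

name = "google/siglip-so400m-patch14-384"
encoder = SiglipVisionModel.from_pretrained(name)
processor = AutoImageProcessor.from_pretrained(name)

def encode_multicrop(image: Image.Image, grid: int = 2) -> torch.Tensor:
    w, h = image.size
    crops = [image]  # keep a global view alongside the crops
    for i in range(grid):
        for j in range(grid):
            crops.append(image.crop((j * w // grid, i * h // grid,
                                     (j + 1) * w // grid, (i + 1) * h // grid)))
    feats = []
    for crop in crops:  # one encoder pass per crop -> (grid^2 + 1)x the activation memory in training
        pixels = processor(images=crop, return_tensors="pt").pixel_values
        feats.append(encoder(pixel_values=pixels).last_hidden_state)
    return torch.cat(feats, dim=1)  # concatenate patch tokens from all crops
```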
There are links to existing papers in the blog post if you want to dive into the field.
I used mainly the LLaVA training codebase with some changes to support multi-crop. I'll be working on the next post about fine-tuning MC-LLaVA on a task-specific dataset and will open all the code.
Check it out, and let me know what you think!
Other notable updates:
- I use SigLIP from Transformers, so you don't need to install additional libraries.
- The model now supports auto classes, so you can create the model and processor with only two lines (see the sketch below).
- Performance increased by 10%+ across all benchmarks.
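Concretely, loading should now look roughly like this (a sketch based on the auto-class support above; the model card has the exact snippet):

```python
from transformers import AutoModel, AutoProcessor

# The repo ships custom modeling code, hence trust_remote_code=True.
model = AutoModel.from_pretrained("visheratin/MC-LLaVA-3b", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("visheratin/MC-LLaVA-3b", trust_remote_code=True)
```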
The work is far from over, but it feels like good progress.
The model on the hub: visheratin/MC-LLaVA-3b
You can try the model here: visheratin/mc-llava-3b