Note A Mistral model fine-tuned on the OpenOrca dataset
Note Another Mistral fine-tune with great results on TruthfulQA
Note A very performant model by Stability. With just 3B params, it achieves some great results
Note Check out this amazing blog post explaining this: https://huggingface.co/blog/tomaarsen/attention-sinks
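As a quick illustration, here is a minimal sketch of the drop-in usage that post describes. The attention_sinks package's AutoModelForCausalLM wrapper and the attention_sink_size / attention_sink_window_size keyword arguments are assumptions taken from the post, not verified here.

```python
# Minimal sketch (assumed API): attention_sinks is described as a drop-in
# replacement for transformers' AutoModelForCausalLM that keeps a few initial
# "sink" tokens plus a sliding window of recent tokens in the KV cache.
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # assumed import path

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attention_sink_size=4,            # assumed kwarg: number of sink tokens to keep
    attention_sink_window_size=1020,  # assumed kwarg: sliding window of recent tokens
)

inputs = tokenizer("Attention sinks keep long generations fluent because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```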
Note Run SDXL on TPU, with an in-depth technical explanation
Note A model trained on multimodal instruction-following data
Note Code models for the win! This is a 15B model that turns natural language into SQL
Note And this is the 7B version of the above
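A rough sketch of how either of these text-to-SQL models could be prompted with plain transformers; the checkpoint name below is a placeholder, not the actual model id from these notes.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: substitute the actual 15B (or 7B) text-to-SQL model id.
checkpoint = "your-org/text-to-sql-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Put the schema and the question in the prompt, then let the model complete the SQL.
prompt = (
    "-- Table: orders(id, customer_id, total, created_at)\n"
    "-- Question: What was the total revenue in 2023?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```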
Note A dataset of math questions for fine-tuning
Note Generate memes with IDEFICS, the multimodal model
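A small sketch of prompting IDEFICS for a meme caption with transformers. The instruct checkpoint, the interleaved image-and-text prompt format, and the image URL are assumptions based on the IDEFICS examples, not taken from this note.

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "HuggingFaceM4/idefics-9b-instruct"  # assumed instruct checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# IDEFICS prompts interleave text and images (URLs or PIL images).
prompts = [
    [
        "User: Write a short, funny meme caption for this image.",
        "https://example.com/meme-template.jpg",  # placeholder image URL
        "<end_of_utterance>",
        "\nAssistant:",
    ],
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```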