Quant pls?

#4
by Yhyu13 - opened

Hi,

@TheBloke @LoneStriker Would you like to quant this model? It ranks #1 on the Open LLM Leaderboard so far.
Thanks!

I'll put it in the queue. Hopefully it will quant properly with exl2; no guarantees for the various franken-MoEs.

exl2 quants will appear here (a couple already completed): https://huggingface.co/models?search=LoneStriker/Mixtral_34Bx2_MoE_60B

Wow, I have to say, this model is very, very good. The only real negative is that it seems to run inference slower than LLaMA-2 70B models. Definitely one of the top models in my manual testing.
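
If anyone wants to sanity-check the speed difference themselves, a rough tokens-per-second measurement is enough. This is just an illustrative sketch with transformers; the model ID, prompt, and generation settings are placeholders, not my actual setup:

```python
# Rough tokens/sec check; model ID, prompt, and settings are placeholders.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/Mixtral_34Bx2_MoE_60B"  # placeholder: swap in whichever checkpoint/quant you run

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain mixture-of-experts routing in three short paragraphs."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.time()
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.time() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} tokens in {elapsed:.1f}s -> {new_tokens / elapsed:.1f} tok/s")
```

Run the same script against a LLaMA-2 70B checkpoint on the same hardware to get a comparable number.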

May I ask which questions you used?

I generally don't post my exact analysis questions, but my manual tests are along these lines:

  • Ask the model to generate an explanation of a technical subject, a story/poem/song/something creative, etc., while being specific about how I want the output structured and what information to include
  • Iteratively ask it to modify the provided output (expand certain sections, explain the importance of certain ideas, etc.)
  • Ask the model to summarize the information and provide it in various formats (JSON, etc.)
  • Ask the model to extend the information and provide new information beyond what was given.

I'm looking at how well the model follows my directions and how coherent the answers are. Good models follow all directions very specifically and answer every question. No matter what the leaderboard shows, it's very consistent that the genuinely good models first and foremost follow instructions extremely well; the great ones will often infer my intent from the questions/instructions.
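
Scripted, the loop looks roughly like this. It's a minimal illustrative sketch with transformers; the model ID, the prompts, and the chat-template call are assumptions, not my actual harness:

```python
# Sketch of the iterative instruction-following test; prompts and model ID are illustrative.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/Mixtral_34Bx2_MoE_60B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def chat(messages, max_new_tokens=512):
    """Run one turn and append the reply so the next instruction builds on the prior output."""
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    reply = tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = []
steps = [
    # 1. Constrained generation: explain a topic with a required structure.
    "Explain KV caching in exactly three sections titled Background, Mechanism, Trade-offs.",
    # 2. Iterative modification: expand one section, keep the rest unchanged.
    "Expand only the Trade-offs section with two concrete examples; leave the other sections as they are.",
    # 3. Reformatting: summarize into a fixed machine-readable format.
    "Summarize the whole answer as JSON with keys 'background', 'mechanism', 'tradeoffs'. Output only JSON.",
]

for step in steps:
    messages.append({"role": "user", "content": step})
    reply = chat(messages)
    print(reply, "\n" + "-" * 60)

# Crude check on the last step: did the model actually follow the format instruction?
try:
    json.loads(reply)
    print("JSON step: parsed OK")
except json.JSONDecodeError:
    print("JSON step: did not follow the format instruction")
```

The rest of the judging (coherence, whether an expansion touched only the requested section) is still done by eye.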

@LoneStriker Got it, thank you!
