Third-party evaluation of this so-called "LLM".

#7
by Enigrand - opened

Another follow-up from Artificial Analysis:

Reflection 70B update: a quick note on the timeline and the questions that remain open from our perspective.

Timeline:

  • We tested the initial Reflection 70B release and saw worse performance than Llama 3.1 70B.

  • We were given access to a private API which we tested and saw impressive performance but not to the level of the initial claims. As this testing was performed on a private API, we were not able to independently verify exactly what we were testing.

  • Since then, there have been additional HF releases, which some providers have hosted. The latest version appears to be: huggingface.co/mattshumer/re…. When benchmarking ref_70_e3, we are seeing significantly worse results than we saw via the private API.

Outstanding questions:

  • It is not clear to us why a version would be published that is not the version we tested via Reflection’s private API.

  • It is not clear to us why the weights of the version we tested have not been released yet.

As soon as those weights are released on Hugging Face, we plan to re-test them and compare against our evaluation of the private endpoint.

P.S. Just for the record, t****.ai stackunseen hyped up this "LLM". If you believe the hype, just trust their results.

Why are you posting articles instead of your own inference results for a local model?

@nisten, why don't you post your evaluation results here so everyone can see them clearly?

Or, if you want to claim their results are wrong, show your evidence here or in my post collecting third-party results.

