How does the space know whether a model is fine-tuned or not?

#3
by patrickvonplaten - opened

I tried to evaluate microsoft/deberta-v3 on GLUE, but it didn't show up in the list. This makes sense I guess since the model is only pretrained and not fine-tuned. How does the space know which models are fine-tuned and which aren't?

Evaluation on the Hub org

Hey @patrickvonplaten , the list of compatible models is determined by two criteria:

  • Whether the pipeline_tag in the model card matches the selected task
  • Whether the selected dataset belongs to one of the datasets listed on the model card

So yes, you won't find fill-mask models in the list right now as we don't support this (yet) in the backend - do you see a good use case for evaluating pretrained models?

For reference, here's the filter I apply to the models: https://huggingface.co/spaces/autoevaluate/model-evaluator/blob/main/utils.py#L89
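
In case it helps, here's a rough sketch of what those two criteria boil down to using `huggingface_hub` (this isn't the exact code from `utils.py`; the task and dataset names are placeholders, and the result attribute is `modelId` in older library versions vs. `id` in newer ones):

```python
from huggingface_hub import HfApi

api = HfApi()

# Placeholder values standing in for the task and dataset selected in the UI.
task = "text-classification"
dataset = "glue"

# Both criteria can be expressed as Hub tag filters: the pipeline task is
# indexed as a tag, and datasets from the model card appear as "dataset:<name>".
models = api.list_models(filter=[task, f"dataset:{dataset}"])

compatible = [m.modelId for m in models]
print(compatible[:10])
```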
