Allow returning attention weights

#1 opened by orby-yanan
No description provided.
orby-yanan changed pull request status to open

Hi, could you please open this on https://github.com/mosaicml/llm-foundry? We upstream changes to these models from there, which lets us propagate the update to all MPT models easily.

Cannot merge
This branch has merge conflicts in the following files:
  • blocks.py
  • modeling_mpt.py
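
The PR itself carries no description, but given its title and the conflicting files (blocks.py, modeling_mpt.py), this kind of change typically threads an opt-in flag through the attention forward pass so callers can retrieve the softmax attention probabilities. Below is a minimal sketch of that general technique; the function name, the `needs_weights` flag, and the tensor shapes are illustrative assumptions, not the actual diff:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value, needs_weights=False):
    """Hypothetical helper, not the PR's code.

    query/key/value: (batch, heads, seq_len, head_dim) tensors.
    """
    d_k = query.size(-1)
    # Raw attention scores, scaled by sqrt(head_dim).
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5
    attn_weights = F.softmax(scores, dim=-1)
    out = attn_weights @ value
    # Returning the weights only on request keeps the default
    # behavior (and memory footprint) unchanged for existing callers.
    return (out, attn_weights) if needs_weights else (out, None)

q = k = v = torch.randn(1, 4, 8, 16)
out, weights = scaled_dot_product_attention(q, k, v, needs_weights=True)
print(out.shape, weights.shape)  # torch.Size([1, 4, 8, 16]) torch.Size([1, 4, 8, 8])
```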
