# mpt-7b-storywriter: sharded
<a href="https://colab.research.google.com/gist/pszemraj/a979cdcc02edb916661c5dd97cf2294e/mpt-storywriter-sharded-inference.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This is a version of the [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) model, sharded into 2 GB chunks for low-RAM loading (e.g., Colab). The weights are stored in `bfloat16`, so in theory you can run this on CPU, though it may take forever.
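A minimal loading sketch, assuming the `transformers` library is installed; `model_id` is a placeholder you should replace with this repo's actual id, and the heavy imports live inside the function so nothing is downloaded until you call it:

```python
def load_storywriter(model_id: str, device: str = "cpu"):
    """Load a sharded mpt-7b-storywriter checkpoint.

    NOTE: `model_id` is a placeholder -- pass this repo's actual Hub id.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # weights are stored in bfloat16
        trust_remote_code=True,      # MPT ships custom modeling code
        low_cpu_mem_usage=True,      # materialize one 2 GB shard at a time
    ).to(device)
    return tokenizer, model
```

The point of the 2 GB sharding is that, with `low_cpu_mem_usage=True`, `transformers` can load the checkpoint shard by shard instead of holding the whole thing in RAM at once.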
Please refer to the original repo linked above for details on usage and implementation. This model was downloaded from the original repo under the Apache-2.0 license and is redistributed under the same license.