Splend1dchan committed
Commit c0f6ff8 · Parent: dc1b0b4
Update README.md

README.md CHANGED
# These model weights are identical to laion/CLIP-ViT-H-14-laion2B-s32B-b79K, but include only the pytorch_model component (without open_clip_pytorch_model.bin).

This distribution exists to support loading the model as a CLIPModel; I was unable to load the original model with AutoModel (feedback appreciated).
With this distribution, I was finally able to load the model with AutoModel, and to further support image classification tasks using my self-defined class CLIPViTForImageClassification, listed below.
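As a minimal, self-contained sketch of this AutoModel round trip (a tiny randomly initialized CLIPModel and a temporary directory stand in for this repo; the config sizes are illustrative, not the real ViT-H settings):

```python
import tempfile

from transformers import AutoModel, CLIPConfig, CLIPModel, CLIPTextConfig, CLIPVisionConfig

# Tiny illustrative configs so the example runs quickly without downloading weights.
text_cfg = CLIPTextConfig(hidden_size=32, intermediate_size=64,
                          num_hidden_layers=2, num_attention_heads=2)
vision_cfg = CLIPVisionConfig(hidden_size=32, intermediate_size=64,
                              num_hidden_layers=2, num_attention_heads=2,
                              image_size=32, patch_size=8)
config = CLIPConfig.from_text_vision_configs(text_cfg, vision_cfg, projection_dim=32)

model = CLIPModel(config)

# Save to a local directory, then load back through AutoModel — the same
# pattern as loading a local `git clone` of this repo.
with tempfile.TemporaryDirectory() as d:
    model.save_pretrained(d)
    reloaded = AutoModel.from_pretrained(d)

assert isinstance(reloaded, CLIPModel)
```

Because the saved config carries `model_type: "clip"`, AutoModel dispatches to CLIPModel when loading from the directory.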
However, one small issue remains that I cannot resolve: I can only load the model after `git clone`-ing this repo locally; loading it directly from the Hub still fails.
```python
from transformers.models.clip.modeling_clip import CLIPPreTrainedModel, CLIPConfig, CLIPVisionTransformer