---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 77.9
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 156.154
pretty_name: L0
---
# CLIP-ViT-B-32 Sparse Autoencoder (x64 vanilla, L1 = 0.0001)
![Explained Variance](https://img.shields.io/badge/Explained%20Variance-77.9%25-blue)
![Sparsity](https://img.shields.io/badge/L0-156.15-green)
### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 8
- Component: hook_resid_post
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
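The architecture above is a standard (vanilla) ReLU SAE. A minimal PyTorch sketch of these dimensions and the `encoder_transpose_decoder` initialization, with hypothetical class and parameter names (the released checkpoint may use a different layout):

```python
import torch
import torch.nn as nn


class VanillaSAE(nn.Module):
    # Illustrative sketch of the card's architecture:
    # 768 -> 49,152 (x64 expansion) -> 768, ReLU feature activations.
    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152 for CLIP-ViT-B-32's 768-dim residual stream
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        nn.init.kaiming_uniform_(self.W_dec)
        # encoder_transpose_decoder: encoder starts as the decoder's transpose
        self.W_enc = nn.Parameter(self.W_dec.detach().t().clone())
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats
```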
### Performance Metrics
- L1 Coefficient: 0.0001
- L0 Sparsity: 156.15 (mean active features per token)
- Explained Variance: 77.87%
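Both metrics can be computed from a batch of input activations and their reconstructions. A hedged sketch (the helper names are mine, not from the training code):

```python
import torch


def l0(feats: torch.Tensor) -> float:
    # L0 sparsity: mean count of non-zero SAE features per input
    return (feats != 0).float().sum(dim=-1).mean().item()


def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> float:
    # Fraction of the activations' variance captured by the reconstruction
    resid_var = (x - recon).pow(2).sum()
    total_var = (x - x.mean(dim=0)).pow(2).sum()
    return (1.0 - resid_var / total_var).item()
```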
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
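Putting the configuration together, a single training step would look roughly like the sketch below. This is an assumption-laden outline, not the actual training loop (which lives in the linked W&B run): the tied decode and the exact warmup shape are illustrative choices.

```python
import torch

d_in, l1_coeff = 768, 1e-4
enc = torch.nn.Linear(d_in, d_in * 64)  # stand-in for the SAE's encoder parameters
opt = torch.optim.Adam(enc.parameters(), lr=4e-4)

# Cosine annealing with a 200-step linear warmup (warmup shape assumed)
warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=0.01, total_iters=200)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=10_000)
sched = torch.optim.lr_scheduler.SequentialLR(opt, [warmup, cosine], milestones=[200])


def step(x: torch.Tensor) -> float:
    feats = torch.relu(enc(x))
    recon = feats @ enc.weight  # tied decode, purely illustrative
    # Reconstruction MSE plus L1 sparsity penalty weighted by the card's coefficient
    loss = (recon - x).pow(2).mean() + l1_coeff * feats.abs().sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    # Gradient clipping at 1.0, per the configuration above
    torch.nn.utils.clip_grad_norm_(enc.parameters(), max_norm=1.0)
    opt.step()
    sched.step()
    return loss.item()
```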
**Experiment Tracking:**
- Weights & Biases Run ID: aoa9e6a9
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/aoa9e6a9/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 8, hook_resid_post, Run ID: aoa9e6a9}
}
```