
CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


Training Details

  • Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
  • Layer: 8
  • Component: hook_mlp_out
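
The hook name follows TransformerLens-style conventions: hook_mlp_out is the output of block 8's MLP, captured before it is added back to the residual stream. A minimal PyTorch sketch of capturing that activation with a forward hook (the `mlp` module here is a hypothetical stand-in for the real ViT block, not the actual Prisma API):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a ViT-B/32 MLP sublayer (d_model=768, d_mlp=3072).
mlp = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))

captured = {}

def grab_mlp_out(module, inputs, output):
    # Store the MLP output ("hook_mlp_out") for SAE training.
    captured["mlp_out"] = output.detach()

handle = mlp.register_forward_hook(grab_mlp_out)
tokens = torch.randn(1, 50, 768)  # one sequence at the card's context size of 50
mlp(tokens)
handle.remove()
```

After the forward pass, `captured["mlp_out"]` holds a `(1, 50, 768)` tensor of the activations the SAE is trained on.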

Model Architecture

  • Input Dimension: 768
  • SAE Dimension: 49,152
  • Expansion Factor: x64 (vanilla architecture)
  • Activation Function: ReLU
  • Initialization: encoder_transpose_decoder
  • Context Size: 50 tokens
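
Under these hyperparameters, the vanilla architecture can be sketched as below (NumPy; this assumes "encoder_transpose_decoder" means the decoder weights start as the transpose of the encoder weights — the exact initialization in the training code may differ):

```python
import numpy as np

class VanillaSAE:
    """Minimal sketch of a vanilla (ReLU) sparse autoencoder.

    Card defaults: d_in=768, expansion x64 -> d_sae=49,152.
    """

    def __init__(self, d_in=768, expansion=64, seed=0):
        rng = np.random.default_rng(seed)
        d_sae = d_in * expansion
        self.W_enc = rng.normal(0.0, d_in ** -0.5, (d_in, d_sae))
        self.b_enc = np.zeros(d_sae)
        # Assumed "encoder_transpose_decoder" init: decoder = encoder transpose.
        self.W_dec = self.W_enc.T.copy()
        self.b_dec = np.zeros(d_in)

    def __call__(self, x):
        # ReLU encoder over the centered input, then linear decoder.
        acts = np.maximum(0.0, (x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```

The returned `acts` are the sparse latent activations; `recon` is the reconstruction whose error (plus the L1 penalty on `acts`) forms the training loss.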

Performance Metrics

  • L1 Coefficient: 1e-05
  • L0 Sparsity: 1077.5402
  • Explained Variance: 0.9789 (97.89%)
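
These metrics follow from the usual definitions (assumed here: L0 is the mean number of nonzero SAE latents per token, and explained variance is 1 − Var(residual)/Var(input)):

```python
import numpy as np

def l0_sparsity(acts):
    # Mean number of active (nonzero) SAE latents per token.
    return float((acts != 0).sum(axis=-1).mean())

def explained_variance(x, recon):
    # Fraction of activation variance captured by the reconstruction:
    # 1 - Var(x - recon) / Var(x).
    residual = x - recon
    return float(1.0 - residual.var() / x.var())
```

For this run, an average of roughly 1078 of the 49,152 latents fire per token, and the reconstruction captures 97.89% of the activation variance.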

Training Configuration

  • Learning Rate: 0.0004
  • LR Scheduler: Cosine Annealing with Warmup (200 steps)
  • Epochs: 10
  • Gradient Clipping: 1.0
  • Device: NVIDIA Quadro RTX 8000
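
The learning-rate schedule above can be sketched as a simple step-to-LR function (assuming linear warmup over the 200 warmup steps, then cosine annealing to zero; the training code's scheduler may differ in detail):

```python
import math

def lr_at_step(step, total_steps, base_lr=4e-4, warmup_steps=200):
    # Linear warmup for the first `warmup_steps` steps...
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    # ...then cosine annealing from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, the LR ramps up to 4e-4 by step 200 and decays smoothly toward zero by the final step.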

Experiment Tracking: Run ID nd1oa29p

Citation

@misc{2024josephsparseautoencoders,
    title={Sparse Autoencoders for CLIP-ViT-B-32},
    author={Joseph, Sonia},
    year={2024},
    publisher={Prisma-Multimodal},
    url={https://huggingface.co/Prisma-Multimodal},
    note={Layer 8, hook_mlp_out, Run ID: nd1oa29p}
}