1. Gemma Scope

Gemma Scope is a comprehensive, open suite of sparse autoencoders (SAEs) for Gemma 2 9B and 2B. Sparse autoencoders act as a kind of "microscope" that can help us break a model's internal activations down into their underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.

See our landing page for details on the whole suite. This repository contains one specific set of SAEs:

2. What Is gemma-scope-2b-pt-transcoders?

  • gemma-scope-: See Section 1 above.
  • 2b-pt-: These SAEs were trained on the Gemma v2 2B base (pretrained) model.
  • transcoders: These SAEs are transcoders: rather than reconstructing their own input, they were trained to reconstruct the output of each MLP sublayer from that sublayer's input (see https://arxiv.org/abs/2406.11944 and the sketch below this list).

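To make the transcoder setup concrete, below is a minimal PyTorch sketch of the computation: sparse features are computed from the input to an MLP sublayer and then decoded into an approximation of that sublayer's output. It assumes a JumpReLU-style architecture as used elsewhere in Gemma Scope; the class, parameter names, and shapes here are illustrative, not the exact released configuration.

```python
import torch
import torch.nn as nn


class JumpReLUTranscoder(nn.Module):
    """Illustrative transcoder: MLP-sublayer input -> approximate MLP-sublayer output."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.w_enc = nn.Parameter(torch.zeros(d_model, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.w_dec = nn.Parameter(torch.zeros(d_sae, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        # One JumpReLU threshold per feature; pre-activations below it are zeroed.
        self.threshold = nn.Parameter(torch.zeros(d_sae))

    def encode(self, mlp_input: torch.Tensor) -> torch.Tensor:
        # Sparse feature activations, computed from the MLP sublayer's *input*.
        pre_acts = mlp_input @ self.w_enc + self.b_enc
        return pre_acts * (pre_acts > self.threshold)

    def decode(self, feature_acts: torch.Tensor) -> torch.Tensor:
        # Reconstruction of the MLP sublayer's *output*.
        return feature_acts @ self.w_dec + self.b_dec

    def forward(self, mlp_input: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(mlp_input))
```

The only difference from an ordinary SAE is the training target: the reconstruction is compared against the MLP sublayer's actual output rather than against the input that was encoded.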
3. Point of Contact

Point of contact: Arthur Conmy

Contact by email (run the Python snippet below to recover the address):

''.join(list('moc.elgoog@ymnoc')[::-1])

HuggingFace account: https://huggingface.co/ArthurConmyGDM
