Gemma Scope Release
A comprehensive, open suite of sparse autoencoders for Gemma 2 2B and 9B.
Gemma Scope is a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse Autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
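Concretely, an SAE encodes an activation vector into a much wider, mostly-zero feature vector and then reconstructs the activation from those features. The following is a minimal NumPy sketch of that shape-level idea only, with hypothetical random weights and a plain ReLU; the actual Gemma Scope SAEs use JumpReLU activations and trained parameters.

```python
import numpy as np

# Toy sketch of the decomposition an SAE performs. This is NOT the trained
# Gemma Scope architecture; the random weights here only illustrate the shapes.
rng = np.random.default_rng(0)

d_model, d_sae = 8, 32  # activation width -> much wider feature dictionary

W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = rng.normal(size=d_sae) * 0.1
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def encode(x):
    """Map an activation vector to non-negative, mostly-zero feature activations."""
    return np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU keeps a sparse subset active

def decode(f):
    """Reconstruct the activation as a weighted sum of dictionary directions."""
    return f @ W_dec + b_dec

x = rng.normal(size=d_model)  # stand-in for one model activation vector
f = encode(x)                 # feature activations, shape (d_sae,)
x_hat = decode(f)             # reconstruction, shape (d_model,)
```

In a trained SAE the active entries of `f` tend to correspond to interpretable concepts, which is what makes the "microscope" analogy apt.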
See our landing page for details on the whole suite. This is a specific set of SAEs:
gemma-scope-2b-pt-att

The name breaks down as:

- `gemma-scope-`: see the landing page linked above.
- `2b-pt-`: these SAEs were trained on the Gemma v2 2B base model.
- `att`: these SAEs were trained on the attention layer outputs, before the final linear projection.

To load one of these SAEs:

```python
from sae_lens import SAE  # pip install sae-lens

sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-2b-pt-att-canonical",
    sae_id="layer_0/width_16k/canonical",
)
```
See https://github.com/jbloomAus/SAELens for details on this library.
Point of contact: Arthur Conmy
Contact by email (evaluate in Python): `''.join(list('moc.elgoog@ymnoc')[::-1])`
HuggingFace account: https://huggingface.co/ArthurConmyGDM