Sliding Llama Model Card

Model Description

Model Name: Sliding Llama

Base Model: Llama 3

Description: Sliding Llama is a variant of Llama 3 that lets you configure a sliding attention window per layer. This gives users control over the attention span and memory footprint of each layer.

Features

  • Sliding Window Configuration: Users can specify the size of the sliding window for each layer via the sliding_windows argument (a minimal sketch of the masking idea follows this list).
  • Flexibility: The model is highly adaptable, providing fine-grained control over how information flows through the network.
  • Enhanced Performance: By adjusting sliding window sizes, users can potentially improve performance on tasks that depend on a specific context range.
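
To make the sliding-window idea concrete, here is a minimal, illustrative sketch of how a per-layer window restricts a causal attention mask. This is not the code shipped with the repo; the function name sliding_causal_mask is made up for illustration, and the convention that a window of 0 means full attention mirrors the config example below.

import torch

def sliding_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    # Boolean mask of shape (seq_len, seq_len); True means the query may attend to that key.
    # window == 0 is treated as unrestricted causal attention, matching the config example below.
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    causal = j <= i                          # never attend to future tokens
    if window == 0:
        return causal
    return causal & (i - j < window)         # keep only the last `window` keys

print(sliding_causal_mask(6, 3).int())  # banded lower-triangular mask
print(sliding_causal_mask(6, 0).int())  # full lower-triangular (causal) mask

Layers with a window of 0 behave like standard Llama attention, while windowed layers keep attention compute and memory bounded regardless of sequence length.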

Usage

Installation

To run inference with Sliding Llama, you need a customized Hugging Face Transformers library. If you don't have it installed yet, you can install it with the following command:

pip install git+https://github.com/kyleliang919/transformers

This is important because cached inference needs a custom hybrid cache implementation: layers with different context (window) lengths require key/value caches of different sizes.
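
To illustrate what such a hybrid cache has to do, here is a toy sketch (not the fork's actual implementation; the class name ToyHybridKVCache is made up): sliding-window layers only ever need the most recent `window` key/value pairs, while full-attention layers (window 0 in this card's convention) keep the entire history.

import torch

class ToyHybridKVCache:
    def __init__(self, sliding_windows):
        self.sliding_windows = sliding_windows            # one window size per layer, 0 = full attention
        self.keys = [None] * len(sliding_windows)
        self.values = [None] * len(sliding_windows)

    def update(self, layer_idx, new_k, new_v):
        # Append the new keys/values for this layer, then trim to the layer's window.
        k, v = self.keys[layer_idx], self.values[layer_idx]
        k = new_k if k is None else torch.cat([k, new_k], dim=-2)
        v = new_v if v is None else torch.cat([v, new_v], dim=-2)
        window = self.sliding_windows[layer_idx]
        if window:                                        # 0 keeps everything (full attention)
            k, v = k[..., -window:, :], v[..., -window:, :]
        self.keys[layer_idx], self.values[layer_idx] = k, v
        return k, v

During generation, windowed layers therefore hold a bounded amount of state while full-attention layers grow with the sequence, which is why a single uniform cache does not fit both.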

For training, you can use the default transformers library as is.

Loading and Using the Model

The sliding_windows argument is a list in which each element specifies the window size for the corresponding layer (0 means full attention). You can load the Sliding Llama model with the code snippet below. In this example, every fourth layer uses full attention, and RoPE scaling with a factor of 4.0 interpolates the context from the original 8K of Llama 3 8B to 32K (8,192 × 4 = 32,768 tokens).

from transformers import AutoConfig, AutoTokenizer
from modeling_sliding_llama import LlamaForCausalLM

# Load the config and set per-layer sliding windows (0 = full attention for that layer)
config = AutoConfig.from_pretrained("kz919/sliding_llama3_8b_no_finetune", trust_remote_code=True)
config.sliding_windows = [512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0, 512, 512, 512, 0]
# RoPE scaling interpolates the original 8K context to 32K (factor 4.0)
config.rope_scaling = {
    "factor": 4.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
}

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("kz919/sliding_llama3_8b_no_finetune")
model = LlamaForCausalLM.from_pretrained("kz919/sliding_llama3_8b_no_finetune",
                                         config=config,
                                         device_map="auto",
                                         trust_remote_code=True)

# Run a cached generation call
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True)
print(tokenizer.decode(outputs[0]))

Note that the weights in this repo are not fine-tuned (as indicated by the name); they are exactly the Llama 3 weights, so you can swap in your own weights or add a LoRA adapter on top to adapt the model to longer contexts. To apply an existing LoRA adapter, run the following after loading the model as above:

from peft import PeftModel

# Load the adapter on top of the base model, then merge it into the base weights
model = PeftModel.from_pretrained(model, "path_to_your_adapter")
model = model.merge_and_unload()

You can then run inference and generation calls as usual.
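
If you want to train a new adapter for longer contexts rather than load an existing one, the sketch below shows one way to attach a LoRA with peft. It assumes `model` has already been loaded as in the snippet above; the hyperparameters and target modules are illustrative placeholders, not a recipe recommended by this repo.

from peft import LoraConfig, get_peft_model

# Illustrative LoRA setup; r, alpha, dropout, and target modules are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama attention projections
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)  # `model` loaded as in the snippet above
peft_model.print_trainable_parameters()
# Fine-tune on long-context data with your usual training loop, then save the adapter:
# peft_model.save_pretrained("path_to_your_adapter")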

Limitations and Future Work

  • Computational Overhead: Configuring large sliding windows across many layers increases attention compute and key/value-cache memory (see the back-of-envelope sketch after this list).
  • Optimal Configuration: Finding the optimal sliding window sizes for a specific task may require experimentation and tuning.
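
As a back-of-envelope illustration of how window size drives key/value-cache memory, the numbers below assume Llama 3 8B's grouped-query attention (8 KV heads, head dimension 128) with fp16 values; they are illustrative estimates, not measurements.

# Per-layer KV-cache size ≈ 2 (K and V) * cached_tokens * num_kv_heads * head_dim * bytes_per_value
def kv_cache_bytes_per_layer(cached_tokens, num_kv_heads=8, head_dim=128, bytes_per_value=2):
    return 2 * cached_tokens * num_kv_heads * head_dim * bytes_per_value

window = 512        # a sliding-window layer caches at most 512 tokens
full_ctx = 32768    # a full-attention layer caches the whole 32K context
print(kv_cache_bytes_per_layer(window) / 2**20)    # ≈ 2 MB per layer
print(kv_cache_bytes_per_layer(full_ctx) / 2**20)  # ≈ 128 MB per layer

Each full-attention layer's cache grows with the context, while a 512-token window stays at roughly 2 MB per layer regardless of sequence length; this is the trade-off to keep in mind when choosing window sizes.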

Acknowledgments

We thank the developers and researchers behind Llama 3 and the Hugging Face community for their contributions and support.

Citation

If you use this model in your research, please cite:

@misc{slidingllama2024,
  title={Sliding Llama},
  author={Kaizhao Liang},
  year={2024}
}

License

The Sliding Llama model is released under the Apache License 2.0.


For more details and updates, visit the Sliding Llama GitHub repository.
