---
license: other
---


## Usage

See also: a Colab notebook with an example usage of LongLLaMA.

### Requirements

```
pip install --upgrade pip
pip install transformers==4.30 sentencepiece accelerate
```

### Loading model

```
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("monuirctc/llama-7b-instruct-indo")
model = AutoModelForCausalLM.from_pretrained("monuirctc/llama-7b-instruct-indo",
                                             torch_dtype=torch.float32,
                                             trust_remote_code=True)
```

### Input handling and generation

LongLLaMA uses the standard Hugging Face interface. A long input given to the model is split into context windows and loaded into the memory cache.

```
prompt = "My name is Julien and I like to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model(input_ids=input_ids)
```
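The window-splitting described above can be sketched in plain Python. This is an illustration only, not the model's actual implementation: `split_into_windows` is a hypothetical helper, and 2048 is assumed as the window size (matching LLaMA's original context length).

```python
# Minimal sketch of the idea: a long token sequence is cut into
# fixed-size context windows; earlier windows populate the memory
# cache and the final window is processed as the current context.
# Illustration only, not LongLLaMA's actual code.

def split_into_windows(token_ids, window_size=2048):
    """Split a flat list of token ids into consecutive windows."""
    return [token_ids[i:i + window_size]
            for i in range(0, len(token_ids), window_size)]

token_ids = list(range(5000))          # stand-in for a tokenized long input
windows = split_into_windows(token_ids)

# Earlier windows would go to the memory cache; the last one is the
# current context the model attends over directly.
memory_windows, current_window = windows[:-1], windows[-1]
print(len(memory_windows), len(current_window))  # → 2 904
```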
During the model call, one can provide the parameter `last_context_length` (default 1024), which specifies the number of tokens left in the last context window. Tuning this parameter can improve generation, as the first layers do not have access to memory. See details in How LongLLaMA handles long inputs.
```
generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    num_beams=1,
    last_context_length=1792,
    do_sample=True,
    temperature=1.0,
)
print(tokenizer.decode(generation_output[0]))
```

### Additional configuration

LongLLaMA has several other parameters:

- `mem_layers` specifies the layers endowed with memory (should be either an empty list or the list of all memory layers specified in the description of the checkpoint).
- `mem_dtype` allows changing the type of the memory cache.
- `mem_attention_grouping` can trade off speed for reduced memory usage. When equal to `(4, 2048)`, the memory layers will process at most 4*2048 queries at once (4 heads and 2048 queries for each head).
```
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("monuirctc/llama-7b-instruct-indo")
model = AutoModelForCausalLM.from_pretrained(
    "monuirctc/llama-7b-instruct-indo", torch_dtype=torch.float32, 
    mem_layers=[], 
    mem_dtype='bfloat16',
    trust_remote_code=True,
    mem_attention_grouping=(4, 2048),
)
```
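The effect of `mem_attention_grouping` on the workload can be illustrated with a rough sketch of the batching arithmetic. This is an illustration under assumed numbers, not the library's code: with `(4, 2048)` a memory layer handles at most 4 × 2048 = 8192 (head, query) pairs per pass, so larger workloads are covered in several passes.

```python
# Rough sketch: with mem_attention_grouping=(4, 2048), a memory layer
# processes at most 4 * 2048 = 8192 (head, query) pairs at a time,
# trading speed for a smaller peak memory footprint.
# Illustration only, not the actual LongLLaMA implementation.

def num_passes(n_heads, n_queries, grouping=(4, 2048)):
    """How many passes are needed to cover all (head, query) work."""
    heads_per_group, queries_per_group = grouping
    max_per_pass = heads_per_group * queries_per_group   # 8192 here
    total = n_heads * n_queries
    return -(-total // max_per_pass)                     # ceiling division

# e.g. 32 attention heads attending over 8192 memory queries:
print(num_passes(32, 8192))  # → 32
```

Smaller groupings mean more passes (slower) but less memory held at once.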
### Drop-in use with LLaMA code

LongLLaMA checkpoints can also be used as a drop-in replacement for LLaMA checkpoints in the Hugging Face implementation of LLaMA, but in that case they will be limited to the original context length of 2048.
```
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

tokenizer = LlamaTokenizer.from_pretrained("monuirctc/llama-7b-instruct-indo")
model = LlamaForCausalLM.from_pretrained("monuirctc/llama-7b-instruct-indo", torch_dtype=torch.float32)
```