sunzeyeah committed on
Commit
8b6bc47
0 Parent(s):
README.md ADDED
@@ -0,0 +1,59 @@
1
+ ---
2
+ language:
3
+ - zh
4
+ - en
5
+ tags:
6
+ - glm
7
+ - chatglm
8
+ - chatgpt
9
+ ---
10
+
11
+ Link to github: [here](https://github.com/sunzeyeah/RLHF)
12
+
13
+ # ChatGLM-6B
14
+
15
+ 本仓库由[THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b) fork而来,原仓库实现了PyTorch版本的ChatGLM模型,该模型有60亿参数量,模型权重文件以FP16格式存储。
16
+
17
+ 本仓库在原始代码的基础上进行了部分调整,以支持ChatGPT训练pipeline,具体实现可参考:[sunzeyeah/RLHF](https://github.com/sunzeyeah/RLHF).
18
+
19
+ This repository is forked from [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b), which contains a PyTorch implementation of the ChatGLM model with 6 billion parameters and pretrained weights stored in FP16 precision.
20
+
21
+ It differs slightly from the original ChatGLM implementation in order to support the ChatGPT training pipeline in this GitHub repo: [sunzeyeah/RLHF](https://github.com/sunzeyeah/RLHF).
22
+
23
+
24
+ ## 介绍
25
+ ChatGLM-6B 是一个开源的、支持中英双语问答的对话语言模型,基于 [General Language Model (GLM)](https://github.com/THUDM/GLM) 架构,具有 62 亿参数。结合模型量化技术,用户可以在消费级的显卡上进行本地部署(INT4 量化级别下最低只需 6GB 显存)。ChatGLM-6B 使用了和 [ChatGLM](https://chatglm.cn) 相同的技术,针对中文问答和对话进行了优化。经过约 1T 标识符的中英双语训练,辅以监督微调、反馈自助、人类反馈强化学习等技术的加持,62 亿参数的 ChatGLM-6B 已经能生成相当符合人类偏好的回答。
26
+
27
+ ChatGLM-6B is an open bilingual language model based on the [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained on about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement learning with human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in line with human preferences.
28
+
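For the low-memory deployment mentioned above, a minimal sketch of loading the model with INT4 quantization is shown below. It assumes the `quantize()` method defined in this repo's `modeling_chatglm.py` (and the `cpm_kernels` dependency); exact memory usage depends on your environment.

```python
# Hedged sketch: load the FP16 weights, quantize to INT4, then move to GPU.
# Assumes a CUDA device with roughly 6GB of free memory.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True)
model = AutoModel.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True).half()
model = model.quantize(4)   # 4-bit weight quantization (requires cpm_kernels)
model = model.cuda().eval()

response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```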
29
+ ## 软件依赖
30
+
31
+ ```shell
32
+ pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
33
+ ```
34
+
35
+ ## 代码调用
36
+
37
+ 可以通过如下代码调用 ChatGLM-6B 模型来生成对话:
38
+
39
+ ```ipython
40
+ >>> from transformers import AutoTokenizer, AutoModel
41
+ >>> tokenizer = AutoTokenizer.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True)
42
+ >>> model = AutoModel.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True).half().cuda()
43
+ >>> response, history = model.chat(tokenizer, "你好", history=[])
44
+ >>> print(response)
45
+ 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
46
+ >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
47
+ >>> print(response)
48
+ 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
49
+
50
+ 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
51
+ 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
52
+ 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
53
+ 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
54
+ 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
55
+ 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
56
+
57
+ 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
58
+ ```
59
+
config.json ADDED
@@ -0,0 +1,26 @@
1
+ {
2
+ "_name_or_path": "THUDM/chatglm-6b",
3
+ "architectures": [
4
+ "ChatGLMModel"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_chatglm.ChatGLMConfig",
8
+ "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
9
+ "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
10
+ },
11
+ "bos_token_id": 150004,
12
+ "eos_token_id": 150005,
13
+ "pad_token_id": 20003,
14
+ "hidden_size": 4096,
15
+ "inner_hidden_size": 16384,
16
+ "layernorm_epsilon": 1e-05,
17
+ "max_sequence_length": 2048,
18
+ "model_type": "chatglm",
19
+ "num_attention_heads": 32,
20
+ "num_layers": 28,
21
+ "position_encoding_2d": true,
22
+ "torch_dtype": "float16",
23
+ "transformers_version": "4.23.1",
24
+ "use_cache": true,
25
+ "vocab_size": 150528
26
+ }
configuration_chatglm.py ADDED
@@ -0,0 +1,99 @@
1
+ """ ChatGLM model configuration """
2
+
3
+ from transformers.configuration_utils import PretrainedConfig
4
+ from transformers.utils import logging
5
+
6
+ logger = logging.get_logger(__name__)
7
+
8
+
9
+ class ChatGLMConfig(PretrainedConfig):
10
+ r"""
11
+ This is the configuration class to store the configuration of a [`~ChatGLMModel`].
12
+ It is used to instantiate a ChatGLM model according to the specified arguments, defining the model
13
+ architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
14
+ the ChatGLM-6B [THUDM/ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b) architecture.
15
+
16
+ Configuration objects inherit from [`PretrainedConfig`] and can be used
17
+ to control the model outputs. Read the documentation from [`PretrainedConfig`]
18
+ for more information.
19
+
20
+
21
+ Args:
22
+ vocab_size (`int`, *optional*, defaults to 150528):
23
+ Vocabulary size of the ChatGLM-6B model. Defines the number of different tokens that can be represented by the
24
+ `inputs_ids` passed when calling [`~ChatGLMModel`] or
25
+ [`~TFChatGLMModel`].
26
+ hidden_size (`int`, *optional*, defaults to 4096):
27
+ Dimension of the encoder layers and the pooler layer.
28
+ num_hidden_layers (`int`, *optional*, defaults to 28):
29
+ Number of hidden layers in the Transformer encoder.
30
+ num_attention_heads (`int`, *optional*, defaults to 32):
31
+ Number of attention heads for each attention layer in the Transformer encoder.
32
+ inner_hidden_size (`int`, *optional*, defaults to 16384):
33
+ Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
34
+ max_sequence_length (`int`, *optional*, defaults to 2048):
35
+ The maximum sequence length that this model might ever be used with.
36
+ Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
37
+ layernorm_epsilon (`float`, *optional*, defaults to 1e-5):
38
+ The epsilon used by the layer normalization layers.
39
+ use_cache (`bool`, *optional*, defaults to `False`):
40
+ Whether the model should return the last key/values attentions (not used by all models).
41
+ Example:
42
+
43
+ ```python
44
+ >>> from configuration_chatglm import ChatGLMConfig
45
+ >>> from modeling_chatglm import ChatGLMModel
46
+
47
+ >>> # Initializing a ChatGLM-6B THUDM/ChatGLM-6B style configuration
48
+ >>> configuration = ChatGLMConfig()
49
+
50
+ >>> # Initializing a model from the THUDM/ChatGLM-6B style configuration
51
+ >>> model = ChatGLMModel(configuration)
52
+
53
+ >>> # Accessing the model configuration
54
+ >>> configuration = model.config
55
+ ```
56
+ """
57
+ model_type = "chatglm"
58
+
59
+ def __init__(
60
+ self,
61
+ vocab_size=150528,
62
+ hidden_size=4096,
63
+ num_layers=28,
64
+ num_attention_heads=32,
65
+ layernorm_epsilon=1e-5,
66
+ use_cache=False,
67
+ bos_token_id=150004,
68
+ eos_token_id=150005,
69
+ pad_token_id=0,
70
+ max_sequence_length=2048,
71
+ inner_hidden_size=16384,
72
+ position_encoding_2d=True,
73
+ quantization_bit=0,
74
+ pre_seq_len=None,
75
+ prefix_projection=False,
76
+ **kwargs
77
+ ):
78
+ self.num_layers = num_layers
79
+ self.vocab_size = vocab_size
80
+ self.hidden_size = hidden_size
81
+ self.num_attention_heads = num_attention_heads
82
+ self.max_sequence_length = max_sequence_length
83
+ self.layernorm_epsilon = layernorm_epsilon
84
+ self.inner_hidden_size = inner_hidden_size
85
+ self.use_cache = use_cache
86
+ self.bos_token_id = bos_token_id
87
+ self.eos_token_id = eos_token_id
88
+ self.pad_token_id = pad_token_id
89
+ self.position_encoding_2d = position_encoding_2d
90
+ self.quantization_bit = quantization_bit
91
+ self.pre_seq_len = pre_seq_len
92
+ self.prefix_projection = prefix_projection
93
+
94
+ super().__init__(
95
+ pad_token_id=pad_token_id,
96
+ bos_token_id=bos_token_id,
97
+ eos_token_id=eos_token_id,
98
+ **kwargs
99
+ )
ice_text.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99871e0c85db81ad7af1028854fd091cd5778c8414ae9d94bbbc10d02c831c21
3
+ size 2699926
modeling_chatglm.py ADDED
@@ -0,0 +1,1383 @@
1
+ """ PyTorch ChatGLM model. """
2
+
3
+ import math
4
+ import copy
5
+ import os
6
+ import warnings
7
+ import re
8
+ import sys
9
+
10
+ import torch
11
+ import torch.utils.checkpoint
12
+ import torch.nn.functional as F
13
+ from torch import nn
14
+ from torch.nn import CrossEntropyLoss, LayerNorm
15
+ from torch.nn.utils import skip_init
16
+ from typing import Optional, Tuple, Union, List, Callable
17
+
18
+ from transformers.utils import (
19
+ add_code_sample_docstrings,
20
+ add_start_docstrings,
21
+ add_start_docstrings_to_model_forward,
22
+ )
23
+ from transformers.modeling_outputs import (
24
+ BaseModelOutputWithPast,
25
+ CausalLMOutputWithPast,
26
+ BaseModelOutputWithPastAndCrossAttentions,
27
+ )
28
+ from transformers.modeling_utils import PreTrainedModel
29
+ from transformers.utils import logging
30
+ from transformers.generation.logits_process import LogitsProcessor
31
+ from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig
32
+
33
+ from .configuration_chatglm import ChatGLMConfig
34
+
35
+ # flags required to enable jit fusion kernels
36
+
37
+ if sys.platform != 'darwin':
38
+ torch._C._jit_set_profiling_mode(False)
39
+ torch._C._jit_set_profiling_executor(False)
40
+ torch._C._jit_override_can_fuse_on_cpu(True)
41
+ torch._C._jit_override_can_fuse_on_gpu(True)
42
+
43
+ logger = logging.get_logger(__name__)
44
+
45
+ _CHECKPOINT_FOR_DOC = "THUDM/ChatGLM-6B"
46
+ _CONFIG_FOR_DOC = "ChatGLM6BConfig"
47
+
48
+ CHATGLM_6B_PRETRAINED_MODEL_ARCHIVE_LIST = [
49
+ "THUDM/chatglm-6b",
50
+ # See all ChatGLM-6B models at https://huggingface.co/models?filter=chatglm
51
+ ]
52
+
53
+
54
+ class InvalidScoreLogitsProcessor(LogitsProcessor):
55
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
56
+ if torch.isnan(scores).any() or torch.isinf(scores).any():
57
+ scores.zero_()
58
+ scores[..., 20005] = 5e4
59
+ return scores
60
+
61
+
62
+ def load_tf_weights_in_chatglm_6b(model, config, tf_checkpoint_path):
63
+ """Load tf checkpoints in a pytorch model."""
64
+ try:
65
+ import re
66
+
67
+ import numpy as np
68
+ import tensorflow as tf
69
+ except ImportError:
70
+ logger.error(
71
+ "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
72
+ "https://www.tensorflow.org/install/ for installation instructions."
73
+ )
74
+ raise
75
+ tf_path = os.path.abspath(tf_checkpoint_path)
76
+ logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
77
+ # Load weights from TF model
78
+ init_vars = tf.train.list_variables(tf_path)
79
+ names = []
80
+ arrays = []
81
+ for name, shape in init_vars:
82
+ logger.info(f"Loading TF weight {name} with shape {shape}")
83
+ array = tf.train.load_variable(tf_path, name)
84
+ names.append(name)
85
+ arrays.append(array)
86
+
87
+ for name, array in zip(names, arrays):
88
+ name = name.split("/")
89
+ # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculate m and v,
90
+ # which are not required for using pretrained model
91
+ if any(
92
+ n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
93
+ for n in name
94
+ ):
95
+ logger.info(f"Skipping {'/'.join(name)}")
96
+ continue
97
+ pointer = model
98
+ for m_name in name:
99
+ if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
100
+ scope_names = re.split(r"_(\d+)", m_name)
101
+ else:
102
+ scope_names = [m_name]
103
+ if scope_names[0] == "kernel" or scope_names[0] == "gamma":
104
+ pointer = getattr(pointer, "weight")
105
+ elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
106
+ pointer = getattr(pointer, "bias")
107
+ elif scope_names[0] == "output_weights":
108
+ pointer = getattr(pointer, "weight")
109
+ elif scope_names[0] == "squad":
110
+ pointer = getattr(pointer, "classifier")
111
+ else:
112
+ try:
113
+ pointer = getattr(pointer, scope_names[0])
114
+ except AttributeError:
115
+ logger.info(f"Skipping {'/'.join(name)}")
116
+ continue
117
+ if len(scope_names) >= 2:
118
+ num = int(scope_names[1])
119
+ pointer = pointer[num]
120
+ if m_name[-11:] == "_embeddings":
121
+ pointer = getattr(pointer, "weight")
122
+ elif m_name == "kernel":
123
+ array = np.transpose(array)
124
+ try:
125
+ assert (
126
+ pointer.shape == array.shape
127
+ ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
128
+ except AssertionError as e:
129
+ e.args += (pointer.shape, array.shape)
130
+ raise
131
+ logger.info(f"Initialize PyTorch weight {name}")
132
+ pointer.data = torch.from_numpy(array)
133
+ return model
134
+
135
+
136
+ class PrefixEncoder(torch.nn.Module):
137
+ """
138
+ The torch.nn model to encode the prefix
139
+ Input shape: (batch-size, prefix-length)
140
+ Output shape: (batch-size, prefix-length, 2*layers*hidden)
141
+ """
142
+
143
+ def __init__(self, config):
144
+ super().__init__()
145
+ self.prefix_projection = config.prefix_projection
146
+ if self.prefix_projection:
147
+ # Use a two-layer MLP to encode the prefix
148
+ self.embedding = torch.nn.Embedding(config.pre_seq_len, config.hidden_size)
149
+ self.trans = torch.nn.Sequential(
150
+ torch.nn.Linear(config.hidden_size, config.hidden_size),
151
+ torch.nn.Tanh(),
152
+ torch.nn.Linear(config.hidden_size, config.num_layers * config.hidden_size * 2)
153
+ )
154
+ else:
155
+ self.embedding = torch.nn.Embedding(config.pre_seq_len, config.num_layers * config.hidden_size * 2)
156
+
157
+ def forward(self, prefix: torch.Tensor):
158
+ if self.prefix_projection:
159
+ prefix_tokens = self.embedding(prefix)
160
+ past_key_values = self.trans(prefix_tokens)
161
+ else:
162
+ past_key_values = self.embedding(prefix)
163
+ return past_key_values
164
+
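The shapes documented in the `PrefixEncoder` docstring can be checked with a small standalone sketch. The tiny config below is hypothetical (not the 6B defaults) so the example runs quickly on CPU.

```python
# Hedged sketch: verify that PrefixEncoder outputs (batch, prefix_len, 2 * layers * hidden).
import torch
from types import SimpleNamespace

cfg = SimpleNamespace(prefix_projection=False, pre_seq_len=4, num_layers=2, hidden_size=8)
encoder = PrefixEncoder(cfg)

prefix = torch.arange(cfg.pre_seq_len).unsqueeze(0)   # (batch=1, prefix_len=4)
past_key_values = encoder(prefix)
print(past_key_values.shape)                          # torch.Size([1, 4, 32]) == (1, 4, 2*2*8)
```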
165
+
166
+ @torch.jit.script
167
+ def gelu_impl(x):
168
+ """OpenAI's gelu implementation."""
169
+ return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x *
170
+ (1.0 + 0.044715 * x * x)))
171
+
172
+
173
+ def gelu(x):
174
+ return gelu_impl(x)
175
+
176
+
177
+ class RotaryEmbedding(torch.nn.Module):
178
+ def __init__(self, dim, base=10000, precision=torch.half, learnable=False):
179
+ super().__init__()
180
+ inv_freq = 1. / (base ** (torch.arange(0, dim, 2).float() / dim))
181
+ inv_freq = inv_freq.half()
182
+ self.learnable = learnable
183
+ if learnable:
184
+ self.inv_freq = torch.nn.Parameter(inv_freq)
185
+ self.max_seq_len_cached = None
186
+ else:
187
+ self.register_buffer('inv_freq', inv_freq)
188
+ self.max_seq_len_cached = None
189
+ self.cos_cached = None
190
+ self.sin_cached = None
191
+ self.precision = precision
192
+
193
+ def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys,
194
+ error_msgs):
195
+ pass
196
+
197
+ def forward(self, x, seq_dim=1, seq_len=None):
198
+ if seq_len is None:
199
+ seq_len = x.shape[seq_dim]
200
+ if self.max_seq_len_cached is None or (seq_len > self.max_seq_len_cached):
201
+ self.max_seq_len_cached = None if self.learnable else seq_len
202
+ t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
203
+ freqs = torch.einsum('i,j->ij', t, self.inv_freq)
204
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
205
+ emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
206
+ if self.precision == torch.bfloat16:
207
+ emb = emb.float()
208
+
209
+ # [sx, 1 (b * np), hn]
210
+ cos_cached = emb.cos()[:, None, :]
211
+ sin_cached = emb.sin()[:, None, :]
212
+ if self.precision == torch.bfloat16:
213
+ cos_cached = cos_cached.bfloat16()
214
+ sin_cached = sin_cached.bfloat16()
215
+ if self.learnable:
216
+ return cos_cached, sin_cached
217
+ self.cos_cached, self.sin_cached = cos_cached, sin_cached
218
+ return self.cos_cached[:seq_len, ...], self.sin_cached[:seq_len, ...]
219
+
220
+ def _apply(self, fn):
221
+ if self.cos_cached is not None:
222
+ self.cos_cached = fn(self.cos_cached)
223
+ if self.sin_cached is not None:
224
+ self.sin_cached = fn(self.sin_cached)
225
+ return super()._apply(fn)
226
+
227
+
228
+ def rotate_half(x):
229
+ x1, x2 = x[..., :x.shape[-1] // 2], x[..., x.shape[-1] // 2:]
230
+ return torch.cat((-x2, x1), dim=x1.ndim - 1) # dim=-1 triggers a bug in earlier torch versions
231
+
232
+
233
+ @torch.jit.script
234
+ def apply_rotary_pos_emb_index(q, k, cos, sin, position_id):
235
+ # position_id: [sq, b], q, k: [sq, b, np, hn], cos: [sq, 1, hn] -> [sq, b, 1, hn]
236
+ cos, sin = F.embedding(position_id, cos.squeeze(1)).unsqueeze(2), \
237
+ F.embedding(position_id, sin.squeeze(1)).unsqueeze(2)
238
+ q, k = (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
239
+ return q, k
240
+
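A quick standalone check of the rotary helpers above, with small hypothetical shapes (seq_len=8, batch=2, heads=4, head_dim=32). The `.float()` cast is only there so the sketch runs on CPU; the model itself keeps FP16 buffers.

```python
# Hedged sketch: apply rotary position embeddings to dummy q/k tensors of shape [sq, b, np, hn].
import torch

rotary = RotaryEmbedding(dim=32, learnable=False).float()   # cast buffers to FP32 for a CPU demo
q = torch.randn(8, 2, 4, 32)
k = torch.randn(8, 2, 4, 32)
position_ids = torch.arange(8).unsqueeze(1).expand(8, 2)    # [sq, b]

cos, sin = rotary(q, seq_len=8)                              # each [sq, 1, hn]
q_rot, k_rot = apply_rotary_pos_emb_index(q, k, cos, sin, position_ids)
print(q_rot.shape, k_rot.shape)                              # both torch.Size([8, 2, 4, 32])
```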
241
+
242
+ def attention_fn(
243
+ self,
244
+ query_layer,
245
+ key_layer,
246
+ value_layer,
247
+ attention_mask,
248
+ hidden_size_per_partition,
249
+ layer_id,
250
+ layer_past=None,
251
+ scaling_attention_score=True,
252
+ use_cache=False,
253
+ ):
254
+ if layer_past is not None:
255
+ past_key, past_value = layer_past[0], layer_past[1]
256
+ key_layer = torch.cat((past_key, key_layer), dim=0)
257
+ value_layer = torch.cat((past_value, value_layer), dim=0)
258
+
259
+ # seqlen, batch, num_attention_heads, hidden_size_per_attention_head
260
+ seq_len, b, nh, hidden_size = key_layer.shape
261
+
262
+ if use_cache:
263
+ present = (key_layer, value_layer)
264
+ else:
265
+ present = None
266
+
267
+ query_key_layer_scaling_coeff = float(layer_id + 1)
268
+ if scaling_attention_score:
269
+ query_layer = query_layer / (math.sqrt(hidden_size) * query_key_layer_scaling_coeff)
270
+
271
+ # ===================================
272
+ # Raw attention scores. [b, np, s, s]
273
+ # ===================================
274
+
275
+ # [b, np, sq, sk]
276
+ output_size = (query_layer.size(1), query_layer.size(2), query_layer.size(0), key_layer.size(0))
277
+
278
+ # [sq, b, np, hn] -> [sq, b * np, hn]
279
+ query_layer = query_layer.view(output_size[2], output_size[0] * output_size[1], -1)
280
+ # [sk, b, np, hn] -> [sk, b * np, hn]
281
+ key_layer = key_layer.view(output_size[3], output_size[0] * output_size[1], -1)
282
+
283
+ matmul_result = torch.empty(
284
+ output_size[0] * output_size[1],
285
+ output_size[2],
286
+ output_size[3],
287
+ dtype=query_layer.dtype,
288
+ device=query_layer.device,
289
+ )
290
+
291
+ matmul_result = torch.baddbmm(
292
+ matmul_result,
293
+ query_layer.transpose(0, 1), # [b * np, sq, hn]
294
+ key_layer.transpose(0, 1).transpose(1, 2), # [b * np, hn, sk]
295
+ beta=0.0,
296
+ alpha=1.0,
297
+ )
298
+
299
+ # change view to [b, np, sq, sk]
300
+ attention_scores = matmul_result.view(*output_size)
301
+
302
+ if self.scale_mask_softmax:
303
+ self.scale_mask_softmax.scale = query_key_layer_scaling_coeff
304
+ attention_probs = self.scale_mask_softmax(attention_scores, attention_mask.contiguous())
305
+ else:
306
+ if not (attention_mask == 0).all():
307
+ # if auto-regressive, skip
308
+ attention_scores.masked_fill_(attention_mask, -10000.0)
309
+ dtype = attention_scores.dtype
310
+ attention_scores = attention_scores.float()
311
+ attention_scores = attention_scores * query_key_layer_scaling_coeff
312
+
313
+ attention_probs = F.softmax(attention_scores, dim=-1)
314
+
315
+ attention_probs = attention_probs.type(dtype)
316
+
317
+ # =========================
318
+ # Context layer. [sq, b, hp]
319
+ # =========================
320
+
321
+ # value_layer -> context layer.
322
+ # [sk, b, np, hn] --> [b, np, sq, hn]
323
+
324
+ # context layer shape: [b, np, sq, hn]
325
+ output_size = (value_layer.size(1), value_layer.size(2), query_layer.size(0), value_layer.size(3))
326
+
327
+ # change view [sk, b * np, hn]
328
+ value_layer = value_layer.view(value_layer.size(0), output_size[0] * output_size[1], -1)
329
+
330
+ # change view [b * np, sq, sk]
331
+ attention_probs = attention_probs.view(output_size[0] * output_size[1], output_size[2], -1)
332
+
333
+ # matmul: [b * np, sq, hn]
334
+ context_layer = torch.bmm(attention_probs, value_layer.transpose(0, 1))
335
+
336
+ # change view [b, np, sq, hn]
337
+ context_layer = context_layer.view(*output_size)
338
+
339
+ # [b, np, sq, hn] --> [sq, b, np, hn]
340
+ context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
341
+
342
+ # [sq, b, np, hn] --> [sq, b, hp]
343
+ new_context_layer_shape = context_layer.size()[:-2] + (hidden_size_per_partition,)
344
+ context_layer = context_layer.view(*new_context_layer_shape)
345
+
346
+ outputs = (context_layer, present, attention_probs)
347
+
348
+ return outputs
349
+
350
+
351
+ class SelfAttention(torch.nn.Module):
352
+ def __init__(self, hidden_size, num_attention_heads,
353
+ layer_id, hidden_size_per_attention_head=None, bias=True,
354
+ params_dtype=torch.float, position_encoding_2d=True):
355
+ super(SelfAttention, self).__init__()
356
+
357
+ self.layer_id = layer_id
358
+ self.hidden_size = hidden_size
359
+ self.hidden_size_per_partition = hidden_size
360
+ self.num_attention_heads = num_attention_heads
361
+ self.num_attention_heads_per_partition = num_attention_heads
362
+ self.position_encoding_2d = position_encoding_2d
363
+ self.rotary_emb = RotaryEmbedding(
364
+ self.hidden_size // (self.num_attention_heads * 2)
365
+ if position_encoding_2d
366
+ else self.hidden_size // self.num_attention_heads,
367
+ base=10000,
368
+ precision=torch.half,
369
+ learnable=False,
370
+ )
371
+
372
+ self.scale_mask_softmax = None
373
+
374
+ if hidden_size_per_attention_head is None:
375
+ self.hidden_size_per_attention_head = hidden_size // num_attention_heads
376
+ else:
377
+ self.hidden_size_per_attention_head = hidden_size_per_attention_head
378
+
379
+ self.inner_hidden_size = num_attention_heads * self.hidden_size_per_attention_head
380
+
381
+ # Strided linear layer.
382
+ self.query_key_value = skip_init(
383
+ torch.nn.Linear,
384
+ hidden_size,
385
+ 3 * self.inner_hidden_size,
386
+ bias=bias,
387
+ dtype=params_dtype,
388
+ )
389
+
390
+ self.dense = skip_init(
391
+ torch.nn.Linear,
392
+ self.inner_hidden_size,
393
+ hidden_size,
394
+ bias=bias,
395
+ dtype=params_dtype,
396
+ )
397
+
398
+ @staticmethod
399
+ def attention_mask_func(attention_scores, attention_mask):
400
+ attention_scores.masked_fill_(attention_mask, -10000.0)
401
+ return attention_scores
402
+
403
+ def split_tensor_along_last_dim(self, tensor, num_partitions,
404
+ contiguous_split_chunks=False):
405
+ """Split a tensor along its last dimension.
406
+ Arguments:
407
+ tensor: input tensor.
408
+ num_partitions: number of partitions to split the tensor
409
+ contiguous_split_chunks: If True, make each chunk contiguous
410
+ in memory.
411
+ """
412
+ # Get the size and dimension.
413
+ last_dim = tensor.dim() - 1
414
+ last_dim_size = tensor.size()[last_dim] // num_partitions
415
+ # Split.
416
+ tensor_list = torch.split(tensor, last_dim_size, dim=last_dim)
417
+ # Note: torch.split does not create contiguous tensors by default.
418
+ if contiguous_split_chunks:
419
+ return tuple(chunk.contiguous() for chunk in tensor_list)
420
+
421
+ return tensor_list
422
+
423
+ def forward(
424
+ self,
425
+ hidden_states: torch.Tensor,
426
+ position_ids,
427
+ attention_mask: torch.Tensor,
428
+ layer_id,
429
+ layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
430
+ use_cache: bool = False,
431
+ output_attentions: bool = False,
432
+ ):
433
+ """
434
+ hidden_states: [seq_len, batch, hidden_size]
435
+ attention_mask: [(1, 1), seq_len, seq_len]
436
+ """
437
+
438
+ # [seq_len, batch, 3 * hidden_size]
439
+ mixed_raw_layer = self.query_key_value(hidden_states)
440
+
441
+ # [seq_len, batch, 3 * hidden_size] --> [seq_len, batch, num_attention_heads, 3 * hidden_size_per_attention_head]
442
+ new_tensor_shape = mixed_raw_layer.size()[:-1] + (
443
+ self.num_attention_heads_per_partition,
444
+ 3 * self.hidden_size_per_attention_head,
445
+ )
446
+ mixed_raw_layer = mixed_raw_layer.view(*new_tensor_shape)
447
+
448
+ # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
449
+ (query_layer, key_layer, value_layer) = self.split_tensor_along_last_dim(mixed_raw_layer, 3)
450
+
451
+ if self.position_encoding_2d:
452
+ q1, q2 = query_layer.chunk(2, dim=(query_layer.ndim - 1))
453
+ k1, k2 = key_layer.chunk(2, dim=(key_layer.ndim - 1))
454
+ cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
455
+ position_ids, block_position_ids = position_ids[:, 0, :].transpose(0, 1).contiguous(), \
456
+ position_ids[:, 1, :].transpose(0, 1).contiguous()
457
+ q1, k1 = apply_rotary_pos_emb_index(q1, k1, cos, sin, position_ids)
458
+ q2, k2 = apply_rotary_pos_emb_index(q2, k2, cos, sin, block_position_ids)
459
+ query_layer = torch.concat([q1, q2], dim=(q1.ndim - 1))
460
+ key_layer = torch.concat([k1, k2], dim=(k1.ndim - 1))
461
+ else:
462
+ position_ids = position_ids.transpose(0, 1)
463
+ cos, sin = self.rotary_emb(value_layer, seq_len=position_ids.max() + 1)
464
+ # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
465
+ query_layer, key_layer = apply_rotary_pos_emb_index(query_layer, key_layer, cos, sin, position_ids)
466
+
467
+ # [seq_len, batch, hidden_size]
468
+ context_layer, present, attention_probs = attention_fn(
469
+ self=self,
470
+ query_layer=query_layer,
471
+ key_layer=key_layer,
472
+ value_layer=value_layer,
473
+ attention_mask=attention_mask,
474
+ hidden_size_per_partition=self.hidden_size_per_partition,
475
+ layer_id=layer_id,
476
+ layer_past=layer_past,
477
+ use_cache=use_cache
478
+ )
479
+
480
+ output = self.dense(context_layer)
481
+
482
+ outputs = (output, present)
483
+
484
+ if output_attentions:
485
+ outputs += (attention_probs,)
486
+
487
+ return outputs # output, present, attention_probs
488
+
489
+
490
+ class GEGLU(torch.nn.Module):
491
+ def __init__(self):
492
+ super().__init__()
493
+ self.activation_fn = F.gelu
494
+
495
+ def forward(self, x):
496
+ # dim=-1 breaks in jit for pt<1.10
497
+ x1, x2 = x.chunk(2, dim=(x.ndim - 1))
498
+ return x1 * self.activation_fn(x2)
499
+
500
+
501
+ class GLU(torch.nn.Module):
502
+ def __init__(self, hidden_size, inner_hidden_size=None,
503
+ layer_id=None, bias=True, activation_func=gelu, params_dtype=torch.float):
504
+ super(GLU, self).__init__()
505
+ self.layer_id = layer_id
506
+ self.activation_func = activation_func
507
+
508
+ # Project to 4h.
509
+ self.hidden_size = hidden_size
510
+ if inner_hidden_size is None:
511
+ inner_hidden_size = 4 * hidden_size
512
+ self.inner_hidden_size = inner_hidden_size
513
+ self.dense_h_to_4h = skip_init(
514
+ torch.nn.Linear,
515
+ self.hidden_size,
516
+ self.inner_hidden_size,
517
+ bias=bias,
518
+ dtype=params_dtype,
519
+ )
520
+ # Project back to h.
521
+ self.dense_4h_to_h = skip_init(
522
+ torch.nn.Linear,
523
+ self.inner_hidden_size,
524
+ self.hidden_size,
525
+ bias=bias,
526
+ dtype=params_dtype,
527
+ )
528
+
529
+ def forward(self, hidden_states):
530
+ """
531
+ hidden_states: [seq_len, batch, hidden_size]
532
+ """
533
+
534
+ # [seq_len, batch, inner_hidden_size]
535
+ intermediate_parallel = self.dense_h_to_4h(hidden_states)
536
+
537
+ intermediate_parallel = self.activation_func(intermediate_parallel)
538
+
539
+ output = self.dense_4h_to_h(intermediate_parallel)
540
+
541
+ return output
542
+
543
+
544
+ class GLMBlock(torch.nn.Module):
545
+ def __init__(
546
+ self,
547
+ hidden_size,
548
+ num_attention_heads,
549
+ layernorm_epsilon,
550
+ layer_id,
551
+ inner_hidden_size=None,
552
+ hidden_size_per_attention_head=None,
553
+ layernorm=LayerNorm,
554
+ use_bias=True,
555
+ params_dtype=torch.float,
556
+ num_layers=28,
557
+ position_encoding_2d=True
558
+ ):
559
+ super(GLMBlock, self).__init__()
560
+ # Set output layer initialization if not provided.
561
+
562
+ self.layer_id = layer_id
563
+
564
+ # Layernorm on the input data.
565
+ self.input_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
566
+
567
+ self.position_encoding_2d = position_encoding_2d
568
+
569
+ # Self attention.
570
+ self.attention = SelfAttention(
571
+ hidden_size,
572
+ num_attention_heads,
573
+ layer_id,
574
+ hidden_size_per_attention_head=hidden_size_per_attention_head,
575
+ bias=use_bias,
576
+ params_dtype=params_dtype,
577
+ position_encoding_2d=self.position_encoding_2d
578
+ )
579
+
580
+ # Layernorm after the self-attention layer.
581
+ self.post_attention_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
582
+
583
+ self.num_layers = num_layers
584
+
585
+ # GLU
586
+ self.mlp = GLU(
587
+ hidden_size,
588
+ inner_hidden_size=inner_hidden_size,
589
+ bias=use_bias,
590
+ layer_id=layer_id,
591
+ params_dtype=params_dtype,
592
+ )
593
+
594
+ def forward(
595
+ self,
596
+ hidden_states: torch.Tensor,
597
+ position_ids,
598
+ attention_mask: torch.Tensor,
599
+ layer_id,
600
+ layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
601
+ use_cache: bool = False,
602
+ output_attentions: bool = False,
603
+ ):
604
+ """
605
+ hidden_states: [seq_len, batch, hidden_size]
606
+ attention_mask: [(1, 1), seq_len, seq_len]
607
+ """
608
+
609
+ # Layer norm at the beginning of the transformer layer.
610
+ # [seq_len, batch, hidden_size]
611
+ attention_input = self.input_layernorm(hidden_states)
612
+
613
+ # Self attention.
614
+ attention_outputs = self.attention(
615
+ attention_input,
616
+ position_ids,
617
+ attention_mask=attention_mask,
618
+ layer_id=layer_id,
619
+ layer_past=layer_past,
620
+ use_cache=use_cache,
621
+ output_attentions=output_attentions
622
+ )
623
+
624
+ attention_output = attention_outputs[0]
625
+
626
+ outputs = attention_outputs[1:]
627
+
628
+ # Residual connection.
629
+ alpha = (2 * self.num_layers) ** 0.5
630
+ hidden_states = attention_input * alpha + attention_output
631
+
632
+ mlp_input = self.post_attention_layernorm(hidden_states)
633
+
634
+ # MLP.
635
+ mlp_output = self.mlp(mlp_input)
636
+
637
+ # Second residual connection.
638
+ output = mlp_input * alpha + mlp_output
639
+
640
+ if use_cache:
641
+ outputs = (output,) + outputs
642
+ else:
643
+ outputs = (output,) + outputs[1:]
644
+
645
+ return outputs # hidden_states, present, attentions
646
+
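Both residual connections in `GLMBlock.forward` scale the layer-normed input by `alpha = (2 * num_layers) ** 0.5`, which appears to follow a DeepNorm-style scaling (an interpretation, not stated in the code). For the 28-layer ChatGLM-6B this works out to roughly 7.48:

```python
# Quick check of the residual scaling factor for the default num_layers=28.
num_layers = 28
alpha = (2 * num_layers) ** 0.5
print(round(alpha, 3))   # 7.483
```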
647
+
648
+ class ChatGLMPreTrainedModel(PreTrainedModel):
649
+ """
650
+ An abstract class to handle weights initialization and
651
+ a simple interface for downloading and loading pretrained models.
652
+ """
653
+
654
+ is_parallelizable = False
655
+ supports_gradient_checkpointing = True
656
+ config_class = ChatGLMConfig
657
+ base_model_prefix = "transformer"
658
+ _no_split_modules = ["GLMBlock"]
659
+
660
+ def __init__(self, *inputs, **kwargs):
661
+ super().__init__(*inputs, **kwargs)
662
+
663
+ def _init_weights(self, module: nn.Module):
664
+ """Initialize the weights."""
665
+ return
666
+
667
+ def _set_gradient_checkpointing(self, module, value=False):
668
+ if isinstance(module, ChatGLMModel):
669
+ module.gradient_checkpointing = value
670
+
671
+
672
+ CHATGLM_6B_START_DOCSTRING = r"""
673
+ This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class.
674
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general
675
+ usage and behavior.
676
+
677
+ Parameters:
678
+ config ([`~ChatGLM6BConfig`]): Model configuration class with all the parameters of the model.
679
+ Initializing with a config file does not load the weights associated with the model, only the configuration.
680
+ Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
681
+ """
682
+
683
+ CHATGLM_6B_INPUTS_DOCSTRING = r"""
684
+ Args:
685
+ input_ids (`torch.LongTensor` of shape `({0})`):
686
+ Indices of input sequence tokens in the vocabulary.
687
+
688
+ Indices can be obtained using [`ChatGLM6BTokenizer`].
689
+ See [`PreTrainedTokenizer.encode`] and
690
+ [`PreTrainedTokenizer.__call__`] for details.
691
+
692
+ [What are input IDs?](../glossary#input-ids)
693
+ attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
694
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
695
+
696
+ - 1 for tokens that are **not masked**,
697
+ - 0 for tokens that are **masked**.
698
+
699
+ [What are attention masks?](../glossary#attention-mask)
700
+ token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
701
+ Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
702
+
703
+ - 0 corresponds to a *sentence A* token,
704
+ - 1 corresponds to a *sentence B* token.
705
+
706
+ [What are token type IDs?](../glossary#token-type-ids)
707
+ position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
708
+ Indices of positions of each input sequence tokens in the position embeddings.
709
+ Selected in the range `[0, config.max_position_embeddings - 1]`.
710
+
711
+ [What are position IDs?](../glossary#position-ids)
712
+ head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
713
+ Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
714
+
715
+ - 1 indicates the head is **not masked**,
716
+ - 0 indicates the head is **masked**.
717
+
718
+ inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
719
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
720
+ This is useful if you want more control over how to convert *input_ids* indices into associated vectors
721
+ than the model's internal embedding lookup matrix.
722
+ output_attentions (`bool`, *optional*):
723
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
724
+ tensors for more detail.
725
+ output_hidden_states (`bool`, *optional*):
726
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
727
+ more detail.
728
+ return_dict (`bool`, *optional*):
729
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
730
+ """
731
+
732
+
733
+ @add_start_docstrings(
734
+ "The bare ChatGLM-6B Model transformer outputting raw hidden-states without any specific head on top.",
735
+ CHATGLM_6B_START_DOCSTRING,
736
+ )
737
+ class ChatGLMModel(ChatGLMPreTrainedModel):
738
+ """
739
+
740
+ The model can behave as an encoder (with only self-attention) as well
741
+ as a decoder, in which case a layer of cross-attention is added between
742
+ the self-attention layers, following the architecture described in [Attention is
743
+ all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani,
744
+ Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
745
+
746
+ To behave as a decoder the model needs to be initialized with the
747
+ `is_decoder` argument of the configuration set to `True`.
748
+ To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder`
749
+ argument and `add_cross_attention` set to `True`; an
750
+ `encoder_hidden_states` is then expected as an input to the forward pass.
751
+ """
752
+
753
+ def __init__(self, config: ChatGLMConfig):
754
+ super().__init__(config)
755
+
756
+ # recording parameters
757
+ self.max_sequence_length = config.max_sequence_length
758
+ self.hidden_size = config.hidden_size
759
+ self.params_dtype = torch.half
760
+ self.num_attention_heads = config.num_attention_heads
761
+ self.vocab_size = config.vocab_size
762
+ self.num_layers = config.num_layers
763
+ self.layernorm_epsilon = config.layernorm_epsilon
764
+ self.inner_hidden_size = config.inner_hidden_size
765
+ self.hidden_size_per_attention_head = self.hidden_size // self.num_attention_heads
766
+ self.position_encoding_2d = config.position_encoding_2d
767
+ self.pre_seq_len = config.pre_seq_len
768
+ self.prefix_projection = config.prefix_projection
769
+
770
+ self.word_embeddings = skip_init(
771
+ torch.nn.Embedding,
772
+ num_embeddings=self.vocab_size, embedding_dim=self.hidden_size,
773
+ dtype=self.params_dtype
774
+ )
775
+ self.gradient_checkpointing = False
776
+
777
+ def get_layer(layer_id):
778
+ return GLMBlock(
779
+ self.hidden_size,
780
+ self.num_attention_heads,
781
+ self.layernorm_epsilon,
782
+ layer_id,
783
+ inner_hidden_size=self.inner_hidden_size,
784
+ hidden_size_per_attention_head=self.hidden_size_per_attention_head,
785
+ layernorm=LayerNorm,
786
+ use_bias=True,
787
+ params_dtype=self.params_dtype,
788
+ position_encoding_2d=self.position_encoding_2d,
789
+ )
790
+
791
+ self.layers = torch.nn.ModuleList(
792
+ [get_layer(layer_id) for layer_id in range(self.num_layers)]
793
+ )
794
+
795
+ # Final layer norm before output.
796
+ self.final_layernorm = LayerNorm(self.hidden_size, eps=self.layernorm_epsilon)
797
+
798
+ if self.pre_seq_len is not None:
799
+ for param in self.parameters():
800
+ param.requires_grad = False
801
+ self.prefix_tokens = torch.arange(self.pre_seq_len).long()
802
+ self.prefix_encoder = PrefixEncoder(config)
803
+ self.dropout = torch.nn.Dropout(0.1)
804
+
805
+ # total_params = sum(p.numel() for p in self.parameters())
806
+ # trainable_params = sum(p.numel() for p in self.parameters() if p.requires_grad)
807
+ # print("Using p-tuning v2: # trainable_params = {} / {}".format(trainable_params, total_params))
808
+
809
+ def get_input_embeddings(self):
810
+ return self.word_embeddings
811
+
812
+ def set_input_embeddings(self, new_embeddings: torch.Tensor):
813
+ self.word_embeddings = new_embeddings
814
+
815
+ def get_prompt(self, batch_size, device, dtype=torch.half):
816
+ prefix_tokens = self.prefix_tokens.unsqueeze(0).expand(batch_size, -1).to(device)
817
+ past_key_values = self.prefix_encoder(prefix_tokens).type(dtype)
818
+ past_key_values = past_key_values.view(
819
+ batch_size,
820
+ self.pre_seq_len,
821
+ self.num_layers * 2,
822
+ self.num_attention_heads,
823
+ self.hidden_size // self.num_attention_heads
824
+ )
825
+ # seq_len, b, nh, hidden_size
826
+ past_key_values = self.dropout(past_key_values)
827
+ past_key_values = past_key_values.permute([2, 1, 0, 3, 4]).split(2)
828
+ # past_key_values = [(v[0], v[1]) for v in past_key_values]
829
+ return past_key_values
830
+
831
+ def get_masks(self, input_ids, device):
832
+ batch_size, seq_length = input_ids.shape
833
+ context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
834
+ attention_mask = torch.ones((batch_size, seq_length, seq_length), device=device)
835
+ attention_mask.tril_()
836
+ for i, context_length in enumerate(context_lengths):
837
+ attention_mask[i, :, :context_length] = 1
838
+ attention_mask.unsqueeze_(1)
839
+ attention_mask = (attention_mask < 0.5).bool()
840
+
841
+ return attention_mask
842
+
843
+ def get_position_ids(self, input_ids, mask_positions, device, gmask=False):
844
+ batch_size, seq_length = input_ids.shape
845
+ context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
846
+ if self.position_encoding_2d:
847
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=device).expand(batch_size, seq_length)
848
+ if not gmask:
849
+ for i, context_length in enumerate(context_lengths):
850
+ position_ids[i, context_length:] = mask_positions[i]
851
+ block_position_ids = [torch.cat((
852
+ torch.zeros(context_length, dtype=torch.long, device=device),
853
+ torch.arange(seq_length - context_length, dtype=torch.long, device=device) + 1
854
+ )) for context_length in context_lengths]
855
+ block_position_ids = torch.stack(block_position_ids, dim=0)
856
+ position_ids = torch.stack((position_ids, block_position_ids), dim=1)
857
+ else:
858
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=device).expand(batch_size, seq_length)
859
+ if not gmask:
860
+ for i, context_length in enumerate(context_lengths):
861
+ position_ids[i, context_length:] = mask_positions[i]
862
+
863
+ return position_ids
864
+
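`get_masks` builds ChatGLM's prefix-LM attention pattern: every position may attend to the whole prompt (the tokens before `bos`), while attention elsewhere is causal. A standalone sketch of the same logic on a toy sequence (the token ids are made up):

```python
# Hedged sketch reproducing the get_masks logic for one toy sequence.
import torch

bos_token_id = 150004                                  # matches config.json
input_ids = torch.tensor([[11, 12, 13, bos_token_id, 14, 15]])
batch_size, seq_length = input_ids.shape
context_length = input_ids[0].tolist().index(bos_token_id)

attention_mask = torch.ones(batch_size, seq_length, seq_length).tril_()
attention_mask[0, :, :context_length] = 1              # full attention over the prompt tokens
attention_mask.unsqueeze_(1)
attention_mask = (attention_mask < 0.5).bool()         # True marks positions that are masked out

print(attention_mask[0, 0].int())
```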
865
+ @add_start_docstrings_to_model_forward(CHATGLM_6B_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
866
+ @add_code_sample_docstrings(
867
+ checkpoint=_CHECKPOINT_FOR_DOC,
868
+ output_type=BaseModelOutputWithPastAndCrossAttentions,
869
+ config_class=_CONFIG_FOR_DOC,
870
+ )
871
+ def forward(
872
+ self,
873
+ input_ids: Optional[torch.LongTensor] = None,
874
+ position_ids: Optional[torch.LongTensor] = None,
875
+ attention_mask: Optional[torch.Tensor] = None,
876
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
877
+ inputs_embeds: Optional[torch.LongTensor] = None,
878
+ use_cache: Optional[bool] = None,
879
+ output_attentions: Optional[bool] = None,
880
+ output_hidden_states: Optional[bool] = None,
881
+ return_dict: Optional[bool] = None,
882
+ ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPast]:
883
+
884
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
885
+ output_hidden_states = (
886
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
887
+ )
888
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
889
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
890
+
891
+ if self.gradient_checkpointing and self.training:
892
+ if use_cache:
893
+ # logger.warning_once(
894
+ # "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
895
+ # )
896
+ use_cache = False
897
+
898
+ if input_ids is not None and inputs_embeds is not None:
899
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
900
+ elif input_ids is not None:
901
+ batch_size, seq_length = input_ids.shape[:2]
902
+ elif inputs_embeds is not None:
903
+ batch_size, seq_length, _ = inputs_embeds.shape[:2]
904
+ else:
905
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
906
+
907
+ if inputs_embeds is None:
908
+ inputs_embeds = self.word_embeddings(input_ids)
909
+
910
+ if past_key_values is None:
911
+ if self.pre_seq_len is not None:
912
+ past_key_values = self.get_prompt(batch_size=input_ids.shape[0], device=input_ids.device,
913
+ dtype=inputs_embeds.dtype)
914
+ else:
915
+ past_key_values = tuple([None] * len(self.layers))
916
+
917
+ if attention_mask is None:
918
+ attention_mask = self.get_masks(
919
+ input_ids,
920
+ device=input_ids.device
921
+ )
922
+
923
+ if self.pre_seq_len is not None:
924
+ prefix_attention_mask = torch.ones(batch_size, 1, input_ids.size(-1), self.pre_seq_len).to(
925
+ attention_mask.device)
926
+ prefix_attention_mask = (prefix_attention_mask < 0.5).bool()
927
+ attention_mask = torch.cat((prefix_attention_mask, attention_mask), dim=3)
928
+
929
+ if position_ids is None:
930
+ MASK, gMASK = 150000, 150001
931
+ mask_token = MASK if MASK in input_ids else gMASK
932
+ use_gmask = False if MASK in input_ids else gMASK
933
+
934
+ mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]
935
+ position_ids = self.get_position_ids(
936
+ input_ids,
937
+ mask_positions=mask_positions,
938
+ device=input_ids.device,
939
+ gmask=use_gmask
940
+ )
941
+
942
+ # [seq_len, batch, hidden_size]
943
+ hidden_states = inputs_embeds.transpose(0, 1)
944
+
945
+ presents = () if use_cache else None
946
+ all_self_attentions = () if output_attentions else None
947
+ all_hidden_states = () if output_hidden_states else None
948
+
949
+ if attention_mask is None:
950
+ attention_mask = torch.zeros(1, 1, device=input_ids.device).bool()
951
+
952
+ else:
953
+ attention_mask = attention_mask.to(input_ids.device)
954
+
955
+ for i, layer in enumerate(self.layers):
956
+
957
+ if output_hidden_states:
958
+ all_hidden_states = all_hidden_states + (hidden_states,)
959
+ layer_past = past_key_values[i]
960
+
961
+ if self.gradient_checkpointing and self.training:
962
+ layer_ret = torch.utils.checkpoint.checkpoint(
963
+ layer,
964
+ hidden_states,
965
+ position_ids,
966
+ attention_mask,
967
+ torch.tensor(i),
968
+ layer_past,
969
+ use_cache,
970
+ output_attentions
971
+ )
972
+ else:
973
+ layer_ret = layer(
974
+ hidden_states,
975
+ position_ids=position_ids,
976
+ attention_mask=attention_mask,
977
+ layer_id=torch.tensor(i),
978
+ layer_past=layer_past,
979
+ use_cache=use_cache,
980
+ output_attentions=output_attentions
981
+ )
982
+
983
+ hidden_states = layer_ret[0]
984
+
985
+ if use_cache:
986
+ presents = presents + (layer_ret[1],)
987
+
988
+ if output_attentions:
989
+ all_self_attentions = all_self_attentions + (layer_ret[2 if use_cache else 1],)
990
+
991
+ # Final layer norm.
992
+ hidden_states = self.final_layernorm(hidden_states)
993
+
994
+ if output_hidden_states:
995
+ all_hidden_states = all_hidden_states + (hidden_states,)
996
+
997
+ if not return_dict:
998
+ return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)
999
+
1000
+ return BaseModelOutputWithPast(
1001
+ last_hidden_state=hidden_states,
1002
+ past_key_values=presents,
1003
+ hidden_states=all_hidden_states,
1004
+ attentions=all_self_attentions,
1005
+ )
1006
+
1007
+
1008
+ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
1009
+ def __init__(self, config: ChatGLMConfig):
1010
+ super().__init__(config)
1011
+
1012
+ # self.hidden_size = config.hidden_size
1013
+ # self.params_dtype = torch.half
1014
+ # self.vocab_size = config.vocab_size
1015
+ self.max_sequence_length = config.max_sequence_length
1016
+
1017
+ self.position_encoding_2d = config.position_encoding_2d
1018
+
1019
+ self.transformer = ChatGLMModel(config)
1020
+
1021
+ self.lm_head = skip_init(
1022
+ nn.Linear,
1023
+ config.hidden_size,
1024
+ config.vocab_size,
1025
+ bias=False,
1026
+ dtype=torch.half
1027
+ )
1028
+
1029
+ self.config = config
1030
+
1031
+ self.quantized = False
1032
+
1033
+ if self.config.quantization_bit:
1034
+ self.quantize(self.config.quantization_bit, empty_init=True)
1035
+
1036
+ def get_output_embeddings(self):
1037
+ return self.lm_head
1038
+
1039
+ def set_output_embeddings(self, new_embeddings):
1040
+ self.lm_head = new_embeddings
1041
+
1042
+ def get_masks_and_position_ids(self, input_ids, mask_positions, device, gmask=False):
1043
+ batch_size, seq_length = input_ids.shape
1044
+ context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
1045
+ attention_mask = torch.ones((batch_size, seq_length, seq_length), device=device)
1046
+ attention_mask.tril_()
1047
+ for i, context_length in enumerate(context_lengths):
1048
+ attention_mask[i, :, :context_length] = 1
1049
+ attention_mask.unsqueeze_(1)
1050
+ attention_mask = (attention_mask < 0.5).bool()
1051
+
1052
+ batch_size, seq_length = input_ids.shape
1053
+ context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
1054
+ if self.position_encoding_2d:
1055
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=device).expand(batch_size, seq_length)
1056
+ if not gmask:
1057
+ for i, context_length in enumerate(context_lengths):
1058
+ position_ids[i, context_length:] = mask_positions[i]
1059
+ block_position_ids = [torch.cat((
1060
+ torch.zeros(context_length, dtype=torch.long, device=device),
1061
+ torch.arange(seq_length - context_length, dtype=torch.long, device=device) + 1
1062
+ )) for context_length in context_lengths]
1063
+ block_position_ids = torch.stack(block_position_ids, dim=0)
1064
+ position_ids = torch.stack((position_ids, block_position_ids), dim=1)
1065
+ else:
1066
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=device).expand(batch_size, seq_length)
1067
+ if not gmask:
1068
+ for i, context_length in enumerate(context_lengths):
1069
+ position_ids[i, context_length:] = mask_positions[i]
1070
+
1071
+ return attention_mask, position_ids
1072
+
1073
+ def prepare_inputs_for_generation(
1074
+ self,
1075
+ input_ids: torch.LongTensor,
1076
+ past: Optional[torch.Tensor] = None,
1077
+ past_key_values: Optional[torch.Tensor] = None,
1078
+ attention_mask: Optional[torch.Tensor] = None,
1079
+ **kwargs
1080
+ ) -> dict:
1081
+ batch_size, seq_length = input_ids.shape
1082
+ MASK, gMASK = 150000, 150001
1083
+ mask_token = MASK if MASK in input_ids else gMASK
1084
+ use_gmask = False if MASK in input_ids else gMASK
1085
+ seqs = input_ids.tolist()
1086
+ mask_positions = [seq.index(mask_token) for seq in seqs]
1087
+
1088
+ # only last token for input_ids if past is not None
1089
+ if past is not None or past_key_values is not None:
1090
+ context_lengths = [seq.index(self.config.bos_token_id) for seq in seqs]
1091
+ last_token = input_ids[:, -1].unsqueeze(-1)
1092
+ if self.position_encoding_2d:
1093
+ position_ids = torch.tensor(
1094
+ [[mask_position, seq_length - context_length] for mask_position, context_length in
1095
+ zip(mask_positions, context_lengths)], dtype=torch.long, device=input_ids.device).unsqueeze(-1)
1096
+ else:
1097
+ position_ids = torch.tensor([mask_position for mask_position in mask_positions], dtype=torch.long,
1098
+ device=input_ids.device).unsqueeze(-1)
1099
+
1100
+ if past is None:
1101
+ past = past_key_values
1102
+ return {
1103
+ "input_ids": last_token,
1104
+ "past_key_values": past,
1105
+ "position_ids": position_ids,
1106
+ }
1107
+ else:
1108
+ attention_mask, position_ids = self.get_masks_and_position_ids(
1109
+ input_ids,
1110
+ mask_positions=mask_positions,
1111
+ device=input_ids.device,
1112
+ gmask=use_gmask
1113
+ )
1114
+
1115
+ return {
1116
+ "input_ids": input_ids,
1117
+ "past_key_values": past,
1118
+ "position_ids": position_ids,
1119
+ "attention_mask": attention_mask
1120
+ }
1121
+
1122
+ def forward(
1123
+ self,
1124
+ input_ids: Optional[torch.Tensor] = None,
1125
+ position_ids: Optional[torch.Tensor] = None,
1126
+ attention_mask: Optional[torch.Tensor] = None,
1127
+ past_key_values: Optional[Tuple[torch.FloatTensor]] = None,
1128
+ inputs_embeds: Optional[torch.Tensor] = None,
1129
+ labels: Optional[torch.Tensor] = None,
1130
+ use_cache: Optional[bool] = None,
1131
+ output_attentions: Optional[bool] = None,
1132
+ output_hidden_states: Optional[bool] = None,
1133
+ return_dict: Optional[bool] = None,
1134
+ ):
1135
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1136
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1137
+
1138
+ transformer_outputs = self.transformer(
1139
+ input_ids=input_ids,
1140
+ position_ids=position_ids,
1141
+ attention_mask=attention_mask,
1142
+ past_key_values=past_key_values,
1143
+ inputs_embeds=inputs_embeds,
1144
+ use_cache=use_cache,
1145
+ output_attentions=output_attentions,
1146
+ output_hidden_states=output_hidden_states,
1147
+ return_dict=return_dict,
1148
+ )
1149
+
1150
+ hidden_states = transformer_outputs[0]
1151
+
1152
+ lm_logits = self.lm_head(hidden_states).permute(1, 0, 2).contiguous()
1153
+
1154
+ loss = None
1155
+ if labels is not None:
1156
+ lm_logits = lm_logits.to(torch.float32)
1157
+
1158
+ # Shift so that tokens < n predict n
1159
+ shift_logits = lm_logits[..., :-1, :].contiguous()
1160
+ shift_labels = labels[..., 1:].contiguous()
1161
+ # Flatten the tokens
1162
+ loss_fct = CrossEntropyLoss(ignore_index=self.config.pad_token_id)
1163
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1164
+
1165
+ lm_logits = lm_logits.to(hidden_states.dtype)
1166
+ loss = loss.to(hidden_states.dtype)
1167
+
1168
+ if not return_dict:
1169
+ output = (lm_logits,) + transformer_outputs[1:]
1170
+ return ((loss,) + output) if loss is not None else output
1171
+
1172
+ return CausalLMOutputWithPast(
1173
+ loss=loss,
1174
+ logits=lm_logits,
1175
+ past_key_values=transformer_outputs.past_key_values,
1176
+ hidden_states=transformer_outputs.hidden_states,
1177
+ attentions=transformer_outputs.attentions,
1178
+ )
1179
+
1180
+ @staticmethod
1181
+ def _reorder_cache(
1182
+ past: Tuple[Tuple[torch.Tensor, torch.Tensor], ...], beam_idx: torch.LongTensor
1183
+ ) -> Tuple[Tuple[torch.Tensor, torch.Tensor], ...]:
1184
+ """
1185
+ This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
1186
+ [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
1187
+ beam_idx at every generation step.
1188
+
1189
+ Output shares the same memory storage as `past`.
1190
+ """
1191
+ return tuple(
1192
+ (
1193
+ layer_past[0].index_select(1, beam_idx.to(layer_past[0].device)),
1194
+ layer_past[1].index_select(1, beam_idx.to(layer_past[1].device)),
1195
+ )
1196
+ for layer_past in past
1197
+ )
1198
+
1199
+ def process_response(self, response):
1200
+ response = response.strip()
1201
+ response = response.replace("[[训练时间]]", "2023年")
1202
+ punkts = [
1203
+ [",", ","],
1204
+ ["!", "!"],
1205
+ [":", ":"],
1206
+ [";", ";"],
1207
+ ["\?", "?"],
1208
+ ]
1209
+ for item in punkts:
1210
+ response = re.sub(r"([\u4e00-\u9fff])%s" % item[0], r"\1%s" % item[1], response)
1211
+ response = re.sub(r"%s([\u4e00-\u9fff])" % item[0], r"%s\1" % item[1], response)
1212
+ return response
1213
+
1214
+ @torch.no_grad()
1215
+ def chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048, num_beams=1,
1216
+ do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
1217
+ if history is None:
1218
+ history = []
1219
+ if logits_processor is None:
1220
+ logits_processor = LogitsProcessorList()
1221
+ logits_processor.append(InvalidScoreLogitsProcessor())
1222
+ gen_kwargs = {"max_length": max_length, "num_beams": num_beams, "do_sample": do_sample, "top_p": top_p,
1223
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
1224
+ if not history:
1225
+ prompt = query
1226
+ else:
1227
+ prompt = ""
1228
+ for i, (old_query, response) in enumerate(history):
1229
+ prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
1230
+ prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
1231
+ input_ids = tokenizer([prompt], return_tensors="pt", padding=True)
1232
+ input_ids = input_ids.to(self.device)
1233
+ outputs = self.generate(**input_ids, **gen_kwargs)
1234
+ outputs = outputs.tolist()[0][len(input_ids["input_ids"][0]):]
1235
+ response = tokenizer.decode(outputs)
1236
+ response = self.process_response(response)
1237
+ history = history + [(query, response)]
1238
+ return response, history
1239
+
1240
+ @torch.no_grad()
1241
+ def stream_chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 2048,
1242
+ do_sample=True, top_p=0.7, temperature=0.95, logits_processor=None, **kwargs):
1243
+ if history is None:
1244
+ history = []
1245
+ if logits_processor is None:
1246
+ logits_processor = LogitsProcessorList()
1247
+ logits_processor.append(InvalidScoreLogitsProcessor())
1248
+ gen_kwargs = {"max_length": max_length, "do_sample": do_sample, "top_p": top_p,
1249
+ "temperature": temperature, "logits_processor": logits_processor, **kwargs}
1250
+ if not history:
1251
+ prompt = query
1252
+ else:
1253
+ prompt = ""
1254
+ for i, (old_query, response) in enumerate(history):
1255
+ prompt += "[Round {}]\n问:{}\n答:{}\n".format(i, old_query, response)
1256
+ prompt += "[Round {}]\n问:{}\n答:".format(len(history), query)
1257
+ input_ids = tokenizer([prompt], return_tensors="pt", padding=True)
1258
+ input_ids = input_ids.to(self.device)
1259
+ for outputs in self.stream_generate(**input_ids, **gen_kwargs):
1260
+ outputs = outputs.tolist()[0][len(input_ids["input_ids"][0]):]
1261
+ response = tokenizer.decode(outputs)
1262
+ response = self.process_response(response)
1263
+ new_history = history + [(query, response)]
1264
+ yield response, new_history
1265
+
1266
+ @torch.no_grad()
1267
+ def stream_generate(
1268
+ self,
1269
+ input_ids,
1270
+ generation_config: Optional[GenerationConfig] = None,
1271
+ logits_processor: Optional[LogitsProcessorList] = None,
1272
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1273
+ prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
1274
+ **kwargs,
1275
+ ):
1276
+ batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
1277
+
1278
+ if generation_config is None:
1279
+ generation_config = self.generation_config
1280
+ generation_config = copy.deepcopy(generation_config)
1281
+ model_kwargs = generation_config.update(**kwargs)
1282
+ bos_token_id, eos_token_id = generation_config.bos_token_id, generation_config.eos_token_id
1283
+
1284
+ if isinstance(eos_token_id, int):
1285
+ eos_token_id = [eos_token_id]
1286
+
1287
+ has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None
1288
+ if has_default_max_length and generation_config.max_new_tokens is None:
1289
+ warnings.warn(
1290
+ f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. "
1291
+ "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we"
1292
+ " recommend using `max_new_tokens` to control the maximum length of the generation.",
1293
+ UserWarning,
1294
+ )
1295
+ elif generation_config.max_new_tokens is not None:
1296
+ generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
1297
+ if not has_default_max_length:
1298
+ logger.warning(
1299
+ f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
1300
+ f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
1301
+ "Please refer to the documentation for more information. "
1302
+ "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)",
1304
+ )
1305
+
1306
+ if input_ids_seq_length >= generation_config.max_length:
1307
+ input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
1308
+ logger.warning(
1309
+ f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to"
1310
+ f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
1311
+ " increasing `max_new_tokens`."
1312
+ )
1313
+
1314
+ # 2. Set generation parameters if not already defined
1315
+ logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
1316
+ stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
1317
+
1318
+ logits_processor = self._get_logits_processor(
1319
+ generation_config=generation_config,
1320
+ input_ids_seq_length=input_ids_seq_length,
1321
+ encoder_input_ids=input_ids,
1322
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1323
+ logits_processor=logits_processor,
1324
+ )
1325
+
1326
+ stopping_criteria = self._get_stopping_criteria(
1327
+ generation_config=generation_config, stopping_criteria=stopping_criteria
1328
+ )
1329
+ logits_warper = self._get_logits_warper(generation_config)
1330
+
1331
+ unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
1332
+ scores = None
1333
+ while True:
1334
+ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
1335
+ # forward pass to get next token
1336
+ outputs = self(
1337
+ **model_inputs,
1338
+ return_dict=True,
1339
+ output_attentions=False,
1340
+ output_hidden_states=False,
1341
+ )
1342
+
1343
+ next_token_logits = outputs.logits[:, -1, :]
1344
+
1345
+ # pre-process distribution
1346
+ next_token_scores = logits_processor(input_ids, next_token_logits)
1347
+ next_token_scores = logits_warper(input_ids, next_token_scores)
1348
+
1349
+ # sample
1350
+ probs = nn.functional.softmax(next_token_scores, dim=-1)
1351
+ if generation_config.do_sample:
1352
+ next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
1353
+ else:
1354
+ next_tokens = torch.argmax(probs, dim=-1)
1355
+
1356
+ # update generated ids, model inputs, and length for next step
1357
+ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
1358
+ model_kwargs = self._update_model_kwargs_for_generation(
1359
+ outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
1360
+ )
1361
+ unfinished_sequences = unfinished_sequences.mul((sum(next_tokens != i for i in eos_token_id)).long())
1362
+
1363
+ # stop when each sentence is finished, or if we exceed the maximum length
1364
+ if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
1365
+ break
1366
+ yield input_ids
1367
+
1368
+ def quantize(self, bits: int, empty_init=False, **kwargs):
1369
+ if bits == 0:
1370
+ return
1371
+
1372
+ from .quantization import quantize
1373
+
1374
+ if self.quantized:
1375
+ logger.info("Already quantized.")
1376
+ return self
1377
+
1378
+ self.quantized = True
1379
+
1380
+ self.config.quantization_bit = bits
1381
+
1382
+ self.transformer = quantize(self.transformer, bits, empty_init=empty_init, **kwargs)
1383
+ return self
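
The `quantize` method above delegates to `quantization.quantize` (added later in this commit), which swaps every attention and MLP `Linear` for a `QuantizedLinear`. Below is a minimal usage sketch, assuming a CUDA device and this repo id; the call order (`half()` → `quantize()` → `cuda()`) follows the usual ChatGLM convention and is an assumption here, not part of this commit.

```python
from transformers import AutoTokenizer, AutoModel

# Sketch only: load FP16 weights, quantize to INT4 in place, then move to GPU.
tokenizer = AutoTokenizer.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True)
model = AutoModel.from_pretrained("sunzeyeah/chatglm-6B", trust_remote_code=True).half()
model = model.quantize(4)   # replaces Linear layers with QuantizedLinear (see quantization.py)
model = model.cuda().eval()

response, history = model.chat(tokenizer, "你好", history=[])
```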
pytorch_model-00001-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe5bac6bfa5b5404ddfe3fabe04862b785e013afd7b308b7beca08239f9489fa
3
+ size 1904491802
pytorch_model-00002-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a80198fb714f7363d7e541125bb70b9cb6b1d1ef5988d32a7a25a852a374cbc3
3
+ size 1879731432
pytorch_model-00003-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aaba0ae53b3ea30559575c8528dab52ca291a26ac847c5601fcf874db401198f
3
+ size 1980385902
pytorch_model-00004-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:968d134dd9b11e393d160144f097d6bff8c559413e3f75e9e0b6d35618eba669
3
+ size 1913294120
pytorch_model-00005-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc628ce0dcd5c38783e63fc81dd1b609fe01670ec3b855b358aa0d1d7ea48bf3
3
+ size 1879722289
pytorch_model-00006-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:511ec23b7907b7a26461671775a2ac08c08fb3695285bbe7d91fc534d7cbfd7e
3
+ size 1879731496
pytorch_model-00007-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:245d64e05cebeb214d696bccc87c1dbdf16c67c366e7f54af452ec5748c2186e
3
+ size 1074103621
pytorch_model-00008-of-00008.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e764ebdece24219efeda3c18aa32fe6414da3d1f533df8845815609e9ef7f4a7
3
+ size 1233126123
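
Each `pytorch_model-*.bin` entry above is a Git LFS pointer (spec version, `oid sha256`, `size` in bytes) rather than the tensor data itself; the actual shards are fetched through LFS. A small sketch for checking a downloaded shard against the sha256 recorded in its pointer (the local file path is an assumption):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file so multi-GB shards don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# oid taken from the LFS pointer of shard 1 of 8 above
expected = "fe5bac6bfa5b5404ddfe3fabe04862b785e013afd7b308b7beca08239f9489fa"
assert sha256_of("pytorch_model-00001-of-00008.bin") == expected
```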
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,375 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 13744473856
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "pytorch_model-00008-of-00008.bin",
7
+ "transformer.final_layernorm.bias": "pytorch_model-00007-of-00008.bin",
8
+ "transformer.final_layernorm.weight": "pytorch_model-00007-of-00008.bin",
9
+ "transformer.layers.0.attention.dense.bias": "pytorch_model-00001-of-00008.bin",
10
+ "transformer.layers.0.attention.dense.weight": "pytorch_model-00001-of-00008.bin",
11
+ "transformer.layers.0.attention.query_key_value.bias": "pytorch_model-00001-of-00008.bin",
12
+ "transformer.layers.0.attention.query_key_value.weight": "pytorch_model-00001-of-00008.bin",
13
+ "transformer.layers.0.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
14
+ "transformer.layers.0.input_layernorm.bias": "pytorch_model-00001-of-00008.bin",
15
+ "transformer.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
16
+ "transformer.layers.0.mlp.dense_4h_to_h.bias": "pytorch_model-00001-of-00008.bin",
17
+ "transformer.layers.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00008.bin",
18
+ "transformer.layers.0.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00008.bin",
19
+ "transformer.layers.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00008.bin",
20
+ "transformer.layers.0.post_attention_layernorm.bias": "pytorch_model-00001-of-00008.bin",
21
+ "transformer.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
22
+ "transformer.layers.1.attention.dense.bias": "pytorch_model-00001-of-00008.bin",
23
+ "transformer.layers.1.attention.dense.weight": "pytorch_model-00001-of-00008.bin",
24
+ "transformer.layers.1.attention.query_key_value.bias": "pytorch_model-00001-of-00008.bin",
25
+ "transformer.layers.1.attention.query_key_value.weight": "pytorch_model-00001-of-00008.bin",
26
+ "transformer.layers.1.attention.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
27
+ "transformer.layers.1.input_layernorm.bias": "pytorch_model-00001-of-00008.bin",
28
+ "transformer.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
29
+ "transformer.layers.1.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
30
+ "transformer.layers.1.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
31
+ "transformer.layers.1.mlp.dense_h_to_4h.bias": "pytorch_model-00001-of-00008.bin",
32
+ "transformer.layers.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00008.bin",
33
+ "transformer.layers.1.post_attention_layernorm.bias": "pytorch_model-00001-of-00008.bin",
34
+ "transformer.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
35
+ "transformer.layers.10.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
36
+ "transformer.layers.10.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
37
+ "transformer.layers.10.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
38
+ "transformer.layers.10.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
39
+ "transformer.layers.10.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
40
+ "transformer.layers.10.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
41
+ "transformer.layers.10.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
42
+ "transformer.layers.10.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
43
+ "transformer.layers.10.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
44
+ "transformer.layers.10.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
45
+ "transformer.layers.10.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
46
+ "transformer.layers.10.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
47
+ "transformer.layers.10.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
48
+ "transformer.layers.11.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
49
+ "transformer.layers.11.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
50
+ "transformer.layers.11.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
51
+ "transformer.layers.11.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
52
+ "transformer.layers.11.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
53
+ "transformer.layers.11.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
54
+ "transformer.layers.11.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
55
+ "transformer.layers.11.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
56
+ "transformer.layers.11.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
57
+ "transformer.layers.11.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
58
+ "transformer.layers.11.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
59
+ "transformer.layers.11.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
60
+ "transformer.layers.11.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
61
+ "transformer.layers.12.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
62
+ "transformer.layers.12.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
63
+ "transformer.layers.12.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
64
+ "transformer.layers.12.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
65
+ "transformer.layers.12.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
66
+ "transformer.layers.12.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
67
+ "transformer.layers.12.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
68
+ "transformer.layers.12.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
69
+ "transformer.layers.12.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
70
+ "transformer.layers.12.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
71
+ "transformer.layers.12.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
72
+ "transformer.layers.12.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
73
+ "transformer.layers.12.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
74
+ "transformer.layers.13.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
75
+ "transformer.layers.13.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
76
+ "transformer.layers.13.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
77
+ "transformer.layers.13.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
78
+ "transformer.layers.13.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
79
+ "transformer.layers.13.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
80
+ "transformer.layers.13.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
81
+ "transformer.layers.13.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
82
+ "transformer.layers.13.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
83
+ "transformer.layers.13.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
84
+ "transformer.layers.13.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
85
+ "transformer.layers.13.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
86
+ "transformer.layers.13.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
87
+ "transformer.layers.14.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
88
+ "transformer.layers.14.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
89
+ "transformer.layers.14.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
90
+ "transformer.layers.14.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
91
+ "transformer.layers.14.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
92
+ "transformer.layers.14.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
93
+ "transformer.layers.14.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
94
+ "transformer.layers.14.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
95
+ "transformer.layers.14.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
96
+ "transformer.layers.14.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
97
+ "transformer.layers.14.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
98
+ "transformer.layers.14.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
99
+ "transformer.layers.14.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
100
+ "transformer.layers.15.attention.dense.bias": "pytorch_model-00004-of-00008.bin",
101
+ "transformer.layers.15.attention.dense.weight": "pytorch_model-00004-of-00008.bin",
102
+ "transformer.layers.15.attention.query_key_value.bias": "pytorch_model-00004-of-00008.bin",
103
+ "transformer.layers.15.attention.query_key_value.weight": "pytorch_model-00004-of-00008.bin",
104
+ "transformer.layers.15.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
105
+ "transformer.layers.15.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
106
+ "transformer.layers.15.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
107
+ "transformer.layers.15.mlp.dense_4h_to_h.bias": "pytorch_model-00004-of-00008.bin",
108
+ "transformer.layers.15.mlp.dense_4h_to_h.weight": "pytorch_model-00004-of-00008.bin",
109
+ "transformer.layers.15.mlp.dense_h_to_4h.bias": "pytorch_model-00004-of-00008.bin",
110
+ "transformer.layers.15.mlp.dense_h_to_4h.weight": "pytorch_model-00004-of-00008.bin",
111
+ "transformer.layers.15.post_attention_layernorm.bias": "pytorch_model-00004-of-00008.bin",
112
+ "transformer.layers.15.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
113
+ "transformer.layers.16.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
114
+ "transformer.layers.16.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
115
+ "transformer.layers.16.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
116
+ "transformer.layers.16.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
117
+ "transformer.layers.16.attention.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
118
+ "transformer.layers.16.input_layernorm.bias": "pytorch_model-00004-of-00008.bin",
119
+ "transformer.layers.16.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
120
+ "transformer.layers.16.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
121
+ "transformer.layers.16.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
122
+ "transformer.layers.16.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
123
+ "transformer.layers.16.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
124
+ "transformer.layers.16.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
125
+ "transformer.layers.16.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
126
+ "transformer.layers.17.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
127
+ "transformer.layers.17.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
128
+ "transformer.layers.17.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
129
+ "transformer.layers.17.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
130
+ "transformer.layers.17.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
131
+ "transformer.layers.17.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
132
+ "transformer.layers.17.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
133
+ "transformer.layers.17.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
134
+ "transformer.layers.17.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
135
+ "transformer.layers.17.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
136
+ "transformer.layers.17.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
137
+ "transformer.layers.17.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
138
+ "transformer.layers.17.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
139
+ "transformer.layers.18.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
140
+ "transformer.layers.18.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
141
+ "transformer.layers.18.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
142
+ "transformer.layers.18.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
143
+ "transformer.layers.18.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
144
+ "transformer.layers.18.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
145
+ "transformer.layers.18.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
146
+ "transformer.layers.18.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
147
+ "transformer.layers.18.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
148
+ "transformer.layers.18.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
149
+ "transformer.layers.18.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
150
+ "transformer.layers.18.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
151
+ "transformer.layers.18.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
152
+ "transformer.layers.19.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
153
+ "transformer.layers.19.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
154
+ "transformer.layers.19.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
155
+ "transformer.layers.19.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
156
+ "transformer.layers.19.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
157
+ "transformer.layers.19.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
158
+ "transformer.layers.19.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
159
+ "transformer.layers.19.mlp.dense_4h_to_h.bias": "pytorch_model-00005-of-00008.bin",
160
+ "transformer.layers.19.mlp.dense_4h_to_h.weight": "pytorch_model-00005-of-00008.bin",
161
+ "transformer.layers.19.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
162
+ "transformer.layers.19.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
163
+ "transformer.layers.19.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
164
+ "transformer.layers.19.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
165
+ "transformer.layers.2.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
166
+ "transformer.layers.2.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
167
+ "transformer.layers.2.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
168
+ "transformer.layers.2.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
169
+ "transformer.layers.2.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
170
+ "transformer.layers.2.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
171
+ "transformer.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
172
+ "transformer.layers.2.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
173
+ "transformer.layers.2.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
174
+ "transformer.layers.2.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
175
+ "transformer.layers.2.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
176
+ "transformer.layers.2.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
177
+ "transformer.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
178
+ "transformer.layers.20.attention.dense.bias": "pytorch_model-00005-of-00008.bin",
179
+ "transformer.layers.20.attention.dense.weight": "pytorch_model-00005-of-00008.bin",
180
+ "transformer.layers.20.attention.query_key_value.bias": "pytorch_model-00005-of-00008.bin",
181
+ "transformer.layers.20.attention.query_key_value.weight": "pytorch_model-00005-of-00008.bin",
182
+ "transformer.layers.20.attention.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
183
+ "transformer.layers.20.input_layernorm.bias": "pytorch_model-00005-of-00008.bin",
184
+ "transformer.layers.20.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
185
+ "transformer.layers.20.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
186
+ "transformer.layers.20.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
187
+ "transformer.layers.20.mlp.dense_h_to_4h.bias": "pytorch_model-00005-of-00008.bin",
188
+ "transformer.layers.20.mlp.dense_h_to_4h.weight": "pytorch_model-00005-of-00008.bin",
189
+ "transformer.layers.20.post_attention_layernorm.bias": "pytorch_model-00005-of-00008.bin",
190
+ "transformer.layers.20.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
191
+ "transformer.layers.21.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
192
+ "transformer.layers.21.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
193
+ "transformer.layers.21.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
194
+ "transformer.layers.21.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
195
+ "transformer.layers.21.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
196
+ "transformer.layers.21.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
197
+ "transformer.layers.21.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
198
+ "transformer.layers.21.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
199
+ "transformer.layers.21.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
200
+ "transformer.layers.21.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
201
+ "transformer.layers.21.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
202
+ "transformer.layers.21.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
203
+ "transformer.layers.21.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
204
+ "transformer.layers.22.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
205
+ "transformer.layers.22.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
206
+ "transformer.layers.22.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
207
+ "transformer.layers.22.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
208
+ "transformer.layers.22.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
209
+ "transformer.layers.22.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
210
+ "transformer.layers.22.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
211
+ "transformer.layers.22.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
212
+ "transformer.layers.22.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
213
+ "transformer.layers.22.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
214
+ "transformer.layers.22.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
215
+ "transformer.layers.22.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
216
+ "transformer.layers.22.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
217
+ "transformer.layers.23.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
218
+ "transformer.layers.23.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
219
+ "transformer.layers.23.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
220
+ "transformer.layers.23.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
221
+ "transformer.layers.23.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
222
+ "transformer.layers.23.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
223
+ "transformer.layers.23.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
224
+ "transformer.layers.23.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
225
+ "transformer.layers.23.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
226
+ "transformer.layers.23.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
227
+ "transformer.layers.23.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
228
+ "transformer.layers.23.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
229
+ "transformer.layers.23.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
230
+ "transformer.layers.24.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
231
+ "transformer.layers.24.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
232
+ "transformer.layers.24.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
233
+ "transformer.layers.24.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
234
+ "transformer.layers.24.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
235
+ "transformer.layers.24.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
236
+ "transformer.layers.24.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
237
+ "transformer.layers.24.mlp.dense_4h_to_h.bias": "pytorch_model-00006-of-00008.bin",
238
+ "transformer.layers.24.mlp.dense_4h_to_h.weight": "pytorch_model-00006-of-00008.bin",
239
+ "transformer.layers.24.mlp.dense_h_to_4h.bias": "pytorch_model-00006-of-00008.bin",
240
+ "transformer.layers.24.mlp.dense_h_to_4h.weight": "pytorch_model-00006-of-00008.bin",
241
+ "transformer.layers.24.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
242
+ "transformer.layers.24.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
243
+ "transformer.layers.25.attention.dense.bias": "pytorch_model-00006-of-00008.bin",
244
+ "transformer.layers.25.attention.dense.weight": "pytorch_model-00006-of-00008.bin",
245
+ "transformer.layers.25.attention.query_key_value.bias": "pytorch_model-00006-of-00008.bin",
246
+ "transformer.layers.25.attention.query_key_value.weight": "pytorch_model-00006-of-00008.bin",
247
+ "transformer.layers.25.attention.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
248
+ "transformer.layers.25.input_layernorm.bias": "pytorch_model-00006-of-00008.bin",
249
+ "transformer.layers.25.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
250
+ "transformer.layers.25.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
251
+ "transformer.layers.25.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
252
+ "transformer.layers.25.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
253
+ "transformer.layers.25.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
254
+ "transformer.layers.25.post_attention_layernorm.bias": "pytorch_model-00006-of-00008.bin",
255
+ "transformer.layers.25.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
256
+ "transformer.layers.26.attention.dense.bias": "pytorch_model-00007-of-00008.bin",
257
+ "transformer.layers.26.attention.dense.weight": "pytorch_model-00007-of-00008.bin",
258
+ "transformer.layers.26.attention.query_key_value.bias": "pytorch_model-00007-of-00008.bin",
259
+ "transformer.layers.26.attention.query_key_value.weight": "pytorch_model-00007-of-00008.bin",
260
+ "transformer.layers.26.attention.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
261
+ "transformer.layers.26.input_layernorm.bias": "pytorch_model-00007-of-00008.bin",
262
+ "transformer.layers.26.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
263
+ "transformer.layers.26.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
264
+ "transformer.layers.26.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
265
+ "transformer.layers.26.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
266
+ "transformer.layers.26.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
267
+ "transformer.layers.26.post_attention_layernorm.bias": "pytorch_model-00007-of-00008.bin",
268
+ "transformer.layers.26.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
269
+ "transformer.layers.27.attention.dense.bias": "pytorch_model-00007-of-00008.bin",
270
+ "transformer.layers.27.attention.dense.weight": "pytorch_model-00007-of-00008.bin",
271
+ "transformer.layers.27.attention.query_key_value.bias": "pytorch_model-00007-of-00008.bin",
272
+ "transformer.layers.27.attention.query_key_value.weight": "pytorch_model-00007-of-00008.bin",
273
+ "transformer.layers.27.attention.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
274
+ "transformer.layers.27.input_layernorm.bias": "pytorch_model-00007-of-00008.bin",
275
+ "transformer.layers.27.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
276
+ "transformer.layers.27.mlp.dense_4h_to_h.bias": "pytorch_model-00007-of-00008.bin",
277
+ "transformer.layers.27.mlp.dense_4h_to_h.weight": "pytorch_model-00007-of-00008.bin",
278
+ "transformer.layers.27.mlp.dense_h_to_4h.bias": "pytorch_model-00007-of-00008.bin",
279
+ "transformer.layers.27.mlp.dense_h_to_4h.weight": "pytorch_model-00007-of-00008.bin",
280
+ "transformer.layers.27.post_attention_layernorm.bias": "pytorch_model-00007-of-00008.bin",
281
+ "transformer.layers.27.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
282
+ "transformer.layers.3.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
283
+ "transformer.layers.3.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
284
+ "transformer.layers.3.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
285
+ "transformer.layers.3.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
286
+ "transformer.layers.3.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
287
+ "transformer.layers.3.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
288
+ "transformer.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
289
+ "transformer.layers.3.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
290
+ "transformer.layers.3.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
291
+ "transformer.layers.3.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
292
+ "transformer.layers.3.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
293
+ "transformer.layers.3.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
294
+ "transformer.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
295
+ "transformer.layers.4.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
296
+ "transformer.layers.4.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
297
+ "transformer.layers.4.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
298
+ "transformer.layers.4.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
299
+ "transformer.layers.4.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
300
+ "transformer.layers.4.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
301
+ "transformer.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
302
+ "transformer.layers.4.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
303
+ "transformer.layers.4.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
304
+ "transformer.layers.4.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
305
+ "transformer.layers.4.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
306
+ "transformer.layers.4.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
307
+ "transformer.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
308
+ "transformer.layers.5.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
309
+ "transformer.layers.5.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
310
+ "transformer.layers.5.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
311
+ "transformer.layers.5.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
312
+ "transformer.layers.5.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
313
+ "transformer.layers.5.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
314
+ "transformer.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
315
+ "transformer.layers.5.mlp.dense_4h_to_h.bias": "pytorch_model-00002-of-00008.bin",
316
+ "transformer.layers.5.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00008.bin",
317
+ "transformer.layers.5.mlp.dense_h_to_4h.bias": "pytorch_model-00002-of-00008.bin",
318
+ "transformer.layers.5.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00008.bin",
319
+ "transformer.layers.5.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
320
+ "transformer.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
321
+ "transformer.layers.6.attention.dense.bias": "pytorch_model-00002-of-00008.bin",
322
+ "transformer.layers.6.attention.dense.weight": "pytorch_model-00002-of-00008.bin",
323
+ "transformer.layers.6.attention.query_key_value.bias": "pytorch_model-00002-of-00008.bin",
324
+ "transformer.layers.6.attention.query_key_value.weight": "pytorch_model-00002-of-00008.bin",
325
+ "transformer.layers.6.attention.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
326
+ "transformer.layers.6.input_layernorm.bias": "pytorch_model-00002-of-00008.bin",
327
+ "transformer.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
328
+ "transformer.layers.6.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
329
+ "transformer.layers.6.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
330
+ "transformer.layers.6.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
331
+ "transformer.layers.6.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
332
+ "transformer.layers.6.post_attention_layernorm.bias": "pytorch_model-00002-of-00008.bin",
333
+ "transformer.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
334
+ "transformer.layers.7.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
335
+ "transformer.layers.7.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
336
+ "transformer.layers.7.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
337
+ "transformer.layers.7.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
338
+ "transformer.layers.7.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
339
+ "transformer.layers.7.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
340
+ "transformer.layers.7.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
341
+ "transformer.layers.7.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
342
+ "transformer.layers.7.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
343
+ "transformer.layers.7.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
344
+ "transformer.layers.7.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
345
+ "transformer.layers.7.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
346
+ "transformer.layers.7.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
347
+ "transformer.layers.8.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
348
+ "transformer.layers.8.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
349
+ "transformer.layers.8.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
350
+ "transformer.layers.8.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
351
+ "transformer.layers.8.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
352
+ "transformer.layers.8.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
353
+ "transformer.layers.8.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
354
+ "transformer.layers.8.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
355
+ "transformer.layers.8.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
356
+ "transformer.layers.8.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
357
+ "transformer.layers.8.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
358
+ "transformer.layers.8.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
359
+ "transformer.layers.8.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
360
+ "transformer.layers.9.attention.dense.bias": "pytorch_model-00003-of-00008.bin",
361
+ "transformer.layers.9.attention.dense.weight": "pytorch_model-00003-of-00008.bin",
362
+ "transformer.layers.9.attention.query_key_value.bias": "pytorch_model-00003-of-00008.bin",
363
+ "transformer.layers.9.attention.query_key_value.weight": "pytorch_model-00003-of-00008.bin",
364
+ "transformer.layers.9.attention.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
365
+ "transformer.layers.9.input_layernorm.bias": "pytorch_model-00003-of-00008.bin",
366
+ "transformer.layers.9.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
367
+ "transformer.layers.9.mlp.dense_4h_to_h.bias": "pytorch_model-00003-of-00008.bin",
368
+ "transformer.layers.9.mlp.dense_4h_to_h.weight": "pytorch_model-00003-of-00008.bin",
369
+ "transformer.layers.9.mlp.dense_h_to_4h.bias": "pytorch_model-00003-of-00008.bin",
370
+ "transformer.layers.9.mlp.dense_h_to_4h.weight": "pytorch_model-00003-of-00008.bin",
371
+ "transformer.layers.9.post_attention_layernorm.bias": "pytorch_model-00003-of-00008.bin",
372
+ "transformer.layers.9.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
373
+ "transformer.word_embeddings.weight": "pytorch_model-00001-of-00008.bin"
374
+ }
375
+ }
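
`pytorch_model.bin.index.json` records the checkpoint metadata (`total_size`, about 13.7 GB of FP16 weights) plus a `weight_map` from every parameter name to the shard that stores it; `from_pretrained` uses this map to resolve the eight shards. A quick inspection sketch, assuming the file has been downloaded locally:

```python
import json
from collections import Counter

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])                 # 13744473856 bytes
shard_counts = Counter(index["weight_map"].values())   # tensors per shard
for shard, count in sorted(shard_counts.items()):
    print(shard, count)
```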
quantization.py ADDED
@@ -0,0 +1,201 @@
1
+ from torch.nn import Linear
2
+ from torch.nn.parameter import Parameter
3
+
4
+ import bz2
5
+ import torch
6
+ import base64
7
+ import ctypes
8
+ from transformers.utils import logging
9
+
10
+ from typing import List
11
+ from functools import partial
12
+
13
+ logger = logging.get_logger(__name__)
14
+
15
+ try:
16
+ from cpm_kernels.kernels.base import LazyKernelCModule, KernelFunction, round_up
17
+
18
+ class Kernel:
19
+ def __init__(self, code: bytes, function_names: List[str]):
20
+ self.code = code
21
+ self._function_names = function_names
22
+ self._cmodule = LazyKernelCModule(self.code)
23
+
24
+ for name in self._function_names:
25
+ setattr(self, name, KernelFunction(self._cmodule, name))
26
+
27
+ quantization_code = "$QlpoOTFBWSZTWU9yuJUAQHN//////////f/n/8/n///n//bt4dTidcVx8X3V9FV/92/v4B7/AD5FBQFAAAChSgKpFCFAFVSigUAAAEKhSgUUqgFBKigqVREQAABQBQIANDTTIGI00BkZBkNGE0A0BkBkGQGRkaNAaAGQNBoGgDIAAYIGTI0DQAQAaGmmQMRpoDIyDIaMJoBoDIDIMgMjI0aA0AMgaDQNAGQAAwQMmRoGgAgA0NNMgYjTQGRkGQ0YTQDQGQGQZAZGRo0BoAZA0GgaAMgABggZMjQNABABoaaZAxGmgMjIMhowmgGgMgMgyAyMjRoDQAyBoNA0AZAADBAyZGgaAAmqU1NEgJqnptU/Sn4jRR6J6epk2pqb1Q/SgAPUGgyNNGjQ2SBpoAZAAGg0NB6mgDIAAAAA2oaApSREBNAARhGiYEaEwU8pvImlP0k2aam1GaGqbFNM1MHpTwmkepmyU9R6nqPKekHqNNPUxNGhp6n6p6QaZ6o9TG1GMqcoV9ly6nRanHlq6zPNbnGZNi6HSug+2nPiZ13XcnFYZW+45W11CumhzYhchOJ2GLLV1OBjBjGf4TptOddTSOcVxhqYZMYwZXZZY00zI1paX5X9J+b+f4e+x43RXSxXPOdquiGpduatGyXneN696M9t4HU2eR5XX/kPhP261NTx3JO1Ow7LyuDmeo9a7d351T1ZxnvnrvYnrXv/hXxPCeuYx2XsNmO003eg9J3Z6U7b23meJ4ri01OdzTk9BNO96brz+qT5nuvvH3ds/G+m/JcG/F2XYuhXlvO+jP7U3XgrzPN/lr8Sf1n6j4j7jZs+s/T0tNaNNYzTs12rxjwztHlnire3Nzc3N1wuBwOBwXBvZfoHpD7rFmR99V5vj3aXza3xdBbXMalubTg/jIv5dfAi54Pdc75j4z412n3Npj3Ld/ENm7a3b/Cod6h/ret1/5vn/C+l+gdslMvgPSLJ8d8q+U66fevYn/tW1chleEtNTGlcHCbLRlq0tHzF5tsbbZZfHjjLgZu42XCuC3NrdjTasZGNzgxPIrGqp7r3p7L2p5XjnpPSmTd5XtzqnB6U87zzg1Ol0zd0zsLszxR6lkxp35u6/teL0L0W922cR7Lu1lpL9CsHirzuM2T+BgsyViT6LHcm0/Vr6U/7LGGyJeqTEjt0PHWhF5mCT7R9mtlDwriYv0Tyr/OxYt6qp5r0mPVT0608TqnqMZaarU2nFwrTzzlrs1ed7z1ux60wyr4ydCaTi3enW8x68x0zU7tXSlcmPSW1mGpWJMg4zmPC2lK96tp0OE80y4MfEvnZj8zGluR6b22ki1Ou9V2nCd9xovcPvcYMZYy0lvN60ScZ45vN6yeCeeXFb1lVjnnCar5fwXwE2bzJ4HI1XVPXfXZMm44GUsMpYsmLB65TuVdm0cl0b+i/wGNN66XjeV7zuPpHcnK/juhhjdfId5jMdE5nN0dGmmm2zZs2cexD5n9p/dY352XsvXHaZNWWsmmS1atjR452nYudzvqv2HMRyvNNnlMcDl3R2+yx2uVrBubTW9icHDVtbNXlZm7jma1rM4VurZZd2y6nUau7ZXZ7bVU+mnoOVxZGMrVmvX60605JwmzGZhhhjTWtaaaMaaGTGmNMZasY0iX8VMUl8eepaIrzGSpemWOQyZORk2bNpjUybMmxqYmknCGCFynutfksaZpjTNMaaatM0xsxcGR0sociNqxNSmhhR1ZJPbsn8qyF0t2qH6iYBclclalbtTTcHTDsPaX6rlnElph2Jyumumtynv2Kk8GI7rsvXbIcJgHJOSaSXnnGaI3m87RtVXJOZ/YtgdTE6Wpha6ZlE8ayXkef1fh602r2WwvfMXtMdLlkfnLFdYYwYso+bWqm7yJqHXZGw2nrS5ZanSYnWlxBxMF1V940K2wdrI7R6OYf7DGGamMmTSbRhlS45xmVOumF1EyPCmHrrN8wwZOOrdNtLeMtzFzDlWnfTBxMk2NaXIZHBYxYLD4w8yju0ao65Vz1OIXoS9dLanwCe1PWrYuWMqf1if1z2k2yYfKJ741PDgno1ZQ8DRqvUny3mNoWTzGO6m1DkrJI8JiR5cSd+vZdGOO8nrMoc5+NDUFsMSXaZJeNlMmGLtJsovOsUp7I9S5VojKxF6bTVEelXqlfJobQr3LozSh2Jk7VcrVMfhXqszGWMzNqGhqZY0OadxkyyMssKugZR0KNFXBHlqwmJgTE/BNVMk6ItJXZMR0H47GpXv/DMOvNkmVuaV1PRfEdxuqc7Hcd+ZV/zTLaRxWk0nl9CdCeM6mn5rstHIBcpiuwmUZXeq81DacHI2rmrZ5SuE5mOZd6LQrZg9mx32TprA8BMo5jKN6yLTCi3WzQaZSuhzTtM1fUTGVpG8Tw+KXI0tjEpiWxtLYynOlktSbVlaI5kxP8TDH8kx50xoxi5KcA4pcja8KWLRlO/Ks6q06ergnvm1ca3Tq8Uw7LTUsmWyctXPWmpitl/uvGcWTGXGuAXDfhqazGmjkxcJW5hMMMMpYsXl2TZYtVOddG3XCarUt6Ptq9CZXSNzyuRzqRZOjsxdBbFVz6OA5HI43r1jityVlVpVkxmOsyaYWE1NTGq1sOVh36mHMcxtSvcy70edG0ZGR3I1Go1GRlV7mWWo1G0ZGRqlvH40l7o4m5xMWLLLYyNjnqc8556mdPqLJ31n/1nWOncxzG1tizrHs/Z+d2vP/B/l8wdJ6rHUn2nbbDq4p6htFtYzMMMTaZis1K5GKzGNmxhmUx2DDlZ/qNnIx41xnaMfCZWYaZWtNLTNW8ND4Fw1MyZOCdM428suKG1ehW8TesOydg7J+YYcD4cYR+8dFK6M4E3HM9ZfRNNL+Sn6rsl4DsrDl2HpPCnfxjGXtbZtYys1ttlyJ4T+BvexjGWRjMszK4Jpc77D3GyuVD7q0+G8m9G+2+rGm7cOR2y7FdtY2XUYx/oNlfRYxhMYyYZkyyg55enna9Kt/FFi6GMMwYwdwxWgxGMLKYmUyGExTKMZkMFhkymKuh0NOBNnBu+23LdwDoZYYzGGMxtORaTU1pjTGWTTGGtMrNWUsyyTTLLG1qy2ZjbK2DBllWqxMtBMaYZQmcE7zvvRcTkclUwdkxTaSdyySt/7fpL+T1v516Ji97fwr5JbLu305zMn5+GMTTZ9F+y7ExwmGVfG44yxn3dLv6l5i+Wth1jCrDq21nW9LqvvDzz3Vf3LLH/O/32TJ/erx3bXftO4eF+G956D952K/An4NfvOpjFjExjevP/UmE0fIoZXx6/w6lX/no3D0bLt+ixjieBM6ksRd0yB4Lt2SwYNE+gd1detlZWUnpiZfGfFaK+4PyCa/v18V8X75pe9fLXzp7l3VjF76vWZmHwGz1IZNWT7b8yddJ4q5kyrVdfru6atWc7bVYztL9Jf4GXvT+Y8m9/YsXP6H018a8D4XVOqvfzqeR+6yZOD8dPv0+U7/q5Pl+2dNb0MjzGVH5p6MNQ7cOWvw62U9aHE8DprDek+McLyvDz+t
e+9Zhq5+YTruufMcWMabqysTmZVWjKPfnK0wyVcrsuhjZRdLkHNvD72b9abriOSGIxiLixMOoalNPXzy+wT/tf+U6HHONfsz+xe8ufHBdQWWGWLA9if0rsnmrxK5LvRZQeWsTCsrmOYy8VteVfuRfcVTtDLItLIsMYxZLdU/DbtSemxF6Z6Zo5WBXE4tFdCyVMMXMTEMZXVlS6Xec2T4e0tHsRcEuWshcJ2YsNF5rUx1E8ifCq6Z+ZP7qdCeu/aTwFd53l16/o0NOw6O3dLavP4Hbi4RdmuDk6DoYaninC0+o4uZjbJ7Rxeu0/FbuFg+q7DVS6fQe0rZ6NDGUNNU6DEqOaLTicKnYZMnBWruljQxoaS3dZhocDge0bSTyOvdAbG5hxe2xji7E/L55xX13wWNDi6HCekcFxfCPGxY0MXC+s7afWaMdDyjyr+o8Rudm/NabOZvdl274zH4f5XK9z6On1Pe/K5TdPAslg77BjuO6Y3eO7GqvOPG/stknp1leyvLL0Z7bl9I4noMvLkzytLhWYzrOZzLXCORe028rORzOg4N/L0HlMOQ3Pgmnbb6KczlabORpu980q37TBqRu0/p3PO6234Bl03Ynuz+9W7gnsEcmvYaYY3aMYY0wx3pYd+ujsXauWdaY5Xkbtl23fPzFHiDB/QMo0yFjBllYxTQYYyxkrwn7JufwJ/PfgJ+C83X69ni6zvXcnyXabv0ncbLwsceS+RNlyN2mnneJtX0ngYO0+e+0+UnA+Wch3ji8hj5an4h+i6XBySU4n+R0roVcbw5yvHrmr4Yw8Y7x6c+9POPYHI5HI5HI5HI5HGXGww4nE4nrVyOR8XeqPEO7PLOiukYa3Novk5hV4cdtYZLI93e+uxff2jRo0aNGjRo0aNG1bVtW1dy3m83m8+tQ5ZzHw3nObwOu8La9Rc1dtkdS8A3eTk823tnktXWlxN6Oixe06zrN70Isd9jiOgZFq9yfkPqP/SLhN2Myl8jDM43bl1nbcb4cO57jlh8Jow6pzXZdL4dyODTuuhu77FyO27DdwdRxmvO+O+3N2+BdqyTwLHVczDVY4UPE4O66/ZO2cx1LFzVdSXtF7G4HMbrauOHRw6c8FdZ5m9fHZHYZXfTlZquyynSyTTKke6vcffSD9pzPA/G7n7jxPmuhc1DHMynPMrGL6AdewYmwu5ko+UUyTwrMv27rPH1v1nGqd87+p6N6LU8k3NEng53xXyHS97+44OSg/sy/hn+Se6yfYNjW0/uTgP+PvWYzLMmjhcLB/gGpri6H83/84eUXWT6T9Hsv7785z/7z4icpW+zfXypuR7rx/gMdZb1/wC678pcs8/2a3mDitGHxl9mfPlll5MafWWqxk/eYuTDgcNMzDGWLWvsuglNxs53GtN6uWpktlW1tZZYcuinMMWmnNnJydze3b2Y1McBxrBkXw799izLMZZYyy0TkbsGM4p03S2uVu5s/XXUdSdec6smVxZYYGpVmT8A+8ajuEyV5FatkvVru2x6uxGXXbH4A+jvgP4GMYy3iPLXzq/6z65+E005ey+cwMZD3fZcqc6xpjTFjQ0P3U+e++cPYmTIwj0nrK5NPTfl3WvpfLtXDcb2HQMudYOxFXQBor4L4T6vrOauFctYXJQ++NUWmJe5bmx1jDiZS1dTqWxo4GR8jm3fttpmPHppk9PEyv4/y8/sO07XacOmcqc0x2Vi9BvNJvN5oW8x4mOsydpidRxMYJPx06m1bqPzq9KtK8sxXNXFodD/+MYYaJTLwOhc9brCsV18oOR1i4tXChyTkq4lf4y1Ke+9axjDHqs1mfBbMXuP4Hzi+X7t8vzv7bHerrUPgPCxhjre4fXdfLNtNM+Jd+Zdh8xd8wP87uNPoPgv4W7/5P2BuxfsMabNnMnza+54Pdi5U671GPZY8CehX8Voeoo7FHpkeEc6715FwHZrIrUrHaviPUbPZHND+IhczrP6FcYvhOZ0Di/ETt0OI+YwNWR9r7tpf6WDeZKZDB1+z2IthOl1mPyb5FluvEx9h9d0NnM0Y1XPFkWIsk1WotJ0PBMmkvjvQTd0e71tfeV+8r8lQ/tpzpsmxJ+InrI/dj2UajUajVTUajatRqNRtGo1Go1Go4wjeMpZFMVV9CHbofPraLsJ3JpWV2XOoanCuFky4y3PPNxucK2uKC1Lbdb1eo+m5XomN6HfeZsabHLHRX/K+offtNGGmHWctcVcG44MdSqsOLY9VzX+Zxfxn2HPdWTpzWvkrtJ8M5zorrKcquRytJ5N5DZmcaW02l76nWO+BqPXm1A2Ry/0q71dH/mqrqeFjkYxjEXtsX8qubTk67rGycyqsdm4tZx5D6D5hhi0waaWmiaMP81Yjii5qxPlPuU/GfTL1Y5E6Jyfiq63qTa39A4J0sOGDgO9WF9bOXl0XfPRbsY2bPNKPy1YrFYrFYmRhhlTIyMjJWJYZHXuCXI8OoXsvfljGLFicNifpp2XunoPiG1wtx3p1Tah+/DD66OnVtVXP9rKbVxOnL0tR/rHtqB5UDErUVcl11D4qqvjpOcxX7armUNJB3LpW6bxVvD08e8h3odKKvyCFZBdSh2FVcST9xV3n3T8t1j7Kr9qgrqXg+13Pt5U7JCvFXVIV1YG5lRhkVYZJYYDDD4KOIMoHCp26WS8GB7uBh2zIdgq/PKyInjV2STShuoapUdCpX1yTwqq/z1VvET7Kh5nVPkO8YyxjLt2MaaMmWTLQvx3qnzltnXW0p2jxgbEtSny/Osv8Y9pLMXYoHVPAhkVdWVeODhR6q9/Sxe2liwwZWMVvFXfRkeIDxAePUPIrdJ4ey6yquzH+PD/bUOWAu05qVHtFd8rrKHSoeNIOUqrYr3FXyToqfYJgwmJdKpXXOwYYegNNGMzfZPp/t3t/DVs4zjNTN61rRqaWaa4NYbRjTa0tWwy2Y2tGN8ZO8ofNKq4j9SL7I+cSm4/6ovLV5HNXLI0jJidwrtk6ynCaP6Z++GjRlWS3tLeW129Mi9evxU9mtz6s5J3Z7M2ngTgnKvmpomxpaLCzPfmx0JWE+m3NLDDGOX47RctdYYNK5jakdqLkRlI39n590T5zctGSwwZZDJj6kW8XSi6ot2MmWWJ0DUT3nuvebBudScjZ79g8cWJ8av0k+/bE5WKd5MdbFpbDVMxu1DVMmtNZGJvq1mtRbn6M+g/kP0FwDwr7quZs7xosNGpbscyxhhd9TyJyFwbLcxlTasg75vW7TsV5K7ji44XPMMrdoj+Y3rT0Hie62nlYV/pwczzOmdLqLhYkzGMzCZWGMQzGMSsZYY6Di1t4nlJ+Em63mJxrVLxPbYxNEdgc1dU2iOKyoYYWjNrEeHTYybVk0atSa7ehuwsWMWTqn1TrnS6hYsi71d1+s+k+ic70e20fzE/VaTdxT9ZtU4GIXdeNx3X77guYYfpHeTQjaMX6brOu4OY4K7Y2d9mbHarI5ox3p4GpJ2Vd/Tst60f7j999pppjR+Q/Qf8J/VaORs3cji7FfFuN61+ui9s8hix1OCh5KGVV23BPXvZfz3CLyH
pix+exi8z/KnCnosY2eunor+cxyPO/xJ0vKey9OvE9VjqaYu0x3Z3jd6o2b1T12D+F8l232lwaaacD5LE8LBxu7WTlbWraWpew8Xexjel3E+wWD4APITdNqR8F3R3T0lunCQ4GaE9R37DxeCYfcHi4xci5ovKfxVs55y2hf+65E/Xdp6jR5nrebTmi5incpkyOjs50JvrZwstbbW6kfuuQw+2mykf/EXNFzxfKTrxew929TR6bWnGL//F3JFOFCQT3K4lQ"
28
+
29
+ kernels = Kernel(
30
+ bz2.decompress(base64.b64decode(quantization_code)),
31
+ [
32
+ "int4WeightCompression",
33
+ "int4WeightExtractionFloat",
34
+ "int4WeightExtractionHalf",
35
+ "int8WeightExtractionFloat",
36
+ "int8WeightExtractionHalf",
37
+ ],
38
+ )
39
+ except Exception as exception:
40
+ kernels = None
41
+ logger.warning("Failed to load cpm_kernels:" + str(exception))
42
+
43
+
44
+ class W8A16Linear(torch.autograd.Function):
45
+ @staticmethod
46
+ def forward(ctx, inp: torch.Tensor, quant_w: torch.Tensor, scale_w: torch.Tensor, weight_bit_width):
47
+ ctx.inp_shape = inp.size()
48
+ ctx.weight_bit_width = weight_bit_width
49
+ out_features = quant_w.size(0)
50
+ inp = inp.contiguous().view(-1, inp.size(-1))
51
+ weight = extract_weight_to_half(quant_w, scale_w, weight_bit_width)
52
+ ctx.weight_shape = weight.size()
53
+ output = inp.mm(weight.t())
54
+ ctx.save_for_backward(inp, quant_w, scale_w)
55
+ return output.view(*(ctx.inp_shape[:-1] + (out_features,)))
56
+
57
+ @staticmethod
58
+ def backward(ctx, grad_output: torch.Tensor):
59
+ inp, quant_w, scale_w = ctx.saved_tensors
60
+ weight = extract_weight_to_half(quant_w, scale_w, ctx.weight_bit_width)
61
+ grad_output = grad_output.contiguous().view(-1, weight.size(0))
62
+ grad_input = grad_output.mm(weight)
63
+ grad_weight = grad_output.t().mm(inp)
64
+ return grad_input.view(ctx.inp_shape), grad_weight.view(ctx.weight_shape), None, None
65
+
66
+
67
+ def compress_int4_weight(weight: torch.Tensor): # (n, m)
68
+ with torch.cuda.device(weight.device):
69
+ n, m = weight.size(0), weight.size(1)
70
+ assert m % 2 == 0
71
+ m = m // 2
72
+ out = torch.empty(n, m, dtype=torch.int8, device="cuda")
73
+ stream = torch.cuda.current_stream()
74
+
75
+ gridDim = (n, 1, 1)
76
+ blockDim = (min(round_up(m, 32), 1024), 1, 1)
77
+
78
+ kernels.int4WeightCompression(
79
+ gridDim,
80
+ blockDim,
81
+ 0,
82
+ stream,
83
+ [ctypes.c_void_p(weight.data_ptr()), ctypes.c_void_p(out.data_ptr()), ctypes.c_int32(n), ctypes.c_int32(m)],
84
+ )
85
+ return out
86
+
87
+
88
+ def extract_weight_to_half(weight: torch.Tensor, scale_list: torch.Tensor, source_bit_width: int):
89
+ if source_bit_width == 8:
90
+ func = kernels.int8WeightExtractionHalf
91
+ elif source_bit_width == 4:
92
+ func = kernels.int4WeightExtractionHalf
93
+ else:
94
+ assert False, "Unsupported bit-width"
95
+
96
+ with torch.cuda.device(weight.device):
97
+ n, m = weight.size(0), weight.size(1)
98
+ out = torch.empty(n, m * (8 // source_bit_width), dtype=torch.half, device="cuda")
99
+ stream = torch.cuda.current_stream()
100
+
101
+ gridDim = (n, 1, 1)
102
+ blockDim = (min(round_up(m, 32), 1024), 1, 1)
103
+
104
+ func(
105
+ gridDim,
106
+ blockDim,
107
+ 0,
108
+ stream,
109
+ [
110
+ ctypes.c_void_p(weight.data_ptr()),
111
+ ctypes.c_void_p(scale_list.data_ptr()),
112
+ ctypes.c_void_p(out.data_ptr()),
113
+ ctypes.c_int32(n),
114
+ ctypes.c_int32(m),
115
+ ],
116
+ )
117
+ return out
118
+
119
+
120
+ class QuantizedLinear(Linear):
121
+ def __init__(self, weight_bit_width: int, weight_tensor=None, bias_tensor=None, empty_init=False, *args, **kwargs):
122
+ super(QuantizedLinear, self).__init__(*args, **kwargs)
123
+ self.weight_bit_width = weight_bit_width
124
+
125
+ shape = self.weight.shape
126
+ del self.weight
127
+
128
+ if weight_tensor is None or empty_init:
129
+ self.weight = torch.empty(
130
+ shape[0], shape[1] * weight_bit_width // 8, dtype=torch.int8, device=kwargs["device"]
131
+ )
132
+ self.weight_scale = torch.empty(shape[0], dtype=kwargs["dtype"], device=kwargs["device"])
133
+ else:
134
+ self.weight_scale = (weight_tensor.abs().max(dim=-1).values / ((2 ** (weight_bit_width - 1)) - 1)).half()
135
+ self.weight = torch.round(weight_tensor / self.weight_scale[:, None]).to(torch.int8)
136
+ if weight_bit_width == 4:
137
+ self.weight = compress_int4_weight(self.weight)
138
+
139
+ self.weight = Parameter(self.weight.to(kwargs["device"]), requires_grad=False)
140
+ self.weight_scale = Parameter(self.weight_scale.to(kwargs["device"]), requires_grad=False)
141
+ if bias_tensor is not None:
142
+ self.bias = Parameter(bias_tensor.to(kwargs["device"]), requires_grad=False)
143
+ else:
144
+ self.bias = None
145
+
146
+ def forward(self, input):
147
+ output = W8A16Linear.apply(input, self.weight, self.weight_scale, self.weight_bit_width)
148
+ if self.bias is not None:
149
+ output = output + self.bias
150
+ return output
151
+
152
+
153
+ def quantize(model, weight_bit_width, empty_init=False, **kwargs):
154
+ """Replace fp16 linear with quantized linear"""
155
+
156
+ for layer in model.layers:
157
+ layer.attention.query_key_value = QuantizedLinear(
158
+ weight_bit_width=weight_bit_width,
159
+ weight_tensor=layer.attention.query_key_value.weight.to(torch.cuda.current_device()),
160
+ bias_tensor=layer.attention.query_key_value.bias,
161
+ in_features=layer.attention.query_key_value.in_features,
162
+ out_features=layer.attention.query_key_value.out_features,
163
+ bias=True,
164
+ dtype=torch.half,
165
+ device=layer.attention.query_key_value.weight.device,
166
+ empty_init=empty_init
167
+ )
168
+ layer.attention.dense = QuantizedLinear(
169
+ weight_bit_width=weight_bit_width,
170
+ weight_tensor=layer.attention.dense.weight.to(torch.cuda.current_device()),
171
+ bias_tensor=layer.attention.dense.bias,
172
+ in_features=layer.attention.dense.in_features,
173
+ out_features=layer.attention.dense.out_features,
174
+ bias=True,
175
+ dtype=torch.half,
176
+ device=layer.attention.dense.weight.device,
177
+ empty_init=empty_init
178
+ )
179
+ layer.mlp.dense_h_to_4h = QuantizedLinear(
180
+ weight_bit_width=weight_bit_width,
181
+ weight_tensor=layer.mlp.dense_h_to_4h.weight.to(torch.cuda.current_device()),
182
+ bias_tensor=layer.mlp.dense_h_to_4h.bias,
183
+ in_features=layer.mlp.dense_h_to_4h.in_features,
184
+ out_features=layer.mlp.dense_h_to_4h.out_features,
185
+ bias=True,
186
+ dtype=torch.half,
187
+ device=layer.mlp.dense_h_to_4h.weight.device,
188
+ empty_init=empty_init
189
+ )
190
+ layer.mlp.dense_4h_to_h = QuantizedLinear(
191
+ weight_bit_width=weight_bit_width,
192
+ weight_tensor=layer.mlp.dense_4h_to_h.weight.to(torch.cuda.current_device()),
193
+ bias_tensor=layer.mlp.dense_4h_to_h.bias,
194
+ in_features=layer.mlp.dense_4h_to_h.in_features,
195
+ out_features=layer.mlp.dense_4h_to_h.out_features,
196
+ bias=True,
197
+ dtype=torch.half,
198
+ device=layer.mlp.dense_4h_to_h.weight.device,
199
+ empty_init=empty_init
200
+ )
201
+ return model
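The `quantize` helper above rewires each transformer layer in place: every attention and MLP projection becomes a `QuantizedLinear` that stores int8 (or packed int4) weights plus a per-row fp16 scale, and `W8A16Linear` dequantizes them on the fly in both forward and backward. Below is a minimal usage sketch, not part of this commit; it assumes the file is saved as `quantization.py`, that a CUDA device and `cpm_kernels` are available, and that `model` is a ChatGLM model already loaded in fp16 (as in the README) whose `transformer` exposes the `.layers` list that `quantize()` iterates over.

```python
# Minimal sketch (assumptions listed above; names outside this file are illustrative).
from quantization import quantize

# Replace the fp16 projections with QuantizedLinear modules. Each weight row is
# scaled by max(|w|) / (2**(bits - 1) - 1) and rounded to int8; with 4-bit
# weights, compress_int4_weight() then packs two values into each byte.
model.transformer = quantize(model.transformer, weight_bit_width=4)

# From here on the quantized modules are used transparently: QuantizedLinear.forward
# calls W8A16Linear.apply, which restores fp16 weights via extract_weight_to_half()
# before the matmul, so generation code does not need to change.
first_layer = model.transformer.layers[0]
print(first_layer.attention.dense.weight.dtype)  # torch.int8 after quantization
```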
tokenization_chatglm.py ADDED
@@ -0,0 +1,385 @@
1
+ """Tokenization classes for ChatGLM."""
2
+ import sys
3
+ import unicodedata
4
+ from typing import List, Optional, Union
5
+ from functools import lru_cache
6
+ import os
7
+ import collections
8
+ import re
9
+
10
+ from transformers.tokenization_utils import PreTrainedTokenizer
11
+ from icetk.text_tokenizer import TextTokenizer
12
+ from icetk.utils import auto_create
13
+ import icetk.sentencepiece_model_pb2 as sp_model
14
+ from transformers.utils import logging
15
+
16
+ logger = logging.get_logger(__name__)
17
+
18
+ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
19
+ "THUDM/chatglm-6b": 2048,
20
+ }
21
+
22
+
23
+ class SPTokenizer:
24
+ def __init__(
25
+ self,
26
+ vocab_file,
27
+ max_blank_length=80,
28
+ byte_fallback=True,
29
+ ):
30
+ assert vocab_file is not None
31
+ self.vocab_file = vocab_file
32
+ self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "<sop>", "<eop>", "<ENC>", "<dBLOCK>"]
33
+ self.max_blank_length = max_blank_length
34
+ self.byte_fallback = byte_fallback
35
+ self.text_tokenizer = self._build_text_tokenizer(encode_special_tokens=False)
36
+ self.special_text_tokenizer = self._build_text_tokenizer(encode_special_tokens=True)
37
+
38
+ @staticmethod
39
+ def _configure_tokenizer(
40
+ text_tokenizer: TextTokenizer,
41
+ special_tokens: List[str],
42
+ max_blank_length: int,
43
+ byte_fallback: bool,
44
+ encode_special_tokens=False,
45
+ ):
46
+ # special token
47
+ special_token_type = 4 if encode_special_tokens else 3 # 3 - CONTROL, 4 - USER_DEFINE
48
+ for token in special_tokens:
49
+ text_tokenizer.proto.pieces.append(
50
+ sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=special_token_type)
51
+ )
52
+ # whitespaces
53
+ for token in [SPTokenizer.get_tab_token()] + [
54
+ SPTokenizer.get_blank_token(i) for i in range(2, max_blank_length + 1)
55
+ ]:
56
+ text_tokenizer.proto.pieces.append(sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=4))
57
+ # byte fallback
58
+ if byte_fallback:
59
+ text_tokenizer.proto.trainer_spec.byte_fallback = True
60
+ for i in range(256):
61
+ text_tokenizer.proto.pieces.append(
62
+ sp_model.ModelProto.SentencePiece(piece="<0x{:02X}>".format(i), score=0.0, type=6)
63
+ )
64
+ text_tokenizer.refresh()
65
+
66
+ def _build_text_tokenizer(self, encode_special_tokens=False):
67
+ tokenizer = TextTokenizer(self.vocab_file)
68
+ self._configure_tokenizer(
69
+ tokenizer, self.special_tokens, self.max_blank_length, self.byte_fallback, encode_special_tokens
70
+ )
71
+ return tokenizer
72
+
73
+ def _get_text_tokenizer(self, encode_special_tokens=False):
74
+ if encode_special_tokens:
75
+ return self.special_text_tokenizer
76
+ else:
77
+ return self.text_tokenizer
78
+
79
+ @staticmethod
80
+ def get_blank_token(length: int):
81
+ assert length >= 2
82
+ return f"<|blank_{length}|>"
83
+
84
+ @staticmethod
85
+ def get_tab_token():
86
+ return f"<|tab|>"
87
+
88
+ @property
89
+ def num_image_tokens(self):
90
+ return 20000
91
+
92
+ @property
93
+ def num_text_tokens(self):
94
+ return self.text_tokenizer.num_tokens
95
+
96
+ @property
97
+ def num_tokens(self):
98
+ return self.num_image_tokens + self.num_text_tokens
99
+
100
+ @staticmethod
101
+ def _encode_whitespaces(text: str, max_len: int = 80):
102
+ text = text.replace("\t", SPTokenizer.get_tab_token())
103
+ for i in range(max_len, 1, -1):
104
+ text = text.replace(" " * i, SPTokenizer.get_blank_token(i))
105
+ return text
106
+
107
+ def _preprocess(self, text: str, linebreak=True, whitespaces=True):
108
+ if linebreak:
109
+ text = text.replace("\n", "<n>")
110
+ if whitespaces:
111
+ text = self._encode_whitespaces(text, max_len=self.max_blank_length)
112
+ return text
113
+
114
+ def encode(
115
+ self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
116
+ ) -> List[int]:
117
+ """
118
+ @param text: Text to encode.
119
+ @param linebreak: Whether to encode newline (\n) in text.
120
+ @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
121
+ @param special_tokens: Whether to encode special token ([MASK], [gMASK], etc.) in text.
122
+ @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
123
+ """
124
+ text = self._preprocess(text, linebreak, whitespaces)
125
+ if not add_dummy_prefix:
126
+ text = "<n>" + text
127
+ tmp = self._get_text_tokenizer(encode_special_tokens=special_tokens).encode(text)
128
+ tokens = [x + self.num_image_tokens for x in tmp]
129
+ return tokens if add_dummy_prefix else tokens[2:]
130
+
131
+ def decode(self, text_ids: List[int], special_tokens=False) -> str:
132
+ ids = [int(_id) - self.num_image_tokens for _id in text_ids]
133
+ ids = [_id for _id in ids if _id >= 0]
134
+ text = self._get_text_tokenizer(encode_special_tokens=special_tokens).decode(ids)
135
+ text = text.replace("<n>", "\n")
136
+ text = text.replace(SPTokenizer.get_tab_token(), "\t")
137
+ for i in range(2, self.max_blank_length + 1):
138
+ text = text.replace(self.get_blank_token(i), " " * i)
139
+ return text
140
+
141
+ def tokenize(
142
+ self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
143
+ ) -> List[str]:
144
+ """
145
+ @param text: Text to encode.
146
+ @param linebreak: Whether to encode newline (\n) in text.
147
+ @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
148
+ @param special_tokens: Whether to encode special token ([MASK], [gMASK], etc.) in text.
149
+ @param add_dummy_prefix: Whether to add a dummy blank space at the beginning.
150
+ """
151
+ text = self._preprocess(text, linebreak, whitespaces)
152
+ if not add_dummy_prefix:
153
+ text = "<n>" + text
154
+ tokens = self._get_text_tokenizer(encode_special_tokens=special_tokens).tokenize(text)
155
+ return tokens if add_dummy_prefix else tokens[2:]
156
+
157
+ def __getitem__(self, x: Union[int, str]):
158
+ if isinstance(x, int):
159
+ if x < self.num_image_tokens:
160
+ return "<image_{}>".format(x)
161
+ else:
162
+ return self.text_tokenizer.convert_id_to_token(x - self.num_image_tokens)
163
+ elif isinstance(x, str):
164
+ if x.startswith("<image_") and x.endswith(">") and x[7:-1].isdigit():
165
+ return int(x[7:-1])
166
+ else:
167
+ return self.text_tokenizer.convert_token_to_id(x) + self.num_image_tokens
168
+ else:
169
+ raise ValueError("The key should be str or int.")
170
+
171
+
172
+ class ChatGLMTokenizer(PreTrainedTokenizer):
173
+ """
174
+ Construct a ChatGLM tokenizer. Based on the icetk SentencePiece vocabulary (`ice_text.model`).
175
+
176
+ Args:
177
+ vocab_file (`str`):
178
+ Path to the vocabulary file.
179
+ """
180
+
181
+ vocab_files_names = {"vocab_file": "ice_text.model"}
182
+ max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
183
+ model_input_names = ["input_ids"]
184
+
185
+ def __init__(
186
+ self,
187
+ vocab_file,
188
+ do_lower_case=False,
189
+ remove_space=False,
190
+ bos_token='sop',
191
+ eos_token='eos',
192
+ eop_token='eop',
193
+ mask_token='[MASK]',
194
+ gmask_token='[gMASK]',
195
+ padding_side="right",
196
+ **kwargs
197
+ ) -> None:
198
+ super().__init__(
199
+ do_lower_case=do_lower_case,
200
+ remove_space=remove_space,
201
+ padding_side=padding_side,
202
+ **kwargs
203
+ )
204
+
205
+ self.do_lower_case = do_lower_case
206
+ self.remove_space = remove_space
207
+ self.vocab_file = vocab_file
208
+
209
+ self.bos_token = bos_token
210
+ self.eos_token = eos_token
211
+ self.eop_token = eop_token
212
+ self.mask_token = mask_token
213
+ self.gmask_token = gmask_token
214
+
215
+ self.sp_tokenizer = SPTokenizer(vocab_file)
216
+
217
+ """ Initialisation """
218
+
219
+ @property
220
+ def eop_token_id(self) -> Optional[int]:
221
+ """
222
+ `Optional[int]`: Id of the end-of-passage (`eop`) token in the vocabulary. Returns `None` if the token has not been
223
+ set.
224
+ """
225
+ if self.eop_token is None:
226
+ return None
227
+ return self.convert_tokens_to_ids(self.eop_token)
228
+
229
+ @property
230
+ def gmask_token_id(self) -> Optional[int]:
231
+ """
232
+ `Optional[int]`: Id of the generative mask (`[gMASK]`) token in the vocabulary. Returns `None` if the token has not been
233
+ set.
234
+ """
235
+ if self.gmask_token is None:
236
+ return None
237
+ return self.convert_tokens_to_ids(self.gmask_token)
238
+
239
+ @property
240
+ def vocab_size(self):
241
+ """ Returns vocab size """
242
+ return self.sp_tokenizer.num_tokens
243
+
244
+ def get_vocab(self):
245
+ """ Returns vocab as a dict """
246
+ vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
247
+ vocab.update(self.added_tokens_encoder)
248
+ return vocab
249
+
250
+ def preprocess_text(self, inputs):
251
+ if self.remove_space:
252
+ outputs = " ".join(inputs.strip().split())
253
+ else:
254
+ outputs = inputs
255
+
256
+ if self.do_lower_case:
257
+ outputs = outputs.lower()
258
+
259
+ return outputs
260
+
261
+ def _tokenize(self, text, **kwargs):
262
+ """ Returns a tokenized string. """
263
+ text = self.preprocess_text(text)
264
+
265
+ seq = self.sp_tokenizer.tokenize(text)
266
+
267
+ return seq
268
+
269
+ def decode(
270
+ self,
271
+ token_ids: Union[List[int], List[List[int]]],
272
+ skip_special_tokens: bool = False,
273
+ clean_up_tokenization_spaces: bool = True,
274
+ spaces_between_special_tokens: bool = True,
275
+ **kwargs
276
+ ) -> str:
277
+ if isinstance(token_ids[0], list):
278
+ tokens = []
279
+ for single_token_ids in token_ids:
280
+ if self.pad_token_id in single_token_ids: # remove pad
281
+ single_token_ids = list(filter((self.pad_token_id).__ne__, single_token_ids))
282
+ tokens.append(self.sp_tokenizer.decode(single_token_ids))
283
+ return (tokens)
284
+ else:
285
+ if self.pad_token_id in token_ids: # remove pad
286
+ token_ids = list(filter((self.pad_token_id).__ne__, token_ids))
287
+ return self.sp_tokenizer.decode(token_ids)
288
+
289
+ def _convert_token_to_id(self, token):
290
+ """ Converts a token (str) in an id using the vocab. """
291
+ return self.sp_tokenizer[token]
292
+
293
+ def _convert_id_to_token(self, index):
294
+ """Converts an index (integer) in a token (str) using the vocab."""
295
+ return self.sp_tokenizer[index]
296
+
297
+ def save_vocabulary(self, save_directory, filename_prefix=None):
298
+ """
299
+ Save the vocabulary and special tokens file to a directory.
300
+
301
+ Args:
302
+ save_directory (`str`):
303
+ The directory in which to save the vocabulary.
304
+ filename_prefix (`str`, *optional*):
305
+ An optional prefix to add to the named of the saved files.
306
+
307
+ Returns:
308
+ `Tuple(str)`: Paths to the files saved.
309
+ """
310
+ if os.path.isdir(save_directory):
311
+ vocab_file = os.path.join(
312
+ save_directory, self.vocab_files_names["vocab_file"]
313
+ )
314
+ else:
315
+ vocab_file = save_directory
316
+
317
+ with open(self.vocab_file, 'rb') as fin:
318
+ proto_str = fin.read()
319
+
320
+ with open(vocab_file, "wb") as writer:
321
+ writer.write(proto_str)
322
+
323
+ return (vocab_file,)
324
+
325
+ # def num_special_tokens_to_add(self, pair: bool = False) -> int:
326
+ # """
327
+ # Returns the number of added tokens when encoding a sequence with special tokens.
328
+ #
329
+ # <Tip>
330
+ #
331
+ # This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put
332
+ # this inside your training loop.
333
+ #
334
+ # </Tip>
335
+ #
336
+ # Args:
337
+ # pair (`bool`, *optional*, defaults to `False`):
338
+ # Whether the number of added tokens should be computed in the case of a sequence pair or a single
339
+ # sequence.
340
+ #
341
+ # Returns:
342
+ # `int`: Number of special tokens added to sequences.
343
+ # """
344
+ # # token_ids_0 = []
345
+ # # token_ids_1 = []
346
+ # # return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None))
347
+ # return 2
348
+
349
+ def build_inputs_with_special_tokens(
350
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
351
+ ) -> List[int]:
352
+ """
353
+ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
354
+ adding special tokens. A BERT sequence has the following format:
355
+
356
+ - single sequence: `[CLS] X [SEP]`
357
+ - pair of sequences: `[CLS] A [SEP] B [SEP]`
358
+
359
+ Args:
360
+ token_ids_0 (`List[int]`):
361
+ List of IDs to which the special tokens will be added.
362
+ token_ids_1 (`List[int]`, *optional*):
363
+ Optional second list of IDs for sequence pairs.
364
+
365
+ Returns:
366
+ `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
367
+ """
368
+ mask_id = self.sp_tokenizer[self.mask_token]
369
+ gmask_id = self.sp_tokenizer[self.gmask_token]
370
+ eos_id = self.sp_tokenizer[self.eos_token]
371
+ bos_id = self.sp_tokenizer[self.bos_token]
372
+ eop_id = self.sp_tokenizer[self.eop_token]
373
+
374
+ if mask_id not in token_ids_0 and gmask_id not in token_ids_0:
375
+ token_ids_0 += [gmask_id]
376
+
377
+ if token_ids_0[-1] != mask_id and token_ids_0[-1] != gmask_id:
378
+ token_ids_0 += [eos_id]
379
+
380
+ token_ids_0 += [bos_id]
381
+
382
+ if token_ids_1 is not None:
383
+ token_ids_0 += token_ids_1 + [eop_id]
384
+
385
+ return token_ids_0
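Putting the two classes together: `SPTokenizer` offsets every text id by the 20,000 reserved image tokens, and `build_inputs_with_special_tokens` appends `[gMASK]` (when no mask token is already present in the sequence) followed by the `<sop>` BOS marker, plus the second sequence and `<eop>` for pairs. A rough sketch of the resulting layouts, assuming the repository files (including `ice_text.model` and `tokenizer_config.json`) have been downloaded to a local `./chatglm-6B` directory:

```python
# Rough sketch (assumes ./chatglm-6B holds this repo's tokenizer files).
from tokenization_chatglm import ChatGLMTokenizer

tokenizer = ChatGLMTokenizer.from_pretrained("./chatglm-6B")

prompt_ids = tokenizer.encode("今天天气不错", add_special_tokens=False)

# Single sequence: prompt ids + [gMASK] + <sop>
input_ids = tokenizer.build_inputs_with_special_tokens(prompt_ids)

# Pair: prompt ids + [gMASK] + <sop> + second-sequence ids + <eop>
answer_ids = tokenizer.encode("适合出去散步", add_special_tokens=False)
pair_ids = tokenizer.build_inputs_with_special_tokens(prompt_ids, answer_ids)

# decode() drops ids in the image-token range and maps the <n>, <|tab|> and
# <|blank_n|> placeholders back to real newlines, tabs and spaces.
print(tokenizer.decode(input_ids))
```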
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "name_or_path": "THUDM/chatglm-6b",
3
+ "bos_token": "<sop>",
4
+ "eop_token": "<eop>",
5
+ "eos_token": "</s>",
6
+ "gmask_token": "[gMASK]",
7
+ "mask_token": "[MASK]",
8
+ "pad_token": "<pad>",
9
+ "unk_token": "<unk>",
10
+ "remove_space": false,
11
+ "do_lower_case": false,
12
+ "tokenizer_class": "ChatGLMTokenizer",
13
+ "auto_map": {
14
+ "AutoTokenizer": [
15
+ "tokenization_chatglm.ChatGLMTokenizer",
16
+ null
17
+ ]
18
+ }
19
+ }
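These values are passed into `ChatGLMTokenizer.__init__` as keyword arguments at load time, so they override the class defaults (for example, `bos_token` resolves to `<sop>` rather than the bare `sop` default), while `auto_map` is what lets the `AutoTokenizer` entry point locate the custom class shipped in `tokenization_chatglm.py`. A quick illustrative check, assuming a tokenizer loaded from this repo is bound to `tokenizer`:

```python
# Illustrative check only; `tokenizer` is assumed to be loaded from this repo.
print(type(tokenizer).__name__)                         # ChatGLMTokenizer, via auto_map
print(tokenizer.bos_token, tokenizer.eop_token)         # <sop> <eop>
print(tokenizer.pad_token_id, tokenizer.eop_token_id)   # ids resolved through SPTokenizer
```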