How to merge the bias term of c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size) into LLaMA?
#2 · opened by songkq
Hello, could you please share how to merge the bias term of `c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size)` into LLaMA?
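For reference, here is a minimal sketch of what I mean, assuming a Qwen-style fused `c_attn` (weight of shape `(3 * hidden, hidden)` plus a bias) being split into the separate `q_proj`/`k_proj`/`v_proj` of a HF LLaMA state dict. The names `qwen_sd`, `llama_sd`, and `split_c_attn` are hypothetical, and this skips any RoPE weight permutation the official conversion scripts may apply:

```python
import torch

def split_c_attn(qwen_sd, llama_sd, layer_idx, hidden):
    # Hypothetical helper: copy one layer's fused QKV projection
    # from a Qwen state dict into a LLaMA-style state dict.
    # Fused projection: weight is (3*hidden, hidden), bias is (3*hidden,).
    w = qwen_sd[f"transformer.h.{layer_idx}.attn.c_attn.weight"]
    b = qwen_sd[f"transformer.h.{layer_idx}.attn.c_attn.bias"]
    q_w, k_w, v_w = w.split(hidden, dim=0)
    q_b, k_b, v_b = b.split(hidden, dim=0)
    prefix = f"model.layers.{layer_idx}.self_attn"
    llama_sd[f"{prefix}.q_proj.weight"] = q_w
    llama_sd[f"{prefix}.k_proj.weight"] = k_w
    llama_sd[f"{prefix}.v_proj.weight"] = v_w
    # LLaMA's attention projections normally have no bias, which is
    # exactly the problem: keeping these biases requires a model/config
    # that accepts them (e.g. attention_bias=True in recent transformers).
    llama_sd[f"{prefix}.q_proj.bias"] = q_b
    llama_sd[f"{prefix}.k_proj.bias"] = k_b
    llama_sd[f"{prefix}.v_proj.bias"] = v_b
```

Splitting the weight is straightforward; my question is really about the bias, since the stock LLaMA architecture drops it.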
This model was forked from https://huggingface.co/JosephusCheung/Qwen-LLaMAfied-7B-Chat ; I only changed the tokenizer. I don't have the weight-conversion script either. You could ask JosephusCheung for help.
OK. Thanks.
songkq changed discussion status to closed