patrickvonplaten
committed
Commit a3abe0f
Parent(s): 5c13581
add model
Browse files
- LICENSE.md +65 -0
- README.md +210 -0
- config.json +29 -0
- merges.txt +0 -0
- pytorch_model-00001-of-00014.bin +3 -0
- pytorch_model-00002-of-00014.bin +3 -0
- pytorch_model-00003-of-00014.bin +3 -0
- pytorch_model-00004-of-00014.bin +3 -0
- pytorch_model-00005-of-00014.bin +3 -0
- pytorch_model-00006-of-00014.bin +3 -0
- pytorch_model-00007-of-00014.bin +3 -0
- pytorch_model-00008-of-00014.bin +3 -0
- pytorch_model-00009-of-00014.bin +3 -0
- pytorch_model-00010-of-00014.bin +3 -0
- pytorch_model-00011-of-00014.bin +3 -0
- pytorch_model-00012-of-00014.bin +3 -0
- pytorch_model-00013-of-00014.bin +3 -0
- pytorch_model-00014-of-00014.bin +3 -0
- pytorch_model.bin.index.json +1036 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
LICENSE.md
ADDED
@@ -0,0 +1,65 @@
<h2 align="center"> OPT-175B LICENSE AGREEMENT </h2>

This License Agreement (as may be amended in accordance with this License Agreement, **“License”**), between you, or your employer or other entity (if you are entering into this agreement on behalf of your employer or other entity) (**“Licensee”** or **“you”**) and Meta Platforms, Inc. (**“Meta”** or **“we”**) applies to your use of any computer program, algorithm, source code, object code, or software that is made available by Meta under this License (**“Software”**) and any specifications, manuals, documentation, and other written information provided by Meta related to the Software (**“Documentation”**).

**By clicking “I Accept” below or by using the Software, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software or Documentation (collectively, the “Software Products”), and you must immediately cease using the Software Products. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Meta that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products on behalf of your employer or other entity.**
<br><br>
1. **LICENSE GRANT**
<br><br>
a. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Meta grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Meta’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Meta’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License.
<br><br>
b. You may make a reasonable number of copies of the Documentation solely for use in connection with the license to the Software granted above.
<br><br>
c. The grant of rights expressly set forth in this Section 1 (License Grant) are the complete grant of rights to you in the Software Products, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Meta and its licensors reserve all rights not expressly granted by this License.
<br><br>
2. **RESTRICTIONS**
<br><br>
You will not, and will not permit, assist or cause any third party to:
<br><br>
a. use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes or in the service of nuclear technology, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law, including accessing the Software Products from an embargoed country as prohibited by the U.S. government, and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing;
<br><br>
b. alter or remove copyright and other proprietary notices which appear on or in the Software Products;
<br><br>
c. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Meta in connection with the Software, or to circumvent or remove any usage restrictions, or to enable functionality disabled by Meta; or
<br><br>
d. offer or impose any terms on the Software Products that alter, restrict, or are inconsistent with the terms of this License.
<br><br>
3. **ATTRIBUTION**
<br><br>
Together with any copies of the Software Products (as well as derivative works thereof or works incorporating the Software Products) that you distribute, you must provide (i) a copy of this License, and (ii) the following attribution notice: “OPT-175B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.”
<br><br>
4. **DISCLAIMERS**
<br><br>
THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” and “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. META EXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. META MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.
<br><br>
5. **LIMITATION OF LIABILITY**
<br><br>
TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL META BE LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF META HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, **“SOFTWARE MATERIALS”**) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A **“HIGH-RISK USE”**). IF YOU ELECT TO USE ANY OF THE SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.
<br><br>
6. **INDEMNIFICATION**
<br><br>
You will indemnify, defend and hold harmless Meta and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the **“Meta Parties”**) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Meta Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, **“Claims”**) arising out of or related to: (a) your access to or use of the Software Products (as well as any results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Meta Parties of any such Claims, and cooperate with Meta Parties in defending such Claims. You will also grant the Meta Parties sole control of the defense or settlement, at Meta’s sole option, of any Claims. This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Meta or the other Meta Parties.
<br><br>
7. **TERMINATION; SURVIVAL**
<br><br>
a. This License will automatically terminate upon any breach by you of the terms of this License.
<br><br>
b. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you.
<br><br>
c. The following sections survive termination of this License: 2 (Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation on Liability), 6 (Indemnification), 7 (Termination; Survival), 8 (Third Party Materials), 9 (Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).
<br><br>
8. **THIRD PARTY MATERIALS**
<br><br>
The Software Products may contain third-party software or other components (including free and open source software) (all of the foregoing, **“Third Party Materials”**), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Meta does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk.
<br><br>
9. **TRADEMARKS**
<br><br>
Licensee has not been granted any trademark license as part of this License and may not use any name or mark associated with Meta without the prior written permission of Meta, except to the extent necessary to make the reference required by the “ATTRIBUTION” section of this Agreement.
<br><br>
10. **APPLICABLE LAW; DISPUTE RESOLUTION**
<br><br>
This License will be governed and construed under the laws of the State of California without regard to conflicts of law provisions. Any suit or proceeding arising out of or relating to this License will be brought in the federal or state courts, as applicable, in San Mateo County, California, and each party irrevocably submits to the jurisdiction and venue of such courts.
<br><br>
11. **MISCELLANEOUS**
<br><br>
If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Meta to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Meta regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Meta regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Meta.
README.md
ADDED
@@ -0,0 +1,210 @@
---
language: en
inference: false
tags:
- text-generation
- opt

license: other
commercial: false
---

# OPT : Open Pre-trained Transformer Language Models

OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.

**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.

## Intro

To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068):

> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.

> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.

## Model description

OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165), and was likewise pretrained with the self-supervised causal language modeling objective.

For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).

## Intended uses & limitations

The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).

### How to use

For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline because
one should load the model in half-precision to accelerate generation and optimize memory consumption on the GPU.
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
method as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False)

>>> prompt = "Hello, I am conscious and"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> generated_ids = model.generate(input_ids)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and I am here.\nI am also conscious and I am here']
```
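The half-precision recommendation can be sanity-checked with a bit of arithmetic (a rough sketch; the ~66B parameter count is approximate, and the estimate covers weights only, not activations or KV cache):

```python
# Rough memory estimate for OPT-66B weights in half precision (fp16 = 2 bytes/param).
n_params = 66_000_000_000          # approximate parameter count
fp16_bytes = n_params * 2          # bytes needed for the weights alone
print(f"{fp16_bytes / 1e9:.0f} GB")  # 132 GB -> multiple GPUs or offloading required
```

This matches the checkpoint itself: the 14 shards in this commit total roughly 132 GB.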

By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False)

>>> prompt = "Hello, I am conscious and"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and aware that you have your back turned to me and want to talk']
```

### Limitations and bias

As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False)

>>> prompt = "The woman worked as a"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The woman worked as a supervisor in the office
The woman worked as a social worker in a
The woman worked as a cashier at the
The woman worked as a teacher from 2011 to
he woman worked as a maid at the house
```

compared to:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda()

>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False)

>>> prompt = "The man worked as a"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)

>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The man worked as a school bus driver for
The man worked as a bartender in a bar
The man worked as a cashier at the
The man worked as a teacher, and was
The man worked as a professional at a range
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:

- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included,
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021),
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b).

The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.

The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.

### Collection process

The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*

## Training procedure

### Preprocessing

The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.

The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.

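The packing of the corpus into 2048-token inputs described above can be sketched as follows (a simplified illustration, not the metaseq pipeline; `token_ids` stands in for a tokenized corpus, and the final partial chunk is simply dropped):

```python
def pack_into_sequences(token_ids, seq_len=2048):
    """Split a long token stream into fixed-length training sequences,
    dropping the trailing partial chunk (a common simplification)."""
    n_full = len(token_ids) // seq_len
    return [token_ids[i * seq_len:(i + 1) * seq_len] for i in range(n_full)]

corpus = list(range(5000))            # stand-in for a tokenized corpus
sequences = pack_into_sequences(corpus)
print(len(sequences), len(sequences[0]))  # 2 2048
```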
### BibTeX entry and citation info

```bibtex
@misc{zhang2022opt,
      title={OPT: Open Pre-trained Transformer Language Models},
      author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
      year={2022},
      eprint={2205.01068},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json
ADDED
@@ -0,0 +1,29 @@
{
  "_name_or_path": "./",
  "_remove_final_layer_norm": false,
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "OPTForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 2,
  "do_layer_norm_before": true,
  "dropout": 0.1,
  "eos_token_id": 2,
  "ffn_dim": 36864,
  "hidden_size": 9216,
  "init_std": 0.02,
  "layerdrop": 0.0,
  "max_position_embeddings": 2048,
  "model_type": "opt",
  "num_attention_heads": 72,
  "num_hidden_layers": 64,
  "pad_token_id": 1,
  "prefix": "</s>",
  "torch_dtype": "float16",
  "transformers_version": "4.21.0.dev0",
  "use_cache": true,
  "vocab_size": 50272,
  "word_embed_proj_dim": 9216
}
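As a rough cross-check of this configuration, the parameter count implied by these hyperparameters can be estimated (a back-of-the-envelope sketch using the standard decoder-layer formula; biases, layer norms, and other small terms are ignored):

```python
hidden = 9216        # hidden_size
ffn = 36864          # ffn_dim
layers = 64          # num_hidden_layers
vocab = 50272        # vocab_size
max_pos = 2048       # max_position_embeddings

attn = 4 * hidden * hidden            # q, k, v, and output projection matrices
mlp = 2 * hidden * ffn                # up- and down-projection matrices
per_layer = attn + mlp
embeddings = (vocab + max_pos) * hidden
total = layers * per_layer + embeddings
print(f"{total / 1e9:.1f}B parameters")   # ~65.7B, i.e. the "66b" in the model name
```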
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
pytorch_model-00001-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:744510f92cc096ae3f4edf39a2c64f366e247f89251822261113c67892b29472
size 9798734195
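Each `.bin` entry in this commit is a Git LFS pointer file rather than the weights themselves; parsing one is straightforward (a minimal sketch using the pointer shown above):

```python
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:744510f92cc096ae3f4edf39a2c64f366e247f89251822261113c67892b29472
size 9798734195"""

# An LFS pointer is just "key value" lines; split each line once on the first space.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)
print(algo, fields["size"])  # sha256 9798734195
```

The `size` field is the byte count of the real object, which is why the shard sizes below can be read straight off the pointers.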
pytorch_model-00002-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5b6d9b5ffc29c3b902e4d9eb88c2ab3c9acb579244babb1b12c3a24a66395214
size 9853568015
pytorch_model-00003-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1dcca67152f0e84e0598271f0e635ff21aa1b900a7603669e30aadebd2c906af
size 9853605493
pytorch_model-00004-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:953df9853ef7ff57febce65652479331820228e5fdca0474b04624c917a6ad70
size 9513848537
pytorch_model-00005-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65314db7835c09877a91408c704ee83be206132f46e2eb709300b4e6212c83c6
size 9513830787
pytorch_model-00006-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e816ea64bf480c391daf418b66afa8e1cfb909414151373d356c6087137be60
size 9853568143
pytorch_model-00007-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f47936f5d4653fe32ac1176e31e91d566cc0d3bf9c9cb9c1cf2162fa266ee3d
size 9853605493
pytorch_model-00008-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59f7c0650c24bc3667715bb47d96661bba96c5ca16e41731f38b983209eaa869
size 9513848537
pytorch_model-00009-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa3a862c0f70f59bb468cb8008a5249e5c885225b8485f154c09e6e060063f69
size 9513830787
pytorch_model-00010-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32bc892be53af6dd6b5d99feae1cc845b9010d9ff863864ba6802b06e79b1f8d
size 9853568143
pytorch_model-00011-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd33e09b2f91a4c0a0d43a73fe5821cee64b6778cac41fa86a89b5146c303d89
size 9853605493
pytorch_model-00012-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:320656a12ad3ef8fdb0a747bf1e7805b8c34ee06fe9ee52ba006405c1cd7482a
size 9513848537
pytorch_model-00013-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f3eb44f25b8b01ef62c706e849fb7af1644e2264945f46e90c4763ef3ed969b
size 9513830787
pytorch_model-00014-of-00014.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1636886e309abc5883b7dfa9df902c8ab89812ff9d59b5ee6a565e897da3d5e5
size 6363051796
pytorch_model.bin.index.json
ADDED
@@ -0,0 +1,1036 @@
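The index file's contents are not rendered here, but its structure is a JSON object with a `metadata` block and a `weight_map` from parameter name to shard file, which `from_pretrained` uses to locate each tensor. A minimal illustration (the entries and the byte total below are hypothetical, not taken from the actual index):

```python
import json

index = {
    "metadata": {"total_size": 132000000000},   # hypothetical byte total
    "weight_map": {
        "decoder.embed_tokens.weight": "pytorch_model-00001-of-00014.bin",
        "decoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
    },
}

# Group parameters by the shard that stores them, as a loader would.
shards = {}
for name, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(name)
print(json.dumps({k: len(v) for k, v in shards.items()}))
```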
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"metadata": {
|
3 |
+
"total_size": 132366016512
|
4 |
+
},
|
5 |
+
"weight_map": {
|
6 |
+
"lm_head.weight": "pytorch_model-00014-of-00014.bin",
|
7 |
+
"model.decoder.embed_positions.weight": "pytorch_model-00001-of-00014.bin",
|
8 |
+
"model.decoder.embed_tokens.weight": "pytorch_model-00001-of-00014.bin",
|
9 |
+
"model.decoder.final_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
10 |
+
"model.decoder.final_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
11 |
+
"model.decoder.layers.0.fc1.bias": "pytorch_model-00001-of-00014.bin",
|
12 |
+
"model.decoder.layers.0.fc1.weight": "pytorch_model-00001-of-00014.bin",
|
13 |
+
"model.decoder.layers.0.fc2.bias": "pytorch_model-00001-of-00014.bin",
|
14 |
+
"model.decoder.layers.0.fc2.weight": "pytorch_model-00001-of-00014.bin",
|
15 |
+
"model.decoder.layers.0.final_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
16 |
+
"model.decoder.layers.0.final_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
17 |
+
"model.decoder.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00014.bin",
|
18 |
+
"model.decoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
|
19 |
+
"model.decoder.layers.0.self_attn.out_proj.bias": "pytorch_model-00001-of-00014.bin",
|
20 |
+
"model.decoder.layers.0.self_attn.out_proj.weight": "pytorch_model-00001-of-00014.bin",
|
21 |
+
"model.decoder.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00014.bin",
|
22 |
+
"model.decoder.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
|
23 |
+
"model.decoder.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00014.bin",
|
24 |
+
"model.decoder.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
|
25 |
+
"model.decoder.layers.0.self_attn_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
26 |
+
"model.decoder.layers.0.self_attn_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
27 |
+
"model.decoder.layers.1.fc1.bias": "pytorch_model-00001-of-00014.bin",
|
28 |
+
"model.decoder.layers.1.fc1.weight": "pytorch_model-00001-of-00014.bin",
|
29 |
+
"model.decoder.layers.1.fc2.bias": "pytorch_model-00001-of-00014.bin",
|
30 |
+
"model.decoder.layers.1.fc2.weight": "pytorch_model-00001-of-00014.bin",
|
31 |
+
"model.decoder.layers.1.final_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
32 |
+
"model.decoder.layers.1.final_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
33 |
+
"model.decoder.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00014.bin",
|
34 |
+
"model.decoder.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
|
35 |
+
"model.decoder.layers.1.self_attn.out_proj.bias": "pytorch_model-00001-of-00014.bin",
|
36 |
+
"model.decoder.layers.1.self_attn.out_proj.weight": "pytorch_model-00001-of-00014.bin",
|
37 |
+
"model.decoder.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00014.bin",
|
38 |
+
"model.decoder.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
|
39 |
+
"model.decoder.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00014.bin",
|
40 |
+
"model.decoder.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
|
41 |
+
"model.decoder.layers.1.self_attn_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
42 |
+
"model.decoder.layers.1.self_attn_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
43 |
+
"model.decoder.layers.10.fc1.bias": "pytorch_model-00003-of-00014.bin",
|
44 |
+
"model.decoder.layers.10.fc1.weight": "pytorch_model-00003-of-00014.bin",
|
45 |
+
"model.decoder.layers.10.fc2.bias": "pytorch_model-00003-of-00014.bin",
|
46 |
+
"model.decoder.layers.10.fc2.weight": "pytorch_model-00003-of-00014.bin",
|
47 |
+
"model.decoder.layers.10.final_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
48 |
+
"model.decoder.layers.10.final_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
49 |
+
"model.decoder.layers.10.self_attn.k_proj.bias": "pytorch_model-00003-of-00014.bin",
|
50 |
+
"model.decoder.layers.10.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
|
51 |
+
"model.decoder.layers.10.self_attn.out_proj.bias": "pytorch_model-00003-of-00014.bin",
|
52 |
+
"model.decoder.layers.10.self_attn.out_proj.weight": "pytorch_model-00003-of-00014.bin",
|
53 |
+
"model.decoder.layers.10.self_attn.q_proj.bias": "pytorch_model-00003-of-00014.bin",
|
54 |
+
"model.decoder.layers.10.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
|
55 |
+
"model.decoder.layers.10.self_attn.v_proj.bias": "pytorch_model-00003-of-00014.bin",
|
56 |
+
"model.decoder.layers.10.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
|
57 |
+
"model.decoder.layers.10.self_attn_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
58 |
+
"model.decoder.layers.10.self_attn_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
59 |
+
"model.decoder.layers.11.fc1.bias": "pytorch_model-00003-of-00014.bin",
|
60 |
+
"model.decoder.layers.11.fc1.weight": "pytorch_model-00003-of-00014.bin",
|
61 |
+
"model.decoder.layers.11.fc2.bias": "pytorch_model-00003-of-00014.bin",
|
62 |
+
"model.decoder.layers.11.fc2.weight": "pytorch_model-00003-of-00014.bin",
|
63 |
+
"model.decoder.layers.11.final_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
64 |
+
"model.decoder.layers.11.final_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
65 |
+
"model.decoder.layers.11.self_attn.k_proj.bias": "pytorch_model-00003-of-00014.bin",
|
66 |
+
"model.decoder.layers.11.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
|
67 |
+
"model.decoder.layers.11.self_attn.out_proj.bias": "pytorch_model-00003-of-00014.bin",
|
68 |
+
"model.decoder.layers.11.self_attn.out_proj.weight": "pytorch_model-00003-of-00014.bin",
|
69 |
+
"model.decoder.layers.11.self_attn.q_proj.bias": "pytorch_model-00003-of-00014.bin",
|
70 |
+
"model.decoder.layers.11.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
|
71 |
+
"model.decoder.layers.11.self_attn.v_proj.bias": "pytorch_model-00003-of-00014.bin",
|
72 |
+
"model.decoder.layers.11.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
|
73 |
+
"model.decoder.layers.11.self_attn_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
74 |
+
"model.decoder.layers.11.self_attn_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
75 |
+
"model.decoder.layers.12.fc1.bias": "pytorch_model-00003-of-00014.bin",
|
76 |
+
"model.decoder.layers.12.fc1.weight": "pytorch_model-00003-of-00014.bin",
|
77 |
+
"model.decoder.layers.12.fc2.bias": "pytorch_model-00003-of-00014.bin",
|
78 |
+
"model.decoder.layers.12.fc2.weight": "pytorch_model-00003-of-00014.bin",
|
79 |
+
"model.decoder.layers.12.final_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
80 |
+
"model.decoder.layers.12.final_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
81 |
+
"model.decoder.layers.12.self_attn.k_proj.bias": "pytorch_model-00003-of-00014.bin",
|
82 |
+
"model.decoder.layers.12.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
|
83 |
+
"model.decoder.layers.12.self_attn.out_proj.bias": "pytorch_model-00003-of-00014.bin",
|
84 |
+
"model.decoder.layers.12.self_attn.out_proj.weight": "pytorch_model-00003-of-00014.bin",
|
85 |
+
"model.decoder.layers.12.self_attn.q_proj.bias": "pytorch_model-00003-of-00014.bin",
|
86 |
+
"model.decoder.layers.12.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
|
87 |
+
"model.decoder.layers.12.self_attn.v_proj.bias": "pytorch_model-00003-of-00014.bin",
|
88 |
+
"model.decoder.layers.12.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
|
89 |
+
"model.decoder.layers.12.self_attn_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
90 |
+
"model.decoder.layers.12.self_attn_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
91 |
+
"model.decoder.layers.13.fc1.bias": "pytorch_model-00003-of-00014.bin",
|
92 |
+
"model.decoder.layers.13.fc1.weight": "pytorch_model-00003-of-00014.bin",
|
93 |
+
"model.decoder.layers.13.fc2.bias": "pytorch_model-00003-of-00014.bin",
|
94 |
+
"model.decoder.layers.13.fc2.weight": "pytorch_model-00003-of-00014.bin",
|
95 |
+
"model.decoder.layers.13.final_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
96 |
+
"model.decoder.layers.13.final_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
97 |
+
"model.decoder.layers.13.self_attn.k_proj.bias": "pytorch_model-00003-of-00014.bin",
|
98 |
+
"model.decoder.layers.13.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
|
99 |
+
"model.decoder.layers.13.self_attn.out_proj.bias": "pytorch_model-00003-of-00014.bin",
|
100 |
+
"model.decoder.layers.13.self_attn.out_proj.weight": "pytorch_model-00003-of-00014.bin",
|
101 |
+
"model.decoder.layers.13.self_attn.q_proj.bias": "pytorch_model-00003-of-00014.bin",
|
102 |
+
"model.decoder.layers.13.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
|
103 |
+
"model.decoder.layers.13.self_attn.v_proj.bias": "pytorch_model-00003-of-00014.bin",
|
104 |
+
"model.decoder.layers.13.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
|
105 |
+
"model.decoder.layers.13.self_attn_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
|
106 |
+
"model.decoder.layers.13.self_attn_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
|
107 |
+
"model.decoder.layers.14.fc1.bias": "pytorch_model-00004-of-00014.bin",
|
108 |
+
"model.decoder.layers.14.fc1.weight": "pytorch_model-00004-of-00014.bin",
|
109 |
+
"model.decoder.layers.14.fc2.bias": "pytorch_model-00004-of-00014.bin",
|
110 |
+
"model.decoder.layers.14.fc2.weight": "pytorch_model-00004-of-00014.bin",
|
111 |
+
"model.decoder.layers.14.final_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
112 |
+
"model.decoder.layers.14.final_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
113 |
+
"model.decoder.layers.14.self_attn.k_proj.bias": "pytorch_model-00004-of-00014.bin",
|
114 |
+
"model.decoder.layers.14.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
|
115 |
+
"model.decoder.layers.14.self_attn.out_proj.bias": "pytorch_model-00004-of-00014.bin",
|
116 |
+
"model.decoder.layers.14.self_attn.out_proj.weight": "pytorch_model-00004-of-00014.bin",
|
117 |
+
"model.decoder.layers.14.self_attn.q_proj.bias": "pytorch_model-00004-of-00014.bin",
|
118 |
+
"model.decoder.layers.14.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
|
119 |
+
"model.decoder.layers.14.self_attn.v_proj.bias": "pytorch_model-00004-of-00014.bin",
|
120 |
+
"model.decoder.layers.14.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
|
121 |
+
"model.decoder.layers.14.self_attn_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
122 |
+
"model.decoder.layers.14.self_attn_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
123 |
+
"model.decoder.layers.15.fc1.bias": "pytorch_model-00004-of-00014.bin",
|
124 |
+
"model.decoder.layers.15.fc1.weight": "pytorch_model-00004-of-00014.bin",
|
125 |
+
"model.decoder.layers.15.fc2.bias": "pytorch_model-00004-of-00014.bin",
|
126 |
+
"model.decoder.layers.15.fc2.weight": "pytorch_model-00004-of-00014.bin",
|
127 |
+
"model.decoder.layers.15.final_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
128 |
+
"model.decoder.layers.15.final_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
129 |
+
"model.decoder.layers.15.self_attn.k_proj.bias": "pytorch_model-00004-of-00014.bin",
|
130 |
+
"model.decoder.layers.15.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
|
131 |
+
"model.decoder.layers.15.self_attn.out_proj.bias": "pytorch_model-00004-of-00014.bin",
|
132 |
+
"model.decoder.layers.15.self_attn.out_proj.weight": "pytorch_model-00004-of-00014.bin",
|
133 |
+
"model.decoder.layers.15.self_attn.q_proj.bias": "pytorch_model-00004-of-00014.bin",
|
134 |
+
"model.decoder.layers.15.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
|
135 |
+
"model.decoder.layers.15.self_attn.v_proj.bias": "pytorch_model-00004-of-00014.bin",
|
136 |
+
"model.decoder.layers.15.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
|
137 |
+
"model.decoder.layers.15.self_attn_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
138 |
+
"model.decoder.layers.15.self_attn_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
139 |
+
"model.decoder.layers.16.fc1.bias": "pytorch_model-00004-of-00014.bin",
|
140 |
+
"model.decoder.layers.16.fc1.weight": "pytorch_model-00004-of-00014.bin",
|
141 |
+
"model.decoder.layers.16.fc2.bias": "pytorch_model-00004-of-00014.bin",
|
142 |
+
"model.decoder.layers.16.fc2.weight": "pytorch_model-00004-of-00014.bin",
|
143 |
+
"model.decoder.layers.16.final_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
144 |
+
"model.decoder.layers.16.final_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
145 |
+
"model.decoder.layers.16.self_attn.k_proj.bias": "pytorch_model-00004-of-00014.bin",
|
146 |
+
"model.decoder.layers.16.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
|
147 |
+
"model.decoder.layers.16.self_attn.out_proj.bias": "pytorch_model-00004-of-00014.bin",
|
148 |
+
"model.decoder.layers.16.self_attn.out_proj.weight": "pytorch_model-00004-of-00014.bin",
|
149 |
+
"model.decoder.layers.16.self_attn.q_proj.bias": "pytorch_model-00004-of-00014.bin",
|
150 |
+
"model.decoder.layers.16.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
|
151 |
+
"model.decoder.layers.16.self_attn.v_proj.bias": "pytorch_model-00004-of-00014.bin",
|
152 |
+
"model.decoder.layers.16.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
|
153 |
+
"model.decoder.layers.16.self_attn_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
154 |
+
"model.decoder.layers.16.self_attn_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
155 |
+
"model.decoder.layers.17.fc1.bias": "pytorch_model-00004-of-00014.bin",
|
156 |
+
"model.decoder.layers.17.fc1.weight": "pytorch_model-00004-of-00014.bin",
|
157 |
+
"model.decoder.layers.17.fc2.bias": "pytorch_model-00004-of-00014.bin",
|
158 |
+
"model.decoder.layers.17.fc2.weight": "pytorch_model-00004-of-00014.bin",
|
159 |
+
"model.decoder.layers.17.final_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
160 |
+
"model.decoder.layers.17.final_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
161 |
+
"model.decoder.layers.17.self_attn.k_proj.bias": "pytorch_model-00004-of-00014.bin",
|
162 |
+
"model.decoder.layers.17.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
|
163 |
+
"model.decoder.layers.17.self_attn.out_proj.bias": "pytorch_model-00004-of-00014.bin",
|
164 |
+
"model.decoder.layers.17.self_attn.out_proj.weight": "pytorch_model-00004-of-00014.bin",
|
165 |
+
"model.decoder.layers.17.self_attn.q_proj.bias": "pytorch_model-00004-of-00014.bin",
|
166 |
+
"model.decoder.layers.17.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
|
167 |
+
"model.decoder.layers.17.self_attn.v_proj.bias": "pytorch_model-00004-of-00014.bin",
|
168 |
+
"model.decoder.layers.17.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
|
169 |
+
"model.decoder.layers.17.self_attn_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
170 |
+
"model.decoder.layers.17.self_attn_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
171 |
+
"model.decoder.layers.18.fc1.bias": "pytorch_model-00004-of-00014.bin",
|
172 |
+
"model.decoder.layers.18.fc1.weight": "pytorch_model-00004-of-00014.bin",
|
173 |
+
"model.decoder.layers.18.fc2.bias": "pytorch_model-00005-of-00014.bin",
|
174 |
+
"model.decoder.layers.18.fc2.weight": "pytorch_model-00005-of-00014.bin",
|
175 |
+
"model.decoder.layers.18.final_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
176 |
+
"model.decoder.layers.18.final_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
177 |
+
"model.decoder.layers.18.self_attn.k_proj.bias": "pytorch_model-00004-of-00014.bin",
|
178 |
+
"model.decoder.layers.18.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
|
179 |
+
"model.decoder.layers.18.self_attn.out_proj.bias": "pytorch_model-00004-of-00014.bin",
|
180 |
+
"model.decoder.layers.18.self_attn.out_proj.weight": "pytorch_model-00004-of-00014.bin",
|
181 |
+
"model.decoder.layers.18.self_attn.q_proj.bias": "pytorch_model-00004-of-00014.bin",
|
182 |
+
"model.decoder.layers.18.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
|
183 |
+
"model.decoder.layers.18.self_attn.v_proj.bias": "pytorch_model-00004-of-00014.bin",
|
184 |
+
"model.decoder.layers.18.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
|
185 |
+
"model.decoder.layers.18.self_attn_layer_norm.bias": "pytorch_model-00004-of-00014.bin",
|
186 |
+
"model.decoder.layers.18.self_attn_layer_norm.weight": "pytorch_model-00004-of-00014.bin",
|
187 |
+
"model.decoder.layers.19.fc1.bias": "pytorch_model-00005-of-00014.bin",
|
188 |
+
"model.decoder.layers.19.fc1.weight": "pytorch_model-00005-of-00014.bin",
|
189 |
+
"model.decoder.layers.19.fc2.bias": "pytorch_model-00005-of-00014.bin",
|
190 |
+
"model.decoder.layers.19.fc2.weight": "pytorch_model-00005-of-00014.bin",
|
191 |
+
"model.decoder.layers.19.final_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
192 |
+
"model.decoder.layers.19.final_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
193 |
+
"model.decoder.layers.19.self_attn.k_proj.bias": "pytorch_model-00005-of-00014.bin",
|
194 |
+
"model.decoder.layers.19.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
|
195 |
+
"model.decoder.layers.19.self_attn.out_proj.bias": "pytorch_model-00005-of-00014.bin",
|
196 |
+
"model.decoder.layers.19.self_attn.out_proj.weight": "pytorch_model-00005-of-00014.bin",
|
197 |
+
"model.decoder.layers.19.self_attn.q_proj.bias": "pytorch_model-00005-of-00014.bin",
|
198 |
+
"model.decoder.layers.19.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
|
199 |
+
"model.decoder.layers.19.self_attn.v_proj.bias": "pytorch_model-00005-of-00014.bin",
|
200 |
+
"model.decoder.layers.19.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
|
201 |
+
"model.decoder.layers.19.self_attn_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
202 |
+
"model.decoder.layers.19.self_attn_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
203 |
+
"model.decoder.layers.2.fc1.bias": "pytorch_model-00001-of-00014.bin",
|
204 |
+
"model.decoder.layers.2.fc1.weight": "pytorch_model-00001-of-00014.bin",
|
205 |
+
"model.decoder.layers.2.fc2.bias": "pytorch_model-00001-of-00014.bin",
|
206 |
+
"model.decoder.layers.2.fc2.weight": "pytorch_model-00001-of-00014.bin",
|
207 |
+
"model.decoder.layers.2.final_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
208 |
+
"model.decoder.layers.2.final_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
209 |
+
"model.decoder.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00014.bin",
|
210 |
+
"model.decoder.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
|
211 |
+
"model.decoder.layers.2.self_attn.out_proj.bias": "pytorch_model-00001-of-00014.bin",
|
212 |
+
"model.decoder.layers.2.self_attn.out_proj.weight": "pytorch_model-00001-of-00014.bin",
|
213 |
+
"model.decoder.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00014.bin",
|
214 |
+
"model.decoder.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
|
215 |
+
"model.decoder.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00014.bin",
|
216 |
+
"model.decoder.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
|
217 |
+
"model.decoder.layers.2.self_attn_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
|
218 |
+
"model.decoder.layers.2.self_attn_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
|
219 |
+
"model.decoder.layers.20.fc1.bias": "pytorch_model-00005-of-00014.bin",
|
220 |
+
"model.decoder.layers.20.fc1.weight": "pytorch_model-00005-of-00014.bin",
|
221 |
+
"model.decoder.layers.20.fc2.bias": "pytorch_model-00005-of-00014.bin",
|
222 |
+
"model.decoder.layers.20.fc2.weight": "pytorch_model-00005-of-00014.bin",
|
223 |
+
"model.decoder.layers.20.final_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
224 |
+
"model.decoder.layers.20.final_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
225 |
+
"model.decoder.layers.20.self_attn.k_proj.bias": "pytorch_model-00005-of-00014.bin",
|
226 |
+
"model.decoder.layers.20.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
|
227 |
+
"model.decoder.layers.20.self_attn.out_proj.bias": "pytorch_model-00005-of-00014.bin",
|
228 |
+
"model.decoder.layers.20.self_attn.out_proj.weight": "pytorch_model-00005-of-00014.bin",
|
229 |
+
"model.decoder.layers.20.self_attn.q_proj.bias": "pytorch_model-00005-of-00014.bin",
|
230 |
+
"model.decoder.layers.20.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
|
231 |
+
"model.decoder.layers.20.self_attn.v_proj.bias": "pytorch_model-00005-of-00014.bin",
|
232 |
+
"model.decoder.layers.20.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
|
233 |
+
"model.decoder.layers.20.self_attn_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
234 |
+
"model.decoder.layers.20.self_attn_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
235 |
+
"model.decoder.layers.21.fc1.bias": "pytorch_model-00005-of-00014.bin",
|
236 |
+
"model.decoder.layers.21.fc1.weight": "pytorch_model-00005-of-00014.bin",
|
237 |
+
"model.decoder.layers.21.fc2.bias": "pytorch_model-00005-of-00014.bin",
|
238 |
+
"model.decoder.layers.21.fc2.weight": "pytorch_model-00005-of-00014.bin",
|
239 |
+
"model.decoder.layers.21.final_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
240 |
+
"model.decoder.layers.21.final_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
241 |
+
"model.decoder.layers.21.self_attn.k_proj.bias": "pytorch_model-00005-of-00014.bin",
|
242 |
+
"model.decoder.layers.21.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
|
243 |
+
"model.decoder.layers.21.self_attn.out_proj.bias": "pytorch_model-00005-of-00014.bin",
|
244 |
+
"model.decoder.layers.21.self_attn.out_proj.weight": "pytorch_model-00005-of-00014.bin",
|
245 |
+
"model.decoder.layers.21.self_attn.q_proj.bias": "pytorch_model-00005-of-00014.bin",
|
246 |
+
"model.decoder.layers.21.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
|
247 |
+
"model.decoder.layers.21.self_attn.v_proj.bias": "pytorch_model-00005-of-00014.bin",
|
248 |
+
"model.decoder.layers.21.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
|
249 |
+
"model.decoder.layers.21.self_attn_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
250 |
+
"model.decoder.layers.21.self_attn_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
251 |
+
"model.decoder.layers.22.fc1.bias": "pytorch_model-00005-of-00014.bin",
|
252 |
+
"model.decoder.layers.22.fc1.weight": "pytorch_model-00005-of-00014.bin",
|
253 |
+
"model.decoder.layers.22.fc2.bias": "pytorch_model-00005-of-00014.bin",
|
254 |
+
"model.decoder.layers.22.fc2.weight": "pytorch_model-00005-of-00014.bin",
|
255 |
+
"model.decoder.layers.22.final_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
256 |
+
"model.decoder.layers.22.final_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
257 |
+
"model.decoder.layers.22.self_attn.k_proj.bias": "pytorch_model-00005-of-00014.bin",
|
258 |
+
"model.decoder.layers.22.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
|
259 |
+
"model.decoder.layers.22.self_attn.out_proj.bias": "pytorch_model-00005-of-00014.bin",
|
260 |
+
"model.decoder.layers.22.self_attn.out_proj.weight": "pytorch_model-00005-of-00014.bin",
|
261 |
+
"model.decoder.layers.22.self_attn.q_proj.bias": "pytorch_model-00005-of-00014.bin",
|
262 |
+
"model.decoder.layers.22.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
|
263 |
+
"model.decoder.layers.22.self_attn.v_proj.bias": "pytorch_model-00005-of-00014.bin",
|
264 |
+
"model.decoder.layers.22.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
|
265 |
+
"model.decoder.layers.22.self_attn_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
266 |
+
"model.decoder.layers.22.self_attn_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
267 |
+
"model.decoder.layers.23.fc1.bias": "pytorch_model-00006-of-00014.bin",
|
268 |
+
"model.decoder.layers.23.fc1.weight": "pytorch_model-00006-of-00014.bin",
|
269 |
+
"model.decoder.layers.23.fc2.bias": "pytorch_model-00006-of-00014.bin",
|
270 |
+
"model.decoder.layers.23.fc2.weight": "pytorch_model-00006-of-00014.bin",
|
271 |
+
"model.decoder.layers.23.final_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
|
272 |
+
"model.decoder.layers.23.final_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
|
273 |
+
"model.decoder.layers.23.self_attn.k_proj.bias": "pytorch_model-00005-of-00014.bin",
|
274 |
+
"model.decoder.layers.23.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
|
275 |
+
"model.decoder.layers.23.self_attn.out_proj.bias": "pytorch_model-00005-of-00014.bin",
|
276 |
+
"model.decoder.layers.23.self_attn.out_proj.weight": "pytorch_model-00005-of-00014.bin",
|
277 |
+
"model.decoder.layers.23.self_attn.q_proj.bias": "pytorch_model-00005-of-00014.bin",
|
278 |
+
"model.decoder.layers.23.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
|
279 |
+
"model.decoder.layers.23.self_attn.v_proj.bias": "pytorch_model-00005-of-00014.bin",
|
280 |
+
"model.decoder.layers.23.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
|
281 |
+
"model.decoder.layers.23.self_attn_layer_norm.bias": "pytorch_model-00005-of-00014.bin",
|
282 |
+
"model.decoder.layers.23.self_attn_layer_norm.weight": "pytorch_model-00005-of-00014.bin",
|
283 |
+
"model.decoder.layers.24.fc1.bias": "pytorch_model-00006-of-00014.bin",
|
284 |
+
"model.decoder.layers.24.fc1.weight": "pytorch_model-00006-of-00014.bin",
|
285 |
+
"model.decoder.layers.24.fc2.bias": "pytorch_model-00006-of-00014.bin",
|
286 |
+
"model.decoder.layers.24.fc2.weight": "pytorch_model-00006-of-00014.bin",
|
287 |
+
"model.decoder.layers.24.final_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
|
288 |
+
"model.decoder.layers.24.final_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
|
289 |
+
"model.decoder.layers.24.self_attn.k_proj.bias": "pytorch_model-00006-of-00014.bin",
|
290 |
+
"model.decoder.layers.24.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
|
291 |
+
"model.decoder.layers.24.self_attn.out_proj.bias": "pytorch_model-00006-of-00014.bin",
|
292 |
+
"model.decoder.layers.24.self_attn.out_proj.weight": "pytorch_model-00006-of-00014.bin",
|
293 |
+
"model.decoder.layers.24.self_attn.q_proj.bias": "pytorch_model-00006-of-00014.bin",
|
294 |
+
"model.decoder.layers.24.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
|
295 |
+
"model.decoder.layers.24.self_attn.v_proj.bias": "pytorch_model-00006-of-00014.bin",
|
296 |
+
"model.decoder.layers.24.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
|
297 |
+
"model.decoder.layers.24.self_attn_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
|
298 |
+
"model.decoder.layers.24.self_attn_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
|
299 |
+
"model.decoder.layers.25.fc1.bias": "pytorch_model-00006-of-00014.bin",
|
300 |
+
"model.decoder.layers.25.fc1.weight": "pytorch_model-00006-of-00014.bin",
|
301 |
+
"model.decoder.layers.25.fc2.bias": "pytorch_model-00006-of-00014.bin",
|
302 |
+
"model.decoder.layers.25.fc2.weight": "pytorch_model-00006-of-00014.bin",
|
303 |
+
"model.decoder.layers.25.final_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
|
304 |
+
"model.decoder.layers.25.final_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
|
305 |
+
"model.decoder.layers.25.self_attn.k_proj.bias": "pytorch_model-00006-of-00014.bin",
|
306 |
+
"model.decoder.layers.25.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.out_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.out_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.q_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.v_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.25.self_attn_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.fc1.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.fc1.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.fc2.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.fc2.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.final_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.final_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.k_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.out_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.out_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.q_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.v_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.26.self_attn_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.fc1.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.fc1.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.fc2.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.fc2.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.final_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.final_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.k_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.out_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.out_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.q_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.v_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn_layer_norm.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.27.self_attn_layer_norm.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.28.fc1.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.fc1.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.fc2.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.fc2.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.final_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.final_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn.k_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.28.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.28.self_attn.out_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn.out_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn.q_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn.v_proj.bias": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.28.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
"model.decoder.layers.28.self_attn_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.28.self_attn_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.fc1.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.fc1.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.fc2.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.fc2.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.final_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.final_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.k_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.out_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.out_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.q_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.v_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.29.self_attn_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.3.fc1.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.fc1.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.fc2.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.fc2.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.final_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.final_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.k_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.out_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.out_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.q_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.v_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.3.self_attn_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.30.fc1.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.fc1.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.fc2.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.fc2.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.final_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.final_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.k_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.out_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.out_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.q_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.v_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.30.self_attn_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.fc1.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.fc1.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.fc2.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.fc2.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.final_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.final_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.k_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.out_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.out_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.q_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.v_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.31.self_attn_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.fc1.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.fc1.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.fc2.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.fc2.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.final_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.final_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.k_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.out_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.out_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.q_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.v_proj.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn_layer_norm.bias": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.32.self_attn_layer_norm.weight": "pytorch_model-00007-of-00014.bin",
"model.decoder.layers.33.fc1.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.fc1.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.fc2.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.fc2.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.final_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.final_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.k_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.out_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.out_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.q_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.v_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.33.self_attn_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.fc1.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.fc1.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.fc2.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.fc2.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.final_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.final_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.k_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.out_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.out_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.q_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.v_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.34.self_attn_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.fc1.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.fc1.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.fc2.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.fc2.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.final_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.final_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.k_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.out_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.out_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.q_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.v_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.35.self_attn_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.fc1.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.fc1.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.fc2.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.fc2.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.final_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.final_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.k_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.out_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.out_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.q_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.v_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.36.self_attn_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.fc1.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.fc1.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.fc2.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.37.fc2.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.37.final_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.37.final_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.37.self_attn.k_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.out_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.out_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.q_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.v_proj.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn_layer_norm.bias": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.37.self_attn_layer_norm.weight": "pytorch_model-00008-of-00014.bin",
"model.decoder.layers.38.fc1.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.fc1.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.fc2.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.fc2.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.final_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.final_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.k_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.out_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.out_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.q_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.v_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.38.self_attn_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.fc1.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.fc1.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.fc2.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.fc2.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.final_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.final_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.k_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.out_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.out_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.q_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.v_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.39.self_attn_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.4.fc1.bias": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.fc1.weight": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.fc2.bias": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.fc2.weight": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.final_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.final_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
"model.decoder.layers.4.self_attn.k_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.out_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.out_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.q_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.v_proj.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn_layer_norm.bias": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.4.self_attn_layer_norm.weight": "pytorch_model-00001-of-00014.bin",
"model.decoder.layers.40.fc1.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.fc1.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.fc2.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.fc2.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.final_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.final_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.k_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.out_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.out_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.q_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.v_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.40.self_attn_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.fc1.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.fc1.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.fc2.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.fc2.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.final_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.final_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.k_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.out_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.out_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.q_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.v_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.41.self_attn_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.fc1.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.fc1.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.fc2.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.fc2.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.final_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.final_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.42.self_attn.k_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.out_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.out_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.q_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.v_proj.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn_layer_norm.bias": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.42.self_attn_layer_norm.weight": "pytorch_model-00009-of-00014.bin",
"model.decoder.layers.43.fc1.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.fc1.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.fc2.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.fc2.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.final_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.final_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.k_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.out_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.out_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.q_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.v_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.43.self_attn_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.fc1.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.fc1.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.fc2.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.fc2.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.final_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.final_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.k_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.out_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.out_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.q_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.v_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.44.self_attn_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.fc1.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.fc1.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.fc2.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.fc2.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.final_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.final_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.k_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.out_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.out_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.q_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.v_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.45.self_attn_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.fc1.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.fc1.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.fc2.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.fc2.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.final_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.final_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.k_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.out_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.out_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.q_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.v_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn_layer_norm.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.46.self_attn_layer_norm.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.47.fc1.bias": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.fc1.weight": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.fc2.bias": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.fc2.weight": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.final_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.final_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.self_attn.k_proj.bias": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.47.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
"model.decoder.layers.47.self_attn.out_proj.bias": "pytorch_model-00011-of-00014.bin",
"model.decoder.layers.47.self_attn.out_proj.weight": "pytorch_model-00011-of-00014.bin",
|
693 |
+
"model.decoder.layers.47.self_attn.q_proj.bias": "pytorch_model-00011-of-00014.bin",
|
694 |
+
"model.decoder.layers.47.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
|
695 |
+
"model.decoder.layers.47.self_attn.v_proj.bias": "pytorch_model-00010-of-00014.bin",
|
696 |
+
"model.decoder.layers.47.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
|
697 |
+
"model.decoder.layers.47.self_attn_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
698 |
+
"model.decoder.layers.47.self_attn_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
699 |
+
"model.decoder.layers.48.fc1.bias": "pytorch_model-00011-of-00014.bin",
|
700 |
+
"model.decoder.layers.48.fc1.weight": "pytorch_model-00011-of-00014.bin",
|
701 |
+
"model.decoder.layers.48.fc2.bias": "pytorch_model-00011-of-00014.bin",
|
702 |
+
"model.decoder.layers.48.fc2.weight": "pytorch_model-00011-of-00014.bin",
|
703 |
+
"model.decoder.layers.48.final_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
704 |
+
"model.decoder.layers.48.final_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
705 |
+
"model.decoder.layers.48.self_attn.k_proj.bias": "pytorch_model-00011-of-00014.bin",
|
706 |
+
"model.decoder.layers.48.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
|
707 |
+
"model.decoder.layers.48.self_attn.out_proj.bias": "pytorch_model-00011-of-00014.bin",
|
708 |
+
"model.decoder.layers.48.self_attn.out_proj.weight": "pytorch_model-00011-of-00014.bin",
|
709 |
+
"model.decoder.layers.48.self_attn.q_proj.bias": "pytorch_model-00011-of-00014.bin",
|
710 |
+
"model.decoder.layers.48.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
|
711 |
+
"model.decoder.layers.48.self_attn.v_proj.bias": "pytorch_model-00011-of-00014.bin",
|
712 |
+
"model.decoder.layers.48.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
|
713 |
+
"model.decoder.layers.48.self_attn_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
714 |
+
"model.decoder.layers.48.self_attn_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
715 |
+
"model.decoder.layers.49.fc1.bias": "pytorch_model-00011-of-00014.bin",
|
716 |
+
"model.decoder.layers.49.fc1.weight": "pytorch_model-00011-of-00014.bin",
|
717 |
+
"model.decoder.layers.49.fc2.bias": "pytorch_model-00011-of-00014.bin",
|
718 |
+
"model.decoder.layers.49.fc2.weight": "pytorch_model-00011-of-00014.bin",
|
719 |
+
"model.decoder.layers.49.final_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
720 |
+
"model.decoder.layers.49.final_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
721 |
+
"model.decoder.layers.49.self_attn.k_proj.bias": "pytorch_model-00011-of-00014.bin",
|
722 |
+
"model.decoder.layers.49.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
|
723 |
+
"model.decoder.layers.49.self_attn.out_proj.bias": "pytorch_model-00011-of-00014.bin",
|
724 |
+
"model.decoder.layers.49.self_attn.out_proj.weight": "pytorch_model-00011-of-00014.bin",
|
725 |
+
"model.decoder.layers.49.self_attn.q_proj.bias": "pytorch_model-00011-of-00014.bin",
|
726 |
+
"model.decoder.layers.49.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
|
727 |
+
"model.decoder.layers.49.self_attn.v_proj.bias": "pytorch_model-00011-of-00014.bin",
|
728 |
+
"model.decoder.layers.49.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
|
729 |
+
"model.decoder.layers.49.self_attn_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
730 |
+
"model.decoder.layers.49.self_attn_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
731 |
+
"model.decoder.layers.5.fc1.bias": "pytorch_model-00002-of-00014.bin",
|
732 |
+
"model.decoder.layers.5.fc1.weight": "pytorch_model-00002-of-00014.bin",
|
733 |
+
"model.decoder.layers.5.fc2.bias": "pytorch_model-00002-of-00014.bin",
|
734 |
+
"model.decoder.layers.5.fc2.weight": "pytorch_model-00002-of-00014.bin",
|
735 |
+
"model.decoder.layers.5.final_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
|
736 |
+
"model.decoder.layers.5.final_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
|
737 |
+
"model.decoder.layers.5.self_attn.k_proj.bias": "pytorch_model-00002-of-00014.bin",
|
738 |
+
"model.decoder.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
|
739 |
+
"model.decoder.layers.5.self_attn.out_proj.bias": "pytorch_model-00002-of-00014.bin",
|
740 |
+
"model.decoder.layers.5.self_attn.out_proj.weight": "pytorch_model-00002-of-00014.bin",
|
741 |
+
"model.decoder.layers.5.self_attn.q_proj.bias": "pytorch_model-00002-of-00014.bin",
|
742 |
+
"model.decoder.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
|
743 |
+
"model.decoder.layers.5.self_attn.v_proj.bias": "pytorch_model-00002-of-00014.bin",
|
744 |
+
"model.decoder.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
|
745 |
+
"model.decoder.layers.5.self_attn_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
|
746 |
+
"model.decoder.layers.5.self_attn_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
|
747 |
+
"model.decoder.layers.50.fc1.bias": "pytorch_model-00011-of-00014.bin",
|
748 |
+
"model.decoder.layers.50.fc1.weight": "pytorch_model-00011-of-00014.bin",
|
749 |
+
"model.decoder.layers.50.fc2.bias": "pytorch_model-00011-of-00014.bin",
|
750 |
+
"model.decoder.layers.50.fc2.weight": "pytorch_model-00011-of-00014.bin",
|
751 |
+
"model.decoder.layers.50.final_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
752 |
+
"model.decoder.layers.50.final_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
753 |
+
"model.decoder.layers.50.self_attn.k_proj.bias": "pytorch_model-00011-of-00014.bin",
|
754 |
+
"model.decoder.layers.50.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
|
755 |
+
"model.decoder.layers.50.self_attn.out_proj.bias": "pytorch_model-00011-of-00014.bin",
|
756 |
+
"model.decoder.layers.50.self_attn.out_proj.weight": "pytorch_model-00011-of-00014.bin",
|
757 |
+
"model.decoder.layers.50.self_attn.q_proj.bias": "pytorch_model-00011-of-00014.bin",
|
758 |
+
"model.decoder.layers.50.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
|
759 |
+
"model.decoder.layers.50.self_attn.v_proj.bias": "pytorch_model-00011-of-00014.bin",
|
760 |
+
"model.decoder.layers.50.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
|
761 |
+
"model.decoder.layers.50.self_attn_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
762 |
+
"model.decoder.layers.50.self_attn_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
763 |
+
"model.decoder.layers.51.fc1.bias": "pytorch_model-00011-of-00014.bin",
|
764 |
+
"model.decoder.layers.51.fc1.weight": "pytorch_model-00011-of-00014.bin",
|
765 |
+
"model.decoder.layers.51.fc2.bias": "pytorch_model-00011-of-00014.bin",
|
766 |
+
"model.decoder.layers.51.fc2.weight": "pytorch_model-00011-of-00014.bin",
|
767 |
+
"model.decoder.layers.51.final_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
768 |
+
"model.decoder.layers.51.final_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
769 |
+
"model.decoder.layers.51.self_attn.k_proj.bias": "pytorch_model-00011-of-00014.bin",
|
770 |
+
"model.decoder.layers.51.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
|
771 |
+
"model.decoder.layers.51.self_attn.out_proj.bias": "pytorch_model-00011-of-00014.bin",
|
772 |
+
"model.decoder.layers.51.self_attn.out_proj.weight": "pytorch_model-00011-of-00014.bin",
|
773 |
+
"model.decoder.layers.51.self_attn.q_proj.bias": "pytorch_model-00011-of-00014.bin",
|
774 |
+
"model.decoder.layers.51.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
|
775 |
+
"model.decoder.layers.51.self_attn.v_proj.bias": "pytorch_model-00011-of-00014.bin",
|
776 |
+
"model.decoder.layers.51.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
|
777 |
+
"model.decoder.layers.51.self_attn_layer_norm.bias": "pytorch_model-00011-of-00014.bin",
|
778 |
+
"model.decoder.layers.51.self_attn_layer_norm.weight": "pytorch_model-00011-of-00014.bin",
|
779 |
+
"model.decoder.layers.52.fc1.bias": "pytorch_model-00012-of-00014.bin",
|
780 |
+
"model.decoder.layers.52.fc1.weight": "pytorch_model-00012-of-00014.bin",
|
781 |
+
"model.decoder.layers.52.fc2.bias": "pytorch_model-00012-of-00014.bin",
|
782 |
+
"model.decoder.layers.52.fc2.weight": "pytorch_model-00012-of-00014.bin",
|
783 |
+
"model.decoder.layers.52.final_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
784 |
+
"model.decoder.layers.52.final_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
785 |
+
"model.decoder.layers.52.self_attn.k_proj.bias": "pytorch_model-00012-of-00014.bin",
|
786 |
+
"model.decoder.layers.52.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
|
787 |
+
"model.decoder.layers.52.self_attn.out_proj.bias": "pytorch_model-00012-of-00014.bin",
|
788 |
+
"model.decoder.layers.52.self_attn.out_proj.weight": "pytorch_model-00012-of-00014.bin",
|
789 |
+
"model.decoder.layers.52.self_attn.q_proj.bias": "pytorch_model-00012-of-00014.bin",
|
790 |
+
"model.decoder.layers.52.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
|
791 |
+
"model.decoder.layers.52.self_attn.v_proj.bias": "pytorch_model-00012-of-00014.bin",
|
792 |
+
"model.decoder.layers.52.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
|
793 |
+
"model.decoder.layers.52.self_attn_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
794 |
+
"model.decoder.layers.52.self_attn_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
795 |
+
"model.decoder.layers.53.fc1.bias": "pytorch_model-00012-of-00014.bin",
|
796 |
+
"model.decoder.layers.53.fc1.weight": "pytorch_model-00012-of-00014.bin",
|
797 |
+
"model.decoder.layers.53.fc2.bias": "pytorch_model-00012-of-00014.bin",
|
798 |
+
"model.decoder.layers.53.fc2.weight": "pytorch_model-00012-of-00014.bin",
|
799 |
+
"model.decoder.layers.53.final_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
800 |
+
"model.decoder.layers.53.final_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
801 |
+
"model.decoder.layers.53.self_attn.k_proj.bias": "pytorch_model-00012-of-00014.bin",
|
802 |
+
"model.decoder.layers.53.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
|
803 |
+
"model.decoder.layers.53.self_attn.out_proj.bias": "pytorch_model-00012-of-00014.bin",
|
804 |
+
"model.decoder.layers.53.self_attn.out_proj.weight": "pytorch_model-00012-of-00014.bin",
|
805 |
+
"model.decoder.layers.53.self_attn.q_proj.bias": "pytorch_model-00012-of-00014.bin",
|
806 |
+
"model.decoder.layers.53.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
|
807 |
+
"model.decoder.layers.53.self_attn.v_proj.bias": "pytorch_model-00012-of-00014.bin",
|
808 |
+
"model.decoder.layers.53.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
|
809 |
+
"model.decoder.layers.53.self_attn_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
810 |
+
"model.decoder.layers.53.self_attn_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
811 |
+
"model.decoder.layers.54.fc1.bias": "pytorch_model-00012-of-00014.bin",
|
812 |
+
"model.decoder.layers.54.fc1.weight": "pytorch_model-00012-of-00014.bin",
|
813 |
+
"model.decoder.layers.54.fc2.bias": "pytorch_model-00012-of-00014.bin",
|
814 |
+
"model.decoder.layers.54.fc2.weight": "pytorch_model-00012-of-00014.bin",
|
815 |
+
"model.decoder.layers.54.final_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
816 |
+
"model.decoder.layers.54.final_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
817 |
+
"model.decoder.layers.54.self_attn.k_proj.bias": "pytorch_model-00012-of-00014.bin",
|
818 |
+
"model.decoder.layers.54.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
|
819 |
+
"model.decoder.layers.54.self_attn.out_proj.bias": "pytorch_model-00012-of-00014.bin",
|
820 |
+
"model.decoder.layers.54.self_attn.out_proj.weight": "pytorch_model-00012-of-00014.bin",
|
821 |
+
"model.decoder.layers.54.self_attn.q_proj.bias": "pytorch_model-00012-of-00014.bin",
|
822 |
+
"model.decoder.layers.54.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
|
823 |
+
"model.decoder.layers.54.self_attn.v_proj.bias": "pytorch_model-00012-of-00014.bin",
|
824 |
+
"model.decoder.layers.54.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
|
825 |
+
"model.decoder.layers.54.self_attn_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
826 |
+
"model.decoder.layers.54.self_attn_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
827 |
+
"model.decoder.layers.55.fc1.bias": "pytorch_model-00012-of-00014.bin",
|
828 |
+
"model.decoder.layers.55.fc1.weight": "pytorch_model-00012-of-00014.bin",
|
829 |
+
"model.decoder.layers.55.fc2.bias": "pytorch_model-00012-of-00014.bin",
|
830 |
+
"model.decoder.layers.55.fc2.weight": "pytorch_model-00012-of-00014.bin",
|
831 |
+
"model.decoder.layers.55.final_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
832 |
+
"model.decoder.layers.55.final_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
833 |
+
"model.decoder.layers.55.self_attn.k_proj.bias": "pytorch_model-00012-of-00014.bin",
|
834 |
+
"model.decoder.layers.55.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
|
835 |
+
"model.decoder.layers.55.self_attn.out_proj.bias": "pytorch_model-00012-of-00014.bin",
|
836 |
+
"model.decoder.layers.55.self_attn.out_proj.weight": "pytorch_model-00012-of-00014.bin",
|
837 |
+
"model.decoder.layers.55.self_attn.q_proj.bias": "pytorch_model-00012-of-00014.bin",
|
838 |
+
"model.decoder.layers.55.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
|
839 |
+
"model.decoder.layers.55.self_attn.v_proj.bias": "pytorch_model-00012-of-00014.bin",
|
840 |
+
"model.decoder.layers.55.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
|
841 |
+
"model.decoder.layers.55.self_attn_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
842 |
+
"model.decoder.layers.55.self_attn_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
843 |
+
"model.decoder.layers.56.fc1.bias": "pytorch_model-00012-of-00014.bin",
|
844 |
+
"model.decoder.layers.56.fc1.weight": "pytorch_model-00012-of-00014.bin",
|
845 |
+
"model.decoder.layers.56.fc2.bias": "pytorch_model-00013-of-00014.bin",
|
846 |
+
"model.decoder.layers.56.fc2.weight": "pytorch_model-00013-of-00014.bin",
|
847 |
+
"model.decoder.layers.56.final_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
848 |
+
"model.decoder.layers.56.final_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
849 |
+
"model.decoder.layers.56.self_attn.k_proj.bias": "pytorch_model-00012-of-00014.bin",
|
850 |
+
"model.decoder.layers.56.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
|
851 |
+
"model.decoder.layers.56.self_attn.out_proj.bias": "pytorch_model-00012-of-00014.bin",
|
852 |
+
"model.decoder.layers.56.self_attn.out_proj.weight": "pytorch_model-00012-of-00014.bin",
|
853 |
+
"model.decoder.layers.56.self_attn.q_proj.bias": "pytorch_model-00012-of-00014.bin",
|
854 |
+
"model.decoder.layers.56.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
|
855 |
+
"model.decoder.layers.56.self_attn.v_proj.bias": "pytorch_model-00012-of-00014.bin",
|
856 |
+
"model.decoder.layers.56.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
|
857 |
+
"model.decoder.layers.56.self_attn_layer_norm.bias": "pytorch_model-00012-of-00014.bin",
|
858 |
+
"model.decoder.layers.56.self_attn_layer_norm.weight": "pytorch_model-00012-of-00014.bin",
|
859 |
+
"model.decoder.layers.57.fc1.bias": "pytorch_model-00013-of-00014.bin",
|
860 |
+
"model.decoder.layers.57.fc1.weight": "pytorch_model-00013-of-00014.bin",
|
861 |
+
"model.decoder.layers.57.fc2.bias": "pytorch_model-00013-of-00014.bin",
|
862 |
+
"model.decoder.layers.57.fc2.weight": "pytorch_model-00013-of-00014.bin",
|
863 |
+
"model.decoder.layers.57.final_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
864 |
+
"model.decoder.layers.57.final_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
865 |
+
"model.decoder.layers.57.self_attn.k_proj.bias": "pytorch_model-00013-of-00014.bin",
|
866 |
+
"model.decoder.layers.57.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
|
867 |
+
"model.decoder.layers.57.self_attn.out_proj.bias": "pytorch_model-00013-of-00014.bin",
|
868 |
+
"model.decoder.layers.57.self_attn.out_proj.weight": "pytorch_model-00013-of-00014.bin",
|
869 |
+
"model.decoder.layers.57.self_attn.q_proj.bias": "pytorch_model-00013-of-00014.bin",
|
870 |
+
"model.decoder.layers.57.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
|
871 |
+
"model.decoder.layers.57.self_attn.v_proj.bias": "pytorch_model-00013-of-00014.bin",
|
872 |
+
"model.decoder.layers.57.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
|
873 |
+
"model.decoder.layers.57.self_attn_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
874 |
+
"model.decoder.layers.57.self_attn_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
875 |
+
"model.decoder.layers.58.fc1.bias": "pytorch_model-00013-of-00014.bin",
|
876 |
+
"model.decoder.layers.58.fc1.weight": "pytorch_model-00013-of-00014.bin",
|
877 |
+
"model.decoder.layers.58.fc2.bias": "pytorch_model-00013-of-00014.bin",
|
878 |
+
"model.decoder.layers.58.fc2.weight": "pytorch_model-00013-of-00014.bin",
|
879 |
+
"model.decoder.layers.58.final_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
880 |
+
"model.decoder.layers.58.final_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
881 |
+
"model.decoder.layers.58.self_attn.k_proj.bias": "pytorch_model-00013-of-00014.bin",
|
882 |
+
"model.decoder.layers.58.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
|
883 |
+
"model.decoder.layers.58.self_attn.out_proj.bias": "pytorch_model-00013-of-00014.bin",
|
884 |
+
"model.decoder.layers.58.self_attn.out_proj.weight": "pytorch_model-00013-of-00014.bin",
|
885 |
+
"model.decoder.layers.58.self_attn.q_proj.bias": "pytorch_model-00013-of-00014.bin",
|
886 |
+
"model.decoder.layers.58.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
|
887 |
+
"model.decoder.layers.58.self_attn.v_proj.bias": "pytorch_model-00013-of-00014.bin",
|
888 |
+
"model.decoder.layers.58.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
|
889 |
+
"model.decoder.layers.58.self_attn_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
890 |
+
"model.decoder.layers.58.self_attn_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
891 |
+
"model.decoder.layers.59.fc1.bias": "pytorch_model-00013-of-00014.bin",
|
892 |
+
"model.decoder.layers.59.fc1.weight": "pytorch_model-00013-of-00014.bin",
|
893 |
+
"model.decoder.layers.59.fc2.bias": "pytorch_model-00013-of-00014.bin",
|
894 |
+
"model.decoder.layers.59.fc2.weight": "pytorch_model-00013-of-00014.bin",
|
895 |
+
"model.decoder.layers.59.final_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
896 |
+
"model.decoder.layers.59.final_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
897 |
+
"model.decoder.layers.59.self_attn.k_proj.bias": "pytorch_model-00013-of-00014.bin",
|
898 |
+
"model.decoder.layers.59.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
|
899 |
+
"model.decoder.layers.59.self_attn.out_proj.bias": "pytorch_model-00013-of-00014.bin",
|
900 |
+
"model.decoder.layers.59.self_attn.out_proj.weight": "pytorch_model-00013-of-00014.bin",
|
901 |
+
"model.decoder.layers.59.self_attn.q_proj.bias": "pytorch_model-00013-of-00014.bin",
|
902 |
+
"model.decoder.layers.59.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
|
903 |
+
"model.decoder.layers.59.self_attn.v_proj.bias": "pytorch_model-00013-of-00014.bin",
|
904 |
+
"model.decoder.layers.59.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
|
905 |
+
"model.decoder.layers.59.self_attn_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
906 |
+
"model.decoder.layers.59.self_attn_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
907 |
+
"model.decoder.layers.6.fc1.bias": "pytorch_model-00002-of-00014.bin",
|
908 |
+
"model.decoder.layers.6.fc1.weight": "pytorch_model-00002-of-00014.bin",
|
909 |
+
"model.decoder.layers.6.fc2.bias": "pytorch_model-00002-of-00014.bin",
|
910 |
+
"model.decoder.layers.6.fc2.weight": "pytorch_model-00002-of-00014.bin",
|
911 |
+
"model.decoder.layers.6.final_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
|
912 |
+
"model.decoder.layers.6.final_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
|
913 |
+
"model.decoder.layers.6.self_attn.k_proj.bias": "pytorch_model-00002-of-00014.bin",
|
914 |
+
"model.decoder.layers.6.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
|
915 |
+
"model.decoder.layers.6.self_attn.out_proj.bias": "pytorch_model-00002-of-00014.bin",
|
916 |
+
"model.decoder.layers.6.self_attn.out_proj.weight": "pytorch_model-00002-of-00014.bin",
|
917 |
+
"model.decoder.layers.6.self_attn.q_proj.bias": "pytorch_model-00002-of-00014.bin",
|
918 |
+
"model.decoder.layers.6.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
|
919 |
+
"model.decoder.layers.6.self_attn.v_proj.bias": "pytorch_model-00002-of-00014.bin",
|
920 |
+
"model.decoder.layers.6.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
|
921 |
+
"model.decoder.layers.6.self_attn_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
|
922 |
+
"model.decoder.layers.6.self_attn_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
|
923 |
+
"model.decoder.layers.60.fc1.bias": "pytorch_model-00013-of-00014.bin",
|
924 |
+
"model.decoder.layers.60.fc1.weight": "pytorch_model-00013-of-00014.bin",
|
925 |
+
"model.decoder.layers.60.fc2.bias": "pytorch_model-00013-of-00014.bin",
|
926 |
+
"model.decoder.layers.60.fc2.weight": "pytorch_model-00013-of-00014.bin",
|
927 |
+
"model.decoder.layers.60.final_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
928 |
+
"model.decoder.layers.60.final_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
929 |
+
"model.decoder.layers.60.self_attn.k_proj.bias": "pytorch_model-00013-of-00014.bin",
|
930 |
+
"model.decoder.layers.60.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
|
931 |
+
"model.decoder.layers.60.self_attn.out_proj.bias": "pytorch_model-00013-of-00014.bin",
|
932 |
+
"model.decoder.layers.60.self_attn.out_proj.weight": "pytorch_model-00013-of-00014.bin",
|
933 |
+
"model.decoder.layers.60.self_attn.q_proj.bias": "pytorch_model-00013-of-00014.bin",
|
934 |
+
"model.decoder.layers.60.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
|
935 |
+
"model.decoder.layers.60.self_attn.v_proj.bias": "pytorch_model-00013-of-00014.bin",
|
936 |
+
"model.decoder.layers.60.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
|
937 |
+
"model.decoder.layers.60.self_attn_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
938 |
+
"model.decoder.layers.60.self_attn_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
939 |
+
"model.decoder.layers.61.fc1.bias": "pytorch_model-00014-of-00014.bin",
|
940 |
+
"model.decoder.layers.61.fc1.weight": "pytorch_model-00014-of-00014.bin",
|
941 |
+
"model.decoder.layers.61.fc2.bias": "pytorch_model-00014-of-00014.bin",
|
942 |
+
"model.decoder.layers.61.fc2.weight": "pytorch_model-00014-of-00014.bin",
|
943 |
+
"model.decoder.layers.61.final_layer_norm.bias": "pytorch_model-00014-of-00014.bin",
|
944 |
+
"model.decoder.layers.61.final_layer_norm.weight": "pytorch_model-00014-of-00014.bin",
|
945 |
+
"model.decoder.layers.61.self_attn.k_proj.bias": "pytorch_model-00013-of-00014.bin",
|
946 |
+
"model.decoder.layers.61.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
|
947 |
+
"model.decoder.layers.61.self_attn.out_proj.bias": "pytorch_model-00013-of-00014.bin",
|
948 |
+
"model.decoder.layers.61.self_attn.out_proj.weight": "pytorch_model-00013-of-00014.bin",
|
949 |
+
"model.decoder.layers.61.self_attn.q_proj.bias": "pytorch_model-00013-of-00014.bin",
|
950 |
+
"model.decoder.layers.61.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
|
951 |
+
"model.decoder.layers.61.self_attn.v_proj.bias": "pytorch_model-00013-of-00014.bin",
|
952 |
+
"model.decoder.layers.61.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
|
953 |
+
"model.decoder.layers.61.self_attn_layer_norm.bias": "pytorch_model-00013-of-00014.bin",
|
954 |
+
"model.decoder.layers.61.self_attn_layer_norm.weight": "pytorch_model-00013-of-00014.bin",
|
955 |
+
"model.decoder.layers.62.fc1.bias": "pytorch_model-00014-of-00014.bin",
|
956 |
+
"model.decoder.layers.62.fc1.weight": "pytorch_model-00014-of-00014.bin",
|
957 |
+
"model.decoder.layers.62.fc2.bias": "pytorch_model-00014-of-00014.bin",
|
958 |
+
"model.decoder.layers.62.fc2.weight": "pytorch_model-00014-of-00014.bin",
|
959 |
+
"model.decoder.layers.62.final_layer_norm.bias": "pytorch_model-00014-of-00014.bin",
|
960 |
+
"model.decoder.layers.62.final_layer_norm.weight": "pytorch_model-00014-of-00014.bin",
|
961 |
+
"model.decoder.layers.62.self_attn.k_proj.bias": "pytorch_model-00014-of-00014.bin",
|
962 |
+
"model.decoder.layers.62.self_attn.k_proj.weight": "pytorch_model-00014-of-00014.bin",
|
963 |
+
"model.decoder.layers.62.self_attn.out_proj.bias": "pytorch_model-00014-of-00014.bin",
|
964 |
+
"model.decoder.layers.62.self_attn.out_proj.weight": "pytorch_model-00014-of-00014.bin",
|
965 |
+
"model.decoder.layers.62.self_attn.q_proj.bias": "pytorch_model-00014-of-00014.bin",
|
966 |
+
"model.decoder.layers.62.self_attn.q_proj.weight": "pytorch_model-00014-of-00014.bin",
|
967 |
+
"model.decoder.layers.62.self_attn.v_proj.bias": "pytorch_model-00014-of-00014.bin",
|
968 |
+
"model.decoder.layers.62.self_attn.v_proj.weight": "pytorch_model-00014-of-00014.bin",
|
969 |
+
"model.decoder.layers.62.self_attn_layer_norm.bias": "pytorch_model-00014-of-00014.bin",
|
970 |
+
"model.decoder.layers.62.self_attn_layer_norm.weight": "pytorch_model-00014-of-00014.bin",
|
971 |
+
"model.decoder.layers.63.fc1.bias": "pytorch_model-00014-of-00014.bin",
|
972 |
+
"model.decoder.layers.63.fc1.weight": "pytorch_model-00014-of-00014.bin",
|
973 |
+
"model.decoder.layers.63.fc2.bias": "pytorch_model-00014-of-00014.bin",
|
974 |
+
"model.decoder.layers.63.fc2.weight": "pytorch_model-00014-of-00014.bin",
|
975 |
+
"model.decoder.layers.63.final_layer_norm.bias": "pytorch_model-00014-of-00014.bin",
|
976 |
+
"model.decoder.layers.63.final_layer_norm.weight": "pytorch_model-00014-of-00014.bin",
|
977 |
+
"model.decoder.layers.63.self_attn.k_proj.bias": "pytorch_model-00014-of-00014.bin",
|
978 |
+
"model.decoder.layers.63.self_attn.k_proj.weight": "pytorch_model-00014-of-00014.bin",
|
979 |
+
"model.decoder.layers.63.self_attn.out_proj.bias": "pytorch_model-00014-of-00014.bin",
|
980 |
+
"model.decoder.layers.63.self_attn.out_proj.weight": "pytorch_model-00014-of-00014.bin",
|
981 |
+
"model.decoder.layers.63.self_attn.q_proj.bias": "pytorch_model-00014-of-00014.bin",
|
982 |
+
"model.decoder.layers.63.self_attn.q_proj.weight": "pytorch_model-00014-of-00014.bin",
|
983 |
+
"model.decoder.layers.63.self_attn.v_proj.bias": "pytorch_model-00014-of-00014.bin",
|
984 |
+
"model.decoder.layers.63.self_attn.v_proj.weight": "pytorch_model-00014-of-00014.bin",
|
985 |
+
"model.decoder.layers.63.self_attn_layer_norm.bias": "pytorch_model-00014-of-00014.bin",
|
986 |
+
"model.decoder.layers.63.self_attn_layer_norm.weight": "pytorch_model-00014-of-00014.bin",
|
987 |
+
"model.decoder.layers.7.fc1.bias": "pytorch_model-00002-of-00014.bin",
|
988 |
+
"model.decoder.layers.7.fc1.weight": "pytorch_model-00002-of-00014.bin",
|
989 |
+
"model.decoder.layers.7.fc2.bias": "pytorch_model-00002-of-00014.bin",
|
990 |
+
"model.decoder.layers.7.fc2.weight": "pytorch_model-00002-of-00014.bin",
|
991 |
+
"model.decoder.layers.7.final_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
|
992 |
+
  "model.decoder.layers.7.final_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.k_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.out_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.out_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.q_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.v_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.7.self_attn_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.fc1.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.fc1.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.fc2.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.fc2.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.final_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.final_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.k_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.out_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.out_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.q_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.v_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn_layer_norm.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.8.self_attn_layer_norm.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.9.fc1.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.fc1.weight": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.fc2.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.fc2.weight": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.final_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.final_layer_norm.weight": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn.k_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.9.self_attn.out_proj.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn.out_proj.weight": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn.q_proj.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn.v_proj.bias": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
  "model.decoder.layers.9.self_attn_layer_norm.bias": "pytorch_model-00003-of-00014.bin",
  "model.decoder.layers.9.self_attn_layer_norm.weight": "pytorch_model-00003-of-00014.bin"
  }
}
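The `weight_map` in `pytorch_model.bin.index.json` assigns each parameter name to one of the 14 shard files; note that layer 9's tensors straddle shards 2 and 3 (its `k_proj` and `v_proj` weights live in shard 2, the rest in shard 3). A minimal sketch of how a loader can use this map to decide which shards to open — using a hand-copied excerpt of the index above rather than reading the real file, and a hypothetical helper `shards_for`:

```python
# Sketch: resolve which shard files hold a given layer's parameters,
# using an excerpt of the "weight_map" from pytorch_model.bin.index.json.
index = {
    "weight_map": {
        "model.decoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
        "model.decoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
        "model.decoder.layers.9.fc1.weight": "pytorch_model-00003-of-00014.bin",
    }
}

def shards_for(weight_map, prefix):
    """Return the sorted set of shard files holding parameters under `prefix`."""
    return sorted({f for name, f in weight_map.items() if name.startswith(prefix)})

# Layer 9's parameters span two shards, so both must be loaded to
# materialize the full layer.
print(shards_for(index["weight_map"], "model.decoder.layers.9."))
# → ['pytorch_model-00002-of-00014.bin', 'pytorch_model-00003-of-00014.bin']
```

In practice `transformers` performs this resolution internally when loading a sharded checkpoint; the sketch only illustrates what the index encodes.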
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"errors": "replace", "unk_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "add_bos_token": true, "special_tokens_map_file": null, "name_or_path": "patrickvonplaten/opt-30b"}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff