v2ray committed
Commit 3363bac
1 Parent(s): 042dfd0

Fixed LoRA targeting.
This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. LICENSE.txt +176 -0
  2. NOTICE.txt +1 -0
  3. README.md +159 -0
  4. __init__.py +2 -0
  5. config.json +39 -0
  6. configuration_dbrx.py +264 -0
  7. generation_config.json +4 -0
  8. model-00001-of-00054.safetensors +3 -0
  9. model-00002-of-00054.safetensors +3 -0
  10. model-00003-of-00054.safetensors +3 -0
  11. model-00004-of-00054.safetensors +3 -0
  12. model-00005-of-00054.safetensors +3 -0
  13. model-00006-of-00054.safetensors +3 -0
  14. model-00007-of-00054.safetensors +3 -0
  15. model-00008-of-00054.safetensors +3 -0
  16. model-00009-of-00054.safetensors +3 -0
  17. model-00010-of-00054.safetensors +3 -0
  18. model-00011-of-00054.safetensors +3 -0
  19. model-00012-of-00054.safetensors +3 -0
  20. model-00013-of-00054.safetensors +3 -0
  21. model-00014-of-00054.safetensors +3 -0
  22. model-00015-of-00054.safetensors +3 -0
  23. model-00016-of-00054.safetensors +3 -0
  24. model-00017-of-00054.safetensors +3 -0
  25. model-00018-of-00054.safetensors +3 -0
  26. model-00019-of-00054.safetensors +3 -0
  27. model-00020-of-00054.safetensors +3 -0
  28. model-00021-of-00054.safetensors +3 -0
  29. model-00022-of-00054.safetensors +3 -0
  30. model-00023-of-00054.safetensors +3 -0
  31. model-00024-of-00054.safetensors +3 -0
  32. model-00025-of-00054.safetensors +3 -0
  33. model-00026-of-00054.safetensors +3 -0
  34. model-00027-of-00054.safetensors +3 -0
  35. model-00028-of-00054.safetensors +3 -0
  36. model-00029-of-00054.safetensors +3 -0
  37. model-00030-of-00054.safetensors +3 -0
  38. model-00031-of-00054.safetensors +3 -0
  39. model-00032-of-00054.safetensors +3 -0
  40. model-00033-of-00054.safetensors +3 -0
  41. model-00034-of-00054.safetensors +3 -0
  42. model-00035-of-00054.safetensors +3 -0
  43. model-00036-of-00054.safetensors +3 -0
  44. model-00037-of-00054.safetensors +3 -0
  45. model-00038-of-00054.safetensors +3 -0
  46. model-00039-of-00054.safetensors +3 -0
  47. model-00040-of-00054.safetensors +3 -0
  48. model-00041-of-00054.safetensors +3 -0
  49. model-00042-of-00054.safetensors +3 -0
  50. model-00043-of-00054.safetensors +3 -0
LICENSE.txt ADDED
@@ -0,0 +1,176 @@
+ Databricks Open Model License
+
+ By using, reproducing, modifying, distributing, performing or displaying
+ any portion or element of DBRX or DBRX Derivatives, or otherwise accepting
+ the terms of this Agreement, you agree to be bound by this Agreement.
+
+ Version Release Date: March 27, 2024
+
+
+ Section 1: Definitions
+
+ “Agreement” means these terms and conditions that govern the use, reproduction,
+ modification, distribution, performance or display of DBRX and/or DBRX
+ Derivatives and any terms and conditions incorporated by reference.
+
+ “Databricks” or “we” means Databricks, Inc.
+
+ “Licensee” or “you” means you, or your employer or any other person or entity
+ (if you are entering into this Agreement on such person or entity’s behalf),
+ of the age required under applicable laws, rules or regulations to provide
+ legal consent and that has legal authority to bind your employer or such other
+ person or entity if you are entering in this Agreement on their behalf.
+
+ “DBRX Derivatives” means all (i) modifications to DBRX, (ii) works based on
+ DBRX and (iii) any other derivative works thereof. Outputs are not deemed DBRX
+ Derivatives.
+
+ “DBRX” means the foundational large language models and software and
+ algorithms, including machine-learning model code, trained model weights,
+ inference-enabling code, training-enabling code, fine-tuning enabling code,
+ documentation and other elements of the foregoing identified by Databricks at
+ https://github.com/databricks/dbrx, regardless of the source that you obtained
+ it from.
+
+ “Output” means the results of operating DBRX or DBRX Derivatives.
+
+ As used in this Agreement, “including” means “including without limitation.”
+
+
+ Section 2: License Rights and Conditions on Use and Distribution
+
+ 2.1 Grant of Rights
+
+ You are granted a non-exclusive, worldwide, non-transferable and royalty-free
+ limited license under Databricks’ intellectual property or other rights owned
+ by Databricks embodied in DBRX to use, reproduce, distribute, copy, modify,
+ and create derivative works of DBRX in accordance with the terms of this
+ Agreement.
+
+ 2.2 Reproduction and Distribution
+
+ 1. All distributions of DBRX or DBRX Derivatives must be accompanied by a
+ "Notice" text file that contains the following notice: "DBRX is provided
+ under and subject to the Databricks Open Model License, Copyright ©
+ Databricks, Inc. All rights reserved."
+
+ 2. If you distribute or make DBRX or DBRX Derivatives available to a third
+ party, you must provide a copy of this Agreement to such third party.
+
+ 3. You must cause any modified files that you distribute to carry prominent
+ notices stating that you modified the files.
+
+ You may add your own intellectual property statement to your modifications of
+ DBRX and, except as set forth in this Section, may provide additional or
+ different terms and conditions for use, reproduction, or distribution of DBRX
+ or DBRX Derivatives as a whole, provided your use, reproduction, modification,
+ distribution, performance, and display of DBRX or DBRX Derivatives otherwise
+ complies with the terms and conditions of this Agreement. Any additional or
+ different terms and conditions you impose must not conflict with the terms of
+ this Agreement and in the event of a conflict, the terms and conditions of this
+ Agreement shall govern over any such additional or different terms and conditions.
+
+ 2.3 Use Restrictions
+
+ You will not use DBRX or DBRX Derivatives or any Output to improve any other
+ large language model (excluding DBRX or DBRX Derivatives).
+
+ You will not use DBRX or DBRX Derivatives:
+
+ 1. for any restricted use set forth in the Databricks Open Model Acceptable
+ Use Policy identified at
+ https://www.databricks.com/legal/acceptable-use-policy-open-model
+ ("Acceptable Use Policy"), which is hereby incorporated by reference into
+ this Agreement; or
+
+ 2. in violation of applicable laws and regulations.
+
+ To the maximum extent permitted by law, Databricks reserves the right to
+ restrict (remotely or otherwise) usage of DBRX or DBRX Derivatives that
+ Databricks reasonably believes are in violation of this Agreement.
+
+
+ Section 3: Additional Commercial Terms
+
+ If, on the DBRX version release date, the monthly active users of the products
+ or services made available by or for Licensee, or Licensee’s affiliates, is
+ greater than 700 million monthly active users in the preceding calendar month,
+ you must request a license from Databricks, which we may grant to you in our
+ sole discretion, and you are not authorized to exercise any of the rights under
+ this Agreement unless or until Databricks otherwise expressly grants you such
+ rights.
+
+ If you receive DBRX or DBRX Derivatives from a direct or indirect licensee as
+ part of an integrated end user product, then this section (Section 3) of the
+ Agreement will not apply to you.
+
+
+ Section 4: Additional Provisions
+
+ 4.1 Updates
+
+ Databricks may update DBRX from time to time, and you must make reasonable
+ efforts to use the latest version of DBRX.
+
+ 4.2 Intellectual Property
+
+ a. No trademark licenses are granted under this Agreement, and in connection
+ with DBRX or DBRX Derivatives, neither Databricks nor Licensee may use any name
+ or mark owned by or associated with the other or any of its affiliates, except
+ as required for reasonable and customary use in describing and redistributing
+ DBRX or DBRX Derivatives.
+
+ b. Subject to Databricks’ ownership of DBRX and DBRX Derivatives made by or for
+ Databricks, with respect to any DBRX Derivatives that are made by you, as
+ between you and Databricks, you are and will be the owner of such DBRX
+ Derivatives.
+
+ c. Databricks claims no ownership rights in Outputs. You are responsible for
+ Outputs and their subsequent uses.
+
+ d. If you institute litigation or other proceedings against Databricks or any
+ entity (including a cross-claim or counterclaim in a lawsuit) alleging that
+ DBRX or Outputs or results therefrom, or any portion of any of the foregoing,
+ constitutes infringement of intellectual property or other rights owned or
+ licensable by you, then any licenses granted to you under this Agreement shall
+ terminate as of the date such litigation or claim is filed or instituted. You
+ will indemnify and hold harmless Databricks from and against any claim by any
+ third party arising out of or related to your use or distribution of DBRX or
+ DBRX Derivatives.
+
+ 4.3 DISCLAIMER OF WARRANTY
+
+ UNLESS REQUIRED BY APPLICABLE LAW, DBRX AND ANY OUTPUT AND RESULTS THEREFROM
+ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER
+ EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE,
+ NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU
+ ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR
+ REDISTRIBUTING DBRX OR DBRX DERIVATIVES AND ANY OUTPUT AND ASSUME ANY RISKS
+ ASSOCIATED WITH YOUR USE OF DBRX OR DBRX DERIVATIVES AND ANY OUTPUT AND RESULTS.
+
+ 4.4 LIMITATION OF LIABILITY
+
+ IN NO EVENT WILL DATABRICKS OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
+ LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR
+ OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,
+ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF
+ DATABRICKS OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE
+ FOREGOING.
+
+ 4.5 Term and Termination
+
+ The term of this Agreement will commence upon your acceptance of this Agreement
+ or access to DBRX or DBRX Derivatives and will continue in full force and
+ effect until terminated in accordance with the terms and conditions herein.
+ Databricks may terminate this Agreement if you are in breach of any term or
+ condition of this Agreement. Upon termination of this Agreement, you shall
+ delete and cease use of DBRX or any DBRX Derivatives. Sections 1, 4.2(d), 4.3,
+ 4.4, and 4.6 shall survive the termination of this Agreement.
+
+ 4.6 Governing Law and Jurisdiction
+
+ This Agreement will be governed and construed under the laws of the State of
+ California without regard to choice of law principles, and the UN Convention
+ on Contracts for the International Sale of Goods does not apply to this
+ Agreement. The courts of California shall have exclusive jurisdiction of any
+ dispute arising out of this Agreement.
NOTICE.txt ADDED
@@ -0,0 +1 @@
+ DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved.
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ inference: false
+ license: other
+ license_name: databricks-open-model-license
+ license_link: https://www.databricks.com/legal/open-model-license
+ ---
+ # Fix for the DBRX Code
+ The original DBRX implementation code has a few bugs that only affect training; I fixed them in this re-upload.
+
+ The issues, and how I fixed them (see the sketches after this list):
+ 1. Error when using gradient checkpointing - Fixed by passing positional arguments instead, because `_gradient_checkpointing_func` doesn't support kwargs.
+ 2. VRAM usage blows up and you get `CUDA Out of Memory` when backpropping through the MLP layer - Fixed by separating the experts' weights into different tensors instead of using a single tensor for all the experts. I don't know exactly why this fixed it, but **maybe** it's because torch was trying to compute gradients for every expert at once, which shouldn't happen since it's a MoE model.
+
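+ Both fixes are easier to see in code. First, a minimal, self-contained demo of the gradient-checkpointing issue: `transformers` builds `_gradient_checkpointing_func` on top of `torch.utils.checkpoint.checkpoint`, whose reentrant implementation only forwards positional arguments (the `layer` function below is illustrative, not the actual code in `modeling_dbrx.py`):
+
+ ```python
+ import torch
+ from torch.utils.checkpoint import checkpoint
+
+ def layer(hidden_states, attention_mask):
+     return hidden_states * attention_mask
+
+ x = torch.randn(2, 4, requires_grad=True)
+ mask = torch.ones(2, 4)
+
+ # Works: positional arguments are saved and replayed on backward.
+ out = checkpoint(layer, x, mask, use_reentrant=True)
+
+ # Raises ValueError: the reentrant variant rejects keyword arguments.
+ # out = checkpoint(layer, x, attention_mask=mask, use_reentrant=True)
+ ```
+
+ Second, a sketch of the expert-weight split, with shapes shrunk so it runs anywhere (the real model uses `moe_num_experts=16`, `ffn_hidden_size=10752`, `d_model=6144` per `config.json`; the class and attribute names here are illustrative, not the exact layout in `modeling_dbrx.py`):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class FusedExperts(nn.Module):
+     """Original layout: one parameter holds every expert, so using any
+     expert drags the whole tensor into the autograd graph."""
+     def __init__(self, num_experts=16, hidden=64, d_model=32):
+         super().__init__()
+         self.w1 = nn.Parameter(torch.randn(num_experts * hidden, d_model))
+
+ class SplitExperts(nn.Module):
+     """Fixed layout: one parameter per expert; the hypothesis above is
+     that autograd then only materializes gradients for the top-k experts
+     a token was actually routed to."""
+     def __init__(self, num_experts=16, hidden=64, d_model=32):
+         super().__init__()
+         self.w1 = nn.ParameterList(
+             nn.Parameter(torch.randn(hidden, d_model)) for _ in range(num_experts)
+         )
+ ```
+
+ Splitting the experts also gives each one its own module path, which is presumably what the commit title "Fixed LoRA targeting" refers to: LoRA libraries like PEFT match target modules by name and can't address a slice of one fused parameter.
+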
+ # DBRX Base
+
+ * DBRX Base is a mixture-of-experts (MoE) large language model trained from scratch by Databricks.
+ * We are releasing both DBRX Base, a pretrained base model, and DBRX Instruct, a fine-tuned version for few-turn interactions, under [an open license](https://www.databricks.com/legal/open-model-license).
+ * This is the repository for DBRX Base. DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
+ * For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
+
+
+ ## Model Overview
+ DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
+ It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B parameters are active on any input.
+ It was pre-trained on 12T tokens of text and code data.
+ Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
+ This provides 65x more possible combinations of experts, and we found that this improves model quality.
+ DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
+ It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository.
+ We made these choices based on exhaustive evaluation and scaling experiments.
+
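+ The 65x figure is simple combinatorics: count the ways to pick the active experts per token. A quick check in Python:
+
+ ```python
+ import math
+
+ print(math.comb(16, 4))                     # 1820 expert combinations for DBRX
+ print(math.comb(8, 2))                      # 28 for Mixtral-8x7B / Grok-1
+ print(math.comb(16, 4) // math.comb(8, 2))  # 65
+ ```
+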
+ DBRX was pretrained on 12T tokens of carefully curated data with a maximum context length of 32K tokens.
+ We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
+ This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
+ We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.
+
+ * **Inputs:** DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens.
+ * **Outputs:** DBRX only produces text-based outputs.
+ * **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
+ * **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
+ * **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
+ * **Version:** 1.0
+ * **Owner:** Databricks, Inc.
+
+
+ ## Usage
+ Here are several general ways to use the DBRX models:
+ * DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Base; DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
+ * The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
+ * DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
+ * For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).
+
+
+ ## Quickstart Guide
+ **NOTE: This is DBRX Base, which has not been instruction finetuned. It has not been trained for interactive chat and is only a completion model.**
+ If you are looking for the finetuned model, please use [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct).
+
+ Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:
+
+ ```bash
+ pip install transformers tiktoken
+ ```
+
+ If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
+ ```bash
+ pip install hf_transfer
+ export HF_HUB_ENABLE_HF_TRANSFER=1
+ ```
+
+ ### Run the model on a CPU:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("v2ray/dbrx-base-fixed", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("v2ray/dbrx-base-fixed", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True)
+
+ input_text = "Databricks was founded in "
+ input_ids = tokenizer(input_text, return_tensors="pt")
+
+ outputs = model.generate(**input_ids, max_new_tokens=100)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ ### Run the model on multiple GPUs:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("v2ray/dbrx-base-fixed", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("v2ray/dbrx-base-fixed", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
+
+ input_text = "Databricks was founded in "
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids, max_new_tokens=100)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.
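+ For example, a variant of the load above with FlashAttention2 enabled; this assumes the `flash-attn` package is installed and the GPU supports it:
+
+ ```python
+ # pip install flash-attn --no-build-isolation
+ from transformers import AutoModelForCausalLM
+ import torch
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "v2ray/dbrx-base-fixed",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     attn_implementation="flash_attention_2",
+ )
+ ```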
+
+
+ ## Limitations and Ethical Considerations
+ ### Training Dataset Limitations
+ The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.
+
+ The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.
+
+ DBRX does not have multimodal capabilities.
+
+ ### Associated Risks and Recommendations
+ All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
+ Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
+ Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
+ We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.
+
+
+ ## Intended Uses
+ ### Intended Use Cases
+ The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
+ They can be further fine-tuned for various domain-specific natural language and coding tasks.
+ DBRX Base can be used as an off-the-shelf model for text completion for general English-language and coding tasks.
+
+ Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives.
+
+ ### Out-of-Scope Use Cases
+ DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution or other forms of function-calling.
+ DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
+
+
+ ## Training Stack
+ MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).
+
+ Composer is our core library for large-scale training.
+ It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
+ [FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
+ convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.
+
+ Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
+
+ Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.
+
+ LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.
+
+ DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).
+
+
+ ## Evaluation
+ We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
+ The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
+ The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
+ HumanEval measures coding ability.
+
+ Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
+
+
+ ## Acknowledgements
+ The DBRX models were made possible thanks in large part to the open-source community, especially:
+ * The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
+ * [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.
__init__.py ADDED
@@ -0,0 +1,2 @@
+ from .configuration_dbrx import *
+ from .modeling_dbrx import *
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "dbrx_old",
+   "architectures": [
+     "DbrxForCausalLM"
+   ],
+   "attn_config": {
+     "clip_qkv": 8,
+     "kv_n_heads": 8,
+     "model_type": "",
+     "rope_theta": 500000
+   },
+   "auto_map": {
+     "AutoConfig": "configuration_dbrx.DbrxConfig",
+     "AutoModelForCausalLM": "modeling_dbrx.DbrxForCausalLM"
+   },
+   "d_model": 6144,
+   "emb_pdrop": 0.0,
+   "ffn_config": {
+     "ffn_hidden_size": 10752,
+     "model_type": "",
+     "moe_jitter_eps": 0.01,
+     "moe_loss_weight": 0.05,
+     "moe_num_experts": 16,
+     "moe_top_k": 4
+   },
+   "initializer_range": 0.02,
+   "max_seq_len": 32768,
+   "model_type": "dbrx",
+   "n_heads": 48,
+   "n_layers": 40,
+   "output_router_logits": false,
+   "resid_pdrop": 0.0,
+   "router_aux_loss_coef": 0.05,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.39.1",
+   "use_cache": true,
+   "vocab_size": 100352
+ }
configuration_dbrx.py ADDED
@@ -0,0 +1,264 @@
+ """Dbrx configuration."""
+ from typing import Any, Optional
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ DBRX_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+ class DbrxAttentionConfig(PretrainedConfig):
+     """Configuration class for Dbrx Attention.
+
+     It is used to instantiate [`DbrxAttention`] layers according to the
+     specified arguments, defining the layers' architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         attn_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout probability for the attention layers.
+         clip_qkv (`float`, *optional*, defaults to None):
+             If not `None`, clip the queries, keys, and values in the attention layer to this value.
+         kv_n_heads (Optional[int]): For grouped_query_attention only, allow user to specify number of kv heads.
+         rope_theta (float): The base frequency for rope.
+     """
+
+     def __init__(
+         self,
+         attn_pdrop: float = 0,
+         clip_qkv: Optional[float] = None,
+         kv_n_heads: int = 1,
+         rope_theta: float = 10000.0,
+         **kwargs: Any,
+     ):
+         super().__init__(**kwargs)
+         self.attn_pdrop = attn_pdrop
+         self.clip_qkv = clip_qkv
+         self.kv_n_heads = kv_n_heads
+         self.rope_theta = rope_theta
+
+         for k in ['model_type']:
+             if k in kwargs:
+                 kwargs.pop(k)
+         if len(kwargs) != 0:
+             raise ValueError(f'Found unknown {kwargs=}')
+
+     @classmethod
+     def from_pretrained(cls, pretrained_model_name_or_path: str,
+                         **kwargs: Any) -> 'PretrainedConfig':
+         cls._set_token_in_kwargs(kwargs)
+
+         config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path,
+                                                   **kwargs)
+
+         if config_dict.get('model_type') == 'dbrx':
+             config_dict = config_dict['attn_config']
+
+         if 'model_type' in config_dict and hasattr(
+                 cls,
+                 'model_type') and config_dict['model_type'] != cls.model_type:
+             logger.warning(
+                 f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
+                 +
+                 f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
+             )
+
+         return cls.from_dict(config_dict, **kwargs)
+
+
+ class DbrxFFNConfig(PretrainedConfig):
+     """Configuration class for Dbrx FFN.
+
+     It is used to instantiate [`DbrxFFN`] feedforward layers according to
+     the specified arguments, defining the layers' architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         ffn_act_fn (dict, optional): A dict specifying the activation function for the FFN.
+             The dict should have a key 'name' with the value being the name of
+             the activation function along with any additional keyword arguments.
+         ffn_hidden_size (int, optional): The hidden size of the feedforward network.
+         moe_num_experts (int, optional): The number of experts in the mixture of experts layer.
+         moe_top_k (int, optional): The number of experts to use in the mixture of experts layer.
+         moe_jitter_eps (float, optional): The jitter epsilon for the mixture of experts layer.
+         moe_loss_weight (float, optional): The loss weight for the mixture of experts layer.
+         moe_normalize_expert_weights (float, optional): The normalization factor for the expert weights.
+         uniform_expert_assignment (bool, optional): Whether to use uniform expert assignment.
+             This should only be used for benchmarking purposes.
+     """
+
+     def __init__(
+         self,
+         ffn_act_fn: Optional[dict] = None,
+         ffn_hidden_size: int = 3584,
+         moe_num_experts: int = 4,
+         moe_top_k: int = 1,
+         moe_jitter_eps: Optional[float] = None,
+         moe_loss_weight: float = 0.01,
+         moe_normalize_expert_weights: Optional[float] = 1,
+         uniform_expert_assignment: bool = False,
+         **kwargs: Any,
+     ):
+         super().__init__()
+         if ffn_act_fn is None:
+             ffn_act_fn = {'name': 'silu'}
+         self.ffn_act_fn = ffn_act_fn
+         self.ffn_hidden_size = ffn_hidden_size
+         self.moe_num_experts = moe_num_experts
+         self.moe_top_k = moe_top_k
+         self.moe_jitter_eps = moe_jitter_eps
+         self.moe_loss_weight = moe_loss_weight
+         self.moe_normalize_expert_weights = moe_normalize_expert_weights
+         self.uniform_expert_assignment = uniform_expert_assignment
+
+         for k in ['model_type']:
+             if k in kwargs:
+                 kwargs.pop(k)
+         if len(kwargs) != 0:
+             raise ValueError(f'Found unknown {kwargs=}')
+
+     @classmethod
+     def from_pretrained(cls, pretrained_model_name_or_path: str,
+                         **kwargs: Any) -> 'PretrainedConfig':
+         cls._set_token_in_kwargs(kwargs)
+
+         config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path,
+                                                   **kwargs)
+
+         if config_dict.get('model_type') == 'dbrx':
+             config_dict = config_dict['ffn_config']
+
+         if 'model_type' in config_dict and hasattr(
+                 cls,
+                 'model_type') and config_dict['model_type'] != cls.model_type:
+             logger.warning(
+                 f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
+                 +
+                 f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
+             )
+
+         return cls.from_dict(config_dict, **kwargs)
+
+
+ class DbrxConfig(PretrainedConfig):
+     """Configuration class for Dbrx.
+
+     It is used to instantiate a [`DbrxModel`] according to the specified
+     arguments, defining the model architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         d_model (`int`, *optional*, defaults to 2048):
+             Dimensionality of the embeddings and hidden states.
+         n_heads (`int`, *optional*, defaults to 16):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         n_layers (`int`, *optional*, defaults to 24):
+             Number of hidden layers in the Transformer encoder.
+         max_seq_len (`int`, *optional*, defaults to 2048):
+             The maximum sequence length of the model.
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the Dbrx model. Defines the maximum number of different tokens that can be represented by
+             the `inputs_ids` passed when calling [`DbrxModel`].
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout probability applied to the attention output before combining with residual.
+         emb_pdrop (`float`, *optional*, defaults to 0.0):
+             The dropout probability for the embedding layer.
+         attn_config (`dict`, *optional*):
+             A dictionary used to configure the model's attention module.
+         ffn_config (`dict`, *optional*):
+             A dictionary used to configure the model's FFN module.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models).
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         output_router_logits (`bool`, *optional*, defaults to `False`):
+             Whether or not the router logits should be returned by the model. Enabling this will also
+             allow the model to output the auxiliary loss. See [here]() for more details.
+         router_aux_loss_coef (`float`, *optional*, defaults to 0.05):
+             The aux loss factor for the total loss.
+
+
+     Example:
+     ```python
+     >>> from transformers import DbrxConfig, DbrxModel
+
+     >>> # Initializing a Dbrx configuration
+     >>> configuration = DbrxConfig()
+
+     >>> # Initializing a model (with random weights) from the configuration
+     >>> model = DbrxModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```
+     """
+
+     model_type = 'dbrx'
+     attribute_map = {
+         'num_attention_heads': 'n_heads',
+         'hidden_size': 'd_model',
+         'num_hidden_layers': 'n_layers',
+         'max_position_embeddings': 'max_seq_len'
+     }
+
+     def __init__(
+         self,
+         d_model: int = 2048,
+         n_heads: int = 16,
+         n_layers: int = 24,
+         max_seq_len: int = 2048,
+         vocab_size: int = 32000,
+         resid_pdrop: float = 0.0,
+         emb_pdrop: float = 0.0,
+         attn_config: Optional[DbrxAttentionConfig] = None,
+         ffn_config: Optional[DbrxFFNConfig] = None,
+         use_cache: bool = True,
+         initializer_range: float = 0.02,
+         output_router_logits: bool = False,
+         router_aux_loss_coef: float = 0.05,
+         **kwargs: Any,
+     ):
+         if attn_config is None:
+             self.attn_config = DbrxAttentionConfig()
+         elif isinstance(attn_config, dict):
+             self.attn_config = DbrxAttentionConfig(**attn_config)
+         else:
+             self.attn_config = attn_config
+
+         if ffn_config is None:
+             self.ffn_config = DbrxFFNConfig()
+         elif isinstance(ffn_config, dict):
+             self.ffn_config = DbrxFFNConfig(**ffn_config)
+         else:
+             self.ffn_config = ffn_config
+
+         self.d_model = d_model
+         self.n_heads = n_heads
+         self.n_layers = n_layers
+         self.max_seq_len = max_seq_len
+         self.vocab_size = vocab_size
+         self.resid_pdrop = resid_pdrop
+         self.emb_pdrop = emb_pdrop
+         self.use_cache = use_cache
+         self.initializer_range = initializer_range
+         self.output_router_logits = output_router_logits
+         self.router_aux_loss_coef = router_aux_loss_coef
+
+         tie_word_embeddings = kwargs.pop('tie_word_embeddings', False)
+         if tie_word_embeddings:
+             raise ValueError(
+                 'tie_word_embeddings is not supported for Dbrx models.')
+
+         super().__init__(
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
generation_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "_from_model_config": true,
+   "transformers_version": "4.39.1"
+ }
model-00001-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19fbfca36b21fae548f2fe7ddf86ea444420a06501952f817b5888be59df5ddd
+ size 4976767312
model-00002-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60e92feb3fbaa9a5fe4cded206020fdac8373e63ef101fc56003cb97dbcb867f
+ size 4932728256
model-00003-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6edb0b60e27d77d404df003bd53f525220b03c1e5b8fee199a6837d4d6d893b7
+ size 4932728256
model-00004-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f013307c64633a918029b4f5bba2c8303d271f58041264e3fe5c723e1b4f51ed
+ size 4888466376
model-00005-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5828fd1e9fd5a7843df77ab428aa1e8a2ec65984a5d06a6edf58678f24c734d4
+ size 4932728248
model-00006-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:752d18fa174550b38e0f57cba20d0881a3cb4ad343b63801e7ea6ef877c000ef
+ size 4932728256
model-00007-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:236dab52058e35463b0df20f1b2d6e3a73309db0a64344824008406e5e508f1d
+ size 4932728256
model-00008-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:427eb68d0aa3aace4eebd05ff029446086f69f79a74091aaba8b296a943eaa26
+ size 4888466376
model-00009-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:317349f07d0538415ace7dbf4935873b5214bdae84342d6491105e71b85060b6
+ size 4932728248
model-00010-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b25c985829b95966535b2f89d5c35a5d18535b71c172f4cc6e57b2d0c44c7912
+ size 4932728256
model-00011-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad7b9c00f70301577b5505e81b9cbee42dff4937c784cc4fc21c5923cf947955
+ size 4932728256
model-00012-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:014272db51c88d6899f698baf7adceed1b52490205aa618cd9a4a75756640cac
+ size 4888466376
model-00013-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7875904e40352508740d81090cf838036174fd45e19f94f58d9eae21d9c06312
+ size 4932728240
model-00014-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad94f408f30f8e47292547ee9bd550f6799d54f2d3bd38834abeec4104b79475
+ size 4932728280
model-00015-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1780de45e106eb7eb2e36b02667bd5271a73958addcf0e4da55fe7fa004181cb
+ size 4932728296
model-00016-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ea5dd7e1bd9b18553bb5c2933b03c597cd1e5318e6e0e048ac41726c2086a6d
+ size 4888466416
model-00017-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d913f299740d1d485c636442d84b909dfe892add1f320fd0ab92395006b64de
+ size 4932728288
model-00018-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b8b8c3ec11f6a87b7267a1eaa58f7928449161c1fb2a3a2d36bf7549e239828
+ size 4932728296
model-00019-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660062506cf4f17f78e4ef06c26814a940a890f510a39cc4e3c5e696543c43e1
+ size 4932728296
model-00020-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df6d9fea4010988f8eb078bc0264c24ddf2adc45546fff3a8ba1fbf303453fda
+ size 4888466416
model-00021-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc30253d5c9a92c14b5e673557784dd3f107a5e914ad15a5b42de69bc4cdf837
+ size 4932728288
model-00022-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab138aef999b5d52f7499d118b7ea3ca15f0099557bbdf74e325764b4bd65a0f
+ size 4932728296
model-00023-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:042526a2936e7fb570abd769da46aedd6fcc65745aaee13f00bcdc70a5e06b81
+ size 4932728296
model-00024-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67fd60628e07b5d9949dcece5a3df3401103254236dfb529bc8611495d717b44
+ size 4888466416
model-00025-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:608f8b7adaa5ee361a3bcfc861cbd19dd4d25cdfacba9e66d65c71f4f1a12590
+ size 4932728288
model-00026-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:125927560348254678be9980be2e2f5356752a12fe5c10083fd2cd01c9771825
+ size 4932728296
model-00027-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4aaa116bfa3aada012f42dba1e9a947e3ec71603cbd08ccd42b952770f5c26b9
+ size 4932728296
model-00028-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1585e6fefa1db90f1da8fc755dfd5a85586c7a965426f6bbf51f607fce1f509d
+ size 4888466416
model-00029-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bf4270c4e2ad7736d243ee204e1a0dc2edf7b7b5bb468cb10eac60991b1268b
+ size 4932728288
model-00030-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6c5cec08dde81c379e81e3889ccad8f582c7d20cd9199b0c30e4c73bb9cbc6d
+ size 4932728296
model-00031-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23a95281a3b86e707a5b49fa545ba11b532bd3caeb2e28bba7a7fc2a82f76fd8
+ size 4932728296
model-00032-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abb9e3f9e4b4c7b54eab2c13625a47669613a7e23a4ac6d49e220f05095a0f22
+ size 4888466416
model-00033-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6787a8612292eaabb2e86c2202b58538013bcb813d1a5d89835f3e9feb4fd78
+ size 4932728288
model-00034-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc24fee799cd1a543c7431a5fe91b5c0a618b4334790e5bbc1f91decbb497fb7
+ size 4932728288
model-00035-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0801edd2ee96f386a3fc72198da027a7eb5d129c073a8efb89f0e3282b0c5f01
+ size 4932728296
model-00036-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e638ca6b4d67de41d2b45a71b0093892c3cfb9f4bd02ab456f6c3c9b5674d65
+ size 4989142256
model-00037-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52ccefb4eac2e288a0bbc3fdc8ad46e4766dd3de2138450986a7318e5dc894f1
+ size 4964173160
model-00038-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baa596dfbde688b0596e47272ede67b059c3b50ae6c97ec3713e93d1c8d38c83
+ size 4932728288
model-00039-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3370713b9fd67d33b19e436912495607e4d928bd84197474263f9a417e6959f
+ size 4932728296
model-00040-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd31b2109e45f5eb2fc906a53b8e20f35ffba88959c4f9cf17bcb3d73f439ecc
+ size 4932728296
model-00041-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35d5effa40761808648e92c7902c7fc7c9769260ada2df4a0ce884e356f1295b
+ size 4888466408
model-00042-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b94fc632abb875f851c52234873dac13fd31fc3d35f68415ed8b41ed5822c1bc
+ size 4932728288
model-00043-of-00054.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5c1ced190ebe41c3fdc0650c234ffe648db7f685850b94c54261edeb2977a2b
+ size 4932728296