Ali-Forootani committed
Commit: eacc43b • Parent(s): 062d453 • Update README.md

README.md CHANGED
@@ -24,6 +24,11 @@ see more on ORPO [link](https://arxiv.org/abs/2403.07691)
 
 Llama 3 models also increased the context length to 8,192 tokens (up from 4,096 in Llama 2) and can potentially scale to 32k with RoPE. Additionally, the models use a new tokenizer with a 128K-token vocabulary, reducing the number of tokens required to encode text by 15%. This larger vocabulary also explains the bump from 7B to 8B parameters, since the embedding and output layers grow with the vocabulary.
 
+
+## Hardware
+
+- I used an Nvidia A100 80 GB GPU. Note that you need a capable GPU; training or testing on a weaker GPU will not work.
+
 # Required packages
 ```bash
 pip install -U transformers datasets accelerate peft trl bitsandbytes wandb
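
A note on the RoPE claim kept in the hunk above: `transformers` exposes RoPE scaling through the `rope_scaling` config argument. The sketch below is illustrative only — the model id, scaling type, and factor 4.0 (8,192 × 4 ≈ 32k) are my assumptions, not values from this README:

```python
# Minimal sketch: loading Llama 3 with RoPE scaling to stretch the context
# window toward 32k. The "dynamic" type and factor 4.0 are assumed choices;
# the meta-llama checkpoint is gated and requires Hugging Face access.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "dynamic", "factor": 4.0},  # 8,192 -> ~32k tokens
    device_map="auto",
)
```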
@@ -244,17 +249,17 @@ Created on Wed Jul 3 15:57:22 2024
 @author: Ali forootani
 """
 
-
+```bash
 !pip install -U transformers datasets accelerate peft trl bitsandbytes wandb
 !pip install -qqq flash-attn
 !pip install -qU transformers accelerate
-
+```
 
-"""
-wandb
-https://wandb.ai/your_account
-dde689e74d3f9146d2d116b098016f5e0d9cc202
-"""
+
+_hint_
+- wandb account
+- visit: https://wandb.ai/your_account
+- wandb token: take your wandb token and save it somewhere safe
 
 
 ```python
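
The `_hint_` list added in this hunk tells the reader to save a wandb token; one common way to use it afterwards is sketched below. Reading the token from an environment variable is my assumption — pasting it at the interactive `wandb.login()` prompt also works:

```python
# Sketch: authenticate Weights & Biases with a saved token before training.
import os
import wandb

# WANDB_API_KEY is an assumed env var name holding the token copied from
# https://wandb.ai/authorize; avoid hard-coding the token in scripts or READMEs.
wandb.login(key=os.environ["WANDB_API_KEY"])
wandb.init(project="llama3-orpo")  # assumed project name
```

Note that the docstring removed in this hunk contained a literal token; keeping credentials out of the README (as this commit does) and loading them from the environment avoids leaking them in version history.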
@@ -407,188 +412,4 @@ tokenizer.push_to_hub(repo_name, use_auth_token=True)
 
 
 
-
-<!-- Provide a longer summary of what this model is. -->
-
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
-
-## Uses
-
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-### Downstream Use [optional]
-
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
-## How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-[More Information Needed]
-
-## Training Details
-
-### Training Data
-
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
-[More Information Needed]
-
-### Training Procedure
-
-<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
-#### Preprocessing [optional]
-
-[More Information Needed]
-
-#### Training Hyperparameters
-
-- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
-#### Speeds, Sizes, Times [optional]
-
-<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
-[More Information Needed]
-
-## Evaluation
-
-<!-- This section describes the evaluation protocols and provides the results. -->
-
-### Testing Data, Factors & Metrics
-
-#### Testing Data
-
-<!-- This should link to a Dataset Card if possible. -->
-
-[More Information Needed]
-
-#### Factors
-
-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-[More Information Needed]
-
-#### Metrics
-
-<!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
-[More Information Needed]
-
-### Results
-
-[More Information Needed]
-
-#### Summary
-
-
-## Model Examination [optional]
-
-<!-- Relevant interpretability work for the model goes here -->
-
-[More Information Needed]
-
-## Environmental Impact
-
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
-
-## Technical Specifications [optional]
-
-### Model Architecture and Objective
-
-[More Information Needed]
-
-### Compute Infrastructure
-
-[More Information Needed]
-
-#### Hardware
-
-[More Information Needed]
-
-#### Software
-
-[More Information Needed]
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Model Card Authors [optional]
-
-[More Information Needed]
-
-## Model Card Contact
-
 [More Information Needed]
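
For orientation, since the hunk headers reference ORPO training and `tokenizer.push_to_hub`, the overall shape of an ORPO run with `trl` looks roughly like the sketch below. The dataset name, hyperparameters, and output path are assumptions, and the `ORPOTrainer` keyword for the tokenizer has changed across `trl` releases (`tokenizer=` in older versions, `processing_class=` in newer ones):

```python
# Rough sketch of an ORPO fine-tuning run with trl; all values are assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ORPO expects preference data with prompt / chosen / rejected columns.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train[:1%]")  # assumed dataset

config = ORPOConfig(
    output_dir="./llama3-orpo",   # assumed path
    beta=0.1,                     # weight of the odds-ratio penalty (assumed)
    max_length=1024,
    per_device_train_batch_size=2,
    learning_rate=8e-6,
    report_to="wandb",            # matches the wandb setup above
)

trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
trainer.push_to_hub()  # analogous to the tokenizer.push_to_hub call in the hunk header
```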