bhavinjawade committed
Commit 168dad0
1 Parent(s): 6f53854

Create README.md

Files changed (1): README.md (+77, -0)

README.md ADDED:

---
license: mit
datasets:
- Intel/orca_dpo_pairs
---

## SOLAR-10B-Nector-Orca-DPO-LoRA-Jawade

### Overview
This model is an instruction-finetuned version of the `upstage/SOLAR-10.7B-Instruct-v1.0` model, trained with LoRA on a mixture of the Berkeley Nectar dataset and the Intel Orca DPO pairs dataset (`Intel/orca_dpo_pairs`).

![model_card_image](SOLAR_ORCA.png)

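For context on how such a fine-tune is produced: DPO training with LoRA adapters can be run with the `trl` and `peft` libraries. The sketch below is only an illustration under assumed hyperparameters and the trl 0.7-era `DPOTrainer` API (newer trl versions move these arguments into `DPOConfig`); it is not the exact recipe used for this model.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Intel/orca_dpo_pairs has columns system/question/chosen/rejected;
# DPOTrainer expects prompt/chosen/rejected.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": row["system"] + "\n" + row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }
)

# Assumed LoRA hyperparameters -- the actual values are not documented here.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, trl uses the frozen base model as reference
    args=TrainingArguments(output_dir="solar-dpo-lora", per_device_train_batch_size=1),
    beta=0.1,        # DPO implicit-reward temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```
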
## How to Use This Model

To use the model `bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`, follow these steps:

1. **Import and Load the Model and Tokenizer**
Begin by importing the model and tokenizer, then load them with the `from_pretrained` method (a half-precision loading variant is sketched after these steps).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
tokenizer = AutoTokenizer.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
```

2. **Format the Prompt**
Format the chat input as a list of messages, each with a role (`system` or `user`) and content.

```python
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Is the universe real? or is it a simulation? whats your opinion?"}
]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
```

3. **Create a Pipeline**
Set up a text-generation pipeline with the loaded model and tokenizer.

```python
import transformers  # needed for transformers.pipeline below

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)
```

4. **Generate Text**
Use the pipeline to generate text from the prompt. You can adjust parameters such as `temperature` and `top_p` for different response styles.

```python
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,  # counts prompt tokens plus generated tokens
)
print(sequences[0]['generated_text'])
```

This setup lets you use the **bhavinjawade/SOLAR-10B-OrcaDPO-Jawade** model to generate responses to chat inputs. An alternative that loads the model in half precision and calls `model.generate` directly is sketched below.
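
For a 10.7B-parameter model you will likely want half precision and automatic device placement. The following is a minimal sketch, not part of the original steps, that loads the model in `float16` (assuming a CUDA GPU and the `accelerate` package for `device_map="auto"`) and calls `model.generate` directly instead of going through the pipeline:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhavinjawade/SOLAR-10B-OrcaDPO-Jawade"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 + device_map="auto" places the model across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Is the universe real? or is it a simulation? whats your opinion?"}
]
# Tokenize the chat template directly this time (tokenize=False was used above)
inputs = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=200,  # bounds only the newly generated tokens, unlike max_length
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Using `max_new_tokens` instead of `max_length` limits only the generated portion of the output, which is usually what you want with long chat prompts.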

### License
- **Type**: MIT License
- **Details**: This license permits reuse, modification, and distribution for both private and commercial purposes under the terms of the MIT License.

### Model Details
- **Base Model**: upstage/SOLAR-10.7B-Instruct-v1.0
- **Organization**: Upstage
- **Training Dataset**: Intel/orca_dpo_pairs
- **Technique Used**: LoRA (Low-Rank Adaptation)
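
Because the fine-tune was trained with LoRA, the adapter can in principle be loaded on top of the base model with the `peft` library. The sketch below is a hedged illustration: it assumes the repository hosts unmerged PEFT adapter weights; if the weights are already merged, the plain `from_pretrained` call shown earlier is all you need.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model the adapter was trained from...
base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
# ...then attach the LoRA adapter (assumes the repo contains PEFT adapter files)
model = PeftModel.from_pretrained(base, "bhavinjawade/SOLAR-10B-OrcaDPO-Jawade")
# Optionally fold the adapter into the base weights for plain-transformers inference
model = model.merge_and_unload()
```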

### Contact Information
- https://bhavinjawade.github.io