Solshine committed on
Commit dffbe4f
1 Parent(s): ae261d2

Update README.md

Files changed (1)
  1. README.md +12 -1
README.md CHANGED
@@ -8,7 +8,14 @@ tags:
 - unsloth
 - gemma
 - trl
+- agriculture
+- farming
+- climate
+- biology
+- agritech
 base_model: unsloth/gemma-2b-it-bnb-4bit
+datasets:
+- CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update
 ---

 # Uploaded model
@@ -17,6 +24,10 @@ base_model: unsloth/gemma-2b-it-bnb-4bit
 - **License:** apache-2.0
 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit

-This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+Background: Using real-world user data from a previous farmer assistant chatbot service and additional curated datasets (prioritizing sustainable regenerative organic farming practices), the Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as on basic benchmarks, with the Gemma 2B fine-tune emerging victorious. LoRA adapters were saved for each model. Following this, the Gemma version was released.
+
+Updates for this model: We then revisited the data, adding four additional months of real-world in-field data from hundreds of users, which was then edited by a domain expert in regenerative farming and natural farming (approximately 2,000 instruct examples). This was combined with a small portion of synthetic and semi-synthetic datasets related to regenerative agriculture and natural farming, including some non-English language samples.
+
+This Gemma model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
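The added README text states that the model was fine-tuned from unsloth/gemma-2b-it-bnb-4bit with Unsloth and Huggingface's TRL library, with LoRA adapters saved. As a rough illustration of that kind of setup (not the authors' actual training script), a minimal Unsloth + TRL sketch might look like the following. The LoRA hyperparameters, sequence length, split name, and the assumption that the dataset exposes a `text` column are placeholders, and the `SFTTrainer` keyword arguments shown match older TRL releases (newer versions move these options into `SFTConfig`).

```python
# Minimal sketch of an Unsloth + TRL LoRA fine-tune of the 4-bit Gemma 2B base
# named in this README. Hyperparameters and the "text" column are assumptions,
# not the authors' actual configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the 4-bit quantized Gemma 2B instruct base model listed as base_model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2b-it-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are typical Unsloth defaults,
# assumed here rather than taken from the release).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing=True,
)

# The curated Q&A dataset listed in the README metadata; a "train" split with a
# pre-formatted "text" column is assumed.
dataset = load_dataset(
    "CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()

# Save only the LoRA adapters, as described in the README background.
model.save_pretrained("gemma-2b-natural-farming-lora")
```

Note that `save_pretrained` on the PEFT-wrapped model stores only the adapter weights; merging them into the base model or pushing them to the Hub is a separate step not shown here.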