Imagroune committed on
Commit
e902e7f
1 Parent(s): 9deb71c

Update README.md

Files changed (1): README.md +2 -4
README.md CHANGED
@@ -4,11 +4,9 @@
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/645364cbf666f76551f93111/ZviQjj2NvCvl0R7IZiRai.png)
 
-#### Welcome to the FeynModel repository, a Vision Language model with the resapnning capabilities of the LLM (Language Learning Model). it aims to explore the power of both worlds on scientific reasonin capaicties, this model is fine-tuned using the LoRA (Local Re-Attention) method, optimizing it for enhanced performance in diverse vision and language tasks.
+#### Welcome to the FeynModel repository, a Vision Language model with the reasoning capabilities of an LLM (Large Language Model). It aims to explore the combined power of vision and language for scientific reasoning tasks. This model is fine-tuned using the LoRA (Low-Rank Adaptation) method, optimizing it for enhanced performance in a variety of vision and language tasks.
 
-#### The 0.1 version uses pretrained layers from DaVit Vision Tower of Florence2-base (Microsoft) and Gemma2-2B (Google) and was finetuned on M3IT coco and ScencieQA
-
-#### It use a S6 block to wire context memory for Q* TS (experimental)
+#### Version 0.1 utilizes pretrained layers from the DaVit Vision Tower of Florence2-base (Microsoft) and Gemma2-2B (Google), and was fine-tuned on M3IT, COCO, and ScienceQA datasets. It employs an S6 block to integrate context memory for Q*TS (experimental).
 
 # how to use
 
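The updated description names LoRA (Low-Rank Adaptation) as the fine-tuning method. The core idea is generic: rather than updating a full weight matrix, LoRA trains two small low-rank factors whose product forms the weight update. The sketch below illustrates that idea with NumPy; it is not the repository's actual training code, and all dimensions and names here are illustrative assumptions.

```python
import numpy as np

# LoRA (Low-Rank Adaptation): keep the pretrained weight W (d_out x d_in)
# frozen and learn two small matrices B (d_out x r) and A (r x d_in) with
# rank r << min(d_out, d_in). The effective weight is W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 8, 16  # illustrative sizes, not FeynModel's

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # zero init, so the update is 0 at start

delta = (alpha / r) * B @ A               # low-rank update
W_eff = W + delta                         # weight actually used at inference

# Trainable parameters shrink from d_out*d_in to r*(d_out + d_in).
full_params = d_out * d_in        # 64 * 128 = 8192
lora_params = r * (d_out + d_in)  # 8 * 192  = 1536
```

Because `B` starts at zero, the model's behavior is unchanged at the start of fine-tuning, and only the `A`/`B` factors need gradients, which is what makes LoRA cheap for adapting large vision-language backbones.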