ibleducation/ibl-multiple-choice-7B
ibleducation/ibl-multiple-choice-7B is a model finetuned on top of mistralai/Mistral-7B-Instruct-v0.1.
The model is finetuned to generate multiple-choice questions. The output of the model is a JSON object with the following entries:
- category: The topic area of the question
- qtext: The question text
- ra: The aid of the correct answer
- answers: A list of possible answer choices, each with an aid (answer id) and atext (answer text)
Example Conversations
- Question: Photosynthesis
Answer:
{
  "category": "Photosynthesis",
  "qtext": "The chlorophyll fluorescence measurement technique is based on the emission of fluorescence by the chlorophylls present in the photosynthetic pigmentation:",
  "ra": 4,
  "answers": [
    {"aid": 1, "atext": "It is used to determine the light absorption characteristics of the pigments."},
    {"aid": 2, "atext": "It is used to determine the light emission characteristics of the pigments."},
    {"aid": 3, "atext": "It is used to determine the kinetics of light absorption by the pigments."},
    {"aid": 4, "atext": "It is used to determine the kinetics of light emission by the pigments."},
    {"aid": 5, "atext": "It is used to determine the energy that the pigments emit when they absorb light."}
  ]
}
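Since the model emits a single JSON object, the response can be parsed with the standard library. The snippet below is a minimal sketch (not part of the model card's tooling) showing how the schema above might be consumed; the raw string is an illustrative stand-in, and it assumes the generated text is valid JSON:
import json

# An illustrative raw response following the schema above (shortened for brevity).
raw = '{"category": "Photosynthesis", "qtext": "Which pigment absorbs light?", "ra": 2, "answers": [{"aid": 1, "atext": "Carotene"}, {"aid": 2, "atext": "Chlorophyll"}]}'

question = json.loads(raw)  # raises json.JSONDecodeError if the generation is malformed

# Map answer ids to answer texts and resolve the correct choice via "ra".
answers = {a["aid"]: a["atext"] for a in question["answers"]}
print(question["qtext"])
print("Correct answer:", answers[question["ra"]])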
Model Details
- Developed by: IBL Education
- Model type: Mistral-7B-v0.1
- Base Model: Mistral-7B-Instruct-v0.1
- Language: English
- Finetuned from weights: Mistral-7B-Instruct-v0.1
- Finetuned on data:
- Model License: MIT
How to Get Started with the Model
Install the necessary packages
Requires: transformers > 4.35.0
pip install transformers
pip install accelerate
You can then try the following example code:
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model_id = "ibleducation/ibl-multiple-choice-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires accelerate; spreads the model across available devices
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

prompt = "<s>[INST] Algebra [/INST] "
response = pipeline(prompt)
# The pipeline returns a list with one dict per generated sequence.
print(response[0]["generated_text"])
Important - Use the prompt template below:
<s>[INST] {prompt} [/INST]
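For example, a small helper (a sketch, not part of the released code) can apply this template to any topic before calling the pipeline defined above:
def build_prompt(topic: str) -> str:
    # Wrap the topic in the [INST] template the model was finetuned with.
    return f"<s>[INST] {topic} [/INST] "

response = pipeline(build_prompt("Photosynthesis"))
# Note: by default the output includes the prompt; pass return_full_text=False
# to the pipeline call to get only the generated completion.
print(response[0]["generated_text"])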