# Summarization Model Card
## Model Overview
- Model Name: Llama-3.2-1B Instruct Model Fine-tuned for Summarization
- Developed by: saishshinde15
- License: Apache-2.0
- Base Model: meta-llama/Llama-3.2-1B-Instruct
## Description
This model has been fine-tuned to excel in generating concise and informative summaries from lengthy texts. It captures key ideas while presenting them in an easy-to-read bullet-point format.
## Key Features
- Language: English
- Fine-tuned on: the openai/summarize_from_feedback dataset for improved summarization capabilities (see the loading sketch after this list).
- Performance Metric: Evaluated based on accuracy.
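
If you want to inspect the fine-tuning data yourself, it can be loaded with the `datasets` library. This is a minimal sketch; the `"comparisons"` configuration name is an assumption, so check the dataset card for the exact available configs:

```python
from datasets import load_dataset

# Assumed configuration name ("comparisons"); see the dataset card for the exact configs.
ds = load_dataset("openai/summarize_from_feedback", "comparisons", split="train")

# Print one raw example to see the source post and its candidate summaries.
print(ds[0])
```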
## Prompt for Optimal Use
For the best results, use the following prompt structure:
```text
You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).

**Note:** The examples provided below are for your reference only and should not be included in your response.

### Examples (for reference only):
* The sky is blue on a clear day.
* Water boils at 100 degrees Celsius.
* Trees produce oxygen through photosynthesis.

### Original Text:
{}

### Key Points Summary (in bullet points):
```
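
As an illustration, the `{}` placeholder can be filled in Python before the prompt is passed to the model. This is a minimal sketch: `document` is a stand-in for your own input text, and the examples section of the template is omitted here for brevity:

```python
# Hypothetical helper: build the summarization prompt from the template above.
PROMPT_TEMPLATE = """You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).

### Original Text:
{}

### Key Points Summary (in bullet points):
"""

document = "Your long input text goes here."  # placeholder
prompt = PROMPT_TEMPLATE.format(document)
```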
## Model Loading Instructions
To load this model, use the following code snippet:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the fine-tuned adapter together with its base model.
model = AutoPeftModelForCausalLM.from_pretrained(
    "saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct",
    load_in_4bit=True,  # 4-bit quantization (requires bitsandbytes); adjust as needed
)
tokenizer = AutoTokenizer.from_pretrained("saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct")
```
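
Once the model and tokenizer are loaded, a summary can be generated roughly as follows. This is a minimal sketch, assuming the filled `prompt` from the template section above and the model's chat template; the generation settings are illustrative only:

```python
import torch

# Wrap the filled prompt as a single user turn and apply the chat template.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens (the bullet-point summary).
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
summary = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(summary)
```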