# Model Card: 4yo1/llama3-pre1-pre2-lora3-mergkit-base2

## Model Details

- Model Name: 4yo1/llama3-pre1-pre2-lora3-mergkit-base2
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean
## How to Use - Sample Code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, model weights, and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-lora3-mergkit-base2")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-lora3-mergkit-base2")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-lora3-mergkit-base2")
```
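For text generation, a minimal sketch using `AutoModelForCausalLM` is shown below; this assumes the repository ships causal-LM weights, and the prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "4yo1/llama3-pre1-pre2-lora3-mergkit-base2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# AutoModelForCausalLM exposes generate(); assumes the repo provides causal-LM weights
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt; the model targets English and Korean
inputs = tokenizer("안녕하세요, what can you do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```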
## Datasets

- 140kgpt

## License

MIT