---
license: apache-2.0
datasets:
  - OpenAssistant/oasst1
  - zetavg/ShareGPT-Processed
  - augmxnt/ultra-orca-boros-en-ja-v1
language:
  - ja
  - en
---

# Karasu-7B-chat


## Evaluation

*(evaluation results figure)*

## How to use

### Hugging Face

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-7B-chat")
model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-7B-chat", torch_dtype=torch.bfloat16, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# System and user messages (in Japanese: "You are an AI assistant." /
# "Who is the Prime Minister of the United Kingdom?").
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})

# Render the conversation with the model's chat template.
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# Greedy decoding; return only the newly generated text.
pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)
```
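If you prefer to call `generate` directly rather than go through a pipeline, the following minimal sketch (not part of the original card) performs the equivalent greedy decode; it reuses the `tokenizer`, `model`, and `prompt` objects defined above.

```python
# Minimal sketch: direct generate() call equivalent to the pipeline above.
# Assumes tokenizer, model, and prompt from the previous snippet.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```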

### vLLM

```python
from vllm import LLM, SamplingParams

# Greedy decoding with up to 100 new tokens.
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/karasu-7B-chat")

# System and user messages (in Japanese: "You are an AI assistant." /
# "Who is the Prime Minister of the United Kingdom?").
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})

# Render the conversation with the model's chat template.
prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

## Base checkpoint

[lightblue/karasu-7B](https://huggingface.co/lightblue/karasu-7B)
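The base (pre-chat-tuning) checkpoint can be loaded the same way as the chat model; a minimal sketch, assuming the standard Transformers loading path shown above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the base checkpoint (no chat fine-tuning applied).
base_tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-7B")
base_model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-7B", torch_dtype=torch.bfloat16, device_map="auto")
```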

## Training datasets (total ~7B)

- OpenAssistant/oasst1
- zetavg/ShareGPT-Processed
- augmxnt/ultra-orca-boros-en-ja-v1

## Developed by

Lightblue Technology

### Engineers

- Peter Devine
- Sho Higuchi

### Advisors

- Yuuki Yamanaka
- Atom Sonoda

### Project manager

- Shunichi Taniguchi

### Dataset evaluator

- Renju Aoki