
Kurage

(Image: an anime-style pink and blue jellyfish surrounded by bubbles)

Kurage is a multipurpose RAG model from Lightblue.

This version of the model has been trained to perform RAG in Japanese.

Features of these models include:

  • Multi-chunk RAG - Performs RAG using multiple contexts at once.
  • Single-chunk RAG - Performs RAG using one context at a time, allowing for parallel computing.
  • Answer extension - Prompts the model to write a longer answer to a given question.
  • Multilingual RAG - Performs RAG using contexts in languages different from the language of the question.
  • Q&A generation - Generates questions and answers from a reference text in order to pre-index a set of texts.

Find out how to use these features below.

For models in other languages check our Kurage collection. A multilingual model is coming soon!

This model was trained using a ml.gu7ef.8xlarge-gu100 instance on Platform For AI from Alibaba Cloud.

Note - There is a known issue where the single-chunk RAG mode sometimes says it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, making the model overly conservative. We will address this in a week or two when we re-train using a 90:10 split, alongside the coming release of Qwen 2.5.

Basic usage

To use the model for basic multi-chunk RAG, you can use the following code:

from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

def create_rag_prompt(contexts, question):

    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

contexts = [
   "日銀の中川順子審議委員は11日、実質金利は現在きわめて低い水準にあるとした上で、先行き日銀の経済・物価見通しが実現していくとすれば、物価目標実現の観点から金融緩和の度合いを調整していくことになると述べた。",
   "7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。",
   "鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
   "8月の円高局面で、日本の機関投資家が過去最大の対外証券投資に動いていたことが、外為市場で話題となっている。"
]

question = "現在、日本の第一次所得収支はいくらですか?"

inputs = create_rag_prompt(contexts, question)

outputs = llm.generate([inputs], sampling_params)

print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4兆円

Feature: Multi-chunk RAG

This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.

Prompt style

Input:

<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.

<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.

<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.

<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.

<<Question>>
What is Japan's primary income balance currently?

Output:

<<References>>
2

<<Answer>>
4.4 trillion yen

Python code
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

def create_rag_prompt(contexts, question):

    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

contexts = [
    "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
    "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
    "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
    "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]

question = "What is Japan's primary income balance currently?"

inputs = create_rag_prompt(contexts, question)

outputs = llm.generate([inputs], sampling_params)

print(outputs[0].outputs[0].text)
# <<References>>
# 2
# 
# <<Answer>>
# 4.4 trillion yen.
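The `<<References>>`/`<<Answer>>` output format is easy to turn into structured data. Below is a minimal parsing sketch; the `parse_rag_output` helper is ours, not part of the model's tooling:

```python
def parse_rag_output(text):
    """Split the model's raw output into reference indices and the answer.

    Returns (references, answer): references is a list of 1-based chunk
    indices (empty if the model output "None"), answer is a string or None.
    """
    refs, answer = [], None
    # The model emits "<<References>>" first, then "<<Answer>>" if answerable.
    ref_part, _, ans_part = text.partition("<<Answer>>")
    for line in ref_part.splitlines():
        line = line.strip()
        if line.isdigit():
            refs.append(int(line))
    if ans_part.strip():
        answer = ans_part.strip()
    return refs, answer

refs, answer = parse_rag_output("<<References>>\n2\n\n<<Answer>>\n4.4 trillion yen")
print(refs, answer)  # [2] 4.4 trillion yen
```

The reference indices can then be used to cite or display only the chunks the model actually relied on.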

Feature: Single-chunk RAG

This model can also take a single context and a question as input; it will determine whether the question can be answered from the context, outputting an answer only if it can. This allows multiple contexts to be processed in parallel.

Prompt style

Irrelevant context input:

<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.

<<Question>>
What is Japan's primary income balance currently?

Irrelevant context output:

<<References>>
None

Relevant context input:

<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.

<<Question>>
What is Japan's primary income balance currently?

Relevant context output:

<<References>>
1

<<Answer>>
4.4 trillion yen

Python code
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

def create_rag_prompt(contexts, question):

    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

contexts = [
    "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
    "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
    "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
    "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]

question = "What is Japan's primary income balance currently?"

outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)

print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None

# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.

# 3.
# <<References>>
# None

# 4.
# <<References>>
# None
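When running single-chunk RAG in parallel like this, you typically keep only the chunks the model could answer from. A minimal sketch of that filtering step (the `collect_answers` helper is ours, not part of the model's tooling):

```python
def collect_answers(raw_outputs):
    """Keep (chunk_index, answer) pairs for outputs that contain an answer.

    `raw_outputs` is a list of raw generation strings, one per chunk,
    in the same order as the contexts sent to the model.
    """
    answered = []
    for i, text in enumerate(raw_outputs):
        # Outputs without an "<<Answer>>" section mean the chunk was irrelevant.
        _, _, ans_part = text.partition("<<Answer>>")
        if ans_part.strip():
            answered.append((i + 1, ans_part.strip()))
    return answered

raw = [
    "<<References>>\nNone",
    "<<References>>\n1\n\n<<Answer>>\n4.4 trillion yen.",
    "<<References>>\nNone",
]
print(collect_answers(raw))  # [(2, '4.4 trillion yen.')]
```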

Feature: Answer extension

By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write one by appending " <<Long>>" after your question.

Prompt style

Input:

<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.

<<Question>>
What is Japan's primary income balance currently? <<Long>>

Output:

<<References>>
1

<<Answer>>
Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.

Python code
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

def create_rag_prompt(contexts, question):

    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

contexts = [
    "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
    "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
    "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
    "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]

question = "What is Japan's primary income balance currently? <<Long>>"

inputs = create_rag_prompt(contexts, question)

outputs = llm.generate([inputs], sampling_params)

print(outputs[0].outputs[0].text)

# <<References>>
# 2
# 
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.

Feature: Multilinguality

We have trained our model to be able to answer questions in Japanese based on texts in other languages too!

(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)

Prompt style

Input:

<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.

<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。

<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี

<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.

<<Question>>
What is Japan's primary income balance currently?

Output:

<<References>>
2

<<Answer>>
4.4 trillion yen

Python code

from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

def create_rag_prompt(contexts, question):

    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

contexts = [
    "นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
    "Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
    "鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
    "Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]

question = "What is Japan's primary income balance currently?"

inputs = create_rag_prompt(contexts, question)

outputs = llm.generate([inputs], sampling_params)

print(outputs[0].outputs[0].text)
# <<References>>
# 2
# 
# <<Answer>>
# The primary income balance of Japan is currently 4.4 trillion yen.

Feature: Q&A generation

This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.

Prompt style

Input:

<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.

Output:

<<Question>>
What is Japan's current account surplus in July?

<<Answer>>
3.2 trillion yen

Python code
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)

context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years."

def create_qagen_prompt(context):

    str_inputs = f"""<<Q&A Generation Context>>
{context}"""

    chat = [
      {"role": "user", "content": str_inputs},
    ]

    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([create_qagen_prompt(context)], sampling_params)

print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
# 
# <<Answer>>
# Kenji Suwazono
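To index the generated pairs, you can split each output back into separate question and answer strings. A minimal parsing sketch (the `parse_qa` helper is ours, not part of the model's tooling):

```python
def parse_qa(text):
    """Extract (question, answer) from a '<<Question>> ... <<Answer>> ...' output."""
    q_part, _, a_part = text.partition("<<Answer>>")
    question = q_part.replace("<<Question>>", "").strip()
    answer = a_part.strip()
    return question, answer

q, a = parse_qa("<<Question>>\nWho was appointed?\n\n<<Answer>>\nKenji Suwazono")
print(q)  # Who was appointed?
print(a)  # Kenji Suwazono
```

Each (question, answer, source chunk) triple can then be stored in your index or used as a training example for an IR model.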

Training data

We trained on chunks sourced from documents in the MADLAD-400 dataset that a state-of-the-art LLM had judged to contain a high amount of educational information.

We randomly sampled chunks of 250, 500, and 1,000 tokens from each document.

We then used these chunks to generate questions and answers with a state-of-the-art LLM.

Finally, we selected negatives for each chunk using dense-embedding similarity from the BAAI/bge-m3 model.
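The negative-selection step can be illustrated with plain cosine similarity over dense embeddings: the non-answering chunks most similar to the query make the hardest negatives. A toy sketch with hand-written 2-D vectors standing in for BAAI/bge-m3 embeddings (all names and values here are illustrative, not our actual pipeline code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def select_negatives(query_emb, chunk_embs, positive_idx, n_negatives=2):
    """Pick the chunks most similar to the query, excluding the positive chunk."""
    scored = [
        (cosine(query_emb, emb), i)
        for i, emb in enumerate(chunk_embs)
        if i != positive_idx  # never pick the positive as a negative
    ]
    scored.sort(reverse=True)  # highest similarity first = hardest negatives
    return [i for _, i in scored[:n_negatives]]

# Toy 2-D embeddings: chunk 1 points the same way as the query (the positive),
# chunk 0 is close in direction (a hard negative), chunk 2 is orthogonal.
chunks = [[0.9, 0.1], [1.0, 0.0], [0.0, 1.0]]
query = [1.0, 0.05]
print(select_negatives(query, chunks, positive_idx=1, n_negatives=1))  # [0]
```

In the real pipeline the vectors come from the bge-m3 dense encoder rather than being hand-written.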

Model size: 7.61B parameters (BF16, safetensors).