
Model info:

MegaBeam-Mistral-7B-300k quantized to FP8 weights and activations using per-tensor quantization, ready for inference with vLLM >= 0.5.1.
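As a minimal sketch (not part of the original card), the quantized checkpoint can be loaded with vLLM's offline API. The model id below is a placeholder for this repository, and the example assumes vLLM >= 0.5.1 on a GPU/driver stack that vLLM's FP8 path supports; vLLM should pick up the quantization settings from the checkpoint's config:

from vllm import LLM, SamplingParams

# "<this-fp8-repo-id>" is a placeholder; replace it with this repository's id.
llm = LLM(model="<this-fp8-repo-id>", max_model_len=32768)  # small context for a quick smoke test
outputs = llm.generate(
    ["[INST] What is your favourite condiment? [/INST]"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)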

Original model README.md file:

MegaBeam-Mistral-7B-300k Model

MegaBeam-Mistral-7B-300k is a fine-tuned Mistral-7B-Instruct-v0.2 language model that supports input contexts up to 320k tokens. MegaBeam-Mistral-7B-300k can be deployed on a single AWS g5.48xlarge instance using serving frameworks such as vLLM, a SageMaker DJL endpoint, and others. Similarities and differences between MegaBeam-Mistral-7B-300k and Mistral-7B-Instruct-v0.2 are summarized below:

| Model | Max context length | rope_theta | Prompt template |
|---|---|---|---|
| Mistral-7B-Instruct-v0.2 | 32K | 1e6 | instruction format |
| MegaBeam-Mistral-7B-300k | 320K | 25e6 | same as above |
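Both models share the Mistral instruction format. As a hedged illustration (assuming this repository's tokenizer ships the standard Mistral chat template), the prompt can be built with transformers:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amazon/MegaBeam-Mistral-7B-300k")
messages = [{"role": "user", "content": "What is your favourite condiment?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to resemble: <s>[INST] What is your favourite condiment? [/INST]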

Evaluations

InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens

InfiniteBench is a cutting-edge benchmark tailored to evaluating language models' ability to process, understand, and reason over super-long contexts (100k+ tokens). We therefore evaluated MegaBeam-Mistral-7B-300k, Mistral-7B-Instruct-v0.2, Llama-3-8B-Instruct-262k, and Llama3-70B-1M on InfiniteBench. The InfiniteBench authors also evaluated SOTA proprietary and open-source LLMs on it, so we combined both sets of results in the table below.

| Task Name | MegaBeam-Mistral-7B-300k | Mistral-7B-Instruct-v0.2 | Llama-3-8B-Instruct-262k | Llama3-70B-1M | GPT-4-1106-preview | YaRN-Mistral-7B | Kimi-Chat | Claude 2 | Yi-6B-200K | Yi-34B-200K | Chatglm3-6B-128K |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Retrieve.PassKey | 100% | 75.76% | 98.30% | 81.35% | 100% | 92.71% | 98.14% | 97.80% | 100.00% | 100.00% | 92.20% |
| Retrieve.Number | 96.10% | 25.25% | 97.79% | 97.62% | 100% | 56.61% | 95.42% | 98.14% | 94.92% | 100.00% | 80.68% |
| Retrieve.KV | 0% | 0% | 3.40% | 3% | 89.00% | < 5% | 53.60% | 65.40% | < 5% | < 5% | < 5% |
| En.Sum | 29.39% | 22.13% | 16.40% | 20.72% | 14.73% | 9.09% | 17.93% | 14.45% | < 5% | < 5% | < 5% |
| En.QA | 14.93% | 4.93% | 13.20% | 16.52% | 22.22% | 9.55% | 16.52% | 11.97% | 9.20% | 12.17% | < 5% |
| En.MC | 51.52% | 7.80% | 50.65% | 62% | 67.25% | 27.95% | 72.49% | 62.88% | 36.68% | 38.43% | 10.48% |
| En.Dia | 9.50% | 3.50% | 1% | 12.50% | 8.50% | 7.50% | 11.50% | 46.50% | < 5% | < 5% | < 5% |
| Zh.QA | 10.71% | 3.43% | 19.02% | 26% | 25.96% | 14.43% | 17.93% | 9.64% | 15.07% | 13.61% | < 5% |
| Code.Debug | 27.41% | 11.60% | 22.08% | 23.85% | 39.59% | < 5% | 18.02% | < 5% | < 5% | < 5% | < 5% |
| Code.Run | 1.75% | 0.25% | 0% | 0% | 23.25% | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Calc | 0% | 0% | 0% | 0% | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Find | 24.28% | 26.28% | 15.40% | 30% | 60.00% | 17.14% | 12.57% | 32.29% | < 5% | 25.71% | 7.71% |
| Average | 30.70% | 15.08% | 28.10% | 31.13% | 46.08% | 20.41% | 34.93% | 37.21% | 22.78% | 25.41% | 17.59% |

The 12 evaluation tasks are summarized below (as per InfiniteBench):

| Task Name | Context | # Examples | Avg Input Tokens | Avg Output Tokens | Description |
|---|---|---|---|---|---|
| En.Sum | Fake Book | 103 | 171.5k | 1.1k | Summarization of a fake book created with core entity substitution. |
| En.QA | Fake Book | 351 | 192.6k | 4.8 | Free-form question answering based on the fake book. |
| En.MC | Fake Book | 229 | 184.4k | 5.3 | Multiple-choice questions derived from the fake book. |
| En.Dia | Script | 200 | 103.6k | 3.4 | Identification of talkers in partially anonymized scripts. |
| Zh.QA | New Book | 175 | 2068.6k | 6.3 | Question answering on a set of newly collected books. |
| Code.Debug | Code Document | 394 | 114.7k | 4.8 | Finding which function in a code repo contains a crashing error (in multiple-choice form). |
| Code.Run | Synthetic | 400 | 75.2k | 1.3 | Simulating execution of multiple simple, synthetic functions. |
| Math.Calc | Synthetic | 50 | 43.9k | 43.9k | Calculations involving super-long arithmetic equations. |
| Math.Find | Synthetic | 350 | 87.9k | 1.3 | Finding special integers in a lengthy list. |
| Retrieve.PassKey | Synthetic | 590 | 122.4k | 2.0 | Retrieving hidden keys in a noisy long context. |
| Retrieve.Number | Synthetic | 590 | 122.4k | 4.0 | Locating repeated hidden numbers in a noisy long context. |
| Retrieve.KV | Synthetic | 500 | 89.9k | 22.7 | Finding the corresponding value from a dictionary and a key. |

Serve MegaBeam-Mistral-7B-300k on EC2 instances

On an AWS g5.48xlarge instance, upgrade vLLM to the latest version as per the vLLM documentation.
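For example, in a CUDA-ready Python environment (this exact command is an assumption about your setup, not part of the original card):

pip install --upgrade vllm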

Start the server

python3 -m vllm.entrypoints.openai.api_server --model amazon/MegaBeam-Mistral-7B-300k --tensor-parallel-size 8

Important note: we have set max_position_embeddings in config.json to 288,800 in order to fit the model's KV cache on a single g5.48xlarge instance, which has 8 x A10G GPUs (24 GB of GPU memory each).

On an instance with more GPU memory (e.g. p4d.24xlarge), feel free to increase the value of max_position_embeddings (e.g. to 350K), which the model should be able to process.
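If you prefer not to edit config.json, recent vLLM versions also accept a --max-model-len flag that caps the serving context at launch time; a hedged sketch (the 288800 value mirrors the setting above):

python3 -m vllm.entrypoints.openai.api_server --model amazon/MegaBeam-Mistral-7B-300k --tensor-parallel-size 8 --max-model-len 288800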

Run the client

from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

chat_completion = client.chat.completions.create(
        messages = [
            {"role": "user", "content": "What is your favourite condiment?"}, # insert your long context here
            {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
            {"role": "user", "content": "Do you have mayonnaise recipes?"} # insert your long context here
        ],
        model=model,
)

print("Chat completion results:")
print(chat_completion)
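To actually exercise the long context, the document text can be placed directly in the user message. A minimal sketch reusing the client above (the file name and question are placeholder assumptions):

# Read a long document and ask the model about it.
with open("long_document.txt") as f:
    long_context = f.read()

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "user", "content": f"{long_context}\n\nSummarise the key points of the document above."},
    ],
    model=model,
    max_tokens=512,
)
print(chat_completion.choices[0].message.content)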

Deploy the model on a SageMaker Endpoint

To deploy MegaBeam-Mistral-7B-300k on a SageMaker endpoint, please follow the SageMaker DJL deployment guide.

Run the following Python code in a SageMaker notebook (with each block running in a separate cell):

import sagemaker
from sagemaker import Model, image_uris, serializers, deserializers

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

%%writefile serving.properties
engine=Python
option.model_id=amazon/MegaBeam-Mistral-7B-300k
option.dtype=bf16
option.task=text-generation
option.rolling_batch=vllm
option.tensor_parallel_degree=8
option.device_map=auto

%%sh
mkdir mymodel
mv serving.properties mymodel/
tar czvf mymodel.tar.gz mymodel/
rm -rf mymodel

image_uri = image_uris.retrieve(
        framework="djl-deepspeed",
        region=region,
        version="0.27.0"
)

s3_code_prefix = "megaBeam-mistral-7b-300k/code"
bucket = sagemaker_session.default_bucket()  # bucket to house artifacts
code_artifact = sagemaker_session.upload_data("mymodel.tar.gz", bucket, s3_code_prefix)
print(f"S3 Code or Model tar ball uploaded to ---> {code_artifact}")
model = Model(image_uri=image_uri, model_data=code_artifact, role=role)

instance_type = "ml.g5.48xlarge"
endpoint_name = sagemaker.utils.name_from_base("megaBeam-mistral-7b-300k")
model.deploy(initial_instance_count=1,
             instance_type=instance_type,
             endpoint_name=endpoint_name
            )

# our requests and responses will be in JSON format, so we specify the serializer and the deserializer
predictor = sagemaker.Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sagemaker_session,
    serializer=serializers.JSONSerializer(),
    deserializer=deserializers.JSONDeserializer(),
)

# test the endpoint
input_str = """<s>[INST] What is your favourite condiment? [/INST]
Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>
[INST] Do you have mayonnaise recipes? [/INST]"""
predictor.predict(
    {"inputs": input_str, "parameters": {"max_new_tokens": 75}}
)
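With a JSON deserializer on the Predictor (as above), the response comes back as a parsed dictionary. A hedged sketch of reading the generated text, assuming the {"generated_text": ...} output schema shown in the boto3 example below:

response = predictor.predict(
    {"inputs": input_str, "parameters": {"max_new_tokens": 75}}
)
print(response["generated_text"])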

Invoke the model on a SageMaker Endpoint

To use MegaBeam-Mistral-7B-300k on a SageMaker endpoint, please try the following example:

import boto3
import json

def call_endpoint(text: str, endpoint_name: str):
    client = boto3.client("sagemaker-runtime")

    parameters = {
        "max_new_tokens": 450,
        "do_sample": True,
        "temperature": 0.7,
    }

    payload = {"inputs": text, "parameters": parameters}

    response = client.invoke_endpoint(
        EndpointName=endpoint_name, Body=json.dumps(payload), ContentType="application/json"
    )

    output = json.loads(response["Body"].read().decode())

    result = output["generated_text"]
    return result

# please insert your long prompt/document content here
prompt = """<s>[INST] What are the main challenges to support long contexts for a Large Language Model? [/INST]"""

#print(prompt)
endpoint_name = "megaBeam-mistral-7b-300k-2024-05-13-14-23-41-219" # please use a valid endpoint name
result = call_endpoint(prompt, endpoint_name)
print(result)

Limitations

Before using the MegaBeam-Mistral-7B-300k model, it is important to perform your own independent assessment and to take measures to ensure that your use complies with your own quality-control practices and standards, and with the local rules, laws, regulations, licenses, and terms that apply to you and your content.

The AWS Contributors

Chen Wu, Yin Song, Verdi March, Eden Duthie
