---
license: apache-2.0
datasets:
  - tiiuae/falcon-refinedweb
  - OpenAssistant/oasst1
  - timdettmers/openassistant-guanaco
---

Mastermax-7B

Mastermax-7B is a 7B-parameter causal decoder-only model fine-tuned from TII's Falcon-7B, which was trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.

This model was fine-tuned on the following additional datasets:

  • OpenAssistant/oasst1
  • timdettmers/openassistant-guanaco
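Because the fine-tuning data includes timdettmers/openassistant-guanaco, prompts in the Guanaco-style conversation format may work well. The `build_prompt` helper below is a hypothetical illustration of that format, not a documented API of this model:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Guanaco-style chat format
    (an assumption based on the openassistant-guanaco dataset)."""
    return f"### Human: {instruction}\n### Assistant:"

# The resulting string can be passed as the prompt to the
# text-generation pipeline shown below.
prompt = build_prompt("Explain what a causal decoder-only model is.")
print(prompt)
```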

How to use the model

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lifeofcoding/mastermax-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # bfloat16 halves memory use on supported hardware
    trust_remote_code=True,      # required: the model repository ships custom code
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
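The `top_k=10` argument restricts sampling to the 10 most likely tokens at each step. The standalone sketch below illustrates the mechanic on a toy 5-token vocabulary (the logit values are made up for illustration):

```python
import math

def top_k_filter(logits, k):
    """Keep only the k highest logits; set the rest to -inf
    so they receive zero probability after softmax."""
    cutoff = sorted(logits, reverse=True)[k - 1]
    return [x if x >= cutoff else float("-inf") for x in logits]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of 5 tokens; with k=2, only the two most
# likely tokens can ever be sampled.
logits = [2.0, 1.0, 0.5, -1.0, -3.0]
probs = softmax(top_k_filter(logits, k=2))
print([round(p, 3) for p in probs])
```

A lower `top_k` makes generation more conservative; a higher value admits less likely tokens and increases diversity.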
  • Developed by: LifeOfCoding
  • Model type: Causal decoder-only;
  • Language(s) (NLP): English and French;
  • License: Apache 2.0;
  • Finetuned from model: Falcon-7B.

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]