This is a pre-release model card for PhelixZhen/Algae-550M. Training started on February 7, 2024, and the model will be released in the future.

The model uses the Phi architecture and has 550 million parameters. It supports English only and does not support code generation.
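
If you want to check the architecture and parameter count yourself once the preview weights are downloaded, a minimal sketch is below (the path is a placeholder for your local copy or the Hub repo id):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('path/to/model')  # placeholder path for the preview weights
print(model.config.model_type)  # architecture name registered in the model config
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # roughly 550M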

The model's dataset was built by cleaning and deduplicating open-source datasets; pre-training used approximately 30 billion instances.
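
The exact cleaning pipeline is not documented here. Purely as an illustration of the idea, exact deduplication can be done by hashing normalized text and keeping the first occurrence, roughly like this:

import hashlib

def dedupe(texts):
    # keep the first occurrence of each normalized document, drop exact duplicates
    seen = set()
    unique = []
    for t in texts:
        h = hashlib.sha256(t.strip().lower().encode('utf-8')).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(t)
    return unique

print(dedupe(["Example document.", "example document.", "Another document."]))
# ['Example document.', 'Another document.']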

If you are a native English speaker, these sentences may read awkwardly: both the training of this model and the writing of this document were done entirely by a very inexperienced Chinese high school student.

Anyway, this is a new attempt. The model was trained on consumer-grade hardware and without professional guidance, so we do not expect it to perform exceptionally well.

But we hope this will be the beginning of a new great exploration.

We released a preview version on February 24, 2024. You can run it with the following code:

from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# load the preview tokenizer and model; replace the placeholder paths with your local copy or the Hub repo id
tokenizer = AutoTokenizer.from_pretrained('path/to/tokenizer')
model = AutoModelForCausalLM.from_pretrained('path/to/model').to(device)
tokenizer.pad_token = tokenizer.eos_token

txt = 'input text'  # replace with your prompt

# sampling settings (do_sample=True, so this is top-k/top-p sampling rather than greedy search)
gen_conf = GenerationConfig(
    num_beams=1,
    do_sample=True,
    max_length=700,
    no_repeat_ngram_size=6,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    temperature=0.93,
    top_k=36,
    top_p=0.80
)

# tokenize the prompt and move the tensors to the target device
tokend = tokenizer.encode_plus(text=txt)
input_ids = torch.LongTensor([tokend.input_ids]).to(device)
attention_mask = torch.LongTensor([tokend.attention_mask]).to(device)

outputs = model.generate(
    inputs=input_ids,
    attention_mask=attention_mask,
    generation_config=gen_conf,
)

outs = tokenizer.decode(outputs[0].cpu().numpy(), clean_up_tokenization_spaces=True)
print(outs)