
A 231M-parameter base model trained on 4.4B tokens of Lichess games from January 2023 that ended in checkmate (games won on time were filtered out).

## Inference

```python
from transformers import GPT2LMHeadModel, AutoTokenizer

model = GPT2LMHeadModel.from_pretrained("nsarrazin/chessformer").eval()
tokenizer = AutoTokenizer.from_pretrained("nsarrazin/chessformer")

# Games are encoded as space-separated sequences of UCI moves.
moves = " ".join(["e2e4", "e7e5", "d2d4", "d7d5"])

model_inputs = tokenizer(moves, return_tensors="pt")
gen_tokens = model.generate(**model_inputs, max_new_tokens=1)[0]
next_move = tokenizer.decode(gen_tokens[-1])

print(next_move)  # d4e5
```
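
Because this is a base model, a generated token is not guaranteed to be a legal move. The sketch below (not part of the model card) samples several single-token candidates and keeps the first one that is legal, using python-chess for validation; `next_legal_move` and `num_candidates` are hypothetical names.

```python
import chess

def next_legal_move(model, tokenizer, moves, num_candidates=10):
    # Replay the game so far to know which moves are legal.
    board = chess.Board()
    for uci in moves:
        board.push_uci(uci)

    model_inputs = tokenizer(" ".join(moves), return_tensors="pt")
    # Sample several one-token continuations in a single generate call.
    gen_tokens = model.generate(
        **model_inputs,
        max_new_tokens=1,
        do_sample=True,
        num_return_sequences=num_candidates,
    )
    for seq in gen_tokens:
        candidate = tokenizer.decode(seq[-1])
        try:
            move = chess.Move.from_uci(candidate)
        except ValueError:
            continue  # special token or malformed output
        if move in board.legal_moves:
            return candidate
    return None

print(next_legal_move(model, tokenizer, ["e2e4", "e7e5", "d2d4", "d7d5"]))
```

Raising `num_candidates` trades latency for a better chance of finding a legal move; if no candidate is legal, the function returns `None`.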

## End of game detection

The model also has three special tokens for end-of-game detection: `<BLACK_WIN>`, `<WHITE_WIN>` and `<DRAW>`. These can be useful for implementing beam-search strategies, as sketched after the example below.

```python
# Fool's mate: Black delivers checkmate with d8h4.
moves = " ".join(["f2f3", "e7e5", "g2g4", "d8h4"])

model_inputs = tokenizer(moves, return_tensors="pt")
gen_tokens = model.generate(**model_inputs, max_new_tokens=1)[0]
next_move = tokenizer.decode(gen_tokens[-1])

print(next_move)  # <BLACK_WIN>
```
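
As a minimal sketch of such a strategy (assuming each special token maps to a single id in the tokenizer's vocabulary, which the example above suggests), the next-token probability mass on the three outcome tokens can be read directly from the logits and used to score candidate lines; `win_probabilities` is a hypothetical helper, not part of the model's API.

```python
import torch

def win_probabilities(model, tokenizer, moves):
    model_inputs = tokenizer(" ".join(moves), return_tensors="pt")
    with torch.no_grad():
        logits = model(**model_inputs).logits[0, -1]  # next-token logits
    probs = torch.softmax(logits, dim=-1)
    outcomes = ["<WHITE_WIN>", "<BLACK_WIN>", "<DRAW>"]
    ids = tokenizer.convert_tokens_to_ids(outcomes)  # assumes single-token ids
    return {tok: probs[i].item() for tok, i in zip(outcomes, ids)}

print(win_probabilities(model, tokenizer, ["f2f3", "e7e5", "g2g4", "d8h4"]))
# Expect most of the mass on <BLACK_WIN> for this mated position.
```

A beam search could expand the most promising move sequences and rank them by the probability of the desired outcome token at each step.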