#52 · Update tokenizer's chat template to support assistant masks · opened 17 days ago by leleogere
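Discussion #52 appears to concern transformers' assistant-token masking, which only works when the chat template wraps assistant turns in `{% generation %}` blocks. A minimal sketch of the intended usage, assuming such a template has been merged (the `return_assistant_tokens_mask` flag and `assistant_masks` key follow transformers' naming; verify against your installed version):

```python
# Hedged sketch: request an assistant-token mask when tokenizing a conversation.
# Requires a chat template whose assistant turns are wrapped in {% generation %} blocks.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Codestral-22B-v0.1")

messages = [
    {"role": "user", "content": "Reverse a string in Python."},
    {"role": "assistant", "content": "Use slicing: s[::-1]"},
]

encoded = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_assistant_tokens_mask=True,  # 1 for tokens inside assistant turns, 0 elsewhere
)
print(encoded["assistant_masks"])
```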
#51 · How to use [REFERENCE_DOC_x] in fill-in-the-middle tasks? · opened 30 days ago by JimZhang
#50 · Facing NoneType for metadata while creating pipeline for codestral-22b · opened about 1 month ago by sivajyothi82
#49 · Request for access to Codestral-22B-v0.1 · opened about 1 month ago by sivajyothi82
#48 · can we use this model for production? · 1 comment · opened about 2 months ago by Bebish
#47 · how to use huggingface-transformers for FIM task · opened 3 months ago by aa327chenge
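For #47 (FIM with plain transformers), the hedged sketch below builds the prompt manually with the model's control tokens. This assumes [SUFFIX] and [PREFIX] are actually present in the loaded tokenizer (discussions #24 and #10 report they may be missing from tokenizer.json) and that the suffix-before-prefix order matches mistral-common's FIM encoding; verify both before relying on it:

```python
# Hedged sketch of a fill-in-the-middle prompt with transformers.
# Assumes the [SUFFIX]/[PREFIX] control tokens exist in the tokenizer vocabulary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prefix = "def fibonacci(n):\n    "
suffix = "\n    return result"
# Suffix first, then prefix; the model is expected to generate the missing middle.
prompt = f"[SUFFIX]{suffix}[PREFIX]{prefix}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```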
#46 · Deployment to SageMaker - instance type? · 1 comment · opened 4 months ago by MavWolverine
#44 · Hardware requirements to use mistralai/Codestral-22B-v0.1 · 1 comment · opened 5 months ago by tzak86
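A rough sizing estimate for #44, stated as back-of-the-envelope arithmetic rather than a tested configuration: 22B parameters at 2 bytes each (bf16/fp16) is about 44 GB for the weights alone, roughly 22 GB at 8-bit and 11-14 GB at 4-bit quantization, plus additional memory for the KV cache and activations that grows with batch size and context length.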
#39 · Add chat_template to tokenizer_config.json · 1 comment · opened 5 months ago by Albhebvvsbbe
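Once a chat template lands in tokenizer_config.json (the subject of #39 and #52), prompting through transformers reduces to apply_chat_template. A hedged sketch, assuming a Mistral-style [INST] template has been added:

```python
# Hedged sketch: build an instruction prompt via the tokenizer's chat template.
# Only meaningful once a chat_template entry exists in tokenizer_config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Codestral-22B-v0.1")

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # e.g. "<s>[INST] ... [/INST]" with a Mistral-style template
```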
#38 · Run Codestral on Google Colab or Kaggle · opened 5 months ago by mohammad1998
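For #38, a 22B model generally only fits on Colab or Kaggle GPUs with 4-bit quantization; a hedged sketch with bitsandbytes (whether it actually fits depends on the runtime's GPU, and the quantized weights still need roughly 11-14 GB):

```python
# Hedged sketch: load the model in 4-bit with bitsandbytes to fit a small GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Codestral-22B-v0.1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```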
#36 · Update config.json · 1 comment · opened 5 months ago by rameshjhar480
#35 · Optimizing with the transformers library · opened 5 months ago by Taylor658
#34 · Why "Mistral Inference" and why your own tokenizer formats? · opened 5 months ago by winddude
#33 · NS1 · opened 5 months ago by nishan3000
#32 · test2 · 2 comments · opened 5 months ago by yangwei999
#30 · Knowledge cutoff date · 2 comments · opened 5 months ago by aravindsr
#29 · Sample code using HF · 1 comment · opened 5 months ago by vanshils
#28 · Update README.md · 1 comment · opened 5 months ago by Criztov
#26 · How to convert the weight to HF compatible? · 7 comments · opened 5 months ago by salaki
#25 · Question about consolidated.safetensors vs model*.safetensors · opened 5 months ago by JineLD
#24 · New tokenizer.json doesn't match tokenizer.model.v3, there's no [SUFFIX], [MIDDLE] and [PREFIX] tokens. · 1 comment · opened 5 months ago by mukel
#21 · can inference with vllm? · 1 comment · opened 5 months ago by amosxy
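For #21, recent vLLM releases load HF-format Mistral-architecture checkpoints directly; a minimal sketch (tensor_parallel_size=2 is illustrative, not a requirement):

```python
# Minimal vLLM sketch; adjust tensor_parallel_size to the number of available GPUs.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Codestral-22B-v0.1", tensor_parallel_size=2)
params = SamplingParams(temperature=0.0, max_tokens=256)

outputs = llm.generate(["def quicksort(arr):"], params)
print(outputs[0].outputs[0].text)
```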
#20 · Why does this have 518 likes without any download, inference api or spaces? · 3 comments · opened 5 months ago by GPT007
#19 · How to load in multi-gpu instance? · 6 comments · opened 5 months ago by aastha6
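For #19, the simplest multi-GPU route in transformers is device_map="auto" via accelerate, which shards the layers across every visible GPU; a hedged sketch:

```python
# Hedged sketch: shard the model across all visible GPUs with accelerate's device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `pip install accelerate`
)
print(model.hf_device_map)  # shows which layers ended up on which device
```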
#17 · Is it a pretrained/base model or instruct model? · 2 comments · opened 5 months ago by Sinsauzero
#16 · how to fine tune this model? · 4 comments · opened 5 months ago by leo009
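For #16, parameter-efficient fine-tuning with LoRA via peft is the usual starting point for a model this size; a hedged sketch with illustrative (untuned) hyperparameters:

```python
# Hedged sketch: wrap the model with LoRA adapters via peft; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Codestral-22B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Mistral-style attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training then proceeds with transformers' Trainer or trl's SFTTrainer on a code dataset.
```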
#15 · What is the context size on this model? And it does not appear to deal with JSON, function calling well. · 1 comment · opened 5 months ago by BigDeeper
#14 · Can I use the model's output code to develop software for a commercial website? (I am a MERN stack developer working in a company) · 1 comment · opened 5 months ago by nib12345
#13 · Disclosing training data · 1 comment · opened 5 months ago by Vipitis
#12 · Fill in the Middle evaluation benchmark · 1 comment · opened 5 months ago by phqtuyen
#10 · No [PREFIX] and [SUFFIX] in tokenizer vocab · 5 comments · opened 5 months ago by Vokturz
#9 · Update README.md · 1 comment · opened 5 months ago by machinez
#8 · License · 3 comments · opened 5 months ago by mrfakename
#6 · How to convert to HF format? · 5 comments · opened 5 months ago by ddh0
#5 · What does the tokenization for fill-in-the-middle requests look like? · 2 comments · opened 5 months ago by XeIaso
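For #5, the documented path for inspecting the FIM encoding is mistral-common's FIMRequest/encode_fim; a hedged sketch (class and method names are the ones mistral-common uses for v3 tokenizers, but verify against the installed version):

```python
# Hedged sketch: inspect how a fill-in-the-middle request is tokenized with mistral-common.
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
request = FIMRequest(prompt="def add(a, b):\n    ", suffix="\n    return result")

tokenized = tokenizer.encode_fim(request)
print(tokenized.text)    # the prompt with control tokens such as [SUFFIX]/[PREFIX] spliced in
print(tokenized.tokens)  # the corresponding token ids
```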
#4 · bpmn modeling · opened 5 months ago by alibama
#3 · We are so back · 5 comments · opened 5 months ago by nanowell