momonga PRO (mmnga)
AI & ML interests: None yet
mmnga's activity
Would it be possible to have an 8bit gguf?
2 · #1 opened 3 months ago by PurityWolf
Please use split ggufs instead of splitting files manually
1 · #1 opened 4 months ago by lmg-anon
The usage example in the model card seems to use the ChatML format.
1 · #1 opened 4 months ago by yamikumods
Error in LM Studio
3 · #1 opened 6 months ago by alfredplpl
Update tokenization_arcade100k.py
#1 opened 6 months ago by mmnga
Please tell me how you converted this FAST model into a gguf file.
7 · #1 opened 7 months ago by wattai
Update config.json
1 · #3 opened 7 months ago by mmnga
Differences in output from the original model
2 · #1 opened 10 months ago by nitky
Librarian Bot: Add moe tag to model
#3 opened 10 months ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened 10 months ago by librarian-bot
Librarian Bot: Add moe tag to model
#1 opened 10 months ago by librarian-bot
Maybe a slerp or some other merge method would preserve the component experts better?
3 · #2 opened 11 months ago by BlueNipples
Responses somewhat related to the prompt but still gibberish
2 · #1 opened 11 months ago by JeroenAdam
Migration to Colab A100 due to the end of Triton support
2 · #2 opened about 1 year ago by alfredplpl
Quantization with float16 instead of bfloat16
2 · #1 opened about 1 year ago by alfredplpl
Missing tokenizer.model
4 · #1 opened about 1 year ago by mmnga
Is this related to GPT-Neo-2.7B-AID?
1 · #1 opened about 1 year ago by adriey