How to run this model, please help me
1 · #40 opened 24 minutes ago by Agoogleuser
project_bb
#39 opened about 21 hours ago by salunkesayali
Update README.md
#37 opened 4 days ago by leonsas14
AssertionError: Torch not compiled with CUDA enabled
1 · #35 opened 6 days ago by gokul9
Upload 4 files
#34 opened 6 days ago by amberfelts890
FP8 Quantized model now available! (only requires half the original model's VRAM)
#33 opened 7 days ago by mysticbeing
Gadget Repair
#32 opened 10 days ago by gvilchis
Update README.md
#31 opened 10 days ago by MarcosMT
Request: DOI
1 · #30 opened 14 days ago by Natwar
Update README.md
#27 opened 15 days ago by jinyongkenny
Is there an AWQ quant version?
#26 opened 16 days ago by imonsoon
Request: DOI
#25 opened 18 days ago by Arslanbey123
Suitable hardware config for using this model
10 · #22 opened 22 days ago by nimool
Template Prompt
2 · #20 opened 22 days ago by sadra
Any way I can run it on my low-to-mid-tier HP desktop? Specs attached as a .png; I know it's probably a long shot.
6 · #18 opened 23 days ago by vgrowhouse
How to run inference on a 40 GB A100 with 80 GB RAM on Colab Pro?
1 · #17 opened 24 days ago by SadeghPouriyan
nvdiallm
#16 opened 25 days ago by jhaavinash
Update README.md
#12 opened 26 days ago by Delcos
[EVALS] Metrics compared to 3.1-70b Instruct by Meta
2 · #11 opened 27 days ago by ID0M
Congrats to the Nvidia team!
#10 opened 28 days ago by nickandbro
Will a quantised version be available?
4 · #9 opened 28 days ago by angerhang
Adding Evaluation Results
#8 opened 28 days ago by leaderboard-pr-bot
The model is not optimized in terms of inference
1 · #7 opened 28 days ago by Imran1
There are 3 "r"s in the playful "strawrberry"?
5 · #6 opened 28 days ago by JieYingZhang
405B version
2 · #5 opened 28 days ago by nonetrix
Other language ability
2 · #4 opened 28 days ago by nonetrix
What's the difference between Instruct-HF and Instruct?
1 · #2 opened 29 days ago by Backup6
Turn inference ON?
2 · #1 opened 29 days ago by victor