Recommended inference devices
#18
by AdrienVeepee · opened
Hello, thanks for this model and the work done on this project!
I'm designing a solution with NVLM-D-72B and I was looking into how much we should provision in terms of GPUs.
For testing purposes, will an H100 be sufficient?
How much RAM and VRAM should we target? Thanks!
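For a rough sense of scale, here is a back-of-envelope sketch (my own assumptions, not an official sizing guide): it takes the parameter count implied by the model name, typical per-parameter sizes, and a flat overhead factor, and ignores KV cache growth, image tiles, and batch size.

```python
# Rough VRAM estimate for a ~72B-parameter model (a sketch, not an official sizing guide).
# Assumptions: parameter count taken from the model name, ~20% runtime overhead,
# 80 GB per H100 (SXM/PCIe). Actual usage also depends on KV cache, image inputs,
# batch size, and the serving framework.
import math

PARAMS_B = 72  # billions of parameters, assumed from the model name

BYTES_PER_PARAM = {
    "bf16/fp16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

OVERHEAD = 1.2  # assumed headroom for activations, KV cache, CUDA context
H100_GB = 80    # memory of a standard 80 GB H100

for dtype, bytes_per_param in BYTES_PER_PARAM.items():
    weights_gb = PARAMS_B * bytes_per_param   # weights alone
    total_gb = weights_gb * OVERHEAD          # with rough runtime overhead
    n_gpus = math.ceil(total_gb / H100_GB)    # how many 80 GB H100s that implies
    print(f"{dtype:>10}: ~{weights_gb:.0f} GB weights, "
          f"~{total_gb:.0f} GB total, ~{n_gpus}x 80 GB H100")
```

Under these assumptions, bf16 weights alone are around 144 GB, so a single 80 GB H100 would not hold the full-precision model; testing on one H100 would likely require quantization (int4/int8) or a multi-GPU setup.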
AdrienVeepee changed discussion status to closed