Inference error: replacing the LLM part with a quantized Llama-3.1 70B causes an error (RuntimeError: shape mismatch: value tensor of shape [1024] cannot be broadcast to indexing result of shape [1025])
#75 opened about 1 month ago by CCRss
FutureWarning and no result
#74 opened about 1 month ago by najeebahmad
Text-only inference
#71 opened 3 months ago by ZoeyYao27
Use with llama-cpp-python server
#70 opened 3 months ago by SurtMcGert
Is it possible to merge MiniCPM-Llama3-V-2-5 with a Llama-3.1-based model using MoE?
10 replies · #68 opened 3 months ago by rameshch
LoRA bug
#59 opened 4 months ago by SHIMURA0321
How to run inference with the transformers pipeline?
2 replies · #54 opened 5 months ago by Zoitd
Fix task name
#53 opened 5 months ago by merve
About LLaVA Bench
#52 opened 5 months ago by Pistachioo
OCR issue
1 reply · #51 opened 5 months ago by william0014
Does anybody know how (or with what) this model can actually be loaded and run for inference?
4 replies · #50 opened 5 months ago by SytanSD
I want to know the textual context length of this model
1 reply · #48 opened 5 months ago by vobbilisettyjayadeep
Couldn't connect to 'https://hf-mirror.com'
#47 opened 5 months ago by ccccccyan
How does MiniCPM-Llama3-V 2.5's OCR perform compared with ABBYY?
1 reply · #46 opened 5 months ago by DouglasZHANG
gguf / llama.cpp support
2 replies · #45 opened 5 months ago by cmp-nct
feat: Added judgment logic to support training with plain text data.
#42 opened 5 months ago by PaceWang
About the OpenAI-compatible API
1 reply · #41 opened 5 months ago by weiminw
Deploying the model to Amazon SageMaker for inference
#40 opened 5 months ago by Shan097
The Kernel crashed while executing code in the current cell or a previous cell.
2 replies · #34 opened 5 months ago by Entz
[AUTOMATED] Model Memory Requirements
#33 opened 5 months ago by model-sizer-bot
Failed to load model in LM Studio 0.2.24
2 replies · #32 opened 5 months ago by skzz
Help me
#30 opened 6 months ago by HamBoneTheSniff
Is there any tutorial for inference with batch_size > 1?
3 replies · #29 opened 6 months ago by virtueai-mz
Why is the response garbled?
3 replies · #28 opened 6 months ago by z784721485
Will there be documentation introducing the details of the dataset and training strategies?
1 reply · #27 opened 6 months ago by stiffxj
Potential bug when batch inferencing with left padding.
1 reply · #26 opened 6 months ago by Snorlax
Why doesn't it work with bitsandbytes 8-bit or 4-bit quantization?
3 replies · #25 opened 6 months ago by jackboot
Latest Code
#24 opened 6 months ago by Taylor658
The llama3-V project is stealing a lot of academic work from MiniCPM-Llama3-V 2.5!
10 replies · #23 opened 6 months ago by pzc163
Finetune
6 replies · #22 opened 6 months ago by deqiuqiuzhang
Quite exceptional; looking forward to seeing the fork implemented in llama.cpp, but why that choice of fork?
1 reply · #20 opened 6 months ago by cmp-nct
Why 'trust_remote_code=True'? What remote code is being executed?
3 replies · #17 opened 6 months ago by Kkordik
Mobile App?
3 replies · #16 opened 6 months ago by Yhyu13
A summary of several key issues
1 reply · #14 opened 6 months ago by windkkk
How can the openbmb/MiniCPM-Llama3-V-2_5 model be converted to ONNX format? Are there any good suggestions or integration tools?
4 replies · #13 opened 6 months ago by fridayfairy
I want to convert this model to gguf format and import it into ollama to test it, but the conversion fails; please help.
7 replies · #12 opened 6 months ago by changingshow
Update README.md
#10 opened 6 months ago by saishf
Does MiniCPM support multi-image input?
17 replies · #2 opened 6 months ago by huanghui1997
Using gguf format
7 replies · #1 opened 6 months ago by Tomy99999