- Request: DOI (1) · #27 opened 5 months ago by SriK007
- I have to compute this LLaMA-2 model to GPU but getting errors · #26 opened 8 months ago by heiskareem
- Addressing Inconsistencies in Model Outputs: Understanding and Solutions · #25 opened 12 months ago by shivammehta
- cuda error when loading llama 7b chat (2) · #24 opened about 1 year ago by rachelshalom
- Number of tokens exceeded maximum context length (2) · #22 opened about 1 year ago by praxis-dev
- Deploying with Text Generation Inference · #21 opened about 1 year ago by mariiaponom
- Can anyone suggest me how can i run "llama.cpp" on GPU. (1) · #18 opened about 1 year ago by fais4321
- Prompt Template · #17 opened about 1 year ago by vincenzomanzoni
- Could you quant Multilingual llm such as OpenBuddy/openbuddy-llama2-70b-v10.1-bf16, thank you. · #16 opened about 1 year ago by hugingfaceg
- Fine Tuning this huggingface model · #15 opened about 1 year ago by kohnsolution
- invalid magic number: latest release of llama.cpp cannot import 13B GGML q4.0 model (8) · #14 opened about 1 year ago by zenitica
- Help needed to load model (19) · #13 opened about 1 year ago by sanjay-dev-ds-28
- end of sentence token in fine-tuning dataset (1) · #12 opened about 1 year ago by tanner-sorensen
- ValueError: Error raised by inference API: Pipeline cannot infer suitable model classes from TheBloke/Llama-2-13B-chat-GGML (1) · #11 opened over 1 year ago by username098
- is there a checksum for each of these downloads? (2) · #10 opened over 1 year ago by kechan
- difference in inference between llama_cpp and langchain's LlamaCpp wrapper · #9 opened over 1 year ago by YairFr
- Os win error · #8 opened over 1 year ago by Charan5145
- Prompt template wrong in the description? (1) · #7 opened over 1 year ago by h3ndrik
- Error in PrivateGPT (1) · #4 opened over 1 year ago by zWarhammer
- Notebook to test Llama 2 in Colab free tier (8) · #3 opened over 1 year ago by r3gm
- Extreme levels of censorship? (8) · #2 opened over 1 year ago by ceoofcapybaras
- Holy stuff this is huge!!!! Cant wait for 70b GGML model!!!! (11) · #1 opened over 1 year ago by rombodawg