Junlin Zhou (jlzhou)
AI & ML interests: None yet
Organizations: None yet

jlzhou's activity
Update license? · 1 · #5 opened about 2 months ago by jlzhou
Base model · 4 · #2 opened 3 months ago by Stark2008
How to download the dataset in bulk? · 1 · #7 opened 5 months ago by Chinglin
What is the difference between the-stack-v2-train-full-ids and the-stack-v2-dedup? · 5 · #2 opened 7 months ago by shawn0wang
Actual dataset size? · 3 · #4 opened 4 months ago by jlzhou
Instruct version please · 1 · #5 opened 4 months ago by rjmehta
What does low_cpu_mem_usage do? · 1 · #8 opened 5 months ago by omgwenxx
Problem Running Model · 13 · #3 opened 11 months ago by bezale
It seems that this model sometimes ignores user instructions · 3 · #12 opened 6 months ago by jlzhou
Start an API for falcon-180B · 6 · #22 opened 12 months ago by DrLuttapi
Add `chat_template` in tokenizer config · 2 · #3 opened 6 months ago by jlzhou
Please create a Google Gemma-7b (8.5b) based version · 12 · #4 opened 7 months ago by rombodawg
Does HF-TGI support this GGUF version? · 1 · #2 opened 6 months ago by gpt3eth
How to convert a 4-bit model back to the fp16 data format? · 3 · #52 opened 6 months ago by tremblingbrain
Add `chat_template` in tokenizer config · 1 · #11 opened 7 months ago by jlzhou
Poor Model Performance with Recommended Quantized Model · 1 · #21 opened 8 months ago by nlpsingh
13b in the future? · 9 · #21 opened 12 months ago by deleted
No memory within model? · 5 · #3 opened 9 months ago by jdc4429
fix: missing suffix for system message · 1 · #1 opened 10 months ago by jlzhou
Problem with streaming support · 5 · #17 opened 10 months ago by mattma1970
fix: quantize param in TGI example · 1 · #8 opened 11 months ago by jlzhou
Any idea how to test this for inference using vLLM? · 3 · #1 opened 12 months ago by silvacarl
Failed to run this model on an A6000 48 GB VRAM machine · 2 · #3 opened 12 months ago by Leegohi
CPU or GPU · 1 · #76 opened about 1 year ago by lalit34
How to quantise the model? · 2 · #2 opened about 1 year ago by szbigcat
Does it increase inference speed on the same GPU? · 2 · #1 opened about 1 year ago by aibarito-ua
Getting HTTP Error Code 422 when using the Inference API · 2 · #96 opened about 1 year ago by reetkat
Model sometimes generates '</s>' · 1 · #63 opened about 1 year ago by jlzhou