Shuyue Jia (Bruce)
shuyuej
AI & ML interests
A Ph.D. student at @vkola-lab, Boston University. Passionate about Large Language Models (LLMs), Multimodal Foundation Models, Generative AI, and Medical AI.
Recent Activity
Liked a model about 12 hours ago: shuyuej/SFR-Embedding-2_R-GPTQ
Updated a model about 12 hours ago: shuyuej/SFR-Embedding-2_R-GPTQ
Updated a model 3 days ago: shuyuej/Mixtral-8x7B-Instruct-v0.1-2048
shuyuej's activity
Missing model.safetensors.index.json (3 replies) - #1 opened 4 months ago by kresimirfijacko
Can you create GPTQ 8-bit quants? (1 reply) - #1 opened 4 months ago by rjmehta
Can you provide one model using `group_size=1024` to make the model smaller? - #15 opened 4 months ago by shuyuej
Update quantize_config.json (1 reply) - #12 opened 4 months ago by shuyuej
Update config.json (1 reply) - #11 opened 4 months ago by shuyuej
Source code to quantize the LLaMA 3.1 405B model (3 replies) - #10 opened 4 months ago by shuyuej
Request for Mistral Large Instruct GPTQ INT4 (4 replies) - #2 opened 4 months ago by sparsh35
Missing config.json (5 replies) - #6 opened 4 months ago by wxl2001
Where can we download `quant.py`? (1 reply) - #1 opened 4 months ago by shuyuej
Learning rate during pretraining (1 reply) - #58 opened 4 months ago by shuyuej
About the tokenizer: why use the LLaMA tokenizer? - #5 opened 4 months ago by shuyuej
Model max_seq_length (6 replies) - #6 opened 4 months ago by shuyuej
Model max_seq_length (1 reply) - #4 opened 4 months ago by shuyuej
Where can we find `eval_medical_llm.py` and `main.py`? (1 reply) - #15 opened 6 months ago by shuyuej
Fine-tune a Gemma model for question answering (17 replies) - #62 opened 9 months ago by Iamexperimenting
Weird performance issue with Gemma-7B compared to Gemma-2B with QLoRA (6 replies) - #91 opened 7 months ago by UserDAN
What is the actual context size of the mistralai/Mixtral-8x7B-Instruct-v0.1 model? (3 replies) - #186 opened 8 months ago by Pradeep1995
Very different results with float16 [actually, gemma-7b-it does not work with float16] (6 replies) - #33 opened 9 months ago by EarthWorm001