Björn Plüster (bjoernp)
AI & ML interests: None yet
bjoernp's activity
Can you share how you converted this? · 7 replies · #1 opened 5 months ago by bjoernp
HF safetensors version · 9 replies · #3 opened 5 months ago by ehartford
use_flash_attention_2=True · 3 replies · #9 opened 6 months ago by TillFetzer
leo-mistral-hessianai-7b-chat for privateGPT · 3 replies · #8 opened 7 months ago by Dodo124
Update tokenizer_config.json · #1 opened 7 months ago by bjoernp
Problems with flash-attention2 · 1 reply · #13 opened 8 months ago by omaer0
Loss function? · 1 reply · #10 opened 11 months ago by narvind2003
No multi-GPU inference support? · 8 replies · #4 opened 11 months ago by dataautogpt3
Llama2 vs Mistral · 1 reply · #2 opened 11 months ago by lightningRalf
Add languages · #8 opened 11 months ago by lbourdois
Missing module/classes: from transformers.cache_utils import Cache, DynamicCache · 1 reply · #7 opened 11 months ago by panopstor
changed "tokenizer" typo to be the one we create.
#4 opened 11 months ago
by
dyngnosis
Which transformers version is being used here? · 2 replies · #6 opened 11 months ago by Promptengineering
Flash dependency (locks out non-NVIDIA GPUs) · 3 replies · #4 opened 11 months ago by Thalesian
Update modeling_moe_mistral.py · #5 opened 11 months ago by bjoernp
Trying to quantize. Running into the issue below. Any suggestions? · 1 reply · #5 opened 11 months ago by BigDeeper
small readme fix · #1 opened 11 months ago by jphme
Update modeling_moe_mistral.py · 2 replies · #1 opened 11 months ago by bjoernp
AWQ variant · 4 replies · #2 opened 11 months ago by SebastianBodza