xldistance
AI & ML interests
None yet
Recent Activity
New activity
7 days ago
rombodawg/Rombos-Coder-V2.5-Qwen-32b
Organizations
None yet
xldistance's activity
Your trained model frequently becomes unresponsive when called through the ollama API; ollama must be restarted before it replies again.
1
#3 opened 7 days ago
by
xldistance
The most powerful open-source code model!!!
3
#1 opened 10 days ago
by
xldistance
gguf model not loading properly in ollama
3
#1 opened 4 months ago
by
xldistance
Can you quantize this model to exl2?
1
#7 opened 6 months ago
by
xldistance
Can you provide an EXL2 quantized model?
1
#1 opened 8 months ago
by
xldistance
Create GGUF for this please
8
#2 opened 9 months ago
by
ishanparihar
Can you produce a 2.4bpw exl2 quantisation of this model?
1
#2 opened 9 months ago
by
xldistance
Can you quantize the model?
5
#1 opened 10 months ago
by
xldistance
Can you make a 2.4bpw exl2 quantisation for this model?
4
#1 opened 10 months ago
by
xldistance
GGUF Version?
20
#1 opened 10 months ago
by
johnnnna
Can you quantize this model to 2.4 bpw?
#2 opened 10 months ago
by
xldistance
Can you make a 2.0bpw quantized model?
#4 opened 10 months ago
by
xldistance
maximum context length
2
#3 opened 10 months ago
by
MaziyarPanahi
Can you make a 2.4bpw quantization?
5
#1 opened 11 months ago
by
xldistance
Can you quantize this model?
5
#1 opened 10 months ago
by
xldistance
The 2.4bpw quantized model can produce broken or non-responsive replies
#1 opened 11 months ago
by
xldistance
Can you make a 2.4bpw quantization?
1
#1 opened 11 months ago
by
xldistance
Is there a big performance difference in conversations between 2-bit and 4-bit quantization?
1
#2 opened 11 months ago
by
xldistance
The model often repeats its answer over and over, even though video memory does not overflow
1
#2 opened 11 months ago
by
xldistance