RachidAR
AI & ML interests
1.58-bit LLMs
Collections (5)
- Addition is All You Need for Energy-efficient Language Models • Paper • 2410.00907 • Published • 143 upvotes
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits • Paper • 2402.17764 • Published • 602 upvotes (see the quantization sketch after this list)
- LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding • Paper • 2404.16710 • Published • 73 upvotes
- Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory • Paper • 2405.08707 • Published • 27 upvotes
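The stated 1.58-bit interest corresponds to the BitNet b1.58 paper collected above (2402.17764). As a rough illustration only, here is a minimal NumPy sketch of the absmean ternary quantization that paper describes, rounding weights to {-1, 0, +1}; the function name and shapes are illustrative, not taken from any released BitNet code.

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} with absmean scaling,
    following the BitNet b1.58 recipe (paper 2402.17764).
    Returns ternary weights and the scale gamma, so w ≈ gamma * w_ternary."""
    gamma = np.abs(w).mean()                         # per-tensor absmean scale
    w_scaled = w / (gamma + eps)                     # normalize by the scale
    w_ternary = np.clip(np.round(w_scaled), -1, 1)   # round, then clip to {-1, 0, +1}
    return w_ternary.astype(np.int8), gamma

# Tiny usage example on random weights (hypothetical data, for illustration).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
wq, gamma = absmean_ternary_quantize(w)
print(wq)     # every entry is -1, 0, or 1
print(gamma)  # dequantize with: gamma * wq
```

Each weight then carries log2(3) ≈ 1.58 bits of information, which is where the "1.58-bit" name comes from.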
Models (26)
- RachidAR/Whisper-v3-large-turbo • Automatic Speech Recognition • Updated
- RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF • Text Generation • Updated • 29 downloads
- RachidAR/Mistral-Small-Instruct-2409-Q4_K_M-GGUF • Updated • 19 downloads
- RachidAR/RWKV-v6-Finch-14B-HF-Q5_K_M-GGUF • Updated • 35 downloads • 1 like
- RachidAR/RWKV-v6-Finch-7B-HF-Q5_K_M-GGUF • Updated • 62 downloads • 1 like
- RachidAR/RWKV-v6-Finch-1B6-HF-Q5_K_M-GGUF • Updated • 19 downloads • 2 likes
- RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF • Text Generation • Updated • 16 downloads
- RachidAR/Phi-3-mini-4k-ins-June2024-Q5_K_M-imat-GGUF • Text Generation • Updated • 23 downloads
- RachidAR/Phi-3-mini-4k-instruct-June2024-Q6_K-GGUF • Text Generation • Updated • 20 downloads
- RachidAR/saiga_llama3_8b-Q6_K-GGUF • Updated • 30 downloads
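Most of the models listed above are GGUF quantizations, which run locally with llama.cpp. A minimal sketch of downloading and running one of them via huggingface_hub and llama-cpp-python follows; the exact .gguf filename inside the repo is an assumption and should be checked against the repo's file listing.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the GGUF quantizations listed above.
# NOTE: the .gguf filename is assumed, not confirmed; check the repo files.
model_path = hf_hub_download(
    repo_id="RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF",
    filename="phi-3.5-mini-instruct-q5_k_m.gguf",  # assumed filename
)

# Load the model and run a short local completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Explain Q5_K_M quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```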
Datasets
None public yet