---
base_model: smallcloudai/Refact-1_6B-fim
license: bigscience-openrail-m
model_creator: Small Magellanic Cloud AI
model_name: Refact-1.6B
pipeline_tag: text-generation
prompt_template: '{prefix}{suffix}'
pretrain-datasets:
- books
- arxiv
- c4
- falcon-refinedweb
- wiki
- github-issues
- stack_markdown
- self-made dataset of permissive github code
datasets:
- bigcode/the-stack-dedup
- rombodawg/2XUNCENSORED_MegaCodeTraining188k
- bigcode/commitpackft
tags:
- code
language:
- en
---

# Refact-1.6B-fim-GGUF

- Model creator: [Small Magellanic Cloud AI](https://huggingface.co/smallcloudai)
- Original model: [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim)

## Description

This repository contains quantized model files in GGUF format for [Refact-1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim).

## Prompt: fill in the middle

```
def print_hello_world():\n    """\n    print("Hello world!")
```

## Prompt: chat (experimental)

```
SYSTEM You are a programming assistant
USER How do I sort a list in Python?
ASSISTANT
```

## Example `llama.cpp` command

```shell
./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '{prefix}{suffix}'
```

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
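Per the card's `prompt_template`, a fill-in-the-middle prompt is built by substituting the code before and after the cursor into `{prefix}{suffix}`. A minimal sketch of that assembly in Python (the `build_fim_prompt` helper is hypothetical, not part of this repository or of llama.cpp):

```python
def build_fim_prompt(source: str, cursor: int,
                     template: str = "{prefix}{suffix}") -> str:
    """Split `source` at `cursor` and substitute the two halves
    into the card's prompt template."""
    prefix, suffix = source[:cursor], source[cursor:]
    return template.format(prefix=prefix, suffix=suffix)

code = 'def print_hello_world():\n    """\n    print("Hello world!")'
# Place the cursor just after the docstring opener, asking the model
# to fill in the missing docstring body.
prompt = build_fim_prompt(code, code.index('"""') + 3)
print(prompt)
```

The resulting string can be passed to `./main` via `-p` (mind your shell's quoting of newlines and double quotes).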