---
license: apache-2.0
datasets:
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- mlabonne/FineTome-100k
- Vezora/Open-Critic-GPT
- m-a-p/Code-Feedback
language:
- en
base_model: trollek/LittleInstructionMaker-4B-v0.2
---
# LittleInstructionMaker-4B-v0.2-iMat-GGUF

Original model: [LittleInstructionMaker-4B-v0.2](https://huggingface.co/trollek/LittleInstructionMaker-4B-v0.2)

Creator: [trollek](https://huggingface.co/trollek)

## Quantization notes

Quantized with llama.cpp b3621, using an importance matrix (imatrix) file built from exllamav2 calibration data.

# Original model card

# LittleInstructionMaker-4B-v0.2

> Now able to generate more complex instructions thanks to [cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split](https://huggingface.co/datasets/cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split) and [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k). It even handles coding prompts now, with help from [Vezora/Open-Critic-GPT](https://huggingface.co/datasets/Vezora/Open-Critic-GPT) and [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback).

### Benchmarks

| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------|------:|------|------|-----------------|-------:|---|-----:|
|eq_bench| 2.1|none |None |eqbench | 32.7345|± |3.4507|
| | |none |None |percent_parseable|100.0000|± |0.0000|
|winogrande| 1|none | 5|acc |0.7703|± |0.0118|

### Usage example

```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "trollek/LittleInstructionMaker-4B-v0.2",
    dtype=torch.bfloat16,
    load_in_4bit=True,
    max_seq_length=8192
)
FastLanguageModel.for_inference(model)

def instruction_generator(system_message: str, num_instructions: int):
    if not system_message:
        raise ValueError("system_message must not be empty")
    if num_instructions < 1:
        raise ValueError("num_instructions must be at least 1")
    # Magpie-style prompting: the template ends at the opening user tag,
    # so the model generates the user instruction itself.
    magpie_template = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n"
    input_ids = tokenizer(magpie_template, return_tensors="pt").input_ids.to("cuda")
    for _ in range(num_instructions):
        generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.65, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
        response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
        yield response

for instruct in instruction_generator("You are an AI coding assistant.", 2):
    print(instruct)
```

```
Write a Python function that generates a random password of length 10 consisting of lowercase letters, uppercase letters, and special characters. The function should also check if the generated password meets the following criteria:
- At least one letter must be in uppercase.
- At least two numbers must be included.
- At least one special character should be present (a symbol such as !@#$%^&*).
The function should return the generated password along with its length, whether it satisfies all the criteria or not.
```

```
You are given a list of integers, `nums`, that contains both positive and negative numbers. You need to write a function `median` to find the median of the numbers in the list. The median is defined as the middle number when the numbers are arranged in ascending order. If there is an even number of elements in the list, the median will be the average of the two middle numbers.

Write a function `median(nums: List[int]) -> int` to find the median of the given list.
Example 1:
Input: nums = [5, -10, 4, 0, 7]
Output: 4
Explanation: After sorting the list, we have [-10, 0, 4, 5, 7]. The middle element is 4, so the median is 4.

Example 2:
Input: nums = [1, 2, 3, 4, 5]
Output: 3
Explanation: After sorting the list, we have [1, 2, 3, 4, 5]. There are five elements, so the median is 3.
```
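
Since the model's purpose is synthesizing instruction data, a natural follow-up (not part of the original card) is collecting the generator output into a JSONL file for a later response-generation pass. A minimal sketch reusing the `instruction_generator` defined above; the output filename is arbitrary:

```python
import json

# Write one instruction per line, ready for a second pass where another
# model (or this one) generates the responses.
with open("generated_instructions.jsonl", "w", encoding="utf-8") as f:
    for instruction in instruction_generator("You are an AI coding assistant.", 100):
        f.write(json.dumps({"instruction": instruction.strip()}) + "\n")
```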
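
The usage example above loads the original bfloat16 checkpoint with unsloth. To run the GGUF quants from this repo instead, llama-cpp-python is one option. A minimal sketch, assuming a hypothetical local filename; substitute whichever quant file you actually download:

```python
from llama_cpp import Llama

# Placeholder filename: replace with the quant downloaded from this repo.
llm = Llama(
    model_path="LittleInstructionMaker-4B-v0.2-Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)

# Same magpie-style template as the unsloth example: the prompt stops at the
# opening user tag so the model writes the instruction itself.
system_message = "You are an AI coding assistant."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n"

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.65,
    repeat_penalty=1.1,
    stop=["<|im_end|>"],
)
print(output["choices"][0]["text"])
```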