
MedIT-Mesh-3B-Instruct: Phi-3.5 Mini-Instruct Modification Using the MedIT-mesh Technique

Primary Use Cases:

  • Commercial use in memory- and compute-constrained environments.
  • Latency-bound scenarios where accuracy is crucial.
  • Applications requiring strong reasoning, especially code, math, and logic.

Model Description:

The Phi-3.5 Mini-Instruct modification is designed to accelerate research on language and multimodal models. It is a 3.8B parameter model optimized for commercial and research use in multiple languages. The MedIT-mesh technique provides improved memory and compute efficiency, making it suitable for environments with limited resources.
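
Example Usage:

The snippet below is a minimal, illustrative sketch of loading and prompting the model with the Hugging Face transformers library, assuming it exposes the standard causal-LM and chat-template interface of the Phi-3.5 family; the prompt and generation settings are placeholders, not recommendations from the model authors.

```python
# Illustrative sketch only: assumes the standard transformers causal-LM interface
# used by the Phi-3.5 family; prompt and generation settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "meditsolutions/MedIT-Mesh-3B-Instruct"

# Load in bfloat16 to match the published tensor type and reduce memory use.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Chat-style prompt; the pipeline applies the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain the difference between a list and a tuple in Python."},
]

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator(messages, max_new_tokens=256, do_sample=False)

# The last message in the returned conversation is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```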

Use Case Considerations:

When selecting use cases, developers should consider language models' limitations and evaluate accuracy, safety, and fairness before using them within a specific downstream application. Developers should be aware of applicable laws and regulations (e.g., privacy, trade compliance) relevant to their use case. It is essential to adhere to the license terms for the model being used.

Release Notes:

This model builds on an update to the June 2024 instruction-tuned Phi-3 Mini release, which incorporated user feedback. Additional post-training data led to substantial gains in multilingual support, multi-turn conversation quality, and reasoning capability. This release is expected to benefit most use cases, but users are encouraged to test it in their particular AI applications.

Model Details:

  • Model size: 3.82B parameters
  • Tensor type: BF16
  • Format: Safetensors
