---
tags:
- npu
- amd
- llama3
- Ryzen AI
---

This model is an AWQ-quantized version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), converted to run on an [NPU-equipped Ryzen AI PC](https://github.com/amd/RyzenAI-SW/issues/18), for example one with a Ryzen 9 7940HS processor.

To set up Ryzen AI for LLMs on Windows 11, see [Running LLM on AMD NPU Hardware](https://www.hackster.io/gharada2013/running-llm-on-amd-npu-hardware-19322f).

The following sample assumes that the setup described on the page above has been completed.

This model has only been tested with Ryzen AI on Windows 11. It does not work in Linux environments such as WSL.

A sample script will be uploaded tomorrow.