Files changed (1)
  1. README.md +66 -109
README.md CHANGED
@@ -6,113 +6,70 @@ colorTo: gray
  sdk: static
  pinned: false
  ---

- <div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
- <p class="lg:col-span-3">
- Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
- </p>
- <a
- href="https://huggingface.co/blog/intel"
- class="block overflow-hidden group"
- >
- <div
- class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
- >
- <img
- alt=""
- src="https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png"
- class="w-40"
- />
- </div>
- <div class="underline">Learn more about Hugging Face collaboration with Intel AI</div>
- </a>
- <a
- href="https://github.com/huggingface/optimum"
- class="block overflow-hidden group"
- >
- <div
- class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
- >
- <img
- alt=""
- src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
- class="w-40"
- />
- </div>
- <div class="underline">Quantize Transformers with Intel® Neural Compressor and Optimum</div>
- </a>
- <a href="https://huggingface.co/blog/generative-ai-models-on-intel-cpu" class="block overflow-hidden group">
- <div
- class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
- >
- <img
- alt=""
- src="/blog/assets/143_q8chat/thumbnail.png"
- class="w-40"
- />
- </div>
- <div class="underline">Quantizing 7B LLM on Intel CPU</div>
- </a>
- <div class="lg:col-span-3">
- <p class="mb-2">
- Intel optimizes widely adopted and innovative AI software
- tools, frameworks, and libraries for Intel® architecture. Whether
- you are computing locally or deploying AI applications on a massive
- scale, your organization can achieve peak performance with AI
- software optimized for Intel® Xeon® Scalable platforms.
- </p>
- <p class="mb-2">
- Intel’s engineering collaboration with Hugging Face offers state-of-the-art hardware and software acceleration to train, fine-tune and predict with Transformers.
- </p>
- <h3>Useful Resources:</h3>
- <ul>
- <li class="ml-6"><a href="https://huggingface.co/hardware/intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked partner page" data-ga-label="partner page">Intel AI + Hugging Face partner page</a></li>
- <li class="ml-6"><a href="https://github.com/IntelAI" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel ai github" data-ga-label="intel ai github">Intel AI GitHub</a></li>
- <li class="ml-6"><a href="https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel partner page" data-ga-label="intel partner page">Developer Resources from Intel and Hugging Face</a></li>
- </ul>
- <p>&nbsp;</p>
- </div>
- <div class="lg:col-span-3">
- <h1>Get Started</h1>
- <h3>1. Intel Acceleration Libraries</h3>
- <p class="mb-2">
- To get started with Intel hardware and software optimizations, download and install the Optimum Intel
- and Intel® Extension for Transformers libraries. Follow these documents to learn how to install and use these libraries:
- </p>
- <ul>
- <li class="ml-6"><a href="https://github.com/huggingface/optimum-intel#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">🤗 Optimum Intel library</a></li>
- <li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">Intel® Extension for Transformers</a></li>
- </ul>
- <p class="mb-2">
- The Optimum Intel library provides primarily hardware acceleration, while the Intel® Extension
- for Transformers is focused more on software acceleration. Both should be present to achieve ideal
- performance and productivity gains in transfer learning and fine-tuning with Hugging Face.
- </p>
- <h3>2. Find Your Model</h3>
- <p class="mb-2">
- Next, find your desired model (and dataset) by using the search box at the top-left of Hugging Face’s website.
- Add “intel” to your search to narrow your search to models pretrained by Intel.
- </p>
- <img
- alt=""
- src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-model_search.png"
- style="margin:auto;transform:scale(0.8);"
- />
- <h3>3. Read Through the Demo, Dataset, and Quick-Start Commands</h3>
- <p class="mb-2">
- On the model’s page (called a “Model Card”) you will find description and usage information, an embedded
- inferencing demo, and the associated dataset. In the upper-right of your screen, click “Use in Transformers”
- for helpful code hints on how to import the model to your own workspace with an established Hugging Face pipeline and tokenizer.
- </p>
- <img
- alt=""
- src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-use_transformers.png"
- style="margin:auto;transform:scale(0.8);"
- />
- <img
- alt=""
- src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-quickstart.png"
- style="margin:auto;transform:scale(0.8);"
- />
- </div>
- </div>
+ Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
+
+ ### Models
+ Check out Intel's models here on our Hugging Face page or directly through the [Hugging Face Models Hub search](https://huggingface.co/models?sort=trending&search=intel). Here are a few of them:
+
+ | Model | Type |
+ | :--- | :--- |
+ | [dpt-hybrid-midas](https://huggingface.co/Intel/dpt-hybrid-midas) | Monocular depth estimation |
+ | [llava-gemma-2b](https://huggingface.co/Intel/llava-gemma-2b) | Multimodal |
+ | [gpt2 on Gaudi](https://huggingface.co/Habana/gpt2) | Text generation |
+ | [neural-chat-7b-v3-3-int8-ov](https://huggingface.co/OpenVINO/neural-chat-7b-v3-3-int8-ov) | Text generation |
+
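+ These models load through the standard 🤗 Transformers APIs. A minimal sketch running depth estimation with dpt-hybrid-midas (the sample image URL and output filename are just illustrations, not part of the model card):
+
+ ```python
+ from transformers import pipeline
+
+ # Intel's monocular depth estimation model, pulled from the Hub
+ depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
+
+ # Any RGB image works; this COCO image is only a stand-in
+ result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
+ result["depth"].save("depth_map.png")  # the pipeline returns a PIL depth map
+ ```
+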
+ ### Datasets
+
+ Intel has created a number of [datasets](https://huggingface.co/Intel?sort_datasets=modified#datasets) for use in fine-tuning both vision and language models. Check out datasets on our page such as [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) for natural language processing tasks and [SocialCounterfactuals](https://huggingface.co/datasets/Intel/SocialCounterfactuals) for vision tasks.
+
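+ These load like any other Hub dataset through the 🤗 Datasets library. A minimal sketch:
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the DPO preference pairs and inspect the first record
+ dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
+ print(dataset[0])
+ ```
+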
+ ### Collections
+
+ Our collections categorize models that pertain to Intel hardware and software. Here are a few:
+
+ | Collection | Description |
+ | :--- | :--- |
+ | [DPT 3.1](https://huggingface.co/collections/Intel/dpt-31-65b2a13eb0a5a381b6df9b6b) | Monocular depth (MiDaS) models, leveraging state-of-the-art vision backbones such as BEiT and Swinv2 |
+ | [Whisper](https://huggingface.co/collections/Intel/whisper-65b3d8d2d5bf0d622a866e3a) | Whisper models for automatic speech recognition (ASR) and speech translation, quantized for faster inference |
+ | [Intel Neural Chat](https://huggingface.co/collections/Intel/intel-neural-chat-65b3d2f2d0ba0a801668ef2c) | Fine-tuned 7B-parameter LLMs, one of which reached the top of the Hugging Face 7B LLM Leaderboard |
+
+ ### Spaces
+ Check out Intel's leaderboards and other demo applications in our [Spaces](https://huggingface.co/Intel?sort_spaces=modified#spaces):
+
+ | Space | Description |
+ | :--- | :--- |
+ | [Powered-by-Intel LLM Leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard) | Evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware 🦾 |
+ | [Intel Low-bit Quantized Open LLM Leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard) | Evaluation leaderboard for quantized language models |
+
+ ### Blogs
+
+ Get started deploying Intel's models on Intel architectures with these hands-on tutorials, written by Hugging Face and Intel staff:
+
+ | Blog | Description |
+ | :--- | :--- |
+ | [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) | Develop and deploy RAG applications as part of OPEA, the Open Platform for Enterprise AI |
+ | [Running Large Multimodal Models on an AI PC's NPU](https://huggingface.co/blog/bconsolvo/llava-gemma-2b-aipc-npu) | Run the llava-gemma-2b model on an AI PC's NPU |
+ | [A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake](https://huggingface.co/blog/phi2-intel-meteor-lake) | Deploy Phi-2 on your laptop with OpenVINO via the Optimum Intel library |
+ | [Partnering to Democratize ML Hardware Acceleration](https://huggingface.co/blog/intel) | Intel and Hugging Face collaborate to build state-of-the-art hardware acceleration to train, fine-tune, and predict with Transformers |
+
+ ### Documentation
+
+ To learn more about deploying models on Intel hardware with Transformers, visit the resources listed below.
+
+ *Optimum Habana* - To deploy on Intel Gaudi accelerators, check out [optimum-habana](https://github.com/huggingface/optimum-habana/), the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:
+
+ ```bash
+ pip install --upgrade-strategy eager optimum[habana]
+ ```
+
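+ Once installed, training looks like standard 🤗 Transformers, with Gaudi-aware drop-in classes. A minimal sketch, assuming access to a Gaudi machine (the toy dataset and output path are placeholders for illustration only):
+
+ ```python
+ from datasets import Dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from optimum.habana import GaudiTrainer, GaudiTrainingArguments
+
+ # Tiny toy dataset, just enough to exercise the API
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ tokenizer.pad_token = tokenizer.eos_token
+ enc = tokenizer(["Hello Gaudi!", "Optimum Habana reuses the Trainer API."],
+                 padding="max_length", max_length=32, truncation=True)
+ train_dataset = Dataset.from_dict({**enc, "labels": enc["input_ids"]})
+
+ model = AutoModelForCausalLM.from_pretrained("gpt2")
+
+ # GaudiTrainingArguments extends TrainingArguments with HPU-specific flags
+ args = GaudiTrainingArguments(
+     output_dir="./results",           # placeholder output path
+     use_habana=True,                  # run on Gaudi HPUs
+     use_lazy_mode=True,               # lazy-mode graph execution
+     gaudi_config_name="Habana/gpt2",  # Gaudi config hosted on the Hub
+ )
+
+ # GaudiTrainer is the Gaudi-aware drop-in for transformers.Trainer
+ trainer = GaudiTrainer(model=model, args=args, train_dataset=train_dataset)
+ trainer.train()
+ ```
+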
+ *Optimum Intel* - To deploy on all other Intel architectures, check out [optimum-intel](https://github.com/huggingface/optimum-intel), the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. Depending on your needs, you can use one of these backends:
+
+ | Backend | Installation |
+ |:---|:---|
+ | [Intel Neural Compressor](https://huggingface.co/docs/optimum/en/intel/optimization_inc) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"` |
+ | [OpenVINO](https://huggingface.co/docs/optimum/en/intel/inference) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
+ | [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |
+
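+ With the OpenVINO backend, for example, the usual `AutoModelFor...` classes are swapped for their `OV` counterparts. A minimal sketch using the quantized neural-chat model from the Models table above (the prompt is just an illustration):
+
+ ```python
+ from optimum.intel import OVModelForCausalLM
+ from transformers import AutoTokenizer, pipeline
+
+ # This checkpoint is already in OpenVINO IR format, so no export step is needed;
+ # for a vanilla PyTorch checkpoint, pass export=True to convert on the fly.
+ model_id = "OpenVINO/neural-chat-7b-v3-3-int8-ov"
+ model = OVModelForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+ print(pipe("What are the benefits of running inference on Intel Xeon?")[0]["generated_text"])
+ ```
+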
+ ### Join Our Dev Community
+
+ Please join us on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t) to ask questions and interact with our AI developer community!