---
license: mit
language: it
library_name: transformers
pipeline_tag: automatic-speech-recognition
thumbnail: null
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_17_0
- facebook/multilingual_librispeech
- facebook/voxpopuli
- espnet/yodas
metrics:
- wer
---

# Whisper-Large-V3-Distil-Italian-v0.2

A distilled version of Whisper with 2 decoder layers, optimized for Italian speech-to-text.

This version extends the training to 30-second audio segments to maintain long-form transcription abilities. The training process used a ["patient" teacher](https://arxiv.org/abs/2106.05237) during distillation - meaning longer training times and more aggressive data augmentation - which improved overall performance.

The model uses [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) as the teacher model while keeping the encoder architecture unchanged. This makes it suitable as a draft model for speculative decoding: by adding only 2 extra decoder layers and running the encoder just once, you can potentially get 2x faster inference while guaranteeing identical outputs. It can also serve as a standalone model, trading some accuracy for better efficiency: it runs 5.8x faster while using only 49% of the parameters. This [paper](https://arxiv.org/abs/2311.00430) also suggests that the distilled model may actually produce fewer hallucinations than the full model during long-form transcription.

The model has been converted into multiple formats to ensure broad compatibility across libraries, including transformers, openai-whisper, faster-whisper, whisper.cpp, candle, and mlx.

## Performance

The model was evaluated on both short and long-form transcriptions, using in-distribution (ID) and out-of-distribution (OOD) datasets to assess accuracy, generalizability, and robustness.

Note that the Word Error Rate (WER) results shown here are [post-normalization](https://github.com/openai/whisper/blob/main/whisper/normalizers/basic.py), which includes converting text to lowercase and removing symbols and punctuation.
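
As an illustration, a minimal sketch of this scoring setup - assuming the `evaluate` and `jiwer` packages are installed, and using hypothetical reference/prediction strings - might look like:

```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()
wer_metric = evaluate.load("wer")  # requires the jiwer package

# Hypothetical example strings
references = ["Buongiorno, come stai?"]
predictions = ["buongiorno come stai"]

# Lowercase and strip symbols/punctuation before scoring
normalized_references = [normalizer(text) for text in references]
normalized_predictions = [normalizer(text) for text in predictions]

wer = 100 * wer_metric.compute(
    references=normalized_references, predictions=normalized_predictions
)
print(f"WER: {wer:.2f}%")
```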

All evaluation results on the public datasets can be found [here](https://drive.google.com/drive/folders/1g712e7xvN4SzGdzxtf4OrpT8raWGz_wR?usp=drive_link).

### Short-Form Transcription

![eval-short-form](https://huggingface.co/bofenghuang/whisper-large-v3-distil-it-v0.2/resolve/main/assets/eval_short_form.png)

*Italic* indicates in-distribution (ID) evaluation, where test sets correspond to data distributions seen during training, typically yielding higher performance than out-of-distribution (OOD) evaluation. *~~Italic and strikethrough~~* denotes potential test set contamination - for example, when training and evaluation use different versions of Common Voice, raising the possibility of overlapping data.

### Long-Form Transcription

Long-form transcription evaluation used the 🤗 Hugging Face [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) with both [chunked](https://huggingface.co/blog/asr-chunking) (chunk_length_s=30) and original sequential decoding methods.

![eval-long-form](https://huggingface.co/bofenghuang/whisper-large-v3-distil-it-v0.2/resolve/main/assets/eval_long_form.png)

## Usage

### Hugging Face Pipeline

The model can be easily used with the 🤗 Hugging Face [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class for audio transcription. For long-form transcription (over 30 seconds), it will perform sequential decoding as described in OpenAI's paper. If you need faster inference, you can use the `chunk_length_s` argument for [chunked parallel decoding](https://huggingface.co/blog/asr-chunking), which provides 9x faster inference speed but may slightly compromise performance compared to OpenAI's sequential algorithm.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-distil-it-v0.2"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_name_or_path,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
)
model.to(device)

# Init pipeline
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    torch_dtype=torch_dtype,
    device=device,
    # chunk_length_s=30,  # for chunked decoding
    max_new_tokens=128,
)

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "it", split="test")
sample = dataset[0]["audio"]

# Run pipeline
result = pipe(sample)
print(result["text"])
```

### Hugging Face Low-level APIs

You can also use the 🤗 Hugging Face low-level APIs for transcription, offering greater control over the process, as demonstrated below:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load model
model_name_or_path = "bofenghuang/whisper-large-v3-distil-it-v0.2"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_name_or_path,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
)
model.to(device)

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "it", split="test")
sample = dataset[0]["audio"]

# Extract features
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# Generate tokens
predicted_ids = model.generate(
    input_features.to(dtype=torch_dtype).to(device), max_new_tokens=128
)

# Detokenize to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

### Speculative Decoding

[Speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding) can be achieved using a draft model, essentially a distilled version of Whisper. This approach guarantees identical outputs to using the main Whisper model alone, offers a 2x faster inference speed, and incurs only a slight increase in memory overhead.

Since the distilled Whisper has the same encoder as the original, only its decoder needs to be loaded, and encoder outputs are shared between the main and draft models during inference.

Using speculative decoding with the Hugging Face pipeline is simple - just specify the `assistant_model` within the generation configuration.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSpeechSeq2Seq,
    AutoProcessor,
    pipeline,
)

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load model
model_name_or_path = "openai/whisper-large-v3"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_name_or_path,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
)
model.to(device)

# Load draft model
assistant_model_name_or_path = "bofenghuang/whisper-large-v3-distil-it-v0.2"
assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_name_or_path,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
)
assistant_model.to(device)

# Init pipeline
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    torch_dtype=torch_dtype,
    device=device,
    generate_kwargs={"assistant_model": assistant_model},
    max_new_tokens=128,
)

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "it", split="test")
sample = dataset[0]["audio"]

# Run pipeline
result = pipe(sample)
print(result["text"])
```

### OpenAI Whisper

You can also employ the sequential long-form decoding algorithm with a sliding window and temperature fallback, as outlined by OpenAI in their original [paper](https://arxiv.org/abs/2212.04356).

First, install the [openai-whisper](https://github.com/openai/whisper) package:

```bash
pip install -U openai-whisper
```

Then, download the converted model:

```bash
huggingface-cli download --include original_model.pt --local-dir ./models/whisper-large-v3-distil-it-v0.2 bofenghuang/whisper-large-v3-distil-it-v0.2
```

Now, you can transcribe audio files by following the usage instructions provided in the repository:

```python
import whisper
from datasets import load_dataset

# Load model
model_name_or_path = "./models/whisper-large-v3-distil-it-v0.2/original_model.pt"
model = whisper.load_model(model_name_or_path)

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "it", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")

# Transcribe
result = model.transcribe(sample, language="it")
print(result["text"])
```
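
The sliding window and temperature fallback mentioned above can also be tuned through `transcribe`'s standard arguments. A minimal sketch continuing the example above, with illustrative values rather than the settings used for evaluation:

```python
# Illustrative decoding options for openai-whisper's transcribe();
# the values shown are examples, not the evaluation settings.
result = model.transcribe(
    sample,
    language="it",
    beam_size=5,  # beam search for the first decoding attempt
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),  # fallback temperatures on failure
    condition_on_previous_text=True,  # feed the previous window as prompt
)
print(result["text"])
```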

### Faster Whisper

Faster Whisper is a reimplementation of OpenAI's Whisper models and the sequential long-form decoding algorithm in the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.

Compared to openai-whisper, it offers up to 4x faster inference speed, while consuming less memory. Additionally, the model can be quantized into int8, further enhancing its efficiency on both CPU and GPU.

First, install the [faster-whisper](https://github.com/SYSTRAN/faster-whisper) package:

```bash
pip install faster-whisper
```

Then, download the model converted to the CTranslate2 format:

```bash
huggingface-cli download --include ctranslate2/* --local-dir ./models/whisper-large-v3-distil-it-v0.2 bofenghuang/whisper-large-v3-distil-it-v0.2
```

Now, you can transcribe audio files by following the usage instructions provided in the repository:

```python
from datasets import load_dataset
from faster_whisper import WhisperModel

# Load model
model_name_or_path = "./models/whisper-large-v3-distil-it-v0.2/ctranslate2"
model = WhisperModel(model_name_or_path, device="cuda", compute_type="float16")  # Run on GPU with FP16

# Example audio
dataset = load_dataset("bofenghuang/asr-dummy", "it", split="test")
sample = dataset[0]["audio"]["array"].astype("float32")

# Transcribe
segments, info = model.transcribe(sample, beam_size=5, language="it")

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
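
As mentioned above, the same CTranslate2 weights can also be loaded with int8 quantization when GPU memory is limited or when running on CPU - a minimal variation of the loading call above:

```python
# int8 quantization on GPU (weights are quantized on the fly at load time)
model = WhisperModel(model_name_or_path, device="cuda", compute_type="int8_float16")

# or int8 on CPU
model = WhisperModel(model_name_or_path, device="cpu", compute_type="int8")
```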

### Whisper.cpp

Whisper.cpp is a reimplementation of OpenAI's Whisper models, crafted in plain C/C++ without any dependencies. It offers compatibility with various backends and platforms.

Additionally, the model can be quantized to either 4-bit or 5-bit integers, further enhancing its efficiency.

First, clone and build the [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repository:

```bash
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp

# build the main example
make
```

Next, download the converted ggml weights from the Hugging Face Hub:

```bash
# Download model quantized with Q5_0 method
huggingface-cli download --include ggml-model* --local-dir ./models/whisper-large-v3-distil-it-v0.2 bofenghuang/whisper-large-v3-distil-it-v0.2
```

Now, you can transcribe an audio file using the following command:

```bash
./main -m ./models/whisper-large-v3-distil-it-v0.2/ggml-model-q5_0.bin -l it -f /path/to/audio/file --print-colors
```
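
If you prefer to quantize a full-precision ggml checkpoint yourself rather than using the pre-quantized Q5_0 weights, whisper.cpp also ships a `quantize` tool. A minimal sketch, assuming a full-precision `ggml-model.bin` is available in the same directory:

```bash
# build the quantization tool
make quantize

# quantize the full-precision checkpoint to 5-bit integers (Q5_0)
./quantize ./models/whisper-large-v3-distil-it-v0.2/ggml-model.bin ./models/whisper-large-v3-distil-it-v0.2/ggml-model-q5_0.bin q5_0
```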

### Candle

[Candle-whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper) is a reimplementation of OpenAI's Whisper models in the candle format - a lightweight ML framework built in Rust.

First, clone the [candle](https://github.com/huggingface/candle) repository:

```bash
git clone https://github.com/huggingface/candle.git
cd candle/candle-examples/examples/whisper
```

Transcribe an audio file using the following command:

```bash
cargo run --example whisper --release -- --model large-v3 --model-id bofenghuang/whisper-large-v3-distil-it-v0.2 --language it --input /path/to/audio/file
```

To use CUDA, add `--features cuda` to the example command line:

```bash
cargo run --example whisper --release --features cuda -- --model large-v3 --model-id bofenghuang/whisper-large-v3-distil-it-v0.2 --language it --input /path/to/audio/file
```

### MLX

[MLX-Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper) is a reimplementation of OpenAI's Whisper models in the [MLX](https://github.com/ml-explore/mlx) format - an ML framework for Apple silicon. It supports features like lazy computation and unified memory management.

First, clone the [MLX Examples](https://github.com/ml-explore/mlx-examples) repository:

```bash
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper
```

Next, install the dependencies:

```bash
pip install -r requirements.txt
```

Download the PyTorch checkpoint in the original OpenAI format and convert it into the MLX format (the converted version is not included here since the repository is already heavy and the conversion is very fast):

```bash
# Download
huggingface-cli download --include original_model.pt --local-dir ./models/whisper-large-v3-distil-it-v0.2 bofenghuang/whisper-large-v3-distil-it-v0.2
# Convert into .npz
python convert.py --torch-name-or-path ./models/whisper-large-v3-distil-it-v0.2/original_model.pt --mlx-path ./mlx_models/whisper-large-v3-distil-it-v0.2
```

Now, you can transcribe audio with:

```python
import whisper

result = whisper.transcribe("/path/to/audio/file", path_or_hf_repo="mlx_models/whisper-large-v3-distil-it-v0.2", language="it")
print(result["text"])
```

## Training details

We built an Italian speech recognition dataset of over 11,000 hours of annotated and semi-annotated speech. After decoding this dataset through Whisper-Large-V3 and filtering out segments with a WER above 20%, we retained approximately 6,500 hours of high-quality audio.

| Dataset | Total Duration (h) | Filtered Duration (h), WER < 20% |
|---|---:|---:|
| mcv | 249.92 | 232.87 |
| mls | 247.38 | 234.14 |
| voxpopuli | 74.11 | 58.25 |
| mtedx | 94.10 | 88.69 |
| yodas-it000 | 1447.25 | 953.19 |
| yodas-it100 | 4929.73 | 2665.54 |
| yodas-it101 | 4192.61 | 2275.90 |
| total | 11235.10 | 6508.58 |
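
As a rough illustration of this filtering step (not the exact training pipeline), the per-segment check can be sketched as follows, assuming normalized reference transcripts, Whisper-Large-V3 pseudo-labels, and the `jiwer` package:

```python
import jiwer
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()

def keep_segment(reference: str, pseudo_label: str, max_wer: float = 0.2) -> bool:
    """Keep a segment only if the teacher transcription stays close to the reference."""
    wer = jiwer.wer(normalizer(reference), normalizer(pseudo_label))
    return wer <= max_wer

# Hypothetical example
print(keep_segment("Buongiorno a tutti", "buongiorno a tutti"))  # True
```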

Most data were first concatenated into 30-second segments, primarily preserving the same speaker, then run through inference together. 50% of segments were trained with timestamps to ensure good timestamp prediction, and only 20% of segments were trained with previous context, since we don't expect the 2-layer decoder to excel at this task.

This model was trained for a fairly long schedule of 100 epochs using aggressive data augmentation, with eval WER continuing to decrease. Some hyperparameter choices were made to favor long-form over short-form transcription. For further details, please refer to the [Distil-Whisper](https://github.com/huggingface/distil-whisper) repository.

All model training was conducted on the [Jean-Zay supercomputer](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html) at GENCI. Special thanks to the IDRIS team for their excellent support throughout this project.

## Acknowledgements

- OpenAI for developing and open-sourcing the [Whisper](https://arxiv.org/abs/2212.04356) model
- 🤗 Hugging Face for implementing Whisper in the [Transformers](https://github.com/huggingface/transformers) library and creating [Distil-Whisper](https://github.com/huggingface/distil-whisper)
- [GENCI](https://genci.fr/) for generously providing the GPU computing resources for this project