Model inference
Can you please tell me how I can use the model without the pipeline and feed tensors to it directly? I cannot find the input shapes the processor needs.
You can use the Whisper modules from Transformers directly, for example:
from transformers import WhisperPreTrainedModel, WhisperModel, WhisperForCausalLM
model = WhisperModel.from_pretrained("Sandiago21/whisper-large-v2-italian")
Yes, I've seen this. But my question is how I can run inference with the model after that. Loading the waveform with torchaudio ends up with a dimension error.
Can you please provide a short code snippet (if you have one) showing how to use the model without the pipeline and preprocess a .wav file?
Thanks a lot!
Unfortunately I don't have a code snippet, as I have mainly used the pipeline for inference, so I would have to look into it. But maybe you can make use of the Whisper transcribe function, https://github.com/openai/whisper/blob/main/whisper/transcribe.py, which takes a Whisper model and the path to the .wav file as input, computes the transcription, and returns it.
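Roughly, the usage would look something like the sketch below. I haven't tested it; note that whisper.load_model pulls the original OpenAI "large-v2" weights rather than this fine-tuned checkpoint, and "audio.wav" is just a placeholder path:

import whisper

# Load the original OpenAI large-v2 weights (not the fine-tuned Hugging Face checkpoint)
model = whisper.load_model("large-v2")

# transcribe() takes the model and a path to the audio file, handles the audio
# loading and resampling internally, and returns a dict containing the text
result = whisper.transcribe(model, "audio.wav", language="it")
print(result["text"])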
Is this available in Transformers.js?
Is there a way I can contribute to making it available, in case you don't have time?
Hi @sim0 !
No, it is not currently available in Transformers.js.
I assume you can work on a separate branch and try to make it available in Transformers.js, and once you are done, you can create a merge request.