How can I classify a large audio file using the above model?
Below is the error I get when classifying a large WAV audio file.
The audio file is 4.60 GB in size.
Traceback (most recent call last):
  File "D:\Ssharma\speechbrain_venv\Model_CommonLanguage\test.py", line 20, in <module>
    out_prob, score, index, text_lab = classifier.classify_file(row[0])
  File "D:\Ssharma\speechbrain_venv\speechbrain\speechbrain\pretrained\interfaces.py", line 1006, in classify_file
    waveform = self.load_audio(path)
  File "D:\Ssharma\speechbrain_venv\speechbrain\speechbrain\pretrained\interfaces.py", line 261, in load_audio
    return self.audio_normalizer(signal, sr)
  File "D:\Ssharma\speechbrain_venv\speechbrain\speechbrain\dataio\preprocess.py", line 56, in __call__
    resampled = resampler(audio.unsqueeze(0)).squeeze(0)
  File "C:\Users\ssharma\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Ssharma\speechbrain_venv\speechbrain\speechbrain\processing\speech_augmentation.py", line 600, in forward
    resampled_waveform = self._perform_resample(waveforms)
  File "D:\Ssharma\speechbrain_venv\speechbrain\speechbrain\processing\speech_augmentation.py", line 673, in _perform_resample
    conv_wave = torch.nn.functional.conv1d(
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 3817748704 bytes.
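From the traceback, `classify_file` loads the entire 4.60 GB waveform into memory and the resampler's `conv1d` then tries to allocate a further ~3.8 GB, which exhausts RAM. One workaround I am considering (a sketch, not official SpeechBrain guidance) is to read the file in fixed-size chunks and classify each chunk separately; the chunk-span arithmetic is shown below, and the commented usage assumes `torchaudio.load`'s `frame_offset`/`num_frames` arguments and the classifier's `classify_batch` method are available, with the 30-second chunk length purely illustrative:

```python
def chunk_spans(total_frames: int, chunk_frames: int):
    """Yield (frame_offset, num_frames) pairs that cover the whole file
    in consecutive, non-overlapping chunks; the last chunk may be shorter."""
    for start in range(0, total_frames, chunk_frames):
        yield start, min(chunk_frames, total_frames - start)

# Hypothetical usage (requires torchaudio and the loaded classifier):
#
# import torchaudio
# info = torchaudio.info(path)  # reads metadata without loading the audio
# for start, n in chunk_spans(info.num_frames, 30 * info.sample_rate):
#     wav, sr = torchaudio.load(path, frame_offset=start, num_frames=n)
#     out_prob, score, index, text_lab = classifier.classify_batch(wav)
```

This keeps peak memory bounded by one chunk instead of the whole file; per-chunk predictions would still need to be aggregated (e.g. by averaging `out_prob`) to get a single label for the file.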