Batch vs Single Inference - Speed

#57
by amaye15 - opened

When running predictions one by one versus in batches, I saw a huge difference in speed. Is this a known issue, or do you have any advice?

  • One-by-one processing: finished in approx. 15 minutes
  • Batch processing: incomplete after 1 hour of running
  • Hardware: M1 Mac and Google Colab T4 GPU
  • Inference task: OCR
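For context, batching is usually faster because it replaces many small model calls with one large one, though for autoregressive OCR decoders a batch can be slowed down by padding (every sequence runs until the longest one finishes). A minimal sketch of the general speed effect, using a plain NumPy matrix multiply as a hypothetical stand-in for a model's forward pass (the sizes and names are illustrative, not from the thread):

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a model forward pass: one dense layer.
W = rng.standard_normal((512, 512))
inputs = rng.standard_normal((64, 512))  # 64 "samples"

# One-by-one: a separate matmul per sample.
t0 = time.perf_counter()
single = np.stack([x @ W for x in inputs])
t_single = time.perf_counter() - t0

# Batched: one matmul covering all samples at once.
t0 = time.perf_counter()
batched = inputs @ W
t_batch = time.perf_counter() - t0

# Both paths compute the same result; only the call pattern differs.
assert np.allclose(single, batched)
print(f"one-by-one: {t_single:.4f}s, batched: {t_batch:.4f}s")
```

If batched OCR is slower in practice, common suspects are padding very different-length inputs to one batch, or batches large enough to spill out of GPU memory; sorting inputs by size and trying smaller batch sizes are cheap experiments.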
