Submitted by pommedeterresautee t3_10xp54e in MachineLearning
pommedeterresautee OP t1_ja26tgi wrote
Reply to comment by stevevaius in [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl by pommedeterresautee
Our work is for GPUs with compute capability >= 8.0 (A10, A100, RTX 3090, etc.). On Colab you will likely get a T4, which is compute capability 7.5. Your best bet is to copy-paste the CUDA graph related code from the Kernl library and use it with a PyTorch 2.0 nightly.
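A minimal sketch (mine, not from the thread) of how you might check the compute capability requirement mentioned above, and of the PyTorch 2.0 route suggested: `torch.compile` in "reduce-overhead" mode uses CUDA graphs under the hood. The `Linear` model is a placeholder, not Whisper.

```python
import torch

# Compute capability as a (major, minor) tuple, e.g. (8, 0) on A100, (7, 5) on T4.
major, minor = torch.cuda.get_device_capability()
if (major, minor) >= (8, 0):
    print(f"Compute capability {major}.{minor}: supported")
else:
    print(f"Compute capability {major}.{minor} < 8.0 (typical Colab T4): not supported")

# With a PyTorch 2.0 nightly, CUDA graph capture is available via torch.compile;
# "reduce-overhead" mode records and replays CUDA graphs to cut launch overhead.
model = torch.nn.Linear(16, 16).cuda()  # placeholder model
compiled = torch.compile(model, mode="reduce-overhead")
out = compiled(torch.randn(1, 16, device="cuda"))
```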
stevevaius t1_ja27q2v wrote
Thanks. Is there any implementation on Colab for simply uploading a wav file and transcribing it? Sorry to bother you. I am working with whisper.cpp, but the large model is not fast for streaming. Looking to solve this issue with faster methods.
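For reference, a minimal sketch (mine, not from the thread) of the upload-and-transcribe workflow asked about, assuming a Colab GPU runtime with the reference `openai-whisper` package installed (`pip install openai-whisper`):

```python
from google.colab import files
import whisper

# Prompt for a file upload in the Colab UI; the file lands in the working directory.
uploaded = files.upload()
wav_path = next(iter(uploaded))

# "large" is slow on a Colab T4; "base" or "small" trade accuracy for speed.
model = whisper.load_model("large")
result = model.transcribe(wav_path)
print(result["text"])
```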