big_dog_2k OP t1_iua8imm wrote
Reply to comment by braintampon in [D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k
Great! Exactly this, I just want someone to provide feedback. Do you see throughput improvements using Bento with dynamic batching vs. without? Is the throughput good in general, or is the biggest benefit ease of use?
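For context, the dynamic batching being asked about is BentoML's adaptive batching, which (as I understand it) is enabled per-runner through the runner configuration rather than in the service code. A minimal sketch of the relevant config fragment, assuming BentoML 1.x and its `bentoml_configuration.yaml` schema (key names and values here are illustrative, not verified against a specific release):

```yaml
# bentoml_configuration.yaml (sketch, assuming BentoML 1.x schema)
runners:
  batching:
    enabled: true          # turn on adaptive batching for runners
    max_batch_size: 32     # upper bound on requests merged into one batch
    max_latency_ms: 100    # how long the batcher may wait to fill a batch
```

With this in place, concurrent requests hitting the same runner should be merged into batched inference calls, which is where any throughput gain over the unbatched setup would come from.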