braintampon t1_iua3jkt wrote
Reply to comment by yubozhao in [D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k
Haha, I mean your answer is quite pertinent to OP's post, and I don't see how selling is wrong lmao.

But as the founder of BentoML, what is your answer to OP's question tho? Which is the fastest, most dev-friendly model serving framework according to you? Which model serving framework, in your opinion, is the biggest threat (competitor) to BentoML? Is there any benchmarking you guys have done that indicates potential inference speedups?

My organisation uses BentoML and I personally love what y'all have done with it, btw. Would be awesome to get your honest opinion on OP's question.
TIA!
big_dog_2k OP t1_iua8imm wrote
Great! Exactly this, I just want someone to provide feedback. Do you see throughput improvements using BentoML with dynamic batching vs. without? Is the throughput good in general, or is the biggest benefit ease of use?
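For anyone following along, this is the knob I'm asking about. A minimal sketch, assuming the BentoML 1.x `signatures` API; the model name and the toy `Linear` module are placeholders, not a real deployment:

```python
import bentoml
import torch

# Saving step: register the model with adaptive (dynamic) batching enabled.
# "demo_model" and the toy Linear module are placeholders; batchable=True
# lets the runner merge concurrent requests into one batch along batch_dim 0.
model = torch.nn.Linear(4, 2)
bentoml.pytorch.save_model(
    "demo_model",
    model,
    signatures={"__call__": {"batchable": True, "batch_dim": 0}},
)

# Serving step (normally in service.py): expose the model via a runner.
runner = bentoml.pytorch.get("demo_model:latest").to_runner()
svc = bentoml.Service("demo_service", runners=[runner])

@svc.api(input=bentoml.io.NumpyNdarray(), output=bentoml.io.NumpyNdarray())
async def predict(arr):
    # Individual calls here may be transparently batched with other
    # in-flight requests before they hit the model.
    result = await runner.async_run(torch.from_numpy(arr).float())
    return result.detach().numpy()
```

Benchmarking `bentoml serve service.py:svc` under concurrent load, with and without `batchable`, is the comparison I'm hoping someone has numbers for.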