yubozhao t1_iu9qzm3 wrote
Reply to comment by braintampon in [D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k
I guess others see this as spammy or ads? Honestly, I disclosed who I am and didn't try to sell (from my POV). I guess that's not welcome in this sub. shrug.jpg
Edit: typo
braintampon t1_iua3jkt wrote
Haha, I mean, your answer is quite pertinent to OP's post, and I don't see how selling is wrong lmao
But being the founder of BentoML, what is your answer to OP's question, though? Which is the fastest, most dev-friendly model serving framework, according to you? Which model serving framework, in your opinion, is the biggest threat (competitor) to BentoML? Is there some benchmarking you guys have done that indicates potential inference speed-ups?
My organisation uses BentoML and I personally love what y'all have done with it, btw. Would be awesome to get your honest opinion on OP's question.
TIA!
big_dog_2k OP t1_iua8imm wrote
Great! Exactly this, I just want someone to provide feedback. Do you see throughput improvements using BentoML with dynamic batching vs. without? Is the throughput good in general, or is the biggest benefit ease of use?
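(For context on the dynamic-batching question: below is a minimal sketch of how adaptive batching is typically enabled in BentoML 1.x, assuming a PyTorch model saved under a placeholder name `mnist_clf`. The names and the model framework are assumptions, not from the thread, and whether batching actually improves throughput depends on traffic patterns and model size.)

```python
# Hedged sketch, BentoML 1.x style. "mnist_clf" is a hypothetical model tag.
import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# At save time, the callable would be declared batchable along dim 0 so the
# runner may merge concurrent requests into one forward pass, e.g.:
# bentoml.pytorch.save_model(
#     "mnist_clf",
#     model,
#     signatures={"__call__": {"batchable": True, "batch_dim": 0}},
# )

runner = bentoml.pytorch.get("mnist_clf:latest").to_runner()
svc = bentoml.Service("mnist_service", runners=[runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
async def predict(arr: np.ndarray) -> np.ndarray:
    # Each HTTP request hits this endpoint individually; a batchable runner
    # can group overlapping calls into a single batch before inference.
    return await runner.async_run(arr)
```

The batching itself happens in the runner process, so the service code stays the same with or without it; comparing throughput with `batchable` on and off under concurrent load is the usual way to measure the benefit.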
programmerChilli t1_iufqn15 wrote
Well, you disclosed who you are, but that's pretty much all you did :P
The OP asked a number of questions, and you didn't really answer any of them. You didn't explain what BentoML can offer, you didn't explain how it can speed up inference, you didn't really even explain what BentoML is.
Folks will tolerate "advertising" if it comes in the form of interesting technical content. However, you basically just mentioned your company and provided no technical content, so it's just pure negative value from most people's perspective.
yubozhao t1_iufs7wp wrote
Fair enough. I will probably get to it. I don't know about you, but I need to "charge up" and make sure my answer is good. That takes time, and it was Halloween week, after all.