braintampon t1_iu9n6k7 wrote
Reply to comment by yubozhao in [D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k
Why is this dude getting downvoted?
braintampon t1_irow1qf wrote
Reply to comment by DaltonSC2 in 1080 vs 2060 for deeplearning by ccppoo0
I agree. I was kidding myself when I built a PC for deep learning a couple of years ago. I'm selling it off within the month to get a MacBook and will invest in Colab/SageMaker instead.
DM me if anyone is looking to buy lol
braintampon t1_iua3jkt wrote
Reply to comment by yubozhao in [D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k
Hahah, I mean your answer is quite pertinent to OP's post, and I don't see how selling is wrong lmao
But as the founder of BentoML, what is your answer to OP's question, though? Which is the fastest, most dev-friendly model serving framework in your view? Which model serving framework do you consider the biggest threat (competitor) to BentoML? Have you done any benchmarking that points to potential inference speedups?
My organisation uses BentoML and I personally love what y'all have done with it, btw. Would be awesome to get your honest opinion on OP's question.
TIA!