WH7EVR t1_jce683f wrote
Reply to comment by ReginaldIII in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
Quality > Effort. I welcome the higher-quality comments and content we'll be getting by augmenting human laziness with AI speed and ability.
WH7EVR t1_jbngk56 wrote
Reply to [D] chatGPT and AI ethics by [deleted]
It took about 120 GPU-years (A100 80GB) to train LLaMA. If you want to train it from scratch, it'll cost you a ton of money and/or time. That said, you can fine-tune LLaMA as-is. No real point in recreating it.
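To put that 120 GPU-years figure in rough dollar terms, here's a back-of-envelope sketch; the ~$1.50/hour A100 rate is an assumed example figure, not something from the comment or any quoted price list:

```python
# Back-of-envelope cost of ~120 A100-years of compute.
# The hourly rate below is an assumption for illustration only.
GPU_YEARS = 120
HOURS_PER_YEAR = 24 * 365
RATE_PER_GPU_HOUR = 1.50  # assumed cloud rate in USD, varies widely by provider

total_hours = GPU_YEARS * HOURS_PER_YEAR       # 1,051,200 GPU-hours
cost_usd = total_hours * RATE_PER_GPU_HOUR     # ~$1.58M at the assumed rate

print(f"{total_hours:,} GPU-hours ≈ ${cost_usd:,.0f}")
```

Even at bargain GPU rates that's seven figures, which is why fine-tuning the released weights is the practical route.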
WH7EVR t1_jce6k91 wrote
Reply to comment by noxiousmomentum in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
Nat Friedman is a multi-millionaire tech entrepreneur, since he uh -- didn't really introduce himself.

/u/nat_friedman not everyone knows who you are, or that you're loaded bro.