
kallikalev t1_jdhj0tf wrote

We’re talking about direct computations. Someone with a massive memory of pi has it memorized; they aren’t computing it via an infinite series in the moment.

The point being made is that it’s much more efficient, in both time and energy, to have the actual computation done by a dedicated, optimized program that only takes a few CPU instructions, rather than trying to approximate it with the giant neural network that is an LLM. And it’s similar for humans: our brains burn way more energy multiplying large numbers in our heads than a CPU does in the few nanoseconds it takes.
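A minimal sketch of that comparison (the operands and the Python timing approach are my own illustration, not from the comment): a direct multiply is a handful of machine instructions and lands in the nanosecond range.

```python
# Rough illustration: time a direct integer multiply.
# The operands are arbitrary; the measured time includes Python/lambda overhead,
# so the underlying machine instructions are even cheaper than what's printed.
import timeit

a, b = 48_302_917, 91_284_665

per_call = timeit.timeit(lambda: a * b, number=1_000_000) / 1_000_000
print(f"direct multiply: ~{per_call * 1e9:.0f} ns per call")
```

Even with interpreter overhead this comes out in the tens of nanoseconds, versus the billions of multiply-accumulate operations a multi-billion-parameter LLM spends on a single forward pass just to guess at the same answer.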

7

kallikalev t1_j1pkid7 wrote

And the fact that they’re just toys now means they’ll never be put into use? The newest big stuff like image generation is less than a year old; things take time. Not long ago generative art was nothing but a pipe dream, then all the outputs were messes of scribbles that vaguely resembled the prompts, and now they’re mind-blowing. Give it a few more years of refinement and business interest, and you’re going to see image generators and chatbots become commonplace.

As a first example of widespread deployment, the popular graphic design tool Canva has added a text-to-image tab directly in its editor, letting people create stock photos, logos, backgrounds, etc. on the fly. And on the “toy” side of things, Midjourney launched about six months ago and already has millions of users paying $10-$30 a month for image generation. Most are using it as a toy, but some are making album or book covers, character art for roleplaying games, sketches and inspiration for their own drawings, and so on. Just because something is a toy doesn’t mean it won’t have any impact.

17