Submitted by Just-A-Lucky-Guy t3_1200joq in Futurology
WorkO0 t1_jdgkj5b wrote
Reply to comment by Ill-Construction-209 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
No need for that. Just as you would use a calculator or computer to solve arithmetic problems, so will AI in the future. Doing mental math is slow and inefficient; our own brains prove it. OTOH, offloading it to extensions like this will let GPT do things previously unimaginable.
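The idea above can be sketched in a few lines. This is a toy illustration, not the actual Wolfram plugin: the names `calculator_tool` and `answer` are hypothetical, and the "model" branch is a stand-in. The point is only the routing pattern, where arithmetic is delegated to exact computation instead of being generated as text.

```python
import re

# Character whitelist for what we treat as "pure arithmetic".
ARITH = re.compile(r"[\d\s+\-*/().]+")

def calculator_tool(expression: str) -> str:
    """Evaluate a simple arithmetic expression exactly.

    A real plugin would call an external engine (e.g. Wolfram); here we
    only allow digits and basic operators before evaluating.
    """
    if not ARITH.fullmatch(expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable here given the whitelist

def answer(query: str) -> str:
    # If the query looks like pure arithmetic, delegate to the tool;
    # otherwise a (hypothetical) language model would handle it.
    expr = query.strip().rstrip("?").strip()
    if ARITH.fullmatch(expr):
        return calculator_tool(expr)
    return "[model-generated text]"

print(answer("123456789 * 987654321"))  # exact answer from the tool
print(answer("What is creativity?"))    # falls through to the model
```

The model never needs to "know" arithmetic; it only needs to recognize when to hand the question off.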
Mercurionio t1_jdgocc4 wrote
Our brain does NOT prove it. It's actually the opposite. Ask any autistic kid for the 174th digit of pi and he'll easily answer your question (exaggerating, but still).
What our brain proves is that it's working hard even when we think it's not. Controlling our body is a VERY demanding task; it consumes a lot of resources. So when you're on a "trip", your brain will just relax and do whatever it wants, and your creativity will burst way beyond GPT-4, for example.
angrathias t1_jdh2y92 wrote
Shit, and I thought the LLMs were the big hallucinators 😂
kallikalev t1_jdhj0tf wrote
We’re talking about direct computation. Someone with a massive memory of pi has the digits memorized; they aren’t computing them via an infinite series in the moment.
The point being made is that it’s much more efficient, in both time and energy, to have the actual computation done by a dedicated, optimized program that only takes a few CPU instructions, rather than trying to approximate it with the giant neural network that is an LLM. And it’s similar for humans: our brains burn far more energy multiplying large numbers in our heads than a CPU does in the few nanoseconds it takes.
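To put a rough number on that gap: the sketch below times a native integer multiply. Even with Python's interpreter overhead on top, one multiply lands in the tens-of-nanoseconds range on typical hardware, versus a full forward pass through billions of parameters for an LLM to merely approximate the same product. The exact timing will vary by machine; the orders of magnitude are the point.

```python
import timeit

# Two large-ish integers to multiply exactly.
a, b = 987654321, 123456789

# Time a million multiplies and report the per-call cost.
per_call_s = timeit.timeit(
    "a * b", globals={"a": a, "b": b}, number=1_000_000
) / 1_000_000

print(f"{a} * {b} = {a * b}")
print(f"~{per_call_s * 1e9:.0f} ns per multiply (incl. interpreter overhead)")
```

The multiply itself is a handful of CPU instructions; nearly all of the measured time is Python overhead, and a compiled program would be faster still.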