ArcticWinterZzZ
ArcticWinterZzZ t1_je3kgan wrote
The last people to acknowledge that an AGI is actually AGI will be its creators. When Garry Kasparov played Deep Blue, he saw in it a deep, almost human intelligence, an insight beyond the chess engines he was used to. Deep Blue's creators could not appreciate the chess genius it was capable of, because they were not brilliant chess players themselves. Under a microscope, a human brain does not look very intelligent. So too will the creators of AGI deny its real intelligence, because they know its artificiality and foibles better than anyone.
ArcticWinterZzZ t1_jdtqupy wrote
Reply to comment by Anjz in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Maybe, but it can't enumerate all of its knowledge for you, and it'd be better to reduce the network to just its reasoning component and keep "facts" in a database. That way its knowledge can be updated, and we can make sure it doesn't learn the wrong thing.
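A toy sketch of the separation I mean (the store and function names here are mine, purely illustrative, not any real system): the "facts" live in an editable store outside the network, the network only reasons over what is retrieved, and updating knowledge becomes a database write rather than a retrain.

```python
# Hypothetical fact store: knowledge lives outside the network,
# so it can be corrected or updated without retraining anything.
FACTS = {
    "capital_of_france": "Paris",
    "boiling_point_water_c": "100",
}

def answer(key: str, reason=lambda fact: fact) -> str:
    """Look the fact up; apply the 'reasoning' step only to what was found."""
    fact = FACTS.get(key)
    if fact is None:
        return "unknown"  # refuse rather than hallucinate a filler answer
    return reason(fact)

print(answer("capital_of_france"))  # Paris
print(answer("favorite_color"))    # unknown
```

The point of the `unknown` branch is the whole argument: a lookup can fail loudly, whereas a network's weights always produce *something*.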
ArcticWinterZzZ t1_jdtps8v wrote
Reply to comment by 1II1I11II1I1I111I1 in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Of course. Wikipedia cannot think. But what I mean is that if you just want to preserve information, you should preserve an archive and not an AI that can sometimes hallucinate information.
ArcticWinterZzZ t1_jdtpq0u wrote
Reply to comment by Anjz in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Of course, and I understand what you're talking about; I just mean that if you were interested in preserving human knowledge, an LLM would not be a great way to do it, since it hallucinates information.
ArcticWinterZzZ t1_jdtn3hl wrote
Reply to AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Or, you know, you can just download Wikipedia locally
ArcticWinterZzZ t1_jdtlwg4 wrote
It is impossible to say how a superintelligence would go about doing this because you would need to be superintelligent to work out the best method. I think the best way for it to do something like this would be to play along until we get complacent and give it enough time to build up the requisite military assets to perform a genocide of all humans.
ArcticWinterZzZ t1_jdtlkru wrote
Reply to comment by liqui_date_me in Why is maths so hard for LLMs? by RadioFreeAmerika
Even if it were to perform the addition manually, addition runs in the opposite order from the one in which GPT-4 generates tokens: carries propagate from the least significant digit, but the model has to emit the most significant digit first. It's unlikely to be very good at it.
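A minimal sketch of the order clash (the function is mine, just the grade-school algorithm): the digits must be processed right-to-left because of carries, yet an autoregressive model has to print the leftmost digit of the answer first, before the carries that determine it have been worked out.

```python
def schoolbook_add(a: str, b: str) -> str:
    """Grade-school addition: processes digits right-to-left,
    because carries flow from the least significant digit upward."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # least significant first
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    # The computation ends at the MOST significant digit -- exactly the
    # digit an autoregressive model must emit first.
    return "".join(reversed(out))

print(schoolbook_add("457", "968"))  # 1425
```

Note that the leading "1" of 1425 only exists because of a carry generated by the very last digits, which is why left-to-right "mental math" is an awkward fit.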
ArcticWinterZzZ t1_jdt1h3m wrote
Reply to comment by masonw32 in Why is maths so hard for LLMs? by RadioFreeAmerika
Yes, but we are interested in its general-purpose multiplication ability. If it has memorized some results, that's nice, but we can't expect that for every single pair of numbers. And then, what about products of three factors? We should start thinking of ways around this limitation.
ArcticWinterZzZ t1_jdt10ie wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Yes, it can probably be done. How? I don't know. Maybe some kind of neural loopback structure that runs layers until it's "done". No idea how this would really work.
ArcticWinterZzZ t1_jdt0urg wrote
Reply to comment by Ok_Faithlessness4197 in Why is maths so hard for LLMs? by RadioFreeAmerika
I don't think that's impossible to add. You are right: chain of thought prompting circumvents this issue. I am specifically referring to "mental math" multiplication, which GPT-4 will often attempt.
ArcticWinterZzZ t1_jdt0plo wrote
Reply to comment by zero_for_effort in Why is maths so hard for LLMs? by RadioFreeAmerika
GPT-4 always takes the same amount of time to output a token; its forward pass does a fixed amount of computation regardless of how hard the question is. But multiplying n-digit numbers requires time that grows with n (the best known algorithm runs in O(n log n)). Therefore, an LLM like GPT-4 cannot possibly "grow" the internal structures required to calculate arbitrary products "instantly". There are probably quite a few more problems like this, which is why chain-of-thought prompting can be so powerful.
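A quick illustration of the growth argument (my own toy function, using the O(n²) schoolbook method rather than the asymptotically optimal one): the number of elementary digit steps grows with input length, while a fixed-depth forward pass spends the same compute per token no matter how long the numbers are.

```python
def schoolbook_multiply_steps(a: int, b: int) -> tuple[int, int]:
    """Multiply via digit-by-digit partial products, counting
    the elementary steps taken. Step count grows with digit count,
    unlike a constant-compute-per-token forward pass."""
    sa, sb = str(a), str(b)
    total, steps = 0, 0
    for i, da in enumerate(reversed(sa)):
        for j, db in enumerate(reversed(sb)):
            total += int(da) * int(db) * 10 ** (i + j)
            steps += 1
    return total, steps

print(schoolbook_multiply_steps(12, 34))      # (408, 4)
print(schoolbook_multiply_steps(1234, 5678))  # (7006652, 16)
```

Doubling the digit count quadruples the steps here; chain-of-thought works because writing out intermediate tokens is how the model buys itself that extra compute.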
ArcticWinterZzZ t1_jdt0dyi wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
You are correct that chain-of-thought prompting works for this. That's because it gives the model more time to run an algorithm to get the answer. I'm specifically talking about "instant" multiplication. Yes, GPT-4 can multiply, so long as it runs the algorithm manually. We then hit a small snag in that it will eventually exhaust its context window, but Reflexion and similar methods can work around that.
As for SIMPLE specific tasks, I really don't think there are any GPT-4 can't do, not with an introspection step, at least.
ArcticWinterZzZ t1_jdqsh5c wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
None of the other posters have given the ACTUAL correct answer, which is that an LLM set up like GPT-4 can never actually be good at mental math, for the simple reason that GPT-4 spends O(1) computation per token regardless of the size of the numbers, while multiplying n-digit numbers takes time that grows with n: the best known algorithm runs in O(n*log(n)). It is impossible for GPT-4 to multiply arbitrary numbers instantly because that would breach known complexity bounds.
At minimum, GPT-4 needs space to actually work out its answer.
ArcticWinterZzZ t1_jdg3n3o wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
Wander. See the world. Meet as many people as possible. Enjoy full dive VR. Simulate all the crazy fantasies I've always dreamed of. Design the perfect video game. Relive my favorite games of the past, but for real. Enjoy an infinite virtual collection of all the stuff I've always wanted to own. Probably still browse Reddit a lot. Spend even more time arguing online.
ArcticWinterZzZ t1_jcw1zte wrote
Every video game you play has been played already by countless others. Every puzzle you solve was already solved. Machines can play Chess far better than any human. An aimbot that can be programmed in minutes can beat any human player at Counter-Strike. People continue to paint in spite of photographs. People weave in spite of looms.
Just because something can be done better by an AI does not mean it is not meaningful, still, for people.
ArcticWinterZzZ t1_jecmcf3 wrote
Reply to Will LLMs accelerate the adoption of English as a primary language? by ReadditOnReddit
Quite the opposite; GPT-4 is excellent at a wide variety of languages, and as a context-aware translation tool (that can even take in images!) it has the potential to be far better at translating webpages and conversations than even the best currently existing translation software.