ArcticWinterZzZ

ArcticWinterZzZ t1_je3kgan wrote

The last people to acknowledge that an AGI is actually AGI will be its creators. When Garry Kasparov played Deep Blue, he saw in it a deep, almost human intelligence, an insight that went beyond the chess engines he was used to. Deep Blue's creators could not appreciate the chess genius it was capable of, because they were not brilliant chess players themselves. Under a microscope, a human brain does not look very intelligent. So too will the creators of AGI deny its real intelligence, because they know its artificiality and its foibles better than anyone.

1

ArcticWinterZzZ t1_jdtqupy wrote

Maybe, but it can't enumerate all of its knowledge for you. It would be better to strip the network down to the reasoning component and keep "facts" in an external database. That way its knowledge can be updated, and we can make sure it doesn't learn the wrong thing.
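A minimal sketch of that separation, assuming a retrieval-style setup; `query_llm` is a hypothetical stand-in for whatever network does the reasoning:

```python
# Facts live outside the weights, so they can be edited or audited
# without retraining the reasoning model.
FACTS = {
    "capital_of_france": "Paris",
    "boiling_point_of_water_c": "100",
}

def update_fact(key: str, value: str) -> None:
    """Correct or extend knowledge without touching the model."""
    FACTS[key] = value

def query_llm(prompt: str) -> str:
    # Placeholder so the sketch runs; a real system would call a model here.
    return f"(model reasons over: {prompt!r})"

def answer(question: str, relevant_keys: list[str]) -> str:
    # Retrieve the relevant facts and hand them to the reasoning component.
    context = "\n".join(f"{k}: {FACTS[k]}" for k in relevant_keys if k in FACTS)
    return query_llm(f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:")

update_fact("capital_of_france", "Paris")  # editable at any time
print(answer("What is the capital of France?", ["capital_of_france"]))
```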

2

ArcticWinterZzZ t1_jdtlwg4 wrote

It is impossible to say how a superintelligence would go about doing this, because you would need to be superintelligent yourself to work out the best method. My guess is that it would play along until we grew complacent, giving it enough time to build up the military assets needed to wipe out humanity.

2

ArcticWinterZzZ t1_jdt1h3m wrote

Yes, but we are interested in its general-purpose multiplication ability. If it remembers particular results, that's nice, but we can't expect it to memorize every possible pair of numbers, and what about products of three factors? We should start thinking about ways around this limitation.
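A quick back-of-envelope count, plain arithmetic, nothing model-specific, shows why memorization can't scale:

```python
# How many distinct facts would rote memorization of multiplication need?
for digits in (2, 4, 8, 16):
    pairs = (10 ** digits) ** 2  # ordered pairs of numbers up to `digits` digits
    print(f"up to {digits}-digit operands: ~{pairs:.1e} pairs")
# A third factor cubes the count instead of squaring it.
```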

2

ArcticWinterZzZ t1_jdt0plo wrote

GPT-4 spends the same amount of compute on every token it outputs. Multiplying arbitrarily long numbers, however, requires an amount of work that grows with the length of the inputs, more than any fixed per-token budget can cover. Therefore, an LLM like GPT-4 cannot "grow" the internal structures required to calculate arbitrary multiplication "instantly". There are probably quite a few more problems like this, which is why chain-of-thought prompting can be so powerful.
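To illustrate the fixed per-token cost, here is a toy forward "layer"; the dimensions are made up for the sketch and are nothing like GPT-4's real ones. The point is that the work is identical no matter what question the token is answering:

```python
import numpy as np

d_model = 1024                      # made-up width for the sketch
W = np.random.randn(d_model, d_model)

def forward_one_token(x: np.ndarray) -> np.ndarray:
    return np.tanh(W @ x)           # ~d_model**2 multiply-adds, every time

easy = np.random.randn(d_model)     # stand-in embedding for "2*3="
hard = np.random.randn(d_model)     # stand-in embedding for a 50-digit product
forward_one_token(easy)             # same FLOPs...
forward_one_token(hard)             # ...as this; harder math gets no extra compute
```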

3

ArcticWinterZzZ t1_jdt0dyi wrote

You are correct that chain-of-thought prompting works for this; it gives the model more time to run an algorithm and reach the answer. I'm specifically talking about "instant" multiplication. Yes, GPT-4 can multiply, so long as it runs the algorithm manually. We then run into a small hitch because it will eventually hit its context window, but this can be circumvented, and Reflexion and similar methods will also help.
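For concreteness, a sketch of the two prompting styles; the exact wording is illustrative, not taken from any paper:

```python
# Direct: the model must emit the answer in one shot, with no room to work.
direct = "What is 8127 * 9344? Answer with only the number."

# Chain of thought: intermediate tokens let the model actually run the
# long-multiplication algorithm; each token is another forward pass,
# so the prompt effectively buys the model more total compute.
chain_of_thought = (
    "What is 8127 * 9344? Work it out with long multiplication, "
    "writing every partial product, then state the final answer."
)
```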

As for SIMPLE specific tasks, I really don't think there are any GPT-4 can't do, at least not with an introspection step.

2

ArcticWinterZzZ t1_jdqsh5c wrote

None of the other posters have given the ACTUAL correct answer, which is that an LLM set up like GPT-4 can never be good at mental arithmetic, for the simple reason that GPT-4 spends O(1) compute per output token, while multiplying two n-digit numbers costs O(n*log(n)) even with the best known algorithm (Harvey and van der Hoeven's, which is conjectured to be optimal). A fixed per-token budget cannot absorb a cost that grows with the input, so one-shot multiplication of arbitrary numbers is not something the network can learn, no matter how it is trained.

At minimum, GPT-4 needs scratch space, extra output tokens to work in, to actually calculate its answer.
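A small way to see the growth, using only the grade-school method (nothing about the model is assumed here): count the single-digit multiplies that schoolbook long multiplication performs.

```python
# The step count grows roughly as n**2 with operand length, so no fixed
# per-answer compute budget can cover all input sizes.
def schoolbook_steps(a: int, b: int) -> int:
    steps = 0
    for _ in str(a):
        for _ in str(b):
            steps += 1              # one digit-by-digit multiply (plus carries)
    return steps

for n in (2, 4, 8, 16, 32):
    a = int("9" * n)
    print(f"{n}-digit operands -> {schoolbook_steps(a, a)} digit multiplies")
```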

60

ArcticWinterZzZ t1_jdg3n3o wrote

Wander. See the world. Meet as many people as possible. Enjoy full dive VR. Simulate all the crazy fantasies I've always dreamed of. Design the perfect video game. Relive my favorite games of the past, but for real. Enjoy an infinite virtual collection of all the stuff I've always wanted to own. Probably still browse Reddit a lot. Spend even more time arguing online.

4

ArcticWinterZzZ t1_jcw1zte wrote

Every video game you play has already been played by countless others. Every puzzle you solve was already solved. Machines play chess far better than any human. An aimbot that can be programmed in minutes can beat any human player at Counter-Strike. People continue to paint in spite of photographs. People weave in spite of looms.

Just because an AI can do something better does not mean it is no longer meaningful for people.

12