
ArgentStonecutter t1_j56fhsa wrote

I don't think we're anywhere near human-level intelligence, or even general mammalian intelligence. The current technology shows no signs of scaling up to human intelligence, and fundamental research into the subject is still required before we have a grip on how to get there.

2

BadassGhost t1_j56i9dt wrote

https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks

LLMs are close to, equal to, or beyond human ability on a lot of these tasks, though on some of them they're not there yet. I'd argue that's pretty convincing evidence that they beat typical mammals at abstract thinking. Clearly animals are much more intelligent in other ways, sometimes even more so than humans (e.g. the experiment where chimps recall and select 10 numbers on a screen in order from memory). But in terms of high-level reasoning, LLMs are pretty close to human performance.
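For anyone who wants to poke at the linked benchmark themselves, here's a minimal sketch of scoring a model on one of those tasks by reading its JSON file straight out of a local checkout of the repo. The task path and `get_model_answer()` are hypothetical placeholders, and the `examples`/`input`/`target` fields follow the JSON schema used by many (not all) tasks in BIG-bench, so treat this as a rough outline rather than the project's official harness.

```python
import json

# Hypothetical path into a local clone of google/BIG-bench
TASK_FILE = "BIG-bench/bigbench/benchmark_tasks/logical_deduction/task.json"


def get_model_answer(prompt: str) -> str:
    """Placeholder for whatever LLM you want to evaluate."""
    raise NotImplementedError


def score_task(task_file: str) -> float:
    """Return the fraction of task examples the model answers exactly right."""
    with open(task_file) as f:
        task = json.load(f)

    examples = task["examples"]
    correct = 0
    for ex in examples:
        answer = get_model_answer(ex["input"])
        # Many tasks list a single "target" string; others use "target_scores",
        # a dict mapping candidate answers to scores (take the highest-scored one).
        target = ex.get("target") or max(ex["target_scores"], key=ex["target_scores"].get)
        if answer.strip().lower() == str(target).strip().lower():
            correct += 1
    return correct / len(examples)
```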

7

ArgentStonecutter t1_j56sxck wrote

Computers have been better than humans at an increasing number of tasks since before WWII. Many of these tasks, like Chess and Go, were once touted as requiring 'real' intelligence. No possible list of such tasks is a meaningful measure of general intelligence.

2

BadassGhost t1_j570h0y wrote

Then what would be meaningful? What would convince you that something is close to AGI, but not yet AGI?

For me, this is exactly what I would expect to see if something was almost AGI but not yet there.

The difference from previous specialized AI is that these models can learn seemingly any concept, both during training and after training (in context). Concepts that are out of distribution can be taught with a single-digit number of examples.
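To make the in-context learning point concrete, here's a minimal sketch: an invented, out-of-distribution rule is taught with three examples in the prompt, and the model is asked to extend it. The `complete()` function is a stand-in for any LLM completion call, not a real API, and the rule itself is made up for illustration.

```python
def complete(prompt: str) -> str:
    """Placeholder for an LLM completion endpoint."""
    raise NotImplementedError


# An invented rule the model has never seen in training:
# reverse the word, then append its length.
few_shot_prompt = """Apply the rule to each word.
cat -> tac3
house -> esuoh5
sky -> yks3
window ->"""

# The claim is that a capable LLM infers the rule from the three examples
# and completes this with "wodniw6".
print(complete(few_shot_prompt))
```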

(I am not the one downvoting you)

3