
AsheyDS t1_j4rwgmy wrote

>Yes, essentially. The data gets synthesized and we have the ability to mix and match, to an extent. We have the ability to recognize patterns and apply concepts across domains.

Amazing how you just casually gloss over some of the most complex and difficult-to-replicate aspects of our cognition. I guess transfer learning is no big deal now?

1

Bakoro t1_j4sefih wrote

It's literally the thing that computers will be the best at.

Comparing everything to everything else in the memory banks, with a perfection and breadth of coverage that a human could only dream of. Recognizing patterns and reducing them to equations/algorithms, recognizing similar structures, and attempting to use known solutions in new ways, without prejudice.
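
To make that concrete, here's a toy sketch of the brute-force "compare against everything in memory" idea. The pattern names, the 3-d embeddings, and the `most_similar` helper are all made up for illustration; nothing here is a real system's API:

```python
# Toy "compare everything against everything in memory" search:
# brute-force cosine similarity over stored pattern vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# memory: pattern -> embedding (a real store would hold millions of these)
memory = {
    "heat diffusion": [0.9, 0.1, 0.2],
    "stock volatility": [0.8, 0.2, 0.3],
    "poem meter": [0.1, 0.9, 0.4],
}

def most_similar(query_vec):
    # Exhaustively score the query against every stored pattern;
    # that breadth of comparison is what machines get for free.
    return max(memory.items(), key=lambda kv: cosine(query_vec, kv[1]))

print(most_similar([0.9, 0.1, 0.25]))  # -> ('heat diffusion', [0.9, 0.1, 0.2])
```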

What's amazing is that anyone can be dismissive of a set of tools in which each specialized unit does its task better than almost all humans, and in some cases better than any human.

It's like the human version of "God of the gaps". Only a handful of years ago, people were saying that AI couldn't create art, solve math problems, or write code. Now we have AI tools that can create masterwork-level art, have developed thousands of math proofs, can write meaningful code from a natural-language request, can talk people through their relationship problems, and can pass a bar exam.

Relying on "but this one thing" is a losing game. It's all going to be solved.

5

AsheyDS t1_j4sg78y wrote

That wasn't my point; I know all this. The topic was stringing together current AIs to create something that does these things. And that ignores a lot of things they can't currently do, even if you slap them together.

3

Bakoro t1_j4sr7jc wrote

Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

There is already work, done and still being improved on, to give LLMs parsing abilities: mathematical, logical, and symbolic manipulation. Tying that kind of LLM together with other models it can reference for specific needs will have results that aren't easily predictable, other than that it will vastly improve on the shortcomings of current publicly available models; it's already doing so in development.
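
As a hedged sketch of what that tying-together could look like: a toy router that hands arithmetic to a symbolic step and everything else to a language model. `call_llm` and the routing rule are hypothetical placeholders, not any particular product's interface:

```python
# Toy router: arithmetic goes to a symbolic step, everything else to an LLM.
import re

ARITHMETIC = re.compile(r"[\d\s+\-*/().]+")

def call_llm(prompt):
    # Stand-in for a real language-model call.
    return f"[LLM answer to: {prompt!r}]"

def solve_math(expr):
    # Stand-in for a proper symbolic engine; eval() is safe here only
    # because the regex above whitelists digits and arithmetic operators.
    return str(eval(expr))

def answer(query):
    query = query.strip()
    if ARITHMETIC.fullmatch(query):
        return solve_math(query)
    return call_llm(query)

print(answer("12 * (7 + 5)"))                  # -> 144
print(answer("Summarize transfer learning."))  # -> routed to the LLM stub
```

A production system would let the model itself decide when to call which tool, but the plumbing is the same idea.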

Having that kind of system able to loop back on itself is essentially a kind of consciousness, with full-on internal dialogue.
Why wouldn't you expect emergent features?
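
Mechanically, "looping back on itself" is just draft, self-critique, revise. Here's a minimal sketch where all three calls are stubs I invented; the point is the control flow, not any specific API:

```python
# Minimal "loop back on itself" sketch: draft, self-critique, revise.
def draft(task):
    return f"first attempt at {task}"

def critique(answer):
    # A real system would ask the model to find flaws in its own output.
    return "ok" if answer.startswith("revised") else "too vague"

def revise(answer, feedback):
    return f"revised ({feedback}): {answer}"

def inner_dialogue(task, max_rounds=3):
    answer = draft(task)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback == "ok":
            break
        answer = revise(answer, feedback)  # the loop back
    return answer

print(inner_dialogue("prove the claim"))
# -> revised (too vague): first attempt at prove the claim
```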

You say I'm ignoring what AI "can't currently do", but I already said that's a losing argument. Thinking the state of the art is whatever you read about in the past couple of weeks means you're already weeks or months behind.

But please, elaborate on what AI currently can't do, and let's come back in a few months and have a laugh.

3

AsheyDS t1_j4t3m6g wrote

>Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

Bold of you to assume I care about being credible on reddit, in r/singularity of all places. This is the internet; you should be skeptical of everything, especially these days. I could be your mom, who cares?

And you're going to have to try harder than all that to impress me. Your nebulous 'emergent features' and internal dialogue aren't convincing me of anything.

However, I will admit that I was wrong to say 'current', because I ignored the date on the infographic. My apologies. But even the infographic admits that all the listed capabilities were a guess, a guess that excludes functions of cognition that should probably be included and says nothing about how they translate to the 'tech' side. So in my non-credible opinion, the whole thing is an oversimplified stretch of the imagination. But sure, PM me in a few months and we can discuss how GPT-3 still can't comprehend anything, or how the latest LLM still can't make you coffee.

2