AsheyDS t1_j4sg78y wrote

That wasn't my point, I know all this. The topic was stringing together current AIs to create something that does these things. And that's ignoring a lot of things that they can't currently do, even if you slap them together.

3

Bakoro t1_j4sr7jc wrote

Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

There is already work done, and being improved upon, to give LLMs parsing abilities for mathematical, logical, and symbolic manipulation. Tying that kind of LLM together with other models it can reference for specific needs will have results that aren't easily predictable, other than that it will vastly improve on the shortcomings of current publicly available models; it's already doing so while in development.

Having that kind of system able to loop back on itself is essentially a kind of consciousness, with full-on internal dialogue.
Why wouldn't you expect emergent features?
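The architecture being gestured at here, a core model that delegates sub-tasks to specialized models and feeds its own output back to itself, can be sketched in miniature. Everything below is a hypothetical stub: `core_llm` and `math_model` are toy stand-ins for real models, and the routing protocol (`CALL math:` / `ANSWER:`) is invented purely for illustration.

```python
def math_model(expr: str) -> str:
    # Stand-in for a specialized symbolic/math model.
    # Toy only: never eval untrusted input in real code.
    return str(eval(expr, {"__builtins__": {}}))

def core_llm(prompt: str) -> str:
    # Stand-in for the core LLM: emits either a tool request or a final answer.
    if any(ch.isdigit() for ch in prompt) and any(op in prompt for op in "+-*/"):
        return f"CALL math: {prompt}"
    return f"ANSWER: {prompt}"

def run_loop(task: str, max_steps: int = 5) -> str:
    """Feed the system's output back in as input until it settles on an answer."""
    state = task
    for _ in range(max_steps):
        out = core_llm(state)
        if out.startswith("CALL math:"):
            # Route the sub-task to the specialized model, then loop back.
            state = math_model(out.removeprefix("CALL math:").strip())
        else:
            return out.removeprefix("ANSWER:").strip()
    return state
```

The point of the sketch is only the control flow: the loop in `run_loop` is the "loop back on itself" step, and the `CALL` branch is the reference to an external specialized model.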

You say I'm ignoring what AI "can't currently do", but I already said that's a losing argument. Thinking that the state of the art is what you've read about in the past couple of weeks means you're already weeks or months behind.

But please, elaborate on what AI currently can't do, and let's come back in a few months and have a laugh.

3

AsheyDS t1_j4t3m6g wrote

>Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

Bold of you to assume I care about being credible on reddit, in r/singularity of all places. This is the internet; you should be skeptical of everything, especially these days. I could be your mom, who cares?

And you're going to have to try harder than all that to impress me. Your nebulous 'emergent features' and internal dialogue aren't convincing me of anything.

However, I will admit that I was wrong in saying 'current', because I ignored the date on the infographic. My apologies. But even the infographic admits all the listed capabilities were a guess, a guess which excludes functions of cognition that should probably be included, and says nothing of how they translate over to the 'tech' side. So in my non-credible opinion, the whole thing is an oversimplified stretch of the imagination. But sure, PM me in a few months and we can discuss how GPT-3 still can't comprehend anything, or how the latest LLM still can't make you coffee.

2