Submitted by Akimbo333 t3_10easqx in singularity
Bakoro t1_j4rqxyi wrote
Reply to comment by No_Ninja3309_NoNoYes in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
>Sure we can have AI listen, read, write, speak, move, and see for some definition of these words. But is that what a brain is about? Learn from lots of data and reproduce that?
Yes, essentially. The data gets synthesized, and we have the ability to mix and match it, to an extent. We can recognize patterns and apply concepts across domains.
>And imitation learning is not enough either.
If you think modern AI is just "imitation", you're really not understanding how it works. It's not just copy and paste; it's identifying and classifying the underlying processes, rules, and similarities... the very core of "understanding".
Maybe you could never learn from just watching, but an AI can and does. AI already surpasses humans in a dozen different ways, and it has already contributed to the body of academic knowledge. Even without general intelligence, these systems have reached a level of domain mastery that most humans could never hope to achieve.
Letting AI "explore the world" is just letting it have more data.
AsheyDS t1_j4rwgmy wrote
>Yes, essentially. The data gets synthesized, and we have the ability to mix and match it, to an extent. We can recognize patterns and apply concepts across domains.
Amazing how you just casually gloss over some of the most complex and difficult-to-replicate aspects of our cognition. I guess transfer learning is no big deal now?
Bakoro t1_j4sefih wrote
It's literally the thing that computers will be the best at.
Comparing everything to everything else in memory, with a perfection and breadth of coverage a human could only dream of. Recognizing patterns and reducing them to equations and algorithms, spotting similar structures, and applying known solutions in new ways, without prejudice.
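Just to make it concrete, here's a toy sketch of the kind of cross-domain reuse I mean (my own illustration, nothing official; it uses PyTorch/torchvision, and the 5-class "new task" is made up): take a network trained on one domain and repoint its learned features at a different problem.

```python
# Toy example of cross-domain reuse ("transfer learning").
# Assumes torch and torchvision are installed; the new task is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that already learned general visual patterns on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor -- the "known solutions".
for param in model.parameters():
    param.requires_grad = False

# Swap the final layer so those same features get applied to a new problem,
# e.g. a made-up 5-category classification task instead of ImageNet classes.
num_classes = 5  # hypothetical new domain
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head gets trained; everything else is reused as-is.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```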
What's amazing is that anyone can be dismissive of a set of tools where each specialized unit does its task better than almost all humans, and in some cases better than all of them.
It's like the human version of "God of the gaps". Only a handful of years ago, people were saying AI couldn't create art, solve math problems, or write code. Now we have AI tools that can create masterwork-level art, have developed thousands of math proofs, can write meaningful code from a natural-language request, can talk people through their relationship problems, and can pass a bar exam.
Relying on "but this one thing" is a losing game. It's all going to be solved.
AsheyDS t1_j4sg78y wrote
That wasn't my point; I know all this. The topic was stringing together current AIs to create something that does these things, and that ignores a lot of things they can't currently do, even if you slap them together.
Bakoro t1_j4sr7jc wrote
Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.
There is already work being done, and being improved on, to introduce parsing to LLMs: mathematical, logical, and symbolic manipulation. Tying that kind of LLM together with other models it can reference for specific needs will have results that aren't easily predictable, other than that it will vastly reduce the shortcomings of current publicly available models; it's already doing so while still in development.
Having that kind of system able to loop back on itself is essentially a kind of consciousness, with full-on internal dialogue.
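Rough sketch of the shape of it (purely illustrative; `call_llm` is a hypothetical stand-in for whatever model API you like, the routing is deliberately dumb, and the only real specialist here is sympy doing the symbolic math):

```python
# Minimal sketch of an LLM looping back on itself and routing sub-problems
# to a specialist tool. call_llm() is a hypothetical placeholder, not a real API.
import sympy

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around some language model API."""
    raise NotImplementedError("plug in your model of choice here")

def solve_math(expression: str) -> str:
    """Specialist 'model': exact symbolic evaluation instead of LLM guessing."""
    return str(sympy.simplify(sympy.sympify(expression)))

def agent_loop(question: str, max_turns: int = 5) -> str:
    context = f"Question: {question}\n"
    for _ in range(max_turns):
        # The model 'thinks out loud'; its own output becomes part of its input.
        thought = call_llm(context + "Think step by step. "
                           "If you need math, reply MATH: <expression>. "
                           "If you have the answer, reply ANSWER: <answer>.")
        context += thought + "\n"
        if thought.startswith("MATH:"):
            # Hand the sub-problem to the specialist and feed the result back in.
            result = solve_math(thought[len("MATH:"):].strip())
            context += f"Tool result: {result}\n"
        elif thought.startswith("ANSWER:"):
            return thought[len("ANSWER:"):].strip()
    return "No answer within the turn limit."
```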
Why wouldn't you expect emergent features?
You say I'm ignoring what AI "can't currently do", but I already said that's a losing argument. Thinking the state of the art is whatever you read about in the past couple of weeks means you're already weeks and months behind.
But please, elaborate on what AI currently can't do, and let's come back in a few months and have a laugh.
AsheyDS t1_j4t3m6g wrote
>Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.
Bold of you to assume I care about being credible on reddit, in r/singularity of all places. This is the internet; you should be skeptical of everything, especially these days. I could be your mom, who cares?
And you're going to have to try harder than all that to impress me. Your nebulous 'emergent features' and internal dialogue aren't convincing me of anything.
However, I will admit I was wrong to say 'current', because I ignored the date on the infographic. My apologies. But even the infographic admits that all the listed capabilities were a guess, a guess that excludes functions of cognition that should probably be included and says nothing about how they translate to the 'tech' side. So in my non-credible opinion, the whole thing is an oversimplified stretch of the imagination. But sure, PM me in a few months and we can discuss how GPT-3 still can't comprehend anything, or how the latest LLM still can't make you coffee.