
dmit0820 t1_j6f8nis wrote

I'd argue that the transformer architecture (the basis for large language models and image diffusion) is a form of general intelligence, although it doesn't technically meet the requirements to be called AGI yet. It's able to take any input and output a result that, while not better than a human expert's, exceeds what the average human could produce most of the time.
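
For concreteness, here's a minimal NumPy sketch of the scaled dot-product attention at the core of a transformer; the names and shapes are illustrative, not taken from any particular library:

```python
# Minimal sketch of scaled dot-product self-attention, the core operation of a
# transformer. Shapes and names are illustrative; real models add multiple heads,
# residual connections, and feed-forward layers.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head) projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv                 # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # similarity of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention distribution per token
    return weights @ v                               # each output is a weighted mix of values

# Any input that can be embedded as a sequence of vectors (words, image patches,
# audio frames) goes through the same machinery.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                          # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)           # (4, 8)
```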

ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.

Moreover, we aren't anywhere near the limits of the transformer architecture. We can make these models multi-modal (inputting and outputting every type of data), embodied (giving them control of and input from robotic systems), goal-directed, integrated with the internet, and real-time, and we can make them much more intelligent simply by improving algorithms, efficiency, network size, and data.
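
As a rough, hypothetical sketch of the goal-directed part, you can wrap a model in a plan/act/observe loop; `query_llm` and the tool names below are placeholders, not a real API:

```python
# Hypothetical sketch of a goal-directed loop around a language model: the model
# plans, picks a tool, observes the result, and repeats. `query_llm` and the tool
# names are placeholders, not a real API.
from typing import Callable

def query_llm(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    raise NotImplementedError

def run_agent(goal: str, tools: dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model for its next move, given everything it has seen so far.
        decision = query_llm(
            transcript + "Reply with 'TOOL: <name> <input>' or 'ANSWER: <final answer>'."
        )
        if decision.startswith("ANSWER:"):
            return decision.removeprefix("ANSWER:").strip()
        # Otherwise parse the requested tool call and feed the result back in.
        parts = decision.removeprefix("TOOL:").strip().split(" ", 1)
        name, arg = parts[0], parts[1] if len(parts) > 1 else ""
        observation = tools.get(name, lambda _: "unknown tool")(arg)
        transcript += f"{decision}\nObservation: {observation}\n"
    return "step limit reached"

# Example wiring (web search, robot control, etc. would slot in as tools):
# run_agent("Summarize today's top headline", {"search": my_search_function})
```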

Given how many ways we still have left to make them better, it's not unreasonable to think systems like this might lead to AGI.

31

StevenVincentOne t1_j6fjosp wrote

>I'd argue that the transformer architecture (the basis for large language models and image diffusion) is a form of general intelligence

It could be. A big part of the problem with this discussion is that most people equate "intelligence" with "sentience". An amoeba is an intelligent system within the limits of its domain, though it has no awareness of itself or that domain. A certain kind of intelligence is at work even in a chemical reaction. So intelligence, and even general intelligence, might not be as high a bar as most people think. Sentience, self-awareness, agency... these are the real benchmarks that will be difficult, perhaps impossible, to achieve with existing technologies. It's going to take environmental neuromorphic systems to get there, imho.

9

dmit0820 t1_j6g0pkd wrote

Some of that might not be too hard: self-awareness and agency can be represented as text. If you give ChatGPT a text adventure game, it can respond as though it has agency and self-awareness. It will tell you what it wants to do, how it wants to do it, explain its motivations, etc. Character.AI takes this to another level, where the AI bots actually "believe" they are those characters and seem very aware and intelligent.
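
As a hedged sketch of what "giving it a text adventure" might look like in code (with `query_llm` standing in for a real call to ChatGPT or a similar model):

```python
# Rough sketch of "giving ChatGPT a text adventure": the game state goes in as
# text, and the model's reply is treated as the character's chosen action.
# `query_llm` is a placeholder for a real model call.

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an actual API call

def play_turn(history: list[str], scene: str) -> str:
    prompt = (
        "You are the protagonist of a text adventure. "
        "Say what you want to do next and why.\n\n"
        + "\n".join(history)
        + f"\nScene: {scene}\nYour action:"
    )
    action = query_llm(prompt)
    history.append(f"Scene: {scene}\nYour action: {action}")  # the 'self' persists only as text
    return action
```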

We could end up creating a system that acts sentient in every way and even argues convincingly that it is, but isn't.

3

StevenVincentOne t1_j6g1ixu wrote

Sure. But I was talking about creating systems that actually are sentient and agentic, not just simulacra. Though one could discuss whether, for all practical purposes, it matters. If you can't tell the difference, does it really matter, as they used to say in Westworld?

5

Ok-Hunt-5902 t1_j6i5jkz wrote

>‘Sentience, self-awareness, agency’

Wouldn't we be better off with a 'general intelligence' that was none of those things?

1

StevenVincentOne t1_j6ihhc3 wrote

It may be that we don't have to choose or that we have no choice. There is probably an inherent tendency for systems to self-organize into general intelligence and then sentience and beyond into supersentience. There's probably a tipping point at which "we" no longer call those shots and also a tipping point at which "we" and the systems we gave rise to are not entirely distinguishable as separate phenomena. That's just evolution doing its thing. "We" should want to participate in that fully and agentically, not reactively.

1

TacomaKMart t1_j6g1v5x wrote

>ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.

And this is what sets it apart from the naysayers' claim that it's a glorified autocorrect. It obviously has its flaws and limitations, but this is the VIC-20 version, and already it's massively disruptive.

The goofy name ChatGPT sounds like a 20-year-old instant messenger client like ICQ. The name obscures that this is a serious, history-altering development, as does the media coverage that fixates on plagiarized essays.

6

turnip_burrito t1_j6fa8be wrote

Yep, any data that can be structured as a time series... oh wait, that's ALL data, technically.
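
As a toy sketch of that point, assuming a byte-level vocabulary:

```python
# Toy illustration: any data is ultimately a byte stream, and any byte stream is
# a sequence of tokens a sequence model can be trained on. The 256-symbol byte
# "vocabulary" here is arbitrary.

def to_token_sequence(data: bytes) -> list[int]:
    """Treat raw bytes as a 256-symbol time series."""
    return list(data)

print(to_token_sequence("hello".encode()))                  # [104, 101, 108, 108, 111]
print(to_token_sequence(bytes([0x89, 0x50, 0x4E, 0x47])))   # first bytes of a PNG file
```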

2