Comments


RiotNrrd2001 t1_j8w88ju wrote

The main problem is that there is no generally agreed-upon definition of "intelligence". For some people the recent Large Language Models totally meet their definition, so for them, yes, we have made it to the promised land. For others, the models don't meet their definitions, so no, we still have a long way to go and may never get there. I have a feeling this split is going to keep on keeping on for some time.

9

Snipgan OP t1_j8whfc9 wrote

True. I guess this is the best answer.

ChatGPT might meet the broad and marketed umbrella term of "AI", but that doesn't mean it is "intelligent" enough for some people's meaning of the word.
Regardless of whether it is or isn't, I am excited to see where this technology takes us!

1

Surur t1_j8w68z6 wrote

ChatGPT is an AI, like every other AI currently in use. Is it an AGI? Definitely not.

How it's trained is simple, but the result is obviously very sophisticated - it takes a huge amount of intelligence to accurately predict the next word in a sensible and on-topic way.
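To make "predicting the next word" concrete, here is a toy bigram model in Python - emphatically not how ChatGPT works (it uses a transformer neural network with billions of parameters), but it shows the bare shape of the task: given a word, guess what comes next based on counted statistics.

```python
from collections import Counter, defaultdict

# A tiny "training corpus": count, for each word, which word follows it.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it appears most often after "the"
print(predict_next("sat"))  # "on" - "sat" is always followed by "on"
```

ChatGPT replaces these raw bigram counts with a learned model that conditions on the whole preceding context, which is where the sophistication being debated here comes in.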

4

Snipgan OP t1_j8wdw4b wrote

Is it intelligence to accurately predict something? I have been told all three: that it's not, that it is, and maybe.

A calculator can predict numbers accurately for math problems, but many wouldn't say that is an AI.

That's why I set this discussion up to see what people have to say and think on the topic.

1

Surur t1_j8wes6i wrote

You are oversimplifying.

A calculator cannot accurately predict a complex pattern. The more complex the pattern, the more complex the algorithm needs to be, and that complexity is what we call intelligence.

Think it through carefully - surely you would need to be very intelligent to generate coherent and on-topic text.

1

Snipgan OP t1_j8wgbbb wrote

Calculators can most certainly predict complex patterns through the formulas they're fed.

We had to calculate complex patterns with machinery just to get to the moon. And that wasn't AI, as far as I am told.

I am not saying ChatGPT isn't complex, but so far that doesn't necessarily mean it is intelligent. Then I am told it is maybe at least a "weak" AI for being able to react to and produce language well, but then I am told it is just assigning a probability/number to do so from the data it is fed.

So, if it is complexity that determines if it is an AI, what is the threshold for it being complex enough?

1

Surur t1_j8wh1fj wrote

> So, if it is complexity that determines if it is an AI, what is the threshold for it being complex enough?

A reasonable question. I am sure you have purchased some home appliance with an "AI" label that simply chooses the right wash program based on some sensors, and the developers call that AI, so it's just a label really.
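That appliance "AI" can literally be a handful of if/else rules over sensor readings. A hypothetical sketch (the function name and thresholds are invented for illustration):

```python
def choose_program(load_kg: float, soil_level: float) -> str:
    """Pick a wash program from two sensor readings.

    A couple of hand-written rules - yet products like this
    ship under the same "AI" umbrella term as ChatGPT.
    """
    if load_kg < 2 and soil_level < 0.3:
        return "quick"
    if soil_level > 0.7:
        return "intensive"
    return "normal"

print(choose_program(1.5, 0.1))  # small, clean load -> "quick"
print(choose_program(5.0, 0.9))  # heavily soiled load -> "intensive"
```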

The question is not whether ChatGPT is AI, it's whether it is an AGI, and for that it will need to fulfil a variety of criteria: being able to reason, problem-solve, learn and plan at the same level as a human in a broad range of areas.

Clearly ChatGPT cannot do that yet, so it's not an AGI.

It can however be envisioned that these capabilities will be developed, and a future LLM with the right capabilities would meet the criteria for AGI.

2

Snipgan OP t1_j8whrxq wrote

I see. ChatGPT sounds like then it would at least fit into the umbrella term for AI.

So, this might just be me and others misunderstanding AI as something like AGI?

1

Surur t1_j8wmbvu wrote

Yes, but that is also reasonable, since ChatGPT is so accomplished.

But it does have to tick all the boxes, and ChatGPT can't learn anything new, for example, and its reasoning capabilities are pretty good but still flawed, with basic logic errors sometimes.

2

valis010 t1_j8w6dj6 wrote

Personally I don't consider the Turing test a very good indicator of sentience. And I think it's all spitballing at this point.

2

Representative_Pop_8 t1_j8wahj1 wrote

The Turing test is not, and no one claims it to be, a test of sentience; it is a test of intelligence, which is a completely different concept.
A dog is sentient and would never pass a Turing test. ChatGPT is (most likely) not sentient but could pass a Turing test.

2

ChronoPsyche t1_j8w75ij wrote

"AI" is a very broad term that encompasses everything from simple pathfinding algorithms to game decision trees to state of the art machine learning models to AGI itself. It all falls under the category of artificial intelligence.

Large language models like ChatGPT are a type of neural network that are trained with deep learning techniques, which itself is a subset of machine learning, which is in turn a subset of artificial intelligence.

So yes, ChatGPT is 100% AI.

Also, why don't you simply look that term up on Wikipedia? It makes it abundantly clear what AI encompasses.

2

Snipgan OP t1_j8wdkv1 wrote

I did look up the meaning in different places and I get varied results. IBM makes it sound like this is a "weak" AI, but then I get responses claiming that label is wrong.

Others say it counts as AI if it just passes the Turing test, while still others say predictive algorithms aren't really intelligent and don't constitute an AI.

I guess it comes down to what the "intelligence part" is and whether predictive algorithms fit into that. What is the threshold?

So, I figured I'd get a consensus on what people think it is.

0

ChronoPsyche t1_j8x3q6t wrote

You are definitely overthinking this. The fact that it is divided into "weak ai" and "strong ai" just proves my point that AI is a catch-all term. The chess app on your phone is AI. So are large language models. So is stable diffusion. So is the boss in your video game.

Any algorithm or software that can make any level of decision on its own based on a given input is AI, no matter how useful or limited it may be. You're crowdsourcing opinions from people who don't know what they're talking about, so that's not really useful.

1

[deleted] t1_j8w7bsj wrote

[deleted]

0

Representative_Pop_8 t1_j8wbau9 wrote

I find it very hard to classify ChatGPT as narrow. Sure, it was trained only on language, but that allows it to handle an extreme range of subjects, even without being specifically trained on them. Many of the things it can't do are not so much related to its internal capacities as to the lack of external sensors connecting it to the world (no senses), its inability to see or make images (though its cousin DALL-E already can), and the fact that it is not allowed to keep its memory between sessions, which seriously cripples its ability to do in-context learning.

So, while not as broad as human intelligence yet, I wouldn't say it is narrow; it is an AGI, just not yet at human level in most subjects.

0

[deleted] t1_j8wd33p wrote

[deleted]

0

Representative_Pop_8 t1_j8wh8gd wrote

It is at the very least something halfway between narrow and general, but I'd argue it is already a very simple and limited AGI.

1