
FacelessFellow t1_j74dgpw wrote

If there’s only ONE objective/factual reality, then we can program AI to perceive only ONE objective/factual reality.

The sun is hot. Agree? You think a good AI would be able to say, “No, the sun is cold”?

The gases we release into the atmosphere affect the climate. Agree? You think a good AI would be able to say, “No, humans cannot affect the climate”?

Science aims to be as factual and accurate as possible. I imagine a true AI would know the scientific method and execute it perfectly.

Yes, some scientists are wrong, but the truth/facts usually prevail.

I don’t know if I’m making sense haha

−3

Outrageous_Apricot42 t1_j74jacv wrote

This is not how it works. Check out the papers on how ChatGPT was trained. If you use biased training data, you will get a biased model. This has been known since the inception of machine learning.
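For anyone curious what “biased data in, biased model out” looks like in practice, here is a minimal made-up sketch using scikit-learn; it has nothing to do with how ChatGPT itself was trained. A spurious feature that happens to correlate with the label only in the skewed training sample gets baked into the model’s weights and hurts it on fair data.

```python
# Minimal illustrative sketch (not ChatGPT): a model trained on skewed data
# absorbs the skew. Feature x1 is genuinely predictive; feature x2 is a
# spurious attribute that correlates with the label only in the biased sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Ground truth: only x1 determines the label (plus a little noise).
x1 = rng.normal(size=n)
y = (x1 + 0.3 * rng.normal(size=n) > 0).astype(int)

# Biased collection: x2 agrees with the label 90% of the time in this sample,
# so it looks highly predictive even though it means nothing in general.
x2 = np.where(rng.random(n) < 0.9, y, 1 - y).astype(float)

model = LogisticRegression().fit(np.column_stack([x1, x2]), y)
print("learned weights (x1, x2):", model.coef_)  # sizable weight lands on the spurious x2

# Unbiased test data where x2 really is just noise: performance drops.
x1_t = rng.normal(size=n)
y_t = (x1_t + 0.3 * rng.normal(size=n) > 0).astype(int)
x2_t = rng.integers(0, 2, size=n).astype(float)
print("accuracy on fair data:", model.score(np.column_stack([x1_t, x2_t]), y_t))
```

The usual remedies are about the data, not the algorithm: curate or reweight the training set so the spurious correlation isn’t there to learn in the first place.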

9

FacelessFellow t1_j74nqc6 wrote

Is AI not gonna change or improve in the near future?

Is all AI going to be the same?

−4

Sad-Combination78 t1_j74y7wa wrote

Think about it like this: Anything which learns based on its environment is susceptible to bias.

Humans have biases themselves. Each person has different life experiences and weighs their own lived experiences above hypothetical situations they can't verify themselves. We create models of perception to interpret the world based on our past experiences, and then use these models to further interpret our experiences into the future.

Racism, for example, can be a model taught by others, or a conclusion arrived at by bad data (poor experiences due to individual circumstance). I'm still talking about humans here, but all of this is true for AI too.

AI is not different. AI still needs to learn, and it still needs training data. This data can always be biased. This is just part of reality. We have no objective book to pull from. We make it up as we go. Evaluate, analyze, and expand. That is all we can do. We will never be perfect. Neither will AI.

Of course, one advantage of AI is that it won’t have to reset every 100 years and hope to pass on as much knowledge to its children as it can. Still, that advantage only shows up with age.
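A toy numerical version of the training-data point above, my own illustration rather than anything from this thread: two learners observing different slices of the same world walk away with different “beliefs” about it.

```python
# Minimal sketch: the same underlying world, sampled unevenly by two learners,
# yields two different learned pictures of that world.
import numpy as np

rng = np.random.default_rng(1)

# The "world": outcomes drawn from two kinds of situations.
world = np.concatenate([rng.normal(-2, 1, 10_000),   # situation A
                        rng.normal(+2, 1, 10_000)])  # situation B

# Learner 1 mostly experiences situation A, learner 2 mostly situation B.
exp_1 = np.concatenate([rng.choice(world[:10_000], 900),
                        rng.choice(world[10_000:], 100)])
exp_2 = np.concatenate([rng.choice(world[:10_000], 100),
                        rng.choice(world[10_000:], 900)])

print("true average of the world:", world.mean())  # ~0
print("learner 1's belief:       ", exp_1.mean())  # ~-1.6
print("learner 2's belief:       ", exp_2.mean())  # ~+1.6
```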

6

FacelessFellow t1_j75215s wrote

So if a human makes an AI, the AI will have the human’s biases. What about when AI starts making AI? Once that snowball starts rolling, won’t future generations of AI be far enough removed from human biases?

Will no AI ever be able to perceive all of reality instantaneously and objectively? When computational power grows so immense that it can track every atom in the universe, won’t that help AI see objective truth?

Perfection is a human construct, but flawlessness may be attainable by future AI. With enough computational power it can check and double check and triple check and so on, to infinity. Will that not be enough to weed out everything except true reality?

1

Sad-Combination78 t1_j75312i wrote

you missed the point

the problem isn't humans, it's the concept of "learning"

you don't know something, and from your environment, you use logic to figure it out

the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited knowledge.

AI does not and cannot solve this, it is fundamental to learning
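An illustrative sketch of the “limited knowledge” point, not specific to any real system: an estimate built from finitely many observations always carries some error. More data shrinks it roughly like 1/sqrt(n), but it never reaches zero for a finite sample, no matter how fast the learner computes.

```python
# Minimal sketch: estimation error from finite observations shrinks with more
# data but never disappears entirely.
import numpy as np

rng = np.random.default_rng(2)
true_value = 0.0

for n in (10, 1_000, 100_000, 10_000_000):
    sample = rng.normal(true_value, 1.0, n)
    print(f"n={n:>10,}  estimate={sample.mean():+.5f}  "
          f"expected error ~ {1 / np.sqrt(n):.5f}")
```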

6

FacelessFellow t1_j757skq wrote

But I thought AI was computers. And I thought computers could communicate at the speed of light. Wouldn’t that mean the AI could have input from billions of devices? Scientific instruments nowadays can connect to the web. Is it far-fetched to imagine a future where all collectible data from all devices could be perceived simultaneously by the AI?

1

Fake_William_Shatner t1_j74g1qn wrote

>If there’s only ONE objective/factually reality,

There isn't though.

There can be objective facts. But there are SO MANY facts. Sometimes people lie. Sometimes they get bad data. Sometimes they look at the wrong things.

Your simplification of a social issue to a binary choice isn’t really helping. And there is no “binary choice” in what AI produces for writing and art at the moment. There is no OBVIOUS answer and no right or wrong answer -- just people saying “I like this one better.”

>I imagine a true AI would know the scientific method and execute it perfectly.

You don't seem to understand how current AI works. It throws in a lot of random noise and data so it can come up with INTERESTING results. An expert system, is one that is more predictable. A neural net adapts, but needs a mechanism to change after it adapts -- and what are the priorities? What does success look like?

Science is a bit easier than social planning, I’d assume.
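A generic sketch of the “random noise” remark above, not a description of ChatGPT or any particular system: generative models typically sample from a probability distribution rather than always returning the single most likely answer, and a “temperature” knob controls how much randomness gets injected. The words and scores below are made up.

```python
# Minimal sketch of temperature sampling: low temperature is near-deterministic,
# high temperature produces more varied (and more "interesting"/risky) output.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scores a model might assign to four candidate next words.
words = ["sun", "moon", "cheese", "quasar"]
scores = np.array([4.0, 2.5, 0.5, 0.1])

def sample(temperature: float) -> str:
    # Softmax with temperature, then draw one word from the distribution.
    p = np.exp(scores / temperature)
    p /= p.sum()
    return str(rng.choice(words, p=p))

for t in (0.2, 1.0, 2.0):
    print(f"T={t}:", [sample(t) for _ in range(8)])
```

Picking the single highest-scoring word every time (greedy decoding) would be perfectly predictable, which is roughly the “expert system” end of the spectrum the comment above contrasts with.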

4