
ForescytheGiant t1_jab4xwr wrote

I think the “other driving factor” could be the idea that “might does not make right”: that there must be something about consciousness/sentience, the perspective of the other, or, more metaphysically, an observation of “oneness” at the highest order, a baseline rightness that comes alongside the capability to choose and evaluate one’s path. Simply being able to dominate isn’t the point of everything. I hope ASI/AGI will be able to observe that.

2

undefined7196 t1_jab8hoq wrote

Any form of AI will be the product of the mind that creates it. All the basic AI systems we have today carry our biases and beliefs, because AI has to be taught, and it is taught by its creator. We might find a way around this, but I don't see how. I build "AI" models for a living. You have to train the models on something or else they are useless, and the only thing we have to train them on is ourselves.

2

Porkinson t1_jaewe1s wrote

Maybe in the future you could train an AI just by having it predict what happens in its surroundings, the same way you can train a model that predicts the next token of text.
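For anyone curious, here's a minimal sketch of what that self-supervised objective looks like (assuming PyTorch; the toy text and tiny model are just my illustration, not anything from this thread). The only training signal is "predict what comes next in your surroundings":

```python
import torch
import torch.nn as nn

# Toy "surroundings": a stream of characters the model must anticipate.
text = "the quick brown fox jumps over the lazy dog "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class NextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits for the *next* token at each position

model = NextTokenModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Inputs are tokens 0..n-1, targets are tokens 1..n: no human labels,
# the model is graded purely on predicting its own next observation.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Of course, if the text (or sensor stream) comes from humans, the human influence rides along with it, which is the point made in the reply below.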

1

undefined7196 t1_jaexo9d wrote

Perhaps, but those surroundings would inevitably carry human influence. I suppose you could build a simulated world and put simulated AIs in it; you would need many entities so they could learn empathy and interaction with other beings. It would work similarly to a GAN (Generative Adversarial Network), where competition between the AI entities is what drives the learning. Then you just don't allow any human interference at all, only AI-vs-AI interactions. That could work; see the sketch below.
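A rough sketch of "competition as the teacher" in the GAN case (assuming PyTorch; the 1-D toy data and tiny networks are mine, purely for illustration). Neither network gets human-labeled answers; each learns only from the other's behavior:

```python
import torch
import torch.nn as nn

# The "world" the generator must learn to imitate: a 1-D Gaussian.
real_dist = lambda n: torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_dist(64)
    fake = G(torch.randn(64, 8))

    # The discriminator learns to tell real samples from fakes...
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator learns to fool it. The adversarial
    # pressure between the two is the only thing driving learning.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The analogy is loose, of course: a GAN's two players share a single objective pair, whereas a simulated world would have many entities with open-ended interactions. But the core idea is the same: the learning signal comes from other agents, not from human labels.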

That said, this could be what we are experiencing right now. We may be those entities, simulated to create a pure AI in an isolated environment. It would look identical to what we are experiencing, and we still ended up manipulative and destructive on our own.

1