pgriz1 t1_j23zy5p wrote

Harnessing powerful technology before we learn to play nice with each other is just giving us more powerful weapons. I very much want humanity to explore the solar system and then the solar neighbourhood, but we also have to figure out how to control our baser impulses in order for that investment of time, money and effort to be positive, rather than yet another expansion/colonization effort built on destroying whatever is already there.

18

CoivaraPA t1_j24w5lj wrote

We will never play nice with one another; it's not in our nature. And competition between us is key to human advancement.

6

pgriz1 t1_j25oekv wrote

Then for humans to advance further, we'll need to learn to compete without destroying.

11

omegasix321 t1_j26pv2i wrote

Or bypass human nature altogether. After all, when has what's "natural" ever stopped us before?

2

pgriz1 t1_j27dq6f wrote

If AI development results in a self-learning system that achieves self-awareness, we may find ourselves a potentially endangered species. Using human history as a dataset, it may decide that human management of affairs is lacking, and may choose to limit human influence to things that don't cause harm. And if we don't agree... taking over the controls of water, power, transportation, and potentially even military systems may persuade us to play nice. But at that point, the sentience running the planet won't be human.

1

omegasix321 t1_j28mpaj wrote

None of that sounds like a bad thing. If the AI is smart enough to manage resources better than us, with the end goal of improving human quality of life in mind, I see no problem. Who cares what's running everything so long as things get done and the people are prospering?

Even more so if it can do it in a way that denies resources to its detractors while providing infrastructure to those who allow it to work for them, visibly improving society as it does so. Effectively shutting down our more violent, power-hungry, and suicidally independent natures without firing a single bullet.

1

pgriz1 t1_j28sq3t wrote

>Who cares what's running everything so long as things get done and the people are prospering?

That's the big "if": would such an AI put human interests high on its priority list, or would it decide that we (ie, humanity) are more trouble than we're worth and need to be kept limited (or even severely reduced)? Would it decide that our concepts of rights, freedoms and opportunities are now quaint anachronisms, and coerce us into a zoo-like existence? And all that speculation doesn't take into account that it may conclude humanity has not proven itself capable of self-regulation, and decide to impose "corrective" measures to restore balance.

There is also the possibility that the human contributors to the AI's development deliberately fed it "curated" examples of human behaviour, which then skews the AI's responses to favour certain groups over others.

1