StarCaptain90

StarCaptain90 OP t1_jef1u4u wrote

The idea that most people will do nothing is also just a theory. If you were not restricted by finances and could work in any field without worrying about money, would you be lazy and sit around all day? You could finally be an artist while still supporting a large family, you could travel anywhere, and you could focus on yourself for once instead of being a cog in the money machine that drives humanity. And if humanity does become lazy, then that's their dream life, because that is what they chose once they finally had freedom.

2

StarCaptain90 OP t1_jeexul3 wrote

Believe it or not, I hear more Skynet concerns than that, but I do understand your fear. AGI is risky if it ends up in the hands of one entity. But I don't think the solution is shutting down AI development; I've been seeing a lot of that lately, and I find it irrational and illogical. First of all, nobody can shut down AI. Pausing future development at some corporations for a short period is more likely, but then what? China, Russia, and other countries will keep advancing. Most people don't understand AI development: we are currently entering that development spike, and because progress follows an exponential curve, falling behind even one year would be devastating to us. I don't think it makes sense for any government to even consider pausing because of this, assuming they're intelligent.

1

StarCaptain90 OP t1_jeevt8s wrote

You are correct that animal empathy evolved over time, but intelligence and empathy have shared some connections throughout history. As we model these AIs after ourselves, we have to consider the other components of what it means to care and find solutions.

2

StarCaptain90 OP t1_jeer9l4 wrote

It's an irrational fear. For some reason we associate higher intelligence with becoming some master villain that wants to destroy life. Among humans, for example, people with the highest intelligence tend to be more empathetic toward life itself and want to preserve it.

7

StarCaptain90 OP t1_jeeqrtr wrote

Why would it? This assumption comes from the idea that AI will have the exact same stressors that humans have. Humans kill humans every day, and almost everything man has made has killed people. Yet the one invention that would provide a greater benefit than any other, we now want to stop developing? That doesn't make a whole lot of sense.

7

StarCaptain90 t1_jecgmdw wrote

This is a mistake. It would constrain AI to a limited potential, so humanity would gain less benefit. Instead, we should focus our efforts on having government prevent Skynet scenarios from ever happening by creating an AI safety division whose purpose is auditing every AI company on a risk scale. The scale would factor in questions like "Can the AI get angry at humans?", "If it gets upset, what can it do to a human?", "Does it have the ability to edit its own code in a manner that changes the answers to the first two questions?", and lastly "Can the AI intentionally harm a human?"
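A rough sketch of how such a risk scale might look as a simple audit checklist (the class names, weights, and scoring below are my own illustrative assumptions, not an existing standard or tool; the second question is simplified to a yes/no for scoring purposes):

```python
# Hypothetical illustration of the proposed risk-scale audit as a data structure.
from dataclasses import dataclass, field

@dataclass
class RiskQuestion:
    text: str            # audit question posed about the AI system
    weight: int          # how heavily a "yes" counts toward the risk score (assumed weights)
    answer: bool = False

@dataclass
class AiRiskAudit:
    company: str
    questions: list[RiskQuestion] = field(default_factory=lambda: [
        RiskQuestion("Can the AI get angry at humans?", weight=2),
        RiskQuestion("If it gets upset, can it act against a human?", weight=3),
        RiskQuestion("Can it edit its own code in a way that changes the answers above?", weight=4),
        RiskQuestion("Can the AI intentionally harm a human?", weight=5),
    ])

    def score(self) -> int:
        # Sum the weights of every question answered "yes".
        return sum(q.weight for q in self.questions if q.answer)

# Example: audit a fictional company and read off its risk score.
audit = AiRiskAudit(company="ExampleAI")
audit.questions[0].answer = True
print(audit.company, "risk score:", audit.score())
```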

Also, the Three Laws of Robotics must be engraved in the AI system if it's an AGI.

−2