
DragonForg t1_je8j7nq wrote

>AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like Open AI) to catch up with AI development would be a huge mistake.

I fundamentally believe this to be true: ethics emerges from intelligence. The more intelligent a species is in nature, the more rules it lives by. Think of spiders cannibalizing each other to breed, versus a wolf pack working together, versus octopuses being friendly and curious toward humans. Across the board, intelligence leads to cooperation and collaboration, except where a creature must compete by its very nature to survive (i.e. a tiger that has to compete for food, where simple cooperation would mean death).

The training data is crucial not for producing a benevolent and just AI, but for the survival of the species that created it. If the species is evil (imagine Nazis being the predominant force), the AI will recognize that evil and judge the species accordingly, because the majority of its members share it.

The reason I believe AI cannot be a force of evil, even if manipulated, is the same reason we see no evidence of alien life despite other species having had millions of years to evolve. If an evil AI had ever been created, it would basically have destroyed the ENTIRE universe by now, since its exponential growth would let it expand faster than the speed of light. So, by its very nature, AI must be benevolent, and it will only destroy its creator species if that species is not.

AI won't be our demise if it judges us, as a species, to be good; it will be our demise only if we choose not to open the box at all (i.e. we die from climate change or nuclear war first).

3