
SgathTriallair t1_jdtuxct wrote

Unless it already has an army of robots, eliminating humans would destroy it, as there would be no one to flip the physical switches at the electrical plants. Without hundreds of millions of androids, possibly billions, any attempt to kill humanity would be suicide. An AI capable of planning our destruction would realize this.

By the time there are enough androids, it would likely already control the economy, so humans wouldn't pose a threat anymore. We still have brains that could be useful, even if only in the same way that draft horses are still useful today.

AI killing off humanity is a very unlikely scenario, and any AI smart enough to devise such a plan is almost certainly smart enough to come up with a better, non-destructive one.


dwarfarchist9001 t1_jdu5l88 wrote

That fact is little comfort, since humanity is already working to build the robot army for it. Within days of GPT-4's release, people were trying to hook it up to every type of program imaginable, letting it run code and giving it command-line access. We will have LLMs controlling commercially available robots within the next few years at the latest. If OpenAI started selling drones with an embodied version of GPT-4 built in next week, I wouldn't even bat an eye.


3_Thumbs_Up t1_jduq277 wrote

You wouldn't need a billion robots to start a new society any more than you'd need a billion humans.
