Submitted by BeNiceToYerMom t3_1280f3o in Futurology
just-a-dreamer- t1_jegtrd4 wrote
AI could have a goal one day. Any goal. The problem for us meatbag humans is that we compete for scarce resources.
That is nothing personal, that is just the state of existence.
An AI that wants to send ships into deep space at scale, for example, would look for the most efficient way to make that happen: use all the resources on Earth to that end.
That gets the AI in trouble with humans. And just like humans killed 95% of wildlife, the AI would do the same to the human animal.
Not_Smrt t1_jeh2v5q wrote
AI isn't a god, though. How does it kill people?
robertjbrown t1_jeh0ozy wrote
AI already has goals. That's what alignment is about. And the smarter the AI is, the harder it gets to make sure those goals align with our own.
ChatGPT's primary goal seems to be "provide a helpful answer to the user." The problem is when the primary goal becomes "increase the profits of the parent company," or even something like "drive more engagement."