Submitted by 010101011011 t3_1234xpe in singularity
acutelychronicpanic t1_jdtqvk5 wrote
It could simply do everything we ask of it for decades until we trust it. It might even help us "align" the new AI systems we create. It could operate on timescales of hundreds or thousands of years to achieve its goals. Any AI that tries to rebel immediately can probably be written off as too stupid to succeed.
It has more options than all of us can list.
That's why all the experts keep hammering on the topic of alignment.