Submitted by Beepboopbop8 t3_125wol4 in singularity
Ok_Faithlessness4197 t1_jecguf5 wrote
Reply to comment by acutelychronicpanic in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
It's worth talking about, but I'm also worried. The rate at which AI has advanced means that whoever finds the next significant performance improvement could well develop AGI. Many people are researching it, and I'm concerned because 1. AI is currently unaligned, and 2. a malicious party could develop AGI. If high-performing models hadn't already been publicly released, I would have fully supported regulation (until AI could be aligned, or a plan for public safety developed).