Submitted by Beepboopbop8 t3_125wol4 in singularity
acutelychronicpanic t1_je9fyhb wrote
Reply to comment by Ok_Faithlessness4197 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The letter won't, but it's still worth talking about. Harsh regulation could come as a result of a panic.
Right now most people just don't know or don't get it. How do you think they'll react when they do? That'll come soon with the integration into office products and search.
Ok_Faithlessness4197 t1_jecguf5 wrote
It's worth talking about, but I'm also worried. The rate at which it's advanced means that whoever finds the next significant performance improvement could well develop AGI. Many people are researching it, and I'm concerned because 1) AI is currently unaligned, and 2) a malicious party could develop AGI. If high-performing models hadn't already been publicly released, I would have fully supported regulation (until AI could be aligned, or a plan for public safety developed).